Tech Talk: Jigsaw's A.I. Perspective Tool Still Needs Work

A recent test of Jigsaw’s Perspective software has shown that, although the tool works in some cases, there are still plenty of challenges to overcome. For those who don’t already know, Alphabet’s Jigsaw subsidiary created Perspective, and its associated toxicity filter, to help moderators identify malicious statements posted online, but that task has proven to be a real challenge for the company’s A.I. In fact, two activists – John Ellis and EJ Gibney – who recently took Perspective to task appear to have shown that there is still a very long way to go.

For the sake of clarity, Jigsaw was not initially created to combat trolling directly; it began as a broader effort to use technology to address geopolitical challenges, ranging from countering violent extremism to pushing back on online censorship and mitigating the threats posed by digital attacks. One aspect of that work has evolved over time into an A.I.-driven program called Perspective. First and foremost, Perspective is a tool forum moderators can use to filter through comments and help stem some of the more damaging vitriol internet users face online. Whether that is best described as censorship, and whether creating such safe spaces is ethical, is a separate debate in and of itself. Leaving that aside, identifying internet trolls with A.I. is proving more difficult than may have been anticipated. Aside from partnering with several major companies, such as Disqus and The New York Times, Jigsaw also offers a free demo version of the tool, which can be found online. It should be noted that the tests conducted by Ellis and Gibney made use of that free tool rather than the full suite of software, and there are differences between the versions available.

As to how the free demo tool works, it allows users to enter strings of text and returns a score meant to gauge how “toxic” a given comment is and in what way. The free version uses only one of the eleven filters Jigsaw has on offer, so the scores the pair of activists obtained are not necessarily indicative of what moderators at sites partnered with Jigsaw would have seen. However, the test itself was reportedly run using phrases taken from commenters on Breitbart – a right-wing-associated news source whose comment-section trolls are known to be somewhat more inflammatory in their remarks – and did show that the technology is still in its infancy. Moreover, the results may show how fragmented the software is and how much work still needs to be done. Although some of the comments entered were obviously racist in nature, none earned a toxicity score above 56 percent. For example, a comment clearly inciting violence against entire families associated with the opposing political party earned a Toxicity rating of just 31 percent, despite scoring 85 percent on the inflammatory scale.
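For readers curious about the mechanics, the sketch below shows roughly how a comment can be scored through the Comment Analyzer endpoint that backs Perspective. The request and response shapes follow Jigsaw’s published API, but the API key is a placeholder and the example comment and threshold handling are illustrative assumptions only.

```python
import requests

API_KEY = "YOUR_API_KEY"  # placeholder; a real key is issued by Jigsaw/Google
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       "comments:analyze?key=" + API_KEY)

def toxicity_score(comment: str) -> float:
    """Return the TOXICITY probability (0.0 to 1.0) for a comment."""
    payload = {
        "comment": {"text": comment},
        # The free demo surfaces only TOXICITY; the full API exposes
        # additional attributes, such as the experimental INFLAMMATORY model.
        "requestedAttributes": {"TOXICITY": {}},
    }
    response = requests.post(URL, json=payload)
    response.raise_for_status()
    data = response.json()
    return data["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

if __name__ == "__main__":
    # A value of 0.31 would correspond to the 31 percent rating described above.
    print(toxicity_score("Some comment to evaluate"))
```

In practice, a moderation dashboard would compare that value against a site-specific threshold rather than treating it as a verdict, which lines up with Jigsaw’s own caution against fully automatic removal.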

There are many more examples provided at the source, but the key takeaway appears to be that there are still serious problems with the software as it stands today. Implementing more of the filters does appear to fix some of the issues, or at least to supplement the toxicity filter. However, another key area where the A.I. needs to improve, if Jigsaw’s project is to see wider use across the internet, is recognizing words regardless of the special characters inserted in an attempt to fool the filters. A test run by the source, for example, found that inserting special characters in the middle of an offensive word produced uneven toxicity scores. Beyond those samplings, there are related problems that cut in the opposite direction: because the A.I. looks for specific words or word combinations, false positives still occur with some frequency, even when the comment itself is non-offensive.
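As a rough illustration of the character-insertion problem, a hypothetical pre-processing pass like the one below could collapse common obfuscations before a comment is ever scored. The look-alike table and regular expression here are assumptions for demonstration, not anything Perspective itself is known to do.

```python
import re

def normalize(comment: str) -> str:
    """Collapse simple character-insertion tricks before scoring.

    Hypothetical clean-up step, not part of Perspective itself.
    """
    # Undo common look-alike substitutions, e.g. "5tupid" -> "stupid".
    lookalikes = str.maketrans({"0": "o", "1": "i", "3": "e",
                                "4": "a", "5": "s", "@": "a", "$": "s"})
    text = comment.lower().translate(lookalikes)
    # Strip punctuation wedged inside a word, e.g. "s.t.u.p.i.d" -> "stupid".
    text = re.sub(r"(?<=\w)[^\w\s]+(?=\w)", "", text)
    return text

print(normalize("You are s.t.u.p.1.d"))  # -> "you are stupid"
```

Of course, an aggressive normalizer of this kind risks raising the false-positive rate described above, which is precisely the trade-off the activists’ results highlight.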

Jigsaw, for its part, appears to be well aware of the current issues with its software and has acknowledged that comments should – for the time being – not be removed automatically by any version of its program. Instead, the company hopes to continue building on and improving Perspective as a way to assist moderation overseen by real people. In the meantime, the two activists who started the investigation have called on Jigsaw’s partners to take a more proactive approach, asking Disqus and others to drop support for sites that repeatedly fail to moderate their commenters.