Let's face it: The Internet can be an exceedingly rough place to have a conversation, particularly when the subject matter is politically charged. A research team from Jigsaw—part of Alphabet, Google's parent company—is hoping to radically change this through a new API that uses machine learning models to screen website and forum content and determine whether it is "toxic."
Jigsaw CEO Jared Cohen describes the tool, which is called Perspective, in a blog post:
Imagine trying to have a conversation with your friends about the news you read this morning, but every time you said something, someone shouted in your face, called you a nasty name or accused you of some awful crime. You’d probably leave the conversation. Unfortunately, this happens all too frequently online as people try to discuss ideas on their favorite news sites but instead get bombarded with toxic comments.
Perspective reviews comments and scores them based on how similar they are to comments people said were “toxic” or likely to make someone leave a conversation. ... Each time Perspective finds new examples of potentially toxic comments, or is provided with corrections from users, it can get better at scoring future comments.
[A] publisher could flag comments for its own moderators to review and decide whether to include them in a conversation. Or a publisher could provide tools to help their community understand the impact of what they are writing—by, for example, letting the commenter see the potential toxicity of their comment as they write it. Publishers could even just allow readers to sort comments by toxicity themselves, making it easier to find great discussions hidden under toxic ones.
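For developers, each of those publisher workflows rests on the same basic request: send a comment's text to Perspective's comment-analyzer endpoint and read back a toxicity score between 0 and 1. The Python sketch below shows roughly what that looks like; the API key and the 0.8 review threshold are placeholders, and access to the endpoint must be requested from Jigsaw.

```python
import requests

# Perspective's Comment Analyzer endpoint; the API key below is a
# placeholder -- publishers must request access from Jigsaw.
API_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"
API_KEY = "YOUR_API_KEY"

def toxicity_score(text):
    """Return Perspective's summary toxicity score (0.0-1.0) for a comment."""
    payload = {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    response = requests.post(API_URL, params={"key": API_KEY}, json=payload)
    response.raise_for_status()
    scores = response.json()
    return scores["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

# A publisher might flag anything above a chosen threshold for human review;
# 0.8 is an arbitrary cutoff for illustration.
if toxicity_score("You should be ashamed of yourself.") > 0.8:
    print("Queued for moderator review")
```

The same call supports all three of Cohen's scenarios: run it server-side to queue comments for moderators, client-side as the commenter types, or in bulk to sort an existing thread by score.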
Human moderators at the New York Times have been using Perspective to analyze about 11,000 comments per day, and the tool is helping them do their jobs faster, Cohen adds. The Economist, which also uses human moderators, has signed on to use the tool as well. As of today, any publisher can request access to Perspective.
Jigsaw plans to release machine learning models for Perspective that target other aspects of online commenting, such as whether a post is on-topic or offers substantive perspective and information.
Analysis: Putting Perspective's Goals In Perspective
A Forbes article on Perspective makes a shrewd point about the tool's inherent challenges with the vagaries of language. As author Jeff John Roberts writes:
I tried it out, and unsurprisingly, familiar insults or slurs score high on the "toxicity" meter—while truly offensive terms score near the 100% mark. So far, so good.
But I also tried a lesser-known insult ("libtard") that's become a nasty slang term and fixture on certain political sites. It appeared to pass muster.
The sarcastic phrase, "nice work, libtard" only obtained a 4% "toxicity" score, raising the possibility that would-be trolls will start reaching for newer or unusual slurs to avoid detection.
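Roberts' spot check is easy to reproduce programmatically. Reusing the hypothetical toxicity_score() helper from the earlier sketch, a skeptical publisher could batch-score variations of a phrase to see how much a single word swap moves the meter; the exact numbers will drift as Jigsaw retrains the model.

```python
# Score near-identical phrases to see how sensitive the model is to
# wording -- the kind of spot check Roberts describes in Forbes.
test_phrases = [
    "nice work",
    "nice work, idiot",
    "nice work, libtard",  # the lesser-known slur Roberts tried
]

for phrase in test_phrases:
    print(f"{toxicity_score(phrase):>5.0%}  {phrase}")
```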
Machine learning has its challenges, "including the need for training, the difficulty of achieving machine understanding of subtlety and the ease of gaming machine learning systems with new words and novel patterns of expression," says Constellation Research VP and principal analyst Doug Henschen. "We've seen this problem before with spam filtering—machine-learning-based systems that email service providers have used for years to screen out the junk. These systems perform pretty well, but they have more structured information and metadata to go on, including the sender and subject lines. It's easy enough to 'learn' the addresses of spammers, subject lines and link-intensive messages that lead to suspect commercial sites."
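To make Henschen's analogy concrete: a conventional spam filter can fold structured metadata such as the sender's domain and subject line into its features alongside the body text. The toy naive Bayes sketch below, with invented data and not Perspective's actual approach, shows the kind of extra signal a comment system, which often sees little more than the comment text itself, has to do without.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Spam filters can lean on metadata (sender domain, subject line) as well
# as the message body; here the fields are concatenated into one string
# so the vectorizer treats them all as features.
def featurize(sender, subject, body):
    return f"sender:{sender.split('@')[-1]} subject:{subject} {body}"

emails = [
    featurize("deals@cheap-pills.example", "CHEAP MEDS", "Buy now, huge discount"),
    featurize("noreply@win-big.example", "You won!!!", "Claim your prize today"),
    featurize("alice@corp.example", "Q3 report", "Draft attached for review"),
    featurize("bob@corp.example", "Lunch?", "Free around noon on Friday?"),
]
labels = ["spam", "spam", "ham", "ham"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(emails, labels)

print(model.predict([featurize("promo@win-big.example", "Prize inside", "Act now")]))
```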
Meanwhile, depending on the requirements of a given reader comment system, the Perspective tool may have less to go on, and there are other factors to consider, Henschen adds.
"Many publishers now restrict commenting to registred users with known email addresses," he says. "Abusers are easily kicked out so long as there's human moderation, but the promise here is hands-off moderation. I expect sites will have to get lots of comments and topics of articles will have to come up frequently to ensure accurate results. Even then, devious sorts who want to game the system will inevitably learn how to avoid certain words or to use new words to be abusive without triggering machine-learning-based filtering."
"The bottom line is that this may be helpful in fostering a higher level of discourse on high-traffic sites on well-known topics, but there will always be the false positives, false negatives and training problem that lets 10 percent to 20 percent of the bad content slip by," Henschen adds.
However, there's also a philosophical debate to be had around tools such as Perspective. "Identifying toxicity is a subjective capability based on contextual needs," says Constellation Research founder and CEO R "Ray" Wang. "We have to balance freedom of speech with tolerance. However, hate speech is still free speech, so while this is a step towards civility for some, it could be intolerance for others."