I recently hosted a conversation about trust and safety with two leaders in the field: Kanti Kopalle, VP of Intuitive Operations and Automation at the information and cloud services giant Cognizant, and Louis-Victor de Franssu, co-founder of the content moderation platform Tremau. We were joined by Constellation Research founder and chair Ray Wang.

The bumpy road from analog to digital

As the world becomes more digital and borders seem to be disappearing, one of the paradoxes is that sovereignty remains such a sticky issue. I see this as one of the many dimensions of humankind’s grand analog-to-digital conversion. This “project” has been running for a couple of decades and has a long way to go.

Online today, nations want to retain and enforce their own safety rules as part of their national identity. In some regions, increasingly assertive regulators are holding multinational digital platforms to account for meeting local media and content rules.

There are huge challenges for digital and cloud businesses operating globally. Automation of Trust & Safety controls is inevitable, for reasons of scale, cost and responsiveness.

So content moderation as a service is emerging. Tremau was launched in 2021 to provide auto-moderation managed services to global digital platforms. Co-founder Louis-Victor de Franssu was educated in the humanities and cut his teeth in financial risk management before joining the French government at a key period of digital regulation development. As Deputy to the French Ambassador for Digital Affairs, Louis-Victor worked on landmark initiatives including the Christchurch Call to Action to fight terrorist content online and the EU Digital Services Act (DSA).

He saw the pressure mounting on platforms and their ad hoc content governance. Content moderation, with its challenges of scale and of cultural and legal nuance, needed to shift to “the center of their operations,” Louis-Victor told us. And so he helped launch Tremau.

The results of democratizing creativity

Prior to personal computing, laser printing and digital photography, media content was a very special type of product. You needed specialized equipment and complex skills to generate audio and video.

Kanti told us that seventy percent of content online now is user-generated. That’s a mind-blowing paradigm shift.

It’s been well documented how the democratization of content creation has overturned the businesses of print media, television, video rental, book sales and advertising. But it seems to me that the implications for content regulation have taken longer to emerge.

The print and TV media industries in their heyday were largely monocultures. They became pretty cosy; compliance with public standards was mostly self-enforced.

But as media companies lost their monopoly over creation and distribution, content moderation was forced out into the open.

The benefits of objectivity

In our conversation, Kanti Kopalle reflected on the balance between human and automated content moderation. While automation is essential for scale, “we also need humans for the nuance,” he said. “How do we seamlessly do the handover between an auto-moderation (using AI or some of the traditional techniques) to how a human overlays on top of that?”

Cognizant focuses on a balance between scale and nuance, aiming for consistency within the many and varied policy environments of its clients.
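For illustration, here is a minimal sketch of what that handover might look like. Everything in it is an assumption for the example: the labels, the confidence thresholds and the idea of routing by classifier confidence are one common pattern, not a description of Cognizant’s or any platform’s actual system.

```python
from dataclasses import dataclass

@dataclass
class ModerationResult:
    label: str         # e.g. "hate_speech", "spam", "ok" (hypothetical labels)
    confidence: float  # classifier confidence, in [0, 1]

# Hypothetical thresholds; a real system would tune these per policy and market.
AUTO_ACTION_THRESHOLD = 0.95
AUTO_ALLOW_THRESHOLD = 0.05

def route(result: ModerationResult) -> str:
    """Decide whether the machine acts on a piece of content
    or hands it over to a human reviewer."""
    if result.label == "ok":
        return "publish"
    if result.confidence >= AUTO_ACTION_THRESHOLD:
        return "auto_remove"         # machine is confident: act at scale
    if result.confidence <= AUTO_ALLOW_THRESHOLD:
        return "publish"             # machine is confident the flag is noise
    return "human_review_queue"      # the ambiguous middle goes to people
```

The design choice is the point: automation clears the high-confidence extremes at scale, and the ambiguous middle, where the nuance lives, is exactly what lands in front of a human.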

It strikes me that a less obvious benefit of automating content moderation is the potential for AI to fine-tune the rules deployed in different regions for what is acceptable and what’s not. With dozens of statutes to deal with, most of which are in flux, platforms trying to deliver millions of pieces of new content every day cannot hope to stay up to date without automation.
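As a toy illustration of what region-tuned rules might look like, the sketch below checks the same detected content categories against a different rule set per jurisdiction. The regions, categories and rule sets are invented for the example and do not reflect actual law.

```python
# Invented, simplified rule sets: which detected categories are actionable
# in which jurisdiction. Real statutes are far more nuanced and change often.
REGIONAL_RULES = {
    "EU": {"terrorist_content", "illegal_hate_speech", "csam"},
    "US": {"csam"},
    "AU": {"abhorrent_violent_material", "csam"},
}

def violations_for_region(detected: set[str], region: str) -> set[str]:
    """Return the detected categories that are prohibited in this region."""
    return detected & REGIONAL_RULES.get(region, set())

flags = {"illegal_hate_speech"}
print(violations_for_region(flags, "EU"))  # {'illegal_hate_speech'}
print(violations_for_region(flags, "US"))  # set()
```

Keeping the rules as data rather than code is what would let an automated pipeline stay current: when a statute changes, you update a rule set, not the classifier.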

There is always going to be a judgement call about whether certain content is culturally acceptable and/or legal under prevailing norms in each place. If AI can make that call in a reasonably reliable manner, the efficiency dividends will be enormous. The algorithms don’t need to be perfect; after all, any human’s opinion about the acceptability of content is always debatable.

I can see advantages in making content moderation decisions purely mechanical, because the resulting disputes will be more technical than subjective, and may be easier to resolve systemically.

If the acceptability of content can be assessed algorithmically, then the algorithms can themselves be reviewed and improved in a methodical way.
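A sketch of what that methodical review could look like: score each version of a moderation algorithm against the same human-labelled sample, so the dispute becomes a measurement. The function and the toy data below are assumptions for illustration, not an established industry protocol.

```python
def precision_recall(decisions: list[bool], labels: list[bool]) -> tuple[float, float]:
    """decisions: algorithm said 'remove'; labels: human reviewers said 'remove'."""
    tp = sum(d and l for d, l in zip(decisions, labels))      # justified removals
    fp = sum(d and not l for d, l in zip(decisions, labels))  # over-removal
    fn = sum(l and not d for d, l in zip(decisions, labels))  # missed violations
    precision = tp / (tp + fp) if (tp + fp) else 1.0
    recall = tp / (tp + fn) if (tp + fn) else 1.0
    return precision, recall

# Compare two algorithm versions on the same labelled sample (toy data):
labels    = [True, True, False, False, True]
version_a = [True, False, False, True, True]
version_b = [True, True, False, False, False]
print(precision_recall(version_a, labels))  # ≈ (0.67, 0.67)
print(precision_recall(version_b, labels))  # (1.0, ≈ 0.67)
```

Because the review set and the metrics are fixed, an improvement claim becomes falsifiable: in this toy run, version B removes nothing the human reviewers would keep (higher precision) at the same recall.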