Content Moderation on Social Platforms

February 15, 2018 | Peter Kelly

Content moderation on social media is a weird pet obsession of mine. The internet is full of garbage, both benign and insidious, but a number of recent headlines bring the issue into stark relief:

Google links to hoax articles during breaking news. Twitter is filthy with Nazis. YouTube’s algorithm has been suggesting and promoting bizarre, disturbing content to children. And Facebook has an overseas army of content janitors, a legion of hundreds who still can’t keep the platform clean. Indeed, many tasked with scrubbing Facebook of offensive content end up traumatized, permanently shaken by the sheer volume of violence and depravity the job exposes them to, and what it reveals about the darkness of humanity.

The reason I describe my obsession with content moderation on social platforms as ‘weird’ is that I have no solution in mind. The more I understand about the problem, the more convinced I am that social platforms, as they are presently understood, will never be able to keep their users safe from offensive content. It’s like fighting a Hydra: cut off one head and two grow in its place. But in this case, it’s hundreds of thousands of hours of content that a handful of underpaid moderators are tasked with parsing.

Unwanted Interactions

Here’s a story: a few years back I was doing community management for a small social media platform operating in stealth. This was a social network built on small groups that facilitated text, image, video, and audio sharing, and (alarm bells) live video chat. It was also (glaring siren) aimed at a teen demographic. If you remember Chatroulette, I don’t need to tell you what happened next.

Now before you sprain your neck rolling your eyes at the hubris of such a venture, a few facts about this company: we had a famously brilliant CEO, the seed funding for the company was brow-raising, and our engineering staff was world-class. Our, ahem, “unwanted interactions” problem was not the result of unskilled dilettantes throwing darts at a board. The people behind the product strongly believed in both social discovery and the power of live interactions. They knew they had both a moral and business duty to make their platform a place that was fun and safe for everyone. It was their number one priority.

Yet in the end, they decided they simply could not do it. At least, not with the product they had. Ultimately, they pivoted from large, public groups to small, private groups. Last I checked they’re doing fine. But they’re not huge.

Because it turns out that the very thing that drives growth (and valuation) in social platforms also facilitates the abuse that comes to plague them.

It’s very hard to achieve viral growth of any app without incorporating levers built to aggressively expand every user’s social graph: suggested friends, suggested pages, suggested rooms. For users with good intentions, these levers can very quickly bring them into contact with toxic elements. For malicious users, they present an endless supply of new targets to harass, stalk, and intimidate, each of whom is outside the malicious user’s real-world social circle. A minimal sketch of how such a lever works follows below.
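
To make that concrete, here’s a minimal sketch of how a “suggested friends” lever might work, assuming a simple friends-of-friends heuristic (the function and scoring are my own illustration, not any platform’s actual algorithm):

    from collections import Counter

    def suggest_friends(user, graph, limit=10):
        """Rank strangers by mutual-connection count (friends-of-friends).

        `graph` maps each user id to the set of user ids they are connected to.
        Hypothetical sketch of a discovery lever, not any platform's real logic.
        """
        friends = graph.get(user, set())
        candidates = Counter()
        for friend in friends:
            for fof in graph.get(friend, set()):
                if fof != user and fof not in friends:
                    candidates[fof] += 1  # one more mutual connection
        # The more mutuals, the higher the suggestion ranks -- and the faster
        # each user's graph expands beyond people they actually know.
        return [u for u, _ in candidates.most_common(limit)]

    # Example: "dana" is suggested to "alice" purely via mutual friends.
    graph = {
        "alice": {"bob", "carol"},
        "bob": {"alice", "dana"},
        "carol": {"alice", "dana"},
        "dana": {"bob", "carol"},
    }
    print(suggest_friends("alice", graph))  # ['dana']

The point is that the ranking rewards reaching strangers: the more aggressively mutual connections are surfaced, the faster the graph grows, and the faster good and bad actors alike are put in front of people they have never met.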

So that’s how abuse becomes commonplace on social platforms. But why is it so hard to stop? Why can’t platforms simply ban the offending user, delete the offending content, close the offending channel?

As I see it, there are 3 reasons why no major social platform has “solved” abuse:

  1. The valuation of social media platforms depends on the discovery capabilities they provide and on wide, active social graphs.
  2. Many (most?) major platform founders and top brass genuinely believe in unfettered discovery and in free speech as universal, absolute social positives.
  3. Decision makers at major platforms know that any effective solution to offensive content on their platform would involve unbelievably complex, contextually aware AI that simply does not exist right now.

Worth noting: the major platform with probably the smallest abuse problem is Snapchat, which requires users to either know each other’s username or snap each other’s QR code in order to connect. In other words, the app that is least aggressive at expanding the social graph has the smallest abuse problem. Not a coincidence.

Audience Plus Content

Point number 1 seems self-explanatory: social media platforms require a virtuous cycle of content generation and discovery to stay relevant. A user posts content knowing it will have an audience. That audience interacts with the content, sending positive reinforcement to the author, inspiring the author to make more content. The audience itself is inspired by the content to create its own, and the cycle continues. Keeping the user returning requires consistently fresh content, and fresh content requires both an active community and easy discovery of content outside a user’s existing network. Nail that and you’ll always have users, which means you’ll always have advertisers. Nail that, and you’re a billionaire.

Free Speech Issues

Point number 2 might be a bit controversial, but without making a political judgment of any kind, I’d direct you to former Twitter UK general manager Tony Wang’s famous “free speech wing of the free speech party” line, Facebook’s stated purpose (until recently) of “making the world more open and connected,” and various statements made by Reddit founders and executives on why, among other things, they let /r/jailbait exist for more than one nanosecond.

Machines Aren’t Ready to Help with Content Moderation (Yet)

The real rub comes from point number 3. The sad fact of the matter is that the flow of content generated by both human and bot actors on the internet is far wider and faster than any human-dependent solution could possibly counter. Every single day, roughly 500 million Tweets are posted, 432,000 hours of video are uploaded to YouTube, and 576,000 new users join Facebook; Google’s search index, meanwhile, spans some 30 trillion pages. You can’t ban every offensive account, you can’t screen every video, you can’t even tell who is going to be toxic when they join Facebook.
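
To put that YouTube number in perspective, here’s a back-of-envelope calculation, assuming (purely for illustration) real-time viewing and eight-hour moderator shifts:

    # Back-of-envelope: how many human moderators would it take just to watch
    # every hour of video uploaded to YouTube in a single day?
    # Illustrative assumptions: real-time viewing, 8-hour shifts, no breaks.
    hours_uploaded_per_day = 432_000
    hours_per_moderator_shift = 8

    moderators_needed = hours_uploaded_per_day / hours_per_moderator_shift
    print(f"{moderators_needed:,.0f} moderators per day")  # 54,000 moderators per day

Fifty-four thousand people, just to watch one platform’s daily uploads once, with no time left over to deliberate or appeal.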

Or at least, no human can, and no number of humans can (not when malicious actors are deploying bots to facilitate their aims). But word filters can be built, AI can identify certain, errm, human parts, and patterns of behavior can be mapped for the most frequently reported users (a rough sketch follows below).
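
To give a sense of how crude those half-measures are, here’s a minimal sketch of a word filter paired with a report-count threshold (the terms, threshold, and function names are hypothetical; real systems layer many more signals):

    import re

    # Hypothetical, illustrative moderation heuristics: a word filter plus a
    # simple report-count threshold. Real platforms use many more signals.
    BLOCKED_TERMS = {"badword", "slur"}  # placeholder terms
    REPORT_THRESHOLD = 5                 # reports before escalating to a human

    def contains_blocked_term(text):
        """Flag posts containing any blocked term as a whole word."""
        words = set(re.findall(r"[a-z']+", text.lower()))
        return bool(words & BLOCKED_TERMS)

    def needs_human_review(post_text, report_count):
        """Escalate when a filter trips or enough users have reported the post."""
        return contains_blocked_term(post_text) or report_count >= REPORT_THRESHOLD

    print(needs_human_review("this post contains a badword", report_count=0))         # True (filter trips)
    print(needs_human_review("totally innocent-looking video for kids", report_count=0))  # False
    print(needs_human_review("totally innocent-looking video for kids", report_count=7))  # True (reports pile up)

Notice what it can’t do: keyword-free but deeply disturbing content, like the kids’ videos described below, sails right through until enough humans have already seen it and reported it.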

But these are all imperfect half-measures. Because AI is bad at stopping abuse before it happens and bad at using context to flag offensive content. The purest example of this failure can be seen in the recent viral essay Something Is Wrong on the Internet, where the author documents how the YouTube algorithm has been gamed by spam accounts to rack up millions of views of bizarre, frightening content, all the while reaping ad dollars for the account owners.

No AI could stop content like this, because you couldn’t describe to an AI why it was offensive; or at least, you couldn’t describe it in a useful way, a way that prevents content like this from ever being served to children. Filtering stuff like this takes a human eye. At least, for now.

Of course, there is another option. Google can stop pushing discovery on its YouTube platform. It can trust that if the user wants something, she will seek it out. But for that to happen, for Google to stop thinking about its platform as neutral territory for discovery and start thinking of it as either an archive to be browsed or a media channel to be managed, Google would have to completely upend what it sees as the actual purpose of YouTube. That, of course, would mean accepting slower, smaller growth.

So I wouldn’t count on it.
