Content Moderation


The future of content security and our responsibility in keeping it acceptable, clean and decent.

When future generations read comments on news pages and social media sites, they will be forgiven for thinking that we were a pretty schizophrenic society, with wild opinions ranging from out-and-out hate speech to schmaltzy inspirational quotes. Free speech has never been freer, but with this freedom comes a responsibility to moderate, temper and censor, and ultimately to differentiate between acceptable and abhorrent content.

If you have a website or use a social media platform that allows user content to be published, you have a responsibility to ensure that what appears on your site or page is not illegal, is free from scams and obscenities, and reflects your own ethos and principles. This means far more than engaging the ‘profanity filter’ on Facebook and keeping an eye out for unsolicited advertisements from ‘lonely women in your area’. Content moderation is now a fully recognised job, and it is estimated that more than 100,000 people work as moderators globally.

Their job is to screen for violence, pornography, hate speech and a myriad of inappropriate content that would turn most people off their dinner for more than a few days. One former Facebook employee famously took the tech giant to task with claims of mental anguish and post-traumatic stress disorder brought on by daily exposure to inappropriate content.

Stress from alleged overwork and constantly dealing with complaints containing horrific words and imagery took a huge toll on his life. He maintained that the tech company should have done more to shield him from this stream of cyberbullying, hate crimes and horrors, even as his job was to make critical decisions about the content. “And the takeaway I want you to remember from this kind of crap is people are awful. This is what my job has taught me. People are largely awful and I’m there behind my desk doing my best to save the world. But if no one tells the moderator what’s going on, they don’t know.”

Facebook now outsources this work. Exposure to nasty content and the ensuing mental health effects are not the only problem. No matter how experienced, hardworking and resilient human content moderators are, there is no way they can keep up with the sheer volume of user-generated content being uploaded daily. 300 hours of video are uploaded to YouTube every minute, 400 million photos go on Facebook every day, and this does not even touch the amount of display ads, TikTok moments, websites launched, and comments viciously tapped out in capital letters on the local hardware store’s Instagram post! (Sources: YouTube, Facebook, comScore and Accenture analysis.) Current content moderation methods cannot physically or mentally cope with the vast amount of user-generated content being uploaded daily, hourly and even as you read this ‘word’ (1.7 MB of data, actually!). Artificial intelligence may be the answer.

Artificial intelligence may well offer the solution for the future of content moderation. It brings the promise of identifying large quantities of inappropriate content, across multiple platforms and in real time. Social media platforms have already built content moderation algorithms that can spot up to 99.9% of spam content and terrorist propaganda, for example. Where it all falls down is with hate speech, leetspeak (or ‘hack speak’) and cyberbullying that relies on misspellings, whether deliberate or not. It also fails to understand the nature and context of some horrific insults and hate speech, and it lacks the ability to decipher whether nudity is offensive content or simply the reproduction of a classic, well-loved painting.

Where the process of monitoring and assessing content may reach a happy medium is when the AI weeds out the obvious spam and then passes potentially offending content to a human investigator for action. The human moderator can review that list and ban users, delete or hide content and so on. This exercise can be two-way, feeding the artificial intelligence more information on what constitutes unwelcome contributions in user-generated content and making it more sophisticated at detecting future issues. ‘AI will indeed turn content moderation on its head— not by eliminating the role but by turbocharging it. Certainly, the total number of people working in content moderation can be reduced, but only if Internet companies stay on the forefront of AI technology.’
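To make that workflow concrete, here is a minimal, purely illustrative Python sketch of such a human-in-the-loop pipeline. The classifier, thresholds, queue structure and names are hypothetical stand-ins, not any platform’s actual system: a model scores each post, obvious violations are removed automatically, borderline items land in a human review queue, and the moderator’s verdicts are stored so they can be fed back to improve the model.

```python
# Purely illustrative sketch of an AI-assisted, human-in-the-loop moderation pipeline.
# classify() is a toy stand-in for a real model or moderation service; the thresholds
# and data structures are hypothetical.
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class Post:
    post_id: str
    text: str


@dataclass
class ModerationQueue:
    pending: List[Post] = field(default_factory=list)               # awaiting human review
    labelled: List[Tuple[Post, str]] = field(default_factory=list)  # human verdicts, kept for retraining


def classify(post: Post) -> float:
    """Return a stand-in 'probability that this post violates policy' between 0 and 1."""
    banned_terms = {"buy now", "free crypto"}                       # placeholder for a real model
    hits = sum(term in post.text.lower() for term in banned_terms)
    return min(1.0, hits * 0.6)


def triage(post: Post, queue: ModerationQueue,
           auto_remove_at: float = 0.95, review_at: float = 0.5) -> str:
    """Auto-remove obvious violations and queue borderline cases for a human moderator."""
    score = classify(post)
    if score >= auto_remove_at:
        return "removed"                                             # high confidence: no human needed
    if score >= review_at:
        queue.pending.append(post)                                   # uncertain: a person decides
        return "queued"
    return "published"


def record_decision(post: Post, decision: str, queue: ModerationQueue) -> None:
    """Store the moderator's verdict so it can be fed back as training data."""
    queue.labelled.append((post, decision))


queue = ModerationQueue()
print(triage(Post("1", "Buy now! Free crypto for everyone"), queue))  # removed automatically
print(triage(Post("2", "Buy now while stocks last"), queue))          # queued for a human
record_decision(queue.pending[0], "allowed", queue)                    # the human overrides the flag
```

The key design choice is the pair of thresholds: anything the model is unsure about lands with a person, and every human decision becomes new training signal for the next round.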

The online arguments around whether moderation flies in the face of free speech are understandably loud and complicated. At what point does the right to free expression meet the necessity for swift censorship? It could be said that some content moderation depends on a personal moral compass and opinion, which may vary from user to user. Most of the tech giants have published clear guidelines on what constitutes inappropriate content and, to be fair, most of us will know instinctively what is offensive, whether morally, culturally or legally, even if we are not particularly ‘woke’! Clearly, it is best to delete and remove anything which may cause offence; neglecting to do so could have dire consequences.

It is interesting to note that in the wake of the US Capitol riots, Apple and Google removed the social media platform Parler from their app stores. Amazon Web Services stopped hosting the company on its servers, and all three argued that Parler had not done enough to moderate content posted on its platform. The conversation then became very much about what constitutes free speech and free expression and what is defined as hate speech and incitement. The closure of Parler has seen a large migration to lesser-known platforms such as Gab and Telegram, with Parler reportedly enlisting the help of a Russian web hosting company to get back online. Moderators can start sharpening their virtual pencils now, as there will clearly be much work to be done going forward.

But where does this leave the average Joe or Joan, who welcomes user-generated comments on all their platforms? A failure by the owner or the platform itself to catch inappropriate content might result in financial risk, brand degradation and a loss of consumer trust. You may not be charged with inciting a coup, but whatever appears on your site will appear to be your ethos. The issue is serious enough that it has spawned many and varied content moderation software applications that plug into a variety of platforms. These apps claim to reduce the risk to your brand by detecting profanity, hate speech, harassment and sexual content, and can even reduce your GDPR risk by instantly removing PII (personally identifiable information) from your platform. Others simply build review queues of potentially damaging content which you can work through later at your leisure.
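As a rough illustration of what the simpler end of that tooling does, the hypothetical Python snippet below combines a toy word-list check with regex-based PII redaction. The word list, patterns and function names are invented for the example and are nowhere near production-grade, but they show the basic mechanics of flagging a comment for a review queue and stripping personal data before it is stored.

```python
# Illustrative sketch only: a toy profanity check plus regex-based PII redaction,
# roughly the kind of filtering the moderation apps described above perform.
import re

PROFANITY = {"darn", "heck"}                       # placeholder word list
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")


def redact_pii(text: str) -> str:
    """Replace email addresses and phone numbers with a redaction marker."""
    text = EMAIL_RE.sub("[redacted email]", text)
    return PHONE_RE.sub("[redacted phone]", text)


def needs_review(text: str) -> bool:
    """Flag comments containing listed terms so they land in the review queue."""
    words = set(re.findall(r"[a-z']+", text.lower()))
    return bool(words & PROFANITY)


comment = "Call me on +1 555 123 4567 or mail darn.spammer@example.com"
print(redact_pii(comment))    # personal data stripped before the comment is stored
print(needs_review(comment))  # True -> goes into the human review queue
```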

The task of moderating, editing and accepting accountability for user-generated items published online is going to become more complicated in the coming years. Keeping track could force many owners off these platforms and onto sites which do not accept user-generated content at all. The solution appears to lie with more sophisticated AI monitoring and keeping ahead of the trolls, the scammers and the haters.

Sadly, the solution does not seem to rest with a more caring and responsible use of the internet, with a change of hearts and minds and users posting more acceptable and agreeable content. Until that day comes, moderate your content.

Industry News | Website Management