How Facebook Could Respond To Moderators' Nightmare

Content moderators on Facebook are going through hell to keep your feeds palatable. They sift through videos of murders, suicides, explicit content and other horrors day in, day out, on strictly timed bathroom breaks and an annual salary of just $28,800. Their treatment certainly doesn't make up for the daily torment. The Verge reported that many workers have turned to smoking weed during breaks, among other questionable coping mechanisms, to take the edge off the trauma, and the potential long-term psychological impact is an alarming concern in itself.

This isn't exactly new knowledge, but The Verge's graphic exposé draws attention to Facebook and the tech giants once again and puts much-needed pressure on them to confront this ethical crisis. From artificial intelligence to a change in culture from the ground up, there is no easy fix for Facebook, but big compromises need to be made to address this case of wanton neglect.

Improve working culture

If the reported effects of constant exposure to extreme content prove anything, it's that being a content moderator is a dangerous job. As with any other risky occupation, efficiency cannot be its managers' number-one priority. You'd see a lot more tragedies if pilots, builders, or belayers assisting rock climbers worked solely with efficiency in mind. It's about safety, physical and psychological.

Stubborn efficiency seems to be the culture at Facebook. According to reports, Cognizant, the moderation vendor Facebook employs, allows employees only nine minutes of "wellness" time daily, and even Muslim workers are told that praying is not a valid reason to take time off.

It's an inhumane way to manage workers who are constantly exposed to mentally punishing content. These employees need to be protected and encouraged.

Working in teams

Employees told The Verge how they find humor in this disheartening environment: they crack dark jokes, share memes they come across and engage in good-natured banter.

It may seem trivial, but Facebook can build a better – and safer – culture by having moderators work in pairs. Moderation is a depressingly isolated task right now, with workers glued to their screens, each processing over 400 disturbing posts a day alone. Pairing up helps to alleviate some of the psychological impact and encourages the kind of communication that makes for a better working environment. It can't be about efficiency.

Rotation of staff

Facebook must accept that this is an outcome of the flaws of its own platform. If it truly cares about ethics as a world-leading institution, it can't keep sweeping moderators under the rug and paying contractors to clean up its mess. This very mess is a symptom of Facebook.

While human eyes are still needed to filter this content, perhaps the work could be rotated through members of Facebook itself, so the company shares the weight of its own mistakes. That would take the load off the 15,000 workers employed to do nothing but filter pornography, murder and other forms of mind-numbing violence.

It would certainly be painful to subject more of Facebook's own employees to such content, but this is the reality social media giants face today. Someone's gotta do it, and it's safer to share the burden.

Hope in Artificial Intelligence

AI is the unavoidable future that the tech industry is building toward, and it certainly looks like the light at the end of the tunnel for moderators and social media companies alike: technology taking the place of humans in filtering content and cleaning up the cyber-sphere.

Doubt surrounds this notion, though, as experts are skeptical that AI can ever understand human culture and its nuances. It may be difficult for a machine to differentiate between pornography and a sex-education video, for example. If we humans struggle to draw the line between porn and art, what more can we ask of a machine?

The outlook isn't promising, but perhaps there is hope in AI if it is coupled with new guidelines for what content is permissible on social media.

AI can flag harmful content

AI may not be able to appreciate subtle nuances, but it can certainly detect and categorize extremes like gun violence or nudity. When the technology gets there, moderators could use AI's recognition abilities to flag such content as users post it and add a warning label for viewers, which could read: "Be warned: This video could contain violent images."

Having AI do this would also make life simpler for Facebook's moderators: content arrives pre-filtered, streamlining the process and reducing their exposure to violent imagery.
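To make the idea concrete, here is a minimal sketch in Python of what such a flag-and-label step might look like. The classifier, threshold and labels are hypothetical stand-ins for illustration, not anything Facebook actually runs.

```python
# A minimal sketch of the flag-and-label idea above. The classifier,
# threshold and warning text are placeholders, not Facebook's real system.
from dataclasses import dataclass, field
from typing import List

VIOLENCE_THRESHOLD = 0.8  # assumed confidence above which a warning is attached


@dataclass
class Post:
    post_id: str
    video_url: str
    warnings: List[str] = field(default_factory=list)
    needs_human_review: bool = False


def score_violence(video_url: str) -> float:
    """Placeholder for a video classifier returning a 0-1 violence score."""
    raise NotImplementedError("plug in a real model here")


def triage(post: Post) -> Post:
    """Label likely-violent posts for viewers and queue them for a moderator."""
    if score_violence(post.video_url) >= VIOLENCE_THRESHOLD:
        post.warnings.append("Be warned: This video could contain violent images.")
        post.needs_human_review = True  # a moderator confirms instead of seeing it cold
    return post
```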

Licensing content creators

With tighter regulations, content that AI flags could be blocked initially, pending review. It could prove to be a necessary evil to keep harmful content off our feeds – efficiently.

Companies that regularly need to publish graphic images, like news outlets and other informative channels, could perhaps apply for permission to publish such content at their own discretion, without going through the slow process of AI checks and censorship.

This could also alleviate Facebook's fake-news problem. Ordinary people who want to publicize messages featuring violent images that would otherwise be rejected by AI could turn to these "credible" news outlets with clearances to publish them on Facebook, creating a logical system of filtering and control. A mixture of these strategies could help moderation significantly.
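As a rough illustration, a routing rule along these lines might look like the following sketch; the license registry and publisher IDs are invented for the example.

```python
# A rough sketch of the licensing idea: AI-flagged content is held for human
# review unless the publisher holds a clearance. The registry and publisher
# IDs below are invented for illustration.
LICENSED_PUBLISHERS = {"example-news-network", "example-health-channel"}


def route_post(publisher_id: str, flagged_by_ai: bool) -> str:
    if not flagged_by_ai:
        return "publish"                     # nothing detected, goes straight out
    if publisher_id in LICENSED_PUBLISHERS:
        return "publish_with_warning_label"  # licensed outlets skip the review queue
    return "hold_for_human_review"           # everyone else waits for a moderator
```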

Increasing moral barriers

Obviously, none of the ideas shared here are close to how social media operates today.

Freedom of speech is at the heart of social media today. This liberty has led to plenty of important and informative content, but it has also produced the harmful material that created these problems.

Maybe it's time to meet in the middle. Freedom of speech isn't a value that is cherished everywhere, and plenty of societies flourish just as well with stricter regulations on what can be published and said.

Facebook serves an international audience of over 1.8 billion users across all sorts of countries and systems of government – and only 214 million of them are American.

The world has shown that it isn't civil enough to handle such limitless freedom in the cyber world. Clamping down on expression could be key to aiding moderation and contributing to a safer internet.

Crossroads for Facebook

The security, fraud, moderation and other crises of recent years may in fact work in Facebook's favor: they present a fine moment to call for change.

It needs to decide fast: is it a platform for any individual, regardless of background, to upload and get away with anything they want? Or is it a place that values safety and responsible sharing of content?

The situation certainly calls for a radical shift in how we move ahead and build the culture of social media. AI is getting there; moderators – and children – are suffering; people don't have the civility to post sensible things. A compromise must be made, and perhaps it begins with tighter controls and greater rigidity.

Yes, it's all a set of ideals, but what are we without them?
