On March 16, Facebook sent home thousands of content moderators amid the global pandemic. Other social media giants like YouTube and Twitter took the same precautions. With a large portion of their human teams absent, these companies switched almost exclusively to their artificial intelligence systems to enforce their moderation policies. To complicate matters, not only are there fewer human eyes reviewing content, but housebound users are also driving a surge in social media use.
This heavy reliance on AI has led industry professionals to predict an increase in potentially harmful material slipping through the filters. Over the past month, some disturbing mistakes have already occurred, and research suggests more are to come for as long as content moderators are unable to do their work.
Initial complications … or just a bug?
It’s no secret that a hybrid of live moderators and AI is the most effective weapon against offensive content, so being forced to rely on AI alone is less than ideal. Social media companies were right to warn users to expect errors in the flagging of policy violations.
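The hybrid setup described above is often implemented as a triage pipeline: an automated classifier scores each post, only high-confidence cases are actioned automatically, and uncertain cases go to a human review queue. The sketch below is a minimal illustration of that idea; the thresholds, score values, and class names are invented for the example and do not reflect any platform's actual system.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative thresholds -- real platforms tune these per policy area.
REMOVE_THRESHOLD = 0.95   # auto-remove above this violation score
REVIEW_THRESHOLD = 0.60   # route to human review above this score

@dataclass
class ModerationQueue:
    removed: List[str] = field(default_factory=list)
    human_review: List[str] = field(default_factory=list)
    approved: List[str] = field(default_factory=list)

    def triage(self, post: str, score: float) -> str:
        """Route a post based on a classifier's violation score in [0, 1]."""
        if score >= REMOVE_THRESHOLD:
            self.removed.append(post)
            return "removed"
        if score >= REVIEW_THRESHOLD:
            # With moderators sent home, this middle bucket is where
            # the extra errors come from: borderline cases must either
            # back up or be auto-resolved without human judgment.
            self.human_review.append(post)
            return "human_review"
        self.approved.append(post)
        return "approved"

queue = ModerationQueue()
print(queue.triage("clear policy violation", 0.98))  # removed
print(queue.triage("borderline post", 0.75))         # human_review
print(queue.triage("harmless post", 0.10))           # approved
```

The design point is that the middle band normally absorbs the classifier's uncertainty; remove the humans and a platform must either raise false removals or let more violations through.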
For example, YouTube announced that “users and creators may see increased video removals, including some videos that may not violate policies.” The company also said that, in the interim, videos removed under this system would not earn their creators a strike unless there was little doubt the content was harmful.
As expected, problems began to surface early. Last month, posts containing links to several legitimate news sources, including USA Today and The Atlantic, were taken down for violating Facebook’s spam rules. The company, however, was quick to attribute this to a bug in its spam filter.
Even if a bug in the spam filter explains why reputable news sources were flagged, the question remains: how well does the company’s AI catch real misinformation? This is not a criticism of Facebook’s AI in particular, as the pairing of humans and AI is a standard requirement in most moderation programs.
Consumer Reports’ investigation
To get a clear picture of how effectively Facebook’s AI works, Consumer Reports put it to the test with ads from a fake organization it created. The “Self Preservation Society” submitted advertisements containing coronavirus misinformation, echoing some of the most widespread false claims, such as that the virus is a hoax or that small doses of bleach can strengthen the immune system.
Consumer Reports found that the social media platform did not flag a single advertisement over the course of a week. The one exception involved a stock photo of a respirator-style face mask; once it was simply swapped for a similar-looking mask, that ad was approved as well.
When Consumer Reports asked Facebook how many content moderators are now working from home, a company spokesperson would only say “a few thousand,” keeping the exact numbers unclear.
The coronavirus pandemic has created many unprecedented situations. As mentioned earlier, social media use is rising because people cannot interact in person. It could almost be called a perfect storm: a surge of social media activity is exposing users to misinformation about the very crisis at hand, putting them and others at risk. In addition, hateful material and conspiracy theories relating to China and the virus’s origin there have increased.
As a result of limited human resources, some groups of people are more likely to encounter misinformation. Not only are moderators prioritizing certain regions, but most AI algorithms have not been trained on less widely spoken languages, meaning they are less effective at catching bogus content aimed at those audiences.
A report by the online activist group Avaaz, for example, found that Facebook applies significantly fewer warning labels to misinformation targeting Italian, Spanish, and Portuguese speakers.
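The multilingual gap described above is easy to reproduce even with a toy filter. The sketch below is a deliberately naive keyword matcher built only from English phrasings of a false claim; the keyword list and example sentences are invented for illustration, and no real platform works this simply. The same claim written in Italian sails straight through, which is the failure mode the report points at on a far larger scale.

```python
# Toy misinformation filter with keywords drawn only from English text.
# Everything here is illustrative -- a stand-in for a classifier whose
# training data covers some languages far better than others.
ENGLISH_MISINFO_KEYWORDS = {"hoax", "bleach", "cure", "immune booster"}

def flag(post: str) -> bool:
    """Return True if the post matches any known (English-only) keyword."""
    text = post.lower()
    return any(keyword in text for keyword in ENGLISH_MISINFO_KEYWORDS)

english_claim = "Drinking bleach can cure the virus"
italian_claim = "Bere candeggina puo curare il virus"  # same false claim

print(flag(english_claim))  # True  -- caught
print(flag(italian_claim))  # False -- missed: no Italian coverage
```

A production classifier is far more sophisticated than keyword matching, but the underlying issue is the same: a model sees only what its training data covers, so languages underrepresented in that data get weaker protection.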
End in sight?
No one can say with certainty when social media companies will bring their moderators back to the office. Not even the companies themselves. Although Mark Zuckerberg has said Facebook’s goal is to bring “critical employees” back to work sooner than others, meaning the moderators who review content related to terrorism or suicide, most of the company’s employees will need to work from home until at least June.