At the start of the pandemic, YouTube decided to lean heavily on AI moderators, which were expected to be effective. However, the company has since found that its automated filters have low accuracy and has decided to bring back more human moderators.
The initial decision rested on the assumption that machine-learning systems could be trained to find and remove videos that violated the platform's policies, with a major focus on hate speech and disinformation. Statistics have since revealed that both the number of video removals and the number of incorrect takedowns soared after this approach was introduced.
More than 11 million videos were removed from YouTube between April and June, more than double the figure for the same period in previous years. YouTube content creators appealed 320,000 of those takedowns, and about half of the videos were reinstated after the appeal requests were reviewed.
In an interview with a popular financial news outlet, a YouTube executive explained that the decision to rely on AI was made with the knowledge that the system might be overzealous: the safety of YouTube users comes first, even if the AI makes more mistakes.
Not the only one
Major social media and online media platforms have struggled for years to combat disinformation and other forms of toxic content, a task far more complicated than it may seem. As a result, many have turned to automated filters, which save time and block potentially harmful content faster than human reviewers can.
However, skeptics have pointed out that AI-based solutions struggle to interpret subtlety and nuance, which are essential components of language. Cultural context, which can play a major role in how content is understood, is another element that machines fail to grasp. Even so, YouTube will continue to use AI moderation, judging it effective despite its flaws.