YouTube, the video unit of Alphabet Inc's Google, took down more than 58 million videos, 1.7 million channels and more than 224 million comments during the last quarter of 2018 for violations of its policies and guidelines. Government officials and interest groups in the United States, Europe, and Asia have been pressuring YouTube, Facebook, and other social media services to quickly identify and remove extremist and hateful content that critics say incites violence.
“We’ve always used a mix of human reviewers and technology to address violative content on our platform, and in 2017 we started applying more advanced machine learning technology to flag content for review by our teams. This combination of smart detection technology and highly-trained human reviewers has enabled us to consistently enforce our policies with increasing speed,” the company said.
YouTube has had issues in the past with inappropriate and disturbing kids' videos as well as extremism. To tackle the issue, Google last year committed to boosting its machine learning efforts and adding more people to YouTube's Trusted Flagger program. But YouTube faces a bigger challenge with material promoting hateful rhetoric and dangerous behaviour.
Automated detection tools help YouTube quickly identify spam, extremist content and nudity. During September, 90 percent of the nearly 10,400 videos removed for violent extremism, and of the 279,600 videos removed for child safety issues, received fewer than 10 views, according to YouTube. Because automated detection technologies are relatively new and still limited, YouTube has also relied on users to report potentially problematic videos and comments. Hoping to review user reports faster, Google recruited thousands of moderators to tackle the issue.
About 80% of the channels taken down were removed for spam uploads, 13% for nudity and 4.5% for child-safety violations.