TikTok’s popularity has soared in recent months, but that growth has come at a cost: its content moderation team is struggling to keep the video platform free of spam and malicious content. According to TikTok’s latest transparency report, the company took down more videos than ever in the first half of 2020 (January-June) for violating its guidelines and fielded a growing number of government requests for user information.
Over 104 million videos were removed from TikTok worldwide in the first six months of this year, more than double the number in the second half of 2019. About 37 million of those were in India, followed by nearly 10 million in the United States.
TikTok says this still amounts to less than 1% of all videos uploaded to its app. In the report, it adds that it took action on 96.4% of the removed clips before a user reported them, and that 90.3% of them were taken down before receiving any views. The service’s algorithms automatically detected and removed 10 million of these clips.
“As a result of the coronavirus pandemic, we relied more heavily on technology to detect and automatically remove violating content in markets such as India, Brazil, and Pakistan,” TikTok wrote in a blog post.
TikTok’s content moderation practices haven’t always been effective, however. Earlier this month, the company was scrambling to suppress the spread of a viral, gruesome video that showed a man taking his own life with a gun.
With over 100 million users in the U.S. alone, TikTok is now also a significant potential resource for law enforcement agencies seeking personal data in investigations. In the U.S., TikTok received 226 requests from law enforcement or government entities for user information and content restrictions, up substantially from 100 in the preceding six months, and the company complied in 85% of those cases. In India, that figure ran into four digits.
TikTok has been repeatedly criticized for censoring content that’s critical of China. Those stats are not available in this report, however, since ByteDance, the China-based owner of TikTok, operates a separate, localized version called Douyin in China. To fend off these accusations, TikTok formed a new committee of experts back in March this year to bring more transparency to its content moderation process and seek “unvarnished views” on the social video app’s policies.
Alongside its latest transparency report, TikTok also published a proposal for a global coalition of nine social and content platforms to take a collaborative approach against hate speech and other harmful content. In a public letter, TikTok’s interim global chief, Vanessa Pappas, suggests that the safety teams of these social media companies notify one another of any harmful content identified on their respective platforms that has the potential to proliferate across the rest of the internet. And last weekend, in a tweet, Pappas called on Facebook and Instagram to join TikTok’s legal challenge against the Trump administration.
We’ve reached out to TikTok and other major social media platforms for comment and will update the story when we hear back.