On the same day TikTok released its Global Transparency Report for the first half of 2020, interim head Vanessa Pappas sent a letter to the heads of nine other social and content platforms proposing a memorandum of understanding, under which they would warn one another of violent, graphic content that appeared on their own platforms.
Heads of trust and safety Jeff Collins (Americas), Cormac Keenan (Europe, the Middle East and Africa) and Arjun Narayan Bettadapur Manjunath (Asia-Pacific) wrote in a blog post Tuesday, “Social and content platforms are continually challenged by the posting and cross-posting of harmful content, and this affects all of us—our users, our teams and the broader community. As content moves from one application to another, platforms are sometimes left with a whack-a-mole approach when unsafe content first comes to them. Technology can help auto-detect and limit much but not all of that, and human moderators and collaborative teams are often on the frontlines of these issues.”
As of this post, TikTok had not responded to a request to name the nine other platforms.
Pappas wrote in her letter, “Recently, social and content platforms have once again been challenged by the posting and cross-posting of explicit suicide content that has affected all of us—as well as our teams, users and broader communities. Like each of you, we worked diligently to mitigate its proliferation by removing the original content and its many variants and curtailing it from being viewed or shared by others. However, we believe each of our individual efforts to safeguard our own users and the collective community would be boosted significantly through a formal, collaborative approach to early identification and notification among industry participants of extremely violent, graphic content, including suicide.”
She proposed a meeting of the platforms’ trust and safety teams to further this goal.
TikTok vice president and head of U.S. public policy Michael Beckerman and U.S. head of safety Eric Han provided highlights from the video creation platform’s Global Transparency Report for the first six months of 2020.
A total of 104,543,719 videos were removed worldwide during that period for violations of TikTok’s community guidelines or terms of service, representing less than 1% of all videos uploaded to the platform.
TikTok said 96.4% of those videos were discovered and removed before being reported, and 90.3% were pulled before receiving any views.
The U.S. accounted for 9.4% of removed videos, or 9,822,996 of them.
TikTok said 41,820 of those U.S. videos, or less than 0.5% of U.S. removals, were taken down for violating its misinformation and disinformation policies, while 321,786 (roughly 3.3%) were pulled for violations of its hate speech policies.
The company added that 91.5% of videos removed in the U.S. were taken off its platform within 24 hours of being uploaded.
TikTok also received 1,768 requests for user information in the first half of the year, originating from 42 countries or markets, with 290 of them (16.4%) coming from law enforcement agencies in the U.S.: 126 subpoenas, 90 search warrants, 68 emergency disclosure requests and six court orders.
The company also received 135 requests from government agencies in 15 countries or markets to restrict or remove content, four of which came from government agencies in the U.S.
Finally, TikTok said it evaluated 10,625 copyright takedown notices globally in the first six months of 2020.