Twitter provided an update on its efforts to prevent misinformation related to the coronavirus pandemic, saying that since updating its safety rules in March to account for content on Covid-19, it has removed 14,900 tweets and challenged 4.5 million accounts.
The social network added that more than 160 million people have visited its curated Covid-19 page over 2 billion times thus far.
Twitter also clarified how it evaluates whether to remove potentially misleading Covid-19 information from its platform, breaking the evaluation into three criteria:
- Is the content advancing a claim of fact regarding Covid-19? The social network said the claim must be presented as fact, not opinion, on topics including: the origin, nature and characteristics of the coronavirus; preventative measures, treatments, cures and other precautions; viral spread or the current state of the crisis; official health advisories, restrictions, regulations and public-service announcements; and how vulnerable communities are affected by or responding to the pandemic.
- Is the claim demonstrably false or misleading? Tweets fall under this category if the information within has been confirmed as false by experts, such as public health authorities, or if that information is shared in a way that could confuse or deceive people. Factors considered include: whether the content of the tweet, including media, has been altered, manipulated, doctored or fabricated; if claims are presented improperly or out of context; and whether claims are widely accepted by experts to be inaccurate or false.
- Would belief in this information, as presented, lead to harm? Twitter said in its ongoing blog post about all things Covid-19, “We will not be able to take enforcement action on every tweet that contains incomplete or disputed information about Covid-19. Our focus in the Covid-19 policy is narrowed to address those claims that could adversely impact an individual, group or community.” Examples include misleading information that could increase the likelihood of exposure, hamper the public health system’s capacity to cope with the crisis or lead to discrimination against or avoidance of communities or places of business based on their perceived affiliation with protected groups.
The social network reiterated that content meeting all three of the criteria above is not permitted on its platform, and accounts sharing that content may be permanently suspended.
Twitter also said it would continue using warning labels in cases where the risks of harm associated with the content are less severe, but the content could still be confusing or misleading.
The social network wrote, “The world has changed since this pandemic was first declared, and public health experts, medical professionals, scientists and researchers now know more about how we can best stay safe and healthy. As the situation evolves and as global health advisories shift to cope with the pandemic, we are committed to ensuring that our rules and enforcement actions reflect that evolution.”