Twitter provided an update on its efforts to maintain a safe environment without relying solely on users to report abusive content.
Vice president of services Donald Hicks and director of product management, health, David Gasca revealed in a blog post that 38 percent of the abusive content Twitter has taken enforcement action against since last March—including abusive behavior, hateful conduct, encouraging self-harm and threats—was surfaced proactively for review by the social network's artificial intelligence tools, rather than discovered through user reports.