The social network shared examples of tweets that violate the new policy (pictured above) and said in a blog post that starting Tuesday, it will require content of this sort to be removed when it is reported.
Tweets posted prior to Tuesday must also be deleted when reported, but accounts will not face suspension for those tweets, as the rule was not yet in effect.
The social network requested feedback last year on steps it should take to keep its users safe. The themes that emerged most often among the more than 8,000 responses it received were making the language in its policies clearer and more detailed, being more specific than “identifiable groups” and striving for consistency in its enforcement.
Twitter added in its blog post that it needs to better understand the answers to the following questions before expanding its hateful conduct policies to other protected groups:
How do we protect conversations people have within marginalized groups, including those using reclaimed terminology?
How do we ensure that our range of enforcement actions take context fully into account, reflect the severity of violations and are necessary and proportionate?
How can—or should—we factor in considerations as to whether a given protected group has been historically marginalized and/or is currently being targeted into our evaluation of severity of harm?
Last month, the social network took steps to make its Twitter Rules easier for users to understand, reorganizing them into three categories—safety, privacy and authenticity—and tightening the description of each rule to fit within the social network’s 280-character limit.