Twitter said at the time, “This way, you have more control over the conversations you start, but people can still see the entire conversation.”
The social network said in a series of tweets from its @TwitterDev account Wednesday, “Introducing a new Labs endpoint so that developers can help people hide replies they find irrelevant, off-topic or toxic. Developers can help people manage their replies faster and more efficiently.”
Twitter provided more information in its developer forum, writing, “People hide replies for many reasons, including to remove comments that are abusive, irrelevant or distracting. The endpoint enables you to build tools to help people on Twitter hide replies faster, more efficiently or in circumstances where they’d normally give up.”
Developers who are interested in incorporating the endpoint into their applications must create a developer account if they haven’t already, and then join Labs via the Labs section of the developer portal.
They can then select Activate next to Hide Replies, and choose an app to connect.
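Once an app is connected, hiding a reply comes down to a single authenticated request. A minimal sketch, assuming the Labs path `PUT /labs/1/tweets/:id/hidden` with a JSON body of `{"hidden": true}` (the exact path, bearer token and tweet ID below are placeholders, not values confirmed by this article):

```python
import json
import urllib.request

# Assumed Labs endpoint template; check Twitter's developer docs for the
# current path before using this in a real app.
LABS_HIDE_URL = "https://api.twitter.com/labs/1/tweets/{id}/hidden"

def build_hide_request(tweet_id: str, bearer_token: str) -> urllib.request.Request:
    """Build the PUT request that marks the reply with this ID as hidden."""
    req = urllib.request.Request(
        LABS_HIDE_URL.format(id=tweet_id),
        data=json.dumps({"hidden": True}).encode("utf-8"),
        method="PUT",
    )
    req.add_header("Authorization", f"Bearer {bearer_token}")
    req.add_header("Content-Type", "application/json")
    return req

# Sending it requires a real token and a reply to a tweet your user authored:
#   urllib.request.urlopen(build_hide_request("1234567890", token))
```

Unhiding would be the same request with `{"hidden": false}`, which is how an app can honor a user who reverses the decision.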
The social network included more guidance for developers in its documentation.
Twitter cautioned, “Because Perspective is not trained on actual tweets, certain language nuances may cause this app to hide a reply that a user wants to remain unhidden. Regardless of the technology or the approach you use when designing your app, always make the best possible effort to ensure that your users understand what your app has hidden and can make changes.”
The social network said the best option is to trust users and give them full control over their decisions, adding that in cases where this is not desirable, “your app should use a very high confidence threshold to detect and hide tweets.”
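That guidance, hide automatically only at very high confidence, reduces to a simple gate on the model’s score. The 0.95 threshold below is an assumed illustrative value, not one Twitter prescribes:

```python
# Assumed threshold for illustration; tune it for your own model and
# err toward leaving replies visible, per Twitter's guidance.
HIDE_THRESHOLD = 0.95

def should_hide(toxicity_score: float, threshold: float = HIDE_THRESHOLD) -> bool:
    """Hide only when the model is very confident the reply is toxic."""
    return toxicity_score >= threshold

# A borderline score stays visible; only near-certain toxicity is hidden.
print(should_hide(0.80))  # False
print(should_hide(0.97))  # True
```

Anything the gate does hide should still be surfaced to the user for review, since Twitter’s stated preference is to leave the final call with them.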
Twitter added, “Not everybody uses the same words, and your app should be designed to avoid any bias. Be mindful of reclaimed words and slang that may lead to false positives. If you are training an artificial intelligence, consider adopting a model that closely reflects language often used on Twitter.”
Finally, Twitter shared three examples of how developers incorporated the new endpoint:
- Jigsaw integrated the endpoint with Perspective API (application programming interface), AI that scores tweets for their toxicity, to automatically hide replies to tweets that exceed a high toxicity threshold.
- Reshuffle, a platform that connects business apps, developed a script that detects and hides replies containing certain keywords.
- QuotedReplies developer Dara Oladosu built an app that automatically hides replies meeting criteria he has found to correlate with abusive behavior, including replies that contain keywords he has muted in the past.
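The keyword-based approach used in the last two examples can be sketched as a whole-word match against a mute list. The keyword lists and matching rules of those apps are not public; the terms below are placeholders:

```python
import re

# Placeholder mute list; a real app would load the user's own muted terms.
MUTED_KEYWORDS = {"spam", "scam"}

def matches_muted_keyword(reply_text: str, keywords=MUTED_KEYWORDS) -> bool:
    """Return True when the reply contains any muted keyword as a whole word."""
    words = re.findall(r"[a-z']+", reply_text.lower())
    return any(word in keywords for word in words)

print(matches_muted_keyword("This is spam, click here"))  # True
print(matches_muted_keyword("A perfectly fine reply"))    # False
```

Matching whole words rather than substrings is one way to heed Twitter’s warning about reclaimed words and slang, since substring checks produce far more false positives.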