Instagram said in May that it would begin using parent company Facebook’s third-party fact-checking initiative, but until now its users had no way to report suspicious content.
The menu that Instagram users see when reporting a post now includes false information among the reasons for the report. Users tap “Report,” then “It’s inappropriate,” and finally “False information.”
Instagram said posts reported this way may be sent to third-party fact-checkers and, if found to contain false information, may be hidden from its Explore and hashtag pages. Fact-checkers have discretion over which posts they choose to rate.
Facebook spokesperson Stephanie Otway said, “Explore and hashtags allow people on Instagram to find content that they haven’t already chosen to follow, and by filtering misinformation from these places, we can significantly limit its reach.”
She added, “Starting today, people can let us know if they see posts on Instagram that they believe may be false. We’re investing heavily in limiting the spread of misinformation across our applications, and we plan to share more updates in the coming months.”
Otway said “a combination of factors” will be used to determine whether reported posts are presented to fact-checkers for review.
She added that feedback from this process will also be used to train Facebook’s and Instagram’s artificial intelligence technology, helping it to proactively find and rate questionable content even before it is reported.
Otway said the pilot will not be expanded outside of the U.S. at this time, but the company will have more to share on further developments soon.