Facebook is testing an extension of its third-party fact-checking partnership initiative to Instagram, its photo- and video-sharing network.
Instagram brand communications manager Stephanie Otway told Daniel Funke of Poynter that Instagram started working more closely with its parent company’s News Feed integrity team for the U.S. midterm elections last November.
In a test starting this week, Instagram will begin sending posts that potentially contain misinformation to the same dashboard currently used by Facebook’s fact-checking partners, giving their teams the chance to help limit the spread of hoaxes on Instagram with no change to the workflow they are already familiar with.
An Instagram spokesperson provided more details on the test: “For Instagram-specific misinformation, we will begin testing surfacing a limited number of Instagram links into the fact-checking product and allowing the fact-checkers to rate the photos. If confirmed false, photos will be filtered out from appearing on Instagram recommendation surfaces (Explore and hashtags). We are trialing this change and will roll it out more broadly if we see promising results.”
When misinformation is found on Instagram, the company does not remove it; instead, it reduces the post’s distribution by filtering it out of the Explore tab and hashtag results pages, making it unlikely that users will discover those posts.
The spokesperson added, “We use a variety of signals to proactively find content that we can add to the fact-checking product.”
Instagram also plans to test other features to help its users spot misinformation, including a pop-up that would appear when people search for anti-vaccine topics.
Otway told Funke, “Our approach to misinformation is the same as Facebook’s: When we find misinfo, rather than removing it, we’ll reduce its distribution. We can use image recognition technology to find the same piece of content on Instagram and take automatic action.”
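The “image recognition technology” Otway describes is not detailed in the announcement, but matching re-uploads of a known image is commonly done with perceptual fingerprints. A minimal sketch of one such approach, average hashing, is below; this is purely illustrative, not Instagram’s actual system, and the tiny 2x2 “thumbnails” are hypothetical stand-ins for downscaled grayscale images.

```python
# Illustrative sketch (not Instagram's actual system): match near-duplicate
# images by computing an "average hash" fingerprint and comparing the
# fingerprints by Hamming distance.

def average_hash(pixels):
    """pixels: 2D list of grayscale values (e.g. a downscaled thumbnail).
    Returns an integer fingerprint with one bit per pixel, set when that
    pixel is brighter than the image's mean brightness."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

# Hypothetical thumbnails: a fact-checked image, a re-upload with one
# slightly brightened pixel, and an unrelated image.
original  = [[10, 200], [200, 10]]
reupload  = [[12, 200], [200, 10]]
unrelated = [[200, 10], [10, 200]]

print(hamming(average_hash(original), average_hash(reupload)))   # 0: a match
print(hamming(average_hash(original), average_hash(unrelated)))  # 4: no match
```

Because the hash depends only on whether each pixel exceeds the image’s mean, small edits such as recompression or minor brightness changes tend to leave the fingerprint unchanged, which is what lets a platform take “automatic action” on copies of an already-rated image.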