Facebook, Microsoft and the Partnership on AI Form the Deepfake Detection Challenge

Academics from several universities will help develop technology to sniff out altered videos

The Deepfake Detection Challenge will include a realistic data set, featuring paid actors. Photo: Facebook

Imitation is the sincerest form of … prevention?

Facebook chief technology officer Mike Schroepfer said in a blog post this week that the social network is taking steps to combat deepfakes, with some highly qualified assistance.

Facebook is teaming up with Microsoft, the Partnership on Artificial Intelligence to Benefit People and Society and academics from Cornell Tech, Massachusetts Institute of Technology, the University of Oxford, University of California-Berkeley, the University of Maryland-College Park and the University at Albany-State University of New York on the Deepfake Detection Challenge.

The goal of the Deepfake Detection Challenge is to develop technology that can better detect when AI has been used to alter a video in order to mislead viewers.

Schroepfer said the Deepfake Detection Challenge will include a realistic data set, featuring paid actors, adding that user data from the social network will not be used.

Facebook is dedicating over $10 million to fund the industrywide effort, including the funding of research collaboration and prizes for the challenge.

Schroepfer said the data set and challenge parameters will initially be tested in a targeted technical working session at the International Conference on Computer Vision in Seoul, South Korea, in October, with the full release slated for the Conference on Neural Information Processing Systems in Vancouver in December.

Facebook will enter the challenge, but the company will not accept any financial prizes.

Governance of the challenge will be facilitated and overseen by the Partnership on AI’s new steering committee on AI and media integrity, a broad cross-sector coalition that includes Facebook, Witness, Microsoft and other organizations from civil society and the technology, media and academic communities.

Schroepfer wrote, “This is a constantly evolving problem, much like spam or other adversarial challenges, and our hope is that by helping the industry and AI community come together, we can make faster progress.”
