Facebook, Twitter and Google+ are all in agreement: there’s no room on the web for images of child abuse. And they’re working to implement (or have already implemented) technology to not only remove it, but to report the sick twists uploading or sharing it.
The Times of London shared that “Facebook, Microsoft, Google, Twitter and at least three other major companies have signed up to — or are in discussions about — a plan intended to tackle the spread of abuse pictures on their sites, according to industry sources.”
It would see the creation of a single database of the “worst of the worst” images that will be maintained by Thorn: Digital Defenders of Children, a Los Angeles-based charity founded by the Hollywood actors Ashton Kutcher and Demi Moore. The project is an unprecedented industry-wide effort to deal with paedophiles using the web to share images of abuse.
The technology that would allow them to do this is software that computes a digital signature, called a “hash,” from each child abuse image. Thorn plans to act as a repository for these digital signatures.
This allows companies to use hashes to identify and remove copies of offending pictures quickly. Because it is illegal to hold the actual child abuse images, Thorn’s database will be made up of these digital signatures alone. Facebook, a long-time user of Microsoft’s PhotoDNA technology, is believed to have been among the first companies to begin testing the system.
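To make the hash-matching idea concrete, here is a minimal sketch of how an upload could be checked against a shared fingerprint database. Note the simplifying assumptions: real systems like PhotoDNA use robust perceptual hashes that survive resizing and re-encoding, whereas this sketch uses an exact SHA-256 digest purely for illustration, and the database contents and function names are hypothetical.

```python
import hashlib

# Hypothetical shared database of known fingerprints (hex digests).
# A real deployment would hold perceptual hashes, not cryptographic ones.
KNOWN_HASHES = {
    # SHA-256 digest of the bytes b"test", standing in for a known image
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def fingerprint(image_bytes: bytes) -> str:
    """Compute a fingerprint for an uploaded file."""
    return hashlib.sha256(image_bytes).hexdigest()

def should_block(image_bytes: bytes) -> bool:
    """Check an upload against the shared database of known fingerprints."""
    return fingerprint(image_bytes) in KNOWN_HASHES

print(should_block(b"test"))      # True: matches the known digest above
print(should_block(b"harmless"))  # False: no match
```

The key property is that companies only ever exchange digests, never the illegal images themselves, which is exactly why Thorn can legally host the database.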
And on Facebook, where the line between offensive and merely inappropriate images blurs a little more every day, child abuse images are an unambiguous violation on the age-13+ site.
Google has been using “hashing” technology to tag known child sexual abuse images since 2008. As the company explains:
“Each offending image in effect gets a unique ID that our computers can recognize without humans having to view them again. Recently, we’ve started working to incorporate encrypted ‘fingerprints’ of child sexual abuse images into a cross-industry database. This will enable companies, law enforcement and charities to better collaborate on detecting and removing these images, and to take action against the criminals.”
How big is this problem?
In 2011, the National Center for Missing & Exploited Children’s (NCMEC’s) CyberTipline Child Victim Identification Program reviewed 17.3 million images and videos of suspected child sexual abuse — four times what its Exploited Children’s Division (ECD) saw in 2007. And the number is still growing. Behind these images are real, vulnerable kids who are sexually victimized, then victimized again through the distribution of their images.
And now Twitter is going to adopt the technology later this year as well. Why “later this year” and not right now? According to Del Harvey, senior director of Twitter’s Trust & Safety team, it’s pretty complicated to implement “based on the sheer scale and speed of the service.”
It is also complicated by the involvement of outside companies called Content Delivery Networks (CDNs), which store copies of data posted online at locations closer to users, so they can be downloaded more quickly.
“You think ‘we’ll just delete the image’, but then you face the question of whether it’s hosted on a CDN. In that case, how do you make sure it gets flushed out? What if there’s a backlog of requests for images to delete? You start to wonder if these things really have to be this complicated just to delete an image – and the answer turns out to be yes, it really does have to be this complicated.”
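Harvey’s point about CDNs can be sketched in a few lines: deleting an image from the origin server isn’t enough, because every edge cache holding a copy must also be purged, and those purge requests can pile up into a backlog. This is an illustrative sketch only — the edge names and functions are hypothetical, not Twitter’s actual infrastructure.

```python
from collections import deque

# Hypothetical CDN edge locations, each of which may cache a copy.
CDN_EDGES = ["edge-us", "edge-eu", "edge-apac"]

def purge(url: str, pending: deque) -> None:
    """Queue one purge request per edge cache for the deleted image."""
    for edge in CDN_EDGES:
        pending.append((edge, url))

def drain(pending: deque, send) -> int:
    """Work through the purge backlog; returns the number processed."""
    done = 0
    while pending:
        edge, url = pending.popleft()
        send(edge, url)  # in reality, an API call to the CDN provider
        done += 1
    return done

backlog = deque()
purge("https://example.com/img/123.jpg", backlog)
print(drain(backlog, lambda edge, url: None))  # 3: one purge per edge
```

One deletion fanning out into many purge requests, each of which can fail or queue behind others, is why “just delete the image” really is as complicated as Harvey describes.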
In the meantime, maybe the good folks of Anonymous could round up another Twitter pedo sting? Lord knows those folks deserve it.
(Image from Shutterstock)