John Henry Skillern was arrested last Thursday for the possession of child pornography. The 41-year-old restaurant worker was allegedly sending indecent images of children to a friend, but while it was police in Houston, Texas that obtained a search warrant for Skillern's tablet and computer and placed him in custody, it was Google that tipped them off to his illegal activities.
That's because Google actively scans the images that pass through Gmail accounts to see if they match up with known child pornography. Most seem to agree that this is a good thing, but as cyber security consultant John Hawes, of Virus Bulletin, tells the AFP, others may view this practice as a slippery slope. "There will of course be some who see it as yet another sign of how the twin Big Brothers of state agencies and corporate behemoths have nothing better to do than delve into the private lives of all and sundry, looking for dirt," Hawes says.
Skillern is alleged to have been using Gmail to send images of child sexual abuse, each of which had previously been identified and given a unique digital fingerprint. When those images were sent through Google's email service, they were identified by its automated systems. From there, Google passed Skillern's details on to the police via the National Center for Missing and Exploited Children (NCMEC).
"We only use this technology to identify child sexual abuse imagery."
Federal law requires that electronic communication providers like Google report instances of suspected child abuse when they become aware of them, but whether they are legally required to actively search out those cases is another question. Even if Google doesn't read the law as requiring this active scanning of private communications, it appears to choose to do so as part of the fight against predators. "Sadly, all internet companies have to deal with child sexual abuse," a Google spokesperson tells the AFP. "It’s why Google actively removes illegal imagery from our services — including search and Gmail — and immediately reports abuse to the NCMEC."
Google goes out of its way to note, however, that its reporting on email activity stops at child pornography. You could even orchestrate a blatantly criminal plot over Gmail, and it sounds as though Google would do nothing about it — it may well not even have technology set up to identify such a thing. "It is important to remember that we only use this technology to identify child sexual abuse imagery — not other email content that could be associated with criminal activity (for example using email to plot a burglary)."
Of course, scanning emails is a big part of Google's email service, and it likely isn't going away entirely any time soon. It's all supposed to happen anonymously as a way to let Google present relevant ads inside Gmail — as it's been doing since the service's launch. That alone has caused a seemingly continuous stream of controversy as Gmail users learn about how the ads work, but it's something that every user consents to, knowingly or not, when they sign up for an account.
Users' consent means that Google has the ability to do a lot more with its email scanning, should it ever choose to or should the law ever compel it to. However widely people agree that the current scanning system is a good thing, it illustrates why privacy advocates worry about what Google and other internet companies can do with our private communications. Even in Skillern's case, a warrant was only needed to give law enforcement access to his physical devices; federal law already required Google to hand over the full communication once it detected signs of child pornography in his inbox.
The police were tipped off by Google
David Drummond, Google's chief lawyer, outlined last year in The Daily Telegraph how his company's automated tagging system detects this imagery. Drummond explained that Google has used the technology since 2008, building up a database that notifies the company when known child porn images are found through its search engine or in the inboxes of its 400 million Gmail users.
Google makes use of Microsoft's PhotoDNA technology to scan emails: it calculates a mathematical hash of a known image of child sexual abuse, allowing the photo to be recognized automatically even if it has been altered. The tech is now also used by both Twitter and Facebook, after Microsoft donated it to the NCMEC in 2009. Videos, too, have become the focus of such digital fingerprinting programs. Google has its own Video ID software for detecting footage of child sexual abuse, and British company Friend MTS donated its Expose F1 detection program to the International Centre for Missing & Exploited Children (ICMEC) earlier this year.
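Microsoft has not published PhotoDNA's internals, but the general fingerprint-and-compare approach it describes can be illustrated with a much simpler stand-in. The sketch below uses a basic "average hash" (an illustrative assumption, not PhotoDNA's actual algorithm) to reduce an image to a bit string, then treats two images as matching when their fingerprints differ in only a few bits, which is how a hash-based system can still catch an image after minor alterations:

```python
# Illustrative sketch of hash-based image matching, in the spirit of
# systems like PhotoDNA. PhotoDNA itself is proprietary; this simple
# "average hash" is a stand-in to show the general idea: reduce an
# image to a compact fingerprint, then compare fingerprints by
# Hamming distance so that small alterations still produce a match.

def average_hash(pixels):
    """Compute a bit-string fingerprint from a grayscale pixel grid."""
    flat = [p for row in pixels for p in row]
    avg = sum(flat) / len(flat)
    # Each bit records whether a pixel is brighter than the average.
    return ''.join('1' if p > avg else '0' for p in flat)

def hamming_distance(h1, h2):
    """Count the differing bits between two equal-length fingerprints."""
    return sum(a != b for a, b in zip(h1, h2))

def is_match(h1, h2, threshold=3):
    """Treat near-identical fingerprints as the same image."""
    return hamming_distance(h1, h2) <= threshold

# A tiny 4x4 "image" and a uniformly brightened copy of it.
original = [[10, 200, 30, 220],
            [15, 210, 25, 215],
            [200, 20, 210, 10],
            [205, 30, 220, 15]]
altered = [[value + 5 for value in row] for row in original]

h_original = average_hash(original)
h_altered = average_hash(altered)
print(is_match(h_original, h_altered))  # the altered copy still matches
```

A real system would first normalize the image (resize, convert to grayscale) and use a far more robust hash, but the matching step is the same: known illegal images are hashed once into a database, and each scanned image's fingerprint is compared against that database rather than against the images themselves.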
While the technology has helped to halt the alleged activities of people such as John Henry Skillern, the automated image detection systems used by Google and others have some flaws. For one, new pictures won't be caught by software such as PhotoDNA: only images already recorded in the fingerprint database can be spotted. And, as discussed, the systems continue to raise concerns over what Google can do with your private communications. Google says it won't give out precise technical information on specific searches or cases, but it has been quick to make clear that its automated detection systems were designed only to trawl for child porn.
Update August 5th, 11:55AM ET: this article has been updated with more context on the privacy concerns surrounding Google's automated scanning.
Jacob Kastrenakes contributed to this report.