Cloud-Powered Facial Recognition Is Terrifying
By harnessing the vast wealth of publicly available cloud-based data, researchers are taking facial recognition technology to unprecedented levels
"I never forget a face," goes the Marx Brothers one-liner, "but in your case, I'll be glad to make an exception."
Unlike Groucho Marx, unfortunately, the cloud never forgets. That's the logic behind a new application developed by Carnegie Mellon University's Heinz College that's designed to take a photograph of a total stranger and, using the facial recognition software PittPatt, track down their real identity in a matter of minutes. Facial recognition isn't that new -- the rudimentary technology has been around since the late 1960s -- but this system is faster, more efficient, and more thorough than any other system ever used. Why? Because it's powered by the cloud.
The logic of the new application is based on a series of studies designed to test the integration of facial recognition technology with the wealth of data accessible in the cloud (by which we basically mean the Internet). Facial recognition's law enforcement uses -- picking criminals out of surveillance footage, say -- have always been limited by the criminal databases available for reference. When Florida deployed Viisage facial recognition software in January 2001 to search for potential troublemakers and terrorists in attendance at Super Bowl XXXV, police in Tampa Bay were able to extract useful information on only 19 people, all with minor criminal records, who already appeared in the databases they had access to. But the Internet was a much smaller place in 2001; Google was in its infancy, and the sheer volume of data now available through a simple search didn't yet exist.
Often, the problems with facial recognition are rooted in the need for greater processing power, human and machine. After revelers rioted in the streets of Vancouver following the Canucks' defeat in the Stanley Cup, Vancouver police received nearly 1,600 hours of footage from bystanders furious with their fellow citizens; the department was woefully ill-equipped to handle the sudden influx of data, anticipating that it would take nearly two years to analyze it all. Vancouver's Digital Multimedia Evidence Processing Lab was eventually able to cut the processing time to a mere three weeks with a relatively small 20-workstation lab.
With Carnegie Mellon's cloud-centric new mobile app, the process of matching a casual snapshot to a person's online identity takes less than a minute. Tools like PittPatt and other cloud-based facial recognition services rely on finding publicly available pictures of you online, whether it's a profile image from a social network like Facebook or Google Plus or something more official, such as a photo on a company website or a college athletic portrait. In their most recent round of facial recognition studies, researchers at Carnegie Mellon were able not only to match unidentified profile photos from a dating website (where the vast majority of users operate pseudonymously) with positively identified Facebook photos, but also to match pedestrians on a North American college campus with their online identities.
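To make the mechanics a little more concrete, here is a minimal, hypothetical sketch of that kind of matching step -- not PittPatt's proprietary system and not the researchers' actual code. An unidentified snapshot is reduced to a numeric "embedding," and candidates from a gallery of embeddings computed from publicly available profile photos are ranked by similarity. The embedding model is omitted entirely, and the gallery names, vector size, and similarity threshold are stand-ins chosen for illustration.

```python
# Illustrative sketch only: rank known profile-photo embeddings by their
# similarity to the embedding of an unknown snapshot. Real systems would
# compute these vectors with a face-recognition model; here they are
# synthetic stand-ins.
import numpy as np


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def rank_candidates(query, gallery, threshold=0.6):
    """Return named gallery entries whose embeddings resemble the query face."""
    scores = [(name, cosine_similarity(query, emb)) for name, emb in gallery.items()]
    scores.sort(key=lambda pair: pair[1], reverse=True)
    return [(name, score) for name, score in scores if score >= threshold]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Stand-ins for embeddings scraped from public profile photos.
    gallery = {name: rng.normal(size=128) for name in ("alice", "bob", "carol")}
    # A street snapshot of "alice" would embed close to her profile photo;
    # simulate that by lightly perturbing her gallery vector.
    snapshot = gallery["alice"] + rng.normal(scale=0.05, size=128)
    print(rank_candidates(snapshot, gallery))
```

In a real deployment the expensive parts are producing the embeddings and scaling the gallery to millions of scraped photos; the ranking step itself is cheap, which is part of what makes the cloud-powered version so fast.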
The repercussions of these studies go far beyond putting a name with a face; researchers Alessandro Acquisti, Ralph Gross, and Fred Stutzman anticipate that such technology represents a leap forward in the convergence of offline and online data and an advancement of the "augmented reality" of complementary lives. With the use of publicly available Web 2.0 data, the researchers can potentially go from a snapshot to a Social Security number in a matter of minutes:
We use the term augmented reality in a slightly extended sense, to refer to the merging of online and offline data that new technologies make possible. If an individual's face in the street can be identified using a face recognizer and identified images from social network sites such as Facebook or LinkedIn, then it becomes possible not just to identify that individual, but also to infer additional, and more sensitive, information about her, once her name has been (probabilistically) inferred.
In our third experiment, as a proof-of-concept, we predicted the interests and Social Security numbers of some of the participants in the second experiment. We did so by combining face recognition with the algorithms we developed in 2009 to predict SSNs from public data. SSNs were nothing more than one example of what is possible to predict about a person: conceptually, the goal of Experiment 3 was to show that it is possible to start from an anonymous face in the street, and end up with very sensitive information about that person, in a process of data "accretion." In the context of our experiment, it is this blending of online and offline data - made possible by the convergence of face recognition, social networks, data mining, and cloud computing - that we refer to as augmented reality.
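Read as an engineering description, the "accretion" the researchers describe is a chain of lookups in which each stage attaches more data to the output of the previous one. The toy sketch below is purely illustrative and uses hypothetical placeholder functions (match_face, enrich_from_profile, infer_sensitive); it computes nothing sensitive and only shows the shape of the pipeline, not the actual algorithms from the Carnegie Mellon experiments.

```python
# Hypothetical illustration of data "accretion": each stage adds fields to a
# growing dossier. None of these functions reflect the researchers' methods.
from dataclasses import dataclass, field


@dataclass
class Dossier:
    """Accumulates whatever each stage of the chain can attach."""
    face_id: str
    attributes: dict = field(default_factory=dict)


def match_face(face_id: str) -> Dossier:
    # Stage 1: probabilistic name match against identified social-network photos.
    return Dossier(face_id, {"name": "jane doe", "match_confidence": 0.83})


def enrich_from_profile(d: Dossier) -> Dossier:
    # Stage 2: attach public profile fields once a name has been inferred.
    d.attributes.update({"hometown": "example town", "birth_year": 1988})
    return d


def infer_sensitive(d: Dossier) -> Dossier:
    # Stage 3: statistical inference over the accumulated public data (the
    # paper's example is its 2009 SSN-prediction work); deliberately left as
    # a placeholder here.
    d.attributes["predicted_ssn"] = "not computed in this sketch"
    return d


dossier = infer_sensitive(enrich_from_profile(match_face("face_0042")))
print(dossier)
```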
Naturally, the development of such software inspires Orwellian concerns. Jason Mick at DailyTech notes that PittPatt started as a Carnegie Mellon University research project, which was spun off into a company after 9/11. "At the time, U.S. intelligence was obsessed with using advanced facial recognition to identify terrorists," writes Mick. "So the Defense Advanced Research Projects Agency (DARPA) poured millions into PittPatt." Google purchased the company in July, and the potential for such intrusive technology to be used against law-abiding citizens remains cause for concern.
While private organizations may vie for a piece of PittPatt's proprietary technology for marketing or advertising purposes, the idea that such technology could be put to criminal, fraudulent, or extralegal ends by a tech-savvy member of the public is as alarming as the potential for governmental abuse. England saw this in the wake of the rioting, looting, and arson that swept across the country, when a Google group of private citizens called London Riots Facial Recognition emerged with the aim of using publicly available records and facial recognition software to identify rioters -- a form of digital vigilantism. The group eventually abandoned its efforts when its experimental app, based on the much-maligned photo-tagging facial recognition software Face.com, yielded disappointing results. "Bear in mind the amount of time and money that people like Facebook, Google, and governments have put into work on facial recognition compared to a few guys playing around with some code," the group's organizer told Kashmir Hill at Forbes. "Without serious time and money we would never be able to come up with a decent facial recognition system."
The research team at Carnegie Mellon understands the potential problems posed by this convergence of facial recognition technology and the vast Web of publicly available information. Alessandro Acquisti told Steve Hann at MarketWatch after a demonstration that the prospect of selling his new app or making it available to the public "horrifies" him. And while there are certainly limits to what software like PittPatt can distill from the cloud, the gap between life offline and life in the cloud narrows further with each successive breakthrough:
So far, however, these end-user Web 2.0 applications are limited in scope: They are constrained by, and within, the boundaries of the service in which they are deployed. Our focus, however, was on examining whether the convergence of publicly available Web 2.0 data, cheap cloud computing, data mining, and off-the-shelf face recognition is bringing us closer to a world where anyone may run face recognition on anyone else, online and offline - and then infer additional, sensitive data about the target subject, starting merely from one anonymous piece of information about her: the face.
I'm reminded in particular of this quote from Google's then-CEO Eric Schmidt during a 2009 CNBC special report on the company:
I think judgment matters. If you have something that you don't want anyone to know, maybe you shouldn't be doing it in the first place. If you really need that kind of privacy, the reality is that search engines -- including Google -- do retain this information for some time and it's important, for example, that we are all subject in the United States to the Patriot Act and it is possible that all that information could be made available to the authorities.
The relevant point here is not Schmidt's thought on behavior and choice but the fact that, no matter what you choose to do or not do, your life exists in the cloud, indexed by Google, in the background of a photo album on Facebook, and across thousands of spammy directories that somehow know where you live and where you went to high school. These little bits of information exist like digital detritus. With software like PittPatt that can glean vast amounts of cloud-based data when prompted with a single photo, your digital life is becoming inseparable from your analog one. You may be able to change your name or scrub your social networking profiles to throw off the trail of digital footprints you've inadvertently scattered across the Internet, but you can't change your face. And the cloud never forgets a face.
Image: kentkb/Flickr