How Worried Should You Be as Brands, Governments and Law Enforcement Embrace Facial Recognition?

Your identity is more at risk than ever


In 2017, nearly 17 million Americans had their identities stolen. If they followed the Federal Trade Commission’s recommended 40-page recovery process, they had to contact the companies where the fraud occurred; place fraud alerts with credit bureaus; review their credit reports; report the identity theft to the FTC and their local police departments; close new accounts opened in their name; remove bogus charges; correct their credit reports; and add extended fraud alerts or credit freezes.

It’s a nightmare, right?

Now imagine it’s your face that needs to be recovered. With the use of facial recognition soaring, and few laws on the books regulating it, identity theft is poised to take on a whole new dimension. Even technology companies, notorious for prizing innovation above privacy concerns, are taking a beat. As Google CEO Sundar Pichai noted in an interview last month with The Washington Post, “I think tech has to realize it just can’t build it and then fix it.”

Think about it: You’ve been able to use your face as your passcode ever since the iPhone X came out in 2017. Then Google convinced more than 5 million Android users to share their faces to see which works of art they resemble, while a growing number of stadiums allow fans to use biometrics instead of tickets (though so far, only Seattle fans can use them to buy beer).

And those are just the instances of facial recognition that have been made public. There are plenty of other instances in which consumers may not be aware the technology is in use, or may not have been given the chance to consent.

Convenience, but at what cost?

The travel industry was among the first to embrace facial recognition, using it to expedite and enhance the customer experience.

Delta, for example, has been leveraging it to speed up boarding for long-haul flights but recently took things a step further, integrating facial recognition throughout its international terminal in Atlanta. That means passengers can use their faces to not only board, but also to check in, drop off bags and identify themselves at Transportation Security Administration checkpoints. And, upon returning from abroad, U.S. citizens can use their faces to go through customs. (The airline says it will bring this biometric experience to Detroit in 2019.)

Delta says facial recognition saves an average of two seconds per passenger at boarding, for a total of nine minutes on an aircraft like a 747. And, according to the airline, the technology has proven popular with travelers, with “nearly all 25,000 customers who travel through ATL Terminal F each week … choosing this optional process, with less than 2 percent opting out.”

Also recently, Hertz announced it is bringing facial recognition to car rentals—also in Atlanta to start—with the goal of “[getting] travelers through the exit gate and on the road in 30 seconds or less.”

A rep for Delta says the airline notifies passengers that they can opt out, via airport signage, as well as on-screen messages on self-service kiosks, and in emails. However, Anna Yaffee, a physician in Atlanta, says she recently flew to Johannesburg and used facial recognition at the TSA security point and at the gate, but it didn’t seem optional.

The travel industry was among the first to embrace facial recognition.

“No one explained it, and so I was totally surprised when I went to board the plane and was getting my boarding pass out and the gate attendant was like, ‘It’s okay—we’ve got you already,’” she wrote in an email to Adweek.

Nevertheless, Yaffee’s assessment was positive: “Really streamlined the boarding process!”

And while Yaffee says she can see how the technology could make people uncomfortable, she notes we are recorded in many other ways in daily life and—to her, at least—the airport application doesn’t seem any creepier.

In addition to airlines, U.S. Customs and Border Protection has tested facial recognition with cruise lines, which a rep says has accelerated getting passengers off ships by as much as 40 percent—and also yielded commitments from “nearly every major cruise line” for even more biometrics and data-sharing pilots.

In the Department of Homeland Security’s most recent Privacy Impact Assessment for its Traveler Verification Service, it referenced a future phase in which CBP will receive facial images from cruise lines; TVS will use the photos to create templates and then use that information to “supplement a gallery of templates of known identities for the traveler.”

While the spokesperson says facial identification by CBP is limited to passport photos or other photos previously used for travel—and says the agency is developing requirements to ensure its partners delete the images—the Privacy Impact Assessment notes it “does not govern how commercial partners use biometric data that they collect separately pursuant to their own agreements with their customers.”

Translation: There are no rules for whatever else CBP’s partners capture, and the agency is not responsible for it.

Pre-crime detection

Retailers like Target and Lowe’s have also tested facial recognition, with the stated aim of enhancing security. A representative for Target says the company posted signs at the entrances of affected stores to notify consumers. Four years ago, Lowe’s ran a three-month test of the technology at three stores, says a rep for the home-improvement retailer, and referenced the technology in its privacy statement. (Lowe’s ultimately decided against using facial recognition.) But that’s not to say all shoppers noticed the signs or read the statement in time to make an informed decision about whether to shop at those locations.

Complicating things further, facial-recognition technology isn’t always accurate.

Case in point: The ACLU tested Amazon’s facial-recognition technology, Rekognition, using photos of members of Congress, and it incorrectly matched 28 of them with individuals in mugshots. What’s more, the ACLU says, the false matches were disproportionately people of color. (Amazon says the ACLU’s test could have been improved by raising the confidence threshold, or the minimum similarity score required before the system reports a match, from the default setting of 80 percent to 95 percent or higher.)
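For readers curious what that threshold dispute actually means in practice: systems like Rekognition return candidate matches with a similarity score, and the caller decides the cutoff. A rough sketch (the names and scores below are invented for illustration, not real Rekognition output):

```python
# Hypothetical candidate matches returned by a face-search system.
# Each carries a similarity score from 0 to 100.
candidate_matches = [
    {"name": "mugshot_0412", "similarity": 81.3},
    {"name": "mugshot_1187", "similarity": 88.6},
    {"name": "mugshot_2045", "similarity": 96.2},
]

def matches_above(candidates, threshold):
    """Keep only candidates whose similarity meets the threshold."""
    return [c for c in candidates if c["similarity"] >= threshold]

# At the 80 percent default, all three hypothetical candidates count
# as matches; at the stricter 95 percent Amazon recommends for law
# enforcement, only one does.
print(len(matches_above(candidate_matches, 80)))  # 3
print(len(matches_above(candidate_matches, 95)))  # 1
```

The point of contention, in other words, is not the underlying model but where the operator sets the cutoff, and nothing in the software forces a law enforcement customer to use the stricter setting.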

In the ACLU’s test, Amazon’s facial-recognition tool falsely matched 28 members of Congress, in some cases identifying them as people who had been arrested for a crime.

“One of the big issues with facial recognition is, unlike fingerprinting, it can be used at a distance without your knowledge and can be used for mass tracking in a way that fingerprinting does not lend itself to,” explains Jay Stanley, senior policy analyst at the ACLU. “And it’s a highly sensitive, powerful biometric and people ought to have a right to know when it’s being used, but we’re seeing private and public institutions using it without asking.”

They probably have your face already anyway

If the thought of retailers, cruise lines and the government retaining dossiers on you and your face is disconcerting, consider this: If you have a driver’s license, the odds are good you’re already in a facial-recognition database accessible by law enforcement.

Per Clare Garvie, senior associate at the Center on Privacy and Technology at Georgetown University Law Center, at least 133.5 million American adults in 31 states—or 54.4 percent of the population—are represented in one of these databases. And the number is higher when you factor in U.S. passport and visa photos, which are also searchable by the FBI.

“Facial recognition is incredibly common—more common than most people realize, especially in the government space,” Garvie says. “So think of a facial-photo database on file with a government agency—chances are good now it’s [also] in a facial-recognition database.”

Stanley says law enforcement has often purchased facial-recognition technology with surveillance grants from the federal government, so it doesn’t have to seek approval for funding or its use on the local level.

Google’s Arts & Culture app matches a selfie with a work of art using facial recognition. “Guys, this app is DEAD ON,” tweeted actress Kristen Bell.

In fact, in its 2016 report, the Privacy Center found that of 52 U.S. law enforcement agencies, only four had a publicly available use policy—and only one had received legislative approval for its policy.

Look no further than Seattle, which began dismantling its surveillance network in 2018. The system was quietly installed in 2013 thanks to $3.6 million from the DHS—and then deactivated 10 months later, after public outcry, in order to await public debate and city council approval, which never materialized.

And, per a case study on data-analysis firm Palantir’s website, the Salt Lake City Police Department and the Urban Area Security Initiative (UASI)—a DHS grant program that provides funding to prepare for and prevent terrorism—were in the process of linking police record systems as far back as 2012. The data included 40,000 mug shots, 117,000 arrest records and 520,000 case reports, as well as suspicious-activity accounts and airport details.

In an email, Palantir says it has not done any facial-recognition work with the Salt Lake City police or the LAPD. In a subsequent email, a rep called mug shots “media that attach to records of people” and said just because there is an image “does not mean there is facial recognition applied to that image.”

But it doesn’t mean it isn’t, either—and therein lies the rub.

“It’s very much a Wild West—we estimate a quarter of all law enforcement have access to face recognition and, as it stands, the system can be used in any way law enforcement so decides,” Garvie says.

The 2016 Privacy Center report found the LAPD, as well as the police departments in Chicago and Dallas, were running real-time face recognition off of street cameras, had bought the technology or had expressed written interest in it. The largest police force in the U.S., the NYPD, denied the Privacy Center’s records request; the center has since filed a lawsuit in New York Supreme Court seeking the release of information about the NYPD’s use of facial-recognition technology. The case is pending.

Past the point of no return?

Meanwhile, the use of biometrics is only accelerating.

Consider Amazon, which has come under fire for selling Rekognition to law enforcement agencies and for pitching Immigration and Customs Enforcement. (Amazon did not respond to a request for comment about these deployments. However, in an earlier statement, a rep said customers have used Rekognition to benefit society, such as preventing human trafficking and reuniting missing children with their families—and it is “almost exclusively” used to narrow a field of images prior to human review.)

Even more recently, the ACLU flagged a patent filing from Amazon detailing plans to combine Rekognition with its Ring smart doorbell, which would allow law enforcement to match the faces of passersby against photos of suspicious persons.

At the same time, CBP is developing a biometric system for the U.S. border in part by testing cameras on individuals in moving vehicles as they enter at the Anzalduas International Bridge Port of Entry in Texas and on pedestrians entering at San Luis and Nogales, Ariz. These images are then compared with photos in government records.

In an announcement, CBP said the vehicle camera is deployed in clearly marked lanes, which will enable drivers to opt out. Those on foot—and who aren’t U.S. citizens—don’t have that choice. Cameras at the processing booths automatically take photos as travelers approach—and photos of foreign nationals will be “stored in a secure DHS system.” (CBP says images of U.S. citizens are not stored and they can choose an alternate screening procedure.)

But how secure is it, really? And what happens if there is a breach?

“We can change a stolen password. We can’t change a stolen biometric,” notes Garvie.

Calling for ‘sensible regulation’

There are no federal laws governing the use of images captured by retailers—and state laws are limited. And while the FTC has published best practices for what it calls “common uses” of facial-recognition technology, it has no guide for face recovery.

“This scenario is why advocates and increasingly companies are asking for rules around facial recognition,” says Joseph Jerome, policy counsel at the Center for Democracy and Technology. “Best practices exist … but aside from public discomfort with the use of this technology, no rules are really in place to ensure retailers use facial recognition responsibly.”

Illinois has the strongest biometrics law in the nation, the Biometric Information Privacy Act, which requires informed consent before facial recognition can be used. That’s why there have been so many biometrics lawsuits in the state, including the pending Rosenbach v. Six Flags case in the Illinois Supreme Court, in which a mother is suing the theme park because it took her 14-year-old son’s fingerprint for an annual pass without a written release.

But, Jerome says, two other states with biometrics laws, Texas and Washington, make exceptions for security purposes, so disclosures aren’t required when the technology is used to investigate criminal activity. Washington also excludes photographs from its definition of biometrics, which means facial recognition is exempt, Garvie adds.

“Washington even provides companies with a ton of flexibility for providing notice for commercial purposes, like tracking people from store to store to understand what they like or might be interested in,” Jerome notes.

And Woodrow Hartzog, professor of law and computer science at Northeastern University, says only Illinois restricts the sale of biometric identifiers outright.

“Absent better rules for facial recognition, this tech will continue to entrench and surveil more and more of our everyday lives, putting us all at great risk,” Hartzog says.

The ACLU has called for “sensible regulation” to ensure privacy and fairness—and for a moratorium on the use of facial recognition by law enforcement.

The Privacy Center has also asked Congress and state legislatures to come up with common-sense regulation, including a reasonable suspicion of criminal conduct prior to a face-recognition search; limiting after-the-fact searches to felonies; making mug shots the default photo databases instead of driver’s license and ID photos; requiring a court order for searches of license and ID photos; and DMV notification when such searches are carried out.

“The most long-lasting and most appropriate way to regulate the use of face recognition by law enforcement is by passing legislation,” Garvie says. “The federal government is responsible for funding many state and local programs, which are set up through DOJ or DHS grants, and … [it] is empowered to place riders or restrictions.”

After a brouhaha in which a Microsoft blog post about a contract with ICE mentioned the potential for facial recognition, Microsoft president Brad Smith wrote a post denying the company is working with ICE in that capacity and calling for government regulation. In December, he followed with six principles he said would address concerns about facial recognition, which Microsoft will implement by the end of the first quarter of 2019.

The Algorithmic Justice League, a project from the MIT Media Lab that seeks to highlight algorithmic bias and develop best practices, has created a similar Safe Face Pledge, which asks signatories to commit to valuing human rights, addressing bias, facilitating transparency and embedding the pledge into their business practices.

And the Center for Democracy and Technology has published its own draft federal privacy bill to help move the conversation about protections for personal information forward in Congress.

For its part, Google has opted not to use “general-purpose facial recognition APIs” for Google Cloud “before working through important technology and policy questions,” Google svp Kent Walker wrote in a blog post. He noted that “like many technologies with multiple uses, facial recognition merits careful consideration to ensure its use is aligned with our principles and values, and avoids abuse and harmful outcomes.”

But, ultimately, it comes down to our lawmakers. Perhaps it’s time for Mitch McConnell and Nancy Pelosi to hold a bipartisan screening of Minority Report for the 116th Congress.

This story first appeared in the Jan. 7, 2019, issue of Adweek magazine.

Lisa Lacy (@lisalacy) is a senior writer at Adweek, where she focuses on retail and the growing reach of Amazon.