These days, with facial recognition technology, you’ve got a face that can launch a thousand applications, so to speak.
Sure, you may love the ease of opening your phone just by facing it instead of tapping in a code. But how do you feel about having your mug scanned to identify you as you drive across a bridge, board an airplane or enter a Taylor Swift concert, where it might be used to confirm you're not a stalker?
The divisive debate over the use – or potential misuse – of facial recognition has raged for years, and it is heating up again.
Just last week the face-off between supporters of facial recognition and the privacy advocates who want to curtail or legislate its use bubbled to the surface, following a vote by government officials in San Francisco to ban use of the technology by local law enforcement. San Francisco becomes the first major municipality in the U.S. to take such an action, though likely not the last.
In March, a bipartisan bill was introduced by Senators Brian Schatz (D-Hawaii) and Roy Blunt (R-Mo.) to strengthen consumer protections by prohibiting companies that use facial recognition technology from collecting and re-sharing data for identifying or tracking consumers without their consent.
Facial recognition is spreading to every corner where you live, work, shop and travel. Cameras are trained on your face at the airport, in store aisles, even when you’re driving.
Your face could someday be the key to unlock your car.
All these may leave you wondering: Who has access to my facial data? Where is this data stored and for how long? Is the use of facial recognition about safety, surveillance, and convenience, or a way for advertisers or the government to track me?
And then there’s the question of what happens if there’s a breach and your facial data is combined with other personalized financial data that may have been collected on you.
“You can’t reset your face like you reset a password,” says Matt Cagle, a technology and civil liberties attorney with the ACLU in San Francisco. “The stakes of a data breach with biometric information are really quite high.”
Is facial recognition accurate enough?
Critics also point to false positives, or people being misidentified, particularly among minorities.
Findings from an MIT study, for instance, claim that Amazon's Rekognition system performed poorly compared with systems from Microsoft and IBM at identifying the gender of women and of darker-skinned people. Michael Punke, Amazon Web Services' global vice president for public policy, has disputed the findings, even as he has called for transparency.
Similarly, after reviewing an internal email from New York’s Metropolitan Transportation Authority, the Wall Street Journal reported in April that facial recognition flunked an early test to identify possible criminals at the busy Robert F. Kennedy Bridge.
So why the false positives? One criticism is bias – not that the machines are inherently set against anyone, but that the people who are, in essence, teaching the programs to identify features aren’t providing them with a diverse sample in the first place.
“Even if the technology were perfectly accurate, it still poses a threat to public safety and civil rights,” Cagle insists. “Imagine stepping outside your door and walking down the street and the government knowing who you are, where you’re going and even the expressions on your face, and it would be able to do this without lifting a finger.”
But Peter Trepp, CEO of FaceFirst, a facial recognition company which works in loss prevention and fraud and whose clients include some of the nation’s largest retailers, says the use of the technology “is not nearly as scary as some people would have you believe.”
Though the technology is not yet perfect, accuracy rates as high as 99.98% reflect dramatic improvement through machine learning and neural networks, according to Pam Dixon, executive director of the World Privacy Forum.
“That is a stunning advancement,” she says, while acknowledging that biased results still exist, with women generally harder to identify than men.
A report issued last week by the Center on Privacy & Technology at Georgetown Law put the potentially flawed photos used in face recognition systems in stark terms: “Face recognition technology has improved immensely in the past two years alone, enabling rapid searches of larger databases and more reliable pairings in testing environments. It doesn’t matter how good the machine is if it is still being fed the wrong figures – the wrong answers are still likely to come out.”
Testing facial recognition in stores
Businesses are in various stages of testing and implementing the technology.
Kroger is testing facial recognition as part of a pilot program in two stores, one outside Cincinnati, another near Seattle. The video cameras can approximate a shopper’s age and gender, but the information is kept anonymous and the data is not stored, says Erin Rolfes, corporate affairs manager at Kroger’s Cincinnati/Dayton division.
Kroger isn’t revealing much about the purpose of the tests. “As with any pilot, we’re using this as an opportunity to learn,” Rolfes says.
Walgreens is testing digital cooler doors with built-in screens and cameras, made by a company called Cooler Screens, in six stores. The retailer can target ads on those screens, which activate when a shopper appears in front of them.
A Walgreens spokesperson indicated that no biometric data is captured on a person’s age, gender or height, and at the moment only a motion sensor is in use. That said, “there may be future camera enhancements to improve the customer shopping experience, but all such enhancements will be carefully reviewed and considered in light of any consumer privacy concerns.”
Trepp of FaceFirst, who also authored a book called “The New Rules of Consumer Privacy,” estimates that his company has saved retailers many hundreds of millions of dollars in shrinkage over the past few years. “We catch bad guys and avert crime every day of the week,” he says.
But Trepp also sees potential consumer benefits. Recognized shoppers might get special offer coupons or be able to check out faster.
“Think about the connection to the online shopping world,” Trepp says. “You log in, they know who you are, they know your shopping history, your ZIP Code, your gender, and they can make recommendations about things you might want to buy. The brick-and-mortar world wants that as well. They want to know you when you walk through the door and engage you and make that a better experience.”
The need for disclosure
Much of the hubbub around facial recognition surrounds the disclosure of its use.
Facebook uses facial recognition, in part, to suggest that others tag you or to let you know when you might be in photos or videos that haven’t been tagged. The company notes that it doesn’t have face recognition features that tell strangers who you are. And Facebook says it may also use the technology to help detect when an account may be pretending to be you.
You are supposed to be able to turn off facial recognition inside Facebook settings, though Consumer Reports said it found that eight of 31 Facebook accounts it examined were missing the setting that lets you do that.
Facebook emailed a statement saying that “everyone on Facebook can turn face recognition on or off, either through the standalone face recognition setting or through the Tag Suggestions setting.” Facebook acknowledged that some people weren’t seeing this option, so to avoid confusion, it is moving to a single setting.
NBC News recently accused facial-recognition company Ever AI of not telling users of its photo storage app, Ever, that the photos they shared were being used to train the company’s facial recognition system, “and that Ever then offers to sell that technology to private companies, law enforcement and the military.”
Ever AI CEO Doug Aley insists otherwise.
“We’ve taken proactive actions for a long time to be transparent with our users and to let them know that the Ever app uses face recognition to organize their photos, just like many other photo storage apps do,” Aley wrote in an email to USA TODAY. “There is nothing hidden about the fact that Ever uses face recognition to organize photos and create albums for our users, and we’ve made sure to give users the ability to use or not use that feature as they wish.”
Aley added that “no user information of any kind is provided from our Ever app to our enterprise face recognition customers.”
Facial recognition for travelers
Since June 2017, JetBlue says it has matched 125,000 boarding passengers on 1,400 flights through the technology. It began as a pilot with U.S. Customs and Border Protection but is now, according to Caryl Spoden, the head of the airline’s customer experience programs, “an essential part of our daily operations.”
Self-boarding through facial recognition is available on select flights and completely voluntary.
Photos captured at the gate are sent directly to CBP, which cross-references them against a gallery of passport photos of those on the flight manifest. If a match is found, CBP sends back a positive result that is used to board the customer. If not, the customer’s passport is inspected manually and the boarding pass is scanned.
CBP hopes to use such systems for more than 97% of commercial air travelers departing the U.S. within the next four years. Besides JetBlue, it has collaborated with Air New Zealand, British Airways, Delta and Lufthansa, and at airports in Orlando, Florida; Los Angeles; and San Jose, California.
Legislating facial recognition
Meanwhile, not everyone is in favor of looming legislation: The nonpartisan Information Technology and Innovation Foundation rejects the Schatz-Blunt bill on the grounds that facial recognition applications are still in their infancy and thus the proposed legislation is too much, too soon.
“It’s really not about the technology, it’s about putting in place rules that govern its use and limit its abuse,” says ITIF president Robert Atkinson. “There’s no need to rush into this.”
But in calling for the government to start regulating facial recognition technology late last year, Microsoft president Brad Smith blogged, “The facial recognition genie, so to speak, is just emerging from the bottle. Unless we act, we risk waking up five years from now to find that facial recognition services have spread in ways that exacerbate societal issues.”
“The moment you step into a shopping mall,” Smith writes, “it’s possible not only to be photographed but to be recognized by a computer wherever one goes… Our point is not that the law should deprive commercial establishments of this new technology… But people deserve to know when this type of technology is being used, so they can ask questions and exercise some choice in the matter if they wish.”
One poll suggests that Americans are willing to trade some privacy for safety. An online survey by the Center for Data Innovation found that 47.8% of adults in the U.S. agreed that musicians like Swift should be allowed to use facial recognition technology to identify known stalkers at their concerts. Only 20.6% disagreed.
About a year ago Rolling Stone reported that a facial recognition camera inside a kiosk during a Swift concert at the Rose Bowl took pictures of attendees that were “transferred to a Nashville, Tennessee, ‘command post,’ where they were cross-referenced with a database of hundreds of the pop star’s known stalkers.”
Using facial recognition at home
Most consumers don’t seem to have an issue when facial recognition is an opt-in feature on a device they own, as Apple has shown with the Face ID feature used to unlock iPhones and authenticate App Store and iTunes purchases. Face ID data doesn’t leave your device and is not backed up to iCloud or anywhere else.
Google showcased an opt-in Face Match feature on its upcoming Nest Hub Max smart display with a camera that can recognize and distinguish you from up to five other people living in the same space. That way, the Google Assistant can surface personalized messages that are relevant to you rather than your kin.
The face models it builds and references to “recognize” you are encrypted, processed and remain on the device.
As an additional precaution, users can disable the Nest’s camera. Google Nest vice president Rishi Chandra says the company plans to put a “privacy card” in the box to educate customers about the privacy controls built into the product.
There are really only two use cases for facial recognition, Dixon says: “either comparing one face with one face, which is used for all the phone authentication and whatnot, or you’re comparing one face with a database of many faces. That’s the use that you have to worry about a lot.”
Where do you think facial recognition technology is or isn’t acceptable? Send email to firstname.lastname@example.org or tweet @edbaig on Twitter