Facebook said Tuesday it will end its use of facial recognition software and delete facial data on more than a billion people, a sudden reversal for one of the Internet’s biggest face-scanning systems that could reinvigorate scrutiny of the software’s expanding prevalence around the world.
The social media giant, which has used the software to automatically tag people by name in photos since 2010, said in a blog post that it decided to drop the technology after carefully considering both its future promise and potential risks for surveillance and privacy.
“The many specific instances where facial recognition can be helpful need to be weighed against growing concerns about the use of this technology as a whole,” wrote Jerome Pesenti, the company’s vice president for artificial intelligence.
The change marks a dramatic shift for a controversial technology that the social network did more than anyone to normalize. In the more than a decade since Facebook showcased its usefulness, face-scanning systems have expanded widely across schools, airports, police investigations and worker-monitoring software.
Facebook’s reversal could further fuel skepticism about the largely unregulated technology and concerns about its potential for misuse. But some privacy experts suspect that Facebook’s promotion of the technology has already left an indelible imprint on the Internet. Companies such as Google and Apple use similar facial recognition features for photo tagging, though typically only in personal albums not available for public view.
Facebook “introduced this technology in a way that highlighted its utility while downplaying the negative downstream effects of making it ubiquitously available,” said Liz O’Sullivan, the chief executive of Parity, an algorithmic assessment start-up. “They had access to this unique and massive data-collection system — not just of people, but how people change over time. … We always said the world’s best facial recognition system is undoubtedly in the hands of Facebook.”
The move also underscores how Facebook, which for years embraced a self-proclaimed “move fast and break things” ethos, has historically pushed forward with products that drew outcries from privacy experts and the public.
The social network, which last week changed the name of its parent company to Meta, is in the midst of a crisis over its public reputation after a whistleblower came forward with tens of thousands of pages of research documenting the company’s knowledge of extensive societal harms caused by its service.
The company’s leadership has appeared eager in recent weeks to show it takes potential negative consequences from its products into account. The company recently said, for example, that it was pausing the development of Instagram for children in response to allegations that the company’s internal research found that the product caused harm to the body image of some teen girls.
The company said last week that it will also develop a new suite of hardware products, including virtual reality, in concert with regulators and taking into account privacy and safety from “day one.” That could also include a smartwatch that can take biometric readings, The Washington Post and others have previously reported, potentially giving the company access to even more sensitive data.
In citing the technology’s legal uncertainty in a country where regulators have yet to provide a “clear set of rules governing its use,” Facebook follows in the footsteps of other tech giants that have voiced similar concerns about the software.
Amazon cited similar reasons in May, when it indefinitely extended its ban on police use of its own facial recognition software, saying Congress had yet to implement appropriate laws. IBM and Microsoft also stopped selling their own facial recognition technology to police last year.
Pesenti said facial recognition software provided accessibility benefits for the visually impaired and noted that more than a third of Facebook users had chosen to use it. Until 2019, users were automatically opted into the service.
Facebook has faced questions in recent months about whether it would fold the technology into upcoming products, such as a pair of camera glasses the company is making with Ray-Ban or its broader shift toward the “metaverse.”
Company executives have said that feature is not included in the company’s existing glasses. But Pesenti said in the blog post that, while the company is ending its existing Face Recognition system, it will continue to explore “potential future applications of technologies like this.”
Facebook’s facial recognition algorithms turned people’s photos into facial “templates” — mathematical representations of a person’s likeness that the software could compare to millions of other photos in an instant, experts said. But deleting those templates will not prevent the images from being used by companies such as Clearview AI, which pulled its photos without permission to build a vast facial recognition search tool that the company sells to police.
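In simplified terms, a facial “template” of the kind experts describe is a numeric vector, and matching amounts to measuring how close two such vectors are. The sketch below is a minimal illustration of that idea in Python; the four-number templates and the 0.95 threshold are invented for the example (real systems derive vectors with hundreds of dimensions from a neural network and tune thresholds carefully), and this is not Facebook's actual algorithm.

```python
import math

def cosine_similarity(a, b):
    """Return the cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Two hypothetical "templates" -- toy stand-ins for the mathematical
# representations of a face that a recognition system would produce.
template_known = [0.12, 0.80, 0.35, 0.41]
template_query = [0.10, 0.78, 0.37, 0.40]

THRESHOLD = 0.95  # illustrative cutoff; real thresholds are system-specific
is_match = cosine_similarity(template_known, template_query) > THRESHOLD
```

Because the comparison is just vector arithmetic, it can be run against millions of stored templates very quickly, which is why such systems scale so easily once the templates exist.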
Deleting the templates will also not prevent other companies from running saved Facebook photos through their own facial recognition software. Companies such as PimEyes now allow anyone to scan for faces across billions of photos from around the Web.
Facebook’s reversal stands at odds with the federal government, which has moved aggressively to expand facial recognition use for tracking its own employees, criminal suspects or Americans at large. Ten federal agencies, including the Homeland Security and Justice departments, told government auditors this year that they intended to expand their face-scanning capabilities by 2023.
Members of Congress have proposed some federal regulations that would address facial recognition use by police and government authorities, though none has yet passed. Last month, the European Parliament called for a ban on police use of facial recognition in public places.
More than a dozen cities and states have enacted their own laws banning or restricting the technology’s use, including Boston and San Francisco, but they mostly relate to use by governments, not companies.
In Illinois, one of three states to ban companies from collecting facial and other “biometric” data without a person’s consent, Facebook agreed last year to pay $650 million to settle a class-action lawsuit alleging it had broken the law. That settlement came one year after Facebook agreed to settle separate Federal Trade Commission allegations claiming it had misled consumers about how third-party apps could access their data during the Cambridge Analytica scandal.
The social network’s introduction of facial recognition in 2010 gave the then-nascent technology its biggest debut yet on the global stage. The move was controversial at the time, because Facebook’s software automatically “tagged” people in photos, linking their online accounts and identities to images they may not have realized had been taken.
But company data scientists had discovered that notifying people they were tagged in photos was an excellent psychological tactic to lure people into engaging with the service, according to two people who engaged in the early conversations around the tech, who spoke on the condition of anonymity to discuss private matters.
At the time, Facebook’s leadership was obsessed with growing the amount of time that users spent on the platform and with reaching a billion users before going public, which happened in 2012. That same year the company purchased Instagram, and some early Instagram employees resisted adding Facebook’s photo-tagging to the app because they thought it was creepy and tacky, The Post previously reported. They were rebuffed because photo-tagging was so successful.
Early Facebook employees have said in interviews with The Post and other outlets that photo-tagging was one of the greatest “growth hacks” Facebook engineers had ever developed, because it was hard for users to resist notifications that they were showing up in other people’s pictures.
Unlike earlier facial recognition systems that relied on official photos from passports or jail mug shots, Facebook’s technology was supercharged by a sprawling and diverse set of facial images submitted by the users themselves.
Facial recognition technology has faced increasing resistance in recent years after researchers found that some algorithms performed more inaccurately for people with darker skin. The systems have been blamed for at least three wrongful arrests by U.S. police departments, all of which involved Black men.
Joy Buolamwini, an AI researcher and founder of the digital advocacy group Algorithmic Justice League who has documented racial biases in facial recognition software, tweeted Tuesday: “Legislative action is as necessary as ever to continue to fight for algorithmic justice. We need an even bigger surge of FacePurges.”
The technology has more generally conjured dystopian fears of devastating surveillance, because it can be used to identify people from afar without their knowledge or consent. The technology has been used by Chinese police to track the general public, including Uyghurs, the largely Muslim minority group that has been detained in mass “reeducation” camps.
Jake Laperruque, a senior policy counsel at Project On Government Oversight, a Washington watchdog group, said he believes Facebook’s about-face could reinvigorate calls for new legal guardrails and further shift the debate from companies to lawmakers.
“This marks another sea change on the tech and how it’s regarded. It’s not just one company in isolation now; there are a number of companies who did this mass data collection who now say the technology has gone too far over the line,” he said. “The fact is that facial recognition is everywhere now. And the only way to take it on is not through voluntary measures. It’s through laws.”
Heather Kelly contributed to this report.