The social network is scaling back facial recognition, but similar technology could show up in the metaverse.
Facebook says it will stop using facial recognition for photo-tagging. In a Monday blog post, Meta, the social network’s new parent company, announced that the platform will delete the facial templates of more than a billion people and shut off its facial recognition software, which uses an algorithm to identify people in photos they upload to Facebook. This decision represents a major step for the movement against facial recognition, which experts and activists have warned is plagued with bias and privacy problems.
But Meta’s announcement comes with a couple of big caveats. While Meta says that facial recognition isn’t a feature on Instagram and its Portal devices, the company’s new commitment doesn’t apply to its metaverse products, Meta spokesperson Jason Grosse told Recode. In fact, Meta is already exploring ways to incorporate biometrics into its emerging metaverse business, which aims to build a virtual, internet-based simulation where people can interact as avatars. Meta is also keeping DeepFace, the sophisticated algorithm that powers its photo-tagging facial recognition feature.
“We believe this technology has the potential to enable positive use cases in the future that maintain privacy, control, and transparency, and it’s an approach we’ll continue to explore as we consider how our future computing platforms and devices can best serve people’s needs,” Grosse told Recode. “For any potential future applications of technologies like this, we’ll continue to be public about intended use, how people can have control over these systems and their personal data, and how we’re living up to our responsible innovation framework.”
That facial recognition for photo-tagging is leaving Facebook, also known as the “big blue app,” is certainly significant. Facebook originally launched the tool in 2010 to make its photo-tagging feature more popular. The idea was that having an algorithm automatically suggest tags for the people in a photo would be easier than tagging them manually and, perhaps, would encourage more people to tag their friends. The software learns from the photos people post of themselves, which Facebook uses to create a unique facial template tied to each profile. The DeepFace artificial intelligence technology, which was developed from pictures uploaded by Facebook users, then matches those facial templates to faces in newly uploaded photos.
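To illustrate the general idea only (Meta has not published DeepFace's internals, so this is not its actual pipeline), template-based face matching usually comes down to comparing fixed-length face embeddings: each profile has a stored template vector, faces detected in a new photo are converted to vectors by an embedding model, and a tag is suggested when the similarity clears a threshold. The sketch below uses made-up numpy vectors in place of a real embedding model; the names, threshold, and vector size are all illustrative assumptions.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def suggest_tags(profile_templates: dict[str, np.ndarray],
                 photo_faces: list[np.ndarray],
                 threshold: float = 0.8) -> list[tuple[int, str, float]]:
    """For each face found in a photo, suggest the best-matching profile
    whose stored template clears the similarity threshold."""
    suggestions = []
    for i, face in enumerate(photo_faces):
        best_name, best_score = None, threshold
        for name, template in profile_templates.items():
            score = cosine_similarity(face, template)
            if score > best_score:
                best_name, best_score = name, score
        if best_name is not None:
            suggestions.append((i, best_name, best_score))
    return suggestions

# Stand-in data: in a real system these vectors would come from a face-embedding
# model run over a user's photos; here they are random placeholders.
rng = np.random.default_rng(0)
templates = {"alice": rng.normal(size=128), "bob": rng.normal(size=128)}

# A "new photo" containing a slightly perturbed copy of Alice's face and a stranger.
photo = [templates["alice"] + rng.normal(scale=0.1, size=128), rng.normal(size=128)]
print(suggest_tags(templates, photo))  # suggests tagging "alice" for the first face only
```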
Privacy experts raised concerns immediately after the feature launched. Since then, pivotal studies from researchers like Joy Buolamwini, Timnit Gebru, and Deb Raji have also shown that facial recognition can have baked-in racial and gender bias, and is notably less accurate for women with darker skin. In response to growing opposition to the technology, Facebook made the facial recognition feature opt-in in 2019. The social media network also agreed to pay a $650 million settlement last year after a lawsuit claimed the tagging tool violated Illinois’s Biometric Information Privacy Act.
It’s possible that defending this particular use of facial recognition technology has become too expensive for Facebook and that the social network has already gotten what it needs out of the tool. Meta hasn’t ruled out using DeepFace in the future, and companies including Google have already incorporated facial recognition into security cameras. Future virtual reality hardware could also collect lots of biometric data.
“Every time a person interacts with a VR environment like Facebook’s metaverse, they’re exposed to collection of their biometric data,” John Davisson, an attorney at the Electronic Privacy Information Center, told Recode. “Depending on how the system is built, that data could include eye movements, body tracking, facial scans, voiceprints, blood pressure, heart rate, details about the user’s environment, and much more. That’s a staggering amount of sensitive information in the hands of a company that’s shown over and over it can’t be trusted with our personal data.”
Several of Meta’s current projects show that the company has no plans to stop collecting data about people’s bodies. Meta is developing hyper-realistic avatars that people will operate as they travel through the metaverse, which requires tracking someone’s facial movements in real time so they can be recreated by their avatar. A new virtual reality headset that Meta plans to release next year will include sensors that track people’s eye and facial movements. The company also weighed incorporating facial recognition into its new Ray-Ban smart glasses, which allow the wearer to record their surroundings as they walk around. Meanwhile, Reality Labs, Meta’s hub for studying virtual and augmented reality, is conducting ongoing research into biometrics, according to postings on Facebook’s careers website.
In addition to Illinois’s biometric privacy law, there is a growing number of proposals at the local and federal levels that could rein in how private companies use facial recognition. Still, it’s not clear when regulators will come to a consensus on how to regulate this technology, and Meta wouldn’t point to any specific legislation that it supports. In the meantime, the company is welcoming the celebration over its new announcement. After all, it’s a convenient opportunity to emphasize something other than the recent leak of thousands of internal documents revealing that Facebook still isn’t capable of keeping its platform safe.