Future Tense

Facial Recognition Technology Isn’t Good Just Because It’s Used to Arrest Neo-Nazis

Just because we can use facial recognition technology to identify members of the Capitol building mob does not mean we should. Win McNamee/Getty Images

In a recent New Yorker article about the Capitol siege, Ronan Farrow described how investigators used a bevy of online data and facial recognition technology to confirm the identity of Larry Rendall Brock Jr., an Air Force Academy graduate and combat veteran from Texas. Brock was photographed inside the Capitol carrying zip ties, presumably to be used to restrain someone. (He claimed to Farrow that he merely picked them up off the floor and forgot about them. Brock was arrested Sunday and charged with two counts.)

Even as they stormed the Capitol, many rioters stopped to pose for photos and give excited interviews on livestream. Each photo uploaded, message posted, and stream shared created a torrent of data for police, researchers, activists, and journalists to archive and analyze. This practice, known as open-source intelligence, or OSINT for short, is a relatively recent development in security work. Organizations like Bellingcat carry out detailed digital investigations using data available through the web and other networked technologies. OSINT methods are useful across most professional sectors, but especially law enforcement and journalism. Biometric data, such as voice prints and images of faces, matched with social media histories, location data, and purchase records, can be used to confirm the identity and location of ordinary people. It is truly astounding how much open data is available on any given person (including you).

Watching the Capitol rioters get arrested may be satisfying. But the role of facial recognition technology here is alarming, given the risks of false identification and the technology's inherently biased design. Moments of crisis are often used to expand the reach of surveillance technologies. Many who consider the use of facial recognition technology ethically wrong in the context of policing take a different stance when it's in the hands of researchers and journalists trying to identify neo-Nazis and insurrectionists. That shift could end up further entrenching facial recognition technology at a time when we should be working to ban it.

During a Twitter discussion about this topic that we took part in with A.I. researchers and advocates, it was clear that some still hope there is a way forward for facial recognition technology, while those who have looked deeply at the values underlying it see it as deeply flawed, racist, and a debasement of human rights. There was also a persistent idea that the researchers using facial recognition technology to hunt for rioters would consistently be on the right side of things, but we know that not to be the case: We have seen examples of software that purports to identify "gayface," assess gender, even detect Uyghurs.

Among the experts who favor this use of the technology, there is a sense that it's an arms race, and that the only answer is to put these tools in the hands of regular folks, too. This idea was most obviously on display when a technologist in Portland used facial recognition technology to identify police officers who acted violently at Black Lives Matter protests. But turning this technology against the state (and against right-wing insurrectionists) is a temporary win, with potentially greater societal costs.

Before Wednesday's Capitol siege, researchers and advocates had urged Congress to ban facial recognition technology entirely, due to its uptake by police forces and government agencies and its integration into mundane places like airports and traffic lights. They haven't had much luck nationally. More recently, several cities have taken up the charge and barred its use locally, but that does little to slow its development within corporations and universities.

Many major tech companies have put a temporary moratorium on some uses of facial recognition tech, and some have even called for legislation (although a cynical view is that this is a tactic for companies to influence the legislative process). That is not a bad thing. But we need to see much more. In response to the groundbreaking research project Gender Shades by Joy Buolamwini and Timnit Gebru, which showed how poorly facial recognition technology performs on darker-skinned faces, particularly those of women, companies largely vowed to amass larger and more diverse datasets. The companies have also largely failed to make public ethical commitments around A.I. and bias, something that Mutale Nkonde, founding director of AI for the People, called for in an op-ed in the Harvard Business Review.

Some have suggested that because facial recognition technology is both useful and profitable, there will always be incentives to build it, even if it is ethically fraught. For example, reporting on Clearview AI, which sells such identification software, suggests that law enforcement use of the technology increased 26 percent after Wednesday's events at the Capitol. This increased adoption comes after last year's Huffington Post investigation that linked the development of Clearview AI to a dark network of far-right activists, tech moguls, and political operatives.

It is true that banning facial recognition technology in the U.S. altogether, while a positive development, would not prevent other countries from designing and deploying it. That's why, in addition to outlawing it in the U.S., we need an international coalition to establish that this technology violates human rights. Many have already started this crucial work. Big-tent groups like Data for Black Lives critically examine how facial recognition technology is deployed, especially in the service of policing. Nasma Ahmed, director of Canada's Digital Justice Lab, and Sarah Aoun, the chief technologist at the Open Technology Fund, have led international campaigns bridging technologists and civil society. AI Now conducted an international policy analysis surveying biometric data regulation and found that whether governments choose to create laws largely depends on how they understand individual privacy, rather than on broader public harms. Sasha Costanza-Chock has outlined a plan for design justice, in which communities are part of the process of technological innovation.

Despite this work, the hype around facial recognition technology lives on. For example, during the melee at the Capitol, the Washington Times published a false story, widely shared on social media, claiming that antifa activists were responsible for the breach and had been identified by facial recognition software. (The paper later issued a correction.) Here, the invocation of facial recognition technology is a rhetorical trick to make the claim more believable. The story was later repeated by numerous right-wing politicians, including Rep. Matt Gaetz, and by influencers on social media, leaning on people's faith in the technology to bolster a patently false accusation. The fact that the technology is flawed matters less than that people believe in it and use it to support their chosen narrative.

Technologies often go through cycles of hope and hype, in which the promise of a new technology to solve difficult problems is matched with excited marketing pitches that oversell its capabilities. The truth is that much of the identification of the MAGA rioters came not from facial recognition technology itself but from the bevy of other data sources available. It helps when the rioters openly organize themselves on Facebook and plaster their images across Instagram.

It is a shame that these points need to be litigated all over again, but researchers, technologists, and advocates will continue to make them until everyone understands why facial recognition technology is a menace to human dignity.

Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.