
A Detroit community college professor is fighting Silicon Valley’s surveillance machine. People are listening.

Chris Gilliard grew up with racist policing in Detroit. He sees a new form of oppression in the tech we use every day.

September 17, 2021 at 6:00 a.m. EDT
Chris Gilliard poses for a photo at his home in Dearborn, Mich., on July 30. The Macomb Community College teacher tries to keep his face off the Internet as much as possible. (Nic Antaya for The Washington Post)

In Ring, a doorbell video camera that can send footage to your phone and broadcast it to your neighbors, millions of American homeowners see an affordable way to gain peace of mind.

Chris Gilliard, a community college professor raised in an aggressively policed Detroit, sees something quite different: a tool of old-fashioned racial profiling dressed up in a sleek new package.

A Ring camera offers “an opportunity for people to broadcast who they think should and should not be in a particular neighborhood,” Gilliard said. “And often what that means is folks saying that certain Black people, or Black people in general, should not be in their neighborhood.”

Far from academia’s elite institutions, Gilliard, 51, has emerged as an influential thinker on the relationship between trendy tech tools, privacy and race. From “digital redlining” to “luxury surveillance,” he has helped coin concepts that are reframing the debate around technology’s impacts and awakening recognition that seemingly apolitical products can harm marginalized groups.


While some scholars confine their work to peer-reviewed journals, Gilliard posts prolifically on Twitter, wryly skewering consumer tech launches and flagging the latest example of what he sees as blinkered techno-optimism or surveillance creep. (Among his aphorisms: “Automating that racist thing is not going to make it less racist.”) It’s an irony of the world Silicon Valley has constructed that an otherwise obscure rhetoric and composition teacher with a Twitter habit could emerge as one of its sharpest foils.

Among a growing chorus of critics taking on an industry that’s remolding the world in its image, Gilliard is not the most prominent or credentialed. Yet his outsider status is integral to a worldview that is finding an audience not only on social media but in the halls of academia, journalism and Washington.

In 2019, Gilliard testified before the House Financial Services Committee about how big data and algorithms in banking can reinforce historical discrimination. And last month, the Senate passed an infrastructure bill that includes provisions intended to address digital redlining — the use of technology to enforce race and class divisions.

A groundswell of tech reform efforts has led to scant policy change at the federal level. The notable changes tech companies have enacted — such as Amazon suspending its sale of facial recognition technology to law enforcement — have been driven by pressure from the public and from the companies’ own idealistic employees, often channeling ideas that filtered out of academia through journalism and through conduits such as Gilliard’s Twitter account, among many others. Especially since last year’s protests over the killing of George Floyd by police, tech firms have faced heightened calls to address criticism that their products harm Black people — arguments they might previously have brushed aside.

Gilliard began calling attention to Ring’s discriminatory potential soon after Amazon acquired it in 2018. He grew so preoccupied with the camera that he changed his Twitter name to “One Ring (Doorbell) to Surveil Them All,” a play on a line from “The Lord of the Rings.” He made himself available as an expert source to Vice, USA Today and the Associated Press, explaining how consumer surveillance tools such as Ring, along with social “neighborhood watch” apps such as Neighbors and Nextdoor, feed on people’s fears and amplify their biases.

“I think about what the effect is of law enforcement having easy access to cameras from everyone’s porch,” Gilliard said. “It makes nuisance crimes” — from stolen Amazon packages to an egged car — “available for escalation in a way that they weren’t previously.”

Gradually, Gilliard’s concerns were borne out. Stories began to emerge about Ring’s secret partnerships with police departments; its security vulnerabilities; the rampant racism on its neighborhood watch app, Neighbors; and how it trades on people’s suspicion of others who look like they “don’t belong.” (Ring says it doesn’t tolerate discrimination on the Neighbors app, and it encourages users to consider whether their suspicion is reasonable before sharing footage.)

The drumbeat of negative press hasn’t stopped sales, but it has had clear impacts. This summer, Ring stopped letting police send private requests for doorbell footage directly to users; such requests must now be posted publicly in the Neighbors app.

If Gilliard’s name and face aren’t familiar, that’s by design: He tries to keep his likeness off the Internet as much as possible, in part to protest facial recognition software, and he doesn’t use his real name on social media. (His Twitter handle is @hypervisible, a term that refers to, as he puts it, “Blackness being seen and not understood.”) He agreed to be photographed for this story on the condition that his face not be shown.

He is part of a broader movement to understand and expose the ways in which a tech industry historically dominated by White and Asian men has built its own biases into products used by millions — abetted at times by the blind spots of academics, advocates, journalists and lawmakers who may come from privileged backgrounds themselves. From industry whistleblowers such as Timnit Gebru and Ifeoma Ozoma — formerly of Google and Pinterest, respectively — to academics such as Ruha Benjamin of Princeton University and Safiya Umoja Noble of UCLA, women and people of color are reshaping the public’s understanding of the role of social platforms and artificial intelligence in society. This week, President Biden nominated Georgetown law professor Alvaro Bedoya to the Federal Trade Commission, elevating a scholar who shares the view that privacy and surveillance are civil rights issues.

Gilliard is “a true leading indicator,” said Rumman Chowdhury, an expert on artificial intelligence ethics who directs Twitter’s Machine Learning Ethics, Transparency, and Accountability team. “For anybody looking to understand the next narrative in tech, whatever tomorrow’s story is going to be, it would be smart to follow Chris and to listen to what he is saying.”

***

If Gilliard appears prescient about how tools of surveillance may be abused in the future, it might be because he’s intimately familiar with how they’ve been abused in the past.

Gilliard grew up in Detroit in the 1970s and ’80s, a time and place associated with drugs and violent crime in the popular imagination. But Gilliard, one of eight children born to a washing-machine repairman and a cook, didn’t see it that way. To him, the perceptions of urban Detroit as a dangerous dystopia were more damaging than the reality. They meant that members of his community were viewed by White society primarily as a threat — a problem to be solved, or contained.

His youth was shadowed by STRESS — an infamous Detroit Police Department decoy unit, its acronym short for “Stop the Robberies, Enjoy Safe Streets,” whose officers posed as vulnerable civilians to lure and catch would-be robbers. The unit used computer data to focus its sting operations on high-crime areas, a cutting-edge approach at the time, but one that created a feedback loop ensuring that the same neighborhoods would be perpetually targeted. Its officers quickly racked up arrests — at the cost of an appalling body count. In 2½ years, STRESS officers killed 22 people, 21 of them Black, and many of them unarmed, according to a 2017 history of the unit in the New Republic. It was disbanded in 1974 amid public outrage and a flurry of lawsuits.

While Gilliard was too young to be aware of the program at the time, “it really has informed my ideas about surveillance and law enforcement,” he said. “I’ve had guns pulled on me by police. I did not grow up with the belief that law enforcement was my friend.” Instead, he grew up deeply skeptical of “innovations” that promised to keep some people safer by monitoring or restricting the activities of others.

Gilliard’s parents prioritized education, and Chris became a voracious reader. He went on to obtain a PhD in rhetoric and composition from Purdue University, and these days he teaches English at Macomb Community College in Shelby Township, Mich. In phone conversations and Zoom calls, he’s affable and self-deprecating, recounting examples of ignorance or injustice with sardonic humor. He peppers his discussions with references to other academics’ work; when asked about his own impact, he deflects credit to those who have influenced him, including University of Texas professor Simone Browne, UCLA’s Noble, Detroit poet and activist Tawana Petty and Northeastern University’s Woodrow Hartzog.

While he may lack their stature in the academic hierarchy, scholars at major research institutions point back to Gilliard as an important bridge between the ivory tower and tech’s real-world impacts.

“He has such a laserlike focus in his analysis, in being able to look at any new technology and understand the consequences of it for the most vulnerable,” says Noble, associate professor of information studies at UCLA and the author of “Algorithms of Oppression,” a landmark work on search engines’ biases. “He’s like my Consumer Reports on the dangers of technology.”

In tech criticism circles, he’s best known for popularizing the term “digital redlining.” He and his Macomb colleague Hugh Culik first articulated the concept in 2016 to describe the use of Internet-filtering tools designed to block “lewd” or extreme content at community colleges — tools that also blocked students from researching topics, such as revenge porn, that their peers at four-year colleges were free to pursue.

In the years since, the concept has become pivotal to debates about broadband Internet access, Amazon same-day delivery and Facebook ads for housing and employment that excluded users based on their “ethnic affinity.”

“The concept of redlining is old,” says Joan Donovan, research director of Harvard University’s Shorenstein Center, who brought Gilliard on as a research fellow last year. “But it’s often that you have to demonstrate that the digital (realm) doesn’t erase these distinctions, but rather tends to focus them and exacerbate them.”

More recently, Gilliard coined the term “luxury surveillance” to describe tracking tools that people of privilege opt into, but which resemble and reinforce forms of surveillance that target marginalized people. Take the Transportation Security Administration’s PreCheck program, in which people who believe they have “nothing to hide” volunteer additional information about themselves in exchange for speedy passage through airport security. As more people opt in, those who resist the program are subjected to an ever-growing share of scrutiny.

“An ankle monitor and a thing like the Fitbit are very much the same technology,” Gilliard says. “Some of the main differences are who it’s used by, or against, or on, and what it looks like.” When people buy a Fitbit or a Ring, he said, they assume the collected data will be used for their benefit. Because those consumers are often White and well-off, they’re often correct, Gilliard adds — “but not always.”

He cites a recent debate in an upscale community in Colorado over whether to install automated license-plate readers. “The assumption is that they’re going to stop crime. But there are all kinds of things they could do with that data that puts you on the surveilled end of that equation,” such as an abusive partner using it to stalk a resident’s movements.

“I bristle when people talk about privacy being dead, or privacy being some kind of First World issue,” Gilliard said. From people receiving public assistance to those who have been incarcerated, “some of the people who have been deprived of their privacy rights are most aware of how important they are.”

***

Mainstream pundits and politicians often talk of invasive technologies as being in need of adjustment, responsible oversight or reform.

Gilliard has a different view: When it comes to technologies that are fundamentally discriminatory, he says, the goal should be “abolition.”

For years, people who called for bans on facial recognition technology were “derided and told it was impossible,” Gilliard recalls. Now such bans are proliferating, from San Francisco and Portland, Ore., to New Orleans and Boston. Gilliard is part of a growing cadre of tech critics, perhaps most notably Princeton’s Benjamin, who see that as a model for reining in other technologies, from predictive policing to AI proctoring, that reinforce discrimination.

“One of the arguments people like to make is that you can’t go back on particular technologies — you can’t put the genie back in the bottle,” Gilliard says. “That’s patently untrue. No one would look at asbestos and say, ‘Well, you can’t outlaw chemistry.’ But they look at facial recognition and say, ‘You can’t outlaw math.’”

To Gilliard, facial recognition software has no place in a free society, not just because it is often less accurate in identifying people of color, but because it means people in targeted neighborhoods can’t even walk down the street without being watched. “Facial recognition takes this kind of freedom that’s foundational to a free society and renders it obsolete,” he said.


At times, Gilliard’s privacy advocacy has earned him unwanted attention from the companies he criticizes. The remote-proctoring company Proctorio has cited critical tweets from Gilliard in a lawsuit against another of its critics, whom it accuses of infringing its copyright. The company is seeking to have that critic’s private communications with Gilliard unsealed, Proctorio legal counsel Timothy Pinos confirmed. Gilliard argues that Proctorio’s products invade students’ privacy in their homes, and that its face-detection software risks flagging as “suspicious” those whose appearance or behavior strays from what it deems to be normal.

In an emailed statement, Proctorio founder and CEO Mike Olsen said: “We do not agree that the software erodes expectations of privacy, as it merely replaces a human invigilator monitoring the taking of the exam. … Proctorio is committed to ensuring its software is free of bias, and welcomes comments and criticisms to that end.”

As Gilliard’s work began to attract speaking engagements and fellowship opportunities, they often came with requests for headshots. Gilliard didn’t want pictures of his face next to his name online, available for unscrupulous companies to scrape for use in facial recognition databases, which might then make their way into the hands of law enforcement. He knew that Black people are more likely to be misidentified by those systems as suspects in criminal investigations, sometimes leading to dangerous confrontations with police.

Earlier in his career, Gilliard didn’t always feel he could say no. Now he does, rejecting invitations to appear on camera via Zoom or YouTube.

“It’s a little bit performative in that my face is already out there,” he admits. “But I don’t believe in feeding the machine any more than I have to.” Plus, he said, “I want to normalize, to the extent that I can, that other people can say no.”

The desire to stand up for the little guy is a unifying theme of Gilliard’s work. (“I hate bullies,” he says.) And it may help explain why Gilliard has remained something of a “little guy” himself, teaching at a community college despite a level of impact that many top-tier research professors would envy.

“Part of what drives it is that I really understand these practices as so harmful, and a lot of them people don’t know about or do not feel license to talk about or critique, because they involve computational processes that a lot of people don’t understand,” Gilliard says of his research on digital surveillance and predictive software.

Those computational processes are “so harmful to communities that I care about, whether that’s students, Black and Brown folks, people who look like me, people who I love, groups who are in some ways again marginalized or vulnerable. Incarcerated people, formerly incarcerated people,” he said.

“A lot of people, I think, don’t feel the license to say, ‘This thing (expletive) sucks,’ or, ‘This thing shouldn’t exist,’” Gilliard said. “Which is something I think we should allow ourselves to say more often.”