Mike Schroepfer, Facebook’s chief technology officer, is leading the social network’s efforts to build the automated tools to sort through and erase the millions of posts with toxic content. Credit: Peter Prato for The New York Times

Facebook’s A.I. Whiz Now Faces the Task of Cleaning It Up. Sometimes That Brings Him to Tears.

Facebook has heralded artificial intelligence as a solution to its toxic content problems. Mike Schroepfer, its chief technology officer, says it won’t solve everything.

MENLO PARK, Calif. — Mike Schroepfer, Facebook’s chief technology officer, was tearing up.

For half an hour, we had been sitting in a conference room at Facebook’s headquarters, surrounded by whiteboards covered in blue and red marker, discussing the technical difficulties of removing toxic content from the social network. Then we brought up an episode where the challenges had proved insurmountable: the shootings in Christchurch, New Zealand.

In March, a gunman had killed 51 people in two mosques there and live streamed it on Facebook. It took the company roughly an hour to remove the video from its site. By then, the bloody footage had spread across social media.

Mr. Schroepfer went quiet. His eyes began to glisten.

“We’re working on this right now,” he said after a minute, trying to remain composed. “It won’t be fixed tomorrow. But I do not want to have this conversation again six months from now. We can do a much, much better job of catching this.”

The question is whether that is really true or if Facebook is kidding itself.

For the past three years, the social network has been under scrutiny for the proliferation of false, misleading and inappropriate content that people publish on its site. In response, Mark Zuckerberg, Facebook’s chief executive, has invoked a technology that he says will help eliminate the problematic posts: artificial intelligence.

Before Congress last year, Mr. Zuckerberg testified that Facebook was developing machine-based systems to “identify certain classes of bad activity” and declared that “over a five- to 10-year period, we will have A.I. tools” that can detect and remove hate speech. He has since blithely repeated these claims to the media, on conference calls with Wall Street and at Facebook’s own events.

Mr. Schroepfer — or Schrep, as he is known internally — is the person at Facebook leading the efforts to build the automated tools to sort through and erase the millions of such posts. But the task is Sisyphean, he acknowledged over the course of three interviews recently.

That’s because every time Mr. Schroepfer and his more than 150 engineering specialists create A.I. solutions that flag and squelch noxious material, new and dubious posts that the A.I. systems have never seen before pop up — and are thus not caught. The task is made more difficult because “bad activity” is often in the eye of the beholder and humans, let alone machines, cannot agree on what that is.

In one interview, Mr. Schroepfer acknowledged after some prodding that A.I. alone could not cure Facebook’s ills. “I do think there’s an endgame here,” he said. But “I don’t think it’s ‘everything’s solved,’ and we all pack up and go home.”

The pressure is on, however. This past week, after widespread criticism over the Christchurch video, Facebook changed its policies to restrict the use of its live streaming service. At a summit in Paris with President Emmanuel Macron of France and Prime Minister Jacinda Ardern of New Zealand on Wednesday, the company also signed a pledge to re-examine the tools it uses to identify violent content.

Mr. Schroepfer, 44, is in a position he never wanted to be in. For years, his job was to help the social network build a top-flight A.I. lab, where the brightest minds could tackle technological challenges like using machines to pick out people’s faces in photos. He and Mr. Zuckerberg wanted an A.I. operation to rival Google’s, which was widely seen as having the deepest stable of A.I. researchers. He recruited Ph.D.s from New York University, the University of London and the Pierre and Marie Curie University in Paris.

But along the way, his role evolved into one of removing threats and eliminating toxic content. Now he and his recruits spend much of their time applying A.I. to spotting and deleting death threats, videos of suicides, misinformation and outright lies.

“None of us have ever seen anything like this,” said John Lilly, a former chief executive of Mozilla and now a venture capitalist at Greylock Partners, who studied computer science with Mr. Schroepfer at Stanford University in the mid-1990s. “There is no one else to ask about how to solve these problems.”

Facebook allowed us to talk to Mr. Schroepfer because it wanted to show how A.I. is catching troublesome content and, presumably, because it was interested in humanizing its executives. The chief technology officer often shows his feelings, according to many who know him.

“I don’t think I’m speaking out of turn to say that I’ve seen Schrep cry at work,” said Jocelyn Goldfein, a venture capitalist at Zetta Venture Partners who worked with him at Facebook.

Facebook has been under pressure to deal with misinformation and other inappropriate content on its site. The company has set up “war rooms” to deal with election interference. Credit: David Paul Morris/Bloomberg

But few could have predicted how Mr. Schroepfer would react to our questions. In two of the interviews, he started with an optimistic message that A.I. could be the solution, before becoming emotional. At one point, he said coming to work had sometimes become a struggle. Each time, he choked up when discussing the scale of the issues that Facebook was confronting and his responsibilities in changing them.

“It’s never going to go to zero,” he said of the problematic posts.

One Sunday in December 2013, Clément Farabet walked into the penthouse suite at the Harrah’s hotel and casino in Lake Tahoe, Nev. Inside, he was greeted by Mr. Schroepfer and Mr. Zuckerberg.

Mr. Zuckerberg was shoeless. Over the next 30 minutes, the C.E.O. paced back and forth in his socks while keeping up a conversation with Dr. Farabet, an A.I. researcher at New York University. Mr. Zuckerberg described A.I. as “the next big thing” and “the next step for Facebook.” Mr. Schroepfer, seated on the couch, occasionally piped up to reinforce a point.

They were in town to recruit A.I. talent. Lake Tahoe was the venue that year for NIPS, an academic conference dedicated to A.I. that attracts the world’s top researchers. The Facebook brass had brought along Yann LeCun, an N.Y.U. academic who is regarded as a founding father of the modern artificial intelligence movement, and whom they had just hired to build an A.I. lab. Dr. Farabet, who regards Dr. LeCun as a mentor, was also on their shortlist.

“He basically wanted to hire everybody,” Dr. Farabet said of Mr. Zuckerberg. “He knew the names of every single researcher in the space.”

Those were heady days for Facebook, before its trajectory turned and the mission of its A.I. work changed.

At the time, Silicon Valley’s biggest tech companies — from Google to Twitter — were racing to become forces in A.I. The technology had been dismissed by the internet firms for years. But at universities, researchers like Dr. LeCun had quietly nurtured A.I. systems called “neural networks,” complex mathematical systems that can learn tasks on their own by analyzing vast amounts of data. To the surprise of many in Silicon Valley, these arcane and somewhat mysterious systems had finally started to work.
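At their core, those neural networks are trained rather than programmed: shown enough labeled examples, they adjust their internal parameters until they can make the right call on data they have not seen before. The sketch below, which uses PyTorch and synthetic data purely for illustration (neither is mentioned in the article), shows that learn-from-examples loop in miniature.

```python
# Illustrative only: a tiny neural network that "learns a task by analyzing data,"
# trained here on synthetic examples with PyTorch (a library choice assumed for this sketch).
import torch
import torch.nn as nn

# Synthetic dataset: 1,000 examples with 64 features each, split into two classes.
X = torch.randn(1000, 64)
y = (X[:, 0] > 0).long()  # a simple hidden rule the network must discover

model = nn.Sequential(
    nn.Linear(64, 32),
    nn.ReLU(),
    nn.Linear(32, 2),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# The "learning": repeatedly compare predictions with labels and nudge the weights.
for epoch in range(50):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()

with torch.no_grad():
    accuracy = (model(X).argmax(dim=1) == y).float().mean().item()
print(f"training accuracy: {accuracy:.2f}")  # improves as the network finds the pattern
```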

Mr. Schroepfer and Mr. Zuckerberg wanted to push Facebook into that contest, seeing the rapidly improving technology as something the company needed to jump on. A.I. could help the social network recognize faces in photos and videos posted to its site, Mr. Schroepfer said, and could aid it in better targeting ads, organizing its News Feed and translating between languages. A.I. could also be applied to deliver digital widgets like “chatbots,” which are conversational systems that let businesses interact with customers.

“We were going to hire some of the best people in the world,” Mr. Schroepfer said. “We were going to build a new kind of research lab.”

Starting in 2013, Mr. Schroepfer began hiring researchers who specialized in neural networks, at a time when the stars of the field were paid millions or tens of millions of dollars over four or five years. On that Sunday in 2013 in Lake Tahoe, they did not succeed in hiring Dr. Farabet, who went on to create an A.I. start-up that Twitter later acquired. But Mr. Schroepfer brought in dozens of top researchers from places like Google, N.Y.U. and the University of Montreal.

Mr. Schroepfer also built a second organization, the Applied Machine Learning team, which was asked to apply the Facebook A.I. lab’s technologies to real-world applications, like facial recognition, language translation and augmented reality tools.

In late 2015, some of the A.I. work started to shift. The catalyst was the Paris terrorist attack, in which Islamic militants killed 130 people and wounded nearly 500 during coordinated attacks in and around the French capital. Afterward, Mr. Zuckerberg asked the Applied Machine Learning team what it might do to combat terrorism on Facebook, according to a person with knowledge of the company who was not authorized to speak publicly.

In response, the team used technology developed inside the new Facebook A.I. lab to build a system to identify terrorist propaganda on the social network. The tool analyzed Facebook posts that mentioned the Islamic State or Al Qaeda and flagged those that most likely violated the company’s counterterrorism policies. Human curators then reviewed the posts.
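The article describes that first system only in broad strokes. As a rough, hypothetical sketch of the flag-then-review pattern it outlines — every name, keyword and threshold below is invented for illustration, not taken from Facebook — the logic might look something like this:

```python
# Hypothetical sketch of a flag-and-review pipeline: posts that mention certain
# groups are scored by a classifier, and only high-scoring posts are queued for
# human reviewers. All names, keywords and thresholds here are illustrative.
from dataclasses import dataclass

WATCHED_TERMS = {"islamic state", "al qaeda"}
REVIEW_THRESHOLD = 0.8  # assumed cutoff; the real system's rules are not public

@dataclass
class Post:
    post_id: int
    text: str

def mentions_watched_terms(post: Post) -> bool:
    text = post.text.lower()
    return any(term in text for term in WATCHED_TERMS)

def propaganda_score(post: Post) -> float:
    """Stand-in for a trained classifier; returns a probability-like score."""
    # In this sketch a trivial heuristic plays the role of the model.
    return 0.9 if mentions_watched_terms(post) else 0.1

def build_review_queue(posts: list[Post]) -> list[Post]:
    flagged = [p for p in posts if mentions_watched_terms(p)]
    return [p for p in flagged if propaganda_score(p) >= REVIEW_THRESHOLD]

# Human curators would then work through build_review_queue(posts) by hand.
```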

It was a turning point in Facebook’s effort to use A.I. to weed through posts and eliminate the problematic ones.

Mr. Schroepfer after answering questions about the Cambridge Analytica scandal at a British parliamentary committee hearing in London in April 2018. Credit: Simon Dawson/Bloomberg

That work soon gathered momentum. In November 2016, when Donald J. Trump was elected president, Facebook faced a backlash for fostering misinformation on its site that may have influenced voters and laid the groundwork for Mr. Trump’s win.

Though the company initially dismissed its role in misinformation and the election, it started shifting technical resources in early 2017 to automatically identify a wide range of unwanted content, from nudity to fake accounts. It also created dozens of “integrity” positions dedicated to fighting unwanted content on subsections of its site.

By mid-2017, the detection of toxic content accounted for more of the work at the Applied Machine Learning team than any other task. “The clear No. 1 priority for our content understanding work was integrity,” Mr. Schroepfer said.

Then in March 2018, The New York Times and others reported that the British political consulting firm Cambridge Analytica had harvested the information of millions of Facebook users without their consent, to build voter profiles for the Trump campaign. The outcry against the social network mushroomed.

Mr. Schroepfer was soon called to help deal with the controversy. In April 2018, he flew to London to be the designated executive to face questions from a British parliamentary committee about the Cambridge Analytica scandal. He was grilled for more than four hours as one member of Parliament after another heaped criticism on Facebook.

“Mr. Schroepfer, you have a head of integrity?” Ian Lucas, a Labour Party politician, said to the grim-faced executive during the hearing, which was live streamed around the world. “I remain unconvinced that your company has integrity.”

“It was too hard for me to watch,” said Forest Key, chief executive of a Seattle virtual reality start-up called Pixvana, who has known Mr. Schroepfer since they worked together at a movie effects technology start-up in the late 1990s. “What a burden. What a responsibility.”

The challenge of using A.I. to contain Facebook’s content issues was on — and Mr. Schroepfer was in the hot seat.

From his earliest days at Facebook, Mr. Schroepfer was viewed as a problem solver.

Raised in Delray Beach, Fla., where his parents ran a 1,000-watt AM radio station that played rock ’n’ roll oldies before switching to R&B, Mr. Schroepfer moved to California in 1993 to attend Stanford. There, he majored in computer science for his undergraduate and graduate degrees, mingling with fellow technologists like Mr. Lilly and Adam Nash, who is now a top executive at the file-sharing company Dropbox.

After graduating, Mr. Schroepfer stayed in Silicon Valley and went after thorny technical undertakings. He cut his teeth at a movie effects start-up and later founded a company that built software for massive computer data centers, which was acquired by Sun Microsystems. In 2005, he joined Mozilla as vice president for engineering. The San Francisco nonprofit had built a web browser to challenge the monopoly of Microsoft and its Internet Explorer browser. At the time, few technical tasks were as large.

“Browsers are complex products, and the competitive landscape is weird,” said Mike Shaver, a founder of Mozilla, who worked alongside Mr. Schroepfer for several years. “Even early on in his career, I was never worried about his ability to handle it all.”

In 2008, Dustin Moskovitz, a co-founder of Facebook, stepped down as its head of engineering. Enter Mr. Schroepfer, who came to the company to take that role. Facebook served about 100 million people at the time, and his mandate was to keep the site up and running as its numbers of users exploded. The job involved managing thousands of engineers and tens of thousands of computer servers across the globe.

“Most of the job was like a bus rolling downhill on fire with four flat tires. Like: How do we keep it going?” Mr. Schroepfer said. A big part of his day was “talking engineers off the ledge of quitting” because they were dealing with issues at all hours, he said.

Over the next few years, his team built a range of new technologies for running a service so large. (Facebook has more than two billion users today.) It rolled out new programming tools to help the company deliver Facebook to laptops and phones more quickly and reliably. It introduced custom server computers in data centers to streamline the operation of the enormous computer network. In the end, Facebook significantly reduced service interruptions.

Mark Zuckerberg, Facebook’s chief executive, testified before Congress last year that the company was developing machine-based systems to “identify certain classes of bad activity.” Credit: Tom Brenner/The New York Times

“I can’t remember the last time I talked to an engineer who’s burned out because of scaling issues,” Mr. Schroepfer said.

For his efforts, Mr. Schroepfer gained more responsibility. In 2013, he was promoted to chief technology officer. His mandate was to home in on brand-new areas of technology that the company should explore, with an eye on the future. As a sign of his role’s importance, he uses a desk beside Mr. Zuckerberg’s at Facebook headquarters and sits between the chief executive and Sheryl Sandberg, the chief operating officer.

“He’s a good representation of how a lot of people at the company think and operate,” Mr. Zuckerberg said of Mr. Schroepfer. “Schrep’s superpower is being able to coach and build teams across very diverse problem areas. I’ve never really worked with anyone else who can do that like him.”

So it was no surprise when Mr. Zuckerberg turned to Mr. Schroepfer to deal with all the toxicity streaming onto Facebook.

Inside a Facebook conference room on a recent afternoon, Mr. Schroepfer pulled up two images on his Apple laptop computer. One was of broccoli, the other of clumped-up buds of marijuana. Everyone in the room stared at the images. Some of us were not quite sure which was which.

Mr. Schroepfer had shown the pictures to make a point. Even though some of us were having trouble distinguishing between the two, Facebook’s A.I. systems were now able to pinpoint patterns in thousands of images so that they could recognize marijuana buds on their own. Once the A.I. flagged the pot images, many of which were attached to Facebook ads that used the photos to sell marijuana over the social network, the company could remove them.

“We can now catch this sort of thing — proactively,” Mr. Schroepfer said.
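For illustration only, here is a minimal sketch of the flagging step Mr. Schroepfer was describing: a classifier assigns each uploaded image a score for the banned category, and anything above a threshold is routed for removal. The architecture, the threshold and the use of PyTorch are assumptions for this sketch, not details of Facebook’s actual system.

```python
# A minimal sketch (assumed architecture, synthetic images) of flagging a banned
# image category: a small convolutional network scores each image, and anything
# above a chosen threshold is marked for removal. Real systems are trained on
# thousands of labeled photos, as described above.
import torch
import torch.nn as nn

classifier = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, 1),
    nn.Sigmoid(),  # probability that the image shows the banned content
)

FLAG_THRESHOLD = 0.9  # illustrative cutoff, not Facebook's

def flag_for_removal(images: torch.Tensor) -> torch.Tensor:
    """images: a batch of RGB images, shape (N, 3, H, W); returns a boolean mask."""
    with torch.no_grad():
        scores = classifier(images).squeeze(1)
    return scores >= FLAG_THRESHOLD

batch = torch.rand(8, 3, 224, 224)       # stand-in for uploaded photos
print(flag_for_removal(batch).tolist())  # untrained here, so the flags are arbitrary
```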

The problem was that the marijuana-versus-broccoli exercise was not just a sign of progress, but also of the limits that Facebook was hitting. Mr. Schroepfer’s team has built A.I. systems that the company now uses to identify and remove pot images, nudity and terrorist-related content. But the systems are not catching all of those pictures, as there is always unexpected content, which means millions of nude, marijuana-related and terrorist-related posts continue to reach the eyes of Facebook users.

Identifying rogue images is also one of the easier tasks for A.I. It is harder to build systems to identify false news stories or hate speech. False news stories can easily be fashioned to appear real. And hate speech is problematic because it is so difficult for machines to recognize linguistic nuances. Many nuances differ from language to language, while context around conversations rapidly evolves as they occur, making it difficult for the machines to keep up.

Delip Rao, head of research at A.I. Foundation, a nonprofit that explores how artificial intelligence can fight disinformation, described the challenge as “an arms race.” A.I. is built from what has come before. But so often, there is nothing to learn from. Behavior changes. Attackers create new techniques. By definition, it becomes a game of cat and mouse.

“Sometimes you are ahead of the people causing harm,” Mr. Rao said. “Sometimes they are ahead of you.”

On that afternoon, Mr. Schroepfer tried to answer our questions about the cat-and-mouse game with data and numbers. He said Facebook now automatically removed 96 percent of all nudity from the social network. Hate speech was tougher, he said — the company catches 51 percent of that on the site. (Facebook later said this had risen to 65 percent.)

Mr. Schroepfer acknowledged the arms race element. Facebook, which can automatically detect and remove problematic live video streams, did not identify the New Zealand video in March, he said, because it did not really resemble anything uploaded to the social network in the past. The video gave a first-person viewpoint, like a computer game.

In designing systems that identify graphic violence, Facebook typically works backward from existing images — images of people kicking cats, dogs attacking people, cars hitting pedestrians, one person swinging a baseball bat at another. But, he said, “none of those look a lot like this video.”

The novelty of that shooting video was why it was so shocking, Mr. Schroepfer said. “This is also the reason it did not immediately get flagged,” he said, adding that he had watched the video several times to understand how Facebook could identify the next one.

“I wish I could unsee it,” he said.

