Jack Dorsey Is Captain of the Twittanic at TED 2019

Twitter’s CEO continues his mea culpa tour as the company announces it has finally started blocking abuse with AI.
It’s been more than a year since Jack Dorsey publicly committed to “fixing” Twitter, which the CEO himself admits is toxic and full of problems he didn't anticipate. Cole Burston/Bloomberg/Getty Images

On Tuesday, Jack Dorsey, the CEO of Twitter, came to TED 2019 to answer for the sins of his platform. In his signature black hoodie and jeans, unkempt facial hair, and black beanie, he sat with TED head Chris Anderson and Whitney Pennington Rodgers, who curates current affairs for the conference, for a conversation that left all three, along with the audience, frustrated.

“We’re on this great voyage with you on the Twittanic,” Anderson told Dorsey after roughly 20 minutes of interrupted back and forth. “There are people in steerage who are saying, ‘We are worried about the iceberg ahead!’ And you say, ‘That is a good point’ and ‘Our boat hasn’t been built to handle it,’ and we’re waiting, and you are showing this extraordinary calm and we’re all worried but we’re outside saying, ‘Jack, turn the fucking wheel!’”

Dorsey stoically listened to this comparison, like the meditative yogi he often talks about aspiring to be. “It’s democracy at stake! It’s our culture at stake! It’s our world at stake!” Anderson continued. “You’re doing a brilliant role of listening, Jack, but can you actually dial up the urgency and move on this stuff? Will you do that?”

“Yeah, yeah, yes,” Dorsey replied, but then added, “We could do a bunch of superficial things to address what you’re talking about, but we need to go deep.”

It’s been more than a year since Dorsey publicly committed to “fixing” Twitter, and figuring out what a platform that encourages healthy discussions looks like. He’s been on a mea culpa tour since then, telling the world—and regulators—that he knows Twitter is broken, that it’s toxic and terrible and that he and the team are planning to radically rebuild it. He reiterated all of this on the TED stage, explaining that he wants to rethink what behavior the site incentivizes, for instance, by possibly getting rid of the like button and de-emphasizing follower counts while emphasizing topical interests instead. He repeated that he wants to focus on maximizing the health of conversations, and prioritizing people spending their time learning on the site, rather than getting outraged or harassed. He admitted Twitter was full of problems, problems he didn’t anticipate 13 years ago when the site was founded, and which he’s still trying to figure out how to solve.

The urgency of this task couldn’t have been made clearer in the days leading up to Dorsey’s appearance. Over the weekend, Ilhan Omar—a woman of color, an immigrant, and a Muslim representing the state of Minnesota in the US House—reported an increase in death threats after President Trump tweeted out a video that intercut a speech she recently gave with footage of the 9/11 attacks. Many of the threats were made on Twitter. Then on Monday, as Notre Dame burned, people came to the platform to mourn the loss in real time, but also to spread lies and hate as quickly as the flames engulfed the cathedral’s spire. When Omar tweeted her own heartfelt condolences, people replied with more death threats. Twitter was very much itself, showcasing the power of its network as well as its danger.

Dorsey didn’t address any of these incidents specifically at TED. In fact, his answers lacked specificity overall. When he was asked pointed questions, he evaded them, as he often does. Rodgers asked him how many people work on content moderation at Twitter—a number the company has never published—and on Tuesday the vagueness streak continued.

“It varies,” Dorsey said. “We want to be flexible on this. There are no amount of people that can actually scale this, which is why we have done so much work on proactively taking down abuse.”

That proactive work was the big news Dorsey announced from the stage: A year ago, Twitter wasn’t using machine learning to proactively monitor abuse at all. Instead, it relied entirely on human reporting—a burden Dorsey was quick to recognize was unfairly put on the victims of the abuse. “We’ve made progress,” he said. “Thirty-eight percent of abusive tweets are now proactively recognized by machine-learning algorithms, but those that are recognized are still reviewed by humans. But that was from zero percent just a year ago.” As he uttered those words, Twitter sent out a press release with more information on the effort, highlighting that three times more abusive accounts are being suspended within 24 hours of getting reported compared with this time last year.

That progress is good, but 38 percent is not exactly a lot. Facebook’s most recent transparency report, by contrast, says that over 51 percent of content it acted on for violating policies against hate speech was flagged before users reported it. Nor did Dorsey or the official Twitter announcement provide many details about how the technology to proactively flag abuse works.

Relying on algorithms and automation won’t solve all Twitter’s problems, either. Facebook just announced a slew of changes to better fight abuse and misinformation, which, for all the company’s technological sophistication, it hasn’t come close to eradicating. And on Monday YouTube briefly flagged news broadcasts' live video of the Notre Dame fire with a link to information about the 9/11 attacks—an effort at automated fact-checking that in this case demonstrated how imperfect such systems can be.

For years, organizations like Amnesty International have urged Twitter to be more transparent about abuse on its platform and the steps the company is taking to combat it. Rodgers noted that last year, a crowdsourced study by Amnesty found that a problematic or abusive tweet is sent to a woman every 30 seconds. For women of color, one in every 10 tweets they receive is abusive.

By bringing up the very real suffering of people on his platform, Rodgers and Anderson tried to bring a sense of urgency to the conversation. But Dorsey’s signature laconic and intensely calm style of speaking was at odds with the tone they were trying to set. When Dorsey tried to get into specifics of how Twitter is measuring healthy conversations on the site—using four metrics developed by MIT’s Cortico team—Anderson cut him off.

“How hard is it to get rid of Nazis from Twitter?” he asked.

Dorsey sighed. Deeply. He explained that the team has taken hateful accounts down, and when they can see that an account is associated with a hate group, they are banned. “We’re in a situation right now where that term is used fairly loosely and we just cannot take any one mention of that word accusing someone else as a factual indication that they should be removed from the platform,” he explained. Twitter, and Dorsey in particular, have long upheld free speech as a defining value for the service.

The conversation was clearly frustrating for all three participants. “You didn’t let me finish,” Dorsey told Anderson at one point, after he was cut off again. In that way, the TED event was also quite meta: To use Twitter is to be frustrated by its promise and limitations, by how much of a garbage fire it is while also being so useful to modern life, by how obvious some of its problems are while seeing how apparently elusive solutions can be.

Dorsey did bring up one specific fix. “The first thing you see when you go to [the page to report abuse] is about intellectual property protection. You scroll down and you get to abuse and harassment,” he noted. “I don’t know how that happened in the company’s history, but we put that above the thing that people actually want the most information on. Just our ordering shows the world what we believed was important. We are changing all that, we are ordering it the right way.”

For all his insistence on the bigger picture, this was a very small problem for Dorsey to point out, and one with a very obvious solution. Nevertheless, Twitter is not fixed. Why? The reasoning here is agonizingly circular: Dorsey says he doesn’t want to do a bunch of small iterative quick fixes; he wants to fundamentally rebuild the site to encourage better conversations, and that will take time—time it’s unclear the world can afford.

The day before Dorsey appeared at TED, Carole Cadwalladr, the British journalist who broke the story about Cambridge Analytica’s role in the Brexit vote, stood on the same stage and issued a challenge to all the “gods of Silicon Valley,” listing them by name: Zuckerberg, Sandberg, Brin, Page, and Dorsey last among them. “This technology that you have invented has been amazing, but now it’s a crime scene,” she said. “My question to you is, is this what you want? Is this how you want history to remember you? As the handmaidens to authoritarianism all across the world? You set out to connect people and the same technology is now driving us apart.”

Of those gods, only Dorsey showed up. But unlike an omniscient being, Dorsey doesn’t have all the answers. He’s more like a captain of a ship, wondering aloud how to avoid the many icebergs in his path while continuing ahead at full steam.
