Twitter Now Says Its Rules About Journalists Being Impersonated In Fake Tweets Need To Be Revised

"People kept saying, 'Don't talk to her, she's racist,' and it just kept getting worse."


In the wake of the Florida high school shooting Wednesday, Twitter users began spreading doctored tweets targeting Miami Herald reporter Alex Harris.

The shooting Wednesday left at least 17 dead. After the shooter entered the school and began the attack, some students posted on Snapchat and Twitter about what was happening. Harris was one of the first reporters to reach out to those students and ask them to help explain the situation.

As her tweet to the students went viral, Harris began receiving harassment.

It took less than an hour for someone to create fake screenshots of tweets she sent.

The first tweet faked Harris asking for "pictures or video of dead bodies." The second doctored tweet went further, making it look like she was asking whether the shooter was white.

"That one went nuts, that one picked up tremendous steam," Harris told BuzzFeed News. It got posted to Reddit and a white nationalist forum. She began reporting the tweets almost immediately, but Twitter didn't take them down, despite gaining steam on social media.

Harris said people spreading the fakes were relentless. One person followed her every move on Twitter.

 "People kept saying, 'Don't talk to her, she's racist,' and it just kept getting worse."

"He would follow my tweets and every time I tweeted at someone, he would reply 'Don't talk to her, she's been harassing students,'" she said. Someone offered a victim $30 to talk to the competition and asked for people to send them money so they could offer more. "People kept saying, 'Don't talk to her, she's racist,' and it just kept getting worse."

It's difficult to tell how the fake tweets affected her reporting, but she thinks they probably had an effect. "I think it genuinely might have made a difference to some of the people I reached out to," she said.

Harris has covered shootings before, including those at the Pulse nightclub and the Fort Lauderdale airport, but this is the worst online abuse she's ever received.

"I had literally thousands of messages and they just filled up my mentions and DMs with terrible, racist, sexist, horrific, graphic death threats," she said. "I got Facebook messages sent directly to my private account that had the same content, too. I've never experienced anything like it before."

When asked about the impersonating tweets, a Twitter spokesperson initially told BuzzFeed News that the company's team reviews reports against its rules and can take a number of actions if those rules are broken.

Twitter also pointed BuzzFeed News to a blog post from last June titled "Our Approach to Bots & Misinformation." The spokesperson highlighted a paragraph that dealt with hoaxes:

Twitter’s open and real-time nature is a powerful antidote to the spreading of all types of false information. This is important because we cannot distinguish whether every single Tweet from every person is truthful or not. We, as a company, should not be the arbiter of truth. Journalists, experts and engaged citizens Tweet side-by-side correcting and challenging public discourse in seconds. These vital interactions happen on Twitter every day, and we’re working to ensure we are surfacing the highest quality and most relevant content and context first.

It's worth noting, however, that Twitter's rules do include language forbidding malicious impersonation: "You may not impersonate individuals, groups, or organizations in a manner that is intended to or does mislead, confuse, or deceive others. While you may maintain parody, fan, commentary, or newsfeed accounts, you may not do so if the intent of the account is to engage in spamming or abusive behavior."

After being alerted to the matter in a tweet from a BuzzFeed News reporter, Twitter CEO Jack Dorsey said Thursday evening he was "investigating." On Friday, Dorsey tweeted, "We do need to revise this."

He added that the company doesn't have a "scalable policy" around making sure content is authentic, saying, "We need to figure out a scalable and objective way rooted in durable policy to do this long-term."


For Harris, Twitter's initial response was inadequate. She estimates she sent the company dozens of reports and considered it an open-and-shut case.

"The text accompanying the screenshot said 'kill yourself' and 'die' and 'how dare you say that, you racist,'" Harris said. "I don't see how you can read that and think it's not targeted abuse."

Twitter has long been criticized for how it deals with abuse on its platform. A BuzzFeed News report from last July showed clear-cut harassment cases were slipping through the cracks, despite the company making abuse its focus for the year. Twitter also suspended and then reinstated white supremacist David Duke in March, and last August it faced criticism for allowing a tweet about the location of the Unite the Right rally in Charlottesville, Virginia, to remain on its network.

People routinely use Twitter to spread misinformation, including around breaking news situations like the Florida shooting. Hoax screenshots of media outlets or shooters are nothing new, but the doctored tweets are a new type of fake. Harris said she had never come across anything like this in her reporting.

"It definitely created more attention over something that didn't need to be paid attention to when you're talking about 17 dead kids," she said.

UPDATE

This post has been updated with comment from Twitter CEO Jack Dorsey.
