
Inside Facebook, Twitter and Google's AI battle over your social lives

From stamping out trolls to removing fake bot accounts, here's how social networks are waging war using AI weapons.

When you sign up for Facebook on your phone, the app isn't just giving you the latest updates and photos from your friends and family. In the background, it's utilizing the phone's gyroscope to detect subtle movements that come from breathing. It's measuring how quickly you tap on the screen, and even looking at what angle the phone is being held.

Sound creepy? These are just some of the ways that Facebook is verifying that you're actually human and not one of the tens of millions of bots attempting to invade the social network each day.

That Facebook would go to such lengths underscores the escalation of the war between tech companies and bots that can cause chaos in politics and damage public trust. Facebook isn't alone. Twitter on Wednesday began removing millions of locked accounts, and Google is looking to stamp out malicious trolls on YouTube.

The road to salvation, they believe, is paved with artificial intelligence. Facebook CEO Mark Zuckerberg repeatedly pointed to AI as a solution to his social network's flaws during his testimony before Congress and again at the company's F8 developers conference. Google wants to be an AI-first company and Twitter likewise wants to use the technology to stamp out trolls.

"It is already pretty much a fundamental part of everyday life," Michael Connor, the executive director of Open MIC, a technology policy nonprofit, said. "AI is becoming part of the way we listen to music, how we handle our medical issues, and how we drive our cars."

AI's been prescribed as a cure-all remedy, able to fix all the problems that plague the internet. After all, no single person or human team could ever deal with the flood of data coming from billions of users. But how does it work? CNET got an inside look at how Google, Twitter and Facebook use AI to manage abuse on a massive scale.

Artificial intelligence works best with lots of data -- something Facebook, Google and Twitter have no shortage of. If you're training a bot to find fake news, for example, you'd amass a ton of posts that you judge as fake news and tell your algorithm to look for posts similar to them.
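None of these companies publish their training code, but the basic recipe can be sketched in a few lines. In this illustrative example, written for this story rather than drawn from any company's systems, a classifier learns from a handful of hand-labeled posts and then scores a new one:

```python
# A minimal sketch of training a "fake news" text classifier.
# The posts and labels below are invented placeholders; real systems
# train on millions of human-labeled examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "Scientists confirm miracle cure doctors don't want you to know",
    "City council approves new budget for road repairs",
    "Shocking photo proves celebrity faked their own death",
    "Local library extends weekend opening hours",
]
labels = [1, 0, 1, 0]  # 1 = judged fake/hoax, 0 = judged legitimate

# Turn the text into word-frequency features, then fit a simple classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

# Score a new post: higher values mean it resembles the fake examples.
print(model.predict_proba(["Miracle cure confirmed by anonymous doctors"])[0][1])
```

Real systems work the same way, just with millions of labeled examples and far richer features than word counts.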

Think of this machine learning like the process of teaching a newborn baby the difference between right and wrong, said Kevin Lee, Sift Science's trust and safety architect.

That's why AI will sometimes get things wrong, letting blatant examples of abuse slip past an algorithm or, at the opposite end of the spectrum, flagging harmless images as abusive. When there's little transparency, people get skeptical.

But there's a method behind the machine-learning madness.

Facebook

Cybercriminals are becoming more savvy. They employ bots that act like a hive when it comes to creating accounts on Facebook, using multiple tricks to fool the massive social network. They will use fake IP addresses, slow down their pace to match a human's and add each other as digital alibis.

But they still haven't figured out how to fake human movement.


Facebook has made fighting bots and fake news a priority. AI plays a big role in this. 

James Martin/CNET

The massive social network has been relying on outside AI resources, as well as its own team, to help it close the floodgates on bots. One such resource is Israeli startup Unbotify, which two people familiar with Unbotify confirmed was working with Facebook to detect bots.

Eran Magril, the startup's vice president of product and operations, said Unbotify works by understanding behavioral data on devices, such as how fast your phone is moving when you sign up for an account. Its algorithm recognizes these patterns because it was trained on data from thousands of workers who repeatedly tapped and swiped their phones. Bots can fake IP addresses, but they can't fake how a person would physically interact with a device.
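Unbotify hasn't published its model, but the gist of behavioral detection can be sketched with invented thresholds: human sessions show natural jitter in tap timing and motion sensors, while scripted ones are unnaturally steady.

```python
# A rough sketch of behavioral bot detection of the kind Unbotify describes.
# The feature choices echo those mentioned above, but the thresholds are
# illustrative guesses, not the company's actual model.
from statistics import pstdev

def looks_automated(tap_intervals_ms, accel_readings):
    """Flag sessions whose timing and motion look too regular to be human."""
    # Humans tap with irregular rhythm; scripts tend to be metronomic.
    tap_jitter = pstdev(tap_intervals_ms)
    # A phone held by a person is never perfectly still (breathing, hand tremor).
    motion = pstdev(accel_readings)
    return tap_jitter < 5.0 or motion < 0.01

# Perfectly even taps and a motionless "device" get flagged.
print(looks_automated([100, 100, 100, 100], [0.0, 0.0, 0.0]))   # True
print(looks_automated([80, 190, 140, 260], [0.02, 0.05, 0.01])) # False
```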

Magril declined to confirm Unbotify's relationship with Facebook, but said the company works with major social networks. Facebook also declined to comment about Unbotify.

Facebook has tried to fight off the scourge by doubling its content moderation team to 20,000 employees. Still, the numbers are massive: In May, Facebook announced it had deleted 583 million fake accounts in the first three months of 2018. That's why humans play a supporting role to AI.

"Tens of millions of accounts every single day are taken down," Lee said. "The vast majority of that is done by machines and not humans, thankfully."

Unbotify detects these types of movement by having its code in its customers' apps. Magril stressed that the company doesn't collect personal information, only behavioral data with no names or personally identifying information.

Unbotify looks for bots by tracking things like movement. The image on the left shows mouse movements on a desktop from a bot, while the right shows mouse movements made by a human.


Unbotify

But behavioral data isn't the only way that Facebook stops bots, according to Lee, a former team leader at the social network.

The company also relies on AI to automatically tell if an account is fake based on how many accounts are on one device, as well as its activities after it's created. Facebook's AI will label an account as a bot if it signs up and is able to send more than 100 friend requests within a minute, he said.

It knows, too, how many different accounts are on one device. The majority of the time, Lee said, fraudsters will have multiple bot accounts on one device. Normally, people have one Facebook account on multiple devices.
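In code, rules like those amount to a few simple checks. The field names and limits below are illustrative stand-ins, not Facebook's actual thresholds:

```python
# Toy rules in the spirit of the checks Lee describes. Field names and
# limits are invented for illustration only.
def flag_as_likely_bot(account):
    signals = []
    # A new account firing off friend requests faster than a person could.
    if account["friend_requests_first_minute"] > 100:
        signals.append("friend-request burst")
    # Many accounts registered on the same device is a fraud pattern;
    # one account used across several devices is the normal pattern.
    if account["accounts_on_same_device"] > 5:
        signals.append("shared device")
    return signals

suspect = {"friend_requests_first_minute": 240, "accounts_on_same_device": 12}
print(flag_as_likely_bot(suspect))  # ['friend-request burst', 'shared device']
```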

Facebook is relying on AI to stop fake news as well, responding to a problem often stirred up by its own algorithms.

In March, the social network said it was expanding its fact-checking program to include images and videos after a flurry of propaganda started coming from memes instead of hoax articles. The fact-checkers work in partnership with news organizations like AFP (Agence France-Presse) and the Associated Press.

Those fact-checking tools employ the AI of AdVerif.ai, according to founder Or Levi. It takes flagged images and does a reverse image lookup to see where else each one has been posted, and whether it's been altered to show something different. In the past, for instance, Facebook has caught a fake image of an NFL player burning the American flag. AdVerif.ai's process would have seen that image, reverse-searched its origins and been able to tell fact-checkers that it was edited.

"We're looking at hundreds to thousands of pictures a day," Levi said. "We find the original image and basically identify you have manipulation."

Facebook relies on third-party fact-checkers to help it classify content as hoaxes and false information, Sara Su, a Facebook product specialist for the News Feed, said at a press event on Wednesday.

"When you build machine learning classifiers to identify something as especially hoaxy, you need training data," Su said. "In this case, the ratings coming from our third-party fact-checkers are a really important source of ground truth for these classifiers."

Twitter

Unlike Facebook, Twitter prefers that humans play a larger role alongside AI. That's because of the delicate balance it must maintain between free speech and healthy conversations.

Yet harassment is one of the biggest problems plaguing Twitter, with CEO Jack Dorsey making promise after promise to fix the issue. Trolls were reportedly so rampant that they tanked Twitter's potential deal with Disney in 2016.


Twitter CEO Jack Dorsey wrestles with how to make his social network a healthier place. 

Getty Images

Twitter also uses AI to spot bot behavior, but its effort to preserve an open platform means it can't rely on AI alone to handle trolls. Spam posts are easy for Twitter's AI to track down and delete automatically; harassment is different, David Gasca, Twitter's product manager for health, said in an interview.

Every post that gets reported always gets a human set of eyes on it before any action is taken.

"Automated rules play a role in certain situations, but in others, there's a lot of nuance that is lost," Gasca said. "Especially on Twitter, there's a lot of context that goes on in various forms of conversations."

Often, it's up to millions of Twitter's users to help train its AI. The company gathers information on how often an account is muted, blocked, reported, retweeted, liked or replied to.

Its AI can recognize an account that's been blocked by 50 other people in the past, for example, and flag it for Twitter's moderators for a faster response.
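That sort of signal counting is easy to picture in code. In this sketch the events and the three-block cutoff are placeholders; Twitter's real pipeline works at vastly larger scale:

```python
# A sketch of the signal counting described above. The 50-block figure is
# the example from the article; the data here is a placeholder.
from collections import defaultdict

block_events = [  # (blocked_account, blocking_user) pairs
    ("@spammy", "user1"), ("@spammy", "user2"), ("@spammy", "user3"),
    ("@friendly", "user9"),
]

blockers = defaultdict(set)
for account, user in block_events:
    blockers[account].add(user)  # count distinct users, not repeat blocks

# Accounts blocked by many distinct people get routed to human moderators.
flagged = [a for a, users in blockers.items() if len(users) >= 3]  # 50 in practice
print(flagged)  # ['@spammy']
```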

"Machine learning improvements are enabling us to be more proactive in finding those who are being disruptive, but user reports are still a highly valuable part of our work," Nick Pickles, Twitter's senior strategist on public policy, told members of Congress at a House Judiciary Committee hearing on Tuesday.

That means Twitter's abuse-curbing AI is different for every person -- depending on who you're interacting with and who you're choosing to ignore. The AI is able to tell the difference between positive and negative interactions, and essentially helps curate the experience on Twitter.

"What you will block is different from what I will block," Gasca said. "You can create models for every user's threshold and tolerance."

If you constantly block people, Twitter's algorithm would start filtering out similar content from your feed. The idea is that you're then less likely to block what you see.
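Twitter hasn't described the model itself. One hedged way to picture a per-user threshold is a small classifier trained on what that individual person has blocked before, as in this illustrative sketch:

```python
# A hedged sketch of a per-user filter: learn from what one person has
# blocked, then hide similar tweets from their feed. This is an
# illustration, not Twitter's actual model.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Training data for one user: tweets they saw, and whether they blocked the author.
seen_tweets = [
    "you are an idiot and everyone hates you",
    "great thread, thanks for sharing",
    "delete your account you worthless troll",
    "interesting point about the new policy",
]
blocked = [1, 0, 1, 0]

user_model = make_pipeline(CountVectorizer(), MultinomialNB())
user_model.fit(seen_tweets, blocked)

def filter_feed(tweets, threshold=0.7):
    """Hide tweets this particular user would probably block."""
    scores = user_model.predict_proba(tweets)[:, 1]
    return [t for t, s in zip(tweets, scores) if s < threshold]

print(filter_feed(["you worthless idiot", "thanks for the great post"]))
```

Each user would get their own model, or at least their own score threshold, which is what makes the filtering personal.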

Since Twitter implemented this new method, Gasca said there's been a 40 percent drop in blocks from new interactions.

"That's a huge AI undertaking that predicts that someone will block a stranger after a mention," Gasca said.

Google

It's easy to forget that YouTube, better known for its video content, is its own form of social network, too, complete with the same trolls infesting the comments section. That's why YouTube uses a version of Perspective, an AI moderation tool developed by Alphabet's Jigsaw that's also available to outside publishers.


Jigsaw, a unit under Google parent Alphabet, helps moderate toxic comments with AI.

Alfred Ng/CNET

Perspective was designed to sift out toxic comments in response to the hordes of harassment online. The AI is supposed to automatically flag comments it determines would ruin conversations, letting moderators choose whether to delete them.
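Perspective is also offered as a public API, so outside moderators can score comments themselves. A minimal sketch of that workflow looks like this; the request format follows Jigsaw's published documentation, while the API key, threshold and routing logic are placeholders:

```python
# A minimal sketch of scoring a comment with Jigsaw's public Perspective API
# and routing high-scoring ones to a human moderation queue. The API key and
# threshold are placeholders; check Jigsaw's current docs before relying on this.
import requests

API_KEY = "YOUR_API_KEY"  # placeholder
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")

def toxicity_score(text):
    body = {
        "comment": {"text": text},
        "requestedAttributes": {"TOXICITY": {}},
    }
    resp = requests.post(URL, json=body).json()
    return resp["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

def route_comment(text, review_threshold=0.8):
    # Perspective only scores; a human moderator still makes the final call.
    if toxicity_score(text) >= review_threshold:
        return "send to moderation queue"
    return "publish"
```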

The AI takes millions of comments and feeds them to thousands of people who help label them, said CJ Adams, a Jigsaw product manager. Groups of people rate every comment that comes in, telling the AI whether the comment is spam, harassment or obscene content.


YouTube CEO Susan Wojcicki addresses online video creators at the VidCon conference in 2015. Wojcicki has made it a priority to help YouTubers fight off trolls and toxic comments.

Joan E. Solsman/CNET

The algorithm learns from those labels and then hunts for similar comments in the wild. Adams said Perspective doesn't delete automatically, instead letting a human make the call.

"It makes mistakes, and it is not good for making automated decisions," Adams said. "But what it is good for is kind of taking a needle-in-a-haystack problem and turning it into a needle in a handful of hay."

Perspective works by allowing moderators to filter comments based on what its algorithm determines is toxic.

Google

Perspective gets its training data from comments on websites it's partnered with, like The New York Times and Wikipedia. That's a far cry from its debut in 2017, when the AI couldn't tell the difference between trash talk and harassment on sports sites. The progress stems from having more comments fed into the system, allowing it to continue learning. It now knows "Mets suck" doesn't always mean someone is being attacked, Adams said.

Perspective relies on constantly being retrained to learn, and to defend itself against trolls. When the AI first kicked off, Adams said "a ton of abuse came in" from trolls on 4Chan looking to trick the algorithm.

"They would type in awful things and say it's not toxic, hoping to retrain it and trick it," he said. But Perspective's team was already fighting against it, with humans rating those comments as toxic.

It ended up helping Google out even more, allowing it to use that data to stop trolls in the future.

"What we got was this huge trove of amazing abuse, some of the best stuff that these troll mobs could throw," Adams said. "We were suddenly like, 'Thank you.'"

Machine tuning

Tech companies may be high on AI, but that doesn't mean there aren't risks. On the Fourth of July, Facebook's algorithm judged parts of the Declaration of Independence to be hate speech and mistakenly deleted a post. 

"Think about that for a moment. If Thomas Jefferson had written the Declaration of Independence on Facebook, that document would never have seen the light of day. No one would be able to see his words because an algorithm automatically flagged it," Rep. Bob Goodlatte, a Republican from Virginia, said during the hearing on Tuesday.

While the post was quickly restored because of the backlash, many other posts mistakenly removed aren't as lucky.

False positives are a challenge for AI, no matter how good it gets. Even if there's only a 1 percent chance of error, with 2 billion people on Facebook and 1 billion on YouTube, that's still tens of millions of pieces of toxic content or fake accounts slipping through: 1 percent of 2 billion accounts alone is 20 million.


Protesters set up 100 cardboard cutouts of Facebook founder and CEO Mark Zuckerberg outside the US Capitol in Washington, DC, to call attention to hundreds of millions of fake accounts still spreading disinformation on Facebook.

Saul Loeb/AFP/Getty Images

"Even if they are getting 99 percent, that 1 percent is getting through to somebody, and the consequences are real-world attacks," said Eric Feinberg, the lead researcher on the Digital Citizens Alliance report on terrorist content and social media.

His team found 55 ISIS accounts and reported them to Facebook in December and January. Feinberg said Facebook failed to take down 24 of them, claiming they didn't violate Facebook's terms of service, even as these accounts were posting pro-terrorism memes.

Tech companies understand the problems, but they also recognize their platforms have wildly outgrown their moderation tools. Once a platform reaches this many users, the potential for abuse rises exponentially along with them.

At this scale, AI is the only way to deal with the problem, despite the mistakes along the way. Tech companies hope the algorithms will learn from them and improve.

"Facebook literally could not hire enough people to do this," Lee said. He compared the shift to AI to the Industrial Revolution. "You have to depend on machines. Not everything is done by hands anymore, but that's OK."

CNET's Ben Fox Rubin contributed to this story.

Originally published July 13 at 5 a.m. PT.
Update, July 17 at 8 a.m. PT: Adds remarks from a House Judiciary Committee hearing.
