AI Versus AI: Is Ethics An Arms Race?

Suppose you were concerned about a new technology and the risks it might pose to individuals and to society. One thing you could do is form a committee of experts, practitioners of the technology, ethicists, philosophers and other like-minded people. You might then propose policy, regulation and best practices to minimize harm.

Another approach would be to just write computer programs, presuming that your enlightened engineering effort would eliminate harm by making technology do the right thing from the get-go.

In the field of artificial intelligence, where fear mixes with elation in the headlines, most attention has been focused on the former option: creating non-profit consortia such as the Partnership on AI that aim to address ethical issues through guidelines, policy recommendations and the like.

But there is another trend: developers building their AI systems with some explicit notion of doing the right thing baked in from the start.

The result looks something like a battle between different AI programs, each one professing to protect privacy or guard against fraudulent acts such as “deep fake” media. 

It is possible that in a few years’ time, neural networks and other forms of AI will be increasingly occupied not with engaging humans in chat or solving human problems, but with facing off against other machines in a kind of closed loop, an adversarial contest to destroy or protect rights.

Take just one example. This week, two fascinating research projects surfaced, both trying to protect people, and yet in another sense diametrically opposed in their approaches. 

Chinese e-commerce giant Alibaba, in conjunction with other institutions in China, proposed a new program to detect fake images of faces created by machine learning. The same week, researchers in the Department of Computer Science at the Norwegian University of Science and Technology, in Trondheim, proposed a new way to generate fake faces in order to anonymize photos and preserve people’s privacy.

Two efforts, both trying to protect society on some level, coming at the problem from opposite directions, one trying to expose and the other trying to conceal. 

The science of the efforts is a kind of ballet of code. 

On the one hand, something called a “generative” neural network can create fake versions of things, such as images or text. Feed a large collection of images into a generative neural network and, over many, many examples, it will work out the repeating patterns that are common across them. Once the network is “trained,” it can use those statistical patterns to fashion brand-new images starting from random statistical noise. Such generative networks create the deep fakes of all sorts that have gotten everyone so anxious of late.
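To make that concrete, here is a minimal sketch of a toy generator, written in PyTorch. It takes a vector of random noise and turns it into a small image. The framework, layer sizes and names are illustrative choices for this sketch, not details from either research project.

```python
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    """Toy generator: turns a vector of random noise into a small image."""
    def __init__(self, noise_dim=64, image_size=28):
        super().__init__()
        self.image_size = image_size
        self.net = nn.Sequential(
            nn.Linear(noise_dim, 256),
            nn.ReLU(),
            nn.Linear(256, image_size * image_size),
            nn.Tanh(),  # squash pixel values into [-1, 1]
        )

    def forward(self, noise):
        flat = self.net(noise)
        return flat.view(-1, 1, self.image_size, self.image_size)

generator = TinyGenerator()
noise = torch.randn(1, 64)       # the "random statistical noise" the text describes
fake_image = generator(noise)    # a brand-new 28x28 image, shape [1, 1, 28, 28]
```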

On the other hand, a second kind of neural network, known as a classifier, can detect such deep fakes by spotting the subtle ways in which the generated images’ statistical patterns differ from those of real images.
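The detector side can be sketched in the same spirit: a toy classifier that takes an image and produces a single score for real versus fake. Again, the details below are illustrative, not the architecture of any particular published system.

```python
import torch
import torch.nn as nn

class TinyFakeDetector(nn.Module):
    """Toy classifier: outputs one score, high for 'real', low for 'fake'."""
    def __init__(self, image_size=28):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(image_size * image_size, 128),
            nn.ReLU(),
            nn.Linear(128, 1),  # a single real-vs-fake logit
        )

    def forward(self, image):
        return self.net(image)

detector = TinyFakeDetector()
image = torch.randn(1, 1, 28, 28)                 # stand-in for a real or generated image
probability_real = torch.sigmoid(detector(image)) # closer to 1 means "looks real"
```

Train the two against each other and you get the closed adversarial loop described above: the generator learns to fool the classifier, and the classifier learns not to be fooled.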

For the first part, the generative network, the team at the Norwegian University of Science and Technology came up with a neural network program called “DeepPrivacy,” which can change the features of faces in an existing picture: everything in the scene is preserved, but people’s faces are replaced with fake features that make them unrecognizable, as if an artist had retouched the picture, all done automatically.
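As a rough illustration of the anonymization idea, not the actual DeepPrivacy code, the sketch below blanks out a face region (in practice supplied by a face detector; here it is a hard-coded box) and lets a toy generator fill the hole with new pixels, leaving the rest of the photo untouched.

```python
import torch
import torch.nn as nn

class TinyFaceInpainter(nn.Module):
    """Toy network that fills a blanked-out face region with generated pixels."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, kernel_size=3, padding=1), nn.Sigmoid(),
        )

    def forward(self, image):
        return self.net(image)

photo = torch.rand(1, 3, 128, 128)   # stand-in for a real photograph
top, left, size = 40, 40, 48         # face box; a real system gets this from a face detector

anonymized = photo.clone()
anonymized[:, :, top:top + size, left:left + size] = 0.0   # erase the original face

inpainter = TinyFaceInpainter()
with torch.no_grad():
    generated = inpainter(anonymized)

# Paste the generated pixels back into the erased region only,
# leaving the rest of the scene untouched.
anonymized[:, :, top:top + size, left:left + size] = \
    generated[:, :, top:top + size, left:left + size]
```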

For the second part, the classifier, the Alibaba team took some existing programs that can recognize faces and found an ingenious way to tell whether the faces being recognized are real or fake. It turns out that when deep-fake images of faces are passed through a face-recognition neural network, the statistics of the network’s internal calculations differ from the pattern of calculations produced by a real image. There’s a trace, a statistical signature, in other words, that gives away the deep-fake image, even if the faces look quite convincing to the human eye. They call this detective program “FakeSpotter.”
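The gist of that detection strategy can be sketched as follows: pass an image through a recognition network, record simple statistics of how its internal layers fire, and feed that “signature” to a small real-versus-fake classifier. The untrained ResNet, the mean-activation statistic and the single-layer classifier below are stand-ins for illustration, not the components of FakeSpotter itself.

```python
import torch
import torch.nn as nn
from torchvision import models

# Stand-in for a face-recognition network; a real detector would use a
# trained face-recognition model rather than an untrained ImageNet ResNet.
backbone = models.resnet18()
backbone.eval()

# Record a simple statistic -- the mean activation -- of each major layer
# as an image passes through the network.
activation_stats = []

def record_mean_activation(module, inputs, output):
    activation_stats.append(output.detach().mean().item())

for layer in (backbone.layer1, backbone.layer2, backbone.layer3, backbone.layer4):
    layer.register_forward_hook(record_mean_activation)

face_photo = torch.rand(1, 3, 224, 224)   # stand-in for a face image
with torch.no_grad():
    backbone(face_photo)

# The per-layer statistics form a small "signature"; a classifier trained on
# signatures from known real and fake faces would then score new images.
signature = torch.tensor(activation_stats)           # one number per layer
real_vs_fake_head = nn.Linear(signature.numel(), 1)
score = torch.sigmoid(real_vs_fake_head(signature))  # probability the face is real
```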

So one group gets better at making fakes, and another gets better at spotting them. And interestingly, both claim good intentions. The Norwegian group wants to comply with Europe’s GDPR privacy law, which pushes organizations to anonymize personal data. The Alibaba team, for its part, doesn’t want the world polluted with deep fakes that could mislead people, say, by creating a fake but convincing-looking news broadcast, or harm individuals by depicting them in ways that are untrue.

Where will this dance of fakes and fake-detectors lead? As the Alibaba team write, in somewhat awkward language, a battle of “techniques” is ensuing that will continue indefinitely. “The arms race between creating fakes and fighting fakes is on the endless road,” they write, “and powerful weapons should be developed for protecting the privacy of us while we are enjoying the AI techniques.”

But who’s to say what kinds of powerful tools those might be, when good actors come at the matter from opposite sides? Can a middle ground be found? Will something like “mutually assured destruction” balance these forces, as it did during the Cold War?

Anyone claiming a role in AI ethics, including groups such as the Partnership, is going to have to contemplate a more complex future, one in which they don’t just foil bad actors but also sort out the conflicting claims of good actors whose creations are increasingly locked in a kind of battle with one another.