UPGRADING

Why I’m turning my son into a cyborg

Where do we draw the line between boosting human potential and eroding our humanity?
Image: Bárbara Abbês for Quartz

Imagine if everyone spoke a language you don’t understand. People have been speaking it around you since the day you were born, but while everyone else picks it up immediately, for you it means nothing. Others become frustrated with you. Friendships and jobs are difficult. Just being “normal” becomes a battle.

For many with autism, this is the language of emotion. For those on the spectrum, fluency in facial expressions doesn’t come for free as it does for “neurotypicals.” To them, reading facial expressions seems like a superpower.

So when my son was diagnosed, I reacted not just as a mom. I reacted as a mad scientist and built him a superpower.

This isn’t the first time I’ve played mad scientist with my son’s biology. When he was diagnosed with type 1 diabetes, I hacked his insulin pump and built an AI that learned to match his insulin to his emotions and activities. I’ve also explored neurotechnologies to augment human sight, hearing, memory, creativity, and emotions. Tiger moms might obsess over the “right” prep schools and extracurriculars for their child, but I say why leave their intellect up to chance?

I’ve chosen to turn my son into a cyborg and change the definition of what it means to be human. But do my son’s engineered superpowers make him more human, or less?

How the CIA taught me to smile

Life gave me an amazing and exhausting little boy. It also gave me unique tools to help him overcome his challenges.

The first came in the form of a crazy CIA scheme to create an AI to catch liars. Years ago, on my very first machine-learning project as an undergrad, I helped build a real-time lie-detection system that could work off raw video. The AI we developed learned to recognize the facial expressions of people on camera and infer their emotions. It explored every frame of video, learning the facial muscle movements that indicated disgust (nose wrinkle + upper lip raise) or anger (eyebrows down and together + eyes glare + lips narrow). It even learned to distinguish “false” smiles from “true” ones, otherwise known as Duchenne smiles (a tightening of the supraorbital muscles around the eyes).
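To make that concrete, here’s a toy sketch of the kind of mapping the system effectively learned: hand-written rules from facial action units to emotions, standing in for associations the real model extracted statistically from frame after frame of video. The action-unit numbers follow the standard Facial Action Coding System; the code itself is purely illustrative and nothing like our actual system.

```python
# Illustrative only: hand-written action-unit rules standing in for what
# the real model learned statistically from video frames.
# Action units (AUs) follow the Facial Action Coding System (FACS).

EMOTION_RULES = {
    "disgust":     {9, 10},     # AU9 nose wrinkler + AU10 upper lip raiser
    "anger":       {4, 5, 23},  # AU4 brows down/together + AU5 eye glare + AU23 lips narrow
    "true_smile":  {6, 12},     # AU12 lip corners up + AU6 tightening around the eyes (Duchenne)
    "false_smile": {12},        # AU12 alone: the mouth smiles, the eyes don't
}

def label_frame(active_aus: set) -> str:
    """Return the first emotion whose required action units are all active."""
    # Check the most specific rules first so a Duchenne smile isn't
    # mistaken for a false one.
    for emotion, required in sorted(EMOTION_RULES.items(), key=lambda kv: -len(kv[1])):
        if required <= active_aus:
            return emotion
    return "neutral"

print(label_frame({6, 12}))    # true_smile
print(label_frame({12}))       # false_smile
print(label_frame({9, 10}))    # disgust
```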

Before this project, I assumed I’d spend a long neuroscience career sticking electrodes into brains. But watching our algorithms learn such a foundationally human task hooked me on studying how natural and artificial intelligence can work together.

Fast forward through the next decade of my academic career (neural coding and cyborgs) and my first few startups (AI for education and jobs), and I had built a reputation as the crazy lady seeking to “maximize human potential.” When the ill-fated Google Glass, a wearable smartphone masquerading as a pair of glasses, was launched by throwing some guys out of a blimp, I was invited to explore ideas for what could be done beyond social posts and family videos.

For a woman who wanted to build cyborgs, there was so much potential. Along with its computing power, Glass had a live camera, a heads-up display, and a combination of voice and head-motion controls. Drawing from that old CIA project and my years of machine-learning research, I began to build face- and expression-recognition systems for Glass. (In truth, the crappy little processor would heat up like a bomb, so the system required an extra computer strapped to the user’s back to work—not exactly Iron Man.)

Using these augmented reality glasses, I could read people’s faces—and do so many more terrible things. I imagined using them to scan a room, reading expressions and flagging false smiles (LA and DC, I’m looking at you). I saw a future where we could access credit scores, or pull up Facebook or Grindr accounts (or Ashley Madison for CFOs). The scene could play out like an episode of Black Mirror, with Glass cuing my actions to exploit the emotional vulnerabilities of others.

But I wasn’t interested in the questionable or downright terrifying applications. I just wanted to give kids like my son greater insight into the people around them.

In 2013 I built a proof-of-concept system called SuperGlass. Based on research from one of my academic labs, our system could recognize the expression of a face and write the emotion on Glass’s little heads-up screen, allowing an individual with autism to more easily perceive whether the person in front of them was happy, sad, angry, or something else. Simply wearing Glass while continuing everyday social interactions allowed these kids to learn that secret language of facial expressions; it was the real-time version of flashcard-based emotion-recognition training with cartoon faces on cardboard.
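For the technically curious, the core loop of such a system is simple enough to sketch. What follows is only an illustration of the idea, not SuperGlass itself: capture_frame, classify_expression, and show_on_hud are hypothetical stand-ins for the Glass camera, our lab’s expression classifier, and the tiny heads-up display.

```python
# A simplified sketch of a SuperGlass-style loop, not the production system.
# The three functions passed in are hypothetical stand-ins for the camera,
# an expression classifier, and the heads-up display.

import time

EMOTIONS = {"happy", "sad", "angry", "surprised", "neutral"}

def run_expression_hud(capture_frame, classify_expression, show_on_hud,
                       min_confidence=0.6, refresh_seconds=0.5):
    """Continuously label the expression of whoever is in view."""
    while True:
        frame = capture_frame()                          # grab an image from the camera
        label, confidence = classify_expression(frame)   # e.g. ("happy", 0.82)
        if label in EMOTIONS and confidence >= min_confidence:
            show_on_hud(label.upper())                   # e.g. "HAPPY" on the little screen
        else:
            show_on_hud("")                              # stay quiet when unsure
        time.sleep(refresh_seconds)                      # real hardware ran far slower than this
```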

But learning that a smile means happiness from a flashcard teaches kids nothing about why people are happy. Learning the same from natural social interaction actually helps build theory of mind, another secret language thought to be missing in autism.

This research has continued over the years and overcome many of its original limitations. For many kids, these systems are more than a prosthetic—they actually advance their learning of this secret emotional language. A team at Stanford has shown that such a system can improve children’s expression recognition even when they aren’t wearing it. Our pilot even found that it helped foster empathy.

But the more I experimented, the more I realized that I didn’t want to “cure” my son’s autism. I didn’t want to lose him and his wonderful differences. SuperGlass became a tool to translate between his experience and us neurotypicals (a scientific term that means “your brain is boring”). It didn’t level the playing field—it just gave him a different bat to play with.

In an era where jerks like me are building AIs to replicate human tasks, your value to the world will become what makes you uniquely human. The more different you are, the more valuable you become. My son is therefore priceless.

That said, there was still a question nagging at me: How could I make sure I was helping these kids navigate a sometimes alien world, rather than making them the aliens themselves?

Making life better, or just different?

I want to build a world where everyone has superpowers. And one of the ways to do that is through a field known as “neuroprosthetics.”

Neuroprosthetics are implants that directly interface with your brain. They’re already transforming many people’s lives today: cochlear implants for deafness, retinal implants for the blind, motor neuroprosthetics for the paralyzed, and deep brain stimulation for a rather extraordinary array of disorders, including depression and Parkinson’s.

What other advantages could neuroprosthetics bring? Research shows that we can augment creativity and emotional control, as well as influence honesty, pleasure, and numerous other foundations of self. My particular area of research and development is cognitive neuroprosthetics: devices that directly interface with the brain to improve our memory, attention, emotion, and much more. I’ve worked on systems to predict manic episodes in bipolar sufferers. Groups at MIT are using rhythmic visual or auditory stimuli to reduce Alzheimer’s symptoms, and other groups are building systems to detect seizures and depression.

For many, the idea of computers being jammed into our brains evokes science-fiction nightmares like the Borg from Star Trek or the human-like machines of The Terminator. While my own work takes me in very different directions than these dark stories, it’s true that neuroprosthetics are already beginning to change the definition of what it means to be “human,” and the end result of these explorations of humanity is not at all clear.

The project that first made me realize the potential of neuroprosthetics came during grad school at Carnegie Mellon. My advisor and I developed a machine-learning algorithm that learned how to hear just by “listening” to the sounds we recorded in the parks around Pittsburgh. As it listened, the algorithm slowly learned to hear more and more, subtly adjusting millions of internal calculations to make greater sense of its auditory world: the trill in a birdsong, the snapping of a twig, the t in “Vietnamese.”
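If you want a feel for what “learning to hear” means here, the sketch below is a miniature of the idea: unsupervised dictionary learning on short windows of sound, with synthetic tones standing in for the Pittsburgh recordings and scikit-learn standing in for our actual model and training procedure.

```python
# A toy version of "learning to hear": unsupervised dictionary learning on
# short windows of sound. Synthetic tones replace the real park recordings,
# and scikit-learn replaces the model we actually used.

import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(0)
sample_rate = 16_000
t = np.arange(5 * sample_rate) / sample_rate

# Fake "natural" audio: a few overlapping tones plus background noise.
audio = sum(np.sin(2 * np.pi * f * t) for f in (220, 440, 880))
audio = audio + 0.3 * rng.standard_normal(t.size)

# Slice the waveform into short, overlapping windows (the snippets the
# algorithm "listens" to) and center each one.
window, hop = 256, 64
frames = np.stack([audio[i:i + window] for i in range(0, audio.size - window, hop)])
frames -= frames.mean(axis=1, keepdims=True)

# Learn a small dictionary of acoustic features; trained on real recordings,
# features like these start to resemble the tuning of the auditory system.
learner = MiniBatchDictionaryLearning(n_components=32, alpha=1.0,
                                      batch_size=64, random_state=0)
learner.fit(frames)
print(learner.components_.shape)  # (32, 256): 32 learned sound features
```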

I began to wonder if we could build an AI-driven cochlear implant: a neuroprosthetic ear that restores hearing to some forms of deafness. Our experiments showed that the algorithm greatly improved speech perception for those using the implants.

It was the first time I’d built something that could transform someone’s life, and I knew this was how I’d spend the rest of mine.

But it was also an introduction to the messy complexity of what makes a “better” life. As a naive hearing person, it never occurred to me that anyone would choose deafness. But I learned that some parts of the deaf community consider cochlear implants to be genocide: an erasure of their unique languages, way of life, and who they are.

Much as with autism, I’m often confronted with the dilemma of “curing” people of who they are versus giving them the tools to share those rich differences with the world. But how can we respect someone’s humanness while also giving them the choice to become more like the majority of humans?

After all, sometimes what it means to be human is tragic. A car accident, fall, or even poverty can take a child’s future away from them. Children with traumatic brain injuries (TBIs) often suffer devastating, long-term mental and physical challenges. Clinical videos of kids and adults tearfully struggling with tasks that used to be trivial are heartbreaking. Many of those with TBIs have trouble with their working memory span, which is roughly how many “chunks” of information a person can hold in mind at any given moment; working memory plays a sizable role in educational attainment, lifetime income, and even health and longevity.

If we know we can make a difference in these people’s lives, isn’t not intervening as morally perilous as augmentation run amok?

I think so. At my mad science incubator Socos Labs, one of the neurotech startups we’re working with is aiming to make a difference in these kids’ lives. HUMM has developed a wearable headband that uses transcranial alternating current stimulation (tACS) to strengthen the connection between frontal parts of the cortex (crucial for working memory) and more posterior brain regions. This stimulation promotes increases in multitasking performance, attention, and working memory span.

In a recent experiment, adults wearing the HUMM device increased the length of a sequence of lights and sounds they could reliably remember by 20%, compared to sham stimulation. In another recent experiment, similar stimulation improved working memory in seniors experiencing cognitive decline.

This technology could have a tremendous impact on a kid with a TBI and others struggling with working memory challenges. If a non-invasive device paired with intense therapy could improve their chances of living longer, richer lives, no loving society should deny them this opportunity.

But there’s a flipside. Neurotypical humans could see these kinds of cyborg-esque technologies as giving these kids unfair advantages. In a world that values difference, untypical humans paired with neuroprosthetics might become even more powerful than fully abled ones. If these kinds of augmentations can lift them above the crowd, soon everyone will want to be more than human.

So what happens when we start giving these superpowers to those who are already superheroes?

Curing normal

It would be willfully naive to think that neuroprosthetics research ends with these children or those suffering from dementia. If these technologies can augment functions for differently abled populations, they will inevitably one day do the same for neurotypicals.

This is already happening without neuroprosthetics. Students in the US experiment with drugs like Ritalin and Adderall to improve their academic outcomes, even though the benefits might be an illusion. These prescription medications are meant to help those with ADHD focus—not give their neurotypical peers a study boost.

Though it may be a tiny cognitive advantage—if any at all—it’s one that’s only available to kids who already have the means to purchase the performance enhancers. We already know that socioeconomic factors dominate university admission and long-term economic success, and without the advantages of wealth, a little augmented intelligence helps much less.

It’s a good example of how science and technology can further drive existing inequalities. In theory, anyone might have access to new neurotechnologies. But in reality, those most able to take advantage of them are likely to be the ones who need them the least. Simply being born into poverty and stress robs children of their cognitive potential, whereas having wealthy parents dramatically impacts a child’s outcomes, even working memory.

Imagine these advantages not being subtly embedded in the life experience of well-off Westerners, but being directly for sale—and turned up to 11. Intergenerational social and economic mobility would disappear.

Performance-enhancing devices like these are in our near future. You can think of them like music equalizers. You might already have an app on your phone that lets you amplify the bass and treble of the songs you listen to. Sliding the controls around doesn’t fundamentally change the song, but it emphasizes different elements, from the clarity of a voice in an opera to the big bass drop of dance music.

Now imagine the app equalizes you. Instead of adjusting the power at different sound frequencies, sliding a controller on this app boosts your attention or dampens your creativity. Add in a boost for memory and you are ready to cram for an exam. Hit the “Date Night” preset to stimulate emotion and focus while dampening cognition. (If there’s a bad romantic comedy in your near future, why be too smart to enjoy it?) These abilities could become a sweet-16 gift from hyper-competitive parents, or be bought in Silicon Valley strip malls as performance-enhancing pick-me-ups.

Where do we draw the line between boosting human potential and eroding our humanity? Any system I build follows my most important technology design rule: You should not only be better when you’re using it, you should be better when you turn it off. Neuroprosthetics shouldn’t replace what we can do for ourselves—they should augment who we aspire to be.

I don’t want to “cure” someone of themselves. Especially not my son. I want them to be able to share that self with the world.

Kurt Vonnegut’s short story “Harrison Bergeron” imagines a world in which prosthetic handicaps make us all equal by removing advantage. While a standardized world may seem utopian, it is equally possible that we’d lose our rich differences through over-augmentation. If we assume there is only one kind of strength, one kind of beauty, or one kind of intelligence, then we might super-normalize away the rich difference of human existence.

It’s seductively easy to imagine a world in which we’re a little smarter or a bit more creative, in which our kids have the latest advantage. But augmentation could also become a tool to entrench inequality even more firmly.

These technologies can and should be used to give people with disabilities—the non-neurotypical—the ability to exist and thrive in a neurotypical world. But what happens once everyone has a superpower in their back pocket?

What happens when we all want to become superhuman?