A.I. Killed The Radio Star

Charlotte Kemp Muhl (UNI and The Urchins) on the future of bot-generated art.

I remember seeing an ad in 2015 for a “unique music generator” program called XHail, which terrified me as a mere mortal composer. It promised to generate a song or film score in any style, tempo, or genre at the click of a few buttons, under the directive of a few adjectives. It didn’t even require constant snack breaks and ego-stroking like most musicians.

The demo example sounded like a fairly convincing Hans Zimmer (you could even say his trademark major arpeggios are “algorithmic”). Then it generated faux Danny Elfman. And country pop. And funk.

It dawned on me how fundamentally replaceable most artists are by this tech, which called into question whether we are simply parodies — or predictable code — of ourselves anyway.

Fast forward a few years and I’ve decided that if you can’t beat them, join them. I’ve been toying around with making neural-network-generated “paintings,” some of which would be uncannily convincing hung next to a Magritte, and which made me foment a few hilarious get-rich-quick schemes with my bandmates — like hiring people on Fiverr to repaint them quickly and sell them to pretentious NYC galleries. We even wrote a song using lyrics freestyled by an A.I. for UNI and The Urchins’ last release (“Simulator”). If you like David Lynch and Syd Barrett with a touch of damage to Broca’s area (the part of our brain that governs linguistics), then A.I. slam poetry is definitely for you.

Not unlike most humans, it required heavy curation and help with its rhyme schemes. But as someone who always admired Brian Eno and Bowie’s experimental writing techniques, the appeal of this process was that it seemed like a Web 3.0 version of Exquisite Corpse or Oblique Strategies.

Through working with this burgeoning neo-sentience, which parses data averages and search hits across the internet much as the nodes in our brains make associative leaps, I’ve started to wonder where we fit into all of this. Our soon-to-be-obsolete wetware brains will be like the beeper to its smartphone. How can the references we’ve collected in our short analog life spans compete with the cyber-libraries of Alexandria? For the time being, our imagination and directives still count; it would be lost without our verbal “prompts,” the specific descriptor inputs for the generators — e.g., “A girl with spider legs and a nun’s habit sitting by a Byzantine window in the style of Bosch.” Learning how to properly word the prompts and set the parameters to get the result you want is like learning to speak robot.

Most of the A.I. art that people have created so far is cheesy (sorry, not sorry), so our individual taste is still clearly a currency. But can’t culture and taste be programmed too? Based on our Spotify and YouTube recommendations, we’re highly predictable phenotypes. My tailored Instagram ads know me better than most of my family. “Imagination is more important than knowledge,” Einstein once said. That is, until these neural networks no longer require our prompts.

Although it’s seemingly superior at many tasks, A.I. is not without human foibles. The DALL-E network had to be restructured when it was brought to its creators’ attention that it overwhelmingly generated white men for the prompt “CEO.” “Loab” was another scandal, wherein the A.I. associated images of an older woman with horror and gore, and she kept reappearing to haunt people’s image generations. But neural networks aren’t inherently opinionated; they’re simply siphoning Venn diagrams of data. Perhaps they’re more like the child in the parable of the Emperor’s New Clothes — emotionlessly and tactlessly pointing to mass-consensus truths of our flawed collective unconscious.

This sociopathic literalism is where the Paperclip Problem arises (the theory that if you program a robot with a seemingly innocuous task, like making as many paperclips as possible, it might stop at nothing — including killing everyone on the planet — to do so). Or the Trolley Problem, which driverless-car manufacturers are already struggling with (if faced with the choice between running over two old men or one small child, which would the car’s A.I. prioritize?). These debates have been raging for some time now, but I haven’t heard many dialectics regarding the ethics of A.I. and art, besides my occasional designer friend getting angry at me for generating my own haute couture.

We used to think the conception and execution of art would be the last thing computers could replace. Now it seems it might be one of the first. There was a time when a jazz guitarist could support their family — or a jingle composer, painter, fashion designer or screenwriter. All of these fields and more will soon be replaced by A.I., which will be cheaper and faster and eventually more adroit. We already have hologram pop stars like Hatsune Miku, and virtual supermodels like Shudu Gram. I read an Olive Garden commercial written by a bot that was more hilarious than most Key & Peele skits. If you thought it was hard to make a dime now as a content creator, just wait for the next industrial revolution, where you’ll suddenly find yourself identifying with the Luddites who stormed the factories to burn weaving machines. Or with the taxi drivers who worked their whole lives to pay off a medallion, angrily flipping over Uber cars.

There will soon be an existential reckoning with what it means to be an artist. But we’ve already been primed for this by social-score algorithms (clickbait, views, likes), which generally monetize the most banal content while risk-averse record labels and film investors edge out everything else. Algorithms produce aesthetic echo chambers in the same way they produce political echo chambers. Movements are reduced to memetics. Audiences are stratified. Consumers are squashed into cynical predictive demographics with a cyber spin on Cold War game theory. Instead of going to an exhibit and being surprised by new ideas, or inventing genres in response to authentic zeitgeists (the way psychedelia and hip hop were byproducts of their cultures), we are fed recursive paradigm reinforcers. Eventually we all become part of a human centipede of upcycled trends in virtual bubbles. (Think of the rhesus monkey/cocaine experiment, where the monkeys continuously pressed the button for cocaine instead of food.)

But the future of this tech is not all grim and Soylent Green. Maybe it will incentivize human innovation to compete and come up with new formats of entertainment and art that bots haven’t mastered. Or maybe we’ll transcend the need to “make stuff” altogether and float around like Wall-E in some technocratic utopia. For now, it might be hard to believe a bot could write a better love song if it’s never had its heart broken, yet you might be surprised to know a virgin wrote the Kama Sutra, and Brian Wilson of the Beach Boys had never actually gone surfing.

The infinite monkey theorem suggests that if you give an infinite number of monkeys typewriters, eventually one of them will write the entire works of Shakespeare by accident. (It wasn’t specified whether this experiment also involved cocaine, but it seems self-evident.) So given enough time, perhaps these algorithms will surpass Beethoven in their beauty. Maybe these technologies will even outlive us after a nuclear war, and the planet will be populated with bohemian machines impervious to radiation poisoning who just write symphonies and paint murals for a billion years, until some other civilization discovers them, and that will be our legacy… We created the creators.

(Photo Credit: Celeste Martearena)

Charlotte Kemp Muhl is the bassist for NYC art-rock band UNI and the Urchins. She has directed all of UNI and The Urchins’ videos and mini-films, and engineered, mixed and mastered their upcoming debut album Simulator (out 1/13/23 on Chimera Music) herself. A workaholic and a diehard control freak, her biology is 70% composed of cup ramen and useless science factoids.

UNI and the Urchins’ AI-written song/AI-made video for “Simulator” is out now.