‘Yes, AI is the future of music – but not in the way you’d think.’

The following op-ed comes from Oleg Stavitsky, CEO of Endel, an AI-powered sound wellness company headquartered in Berlin. Endel says that its patented technology takes inputs from a user's movement, time of day, weather, heart rate, location, and other factors and then uses AI to generate personalized soundscapes that adapt to changes in real-time. Endel recently inked a deal to produce wellness playlists for Amazon Music.

At the moment, AI is seen either as a threat (tech that will replace the artist or songwriter) or as a mere tool that will be woven into the creative process of composers and producers.

Yet AI’s most groundbreaking role will likely be as a new medium that will shift music into more adaptive, responsive formats.

Technology has always played an important part in transforming how we produce, record, and consume music.

Over the past 20 years, from the iPod to AirPods, from mixtapes on SoundCloud to playlists on Spotify, music has become ever-present and mobile like never before, and that has reshaped its formats.

Generative AI can power the next revolution in music mediums. The medium is the message: the way music is delivered to us today influences both its format and the music itself.

On one hand, TikTok and streaming are shrinking music into bite-sized 30-second clips and skits. On the other, YouTube has birthed the functional music genre of infinite long-form videos designed to help listeners sleep or study.

These long-form videos are essentially soundscapes, rooted in Brian Eno's ideas of generative music. Today, Eno's idea of music as a system, a hands-off approach to composing, is ripe for innovation.

We are surrounded by data: the devices around us know our average heart rate and step count, our wake-up and sleep times, our sex, age, chronotype, and menstrual cycle. Imagine feeding all this information into a generative AI model and adding artist stems into the system.

What you get is music that lives and breathes with you. That adapts to when you wake up, the number of meetings you have, your current heart rate, circadian rhythm, and movement. That knows when to be barely audible, and when it’s time to shield you from the world.
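To make the idea concrete, here is a deliberately toy sketch in Python of how listener context might steer such a system. Everything in it (the names, the thresholds, the mapping rules) is a hypothetical illustration, not a description of Endel's patented technology or any real product.

```python
# Hypothetical sketch: context signals (heart rate, time of day, movement)
# drive the parameters of a generative soundscape built from artist stems.
# All names and heuristics below are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class ListenerContext:
    heart_rate_bpm: int   # e.g. from a wearable
    hour_of_day: int      # 0-23
    is_moving: bool       # e.g. from accelerometer/step data

@dataclass
class SoundscapeParams:
    tempo_bpm: float      # pacing of the generated soundscape
    intensity: float      # 0.0 (barely audible) to 1.0 (enveloping)
    stem_tags: list[str]  # which artist stems to pull from

def derive_params(ctx: ListenerContext) -> SoundscapeParams:
    """Map raw listener context into generative parameters (toy heuristics)."""
    # Nudge tempo toward a calming fraction of the listener's heart rate.
    tempo = max(50.0, min(90.0, ctx.heart_rate_bpm * 0.8))

    # Late-night hours call for low intensity; movement raises it.
    if ctx.hour_of_day >= 22 or ctx.hour_of_day < 6:
        intensity, tags = 0.2, ["pads", "drones"]
    elif ctx.is_moving:
        intensity, tags = 0.8, ["percussion", "arps"]
    else:
        intensity, tags = 0.5, ["keys", "textures"]

    return SoundscapeParams(tempo_bpm=tempo, intensity=intensity, stem_tags=tags)

if __name__ == "__main__":
    # Example: a listener walking mid-morning.
    ctx = ListenerContext(heart_rate_bpm=95, hour_of_day=10, is_moving=True)
    print(derive_params(ctx))
```

A real system would replace these hand-written heuristics with a trained generative model and actual audio rendering; the point is only that the listener's context, rather than a fixed recording, drives the output.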

This AI-powered, adaptive, functional soundscape version of your favorite music is a future available to us today. It opens up new opportunities for artists to create and monetize their art, for platforms to offer additional revenue streams, and for labels to breathe new life into their catalogs. Best of all: it can peacefully coexist with the traditional pre-recorded music that we know and love.

Music artists like Grimes, Miguel, James Blake, Arca and Plastikman have already opened up to the idea that their music can exist in a new way: as a living, breathing, ever-changing organism that constantly adapts to the context of the listener. This hands-off approach to composing music by feeding building blocks into a system and then watching it work doesn't remove the artist from the equation. The artist is still present; their role simply shifts from active performer/composer to conductor/architect.

A new platform for such functional AI-powered soundscapes is about to emerge. What’s missing is the legal infrastructure for such a platform.

Today, the companies that spend hundreds of millions of dollars and years building the technologies that enable new opportunities for artists and rights holders are expected to bear the lion’s share of the costs. That balance needs to shift.

Those of us in AI and in music need to work together to establish the fair legal and business infrastructure that will let this new adaptive and generative approach to music emerge.
