The Great Escape in Brighton is one of the UK’s best showcases for new artists, but it’s also a chance for the industry to discuss some of the new technologies that could have an impact on those musicians’ careers. Yesterday, the focus was on artificial intelligence (AI) – particularly around AI that can create music or be used as a creative foil by human artists.

Organised by Complete Music Update (CMU), a day of panels kicked off with a spirited session of definitions from the academic world, in the shape of Margaret Boden from the University of Sussex – who has been involved in AI research for decades – and Marcus O’Dair from Middlesex University.

Boden stressed the long history of AI. “The excitement at the moment, which is very understandable because AI can do things now it couldn’t do even five years ago… but the ideas, the algorithms if you like, which are doing that were worked out 25 years ago,” she said.

“What’s happened is the computers have got more powerful, more fast, and the storage of data has absolutely rocketed. So things can be done in practice now which they couldn’t be done before, but the theory of how they could be done in principle is at least 25 years old.”

She also provided a useful definition of AI at its broadest level – “trying to get computers to do the sorts of things that human minds do” from translating language and recognising faces in a batch of photographs to playing chess and making music.

Boden also stressed that the history of AI research has been as much about our brains as it has been about the intelligence of algorithms.

“It gives us a way of thinking about how our minds work: what our brain is doing when we do whatever we do… and about what it is for people to have different personalities,” she said.

“What is it that AI has taught us? The most important thing it has taught us is the enormous – and previously totally unrealised – subtlety, richness and power of human minds… it’s taught us a great deal about ourselves. Plus, of course, providing lots of gizmos that have made some people rich. But I don’t care as much about those things!”

Boden was in no doubt that computers can be creative, citing Google’s AlphaGo software, which not only beat the human world champion at Go, the ancient Chinese board game, but also came up with some moves that humans had never thought to play before.

“If that doesn’t count as creativity, I don’t know what does, because the people who built the system didn’t know about it,” she said, before turning her attention to AIs creating music.

“They can certainly imitate Mozart and Beethoven and produce stuff that some people will mistake initially for Mozart and Beethoven. Whether they can match them? That requires all sorts of subjective judgements,” said Boden. “But I don’t think that creativity is denied to AI.”

O’Dair continued in that vein, drawing on a report on AI music that he is publishing soon. He outlined three categories of the technology: AIs that compose music; AIs that work with humans to co-compose; and AIs that remix or adapt existing music for a specific purpose.

He added that the most radical category – AIs composing music – is far from science-fiction. “I’ve been in the room where people played two tracks: one’s from an AI and one’s from a human. And people couldn’t tell the difference,” said O’Dair. “I don’t think that means an AI is going to write Beethoven’s Fifth tomorrow, but for cases like library music… that is certainly interesting.”

Their panel ended with Boden giving short shrift to fears around the development of ‘artificial general intelligence’ – AI that can think, learn and reason like a human across all areas, rather than one specific task (like Go or music).

“I wouldn’t be surprised if we never got there, and certainly don’t think we’ll get there within this century,” she said. “Artificial general intelligence? We haven’t got it, we’re nowhere near getting it, and I don’t expect that anybody sitting in this room – most of whom are very very much younger than me! – will ever see it. I’d be amazed.”

CMU’s AI day at The Great Escape showcased some of the companies that are deploying AI for musical purposes – and not just for music-making. British startup Instrumental talked about how its TalentAI, which was originally developed to talent-spot musicians emerging on YouTube, is now mining Spotify for a similar purpose.

That involves analysing the 20k+ new tracks uploaded to Spotify every day, but also tracking more than 11k playlists on the service, and crunching data on historic growth curves for artists on Spotify. Clients – labels, for example – can then apply filters to surface artists whose trajectory might make them of interest for signing.

“It’s not about serving up lots of information. Most of our time is spent trying to serve up very little information every day, so you can get value from it,” said CEO Conrad Withey, of TalentAI’s evolving role. “We’re trying to do Spotify brilliantly, but we also have YouTube data and Instagram data, which gives you a richer sense of the data coming through.”
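Instrumental didn’t go into the mechanics on stage, but the general shape of this kind of pipeline – daily stream counts in, a growth metric out, filters on top – is easy to illustrate. The sketch below is purely hypothetical Python, not Instrumental’s actual code or API: the data structure, thresholds and `weekly_growth_rate` metric are all invented for illustration.

```python
# Hypothetical sketch of growth-curve filtering for talent-spotting --
# not Instrumental's actual code, API or metrics.
from dataclasses import dataclass

@dataclass
class ArtistStats:
    name: str
    daily_streams: list[int]  # e.g. the last 28 days of Spotify streams

def weekly_growth_rate(a: ArtistStats) -> float:
    """Compare the last 7 days of streams with the 7 days before that."""
    recent = sum(a.daily_streams[-7:])
    previous = sum(a.daily_streams[-14:-7])
    if previous == 0:
        return float("inf") if recent > 0 else 0.0
    return (recent - previous) / previous

def shortlist(artists: list[ArtistStats],
              min_streams: int = 10_000,
              min_growth: float = 0.5) -> list[ArtistStats]:
    """Apply label-style filters: a volume floor plus fast week-on-week
    growth, so only a handful of artists are served up each day."""
    return [a for a in artists
            if sum(a.daily_streams[-7:]) >= min_streams
            and weekly_growth_rate(a) >= min_growth]
```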

Another UK startup, Jukedeck, revealed that its AI music-creation tool has already been used to make more than 1m pieces of music. Its CEO Ed Newton-Rex addressed the debate over whether this kind of technology – currently used mainly to create backing tracks for online videos – is a threat to human musicians’ livelihood in the longer term.

“I definitely don’t see this stuff as a competitor in any way, and I don’t think it will ever be a competitor. Even if it’s possible to get to that stage, why would you? What’s the point? We don’t want to listen to AI music just for the sake of it,” said Newton-Rex.

He cited Jukedeck’s recent project collaborating with a group of K-Pop musicians and producers as a better guide to the future. “We really think AI can be the next big tool shift: the change in toolsets that musicians have, in the same way that digital audio workstations and synthesizers have been in the past.”

He also pointed to the example of one YouTuber who created a track using Jukedeck, then wrote her own vocal melody and lyrics to add on top, creating a finished song.

“She was someone who didn’t have any musical ability before that, and what AI has done is help her get to grips with writing music,” said Newton-Rex. “I don’t think AI will ever or should ever replace the more professional end of music creation, but what it could do is help to democratise… help more people to get involved in the process of music-making.”

Also showcased during the day were live-music website Ents24’s Event Affinity Engine, which crunches data to improve concert recommendations, and MXX Music, which uses AI to “atomise and reconstruct” existing music to accompany video footage – for example, editing a track to fit the dynamics of an advertisement or online video.
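MXX didn’t detail how its “atomise and reconstruct” process works – it’s proprietary – but the underlying idea of cutting a track at musically sensible points and reassembling it to match a video can be sketched with off-the-shelf tools. The Python below is a rough illustration of that general idea only, using librosa’s beat tracker to find cut points; the greedy reassembly strategy and file paths are invented for the example.

```python
# Rough illustration of beat-aligned re-editing to fit a target duration.
# This is NOT MXX's method -- just the general idea, using open tools.
import librosa
import numpy as np
import soundfile as sf

def fit_track_to_duration(in_path: str, out_path: str, target_secs: float):
    y, sr = librosa.load(in_path, sr=None)         # keep original sample rate
    _, beat_frames = librosa.beat.beat_track(y=y, sr=sr)
    cuts = librosa.frames_to_samples(beat_frames)  # beat positions in samples

    # "Atomise": slice the audio into beat-length segments.
    segments = [y[s:e] for s, e in zip(cuts[:-1], cuts[1:])]

    # "Reconstruct": greedily keep whole segments until the target length
    # is reached. (A real system would choose segments to preserve musical
    # structure and match the video's dynamics, not just its duration.)
    target = int(target_secs * sr)
    out, total = [], 0
    for seg in segments:
        if total + len(seg) > target:
            break
        out.append(seg)
        total += len(seg)

    if not out:            # target shorter than one beat: hard trim instead
        out = [y[:target]]
    sf.write(out_path, np.concatenate(out), sr)

# e.g. fit_track_to_duration("track.wav", "30s_edit.wav", target_secs=30.0)
```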

The company already has a library of tracks from the production-music catalogues of established rightsholders. “This is a series of connected catalogues. We have Sony, we have EMI… Warner, BMG… and they’re growing all the time,” said CEO Joe Lyske. “It’s music where all the rights are in the one place, for the moment.”

However, he said that some labels are interested in seeing how their main catalogues might be used here too. The reverse of this technology, meanwhile, was represented by Rotor Videos, which uses AI to create visuals for songs (rather than adapting songs to fit visuals, as MXX does).

CEO Diarmuid Moloney said his company’s technology is addressing the challenge of scale for labels. “Music video and the use of music video has evolved beyond that ‘let’s spend 100 grand on a video for the lead single and that’s it’. We need video content now for every track,” he said.

That’s particularly true on YouTube. “YouTube are now doing charts, and the charts will take into consideration all the different videos for a song,” he said. “So it’s now important to have multiple videos for each track, and videos for every track on the album.”

The AI day continued with consideration of some of the legal issues around AI music, although as Reed Smith partner Gregor Pryor only half-joked: “My friends remind me that it’s likely lawyers are going to be AI before music!”

Pryor pointed out that, as is often the case, in many parts of the world the law is struggling to catch up with technology – in this case, questions as key as whether computer-generated works fall under copyright law, and whether an algorithm can be recognised as an author.

In the UK, the answer to the copyright question is yes, although it’s the person giving the input from which a work is created who’s the author, rather than the AI itself. Pryor noted that in Europe, the emphasis is on intellectual property as “the work of a human mind”, while the US has come down heavily against the notion that machines can be creative.

Among the other wrinkles identified by Pryor: Sir Elton John’s plans to have an AI created based on him, which could continue composing new material after he dies.

“If the composer is dead and you’ve got copyright [for a composition] being life-plus-70-years, but then for the sound recording you have a very different copyright length… If a song is created 10 years after Sir Elton died, by an AI that he created the input for, how are you going to marry up those copyright terms? So it’s a minefield, absolutely.”
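To make the mismatch concrete, here is a rough back-of-envelope illustration under current UK rules – simplified, with hypothetical years, and emphatically not legal advice: a human composition is protected for life plus 70 years, a sound recording for 70 years from release, and a “computer-generated work” – if that is what an AI composition turns out to be – for 50 years from creation under the CDPA.

```python
# Back-of-envelope illustration of mismatched UK copyright terms.
# Simplified: real terms run to the end of the calendar year, and whether
# an AI work built on a human's input counts as "computer-generated" is
# exactly the kind of open question Pryor described. Years are hypothetical.

death_year = 2050                       # hypothetical year the composer dies
ai_song_year = death_year + 10          # AI song created 10 years later

human_composition = death_year + 70     # composition: life + 70 years
computer_generated = ai_song_year + 50  # computer-generated work: creation + 50
sound_recording = ai_song_year + 70     # sound recording: release + 70

print(human_composition)   # 2120
print(computer_generated)  # 2110
print(sound_recording)     # 2130 -- three expiry dates to marry up
```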

The day ended with a panel discussion around AI music creation, and some of those threats and fears felt by human musicians when they see this technology and hear its output.

The Orchard’s co-founder Scott Cohen, giving his personal views rather than those of his company or its owner Sony Music, was in no doubt that AIs will be writing hit songs.

“100% yes. Of course! There’s things that are inevitable technologies,” he said, citing the development of autonomous-driving technology, and the knock-on effects for humans. “I’m sorry if you’re an Uber or taxi driver, because at some point you’re not going to have a job… And it’s the same thing about songwriting and hits. There will be a number one song that’s 100% AI-written.”

Cliff Fluet, of law firm Lewis Silkin and consultancy Eleven Advisory, took that on. “How do you know it hasn’t happened already?” he said. “All of this stuff is being used as writing tools right now, and the extent to which they have inspired, enabled or completely composed? It’s happening. And it’s been happening for quite a long time.”

Whether an AI can compose a hit depends on what you consider a hit to be, added songwriter and Auddly exec Helienne Lindvall.

“There are different types of hits. We had a hit with Crazy Frog. I wouldn’t imagine that it would take a degree from the Royal College of Music to have made that track!” she said – the implication being that an AI could create that kind of song, even if an Adele torch-ballad might be a tougher task.

Cohen suggested that the industry should – however hard this is – shift the way it thinks about what music is, and what it can do for human listeners.

“We make it seem as if it’s so sacred. And maybe it’s sacred for the composer and the artist, but when we take a step back and say what does music do? It changes our brain, our moods, our feelings. It can make you sad or happy, make you want to get up and dance,” he said. “So, if that’s what music is doing to our brain, can’t we have a machine recognise what’s happening and deliver exactly that?”

Cohen also sees scope to move on from an “industrial-revolution model” of music-making, where the same song recording is made available to every listener, much like a production-line car or garment.

“Now, you make one song and distribute it to lots of people. But in the future we can create bespoke music that’s happening dynamically on the fly, which is what people want in that moment,” he said.

Artists were represented on the panel by musician Chagall, who has used a range of technology in her recording and live work, but who also thinks that humans still have a vital role to play.

“What we like about music is also the artist themselves and the story behind the music or the song,” she warned. “What attracts people to music is not specifically that one song… it could be their whole body of work and knowing about their lives.”

She also stressed that even if AI emerges that can rival humans at writing songs, it won’t quench the urge of human musicians to keep creating music.

“I can’t imagine yet asking a computer ‘can you write me a song?’ because I get so much joy out of writing a song: it’s what I get up for every day!” she said, before adding that she sees the merits of AI as a generator of musical ideas or snatches of melody that a human can then work with – much as Chagall uses drum machines as a base before chopping up their beats to suit her creative purposes.

There was still time for Cohen to point out that how humans in 2018 feel about AI music is no guide to how those in the future will respond to it. He noted that now, if he goes to a rock gig (“with a couple of other bald men!”) there may be a club down the road packed with 3,000 people dancing to EDM.

“In 10 years time, will there be only 300 people in that club, but down the road, 3,000 people in an AI club, where music is being generated on the fly with no human intervention, and they’re loving it?”

Lindvall suggested that lyrics are often forgotten in the debate around AI, and that could be important for some genres – rock, for example – where an AI might be able to compose a good tune, but then fall short on believably human lyrics.

Fluet and Cohen weren’t so sure, with Fluet pointing to David Bowie’s use of William Burroughs’ cut-up technique: if humans can compose in an almost-algorithmic way and still make great art whose lyrics are pored over by fans, then why can’t computers do the same?

“People are already reading books written by AI and they don’t even know it. Those romantic books that people pick up at the airport? Lots of them are already written by AI,” added Cohen, before taking a swing at one of the biggest rock icons of all.

“People can’t tell which book was written by a computer and which was written by a human. If I was to present you with some Led Zeppelin lyrics and say they were written by a computer, you’d say ‘Yeah, they’re gibberish! They don’t make any fucking sense!’”
