Nielsen’s Gracenote Uses Artificial Intelligence to Classify 90 Million Songs by Style

Gracenote Sonic Style (Courtesy of Gracenote)

Nielsen-owned media data specialist Gracenote wants to help music services make better mixes with fewer outliers: On Wednesday, the company announced a new music dataset called Sonic Style that classifies 90 million tracks not by the genre the artist is known for, but by the actual style of each recording.

This will allow services using Gracenote’s data to, for instance, compile a playlist of all of Taylor Swift’s dance pop hits while keeping anything that sounds too much like country out of the mix, or to combine The Clash’s old-school punk tracks without mixing in the band’s new wave fare.

“Now that playlists are the new albums, music curators are clamoring for deeper insights into individual recordings for better discovery and personalization,” said Gracenote music and auto GM Brian Hamilton.

Gracenote has been in the music data business for close to 20 years. Originally, the company helped consumers automate the copying of audio CDs with a giant database of albums and sound recordings that still powers apps like Apple’s iTunes. In recent years, Gracenote has expanded to not only catalog sound recordings, but also classify them to help music services and other curators.


To do so, Gracenote uses artificial intelligence (AI) and machine learning technologies, effectively teaching computers to listen to millions of tracks and make sense of what they’re hearing. Until recently, these efforts focused only on moods and vibes, with categories like “sultry,” “sassy” and “gentle bittersweet.”

With Sonic Style, the company has expanded its AI work to include close to 450 style descriptor values. “Sonic Style applies neural network-powered machine learning to the world’s music catalogs, enabling Gracenote to deliver granular views of musical styles across complete music catalogs,” said Hamilton.

Gracenote executives previously told Variety that using AI for music recognition can come with its own set of challenges. For instance, computers can listen to characteristics of an audio file that aren’t actually music and determine that they are part of a certain mood or genre.

“It can capture a lot of different things,” said Gracenote VP of research Markus Cremer in a previous behind-the-scenes look at the company’s AI work. Left unsupervised, Gracenote’s system could, for example, fixate on compression artifacts and match them to moods, with Cremer joking that the system might decide: “It’s all 96 kbps, so this makes me sad.”

However, categorizing music at the song level can ultimately make it more accessible — especially now that people no longer spend much time navigating their carefully curated collections, but simply ask their smart speaker to start playing something. Said Hamilton: “These new turbo-charged style descriptors will revolutionize how the world’s music is organized and curated, ultimately delivering the freshest, most personalized playlists to keep fans listening.”