At an international competition called the A.I. Song Contest, tracks exploring the technology as a tool for music making revealed the potential — and the limitations.
LONDON — For its first 30 seconds, the song “Listen to Your Body Choir” is a lilting pop tune, with a female voice singing over gentle piano. Then, everything starts to fracture, as twitchy beats and samples fuse with bizarre lyrics like “Do the cars come with push-ups?” and a robotic voice intertwines with the human sound.
The transition is intended to evoke the song’s co-writer: artificial intelligence.
“Listen to Your Body Choir,” which won this year’s A.I. Song Contest, was produced by M.O.G.I.I.7.E.D., a California-based team of musicians, scholars and A.I. experts. They instructed machines to “continue” the melody and lyrics of “Daisy Bell,” Harry Dacre’s tune from 1892 that became, in 1961, the first to be sung using computer speech synthesis. The result in “Listen to Your Body Choir” is a track that sounds both human and machine-made.
The A.I. Song Contest, which started last year and takes its format from the Eurovision Song Contest, is an international competition exploring the use of A.I. in songwriting. After an online ceremony broadcast on Tuesday from Liège, Belgium, a judging panel led by the musician Imogen Heap and including academics, scientists and songwriters praised “Listen to Your Body Choir” for its “rich and creative use of A.I. throughout the song.”
In a message for viewers of the online broadcast, read out by a member of M.O.G.I.I.7.E.D., the A.I. used to produce the song said that it was “super stoked” to have been part of the winning team.
The contest welcomed 38 entries from teams and individuals around the world working at the nexus of music and A.I., whether in music production, data science or both. They used deep-learning neural networks — computing systems that mimic the operations of a human brain — to analyze massive amounts of music data, identify patterns and generate drumbeats, melodies, chord sequences, lyrics and even vocals.
The resulting songs included Dadabots’ unnerving 90-second sludgy punk thrash and Battery-operated’s vaporous electronic dance instrumental, made by a machine fed 13 years of trance music over 17 days. The lyrics to STHLM’s bleak Swedish folk lament for a dead dog were written using a text generator known for being able to create convincing fake news.
While none of the songs are likely to break the Billboard Hot 100, the contest’s lineup offered an intriguing, wildly varied and oftentimes strange glimpse into the results of experimental human-A.I. collaboration in songwriting, and the potential for the technology to further influence the music industry.
Karen van Dijk, who founded the A.I. Song Contest with the Dutch public broadcaster VPRO, said that since artificial intelligence was already integrated into many aspects of daily life, the contest could start conversations about the technology and music, in her words, “to talk about what we want, what we don’t want, and how musicians feel about it.”
Many millions of dollars are being invested in artificial intelligence research in the music industry, by niche start-ups and by branches of behemoth companies such as Google, Sony and Spotify. A.I. already heavily influences the way we discover music, by curating streaming playlists based on a listener’s behavior, for example, while record labels use algorithms that study social media to identify rising stars.
Using artificial intelligence to create music, however, is yet to fully hit the mainstream, and the song contest also demonstrated the technology’s limitations.
While M.O.G.I.I.7.E.D. said that they had tried to capture the “soul” of their A.I. machines in “Listen to Your Body Choir,” only some of the audible sounds, and none of the vocals, were generated directly by artificial intelligence.
“Robots can’t sing,” said Justin Shave, the creative director of the Australian music and technology company Uncanny Valley, which won last year’s A.I. Song Contest with their dance-pop song “Beautiful the World.”
“I mean, they can,” he added, “but at the end of the day, it just sounds like a super-autotuned robotic voice.”
Only a handful of entries to the A.I. Song Contest consisted purely of raw A.I. output, which has a distinctly misshapen, garbled sound, like a glitchy remix dunked underwater. In most cases, A.I. — informed by selected musical “data sets” — merely proposed song components that were then chosen from and performed, or at least finessed, by musicians. Many of the results wouldn’t sound out of place on a playlist among wholly human-made songs, like AIMCAT’s “I Feel the Wires,” which won the contest’s public vote.
A.I. comes into its own when churning out an infinite stream of ideas, some of which a human may never have considered, for better or for worse. In a document accompanying their song in the competition, M.O.G.I.I.7.E.D. described how they worked with the technology both as a tool and as a collaborator with its own creative agency.
That approach is what Shave called “the happy accident theorem.”
“You can feed some things into an A.I. or machine-learning system and then what comes out actually sparks your own creativity,” he said. “You go, ‘Oh my god, I would never have thought of that!’ And then you riff on that idea.”
“We’re raging with the machine,” he added, “not against it.”
Hendrik Vincent Koops is a co-organizer of the A.I. Song Contest and a researcher and composer based in the Netherlands. In a video interview, he also talked of using the technology as an “idea generator” in his work. Even more exciting to him was the prospect of enabling people with little or no prior experience to write songs, leading to a much greater “democratization” of music making.
“For some of the teams, it was their first time writing music,” Koops said, “and they told us the only way they could have done it was with A.I.”
The A.I. composition company Amper already lets users of any ability quickly create and purchase royalty-free bespoke instrumentals as a kind of 21st-century music library. Another service, Jukebox, created by a company co-founded by Elon Musk, has used the technology to create multiple songs in the style of performers such as Frank Sinatra, Katy Perry and Elvis Presley that, while messy and nonsensical, are spookily evocative of the real thing.
Songwriters can feel reassured that nobody interviewed for this article said they believed A.I. would ever be able to fully replicate, much less replace, their work. Instead, the technology’s future in music lies in human hands, they said, as a tool perhaps as revolutionary as the electric guitar, the synthesizer or the sampler before it.
Whether artificial intelligence can reflect the complex human emotions central to good songwriting is another question.
One standout entry for Rujing Huang, an ethnomusicologist and member of the contest’s jury, came from the South Korean team H:Ai:N: the ballad “Han,” named after a melancholic emotion closely associated with the history of the Korean Peninsula. Trained on influences as diverse as ancient poetry and K-pop, A.I. helped H:Ai:N craft a song intended to make listeners hear and understand a feeling.
“Do I hear it?” said Huang. “I think I hear it. Which is very interesting. You hear very real emotions. But that’s kind of scary, too, at the same time.”