At the Frontiers of AI and Music Creation

What is the chord progression for joy? What’s a good time signature for the kind of disappointment you feel when someone stands you up at a coffee shop? These are not questions we ask ourselves straight out, but we have always looked to art to help us understand these ineffable moments. Now, with the advent of AI and the proliferation of available tools for creation, could we be closer than ever to capturing the sublime? Just because there’s no known recording of heartbreak doesn’t mean that it won’t happen one day.

The combination of technology and music has always been controversial. Early adopters of Edison’s first phonograph were thrilled with the ability to listen to music at home. Others believed the phonograph to be the ruination of live acoustic music, which dominated listening halls all over the world. Later, electric amplification changed the way that musicians were able to manipulate sound both onstage and in the studio, prompting audiences to question whether distortion and feedback belonged in a song at all.

With the advent of synthpop, disco and hip-hop, detractors fought to keep electronic music and rap out of the mainstream by claiming that sampling, scratching, and other innovative production techniques didn’t qualify as “music.” However, as advances in technology only improved a musician’s ability to enhance and record their work, many of those early production trailblazers have become icons. The pattern is clear - artists will push their instruments to the limits of available technology until they can create a sound that comes closest to what they hear in their hearts.

AI and the Best of Both Worlds

The discourse surrounding the rise of AI and music-making sounds surprisingly familiar - much like a cover of an old classic. Music lovers and makers are once again fearful of technology taking over one of the most human-centric forms of art. And not without reason - the far-reaching capabilities of AI have driven many a nightmarish episode of Black Mirror. However, like the technological innovations that came before it, AI will likely only give us tools that make us smarter and better creators.

AI gives us access to not only the sounds themselves, but also the times they embody. Because recorded music is a soundtrack of time as much as it is of voices and instruments, we can’t help but want to iterate on what we once found inspiring. The magic of a good cover is that it forces the original version into a dialogue with a different artist’s vision.

Consider the difference between crooner Andy Williams’ “Moon River” and Frank Ocean’s 2018 version of the same tune. “Moon River” has become part of the fabric of American media, instantly recognizable as a classic just a few notes in. Williams’ mournful voice is accompanied by a soaring Mancini orchestra and a chorus of angelic voices. His lyrics invite listeners to keep hope alive that there’s more than the heartbreak at hand, while Ocean’s “Moon River” is provocatively digital, beautifully haunting, and somehow sonically alienating. By layering his own voice and changing the lyrics to be self-reflective, Ocean converses with himself on a journey of self-discovery. 

Unlike Williams’ version (the song was first sung by Audrey Hepburn in the 1961 film Breakfast at Tiffany’s), Ocean’s lets us listen in as he passes by. The technological tools Ocean uses to create this cover weren’t around when Andy Williams topped the charts in the ’60s, but it’s not the technology that makes Ocean’s cover brilliant. It’s Ocean’s redirection and purposeful harmonic distancing from the original that is so moving. It’s the artist who breathes life into the technology, not the other way around.

Partnering with AI

Ocean’s reimagined “Moon River” is a standout cover, a whole new experience of the song. However, chances are good that any standout song on the charts these days has been augmented by technology. Sampling, a once controversial practice that tied up millions of dollars in legal red tape, is now commonplace. Vocal processors like Auto-Tune, designed to keep singers on-key, are not only standard in recording (and live on stage), they’re now tools in the hands of artists who want to openly push the boundaries of how the human voice can be manipulated as an instrument. Now that AI is as accessible as any other producer’s tool online, there’s a whole new world waiting round the bend for artists.

Use of AI can unlock both the future and the past by giving us access to new tools that will shape the sound of tomorrow, as well as to sounds and recording practices that have been lost to time. For instance, TAIP, Baby Audio’s most popular plugin, uses AI to recreate the warm ambience of analog tape recording. Though digital recording all but replaced analog in the early 2000s, musicians want the ability to recapture the spirit and richness of that era. They also want the freedom to mix those older sounds with drum tracks from the 2010s and the vocals of long-dead superstars.
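For readers curious what “analog warmth” means in signal-processing terms: it largely comes from tape gently compressing loud peaks and adding harmonics. The sketch below is a toy tanh soft-clipping saturator - a classic non-AI approximation, not Baby Audio’s proprietary TAIP model, and every name in it is illustrative.

```python
import numpy as np

def tape_saturate(signal, drive=2.0, mix=0.7):
    """Toy 'analog warmth' effect: soft-clip the waveform with tanh,
    which rounds off loud peaks and adds harmonics, then blend the
    processed (wet) signal with the original (dry) one.
    Illustrative only - not the TAIP algorithm."""
    wet = np.tanh(drive * signal) / np.tanh(drive)  # normalized soft clip
    return mix * wet + (1.0 - mix) * signal

# One second of a loud 440 Hz sine at a 44.1 kHz sample rate.
t = np.linspace(0, 1, 44100, endpoint=False)
dry = 0.9 * np.sin(2 * np.pi * 440 * t)
wet = tape_saturate(dry)
```

Because tanh flattens the tops of the waveform, the processed signal has a lower crest factor (peak-to-RMS ratio) than the input - the measurable signature of tape-style compression.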

AI also empowers solo musicians to create their own music with or without access to costly instruments and recording studios, thereby removing some of the industry’s most stubborn obstacles. Magenta, an open-source collection of plugins for use with Ableton Live, also provides the tools needed to generate unique-sounding instruments via NSynth Super, an experimental collaborative program from Google.

Currently, a lot of easy-to-use AI is open-source and free, providing artists with tools that were once only available to established professionals with label support, or amateur musicians with access to private funding. Now, independent musicians are able to create entire AI-generated ensembles with Boomy, write tracks from scratch with assistance from AIVA presets, or even compose using only eye movements. Collaboration tools like Kompoz have been around for years, but they require humans to connect and play together. AI-based vocal synthesizers like Vocaloid provide the best of both analog and digital worlds: composers have the freedom to experiment with AI-generated lyrics and vocal styles before deciding if they want to add a human voice to a track. They can also choose to freely manipulate and replicate existing vocal tracks.

The Dawn of a New Era

How are older musicians and digital artists feeling about this brave new world of music? As in Hollywood, issues of artist compensation are rife in the music industry. Some artists are enraged by what they deem a lack of consent in the industry. In April of 2023, TikTok user Ghostwriter977 released “Heart on My Sleeve,” a song with AI-generated vocals mimicking Drake and The Weeknd. Responses ranged from complete outrage to ironic memes. Spotify removed thousands of AI-generated tracks whose streams had been inflated by bots. One thing is clear - there are serious ethical questions about where AI-generated music belongs and how it is used.

New tech, however, can create new solutions. Platforms like Soundraw are paying professional musicians a living wage to use AI to compose royalty-free music; subscribers can then use any track they like from a library of thousands of songs. Musician Grimes has founded replai, a free streaming service for AI-composed music. While Adobe promises to develop AI to ensure fair compensation for artists, some creators are decrying copyright altogether and calling for an end to authorship.

This is an empowering time for artists. We’re on the precipice of having tools so sophisticated that anyone with access will soon be able to express themselves through music. Anyone with the inclination can even get in there under the hood and invent new musical tools that help to express life’s new experiences. While AI has the power to democratize the industry, there will be new questions of authorship, copyright, and distribution. There will be new types of artists who will once again need to convince the general public of their rights to find and record the sound of heartbreak, of falling in love, of life. Whatever the future holds, there will be amplification.

ABOUT SUPERPOSITION

We believe that progress is measured not by the number of new technologies created, but by their ability to be understood, embraced, and leveraged for positive impact. We seek to bring you easy-to-understand briefs on science and technology that can change the world, interviews with the leaders at the forefront of these breakthroughs, and writing that illuminates the importance of communication within and beyond the scientific community. The Superposition is produced by JDI, a boutique consultancy that brings emerging technology and science-driven companies to market. Our mission is to make precedent-setting science companies well known and understood. We pursue mastery of marketing, communications, and design to ensure that our clients get the attention they deserve. We believe that a good story — well told — can change the world.

To learn more, or get in touch, reach out to us at superposition@jones-dilworth.com.

© 2024 Jones-Dilworth, Inc.