How we used A.I. to create music in front of a live audience
Together with our friend Tomas Louter from media.monks, we used artificial intelligence to create music, based on prompts given by our live audience at ADsterdam festival 2024.
Nowadays, A.I. seems to be omnipresent, and it polarises people's feelings. While some say robots will turn creative industries upside down and put many people out of their jobs, others say A.I. tools will merely become a great new addition to the existing tool kit, which will still be operated by humans like you and me.
A similar fear haunted musicians in the 70s and 80s, with the arrival of the first synthesisers and samplers and, in 1983, of MIDI (Musical Instrument Digital Interface). Musicians feared being replaced by robots. During the 90s, label bosses and studio owners were scared by DAWs (Digital Audio Workstations), which let basically anyone become a music producer in their own bedroom. Yet despite the changing landscape, big studios and major labels obviously did not go out of business. Then came the 00s, with the increasing digitalisation of music production, home studios and sample CDs. And now?
Artificial intelligence: tools like voice and instrument isolation, voice cloning (check YouTube for David Guetta vs Eminem) or prompt-based chord progressions and musical snippets, ready to use. As of now, such A.I. tools are still mostly standalone and have not yet become main features in Logic, Ableton, Cubase etc. Quick side note: the Logic 11.0.1 update does contain the first A.I. extension, a stem splitter that lets users extract drums, bass, vocals and other instruments.
Yeah, yeah, all exciting and nice, but does A.I. really work? We decided to put A.I. and ourselves on the spot at ADsterdam and ran an experiment with Suno A.I. and a hint of human creativity.
To prove that all this was done live, we first asked a random audience member to give us a topic to prompt. Based on this, we prompted Suno to generate an RnB a cappella lead vocal with backing vocals and, in a separate prompt, a soulful brass section. These outputs were then loaded into a sampler, chopped up and rearranged to make the somewhat generic machine outputs unique.
Combined with human made drums, bassline and a few other tricks, the result was a bouncy and driving hip hop piece. Part of the audience was wowed, part of the audience was positively shook.
What does this mean for the future of music? Tough to say. All this was done in the span of 25 minutes after requesting the topic from the audience. What even last year would have cost a small fortune in studio rental and session musicians, and probably a day or two of time, can now be done on a budget and in no time! Does that mean studios will go bankrupt and session musicians will become unemployed? We are convinced: probably not, although we need to differentiate here. For bedroom producers, who have neither the budget, the equipment nor the network to hire a brass ensemble, this is great news. They too now have the opportunity to experiment with non-copyrighted musical snippets for their creations. Here, professional session musicians don't need to worry, since bedroom producers couldn't have hired them anyway. As for big artists with a budget and a crew, A.I. seems unlikely to completely replace musicians any time soon. The reasons are the remaining qualitative shortcomings in sound and the lack of fine control over individual parameters. For a human instrumentalist, it is crucial to understand what the current project is about, what the feel is and what their instrument adds to the piece. That understanding often comes through practice, experience and conversations about the project. A.I. can't do that - yet.
A tricky situation might arise in the space between professional and bedroom producers, where there might be some income, but perhaps only just enough to hire musicians. Here, weighing whether the money is better spent on an instrumentalist or rather, say, on marketing plus an A.I. subscription becomes more likely. An interesting point can also be made about the future of stock music and its producers. From a sonic branding perspective, stock music sounds decent and is cheap and quick to access. However, the track(s) you eventually decide to go with are not exclusive to your brand or campaign. So it can easily happen that your brand uses the same song another brand used just two months ago. Worst case: used by your competitor. Tools like Suno help avoid such awkward situations by enabling marketing agencies to develop music for a campaign at a fraction of the time and cost - thereby replacing stock music, mind you, not bespoke music.