
Breaking up the static - AI and the music industry
Rachel Falconer · Oct 17, 2018


Artificial Intelligence (AI) is an established tool in music distribution, search and streaming, but industry innovators are now employing it as a collaborative creative tool to disrupt the static models of music tracks and artist composition.

Far from being a new phenomenon, the application of AI and machine learning to the music industry has already provided a wide spectrum of convincing use cases across a number of different verticals. From generative music production to shifts in copyright structures, analytics to the breaking down of static broadcast models, A&R to consumption, sync to advertising, AI and Music Tech has most definitely arrived.

Machine Learning algorithms have been used for some time across the industry to aid search, detect musical taste and make recommendations on streaming services, with companies like Pandora and Spotify headlining this space. However, there has recently been an innovative shift in attitudes towards AI and Music Tech: Machine Learning, neural networks and wider applications of AI are now beginning to be used as collaborative tools in the creative process of making music itself.
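
To make the mechanism concrete – as a generic sketch rather than a description of Pandora’s or Spotify’s actual systems – a simple content-based recommender might rank tracks by the similarity of their audio feature vectors:

```python
# Generic sketch of content-based music recommendation: rank candidate
# tracks by cosine similarity of audio feature vectors (e.g. tempo,
# energy, danceability). The feature values are made up for illustration.
import numpy as np

tracks = {
    "track_a": np.array([0.8, 0.9, 0.7]),  # fast, energetic, danceable
    "track_b": np.array([0.2, 0.3, 0.1]),  # slow, mellow
    "track_c": np.array([0.7, 0.8, 0.9]),
}

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def recommend(liked, candidates):
    """Order candidate tracks by similarity to a track the listener liked."""
    return sorted(candidates,
                  key=lambda t: cosine(tracks[liked], tracks[t]),
                  reverse=True)

print(recommend("track_a", ["track_b", "track_c"]))  # track_c ranks first
```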

Despite the fact that AI is now becoming a real player across the music industry, its application as a creative collaborator is often met with the common riff of fear that artificial intelligence will replace the traditional artist and kill off creative production as we know it. This stagnant position needs to be silenced, or at least seriously questioned, if we are to fully understand the opportunities that total immersion in AI promises for both artists and the music industry at large.


Using machines as a collaborative tool to compose and produce music is nothing new; musician and machine have long enjoyed sharing a stage. As early as the 1950s, experimental composers such as Stockhausen, Lejaren Hiller and Iannis Xenakis were creating algorithmic compositions using randomised statistical models. Later, David Bowie collaborated with former Universal Music Group CTO Ty Roberts to conceive the Verbasizer – an automated lyric-generation system based on the cut-up technique of William S. Burroughs.

So, just as with the VR/XR space, what’s new this time around on the AI and Music Tech scene isn’t the technology itself but the investment in it. Major record labels, streaming services, VC firms, angel investors and, most importantly, brands are investing in AI to deploy scalable, generative music products across an exponentially growing platform ecology.

One of the most iconic early examples of this investment and acceleration through to scalable industry is Sony’s Computer Science Laboratories (CSL), which in 2016 developed the AI system Flow Machines. CSL invited songwriter Benoît Carré to collaborate with the system to write a song in the style of The Beatles, entitled “Daddy’s Car”. CSL’s director François Pachet left Sony shortly after to join Spotify as head of the streaming service’s Creator Technology Research Lab, whose focus is on “making tools to help artists in their creative process.” Pachet joined forces with Carré to promote SKYGGE, an AI-composed music project, on the streaming platform. SKYGGE’s hit single “Hello Shadow” featured on Spotify’s New Music Friday playlist in December 2017, as well as on localised NMF playlists in the U.K., Norway and the rest of Scandinavia.

Google’s Magenta is another high-profile example of major AI Music Tech development. Magenta provides generative tools for artists across the creative industries, such as its open-source experimental instrument NSynth, while its Performance RNN employs neural networks to add dynamic, human-like qualities to traditionally machine-generated MIDI files. Producer sevenism is a prominent user of this technology, frequently dropping ambient, textural albums on his Bandcamp page. All of Magenta’s tools are open-source, and artists are already collaborating with them to co-create their own songs – the recent launch of singer-songwriter and YouTuber Taryn Southern’s I Am AI being a star use case. The artist wanted to test the limits of AI composition by putting the sound of her album entirely in the hands of AI composition outfits Google’s Magenta, Amper Music, IBM’s Watson Beat, and AIVA. By integrating AI into the compositional workflow, Southern has achieved a first in commercial music production.
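
For a flavour of the raw material these tools work with, here is a minimal sketch using note_seq, the open-source music-representation library that Magenta builds on, to assemble a simple MIDI phrase. The pitches, timings and output file name are illustrative assumptions:

```python
# Minimal sketch: building a MIDI phrase with note_seq, the
# music-representation library underlying Magenta's models.
# Pitches, timings and the file name are illustrative assumptions.
import note_seq

melody = note_seq.NoteSequence()

# A simple four-note ascending phrase (C4, E4, G4, C5).
for i, pitch in enumerate([60, 64, 67, 72]):
    melody.notes.add(
        pitch=pitch,
        start_time=i * 0.5,      # half a second per note
        end_time=(i + 1) * 0.5,
        velocity=80,
    )
melody.total_time = 2.0
melody.tempos.add(qpm=120)

# Write a standard MIDI file that a generative model (or a DAW) can consume.
note_seq.sequence_proto_to_midi_file(melody, "melody.mid")
```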

Other, more experimental artists such as Holly Herndon and Mat Dryhurst are using AI to create a generative musical system. Their “AI baby”, Spawn, is learning to create music of its own by being trained on audio files of Herndon’s own voice and, eventually, on collections of voices gathered at “training ceremonies” performed at public gigs across Berlin. They believe artists should take control of the tools of production and composition in order to be at the forefront of the AI revolution that is gripping the music industry.

Another example of artists taking control of AI is London-based Iranian producer Ash Koosha’s Yona – an AI-generated virtual “singer”, or auxiliary human. She is built on Koosha’s machine learning algorithms but generates her own music, responding to the influence of performance in a way that mimics human response systems.

Source: Ash Koosha's Yona

In the commercial music space, a global start-up scene is also now maturing around AI and Music Tech as a service, offering products that provide established artists with automated songwriting platforms or enable users to generate customised instrumental tracks. Companies such as Splice and Amadeus Code, as well as Amper, Popgun and SecondBrain, are high-profile examples.

In the UK, Jukedeck were early pioneers, offering bespoke generative music as well as democratising the music-making process and opening up collaborative possibilities between machine and musician.

A recent disruptive innovation in this space is the use of AI to break down and intervene in the static nature of the music track itself. Start-up AI Music are trailblazers in the field with their “shapeshifting” AI products. Their vision is to optimise the value of a song by hyper-personalising it – tapping into the context and mood of the individual listener to increase engagement between an artist and their fans. In their own words, they are “evolving music from a static, one-directional interaction to one of dynamic co-creation”: each track can be shape-shifted by AI to serve up different iterations and genres of the original song depending on the listener’s context and mood. This hyper-personalisation of the listening experience is a whole new way of both experiencing and conceptualising the value and reach of individual tracks.
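
As a purely hypothetical illustration of the concept – not AI Music’s actual implementation – a context-aware player could select among pre-rendered variants of a track along these lines:

```python
# Hypothetical sketch of context-driven track "shapeshifting".
# The contexts, variant names and selection rules are illustrative
# assumptions, not AI Music's actual system.
from dataclasses import dataclass

@dataclass
class ListenerContext:
    activity: str  # e.g. "workout", "commute", "study"
    hour: int      # local hour of day, 0-23

# Pre-rendered genre/mood iterations of the same underlying song.
VARIANTS = {
    "high_energy": "track_remix_drum_and_bass.ogg",
    "ambient":     "track_remix_ambient.ogg",
    "acoustic":    "track_remix_acoustic.ogg",
    "original":    "track_original.ogg",
}

def choose_variant(ctx: ListenerContext) -> str:
    """Map the listener's context and mood to one iteration of the track."""
    if ctx.activity == "workout":
        return VARIANTS["high_energy"]
    if ctx.activity == "study" or ctx.hour >= 22:
        return VARIANTS["ambient"]
    if ctx.activity == "commute":
        return VARIANTS["acoustic"]
    return VARIANTS["original"]

print(choose_variant(ListenerContext(activity="workout", hour=18)))
```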

This drive towards the use of AI for hyper-personalisation has obvious synergies with current trends in advertising and brand strategy. Automated, personalised creative content is already being deployed by big brands across traditional marketing channels, but it is in the Voice space and the IoT that Music Tech and AI are likely to be most innovative – with the promise of delivering truly subliminal, hyper-personalised, contextual listening experiences. The gaming industry is already looking towards AI and Music Tech to enhance immersive, experiential play, particularly within the VR space, and the possibilities of augmented, personalised gaming content are only now beginning to be explored.

As we begin to understand the possibilities and reach of AI and Music Tech across the music industry and beyond, labels, producers, rights holders and artists must move to the rhythm of innovation and not against it if truly disruptive models are to play out.

About the author
Rachel Falconer

Rachel Falconer is a digital art curator, innovation consultant, and writer. Her curating and research practice explores how emergent technologies such as AI, Machine Learning, gaming environments and VR affect creative processes, social behaviour and ethical boundaries. Rachel is also Head of Goldsmiths Digital Studios, an inter-disciplinary commercial studio of artists and creative technologists offering rapid prototyping and innovation solutions for the creative industries and beyond.



