Towards Music Industry 5.0: Perspectives on Artificial Intelligence
Alexander Williams1, Mathieu Barthet1,2
1Centre for Digital Music, Queen Mary University of London
2Aix-Marseille Univ CNRS PRISM
alexander.j.williams@qmul.ac.uk, m.barthet@qmul.ac.uk
Abstract
Artificial Intelligence (AI) is a disruptive technology that is
transforming many industries including the music industry.
Recently, the concept of Industry 5.0 has been proposed, emphasising principles of sustainability, resilience, and human-centricity to address current shortcomings in Industry 4.0 and its associated technologies, including AI. In line with these
principles, this paper puts forward a position for ethical AI
practices in the music industry. We outline the current state
of AI in the music industry and its wider ethical and legal
issues through an analysis and discussion of contemporary
case studies. We list current commercial applications of AI
in music, collect a range of perspectives on AI in the indus-
try from diverse stakeholders, and comment on existing and
forthcoming regulatory frameworks and industry initiatives.
Consequently, we provide several timely research directions,
practical recommendations, and commercial opportunities to
aid the transition to a human-centric, resilient, and sustainable
music industry 5.0. This work particularly focuses on west-
ern music industry case studies in the European Union (EU),
United States of America (US), and United Kingdom (UK),
but many of the issues raised are universal. While this work
is not exhaustive, we nevertheless hope it guides researchers,
businesses, and policy makers to develop responsible frame-
works for deploying and regulating AI in the music industry.
Introduction
Cultural and creative industries produce and disseminate
cultural products influenced by people’s lifestyles, beliefs,
attitudes, and insights (Jahromi and Ghazinoory 2023). They
provide economic value and contribute to innovation, em-
ployment, and national competitiveness (Mbamba 2024).
Music is one such industry; the term generally refers to individuals and organisations that earn money by creating, performing, and selling various forms of music, as well as the organisations and professionals that aid, train, assist, represent, and supply music creators (UK Music 2024).
The makeup of the music industry is not static and has de-
veloped closely in relation to technological innovation and
the creation of new music technology (music tech) (Lerch
2018). Some may consider music tech companies a segment
of the overall music industry, but tensions have always ex-
isted in their complex relationship with the wider industry.
The music tech sector typically seeks to develop disruptive
or value-adding technologies that challenge the industry sta-
tus quo in areas such as music production, publishing, con-
sumption, or distribution, while music rights-holders seek to
protect the value of their intellectual property (IP) - primar-
ily music recordings and artist brands. In many cases, the
success and survival of an emerging music tech company de-
pends upon securing licencing arrangements for copyrighted
material from asset holders or obtaining support through
partnerships. This power dynamic means many emerging
music tech companies will be resisted if they are seen not
to align with the corporate strategic goals of the incumbent
oligopoly (Watson and Leyshon 2022).
Current music tech innovation is rooted in the maturity of
Industry 4.0 technologies such as AI, big data, cloud com-
puting, virtual / augmented reality (VR / AR), the Internet
of Things, and blockchain. Such technologies are (socially)
disruptive (Hopster 2021) and are currently transforming the
music industry by introducing new business models, rev-
enue streams, and methods of music distribution and en-
gagement (Jahromi and Ghazinoory 2023; Mbamba 2024;
Clancy 2021). In 2021, the European Commission proposed
the concept of Industry 5.0. Where Industry 4.0 focused on
increasing production efficiency, flexibility and worker up-
skilling through technical innovation, Industry 5.0 focuses
on using the encompassed technologies to achieve societal
goals beyond jobs and growth, such as social fairness, sus-
tainability, and worker wellbeing (Xu et al. 2021).
Recent reports suggest that, without intervention, music
sector workers stand to lose nearly a quarter of their income
to AI in the next four years (Cisac and PMP Strategy 2024)
and that up to 30% of UK jobs are automatable with AI, with “crafts, creative arts and design” roles amongst those currently most at risk (Department for Education 2023). Given
the fundamental human connection in music (Malloch and
Trevarthen 2018), it is essential that the music industry tran-
sitions to an Industry 5.0 model and adopts its three core
principles of human-centricity, sustainability, and resilience
to ameliorate the existential challenges posed by AI.
Related Work
In recent years, there has been an increasing number of
works discussing ethical issues of AI in music applications
generally and in specific commercial case studies, partic-
ularly generative music. (Mbamba 2024) reviews the im-
pact of Industry 4.0 technologies on global creative indus-
tries, particularly focusing on AI for music creation and rec-
ommendation, VR / AR, and blockchain for digital rights
management. (Oğul 2024) contrasted ethical guidelines pub-
lished by various AI researchers, music industry organi-
sations, and campaign groups. (Barnett 2023) conducted a
systematic literature review on ethical implications of gen-
erative audio models while (Jabour 2024) focused on the
perceptions, ethical concerns, and business opportunities of
AI-generated vocals. (Holzapfel, Sturm, and Coeckelbergh
2018; Huang et al. 2023) present ethical considerations relat-
ing to music information retrieval (MIR) technology, while
(Peeters 2021) looks more closely at the impact of AI on
MIR in general. (All-Party Parliamentary Group on Music
and UK Music 2024) presents a report on AI in relation to
the UK music industry. (Huang, Sturm, and Holzapfel 2021)
presents an East-Asian ethical perspective on applying AI to
music applications. (Ma et al. 2024) present a survey of AI
models for music applications with discussion of ethical and
social issues. (Sturm et al. 2024) and (Boon 2023) critique
the models of specific music generation platforms, Boomy
and AIVA. (Pasti Da Porto 2023) studies how the music in-
dustry can meet UN Sustainable Development Goals.
Commercial Applications of AI in the Music
Industry
AI systems are already being used commercially for rec-
ommending music (Born et al. 2021), DJing (Clancy 2021),
separating songs into their constituent instrument parts such as vocals, drums, and guitar (Hennequin et al. 2019; Sun 2023;
Clancy 2021), mastering music (Birtchnell 2018; Robin-
son 2024; Welsh 2022), imitating a singer’s voice (Mon-
roe 2023; Minsker 2021; Coscarelli 2023; Hawthorne 2024),
writing lyrics (Simpson 2022; Taylor 2024), sound design /
foley (noa 2021), music transcription (Bittner 2022), cre-
ating concert visuals and VR performances (Google Arts
& Culture 2019; Rufo 2024), sample identification (Cetin
2023), generating music artwork (Jones 2024a), generating
dancer choreography (Studio Wayne McGregor 2019; My-
ers 2023), music venue security and management (Henkin
2023; Anderson 2024; noa 2019), and music marketing and
public relations (Adgcraft Communications 2024). In the
last two years, there has been a notable rise in the num-
ber and quality of music generation models and services in-
cluding Suno, Udio, Boomy, AIVA, SOUNDRAW, Tad AI,
Google’s MusicLM (Agostinelli et al. 2023), Meta’s Au-
dioCraft, Stability AI’s Stable Audio / Open (Evans et al.
2024a,b) and others.
Such use cases indicate that AI is already affecting profes-
sions in the creative industries including artists, musicians,
composers, DJs, visual artists / graphic designers, mixing engineers, marketing and public relations professionals, journalists, sound designers, songwriters, publishers, producers, choreographers,
and performers. AI systems also influence culture and con-
sumer habits (Born et al. 2021; Holzapfel, Sturm, and Co-
eckelbergh 2018) and exploit general consumers for unpaid
labour (Morreale et al. 2023). We can expect new applications of AI in the music industry to emerge as their technology readiness level (TRL) matures.
Public and Professional Attitudes to AI in the
Music Industry
AI has been described as a floating signifier in that it can
mean different things to different people (Suchman 2023).
It has also been suggested that those with familiarity and
expertise with AI are more likely to support its general ap-
plication (Horowitz et al. 2024) and that there are numer-
ous factors that can lead to incorrectly assessing AI’s capa-
bilities (Crompton 2021). Therefore, any general opinions
on AI should be taken cautiously, particularly as it has cap-
tured mainstream public attention in recent years. Neverthe-
less, while opinions on AI in the music industry vary even
amongst similar stakeholders, common themes emerge.
Artists and Music Industry Professionals
Over 35,000 professionals working in creative industries in-
cluding literature, music, film, theatre and television have
backed a statement against using unlicensed training data for
AI (Milmo 2024). An open letter from the Artist Rights Al-
liance advocacy group seeking protections against predatory
use of AI has also been signed by 200 well-known music
acts (Artist Rights Alliance 2024).
Meanwhile, a recent survey (Tencer 2024a) of mostly
western music producers suggests that 25% are now using
AI in the creation of music. Of those that do: 74% use it for
stem separation; 46% for mastering and EQ plugins; 21% for
generating song elements; and 3% to create entire songs. Of
the 75% not using AI: 82% cite artistic and creative reasons;
35% quality reasons; 14% costs; and 10% copyright con-
cerns as reasons. Assistive AI was seen more positively than
generative AI but both had less than 50% approval. Will-
ingness to pay for AI tools was also low. Another survey of
1,600 self-releasing artists (Dalugdug 2023) from DIY dis-
tributor TuneCore found that 27% of indie music artists had
used AI in some capacity. Of those artists who had used AI
tools: 57% had used it to create artwork; 37% had used it
to create promotional assets; and 20% had used it to engage
with fans. About half of respondents expressed willingness
to license their music for training AI, while a third expressed
willingness to grant consent for their music, voice or artwork
to be used in generative AI.
Many music industry groups including professional asso-
ciations, music tech companies, music publishers, and aca-
demic and educational institutions have backed initiatives
including aiformusic and the Human Artistry Campaign that
contain principles for AI music creation emphasising re-
sponsible development and human involvement (Universal
Music Group 2024; Tencer 2024c). Sony, one of the largest
music publishers in the world with a significant music tech
division, has also published a statement declaring its support
for human artistry and its clear intention to opt out of any unli-
censed AI training or data mining carried out by external ac-
tors on its content (Aswad 2024). While such organisations
are undoubtedly protecting their own commercial interests,
they are nevertheless protecting artists’ rights in the process.
General Public / Music Consumers
Two surveys on the general public’s attitudes to various ap-
plications of AI in the music industry were conducted by
music industry organisations. The first is the 2023 report by the International Federation of the Phonographic Industry (2023), an organisation representing over 8,000 record companies worldwide. It surveys 43,000 people from 26 countries, accounting for 91% of global recorded music market revenues, on their musical habits and opinions, including AI. Results suggest: 79% feel human creativity is essential to the creation
of music; 76% think an artist’s music or vocals should not
be used or ingested by AI without permission; 74% agree AI
should not be used to clone or impersonate artists without
authorisation; 73% agree AI systems should clearly list any
music used for training; 70% think there should be restric-
tions on what AI can do; and 64% say governments should
play a role in setting restrictions on what AI can do.
The second is a 2024 UK-specific report on AI and the
Music Industry compiled by UK Music (All-Party Parlia-
mentary Group on Music and UK Music 2024), which in-
cluded a poll on the UK general public’s attitudes to mu-
sic applications of AI (Whitestone Insight 2024). In it, 83%
agree AI-generated songs should be clearly labelled; 80%
agree that the law should prevent an artist’s music from be-
ing used to train an AI application without consent; 77%
agree that AI-generated music that does not acknowledge
the original music’s creators amounts to theft; 68% are con-
cerned about music artists losing out financially by having
their work used by AI to generate new music; 66% are con-
cerned about AI generation eventually replacing human cre-
ativity; 62% are concerned about the rise of deep fakes of
their favourite artist; and 55% are concerned about listening
to AI generated music without realising it.
Issues Raised by AI in Music
Experts have argued that AI models will carry benefits such
as increasing reach and accessibility of the arts (Jahromi and
Ghazinoory 2023), providing creative opportunities, new
mediums for expression, saving time on routine procedures,
offering inspiration (Deruty et al. 2022; Birtchnell 2018), be-
ing a tool for financial benefit, and a way for music fandoms
to engage with artists (Shroff 2024). Conversely, issues have
been raised relating to ownership and distribution rights,
royalty sharing, fair use of training data, job displacement
/ automation of traditional creative and knowledge work,
competition, deskilling, model bias, cultural appropriation,
creativity stifling and climate impact (Sturm et al. 2019;
Shroff 2024; Rezwana and Maher 2023; Barnett 2023; Hen-
derson et al. 2020; Boon 2023; Sturm et al. 2024). Exploita-
tive working practices and inequality are already common in
the music industry (Arditi and Nolan 2024). Like digitalisa-
tion (Pusztahelyi and Stefán 2024), AI has the potential to
both improve existing conditions and pose new issues.
Environmental Impact
The music industry’s environmental impact is significant
owing to the energy and resource utilisation associated
with live music, physical and digital music distribution,
and manufacturing and distributing music equipment (Bren-
nan 2020). Some initiatives, guidelines and businesses have
emerged to reduce music’s environmental impact (Nolan
2024; Pasti Da Porto 2023), but widespread industrial adop-
tion of AI poses challenges to sustainability goals as model
energy requirements become non-trivial (Peeters 2021).
Automation and Deskilling
Given its significant economic potential, automation in cre-
ative industries is likely to continue expanding to high-value
work previously done only by humans. AI systems will not
necessarily need to perform better than humans for such
substitution to take place; instead, it is likely that quality-cost considerations will inform business adoption (Melville, Robert, and Xiao 2023). While workers with unique career histories, contextual music industry knowledge, and developed ’people skills’ may retain the privilege of charging for premium services (Birtchnell 2018), certain applications of AI risk displacing and devaluing early-career opportunities that are crucial for industry workers to develop the confidence, experience, and portfolio needed to obtain and perform work at a higher level.
Intellectual Property Issues
Creative and cultural industries are about more than just the generation of IP, but IP is of significant interest to various stakeholders due to its economic value (Lee 2022). There are three primary aspects of generative AI applications that intersect with IP protection (Wengen and Ribbert 2024): (i) learning with protected works as input; (ii) copyright protection of AI-generated works; and (iii) potential infringement of pre-existing works by generated outputs. These intersections and
their current legal ambiguities are the subject of legal cases
in various countries. For example, major music publishers
UMG, Sony Music, and Warner Records are suing genera-
tive AI music companies Suno and Udio in the US for copy-
right infringement (Brittain 2024), while in the UK a court
case between stock photo provider Getty Images and British
AI company Stability AI is pending trial at the High Court
over the use of copyrighted images for training their Sta-
ble Diffusion image generation model (Davies 2024; Brit-
tain 2023). Current uncertainty and emerging legislation have
led AI companies to seek pre-emptive deals to secure usage
of copyrighted materials for training (noa 2024), and ma-
jor music publishers such as UMG to seek deals licensing
artists’ music and voices with various AI companies includ-
ing Google (Ingham 2023), Endel (Olson 2023) and TikTok
(Gerken 2024; Casey 2024). However, the details of these
deals mostly remain private.
Licensing and Remuneration Models Discussions on
fair exposure and remuneration structures for music creators
who license their data for training AI models have yet to
generate consensus (Henry et al. 2024). Licensing deals are
often bespoke. For example, Holly Herndon and Grimes are
two independent artists who have both created and freely
distributed their own AI voice models for public use but with
different forms of remuneration. Grimes proposed splitting
50% royalties on any successful song that uses her AI voice
- an identical deal to any human artist she would collaborate
with (Monroe 2023) - while income generated from com-
mercial licensing of Holly Herndon’s AI voice would go di-
rectly to her IP-owning cooperative to fund new tool devel-
opment (Minsker 2021). Furthermore, there are active de-
bates over opt-in versus opt-out licensing models (Pasquale
and Sun 2024; Gahnberg 2024) - whether data can be used
for AI training by default before user intervention. Critics of
opt-out schemes say it is unfair to put the burden of opting
out of AI training on the creator whose work is being trained
on when many will be unaware of such schemes (Milmo
2024), particularly when a significant proportion of the mu-
sic industry are under-resourced freelancers (Rutter 2016).
Dynamic opt-in licensing and attribution-based models -
where revenue is paid out proportionally to the use of data
- have gained traction in industry. For example, music tech
company LANDR’s ‘Fair Trade AI’ program lets musicians
using its platform earn money by opting their music in to
internal AI training (subject to curation) (Robinson 2024).
Participating users will receive an attribution-based share of
20% of the revenue generated by any of LANDR’s tools
trained on this dataset. Similarly, Sureel is a dynamic licens-
ing management platform that tracks data usage for attribu-
tion payments and integrates with the Do Not Train registry
hosted by Spawning AI, which is respected by Hugging Face
and Stability AI amongst others (Pelczynski 2024). Major
music publishers seem to favour the attribution model, sug-
gested by the partnership between UMG and the company
ProRata (Stassen 2024; Knibbs 2024a).
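To make the attribution-based idea concrete, the following minimal sketch shows one way a proportional payout could be computed. The 20% revenue pool mirrors the LANDR figure reported above, but the function, the usage weights, and the contributors are hypothetical; real schemes would involve curation, minimum thresholds, and contractual terms not modelled here.

```python
def attribution_payouts(tool_revenue, usage_weights, pool_share=0.20):
    """Distribute a fixed share of tool revenue to opted-in contributors
    in proportion to how much their data was used (illustrative sketch only)."""
    pool = tool_revenue * pool_share
    total_usage = sum(usage_weights.values())
    if total_usage == 0:
        return {contributor: 0.0 for contributor in usage_weights}
    return {contributor: pool * usage / total_usage
            for contributor, usage in usage_weights.items()}

# Hypothetical example: 10,000 in revenue split between three contributors
print(attribution_payouts(10_000, {"artist_a": 5.0, "artist_b": 3.0, "artist_c": 2.0}))
# {'artist_a': 1000.0, 'artist_b': 600.0, 'artist_c': 400.0}
```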
Academics have also proposed alternative models for roy-
alty distribution of music created using AI such as (national)
levy-based trust funds and ownership funds (Drott 2021;
Jacques and Flynn 2024), and attribution-based on algorith-
mic evaluation (Deng, Zhang, and Ma 2024).
Data Poisoning In some cases, bad actors may choose to
avoid licensing content or to ignore opt-out directives from
creators to train on their data. In response to this, services
have been introduced offering so-called “data poisoning” -
imperceptibly altering the pixel composition of images to
perturb AI models being trained on that data and degrade
model performance (Heikkilä 2023; Chen et al. 2023).
Data poisoning can be likened to an adversarial attack that
causes future harm by incorrectly calibrating AI models. It
is challenging to mitigate (Chen et al. 2023). Though not
yet mature, there is research in applying this to musical au-
dio contexts (Meerza, Sun, and Liu 2024). While we do not condone the practice, it is plain that some of its uptake stems from creators’ loss of agency over how their work is used. It
highlights the need for informed consent, fair licensing deals
over user data, and ethical dataset creation.
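As a rough illustration of the mechanism involved (and not the method of any specific tool, including the musical approach of Meerza, Sun, and Liu (2024)), the sketch below applies a small gradient-based perturbation that increases a model's loss on an input while keeping the change small; real poisoning and cloaking schemes optimise more sophisticated objectives against models they do not control.

```python
import torch
import torch.nn.functional as F

def perturb_example(model, x, y, epsilon=0.01):
    """Nudge an input a small step in the direction that increases the model's
    loss on it, keeping values in a valid range (illustrative sketch only)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Fast-gradient-sign style step: small, roughly imperceptible change.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(-1.0, 1.0).detach()
```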
Music Identification and IP Enforcement The Google
Assistant software can identify very short music excerpts
and unearth previously undiscovered samples (Cetin 2023)
using an approach likely employing neural audio fingerprint-
ing (Arcas et al. 2017). While sample identification can be
informative for music listeners, such technology also al-
lows for easier detection of copyright infringement, espe-
cially when deployed on digital music distribution platforms
hosting user-generated content such as YouTube, Spotify,
or SoundCloud. Content moderation systems on these plat-
forms rely on algorithms that scan millions of content up-
loads automatically each day, and can produce outcomes that
include blocking and taking down material. However, such
rigid and widespread enforcement of copyright discourages
the distribution of infringing works produced through spe-
cific creative practices that involve sampling, such as hip
hop music, mash-ups, and bootleg remixes (Watson 2024;
Brøvig-Hanssen and Jones 2023).
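For context on how fingerprint-based identification works at a basic level, the sketch below implements the classical spectral-peak approach; it is a simplified illustration and not the neural fingerprinting of (Arcas et al. 2017) or any platform's production system, which additionally hash peak pairs and align time offsets.

```python
import numpy as np
from scipy.signal import stft
from scipy.ndimage import maximum_filter

def peak_fingerprint(audio, sr, n_fft=2048, neighbourhood=15):
    """Return a set of (frequency bin, time frame) spectral peaks, a compact and
    fairly noise-robust summary of an excerpt (illustrative sketch only)."""
    _, _, Z = stft(audio, fs=sr, nperseg=n_fft)
    mag = np.abs(Z)
    # Keep points that are the maximum of their local neighbourhood
    # and exceed a global magnitude threshold.
    local_max = maximum_filter(mag, size=neighbourhood) == mag
    threshold = mag.mean() + 2 * mag.std()
    return {tuple(p) for p in np.argwhere(local_max & (mag > threshold))}

def match_score(query_fp, reference_fp):
    """Fraction of query peaks also present in the reference fingerprint."""
    return len(query_fp & reference_fp) / max(len(query_fp), 1)
```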
Copyright laws that protect owners’ interests in IP are
balanced by limitations and exceptions intended to prevent
copyright from excessively impinging on freedom of expres-
sion. These are country-specific, for example ‘fair use’ in the
US and ’parody’ in the EU / UK, but rarely specify precisely
what kind of appropriation of copyrighted material is per-
mitted. Many uses of copyrighted material are untested in
court and their legal status remains unclear. Consequently,
platforms mostly ignore limitations and exceptions to enact
blanket decisions in the interests of operational efficiency,
significantly reducing the efficacy of copyright law’s ex-
ceptions to the detriment of cultural expression (Brøvig-
Hanssen and Jones 2023). Thus, enforcing copyright in these
contexts is a nuanced issue that requires balancing creative
liberties with the economic incentives provided by IP rights.
Reproducibility in Music AI Research
Advances in music AI research, and AI more generally
(Henderson et al. 2018), are currently stifled by repro-
ducibility issues. New issues have emerged from the trend of requiring ever-larger models to achieve state-of-the-art quality and the subsequent protection of their commercial value. Producing large, commercially ready models (Lavin et al. 2022) is only possible with intensive resources. With rare exceptions, only a few large tech corporations can
create and deploy large AI systems at scale, from start to fin-
ish (Widder, West, and Whittaker 2023). Many companies
will also choose to limit access to models or not divulge
proprietary algorithms or model training processes, mostly
for commercial reasons but occasionally for AI safety rea-
sons. For example, the original paper for Google’s Musi-
cLM (Agostinelli et al. 2023) stated that they had “no plans”
to release the tool to the public due to concerns that a significant proportion of its generated output could be traced to copyrighted sources. However, Google has since made it available for beta testing following licensing agreements with UMG (Jabour 2024). Safety and commercial
value are both valid reasons not to share work, but this re-
stricts research progress and conflicts with optimistic hopes
for decentralisation as a benefit of AI (Birtchnell 2018).
The State of AI Regulation in the West
Optimistic predictions of AI’s economic potential have led
policy makers around the world to back the technology with
various initiatives (Uren and Edwards 2023). At the same
time, there is caution. G7 Nations have signed the Hiroshima
AI Process which contains high-level guiding principles for
developing advanced AI systems (Wintour 2024) and AI leg-
islation is starting to emerge in many polities including the
EU, US, and UK. While the extent of regulation varies, western polities appear motivated to find a balance between supporting their creative / cultural sectors and their technology sectors.
European Union
The EU has recently approved its AI Act, which will be
fully applicable in law by 2026 with some aspects already
in effect (Official Journal of the European Union 2024).
General purpose AI models will have to comply with trans-
parency requirements and EU copyright law by labelling AI-
generated content, designing models to prevent generating
illegal content, publishing summaries of copyrighted train-
ing data, and obtaining the authorisation of the rights holder
concerned for any use of copyright protected content, unless
relevant exceptions and limitations apply (Tencer 2024b).
The act was cautiously welcomed by trade organisations
for IP rights holders, including a number of music indus-
try groups (International Federation of the Phonographic In-
dustry 2024), but measures were also criticised as “watered
down” by consumer watchdogs (Corporate Europe Observa-
tory 2024). The nature of risk self-assessment and the disparity in risk-level regulation suggest that many music industry applications could be assessed as low-risk despite the numerous impacts identified here and consequently be subjected to looser regulation (Nature Editorials 2024). Ad-
ditionally, while the act encourages sustainability through
standardisation, codes of practice and information disclo-
sure, it does not respond effectively to the AI industry’s sig-
nificant environmental impacts (Pereira 2024).
United States of America
AI regulation in the US is currently a patchwork of guide-
lines proposed by state and local governments. In terms of
IP, the US Copyright Office recently released updated guidelines rejecting the notion of considering AI as a contributor, stating that it does not register works created by machines and that creative works still need a human author to qualify for copyright protection (Rockwell 2024; Berkowitz 2024).
Publicity laws, rather than copyright, protect an individ-
ual’s name, image, and likeness from being exploited for
commercial purposes (Rockwell 2024). In 2023 / 2024, the
NO AI FRAUD and NO FAKES Acts were introduced in
the US House and Senate respectively seeking to establish
a “right of publicity” at the federal level and hold individ-
uals and companies liable for producing or hosting deep-
fakes. While these bills have not yet passed, in July 2024 Tennessee’s ELVIS Act became the first state-level legislation to come into effect, with the intention of protecting musicians from having their vocal likeness generated by AI for commercial purposes. The bill makes it illegal to replicate an artist’s
voice without their consent (All-Party Parliamentary Group
on Music and UK Music 2024).
Elsewhere, the AI Environmental Impacts Act has been
introduced to direct the National Institute of Standards and
Technology to collaborate with academia, industry and civil
society to establish standards for assessing AI’s environmen-
tal impact, and to create a voluntary reporting framework for
AI developers and operators. However, this legislation has
not yet passed. In any case, voluntary measures rarely pro-
duce a lasting culture of accountability and consistent adop-
tion because of reliance on goodwill (Crawford 2024).
The United Kingdom
The UK has no specific AI legislation and AI is currently
governed by limited pre-existing laws. For example, the IP
aspect of AI generated works is currently covered by the
Copyright, Designs and Patents Act 1988. Unlike US law, the UK’s act explicitly allows computer-generated work to
be copyrighted. Section 9(3) of the act stipulates that when
a work has ‘no human author’ and is ‘computer-generated’, the ‘author’ is defined as the person who makes ‘arrangements necessary for the creation of the work’ and is granted
copyright. The law currently empowers UK courts to de-
cide appropriate authorship of AI-generated music works
depending on the facts of each case. Ambiguities mean this
becomes complex when many different stakeholders could
be involved in the creation of AI-generated music such as
the user, programmer, data controller, training data creator,
model trainer, model owner, investors, or any combination of
these (Sturm et al. 2019; Koempel 2020; Majumdar 2023).
British lawmakers appear to have recognised that the cur-
rent law is inadequate in the context of AI-generated works.
In 2023, a debate was held in the UK parliament on IP Rights
in relation to AI (HC Deb 2023). One of the key outcomes
was a successful argument against a so-called text and data
mining (TDM) exemption on copyrighted works to allow AI
developers free use of existing music, literature and works of
art for the purposes of training AI (Culture, Media and Sport
Committee 2023). However, the UK Government changed following a general election in 2024, and the new government now appears to be reconsidering the TDM exemption and an opt-out system following lobbying from AI companies (Thomas and Gross 2024).
A report by UK music industry representatives and UK
parliamentarians (All-Party Parliamentary Group on Music
and UK Music 2024) makes eight recommendations for the
UK Government informed by testimony from legal experts
on UK, EU, and US IP law, authors associations, and the
British music tech company DAACI. The recommendations
centre around the introduction of a pro-creative industries AI
Bill that protects copyright, introduces new rights and obli-
gations around labelling and record keeping, and enhances
personality rights. Other recommendations include transpar-
ent labelling requirements for AI-generated content, an obli-
gation for AI developers and those using models to keep a
record of training data, compliance with UK copyright law,
addressing the copyright status of AI-generated works, and
specific personality rights to protect an individual’s voice, im-
age, name, and likeness from misappropriation.
Conclusions and Recommendations
AI is affecting the music industry in myriad ways that are still unfolding. This paper has outlined many areas of the
western music industry that are currently affected by AI and
detailed some of the associated issues, taking into account
current general public and music industry perceptions on
AI and current / proposed legal frameworks in the EU, US
and UK. To conclude this paper, we make recommendations
consistent with the principles of Industry 5.0 on topics of
advocacy, working practices, commercial opportunities, and
research directions to support the transition to a sustainable,
resilient and human-centric music industry.
Sustainability and Ethical Practice
Disclosure of potential impacts and ethical implications of
AI in music is currently lacking. Findings in (Barnett 2023)
suggest that less than 10% of generative audio research pa-
pers discuss any negative broader impact in their work, even
though 65% consider potential positive broader impacts.
(Henderson et al. 2020) meanwhile found substantial under-reporting of the information needed to calculate energy and resource use. (Reje 2022) suggests that many emerging music tech
start-ups developing AI do not prioritise the adoption of for-
mal ethical guidelines and (Oğul 2024) notes that environ-
mental impact and sustainability are frequently missing even
from published ethical guidelines on AI in the music indus-
try. By and large, AI developers are not prioritising energy reduction, energy-efficient models, or the disclosure of relevant data. As such, it is hard to get accurate and complete data on AI’s en-
vironmental impact (Crawford 2024).
Organisations developing AI music systems should be
transparent and share relevant information and resources
where possible, disclosing potential impacts in appropriate
language (Haueis 2024). They should create or adopt ethical
guidelines (e.g. aiformusic) to guide development and fol-
low the machine learning technology readiness level (ML-
TRL) assessment framework proposed by (Lavin et al. 2022)
which prioritises ethics and fairness to develop principled,
safe, and trusted AI technology. Model developers should
take steps to report the energy required for the training and
inference of their AI models and make efforts to minimise
energy use through adoption of efficient model architectures
and data pipelines (Chen et al. 2023; Douwes et al. 2023),
following examples such as (Douwes et al. 2023; Douwes
and Serizel 2024; Tabata and Wang 2021; Utz and DiPaola
2023) which demonstrate methods for computing and op-
timising energy use and emissions in the training and in-
ference of AI audio models and other digital audio applica-
tions. Developers should also consider using smaller training
datasets which require less energy to train on and enable ad-
ditional benefits in certain use-cases (Vigliensoni, Perry, and
Fiebrink 2022). These measures will enable adopters to, in
turn, make an informed decision about specific model use.
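As one practical route to the energy reporting recommended above, developers could wrap training or inference runs in an emissions tracker. The sketch below assumes the open-source codecarbon package and a placeholder train() function; reported figures are estimates and should be checked against hardware-level measurements where accuracy matters.

```python
from codecarbon import EmissionsTracker  # assumes the open-source codecarbon package


def train():
    """Placeholder for a model training or fine-tuning loop."""
    ...


tracker = EmissionsTracker(project_name="music-ai-training")
tracker.start()
train()
emissions_kg = tracker.stop()  # estimated kg CO2-equivalent for the tracked run
print(f"Estimated training emissions: {emissions_kg:.4f} kg CO2eq")
```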
We agree with (Brennan 2020) and (Crawford 2024) that
addressing the environmental impact of AI is a collective ef-
fort from industry, researchers, legislators, and the public.
Audiences must consider their consumption choices. Like-
wise musicians, manufacturers, promoters, labels, and tech-
nology companies that rely on AI or musical content for
their business model should consider the sustainability of
their working practices and shift towards more sustainable
ones where practicable. For this, we recommend leverag-
ing resources from organisations such as Music Declares
Emergency (Nolan 2024) and elsewhere (Jones, McLachlan,
and Mander 2021) to inspire mitigation efforts. However, we
recognise that without strong legislation or sufficient incen-
tives many companies and industry workers may prioritise
profit acquisition, career opportunities, or user needs over
sustainability goals (Sturm et al. 2024), even when environ-
mentally conscious values are held (Røyseng, Vinge, and
Stavrum 2024). Thus, most importantly, lawmakers should
develop targeted incentives in relation to AI and the music
industry to address the climate crisis (Crawford 2024).
Investment in Human Creativity and Industry
Grassroots
Large, data-intensive AI models have shifted the value of
music towards its profitability as data, rather than intrinsic
artistic worth. But it is crucial to acknowledge that the effi-
cacy of these AI systems is contingent upon the high-quality materials from which they learn, which predominantly stem from human creativity (Jacques and Flynn 2024). There-
fore, we recommend that individuals and organisations that
feel strongly about work quality and Industry 5.0 princi-
ples should strive to commission human creatives where
practicable. Similarly, we would like to see more initiatives
from industry and governments investing in the music in-
dustry grassroots to maintain a creative talent pipeline. Examples include Spotify’s investment in UK youth clubs (Collins
2024) and the UK Government’s backing of a voluntary levy
on tickets at large venues to support grassroots venues and
workers (Reilly 2024). Such initiatives will not only main-
tain fulfilling employment opportunities, facilitate worker
upskilling, and build music industry resilience, but also con-
tribute to the resilience of AI models in general. The creation of new, high-quality datasets will help to maintain model performance and contemporaneity as web-scraped data be-
comes increasingly unreliable and ineffective due to AI-
generated content (Alemohammad et al. 2024; Jones 2024b).
Intellectual Property Licensing and Compliance
Shifting legal situations in the UK, EU, and US in favour of
the entrenched music industries demonstrate the importance of individuals and organisations organising, advocating, and lobbying for their position. However, developments in both the
technology and the legal frameworks governing it should be
closely watched to ensure they are fair to creators and other
key industry workers. Standard licensing practices are yet to
emerge so it is important to set fair precedents within the in-
dustry in terms of remuneration, transparency, and consent.
A global music industry with divergent legislation makes
international data sharing and standards adherence com-
plex. There are opportunities in IP licensing and environ-
mental standards compliance certification and enforcement
as demonstrated by the Fairly Trained Initiative (Knibbs
2024b). Several ongoing research directions could be valu-
able to various aspects of compliance, from detecting deepfakes and AI-generated music (Vaglio et al. 2021; Khanjani,
Watson, and Janeja 2023; Desblancs et al. 2024), to assess-
ing generative music output similarity for attribution-based
models (Flexer 2023; Batlle-Roca et al. 2024; Deng, Zhang,
and Ma 2024), the placement and detection of watermarks in
AI-generated content (Porter 2023), and data poisoning for
music (Meerza, Sun, and Liu 2024).
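As an indication of what attribution-oriented similarity screening might look like in practice, the sketch below flags training items whose embeddings are unusually close to a generated output. It is a generic cosine-similarity screen assuming pre-computed audio embeddings, not the metrics of (Flexer 2023) or (Batlle-Roca et al. 2024), and any threshold would need careful validation.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def flag_possible_replication(generated_emb, training_embs, threshold=0.95):
    """Return indices of training items whose embeddings are suspiciously close
    to a generated output (illustrative sketch; threshold is an assumption)."""
    return [i for i, emb in enumerate(training_embs)
            if cosine_similarity(generated_emb, emb) >= threshold]
```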
Acknowledgements
Alexander Williams is a research student at the UKRI Cen-
tre for Doctoral Training in Artificial Intelligence and Mu-
sic, supported jointly by UK Research and Innovation [grant
number EP/S022694/1] and Queen Mary University of Lon-
don. We wish to thank Lord Tim Clement-Jones for an in-
sightful discussion on AI governance in the creative indus-
tries. The Centre for Digital Music is a signatory to the aifor-
music initiative and member of Music Technology UK.
References
2019. Digital Barriers enhances security operations at Lon-
don’s O2 Arena for The BRIT Awards and National Televi-
sion Awards. Source Security.
2021. Queen Mary spinout Nemisindo launches online
sound design service based on procedural audio technology.
Queen Mary University of London.
2024. OpenAI signs multi-year content partnership with
Condé Nast. The Guardian.
Adgcraft Communications. 2024. Integrating AI in Music
Industry PR.
Agostinelli, A.; Denk, T. I.; Borsos, Z.; Engel, J.; Verzetti,
M.; Caillon, A.; Huang, Q.; Jansen, A.; Roberts, A.;
Tagliasacchi, M.; Sharifi, M.; Zeghidour, N.; and Frank,
C. 2023. MusicLM: Generating Music From Text.
ArXiv:2301.11325 [cs, eess].
Alemohammad, S.; Casco-Rodriguez, J.; Luzi, L.; Imtiaz,
A.; Babaei, H.; LeJeune, D.; Siahkoohi, A.; and Baraniuk,
R. 2024. Self-Consuming Generative Models go MAD. In
The Twelfth International Conference on Learning Repre-
sentations (ICLR). Vienna, Austria.
All-Party Parliamentary Group on Music; and UK Music.
2024. Artificial Intelligence and the Music Industry – Master
or Servant? Technical report, United Kingdom.
Anderson, C. 2024. The O2 introduces first self-serve bars
powered by AI. News Shopper.
Arcas, B. A. y.; Gfeller, B.; Guo, R.; Kilgour, K.; Kumar, S.;
Lyon, J.; Odell, J.; Ritter, M.; Roblek, D.; Sharifi, M.; and
Velimirović, M. 2017. Now Playing: Continuous low-power
music recognition. In Machine Learning on the Phone Work-
shop at the 31st Conference on Neural Information Process-
ing Systems. Long Beach, CA, USA. ArXiv:1711.10958.
Arditi, D.; and Nolan, R., eds. 2024. The Palgrave Hand-
book of Critical Music Industry Studies. Cham: Springer
Nature Switzerland. ISBN 978-3-031-64012-4 978-3-031-
64013-1.
Artist Rights Alliance. 2024. 200+ Artists Urge Tech Plat-
forms: Stop Devaluing Music.
Aswad, J. 2024. Sony Music Warns AI Developers Not to
Use Its Content for Training. Variety.
Barnett, J. 2023. The Ethical Implications of Generative Au-
dio Models: A Systematic Literature Review. In Proceed-
ings of the 2023 AAAI/ACM Conference on AI, Ethics, and
Society, AIES ’23, 146–161. Montréal, QC, Canada: Asso-
ciation for Computing Machinery. ISBN 9798400702310.
Batlle-Roca, R.; Liao, W.-H.; Serra, X.; Mitsufuji, Y.; and
Gómez, E. 2024. Towards Assessing Data Replication in
Music Generation with Music Similarity Metrics on Raw
Audio. In Proc. of the 25th Int. Society for Music Infor-
mation Retrieval Conf. San Francisco, CA, USA.
Berkowitz, A. E. 2024. ”Gimme Some Truth”: AI Music
and Implications for Copyright and Cataloging. Information
Technology and Libraries, 43(3).
Birtchnell, T. 2018. Listening Without Ears: Artificial In-
telligence in Audio Mastering. Big Data & Society, 5(2):
205395171880855.
Bittner, R. 2022. Meet Basic Pitch: Spotify’s Open Source
Audio-to-MIDI Converter. Spotify Engineering.
Boon, H. 2023. Alien power chords: AIVA has ‘musical
artist status’ in France – but what about the humans who
feed it? The Sociological Review Magazine.
Born, G.; Morris, J.; Diaz, F.; and Anderson, A. 2021. Artifi-
cial Intelligence, Music Recommendation, and the Curation
of Culture: A White Paper.
Brennan, M. 2020. The Environmental Sustainability of
the Music Industries. In Oakley, K.; and Banks, M., eds.,
Cultural Industries and the Environmental Crisis: New Ap-
proaches for Policy, 37–49. Cham: Springer International
Publishing. ISBN 978-3-030-49384-4.
Brittain, B. 2023. Getty Images lawsuit says Stability AI
misused photos to train AI. Reuters.
Brittain, B. 2024. Music labels’ AI lawsuits create copyright
puzzle for courts. Reuters.
Brøvig-Hanssen, R.; and Jones, E. 2023. Remix’s retreat?
Content moderation, copyright law and mashup music. New
Media & Society, 25(6): 1271–1289. SAGE Publications.
Casey, S. 2024. Tik Tok and Universal Music Group’s Li-
censing Battle. Cardozo Arts & Entertainment Law Journal
Blog.
Cetin, M. 2023. Detecting samples less than one second long
now possible with Google Assistant, report shows. DJ Mag.
Chen, Z.; Wu, M.; Chan, A.; Li, X.; and Ong, Y.-S. 2023.
Survey on AI Sustainability: Emerging Trends on Learn-
ing Algorithms and Research Challenges [Review Article].
IEEE Computational Intelligence Magazine, 18(2): 60–77.
Cisac; and PMP Strategy. 2024. Study on the Economic
Impact of Generative AI in the Music and Audiovisual In-
dustries. Technical report.
Clancy, M. 2021. Reflections on the Financial and Ethical
Implications of Music Generated by Artificial Intelligence.
PhD Thesis, Trinity College, Dublin, Ireland.
Collins, R. 2024. Can Spotify’s youth club plan actually help
the future of music? BBC News.
Corporate Europe Observatory. 2024. Trojan Horses: How
European Startups Teamed up with Big Tech to Gut the AI
Act.
Coscarelli, J. 2023. An A.I. Hit of Fake ‘Drake’ and ‘The
Weeknd’ Rattles the Music World. The New York Times.
Crawford, K. 2024. Generative AI’s environmental costs are
soaring — and mostly secret. Nature, 626(8000): 693–693.
Nature Publishing Group.
Crompton, L. 2021. The decision-point-dilemma: Yet an-
other problem of responsibility in human-AI interaction.
Journal of Responsible Technology, 7-8: 100013.
Culture, Media and Sport Committee. 2023. Connected
tech: AI and creative technology. Technical Report HC
1643, House of Commons, United Kingdom.
Dalugdug, M. 2023. 27% of indie artists have used AI music
tools, according to TuneCore study. Music Business World-
wide.
Davies, C. W. 2024. Getty Images V Stability I: The Im-
plications for UK Copyright Law and Licensing. Pinsent
Masons.
Deng, J.; Zhang, S.; and Ma, J. 2024. Computational Copy-
right: Towards A Royalty Model for Music Generative AI.
In 2nd Workshop on Generative AI and Law, co located with
the International Conference on Machine Learning. Hon-
olulu, Hawaii, USA.
Department for Education. 2023. The Impact of AI on UK
Jobs and Training. Technical report.
Deruty, E.; Grachten, M.; Lattner, S.; Nistal, J.; and
Aouameur, C. 2022. On the Development and Practice of
AI Technology for Contemporary Popular Music Produc-
tion. Transactions of the International Society for Music
Information Retrieval, 5(1): 35–49. Ubiquity Press.
Desblancs, D.; Meseguer-Brocal, G.; Hennequin, R.; and
Moussallam, M. 2024. From Real to Cloned Singer Iden-
tification. In Proc. of the 25th Int. Society for Music In-
formation Retrieval Conf. San Francisco, CA, USA: arXiv.
ArXiv:2407.08647.
Douwes, C.; Bindi, G.; Caillon, A.; Esling, P.; and Briot, J.-
P. 2023. Is Quality Enough? Integrating Energy Consump-
tion in a Large-Scale Evaluation of Neural Audio Synthesis
Models. In ICASSP 2023 - 2023 IEEE International Confer-
ence on Acoustics, Speech and Signal Processing (ICASSP),
1–5. ISSN: 2379-190X.
Douwes, C.; and Serizel, R. 2024. From Computation
to Consumption: Exploring the Compute-Energy Link for
Training and Testing Neural Networks for SED Systems. In
Proceedings of the 9th Workshop on Detection and Classifi-
cation of Acoustic Scenes and Events. Tokyo, Japan.
Drott, E. 2021. Copyright, compensation, and commons in
the music AI industry. Creative Industries Journal, 14(2):
190–207. Routledge.
Evans, Z.; Carr, C. J.; Taylor, J.; Hawley, S. H.; and Pons,
J. 2024a. Fast Timing-Conditioned Latent Audio Diffusion.
ArXiv:2402.04825.
Evans, Z.; Parker, J. D.; Carr, C. J.; Zukowski, Z.; Taylor, J.;
and Pons, J. 2024b. Stable Audio Open. ArXiv:2407.14358.
Flexer, A. 2023. Can ChatGPT Be Useful for Distant Read-
ing of Music Similarity? In 2nd Workshop on Human-
Centric Music Information Research. Milan, Italy.
Gahnberg, C. 2024. AI-Control: Opt-Out Mechanisms From
the View of a Governance Cycle. In IAB Workshop on AI-
CONTROL.
Gerken, T. 2024. TikTok and Universal settle dispute over
music royalties. BBC News.
Google Arts & Culture. 2019. Editorial Feature | How an
Artist Used AI to Make a Concert Hall Dream.
Haueis, P. 2024. Climate concepts for supporting political
goals of mitigation and adaptation: The case for “climate
crisis”. WIREs Climate Change, 15(5): e893.
Hawthorne, K. 2024. ‘I’m empowering my song to go and
make love with different people’: Imogen Heap on how her
AI twin will rewrite pop. The Guardian.
HC Deb. 2023. Artificial Intelligence: Intellectual Property
Rights - Hansard - UK Parliament.
Heikkilä, M. 2023. This new data poisoning tool lets artists
fight back against generative AI. MIT Technology Review.
Henderson, P.; Hu, J.; Romoff, J.; Brunskill, E.; Jurafsky,
D.; and Pineau, J. 2020. Towards the systematic reporting of
the energy and carbon footprints of machine learning. The
Journal of Machine Learning Research, 21(1): 248:10039–
248:10081.
Henderson, P.; Islam, R.; Bachman, P.; Pineau, J.; Precup,
D.; and Meger, D. 2018. Deep reinforcement learning that
matters. In Proceedings of the Thirty-Second AAAI Confer-
ence on Artificial Intelligence and Thirtieth Innovative Ap-
plications of Artificial Intelligence Conference and Eighth
AAAI Symposium on Educational Advances in Artificial In-
telligence, AAAI’18/IAAI’18/EAAI’18, 3207–3214. New
Orleans, Louisiana, USA: AAAI Press. ISBN 978-1-57735-
800-8.
Henkin, D. 2023. Orchestrating The Future—AI In The Mu-
sic Industry. Forbes.
Hennequin, R.; Khlif, A.; Voituret, F.; and Moussallam, M.
2019. Spleeter: A Fast and State-of-the Art Music Source
Separation Tool with Pre-Trained Models. In Late Break-
ing/Demo at the 20th International Society for Music Infor-
mation Retrieval. Delft, The Netherlands.
Henry, A.; Wiratama, V.; Afilipoaie, A.; Ranaivoson, H.; and
Arrivé, E. 2024. Impacts of AI on Music Consumption and
Fairness. Emerging Media, 27523543241269047. SAGE
Publications.
Holzapfel, A.; Sturm, B. L.; and Coeckelbergh, M. 2018.
Ethical Dimensions of Music Information Retrieval Tech-
nology. Transactions of the International Society for Music
Information Retrieval, 1(1): 44–55.
Hopster, J. 2021. What are socially disruptive technologies?
Technology in Society, 67: 101750.
Horowitz, M. C.; Kahn, L.; Macdonald, J.; and Schneider,
J. 2024. Adopting AI: how familiarity breeds both trust and
contempt. AI & SOCIETY, 39(4): 1721–1735.
Huang, R.; Sturm, B. L. T.; and Holzapfel, A. 2021. De-
Centering the West: East Asian Philosophies and the Ethics
of Applying Artificial Intelligence to Music. In Proc. of the
22nd Int. Society for Music Information Retrieval Conf. On-
line.
Huang, R. S.; Holzapfel, A.; Sturm, B. L. T.; and Kaila, A.-
K. 2023. Beyond Diverse Datasets: Responsible MIR, In-
terdisciplinarity, and the Fractured Worlds of Music. Trans-
actions of the International Society for Music Information
Retrieval, 6(1): 43–59.
Ingham, T. 2023. YouTube and Universal Music Group part-
ner to develop AI music tools – complete with ‘protections’
for artists and rightsholders. Music Business Worldwide.
International Federation of the Phonographic Industry. 2023.
Engaging With Music 2023. Technical report.
International Federation of the Phonographic Industry. 2024.
EU AI ACT - Joint statement from European creators and
rightsholders.
Jabour, G. 2024. Drake Or Fake? Perceptions, Concerns,
and Business Implications of AI-Generated Vocals. Ph.D.
thesis, University of Texas, Austin, TX, USA.
Jacques, S.; and Flynn, M. 2024. Protecting Human Cre-
ativity in AI-Generated Music with the Introduction of an
AI-Royalty Fund. GRUR International, 73(12): 1137–1149.
Jahromi, G. S.; and Ghazinoory, S. 2023. How to use bits
for beats: the future strategies of music companies for using
Industry 4.0 technologies in their value chain. Information
Systems and e-Business Management, 21(3): 505–525.
Jones, A. 2024a. Tears For Fears Address Use Of AI For
New Album Cover. Stereogum.
Jones, C.; McLachlan, C.; and Mander, S. 2021. Super-Low
Carbon Live Music: a roadmap for the UK live music sec-
tor to play its part in tackling the climate crisis. Technical
Report SLCM 5.1, Tyndall Centre for Climate Change Re-
search, United Kingdom.
Jones, N. 2024b. The AI revolution is running out of data.
What can researchers do? Nature, 636(8042): 290–292. Na-
ture Publishing Group.
Khanjani, Z.; Watson, G.; and Janeja, V. P. 2023. Audio
deepfakes: A survey. Frontiers in Big Data, 5. Frontiers.
Knibbs, K. 2024a. Generative AI Has a ’Shoplifting’ Prob-
lem. This Startup CEO Has a Plan to Fix It. Wired.
Knibbs, K. 2024b. This Tech Exec Quit His Job to Fight
Generative AI’s Original Sin. Wired.
Koempel, F. 2020. From the gut? Questions on Artificial
Intelligence and music. Queen Mary Journal of Intellectual
Property, 10(4): 503–513.
Lavin, A.; Gilligan-Lee, C. M.; Visnjic, A.; Ganju, S.; New-
man, D.; Ganguly, S.; Lange, D.; Baydin, A. G.; Sharma, A.;
Gibson, A.; Zheng, S.; Xing, E. P.; Mattmann, C.; Parr, J.;
and Gal, Y. 2022. Technology readiness levels for machine
learning systems. Nature Communications, 13(1): 6039. Na-
ture Publishing Group.
Lee, H.-K. 2022. Rethinking creativity: creative industries,
AI and everyday creativity. Media, Culture & Society, 44(3):
601–612. SAGE Publications Ltd.
Lerch, A. 2018. The Relation Between Music Technol-
ogy and Music Industry. In Bader, R., ed., Springer Hand-
book of Systematic Musicology, 899–909. Berlin, Heidel-
berg: Springer. ISBN 978-3-662-55004-5.
Ma, Y.; Øland, A.; Ragni, A.; Sette, B. M. D.; Saitis, C.;
Donahue, C.; Lin, C.; Plachouras, C.; Benetos, E.; Shatri,
E.; Morreale, F.; Zhang, G.; Fazekas, G.; Xia, G.; Zhang, H.;
Manco, I.; Huang, J.; Guinot, J.; Lin, L.; Marinelli, L.; Lam,
M. W. Y.; Sharma, M.; Kong, Q.; Dannenberg, R. B.; Yuan,
R.; Wu, S.; Wu, S.-L.; Dai, S.; Lei, S.; Kang, S.; Dixon, S.;
Chen, W.; Huang, W.; Du, X.; Qu, X.; Tan, X.; Li, Y.; Tian,
Z.; Wu, Z.; Wu, Z.; Ma, Z.; and Wang, Z. 2024. Foundation
Models for Music: A Survey. ArXiv:2408.14340.
Majumdar, A. 2023. Facing the Music: The Future of Copy-
right Law and Artificial Intelligence in Music Industry.
Malloch, S.; and Trevarthen, C. 2018. The Human Nature of
Music. Frontiers in Psychology, 9. Frontiers.
Mbamba, U. O. L. 2024. Impact of Selected Fourth Indus-
trial Revolution Technologies on the Music Industry: An Ex-
ploration of the Pros and Cons. Umma Journal of Contem-
porary Literature and Creative Arts, 11(1): 179–209.
Meerza, S. I. A.; Sun, L.; and Liu, J. 2024. HARMONY-
CLOAK: Making Music Unlearnable for Generative AI. 85–
85. IEEE Computer Society. ISBN 9798331522360. ISSN:
2375-1207.
Melville, N. P.; Robert, L.; and Xiao, X. 2023. Putting hu-
mans back in the loop: An affordance conceptualization of
the 4th industrial revolution. Information Systems Journal,
33(4): 733–757.
Milmo, D. 2024. Thom Yorke and Julianne Moore join thou-
sands of creatives in AI warning. The Guardian.
Minsker, E. 2021. Holly Herndon’s AI Deepfake “Twin”
Holly+ Transforms Any Song Into a Holly Herndon Song.
Pitchfork.
Monroe, J. 2023. Grimes Unveils Software to Mimic Her
Voice, Offering 50-50 Royalties for Commercial Use. Pitch-
fork.
Morreale, F.; Bahmanteymouri, E.; Burmester, B.; Chen, A.;
and Thorp, M. 2023. The unwitting labourer: extracting hu-
manness in AI training. AI & SOCIETY. Springer.
Myers, A. 2023. AI-powered EDGE Dance Animator Ap-
plies Generative AI to Choreography.
Nature Editorials. 2024. There are holes in Europe’s AI Act
— and researchers can help to fill them. Nature, 625(7994):
216–216.
Nolan, R. 2024. Music Declares an Emergency: Music In-
dustry Studies in the Context of a Changing Climate. In
Arditi, D.; and Nolan, R., eds., The Palgrave Handbook of
Critical Music Industry Studies, 525–535. Cham: Springer
Nature Switzerland. ISBN 978-3-031-64013-1.
Official Journal of the European Union. 2024. Regula-
tion (EU) 2024/1689 of the European Parliament and of the
Council of 13 June 2024 Laying down Harmonised Rules
on Artificial Intelligence and Amending Regulations (EC)
No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU)
2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Direc-
tives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Ar-
tificial Intelligence Act) (Text with EEA Relevance). Leg-
islative Body: CONSIL, EP.
Olson, C. A. 2023. World’s Largest Music Company Em-
braces AI. Forbes.
Oğul, S. 2024. In Tune with Ethics: Responsible Artificial Intelligence and Music Industry. REFLEKTİF Sosyal Bilimler Dergisi, 5(1): 139–149.
Pasquale, F.; and Sun, H. 2024. Consent and Compensation:
Resolving Generative AI’s Copyright Crisis. Virginia Law
Review, 110.
Pasti Da Porto, V. C. 2023. Sustainability and Inclusion:
How SDGs may be implemented in the Music Industry.
MA Thesis in Management, Università Ca’ Foscari Venezia, Italy.
Peeters, G. 2021. The Deep Learning Revolution in MIR:
The Pros and Cons, the Needs and the Challenges. In
Kronland-Martinet, R.; Ystad, S.; and Aramaki, M., eds.,
Perception, Representations, Image, Sound, Music, Lecture
Notes in Computer Science, 3–30. Cham: Springer Interna-
tional Publishing. ISBN 978-3-030-70210-6.
Pelczynski, M. M. 2024. The AI Era: Building Sustain-
able AI Business Models for the Music Industry. Forms +
Shapes.
Pereira, J. R. L. d. 2024. The EU AI Act and environmental
protection: the case for a missed opportunity. Heinrich-Böll-Stiftung.
Porter, J. 2023. Google is embedding inaudible watermarks
right into its AI generated music. The Verge.
Pusztahelyi, R.; and Stefán, I. 2024. Improving Industry 4.0
to Human-Centric Industry 5.0 in Light of the Protection of
Human Rights. In 25th International Carpathian Control
Conference (ICCC), 1–6.
Reilly, N. 2024. The UK Government Has Officially Backed
a Levy on Stadium and Arena Tickets. Rolling Stone UK.
Reje, A. 2022. Ethical Risk Analysis of the Use of AI in Mu-
sic Production. Ph.D. thesis, KTH Royal Institute of Tech-
nology, Stockholm, Sweden.
Rezwana, J.; and Maher, M. L. 2023. User Perspectives
on Ethical Challenges in Human-AI Co-Creativity: A De-
sign Fiction Study. In Proceedings of the 15th Conference
on Creativity and Cognition, C&C ’23, 62–74. New York,
NY, USA: Association for Computing Machinery. ISBN
9798400701801.
Robinson, K. 2024. LANDR’s ‘Fair Trade AI’ Program Lets
Musicians Earn Money by Contributing to AI Training. Bill-
board.
Rockwell, E. 2024. The Heart of Artificial Intelligence in the
Music Industry: Amending the Music Modernization Act to
Promote Transparency. Boston College Intellectual Property
and Technology Forum, 2024.
Rufo, Y. 2024. Elvis Evolution: Presley to be brought to life
using AI for new immersive show. BBC News.
Rutter, P. 2016. The Music Industry Handbook. Routledge,
2nd edition.
Røyseng, S.; Vinge, J.; and Stavrum, H. 2024. The cultural
dissonance of sustainable live music. Annals of Leisure Re-
search, 1–15. Routledge.
Shroff, L. 2024. AI & Copyright: A Case Study of the Music
Industry. GRACE: Global Review of AI Community Ethics,
2(1).
Simpson, D. 2022. ’It feels like a fresh start’: why Every-
thing Everything turned to AI to write their new album. The
Guardian.
Stassen, M. 2024. Universal Music strikes strategic agree-
ment with AI startup ProRata, which just raised $25m for a
chatbot and tech to attribute and compensate content owners.
Music Business Worldwide.
Studio Wayne McGregor. 2019. Living Archive: Creating
Choreography with Artificial Intelligence.
Sturm, B. L. T.; Déguernel, K.; Huang, R. S.; Kaila, A.-K.; Jääskeläinen, P.; Kanhov, E.; Cros Vila, L.; Dalmazzo, D.;
Casini, L.; Bown, O. R.; Collins, N.; Drott, E.; Sterne, J.;
Holzapfel, A.; and Ben-Tal, O. 2024. AI Music Studies:
Preparing for the Coming Flood. In AI Music Conference.
Oxford, United Kingdom.
Sturm, B. L. T.; Iglesias, M.; Ben-Tal, O.; Miron, M.; and
Gómez, E. 2019. Artificial Intelligence and Music: Open
Questions of Copyright Law and Engineering Praxis. Arts,
8(3): 115. Multidisciplinary Digital Publishing Institute.
Suchman, L. 2023. The uncontroversial ‘thingness’ of AI.
Big Data & Society, 10(2): 20539517231206794. SAGE
Publications Ltd.
Sun, M. 2023. Paul McCartney says there’s nothing artificial
in new Beatles song made using AI. The Guardian.
Tabata, T.; and Wang, T. Y. 2021. Life Cycle Assessment
of CO2 Emissions of Online Music and Videos Streaming
in Japan. Applied Sciences, 11(9): 3992. Multidisciplinary
Digital Publishing Institute.
Taylor, J. 2024. Suno AI can generate power ballads about
coffee – and jingles for the Guardian. But will it hurt musi-
cians? The Guardian.
Tencer, D. 2024a. 25% of music producers are now using
AI, survey says – but a majority shows strong resistance.
Music Business Worldwide.
Tencer, D. 2024b. As landmark AI Act passes EU parlia-
ment vote, rightsholders urge ‘meaningful and effective’ en-
forcement of copyright. Music Business Worldwide.
Tencer, D. 2024c. Music industry applauds introduction
of ‘No AI FRAUD Act’ in US Congress. Music Business
Worldwide.
Thomas, D.; and Gross, A. 2024. UK to consult on ‘opt-
out’ model for AI content-scraping in blow to publishers.
Financial Times.
UK Music. 2024. This Is Music 2024. Technical report,
United Kingdom.
Universal Music Group. 2024. Music Industry Unites to Pro-
tect the Rights of Musicians Amid the Growth of Generative
AI Technology.
Uren, V.; and Edwards, J. S. 2023. Technology readiness
and the organizational journey towards AI adoption: An em-
pirical study. International Journal of Information Manage-
ment, 68: 102588.
Utz, V.; and DiPaola, S. 2023. Climate Implications of
Diffusion-based Generative Visual AI Systems and their
Mass Adoption. In 14th International Conference on Com-
putational Creativity. Ontario, Canada.
Vaglio, A.; Hennequin, R.; Moussallam, M.; and Richard, G.
2021. The Words Remain the Same: Cover Detection with
Lyrics Transcription. In Proc. of the 22nd Int. Society for
Music Information Retrieval Conf. Online.
Vigliensoni, G.; Perry, P.; and Fiebrink, R. 2022. A Small-
Data Mindset for Generative AI Creative Work. In Genera-
tive AI and HCI Workshop at CHI 2022.
Watson, A.; and Leyshon, A. 2022. Negotiating platformisa-
tion: MusicTech, intellectual property rights and third wave
platform reintermediation in the music industry. Journal of
Cultural Economy, 15(3): 326–343. Routledge.
Watson, J. 2024. Copyright and the Production of Hip Hop
Music.
Welsh, A. C. 2022. iZotope announces Ozone 10 AI master-
ing suite. DJMag.
Wengen, W. v.; and Ribbert, R. 2024. EU AI Act’s Opt-Out
Trend May Limit Data Use for Training AI Models. Green-
berg Traurig.
Whitestone Insight. 2024. UK Music AI Poll – 27th March
2024.
Widder, D. G.; West, S.; and Whittaker, M. 2023. Open (For
Business): Big Tech, Concentrated Power, and the Political
Economy of Open AI.
Wintour, P. 2024. Why the pope has the ears of G7 leaders
on the ethics of AI. The Guardian.
Xu, X.; Lu, Y.; Vogel-Heuser, B.; and Wang, L. 2021. In-
dustry 4.0 and Industry 5.0—Inception, conception and per-
ception. Journal of Manufacturing Systems, 61: 530–535.