Copyright and Artificial Intelligence Consultation: What visual artists and rightsholders need to know
In December 2024, the UK Government launched an open consultation on Copyright and Artificial Intelligence (AI). To help artists, estates and the wider public understand the proposals and their potential impacts on visual artists and rightsholders, DACS has published our views on the consultation.
We have also produced a guide to completing the consultation, which includes suggested responses.
The consultation invites feedback on proposals intended to support the government's objectives for AI and copyright.
These objectives are stated as:
1. Supporting right holders’ control of their content and ability to be remunerated for its use.
2. Supporting the development of world-leading AI models in the UK by ensuring wide and lawful access to high-quality data.
3. Promoting greater trust and transparency between the sectors.
The consultation proposes four options:
- Option 0: Do nothing; copyright and related laws remain as they are.
- Option 1: Strengthen copyright, requiring licensing in all cases.
- Option 2: A broad data mining exception.
- Option 3: A data mining exception which allows right holders to reserve their rights, underpinned by supporting measures on transparency.
The UK Government has stated that Option 3 is its preferred route.
In simple terms, this would create a broad exception to copyright, allowing AI developers to train their models on copyright-protected work. The option includes a rights reservation system, whereby rightsholders can opt their work out of AI training. By reserving their rights, rightsholders would, in theory, also be able to continue licensing their works for AI training.
The Government has stated that it will not introduce a data mining exception without effective rights reservation measures and transparency around training practices.
Since the announcement of the consultation, organisations from across the creative industries and media sectors – including DACS and other collective management organisations – have voiced concerns about this approach.
DACS will be submitting a response to the consultation, drawing on the findings of our AI & Artists Report. We have also published a guide to responding to the consultation.
DACS View: The Government’s approach presents challenges for visual artists and rightsholders.
- A rights reservation system inverts existing copyright frameworks: copyright arises automatically when a creator makes their work, whereas under the Government's proposal the exception would apply automatically unless the creator takes explicit action, placing the burden on rightsholders to enforce their rights. There is an additional risk that rights reservation will not be possible retroactively, meaning that copyright works may already have been used to train AI models before a rightsholder has been able to opt out.
- The proposed 'rights reservation' closely resembles the 'opt-out' models introduced in the EU, which have failed to give rightsholders effective control or remuneration. The EU's text and data mining exception has created numerous issues around the enforceability of the opt-out, which the EU's AI Act has attempted to address by retrofitting the 2019 law. No effective opt-out solutions have yet been developed.
- Visual artists have much less control over downstream uses of their work than other creators. For example, it is common practice for visitors to galleries and exhibitions to photograph the works on display and post the images online or on social media. These platforms allow AI training on users' posts, and therefore on the artist's work when it appears in other users' accounts. Rights reservation systems do not work in this situation: there is no way for an artist to meaningfully reserve their rights in an exhibited work without measures such as preventing photography in the gallery, which would be a massive step backwards and would harm audience engagement with galleries and museums.
- The most widely adopted rights reservation standard is robots.txt, a text file that tells web crawlers which parts of a website they can and cannot access. However, this mechanism cannot provide the granular control that artists need to effectively reserve their rights and facilitate licensing of their works. Robots.txt allows works to be blocked from web crawling at the site level (i.e. the site housing a work, such as an artist's website, can be opted out) but does not recognise reservations attached to individual works. This makes opt-outs ineffective for secondary or downstream uses of artists' works. Further, because the system cannot distinguish between individual works, rightsholders would need to opt out either all of their works or none, limiting their licensing opportunities.
- Many artists create large volumes of work: a photographer could make a thousand works in a day, all of which are protected by copyright. Many artists use online storage services, so their work could potentially be scraped for training, depending on those providers' terms and conditions. The photographer would need to reserve their rights in every single image they produce. Rather than shouldering additional burdens on their time and resources just to secure their copyright, artists should be supported to make more work, have greater opportunities to develop their practice, and be fairly rewarded for secondary uses of their work. The onus should not be on rightsholders to prevent AI firms from commercially exploiting their data without permission or remuneration.
- In a survey of 1,000 artists, we found that a large majority (84%) were in favour of licensing their work on the condition that they received fair pay. DACS has 40 years of experience licensing across a variety of sectors, but a barrier to licensing for AI is that developers are not transparent about which works they have used for training.
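The site-level limitation of robots.txt described above can be illustrated with a short sketch using Python's standard-library robots.txt parser. The crawler name `ExampleAIBot` and the domains `artist.example` and `gallery-visitor.example` are hypothetical placeholders, not real crawlers or sites:

```python
# A hypothetical robots.txt: the artist blocks one named AI crawler
# ("ExampleAIBot") from the /portfolio/ section of their own website.
from urllib.robotparser import RobotFileParser

robots_txt = """\
User-agent: ExampleAIBot
Disallow: /portfolio/
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# The rule is evaluated per URL path on this one site. It cannot follow
# an image once it is re-posted elsewhere, and it cannot distinguish
# between individual works within the blocked folder.
print(parser.can_fetch("ExampleAIBot",
                       "https://artist.example/portfolio/work1.jpg"))   # False: blocked on the artist's site
print(parser.can_fetch("ExampleAIBot",
                       "https://gallery-visitor.example/photos/work1.jpg"))  # True: same work, different site
```

Because each crawler consults only the robots.txt of the site it is visiting, a reservation on the artist's own site has no effect on copies of the work hosted anywhere else, which is precisely the gap the bullet above describes.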
Transparency requirements in the EU have not been meaningful, so any solutions brought into UK law must require a greater level of transparency from AI firms, including:
- The crawler used, its method, and its purpose, as well as the source URL of the content.
- The identity of individual works, such as digital identifiers and file names, where this information is available.
- Information on how the work was identified, for example external metadata, proprietary metadata, and the name of the publisher it was scraped from.
As the AI market develops, so will the appropriate standards for transparency, so any regulations must account for this.
Alongside DACS's response, you are entitled to submit your own response to all or some of the consultation questions.
DACS has produced a guide to responding to the consultation, which includes key questions and suggested responses.