The government has rightly identified accelerated artificial-intelligence adoption as a key enabler of better public services and stronger economic growth. But this ambition could be undermined by a legislative agenda that fails to support domestic AI development. Two policy initiatives under consideration right now – on data use and on copyright – risk taking us in the wrong direction. A bold, coherent AI strategy should be a key objective for Downing Street.
January’s AI Opportunities Action Plan set out an ambitious programme to support the development of AI in the United Kingdom – and make good use of it in our economy and public services.[_] It announced new plans for a National Data Library and made the case for establishing the UK as a competitive place to train AI models. The AI Opportunities Action Plan is central to the government’s broader growth agenda, together with other initiatives like the Technology Adoption Review.[_]
At the same time, the rapid development and diffusion of AI across society poses important policy questions that need to be answered.
One of these is how the training of AI models interacts with copyright. The Intellectual Property Office (IPO) recently closed its consultation on a commercial text- and data-mining exemption, along with an opt-out mechanism for rights-holders. This is sound policy and would bring the UK broadly in line with the European Union.[_] Of course, there are legitimate legal and moral questions about the data that are used to develop AI. But the current discourse is unhelpfully framed as a zero-sum game between rights-holders and AI developers. The progressive solution is not to abandon the IPO’s proposals but to look beyond copyright law and set the conditions for both the UK’s creative industries and its AI sector to flourish in the digital age.
At the same time, Parliament is debating the Data (Use and Access) Bill (DUAB).[_] This is a broad attempt to reform UK data regulations to “harness the power of data for economic growth, support a modern digital government, and improve people’s lives”.[_] Transparency is important and, if done right, can improve both AI governance and market competition. But some of the technical DUAB amendments proposed in pursuit of transparency place too many burdens on AI developers. This risks making the UK a less attractive location for businesses and researchers considering where to invest, and where to develop their next technology.
Each of these areas deserves scrutiny and thoughtful legislation. But it is important that any bills advanced by the government support the vision, articulated by Prime Minister Keir Starmer when announcing the AI Opportunities Action Plan, of the UK becoming a global AI leader. If UK legislation is too restrictive, our ability to achieve this ambition will be limited. This means fewer economic benefits, less influence over the global direction of technology and poorer public services.
The Tony Blair Institute for Global Change will soon publish further analysis and recommendations for policymakers in this area. A full policy paper on how to support the creative industries in the age of AI, developed in collaboration with both artists and legal scholars, will be launched in early April. However, several key points are already worth highlighting:
Severe data-access requirements risk pushing frontier-AI developers out of the UK. If passed with some of the amendments currently under discussion, the DUAB could end up being more stringent than the EU’s data laws. For example, transparency requirements for training data may force AI developers to reveal trade secrets. Transparency can and should be ensured without endangering AI opportunities.
The global reach of the proposed data requirements would hamper not only AI development but also AI adoption in the UK. The DUAB suggests that all web crawlers and AI models would be liable under UK law, even if training occurs in other jurisdictions. This risks inadvertently banning popular models – such as ChatGPT – in the UK and complicating the fine-tuning of AI models for local use cases.
Burdensome regulation will hand established companies an advantage at the expense of academic researchers and small and medium-sized enterprises. Much of what the DUAB requires – including immediate data disclosure – is feasible yet costly. This would favour big tech, which has the capital to comply with regulation, and hinder UK startups and researchers.
National security and economic competitiveness could be put at risk. Using AI to advance medical research, limit disinformation and improve public services relies on high-quality data. Heavy-handed data and copyright law could make AI models available in the UK less capable and reliable, jeopardising not only the government’s ambition to boost economic growth but also its ability to ensure national security.
Unclear measures could increase legal uncertainty. There is a lack of clarity around how copyright laws apply to model training in the UK. As the IPO has already acknowledged, this risks undermining investment in and adoption of AI[_] and makes the UK less competitive than the EU and other regions. Neither the creative sector nor the AI industry benefits from an unclear regime.
British AI policy can’t exist in isolation. Take copyright as an example. While the text- and data-mining exemption with an opt-out mechanism broadly aligns with the EU’s approach, other jurisdictions have more flexible regimes. Absent similar provisions in the US, it will be hard for the UK to enforce strict copyright laws without straining the transatlantic relationship – which the government has so far actively sought to nurture.
There are better ways to support artists than strict copyright laws. A flourishing arts scene is a hallmark of a healthy society. But how human creativity is expressed evolves over time and in response to technological innovation. Whether by supporting the development of new skills, infrastructure or business models, governments can do far more to benefit artists and set the conditions for the creative sector to flourish in the digital age than by tightening the screws on AI development.
The speed and effectiveness with which countries accelerate AI adoption will be the defining factor in their national competitiveness. Part of succeeding is putting the right rules in place. Just as society benefits from food producers disclosing their ingredients, so AI developers should be subject to some transparency obligations. Equally, rights-holders must have a say in how their content is used.
The question is not whether to regulate intellectual property and data, but how to do so effectively and proportionately, in keeping with the overarching need for economic growth and national security. An early stumble in the AI-adoption race could be hugely damaging. We need vigilance alongside the vision.