With elections in Europe and the United Kingdom behind us, Brussels and London are well positioned for a fresh start. In the UK, Prime Minister Keir Starmer’s Labour government has secured strong parliamentary support. In the EU, President Ursula von der Leyen has won a second term, backed by a broad political coalition. And this comes at a time when artificial intelligence is transforming societies, putting tech policy at the centre of strategic agendas to boost economic growth, protect national security interests and improve public-sector service delivery. Political leaders on both sides of the Channel should seize this unique opportunity to seek greater strategic alignment on AI policy.
There is a strong economic imperative for closer EU-UK collaboration on AI research and governance. The EU and the UK lag behind the United States in economic growth:[_] the US has seen GDP rise by 8.6 per cent since 2019, while the Eurozone has grown by just 3.4 per cent and the UK by a mere 1.8 per cent. Accelerated AI adoption can strengthen Europe’s competitiveness by improving efficiency and enabling new solutions to complex problems. But realising this potential requires strategic cooperation. AI development is a capital- and energy-intensive endeavour, and increased resource sharing would help both the EU and the UK overcome critical bottlenecks, such as those relating to compute, data and talent. Moreover, businesses fear regulatory fragmentation. Harmonising AI standards would create a more favourable business environment, particularly for SMEs and startups. In short, closer EU-UK collaboration on AI would save costs, unlock economic growth and improve public-sector service delivery.
Improving how the EU and UK work together on AI would also create significant geopolitical advantages. Today, AI research is dominated by the US and China: the two countries house almost 80 per cent of the world’s most powerful AI models.[_] By combining their complementary strengths, the UK and the EU could achieve greater success in AI research and strengthen their positions to influence global AI policy debates. Furthermore, the EU and the UK have struggled to engage constructively since Brexit. While rejecting any possibility of rejoining the EU in the near term, Starmer’s new Labour government has made it a priority to improve relations with Brussels.[_] Yet doing so takes more than warm words. Improving relations requires pragmatic actions in specific policy areas that help rebuild trust and ties.[_] Strategic collaboration on AI could thus create a bridgehead for the EU and the UK to work together in other areas of mutual interest without reopening divisive debates about a closer political union.
In this paper we delve deeper into four key areas of potential collaboration that we believe should be at the top of a joint EU-UK AI policy agenda. These include investments in compute infrastructure, harmonised AI standards, closer institutional coordination and multilateral cooperation on questions concerning emerging technologies and international trade.
Chapter 1
Several recent initiatives have demonstrated a willingness on both sides to jointly explore collaboration on tech policy in general, and AI research and policy in particular. For example, in November 2023, Ursula von der Leyen attended the AI Safety Summit at Bletchley Park,[_] and in February 2024, the UK and France announced joint funding for global AI safety.[_] Although moving in the right direction, such efforts have hitherto suffered from a lack of overarching political coordination. A more structured framework for implementing a strategic agenda is needed to realise the economic, social and geopolitical benefits of closer EU-UK collaboration on AI.
It is worth remembering that the AI policy ecosystems of the UK and the EU have only recently begun to take shape. The EU AI Act comes into force on 1 August 2024, making it the first comprehensive AI regulation adopted by any major economy.[_] The European AI Office, which will oversee and implement the act, has also been fully established. Across the Channel, the UK government published an AI regulation white paper in March 2023 that was notably both pro-innovation and pro-safety.[_] The UK’s Department for Science, Innovation and Technology (DSIT) has since set up a Central Monitoring Function to drive coherence in its regulatory approach across government. It has also established the AI Safety Institute to advance best practices for identifying, monitoring and addressing emerging AI risks.[_]
The EU and UK have so far taken different approaches to AI regulation.[_] Employing a risk-based approach, the EU AI Act introduces new obligations for developers. In contrast, the UK has deployed a principles-based framework for existing regulators to interpret and apply within their own sectors. But AI research is advancing rapidly and regulatory debates are still evolving in both jurisdictions. For example, the UK’s new Labour government has signalled in its manifesto an openness towards considering binding regulations for the most powerful AI models.[_] At the same time, there is widespread concern among businesses and startups that the EU AI Act is heavy-handed and opaque, threatening Europe’s economic competitiveness. Closer collaboration does not mean straitjacketing either side. It means realising synergies while allowing for divergent political priorities. It also means learning what works in different contexts and continuously improving policies.
Beyond regulation, the EU and the UK have much to gain from closer strategic alignment by jointly leveraging the combined strengths of their respective AI ecosystems. The EU’s market size allows it to shape global policy debates and influence regulatory developments. As a case in point, Meta obtains around 23 per cent of its revenue from Europe, making it the tech giant’s second-biggest market after the US.[_] This economic heft has led to the so-called Brussels effect, whereby multinational organisations voluntarily align their internal standards with EU policies because it is economically rational. Together with the US, the EU is also a global leader in building powerful supercomputers – with Finland, Italy and Spain hosting three of the world’s top-ten supercomputers.[_] In addition, earlier this year the EU launched JUPITER, its first exascale supercomputer, which will be housed in Germany.
The UK brings its own strengths to the table, including a robust AI-innovation ecosystem. For example, the UK is home to more than 3,000 AI companies, which collectively generated more than £10.6 billion in revenue in 2021–22.[_] These startups are clustered around world-leading universities in Oxford, Cambridge and London. The UK is also a leader in AI safety: it was the first country to establish an AI Safety Institute, with the US, Japan and Singapore following suit. The UK AI Safety Institute remains the most well-funded in the global network of emerging safety institutes. Its technical expertise in evaluating frontier AI models, combined with the UK’s traditional prowess in service industries such as auditing, insurance and finance, uniquely positions the country to excel in the burgeoning AI assurance industry.
Given the complementary strengths of their respective regulatory approaches and AI ecosystems, closer collaboration between the EU and the UK would be mutually beneficial. Despite post-Brexit regulatory divergence, there is significant low-hanging fruit in AI research and governance for both sides to pick. We have identified four areas where opportunities to partner are especially ripe.
Chapter 2
In a previous paper, Moving Forward: The Path to a Better Post-Brexit Relationship Between the UK and the EU, the Tony Blair Institute for Global Change (TBI) outlined a general agenda for a special collaboration between the UK and the EU. In this paper, we focus specifically on the benefits of closer collaboration in the field of AI. The overarching aim is for the EU and the UK to foster innovation, promote responsible practices and ensure digital competitiveness by coordinating responses to joint challenges in a fragmented global landscape. To achieve this, political leaders in the EU and the UK should act on the following opportunities without delay.
1. Invest in Shared Compute Infrastructure Through the High-Performance Computing Joint Undertaking (EuroHPC JU)
Digital infrastructure and compute capacity are fundamental enablers for AI research and innovation. Recognising this, the EU has established the EuroHPC JU, a public-private collaboration with a budget of approximately €7 billion, to enhance its supercomputing capabilities.[_] The UK joined the EuroHPC JU in May 2024.[_] For UK scientists, this means access to the EU’s cutting-edge compute infrastructure and the opportunity to bid for EuroHPC grants. This is a good first step; it provides UK researchers and businesses with critical tools to deliver breakthroughs across disciplines, from health care to energy. But the EU and the UK can and should go further. Promisingly, the UK’s new Labour government is expected to coordinate with EU partners to optimise common compute resources and jointly develop next-generation exascale computing. This would be a welcome move that brings benefits to both sides. For the UK, expanded access to supercomputing capabilities and increased international collaboration would accelerate scientific discovery and unlock economic growth. At the same time, the UK’s longstanding expertise in supercomputing would contribute to the EU’s objective of becoming a global leader in high-performance and quantum computing.[_]
2. Harmonise AI Standards
Standards are critical enablers for accelerating innovation, facilitating international trade and implementing technology policies. The development of uniform definitions, practices and metrics is especially important in the field of AI, which is marked by technical and regulatory fragmentation. Most pressingly, the adoption and enforcement of comprehensive AI policies across multiple jurisdictions will require detailed technical standards. Against this backdrop, the British Standards Institution (BSI) and the European Standards Organisations (CEN, CENELEC and ETSI) should work closely together to harmonise AI standards. Specifically, the aim should be to establish a common terminology, improve interoperability, set uniform measurements and facilitate cooperation between regulatory agencies. This would streamline compliance, reduce costs, promote the adoption of trustworthy AI systems and strengthen digital competitiveness by lowering barriers for AI developers. An aligned approach to AI standardisation would spur innovation and create synergies between the EU and UK AI ecosystems, while still allowing for some divergence in regulatory approaches.
3. Establish a Close Working Relationship Between the European AI Office and the UK’s AI Safety Institute
AI policies are only as strong as the institutions enforcing them. The key institutions responsible for AI governance in the UK and the EU are the UK’s AI Safety Institute and the European AI Office. Establishing a close working relationship between these two relatively new agencies would strengthen their respective capabilities and impact. Promising avenues for collaboration include joint research on AI safety as well as the specialisation of model-testing and evaluation capabilities to avoid duplication of work. For example, we recommend that Unit A.2 of the European AI Office partner with the UK’s Central AI Risk Function within DSIT to jointly develop risk-assessment methodologies and risk-mitigation strategies.[_] Similarly, Unit A.3 of the European AI Office should work closely with the UK’s AI Safety Institute on model-evaluation and monitoring protocols, ensuring a more standardised approach across both jurisdictions. Both institutions would also gain insights from regular information exchanges on key model and stakeholder risks. The European AI Office would benefit from the extensive resources of the AI Safety Institute, which would, in turn, benefit from the EU’s reputation and extensive regulatory influence on the creation of safer AI systems.
4. Expand Cooperation on AI Through a Renewed Trade and Technology Council
Multilateral trade collaborations are central vehicles for coordinating AI research and governance. In 2021, the EU and the US established the Trade and Technology Council (TTC) to boost alignment on regulatory approaches to key technologies, including AI.[_] Until now, neither the EU nor the US has shown enthusiasm for letting the UK join the TTC. But a trilateral UK-US-EU forum to discuss technology regulation might look more attractive given the uncertainty around the current TTC’s impact and longevity should the political leadership in the US change. Recasting the TTC as a more multilateral and technocratic forum with a sharper focus on AI governance could strengthen both its influence and impact. The UK should therefore seek to expand multilateral cooperation on AI regulation and governance through a reformed TTC. In the long term, this body could serve as a lever for the UK, EU and US to jointly address AI governance challenges in a world that, in geopolitical terms, is becoming increasingly fragmented.
Chapter 3
Our recommended four-point agenda charts a path towards capitalising on the UK and EU’s complementary strengths. However, implementing this agenda in practice will only be possible if political leaders on both sides of the Channel act decisively. In this section, we outline some of the barriers to successful EU-UK collaboration on AI – as well as how they can be overcome.
To begin with, putting our four-point agenda in motion will require the EU and the UK to establish formal channels for collaboration on AI policy at a ministerial level. Currently, the EU and the UK lack alignment on major AI-related policy questions; a joined-up approach would allow both to wield greater influence in global regulatory debates. Although the UK’s previous government generally rebuffed the EU’s overtures towards formal collaboration on global policy issues, the new government is likely to be more pragmatic and open to working together. Encouragingly, the existing governance provisions of the EU and UK’s Trade and Cooperation Agreement are flexible enough to accommodate formal cooperation to align approaches to AI over time.
Implementing our four-point agenda will also require closer coordination between academic institutions and private research centres. Having joined the EuroHPC JU, the UK can collaborate with European partners to develop effective compute-governance measures that promote responsible practices in exchange for access to public compute infrastructure.[_] As part of this process, the UK must ensure that its expanding compute infrastructure, in particular Bristol’s National AI Research Resource, is both scalable and fully interoperable with European high-performance computing systems.[_] This will help ensure the efficient sharing of compute resources to promote innovation while maintaining high ethical standards.
Funding schemes constitute another key enabler for our four-point agenda. In recent TBI papers, the State of Compute Access: How to Bridge the New Digital Divide and The UK’s AI Startup Roadmap (the latter co-authored with the Startup Coalition and Onward), we called for the UK to invest in public compute infrastructure and improve access for startups and the public sector. Being part of the EuroHPC JU presents several opportunities: for example, the UK can leverage Fortissimo Plus, a key initiative of EuroHPC JU, which aims to increase the uptake of high-performance computing by SMEs and startups by offering them the necessary knowledge, as well as financial and technical support.
As part of a wider “compute diplomacy” strategy, the UK and the EU should be proactive in supporting future experimental resource-swap schemes, along the lines of the EuroHPC JU’s pioneering new collaboration with Japan.[_] The scheme, called HANAMI (HPC AlliaNce for Applications and SupercoMputing Innovation), will give European scientists working in areas such as climate modelling and material sciences access to Japan’s Fugaku supercomputer, currently ranked number four in the TOP500 global supercomputers list.[_] Such experimental schemes critically enable the transfer, testing and evaluation of scientific applications in different computational architectures.
Potential roadblocks remain, but they can be overcome. One point of tension is that standards play different roles in the EU and UK’s regulatory frameworks. The European Commission sees standards as a regulatory tool, giving bodies such as CEN/CENELEC the mandate to develop harmonised standards under Article 40 of the EU AI Act. In contrast, the UK largely views standards as voluntary tools that organisations can employ to ensure good AI governance. There is also divergence among AI standards-setters themselves. For example, CEN/CENELEC’s perspective on adopting the ISO/IEC 42001 standard (the world’s first standard on the procedural management of AI systems) for the EU AI Act differs from the BSI’s perspective. But there are promising signs of alignment. A recent technical report by the EU’s Joint Research Centre found that the proposed ISO/IEC 42005 standard (on AI-system impact assessments) offers fit-for-purpose guidance for conducting AI impact assessments under the EU AI Act, a view shared by the BSI.[_]
In addition to technical considerations, political roadblocks may also arise. At the moment there are promising signs of closer multilateral cooperation; for example, in July 2024 the competition authorities of the US, EU and UK published a joint statement on competition law relating to generative AI models and other AI products.[_] But the future remains uncertain. The impact of Brexit has made it harder for EU and UK officials to engage constructively. Further, the elections for the European Parliament in June resulted in gains for party groups on the right that pursue more isolationist policies. Finally, a change in the US administration may constrain support for multilateral agreements such as the TTC. Attempts to downsize government capabilities might also render the US AI Safety Institute and President Joe Biden’s Executive Order on AI ineffective. However, the fact that such challenges loom on the horizon should only motivate the UK and the EU to establish a more robust bilateral collaboration.
The UK has already taken meaningful steps forward to foster global cooperation on AI safety. Last year it hosted the first AI Safety Summit, with the participation of 28 countries at the forefront of AI development. All parties at the summit agreed to the Bletchley Declaration – a commitment to share responsibility for mitigating the risks of frontier AI, collaborate on safety and research, and promote AI’s potential as a force for good.[_] Following the summit, the UK commissioned Yoshua Bengio, the founder and scientific director of the research institute Mila, to produce the State of the Science report on the capabilities and risks of frontier AI.[_] Professor Bengio is advised by an expert panel drawn from representative countries (including EU member states). This year’s AI Safety Summit was co-hosted with the Republic of Korea.[_] The 2025 summit – which will be hosted in France – presents a timely opportunity to nurture further cooperation between the UK and its neighbours in the EU.[_]
To summarise, several key components required to implement our four-point agenda for closer EU-UK collaboration on AI are already in place. For example, the European AI Office has a formal dialogue with the US AI Safety Institute,[_] and the UK’s AI Safety Institute has collaborations with its counterparts in the US, France and Singapore. Yet more structured collaboration would bring numerous benefits. The UK’s AI Safety Institute has twice the budget of the European AI Office, so the EU has much to gain from encouraging the sharing of data and expertise.[_] For the UK, cooperation with the EU would mean participating in dialogue on regulatory activities, such as the drafting of the EU AI Act’s Codes of Practice, which will likely influence the business practices of AI firms around the world. Putting our four-point agenda in motion would thus also reinforce the existing EU-UK Trade and Cooperation Agreement, which envisages the sharing of best practices on cyber issues and the exchange of information relevant to the development of digital trade more broadly.
Chapter 4
Strategic alignment between the UK and the EU on AI presents a significant opportunity for both parties to realise economic and geopolitical benefits. The four-point agenda outlined in this paper is both feasible and timely, given the current political landscape, and the EU and UK’s shared commitment to the responsible advancement of AI technologies.
The economic benefits of closer collaboration are clear: the EU’s substantial market influence and the UK’s robust AI sector can create a synergy to drive innovation and competitiveness on a global scale. Geopolitically, a united approach to AI regulation and development will enhance the standing of London and Brussels in the global tech arena, ensuring they remain leaders in a rapidly evolving field. These proposals can be implemented through regular dialogue, joint task forces and shared initiatives, many of which are already in place, such as the Global AI Safety Summit and joint funding for AI safety research. This collaboration will ensure that both the UK and the EU leverage their unique strengths while mitigating the risks associated with AI.
Now is a unique window of opportunity: recent elections and new parliamentary majorities have provided fresh political momentum, leaving both sides in a good position to enhance their relationship. The establishment of bodies like the European AI Office and the UK’s AI Safety Institute demonstrates a mutual commitment to responsible AI governance, laying a strong foundation for future cooperation in this field. Strategic collaboration on AI could also pave the way for a stronger post-Brexit EU-UK relationship. In a fragmenting geopolitical landscape, working together on AI allows both parties not only to accelerate innovation and address pressing regulatory issues but also to signal a broader commitment to shared values and joint progress. It is a win-win situation.
Chapter 5
We would like to extend our thanks to the following peer reviewers, who offered their advice and guidance in the development of this paper.
Josh Cowls
Merlin Stein
Marta Ziosi