Chapter 1
Artificial intelligence has the potential to usher in a new era of economic growth and human flourishing. AI systems already help doctors improve medical diagnostics, engineers optimise energy consumption and scientists make new discoveries. Indeed, AI is projected to contribute approximately $15.7 trillion to the global economy by 2030, boosting productivity and fostering new product innovations.
Yet the AI revolution is only beginning. And if governments are willing and able to invest in the right data infrastructure, compute capacity and skills, while also providing incentives for AI research and adoption, they will be important drivers of that revolution.
Governments also need to recognise that technological innovation alone is not enough: realising AI’s economic and social potential also requires political leadership and good governance. Contrary to the commonly held assumption that innovation and regulation are necessarily in conflict, the two go hand in hand; businesses need some degree of regulatory certainty, allied to technical standards, to thrive. In addition, without careful design and testing, AI systems could cause harm that slows their adoption. A pro-innovation approach ensures a safe environment while keeping overly restrictive regulations at bay.
A parallel can be made between the need for AI regulation and the need for market regulation. Markets drive economic growth but need contract law to function and regulations to address market failures. Similarly, while AI systems are powerful tools for improving efficiency and solving problems, regulation is needed to address safety concerns and ensure widely distributed benefits. As with markets, the question is not whether to regulate AI but how to do so effectively, promoting growth while improving social outcomes.
Political leaders worldwide face a common challenge: harnessing AI’s potential while managing its legal, social, environmental and security risks. Regulatory responses have varied. In 2024 the Council of the European Union approved the EU AI Act, South Korea’s National Assembly passed the AI Basic Act, Brazil’s Federal Senate approved the Brazilian AI Bill and California’s governor vetoed a state-level proposal for regulation. These initiatives have sparked much debate about how to regulate rapidly evolving general-purpose technologies such as AI.
More than 37 countries have proposed AI-related legal frameworks, but the AI regulation debate has so far focused on the Global North. Without a more inclusive approach, there is a real risk that regulatory asymmetries will widen the global AI divide, leaving emerging economies at a disadvantage and deepening existing economic disparities. The fact that technology makers and takers have fundamentally different interests in shaping AI regulations has thus far been largely overlooked. Furthermore, AI risks vary across cultural contexts and evolve over time, as do laws and norms. AI regulations can therefore neither be copied and pasted from one jurisdiction to another, nor remain static over time.
In this report, the Tony Blair Institute for Global Change offers practical guidance to political leaders in the Global South on how to design and implement effective, proportionate AI regulations. Drawing on TBI’s first-hand experience of working with leaders in more than 45 countries, the report builds on previous work on global AI governance and incorporates insights from academic experts, industry practitioners and policymakers across diverse geographies.
The report provides two key frameworks.
First, it outlines a five-step process for designing and implementing AI regulations. While these steps mirror well-established approaches to regulation, our report provides specific insights into their application within the unique contexts of regulating AI in the Global South.
1. Define regulatory objectives.
2. Establish AI principles and ethical guidelines.
3. Define a regulatory posture.
4. Design comprehensive interventions.
5. Commit to continuous adaptation and learning.
The second framework we introduce is the AI Regulation Wheel: a comprehensive overview of the AI value chain, from design to deployment and use. It enables policymakers to target key areas for intervention. For example, privacy risks can be addressed by regulating data collection, storage and processing, whereas risks from malicious actors (such as cyber-attacks) may call for controlling access to AI systems and monitoring their outputs.
Hybrid approaches are essential when it comes to regulating AI, because it encompasses diverse technologies with distinct risks. Automated decision systems and generative AI, for example, require different regulatory responses; some of these risks are best addressed by horizontal regulation, others by vertical regulation. Further, the scope of regulation spans from product legislation and competition law to national security. Addressing this range demands a suite of complementary regulatory approaches.
The strategic and geopolitical context matters too. The top priority of most political leaders in 2025 is to boost economic growth; accelerating technology diffusion will be key to this agenda. Moreover, the world is entering a period of intensified geopolitical competition, as illustrated by the US’s imposition of export controls on semiconductors. The result is that countries in the Global South face increased pressure to align with distinct economic and geopolitical blocs.
The five-step process and the AI Regulation Wheel introduced in this report provide starting points for developing AI regulations that are both locally relevant and globally aligned. While there are no simple solutions, designing effective AI regulations involves understanding how specific AI risks manifest in different local contexts; it also requires strategies to tackle power asymmetries, resource constraints and sovereignty challenges.
The opportunities for governments that embrace AI are immense. With the right regulatory approach, governments in the Global South can position themselves as leaders in AI adoption and innovation. They can bridge the global AI divide and ensure that AI serves as a catalyst for transformative growth and positive social change.
Chapter 2
Artificial intelligence holds immense potential to transform societies and economies. By leveraging increasingly sophisticated algorithms and the growing availability of big data, it can improve medical diagnostics, support personalised education and enable precision agriculture. These innovations could significantly contribute to achieving the United Nations’ Sustainable Development Goals (SDGs), enhancing health, education and economic development.[_] Harnessing the power of AI to not only drive economic growth but also improve social outcomes is especially important for countries in the Global South.[_]
However, realising this potential is a challenge. AI adoption in the Global South remains uneven, reflecting substantial disparities in innovation readiness and technological capacity. While AI is projected to contribute up to $15.7 trillion to the global economy by 2030, developing countries are not expected to see much of this growth;[_],[_] for instance, outside of China, they account for less than 10 per cent of global AI patents as of 2024.[_] This underrepresentation leaves substantial untapped opportunities for economic and social progress, underscoring the urgent need to bridge this AI divide.
The rapid evolution of AI also introduces ethical, safety and societal risks that need to be addressed to unlock its full benefits. These risks endanger both public and private sectors, threatening national security, eroding consumer trust and exposing vulnerabilities in data protection and privacy laws. The scale and scope of these risks underscore the urgent need for government intervention.
Recent regulatory efforts around the world – including the European Union’s AI Act, South Korea’s AI Basic Act, the Brazilian AI Bill and the United Kingdom’s proposed AI bill (discussed in Getting the UK’s Legislative Strategy for AI Right) – reflect a growing recognition among policymakers of the need to find a balance between spurring innovation and managing risk. Effective regulation can do both, through mechanisms such as conditional incentives (including research grants and tax benefits) and regulatory sandboxes. Regulation also indirectly fosters innovation by creating a stable and predictable environment that attracts investment, stimulates market competition and builds public trust (highlighted in Reaping the Rewards of the Next Technological Revolution: How Africa Can Accelerate AI Adoption Today). For instance, studies show that countries with stronger contract enforcement and more efficient international trade regulations attract more foreign direct investment.[_]
In the Global South, striking the right balance between innovation and risk is a challenge compounded by structural and systemic hurdles such as insufficient technological infrastructure, limited regulatory capacity and global power asymmetries. In addition, regulatory efforts are often fragmented or inadequately resourced, leaving critical gaps in governance. Furthermore, governments in these regions are frequently excluded from international discussions on AI regulation, which not only perpetuates disparities in development and adoption but also risks creating a global regulatory system that overlooks the needs and realities of a significant portion of the world’s population.[_] For instance, among the seven prominent international AI-governance initiatives outside the UN, only seven countries – all from the Global North – participate in all of them. Meanwhile 118 countries, primarily from regions such as Africa, Latin America and Asia-Pacific, are excluded, due to not being members of the intergovernmental organisations behind these initiatives.[_]
Recent global initiatives offer a promising opportunity to address these disparities. UNESCO’s Recommendation on the Ethics of Artificial Intelligence, the UN’s newly established Global Dialogue on AI Governance and the AI Action Summit in Paris (expected to bring together about 100 heads of government) offer valuable platforms to amplify the perspectives and priorities of the Global South.[_],[_] These initiatives have the potential to bridge adoption and regulatory gaps by offering the necessary support for capacity-building, resource-sharing and international cooperation in AI governance.
However, the responsibility does not rest solely on international initiatives: governments in the Global South should continue to take proactive steps. A strategic and holistic approach to AI regulation is essential, integrating national priorities with international standards while addressing local realities. Governments can advance this agenda by reviewing and updating existing laws, implementing AI-specific regulations or employing a combination of both. Inaction should not be considered a neutral stance: it exacerbates risks, entrenches inequalities and undermines AI’s transformative potential.
This report introduces a five-step process for developing a strategic and holistic approach to AI regulation, aligned with the initiatives in the Tony Blair Institute for Global Change paper Governing in the Age of AI: A New Model to Transform the State. Recognising that no one solution fits all contexts, we have refrained from prescribing specific approaches or interventions. Instead our report offers a flexible framework that empowers governments to design tailored interventions aligned with their unique priorities and challenges.
Drawing on analysis of emerging regulatory initiatives in the Global South and recent regulatory developments in the Global North, we provide actionable insights to help governments define their regulatory posture and design targeted interventions. We also highlight how balanced and effective regulation can guide AI innovation towards socially beneficial outcomes, build public trust and attract investment, ensuring that AI becomes a catalyst for sustainable and inclusive growth aligned with the UN’s SDGs. But first, what are the key hurdles that the Global South needs to clear in order to implement successful regulation?
Chapter 3
Despite proactive efforts to advance AI regulation in the Global South, significant stumbling blocks persist across five key areas: digital infrastructure; societal awareness and citizens’ AI literacy; institutional and regulatory capacity gaps; foreign-investment dynamics and local interests; and global asymmetries in power. These challenges not only hinder the ability of governments to craft effective and context-specific regulations but also present substantial barriers to the adoption of AI.
Limited Digital Infrastructure and Governance
Many countries in the Global South lack the robust technological infrastructure needed for AI deployment and effective AI regulation. Limited internet connectivity, insufficient cloud computing resources and unreliable energy supplies hinder the scaling of AI technologies and the implementation of regulatory mechanisms. Despite recent advances, nearly 43 per cent of the population in the Global South still lack internet access;[_] underdeveloped digital-governance frameworks, including weak data-privacy laws and inadequate cybersecurity protocols, further exacerbate the issue.
Societal Awareness and Citizens’ AI Literacy
Public understanding of AI is a critical component of effective regulation. Limited AI literacy in the Global South complicates regulatory efforts, as public participation is essential for the adoption and oversight of AI systems. Language diversity adds another layer of complexity, as many AI systems fail to adequately support local languages or dialects, leaving large populations underserved.[_] Without widespread public awareness and education about AI, governments face the spread of misinformation about its risks and benefits, public resistance to regulation, and challenges in enforcing compliance.
Institutional and Regulatory Capacity Gaps
A shortage of skilled personnel in the Global South, exacerbated by brain drain, creates significant challenges for developing context-specific regulations that align with local priorities. For example, by March 2024, only seven African countries had drafted national AI strategies, with none implementing comprehensive AI regulations.[_] Moreover, reliance on adopting international regulatory frameworks, while a useful foundation, may not fully reflect regional needs and can often lead to misaligned or ineffective policies.[_]
Limited resources, institutional inefficiencies and a lack of coordinated execution strategies hinder the realisation of even the most well-intentioned regulatory objectives. Enforcement is particularly challenging, as regulators often contend with a shortage of subject-matter experts, unfamiliarity with emerging legal tools and the rapid pace of AI advancements. Furthermore, minimal international cooperation weakens oversight of cross-border AI risks, such as data-privacy violations and algorithmic bias.
Tension Between Foreign Investment and Local Interests
Attracting foreign investment in AI is critical for driving growth, but it often conflicts with the need to protect local industries and national interests. For example, data-localisation laws, such as those in India’s Personal Data Protection Bill (PDPB), aim to boost national security and domestic AI development by requiring data to be stored and processed locally. While this is intended to protect personal data and strengthen the local economy, critics warn that such measures could hinder international collaboration, restrict global funding access and disrupt the free flow of information.[_] Balancing the objectives of national security and economic growth requires carefully crafted policies that account for domestic priorities while leveraging the benefits of global integration.
Global Asymmetries in Power
The Global South faces significant power imbalances when engaging with multinational corporations and global governance platforms. The actions of Western AI companies in the region have been likened to a new era of digital colonialism, marked by exploitative and oppressive practices that undermine local agency and control.[_],[_] This dynamic is further exacerbated by reliance on external funding and the disproportionate influence of Global North entities in key decision-making forums, which often limits the ability of Global South nations to shape AI regulation in alignment with their priorities. Efforts to regulate global tech companies face persistent resistance, particularly on critical issues such as data sovereignty and intellectual-property rights, where US-based firms hold a dominant position.[_]
The outcome of the 2024 US election could further impact this balance.[_] A deregulatory approach in the US might strengthen the influence of its tech firms, while a shift towards more cooperative regulatory policies stands to benefit the Global South.[_]
Chapter 4
Despite the challenges highlighted in the previous chapter, across Africa, Asia, the Middle East, Latin America and the Caribbean, governments are actively pursuing AI regulation through diverse strategies tailored to their unique socioeconomic, cultural and technological contexts. These efforts include developing regional AI regulatory strategies, leveraging existing legal frameworks, advancing governance initiatives to establish a foundation for responsible AI adoption and drawing on emerging global regulatory frameworks.
Regional AI Regulatory Strategies
Emerging initiatives such as the African Union’s Continental Artificial Intelligence Strategy underscore a growing commitment to fostering regional cooperation.[_] The strategy establishes AI governance and regulation as one of 15 key action areas to harness AI’s positive and transformative potential in Africa. It urges governments to amend existing laws (such as those related to data protection and intellectual property) to address AI risks while promoting ethical and accountable AI use. It also promotes the use of tools such as ethical impact assessments, regulatory sandboxes and African-led research, while encouraging the development of agile, forward-looking, risk-based regulations at a national level. Similarly, Caribbean countries are collaborating on sub-regional initiatives, including UNESCO’s Caribbean Artificial Intelligence Policy Roadmap.[_]
Leveraging Existing Legal Frameworks
In Africa, countries such as Nigeria and Kenya are utilising existing laws, such as data protection and labour regulations, to address immediate AI challenges while preparing legislation.[_] The Nigeria Data Protection Regulation is providing a foundation for managing AI’s impact, focusing on ethical deployment and data governance.[_] For instance, in July 2024, the Federal Competition and Consumer Protection Commission imposed a $220 million fine on Meta for violating local data-protection laws, citing the unauthorised appropriation of Nigerian user data on its platforms.[_]
In September 2024, Kenyan courts allowed a $1.6 billion lawsuit by former Facebook moderators who alleged unfair treatment under the country’s labour laws.[_] Meanwhile, Kenya’s draft National AI Strategy 2025–2030 highlights the need for comprehensive AI-specific regulation, aligning with President William Ruto’s directive to the ICT Ministry in September 2023.[_],[_]
In Asia, countries such as India are also leveraging existing regulations to manage AI opportunities and impact. The country’s Ministry of Electronics and Information Technology (MeitY) recently issued an AI-specific advisory under the Information Technology Act 2000, requiring tech companies to seek government approval before launching new AI tools.[_] However, after significant industry pushback, the government withdrew this requirement, adopting a more flexible approach to avoid stifling innovation.[_]
Advancing AI Governance Frameworks
In the Middle East, leading AI adopters such as the United Arab Emirates (UAE) and Saudi Arabia do not yet have AI-specific laws, but are actively promoting their commitment to responsible AI governance through various strategic initiatives. The UAE recently launched the Charter for the Development and Use of Artificial Intelligence, aiming to create a thriving environment in accordance with the highest standards of safety and privacy, and enhance public trust.[_] It has also made amendments to the Dubai International Finance Centre’s Data Protection Law to align it with specific standards on generative AI and autonomous systems.[_] And in Saudi Arabia, the national government has established the Saudi Data & AI Authority (SDAIA), which has committed to developing global AI governance frameworks.[_]
Elsewhere, the Dominican Republic has become the first Caribbean country to release a national AI strategy, with a focus on ethical practices and transparent governance.[_] In Antigua and Barbuda, the government is examining the EU AI Act with the intention of implementing similar regulations.[_] And in both the Bahamas and Jamaica, government leaders have publicly supported the development of AI regulations.[_]
Leveraging Emerging Global Regulatory Frameworks
Countries in South America have been actively developing AI regulations, with many following the EU’s lead in adopting a risk-based approach. Brazil’s Federal Senate has approved the Brazilian AI Bill, which seeks to consolidate various fragmented bills into a comprehensive, risk-based framework for governing AI.[_] Peru has proposed a draft AI bill that emphasises data and privacy protections for industry and consumers.[_] And the Chilean government has also proposed an AI Regulation Bill, which bears comparison with the EU AI Act but differs in how it addresses copyright issues and emphasises human rights.[_]
Chapter 5
This roadmap outlines an actionable five-step process for designing AI regulations that builds on the efforts of governments to address opportunities and challenges unique to the Global South. By tailoring approaches to local contexts and addressing structural and systemic barriers, governments can create robust and adaptive frameworks that align with their national priorities and global best practices. The roadmap also builds on research by leading organisations such as UNESCO,[_] the World Economic Forum,[_] the Brookings Institution,[_] the Carnegie Endowment for International Peace[_] and the Observer Research Foundation.[_]
Step One: Define Regulatory Objectives
Governments should begin by defining clear regulatory objectives that answer a fundamental question: what are we regulating and why? AI is not a singular technology but a diverse spectrum encompassing large language models, automated decision-making systems and statistical models. To develop effective regulations, governments must first understand their national goals for AI innovation and how risks manifest within that specific national context; this requires dedicated research to inform evidence-based policymaking.
Setting clear objectives is crucial for guiding regulation through its key stages (design, adoption, approval and enforcement) and these objectives should include realistic timelines and measurable milestones for tracking progress. Governments should clearly define how they aim to support innovation, whether by encouraging investment, fostering startups, driving research and development, or aligning AI with national priorities.
At the same time, these objectives must address the risks to be managed and specify the timeframe over which those risks might evolve. Governments should also remain cognisant of the broader systemic barriers that impact AI adoption and governance; as outlined in chapter 3, these include insufficient digital infrastructure, limited regulatory capacity and global power asymmetries. National AI strategies – adopted by more than 60 countries – often provide a critical foundation, outlining innovation goals and approaches to addressing challenges.[_]
Countries such as Saudi Arabia, Malaysia and Brazil have already committed significant resources to building AI capabilities, partnering with global tech firms and demonstrating a strong desire to improve technological adoption and governance. However, much more should be done by governments in the Global South (and their partners) to close the digital infrastructure and governance divide.[_]
Addressing AI Risks
Innovation-driven objectives in the Global South often draw from established frameworks, but the ability to identify and manage AI-related risks remains a newer and more complex challenge that requires a foundation in evidence-based assessments.[_] In the discourse on AI regulation, risks are frequently grouped under the broad category of “AI harms”. But while some risks highlight overarching challenges that are relevant across various contexts – requiring foundational capabilities such as investigating incidents, disabling problematic systems and establishing accountability mechanisms – effective and balanced regulation must also address the specific contexts and impacts of AI applications. This involves tailoring interventions to manage distinct risks, such as product liability and national-security concerns, while maintaining the flexibility to adapt to the rapid evolution of AI technologies.
In the Global South, the risks posed by AI are particularly acute, often with more severe impacts than those seen elsewhere. For example, much of the low-paid, high-stress labour supporting the development of AI systems is concentrated in the Global South:[_] content moderators training AI models in countries such as Kenya, India and the Philippines have reportedly suffered from exploitative working conditions, facing psychological trauma due to exposure to disturbing content.[_] At the user end, non-English-speaking and less homogeneous regions have faced significant challenges with AI-powered misinformation, which has reportedly incited violence and escalated conflicts.[_] Additionally, bias embedded in machine-learning models – in facial recognition, hiring systems, health care and finance, in particular – exacerbates existing racial inequalities, further marginalising vulnerable populations.[_],[_]
A classification system, such as the one included in the International Scientific Report on the Safety of Advanced AI,[_] offers a useful starting point for understanding the risks and sub-risks associated with AI use (see Figure 1 of the annex, available as a downloadable PDF). However, these classifications may not always directly apply to contexts in the Global South, as they are often reflective of advanced AI systems used more often in developed economies and may lack stability due to the emergence of new risks and the evolution of existing ones.
Governments in developing countries should ground their regulatory objectives in localised research to understand how AI-related risks manifest in specific national and regional contexts. This research should include an assessment of the relative significance of different risks, defining areas as high risk (posing an immediate threat to safety, equity or societal wellbeing) and low risk (requiring less urgent intervention). This categorisation can help countries determine areas to prioritise in their regulatory interventions and allocate resources effectively. Furthermore, risks should be defined at a high level to avoid regularly encountering new ones that require additional interventions, enabling a stable and adaptable regulatory environment.
Finally, this assessment should also consider risks emerging at the nexus of AI and geopolitics, such as digital inequality and data sovereignty, as well as the high costs associated with AI technologies and unequal access to them. Such challenges exacerbate vulnerabilities in less developed regions and threaten to intensify global power disparities as AI becomes a key driver of economic and technological development.
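To make the risk-categorisation exercise concrete, the sketch below shows one hypothetical way a regulator might encode a national AI risk register in code. It is purely illustrative: the risk names, levels and affected sectors are assumptions for demonstration, not findings from any country’s localised research.

```python
# Hypothetical sketch of a national AI risk register, illustrating the
# high/low categorisation described above. All entries are invented
# examples, not real assessments.
from dataclasses import dataclass, field
from enum import Enum


class RiskLevel(Enum):
    HIGH = "high"  # immediate threat to safety, equity or societal wellbeing
    LOW = "low"    # requiring less urgent intervention


@dataclass
class AIRisk:
    name: str
    description: str
    level: RiskLevel
    affected_sectors: list[str] = field(default_factory=list)


# Illustrative entries; a real register would be populated from
# localised, evidence-based research.
register = [
    AIRisk("misinformation", "AI-generated content inciting violence",
           RiskLevel.HIGH, ["media", "elections"]),
    AIRisk("hiring bias", "discriminatory automated candidate screening",
           RiskLevel.HIGH, ["labour"]),
    AIRisk("chatbot inaccuracy", "low-stakes errors in consumer chatbots",
           RiskLevel.LOW, ["retail"]),
]


def prioritise(risks: list[AIRisk]) -> list[AIRisk]:
    """Sort high-risk areas first, to guide regulatory resource allocation."""
    return sorted(risks, key=lambda r: r.level is not RiskLevel.HIGH)


for risk in prioritise(register):
    print(f"[{risk.level.value.upper()}] {risk.name}: {risk.description}")
```

Defining risks at this level of abstraction, as this step recommends, helps keep such a register stable even as new sub-risks emerge.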
Step Two: Establish AI Principles and Ethical Guidelines
Before implementing specific regulatory interventions, governments should define national AI principles and ethical guidelines that reflect societal values and promote inclusivity. These principles should align with a country’s cultural, legal and societal contexts, and incorporate input from academia, industry, civil society and marginalised groups (particularly the younger generation and women) to ensure diverse perspectives are represented. These principles can assist in evaluating trade-offs during the design and implementation of regulatory interventions and, if actionable, can offer clear guidelines for AI developers and procurers to assess and improve their systems. Additionally, aligning with international principles harmonises local efforts with global standards.
In contexts where institutional capacity and expertise are limited, developing AI principles tailored to local priorities offers a practical starting point. These principles can serve as a foundation for building regulatory frameworks over time, ensuring that AI adoption aligns with national goals while addressing immediate challenges and opportunities. Additionally, they provide governments with a roadmap to navigate complex trade-offs during the regulatory development process.
Many countries in the Global South have already established AI principles or are implementing internationally adopted ethical frameworks, such as UNESCO’s Recommendation on the Ethics of Artificial Intelligence;[_] for example, 50 countries, primarily in the Caribbean and Africa, are putting the recommendation into effect through UNESCO’s Readiness Assessment Methodology.[_] Meanwhile, countries such as Mexico and Colombia have adopted the Organisation for Economic Co-operation and Development’s AI Principles, which were agreed upon by 42 countries in 2019, to ensure that AI development is aligned with international standards on human rights, fairness and accountability. In addition, the UAE has published national ethical frameworks that prioritise values such as transparency, fairness and security.[_]
In countries that are just beginning their AI regulatory journey, governments should assess the national context, considering cultural values, legal frameworks and societal norms. This involves reviewing existing laws – such as those related to data privacy, cybersecurity, criminal law and sector-specific regulations (in health care, finance and education, for example) – to ensure alignment and consistency with the proposed AI principles. For example, Brazil’s AI regulation draft is being shaped around the existing data privacy framework (LGPD),[_] ensuring consistency across regulatory frameworks.
It is equally important for governments to establish foundational AI principles through inclusive and diverse stakeholder engagement. While challenges such as limited AI literacy persist in the Global South, governments have a crucial responsibility to foster participatory governance through education, transparent consultation processes and robust mechanisms for citizen engagement. By consulting a broad range of stakeholders – including academia, industry, civil society and marginalised groups – governments can create AI principles that are representative of, and responsive to, the concerns and values of all segments of society.
Kenya offers a compelling example of this approach. The government’s Task Force on Blockchain and AI has adopted a multi-stakeholder model, bringing together representatives from various sectors to develop AI-governance guidelines. As a result, it has been able to recommend embedding principles based on concepts such as human rights into national policies, to ensure the ethical, equitable and responsible use of AI technologies.[_]
However, while establishing ethical principles is an essential first step, research suggests that companies often find it challenging to translate general principles into practical steps and actions.[_] In the interim, standardisation bodies and other organisations, including local non-profits and researchers, can play a pivotal role in operationalising guidance documents and facilitating tangible impact. Furthermore, through the subsequent steps outlined in this paper, governments can ensure that regulations actively support the translation of high-level principles into realisable actions that developers and deployers can implement effectively.
Step Three: Define a Regulatory Posture
Countries use a mix of AI governance tools, ranging from soft laws – such as guidelines, policies and ethical frameworks – to hard laws, including AI-specific legislation, executive orders and foundational data-privacy laws. This report focuses on hard laws, which provide the legally binding mechanisms necessary to ensure accountability, enforce compliance and create a stable regulatory environment. Amid widespread global discussions on AI regulation, much of the debate remains fragmented or focused on specific regions, with insufficient analysis of relevant frameworks from a global perspective. In this analysis, both AI-specific laws and AI-relevant laws (such as data-protection frameworks) are explored, as they can both be designed to foster innovation by effectively managing the risks associated with AI.
To understand how governments approach AI regulation it is crucial to examine their regulatory postures, which reflect their strategy, structure, scope and approach. The following benchmarking analysis explores regulatory frameworks in the EU and various countries – including the US and China – alongside emerging postures from key players such as the UK, South Africa, Brazil, the UAE, Saudi Arabia, India, Singapore and South Korea. These countries were selected for the maturity of their AI ecosystems and their influence within their regions, offering valuable insights into diverse regulatory practices.
This analysis, as of January 2025, provides a snapshot of countries’ positions relative to others within the benchmark. These positions should be understood as relative rather than absolute, since in practice regulatory postures often blur the lines between distinct categories. For the sake of analysis, however, these categorisations are drawn to illustrate trends and comparisons. Moreover, it is important to note that AI regulations are evolving rapidly, and we do not expect these positions or strategies to remain static. Continuous developments in technology, geopolitics and international cooperation will likely reshape the regulatory landscape in the near future.
Regulatory Strategy: Proactive vs Reactive
Countries are adopting various strategies for AI regulation, from proactive and comprehensive frameworks to more reactive, targeted strategies. Proactive regulations involve anticipating potential challenges and risks associated with AI development and deployment, and addressing them before the technology fully matures; reactive strategies respond to issues as they emerge.
The EU has adopted a proactive strategy with the EU AI Act, which is widely regarded as the most comprehensive and stringent in the world. Elsewhere, South Korea’s National Assembly has approved and adopted the AI Basic Act, while Brazil’s Federal Senate has approved the Brazilian AI Bill, which is currently under review by the Federal Chamber of Deputies. Both the South Korean and Brazilian frameworks are comprehensive, forward-looking, risk-based AI regulations that align with the EU AI Act.
China, on the other hand, has so far adopted a more piecemeal approach to AI regulation, introducing targeted laws as specific technologies gain prominence. For instance, the Cyberspace Administration of China issued algorithm-recommendation regulations, which came into effect in 2022, in response to the rise of platforms such as the video-hosting service Douyin (owned by ByteDance, which also owns TikTok). Then, in 2023, China introduced both the Deep Synthesis Regulation and the Generative AI Regulation, in response to the growing influence of deepfakes and generative-AI technologies.[_]
The UAE, Saudi Arabia, India and South Africa are also taking a reactive approach to AI regulation, adapting existing laws to relevant AI developments. In South Africa, the government is applying relevant elements of the Protection of Personal Information Act and the Electronic Communications and Transactions Act to emerging AI technologies.[_] Similarly, Singapore[_] and the UAE[_] have amended their road laws to include specific provisions regarding autonomous vehicles.
In practice, many governments are employing a hybrid strategy (neither fully proactive nor reactive) as it allows them to address immediate challenges while keeping an eye on long-term safety and risks. For example, countries such as the US, the UK and Singapore are responding to AI developments with targeted approaches while also setting in motion proactive regulations for future developments.
Although the US had more than 100 AI-related bills in Congress as of September 2024, binding directives have primarily come from executive actions. President Joe Biden’s Executive Order (EO) 14110, issued in October 2023, promoted initiatives such as workforce development and integrating AI into government services. It also addressed risks (such as bias and transparency in AI models) and took a forward-looking approach to mitigating the future risks posed by large AI systems, championing mechanisms such as compute governance, enhanced cybersecurity protocols and export controls.[_]
Upon assuming office, President Donald Trump repealed EO 14110 and introduced a new executive order called Removing Barriers to American Leadership in Artificial Intelligence. This directive emphasises rapid AI development and reduces regulatory oversight to maintain the country’s global AI dominance.[_] Meanwhile, sector-specific regulation and state laws are increasingly reactive: states such as Texas, Virginia and California have recently passed deepfake-specific laws in response to the rise of generative AI.[_]
The UK government is another that has moved towards a hybrid strategy. The government initially favoured a sector-specific regulatory strategy but has signalled its intention to regulate frontier AI models, recognising the need to address potential harms from emerging technology.[_] Singapore has also adopted a hybrid approach, balancing the adaptation of existing laws with forward-looking governance strategies. For example, its government extended the regulation of its Health Products Act 2007 to require medical devices that incorporate AI technology to be registered before they are used.[_] It has also leveraged its comprehensive AI governance framework – the AI Verify toolkit, developed in collaboration with industry – to encourage responsible AI development and mitigate present risks.[_]
Regulatory Structure: Decentralised vs Centralised
The structure of AI regulation varies significantly, often shaped by a country’s legal and political systems, as well as the level of AI integration across its economy.[_] For example, the EU, Singapore, Saudi Arabia, South Africa and the UAE have adopted relatively centralised structures, where a primary national authority plays a key role in formulating and enforcing AI regulations across various sectors. This approach aims to ensure efficiency, uniformity and the large-scale implementation of regulations, while providing clear guidance and a cohesive strategy for innovation and compliance.
However, even centralised systems incorporate sector or regional collaboration to address specific needs. For instance, while the design and enforcement of the EU AI Act is centralised, experts anticipate that additional sector-specific principles, standards and guidelines from individual regulatory bodies will be necessary to ensure the specificity required for the law to be effective.[_] Moreover, Singapore and South Korea emphasise collaboration between national regulators and industries to tailor guidelines effectively.[_],[_]
By contrast, Brazil and China exhibit more distributed regulatory structures. Brazil’s proposed AI regulatory framework envisions coordination between a central AI authority and sector-specific regulators, promoting sectoral engagement and innovation.[_] Similarly, in China, AI regulation involves multiple agencies such as the Cyberspace Administration of China, the Ministry of Science and Technology, and the Ministry of Industry and Information Technology, with significant inter-institutional competition; local governments, including those in Shanghai and Beijing, have also been proactive in introducing their own AI policies.[_] This distributed and competitive structure allows for diverse perspectives but can create challenges for policy coherence.
Countries such as the US, India and the UK employ various forms of decentralised or hybrid regulatory structures. The US follows a hybrid model, combining federal-level regulations (such as White House executive orders on AI) with state and city-level regulations and sector-specific guidelines. Localised initiatives, such as New York’s Bias Audit Law and the Illinois Biometric Information Privacy Act, highlight how regional regulations address specific concerns. However, this approach can lead to complexities in compliance for organisations operating across jurisdictions.
India’s regulatory framework is decentralised but incorporates strong central oversight through MeitY, complemented by sector-specific regulators such as the Reserve Bank of India and the National Health Authority. The UK adopts a similarly decentralised yet coordinated framework, with multiple AI-related bodies such as the AI Policy Directorate and the Responsible Technology Adoption Unit (both within the Department for Science, Innovation and Technology) working alongside the Information Commissioner’s Office.
Regulatory Scope: Sector Specific vs Broad/Technology Specific
Across jurisdictions, governments face the challenge of balancing sector-specific regulations with a broader, technology-specific scope. While the latter can promote alignment and consistency, sector-specific regulations can foster agility.
The EU, South Korea and Brazil have approved or proposed broad, comprehensive regulations grounded in a risk-based approach, while China has opted for technology-specific regulations. Meanwhile, countries such as the US, the UK and Singapore have adopted hybrid regulatory strategies that combine sector-specific oversight with broader principles. The US relies on federal agencies to regulate AI in specific industries, while also implementing broad and technology-specific regulation through executive orders and state regulations. Singapore emphasises collaboration between regulators and industries, tailoring sector-specific guidelines while maintaining overarching ethical principles, such as those outlined in its Model AI Governance Framework. The UK, which initially adopted a sector-specific approach favouring regulation specific to use cases, has recently signalled a shift towards adding technology-specific rules to complement its existing framework.
Saudi Arabia lacks AI-specific laws, but SDAIA has issued broad, technology-specific guidelines. These include the AI Ethics Principles and Generative AI Guidelines, both of which promote the responsible use of AI in government and the private sector. Similarly, the UAE has focused on sector-specific measures while issuing general AI guidelines to foster ethical and sustainable AI development. South Africa and India, while expressing interest in broader regulations, currently rely on sector-specific guidance. Both countries tailor their AI policies to address the unique challenges and opportunities of their respective industries, reflecting a flexible and adaptive approach to AI governance.
Regulatory Approach: Risk, Rules, Outcomes or Principles
Four key regulatory approaches have emerged among the countries benchmarked as part of our report, each with distinct advantages and challenges that reflect their broader technological and market perspectives. However, these approaches are not mutually exclusive, with many countries choosing combinations and variations.
Risk-Based Approach
The EU, South Korea and Brazil employ a model that tailors interventions to the level of risk posed by different AI systems. Under the EU AI Act, for example, higher-risk applications, such as those in health care or autonomous vehicles, are subject to stricter oversight, while lower-risk uses face lighter requirements. This overarching framework helps ensure that changes focus on reassessing and reclassifying specific systems, rather than necessitating frequent structural overhauls to the regulatory model itself.
Rules-Based Approach
China’s model provides clear, detailed directives for AI developers and users. It allows for strong government oversight and control, ensuring systems are developed in alignment with national priorities. While the rules-based system may limit agility and potentially stifle innovation, China’s model incorporates mechanisms that mitigate these concerns. Many of its regulations are secondary regulations, which are inherently more adaptable than primary ones. They allow for adjustments as technology advances, enabling the regulatory framework to remain relevant in a rapidly evolving field. Additionally, China increasingly relies on standards to explicate specific provisions, offering further adaptability within the broader framework.
Outcomes-Based Approach
The US adopted an outcomes-based regulatory approach to AI under President Biden’s administration, emphasising flexibility for innovators in their efforts to achieve key objectives, with a focus on fairness, transparency and accountability in AI systems. President Trump’s repeal of EO 14110 signals a shift in priorities towards rapid AI development, economic growth and global competitiveness, while maintaining an outcomes-based framework. This approach, while fostering innovation, can create uncertainty for developers due to its lack of specific interventions and obligations.
Principles-Based Approach
The UK previously adopted this approach, focusing on high-level, context-specific guidelines. This allowed sector-specific bodies to develop more detailed regulations tailored to specific use cases, promoting flexibility and innovation across sectors. However, the current government has signalled a shift towards a more targeted stance, particularly for frontier AI models, reflecting an evolving understanding of the risks posed.
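To summarise the benchmark, the four dimensions discussed in this step (strategy, structure, scope and approach) can be read as a simple lookup table. The sketch below encodes a few of the positions described above; it is an illustrative simplification, since, as noted earlier, real postures blur these category lines.

```python
# Illustrative encoding of the benchmarking dimensions as a lookup table.
# The entries paraphrase the analysis in this chapter and are deliberately
# simplified; they are not official classifications.
POSTURES = {
    "EU":    {"strategy": "proactive", "structure": "centralised",
              "scope": "broad", "approach": "risk-based"},
    "China": {"strategy": "reactive", "structure": "distributed",
              "scope": "technology-specific", "approach": "rules-based"},
    "US":    {"strategy": "hybrid", "structure": "hybrid",
              "scope": "hybrid", "approach": "outcomes-based"},
    "UK":    {"strategy": "hybrid", "structure": "decentralised",
              "scope": "hybrid", "approach": "principles-based"},
}


def compare(dimension: str) -> dict[str, str]:
    """Slice the table along one dimension, e.g. 'approach'."""
    return {country: posture[dimension] for country, posture in POSTURES.items()}


print(compare("approach"))
# {'EU': 'risk-based', 'China': 'rules-based',
#  'US': 'outcomes-based', 'UK': 'principles-based'}
```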
The regulatory postures of benchmarked countries and regions vary enormously
Based on the benchmarking analysis in this section, a few underlying insights have emerged.
Diverse Regulatory Approaches
AI regulation across the globe reflects a wide array of strategies, structures, scopes and approaches, with many countries developing hybrid models. However, these hybrids are far from uniform, varying significantly in their design and emphasis. Some hybrid models focus on sector-specific oversight while incorporating broader, technology-specific governance, whereas others blend centralised frameworks with localised adaptation to address unique needs.
Governments in the Global South may particularly benefit from implementing hybrid regulatory models, which can help address resource constraints and the challenges of regulating rapidly evolving technologies. Moreover, hybrid models enable alignment with international standards, allowing these governments to navigate global power dynamics effectively. At the same time, they provide the flexibility to customise frameworks that reflect local priorities and unique socioeconomic realities, fostering a balance between global integration and national autonomy.
But while context-specific regulation can be agile and hybrid regulatory models can enable incremental progress in addressing immediate priorities, they also introduce key challenges, such as regulatory divergence. When jurisdictions implement differing rules and standards, inconsistencies can arise that complicate international collaboration and enforcement efforts. This is especially concerning given AI’s far-reaching implications for global security.
Regulatory divergence also creates opportunities for regulatory arbitrage, whereby companies exploit differences in national regulations to operate under the least stringent requirements. Addressing this risk requires not only global harmonisation of AI regulations but also mechanisms for coordinated enforcement, to ensure that AI technologies are developed and deployed responsibly across borders.
Establishing common ground through organisations such as the International Organization for Standardization can help mitigate the risks by creating a level playing field, promoting ethical AI practices and international trade, and preventing companies from circumventing stricter regulations in certain regions. A useful comparison can be drawn with the financial industry, where countries have developed tailored regulatory approaches based on their unique economic and legal contexts. But despite these differences, global harmonisation has been achieved in critical areas such as banking regulations and anti-money laundering, thanks to frameworks such as the Basel Accords and the Financial Action Task Force, which balance local specificity with global standards.[_]
Path-Dependent Regulatory Decisions
Regulatory postures can be inherently path dependent, shaped by historical, political and legal contexts. For instance, countries with highly centralised political structures tend to adopt top-down regulatory frameworks, with government authorities setting and enforcing AI regulations, allowing for the swift implementation of policies (as seen in countries such as Saudi Arabia). Moreover, regions such as the EU, where legal traditions emphasise fundamental principles such as proportionality, often favour risk-based approaches. In these systems, regulations are tailored to specific risks, ensuring that they address AI’s potential dangers without unnecessarily infringing on companies’ rights. This contextual dependency highlights the importance of understanding a nation’s broader institutional framework when designing AI regulations.
Influence of Special Interests on Regulation
Special interests play a significant role in shaping AI regulations, often reflecting values that may not be universal. As previously discussed, the EU AI Act is heavily influenced by the region’s strong focus on privacy, shaped by longstanding European principles around personal-data protection. Conversely, the efforts of President Biden and President Trump reflect the needs and priorities of US tech companies, emphasising the fostering of innovation, maintaining global competitiveness and safeguarding national security. These varying influences underscore the challenges of exporting regulatory frameworks without fully considering the underlying cultural, economic and political contexts. To avoid the pitfalls of importing regulations wholesale, countries developing their own AI frameworks should ensure that these regulations align with regulatory objectives, principles and posture.
Leveraging Existing Legal Foundations
Many countries are grounding their AI regulations in well-established legal frameworks, such as privacy, data protection and competition laws. The EU’s General Data Protection Regulation and Digital Markets Act, China’s Personal Information Protection Law and the California Consumer Privacy Act in the US are all pivotal in shaping AI regulation. By building on these pre-existing legal foundations, governments can ensure regulatory coherence and minimise the need for entirely new frameworks, which in turn reduces enforcement costs. In countries with emerging AI regulations, governments can streamline the regulatory process by updating or adapting existing laws to address AI-specific challenges, ensuring continuity while spurring innovation.
Step Four: Design Comprehensive Interventions
Governments should design robust interventions that address specific stages of the AI lifecycle, prioritising where they have the most visibility and impact, such as the deployment and application stages. TBI’s AI Regulation Wheel (see below) provides a comprehensive overview of the AI value chain, enabling policymakers to strategically identify and target key areas for intervention. Additionally, the AI Impact Assessment Framework (see downloadable annex PDF) offers a structured method for evaluating the potential outcomes and effectiveness of specific regulatory measures, ensuring that interventions are both impactful and aligned with national priorities.
Several organisations, governments and companies have developed tools to support governments in designing effective regulations. Key examples include the US government’s National Institute of Standards and Technology’s AI Risk Management Framework; UNESCO’s Readiness Assessment Methodology and Ethical Impact Assessment; the Brookings Institution’s AI Regulatory Toolbox; and Google’s Building a Responsible Regulatory Framework for AI.
Adding to these efforts, TBI has developed the AI Regulation Wheel, a comprehensive, first-principles framework for regulating the AI value chain. While it draws on the principles of existing regulatory models, it is a newly developed framework to address the unique challenges and opportunities of AI in a comprehensive manner.
The AI Regulation Wheel visualises the AI lifecycle, encompassing four key stages: input, development, output and the feedback loop. At its core, the framework has the three foundational pillars for crafting adaptable, innovation-supporting interventions that were detailed in the first three steps: regulatory objectives, AI principles and regulatory posture.
The outer layer of the framework represents the broader AI value chain, encompassing essential components such as hardware, natural resources and energy infrastructure – critical elements highlighted by UNESCO.[_] While the regulation of these supply-chain elements is vital for comprehensive AI regulation, they fall outside the specific scope of this analysis of AI software.
The TBI AI Regulation Wheel
Source: TBI
The AI Regulation Wheel supports governments as they design effective regulations for a range of AI actors. Using the wheel, governments can allocate resources more efficiently by identifying opportunities and risks throughout the AI lifecycle. It facilitates the development of interventions that cover market-level dynamics, firm-level practices and people-centred considerations, as well as activities in both the public and private sectors.
A key strength of the wheel is its adaptability. It is designed to be time-neutral, equipping policymakers to address immediate, intermediate and long-term risks, including emerging threats such as lethal autonomous weapons and biohazards. Additionally, it complements existing laws by identifying gaps and opportunities for new process- and product-oriented interventions, to facilitate managing the complexities of AI development and deployment effectively.
The wheel framework also highlights the interconnected nature of the AI value chain, emphasising that risks often span multiple stages of the AI lifecycle. For instance, regulating data privacy at the Input stage affects the Development and Output phases, by shaping how AI models are trained and deployed. Similarly, strong competition laws tailored to a country’s unique context can promote fairness and innovation throughout the entire lifecycle of AI systems. By demonstrating how interventions in one area can significantly influence others, the wheel underscores the need for a holistic regulatory view to address AI’s multifaceted risks and opportunities.
The feedback loop within this framework also plays a critical role in ensuring that regulations remain adaptive, responsive and impactful. A key element of this loop is AI assurance, which enables the continuous monitoring, evaluation and refinement of AI systems. By integrating AI-assurance processes – such as impact assessments, audits and compliance reviews – governments can systematically identify and address risks, enhance transparency and ensure alignment with ethical and societal priorities. This approach not only supports the operationalisation of principles such as accountability and safety, but also fosters public trust and strengthens global collaboration.[_]
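As a purely hypothetical sketch (not TBI’s formal specification of the wheel), the lifecycle layer can be thought of as a mapping from stages to candidate intervention areas, with the feedback loop closing the cycle. The stage names follow the report; the interventions listed are illustrative assumptions, not an exhaustive or official taxonomy.

```python
# Hypothetical mapping of the wheel's lifecycle stages to example
# intervention areas drawn from the discussion above; illustrative only.
WHEEL_STAGES = {
    "input":       ["data collection", "data storage", "data processing"],
    "development": ["risk assessments", "ethical testing", "transparency"],
    "output":      ["access controls", "output monitoring", "liability"],
    "feedback":    ["impact assessments", "audits", "compliance reviews"],
}


def stages_touching(keyword: str) -> list[str]:
    """List the lifecycle stages whose interventions mention a keyword,
    reflecting the point that risks often span multiple stages."""
    return [stage for stage, areas in WHEEL_STAGES.items()
            if any(keyword in area for area in areas)]


print(stages_touching("data"))   # ['input']
print(stages_touching("audit"))  # ['feedback']
```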
Mapping The AI Regulation Wheel: Country-Specific Examples
Drawing on countries from both the Global North and Global South, here we demonstrate how existing AI-relevant laws and new interventions can be mapped retrospectively on to the lifecycle layer of the AI Regulation Wheel (a table summarising this information is in the annex, available as a downloadable PDF, and a simple illustrative sketch of the mapping follows the country examples below). This exercise illustrates how regulatory efforts align with various stages of the AI lifecycle, lays the groundwork for deeper analysis and, ultimately, informs the design of new regulatory frameworks. It also highlights existing best practices and opportunities for improvement in innovation promotion and risk mitigation.
Input
Innovation: India’s PDPB sets clear guidelines for the secure and ethical use of personal data, which are applied in the National Digital Health Mission (NDHM). By following PDPB standards, the NDHM offers developers access to anonymised health-care data, enabling AI advancements in diagnostics, personalised treatments and public health. This framework fosters innovation in health-care AI by providing a data pool that developers can use to train AI models,[_] while maintaining strict privacy, data security and ethical safeguards.
Risk mitigation: In August 2024, citing Brazil’s LGPD, the country’s National Data Protection Authority (ANPD) issued a landmark decision temporarily suspending Meta’s use of its users’ social-media data for AI-model training.[_] The LGPD mandates transparency, informed consent and data minimisation in data processing, which Meta was initially found to have violated. However, the suspension was lifted after Meta implemented a comprehensive compliance plan, including enhanced transparency measures, opt-out mechanisms and safeguards to protect personal data, particularly for minors. The case underscores Brazil’s commitment to enforcing its data-protection standards while allowing AI developers to align with legal and ethical safeguards. It also demonstrates how regulatory engagement can foster compliance without stifling innovation.
Development
Innovation: Canada’s proposed Artificial Intelligence and Data Act (AIDA) promotes innovation in the development phase by creating a clear and predictable regulatory environment.[_] By mandating risk assessments, transparency and ethical testing, AIDA fosters trust in AI technologies, which can encourage investment and collaboration. Additionally, it incentivises compliance by offering benefits such as fast-track approvals to companies that adhere to responsible AI-development standards, enabling them to bring their innovations to market more quickly.
Risk mitigation: Mexico’s proposed Federal Law Regulating Artificial Intelligence requires financial institutions and other AI providers to obtain prior authorisation from the Federal Telecommunications Institute.[_] It mandates risk classification for AI systems, and institutions must ensure transparency, accountability and bias mitigation before deploying AI solutions. The law also imposes penalties for non-compliance.
Output
Innovation: The Dubai Financial Services Authority (DFSA) launched the Innovation Testing Licence (ITL) programme as part of its regulatory framework for fintech innovation.[_] The ITL allows applicants to test innovative financial products in a controlled environment while ensuring compliance with the DFSA’s regulatory objectives. Participants undergo an authorisation process, creating a legally compliant space for testing financial innovations. This sandbox supports the development of new technologies and enhances the DFSA’s supervisory understanding.
Risk mitigation: The EU’s proposed AI Liability Directive and revised Product Liability Directive establish frameworks to enhance accountability for developers, manufacturers and operators of AI systems,[_] including AI applications in health care, finance and autonomous vehicles. Under these frameworks, if an AI system causes harm or malfunctions, the liable party is responsible for compensatory damages.
Feedback Loop
Innovation: Brazil’s LGPD creates a flexible regulatory framework that emphasises data privacy and protection. It requires companies to conduct regular Data Protection Impact Assessments, ensuring that AI systems handling personal data are evaluated for compliance.[_] This promotes a dynamic environment for AI innovation, in which companies can confidently develop new technologies while ensuring adherence to evolving privacy standards.
Risk mitigation: The US’s proposed Algorithmic Accountability Act would require companies to conduct impact assessments on automated decision-making systems, including AI.[_] The regulation would require businesses to evaluate their algorithms for potential biases, privacy concerns and other risks, and make adjustments as necessary.
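As a simple illustration of the mapping exercise above, the sketch below arranges the country examples into a small data structure, one entry per lifecycle stage, and checks each stage for missing innovation-promoting or risk-mitigating interventions. This is a hypothetical analytical aid, not a TBI tool: the stage and intervention names are taken from this report, while the structure and the gaps() helper are invented for illustration.

```python
# Illustrative only: the stage and intervention names below are taken from
# this report; the structure itself and the gaps() helper are hypothetical,
# invented to show how the mapping exercise could be made machine-checkable.

LIFECYCLE_MAPPING = {
    "Input": {
        "innovation": "India's PDPB / National Digital Health Mission",
        "risk_mitigation": "Brazil's LGPD (ANPD suspension of Meta's AI training)",
    },
    "Development": {
        "innovation": "Canada's proposed AIDA",
        "risk_mitigation": "Mexico's proposed Federal Law Regulating AI",
    },
    "Output": {
        "innovation": "Dubai DFSA Innovation Testing Licence",
        "risk_mitigation": "EU AI Liability and Product Liability Directives",
    },
    "Feedback Loop": {
        "innovation": "Brazil's LGPD (Data Protection Impact Assessments)",
        "risk_mitigation": "US proposed Algorithmic Accountability Act",
    },
}

def gaps(mapping: dict) -> list[str]:
    """List lifecycle stages missing an innovation-promoting or
    risk-mitigating intervention -- candidate regulatory gaps."""
    return [
        f"{stage}: no {kind} intervention"
        for stage, entries in mapping.items()
        for kind in ("innovation", "risk_mitigation")
        if not entries.get(kind)
    ]

print(gaps(LIFECYCLE_MAPPING))  # [] -- every stage is covered in this example
```

Populated with a country’s own laws rather than these examples, the same structure makes regulatory gaps at each lifecycle stage immediately visible.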
Assessing the Impact of Regulatory Interventions
Interventions that address a government’s regulatory goals and AI risks may not always align with a country’s broader economic, national security and geopolitical objectives. For example, the EU’s risk-based regulatory approach focuses on protection and trust – effectively supporting national security, labour markets and consumer-protection efforts – but it may have unintended consequences. These could include hindering economic growth and international trade by slowing technological development, limiting access to data and technologies or creating systems that are not interoperable.
TBI’s AI Regulation Impact Assessment Template (see annex, available as a downloadable PDF) provides a framework for benefit-cost analysis, allowing for a systematic evaluation of AI interventions. Benefit-cost analysis is already widely used by governments to assess the impact of regulatory proposals, and it is particularly crucial for AI given the technology’s rapidly evolving nature and significant economic and societal implications. TBI’s template systematically maps interventions in existing AI regulations to their impact across six key areas that are particularly important to governments: economic growth, labour markets, national security, consumer protection, trade and climate change.[_]
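To make the mechanics of such an assessment concrete, the sketch below shows one way a weighted benefit-cost score could be computed across the six impact areas named above. It is a minimal, hypothetical illustration rather than TBI’s actual template: the scoring scale, weights and example numbers are all invented and would in practice reflect a government’s own priorities and evidence base.

```python
# A minimal, hypothetical sketch of the kind of benefit-cost scoring the
# template supports. The six impact areas come from this report; the scoring
# scale, weights and example numbers are invented for illustration and would
# in practice reflect a government's own priorities and evidence base.

IMPACT_AREAS = [
    "economic_growth", "labour_markets", "national_security",
    "consumer_protection", "trade", "climate_change",
]

def net_benefit(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted net impact of one intervention.

    scores  -- estimated impact per area, from -1 (strong cost)
               to +1 (strong benefit).
    weights -- relative priority of each area; normalised internally.
    """
    total_weight = sum(weights[area] for area in IMPACT_AREAS)
    return sum(scores[area] * weights[area] / total_weight for area in IMPACT_AREAS)

# Example: a strict licensing regime assessed under priorities that weight
# economic growth and labour markets heavily (all numbers hypothetical).
weights = {"economic_growth": 3, "labour_markets": 3, "national_security": 1,
           "consumer_protection": 1, "trade": 1, "climate_change": 1}
scores = {"economic_growth": -0.4, "labour_markets": -0.2, "national_security": 0.6,
          "consumer_protection": 0.8, "trade": -0.3, "climate_change": 0.0}

print(f"Net benefit: {net_benefit(scores, weights):+.2f}")  # Net benefit: -0.07
```

The weighting step is the point: the same intervention that scores well under consumer-protection-weighted priorities can score negatively under growth-weighted ones – exactly the kind of trade-off the analysis below highlights.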
The analysis reveals that interventions from regions such as the EU, the US and China often involve trade-offs that may not align with the priorities of countries in the Global South.[_] For instance, while Global North regulations may emphasise consumer protection or ethical considerations, they could inadvertently hinder economic growth or labour-market development – areas that are often more critical for Global South countries. Therefore, governments in the Global South should ensure that regulatory interventions are closely aligned with their national priorities and tailored to their socioeconomic contexts. This approach helps maximise the benefits of regulation while minimising unintended negative impacts.
Step Five: Commit to Continuous Adaptation and Learning
AI regulation must be dynamic and responsive to the rapid evolution of the technology. Governments should establish mechanisms for continuous monitoring, evaluation and revision of regulatory frameworks to make sure that they remain relevant and effective in addressing emerging developments and risks. Tools such as regulatory sandboxes, pilot programmes and regional testbeds are particularly valuable in the Global South, enabling policymakers to gather insights, refine approaches and identify unintended consequences in a controlled and cost-effective manner. Additionally, fostering collaboration with international bodies, industry leaders and research institutions can help bridge gaps in expertise and capacity.
While the AI Regulation Wheel offers a comprehensive framework, it faces significant challenges related to interconnectedness and scalability. Its strength – addressing AI regulation across multiple areas – can also create regulatory overlaps or gaps. Moreover, while the wheel’s broad categories aim for universality, they may struggle to scale effectively across diverse industries and specialised AI applications, where more nuanced approaches are often needed.
The EU’s regulatory approach serves as a compelling example of adaptation to emerging AI challenges. While earlier drafts of the EU AI Act made no mention of foundation models, the European Parliament later proposed their inclusion during negotiations. However, the final text instead adopted the term “general-purpose AI models” to better capture the broad applicability and diverse risks associated with such technologies.[_] This shift underscores the importance of regulatory adaptability, ensuring that frameworks remain flexible and responsive to the rapidly evolving landscape of AI innovation and its associated risks.
Similarly, the launch of the world’s first AI Safety Institute (AISI) in the UK in November 2023 underscores the role of new institutions in prioritising safe AI development. The AISI’s primary remit is to test the safety of emerging technologies and provide oversight of the risks posed by advanced models. Since its establishment, several countries – including Australia, Canada, France, Japan, Kenya, South Korea, Singapore and the US, as well as the EU – have set up AISIs or equivalent institutes, and an International Network of AI Safety Institutes has been formed.[_] These bodies are integral to evaluating risks, testing AI models and shaping global standards for responsible governance, and they emphasise the need for cross-border collaboration and knowledge-sharing to address the diverse challenges that AI poses globally.
For governments in the Global South, adapting to the fast-paced evolution of AI technologies requires proactive measures. This includes establishing mechanisms to monitor advancements in areas such as quantum-enhanced AI, which combines the computational power of quantum systems with AI’s analytical capabilities, and increasingly sophisticated AI agents, which can autonomously make decisions and act on behalf of users, raising complex regulatory concerns.
Moreover, governments should strengthen the capacity of regulatory agencies by equipping them with specialised sociotechnical expertise (knowledge of not only AI technologies but also their social and ethical implications) to effectively address risks and design balanced interventions.[_] This includes investing in ongoing training, technical resources and knowledge-sharing initiatives that empower regulators to make informed, proactive decisions. Additionally, it is crucial to grant these regulatory bodies the authority and independence to act swiftly in response to emerging risks and adapt regulatory frameworks as AI evolves.
Collaboration with international research institutions, industry leaders and regulatory bodies is essential to strengthen the adaptive capacity of frameworks such as the AI Regulation Wheel. For example, the recently launched Partnership for Global Inclusivity on AI, led by the US and eight prominent AI companies, has committed more than $100 million to enhancing AI capabilities in developing countries. This initiative complements the wheel’s focus by expanding access to AI models, providing compute credits and delivering training programmes to build regulatory expertise and capacity.[_] By fostering such global partnerships, governments can develop forward-looking regulations that remain adaptable to emerging risks and technological advancements.
Chapter 6
To responsibly harness AI’s transformative potential, regulation is not just inevitable but essential. At the Inter-Parliamentary Union Assembly on 17 October 2024, parliamentarians from 130 countries resolved to “swiftly develop and implement robust legal frameworks and policies for the responsible creation, deployment and use of AI technology”.[_]
This global call to action is already gaining momentum. More than 37 countries have proposed or enacted AI-related legal frameworks, signalling a shared recognition of the need for regulatory measures that enable innovation by managing risks.[_] However, regulating AI is inherently complex: there is no universal blueprint and, rather than targeting a single, static technology, legal frameworks must remain agile, relevant and adaptable in an era of rapid technological change.
This report emphasises the need to set clear objectives that integrate innovation with risk management, establish ethical principles rooted in inclusivity and define regulatory postures that harmonise international standards with local realities. It should be considered a starting point for governments to develop AI regulations that are locally relevant and globally aligned – and the time to start is now.
Chapter 7
We would like to extend our thanks to those who offered their advice and guidance in the development of this report (while noting that contribution does not equal endorsement of all the points made and does not reflect the views of respective employers).
TBI Contributors
Rasmus Andersen
James Goff
Jared Haddon
Calum Handforth
Johan Harvard
Steven Suskauer
External Contributors
Olubayo Adekanmbi, EqualyzAI
Fola Adeleke, Global Center on AI Governance
Markus Anderljung, Centre for the Governance of AI
Aubra Anthony, Carnegie Endowment for International Peace
Juan David Gutiérrez, School of Government, Universidad de los Andes
Lily Edinam Botsyoe, University of Cincinnati
Sean Evins, independent consultant
Rachel Finn, Trilateral Research
Kelly Forbes, AI Asia Pacific Institute
Jonathan Gonzalez, Access Partnership
Caleb Groen, Harvard Law School
Jonas Kgomo, Equiano Institute
Mihir Kshirsagar, Center for Information Technology Policy, Princeton University
Yolanda Lannquist, The Future Society
David Lemayian, Tenery Research
Zixuan Li, MIT AI Alignment
John McDermid, Institute for Safe Autonomy, University of York
Gina Neff, Minderoo Centre for Technology & Democracy, University of Cambridge
Huw Roberts, Royal United Services Institute
Camille Stewart Gloster, CAS Strategies
Ed Tsoi, AI Safety Asia
Henrik von Scheel, Institute of Strategic Intelligence
Kush Wadhwa, Trilateral Research
Kofi Yeboah, Mozilla Foundation
Kwan Yee Ng, Concordia AI
Bobina Zulfa, Pollicy