
Tech & Digitalisation

Delivering AI Impact: A Leadership Agenda for Turning Technology into Public Value


Paper, 17th February 2026


Foreword

Artificial intelligence is a general-purpose technology shaping how states govern, economies grow and citizens experience public services. We are living through a civilisational-scale shift, one that offers opportunity not only for countries pushing the technological frontier, but also for those that deploy AI at scale to solve real problems.

That is why the India AI Impact Summit is focused on outcomes. Its purpose is to move beyond abstract debates about potential and risk and show how safe and useful AI can improve lives and livelihoods.

For governments, the most pressing question is how to translate AI’s potential into public value in a world of limited fiscal space and uneven digital foundations. Promisingly, low- and middle-income countries are well positioned to lead in adoption. With fewer legacy systems to unwind, digitally confident populations and ambitious development goals, they can leapfrog and build state capacity designed for the digital era from the ground up.

This report by the Tony Blair Institute addresses that opportunity head-on. It begins from a pragmatic insight: the greatest public value from AI will come from solutions that are affordable, reliable and scalable – tools that fit the realities of schools, hospitals and frontline services where time is scarce and trust is essential. Impact will come not only from technological breakthroughs, but from governments that invest, experiment, execute and adopt at scale.

This perspective matters because global discussions on AI have not always matched the realities governments face. Much attention has been paid to frontier risks and competition for compute. These issues matter. But for most countries, the more urgent task is ensuring AI becomes useful and inclusive, especially where service-delivery gaps are most acute. Countries that adopt well can capture benefits as significant as those that innovate at the frontier.

India’s digital journey has shown what becomes possible when technology is linked to public purpose and delivered at population scale. Digital public infrastructure, interoperable payments and open platforms for service delivery have expanded inclusion, lowered costs and enabled vibrant innovation ecosystems across startups, public institutions and industry. The lessons of the next phase, driven through the IndiaAI Mission, should also be shared with and adapted by partners across the world.

Delivering AI impact at scale demands systems thinking. AI cannot sit at the edges of government. Public value depends on foundations for adoption: service-led applications, efficient local models, and the compute, infrastructure and skills to sustain deployment.

Leadership must focus on adoption: compute and commercial value matter, but so do trust and optimism. When people see tangible benefits and clear protections, uptake accelerates.

This report offers a practical framework for scaling AI safely, strengthening digital public infrastructure and embedding AI into essential services. It shows how countries can build resilience and shape norms by investing where they have advantage, rather than becoming passive consumers.

The India AI Impact Summit is a call to action: judge AI by its impact on people’s lives.

S. Krishnan

Secretary, Ministry of Electronics & Information Technology

Government of India, New Delhi


Executive Summary

Artificial intelligence (AI) is no longer a distant frontier; it is already reshaping how states are governed, how economies grow and how services reach citizens. For low- and middle-income countries (LMICs), the most powerful opportunity lies less in competing at the technological frontier and more in strategic focus, disciplined execution and political leadership that treats AI as a means of building public value.

Today, hundreds of millions of people still lack reliable access to essential public services, from health care and education to basic administrative systems. Governments that are digitally capable, data-driven and delivery-focused are best placed to tackle this challenge and realise AI’s potential. Strengthening these foundations enables leaders to allocate scarce resources more effectively, respond faster to citizen needs and expand access to essential services at scale.

For countries facing structural and domestic challenges, this is not just a technology story. It is a wider opportunity to transform how states deliver services and to make tangible improvements to citizens’ everyday lives.

Every government contends with limits on state capacity, so strategies must reflect citizens’ needs and make deliberate choices about what to prioritise. For many governments, the binding constraints are institutions not designed for digital delivery, weak data systems, unreliable power and limited access to compute. Until those foundations are built, leaders should avoid proliferating pilots without a path to scale, regulating technologies not yet deployed and publishing AI strategies that are disconnected from infrastructure, procurement or budgets. In parallel, early, proportionate regulation and standard setting are essential to building effective digital foundations.

History shows that these moments of general-purpose technological change are critical in shaping long-term development trajectories. In the late 19th century, industrialisation became the route through which countries moved from middle-income status to sustained prosperity – not by prioritising invention above all else, but by reorganising state institutions to adopt and deploy new technologies at scale.

AI represents a comparable opportunity for LMICs today. With fewer entrenched platforms and legacy processes to unwind, governments can redesign delivery models, institutions and digital systems for the AI era by building interoperable data systems, modern procurement processes and AI-enabled services that meet the needs of citizens. LMICs are uniquely positioned to seize this moment. With deliberate political commitment, AI adoption can become an accelerated route to growth, enabling governments to leapfrog slower-moving systems in service delivery, innovation and state capacity.

This paper sets out a practical policy agenda, drawing on the experience that the Tony Blair Institute for Global Change (TBI) has garnered working with governments worldwide. It is written for political leaders who want to turn AI’s potential into meaningful public value. Given limited fiscal space, this agenda focuses on prioritisation, reallocating existing budgets, reforming procurement, and crowding in private financing and partnerships – rather than assuming large amounts of new public spending.

Four overarching recommendation themes define the essential requirements.

I: Strengthen Central Government Capacity to Lead Digital Transformation

Pin transformation at the top of the political agenda. Governments that anchor responsibility for transformation and efficiency at the centre of government are the ones that deliver. Political leadership should focus on execution and outcomes rather than producing strategies alone.

Build digital readiness across government. Governments should embed AI into the core systems that govern planning, budgeting and service delivery, treating AI as critical public infrastructure and strengthening the digital foundations first. This enables better resource allocation, reduces administrative burden and supports a shift from pilots to scalable, routine delivery.

Strengthen technical procurement to enable delivery and create demand. Modernising government requires strong technical and procurement capability, enabling governments to define the problems that matter most and to buy, adapt and scale digital solutions accordingly. This can drive demand and shape markets, accelerating delivery.

II: Secure Digital Infrastructure and Ensure Widespread Access

Expand connectivity and inclusion. Governments should treat connectivity as a right and a public good, using blended finance, demand aggregation and risk-sharing partnerships to extend broadband and mobile access to all, including rural and marginalised communities.

Gain access to compute capacity via local and regional hubs. Governments should treat compute as strategic infrastructure, combining limited domestic capacity for sensitive workloads with regional or public–private compute models to balance sovereignty, resilience and affordability.

Build interoperable national data infrastructure. Interoperable data systems are essential to localise AI models and reduce dependence on external data sets. Governments thus benefit from modernising foundational registries and data platforms, establishing secure data-exchange layers and investing in data sets that reflect local languages and contexts.

Build reliable energy systems aligned with AI deployment. Governments should integrate AI demand explicitly into national energy strategies, and align energy and digital-infrastructure planning. This can be done via regional energy markets and renewable-powered infrastructure.

III: Create Ecosystems to Accelerate AI Adoption Across Sectors

Invest in talent and research. Focusing strategically on expanding both advanced and applied AI skills while prioritising broad-based literacy for frontline workers will help build an ecosystem for AI diffusion. Selective investment in research capability aligned with national priorities will further accelerate AI adoption.

Scale high-impact AI deployments through localised technology. Governments that move beyond isolated experimentation and prioritise integrated AI deployments – embedded in real service workflows, trained on local data and designed to scale from the outset – will be well positioned to deliver sustained impact, supported by clear performance metrics.

Leverage open source to accelerate innovation. Governments should support open models, data sets and standards to reduce costs, enable localisation and strengthen transparency, while investing in stewardship and security to manage risk.

Structure public–private partnerships and demand-side policies. Governments have an opportunity to work with industry to access infrastructure, skills and platforms, while avoiding long-term dependency by setting clear conditions on data access, interoperability, skills transfer and exit.

IV: Develop Governance to Build Trust, Enable Growth and Ensure Accountability

Pursue a sector-specific, evidence-informed approach to AI governance. Governments that entrust AI governance to sector regulators and delivery ministries are better positioned to focus oversight on high-impact uses, learn from real-world deployments, and align with national and international AI frameworks, without defaulting to comprehensive, one-size-fits-all AI laws.

Build assurance and accountability capacity that makes AI safe to scale. Governments that invest in assurance systems are better positioned to generate evidence on AI performance and risk over time, and to use that evidence to guide decisions on scaling, adaptation and withdrawal, with clear accountability across the AI value chain.

Build trust through delivery, transparency, participation and effective redress. Governments can build trust by ensuring AI systems demonstrably improve citizen outcomes in practice, are transparent in everyday use and retain clear human accountability, supported by active public engagement, and effective grievance and redress mechanisms.

In practice, many government approaches have been characterised by a proliferation of pilots and a tentative posture towards investing in AI transformation, with insufficient clarity on objectives and outcomes. This has limited real-world impact. These recommendations aim to help countries achieve whole-of-government transformation for the AI era, and should be sequenced according to national constraints, capacity and political priorities to maximise impact.

As global leaders convene at the India AI Impact Summit in 2026, AI represents a defining opportunity for LMICs. This paper sets out a path to impact grounded in disciplined execution: building capability, securing foundations, scaling adoption and governing with confidence over the long term. For citizens, the test for AI impact is simple: does AI make public services more accessible, more responsive and more trustworthy in practice? Countries that act now can ensure AI becomes a tool of public value for their citizens, rather than another source of fragmentation or dependency.


Chapter 1

Introduction

The age of AI transformation is here. For many governments, the challenge is not technological but institutional: whether they can translate new technological tools into tangible public value under conditions of scarcity, fragmented systems and structural constraints.

This challenge is not unique to LMICs; every country is grappling with how to adapt its governing apparatus to this new technological era. While advanced economies have placed emphasis on frontier risks and competition for compute and advanced models, LMICs are engaging with similar challenges – but with a different focus. Rather than frontier development, their priority is delivery: expanding adoption, strengthening foundational systems and translating AI into tangible public value, often through nascent policies, strategies and pragmatic regulatory approaches. In both contexts, the stakes are high. Ultimately, success in any country depends on strengthening the state and infrastructural capacity that makes AI work for citizens.

Beyond the practical imperative of AI-driven growth lies a deeper political one. AI adoption is a question of global justice, equity and sovereignty. Research across the Global South shows that AI development so far overwhelmingly reflects the priorities, values, languages and data sets of high-income economies, leaving many countries structurally disadvantaged in shaping or benefiting from these technologies.[_]

This asymmetry is not just economic or technical. When governments, businesses and citizens rely on AI systems built elsewhere, there is a risk of cultural and normative displacement: local languages and social norms are marginalised, while those embedded in US- or China-centric models become the default. Over time, this can reshape how knowledge is produced, decisions are made and public authority is exercised. The result is a widening “AI divide” that compounds existing inequalities in digital infrastructure, skills and adoption,[_] while constraining countries’ ability to retain agency over their own developmental and political trajectories.

A Turning Point in the Global AI Agenda

As AI technology evolves, the dialogue is shifting from frontier breakthroughs to social and economic consequences. The world’s first AI Safety Summit at the UK’s Bletchley Park in 2023 framed the debate around safety and responsibility.[_] Subsequent gatherings in Seoul and Paris added a strong focus on action. The upcoming fourth summit – the India AI Impact Summit in 2026 – will convene global leaders to discuss “safe, trustworthy and useful AI for all”.[_] This highlights a shift from abstract principles to the practical challenge of deploying AI within real public systems where delivery capacity, infrastructure and institutions determine whether technology translates into public value.

For heads of government facing binding constraints in service delivery, growth and inclusion, impact matters because it determines whether AI improves access, quality and efficiency in essential services, rather than remaining concentrated in advanced economies or high-income sectors. LMICs are well placed to lead this agenda by demonstrating how AI can be deployed inclusively, through public services, digital public infrastructure and adaptive governance, to deliver tangible benefits for citizens while shaping norms for responsible use.

The Real Constraints to AI Delivery in LMICs

Much of the prevailing global AI agenda remains poorly aligned with the realities of delivery in LMICs. It has tended to prioritise national strategies, ethics frameworks and pilot programmes while assuming that infrastructure, institutions, procurement systems and energy access already exist or will follow. In practice, without reliable power, affordable compute, interoperable data systems and public institutions capable of procuring and deploying digital tools at scale, AI initiatives remain fragmented and trapped in pilots. For many governments, the bottlenecks are not a lack of ambition or misaligned principles, but gaps in the foundations that make AI usable, scalable and accountable in the real world.

These constraints are visible in the global distribution of digital and AI capabilities. Despite remarkable digital gains over the past decade, most LMICs enter the AI era from structurally disadvantaged positions. A third of the world’s population – the majority in LMICs – remains offline.[_] Data systems are often incomplete, fragmented or non-interoperable. Access to compute is highly concentrated: about 60 per cent of servers are located in Europe and North America, despite these regions accounting for only around 17 per cent of the world’s population.[_] More than 90 per cent of frontier models come from US- and China-based companies, and only six firms globally (none based in LMICs) have trained models larger than 100 billion parameters.[_]

This concentration creates dependence on foreign cloud providers and models, limits countries’ bargaining power over cost, data use and standards, and reduces their ability to shape how AI is deployed domestically. Furthermore, critical discussions on standards, intellectual property and safety frameworks are being shaped without LMICs’ input despite AI’s transnational consequences and global reach.[_]

A Distinctive Opportunity for LMICs

LMICs are uniquely positioned to shape how AI is deployed. Critically, there are structural opportunities that echo earlier episodes of catch-up growth. In the late 19th century, industrialisation marked the dividing line between countries that achieved high-income status and those that did not. Countries such as Germany and Japan did not lead the first wave of industrial invention, but they caught up rapidly by strategically adopting proven technologies and, critically, reorganising state institutions to deploy them at scale. This led to the building of railways, the standardisation of administration, investment in skills and the alignment of industry with national priorities. Industrialisation was not simply a technological shift; it was a transformation of the state.[_]

History suggests that a country’s long-term economic and geopolitical outcomes are shaped less by technological leadership than by the ability to diffuse new technologies across the economy.[_] Today we face such an inflection point. As a general-purpose technology, AI’s impact depends less on who invents it and more on who can integrate it across public- and private-sector use cases. Countries that modernise state capacity, build interoperable digital foundations and coordinate AI deployment across government can turn late adoption into accelerated development. Those that do not risk being locked into a subordinate position, consuming AI built elsewhere without shaping how it affects their economies, societies or public authority. In this sense, AI adoption is not just a productivity agenda; it is a contemporary route to economic convergence, institutional maturity and long-term sovereignty.

For LMICs, this historical logic translates into a concrete opportunity today because rapid frontier advances are making AI tools more capable and accessible. Where institutional and regulatory frameworks remain adaptable, governments can create space to experiment with AI-enabled delivery models that respond to local priorities.

Public attitudes towards AI are more favourable in LMICs than in other countries. For example, 78 per cent of people in Indonesia, 74 per cent in Thailand and 73 per cent in Mexico believe AI products and services have more benefits than drawbacks, compared with only 37 per cent in the United States and France, and 46 per cent in the UK.[_] This optimism must be treated with caution, as it tends to be higher where AI literacy is lower. Combined with younger, increasingly digitally capable populations, however, there is both an opportunity and an imperative to build literacy while retaining positive public attitudes by embedding AI into public services in ways that are trusted, inclusive and grounded in real-world use.

LMICs have not been passive. Governments have incorporated digital reform and AI across sectors. Albania has used AI to accelerate its legal harmonisation with the European Union; Rwanda has built AI-powered real-time service-delivery dashboards;[_] Kenya has pioneered digital agriculture tools; and Brazil is a global leader in leveraging AI to support its judiciary.[_] India has developed its digital public infrastructure (DPI): shared digital systems built on open standards and protocols (see case study).[_] The development of language models adapted to regional languages to localise AI is also advancing.[_],[_]

Managing the Risks of AI Adoption

While the opportunity is significant, AI adoption carries real risks of misuse, misalignment and misadventure (unintended consequences) that can undermine public trust and cause harm.

Many models perform poorly outside the contexts in which they are trained, particularly when deployed across different institutions, populations or service environments. For example, a pneumonia-detection model trained on data from a single hospital was 20 per cent less accurate when applied to patients from other health systems, misclassifying cases because it had learned institution-specific rather than clinical features.[_]

This challenge also exists in language models: a study has revealed, for example, that multilingual models fail to capture Arabic’s cultural messaging or syntactic complexity.[_] These failures can disproportionately harm marginalised communities. There is a real need to build language- and context-appropriate AI models to reduce misalignment.[_]

AI’s success also brings its own risks. Even when systems work as intended, their effectiveness can generate wider societal harm. Social-media recommender systems tuned to maximise engagement can perform exceptionally well against their objective, yet disproportionately surface polarising, divisive or misleading political content, inadvertently increasing exposure to extreme material – not because the system is failing, but because its optimisation goal drives this outcome.[_] A technically successful system therefore requires very clear objective-setting and risk identification; otherwise it can destabilise public discourse, erode trust and provoke regulatory backlash. Acknowledging and managing these risks openly is essential to building in safety and trust.

While LMICs are not immune to political, commercial or donor-driven conflicts of interest, their position outside the core technology-producing hubs creates an opportunity to design governance frameworks that foreground societal benefit from the outset, rather than retrofitting safeguards around entrenched commercial interests. This can enable responsible innovation that strengthens trust and protects citizens.

A New Model of State Capacity

AI readiness is ultimately linked to underlying state capacity. No government can use AI effectively if its DPI – the systems through which it collects data, delivers services and makes decisions – remains analogue, fragmented or inaccessible.

Some governments have digital platforms that can support rapid AI adoption. Others must first improve data systems, modernise basic processes and build skills. Wherever countries begin, the principle is the same: AI works only when governments have the institutional and digital foundations to use it.

Taken together, this points to the need to reimagine the state for the age of AI (see Governing in the Age of AI: A New Model to Transform the State). Governments must move towards models that are digitally enabled, able to use data to guide decisions and organised around solving real problems rather than managing administrative silos. AI does not simply add a new tool to existing systems; it reinforces the case for institutional reform that aligns technology, delivery and accountability around public value.

A Roadmap for Action

The accelerating pace of technological change calls for both incremental reform and a radical rethinking of how states are organised. The question is no longer whether governments should embrace this shift, but how they can do so in ways that strengthen accountability, inclusion and public trust.

Turning this ambition into reality requires practical reform across government, focusing on capability-building and a whole-of-government transformation, rather than standalone initiatives. Drawing on experience working with governments at different starting points, this paper sets out a practical playbook for political leaders seeking to convert AI’s potential into sustained public value.

This agenda is structured around four interlocking recommendations (Figure 1) that together form a roadmap for impact: building state capacity, securing critical foundations, accelerating adoption through ecosystems, and governing AI with confidence. These recommendations are designed to help LMIC leaders – wherever their starting point – to scale what works, avoid fragmentation and ensure AI strengthens service delivery rather than adding new layers of complexity.

Figure 1 – The four overarching recommendations for delivering AI impact

Source: TBI

India

India’s digital public infrastructure as a platform for delivering AI value on a national scale

India’s digital foundation is not a single application, but a set of shared, interoperable digital “rails” that make up its digital public infrastructure (DPI). Built around open standards, interoperable registries and open application-programming interfaces (APIs), India’s DPI enables both government and private actors to deliver services to hundreds of millions of people at low marginal cost. This approach responds directly to the twin constraints of scale and inclusion that many LMICs increasingly face.

India’s DPI reduces friction in service delivery by making foundational systems reusable across sectors. Core layers include digital identity, population and sectoral registries, consent-based data sharing, payments and language access. Rather than each ministry or vendor building closed, end-to-end systems, providers can plug into common infrastructure and compete or innovate on service quality. This has allowed rapid expansion of digital services while avoiding fragmentation and vendor lock-in.

Crucially, DPI has been treated as a centre-of-government reform priority rather than a standalone technology programme. Successive administrations have invested political capital in building and governing these shared foundations, embedding them into fiscal transfers, welfare delivery, health systems and financial inclusion. This institutional backing has enabled DPI to scale nationally and endure beyond individual pilots or political cycles.

Why DPI is the “multiplier”

DPI functions as a public-utility layer for the digital economy. Instead of systems being duplicated across agencies, common building blocks lower costs, improve interoperability and accelerate adoption. In India’s health stack, for example, a unique health ID, combined with registries for facilities and professionals and a consent-based data-exchange layer, enables interoperable health records across states and providers, without imposing a single monolithic national system. This architecture has also supported ecosystem growth, with hundreds of certified integrators, including hospitals, startups and software firms building services on shared standards.

How AI layers over DPI (the “intelligence layer”)

DPI creates trusted pipelines and shared data sets; AI then becomes the intelligence layer that (1) improves decision quality, (2) makes systems usable through voice and local languages, and (3) scales frontline service delivery. The critical design choice in these cases is augmentation over automation: AI is positioned as decision support (for clinicians and officials) and as an interface layer (language and voice), rather than a replacement for human accountability in high-stakes services.
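To make this layering concrete, the minimal Python sketch below illustrates the pattern. It is hypothetical throughout – ConsentArtifact, fetch_health_records and triage_suggestion are invented names, not part of any actual DPI specification – and assumes a consent-gated data-exchange layer feeding a decision-support model that leaves final judgement with a human official.

    # Illustrative sketch only: all names, fields and rules are hypothetical,
    # not drawn from any real DPI specification.

    from dataclasses import dataclass


    @dataclass
    class ConsentArtifact:
        """Stand-in for a signed, purpose-bound consent record."""
        citizen_id: str
        purpose: str
        expires: str


    def fetch_health_records(consent: ConsentArtifact) -> list[dict]:
        """Pull records through the data-exchange layer, gated by consent.

        A real implementation would verify the consent signature and scope
        before releasing any data; here stub data stands in for the exchange.
        """
        assert consent.purpose == "clinical-decision-support"
        return [{"visit": "2025-01-12", "symptom": "persistent cough"}]


    def triage_suggestion(records: list[dict], language: str) -> dict:
        """Decision support: a placeholder rule scores urgency; a human decides."""
        urgent = any("cough" in r["symptom"] for r in records)
        return {
            "suggested_priority": "high" if urgent else "routine",
            "explanation_language": language,  # interface layer: local language
            "requires_human_signoff": True,    # augmentation, not automation
        }


    if __name__ == "__main__":
        consent = ConsentArtifact("citizen-001", "clinical-decision-support", "2026-12-31")
        print(triage_suggestion(fetch_health_records(consent), language="hi"))

The sketch encodes the design choice described above: the model proposes, in the user’s own language, while a person remains accountable for the decision.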

India’s experience illustrates a broader lesson for LMICs. AI delivers impact at scale only when built on strong digital public infrastructure and aligned with institutional reform. Without shared foundations, AI remains fragmented and pilot-driven. With them, it becomes a lever for inclusive, system-wide transformation.

The lesson for other LMICs is not to replicate India’s scale, but to adopt the principle: invest politically in shared digital foundations, build once, reuse across sectors and govern them centrally.


Chapter 2

The Vision: Harnessing Technology for AI Impact

AI offers LMICs a distinctive opportunity: not to compete at the technological frontier, but to build digital foundations that improve service delivery for their people and support innovation under real constraints. For citizens, this translates into shorter waits for public services, more reliable public programmes and institutions that respond faster and more fairly to everyday needs. When applied as a delivery tool, AI can catalyse growth and help governments advance development goals across sectors. This chapter explores what success looks like when governments are AI-ready and able to deploy AI-enabled systems at scale with trust and legitimacy.

A 2035 Vision for AI Impact

Consider an LMIC that begins 2026 with limited fiscal space, fragmented digital systems and acute delivery pressures. By 2035, this country is not an AI superpower and does not compete at the frontier of model development. Instead, it has focused on using AI to strengthen the foundations of government delivery and to improve outcomes in areas where citizens feel failure most acutely.

Early progress has come not from large-scale research investment, but from sequencing reform around practical use cases. Basic data systems have been cleaned and connected, DPI has been expanded, and AI-enabled tools have been embedded into priority services such as health, education, agriculture and social protection. These systems are adapted from regional or global platforms, rather than built from scratch, and are aligned to local priorities.

This progress comes from alignment at the centre of government. Political priorities are translated into delivery goals through a small, empowered unit that links ministries, budgets and data. Rather than launching a proliferation of pilots, governments focus on scaling what works, using AI to reduce administrative bottlenecks and lower the cost of delivering services to remote and underserved populations.

By 2035, visible gains have accumulated. Public-service waiting times have shortened, network coverage has expanded and frontline workers are better supported by tools that reduce workload rather than add complexity. Governance has built trust with the public. These improvements free scarce fiscal and human resources, allowing governments to reinvest in foundational systems and gradually build domestic capability. In some countries, this has enabled the emergence of local firms or regional partnerships that support public-sector delivery.

This vision is ambitious but grounded. Each element reflects what is already being demonstrated at local, sectoral or sub-national level across LMICs today. What distinguishes success by 2035 is not access to frontier compute or advanced models, but political leadership that treats AI as a means to add public value, applies it where it solves real problems and commits to sustained delivery over symbolic adoption.

Pathways to Impact

Achieving meaningful impact through technology and AI requires recognising that innovation unfolds via incremental and transformational change (Figure 2). Governments should pursue both pathways, but at different speeds depending on sector readiness and needs.

Incremental innovation enhances what governments already do. It improves the speed, cost and reliability of existing services without changing their fundamental ways of operating. Like a horseshoe, which protects hooves without altering the mode of transport, incremental reform does not change the destination or the mode of travel – it reduces wear, failure and friction in the system already in use.[_]

In public services, AI can help innovate how routine diagnostics are carried out to reduce backlogs in primary health care, support teachers with lesson planning and assessment, or streamline procurement to reduce delays and leakage. These gains build confidence among frontline workers, service users and political leaders, and demonstrate progress within politically relevant timeframes. But incremental improvement should not become a substitute for deeper reform. Without deliberate efforts to retire legacy systems and organisational structures, AI risks optimising inefficiency rather than eliminating it.

This is where transformational innovation is required. It enables governments to do things that were not previously possible and redefines routine processes: delivering proactive services that identify needs before citizens seek help; offering personalised learning and health support through multilingual, always-on digital front doors; using real-time simulation and system-wide monitoring to guide decision-making. These innovations do not simply optimise existing processes; they reimagine what an effective, people-centred state can look like in the AI era.

Both pathways are essential. Incremental improvements can build trust, reliability and institutional capacity, while transformational innovation enables new models of service delivery and governance. Together, they define a realistic trajectory towards intelligent, responsive and inclusive public systems.

Figure 2 – The pathways to impact

Source: TBI

AI Impact Across Health and Agriculture

Health and agriculture serve as illustrative examples to ground the vision for AI impact. Both sectors are highly salient to citizens and already central to national development agendas, providing a clear lens through which to show how AI can translate into tangible public value.

What do we mean by AI and AI impact?

AI refers to a family of data-driven methods that support prediction, automation and decision-making, ranging from established machine-learning systems to newer generative models capable of natural-language and multilingual interaction.

In this paper, “impact” refers to public value. It describes AI’s ability to materially improve how the state works for its citizens, particularly in areas where governments today struggle to deliver. Impact is not defined by a single macroeconomic indicator or headline efficiency gain, but by sustained improvements in public value: faster and more reliable services, better-quality decisions, wider inclusion, greater fairness, institutional resilience and improved value for money. Impact is realised when AI measurably improves day-to-day outcomes for citizens while helping governments do more with less, sustainably and at scale.

Health

Health is a priority for the public worldwide, but many governments face a sustained and profound gap between their people’s needs and the resources they have available to meet them. AI can play a role in bridging this gap, enabling more people to get access to the care they need.

Every year, 9 million people die from treatable conditions due to gaps in access to and quality of care.[_] While significant strides have been made in reducing unnecessary deaths, progress has stalled in many countries. The world will face a shortfall of 10 million health professionals by 2030, and a $10 billion fall in overseas aid for health between 2024 and 2025 has further widened the gap between resources and need.[_],[_] By traditional means alone, it would take decades to train and pay for the doctors, nurses and clinics needed to improve care.

Responsibly deployed AI has the potential to accelerate progress by expanding access to and improving the quality of care, by helping people look after their own health and by enabling leaders to take better decisions. In the Philippines, the Department of Health has integrated AI-enabled portable chest X-ray screening into its national tuberculosis programme, using WHO-recommended computer-aided detection to support rapid triage in underserved areas.[_] With portable units deployed, screening that previously required referral and long delays can now be carried out in the community, with results available in minutes, accelerating confirmatory testing and treatment initiation.

In Rwanda, TBI has supported the government in developing its first policy for AI in health, using TBI’s AI in Health Framework to focus on where AI could materially improve health outcomes and system performance. Rather than starting with technology, an initial focus was placed on health-system constraints – prioritising AI for diagnostics, clinical decision support, and analytics to address workforce shortages and support more effective allocation of limited resources. Rwanda has also recently been named as the first country participating in the Horizon1000 initiative, a $50 million partnership between the Gates Foundation and OpenAI, aiming to support the country’s health workers through AI deployment in primary care and communities.[_]

Agriculture and Environment Management

In agriculture and environment management, AI can help governments balance growth with climate resilience at a time of intensifying pressure. Climate shocks already impose severe economic costs on food systems in LMICs.

Between 2008 and 2018, total crop- and livestock-production losses in LMICs amounted to approximately $110 billion, driven largely by climate-related hazards, while earthquakes and landslides alone accounted for around 13 per cent of total losses over the same period.[_] These losses are not inevitable: where governments invest in shared data infrastructure, early-warning systems and digitally enabled extension services, AI can be embedded into public systems to anticipate risks, coordinate responses and alert farmers before shocks translate into irreversible damage.

In the Philippines, AI-based landslide mapping has strengthened disaster preparedness and protected communities from health and livelihood losses, showing how intelligent systems can enhance both safety and inclusion (see AI for Climate Resilience case study). Such systems are already reaching millions of smallholder farmers, reducing crop losses and strengthening resilience to increasingly frequent climate shocks.

From Vision to Reality

The vision for AI in LMICs is one of transformation driven by state capacity, not consumption. If these approaches can be scaled, the impact on economies and society will be substantial. Where governments combine political leadership with the ability to deliver, proportionate governance and disciplined technology deployment, AI can do more than automate existing processes: it can enable new models of inclusive growth and public-service delivery.

This vision is not utopian. Elements of it are already visible: Rwanda’s data-driven service delivery, Estonia’s digitally enabled government, India’s DPI, Kenya’s mobile-first innovation ecosystem. What distinguishes successful countries is not their wealth or technological sophistication but sustained political commitment and execution over time.

AI for Climate Resilience

The Philippines Landslide Risk-Mapping Initiative

The Philippines faces some of the world’s most severe climate risks, with nearly 9 million people living in areas vulnerable to rainfall-induced landslides. In partnership with TBI, the Department of Environment and Natural Resources piloted an AI-powered landslide inventory and probability-mapping tool to strengthen disaster preparedness and climate resilience.

Using high-resolution satellite imagery and machine learning, the system automatically detected and catalogued landslides across the country, expanding coverage from 0.3 per cent to 19 per cent of national land area and cutting analysis time from a week to just two hours. The model also enabled landslide-probability mapping, making it possible to update landslide probability and susceptibility maps as frequently as after every typhoon, instead of at the current five-year cadence.

Beyond the technology itself, the initiative reimagines how the state can use AI to anticipate rather than react, shifting disaster-risk management from response to prevention. An essential component of the initiative is a policy roadmap that aims to link early-warning data to risk-weighted public budgeting, slope-specific building codes and new climate-financing mechanisms.

The value of this initiative lies not only in improved risk mapping, but in how the government chose to integrate AI into budgeting, regulation and planning, turning predictive capability into institutional action.


Chapter 3

Why AI Impact Fails to Scale

Across LMICs, AI pilots are multiplying and political ambition is high. But despite potential and pockets of success, AI has not yet translated into population-scale impact across economies or public services.

This gap reflects a set of mutually reinforcing constraints. Weak state capacity limits coordination and procurement; fragile digital, data and energy foundations raise costs and deepen dependency on other countries; adoption stalls at the frontline without skills, incentives and workflow integration; and governance limitations constrain deployment in sensitive sectors. Together, these dynamics increase risk aversion and keep AI initiatives in LMICs as fragmented, externally dependent pilots, preventing promising applications from scaling.

Weak State Capacity and Coordination

The most persistent barriers to scaling AI in LMICs are limited prioritisation and alignment at the centre of government. Responsibility for AI and digital transformation is often dispersed across ministries and agencies or outsourced to development partners. No single authority is empowered to coordinate priorities, pool demand or drive delivery across sectors. Coordination failures can also occur between central and subnational levels. These coordination challenges affect governments’ ability to deliver.

Where central coordination mechanisms do exist, they are not always configured to enable delivery. Units established to oversee digital or AI agendas are sometimes given formal authority without the delivery capability required to execute, or operational responsibility without the mandate, incentives or skills to drive change across government. In these cases, coordination can become a bottleneck – slowing implementation, blurring ownership and discouraging initiative – rather than a catalyst for reform. The result is fragmented effort, delayed execution and pilots that struggle to scale.

These structural challenges manifest in long procurement cycles that far exceed rapid AI-development timelines, duplicated pilots across ministries, and heavy reliance on vendors for system design, implementation and evaluation due to limited in-house capability.

Fragile Digital, Data and Energy Foundations

AI systems depend on data, compute, connectivity and reliable power. Across most LMICs, these foundations remain uneven and fragile. Sub-Saharan African countries experience an average of 87 blackouts a year, compared with around one a year in North America.[_] Public-sector data is siloed and non-interoperable, and domestic compute capacity is limited. Unreliable energy, cloud costs and restrictive contracts lock governments and startups into external ecosystems.

Without secure digital and energy foundations, governments struggle to host shared data platforms, localise AI models or negotiate favourable terms for access to compute. While some reliance on external infrastructure is unavoidable, weak foundations limit governments’ bargaining power over cost, data use and exit options. Over time, this leads to dependence on offshore providers and increases long-term fiscal, operational and sovereignty risks.

These infrastructural gaps are compounded by financing constraints. Capital for DPI, compute and energy remains concentrated in advanced economies and is susceptible to economic downturns. In Africa, technology investment fell by more than 50 per cent between 2023 and 2024, and South-East Asia saw a 29 per cent decline.[_] Where institutions, planning and procurement are weak, AI infrastructure is priced as high-risk, limiting investment even where demand exists. As a result, viable projects struggle to secure financing, reinforcing a cycle in which constraints raise costs and deter the very investment needed to scale – the mark of an ecosystem not yet adapted to technology integration.

Adoption Bottlenecks at the Point of Delivery

Even where infrastructure is available, AI adoption often stalls at the point of delivery. Health systems, schools, agricultural services and local administrations face acute capacity constraints that limit uptake. Shortages of skilled professionals, limited training and poor integration of AI tools into existing workflows all undermine adoption. These workers often operate under heavy workloads and tight accountability frameworks, with limited tolerance for additional risk. If AI tools expose frontline staff to new forms of scrutiny without clear benefit, or are otherwise difficult to integrate, they are unlikely to be adopted in practice.

These bottlenecks mirror wider evidence on skills and information frictions that shape how AI is adopted at the front line, particularly in high-pressure public-service environments.[_]

In the absence of clear ownership, incentives and operational support, AI systems will struggle to move from pilot stage into routine use.

Low Trust and Limited Governance Capability

Finally, limited governance capability and low trust constrain AI deployment in sensitive sectors. While risks related to bias, misuse and opacity are real, testing, monitoring and adaptation are often not standard practice. Oversight mechanisms are often weak, and global governance frameworks developed for high-income contexts are imported into LMICs without adaptation to local capacity or social priorities.

Public-trust research shows that acceptance of AI depends less on abstract principles and more on visible safeguards, transparency, human oversight and possibilities for redress. Limited public literacy and awareness of how AI systems work further compound these challenges, making it harder for governments to communicate benefits, manage risks or secure consent for deployment. Where safeguards are unclear or poorly understood, governments face political resistance and regulatory backlash, even when systems could deliver clear public value. Governance approaches that enable safe experimentation and actively build legitimacy through transparency, engagement and learning are needed.

From Diagnosis to Action

The challenge is not a shortage of ideas or ambition, but the absence of systems, capability and coordination to move from pilots to population-scale delivery. These constraints are interdependent rather than sequential: weaknesses in one area reinforce failures in others. The chapters that follow directly address these binding constraints: strengthening state capacity, securing digital and energy foundations, building an ecosystem, and governing AI in ways that sustain trust as systems scale.


Chapter 4

Strengthen Central Government Capacity to Lead Digital Transformation

Countries that make the fastest progress in digital transformation share one defining feature: a capable centre of government that provides direction, coordination and follow-through across the state.

Building this capability does not require creating large new institutions overnight. Most LMIC governments operate under tight fiscal constraints, making prioritisation essential. Progress depends on alignment and priority at the centre of government and redesigning how the state plans, spends and delivers, rather than layering AI onto fragmented systems (Figure 3).

Recommendation: Pin responsibility for transformation at the top of the political agenda.

Governments should assign clear political and administrative ownership for AI-enabled transformation to key decision-makers at the centre of government, with the authority to set priorities, coordinate across ministries and hold delivery accountable for outcomes achieved rather than strategies or pilots launched.

Across various political and administrative systems, progress has depended on whether the centre of government has the authority, capability and mandate to translate political intent into delivery. Whether through the UK’s AI Safety Institute within the Department for Science, Innovation & Technology, Singapore’s Smart Nation and Digital Government Office, the IndiaAI Mission in India’s Ministry of Electronics and Information Technology, Rwanda’s Ministry of ICT and Innovation, or Saudi Arabia’s Data & AI Authority, the common pattern is strong central direction that enables coherence, prioritisation and accountability across government.

In practice, this agenda is most effective when governments begin with a limited number of national priorities tied to concrete outcomes that reflect domestic constraints, service-delivery gaps and political objectives (see Odisha case study). For example, the Albanian government prioritised the use of AI to fundamentally retool the machinery of the state, turning EU legal harmonisation from a multi-year administrative bottleneck into a scalable, AI-enabled delivery function. This needs-driven approach aided alignment across government and accelerated deployment. Establishing such a mandate can usually be done within months and at relatively low fiscal cost, as the primary investment is political and organisational rather than financial.

Over-centralisation can sideline sectoral expertise or weaken ownership among line ministries and subnational actors responsible for execution, while weak central authority allows pilots and priorities to proliferate without scale. These dynamics can be reinforced by the political economy of line ministries and agencies, which may have limited incentives to share data, cede control or align behind common standards.

Addressing this risk requires a whole-of-government framework that combines central coordination and escalation authority with clear custodianship for line ministries (see Ukraine case study). The centre must set priorities and unblock barriers, while ministries retain ownership of data assets, delivery mandates and accountability for outcomes. Without this balance, even strong political momentum at the centre will struggle to translate into sustained adoption on the ground.

Recommendation: Build digital readiness across the government.

Governments should embed AI and digital capability into core state systems, such as those used for planning, budgeting, procurement and performance management, sequencing reform to digitalise high-volume processes and digitise foundational data before deploying advanced AI tools.

Predictive analytics can improve resource allocation and detect service gaps in real time. Automating routine processes such as payroll checks, procurement verification or case triage can reduce administrative burden and free up staff time and fiscal space. Many of these gains depend less on advanced AI than on well-digitalised processes and reliable data, with AI increasing their scale, speed and adaptability.
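A minimal Python sketch illustrates why such gains rest on digitised data more than on advanced AI. The records, field names and thresholds below are hypothetical; the checks are ordinary rules that become possible once payroll data is clean and machine-readable, with AI later adding scale and adaptability.

    # Illustrative sketch only: records, field names and thresholds are
    # hypothetical. The point: once payroll data is digitised and reliable,
    # even simple statistical checks catch issues; AI adds scale later.

    from collections import Counter

    payroll = [
        {"employee_id": "E-101", "month": "2026-01", "amount": 820.0},
        {"employee_id": "E-101", "month": "2026-01", "amount": 820.0},   # duplicate
        {"employee_id": "E-102", "month": "2026-01", "amount": 790.0},
        {"employee_id": "E-103", "month": "2026-01", "amount": 4100.0},  # outlier
    ]

    # Check 1: duplicate payments to the same employee in the same month.
    counts = Counter((p["employee_id"], p["month"]) for p in payroll)
    duplicates = [key for key, n in counts.items() if n > 1]

    # Check 2: amounts far above the median (crude outlier rule).
    amounts = sorted(p["amount"] for p in payroll)
    median = amounts[len(amounts) // 2]
    outliers = [p for p in payroll if p["amount"] > 3 * median]

    print("duplicate payments:", duplicates)
    print("outlier payments:", [p["employee_id"] for p in outliers])

Even this crude pair of checks is out of reach while records sit in paper files, which is why digitalisation precedes intelligence.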

Experience from countries such as Kenya and Rwanda, where TBI has supported central delivery dashboards, shows that integrating AI into core management systems can shift government from static reporting to continuous learning.

Operationally, this work is usually led by ministries of finance or planning, working closely with the centre of government and digital authorities. Costs are often lower than the cumulative cost of fragmented legacy-IT modernisation or repeated, disconnected pilots. Attempting to digitalise all functions simultaneously increases complexity and failure risk. Effective sequencing starts with digitalising high-volume, low-discretion processes and core data systems, before layering AI into more complex planning, regulatory and decision-making functions.

The main risk is attempting to implement AI where digital readiness is not yet in place – leading to the automation of poor-quality data or inefficient processes, and to over-customisation that limits scalability. Leaders must therefore choose to embed AI deeply only where digital foundations are sufficiently strong and public value is clear.

Recommendation: Strengthen technical procurement to enable delivery and create demand.

Governments should build in-house technical capability to reform public procurement so they can buy AI solutions to solve clearly defined service problems, manage vendors and contracts effectively, and use public demand to shape markets toward scalable, interoperable solutions.

Political leadership and digital foundations are necessary but not sufficient to deliver AI impact. Delivery can be supported by the government’s central procurement capability, driven by the problems to be solved rather than by the solutions on offer. This requires the ability to specify problems clearly, select appropriate technologies, negotiate contracts that allow systems to evolve and avoid vendor lock-in. Without this capability, AI systems will struggle to integrate across public services and will remain fragmented.

Effective procurement does more than enable delivery; it also creates demand. In LMICs, markets alone rarely deliver AI adoption at the speed or scale required to improve public services. Governments are often the largest and most credible buyers, and how they spend shapes which solutions scale, which firms survive and how quickly ecosystems develop.

Governments need mechanisms to assess whether systems perform effectively in local contexts, not just in laboratory settings. Context-specific benchmarks and performance metrics can guide procurement decisions and prevent the scaling of underperforming systems. Initiatives such as AfroBench, which evaluates large language models across 64 African languages, illustrate how benchmarking can support better purchasing decisions and real-world delivery.[_]
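As an illustration of how such benchmarks could feed purchasing decisions, the Python sketch below applies a per-language performance floor before ranking candidate systems. The vendors, scores and threshold are hypothetical and are not drawn from AfroBench’s published results; the pattern, not the numbers, is the point.

    # Illustrative sketch only: vendor names, scores and the threshold are
    # hypothetical, as might be gathered during a structured procurement trial.

    candidate_scores = {
        "vendor_a_model": {"swahili": 0.81, "amharic": 0.62},
        "vendor_b_model": {"swahili": 0.74, "amharic": 0.78},
    }

    MINIMUM_PER_LANGUAGE = 0.70  # floor: no priority language may fall below this


    def eligible(scores: dict[str, float]) -> bool:
        """A model qualifies only if it meets the floor in every priority language."""
        return all(s >= MINIMUM_PER_LANGUAGE for s in scores.values())


    shortlist = {
        name: sum(scores.values()) / len(scores)  # mean score across languages
        for name, scores in candidate_scores.items()
        if eligible(scores)
    }

    # Rank the shortlist by average performance for contract negotiation.
    for name, avg in sorted(shortlist.items(), key=lambda kv: -kv[1]):
        print(f"{name}: mean accuracy {avg:.2f}")

A floor of this kind prevents a model that excels in one dominant language from winning a contract while failing the very communities the service is meant to reach.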

The main implementation challenge is balancing coordination with ministerial ownership. Over-centralisation can slow innovation and generate resistance, while weak central authority leads to fragmentation. Early efforts should focus on defining problems, setting standards and agreeing outcome metrics. As systems mature, responsibility should shift decisively to ministries, with the centre of government retaining a role in assurance and continuous improvement.

Used well, procurement is not a back-office function but a core component of effective state capacity.

Odisha, India

Building State Capacity for AI-Driven Delivery

Over the past decade, the economic growth of Odisha, a state on India’s eastern coast, has outpaced the national average, driven by advancements in its IT and manufacturing sectors. Building on this trajectory, the state is now seeking to apply a similar delivery-led approach to social and developmental outcomes by embedding AI across government.

Odisha’s ambition is to position itself as a national hub for AI deployment and innovation, using AI as a practical tool to accelerate progress in priority sectors including health care, education, agriculture, climate resilience and governance. Rather than treating AI as a standalone digital initiative, the state has approached it as a whole-of-government reform agenda.

With support from TBI, Odisha has developed a state-level AI vision and strategy aligned with India’s national AI mission. As part of this process, more than 30 high-impact AI use cases were identified and prioritised across key departments, alongside detailed implementation roadmaps to move from pilots to deployment. To support execution, strategic levers for ecosystem building were defined, with clear targets and action plans covering infrastructure readiness, workforce skills and policy enablers.

Odisha’s approach illustrates how strong political leadership can translate AI ambition into delivery by focusing on prioritisation, coordination and institutional capability, rather than frontier technology alone.

Figure 3

The AI-adoption matrix

Source: TBI

Note: These categories represent distinct implementation patterns rather than a maturity progression. Context-appropriate strategies may begin in any quadrant, though whole-of-government transformation offers the greatest potential for inclusive, sustained impact at scale.

Ukraine

Using AI to Strengthen Digital Capacity and Service Delivery in Crisis

Political leaders in Ukraine set a clear national vision for digital transformation, backed by an empowered Ministry of Digital Transformation with the authority to coordinate initiatives, set interoperability standards and drive cross-government adoption.

These institutional arrangements enabled the creation of delivery mechanisms that integrate AI into everyday public services. Through platforms such as Diia and the new AI-powered assistant Diia.AI, Ukraine now embeds automation and AI-driven guidance into processes ranging from information access to service verification. This approach reduces administrative steps, improves user experience and supports more consistent delivery across ministries.

Ukraine’s experience demonstrates that scalable AI deployment depends less on individual tools and more on political leadership, coherent institutional architecture and dedicated delivery capability. With strong institutional foundations, AI deployment can accelerate across government. These building blocks allow governments to move from experimentation to routine use of AI in core public services.


Chapter 5

Secure Digital Infrastructure and Ensure Widespread Access

For countries that have placed AI high on the political agenda, impact at scale depends on having the physical and digital infrastructure to support it. Connectivity, compute, data and energy form the backbone of the AI economy, yet access to these foundations remains deeply unequal across the globe. Without deliberate investment in these core systems, AI initiatives struggle to move beyond pilots and deliver sustained public value.

Without such foundations, governments cannot run sensitive workloads, deliver AI-enabled public services at scale or assure the safety and accountability of the systems they rely on. Leaders therefore face a decision: which layers of the AI stack must be secured domestically, which can be accessed regionally and which can be outsourced without undermining public value.

Recommendation: Expand connectivity and inclusion.

Governments should treat digital connectivity as a strategic public good, targeting underserved regions and using public finance, demand aggregation and partnership models to de-risk last-mile deployment.

Equitable access to AI begins with connectivity. Broadband networks, cloud infrastructure and local data exchanges form the building blocks of digital opportunity. In most LMICs, however, existing investment models continue to concentrate resources in commercially viable urban corridors, while high capital costs, weak demand aggregation and unresolved last-mile economics render rural deployment unprofitable.

Left to market forces alone, connectivity will expand where returns are highest, not where public value is greatest. Governments must therefore decide whether connectivity is treated as a byproduct of private investment or as a strategic public good. Targeting broadband funds at low-return regions, de-risking rural deployment through development banks and aggregating public-sector demand are not technical fixes but deliberate policy choices that maximise participation in the digital economy.

Alongside government, mobile operators and technology providers have demonstrated their role in this transition. Initiatives such as those convened by the GSMA bring together Africa’s leading mobile operators, cloud providers and AI developers to improve backbone connectivity, share infrastructure and align investments around inclusive AI use cases.[_] Microsoft’s connectivity programmes similarly focus on closing coverage gaps, having provided internet access to more than 117 million people across Africa.[_] Since 84 per cent of broadband connections in LMICs occur through mobile devices, compared with 57 per cent globally, such advances will be game-changing.

In this context, the role of the state is not to replace private providers, but to convene, regulate and de-risk. Governments can convene operators, cloud providers and financiers around shared coverage goals; regulate access, infrastructure sharing and spectrum use to reduce duplication and lower costs; and de-risk investment by using public finance, guarantees or anchor institutions to make marginal deployments viable. These interventions shape the conditions under which private capital flows, rather than substituting for it.

Public-service facilities can further amplify inclusion when governments choose to use them as digital anchors. Schools, libraries and hospitals can double as shared digital-access points, hosting connectivity and computing resources for communities. Successful examples such as Kenya’s Community Learning and Resource Centres[_] and India’s Common Services Centres[_] show that public infrastructure can catalyse AI readiness and adoption when paired with private-sector innovation.

Recommendation: Gain access to compute capacity via local and regional hubs.

Governments should secure affordable and resilient access to compute through a layered strategy that combines limited domestic capacity for sensitive and mission-critical workloads with regional or public–private compute hubs, rather than pursuing premature national self-sufficiency.

Compute power is a strategic asset underpinning innovation, economic competitiveness and national resilience. For many governments, dependence on foreign cloud providers creates structural vulnerabilities and limits the ability to deploy AI systems aligned with domestic priorities.

Building national-scale data centres or supercomputers is neither feasible nor efficient for most countries. Regional collaboration offers a more realistic pathway. Initiatives such as the European High Performance Computing Joint Undertaking (EuroHPC JU),[_] which enables smaller countries like Estonia to access high-performance computing through systems such as the Finland-based supercomputer LUMI, demonstrate how shared infrastructure can provide secure access to advanced compute without full national ownership.

Public–private hub models also show how regional access can be expanded at scale. The partnership between Groq and Aramco Digital[_] to establish a large AI-inference facility in Saudi Arabia illustrates how countries with strong data-centre ecosystems can host regional compute hubs. This approach can be secured by keeping governments in control of their data, supported by strong encryption and binding contractual commitments.

For mission-critical functions – including public-service delivery, security and emergency response – governments should retain some domestic compute capacity to ensure operational continuity. This layered approach balances sovereignty, resilience and cost, while reducing the fiscal burden on individual states. Sequencing is critical: premature national investments without reliable foundations or sustained demand can risk creating stranded assets.
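The routing logic of such a layered strategy can be stated simply. The sketch below is a minimal illustration assuming three workload categories; the categories and destinations are illustrative, and real classification would follow national data-protection and security rules.

```python
# A minimal sketch of layered compute placement; categories and
# destinations are illustrative assumptions, not a reference architecture.
from enum import Enum

class Sensitivity(Enum):
    MISSION_CRITICAL = "mission_critical"  # e.g. emergency response, security
    SENSITIVE = "sensitive"                # e.g. citizen records, health data
    GENERAL = "general"                    # e.g. research, open analytics

def place_workload(sensitivity: Sensitivity) -> str:
    """Route a workload to the appropriate layer of the compute strategy."""
    if sensitivity is Sensitivity.MISSION_CRITICAL:
        return "domestic"         # retained national capacity for continuity
    if sensitivity is Sensitivity.SENSITIVE:
        return "regional_hub"     # shared hub, with encryption and contracts
    return "commercial_cloud"     # lowest-cost reliable option for the rest

assert place_workload(Sensitivity.SENSITIVE) == "regional_hub"
```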

Recommendation: Build interoperable national data infrastructure.

Governments should modernise foundational data systems by investing in interoperable registries, secure data-exchange layers and priority data sets that reflect local languages and contexts, enabling AI systems to be adapted, governed and scaled for domestic public services.

AI-driven systems depend not only on connectivity and compute, but on high-quality, accessible and interoperable data. Often, core data sets such as population registries, land records, health-information systems and energy telemetry remain fragmented across ministries, incomplete and designed for periodic reporting rather than real-time use. These systems were not built to support continuous analytics, model training or system-level learning, all of which are essential to deploy context-specific AI.

Language compounds this challenge. Around 49 per cent of all online content is in English,[_] while many LMICs operate primarily in languages that are sparsely represented in global training data. As a result, widely used AI models perform best in a narrow set of linguistic and cultural contexts, and struggle to generalise to local settings (“generalisation” being a model’s ability to perform well in contexts beyond those it was trained on). Building effective AI systems for under-represented languages and contexts is therefore not simply a matter of translation. It requires sustained investment in centralised data collection, annotation and governance that reflect local linguistic, social, cultural and institutional norms, including how knowledge is produced, social rules are interpreted and policy decisions are made.

Governments should therefore treat national data systems as critical infrastructure. This means modernising foundational registries, establishing secure data-exchange layers that enable trusted sharing across agencies, and investing in platforms that support real-time data flows for public services. This will accelerate the research and scaling of local models (with InkubaLM, a model trained on five African languages,[_] as one example) built for local users with their social and economic realities in mind. Investment in data sets should start with demand rather than supply, prioritising one or two strategic data sets.
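To make the data-exchange idea concrete, below is a minimal sketch of the kind of purpose-bound, audit-logged authorisation check such a layer might enforce. The agency names, data sets and registry structure are illustrative assumptions, not a reference to any existing system.

```python
# A minimal sketch of a purpose-bound, audit-logged data exchange.
# Agency names, data sets and the registry structure are illustrative.
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical registry: which agency may query which data set, for what purpose.
AUTHORISED = {
    ("health_ministry", "population_registry", "vaccination_outreach"),
}

@dataclass
class ExchangeRequest:
    requesting_agency: str
    data_set: str
    purpose: str
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

audit_log: list[ExchangeRequest] = []

def authorise(request: ExchangeRequest) -> bool:
    """Check a request against the registry; log it whether granted or not."""
    audit_log.append(request)  # trusted sharing needs a complete audit trail
    key = (request.requesting_agency, request.data_set, request.purpose)
    return key in AUTHORISED
```

Binding every cross-agency query to a declared purpose and a permanent audit trail is what turns ad hoc data sharing into the trusted exchange layer described above.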

When paired with national or regional compute hubs, shared data infrastructure becomes a powerful enabler of innovation. Hosting priority data sets and open models within secure national or regional environments allows researchers, startups and public agencies to develop solutions tailored to local needs to maximise AI impact.

Recommendation: Build reliable energy systems aligned with AI deployment.

Governments should integrate AI-driven digital demand into national energy planning and align investments in power generation, grids and data infrastructure, using regional energy markets and renewable-powered infrastructure where domestic reliability is insufficient.

Half of the world’s data-centre energy demand is expected to come from AI by 2030. Without deliberate planning, this will place additional strain on already fragile grids. For AI to scale sustainably, countries need access to cheap, reliable and abundant power, alongside grids capable of delivering it consistently.

Digital and energy systems are often developed in isolation, but considering them together is essential. The most sustainable pathway is a dual focus: expanding firm generation to ensure adequate baseload supply, and modernising grid infrastructure to improve reliability, flexibility and storage capacity.

Where these conditions cannot yet be met domestically, governments should take a pragmatic approach to compute access. Rather than investing prematurely in national data centres powered by unreliable or high-cost electricity, countries can rely on remote cloud infrastructure for non-sensitive workloads or participate in regional compute hubs located in countries with stronger energy systems and surplus generation capacity. Regional energy markets such as the Southern African Power Pool already provide a foundation for this approach by stabilising supply and enabling access to consistent baseload power for energy-intensive workloads.

Investing in smart grids and energy-efficient data centres can reduce operational costs and emissions while helping to address the wider energy deficits that constrain economic development in many LMICs. Countries such as Egypt and Morocco[_] are already demonstrating what is possible through renewable-powered data facilities that combine geothermal and solar energy, showing how AI infrastructure can accelerate – rather than compete with – the transition to more reliable power systems. This has been enabled by deliberate regulatory reform, credible utility governance, long-term power-purchase arrangements and alignment between energy, digital and industrial policy. When approached in this way, the AI agenda can serve as a catalyst for long-overdue energy-sector reform, strengthening fiscal stability, service reliability and long-term growth.


Chapter 6

Create Ecosystems to Accelerate AI Adoption Across Sectors

Even where governments have built state capacity and invested in digital infrastructure, AI impact can fail to spread beyond isolated deployments. The binding constraint is no longer only technology or access, but whether local ecosystems are organised to adopt, adapt and routinely use AI at scale.

Accelerating adoption is therefore a question of implementation and incentives – specifically whether AI can be embedded into real delivery systems in health, education, agriculture and public administration. Impact depends not on experimentation alone, but on aligning incentives, workflows and accountability so AI becomes part of routine practice rather than a standalone tool.

Recommendation: Invest in talent and research.

Governments should invest in both broad-based AI literacy for frontline workers and selective research capacity aligned with national priorities, sequencing mass skills development first to ensure adoption, relevance and sustainability.

For LMICs, this requires sustained investment in talent and research capability to ensure AI systems can be developed, adapted and used effectively in local contexts. Currently, only 10 per cent of global research and development takes place in LMICs, limiting domestic capacity to shape AI applications and increasing reliance on imported models. Expanding the pipeline of skilled practitioners – across development, deployment and oversight – is therefore essential for both innovation and adoption.

Some countries illustrate what is possible. Estonia is aiming for 80 per cent elementary AI- and data-skills attainment within broader society by 2030, and Singapore has trained 243,000 individuals in AI via government upskilling programmes.[_] Achieving similar progress requires strengthening universities and technical colleges, supporting AI-focused degree programmes, and offering scholarships, research grants and faculty partnerships with leading global institutions. At the same time, LMICs need domestic research ecosystems capable of generating locally relevant innovation – from language technologies and climate modelling to health and agricultural applications. Investing in research centres, national labs and joint industry–academia programmes helps ensure countries are not solely dependent on imported models, and can shape and adapt AI to their own priorities and contexts.

Beyond advanced expertise, widespread applied skills are essential. Realising the potential of AI requires not just technical innovation but deliberate efforts to embed intelligence into practical work and elevate human capability across sectors. Frontline workers in sectors such as health, agriculture, education, manufacturing and public administration must understand how to use AI tools effectively and responsibly. Vocational programmes, digital-skills training and continuous professional development can help technicians, teachers, extension workers and civil servants integrate AI into daily practice, ensuring that the benefits of AI are spread more widely across the labour market.

In practice, skills-and-research investment is led by education and labour ministries, coordinated by the centre of government to ensure alignment with adoption priorities. The central trade-off is depth versus reach: investing heavily in elite research without widespread applied capability risks fragmentation, while broad training without institutional anchors limits sustainability. Broad-based literacy and applied skills deliver early returns, while advanced research capacity should be built selectively where it supports priority use cases.

Recommendation: Scale high-impact AI deployments through localised technology.

Governments should prioritise scaling AI deployments that address concrete service bottlenecks and are trained on local data, embedded in real delivery workflows and assessed early against clear performance thresholds, rather than sustaining open-ended pilots.

High-impact, citizen-facing deployments in sectors such as health diagnostics, education delivery and agriculture are essential, but need to be designed from the outset for integration, learning and scale.

This means grounding deployments in service needs and local conditions. AI-assisted disease screening in rural clinics, for example, must be trained on local data, operate in local languages and be embedded in referral systems rather than run as standalone tools. Multilingual interfaces are core to inclusion and adoption. Governments should procure such tools with clear performance metrics and alignment to national priorities, ensuring that successful systems are integrated into national workflows rather than remaining fragmented pilots.

The key trade-off is experimentation versus commitment: excessive piloting undermines credibility, while premature scaling without learning can amplify harm. This is where expert technical procurement can guide the process.

Recommendation: Leverage open source to accelerate innovation.

Governments should support open-source models, tools and data sets as shared digital infrastructure, using procurement, funding and stewardship to reduce costs, enable localisation and avoid long-term dependence on proprietary systems.

Rather than requiring countries to compete at the frontier, open models, tools and data sets allow them to focus on diffusion, adaptation and practical problem-solving – lowering costs, avoiding vendor lock-in, and enabling AI systems to be tailored to local languages, institutions and public-service needs.

As outlined in TBI’s recent paper on open source, the real opportunity lies not in building standalone national models, but in cultivating open-source ecosystems that turn AI into usable infrastructure. Governments can support this by enabling access to open models, investing in reusable software tools and developing shared data sets in areas such as agriculture, health, education and climate resilience. Open language models, agricultural data commons or health-data sandboxes can provide platforms that local researchers, startups and public agencies can build on. Partnerships with universities, civil society and the private sector are essential to ensure these assets are high quality, secure and sustainably maintained. For example, edtech company EIDU has open-sourced its codebase to support affordable and scalable digital-learning solutions.[_]

Open source also strengthens governance and resilience. Transparent systems are easier to audit, adapt and integrate across agencies, helping governments retain control over how AI is deployed. Open standards improve interoperability, while shared codebases reduce long-term dependence on single suppliers. To realise these benefits, governments can fund local developer communities, support the maintenance of critical open-source infrastructure, host open-data repositories and prioritise open standards in public procurement.

In practice, successful open-source initiatives are often led by digital or innovation ministries working closely with universities, local developer communities and regional partners. Done well, these efforts can build durable local opportunities, support regional collaboration and ensure that AI adoption in LMICs delivers tangible public value rather than deepening dependency.

Recommendation: Structure public–private partnerships and demand-side policies.

Governments should work with private firms to access infrastructure, platforms and expertise while setting clear conditions on interoperability, data governance, skills transfer and exit, ensuring partnerships build domestic capability rather than entrench dependency.

Private firms control much of the infrastructure, platforms and expertise required to deploy AI at scale, and partnerships can accelerate adoption far faster than public provision alone. They also have the resources to test and validate models with ethical guardrails.[_] The strategic challenge for governments is therefore not whether to work with industry, but how to do so in ways that build domestic ecosystems, create demand and avoid long-term dependency.

In South Africa, Vodafone has delivered AI-enabled agriculture extension services to smallholder farmers through mobile networks, demonstrating how private platforms can extend services at scale.[_] At the same time, infrastructure providers including Amazon Web Services, Teraco and Dimension Data have committed more than $1 billion to expand data-centre capacity.[_] Industry bodies such as the South African Artificial Intelligence Association (SAAIA) demonstrate the value of neutral platforms that convene startups, large firms, researchers and government agencies around shared standards, skills development and knowledge exchange.[_]

Similarly, Singapore’s Triple Helix partnership connects the research community, industry and government to facilitate research collaboration, the rapid commercialisation of fundamental AI research and the deployment of AI solutions.[_] Together, these examples show that private and hybrid institutions play a critical role in extending infrastructure and resources. Their impact depends on active government engagement to shape demand, align incentives and steer adoption towards public goals.

The central trade-off in industry partnership is speed versus agency. Poorly structured partnerships can deliver rapid deployment but entrench vendor lock-in and external dependence. Well-designed partnerships embed requirements for interoperability, data access, skills transfer and exit, allowing governments to rebalance towards greater domestic provision as ecosystems mature.


Chapter 7

Develop Governance to Build Trust, Enable Growth and Ensure Accountability

As AI solutions move from pilots to routine use, governance becomes central to whether they deliver real public value. Done well, governance supports growth by lowering transaction costs, enabling fair competition and facilitating cross-border interoperability. Done badly, it produces either a vacuum, where there is an absence of adequate regulation to facilitate AI-industry growth and safe adoption by the public, or a form of governance theatre, where laws and principles outpace enforcement capacity and undermine credibility.

For LMICs, the objective is a practical model of governance that makes AI safe to scale and valuable in delivery. Effective governance enables trust, legitimacy and adoption at scale by combining technical, procedural and social tools rather than relying on regulation alone.[_]

Recommendation: Pursue a sector-specific, evidence-informed approach to AI governance.

Governments should embed AI governance within sector regulators and delivery ministries, grounding oversight in evidence from real deployments and focusing enforceable requirements on high-impact public functions, in alignment with national and international AI frameworks.

For LMICs, effective governance starts with strengthening existing delivery systems and assurance mechanisms, updating existing legal foundations and grounding policy in evidence from local deployments.

AI governance becomes most effective when it attaches to where decisions are made and services are delivered, not to abstract definitions of AI systems. In practice, this means empowering existing regulators and ministries to set enforceable requirements for the AI they procure or deploy in their domains, including health, education, agriculture, finance, social protection and public administration.

A practical starting point is to articulate a small number of nationally endorsed governance commitments, such as safety, fairness, accountability and security, but to operationalise them sector by sector. What “safe” means for a clinical-screening tool differs from what it means in an education-placement system, and the evidence required before scaling must reflect those differences.[_] This approach is both more feasible for states with limited regulatory capacity and more credible for citizens because it anchors governance in real service contexts rather than abstraction.

Sector-specific governance also supports growth and diffusion. Clear, predictable rules reduce uncertainty for providers, strengthen procurement leverage for governments and help ensure that AI adoption strengthens domestic capability rather than deepening dependency.

In some cases, horizontal laws that apply across all sectors and use cases can also support effective governance – for example, transparency requirements that enable regulators to appraise more effectively whether the systems governments deploy are safe and fit for purpose. Their purpose should be to set a small set of baseline obligations, such as serious-incident reporting or transparency expectations for high-impact deployments,[_] while providing shared tools and guidance that sector regulators can draw on.

Singapore

Practical, Non-Prescriptive AI Governance for Scale

Singapore’s approach to AI governance illustrates how trust and scale can be built on top of enforceable baseline regulation without comprehensive, one-size-fits-all AI laws. Its Model AI Governance Framework for Generative AI emphasises practical guidance and assurance over prescriptive regulation, focusing on how AI systems are deployed and managed in real contexts.[_]

This framework operates alongside existing, legally binding regimes – including data-protection law and sectoral regulatory powers – which provide enforceable requirements for transparency, accountability and redress. Rather than relying on horizontal rules, the framework sets out key governance dimensions such as accountability, safety, testing and assurance, and content provenance, while emphasising shared responsibility across the AI value chain and continuous evaluation as systems evolve. High-level principles are paired with operational guidance that sector regulators, delivery ministries and deploying organisations can apply in practice, including through procurement requirements, assurance processes and supervisory engagement. This approach prioritises operationalisation over abstraction, helping to avoid governance theatre and reduce uncertainty for deployment.

Singapore’s experience demonstrates how sector-specific, evidence-informed governance and assurance can make AI safe to scale while supporting innovation and adoption, leaving room to strengthen formal requirements over time as risks and capabilities evolve.

Recommendation: Build assurance and accountability capacity that makes AI safe to scale.

Governments should strengthen or establish independent-assurance bodies with clear mandates, technical expertise and enforcement powers to test, monitor and correct AI systems over time, and to hold both public and private actors accountable for harm.

For most LMICs, the binding governance constraint is not the absence of principles, but the absence of assurance capability. Governments need the ability to seek evidence, validate performance, detect failures in deployment and respond when things go wrong.

Effective assurance requires intervention at multiple levels of the AI ecosystem. In practice, this means: governance tools that address organisational processes and accountability structures within deploying agencies and suppliers; product- and application-level requirements tied to specific service workflows; and operational safeguards that shape how systems are used in practice, including access controls and user behaviour. This reflects the reality that risk does not sit in the model alone, but emerges from deployment contexts, incentives, institutional weaknesses and downstream use.

Accountability is also distributed across an AI supply chain: model developers, integrators, procuring agencies, frontline users and cloud providers. Governance must allocate responsibilities across this value chain and enforce them through procurement contracts, licensing conditions and sector regulation, not through voluntary commitments alone. In practice, contracts should require documentation, audit rights, update controls, security commitments, incident reporting and clear remediation pathways.

Assurance should be evidence-informed by design. Governments should treat continuous monitoring and evaluation of AI-application performance as core governance functions, not afterthoughts. This is particularly important for models that are frequently updated and experience performance shifts over time.

A practical way to make this operational is through cross-government decision checkpoints that determine whether an AI system can move from pilot to wider deployment. Governments should require proportionate evidence that it improves outcomes versus the status quo, meets minimum safety and equity thresholds in the target population, can be monitored in deployment, and has a clear incident-response and redress path. Where evidence is mixed, governments should adapt the intervention rather than defaulting either to prohibition or unchecked scaling.
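As a minimal illustration of such a checkpoint, the sketch below encodes the evidence tests above as an explicit scale/adapt/stop decision. The field names and thresholds are illustrative assumptions, not recommended values.

```python
# A minimal sketch of a pilot-to-scale decision checkpoint; the evidence
# fields and thresholds are illustrative assumptions, not recommended values.
from dataclasses import dataclass

@dataclass
class PilotEvidence:
    outcome_gain: float   # improvement vs the status quo (0.12 = 12 per cent)
    incident_rate: float  # serious incidents per 1,000 uses
    equity_gap: float     # gap between best- and worst-served groups
    monitored: bool       # live performance monitoring in place?
    redress_path: bool    # clear incident-response and redress route?

def scaling_decision(e: PilotEvidence) -> str:
    """Return 'scale', 'adapt' or 'stop' based on proportionate evidence."""
    if not (e.monitored and e.redress_path):
        return "adapt"  # never scale what cannot be observed or corrected
    if e.outcome_gain <= 0.0 or e.incident_rate > 1.0:
        return "stop"   # no benefit over the status quo, or unsafe
    if e.equity_gap > 0.05:
        return "adapt"  # mixed evidence: adjust rather than prohibit or scale
    return "scale"
```

The point of making the gate explicit is that mixed evidence routes to adaptation by default, rather than to prohibition or unchecked scaling.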

For LMICs, a critical priority within assurance is the development of multilingual and locally grounded evaluation. Methodologies that operate only in English risk obscuring failure modes in low-resource languages and culturally specific contexts. Governments should invest in and partner with academia and regional research networks to develop evaluation approaches that reflect linguistic diversity and real-world service conditions. Regional cooperation can help keep costs manageable, especially for specialised testing capability.

Assurance must also be sequenced. Governments should start by specialising in a limited set of high-value capabilities – such as security assessment for public systems, audits of update and change-management processes, language evaluation and incident reporting – rather than attempting to cover all risks at once.

Recommendation: Build trust through delivery, transparency, participation and effective redress.

Governments should build trust in AI by earning and maintaining a social licence for its use: demonstrating measurable public value, embedding transparency and human accountability into deployments, enabling meaningful public participation and providing effective redress.

Trust grows when citizens experience AI systems as fair, accountable and oriented towards clear social benefit. Governments should therefore pursue delivery-led governance: piloting AI in real settings, measuring improvement against the status quo and scaling up AI use only where benefits are evident. Performance must be communicated clearly and honestly.

Trust also depends on the everyday design of decision processes. In high-impact contexts, such as welfare eligibility, clinical triage, student support or law enforcement, governments should specify when humans remain accountable, what constitutes meaningful oversight and how to prevent automation bias. Where risks are primarily technical, the response should prioritise secure design and monitoring. Where risks are procedural, safeguards must be built into workflows.

Transparency should be practical and functional, not purely declaratory. Governments can strengthen trust by maintaining public registers that set out the purpose, responsible owner, supplier and risk level of every AI system used in government; adopting routine reporting standards for high-impact deployments so disclosure is comparable across agencies; ensuring clear frontline communication about what systems do, what they do not do and where humans remain accountable; and by implementing content provenance and authenticity measures for generative-AI use cases involving public communications.
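As an illustration of what a machine-readable register entry might contain, the sketch below encodes the fields suggested above. The schema, risk levels and example system are hypothetical, not a proposed standard.

```python
# A minimal sketch of a machine-readable AI-register entry; the schema,
# risk levels and example system are hypothetical, not a proposed standard.
from dataclasses import dataclass, asdict
from enum import Enum
import json

class RiskLevel(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"  # high-impact deployments get routine public reporting

@dataclass
class RegisterEntry:
    system_name: str
    purpose: str
    responsible_owner: str  # the named human accountable for the system
    supplier: str
    risk_level: RiskLevel
    human_oversight: str    # where humans remain accountable in the workflow

entry = RegisterEntry(
    system_name="Benefit-eligibility triage assistant",
    purpose="Prioritise welfare applications for caseworker review",
    responsible_owner="Director, Social Protection Agency",
    supplier="ExampleVendor Ltd",
    risk_level=RiskLevel.HIGH,
    human_oversight="Caseworkers make all final eligibility decisions",
)

# Publishing entries as JSON keeps disclosure comparable across agencies.
record = {**asdict(entry), "risk_level": entry.risk_level.value}
print(json.dumps(record, indent=2))
```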

As public-sector use expands, governments must also be visibly accountable for AI-related harms. This includes requiring impact assessments for high-impact systems, publishing them in accessible formats and ensuring clear legal pathways to contest automated or algorithmically assisted decisions.[_] Redress mechanisms should be easy to access and effective in practice, reinforcing the principle that AI does not dilute human responsibility within the state.

Public participation is also key, enabling the public to have meaningful input into decisions about AI use and governance and ensuring a public-interest orientation. For LMICs, this helps prevent narrow interests from shaping AI systems, builds social licence and enables early detection of failures. Governments should institutionalise consultation, user engagement and frontline feedback loops that inform system design, regulatory refinement and deployment. Mechanisms can be lightweight but concrete: user panels for high-impact services, frontline reporting channels for system errors, and periodic public reporting that explains what has been learned and what has changed as a result. AI-enabled online deliberations, for example through platforms like Pol.is, are also an effective way to run large-scale, nuanced public consultations at low cost, informing confident decision-making about AI deployment and use in the public interest.

Over time, trust is built not by promising perfect control, but by demonstrating competence: deploying AI where it noticeably improves outcomes, correcting failures quickly and transparently, and making accountability visible.


Conclusion

AI is no longer a question of if, but how. The defining challenge of this decade is translating technological promise into tangible public value that citizens experience in their everyday lives. The countries that succeed will be those with clear political intent, strong state capacity and the discipline to prioritise delivery over symbolism.

Delivering AI impact therefore begins with the state itself. For citizens, this is where AI becomes meaningful, damaging or invisible. Political leaders must reimagine government as a digital enabler of delivery. This means strengthening central government capacity to lead digital transformation, so that ambition translates into execution; investing in the digital, data and energy infrastructure that makes deployment possible; cultivating ecosystems that spread adoption across sectors; and governing AI in ways that build public trust as systems scale. Done well, these changes will materially change citizens’ experience of public services and determine whether AI becomes a driver of inclusive growth or another source of dependency.

This is not an agenda to be pursued all at once. Progress depends on deliberate sequencing: building coordination and foundations first, then scaling adoption through priority use cases where benefits to citizens are clear and measurable, and deepening governance as systems move into routine use. Getting this order right allows governments to move fast where value is clear, while avoiding fragmentation, wasted effort or premature regulation.

The prize is a generation of governments that are not merely consumers of innovation, but active producers of public value via AI.

This is a leadership agenda for the AI age. The countries that act now with focus, sequence reforms effectively and govern with confidence will not only shape their own development paths but help define, through the use of technology, a more inclusive global order in which AI measurably improves lives, strengthening the relationship between states and the people they serve.


Acknowledgements

We would like to extend our thanks to those who offered their advice and guidance in the development of this paper (while noting that contribution does not equal endorsement of all the points made and does not reflect the views of respective employers/organisations).

TBI Contributors

  • Keegan McBride, Elizabeth Seger, Guy Ward-Jackson & Kevin Zandermann (Science & Technology)

  • Hilda Barasa, PeiChin Tay, Oliver Large & Graham Drake (Government Innovation)

  • Johan Harvard (AI & Innovation)

  • Ned Naylor (Global Experts)

  • Devorah West (Climate)

  • Panagiotis Vallianatos & Arlind Rama (Advisory – Albania)

  • Leonardo Camacho, Carlo Enrico Santiago & Reini Azriel Evangelista (Advisory – Philippines)

External Contributors

  • Rachel Adams – Global Center on AI Governance & Professor at the Leverhulme Centre for the Future of Intelligence, University of Cambridge

  • Payal Arora – Professor of Inclusive AI Cultures, Utrecht University

  • Rajeev Chandrasekhar – Former Minister of Electronics and Information Technology, Government of India

  • Costa Federico – Amazon Web Services

  • Sana Khareghani – Professor of Practice in AI, King’s College London

  • Hector de Rivoire – Microsoft’s Office of Responsible AI

  • Ed de Minckwitz – ServiceNow

  • Jeremy Ng – World Bank

Footnotes

  1. Barani Maung Maung et al., “Generative AI Disproportionately Harms Long Tail Users”, Computer 57, no. 11 (2024): 82–85

  2. https://www.ilo.org/publications/major-publications/mind-ai-divide-shaping-global-perspective-future-work

  3. https://www.gov.uk/government/topical-events/ai-safety-summit-2023

  4. https://ai-impact-summit.vercel.app

  5. https://www.itu.int/itu-d/reports/statistics/2023/10/10/ff23-internet-use

  6. https://institute.global/insights/tech-and-digitalisation/state-of-compute-access-2024-how-to-navigate-the-new-power-paradox

  7. https://www.unglobalpulse.org/wp-content/uploads/2025/06/UN-Global-Pulse_2024-Annual-Report.pdf

  8. S Adan, R Trager, K Blomquist et al., “Voice and Access in AI: Global AI Majority Participation in Artificial Intelligence Development and Governance” (2024)

  9. A Greenspan and A Wooldridge, Capitalism in America: A History (Penguin Books, 2019)

  10. J Ding, Technology and the Rise of Great Powers: How Diffusion Shapes Economic Competition (Princeton University Press, 2024)

  11. https://www.ipsos.com/sites/default/files/ct/news/documents/2023-07/Ipsos%20Global%20AI%202023%20Report.pdf

  12. https://www.moh.gov.rw/news-detail/new-health-intelligence-center-to-drive-real-time-evidence-based-decisions

  13. https://www.techandjustice.bsg.ox.ac.uk/research/brazil

  14. https://www.iimb.ac.in/cdpg/pdf/State-India-DPI_Report.pdf

  15. https://www.nature.com/immersive/d44151-025-00085-3/index.html

  16. https://www.nature.com/articles/s41591-025-03815-3

  17. https://pubmed.ncbi.nlm.nih.gov/30399157/

  18. https://arxiv.org/abs/2405.01590

  19. https://www.nature.com/articles/s41467-024-52618-6

  20. https://pubmed.ncbi.nlm.nih.gov/34341121/

  21. Carl Benedikt Frey, How Progress Ends: Technology, Innovation, and the Fate of Nations (Princeton University Press, 2025)

  22. https://www.thelancet.com/journals/langlo/article/PIIS2214-109X%2819%2930485-1/fulltext

  23. https://gh.bmj.com/content/7/6/e009316

  24. https://www.healthdata.org/sites/default/files/2025-07/FGH_2025_FINAL_incl_Translations_2025.07.31.pdf

  25. https://www.who.int/philippines/news/detail/12-11-2025-who--doh-target-12m-filipinos-to-be-screened-for-tuberculosis-by-2026--philippines-aims-to-double-budget-for-tb-services

  26. https://www.gatesnotes.com/expanding-access-to-health-care-through-ai

  27. https://openknowledge.fao.org/handle/20.500.14283/cb3673en

  28. https://institute.global/insights/tech-and-digitalisation/state-of-compute-access-2024-how-to-navigate-the-new-power-paradox

  29. https://businessfightspoverty.org/exploring-the-ai-opportunity-in-lmics

  30. https://pissaridesreview.ifow.org

  31. https://mcgill-nlp.github.io/AfroBench/index.html

  32. https://www.gsma.com/solutions-and-impact/connectivity-for-good/mobile-economy/wp-content/uploads/2024/11/GSMA_ME_SSA_2024_Web.pdf

  33. https://news.microsoft.com/source/emea/2025/12/microsoft-achieves-major-connectivity-milestone-in-africa/

  34. https://www.uil.unesco.org/en/clcs-kenya

  35. https://www.digitalindia.gov.in/initiative/common-services-centres

  36. https://www.eurohpc-ju.europa.eu/about/discover-eurohpc-ju_en

  37. https://aramcodigital.com/articles/aramco-digital-and-groq-announcement.html

  38. https://www.statista.com/statistics/262946/most-common-languages-on-the-internet/

  39. https://huggingface.co/lelapa/InkubaLM-0.4B

  40. https://www.theguardian.com/environment/2025/feb/20/europe-greenwashing-with-north-africas-renewable-energy-report-says

  41. https://www.imda.gov.sg/-/media/imda/files/news-and-events/media-room/media-releases/2024/09/ai-playbook-for-small-states/imda-ai-playbook-for-small-states.pdf

  42. https://tech.eu/2024/09/05/berlin-edtech-pioneer-eidu-open-sources-code-to-boost-global-learning/

  43. https://www.anthropic.com/news/political-even-handedness

  44. https://event-assets.gsma.com/pdf/GSMA_ME_SSA_2024_Web.pdf

  45. https://www.accenture.com/content/dam/accenture/final/capabilities/strategy-and-consulting/strategy/document/Accenture-Total-Enterprise-Reinvention-Thriving-amid-disruption-in-South-Africa.pdf

  46. https://saaiassociation.co.za/

  47. https://www.mddi.gov.sg/newsroom/national-artificial-intelligence-strategy-unveiled/

  48. https://institute.global/insights/tech-and-digitalisation/how-leaders-in-the-global-south-can-devise-ai-regulation-that-enables-innovation

  49. R Bommasani et al., “Advancing Science- and Evidence-Based AI Policy”, Science 389, no. 6759 (2025): 459–61

  50. https://www.soumu.go.jp/hiroshimaaiprocess/en/report.html

  51. https://www.imda.gov.sg/resources/press-releases-factsheets-and-speeches/factsheets/2024/gen-ai-and-digital-foss-ai-governance-playbook

  52. https://www.unesco.org/ethics-ai/en/ram
