Tech & Digitalisation

AI Won’t Transform Government Without Governable Data


Commentary, 6th May 2026

The AI Promise and the Missing Foundation

Governments around the world are now moving from asking whether artificial intelligence matters to asking how it can reshape the operating model of the state. The ambition is understandable. AI promises faster services, better targeting, more personalised support, less bureaucracy and more capable public institutions. For policymakers, the appeal is obvious – the possibility of a state that can respond earlier, coordinate better and deliver more value with limited resources – in short, a more agentic state.

But there is a risk in the current debate. Too often, AI is treated as something that can be superimposed on existing government systems: a chatbot on a portal, a model inside a case-management process, a few pilots in selected agencies, perhaps a new innovation lab or centre of excellence to coordinate experiments. These initiatives can be valuable, but they do not by themselves reimagine the state.

My experience as the Estonian government’s chief data officer for nearly eight years taught me that the real transformation starts deeper. AI will not reimagine the state unless data does first. This is not an argument against AI adoption. It is an argument for taking AI seriously enough to build the foundations that allow it to work in real government environments.

The failure in many governments is not a lack of innovation. It is a failure to treat data as public infrastructure and to organise government around using it. Without that shift, AI will not fix the operating model of the state. It will scale its dysfunction faster than it delivers reform.

The next generation of government will not be defined by who has the most advanced models, the most ambitious AI strategy or the most visible pilots. It will be defined by which states can make data usable, shareable, trustworthy and governable across institutional boundaries. Without that, AI risks becoming an interface improvement – useful, but limited. With it, AI can become part of a deeper shift in how the state understands needs, coordinates action and delivers public value.

This makes AI-enabled government a priority on the political leadership agenda, not just a technical or administrative challenge. The foundations that matter – legal clarity, data quality, interoperability, institutional accountability, transparency and delivery capacity – cut across mandates and organisational boundaries. They require leaders to make choices about incentives, responsibilities, investment and trust. Without that leadership, governments may adopt AI, but they will not build the state capability needed to use it well.

From Online Services to a New Operating Model

From my perspective, one of the key lessons from Estonia is that there is a real difference between digitalising government and transforming it. We went through distinct phases. First came e-government: putting services online and improving efficiency, but often still following the logic of non-digital administration. Then came the digital state: a deeper shift, where secure data exchange, digital identity, national registers and the once-only principle began to change how the state actually operates. Now the next phase is emerging: a more agentic state, where the conscious use and governance of data and AI can enable more personalised, proactive and coordinated public services.

Estonia’s experience is useful to understand not because it offers a model to copy blindly, but because it shows what happens when digital transformation is treated as architecture rather than interface design. I have seen how much this matters. Estonia’s early digital state was built on the strong foundations of digital identity, secure data exchange, national registers, the once-only principle and a legal environment that made digital service provision and interaction possible. These were important because they did not merely digitalise existing bureaucracy – the traditional e-government model of putting services online. They enabled a deeper shift towards a true digital government – a change in the operating model of the state, where institutions can exchange data securely, reuse information lawfully, and design services around people rather than administrative boundaries.

The shift towards an agentic state will require a similar strengthening of foundations across the stack. Governments will need legal clarity for data reuse and automated decision-making; semantic clarity so institutions and AI systems interpret information consistently; interoperable data infrastructure; consent, permission and audit mechanisms; discoverable and high-quality data assets; secure environments for testing and validation; algorithmic transparency; and clear institutional accountability so AI-supported services remain explainable, contestable and under democratic control.

Many countries are still catching up with that transition, even as they now seek to leap directly into AI. In my view, that is where the risk lies. If the shift from e-government to digital government remains unfinished, AI will not compensate for it. It will expose it.

A government can put forms online and still remain fundamentally paper-based in its logic. It can create portals and still require citizens to understand administrative structures. It can automate individual transactions and still leave people to coordinate between agencies. The real value of digital government appears when the underlying architecture of the state changes: when data can be reused securely, when institutions can cooperate through shared infrastructure and when services are designed around people’s needs rather than administrative boundaries.

The AI era requires the next version of that shift. It is no longer enough for a service to be online. The more important question is whether the state can use data responsibly to reduce the need for people and businesses to navigate the state at all. Instead of asking citizens to repeatedly submit information the state already has, governments should be able to reuse data lawfully and transparently. Instead of asking businesses to interpret complex administrative requirements alone, governments should be able to pre-fill, guide and simplify. Instead of organising public services around institutional mandates, the state should be able to coordinate around life events and outcomes. A more capable state should be able to identify, for example, when a person may be eligible for support, when a business can be guided through compliance automatically, or when early signals suggest that an intervention could prevent a larger social, health or economic problem later.

The Hidden Work Behind a Proactive and Personal State

This is where the discussion about AI often becomes too narrow. Whether a form can be pre-filled, whether eligibility can be detected automatically and risks identified earlier, or whether institutions can coordinate around a family, a patient or an entrepreneur – these are not principally questions of AI maturity. They are questions of data governance. AI may make these interactions more intelligent, but the possibility of delivering them depends on whether the state has the legal, technical and institutional ability to use data in a responsible and coordinated way.

In practice, making government data work is rarely glamorous. It means knowing what data exists, who is responsible for it, what it means, how reliable it is, under which legal basis it can be used, how access is granted, how use is logged, how errors are corrected and how citizens can understand what is happening. This is the hidden work behind any serious attempt to reimagine the state.

I have seen this repeatedly in government. The obstacle is rarely that nobody wants to innovate. More often, the problem is that the data needed for a better service sits in another institution, the legal basis for data processing is missing or is interpreted differently by different stakeholders, the quality or reliability of the data does not meet the needs of those who want to reuse it, or no one has the mandate to make the whole journey work from the citizen’s perspective. In those moments, technology has not been the hard part. The hard part has been organising the state to use the data it already has.

This work is often underestimated because it does not look like innovation from the outside. It is easier to communicate about a new AI assistant than a metadata standard. It is easier to showcase a prototype than a data catalogue. It is easier to announce a national AI strategy than to fix data quality, clarify access rights or redesign institutional responsibilities. But in government, these less visible foundations often determine whether innovation can scale.

In this sense, data governance should not be understood as a compliance function sitting somewhere in the back office. It is a core capability of the modern state. It determines whether governments can understand demand, coordinate institutions, measure outcomes, target interventions, automate safely and learn from implementation. A state that cannot govern its data cannot govern intelligently in the AI era.

What My Experience Suggests

In Estonia, many of the most important reforms I worked on happened precisely at this layer. The data consent service is a good example. The point was not simply to create another digital tool. It was to create a trusted mechanism through which a person could authorise the reuse of government-held data for a specific purpose, with traceability and the ability to understand what had happened. That kind of infrastructure changes the conversation: it moves data sharing from ad hoc negotiation to a governed, reusable capability.
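The logic of such a consent mechanism can be illustrated with a deliberately simplified sketch. This is not Estonia's actual implementation – all names and structures here are hypothetical – but it shows the two properties the text describes: each consent is bound to a specific dataset and purpose, and every access check leaves a trace the citizen can later inspect.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Consent:
    citizen_id: str
    dataset: str       # e.g. a national register the data is held in
    purpose: str       # the specific purpose the citizen authorised
    granted_at: datetime
    revoked: bool = False

@dataclass
class ConsentLedger:
    consents: list = field(default_factory=list)
    audit_log: list = field(default_factory=list)

    def grant(self, citizen_id: str, dataset: str, purpose: str) -> Consent:
        consent = Consent(citizen_id, dataset, purpose,
                          datetime.now(timezone.utc))
        self.consents.append(consent)
        return consent

    def may_access(self, citizen_id: str, dataset: str, purpose: str) -> bool:
        # Access is allowed only for the exact dataset-purpose pair
        # the citizen authorised, and only while consent stands.
        allowed = any(
            c.citizen_id == citizen_id and c.dataset == dataset
            and c.purpose == purpose and not c.revoked
            for c in self.consents
        )
        # Every check is logged, allowed or not, so the citizen can
        # later see exactly what happened with their data.
        self.audit_log.append(
            (datetime.now(timezone.utc), citizen_id, dataset, purpose, allowed)
        )
        return allowed
```

The design point is that purpose limitation and traceability are enforced by the infrastructure itself, not left to ad hoc agreements between institutions.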

Estonia’s Data Tracker gives citizens a clearer view of which institutions have used their data and for what purpose. But even in a digitally mature state, making this real across the whole public sector was difficult. By the end of 2025, after sustained work, public-sector organisations had finally prepared implementation plans, alongside plans to make the Data Tracker mandatory. Legal changes have also enabled more automated and data-driven administrative procedures. Work on data catalogues, metadata, open data, algorithmic transparency and an AI sandbox has focused on making data and AI not only possible, but governable.

These are not isolated technical projects. Together, they point to a different model of governance: one where public services can become more proactive, where institutions can cooperate more effectively, and where the use of data is supported by safeguards rather than blocked by uncertainty. A proactive state cannot be built on fragmented data. A personalised state cannot be built on unclear semantics. A trustworthy AI-enabled state cannot be built without auditability. And an efficient state cannot be built if every institution optimises only within its own walls.

The lesson I have taken away from my experience in Estonia is that the foundations of an AI-enabled state are not built through one flagship technology. They are built through many practical choices that often look small from the outside: clarifying legal bases, improving metadata, making data discoverable, creating mechanisms for consent and transparency, and giving institutions safe ways to test new approaches. These are the choices that determine whether governments can move from pilots to trusted, scalable services.

The Institutional Challenge

The real transformation challenge is institutional, not only technological. Governments are usually organised vertically: ministries, agencies, registers, mandates, budgets and accountability lines. But people experience life horizontally. A child is born. A company is created. A person loses a job. Someone becomes eligible for support. A business applies for a licence. A family moves. These situations rarely fit neatly inside one institution.

The promise of a data-driven state is that government can begin to organise around these real-life needs. But that only works when data can move safely and meaningfully across the state. Not freely, not carelessly and not without limits, but with clear rules, clear responsibilities and clear safeguards. This balance is critical. The answer to fragmented government is not uncontrolled data sharing. It is governed data sharing: lawful, purposeful, transparent, secure and accountable.
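The difference between uncontrolled and governed sharing can be made concrete with a minimal, hypothetical sketch: a sharing request is approved only when every safeguard named above is present, and refused with an explicit reason otherwise. The safeguard names are illustrative, not drawn from any real system.

```python
# Safeguards a governed data-sharing request must satisfy, mirroring the
# text: lawful, purposeful, secure and accountable. (Transparency would
# be handled by logging the outcome, as in a consent ledger.)
SAFEGUARDS = ("legal_basis", "stated_purpose", "security_level", "audit_logging")

def review_request(request: dict) -> tuple[bool, list]:
    """Return (approved, missing_safeguards) for a data-sharing request.

    A request is approved only if every safeguard is present and truthy;
    otherwise the missing safeguards are returned as the reason for refusal.
    """
    missing = [s for s in SAFEGUARDS if not request.get(s)]
    return (len(missing) == 0, missing)
```

The point of the design is that refusal is informative: institutions learn which safeguard is missing, turning data sharing from a negotiation into a checklist that can be satisfied and audited.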

This is where many countries struggle. They have data, but not necessarily usable data. They have registers, but not always interoperability. They have AI strategies, but not always data quality. They have innovation projects, but not institutional ownership. They have legal safeguards, but not always practical mechanisms that make responsible reuse easy. The result is a familiar pattern: successful pilots that do not scale, services that remain fragmented, AI tools that cannot access reliable data and citizens who still have to act as messengers between parts of the same state.

Data as Public Infrastructure

The lesson is simple – data must be treated as public infrastructure. Not as an internal by-product of administration. Not as something owned defensively by individual institutions, or as a resource to be opened or restricted in binary terms. Data should be understood as infrastructure that enables better services, better decisions, innovation, accountability and trust.

This requires a different level of commitment from governments. Roads, energy systems and digital-identity infrastructure are not expected to emerge accidentally from individual projects. They require investment, standards, governance, maintenance and long-term stewardship. The same is true for data. If governments want AI to transform public administration, they must invest in the data infrastructure that makes responsible transformation possible. The hard truth is that these foundations cannot be skipped. They only become more visible when AI is introduced.

The Foundations That Cannot Be Skipped

In practice, this requires at least six foundations that every government serious about AI should treat as non-negotiable conditions for transformation. Without legal clarity, nothing happens: institutions hesitate, lawyers disagree and responsible reuse stalls. Without semantic clarity, automation breaks: the same concept means different things in different parts of government. Without interoperability, coordination fails: every new service requires another bespoke agreement, another integration, another negotiation. Without accountability, no one trusts the system: data quality, access and lifecycle management become everyone’s concern and no one’s responsibility. Without transparency and redress, legitimacy collapses: people cannot understand, challenge or trust how data and automation affect them. And without delivery capacity, transformation never leaves the pilot stage.

This is why these foundations cannot be treated as back-office reforms. They are the conditions that determine whether AI becomes part of a new operating model for the state or simply exposes the weaknesses of the old one.

AI Governance Is Data Governance in Practice

This is also why AI governance cannot be separated from data governance. AI does not operate in a vacuum. It operates on the data, rules, permissions, workflows and institutional assumptions that already exist inside the state. An algorithmic register, an AI impact assessment or a transparency requirement will not help much if the underlying data remain undocumented and unmanaged. Conversely, strong data governance makes AI both more useful and safer. It gives public institutions a clearer basis for what data can be used, how quality is assured, how risks are managed and how citizens’ rights are protected.

The same applies to compute. Governments increasingly discuss sovereign AI infrastructure, national compute capacity and local deployment for sensitive workloads. These are important questions. Governments need to understand where sensitive workloads should run, how critical capabilities are secured and how dependency risks are managed. But compute without trusted data is not capability. Models without governed data flows do not create a new operating model for the state. Infrastructure matters, but it must be connected to public-sector problems, institutional workflows and governed data.

From AI Adoption to State Capability

This has implications for how governments should approach AI. The first question should not be “where can we add AI?”; it should be “which public problems require better use of data, better coordination and better decision-making?” Sometimes the answer will be a language model. Sometimes it will be rule-based automation. Sometimes it will be a better API, a legal change, a data-quality improvement or a redesign of responsibility between institutions. Mature governments will not treat AI as the answer to every problem. They will treat it as one capability within a broader transformation of the state.

This is also where international debate needs to mature. We have spent years discussing digital government through the lens of services, platforms and user experience. These remain important. But the next phase will be about state capability: whether governments can understand complex needs, act across silos, use data responsibly and learn from implementation. In this sense, data is not only a technical issue. It is a governance issue, a leadership issue and increasingly a question of national competitiveness.

This challenge looks different in every country. Some governments have strong legal frameworks but weak implementation capacity. Others have advanced digital services but fragmented registers, weak integration and limited data reuse. Some have ambitious AI strategies but limited data quality, while others have rich administrative data but low public trust. The sequencing will differ, but the direction is the same: AI adoption must be connected to effective data governance across the state.

Countries that get this right will be able to reduce administrative burden, improve compliance, target support more effectively, accelerate public-service delivery and create better conditions for innovation. Countries that do not will find themselves with impressive AI demonstrations but limited institutional change.

The Leadership Test

This is the practical lesson I would offer from my experience. The digital state was not built by launching one application. It was built by combining architecture, law, institutions, trust and delivery over time. The same will be true for the AI-enabled state. There will be no shortcut around the hard foundations.

The next leap is therefore not about discovering data after digital services have been built. A true digital government should already have addressed how the state uses, shares and governs data. The opportunity now is to use AI to accelerate the work that remains unfinished: simplifying fragmented service journeys, reducing compliance burden, improving decision-making and helping institutions coordinate around people and outcomes. But this only works if the underlying data governance, information architecture and institutional responsibilities are in place.

This is also a political leadership question. Foundational reforms are often less visible than new applications, but they are what determine whether transformation lasts. Leaders need to change what government rewards: not only launches, but maintenance; not only pilots, but scale; not only novelty, but reuse; not only risk avoidance, but responsible data use for public value. The future state will depend on the less visible capabilities that make responsible automation possible: data quality, interoperability, metadata, legal clarity, auditability, institutional ownership and delivery discipline.

AI can help reimagine the state. But it cannot do so alone. The real question for governments is therefore not simply how to use AI. It is whether they are building a state capable of using data responsibly, across institutions, in the service of people.

If governments fail to fix data and information architecture first, AI will not overcome the limits of the state. It will expose them: fragmented institutions, unclear responsibilities, poor data quality, weak interoperability and limited accountability. The countries that succeed will not be those that add AI most quickly to existing systems, but those that use this moment to build the foundations for a more capable, proactive and trustworthy state.

AI may become the most visible symbol of the next generation of government. But governable data will determine whether that next generation actually works.
