Contributors: Kayla Crowley-Carbery, Emily West
England’s education system is at a crossroads. Governments have squeezed most of what they can from traditional policy levers. The results speak for themselves: the attainment gap is still alarmingly wide, far too many pupils finish school without the skills they need and the system faces an enduring teacher-retention crisis.
This is not a failure of effort; it is a failure of the system. An analogue model built for a different century – labour-intensive, standardised and reactive – is now creaking under the pressures of the modern one.
Artificial intelligence offers an alternative: to boost capacity without expanding headcount; give every citizen personalised support by default; intervene early; and build services that learn and improve. In education, this means AI tools that take admin off teachers’ plates so they can focus on teaching; lessons that adapt to every child; predictive systems that spot early-warning signs long before pupils fall behind; and intelligent analytics that improve continuously.
For some families, there are ways to bypass the constraints of the current model, as more affluent schools have already begun to act on AI’s potential. But many other schools lack the devices, connectivity and structured guidance to engage at all. Insights from the Tony Blair Institute for Global Change show how stark the gap has already become. Private schools are almost three times more likely than state schools to teach pupils what AI is, and 2.5 times more likely to teach pupils how to use it. Without intervention, England risks entrenching a new educational divide.
It also risks falling behind internationally – as countries such as South Korea, Estonia, Singapore, China, Germany and Iceland embed AI into their school systems – while forfeiting the chance to build a thriving domestic edtech sector and global export opportunities. While different education systems will inevitably pursue different models of rollout, the government must at minimum match the strategic ambition and pace set by these more proactive nations.
To realise this, three building blocks must be in place: a secure digital-learner identity for every pupil so their education information can be connected safely; a trusted suite of AI tools that schools can access; and a national intelligence layer that turns anonymised data into actionable insight.
This paper focuses on the second element: frontline AI tools. And its findings are stark.
Most pupils barely encounter AI in their learning. Only 10 per cent of teachers in state secondary schools use AI within their subject teaching, according to new polling commissioned by the Tony Blair Institute for Global Change.
Teachers are interested in generative AI but just 8 per cent use it daily and a third still have not used it at all.
School-level implementation is shallow. Nearly half of schools have not officially implemented AI. Where AI is used, it is mainly to deliver useful workload reductions, but these gains barely scratch the surface of its potential to improve learning.
The paper identifies five of the most significant drivers of this slow uptake – all of them within the power of government to fix.
Leadership signals remain tentative. Only 8 per cent of school leaders have made any organisational change to address generative AI, while nearly a third have no plans whatsoever.
Teachers have been left without support. Almost half rate their AI confidence at just three out of ten, and new polling commissioned by TBI shows that 91 per cent of teachers using AI are entirely self-taught – a sign of the near-total absence of training.
Information is patchy and poor. Only five out of the 28 leading AI brands are recognised by more than 10 per cent of teachers.
Digital infrastructure is weak. TBI-commissioned polling shows that barely half of secondary schools have reliable whole-school WiFi, and virtually all schools cite cost as a major barrier to tech adoption.
The market is stalling. School-technology investment has fallen 96 per cent since 2020, and just 0.7 per cent of edtech investment has gone to AI tools – compared to 22 per cent in health tech.
In some ways, we have been here before with previous waves of edtech. But this time, the stakes are much higher. AI is more powerful, more far-reaching and more transformative than anything that came before it. This technology and its impacts will soon be impossible to ignore, and the question now is whether the government shapes this change deliberately or allows it to happen by accident.
Government has a short window within which to ensure AI is rolled out fairly, safely and in line with national priorities. The alternative is to do nothing and watch AI use spread unevenly – with some schools advancing rapidly while others fall behind, and with serious risks around quality and equity.
With the old tools exhausted and the potential of AI still largely untapped, this paper sets out a new way forward: teachers empowered to use AI confidently; every pupil able to benefit from safe, high-quality tools; and a thriving domestic edtech scene that drives innovation, enabled by government-backed infrastructure.
Policy Foundations to Boost AI Diffusion in Schools
The widespread adoption of safe, high-quality AI in schools depends on five segments, each representing a stage in the diffusion lifecycle and requiring a distinct policy response. The government has a key role to play at every stage of this diffusion cycle – both in enabling the conditions for these foundations to take root and in directly building certain elements of the overall architecture.
The Department for Education (DfE) should set out a clear, system-wide AI action plan for schools. This should be underpinned by a dedicated AI unit with clear senior accountability and the expertise needed to oversee delivery of the reforms set out below, including, where specific proposals require it, working in partnership with other agencies and system actors.
Segment 1 – Investing in AI Tools and Ensuring Product Diversity
Develop a free, universally available national AI teaching assistant for all schools, building on precedents like Australia’s NSWEduChat, with two modes: a teacher-facing mode for planning, differentiation, classroom-activity design and subject refreshers, and a “student mode” for use during lessons.
Intervene selectively in areas of market failure by developing niche, high-value tools where needed.
Provide clear market signals, stating where the government will build baseline tools, where it will not intervene and where it will target investment to correct market gaps.
Segment 2 – Developing High-Quality Prototypes
Use government’s unique access to secure, well-governed education data to drive innovation, opening access to high-quality data sets under strong safeguards and clear public-interest purposes to improve the accuracy, fairness and utility of AI tools.
Make the DfE’s new content store the flagship mechanism for this, embedding it in the National Data Library (NDL) after its design phase, broadening access for accredited users and expanding data sets.
Set clear principles for data access; require ethical use, demonstrable public benefit, transparency about data-set provenance and open, non-exclusive licensing.
Segment 3 – Testing Efficacy and Meeting Minimum Standards in the Market
Establish a permanent national testbed network linking schools, developers and researchers to trial tools in short, iterative evidence cycles, giving innovators real-world feedback and letting schools shape product design.
Introduce minimum safety and reliability standards to protect children, ensuring tools meet safeguarding and privacy requirements while giving vendors predictability.
Create a free national AI-testing lab so vendors can quickly check safety, curriculum alignment and age-appropriateness using reference data sets like the DfE’s new content store.
Segment 4 – Ensuring Demand-Side Readiness
Create an AI Readiness Fund, accessible only to schools with a clear, credible plan for engaging with AI across teaching, learning and operations. The process would be iterative – focused on promoting engagement and supported by advice and peer networks.
Establish AI leads in every school with access to clear progression and leadership pathways – building on evidence from early adopters in England and abroad that trained in-house champions are crucial for adoption.
Segment 5 – Deployment and Enabling Digital Infrastructure
Treat school digital infrastructure as critical national infrastructure, ensuring every school has high-speed broadband, resilient whole-school WiFi and technical capacity – non-negotiable foundations for system-wide AI adoption.
Guarantee equitable device access through a hybrid model, combining Bring Your Own Device (laptops and tablets only) with a government-funded loan scheme so every teacher and secondary pupil has a suitable device, and primary schools maintain minimum device ratios.
Create a national interoperability framework, with open standards and interoperable application-programming interfaces (APIs) so school systems can share data securely and AI tools can be integrated seamlessly.
Chapter 1
The widespread adoption of safe, high-quality AI across primary and secondary schools in England would deliver generational benefits for learners, educators and the broader economy.
AI Expands What Education Can Do
Teachers free to teach: Teachers consistently report heavy workloads from lesson planning, marking and other administration.[_] AI can reduce this workload: one large-scale trial showed that giving secondary science teachers access to generative-AI aids such as ChatGPT reduced lesson-planning time by 31 per cent without any loss of lesson quality,[_] while Oak National Academy’s “Aila” and other technologies show similarly promising gains.[_],[_] Excellent teaching is the strongest in-school driver of attainment,[_] and freeing teachers from routine tasks lets them concentrate on pedagogy and individual support.
Importantly, AI should be viewed as an assistant rather than a replacement for teaching expertise. Human–AI collaboration can enhance professional judgement, ensuring that pedagogical decisions remain with teachers but are informed by richer data. For example, timely feedback is critical for learning: pupils who receive effective feedback make six additional months of academic progress over a school year.[_] AI systems can deliver rapid, individualised responses and can even assess handwritten work, analyse hand-drawn maps and pick up soldering mistakes on circuit boards.[_]
Easing the recruitment crisis: Beyond enhancing the impact of teaching, AI can help remove some of the pressures and pain points that frustrate teachers today. Postgraduate teacher recruitment was 38 per cent below target in 2023/24, with far larger shortfalls in subjects such as physics and computing.[_] Roughly one in three newly qualified teachers quits within five years.[_] Excessive workload is the single biggest driver of this problem – 92 per cent of teachers considering leaving the profession cite it as the main reason.[_] By reducing paperwork, AI allows teachers to spend more meaningful time with pupils, boosting job satisfaction.
Personalised learning: By analysing performance and adapting support accordingly, AI can offer practice where needed, supply hints when pupils struggle and accelerate once mastery is shown. This scaffolding approach mirrors one-to-one tutoring, a method long proven to improve outcomes.[_] Evidence from intelligent-tutoring systems – precursors of today’s AI – shows average gains of +0.66 standard deviations, moving pupils from the 50th to the 75th percentile.[_]
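To make this effect size concrete – assuming attainment is approximately normally distributed, which is how such figures are conventionally interpreted – the implied percentile shift follows from the standard normal cumulative distribution function:

\[
\Phi(0.66) \approx 0.75
\]

A pupil who would otherwise sit at the median therefore finishes at roughly the 75th percentile, the movement described above.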
While evidence for more advanced AI learning tools is nascent, this reflects the field’s youth rather than a lack of promise. In the United States, pupils using the digital learning platform Khan Academy for 18+ hours a year achieved 20 per cent higher-than-expected learning gains in maths (grades 3 to 8).[_] In China, pupils using the Squirrel AI tutoring system outperformed peers in whole-class and small-group settings.[_] Meta-reviews indicate that effective use of generative AI is linked to improved academic performance.[_] In addition, today’s AI systems are the least capable and least well-aligned that will ever be deployed, and research labs are already developing pedagogy-aware systems, such as DeepMind’s LearnLM.[_]
Narrowing the attainment gap: Used purposefully, AI could help close England’s entrenched disadvantage gap – now more than 19 months by the end of secondary school.[_] The causes of this gap are complex, but a key constraint is a lack of time and resources to tailor learning, leaving some pupils behind. Adaptive platforms can help pupils progress at their own pace, while the time teachers save through automation can be reinvested in supporting those who need it most. At the same time, assistive technologies enable pupils with special educational needs and disabilities to overcome barriers and engage more fully with the curriculum,[_] while analytics tools can enhance early identification of learning difficulties.[_]
Streamlining administration and reinvesting in learning: Behind the scenes, AI can transform how schools operate – automating timetabling, attendance, parent communications and reporting. Tasks that once took days can be completed in minutes, freeing both funds and staff capacity for frontline teaching. For example, Arbor, a management-information-system provider for UK schools, has designed a model that generated a school-improvement plan in just four minutes – a task that previously took days.[_] The resulting efficiencies can be redirected into enrichment, materials and targeted interventions that directly raise attainment.
Smarter data infrastructure to drive school improvement: At a system level, AI can play a crucial role in identifying where improvement is most needed. Ofsted, for example, is exploring how AI can analyse data to flag schools at risk of decline, enabling earlier inspection and support. The Department for Education (DfE) is taking similar steps through new AI-powered attendance tools designed to identify emerging patterns of absence and prompt timely intervention.[_] And the potential goes far beyond these early applications, as the education-data landscape remains fragmented and underexplored.
The Ensuing Effect on Lives, Prosperity and Global Standing
Enabling the diffusion of safe, high-quality AI in schools is not only an educational imperative – it is an economic and strategic necessity at a time when UK growth remains weak.
Raising educational outcomes: The first economic effect would come through boosting skills, qualifications and productivity. Modelling by TBI suggests that widespread diffusion of AI-driven learning could lift UK GDP by around 6 per cent in the long run;[_] this estimate is based on today’s technology, assumes only a modest effect size, and calculates returns solely based on higher levels of qualification.[_]
In reality the benefits are likely to be greater. As the technology matures, its impact on learning will grow. The multiplier effects associated with better learning outcomes are also likely to extend beyond education into health, employment and social outcomes. AI in schools would also build AI literacy across the workforce, equipping young people to use and understand AI responsibly in every profession, while helping to close a £63 billion digital-skills gap.[_]
Expanding the AI-in-education sector: The second way in which AI in education would deliver economic growth is by creating a stronger domestic AI-in-education sector. First, broad adoption across English schools would create strong domestic demand, spurring investment, jobs, and research and development.
It would also unlock new export opportunities. The global edtech market is projected to surpass $300 billion by 2030,[_] representing a vast export opportunity. Education has long been a successful export sector: education-related exports and transnational education activity contributed nearly £26 billion in UK revenue in 2020.[_] A leadership position in AI-enhanced education would open up a valuable market – encompassing not only learning platforms, but also teacher-training programmes, advisory services and digital-transformation partnerships with other education systems.
Maintaining strategic advantage: Establishing a position at the forefront of AI in education would strengthen the UK’s national competitiveness. Around the world, governments are racing to harness AI in schools as a driver of economic growth and geopolitical influence. Countries such as South Korea, Estonia, China, Singapore and Germany recognise that tech-savvy human capital underpins future prosperity and are embedding AI literacy and digital infrastructure into their education systems.[_] If the UK fails to modernise its classrooms at a similar pace, it risks falling behind – with a workforce less fluent in AI than its international peers and an economy less equipped to seize the opportunities of the AI era.
Playing a global leadership role: Taking a leadership role would also allow the UK to set international standards for the responsible use of AI in education. The government has already signalled its ambition to be a global leader in shaping the development of AI – exemplified by its hosting of the first global AI Safety Summit in 2023. Building a proven model of safe, human-centred, classroom-ready AI would give the UK valuable, real-world expertise. In turn, this would enhance the UK’s credibility, strengthen its influence in global governance debates and position it as a trusted partner for countries seeking to deploy AI responsibly in their own education systems.
Chapter 2
The previous chapter established why the diffusion of AI in education should be treated as a strategic priority. This section examines the state of play in England’s schools: how far has AI diffusion progressed to date and what factors are influencing its pace?
Few Schools Are Making the Most of AI
The use of AI in England’s schools remains at an early stage of diffusion: adoption is both limited in scale and narrow in scope. Three indicators – teacher usage, opportunities for pupils and school-level implementation – summarise the current picture.
Teacher Usage
According to DfE data, around half of teachers report having used generative AI in some form.[_] Yet “use” covers a wide spectrum, from occasional experimentation to daily use, and regular users remain a minority. Estimates vary: one study suggests that only about one in four UK teachers use AI tools every day,[_] while another finds that only 49 per cent of teachers use AI monthly, just 8 per cent use it daily and a third still have not used it at all.[_]
Even among teachers who are experimenting with AI, activity tends to cluster around a small set of administrative tasks. The most common uses are generating lesson materials (cited by 73 per cent of AI-engaged teachers) and supporting curriculum planning (47 per cent). While valuable, these functions represent only a fraction of AI’s wider potential to transform teaching, learning and school operations.
Teachers and leaders who use generative AI do so to support a narrow range of tasks
Source: DfE
Opportunities for Pupils
Pupils’ access to AI is similarly limited. There is no nationwide programme or clear curriculum requirement for teaching about AI. Few pupils have the chance to learn about AI in a structured or informed way: according to a TBI-commissioned Teacher Tapp survey, only one in five state-secondary respondents teach pupils how AI works and what it is; just 27 per cent support pupils to use AI in their learning; and only 10 per cent report using AI within subject teaching.
A minority of teachers in England’s state secondary schools teach pupils about AI and help them use it
Source: TBI-commissioned polling
Note: Teacher Tapp survey of 7,817 teachers in England, conducted 2 July 2025. Question answered by 4,452 state secondary-school teachers (results weighted to reflect national teacher and school demographics). The comparison year (2024) included secondary schools only, so the comparison in this figure covers those schools alone. We further limited the sample to state secondary schools for this question to cover only those schools that the DfE funds. In 2024, the question was answered by 4,738 teachers.
With few structured, supported opportunities available in school, many pupils are exploring these tools on their own terms. By December 2024, nearly four in ten secondary pupils reported using generative AI tools outside any school-related context. Only a small proportion were integrating AI into academic work: just 8 per cent said they used AI in lessons and 18 per cent for homework. Even then, it remains unclear whether this use was prompted by teachers or driven by pupils.[_]
School-Level Implementation
Implementation at institutional level remains very limited, too. According to analysis by Bett in 2025, nearly half of schools have not officially implemented AI.[_] While this is an improvement on 2024 (69 per cent), there is a long way to go. This underlines the fact that, at a system-wide level, adoption also remains at the margins.
Even among the minority of schools that are experimenting, the rationale is often strikingly narrow: reducing teacher workload (44 per cent) is the most common motivation, while smaller shares cite improving pupils’ outcomes (14 per cent) or supporting assessment tasks (9 per cent).[_]
In other words, schools are largely turning to AI for efficiency gains and administrative support rather than as a tool to reshape pedagogy or unlock fundamentally new ways of teaching and learning. This highlights both the scale of untapped opportunity and the need for policy to help schools move beyond incremental efficiency gains towards genuinely innovative uses of AI.
In addition, when these figures are viewed alongside Bett data showing that two-thirds of UK teachers have used AI, a gap emerges.[_] While many individual teachers are experimenting – often with free, readily accessible tools – institutional adoption lags well behind. Where AI is being used in classrooms, it is often ad hoc and teacher led – the product of individual initiative or local champions rather than the result of a coordinated, whole-school strategy.
An Emerging Social Divide
Beneath these broader statistics lies an even more urgent challenge – the uneven distribution of AI’s benefits, which is creating a new educational divide. Three fault lines are already defining this emerging inequality: who gets to use AI at school, who receives foundational AI instruction and the accelerating advantage of independent schools.
Fault Line 1: Who Gets to Use AI at School?
AI use among pupils is rising, but access within schools varies sharply. Teachers in schools rated as “outstanding” by Ofsted are three times more likely to have formal AI training and three times more likely to report a school-wide AI strategy than those in schools rated “inadequate”.[_] Device access follows the same pattern: 72 per cent of pupils in inadequate schools lack access to individual devices in class, compared with 59 per cent in outstanding schools.[_]
Fault Line 2: Access to Foundational AI Instruction
Most pupil engagement with AI is informal, unguided and inconsistent – with disadvantaged schools least likely to offer structured teaching. TBI-commissioned Teacher Tapp analysis shows pupils in these settings are significantly less likely to be taught what AI is, how it works or how it can support learning across subjects, leaving many without the basic literacy required to use AI safely or effectively.[_]
Fault Line 3: The Accelerating Advantage of Independent Schools
Independent schools are moving fastest, with three times the likelihood of having a school-wide AI strategy,[_] far greater access to devices[_] and much more structured AI instruction than state schools. TBI data show independent-school teachers are 2.5 times more likely to teach pupils how to use AI in their learning and over 3.5 times more likely to teach how AI applies within subjects – giving their pupils a substantial early advantage in developing fluency and confidence.
Access to teaching of AI skills is far better in private schools than in state schools
Source: TBI-commissioned polling
Note: Teacher Tapp Survey of 7,817 teachers in England, including those in private schools, conducted 2 July 2025. Question answered by 7,017 teachers (results weighted to reflect national teacher and school demographics).
Taken together, these disparities are laying the foundations of a two-tier system: one in which only a minority of pupils gain meaningful opportunities to use AI. This is not simply a technological divide but an emerging divide in opportunity, confidence and prospects – influencing academic achievement, access to high-value careers and pupils’ ability to participate fully in civic life.
Drivers of Low Adoption
The metrics outlined above suggest that the diffusion of AI in England’s classrooms remains nascent. Some teachers are experimenting, but many others remain hesitant or constrained. Pupils are engaging with AI, but often not in educational contexts. In addition, systematic, school-led integration is still in its infancy. This section examines why adoption in England’s schools remains low.
Lack of High-Level Direction
Whether schools pursue AI proactively or hold back is often a function of leadership. In schools where senior teams articulate a clear vision and outline AI’s benefits, teachers are more likely to deploy AI across a greater range of tasks. When leadership is sceptical or uncertain, use tends to be confined to basic administrative tasks.
At present, few school leaders have engaged directly and proactively with AI. A minority are advancing with clear plans and structures, but most are still at early stages – experimenting informally or waiting for firmer policy signals because they fear missteps.[_] One way this caution can be identified is through school planning data. Most schools have yet to take concrete steps to adapt to generative AI: 32 per cent of leaders report no plans to address the issue, while only 8 per cent have already made changes.
Whether teachers and leaders are considering changes within their school to account for generative-AI tools and technology
Source: DfE
Lack of Expertise
Another barrier to uptake is a gap in workforce knowledge and training. Teachers’ attitudes towards AI are broadly favourable – almost two-thirds of primary and secondary teachers are positive or very positive about technology and AI tools’ potential to support tailored learning.[_] However, a large proportion of teachers do not feel they know enough about AI tools or how to use them pedagogically. Forty-three per cent rate their AI confidence at just three out of ten.[_] Among non-users, 64 per cent say they have not used generative AI because they do not know enough about how it could be applied in their role (see Figure 5).[_]
Teachers and leaders say their lack of knowledge is a key barrier to using generative AI
Source: DfE
While general guidance is emerging – for example, DfE materials, government work with the Chiltern Learning Trust and the Chartered College of Teaching, and resources from unions and professional bodies[_] – there is no systematic approach to embedding AI in initial teacher education or ongoing professional development. Where training occurs, it is very often ad hoc and informal (for example, an “AI champion” running demos and sharing tips),[_] and according to new TBI-commissioned polling, 91 per cent of those using AI in their roles are self-taught.
The majority of teachers who have used AI tools are self-taught
Source: TBI-commissioned polling
Note: Teacher Tapp Survey of 7,817 teachers in England, including those in private schools, conducted 2 July 2025. Question answered by 7,108 teachers (results weighted to reflect national teacher and school demographics). Responses were filtered based on teachers answering that they had at some point used generative-AI tools, based on the question: “When was the last time you used a *general* AI tool (such as ChatGPT, Google Gemini, Microsoft Copilot, DALL-E, Midjourney) to help you with your school work?”
Information Failure
AI tools will characterise the next wave of edtech innovation, offering fresh opportunities to improve outcomes for millions of students. However, another major barrier to the adoption of these tools is information failure. Two asymmetries dominate. First, schools lack accessible, trusted information about AI tools: what exists; which products are reliable, curriculum-aligned and ethically sound; and how they compare. Second, leaders and teachers remain uncertain about what is required for compliance with regulatory standards in day-to-day practice.
Limited Information About AI Tools
Many educators do not have the market knowledge required to make confident choices about AI tools. This is clear even at the level of brand recognition. In a two-stage Teacher Tapp exercise, educators first listed AI tools they had used or tried; researchers then tested recognition and usage of the 28 most-named brands in a second round. Five brands were not recognised at all; 18 were recognised by fewer than 10 per cent of teachers; and only five were recognised by more than 10 per cent. In addition, more than half of teachers had not used any of the brands.
Most educators are unfamiliar with AI-tool brands
Source: Teacher Tapp
Even when teachers are aware of AI tools, more than a third (37 per cent) have concerns about using them[_] – common worries include whether the tool is safe, helps learning or represents good value for money. The supply side does not make selection easier. Product quality is highly uneven, not all offers are mature or evidence-backed, and information is scarce.
Against this backdrop, it is difficult for schools to distinguish robust solutions from over-hyped ones. Memories of past edtech waves – some successful, some disappointing – reinforce caution and raise the burden of proof. Trust in vendor claims is correspondingly low: one industry study found that only 8 per cent of educators trust statements made directly by edtech companies.[_] The subsequent risk is that potentially beneficial innovations are under-adopted (because schools hesitate without proof) and, conversely, that low-quality products are bought based on marketing rather than merit.
Uncertainty About Compliance
There is no dedicated regime that certifies AI edtech for school use. Instead, schools must navigate a patchwork of overlapping requirements – spanning data protection and privacy law, assessment-integrity rules, and safeguarding and online-safety obligations.
Guidance has improved. For instance, the DfE’s updated policy paper on generative AI (August 2025) signposts supporting materials.[_] The department has issued non-statutory product-safety expectations for suppliers, defining what safe tools should include. Guidance from the Information Commissioner’s Office (ICO) on AI and data protection continues to frame schools’ General Data Protection Regulation (GDPR) duties. Ofsted has confirmed it will not judge AI use in isolation, instead considering its impact on learners within existing frameworks.
Yet guidance remains dispersed and seldom comes with practical human support, so there is still considerable uncertainty about what compliance requires. DfE-commissioned research shows that 76 per cent of teachers are not confident advising pupils on appropriate AI use,[_] while nearly one in five teachers do not know whether pupils at their school are permitted to use generative AI.[_] Some schools have banned the use of AI tools outright, citing unresolved questions about safeguarding and privacy standards. And analysis by BCS, the UK’s professional body for computing, shows that many schools lack an agreed approach to AI, which leaves staff guessing about acceptable uses and obligations.[_]
Underdeveloped Digital Foundations
Inadequate digital infrastructure also remains a major brake. Meaningful use of AI depends on fast, reliable connectivity, up-to-date devices and the technical capacity to deploy them. Yet across England these foundations are uneven. Official data show that among teachers who have not tried to use AI, 21 per cent cite either a lack of necessary technology or school-level blocking as the reason.[_]
Lack of connectivity illustrates the point. While most schools have broadband, performance is often patchy and bandwidth insufficient to stream high-volume content or support many devices simultaneously. New analysis commissioned by TBI indicates that progress in expanding reliable WiFi coverage in schools between 2024 and 2025 has been modest at best, while rates of partial coverage have been flat or have slipped, leaving many classrooms unable to rely on whole-school WiFi (see Figure 8).
Reliable WiFi remains out of reach for many schools
Source: TBI-commissioned polling
Note: Teacher Tapp Survey of 7,817 teachers in England, including those in private schools, conducted 2 July 2025. Question answered by 6,968 teachers (results weighted to reflect national teacher and school demographics).
Devices remain a major bottleneck, too. Although access to laptops and tablets for classroom learning has improved since 2024, overall availability is still limited. Primary schools have made modest headway, but progress in secondary settings has lagged, so a large share of pupils still lacks dependable access to a suitable device. According to new TBI-commissioned polling, only 46 per cent of secondary-school teachers say they have access to laptops for in-class learning – when asked about tablets, this drops to 18 per cent.[_]
Affordability compounds these constraints. Nearly every school interviewed by the DfE cites budget limitations as a key challenge to adopting edtech.[_] Budget pressure makes it difficult to refresh ageing devices or fund new licences for AI tools. Early free pilots have shifted toward paid or “freemium” models, and some applications require specialised software or subscriptions that many schools cannot afford.[_]
Compatibility and integration issues then slow progress even where interest and resources are present. Schools must fit new tools around established learning platforms, management-information systems and assessment systems.[_] DfE-commissioned research highlights how integration problems and the absence of common interoperability standards raise switching costs and depress demand.[_]
Underwhelming Market Dynamics
Investment dynamics in the edtech and educational-AI sectors also help explain why AI diffusion remains slow and uneven. To better understand these dynamics, TBI conducted an analysis of England’s edtech market using data from Tracxn,[_] examining investment patterns, funding stages and rates of company progression. The findings point to a market with potential, though weaknesses are currently impeding its growth and capacity for innovation.
Investment in Edtech Is Faltering
There is no shortage of ideas in England’s edtech sector. Since 2015, more than 2,600 companies have been founded and the sector has seen 627 recorded funding rounds, amounting to £3.3 billion[_] in total, the largest investment in Europe. However, the broader pattern of investment in England’s edtech market lags behind that of other domestic sectors and international peers. Across key indicators – the progression and size of investment – the edtech market shows signs of faltering, with potentially serious implications for pupils’ and teachers’ access to high-quality tools.
First, progression through funding stages is weak. Of the companies founded since 2015, only 5.5 per cent have raised at least seed funding, compared to 10.2 per cent of companies in England’s health-tech sector.[_] There were only 11 seed-funding rounds in 2024, compared to 58 in 2019. As a result, early-stage funding is increasingly concentrated among a small minority, with limited capital available to scale promising innovations.
Second, the size of investment rounds lags behind both domestic health tech and international comparators. Early-stage funding rounds (Series A and Series B) – which enable firms to scale their products and expand market reach – averaged $5 million in 2024,[_] compared with $15 million in US edtech and $10.7 million in domestic health tech. The median Series A size has also dropped, suggesting a broad-based contraction.
As Figure 9 shows, total early-stage funding in health tech has more than tripled, from $130 million in 2015 to $418 million in 2025, while edtech early-stage funding has almost halved, from $101 million to $57.2 million over the same period.
Funding in edtech and health tech in England
Source: TBI analysis of Tracxn data
Note: Indexed to 2015 = 100
England is also falling behind more dynamic international markets, as Figure 10 highlights. For instance, Germany’s smaller market is showing signs of growth, with early-stage funding rising from $36.2 million in 2019 to $60.8 million in 2025.
Funding in edtech in England, US, India and Germany
Source: TBI analysis of Tracxn data
Note: Indexed to 2015 = 100
Investment in Schools’ Edtech Is Being Eroded
While overall edtech investment is declining, so too is the share of this investment directed towards schools-focused tools. Companies providing tools for schools have seen their share of total edtech investment drop dramatically – from 20 per cent in 2020 to just 4 per cent in 2025. Once the broader contraction in edtech funding is factored in, total investment in school technology has fallen by 96 per cent – from $32.8 million in 2020 to just $1.3 million in 2025.
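For reference, the 96 per cent figure follows directly from the reported totals:

\[
1 - \frac{1.3}{32.8} \approx 0.96
\]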
The schools-tech market has shrunk relative to other parts of the edtech market
Source: TBI analysis of Tracxn data
Educational-AI Investment Lags Behind Other Sectors and Risks Further Decline
The share of overall edtech funding directed towards educational AI has risen in recent years, with much of the investment focused on school-level applications. The UK’s largest educational-AI provider, Century (founded in 2014), has raised $12.2 million in multiple funding rounds to develop its personalised learning platform. More recently, UK company Chalkie secured £1 million in October 2025 to support the development of its AI-generated lesson-planning tools for teachers.[_] Yet the educational-AI sector remains nascent, accounting for only 11 per cent of total edtech investment in 2025.
The broader weaknesses in the edtech market are particularly concerning for educational AI, where early momentum could easily stall. Investment in educational-AI companies has arrived later – and at lower levels – than in comparable sectors. It now faces additional pressure as wider market conditions tighten.
While there has been a modest uptick in AI spending over the past two years, the overall share remains small: since 2020, only 0.7 per cent of total edtech funding in England has gone to AI-focused companies, compared with 22 per cent in health tech.
Percentage of total investment allocated to AI companies
Source: TBI analysis of Tracxn data
Compounding this challenge, the UK continues to lag behind international peers. Total UK educational-AI investment stands at $28.9 million, just 2 per cent of the US’s total of $1.2 billion. Although the US naturally invests at a greater scale, it has also consistently devoted a larger share of overall edtech funding to educational AI than England in every year since 2020.
This gap is even clearer when looking at recent company progression. Since 2020, only one English educational-AI firm – Graide Assessment – has advanced from seed to Series A, and none has progressed to Series B. In contrast, 13 US companies have moved from seed to Series A, and eight from Series A to Series B, underscoring the structural barriers to scaling innovation in the UK.
Without targeted intervention, the relative weakness of England’s edtech and educational-AI markets could significantly constrain the development of home-grown, technology-enabled transformation in schools – leaving England increasingly dependent on importing international innovation.
Chapter 3
Despite isolated examples of innovative uptake, AI adoption across schools remains limited, fragmented and uncoordinated. To move beyond this shallow diffusion, the system needs a new policy framework that builds the foundations for safe, effective and widespread use. In practice, this requires the DfE to set out a clear, system-wide AI action plan for schools.
In developing such an approach, the DfE would not be starting from scratch. Across government, early examples are emerging of departments setting out structured plans to guide the safe and effective adoption of AI within their own systems and services. The Ministry of Justice (MoJ), for example, has established an AI action plan covering governance, ethics and assurance, capability-building within the department, the safe deployment of AI tools in frontline services, and the development of enabling data and infrastructure.[_] Central to this approach has been the creation of dedicated internal capacity to oversee delivery.
The DfE’s version of such a plan would, of course, need to reflect the distinctive characteristics of the education system – its scale, its diversity of institutions, and the central role of teachers and school leaders – as well as its strategic goals.
However, principles from the MoJ’s approach could be transposed. In particular, the DfE should establish a dedicated AI unit, recruit and retain appropriate technical and delivery expertise, and assign clear senior accountability for AI adoption to a designated Chief AI Officer. This unit would be responsible for overseeing the rollout and ongoing management of the reforms set out below, including, where specific proposals require it, working in partnership with other agencies and system actors that hold relevant responsibilities or expertise.
The end goal should be schools and teachers empowered to use AI; universal access for teachers and pupils to safe, high-quality, affordable tools; and a thriving domestic edtech market that innovates upon the baseline infrastructure provided by government.
Before identifying the policies needed to achieve this, it is worth stepping back to consider the foundations on which successful diffusion depends: what needs to be in place for safe, high-quality tools to take root and become available across the system? Achieving broad and effective diffusion of AI in education depends on getting five segments of the diffusion cycle right. Each segment represents a distinct point of intervention within a continuous cycle and calls for a tailored policy response.
Government has a key role to play across every segment of the diffusion cycle – both in enabling the conditions for progress at each point and, in some cases, in directly building elements of the enabling architecture. The sections that follow outline how government can act as both catalyst and creator across the full diffusion cycle.
The diffusion cycle
Source: TBI
Segment 1: Investing in AI Tools and Ensuring Product Diversity
For AI to take root across schools, government must lead in creating the conditions that allow innovation to flourish. That means delivering segments two to five in the diffusion cycle. However, it also requires strategic public investment in certain tools themselves – not to compete with the private sector but to help shape the market: setting baselines, filling gaps and catalysing innovation. Government should focus its direct building efforts in the following areas.
Develop Baseline Tools Available to All Schools
The government’s direct role in developing tools should be selective, purposeful and complementary to private-sector activity. Its focus should be on building baseline products that guarantee universal access, provide immediate value to end users and set standards for others to build upon. These tools should be able to scale easily and, where defined conditions are met, pivot or be sunset – as shown by the recent example of the Incubator for Artificial Intelligence (i.AI)’s Redbox tool.[_]
By providing baseline offerings in key product categories, government can achieve several outcomes simultaneously. It can:
Establish a benchmark for quality and safety.
Give all schools, regardless of wider resources, access to safe and functional AI tools.
Set market signals that support innovation: when government builds a foundational product in an area, it invites the market to innovate upon that baseline, developing more advanced, specialised and differentiated products that build beyond the core.
These public tools should also remain interoperable, allowing private developers to plug into shared infrastructure. Wherever possible, underlying data sets should be made accessible, under robust governance frameworks, to promote further innovation while upholding privacy and security.
One such tool that the government should build is a national AI teaching assistant. The assistant would operate within a single safeguarded environment and would be primarily oriented around supporting teachers (Teacher Mode), while also including a Student Mode. It would build on the example of Australian AI tool NSWEduChat – which, after a successful trial in 50 schools, has now been rolled out to all public schools in New South Wales.
Teacher Mode would function as a general-purpose professional assistant. Alongside structured lesson planning, teachers could brainstorm lesson ideas, generate multiple versions of the same lesson content adjusted for different ability levels, draft letters and emails, summarise documents and obtain rapid explanations to refresh subject knowledge. Importantly, teachers could use the assistant during live classroom time: designing activities where pupils interact with the tool, previewing how Student Mode guides pupils to reach their own conclusions rather than answers questions for them, and creating tasks that ask pupils to critique or improve AI-generated output.
Student Mode would serve as a learning-support assistant that scaffolds thinking, prompts explanation and reasoning, and supports study beyond class time, while refusing to complete assessments for pupils. Its primary purpose as part of a national baseline tool would be to enrich teachers’ instruction and model effective use of AI in learning contexts, rather than to replace teaching. This capability already exists, as demonstrated by the version deployed in New South Wales[_] and other models that are making rapid progress, like Google’s LearnLM[_] – and there is no reason to wait for future models before giving pupils access to such a tool, even as the underlying technology continues to progress.
At the same time, the inclusion of a Student Mode would act as a clear signal to the market that government recognises the importance of learner-facing AI tools and expects innovation in this space to continue. Alongside the national baseline, government should actively support private-sector development of more advanced AI-tutoring and learning-support products, recognising that models, pedagogical approaches and capabilities are evolving rapidly.
Crucially, the national AI teaching assistant should evolve into a framework that allows trusted private-sector tools to be interoperable with the baseline tool, creating a coherent ecosystem rather than a series of disconnected products.
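As a purely illustrative sketch – the field names, identifiers and format below are hypothetical, not a proposed specification – such a framework would typically define a minimal, openly documented exchange record that both the baseline assistant and accredited third-party tools could read and write without handling identifiable pupil data:

```python
# Illustrative only: a hypothetical exchange record for a schools-interoperability
# framework. All field names and identifiers are invented for this sketch.
from dataclasses import dataclass, asdict
import json


@dataclass
class LearningActivityRecord:
    learner_ref: str          # pseudonymised digital-learner identifier, never a name
    provider_id: str          # accredited tool or vendor identifier
    subject: str              # e.g. "mathematics"
    activity_type: str        # e.g. "formative_quiz", "tutoring_session"
    duration_minutes: int
    attainment_signal: float  # normalised 0-1 indicator the receiving system can interpret


def to_exchange_payload(record: LearningActivityRecord) -> str:
    """Serialise a record to a (hypothetical) open JSON exchange format."""
    return json.dumps({"version": "0.1", "record": asdict(record)}, indent=2)


if __name__ == "__main__":
    example = LearningActivityRecord(
        learner_ref="dli-7f3a9c",       # resolvable only by the pupil's school
        provider_id="vendor-042",
        subject="mathematics",
        activity_type="formative_quiz",
        duration_minutes=25,
        attainment_signal=0.72,
    )
    print(to_exchange_payload(example))
```

The value of such a framework lies less in any particular schema than in the governance around it: agreed field definitions, pseudonymisation handled at source and open, non-exclusive access to the standard so that no vendor is locked in or locked out.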
Address Any Market Failures and Niche Needs
There will be cases where the market cannot deliver what is needed. In smaller or more specialised areas, commercial returns may be too limited for the private sector to sustain development of new tools. Here, government has a clear mandate to step in. Public development in these areas can prevent structural inequities and ensure every learner benefits from technological progress.
Inspiration could be drawn, for instance, from the AI tools being piloted in local government. One such tool helps councils to generate documents in an “easy-read” format for citizens who have severe learning disabilities, automating a previously time-consuming and expensive process, with a return on investment for the public sector of 749,900 per cent.[_]
Develop Clear Signals
Effective investment across the AI-education landscape depends on clear government signalling. To give the market confidence about its intentions, it should signal where:
It will build: government should be explicit about the narrow and defined circumstances in which it will develop baseline tools – doing so to set minimum standards, ensure equitable access and establish a platform for further innovation.
This activity stops: making clear that it will not overreach or seek to dominate the market but rather create the conditions for others to build upon.
It will intervene: in areas of market failure, such as niche or low-return products, government should target investment carefully, intervening when confident that private provision cannot viably emerge.
Segment 2: Developing High-Quality Prototypes
The development of high-quality AI tools depends on access to equally high-quality data. Government’s role at this stage in the innovation cycle should be to leverage its unique access to secure, well-governed public data to catalyse innovation. And while the UK is unlikely to compete at scale with global leaders in frontier-model development, it can be a maker rather than a taker in how AI is applied – including becoming a world leader in AI for education, where smart use of public data gives the country a distinctive advantage.
Unlocking the Power of Data as a Public Good
Among the most powerful levers available to government is its unparalleled access to data. Decades of educational administration, accountability reporting and assessment have generated a vast resource that, if used responsibly, could transform how tools are built and evaluated. Yet much of these data remain underused or inaccessible to those best placed to turn them into innovation that benefits teachers and pupils alike. To date, there has been little open discussion about the principles guiding what data are gathered, why they are needed and how they should serve the public good.
This underutilisation represents a missed opportunity. For AI models, data quality and, in many cases, breadth are essential to achieving accuracy, fairness and utility. DfE analysis already shows that when generative models are trained on targeted educational data, accuracy rates can reach 92 per cent, compared to 67 per cent without such data.[_] By responsibly opening access to this resource – with strong safeguards and a clear sense of purpose – government could enable a new wave of AI-driven tools that both improve learning outcomes and stimulate commercial investment.
The Content Store: Data Innovation in Action
The DfE’s new content store offers a glimpse of what this future could look like. The store aggregates educational materials – such as curriculum guidance, anonymised pupil assessments and lesson plans – that can be used to train AI models for marking, developing teaching materials and managing administrative tasks. By bringing these data sets together, it lowers one of the biggest barriers to entry for vendors – access to safe, usable, well-tagged training data.
The process of developing the store has also forced early engagement with some of the most complex questions about public data in the age of AI – around ownership, intellectual property and value-sharing. For instance, exam boards own the copyright to the questions they set, but pupils retain rights to their answers. How should rights, responsibilities and value be allocated when AI systems draw on both proprietary assessment content and learners’ own data in combination? Similarly, if AI models built using public data generate significant commercial returns, how should the public benefit be captured? These are not yet settled issues, but the UK’s early engagement with these questions puts it ahead of the curve internationally. As the store matures, it can serve as a model for how to balance innovation with accountability and trust.
Embedding the Content Store Within the National Data Library
The content store can lower barriers to entry in the AI-in-education market, thereby boosting innovation. Most startups, researchers and non-profits simply do not have the technical and legal resources required to compile the data sets and undertake the processes (extensive tagging, determining IP permissions) required to obtain high-quality training data. More established companies would benefit too: the store will allow them to focus their investment on building value-added tools rather than reassembling raw inputs.
As the content store moves from design and test phases into full implementation, it should ultimately sit within the remit of the NDL, which can provide consistent governance and alignment with wider cross-government data policy. The NDL would oversee the store’s licensing model, ensuring that access to data reflects its status as public infrastructure, balancing innovation with public trust.
The next phase of rollout should allow qualified third-party developers to access data under defined conditions (outlined below). Over time, and subject to evaluations, this should expand into a broader system of tiered access for accredited developers and researchers, and new data sources should be integrated to deepen the store’s utility – including, for instance, data sourced from volunteer schools as part of the testbed network described in segment 3 below.
A dedicated governance body, housed within the NDL, should oversee approvals and enforce the usage conditions set out below, supported by a Data Offenders’ Register to penalise misuse and maintain public confidence.
Spotlight
Access to education data must rest on clear principles that preserve trust while enabling innovation:
Safety and ethical use: all applications must comply with safeguarding requirements, privacy laws and relevant data-protection standards.
Public interest: access should be limited to projects with clearly defined social value – such as improving learning outcomes, addressing attainment gaps or reducing teacher workload.
Transparency and provenance: developers should disclose which data sets are used to train AI models and for what purpose, enabling auditability and accountability (an illustrative, machine-readable disclosure is sketched after this list).
Openness and competition: no actor should enjoy privileged or disproportionate access; fair licensing should prevent market capture and ensure a level playing field.
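To illustrate the transparency-and-provenance principle above, the sketch below shows one hypothetical form a machine-readable disclosure could take; the schema, names and values are invented for illustration rather than drawn from any existing standard:

```python
# Hypothetical provenance disclosure a developer might publish alongside a model
# trained on content-store data. All names and values are illustrative.
import json

provenance_disclosure = {
    "model": "example-marking-assistant-v1",
    "developer": "Example EdTech Ltd",
    "training_datasets": [
        {
            "name": "DfE content store: anonymised key-stage-3 assessments",
            "licence": "non-exclusive public-interest licence",
            "purpose": "improve accuracy of formative-assessment feedback",
        }
    ],
    "public_benefit_statement": "reduce teacher marking workload",
    "last_updated": "2025-07-01",
}

print(json.dumps(provenance_disclosure, indent=2))
```

A register of such disclosures, maintained by the governance body described above, would give schools and researchers a simple way to audit which public data sets sit behind any accredited tool.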
Ensuring Public-Benefit Returns
Because much of the store’s content derives from publicly funded or contributed materials, models trained on these data should give something back to the system – a measure that would also help build trust among parents and educators.
There is clear public value in enabling developers to build tools that improve educational outcomes. Beyond this, as the content store moves into the NDL’s remit, its access and charging model should align with whatever framework the NDL ultimately establishes for public-sector data. If the NDL adopts commercial licences that allow for cost recovery and the reinvestment of any surplus into public services, then the same model should apply to access to the content store.
Under such a model, developers should retain a reasonable operating margin, but a modest proportion of funds would flow back into the education system. These returns should support the wider ecosystem that makes innovation possible, including initiatives such as the AI Readiness Fund, the national testbed network or new digital infrastructure recommended in this paper.
Segment 3: Testing Efficacy and Meeting Minimum Standards
Innovation only matters if it works in the real world. For AI tools in education, that means two things: the ability to demonstrate impact through real-world testing, and the assurance that every product meets minimum safety and quality standards. These two goals are mutually reinforcing – effective tools cannot scale without trust, and trust cannot be built without evidence.
Building a Permanent Testbed Network for Market Tools
One of the biggest obstacles facing edtech innovators is the difficulty of trialling products in real school environments. There are few systematic routes for companies or researchers to pilot their tools, and many schools are understandably cautious. Tight budgets, staff workload pressures and a legacy of poorly evaluated technologies have made them wary of experimentation.
This lack of structured test-bedding holds back both sides. Innovators struggle to refine their tools or demonstrate real-world impact, while schools miss the chance to influence product design and ensure solutions reflect genuine classroom needs. The result is a cycle of under-evidencing and under-adoption: half of teachers cite a lack of staff knowledge about what products do as a key barrier to using them.[_]
To overcome this, government should establish a permanent national testbed network that connects schools, researchers and edtech developers to trial, evaluate and refine AI tools. This should be a standing system, not a time-limited programme, offering stability for schools and a clear signal to investors that the government is serious about evidence-based innovation.
Drawing on international models such as New York City’s iZone, the programme should operate around short, iterative evidence cycles for tools – typically six to 12 weeks long – that enable rapid learning and improvement. These trials would produce practical insights within school terms rather than academic years, lowering barriers to participation while maintaining rigour.
Participating schools should receive support and incentives to take part. This could include release time for teachers, financial assistance to offset additional workload and access to independent evaluation expertise. In time, the network could be linked to regional digital hubs or advisors to provide practical help and ensure insights are shared across the system.
Crucially, the testbed should be technology-agnostic. Schools’ needs vary widely, and innovation should not be pre-emptively channelled into narrow categories. Keeping the programme open to different technologies – from adaptive learning systems to administrative automation – would encourage experimentation and ensure the market evolves dynamically rather than being constrained by pre-set priorities. Because certain applications can be trialled more quickly and at lower risk than others, the test-bedding framework should also be stratified, with differentiated pathways and requirements that reflect the varying levels of challenge involved.
By embedding the testbed as a core piece of national infrastructure, the government would create a virtuous cycle: schools become confident co-designers of technology, companies gain the evidence they need to scale responsibly and investors see a system geared for sustainable innovation.
Assurance and Minimum Standards
Alongside efficacy testing, government should ensure that AI tools meet minimum safety and reliability standards. Realising AI's benefits – and building public trust – depends on careful management of its risks, from exposure to unsuitable content to breaches of pupil data and algorithmic bias.
Why Standards Matter
A chief concern is safety. Many AI applications – both existing educational-AI applications and non-edtech AI that could be adapted for use in schools – have not yet been developed with younger children in mind and could inadvertently present them with harmful or age-inappropriate content.[_] Without adequate safeguards like content moderation and adult oversight, they could compromise safe learning environments.
Privacy is another concern. AI tools often process large volumes of sensitive pupil data – including, for instance, personal details, assessment records and real-time interactions. Without clear governance and robust data-handling protocols, schools risk breaching data privacy – by using personal or sensitive information without proper authorisation – and data-protection requirements, by failing to safeguard data from unauthorised access or misuse.
Bias in data and algorithms also poses a risk if not managed well. The 2020 grading-algorithm controversy, in which a hastily deployed algorithm disadvantaged high-achieving pupils from historically lower-performing schools, underscored the potential for technological solutions to amplify inequity.[_] Similarly, studies have shown that some AI-powered plagiarism detectors disproportionately flag essays written by non-native English speakers.[_]
Another area that will need to be managed is reliability. AI systems can produce inaccurate or misleading information (“hallucination”) that appears credible to pupils and teachers alike. They can also blur boundaries in assessment, generating work indistinguishable from pupil output[_] and undermining confidence in qualifications.
Beyond ensuring safety, privacy and reliability, clear and proportionate standards would also enable innovation rather than restrain it. When vendors understand the benchmarks they must meet, they can design to those specifications with confidence. Predictable rules lower uncertainty, attract investment and raise the overall quality of products in the market. Establishing sensible standards would therefore encourage responsible innovation and signal to investors that England is a credible, trusted environment for AI in education.
What Minimum Standards Should Include
To build trust and consistency across the system, there should be clear baseline standards around safety and reliability that products must meet to be eligible for use in schools.
Safety requirements are the essential conditions of entry to the education-AI ecosystem. Products that fail to meet them should not be permitted for use in classrooms. These requirements focus on core areas of pupil safety and data protection:
Safety and safeguarding: tools must have strong content-moderation systems in place to protect pupils from harmful or distressing material, including harm ideation and bullying, and must be age-appropriate in design with clear routes for adult oversight.
Data protection and privacy: systems must comply with GDPR and sector-specific guidance on data minimisation, retention and secure processing.
Fairness and non-discrimination: models must be tested for bias across demographic groups, with transparent evidence of how mitigation steps have been implemented.
Beyond safety, there should be a set of baseline reliability requirements focused on the educational integrity of tools. This is not a matter of safeguarding, but of ensuring that AI products enhance, rather than confuse, pupils' learning. Examples include:
Curriculum alignment: tools should be demonstrably consistent with the national curriculum – including forthcoming revisions – and appropriate for the age and stage of learners.
Accuracy and data integrity: outputs should be based on reliable and verifiable information, with clear documentation of data sources and model training.
Transparency and explainability: developers should disclose a tool’s known limitations, error rates and update cycles, and provide clear information on how outputs are generated.
Developing and Placing Standards on the Right Footing
In practice, establishing an effective assurance regime involves three distinct but related tasks: first, developing clear minimum standards; second, placing those standards on an appropriate footing within the system; and third, providing vendors with a straightforward, credible way to test their products against them.
Work on minimum standards and the footing on which they sit should not be treated as a one-off exercise. Given the pace of change in AI capabilities, standards will need to be iterated over time and informed by ongoing engagement between government, developers, educators and other experts.
This convening and iteration would benefit from being underpinned by a structured, temporary and exploratory process. The government’s proposed AI Growth Lab provides a promising model. The Growth Lab is intended as a space for government and industry to collaborate on emerging AI applications, test regulatory and policy approaches, and gather evidence before establishing suitable frameworks. An education-focused workstream should be built within the Growth Lab to support the co-development of minimum standards for AI in schools and to explore the most appropriate footing for those standards. Its purpose would be to gather evidence and stakeholder views on what standards are most effective and how they should be sustained over time, rather than to prematurely entrench existing models in a rapidly evolving technological landscape.
Testing Against These Benchmarks
Once standards are agreed and placed on an appropriate footing, testing becomes a distinct delivery function: the means by which products are assessed against those standards in practice.
The next evolution in assurance will be the ability to test for many of these features automatically. AI products can be connected to independent evaluation tools that determine whether they meet certain standards, without those tools needing access to proprietary source code.
For example, technology can already assess measurable features of a product, such as how closely its outputs align with the national curriculum or whether it gives age-appropriate responses. The content store could also act as a reference data set, powering benchmark tests that evaluate models for curriculum coverage, for instance.
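To make this concrete, the sketch below shows, in outline, how a licensed testing laboratory might run an automated curriculum-coverage benchmark against a vendor's model via its hosted interface. It is a minimal illustration only: the data structure, the query_model placeholder and the pass threshold are hypothetical, not features of any existing assurance framework.

```python
# Illustrative sketch only: an automated curriculum-coverage benchmark that a
# licensed testing lab might run against a vendor's model. All names here
# (EvalItem, query_model, the 0.8 threshold) are hypothetical.
from dataclasses import dataclass


@dataclass
class EvalItem:
    key_stage: str               # e.g. "KS3"
    subject: str                 # e.g. "science"
    prompt: str                  # question drawn from the reference content store
    expected_points: list[str]   # key facts a curriculum-aligned answer should cover


def query_model(prompt: str) -> str:
    """Placeholder: in practice the lab would call the vendor's hosted API here,
    so no access to proprietary source code or model weights is needed."""
    raise NotImplementedError


def coverage_score(answer: str, expected_points: list[str]) -> float:
    """Crude proxy: the fraction of expected key points mentioned in the answer.
    A real benchmark would use more robust matching or expert grading."""
    answer_lower = answer.lower()
    hits = sum(1 for point in expected_points if point.lower() in answer_lower)
    return hits / len(expected_points) if expected_points else 0.0


def run_benchmark(items: list[EvalItem], threshold: float = 0.8) -> dict:
    """Average coverage across the reference items; pass/fail against a threshold."""
    scores = [coverage_score(query_model(item.prompt), item.expected_points)
              for item in items]
    mean_coverage = sum(scores) / len(scores)
    return {"mean_coverage": mean_coverage, "passes": mean_coverage >= threshold}
```

In practice, checks of this kind would form one strand of a broader evaluation suite, alongside tests for age-appropriateness, bias and safety.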
This makes it increasingly feasible to run standardised, technology-enabled evaluations at scale – moving assurance from manual audit to automated, continuous testing. The policy question, therefore, is how to embed these new testing capabilities within the broader compliance system.
A three-actor model would best balance expertise, scalability and trust. First, the DfE’s new dedicated AI unit should retain overall responsibility for the assurance regime, setting requirements and overseeing delivery. Second, the unit should work closely with the AI Security Institute (AISI), drawing on its technical expertise in model evaluation and, where appropriate, facilitating secondments or joint working arrangements. Third, the practical testing itself should be contracted to licensed external laboratories with the necessary technical capability and reach, operating under strict terms and oversight.
This approach would allow testing capacity to scale without requiring government to build extensive in-house infrastructure, while also helping to stimulate a domestic ecosystem of AI testing and benchmarking expertise. To expedite development by removing cost frictions, and to boost compliance with emerging standards, government should guarantee all vendors free access to testing.
In the first instance, participation in this testing regime should be voluntary, allowing vendors to opt in to licensed laboratory testing and to display certification where standards are met. Over time, as awareness and demand for assurance grow, market dynamics are likely to encourage widespread participation, establishing voluntary testing as a de-facto expectation. Government should monitor uptake and outcomes closely, and retain the flexibility to adapt the model if stronger levers prove necessary.
Segment 4: Ensuring Demand-Side Readiness
Even the best-designed AI tools will fail to make an impact if schools are not ready to adopt them. Diffusion ultimately depends on demand: the confidence, capability and capacity of leaders and teachers to use technology well, and the information systems that help them make informed choices. Yet schools today face a combination of barriers – constrained budgets, patchy digital maturity, low confidence and an opaque marketplace – that inhibit adoption. This phase of diffusion is therefore about ensuring schools have the resources and information to engage with AI on their own terms.
Building Leadership Buy-In: an AI Readiness Fund
Budgetary pressures are among the biggest brakes on AI adoption. After years of financial strain, many schools have cut back on digital investment: almost half of secondary-school leaders and more than half of primary-school leaders in England report scaling back IT spending for financial reasons.[_] Teacher Tapp data show a decline in whole-class access to devices since 2022,[_] and according to government research, nearly every school interviewed highlighted budget limitations as a key challenge in adopting edtech.[_] As a result, schools often default to free or multi-purpose platforms, or stick with outdated incumbent systems, limiting their ability to trial emerging technologies.[_],[_]
Currently, there is no dedicated funding from government to cover AI-tool subscriptions for schools. This means any spending must come from school or trust budgets. To unlock progress, government should establish a ringfenced AI Readiness Fund, allowing schools to draw down targeted support for adoption.
Access to the fund would be contingent on schools forming credible plans for integrating AI into teaching and learning. But the purpose would be empowerment, not prescription – the bar for approval should be designed to promote engagement, reflection and planning rather than compliance.
Schools applying for support would: articulate a clear vision for how AI will advance their specific learning or operational goals; allocate a portion of funds to workforce upskilling and staff capacity-building; and commit to using tools that meet recognised safety and reliability standards. There would be no restriction on tool type or approach, enabling flexibility and innovation.
DfE advisors would provide planning support for schools that wanted it. A national AI-planning toolkit could offer templates, best-practice examples and peer-reviewed guidance. Regional workshops, peer networks and virtual drop-in clinics would help schools share learning and navigate the marketplace. Feedback loops would allow schools to refine proposals rather than face outright rejection. And schools could opt to apply jointly, via multi-academy trusts, consortia or local authorities, to share expertise and reduce administrative time.
To ensure the fund is financially sustainable, a portion of revenues generated through the public-benefit return mechanism proposed in segment 2 could be directed towards it. This would create a self-reinforcing funding loop, where value generated from public data is reinvested to help schools adopt and integrate AI safely and effectively. The fund should sit alongside other avenues of support, such as public–private partnerships.[_]
Empowering Teachers: AI Leads
Teachers are central to diffusion. Adopting AI often requires shifts in pedagogy and classroom management – not simply technical ability. Change therefore needs to be led deliberately, with professional learning pathways and clear roles to support it.
The first step should be to update the Teachers’ Standards and the Initial Teacher Training and Early Career Framework to include AI proficiency.[_] All teachers should have access to structured opportunities to develop core digital and AI literacy – understanding what tools can (and cannot) do, how to interpret outputs, and how to use them ethically and effectively in practice.
But internal champions are also crucial. Recent Ofsted research highlights how early-adopting schools are approaching this: establishing cross-departmental AI committees, appointing AI champions, aligning AI use with strategic goals such as workload reduction, and updating governance frameworks. In many schools, AI champions – usually tech-savvy teachers or senior leaders – have proved pivotal, running workshops and offering one-on-one help.[_]
Building on these insights, every school should designate an AI lead: a named individual responsible for coordinating adoption, supporting peers and acting as a liaison with the wider system, with access to career-boosting progression and leadership routes. Groups of AI leads should then be networked into regional AI hubs, providing mutual support, sharing case studies and linking schools with trusted experts. International examples, such as Estonia's nationwide digital-lead programme, show how this kind of distributed leadership can drive system-level capability.
The national AI teaching assistant proposed in segment 1 would directly reinforce these measures by providing a practical, hands-on onboarding route for educators. It would give staff immediate access to core AI functions and classroom applications, helping them develop confidence and capability as they adopt AI-enabled practices.
Segment 5: Deployment and Enabling Infrastructure
Even with strong demand and increasingly powerful tools, AI diffusion in schools will grind to a halt unless government ensures the digital foundations needed to use these technologies are in place. But as this paper has demonstrated, England is nowhere near that baseline. Connectivity remains patchy; access to devices is highly uneven; and too many schools rely on fragmented, incompatible systems that hamper the adoption of tools. Unless government moves urgently to fix these fundamentals, AI in education will remain a collection of fragmented initiatives, not the system-wide transformation the country needs.
Universal Connectivity and Core Infrastructure
Government must treat school digital infrastructure as critical national infrastructure – essential public capital on which the future of the education system depends. This requires a dramatic, accelerated scale-up of investment to ensure every school has high-speed broadband, resilient WiFi, and the technical capacity to manage devices and connectivity securely and at scale.
Government should set a clear deadline: within 12 months, every school must have access to high-speed broadband and reliable whole-school WiFi. Where traditional rollout is too slow or prohibitively complex – including in rural or remote communities – government should be ready to plug the gaps through alternative solutions such as satellite-enabled connectivity to guarantee universal access.
These investments are not discretionary enhancements. They are the non-negotiable prerequisites for safe, effective and equitable AI adoption. Without them, schools simply cannot participate in the next wave of educational innovation.
Ensuring Equitable Device Access
Digital equity also means ensuring that every pupil and teacher can actually use AI tools in practice – something that is impossible without consistent access to suitable devices. Government should adopt a hybrid national access model, with Bring Your Own Device as the default for laptops and tablets, backed by a guaranteed, government-funded device-loan scheme for pupils who lack personal access. Pupils’ access would be strictly limited to approved educational platforms and classroom-appropriate functions.
Under this model every teacher and secondary-school pupil would have individual access to a device suitable for AI-enabled learning. Primary schools would maintain a minimum ratio of one device per five pupils, supported by shared classroom sets. The programme should be monitored and regularly updated, ensuring that device ratios remain suitable and schools can adapt as technologies evolve.
To reduce costs and environmental impact, government should partner with industry and other public bodies to reclaim, refurbish and redeploy devices slated for disposal. This would expand access while promoting circular, sustainable technology use. Estonia offers a precedent: banks there collaborate with recyclers to repurpose well-maintained corporate devices for schools, showing how public–private partnerships can combine sustainability with digital inclusion.[_]
Interoperability
Even well-connected schools will struggle if digital systems cannot “talk” to one another. Today, schools use multiple management systems, learning platforms and specialist tools, many of which operate in isolation. This lack of interoperability – the ability of different software systems to exchange and use data seamlessly – creates two major problems:
Schools become tied to specific providers because switching systems is costly and complex. This stifles innovation and constrains choice.
Teachers and leaders cannot draw insights across tools. A school may use one system for attendance, another for assessment and a third for behaviour – but none shares data. As a result, opportunities to understand learning patterns holistically are lost.
When systems are interoperable, however, their value multiplies. AI tools can pull data securely from existing platforms to personalise learning or streamline reporting, while teachers can view unified dashboards that give a fuller picture of progress.
The government should develop an interoperability framework, co-designed with schools, industry and standards bodies, to enable secure and consistent data exchange between school systems and AI tools. The framework should clearly define what interoperability means in practice – covering both the technical connections between systems and the data standards that ensure coherence. All tools procured by publicly funded schools should be required to meet a common set of open standards, including open APIs for secure data transfer, standardised data formats and compatibility with major learning and management information systems.
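To illustrate what such common standards might involve, the sketch below defines a minimal, standardised assessment record that a management information system could expose over an open, authenticated API and that an AI tool could then consume. The schema and field names are hypothetical examples for this paper, not an existing open standard.

```python
# Illustrative sketch only: a minimal, standardised assessment record that school
# systems and AI tools could exchange. The "AssessmentRecord" schema and its field
# names are hypothetical examples, not an existing open standard.
import json
from dataclasses import dataclass, asdict
from datetime import date


@dataclass
class AssessmentRecord:
    pupil_id: str        # pseudonymous identifier, e.g. a digital-learner identity
    subject: str
    assessment_date: date
    score: float
    max_score: float
    source_system: str   # the platform that produced the record

    def to_json(self) -> str:
        """Serialise to a JSON payload any compliant tool could consume."""
        payload = asdict(self)
        payload["assessment_date"] = self.assessment_date.isoformat()
        return json.dumps(payload)


# A management information system could expose records like this over an open,
# authenticated API; an AI tool could then read them to personalise learning.
record = AssessmentRecord(
    pupil_id="p-4821",
    subject="maths",
    assessment_date=date(2025, 3, 14),
    score=42.0,
    max_score=60.0,
    source_system="example-mis",
)
print(record.to_json())
```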
AI will reshape education whether government acts or not. The question is whether this transformation is guided deliberately, in the public interest, or allowed to unfold unevenly, leaving schools to navigate change without clear direction or support. With traditional levers exhausted and pressures mounting, inaction would leave progress to local capacity and chance – widening divides and squandering opportunity.
By contrast, getting diffusion right would deliver profound and lasting gains: empowering teachers, personalising learning for every pupil, narrowing attainment gaps and building a stronger edtech ecosystem. With a clear AI action plan for schools, England can move decisively from isolated experimentation to system-wide impact – securing better outcomes for learners today and long-term prosperity for the country as a whole.
Notes on Methodology
Tracxn is a provider of private-market data, tracking more than five million companies globally to enable detailed analysis of startup ecosystems. Using Tracxn’s sectoral tagging, we identified 4,327 edtech companies operating in England. Of these, 1,473 focus specifically on the schools sector, and a further 1,023 are labelled by Tracxn as “enterprise” or “B2B2C” firms, indicating they sell directly to schools.
To segment the edtech-AI market, we used Tracxn's "special flags" for companies, including "AI Native", which Tracxn defines as follows: "AI-Native companies have core offerings built on proprietary AI/ML technologies. This includes infrastructure providers, AI-native applications and platforms fundamentally powered by AI. Companies with significant AI but not the primary product (e.g., Nvidia, Google) are classified as Debatable". We included companies flagged as "Yes" and "Debatable", a total of 82 in England.
The data set covers funding activity from 2005 to the present, capturing 457 funding rounds across the English edtech market during this period.
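For transparency, the sketch below indicates how this segmentation could be reproduced from a Tracxn export using pandas. The file name and column names are assumed for illustration and will differ from Tracxn's actual export schema.

```python
# Illustrative sketch only: reproducing the segmentation described above from a
# Tracxn export with pandas. The file name and column names ("sector_tags",
# "business_model", "ai_native_flag") are assumed for illustration and will
# differ from Tracxn's actual export schema.
import pandas as pd

companies = pd.read_csv("tracxn_england_edtech.csv")   # hypothetical export of the 4,327 firms

# Companies tagged as focusing on the schools sector
schools = companies[companies["sector_tags"].str.contains("Schools", na=False)]

# Firms labelled "Enterprise" or "B2B2C", i.e. selling directly to schools
direct_sellers = companies[companies["business_model"].isin(["Enterprise", "B2B2C"])]

# The edtech-AI segment: "AI Native" flag of "Yes" or "Debatable"
ai_segment = companies[companies["ai_native_flag"].isin(["Yes", "Debatable"])]

print(len(schools), len(direct_sellers), len(ai_segment))
```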