“Beyond Capture”: The DfE’s deployment of a £23M AI tutoring pilot on 450,000 disadvantaged children

By Will Ellis, Reclaim Childhood

In January 2026, the Department for Education in England announced a £23 million pilot deploying AI tutoring systems to 450,000 disadvantaged pupils. The justification was straightforward: these tools would provide personalised, one-to-one learning support, levelling the playing field for children who cannot afford private tutors.

The playing field being described is real but the remedy being offered is not. The comparison the equity framing invites is between AI tutoring and a well-resourced teacher. The comparison it actually makes is between AI tutoring and nothing, and accepting nothing as the baseline is a political choice. Political choices, we must remember, can be unmade.

The equity framing is not original to England. South Korea’s Education Minister used it to justify a national AI textbook programme in 2023, while New Zealand’s Ministry of Education explored AI for personalised learning in a process that involved no consultation beyond Microsoft. The UAE, Israel, Singapore, Trinidad and Tobago, and China have all followed the same script. Governments with nothing else in common have found, independently, that “personalised learning for all”, “reduced teacher workload”, “co-designed with teachers” and “built to rigorous safety standards” sound progressive, are difficult to argue against, and ask no one to verify whether they are true. Absent from every version of that script was any mention of independent reviews, measurable benefit thresholds or named liabilities. These phrases were not developed to describe what the programmes actually do to children. They were developed to foreclose the question before it could be asked.

South Korea’s programme, announced with identical promises about personalisation and equity, collapsed within a single academic year. A civic organisation called Political Mamas filed suit against the education minister before the programme had opened in a single classroom, arguing that mandatory deployment overlooked risks to children, lacked adequate data protection, and had been imposed without meaningful input from families or teachers. They were right on all counts. When the textbooks did reach classrooms, teachers reported systemic factual errors, regular technical failures, and monitoring interfaces that demanded more time than conventional teaching. A parents’ petition gathered 56,605 signatures, and the opposition won the presidential election on a pledge to rescind the policy. By August 2025, the National Assembly had stripped the AI textbooks of their official status entirely. The publishers who had invested approximately 800 billion won (around $567 million) expecting mandatory adoption now face mass layoffs. With combined public and publisher investment in the programme totalling over $1.4 billion, the people who put it in motion commissioned launch events, not independent reviews. Political Mamas filed before the launch.

Lee Joo-ho, South Korean Education Minister. Source: Yonhap

* * *

The EdTech industry shares its commercial logic with the wider technology sector. Its revealed preferences are clearest in the places where no one is positioned to push back. In 2018, Hangzhou No. 11 High School began installing a system that scanned students’ faces every thirty seconds, classifying each expression across seven emotional categories and tracking six distinct behaviours, including sleeping, reading, writing, and what it categorised as listening. The technology was supplied by Hikvision, a company simultaneously contracted to build surveillance infrastructure inside Xinjiang detention facilities and, since 2019, subject to US trade sanctions for human rights violations.

Hangzhou No. 11 High School, 2018. Source: Sixth Tone

The same logic operated across different technologies and different provinces. In Zhejiang, for example, schools trialled neurological monitoring headbands developed by BrainCo, a company founded at Harvard and funded by American and international venture capital. The devices transmitted what the company described as attention data to a classroom dashboard in real time, colour-coding children by focus level: blue for focused, red for distracted. Within days, a hashtag on Weibo, China’s equivalent of Twitter, had been viewed 220 million times, and education officials ordered the school to stop. BrainCo’s founder had already told The Independent that the goal of the first 20,000 devices was to capture data from 1.2 million people. The Chinese classroom was not a proof of concept for educational benefit but a data collection operation at a scale no ethics review board in an open society would have permitted. The commercial logic does not require successful long-term deployment in a permissive market; it requires data, and the data had already been collected. In Guizhou province, eleven schools introduced GPS-enabled smart uniforms with microchips embedded in the shoulder pads, tracking movements, triggering alarms if a child strayed, and relaying the data via a mobile app. The principal of one participating school told state media that although the school retained the ability to track students at all times, it chose not to use that ability after hours. The restraint was purely discretionary: no external constraint prevented its removal.

Students at Xiaoshun Central Primary School wearing BrainCo FocusEDU headbands. Source: BrainCo/WeChat.

The democracies that produced these companies, funded their research, and trained their engineers did not simply fail to prevent what happened in those classrooms. They produced everything that filled them except the permission. The venture capital was American. The university research environment was American. The engineers were trained in open societies with functioning ethics review boards that would have stopped what those engineers then built and deployed in jurisdictions where no such boards existed. Open societies did not stand by while the EdTech industry built its surveillance infrastructure in permissive markets. They financed it. 

* * *

The EdTech industry has no meaningful internal brake on surveillance, data extraction, or the displacement of children’s development by engagement metrics. This is not an accusation of bad faith directed at any individual actor. A ministry under political pressure to modernise, publishers under commercial pressure to recoup investment, school leaders under inspection pressure to demonstrate innovation, academic researchers dependent on technology company funding streams: each is making locally rational decisions within a framework that prices harm in the wrong direction, or does not price it at all. The Department for Education announced the pilot and simultaneously declared its safety standards already met. The same institution that commissioned the deployment certified it as safe, and an institution does not commission an independent review of a decision it has already announced as safe.

The EdTech industry has no meaningful internal brake on surveillance, data extraction, or the displacement of children’s development by engagement metrics. 

Restraint, in this industry, is a function of cost. When external pressure raises the cost of harm high enough, behaviour changes. Teachers’ unions eventually arrive at resistance, but only after months of classroom failure, public crisis, and political toxicity have made the cost of association higher than the cost of dissent. The Korean Teachers and Education Workers Union filed suit alongside Political Mamas, and that action was critical, but it came eighteen months after the harm had begun accumulating. Parents, whose children are in the classrooms while that calculation is being made, operate on a different timeline entirely. When parents organise around a specific visible harm to their own children, there is no funding relationship to protect, no inspection judgement to manage and no professional identity at stake. The motivation is not abstract, not ideological and not institutional. It is a specific child, a specific harm and a specific demand that it stop. Commercial logic is extraordinarily good at finding what an institution needs and providing it in exchange for compliance. It has no equivalent tool for a motivation that is, by its nature, irreducible.

* * *

Political Mamas is not an anomaly. The same structure of motivation has produced phone bans in French schools, a national social media ban in Australia, the Online Safety Act’s age provisions in the United Kingdom, and pledge networks covering tens of thousands of schools across more than thirty countries. In March 2023, Italy’s data protection authority blocked ChatGPT’s processing of Italian users’ personal data, effective immediately. France has restricted generative AI access for primary-age pupils and prohibited the submission of personal student data to non-approved platforms. The UAE issued a hard ban in February 2026 on all generative AI use by children under thirteen in school settings. The EU AI Act, binding across all twenty-seven member states from February 2025, prohibits emotion recognition systems in educational institutions outright, describing them as unreliable, potentially discriminatory, and intrusive to fundamental rights. England is not bound by that judgement, but it is not exempt from the reasoning behind it. Each of these restraints was produced by the same mechanism: independent oversight and parents who refused to wait. England is not being asked to do something unprecedented. It is being asked to do what comparable societies have already done. SafeScreens and Close Screens Open Minds are already doing this work in England, running legal and political campaigns for a moratorium on pupil-facing EdTech and building the evidence base parents need to act. The work is under way; the outcome is not yet decided.

When parents organise around a specific visible harm to their own children, there is no funding relationship to protect, no inspection judgement to manage and no professional identity at stake. The motivation is not abstract, not ideological and not institutional. It is a specific child, a specific harm and a specific demand that it stop.

Close Screens Open Minds / SafeScreens

* * *

In 1980, a thirteen-year-old girl named Cari Lightner was killed by a drunk driver in California, a repeat offender who had been arrested on another hit-and-run charge less than a week before. Her mother, Candy Lightner, founded Mothers Against Drunk Driving. Within five years, MADD had grown to over 400 chapters and two million members and had influenced the passage of over 1,000 new laws; it is credited with saving around 350,000 lives. All of this was achieved against an alcohol industry with considerable financial and political power and a legal establishment that had long normalised the harm. The car seat movement followed the same arc: parents pushing for child restraint legislation against an automotive industry with every incentive to resist, producing laws in every American state within seven years of the first.

The honest difference between those campaigns and this one is that drunk driving deaths and car crash injuries were acute, visible, and directly attributable to a specific act. The harm being done to children by educational technology is diffuse, cumulative, and contested, which makes it easier for its proponents to dismiss and harder for its opponents to prosecute in any single courtroom or parliamentary hearing. The history of organised resistance, however, does not require a body to point at. The Suffragettes did not wait for a visible catastrophe. The abolitionists organised for decades in the face of a system that showed no signs of moving, against opponents with vastly greater institutional power, on behalf of people whose suffering was being actively denied by those profiting from it. The research on cognitive offloading, on the reversal of the Flynn effect in fluid intelligence, on the particular vulnerability of the adolescent brain during its one unrepeatable developmental window: none of it produces a single child’s name to place in front of a policymaker. That is not a reason to wait for one. When harm is slow, diffuse, and invisible on any individual dashboard, the only constituency reliably motivated to name it and sustain that naming across years is the one that will be living with the consequences, not in quarterly reports or re-election cycles, but in the lives of actual children.

* * *

The English pilot is described as voluntary, teacher-involved, and built to rigorous safety standards. It is voluntary at the school level only; the child inside a participating school has no such choice. In March 2026, the International Baccalaureate published draft AI design principles stating that every learner’s data rights and privacy are non-negotiable and that AI must meet strong safeguarding standards. The DfE’s KCSIE 2026 consultation draft, which governs the schools that will run this pilot, contains no equivalent commitment. The platforms being introduced into English schools were not built for them. They were built in commercial contexts, optimised for engagement, and are now being presented to disadvantaged children under equity language that was not developed by looking carefully at what the technology actually does. The charitable analysis that applies to a school leader trying to satisfy an inspection framework, or a publisher under pressure to recoup investment, does not apply at the level of market dominance, where decisions about which children to reach, and under what framing, are choices rather than constraints. Even a phased, co-designed pilot embeds the same data architecture as a mandatory one. Who holds the data, under what legal framework, assessed by whom, and against what independent standard of benefit: none of these questions has been answered before the first child logs in.

Bridget Phillipson, Secretary of State for Education. Source: bridgetphillipson.co.uk

England is not Hangzhou. The mechanisms of democratic accountability here are real, and the Korean case has demonstrated that those mechanisms, when activated by the group whose motivation cannot be redirected, can reverse deployments into which enormous capital has already been committed. But Political Mamas filed their lawsuit before the programme opened in Korean classrooms. Leverage diminishes as infrastructure embeds, contracts extend, and the financial consequences of reversal multiply. The scale of the public and publisher investment already committed in South Korea is the reason reversal required a presidential election. The £23 million now in motion in England has not yet reached that point, but it will. England has a choice that South Korea did not have in time: it can be the country that learned the lesson from someone else’s children, or the country that paid for it with its own. Which it becomes will be decided by whether parents, whose motivation cannot be captured, continue to mobilise.

Will Ellis is an independent researcher, education analyst and former classroom teacher with twenty years’ experience across UK state schools and a leading British independent school in Tokyo. He writes under the publication Reclaim Childhood on Substack (reclaimchildhoodmedia.substack.com). He has no financial relationship with any EdTech company, publisher, or campaign organisation and is not affiliated with any political party or advocacy group.

