Keeping AI in Line at School: Oversight Matters
The use of artificial intelligence in education must be supervised and independently evaluated. Only then can schools fulfill their mission of cultivating critical thinking and shaping future citizens.
A global artificial intelligence experiment is underway in schools. Since the release of ChatGPT at the end of 2022, other large language models have followed in quick succession, and the media has been awash with speculation and concern about AI's possible impact on education. Meanwhile, generative artificial intelligence is being integrated into school systems at a startling pace, in the absence of inspection, rules, or oversight.
Education must not only protect learners but also foster their development, so it has a special obligation to pay close attention to the risks of artificial intelligence, both known and newly emerging. These risks are often ignored and rarely assessed. The education community needs more support to understand them and to take measures that better protect schools from the harm they may cause.
Rote Teaching
The risks and harms of artificial intelligence have been widely reported. These include biases and discrimination stemming from systems trained on historical datasets—serious issues that should give schools ample reason to question the hype surrounding AI. Education also faces more specific challenges.
One such challenge lies in the role of teachers. AI optimists often claim that automated instructors won’t replace human teachers; instead, they argue, AI will save teachers’ time, reduce workloads, and take on a range of routine tasks. Yet the risk of mechanizing teaching is that AI could come to demand more labor, as educators are forced to adapt their methods to fit automated technologies. Robots may not replace teachers, but AI could take over tasks like lesson planning, material preparation, providing student feedback, and grading assignments, remaking the teacher’s role around the requirements of the machine.
As American author Audrey Watters notes in her book Teaching Machines, the claim that automation can simplify instruction, “personalize” learning, and save educators’ time is a century-old promise. Watters argues that mechanized teaching stems not from an educational vision but from an industrial fantasy of hyper-efficient education.
Misleading Content
Many celebrated examples of artificial intelligence in schools also rest on a narrow view of learning. AI scientists and company executives often cite a famous study from the 1960s showing that one-on-one tutoring produces better student outcomes than group teaching. The study’s well-known statistical “achievement effect” has been used to support the concept of automated “tutoring robots” for personalized instruction. This view is too narrow, treating the purpose of education as nothing more than improving individual, measurable performance.
These conceptions of AI in education neglect the broader goals of schooling, such as cultivating independent critical thinking, fostering personal growth, and preparing citizens to participate in public life. Mechanized teaching aimed at improving individuals’ basic learning performance is ill-suited to achieving these broader goals and values of public education.
The form of mechanized teaching that artificial intelligence delivers is also less reliable than commonly claimed. Applications like ChatGPT or Google’s Bard can easily generate content that does not match the facts. At a basic technical level, they simply predict the next word in a sequence, automatically generating content from user prompts. Although technically impressive, this can produce false or misleading output.
Paid Access
Artificial intelligence can also be used to police educational content. In one noteworthy example, a school district in the United States used ChatGPT to identify books to ban from its libraries in order to comply with neoconservative laws on educational content. Far from being a neutral gateway to knowledge and understanding, generative artificial intelligence may advance reactionary and regressive social policies and restrict access to multicultural materials.
Beyond these examples, the push to popularize artificial intelligence in schools serves no clear educational purpose; it serves the vision and economic interests of the AI industry. AI technology is extremely expensive to operate, and education is seen as a lucrative market. Schools, and even parents and students themselves, are expected to pay to use AI applications, which has pushed up the market value of education companies that partner with large AI operators.
As a result, schools or school districts will end up paying for these services through contracts that help AI providers offset their operating costs. Ultimately, public education funds will be siphoned away from schools to sustain the profitability of global AI companies.
At the same time, schools may grow dependent on technology companies and lose autonomy over their daily operations, leaving public education reliant on unaccountable private technology systems. In addition, artificial intelligence’s enormous demand for energy means that running it in schools worldwide may further exacerbate environmental degradation.
Auditing Artificial Intelligence in Education
The rise of artificial intelligence in education presents a host of critical issues that educators and system leaders must address urgently. Schools worldwide need sound advice and guidance to better integrate AI, grounded in a clear articulation of educational goals and thorough risk assessment. International bodies have invested significant effort in developing ethical and regulatory frameworks for AI, and ensuring education receives equal protection under these frameworks is paramount.
Schools across the globe require informed counsel on how to engage with AI. With AI’s emergence, institutions capable of conducting independent “algorithmic audits”—assessing potential harms of automated systems—can act as safeguards, preventing AI from entering schools without necessary scrutiny, rules, or oversight. Implementing such protections demands political will from government bodies and external pressure from influential international organizations. In the face of unchecked AI expansion, independent evaluation and certification may be the best way to protect schools from becoming sites of unending technological experimentation.