Advocates of AI claim it will quickly, and permanently, change the very nature of education, and point to exciting possibilities for accelerating learning and reducing workload. Its critics point out the flaws, limitations and dangers of handing over too much control to AI.
No doubt, as is often the boring way, the answer lies between the two.
In this blog, Bradley Busch (Psychologist at InnerDrive) and David Budds (Deputy Headteacher at St Olave’s Grammar School) investigate the nuances of AI use in schools in more detail. In the first half, Bradley looks at some of the research and ideas on AI in education, and in the second half, David explores his school’s initial thoughts and plans on how to navigate AI in the classroom.
Part 1: What the recent research says, by Bradley Busch
First, let’s start with a huge caveat. Doing peer-reviewed research takes time. It requires careful planning, then thorough data analysis and several rounds of changes before it is finally published.
And yet, AI is breaking new ground at a rate of knots. What we know about it now dwarfs our understanding from 6 months ago, and there is no doubt that it will look significantly different in 6 months’ time.
These two points taken together mean that the research body on the subject is only just emerging (and will quickly go out of date). So here is a taster of what we know so far…
Can we detect AI in student work?
Recently, researchers from a range of universities across the world collaborated to explore whether we can use technology to check for AI use in students’ assignments/homework/coursework.
The results from their study? On average, AI detection tools are only 68% accurate. The researchers concluded that these tools are “neither accurate nor reliable”. Indeed, the tools “have been found to diagnose human written documents as AI-generated (false positives) and often diagnose AI-generated text as human written (false negatives)”.
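To see why a figure like that matters in a real classroom, here is a rough worked example. The numbers below are invented for illustration (the study reports only overall accuracy, not separate false-positive and false-negative rates), but the arithmetic is the point: even a detector that is right most of the time produces a worrying number of false accusations.

```python
# Hypothetical illustration: how a "mostly accurate" AI detector still
# wrongly accuses many innocent pupils. All rates below are assumptions
# for illustration, not figures from the study.

essays = 150               # a year group's worth of essays (assumed)
ai_written = 30            # essays actually written with AI (assumed)
human_written = essays - ai_written

true_positive_rate = 0.80   # assumed: AI essays correctly flagged
false_positive_rate = 0.15  # assumed: human essays wrongly flagged

caught = ai_written * true_positive_rate                # 24 genuine catches
wrongly_accused = human_written * false_positive_rate   # 18 false accusations

flagged = caught + wrongly_accused
print(f"Essays flagged as AI: {flagged:.0f}")
print(f"Of those, wrongly accused pupils: {wrongly_accused:.0f} "
      f"({wrongly_accused / flagged:.0%} of all flags)")
```

On these (invented) numbers, almost half of all flagged essays belong to pupils who did nothing wrong, which is exactly why the researchers describe such tools as “neither accurate nor reliable”.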
So, at this stage, we unfortunately can’t be sure whether students are using AI or not. What are the implications of this? Well, if we can’t detect it, then we need to reconsider the type of assignments we give students. Live assessments, more hand-written work and homework that is based on retrieval instead of summarising ideas may be avenues worth exploring.
Can AI help students and teachers come up with innovative ideas?
A fascinating recent study set out to explore whether AI is more creative and innovative than humans at coming up with original ideas.
To test this, the researchers had three groups come up with answers to the question “what product could you sell to college students that costs less than $50?”. The first group was made up of humans, the second was a basic version of ChatGPT, and the third was a version of ChatGPT that had been fed high-quality prompts.
In total, the three groups generated 400 different ideas. The researchers found that people rated both ChatGPT groups as coming up with better ideas overall. Especially fascinating: of the top 40 ideas generated in the study, 35 (87.5%) came from the ChatGPT groups.
This suggests that AI-assisted programmes can generate ideas in both higher quantity and higher quality. This could have some very interesting and significant implications for pieces of work that require original and creative ideas.
Should students use AI for Retrieval Practice?
We have previously blogged about whether students or teachers should use ChatGPT to aid Retrieval Practice. At first glance, it looks appealing: AI can generate questions in different formats and sometimes even provide feedback on them.
But all that glitters is not gold. Dig a little deeper and we find that the questions are often vague, pitched at the wrong level of difficulty, and accompanied by incorrect feedback.
Maybe (and this is a big maybe) teachers can use this to offer a very rough first draft for setting questions, generating lesson plans or activities for students. But it certainly isn’t capable of producing the finished product. And this becomes even more evident when it comes to how students use it, as they are less likely to be critical consumers of the AI output.
Recent research suggests that participants who are less knowledgeable in an area over-rely on AI, which can lead to them “falling asleep at the wheel”, making sloppy mistakes without any critical thinking.
Should teachers use AI for giving feedback?
AI wins the race in terms of speed and quantity. But when it comes to giving students feedback on their work, does it also beat humans on quality?
Not yet. That was the finding of a recent study, which concluded that “human raters were better at providing high-quality feedback” in the majority of situations. But given that the researchers also note the overall difference “wasn’t substantial”, then potentially, with further improvements, one day it might be.
Valuing the process, not the product
One of the issues with AI is that it can produce work quickly. And on the surface, this output looks credible. However, the point of learning doesn’t just lie in what students produce, but in the process of how they produced it.
We want to develop knowledgeable learners: students who can think, create and critique. We want to develop resilience, and to help students learn to manage their time so they don’t procrastinate.
Relying on AI is a similar problem to the philosophy of having students “just Google it”, which was prevalent a few years ago. AI technology works best if you have knowledge about the topic.
You can spot the errors. You can come up with better prompts. If we circumvent the process of acquiring that knowledge by rushing straight to judging what students produce through AI, we deny them the opportunity to gain the prerequisite knowledge in the first place.
Part 2: Initial thoughts at school level, by David Budds
Generative AI has provoked some pretty broad-ranging responses in our school, and I suspect our staff are typical in their diversity of response. We have full-on evangelical early-adopters who were keen to impress upon me how it would make everyone from poets to lawyers redundant within X months (where X is a relatively small number; teachers are usually regarded as immune in this hypothetical world), and how “no one need ever write their own assemblies ever again”.
We have those antediluvian artefacts (like me, on a cynical day) who are instinctively technology-phobic, who bristle, bridle and channel their inner Luddite at the mere mention of the phrase (and secretly delight when there is an internet outage). And between these polar extremes sits a calm, AI-agnostic centre ground, who respond with some initial interest before getting on with what they do best, the way they know best.
One thing, however, that all can agree on is that the genie is out of the bottle, the toothpaste is out of the tube, the dung is out of the horse, and the proverbial cat is not just out of the bag – it has shredded the bag, chewed it up and interred its remains.
AI is here to stay. The question is: what do we do about that in education?
Our response as a school is driven by the sure and certain knowledge that our young people are already living with this technology (so are we, whether we like/believe it or not), and it will only become more and more prevalent and sophisticated over time. Likely quite rapidly. We therefore must adapt by:
- Identifying and embracing the educational opportunities it offers (for teachers and students).
- Identifying and mitigating the educational risks.
We’ve put engaging with AI (embracing opportunities; mitigating risks) front and centre of our school improvement plan this year. We’re getting some non-contentious basics (“dos and don’ts”) out there early doors in updated homework and NEA policies. We’re harnessing the evangelical zeal of the early adopters, and we’re trying to lift the scales from the eyes of the unbelievers. We’re deliberately, systematically getting it on the agendas of departmental meetings, and asking subject leaders to collaborate and share ideas/approaches (and caveats) across departments.
We’re putting it front and centre of some of our headline whole-teaching-staff CPD sessions, beginning with a “here’s the basics” approach to engaging with ChatGPT et al: “Here’s how you can find it, here’s how you can use it – now get playing around with it in a safe space with colleagues” and setting aside a proportion of our directed hours for everyone to do this. We’re trying to ignite a conversation that needs to be had, and to promote a bottom-up rather than top-down rollout by getting all boots-on-the-ground teachers in on the conversation.
Here are three broad things we do know, and that we’re working on.
1. It can help teacher workload
Want to produce 30 MCQs on a specified topic, but don’t have a free period to do it? AI can do it for you in pretty much the time it takes to type the request and set the parameters (there’s a rough sketch of this below).
Want to generate a worksheet? Or a summary of key information on a given topic? Or a motivational speech for pupils to analyse? Or… any number of things which might take an individual teacher precious, non-existent spare minutes to produce by brain and by hand? AI can, in many cases, help you save some time.
Is it ever as good? It depends: not yet in all circumstances, but certainly in some.
Would the resources produced need refining with human input? In most cases, probably; and even with those that don’t, you’d want to check them through.
Is there a net saving of teacher time through using AI in this way? Like Carlsberg, “probably”.
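For the curious, here is the promised sketch of what automating that MCQ request can look like. It assumes the official OpenAI Python client (pip install openai) and an API key in the environment; the model name, prompt wording and helper function are our own illustrative choices, not a recommended workflow.

```python
# A minimal sketch of generating MCQs with the OpenAI Python client.
# Assumes OPENAI_API_KEY is set in the environment; the model name and
# prompt below are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def generate_mcqs(topic: str, n: int = 30, level: str = "GCSE") -> str:
    """Ask the model for n multiple-choice questions on a topic."""
    prompt = (
        f"Write {n} multiple-choice questions on {topic}, pitched at "
        f"{level} level. Give four options (A-D) per question and list "
        f"the correct answers at the end."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any capable chat model works
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(generate_mcqs("photosynthesis"))

# The output is a first draft: a subject specialist still needs to check
# every question and answer before it goes anywhere near a class.
```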
2. It can be used to help students learn
The use of AI in assessment
Whilst we know some AI tech can analyse the results of MCQs and generate more and more questions on the topics which students find tricky, thereby delivering iterative, personalised learning, it can’t yet give students meaningful feedback on essays and the like. But I am assured by those in the know that if you give it sufficient time and a sufficiently large corpus of data from which to learn and on which to base its responses, it will get there.
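The core of that iterative, personalised loop is simpler than it sounds. Here is a toy sketch in Python: all the quiz data and topic names are invented, and a real system would plug actual quiz results and a question generator (such as the MCQ sketch earlier) into this scaffolding.

```python
# A toy sketch of adaptive MCQ practice: score a pupil's quiz results by
# topic, then target the weakest topic with further questions. The data
# below is invented for illustration.
from collections import defaultdict

def weakest_topic(results: list[tuple[str, bool]]) -> str:
    """results: (topic, answered_correctly) pairs from one quiz sitting."""
    scores: dict[str, list[int]] = defaultdict(list)
    for topic, correct in results:
        scores[topic].append(1 if correct else 0)
    # The topic with the lowest average score is the one to revisit next.
    return min(scores, key=lambda t: sum(scores[t]) / len(scores[t]))

quiz = [
    ("photosynthesis", True), ("photosynthesis", True),
    ("respiration", False), ("respiration", False), ("respiration", True),
    ("osmosis", True),
]
print(weakest_topic(quiz))  # -> "respiration": generate more questions here
```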
The use of AI for research
JCQ have taken a clear, early lead on this. AI can be used by students for research, just like a book, an article or a website. It needs to be appropriately attributed to avoid falling foul of plagiarism rules, but it can be used.
The use of AI to generate sample responses for student criticism
Possibly where AI is at its richest at the moment in terms of helping pupils to learn. We know pupils will be using AI to generate responses to questions where we don’t want them to. With this sad knowledge, though, comes the silver lining of a learning opportunity.
Can students critique AI-generated work (particularly longer, written responses) in much the same way that pupils might colour-code where an essay delivers on specific Assessment Objectives or marking criteria? Can pupils look at AI-generated prose and start to learn where it is currently at its best and where it is currently at its most fallible? Can we get pupils to run Turing tests on possibly bogus historical sources? Does AI have a sense of idiom for English, anachronism for history, moral perspective in RS, or age-appropriate expression in any number of subjects? How should we teach students to evaluate the merit of AI-generated research in project qualifications?
There are rich educational pickings to be had here for the brave and adventurous teacher.
3. There are definitely risks, and things we should very explicitly do to avoid them
Are there more risks than opportunities? Not sure, but here are a couple of the biggest ones, and what we’re doing about them.
Plagiarism
Impress upon pupils, parents and staff that passing off AI-generated work as a pupil’s own is plagiarism, pure and simple, and that it comes with consequences, which can be particularly heavy in the context of any NEA work.
What to do about it: Put it in the policy. Share the policy with the pupils. Share the policy with the parents. Familiarise staff with tools for detecting AI-generated work; and then exercise caution if you think you’ve found AI-generated work, because none of these AI-fuelled AI-detectors seem to work with any degree of accuracy. A sad irony, but there’s still arguably a deterrent value in making a song and dance about the existence of AI-detection tools, and maybe, just maybe, the threat of consequences might put pupils off (if you cross your fingers, and scrunch up your eyes really, really tight). We all know this won’t be enough, but we’ve got to do it as a public statement of intent.
Subverting the learning process
Arguably far worse than the above in terms of long-term consequences. In this context, AI is just the latest and most damnable in a long series of technological shortcuts which starve the brain of good learning opportunities and seduce students into disempowering themselves with the siren song of sloth. Why do I need to know and remember facts when the internet knows them for me? Why should I bother remembering them when the internet is our collective memory?
At the risk of sounding like a gloomy relic of the late twentieth century: ask me those questions after the power cut, the ransomware attack or the EMP. I’m delighted to have recently found some educational writing which gives a rationale to my long-held pro-book-reading/anti-screen-reading bias (thank you, Peps Mccrea, for your article on “Screen Inferiority”, and much more besides).
What to do about it: The clear, consistent message we need to give our kids is this: “AI may be able to do the work for you, but it can’t do the learning for you”. We need to help them live out this lesson in the tasks we set and the methods we ask of our students. Let’s not forget, most of them are still going to be assessed at KS2 SATs/GCSE/A-Level using the pre-Victorian tech of pen and paper. They need to be able to use this old tech to transcribe things which are in their brains – not in the abstract mind of an AI.

So: ban the use of phones/laptops/tablets in lessons unless there are clearly explained reasons and parameters for their use. Set homework tasks on paper. Even if pupils do sneakily generate a response via AI in this scenario, they still have to go to the effort of copying it up, maybe even recasting it in their own words, and there is engagement in this; where there is engagement, there is the possibility of thinking and remembering. If we’re buying into Willingham’s notion that memory is the residue of thought (and Ofsted’s definition that learning is remembering), then any time we encourage thinking (where the risk is that proxies for thinking take over), we are doing the right thing to help our pupils learn.
Final thoughts
There’s lots more to be said, to be learnt, and to be lived out in the classroom if we’re to make the most of AI’s stunning educational possibilities and avoid the worst of its cavernous pitfalls. Even as I write this absurdly purple prose, the old-school idealist within yells “bet you can’t do that, ChatGPT!”; then the grinding inner realist, sounding vaguely like HAL in 2001: A Space Odyssey, reassures me: “It’s only a matter of time, Dave.”
This blog was written by a real, actual, flesh and blood teacher. Promise. Cross my heart, not my fingers. (and maybe you can call me Al ; )