Prohibition simply won’t work – universities must train their students to use ChatGPT to its full potential

Generative AI is here, and students are embracing it with or without their universities

The ancient institution of the university, with its many traditions and entrenched bureaucracies, will have to be uncharacteristically nimble to adapt to the coming revolution in knowledge work. The ability of large language models (LLMs) to produce a well-written essay on just about any topic, from just a few sentences of user input, threatens to undermine the core rationale of educational assessment as it is currently practised – all while educators cannot even agree on whether an essay produced by ChatGPT counts as “plagiarism” at all.

Students, unsurprisingly, are well aware of generative AI’s potential as a study aid (to say nothing of its capacity simply to do the work for them). But while companies across all industries race to incorporate generative AI into their business models, universities seem instead to be scrambling to preserve the integrity of essay-based assessments and exams. Some educational institutions – including New York City’s Department of Education – have attempted to block access to LLMs entirely, while others have hastily replaced take-home assignments with in-person exams, to ensure students submit only work they have penned themselves. Faced with a suite of tools so well tailored to students’ needs, educational institutions will need to accept that prohibition simply won’t work. Instead, they will need to embrace generative AI, educate students in its use, and in doing so manage it effectively.

There is already a push to explore generative AI’s potential in the classroom. OpenAI has published a blog-post guide for teachers using ChatGPT, which explains how to write effective prompts for designing lesson plans and activities, but also draws detailed attention to the technology’s limitations and biases. The real power of LLMs in this space, though, will lie in their direct interactions with students – and the most interesting suggestions I have found for this involve exploiting those very limitations.

Remarkably, the scaling up of computational resources has brought with it a host of “emergent” capabilities, like translation, arithmetic and even writing code. Nonetheless, the technology has its limitations. Perhaps the biggest worry, both in and outside of education, has come to be known as “hallucination”: sometimes, these programs just make things up. This is due in part to the breadth of their training data, which includes essentially all human language on the internet – and understanding that not everything on the internet is true is a lesson in tech literacy that should come well before using LLMs. The dataset on which any AI system is trained can also lead it to proffer biased “opinions” as fact; it should come as no surprise that training ChatGPT on the whole of the internet has not avoided this problem.

Despite this seemingly damning issue, an adept user can turn even these shortcomings to advantage. OpenAI’s aforementioned blog post reports teachers encouraging students to use ChatGPT (running GPT-3.5 or GPT-4) to role-play challenging conversations, or as a stand-in debate partner, in order to develop and demonstrate critical-thinking skills. Understanding that ChatGPT is fallible encourages students to pay close attention to the information they are given and to scrutinise its accuracy – a skill that truly will remain valuable in the age of generative AI.
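To make the debate-partner idea concrete, the same behaviour can be set up programmatically with a system prompt instructing the model to argue the opposing side. Below is a minimal sketch using OpenAI’s Python library; the model name, prompt wording and function are my own illustrations, not taken from OpenAI’s guide.

```python
# Minimal debate-partner sketch using OpenAI's Python library.
# Assumes the OPENAI_API_KEY environment variable is set;
# the model name and prompt wording are illustrative choices.
from openai import OpenAI

client = OpenAI()

def debate_turn(topic: str, student_argument: str) -> str:
    """Ask the model to argue against the student's stated position."""
    response = client.chat.completions.create(
        model="gpt-4o",  # any chat-capable model would do here
        messages=[
            {
                "role": "system",
                "content": (
                    f"You are a debate partner. Argue against the student's "
                    f"position on the topic: {topic}. Be rigorous but civil, "
                    f"and point out weaknesses in their reasoning."
                ),
            },
            {"role": "user", "content": student_argument},
        ],
    )
    return response.choices[0].message.content

print(debate_turn(
    "Universities should ban generative AI",
    "Banning ChatGPT is the only way to preserve academic integrity.",
))
```

Crucially, the student’s job is then to assess the reply: a fallible opponent is precisely what makes the exercise a test of critical thinking rather than of recall.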

In essence, users need to avoid treating LLMs as search engines. Programmer and tech blogger Simon Willison has suggested that we instead think of them as “calculators for words”: tools that can summarise a text, answer questions about it, extract facts from it, rewrite it in alternative styles, serve as a very effective thesaurus, or simply entertain. These are powerful tools, but they must be understood to be used effectively. Rather than trying in vain to detect and punish their use, universities will need to design modules, across all disciplines, on both how generative AI systems work and how to use them. They must embrace these new technologies alongside their students, and guide them in their use and understanding.
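What “calculator for words” means in practice is wrapping the model in small, single-purpose operations that act only on text the user supplies. The sketch below, again using OpenAI’s Python library, shows three such operations; the function names and prompt wordings are my own illustrations, not Willison’s.

```python
# "Calculator for words": small, single-purpose operations on supplied text.
# Assumes OPENAI_API_KEY is set; prompts and names are illustrative.
from openai import OpenAI

client = OpenAI()

def _apply(instruction: str, text: str) -> str:
    """Run one fixed instruction against user-supplied text -- no web search."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[
            {"role": "system", "content": instruction},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content

def summarise(text: str) -> str:
    return _apply("Summarise the following text in three sentences.", text)

def extract_facts(text: str) -> str:
    return _apply("List the factual claims made in the following text.", text)

def rewrite(text: str, style: str) -> str:
    return _apply(f"Rewrite the following text in a {style} style.", text)
```

The design point is that every operation takes its source text as input, which keeps the model summarising and rewriting what it has been given rather than inventing answers from nowhere.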

Generative AI is here, and students are embracing it with or without their universities. This has quite rightly raised difficult questions about academic integrity and the value of traditional coursework. But AI also offers a new educational model: one that is more effective, more inclusive, and better able to meet the needs of individual students. Rather than having an LLM write for them, students might work with it in a back-and-forth effort to complete an assignment. I suspect that, done right, this strategy would not only produce a better end result, but would lead the student to understand the material at a deeper level along the way – which is what both student and educator really want.

If a university-level student is tempted to cheat, it is not because they would rather have the grade than know the material; we are all here to learn for the sake of our own futures. Rather, it is likely because of the mounting pressures of workload and deadlines, or because some necessary skill – long-form English prose, for example – eludes them. The established model of higher education is certainly threatened at its very foundation, but the tools doing the undermining can be used to build something better in its stead.