College has suggested a number of immediate adjustments to assessment to counter the use of ChatGPT and other artificial intelligence (AI) software by students, including additional presentations or oral exams.
An email sent to academic staff on Friday recommended reviewing current modes of assessment to gauge “risk” of AI being used, as well as discussing the use of ChatGPT in classes in the context of plagiarism, and requiring disclosure of where ChatGPT or similar services have been used “in support of an assignment”.
Launched in November, ChatGPT is AI software capable of generating “human-like” text. A recent study by Brian Lucey of Trinity Business School and Michael Dowling of Dublin City University (DCU) showed it to be capable of writing a paper that would be accepted for an academic journal.
Teaching staff were also advised to make explicit to students the value in writing or coding themselves, to link assessment to class discussion, and to encourage students “to avail of opportunities to improve their academic skills”.
A spokesperson for College emphasised that these were suggestions for “possible responses to the issue” and were not mandatory or prescriptive.
A number of medium-term adjustments to assessments were also included in the email, which was sent by David Shepherd, Dean of Undergraduate Studies, Martine Smith, Dean of Graduate Studies, and Pauline Rooney, Head of Academic Practice.
These included articulating “where [and] how assessment links to learning”, by explaining what is being assessed, and why it is best assessed in its current format, for example an essay or lab report.
It was also advised that medium-term adjustments may include a “greater transition to oral exams” or a shift to “more frequent supervised lower-stakes assignments across the semester”.
Dowling and Lucey, who studied ChatGPT’s ability to write an academic paper, have said: “ChatGPT is a tool. In our study, we showed that, with some care, it can be used to generate an acceptable finance research study. Even without care, it generates plausible work.”
They argue that “researchers should see ChatGPT as an aide, not a threat”, though they highlight that use of the software “has some clear ethical implications”.
Use of the software has been banned or restricted by thousands of academic and scientific journals, though several researchers have already listed ChatGPT as a co-author on papers.
Additional reporting by Aidan Cusack and Ellen Kenny