Provost Linda Doyle has said that it is crucial that College “starts exploring” what the advancement and proliferation of artificial intelligence (AI) tools could mean for the university sector.
In an exclusive interview with Trinity News, Doyle expressed the belief that AI will have a “profound effect” on research and higher education, adding that its impact will probably be significant “in ways that we don’t fully know yet”.
Doyle said that AI tools such as ChatGPT have “huge numbers of implications” for higher education and how students learn going forward.
Taking coding as one example, she highlighted how it has become standard practice in tech companies to use large language models (LLMs) like ChatGPT to generate code: “So you then ask yourself, should I be learning code the way we learn code before this existed? And should I be learning different things; should I be focusing on how I test code or check its veracity?”
Doyle has previously said that she hopes that the use of AI will contribute to the development of stronger critical thinking skills among its users. She explained to Trinity News that she sees a contradiction between people's shocked reactions to AI software producing false claims and their constant acceptance of misleading information online, expressing optimism that the widespread use of AI will see critical thinking skills applied across all contexts.
“That to me is a really strange kind of juxtaposition… I’d like to [see people] bring the scepticism that they have about AI and understand that anything you read needs to be contested and dug into deeper.”
While it shares some flaws with any information technology in this sense, Doyle is quick to rebut those who would say that AI is “nothing new”.
She pointed particularly to the notion of “accelerated discovery” which promises to revolutionise scientific research, highlighting that AI tools can save years of work on experimental drugs by ruling out ineffective products from the outset.
“Humans will always continue to do inventive, creative things that AI just simply can’t”
The same logic of using AI to speed up certain elements of research is beginning to be applied across disciplines, as academics in both STEM and the humanities and social sciences utilise AI's ability to process large amounts of data, both numerical and verbal.
Although she is supportive of a productive embrace of AI in research, Doyle said she understands the decision taken by a number of scientific journals earlier this year to ban the listing of ChatGPT as an author on papers.
“In the absence of knowing exactly what to do at the moment, I understand why they did that,” she said, adding that it’s “a tricky question” and one that is really important in the context of research integrity.
Additionally, Doyle emphasised that, in her view, machines are not likely to replace human researchers, saying that this is one area in which the potential of AI is exaggerated.
“I am very optimistic in the sense that I always think that the human needs to be in the loop. I think humans will always continue to do inventive, creative things that you just simply can’t [with AI].”
She added that the integration of AI tools into research will prompt researchers to better identify the human element of their research and what they ultimately do.
The same is true in the case of teaching and learning, Doyle also said, pointing to software that allows lecturers to feed a lecture script to a virtual avatar, which then reads and records it for distribution to students.
“Then you ask yourself, what should I be doing as a lecturer? If I’m recording material, what does it mean for me to do that, and what does it mean for my personality to be in what I’m saying?”
“I think there’s only two things you can do: go back to a more traditional exam, or embrace the use of ChatGPT in some way”
The same goes for students using AI tools to answer questions and complete assignments, Doyle said. “To me, they’re all parts of the same jigsaw puzzle, in terms of what it means to teach and learn.”
While she has previously said that the impact of AI on academic integrity in the classroom will be “among the least important”, Doyle clarified that this is merely due to the vast implications in other areas, and not because it is insignificant in itself.
Acknowledging the inevitable changes to assessment that widely accessible AI software will necessitate, Doyle said: “I think there’s only two things you can do: go back to a more traditional exam, or embrace the use of ChatGPT in some way.”
Reflecting on these different options, Doyle said "it would be an awful pity" for more considered coursework elements, which require deeper engagement than traditional exams, to be scrapped.
“I think you will just have to make this so that you have to be transparent about the tools you used in that process, or to do things in a way that recognises that those tools exist. But I think it is important that not everything reverts to one single exam.”
Adding that a third option would be to come up with new forms of assessment which "simply can't be done by ChatGPT", Doyle said that she would not recommend any single option over the others, but rather emphasised that different modes assess different strengths, and that it may be a matter for lecturers to decide on a case-by-case basis which form is most appropriate.
While Doyle is keen to emphasise that she herself is not an authority on the subject, it is evident that she keeps a close eye on the development of AI as it relates to the university sector and is actively preparing for the ways in which universities will have to respond.
“To me it’s about what journey do we need to go on to be able to get to a point where we’re actually using it in a productive way for teaching, research, learning, for administration, for everything to do with the university.”
This article is part of our special report on AI in the university sector.