AI is not as intelligent as you think

A closer look at the academic threat of AI technology, and the fact that it is neither intelligent in the same way we are, nor as intelligent as we give it credit for being

In January 2023, a secondary school in London announced that it was going to make massive cuts to the amount of essay homework assigned to students, explaining that the artificial intelligence (AI) chat generator ChatGPT had made essay work redundant. Teachers at the school were given groups of essays, some written by students and others written by the AI. The AI-generated essays were awarded high A* grades, and the school decided that action should be taken to prevent plagiarism and cheating. AI such as ChatGPT has become a huge talking point on the internet recently. Other areas, such as AI-generated artwork, have been criticised for plagiarising and exploiting artists. The overarching argument is that AI will eventually take over; that it will become harder, or even impossible, to distinguish AI work from human work. But is AI really as big a threat to academia and creativity as it seems?

The short answer is probably not, or at least not in its current form. The problem in handling perceptions of AI is the use of the word intelligent. AI is not intelligent in the way that humans think and experience intelligence. ChatGPT and other AI programs like it function by compiling large amounts of information and finding patterns within it. They then reproduce content based on the information they have compiled and grouped together. The job of the AI in ChatGPT is to put words in the right order based on perceived patterns. It is, in short, a word predictor.
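The "word predictor" idea can be illustrated with a toy sketch. This is a drastic simplification, not how ChatGPT actually works (real models use neural networks trained on billions of words), but the core task is the same: given the words so far, predict what comes next from observed patterns.

```python
from collections import Counter, defaultdict

# Toy word predictor: count which word most often follows each word
# in a tiny corpus, then always predict the most frequent successor.
corpus = "the cat sat on the mat and the cat slept near the cat".split()

following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))  # prints "cat" (it follows "the" three times)
```

Everything the predictor "knows" comes from patterns in its training text; it can recombine them, but it cannot produce a word it has never seen.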

In this way, it can solve broad problems, but not specific ones. For example, the YouTuber Tom Scott used ChatGPT to write code to solve a problem he was having in Gmail. The AI did produce code that worked, but it had to be amended by Scott himself. When he gave the AI a more difficult coding problem, one that would have required some ingenuity, it could not complete the task. The point is that AI is not creative. It is not intelligent like a human; it cannot create anything that has never been created before. Everything it produces is an amalgamation of other things.

Singer-songwriter Nick Cave was sent a ChatGPT-generated song, after someone asked ChatGPT to write a song in his style. He pointed out, firstly, that the song was not very good. Secondly, he made a poignant point, which sums up the reason why AI will never replace academic or creative pursuits: “Songs arise out of suffering, by which I mean they are predicated upon the complex, internal human struggle of creation and, well, as far as I know, algorithms don’t feel. Data doesn’t suffer. ChatGPT has no inner being, it has been nowhere, it has endured nothing, it has not had the audacity to reach beyond its limitations, and hence it doesn’t have the capacity for a shared transcendent experience, as it has no limitations from which to transcend.”

“It is not creative, not the way a human is.”

This is the core reason that AI is not the problem we imagine it to be. It is not original. It is not creative, not the way a human is. In this example, without Nick Cave already having created a style of song, the AI would not be able to imitate it. It is a program: it is coded to perform in a certain way, and it cannot do anything it has not been instructed to do. In this sense it will never truly threaten academia as a discipline, as it cannot produce or research anything new. However, it can cause problems with plagiarism and cheating within academic institutions.

What does this mean for academia? Well, right now it means that academics will have to start putting some effort into setting questions and reading essays. For example, a broad question that simply asks a student to compare two cases, or to discuss a topic in general terms, probably won’t work anymore. If questions are made more specific, programs like ChatGPT can still do a limited amount of the work, but they are not able to write an entire college-standard essay.

Alongside this, a plagiarism-checking system will eventually be developed to cope with it. Before AI, when conventional plagiarism was the biggest problem academia faced, systems were developed to handle it: for example, Turnitin, the plagiarism software used in College. When an essay is uploaded to Turnitin, the programme logs it into its database, then compares each line of the essay to all the other essays in that database. Therefore, if you have plagiarised, it can highlight the sentences or paragraphs that come from another essay within its system. It is effectively the Where’s Wally of software: find and highlight.
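The "find and highlight" matching described above can be sketched in a few lines. This is a hypothetical toy version, not Turnitin's actual algorithm, which is proprietary and far more sophisticated (it detects paraphrases, not just verbatim copies):

```python
# Toy plagiarism check: flag any sentence of a submission that also
# appears, verbatim, in a "database" of previously logged essays.
database = [
    "The Treaty of Westphalia ended the Thirty Years War.",
    "Essays arise from careful reading and original thought.",
]

def flag_matches(essay_sentences, db):
    """Return the submitted sentences that already exist in the database."""
    known = {s.strip().lower() for s in db}
    return [s for s in essay_sentences if s.strip().lower() in known]

submission = [
    "The Treaty of Westphalia ended the Thirty Years War.",
    "This sentence is the student's own work.",
]
print(flag_matches(submission, database))  # flags only the copied sentence
```

The weakness is obvious: this catches text copied from an existing source, which is exactly why ChatGPT's freshly generated text slips past it.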

“When ChatGPT writes an essay, it is using the same grammar rules, sentence structure and overall style every time.”

The problem with AI like ChatGPT is that it does create new content. Unlike other essay-writing software, it writes different paragraphs each time, producing a pattern of words independently of other sources. However, it still has its own style. When ChatGPT writes an essay, it uses the same grammar rules, sentence structure and overall style every time. Therefore, it is identifiable. Ironically, you could use an AI program to learn to recognise the style of ChatGPT, and use that to find essays it has written. Eventually, someone will figure out a way to recognise the system, and it will be easy to identify.
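The idea that a consistent style is measurable can be sketched as follows. This is purely a hypothetical illustration: real AI-text detectors use much richer statistical signals than the two crude features below, and none of the thresholds here come from any real system.

```python
import statistics

# Hypothetical style fingerprinting: measure a couple of crude
# stylistic statistics and compare them against a "fingerprint"
# built from known AI-generated text.
def style_features(text):
    """Average sentence length and its spread, in words."""
    sentences = [s for s in text.replace("?", ".").split(".") if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    return {
        "avg_sentence_len": statistics.mean(lengths),
        "len_variation": statistics.pstdev(lengths),
    }

def looks_like(sample, fingerprint, tolerance=3.0):
    """Flag text whose statistics sit close to the fingerprint."""
    feats = style_features(sample)
    return all(abs(feats[k] - fingerprint[k]) <= tolerance for k in fingerprint)

uniform_text = "One two three four. Five six seven eight."
print(style_features(uniform_text))
```

The design intuition matches the article's point: a writer who uses the same sentence structure every time produces measurable regularities, and regularities can be detected.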

A point to note is that the only people being truly hurt by ChatGPT are the students using it. While it is somewhat obvious, getting through a course or degree by cheating is not exactly beneficial. An article in The Times highlights this, describing a student who went to Oxford to study Italian, claims he cannot speak a word of the language, and cheated in nearly all of his exams using AI and other methods. This is of course a problem for the university, as giving out Bachelor’s degrees in Italian to students who cannot understand Italian is inadvisable. However, the person who is really harmed is the student, and it is their own fault. A question has to be asked about why people feel the need not to participate in their own degrees. Getting a degree by using ChatGPT is an expensive and time-consuming way to waste a university opportunity.

AI is not the problem that it seems to be at first. Like most technological advancements, it sounds a lot more interesting than it actually is. As Jeff Goldblum’s character says in Jurassic Park, “Life finds a way”, and in this case the world will adapt to AI and find ways to work around it. Hopefully, it will eventually develop into something genuinely useful for society.

Sarah Murnane

Sarah Murnane is the Art Editor of the 69th volume of Trinity News.