We are currently witnessing an AI boom. Generative AI in particular has captivated the general public, the scientific community, journalists, and legislators. The launch of Dall-E 2 in April 2022, which purportedly attracted over a million users within its first three months of release, pushed generative AI into the mainstream. It was followed by the release of Midjourney in July and Stable Diffusion in August of the same year, and subsequently by the launch of ChatGPT in November 2022.
Generative AI is poised to disrupt every corner of life, from healthcare and law to education, the arts, finance, mental wellbeing, and academia.
From Dall-E to ChatGPT, these systems indeed represent an impressive feat of engineering. When ‘fed’ vast amounts of data, they ‘learn’ through brute-force iterative processes. Based on the underlying statistical distribution of the training data, large language models, for example, predict the probability that one sequence of words follows another with impressive accuracy. Current hype would have us believe that AI is a magic-like, omnipotent entity. Far from it: these systems are riddled with drawbacks and limitations, such as brittleness (susceptibility to catastrophic failure), unreliability (they are known to fabricate seemingly factual nonsense), and a tendency to encode and exacerbate societal and historical injustice.
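To make the statistical character of this prediction concrete, here is a minimal sketch of the underlying idea: a toy bigram model that estimates the probability of the next word purely from co-occurrence counts in its training text. The tiny corpus and function name are illustrative inventions; real language models operate on vastly larger datasets with far more sophisticated architectures, but the principle of predicting words from observed statistical regularities is the same.

```python
from collections import Counter, defaultdict

# Toy "training data" (illustrative only; real models ingest vast corpora).
corpus = "the cat sat on the mat . the cat ate the fish .".split()

# Count how often each word follows each other word (bigram counts).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word_probs(word):
    """Probability distribution over the next word, from raw counts."""
    counts = follows[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

# After "the", the model has seen "cat" twice, "mat" and "fish" once each.
print(next_word_probs("the"))  # {'cat': 0.5, 'mat': 0.25, 'fish': 0.25}
```

The model has no notion of what a cat or a mat is; it reproduces patterns in its training data, which is also why such systems inherit whatever biases and errors that data contains.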
Hype around AI is not new and has existed as long as the field itself: “The Navy revealed the embryo of an electronic computer today that it expects will be able to walk, talk, see, write, reproduce itself and be conscious of its existence” (Rosenblatt 1958). And as generative AI becomes mainstream, hype and misleading information around the capabilities of these systems are also peaking, to the extent that not getting on the generative AI bandwagon is, at times, framed as missing out or falling behind. Widespread hype and misleading claims include the notions that these systems will take over humanity once they become smart enough, or that they will replace humans, when in fact what this often comes down to is humans babysitting these systems.
In reality, these are complicated and fancy systems that predict the future based on the past. They cannot replace humans, especially when it comes to complex, contingent, and multifaceted societal, cultural, and historical challenges that require subjective care, contextual understanding, and sympathy. Unsurprisingly, when these systems are deployed as a “solution” within intricate and sensitive human affairs, the consequences are catastrophic. Meta’s LLM Galactica was shut down after three days of public API access because it was producing seemingly scientific facts that were dangerous and inaccurate; Microsoft’s chatbot Tay was shut down within 24 hours of release after producing racist, misogynist, white supremacist, and otherwise problematic output; and more recently, a chatbot deployed for therapy has been linked to a user’s death.
Although current AI is often framed as technology that “benefits humanity” and shoehorned into any possible systemic social problem, the fact remains that it is developed and owned by tech corporations that currently hold unprecedented wealth, power, and influence. Frequently, these tools are put forward as a solution in search of a problem, where the supposed problem is imagined on the basis of hypothetical futures. Furthermore, as most current state-of-the-art AI is proprietary corporate property, with little transparency and accountability, little is known (to anyone outside these corporate walls) about these models and their underlying training datasets. Consequently, it is irresponsible to deploy these systems without proper vetting, evaluation, and critical scrutiny.
Abeba Birhane is an adjunct assistant professor in the School of Computer Science and Statistics. Last month, Birhane was named in Time Magazine’s inaugural TIME100 AI list.