Decoding the EU AI Act: A student-friendly guide to the world’s first comprehensive artificial intelligence regulation

The European Union’s latest framework may prove the greatest regulatory challenge for the technology sector since GDPR

From parody deepfakes and TikTok-trending AI-generated art to biometric surveillance for law enforcement and screenplays authored by large language models, the past year has seen artificial intelligence (AI) become increasingly ubiquitous in our lives. While there has been a welcome increase in investment to fund the development of AI systems, the accelerated growth in capability enabled by advances in computing power has heightened concerns over the development and use of these technologies. Proposed by the European Commission back in April 2021, the EU AI Act marks a watershed development in the regulation of AI as the first comprehensive legal framework to address the challenges of a developing AI landscape. But what will it actually mean when it becomes law, and how will it impact the AI tools that we’ve become familiar with?

The EU AI Act defines an “artificial intelligence system” as a “system that is designed to operate with a certain level of autonomy”, receives “machine and/or human-provided data and inputs”, and then infers how to complete a predetermined set of “human-defined objectives” through machine learning, logic/knowledge-based approaches, or statistical methods. An AI system will then generate some form of output, such as providing information, predictions, decisions, or recommendations, which then influences the environment with which it interacts. Most AI systems developed to date focus on performing a specific task, and cannot operate outside of their predetermined functionality. This includes tools such as Google Translate, email spam-filtering, and Apple’s Siri: all of these systems can complete only a predefined range of functions. Greater availability of increased computing power has facilitated a considerable rise in Generative AI models, which similarly tend to have a narrow scope, but focus on creating new, original content. Generative AI can identify patterns and structures in input data, and apply these to new, generated content in response to prompts – think ChatGPT, Google’s Bard, and the AI image generators plastered across social media that turn your selfies into Renaissance portraits.

The purpose of the EU AI Act is to ensure better conditions in Europe for the development and deployment of AI. Amidst the excitement of rapid advancements and investment in the AI industry, the new regulation seeks to establish the EU as a hub for trustworthy AI, with harmonised rules across member states to enable pioneering innovations, maximise industrial potential, and protect the fundamental rights and values of citizens. All parties involved, from the development to the distribution of AI models and systems, will be held accountable under the act. The act will take direct effect in each of the twenty-seven EU member states – eliminating the need to transpose the rules into each nation’s domestic laws – though its scope in practice is much broader: providers of qualifying AI systems in Europe will have to be compliant, regardless of their primary location or base of operations.

“The sliding scale of determined risk will dictate development and use requirements, with four distinct categories: minimal risk, limited risk, high risk, and unacceptable risk”

Central to the EU AI Act is the classification of AI systems by “risk”. The sliding scale of determined risk will dictate development and use requirements, with four distinct categories: minimal risk, limited risk, high risk, and unacceptable risk. Minimal risk refers to small-fry AI systems that pose little threat to users or wider society, such as AI-enabled video games. Free use of systems deemed to be of minimal risk will be permissible. Limited risk AI systems, such as deepfake technologies and chatbots, will face transparency and code of conduct requirements to allow users to make informed decisions and ensure they are aware of the technology they are interacting with. AI systems related to transport, education, medicine, employment, and welfare will immediately be identified as high risk, and will be subject to a more extensive list of requirements and a “conformity assessment” prior to any market release. Finally, AI systems considered to be of unacceptable risk will be wholly prohibited; this encompasses any social scoring systems, or AI that actively monitors people in public spaces.

So, what kind of changes can we expect as users of AI systems? For many minimal risk systems, there will be no discernible change or action required. For limited risk systems, the act may lead to greater transparency disclaimers accompanying deepfake technologies, or more comprehensive privacy and use declarations before a user can interact with online chatbots. The act specifically calls for consideration of vulnerable groups, and for transparent communication of information in accessible formats, so the information accompanying AI may be presented differently in future.

Systems identified as high risk will face the most significant challenges to development, with detailed requirements expected under categories such as risk management, accuracy, cybersecurity, data governance, and human oversight. A high risk evaluation is also expected for AI with a wider scope of capability and greater flexibility, such as ChatGPT (called “general purpose” AI in the legislation), that may be used in various contexts or integrated into other systems. These general purpose AI models may receive an exemption if high-risk uses are excluded by the provider, or if they circumvent high-risk situations by furnishing the user with greater information. It is also possible that companion legislation will be drawn up to specifically stipulate how to adjust high-risk criteria for general purpose AI, though this is yet to be confirmed. Likewise, the exact nature of the requirements is unknown, though it is expected that failure to comply will carry a hefty penalty, with fines anticipated to exceed those levied for GDPR breaches.

There are two interesting exceptions to the compliance rules. Firstly, law enforcement authorities are permitted to deploy some unacceptable risk technologies in limited circumstances. However, it is still expected that the legislation could disrupt plans for greater leveraging of these technologies by governing bodies, such as the Irish Government’s proposal to introduce facial recognition technology to An Garda Síochána. Secondly, much like similar rules under EU copyright law, AI technologies such as deepfakes may avoid greater transparency requirements if they are evidently for creative, satirical, or parody purposes – welcome news for those partial to the comedic relief of AI-generated content on social media.

“There are concerns that excessive compliance costs and increased liability risks will lead to companies withdrawing from the bloc, rescinding investment and damaging European competitiveness”

Responses to the act have been mixed, with some groups – such as the Irish Council for Civil Liberties – welcoming the decision, while others have expressed fear that the draft legislation will harm Europe’s competitiveness in the AI industry. An open letter to the European Commission signed by executives of some of Europe’s leading companies – including C-suite representatives from Renault and Siemens – raised “serious concerns” about the threat of the laws to Europe’s “technological sovereignty”, and their failure to address “the challenges [they] are and will be facing.” There are concerns that excessive compliance costs and increased liability risks will lead to companies withdrawing from the bloc, rescinding investment and damaging European competitiveness. MEP Dragos Tudorache, who led the development of the draft law, dismissed such concerns in a statement, calling it “a pity” that protests from stakeholders have undermined “the undeniable lead that Europe has taken [on regulating AI].”

The EU AI Act will now be subject to negotiations between the European Commission, European Council, and European Parliament, where details are to be ironed out in advance of its signing into law, expected at the end of 2023. A European Artificial Intelligence Board is set to be established to oversee implementation and harmonisation across member states, with compliance expected after a grace period of 24-36 months. Although details of exact requirements and execution are unclear, it is undeniable that the new laws will set a precedent for AI regulation as the first comprehensive framework to address these new technological challenges. Whether it leads to more responsible development and use of AI, as is hoped, or smothers technological innovation in Europe remains to be seen. In any case, it marks a significant step towards reshaping the development and use of these technologies to fit our society – and not the other way around.

Sadbh Boylan

Sadbh Boylan is the Deputy Scitech Editor for Trinity News and is currently in her Senior Sophister year studying Management Science and Information Systems Studies.