Release Peace: the magazine

Analysis & Background Stories on International Affairs

The EU Artificial Intelligence Act

Written by: Inés M. Pousadela

Inés M. Pousadela is a CIVICUS Senior Research Specialist, Co-Director and Writer for CIVICUS Lens, and co-author of the State of Civil Society Report. CIVICUS is the world’s biggest alliance of civil society organisations and activists, with over 12,000 members in 175 countries. Any opinions expressed in this article do not necessarily represent the views of Release Peace. A version of this article was originally published in CIVICUS Lens.


What is the Artificial Intelligence Act?

The European Union (EU) is in the final stages of agreeing the EU Artificial Intelligence (AI) Act to regulate the development and use of AI systems. Reflecting social demands, the proposed regulatory standards classify AI systems by their respective risk levels. However, concerns have been raised about insufficient human rights safeguards. In the final stretch of the process, anticipating that the EU AI Act may prepare the ground for further global regulation, civil society continues to advocate for stronger human rights protections.

Civil society has long pointed to the risks AI poses to democratic processes, privacy, data protection and fundamental human rights, resulting from opaque algorithms that replicate stereotypes and biases and from outright misuse of AI-powered tools, as well as a related series of tricky safety, security and liability issues. Spurred on by such concerns, in 2021 the EU began to develop a set of rules to regulate the development, deployment and use of AI technologies. It was during this period, in November 2022, that ChatGPT was launched. This burgeoning of generative AI was something EU negotiators struggled to grasp, showing how far ahead of regulatory efforts the technology is. The resulting legal instrument, the EU AI Act, broadly follows the approach civil society advocated in basing regulation on a classification of risks. But civil society groups remain concerned about the current, near-final version of the Act. As the process enters its closing phase, they continue to call for human rights protections to be strengthened.

Uses and Risks

As defined in the EU AI Act, an AI system ‘can, for a given set of human-defined objectives, generate outputs… influencing the environments they interact with’. There has been much discussion over whether ‘AI’ is an accurate designation or a misnomer. Some say it isn’t really artificial, as opposed to human, given that it depends on the continuous production of thought and art by real, creative human beings. And they claim it isn’t really intelligent either, because human intelligence is so much more than pattern-matching. In any case, as with successful technologies before it, AI is spreading fast because of its many benign uses – its ability to process and systematise large quantities of data, identify patterns and make predictions. AI is already being used in areas from manufacturing and sales to food and energy production, from farming and transportation to healthcare, public administration and disaster preparedness and mitigation. In the online sphere, it’s used to detect disinformation and identify and combat cyberthreats, among other things.

The Catch

But here’s the catch: alongside good uses comes the potential for AI’s misuse. Though it can be used to fend off attacks on democratic processes and preserve the integrity of elections, it can also be used to encourage online echo chambers, polarise public opinion, create and spread deepfakes, destroy reputations, distort decision-making, manipulate voters, perpetrate fraud and repress people mobilising for rights. The latest technologies that governments are using to identify, harass and intimidate protesters are powered by AI.

While some impacts of AI that may be regarded as negative – such as eliminating whole categories of jobs – are shared with other technologies that preceded it, others are specific to it. At the top of the list, digital rights organisation Access Now highlights the human rights violations, disproportionately affecting excluded groups, that result from biases in algorithms and a lack of transparency in programming. Because it is fed historical data containing inequities and inequalities, AI often further reinforces Western bias. As Humanitarian OpenStreetMap noted, “The biggest challenges are the biases and the lack of transparency of the algorithms embedded in existing AI solutions. The problem with existing models is [that] you cannot even know if they are biased, or how they are biased, because they are black boxes. You cannot know what’s inside, and training data and processes are not traceable.”

Biometric technologies are generally designed to work with what is construed as a ‘normal’ body, perpetuating prejudice against persons with disabilities, and racial biases can lead to terrible outcomes. AI-based physical and behavioural biometric technologies such as facial and voice recognition systems, used to identify people and make predictions about them, are often not as accurate and reliable as they are presented to be – and they are particularly prone to function creep, being stretched beyond their original, seemingly innocuous purposes. A biometric voice recognition system designed to help healthcare workers detect mental distress, for instance, could easily be repurposed as a lie detector in the hands of immigration law enforcement.


In 2021, the European Commission proposed the establishment of a comprehensive set of rules to address the ethical and legal challenges of AI technologies and regulate their development, deployment and use across EU states. Following the European Parliament’s passage of a first version of the EU AI Act in June 2023, a final round of debates – known as a trilogue – was held between the European Commission, Council and Parliament. The Commission was pressed to finish the process before the end of 2023 so the Act could be submitted to a parliamentary vote before the 2024 European Parliament elections, scheduled for June. Although contentious issues remained, a provisional political agreement on what’s slated to be the world’s first comprehensive piece of legislation on AI was reached on 8 December. The European Parliament approved it in March, and it now awaits the European Council’s endorsement.

The text was hailed in Brussels as achieving the best possible balance between businesses and people, law enforcement and freedoms, innovation and rights protections. Civil society disagrees. On the eve of the EU agreement, a global coalition of over 50 civil society and human rights organisations from more than 30 countries issued a ‘Civil Society Manifesto for Ethical AI’, an initiative to steer AI policies towards safeguarding rights and to decolonise AI discourse. The manifesto demands the inclusion of people’s voices – and therefore of civil society – in the process of developing genuinely global, inclusive and accountable standards.


If you would also like to write articles on insightful stories you care about, send us a brief email!