EU Act: Shaping Responsible AI

The Bird is Ready to Fly… but probably waiting for its masters to confirm it won’t be caged!

With the emergence of ChatGPT and the tremendous buzz surrounding Generative AI since its launch in autumn 2022, the need for statutory regulations establishing a comprehensive framework for Artificial Intelligence has gained significant momentum across the globe.

The US Congress recently held discussions with Sam Altman (OpenAI CEO), while the UK Government followed suit by meeting executives from three top AI organisations – OpenAI, Google DeepMind and Anthropic.

Since 2020, the European Union has been diligently working on a Draft regulation for AI. However, the recent surge of interest in the tech industry has prompted the EU to prioritise its development. 

On 13th June, the EU Parliament published and approved its initial set of requirements – a concise summary document that has been in the pipeline for the last three years! This crucial step sets the stage for further deliberation among member states and potential enactment as law by the end of 2023.

The EU has consistently been at the forefront of pioneering regulation, the GDPR being a prime example. These rules are likewise regarded as groundbreaking and will contribute significantly to shaping AI into a responsible technology across the world.

Of late, prominent academics and industry experts have expressed various concerns regarding the potential catastrophic impact of AI. The implementation of these regulatory guardrails will undoubtedly serve as a substantial step towards mitigating at least some, if not all, of these concerns.

Here is a high-level summary of what these regulations are and what they might mean for many industry heavyweights:

Overview of the Framework

The EU has classified AI applications into four broad categories using a risk-based approach:

1. Unacceptable Risk –

Areas where no AI is allowed; this clearly sets the boundaries of a ‘No-Go’ zone.

     For example:

  • Social scoring, or classification of human beings based on their behaviour, leading to unfavourable treatment
  • Real-time biometric scanning of people to categorise them based on sensitive characteristics
  • Risk profiling to predict the likelihood of a criminal offence

All AI systems with the above features will be prohibited under the EU AI Act.

2. High Risk –

These will be permitted subject to a conformity assessment against the AI requirements, which means submitting a detailed analysis to the EU AI Office before the application is rolled out. Examples include:

  • AI systems used as a safety component of a product or as a product (e.g. toys, aviation, cars, medical devices, lifts)
  • Management and operation of critical infrastructure
  • Employment and worker management 
  • Education and vocational training

All high-risk AI systems would be required to be registered in an EU-wide database before being placed on the market or put into service. These systems would also have to comply with a range of requirements, particularly around risk management, testing, technical robustness, training data and data governance, transparency, human oversight, and cybersecurity.

3. Limited Risk – AI systems that interact with humans (e.g. chatbots), emotion recognition systems, and AI that generates media content (image/audio/video) will be subject to specific transparency obligations, such as declaring what they are doing or using in the background.

4. Minimal or No Risk – All other low- or minimal-risk AI systems used in the EU; these need only adhere to an appropriate ‘Code of Conduct’.
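The four-tier structure above can be sketched as a simple lookup. This is purely illustrative – the keyword labels and the mapping below are my own assumptions for the sketch, not the Act's actual legal scoping rules, which are far more nuanced:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "conformity assessment + EU database registration"
    LIMITED = "transparency obligations"
    MINIMAL = "voluntary code of conduct"

# Illustrative use-case labels loosely derived from the categories above
PROHIBITED_USES = {"social_scoring", "realtime_biometric_id", "predictive_policing"}
HIGH_RISK_USES = {"product_safety", "critical_infrastructure", "employment", "education"}
LIMITED_RISK_USES = {"chatbot", "emotion_recognition", "generative_media"}

def classify(use_case: str) -> RiskTier:
    """Map a (hypothetical) use-case label to its EU AI Act risk tier."""
    if use_case in PROHIBITED_USES:
        return RiskTier.UNACCEPTABLE
    if use_case in HIGH_RISK_USES:
        return RiskTier.HIGH
    if use_case in LIMITED_RISK_USES:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(classify("chatbot").value)  # transparency obligations
```

The point of the sketch is the ordering: prohibition is checked first, and anything not explicitly caught falls through to the minimal-risk tier by default.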

What Does this Mean?

The scope of these regulations is wide, and most, if not all, systems would fall into the High to Limited risk categories, which means a lot of legwork for each and every org that intends to adopt AI practices to improve productivity. In a way, this could be an anti-pattern amid the hype around Generative AI adoption – but a balanced approach will have to be adopted to remain relevant in a globally competitive market.

The EU AI Act defines 12 core requirements that these systems need to meet, which can be broadly categorised as:


  1. Data Sources: Provide an explanation of the data sources utilised for training the model.
  2. Data Governance: Describe the data governance measures applied to the training data.
  3. Copyrighted Data: Summarise the copyrighted data incorporated into the model training process.
  4. Compute: Disclose details regarding the compute resources employed, such as the model size, power requirements, and training time.
  5. Energy: Measure the energy consumption during training and take necessary steps to reduce energy usage.
  6. Capabilities/Limitations: Clearly outline the capabilities and limitations of the model.
  7. Risks/Mitigations: Describe potential risks associated with the model, present mitigation measures, and justify any risks that have not been mitigated.
  8. Evaluations: Benchmark the model against industry-standard or public benchmarks.
  9. Testing: Report the outcomes of both internal and external testing.
  10. Machine-Generated Content: Disclose that the generated content originates from machine algorithms and not humans.
  11. Member States: Specify the EU member states where the model is intended to be deployed.
  12. Downstream Documentation: Provide technical documentation to ensure downstream compliance.
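As a rough illustration of how an organisation might track these obligations internally, here is a minimal compliance scorecard. It assumes the 12 requirement names from the Stanford HAI rubric and its 0–4 per-requirement scoring scale; all class names, thresholds, and scores below are hypothetical, not part of the Act:

```python
from dataclasses import dataclass, field

# Requirement names follow the Stanford HAI rubric (paraphrased above)
REQUIREMENTS = [
    "data_sources", "data_governance", "copyrighted_data",
    "compute", "energy",
    "capabilities_limitations", "risks_mitigations", "evaluations", "testing",
    "machine_generated_content", "member_states", "downstream_documentation",
]

@dataclass
class ComplianceReport:
    model_name: str
    scores: dict = field(default_factory=dict)  # requirement -> score 0..4

    def record(self, requirement: str, score: int) -> None:
        """Record an assessor's 0-4 score for one requirement."""
        if requirement not in REQUIREMENTS:
            raise ValueError(f"unknown requirement: {requirement}")
        if not 0 <= score <= 4:
            raise ValueError("score must be between 0 and 4")
        self.scores[requirement] = score

    def total(self) -> int:
        """Aggregate score (maximum 48 across the 12 requirements)."""
        return sum(self.scores.values())

    def gaps(self) -> list:
        """Requirements scoring below 2 (unscored items count as 0)."""
        return [r for r in REQUIREMENTS if self.scores.get(r, 0) < 2]

# Hypothetical usage
report = ComplianceReport("open-model-x")
report.record("copyrighted_data", 1)   # weak copyright disclosure
report.record("energy", 4)             # full energy reporting
print(report.total(), report.gaps())
```

A structure like this makes the "aggregate the best of all" point below concrete: each provider's report would have a different `gaps()` list, and a fully compliant stack would need an empty one.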

How does it impact organisations?

This will set in motion a lot of work for every org attempting to gain an advantage using AI methods, especially Gen AI – which in turn means extra bandwidth to remain compliant!

Stanford University’s Human-Centred Artificial Intelligence (HAI) institute conducted research evaluating the top 10 LLMs from Meta, Google, OpenAI and others against the EU AI Act’s 12 benchmarks; below is the outcome:


  • All performed poorly on the Copyrighted Data clause, which will be a big bone of contention if these models need to be operated or deployed in any member state.
  • BLOOM from Hugging Face outperformed the rest in terms of being ‘compliant’.
  • Each model does well on some topics, which means a ‘fit-for-AI’ stack would need to aggregate the ‘best of all’ to be fully EU-compliant!

The AI space is generating a lot of curiosity – and we all hope this gets converted into an end state that is ‘AI for Good’. We don’t want a world as dystopian as ‘WALL·E’!

References:


  1. EU Draft AI Act
  2. Stanford University – LLM Evaluation
  3. EU AI Act – Press Release

Disclaimer: All opinions above are the author’s own and do not represent any organisation’s views on the topic. All references are credited to the right people; if anyone has been missed, kindly DM and I will gladly add/collaborate.

Also, this article has NOT been written by AI, and all views are expressed by the REAL Ashok Suthar and not his AI assistant 😉


