What you need to know about the EU AI Act and how Concentric AI can help

August 11, 2024
Mark Stone
5 min read

The European Union’s Artificial Intelligence Act (EU AI Act) is a new piece of legislation governing the use of artificial intelligence. The AI Act was published on July 12, 2024, and entered into force on August 1, 2024.  

The Act categorizes AI systems based on their risk levels to ensure safety, transparency, and accountability. Understanding and complying with this regulation is critical for organizations operating within the EU or dealing with EU residents.  

In this article, we’ll explore the key aspects of the EU AI Act and explain how Concentric AI can help organizations with compliance. 

What do I need to know about how the EU AI Act defines risk?

The EU AI Act is a comprehensive (and groundbreaking) regulatory framework that categorizes AI systems into four risk categories: unacceptable risk, high risk, limited risk, and minimal risk.  

The Act’s tiered approach is designed to regulate AI systems in proportion to the potential risks they pose to individuals and society. 

Here is how the Act defines these risk categories; a short sketch mapping example use cases to the tiers follows the definitions.  

Unacceptable risk: AI systems that pose a significant threat to safety, fundamental rights, or democratic processes are banned under the EU AI Act. Examples include AI systems used for social scoring by governments or systems that exploit vulnerabilities of specific groups. 

High risk: AI systems that significantly impact individuals’ safety or fundamental rights are classified as high risk. Critical infrastructure, education, employment, law enforcement, and biometric identification are key examples. The EU AI Act mandates stringent requirements for high-risk AI systems, including rigorous risk management, data governance, transparency, and human oversight. 

Limited risk: AI systems that require specific transparency obligations are considered limited risk. Users must be informed that they are interacting with an AI system; this covers chatbots and other applications that might not pose significant risks but still require disclosure. 

Minimal risk: AI systems with minimal or no risk are largely unregulated. This category includes AI systems used in games, spam filters, and other low-impact applications. 

The Act also includes specific provisions for General Purpose AI (GPAI) systems. These systems, capable of performing a wide range of tasks, must adhere to additional obligations, especially if they pose systemic risks.  
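To make the tiers concrete, here is a minimal sketch, in Python, of how an organization might triage its own AI systems against the Act's categories. The tier assignments mirror the examples above; the mapping and the conservative default to high risk are illustrative assumptions, not an official classification tool.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "strict obligations: risk management, data governance, oversight"
    LIMITED = "transparency obligations: disclose AI interaction"
    MINIMAL = "largely unregulated"

# Illustrative mapping of use cases to tiers, based on the Act's examples above.
USE_CASE_TIERS = {
    "government social scoring": RiskTier.UNACCEPTABLE,
    "biometric identification": RiskTier.HIGH,
    "employment screening": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "email spam filter": RiskTier.MINIMAL,
}

def triage(use_case: str) -> RiskTier:
    """Return the risk tier for a known use case; default to HIGH pending legal review."""
    return USE_CASE_TIERS.get(use_case, RiskTier.HIGH)

if __name__ == "__main__":
    for case in USE_CASE_TIERS:
        tier = triage(case)
        print(f"{case}: {tier.name} ({tier.value})")
```

A real triage would of course involve legal review; the default to high risk simply errs on the side of the stricter obligations.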

What are the key compliance requirements?

The EU AI Act contains several key compliance requirements for high-risk AI systems, including risk management, data governance, documentation and transparency, and human oversight. A sketch of how these obligations might be tracked in practice follows the list below.  

Risk management: Organizations must implement comprehensive risk management systems to identify, assess, and mitigate risks associated with AI systems. This typically means regular testing and evaluation to ensure safety and compliance. 

Data governance: Proper data governance is crucial to ensure data quality, integrity, and security. High-risk AI systems must also use high-quality datasets to minimize bias and inaccuracies. 

Documentation and transparency: Providers of high-risk AI systems are expected to maintain extensive documentation, including technical information, risk assessments, and user guidelines. Transparency measures exist to guarantee that users understand how AI decisions are made. 

Human oversight: High-risk AI systems must be designed to allow human oversight, so that decisions can be overridden if necessary. 
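These four obligations translate into records an organization has to keep. Below is a minimal sketch of what a single risk-register entry for a high-risk system might capture, assuming hypothetical field names; it illustrates the documentation burden rather than any legally prescribed schema.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class RiskRegisterEntry:
    """One entry in a high-risk AI system's risk register (hypothetical schema)."""
    system_name: str
    identified_risk: str            # risk management: what could go wrong
    mitigation: str                 # risk management: how it is addressed
    dataset_provenance: str         # data governance: where the data came from
    last_evaluation: date           # documentation: most recent test/evaluation
    human_override_available: bool  # human oversight: can a person intervene?
    evidence: list[str] = field(default_factory=list)  # links to test reports, etc.

entry = RiskRegisterEntry(
    system_name="resume-screening-model",
    identified_risk="gender bias in shortlisting",
    mitigation="balanced training set; quarterly bias audit",
    dataset_provenance="internal HR records, consent on file",
    last_evaluation=date(2024, 7, 15),
    human_override_available=True,
    evidence=["bias-audit-2024Q2.pdf"],
)
print(entry)
```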

How is the Act governed, and what are the timelines?

The AI Office, established within the European Commission, oversees the governance of the AI Act. The Office monitors compliance by General Purpose AI (GPAI) model providers and can conduct evaluations to assess compliance or investigate systemic risks. Downstream providers can also lodge complaints against upstream providers with the AI Office. 

The Act sets specific compliance timelines from its entry into force: six months for prohibited AI systems, 12 months for GPAI, 24 months for high-risk AI systems under Annex III, and 36 months for those under Annex I. Codes of practice must be established within nine months of the Act’s entry into force. 
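Because all of these deadlines are counted in months from the entry-into-force date of August 1, 2024, they are straightforward to compute. Below is a minimal Python sketch; the milestone labels and the simple month arithmetic are illustrative, and the Act's exact operative dates may differ slightly (for example, the prohibitions apply from February 2, 2025).

```python
from datetime import date
from dateutil.relativedelta import relativedelta  # third-party: pip install python-dateutil

ENTRY_INTO_FORCE = date(2024, 8, 1)

# Offsets in months, taken from the timelines described above.
MILESTONES = {
    "Prohibited AI systems": 6,
    "Codes of practice": 9,
    "General Purpose AI (GPAI)": 12,
    "High-risk systems (Annex III)": 24,
    "High-risk systems (Annex I)": 36,
}

for obligation, months in MILESTONES.items():
    deadline = ENTRY_INTO_FORCE + relativedelta(months=months)
    print(f"{obligation}: ~{deadline.isoformat()}")
```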

What are the differences between the EU AI Act and the GDPR?

The EU AI Act and the General Data Protection Regulation (GDPR) are both important regulatory frameworks created to protect individuals’ rights within the European Union. 

Still, each has a different scope and focus.  

The goal of GDPR is to address the protection of personal data and privacy rights and establish rules for data processing, storage, and transfer. It emphasizes individual consent, data minimization, and the right to access and delete personal data. The EU AI Act, on the other hand, specifically regulates the development, deployment, and use of artificial intelligence systems, focusing on the risks these systems pose to safety, fundamental rights, and ethical standards. 

Both regulations take a risk-based approach: GDPR assesses risks related to personal data processing, while the EU AI Act categorizes AI systems based on their risk levels.  

GDPR mandates data controllers and processors to implement measures for data protection, including appointing Data Protection Officers (DPOs) and conducting Data Protection Impact Assessments (DPIAs). The EU AI Act mandates providers of high-risk AI systems to implement risk management systems, maintain documentation, ensure data governance, and enable human oversight. 

Both frameworks impose significant penalties for non-compliance: GDPR fines can reach €20 million or 4% of global annual turnover, while the EU AI Act provides for fines of up to €35 million or 7% of worldwide annual turnover for the most serious violations, such as the use of prohibited AI practices. 

How can Concentric AI help with EU AI Act compliance?

Whenever a new regulation is announced, compliance can be challenging for any organization. 

The EU AI Act is only weeks old at the time of this writing, and until some time has passed, the implications are difficult to quantify. But any organization using AI must acknowledge the privacy and security concerns it brings to operations.  

Before deploying any AI system, an organization must have a clear understanding of data risk — especially when it comes to adhering to standards and data privacy. 

Concentric AI uses sophisticated natural language processing to accurately and autonomously categorize and classify data output from AI into categories that include privacy-sensitive data, intellectual property, financial information, legal agreements, human resources files, sales strategies, partnership plans, and other business-critical information. 

Concentric AI can analyze the output from any type of AI to discover sensitive information, from financial data to PII, PCI, and PHI, and label the data accordingly to ensure that only authorized personnel have access to it. Adopting these least-privilege principles is crucial for adherence to standards and privacy regulations.   

Best of all, employees don’t have to worry about labeling the output, which means better security for everyone. 

Once data has been identified and classified, Concentric AI can autonomously identify risk from inappropriate permissions, risky sharing, unauthorized access, incorrect storage locations, and more. 

Remediation actions — such as changing entitlements, adjusting access controls, or preventing the data from being shared — can also be taken centrally to fix issues and prevent data loss. 
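To illustrate the general pattern described above, here is a minimal sketch of a classify-then-remediate loop. The category names, rules, and data structures are hypothetical illustrations of the least-privilege idea, not Concentric AI's actual implementation or API.

```python
from dataclasses import dataclass

@dataclass
class Document:
    name: str
    category: str            # assigned by a classifier (an NLP model in practice)
    shared_externally: bool
    authorized_readers: set[str]
    actual_readers: set[str]

def find_risks(doc: Document) -> list[str]:
    """Flag risky sharing and unauthorized access for sensitive documents."""
    risks = []
    if doc.category == "public":
        return risks
    if doc.shared_externally:
        risks.append("risky external sharing of sensitive data")
    unauthorized = doc.actual_readers - doc.authorized_readers
    if unauthorized:
        risks.append(f"unauthorized access by {sorted(unauthorized)}")
    return risks

def remediate(doc: Document) -> None:
    """Central remediation: revoke excess entitlements and block sharing."""
    doc.actual_readers &= doc.authorized_readers   # enforce least privilege
    doc.shared_externally = False                  # stop external sharing

doc = Document("payroll-2024.xlsx", "PII", True, {"hr-team"}, {"hr-team", "intern"})
for risk in find_risks(doc):
    print(f"{doc.name}: {risk}")
remediate(doc)
assert not find_risks(doc)
```

The design point is that remediation is driven by the classification, so access can be tightened centrally without employees labeling anything themselves.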

Book a demo today with our team of AI experts to learn how Concentric AI can help your organization manage AI, boost privacy efforts and reduce risk. 
