What is the AI Act? Understanding European regulations on artificial intelligence
The AI Act (formally the Artificial Intelligence Regulation, Regulation (EU) 2024/1689) establishes harmonised rules for the use of artificial intelligence (AI) within the European Union (EU).
Adopted in June 2024 and in force since 1 August 2024, the Regulation has a dual objective: to ensure the ethical, safe and fundamental-rights-compliant use of AI, while promoting innovation and competitiveness among businesses, including SMEs, within the EU.
This European regulation establishes a framework for the ethical and secure development of artificial intelligence in the EU. It defines clear governance of European artificial intelligence based on the level of risk (unacceptable risk, high risk, limited risk or minimal risk).
Regulation based on risk level
The AI Act's approach is based on a classification of AI systems according to their level of risk, and aims to address the ethical, social and technical concerns raised by these technologies. Systems are categorised as follows:
Unacceptable risk
Prohibited systems, such as those used for cognitive or behavioural manipulation or for social scoring based on human behaviour.
High risk
Covers applications used in sensitive areas such as justice, recruitment, critical infrastructure and education. These systems must meet strict requirements for documentation, human oversight, traceability and robustness.
Limited risk
Imposes transparency obligations (e.g. for the use of chatbots).
Minimal risk
No restrictive framework, free use.
This classification aims to create trustworthy AI while taking into account the diversity of technologies, risks and uses within the European Union.
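To make the four-tier scheme concrete, here is a minimal, purely illustrative sketch of how an organisation might encode it in an internal AI inventory. The use-case names, the mapping, and the `classify` helper are all hypothetical examples, not part of the Regulation; actual classification requires legal analysis of each system.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers defined by the AI Act."""
    UNACCEPTABLE = "unacceptable"  # prohibited practices
    HIGH = "high"                  # strict compliance requirements
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # no specific restrictions

# Hypothetical mapping from registered use cases to tiers,
# as an internal inventory might record them.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "recruitment_screening": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    """Look up the risk tier of a registered use case.

    Unregistered use cases default to HIGH pending legal review,
    a deliberately conservative choice.
    """
    return USE_CASE_TIERS.get(use_case, RiskTier.HIGH)

print(classify("customer_chatbot").value)  # limited
```

Defaulting unknown systems to the high-risk tier until reviewed is one way to keep the inventory conservative while the mapping is still incomplete.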
Clear obligations for businesses
Companies that develop or operate AI systems must comply with a series of obligations. These obligations cover, in particular:
- Assessing the risks and identifying the level of danger presented by each system;
- Developing a risk management system;
- Drafting accurate technical documentation;
- Training teams in the responsible use of AI;
- Post-deployment monitoring to prevent any misuse.
A framework for trustworthy AI
The AI Act applies to all organisations, whether European or foreign, that design, distribute or operate AI systems within the EU.
This regulatory framework is supported by the creation of the European AI Office, which is responsible for coordinating national authorities and, alongside bodies such as the EDPS, for developing tools, codes of practice and guidelines.
The AI Act represents a major step towards building a responsible European digital ecosystem. By regulating AI-related practices, this regulation aims to strengthen citizens’ trust in these technologies and reduce the risk of abuse.
A gradual implementation
The Regulation, dated 13 June 2024 and published in the Official Journal on 12 July 2024, provides for gradual implementation:
- 2 February 2025: ban on systems posing an unacceptable risk
- 2 August 2025: obligations for providers of general-purpose AI models, designation of competent authorities
- 2 August 2026: general applicability of the Regulation, including most rules for high-risk AI systems
- 2 August 2027: rules for high-risk AI systems embedded in products already covered by EU harmonisation legislation
Tip: Even though not all measures are yet in force, it is advisable to anticipate deadlines and map your AI systems, assess the risks, and train your teams accordingly.
Tools for businesses: regulatory sandboxes and accompanying measures
To combine innovation and compliance, the AI Act offers companies, particularly SMEs, access to regulatory sandboxes. These supervised testing environments allow innovative AI systems to be developed and tested under safe, controlled conditions.
Each Member State must set up this type of mechanism, coordinated with the European AI Office, in order to provide technical and legal support. It is also a concrete way to help organisations take ownership of technologies, data, risks and expectations in terms of regulatory compliance.
Our support approach: rigorous and tailor-made
At Phénix Privacy, we support you in complying with the AI Act through a comprehensive methodology that integrates regulatory requirements while promoting ethical and responsible AI. Here is our approach:
1. Raise awareness and establish structure
We deliver tailor-made training courses to establish an AI culture that complies with the Regulation. We help to appoint an AI compliance officer, often in conjunction with the DPO, and to define the scope of intervention and the resources allocated to the process.
2. Map your AI systems
Identification of your AI systems (in development or in production), analysis of use cases, and mapping against the AI Act's classification grids.
3. Assess and classify risks
Workshops enable each system to be classified according to its level of risk (unacceptable, high, limited, minimal) based on the sector, the data used, the AI model, etc.
4. Bring high-risk systems into compliance
We apply a structured method covering auditing, technical data sheets, training-data validation, traceability, human oversight, technical documentation, CE marking and the declaration of conformity.
5. Integrate the principles of responsible AI
We help you embed the seven ethical principles of AI (societal and environmental well-being; transparency and explainability; privacy and data protection; robustness and technical safety; accountability; fairness and equity; human autonomy and control) into the governance of your AI projects.
A long-term vision: compliance, innovation and trust
The AI Act is not just a constraint: it is also a strategic opportunity. By anticipating the regulation, companies strengthen customer confidence, reduce risks and engage in responsible innovation on a European scale.
Our services for implementing the AI Act
1. Drafting of AI policies and procedures
A robust documentation base, compliant with the European regulatory framework, incorporating technical specifications, internal policies, processes and ethical commitment.
2. Risk assessment tools
AI risk analysis through technical tests, classification grids, and internal audits.
3. Structuring AI governance
Establishment of steering committees, appointment of AI compliance officers, alignment with best practices within the EU.
4. Regulatory assessment and AI mapping
Identification of your AI systems, cross-referencing with risk levels, examination of sensitive data and biometric data.
5. Legal framework for AI relations
Allocation of legal responsibilities and integration of contractual clauses adapted to the AI Act.