

Standardisation of the AI Act

The document primarily frames AI standardisation as enabling innovation and global competitiveness through harmonised standards that foster trust, reduce compliance costs, and position the EU as a global benchmark-setter. While safety and fundamental rights are mentioned as underlying concerns of the AI Act, the standardisation page's own reasoning emphasises how standards support innovation, market acceptance, and the EU's leadership role.


Harmonised standards will offer legal certainty under the AI Act, support innovation, and position the EU to set global benchmarks for trustworthy AI. Ensuring an effective and clear implementation of the AI Act is a priority for the Commission.

The AI Act regulates 'high-risk' AI systems that impact safety, health, and fundamental rights, for example in critical infrastructure and law enforcement, among other areas (see Article 6 and Annex III of the AI Act). These requirements need to be fulfilled before placement on the market, ensuring high-risk AI systems are monitored throughout their lifecycle. Standards translate legal requirements into common technical language, simplifying compliance for companies and other stakeholders.

- Legal certainty and reduced compliance costs: European harmonised standards provide a clear pathway to compliance for businesses of all sizes.
- Market benchmarking: European harmonised standards often become de facto global benchmarks. For example, standards currently under development that set methodologies for risk management and quality management are strong candidates to become market benchmarks in the future.
- Innovation and competitiveness: European harmonised standards foster trust and market acceptance, enabling developers who adopt them to compete on a global scale while ensuring their solutions meet the highest safety standards.

The European Committee for Standardisation (CEN) and the European Committee for Electrotechnical Standardisation (CENELEC) are European Standardisation Organisations. Working groups in these two organisations are actively developing harmonised standards for high-risk AI systems; they work together in a Joint Technical Committee called JTC 21.
The European Commission has requested that CEN and CENELEC develop standards in ten key areas:

- risk management
- governance and quality of datasets
- record keeping
- transparency
- human oversight
- accuracy
- robustness
- cybersecurity
- quality management
- conformity assessment

Once harmonised standards are published by CEN and CENELEC, the Commission assesses whether they meet the intended objectives and legal requirements of the AI Act. After this final step, the standards are referenced in the Official Journal of the EU.

The application of standards remains voluntary: providers can choose any other framework to demonstrate their compliance with the AI Act. However, harmonised standards referenced in the Official Journal of the EU provide legal certainty, and companies that apply them are presumed to be compliant with the legal requirements.

On 30 October 2025, prEN 18286: Artificial Intelligence - Quality Management System for EU AI Act Regulatory Purposes became the first harmonised standard for AI to enter public enquiry, allowing national standardisation bodies to comment on the draft before its final publication. This harmonised standard is specifically designed to help providers of high-risk AI systems comply with the AI Act's Article 17 requirements, offering a product-focused framework for AI lifecycle governance.

Guidance and support are essential for the roll-out of any new law, and the AI Act is no different. On 19 November 2025, the Digital Omnibus proposed linking the entry into application of the rules governing high-risk AI systems to the availability of support tools, including but not limited to standards. At the latest, the rules would become applicable on 2 December 2027 for high-risk AI systems covered in Annex III of the AI Act, and on 2 August 2028 for AI systems covered under the EU harmonisation legislation listed in Annex I.
If support tools, including standards, are available earlier, the Commission can decide to make the rules applicable earlier.

European Standardisation Organisations do not work on standards in isolation. An 'international first' approach is one of the guiding principles of standardisation: international standards, when available and aligned with EU requirements, can become European harmonised standards. European Standardisation Organisations and European companies are actively engaging with international standardisation bodies, thereby contributing to a broader global framework of AI standards. For instance, ISO/IEC SC 42 is already developing AI-related international standards, and European representatives are actively shaping this process. This alignment is crucial to avoid regulatory fragmentation and to ensure that AI developers can operate across multiple markets without redundant compliance efforts.

The benchmarks and standards established today will define the role of AI in our society for generations to come. By fostering the development of European harmonised standards, the EU can advance and lead the safe development and adoption of AI systems globally.