

Code of Practice for General-Purpose AI Models: Transparency Chapter

The document frames AI governance primarily around protecting fundamental rights, health, safety, democracy, and the rule of law as enshrined in the EU Charter, in alignment with the AI Act's stated objectives. Transparency obligations are justified by the need to enable regulatory oversight, protect rights of downstream users, and ensure trustworthy AI. Secondary framings include consumer/public safety (high-risk AI system compliance) and innovation enablement (supporting AI uptake and market functioning).


Nuria Oliver, Working Group 1 Co-Chair
Rishi Bommasani, Working Group 1 Vice-Chair

Introductory note by the Chair and Vice-Chair of the Transparency Chapter

The Transparency Chapter of the Code of Practice describes three Measures which Signatories commit to implementing to comply with their transparency obligations under Article 53(1), points (a) and (b), and the corresponding Annexes XI and XII of the AI Act. To facilitate compliance and fulfilment of the commitments contained in Measure 1.1, we include a user-friendly Model Documentation Form which allows Signatories to easily compile the information required by the aforementioned provisions of the AI Act in a single place. The Model Documentation Form indicates for each item whether the information is intended for downstream providers, the AI Office, or national competent authorities. Information intended for the AI Office or national competent authorities is only to be made available following a request from the AI Office, either ex officio or based on a request to the AI Office from national competent authorities. Such requests will state the legal basis and purpose of the request and will concern only items from the Form that are strictly necessary for the AI Office to fulfil its tasks under the AI Act at the time of the request, or for national competent authorities to exercise their supervisory tasks under the AI Act at the time of the request, in particular to assess compliance of providers of high-risk AI systems built on general-purpose AI models where the provider of the system is different from the provider of the model.
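The per-item audience tagging described above (each item of the Form is intended for downstream providers, the AI Office, or national competent authorities) lends itself to a simple data model. The sketch below is purely illustrative: the `Audience` values, field names, and example items are our assumptions, not contents of the actual Form.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Audience(Enum):
    # The three recipient categories named in the Transparency Chapter.
    DOWNSTREAM_PROVIDERS = auto()
    AI_OFFICE = auto()
    NATIONAL_AUTHORITIES = auto()

@dataclass(frozen=True)
class DocumentationItem:
    name: str
    value: str
    audiences: frozenset  # recipients this item is intended for

def view_for(items, recipient):
    """Return only the items intended for the given recipient."""
    return [item for item in items if recipient in item.audiences]

# Hypothetical items, not taken from the actual Form.
items = [
    DocumentationItem("model_version", "1.0",
                      frozenset({Audience.DOWNSTREAM_PROVIDERS, Audience.AI_OFFICE})),
    DocumentationItem("training_compute", "1e24 FLOP",
                      frozenset({Audience.AI_OFFICE})),
]

downstream_view = view_for(items, Audience.DOWNSTREAM_PROVIDERS)
```

A recipient-specific view like `downstream_view` would contain only the first item here; items intended solely for the AI Office are withheld until a request with a stated legal basis arrives.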
In accordance with Article 78 AI Act, the recipients of any of the information contained in the Model Documentation Form are obliged to respect the confidentiality of the information obtained, in particular intellectual property rights and confidential business information or trade secrets, and to put in place adequate and effective cybersecurity measures to protect the security and confidentiality of the information obtained.

Objectives

The overarching objective of this Code of Practice (“Code”) is to improve the functioning of the internal market, to promote the uptake of human-centric and trustworthy artificial intelligence (“AI”), while ensuring a high level of protection of health, safety, and fundamental rights enshrined in the Charter, including democracy, the rule of law, and environmental protection, against harmful effects of AI in the Union, and to support innovation pursuant to Article 1(1) AI Act. To achieve this overarching objective, the specific objectives of this Code are:

A. To serve as a guiding document for demonstrating compliance with the obligations provided for in Articles 53 and 55 AI Act, while recognising that adherence to the Code does not constitute conclusive evidence of compliance with these obligations under the AI Act.

B. To ensure providers of general-purpose AI models comply with their obligations under the AI Act and to enable the AI Office to assess compliance of providers of general-purpose AI models who choose to rely on the Code to demonstrate compliance with their obligations under the AI Act.
Recitals

Whereas:

(a) The Signatories recognise the particular role and responsibility of providers of general-purpose AI models along the AI value chain, as the models they provide may form the basis for a range of downstream AI systems, often provided by downstream providers that need a good understanding of the models and their capabilities, both to enable the integration of such models into their products and to fulfil their obligations under the AI Act (see recital 101 AI Act).

(b) The Signatories recognise that in the case of a fine-tuning or other modification of a general-purpose AI model, where the natural or legal person, public authority, agency or other body that modifies the model becomes the provider of the modified model subject to the obligations for providers of general-purpose AI models, their Commitments under the Transparency Chapter of the Code should be limited to that modification or fine-tuning, to comply with the principle of proportionality (see recital 109 AI Act). In this context, Signatories should take into account relevant guidelines by the European Commission.

(c) The Signatories recognise that, without exceeding the Commitments under the Transparency Chapter of this Code, when providing information to the AI Office or to downstream providers they may need to take into account market and technological developments, so that the information continues to serve its purpose of allowing the AI Office and national competent authorities to fulfil their tasks under the AI Act, and downstream providers to integrate the Signatories’ models into AI systems and to comply with their obligations under the AI Act (see Article 56(2), point (a), AI Act).
This Chapter of the Code focuses on the documentation obligations from Article 53(1), points (a) and (b), AI Act that are applicable to all providers of general-purpose AI models (without prejudice to the exception laid down in Article 53(2) AI Act), namely those concerning Annex XI, Section 1, and Annex XII AI Act. The documentation obligations concerning Annex XI, Section 2, AI Act, applicable only to providers of general-purpose AI models with systemic risk, are covered by Measure 10.1 of the Safety and Security Chapter of this Code.

Commitment 1 Documentation

LEGAL TEXT: Articles 53(1)(a), 53(1)(b), 53(2), 53(7), and Annexes XI and XII AI Act

In order to fulfil the obligations in Article 53(1), points (a) and (b), AI Act, Signatories commit to drawing up and keeping up-to-date model documentation in accordance with Measure 1.1, providing relevant information to providers of AI systems who intend to integrate the general-purpose AI model into their AI systems (‘downstream providers’ hereafter), and to the AI Office upon request (possibly on behalf of national competent authorities upon request to the AI Office when this is strictly necessary for the exercise of their supervisory tasks under the AI Act, in particular to assess the compliance of a high-risk AI system built on a general-purpose AI model where the provider of the system is different from the provider of the model (1)), in accordance with Measure 1.2, and ensuring quality, security, and integrity of the documented information in accordance with Measure 1.3. In accordance with Article 53(2) AI Act, these Measures do not apply to providers of general-purpose AI models released under a free and open-source license that satisfy the conditions specified in that provision, unless the model is a general-purpose AI model with systemic risk.
Measure 1.1 Drawing up and keeping up-to-date model documentation

Signatories, when placing a general-purpose AI model on the market, will have documented at least all the information referred to in the Model Documentation Form below (hereafter this information is referred to as the ‘Model Documentation’). Signatories may choose to complete the Model Documentation Form provided in the Appendix to comply with this commitment. Signatories will update the Model Documentation to reflect relevant changes in the information contained in the Model Documentation, including in relation to updated versions of the same model, while keeping previous versions of the Model Documentation for a period ending 10 years after the model has been placed on the market.

Measure 1.2 Providing relevant information

Signatories, when placing a general-purpose AI model on the market, will publicly disclose via their website, or via other appropriate means if they do not have a website, contact information for the AI Office and downstream providers to request access to the relevant information contained in the Model Documentation, or other necessary information. Signatories will provide, upon a request from the AI Office pursuant to Articles 91 or 75(3) AI Act for one or more elements of the Model Documentation, or any additional information, that are necessary for the AI Office to fulfil its tasks under the AI Act or for national competent authorities to exercise their supervisory tasks under the AI Act, in particular to assess compliance of high-risk AI systems built on general-purpose AI models where the provider of the system is different from the provider of the model (2), the requested information in its most up-to-date form, within the period specified in the AI Office’s request in accordance with Article 91(4) AI Act.

(1) See Article 75(1) and (3) AI Act and Article 88(2) AI Act.
(2) See Article 75(1) and (3) and Article 88(2) AI Act.

Signatories will provide to downstream providers the information contained in the most up-to-date Model Documentation that is intended for downstream providers, subject to the confidentiality safeguards and conditions provided for under Articles 53(7) and 78 AI Act. Furthermore, without prejudice to the need to observe and protect intellectual property rights and confidential business information or trade secrets in accordance with Union and national law, Signatories will provide additional information upon a request from downstream providers insofar as such information is necessary to enable them to have a good understanding of the capabilities and limitations of the general-purpose AI model relevant for its integration into the downstream providers’ AI system and to enable those downstream providers to comply with their obligations pursuant to the AI Act. Signatories will provide such information within a reasonable timeframe, and no later than 14 days after receiving the request, save for exceptional circumstances. Signatories are encouraged to consider whether the documented information can be disclosed, in whole or in part, to the public to promote public transparency. Some of this information may also be required in summarised form as part of the training content summary that providers must make publicly available under Article 53(1), point (d), AI Act, according to a template to be provided by the AI Office.

Measure 1.3 Ensuring quality, integrity, and security of information

Signatories will ensure that the documented information is controlled for quality and integrity, retained as evidence of compliance with obligations in the AI Act, and protected from unintended alterations. In the context of drawing up, updating, and controlling the quality and security of the information and records, Signatories are encouraged to follow established protocols and technical standards.
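Measure 1.3's requirement that documented information be protected from unintended alterations could, for instance, be supported by content hashing. The sketch below, a SHA-256 digest over a canonical serialisation, is one minimal way to detect accidental modification of a record; it is our illustration, not a mechanism prescribed by the Code, and the record fields are hypothetical.

```python
import hashlib
import json

def fingerprint(documentation: dict) -> str:
    """SHA-256 digest over a canonical JSON serialisation, so any
    change to the documented information changes the digest."""
    canonical = json.dumps(documentation, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def verify(documentation: dict, expected_digest: str) -> bool:
    """Check a stored record against the digest recorded for it."""
    return fingerprint(documentation) == expected_digest

# Hypothetical Model Documentation record.
record = {"model_name": "example-model", "version": "1.0"}
digest = fingerprint(record)
```

Recording `digest` alongside each version of the Model Documentation lets a Signatory later confirm that the retained evidence has not silently changed; an updated version of the record simply receives a new digest.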
Model Documentation Form

Below is a static version of the Model Documentation Form. In this version, the input fields cannot be filled in. A fillable version of this form is separately available.