

AI Standards | NIST

The document primarily frames AI governance through the lens of enabling innovation while building public trust. NIST's standards work is explicitly described as promoting innovation, accelerating standards creation, and 'unleashing AI innovation,' while trust and risk management appear as secondary but recurring concerns. Economic competitiveness surfaces through references to U.S. leadership and global engagement strategies.

Full text

https://www.nist.gov/artificial-intelligence/ai-standards

On March 6, 2026, NIST's Information Technology Laboratory (ITL) AI Program hosted a webinar on the international AI standards landscape and ITL's role, priorities, and progress. The webinar included an overview of the current state of the international AI standards ecosystem and of ITL's progress in accelerating and broadening participation in the standardization process.

NIST leads and participates in the development of technical standards, including international standards, that promote innovation and public trust in systems that use AI. A broad spectrum of standards for AI data, performance, and governance are, and increasingly will be, a priority for trustworthy and responsible AI. NIST carries out this work consistent with the U.S. Government National Standards Strategy for Critical and Emerging Technology.

NIST's new AI Standards Zero Drafts project will pilot a process to broaden participation in, and accelerate the creation of, standards, helping standards meet the AI community's needs and unleash AI innovation. In this project, NIST will collect input on topics with a science-backed body of work and use it to develop "zero drafts": preliminary, stakeholder-driven drafts of standards that are as thorough as possible. These drafts will then be submitted into the private sector-led standardization process as proposals for further development into voluntary consensus standards.

NIST has developed a plan for global engagement on promoting and developing AI standards. The goal is to drive the development and implementation of AI-related consensus standards, cooperation and coordination, and information sharing.
Reflecting public and private sector input, NIST released a draft plan on April 29, 2024. On July 26, 2024, after considering public comments on the draft, NIST released A Plan for Global Engagement on AI Standards (NIST AI 100-5).

In its role as federal AI standards coordinator, NIST works across the government and with industry stakeholders to identify critical standards development activities, strategies, and gaps. Based on priorities outlined in the NIST-developed "Plan for Federal Engagement in AI Standards and Related Tools," NIST seeks out AI standards development opportunities, periodically collecting and analyzing information about agencies' AI standards-related priority activities and making recommendations through the interagency process to optimize engagement. On March 1, 2022, NIST delivered to Congress a report summarizing the progress federal agencies have made in implementing the recommendations of the U.S. Leadership in AI plan.

NIST is facilitating federal agency coordination in the development and use of AI standards in part through the Interagency Committee on Standards Policy (ICSP), which it chairs. The ICSP's AI Standards Coordination Working Group (AISCWG) aims to promote effective and consistent federal policies leveraging AI standards, raise awareness, and foster agencies' use of AI to inform the development of standards. The group helps coordinate government and private sector positions on international AI standards activities. NIST's role in ensuring awareness and federal coordination of AI standards is explained in more detail here.

Incorporation of the AI RMF into international standards will further the Framework's value as a resource for those designing, developing, deploying, or using AI systems, helping them manage the many risks of AI and promoting trustworthy and responsible development and use of AI systems.
The AI RMF seeks to "Take advantage of and foster greater awareness of existing standards, guidelines, best practices, methodologies, and tools for managing AI risks...." AI RMF 1.0 takes into account and cites international standards and documents. As part of the AI RMF Roadmap, NIST is making it a priority to continue aligning the AI RMF and related guidance with applicable international standards, guidelines, and practices. The roadmap specifically cites "Alignment with international standards and production crosswalks to related standards (e.g., ISO/IEC 5338, ISO/IEC 38507, ISO/IEC 22989, ISO/IEC 24028, ISO/IEC DIS 42001, and ISO/IEC NP 42005)."

The first two crosswalks to the AI RMF created by NIST are for ISO/IEC FDIS 23894, Information technology - Artificial intelligence - Guidance on risk management, and an illustration of how the NIST AI RMF trustworthiness characteristics relate to the OECD Recommendation on AI, the proposed EU AI Act, and several other key documents. Subsequently, NIST has posted additional AI RMF crosswalks in the NIST Trustworthy and Responsible AI Center.

On January 15, 2026, NIST released A Possible Approach for Evaluating AI Standards Development (GCR-26-069). The report is intended to stimulate discussion of a potential approach to evaluating the effectiveness, utility, and relative value of AI standards development. It was prepared by Dr. Julia Lane, a NIST Associate and Professor Emerita at New York University. Recognizing the lack of formal or shared methods for measuring the impact of standards development on the goals of innovation and trust, the report sketches a conceptual structure for evaluating whether a given AI standard, or set of standards, meets those goals. It draws on successful and well-tested evaluation approaches, tools, and metrics used to monitor and assess the effect of interventions in other domains.