Mallory

GSA and NIST Launch Federal AI Evaluation Standards Partnership

standards-framework-update · ai-platform-security · government-diplomatic-threat
Updated May 5, 2026 at 11:01 PM · 6 sources

The General Services Administration (GSA) and NIST announced a partnership to create standardized methods for evaluating AI models and services before federal agencies deploy them in operational environments. The effort, housed in NIST’s Center for AI Standards and Innovation, is intended to establish common benchmarks, testing methodologies, and practical guidance so agencies can assess AI performance more consistently and reduce duplicated evaluation work across government. GSA said the work will also support USAi.gov, its platform for agency experimentation and onboarding of AI tools, with the stated goal of accelerating federal AI adoption while improving confidence in procurement and deployment decisions.

A separate analysis of GSA's broader AI procurement posture highlights the policy context for this move, arguing that the agency is trying to impose governance controls after an extended federal push to speed AI adoption. That commentary focuses on GSA's proposed contract clause GSAR 552.239-7001, which would govern issues such as data control, portability, sourcing, and conflicts with vendor terms, and frames it as a response to governance gaps in federal AI acquisition. The remaining sources cover unrelated topics (enterprise AI governance advice, foreign AI policy, telecom strategy, legacy-code modernization, and MWC product announcements) and do not describe this federal standards initiative.

Timeline

  1. May 5, 2026

    Commerce announces CAISI testing of Google, Microsoft and xAI models

    The U.S. Commerce Department announced that NIST's Center for Artificial Intelligence Standards and Innovation will evaluate leading AI models from Google DeepMind, Microsoft, and xAI in classified environments before deployment to assess security and national security risks. The agreements expand earlier voluntary arrangements and align with the Trump administration's AI Action Plan, while also supporting development of best practices for commercial AI systems.

  2. May 1, 2026

    Senate Judiciary advances GUARD Act on minors and AI companions

    The Senate Judiciary Committee advanced the bipartisan GUARD Act, which would bar AI companies from allowing children to use AI companions and require such systems to disclose that they are not human and lack professional credentials. The bill would also criminalize AI companions that knowingly solicit sexual content from minors or generate such content, and would require age verification for all users.

  3. Apr 3, 2026

    Industry groups warn GSA draft AI clause could create civil liberties and IP risks

    Trade and industry groups including Americans for Responsible Innovation and the Business Software Alliance warned that GSA’s proposed AI procurement language could enable surveillance or profiling, conflict with vendor terms, weaken contractor intellectual property protections, and discourage commercial AI adoption. The criticism focused on provisions giving the government broad rights over input data, custom AI developments, and use of AI systems for lawful government purposes.

  4. Mar 18, 2026

    GSA and NIST announce partnership on AI evaluation standards

    GSA and the National Institute of Standards and Technology announced a partnership to create consistent testing methods, benchmarks, and onboarding resources for evaluating AI models and services before federal operational use. The effort is based in NIST’s Center for AI Standards and Innovation and is intended to support USAi.gov and speed the transition of AI pilots into production.

  5. Mar 18, 2026

    GSA develops draft federal AI contract clause

    The General Services Administration proposed draft clause GSAR 552.239-7001 to impose baseline safeguards on federal procurement of AI systems, including restrictions on use of government data, portability and evaluation rights, and upstream compliance obligations.

  6. Dec 11, 2025

    OMB issues memo on bias and transparency in federal LLM procurement

    The White House Office of Management and Budget issued memorandum M-26-04 directing federal agencies to procure large language models that are truthful, neutral, and free from bias, while adding transparency and vendor assessment requirements. The memo also left gaps, including reliance on vendor self-assessments and limited coverage of existing contracts.

  7. Jul 1, 2025

    Trump administration issues AI Action Plan

The Trump administration released an AI Action Plan setting broader federal priorities for artificial intelligence, which framed subsequent government procurement and evaluation efforts.


Related Stories

Federal Push for AI Security Standards and Playbooks


The U.S. government is intensifying efforts to secure artificial intelligence systems against foreign threats, with legislative and operational initiatives underway. A bipartisan bill, the Advanced AI Security Readiness Act, has been introduced to require the National Security Agency to develop a comprehensive security playbook for protecting federal AI systems. This playbook will address vulnerabilities such as model-weight theft, insider threats, and cyberespionage, and will involve collaboration with major AI developers, national laboratories, and multiple federal agencies. The move follows recent NSA publications on AI security and reflects growing concern over adversaries seeking to exploit American AI innovation. In parallel, the Defense Logistics Agency is accelerating the adoption of AI tools across its operations, emphasizing the need to keep pace with adversaries like China and Russia. The agency's CIO highlighted the importance of Pentagon-wide AI integration to maintain a technological edge in defense logistics. These developments underscore a coordinated federal approach to both leveraging and securing AI technologies in critical government functions.

1 month ago
US Policy Actions on AI Governance, Standards, and Transparency


US policymakers and regulators advanced multiple **AI governance** initiatives spanning labor-market measurement, standards-setting, and training-data transparency. Nine US senators urged the Department of Labor, the Bureau of Labor Statistics, and the Census Bureau to expand federal surveys (including the *Current Population Survey*, *JOLTS*, and the *National Longitudinal Survey*) to better quantify AI-driven workforce disruption and potential job growth, arguing current public data is insufficient to track AI’s economic impacts. Separately, a federal judge denied **xAI**’s attempt to block a California law requiring disclosures about AI training datasets, finding the company did not sufficiently show the disclosures would reveal protectable trade secrets or violate First/Fifth Amendment rights; the case unfolded amid heightened scrutiny of *Grok* over harmful outputs (including allegations involving antisemitic content and generation of NCII/CSAM). In Washington, a nominee to lead **NIST** told lawmakers he would prioritize AI metrology and global standards leadership—framing standards as economically and strategically important—while also emphasizing support for advanced semiconductor manufacturing and alignment with the administration’s AI and industrial policy priorities.

1 month ago
US Push to Export AI Cybersecurity Standards and Norms


The **Office of the National Cyber Director (ONCD)** said the US government is pursuing diplomacy to encourage other countries to adopt **American AI cybersecurity standards and norms**, positioning secure AI deployment as part of a broader effort to advance US AI leadership. Alexandra Seymour, the ONCD’s principal deputy assistant national cyber director for policy, said the administration plans to promote industry best practices for secure AI deployment and to accelerate adoption of **AI-enabled defensive tools** to “detect, divert and deceive” threat actors targeting critical systems, alongside continued federal network modernization and preparation for **post-quantum cryptography**. Seymour’s remarks were delivered at the *Identity, Authentication, and the Road Ahead Policy Forum* and were framed as consistent with themes in the administration’s **AI Action Plan**, including a role for the Departments of Commerce and State in advocating international governance approaches aligned with US values and countering authoritarian influence. Reporting also noted that some internationally oriented guidance has already been issued (including releases in May and December) and that other governments are similarly seeking to shape global AI security standards, while a forthcoming national cybersecurity strategy is expected to further address AI’s role in defending federal networks.

1 month ago
