LegisPlain/S. 2938
🇺🇸 United States · S. 2938 · 119th Congress · Mar 24, 2026

Artificial Intelligence Risk Evaluation Act of 2025

This bill requires the Department of Energy to build and run a mandatory testing program for the most powerful AI systems — those trained using more than 10^26 computing operations (roughly frontier-model scale).

📋 What It Does

Any company that builds such a system must submit it for evaluation before deploying it commercially, or face $1 million/day in fines. The program is designed to generate empirical data on AI risks so Congress can eventually build a permanent regulatory framework.

Establishes the Advanced Artificial Intelligence Evaluation Program at the Department of Energy within 90 days of enactment
Applies to any developer of an AI system trained above 10^26 FLOP (integer or floating-point operations) — the current frontier-model threshold
Requires covered developers to hand over source code, training data, model weights, architecture details, and other materials on request
Prohibits deployment of a covered AI system in interstate or foreign commerce without program compliance
Imposes a minimum $1,000,000-per-day civil penalty for non-compliance
Mandates adversarial 'red team' testing to simulate real-world jailbreaking and misuse scenarios
Requires the Secretary to submit a permanent regulatory framework recommendation to Congress within 360 days, with annual updates for the program's 7-year life
Directs the program to evaluate whether any AI system is approaching 'artificial superintelligence' and to propose options including 'nationalization or other strategic measures'
Program sunsets after 7 years unless Congress reauthorizes it
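The 10^26-operation threshold can be made concrete with a rough compute estimate. A minimal sketch, assuming the widely used ~6 × parameters × tokens approximation for dense-transformer training operations; the model sizes below are hypothetical illustrations, not figures from the bill:

```python
# Back-of-envelope check of whether a training run would cross the bill's
# 10^26-operation threshold, using the common ~6 * parameters * tokens
# approximation for dense-transformer training compute. The parameter and
# token counts below are assumptions for illustration only.

THRESHOLD_OPS = 10**26  # covered-system threshold from the bill text

def training_ops(params: float, tokens: float) -> float:
    """Approximate total training operations for a dense transformer."""
    return 6 * params * tokens

def is_covered(params: float, tokens: float) -> bool:
    """Would a hypothetical run of this size fall under the program?"""
    return training_ops(params, tokens) > THRESHOLD_OPS

# A 70B-parameter model trained on 15T tokens: ~6.3e24 ops, below threshold.
print(is_covered(70e9, 15e12))    # False
# A 500B-parameter model trained on 50T tokens: ~1.5e26 ops, above threshold.
print(is_covered(500e9, 50e12))   # True
```

Under this approximation, the threshold sits roughly an order of magnitude above today's largest published training runs, which is why the bill's scope is described as "frontier-model scale."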

Who Benefits

Congress — receives empirical testing data and a structured recommendation for permanent AI oversight, reducing reliance on industry self-reporting
National security agencies — gain classified assessments of AI weaponization risks, loss-of-control scenarios, and foreign adversary exploitation potential
Smaller AI developers — a standardized evaluation process could reduce compliance uncertainty compared to a patchwork of future regulations
The general public — program is explicitly designed to surface risks to civil liberties, labor markets, critical infrastructure, and existential safety before deployment
Allied governments — U.S. evaluation standards and data could form the basis of international AI governance frameworks
⚠️ Who Gets Hurt

Large frontier AI developers (OpenAI, Anthropic, Google DeepMind, Meta AI, xAI, and equivalents) — mandatory handover of proprietary code, weights, and training data is an extraordinary IP disclosure requirement
Open-source AI developers and releasing entities — 'deploy' is defined to include open-source releases, meaning even non-commercial open-source model drops above the threshold would require program compliance
AI startups with frontier ambitions — $1M/day penalties and compliance overhead could be existential for smaller players, consolidating the market around incumbents who can absorb costs
Cloud compute providers — proposed monitoring of 'AI hardware usage, computational resource inputs, and cloud-computing deployments' implies reporting obligations not yet explicitly defined
Academic and research institutions — if a training run exceeds the threshold, researchers would be subject to the same compliance regime as commercial developers
🔍 Hidden Riders

Definition change requires a joint resolution of Congress — Section 3(1)(B) mandates that any update to the 10^26 FLOP threshold requires congressional approval via a joint resolution, meaning the executive branch cannot adapt the definition as technology advances without a full legislative act; this is an unusual constraint that could freeze the program's scope
'Nationalization' floated as a regulatory option — Section 5(b)(8) explicitly instructs the program to 'develop proposed options' including 'nationalization' of AI development; this is buried in a list of program activities without any statutory definition, scope limitation, or procedural safeguard for what nationalization would entail
Blanket classified testing authority — Sections 5(b)(1) and 5(b)(3) establish classified testing and classified third-party assessments with no oversight structure, inspector general requirement, or transparency mechanism specified for how classification decisions are made or reviewed
'Substantially modifies' triggers developer status — Section 3(7) includes anyone who 'substantially modifies' a covered system as a developer, but 'substantially' is undefined, potentially sweeping in fine-tuners, deployers, or enterprise customers who customize base models
🎭 Framing Analysis

Framed as a data-gathering exercise to inform future regulation — the bill also contains an outright deployment ban with $1M/day penalties, making it a de facto pre-deployment approval requirement, not merely a study program
Framed as technology-neutral empirical testing — the 10^26 FLOP threshold specifically targets today's largest frontier models; the threshold is fixed in statute rather than indexed to hardware or algorithmic progress, and requires an Act of Congress to update, meaning it could quickly become obsolete or inadvertently capture systems that are no longer frontier-level
Framed as a bipartisan national security measure (Hawley + Blumenthal) — the bill's scope extends well beyond national security into labor markets, economic competition, and civil liberties, suggesting broader regulatory ambitions than the national security framing implies
Framed as generating 'evidence-based' standards — the program as written gives the Secretary broad discretion over testing methodology, classification, and reporting with limited external accountability mechanisms, which could produce evidence that reflects the program's own design choices
🚩 Red Flags

Mandatory IP handover with no explicit confidentiality or security safeguards in the bill text — Section 4(a)(2) requires production of source code, training data, model weights, and architecture details; the bill does not specify how DOE must store, protect, or restrict access to this extraordinarily sensitive material, and a breach or leak would be catastrophic for affected companies
$1M/day penalty attaches from day of violation with no cure period — Section 4(c) imposes the minimum fine immediately with no notice-and-cure window, no tiered enforcement, and no adjudicatory process specified before penalties accrue
90-day stand-up deadline for DOE is almost certainly unworkable — Section 5(a) gives DOE 90 days to establish a classified AI evaluation program from scratch; DOE has no existing AI evaluation infrastructure at this scale, making the deadline a political signal rather than an operational commitment
'Scheming behavior' and 'loss-of-control scenario' are defined broadly enough to capture normal AI outputs — definitions in Section 3 include 'behaves in an unpredictable manner' and 'hiding its true capabilities,' which are contested empirical claims about current AI systems and could be assessed inconsistently across evaluators
Open-source model releases are explicitly covered — Section 3(8) defines 'deploy' to include open-source releases, meaning a researcher publishing model weights on Hugging Face could face $1M/day fines; this could effectively prohibit open-source frontier AI development in the U.S.
No private right of action, no ombudsman, no appeals process specified — the bill creates massive obligations and penalties but does not establish how a penalized developer contests a finding, challenges a classification decision, or appeals a testing outcome
'Nationalization' option is raised without any legal, constitutional, or procedural framework — Section 5(b)(8) asks the program to develop nationalization proposals, but the Takings Clause, existing property rights law, and the absence of any enabling authority for such action make this provision more of a political statement than a legal pathway
📊 Current Status

S. 2938 was introduced in the U.S. Senate on September 29, 2025 by Senators Josh Hawley (R-MO) and Richard Blumenthal (D-CT). It was read twice and referred to the Senate Committee on Commerce, Science, and Transportation. No committee markup, hearing, or floor action has been scheduled.
