Worried about AI risks? NIST’s red-team study on misinformation, data leaks & manipulation is still shelved. #AIrisk #NIST #AISecurity
🎧 Listen to the Audio
If you’re short on time, check out the key points in this audio version.
📝 Read the Full Text
If you prefer to read at your own pace, here’s the full explanation below.
Decoding the Delay: Why Is a Key U.S. AI Safety Report Still on the Shelf?
John: Hey Lila! Today, I want to dive into a topic that’s been creating some buzz in the AI policy and safety communities. It involves a key U.S. government agency, a much-anticipated report on AI risks, and a delay that has a lot of people talking. We’re looking at why an important AI risk study from NIST remains unpublished.
Lila: Hi John! That sounds serious. First off, can you remind me what NIST is and why a report from them would be such a big deal?
John: Of course! NIST stands for the National Institute of Standards and Technology. Think of them as the nation’s measurement experts. For over a century, they’ve set the standards for everything from the weight of a kilogram to the cybersecurity frameworks that protect our data. When it comes to new technology like AI, NIST’s role is to develop the guidelines and benchmarks to ensure it’s safe, reliable, and trustworthy. So, when they prepare a major report on AI safety, everyone pays close attention.
What’s in the Unpublished Report?
Lila: Okay, that makes sense. So what was this specific report supposed to be about?
John: This report was set to publish the findings from a massive AI testing initiative. The core of it focused on a practice called “red-teaming.”
Lila: “Red-teaming?” That sounds like something out of a spy movie. What does it mean for AI?
John: It’s a great analogy! In cybersecurity, a red team is a group of ethical hackers who try to break into a company’s systems to find vulnerabilities before malicious actors do. In AI, it’s very similar. Experts intentionally try to get AI models to fail or produce harmful outputs. For example, they might test if a large language model (LLM) could be tricked into:
- Generating instructions for building weapons or dangerous substances.
- Creating convincing misinformation or propaganda.
- Exhibiting discriminatory biases against certain groups.
The goal isn’t to be destructive; it’s to find these dangerous flaws so developers can fix them. This work was conducted by the AI Safety Institute Consortium (AISIC), a group of over 200 member organizations, including leading AI companies, academics, and civil society groups, all working together under NIST’s guidance.
A Timeline of the Delay: Past, Present, and Future
Lila: Wow, that sounds incredibly important. Everyone should have access to those findings! So, what happened? Why is it still sitting on a shelf?
John: To understand that, we need to look at the timeline. Let’s break it down into past, present, and what we might expect looking ahead.
Lila: Perfect, that helps a lot.
In the Past: A Plan for Transparency
John: The initiative for this report stems directly from President Biden’s October 2023 Executive Order on AI. The order mandated the creation of the U.S. Artificial Intelligence Safety Institute (USAISI) within NIST. A key part of its mission was to develop standards for AI safety testing. The red-teaming report was meant to be one of the first major public outputs, providing a baseline understanding of the risks found in current AI models. The original plan, according to a major report from Reuters, was to publish these findings to foster public transparency and guide future regulation.
Currently: A Leadership Shuffle and an Indefinite Pause
Lila: So what’s holding it up right now?
John: The delay is directly linked to a significant administrative and leadership change at the top of the new AI Safety Institute. Initially, Elham Tabassi, a long-time and highly respected AI expert at NIST, was expected to lead the institute. However, in February 2024, the Commerce Department announced that Elizabeth Kelly, a former special assistant to the President on technology and economic policy at the White House, would be appointed as the Director of the USAISI. Elham Tabassi was named Chief Technology Officer.
Lila: Is that change the reason for the delay? It seems like a big shift in direction.
John: Based on reporting from sources like Reuters, which cited anonymous NIST officials, the decision to shelve the report came after this leadership transition. The change reportedly created some internal friction and a shift in priorities, leading to the report being put on hold indefinitely. The official reason hasn’t been publicly stated, but the timing strongly suggests a connection. The focus seems to have shifted from immediate public reporting to internal strategy and organization under the new leadership.
Looking Ahead: An Uncertain Future for the Report
Lila: So, what happens now? Will the report ever be released?
John: That’s the million-dollar question. Looking ahead, the future of this specific report is uncertain. The USAISI is moving forward—it has leadership, a mandate, and the backing of the Commerce Department. However, a debate has emerged in the tech policy world about this situation.
- On one side: Critics and some AI safety advocates are concerned. They argue that withholding these foundational safety findings undermines transparency and slows down the collective effort to make AI safe. Without a public, government-backed benchmark, it’s harder to hold AI companies accountable for their safety claims.
- On the other side: Some experts, as noted in outlets like FedScoop, see a potential upside to Elizabeth Kelly’s appointment. They argue her deep connections within the White House and across government could give the USAISI more power, influence, and funding than it might have had otherwise. The argument is that this could be better for AI safety in the long run, even if it means short-term delays.
For now, the AI community is in a “wait-and-see” mode. The institute is operational, but the foundational report that many were expecting remains unpublished.
Why This Matters for Everyone
Lila: I can see the debate, but this still feels a bit inside-baseball. Why should someone like me, who isn’t an AI researcher or policymaker, care about this bureaucratic delay?
John: That’s the most important question, Lila. It matters because NIST’s work sets the “rules of the road” for technology that affects all of us. This isn’t just a shelved document; it’s about public trust. Think about it: we all use AI every day, whether through search engines, social media feeds, or workplace tools. We are trusting that these systems are reasonably safe.
John: NIST’s role is to provide an independent, objective standard for what “safe” means. By making red-teaming results public, they would give everyone—from other countries to independent auditors to everyday users—a clear yardstick to measure AI safety. When that information is kept private, we have to rely more on the claims of the tech companies themselves. It’s the difference between having a public vehicle safety rating from the government versus just taking a carmaker’s word that their car is safe. This delay touches on the core principles of transparency and public accountability in the age of AI.
John: This situation perfectly captures the central challenge of the AI era: balancing the incredible speed of innovation with the careful, deliberate work of ensuring it benefits humanity safely. The story of this unpublished report is a real-time lesson in how difficult, and how critical, that balancing act is.
Lila: Got it. It’s about making sure the safety conversation happens out in the open, not just behind the closed doors of government agencies or tech companies.
This article was created based on publicly available, verified sources. References:
- NIST’s Unpublished AI Risk Study Remains Shelved Amid Administrative Change
- Exclusive: US shelved plan to publish AI ‘red-teaming’ safety report, sources say – Reuters
- NIST’s new AI safety institute director has deep White House ties. Experts say that’s a good thing. – FedScoop
- Commerce Secretary Gina Raimondo Announces Elizabeth Kelly to Serve as Director of the U.S. Artificial Intelligence Safety Institute – NIST