NSA taps Anthropic’s Mythos despite Pentagon risk warnings: report
The NSA is reportedly using Anthropic's advanced Mythos Preview AI model despite the Department of Defense having previously designated the startup a 'supply chain risk.' The development highlights tension among U.S. national security agencies over AI procurement and vendor assessment, with implications for how government entities evaluate AI safety and security risks.
The NSA's adoption of Anthropic's Mythos Preview model creates a significant contradiction within the U.S. government's approach to AI governance. While the Department of Defense flagged Anthropic as a supply chain risk—a designation typically used for vendors posing security, counterintelligence, or operational vulnerabilities—the NSA independently pursued access to the company's most advanced capabilities. This divergence suggests either disagreement over risk assessment methodologies or compartmentalized decision-making that bypasses standardized vetting processes.
Anthropic has positioned itself as a safety-focused AI company, emphasizing constitutional AI methods and responsible model development. However, the DoD's supply chain risk classification may stem from concerns about foreign investment, data handling practices, or model misuse potential. The NSA's move indicates confidence in either Anthropic's security posture or the model's utility for classified operations, despite institutional concerns.
For the AI industry, this signals that vendor assessments vary dramatically across federal agencies, creating regulatory fragmentation. Companies labeled as risks by one department can still secure contracts with others, reducing the deterrent effect of such designations. This inconsistency may embolden other AI startups to pursue government contracts despite initial rejection or warnings from defense bodies.
The precedent also raises questions about oversight mechanisms. If the NSA can independently access systems flagged as risky, oversight bodies may lack enforcement power to prevent potentially problematic vendor relationships. Future developments will likely include either harmonized federal AI vendor standards or explicit authority clarifications defining which agencies can override security designations.
- NSA secured access to Anthropic's Mythos Preview despite DoD supply chain risk classification
- Contradiction reveals fragmented federal AI procurement and assessment standards across agencies
- Anthropic's safety-focused positioning did not prevent government security concerns
- Regulatory uncertainty may allow AI startups to pursue government contracts despite risk flags
- Incident highlights need for standardized or unified federal AI vendor vetting processes
