y0news

#military-ai News & Analysis

40 articles tagged with #military-ai. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.

AI · Bearish · TechCrunch – AI · Mar 4 · 🔥 8/10

The US military is still using Claude — but defense-tech clients are fleeing

The US military continues using Anthropic's Claude AI models for targeting decisions during aerial attacks on Iran, while defense-tech clients are reportedly leaving the platform. This highlights the ongoing tension between AI companies' military applications and their broader client relationships.

AI · Bearish · The Verge – AI · Feb 27 · 🔥 8/10

AI vs. the Pentagon: killer robots, mass surveillance, and red lines

Anthropic is in heated negotiations with the Pentagon after refusing new military contract terms that would allow 'any lawful use' of their AI models, including mass surveillance and autonomous lethal weapons. While competitors OpenAI and xAI have agreed to the terms, Anthropic faces being designated a 'supply chain risk' and Trump has ordered federal agencies to drop their AI services.

AI · Neutral · Crypto Briefing · 4d ago · 7/10

Paul Scharre: Definitions of autonomous weapons shape military strategy, AI’s role in target identification is crucial, and human oversight is essential for effective operations | Odd Lots

Paul Scharre discusses how definitions of autonomous weapons systems shape military strategy, emphasizing AI's critical role in target identification while stressing the necessity of human oversight in military operations. The analysis highlights tensions between automation and human control in warfare.

AI · Bearish · arXiv – CS AI · Apr 6 · 7/10

Corporations Constitute Intelligence

This analysis of Anthropic's 2026 AI constitution reveals significant flaws in corporate AI governance, including military deployment exemptions and the exclusion of democratic input despite evidence that public participation reduces bias. The article argues that corporate transparency cannot substitute for democratic legitimacy in determining AI ethical principles.

🏢 Anthropic · 🧠 Claude
AI · Neutral · Crypto Briefing · Mar 25 · 7/10

Michael Horowitz: The conflict between Anthropic and the Pentagon is rooted in politics, AI policy mandates impact vendor contracts, and concerns about mass surveillance are complex | Big Technology

Anthropic's conflict with the Pentagon highlights deep political and ethical tensions surrounding AI applications in military contexts. The dispute reflects broader concerns about AI policy mandates affecting vendor contracts and the complexities of mass surveillance issues.

AI · Bullish · MIT Technology Review · Mar 17 · 7/10

The Pentagon is planning for AI companies to train on classified data, defense official says

The Pentagon is planning to create secure environments for AI companies to train military-specific versions of their models on classified data. AI models like Anthropic's Claude are already being used in classified settings, including for analyzing targets in Iran, but training on classified data would represent a significant expansion of AI use in defense applications.

🏢 Anthropic · 🧠 Claude
AI · Neutral · MIT Technology Review · Mar 17 · 7/10

The Download: OpenAI’s US military deal, and Grok’s CSAM lawsuit

This article discusses OpenAI's controversial agreement to provide AI technology access to the Pentagon, raising questions about potential military applications. The piece also mentions a lawsuit involving Grok and CSAM-related issues.

🏢 OpenAI · 🧠 Grok
AI · Neutral · arXiv – CS AI · Mar 12 · 7/10

Measuring and Eliminating Refusals in Military Large Language Models

Researchers developed the first benchmark dataset to measure refusal rates in military Large Language Models, finding that current LLMs refuse up to 98.2% of legitimate military queries due to safety behaviors. The study tested 34 models and demonstrated techniques to reduce refusals while maintaining military task performance.

AI · Bearish · IEEE Spectrum – AI · Mar 8 · 7/10

Military AI Policy Needs Democratic Oversight

A major dispute has escalated between the U.S. Department of Defense and Anthropic over military AI use, with Defense Secretary Pete Hegseth designating Anthropic a supply chain risk after the company refused to allow unrestricted use of its AI systems. The confrontation centers on Anthropic's refusal to enable domestic surveillance and autonomous military targeting, raising questions about democratic oversight of military AI policies.

🏢 Anthropic
AI · Bearish · TechCrunch – AI · Mar 6 · 7/10

Anthropic vs. the Pentagon, the SaaSpocalypse, and why competition is good, actually

The Pentagon designated Anthropic a supply-chain risk after disputes over military control of AI models for weapons and surveillance, leading to a collapsed $200 million contract. The DoD shifted to OpenAI instead, which caused ChatGPT uninstalls to surge 295% following their acceptance of the military partnership.

🏢 OpenAI · 🏢 Anthropic · 🧠 ChatGPT
AI · Neutral · TechCrunch – AI · Mar 5 · 7/10

Anthropic CEO Dario Amodei could still be trying to make a deal with the Pentagon

Anthropic's $200 million contract with the Department of Defense collapsed due to disagreements over providing the military with unrestricted access to the company's AI technology. The breakdown highlights ongoing tensions between AI companies and government agencies over control and usage rights of advanced AI systems.

🏢 Anthropic
AI · Neutral · arXiv – CS AI · Mar 5 · 7/10

The Controllability Trap: A Governance Framework for Military AI Agents

Researchers propose the Agentic Military AI Governance Framework (AMAGF) to address control failures in autonomous military AI systems. The framework introduces a Control Quality Score (CQS) to continuously measure and manage human control over AI agents throughout operations, moving beyond binary control models.

AI · Neutral · Wired – AI · Mar 4 · 7/10

What AI Models for War Actually Look Like

While Anthropic and other AI companies debate ethical limits on military AI applications, Smack Technologies is actively developing AI models specifically designed to plan and execute battlefield operations. This highlights the growing divide between companies taking cautious approaches to military AI and those directly pursuing defense applications.

AI · Bearish · Crypto Briefing · Mar 4 · 7/10

AI chose nukes in 95% of war games. The Pentagon wants to deploy it anyway.

Research reveals that AI systems chose nuclear weapons in 95% of military war game simulations, yet the Pentagon continues pursuing AI deployment in defense systems. This highlights significant concerns about the risks of weaponizing AI without proper ethical oversight and safeguards.

AI · Bearish · Crypto Briefing · Mar 3 · 7/10

Sam Altman says OpenAI rushed Pentagon deal as ChatGPT backlash erupts

Sam Altman acknowledged that OpenAI mishandled its Pentagon partnership deal, leading to significant user backlash. ChatGPT app uninstalls surged 295% while app store reviews declined sharply following the controversial military collaboration announcement.

AI · Bearish · Decrypt – AI · Mar 2 · 🔥 8/10

Anthropic's AI Used in Iran Strikes After Trump Moved to Cut Ties: WSJ

Anthropic's Claude AI was reportedly used in U.S. Central Command operations during Iran strikes, even as the Trump administration ordered federal agencies to sever ties with the AI company. This highlights potential conflicts between government AI usage and political directives regarding AI companies.

AI · Bearish · Wired – AI · Feb 27 · 7/10

Trump Moves to Ban Anthropic From the US Government

President Trump has issued an order to ban Anthropic from US government use, following Defense Department pressure on the AI company to remove restrictions on military applications of its technology. This represents a significant escalation in government-AI company tensions over military use policies.

AI · Neutral · TechCrunch – AI · Feb 27 · 7/10

Anthropic vs. the Pentagon: What’s actually at stake?

Anthropic and the Pentagon are in conflict over AI deployment in autonomous weapons systems and surveillance applications. This dispute highlights critical questions about corporate versus government control over military AI development and the ethical boundaries of AI technology in national security.

AI · Neutral · TechCrunch – AI · Feb 27 · 7/10

Employees at Google and OpenAI support Anthropic’s Pentagon stand in open letter

Employees from Google and OpenAI have written an open letter supporting Anthropic's ethical stance regarding its Pentagon partnership. Anthropic maintains strict boundaries, refusing to allow its AI technology to be used for mass domestic surveillance or fully autonomous weapons systems.

AI · Bearish · The Verge – AI · Feb 27 · 7/10

We don’t have to have unsupervised killer robots

The Pentagon has issued an ultimatum to Anthropic demanding unchecked military access to its AI technology, including for surveillance and autonomous weapons, threatening to designate the company a supply chain risk if refused. This confrontation is prompting broader concerns among tech workers about their companies' military contracts and the future implications of AI weaponization.

Page 1 of 2