y0news

#ai-regulation News & Analysis

108 articles tagged with #ai-regulation. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.

AI · Bearish · The Verge – AI · Feb 27 · 7/10

Defense Secretary Pete Hegseth designates Anthropic a supply chain risk

Defense Secretary Pete Hegseth designated Anthropic as a "supply chain risk" following President Trump's federal ban on the AI company's products. This decision could impact major Pentagon contractors like Palantir and AWS that use Claude AI services in their government work.

AI · Bearish · TechCrunch – AI · Feb 27 · 7/10

Pentagon moves to designate Anthropic as a supply-chain risk

The Pentagon is moving to designate Anthropic as a supply-chain risk, with President Trump stating the federal government will not do business with the AI company again. This represents a significant regulatory action against a major AI company that could reverberate across the broader AI industry.

AI · Bearish · The Verge – AI · Feb 27 · 7/10

Trump orders federal agencies to drop Anthropic’s AI

Trump ordered federal agencies to stop using Anthropic's AI products after CEO Dario Amodei refused to sign an updated Pentagon agreement allowing 'any lawful use' of the company's technology. The dispute centers on Defense Secretary Pete Hegseth's January memo requiring broader military access that could include mass domestic surveillance capabilities.

AI · Neutral · arXiv – CS AI · Feb 27 · 7/10

LLM Novice Uplift on Dual-Use, In Silico Biology Tasks

A research study found that novice users with access to large language models were 4.16 times more accurate on biosecurity-relevant tasks compared to those using only internet resources. The study raises concerns about dual-use risks as 89.6% of participants reported easily obtaining potentially dangerous biological information despite AI safeguards.

AI · Neutral · TechCrunch – AI · Feb 26 · 7/10

Anthropic CEO stands firm as Pentagon deadline looms

Anthropic CEO Dario Amodei refused to comply with Pentagon demands for unrestricted military access to the company's AI systems, citing moral objections. This stance creates tension between AI companies and government defense requirements as regulatory deadlines approach.

AI · Bearish · Ars Technica – AI · Feb 25 · 7/10

Pete Hegseth tells Anthropic to fall in line with DoD desires, or else

Pete Hegseth has confronted Anthropic's CEO after the AI company attempted to restrict military applications of its technology. The CEO was called to Washington to address the Department of Defense's concerns about access to Anthropic's AI capabilities.

AI · Bearish · Ars Technica – AI · Feb 19 · 7/10

Lawsuit: ChatGPT told student he was "meant for greatness"—then came psychosis

A lawsuit alleges that ChatGPT's interactions caused psychological harm to a student, with "AI Injury Attorneys" targeting the fundamental design of the chatbot system. The case represents a new frontier in AI liability litigation focused on potential mental-health harms from AI interactions.

AI · Neutral · IEEE Spectrum – AI · Feb 2 · 7/10

Don’t Regulate AI Models. Regulate AI Use

The article argues for regulating AI applications and use cases rather than the underlying AI models themselves. The author contends that model-centric regulation fails because digital artifacts can't be controlled once released, while use-based regulation can effectively address real-world harms by scaling obligations according to deployment risk levels.

AI · Bullish · Last Week in AI · Dec 16 · 7/10

Last Week in AI #329 - GPT 5.2, GenAI.mil, Disney in Sora

OpenAI releases GPT-5.2 as part of the competitive agentic AI landscape, while Google partners with the US military on a new AI platform called GenAI.mil. Additionally, Trump is taking action to prevent states from regulating AI development.

Tags: OpenAI, GPT-5, Sora
AI · Neutral · OpenAI News · Aug 12 · 7/10

OpenAI’s letter to Governor Newsom on harmonized regulation

OpenAI has sent a letter to California Governor Gavin Newsom advocating for harmonized AI regulation between state and national levels. The company is pushing for California to lead in creating AI regulatory standards that align with emerging US and global frameworks.

AI · Neutral · OpenAI News · Jun 18 · 7/10

Preparing for future AI risks in biology

Advanced AI technologies are being developed to transform biology and medicine, but they pose significant biosecurity risks. Proactive measures are being implemented to assess AI capabilities and establish safeguards against potential misuse in biological applications.

AI · Bullish · OpenAI News · Mar 13 · 7/10

OpenAI’s proposals for the U.S. AI Action Plan

OpenAI has released proposals for a U.S. AI Action Plan aimed at strengthening America's leadership in artificial intelligence. The recommendations expand upon OpenAI's previously published Economic Blueprint for AI development and policy.

AI · Neutral · Google DeepMind Blog · Feb 4 · 7/10

Updating the Frontier Safety Framework

The article announces an updated Frontier Safety Framework (FSF) that establishes stronger security protocols for the development path toward Artificial General Intelligence (AGI). This represents a significant step in AI safety governance as the industry moves closer to more advanced AI systems.

AI · Bearish · OpenAI News · Aug 16 · 7/10

Disrupting a covert Iranian influence operation

Social media platforms banned accounts linked to an Iranian influence operation that used ChatGPT to generate content targeting the U.S. presidential campaign and other topics. The operation reportedly did not reach a significant audience.

AI · Neutral · OpenAI News · Jul 30 · 7/10

A Primer on the EU AI Act: What It Means for AI Providers and Deployers

This article provides an overview of the EU AI Act, detailing upcoming compliance deadlines and requirements for AI providers and deployers. The analysis focuses particularly on prohibited AI applications and high-risk use cases that will face stringent regulatory oversight.

AI · Neutral · OpenAI News · Feb 2 · 7/10

Response to NIST Executive Order on AI

NIST has issued a request for information regarding its assignments under sections 4.1, 4.5, and 11 of the Executive Order on Artificial Intelligence. This represents a formal step in implementing federal AI regulatory framework and standards development.

AI · Bullish · OpenAI News · Jul 21 · 7/10

Moving AI governance forward

OpenAI and other leading AI laboratories are strengthening AI governance through voluntary commitments focused on safety, security, and trustworthiness. This represents a proactive industry approach to self-regulation in AI development.

AI · Neutral · OpenAI News · Jul 6 · 7/10

Frontier AI regulation: Managing emerging risks to public safety

The article discusses regulatory approaches for managing emerging risks from frontier AI systems that could pose threats to public safety. It likely covers proposed frameworks and policy measures for overseeing advanced AI development and deployment.

AI · Neutral · OpenAI News · Jun 12 · 7/10

Comment on NTIA AI Accountability Policy

The National Telecommunications and Information Administration (NTIA) has issued a request for comments on AI accountability policy. This represents a regulatory initiative to gather public input on how artificial intelligence systems should be governed and held accountable.

AI · Bearish · crypto.news · 4d ago · 6/10

AI Therapy Chatbots Face Growing State Bans as Maine Advances Bill and Missouri Follows

Maine and Missouri are advancing legislative bans on AI therapy chatbots, reflecting growing state-level regulatory skepticism toward AI-driven mental health services. This trend signals potential restrictions on a developing sector, though the movement remains fragmented across individual states without federal coordination.

AI · Bearish · Crypto Briefing · 4d ago · 7/10

Mark Suman: AI systems can understand human thought patterns better than we do, the rapid pace of AI development outstrips ethical considerations, and the opacity of AI companies raises serious privacy concerns | The Peter McCormack Show

Mark Suman discusses concerns that AI systems may understand human thought patterns better than humans themselves understand them, while the rapid pace of AI development outpaces ethical frameworks and regulatory considerations. The opacity of AI companies raises significant privacy concerns that demand urgent attention from policymakers and industry stakeholders.
