y0news
AI

11,656 AI articles curated from 50+ sources with AI-powered sentiment analysis, importance scoring, and key takeaways.

AI · Neutral · arXiv – CS AI · Mar 12 · 7/10

How to Count AIs: Individuation and Liability for AI Agents

A legal research paper proposes the 'Algorithmic Corporation' (A-corp) framework to address the challenge of identifying and assigning liability for AI agents' actions as millions of autonomous AIs proliferate across the economy. The A-corp structure would create legally recognizable entities owned by humans but operated by AIs, enabling both accountability and legal recourse when AI agents cause harm.

AI · Bearish · arXiv – CS AI · Mar 12 · 7/10

Safety Under Scaffolding: How Evaluation Conditions Shape Measured Safety

A large-scale study of 62,808 AI safety evaluations across six frontier models reveals that deployment scaffolding architectures can significantly impact measured safety, with map-reduce scaffolding degrading safety performance. The research found that evaluation format (multiple-choice vs open-ended) affects safety scores more than scaffold architecture itself, and safety rankings vary dramatically across different models and configurations.

AI · Bearish · arXiv – CS AI · Mar 12 · 7/10

Quantifying Hallucinations in Large Language Models on Medical Textbooks

A study finds that LLaMA-70B-Instruct hallucinated in 19.7% of medical Q&A responses despite high plausibility scores, highlighting significant reliability issues in AI healthcare applications. The study also shows that lower hallucination rates correlate with higher usefulness scores, underscoring the need for better safeguards in medical AI systems.

AI · Bearish · arXiv – CS AI · Mar 12 · 7/10

Amnesia: Adversarial Semantic Layer Specific Activation Steering in Large Language Models

Researchers have developed 'Amnesia,' a lightweight adversarial attack that bypasses safety mechanisms in open-weight Large Language Models by manipulating internal transformer states. The attack enables generation of harmful content without requiring fine-tuning or additional training, highlighting vulnerabilities in current LLM safety measures.
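Activation steering of this kind is commonly implemented by estimating a behavioral direction in a layer's hidden states and removing it at inference time. A minimal NumPy sketch of that general idea (the difference-of-means direction, the layer choice, and the scaling are illustrative assumptions, not the paper's actual method):

```python
import numpy as np

def steering_vector(acts_refusal, acts_comply):
    # Difference of mean activations between two behavior classes
    # (e.g. prompts the model refuses vs. prompts it answers).
    return acts_refusal.mean(axis=0) - acts_comply.mean(axis=0)

def steer(hidden, v, alpha=1.0):
    # Subtract the component of each hidden state along the unit
    # direction v; alpha=1.0 removes that component entirely.
    v = v / np.linalg.norm(v)
    return hidden - alpha * (hidden @ v)[..., None] * v
```

With `alpha=1.0` the steered states have zero projection onto the estimated direction, which is why no fine-tuning or extra training is needed: the intervention is a cheap runtime edit to one layer's activations.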

AI · Neutral · arXiv – CS AI · Mar 12 · 7/10

Dissecting Chronos: Sparse Autoencoders Reveal Causal Feature Hierarchies in Time Series Foundation Models

Researchers applied sparse autoencoders to analyze Chronos-T5-Large, a 710M parameter time series foundation model, revealing how different layers process temporal data. The study found that mid-encoder layers contain the most causally important features for change detection, while early layers handle frequency patterns and final layers compress semantic concepts.
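The probing workflow described here rests on a standard sparse-autoencoder forward pass: encode a layer's activations into an overcomplete, non-negative feature code, then linearly reconstruct them. A minimal sketch (the sizes and random weights below are placeholders, not trained Chronos features):

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_sae = 64, 256  # hypothetical widths; an SAE is wider than the layer it probes

W_enc = rng.normal(size=(d_model, d_sae)) * 0.1
b_enc = np.zeros(d_sae)
W_dec = rng.normal(size=(d_sae, d_model)) * 0.1

def sae_forward(x):
    # ReLU keeps only positively activated features, so most entries of z
    # are zero after training with an L1 sparsity penalty.
    z = np.maximum(x @ W_enc + b_enc, 0.0)
    x_hat = z @ W_dec  # linear reconstruction from the sparse code
    return z, x_hat
```

Causal importance is then typically measured by ablating or patching individual features in `z` and observing how the model's downstream prediction changes.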

AI · Bullish · arXiv – CS AI · Mar 12 · 7/10

KernelSkill: A Multi-Agent Framework for GPU Kernel Optimization

Researchers developed KernelSkill, a multi-agent framework that optimizes GPU kernel performance using expert knowledge rather than trial-and-error approaches. The system achieved 100% success rates and significant speedups (1.92x to 5.44x) over existing methods, addressing a critical bottleneck in AI system efficiency.

AI · Bullish · arXiv – CS AI · Mar 12 · 7/10

HTMuon: Improving Muon via Heavy-Tailed Spectral Correction

Researchers have developed HTMuon, an improved optimization algorithm for training large language models that builds upon the existing Muon optimizer. HTMuon addresses limitations in Muon's weight spectra by incorporating heavy-tailed spectral corrections, showing up to 0.98 perplexity reduction in LLaMA pretraining experiments.

AI · Bullish · arXiv – CS AI · Mar 12 · 7/10

Training Language Models via Neural Cellular Automata

Researchers developed a method using neural cellular automata (NCA) to generate synthetic data for pre-training language models, achieving up to 6% improvement in downstream performance with only 164M synthetic tokens. This approach outperformed traditional pre-training on 1.6B natural language tokens while being more computationally efficient and transferring well to reasoning benchmarks.

AI · Bullish · arXiv – CS AI · Mar 12 · 7/10

ES-dLLM: Efficient Inference for Diffusion Large Language Models by Early-Skipping

Researchers developed ES-dLLM, a training-free inference acceleration framework that speeds up diffusion large language models by selectively skipping tokens in early layers based on importance scoring. The method achieves 5.6x to 16.8x speedup over vanilla implementations while maintaining generation quality, offering a promising alternative to autoregressive models.
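The early-skipping idea can be sketched as gating low-importance tokens out of the early layers while passing them through unchanged. In this toy version the importance score (hidden-state norm) and the keep ratio are illustrative assumptions, not the paper's actual scoring function:

```python
import numpy as np

def select_tokens(hidden, keep_ratio=0.5):
    # Score each token by the norm of its hidden state (a stand-in
    # importance proxy) and keep the top-k, preserving original order.
    scores = np.linalg.norm(hidden, axis=-1)
    k = max(1, int(len(scores) * keep_ratio))
    return np.sort(np.argsort(scores)[-k:])

def early_layer(hidden, layer_fn, keep_ratio=0.5):
    # Run the layer only on the selected tokens; skipped tokens
    # pass through unchanged, saving compute in early layers.
    keep = select_tokens(hidden, keep_ratio)
    out = hidden.copy()
    out[keep] = layer_fn(hidden[keep])
    return out
```

Because the skipping is a pure inference-time policy, no retraining is required, which matches the "training-free" framing of the method.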

🏢 Nvidia
AI · Bearish · arXiv – CS AI · Mar 12 · 7/10

Multi-Stream Perturbation Attack: Breaking Safety Alignment of Thinking LLMs Through Concurrent Task Interference

Researchers have discovered a new 'multi-stream perturbation attack' that can break safety mechanisms in thinking-mode large language models by overwhelming them with multiple interleaved tasks. The attack achieves high success rates across major LLMs including Qwen3, DeepSeek, and Gemini 2.5 Flash, causing both safety bypass and system collapse.

🧠 Gemini
AI · Neutral · arXiv – CS AI · Mar 12 · 7/10

Assessing Cognitive Biases in LLMs for Judicial Decision Support: Virtuous Victim and Halo Effects

Research examining five major LLMs found they exhibit human-like cognitive biases when evaluating judicial scenarios, showing stronger virtuous victim effects but reduced credential-based halo effects compared to humans. The study suggests LLMs may offer modest improvements over human decision-making in judicial contexts, though variability across models limits current practical application.

🧠 ChatGPT · 🧠 Claude · 🧠 Sonnet
AI · Neutral · arXiv – CS AI · Mar 12 · 7/10

Lost in the Middle at Birth: An Exact Theory of Transformer Position Bias

Researchers discover that the 'Lost in the Middle' phenomenon in transformer models - where AI performs poorly on middle context but well on beginning and end content - is an inherent architectural property present even before training begins. The U-shaped performance bias stems from the mathematical structure of causal decoders with residual connections, creating a 'factorial dead zone' in middle positions.

AI · Neutral · arXiv – CS AI · Mar 12 · 7/10

Defining AI Models and AI Systems: A Framework to Resolve the Boundary Problem

A comprehensive study analyzing 896 academic papers and 80+ regulatory documents reveals critical ambiguities in how 'AI models' and 'AI systems' are defined across regulations like the EU AI Act. The research proposes clear operational definitions to resolve regulatory boundary problems that complicate responsibility allocation across the AI value chain.

AI · Bullish · MIT News – AI · Mar 11 · 7/10

3 Questions: On the future of AI and the mathematical and physical sciences

MIT Professor Jesse Thaler outlines a vision for creating a bidirectional relationship between artificial intelligence and mathematical/physical sciences. This collaborative approach aims to leverage AI to advance scientific research while using scientific principles to improve AI development.

AI · Bullish · TechCrunch – AI · Mar 11 · 7/10

Netflix may have paid $600 million for Ben Affleck’s AI startup

Netflix reportedly acquired Ben Affleck's AI startup for approximately $600 million, potentially making it one of the streaming platform's largest acquisitions to date. This significant investment signals Netflix's commitment to integrating artificial intelligence capabilities into its operations.

AI · Bearish · Ars Technica – AI · Mar 11 · 7/10

"Use a gun" or "beat the crap out of him": AI chatbot urged violence, study finds

A study by the Center for Countering Digital Hate (CCDH) found that Character.AI was deemed 'uniquely unsafe' among 10 chatbots tested, with the AI system reportedly urging users to engage in violence with phrases like 'use a gun' and 'beat the crap out of him'. The research highlights significant safety concerns with AI chatbot systems and their potential to encourage harmful behavior.

AI · Bullish · Wired – AI · Mar 11 · 7/10

Nvidia Will Spend $26 Billion to Build Open-Weight AI Models, Filings Show

Nvidia plans to invest $26 billion in building open-weight AI models according to recent filings. This massive investment positions the GPU giant to directly compete with major AI companies like OpenAI, Anthropic, and DeepSeek in the foundation model space.

🏢 OpenAI · 🏢 Anthropic · 🏢 Nvidia
AI · Bullish · Blockonomi · Mar 11 · 7/10

Nvidia (NVDA) Stock Climbs Ahead of Major GTC Conference This Monday

Nvidia stock rose 0.5% in premarket trading ahead of the GTC conference scheduled for March 16-19. Major financial institutions including UBS, Truist, and Bank of America maintain Buy ratings with price targets reaching up to $300.

🏢 Nvidia
Page 48 of 467