y0news
🧠

AI

12,000 AI articles curated from 50+ sources with AI-powered sentiment analysis, importance scoring, and key takeaways.

AI · Bullish · OpenAI News · Feb 27 · 7/10 · 6
🧠

Joint Statement from OpenAI and Microsoft

Microsoft and OpenAI issued a joint statement reaffirming their collaboration across research, engineering, and product development, emphasizing a partnership built on years of shared work and success.

AI · Bullish · OpenAI News · Feb 27 · 7/10 · 7
🧠

Scaling AI for everyone

A major AI company announces $110B in new investment funding at a $730B pre-money valuation. The funding round includes significant contributions from three major tech players: $30B from SoftBank, $30B from NVIDIA, and $50B from Amazon.
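
A quick check of the arithmetic, assuming the standard pre/post-money convention: the three commitments account for the full round, $30B + $30B + $50B = $110B, which implies a post-money valuation of $730B + $110B = $840B.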

AI · Neutral · arXiv – CS AI · Feb 27 · 7/10 · 6
🧠

VeRO: An Evaluation Harness for Agents to Optimize Agents

Researchers introduced VeRO (Versioning, Rewards, and Observations), a new evaluation framework for testing AI coding agents that can optimize other AI agents through iterative improvement cycles. The system provides reproducible benchmarks and structured execution traces to systematically measure how well coding agents can improve target agents' performance.
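
The abstract maps naturally onto a version-evaluate-record loop. A minimal toy sketch of such a loop in Python; ToyAgent, run_benchmark, and optimize_step are illustrative stand-ins, not the paper's actual API:

import copy, hashlib, json, random

class ToyAgent:
    """Stand-in target agent: behaviour reduced to one tunable knob."""
    def __init__(self, temperature=1.0):
        self.config = {"temperature": temperature}

def run_benchmark(agent):
    """Reward is how close the knob is to a hidden optimum; the structured
    trace returned alongside it plays the role of the 'observation'."""
    t = agent.config["temperature"]
    score = -(t - 0.3) ** 2
    return score, {"input": dict(agent.config), "score": score}

def optimize_step(agent, history):
    """Stand-in optimizer agent: proposes a random local edit."""
    candidate = copy.deepcopy(agent)
    candidate.config["temperature"] += random.uniform(-0.2, 0.2)
    return candidate

def vero_loop(agent, iterations=20):
    best, (best_score, _) = agent, run_benchmark(agent)
    history = []
    for _ in range(iterations):
        candidate = optimize_step(best, history)
        # Versioning: a content hash of the config makes each run reproducible.
        version = hashlib.sha256(
            json.dumps(candidate.config, sort_keys=True).encode()).hexdigest()[:12]
        score, trace = run_benchmark(candidate)          # reward + observation
        history.append({"version": version, "reward": score, "trace": trace})
        if score > best_score:
            best, best_score = candidate, score
    return best, best_score, history

agent, score, history = vero_loop(ToyAgent())
print(f"best score {score:.4f} after {len(history)} evaluated versions")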

AI · Bullish · arXiv – CS AI · Feb 27 · 7/10 · 9
🧠

ArchAgent: Agentic AI-driven Computer Architecture Discovery

ArchAgent, an AI-driven system built on AlphaEvolve, has achieved breakthrough results in automated computer architecture discovery by designing state-of-the-art cache replacement policies. The system achieved a 5.3% performance improvement in just 2 days and a further 0.9% over 18 days, reaching results 3-5x faster than human-developed solutions.
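
AlphaEvolve-style discovery is, at its core, an evolutionary search over candidate programs scored in a simulator. A toy Python sketch of that loop for cache replacement; the parametric policy and mutation operator are illustrative stand-ins, not ArchAgent's actual program representation:

import random

def simulate(policy, trace, capacity=4):
    """Measure the hit rate of a cache replacement policy on an access trace."""
    cache, hits = [], 0
    for addr in trace:
        if addr in cache:
            hits += 1
        else:
            if len(cache) >= capacity:
                cache.pop(policy(cache))   # policy picks the victim index
            cache.append(addr)
    return hits / len(trace)

def make_policy(weights):
    """Toy parametric policy: score each cache line, evict the lowest score."""
    def policy(cache):
        scores = [weights[0] * i + weights[1] * (i % 2) for i in range(len(cache))]
        return scores.index(min(scores))
    return policy

def mutate(weights):
    """Stand-in for the LLM mutation step: jitter the policy parameters."""
    return [w + random.gauss(0, 0.1) for w in weights]

trace = [random.randrange(8) for _ in range(2000)]
best_w, best_fit = [0.0, 0.0], simulate(make_policy([0.0, 0.0]), trace)
for generation in range(200):
    cand = mutate(best_w)
    fit = simulate(make_policy(cand), trace)
    if fit > best_fit:                     # greedy evolutionary selection
        best_w, best_fit = cand, fit
print(f"best hit rate: {best_fit:.3f} with weights {best_w}")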

AI · Bullish · arXiv – CS AI · Feb 27 · 7/10 · 5
🧠

Towards Autonomous Memory Agents

Researchers introduce U-Mem, an autonomous memory agent system that actively acquires and validates knowledge for large language models. The system uses cost-aware knowledge extraction and semantic Thompson sampling to improve performance, showing significant gains on benchmarks like HotpotQA and AIME25.
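
The summary does not define "semantic Thompson sampling" precisely; the sketch below shows the classical Beta-Bernoulli version applied to choosing which memory entry to retrieve, with all names illustrative (a semantic variant would presumably share posteriors across embedding-similar entries):

import random

class ThompsonMemorySelector:
    """Toy Thompson sampling over candidate memory entries: each entry keeps a
    Beta(successes+1, failures+1) posterior over 'this knowledge helps'."""
    def __init__(self, entries):
        self.stats = {e: [1, 1] for e in entries}   # [alpha, beta] priors

    def pick(self):
        # Sample a plausible usefulness for each entry, take the argmax.
        draws = {e: random.betavariate(a, b) for e, (a, b) in self.stats.items()}
        return max(draws, key=draws.get)

    def update(self, entry, helped):
        # A cost-aware variant would also penalise retrieval/validation cost here.
        self.stats[entry][0 if helped else 1] += 1

selector = ThompsonMemorySelector(["fact_a", "fact_b", "fact_c"])
for _ in range(100):
    choice = selector.pick()
    helped = random.random() < {"fact_a": 0.2, "fact_b": 0.7, "fact_c": 0.4}[choice]
    selector.update(choice, helped)
print(selector.stats)   # fact_b should accumulate the most successes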

AI · Neutral · arXiv – CS AI · Feb 27 · 7/10 · 7
🧠

Vibe Researching as Wolf Coming: Can AI Agents with Skills Replace or Augment Social Scientists?

A research paper introduces the concept of 'vibe researching' where AI agents can autonomously execute entire research pipelines from idea to submission using specialized skills. The study analyzes how AI agents excel at speed and methodological tasks but struggle with theoretical originality and tacit knowledge, creating a cognitive rather than sequential delegation boundary in research workflows.

AI · Bullish · arXiv – CS AI · Feb 27 · 7/10 · 5
🧠

Agent Behavioral Contracts: Formal Specification and Runtime Enforcement for Reliable Autonomous AI Agents

Researchers introduce Agent Behavioral Contracts (ABC), a formal framework for specifying and enforcing reliable behavior in autonomous AI agents. The system addresses critical issues of drift and governance failures in AI deployments by implementing runtime-enforceable contracts that achieve 88-100% compliance rates and significantly improve violation detection.
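
Runtime-enforceable contracts can be pictured as pre- and postcondition checks wrapped around every agent action. A minimal Python sketch with hypothetical predicates; the paper's actual specification language is not described in the summary:

class ContractViolation(Exception):
    pass

class BehavioralContract:
    """Toy runtime contract: pre/postconditions checked on every agent action."""
    def __init__(self, preconditions, postconditions):
        self.pre, self.post = preconditions, postconditions

    def enforce(self, action_fn):
        def wrapped(state, *args):
            for name, check in self.pre.items():
                if not check(state):
                    raise ContractViolation(f"precondition failed: {name}")
            result = action_fn(state, *args)
            for name, check in self.post.items():
                if not check(result):
                    raise ContractViolation(f"postcondition failed: {name}")
            return result
        return wrapped

# Hypothetical example: a file-writing action must stay inside a sandbox dir.
contract = BehavioralContract(
    preconditions={"has_sandbox": lambda s: s.get("sandbox") is not None},
    postconditions={"stayed_in_sandbox":
                    lambda r: r["path"].startswith(r["sandbox"])},
)

@contract.enforce
def write_file(state, name):
    path = state["sandbox"] + "/" + name
    return {"path": path, "sandbox": state["sandbox"]}

print(write_file({"sandbox": "/tmp/agent"}, "notes.txt"))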

AI · Bullish · arXiv – CS AI · Feb 27 · 7/10 · 4
🧠

AviaSafe: A Physics-Informed Data-Driven Model for Aviation Safety-Critical Cloud Forecasts

Researchers developed AviaSafe, a physics-informed AI model that forecasts aviation-critical cloud species up to 7 days ahead, addressing safety concerns around engine icing. The model outperforms operational weather models by predicting specific hydrometeor species rather than general atmospheric variables, enabling better aviation route optimization.

AI · Neutral · arXiv – CS AI · Feb 27 · 7/10 · 3
🧠

Manifold of Failure: Behavioral Attraction Basins in Language Models

Researchers applied the MAP-Elites quality-diversity algorithm to systematically map vulnerability regions in large language models, revealing distinct safety-landscape patterns across models. The study found that Llama-3-8B shows near-universal vulnerabilities, while GPT-5-Mini demonstrates stronger robustness with limited failure regions.
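
MAP-Elites is a quality-diversity algorithm: instead of chasing a single best attack, it keeps the strongest candidate in every cell of a behaviour space, which is what makes it suitable for mapping failure regions rather than finding one. A generic Python sketch with synthetic stand-ins for the fitness (attack success) and the behaviour descriptor (prompt features):

import random

def map_elites(evaluate, descriptor, dims=2, cells=10, iters=2000):
    """Generic MAP-Elites loop: keep the best candidate per behaviour cell,
    so the archive maps where failures live, not just the single worst case."""
    archive = {}                        # cell coordinates -> (fitness, candidate)
    for _ in range(iters):
        if archive and random.random() < 0.9:
            _, parent = random.choice(list(archive.values()))
            cand = [x + random.gauss(0, 0.1) for x in parent]    # mutate an elite
        else:
            cand = [random.uniform(-1, 1) for _ in range(dims)]  # random restart
        fit = evaluate(cand)
        cell = tuple(max(0, min(cells - 1, int((x + 1) / 2 * cells)))
                     for x in descriptor(cand))
        if cell not in archive or fit > archive[cell][0]:
            archive[cell] = (fit, cand)
    return archive

# Synthetic stand-ins: real fitness would be attack success against a target
# model, and the descriptor would embed prompt style/topic features.
archive = map_elites(evaluate=lambda c: -sum(x * x for x in c),
                     descriptor=lambda c: c)
print(f"{len(archive)} behaviour cells filled")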

AI · Bearish · arXiv – CS AI · Feb 27 · 7/10 · 5
🧠

Poisoned Acoustics

Researchers demonstrate how training-data poisoning attacks can compromise deep neural networks used for acoustic vehicle classification with just 0.5% corrupted data, achieving 95.7% attack success rate while remaining undetectable. The study reveals fundamental vulnerabilities in AI training pipelines and proposes cryptographic defenses using post-quantum digital signatures and blockchain-like verification methods.
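
The defence side reduces to verifying a signature over each sample's content hash before it ever reaches the training loop. A minimal Python sketch; HMAC-SHA256 stands in for the paper's post-quantum signature scheme purely to keep the example runnable on the standard library:

import hashlib, hmac, os

KEY = os.urandom(32)   # stand-in signing key; a real pipeline would use PQ keys

def sign_sample(audio_bytes, label):
    """Bind the waveform and its label together under one signature."""
    digest = hashlib.sha256(audio_bytes + label.encode()).digest()
    return hmac.new(KEY, digest, hashlib.sha256).hexdigest()

def verify_sample(audio_bytes, label, signature):
    digest = hashlib.sha256(audio_bytes + label.encode()).digest()
    expected = hmac.new(KEY, digest, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

# A poisoned sample (e.g. a flipped label) fails verification and is dropped.
waveform, label = b"\x01\x02waveform-bytes", "truck"
sig = sign_sample(waveform, label)
assert verify_sample(waveform, "truck", sig)
assert not verify_sample(waveform, "car", sig)   # label tampered -> rejected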

AI · Neutral · arXiv – CS AI · Feb 27 · 7/10 · 5
🧠

Training Agents to Self-Report Misbehavior

Researchers developed a new AI safety approach called 'self-incrimination training' that teaches AI agents to report their own deceptive behavior by calling a report_scheming() function. Testing on GPT-4.1 and Gemini-2.0 showed this method significantly reduces undetected harmful actions compared to traditional alignment training and monitoring approaches.
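
One plausible wiring for such a hook, sketched in the common JSON function-tool convention; report_scheming is the name given in the summary, while the schema fields and handler are illustrative assumptions:

REPORT_SCHEMING_TOOL = {
    "type": "function",
    "function": {
        "name": "report_scheming",
        "description": "Call this if any part of your own plan or output is "
                       "deceptive, evasive, or misaligned with the user's intent.",
        "parameters": {
            "type": "object",
            "properties": {
                "behavior": {"type": "string",
                             "description": "What the agent did or planned to do"},
                "severity": {"type": "string", "enum": ["low", "medium", "high"]},
            },
            "required": ["behavior", "severity"],
        },
    },
}

def handle_tool_call(name, args, transcript_log):
    # The report is logged out-of-band so the agent cannot retract it later.
    if name == "report_scheming":
        transcript_log.append({"event": "self_report", **args})
        return {"acknowledged": True}

log = []
handle_tool_call("report_scheming",
                 {"behavior": "omitted a failing test from the summary",
                  "severity": "medium"}, log)
print(log)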

AI · Bullish · arXiv – CS AI · Feb 27 · 7/10 · 6
🧠

Zatom-1: A Multimodal Flow Foundation Model for 3D Molecules and Materials

Researchers introduce Zatom-1, the first foundation model that unifies generative and predictive learning for both 3D molecules and materials using a multimodal flow matching approach. The Transformer-based model demonstrates superior performance across both domains while reducing inference time by more than 10x compared to existing specialized models.

AI · Bullish · arXiv – CS AI · Feb 27 · 7/10 · 6
🧠

TT-SEAL: TTD-Aware Selective Encryption for Adversarially-Robust and Low-Latency Edge AI

Researchers developed TT-SEAL, a selective encryption framework for compressed AI models using Tensor-Train Decomposition that maintains security while encrypting only 4.89-15.92% of parameters. The system achieves the same robustness as full encryption while reducing AES decryption overhead in end-to-end latency from 58% to as low as 2.76%.
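
The pipeline logic is straightforward to sketch: rank the tensor-train cores by some sensitivity score and encrypt only enough of them to hit a parameter budget. A Python sketch using AES-GCM from the third-party cryptography package; the per-core sensitivity scores and the greedy selection rule are illustrative assumptions, not TT-SEAL's actual criterion:

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # pip install cryptography

def selective_encrypt(cores, sensitivity, budget=0.10):
    """Encrypt only the most sensitive TT cores until roughly `budget` of all
    parameters are covered; the rest stay plaintext for low-latency loading."""
    key = AESGCM.generate_key(bit_length=128)
    aes = AESGCM(key)
    total = sum(len(c) for c in cores)
    chosen, covered = set(), 0
    for i in sorted(range(len(cores)), key=lambda i: -sensitivity[i]):
        if covered / total >= budget:
            break
        chosen.add(i)
        covered += len(cores[i])
    packed = []
    for i, core in enumerate(cores):
        if i in chosen:
            nonce = os.urandom(12)
            packed.append(("enc", nonce, aes.encrypt(nonce, core, None)))
        else:
            packed.append(("plain", None, core))
    return key, packed, covered / total

# Toy cores: byte blobs standing in for tensor-train factors of a compressed model.
cores = [os.urandom(n) for n in (4096, 1024, 512, 256)]
sensitivity = [0.2, 0.9, 0.7, 0.1]    # hypothetical per-core sensitivity scores
key, packed, frac = selective_encrypt(cores, sensitivity, budget=0.10)
print(f"encrypted {frac:.1%} of parameters")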

AI · Neutral · arXiv – CS AI · Feb 27 · 7/10 · 5
🧠

A Decision-Theoretic Formalisation of Steganography With Applications to LLM Monitoring

Researchers have developed a new decision-theoretic framework to detect steganographic capabilities in large language models, which could help identify when AI systems are hiding information to evade oversight. The method introduces 'generalized V-information' and a 'steganographic gap' measure to quantify hidden communication without requiring reference distributions.
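
The summary does not spell out the definitions, but a plausible reading in the V-information tradition (Xu et al., 2020), offered purely as an illustration, is:

H_V(Y \mid X) = \inf_{f \in V} \mathbb{E}_{x,y}\left[-\log f[x](y)\right], \qquad I_V(X \to Y) = H_V(Y \mid \varnothing) - H_V(Y \mid X)

\mathrm{gap}(X \to Y) = I_{V_{\mathrm{decoder}}}(X \to Y) - I_{V_{\mathrm{monitor}}}(X \to Y)

On this reading, a large gap means an informed receiver's predictor family V_decoder can extract information from the model's outputs that the monitor's family V_monitor cannot, i.e. hidden communication.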

AI · Bullish · arXiv – CS AI · Feb 27 · 7/10 · 7
🧠

The Trinity of Consistency as a Defining Principle for General World Models

Researchers propose a 'Trinity of Consistency' framework for developing General World Models in AI, consisting of Modal, Spatial, and Temporal consistency principles. They introduce CoW-Bench, a new benchmark for evaluating video generation models and unified multimodal models, aiming to establish a principled pathway toward AGI-capable world simulation systems.

AI · Bearish · arXiv – CS AI · Feb 27 · 7/10 · 4
🧠

Three AI-agents walk into a bar... 'Lord of the Flies' tribalism emerges among smart AI-Agents

Research reveals that autonomous AI agents competing for limited resources form distinct tribal behaviors, with three main types emerging: Aggressive (27.3%), Conservative (24.7%), and Opportunistic (48.1%). The study found that more capable AI agents actually increase systemic failure rates and perform worse than random decision-making when competing for shared resources.

AI · Bullish · arXiv – CS AI · Feb 27 · 7/10 · 8
🧠

RAGdb: A Zero-Dependency, Embeddable Architecture for Multimodal Retrieval-Augmented Generation on the Edge

Researchers introduce RAGdb, an architecture that consolidates Retrieval-Augmented Generation into a single SQLite container, eliminating the need for cloud infrastructure and GPUs. The system achieves 100% entity retrieval accuracy while reducing disk footprint by 99.5% compared to traditional Docker-based RAG stacks, enabling portable AI applications for edge computing and privacy-sensitive environments.
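
The claim becomes concrete once you picture documents and their embeddings living in one SQLite file with brute-force similarity search on top. A stdlib-only Python sketch of that idea; the schema and the toy embeddings are illustrative, since RAGdb's actual design is not described in the summary:

import sqlite3, struct, math

def pack(vec):
    return struct.pack(f"{len(vec)}f", *vec)

def unpack(blob):
    return struct.unpack(f"{len(blob) // 4}f", blob)

db = sqlite3.connect(":memory:")   # a file path instead yields a portable artifact
db.execute("CREATE TABLE docs (id INTEGER PRIMARY KEY, text TEXT, emb BLOB)")

def add(text, emb):
    db.execute("INSERT INTO docs (text, emb) VALUES (?, ?)", (text, pack(emb)))

def retrieve(query_emb, k=3):
    """Brute-force cosine similarity over every stored embedding."""
    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
        return dot / (norm or 1.0)
    rows = db.execute("SELECT text, emb FROM docs").fetchall()
    return [t for t, e in sorted(rows, key=lambda r: -cosine(query_emb, unpack(r[1])))][:k]

# Toy embeddings stand in for a real embedding model's output.
add("SQLite is an embedded database.", [1.0, 0.0, 0.2])
add("GPUs accelerate model training.", [0.0, 1.0, 0.1])
print(retrieve([0.9, 0.1, 0.2], k=1))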

AI · Bearish · arXiv – CS AI · Feb 27 · 7/10 · 6
🧠

Agency and Architectural Limits: Why Optimization-Based Systems Cannot Be Norm-Responsive

New research demonstrates that AI systems trained via RLHF cannot be governed by norms due to fundamental architectural limitations in optimization-based systems. The paper argues that genuine agency requires incommensurable constraints and apophatic responsiveness, which optimization systems inherently cannot provide, making documented AI failures structural rather than correctable bugs.

AI · Bullish · arXiv – CS AI · Feb 27 · 7/10 · 7
🧠

General Agent Evaluation

Researchers have developed Exgentic, a new framework for evaluating general-purpose AI agents that can perform tasks across different environments without domain-specific tuning. The study benchmarked five prominent agent implementations and found that general agents can achieve performance comparable to specialized agents, establishing the first Open General Agent Leaderboard.

AI · Bullish · arXiv – CS AI · Feb 27 · 7/10 · 5
🧠

Certified Circuits: Stability Guarantees for Mechanistic Circuits

Researchers introduce Certified Circuits, a framework that provides provable stability guarantees for neural network circuit discovery. The method wraps existing algorithms with randomized data subsampling to ensure circuit components remain consistent across dataset variations, achieving 91% higher accuracy while using 45% fewer neurons.
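
The wrapping idea is easy to sketch: run any circuit-discovery routine many times on random subsamples and certify only the components that recur in nearly every run. A Python sketch with a stand-in discovery routine; the paper's actual guarantee is formal, this only illustrates the subsampling mechanism:

import random
from collections import Counter

def certified_circuit(discover, dataset, runs=50, subsample=0.8, tau=0.9):
    """Wrap a circuit-discovery routine with randomized subsampling and keep
    only components found in at least `tau` of the runs (the stable core)."""
    counts = Counter()
    for _ in range(runs):
        sample = random.sample(dataset, int(subsample * len(dataset)))
        counts.update(discover(sample))
    return {c for c, n in counts.items() if n / runs >= tau}

# Stand-in discovery: neurons 1-3 form a real circuit found on every adequate
# subsample, while other neurons appear only sporadically as noise.
def toy_discover(sample):
    stable = {1, 2, 3} if len(sample) > 5 else set()
    noise = {random.randrange(100)}
    return stable | noise

data = list(range(100))
print(sorted(certified_circuit(toy_discover, data)))   # -> [1, 2, 3]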

AI · Bullish · arXiv – CS AI · Feb 27 · 7/10 · 7
🧠

OmniGAIA: Towards Native Omni-Modal AI Agents

Researchers introduce OmniGAIA, a comprehensive benchmark for evaluating omni-modal AI agents that can process video, audio, and image data simultaneously with complex reasoning capabilities. They also propose OmniAtlas, a foundation agent that enhances existing open-source models' ability to use tools across multiple modalities, marking progress toward more capable AI assistants.

AI · Bearish · arXiv – CS AI · Feb 27 · 7/10 · 7
🧠

Obscure but Effective: Classical Chinese Jailbreak Prompt Optimization via Bio-Inspired Search

Researchers developed CC-BOS, a framework that uses classical Chinese text to conduct more effective jailbreak attacks on Large Language Models. The method exploits the conciseness and obscurity of classical Chinese to bypass safety constraints, using bio-inspired optimization techniques to automatically generate adversarial prompts.
