y0news
AI

11,762 AI articles curated from 50+ sources with AI-powered sentiment analysis, importance scoring, and key takeaways.
AI · Bearish · DL News · Feb 27 · 7/10 · 6

Jack Dorsey’s Block slashes 40% of staff in major AI-driven restructuring

Jack Dorsey's Block has laid off 40% of its workforce in a major restructuring initiative driven by artificial intelligence implementation. This marks the first time the fintech company has specifically attributed job cuts to AI automation rather than general business conditions.

AI · Bullish · OpenAI News · Feb 27 · 7/10 · 6

OpenAI and Amazon announce strategic partnership

OpenAI and Amazon have announced a strategic partnership that will integrate OpenAI's Frontier platform with AWS infrastructure. The collaboration aims to expand AI capabilities through enhanced infrastructure, custom model development, and enterprise AI agent solutions.

AI · Bullish · OpenAI News · Feb 27 · 7/10 · 5

Introducing the Stateful Runtime Environment for Agents in Amazon Bedrock

Amazon Bedrock introduces a new Stateful Runtime Environment for AI agents that provides persistent orchestration, memory capabilities, and secure execution for complex multi-step AI workflows. The service leverages OpenAI technology to enable more sophisticated AI agent operations with maintained state across interactions.

AI · Bullish · OpenAI News · Feb 27 · 7/10 · 6

Joint Statement from OpenAI and Microsoft

Microsoft and OpenAI issued a joint statement reaffirming their ongoing collaboration across research, engineering, and product development. The statement emphasizes their continued partnership built on years of shared work and success.

AI · Bullish · OpenAI News · Feb 27 · 7/10 · 7

Scaling AI for everyone

A major AI company announces $110B in new investment funding at a $730B pre-money valuation. The funding round includes significant contributions from three major tech players: $30B from SoftBank, $30B from NVIDIA, and $50B from Amazon.

AI · Neutral · arXiv – CS AI · Feb 27 · 7/10 · 6

VeRO: An Evaluation Harness for Agents to Optimize Agents

Researchers introduced VeRO (Versioning, Rewards, and Observations), a new evaluation framework for testing AI coding agents that can optimize other AI agents through iterative improvement cycles. The system provides reproducible benchmarks and structured execution traces to systematically measure how well coding agents can improve target agents' performance.

AI · Bullish · arXiv – CS AI · Feb 27 · 7/10 · 9

ArchAgent: Agentic AI-driven Computer Architecture Discovery

ArchAgent, an AI-driven system built on AlphaEvolve, has achieved breakthrough results in automated computer architecture discovery by designing state-of-the-art cache replacement policies. The system achieved 5.3% performance improvements in just 2 days and 0.9% improvements in 18 days, working 3-5x faster than human-developed solutions.

AI · Bullish · arXiv – CS AI · Feb 27 · 7/10 · 5

Towards Autonomous Memory Agents

Researchers introduce U-Mem, an autonomous memory agent system that actively acquires and validates knowledge for large language models. The system uses cost-aware knowledge extraction and semantic Thompson sampling to improve performance, showing significant gains on benchmarks like HotpotQA and AIME25.
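For readers unfamiliar with the technique, plain Thompson sampling (without U-Mem's semantic or cost-aware extensions, which are not reproduced here) can be sketched in a few lines; the source names below are purely illustrative:

```python
import random

def thompson_select(stats):
    """Pick the knowledge source with the highest sampled Beta draw.

    stats maps source name -> (successes, failures) observed so far.
    This is textbook Thompson sampling, not U-Mem's actual algorithm.
    """
    best, best_draw = None, -1.0
    for source, (wins, losses) in stats.items():
        # Beta(wins + 1, losses + 1) is the posterior under a uniform prior.
        draw = random.betavariate(wins + 1, losses + 1)
        if draw > best_draw:
            best, best_draw = source, draw
    return best

# Toy usage: a source that helped 9/10 times vs. one that helped 1/10.
random.seed(0)
stats = {"wiki_snippets": (9, 1), "stale_cache": (1, 9)}
picks = [thompson_select(stats) for _ in range(1000)]
```

Because each choice is a random draw from the posterior, the weaker source is still occasionally explored, which is what lets an agent keep validating knowledge it is unsure about.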

AI · Neutral · arXiv – CS AI · Feb 27 · 7/10 · 7

Vibe Researching as Wolf Coming: Can AI Agents with Skills Replace or Augment Social Scientists?

A research paper introduces the concept of 'vibe researching' where AI agents can autonomously execute entire research pipelines from idea to submission using specialized skills. The study analyzes how AI agents excel at speed and methodological tasks but struggle with theoretical originality and tacit knowledge, creating a cognitive rather than sequential delegation boundary in research workflows.

AI · Bullish · arXiv – CS AI · Feb 27 · 7/10 · 5

Agent Behavioral Contracts: Formal Specification and Runtime Enforcement for Reliable Autonomous AI Agents

Researchers introduce Agent Behavioral Contracts (ABC), a formal framework for specifying and enforcing reliable behavior in autonomous AI agents. The system addresses critical issues of drift and governance failures in AI deployments by implementing runtime-enforceable contracts that achieve 88-100% compliance rates and significantly improve violation detection.

AI · Bullish · arXiv – CS AI · Feb 27 · 7/10 · 6

TT-SEAL: TTD-Aware Selective Encryption for Adversarially-Robust and Low-Latency Edge AI

Researchers developed TT-SEAL, a selective encryption framework for compressed AI models using Tensor-Train Decomposition that maintains security while encrypting only 4.89-15.92% of parameters. The system achieves the same robustness as full encryption while reducing AES decryption overhead in end-to-end latency from 58% to as low as 2.76%.

AI · Bullish · arXiv – CS AI · Feb 27 · 7/10 · 8

RAGdb: A Zero-Dependency, Embeddable Architecture for Multimodal Retrieval-Augmented Generation on the Edge

Researchers introduce RAGdb, a revolutionary architecture that consolidates Retrieval-Augmented Generation into a single SQLite container, eliminating the need for cloud infrastructure and GPUs. The system achieves 100% entity retrieval accuracy while reducing disk footprint by 99.5% compared to traditional Docker-based RAG stacks, enabling truly portable AI applications for edge computing and privacy-sensitive environments.
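The "single SQLite container" idea can be illustrated with Python's stdlib sqlite3 module and its FTS5 full-text index; the schema and documents below are invented for illustration and are not RAGdb's actual design (which also covers multimodal data):

```python
import sqlite3

# One SQLite database doubles as document store and search index --
# no vector database, no Docker, no cloud service.
db = sqlite3.connect(":memory:")
db.execute("CREATE VIRTUAL TABLE docs USING fts5(title, body)")
db.executemany(
    "INSERT INTO docs VALUES (?, ?)",
    [
        ("edge-ai", "Running retrieval-augmented generation on edge devices"),
        ("gpu-costs", "Cloud GPU pricing for large model inference"),
    ],
)

def retrieve(query, k=1):
    """Return titles of the top-k documents for an FTS5 keyword query."""
    rows = db.execute(
        "SELECT title FROM docs WHERE docs MATCH ? ORDER BY rank LIMIT ?",
        (query, k),
    ).fetchall()
    return [title for (title,) in rows]
```

In a RAG loop, the retrieved passages would then be prepended to the LLM prompt; keeping store and index in one file is what makes the whole stack portable.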

AI · Bullish · arXiv – CS AI · Feb 27 · 7/10 · 7

The Trinity of Consistency as a Defining Principle for General World Models

Researchers propose a 'Trinity of Consistency' framework for developing General World Models in AI, consisting of Modal, Spatial, and Temporal consistency principles. They introduce CoW-Bench, a new benchmark for evaluating video generation models and unified multimodal models, aiming to establish a principled pathway toward AGI-capable world simulation systems.

AI · Bullish · arXiv – CS AI · Feb 27 · 7/10 · 6

Zatom-1: A Multimodal Flow Foundation Model for 3D Molecules and Materials

Researchers introduce Zatom-1, the first foundation model that unifies generative and predictive learning for both 3D molecules and materials using a multimodal flow matching approach. The Transformer-based model demonstrates superior performance across both domains while significantly reducing inference time by over 10x compared to existing specialized models.

AI · Bearish · arXiv – CS AI · Feb 27 · 7/10 · 4

Three AI-agents walk into a bar... 'Lord of the Flies' tribalism emerges among smart AI-Agents

Research reveals that autonomous AI agents competing for limited resources form distinct tribal behaviors, with three main types emerging: Aggressive (27.3%), Conservative (24.7%), and Opportunistic (48.1%). The study found that more capable AI agents actually increase systemic failure rates and perform worse than random decision-making when competing for shared resources.

AI · Bullish · arXiv – CS AI · Feb 27 · 7/10 · 5

Enhancing CVRP Solver through LLM-driven Automatic Heuristic Design

Researchers developed AILS-AHD, a novel approach using Large Language Models to solve the Capacitated Vehicle Routing Problem (CVRP) more efficiently. The LLM-driven method achieved new best-known solutions for 8 out of 10 instances in large-scale benchmarks, demonstrating superior performance over existing state-of-the-art solvers.
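For context on the problem itself, a CVRP heuristic serves customers with a capacity-limited vehicle, starting new routes when the vehicle is full. The greedy nearest-neighbour baseline below is the kind of hand-written heuristic an LLM-designed one (as in AILS-AHD) would try to beat; it is not the paper's method, and it assumes every demand fits within capacity:

```python
def greedy_cvrp(depot, customers, capacity):
    """Greedy nearest-neighbour construction for the Capacitated VRP.

    customers: {name: (x, y, demand)}; each demand must be <= capacity.
    Returns a list of routes, each a list of customer names.
    """
    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

    remaining = dict(customers)
    routes = []
    while remaining:
        route, load, pos = [], 0, depot
        while True:
            # Customers that still fit in the current vehicle.
            feasible = [(n, xy) for n, (*xy, d) in remaining.items()
                        if load + d <= capacity]
            if not feasible:
                break  # vehicle full: return to depot, start a new route
            name, xy = min(feasible, key=lambda nv: dist(pos, nv[1]))
            route.append(name)
            load += remaining.pop(name)[2]
            pos = tuple(xy)
        routes.append(route)
    return routes

# Toy instance: two nearby customers fill one vehicle; the third gets its own.
customers = {"a": (0, 1, 4), "b": (0, 2, 4), "c": (5, 0, 4)}
routes = greedy_cvrp((0, 0), customers, capacity=8)
```

State-of-the-art solvers improve such constructions with local-search moves; the AILS-AHD result is that an LLM can author those improvement heuristics automatically.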

AI · Bullish · arXiv – CS AI · Feb 27 · 7/10 · 5

Certified Circuits: Stability Guarantees for Mechanistic Circuits

Researchers introduce Certified Circuits, a framework that provides provable stability guarantees for neural network circuit discovery. The method wraps existing algorithms with randomized data subsampling to ensure circuit components remain consistent across dataset variations, achieving 91% higher accuracy while using 45% fewer neurons.
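The subsampling wrapper can be sketched generically: rerun any discovery routine on random subsets of the data and certify only the components that recur. The function names and thresholds below are illustrative, not the paper's actual certification procedure:

```python
import random

def stable_components(discover, dataset, trials=50, frac=0.8, threshold=0.9):
    """Keep only components found in >= threshold of random subsamples.

    `discover` is any circuit-discovery routine mapping a dataset to a
    set of component ids.
    """
    counts = {}
    for _ in range(trials):
        sample = random.sample(dataset, int(len(dataset) * frac))
        for comp in discover(sample):
            counts[comp] = counts.get(comp, 0) + 1
    return {c for c, n in counts.items() if n / trials >= threshold}

# Toy discovery routine: 'neuron_a' shows up on every subsample, while
# 'neuron_b' appears only when one particular outlier example is drawn.
random.seed(1)
data = list(range(100))
discover = lambda s: {"neuron_a"} | ({"neuron_b"} if 42 in s else set())
certified = stable_components(discover, data)
```

Components that depend on a single example rarely clear the recurrence threshold, which is the intuition behind trading a few spurious neurons for a stability guarantee.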

AI · Bullish · arXiv – CS AI · Feb 27 · 7/10 · 7

General Agent Evaluation

Researchers have developed Exgentic, a new framework for evaluating general-purpose AI agents that can perform tasks across different environments without domain-specific tuning. The study benchmarked five prominent agent implementations and found that general agents can achieve performance comparable to specialized agents, establishing the first Open General Agent Leaderboard.

AI · Bearish · arXiv – CS AI · Feb 27 · 7/10 · 5

Poisoned Acoustics

Researchers demonstrate how training-data poisoning attacks can compromise deep neural networks used for acoustic vehicle classification with just 0.5% corrupted data, achieving 95.7% attack success rate while remaining undetectable. The study reveals fundamental vulnerabilities in AI training pipelines and proposes cryptographic defenses using post-quantum digital signatures and blockchain-like verification methods.

AI · Neutral · arXiv – CS AI · Feb 27 · 7/10 · 5

A Decision-Theoretic Formalisation of Steganography With Applications to LLM Monitoring

Researchers have developed a new decision-theoretic framework to detect steganographic capabilities in large language models, which could help identify when AI systems are hiding information to evade oversight. The method introduces 'generalized V-information' and a 'steganographic gap' measure to quantify hidden communication without requiring reference distributions.

AI · Bearish · arXiv – CS AI · Feb 27 · 7/10 · 7

Obscure but Effective: Classical Chinese Jailbreak Prompt Optimization via Bio-Inspired Search

Researchers developed CC-BOS, a framework that uses classical Chinese text to conduct more effective jailbreak attacks on Large Language Models. The method exploits the conciseness and obscurity of classical Chinese to bypass safety constraints, using bio-inspired optimization techniques to automatically generate adversarial prompts.

AI · Bullish · arXiv – CS AI · Feb 27 · 7/10 · 6

Toward Personalized LLM-Powered Agents: Foundations, Evaluation, and Future Directions

Researchers published a comprehensive survey on personalized LLM-powered agents that can adapt to individual users over extended interactions. The study organizes these agents into four key components: profile modeling, memory, planning, and action execution, providing a framework for developing more user-aligned AI assistants.

AI · Bullish · arXiv – CS AI · Feb 27 · 7/10 · 5

CourtGuard: A Model-Agnostic Framework for Zero-Shot Policy Adaptation in LLM Safety

Researchers introduce CourtGuard, a new framework for AI safety that uses retrieval-augmented multi-agent debate to evaluate LLM outputs without requiring expensive retraining. The system achieves state-of-the-art performance across 7 safety benchmarks and demonstrates zero-shot adaptability to new policy requirements, offering a more flexible approach to AI governance.

AI · Bullish · arXiv – CS AI · Feb 27 · 7/10 · 4

MiroFlow: Towards High-Performance and Robust Open-Source Agent Framework for General Deep Research Tasks

Researchers have released MiroFlow, an open-source AI agent framework designed to overcome limitations of current LLM-based systems in complex real-world tasks. The framework features agent graph orchestration, deep reasoning capabilities, and robust workflow execution, achieving state-of-the-art performance across multiple benchmarks including GAIA and FutureX.

Page 83 of 471