y0news

AI × Crypto News Feed

Real-time AI-curated news from 29,788+ articles across 50+ sources. Sentiment analysis, importance scoring, and key takeaways — updated every 15 minutes.

🧠 AI · Bullish · arXiv – CS AI · Mar 26 · 7/10

Bottlenecked Transformers: Periodic KV Cache Consolidation for Generalised Reasoning

Researchers introduce Bottlenecked Transformers, a new architecture that improves AI reasoning by up to 6.6 percentage points through periodic memory consolidation inspired by brain processes. The system uses a Cache Processor to rewrite key-value cache entries at reasoning step boundaries, achieving better performance on math reasoning benchmarks compared to standard Transformers.
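The consolidation loop described above can be sketched in a few lines; the cache shapes, the step-boundary trigger, and the linear `consolidate` stand-in are illustrative assumptions, not the paper's actual Cache Processor.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16                                   # head dimension (illustrative)
W = rng.normal(scale=0.1, size=(d, d))   # stand-in for a learned Cache Processor

def consolidate(keys, values):
    """Rewrite every cached key/value vector (toy linear processor)."""
    return keys @ W, values @ W

# Simulated decoding: append one (key, value) per token, and at every
# reasoning-step boundary (here: every 8 tokens) rewrite the whole cache.
keys = np.empty((0, d))
values = np.empty((0, d))
for t in range(32):
    keys = np.vstack([keys, rng.normal(size=(1, d))])
    values = np.vstack([values, rng.normal(size=(1, d))])
    if (t + 1) % 8 == 0:                 # assumed step boundary
        keys, values = consolidate(keys, values)

print(keys.shape)  # (32, 16)
```

The point of the sketch is only the control flow: cache entries are not append-only, they get periodically rewritten in place between reasoning steps.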

🧠 AI · Bearish · arXiv – CS AI · Mar 26 · 7/10

Enhancing Jailbreak Attacks on LLMs via Persona Prompts

Researchers developed a genetic algorithm-based method that evolves persona prompts to jailbreak large language models, reducing refusal rates by 50-70% across multiple LLMs. The study reveals significant vulnerabilities in AI safety mechanisms and demonstrates how these attacks become more effective when combined with existing methods.

🧠 AI · Neutral · arXiv – CS AI · Mar 26 · 7/10

Evaluation of Large Language Models via Coupled Token Generation

Researchers propose a new method called coupled autoregressive generation to evaluate large language models more efficiently by controlling for randomness in their responses. The study shows this approach can reduce evaluation samples by up to 75% while revealing that current model rankings may be confounded by inherent randomness in generation processes.
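"Controlling for randomness" can be illustrated with common random numbers: sample from both models using the same uniform draws via inverse-CDF sampling, so disagreements reflect model differences rather than independent sampling noise. This is a standard coupling construction; the paper's exact scheme may differ.

```python
import numpy as np

def coupled_sample(probs_a, probs_b, n, seed=0):
    """Draw n tokens from two next-token distributions using the SAME
    uniform draws (inverse-CDF coupling)."""
    rng = np.random.default_rng(seed)
    u = rng.random(n)                              # shared randomness
    cdf_a, cdf_b = np.cumsum(probs_a), np.cumsum(probs_b)
    return np.searchsorted(cdf_a, u), np.searchsorted(cdf_b, u)

p = np.array([0.5, 0.3, 0.2])
a, b = coupled_sample(p, p, 1000)
print((a == b).all())  # identical models + shared draws -> identical tokens
```

With coupling, any token where the two models disagree is attributable to the models, which is what makes sample-efficient pairwise comparison possible.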

🧠 Llama
🧠 AI · Bullish · arXiv – CS AI · Mar 26 · 7/10

Moonwalk: Inverse-Forward Differentiation

Researchers introduce Moonwalk, a new algorithm that solves backpropagation's memory limitations by eliminating the need to store intermediate activations during neural network training. The method uses vector-inverse-Jacobian products and submersive networks to reconstruct gradients in a forward sweep, enabling training of networks more than twice as deep under the same memory constraints.

🧠 AI · Neutral · arXiv – CS AI · Mar 26 · 7/10

Entire Space Counterfactual Learning for Reliable Content Recommendations

Researchers developed ESCM² (Entire Space Counterfactual Multitask Model), a new framework that improves post-click conversion rate estimation in recommender systems by addressing intrinsic estimation bias and false independence assumptions. The model-agnostic approach incorporates counterfactual learning to enhance recommendation accuracy and has been validated on large-scale industrial datasets.

🧠 AI · Neutral · arXiv – CS AI · Mar 26 · 7/10

The Collaboration Paradox: Why Generative AI Requires Both Strategic Intelligence and Operational Stability in Supply Chain Management

Research reveals a 'collaboration paradox' where AI agents using Large Language Models in supply chain management perform worse than non-AI baselines due to inventory hoarding behavior. The study proposes a two-layer solution combining high-level AI policy-setting with low-level collaborative execution protocols to achieve operational stability.

🧠 AI · Bullish · arXiv – CS AI · Mar 26 · 7/10

OSS-CRS: Liberating AIxCC Cyber Reasoning Systems for Real-World Open-Source Security

Researchers have created OSS-CRS, an open framework that makes DARPA's AI Cyber Challenge systems usable for real-world cybersecurity applications. The system successfully ported the winning Atlantis CRS and discovered 10 previously unknown bugs, including three high-severity issues, across 8 open-source projects.

🧠 AI · Neutral · arXiv – CS AI · Mar 26 · 7/10

Divide, then Ground: Adapting Frame Selection to Query Types for Long-Form Video Understanding

Researchers propose DIG, a training-free framework that improves long-form video understanding by adapting frame selection strategies based on query types. The system uses uniform sampling for global queries and specialized selection for localized queries, achieving better performance than existing methods while scaling to 256 input frames.
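The query-adaptive routing can be sketched directly; the function name, the budget, and the relevance scores below are invented for illustration and are not DIG's actual components.

```python
import numpy as np

def select_frames(num_frames, budget, query_type, relevance=None):
    """Uniform sampling for 'global' queries; top-scoring frames
    (returned in temporal order) for 'localized' ones."""
    if query_type == "global":
        return np.linspace(0, num_frames - 1, budget).astype(int)
    top = np.argsort(relevance)[-budget:]      # highest query-frame scores
    return np.sort(top)

print(select_frames(1000, 5, "global"))             # [  0 249 499 749 999]
scores = np.zeros(1000)
scores[100:105] = 1.0                               # query matches a short clip
print(select_frames(1000, 5, "localized", scores))  # [100 101 102 103 104]
```

The design choice is simply that a "summarize the video" query needs coverage while a "when does X happen" query needs precision, so one selection strategy cannot serve both.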

🧠 AI · Bullish · arXiv – CS AI · Mar 26 · 7/10

DanQing: An Up-to-Date Large-Scale Chinese Vision-Language Pre-training Dataset

Researchers have released DanQing, a large-scale Chinese vision-language dataset containing 100 million high-quality image-text pairs curated from Common Crawl data. The dataset addresses the bottleneck in Chinese VLP development and demonstrates superior performance compared to existing Chinese datasets across various AI tasks.

🧠 AI · Bullish · arXiv – CS AI · Mar 26 · 7/10

Physics-driven human-like working memory outperforms digital networks in dynamic vision

Researchers have developed a physics-driven AI system called Intrinsic Plasticity Network (IPNet) that uses magnetic tunnel junctions to create human-like working memory. The system demonstrates 18x error reduction in dynamic vision tasks while reducing memory-energy overhead by over 90,000x compared to traditional digital AI systems.

🧠 AI · Neutral · arXiv – CS AI · Mar 26 · 7/10

From Guidelines to Guarantees: A Graph-Based Evaluation Harness for Domain-Specific Evaluation of LLMs

Researchers developed a graph-based evaluation framework that transforms clinical guidelines into dynamic benchmarks for testing domain-specific language models. The system addresses key evaluation challenges by providing contamination resistance, comprehensive coverage, and maintainable assessment tools that reveal systematic capability gaps in current AI models.

🧠 AI · Bullish · arXiv – CS AI · Mar 26 · 7/10

Toward Ultra-Long-Horizon Agentic Science: Cognitive Accumulation for Machine Learning Engineering

Researchers have developed ML-Master 2.0, an autonomous AI agent that achieves breakthrough performance in ultra-long-horizon machine learning tasks by using a Hierarchical Cognitive Caching architecture. The system achieved a 56.44% medal rate on OpenAI's MLE-Bench, demonstrating the ability to maintain strategic coherence over experimental cycles spanning days or weeks.

🏢 OpenAI
🧠 AI · Bullish · arXiv – CS AI · Mar 26 · 7/10

ODMA: On-Demand Memory Allocation Strategy for LLM Serving on LPDDR-Class Accelerators

Researchers developed ODMA, a new memory allocation strategy that improves Large Language Model serving performance on memory-constrained accelerators by up to 27%. The technique addresses bandwidth limitations in LPDDR systems through adaptive bucket partitioning and dynamic generation-length prediction.
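"Adaptive bucket partitioning" can be sketched as splitting a fixed pool of KV-cache blocks across length buckets in proportion to predicted demand; the bucket edges, block counts, and proportional rule below are illustrative assumptions, not ODMA's actual policy.

```python
import bisect

def partition_buckets(pool_blocks, predicted_lens, bucket_edges):
    """Allocate a fixed pool of cache blocks to length buckets in
    proportion to predicted generation-length demand (toy version)."""
    counts = [0] * (len(bucket_edges) + 1)
    for n in predicted_lens:
        counts[bisect.bisect_left(bucket_edges, n)] += 1
    total = sum(counts)
    alloc = [pool_blocks * c // total for c in counts]
    alloc[-1] += pool_blocks - sum(alloc)   # hand rounding remainder to last bucket
    return alloc

# 3 buckets (<=128, <=512, longer); demand skews short, allocation follows.
print(partition_buckets(100, [64, 90, 128, 300, 700], [128, 512]))  # [60, 20, 20]
```

The idea is that sizing allocations to a length forecast, rather than reserving worst-case space per request, is what recovers throughput on bandwidth- and capacity-limited LPDDR parts.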

🧠 AI · Bullish · arXiv – CS AI · Mar 26 · 7/10

From Pixels to Digital Agents: An Empirical Study on the Taxonomy and Technological Trends of Reinforcement Learning Environments

Researchers conducted a large-scale empirical study analyzing over 2,000 publications to map the evolution of reinforcement learning environments. The study reveals a paradigm shift toward two distinct ecosystems: LLM-driven 'Semantic Prior' agents and 'Domain-Specific Generalization' systems, providing a roadmap for next-generation AI simulators.

🧠 AI · Bullish · arXiv – CS AI · Mar 26 · 7/10

AI-Supervisor: Autonomous AI Research Supervision via a Persistent Research World Model

Researchers have developed AI-Supervisor, a multi-agent framework that maintains a persistent Research World Model to autonomously conduct end-to-end AI research supervision. Unlike traditional linear pipelines, the system uses specialized agents with structured gap discovery, self-correcting loops, and consensus mechanisms to continuously evolve research understanding.

🧠 AI · Bearish · arXiv – CS AI · Mar 26 · 7/10

When AI output tips to bad but nobody notices: Legal implications of AI's mistakes

Research reveals that generative AI's legal fabrications aren't random 'hallucinations' but predictable failures when the AI's internal state crosses a calculable threshold. The study shows AI can flip from reliable legal reasoning to creating fake case law and statutes, posing serious risks for attorneys and courts who may unknowingly use fabricated legal content.

🧠 AI · Bullish · arXiv – CS AI · Mar 26 · 7/10

SCoOP: Semantic Consistent Opinion Pooling for Uncertainty Quantification in Multiple Vision-Language Model Systems

Researchers developed SCoOP, a training-free framework that combines multiple Vision-Language Models to improve uncertainty quantification and reduce hallucinations in AI systems. The method achieves 10-13% better hallucination detection performance compared to existing approaches while adding only microsecond-level overhead to processing time.
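A minimal form of opinion pooling is averaging each model's predictive distribution and scoring uncertainty by the entropy of the pool; SCoOP's semantic-consistency weighting is more involved, so treat this as a baseline sketch with invented numbers.

```python
import numpy as np

def pooled_uncertainty(model_probs):
    """Linear opinion pool: average the models' predictive distributions,
    then use the pooled distribution's entropy as an uncertainty score."""
    pooled = np.mean(model_probs, axis=0)
    entropy = -np.sum(pooled * np.log(pooled + 1e-12))
    return pooled, entropy

agree = [np.array([0.9, 0.1]), np.array([0.88, 0.12])]   # models concur
clash = [np.array([0.9, 0.1]), np.array([0.1, 0.9])]     # models conflict
print(pooled_uncertainty(agree)[1] < pooled_uncertainty(clash)[1])  # True
```

Disagreement between models pushes the pooled distribution toward uniform and the entropy up, which is the signal used to flag likely hallucinations.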

🤖 AI × Crypto · Bullish · Crypto Briefing · Mar 26 · 7/10

Metanova Labs: Bittensor revolutionizes drug discovery with decentralized virtual screening, combinatorial reactions expand possibilities to 65 billion, and dual incentives drive innovation | TWIST

Metanova Labs is revolutionizing drug discovery by using Bittensor's decentralized AI network to screen billions of molecules efficiently. The platform utilizes combinatorial reactions to expand screening possibilities to 65 billion compounds and implements dual incentive mechanisms to drive innovation in pharmaceutical research.

$TAO
💎 DeFi · Bearish · DL News · Mar 26 · 7/10

Crypto dev loses lawsuit seeking protection from DOJ

A Texas judge ruled against a crypto developer who sought legal protection from potential DOJ prosecution for publishing DeFi code. The court determined the developer failed to demonstrate sufficient evidence that criminal charges would actually be filed against them for code publication.

🧠 AI · Bearish · The Register – AI · Mar 26 · 7/10

GitHub hits CTRL-Z, decides it will train its AI with user data after all

GitHub has reversed its previous decision and will now train its AI systems using user data from its platform. This policy change affects millions of developers who store code repositories on GitHub, raising concerns about data privacy and intellectual property rights in AI training.

🧠 AI · Bullish · Apple Machine Learning · Mar 26 · 7/10

Revisiting the Scaling Properties of Downstream Metrics in Large Language Model Training

Researchers propose a new framework for predicting Large Language Model performance on downstream tasks directly from training budget, finding that simple power laws can accurately model scaling behavior. This challenges the traditional view that downstream task performance prediction is unreliable, offering better extrapolation than previous two-stage methods.
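A power-law fit of this kind reduces to ordinary least squares in log-log space. The compute values, constant, and exponent below are invented for the sketch and are not the paper's results.

```python
import numpy as np

# Toy data following a clean power law: err(C) = a * C**(-b)
C = np.array([1e18, 1e19, 1e20, 1e21])   # training compute (FLOPs)
err = 2.0 * C ** -0.05                   # downstream error rate

# Fit in log-log space: log err = log a - b * log C
slope, intercept = np.polyfit(np.log(C), np.log(err), 1)
b, a = -slope, np.exp(intercept)
print(b, a)  # recovers b ≈ 0.05, a ≈ 2.0
```

Once `a` and `b` are in hand, extrapolating the downstream metric to a larger budget is a single evaluation of `a * C**-b`, which is why direct power-law prediction beats chained two-stage (loss-then-metric) pipelines when it holds.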

🧠 AI · Bearish · Crypto Briefing · Mar 25 · 7/10

Mark Warner: Government and society are unprepared for AI advancements, rising unemployment among recent graduates, and the urgent need for regulatory action | Big Technology

Senator Mark Warner warns that government and society are unprepared for AI's rapid advancement, which is contributing to rising unemployment among recent graduates. He calls for urgent regulatory action to prevent broader economic disruption as AI threatens job security across multiple sectors.

🧠 AI · Bullish · Decrypt · Mar 25 · 7/10

Google Shrinks AI Memory With No Accuracy Loss—But There's a Catch

Google has developed a technique that significantly reduces memory requirements for running large language models as context windows expand, without compromising accuracy. This breakthrough addresses a major constraint in AI deployment, though the article suggests there are limitations to the approach.
