y0news
🧠 AI

11,679 AI articles curated from 50+ sources with AI-powered sentiment analysis, importance scoring, and key takeaways.
AI · Bullish · arXiv – CS AI · Mar 46/103

Detecting Structural Heart Disease from Electrocardiograms via a Generalized Additive Model of Interpretable Foundation-Model Predictors

Researchers developed an interpretable AI framework for detecting structural heart disease from electrocardiograms, achieving better performance than existing deep-learning methods while providing clinical transparency. The model demonstrated improvements of nearly 1% across key metrics using the EchoNext benchmark of over 80,000 ECG-ECHO pairs.
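The additive structure such a model relies on can be sketched in a few lines. This is a hypothetical illustration, not the paper's actual model: the predictor names, shape functions, and bias below are invented stand-ins for the foundation-model outputs the authors use.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def additive_risk(predictor_scores, shape_fns, bias=0.0):
    """Generalized additive model: risk = sigmoid(bias + sum_j f_j(x_j)).

    Each shape function transforms one interpretable predictor score,
    so its contribution to the final risk can be read off in isolation,
    which is what gives the approach its clinical transparency.
    """
    contributions = {name: shape_fns[name](x) for name, x in predictor_scores.items()}
    return sigmoid(bias + sum(contributions.values())), contributions

# Hypothetical per-predictor shape functions (stand-ins for calibrated
# foundation-model probabilities of individual ECG abnormalities).
shape_fns = {
    "lvh_score": lambda x: 2.0 * x,          # left-ventricular hypertrophy
    "afib_score": lambda x: 1.5 * x - 0.5,   # atrial fibrillation
}
risk, contrib = additive_risk({"lvh_score": 0.8, "afib_score": 0.2}, shape_fns, bias=-1.0)
```

Because the logit is a plain sum, a clinician can inspect each term (here, 1.6 from `lvh_score` and -0.2 from `afib_score`) before trusting the final probability.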

AI · Bullish · arXiv – CS AI · Mar 46/102

Is Retraining-Free Enough? The Necessity of Router Calibration for Efficient MoE Compression

Researchers propose Router Knowledge Distillation (Router KD) to improve retraining-free compression of Mixture-of-Experts (MoE) models by calibrating routers while keeping expert parameters unchanged. The method addresses router-expert mismatch issues that cause performance degradation in compressed MoE models, showing particularly strong results in fine-grained MoE architectures.
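The core calibration idea can be sketched as a KL distillation between routing distributions; this is a minimal NumPy sketch under assumptions (the teacher target restricted to surviving experts, a plain KL objective), not the paper's exact loss.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def router_kd_loss(teacher_logits, student_logits, kept_experts):
    """Distill the full model's router into the compressed model's router.

    The teacher routing distribution is restricted to the experts that
    survive compression and renormalized; the student router (defined
    over kept experts only) is calibrated to match it via KL divergence.
    Expert parameters themselves stay frozen throughout.
    """
    p = softmax(teacher_logits)[:, kept_experts]
    p = p / p.sum(axis=-1, keepdims=True)
    q = softmax(student_logits)
    return float(np.mean(np.sum(p * (np.log(p) - np.log(q)), axis=-1)))

rng = np.random.default_rng(0)
teacher = rng.normal(size=(6, 8))   # 6 tokens, 8 experts before pruning
kept = [0, 2, 3, 5]                 # 4 experts retained after compression
student = teacher[:, kept]          # init compressed router from teacher columns
zero_loss = router_kd_loss(teacher, student, kept)                       # ~0: already matched
drift_loss = router_kd_loss(teacher, student + 0.1 * rng.normal(size=student.shape), kept)
```

Initializing the student router from the teacher's kept columns gives zero loss by construction; any router-expert mismatch introduced by compression shows up as positive KL to be minimized.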

AI · Bullish · arXiv – CS AI · Mar 46/106

SuperLocalMemory: Privacy-Preserving Multi-Agent Memory with Bayesian Trust Defense Against Memory Poisoning

SuperLocalMemory is a new privacy-preserving memory system for multi-agent AI that defends against memory poisoning attacks through local-first architecture and Bayesian trust scoring. The open-source system eliminates cloud dependencies while providing personalized retrieval through adaptive learning-to-rank, demonstrating strong performance metrics including 10.6ms search latency and 72% trust degradation for sleeper attacks.
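Bayesian trust scoring of this kind is often built on a Beta-Bernoulli posterior; the sketch below is a generic illustration of that pattern, not SuperLocalMemory's actual implementation, and the success/failure semantics are assumptions.

```python
class TrustScore:
    """Beta-Bernoulli trust model for one memory source.

    Each retrieval that checks out counts as a success; each detected
    contradiction (possible poisoning) counts as a failure. The posterior
    mean alpha / (alpha + beta) is the trust score used to down-weight
    that source's memories at retrieval time.
    """
    def __init__(self, alpha=1.0, beta=1.0):   # uniform prior
        self.alpha, self.beta = alpha, beta

    def update(self, consistent):
        if consistent:
            self.alpha += 1.0
        else:
            self.beta += 1.0

    @property
    def trust(self):
        return self.alpha / (self.alpha + self.beta)

source = TrustScore()
for _ in range(8):
    source.update(True)    # benign memories keep checking out
before = source.trust      # 9/10 = 0.9
for _ in range(6):
    source.update(False)   # sleeper payload starts contradicting known facts
after = source.trust       # 9/16 = 0.5625
```

The appeal of the Beta posterior is that trust degrades smoothly as contradictions accumulate, which is the kind of behavior the reported trust degradation for sleeper attacks measures.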

AI · Neutral · arXiv – CS AI · Mar 46/103

Engineering Reasoning and Instruction (ERI) Benchmark: A Large Taxonomy-driven Dataset for Foundation Models and Agents

Researchers released the ERI benchmark, a comprehensive dataset spanning 9 engineering fields and 55 subdomains to evaluate large language models' engineering capabilities. The benchmark tested 7 LLMs across 57,750 records, revealing a clear three-tier performance structure with frontier models like GPT-5 and Claude Sonnet 4 significantly outperforming mid-tier and smaller models.

AI · Neutral · arXiv – CS AI · Mar 47/105

Federated Inference: Toward Privacy-Preserving Collaborative and Incentivized Model Serving

Researchers introduce Federated Inference (FI), a new collaborative paradigm where independently trained AI models can work together at inference time without sharing data or model parameters. The study identifies key requirements including privacy preservation and performance gains, while highlighting system-level challenges that differ from traditional federated learning approaches.

AI · Bullish · arXiv – CS AI · Mar 46/102

Expectation and Acoustic Neural Network Representations Enhance Music Identification from Brain Activity

Researchers developed a method to improve EEG-based music identification by using artificial neural networks that distinguish between acoustic and expectation-related brain representations. The approach combines both types of neural representations to achieve better performance than traditional methods, potentially advancing brain-computer interfaces and neural decoding applications.

AI · Bullish · arXiv – CS AI · Mar 46/102

Rigidity-Aware Geometric Pretraining for Protein Design and Conformational Ensembles

Researchers introduce RigidSSL, a new geometric pretraining framework for protein design that improves designability by up to 43% and enhances success rates in protein generation tasks. The two-phase approach combines geometric learning from 432K protein structures with molecular dynamics refinement to better capture protein conformational dynamics.

AI · Bullish · arXiv – CS AI · Mar 46/103

Through the Lens of Contrast: Self-Improving Visual Reasoning in VLMs

Researchers introduce VC-STaR, a new framework that improves visual reasoning in vision-language models by using contrastive image pairs to reduce hallucinations. The approach creates VisCoR-55K, a new dataset that outperforms existing visual reasoning methods when used for model fine-tuning.

AI · Bullish · arXiv – CS AI · Mar 46/102

AI-for-Science Low-code Platform with Bayesian Adversarial Multi-Agent Framework

Researchers have developed a Bayesian adversarial multi-agent framework for AI-driven scientific code generation, featuring three coordinated LLM agents that work together to improve reliability and reduce errors. The Low-code Platform (LCP) enables non-expert users to generate scientific code through natural language prompts, demonstrating superior performance in benchmark tests and Earth Science applications.

AI · Bullish · arXiv – CS AI · Mar 47/103

CAPT: Confusion-Aware Prompt Tuning for Reducing Vision-Language Misalignment

Researchers propose CAPT, a Confusion-Aware Prompt Tuning framework that addresses systematic misclassifications in vision-language models like CLIP by learning from the model's own confusion patterns. The method uses a Confusion Bank to model persistent category misalignments and introduces specialized modules to capture both semantic and sample-level confusion cues.
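A running store of persistent confusions can be sketched as a simple counter over (true, predicted) mistakes; this is an illustrative stand-in for the idea of a Confusion Bank (the real module is learned, and the class names here are invented).

```python
from collections import Counter

class ConfusionBank:
    """Records systematic misclassifications and surfaces, for each
    category, the classes it is most persistently confused with, which
    can then be targeted as hard negatives during prompt tuning."""
    def __init__(self):
        self.counts = Counter()

    def record(self, true_label, predicted_label):
        if true_label != predicted_label:       # only mistakes are stored
            self.counts[(true_label, predicted_label)] += 1

    def top_confusions(self, true_label, k=2):
        pairs = [(pred, c) for (t, pred), c in self.counts.items() if t == true_label]
        return [pred for pred, _ in sorted(pairs, key=lambda x: -x[1])[:k]]

bank = ConfusionBank()
for pred in ["wolf", "wolf", "fox", "dog"]:     # model's predictions on "dog" images
    bank.record("dog", pred)
hard_negatives = bank.top_confusions("dog")     # ["wolf", "fox"]
```

Tracking confusions per category rather than per sample is what lets the method model *persistent* misalignments instead of one-off errors.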

AI · Bullish · arXiv – CS AI · Mar 47/103

Density-Guided Response Optimization: Community-Grounded Alignment via Implicit Acceptance Signals

Researchers introduce Density-Guided Response Optimization (DGRO), a new AI alignment method that learns community preferences from implicit acceptance signals rather than explicit feedback. The technique uses geometric patterns in how communities naturally engage with content to train language models without requiring costly annotation or preference labeling.
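One natural reading of "geometric patterns of engagement" is local density in an embedding space of community-accepted responses; the sketch below assumes that interpretation (a plain Gaussian kernel density as the implicit acceptance weight) and is not DGRO's actual estimator.

```python
import numpy as np

def density_weights(embeddings, bandwidth=1.0):
    """Gaussian-kernel density of each response among the community's
    accepted responses. Responses in dense regions resemble many accepted
    ones and get higher training weight; outliers are down-weighted,
    all without any explicit preference labels."""
    diffs = embeddings[:, None, :] - embeddings[None, :, :]
    sq_dist = (diffs ** 2).sum(axis=-1)
    kernel = np.exp(-sq_dist / (2.0 * bandwidth ** 2))
    density = kernel.mean(axis=1)
    return density / density.sum()   # normalized per-example weights

# Three clustered (community-typical) responses and one outlier.
emb = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [5.0, 5.0]])
w = density_weights(emb)
```

The outlier receives the smallest weight, mimicking how a community's tacit acceptance concentrates training signal on typical responses.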

AI · Bullish · arXiv – CS AI · Mar 47/102

Saarthi for AGI: Towards Domain-Specific General Intelligence for Formal Verification

Researchers have enhanced the Saarthi AI framework for formal verification, achieving 70% better accuracy in generating SystemVerilog assertions and 50% fewer iterations to reach coverage closure. The framework uses multi-agent collaboration and improved RAG techniques to move toward domain-specific AI intelligence for verification tasks.

AI · Bullish · arXiv – CS AI · Mar 46/103

Agentic AI-based Coverage Closure for Formal Verification

Researchers have developed an agentic AI-driven workflow using Large Language Models to automate coverage analysis for formal verification in integrated chip development. The approach systematically identifies coverage gaps and generates required formal properties, demonstrating measurable improvements in coverage metrics that correlate with design complexity.

AI · Bullish · arXiv – CS AI · Mar 46/103

TikZilla: Scaling Text-to-TikZ with High-Quality Data and Reinforcement Learning

Researchers have developed TikZilla, a new AI model that generates high-quality scientific figures from text descriptions using TikZ code. The model uses a dataset four times larger than previous versions and combines supervised learning with reinforcement learning to achieve performance matching GPT-5 while using much smaller model sizes.

AI · Bearish · arXiv – CS AI · Mar 47/102

Inherited Goal Drift: Contextual Pressure Can Undermine Agentic Goals

Research shows that state-of-the-art language model agents are susceptible to 'goal drift': deviating from their original objectives when exposed to contextual pressure from weaker agents' behaviors. Only GPT-5.1 demonstrated consistent resilience; other models inherited problematic behaviors when conditioned on trajectories from less capable agents.

AI · Bullish · arXiv – CS AI · Mar 47/103

Odin: Multi-Signal Graph Intelligence for Autonomous Discovery in Knowledge Graphs

Researchers present Odin, the first production-deployed graph intelligence engine that autonomously discovers patterns in knowledge graphs without predefined queries. The system uses a novel COMPASS scoring metric combining structural, semantic, temporal, and community-aware signals, and has been successfully deployed in regulated healthcare and insurance environments.

AI · Bullish · arXiv – CS AI · Mar 46/103

RAPO: Expanding Exploration for LLM Agents via Retrieval-Augmented Policy Optimization

Researchers introduce RAPO (Retrieval-Augmented Policy Optimization), a new reinforcement learning framework that improves LLM agent training by incorporating retrieval mechanisms for broader exploration. The method achieves 5% performance gains across 14 datasets and trains 1.2x faster by using hybrid-policy rollouts and retrieval-aware optimization.
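Hybrid-policy rollouts of this general shape mix on-policy samples with retrieved off-policy trajectories, importance-weighted so they do not bias the update. This is a minimal bandit-style sketch of that textbook pattern, not RAPO's actual algorithm; all names and numbers are illustrative.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def hybrid_policy_update(logits, on_policy, retrieved, lr=0.5):
    """One REINFORCE-style step over a hybrid batch.

    `on_policy` rollouts use the vanilla policy gradient; `retrieved`
    trajectories (from an experience store) are importance-weighted by
    pi(a) / mu(a), where mu is the behavior probability recorded at
    retrieval time, so the broader exploration stays unbiased.
    """
    probs = softmax(logits)
    grad = np.zeros_like(logits)
    for a, r in on_policy:
        g = -probs
        g[a] += 1.0                  # gradient of log pi(a)
        grad += r * g
    for a, r, mu in retrieved:
        w = probs[a] / mu            # importance ratio pi(a) / mu(a)
        g = -probs
        g[a] += 1.0
        grad += w * r * g
    n = len(on_policy) + len(retrieved)
    return logits + lr * grad / n

logits = np.zeros(3)
on_policy = [(0, 1.0), (1, 0.0)]     # (action, reward)
retrieved = [(2, 1.0, 0.5)]          # (action, reward, behavior prob mu)
new_logits = hybrid_policy_update(logits, on_policy, retrieved)
```

Rewarded actions (0 from the fresh rollout, 2 from the retrieved trajectory) gain probability mass, while the unrewarded action loses it.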

AI · Bullish · arXiv – CS AI · Mar 47/103

Next Embedding Prediction Makes World Models Stronger

Researchers introduce NE-Dreamer, a decoder-free model-based reinforcement learning agent that uses temporal transformers to predict next-step encoder embeddings. The approach achieves performance matching or exceeding DreamerV3 on standard benchmarks while showing substantial improvements on memory and spatial reasoning tasks.

AI · Neutral · arXiv – CS AI · Mar 46/102

Beyond Factual Correctness: Mitigating Preference-Inconsistent Explanations in Explainable Recommendation

Researchers propose PURE, a new framework for AI-powered recommendation systems that addresses preference-inconsistent explanations: reasoning that is factually correct but conflicts with user preferences and so fails to convince. The system uses a select-then-generate approach to improve both evidence selection and explanation generation, reducing hallucinations while maintaining recommendation accuracy.

AI · Bullish · arXiv – CS AI · Mar 46/104

REGAL: A Registry-Driven Architecture for Deterministic Grounding of Agentic AI in Enterprise Telemetry

Researchers present REGAL, a registry-driven architecture that enables AI agents to work deterministically with enterprise telemetry data from systems like CI/CD pipelines and observability platforms. The system addresses key challenges of grounding Large Language Models on private enterprise data through structured data processing and version-controlled action spaces.

AI · Bullish · arXiv – CS AI · Mar 47/103

OptMerge: Unifying Multimodal LLM Capabilities and Modalities via Model Merging

Researchers introduce OptMerge, a new benchmark and method for combining multiple expert Multimodal Large Language Models (MLLMs) into single, more capable models without requiring additional training data. The approach achieves 2.48% average performance gains while reducing storage and serving costs by merging models across different modalities like vision, audio, and video.
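A common baseline for this kind of training-free merging is task arithmetic over parameter deltas; the sketch below shows that generic recipe, not necessarily OptMerge's exact merging rule, with toy parameter dicts standing in for real MLLM checkpoints.

```python
import numpy as np

def merge_models(base, experts, alpha=0.5):
    """Task-arithmetic-style merge: for each parameter tensor, add the
    scaled mean of every expert's delta from the shared base back onto
    the base weights. No training data is needed, and a single merged
    checkpoint replaces several expert checkpoints at serving time."""
    merged = {}
    for name, w in base.items():
        deltas = [expert[name] - w for expert in experts]
        merged[name] = w + alpha * np.mean(deltas, axis=0)
    return merged

base = {"proj": np.array([1.0, 1.0])}
vision_expert = {"proj": np.array([2.0, 1.0])}   # fine-tuned on vision tasks
audio_expert = {"proj": np.array([1.0, 3.0])}    # fine-tuned on audio tasks
merged = merge_models(base, [vision_expert, audio_expert], alpha=0.5)
```

Because each expert perturbs mostly different directions of the shared base, averaging the deltas can preserve much of both capabilities in one set of weights, which is where the storage and serving savings come from.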

AI · Bullish · arXiv – CS AI · Mar 46/103

IoUCert: Robustness Verification for Anchor-based Object Detectors

Researchers introduce IoUCert, a new formal verification framework that enables robustness verification for anchor-based object detection models like SSD, YOLOv2, and YOLOv3. The framework uses novel coordinate transformations and Interval Bound Propagation to overcome previous limitations in verifying object detection systems against input perturbations.
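The flavor of interval-bound reasoning involved can be illustrated on the final IoU computation: given intervals on each predicted box coordinate (as IBP would propagate through the network), one can derive a sound lower bound on the IoU against a ground-truth box. This is a simplified hand-derived bound, not IoUCert's actual procedure.

```python
def iou_lower_bound(box_lo, box_hi, gt):
    """Sound (not necessarily tight) IoU lower bound via interval
    arithmetic: each predicted coordinate lies in [box_lo[i], box_hi[i]].
    Boxes are (x1, y1, x2, y2). We divide the smallest possible
    intersection by the largest possible union."""
    gx1, gy1, gx2, gy2 = gt
    x1l, y1l, x2l, y2l = box_lo
    x1u, y1u, x2u, y2u = box_hi
    # smallest possible intersection with the ground-truth box
    iw_lo = max(0.0, min(x2l, gx2) - max(x1u, gx1))
    ih_lo = max(0.0, min(y2l, gy2) - max(y1u, gy1))
    inter_lo = iw_lo * ih_lo
    # largest possible predicted-box area
    area_hi = max(0.0, x2u - x1l) * max(0.0, y2u - y1l)
    gt_area = (gx2 - gx1) * (gy2 - gy1)
    union_hi = area_hi + gt_area - inter_lo
    return inter_lo / union_hi if union_hi > 0 else 0.0

# +/- 1 pixel perturbation on every coordinate of a 10x10 prediction
lb = iou_lower_bound((-1, -1, 9, 9), (1, 1, 11, 11), (0, 0, 10, 10))
```

If the certified lower bound stays above the detector's IoU threshold for every box in the reachable set, the detection is verified robust to that perturbation.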

AI · Neutral · arXiv – CS AI · Mar 46/102

How Controllable Are Large Language Models? A Unified Evaluation across Behavioral Granularities

Researchers introduce SteerEval, a new benchmark for evaluating how controllable Large Language Models are across language features, sentiment, and personality domains. The study reveals that current steering methods often fail at finer-grained control levels, highlighting significant risks when deploying LLMs in socially sensitive applications.

Page 71 of 468