y0news
🧠 AI

13,275 AI articles curated from 50+ sources with AI-powered sentiment analysis, importance scoring, and key takeaways.
AI · Bullish · arXiv – CS AI · Mar 27

DiffuMamba: High-Throughput Diffusion LMs with Mamba Backbone

Researchers introduce DiffuMamba, a new diffusion language model using Mamba backbone architecture that achieves up to 8.2x higher inference throughput than Transformer-based models while maintaining comparable performance. The model demonstrates linear scaling with sequence length and represents a significant advancement in efficient AI text generation systems.

AI · Bullish · arXiv – CS AI · Mar 26

MITS: Enhanced Tree Search Reasoning for LLMs via Pointwise Mutual Information

Researchers introduce MITS (Mutual Information Tree Search), a new framework that improves reasoning capabilities in large language models using information-theoretic principles. The method uses pointwise mutual information for step-wise evaluation and achieves better performance while being more computationally efficient than existing tree search methods like Tree-of-Thought.
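The summary names pointwise mutual information as the step-scoring signal. The quantity itself is standard, and a toy sketch of PMI-ranked candidate steps (function names and probabilities below are illustrative, not the authors' code) looks like:

```python
import math

def pmi_score(logp_step_given_context: float, logp_step_prior: float) -> float:
    """Pointwise mutual information between a reasoning step and its context:
    PMI(step; context) = log P(step | context) - log P(step),
    i.e. how much more likely the step becomes once the context is seen.
    Both arguments are natural-log probabilities."""
    return logp_step_given_context - logp_step_prior

def rank_candidate_steps(candidates):
    """Sort (name, logp_conditional, logp_prior) triples by PMI, best first."""
    return sorted(candidates, key=lambda c: pmi_score(c[1], c[2]), reverse=True)

candidates = [
    ("step A", math.log(0.30), math.log(0.20)),  # PMI ≈ 0.405
    ("step B", math.log(0.25), math.log(0.05)),  # PMI ≈ 1.609
    ("step C", math.log(0.10), math.log(0.10)),  # PMI = 0
]
best = rank_candidate_steps(candidates)[0][0]
print(best)  # → step B
```

Note that step B wins despite step A having the higher raw conditional probability: PMI rewards steps whose likelihood rises most given the context, not the most generically probable ones.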

AI · Bullish · arXiv – CS AI · Mar 26

CowPilot: A Framework for Autonomous and Human-Agent Collaborative Web Navigation

Researchers introduce CowPilot, a framework that combines autonomous AI agents with human collaboration for web navigation tasks. The system achieved 95% success rate while requiring humans to perform only 15.2% of total steps, demonstrating effective human-AI cooperation for complex web tasks.

AI · Bullish · arXiv – CS AI · Mar 27

MACD: Multi-Agent Clinical Diagnosis with Self-Learned Knowledge for LLM

Researchers developed MACD, a Multi-Agent Clinical Diagnosis framework that enables large language models to self-learn clinical knowledge and improve medical diagnosis accuracy. The system achieved up to 22.3% improvement over clinical guidelines and 16% improvement over physician-only diagnosis when tested on 4,390 real-world patient cases.

AI · Bullish · arXiv – CS AI · Mar 27

CoMind: Towards Community-Driven Agents for Machine Learning Engineering

Researchers introduce CoMind, a multi-agent AI system that leverages community knowledge to automate machine learning engineering tasks. The system achieved a 36% medal rate on 75 past Kaggle competitions and outperformed 92.6% of human competitors in eight live competitions, establishing new state-of-the-art performance.

AI · Neutral · arXiv – CS AI · Mar 26

Do LLMs Benefit From Their Own Words?

Research reveals that large language models don't significantly benefit from conditioning on their own previous responses in multi-turn conversations. The study found that omitting assistant history can reduce context lengths by up to 10x while maintaining response quality, and in some cases even improves performance by avoiding context pollution where models over-condition on previous responses.
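The trimming itself is simple to picture. As an illustrative sketch (the study's exact protocol may differ), dropping prior assistant turns from a chat-style message list looks like:

```python
def strip_assistant_turns(messages):
    """Drop prior assistant responses from a chat transcript, keeping the
    system and user turns in order -- the 'omit assistant history'
    condition described in the study."""
    return [m for m in messages if m["role"] != "assistant"]

chat = [
    {"role": "system", "content": "You are helpful."},
    {"role": "user", "content": "Summarize chapter 1."},
    {"role": "assistant", "content": "<long summary>"},
    {"role": "user", "content": "Now do chapter 2."},
]
trimmed = strip_assistant_turns(chat)
print(len(chat), "->", len(trimmed))  # → 4 -> 3
```

Since assistant responses are usually the longest turns in a conversation, removing them is where the large context-length savings come from.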

AI · Bullish · arXiv – CS AI · Mar 26

Does Your Reasoning Model Implicitly Know When to Stop Thinking?

Researchers introduce SAGE (Self-Aware Guided Efficient Reasoning), a novel sampling paradigm that improves AI reasoning efficiency by helping large reasoning models know when to stop thinking. The approach addresses the problem of redundant, lengthy reasoning chains that don't improve accuracy while reducing computational costs and response times.

AI · Neutral · arXiv – CS AI · Mar 27

FaultXformer: A Transformer-Encoder Based Fault Classification and Location Identification model in PMU-Integrated Active Electrical Distribution System

Researchers developed FaultXformer, a Transformer-based AI model that achieves 98.76% accuracy in fault classification and 98.92% accuracy in fault location identification in electrical distribution systems using PMU data. The dual-stage architecture significantly outperforms traditional deep learning methods like CNN, RNN, and LSTM, particularly in systems with distributed energy resources integration.

AI · Bullish · arXiv – CS AI · Mar 27

SafeGen-LLM: Enhancing Safety Generalization in Task Planning for Robotic Systems

Researchers propose SafeGen-LLM, a new approach to enhance safety in robotic task planning by combining supervised fine-tuning with policy optimization guided by formal verification. The system demonstrates superior safety generalization across multiple domains compared to existing classical planners, reinforcement learning methods, and base large language models.

AI · Bullish · arXiv – CS AI · Mar 26

Efficient Discovery of Approximate Causal Abstractions via Neural Mechanism Sparsification

Researchers have developed a new method to extract interpretable causal mechanisms from neural networks using structured pruning as a search technique. The approach reframes network pruning as finding approximate causal abstractions, yielding closed-form criteria for simplifying networks while maintaining their causal structure under interventions.

AI · Bullish · arXiv – CS AI · Mar 26

Controllable Reasoning Models Are Private Thinkers

Researchers developed a method to train AI reasoning models to follow privacy instructions in their internal reasoning traces, not just final answers. The approach uses separate LoRA adapters and achieves up to 51.9% improvement on privacy benchmarks, though with some trade-offs in task performance.

AI · Bullish · arXiv – CS AI · Mar 26

A Mixed Diet Makes DINO An Omnivorous Vision Encoder

Researchers have developed an 'Omnivorous Vision Encoder' that creates consistent feature representations across different visual modalities (RGB, depth, segmentation) of the same scene. The framework addresses the poor cross-modal alignment in existing vision encoders like DINOv2 by training with dual objectives to maximize feature alignment while preserving discriminative semantics.
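The "dual objectives" are not detailed in the blurb. As a rough sketch of the alignment half only, a cross-modal loss can be written as one minus the mean cosine similarity between per-patch features of two modalities of the same scene (names, shapes, and data here are illustrative):

```python
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-8):
    """Normalize feature vectors to unit length."""
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def cross_modal_alignment_loss(feat_a, feat_b):
    """1 - mean cosine similarity between per-patch features of two
    modalities (e.g. RGB and depth) of the same scene; ~0 when aligned."""
    a = l2_normalize(feat_a)
    b = l2_normalize(feat_b)
    return 1.0 - float(np.mean(np.sum(a * b, axis=-1)))

rgb = np.random.default_rng(0).normal(size=(16, 32))  # 16 patches, 32-dim
depth_aligned = rgb.copy()                            # perfectly aligned case
print(round(cross_modal_alignment_loss(rgb, depth_aligned), 6))  # → 0.0
```

The second objective, preserving discriminative semantics, would be a separate term keeping features close to the original encoder's; it is omitted here since the blurb does not specify its form.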

AI · Bullish · arXiv – CS AI · Mar 26

Task-Centric Acceleration of Small-Language Models

Researchers propose TASC (Task-Adaptive Sequence Compression), a framework for accelerating small language models through two methods: TASC-ft for fine-tuning with expanded vocabularies and TASC-spec for training-free speculative decoding. The methods demonstrate improved inference efficiency while maintaining task performance across low output-variability generation tasks.
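TASC-spec is described as training-free speculative decoding. The general propose-then-verify loop, in its simplest greedy form and with toy integer-token models standing in for real draft and target models, can be sketched as:

```python
def greedy_speculative_step(draft_next, target_next, context, k=4):
    """One step of greedy speculative decoding: a cheap draft model
    proposes k tokens, the target model re-checks them and keeps the
    longest agreeing prefix, plus one token of its own at the point of
    divergence. (A toy sketch, not the TASC-spec implementation.)"""
    ctx = list(context)
    proposal = []
    for _ in range(k):                 # draft model runs k cheap steps
        tok = draft_next(ctx)
        proposal.append(tok)
        ctx.append(tok)

    ctx = list(context)
    accepted = []
    for tok in proposal:               # target verifies the proposal
        if target_next(ctx) != tok:
            break
        accepted.append(tok)
        ctx.append(tok)
    accepted.append(target_next(ctx))  # target's own token after divergence
    return accepted

# Toy models over integer tokens: the draft matches the target exactly,
# so all k draft tokens are accepted plus one bonus token.
target = lambda ctx: len(ctx) % 5
accepted = greedy_speculative_step(target, target, context=[0, 1], k=4)
print(len(accepted))  # → 5
```

The payoff is that one expensive target pass can validate several cheap draft tokens at once; this works best on the "low output-variability" tasks the summary mentions, where the draft agrees with the target often.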

AI · Neutral · arXiv – CS AI · Mar 26

Memory Caching: RNNs with Growing Memory

Researchers introduce Memory Caching (MC), a technique that enhances recurrent neural networks by allowing their memory capacity to grow with sequence length, bridging the gap between fixed-memory RNNs and growing-memory Transformers. The approach offers four variants and shows competitive performance with Transformers on language modeling and long-context tasks while maintaining better computational efficiency.
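How the cache interacts with the recurrence is not specified in the blurb. One plausible reading, sketched here with made-up weights and an attention-style read over all cached states, is:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def mc_step(h, x, cache, W_h, W_x):
    """One recurrent update that also reads from a cache of all past
    hidden states. The cache grows by one entry per timestep, so memory
    scales with sequence length instead of staying fixed."""
    if cache:
        past = np.stack(cache)              # (t, d): every earlier state
        read = past.T @ softmax(past @ h)   # attention-weighted read
    else:
        read = np.zeros_like(h)
    h_new = np.tanh(W_h @ h + W_x @ x + read)
    cache.append(h_new)
    return h_new

rng = np.random.default_rng(0)
d = 8
W_h = rng.normal(scale=0.1, size=(d, d))
W_x = rng.normal(scale=0.1, size=(d, d))
h, cache = np.zeros(d), []
for _ in range(5):
    h = mc_step(h, rng.normal(size=d), cache, W_h, W_x)
print(len(cache))  # one cached state per step → 5
```

This is the sense in which such a model sits between fixed-memory RNNs (no cache) and Transformers (which attend over the full history at every step).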

AI · Bullish · arXiv – CS AI · Mar 26

An Efficient Unsupervised Federated Learning Approach for Anomaly Detection in Heterogeneous IoT Networks

Researchers propose an efficient unsupervised federated learning framework for anomaly detection in heterogeneous IoT networks that preserves privacy while leveraging shared features from multiple datasets. The approach uses explainable AI techniques like SHAP for transparency and demonstrates superior performance compared to conventional federated learning methods on real-world IoT datasets.

AI · Bullish · arXiv – CS AI · Mar 26

DiffusionHarmonizer: Bridging Neural Reconstruction and Photorealistic Simulation with Online Diffusion Enhancer

Researchers introduce DiffusionHarmonizer, an AI framework that enhances neural reconstruction simulations for autonomous robots by converting multi-step image diffusion models into single-step enhancers. The system addresses artifacts in NeRF and 3D Gaussian Splatting methods while improving realism for applications like self-driving vehicle simulation.

AI · Bullish · arXiv – CS AI · Mar 27

Toward Guarantees for Clinical Reasoning in Vision Language Models via Formal Verification

Researchers developed a neurosymbolic verification framework to audit logical consistency in AI-generated radiology reports, addressing issues where vision-language models produce diagnostic conclusions unsupported by their findings. The system uses formal verification methods to identify hallucinations and missing logical conclusions in medical AI outputs, improving diagnostic accuracy.

AI · Bullish · arXiv – CS AI · Mar 26

Preference Packing: Efficient Preference Optimization for Large Language Models

Researchers propose 'preference packing,' a new optimization technique for training large language models that reduces training time by at least 37% through more efficient handling of duplicate input prompts. The method optimizes attention operations and KV cache memory usage in preference-based training methods like Direct Preference Optimization.
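The blurb attributes the savings to shared prompts. A minimal sketch of the packing idea (identifiers here are illustrative, and the real method also restructures attention and KV-cache use) is:

```python
def pack_preference_pair(prompt_ids, chosen_ids, rejected_ids):
    """Pack a DPO-style (prompt, chosen, rejected) triple into one
    sequence so the shared prompt is encoded once instead of twice.
    Returns the packed token ids, the span of each completion (which a
    block-diagonal attention mask would keep independent of the other),
    and the prompt tokens saved versus two separate forward passes."""
    packed = list(prompt_ids) + list(chosen_ids) + list(rejected_ids)
    chosen_span = (len(prompt_ids), len(prompt_ids) + len(chosen_ids))
    rejected_span = (chosen_span[1], chosen_span[1] + len(rejected_ids))
    tokens_saved = len(prompt_ids)
    return packed, chosen_span, rejected_span, tokens_saved

packed, c_span, r_span, saved = pack_preference_pair(
    prompt_ids=[1, 2, 3, 4], chosen_ids=[10, 11], rejected_ids=[20, 21, 22])
print(len(packed), saved)  # → 9 4
```

Since preference datasets pair each prompt with both a chosen and a rejected completion, the prompt would otherwise be re-encoded for every pair; packing removes exactly that duplication.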

AI · Bullish · arXiv – CS AI · Mar 26

Quant Experts: Token-aware Adaptive Error Reconstruction with Mixture of Experts for Large Vision-Language Models Quantization

Researchers introduce Quant Experts (QE), a new post-training quantization technique for Vision-Language Models that uses adaptive error compensation with mixture-of-experts architecture. The method addresses computational and memory overhead issues by intelligently handling token-dependent and token-independent channels, maintaining performance comparable to full-precision models across 2B to 70B parameter scales.

AI · Bullish · arXiv – CS AI · Mar 26

Multimodal Optimal Transport for Unsupervised Temporal Segmentation in Surgical Robotics

Researchers developed TASOT, an unsupervised AI method for surgical phase recognition that combines visual and textual information without requiring expensive large-scale pre-training. The approach showed significant improvements over existing zero-shot methods across multiple surgical datasets, demonstrating that effective surgical AI can be achieved with more efficient training methods.

AI · Neutral · arXiv – CS AI · Mar 27

Task Complexity Matters: An Empirical Study of Reasoning in LLMs for Sentiment Analysis

A comprehensive study of 504 AI model configurations reveals that reasoning capabilities in large language models are highly task-dependent, with simple tasks like binary classification actually degrading by up to 19.9 percentage points while complex 27-class emotion recognition improves by up to 16.0 points. The research challenges the assumption that reasoning universally improves AI performance across all language tasks.

AI · Neutral · arXiv – CS AI · Mar 26

Jailbreak Foundry: From Papers to Runnable Attacks for Reproducible Benchmarking

Researchers introduce Jailbreak Foundry (JBF), a system that automatically converts AI jailbreak research papers into executable code modules for standardized testing. The system successfully reproduced 30 attacks with high accuracy and reduces implementation code by nearly half while enabling consistent evaluation across multiple AI models.

AI · Bullish · arXiv – CS AI · Mar 27

Foundation World Models for Agents that Learn, Verify, and Adapt Reliably Beyond Static Environments

Researchers propose a new framework for foundation world models that enables autonomous agents to learn, verify, and adapt reliably in dynamic environments. The approach combines reinforcement learning with formal verification and adaptive abstraction to create agents that can synthesize verifiable programs and maintain correctness while adapting to novel conditions.

AI · Bullish · arXiv – CS AI · Mar 27

Interpretable Debiasing of Vision-Language Models for Social Fairness

Researchers have developed DeBiasLens, a new framework that uses sparse autoencoders to identify and deactivate social bias neurons in Vision-Language models without degrading their performance. The model-agnostic approach addresses concerns about unintended social bias in VLMs by making the debiasing process interpretable and targeting internal model dynamics rather than surface-level fixes.
