y0news
🧠 AI

9,653 AI articles curated from 50+ sources with AI-powered sentiment analysis, importance scoring, and key takeaways.

AI · Bullish · arXiv – CS AI · Mar 27/1024

DUET: Distilled LLM Unlearning from an Efficiently Contextualized Teacher

Researchers propose DUET, a new distillation-based method for LLM unlearning that removes undesirable knowledge from AI models without full retraining. The technique combines computational efficiency with security advantages, achieving better performance in both knowledge removal and utility preservation while being significantly more data-efficient than existing methods.

AI · Bullish · arXiv – CS AI · Mar 26/1021

Multi-View Encoders for Performance Prediction in LLM-Based Agentic Workflows

Researchers developed Agentic Predictor, a lightweight AI system that uses multi-view encoding to optimize LLM-based agent workflows without expensive trial-and-error evaluations. The system incorporates code architecture, textual prompts, and interaction graphs to predict task success rates and select optimal configurations across different domains.

AI · Bullish · arXiv – CS AI · Mar 26/1015

Robust and Efficient Tool Orchestration via Layered Execution Structures with Reflective Correction

Researchers propose a new approach to tool orchestration in AI agent systems using layered execution structures with reflective error correction. The method reduces execution complexity by using coarse-grained layer structures for global guidance while handling failures locally, eliminating the need for precise dependency graphs or fine-grained planning.

AI · Bullish · arXiv – CS AI · Mar 27/1026

RE-PO: Robust Enhanced Policy Optimization as a General Framework for LLM Alignment

Researchers introduce RE-PO (Robust Enhanced Policy Optimization), a new framework that addresses noise in human preference data used to train large language models. The method uses expectation-maximization to identify unreliable labels and reweight training data, improving alignment algorithm performance by up to 7% on benchmarks.
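The expectation-maximization reweighting idea can be sketched in a few lines. This is a toy illustration under stated assumptions, not the paper's algorithm: it models each preference label as either clean or flipped, scores labels with a logistic likelihood over the current model's margin, and all names (`em_label_weights`, `margins`) are invented for the example.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def em_label_weights(margins, rounds=20, clean_frac=0.5):
    """Toy EM over preference labels. `margins` holds the current model's
    log-prob margin (chosen minus rejected) per example; a clean label is
    likely when the margin is positive, a flipped one when it is negative."""
    for _ in range(rounds):
        # E-step: posterior probability that each label is clean.
        post = [clean_frac * sigmoid(m)
                / (clean_frac * sigmoid(m) + (1 - clean_frac) * sigmoid(-m))
                for m in margins]
        # M-step: re-estimate the overall clean-label fraction.
        clean_frac = sum(post) / len(post)
    return post, clean_frac

# Two examples the model agrees with, one it strongly disagrees with.
weights, frac = em_label_weights([3.0, 3.0, -3.0])
```

A training loop would then downweight the low-posterior example rather than discard it outright.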

AI · Bullish · arXiv – CS AI · Mar 26/1017

MITS: Enhanced Tree Search Reasoning for LLMs via Pointwise Mutual Information

Researchers introduce MITS (Mutual Information Tree Search), a new framework that improves reasoning capabilities in large language models using information-theoretic principles. The method uses pointwise mutual information for step-wise evaluation and achieves better performance while being more computationally efficient than existing tree search methods like Tree-of-Thought.
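The step-wise PMI criterion is easy to illustrate. A minimal sketch, not the paper's implementation: it assumes access to log-probabilities from a language model, and the candidate steps and their scores here are made up.

```python
def pmi(logp_given_context, logp_prior):
    """Pointwise mutual information of a candidate reasoning step:
    PMI = log p(step | question, partial reasoning) - log p(step)."""
    return logp_given_context - logp_prior

def select_step(candidates):
    """Expand the search tree with the step whose PMI is highest.
    `candidates` maps step text -> (logp_given_context, logp_prior)."""
    return max(candidates, key=lambda s: pmi(*candidates[s]))

candidates = {
    "so 3 * 4 = 12": (-1.0, -6.0),    # much likelier in context: PMI = 5.0
    "the sky is blue": (-2.0, -2.5),  # nearly context-free: PMI = 0.5
}
chosen = select_step(candidates)
```

A step that is probable only because of the question carries information about it; a generically probable step does not, so PMI filters it out.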

AI · Neutral · arXiv – CS AI · Mar 27/1014

Demystifying the Lifecycle of Failures in Platform-Orchestrated Agentic Workflows

Researchers present AgentFail, a dataset of 307 real-world failure cases from agentic workflow platforms, analyzing how multi-agent AI systems fail and can be repaired. The study reveals that failures in these low-code orchestrated AI workflows propagate differently than traditional software, making them harder to diagnose and fix.

AI · Bullish · arXiv – CS AI · Mar 26/1021

Reallocating Attention Across Layers to Reduce Multimodal Hallucination

Researchers propose a training-free solution to reduce hallucinations in multimodal AI models by rebalancing attention between perception and reasoning layers. The method achieves 4.2% improvement in reasoning accuracy with minimal computational overhead.
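The rebalancing can be sketched as a post-hoc rescaling of attention weights. A toy illustration, not the authors' method: it assumes a boolean mask marking image-token positions, and the boost factor `alpha` is an invented hyperparameter.

```python
import numpy as np

def reallocate_attention(attn_row, image_mask, alpha=1.5):
    """Training-free rebalancing: multiply the attention a token pays to
    image (perception) positions by `alpha`, then renormalize so the row
    is still a probability distribution."""
    boosted = attn_row * np.where(image_mask, alpha, 1.0)
    return boosted / boosted.sum()

attn = np.array([0.2, 0.2, 0.6])       # attention over 3 positions
mask = np.array([True, True, False])   # first two are image tokens
rebalanced = reallocate_attention(attn, mask)
```

Because it only rescales existing weights at inference time, no retraining is needed, which matches the "training-free" framing.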

AI · Bullish · arXiv – CS AI · Mar 27/1017

CoMind: Towards Community-Driven Agents for Machine Learning Engineering

Researchers introduce CoMind, a multi-agent AI system that leverages community knowledge to automate machine learning engineering tasks. The system achieved a 36% medal rate on 75 past Kaggle competitions and outperformed 92.6% of human competitors in eight live competitions, establishing new state-of-the-art performance.

AI · Bullish · arXiv – CS AI · Mar 27/1015

MACD: Multi-Agent Clinical Diagnosis with Self-Learned Knowledge for LLM

Researchers developed MACD, a Multi-Agent Clinical Diagnosis framework that enables large language models to self-learn clinical knowledge and improve medical diagnosis accuracy. The system achieved up to 22.3% improvement over clinical guidelines and 16% improvement over physician-only diagnosis when tested on 4,390 real-world patient cases.

AI · Bullish · arXiv – CS AI · Mar 27/1013

CUDA Agent: Large-Scale Agentic RL for High-Performance CUDA Kernel Generation

Researchers developed CUDA Agent, a reinforcement learning system that significantly outperforms existing methods for GPU kernel optimization, achieving 100% faster performance than torch.compile on benchmark tests. The system uses large-scale agentic RL with automated verification and profiling to improve CUDA kernel generation, addressing a critical bottleneck in deep learning performance.

AI · Neutral · arXiv – CS AI · Mar 26/1016

Do LLMs Benefit From Their Own Words?

Research reveals that large language models don't significantly benefit from conditioning on their own previous responses in multi-turn conversations. The study found that omitting assistant history can reduce context lengths by up to 10x while maintaining response quality, and in some cases even improves performance by avoiding context pollution where models over-condition on previous responses.
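Dropping assistant turns from the transcript is a one-function change. A minimal sketch assuming the common `{"role", "content"}` message format; the helper name and `keep_last` parameter are invented for the example.

```python
def trim_assistant_history(messages, keep_last=0):
    """Drop earlier assistant turns from a chat transcript, keeping the
    system prompt, every user turn, and only the final `keep_last`
    assistant messages."""
    assistant_idx = [i for i, m in enumerate(messages) if m["role"] == "assistant"]
    kept = set(assistant_idx[-keep_last:]) if keep_last else set()
    return [m for i, m in enumerate(messages)
            if m["role"] != "assistant" or i in kept]

chat = [
    {"role": "system", "content": "You are helpful."},
    {"role": "user", "content": "Hi"},
    {"role": "assistant", "content": "Hello!"},
    {"role": "user", "content": "What is 2 + 2?"},
    {"role": "assistant", "content": "4"},
    {"role": "user", "content": "Thanks"},
]
trimmed = trim_assistant_history(chat)
```

Since assistant replies are usually the longest turns, removing them is where the reported context-length savings would come from.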

AI · Bullish · arXiv – CS AI · Mar 26/1010

CowPilot: A Framework for Autonomous and Human-Agent Collaborative Web Navigation

Researchers introduce CowPilot, a framework that combines autonomous AI agents with human collaboration for web navigation tasks. The system achieved 95% success rate while requiring humans to perform only 15.2% of total steps, demonstrating effective human-AI cooperation for complex web tasks.

AI · Bullish · arXiv – CS AI · Mar 27/1016

Automating the Refinement of Reinforcement Learning Specifications

Researchers introduce AutoSpec, a framework that automatically refines reinforcement learning specifications to help AI agents learn complex tasks more effectively. The system improves coarse-grained logical specifications through exploration-guided strategies while maintaining specification soundness, demonstrating promising improvements in solving complex control tasks.

AI · Bullish · arXiv – CS AI · Mar 26/1013

Efficient Discovery of Approximate Causal Abstractions via Neural Mechanism Sparsification

Researchers have developed a new method to extract interpretable causal mechanisms from neural networks using structured pruning as a search technique. The approach reframes network pruning as finding approximate causal abstractions, yielding closed-form criteria for simplifying networks while maintaining their causal structure under interventions.

AI · Neutral · arXiv – CS AI · Mar 26/1011

Memory Caching: RNNs with Growing Memory

Researchers introduce Memory Caching (MC), a technique that enhances recurrent neural networks by allowing their memory capacity to grow with sequence length, bridging the gap between fixed-memory RNNs and growing-memory Transformers. The approach offers four variants and shows competitive performance with Transformers on language modeling and long-context tasks while maintaining better computational efficiency.
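The core idea of a memory that grows with the sequence can be shown in a toy cell. This is a sketch under assumptions, not one of the paper's four variants: a plain tanh RNN update plus a cache of past hidden states read out with dot-product attention.

```python
import numpy as np

def mc_step(h, cache, x, Wh, Wx):
    """One toy 'memory caching' step: a standard RNN update, plus a cache
    of past hidden states that grows with the sequence and is read with
    dot-product attention."""
    h = np.tanh(Wh @ h + Wx @ x)
    cache = np.vstack([cache, h])       # memory grows by one state per step
    scores = cache @ h
    w = np.exp(scores - scores.max())
    w /= w.sum()
    readout = w @ cache                 # attention-weighted view of the past
    return h, cache, readout

rng = np.random.default_rng(0)
d = 4
Wh, Wx = rng.normal(size=(d, d)) * 0.1, rng.normal(size=(d, d)) * 0.1
h, cache = np.zeros(d), np.zeros((0, d))
for t in range(5):
    h, cache, readout = mc_step(h, cache, rng.normal(size=d), Wh, Wx)
```

The fixed-size state `h` keeps RNN-style recurrence cheap, while the growing `cache` gives the Transformer-like ability to look back over the whole sequence.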

AI · Bullish · arXiv – CS AI · Mar 26/1012

Radiologist Copilot: An Agentic Framework Orchestrating Specialized Tools for Reliable Radiology Reporting

Researchers have developed Radiologist Copilot, an AI agentic framework that orchestrates specialized tools to complete the entire radiology reporting workflow beyond simple report generation. The system integrates image localization, interpretation, template selection, report composition, and quality control to support radiologists throughout the comprehensive reporting process.

AI · Bullish · arXiv – CS AI · Mar 26/1018

Taming Momentum: Rethinking Optimizer States Through Low-Rank Approximation

Researchers introduce LoRA-Pre, a memory-efficient optimizer that reduces memory overhead in training large language models by using low-rank approximation of momentum states. The method achieves superior performance on Llama models from 60M to 1B parameters while using only 1/8 the rank of baseline methods.
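Keeping momentum in factored form can be sketched with a truncated SVD. A toy illustration, not the LoRA-Pre algorithm itself: the update rule, rank, and function name are assumptions for the example.

```python
import numpy as np

def low_rank_momentum_step(U, V, grad, beta=0.9, rank=2):
    """Toy low-rank momentum: reconstruct m = U @ V, blend in the new
    gradient, then truncate back to `rank` with an SVD so only the two
    thin factors are ever stored."""
    m = beta * (U @ V) + (1 - beta) * grad
    u, s, vt = np.linalg.svd(m, full_matrices=False)
    return u[:, :rank] * s[:rank], vt[:rank]

d = 6
U, V = np.zeros((d, 2)), np.zeros((2, d))            # empty momentum
grad = np.outer(np.arange(1.0, d + 1), np.ones(d))   # a rank-1 gradient
U, V = low_rank_momentum_step(U, V, grad)
```

The memory saving comes from storing the `d x r` and `r x d` factors instead of the dense `d x d` momentum matrix, which is what makes a small rank attractive.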

AI · Neutral · arXiv – CS AI · Mar 27/1011

FaultXformer: A Transformer-Encoder Based Fault Classification and Location Identification model in PMU-Integrated Active Electrical Distribution System

Researchers developed FaultXformer, a Transformer-based AI model that achieves 98.76% accuracy in fault classification and 98.92% accuracy in fault location identification in electrical distribution systems using PMU data. The dual-stage architecture significantly outperforms traditional deep learning methods like CNN, RNN, and LSTM, particularly in systems with distributed energy resources integration.

AI · Bullish · arXiv – CS AI · Mar 26/1015

A Mixed Diet Makes DINO An Omnivorous Vision Encoder

Researchers have developed an 'Omnivorous Vision Encoder' that creates consistent feature representations across different visual modalities (RGB, depth, segmentation) of the same scene. The framework addresses the poor cross-modal alignment in existing vision encoders like DINOv2 by training with dual objectives to maximize feature alignment while preserving discriminative semantics.

AI · Bullish · arXiv – CS AI · Mar 26/1014

An Efficient Unsupervised Federated Learning Approach for Anomaly Detection in Heterogeneous IoT Networks

Researchers propose an efficient unsupervised federated learning framework for anomaly detection in heterogeneous IoT networks that preserves privacy while leveraging shared features from multiple datasets. The approach uses explainable AI techniques like SHAP for transparency and demonstrates superior performance compared to conventional federated learning methods on real-world IoT datasets.
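The privacy-preserving aggregation step is standard federated averaging, which can be sketched briefly. A generic FedAvg illustration, not the paper's specific framework; the function name and toy clients are invented.

```python
import numpy as np

def fedavg(client_params, client_sizes):
    """Size-weighted federated averaging of per-client parameter lists:
    only model weights leave a client, never its raw IoT traffic."""
    total = sum(client_sizes)
    return [sum((n / total) * p[i] for p, n in zip(client_params, client_sizes))
            for i in range(len(client_params[0]))]

# Two clients with one weight matrix each; client B holds 3x the data.
a = [np.ones((2, 2))]
b = [3.0 * np.ones((2, 2))]
merged = fedavg([a, b], client_sizes=[100, 300])
```

Each client would train a local anomaly detector (e.g. an autoencoder) on its own traffic, and only these averaged weights are shared with the server.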

AI · Bullish · arXiv – CS AI · Mar 26/1011

Multimodal Optimal Transport for Unsupervised Temporal Segmentation in Surgical Robotics

Researchers developed TASOT, an unsupervised AI method for surgical phase recognition that combines visual and textual information without requiring expensive large-scale pre-training. The approach showed significant improvements over existing zero-shot methods across multiple surgical datasets, demonstrating that effective surgical AI can be achieved with more efficient training methods.

AI · Bullish · arXiv – CS AI · Mar 26/1017

Controllable Reasoning Models Are Private Thinkers

Researchers developed a method to train AI reasoning models to follow privacy instructions in their internal reasoning traces, not just final answers. The approach uses separate LoRA adapters and achieves up to 51.9% improvement on privacy benchmarks, though with some trade-offs in task performance.

AI · Bullish · arXiv – CS AI · Mar 26/1012

Task-Centric Acceleration of Small-Language Models

Researchers propose TASC (Task-Adaptive Sequence Compression), a framework for accelerating small language models through two methods: TASC-ft for fine-tuning with expanded vocabularies and TASC-spec for training-free speculative decoding. The methods demonstrate improved inference efficiency while maintaining task performance across low output-variability generation tasks.
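The speculative-decoding half of the idea can be shown with a greedy toy. This is a generic sketch of speculative decoding, not TASC-spec itself: both "models" here are deterministic lookup functions invented for the example.

```python
def speculative_step(draft_next, target_next, tokens, k=4):
    """One greedy speculative-decoding round: a cheap draft model proposes
    k tokens; the target accepts the longest prefix matching its own
    greedy choices and contributes one token of its own either way."""
    ctx = list(tokens)
    proposal = []
    for _ in range(k):
        t = draft_next(ctx)
        proposal.append(t)
        ctx.append(t)
    ctx = list(tokens)
    for t in proposal:
        want = target_next(ctx)
        if want != t:
            ctx.append(want)              # target's correction ends the round
            break
        ctx.append(t)
    else:
        ctx.append(target_next(ctx))      # all accepted: free bonus token
    return ctx

TARGET, DRAFT = "speculative", "specsative?"
target_next = lambda ctx: TARGET[len(ctx)]
draft_next = lambda ctx: DRAFT[len(ctx)]
out = speculative_step(draft_next, target_next, list("spe"))
```

The speed-up comes from the target model verifying k draft tokens in one batched pass instead of generating them one at a time; low output-variability tasks raise the acceptance rate, which is why the summary singles them out.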

AI · Bullish · arXiv – CS AI · Mar 26/109

Preference Packing: Efficient Preference Optimization for Large Language Models

Researchers propose 'preference packing,' a new optimization technique for training large language models that reduces training time by at least 37% through more efficient handling of duplicate input prompts. The method optimizes attention operations and KV cache memory usage in preference-based training methods like Direct Preference Optimization.
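The deduplication at the heart of the idea is simple to sketch. A minimal illustration of grouping shared prompts, not the paper's attention/KV-cache machinery; the function name and example pairs are invented.

```python
from collections import defaultdict

def pack_by_prompt(preference_pairs):
    """Group (prompt, chosen, rejected) triples by their shared prompt so
    the prompt is encoded, and its KV cache built, once per group instead
    of once per pair."""
    groups = defaultdict(list)
    for prompt, chosen, rejected in preference_pairs:
        groups[prompt].append((chosen, rejected))
    return dict(groups)

pairs = [
    ("Summarize X", "good summary", "bad summary"),
    ("Summarize X", "ok summary", "worse summary"),
    ("Translate Y", "bonjour", "hola"),
]
packed = pack_by_prompt(pairs)
```

In DPO-style training each prompt appears in both the chosen and rejected sequences of every pair, so packing duplicates is where the reported 37% time saving would come from.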
