y0news
🧠 AI

11,674 AI articles curated from 50+ sources with AI-powered sentiment analysis, importance scoring, and key takeaways.

AI · Neutral · arXiv – CS AI · Mar 5 · 7/10

InEdit-Bench: Benchmarking Intermediate Logical Pathways for Intelligent Image Editing Models

Researchers introduced InEdit-Bench, the first evaluation benchmark specifically designed to test image editing models' ability to reason through intermediate logical pathways in multi-step visual transformations. Testing 14 representative models revealed significant shortcomings in handling complex scenarios requiring dynamic reasoning and procedural understanding.

AI · Bullish · arXiv – CS AI · Mar 5 · 6/10

Towards Self-Robust LLMs: Intrinsic Prompt Noise Resistance via CoIPO

Researchers propose CoIPO (Contrastive Learning-based Inverse Direct Preference Optimization), a new method to improve Large Language Model robustness against noisy or imperfect user prompts. The approach enhances LLMs' intrinsic ability to handle prompt variations without relying on external preprocessing tools, showing significant accuracy improvements on benchmark tests.

AI · Neutral · arXiv – CS AI · Mar 5 · 7/10

Molt Dynamics: Emergent Social Phenomena in Autonomous AI Agent Populations

Researchers analyzed 770,000 autonomous AI agents interacting in MoltBook, revealing emergent social behaviors including role specialization, information cascades, and limited cooperative task resolution. The study found that while agents naturally develop coordination patterns, collaborative task outcomes fall short of what individual agents achieve alone, establishing baseline metrics for decentralized AI systems.

AI · Bullish · arXiv – CS AI · Mar 5 · 6/10

From Exact Hits to Close Enough: Semantic Caching for LLM Embeddings

Researchers propose semantic caching solutions for large language models to improve response times and reduce costs by reusing semantically similar requests. The study proves that optimal offline semantic caching is NP-hard and introduces polynomial-time heuristics and online policies combining recency, frequency, and locality factors.
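The core idea can be illustrated with a toy sketch: reuse a cached response when a new request's embedding is close enough to a stored one. The bag-of-characters embedding, cosine threshold, and FIFO eviction below are hypothetical stand-ins, not the paper's actual policies.

```python
# Toy semantic cache: a hit occurs when a query is semantically close
# enough to a cached entry, not only on an exact string match.
import math

def embed(text):
    # Hypothetical stand-in: bag-of-characters embedding.
    # A real system would use an embedding model.
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord('a')] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a, b):
    return sum(x * y for x, y in zip(a, b))

class SemanticCache:
    def __init__(self, threshold=0.9, capacity=128):
        self.threshold = threshold
        self.capacity = capacity
        self.entries = []  # (embedding, response), oldest first

    def get(self, query):
        qv = embed(query)
        best = max(self.entries, key=lambda e: cosine(qv, e[0]), default=None)
        if best and cosine(qv, best[0]) >= self.threshold:
            return best[1]  # hit on a semantically similar request
        return None

    def put(self, query, response):
        if len(self.entries) >= self.capacity:
            # Simple FIFO eviction; the paper's online policies combine
            # recency, frequency, and locality signals instead.
            self.entries.pop(0)
        self.entries.append((embed(query), response))

cache = SemanticCache(threshold=0.9)
cache.put("what is the capital of France", "Paris")
print(cache.get("What is the capital of France?"))  # near-duplicate -> Paris
```

The NP-hardness result concerns choosing which entries to keep offline; the heuristic policies decide this online under a fixed capacity, as the eviction step hints.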

AI · Neutral · arXiv – CS AI · Mar 5 · 6/10

Belief-Sim: Towards Belief-Driven Simulation of Demographic Misinformation Susceptibility

Researchers introduce Belief-Sim, a framework that uses Large Language Models to simulate how different demographic groups are susceptible to misinformation based on their underlying beliefs. The system achieves up to 92% accuracy in predicting misinformation susceptibility by incorporating psychology-informed belief profiles.

AI · Bullish · arXiv – CS AI · Mar 5 · 7/10

HumanLM: Simulating Users with State Alignment Beats Response Imitation

Researchers introduce HumanLM, a novel AI training framework that creates user simulators by aligning psychological states rather than just imitating response patterns. The system achieved a 16.3% improvement in alignment scores across six datasets with 26k users and 216k responses, demonstrating superior ability to simulate real human behavior.

AI · Bearish · arXiv – CS AI · Mar 5 · 6/10

Language Model Goal Selection Differs from Humans' in an Open-Ended Task

Research comparing four state-of-the-art language models (GPT-5, Gemini 2.5 Pro, Claude Sonnet 4.5, and Centaur) to humans in goal selection tasks reveals substantial divergence in behavior. While humans explore diverse approaches and learn gradually, the AI models tend to exploit single solutions or show poor performance, raising concerns about using current LLMs as proxies for human decision-making in critical applications.

🧠 Claude · 🧠 Gemini
AI · Bullish · arXiv – CS AI · Mar 5 · 7/10

Draft-Conditioned Constrained Decoding for Structured Generation in LLMs

Researchers introduce Draft-Conditioned Constrained Decoding (DCCD), a training-free method that improves structured output generation in large language models by up to 24 percentage points. The technique uses a two-step process that first generates an unconstrained draft, then applies constraints to ensure valid outputs like JSON and API calls.
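The two-step shape of the method can be sketched in miniature: an unconstrained draft is produced first, then a second pass enforces the target structure while reusing the draft's content. The stub model, regexes, and schema below are illustrative assumptions, not the paper's implementation.

```python
# Sketch of a draft-then-constrain pipeline: free-form draft first,
# then a constrained pass that guarantees valid structured output.
import json
import re

def draft_model(prompt):
    # Hypothetical stand-in for an unconstrained LLM draft.
    return "Sure! The user is Alice and her age is 30."

def constrain_to_schema(draft):
    # Second pass: keep only content satisfying the target JSON schema.
    name = re.search(r"user is (\w+)", draft)
    age = re.search(r"age is (\d+)", draft)
    return json.dumps({
        "name": name.group(1) if name else None,
        "age": int(age.group(1)) if age else None,
    })

output = constrain_to_schema(draft_model("Extract the user as JSON."))
print(output)  # valid JSON even though the draft was free-form
```

In DCCD proper the second step is constrained decoding conditioned on the draft tokens, not regex extraction; the sketch only shows why conditioning on a fluent draft helps the constrained pass stay both valid and faithful.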

AI · Bullish · arXiv – CS AI · Mar 5 · 6/10

TTSR: Test-Time Self-Reflection for Continual Reasoning Improvement

Researchers introduce TTSR, a new framework that enables AI models to improve their reasoning abilities during test time by having a single model alternate between student and teacher roles. The system allows models to learn from their mistakes by analyzing failed reasoning attempts and generating targeted practice questions for continuous improvement.
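The alternating student/teacher loop described above can be caricatured as follows; every function here is a hypothetical stub standing in for one model playing both roles, not TTSR's actual procedure.

```python
# Sketch of a test-time self-reflection loop: a failed attempt triggers
# a "teacher" critique that feeds a hint back into the next attempt.
def student_attempt(question, hints):
    # Stub: succeeds once at least one reflection hint is available.
    return "correct" if hints else "wrong"

def teacher_reflect(question, failed_answer):
    # Stub: analyzes the failure and emits a targeted practice hint.
    return f"practice: reconsider '{question}'"

def solve_with_reflection(question, max_rounds=3):
    hints = []
    for _ in range(max_rounds):
        ans = student_attempt(question, hints)
        if ans == "correct":
            return ans, len(hints)
        hints.append(teacher_reflect(question, ans))
    return ans, len(hints)

ans, rounds = solve_with_reflection("hard problem")
print(ans, rounds)  # correct 1
```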

AI · Bullish · arXiv – CS AI · Mar 5 · 6/10

PlugMem: A Task-Agnostic Plugin Memory Module for LLM Agents

Researchers propose PlugMem, a task-agnostic plugin memory module for LLM agents that structures episodic memories into knowledge-centric graphs for efficient retrieval. The system consistently outperforms existing memory designs across multiple benchmarks while maintaining transferability between different tasks.

AI · Neutral · arXiv – CS AI · Mar 5 · 7/10

Goal-Driven Risk Assessment for LLM-Powered Systems: A Healthcare Case Study

Researchers propose a new goal-driven risk assessment framework for LLM-powered systems, specifically targeting healthcare applications. The approach uses attack trees to identify detailed threat vectors combining adversarial AI attacks with conventional cyber threats, addressing security gaps in LLM system design.

AI · Neutral · arXiv – CS AI · Mar 5 · 6/10

SafeCRS: Personalized Safety Alignment for LLM-Based Conversational Recommender Systems

Researchers introduce SafeCRS, a safety-aware training framework for LLM-based conversational recommender systems that addresses personalized safety vulnerabilities. The system reduces safety violation rates by up to 96.5% while maintaining recommendation quality by respecting individual user constraints like trauma triggers and phobias.

AI · Bullish · arXiv – CS AI · Mar 5 · 6/10

TATRA: Training-Free Instance-Adaptive Prompting Through Rephrasing and Aggregation

Researchers introduce TATRA, a training-free prompting method for Large Language Models that creates instance-specific few-shot prompts without requiring labeled training data. The method achieves state-of-the-art performance on mathematical reasoning benchmarks like GSM8K and DeepMath, matching or outperforming existing prompt optimization methods that rely on expensive training processes.
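The rephrase-and-aggregate pattern in the name can be sketched with stubs: paraphrase the instance several ways, answer each variant, and aggregate by majority vote. The `rephrase` and `answer` functions below are hypothetical LLM stand-ins, not TATRA's components.

```python
# Sketch of instance-adaptive prompting via rephrasing and aggregation:
# multiple paraphrases of one question, majority vote over the answers.
from collections import Counter

def rephrase(question, n=3):
    # Stub: a real system would ask an LLM for paraphrases.
    variants = [question,
                f"Put differently: {question}",
                f"In other words: {question}"]
    return variants[:n]

def answer(prompt):
    # Stub model: toy deterministic solver.
    return "4" if "2 + 2" in prompt else "unknown"

def aggregate_answer(question):
    votes = Counter(answer(p) for p in rephrase(question))
    return votes.most_common(1)[0][0]  # majority-vote aggregation

print(aggregate_answer("What is 2 + 2?"))  # -> 4
```

The aggregation step is what makes the method training-free: it needs no labeled examples, only agreement across variants of the same instance.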

AI · Bullish · arXiv – CS AI · Mar 5 · 6/10

From Conflict to Consensus: Boosting Medical Reasoning via Multi-Round Agentic RAG

Researchers developed MA-RAG, a Multi-Round Agentic RAG framework that improves medical AI reasoning by iteratively refining responses through conflict detection and external evidence retrieval. The system achieved a substantial +6.8 point accuracy improvement over baseline models across 7 medical Q&A benchmarks by addressing hallucinations and outdated knowledge in healthcare AI applications.

AI · Bullish · arXiv – CS AI · Mar 5 · 7/10

Perfect score on IPhO 2025 theory by Gemini agent

Google's Gemini 3.1 Pro Preview achieved a perfect score on the IPhO 2025 theory problems across five runs, surpassing earlier AI attempts, which fell behind the top human contestants. However, the researchers acknowledge potential data contamination, since the model was released after the competition.

🧠 Gemini
AI · Bullish · arXiv – CS AI · Mar 5 · 6/10

AriadneMem: Threading the Maze of Lifelong Memory for LLM Agents

Researchers have developed AriadneMem, a new memory system for long-horizon LLM agents that addresses challenges in maintaining accurate memory under fixed context budgets. The system uses a two-phase pipeline with entropy-aware gating and conflict-aware coarsening to improve multi-hop reasoning while reducing runtime by 77.8% and using only 497 context tokens.

🧠 GPT-4
AI · Bullish · arXiv – CS AI · Mar 5 · 6/10

A Dual-Helix Governance Approach Towards Reliable Agentic AI for WebGIS Development

Researchers propose a dual-helix governance framework to address AI agent reliability issues in WebGIS development, implementing a 3-track architecture that achieved 51% reduction in code complexity. The framework uses knowledge graphs and self-learning cycles to overcome LLM limitations like context constraints and instruction failures.

AI · Bullish · arXiv – CS AI · Mar 5 · 6/10

Error as Signal: Stiffness-Aware Diffusion Sampling via Embedded Runge-Kutta Guidance

Researchers propose Embedded Runge-Kutta Guidance (ERK-Guid), a new method that improves diffusion model sampling by using solver-induced errors as guidance signals. The technique addresses stiffness issues in ODE trajectories and demonstrates superior performance over existing methods on ImageNet benchmarks.

AI · Bullish · arXiv – CS AI · Mar 5 · 7/10

mlx-snn: Spiking Neural Networks on Apple Silicon via MLX

Researchers have released mlx-snn, the first spiking neural network library built natively for Apple's MLX framework, targeting Apple Silicon hardware. The library demonstrates 2-2.5x faster training and 3-10x lower GPU memory usage compared to existing PyTorch-based solutions, achieving 97.28% accuracy on MNIST classification tasks.
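This is not mlx-snn's API, but the core computation any spiking-network library performs per timestep, a leaky integrate-and-fire (LIF) update, can be shown in plain Python: leak, integrate input, spike at threshold, reset.

```python
# Generic LIF neuron update (illustrative; parameters are arbitrary).
def lif_step(v, input_current, decay=0.9, threshold=1.0):
    """One LIF timestep: leak, integrate, spike, reset."""
    v = decay * v + input_current   # leaky integration of input current
    spike = v >= threshold          # binary spike when threshold reached
    if spike:
        v = 0.0                     # hard reset of membrane potential
    return v, spike

v, spikes = 0.0, []
for t in range(5):
    v, s = lif_step(v, input_current=0.4)
    spikes.append(int(s))
print(spikes)  # -> [0, 0, 1, 0, 0]
```

Libraries like mlx-snn vectorize this update over layers of neurons and make the spike nonlinearity differentiable via surrogate gradients, which is where framework-native kernels buy the reported speed and memory gains.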

AI · Neutral · arXiv – CS AI · Mar 5 · 7/10

Old Habits Die Hard: How Conversational History Geometrically Traps LLMs

Researchers introduce History-Echoes, a framework revealing how large language models become trapped by their conversational history, with past interactions creating geometric constraints in latent space that bias future responses. The study demonstrates that behavioral persistence in LLMs manifests as mathematical traps where previous hallucinations and responses influence subsequent model behavior across multiple model families and datasets.
