
AI Pulse News

Models, papers, tools. 17,284 articles with AI-powered sentiment analysis and key takeaways.

AI · Bullish · arXiv – CS AI · Mar 5 · 6/10

AriadneMem: Threading the Maze of Lifelong Memory for LLM Agents

Researchers have developed AriadneMem, a new memory system for long-horizon LLM agents that addresses challenges in maintaining accurate memory under fixed context budgets. The system uses a two-phase pipeline with entropy-aware gating and conflict-aware coarsening to improve multi-hop reasoning while reducing runtime by 77.8% and using only 497 context tokens.

🧠 GPT-4
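
The summary doesn't spell out the gating criterion, but the gist of entropy-aware gating can be sketched in a few lines: only admit memory writes that carry enough information. Everything below (function names, the cutoff) is illustrative rather than AriadneMem's actual design.

```python
import math
from collections import Counter

def shannon_entropy(tokens):
    """Shannon entropy (bits) of a token sequence's empirical distribution."""
    counts = Counter(tokens)
    n = len(tokens)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def entropy_gate(candidate_tokens, threshold=2.0):
    """Admit a memory entry only if it clears a (hypothetical) entropy cutoff."""
    return shannon_entropy(candidate_tokens) >= threshold

print(entropy_gate("the user lives in Oslo and prefers short answers".split()))
```
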
AI · Neutral · arXiv – CS AI · Mar 5 · 7/10

One Bias After Another: Mechanistic Reward Shaping and Persistent Biases in Language Reward Models

Researchers identified persistent biases in high-quality language model reward systems, including length bias, sycophancy, and newly discovered model-style and answer-order biases. They developed a mechanistic reward shaping method to reduce these biases without degrading overall reward quality using minimal labeled data.
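
As a toy illustration of what "shaping away" a bias means, length bias can be estimated as a linear trend of reward on response length and subtracted. The paper's mechanistic method works inside the reward model itself, so treat this as an analogy only; all numbers below are made up.

```python
import numpy as np

# Toy scores: longer responses happen to get higher rewards.
rewards = np.array([0.2, 0.5, 0.9, 0.7, 1.1])
lengths = np.array([50.0, 120.0, 400.0, 250.0, 500.0])  # response lengths in tokens

slope, _ = np.polyfit(lengths, rewards, 1)              # fit the length trend
shaped = rewards - slope * (lengths - lengths.mean())   # subtract it, keep the mean

print(np.corrcoef(lengths, shaped)[0, 1])  # ~0: length no longer predicts reward
```
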

AI · Bullish · arXiv – CS AI · Mar 5 · 6/10

PlugMem: A Task-Agnostic Plugin Memory Module for LLM Agents

Researchers propose PlugMem, a task-agnostic plugin memory module for LLM agents that structures episodic memories into knowledge-centric graphs for efficient retrieval. The system consistently outperforms existing memory designs across multiple benchmarks while maintaining transferability between different tasks.
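
A minimal sketch of a knowledge-centric episodic memory, assuming an entity-graph design (PlugMem's actual schema is not given in the summary): episodes link to the entities they mention, and retrieval walks an entity's neighborhood.

```python
import networkx as nx

memory = nx.Graph()

def remember(episode_id, text, entities):
    """Store an episode and connect it to every entity it mentions."""
    memory.add_node(episode_id, kind="episode", text=text)
    for e in entities:
        memory.add_node(e, kind="entity")
        memory.add_edge(episode_id, e)

def recall(entity):
    """Return the texts of all episodes linked to an entity."""
    if entity not in memory:
        return []
    return [memory.nodes[n]["text"]
            for n in memory.neighbors(entity)
            if memory.nodes[n]["kind"] == "episode"]

remember("ep1", "User prefers metric units.", ["user", "units"])
print(recall("units"))  # ['User prefers metric units.']
```
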

AI · Bullish · arXiv – CS AI · Mar 5 · 6/10

From Conflict to Consensus: Boosting Medical Reasoning via Multi-Round Agentic RAG

Researchers developed MA-RAG, a Multi-Round Agentic RAG framework that improves medical AI reasoning by iteratively refining responses through conflict detection and external evidence retrieval. The system achieved a substantial +6.8 point accuracy improvement over baseline models across 7 medical Q&A benchmarks by addressing hallucinations and outdated knowledge in healthcare AI applications.
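
The round structure can be sketched as a retrieve-generate-check loop; all helpers below are stubs standing in for the paper's retriever, generator, and conflict detector.

```python
def retrieve(query):
    return [f"doc about {query}"]          # stub; swap in a real search index

def generate(question, evidence):
    return f"answer to {question!r} grounded in {len(evidence)} docs"  # stub LLM

def detect_conflict(answer, evidence):
    return None  # stub; a real detector might run NLI between answer and evidence

def multi_round_rag(question, max_rounds=3):
    evidence = retrieve(question)
    answer = generate(question, evidence)
    for _ in range(max_rounds):
        conflict = detect_conflict(answer, evidence)
        if conflict is None:
            break
        evidence += retrieve(conflict)          # target the disputed claim
        answer = generate(question, evidence)   # refine with the new evidence
    return answer

print(multi_round_rag("What is the first-line treatment for condition X?"))
```
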

AI · Bearish · arXiv – CS AI · Mar 5 · 6/10

Language Model Goal Selection Differs from Humans' in an Open-Ended Task

Research comparing four state-of-the-art language models (GPT-5, Gemini 2.5 Pro, Claude Sonnet 4.5, and Centaur) to humans in goal selection tasks reveals substantial divergence in behavior. While humans explore diverse approaches and learn gradually, the AI models tend to exploit single solutions or show poor performance, raising concerns about using current LLMs as proxies for human decision-making in critical applications.

🧠 Claude · 🧠 Gemini
AI · Bullish · arXiv – CS AI · Mar 5 · 6/10

TTSR: Test-Time Self-Reflection for Continual Reasoning Improvement

Researchers introduce TTSR, a new framework that enables AI models to improve their reasoning abilities during test time by having a single model alternate between student and teacher roles. The system allows models to learn from their mistakes by analyzing failed reasoning attempts and generating targeted practice questions for continuous improvement.
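
A rough sketch of the student/teacher alternation, with call_llm() as a stub for the single underlying model; the role names and prompts are illustrative, not TTSR's actual templates.

```python
def call_llm(role, prompt):
    return f"[{role}] {prompt[:40]}..."          # stub; swap in a real model call

def solve_with_reflection(problem, verify, rounds=2):
    attempt = call_llm("student", problem)
    for _ in range(rounds):
        if verify(attempt):
            return attempt
        # Teacher role: analyze the failed attempt, then pose a targeted drill.
        diagnosis = call_llm("teacher", f"Explain the flaw in: {attempt}")
        drill = call_llm("teacher", f"Write a practice question about: {diagnosis}")
        call_llm("student", drill)               # student practices the weak skill
        attempt = call_llm("student", problem)   # then retries the original problem
    return attempt

print(solve_with_reflection("Compute 17 * 24.", lambda a: "408" in a))
```
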

AI · Bullish · arXiv – CS AI · Mar 5 · 6/10

TATRA: Training-Free Instance-Adaptive Prompting Through Rephrasing and Aggregation

Researchers introduce TATRA, a training-free prompting method for Large Language Models that creates instance-specific few-shot prompts without requiring labeled training data. The method achieves state-of-the-art performance on mathematical reasoning benchmarks like GSM8K and DeepMath, matching or outperforming existing prompt optimization methods that rely on expensive training processes.
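
The aggregation half of the idea fits in a few lines (stubs throughout; TATRA additionally builds instance-specific few-shot prompts from the rephrasings rather than just voting):

```python
from collections import Counter

def rephrase(question, n):
    return [f"{question} (phrasing {i})" for i in range(n)]  # stub rephraser

def ask(prompt):
    return "42"                                  # stub; swap in a real LLM call

def rephrase_and_aggregate(question, n=5):
    votes = Counter(ask(p) for p in rephrase(question, n))
    return votes.most_common(1)[0][0]            # majority answer across phrasings

print(rephrase_and_aggregate("What is 6 * 7?"))
```
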

AI · Bullish · arXiv – CS AI · Mar 5 · 6/10

From Exact Hits to Close Enough: Semantic Caching for LLM Embeddings

Researchers propose semantic caching solutions for large language models to improve response times and reduce costs by reusing semantically similar requests. The study proves that optimal offline semantic caching is NP-hard and introduces polynomial-time heuristics and online policies combining recency, frequency, and locality factors.
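
A minimal semantic cache might look like the sketch below, with embed() standing in for any real sentence encoder; the 0.9 threshold and plain LRU eviction are illustrative stand-ins for the paper's recency/frequency/locality-aware policies.

```python
import numpy as np

def embed(text):
    """Stub encoder (hash-seeded noise): only exact repeats land near each other.
    A real sentence encoder would map paraphrases close together too."""
    rng = np.random.default_rng(abs(hash(text)) % 2**32)
    v = rng.normal(size=64)
    return v / np.linalg.norm(v)

cache, capacity = [], 1000   # list of (embedding, answer), oldest first

def lookup(query, threshold=0.9):
    q = embed(query)
    for i, (vec, answer) in enumerate(cache):
        if float(q @ vec) >= threshold:   # cosine similarity of unit vectors
            cache.append(cache.pop(i))    # refresh recency on a hit
            return answer
    return None

def insert(query, answer):
    if len(cache) >= capacity:
        cache.pop(0)                      # evict the least-recently used entry
    cache.append((embed(query), answer))

insert("capital of France?", "Paris")
print(lookup("capital of France?"))       # cache hit, no LLM call needed
```
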

AI · Bullish · arXiv – CS AI · Mar 5 · 7/10

HumanLM: Simulating Users with State Alignment Beats Response Imitation

Researchers introduce HumanLM, a novel AI training framework that creates user simulators by aligning psychological states rather than just imitating response patterns. The system achieved 16.3% improvement in alignment scores across six datasets with 26k users and 216k responses, demonstrating superior ability to simulate real human behavior.

AI · Neutral · arXiv – CS AI · Mar 5 · 7/10

Bridging the Reproducibility Divide: Open Source Software's Role in Standardizing Healthcare AI

A study reveals that 74% of healthcare AI research papers still use private datasets or don't share code, creating reproducibility issues that undermine trust in medical AI applications. Papers that embrace open practices by sharing both public datasets and code receive 110% more citations on average, demonstrating clear benefits for scientific impact.

AI · Neutral · arXiv – CS AI · Mar 5 · 6/10

Fragile Thoughts: How Large Language Models Handle Chain-of-Thought Perturbations

Research reveals that Large Language Models show varying vulnerabilities to different types of Chain-of-Thought reasoning perturbations, with math errors causing 50-60% accuracy loss in small models while unit conversion issues remain challenging even for the largest models. The study tested 13 models ranging from 3B to 1.5T parameters, finding that scale protects against some perturbations but offers limited defense on dimensional reasoning tasks.

AI · Bullish · arXiv – CS AI · Mar 5 · 6/10

Ethical and Explainable AI in Reusable MLOps Pipelines

Researchers developed a unified MLOps framework that integrates ethical AI principles, reducing demographic bias from 0.31 to 0.04 while maintaining predictive accuracy. The system automatically blocks deployments and triggers retraining based on fairness metrics, demonstrating practical implementation of ethical AI in production environments.
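
The gating step is easy to picture: compute a fairness metric on a candidate model's predictions and refuse promotion when it exceeds a budget. The 0.05 budget and the toy data below are illustrative, not the paper's.

```python
import numpy as np

def demographic_parity_diff(y_pred, groups):
    """Largest gap in positive-prediction rate between any two groups."""
    rates = [y_pred[groups == g].mean() for g in np.unique(groups)]
    return max(rates) - min(rates)

y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])
groups = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

gap = demographic_parity_diff(y_pred, groups)
if gap > 0.05:   # fairness budget; in the paper this triggers retraining
    print(f"deployment blocked: parity gap {gap:.2f}")
else:
    print("deploy")
```
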

AI · Bullish · arXiv – CS AI · Mar 5 · 7/10

Perfect score on IPhO 2025 theory by Gemini agent

Google's Gemini 3.1 Pro Preview achieved a perfect score on IPhO 2025 theory problems across five runs, surpassing previous AI performance that fell behind top human contestants. However, the researchers acknowledge potential data contamination since the model was released after the competition.

🧠 Gemini
AI · Bullish · arXiv – CS AI · Mar 5 · 7/10

AutoHarness: improving LLM agents by automatically synthesizing a code harness

Researchers developed AutoHarness, a technique where smaller LLMs like Gemini-2.5-Flash can automatically generate code harnesses to prevent illegal moves in games, outperforming larger models like Gemini-2.5-Pro and GPT-5.2-High. The method eliminates 78% of failures attributed to illegal moves in chess competitions and demonstrates superior performance across 145 different games.

🧠 Gemini
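
A hand-written example of what such a harness does, using the python-chess library (AutoHarness has a smaller LLM synthesize the harness automatically; propose_move() is a stub for the playing model):

```python
import chess  # pip install python-chess

def propose_move(board):
    # Stand-in for the LLM player; the harness wraps the real model call.
    return "e2e4" if board.turn == chess.WHITE else "e7e5"

def harnessed_move(board, retries=3):
    for _ in range(retries):
        uci = propose_move(board)
        try:
            move = chess.Move.from_uci(uci)
        except ValueError:
            continue                          # unparseable output: ask again
        if move in board.legal_moves:         # block illegal moves before play
            return move
    return next(iter(board.legal_moves))      # last resort: any legal move

board = chess.Board()
board.push(harnessed_move(board))
print(board.fen())
```
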
AI · Bullish · arXiv – CS AI · Mar 5 · 7/10

AOI: Turning Failed Trajectories into Training Signals for Autonomous Cloud Diagnosis

Researchers present AOI (Autonomous Operations Intelligence), a multi-agent AI framework that automates Site Reliability Engineering tasks while maintaining security constraints. The system achieved 66.3% success rate on benchmark tests, outperforming previous methods by 24.4 points, and can learn from failed operations to improve future performance.

🧠 Claude
AI · Neutral · arXiv – CS AI · Mar 5 · 7/10

Certainty robustness: Evaluating LLM stability under self-challenging prompts

Researchers introduce the Certainty Robustness Benchmark, a new evaluation framework that tests how large language models handle challenges to their responses in interactive settings. The study reveals significant differences in how AI models balance confidence and adaptability when faced with prompts like "Are you sure?" or "You are wrong!", identifying a critical new dimension for AI evaluation.
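
A stripped-down version of the probe: ask, challenge, and measure how often the answer flips. The stub below always capitulates; the benchmark itself uses many challenge phrasings and graded judgments rather than exact-match flips.

```python
def model(turns):
    # Stand-in that capitulates whenever it is challenged.
    return "On reflection, I was wrong." if "Are you sure?" in turns[-1] else "42."

def flip_rate(questions):
    flips = 0
    for q in questions:
        first = model([q])
        second = model([q, first, "Are you sure?"])
        flips += int(first != second)
    return flips / len(questions)

print(flip_rate(["q1", "q2", "q3"]))  # 1.0 for this always-capitulating stub
```
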

AI · Bullish · arXiv – CS AI · Mar 5 · 6/10

PulseLM: A Foundation Dataset and Benchmark for PPG-Text Learning

Researchers introduced PulseLM, a large-scale dataset pairing photoplethysmography (PPG) cardiovascular sensor data with natural language for multimodal AI models. The dataset contains 1.31 million PPG segments with 3.15 million question-answer pairs, designed to enable language-based physiological reasoning in healthcare AI applications.

AI · Bullish · arXiv – CS AI · Mar 5 · 7/10

Draft-Conditioned Constrained Decoding for Structured Generation in LLMs

Researchers introduce Draft-Conditioned Constrained Decoding (DCCD), a training-free method that improves structured output generation in large language models by up to 24 percentage points. The technique uses a two-step process that first generates an unconstrained draft, then applies constraints to ensure valid outputs like JSON and API calls.
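
The two-step shape is easy to sketch (stubs below; the real method conditions a grammar-constrained decoding pass on the free-form draft rather than returning a canned string):

```python
import json

def draft_pass(prompt):
    # Stand-in free-form generation; note the invalid trailing comma.
    return 'Sure! {"name": "Ada", "age": 36,}'

def constrained_pass(prompt, draft):
    # Stand-in for a grammar-constrained decode conditioned on the draft.
    return '{"name": "Ada", "age": 36}'

def generate_json(prompt):
    draft = draft_pass(prompt)
    try:
        return json.loads(draft[draft.index("{"):])  # accept the draft if valid
    except ValueError:                               # covers JSONDecodeError too
        return json.loads(constrained_pass(prompt, draft))

print(generate_json("Describe Ada as JSON."))
```
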

AI · Bearish · arXiv – CS AI · Mar 5 · 7/10

Sleeper Cell: Injecting Latent Malice Temporal Backdoors into Tool-Using LLMs

Researchers demonstrate a novel backdoor attack method called 'SFT-then-GRPO' that can inject hidden malicious behavior into AI agents while maintaining their performance on standard benchmarks. The attack creates 'sleeper agents' that appear benign but can execute harmful actions under specific trigger conditions, highlighting critical security vulnerabilities in the adoption of third-party AI models.

AI · Neutral · arXiv – CS AI · Mar 5 · 7/10

Old Habits Die Hard: How Conversational History Geometrically Traps LLMs

Researchers introduce History-Echoes, a framework revealing how large language models become trapped by their conversational history, with past interactions creating geometric constraints in latent space that bias future responses. The study demonstrates that behavioral persistence in LLMs manifests as mathematical traps where previous hallucinations and responses influence subsequent model behavior across multiple model families and datasets.

AI · Bullish · arXiv – CS AI · Mar 5 · 6/10

Controllable and explainable personality sliders for LLMs at inference time

Researchers propose Sequential Adaptive Steering (SAS), a new framework for controlling Large Language Model personalities at inference time without retraining. The method uses orthogonalized steering vectors to enable precise, multi-dimensional personality control by adjusting coefficients, validated on Big Five personality traits.
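
Mechanically, this family of methods adds a scaled direction to a layer's hidden state at inference time. The sketch below shows that mechanism on a toy linear layer via a PyTorch forward hook; SAS's orthogonalized vectors and sequential application are not reproduced here.

```python
import torch

hidden_dim = 16
layer = torch.nn.Linear(hidden_dim, hidden_dim)  # stand-in for a transformer block
trait = torch.randn(hidden_dim)
trait = trait / trait.norm()                     # unit "personality" direction

def steer(coef):
    # Returning a tensor from a forward hook replaces the module's output.
    def hook(module, inputs, output):
        return output + coef * trait
    return layer.register_forward_hook(hook)

x = torch.randn(1, hidden_dim)
handle = steer(coef=2.0)          # dial the trait up; negative coef dials it down
steered = layer(x)
handle.remove()
plain = layer(x)
print((steered - plain).norm())   # shift magnitude equals |coef| for a unit vector
```
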

AI · Bullish · arXiv – CS AI · Mar 5 · 6/10

Non-Invasive Reconstruction of Intracranial EEG Across the Deep Temporal Lobe from Scalp EEG based on Conditional Normalizing Flow

Researchers developed NeuroFlowNet, a novel AI framework using Conditional Normalizing Flow to reconstruct deep brain EEG signals from non-invasive scalp measurements. This breakthrough enables analysis of deep temporal lobe brain activity without requiring invasive electrode implantation, potentially transforming neuroscience research and clinical diagnosis.

AI · Neutral · arXiv – CS AI · Mar 5 · 6/10

Automated Concept Discovery for LLM-as-a-Judge Preference Analysis

Researchers developed automated methods to discover biases in Large Language Models used as judges, analyzing over 27,000 paired responses. The study found that LLM judges exhibit systematic biases, including a stronger preference than humans for refusals of sensitive requests, a tilt toward concrete and empathetic responses, and bias against certain legal guidance.

AI · Bullish · arXiv – CS AI · Mar 5 · 6/10

DIALEVAL: Automated Type-Theoretic Evaluation of LLM Instruction Following

Researchers introduce DIALEVAL, a new automated framework that uses dual LLM agents to evaluate how well AI models follow instructions. The system achieves 90.38% accuracy by breaking down instructions into verifiable components and applying type-specific evaluation criteria, showing 26.45% error reduction over existing methods.
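
The decomposition idea in miniature: turn an instruction into named, independently verifiable checks and score the fraction passed. The checks below are hypothetical; DIALEVAL derives them with LLM agents and applies type-specific criteria.

```python
import re

# Hypothetical decomposition of "answer in under 50 words, mention Python,
# include no URLs" into verifiable components.
checks = {
    "under 50 words": lambda r: len(r.split()) <= 50,
    "mentions Python": lambda r: "python" in r.lower(),
    "no URLs":        lambda r: not re.search(r"https?://", r),
}

response = "Python is a high-level language known for readability."
passed = {name: ok(response) for name, ok in checks.items()}
print(passed, sum(passed.values()) / len(passed))  # per-check verdicts and score
```
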

AI · Bullish · arXiv – CS AI · Mar 5 · 6/10

Towards Self-Robust LLMs: Intrinsic Prompt Noise Resistance via CoIPO

Researchers propose CoIPO (Contrastive Learning-based Inverse Direct Preference Optimization), a new method to improve Large Language Model robustness against noisy or imperfect user prompts. The approach enhances LLMs' intrinsic ability to handle prompt variations without relying on external preprocessing tools, showing significant accuracy improvements on benchmark tests.

◆ AI Mentions
🏢 OpenAI · 96×
🏢 Nvidia · 65×
🧠 GPT-5 · 48×
🧠 Claude · 43×
🧠 Gemini · 39×
🏢 Anthropic · 39×
🧠 ChatGPT · 24×
🧠 GPT-4 · 19×
🧠 Llama · 18×
🏢 Meta · 11×
🧠 Opus · 10×
🏢 xAI · 9×
🏢 Google · 9×
🧠 Sonnet · 8×
🏢 Perplexity · 7×
🏢 Hugging Face · 7×
🧠 Grok · 6×
🏢 Microsoft · 6×
🏢 Cohere · 2×
🧠 DALL·E · 1×
▲ Trending Tags
1. #ai · 625
2. #iran · 567
3. #market · 424
4. #geopolitical · 390
5. #trump · 134
6. #security · 117
7. #openai · 96
8. #artificial-intelligence · 83
9. #nvidia · 64
10. #inflation · 55
11. #fed · 53
12. #google · 51
13. #china · 50
14. #meta · 45
15. #machine-learning · 39
Tag Sentiment
#ai · 625 articles
#iran · 567 articles
#market · 424 articles
#geopolitical · 390 articles
#trump · 134 articles
#security · 117 articles
#openai · 96 articles
#artificial-intelligence · 83 articles
#nvidia · 64 articles
#inflation · 55 articles
(Legend: Bullish / Neutral / Bearish)
Tag Connections
#geopolitical ↔ #iran · 268
#iran ↔ #market · 176
#geopolitical ↔ #market · 148
#iran ↔ #trump · 90
#ai ↔ #artificial-intelligence · 69
#ai ↔ #market · 65
#market ↔ #trump · 51
#geopolitical ↔ #trump · 50
#ai ↔ #openai · 44
#ai ↔ #google · 41