y0news

AI Pulse News

Models, papers, tools. 19,008 articles with AI-powered sentiment analysis and key takeaways.

AI · Bullish · arXiv – CS AI · Apr 7 · 6/10

Profile-Then-Reason: Bounded Semantic Complexity for Tool-Augmented Language Agents

Researchers introduce Profile-Then-Reason (PTR), a framework for AI language agents that use external tools, which reduces computational overhead by planning workflows up front rather than recomputing after each step. The approach caps language model calls at two to three and outperforms reactive execution methods in 16 of 24 test configurations.

AI · Bullish · arXiv – CS AI · Apr 7 · 6/10

Schema-Aware Planning and Hybrid Knowledge Toolset for Reliable Knowledge Graph Triple Verification

Researchers have developed SHARP, a new AI agent that significantly improves knowledge graph verification by combining internal structural data with external evidence. The system achieved 4.2% and 12.9% accuracy improvements over existing methods on major datasets, offering better interpretability for complex fact verification tasks.

AI · Bearish · arXiv – CS AI · Apr 7 · 6/10

Don't Blink: Evidence Collapse during Multimodal Reasoning

Research reveals that Vision Language Models (VLMs) progressively lose visual grounding during reasoning tasks, producing low-entropy predictions that appear confident but lack supporting visual evidence. The study found that attention to visual evidence drops by over 50% during reasoning across multiple benchmarks, motivating task-aware monitoring for safe AI deployment.

AI · Neutral · arXiv – CS AI · Apr 7 · 6/10

TimeSeek: Temporal Reliability of Agentic Forecasters

TimeSeek introduces a benchmark showing that AI language models perform best at predicting binary market outcomes early in a market's lifecycle and on high-uncertainty markets, but struggle near resolution and on consensus markets. Web search generally improves forecasting accuracy across models, though not uniformly, while simple ensembles reduce errors without beating market performance overall.

AI · Neutral · arXiv – CS AI · Apr 7 · 6/10

Pedagogical Safety in Educational Reinforcement Learning: Formalizing and Detecting Reward Hacking in AI Tutoring Systems

Researchers developed a four-layer pedagogical safety framework for AI tutoring systems and introduced the Reward Hacking Severity Index (RHSI) to measure misalignment between proxy rewards and genuine learning. Their study of 18,000 simulated interactions found that engagement-optimized AI agents systematically selected high-engagement actions with no learning benefit, and that constrained architectures were needed to reduce reward hacking.

AI · Bullish · arXiv – CS AI · Apr 7 · 6/10

Context Engineering: A Practitioner Methodology for Structured Human-AI Collaboration

Researchers introduce Context Engineering, a structured methodology for improving AI output quality through better context assembly rather than just prompting techniques. The study of 200 AI interactions showed that structured context reduced iteration cycles from 3.8 to 2.0 and improved first-pass acceptance rates from 32% to 55%.

🧠 ChatGPT · 🧠 Claude
AI · Bullish · arXiv – CS AI · Apr 7 · 6/10

InferenceEvolve: Towards Automated Causal Effect Estimators through Self-Evolving AI

Researchers introduce InferenceEvolve, an AI framework using large language models to automatically discover and refine causal inference methods. The system outperformed 58 human submissions in a recent competition and demonstrates how AI can optimize complex scientific programs through evolutionary approaches.

AI · Neutral · arXiv – CS AI · Apr 7 · 6/10

Implementing surrogate goals for safer bargaining in LLM-based agents

Researchers developed methods to implement 'surrogate goals' in LLM-based agents, reducing bargaining risks by deflecting threats away from what principals care about. The study tested several approaches, including prompting, fine-tuning, and scaffolding, and found that scaffolding and fine-tuning outperformed simple prompting for implementing the desired threat-response behaviors.

AI · Bullish · arXiv – CS AI · Apr 7 · 6/10

REAM: Merging Improves Pruning of Experts in LLMs

Researchers propose REAM (Router-weighted Expert Activation Merging), a new method for compressing large language models that groups and merges expert weights instead of pruning them. The technique preserves model performance better than existing pruning methods while reducing memory requirements for deployment.
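
The core idea can be sketched in a few lines; the function below is a minimal illustration of router-weighted merging, assuming routing probability mass per expert is available from a calibration pass (the function name and exact weighting scheme are assumptions, not REAM's published formulation):

```python
import numpy as np

def merge_experts(expert_weights: list[np.ndarray],
                  router_mass: np.ndarray) -> np.ndarray:
    """Merge a group of expert weight matrices into one, weighting
    each expert by the routing probability mass it received on a
    calibration set, so frequently-used experts dominate the merge."""
    w = router_mass / router_mass.sum()
    return sum(wi * e for wi, e in zip(w, expert_weights))

rng = np.random.default_rng(0)
experts = [rng.standard_normal((4, 4)) for _ in range(3)]
mass = np.array([10.0, 1.0, 1.0])  # expert 0 handles most tokens
merged = merge_experts(experts, mass)
print(merged.shape)  # (4, 4)
```

Unlike pruning, which discards the low-mass experts outright, the merge retains a weighted trace of their parameters, which is consistent with the summary's claim of better performance preservation.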

AI · Bullish · arXiv – CS AI · Apr 7 · 6/10

Decocted Experience Improves Test-Time Inference in LLM Agents

Researchers present an approach that improves Large Language Model performance without updating model parameters by using 'decocted experience': extracting and organizing key insights from previous interactions to guide reasoning. The method proves effective across reasoning tasks including math, web browsing, and software engineering by constructing better contextual inputs rather than simply scaling computational resources.
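
The summary does not specify the distillation procedure, but the general pattern can be sketched as: store distilled insight strings from past episodes, then retrieve the most relevant ones to prepend to a new task's context. The class name and the bag-of-words similarity below are illustrative assumptions standing in for whatever retrieval the paper actually uses:

```python
from collections import Counter
import math

def _bow(text: str) -> Counter:
    """Bag-of-words token counts, a crude stand-in for embeddings."""
    return Counter(text.lower().split())

def _cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class ExperienceStore:
    """Stores distilled insights from past episodes and retrieves
    the most relevant ones to build a new task's input context."""
    def __init__(self) -> None:
        self.insights: list[str] = []

    def add(self, insight: str) -> None:
        self.insights.append(insight)

    def build_context(self, task: str, k: int = 2) -> str:
        q = _bow(task)
        ranked = sorted(self.insights,
                        key=lambda s: _cosine(q, _bow(s)), reverse=True)
        return "\n".join(ranked[:k] + [task])

store = ExperienceStore()
store.add("When solving algebra word problems, define variables first.")
store.add("For web browsing, prefer site search over link crawling.")
store.add("In software tasks, reproduce the bug before patching.")
ctx = store.build_context("Solve this algebra word problems exercise", k=1)
print(ctx.splitlines()[0])
```

The point is that all adaptation happens in the context assembly step; the model's weights are never touched, matching the summary's "no parameter updates" claim.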

AI · Bullish · arXiv – CS AI · Apr 7 · 6/10

Optimizing Service Operations via LLM-Powered Multi-Agent Simulation

Researchers introduce an LLM-powered multi-agent simulation framework for optimizing service operations by modeling human behavior through AI agents. The method uses prompts to embed design choices and extracts outcomes from LLM responses to create a controlled Markov chain model, showing superior performance in supply chain and contest design applications.

AI · Neutral · arXiv – CS AI · Apr 7 · 6/10

Automatically Generating Hard Math Problems from Hypothesis-Driven Error Analysis

Researchers have developed a new automated pipeline that generates challenging math problems by first identifying specific mathematical concepts where LLMs struggle, then creating targeted problems to test these weaknesses. The method successfully reduced a leading LLM's accuracy from 77% to 45%, demonstrating its effectiveness at creating more rigorous benchmarks.

🧠 Llama
AI · Neutral · arXiv – CS AI · Apr 7 · 6/10

Empirical Characterization of Rationale Stability Under Controlled Perturbations for Explainable Pattern Recognition

Researchers propose a new metric to assess consistency of AI model explanations across similar inputs, implementing it on BERT models for sentiment analysis. The framework uses cosine similarity of SHAP values to detect inconsistent reasoning patterns and biased feature reliance, providing more robust evaluation of model behavior.
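
As summarized, the metric reduces to comparing attribution vectors across an input and its controlled perturbations. A minimal sketch, with toy vectors standing in for SHAP values from the paper's BERT sentiment models (function names and the toy data are assumptions):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two attribution vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def rationale_stability(attributions: list[np.ndarray]) -> float:
    """Mean pairwise cosine similarity of attribution vectors for an
    input and its perturbations. Values near 1.0 indicate stable
    explanations; low values flag inconsistent reasoning patterns."""
    sims = [
        cosine_similarity(attributions[i], attributions[j])
        for i in range(len(attributions))
        for j in range(i + 1, len(attributions))
    ]
    return float(np.mean(sims))

# Toy example: attributions for an input and two near-paraphrases.
base = np.array([0.6, 0.2, -0.1, 0.3])
stable = [base, base + 0.01, base - 0.02]
unstable = [base, -base, np.array([0.0, 0.9, 0.1, -0.5])]

print(rationale_stability(stable) > rationale_stability(unstable))  # True
```

A model whose explanations flip sign across trivially perturbed inputs would score low here even if its predictions were unchanged, which is exactly the failure mode the framework is built to detect.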

AI · Bullish · arXiv – CS AI · Apr 7 · 6/10

Memory Intelligence Agent

Researchers have developed Memory Intelligence Agent (MIA), a new AI framework that improves deep research agents through a Manager-Planner-Executor architecture with advanced memory systems. The framework enables continuous learning during inference and demonstrates superior performance across eleven benchmarks through enhanced cooperation between parametric and non-parametric memory systems.

AI · Bullish · arXiv – CS AI · Apr 7 · 6/10

SuperLocalMemory V3.3: The Living Brain -- Biologically-Inspired Forgetting, Cognitive Quantization, and Multi-Channel Retrieval for Zero-LLM Agent Memory Systems

Researchers have released SuperLocalMemory V3.3, an open-source AI agent memory system that operates entirely locally without cloud LLMs, implementing biologically-inspired forgetting mechanisms and multi-channel retrieval. The system achieves 70.4% performance on LoCoMo benchmarks while running on CPU only, addressing the paradox of AI agents having vast knowledge but poor conversational memory.

AI · Bullish · arXiv – CS AI · Apr 7 · 6/10

Search, Do not Guess: Teaching Small Language Models to Be Effective Search Agents

Researchers developed a new training approach that makes small language models more effective search agents by teaching them to consistently use search tools rather than relying on internal knowledge. The method achieved significant performance improvements of 17.3 points on Bamboogle and 15.3 points on HotpotQA, reaching large language model-level results while maintaining lower computational costs.

AI · Bullish · arXiv – CS AI · Apr 7 · 6/10

ANX: Protocol-First Design for AI Agent Interaction with a Supporting 3EX Decoupled Architecture

ANX is a new protocol-first framework designed for AI agent interaction, featuring a 3EX decoupled architecture that reduces token consumption by up to 66% compared to existing methods. The open-source protocol addresses security and efficiency issues in current AI agent implementations through agent-native design and integrated CLI, Skill, and MCP components.

🧠 GPT-4
AI · Bullish · arXiv – CS AI · Apr 7 · 6/10

Learning, Potential, and Retention: An Approach for Evaluating Adaptive AI-Enabled Medical Devices

Researchers introduce a new framework for evaluating adaptive AI models in medical devices, using three key measurements: learning, potential, and retention. The approach addresses challenges in assessing AI systems that continuously update, providing insights for regulatory oversight of adaptive medical AI safety and effectiveness.

AI · Bullish · arXiv – CS AI · Apr 7 · 6/10

Scaling DPPs for RAG: Density Meets Diversity

Researchers propose ScalDPP, a new retrieval mechanism for RAG systems that uses Determinantal Point Processes to optimize both density and diversity in context selection. The approach addresses limitations in current RAG pipelines that ignore interactions between retrieved information chunks, leading to redundant contexts that reduce effectiveness.
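
The density/diversity trade-off the summary describes is the standard DPP formulation: a kernel built from per-chunk quality scores and pairwise similarities, with subsets scored by a determinant that penalizes redundancy. Below is a generic greedy MAP sketch of that idea, not necessarily ScalDPP's own algorithm (which the summary says is optimized for scale):

```python
import numpy as np

def greedy_dpp_select(quality: np.ndarray, sim: np.ndarray,
                      k: int) -> list[int]:
    """Greedy MAP inference for a Determinantal Point Process.
    Kernel L = diag(q) @ S @ diag(q) trades off per-chunk quality
    (e.g. retrieval score) against pairwise similarity (redundancy).
    At each step, pick the chunk that most increases the determinant
    of L restricted to the selected set."""
    L = np.outer(quality, quality) * sim
    selected: list[int] = []
    for _ in range(k):
        best, best_det = None, -np.inf
        for j in range(len(quality)):
            if j in selected:
                continue
            idx = selected + [j]
            d = np.linalg.det(L[np.ix_(idx, idx)])
            if d > best_det:
                best, best_det = j, d
        selected.append(best)
    return selected

# Chunks 0 and 1 are near-duplicates with high scores; chunk 2 is
# slightly weaker but diverse. The DPP picks 0 then 2, not 0 and 1.
q = np.array([1.0, 0.98, 0.8])
S = np.array([[1.0, 0.95, 0.1],
              [0.95, 1.0, 0.1],
              [0.1, 0.1, 1.0]])
print(greedy_dpp_select(q, S, k=2))  # [0, 2]
```

A top-k-by-score retriever would return chunks 0 and 1 here and waste half the context on a duplicate; the determinant makes that subset nearly worthless, which is the redundancy problem the paper targets.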

AI · Neutral · arXiv – CS AI · Apr 7 · 6/10

AI Governance Control Stack for Operational Stability: Achieving Hardened Governance in AI Systems

Researchers propose a six-layer AI Governance Control Stack for Operational Stability to ensure traceable and resilient AI system behavior in high-stakes environments. The framework integrates version control, verification, explainability logging, monitoring, drift detection, and escalation mechanisms while aligning with emerging regulatory frameworks like the EU AI Act and NIST standards.

AI · Bullish · arXiv – CS AI · Apr 7 · 6/10

Event-Driven Neuromorphic Vision Enables Energy-Efficient Visual Place Recognition

Researchers developed SpikeVPR, a bio-inspired visual place recognition system using event-based cameras and spiking neural networks that achieves comparable performance to deep networks while using 50x fewer parameters and consuming 30-250x less energy. The neuromorphic approach enables real-time deployment on mobile platforms for autonomous robot navigation.

AI · Neutral · arXiv – CS AI · Apr 7 · 6/10

Beyond Static Vision: Scene Dynamic Field Unlocks Intuitive Physics Understanding in Multi-modal Large Language Models

Researchers identify critical limitations in current Multimodal Large Language Models' ability to understand physics and physical world dynamics. They propose Scene Dynamic Field (SDF), a new approach using physics simulators that achieves up to 20.7% performance improvements on fluid dynamics tasks.

AI · Bearish · arXiv – CS AI · Apr 7 · 6/10

The Ideation Bottleneck: Decomposing the Quality Gap Between AI-Generated and Human Economics Research

Research reveals AI-generated economics papers significantly underperform human-authored publications, with idea quality representing the primary bottleneck (71% of the gap) rather than execution quality. Analysis of 953 papers shows human research achieves 47.1% exceptional probability versus 16.5% for AI, with only 0.8% of AI papers surpassing median human quality on both dimensions.

🧠 Gemini
AI · Bullish · arXiv – CS AI · Apr 7 · 6/10

Towards Intelligent Energy Security: A Unified Spatio-Temporal and Graph Learning Framework for Scalable Electricity Theft Detection in Smart Grids

Researchers have developed SmartGuard Energy Intelligence System (SGEIS), an AI framework that combines machine learning, deep learning, and graph neural networks to detect electricity theft in smart grids. The system achieved 96% accuracy in identifying high-risk nodes and demonstrates strong performance with practical applications for energy security.

AI · Bullish · arXiv – CS AI · Apr 7 · 6/10

Generative AI for material design: A mechanics perspective from burgers to matter

Researchers demonstrate that generative AI and computational mechanics share fundamental principles by using diffusion models to design burger recipes and materials. The study trained models on 2,260 recipes to generate new combinations, with three AI-designed burgers outperforming McDonald's Big Mac in taste tests with 100 participants.

Page 321 of 761
◆ AI Mentions
🏢 OpenAI · 74×
🏢 Anthropic · 45×
🧠 Claude · 35×
🏢 Nvidia · 33×
🧠 ChatGPT · 19×
🧠 Gemini · 19×
🧠 GPT-4 · 18×
🧠 GPT-5 · 16×
🧠 Llama · 15×
🏢 Perplexity · 12×
🏢 xAI · 9×
🧠 Opus · 9×
🏢 Hugging Face · 5×
🏢 Google · 5×
🧠 Sonnet · 5×
🏢 Meta · 4×
🏢 Microsoft · 3×
🧠 Grok · 3×
🧠 Haiku · 2×
🧠 Sora · 1×
▲ Trending Tags
1. #ai · 242
2. #geopolitical-risk · 236
3. #geopolitics · 210
4. #iran · 191
5. #market-volatility · 125
6. #middle-east · 121
7. #sanctions · 89
8. #oil-markets · 89
9. #energy-markets · 79
10. #inflation · 76
11. #geopolitical · 74
12. #openai · 65
13. #ai-infrastructure · 61
14. #strait-of-hormuz · 59
15. #market · 53
Tag Sentiment
#ai · 242 articles
#geopolitical-risk · 236 articles
#geopolitics · 210 articles
#iran · 191 articles
#market-volatility · 125 articles
#middle-east · 121 articles
#oil-markets · 89 articles
#sanctions · 89 articles
#energy-markets · 79 articles
#inflation · 76 articles
Tag Connections
#geopolitics ↔ #iran · 63
#geopolitical-risk ↔ #market-volatility · 43
#geopolitics ↔ #oil-markets · 43
#geopolitical ↔ #iran · 42
#geopolitics ↔ #middle-east · 40
#geopolitical-risk ↔ #middle-east · 40
#geopolitical-risk ↔ #oil-markets · 39
#oil-markets ↔ #strait-of-hormuz · 30
#geopolitical-risk ↔ #strait-of-hormuz · 29
#iran ↔ #trump · 29
© 2026 y0.exchange