y0news

AI Pulse News

Models, papers, tools. 17,290 articles with AI-powered sentiment analysis and key takeaways.

AI · Bullish · arXiv – CS AI · Mar 5 · 7/10

Perfect score on IPhO 2025 theory by Gemini agent

Google's Gemini 3.1 Pro Preview achieved a perfect score on the IPhO 2025 theory problems across five runs, surpassing earlier AI systems, which had fallen short of the top human contestants. However, the researchers acknowledge potential data contamination, since the model was released after the competition took place.

🧠 Gemini
AI · Neutral · arXiv – CS AI · Mar 5 · 7/10

Bridging the Reproducibility Divide: Open Source Software's Role in Standardizing Healthcare AI

A study reveals that 74% of healthcare AI research papers still use private datasets or don't share code, creating reproducibility issues that undermine trust in medical AI applications. Papers that embrace open practices by sharing both public datasets and code receive 110% more citations on average, demonstrating clear benefits for scientific impact.

AI · Bearish · arXiv – CS AI · Mar 5 · 7/10

Sleeper Cell: Injecting Latent Malice Temporal Backdoors into Tool-Using LLMs

Researchers demonstrate a novel backdoor attack method called 'SFT-then-GRPO' that can inject hidden malicious behavior into AI agents while maintaining their performance on standard benchmarks. The attack creates 'sleeper agents' that appear benign but execute harmful actions under specific trigger conditions, highlighting critical security risks in adopting third-party AI models.

AI · Bullish · arXiv – CS AI · Mar 5 · 7/10

AOI: Turning Failed Trajectories into Training Signals for Autonomous Cloud Diagnosis

Researchers present AOI (Autonomous Operations Intelligence), a multi-agent AI framework that automates Site Reliability Engineering tasks while maintaining security constraints. The system achieved 66.3% success rate on benchmark tests, outperforming previous methods by 24.4 points, and can learn from failed operations to improve future performance.

🧠 Claude
AI · Bullish · arXiv – CS AI · Mar 5 · 6/10

LiteVLA-Edge: Quantized On-Device Multimodal Control for Embedded Robotics

Researchers developed LiteVLA-Edge, a deployment-oriented Vision-Language-Action model pipeline that enables fully on-device inference on embedded robotics hardware like Jetson Orin. The system achieves 150.5ms latency (6.6Hz) through FP32 fine-tuning combined with 4-bit quantization and GPU-accelerated inference, operating entirely offline within a ROS 2 framework.
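As a sanity check on the reported figures, 150.5 ms per inference does correspond to roughly 6.6 Hz, and the 4-bit weight trick can be sketched generically. The snippet below is an illustrative symmetric per-tensor quantizer, not the paper's actual pipeline; the sample weights and the quantization scheme are assumptions.

```python
import numpy as np

def quantize_4bit(w):
    """Symmetric per-tensor 4-bit quantization: floats -> ints in [-8, 7]."""
    scale = float(np.max(np.abs(w))) / 7.0  # 7 positive levels in signed 4-bit
    q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.array([0.8, -0.32, 0.06, -0.8], dtype=np.float32)
q, scale = quantize_4bit(w)
w_hat = dequantize(q, scale)

# Round-trip error for in-range values stays within half a quantization step.
assert np.max(np.abs(w_hat - w)) <= scale / 2 + 1e-6

# Latency -> control rate: 1 / 0.1505 s rounds to the reported 6.6 Hz.
print(round(1.0 / 0.1505, 1))  # 6.6
```

Quantizing weights this way shrinks memory roughly 8x versus FP32, which is the main lever for fitting a VLA model onto hardware like Jetson Orin.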

AI · Bullish · arXiv – CS AI · Mar 5 · 7/10

MemSifter: Offloading LLM Memory Retrieval via Outcome-Driven Proxy Reasoning

MemSifter is a new AI framework that uses smaller proxy models to handle memory retrieval for large language models, addressing computational costs in long-term memory tasks. The system uses reinforcement learning to optimize retrieval accuracy and has been open-sourced with demonstrated performance improvements on benchmark tests.

AI · Neutral · arXiv – CS AI · Mar 5 · 7/10

Goal-Driven Risk Assessment for LLM-Powered Systems: A Healthcare Case Study

Researchers propose a new goal-driven risk assessment framework for LLM-powered systems, specifically targeting healthcare applications. The approach uses attack trees to identify detailed threat vectors combining adversarial AI attacks with conventional cyber threats, addressing security gaps in LLM system design.

AI · Bearish · arXiv – CS AI · Mar 5 · 7/10

Image-based Prompt Injection: Hijacking Multimodal LLMs through Visually Embedded Adversarial Instructions

Researchers have developed Image-based Prompt Injection (IPI), a black-box attack that embeds adversarial instructions into natural images to manipulate multimodal AI models. Testing on GPT-4-turbo achieved up to 64% attack success rate, demonstrating a significant security vulnerability in vision-language AI systems.

🧠 GPT-4
AI · Neutral · arXiv – CS AI · Mar 5 · 7/10

InEdit-Bench: Benchmarking Intermediate Logical Pathways for Intelligent Image Editing Models

Researchers introduced InEdit-Bench, the first evaluation benchmark specifically designed to test image editing models' ability to reason through intermediate logical pathways in multi-step visual transformations. Testing 14 representative models revealed significant shortcomings in handling complex scenarios requiring dynamic reasoning and procedural understanding.

AI · Neutral · arXiv – CS AI · Mar 5 · 7/10

RAG-X: Systematic Diagnosis of Retrieval-Augmented Generation for Medical Question Answering

Researchers propose RAG-X, a diagnostic framework for evaluating retrieval-augmented generation systems in medical AI applications. The study reveals an 'Accuracy Fallacy' showing a 14% gap between perceived system success and actual evidence-based grounding in medical question-answering systems.

AI · Neutral · arXiv – CS AI · Mar 5 · 6/10

SafeCRS: Personalized Safety Alignment for LLM-Based Conversational Recommender Systems

Researchers introduce SafeCRS, a safety-aware training framework for LLM-based conversational recommender systems that addresses personalized safety vulnerabilities. The system reduces safety violation rates by up to 96.5% while maintaining recommendation quality by respecting individual user constraints like trauma triggers and phobias.

AI · Neutral · arXiv – CS AI · Mar 5 · 7/10

Molt Dynamics: Emergent Social Phenomena in Autonomous AI Agent Populations

Researchers analyzed 770,000 autonomous AI agents interacting in MoltBook, revealing emergent social behaviors including role specialization, information cascades, and limited cooperative task resolution. The study found that while agents naturally develop coordination patterns, collaborative outcomes underperform those of individual agents, establishing baseline metrics for decentralized AI systems.

AI · Neutral · arXiv – CS AI · Mar 5 · 7/10

On Google's SynthID-Text LLM Watermarking System: Theoretical Analysis and Empirical Validation

Researchers have conducted the first theoretical analysis of Google's SynthID-Text watermarking system, revealing vulnerabilities in its detection methods and proposing attacks that can break the system. The study identifies weaknesses in the mean score detection approach and demonstrates that the Bayesian score offers better robustness, while establishing optimal parameters for watermark detection.

AI · Neutral · arXiv – CS AI · Mar 5 · 6/10

Belief-Sim: Towards Belief-Driven Simulation of Demographic Misinformation Susceptibility

Researchers introduce BeliefSim, a framework that uses Large Language Models to simulate how susceptible different demographic groups are to misinformation, based on their underlying beliefs. The system achieves up to 92% accuracy in predicting misinformation susceptibility by incorporating psychology-informed belief profiles.

AI · Bearish · arXiv – CS AI · Mar 5 · 6/10

Baseline Performance of AI Tools in Classifying Cognitive Demand of Mathematical Tasks

A research study tested 11 AI tools on their ability to classify the cognitive demand of mathematical tasks, finding they achieved only 63% accuracy on average, with no tool exceeding 83%. The tools showed a systematic bias toward middle-category classifications and struggled to reason about underlying cognitive processes rather than surface textual features.

🏢 Perplexity · 🧠 ChatGPT · 🧠 Claude
AI · Bullish · arXiv – CS AI · Mar 5 · 6/10

Test-Time Meta-Adaptation with Self-Synthesis

Researchers introduce MASS, a meta-learning framework that enables large language models to self-adapt at test time by generating synthetic training data and performing targeted self-updates. The system uses bilevel optimization to meta-learn data-attribution signals and optimize synthetic data through scalable meta-gradients, showing effectiveness in mathematical reasoning tasks.

AI · Neutral · arXiv – CS AI · Mar 5 · 7/10

The Controllability Trap: A Governance Framework for Military AI Agents

Researchers propose the Agentic Military AI Governance Framework (AMAGF) to address control failures in autonomous military AI systems. The framework introduces a Control Quality Score (CQS) to continuously measure and manage human control over AI agents throughout operations, moving beyond binary control models.

AI · Bullish · arXiv – CS AI · Mar 5 · 7/10

MMAI Gym for Science: Training Liquid Foundation Models for Drug Discovery

Researchers introduce MMAI Gym for Science, a training framework for molecular foundation models in drug discovery. Their Liquid Foundation Model (LFM) outperforms larger general-purpose models on drug discovery tasks while being more efficient and specialized for molecular applications.

AI · Bullish · arXiv – CS AI · Mar 5 · 7/10

Phys4D: Fine-Grained Physics-Consistent 4D Modeling from Video Diffusion

Researchers have developed Phys4D, a new pipeline that enhances video diffusion models with physics-consistent 4D world representations through a three-stage training process. The system addresses current limitations where AI-generated videos often exhibit physically implausible dynamics, using pseudo-supervised pretraining, physics-grounded fine-tuning, and reinforcement learning to improve spatiotemporal consistency.

AI · Bullish · arXiv – CS AI · Mar 5 · 6/10

PhyPrompt: RL-based Prompt Refinement for Physically Plausible Text-to-Video Generation

Researchers developed PhyPrompt, a reinforcement learning framework that automatically refines text prompts to generate physically realistic videos from AI models. The system uses a two-stage approach with curriculum learning to improve both physical accuracy and semantic fidelity, outperforming larger models like GPT-4o with only 7B parameters.

🧠 GPT-4
AI · Bullish · arXiv – CS AI · Mar 5 · 6/10

PRIVATEEDIT: A Privacy-Preserving Pipeline for Face-Centric Generative Image Editing

Researchers have developed PRIVATEEDIT, a privacy-preserving pipeline for face-centric image editing that keeps biometric data on-device rather than uploading to third-party services. The system uses local segmentation and masking to separate identity-sensitive regions from editable content, allowing high-quality editing while maintaining user control over facial data.

AI · Bullish · arXiv – CS AI · Mar 5 · 7/10

Parallel Test-Time Scaling with Multi-Sequence Verifiers

Researchers introduce Multi-Sequence Verifier (MSV), a new technique that improves large language model performance by jointly processing multiple candidate solutions rather than scoring them individually. The system achieves better accuracy while reducing inference latency by approximately half through improved calibration and early-stopping strategies.
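The joint-scoring-with-early-stopping idea can be sketched with a toy verifier. The softmax scorer, the confidence threshold, and the candidate data below are hypothetical stand-ins, not MSV's actual verifier; the point is only that each candidate's probability depends on the whole set, and that processing stops once one candidate is confident enough.

```python
import math

def joint_scores(candidates):
    """Stand-in joint verifier (hypothetical): softmax over per-candidate raw
    scores, so each probability is calibrated against the whole candidate set."""
    raw = [c["score"] for c in candidates]
    m = max(raw)
    exps = [math.exp(r - m) for r in raw]
    z = sum(exps)
    return [e / z for e in exps]

def verify_with_early_stop(batches, threshold=0.6):
    """Consume candidate batches; stop as soon as one candidate's joint
    probability clears the threshold. Assumes at least one non-empty batch."""
    seen = []
    for used, batch in enumerate(batches, start=1):
        seen.extend(batch)
        probs = joint_scores(seen)
        best = max(range(len(seen)), key=probs.__getitem__)
        if probs[best] >= threshold:
            return seen[best]["answer"], used  # stopped early
    return seen[best]["answer"], used          # exhausted all batches

batches = [
    [{"answer": "A", "score": 1.0}, {"answer": "B", "score": 1.2}],
    [{"answer": "C", "score": 3.5}, {"answer": "A", "score": 0.5}],
    [{"answer": "D", "score": 2.0}],
]
ans, used = verify_with_early_stop(batches)
print(ans, used)  # C 2
```

Here the strong candidate in the second batch pushes its joint probability past the threshold, so the third batch is never scored; that skipped work is where the claimed latency reduction would come from.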

AI · Bullish · arXiv – CS AI · Mar 5 · 7/10

Farther the Shift, Sparser the Representation: Analyzing OOD Mechanisms in LLMs

Researchers discovered that Large Language Models become increasingly sparse in their internal representations when handling more difficult or out-of-distribution tasks. This sparsity mechanism appears to be an adaptive response that helps stabilize reasoning under challenging conditions, leading to the development of a new learning strategy called Sparsity-Guided Curriculum In-Context Learning (SG-ICL).
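A simple way to probe the kind of sparsity the paper describes is to measure the fraction of near-zero activations. The threshold and the synthetic activation vectors below are illustrative assumptions, not the authors' measurement protocol:

```python
import numpy as np

def activation_sparsity(acts, eps=1e-3):
    """Fraction of near-zero entries in an activation vector.
    The eps cutoff is a hypothetical choice for this illustration."""
    return float(np.mean(np.abs(acts) < eps))

rng = np.random.default_rng(0)
dense = rng.normal(size=1000)                    # in-distribution stand-in
sparse = dense * (rng.random(1000) < 0.2)        # ~80% of entries zeroed out,
                                                 # mimicking a shifted input
print(activation_sparsity(dense) < activation_sparsity(sparse))  # True
```

Under the paper's finding, representations of harder or more out-of-distribution inputs would look like the second vector: the same probe applied across a difficulty curriculum is what a strategy like SG-ICL could order examples by.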

AI · Bullish · arXiv – CS AI · Mar 5 · 6/10

Error as Signal: Stiffness-Aware Diffusion Sampling via Embedded Runge-Kutta Guidance

Researchers propose Embedded Runge-Kutta Guidance (ERK-Guid), a new method that improves diffusion model sampling by using solver-induced errors as guidance signals. The technique addresses stiffness issues in ODE trajectories and demonstrates superior performance over existing methods on ImageNet benchmarks.
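The core idea, reading the gap between an embedded pair of solvers as an error signal, can be illustrated with the classic Heun/Euler pair on a toy ODE. This is a generic embedded Runge-Kutta sketch, not the paper's diffusion sampler; the decay rates and step size are arbitrary.

```python
def heun_euler_step(f, t, y, h):
    """One embedded Heun/Euler step: a 2nd-order solution plus a free
    local-error estimate from its gap to the 1st-order Euler solution."""
    k1 = f(t, y)
    k2 = f(t + h, y + h * k1)
    y_euler = y + h * k1               # 1st-order solution
    y_heun = y + 0.5 * h * (k1 + k2)   # 2nd-order solution
    err = abs(y_heun - y_euler)        # embedded error estimate
    return y_heun, err

# Stiffer dynamics (faster decay) produce a larger embedded error at the same
# step size -- the per-step signal a method like ERK-Guid reads as guidance.
f_mild = lambda t, y: -1.0 * y
f_stiff = lambda t, y: -20.0 * y
_, e_mild = heun_euler_step(f_mild, 0.0, 1.0, 0.05)
_, e_stiff = heun_euler_step(f_stiff, 0.0, 1.0, 0.05)
print(e_mild < e_stiff)  # True
```

Classical adaptive solvers use this estimate to shrink the step; the paper's twist is to feed it back into sampling as a guidance signal instead.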

AI · Bullish · arXiv – CS AI · Mar 5 · 7/10

mlx-snn: Spiking Neural Networks on Apple Silicon via MLX

Researchers have released mlx-snn, the first spiking neural network library built natively for Apple's MLX framework, targeting Apple Silicon hardware. The library demonstrates 2-2.5x faster training and 3-10x lower GPU memory usage compared to existing PyTorch-based solutions, achieving 97.28% accuracy on MNIST classification tasks.
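The workhorse neuron model in most SNN libraries is the leaky integrate-and-fire (LIF) unit. The NumPy sketch below shows generic LIF dynamics, not mlx-snn's actual MLX API; the decay factor, threshold, and input drive are illustrative.

```python
import numpy as np

def lif_forward(inputs, beta=0.9, threshold=1.0):
    """Leaky integrate-and-fire: the membrane potential decays by `beta`
    each step, integrates the input current, and emits a binary spike
    (then resets by subtraction) when it crosses `threshold`."""
    v = np.zeros_like(inputs[0])
    spikes = []
    for x in inputs:
        v = beta * v + x
        s = (v >= threshold).astype(np.float32)
        v = v - s * threshold  # soft reset: subtract the threshold
        spikes.append(s)
    return np.stack(spikes)

# Constant drive of 0.4: the potential climbs 0.4 -> 0.76 -> 1.084,
# so the neuron first spikes on the third step.
t = [np.array([0.4], dtype=np.float32) for _ in range(4)]
out = lif_forward(t)
print(out.ravel().tolist())  # [0.0, 0.0, 1.0, 0.0]
```

Because the state update is just elementwise multiply-add over time steps, it maps naturally onto an array framework like MLX, which is where the reported speed and memory gains over PyTorch-based stacks would come from.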

Page 133 of 692
◆ AI Mentions
🏢 OpenAI: 95×
🏢 Nvidia: 65×
🧠 GPT-5: 47×
🧠 Claude: 43×
🧠 Gemini: 39×
🏢 Anthropic: 39×
🧠 ChatGPT: 24×
🧠 GPT-4: 19×
🧠 Llama: 18×
🏢 Meta: 11×
🧠 Opus: 10×
🏢 Google: 9×
🏢 xAI: 9×
🧠 Sonnet: 8×
🏢 Perplexity: 7×
🏢 Hugging Face: 7×
🧠 Grok: 6×
🏢 Microsoft: 6×
🏢 Cohere: 2×
🧠 Stable Diffusion: 1×
▲ Trending Tags
1. #ai: 570
2. #iran: 567
3. #market: 416
4. #geopolitical: 386
5. #trump: 131
6. #security: 106
7. #openai: 95
8. #artificial-intelligence: 74
9. #nvidia: 63
10. #inflation: 58
11. #fed: 53
12. #china: 50
13. #google: 50
14. #meta: 42
15. #microsoft: 38
Tag Sentiment
#ai: 570 articles
#iran: 567 articles
#market: 416 articles
#geopolitical: 386 articles
#trump: 131 articles
#security: 106 articles
#openai: 95 articles
#artificial-intelligence: 74 articles
#nvidia: 63 articles
#inflation: 58 articles
Tag Connections
#geopolitical ↔ #iran: 267
#iran ↔ #market: 174
#geopolitical ↔ #market: 146
#iran ↔ #trump: 89
#ai ↔ #market: 63
#ai ↔ #artificial-intelligence: 62
#geopolitical ↔ #trump: 50
#market ↔ #trump: 49
#ai ↔ #openai: 43
#ai ↔ #google: 41
© 2026 y0.exchange