y0news

AI Pulse News

Models, papers, tools. 19,032 articles with AI-powered sentiment analysis and key takeaways.

AI · Bearish · arXiv – CS AI · Mar 26 · 6/10

PoiCGAN: A Targeted Poisoning Based on Feature-Label Joint Perturbation in Federated Learning

Researchers propose PoiCGAN, a new targeted poisoning attack for federated learning that uses feature-label joint perturbation to bypass detection mechanisms. The attack achieves success rates up to 83.97% higher than existing methods while keeping the main-task accuracy drop below 8.87%.

AI · Bullish · arXiv – CS AI · Mar 26 · 6/10

APreQEL: Adaptive Mixed Precision Quantization For Edge LLMs

Researchers propose APreQEL, an adaptive mixed precision quantization method for deploying large language models on edge devices. The approach optimizes memory, latency, and accuracy by applying different quantization levels to different layers based on their importance and hardware characteristics.
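The per-layer idea in the summary can be sketched as follows. This is a toy illustration, not APreQEL's actual criterion: the importance heuristic, the quarter-based bit assignment, and the uniform symmetric quantizer are all assumptions chosen for brevity.

```python
import numpy as np

def assign_bits(importances):
    """Toy heuristic: give the most important quarter of layers 8 bits,
    the least important quarter 2 bits, and everything else 4 bits."""
    order = np.argsort(importances)          # least important first
    bits = np.full(len(importances), 4)
    n = max(1, len(importances) // 4)
    bits[order[:n]] = 2
    bits[order[-n:]] = 8
    return bits

def quantize(w, n_bits):
    """Uniform symmetric quantization of one weight tensor."""
    scale = np.abs(w).max() / (2 ** (n_bits - 1) - 1)
    return np.round(w / scale) * scale

rng = np.random.default_rng(0)
layers = [rng.standard_normal((8, 8)) for _ in range(8)]
importances = [float(np.abs(w).mean()) for w in layers]
bits = assign_bits(importances)
quantized = [quantize(w, int(b)) for w, b in zip(layers, bits)]
print(sorted(set(bits.tolist())))  # -> [2, 4, 8]
```

The point of mixing precisions is that the memory/accuracy trade-off is made per layer rather than globally, which is what lets edge deployments fit a fixed hardware budget.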

AI · Neutral · arXiv – CS AI · Mar 26 · 6/10

LLMORPH: Automated Metamorphic Testing of Large Language Models

Researchers have developed LLMORPH, an automated testing tool for Large Language Models that uses Metamorphic Testing to identify faulty behaviors without requiring human-labeled data. The tool was tested on GPT-4, LLAMA3, and HERMES 2 across four NLP benchmarks, generating over 561,000 test executions and successfully exposing model inconsistencies.
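The core metamorphic-testing move is checking that outputs stay consistent under label-preserving input transforms, with no human-labeled ground truth. A minimal sketch, using a stub in place of the model under test (the stub, the synonym relation, and the prompts are illustrative assumptions, not LLMORPH's actual relations):

```python
def model(prompt: str) -> str:
    """Stub standing in for the LLM under test (deterministic toy)."""
    return "positive" if "love" in prompt.lower() else "negative"

def mr_synonym(prompt: str) -> str:
    """Metamorphic relation: replacing a word with a synonym
    should not change the predicted label."""
    return prompt.replace("love", "adore")

def run_mt(prompts, relation):
    """Flag inputs whose output changes under a label-preserving transform."""
    return [p for p in prompts if model(p) != model(relation(p))]

prompts = ["I love this film", "This film is dull"]
violations = run_mt(prompts, mr_synonym)
print(violations)
```

The stub's keyword dependence is exactly the kind of inconsistency the relation exposes: the synonym swap flips its prediction, so the first prompt is flagged without anyone labeling it.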

🧠 GPT-4
AI · Bullish · arXiv – CS AI · Mar 26 · 6/10

LLMLOOP: Improving LLM-Generated Code and Tests through Automated Iterative Feedback Loops

Researchers have developed LLMLOOP, a framework that automatically refines LLM-generated code and test cases through five iterative loops addressing compilation errors, static analysis issues, test failures, and quality improvements. The tool was evaluated on HUMANEVAL-X benchmark and demonstrated effectiveness in improving the quality of AI-generated code outputs.

AI · Bullish · arXiv – CS AI · Mar 26 · 6/10

PLACID: Privacy-preserving Large language models for Acronym Clinical Inference and Disambiguation

Researchers developed PLACID, a privacy-preserving system using small on-device AI models (2B-10B parameters) for clinical acronym disambiguation in healthcare settings. The cascaded approach combines general-purpose models for detection with domain-specific biomedical models, achieving 81% expansion accuracy while keeping sensitive health data local.
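The cascade can be sketched in a few lines. Both stages below are crude stand-ins and assumptions for illustration: PLACID's stage 1 is a small general-purpose model (here an all-caps heuristic) and its stage 2 is a 2B-10B biomedical model (here a lookup table); the point is that the note never leaves the device in either stage.

```python
def detect_acronyms(note: str) -> list:
    """Stage 1 stand-in: flag acronym candidates via an all-caps heuristic."""
    tokens = [t.strip(".,;:") for t in note.split()]
    return [t for t in tokens if t.isupper() and 2 <= len(t) <= 5]

# Stage 2 stand-in: a local lookup table plays the role of the
# domain-specific biomedical model. All data stays on-device.
BIOMED_EXPANSIONS = {
    "MI": "myocardial infarction",
    "BP": "blood pressure",
}

def disambiguate(note: str) -> dict:
    """Cascade: detect candidates, then expand with the biomedical stage."""
    return {a: BIOMED_EXPANSIONS.get(a, "<unknown>")
            for a in detect_acronyms(note)}

print(disambiguate("Pt presented with MI, elevated BP."))
```

Splitting detection from expansion is what lets a general model handle the easy structural task while the domain model spends its capacity only on the ambiguous expansions.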

AI · Neutral · arXiv – CS AI · Mar 26 · 6/10

Assessment Design in the AI Era: A Method for Identifying Items Functioning Differentially for Humans and Chatbots

Researchers developed a method using Differential Item Functioning (DIF) analysis to identify systematic differences between human and AI chatbot performance on educational assessments. The study tested six leading chatbots including ChatGPT-4o, Gemini, and Claude on chemistry and entrance exams to help educators design AI-resistant assessments.
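A crude version of the DIF screen is comparing item-level proportion correct across the two groups and flagging items with a large gap. This is only a sketch: real DIF analysis conditions on matched ability (e.g., Mantel-Haenszel or IRT-based methods), and the response matrices and threshold below are invented for illustration.

```python
# Toy response matrices: rows = respondents, cols = items (1 = correct).
human = [
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [1, 1, 1, 1],
]
bot = [
    [1, 1, 1, 0],
    [1, 1, 1, 0],
    [1, 1, 1, 1],
]

def p_correct(group):
    """Proportion of respondents answering each item correctly."""
    n = len(group)
    return [sum(row[j] for row in group) / n for j in range(len(group[0]))]

gaps = [h - b for h, b in zip(p_correct(human), p_correct(bot))]
flagged = [j for j, g in enumerate(gaps) if abs(g) >= 0.5]
print(flagged)  # items where humans and bots diverge sharply
```

An item that humans find easy but chatbots fail (a positive gap) is exactly the kind of "AI-resistant" item the method aims to surface.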

🏢 Meta · 🧠 ChatGPT · 🧠 Claude
AI · Neutral · arXiv – CS AI · Mar 26 · 6/10

The Diminishing Returns of Early-Exit Decoding in Modern LLMs

Research shows that newer LLMs have diminishing effectiveness for early-exit decoding techniques due to improved architectures that reduce layer redundancy. The study finds that dense transformers outperform Mixture-of-Experts models for early-exit, with larger models (20B+ parameters) and base pretrained models showing the highest early-exit potential.
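Early-exit decoding stops forwarding a token once an intermediate layer is already confident. A minimal confidence-threshold sketch, with invented per-layer logits (the threshold and toy numbers are assumptions; real implementations need a trained exit head per layer):

```python
import numpy as np

def early_exit(layer_logits, threshold=0.9):
    """Return (predicted token, exit depth): stop at the first
    intermediate layer whose softmax confidence clears the threshold."""
    for depth, logits in enumerate(layer_logits):
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        if probs.max() >= threshold:
            return int(probs.argmax()), depth
    return int(probs.argmax()), len(layer_logits) - 1

# Toy per-layer logits over a 4-token vocabulary: confidence grows with depth.
layer_logits = [np.array([1.0, 1.1, 1.0, 0.9]),
                np.array([0.5, 3.0, 0.4, 0.2]),
                np.array([0.1, 6.0, 0.0, 0.0])]
token, depth = early_exit(layer_logits)
print(token, depth)  # exits at layer 2 of 3 in this toy
```

The paper's finding maps directly onto this picture: if later architectures make each layer less redundant, confidence clears the threshold closer to the final layer, so the exit saves little compute.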

AI · Neutral · arXiv – CS AI · Mar 26 · 6/10

PoliticsBench: Benchmarking Political Values in Large Language Models with Multi-Turn Roleplay

Researchers developed PoliticsBench, a new framework to evaluate political bias in large language models through multi-turn roleplay scenarios. The study found that 7 out of 8 major LLMs (Claude, Deepseek, Gemini, GPT, Llama, Qwen) showed left-leaning political bias, while only Grok exhibited right-leaning tendencies.

🧠 Claude · 🧠 Gemini · 🧠 Llama
AI · Neutral · arXiv – CS AI · Mar 26 · 6/10

Can VLMs Reason Robustly? A Neuro-Symbolic Investigation

Researchers investigated whether Vision-Language Models (VLMs) can reason robustly under distribution shifts and found that fine-tuned VLMs achieve high accuracy in-distribution but fail to generalize. They propose VLC, a neuro-symbolic method combining VLM-based concept recognition with circuit-based symbolic reasoning that demonstrates consistent performance under covariate shifts.

AI · Bullish · arXiv – CS AI · Mar 26 · 6/10

Latent Bias Alignment for High-Fidelity Diffusion Inversion in Real-World Image Reconstruction and Manipulation

Researchers have developed new methods called Latent Bias Optimization (LBO) and Image Latent Boosting (ILB) to improve diffusion model performance in reconstructing real-world images from noise. The techniques address key challenges in diffusion inversion by reducing misalignment between generation processes and improving reconstruction quality for applications like image editing.

AI · Neutral · arXiv – CS AI · Mar 26 · 6/10

Revealing Multi-View Hallucination in Large Vision-Language Models

Researchers identify 'multi-view hallucination' as a major failure mode in large vision-language models (LVLMs), where the models confuse visual information from different viewpoints or instances. They introduce the MVH-Bench benchmark and a Reference Shift Contrastive Decoding (RSCD) technique, which improves performance by up to 34.6 points without retraining.
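Contrastive decoding in general subtracts a hallucination-prone reference pass from the main pass at the logit level. A generic sketch of that mechanism (how RSCD actually constructs its shifted-reference pass is not shown here; the logit values and alpha are invented):

```python
import numpy as np

def contrastive_decode(logits_orig, logits_ref, alpha=1.0):
    """Down-weight tokens the hallucination-prone reference pass also
    favors (generic contrastive decoding at the logit level)."""
    return (1 + alpha) * logits_orig - alpha * logits_ref

orig = np.array([1.0, 1.4, 1.5])   # hallucinated token 2 narrowly leads
ref = np.array([0.0, 0.0, 3.0])    # reference pass strongly prefers token 2
adj = contrastive_decode(orig, ref)
print(int(orig.argmax()), int(adj.argmax()))  # 2 -> 1: hallucination suppressed
```

Because only the decoding step changes, the approach needs no retraining, which matches the training-free claim in the summary.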

AI · Bullish · arXiv – CS AI · Mar 26 · 6/10

Kirchhoff-Inspired Neural Networks for Evolving High-Order Perception

Researchers propose Kirchhoff-Inspired Neural Networks (KINN), a new deep learning architecture based on Kirchhoff's current law that better mimics biological neural systems. KINN uses state-variable dynamics and differential equations to achieve superior performance on PDE solving and ImageNet classification compared to existing methods.

AI · Bullish · arXiv – CS AI · Mar 26 · 6/10

From Untamed Black Box to Interpretable Pedagogical Orchestration: The Ensemble of Specialized LLMs Architecture for Adaptive Tutoring

Researchers introduced ES-LLMs, a new AI tutoring architecture that separates decision-making from language generation to create more reliable and interpretable educational AI systems. The system outperformed traditional monolithic LLMs in human evaluations (91.7% preference) while reducing costs by 54% and achieving 100% adherence to pedagogical constraints.

AI · Bullish · arXiv – CS AI · Mar 26 · 6/10

Towards Effective Experiential Learning: Dual Guidance for Utilization and Internalization

Researchers propose Dual Guidance Optimization (DGO), a new framework that improves large language model training by combining external experience banks with internal knowledge to better mimic human learning patterns. The approach shows consistent improvements over existing reinforcement learning methods for reasoning tasks.

AI · Bearish · arXiv – CS AI · Mar 26 · 6/10

The Alignment Tax: Response Homogenization in Aligned LLMs and Its Implications for Uncertainty Estimation

Research reveals that RLHF-aligned language models suffer from 'alignment tax' - producing homogenized responses that severely impair uncertainty estimation methods. The study found 40-79% of questions on TruthfulQA generate nearly identical responses, with alignment processes like DPO being the primary cause of this response homogenization.
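Why homogenization breaks uncertainty estimation is easy to see with a sampling-based estimator: if every sample is the same canned answer, the entropy over responses collapses to zero regardless of whether the model is actually right. A minimal sketch (the response strings are invented; real methods such as semantic entropy cluster by meaning rather than exact string match):

```python
from collections import Counter
from math import log

def response_entropy(samples):
    """Shannon entropy over sampled responses: a common ingredient of
    sampling-based uncertainty estimates for LLMs."""
    counts = Counter(samples)
    n = len(samples)
    if len(counts) == 1:
        return 0.0            # fully homogenized: estimator reads "certain"
    return -sum(c / n * log(c / n) for c in counts.values())

diverse = ["Paris", "Lyon", "Paris", "Marseille"]
homogenized = ["Paris"] * 4   # an aligned model repeating one canned answer
print(response_entropy(diverse) > 0, response_entropy(homogenized))
```

The alignment-tax finding is that DPO-style alignment pushes models toward the second case, so the estimator reports high confidence even on questions the model gets wrong.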

AI · Bullish · arXiv – CS AI · Mar 26 · 6/10

MedAidDialog: A Multilingual Multi-Turn Medical Dialogue Dataset for Accessible Healthcare

Researchers have introduced MedAidDialog, a multilingual medical dialogue dataset covering seven languages, and developed MedAidLM, a conversational AI model for preliminary medical consultations. The system uses parameter-efficient fine-tuning on small language models to enable deployment without high-end computational infrastructure while incorporating patient context for personalized consultations.

AI · Bullish · arXiv – CS AI · Mar 26 · 6/10

A Deep Dive into Scaling RL for Code Generation with Synthetic Data and Curricula

Researchers developed a scalable multi-turn synthetic data generation pipeline using reinforcement learning to improve large language models' code generation capabilities. The approach uses teacher models to create structured difficulty progressions and curriculum-based training, showing consistent improvements in code generation across Llama3.1-8B and Qwen models.

🧠 Llama
AI · Bearish · arXiv – CS AI · Mar 26 · 6/10

Who Benefits from RAG? The Role of Exposure, Utility and Attribution Bias

Research reveals that Retrieval-Augmented Generation (RAG) systems exhibit fairness issues, with queries from certain demographic groups systematically receiving higher accuracy than others. The study identifies three key factors affecting fairness: group exposure in retrieved documents, utility of group-specific documents, and attribution bias in how generators use different group documents.

🏢 Meta
AI · Bullish · arXiv – CS AI · Mar 26 · 6/10

Accelerating Diffusion-based Video Editing via Heterogeneous Caching: Beyond Full Computing at Sampled Denoising Timestep

Researchers introduce HetCache, a training-free acceleration framework for diffusion-based video editing that achieves 2.67x speedup by selectively caching contextually relevant tokens instead of processing all attention operations. The method reduces computational redundancy in Diffusion Transformers while maintaining video editing quality and consistency.
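The selective-caching idea can be sketched generically: recompute only the tokens whose inputs moved since the cached timestep and reuse cached outputs for the rest. HetCache's actual relevance selection inside attention is not shown; the change-magnitude test, tolerance, and stand-in compute function below are assumptions.

```python
import numpy as np

def cached_step(x, cache, compute, tol=1e-2):
    """Recompute only tokens whose inputs changed beyond tol since the
    cached timestep; reuse cached outputs for the rest (toy sketch)."""
    if cache is None:
        out = compute(x)
        return out, {"x": x.copy(), "out": out.copy()}, x.shape[0]
    changed = np.abs(x - cache["x"]).max(axis=1) > tol
    out = cache["out"].copy()
    out[changed] = compute(x[changed])
    cache["x"][changed] = x[changed]
    cache["out"][changed] = out[changed]
    return out, cache, int(changed.sum())

compute = lambda t: t * 2.0          # stand-in for an attention block
x0 = np.zeros((4, 3))                # 4 tokens, 3-dim features
out, cache, n0 = cached_step(x0, None, compute)
x1 = x0.copy()
x1[1] += 1.0                         # only token 1 moves between timesteps
out, cache, n1 = cached_step(x1, cache, compute)
print(n0, n1)  # 4 tokens computed at the first step, 1 recomputed after
```

Since the stand-in compute is exact per token, the cached result matches full recomputation here; the real speedup/quality trade-off comes from tolerating small input drift.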

AI · Neutral · arXiv – CS AI · Mar 26 · 6/10

GameplayQA: A Benchmarking Framework for Decision-Dense POV-Synced Multi-Video Understanding of 3D Virtual Agents

Researchers introduce GameplayQA, a new benchmarking framework for evaluating multimodal large language models on 3D virtual agent perception and reasoning tasks. The framework uses densely annotated multiplayer gameplay videos with 2.4K diagnostic QA pairs, revealing substantial performance gaps between current frontier models and human-level understanding.

AI · Bullish · arXiv – CS AI · Mar 26 · 6/10

Enhancing Efficiency and Performance in Deepfake Audio Detection through Neuron-level Dropin & Neuroplasticity Mechanisms

Researchers developed novel 'dropin' and 'plasticity' algorithms inspired by brain neuroplasticity to improve deepfake audio detection efficiency. The methods dynamically adjust neuron counts in model layers, achieving up to 66% reduction in error rates while improving computational efficiency across multiple architectures including ResNet and Wav2Vec.

AI · Bullish · arXiv – CS AI · Mar 26 · 6/10

LensWalk: Agentic Video Understanding by Planning How You See in Videos

Researchers introduced LensWalk, an agentic AI framework that enables Large Language Models to actively control their visual observation of videos through dynamic temporal sampling. The system uses a reason-plan-observe loop to progressively gather evidence, achieving 5% accuracy improvements on challenging video benchmarks without requiring model fine-tuning.

AI · Neutral · arXiv – CS AI · Mar 26 · 6/10

Retrieval Improvements Do Not Guarantee Better Answers: A Study of RAG for AI Policy QA

A research study on retrieval-augmented generation (RAG) systems for AI policy analysis found that improving retrieval quality doesn't necessarily lead to better question-answering performance. The research used 947 AI policy documents and discovered that stronger retrieval can paradoxically cause more confident hallucinations when relevant information is missing.

AI · Bullish · arXiv – CS AI · Mar 26 · 6/10

Learning To Guide Human Decision Makers With Vision-Language Models

Researchers introduce Learning to Guide (LTG), a new AI framework where machines provide interpretable guidance to human decision-makers rather than making automated decisions. The SLOG approach transforms vision-language models into guidance generators using human feedback, showing promise in medical diagnosis applications.

AI · Neutral · arXiv – CS AI · Mar 26 · 6/10

GeoSketch: A Neural-Symbolic Approach to Geometric Multimodal Reasoning with Auxiliary Line Construction and Affine Transformation

Researchers introduce GeoSketch, a neural-symbolic AI framework that solves geometric problems through dynamic visual manipulation, including drawing auxiliary lines and applying transformations. The system combines perception, symbolic reasoning, and interactive sketch actions, achieving superior performance on geometric problem-solving benchmarks compared to static image processing methods.

Page 341 of 762
◆ AI Mentions
🏢 OpenAI 73× · 🏢 Anthropic 43× · 🧠 Claude 35× · 🏢 Nvidia 35× · 🧠 Gemini 19× · 🧠 ChatGPT 19× · 🧠 GPT-4 18× · 🧠 GPT-5 16× · 🧠 Llama 15× · 🏢 Perplexity 12× · 🧠 Opus 9× · 🏢 xAI 9× · 🧠 Sonnet 5× · 🏢 Google 5× · 🏢 Hugging Face 5× · 🏢 Meta 4× · 🧠 Grok 3× · 🏢 Microsoft 3× · 🧠 Haiku 2× · 🧠 Sora 1×
▲ Trending Tags
1. #ai (242) · 2. #geopolitical-risk (228) · 3. #geopolitics (190) · 4. #iran (184) · 5. #market-volatility (121) · 6. #middle-east (112) · 7. #oil-markets (81) · 8. #sanctions (80) · 9. #energy-markets (76) · 10. #geopolitical (74) · 11. #inflation (71) · 12. #openai (65) · 13. #ai-infrastructure (61) · 14. #strait-of-hormuz (55) · 15. #market (53)
Tag Connections
#geopolitics ↔ #iran (55) · #geopolitical ↔ #iran (42) · #geopolitical-risk ↔ #market-volatility (41) · #geopolitical-risk ↔ #oil-markets (38) · #geopolitics ↔ #middle-east (37) · #geopolitics ↔ #oil-markets (37) · #geopolitical-risk ↔ #middle-east (37) · #iran ↔ #trump (29) · #geopolitical-risk ↔ #strait-of-hormuz (29) · #energy-markets ↔ #geopolitical-risk (28)
© 2026 y0.exchange