y0news

AI Pulse News

Models, papers, tools. 21,586 articles with AI-powered sentiment analysis and key takeaways.

AI · Neutral · arXiv – CS AI · Mar 26 · 6/10

DepthCharge: A Domain-Agnostic Framework for Measuring Depth-Dependent Knowledge in Large Language Models

Researchers developed DepthCharge, a framework for measuring how well large language models maintain accurate answers as questions probe progressively deeper into domain-specific knowledge. Testing across four domains revealed significant variation in knowledge depth, with no single model dominating all areas and more expensive models not always achieving superior results.

AI · Bullish · arXiv – CS AI · Mar 26 · 6/10

From Physician Expertise to Clinical Agents: Preserving, Standardizing, and Scaling Physicians' Medical Expertise with Lightweight LLM

Researchers developed Med-Shicheng, a framework that enables lightweight LLMs to learn and transfer medical expertise from distinguished physicians. Built on a 1.5B parameter model, it achieves performance comparable to much larger models like GPT-5 while running on resource-constrained hardware.

Mentions: GPT-5
AI · Neutral · arXiv – CS AI · Mar 26 · 6/10

Qworld: Question-Specific Evaluation Criteria for LLMs

Researchers introduce Qworld, a new method for evaluating large language models that generates question-specific criteria using recursive expansion trees instead of static rubrics. The approach covers 89% of expert-authored criteria and reveals capability differences across 11 frontier LLMs that traditional evaluation methods miss.

AI · Bullish · arXiv – CS AI · Mar 26 · 6/10

Navigating the Concept Space of Language Models

Researchers have developed Concept Explorer, a scalable interactive system for exploring features from sparse autoencoders (SAEs) trained on large language models. The tool uses hierarchical neighborhood embeddings to organize thousands of AI model features into interpretable concept clusters, enabling better discovery and analysis of how language models understand concepts.

AI · Neutral · arXiv – CS AI · Mar 26 · 6/10

Did You Forget What I Asked? Prospective Memory Failures in Large Language Models

Research reveals that large language models violate formatting instructions 2-21% more often when those instructions must be held in mind while performing a complex task, with terminal constraints degrading by up to 50%. Explicit framing and reminder prompts can restore compliance to 90-100% in most cases.

AI · Bullish · arXiv – CS AI · Mar 26 · 6/10

MDKeyChunker: Single-Call LLM Enrichment with Rolling Keys and Key-Based Restructuring for High-Accuracy RAG

Researchers introduce MDKeyChunker, a three-stage pipeline that improves RAG (Retrieval-Augmented Generation) systems by using structure-aware chunking of Markdown documents, single-call LLM enrichment, and semantic key-based restructuring. The system achieves superior retrieval performance with Recall@5=1.000 using BM25 over structural chunks, significantly improving upon traditional fixed-size chunking methods.

Mentions: OpenAI
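Recall@5=1.000 means the gold chunk appears among the top five retrieved chunks for every query. A minimal sketch of how that metric is computed (the ranked lists below are hypothetical; the paper ranks chunks with BM25):

```python
def recall_at_k(ranked_ids, gold_id, k=5):
    """1.0 if the gold chunk appears among the top-k retrieved chunks."""
    return 1.0 if gold_id in ranked_ids[:k] else 0.0

def mean_recall_at_k(results, k=5):
    """Average Recall@k over (ranked_ids, gold_id) pairs; a score of
    1.000 means every query's gold chunk was retrieved in the top k."""
    return sum(recall_at_k(r, g, k) for r, g in results) / len(results)

# Two hypothetical queries, each with its gold chunk in the top 5.
results = [
    (["c3", "c1", "c7", "c2", "c9"], "c1"),
    (["c4", "c8", "c2", "c5", "c6"], "c5"),
]
print(mean_recall_at_k(results, k=5))  # 1.0
```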
AI · Bearish · arXiv – CS AI · Mar 26 · 6/10

Large Language Models and Scientific Discourse: Where's the Intelligence?

A research paper argues that Large Language Models lack true intelligence and understanding compared to humans, as they rely on written discourse rather than tacit knowledge built through social interaction. The authors demonstrate this through examples like the Monty Hall problem, showing that LLM improvements come from changes in training data rather than enhanced reasoning abilities.

Mentions: ChatGPT
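The Monty Hall example is easy to verify empirically: switching doors wins about two thirds of the time, a result that follows from the host's constrained choice rather than from any written explanation. A quick simulation:

```python
import random

def monty_hall_trial(switch: bool) -> bool:
    """One round: car behind a random door, player picks door 0,
    host opens a goat door, player optionally switches."""
    car = random.randrange(3)
    pick = 0
    # Host opens a door that is neither the player's pick nor the car.
    opened = next(d for d in range(3) if d != pick and d != car)
    if switch:
        pick = next(d for d in range(3) if d != pick and d != opened)
    return pick == car

random.seed(0)
n = 100_000
switch_wins = sum(monty_hall_trial(True) for _ in range(n)) / n
stay_wins = sum(monty_hall_trial(False) for _ in range(n)) / n
print(f"switch wins about {switch_wins:.3f}, stay about {stay_wins:.3f}")
```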
AI · Bullish · arXiv – CS AI · Mar 26 · 6/10

Mixture of Demonstrations for Textual Graph Understanding and Question Answering

Researchers propose MixDemo, a new GraphRAG framework that uses a Mixture-of-Experts mechanism to select high-quality demonstrations for improving large language model performance in domain-specific question answering. The framework includes a query-specific graph encoder to reduce noise in retrieved subgraphs and significantly outperforms existing methods across multiple textual graph benchmarks.

AI · Bullish · arXiv – CS AI · Mar 26 · 6/10

Safe Reinforcement Learning with Preference-based Constraint Inference

Researchers propose Preference-based Constrained Reinforcement Learning (PbCRL), a new approach for safe AI decision-making that learns safety constraints from human preferences rather than requiring extensive expert demonstrations. The method addresses limitations in existing Bradley-Terry models by introducing a dead zone mechanism and Signal-to-Noise Ratio loss to better capture asymmetric safety costs and improve constraint alignment.
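The Bradley-Terry model the paper builds on scores a pairwise preference with a logistic function of a score difference. A minimal sketch using trajectory safety costs (PbCRL's dead-zone mechanism and SNR loss modify this basic form and are not reproduced here):

```python
import math

def bt_preference(cost_a, cost_b):
    """Standard Bradley-Terry preference: the probability that
    trajectory A (lower safety cost) is preferred over trajectory B."""
    return 1.0 / (1.0 + math.exp(-(cost_b - cost_a)))

# Equal costs give a coin flip; a clearly safer trajectory is preferred.
print(bt_preference(1.0, 1.0))             # 0.5
print(round(bt_preference(0.2, 1.5), 3))   # well above 0.5
```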

AI · Bullish · arXiv – CS AI · Mar 26 · 6/10

AscendOptimizer: Episodic Agent for Ascend NPU Operator Optimization

Researchers introduce AscendOptimizer, an AI agent that optimizes operators for Huawei's Ascend NPUs through evolutionary search and experience-based learning. The system achieved 1.19x geometric-mean speedup over baselines on 127 real operators, with nearly 50% outperforming reference implementations.
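The headline 1.19x figure is a geometric mean, the standard aggregate for speedup ratios because a 2x speedup and a 0.5x slowdown average out to exactly 1.0x. A sketch with hypothetical per-operator ratios:

```python
import math

def geometric_mean_speedup(speedups):
    """Geometric mean of per-operator speedup ratios:
    exp(mean(log(s_i)))."""
    return math.exp(sum(math.log(s) for s in speedups) / len(speedups))

# Hypothetical ratios (baseline_time / optimized_time) for five operators.
ratios = [1.50, 0.95, 1.20, 1.05, 1.40]
print(round(geometric_mean_speedup(ratios), 3))
```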

AI · Bearish · arXiv – CS AI · Mar 26 · 6/10

PoiCGAN: A Targeted Poisoning Based on Feature-Label Joint Perturbation in Federated Learning

Researchers propose PoiCGAN, a targeted poisoning attack on federated learning that uses feature-label joint perturbation to bypass detection mechanisms. The attack achieves success rates up to 83.97% higher than existing methods while limiting the drop in overall model accuracy to under 8.87%.

AI · Bullish · arXiv – CS AI · Mar 26 · 6/10

APreQEL: Adaptive Mixed Precision Quantization For Edge LLMs

Researchers propose APreQEL, an adaptive mixed precision quantization method for deploying large language models on edge devices. The approach optimizes memory, latency, and accuracy by applying different quantization levels to different layers based on their importance and hardware characteristics.
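The core idea, assigning different bit-widths to different layers, can be sketched with a toy policy (the importance rule below is an assumption for illustration, not APreQEL's actual criterion):

```python
import random
import statistics

def quantize(weights, bits):
    """Symmetric uniform quantization to signed `bits`-bit integers,
    then dequantization back to floats."""
    qmax = 2 ** (bits - 1) - 1
    scale = max(abs(w) for w in weights) / qmax
    return [max(-qmax, min(qmax, round(w / scale))) * scale for w in weights]

def mixed_precision(layers, importance, hi_bits=8, lo_bits=4):
    """Toy adaptive policy: layers at or above the median importance
    keep more bits; the rest are quantized more aggressively."""
    cutoff = statistics.median(importance)
    return [
        quantize(w, hi_bits if imp >= cutoff else lo_bits)
        for w, imp in zip(layers, importance)
    ]

random.seed(1)
layers = [[random.gauss(0, 1) for _ in range(64)] for _ in range(4)]
deq = mixed_precision(layers, importance=[0.9, 0.2, 0.7, 0.1])
mse = [sum((a - b) ** 2 for a, b in zip(w, d)) / len(w)
       for w, d in zip(layers, deq)]
# Low-importance layers (indices 1 and 3) got 4 bits and show larger error.
print([round(e, 5) for e in mse])
```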

AI · Neutral · arXiv – CS AI · Mar 26 · 6/10

LLMORPH: Automated Metamorphic Testing of Large Language Models

Researchers have developed LLMORPH, an automated testing tool for Large Language Models that uses Metamorphic Testing to identify faulty behaviors without requiring human-labeled data. The tool was tested on GPT-4, LLAMA3, and HERMES 2 across four NLP benchmarks, generating over 561,000 test executions and successfully exposing model inconsistencies.

Mentions: GPT-4
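Metamorphic Testing sidesteps the need for labeled data by checking relations between outputs on related inputs. A minimal sketch with a stub model and a hypothetical invariance relation (LLMORPH's actual relations are defined in the paper):

```python
def metamorphic_test(model, source_input, transform, relation):
    """Run the model on a source input and on a transformed follow-up
    input, then check that the metamorphic relation holds between the
    two outputs. No ground-truth label is needed."""
    return relation(model(source_input), model(transform(source_input)))

# Hypothetical relation: the answer should be invariant to polite framing.
transform = lambda q: "Please answer briefly: " + q
relation = lambda a, b: a.strip().lower() == b.strip().lower()

# Stub standing in for an LLM call; a self-consistent model passes.
model = lambda q: "Paris" if "capital of france" in q.lower() else "unknown"
print(metamorphic_test(model, "What is the capital of France?",
                       transform, relation))  # True
```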
AI · Bullish · arXiv – CS AI · Mar 26 · 6/10

LLMLOOP: Improving LLM-Generated Code and Tests through Automated Iterative Feedback Loops

Researchers have developed LLMLOOP, a framework that automatically refines LLM-generated code and test cases through five iterative loops addressing compilation errors, static analysis issues, test failures, and quality improvements. The tool was evaluated on HUMANEVAL-X benchmark and demonstrated effectiveness in improving the quality of AI-generated code outputs.

AI · Bullish · arXiv – CS AI · Mar 26 · 6/10

PLACID: Privacy-preserving Large language models for Acronym Clinical Inference and Disambiguation

Researchers developed PLACID, a privacy-preserving system using small on-device AI models (2B-10B parameters) for clinical acronym disambiguation in healthcare settings. The cascaded approach combines general-purpose models for detection with domain-specific biomedical models, achieving 81% expansion accuracy while keeping sensitive health data local.

AI · Neutral · arXiv – CS AI · Mar 26 · 6/10

Assessment Design in the AI Era: A Method for Identifying Items Functioning Differentially for Humans and Chatbots

Researchers developed a method using Differential Item Functioning (DIF) analysis to identify systematic differences between human and AI chatbot performance on educational assessments. The study tested six leading chatbots including ChatGPT-4o, Gemini, and Claude on chemistry and entrance exams to help educators design AI-resistant assessments.

Mentions: Meta, ChatGPT, Claude
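The underlying comparison can be sketched as flagging items whose human and chatbot correctness rates diverge sharply (a full DIF analysis additionally conditions on overall ability; the rates below are invented):

```python
def dif_flags(human_correct, bot_correct, threshold=0.3):
    """Flag assessment items whose human vs. chatbot correctness rates
    differ by at least `threshold`; a simplified stand-in for a full
    Differential Item Functioning analysis."""
    flags = []
    for item, h in human_correct.items():
        gap = bot_correct[item] - h
        if abs(gap) >= threshold:
            flags.append((item, round(gap, 2)))
    return flags

# Hypothetical per-item correctness rates on a chemistry exam.
human = {"q1": 0.55, "q2": 0.70, "q3": 0.40}
bots = {"q1": 0.98, "q2": 0.72, "q3": 0.05}
# q1 is much easier for chatbots; q3 is much harder (AI-resistant).
print(dif_flags(human, bots))  # [('q1', 0.43), ('q3', -0.35)]
```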
AI · Neutral · arXiv – CS AI · Mar 26 · 6/10

The Diminishing Returns of Early-Exit Decoding in Modern LLMs

Research shows that newer LLMs have diminishing effectiveness for early-exit decoding techniques due to improved architectures that reduce layer redundancy. The study finds that dense transformers outperform Mixture-of-Experts models for early-exit, with larger models (20B+ parameters) and base pretrained models showing the highest early-exit potential.
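Early-exit decoding emits a token at the first layer whose intermediate prediction is already confident, skipping the remaining layers. A minimal confidence-threshold sketch with hypothetical per-layer logits:

```python
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def early_exit(per_layer_logits, threshold=0.9):
    """Decode at the first layer whose top-token probability clears the
    threshold instead of always running all layers.
    Returns (layer_index, token_id)."""
    for i, logits in enumerate(per_layer_logits):
        probs = softmax(logits)
        top = max(range(len(probs)), key=probs.__getitem__)
        if probs[top] >= threshold:
            return i, top
    return len(per_layer_logits) - 1, top  # fall back to the final layer

# Hypothetical per-layer logits over a 3-token vocabulary.
layers = [
    [0.1, 0.2, 0.15],  # early layer: nearly uniform, not confident
    [0.5, 2.0, 0.3],   # mid layer: leaning toward token 1
    [0.2, 6.0, 0.1],   # late layer: confident in token 1
]
print(early_exit(layers, threshold=0.9))  # (2, 1)
```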

AI · Neutral · arXiv – CS AI · Mar 26 · 6/10

PoliticsBench: Benchmarking Political Values in Large Language Models with Multi-Turn Roleplay

Researchers developed PoliticsBench, a new framework to evaluate political bias in large language models through multi-turn roleplay scenarios. The study found that 7 out of 8 major LLMs (Claude, Deepseek, Gemini, GPT, Llama, Qwen) showed left-leaning political bias, while only Grok exhibited right-leaning tendencies.

Mentions: Claude, Gemini, Llama
AI · Neutral · arXiv – CS AI · Mar 26 · 6/10

Can VLMs Reason Robustly? A Neuro-Symbolic Investigation

Researchers investigated whether Vision-Language Models (VLMs) can reason robustly under distribution shifts and found that fine-tuned VLMs achieve high accuracy in-distribution but fail to generalize. They propose VLC, a neuro-symbolic method combining VLM-based concept recognition with circuit-based symbolic reasoning that demonstrates consistent performance under covariate shifts.

AI · Bullish · arXiv – CS AI · Mar 26 · 6/10

Latent Bias Alignment for High-Fidelity Diffusion Inversion in Real-World Image Reconstruction and Manipulation

Researchers have developed new methods called Latent Bias Optimization (LBO) and Image Latent Boosting (ILB) to improve diffusion model performance in reconstructing real-world images from noise. The techniques address key challenges in diffusion inversion by reducing misalignment between generation processes and improving reconstruction quality for applications like image editing.

AI · Neutral · arXiv – CS AI · Mar 26 · 6/10

Revealing Multi-View Hallucination in Large Vision-Language Models

Researchers identify 'multi-view hallucination' as a major problem in large vision-language models (LVLMs): these AI systems confuse visual information from different viewpoints or instances. They built the MVH-Bench benchmark and developed Reference Shift Contrastive Decoding (RSCD), which improved performance by up to 34.6 points without retraining the model.
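Contrastive decoding in general penalizes tokens that a hallucination-prone reference view also favors; RSCD's reference-shift construction is more involved, but the basic scoring looks like this (all numbers hypothetical):

```python
def contrastive_scores(base_logits, reference_logits, alpha=1.0):
    """Score each token by how much the trusted view prefers it over a
    hallucination-prone reference view. This is generic contrastive
    decoding, not RSCD's exact formulation."""
    return [b - alpha * r for b, r in zip(base_logits, reference_logits)]

# Hypothetical 4-token vocabulary: the reference view inflates token 2,
# so naive argmax over the base logits would pick the suspect token.
base = [1.0, 0.5, 2.0, 0.2]
reference = [0.8, 0.4, 2.5, 0.1]
adjusted = contrastive_scores(base, reference)
print(max(range(len(adjusted)), key=adjusted.__getitem__))  # 0
```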

AI · Bullish · arXiv – CS AI · Mar 26 · 6/10

Kirchhoff-Inspired Neural Networks for Evolving High-Order Perception

Researchers propose Kirchhoff-Inspired Neural Networks (KINN), a new deep learning architecture based on Kirchhoff's current law that better mimics biological neural systems. KINN uses state-variable dynamics and differential equations to achieve superior performance on PDE solving and ImageNet classification compared to existing methods.

AI · Bullish · arXiv – CS AI · Mar 26 · 6/10

From Untamed Black Box to Interpretable Pedagogical Orchestration: The Ensemble of Specialized LLMs Architecture for Adaptive Tutoring

Researchers introduced ES-LLMs, a new AI tutoring architecture that separates decision-making from language generation to create more reliable and interpretable educational AI systems. The system outperformed traditional monolithic LLMs in human evaluations (91.7% preference) while reducing costs by 54% and achieving 100% adherence to pedagogical constraints.

AI · Bullish · arXiv – CS AI · Mar 26 · 6/10

Towards Effective Experiential Learning: Dual Guidance for Utilization and Internalization

Researchers propose Dual Guidance Optimization (DGO), a new framework that improves large language model training by combining external experience banks with internal knowledge to better mimic human learning patterns. The approach shows consistent improvements over existing reinforcement learning methods for reasoning tasks.

AI · Bearish · arXiv – CS AI · Mar 26 · 6/10

The Alignment Tax: Response Homogenization in Aligned LLMs and Its Implications for Uncertainty Estimation

Research reveals that RLHF-aligned language models suffer from an 'alignment tax': they produce homogenized responses that severely impair uncertainty estimation methods. The study found that 40-79% of TruthfulQA questions elicit nearly identical responses, with alignment processes such as DPO being the primary cause of this homogenization.

◆ AI Mentions
OpenAI 93× · Claude 60× · Gemini 55× · Anthropic 52× · Llama 50× · Nvidia 46× · GPT-5 42× · GPT-4 36× · ChatGPT 29× · Perplexity 27× · Hugging Face 24× · Meta 21× · xAI 14× · Opus 12× · Grok 9× · Sonnet 8× · Google 5× · Stable Diffusion 4× · o1 3× · Microsoft 2×
▲ Trending Tags
1. #machine-learning (324) · 2. #ai (284) · 3. #reinforcement-learning (160) · 4. #ai-safety (131) · 5. #geopolitics (126) · 6. #iran (122) · 7. #language-models (122) · 8. #ai-infrastructure (115) · 9. #geopolitical-risk (109) · 10. #neural-networks (108) · 11. #market (97) · 12. #openai (79) · 13. #deep-learning (77) · 14. #benchmark (77) · 15. #bitcoin (72)
Tag Sentiment
#machine-learning: 324 articles
#ai: 284 articles
#reinforcement-learning: 160 articles
#ai-safety: 131 articles
#geopolitics: 126 articles
#iran: 122 articles
#language-models: 122 articles
#ai-infrastructure: 115 articles
#geopolitical-risk: 109 articles
#neural-networks: 108 articles
Tag Connections
#geopolitics ↔ #iran: 32
#geopolitical ↔ #iran: 30
#ai ↔ #artificial-intelligence: 26
#geopolitical-risk ↔ #strait-of-hormuz: 25
#ai ↔ #google: 24
#geopolitics ↔ #oil-markets: 22
#geopolitical-risk ↔ #oil-markets: 22
#energy-markets ↔ #geopolitical-risk: 22
#iran ↔ #trump: 21
#geopolitics ↔ #middle-east: 21
© 2026 y0.exchange