y0news

AI Pulse News

Models, papers, tools. 19,008 articles with AI-powered sentiment analysis and key takeaways.

AI · Bullish · arXiv – CS AI · Apr 7 · 6/10

Vocabulary Dropout for Curriculum Diversity in LLM Co-Evolution

Researchers introduce vocabulary dropout, a technique to prevent diversity collapse in co-evolutionary language model training where one model generates problems and another solves them. The method sustains proposer diversity and improves mathematical reasoning performance by +4.4 points on average in Qwen3 models.
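The core idea in the summary above can be sketched in a few lines. This is a hypothetical illustration (the function name, toy vocabulary, and seeds are invented, not the paper's implementation): masking a random slice of the proposer's vocabulary each episode forces problem generation to draw on different word sets.

```python
import random

def vocabulary_dropout(vocab, p_drop, seed=0):
    """Drop each token with probability p_drop for one proposer episode."""
    rng = random.Random(seed)
    kept = [tok for tok in vocab if rng.random() >= p_drop]
    return kept or list(vocab)  # never hand the proposer an empty vocabulary

# Toy math-problem vocabulary; each episode sees a different surviving subset.
vocab = ["add", "subtract", "multiply", "divide", "prime", "fraction"]
episode_vocabs = [vocabulary_dropout(vocab, 0.5, seed=s) for s in range(3)]
```

Varying the surviving subset per episode is one plausible way to keep a problem-generating model from collapsing onto a narrow phrasing style.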

AI · Neutral · arXiv – CS AI · Apr 7 · 6/10

Incentives shape how humans co-create with generative AI

A randomized control trial reveals that incentive structures significantly influence how humans use generative AI in creative tasks. When participants were rewarded for originality rather than just quality, they produced more diverse collective output by using AI more selectively for brainstorming and editing rather than copying suggestions verbatim.

AI · Bullish · arXiv – CS AI · Apr 7 · 6/10

LangFIR: Discovering Sparse Language-Specific Features from Monolingual Data for Language Steering

Researchers introduce LangFIR, a method that enables better language control in multilingual AI models using only monolingual data instead of expensive parallel datasets. The technique identifies sparse language-specific features and achieves superior performance in controlling language output across multiple models including Gemma and Llama.

🧠 Llama
AI · Bullish · arXiv – CS AI · Apr 7 · 6/10

Focus Matters: Phase-Aware Suppression for Hallucination in Vision-Language Models

Researchers developed a new method to reduce hallucinations in Large Vision-Language Models (LVLMs) by identifying a three-phase attention structure in vision processing and selectively suppressing low-attention tokens during the focus phase. The training-free approach significantly reduces object hallucinations while maintaining caption quality with minimal inference latency impact.

AI · Bullish · arXiv – CS AI · Apr 7 · 6/10

Unveiling Language Routing Isolation in Multilingual MoE Models for Interpretable Subnetwork Adaptation

Researchers discovered that multilingual MoE AI models exhibit 'Language Routing Isolation,' where high and low-resource languages activate different expert sets. They developed RISE, a framework that exploits this isolation to improve low-resource language performance by up to 10.85% F1 score while preserving other language capabilities.
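The routing-isolation claim can be made concrete with a simple overlap measure. The sketch below is hypothetical (toy expert ids and an invented routing table, not the paper's data); it shows how Jaccard overlap between per-language expert sets would expose an isolated low-resource language:

```python
def expert_jaccard(experts_a, experts_b):
    """Jaccard overlap between the expert sets two languages activate."""
    a, b = set(experts_a), set(experts_b)
    return len(a & b) / len(a | b)

# Hypothetical routing logs: ids of the experts each language activates most.
routing = {
    "en": [0, 1, 2, 3],
    "fr": [0, 1, 2, 4],
    "sw": [5, 6, 7],  # low-resource language routed to a near-disjoint set
}
overlap_high = expert_jaccard(routing["en"], routing["fr"])
overlap_isolated = expert_jaccard(routing["en"], routing["sw"])
```

Low overlap for a language would flag an isolated subnetwork that could then be adapted without disturbing the experts other languages rely on.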

AI · Neutral · arXiv – CS AI · Apr 7 · 6/10

Can Humans Tell? A Dual-Axis Study of Human Perception of LLM-Generated News

A study using the JudgeGPT platform found that humans cannot reliably distinguish between AI-generated and human-written news articles across 2,318 judgments from 1,054 participants. The study tested six different LLMs and concluded that user-side detection is not viable, suggesting the need for cryptographic content provenance systems.

AI · Bullish · arXiv – CS AI · Apr 7 · 6/10

Automated Attention Pattern Discovery at Scale in Large Language Models

Researchers developed AP-MAE, a vision transformer model that analyzes attention patterns in large language models at scale to improve interpretability. The system can predict code generation accuracy with 55-70% precision and enable targeted interventions that increase model accuracy by 13.6%.

AI · Bullish · arXiv – CS AI · Apr 7 · 6/10

Representational Collapse in Multi-Agent LLM Committees: Measurement and Diversity-Aware Consensus

Research reveals that multi-agent LLM committees suffer from 'representational collapse' where agents produce highly similar outputs despite different role prompts, with mean cosine similarity of 0.888. A new diversity-aware consensus protocol (DALC) improves accuracy to 87% while reducing token costs by 26% compared to traditional self-consistency methods.
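The collapse metric quoted above (mean pairwise cosine similarity) is straightforward to compute. Below is a minimal, self-contained sketch with invented toy embeddings, not the paper's data:

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(u, v))
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(y * y for y in v))
    return dot / (nu * nv)

def mean_pairwise_cosine(embeddings):
    """Mean cosine similarity over all agent pairs; near 1.0 signals collapse."""
    n = len(embeddings)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    return sum(cosine(embeddings[i], embeddings[j]) for i, j in pairs) / len(pairs)

# Toy output embeddings for three "differently prompted" agents that have
# nonetheless converged on nearly identical representations.
collapsed = [[1.0, 0.0, 0.1], [0.9, 0.05, 0.1], [1.0, 0.02, 0.12]]
score = mean_pairwise_cosine(collapsed)
```

A score this close to 1.0 despite distinct role prompts is exactly the symptom the paper's 0.888 figure quantifies.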

AI · Bullish · arXiv – CS AI · Apr 7 · 6/10

I-CALM: Incentivizing Confidence-Aware Abstention for LLM Hallucination Mitigation

Researchers developed I-CALM, a prompt-based framework that reduces AI hallucinations by encouraging language models to abstain from answering when uncertain, rather than providing confident but incorrect responses. The method uses verbal confidence assessment and reward schemes to improve reliability without model retraining.

🧠 GPT-5
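A minimal sketch of the confidence-gated abstention idea above, assuming a model that verbalizes a confidence score alongside its answer. The threshold, function names, and reward values are hypothetical illustrations, not the paper's actual scheme:

```python
def decide(answer, stated_confidence, threshold=0.7):
    """Abstain when the model's verbalized confidence falls below threshold."""
    return answer if stated_confidence >= threshold else "ABSTAIN"

def reward(decision, correct_answer):
    """Toy incentive: abstaining scores above a confident wrong answer."""
    if decision == "ABSTAIN":
        return 0.0
    return 1.0 if decision == correct_answer else -1.0
```

Under this reward shape, a model is better off abstaining than guessing whenever it is genuinely unsure, which is the incentive such a framework aims to create without retraining.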
AI · Bullish · arXiv – CS AI · Apr 7 · 6/10

Automating Cloud Security and Forensics Through a Secure-by-Design Generative AI Framework

Researchers developed a secure-by-design AI framework combining PromptShield and CIAF to automate cloud security and forensic investigations while protecting against prompt injection attacks. The system achieved over 93% accuracy in classification tasks and enhanced ransomware detection in AWS and Azure environments.

AI · Bullish · arXiv – CS AI · Apr 7 · 6/10

VLA-Forget: Vision-Language-Action Unlearning for Embodied Foundation Models

Researchers introduce VLA-Forget, a new unlearning framework for vision-language-action (VLA) models used in robotic manipulation. The hybrid approach addresses the challenge of removing unsafe or unwanted behaviors from embodied AI foundation models while preserving their core perception, language, and action capabilities.

AI · Neutral · arXiv – CS AI · Apr 7 · 6/10

Extracting and Steering Emotion Representations in Small Language Models: A Methodological Comparison

Researchers conducted the first comprehensive analysis of emotion representations in small language models (100M-10B parameters), finding that these models do possess internal emotion vectors similar to larger frontier models. The study evaluated 9 models across 5 architectural families and discovered that emotion representations localize at middle transformer layers, with generation-based extraction methods proving superior to comprehension-based approaches.

🏢 Perplexity · 🧠 Llama
AI · Neutral · arXiv – CS AI · Apr 7 · 6/10

Graphic-Design-Bench: A Comprehensive Benchmark for Evaluating AI on Graphic Design Tasks

Researchers introduce GraphicDesignBench (GDB), the first comprehensive benchmark suite for evaluating AI models on professional graphic design tasks including layout, typography, and animation. Testing reveals current AI models struggle with spatial reasoning, vector code generation, and typographic precision despite showing promise in high-level semantic understanding.

AI · Neutral · arXiv – CS AI · Apr 7 · 6/10

ClawArena: Benchmarking AI Agents in Evolving Information Environments

Researchers introduce ClawArena, a new benchmark for evaluating AI agents' ability to maintain accurate beliefs in evolving information environments with conflicting sources. The benchmark tests 64 scenarios across 8 professional domains, revealing significant performance gaps between different AI models and frameworks in handling dynamic belief revision and multi-source reasoning.

AI · Bearish · arXiv – CS AI · Apr 7 · 6/10

Which English Do LLMs Prefer? Triangulating Structural Bias Towards American English in Foundation Models

A new research study reveals that major large language models exhibit systematic bias toward American English over British English across training data, tokenization, and outputs. The research introduces DiAlign, a method for measuring dialectal alignment, and finds evidence of linguistic homogenization that could impact global AI equity.

AI · Bullish · arXiv – CS AI · Apr 7 · 6/10

APPA: Adaptive Preference Pluralistic Alignment for Fair Federated RLHF of LLMs

Researchers propose APPA, a new framework for aligning large language models with diverse human preferences in federated learning environments. The method dynamically reweights group-level rewards to improve fairness, achieving up to 28% better alignment for underperforming groups while maintaining overall model performance.

🏢 Meta · 🧠 Llama
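One plausible reading of "dynamically reweights group-level rewards" is a softmax over negative average rewards, so that groups currently doing worse receive proportionally more training weight. The function and temperature below are assumptions for illustration, not the paper's formulation:

```python
import math

def group_weights(group_rewards, temperature=1.0):
    """Map average per-group rewards to normalized training weights.
    Lower-reward (underserved) groups receive larger weights."""
    raw = {g: math.exp(-r / temperature) for g, r in group_rewards.items()}
    z = sum(raw.values())
    return {g: w / z for g, w in raw.items()}

# Toy averages: group_b is underperforming, so it is upweighted.
weights = group_weights({"group_a": 0.9, "group_b": 0.4})
```

The temperature would control how aggressively the scheme favors lagging groups over preserving aggregate performance.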
AI · Neutral · arXiv – CS AI · Apr 7 · 6/10

Poisoned Identifiers Survive LLM Deobfuscation: A Case Study on Claude Opus 4.6

A case study reveals that when Claude Opus 4.6 deobfuscates JavaScript code, poisoned identifier names from the original string table consistently survive in the reconstructed code, even when the AI demonstrates correct understanding of the code's semantics. Changing the task framing from 'deobfuscate' to 'write fresh implementation' significantly reduced this persistence while maintaining algorithmic accuracy.

🧠 Claude · 🧠 Haiku · 🧠 Opus
AI · Bullish · arXiv – CS AI · Apr 7 · 6/10

HighFM: Towards a Foundation Model for Learning Representations from High-Frequency Earth Observation Data

Researchers have developed HighFM, a foundation model for analyzing high-frequency Earth observation data using over 2TB of satellite imagery to enable real-time disaster monitoring. The model adapts masked autoencoding frameworks with temporal encodings to capture short-term environmental changes and demonstrates superior performance in cloud masking and fire detection tasks.

AI · Bullish · arXiv – CS AI · Apr 7 · 6/10

GROUNDEDKG-RAG: Grounded Knowledge Graph Index for Long-document Question Answering

Researchers introduced GroundedKG-RAG, a new retrieval-augmented generation system that creates knowledge graphs directly grounded in source documents to improve long-document question answering. The system reduces resource consumption and hallucinations while maintaining accuracy comparable to state-of-the-art models at lower cost.

AI · Bullish · arXiv – CS AI · Apr 7 · 6/10

Context is All You Need

Researchers introduce CONTXT, a lightweight neural network adaptation method that improves AI model performance when deployed on data different from training data. The technique uses simple additive and multiplicative transforms to modulate internal representations, providing consistent gains across both discriminative and generative models including LLMs.
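"Simple additive and multiplicative transforms" suggests a FiLM-style per-feature affine modulation. The sketch below is an assumption about what such a transform could look like, with invented names and toy values, not the paper's method:

```python
def modulate(hidden, gamma, beta):
    """Per-feature affine modulation: h' = gamma * h + beta.
    In a lightweight-adaptation setup, only gamma and beta would be trained
    on the new domain while the backbone stays frozen."""
    return [g * h + b for h, g, b in zip(hidden, gamma, beta)]

hidden = [0.5, -1.0, 2.0]
identity = modulate(hidden, [1.0] * 3, [0.0] * 3)  # identity init: no change
shifted = modulate([1.0, 1.0], [2.0, 0.5], [0.5, -0.5])
```

Initializing gamma to ones and beta to zeros makes the transform a no-op, so adaptation starts from the original model's behavior.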

AI · Bullish · arXiv – CS AI · Apr 7 · 6/10

Training Transformers in Cosine Coefficient Space

Researchers developed a new method to train transformer neural networks using discrete cosine transform (DCT) coefficients, achieving the same performance while using only 52% of the parameters. The technique requires no architectural changes and simply replaces standard linear layers with spectral layers that store DCT coefficients instead of full weight matrices.

🏢 Perplexity
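The idea of storing DCT coefficients instead of full weight matrices can be demonstrated on a single toy "weight row": a smooth row compresses well, so a truncated coefficient vector reconstructs it closely. The pure-Python orthonormal DCT below is a generic sketch of the transform itself, not the paper's spectral layer (which learns coefficients directly rather than truncating a trained matrix):

```python
import math

def dct(x):
    """Orthonormal DCT-II."""
    N = len(x)
    out = []
    for k in range(N):
        s = sum(x[n] * math.cos(math.pi * (n + 0.5) * k / N) for n in range(N))
        out.append((math.sqrt(1 / N) if k == 0 else math.sqrt(2 / N)) * s)
    return out

def idct(c):
    """Orthonormal inverse (DCT-III)."""
    N = len(c)
    return [sum((math.sqrt(1 / N) if k == 0 else math.sqrt(2 / N)) * c[k]
                * math.cos(math.pi * (n + 0.5) * k / N) for k in range(N))
            for n in range(N)]

# A smooth toy "weight row": zeroing the top half of the coefficient vector
# still reconstructs it closely, i.e. half the storage for little error.
row = [math.sin(0.3 * n) for n in range(16)]
coeffs = dct(row)
truncated = coeffs[:8] + [0.0] * 8  # keep 8 coefficients instead of 16 weights
approx = idct(truncated)
max_err = max(abs(a - b) for a, b in zip(row, approx))
```

Keeping roughly half the coefficients in this toy case echoes the paper's reported 52% parameter budget, though the analogy is only illustrative.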
AI · Bullish · arXiv – CS AI · Apr 7 · 6/10

Conversational Control with Ontologies for Large Language Models: A Lightweight Framework for Constrained Generation

Researchers developed a lightweight framework that uses ontological definitions to provide modular and explainable control over Large Language Model outputs in conversational systems. The method fine-tunes LLMs to generate content according to specific constraints like English proficiency level and content polarity, consistently outperforming pre-trained baselines across seven state-of-the-art models.

AI · Bullish · arXiv – CS AI · Apr 7 · 6/10

DP-OPD: Differentially Private On-Policy Distillation for Language Models

Researchers have developed DP-OPD (Differentially Private On-Policy Distillation), a new framework for training privacy-preserving language models that significantly improves performance over existing methods. The approach simplifies the training pipeline by eliminating the need for DP teacher training and offline synthetic text generation while maintaining strong privacy guarantees.

🏢 Perplexity
AI · Neutral · arXiv – CS AI · Apr 7 · 6/10

Reproducibility study on how to find Spurious Correlations, Shortcut Learning, Clever Hans or Group-Distributional non-robustness and how to fix them

A reproducibility study unifies research on spurious correlations in deep neural networks across different domains, comparing correction methods including XAI-based approaches. The research finds that Counterfactual Knowledge Distillation (CFKD) most effectively improves model generalization, though practical deployment remains challenging due to group labeling dependencies and data scarcity issues.

AI · Neutral · arXiv – CS AI · Apr 7 · 6/10

Multilingual Prompt Localization for Agent-as-a-Judge: Language and Backbone Sensitivity in Requirement-Level Evaluation

A research study reveals that AI model performance rankings change dramatically based on the evaluation language used, with GPT-4o performing best in English while Gemini leads in Arabic and Hindi. The study tested 55 development tasks across five languages and six AI models, showing no single model dominates across all languages.

🧠 GPT-4 · 🧠 Gemini
Page 322 of 761
◆ AI Mentions
🏢 OpenAI · 74×
🏢 Anthropic · 45×
🧠 Claude · 35×
🏢 Nvidia · 33×
🧠 Gemini · 19×
🧠 ChatGPT · 19×
🧠 GPT-4 · 18×
🧠 GPT-5 · 16×
🧠 Llama · 15×
🏢 Perplexity · 12×
🧠 Opus · 9×
🏢 xAI · 9×
🏢 Hugging Face · 5×
🏢 Google · 5×
🧠 Sonnet · 5×
🏢 Meta · 4×
🏢 Microsoft · 3×
🧠 Grok · 3×
🧠 Haiku · 2×
🧠 Sora · 1×
▲ Trending Tags
1. #ai · 242
2. #geopolitical-risk · 235
3. #geopolitics · 208
4. #iran · 190
5. #market-volatility · 124
6. #middle-east · 120
7. #oil-markets · 88
8. #sanctions · 87
9. #energy-markets · 78
10. #inflation · 76
11. #geopolitical · 74
12. #openai · 65
13. #ai-infrastructure · 61
14. #strait-of-hormuz · 58
15. #market · 53
Tag Connections
#geopolitics ↔ #iran · 62
#geopolitical-risk ↔ #market-volatility · 42
#geopolitics ↔ #oil-markets · 42
#geopolitical ↔ #iran · 42
#geopolitical-risk ↔ #middle-east · 40
#geopolitics ↔ #middle-east · 39
#geopolitical-risk ↔ #oil-markets · 39
#oil-markets ↔ #strait-of-hormuz · 30
#geopolitical-risk ↔ #strait-of-hormuz · 29
#energy-markets ↔ #geopolitical-risk · 29
© 2026 y0.exchange