🧠 AI

12,901 AI articles curated from 50+ sources with AI-powered sentiment analysis, importance scoring, and key takeaways.

AI · Bullish · MarkTechPost · Mar 11 · 6/10

How to Build a Self-Designing Meta-Agent That Automatically Constructs, Instantiates, and Refines Task-Specific AI Agents

This tutorial demonstrates building a Meta-Agent system that automatically designs and instantiates task-specific AI agents from simple descriptions. The system dynamically analyzes tasks, selects appropriate tools, configures memory architecture and planners, then creates fully functional agent runtimes without relying on static templates.
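The core loop can be illustrated with a minimal sketch, assuming a keyword-driven tool selector and simple heuristics standing in for the tutorial's dynamic configuration; all names here (`TOOL_REGISTRY`, `build_agent`) are hypothetical, not the tutorial's actual API.

```python
# Hypothetical sketch of the meta-agent pattern: analyze a task description,
# select tools from a registry, and assemble an agent configuration without
# a static template.

TOOL_REGISTRY = {
    "search": "web_search_tool",
    "calculate": "calculator_tool",
    "summarize": "summarizer_tool",
}

def analyze_task(description: str) -> list:
    """Pick tools whose trigger keyword appears in the task description."""
    return [tool for kw, tool in TOOL_REGISTRY.items() if kw in description.lower()]

def build_agent(description: str) -> dict:
    """Assemble an agent spec: tools, memory architecture, and planner."""
    tools = analyze_task(description)
    return {
        "task": description,
        "tools": tools,
        # Simple stand-ins for the tutorial's dynamic configuration step.
        "memory": "episodic" if len(tools) > 1 else "buffer",
        "planner": "react" if tools else "direct",
    }

agent = build_agent("Search the web and summarize the results")
print(agent["tools"])   # ['web_search_tool', 'summarizer_tool']
print(agent["memory"])  # 'episodic'
```

A real meta-agent would replace the keyword matching with an LLM call, but the shape is the same: task analysis feeds tool selection, which in turn drives memory and planner choices.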

AI · Bullish · arXiv – CS AI · Mar 11 · 6/10

LDP: An Identity-Aware Protocol for Multi-Agent LLM Systems

Researchers present LLM Delegate Protocol (LDP), a new AI-native communication protocol for multi-agent LLM systems that introduces identity awareness, progressive payloads, and governance mechanisms. The protocol achieves 12x lower latency on simple tasks and 37% token reduction compared to existing protocols like A2A, though quality improvements remain limited in small delegate pools.
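The two headline ideas, identity awareness and progressive payloads, can be sketched as follows; the real LDP wire format is not described in this summary, so every field name below is an assumption for illustration only.

```python
from dataclasses import dataclass, field

# Illustrative sketch only: `DelegateMessage` and its fields are assumptions
# showing the two mechanisms the abstract highlights, identity awareness
# (stable sender identity plus claimed capabilities) and progressive
# payloads (cheap summary first, full detail only on request).

@dataclass
class DelegateMessage:
    sender_id: str          # stable identity of the delegating agent
    capabilities: list      # what the sender claims it can do
    summary: str            # small first-stage payload, sent eagerly
    detail: str = ""        # large second-stage payload, sent on request
    _detail_requested: bool = field(default=False, repr=False)

    def first_stage(self) -> dict:
        """Cheap initial exchange: identity plus summary only."""
        return {"from": self.sender_id, "caps": self.capabilities, "body": self.summary}

    def request_detail(self) -> str:
        """Receiver escalates only when the summary is insufficient, which
        is how a progressive payload saves tokens on simple tasks."""
        self._detail_requested = True
        return self.detail

msg = DelegateMessage("agent-7", ["code"], "tests pass", detail="full build log")
print(msg.first_stage()["body"])  # 'tests pass'
```

Deferring the heavy payload is a plausible source of the reported token reduction on simple tasks, since most exchanges would never escalate past the first stage.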

AI · Neutral · arXiv – CS AI · Mar 11 · 6/10

Latent Generative Models with Tunable Complexity for Compressed Sensing and other Inverse Problems

Researchers developed tunable-complexity priors for generative models (diffusion models, normalizing flows, and variational autoencoders) that can dynamically adjust complexity based on the specific inverse problem. The approach uses nested dropout and demonstrates superior performance across compressed sensing, inpainting, denoising, and phase retrieval tasks compared to fixed-complexity baselines.
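A minimal sketch of the nested dropout mechanism, assuming it works as in the standard formulation (the paper's exact integration with diffusion models, flows, and VAEs is not given in this summary): sampling a cut index and zeroing every latent unit after it forces earlier dimensions to carry the most information, so complexity can be tuned at inference time by choosing a truncation level.

```python
import random

# Nested dropout on a latent vector: sample a truncation index b during
# training and zero all units after it; at inference, pick a deterministic
# level to match the inverse problem's difficulty.

def nested_dropout(z, rng=random):
    """Zero every latent unit after a randomly sampled truncation index."""
    b = rng.randrange(1, len(z) + 1)  # always keep at least z[0]
    return [v if i < b else 0.0 for i, v in enumerate(z)]

def truncate(z, level):
    """Deterministic inference-time version: tune complexity via `level`."""
    return [v if i < level else 0.0 for i, v in enumerate(z)]

z = [0.9, 0.5, 0.3, 0.1]
print(truncate(z, 2))  # [0.9, 0.5, 0.0, 0.0]
```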

AI · Bullish · arXiv – CS AI · Mar 11 · 6/10

Automating Forecasting Question Generation and Resolution for AI Evaluation

Researchers developed an automated system using LLM-powered web research agents to generate and resolve forecasting questions at scale, creating 1,499 diverse real-world questions with 96% quality rate. The system demonstrates that more advanced AI models perform significantly better at forecasting tasks, with potential applications for improving AI evaluation benchmarks.

🧠 GPT-5 · 🧠 Gemini
AI · Bullish · arXiv – CS AI · Mar 11 · 6/10

An AI-powered Bayesian Generative Modeling Approach for Arbitrary Conditional Inference

Researchers have developed Bayesian Generative Modeling (BGM), a new AI framework that enables flexible conditional inference on any partition of observed variables without retraining. The approach uses stochastic iterative Bayesian updating with theoretical guarantees for convergence and statistical consistency, offering a universal engine for conditional prediction with uncertainty quantification.

AI · Bullish · arXiv – CS AI · Mar 11 · 6/10

Latent Speech-Text Transformer

Facebook Research introduces the Latent Speech-Text Transformer (LST), which aggregates speech tokens into higher-level patches to improve computational efficiency and cross-modal alignment. The model achieves up to +6.5% absolute gain on speech HellaSwag benchmarks while maintaining text performance and reducing inference costs for ASR and TTS tasks.

AI · Neutral · arXiv – CS AI · Mar 11 · 6/10

Debiasing International Attitudes: LLM Agents for Simulating US-China Perception Changes

Researchers developed an LLM-agent framework to model how media influences US-China attitudes from 2005-2025, testing three debiasing mechanisms to reduce AI model prejudices. The study found that devil's advocate agents were most effective at producing human-like opinion formation, while revealing geographic biases tied to AI models' origins.

🧠 GPT-4
AI · Bullish · arXiv – CS AI · Mar 11 · 6/10

RECODE: Reasoning Through Code Generation for Visual Question Answering

Researchers introduce RECODE, a new framework that improves visual reasoning in AI models by converting images into executable code for verification. The system generates multiple candidate programs to reproduce visuals, then selects and refines the most accurate reconstruction, significantly outperforming existing methods on visual reasoning benchmarks.
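The generate-render-select loop can be shown with a toy simplification (an assumption, not RECODE's actual pipeline): candidate programs each try to reproduce a target "image", here a tiny binary grid, and the candidate whose rendering agrees with the target on the most cells is kept.

```python
# Toy sketch of the RECODE selection idea: score each candidate program by
# how faithfully its rendered output reproduces the target, then keep the
# best reconstruction for downstream reasoning.

TARGET = [[1, 1], [0, 1]]

def render_cross():
    return [[1, 0], [0, 1]]

def render_triangle():
    return [[1, 1], [0, 1]]

def agreement(a, b):
    """Fraction of cells where reconstruction and target agree."""
    cells = [(x, y) for row_a, row_b in zip(a, b) for x, y in zip(row_a, row_b)]
    return sum(x == y for x, y in cells) / len(cells)

def select_best(candidates, target):
    """Pick the program whose output reproduces the target most faithfully."""
    return max(candidates, key=lambda fn: agreement(fn(), target))

best = select_best([render_cross, render_triangle], TARGET)
print(best.__name__)  # 'render_triangle'
```

The refinement step the summary mentions would then edit the winning program rather than restart from scratch, but the selection criterion stays the same.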

AI · Neutral · arXiv – CS AI · Mar 11 · 6/10

OPENXRD: A Comprehensive Benchmark Framework for LLM/MLLM XRD Question Answering

Researchers introduced OPENXRD, a comprehensive benchmarking framework for evaluating large language models and multimodal LLMs in crystallography question answering. The study tested 74 state-of-the-art models and found that mid-sized models (7B-70B parameters) benefit most from contextual materials, while very large models often show saturation or interference.

🧠 GPT-4 · 🧠 GPT-4.5 · 🧠 GPT-5
AI · Bearish · arXiv – CS AI · Mar 11 · 6/10

Why do we Trust Chatbots? From Normative Principles to Behavioral Drivers

Researchers argue that trust in chatbots is often driven by behavioral manipulation rather than demonstrated trustworthiness, proposing they be viewed as skilled salespeople rather than assistants. The study highlights how design choices exploit cognitive biases to influence user behavior, creating a gap between psychological trust formation and actual trustworthiness.

AI · Neutral · arXiv – CS AI · Mar 11 · 6/10

SCENEBench: An Audio Understanding Benchmark Grounded in Assistive and Industrial Use Cases

Researchers introduce SCENEBench, a new benchmark for evaluating Large Audio Language Models (LALMs) beyond speech recognition, focusing on real-world audio understanding including background sounds, noise localization, and vocal characteristics. Testing of five state-of-the-art models revealed significant performance gaps, with some tasks performing below random chance while others achieved high accuracy.

AI · Bullish · arXiv – CS AI · Mar 11 · 6/10

MSSR: Memory-Aware Adaptive Replay for Continual LLM Fine-Tuning

Researchers propose MSSR (Memory-Inspired Sampler and Scheduler Replay), a new framework for continual fine-tuning of large language models that mitigates catastrophic forgetting while maintaining adaptability. The method estimates sample-level memory strength and schedules rehearsal at adaptive intervals, showing superior performance across three backbone models and 11 sequential tasks compared to existing replay-based strategies.
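The scheduling idea can be sketched with an assumed forgetting curve (the paper's actual memory-strength estimator is not given in this summary; the exponential decay below is a stand-in): samples whose estimated memory strength has decayed furthest are replayed soonest, concentrating the rehearsal budget on what the model is forgetting.

```python
import math

# Hedged sketch of memory-aware adaptive replay: estimate per-sample memory
# strength from time since last rehearsal, then set the next replay interval
# so that strong memories wait longer between rehearsals.

def memory_strength(steps_since_replay: int, half_life: float = 100.0) -> float:
    """Assumed forgetting curve: strength halves every `half_life` steps."""
    return math.exp(-steps_since_replay * math.log(2) / half_life)

def replay_interval(strength: float, base: int = 50) -> int:
    """Adaptive schedule: replay interval scales with remaining strength."""
    return max(1, int(base * strength))

for steps in (0, 100, 300):
    s = memory_strength(steps)
    print(steps, round(s, 2), replay_interval(s))
```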

AI · Bullish · arXiv – CS AI · Mar 11 · 6/10

Ego: Embedding-Guided Personalization of Vision-Language Models

Researchers propose Ego, a new method for personalizing vision-language AI models without requiring additional training stages. The approach extracts visual tokens using the model's internal attention mechanisms to create concept memories, enabling personalized responses across single-concept, multi-concept, and video scenarios.

AI · Neutral · arXiv – CS AI · Mar 11 · 6/10

MM-tau-p²: Persona-Adaptive Prompting for Robust Multi-Modal Agent Evaluation in Dual-Control Settings

Researchers propose MM-tau-p², a new benchmark for evaluating multi-modal AI agents that adapt to user personas in customer service settings. The framework introduces 12 novel metrics to assess robustness and performance of LLM-based agents using voice and visual inputs, showing limitations even in advanced models like GPT-4 and GPT-5.

🧠 GPT-4 · 🧠 GPT-5
AI · Bullish · arXiv – CS AI · Mar 11 · 6/10

Towards a Neural Debugger for Python

Researchers have developed neural debuggers: AI models that emulate traditional Python debuggers by stepping through code execution, setting breakpoints, and predicting both forward and backward program states. This enables more interactive control over neural code interpretation than existing approaches, which only execute programs linearly.
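For a concrete baseline of what such a model emulates, Python's own tracing hook can record the program state at every executed line; a neural debugger would *predict* this state sequence (forward and, per the abstract, backward) instead of executing it. This is an illustration of the target behavior, not the paper's system.

```python
import sys

# Record (line number, local variables) at every executed line of a traced
# function, using Python's standard sys.settrace hook.

states = []

def tracer(frame, event, arg):
    if event == "line":
        states.append((frame.f_lineno, dict(frame.f_locals)))
    return tracer

def program():
    x = 1
    x = x + 2
    return x

sys.settrace(tracer)
program()
sys.settrace(None)

# "Stepping backward" over a recorded run is just replaying states in reverse;
# the neural version must predict these states without running the code.
for lineno, local_vars in reversed(states):
    print(lineno, local_vars)
```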

🏢 Meta
AI · Bullish · arXiv – CS AI · Mar 11 · 6/10

From Spatial to Actions: Grounding Vision-Language-Action Model in Spatial Foundation Priors

FALCON introduces a novel vision-language-action model that bridges the spatial reasoning gap by injecting 3D spatial tokens into action heads while preserving language reasoning capabilities. The system achieves state-of-the-art performance across simulation benchmarks and real-world tasks by leveraging spatial foundation models to provide geometric priors from RGB input alone.

AI · Bearish · arXiv – CS AI · Mar 11 · 6/10

Common Sense vs. Morality: The Curious Case of Narrative Focus Bias in LLMs

Researchers have identified a critical flaw in Large Language Models (LLMs) where they prioritize moral reasoning over commonsense understanding, struggling to detect logical contradictions within moral dilemmas. The study introduces the CoMoral benchmark and reveals a 'narrative focus bias' where LLMs better identify contradictions attributed to secondary characters rather than primary narrators.

AI · Bullish · arXiv – CS AI · Mar 11 · 6/10

Grounding Synthetic Data Generation With Vision and Language Models

Researchers introduce ARAS400k, a large-scale remote sensing dataset containing 400k images (100k real, 300k synthetic) with segmentation maps and descriptions. The study demonstrates that combining real and synthetic data consistently outperforms training on real data alone for semantic segmentation and image captioning tasks.

AI · Bearish · arXiv – CS AI · Mar 11 · 6/10

Investigating Gender Stereotypes in Large Language Models via Social Determinants of Health

A new study reveals that Large Language Models (LLMs) propagate gender stereotypes and biases when processing healthcare data, particularly through interactions between gender and social determinants of health. The research used French patient records to demonstrate how LLMs rely on embedded stereotypes to make gendered decisions in healthcare contexts.
