y0news

#research News & Analysis

913 articles tagged with #research. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.

AI · Neutral · arXiv – CS AI · Mar 12 · 6/10

Nurture-First Agent Development: Building Domain-Expert AI Agents Through Conversational Knowledge Crystallization

Researchers propose Nurture-First Development (NFD), a new paradigm for building domain-expert AI agents through progressive growth via conversational interaction rather than traditional code-first or prompt-first approaches. The method uses a Knowledge Crystallization Cycle to convert operational dialogue into structured knowledge assets, demonstrated through a financial research agent case study.

AI · Bullish · arXiv – CS AI · Mar 12 · 6/10

One Model, Many Skills: Parameter-Efficient Fine-Tuning for Multitask Code Analysis

Researchers conducted the first comprehensive evaluation of parameter-efficient fine-tuning (PEFT) for multi-task code analysis, showing that a single PEFT module can match full fine-tuning performance while reducing computational costs by up to 85%. The study found that even 1B-parameter models with multi-task PEFT outperform large general-purpose LLMs like DeepSeek and CodeLlama on code analysis tasks.
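The claim is that one small trainable module can stand in for full fine-tuning. As a hedged illustration of how a LoRA-style PEFT module works in principle (this is not the paper's implementation; the layer shapes, rank, and names below are invented), a NumPy sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical frozen projection from one transformer layer.
d_in, d_out, rank = 512, 512, 8
W = rng.standard_normal((d_out, d_in)) * 0.02   # frozen base weight

# LoRA-style PEFT module: only the low-rank factors A and B train.
A = rng.standard_normal((rank, d_in)) * 0.01    # random init
B = np.zeros((d_out, rank))                     # zero init, so B @ A = 0 and
                                                # training starts from the
                                                # base model's behavior

def forward(x):
    """Adapted layer: base weight plus the low-rank update."""
    return x @ (W + B @ A).T

trainable = A.size + B.size
print(f"trainable parameters: {trainable} of {W.size} "
      f"({trainable / W.size:.1%})")
```

At rank 8 the adapter trains about 3% of the layer's parameters; one such module per task (or one shared multi-task module, as in the study) can be swapped in while the base weights stay frozen and shared, which is the kind of saving behind the reported cost reduction.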

AI · Neutral · arXiv – CS AI · Mar 12 · 6/10

SpreadsheetArena: Decomposing Preference in LLM Generation of Spreadsheet Workbooks

Researchers introduce SpreadsheetArena, a platform for evaluating large language models' ability to generate spreadsheet workbooks from natural language prompts. The study reveals that preferred spreadsheet features vary significantly across use cases, and even top-performing models struggle with domain-specific best practices in areas like finance.

AI · Neutral · arXiv – CS AI · Mar 12 · 6/10

FERRET: Framework for Expansion Reliant Red Teaming

Researchers introduce FERRET, a new automated red teaming framework designed to generate multi-modal adversarial conversations to test AI model vulnerabilities. The framework uses three types of expansions (horizontal, vertical, and meta) to create more effective attack strategies and demonstrates superior performance compared to existing red teaming approaches.

AI · Neutral · arXiv – CS AI · Mar 12 · 6/10

Prompts and Prayers: the Rise of GPTheology

A research paper introduces the concept of 'GPTheology': the phenomenon of AI systems being perceived and treated as divine entities in modern culture. The study examines how AI interactions are developing ritualistic qualities and new belief systems, drawing on analysis of online communities and real-world projects such as AI-powered religious statues.

🧠 ChatGPT
AI · Neutral · arXiv – CS AI · Mar 12 · 6/10

ADVERSA: Measuring Multi-Turn Guardrail Degradation and Judge Reliability in Large Language Models

Researchers developed ADVERSA, an automated red-teaming framework that measures how AI guardrails degrade over multiple conversation turns rather than single-prompt attacks. Testing on three frontier models revealed a 26.7% jailbreak rate, with successful attacks concentrated in early rounds rather than accumulating through sustained pressure.

🧠 GPT-5 · 🧠 Claude · 🧠 Opus
AI · Bullish · arXiv – CS AI · Mar 12 · 6/10

CLIPO: Contrastive Learning in Policy Optimization Generalizes RLVR

Researchers introduce CLIPO (Contrastive Learning in Policy Optimization), a new method that improves upon Reinforcement Learning with Verifiable Rewards (RLVR) for training Large Language Models. CLIPO addresses hallucination and answer-copying issues by incorporating contrastive learning to better capture correct reasoning patterns across multiple solution paths.
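The summary says contrastive learning is used to separate correct reasoning patterns from incorrect ones across solution paths. As a rough, hedged sketch of that general mechanism (an InfoNCE-style loss over path embeddings; this is illustrative, not necessarily CLIPO's exact objective, and all names are invented):

```python
import numpy as np

def info_nce(anchor, positives, negatives, tau=0.1):
    """InfoNCE-style contrastive loss: pull embeddings of correct
    reasoning paths toward the anchor, push incorrect ones away."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    sims = np.array([cos(anchor, p) for p in positives] +
                    [cos(anchor, n) for n in negatives])
    logits = sims / tau
    logits -= logits.max()                 # numerical stability
    log_denom = np.log(np.exp(logits).sum())
    # mean negative log-likelihood of the positive paths
    return float(np.mean(log_denom - logits[:len(positives)]))

# Toy check: a positive aligned with the anchor yields a lower loss
# than one pointing the opposite way.
a = np.array([1.0, 0.0])
low = info_nce(a, [np.array([1.0, 0.1])], [np.array([-1.0, 0.0])])
high = info_nce(a, [np.array([-1.0, 0.0])], [np.array([1.0, 0.1])])
```

The gradient of such a loss rewards representations in which multiple correct solution paths cluster together, which is the property the paper credits for reducing answer-copying.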

AI · Bearish · arXiv – CS AI · Mar 12 · 6/10

Reactive Writers: How Co-Writing with AI Changes How We Engage with Ideas

A research study reveals that AI co-writing tools fundamentally change how people write by shifting them into 'Reactive Writing' mode, where writers evaluate AI suggestions rather than generating original ideas first. This process influences writers' opinions and expressed views without them realizing the AI's impact, as they focus on suggestion evaluation rather than traditional ideation.

AI · Bullish · arXiv – CS AI · Mar 12 · 6/10

Designing Service Systems from Textual Evidence

Researchers developed PP-LUCB, an algorithm that efficiently identifies optimal service system configurations by combining biased AI evaluation with selective human audits. The method reduces human audit costs by 90% while maintaining accuracy in selecting the best performing systems from textual evidence like customer support transcripts.
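The summary describes seeding a best-arm search with biased AI scores and spending human audits only where the ranking is ambiguous. A toy LUCB-style loop in that spirit (my own sketch of the idea, not the paper's PP-LUCB; `audit_fn` and the seeding scheme are invented for illustration):

```python
import numpy as np

def lucb_audit(ai_scores, audit_fn, budget, delta=0.05):
    """Toy LUCB-style best-configuration search: seed every candidate
    with its (possibly biased) AI score, then spend a limited human
    audit budget on the current leader and its closest contender."""
    n = len(ai_scores)
    samples = [[s] for s in ai_scores]
    for _ in range(budget // 2):
        means = np.array([np.mean(s) for s in samples])
        counts = np.array([len(s) for s in samples])
        rad = np.sqrt(np.log(2 * n / delta) / (2 * counts))
        leader = int(np.argmax(means))
        ucb = means + rad
        ucb[leader] = -np.inf
        contender = int(np.argmax(ucb))
        # Stop early once the leader's lower bound clears the field.
        if means[leader] - rad[leader] >= means[contender] + rad[contender]:
            break
        samples[leader].append(audit_fn(leader))       # human audit
        samples[contender].append(audit_fn(contender))  # human audit
    return int(np.argmax([np.mean(s) for s in samples]))
```

Audits concentrate on the few candidates whose confidence intervals still overlap, which is where a large reduction in audit cost relative to auditing everything would come from.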

AI · Neutral · The Verge – AI · Mar 11 · 6/10

Anthropic is launching a new think tank amid Pentagon blacklist fight

Anthropic is launching the Anthropic Institute, a new internal think tank combining three research teams to study AI's large-scale implications, amid an ongoing conflict with the Pentagon that has resulted in a blacklist and lawsuit. The announcement coincides with C-suite changes including cofounder Jack Clark's role transition.

🏢 OpenAI · 🏢 Anthropic
AI · Bullish · arXiv – CS AI · Mar 11 · 6/10

Evaluate-as-Action: Self-Evaluated Process Rewards for Retrieval-Augmented Agents

Researchers propose EvalAct, a new method that improves retrieval-augmented AI agents by converting retrieval quality assessment into explicit actions and using Process-Calibrated Advantage Rescaling (PCAR) for optimization. The approach shows superior performance on multi-step reasoning tasks across seven open-domain QA benchmarks by providing better process-level feedback signals.

AI · Neutral · arXiv – CS AI · Mar 11 · 6/10

Enhancing Debunking Effectiveness through LLM-based Personality Adaptation

Researchers developed a method using Large Language Models to create personalized fake news debunking messages tailored to individuals' Big Five personality traits. The study found that personalized debunking messages are more persuasive than generic ones, with traits like Openness increasing persuadability while Neuroticism decreases it.

AI · Bullish · arXiv – CS AI · Mar 11 · 6/10

PRECEPT: Planning Resilience via Experience, Context Engineering & Probing Trajectories A Unified Framework for Test-Time Adaptation with Compositional Rule Learning and Pareto-Guided Prompt Evolution

Researchers introduce PRECEPT, a new framework for AI language model agents that improves knowledge retrieval and adaptation through structured rule learning and conflict-aware memory systems. The framework shows significant performance improvements over existing methods, with 41% better first-try accuracy and enhanced compositional reasoning capabilities.

AI · Neutral · arXiv – CS AI · Mar 11 · 6/10

Influencing LLM Multi-Agent Dialogue via Policy-Parameterized Prompts

Researchers propose a framework using policy-parameterized prompts to influence multi-agent LLM dialogue behavior without training. The approach treats prompts as actions and dynamically constructs them through five components to control conversation flow based on metrics like responsiveness and stance shift.

AI · Bearish · arXiv – CS AI · Mar 11 · 6/10

Investigating Gender Stereotypes in Large Language Models via Social Determinants of Health

A new research study reveals that Large Language Models (LLMs) propagate gender stereotypes and biases when processing healthcare data, particularly through interactions between gender and social determinants of health. The research used French patient records to demonstrate how LLMs rely on embedded stereotypes to make gendered decisions in healthcare contexts.

AI · Bearish · arXiv – CS AI · Mar 11 · 6/10

Common Sense vs. Morality: The Curious Case of Narrative Focus Bias in LLMs

Researchers have identified a critical flaw in Large Language Models (LLMs) where they prioritize moral reasoning over commonsense understanding, struggling to detect logical contradictions within moral dilemmas. The study introduces the CoMoral benchmark and reveals a 'narrative focus bias' where LLMs better identify contradictions attributed to secondary characters rather than primary narrators.

AI · Neutral · arXiv – CS AI · Mar 11 · 6/10

MM-tau-p$^2$: Persona-Adaptive Prompting for Robust Multi-Modal Agent Evaluation in Dual-Control Settings

Researchers propose MM-tau-p², a new benchmark for evaluating multi-modal AI agents that adapt to user personas in customer service settings. The framework introduces 12 novel metrics to assess robustness and performance of LLM-based agents using voice and visual inputs, showing limitations even in advanced models like GPT-4 and GPT-5.

🧠 GPT-4 · 🧠 GPT-5
AI · Neutral · arXiv – CS AI · Mar 11 · 6/10

OPENXRD: A Comprehensive Benchmark Framework for LLM/MLLM XRD Question Answering

Researchers introduced OPENXRD, a comprehensive benchmarking framework for evaluating large language models and multimodal LLMs in crystallography question answering. The study tested 74 state-of-the-art models and found that mid-sized models (7B-70B parameters) benefit most from contextual materials, while very large models often show saturation or interference.

🧠 GPT-4 · 🧠 GPT-4.5 · 🧠 GPT-5
AI · Bullish · arXiv – CS AI · Mar 11 · 6/10

Automating Forecasting Question Generation and Resolution for AI Evaluation

Researchers developed an automated system using LLM-powered web research agents to generate and resolve forecasting questions at scale, creating 1,499 diverse real-world questions with 96% quality rate. The system demonstrates that more advanced AI models perform significantly better at forecasting tasks, with potential applications for improving AI evaluation benchmarks.

🧠 GPT-5 · 🧠 Gemini
AI · Neutral · arXiv – CS AI · Mar 11 · 6/10

Latent Generative Models with Tunable Complexity for Compressed Sensing and other Inverse Problems

Researchers developed tunable-complexity priors for generative models (diffusion models, normalizing flows, and variational autoencoders) that can dynamically adjust complexity based on the specific inverse problem. The approach uses nested dropout and demonstrates superior performance across compressed sensing, inpainting, denoising, and phase retrieval tasks compared to fixed-complexity baselines.
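The summary names nested dropout as the mechanism behind the tunable complexity. A minimal sketch of that idea (illustrative only; the geometric truncation distribution, latent size, and cutoffs below are my assumptions, not the paper's settings):

```python
import numpy as np

def nested_dropout_mask(dim, rng, p=0.1):
    """Sample a truncation index k ~ Geometric(p) and keep only the
    first k latent units: later units learn residual detail, so the
    trained code can be cut at any prefix length."""
    k = min(int(rng.geometric(p)), dim)
    mask = np.zeros(dim)
    mask[:k] = 1.0
    return mask

rng = np.random.default_rng(0)
z = rng.standard_normal(64)                  # latent code from an encoder
z_train = z * nested_dropout_mask(64, rng)   # training-time truncation

def truncate(z, k):
    """Inference-time complexity dial: keep the first k units."""
    out = z.copy()
    out[k:] = 0.0
    return out

z_easy = truncate(z, 8)    # low-complexity prior, e.g. heavy compression
z_hard = truncate(z, 48)   # high-complexity prior, e.g. mild denoising
```

Because every prefix of the latent code is itself a valid (coarser) code, the same trained generative prior can be dialed down for aggressive compressed sensing or up for gentler inverse problems.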

AI · Bullish · arXiv – CS AI · Mar 9 · 6/10

Cut to the Chase: Training-free Multimodal Summarization via Chain-of-Events

Researchers introduce CoE, a training-free multimodal summarization framework that uses a Chain-of-Events approach with Hierarchical Event Graph to better understand and summarize content across videos, transcripts, and images. The system achieves significant performance improvements over existing methods, showing average gains of +3.04 ROUGE, +9.51 CIDEr, and +1.88 BERTScore across eight datasets.

AI · Bullish · arXiv – CS AI · Mar 9 · 6/10

Dynamic Chunking Diffusion Transformer

Researchers introduce Dynamic Chunking Diffusion Transformer (DC-DiT), a new AI model that adaptively processes images by allocating more computational resources to detail-rich regions and fewer to uniform backgrounds. The system improves image generation quality while reducing computational costs by up to 16x compared to traditional diffusion transformers.
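The summary describes shifting compute toward detail-rich regions. A toy allocator capturing the flavor of that idea (using per-region variance as the detail proxy and a fixed grid split; both are my assumptions, not DC-DiT's actual chunking mechanism):

```python
import numpy as np

def chunk_budget(image, grid=4, total_tokens=64):
    """Toy dynamic-chunking allocator: split the image into a grid and
    give detail-rich (high-variance) regions a larger token budget."""
    h, w = image.shape
    rh, rw = h // grid, w // grid
    var = np.array([[image[i*rh:(i+1)*rh, j*rw:(j+1)*rw].var()
                     for j in range(grid)] for i in range(grid)])
    if var.sum() == 0:                       # flat image: uniform split
        return np.full((grid, grid), total_tokens // grid**2, dtype=int)
    weights = var / var.sum()
    return np.maximum(1, np.round(weights * total_tokens)).astype(int)

rng = np.random.default_rng(0)
img = np.zeros((32, 32))
img[16:] = rng.standard_normal((16, 32))     # flat top, noisy bottom
tokens = chunk_budget(img)
```

Here the flat top half gets the minimum one token per region while the noisy bottom half absorbs nearly the whole budget, which is the intuition behind the reported compute savings on uniform backgrounds.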