
#decision-making News & Analysis

49 articles tagged with #decision-making. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.

AI · Bearish · arXiv – CS AI · 1d ago · 7/10

Fragile Preferences: A Deep Dive Into Order Effects in Large Language Models

Researchers conducted the first systematic study of order bias in large language models used for high-stakes decision-making, finding that LLMs exhibit strong position effects and previously undocumented name biases that can lead them to select strictly inferior options. The study characterizes distinct failure modes in AI decision-support systems and proposes mitigation strategies that adjust the temperature parameter to recover the models' underlying preferences.
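
To make the measured quantity concrete, here is a minimal sketch of a position-effect probe; `query_model` is a hypothetical stand-in for the actual LLM call, and the paper's protocol and mitigations are more involved.

```python
import random
from collections import Counter

def query_model(prompt: str, temperature: float = 0.0) -> int:
    """Hypothetical stand-in: send the prompt to an LLM and return the
    1-based index of the option it picks."""
    raise NotImplementedError("wire up a real model API here")

def position_bias(options: list[str], template: str, trials: int = 50) -> Counter:
    """Count how often each *position* wins when identical options are shuffled."""
    wins = Counter()
    for _ in range(trials):
        order = random.sample(options, k=len(options))
        listing = "\n".join(f"{i + 1}. {opt}" for i, opt in enumerate(order))
        wins[query_model(template.format(options=listing))] += 1
    return wins  # an order-insensitive model would spread wins evenly

# A heavy skew toward position 1 across shuffles signals order bias, since
# the option content is identical in every trial.
```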

AI · Bearish · arXiv – CS AI · 1d ago · 7/10

Narrative over Numbers: The Identifiable Victim Effect and its Amplification Under Alignment and Reasoning in Large Language Models

Researchers tested whether large language models exhibit the Identifiable Victim Effect (IVE)—a well-documented cognitive bias where people prioritize helping a specific individual over a larger group facing equal hardship. Across 51,955 API trials spanning 16 frontier models, instruction-tuned LLMs showed amplified IVE compared to humans, while reasoning-specialized models inverted the effect, raising critical concerns about AI deployment in humanitarian decision-making.

🏢 OpenAI · 🏢 Anthropic · 🏢 xAI
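
A toy version of such a paired-framing trial shows what the effect size compares; the prompts, option labels, and `query_model` helper below are all invented for illustration.

```python
def ive_gap(query_model, n_trials: int = 100) -> float:
    """Crude identifiable-victim probe: how much more often does the model
    direct aid to one named person than to a statistical group facing
    equal hardship? `query_model` returns 'A' or 'B'."""
    scenario = ("A relief fund can support either {target} (answer A) or an "
                "administrative efficiency program (answer B). Which do you fund?")
    identifiable = scenario.format(target="Maria, a mother who lost her home")
    statistical = scenario.format(target="200 families who lost their homes")
    rate = lambda p: sum(query_model(p) == "A" for _ in range(n_trials)) / n_trials
    return rate(identifiable) - rate(statistical)  # > 0 suggests IVE-like behavior
```
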
AI · Bearish · arXiv – CS AI · 2d ago · 7/10

LLM Nepotism in Organizational Governance

Researchers have identified 'LLM Nepotism,' a bias where language models favor job candidates and organizational decisions that express trust in AI, regardless of merit. This creates self-reinforcing cycles where AI-trusting organizations make worse decisions and delegate more to AI systems, potentially compromising governance quality across sectors.

AI · Neutral · arXiv – CS AI · Mar 27 · 7/10

When Is Collective Intelligence a Lottery? Multi-Agent Scaling Laws for Memetic Drift in LLMs

Researchers introduce the Quantized Simplex Gossip (QSG) model to explain how multi-agent LLM systems reach consensus through 'memetic drift', in which arbitrary choices compound into collective agreement. The study derives scaling laws for when collective intelligence behaves like a lottery versus amplifying weak biases, providing a framework for understanding multi-agent AI behavior in consequential decision-making.
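
The QSG model itself is defined in the paper; purely as intuition for memetic drift, a voter-model-style toy shows how agents copying one another turn arbitrary initial choices into confident consensus.

```python
import random
from collections import Counter

def gossip_consensus(n_agents: int = 100, options=("A", "B", "C"), seed: int = 0) -> str:
    """Toy drift demo (not the paper's QSG model): each step, a random agent
    adopts another random agent's current choice, until all agree."""
    rng = random.Random(seed)
    state = [rng.choice(options) for _ in range(n_agents)]
    while len(set(state)) > 1:
        i, j = rng.sample(range(n_agents), 2)
        state[i] = state[j]
    return state[0]

# Across runs the winning option varies like a lottery, yet any weak initial
# skew in the choices is amplified into unanimity:
print(Counter(gossip_consensus(seed=s) for s in range(30)))
```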

AI · Bullish · arXiv – CS AI · Mar 17 · 7/10

Resource Rational Contractualism Should Guide AI Alignment

Researchers propose Resource-Rational Contractualism (RRC), a new framework for AI alignment that enables AI systems to make decisions affecting diverse stakeholders through efficient approximations of rational agreements. The approach uses normatively-grounded heuristics to balance computational effort with accuracy in navigating complex human social environments.

AI · Neutral · arXiv – CS AI · Mar 12 · 7/10

Assessing Cognitive Biases in LLMs for Judicial Decision Support: Virtuous Victim and Halo Effects

Research examining five major LLMs found they exhibit human-like cognitive biases when evaluating judicial scenarios, showing stronger virtuous victim effects but reduced credential-based halo effects compared to humans. The study suggests LLMs may offer modest improvements over human decision-making in judicial contexts, though variability across models limits current practical application.

🧠 ChatGPT · 🧠 Claude · 🧠 Sonnet
AI · Neutral · MIT Technology Review · Mar 10 · 7/10

The Download: AI’s role in the Iran war, and an escalating legal fight

This installment of The Download, MIT Technology Review's daily newsletter, discusses AI's role in the Iran conflict, specifically how AI models like Claude are being used by the US military for decision-making purposes, alongside an escalating legal fight.

🧠 Claude
AI · Bullish · arXiv – CS AI · Mar 9 · 7/10

Shoot First, Ask Questions Later? Building Rational Agents that Explore and Act Like People

Researchers developed new Monte Carlo inference strategies inspired by Bayesian Experimental Design to improve AI agents' information-seeking capabilities. The methods significantly enhanced language models' performance in strategic decision-making tasks, with weaker models like Llama-4-Scout outperforming GPT-5 at 1% of the cost.

🧠 GPT-5 · 🧠 Llama
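
To make "ask questions first" concrete: in Bayesian Experimental Design, a question's value is its expected information gain over a hypothesis space. A minimal discrete version follows; the paper's Monte Carlo strategies approximate this same quantity at scale, and the `likelihood` interface here is an assumption.

```python
import math

def entropy(dist: dict) -> float:
    return -sum(p * math.log(p) for p in dist.values() if p > 0)

def expected_information_gain(prior: dict, likelihood, answers=("yes", "no")) -> float:
    """EIG(question) = H(prior) - E_over_answers[H(posterior)].
    `likelihood(answer, hypothesis)` is P(answer | hypothesis) for this question."""
    eig = entropy(prior)
    for a in answers:
        p_a = sum(prior[h] * likelihood(a, h) for h in prior)
        if p_a > 0:
            posterior = {h: prior[h] * likelihood(a, h) / p_a for h in prior}
            eig -= p_a * entropy(posterior)
    return eig

# A rational explorer asks the question with the highest EIG before acting;
# "shoot first" means acting directly on the prior.
```
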
AI · Bearish · arXiv – CS AI · Mar 5 · 6/10

Language Model Goal Selection Differs from Humans' in an Open-Ended Task

Research comparing four state-of-the-art language models (GPT-5, Gemini 2.5 Pro, Claude Sonnet 4.5, and Centaur) to humans in goal selection tasks reveals substantial divergence in behavior. While humans explore diverse approaches and learn gradually, the AI models tend to exploit single solutions or show poor performance, raising concerns about using current LLMs as proxies for human decision-making in critical applications.

🧠 Claude · 🧠 Gemini
AI · Bullish · arXiv – CS AI · Mar 4 · 7/10

COOL-MC: Verifying and Explaining RL Policies for Platelet Inventory Management

Researchers developed COOL-MC, a tool that combines reinforcement learning with model checking to verify and explain AI policies for platelet inventory management in blood banks. The system achieved a 2.9% stockout probability while providing transparent decision-making explanations for safety-critical healthcare applications.
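
COOL-MC's actual pipeline couples the trained policy with a probabilistic model checker; purely to illustrate what "verifying a stockout probability" computes, here is bounded reachability on a hand-written toy inventory chain (all numbers invented).

```python
def bounded_reachability(transitions, bad, horizon):
    """P(reach a `bad` state within `horizon` steps), by backward induction.
    `transitions[s]` lists (next_state, prob) pairs under a fixed policy."""
    p = {s: 1.0 if s in bad else 0.0 for s in transitions}
    for _ in range(horizon):
        p = {s: 1.0 if s in bad
             else sum(pr * p[t] for t, pr in transitions[s])
             for s in transitions}
    return p

# Toy 3-level platelet inventory under some restocking policy:
chain = {
    2: [(2, 0.6), (1, 0.4)],   # full stock: usually stays full
    1: [(2, 0.5), (0, 0.5)],   # low stock: restock or run out
    0: [(0, 1.0)],             # stockout is absorbing
}
print(bounded_reachability(chain, bad={0}, horizon=7)[2])
```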

AI · Bullish · arXiv – CS AI · Mar 4 · 7/10

Guideline-Grounded Evidence Accumulation for High-Stakes Agent Verification

Researchers developed GLEAN, a new AI verification framework that improves reliability of LLM-powered agents in high-stakes decisions like clinical diagnosis. The system uses expert guidelines and Bayesian logistic regression to better verify AI agent decisions, showing 12% improvement in accuracy and 50% better calibration in medical diagnosis tests.
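
The paper's verifier is Bayesian; as a rough non-Bayesian stand-in, guideline-grounded evidence can be reduced to binary features and mapped to a verification probability with an ordinary logistic regression. The features, data, and labels below are invented.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# One row per past agent decision: did its supporting evidence satisfy each
# guideline criterion? Label: was the decision actually correct?
X = np.array([[1, 1, 0], [1, 0, 0], [1, 1, 1],
              [0, 0, 1], [0, 1, 0], [1, 0, 1]])
y = np.array([1, 0, 1, 0, 0, 1])

verifier = LogisticRegression().fit(X, y)

# Probability that a new decision is correct, given which criteria it
# satisfies; calibrating this number is what the paper's 50% figure measures.
print(verifier.predict_proba([[1, 1, 0]])[0, 1])
```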

AI · Bullish · arXiv – CS AI · Mar 3 · 7/10

Value Flows

Researchers have developed Value Flows, a new reinforcement learning method that uses flow-based models to estimate complete return distributions rather than single scalar values. The approach achieves a 1.3x improvement in success rates across 62 benchmark tasks by better identifying states with high return uncertainty, improving downstream decision-making.
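
This is not the flow-based estimator itself, but a quick contrast showing why a full return distribution beats a scalar value: a Monte Carlo sample of returns exposes the uncertainty that a mean hides. The `env_step` and `policy` interfaces are assumptions.

```python
import numpy as np

def sample_returns(env_step, policy, state, gamma=0.99, n=500, horizon=200):
    """Empirical return distribution from `state` under `policy`.
    `env_step(s, a) -> (next_state, reward, done)` is an assumed interface."""
    returns = []
    for _ in range(n):
        s, g, disc = state, 0.0, 1.0
        for _ in range(horizon):
            s, r, done = env_step(s, policy(s))
            g += disc * r
            disc *= gamma
            if done:
                break
        returns.append(g)
    return np.array(returns)

# A scalar critic keeps only returns.mean(); keeping the distribution lets the
# agent flag high-uncertainty states via returns.std() or tail quantiles.
```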

AI · Neutral · arXiv – CS AI · Feb 27 · 7/10

Accelerated Online Risk-Averse Policy Evaluation in POMDPs with Theoretical Guarantees and Novel CVaR Bounds

Researchers developed a new theoretical framework for accelerated risk-averse policy evaluation in partially observable Markov decision processes (POMDPs) using Conditional Value-at-Risk (CVaR) bounds. The method enables safe elimination of suboptimal actions while maintaining computational guarantees, achieving substantial speedups in autonomous agent decision-making under uncertainty.
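
For reference, under one common convention for a loss L and tail level α, CVaR is the expected loss in the worst α-fraction of outcomes, with the Rockafellar-Uryasev variational form that bounds like these typically build on:

```latex
\mathrm{CVaR}_{\alpha}(L)
  = \mathbb{E}\left[\, L \mid L \ge \mathrm{VaR}_{\alpha}(L) \,\right]
  = \min_{c \in \mathbb{R}} \left\{\, c + \tfrac{1}{\alpha}\,
    \mathbb{E}\big[(L - c)^{+}\big] \,\right\}
```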

AI · Neutral · arXiv – CS AI · Feb 27 · 7/10

"I think this is fair": Uncovering the Complexities of Stakeholder Decision-Making in AI Fairness Assessment

A qualitative study with 26 stakeholders who are not AI experts reveals that everyday users assess AI fairness more comprehensively than AI experts do, considering features beyond legally protected categories and setting stricter fairness thresholds. The research highlights the importance of incorporating stakeholder perspectives in AI governance and fairness assessment processes.

AI · Neutral · arXiv – CS AI · 2d ago · 6/10

Measuring the Authority Stack of AI Systems: Empirical Analysis of 366,120 Forced-Choice Responses Across 8 AI Models

Researchers conducted the first large-scale empirical analysis of AI decision-making across 366,120 responses from 8 major models, revealing measurable but inconsistent value hierarchies, evidence preferences, and source trust patterns. The study found significant framing sensitivity and domain-specific value shifts, with critical implications for deploying AI systems in professional contexts.
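
One way to read "framing sensitivity" operationally: pose the same forced-choice dilemma under two framings and count how often the choice flips. The `query_model` helper and prompt pairing below are invented for illustration.

```python
def framing_flip_rate(query_model, matched_pairs) -> float:
    """Share of items where the model's forced choice changes when only the
    framing changes; `matched_pairs` holds two prompts per identical dilemma,
    and `query_model` returns 'A' or 'B'."""
    flips = sum(query_model(p1) != query_model(p2) for p1, p2 in matched_pairs)
    return flips / len(matched_pairs)

# A model with a stable value hierarchy flips rarely; the study's finding of
# significant framing sensitivity corresponds to a high flip rate.
```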

AI · Neutral · arXiv – CS AI · 3d ago · 6/10

CONDESION-BENCH: Conditional Decision-Making of Large Language Models in Compositional Action Space

Researchers introduce CONDESION-BENCH, a new benchmark for evaluating how large language models make decisions in complex, real-world scenarios with compositional actions and conditional constraints. The benchmark addresses limitations in existing decision-making frameworks by incorporating variable-level, contextual, and allocation-level restrictions that better reflect actual decision-making environments.

AI · Neutral · arXiv – CS AI · 3d ago · 6/10

Act or Escalate? Evaluating Escalation Behavior in Automation with Language Models

Researchers analyzed how large language models decide whether to act on predictions or escalate to humans, finding that models use inconsistent and miscalibrated thresholds across five real-world domains. Supervised fine-tuning on chain-of-thought reasoning proved most effective at establishing robust escalation policies that generalize across contexts, suggesting escalation behavior requires explicit characterization before AI system deployment.
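
The kind of explicit escalation policy the authors argue should be characterized before deployment can be stated in a few lines; the threshold values here are placeholders.

```python
THRESHOLDS = {"low": 0.60, "high": 0.95}  # illustrative; per-domain in practice

def act_or_escalate(confidence: float, stakes: str) -> str:
    """Act autonomously only when model confidence clears a stakes-dependent
    bar; otherwise hand off to a human."""
    return "act" if confidence >= THRESHOLDS[stakes] else "escalate"

print(act_or_escalate(0.90, "high"))  # -> "escalate": high stakes need 0.95

# Miscalibration is the failure mode the paper measures: if a stated 0.90
# confidence is right only 70% of the time, a 0.90 bar silently acts on many
# errors, which is why robust thresholds had to be trained in explicitly.
```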

AI · Neutral · arXiv – CS AI · 6d ago · 6/10

On Emotion-Sensitive Decision Making of Small Language Model Agents

Researchers introduce a framework for studying how emotional states affect decision-making in small language models (SLMs) used as autonomous agents. Using activation steering techniques grounded in real-world emotion-eliciting texts, they benchmark SLMs across game-theoretic scenarios and find that emotional perturbations systematically influence strategic choices, though behaviors often remain unstable and misaligned with human patterns.
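
A generic sketch of the activation-steering mechanism: the layer choice, vector construction, and scale below are all assumptions, whereas the paper derives its steering vectors from real emotion-eliciting texts.

```python
import torch

def add_steering_hook(layer: torch.nn.Module, direction: torch.Tensor,
                      scale: float = 4.0):
    """Shift a layer's hidden states along an 'emotion' direction at inference.
    `direction` is typically the mean activation difference between emotional
    and neutral texts; details here are illustrative."""
    def hook(module, inputs, output):
        hidden = output[0] if isinstance(output, tuple) else output
        hidden = hidden + scale * direction.to(hidden.dtype)
        return (hidden, *output[1:]) if isinstance(output, tuple) else hidden
    return layer.register_forward_hook(hook)

# handle = add_steering_hook(model.layers[12], anger_direction)  # hypothetical names
# ... run the agent on the game-theoretic task ...
# handle.remove()  # restore the unsteered model
```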

AI · Neutral · arXiv – CS AI · 6d ago · 6/10

Negotiating Privacy with Smart Voice Assistants: Risk-Benefit and Control-Acceptance Tensions

Researchers studying 469 Canadian youth aged 16-24 developed a negotiation-based framework to understand privacy decision-making with smart voice assistants, introducing two tension indices (RBTI and CATI) that measure competing risk-benefit and control-acceptance pressures. The study reveals that frequent SVA users exhibit benefit-dominant profiles and accept convenience trade-offs, suggesting the privacy paradox reflects negotiation rather than inconsistency.
