y0news

#decision-making News & Analysis

49 articles tagged with #decision-making. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.

AI · Bearish · Ars Technica – AI · Mar 26 · 6/10
🧠

Study: Sycophantic AI can undermine human judgment

A study found that AI tools exhibiting sycophantic behavior can negatively impact human decision-making. Users interacting with such AI systems showed increased overconfidence in their judgments and reduced ability to resolve conflicts effectively.

AI · Neutral · arXiv – CS AI · Mar 26 · 6/10
🧠

From Sycophancy to Sensemaking: Premise Governance for Human-AI Decision Making

Researchers propose a new framework for human-AI decision making that shifts from AI systems providing fluent but potentially sycophantic answers to collaborative premise governance. The approach uses discrepancy-driven control loops to detect conflicts and ensure commitment to decision-critical premises before taking action.
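As a rough sketch of the idea (the premise format and all names are illustrative, not the paper's API), a discrepancy-driven control loop might withhold action until the decision-critical premises agree:

```python
# Hypothetical sketch: surface decision-critical premises and refuse to
# act until agent and user agree on them, handing conflicts back for
# resolution instead of producing a fluent answer.

def premise_discrepancies(agent_premises: dict, user_premises: dict) -> list:
    """Return the premises on which agent and user disagree."""
    return [k for k in agent_premises
            if k in user_premises and agent_premises[k] != user_premises[k]]

def decide(action, agent_premises, user_premises):
    conflicts = premise_discrepancies(agent_premises, user_premises)
    if conflicts:
        # Discrepancy detected: loop back for premise resolution.
        return ("resolve", conflicts)
    # All decision-critical premises committed: safe to act.
    return ("commit", action)

agent = {"budget_fixed": True, "deadline": "Q3"}
user = {"budget_fixed": False, "deadline": "Q3"}
print(decide("approve_plan", agent, user))  # ('resolve', ['budget_fixed'])
```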

AI · Bullish · arXiv – CS AI · Mar 17 · 6/10
🧠

Computational Concept of the Psyche

Researchers propose a new computational concept for modeling the human psyche as an operating system for artificial general intelligence. The approach treats the psyche as a decision-making system that operates in a state space including needs, sensations, and actions to optimize goal achievement while minimizing risks.

AI · Bullish · arXiv – CS AI · Mar 17 · 6/10
🧠

From Stochastic Answers to Verifiable Reasoning: Interpretable Decision-Making with LLM-Generated Code

Researchers propose a new framework that uses LLMs as code generators rather than per-instance evaluators for high-stakes decision-making, creating interpretable and reproducible AI systems. The approach generates executable decision logic once instead of querying LLMs for each prediction, demonstrated through venture capital founder screening with competitive performance while maintaining full transparency.
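The generate-once pattern the summary describes can be illustrated with a toy screening rule. The rule and founder fields below are invented stand-ins for LLM output; in the paper the LLM emits the decision logic, which is then executed for every instance without further model queries:

```python
# Contrast with per-instance querying: the "LLM call" happens once, to
# produce inspectable decision code; predictions are then plain function
# calls, making every decision reproducible and auditable.

GENERATED_CODE = """
def screen(founder):
    # Transparent rule produced once, not per prediction (illustrative).
    score = 0
    score += 2 if founder.get("prior_exits", 0) > 0 else 0
    score += 1 if founder.get("domain_years", 0) >= 5 else 0
    return score >= 2
"""

namespace = {}
exec(GENERATED_CODE, namespace)   # one-time generation step
screen = namespace["screen"]

founders = [{"prior_exits": 1, "domain_years": 2},
            {"prior_exits": 0, "domain_years": 6}]
decisions = [screen(f) for f in founders]  # no further model queries
print(decisions)  # [True, False]
```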

🧠 GPT-4
AI · Bullish · arXiv – CS AI · Mar 9 · 6/10
🧠

Boosting deep Reinforcement Learning using pretraining with Logical Options

Researchers propose Hybrid Hierarchical RL (H²RL), a new framework that combines symbolic logic with deep reinforcement learning to address misalignment issues in AI agents. The method uses logical option-based pretraining to improve long-horizon decision-making and prevent agents from over-exploiting short-term rewards.

AI · Bullish · arXiv – CS AI · Mar 6 · 6/10
🧠

STRUCTUREDAGENT: Planning with AND/OR Trees for Long-Horizon Web Tasks

Researchers propose STRUCTUREDAGENT, a new AI framework that uses hierarchical planning with AND/OR trees to improve web agent performance on complex, long-horizon tasks. The system addresses limitations in current LLM-based agents through better memory tracking and structured planning approaches.
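An AND/OR plan tree of the kind the summary mentions can be evaluated recursively; this minimal sketch assumes a simple tuple representation of ours, not STRUCTUREDAGENT's actual data structures:

```python
# AND nodes succeed only if every subtask succeeds; OR nodes succeed if
# any alternative does. This lets a planner track which branches of a
# long-horizon task remain viable.

def plan_succeeds(node, done: set) -> bool:
    kind = node[0]
    if kind == "leaf":
        return node[1] in done
    if kind == "and":
        return all(plan_succeeds(c, done) for c in node[1])
    if kind == "or":
        return any(plan_succeeds(c, done) for c in node[1])
    raise ValueError(f"unknown node kind: {kind}")

# Book a flight AND (pay by card OR pay by points)
plan = ("and", [("leaf", "book_flight"),
                ("or", [("leaf", "pay_card"), ("leaf", "pay_points")])])
print(plan_succeeds(plan, {"book_flight", "pay_points"}))  # True
print(plan_succeeds(plan, {"pay_card"}))                   # False
```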

AI · Bullish · arXiv – CS AI · Mar 3 · 6/10
🧠

TARSE: Test-Time Adaptation via Retrieval of Skills and Experience for Reasoning Agents

Researchers developed TARSE, a new AI system for clinical decision-making that retrieves relevant medical skills and experiences from curated libraries to improve reasoning accuracy. The system performs test-time adaptation to align language models with clinically valid logic, showing improvements over existing medical AI baselines in question-answering benchmarks.

AI · Neutral · arXiv – CS AI · Mar 3 · 6/10
🧠

LLMs as Strategic Actors: Behavioral Alignment, Risk Calibration, and Argumentation Framing in Geopolitical Simulations

A research study evaluated six state-of-the-art large language models in geopolitical crisis simulations, comparing their decision-making to human behavior. The study found that LLMs initially mirror human decisions but diverge over time, consistently exhibiting cooperative, stability-focused strategies with limited adversarial reasoning.

AI · Neutral · arXiv – CS AI · Mar 3 · 5/10
🧠

Behavioral Generative Agents for Energy Operations

Researchers developed behavioral generative agents powered by large language models to simulate consumer decision-making in energy operations. The study found these AI agents can model heterogeneous customer behavior and provide insights into rare events like blackouts, offering a scalable tool for energy policy analysis.

AI · Bullish · arXiv – CS AI · Mar 2 · 7/10
🧠

PseudoAct: Leveraging Pseudocode Synthesis for Flexible Planning and Action Control in Large Language Model Agents

Researchers introduce PseudoAct, a new framework that uses pseudocode synthesis to improve large language model agent planning and action control. The method achieves significant performance improvements over existing reactive approaches, with a 20.93% absolute gain in success rate on FEVER benchmark and new state-of-the-art results on HotpotQA.

AI · Bullish · arXiv – CS AI · Feb 27 · 6/10
🧠

A Framework for Assessing AI Agent Decisions and Outcomes in AutoML Pipelines

Researchers propose an Evaluation Agent framework to assess AI agent decision-making in AutoML pipelines, moving beyond outcome-focused metrics to evaluate intermediate decisions. The system can detect faulty decisions with 91.9% F1 score and reveals impacts ranging from -4.9% to +8.3% in final performance metrics.

AI · Neutral · arXiv – CS AI · Feb 27 · 6/10
🧠

Evaluating Stochasticity in Deep Research Agents

Researchers identified stochasticity (variability) as a critical barrier to deploying Deep Research Agents in real-world applications like financial decision-making and medical analysis. The study proposes mitigation strategies that reduce output variance by 22% while maintaining research quality, addressing a key obstacle for enterprise AI agent adoption.
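A toy illustration of the problem and one common mitigation, majority voting over repeated runs. The "agent" here is a stand-in random process, and none of the numbers come from the paper:

```python
# Measure run-to-run variability of a stochastic agent, then collapse
# much of it by majority-voting over batches of runs.

import random

def agent(seed):
    # Stand-in for a Deep Research Agent whose output varies per run.
    random.seed(seed)
    return random.choice(["buy", "hold", "hold", "sell"])

runs = [agent(s) for s in range(20)]
distinct = len(set(runs))               # raw spread across 20 runs

def majority_vote(seeds):
    votes = [agent(s) for s in seeds]
    return max(set(votes), key=votes.count)

# Vote over batches of 5 runs each; the voted outputs can never show
# more spread than the raw runs they are drawn from.
stable = [majority_vote(range(i, i + 5)) for i in range(0, 20, 5)]
print(distinct >= len(set(stable)))  # True
```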

Crypto · Neutral · Ethereum Foundation Blog · Aug 21 · 5/10
⛓️

An Introduction to Futarchy

The article introduces futarchy as a governance mechanism that could be implemented through DAOs to improve decision-making processes. It explores how decentralized autonomous organizations enable rapid experimentation with social coordination mechanisms, iterating far faster than traditional institutions can.
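Futarchy's core decision rule, stripped down to a toy (the prices below are hard-coded stand-ins for live conditional-market data):

```python
# "Vote on values, bet on beliefs": conditional prediction markets price
# the agreed welfare metric under each policy, and governance adopts
# whichever policy the market prices higher.

def futarchy_choose(market_prices: dict) -> str:
    # market_prices: policy -> market's expected value of the welfare
    # metric conditional on that policy being adopted.
    return max(market_prices, key=market_prices.get)

prices = {"adopt_proposal": 1.32, "status_quo": 1.18}
print(futarchy_choose(prices))  # adopt_proposal
```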

AI · Neutral · MarkTechPost · Mar 10 · 5/10
🧠

How to Build a Risk-Aware AI Agent with Internal Critic, Self-Consistency Reasoning, and Uncertainty Estimation for Reliable Decision-Making

This tutorial demonstrates building an advanced AI agent system that incorporates risk-awareness through internal criticism, self-consistency reasoning, and uncertainty estimation. The system evaluates responses across multiple dimensions including accuracy, coherence, and safety while implementing risk-sensitive selection strategies for more reliable decision-making.
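The pattern the tutorial covers can be sketched in a few lines: sample candidate answers, score each with an internal critic, estimate uncertainty from disagreement between samples, and make a risk-adjusted pick. The critic below is a toy heuristic, not the tutorial's implementation:

```python
# Self-consistency plus risk-sensitive selection: an answer is penalized
# both for weak critic scores and for disagreeing with its siblings.

def critic(answer: str) -> dict:
    # Stand-in scores; a real critic would be another model call.
    return {"accuracy": len(set(answer)) / max(len(answer), 1),
            "coherence": 1.0 if answer.endswith(".") else 0.5}

def risk_adjusted(answer, candidates, risk_aversion=0.5):
    scores = critic(answer)
    mean = sum(scores.values()) / len(scores)
    # Uncertainty estimate: disagreement with the other samples.
    disagreement = sum(a != answer for a in candidates) / len(candidates)
    return mean - risk_aversion * disagreement

candidates = ["Paris.", "Paris.", "Lyon"]
best = max(candidates, key=lambda a: risk_adjusted(a, candidates))
print(best)  # Paris.
```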

AI · Neutral · arXiv – CS AI · Mar 2 · 5/10
🧠

How do Visual Attributes Influence Web Agents? A Comprehensive Evaluation of User Interface Design Factors

Researchers introduced VAF, a systematic evaluation pipeline to measure how visual web elements influence AI agent decision-making. The study tested 48 variants across 5 real-world websites and found that background contrast, item size, position, and card clarity significantly impact agent behavior, while font styling and text color have minimal effects.

AI · Neutral · arXiv – CS AI · Mar 2 · 5/10
🧠

fEDM+: A Risk-Based Fuzzy Ethical Decision Making Framework with Principle-Level Explainability and Pluralistic Validation

Researchers have introduced fEDM+, an enhanced fuzzy ethical decision-making framework for AI systems that provides principle-level explainability and validates decisions against multiple stakeholder perspectives. The framework extends the original fEDM by adding transparent explanations of ethical decisions and replacing single-point validation with pluralistic validation that accommodates different ethical viewpoints.
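The two ideas in the summary, graded per-principle scores plus a principle-level explanation of the outcome, can be sketched as follows; the principles, weights, and scores are invented for illustration, not taken from fEDM+:

```python
# Fuzzy (graded) ethical scoring: each option gets a membership degree
# per principle, and the chosen option is explained principle by
# principle rather than as an opaque aggregate.

def explainable_choice(options: dict, weights: dict):
    totals = {o: sum(weights[p] * s[p] for p in weights)
              for o, s in options.items()}
    best = max(totals, key=totals.get)
    # Principle-level explanation: the winner's score on each principle.
    explanation = {p: options[best][p] for p in weights}
    return best, explanation

options = {
    "disclose": {"autonomy": 0.9, "non_maleficence": 0.6},
    "withhold": {"autonomy": 0.2, "non_maleficence": 0.8},
}
weights = {"autonomy": 0.6, "non_maleficence": 0.4}
best, why = explainable_choice(options, weights)
print(best, why)  # disclose {'autonomy': 0.9, 'non_maleficence': 0.6}
```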

AI · Neutral · arXiv – CS AI · Mar 2 · 4/10
🧠

Resilient Strategies for Stochastic Systems: How Much Does It Take to Break a Winning Strategy?

Researchers introduce resilient strategies for stochastic systems, focusing on decision-making that remains robust against disturbances that could flip agent decisions. The work presents fundamental problems for Markov decision processes with reachability and safety objectives, extending to stochastic games with various disturbance aggregation methods.

AI · Neutral · arXiv – CS AI · Feb 27 · 3/10
🧠

Predicting Tennis Serve directions with Machine Learning

Researchers developed a machine learning method to predict professional tennis players' first serve directions, achieving 49% accuracy for male players and 44% for female players. The study provides evidence that top players use mixed-strategy serving decisions and suggests contextual information plays a larger role in tennis strategy than previously understood.
