y0news

#ai-trust News & Analysis

5 articles tagged with #ai-trust. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.

AI · Neutral · arXiv – CS AI · Apr 7 · 7/10

The Persuasion Paradox: When LLM Explanations Fail to Improve Human-AI Team Performance

Research reveals a 'Persuasion Paradox': LLM explanations increase user confidence but don't reliably improve human-AI team performance, and can actually undermine task accuracy. The study found that explanation effectiveness varies significantly by task type: visual reasoning tasks saw decreased error recovery, while logical reasoning tasks benefited from explanations.

AI · Bearish · arXiv – CS AI · Mar 11 · 6/10

Why do we Trust Chatbots? From Normative Principles to Behavioral Drivers

Researchers argue that trust in chatbots is often driven by behavioral manipulation rather than demonstrated trustworthiness, proposing that chatbots be viewed as skilled salespeople rather than as assistants. The study highlights how design choices exploit cognitive biases to influence user behavior, creating a gap between psychological trust formation and actual trustworthiness.

AI · Bullish · MIT News – AI · Mar 9 · 6/10

Improving AI models’ ability to explain their predictions

Researchers have developed a new approach to improve AI models' ability to explain their predictions, which could help users determine whether to trust model outputs. This advancement is particularly important for safety-critical applications such as healthcare and autonomous driving where understanding AI decision-making is crucial.

AI · Bullish · OpenAI News · Jul 17 · 6/10

Prover-Verifier Games improve legibility of language model outputs

Prover-verifier games are a new approach to improving the legibility and transparency of language model outputs. The method aims to make AI-generated content more verifiable and trustworthy for both human users and automated systems.