y0news

#trustworthy-ai News & Analysis

22 articles tagged with #trustworthy-ai. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.

AI × Crypto · Bullish · arXiv – CS AI · Apr 7 · 7/10
🤖

Quantifying Trust: Financial Risk Management for Trustworthy AI Agents

Researchers introduce the Agentic Risk Standard (ARS), a payment settlement framework for AI-mediated transactions that provides contractual compensation for agent failures. The standard shifts trust from implicit expectations about model behavior to explicit, measurable guarantees grounded in financial risk management principles.

AI · Bullish · arXiv – CS AI · Mar 27 · 7/10
🧠

Decidable By Construction: Design-Time Verification for Trustworthy AI

Researchers propose a framework for verifying AI model properties at design time rather than after deployment, using algebraic constraints over finitely generated abelian groups. The approach eliminates computational overhead of post-hoc verification by building trustworthiness into the model architecture from the start.

AI · Neutral · arXiv – CS AI · Mar 17 · 7/10
🧠

How Meta-research Can Pave the Road Towards Trustworthy AI In Healthcare: Catalogue of Ideas and Roadmap for Future Research

Researchers convened a February 2025 workshop to explore how meta-research methodologies can enhance Trustworthy AI (TAI) implementation in healthcare. The study identifies key challenges including robustness, reproducibility, clinical integration, and transparency gaps, proposing a roadmap for interdisciplinary collaboration between TAI and meta-research fields.

AI · Neutral · arXiv – CS AI · Mar 17 · 7/10
🧠

Efficient Federated Conformal Prediction with Group-Conditional Guarantee

Researchers propose group-conditional federated conformal prediction (GC-FCP), a new protocol that enables trustworthy AI uncertainty quantification across distributed clients while providing coverage guarantees for specific groups. The framework addresses challenges in federated learning for applications in healthcare, finance, and mobile sensing by creating compact weighted summaries that support efficient calibration.
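For readers unfamiliar with conformal prediction, the calibration step at the heart of such methods fits in a few lines. The sketch below is a generic group-conditional split conformal procedure, not GC-FCP's federated protocol; it assumes nonconformity scores and group labels have already been gathered, and the function names are illustrative.

```python
import numpy as np

# Minimal sketch of group-conditional split conformal calibration (an assumption
# about the general technique, not GC-FCP's actual API).

def calibrate_group_quantiles(scores, groups, alpha=0.1):
    """Compute a separate conformal threshold per group so that coverage of at
    least 1 - alpha holds within each group, not just on average."""
    quantiles = {}
    for g in np.unique(groups):
        s = np.sort(scores[groups == g])
        n = len(s)
        k = int(np.ceil((n + 1) * (1 - alpha))) - 1   # finite-sample corrected index
        quantiles[g] = s[min(k, n - 1)]
    return quantiles

def prediction_set(candidate_scores, group, quantiles):
    """Keep every candidate label whose nonconformity score is below the group's threshold."""
    return [label for label, s in candidate_scores.items() if s <= quantiles[group]]
```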

AI · Bullish · arXiv – CS AI · Mar 9 · 7/10
🧠

RAG-Driver: Generalisable Driving Explanations with Retrieval-Augmented In-Context Learning in Multi-Modal Large Language Model

Researchers introduce RAG-Driver, a retrieval-augmented multi-modal large language model designed for autonomous driving that can provide explainable decisions and control predictions. The system addresses data scarcity and generalization challenges in AI-driven autonomous vehicles by using in-context learning and expert demonstration retrieval.
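As a generic illustration of retrieval-augmented in-context prompting (not RAG-Driver's actual pipeline), the sketch below retrieves the expert demonstrations most similar to the current scene and prepends them to the prompt; the memory bank, embedding vectors, and demonstration fields are assumed.

```python
import numpy as np

# Hedged sketch: cosine-similarity retrieval over a memory of expert driving
# demonstrations, followed by in-context prompt construction. Field names and
# prompt format are placeholders.

def retrieve_demonstrations(query_embedding, memory_embeddings, demonstrations, k=3):
    """Return the k expert demonstrations most similar to the current scene."""
    sims = memory_embeddings @ query_embedding / (
        np.linalg.norm(memory_embeddings, axis=1) * np.linalg.norm(query_embedding) + 1e-8
    )
    return [demonstrations[i] for i in np.argsort(-sims)[:k]]

def build_prompt(scene_description, retrieved):
    """Prepend the retrieved (scene, explanation, control) triples as in-context examples."""
    examples = "\n\n".join(
        f"Scene: {d['scene']}\nExplanation: {d['explanation']}\nControl: {d['control']}"
        for d in retrieved
    )
    return f"{examples}\n\nScene: {scene_description}\nExplanation:"
```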

AI · Neutral · arXiv – CS AI · Mar 5 · 6/10
🧠

From Privacy to Trust in the Agentic Era: A Taxonomy of Challenges in Trustworthy Federated Learning Through the Lens of Trust Report 2.0

Researchers propose a Trustworthy Federated Learning (TFL) framework that treats trust as a continuously maintained system condition rather than a static property, addressing challenges in AI systems with autonomous decision-making. The framework introduces Trust Report 2.0 as a privacy-preserving coordination blueprint for multi-stakeholder governance in federated learning deployments.

AI · Neutral · arXiv – CS AI · Mar 4 · 6/10
🧠

Architecting Trust in Artificial Epistemic Agents

Researchers propose a framework for developing trustworthy AI agents that function as epistemic entities, capable of pursuing knowledge goals and shaping information environments. The paper argues that as AI models increasingly replace traditional search methods and provide specialized advice, their calibration to human epistemic norms becomes critical to prevent cognitive deskilling and epistemic drift.

AI · Bullish · arXiv – CS AI · Mar 27 · 6/10
🧠

ElephantBroker: A Knowledge-Grounded Cognitive Runtime for Trustworthy AI Agents

Researchers have introduced ElephantBroker, an open-source cognitive runtime system that combines knowledge graphs with vector storage to create more trustworthy AI agents with verifiable memory. The system implements comprehensive safety measures, evidence verification, and multi-organizational access controls for enterprise AI deployments.

AI · Bullish · arXiv – CS AI · Mar 26 · 6/10
🧠

Learning To Guide Human Decision Makers With Vision-Language Models

Researchers introduce Learning to Guide (LTG), a new AI framework where machines provide interpretable guidance to human decision-makers rather than making automated decisions. The SLOG approach transforms vision-language models into guidance generators using human feedback, showing promise in medical diagnosis applications.

AI · Bearish · arXiv – CS AI · Mar 17 · 6/10
🧠

Do Metrics for Counterfactual Explanations Align with User Perception?

A new study reveals that standard algorithmic metrics used to evaluate AI counterfactual explanations poorly correlate with human perceptions of explanation quality. The research found weak and dataset-dependent relationships between technical metrics and user judgments, highlighting fundamental limitations in current AI explainability evaluation methods.
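As a rough illustration of the comparison the study performs, the sketch below computes two common counterfactual metrics (L1 proximity and sparsity, which may differ from the paper's exact definitions) and rank-correlates them with user ratings.

```python
import numpy as np
from scipy.stats import spearmanr

# Illustrative metric-vs-perception comparison; not the paper's evaluation code.

def proximity(x, x_cf):
    """L1 distance between the original instance and its counterfactual."""
    return float(np.abs(x - x_cf).sum())

def sparsity(x, x_cf, tol=1e-6):
    """Number of features the counterfactual changes."""
    return int((np.abs(x - x_cf) > tol).sum())

def metric_user_alignment(metric_values, user_ratings):
    """Spearman rank correlation between an algorithmic metric and human judgments."""
    rho, p_value = spearmanr(metric_values, user_ratings)
    return rho, p_value
```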

AI · Neutral · arXiv – CS AI · Mar 17 · 6/10
🧠

Concisely Explaining the Doubt: Minimum-Size Abductive Explanations for Linear Models with a Reject Option

Researchers developed a method to compute minimum-size abductive explanations for AI linear models with reject options, addressing a key challenge in explainable AI for critical domains. The approach uses log-linear algorithms for accepted instances and integer linear programming for rejected instances, proving more efficient than existing methods despite theoretical NP-hardness.
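For accepted instances of a linear classifier, a cardinality-minimal abductive explanation can be found with the well-known sort-and-greedy (hence log-linear) procedure sketched below; bounded feature domains are assumed, and the paper's reject-option handling and ILP formulation for rejected instances are not reproduced here.

```python
import numpy as np

# Sketch of the classic greedy computation of a minimum-size abductive
# explanation for a linear classifier w·x + b >= 0 with features bounded
# in [lo, hi]. A generic illustration of the technique, not the paper's code.

def min_size_explanation(w, b, x, lo, hi):
    """Smallest set of features that, once fixed to x's values, forces the positive
    prediction no matter how the remaining (free) features vary within their bounds."""
    worst_free = np.minimum(w * lo, w * hi)   # worst-case contribution of a free feature
    fixed_gain = w * x - worst_free           # how much fixing feature i to x_i helps (>= 0)
    score = float(b + worst_free.sum())       # decision score with every feature left free
    explanation = []
    for i in np.argsort(-fixed_gain):         # fix the most helpful features first
        if score >= 0:
            break
        explanation.append(int(i))
        score += fixed_gain[i]
    return sorted(explanation)
```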

AI · Neutral · arXiv – CS AI · Mar 16 · 6/10
🧠

Causality Is Key to Understand and Balance Multiple Goals in Trustworthy ML and Foundation Models

Researchers propose integrating causal methods into machine learning systems to balance competing objectives like fairness, privacy, robustness, accuracy, and explainability. The paper argues that addressing these principles in isolation leads to conflicts and suboptimal solutions, while causal approaches can help navigate trade-offs in both trustworthy ML and foundation models.

AI · Neutral · arXiv – CS AI · Mar 3 · 7/10
🧠

Property-Driven Evaluation of GNN Expressiveness at Scale: Datasets, Framework, and Study

Researchers developed a comprehensive evaluation framework for Graph Neural Networks (GNNs) using formal specification methods, creating 336 new datasets to test GNN expressiveness across 16 fundamental graph properties. The study reveals that no single pooling approach consistently performs well across all properties, with attention-based pooling excelling in generalization while second-order pooling provides better sensitivity.
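The two pooling families contrasted in the study differ roughly as in the sketch below, written for node embeddings H of shape (num_nodes, d); the attention scorer is a stand-in for whatever learned readout a real GNN would use, not the paper's framework.

```python
import numpy as np

# Illustrative contrast between attention-based and second-order graph pooling.

def attention_pooling(H, score_weights):
    """Softmax-weighted mean of node embeddings (the family the study finds generalizes well)."""
    scores = H @ score_weights
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ H                       # shape (d,)

def second_order_pooling(H):
    """Flattened H^T H statistic, sensitive to pairwise feature interactions."""
    return (H.T @ H).reshape(-1)             # shape (d * d,)
```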

AI · Bullish · arXiv – CS AI · Mar 3 · 7/10
🧠

Inference-Time Safety For Code LLMs Via Retrieval-Augmented Revision

Researchers developed a new inference-time safety mechanism for code-generating AI models that uses retrieval-augmented generation to identify and fix security vulnerabilities in real-time. The approach leverages Stack Overflow discussions to guide AI code revision without requiring model retraining, improving security while maintaining interpretability.
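Conceptually, the inference-time loop looks something like the sketch below, where `retriever` stands in for a search index over Stack Overflow security discussions and `llm` for the code model; both objects and the prompt wording are placeholders rather than the paper's implementation.

```python
# Hedged sketch of retrieval-augmented revision at inference time, with no retraining.

def revise_generated_code(code, retriever, llm, k=3):
    """Retrieve security discussions relevant to the generated code and ask the
    model to revise it accordingly."""
    evidence = retriever.search(code, top_k=k)
    context = "\n\n".join(f"[{i + 1}] {doc}" for i, doc in enumerate(evidence))
    prompt = (
        "The following code may contain security vulnerabilities.\n\n"
        f"Code:\n{code}\n\n"
        f"Relevant security discussions:\n{context}\n\n"
        "Rewrite the code to remove any vulnerabilities and note which discussion justified each change."
    )
    return llm(prompt)
```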

AI · Bullish · arXiv – CS AI · Mar 3 · 6/10
🧠

Calibrating Verbalized Confidence with Self-Generated Distractors

Researchers introduce DINCO (Distractor-Normalized Coherence), a method to improve confidence calibration in large language models by using self-generated alternative claims to reduce overconfidence bias. The approach addresses LLM suggestibility issues that cause models to express high confidence on low-accuracy outputs, potentially improving AI safety and trustworthiness.
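A heavily simplified version of the normalization idea: divide the model's stated confidence in its answer by the total confidence it assigns across that answer and its self-generated alternatives. DINCO's actual coherence formulation is richer, and `verbalized_confidence` below is a placeholder for a prompt that asks the model to rate a claim between 0 and 1.

```python
# Hedged sketch of distractor-normalized verbalized confidence.

def distractor_normalized_confidence(model, question, answer, distractors, verbalized_confidence):
    candidates = [answer] + list(distractors)
    confidences = [verbalized_confidence(model, question, c) for c in candidates]
    total = sum(confidences) or 1e-8   # guard against an all-zero response
    return confidences[0] / total
```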

AI · Bullish · arXiv – CS AI · Mar 2 · 7/10
🧠

DUET: Distilled LLM Unlearning from an Efficiently Contextualized Teacher

Researchers propose DUET, a new distillation-based method for LLM unlearning that removes undesirable knowledge from AI models without full retraining. The technique combines computational efficiency with security advantages, achieving better performance in both knowledge removal and utility preservation while being significantly more data-efficient than existing methods.
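A generic distillation-style unlearning objective (not DUET's exact loss) can be written as a retain term that matches the teacher's distribution plus a forget term that pushes the student toward an uninformative distribution, as sketched below.

```python
import torch
import torch.nn.functional as F

# Hedged sketch: distillation-based unlearning loss with retain and forget terms.

def unlearning_loss(student_logits_retain, teacher_logits_retain,
                    student_logits_forget, forget_weight=1.0):
    # Retain term: student mimics the teacher's next-token distribution.
    retain = F.kl_div(
        F.log_softmax(student_logits_retain, dim=-1),
        F.softmax(teacher_logits_retain, dim=-1),
        reduction="batchmean",
    )
    # Forget term: drive the student's distribution toward uniform on forget data.
    vocab = student_logits_forget.size(-1)
    uniform = torch.full_like(student_logits_forget, 1.0 / vocab)
    forget = F.kl_div(
        F.log_softmax(student_logits_forget, dim=-1),
        uniform,
        reduction="batchmean",
    )
    return retain + forget_weight * forget
```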

AI · Bullish · OpenAI News · Dec 3 · 6/10
🧠

How confessions can keep language models honest

OpenAI researchers are developing a 'confessions' method to train AI language models to acknowledge their mistakes and undesirable behavior. This approach aims to enhance AI honesty, transparency, and overall trustworthiness in model outputs.

AI · Neutral · arXiv – CS AI · Mar 27 · 5/10
🧠

A Unified Memory Perspective for Probabilistic Trustworthy AI

Researchers present a unified framework for probabilistic AI computation that treats deterministic and stochastic data access under a common perspective. The study identifies memory systems as performance bottlenecks in trustworthy AI and proposes compute-in-memory approaches to address scalability challenges.

AI · Neutral · OpenAI News · Jul 15 · 4/10
🧠

Intellectual freedom by design

ChatGPT is positioned as a versatile AI tool designed with three core principles: usefulness, trustworthiness, and adaptability. The design philosophy emphasizes user customization and intellectual freedom in how the AI system can be utilized.