y0news

#human-computer-interaction News & Analysis

25 articles tagged with #human-computer-interaction. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.

AI · Neutral · arXiv – CS AI · Mar 27 · 7/10
🧠

Does Explanation Correctness Matter? Linking Computational XAI Evaluation to Human Understanding

A user study with 200 participants found that while explanation correctness in AI systems affects human understanding, the relationship is non-linear: performance drops sharply at 70% correctness but does not degrade further below that threshold. The research challenges the assumption that higher computational correctness metrics automatically translate into better human comprehension of AI decisions.

AI · Bearish · arXiv – CS AI · Mar 16 · 7/10
🧠

Large language models show fragile cognitive reasoning about human emotions

Researchers introduced CoRE, a benchmark testing whether large language models can reason about human emotions through cognitive dimensions rather than just labels. The study found that while LLMs capture systematic relations between cognitive appraisals and emotions, they show misalignment with human judgments and instability across different contexts.

AI · Bullish · arXiv – CS AI · Mar 11 · 7/10
🧠

AgentOS: From Application Silos to a Natural Language-Driven Data Ecosystem

Researchers propose AgentOS, a new operating system paradigm that replaces traditional GUI/CLI interfaces with natural language-driven interactions powered by AI agents. The system would feature an Agent Kernel for intent interpretation and task coordination, transforming conventional applications into modular skills that users can compose through natural language commands.

AI · Bearish · arXiv – CS AI · Mar 17 · 6/10
🧠

Do Metrics for Counterfactual Explanations Align with User Perception?

A new study reveals that standard algorithmic metrics used to evaluate AI counterfactual explanations poorly correlate with human perceptions of explanation quality. The research found weak and dataset-dependent relationships between technical metrics and user judgments, highlighting fundamental limitations in current AI explainability evaluation methods.
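The weak correlation the study reports can be illustrated with a rank-correlation check between metric scores and user ratings. This is a minimal sketch with hypothetical data, not the paper's actual metrics or datasets; `spearman_rho` is a hand-rolled Spearman correlation (no tie handling) for illustration only.

```python
from statistics import mean

def spearman_rho(xs, ys):
    """Spearman rank correlation: Pearson correlation of the ranks
    (ties not handled -- fine for this tie-free toy data)."""
    def ranks(vs):
        order = sorted(range(len(vs)), key=lambda i: vs[i])
        r = [0.0] * len(vs)
        for rank, i in enumerate(order):
            r[i] = float(rank)
        return r
    rx, ry = ranks(xs), ranks(ys)
    mx, my = mean(rx), mean(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    var_x = sum((a - mx) ** 2 for a in rx)
    var_y = sum((b - my) ** 2 for b in ry)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical: a counterfactual-proximity metric vs. 1-5 user ratings.
metric_scores = [0.91, 0.85, 0.78, 0.66, 0.52]
user_ratings = [3, 5, 2, 4, 1]
rho = spearman_rho(metric_scores, user_ratings)  # moderate at best
```

A rho well below 1.0 on data like this is the kind of weak, dataset-dependent alignment the study describes.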

AI · Bullish · arXiv – CS AI · Mar 16 · 6/10
🧠

CRAFT-GUI: Curriculum-Reinforced Agent For GUI Tasks

Researchers introduce CRAFT-GUI, a curriculum learning framework that uses reinforcement learning to improve AI agents' performance on graphical user interface tasks. The method accounts for difficulty variation across GUI tasks and provides more fine-grained feedback, achieving a 5.6% improvement on the Android Control benchmark and 10.3% on internal benchmarks.
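The curriculum idea can be sketched as a scheduler that orders tasks by difficulty and advances only once recent performance clears a threshold. This is a generic illustration of curriculum scheduling, not CRAFT-GUI's actual algorithm; the task names, the `attempt` toy model, and all parameters are hypothetical.

```python
def curriculum_train(tasks, attempt, threshold=0.8, window=20):
    """Train on tasks in increasing difficulty; advance a stage once the
    success rate over the last `window` attempts clears `threshold`."""
    history = []
    for stage, task in enumerate(sorted(tasks, key=lambda t: t["difficulty"])):
        successes = []
        while True:
            successes.append(attempt(task))
            recent = successes[-window:]
            if len(recent) == window and sum(recent) / window >= threshold:
                break
        history.append((stage, task["name"], len(successes)))
    return history

# Toy agent: each task starts succeeding after enough practice (hypothetical).
practice = {}
def attempt(task):
    n = practice.get(task["name"], 0) + 1
    practice[task["name"]] = n
    return n > task["difficulty"] * 10

log = curriculum_train(
    [{"name": "tap", "difficulty": 0.2},
     {"name": "scroll", "difficulty": 0.5},
     {"name": "multi-step form", "difficulty": 0.9}],
    attempt)
```

Harder stages take more attempts before the agent graduates, which is the behavior a curriculum exploits: easy tasks bootstrap the skills needed for hard ones.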

AI · Bullish · arXiv – CS AI · Mar 4 · 5/10
🧠

MultiSessionCollab: Learning User Preferences with Memory to Improve Long-Term Collaboration

Researchers introduce MultiSessionCollab, a benchmark for evaluating conversational AI agents' ability to learn and adapt to user preferences across multiple collaboration sessions. The study demonstrates that equipping agents with persistent memory significantly improves long-term collaboration quality, task success rates, and user experience.

AI · Bullish · arXiv – CS AI · Mar 3 · 7/10
🧠

Egocentric Co-Pilot: Web-Native Smart-Glasses Agents for Assistive Egocentric AI

Researchers have developed Egocentric Co-Pilot, a web-native AI framework that runs on smart glasses and uses Large Language Models to provide assistive AI without requiring screens or free hands. The system combines perception, reasoning, and web tools to support accessibility for people with vision impairments or cognitive overload, showing superior performance compared to commercial baselines.

AI · Bullish · arXiv – CS AI · Mar 2 · 7/10
🧠

Hello-Chat: Towards Realistic Social Audio Interactions

Researchers have introduced Hello-Chat, an end-to-end audio language model designed to create more realistic and emotionally resonant AI conversations. The model addresses the robotic nature of existing Large Audio Language Models by using real-life conversation data and achieving breakthrough performance in prosodic naturalness and emotional alignment.

AI · Neutral · IEEE Spectrum – AI · Feb 11 · 6/10
🧠

How Can AI Companions Be Helpful, not Harmful?

AI companions are becoming increasingly popular due to advances in large language models, but research from UT Austin highlights potential harms including reduced well-being, disconnection from the physical world, and commitment burden on users. While AI companions may offer benefits like addressing loneliness and building social skills, researchers emphasize the need to establish harm pathways early to guide better design and prevent negative outcomes.

AI · Bullish · Google DeepMind Blog · Oct 30 · 5/10
🧠

Pushing the frontiers of audio generation

New speech generation technologies are being developed to create more natural and conversational digital assistants and AI tools. The advancement aims to improve human-computer interaction through more intuitive audio interfaces.

AI · Bullish · arXiv – CS AI · Mar 17 · 5/10
🧠

Integrating Personality into Digital Humans: A Review of LLM-Driven Approaches for Virtual Reality

Researchers have published a comprehensive review of methods for integrating large language models (LLMs) into virtual reality environments to create more realistic digital humans with personality traits. The study explores various approaches including zero-shot, few-shot, and fine-tuning methods while highlighting challenges like computational demands and latency issues that need to be addressed for practical applications.

AI · Neutral · arXiv – CS AI · Mar 11 · 4/10
🧠

Unpacking Interpretability: Human-Centered Criteria for Optimal Combinatorial Solutions

Researchers developed a framework to identify what makes AI-generated optimal solutions more interpretable to humans, focusing on bin-packing problems. The study found that humans prefer solutions with three key properties: alignment with greedy heuristics, simple within-bin composition, and ordered visual representation.
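The greedy heuristic that users reportedly favor can be illustrated with first-fit decreasing, a standard greedy bin-packing strategy. A minimal sketch, assuming the study's "alignment with greedy heuristics" refers to this family of methods; the item sizes and capacity below are made up.

```python
def first_fit_decreasing(items, capacity):
    """Greedy bin packing: place each item (largest first) into the
    first bin with room, opening a new bin when none fits."""
    bins = []
    for item in sorted(items, reverse=True):
        for b in bins:
            if sum(b) + item <= capacity:
                b.append(item)
                break
        else:  # no existing bin fits -> open a new one
            bins.append([item])
    return bins

bins = first_fit_decreasing([4, 8, 1, 4, 2, 1], capacity=10)
```

Solutions built this way tend to have the simple within-bin composition and predictable ordering that the study found humans rate as more interpretable.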

AI · Neutral · arXiv – CS AI · Mar 9 · 4/10
🧠

Facial Expression Recognition Using Residual Masking Network

Researchers propose a novel Residual Masking Network that combines deep residual networks with attention mechanisms for facial expression recognition. The method achieves state-of-the-art accuracy on FER2013 and VEMO datasets by using segmentation networks to refine feature maps and focus on relevant facial information.

AI · Neutral · arXiv – CS AI · Mar 3 · 4/10
🧠

A Resource-Rational Principle for Modeling Visual Attention Control

Researchers have developed a new resource-rational framework for modeling visual attention as a sequential decision-making process using AI techniques like Partially Observable Markov Decision Processes. The framework successfully models human eye-movement behaviors in tasks like reading and multitasking, offering potential applications for Human-Computer Interaction design.
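The POMDP view of attention rests on maintaining a belief over hidden state (e.g., where a target is) and updating it after each noisy observation. A minimal Bayes-update sketch under that assumption; the four locations and likelihood values are hypothetical, and the actual framework adds action selection under resource costs.

```python
def belief_update(belief, likelihoods):
    """Bayes update of a belief over candidate locations, given the
    likelihood of the current observation at each location."""
    posterior = [b * l for b, l in zip(belief, likelihoods)]
    z = sum(posterior)  # normalizing constant
    return [p / z for p in posterior]

# Hypothetical: 4 candidate locations, uniform prior; a fixation yields
# a noisy observation most consistent with location 2.
belief = [0.25, 0.25, 0.25, 0.25]
belief = belief_update(belief, [0.1, 0.2, 0.6, 0.1])
# A resource-rational agent would then fixate wherever the expected
# information gain (relative to effort) is highest.
```

Repeating this update across fixations is what lets such models reproduce sequential eye-movement behavior in tasks like reading.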

AI · Neutral · arXiv – CS AI · Feb 27 · 4/10
🧠

Simulation-based Optimization for Augmented Reading

Researchers propose a new approach to augmented reading systems that uses simulation-based optimization and resource-rational models of human cognition. The method includes offline design exploration and online personalization to create adaptive reading interfaces without extensive human testing.

AI · Neutral · arXiv – CS AI · Feb 27 · 4/10
🧠

PuppetChat: Fostering Intimate Communication through Bidirectional Actions and Micronarratives

PuppetChat is a research prototype messaging system that uses AI-powered recommendations and personalized micronarratives to enhance intimate communication between close partners and friends. A 10-day field study with 11 dyads showed the system improved social presence, self-disclosure, and relationship continuity through more expressive bidirectional interactions.

AI · Neutral · Google Research Blog · Sep 18 · 4/10
🧠

Sensible Agent: A framework for unobtrusive interaction with proactive AR agents

Sensible Agent introduces a framework for creating proactive augmented reality agents that interact with users in unobtrusive ways. The research focuses on human-computer interaction principles and visualization techniques to improve AR agent integration into daily experiences.

AI · Neutral · Google Research Blog · Jul 2 · 4/10
🧠

Making group conversations more accessible with sound localization

Research focuses on improving accessibility in group conversations through sound localization technology. The work falls under Human-Computer Interaction and Visualization, aiming to help users better identify and follow multiple speakers in group settings.

AI · Neutral · arXiv – CS AI · Mar 3 · 4/10
🧠

PleaSQLarify: Visual Pragmatic Repair for Natural Language Database Querying

Researchers present PleaSQLarify, a visual interface system that helps resolve ambiguity in natural language database queries through pragmatic repair, an incremental clarification process. The system uses interpretable decision variables and visual exploration to help users efficiently disambiguate queries when their intent does not match the system's interpretation.