25 articles tagged with #human-computer-interaction. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.
AI · Neutral · arXiv – CS AI · Mar 27 · 7/10
🧠A user study with 200 participants found that while explanation correctness in AI systems affects human understanding, the relationship is non-linear: performance drops significantly at 70% correctness but does not degrade further below that threshold. The research challenges the assumption that higher computational correctness metrics automatically translate to better human comprehension of AI decisions.
AI · Bearish · arXiv – CS AI · Mar 16 · 7/10
🧠Researchers introduced CoRE, a benchmark testing whether large language models can reason about human emotions through cognitive dimensions rather than just labels. The study found that while LLMs capture systematic relations between cognitive appraisals and emotions, they show misalignment with human judgments and instability across different contexts.
AI · Bullish · arXiv – CS AI · Mar 11 · 7/10
🧠Researchers propose AgentOS, a new operating system paradigm that replaces traditional GUI/CLI interfaces with natural language-driven interactions powered by AI agents. The system would feature an Agent Kernel for intent interpretation and task coordination, transforming conventional applications into modular skills that users can compose through natural language commands.
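The summary above describes an Agent Kernel that maps natural-language commands onto modular skills. As a rough illustration of that composition idea (the skill names, trigger-keyword matching, and dispatcher here are invented for the sketch; AgentOS would use LLM-based intent interpretation rather than keyword lookup):

```python
def make_agent_kernel(skills):
    """Toy dispatcher in the spirit of an 'Agent Kernel': match a natural-
    language command against registered skills and run each match in turn."""
    def handle(command):
        plan = [name for name, skill in skills.items()
                if skill["trigger"] in command.lower()]
        return [skills[name]["run"](command) for name in plan]
    return handle

# Hypothetical skills converted from conventional applications.
skills = {
    "open_mail": {"trigger": "mail", "run": lambda cmd: "mail opened"},
    "summarize": {"trigger": "summarize", "run": lambda cmd: "summary ready"},
}
kernel = make_agent_kernel(skills)
results = kernel("summarize my mail")  # composes both skills in one command
```

A single utterance triggering several skills is the "compose through natural language" behavior the paper proposes; everything else is scaffolding.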
AI · Bearish · arXiv – CS AI · Mar 17 · 6/10
🧠A new study reveals that standard algorithmic metrics used to evaluate AI counterfactual explanations poorly correlate with human perceptions of explanation quality. The research found weak and dataset-dependent relationships between technical metrics and user judgments, highlighting fundamental limitations in current AI explainability evaluation methods.
AI · Bullish · arXiv – CS AI · Mar 16 · 6/10
🧠Researchers introduce CRAFT-GUI, a curriculum learning framework that uses reinforcement learning to improve AI agents' performance in graphical user interface tasks. The method addresses difficulty variation across GUI tasks and provides more nuanced feedback, achieving 5.6% improvement on Android Control benchmarks and 10.3% on internal benchmarks.
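The curriculum idea behind CRAFT-GUI can be sketched generically: group tasks by estimated difficulty and widen the sampling pool as training progresses. This is a minimal sketch of the general pattern, not CRAFT-GUI's actual grouping or reward design, which the summary does not detail:

```python
import random

def make_curriculum_sampler(tasks, stages=3):
    """Split tasks into difficulty buckets; sample from progressively
    larger pools (easy first, harder mixed in) as training advances."""
    ranked = sorted(tasks, key=lambda t: t["difficulty"])
    size = max(1, len(ranked) // stages)
    buckets = [ranked[i * size:(i + 1) * size] for i in range(stages)]
    buckets[-1].extend(ranked[stages * size:])  # leftovers join hardest bucket

    def sample(progress):  # training progress in [0, 1]
        stage = min(int(progress * stages), stages - 1)
        pool = [t for bucket in buckets[:stage + 1] for t in bucket]
        return random.choice(pool)

    return sample

tasks = [{"name": f"task{i}", "difficulty": i} for i in range(9)]
sampler = make_curriculum_sampler(tasks)
early = sampler(0.0)   # drawn only from the easiest third
late = sampler(0.95)   # may be drawn from any bucket
```

The RL policy would then be trained on the sampled tasks, with feedback shaped per difficulty stage.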
AI · Neutral · arXiv – CS AI · Mar 4 · 5/10
🧠Researchers propose a new framework for handling ambiguity in natural language queries for tabular data analysis, reframing ambiguity as a cooperative feature rather than a deficiency. The study analyzes 15 datasets and finds that current evaluation methods inadequately assess both system accuracy and interpretation capabilities.
AI · Bullish · arXiv – CS AI · Mar 4 · 5/10
🧠Researchers introduce MultiSessionCollab, a benchmark for evaluating conversational AI agents' ability to learn and adapt to user preferences across multiple collaboration sessions. The study demonstrates that equipping agents with persistent memory significantly improves long-term collaboration quality, task success rates, and user experience.
AI · Bullish · arXiv – CS AI · Mar 3 · 7/10
🧠Researchers have developed Egocentric Co-Pilot, a web-native AI framework that runs on smart glasses and uses Large Language Models to provide assistive AI without requiring screens or free hands. The system combines perception, reasoning, and web tools to support accessibility for people with vision impairments or cognitive overload, showing superior performance compared to commercial baselines.
AI · Bullish · arXiv – CS AI · Mar 2 · 7/10
🧠Researchers have introduced Hello-Chat, an end-to-end audio language model designed to create more realistic and emotionally resonant AI conversations. The model addresses the robotic quality of existing Large Audio Language Models by training on real-life conversation data, reporting strong gains in prosodic naturalness and emotional alignment.
AI · Bullish · arXiv – CS AI · Feb 27 · 5/10
🧠Researchers developed EyeLayer, a module that integrates human eye-tracking patterns into large language models to improve code summarization. The system achieved up to 13.17% improvement on BLEU-4 metrics by using human gaze data to guide AI attention mechanisms.
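The core idea of using gaze to steer attention can be illustrated with a toy reweighting: blend the model's attention logits with normalized human fixation durations before the softmax. This is an assumed simplification for illustration; EyeLayer's actual integration into the transformer is not specified in the summary:

```python
import math

def gaze_biased_attention(scores, fixations_ms, alpha=0.5):
    """Bias attention logits toward tokens humans fixated on longer,
    then renormalize with a softmax. `alpha` controls the gaze influence."""
    total = sum(fixations_ms)
    gaze = [f / total for f in fixations_ms]           # normalize fixations
    biased = [s + alpha * math.log(g + 1e-9) for s, g in zip(scores, gaze)]
    m = max(biased)                                    # stable softmax
    exps = [math.exp(b - m) for b in biased]
    z = sum(exps)
    return [e / z for e in exps]

# Three code tokens: model logits plus fixation durations in milliseconds.
attn = gaze_biased_attention([1.0, 0.5, 0.2], [300, 80, 20])
```

Here the token humans dwelt on longest also receives the largest attention weight, which is the behavior the gaze guidance is meant to encourage.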
AI · Neutral · IEEE Spectrum – AI · Feb 11 · 6/10
🧠AI companions are becoming increasingly popular due to advances in large language models, but research from UT Austin highlights potential harms including reduced well-being, disconnection from the physical world, and commitment burden on users. While AI companions may offer benefits like addressing loneliness and building social skills, researchers emphasize the need to establish harm pathways early to guide better design and prevent negative outcomes.
AI · Bullish · Google DeepMind Blog · Oct 30 · 5/10
🧠New speech generation technologies are being developed to create more natural and conversational digital assistants and AI tools. The advancement aims to improve human-computer interaction through more intuitive audio interfaces.
AI · Neutral · arXiv – CS AI · Mar 27 · 4/10
🧠Researchers analyzed AI data science systems designed for medical settings, finding that success depends on creating transparent intermediate artifacts like readable query languages and concept definitions. These intermediates help users reason about analytical choices and contribute domain expertise, despite opacity in other parts of the AI process.
AI · Bullish · arXiv – CS AI · Mar 17 · 5/10
🧠Researchers have published a comprehensive review of methods for integrating large language models (LLMs) into virtual reality environments to create more realistic digital humans with personality traits. The study explores various approaches including zero-shot, few-shot, and fine-tuning methods while highlighting challenges like computational demands and latency issues that need to be addressed for practical applications.
AI · Neutral · arXiv – CS AI · Mar 12 · 4/10
🧠Researchers have developed a platform-agnostic Digital Human Modelling framework that integrates multimodal biosensing (EEG, EMG, EOG, PPG) with game-based interactions for AI research. The framework separates sensing from AI inference to enable ethical, reproducible research in accessibility and human-computer interaction studies.
AI · Neutral · arXiv – CS AI · Mar 11 · 4/10
🧠Researchers developed a framework to identify what makes AI-generated optimal solutions more interpretable to humans, focusing on bin-packing problems. The study found that humans prefer solutions with three key properties: alignment with greedy heuristics, simple within-bin composition, and ordered visual representation.
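The "alignment with greedy heuristics" property can be made concrete with the classic first-fit-decreasing heuristic, which also yields the ordered, simple within-bin composition the study says humans prefer. A minimal sketch; the paper's framework evaluates such properties rather than prescribing this algorithm:

```python
def first_fit_decreasing(items, capacity):
    """First-fit-decreasing bin packing: sort items largest-first and
    place each in the first bin with room, opening a new bin if none fits."""
    bins = []
    for item in sorted(items, reverse=True):
        for b in bins:
            if sum(b) + item <= capacity:
                b.append(item)
                break
        else:               # no existing bin had room
            bins.append([item])
    return bins

packing = first_fit_decreasing([4, 8, 1, 4, 2, 1], capacity=10)
# → [[8, 2], [4, 4, 1, 1]]: each bin lists items in decreasing size,
#   matching the "ordered visual representation" humans found clearer
```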
AI · Neutral · arXiv – CS AI · Mar 9 · 4/10
🧠Researchers propose a novel Residual Masking Network that combines deep residual networks with attention mechanisms for facial expression recognition. The method achieves state-of-the-art accuracy on FER2013 and VEMO datasets by using segmentation networks to refine feature maps and focus on relevant facial information.
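The masking-plus-residual idea reduces to scaling features by a soft attention mask while letting the original features pass through a skip connection. A toy 1-D sketch under that assumption; the paper's blocks operate on 2-D CNN feature maps with masks produced by segmentation networks:

```python
import math

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def residual_masking_block(features, mask_logits):
    """out = features + sigmoid(mask_logits) * features: relevant regions
    (high mask logit) are amplified, irrelevant ones survive via the skip."""
    return [f + sigmoid(m) * f for f, m in zip(features, mask_logits)]

out = residual_masking_block([1.0, 2.0, 3.0], [10.0, 0.0, -10.0])
# strongly masked-in features roughly double; masked-out features pass
# through nearly unchanged thanks to the residual connection
```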
AI · Neutral · arXiv – CS AI · Mar 3 · 4/10
🧠Researchers have developed a new resource-rational framework for modeling visual attention as a sequential decision-making process using AI techniques like Partially Observable Markov Decision Processes. The framework successfully models human eye-movement behaviors in tasks like reading and multitasking, offering potential applications for Human-Computer Interaction design.
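At the heart of any POMDP treatment of attention is a Bayesian belief update over hidden state after each noisy glance. A minimal sketch of that single step, with an invented four-target example; the paper's models add action selection and resource costs on top:

```python
def belief_update(belief, obs_likelihood):
    """One POMDP observation step: b'(s) ∝ P(o | s) · b(s), renormalized.
    `belief` and `obs_likelihood` are indexed by candidate gaze targets."""
    posterior = [b * l for b, l in zip(belief, obs_likelihood)]
    z = sum(posterior)
    return [p / z for p in posterior]

# Uniform prior over four possible fixation targets; one noisy observation
# strongly favors the first target.
b = belief_update([0.25, 0.25, 0.25, 0.25], [0.9, 0.05, 0.03, 0.02])
```

The agent would then choose its next fixation to maximize expected information per unit of (bounded) cognitive resource.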
AI · Bullish · Apple Machine Learning · Mar 3 · 5/10
🧠EMBridge is a new AI framework that enhances gesture recognition from EMG biosignals by aligning them with high-quality structured data from videos and images. The technology enables zero-shot gesture generalization on low-power wearable devices, potentially advancing human-computer interaction applications.
AI · Neutral · arXiv – CS AI · Feb 27 · 4/10
🧠Researchers propose a new approach to augmented reading systems that uses simulation-based optimization and resource-rational models of human cognition. The method includes offline design exploration and online personalization to create adaptive reading interfaces without extensive human testing.
AI · Neutral · arXiv – CS AI · Feb 27 · 4/10
🧠PuppetChat is a research prototype messaging system that uses AI-powered recommendations and personalized micronarratives to enhance intimate communication between close partners and friends. A 10-day field study with 11 dyads showed the system improved social presence, self-disclosure, and relationship continuity through more expressive bidirectional interactions.
AI · Neutral · Google Research Blog · Feb 10 · 4/10
🧠This research focuses on human-computer interaction and visualization methods for creating, simulating, and testing dynamic group conversations involving multiple humans and AI systems. The work extends beyond traditional one-on-one interactions to explore more complex multi-participant dialogue scenarios.
AI · Neutral · Google Research Blog · Sep 18 · 4/10
🧠Sensible Agent introduces a framework for creating proactive augmented reality agents that interact with users in unobtrusive ways. The research focuses on human-computer interaction principles and visualization techniques to improve AR agent integration into daily experiences.
AI · Neutral · Google Research Blog · Jul 2 · 4/10
🧠Research focuses on improving accessibility in group conversations through sound localization technology. The work falls under Human-Computer Interaction and Visualization, aiming to help users better identify and follow multiple speakers in group settings.
AI · Neutral · arXiv – CS AI · Mar 3 · 4/10
🧠Researchers present PleaSQLarify, a visual interface system that helps resolve ambiguity in natural language database queries through pragmatic repair, an incremental clarification process. The system uses interpretable decision variables and visual exploration to help users efficiently disambiguate queries when their intent doesn't match the system's interpretation.
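The incremental-clarification loop can be sketched abstractly: while a query admits multiple interpretations, ask the user to narrow them down. The interpretations, query text, and `ask` callback here are illustrative stand-ins; PleaSQLarify drives this dialogue through a visual interface with interpretable decision variables rather than a text prompt:

```python
def clarify_query(query, interpretations, ask):
    """Pragmatic-repair loop: keep asking until one interpretation remains.
    `ask(query, options)` returns the option the user selects."""
    while len(interpretations) > 1:
        choice = ask(query, interpretations)
        interpretations = [i for i in interpretations if i == choice]
    return interpretations[0] if interpretations else None

readings = ["revenue per region", "revenue per quarter"]
picked = clarify_query("show revenue breakdown", readings,
                       ask=lambda q, opts: opts[0])  # stub user picks first
```

An unambiguous query falls straight through the loop with no clarification round, which is the cooperative behavior such systems aim for.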