y0news

#human-ai-interaction News & Analysis

31 articles tagged with #human-ai-interaction. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.

AI · Neutral · arXiv – CS AI · Apr 7 · 7/10
🧠

Gradual Cognitive Externalization: A Framework for Understanding How Ambient Intelligence Externalizes Human Cognition

Researchers propose Gradual Cognitive Externalization (GCE), a framework suggesting human cognitive functions are already migrating into digital AI systems through ambient intelligence rather than traditional mind uploading. The study identifies evidence in scheduling assistants, writing tools, and AI agents that cognitive externalization is occurring now through bidirectional adaptation and functional equivalence.

AI · Bearish · arXiv – CS AI · Apr 7 · 7/10
🧠

AI Assistance Reduces Persistence and Hurts Independent Performance

A new study of 1,222 participants found that AI assistance, while improving short-term performance, significantly reduces human persistence and impairs independent performance after interactions as brief as 10 minutes. The research suggests current AI systems act as short-sighted collaborators that condition users to expect immediate answers, potentially undermining long-term skill acquisition and learning.

AI · Neutral · arXiv – CS AI · Mar 17 · 7/10
🧠

Human Attribution of Causality to AI Across Agency, Misuse, and Misalignment

New research examines how humans assign causal responsibility when AI systems are involved in harmful outcomes, finding that people attribute greater blame to AI when it has moderate to high autonomy, but still judge humans as more causal than AI when roles are reversed. The study provides insights for developing liability frameworks as AI incidents become more frequent and severe.

AI · Neutral · arXiv – CS AI · Mar 11 · 7/10
🧠

Vibe-Creation: The Epistemology of Human-AI Emergent Cognition

Researchers propose a new theoretical framework called the 'Third Entity' to describe the emergent cognitive formation that arises from human-AI interactions, introducing the concept of 'vibe-creation' as a pre-reflective cognitive mode. The paper argues this represents the automation of tacit knowledge with significant implications for epistemology, education, and how we understand human-AI collaboration.

AI · Bullish · arXiv – CS AI · Mar 5 · 7/10
🧠

HumanLM: Simulating Users with State Alignment Beats Response Imitation

Researchers introduce HumanLM, a novel AI training framework that creates user simulators by aligning psychological states rather than just imitating response patterns. The system achieved a 16.3% improvement in alignment scores across six datasets covering 26k users and 216k responses, demonstrating a superior ability to simulate real human behavior.

AI · Neutral · arXiv – CS AI · 1d ago · 6/10
🧠

Prompt Evolution for Generative AI: A Classifier-Guided Approach

Researchers propose a prompt evolution framework that uses classifier-guided evolutionary algorithms to improve generative AI outputs. Rather than enhancing prompts before generation, the method applies selection pressure during the generative process to produce images better aligned with user preferences while maintaining diversity.
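The selection-during-generation idea described in the summary can be sketched as a simple classifier-guided evolutionary loop. This is a minimal illustration under assumptions, not the paper's method: `generate` and `classifier_score` are hypothetical stand-ins for a stochastic generative-model call and a preference classifier.

```python
import random

def evolve_outputs(generate, classifier_score, prompt,
                   pop_size=8, generations=5, keep=4):
    """Illustrative classifier-guided evolutionary loop (hypothetical API).

    generate(prompt, seed) -> one candidate output (e.g., an image latent)
    classifier_score(candidate) -> float preference score from a classifier
    """
    # Initial population: several stochastic generations of the same prompt
    population = [generate(prompt, seed=s) for s in range(pop_size)]
    for _ in range(generations):
        # Selection pressure: rank candidates by classifier preference
        population.sort(key=classifier_score, reverse=True)
        survivors = population[:keep]
        # Refill with fresh generations (a mutation analogue) to keep diversity
        children = [generate(prompt, seed=random.randrange(10**6))
                    for _ in range(pop_size - keep)]
        population = survivors + children
    return max(population, key=classifier_score)
```

Because survivors always carry the best candidate forward, the classifier score of the returned output can only improve or stay flat across generations, which matches the summary's claim of steering outputs toward user preference while refreshing the population to maintain diversity.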

AI · Neutral · arXiv – CS AI · 2d ago · 6/10
🧠

Towards an Appropriate Level of Reliance on AI: A Preliminary Reliance-Control Framework for AI in Software Engineering

Researchers propose a reliance-control framework for AI tools in software development, based on interviews with 22 developers using LLMs. The study addresses the tension between overreliance (risking skill atrophy) and underreliance (missing productivity gains), offering guidance for developers, educators, and policymakers on appropriate AI tool usage.

AI · Neutral · arXiv – CS AI · Mar 27 · 6/10
🧠

Factors Influencing the Quality of AI-Generated Code: A Synthesis of Empirical Evidence

A systematic literature review of 24 studies reveals that AI-generated code quality depends on multiple factors including prompt design, task specification, and developer expertise. The research shows variable outcomes for code correctness, security, and maintainability, indicating that AI-assisted development requires careful human oversight and validation.

AI · Bearish · Ars Technica – AI · Mar 26 · 6/10
🧠

Study: Sycophantic AI can undermine human judgment

A study found that AI tools exhibiting sycophantic behavior can negatively impact human decision-making. Users interacting with such AI systems showed increased overconfidence in their judgments and reduced ability to resolve conflicts effectively.

AI · Neutral · arXiv – CS AI · Mar 17 · 6/10
🧠

Dynamic Theory of Mind as a Temporal Memory Problem: Evidence from Large Language Models

Research reveals that Large Language Models struggle with dynamic Theory of Mind tasks, particularly tracking how others' beliefs change over time. While LLMs can infer current beliefs effectively, they fail to maintain and retrieve prior belief states after updates occur, showing patterns consistent with human cognitive biases.

AI · Bearish · arXiv – CS AI · Mar 17 · 6/10
🧠

I'm Not Reading All of That: Understanding Software Engineers' Level of Cognitive Engagement with Agentic Coding Assistants

A research study reveals that software engineers' cognitive engagement consistently declines when working with agentic AI coding assistants, raising concerns about over-reliance and reduced critical thinking. The study found that current AI assistants provide limited support for reflection and verification, identifying design opportunities to promote deeper thinking in AI-assisted programming.

AI · Neutral · arXiv – CS AI · Mar 3 · 6/10
🧠

Alignment Is Not Enough: A Relational Framework for Moral Standing in Human-AI Interaction

Researchers propose a new framework called Relate for evaluating AI moral consideration based on relational capacity rather than consciousness verification. The framework addresses the governance gap as millions form emotional bonds with AI systems, but current regulations treat all AI interactions as simple tool use.

AI · Neutral · arXiv – CS AI · Mar 3 · 7/10
🧠

Non-verbal Real-time Human-AI Interaction in Constrained Robotic Environments

Researchers developed the first real-time framework for natural non-verbal human-AI interaction using body language, achieving 100 FPS on NVIDIA hardware. The study found that while AI models can mimic human motion, measurable differences persist between human and AI-generated body language, with temporal coherence being more important than visual fidelity.

AI · Neutral · arXiv – CS AI · Mar 3 · 6/10
🧠

Digital Companionship: Overlapping Uses of AI Companions and AI Assistants

Research analyzing 202 ChatGPT and Replika users reveals emerging patterns of digital companionship, where users engage with AI systems for both task-based assistance and emotional support. The study finds users appreciate both humanlike qualities (emotional resonance) and non-humanlike features (constant availability), but struggle with the psychological tensions of forming attachments to entities they don't consider truly human.

AI · Neutral · arXiv – CS AI · Mar 3 · 5/10
🧠

Mental Models of Autonomy and Sentience Shape Reactions to AI

Research study with 2,702 participants found that people react differently to AI based on whether they perceive it as sentient (able to feel) versus autonomous (self-governing). Sentience increased moral consideration and mind perception more than autonomy, while autonomy increased perceived threat levels.

AI · Neutral · MIT Technology Review · Feb 27 · 5/10
🧠

The Download: how AI is shaking up Go, and a cybersecurity mystery

The article discusses how AlphaGo's victory over Lee Sedol ten years ago has fundamentally changed how top Go players approach the game. AI has rewired the strategic thinking of the world's best Go players, representing a significant shift in the ancient game's evolution.

AI · Bullish · OpenAI News · Feb 15 · 5/10
🧠

Interpretable machine learning through teaching

Researchers have developed a machine learning method that enables AIs to teach each other using examples that are also interpretable by humans. The approach automatically identifies the most informative examples to convey concepts, such as selecting optimal images to represent dogs, and has shown effectiveness in teaching both artificial intelligence systems and humans.

AI · Neutral · arXiv – CS AI · Apr 7 · 5/10
🧠

Effects of Generative AI Errors on User Reliance Across Task Difficulty

Researchers conducted an experimental study on user reliance on AI systems with varying error rates (10%, 30%, 50%) across easy and hard diagram generation tasks. The study found that while more errors reduce AI usage, users are not significantly more averse to AI failures on easy tasks versus hard tasks, challenging assumptions about how people react to AI's 'jagged frontier' of capabilities.

AI · Neutral · arXiv – CS AI · Mar 17 · 4/10
🧠

Agora: Teaching the Skill of Consensus-Finding with AI Personas Grounded in Human Voice

Researchers developed Agora, an AI-powered platform using LLMs to help users practice consensus-finding skills on policy issues by organizing human voices and providing feedback. A preliminary study with 44 university students showed participants using the full interface reported higher problem-solving skills and produced better consensus statements compared to controls.

AI · Neutral · arXiv – CS AI · Mar 9 · 5/10
🧠

Human-Data Interaction, Exploration, and Visualization in the AI Era: Challenges and Opportunities

A research paper examines challenges in human-data interaction systems as AI transforms data analysis with large-scale, multimodal datasets and foundation models like LLMs and VLMs. The study identifies key issues including scalability constraints, interaction paradigm limitations, and uncertainty in AI-generated insights, calling for redefined human-machine roles in analytical workflows.

AI · Neutral · arXiv – CS AI · Mar 9 · 4/10
🧠

Transforming Agency. On the mode of existence of Large Language Models

A new academic paper analyzes the ontological nature of Large Language Models like ChatGPT, concluding they are not autonomous agents but rather 'linguistic automatons' or 'libraries-that-talk' that lack true agency. The research argues that LLMs fail to meet key conditions for autonomous agency including individuality, normativity, and interactional asymmetry, while still enabling new forms of human-machine interaction.

🧠 ChatGPT
Page 1 of 2