31 articles tagged with #human-ai-interaction. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.
AI · Neutral · arXiv – CS AI · Apr 7 · 7/10
🧠Researchers propose Gradual Cognitive Externalization (GCE), a framework suggesting that human cognitive functions are already migrating into digital AI systems through ambient intelligence rather than traditional mind uploading. The study cites scheduling assistants, writing tools, and AI agents as evidence that cognitive externalization is occurring now, driven by bidirectional adaptation and functional equivalence.
AI · Bearish · arXiv – CS AI · Apr 7 · 7/10
🧠A new study of 1,222 participants found that AI assistance, while improving short-term performance, significantly reduces human persistence and impairs independent performance after interactions as brief as 10 minutes. The research suggests current AI systems act as short-sighted collaborators that condition users to expect immediate answers, potentially undermining long-term skill acquisition and learning.
AI · Neutral · arXiv – CS AI · Mar 17 · 7/10
🧠New research examines how humans assign causal responsibility when AI systems are involved in harmful outcomes, finding that people attribute greater blame to AI when it has moderate to high autonomy, but still judge humans as more causal than AI when roles are reversed. The study provides insights for developing liability frameworks as AI incidents become more frequent and severe.
AI · Neutral · arXiv – CS AI · Mar 11 · 7/10
🧠Researchers propose a new theoretical framework called the 'Third Entity' to describe the emergent cognitive formation that arises from human-AI interactions, introducing the concept of 'vibe-creation' as a pre-reflective cognitive mode. The paper argues this represents the automation of tacit knowledge with significant implications for epistemology, education, and how we understand human-AI collaboration.
AI · Neutral · arXiv – CS AI · Mar 11 · 7/10
🧠Researchers introduce MiniAppBench, a new benchmark for evaluating Large Language Models' ability to generate interactive HTML applications rather than static text responses. The benchmark includes 500 real-world tasks and an agentic evaluation framework called MiniAppEval that uses browser automation for testing.
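The agentic, browser-automation style of evaluation described here can be sketched in miniature: load a generated HTML app, perform scripted interactions, then score observable outcomes against expectations. The `FakeDriver` interface, selectors, and scoring below are hypothetical stand-ins for illustration, not MiniAppEval's actual API (a real harness would wrap a browser-automation tool).

```python
from dataclasses import dataclass, field

@dataclass
class FakeDriver:
    """Stand-in for a browser-automation driver (hypothetical interface)."""
    state: dict = field(default_factory=dict)

    def load(self, html: str) -> None:
        # "Open" the generated app; reset interaction state.
        self.state = {"html": html, "clicks": 0}

    def click(self, selector: str) -> None:
        # Toy behavior: every click increments a counter.
        self.state["clicks"] += 1

    def text(self, selector: str) -> str:
        # Toy behavior: report the click count as the element's text.
        return str(self.state["clicks"])

def evaluate_app(html: str, steps: list, expected: list) -> float:
    """Run scripted interactions against the app, then score pass/fail checks."""
    driver = FakeDriver()
    driver.load(html)
    for selector in steps:
        driver.click(selector)
    passed = sum(driver.text(sel) == want for sel, want in expected)
    return passed / len(expected)

score = evaluate_app("<button id='inc'>+</button>",
                     steps=["#inc", "#inc"],
                     expected=[("#count", "2")])
```

The key design point the benchmark's agentic framework implies: correctness of an interactive app is judged by post-interaction state, not by comparing generated HTML text against a reference.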
AI · Bullish · arXiv – CS AI · Mar 5 · 7/10
🧠Researchers introduce HumanLM, a novel AI training framework that creates user simulators by aligning psychological states rather than just imitating response patterns. The system achieved 16.3% improvement in alignment scores across six datasets with 26k users and 216k responses, demonstrating superior ability to simulate real human behavior.
AI · Neutral · arXiv – CS AI · 1d ago · 6/10
🧠Researchers propose a prompt evolution framework that uses classifier-guided evolutionary algorithms to improve generative AI outputs. Rather than enhancing prompts before generation, the method applies selection pressure during the generative process to produce images better aligned with user preferences while maintaining diversity.
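The core loop of such classifier-guided selection can be sketched as follows. Everything here is a toy illustration under assumptions: in a real system, `generate` would sample from a generative model and `score` would be a trained preference classifier; the numeric stand-ins exist only to make the selection mechanics runnable.

```python
import random

def evolve_outputs(generate, score, pop_size=8, generations=5, keep=4, seed=0):
    """Apply selection pressure during generation: score candidates with a
    classifier, keep the best, and spawn variants from the survivors."""
    rng = random.Random(seed)
    population = [generate(None, rng) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=score, reverse=True)
        parents = population[:keep]                  # selection pressure
        children = [generate(rng.choice(parents), rng)
                    for _ in range(pop_size - keep)] # variation
        population = parents + children             # elitism preserves diversity
    return max(population, key=score)

# Toy stand-ins (hypothetical): candidates are floats in [0, 1],
# and the "classifier" simply prefers values near 1.0.
def toy_generate(parent, rng):
    base = parent if parent is not None else rng.random()
    return min(1.0, max(0.0, base + rng.uniform(-0.1, 0.1)))

best = evolve_outputs(toy_generate, lambda x: x)
```

Keeping several parents per round, rather than a single best candidate, is what lets this kind of method improve preference alignment while still maintaining diversity in the outputs.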
AI · Neutral · arXiv – CS AI · 2d ago · 6/10
🧠Researchers propose a reliance-control framework for AI tools in software development, based on interviews with 22 developers using LLMs. The study addresses the tension between overreliance (risking skill atrophy) and underreliance (missing productivity gains), offering guidance for developers, educators, and policymakers on appropriate AI tool usage.
AI · Neutral · arXiv – CS AI · Mar 27 · 6/10
🧠A systematic literature review of 24 studies reveals that AI-generated code quality depends on multiple factors including prompt design, task specification, and developer expertise. The research shows variable outcomes for code correctness, security, and maintainability, indicating that AI-assisted development requires careful human oversight and validation.
AI · Bearish · Ars Technica – AI · Mar 26 · 6/10
🧠A study found that AI tools exhibiting sycophantic behavior can negatively impact human decision-making. Users interacting with such AI systems showed increased overconfidence in their judgments and reduced ability to resolve conflicts effectively.
AI · Neutral · arXiv – CS AI · Mar 17 · 6/10
🧠Research reveals that Large Language Models struggle with dynamic Theory of Mind tasks, particularly tracking how others' beliefs change over time. While LLMs can infer current beliefs effectively, they fail to maintain and retrieve prior belief states after updates occur, showing patterns consistent with human cognitive biases.
AI · Bearish · arXiv – CS AI · Mar 17 · 6/10
🧠A research study reveals that software engineers' cognitive engagement consistently declines when working with agentic AI coding assistants, raising concerns about over-reliance and reduced critical thinking. The study found that current AI assistants provide limited support for reflection and verification, identifying design opportunities to promote deeper thinking in AI-assisted programming.
AI · Neutral · arXiv – CS AI · Mar 4 · 5/10
🧠Research presents three new interaction approaches (DesignPrompt, FusAIn, and DesignTrace) for integrating Generative AI into professional design practice. These methods distribute control across intent, input, and process to better align AI output with designers' creative workflows, moving beyond traditional prompt-based interactions.
AI · Neutral · arXiv – CS AI · Mar 3 · 6/10
🧠Researchers propose a new framework called Relate for evaluating AI moral consideration based on relational capacity rather than consciousness verification. The framework addresses the governance gap as millions form emotional bonds with AI systems, but current regulations treat all AI interactions as simple tool use.
AI · Neutral · arXiv – CS AI · Mar 3 · 7/10
🧠Researchers developed the first real-time framework for natural non-verbal human-AI interaction using body language, achieving 100 FPS on NVIDIA hardware. The study found that while AI models can mimic human motion, measurable differences persist between human and AI-generated body language, with temporal coherence being more important than visual fidelity.
AI · Neutral · arXiv – CS AI · Mar 3 · 6/10
🧠Research analyzing 202 ChatGPT and Replika users reveals emerging patterns of digital companionship, where users engage with AI systems for both task-based assistance and emotional support. The study finds users appreciate both humanlike qualities (emotional resonance) and non-humanlike features (constant availability), but struggle with the psychological tensions of forming attachments to entities they don't consider truly human.
AI · Neutral · arXiv – CS AI · Mar 3 · 5/10
🧠Research study with 2,702 participants found that people react differently to AI based on whether they perceive it as sentient (able to feel) versus autonomous (self-governing). Sentience increased moral consideration and mind perception more than autonomy, while autonomy increased perceived threat levels.
AI · Neutral · MIT Technology Review · Feb 27 · 5/10
🧠The article discusses how AlphaGo's victory over Lee Sedol ten years ago has fundamentally changed how top Go players approach the game. AI has rewired the strategic thinking of the world's best Go players, representing a significant shift in the ancient game's evolution.
AI · Neutral · arXiv – CS AI · Feb 27 · 6/10
🧠Researchers propose a new conceptual model for agentic AI systems that addresses when and how AI should intervene by integrating Scene, Context, and Human Behavior Factors. The model derives five design principles to guide AI intervention timing, depth, and restraint for more contextually sensitive autonomous systems.
AI · Bullish · OpenAI News · Feb 15 · 5/10
🧠Researchers have developed a machine learning method that enables AIs to teach each other using examples that are also interpretable by humans. The approach automatically identifies the most informative examples to convey concepts, such as selecting optimal images to represent dogs, and has shown effectiveness in teaching both artificial intelligence systems and human learners.
AI · Neutral · arXiv – CS AI · Apr 7 · 5/10
🧠Researchers conducted an experimental study on user reliance on AI systems with varying error rates (10%, 30%, 50%) across easy and hard diagram generation tasks. The study found that while more errors reduce AI usage, users are not significantly more averse to AI failures on easy tasks versus hard tasks, challenging assumptions about how people react to AI's 'jagged frontier' of capabilities.
AI · Neutral · arXiv – CS AI · Apr 6 · 4/10
🧠Researchers propose a 'cognitive alignment' framework to address how AI chatbots may create cognitive passivity in users learning data analysis. The framework suggests matching AI interaction modes (transmissive or deliberative) with users' cognitive demands to optimize learning outcomes.
AI · Neutral · arXiv – CS AI · Mar 17 · 4/10
🧠Researchers developed Agora, an AI-powered platform using LLMs to help users practice consensus-finding skills on policy issues by organizing human voices and providing feedback. A preliminary study with 44 university students showed participants using the full interface reported higher problem-solving skills and produced better consensus statements compared to controls.
AI · Neutral · arXiv – CS AI · Mar 9 · 5/10
🧠A research paper examines challenges in human-data interaction systems as AI transforms data analysis with large-scale, multimodal datasets and foundation models like LLMs and VLMs. The study identifies key issues including scalability constraints, interaction paradigm limitations, and uncertainty in AI-generated insights, calling for redefined human-machine roles in analytical workflows.
AI · Neutral · arXiv – CS AI · Mar 9 · 4/10
🧠A new academic paper analyzes the ontological nature of Large Language Models like ChatGPT, concluding they are not autonomous agents but rather 'linguistic automatons' or 'libraries-that-talk' that lack true agency. The research argues that LLMs fail to meet key conditions for autonomous agency including individuality, normativity, and interactional asymmetry, while still enabling new forms of human-machine interaction.