y0news

#human-ai-collaboration News & Analysis

49 articles tagged with #human-ai-collaboration. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.

AI · Neutral · arXiv – CS AI · Apr 7 · 7/10
🧠

The Persuasion Paradox: When LLM Explanations Fail to Improve Human-AI Team Performance

Research reveals a 'Persuasion Paradox' where LLM explanations increase user confidence but don't reliably improve human-AI team performance, and can actually undermine task accuracy. The study found that explanation effectiveness varies significantly by task type, with visual reasoning tasks seeing decreased error recovery while logical reasoning tasks benefited from explanations.

AI · Bullish · arXiv – CS AI · Mar 27 · 7/10
🧠

The Future of AI-Driven Software Engineering

A paradigm shift is occurring in software engineering as AI systems like LLMs increasingly boost development productivity. The paper presents a vision for growing symbiotic partnerships between human developers and AI, identifying key research challenges the software engineering community must address.

AI · Bullish · arXiv – CS AI · Mar 16 · 7/10
🧠

Human-AI Governance (HAIG): A Trust-Utility Approach

Researchers introduce the Human-AI Governance (HAIG) framework that treats AI systems as collaborative partners rather than mere tools, proposing a trust-utility approach to governance across three dimensions: Decision Authority, Process Autonomy, and Accountability Configuration. The framework aims to enable adaptive regulatory design for evolving AI capabilities, particularly as foundation models and multi-agent systems demonstrate increasing autonomy.

AI · Bullish · arXiv – CS AI · Mar 9 · 7/10
🧠

Accelerating Scientific Research with Gemini: Case Studies and Common Techniques

Google's Gemini-based AI models, particularly Gemini Deep Think, have demonstrated the ability to collaborate with researchers to solve open problems and generate new proofs across theoretical computer science, economics, optimization, and physics. The research identifies effective techniques for human-AI collaboration including iterative refinement, problem decomposition, and deploying AI as adversarial reviewers to detect flaws in existing proofs.

🧠 Gemini
AI × Crypto · Bullish · CryptoPotato · Mar 6 · 7/10
🤖

Vitalik Buterin Proposes Human-Verified AI Wallets for Crypto Transactions

Ethereum co-founder Vitalik Buterin has proposed a new wallet design that combines AI assistance with human verification for cryptocurrency transactions. The system would allow AI algorithms to suggest transaction plans while requiring users to manually confirm large transfers, aiming to balance automation with security.
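The confirmation flow described above can be sketched in a few lines. This is illustrative only: the threshold value, the `TransferPlan` type, and the `confirm` callback are hypothetical placeholders, not part of Buterin's actual proposal.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class TransferPlan:
    recipient: str
    amount_eth: float

LARGE_TRANSFER_THRESHOLD_ETH = 1.0  # hypothetical cutoff for "large" transfers

def execute_plan(plan: TransferPlan,
                 confirm: Callable[[TransferPlan], bool]) -> str:
    """Auto-approve small AI-suggested transfers; require explicit human
    confirmation for anything above the threshold."""
    if plan.amount_eth <= LARGE_TRANSFER_THRESHOLD_ETH:
        return "executed"
    return "executed" if confirm(plan) else "rejected"

# Usage: the AI proposes, the human decides on large amounts.
small = TransferPlan("0xabc...", 0.2)
large = TransferPlan("0xdef...", 50.0)
print(execute_plan(small, confirm=lambda p: False))  # executed: below threshold
print(execute_plan(large, confirm=lambda p: True))   # executed: human confirmed
```

The key design point is that automation and security are split by transaction size: the human veto is only in the loop where the stakes are high.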

AI · Bullish · TechCrunch – AI · Mar 5 · 7/10
🧠

Netflix buys Ben Affleck’s AI filmmaking company InterPositive

Netflix has acquired Ben Affleck's AI filmmaking company InterPositive, marking a significant move by the streaming giant into AI-powered content creation. Affleck emphasized his goal to preserve human judgment and storytelling elements while leveraging artificial intelligence in the filmmaking process.

AI · Bullish · arXiv – CS AI · Mar 4 · 6/10
🧠

PlayWrite: A Multimodal System for AI Supported Narrative Co-Authoring Through Play in XR

PlayWrite is a new mixed-reality AI system that allows users to create stories by directly manipulating virtual characters and props in XR, rather than through traditional text prompts. The system uses multi-agent AI to interpret user actions into structured narrative elements and generates final stories via large language models, demonstrating a novel approach to AI-human creative collaboration.

AI · Bullish · arXiv – CS AI · Mar 4 · 7/10
🧠

Skywork-Reward-V2: Scaling Preference Data Curation via Human-AI Synergy

Researchers introduce Skywork-Reward-V2, a suite of AI reward models trained on SynPref-40M, a massive 40-million preference pair dataset created through human-AI collaboration. The models achieve state-of-the-art performance across seven major benchmarks by combining human annotation quality with AI scalability for better preference learning.
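Reward models of this kind are typically trained on chosen/rejected pairs with a pairwise (Bradley–Terry) objective. The summary does not specify Skywork-Reward-V2's actual training loss, so the sketch below shows the generic form only: the loss is small when the model scores the human-preferred response higher.

```python
import math

def pairwise_preference_loss(r_chosen: float, r_rejected: float) -> float:
    """-log sigmoid(r_chosen - r_rejected): penalizes the reward model
    when the rejected response outscores the preferred one."""
    margin = r_chosen - r_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# The loss shrinks as the preferred response's reward pulls ahead.
print(round(pairwise_preference_loss(2.0, 0.0), 4))  # small loss: correct ordering
print(round(pairwise_preference_loss(0.0, 2.0), 4))  # large loss: wrong ordering
```

Human–AI synergy enters at the data layer: human annotations anchor the preference labels, while AI scales the 40M-pair dataset that this objective is minimized over.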

AI · Neutral · arXiv – CS AI · Mar 4 · 6/10
🧠

Architecting Trust in Artificial Epistemic Agents

Researchers propose a framework for developing trustworthy AI agents that function as epistemic entities, capable of pursuing knowledge goals and shaping information environments. The paper argues that as AI models increasingly replace traditional search methods and provide specialized advice, their calibration to human epistemic norms becomes critical to prevent cognitive deskilling and epistemic drift.

AI · Bearish · arXiv – CS AI · Mar 4 · 7/10
🧠

The Geometry of Learning Under AI Delegation

Researchers developed a mathematical model showing how AI delegation can create stable low-skill equilibria where humans become persistently reliant on AI systems. The study reveals that while AI assistance improves short-term performance, it can lead to long-term skill degradation through reduced practice and negative feedback loops.
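The feedback loop described above can be illustrated with a toy dynamical system (not the paper's actual model): skill grows with practice, practice falls when the task is delegated, and delegation happens whenever the AI currently outperforms the human. All parameters below are made up for illustration.

```python
def simulate(skill: float, ai_quality: float, steps: int = 200,
             learn: float = 0.05, decay: float = 0.02) -> float:
    """Iterate a toy skill dynamic: practice only when not delegating."""
    for _ in range(steps):
        delegation = 1.0 if ai_quality > skill else 0.0  # delegate when AI is better
        practice = 1.0 - delegation
        # Skill gains from practice (with diminishing returns) and decays otherwise.
        skill = skill + learn * practice * (1.0 - skill) - decay * skill
    return skill

# Starting below the AI's level, delegation kills practice and skill decays
# to a low equilibrium; starting above it, practice sustains a high one.
print(round(simulate(skill=0.4, ai_quality=0.6), 2))  # low-skill equilibrium
print(round(simulate(skill=0.8, ai_quality=0.6), 2))  # high-skill equilibrium
```

The two outcomes from the same parameters are the point: which equilibrium you land in depends only on where skill starts relative to the AI, matching the paper's claim that short-term assistance can lock in long-term reliance.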

AI · Neutral · arXiv – CS AI · 3d ago · 6/10
🧠

Investigating Multimodal Large Language Models to Support Usability Evaluation

Researchers investigate how multimodal large language models (MLLMs) can assist with usability evaluation of user interfaces by analyzing text and visual context together. The study compares MLLM-generated assessments against expert evaluations, finding that these models can effectively prioritize usability issues by severity and offer complementary insights to traditional resource-intensive evaluation methods.

AI · Neutral · arXiv – CS AI · 3d ago · 6/10
🧠

AI-Induced Human Responsibility (AIHR) in AI-Human teams

A research study reveals that people assign significantly more responsibility to human decision-makers when they work alongside AI systems compared to human teammates, even in scenarios involving moral harm. This 'AI-Induced Human Responsibility' (AIHR) effect stems from perceiving AI as a constrained tool rather than an autonomous agent, raising important questions about accountability structures in AI-augmented organizations.

AI · Neutral · arXiv – CS AI · 6d ago · 6/10
🧠

Mixed-Initiative Context: Structuring and Managing Context for Human-AI Collaboration

Researchers propose Mixed-Initiative Context, a framework that reconceptualizes how multi-turn AI interactions are managed by treating context as an explicit, structured, and dynamically adjustable object rather than a fixed chronological sequence. The approach enables both humans and AI to actively participate in context construction, addressing current limitations where irrelevant exchanges clutter context windows and users lack direct control mechanisms.
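The idea of context as an explicit, adjustable object (rather than a fixed transcript) can be sketched as a small data structure in which either party can prune or pin entries. The class and method names below are hypothetical illustrations, not the paper's API.

```python
from dataclasses import dataclass, field

@dataclass
class Exchange:
    role: str            # "user" or "assistant"
    text: str
    pinned: bool = False   # protected from pruning
    relevant: bool = True  # either party may demote an exchange

@dataclass
class MixedInitiativeContext:
    exchanges: list = field(default_factory=list)

    def add(self, role: str, text: str) -> None:
        self.exchanges.append(Exchange(role, text))

    def mark_irrelevant(self, index: int) -> None:
        # Human or AI can prune; pinned entries are protected.
        if not self.exchanges[index].pinned:
            self.exchanges[index].relevant = False

    def window(self) -> list:
        # Only pinned or still-relevant exchanges reach the model.
        return [e.text for e in self.exchanges if e.pinned or e.relevant]

ctx = MixedInitiativeContext()
ctx.add("user", "Summarize the Q3 report.")
ctx.add("assistant", "Here is the summary...")
ctx.add("user", "Unrelated: what's the weather?")
ctx.mark_irrelevant(2)  # prune the off-topic turn
print(ctx.window())     # the off-topic exchange no longer clutters the window
```

This captures the framework's two claims in miniature: context construction becomes a shared, mixed-initiative activity, and irrelevant exchanges stop consuming the context window.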

AI · Neutral · arXiv – CS AI · Apr 7 · 6/10
🧠

Incentives shape how humans co-create with generative AI

A randomized control trial reveals that incentive structures significantly influence how humans use generative AI in creative tasks. When participants were rewarded for originality rather than just quality, they produced more diverse collective output by using AI more selectively for brainstorming and editing rather than copying suggestions verbatim.

AI · Bullish · arXiv – CS AI · Apr 7 · 6/10
🧠

Context Engineering: A Practitioner Methodology for Structured Human-AI Collaboration

Researchers introduce Context Engineering, a structured methodology for improving AI output quality through better context assembly rather than just prompting techniques. The study of 200 AI interactions showed that structured context reduced iteration cycles from 3.8 to 2.0 and improved first-pass acceptance rates from 32% to 55%.

🧠 ChatGPT · 🧠 Claude
AI · Bullish · arXiv – CS AI · Mar 26 · 6/10
🧠

Learning To Guide Human Decision Makers With Vision-Language Models

Researchers introduce Learning to Guide (LTG), a new AI framework where machines provide interpretable guidance to human decision-makers rather than making automated decisions. The SLOG approach transforms vision-language models into guidance generators using human feedback, showing promise in medical diagnosis applications.

AI · Neutral · arXiv – CS AI · Mar 26 · 6/10
🧠

From Sycophancy to Sensemaking: Premise Governance for Human-AI Decision Making

Researchers propose a new framework for human-AI decision making that shifts from AI systems providing fluent but potentially sycophantic answers to collaborative premise governance. The approach uses discrepancy-driven control loops to detect conflicts and ensure commitment to decision-critical premises before taking action.

AI · Neutral · arXiv – CS AI · Mar 16 · 6/10
🧠

The Perfection Paradox: From Architect to Curator in AI-Assisted API Design

A research study with 16 industry experts found that AI-assisted API design outperformed human-authored specifications in 10 of 11 usability dimensions while reducing authoring time by 87%. However, experts identified a 'Perfection Paradox' where AI-generated designs appeared unsettlingly perfect due to hyper-consistency, suggesting humans should shift from drafting to curating AI-generated patterns.

AI · Bullish · arXiv – CS AI · Mar 16 · 6/10
🧠

Seeing Eye to Eye: Enabling Cognitive Alignment Through Shared First-Person Perspective in Human-AI Collaboration

Researchers propose Eye2Eye, a new framework that uses first-person perspective to improve human-AI collaboration by addressing communication and understanding gaps. The AR prototype integrates joint attention coordination, revisable memory, and reflective feedback, showing significant improvements in task completion time and user trust in studies.

AI · Bearish · arXiv – CS AI · Mar 12 · 6/10
🧠

Reactive Writers: How Co-Writing with AI Changes How We Engage with Ideas

A research study reveals that AI co-writing tools fundamentally change how people write by shifting them into 'Reactive Writing' mode, where writers evaluate AI suggestions rather than generating original ideas first. This process influences writers' opinions and expressed views without them realizing the AI's impact, as they focus on suggestion evaluation rather than traditional ideation.

AI · Bullish · arXiv – CS AI · Mar 12 · 6/10
🧠

Designing Service Systems from Textual Evidence

Researchers developed PP-LUCB, an algorithm that efficiently identifies optimal service system configurations by combining biased AI evaluation with selective human audits. The method reduces human audit costs by 90% while maintaining accuracy in selecting the best performing systems from textual evidence like customer support transcripts.
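The audit-saving idea can be illustrated with a much simpler policy than the paper's PP-LUCB algorithm: trust the cheap-but-biased AI scores outright when one configuration is clearly ahead, and spend human audits only on near-ties. The margin, names, and audit callback below are illustrative assumptions.

```python
def select_best(ai_scores: dict,
                human_audit,          # callable: config name -> corrected score
                margin: float = 0.1) -> str:
    """Pick the best configuration, auditing only plausible contenders."""
    best_ai = max(ai_scores.values())
    # Contenders: configs whose AI score is close enough to plausibly win.
    contenders = [n for n, s in ai_scores.items() if best_ai - s <= margin]
    if len(contenders) == 1:
        return contenders[0]  # AI score is decisive; no human audit needed
    # Near-tie: resolve with (expensive) human-audited scores.
    audited = {n: human_audit(n) for n in contenders}
    return max(audited, key=audited.get)

scores = {"config_a": 0.82, "config_b": 0.80, "config_c": 0.55}
winner = select_best(scores,
                     human_audit=lambda n: {"config_a": 0.78, "config_b": 0.85}[n])
print(winner)  # "config_b": the audit overturns the biased AI ranking
```

The cost saving comes from the gating: `config_c` is never audited because the AI score alone rules it out, which is the same intuition behind restricting human effort to configurations that could still be the best.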

Page 1 of 2