Intentmaking and Sensemaking: Human Interaction with AI-Guided Mathematical Discovery
Researchers conducted a user study with 11 expert mathematicians using AlphaEvolve, an AI coding agent, to explore how humans collaborate effectively with AI systems for scientific discovery. The study identified a cyclical workflow called 'intentmaking', in which users iteratively define and refine experimental goals through interaction with the system, paired with traditional sensemaking. The finding suggests AI tools should function as collaborative instruments rather than black-box assistants.
This research addresses a critical gap in human-AI interaction design for scientific domains. The study reveals that, rather than treating AI as a question-answering tool, expert users benefit from systems that support iterative goal refinement through active feedback loops. The intentmaking framework describes how mathematicians do not arrive with fully formed research questions; instead, they discover and sharpen their objectives through experimentation with AI outputs, creating a bidirectional learning process.
The findings emerge from growing recognition that current AI systems optimize for narrow task completion while neglecting the exploratory nature of real scientific work. Previous tools emphasize rapid problem-solving, but domain experts require transparency, interpretability, and control to validate approaches within their fields. This study supports the view that collaborative design patterns, in which AI augments human reasoning rather than replacing it, yield better scientific outcomes.
For the AI development community, these insights reshape product strategy. Companies building scientific AI tools should prioritize interfaces enabling iterative goal-setting, result interpretation frameworks, and documented reasoning trails rather than optimizing for speed or automation. This approach particularly benefits high-stakes domains like mathematics and physics where expert judgment remains paramount.
The implications extend beyond pure mathematics. Similar interaction patterns likely benefit drug discovery, materials science, and financial modeling research. Future AI tools designed around intentmaking-sensemaking cycles could accelerate discovery timelines while maintaining expert oversight, creating a sustainable human-in-the-loop paradigm that builds trust and enables broader adoption across research institutions.
- Intentmaking describes how experts iteratively discover and refine research goals through active AI system interaction, not prior to it.
- Effective scientific AI requires collaborative instrument design supporting both intentmaking and sensemaking cycles rather than black-box automation.
- Expert users value interpretability and control over speed, suggesting current AI products are misaligned with domain scientists' needs.
- The workflow pattern identified suggests scientific AI tools should provide transparency into their reasoning and enable goal refinement mid-investigation.
- Cross-domain application of these interaction principles could improve discovery outcomes in drug development, materials science, and quantitative research.