14 articles tagged with #dialogue-systems. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.
AI · Neutral · arXiv – CS AI · 3d ago · 7/10
🧠 Researchers introduce SAGE, a comprehensive benchmark for evaluating Large Language Models in customer service automation that uses dynamic dialogue graphs and adversarial testing to assess both intent classification and action execution. Testing across 27 LLMs reveals a critical 'Execution Gap' where models correctly identify user intents but fail to perform appropriate follow-up actions, plus an 'Empathy Resilience' phenomenon where models maintain polite facades despite underlying logical failures.
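The 'Execution Gap' can be pictured as the spread between intent accuracy and action accuracy over the same turns. A minimal sketch, with all field names invented here (the paper's actual benchmark schema is not shown in the summary):

```python
# Toy metric illustrating the "Execution Gap": a model may classify the
# user's intent correctly yet still emit the wrong follow-up action.
# Dict keys below are hypothetical, not the SAGE benchmark's schema.

def execution_gap(turns):
    """turns: list of dicts with predicted/gold intents and actions."""
    n = len(turns)
    intent_acc = sum(t["pred_intent"] == t["gold_intent"] for t in turns) / n
    action_acc = sum(t["pred_action"] == t["gold_action"] for t in turns) / n
    return intent_acc - action_acc  # positive gap: intents right, actions wrong

turns = [
    {"pred_intent": "refund", "gold_intent": "refund",
     "pred_action": "escalate", "gold_action": "issue_refund"},
    {"pred_intent": "cancel", "gold_intent": "cancel",
     "pred_action": "cancel_order", "gold_action": "cancel_order"},
]
print(execution_gap(turns))  # intent 1.0, action 0.5 -> gap 0.5
```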
AI · Bullish · arXiv – CS AI · Mar 5 · 6/10
🧠 Researchers have developed AriadneMem, a new memory system for long-horizon LLM agents that addresses challenges in maintaining accurate memory under fixed context budgets. The system uses a two-phase pipeline with entropy-aware gating and conflict-aware coarsening to improve multi-hop reasoning while reducing runtime by 77.8% and using only 497 context tokens.
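One plausible reading of "entropy-aware gating under a fixed context budget" is: admit a candidate memory snippet only if its token distribution is low-entropy (i.e., focused rather than noisy), and stop once the token budget is spent. This is a loose sketch under that assumption; the paper's actual gating criterion is not described in the summary, and every name below is illustrative:

```python
import math
from collections import Counter

# Sketch: gate candidate memory snippets by Shannon entropy of their
# token distribution, under a fixed context-token budget. The threshold,
# the budget default, and the entropy criterion are all assumptions.

def token_entropy(text):
    """Shannon entropy (bits) of the snippet's whitespace-token distribution."""
    tokens = text.split()
    counts = Counter(tokens)
    total = len(tokens)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def gate_memories(candidates, budget_tokens=497, max_entropy=3.5):
    """Keep low-entropy snippets in order until the budget is exhausted."""
    kept, used = [], 0
    for snippet in candidates:
        n = len(snippet.split())
        if token_entropy(snippet) <= max_entropy and used + n <= budget_tokens:
            kept.append(snippet)
            used += n
    return kept, used

kept, used = gate_memories(
    ["a a a a", " ".join(f"t{i}" for i in range(16))], budget_tokens=10
)
print(kept, used)  # the 16-distinct-token snippet (entropy 4.0) is rejected
```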
AI · Bullish · arXiv – CS AI · Mar 5 · 6/10
🧠 Researchers introduce DIALEVAL, a new automated framework that uses dual LLM agents to evaluate how well AI models follow instructions. The system achieves 90.38% accuracy by breaking instructions down into verifiable components and applying type-specific evaluation criteria, yielding a 26.45% error reduction over existing methods.
AI · Bullish · arXiv – CS AI · Mar 4 · 7/10
🧠 Researchers developed ATPO (Adaptive Tree Policy Optimization), a new AI algorithm for multi-turn medical dialogues that outperforms existing methods by better handling uncertainty in patient-doctor interactions. The algorithm enabled a smaller Qwen3-8B model to surpass GPT-4o's accuracy by 0.92% on medical dialogue benchmarks through improved value estimation and exploration strategies.
AI · Neutral · arXiv – CS AI · 2d ago · 6/10
🧠 Researchers propose MDS (Multi-turn Dialogue Selection), a framework for improving instruction-tuned language models by intelligently selecting high-quality multi-turn dialogue data. The method combines global coverage analysis with local structural evaluation to filter noisy datasets, demonstrating superior performance across multiple benchmarks compared to existing selection approaches.
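Combining "global coverage" with a "local" per-dialogue score can be sketched as a greedy selection that prefers dialogues adding new topic coverage and breaks ties by local quality. The objective and scores below are assumptions for illustration, not the MDS paper's formulation:

```python
# Greedy selection: pick k dialogues maximizing new-topic coverage
# (global criterion), tie-broken by a per-dialogue quality score
# (local criterion). Both criteria are illustrative stand-ins.

def select_dialogues(dialogues, k):
    """dialogues: list of (topics: set, local_score: float); returns indices."""
    covered, chosen = set(), []
    remaining = list(range(len(dialogues)))
    for _ in range(k):
        best = max(
            remaining,
            key=lambda i: (len(dialogues[i][0] - covered), dialogues[i][1]),
        )
        chosen.append(best)
        covered |= dialogues[best][0]
        remaining.remove(best)
    return chosen

pool = [({"billing"}, 0.9), ({"billing", "returns"}, 0.6), ({"shipping"}, 0.8)]
print(select_dialogues(pool, 2))  # [1, 2]: coverage gain dominates quality
```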
AI · Bearish · arXiv – CS AI · Mar 27 · 6/10
🧠 Research reveals that large language models (LLMs) struggle to maintain consistent internal beliefs or goals across multi-turn conversations, failing to preserve implicit consistency when context is not explicitly provided. This limitation poses significant challenges for building persona-driven AI systems that require stable personality traits and behavioral patterns.
AI · Bullish · arXiv – CS AI · Mar 26 · 6/10
🧠 Researchers have introduced MedAidDialog, a multilingual medical dialogue dataset covering seven languages, and developed MedAidLM, a conversational AI model for preliminary medical consultations. The system uses parameter-efficient fine-tuning on small language models to enable deployment without high-end computational infrastructure while incorporating patient context for personalized consultations.
AI · Bullish · arXiv – CS AI · Mar 11 · 6/10
🧠 DuplexCascade introduces a VAD-free cascaded streaming pipeline that enables full-duplex speech-to-speech dialogue while maintaining LLM intelligence. The system converts traditional long utterance turns into micro-turn interactions using special control tokens to coordinate turn-taking and response timing.
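Control-token turn coordination can be pictured as a tiny state machine over the token stream: special tokens flip the floor between listening and speaking, and everything else is content. The token names here (`<SPEAK>`, `<LISTEN>`, `<YIELD>`) are invented; DuplexCascade's actual token inventory may differ:

```python
# Toy state machine for micro-turn coordination: control tokens flip
# floor ownership, content tokens are logged under the current state.

def run_stream(tokens):
    """Walk a token stream and record floor ownership per content token."""
    state, log = "listen", []
    for tok in tokens:
        if tok == "<SPEAK>":
            state = "speak"
        elif tok in ("<LISTEN>", "<YIELD>"):
            state = "listen"
        else:
            log.append((state, tok))  # ordinary content token
    return log

stream = ["hi", "<SPEAK>", "hello!", "<YIELD>", "are", "you", "there"]
print(run_stream(stream))
```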
AI · Bullish · arXiv – CS AI · Mar 3 · 6/10
🧠 Researchers propose ActMem, a novel memory framework for LLM agents that combines memory retrieval with active causal reasoning to handle complex decision-making scenarios. The framework transforms dialogue history into structured causal graphs and uses counterfactual reasoning to resolve conflicts between past states and current intentions, significantly outperforming existing baselines in memory-dependent tasks.
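A deliberately simplified view of the conflict-resolution problem: dialogue history yields (entity, state) assertions, and when a remembered state conflicts with the user's current intention, the more recent assertion should win. This toy recency rule stands in for ActMem's actual counterfactual machinery, which the summary does not detail:

```python
# Toy conflict resolution: time-ordered (entity, state) assertions from
# dialogue history; a later assertion (current intention) overrides an
# earlier one (stale memory). A stand-in for the paper's causal graphs.

def resolve(history):
    """history: time-ordered (entity, state) pairs; returns final beliefs."""
    beliefs = {}
    for entity, state in history:
        beliefs[entity] = state  # newer assertion overrides older memory
    return beliefs

history = [
    ("meeting", "scheduled_mon"),  # from past dialogue memory
    ("diet", "vegetarian"),
    ("meeting", "moved_to_wed"),   # current intention conflicts with memory
]
print(resolve(history))  # {'meeting': 'moved_to_wed', 'diet': 'vegetarian'}
```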
AI · Bearish · arXiv – CS AI · Mar 3 · 7/10
🧠 Researchers have identified significant privacy risks in Large Language Model-based Task-Oriented Dialogue Systems, demonstrating that these AI systems can memorize and leak sensitive training data including phone numbers and complete dialogue exchanges. The study proposes new attack methods that can extract thousands of training dialogue states with over 70% precision in best-case scenarios.
AI · Bullish · arXiv – CS AI · Mar 2 · 6/10
🧠 Researchers propose an LLM-driven framework for generating multi-turn task-oriented dialogues to create more realistic reasoning benchmarks. The framework addresses limitations in current AI evaluation methods by producing synthetic datasets that better reflect real-world complexity and contextual coherence.
AI · Bullish · arXiv – CS AI · Feb 27 · 6/10
🧠 Researchers introduce InteractCS-RL, a new reinforcement learning framework that helps AI agents balance empathetic communication with cost-effective decision-making in task-oriented dialogue. The system uses a multi-granularity approach with persona-driven user interactions and cost-aware policy optimization to achieve better performance across business scenarios.
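"Cost-aware policy optimization" typically means the return the agent maximizes subtracts a weighted action cost from the task and interaction rewards. A minimal sketch of that trade-off; the weights and reward terms are illustrative assumptions, not InteractCS-RL's actual objective:

```python
# Sketch of a cost-aware return: task reward plus empathy reward minus
# a weighted cost for each action the agent takes. All terms invented.

def episode_return(steps, cost_weight=0.5):
    """steps: list of (task_reward, empathy_reward, action_cost) tuples."""
    return sum(r_task + r_emp - cost_weight * cost
               for r_task, r_emp, cost in steps)

steps = [(0.0, 0.2, 0.1), (1.0, 0.3, 0.4)]
print(episode_return(steps))
```

Raising `cost_weight` pushes the policy toward cheaper actions at the expense of task success and empathy, which is the balance the summary describes.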
AI · Neutral · Apple Machine Learning · Feb 24 · 6/10
🧠 Researchers introduce AMUSE, a new benchmark for evaluating multimodal large language models in multi-speaker dialogue scenarios. The framework addresses current limitations of models like GPT-4o in tracking speakers, maintaining conversational roles, and reasoning across audio-visual streams in applications such as conversational video assistants.
AI · Neutral · arXiv – CS AI · Mar 5 · 4/10
🧠 Researchers have created a new multi-task Chinese dialogue dataset that enables prediction of user satisfaction, emotion recognition, and emotional state transitions across multiple conversation turns. The dataset addresses limitations in existing Chinese resources and aims to improve understanding of how user emotions evolve during interactions to better predict satisfaction.