6 articles tagged with #active-learning. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.
AI · Bullish · arXiv – CS AI · Mar 11 · 7/10
🧠 Researchers introduce ACTIVEULTRAFEEDBACK, an active learning pipeline that reduces the cost of training Large Language Models by using uncertainty estimates to identify the most informative responses for annotation. The system achieves comparable performance using only one-sixth of the annotated data compared to static baselines, potentially making LLM training more accessible for low-resource domains.
🟢 Hugging Face
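The core idea of uncertainty-driven annotation can be sketched with a toy acquisition step: score each candidate response pair by the entropy of the model's predicted preference and send only the most uncertain ones to annotators. This is a minimal illustration of entropy-based selection, not the ACTIVEULTRAFEEDBACK pipeline itself; the pool values and function names are hypothetical.

```python
import numpy as np

def entropy(probs):
    """Shannon entropy of each categorical distribution (rows)."""
    p = np.clip(probs, 1e-12, 1.0)
    return -np.sum(p * np.log(p), axis=-1)

def select_for_annotation(response_probs, budget):
    """Return indices of the `budget` response pairs whose predicted
    preference distribution is most uncertain (highest entropy)."""
    scores = entropy(response_probs)
    return np.argsort(scores)[::-1][:budget]

# Toy pool: predicted probability that response A beats response B.
pool = np.array([[0.95, 0.05],   # confident -> skip annotation
                 [0.55, 0.45],   # uncertain -> annotate
                 [0.70, 0.30],
                 [0.50, 0.50]])  # maximally uncertain -> annotate
picked = select_for_annotation(pool, budget=2)
```

Here the selector returns the near-tie pairs (indices 3 and 1), which is where a human label carries the most information.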
AI · Neutral · arXiv – CS AI · Apr 6 · 6/10
🧠 Research from arXiv shows that Active Preference Learning (APL) provides minimal improvements over random sampling in training modern LLMs through Direct Preference Optimization. The study found that random sampling performs nearly as well as sophisticated active selection methods while being computationally cheaper and avoiding capability degradation.
AI · Neutral · arXiv – CS AI · Mar 17 · 4/10
🧠 Researchers introduce EAGLE, a new framework for explaining black-box machine learning models using information-theoretic active learning to select optimal data perturbations. The method produces feature importance scores with uncertainty estimates and demonstrates improved explanation reproducibility and stability compared to existing approaches like LIME.
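The flavor of "actively chosen perturbations" can be illustrated with a crude stand-in for expected information gain: among candidate feature-masking perturbations, prefer the one that makes the black-box output most variable. This is a simplified sketch, not EAGLE's actual information-theoretic criterion; the model, masks, and sampling scheme are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def predict(x):
    # Hypothetical black-box model (linear, weights chosen for the demo).
    w = np.array([2.0, -1.0, 0.0, 0.5])
    return float(x @ w)

def pick_informative_perturbation(x, candidate_masks, n_samples=64):
    """Among candidate masks (True = keep feature), return the mask whose
    masked-feature randomization produces the most variable model output,
    a crude proxy for how informative that perturbation is."""
    best_mask, best_var = None, -1.0
    for mask in candidate_masks:
        outs = []
        for _ in range(n_samples):
            z = x.copy()
            z[~mask] = rng.normal(size=int((~mask).sum()))  # randomize hidden features
            outs.append(predict(z))
        v = float(np.var(outs))
        if v > best_var:
            best_mask, best_var = mask, v
    return best_mask

x = np.ones(4)
masks = [np.array([False, True, True, True]),   # hide feature 0 (weight 2.0)
         np.array([True, False, True, True]),   # hide feature 1 (weight -1.0)
         np.array([True, True, False, True])]   # hide feature 2 (weight 0.0)
best = pick_informative_perturbation(x, masks)
```

Because feature 0 carries the largest weight, perturbing it moves the output most, so the selector picks the mask that hides feature 0.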
AI · Bullish · arXiv – CS AI · Mar 17 · 5/10
🧠 Researchers introduce IDALC, a semi-supervised framework for voice-controlled dialog systems that improves intent detection and reduces manual annotation costs. The system achieves 5-10% higher accuracy and 4-8% better macro-F1 scores while requiring annotation of only 6-10% of unlabeled data.
AI · Neutral · arXiv – CS AI · Mar 16 · 5/10
🧠 Researchers introduce BoSS (Best-of-Strategies Selector), a new oracle strategy for active learning that outperforms existing methods by using an ensemble approach to select optimal data annotation batches. The study reveals that current state-of-the-art active learning strategies still significantly underperform compared to oracle performance, particularly on large-scale datasets.
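An oracle "best-of-strategies" selector can be sketched in a few lines: let each acquisition strategy propose a batch, retrain on each, and keep whichever batch yields the best validation score. This is an illustrative skeleton under assumed interfaces (`strategies`, `train_eval`), not the paper's exact BoSS algorithm.

```python
def best_of_strategies(strategies, train_eval):
    """Oracle-style selection over acquisition strategies.

    strategies: dict mapping strategy name -> proposed batch of indices.
    train_eval: callable that retrains on a batch and returns validation score.
    Returns the winning strategy name and its batch.
    """
    scored = {name: train_eval(batch) for name, batch in strategies.items()}
    winner = max(scored, key=scored.get)
    return winner, strategies[winner]

# Toy example with hypothetical strategies and pre-baked validation scores.
strategies = {"entropy": [1, 2], "margin": [3, 4], "random": [5, 6]}
def train_eval(batch):
    return {(1, 2): 0.71, (3, 4): 0.78, (5, 6): 0.69}[tuple(batch)]

name, batch = best_of_strategies(strategies, train_eval)
```

The expense is obvious: the oracle retrains once per candidate strategy, which is exactly why it serves as an upper bound rather than a practical method.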
AI · Neutral · arXiv – CS AI · Mar 2 · 4/10
🧠 Researchers propose LEMP4HG, a new language model-enhanced approach for improving graph neural networks on heterophilic graphs where connected nodes have different characteristics. The method leverages language models to better understand semantic relationships between text-attributed nodes, outperforming existing methods while maintaining efficiency through selective message enhancement.