y0news

#fine-tuning News & Analysis

148 articles tagged with #fine-tuning. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.

AI Bullish · arXiv – CS AI · Mar 4 · 5/10 · 3

Quantum-Inspired Fine-Tuning for Few-Shot AIGC Detection via Phase-Structured Reparameterization

Researchers propose Q-LoRA, a quantum-enhanced fine-tuning method that integrates quantum neural networks into LoRA adapters for improved detection of AI-generated content. The study also introduces H-LoRA, a classical variant based on Hilbert transforms that matches Q-LoRA's 5%+ accuracy gains over standard LoRA at lower computational cost.

AI Neutral · arXiv – CS AI · Mar 3 · 6/10 · 7

When Metrics Disagree: Automatic Similarity vs. LLM-as-a-Judge for Clinical Dialogue Evaluation

Researchers fine-tuned the Llama 2 7B model using real patient-doctor interaction transcripts to improve medical query responses, but found significant discrepancies between automatic similarity metrics and GPT-4 evaluations. The study highlights the challenges in evaluating AI medical models and recommends human medical expert review for proper validation.

AI Bullish · arXiv – CS AI · Mar 3 · 7/10 · 6

Token-level Data Selection for Safe LLM Fine-tuning

Researchers have developed TOSS, a new framework for safely fine-tuning large language models that operates at the token level rather than sample level. The method identifies and removes unsafe tokens while preserving task-specific information, demonstrating superior performance compared to existing sample-level defense methods in maintaining both safety and utility.
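The summary doesn't specify how TOSS scores individual tokens, but the core idea of token-level (rather than sample-level) filtering can be sketched as masking flagged tokens out of the fine-tuning loss. The `unsafe_score` heuristic below is a placeholder for illustration, not the paper's actual criterion:

```python
# Token-level loss masking: suppress the training signal of tokens
# flagged as unsafe instead of dropping whole training samples.
def token_loss_weights(tokens, unsafe_score, threshold=0.5):
    """Return a 0/1 weight per token; 0 removes that token from the loss."""
    return [0.0 if unsafe_score(t) > threshold else 1.0 for t in tokens]

def masked_loss(per_token_losses, weights):
    """Average loss over the tokens that were kept."""
    kept = [l * w for l, w in zip(per_token_losses, weights)]
    n = sum(weights)
    return sum(kept) / n if n else 0.0

# Toy example with a placeholder scoring function.
tokens = ["how", "to", "pick", "a", "lock"]
score = lambda t: 0.9 if t == "lock" else 0.1
w = token_loss_weights(tokens, score)
print(w)  # [1.0, 1.0, 1.0, 1.0, 0.0]
print(masked_loss([0.5, 0.2, 0.4, 0.1, 2.0], w))  # ~0.3
```

The sample-level alternative would have discarded the entire sequence; masking keeps the four benign tokens contributing to the task signal.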

AI Neutral · arXiv – CS AI · Mar 3 · 7/10 · 8

SafeSci: Safety Evaluation of Large Language Models in Science Domains and Beyond

Researchers introduce SafeSci, a comprehensive framework for evaluating safety in large language models used for scientific applications. The framework includes a 0.25M-sample benchmark and a 1.5M-sample training dataset, revealing critical vulnerabilities in 24 advanced LLMs while demonstrating that fine-tuning can significantly improve safety alignment.

AI Bullish · arXiv – CS AI · Mar 3 · 5/10 · 4

EstLLM: Enhancing Estonian Capabilities in Multilingual LLMs via Continued Pretraining and Post-Training

Researchers developed EstLLM, enhancing Estonian language capabilities in multilingual LLMs through continued pretraining of Llama 3.1 8B with balanced data mixtures. The approach improved Estonian linguistic performance while maintaining English capabilities, demonstrating that targeted continued pretraining can substantially improve single-language performance in multilingual models.

AI Bullish · arXiv – CS AI · Mar 3 · 6/10 · 4

Prompt and Parameter Co-Optimization for Large Language Models

Researchers introduce MetaTuner, a new framework that combines prompt optimization with fine-tuning for Large Language Models, using shared neural networks to discover optimal combinations of prompts and parameters. The approach addresses the discrete-continuous optimization challenge through supervised regularization and demonstrates consistent performance improvements across benchmarks.

AI Bullish · arXiv – CS AI · Mar 3 · 6/10 · 3

Training Large Language Models To Reason In Parallel With Global Forking Tokens

Researchers developed Set Supervised Fine-Tuning (SSFT) and Global Forking Policy Optimization (GFPO) methods to improve large language model reasoning by enabling parallel processing through 'global forking tokens.' The techniques preserve diverse reasoning modes and demonstrate superior performance on math and code generation benchmarks compared to traditional fine-tuning approaches.

AI Bullish · arXiv – CS AI · Mar 3 · 6/10 · 4

Regularization Through Reasoning: Systematic Improvements in Language Model Classification via Explanation-Enhanced Fine-Tuning

Researchers found that fine-tuning large language models with explanations attached to labels significantly improves classification accuracy compared to label-only training. Surprisingly, even random token sequences that mimic explanation structure provide similar benefits, suggesting the improvement comes from increased token budget and regularization rather than semantic meaning.
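The setup contrasts three target formats: label only, label plus a real explanation, and the control condition the finding hinges on — random tokens matching the explanation's token budget. A rough sketch of the data construction (the `Because:` template and toy vocabulary are illustrative assumptions, not the paper's format):

```python
import random

def make_target(label, explanation=None):
    """Label-only vs explanation-augmented fine-tuning target."""
    return label if explanation is None else f"{label}\nBecause: {explanation}"

def random_control(label, explanation, vocab, seed=0):
    """Control condition: same token budget as the real explanation,
    but filled with random tokens instead of meaningful text."""
    rng = random.Random(seed)
    n = len(explanation.split())
    filler = " ".join(rng.choice(vocab) for _ in range(n))
    return f"{label}\nBecause: {filler}"

vocab = ["alpha", "beta", "gamma", "delta"]
print(make_target("negative"))
print(make_target("negative", "the review reports a battery failure"))
print(random_control("negative", "the review reports a battery failure", vocab))
```

The paper's point is that the second and third variants yield similar gains, which is why the benefit is attributed to token budget and regularization rather than explanation semantics.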

AI Neutral · arXiv – CS AI · Mar 2 · 6/10 · 13

DARE-bench: Evaluating Modeling and Instruction Fidelity of LLMs in Data Science

Researchers introduce DARE-bench, a new benchmark with 6,300 Kaggle-derived tasks for evaluating Large Language Models' performance on data science and machine learning tasks. The benchmark reveals that even advanced models like GPT-4-mini struggle with ML modeling tasks, while fine-tuning on DARE-bench data can improve model accuracy by up to 8x.

AI Bullish · arXiv – CS AI · Mar 2 · 6/10 · 12

Task-Centric Acceleration of Small-Language Models

Researchers propose TASC (Task-Adaptive Sequence Compression), a framework for accelerating small language models through two methods: TASC-ft for fine-tuning with expanded vocabularies and TASC-spec for training-free speculative decoding. The methods demonstrate improved inference efficiency while maintaining task performance across low output-variability generation tasks.

AI Bullish · arXiv – CS AI · Mar 2 · 6/10 · 18

Taming Momentum: Rethinking Optimizer States Through Low-Rank Approximation

Researchers introduce LoRA-Pre, a memory-efficient optimizer that reduces memory overhead in training large language models by using low-rank approximation of momentum states. The method achieves superior performance on Llama models from 60M to 1B parameters while using only 1/8 the rank of baseline methods.
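LoRA-Pre's exact factorization and update rule aren't given in the summary, but the general memory trade — storing a rank-r approximation of a momentum matrix instead of the dense matrix — can be sketched with a truncated SVD (a generic stand-in, not necessarily the method's own decomposition):

```python
import numpy as np

def compress_momentum(m, rank):
    """Keep only the top-`rank` singular triplets of a momentum matrix."""
    u, s, vt = np.linalg.svd(m, full_matrices=False)
    return u[:, :rank], s[:rank], vt[:rank]  # (d, r), (r,), (r, k)

def expand_momentum(u, s, vt):
    """Reconstruct a dense momentum estimate when applying the update."""
    return (u * s) @ vt

rng = np.random.default_rng(0)
m = rng.standard_normal((64, 32))          # dense momentum state
u, s, vt = compress_momentum(m, rank=4)
approx = expand_momentum(u, s, vt)
# Storage drops from 64*32 = 2048 floats to 4*(64 + 32) + 4 = 388.
print(m.size, u.size + s.size + vt.size)
print(np.linalg.norm(m - approx) < np.linalg.norm(m))  # True
```

The rank knob is where the headline claim lives: the paper reports matching or beating baselines while using 1/8 their rank, i.e. a proportionally smaller optimizer-state footprint.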

AI Bullish · arXiv – CS AI · Mar 2 · 6/10 · 15

FineScope: SAE-guided Data Selection Enables Domain Specific LLM Pruning and Finetuning

Researchers introduce FineScope, a framework that uses Sparse Autoencoder (SAE) techniques to create smaller, domain-specific language models from larger pretrained LLMs through structured pruning and self-data distillation. The method achieves competitive performance while significantly reducing computational requirements compared to training from scratch.

AI Bullish · arXiv – CS AI · Mar 2 · 7/10 · 19

Thompson Sampling via Fine-Tuning of LLMs

Researchers developed ToSFiT (Thompson Sampling via Fine-Tuning), a new Bayesian optimization method that uses fine-tuned large language models to improve search efficiency in complex discrete spaces. The approach eliminates computational bottlenecks by directly parameterizing reward probabilities and demonstrates superior performance across diverse applications including protein search and quantum circuit design.
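In ToSFiT the fine-tuned LLM parameterizes the reward probabilities directly; the sketch below shows only the underlying Thompson sampling loop, with simple Beta posteriors over a toy discrete space standing in for the model (a minimal illustration, not the paper's implementation):

```python
import random

def thompson_step(successes, failures, rng):
    """Pick the candidate whose sampled success rate is highest.
    Posterior per candidate: Beta(successes + 1, failures + 1)."""
    draws = [rng.betavariate(s + 1, f + 1)
             for s, f in zip(successes, failures)]
    return max(range(len(draws)), key=draws.__getitem__)

# Toy bandit: candidate 2 has the highest true reward probability.
true_p = [0.2, 0.5, 0.8]
rng = random.Random(0)
succ, fail = [0] * 3, [0] * 3
for _ in range(2000):
    arm = thompson_step(succ, fail, rng)
    if rng.random() < true_p[arm]:
        succ[arm] += 1
    else:
        fail[arm] += 1
print(max(range(3), key=succ.__getitem__))  # index with most successes
```

Sampling from the posterior, rather than always exploiting the current best estimate, is what balances exploration and exploitation; ToSFiT's contribution is making that posterior a fine-tuned LLM over spaces like protein sequences, where enumerating candidates is infeasible.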

AI Bullish · arXiv – CS AI · Mar 2 · 6/10 · 18

LIA: Supervised Fine-Tuning of Large Language Models for Automatic Issue Assignment

Researchers developed LIA, a supervised fine-tuning approach using DeepSeek-R1-Distill-Llama-8B to automatically assign software issues to developers. The system achieved up to 187.8% improvement over the base model and 211.2% better performance than existing methods in developer recommendation accuracy.

AI Neutral · arXiv – CS AI · Feb 27 · 5/10 · 7

Towards Simulating Social Media Users with LLMs: Evaluating the Operational Validity of Conditioned Comment Prediction

Researchers introduced Conditioned Comment Prediction (CCP) to evaluate how well Large Language Models can simulate social media user behavior by predicting user comments. The study found that supervised fine-tuning improves text structure but degrades semantic accuracy, and that behavioral histories are more effective than descriptive personas for user simulation.

AI Bullish · arXiv – CS AI · Feb 27 · 6/10 · 5

pMoE: Prompting Diverse Experts Together Wins More in Visual Adaptation

Researchers developed pMoE, a novel parameter-efficient fine-tuning method that combines multiple expert domains through specialized prompt tokens and dynamic dispatching. Testing across 47 visual adaptation tasks in classification and segmentation shows superior performance with improved computational efficiency compared to existing methods.

AI Bullish · arXiv – CS AI · Feb 27 · 5/10 · 3

Make It Hard to Hear, Easy to Learn: Long-Form Bengali ASR and Speaker Diarization via Extreme Augmentation and Perfect Alignment

Researchers developed Lipi-Ghor-882, an 882-hour Bengali speech dataset, and demonstrated that targeted fine-tuning with synthetic acoustic degradation significantly improves automatic speech recognition for long-form Bengali audio. Their dual pipeline achieved a 0.019 Real-Time Factor, establishing new benchmarks for low-resource speech processing.

AI Bullish · arXiv – CS AI · Feb 27 · 6/10 · 6

Scaling Search Relevance: Augmenting App Store Ranking with LLM-Generated Judgments

Apple's App Store search team successfully implemented LLM-generated textual relevance labels to augment their ranking system, addressing data scarcity issues. A fine-tuned specialized model outperformed larger pre-trained models, generating millions of labels that improved search relevance. This resulted in a statistically significant 0.24% increase in conversion rates in worldwide A/B testing.

AI Bullish · arXiv – CS AI · Feb 27 · 6/10 · 7

Parameter-Efficient Fine-Tuning for Continual Learning: A Neural Tangent Kernel Perspective

Researchers introduce NTK-CL, a new framework for parameter-efficient fine-tuning in continual learning that uses Neural Tangent Kernel theory to address catastrophic forgetting. The approach achieves state-of-the-art performance by tripling feature representation and implementing adaptive mechanisms to maintain task-specific knowledge while learning new tasks.

AI Bullish · arXiv – CS AI · Feb 27 · 6/10 · 6

StruXLIP: Enhancing Vision-language Models with Multimodal Structural Cues

StruXLIP is a new fine-tuning paradigm for vision-language models that uses edge maps and structural cues to improve cross-modal retrieval performance. The method augments standard CLIP training with three structure-centric losses to achieve more robust vision-language alignment by maximizing mutual information between multimodal structural representations.

AI Bullish · Apple Machine Learning · Feb 25 · 6/10 · 3

Constructive Circuit Amplification: Improving Math Reasoning in LLMs via Targeted Sub-Network Updates

Researchers propose Constructive Circuit Amplification, a new method for improving LLM mathematical reasoning by directly targeting and strengthening specific neural network subnetworks (circuits) responsible for particular tasks. This approach builds on findings that model improvements through fine-tuning often result from amplifying existing circuits rather than creating new capabilities.

AI Bullish · OpenAI News · Oct 6 · 6/10 · 6

Introducing AgentKit, new Evals, and RFT for agents

OpenAI has released new developer tools including AgentKit, expanded evaluation capabilities, and reinforcement fine-tuning specifically designed for AI agents. These tools aim to accelerate the development process from prototype to production deployment for AI agent applications.

AI Bullish · Hugging Face Blog · Sep 10 · 6/10 · 5

Fine-tune Any LLM from the Hugging Face Hub with Together AI

Together AI has launched a new feature enabling users to fine-tune any large language model available on the Hugging Face Hub. This development makes custom AI model training more accessible by providing streamlined infrastructure and tooling for developers and researchers.

AI Bullish · Hugging Face Blog · Jun 19 · 6/10 · 6

(LoRA) Fine-Tuning FLUX.1-dev on Consumer Hardware

The article discusses fine-tuning FLUX.1-dev using LoRA (Low-Rank Adaptation) techniques on consumer-grade hardware. This approach makes advanced AI model customization more accessible to individual developers and smaller organizations without requiring enterprise-level computing resources.
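Why LoRA fits on consumer hardware comes down to its forward pass: the frozen base weight stays untouched, and only two small low-rank matrices are trained. A minimal numeric sketch (dimensions and scaling are illustrative defaults, not FLUX.1-dev's actual configuration):

```python
import numpy as np

def lora_forward(x, W, A, B, alpha=16):
    """y = W x + (alpha / r) * B (A x): the frozen weight plus a
    low-rank update. Only A (r x d_in) and B (d_out x r) are trained."""
    r = A.shape[0]
    return W @ x + (alpha / r) * (B @ (A @ x))

rng = np.random.default_rng(0)
d_in, d_out, r = 512, 512, 8
W = rng.standard_normal((d_out, d_in))      # frozen base weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                    # trainable up-projection, zero-init
x = rng.standard_normal(d_in)
y = lora_forward(x, W, A, B)
# With B zero-initialized, the adapter starts as an exact no-op.
print(np.allclose(y, W @ x))  # True
```

Here the trainable parameters number r·(d_in + d_out) = 8,192 versus 262,144 in the base weight, which is the reduction that lets gradients and optimizer state fit in a consumer GPU's memory.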