AI · Neutral · arXiv - CS AI · 7h ago · 6/10
LLM attribution analysis across different fine-tuning strategies and model scales for automated code compliance
Researchers conducted a comparative study of how large language models fine-tuned by different methods (full fine-tuning, LoRA, and quantized LoRA, i.e. QLoRA) interpret code compliance tasks. The study finds that full fine-tuning produces more focused attribution patterns than the parameter-efficient methods, and that larger models develop distinct interpretive strategies even though performance gains plateau above 7B parameters.
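The summary does not say which attribution method the paper uses. As an illustrative sketch only, the snippet below computes one common choice, gradient × input token attribution, and summarizes "focus" with the Shannon entropy of the attribution distribution (lower entropy means attribution mass concentrates on fewer tokens). The model name (gpt2), the prompt, the target token, and the entropy metric are all assumptions for demonstration, not details from the paper.

```python
# Sketch: gradient x input attribution with an entropy-based "focus" score.
# Assumptions: gpt2 as a stand-in model, a toy compliance prompt, and
# " No" as the answer token being attributed. None of these come from the paper.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "gpt2"  # stand-in; the study compares fine-tuned models up past 7B params

tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)
model.eval()

prompt = "def read_file(path):\n    return open(path).read()\n# Compliant? Answer:"
ids = tok(prompt, return_tensors="pt").input_ids

# Embed the tokens manually so gradients can be taken w.r.t. the embeddings.
embeds = model.get_input_embeddings()(ids).detach().requires_grad_(True)
out = model(inputs_embeds=embeds)

# Score the next-token logit for a hypothetical answer token (" No").
target = tok(" No").input_ids[0]
score = out.logits[0, -1, target]
score.backward()

# Gradient x input, reduced to one attribution value per input token.
attr = (embeds.grad * embeds).sum(-1).abs().squeeze(0)
p = attr / attr.sum()

# Entropy of the normalized attribution: lower = more focused attribution.
entropy = -(p * (p + 1e-12).log()).sum().item()
print(f"attribution entropy: {entropy:.3f} nats over {p.numel()} tokens")
```

Running the same measurement over fully fine-tuned, LoRA, and QLoRA checkpoints of one base model, at several scales, is the kind of comparison the study describes.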