y0news

#prompt-tuning News & Analysis

4 articles tagged with #prompt-tuning. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.

AI · Bullish · arXiv – CS AI · Mar 4 · 7/10

CAPT: Confusion-Aware Prompt Tuning for Reducing Vision-Language Misalignment

Researchers propose CAPT, a Confusion-Aware Prompt Tuning framework that addresses systematic misclassifications in vision-language models like CLIP by learning from the model's own confusion patterns. The method uses a Confusion Bank to model persistent category misalignments and introduces specialized modules to capture both semantic and sample-level confusion cues.
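To make the "Confusion Bank" idea concrete, here is a minimal sketch of tracking persistent class confusions and turning them into loss weights. This is an illustration of the general mechanism, not the paper's implementation; the class name, `pair_weight` heuristic, and `scale` parameter are all hypothetical.

```python
import numpy as np

class ConfusionBank:
    """Toy confusion bank: counts how often class i is mispredicted as class j,
    then upweights persistently confused pairs in a training loss (illustrative,
    not CAPT's actual formulation)."""

    def __init__(self, num_classes):
        self.counts = np.zeros((num_classes, num_classes))

    def update(self, true_label, pred_label):
        # Record only misclassifications; correct predictions carry no confusion signal.
        if true_label != pred_label:
            self.counts[true_label, pred_label] += 1

    def pair_weight(self, true_label, pred_label, base=1.0, scale=0.5):
        # Pairs the model keeps confusing get a larger loss weight.
        total = self.counts.sum()
        if total == 0:
            return base
        freq = self.counts[true_label, pred_label] / total
        return base + scale * freq

# Usage: after a few epochs, frequently confused pairs dominate the weighting.
bank = ConfusionBank(num_classes=3)
bank.update(0, 1)
bank.update(0, 1)
bank.update(2, 0)
```

In a full system these weights would modulate the prompt-tuning objective so that gradient updates focus on the categories CLIP systematically misaligns.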

AI · Neutral · arXiv – CS AI · Apr 13 · 6/10

CLIP-Inspector: Model-Level Backdoor Detection for Prompt-Tuned CLIP via OOD Trigger Inversion

Researchers introduce CLIP-Inspector, a backdoor detection method for prompt-tuned CLIP models that reconstructs hidden triggers from out-of-distribution images to determine whether a model has been maliciously compromised. The technique achieves 94% detection accuracy and enables post-hoc model repair, addressing critical security vulnerabilities in outsourced machine-learning services.
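The core intuition of trigger inversion can be shown with a toy example: search for a small input perturbation that flips out-of-distribution probes to one target class, and flag the model if an unusually effective trigger exists. The linear "model", planted backdoor, and single-pixel grid search below are all simplifications for illustration; the real method operates on prompt-tuned CLIP and would optimize a patch by gradient descent.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a classifier: logits = images @ W.
# A "backdoored" W gives class 0 a large weight on the last pixel, so pushing
# that pixel high flips almost any input to class 0 (planted for illustration).
W = rng.normal(size=(16, 3))
W[-1, 0] += 8.0

def predict(images):
    return (images @ W).argmax(axis=1)

def invert_trigger(images, target, candidates=np.linspace(0.0, 5.0, 11)):
    """Search for a single-pixel 'trigger' value that flips OOD probes to
    `target`; returns the best value and its flip rate."""
    best_value, best_rate = 0.0, 0.0
    for v in candidates:
        patched = images.copy()
        patched[:, -1] = v
        rate = (predict(patched) == target).mean()
        if rate > best_rate:
            best_rate, best_value = rate, v
    return best_value, best_rate

# OOD probe images: no clean in-distribution data is needed for detection.
ood = rng.normal(size=(64, 16))
value, flip_rate = invert_trigger(ood, target=0)
backdoored = flip_rate > 0.9  # a near-universal trigger flags a likely backdoor
```

A clean model admits no such universal trigger, so its best flip rate stays near the base rate; that gap is what makes model-level detection possible.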

AI · Bullish · arXiv – CS AI · Feb 27 · 6/10

pMoE: Prompting Diverse Experts Together Wins More in Visual Adaptation

Researchers developed pMoE, a novel parameter-efficient fine-tuning method that combines multiple expert domains through specialized prompt tokens and dynamic dispatching. Testing across 47 visual adaptation tasks in classification and segmentation shows superior performance with improved computational efficiency compared to existing methods.
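The "dynamic dispatching" of expert prompts can be sketched as a mixture-of-experts router over per-expert prompt pools: a gate network weights each expert's prompt tokens by the input feature, and the mixed tokens are prepended to the visual tokens. The shapes, router, and softmax gating below are generic MoE conventions assumed for illustration, not pMoE's exact architecture.

```python
import numpy as np

rng = np.random.default_rng(1)
dim, num_experts, tokens_per_expert = 8, 4, 2

# One small pool of learnable prompt tokens per expert (hypothetical shapes).
expert_prompts = rng.normal(size=(num_experts, tokens_per_expert, dim))
router = rng.normal(size=(dim, num_experts))  # dispatch projection

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def dispatch_prompts(feature):
    """Mix expert prompt tokens with input-dependent gate weights: a sketch
    of MoE-style prompt dispatching, not the paper's exact method."""
    gate = softmax(feature @ router)                    # (num_experts,)
    mixed = np.einsum('e,etd->td', gate, expert_prompts)  # weighted token mix
    return gate, mixed

feature = rng.normal(size=(dim,))
gate, mixed_tokens = dispatch_prompts(feature)
```

Because only the prompt pools and the router are trained, the approach stays parameter-efficient while letting different inputs draw on different expert domains.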

AI · Neutral · arXiv – CS AI · Apr 6 · 4/10

An Initial Exploration of Contrastive Prompt Tuning to Generate Energy-Efficient Code

Researchers explored using Contrastive Prompt Tuning (CPT) to improve Large Language Models' ability to generate energy-efficient code, combining contrastive learning with parameter-efficient fine-tuning. The study tested CPT across Python, Java, and C++ on three different models, finding consistent accuracy improvements for two models but variable efficiency gains depending on model, language, and task complexity.
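A generic contrastive objective of the kind such training builds on can be written as an InfoNCE-style loss: pull the prompt-conditioned embedding toward an energy-efficient reference and push it away from inefficient ones. This is a standard contrastive loss assumed for illustration, not the paper's exact formulation; the embeddings and temperature are placeholders.

```python
import numpy as np

def contrastive_loss(anchor, positive, negatives, temperature=0.1):
    """InfoNCE-style loss over cosine similarities: low when `anchor` is
    closer to `positive` (e.g. an energy-efficient solution) than to the
    `negatives` (inefficient ones). Generic sketch, not the paper's loss."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    sims = [cos(anchor, positive)] + [cos(anchor, n) for n in negatives]
    logits = np.array(sims) / temperature
    # Stable log-sum-exp for the softmax denominator.
    m = logits.max()
    lse = m + np.log(np.exp(logits - m).sum())
    return -(logits[0] - lse)  # negative log-probability of the positive
```

Minimizing this loss during parameter-efficient fine-tuning nudges the model's representation of generated code toward the energy-efficient examples.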