y0news

#pretrained-models News & Analysis

4 articles tagged with #pretrained-models. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.

AI · Bullish · arXiv – CS AI · Apr 13 · 7/10
🧠

Evidential Transformation Network: Turning Pretrained Models into Evidential Models for Post-hoc Uncertainty Estimation

Researchers propose Evidential Transformation Network (ETN), a lightweight post-hoc module that converts pretrained models into evidential models for uncertainty estimation without retraining. ETN operates in logit space using sample-dependent affine transformations and Dirichlet distributions, demonstrating improved uncertainty quantification across vision and language benchmarks with minimal computational overhead.
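The summary describes mapping frozen-model logits through an affine transformation and reading the result as Dirichlet concentration parameters. A minimal numpy sketch of that idea follows; the function name `evidential_head`, the fixed `W` and `b` (the paper makes the affine map sample-dependent), and the `exp(·) + 1` parameterization are all illustrative assumptions, not the paper's exact construction:

```python
import numpy as np

def evidential_head(logits, W, b):
    """Illustrative post-hoc evidential transform (names and
    parameterization assumed): affine map in logit space, then a
    positivity transform to get Dirichlet concentrations alpha_k."""
    z = logits @ W + b                 # affine transformation of frozen logits
    alpha = np.exp(z) + 1.0            # Dirichlet concentrations, alpha_k > 1
    strength = alpha.sum(-1, keepdims=True)
    probs = alpha / strength           # Dirichlet mean = predictive probabilities
    uncertainty = alpha.shape[-1] / strength  # vacuity-style score: K / sum(alpha)
    return probs, uncertainty

# Usage: no retraining of the base model is needed; only W and b are fit.
logits = np.array([[2.0, 0.5, -1.0]])
probs, unc = evidential_head(logits, np.eye(3), np.zeros(3))
```

Because the base model stays frozen and only the small head is fit, the extra compute at inference is a single matrix multiply, consistent with the "minimal computational overhead" claim.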

AI · Neutral · arXiv – CS AI · 6h ago · 6/10
🧠

Channel-Level Semantic Perturbations: Unlearnable Examples for Diverse Training Paradigms

Researchers have developed a new technique called Shallow Semantic Camouflage (SSC) to protect personal data from unauthorized use in AI model training. The work addresses a critical gap where existing data protection methods fail under modern pretraining-finetuning paradigms, demonstrating that frozen pretrained weights significantly weaken previous unlearnable example approaches.
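The general idea behind channel-level unlearnable examples can be sketched as injecting a class-keyed, low-magnitude pattern into one image channel, so that a model trains on the shortcut instead of real features. The sketch below is a generic illustration of that family of methods; the function name, the single-channel choice, and the seeding scheme are assumptions and do not reproduce the paper's SSC construction:

```python
import numpy as np

def channel_perturb(image, class_id, channel=0, eps=8 / 255, seed=0):
    """Generic channel-level unlearnable perturbation (illustrative only):
    add a bounded, class-keyed noise pattern to a single channel of a
    C x H x W image with pixel values in [0, 1]."""
    rng = np.random.default_rng(seed + class_id)       # same pattern per class
    noise = rng.uniform(-eps, eps, size=image.shape[1:])  # H x W perturbation
    out = image.copy()
    out[channel] = np.clip(out[channel] + noise, 0.0, 1.0)
    return out
```

The perturbation is imperceptible (bounded by `eps`) but consistent within each class, which is the property that lets it act as a learnable shortcut during training.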

AI · Bullish · arXiv – CS AI · Mar 3 · 6/10
🧠

Inner Loop Inference for Pretrained Transformers: Unlocking Latent Capabilities Without Training

Researchers propose a new inference technique called "inner loop inference" that improves pretrained transformers' performance by repeatedly applying selected layers during inference, with no additional training. The method yields consistent but modest accuracy improvements across benchmarks by allowing further refinement of internal representations.
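The core mechanic — re-applying a frozen layer several times with a residual connection, no weight updates — can be sketched in a few lines. The stand-in `tanh` layer, the residual form, and the loop count below are simplifying assumptions; the paper selects real transformer layers:

```python
import numpy as np

def inner_loop_block(x, W, n_loops=3):
    """Sketch of inner-loop inference (simplified): re-apply one frozen
    layer n_loops times at inference time, refining the representation
    through a residual connection. No parameters are updated."""
    for _ in range(n_loops):
        h = np.tanh(x @ W)   # stand-in for a frozen transformer layer
        x = x + h            # residual update of the hidden state
    return x

# Usage: n_loops is a pure inference-time knob; n_loops=1 recovers
# the ordinary single forward pass through the layer.
x = np.ones((2, 4))
y = inner_loop_block(x, 0.1 * np.eye(4), n_loops=3)
```

Since the loop reuses existing weights, the cost is extra forward passes through the selected layers only, trading inference compute for the modest accuracy gains reported.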