y0news

#model-editing News & Analysis

6 articles tagged with #model-editing. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.

6 articles
AI · Bullish · arXiv – CS AI · 6d ago · 7/10
🧠

RePAIR: Interactive Machine Unlearning through Prompt-Aware Model Repair

Researchers introduce RePAIR, a framework enabling users to instruct large language models to forget harmful knowledge, misinformation, and personal data through natural language prompts at inference time. The system uses a training-free method called STAMP that manipulates model activations to achieve selective unlearning with minimal computational overhead, outperforming existing approaches while preserving model utility.
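As a rough illustration of the activation-manipulation idea (this is a generic sketch of projecting a "forget" direction out of a hidden state at inference time, not the paper's actual STAMP implementation; all names here are hypothetical):

```python
import numpy as np

def remove_direction(hidden: np.ndarray, forget_dir: np.ndarray) -> np.ndarray:
    """Subtract the component of `hidden` that lies along `forget_dir`."""
    u = forget_dir / np.linalg.norm(forget_dir)
    return hidden - np.dot(hidden, u) * u

h = np.array([3.0, 1.0, 2.0])   # a hidden-state vector
d = np.array([1.0, 0.0, 0.0])   # direction associated with the unwanted fact
h_edited = remove_direction(h, d)
print(h_edited)                  # component along d is zeroed: [0. 1. 2.]
```

Because the edit happens to activations at inference time, no weights change and no retraining is needed, which is what makes this family of methods training-free.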

AI · Bullish · arXiv – CS AI · Mar 11 · 7/10
🧠

UltraEdit: Training-, Subject-, and Memory-Free Lifelong Editing in Language Models

Researchers have developed UltraEdit, a breakthrough method for efficiently updating large language models without retraining. The approach is 7x faster than previous methods while using 4x less memory, enabling continuous model updates with up to 2 million edits on consumer hardware.

AI · Bullish · arXiv – CS AI · Mar 3 · 7/10
🧠

Bilinear representation mitigates reversal curse and enables consistent model editing

Researchers have identified that the 'reversal curse' in language models - their inability to infer 'B is A' from 'A is B' - can be overcome through bilinear representation structures. Training models on synthetic relational knowledge graphs creates internal geometries that enable consistent model editing and logical inference of reverse facts.
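Why a bilinear structure helps can be seen in a toy scorer of the form score(h, r, t) = hᵀ W_r t: scoring the reverse relation with the transposed matrix gives the same value, so a fact learned in one direction is automatically consistent in the other. This is an illustrative sketch, not the paper's exact construction:

```python
import numpy as np

# Bilinear relational scorer: score(h, W, t) = h @ W @ t.
rng = np.random.default_rng(0)
dim = 4
W = rng.normal(size=(dim, dim))  # matrix for the relation "is"
h = rng.normal(size=dim)         # embedding of entity A
t = rng.normal(size=dim)         # embedding of entity B

forward = h @ W @ t              # score for "A is B"
reverse = t @ W.T @ h            # score for "B is A" via the transpose
print(np.isclose(forward, reverse))  # True: both directions agree
```

The equality holds identically (tᵀWᵀh is the transpose of the scalar hᵀWt), so the reverse fact never has to be learned separately.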

AI · Bullish · arXiv – CS AI · Mar 17 · 6/10
🧠

RAZOR: Ratio-Aware Layer Editing for Targeted Unlearning in Vision Transformers and Diffusion Models

Researchers introduce RAZOR, a new framework for efficiently removing sensitive information from AI models like CLIP and Stable Diffusion without requiring full retraining. The method selectively edits specific layers and attention heads in transformer models to achieve targeted 'unlearning' while preserving overall performance.
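One way to picture "ratio-aware" layer selection is to edit only the layers that matter far more to the forgotten data than to the retained data. The scores and threshold below are made up for illustration; RAZOR's actual criterion may differ:

```python
import numpy as np

def select_layers(forget_scores, retain_scores, threshold=2.0):
    """Return indices of layers far more important to the forget set."""
    ratios = np.array(forget_scores) / np.array(retain_scores)
    return [i for i, r in enumerate(ratios) if r > threshold]

forget = [0.9, 0.1, 0.8, 0.2]   # per-layer sensitivity to the forget set
retain = [0.3, 0.4, 0.1, 0.5]   # per-layer sensitivity to the retain set
print(select_layers(forget, retain))  # [0, 2]
```

Editing only the selected layers (here, layers 0 and 2) is what lets this style of method remove targeted information while leaving most of the model, and hence overall performance, untouched.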

🧠 Stable Diffusion
AI · Bullish · arXiv – CS AI · Mar 16 · 6/10
🧠

MetaKE: Meta-learning Aligned Knowledge Editing via Bi-level Optimization

Researchers propose MetaKE, a new framework for knowledge editing in Large Language Models that addresses the 'Semantic-Execution Disconnect' through bi-level optimization. The method treats edit targets as learnable parameters and uses a Structural Gradient Proxy to align edits with the model's feasible manifold, showing significant improvements over existing approaches.
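For context, the primitive that knowledge-editing methods build on is a minimal weight update that makes a key vector map to a new value. The rank-one edit below is that generic primitive, not MetaKE's bi-level procedure:

```python
import numpy as np

def rank_one_edit(W: np.ndarray, k: np.ndarray, v: np.ndarray) -> np.ndarray:
    """Return W' such that W' @ k == v, with a minimal rank-one change to W."""
    return W + np.outer(v - W @ k, k) / (k @ k)

rng = np.random.default_rng(1)
W = rng.normal(size=(3, 3))
k = rng.normal(size=3)           # key representing the fact being edited
v = np.array([1.0, 2.0, 3.0])    # desired new output for that key
W2 = rank_one_edit(W, k, v)
print(np.allclose(W2 @ k, v))    # True: the edited weight maps k to v exactly
```

MetaKE's contribution, per the summary, is to treat the target v itself as a learnable parameter optimized in an outer loop, rather than fixing it up front as this sketch does.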

AI · Neutral · arXiv – CS AI · Mar 17 · 5/10
🧠

SAKE: Towards Editing Auditory Attribute Knowledge of Large Audio-Language Models

Researchers introduce SAKE, the first benchmark for editing auditory attribute knowledge in large audio-language models without requiring full retraining. The study reveals significant limitations in current editing methods, particularly with auditory generalization and sequential editing, while finding that fine-tuning modality connectors offers better performance than editing LLM backbones directly.