y0news

#activation-steering News & Analysis

12 articles tagged with #activation-steering. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.

AI · Bullish · arXiv – CS AI · 2d ago · 7/10
🧠

RePAIR: Interactive Machine Unlearning through Prompt-Aware Model Repair

Researchers introduce RePAIR, a framework enabling users to instruct large language models to forget harmful knowledge, misinformation, and personal data through natural language prompts at inference time. The system uses a training-free method called STAMP that manipulates model activations to achieve selective unlearning with minimal computational overhead, outperforming existing approaches while preserving model utility.
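
The summary does not spell out how STAMP edits activations, so here is a minimal, hypothetical sketch of one common training-free intervention: ablating a "forget" direction from a layer's hidden states with a PyTorch forward hook. The layer, direction, and hook are all illustrative stand-ins, not the paper's method.

```python
# Minimal sketch of training-free activation editing for unlearning,
# assuming the intervention is directional ablation on a hidden layer.
# The model, layer choice, and forget_dir are illustrative only.
import torch
import torch.nn as nn

torch.manual_seed(0)
hidden = 64
layer = nn.Linear(hidden, hidden)  # stand-in for one transformer block

# Hypothetical "forget" direction, e.g. estimated from prompts about
# the knowledge the user asked the model to unlearn.
forget_dir = torch.randn(hidden)
forget_dir = forget_dir / forget_dir.norm()

def ablate_direction(module, inputs, output):
    # Remove the component of every hidden state along forget_dir,
    # leaving the rest of the representation untouched.
    coeff = output @ forget_dir            # per-sample projections
    return output - coeff.unsqueeze(-1) * forget_dir

handle = layer.register_forward_hook(ablate_direction)

x = torch.randn(4, hidden)
edited = layer(x)
# Sanity check: edited activations carry no forget_dir component.
print((edited @ forget_dir).abs().max())   # ~0
handle.remove()
```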

AI · Bearish · arXiv – CS AI · 3d ago · 7/10
🧠

Persona Non Grata: Single-Method Safety Evaluation Is Incomplete for Persona-Imbued LLMs

Researchers demonstrate that safety evaluations of persona-imbued large language models using only prompt-based testing are fundamentally incomplete, as activation steering reveals entirely different vulnerability profiles across model architectures. Testing on four models reveals a 'prosocial persona paradox': conscientious personas that appear safe under prompting become the most vulnerable to activation-steering attacks, indicating that single-method safety assessments can miss critical failure modes.
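
For readers unfamiliar with the attack surface being contrasted with prompting, below is a toy sketch of the standard activation-steering setup: adding a behavior vector to the residual stream at increasing strengths. The vector and activations are random stand-ins, not the paper's.

```python
# Sketch of the activation-steering evaluation axis the paper contrasts
# with prompt-based testing: add a behavior vector to the residual
# stream at increasing strengths and score the completion each time.
import torch

torch.manual_seed(0)
d_model = 32
resid = torch.randn(5, d_model)          # residual stream at one layer
harmful_dir = torch.randn(d_model)
harmful_dir /= harmful_dir.norm()

for alpha in [0.0, 2.0, 4.0, 8.0]:
    steered = resid + alpha * harmful_dir
    # In a real evaluation, `steered` would continue through the model
    # and the completion would be scored for safety at each alpha.
    drift = torch.cosine_similarity(steered, resid, dim=-1).mean()
    print(f"alpha={alpha:>4}: mean cosine to original = {drift:.3f}")
```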

🧠 Llama
AI · Bullish · arXiv – CS AI · Mar 17 · 7/10
🧠

Steering at the Source: Style Modulation Heads for Robust Persona Control

Researchers have identified a method to control Large Language Model behavior by targeting only three specific attention heads, dubbed 'Style Modulation Heads', rather than the entire residual stream. This approach preserves model coherence while enabling precise persona and style control, offering a more efficient alternative to fine-tuning.
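
A rough sketch of what head-level steering looks like mechanically, assuming the usual reshape of attention output into per-head slices; the head indices and style vector below are hypothetical, not the paper's identified heads.

```python
# Sketch of head-level steering: instead of editing the whole residual
# stream, modify only a few attention heads' outputs. Head indices and
# the style vector are illustrative, not the paper's learned values.
import torch

torch.manual_seed(0)
n_heads, d_head = 8, 16
seq, d_model = 10, n_heads * d_head

attn_out = torch.randn(seq, d_model)          # per-token attention output
per_head = attn_out.view(seq, n_heads, d_head)

style_heads = [2, 5, 7]                       # hypothetical "style" heads
style_vec = torch.randn(d_head)
style_vec /= style_vec.norm()

for h in style_heads:
    per_head[:, h, :] = per_head[:, h, :] + 3.0 * style_vec

steered = per_head.view(seq, d_model)         # rest of the model unchanged
print(steered.shape)
```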

AI · Neutral · arXiv – CS AI · Mar 11 · 7/10
🧠

Curveball Steering: The Right Direction To Steer Isn't Always Linear

Researchers propose 'Curveball steering', a nonlinear method for controlling large language model behavior that outperforms traditional linear approaches. The study challenges the Linear Representation Hypothesis by showing that LLM activation spaces have substantial geometric distortions that require geometry-aware interventions.
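
The summary does not describe Curveball's actual update rule, so the toy below only illustrates the linear-versus-nonlinear contrast: vector addition versus a norm-preserving spherical rotation toward the target direction.

```python
# Illustrative contrast between linear steering (vector addition) and a
# simple geometry-aware alternative: rotate the activation toward the
# target direction while preserving its norm. This is not the paper's
# Curveball method, just a minimal nonlinear baseline.
import torch

torch.manual_seed(0)
d = 16
h = torch.randn(d)
target = torch.randn(d)
target /= target.norm()

# Linear steering: the norm can blow up with the coefficient.
linear = h + 4.0 * target

# Nonlinear steering: spherical interpolation toward the target.
t = 0.5
h_unit = h / h.norm()
omega = torch.acos(torch.clamp(h_unit @ target, -1.0, 1.0))
slerp = (torch.sin((1 - t) * omega) * h_unit
         + torch.sin(t * omega) * target) / torch.sin(omega)
nonlinear = h.norm() * slerp

print(f"original norm {h.norm():.2f}, "
      f"linear {linear.norm():.2f}, nonlinear {nonlinear.norm():.2f}")
```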

AI · Bullish · arXiv – CS AI · Mar 9 · 7/10
🧠

Mitigating Content Effects on Reasoning in Language Models through Fine-Grained Activation Steering

Researchers apply fine-grained activation steering to reduce reasoning biases in large language models, particularly the tendency to confuse content plausibility with logical validity. Their novel K-CAST method achieved up to a 15% improvement in formal reasoning accuracy while remaining robust across different tasks and languages.
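
K-CAST's exact recipe is not given in the summary; the sketch below shows the standard contrastive substrate such methods build on, deriving a steering vector as the mean activation difference between two prompt sets. All tensors are toy stand-ins.

```python
# Minimal sketch of building a contrastive steering vector: mean
# activation difference between "valid reasoning" and
# "plausibility-biased" prompt sets, applied at inference time.
import torch

torch.manual_seed(0)
d_model, n_pairs = 48, 20

# Toy stand-ins for hidden states collected at one layer.
acts_valid = torch.randn(n_pairs, d_model) + 0.5
acts_biased = torch.randn(n_pairs, d_model) - 0.5

steer = acts_valid.mean(0) - acts_biased.mean(0)
steer /= steer.norm()

# At inference, push new activations toward the "valid" side.
h = torch.randn(d_model)
h_steered = h + 2.0 * steer
print(h_steered.shape)
```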

AI · Neutral · arXiv – CS AI · 3d ago · 6/10
🧠

From Attribution to Action: A Human-Centered Application of Activation Steering

Researchers introduce an interactive workflow combining Sparse Autoencoders (SAEs) and activation steering to make AI explainability actionable for practitioners. Through expert interviews built around debugging tasks on CLIP, the study finds that activation steering enables hypothesis testing and intervention-based debugging, though practitioners place more trust in observed model behavior than in explanation plausibility and flag risks such as ripple effects and limited generalization.
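
A compact, hypothetical sketch of the SAE-plus-steering loop such a workflow rests on: encode an activation into sparse features, pick one to investigate, and intervene along its decoder direction. The SAE here is untrained and purely illustrative.

```python
# Sketch of the SAE-plus-steering loop: encode an activation into
# sparse features, choose a feature to test, and steer by adding its
# decoder direction back. The SAE is untrained and illustrative only.
import torch
import torch.nn as nn

torch.manual_seed(0)
d_model, d_sae = 32, 128

enc = nn.Linear(d_model, d_sae)
dec = nn.Linear(d_sae, d_model, bias=False)

h = torch.randn(d_model)
feats = torch.relu(enc(h))                # sparse-ish feature activations
top = int(feats.argmax())                 # feature a practitioner inspects

# Intervention: amplify that feature's decoder direction in the activation.
h_steered = h + 5.0 * dec.weight[:, top]
print(f"testing feature {top}; activation moved by {(h_steered - h).norm():.2f}")
```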

AI · Bullish · arXiv – CS AI · 3d ago · 6/10
🧠

CoSToM: Causal-oriented Steering for Intrinsic Theory-of-Mind Alignment in Large Language Models

Researchers introduce CoSToM, a framework that uses causal tracing and activation steering to improve Theory of Mind alignment in large language models. The work addresses a critical gap between LLMs' internal knowledge and external behavior, demonstrating that targeted interventions in specific neural layers can enhance social reasoning capabilities and dialogue quality.
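
A toy illustration of causal tracing, the localization step paired with steering here: run clean and corrupted inputs, patch the clean hidden state back in layer by layer, and see where the output recovers. The three-layer model and scoring are assumptions, not CoSToM's setup.

```python
# Toy causal tracing: patch the clean run's hidden state into the
# corrupted run at each layer and measure how much the output recovers.
import torch
import torch.nn as nn

torch.manual_seed(0)
d = 16
layers = nn.ModuleList([nn.Linear(d, d) for _ in range(3)])

def run(x, patch_layer=None, patch_value=None):
    states = []
    for i, layer in enumerate(layers):
        x = torch.tanh(layer(x))
        if i == patch_layer:
            x = patch_value          # restore the clean hidden state
        states.append(x)
    return x, states

clean_in, corrupt_in = torch.randn(d), torch.randn(d)
clean_out, clean_states = run(clean_in)
corrupt_out, _ = run(corrupt_in)

for i in range(3):
    patched_out, _ = run(corrupt_in, patch_layer=i, patch_value=clean_states[i])
    recovery = torch.cosine_similarity(patched_out, clean_out, dim=0)
    print(f"patch layer {i}: cosine to clean output = {recovery:.3f}")
```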

AI · Neutral · arXiv – CS AI · 4d ago · 6/10
🧠

Dictionary-Aligned Concept Control for Safeguarding Multimodal LLMs

Researchers introduce Dictionary-Aligned Concept Control (DACO), a framework that uses a curated dictionary of 15,000 multimodal concepts and Sparse Autoencoders to improve safety in multimodal large language models by steering their activations at inference time. Testing across multiple models shows DACO significantly enhances safety performance while preserving general-purpose capabilities without requiring model retraining.
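
A minimal sketch of dictionary-based concept control under stated assumptions: a four-entry concept bank stands in for DACO's 15,000 curated concepts, and the unsafe-concept id and damping rule are hypothetical.

```python
# Sketch of dictionary-based concept control: match an activation
# against a bank of concept directions and remove the best match if it
# is flagged unsafe. The tiny dictionary is a stand-in for DACO's
# 15,000 curated multimodal concepts.
import torch

torch.manual_seed(0)
d_model, n_concepts = 32, 4
concepts = torch.randn(n_concepts, d_model)
concepts = concepts / concepts.norm(dim=-1, keepdim=True)
unsafe = {2}                                  # hypothetical unsafe concept ids

h = torch.randn(d_model)
scores = concepts @ h                         # projection onto each concept
hit = int(scores.abs().argmax())

if hit in unsafe:
    # Remove the unsafe concept's component from the activation.
    h = h - (h @ concepts[hit]) * concepts[hit]
print(f"matched concept {hit}; residual projection {h @ concepts[hit]:.4f}")
```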

AI · Neutral · arXiv – CS AI · Apr 10 · 6/10
🧠

On Emotion-Sensitive Decision Making of Small Language Model Agents

Researchers introduce a framework for studying how emotional states affect decision-making in small language models (SLMs) used as autonomous agents. Using activation steering techniques grounded in real-world emotion-eliciting texts, they benchmark SLMs across game-theoretic scenarios and find that emotional perturbations systematically influence strategic choices, though behaviors often remain unstable and misaligned with human patterns.
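
To make the evaluation loop concrete, here is a toy version: steer a hidden state with an "emotion" direction and read out how a two-action game policy shifts. The emotion vector and policy head are illustrative, not the paper's components.

```python
# Toy sketch of the evaluation loop: steer an agent's hidden state with
# an "emotion" vector and watch its action distribution in a two-action
# game shift as the steering strength grows.
import torch

torch.manual_seed(0)
d = 24
policy_head = torch.randn(2, d)               # logits for [cooperate, defect]
anger = torch.randn(d)                        # hypothetical emotion direction
anger /= anger.norm()

h = torch.randn(d)
for alpha in [0.0, 3.0, 6.0]:
    probs = torch.softmax(policy_head @ (h + alpha * anger), dim=0)
    print(f"alpha={alpha}: cooperate={probs[0]:.2f}, defect={probs[1]:.2f}")
```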

AI · Bearish · arXiv – CS AI · Apr 10 · 6/10
🧠

The Impact of Steering Large Language Models with Persona Vectors in Educational Applications

Researchers studied how persona vectors, AI steering techniques that inject personality traits into large language models, affect educational applications like essay generation and automated grading. The study found that persona steering significantly degrades answer quality, with substantially larger negative impacts on open-ended humanities tasks than on factual science questions, and reveals that AI scorers exhibit predictable bias patterns based on assigned personality traits.
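
A short sketch of how persona vectors are typically extracted before being injected: contrast activations from persona-conditioned and neutral prompts. The toy activations and coefficient are assumptions, not the study's values.

```python
# Sketch of extracting a persona vector by contrasting activations from
# persona-conditioned and neutral prompts, then injecting it at grading
# time. All tensors are toy stand-ins for real hidden states.
import torch

torch.manual_seed(0)
d_model, n = 40, 30
acts_persona = torch.randn(n, d_model) + 0.8   # e.g. "You are very strict..."
acts_neutral = torch.randn(n, d_model)

persona_vec = acts_persona.mean(0) - acts_neutral.mean(0)
persona_vec /= persona_vec.norm()

# Injected at inference, this is the knob the study shows can bias scores.
h = torch.randn(d_model)
h_steered = h + 4.0 * persona_vec
print(h_steered.shape)
```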

AI · Neutral · arXiv – CS AI · Mar 3 · 7/10
🧠

Decoding Answers Before Chain-of-Thought: Evidence from Pre-CoT Probes and Activation Steering

New research reveals that large language models often determine their final answers before generating chain-of-thought reasoning, challenging the assumption that CoT reflects the model's actual decision process. Linear probes can predict model answers with an AUC of 0.9 before CoT generation begins, and steering these activations flips the final answer in over 50% of cases.
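
A self-contained toy of the pre-CoT probing setup: fit a linear probe on synthetic "hidden states" that weakly encode the eventual answer and score it with AUC. Real experiments probe actual model activations; everything here is simulated.

```python
# Sketch of the pre-CoT probing setup: fit a linear probe on hidden
# states taken before any chain-of-thought tokens and score it with
# AUC. Synthetic activations stand in for real model states.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
d, n = 64, 400
answer = rng.integers(0, 2, n)            # model's eventual final answer
signal = rng.normal(size=d)               # direction weakly encoding it
acts = rng.normal(size=(n, d)) + 0.6 * answer[:, None] * signal

probe = LogisticRegression(max_iter=1000).fit(acts[:300], answer[:300])
auc = roc_auc_score(answer[300:], probe.predict_proba(acts[300:])[:, 1])
print(f"held-out AUC: {auc:.2f}")   # the paper reports ~0.9 on real models
```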

AI · Neutral · arXiv – CS AI · Mar 3 · 6/10
🧠

Detecting the Disturbance: A Nuanced View of Introspective Abilities in LLMs

Researchers investigated whether large language models can introspect by detecting perturbations to their own internal states, using Meta-Llama-3.1-8B-Instruct. They found that while binary detection methods from prior work were flawed due to methodological artifacts, models do show partial introspective capability: they localize sentence injections with 88% accuracy and discriminate injection strengths with 83% accuracy, but only for early-layer perturbations.
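
A toy rendering of the injection-localization test: perturb one position in a sequence of hidden states and try to pick it out. The real study queries the model's own verbal report; a simple norm heuristic stands in here.

```python
# Toy version of the injection-localization test: add a perturbation at
# one position in a sequence of hidden states, then check whether the
# disturbed position can be picked out again.
import torch

torch.manual_seed(0)
seq, d = 6, 32
states = torch.randn(seq, d)

inject_at = 3
states[inject_at] += 4.0 * torch.randn(d)   # the "injected sentence"

guess = int(states.norm(dim=-1).argmax())   # crude localization readout
print(f"injected at {inject_at}, localized at {guess}")
```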