y0news

#model-control News & Analysis

6 articles tagged with #model-control. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.

AI · Bullish · arXiv – CS AI · Apr 13 · 7/10
🧠

Neurons Speak in Ranges: Breaking Free from Discrete Neuronal Attribution

Researchers introduce NeuronLens, a framework that interprets neural networks by analyzing activation ranges rather than individual neurons, addressing the widespread polysemanticity problem in large language models. The range-based approach enables more precise concept manipulation while minimizing unintended degradation of model performance.
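
The core idea can be illustrated with a toy sketch (names and ranges here are hypothetical illustrations, not NeuronLens's actual API): instead of attributing a concept to a neuron being "on", a concept is attributed to the neuron's activation falling inside a learned interval, so one polysemantic neuron can carry several concepts in disjoint ranges.

```python
# Hypothetical sketch of range-based attribution; the concept names and
# range values below are illustrative, not from the paper.

def attribute_concepts(activation: float, ranges: dict) -> list:
    """Return every concept whose learned activation range contains `activation`.

    A single polysemantic neuron can carry several concepts, each occupying
    a different sub-range of its activation spectrum.
    """
    return [concept for concept, (lo, hi) in ranges.items() if lo <= activation <= hi]

# One neuron, three concepts living in disjoint activation ranges:
ranges = {"python_code": (0.1, 0.4), "snakes": (0.5, 0.8), "negation": (0.85, 1.0)}
print(attribute_concepts(0.3, ranges))  # only the range containing 0.3 matches
```

The payoff for concept manipulation is that an intervention can target one sub-range without touching the neuron's other concepts.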

AI · Bullish · arXiv – CS AI · Mar 17 · 7/10
🧠

Steering at the Source: Style Modulation Heads for Robust Persona Control

Researchers have identified a method to control Large Language Model behavior by targeting only three specific attention heads called 'Style Modulation Heads' rather than the entire residual stream. This approach maintains model coherency while enabling precise persona and style control, offering a more efficient alternative to fine-tuning.
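
A minimal sketch of the general mechanism (our own construction, assuming simple additive steering on a hook; the paper's exact intervention may differ): modify the outputs of only the designated style heads and pass every other head through untouched.

```python
# Sketch of head-targeted steering (assumed interface, not the paper's code):
# steer only the few "style" heads, leave the rest of the layer alone.

def steer_heads(head_outputs, style_heads, alpha, style_vector):
    """Add alpha * style_vector to the outputs of the chosen heads only.

    head_outputs: {head_index: output vector} for one attention layer
    style_heads:  indices of the heads identified as style modulators
    """
    steered = {}
    for idx, vec in head_outputs.items():
        if idx in style_heads:
            steered[idx] = [v + alpha * s for v, s in zip(vec, style_vector)]
        else:
            steered[idx] = list(vec)  # non-style heads pass through unchanged
    return steered

outs = {0: [1.0, 0.0], 1: [0.0, 1.0], 2: [0.5, 0.5]}
steered = steer_heads(outs, style_heads={1}, alpha=2.0, style_vector=[0.1, -0.1])
```

Because only three heads are touched, the rest of the computation is identical to the unsteered model, which is why coherence is easier to preserve than with residual-stream-wide edits.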

AI · Bullish · arXiv – CS AI · Mar 9 · 7/10
🧠

COLD-Steer: Steering Large Language Models via In-Context One-step Learning Dynamics

Researchers introduce COLD-Steer, a training-free framework that enables efficient control of large language model behavior at inference time using just a few examples. The method approximates gradient descent effects without parameter updates, achieving 95% steering effectiveness while using 50 times fewer samples than existing approaches.
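
To make "gradient-like steering without parameter updates" concrete, here is a deliberately simplified stand-in (plain mean-difference steering, our own construction rather than COLD-Steer's one-step learning dynamics): derive a direction from a handful of examples and add it to hidden states at inference, with the weights frozen throughout.

```python
# Stand-in sketch: few-shot, training-free steering via a mean-difference
# direction. This is NOT the paper's method, just the general pattern of
# approximating a weight update's effect with an inference-time edit.

def mean_diff_direction(positive_acts, negative_acts):
    """Direction = mean(activations on target examples) - mean(on baseline)."""
    dim = len(positive_acts[0])
    pos_mean = [sum(a[i] for a in positive_acts) / len(positive_acts) for i in range(dim)]
    neg_mean = [sum(a[i] for a in negative_acts) / len(negative_acts) for i in range(dim)]
    return [p - n for p, n in zip(pos_mean, neg_mean)]

def steer(hidden, direction, strength):
    """Apply the direction at inference time; model weights stay frozen."""
    return [h + strength * d for h, d in zip(hidden, direction)]

# Two target examples and two baseline examples are enough to get a direction:
d = mean_diff_direction([[1.0, 0.0], [1.0, 2.0]], [[0.0, 0.0], [0.0, 1.0]])
```

The sample efficiency comes from needing only enough examples to estimate a direction, not enough to fit parameters.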

AI · Bullish · arXiv – CS AI · Mar 5 · 6/10
🧠

Controllable and explainable personality sliders for LLMs at inference time

Researchers propose Sequential Adaptive Steering (SAS), a new framework for controlling Large Language Model personalities at inference time without retraining. The method uses orthogonalized steering vectors to enable precise, multi-dimensional personality control by adjusting coefficients, validated on Big Five personality traits.
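
The orthogonalization step is what makes the "sliders" independent. A minimal sketch, assuming plain Gram–Schmidt (the paper's exact procedure may differ): project each trait's steering vector against the earlier ones, then mix the orthogonal basis by per-trait coefficients.

```python
# Sketch of orthogonalized multi-trait steering (assumes Gram-Schmidt;
# vectors and coefficients below are illustrative).

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def orthogonalize(vectors):
    """Gram-Schmidt: subtract each new vector's projection onto earlier ones."""
    basis = []
    for v in vectors:
        w = list(v)
        for b in basis:
            coef = dot(w, b) / dot(b, b)
            w = [wi - coef * bi for wi, bi in zip(w, b)]
        basis.append(w)
    return basis

def apply_sliders(hidden, basis, coefficients):
    """hidden + sum_i c_i * v_i -- one independent coefficient per trait."""
    out = list(hidden)
    for c, v in zip(coefficients, basis):
        out = [o + c * vi for o, vi in zip(out, v)]
    return out

basis = orthogonalize([[1.0, 0.0, 0.0], [1.0, 1.0, 0.0]])
# The second vector loses its component along the first, so turning one
# "slider" no longer bleeds into the other trait.
```

With a non-orthogonal basis, raising one trait's coefficient would also shift the model along every correlated trait direction; orthogonalization removes that interference.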

AI · Neutral · arXiv – CS AI · Apr 10 · 6/10
🧠

Steering the Verifiability of Multimodal AI Hallucinations

Researchers have developed a method to control how verifiable AI hallucinations are in multimodal language models by distinguishing between obvious hallucinations (easily detected by humans) and elusive ones (harder to spot). Using a dataset of 4,470 human responses, they created targeted interventions that can fine-tune which types of hallucinations occur, enabling flexible control suited to different security and usability requirements.

AI · Bullish · arXiv – CS AI · Mar 3 · 7/10
🧠

Spectral Attention Steering for Prompt Highlighting

Researchers introduce SEKA and AdaSEKA, new training-free methods for attention steering in AI models that work with memory-efficient implementations like FlashAttention. These techniques enable better prompt highlighting by directly editing key embeddings using spectral decomposition, offering significant performance improvements with lower computational overhead.
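
As a rough intuition for spectral key editing (a heavily simplified toy, not SEKA's algorithm): find a dominant spectral direction of the key matrix, here via power iteration on KᵀK, and push the highlighted tokens' key vectors along it so queries aligned with that direction attend to them more.

```python
# Toy stand-in for spectral key editing: power iteration gives the dominant
# right-singular direction of K; highlighted tokens' keys are nudged along it.
# Not the paper's method -- a simplified illustration of the pattern.

def top_direction(K, iters=50):
    """Power iteration on K^T K: dominant right-singular direction of K."""
    dim = len(K[0])
    v = [1.0] * dim
    for _ in range(iters):
        u = [sum(row[i] * v[i] for i in range(dim)) for row in K]      # u = K v
        v = [sum(K[r][i] * u[r] for r in range(len(K))) for i in range(dim)]  # v = K^T u
        norm = sum(x * x for x in v) ** 0.5
        v = [x / norm for x in v]
    return v

def highlight_keys(K, highlighted, beta):
    """Edit only highlighted tokens' key vectors along the spectral direction,
    raising their attention scores for queries aligned with it."""
    v = top_direction(K)
    return [
        [r + beta * vi for r, vi in zip(row, v)] if t in highlighted else list(row)
        for t, row in enumerate(K)
    ]
```

Editing keys directly, rather than the attention scores, is what keeps this compatible with fused kernels such as FlashAttention, which never materialize the score matrix.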