y0news

#energy-based-models News & Analysis

6 articles tagged with #energy-based-models. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.

6 articles
AI · Bullish · arXiv – CS AI · Mar 46/102

CoBELa: Steering Transparent Generation via Concept Bottlenecks on Energy Landscapes

Researchers introduce CoBELa, a new AI framework for interpretable image generation that uses concept bottlenecks on energy landscapes to enable transparent, controllable synthesis without requiring decoder retraining. The system achieves strong performance on benchmark datasets while allowing users to compositionally manipulate concepts through energy function combinations.
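The summary's key mechanism is compositional control through energy function combination: the energy of a concept combination is taken as the sum of per-concept energies, and generation favors low combined energy. A minimal sketch of that idea, with illustrative hand-written concept energies (not CoBELa's learned ones):

```python
import numpy as np

# Illustrative concept energies over a toy 1-D "image" vector in [0, 1].
# These are hypothetical stand-ins, not the paper's learned functions.

def energy_bright(x):
    # Lower energy for brighter samples
    return 1.0 - float(x.mean())

def energy_centered(x):
    # Lower energy when mass concentrates near the middle of the vector
    mid = len(x) // 2
    weights = np.abs(np.arange(len(x)) - mid) / mid
    return float((weights * x).sum() / (x.sum() + 1e-8))

def composed_energy(x, concepts):
    # Compositional control: energy of a concept combination is the
    # sum of the individual concept energies
    return sum(E(x) for E in concepts)

# Crude "generation": keep the candidate with the lowest combined energy
rng = np.random.default_rng(0)
candidates = [rng.random(16) for _ in range(256)]
best = min(candidates,
           key=lambda x: composed_energy(x, [energy_bright, energy_centered]))
```

A real system would minimize the combined energy with gradient-based sampling rather than rejection over random candidates; the additive composition is the point being sketched.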

AI · Bullish · arXiv – CS AI · Mar 47/103

Mitigating Over-Refusal in Aligned Large Language Models via Inference-Time Activation Energy

Researchers introduce Energy Landscape Steering (ELS), a new framework that reduces false refusals in AI safety-aligned language models without compromising security. The method uses an external Energy-Based Model to dynamically guide model behavior during inference, improving compliance from 57.3% to 82.6% on safety benchmarks.
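The core move described above is inference-time steering: an external energy model scores a hidden activation, and the activation is nudged downhill on that energy before decoding continues. A hedged sketch with a toy quadratic energy (the direction `w` and the energy form are assumptions, not the paper's EBM):

```python
import numpy as np

def energy(h, w):
    # Hypothetical external energy: quadratic penalty pulling the
    # activation h toward a "compliant" direction w
    return 0.5 * float((h - w) @ (h - w))

def steer(h, w, lr=0.1, steps=50):
    # Gradient descent on the activation at inference time;
    # grad of 0.5 * ||h - w||^2 w.r.t. h is (h - w)
    for _ in range(steps):
        h = h - lr * (h - w)
    return h

rng = np.random.default_rng(1)
h0 = rng.normal(size=8)   # stand-in for a hidden activation
w = np.ones(8)
h1 = steer(h0, w)         # steered activation has lower energy than h0
```

The base model's weights are untouched; only the activation passing through is adjusted, which is what lets such methods run without retraining.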

AI · Bullish · OpenAI News · Nov 77/107

Learning concepts with energy functions

Researchers developed an energy-based AI model that can learn spatial concepts like 'near' and 'above' from just five demonstrations using 2D point sets. The model demonstrates cross-domain transfer capabilities, applying concepts learned in 2D particle environments to solve 3D physics-based robotics tasks.
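The idea of a spatial concept as an energy function can be sketched very simply: 'near' assigns low energy to point pairs that satisfy the relation, and a threshold fitted from a handful of demonstrations decides membership. This is an illustrative hand-coded energy; the actual model learns it with a neural network.

```python
import numpy as np

def near_energy(a, b):
    # Energy of the concept "near": distance between two 2-D points;
    # low energy means the relation holds (hand-coded for illustration)
    return float(np.linalg.norm(np.asarray(a) - np.asarray(b)))

# Five positive demonstrations of "near", echoing the few-shot setting
demos = [((0, 0), (0.1, 0.1)), ((1, 1), (1.2, 0.9)),
         ((2, 0), (2.1, 0.2)), ((0, 2), (0.1, 1.8)), ((3, 3), (2.9, 3.1))]

# Fit a membership threshold from the demonstrations (1.5x margin assumed)
threshold = max(near_energy(a, b) for a, b in demos) * 1.5

def is_near(a, b):
    return near_energy(a, b) <= threshold
```

Because the concept lives in the energy function rather than in a fixed classifier head, the same function can in principle score states from a different domain, which is the transfer property the summary highlights.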

AI · Bullish · arXiv – CS AI · Mar 36/102

Spilled Energy in Large Language Models

Researchers developed a training-free method to detect AI hallucinations by reinterpreting LLM output as Energy-Based Models and tracking 'energy spills' during text generation. The approach successfully identifies factual errors and biases across multiple state-of-the-art models including LLaMA, Mistral, and Gemma without requiring additional training or probe classifiers.
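The reinterpretation the summary describes can be sketched as scoring each generated token by a per-token energy (here taken as the negative logit of the chosen token, an assumption for illustration) and flagging tokens whose energy spikes well above the sequence baseline:

```python
import numpy as np

def token_energies(logits, token_ids):
    # Treat the negative logit of each chosen token as its energy
    # (illustrative stand-in for the paper's energy signal).
    # logits: (T, V) array; token_ids: length-T chosen token per step
    return np.array([-logits[t, token_ids[t]] for t in range(len(token_ids))])

def spill_flags(energies, z=2.0):
    # Flag tokens whose energy sits more than z standard deviations
    # above the sequence mean -- a crude "energy spill" detector
    mu, sigma = energies.mean(), energies.std() + 1e-8
    return (energies - mu) / sigma > z

# Toy trace: nine confident tokens, one with a sharply lower logit
logits = np.zeros((10, 3))
logits[:, 0] = -1.0
logits[9, 0] = -10.0
token_ids = np.zeros(10, dtype=int)
flags = spill_flags(token_energies(logits, token_ids))
```

Because the signal is read off quantities the model already produces, no extra training or probe classifier is needed, which matches the training-free claim above.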

AI · Bullish · OpenAI News · Mar 216/104

Implicit generation and generalization methods for energy-based models

Researchers report progress in training energy-based models (EBMs) with improved stability and scalability, yielding better sample quality and generalization. The models generate samples competitive with GANs while retaining the mode coverage of likelihood-based models, drawing samples through iterative refinement.
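The "iterative refinement" mentioned above is typically Langevin dynamics: samples start from noise and are repeatedly nudged down the energy gradient with injected noise. A minimal sketch on a toy quadratic energy (the energy, step size, and iteration count are illustrative, not the paper's setup):

```python
import numpy as np

def grad_energy(x):
    # Toy energy E(x) = 0.5 * ||x||^2, so dE/dx = x; refinement pulls
    # samples toward the low-energy region around the origin
    return x

def langevin_sample(x, step=0.1, noise=0.01, iters=200, rng=None):
    # Langevin-style iterative refinement:
    # x <- x - (step/2) * dE/dx + noise * N(0, I)
    rng = rng or np.random.default_rng(0)
    for _ in range(iters):
        x = x - 0.5 * step * grad_energy(x) + noise * rng.normal(size=x.shape)
    return x

x0 = np.full(4, 5.0)      # start far from the low-energy region
x = langevin_sample(x0)   # refined sample ends up much closer to it
```

Unlike a GAN's single generator pass, sample quality here comes from running the refinement chain longer, which is one way EBMs trade compute for coverage.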