y0news

#autoregressive News & Analysis

12 articles tagged with #autoregressive. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.

AI · Bullish · arXiv – CS AI · 4d ago · 7/10

Introspective Diffusion Language Models

Researchers introduce Introspective Diffusion Language Models (I-DLM), a new approach that combines the parallel generation speed of diffusion models with the quality of autoregressive models by having the model verify its own outputs during generation. I-DLM matches the performance of conventional large language models while delivering 3x higher throughput, potentially reshaping how AI systems are deployed at scale.
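The self-verification idea can be rendered as a draft-and-verify loop: propose every position in parallel, keep what the verifier accepts, and redraft only the rejected slots. This is a minimal toy sketch under that assumption; `propose` and `verify` are illustrative stand-ins, not the paper's actual architecture.

```python
def draft_and_verify(propose, verify, length, max_rounds=10):
    """Propose all positions in parallel, let a verifier accept or
    reject each one, and redraft only the still-empty slots."""
    seq = [None] * length
    for _ in range(max_rounds):
        draft = propose(seq)  # parallel proposal for every position
        for i, tok in enumerate(draft):
            if seq[i] is None and verify(i, tok):
                seq[i] = tok  # accepted by the introspective check
        if all(t is not None for t in seq):
            break
    return seq

# Toy proposer: proposes token r everywhere on round r.
def make_toy_proposer():
    state = {"round": -1}
    def propose(seq):
        state["round"] += 1
        return [state["round"]] * len(seq)
    return propose

# Toy verifier: position i only accepts token i, so one position
# is locked in per round until the sequence is complete.
out = draft_and_verify(make_toy_proposer(), lambda i, tok: tok == i, 4)
assert out == [0, 1, 2, 3]
```

The payoff over plain autoregression is that every surviving position was produced in a parallel step, with the verifier deciding what needs another pass.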

AI · Neutral · arXiv – CS AI · Mar 26 · 7/10

Evaluation of Large Language Models via Coupled Token Generation

Researchers propose a new method called coupled autoregressive generation to evaluate large language models more efficiently by controlling for randomness in their responses. The study shows this approach can reduce evaluation samples by up to 75% while revealing that current model rankings may be confounded by inherent randomness in generation processes.
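"Coupled" generation here typically means sharing the random draws across models, so two models' samples differ only where the models themselves differ, not because of sampling noise. A minimal sketch of that idea using inverse-CDF sampling with a shared uniform draw (the function names are illustrative, not the paper's API):

```python
import random

def inv_cdf_sample(dist, u):
    """Sample from a categorical distribution [(token, prob), ...]
    by inverting its CDF at the uniform draw u."""
    cum = 0.0
    for tok, p in dist:
        cum += p
        if u <= cum:
            return tok
    return dist[-1][0]  # guard against floating-point shortfall

def coupled_sample(dist_a, dist_b, u):
    """Sample both models with the SAME uniform draw: any disagreement
    is then attributable to the models, not to randomness."""
    return inv_cdf_sample(dist_a, u), inv_cdf_sample(dist_b, u)

# Identical models always agree under coupling.
dist = [("a", 0.5), ("b", 0.3), ("c", 0.2)]
rng = random.Random(0)
assert all(x == y for x, y in
           (coupled_sample(dist, dist, rng.random()) for _ in range(100)))
```

Because shared draws remove sampling variance from model comparisons, fewer evaluation samples are needed to separate two models, which is the efficiency gain the summary describes.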

🧠 Llama
AI · Bullish · arXiv – CS AI · Mar 5 · 6/10

Separators in Enhancing Autoregressive Pretraining for Vision Mamba

Researchers introduce STAR, a new autoregressive pretraining method for Vision Mamba that uses separators to quadruple input sequence length while maintaining image dimensions. The STAR-B model achieved 83.5% accuracy on ImageNet-1k, demonstrating improved performance through better utilization of long-range dependencies in computer vision tasks.
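One way to read the separator trick: inserting extra separator tokens between groups of patch tokens lengthens the autoregressive sequence the model trains on without resizing the image or its patches. A toy sketch of that transformation (the exact STAR placement scheme may differ):

```python
def insert_separators(patch_tokens, group_size, sep="<sep>"):
    """Append a separator after every `group_size` patch tokens,
    stretching the training sequence while leaving the underlying
    image patches unchanged."""
    out = []
    for i, tok in enumerate(patch_tokens, start=1):
        out.append(tok)
        if i % group_size == 0:
            out.append(sep)
    return out

seq = insert_separators(["p1", "p2", "p3", "p4"], group_size=2)
assert seq == ["p1", "p2", "<sep>", "p3", "p4", "<sep>"]
```

Longer sequences give the autoregressive objective more prediction steps per image, which is how the summary's "better utilization of long-range dependencies" comes about.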

AI · Bullish · arXiv – CS AI · Mar 5 · 6/10

CubeComposer: Spatio-Temporal Autoregressive 4K 360° Video Generation from Perspective Video

CubeComposer is a new AI model that generates high-quality 4K 360-degree panoramic videos from regular perspective videos using a novel spatio-temporal autoregressive diffusion approach. The technology addresses computational limitations of existing methods by decomposing videos into cubemap representations, enabling native 4K resolution output for VR applications.
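Cubemap decomposition maps each viewing direction on the 360° sphere onto one of six square faces, turning one huge panoramic frame into six manageable perspective-like frames. A minimal face-selection sketch using standard cubemap math (not CubeComposer's actual code):

```python
def cube_face(x, y, z):
    """Pick the cubemap face for a 3D view direction: the axis with
    the largest absolute component wins, signed by its direction."""
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:
        return "+x" if x > 0 else "-x"
    if ay >= az:
        return "+y" if y > 0 else "-y"
    return "+z" if z > 0 else "-z"

assert cube_face(1.0, 0.2, 0.1) == "+x"   # mostly rightward
assert cube_face(0.1, -2.0, 0.3) == "-y"  # mostly downward
assert cube_face(0.0, 0.0, -1.0) == "-z"  # straight back
```

Working per face sidesteps the quadratic cost of attending over a full equirectangular 4K frame, which is the computational limitation the summary mentions.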

AI · Bullish · arXiv – CS AI · Mar 4 · 7/10

CoDAR: Continuous Diffusion Language Models are More Powerful Than You Think

Researchers propose CoDAR, a new continuous diffusion language model framework that addresses key bottlenecks in token rounding through a two-stage approach combining continuous diffusion with an autoregressive decoder. The model demonstrates substantial improvements in generation quality over existing latent diffusion methods and becomes competitive with discrete diffusion language models.
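The two-stage shape of such a pipeline can be sketched as: refine a continuous latent through several denoising steps, then hand the result to an autoregressive decoder for the final rounding into discrete tokens (the step where naive rounding is lossy). All callables below are toy stand-ins, not CoDAR's components:

```python
def two_stage_generate(denoise_step, ar_decode, latent, steps=8):
    """Stage 1: iterative continuous denoising of the latent.
    Stage 2: a decoder rounds the refined latent into tokens."""
    for _ in range(steps):
        latent = denoise_step(latent)
    return ar_decode(latent)

# Toy: each denoise step halves the distance to the nearest integer,
# and the "decoder" then rounds each coordinate to a token id.
denoise = lambda xs: [x + 0.5 * (round(x) - x) for x in xs]
decode = lambda xs: [round(x) for x in xs]
assert two_stage_generate(denoise, decode, [0.9, 2.2]) == [1, 2]
```

The design point is the division of labor: diffusion handles global, parallel refinement in continuous space, while the decoder resolves the discrete ambiguity that pure rounding handles poorly.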

AI · Bullish · arXiv – CS AI · Mar 4 · 7/10

LEDOM: Reverse Language Model

Researchers have developed LEDOM, an open-source reverse autoregressive language model trained right-to-left rather than in the traditional left-to-right order. The model demonstrates unique capabilities such as abductive inference and question synthesis, and when combined with forward models through 'Reverse Reward' scoring, achieves performance gains of up to 15% on mathematical reasoning tasks.

AI · Bullish · Apple Machine Learning · Mar 25 · 6/10

Thinking into the Future: Latent Lookahead Training for Transformers

Researchers propose Latent Lookahead Training, a new method for training transformer language models that allows exploration of multiple token continuations rather than committing to single tokens at each step. The paper was accepted at ICLR 2026's Workshop on Latent & Implicit Thinking, addressing limitations in current autoregressive language model training approaches.
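One way to picture "not committing to single tokens": keep the top-k candidate continuations at each step and renormalize them into a soft training target. This is a toy rendering of the lookahead idea under that assumption, not the paper's actual loss:

```python
import math

def lookahead_soft_targets(candidate_logprobs, k=3):
    """Keep the top-k candidate next tokens and renormalize their
    probabilities into a soft target, rather than training against
    a single hard token."""
    top = sorted(candidate_logprobs.items(), key=lambda kv: -kv[1])[:k]
    z = sum(math.exp(lp) for _, lp in top)
    return {tok: math.exp(lp) / z for tok, lp in top}

targets = lookahead_soft_targets({"the": -1.0, "a": -1.5, "dog": -4.0}, k=2)
assert set(targets) == {"the", "a"}
assert abs(sum(targets.values()) - 1.0) < 1e-9
assert targets["the"] > targets["a"]
```

Training against a distribution over plausible continuations, instead of one token, is what lets the model "think into the future" before committing.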

AI · Bullish · arXiv – CS AI · Mar 17 · 6/10

SyncSpeech: Efficient and Low-Latency Text-to-Speech based on Temporal Masked Transformer

Researchers introduce SyncSpeech, a new text-to-speech model that combines autoregressive and non-autoregressive approaches using a Temporal Mask Transformer architecture. The model achieves 5.8x lower first-packet latency and 8.8x improved real-time performance while maintaining comparable speech quality to existing models.
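A common way such AR/NAR hybrids work is chunked generation: autoregressive across chunks, parallel within each chunk, so the first chunk can stream out before later chunks exist. A toy sketch of that pattern (an assumption about the hybrid structure, not SyncSpeech's actual architecture):

```python
def chunked_generate(predict_chunk, n_chunks):
    """Semi-autoregressive sketch: each chunk conditions on everything
    generated so far (AR across chunks) but is produced all at once
    (parallel within a chunk). Streaming the first chunk immediately
    is what keeps first-packet latency low."""
    out = []
    for _ in range(n_chunks):
        chunk = predict_chunk(out)  # conditions on the full prefix
        out.extend(chunk)           # whole chunk emitted at once
    return out

# Toy predictor: each chunk is the next two integers.
predict = lambda prefix: [len(prefix), len(prefix) + 1]
assert chunked_generate(predict, 3) == [0, 1, 2, 3, 4, 5]
```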

AI · Bullish · arXiv – CS AI · Mar 11 · 6/10

Latent-DARM: Bridging Discrete Diffusion And Autoregressive Models For Reasoning

Researchers introduce Latent-DARM, a framework that bridges discrete diffusion language models and autoregressive models to improve multi-agent AI reasoning capabilities. The system achieved significant improvements on reasoning benchmarks, increasing accuracy from 27% to 36% on DART-5 while using less than 2.2% of the token budget of state-of-the-art models.

AI · Bullish · arXiv – CS AI · Mar 9 · 6/10

DEX-AR: A Dynamic Explainability Method for Autoregressive Vision-Language Models

Researchers developed DEX-AR, a new explainability method for autoregressive Vision-Language Models that generates 2D heatmaps to understand how these AI systems make decisions. The method addresses challenges in interpreting modern VLMs by analyzing token-by-token generation and visual-textual interactions, showing improved performance across multiple benchmarks.
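A 2D heatmap for a generated answer can be sketched by averaging each generated token's attention over the image patches and reshaping the result into the patch grid. This is a deliberately minimal stand-in; real explainability methods like the one described weight and propagate attention far more carefully:

```python
def token_heatmap(attn_rows, grid_hw):
    """Average per-token attention over image patches and reshape it
    into a 2D (height x width) patch grid."""
    n = len(attn_rows)
    avg = [sum(col) / n for col in zip(*attn_rows)]
    h, w = grid_hw
    return [avg[r * w:(r + 1) * w] for r in range(h)]

# Two generated tokens attending over a 2x2 grid of image patches.
rows = [[0.4, 0.1, 0.3, 0.2],
        [0.2, 0.3, 0.1, 0.4]]
hm = token_heatmap(rows, (2, 2))
assert abs(hm[0][0] - 0.3) < 1e-9 and abs(hm[1][1] - 0.3) < 1e-9
```

Doing this token by token, as the summary describes, shows which image regions drove each step of the answer rather than only the answer as a whole.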

๐Ÿข Perplexity
AI · Bullish · arXiv – CS AI · Feb 27 · 6/10

Autoregressive Visual Decoding from EEG Signals

Researchers developed AVDE, a lightweight framework for decoding visual information from EEG brain signals using autoregressive generation. The system outperforms existing methods while using only 10% of the parameters, potentially advancing practical brain-computer interface applications.