236 articles tagged with #large-language-models. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.
AI Neutral · arXiv – CS AI · Mar 36/103
🧠 A research paper analyzes test-time scaling in large language models, revealing that longer reasoning chains (CoTs) can reduce training data requirements but may harm performance if the relevant skills aren't present in the training data. The study provides a theoretical framework showing that diverse, relevant, and challenging training tasks optimize test-time scaling performance.
AI Neutral · arXiv – CS AI · Mar 36/104
🧠 Researchers present a survey of adaptive reasoning in large language models, addressing the problem that current LLMs apply uniform reasoning strategies regardless of task complexity. The survey formalizes adaptive reasoning as a control-augmented policy optimization problem and proposes a taxonomy of training-based and training-free approaches for more efficient reasoning allocation.
AI Bullish · arXiv – CS AI · Mar 36/103
🧠 Researchers developed a knowledge graph-guided chain-of-thought framework that uses large language models for disease prediction from electronic health records. The approach outperformed classical baselines and showed strong zero-shot transfer capabilities, with clinicians preferring the AI-generated explanations for their clarity and relevance.
AI Bullish · arXiv – CS AI · Mar 36/104
🧠 Researchers introduce SpotAgent, a new framework that improves AI geo-localization by combining visual interpretation with external tool verification through agentic reasoning. The system addresses limitations of current Large Vision-Language Models, which often make confident but ungrounded predictions when visual cues are sparse or ambiguous.
AI Neutral · arXiv – CS AI · Mar 36/104
🧠 A research study of nine advanced Large Language Models reveals that Large Reasoning Models (LRMs) do not consistently outperform non-reasoning models on Theory of Mind tasks, which assess social cognition abilities. The study found that longer reasoning often hurts performance and that models rely on shortcuts rather than genuine deduction, suggesting that advances in formal reasoning don't transfer to social reasoning tasks.
AI Bullish · arXiv – CS AI · Mar 36/103
🧠 Researchers propose a new medical alignment paradigm for large language models that addresses the shortcomings of current reinforcement learning approaches in high-stakes medical question answering. The framework introduces a multi-dimensional alignment matrix and a unified optimization mechanism to simultaneously optimize correctness, safety, and compliance in medical AI applications.
AI Bullish · arXiv – CS AI · Mar 36/103
🧠 Researchers have developed a novel non-invasive EEG-based brain-computer interface that can decode all 26 letters of the alphabet by translating handwriting-related neural signals into text. The system combines EEG technology with generative AI and large language models to create a more accessible communication solution for individuals with communication impairments.
AI Bearish · arXiv – CS AI · Mar 36/104
🧠 A new research study analyzes how Large Language Models are impacting Wikipedia content and structure, finding approximately 1% influence in certain categories. The research warns of potential risks to AI benchmarks and natural language processing tasks if Wikipedia becomes contaminated by LLM-generated content.
AI Bullish · arXiv – CS AI · Mar 36/103
🧠 Researchers introduced InterSyn, a 1.8M-sample dataset designed to improve Large Multimodal Models' ability to generate interleaved image-text content. The dataset includes a new evaluation framework called SynJudge that measures four key performance metrics, with experiments showing significant improvements even with smaller 25K–50K sample subsets.
AI Bullish · arXiv – CS AI · Mar 36/104
🧠 Researchers demonstrate that Group Relative Policy Optimization (GRPO), traditionally viewed as an on-policy reinforcement learning algorithm, can be reinterpreted as an off-policy algorithm through first-principles analysis. This theoretical result provides new insights for optimizing reinforcement learning applications in large language models and offers principled approaches for off-policy RL algorithm design.
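As background for the group-relative machinery the paper reanalyzes, here is a minimal sketch of GRPO's advantage computation, in which each sampled completion's reward is normalized against the mean and standard deviation of its own sampling group rather than a learned value baseline. The function name is illustrative, and the off-policy reinterpretation itself is not shown:

```python
import statistics

def grpo_advantages(rewards):
    """Group-relative advantages: normalize each sampled completion's
    reward by its own group's mean and std, so GRPO needs no learned
    value function as a baseline."""
    mean = statistics.mean(rewards)
    std = statistics.pstdev(rewards) or 1.0  # guard against zero std
    return [(r - mean) / std for r in rewards]

# Four sampled completions for one prompt, two of which were rewarded.
advantages = grpo_advantages([1.0, 0.0, 1.0, 0.0])
```

The resulting per-completion advantages then weight the policy-gradient update for every token of the corresponding completion.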
AI Bullish · arXiv – CS AI · Mar 36/104
🧠 Researchers propose Concrete Score Distillation (CSD), a new knowledge distillation method that improves the efficiency of large language models by preserving logit information better than traditional softmax-based approaches. CSD demonstrates consistent performance improvements across multiple models including GPT-2, OpenLLaMA, and GEMMA while maintaining training stability.
AI Bullish · arXiv – CS AI · Mar 36/104
🧠 Researchers developed CaCoVID, a reinforcement learning-based algorithm that compresses video tokens for large language models by selecting tokens based on their actual contribution to correct predictions rather than attention scores. The method uses combinatorial policy optimization to reduce computational overhead while maintaining video understanding performance.
AI Bullish · arXiv – CS AI · Mar 36/103
🧠 Researchers propose Online Causal Kalman Filtering for Policy Optimization (KPO) to address high-variance instability in reinforcement learning for large language models. The method uses Kalman filtering to smooth token-level importance sampling ratios, preventing training collapse and achieving superior results on math reasoning tasks.
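The paper's exact filter is not reproduced here; the sketch below applies a generic scalar Kalman filter to a stream of importance sampling ratios, illustrating how filtering damps the spikes that destabilize training. The noise parameters `q` and `r` are illustrative assumptions, not values from the paper:

```python
def kalman_smooth(ratios, q=1e-4, r=0.05):
    """Toy scalar Kalman filter over token-level importance sampling
    ratios: q is process noise, r is observation noise. Spiky ratios
    are pulled toward the running estimate, reducing gradient variance."""
    x, p = ratios[0], 1.0  # initial state estimate and covariance
    out = []
    for z in ratios:
        p = p + q            # predict: covariance grows by process noise
        k = p / (p + r)      # Kalman gain
        x = x + k * (z - x)  # update estimate toward the observation
        p = (1 - k) * p
        out.append(x)
    return out

# A ratio spike at position 2 is strongly attenuated after smoothing.
smoothed = kalman_smooth([1.0, 1.1, 8.0, 0.9, 1.0])
```

Lowering `r` trusts raw observations more (less smoothing); raising it trusts the filtered estimate more.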
AI Bullish · arXiv – CS AI · Mar 26/1015
🧠 Researchers developed Whisper-LLaDA, a diffusion-based large language model for automatic speech recognition that achieves a 12.3% relative improvement over baseline models. The study demonstrates that audio-conditioned embeddings are crucial for accuracy improvements, while plain-text processing without acoustic features fails to enhance performance.
AI Bearish · arXiv – CS AI · Mar 26/1017
🧠 Researchers created CMT-Benchmark, a new dataset of 50 expert-level condensed matter theory problems to evaluate large language models' capabilities in advanced scientific research. The best-performing model (GPT-5) solved only 30% of the problems, with the average across 17 models at just 11.4%, highlighting significant gaps in current AI's physical reasoning abilities.
AI Neutral · arXiv – CS AI · Mar 27/1018
🧠 Researchers analyzed how large language models express moral judgments when prompted to role-play different personas. The study found that Claude models are the most morally robust, while larger models within a family tend to be more susceptible to moral shifts through persona conditioning.
AI Neutral · arXiv – CS AI · Mar 26/1010
🧠 Researchers introduce MERaLiON2-Omni (Alpha), a 10B-parameter multilingual AI model designed for Southeast Asia that combines perception and reasoning capabilities. The study reveals an efficiency-stability paradox in which reasoning enhances abstract tasks but causes instability in basic sensory processing such as audio timing and visual interpretation.
AI Bullish · arXiv – CS AI · Mar 26/1013
🧠 Researchers introduce RF-Agent, a framework that uses Large Language Models as agents to automatically design reward functions for control tasks through Monte Carlo Tree Search. The method improves upon existing approaches by better utilizing historical feedback and enhancing search efficiency across 17 diverse low-level control tasks.
AI Bullish · arXiv – CS AI · Mar 27/1012
🧠 Researchers have introduced Hello-Chat, an end-to-end audio language model designed to create more realistic and emotionally resonant AI conversations. The model addresses the robotic quality of existing Large Audio Language Models by using real-life conversation data, achieving breakthrough performance in prosodic naturalness and emotional alignment.
AI Bullish · arXiv – CS AI · Mar 26/1012
🧠 Researchers developed TRIZ-RAGNER, a retrieval-augmented large language model framework that improves patent analysis and systematic innovation by extracting technical contradictions from patent documents. The system achieved an F1-score of 84.2%, outperforming existing methods by 7.3 percentage points through better integration of domain-specific knowledge.
AI Bullish · arXiv – CS AI · Mar 27/1016
🧠 Researchers have developed MPU, a privacy-preserving framework that enables machine unlearning for large language models without requiring servers to share parameters or clients to share data. The framework uses perturbed model copies and harmonic denoising to achieve performance comparable to non-private methods, with most algorithms showing less than 1% performance degradation.
AI Neutral · arXiv – CS AI · Mar 27/1010
🧠 Research identifies sycophancy as a key alignment failure in large language models, where AI systems favor user-affirming responses over critical engagement. The study demonstrates that converting user statements into questions before answering significantly reduces sycophantic behavior, offering a practical mitigation strategy for AI developers and users.
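A minimal sketch of the statement-to-question rewrite described above; the function name and the exact question phrasing are hypothetical illustrations, not taken from the paper:

```python
def restate_as_question(user_claim):
    """Sycophancy mitigation sketch: instead of asking the model to
    endorse a user's claim, restate the claim as a neutral question so
    the model evaluates it rather than affirms it."""
    claim = user_claim.rstrip(".!? ")  # drop trailing punctuation
    return f'Is the following statement accurate? "{claim}"'

prompt = restate_as_question("My proof that P = NP is correct!")
```

The rewritten prompt would then be sent to the model in place of the original first-person claim.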
AI Bullish · arXiv – CS AI · Mar 26/1013
🧠 Researchers propose FedRot-LoRA, a new framework that solves rotational misalignment issues in federated learning for large language models. The solution uses orthogonal transformations to align client updates before aggregation, improving training stability and performance without increasing communication costs.
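The paper's precise alignment procedure is not reproduced here; the sketch below uses orthogonal Procrustes alignment (SVD of the cross-covariance) as one standard way to rotate a client update into a shared reference frame before averaging. All names and shapes are illustrative assumptions:

```python
import numpy as np

def align_update(client_update, reference):
    """Orthogonal Procrustes sketch: find the orthogonal matrix R
    minimizing ||client_update @ R - reference||_F and return the
    rotated update, so client factors share a frame before averaging."""
    u, _, vt = np.linalg.svd(client_update.T @ reference)
    rot = u @ vt  # nearest orthogonal matrix to the cross-covariance
    return client_update @ rot

# A client update that is the reference expressed in a rotated basis
# is mapped back onto the reference frame exactly.
rng = np.random.default_rng(0)
ref = rng.standard_normal((8, 4))
theta = np.pi / 3
basis_rot = np.array([[np.cos(theta), -np.sin(theta), 0, 0],
                      [np.sin(theta),  np.cos(theta), 0, 0],
                      [0, 0, 1, 0],
                      [0, 0, 0, 1]])
client = ref @ basis_rot.T
aligned = align_update(client, ref)
```

Because the transformation is orthogonal, it changes only the basis of the update, not its norm, which is why it can be applied before aggregation without extra communication.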
AI Bullish · arXiv – CS AI · Feb 276/105
🧠 Researchers introduce SoPE (Spherical Coordinate-based Positional Embedding), a new method that enhances 3D Large Vision-Language Models by mapping point-cloud data into spherical coordinate space. This approach overcomes limitations of the existing Rotary Position Embedding (RoPE) by better preserving spatial structures and directional variations in 3D multimodal understanding.
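As background for the spherical mapping, a minimal Cartesian-to-spherical conversion is sketched below; SoPE's actual positional embedding built on top of such coordinates is not reproduced:

```python
import math

def to_spherical(x, y, z):
    """Convert a Cartesian point-cloud coordinate to spherical
    (radius, polar angle from +z, azimuth in the x-y plane)."""
    radius = math.sqrt(x * x + y * y + z * z)
    theta = math.acos(z / radius) if radius else 0.0  # polar angle
    phi = math.atan2(y, x)                            # azimuth
    return radius, theta, phi

# A point on the +y axis sits at radius 1, 90 degrees from +z,
# at azimuth 90 degrees.
radius, theta, phi = to_spherical(0.0, 1.0, 0.0)
```

Angles are naturally periodic quantities, which is what makes them a convenient substrate for rotary-style embeddings.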
AI Bullish · arXiv – CS AI · Feb 276/106
🧠 Researchers introduce RLHFless, a serverless computing framework for Reinforcement Learning from Human Feedback (RLHF) that addresses resource inefficiencies in training large language models. The system achieves up to a 1.35x speedup and a 44.8% cost reduction compared to existing solutions by dynamically adapting to resource demands and optimizing workload distribution.