9 articles tagged with #llm-reliability. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.
AI · Bullish · arXiv – CS AI · Apr 14 · 7/10
🧠 CircuitSynth is a neuro-symbolic framework that addresses hallucinations and logical inconsistencies in LLM-generated synthetic data by combining probabilistic decision diagrams with optimization mechanisms to enforce hard constraints and distributional guarantees. The approach achieves 100% schema validity across complex benchmarks while outperforming existing methods in coverage, representing a significant advancement in reliable synthetic data generation for machine learning applications.
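The paper's probabilistic decision diagrams are not reproduced here, but the guarantee it targets, every emitted record satisfying a hard schema, can be illustrated with a much simpler validate-and-resample loop. Everything in the sketch below (the schema fields, the mock generator) is a hypothetical stand-in rather than CircuitSynth's actual method.

```python
# Illustrative sketch only: CircuitSynth uses probabilistic decision diagrams;
# this shows the simpler idea of enforcing hard schema constraints on
# generated samples by validate-and-resample.
import random

# Hypothetical schema: field name -> validity check.
SCHEMA = {
    "age": lambda v: isinstance(v, int) and 0 <= v <= 120,
    "diagnosis": lambda v: v in {"flu", "cold", "covid"},
    "temperature_c": lambda v: isinstance(v, float) and 30.0 <= v <= 45.0,
}

def mock_generator():
    """Stand-in for an LLM sampling a synthetic record (may violate the schema)."""
    return {
        "age": random.randint(-5, 130),
        "diagnosis": random.choice(["flu", "cold", "covid", "unknown"]),
        "temperature_c": round(random.uniform(25.0, 50.0), 1),
    }

def is_valid(record):
    return all(check(record.get(field)) for field, check in SCHEMA.items())

def generate_valid(n, max_tries=10_000):
    """Rejection-sample until n schema-valid records are collected."""
    out = []
    for _ in range(max_tries):
        rec = mock_generator()
        if is_valid(rec):
            out.append(rec)
            if len(out) == n:
                break
    return out

if __name__ == "__main__":
    data = generate_valid(100)
    assert all(is_valid(r) for r in data)  # 100% schema validity by construction
    print(f"kept {len(data)} valid records")
```

Rejection sampling guarantees validity but wastes generations; the appeal of constraint-aware methods like the one summarized above is getting the same hard guarantee without discarding samples.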
AI · Bullish · arXiv – CS AI · Apr 10 · 7/10
🧠 Researchers developed a weak supervision framework to detect hallucinations in large language models by distilling grounding signals into transformer representations during training. Using substring matching, sentence embeddings, and LLM judges, they created a 15,000-sample dataset and trained five probing classifiers that achieve hallucination detection from internal activations alone at inference time, eliminating the need for external verification systems.
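As a rough illustration of what "detection from internal activations alone" means, here is a minimal probing-classifier sketch: a logistic-regression probe trained on vectors standing in for transformer hidden states, with labels playing the role of the weak grounding signal. The feature distribution, probe choice, and dimensions are illustrative assumptions, not the paper's setup.

```python
# Minimal probing-classifier sketch: a logistic-regression probe trained on
# synthetic stand-ins for hidden-state activations, labeled by a weak
# grounding signal. Illustrative only; the paper trains five probes on real
# transformer representations.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, hidden_dim = 15_000, 768          # dataset size mirrors the summary

# Pretend activations: hallucinated spans drawn from a slightly shifted distribution.
labels = rng.integers(0, 2, size=n_samples)  # 1 = hallucinated, 0 = grounded
activations = rng.normal(size=(n_samples, hidden_dim)) + 0.3 * labels[:, None]

X_train, X_test, y_train, y_test = train_test_split(
    activations, labels, test_size=0.2, random_state=0
)

probe = LogisticRegression(max_iter=1000)
probe.fit(X_train, y_train)
print(f"probe accuracy on held-out activations: {probe.score(X_test, y_test):.3f}")
```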
AI · Bullish · arXiv – CS AI · Apr 10 · 7/10
🧠 Researchers propose SciDC, a method that constrains large language model outputs using subject-specific scientific rules to reduce hallucinations and improve reliability. The approach demonstrates 12% average accuracy improvements across domain tasks including drug formulation, clinical diagnosis, and chemical synthesis planning.
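A hedged sketch of the general pattern, generate, check against domain rules, then revise, is shown below; `call_llm`, the rule list, and the revision prompt are all placeholders for illustration, not SciDC's actual implementation.

```python
# Illustrative sketch of rule-constrained generation: draft an answer, check
# it against domain rules, and re-prompt with any violations. `call_llm` is a
# placeholder, not the SciDC implementation.

def call_llm(prompt: str) -> str:
    """Placeholder for an actual LLM call."""
    if "Revise" in prompt:
        return "Administer 40 mg/kg of compound X."
    return "Administer 500 mg/kg of compound X."

# Hypothetical domain rules: (description, check that returns True when satisfied).
RULES = [
    ("the dose must not exceed 50 mg/kg",
     lambda text: "500 mg/kg" not in text),
]

def constrained_answer(question: str, max_rounds: int = 3) -> str:
    answer = call_llm(question)
    for _ in range(max_rounds):
        violations = [name for name, ok in RULES if not ok(answer)]
        if not violations:
            return answer
        # Feed the violated rules back into the prompt and try again.
        answer = call_llm(f"{question}\nRevise so that: {'; '.join(violations)}")
    return answer

print(constrained_answer("Propose a dosing plan for compound X."))
```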
AI · Bearish · arXiv – CS AI · Apr 10 · 7/10
🧠 Researchers discovered that GPT-4o exhibits significant daily and weekly performance fluctuations when solving identical tasks under fixed conditions, with periodic variability accounting for approximately 20% of total variance. This finding fundamentally challenges the widespread assumption that LLM performance is time-invariant and raises critical concerns about the reliability and reproducibility of research utilizing large language models.
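One generic way to arrive at a "share of variance attributable to a periodic factor" figure is a between-group versus total variance decomposition over repeated runs; the sketch below does this on simulated accuracy scores grouped by weekday. The paper's exact estimation procedure may differ, and the numbers here are synthetic.

```python
# Hedged sketch: estimate how much run-to-run variance in accuracy is
# attributable to a periodic factor (here, day of week) via a between-group /
# total variance ratio. Data is simulated; not the paper's methodology.
import numpy as np

rng = np.random.default_rng(1)
days = np.repeat(np.arange(7), 50)                 # 50 runs per weekday
day_effect = rng.normal(0, 0.02, size=7)[days]     # periodic component
noise = rng.normal(0, 0.04, size=days.size)        # everything else
accuracy = 0.80 + day_effect + noise

grand_mean = accuracy.mean()
group_means = np.array([accuracy[days == d].mean() for d in range(7)])
between_var = ((group_means - grand_mean) ** 2 * 50).sum() / (accuracy.size - 1)
total_var = accuracy.var(ddof=1)
print(f"share of variance explained by weekday: {between_var / total_var:.1%}")
```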
AI · Bullish · arXiv – CS AI · Mar 3 · 7/10
🧠 Researchers introduce HalluGuard, a new framework that identifies and addresses both data-driven and reasoning-driven hallucinations in Large Language Models. The system achieved state-of-the-art performance across 10 benchmarks and 9 LLM backbones, offering a unified approach to improve AI reliability in critical domains like healthcare and law.
AI · Neutral · arXiv – CS AI · 4d ago · 6/10
🧠 Researchers propose a conformal prediction framework for large language models that uses internal neural representations rather than surface-level outputs to assess reliability and uncertainty. The Layer-Wise Information scoring method improves prediction validity under distribution shift while maintaining competitive performance, addressing a critical challenge in deploying LLMs where traditional uncertainty signals become unreliable.
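For readers unfamiliar with the conformal machinery, the sketch below shows plain split-conformal calibration: a threshold is set on calibration-set nonconformity scores so that test-time answers exceeding it are flagged as unreliable. The gamma-distributed scores are a stand-in; the paper's contribution is the Layer-Wise Information score itself, which is not reproduced here.

```python
# Minimal split-conformal sketch: calibrate a threshold on nonconformity
# scores, then flag test-time answers whose score exceeds it. The score here
# is a random stand-in, not the paper's Layer-Wise Information score.
import numpy as np

rng = np.random.default_rng(2)
alpha = 0.1                                   # target miscoverage rate

# Calibration set: nonconformity scores for answers known to be correct.
cal_scores = rng.gamma(shape=2.0, scale=1.0, size=500)

# Split-conformal quantile with the finite-sample correction.
n = cal_scores.size
q_level = np.ceil((n + 1) * (1 - alpha)) / n
threshold = np.quantile(cal_scores, min(q_level, 1.0))

# At inference, accept an answer only if its score falls under the threshold.
test_scores = rng.gamma(shape=2.0, scale=1.0, size=10)
accepted = test_scores <= threshold
print(f"threshold={threshold:.2f}, accepted {accepted.sum()}/10 test answers")
```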
AI · Neutral · arXiv – CS AI · 4d ago · 6/10
🧠 Researchers present Deliberative Searcher, a framework that enhances large language model reliability by combining certainty calibration with retrieval-based search for question answering. The system uses reinforcement learning with soft reliability constraints to improve alignment between model confidence and actual correctness, producing more trustworthy outputs.
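The confidence-correctness alignment the summary refers to is typically quantified with a calibration metric; the sketch below computes expected calibration error (ECE) over binned confidences as an illustration of the quantity being targeted, not of the paper's reinforcement-learning training loop.

```python
# Sketch of measuring confidence/correctness alignment with expected
# calibration error (ECE). Illustrates the quantity being optimized for,
# not Deliberative Searcher's training procedure.
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Average |accuracy - confidence| over equal-width confidence bins."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap
    return ece

# Toy example: an overconfident model.
conf = [0.9, 0.95, 0.8, 0.85, 0.99, 0.7]
hit = [1, 0, 1, 0, 1, 1]
print(f"ECE = {expected_calibration_error(conf, hit):.3f}")
```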
AI · Neutral · arXiv – CS AI · Mar 3 · 6/10
🧠 Researchers introduce FaithCoT-Bench, the first comprehensive benchmark for detecting unfaithful Chain-of-Thought reasoning in large language models. The benchmark includes over 1,000 expert-annotated trajectories across four domains and evaluates eleven detection methods, revealing significant challenges in identifying unreliable AI reasoning processes.
AI · Bearish · arXiv – CS AI · Feb 27 · 6/10
🧠 Researchers analyzed the factual accuracy of Chinese web information systems, comparing traditional search engines, standalone LLMs, and AI overviews using 12,161 real-world queries. The study found substantial differences in factual accuracy across systems and estimated potential misinformation exposure for Chinese users.