y0news

#llm News & Analysis

954 articles tagged with #llm. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.

AI × Crypto · Bullish · Hugging Face Blog · Aug 2 · 7/10
🤖

Towards Encrypted Large Language Models with FHE

The article discusses the development of encrypted large language models using Fully Homomorphic Encryption (FHE) technology. This approach would allow AI models to process data while keeping it encrypted, potentially addressing privacy concerns in AI applications.

AI · Bullish · Hugging Face Blog · Jul 18 · 7/10
🧠

Llama 2 is here - get it on Hugging Face

The article appears to announce the release of Llama 2, Meta's open-source large language model, now available on the Hugging Face platform. However, the article body is empty, limiting detailed analysis of the announcement's specifics or implications.

AI · Bullish · Hugging Face Blog · May 24 · 7/10
🧠

Making LLMs even more accessible with bitsandbytes, 4-bit quantization and QLoRA

The article discusses advances in making Large Language Models (LLMs) more accessible through the bitsandbytes library, 4-bit quantization techniques, and QLoRA (Quantized Low-Rank Adaptation). These technologies enable running and fine-tuning large AI models on consumer hardware with significantly reduced memory requirements.
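The core idea behind 4-bit quantization can be sketched in a few lines: split the weights into blocks, store one scale per block, and round each value to a handful of integer levels. The NumPy sketch below uses plain absmax rounding for illustration; bitsandbytes' actual NF4 data type and the function names here are not from the article.

```python
import numpy as np

def quantize_4bit(weights, block_size=64):
    """Absmax 4-bit quantization: scale each block by its max absolute
    value, then round to one of 15 signed integer levels in [-7, 7]."""
    flat = weights.reshape(-1, block_size)
    scales = np.abs(flat).max(axis=1, keepdims=True)  # one scale per block
    scales[scales == 0] = 1.0                         # avoid division by zero
    codes = np.round(flat / scales * 7).astype(np.int8)
    return codes, scales

def dequantize_4bit(codes, scales, shape):
    """Recover an approximation of the original weights."""
    return (codes.astype(np.float32) / 7 * scales).reshape(shape)

rng = np.random.default_rng(0)
w = rng.normal(size=(128, 64)).astype(np.float32)
codes, scales = quantize_4bit(w)
w_hat = dequantize_4bit(codes, scales, w.shape)
err = np.abs(w - w_hat).mean()   # small reconstruction error
```

Memory-wise this replaces 32-bit floats with 4-bit codes plus one scale per 64-weight block, which is where the roughly 8× reduction comes from.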

AI · Neutral · arXiv – CS AI · 2d ago · 6/10
🧠

PrivacyReasoner: Can LLM Emulate a Human-like Privacy Mind?

Researchers introduce PrivacyReasoner, an LLM-based agent architecture that reconstructs individual privacy perspectives from online comment history to predict how specific people would perceive data practices. The system outperforms baseline models in predicting privacy concerns across AI, e-commerce, and healthcare domains by contextually activating relevant privacy beliefs.

AI · Neutral · arXiv – CS AI · 3d ago · 6/10
🧠

When Valid Signals Fail: Regime Boundaries Between LLM Features and RL Trading Policies

Researchers demonstrate that large language models can extract predictive features from financial news with valid intermediate signals (Information Coefficient >0.15), yet these features fail to improve reinforcement learning trading agents during macroeconomic shocks. The findings reveal a critical gap between feature-level validity and downstream policy robustness, suggesting that valid signals alone cannot guarantee trading performance under distribution shifts.
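The Information Coefficient cited above is conventionally the Spearman rank correlation between a predictive signal and the subsequently realized returns. A minimal sketch (the function name and the synthetic data are illustrative, not from the paper; ties in the inputs are assumed away):

```python
import numpy as np

def information_coefficient(signals, returns):
    """Rank IC: Spearman correlation between a predictive signal and
    realized returns (assumes no tied values)."""
    rank = lambda x: np.argsort(np.argsort(x)).astype(np.float64)
    rs, rr = rank(np.asarray(signals)), rank(np.asarray(returns))
    rs -= rs.mean()
    rr -= rr.mean()
    return float(rs @ rr / np.sqrt((rs @ rs) * (rr @ rr)))

rng = np.random.default_rng(1)
signal = rng.normal(size=250)        # e.g. one year of daily scores
returns = 0.2 * signal + rng.normal(size=250)  # weak but real signal
ic = information_coefficient(signal, returns)
```

An IC above ~0.1 is usually considered a meaningful feature-level signal, which is exactly why the paper's finding (valid IC yet fragile downstream policies) is notable.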

AI · Neutral · arXiv – CS AI · 3d ago · 6/10
🧠

Legal2LogicICL: Improving Generalization in Transforming Legal Cases to Logical Formulas via Diverse Few-Shot Learning

Researchers introduce Legal2LogicICL, an LLM-based framework that improves the conversion of natural-language legal cases into logical formulas through retrieval-augmented few-shot learning. The method addresses data scarcity in legal AI systems and introduces a new annotated dataset (Legal2Proleg) to advance interpretable legal reasoning without requiring model fine-tuning.

AI · Neutral · arXiv – CS AI · 3d ago · 6/10
🧠

Large Language Model as An Operator: An Experience-Driven Solution for Distribution Network Voltage Control

Researchers propose an LLM-based system for autonomous voltage control in electrical distribution networks, using experience-driven decision-making to optimize day-ahead dispatch strategies. The framework combines historical operational data retrieval with AI-generated solutions, demonstrating how large language models can address complex power system management under incomplete information.

AI · Neutral · arXiv – CS AI · 3d ago · 6/10
🧠

CodaRAG: Connecting the Dots with Associativity Inspired by Complementary Learning

Researchers introduce CodaRAG, a framework that enhances Retrieval-Augmented Generation by treating evidence retrieval as active associative discovery rather than passive lookup. The system achieves 7-10% gains in retrieval recall and 3-11% improvements in generation accuracy by consolidating fragmented knowledge, navigating multi-dimensional pathways, and eliminating noise.

AI · Neutral · arXiv – CS AI · 3d ago · 6/10
🧠

Pseudo-Unification: Entropy Probing Reveals Divergent Information Patterns in Unified Multimodal Models

Researchers reveal that unified multimodal models (UMMs) combining language and vision capabilities fail to achieve genuine synergy, exhibiting divergent information patterns that undermine reasoning transfer to image synthesis. An information-theoretic framework analyzing ten models shows pseudo-unification stems from asymmetric encoding and conflicting response patterns, with only models implementing contextual prediction achieving stronger text-to-image reasoning.

AI · Neutral · arXiv – CS AI · 4d ago · 6/10
🧠

GNN-as-Judge: Unleashing the Power of LLMs for Graph Learning with GNN Feedback

Researchers propose GNN-as-Judge, a framework combining Large Language Models with Graph Neural Networks to improve learning on text-attributed graphs in low-resource settings. The approach uses collaborative pseudo-labeling and weakly-supervised fine-tuning to generate reliable labels while reducing noise, demonstrating significant performance gains when labeled data is scarce.

AI · Neutral · arXiv – CS AI · 4d ago · 6/10
🧠

Every Response Counts: Quantifying Uncertainty of LLM-based Multi-Agent Systems through Tensor Decomposition

Researchers introduce MATU, a novel uncertainty quantification framework using tensor decomposition to address reliability challenges in Large Language Model-based Multi-Agent Systems. The method analyzes entire reasoning trajectories rather than single outputs, effectively measuring uncertainty across different agent structures and communication topologies.
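MATU's tensor-decomposition machinery is beyond a short sketch, but the kind of single-output baseline it improves on is easy to state: vote entropy over the agents' final answers, ignoring the reasoning trajectories entirely. Everything below (function name, toy answers) is illustrative, not from the paper.

```python
import math
from collections import Counter

def vote_entropy(final_answers):
    """Shannon entropy of the empirical distribution over agents' final
    answers: 0.0 means unanimous agreement; higher means disagreement."""
    n = len(final_answers)
    return -sum(
        (c / n) * math.log(c / n) for c in Counter(final_answers).values()
    )

unanimous = vote_entropy(["B", "B", "B", "B"])   # 0.0
split = vote_entropy(["B", "B", "A", "C"])       # higher uncertainty
```

The paper's point is that two runs can be equally unanimous in their answers while differing wildly in how the agents got there, which is what trajectory-level methods like MATU are designed to capture.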

AI · Neutral · arXiv – CS AI · Apr 10 · 6/10
🧠

SentinelSphere: Integrating AI-Powered Real-Time Threat Detection with Cybersecurity Awareness Training

SentinelSphere is an AI-powered cybersecurity platform combining machine learning-based threat detection with LLM-driven security training to address both technical vulnerabilities and human-factor weaknesses in enterprise security. The system uses an Enhanced DNN model trained on benchmark datasets for real-time threat identification and deploys a quantized Phi-4 model for accessible security education, validated by industry professionals as intuitive and effective.

AI · Neutral · arXiv – CS AI · Apr 10 · 6/10
🧠

Improved Evidence Extraction and Metrics for Document Inconsistency Detection with LLMs

Researchers introduce improved methods for detecting inconsistencies in documents using large language models, including new evaluation metrics and a redact-and-retry framework. The work addresses a research gap in LLM-based document analysis and includes a new semi-synthetic dataset for benchmarking evidence extraction capabilities.

AI · Neutral · arXiv – CS AI · Apr 7 · 6/10
🧠

LiveFact: A Dynamic, Time-Aware Benchmark for LLM-Driven Fake News Detection

Researchers have developed LiveFact, a new dynamic benchmark for evaluating Large Language Models' ability to detect fake news and misinformation in real-time conditions. The benchmark addresses limitations of static testing by using temporal evidence sets and finds that open-source models like Qwen3-235B-A22B now match proprietary systems in performance.

AI · Bearish · arXiv – CS AI · Apr 7 · 6/10
🧠

Plausibility as Commonsense Reasoning: Humans Succeed, Large Language Models Do not

A new study reveals that large language models fail to integrate world knowledge with syntactic structure for ambiguity resolution in the same way humans do. Researchers tested Turkish language models on relative-clause attachment ambiguities and found that while humans reliably use plausibility to guide interpretation, LLMs show weak, unstable, or reversed responses to the same plausibility cues.

AI · Bullish · arXiv – CS AI · Apr 7 · 6/10
🧠

Scaling DPPs for RAG: Density Meets Diversity

Researchers propose ScalDPP, a new retrieval mechanism for RAG systems that uses Determinantal Point Processes to optimize both density and diversity in context selection. The approach addresses limitations in current RAG pipelines that ignore interactions between retrieved information chunks, leading to redundant contexts that reduce effectiveness.
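The density-plus-diversity trade-off a DPP encodes can be sketched with greedy MAP selection over an L-ensemble kernel. This is a generic DPP sketch, not ScalDPP's actual scaling method; the function name and toy "chunks" are illustrative.

```python
import numpy as np

def greedy_dpp_select(quality, embeddings, k):
    """Greedy MAP inference for an L-ensemble DPP.
    L[i, j] = q_i * <e_i, e_j> * q_j couples per-item quality ("density")
    with pairwise similarity, so adding a near-duplicate of an already
    selected item yields ~zero determinant gain and is skipped."""
    emb = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    L = np.outer(quality, quality) * (emb @ emb.T)
    selected = []
    for _ in range(k):
        best, best_logdet = None, -np.inf
        for i in range(len(quality)):
            if i in selected:
                continue
            idx = selected + [i]
            sign, logdet = np.linalg.slogdet(L[np.ix_(idx, idx)])
            if sign > 0 and logdet > best_logdet:
                best, best_logdet = i, logdet
        if best is None:          # no candidate adds any diversity
            break
        selected.append(best)
    return selected

# Chunks 0 and 1 are duplicates; pure top-quality ranking would return both.
quality = np.array([1.0, 1.0, 0.6])
embeddings = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
picked = greedy_dpp_select(quality, embeddings, k=2)  # picks 0 and 2, not 0 and 1
```

The determinant of a submatrix containing two identical rows is zero, which is exactly why the duplicate chunk is never selected; that is the "redundant contexts" problem the paper targets.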

AI · Bullish · arXiv – CS AI · Apr 7 · 6/10
🧠

VERT: Reliable LLM Judges for Radiology Report Evaluation

Researchers introduce VERT, a new LLM-based metric for evaluating radiology reports that shows up to 11.7% better correlation with radiologist judgments than existing methods. The study demonstrates that fine-tuned smaller models can achieve significant performance gains while reducing inference time by a factor of up to 37.2.

AI · Bullish · arXiv – CS AI · Apr 7 · 6/10
🧠

PRAISE: Prefix-Based Rollout Reuse in Agentic Search Training

Researchers introduce PRAISE, a new framework that improves training efficiency for AI agents performing complex search tasks like multi-hop question answering. The method addresses key limitations in current reinforcement learning approaches by reusing partial search trajectories and providing intermediate rewards rather than only final answer feedback.

AI · Neutral · arXiv – CS AI · Apr 7 · 6/10
🧠

Can Humans Tell? A Dual-Axis Study of Human Perception of LLM-Generated News

A research study using the JudgeGPT platform found that humans cannot reliably distinguish between AI-generated and human-written news articles across 2,318 judgments from 1,054 participants. The study tested six different LLMs and concluded that user-side detection is not viable, suggesting the need for cryptographic content provenance systems.

AI · Bullish · arXiv – CS AI · Apr 7 · 6/10
🧠

Automated Attention Pattern Discovery at Scale in Large Language Models

Researchers developed AP-MAE, a vision transformer model that analyzes attention patterns in large language models at scale to improve interpretability. The system can predict code generation accuracy with 55-70% precision and enable targeted interventions that increase model accuracy by 13.6%.

AI · Bullish · arXiv – CS AI · Apr 7 · 6/10
🧠

GROUNDEDKG-RAG: Grounded Knowledge Graph Index for Long-document Question Answering

Researchers introduced GroundedKG-RAG, a new retrieval-augmented generation system that creates knowledge graphs directly grounded in source documents to improve long-document question answering. The system reduces resource consumption and hallucinations while maintaining accuracy comparable to state-of-the-art models at lower cost.

AI · Bullish · arXiv – CS AI · Apr 7 · 6/10
🧠

Context is All You Need

Researchers introduce CONTXT, a lightweight neural network adaptation method that improves AI model performance when deployed on data different from training data. The technique uses simple additive and multiplicative transforms to modulate internal representations, providing consistent gains across both discriminative and generative models including LLMs.

AI · Bullish · arXiv – CS AI · Apr 7 · 6/10
🧠

Conversational Control with Ontologies for Large Language Models: A Lightweight Framework for Constrained Generation

Researchers developed a lightweight framework that uses ontological definitions to provide modular and explainable control over Large Language Model outputs in conversational systems. The method fine-tunes LLMs to generate content according to specific constraints like English proficiency level and content polarity, consistently outperforming pre-trained baselines across seven state-of-the-art models.

Page 15 of 39