y0news
🧠 AI

13,288 AI articles curated from 50+ sources with AI-powered sentiment analysis, importance scoring, and key takeaways.

AI · Bullish · arXiv – CS AI · Mar 26/1017

Data Driven Optimization of GPU efficiency for Distributed LLM Adapter Serving

Researchers developed a data-driven pipeline to optimize GPU efficiency for distributed LLM adapter serving, achieving sub-5% throughput estimation error while running 90x faster than full benchmarking. The system uses a Digital Twin, machine learning models, and greedy placement algorithms to minimize GPU requirements while serving hundreds of adapters concurrently.
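The greedy placement idea can be sketched as a first-fit-decreasing bin-packing pass. This is a hypothetical illustration, not the paper's pipeline: the adapter names, loads, and per-GPU throughput budget below are invented.

```python
def greedy_place(adapter_loads, gpu_capacity):
    """First-fit-decreasing: assign each adapter to the first GPU with room,
    opening a new GPU only when none fits. Returns (placement, gpu count)."""
    gpus = []        # running load per GPU
    placement = {}   # adapter name -> GPU index
    for name, load in sorted(adapter_loads.items(), key=lambda kv: -kv[1]):
        for i, used in enumerate(gpus):
            if used + load <= gpu_capacity:
                gpus[i] += load
                placement[name] = i
                break
        else:
            gpus.append(load)          # open a new GPU
            placement[name] = len(gpus) - 1
    return placement, len(gpus)

# Illustrative adapter throughput demands as fractions of one GPU's capacity.
adapters = {"a": 0.6, "b": 0.5, "c": 0.4, "d": 0.3, "e": 0.2}
placement, n_gpus = greedy_place(adapters, gpu_capacity=1.0)
```

In the full system described above, the Digital Twin's throughput estimates would supply the load numbers that a heuristic like this consumes.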

AI · Neutral · arXiv – CS AI · Mar 27/1015

SLA-Aware Distributed LLM Inference Across Device-RAN-Cloud

Researchers tested distributed AI inference across device, edge, and cloud tiers in a 5G network, finding that sub-second AI response times required for embodied AI are challenging to achieve. On-device execution took multiple seconds, while RAN-edge deployment with quantized models could meet 0.5-second deadlines, and cloud deployment achieved 100% success for 1-second deadlines.
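The tier trade-off above reduces to a deadline check. The sketch below is a toy tier picker with placeholder latencies loosely echoing the reported behavior; none of these numbers are the paper's measurements.

```python
TIERS = [  # (name, assumed expected latency in seconds), preferred order first
    ("device", 3.0),      # on-device execution: multiple seconds
    ("ran-edge", 0.45),   # quantized model at the RAN edge
    ("cloud", 0.8),       # cloud round trip
]

def pick_tier(deadline_s):
    """Return the first tier whose expected latency meets the SLA deadline."""
    for name, latency in TIERS:
        if latency <= deadline_s:
            return name
    return None  # no tier can meet this SLA

embodied = pick_tier(0.5)  # sub-second embodied-AI deadline
relaxed = pick_tier(1.0)
```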

AI · Bullish · arXiv – CS AI · Mar 26/1013

3D Modality-Aware Pre-training for Vision-Language Model in MRI Multi-organ Abnormality Detection

Researchers developed MedMAP, a Medical Modality-Aware Pretraining framework that enhances vision-language models for 3D MRI multi-organ abnormality detection. The framework addresses challenges in modality-specific alignment and cross-modal feature fusion, demonstrating superior performance on a curated dataset of 7,392 3D MRI volume-report pairs.

AI · Bullish · arXiv – CS AI · Mar 26/1013

FedRot-LoRA: Mitigating Rotational Misalignment in Federated LoRA

Researchers propose FedRot-LoRA, a new framework that solves rotational misalignment issues in federated learning for large language models. The solution uses orthogonal transformations to align client updates before aggregation, improving training stability and performance without increasing communication costs.
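A standard way to undo a rotation between two matrices is the orthogonal Procrustes solution, shown below as a stand-in for the paper's alignment step (FedRot-LoRA's actual procedure may differ; the shapes and the simulated "client drift" are invented).

```python
import numpy as np

def procrustes_align(client, reference):
    """Return the orthogonal matrix Omega minimizing ||client @ Omega - reference||_F."""
    u, _, vt = np.linalg.svd(client.T @ reference)
    return u @ vt

rng = np.random.default_rng(0)
reference = rng.standard_normal((16, 4))   # server's low-rank LoRA factor
# Simulate a client whose factor drifted by an unknown random rotation.
q, _ = np.linalg.qr(rng.standard_normal((4, 4)))
client = reference @ q

omega = procrustes_align(client, reference)
aligned = client @ omega                   # rotation removed before averaging
```

Because only a small orthogonal matrix is exchanged or recomputed, a step like this adds nothing to the payload of the LoRA factors themselves, which matches the "no extra communication cost" claim in spirit.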

AI · Bullish · arXiv – CS AI · Mar 26/1016

FlexGuard: Continuous Risk Scoring for Strictness-Adaptive LLM Content Moderation

Researchers introduce FlexGuard, a new AI content moderation system that provides continuous risk scoring instead of binary decisions, allowing platforms to adapt moderation strictness as needed. The system addresses limitations of existing guardrail models that break down when content moderation requirements change across platforms or over time.
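The continuous-scoring idea can be shown with a toy scorer, where the model emits a risk score and each platform picks its own cutoff. Everything here (the term weights, the threshold values) is invented for illustration and is not FlexGuard's scoring method.

```python
def risk_score(text, weighted_terms):
    """Toy scorer: sum weights of flagged terms present in the text, capped at 1.0."""
    score = sum(w for term, w in weighted_terms.items() if term in text.lower())
    return min(score, 1.0)

def moderate(text, threshold, weighted_terms):
    """Continuous score plus a platform-chosen strictness threshold."""
    score = risk_score(text, weighted_terms)
    return {"score": score, "action": "block" if score >= threshold else "allow"}

terms = {"scam": 0.6, "free money": 0.5}
lenient = moderate("Claim your free money now", threshold=0.8, weighted_terms=terms)
strict = moderate("Claim your free money now", threshold=0.4, weighted_terms=terms)
```

The same score yields different actions under different thresholds, which is exactly what a binary-output guardrail model cannot offer without retraining.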

AI · Neutral · arXiv – CS AI · Mar 27/1013

Learning to maintain safety through expert demonstrations in settings with unknown constraints: A Q-learning perspective

Researchers propose SafeQIL, a new Q-learning algorithm that learns safe policies from expert demonstrations in constrained environments where safety constraints are unknown. The approach balances maximizing task rewards while maintaining safety by learning from demonstrated trajectories that successfully complete tasks without violating hidden constraints.
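One crude way to respect unknown constraints is to only ever take state-action pairs that appear in expert demonstrations. The tabular sketch below illustrates that restriction on a tiny chain MDP; it is not the SafeQIL algorithm, and the environment is invented.

```python
import random

# Expert demonstrations on a 5-state chain (goal at state 4).
demos = [[(0, "right"), (1, "right"), (2, "right"), (3, "right")]]
allowed = {(s, a) for traj in demos for s, a in traj}  # demonstrated-safe pairs

def step(state, action):
    nxt = state + 1 if action == "right" else max(state - 1, 0)
    return nxt, (1.0 if nxt == 4 else 0.0), nxt == 4

q = {}
random.seed(0)
for _ in range(200):
    state = 0
    while True:
        # Only actions the expert demonstrated in this state are permitted.
        actions = [a for a in ("left", "right") if (state, a) in allowed]
        action = random.choice(actions)
        nxt, reward, done = step(state, action)
        future = 0.0 if done else max(
            q.get((nxt, a), 0.0) for a in ("left", "right") if (nxt, a) in allowed
        )
        q[(state, action)] = q.get((state, action), 0.0) + 0.5 * (
            reward + 0.9 * future - q.get((state, action), 0.0)
        )
        state = nxt
        if done:
            break
```

The learned Q-table never contains an undemonstrated pair, so hidden constraints are never probed, at the cost of exploration.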

AI · Bullish · arXiv – CS AI · Mar 26/1012

TRIZ-RAGNER: A Retrieval-Augmented Large Language Model for TRIZ-Aware Named Entity Recognition in Patent-Based Contradiction Mining

Researchers developed TRIZ-RAGNER, a retrieval-augmented large language model framework that improves patent analysis and systematic innovation by extracting technical contradictions from patent documents. The system achieved an F1 score of 84.2%, outperforming existing methods by 7.3 percentage points through better integration of domain-specific knowledge.

AI · Neutral · arXiv – CS AI · Mar 26/1012

DLEBench: Evaluating Small-scale Object Editing Ability for Instruction-based Image Editing Model

Researchers introduce DLEBench, the first benchmark specifically designed to evaluate instruction-based image editing models' ability to edit small-scale objects that occupy only 1%-10% of image area. Testing on 10 models revealed significant performance gaps in small object editing, highlighting a critical limitation in current AI image editing capabilities.
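The 1%-10% criterion amounts to a bounding-box area filter. The helper below is my reconstruction of that selection rule, not code from the benchmark:

```python
def area_fraction(box, image_size):
    """box = (x0, y0, x1, y1) in pixels; image_size = (width, height)."""
    x0, y0, x1, y1 = box
    w, h = image_size
    return ((x1 - x0) * (y1 - y0)) / (w * h)

def small_scale_targets(boxes, image_size, lo=0.01, hi=0.10):
    """Keep only edit targets covering 1%-10% of the image area."""
    return [b for b in boxes if lo <= area_fraction(b, image_size) <= hi]

boxes = [(0, 0, 64, 64), (0, 0, 512, 512), (10, 10, 20, 20)]
small = small_scale_targets(boxes, image_size=(512, 512))
```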

AI · Bullish · arXiv – CS AI · Mar 27/1011

KEEP: A KV-Cache-Centric Memory Management System for Efficient Embodied Planning

Researchers from PKU-SEC-Lab have developed KEEP, a new memory management system that significantly improves the efficiency of AI-powered embodied planning by optimizing KV cache usage. The system achieves 2.68x speedup compared to text-based memory methods while maintaining accuracy, addressing a key bottleneck in memory-augmented Large Language Models for complex planning tasks.

AI · Bullish · arXiv – CS AI · Mar 26/1013

Pseudo Contrastive Learning for Diagram Comprehension in Multimodal Models

Researchers propose a new training method called pseudo contrastive learning to improve diagram comprehension in multimodal AI models like CLIP. The approach uses synthetic diagram samples to help models better understand fine-grained structural differences in diagrams, showing significant improvements in flowchart understanding tasks.
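A generic contrastive (InfoNCE-style) objective gives the flavor of such training: matched diagram-text pairs are pulled together while other samples in the batch, which could include synthetic hard negatives, are pushed apart. This is a stand-in for the paper's pseudo contrastive loss, with invented data.

```python
import numpy as np

def info_nce(img, txt, temperature=0.1):
    """InfoNCE loss over a batch; matched pairs sit on the diagonal."""
    img = img / np.linalg.norm(img, axis=1, keepdims=True)
    txt = txt / np.linalg.norm(txt, axis=1, keepdims=True)
    logits = img @ txt.T / temperature           # pairwise cosine similarities
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(1)
txt = rng.standard_normal((8, 32))
# Well-aligned embeddings give a low loss; unrelated ones give a high loss.
aligned_loss = info_nce(txt + 0.01 * rng.standard_normal((8, 32)), txt)
random_loss = info_nce(rng.standard_normal((8, 32)), txt)
```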

AI · Neutral · arXiv – CS AI · Mar 26/1015

LFQA-HP-1M: A Large-Scale Human Preference Dataset for Long-Form Question Answering

Researchers released LFQA-HP-1M, a dataset with 1.3 million human preference annotations for evaluating long-form question answering systems. The study introduces nine quality rubrics and shows that simple linear models can match advanced LLM evaluators while exposing vulnerabilities in current evaluation methods.

AI · Bullish · arXiv – CS AI · Mar 27/1012

Hyperdimensional Cross-Modal Alignment of Frozen Language and Image Models for Efficient Image Captioning

Researchers introduce HDFLIM, a new framework that aligns vision and language AI models without requiring computationally expensive fine-tuning by using hyperdimensional computing to create cross-modal mappings while keeping foundation models frozen. The approach achieves comparable performance to traditional training methods while being significantly more resource-efficient.
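A loose illustration of the hyperdimensional-computing idea: project frozen features from both modalities into a shared high-dimensional bipolar space with a fixed random matrix, then retrieve captions by similarity, with no fine-tuning of anything. This is an assumption-laden sketch, not the HDFLIM pipeline; the feature sizes and correlated toy data are invented.

```python
import numpy as np

DIM = 4096  # hypervector dimensionality (assumed)
rng = np.random.default_rng(42)
projection = rng.standard_normal((64, DIM))  # fixed, never trained

def to_hypervector(features):
    """Bipolar {-1, +1} hypervector code via a frozen random projection."""
    return np.sign(features @ projection)

# Stand-ins for frozen-model outputs: paired image/text features correlate.
text_feats = rng.standard_normal((5, 64))
image_feats = text_feats + 0.1 * rng.standard_normal((5, 64))

img_hv = to_hypervector(image_feats)
txt_hv = to_hypervector(text_feats)
similarity = img_hv @ txt_hv.T / DIM   # normalized agreement, in [-1, 1]
best = similarity.argmax(axis=1)       # retrieved caption index per image
```

The appeal is that the only "training" is choosing a projection, which is why such approaches can be far cheaper than fine-tuning the foundation models.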

AI · Neutral · arXiv – CS AI · Mar 26/1017

When Does Multimodal Learning Help in Healthcare? A Benchmark on EHR and Chest X-Ray Fusion

Researchers conducted a systematic benchmark study on multimodal fusion between Electronic Health Records (EHR) and chest X-rays for clinical decision support, revealing when and how combining data modalities improves healthcare AI performance. The study found that multimodal fusion helps when data is complete but benefits degrade under realistic missing data scenarios, and released an open-source benchmarking toolkit for reproducible evaluation.

AI · Bullish · arXiv – CS AI · Mar 26/1013

LLM-Driven Multi-Turn Task-Oriented Dialogue Synthesis for Realistic Reasoning

Researchers propose an LLM-driven framework for generating multi-turn task-oriented dialogues to create more realistic reasoning benchmarks. The framework addresses limitations in current AI evaluation methods by producing synthetic datasets that better reflect real-world complexity and contextual coherence.

AI · Neutral · arXiv – CS AI · Mar 26/1019

BRIDGE the Gap: Mitigating Bias Amplification in Automated Scoring of English Language Learners via Inter-group Data Augmentation

Researchers developed BRIDGE, a framework to reduce bias in AI-powered automated scoring systems that unfairly penalize English Language Learners (ELLs). The system addresses representation bias by generating synthetic high-scoring ELL samples, achieving fairness improvements comparable to using additional human data while maintaining overall performance.
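A crude stand-in for the rebalancing side of this idea: if high-scoring ELL examples are under-represented, pad that bucket with synthetic samples until its rate matches the non-ELL group. BRIDGE itself generates realistic synthetic responses; here the synthetic sample, the score cutoff, and the data are all invented.

```python
import random

def rebalance(ell, non_ell, make_synthetic, high=4):
    """Pad the ELL high-score bucket toward the non-ELL high-score rate."""
    target_rate = sum(s >= high for _, s in non_ell) / len(non_ell)
    current_high = [x for x in ell if x[1] >= high]
    needed = int(target_rate * len(ell)) - len(current_high)
    return ell + [make_synthetic() for _ in range(max(needed, 0))]

random.seed(3)
non_ell = [("essay", random.choice([2, 3, 4, 5])) for _ in range(100)]
ell = [("essay", random.choice([2, 2, 3, 4])) for _ in range(100)]  # skewed low
augmented = rebalance(ell, non_ell, make_synthetic=lambda: ("synthetic essay", 4))
```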

AI · Bullish · arXiv – CS AI · Mar 27/1012

Rudder: Steering Prefetching in Distributed GNN Training using LLM Agents

Researchers introduced Rudder, a software module that uses Large Language Models (LLMs) to optimize data prefetching in distributed Graph Neural Network training. The system shows up to 91% performance improvement over baseline training and 82% over static prefetching by autonomously adapting to dynamic conditions.

AI · Bearish · arXiv – CS AI · Mar 26/1013

Humans and LLMs Diverge on Probabilistic Inferences

Researchers created ProbCOPA, a dataset testing probabilistic reasoning in humans versus AI models, finding that state-of-the-art LLMs consistently fail to match human judgment patterns. The study reveals fundamental differences in how humans and AI systems process non-deterministic inferences, highlighting limitations in current AI reasoning capabilities.

AI · Bullish · arXiv – CS AI · Mar 26/1014

BiKA: Kolmogorov-Arnold-Network-inspired Ultra Lightweight Neural Network Hardware Accelerator

Researchers propose BiKA, a new ultra-lightweight neural network accelerator inspired by Kolmogorov-Arnold Networks that uses binary thresholds instead of complex computations. The FPGA prototype demonstrates 27-51% reduction in hardware resource usage compared to existing binarized and quantized neural network accelerators while maintaining competitive accuracy.
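The hardware savings come from replacing multiply-accumulates with comparisons. The toy layer below conveys that idea in software (the thresholds, signs, and vote-summing scheme are invented, not the BiKA design):

```python
def threshold_layer(inputs, thresholds, signs):
    """Each unit sums +/-1 votes from per-input threshold comparisons.
    No multiplications are needed, only compares and adds."""
    outputs = []
    for unit_thresh, unit_signs in zip(thresholds, signs):
        votes = sum(
            s if x >= t else -s
            for x, t, s in zip(inputs, unit_thresh, unit_signs)
        )
        outputs.append(votes)
    return outputs

inputs = [0.2, 0.7, -0.1]
thresholds = [[0.0, 0.5, 0.0], [0.3, 0.8, -0.5]]  # per-unit, per-input cutoffs
signs = [[1, 1, -1], [1, -1, 1]]                   # vote polarity per edge
out = threshold_layer(inputs, thresholds, signs)
```

On an FPGA, each comparison maps to a tiny amount of logic, which is consistent with the resource reductions reported above.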

AI · Neutral · arXiv – CS AI · Mar 27/1017

Human Supervision as an Information Bottleneck: A Unified Theory of Error Floors in Human-Guided Learning

Researchers propose a unified theory explaining why AI models trained on human feedback exhibit persistent error floors that cannot be eliminated through scaling alone. The study demonstrates that human supervision acts as an information bottleneck due to annotation noise, subjective preferences, and language limitations, requiring auxiliary non-human signals to overcome these structural limitations.
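A back-of-the-envelope simulation makes the bottleneck argument concrete (this is my construction, not an experiment from the paper): if annotators flip labels with probability eta, even a model that predicts the true label every time is measured as wrong on roughly eta of examples, a floor no amount of scaling removes.

```python
import random

random.seed(7)
eta = 0.1        # assumed annotation noise rate
n = 100_000
true_labels = [random.randint(0, 1) for _ in range(n)]
# Human supervision: each label is flipped with probability eta.
human_labels = [y if random.random() > eta else 1 - y for y in true_labels]

# A "perfect" model predicts the true label every time, yet its measured
# error against the human-provided labels sits near eta.
measured_error = sum(p != h for p, h in zip(true_labels, human_labels)) / n
```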

AI · Bullish · arXiv – CS AI · Mar 26/1011

Evidential Neural Radiance Fields

Researchers introduce Evidential Neural Radiance Fields, a new probabilistic approach that enables uncertainty quantification in 3D scene modeling while maintaining rendering quality. The method addresses critical limitations in existing NeRF technology by capturing both aleatoric and epistemic uncertainty from a single forward pass, making neural radiance fields more suitable for safety-critical applications.

AI · Bullish · arXiv – CS AI · Mar 26/1014

SALIENT: Frequency-Aware Paired Diffusion for Controllable Long-Tail CT Detection

Researchers introduce SALIENT, a frequency-aware diffusion model framework that improves detection of rare lesions in CT scans by generating synthetic training data in the wavelet domain rather than in pixel space. The approach addresses extreme class imbalance in medical imaging through controllable augmentation, achieving significant improvements in detection performance for low-prevalence conditions.

AI · Bullish · arXiv – CS AI · Mar 27/1015

Learning to Generate Secure Code via Token-Level Rewards

Researchers have developed Vul2Safe, a new framework for generating secure code using large language models, which addresses security vulnerabilities through self-reflection and token-level reinforcement learning. The approach introduces the PrimeVul+ dataset and SRCode training framework to provide more precise optimization of security patterns in code generation.
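The token-level (rather than sequence-level) reward idea can be sketched as follows; the risky/safe pattern lists and scores are hypothetical and this is not the Vul2Safe scheme. Penalizing the exact token that introduces a risky pattern points the learning signal at the precise span to fix.

```python
RISKY = {"strcpy": -1.0, "gets": -1.0, "system": -0.5}   # assumed penalties
SAFE = {"strncpy": 0.5, "fgets": 0.5}                    # assumed bonuses

def token_rewards(tokens, sequence_reward=0.1):
    """Per-token reward: a pattern score where one matches, otherwise a
    small equal share of the sequence-level reward."""
    base = sequence_reward / len(tokens)
    return [RISKY.get(t, SAFE.get(t, base)) for t in tokens]

tokens = ["char", "buf", "strcpy", "(", "buf", ",", "src", ")"]
rewards = token_rewards(tokens)  # only `strcpy` receives the large penalty
```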

Page 250 of 532