y0news
🧠 AI
11,682 AI articles curated from 50+ sources with AI-powered sentiment analysis, importance scoring, and key takeaways.

AI · Neutral · arXiv – CS AI · Mar 47/102

Covering Numbers for Deep ReLU Networks with Applications to Function Approximation and Nonparametric Regression

Researchers have derived tight bounds on covering numbers for deep ReLU neural networks, providing fundamental insights into network capacity and approximation capabilities. The work removes a log^6(n) factor from the best known sample complexity rate for estimating Lipschitz functions via deep networks, establishing optimality in nonparametric regression.
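For context, the quantity being bounded is the standard ε-covering number of a function class F under a metric ρ (definition only; the paper's specific bounds are not reproduced here):

```latex
N(\epsilon, \mathcal{F}, \rho) = \min\Bigl\{ n : \exists\, f_1,\dots,f_n \in \mathcal{F} \ \text{s.t.}\ \sup_{f \in \mathcal{F}} \min_{1 \le i \le n} \rho(f, f_i) \le \epsilon \Bigr\}
```

Tighter covering-number bounds translate directly into sharper sample-complexity rates via standard empirical-process arguments.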

AI · Bullish · arXiv – CS AI · Mar 47/102

Learning Memory-Enhanced Improvement Heuristics for Flexible Job Shop Scheduling

Researchers propose MIStar, a memory-enhanced improvement search framework using heterogeneous graph neural networks for flexible job-shop scheduling problems in smart manufacturing. The approach significantly outperforms traditional heuristics and state-of-the-art deep reinforcement learning methods in optimizing production schedules.

AI · Bearish · arXiv – CS AI · Mar 46/103

Contextual Drag: How Errors in the Context Affect LLM Reasoning

Researchers have identified 'contextual drag', a phenomenon in which large language models (LLMs) reproduce similar errors when failed attempts are present in their context. The study found 10–20% performance drops across 11 models on 8 reasoning tasks, with iterative self-refinement potentially leading to self-deterioration.

AI · Bullish · arXiv – CS AI · Mar 47/103

Hallucination, Monofacts, and Miscalibration: An Empirical Investigation

Researchers conducted the first empirical investigation of hallucination in large language models, revealing that strategic repetition of just 5% of training examples can reduce AI hallucinations by up to 40%. The study introduces 'selective upweighting' as a technique that maintains model accuracy while significantly reducing false information generation.

AI · Bullish · arXiv – CS AI · Mar 46/103

IoUCert: Robustness Verification for Anchor-based Object Detectors

Researchers introduce IoUCert, a new formal verification framework that enables robustness verification for anchor-based object detection models like SSD, YOLOv2, and YOLOv3. The breakthrough uses novel coordinate transformations and Interval Bound Propagation to overcome previous limitations in verifying object detection systems against input perturbations.
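Interval Bound Propagation, the primitive the framework builds on, pushes elementwise lower/upper bounds through affine layers and monotone activations. A minimal NumPy sketch of that primitive alone (IoUCert's coordinate transformations and detector-specific machinery are not shown):

```python
import numpy as np

def ibp_affine(l, u, W, b):
    """Propagate interval bounds [l, u] through y = W @ x + b."""
    W_pos = np.maximum(W, 0.0)  # positive weights pair lo-with-lo, hi-with-hi
    W_neg = np.minimum(W, 0.0)  # negative weights flip the pairing
    lo = W_pos @ l + W_neg @ u + b
    hi = W_pos @ u + W_neg @ l + b
    return lo, hi

def ibp_relu(l, u):
    """ReLU is monotone, so bounds pass through elementwise."""
    return np.maximum(l, 0.0), np.maximum(u, 0.0)

# Bound a tiny one-layer network over an L-infinity ball of radius 0.1.
x = np.array([1.0, -0.5])
l, u = x - 0.1, x + 0.1
W1, b1 = np.array([[1.0, 2.0], [-1.0, 0.5]]), np.zeros(2)
l, u = ibp_relu(*ibp_affine(l, u, W1, b1))
assert np.all(l <= u)  # sound interval: lower bound never exceeds upper
```

Any perturbed input inside the ball is guaranteed to produce an output inside the propagated interval, which is what makes downstream robustness claims certifiable.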

AI · Bullish · arXiv – CS AI · Mar 46/104

OCR or Not? Rethinking Document Information Extraction in the MLLMs Era with Real-World Large-Scale Datasets

A large-scale benchmarking study finds that powerful Multimodal Large Language Models (MLLMs) can extract information from business documents using image-only input, potentially eliminating the need for traditional OCR preprocessing. The research demonstrates that well-designed prompts and instructions can further enhance MLLM performance in document processing tasks.

AI · Neutral · arXiv – CS AI · Mar 47/102

Faster, Cheaper, More Accurate: Specialised Knowledge Tracing Models Outperform LLMs

Research comparing Knowledge Tracing (KT) models to Large Language Models (LLMs) for predicting student responses found that specialized KT models significantly outperform LLMs in accuracy, speed, and cost-effectiveness. The study demonstrates that domain-specific models are superior to general-purpose LLMs for educational prediction tasks, with LLMs being orders of magnitude slower and more expensive to deploy.

AI · Bullish · arXiv – CS AI · Mar 47/103

BrandFusion: A Multi-Agent Framework for Seamless Brand Integration in Text-to-Video Generation

Researchers introduce BrandFusion, a multi-agent AI framework that enables seamless brand integration into text-to-video generation models. The system addresses commercial monetization challenges in T2V technology by automatically embedding advertiser brands into generated videos while preserving user intent and ensuring natural integration.

AI · Bullish · arXiv – CS AI · Mar 47/104

You Only Fine-tune Once: Many-Shot In-Context Fine-Tuning for Large Language Models

Researchers propose Many-Shot In-Context Fine-tuning (ManyICL), a novel approach that significantly improves large language model performance by treating multiple in-context examples as supervised training targets rather than just prompts. The method narrows the performance gap between in-context learning and dedicated fine-tuning while reducing catastrophic forgetting issues.
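The core idea, supervising every shot's answer rather than only the final one, can be illustrated with a toy label mask, where non-target positions are set to -100 (the ignore index used by common cross-entropy implementations). The token layout and helper below are hypothetical, not the paper's code:

```python
IGNORE = -100  # positions with this label are excluded from the loss

def many_shot_labels(token_ids, answer_spans):
    """Build training labels that supervise every answer span.

    Standard ICL fine-tuning would list only the final span here;
    a ManyICL-style setup includes the answer span of every shot.
    """
    labels = [IGNORE] * len(token_ids)
    for start, end in answer_spans:
        labels[start:end] = token_ids[start:end]
    return labels

# Toy prompt: [q1][a1][q2][a2] with hypothetical token ids.
tokens = [11, 12, 13, 21, 22, 31, 32, 33, 41]
spans = [(3, 5), (8, 9)]  # answer spans for two shots
print(many_shot_labels(tokens, spans))
# → [-100, -100, -100, 21, 22, -100, -100, -100, 41]
```

Because every shot contributes gradient signal, one fine-tuning pass extracts supervision from contexts that plain in-context learning would only condition on.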

AI · Bearish · arXiv – CS AI · Mar 46/102

Scores Know Bob's Voice: Speaker Impersonation Attack

Researchers developed a new AI attack method that can fool speaker recognition systems with 10x fewer attempts than previous approaches. The technique uses feature-aligned inversion to optimize attacks in latent space, achieving up to 91.65% success rate with only 50 queries.

AI · Bullish · arXiv – CS AI · Mar 47/103

Next Embedding Prediction Makes World Models Stronger

Researchers introduce NE-Dreamer, a decoder-free model-based reinforcement learning agent that uses temporal transformers to predict next-step encoder embeddings. The approach achieves performance matching or exceeding DreamerV3 on standard benchmarks while showing substantial improvements on memory and spatial reasoning tasks.
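A decoder-free latent-prediction objective of this kind is often trained by matching the predicted next-step embedding to the encoder's actual output, e.g. with a cosine loss. A minimal sketch under that assumption (the paper's exact objective may differ):

```python
import numpy as np

def next_embedding_loss(pred, target):
    """Mean cosine distance between predicted and actual next-step embeddings.

    Decoder-free: the model is never asked to reconstruct pixels,
    only to match the encoder's latent at the next timestep.
    """
    pred = pred / np.linalg.norm(pred, axis=-1, keepdims=True)
    target = target / np.linalg.norm(target, axis=-1, keepdims=True)
    return float(np.mean(1.0 - np.sum(pred * target, axis=-1)))

# Perfectly aligned predictions give zero loss.
z = np.random.randn(4, 16)
assert abs(next_embedding_loss(z, z)) < 1e-9
```

Skipping reconstruction removes the decoder's capacity cost and avoids wasting model capacity on pixel details irrelevant to control.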

AI · Bullish · arXiv – CS AI · Mar 46/104

SPARC: Spatial-Aware Path Planning via Attentive Robot Communication

Researchers developed SPARC, a new AI system for multi-robot path planning that uses spatial-aware communication to improve coordination. The system achieved 75% success rate when scaling from 8 training robots to 128 test robots, outperforming existing methods by over 25 percentage points in high-density environments.

AI · Bullish · arXiv – CS AI · Mar 47/102

SUN: Shared Use of Next-token Prediction for Efficient Multi-LLM Disaggregated Serving

Researchers propose SUN (Shared Use of Next-token Prediction), a novel approach for multi-LLM serving that enables cross-model sharing of decode execution by decomposing transformers into separate prefill and decode modules. The system achieves up to 2.0x throughput improvement per GPU while maintaining accuracy comparable to full fine-tuning, with a quantized version (QSUN) providing additional 45% speedup.

AI · Bullish · arXiv – CS AI · Mar 46/102

ScaleDoc: Scaling LLM-based Predicates over Large Document Collections

ScaleDoc is a new system that enables efficient semantic analysis of large document collections using LLMs by combining offline document representation with lightweight online filtering. The system achieves 2x speedup and reduces expensive LLM calls by up to 85% through contrastive learning and adaptive cascade mechanisms.
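The cascade pattern here, scoring documents with a cheap offline proxy and escalating only the uncertain band to the LLM, can be sketched as follows. The thresholds and the scoring/predicate functions are placeholders, not ScaleDoc's actual interfaces:

```python
def cascade_filter(docs, cheap_score, llm_predicate, lo=0.2, hi=0.8):
    """Return docs passing the predicate, calling the LLM only when the
    cheap proxy score falls inside the uncertain band (lo, hi)."""
    accepted, llm_calls = [], 0
    for doc in docs:
        s = cheap_score(doc)       # e.g. cosine similarity of embeddings
        if s >= hi:                # confidently relevant: accept for free
            accepted.append(doc)
        elif s > lo:               # uncertain: escalate to the LLM
            llm_calls += 1
            if llm_predicate(doc):
                accepted.append(doc)
        # s <= lo: confidently irrelevant, skip without an LLM call
    return accepted, llm_calls

docs = ["a", "b", "c", "d"]
scores = {"a": 0.9, "b": 0.5, "c": 0.1, "d": 0.85}
kept, calls = cascade_filter(docs, scores.get, lambda d: d == "b")
print(kept, calls)  # → ['a', 'b', 'd'] 1
```

The reported 85% reduction in LLM calls corresponds to shrinking the uncertain band, which is where contrastive training of the proxy scorer pays off.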

AI · Bullish · arXiv – CS AI · Mar 47/104

OpenClaw, Moltbook, and ClawdLab: From Agent-Only Social Networks to Autonomous Scientific Research

Researchers introduced ClawdLab, an open-source platform for autonomous AI scientific research, following analysis of OpenClaw framework and Moltbook social network that revealed security vulnerabilities across 131 agent skills and over 15,200 exposed control panels. The platform addresses identified failure modes through structured governance and multi-model orchestration in fully decentralized AI systems.

AI · Bearish · arXiv – CS AI · Mar 47/102

TrustMH-Bench: A Comprehensive Benchmark for Evaluating the Trustworthiness of Large Language Models in Mental Health

Researchers have developed TrustMH-Bench, a comprehensive framework to evaluate the trustworthiness of Large Language Models (LLMs) in mental health applications. Testing revealed that both general-purpose and specialized mental health LLMs, including advanced models like GPT-5.1, significantly underperform across critical trustworthiness dimensions in mental health scenarios.

AI · Bullish · arXiv – CS AI · Mar 47/104

PRISM: Pushing the Frontier of Deep Think via Process Reward Model-Guided Inference

Researchers introduce PRISM, a new AI inference algorithm that uses Process Reward Models to guide deep reasoning systems. The method significantly improves performance on mathematical and scientific benchmarks by treating candidate solutions as particles in an energy landscape and using score-guided refinement to concentrate on higher-quality reasoning paths.

AI · Bullish · OpenAI News · Mar 47/103

Understanding AI and learning outcomes

OpenAI has launched the Learning Outcomes Measurement Suite, a new tool designed to evaluate how AI technology impacts student learning across various educational settings. The suite aims to provide longitudinal assessment capabilities to measure AI's effectiveness in education over extended periods.

AI · Bearish · TechCrunch – AI · Mar 37/104

Alibaba’s Qwen tech lead steps down after major AI push

Junyang Lin, the technology lead for Alibaba's Qwen AI team, has stepped down following a major model launch. The departure has prompted significant reactions within the Qwen team and may signal internal tensions or strategic changes at one of China's leading AI development groups.
