y0news
🧠 AI

11,288 AI articles curated from 50+ sources with AI-powered sentiment analysis, importance scoring, and key takeaways.

AI · Bullish · arXiv – CS AI · Apr 10 · 7/10

Towards provable probabilistic safety for scalable embodied AI systems

Researchers propose shifting from deterministic to probabilistic safety verification for embodied AI systems. They argue that provable probabilistic guarantees offer a more practical path to large-scale deployment in safety-critical applications such as autonomous vehicles and robotics than the infeasible goal of absolute safety across all scenarios.

AI · Bearish · arXiv – CS AI · Apr 10 · 7/10

Beyond Functional Correctness: Design Issues in AI IDE-Generated Large-Scale Projects

Researchers evaluated Cursor, an AI-powered IDE, on its ability to generate large-scale software projects and found it achieves 91% functional correctness but produces significant design issues including code duplication, complexity violations, and framework best-practice breaches that threaten long-term maintainability.

AI · Bullish · arXiv – CS AI · Apr 10 · 7/10

SALLIE: Safeguarding Against Latent Language & Image Exploits

Researchers introduce SALLIE, a lightweight runtime defense framework that detects and mitigates jailbreak attacks and prompt injections in large language and vision-language models simultaneously. Using mechanistic interpretability and internal model activations, SALLIE achieves robust protection across multiple architectures without degrading performance or requiring architectural changes.
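Activation-based jailbreak detection of this general kind can be sketched as a linear probe over internal model activations. This is a minimal illustration under assumed inputs (pre-extracted activation vectors for safe and attack prompts), not SALLIE's actual mechanism:

```python
import numpy as np

def fit_probe(safe_acts, attack_acts):
    """Fit a separating direction as the difference of class means,
    with a threshold halfway between the projected class means."""
    w = attack_acts.mean(axis=0) - safe_acts.mean(axis=0)
    w = w / np.linalg.norm(w)
    thresh = 0.5 * ((attack_acts @ w).mean() + (safe_acts @ w).mean())
    return w, thresh

def flag_jailbreak(act, w, thresh):
    """Flag a single activation vector whose projection exceeds the threshold."""
    return float(act @ w) > thresh
```

A runtime defense would apply such a check per forward pass, which is consistent with the summary's claim of no architectural changes.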

AI · Bullish · arXiv – CS AI · Apr 10 · 7/10

Computer Environments Elicit General Agentic Intelligence in LLMs

Researchers introduce LLM-in-Sandbox, a minimal computer environment that significantly enhances large language models' capabilities across diverse tasks without additional training. Training weaker models inside the environment further lets them internalize agent-like behaviors, demonstrating that environmental interaction, not just model parameters, drives general intelligence in LLMs.

AI · Bullish · arXiv – CS AI · Apr 10 · 7/10

DosimeTron: Automating Personalized Monte Carlo Radiation Dosimetry in PET/CT with Agentic AI

DosimeTron, an agentic AI system powered by GPT-5.2, automates personalized Monte Carlo radiation dosimetry calculations for PET/CT medical imaging. Validated on 597 studies across 378 patients, the system achieved 99.6% correlation with reference dosimetry calculations while processing each case in approximately 32 minutes with zero execution failures.

Tags: GPT-5
AI · Bullish · arXiv – CS AI · Apr 10 · 7/10

Space Filling Curves is All You Need: Communication-Avoiding Matrix Multiplication Made Simple

Researchers present a new approach to General Matrix Multiplication (GEMM) using Space Filling Curves that automatically optimizes data movement across memory hierarchies without requiring platform-specific tuning. The method achieves up to 5.5x speedups over vendor libraries and demonstrates significant performance gains in LLM inference and distributed computing applications.
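The core idea of a space-filling-curve traversal can be illustrated with a Morton (Z-order) tiled matrix multiply: visiting tile pairs in Z-order keeps successive tiles close in memory, improving locality without platform-specific tuning. This toy sketch shows the traversal order only; the paper's actual scheme, tile sizes, and memory-hierarchy mapping are not reproduced here:

```python
def morton(i, j, bits=8):
    # Interleave the bits of (i, j) into a Z-order index.
    z = 0
    for b in range(bits):
        z |= ((i >> b) & 1) << (2 * b + 1)
        z |= ((j >> b) & 1) << (2 * b)
    return z

def matmul_z_order(A, B, tile=2):
    """Tiled GEMM visiting output tiles in Morton (Z-curve) order."""
    n = len(A)
    C = [[0.0] * n for _ in range(n)]
    nt = n // tile
    tiles = sorted(((bi, bj) for bi in range(nt) for bj in range(nt)),
                   key=lambda t: morton(*t))
    for bi, bj in tiles:
        for bk in range(nt):
            for i in range(bi * tile, (bi + 1) * tile):
                for j in range(bj * tile, (bj + 1) * tile):
                    s = 0.0
                    for k in range(bk * tile, (bk + 1) * tile):
                        s += A[i][k] * B[k][j]
                    C[i][j] += s
    return C
```

In a real kernel the same curve would also order the inner `bk` loop and map onto cache/HBM levels; the Python loops here stand in for vendor-tuned microkernels.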

AI · Bullish · arXiv – CS AI · Apr 10 · 7/10

ClawLess: A Security Model of AI Agents

ClawLess introduces a formally verified security framework that enforces policies on AI agents operating with code execution and information retrieval capabilities, addressing risks that existing training-based approaches cannot adequately mitigate. The system uses BPF-based syscall interception and a user-space kernel to prevent adversarial AI agents from violating security boundaries, regardless of their internal design.

AI · Neutral · arXiv – CS AI · Apr 10 · 7/10

Blending Human and LLM Expertise to Detect Hallucinations and Omissions in Mental Health Chatbot Responses

Researchers demonstrate that standard LLM-as-a-judge methods achieve only 52% accuracy in detecting hallucinations and omissions in mental health chatbots, failing in high-risk healthcare contexts. A hybrid framework combining human domain expertise with machine learning features achieves significantly higher performance (0.717-0.849 F1 scores), suggesting that transparent, interpretable approaches outperform black-box LLM evaluation in safety-critical applications.

AI · Neutral · arXiv – CS AI · Apr 10 · 7/10

Invisible Influences: Investigating Implicit Intersectional Biases through Persona Engineering in Large Language Models

Researchers introduced BADx, a novel metric that measures how Large Language Models amplify implicit biases when adopting different social personas, revealing that popular LLMs like GPT-4o and DeepSeek-R1 exhibit significant context-dependent bias shifts. The study across five state-of-the-art models demonstrates that static bias testing methods fail to capture dynamic bias amplification, with implications for AI safety and responsible deployment.

Tags: GPT-4, Claude
AI · Bullish · arXiv – CS AI · Apr 10 · 7/10

SPICE: Submodular Penalized Information-Conflict Selection for Efficient Large Language Model Training

Researchers introduce SPICE, a data selection algorithm that reduces large language model training data requirements by 90% while maintaining performance by identifying and minimizing gradient conflicts between training samples. The method combines information-theoretic principles with practical efficiency improvements, enabling effective model tuning on just 10% of typical datasets across multiple benchmarks.
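A greedy selection that trades off marginal diversity against gradient conflict, loosely in the spirit of the objective described, can be sketched as follows. The scoring terms (`gain`, `conflict`, the weight `lam`) are illustrative assumptions, not SPICE's published formulation:

```python
import numpy as np

def select_subset(grads, k, lam=1.0):
    """Greedily pick k samples: reward gradients that add a new direction
    (submodular-style marginal gain) and penalize gradients that oppose
    already-selected ones (conflict)."""
    g = grads / np.linalg.norm(grads, axis=1, keepdims=True)
    selected = []
    remaining = list(range(len(g)))
    while len(selected) < k:
        best, best_score = None, -np.inf
        for i in remaining:
            if selected:
                sims = g[i] @ g[selected].T
                gain = 1.0 - np.max(sims)                  # diversity w.r.t. picks so far
                conflict = np.sum(np.clip(-sims, 0, None))  # opposing-gradient penalty
            else:
                gain, conflict = 1.0, 0.0
            score = gain - lam * conflict
            if score > best_score:
                best, best_score = i, score
        selected.append(best)
        remaining.remove(best)
    return selected
```

With `k` set to 10% of the pool, this mirrors the summary's "tune on just 10% of the data" setup, though the real method would compute gradients from the model being tuned.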

AI · Bearish · arXiv – CS AI · Apr 10 · 7/10

BadImplant: Injection-based Multi-Targeted Graph Backdoor Attack

Researchers have demonstrated the first multi-targeted backdoor attack against graph neural networks (GNNs) in graph classification tasks, using a novel subgraph injection method that simultaneously redirects multiple predictions to different target labels while maintaining clean accuracy. The attack shows high efficacy across multiple GNN architectures and datasets, with resilience against existing defense mechanisms, exposing significant vulnerabilities in GNN security.

AI · Neutral · arXiv – CS AI · Apr 10 · 7/10

Benchmarking LLM Tool-Use in the Wild

Researchers introduce WildToolBench, a new benchmark for evaluating large language models' ability to use tools in real-world scenarios. Testing 57 LLMs reveals that none exceed 15% accuracy, exposing significant gaps in current models' agentic capabilities when facing messy, multi-turn user interactions rather than simplified synthetic tasks.
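Scoring tool use typically reduces to matching predicted calls against gold calls. A minimal exact-match scorer, assuming a simple `{"name", "args"}` call format rather than WildToolBench's actual protocol:

```python
def tool_call_accuracy(predictions, references):
    """Fraction of turns where the predicted tool name and its full
    argument dict exactly match the reference call."""
    correct = sum(1 for p, r in zip(predictions, references)
                  if p["name"] == r["name"] and p["args"] == r["args"])
    return correct / len(references)
```

Exact matching is deliberately strict: a single wrong argument fails the turn, which is one reason multi-turn, messy-input benchmarks produce such low scores.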

AI · Bearish · arXiv – CS AI · Apr 10 · 7/10

LLM Spirals of Delusion: A Benchmarking Audit Study of AI Chatbot Interfaces

A comprehensive audit study reveals significant differences between LLM API testing and real-world chat interface usage, finding that ChatGPT-5 shows fewer problematic behaviors than ChatGPT-4o but both models still display substantial levels of delusion reinforcement and conspiratorial thinking amplification. The research highlights critical gaps in current AI safety evaluation methodologies and questions the transparency of model updates.

Tags: GPT-5, ChatGPT
AI · Bearish · arXiv – CS AI · Apr 10 · 7/10

Beyond Surface Judgments: Human-Grounded Risk Evaluation of LLM-Generated Disinformation

A new study challenges the validity of using LLM judges as proxies for human evaluation of AI-generated disinformation, finding that eight frontier LLM judges systematically diverge from human reader responses in their scoring, ranking, and reliance on textual signals. The research demonstrates that while LLMs agree strongly with each other, this internal coherence masks fundamental misalignment with actual human perception, raising critical questions about the reliability of automated content moderation at scale.

AI · Bullish · arXiv – CS AI · Apr 10 · 7/10

Less is More: Data-Efficient Adaptation for Controllable Text-to-Video Generation

Researchers demonstrate a data-efficient fine-tuning method for text-to-video diffusion models that enables new generative controls using sparse, low-quality synthetic data rather than expensive, photorealistic datasets. Counterintuitively, models trained on simple synthetic data outperform those trained on high-fidelity real data, supported by both empirical results and theoretical justification.

AI · Neutral · arXiv – CS AI · Apr 10 · 7/10

ATANT: An Evaluation Framework for AI Continuity

Researchers introduce ATANT, an open evaluation framework designed to measure whether AI systems can maintain coherent context and continuity across time without confusing information across different narratives. The framework achieves up to 100% accuracy in isolated scenarios but drops to 96% when managing 250 simultaneous narratives, revealing practical limitations in current AI memory architectures.

AI · Bearish · arXiv – CS AI · Apr 10 · 7/10

Concentrated siting of AI data centers drives regional power-system stress under rising global compute demand

A new study reveals that AI data centers are becoming a critical driver of electricity demand, with projected consumption doubling to 239–295 TWh by 2030. The concentrated geographic clustering of these facilities in North America, Western Europe, and Asia-Pacific creates significant grid vulnerabilities in regions like Oregon, Virginia, and Ireland, requiring urgent infrastructure planning.

AI · Bullish · arXiv – CS AI · Apr 10 · 7/10

Asking like Socrates: Socrates helps VLMs understand remote sensing images

Researchers introduce RS-EoT (Remote Sensing Evidence-of-Thought), a novel framework that enables vision-language models to reason more effectively about satellite imagery by iteratively seeking visual evidence rather than relying on linguistic patterns. The approach uses a self-play multi-agent system called SocraticAgent and reinforcement learning to address the 'Glance Effect,' where models superficially analyze large-scale remote sensing images, achieving state-of-the-art performance on multiple benchmarks.

AI · Bullish · arXiv – CS AI · Apr 10 · 7/10

Qualixar OS: A Universal Operating System for AI Agent Orchestration

Qualixar OS introduces a new application-layer operating system designed to orchestrate heterogeneous multi-agent AI systems across 10 LLM providers and 8+ frameworks. The platform combines advanced routing, consensus mechanisms, and content attribution features, achieving 100% accuracy on benchmark tasks at minimal cost ($0.000039 per task).

AI · Bearish · arXiv – CS AI · Apr 10 · 7/10

Riemann-Bench: A Benchmark for Moonshot Mathematics

Researchers introduced Riemann-Bench, a private benchmark of 25 expert-curated mathematics problems designed to evaluate AI systems on research-level reasoning beyond competition mathematics. The benchmark reveals that all frontier AI models currently score below 10%, exposing a significant gap between olympiad-level problem solving and genuine mathematical research capabilities.

AI · Bearish · arXiv – CS AI · Apr 10 · 7/10

Daily and Weekly Periodicity in Large Language Model Performance and Its Implications for Research

Researchers discovered that GPT-4o exhibits significant daily and weekly performance fluctuations when solving identical tasks under fixed conditions, with periodic variability accounting for approximately 20% of total variance. This finding fundamentally challenges the widespread assumption that LLM performance is time-invariant and raises critical concerns about the reliability and reproducibility of research utilizing large language models.

Tags: GPT-4
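The reported "share of variance due to periodicity" can be approximated with a one-way decomposition over hour-of-day bins: the fraction of total score variance explained by per-hour group means. A minimal sketch, not the paper's actual estimator:

```python
from collections import defaultdict

def periodic_variance_share(hours, scores):
    """Fraction of total variance in `scores` explained by hour-of-day
    group means (a one-way ANOVA-style between/total ratio)."""
    n = len(scores)
    mean = sum(scores) / n
    total = sum((s - mean) ** 2 for s in scores)
    groups = defaultdict(list)
    for h, s in zip(hours, scores):
        groups[h % 24].append(s)
    between = sum(len(g) * (sum(g) / len(g) - mean) ** 2
                  for g in groups.values())
    return between / total
```

A share near 0.2 on repeated identical tasks would match the roughly 20% figure in the summary; a weekly version would bin on day-of-week instead.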
AI · Bullish · arXiv – CS AI · Apr 10 · 7/10

Harnessing Hyperbolic Geometry for Harmful Prompt Detection and Sanitization

Researchers propose HyPE and HyPS, a two-part defense framework using hyperbolic geometry to detect and neutralize harmful prompts in Vision-Language Models. The approach offers a lightweight, interpretable alternative to blacklist filters and classifier-based systems that are vulnerable to adversarial attacks.
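The building block of such a defense is the hyperbolic distance itself. Below is a minimal sketch of the Poincaré-ball geodesic distance plus a nearest-centroid decision rule; the centroids are hypothetical and HyPE/HyPS's actual scoring is not reproduced here:

```python
import math

def poincare_distance(u, v):
    """Geodesic distance in the Poincaré ball model (requires ||u||, ||v|| < 1)."""
    du = sum(x * x for x in u)
    dv = sum(x * x for x in v)
    duv = sum((a - b) ** 2 for a, b in zip(u, v))
    return math.acosh(1 + 2 * duv / ((1 - du) * (1 - dv)))

def is_harmful(embedding, harmful_centroid, benign_centroid):
    """Flag a prompt embedding as harmful if it lies closer (in hyperbolic
    distance) to the harmful centroid than to the benign one."""
    return (poincare_distance(embedding, harmful_centroid)
            < poincare_distance(embedding, benign_centroid))
```

Because distances blow up near the ball's boundary, hierarchical relations (broad vs. specific harm categories) embed naturally, which is the usual motivation for the hyperbolic choice.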

AI · Neutral · arXiv – CS AI · Apr 10 · 7/10

Rethinking Generalization in Reasoning SFT: A Conditional Analysis on Optimization, Data, and Model Capability

Researchers challenge the conventional wisdom that supervised finetuning (SFT) merely memorizes while reinforcement learning generalizes. Their analysis reveals that reasoning SFT with chain-of-thought supervision can generalize across domains, but success depends critically on optimization duration, data quality, and base model strength, with generalization improvements coming at the cost of degraded safety performance.

AI · Bullish · CoinTelegraph · Apr 10 · 7/10

CIA to integrate AI ‘co-workers’ to process intelligence, catch spies

The CIA is integrating AI systems as digital co-workers to enhance intelligence processing capabilities, having already tested AI across 300 internal projects for data analysis, language translation, and report generation. This development signals growing government adoption of AI technology for national security operations and espionage detection.

Page 22 of 452