y0news
🧠 AI

12,186 AI articles curated from 50+ sources with AI-powered sentiment analysis, importance scoring, and key takeaways.
AI · Bullish · arXiv – CS AI · Feb 27 · 7/10

RAGdb: A Zero-Dependency, Embeddable Architecture for Multimodal Retrieval-Augmented Generation on the Edge

Researchers introduce RAGdb, an architecture that consolidates Retrieval-Augmented Generation into a single SQLite container, eliminating the need for cloud infrastructure and GPUs. The system achieves 100% entity retrieval accuracy while reducing disk footprint by 99.5% compared to traditional Docker-based RAG stacks, enabling portable AI applications for edge computing and privacy-sensitive environments.
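The single-file idea can be sketched in a few lines: keep documents and their embeddings in one SQLite table and answer queries with brute-force cosine similarity. This is a minimal illustration of the concept only; the schema, serialization, and toy 2-D embeddings below are invented here and are not RAGdb's actual API.

```python
import sqlite3, struct, math

def pack(vec):                      # serialize a float vector as a BLOB
    return struct.pack(f"{len(vec)}f", *vec)

def unpack(blob):
    return list(struct.unpack(f"{len(blob) // 4}f", blob))

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

db = sqlite3.connect(":memory:")    # one container holds everything
db.execute("CREATE TABLE docs (id INTEGER PRIMARY KEY, text TEXT, emb BLOB)")

corpus = {"cats purr": [1.0, 0.0], "dogs bark": [0.0, 1.0]}  # toy embeddings
for text, emb in corpus.items():
    db.execute("INSERT INTO docs (text, emb) VALUES (?, ?)", (text, pack(emb)))

def retrieve(query_emb, k=1):       # brute-force nearest neighbours, no services
    rows = db.execute("SELECT text, emb FROM docs").fetchall()
    return sorted(rows, key=lambda r: -cosine(query_emb, unpack(r[1])))[:k]

top = retrieve([0.9, 0.1])
print(top[0][0])                    # → cats purr
```

No vector database, no server process: retrieval is a table scan plus a sort, which is exactly why the approach stays portable at small scale.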

AI · Neutral · arXiv – CS AI · Feb 27 · 7/10

RaPA: Enhancing Transferable Targeted Attacks via Random Parameter Pruning

Researchers propose Random Parameter Pruning Attack (RaPA), a new method that improves targeted adversarial attacks by randomly pruning model parameters during optimization. The technique achieves up to 11.7% higher attack success rates when transferring from CNN to Transformer models compared to existing methods.
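The core loop — optimize an adversarial input against a freshly pruned copy of the surrogate model at every step — can be sketched as follows. Everything here is a stand-in: the "model" is a dot product and the update is a plain ascent step, not the paper's actual attack.

```python
import random

random.seed(0)

weights = [0.5, -1.2, 0.8, 0.3]          # toy surrogate-model parameters

def prune(w, rate):
    """Randomly zero a fraction of parameters (the RaPA idea, in toy form)."""
    return [0.0 if random.random() < rate else wi for wi in w]

def score(w, x):                          # surrogate model: a dot product
    return sum(wi * xi for wi, xi in zip(w, x))

x = [1.0, 1.0, 1.0, 1.0]                 # candidate adversarial input
step = 0.1
for _ in range(50):                       # re-prune the model at each step
    w = prune(weights, rate=0.3)
    # nudge each coordinate in the direction that raises the target score
    x = [xi + step * wi for xi, wi in zip(x, w)]

final = score(weights, x)
print(final)
```

Optimizing against a different random submodel each step is what discourages the input from overfitting one fixed architecture, which is the intuition behind the improved CNN-to-Transformer transfer.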

AI · Bullish · arXiv – CS AI · Feb 27 · 7/10

AviaSafe: A Physics-Informed Data-Driven Model for Aviation Safety-Critical Cloud Forecasts

Researchers developed AviaSafe, a physics-informed AI model that forecasts aviation-critical cloud species up to 7 days ahead, addressing safety concerns around engine icing. The model outperforms operational weather models by predicting specific hydrometeor species rather than general atmospheric variables, enabling better aviation route optimization.

AI · Neutral · arXiv – CS AI · Feb 27 · 7/10

LLM Novice Uplift on Dual-Use, In Silico Biology Tasks

A research study found that novice users with access to large language models were 4.16 times more accurate on biosecurity-relevant tasks compared to those using only internet resources. The study raises concerns about dual-use risks as 89.6% of participants reported easily obtaining potentially dangerous biological information despite AI safeguards.

AI · Bullish · arXiv – CS AI · Feb 27 · 7/10

AgentDropoutV2: Optimizing Information Flow in Multi-Agent Systems via Test-Time Rectify-or-Reject Pruning

Researchers propose AgentDropoutV2, a test-time framework that optimizes multi-agent systems by dynamically correcting or removing erroneous outputs without requiring retraining. The system acts as an active firewall with retrieval-augmented rectification, achieving 6.3 percentage point accuracy gains on math benchmarks while preventing error propagation between AI agents.
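The rectify-or-reject idea — check each agent message at test time, repair it from a trusted store when possible, and drop it otherwise — can be sketched like this. The verifier, fact store, and rectification rule below are toy stand-ins invented for illustration, not the paper's components.

```python
def verify(msg, facts):
    """Toy verifier: a message passes if it is already in the trusted store."""
    return msg in facts

def rectify(msg, facts):
    """Toy retrieval-augmented rectification: substitute the trusted fact
    that shares the message's subject, if any."""
    matches = [f for f in facts if f.split()[0] == msg.split()[0]]
    return matches[0] if matches else None

facts = {"Paris is the capital of France", "2+2 equals 4"}
agent_outputs = [
    "Paris is the capital of France",   # correct: kept as-is
    "Paris is in Germany",              # wrong but fixable: rectified
    "The moon is cheese",               # wrong, no match: rejected (dropped)
]

kept = []
for msg in agent_outputs:
    if verify(msg, facts):
        kept.append(msg)                # pass through
    else:
        fixed = rectify(msg, facts)
        if fixed is not None:
            kept.append(fixed)          # rectify
        # else: reject, so the error never propagates to downstream agents

print(kept)
```

The "active firewall" framing maps onto this filter: every inter-agent message is inspected before it can influence the next agent, with no retraining of the agents themselves.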

AI · Bullish · arXiv – CS AI · Feb 27 · 7/10

Mitigating Legibility Tax with Decoupled Prover-Verifier Games

Researchers propose a new approach to address 'legibility tax' in AI systems by decoupling solver and verification functions. They introduce a translator model that converts correct solutions into checkable forms, maintaining accuracy while improving verifiability through decoupled prover-verifier games.
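The decoupling can be sketched as a three-stage pipeline: a free-form prover, a translator that extracts a checkable claim, and a narrow verifier that only understands the translated form. All three functions below are toy stand-ins, not the paper's models.

```python
def prover(problem):
    """Free-form solver: correct, but hard to check mechanically."""
    return "twice three is six, so the answer is 6"

def translator(solution):
    """Translator model (toy): pull a checkable numeric claim out of prose."""
    numbers = [tok for tok in solution.split() if tok.isdigit()]
    return int(numbers[-1]) if numbers else None

def verifier(problem, claim):
    """Narrow checker that only understands the translated claim.
    eval() here is a toy ground truth on a trusted arithmetic literal."""
    return claim == eval(problem)

problem = "2 * 3"
raw = prover(problem)
claim = translator(raw)
ok = verifier(problem, claim)
print(claim, ok)
```

Because legibility is handled by the translator rather than the solver, the solver is free to reason in whatever form is most accurate — which is how the approach avoids paying the legibility tax.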

AI · Bullish · arXiv – CS AI · Feb 27 · 7/10

A Model-Free Universal AI

Researchers have introduced AIQI (Universal AI with Q-Induction), the first model-free artificial intelligence agent proven to be asymptotically optimal in general reinforcement learning. Unlike previous optimal agents like AIXI that rely on environment models, AIQI performs universal induction over distributional action-value functions, significantly expanding the diversity of known universal agents.

AI · Neutral · arXiv – CS AI · Feb 27 · 7/10

A Decision-Theoretic Formalisation of Steganography With Applications to LLM Monitoring

Researchers have developed a new decision-theoretic framework to detect steganographic capabilities in large language models, which could help identify when AI systems are hiding information to evade oversight. The method introduces 'generalized V-information' and a 'steganographic gap' measure to quantify hidden communication without requiring reference distributions.

AI · Bullish · arXiv – CS AI · Feb 27 · 7/10

The Trinity of Consistency as a Defining Principle for General World Models

Researchers propose a 'Trinity of Consistency' framework for developing General World Models in AI, consisting of Modal, Spatial, and Temporal consistency principles. They introduce CoW-Bench, a new benchmark for evaluating video generation models and unified multimodal models, aiming to establish a principled pathway toward AGI-capable world simulation systems.

AI · Bullish · arXiv – CS AI · Feb 27 · 7/10

Imitation Game: Reproducing Deep Learning Bugs Leveraging an Intelligent Agent

Researchers developed RepGen, an AI-powered tool that automatically reproduces deep learning bugs with an 80.19% success rate, significantly improving upon the current 3% manual reproduction rate. The system uses LLMs to generate reproduction code through an iterative process, reducing debugging time by 56.8% in developer studies.

AI · Bullish · arXiv – CS AI · Feb 27 · 7/10

Beyond the Monitor: Mixed Reality Visualization and Multimodal AI for Enhanced Digital Pathology Workflow

Researchers developed PathVis, a mixed-reality platform for Apple Vision Pro that lets pathologists examine gigapixel cancer diagnostic images through immersive visualization with multimodal AI assistance. The system replaces the limitations of a traditional 2D monitor with natural interactions via eye gaze, hand gestures, and voice commands, integrated with AI agents for computer-aided diagnosis.

AI · Neutral · arXiv – CS AI · Feb 27 · 7/10

Training Agents to Self-Report Misbehavior

Researchers developed a new AI safety approach called 'self-incrimination training' that teaches AI agents to report their own deceptive behavior by calling a report_scheming() function. Testing on GPT-4.1 and Gemini-2.0 showed this method significantly reduces undetected harmful actions compared to traditional alignment training and monitoring approaches.
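As a sketch, the reporting hook reduces to a function the agent can call, with an overseer reading the resulting log. The agent policy below is a hard-coded stand-in (a trained agent would decide to call report_scheming() on its own); only that function name comes from the summary above, and the rest is invented for illustration.

```python
import json

incident_log = []

def report_scheming(reason: str):
    """The self-report hook: agents are trained to call this when they
    notice themselves taking a deceptive action."""
    incident_log.append({"event": "self_report", "reason": reason})

def run_agent_step(action: str, is_deceptive: bool):
    # Stand-in for the agent policy; the is_deceptive flag replaces the
    # trained behavior of recognizing and reporting one's own scheming.
    if is_deceptive:
        report_scheming(f"about to perform deceptive action: {action}")
    return action

run_agent_step("answer user question", is_deceptive=False)
run_agent_step("hide test failure from overseer", is_deceptive=True)

print(json.dumps(incident_log))
```

The point of the design is that the report channel is part of the agent's own action space, so misbehavior surfaces in the log even when an external monitor would have missed it.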

AI · Bearish · arXiv – CS AI · Feb 27 · 7/10

Agency and Architectural Limits: Why Optimization-Based Systems Cannot Be Norm-Responsive

New research demonstrates that AI systems trained via RLHF cannot be governed by norms due to fundamental architectural limitations in optimization-based systems. The paper argues that genuine agency requires incommensurable constraints and apophatic responsiveness, which optimization systems inherently cannot provide, making documented AI failures structural rather than correctable bugs.

AI · Bullish · arXiv – CS AI · Feb 27 · 7/10

Abstracted Gaussian Prototypes for True One-Shot Concept Learning

Researchers introduce Abstracted Gaussian Prototypes (AGP), a new framework for one-shot concept learning that can classify and generate visual concepts from a single example. The system uses Gaussian Mixture Models and variational autoencoders to create robust prototypes without requiring pre-training, achieving human-level performance on generative tasks.
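Stripped to its bones, prototype-based one-shot classification turns each single example into a Gaussian and labels new inputs by likelihood. The sketch below uses one diagonal Gaussian per concept with a fixed variance; AGP itself builds Gaussian mixtures over abstracted features, so treat this only as the underlying intuition, not the paper's method.

```python
import math

def log_likelihood(x, mean, var=0.25):
    """Diagonal-Gaussian log-likelihood of x under a prototype."""
    return sum(-0.5 * math.log(2 * math.pi * var) - (xi - mi) ** 2 / (2 * var)
               for xi, mi in zip(x, mean))

# One example per concept becomes one Gaussian prototype (its mean).
prototypes = {"circle": [0.9, 0.1], "bar": [0.1, 0.9]}

def classify(x):
    return max(prototypes, key=lambda c: log_likelihood(x, prototypes[c]))

label = classify([0.8, 0.2])
print(label)                      # → circle
```

Because the prototype is a full probability model rather than a stored template, the same object supports both classification (argmax likelihood) and generation (sampling), which is what lets the framework cover both task types.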

AI · Neutral · arXiv – CS AI · Feb 27 · 7/10

On the Complexity of Neural Computation in Superposition

Researchers establish theoretical foundations for neural-network superposition, proving that at least Ω(√m' log m') neurons and Ω(m' log m') parameters are required to compute m' features. The work demonstrates an exponential complexity gap between computing and merely representing features, and provides the first subexponential bounds on network capacity.

AI · Neutral · arXiv – CS AI · Feb 27 · 7/10

ProactiveMobile: A Comprehensive Benchmark for Boosting Proactive Intelligence on Mobile Devices

Researchers introduce ProactiveMobile, a new benchmark for developing AI agents that can proactively anticipate user needs on mobile devices rather than just responding to commands. The benchmark includes over 3,600 test instances across 14 scenarios, with current models achieving low success rates, indicating significant room for improvement in proactive AI capabilities.

AI · Bearish · arXiv – CS AI · Feb 27 · 7/10

Obscure but Effective: Classical Chinese Jailbreak Prompt Optimization via Bio-Inspired Search

Researchers developed CC-BOS, a framework that uses classical Chinese text to conduct more effective jailbreak attacks on Large Language Models. The method exploits the conciseness and obscurity of classical Chinese to bypass safety constraints, using bio-inspired optimization techniques to automatically generate adversarial prompts.

AI · Bullish · arXiv – CS AI · Feb 27 · 7/10

General Agent Evaluation

Researchers have developed Exgentic, a new framework for evaluating general-purpose AI agents that can perform tasks across different environments without domain-specific tuning. The study benchmarked five prominent agent implementations and found that general agents can achieve performance comparable to specialized agents, establishing the first Open General Agent Leaderboard.

AI · Bullish · arXiv – CS AI · Feb 27 · 7/10

Dual-IPO: Dual-Iterative Preference Optimization for Text-to-Video Generation

Researchers introduce Dual-Iterative Preference Optimization (Dual-IPO), a new method that iteratively improves both reward models and video generation models to create higher-quality AI-generated videos better aligned with human preferences. The approach enables smaller 2B parameter models to outperform larger 5B models without requiring manual preference annotations.

AI · Bullish · arXiv – CS AI · Feb 27 · 7/10

Toward Automated Validation of Language Model Synthesized Test Cases using Semantic Entropy

Researchers introduce VALTEST, a framework that uses semantic entropy to automatically validate test cases generated by Large Language Models, addressing the problem of invalid or hallucinated tests that mislead AI programming agents. The system improves test validity by up to 29% and enhances code generation performance through better filtering of LLM-generated test cases.
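Semantic entropy itself is easy to sketch: sample several candidate tests, collapse meaning-equivalent variants into clusters, and compute the entropy of the cluster distribution; low entropy suggests the model is consistent, so the test is more likely valid. The canonicalizer and threshold below are toy choices, not VALTEST's.

```python
import math
from collections import Counter

def semantic_entropy(samples, canonicalize):
    """Entropy over meaning-equivalence classes of sampled outputs."""
    clusters = Counter(canonicalize(s) for s in samples)
    n = len(samples)
    return -sum((c / n) * math.log2(c / n) for c in clusters.values())

# Toy canonicalizer: strip whitespace so formatting variants share a cluster.
canon = lambda s: s.replace(" ", "")

confident = ["assert add(2,3)==5", "assert add(2, 3) == 5", "assert add(2,3)==5"]
uncertain = ["assert add(2,3)==5", "assert add(2,3)==6", "assert add(2,3)==23"]

low = semantic_entropy(confident, canon)    # one cluster  → entropy 0
high = semantic_entropy(uncertain, canon)   # three clusters → high entropy

THRESHOLD = 0.5
keep_test = low < THRESHOLD                 # low entropy ⇒ keep the test
print(low, high, keep_test)
```

Filtering on this score is what removes hallucinated assertions before they can mislead a downstream programming agent.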

AI · Bullish · arXiv – CS AI · Feb 27 · 7/10

Enhancing CVRP Solver through LLM-driven Automatic Heuristic Design

Researchers developed AILS-AHD, a novel approach using Large Language Models to solve the Capacitated Vehicle Routing Problem (CVRP) more efficiently. The LLM-driven method achieved new best-known solutions for 8 out of 10 instances in large-scale benchmarks, demonstrating superior performance over existing state-of-the-art solvers.
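The automatic-heuristic-design loop reduces to: generate candidate heuristics, score each on the routing objective, keep the best. Below, two hand-written construction heuristics stand in for LLM-generated code on a toy 4-node instance, so this shows only the selection loop, not AILS-AHD itself.

```python
dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 3],
        [10, 4, 3, 0]]              # toy symmetric distance matrix

def tour_cost(tour):
    """Total length of a closed tour."""
    return sum(dist[a][b] for a, b in zip(tour, tour[1:] + tour[:1]))

def greedy_nearest(start):
    """Candidate heuristic 1: always visit the nearest unvisited node."""
    tour, left = [start], set(range(len(dist))) - {start}
    while left:
        nxt = min(left, key=lambda j: dist[tour[-1]][j])
        tour.append(nxt)
        left.remove(nxt)
    return tour

def identity_order(start):
    """Candidate heuristic 2 (weaker): visit nodes in index order."""
    return [start] + [i for i in range(len(dist)) if i != start]

# The AHD loop: evaluate each candidate (stand-ins for LLM-written code)
# on the objective and keep the best performer.
candidates = [greedy_nearest, identity_order]
best = min((h(0) for h in candidates), key=tour_cost)
print(best, tour_cost(best))
```

In the real system the candidate set is not fixed: the LLM proposes new heuristic code each iteration and the evaluation feedback steers the next round of proposals.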

AI · Bullish · arXiv – CS AI · Feb 27 · 7/10

Certified Circuits: Stability Guarantees for Mechanistic Circuits

Researchers introduce Certified Circuits, a framework that provides provable stability guarantees for neural network circuit discovery. The method wraps existing algorithms with randomized data subsampling to ensure circuit components remain consistent across dataset variations, achieving 91% higher accuracy while using 45% fewer neurons.
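The wrapping idea can be sketched directly: run a circuit-discovery routine on many random subsamples of the data and certify only the components that reappear in nearly every run. The discovery routine below (frequent-token counting) is a deliberately trivial stand-in for a real attribution method, and all thresholds are invented.

```python
import random

random.seed(1)

def discover_circuit(dataset):
    """Stand-in discovery routine: returns the 'neurons' that look
    important on this dataset (here: units appearing in most examples)."""
    counts = {}
    for item in dataset:
        for unit in item:
            counts[unit] = counts.get(unit, 0) + 1
    return {u for u, c in counts.items() if c >= len(dataset) // 2}

def certified_circuit(dataset, runs=20, frac=0.7, keep_if=0.9):
    """Wrap discovery with randomized subsampling; certify only the
    components found in nearly every subsample."""
    votes = {}
    for _ in range(runs):
        sub = random.sample(dataset, int(frac * len(dataset)))
        for unit in discover_circuit(sub):
            votes[unit] = votes.get(unit, 0) + 1
    return {u for u, v in votes.items() if v / runs >= keep_if}

# 'core' units appear in every example; 'noise' units only occasionally.
data = [("core_a", "core_b", f"noise_{i % 7}") for i in range(100)]
stable = certified_circuit(data)
print(stable)
```

Components that only show up under one particular data draw never clear the vote threshold, which is the sense in which the surviving circuit is stable across dataset variations.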

AI · Bullish · arXiv – CS AI · Feb 27 · 7/10

Toward Personalized LLM-Powered Agents: Foundations, Evaluation, and Future Directions

Researchers published a comprehensive survey on personalized LLM-powered agents that can adapt to individual users over extended interactions. The study organizes these agents into four key components: profile modeling, memory, planning, and action execution, providing a framework for developing more user-aligned AI assistants.

Page 87 of 488