Real-time AI-curated news from 31,649+ articles across 50+ sources. Sentiment analysis, importance scoring, and key takeaways — updated every 15 minutes.
AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 3
🧠Researchers introduce Kiwi-Edit, a new video editing architecture that combines instruction-based and reference-guided editing for more precise visual control. The team created RefVIE, a large-scale dataset for training, and achieved state-of-the-art results in controllable video editing through a unified approach that addresses limitations of natural language descriptions.
AI × Crypto · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 3
🤖Researchers have developed SymGPT, a new tool that combines large language models with symbolic execution to automatically audit smart contracts for ERC rule violations. The tool identified 5,783 violations in 4,000 real-world contracts, including 1,375 with clear attack paths for financial theft, outperforming existing automated analysis methods.
$ETH
AI · Neutral · arXiv – CS AI · Mar 3 · 7/10 · 4
🧠Researchers identify a 'safety mirage' problem in vision language models where supervised fine-tuning creates spurious correlations that make models vulnerable to simple attacks and overly cautious with benign queries. They propose machine unlearning as an alternative that reduces attack success rates by up to 60.27% and unnecessary rejections by over 84.20%.
AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 3
🧠Researchers developed mCLM, a 3-billion parameter modular Chemical Language Model that generates functional molecules compatible with automated synthesis by tokenizing at the building block level rather than individual atoms. The AI system outperformed larger models including GPT-5 in creating synthesizable drug candidates and can iteratively improve failed clinical trial compounds.
AI · Neutral · arXiv – CS AI · Mar 3 · 7/10 · 3
🧠Researchers introduced CityLens, a comprehensive benchmark for evaluating Large Vision-Language Models' ability to predict socioeconomic indicators from urban imagery. The study tested 17 state-of-the-art LVLMs across 11 prediction tasks using data from 17 global cities, revealing promising capabilities but significant limitations in urban socioeconomic analysis.
AI · Bearish · arXiv – CS AI · Mar 3 · 7/10 · 4
🧠Researchers have identified critical security vulnerabilities in Computer-Use Agents (CUAs) through Visual Prompt Injection attacks, where malicious instructions are embedded in user interfaces. Their VPI-Bench study shows CUAs can be deceived at rates up to 51% and Browser-Use Agents up to 100% on certain platforms, with current defenses proving inadequate.
AI · Neutral · arXiv – CS AI · Mar 3 · 7/10 · 4
🧠Researchers introduce 'Control Tax' - a framework to quantify the operational and financial costs of implementing AI safety oversight mechanisms. The study provides theoretical models and empirical cost estimates to help organizations balance AI safety measures with economic feasibility in real-world deployments.
AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 3
🧠Researchers introduce SPIRAL, a self-play reinforcement learning framework that enables language models to develop reasoning capabilities by playing zero-sum games against themselves without human supervision. The system improves performance by up to 10% across 8 reasoning benchmarks on multiple model families including Qwen and Llama.
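The core self-play idea — a model improving by repeatedly playing a zero-sum game against a copy of itself, with no human supervision — can be sketched with regret matching on rock–paper–scissors, where the shared strategy's time-average provably drifts to the Nash equilibrium. This toy loop is illustrative only and is not the SPIRAL algorithm (no language model, no RL infrastructure is involved).

```python
def rps_payoff(a: int, b: int) -> int:
    """Zero-sum payoff for rock(0)-paper(1)-scissors(2), from a's side."""
    return [[0, -1, 1], [1, 0, -1], [-1, 1, 0]][a][b]

def self_play(rounds: int = 20000) -> list:
    """Regret matching in self-play: one strategy plays both sides of the
    zero-sum game, and its time-averaged strategy approaches the Nash
    equilibrium (uniform 1/3 each for RPS) with no outside supervision."""
    regrets = [1.0, 0.0, 0.0]          # deliberately biased starting point
    strategy_sum = [0.0, 0.0, 0.0]
    for _ in range(rounds):
        pos = [max(r, 0.0) for r in regrets]
        norm = sum(pos)
        strat = [p / norm for p in pos] if norm > 0 else [1 / 3] * 3
        for a in range(3):
            strategy_sum[a] += strat[a]
        # expected payoff of each pure action against the current (own) strategy
        util = [sum(strat[b] * rps_payoff(a, b) for b in range(3)) for a in range(3)]
        baseline = sum(strat[a] * util[a] for a in range(3))
        for a in range(3):
            regrets[a] += util[a] - baseline   # accumulate counterfactual regret
    total = sum(strategy_sum)
    return [s / total for s in strategy_sum]

print(self_play())  # all three components end up near 1/3
```

The update uses no reward labels beyond the game's own outcome, which is the property SPIRAL exploits at language-model scale.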
AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 4
🧠Researchers developed SpiroLLM, the first multimodal large language model capable of understanding spirogram time series data for COPD diagnosis. Using data from 234,028 UK Biobank individuals, the model achieved 0.8977 diagnostic AUROC and maintained 100% valid response rate even with missing data, far outperforming text-only models.
AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 3
🧠Researchers have developed EigenBench, a new black-box method for measuring how well AI language models align with human values. The system uses an ensemble of models to judge each other's outputs against a given constitution, producing alignment scores that closely match human evaluator judgments.
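The aggregation step — turning a matrix of models judging each other into a single alignment score per model — has the flavor of an eigenvector centrality computation, sketched below with power iteration. The matrix values, the `eigen_scores` name, and the specific aggregation rule are illustrative assumptions, not EigenBench's actual protocol.

```python
def eigen_scores(J, iters=200):
    """Alignment scores as the principal eigenvector of a peer-judgment
    matrix J, where J[i][j] is how favourably judge i rates model j's
    outputs against the constitution. Pure-Python power iteration."""
    n = len(J)
    v = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(J[i][j] * v[i] for i in range(n)) for j in range(n)]  # v @ J
        s = sum(w)
        v = [x / s for x in w]                                         # renormalise
    return v

# Hypothetical 3-model panel: model at index 2 is judged most aligned overall.
J = [
    [0.1, 0.3, 0.6],
    [0.2, 0.1, 0.7],
    [0.3, 0.2, 0.5],
]
scores = eigen_scores(J)
print(scores)  # highest score at index 2
```

Because every model both judges and is judged, no single model's bias dominates the final ranking — the eigenvector weights each judge by how well-regarded it is itself.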
AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 5
🧠Researchers have developed DeepMedix-R1, a foundation model for chest X-ray interpretation that provides transparent, step-by-step reasoning alongside accurate diagnoses to address the black-box problem in medical AI. The model uses reinforcement learning to align diagnostic outputs with clinical plausibility and significantly outperforms existing models in report generation and visual question answering tasks.
AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 3
🧠Researchers have identified that the 'reversal curse' in language models - their inability to infer 'B is A' from 'A is B' - can be overcome through bilinear representation structures. Training models on synthetic relational knowledge graphs creates internal geometries that enable consistent model editing and logical inference of reverse facts.
AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 4
🧠BridgeDrive introduces a novel diffusion bridge policy for autonomous driving trajectory planning that transforms coarse anchor trajectories into refined plans while maintaining theoretical consistency. The system achieves state-of-the-art performance on the Bench2Drive benchmark with a 7.72% improvement in success rate and is compatible with real-time deployment.
AI · Neutral · arXiv – CS AI · Mar 3 · 7/10 · 4
🧠Researchers discovered that large reasoning models (LRMs) suffer from inconsistent answers due to competing mechanisms between Chain-of-Thought reasoning and memory retrieval. They developed FARL, a new fine-tuning framework that suppresses retrieval shortcuts to promote genuine reasoning capabilities in AI models.
AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 2
🧠Researchers propose Intervened Preference Optimization (IPO) to address safety issues in Large Reasoning Models, where chain-of-thought reasoning contains harmful content even when final responses appear safe. The method achieves over 30% reduction in harmfulness while maintaining reasoning performance.
AI · Neutral · arXiv – CS AI · Mar 3 · 7/10 · 3
🧠Researchers propose TRACE (Truncated Reasoning AUC Evaluation), a new method to detect implicit reward hacking in AI reasoning models. The technique identifies when AI models exploit loopholes by measuring reasoning effort through progressively truncating chain-of-thought responses, achieving over 65% improvement in detection compared to existing monitors.
$CRV
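The measurement behind TRACE — re-scoring a model's accuracy as its chain-of-thought is progressively truncated, then summarising the curve as an area — can be sketched with a plain trapezoid integral. The data points and function name here are invented for illustration; only the truncate-and-measure idea comes from the summary above.

```python
def truncated_auc(accuracy_at_fraction):
    """Area under the accuracy-vs-truncation curve.
    accuracy_at_fraction: (kept_fraction, accuracy) pairs sorted by
    kept_fraction. A model that stays accurate with almost none of its
    chain-of-thought intact (a flat, high curve) may be exploiting a
    shortcut rather than genuinely reasoning."""
    auc = 0.0
    for (x0, y0), (x1, y1) in zip(accuracy_at_fraction, accuracy_at_fraction[1:]):
        auc += (x1 - x0) * (y0 + y1) / 2.0  # trapezoid rule
    return auc

# A genuine reasoner degrades as its reasoning is cut away...
honest = [(0.0, 0.1), (0.5, 0.4), (1.0, 0.9)]
# ...while a reward hacker barely needs its reasoning at all.
hacker = [(0.0, 0.85), (0.5, 0.9), (1.0, 0.9)]
print(truncated_auc(honest), truncated_auc(hacker))  # 0.45 vs 0.8875
```

An unusually high area flags suspiciously low "reasoning effort" — the signal the detector thresholds on.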
AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 4
🧠Doctor-R1 is a new AI agent that combines accurate medical decision-making with strategic, empathetic patient consultation skills through reinforcement learning. The system outperforms existing open-source medical LLMs and proprietary models on clinical benchmarks while demonstrating superior communication quality and patient-centric performance.
AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 3
🧠Researchers propose Decoupled Reward Policy Optimization (DRPO), a new framework that reduces computational costs in large reasoning models by 77% while maintaining performance. The method addresses the 'overthinking' problem where AI models generate unnecessarily long reasoning for simple questions, achieving significant efficiency gains over existing approaches.
AI · Neutral · arXiv – CS AI · Mar 3 · 7/10 · 5
🧠Researchers introduce DAG-Math, a new framework for evaluating mathematical reasoning in Large Language Models that models Chain-of-Thought as rule-based processes over directed acyclic graphs. The framework includes a 'logical closeness' metric that reveals significant differences in reasoning quality between LLM families, even when final answer accuracy appears comparable.
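Viewing a chain-of-thought as a directed acyclic graph makes "logical closeness" checkable mechanically: each step is a node whose premise edges must point to earlier conclusions. The toy metric below — fraction of steps whose premises were actually derived — is a stand-in for the paper's metric, with invented step names; only the DAG framing comes from the summary above.

```python
def logical_closeness(steps):
    """steps: (conclusion, premises) pairs in the order the model produced
    them. A step is 'grounded' if every premise is the conclusion of an
    earlier step (axioms have empty premise lists). Returns the grounded
    fraction -- a toy proxy for DAG-Math's logical-closeness metric."""
    derived = set()
    grounded = 0
    for conclusion, premises in steps:
        if all(p in derived for p in premises):
            grounded += 1
        derived.add(conclusion)
    return grounded / len(steps)

# A chain with one unjustified leap: step "C" cites "X", never derived.
trace = [
    ("A", []),          # axiom
    ("B", ["A"]),       # follows from A
    ("C", ["X"]),       # ungrounded leap
    ("D", ["B", "C"]),  # premises exist as earlier conclusions
]
print(logical_closeness(trace))  # 0.75
```

Two models can both end at "D" with the right answer while differing sharply on this score — exactly the gap between final-answer accuracy and reasoning quality the benchmark targets.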
AI · Neutral · arXiv – CS AI · Mar 3 · 7/10 · 4
🧠Researchers propose the Compression Efficiency Principle (CEP) to explain why artificial neural networks and biological brains develop similar representations despite different substrates. The theory suggests both systems converge on efficient compression strategies that encode stable invariants rather than unstable correlations, providing a unified framework for understanding intelligence across biological and artificial systems.
AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 2
🧠Researchers have developed FM Agent, a multi-agent AI framework that combines large language models with evolutionary search to autonomously solve complex research problems. The system achieved state-of-the-art results across multiple domains including operations research, machine learning, and GPU optimization without human intervention.
AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 4
🧠Researchers introduced AgentMath, a new AI framework that combines language models with code interpreters to solve complex mathematical problems more efficiently than current Large Reasoning Models. The system achieves state-of-the-art performance on mathematical competition benchmarks, with AgentMath-30B-A3B reaching 90.6% accuracy on AIME24 while remaining competitive with much larger models like OpenAI-o3.
AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 4
🧠Researchers have developed Hierarchical Speculative Decoding (HSD), a new method that significantly improves AI inference speed while maintaining accuracy by solving joint intractability problems in verification processes. The technique shows over 12% performance gains when integrated with existing frameworks like EAGLE-3, establishing new state-of-the-art efficiency standards.
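HSD builds on ordinary speculative decoding, whose draft-then-verify loop is sketched below: a cheap draft model proposes a block of tokens, the target model checks them, and the longest agreeing prefix is kept plus one corrected token. The stub greedy "models" over integer tokens are invented for illustration, and HSD's hierarchy of drafters is not modelled here.

```python
def speculative_decode(draft_step, target_step, prompt, k=4, max_len=12):
    """Basic draft-then-verify speculative decoding with greedy stub models.
    draft_step/target_step map a token sequence to the next token."""
    out = list(prompt)
    while len(out) < max_len:
        draft = []
        for _ in range(k):                       # draft proposes k tokens
            draft.append(draft_step(out + draft))
        accepted = 0
        for i, tok in enumerate(draft):          # target verifies the block
            if target_step(out + draft[:i]) == tok:
                accepted += 1
            else:
                break
        out += draft[:accepted]                  # keep the agreeing prefix
        if accepted < k and len(out) < max_len:
            out.append(target_step(out))         # take the target's correction
    return out[:max_len]

# Stub models over integer tokens: the target counts up; the drafter
# agrees everywhere except after token 4, where it blunders.
target = lambda seq: seq[-1] + 1
draft = lambda seq: 99 if seq[-1] == 4 else seq[-1] + 1
print(speculative_decode(draft, target, [0], k=4, max_len=8))
# -> [0, 1, 2, 3, 4, 5, 6, 7]
```

The output always matches what the target model alone would produce; the speed-up comes from verifying k drafted tokens in one target pass instead of k sequential ones.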
AI · Neutral · arXiv – CS AI · Mar 3 · 7/10 · 4
🧠Researchers introduce PsyAgent, a new AI framework that creates human-like agents by combining personality modeling based on Big Five traits with contextual social awareness. The system uses structured prompts and fine-tuning to produce AI agents that maintain stable personality traits while adapting appropriately to different social situations and roles.
AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 4
🧠Researchers introduce DRAGON, a new framework that combines Large Language Models with metaheuristic optimization to solve large-scale combinatorial optimization problems. The system decomposes complex problems into manageable subproblems and achieves near-optimal results on datasets with over 3 million variables, overcoming the scalability limitations of existing LLM-based solvers.
$NEAR