y0news

#research News & Analysis

909 articles tagged with #research. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.

AI · Bullish · arXiv – CS AI · Feb 27 · 7/10
🧠

Learning beyond Teacher: Generalized On-Policy Distillation with Reward Extrapolation

Researchers propose Generalized On-Policy Distillation (G-OPD), a new AI training framework that improves on standard on-policy distillation by introducing flexible reference models and reward scaling factors. The ExOPD variant in particular, which uses reward extrapolation, enables smaller student models to surpass their teacher in math reasoning and code generation tasks.
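
The extrapolation idea fits in a few lines. Everything below (the function name, the exact linear form, the alpha value) is an illustrative assumption, not the paper's implementation:

```python
def extrapolated_reward(logp_teacher, logp_ref, alpha=2.0):
    """Scale the teacher-vs-reference log-probability gap.

    alpha == 1 recovers the plain teacher signal; alpha > 1
    extrapolates beyond the teacher, which is what could let a
    student outgrow it. Purely illustrative form.
    """
    return logp_ref + alpha * (logp_teacher - logp_ref)

# A token the teacher prefers over the reference gets boosted further:
r = extrapolated_reward(logp_teacher=-1.0, logp_ref=-2.0, alpha=2.0)  # 0.0
```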

AI · Neutral · arXiv – CS AI · Feb 27 · 7/10
🧠

On the Equivalence of Random Network Distillation, Deep Ensembles, and Bayesian Inference

Researchers establish theoretical connections between Random Network Distillation (RND), deep ensembles, and Bayesian inference for uncertainty quantification in deep learning models. The study proves that RND's uncertainty signals are equivalent to deep ensemble predictive variance and can mirror Bayesian posterior distributions, providing a unified theoretical framework for efficient uncertainty quantification methods.
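
The RND mechanism behind the equivalence is easy to sketch: a predictor trained only on seen data is scored by its error against a frozen target function, and that error serves as the uncertainty signal the paper relates to ensemble variance. The toy nonlinear target and linear predictor below are stand-ins, not the paper's setup:

```python
import random
random.seed(0)

# Frozen "target network" the predictor must imitate; a toy nonlinear
# map stands in for a random deep network here.
def target(x):
    return x[0] * x[1]

pred_w = [0.0, 0.0]  # linear predictor, trained on seen data only
train = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(500)]
for _ in range(20):  # plain SGD on squared error
    for x in train:
        err = pred_w[0] * x[0] + pred_w[1] * x[1] - target(x)
        pred_w = [w - 0.1 * err * xi for w, xi in zip(pred_w, x)]

def rnd_score(x):
    # RND uncertainty: squared prediction error against the frozen target
    return (pred_w[0] * x[0] + pred_w[1] * x[1] - target(x)) ** 2

seen = sum(rnd_score(x) for x in train) / len(train)  # low on familiar data
novel = rnd_score([5.0, 5.0])                         # high off-distribution
```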

AI · Neutral · arXiv – CS AI · Feb 27 · 7/10
🧠

Using the Path of Least Resistance to Explain Deep Networks

Researchers propose Geodesic Integrated Gradients (GIG), a new method for explaining AI model decisions that uses curved paths instead of straight lines to compute feature importance. The method addresses flawed attributions in existing approaches by integrating gradients along geodesic paths under a model-induced Riemannian metric.
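
For contrast, here is standard Integrated Gradients, the straight-line method GIG generalizes; the toy gradient function is assumed for illustration:

```python
def integrated_gradients(grad_f, x, baseline, steps=100):
    """Midpoint Riemann sum of gradients along the straight line from
    baseline to x. GIG replaces this line with a geodesic under a
    model-induced Riemannian metric; this is the method it fixes."""
    total = [0.0] * len(x)
    for k in range(steps):
        alpha = (k + 0.5) / steps
        point = [b + alpha * (xi - b) for xi, b in zip(x, baseline)]
        g = grad_f(point)
        total = [t + gi for t, gi in zip(total, g)]
    return [(xi - b) * t / steps for xi, b, t in zip(x, baseline, total)]

# Toy model f(x) = x0**2 + 3*x1, so grad f = [2*x0, 3].
grad = lambda p: [2 * p[0], 3.0]
attr = integrated_gradients(grad, x=[1.0, 1.0], baseline=[0.0, 0.0])
# Completeness check: attributions sum to f(x) - f(baseline) = 4.
```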

AI · Neutral · arXiv – CS AI · Feb 27 · 7/10
🧠

Manifold of Failure: Behavioral Attraction Basins in Language Models

Researchers adapted MAP-Elites, an established quality-diversity algorithm, into a framework for systematically mapping vulnerability regions in large language models, revealing distinct safety-landscape patterns across models. The study found that Llama-3-8B shows near-universal vulnerabilities, while GPT-5-Mini demonstrates stronger robustness with only limited failure regions.
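
The core MAP-Elites loop is compact: keep the best solution found in each cell of a behavior grid. In the paper's setting the "genome" would be a prompt, fitness an attack-success score, and the descriptor a behavioral embedding; the toy 2-D versions below are assumptions for illustration:

```python
import random
random.seed(1)

GRID = 10
archive = {}  # (i, j) cell -> (fitness, genome)

def descriptor(g):
    # Map a genome in [0,1]^2 to a grid cell (its "behavior").
    return (min(GRID - 1, int(g[0] * GRID)),
            min(GRID - 1, int(g[1] * GRID)))

def fitness(g):
    return -(g[0] - 0.5) ** 2 - (g[1] - 0.5) ** 2

for _ in range(2000):
    if archive:  # mutate a random elite; otherwise sample fresh
        _, parent = random.choice(list(archive.values()))
        g = [min(1.0, max(0.0, p + random.gauss(0, 0.1))) for p in parent]
    else:
        g = [random.random(), random.random()]
    cell, fit = descriptor(g), fitness(g)
    if cell not in archive or fit > archive[cell][0]:
        archive[cell] = (fit, g)  # per-cell elitism

coverage = len(archive) / GRID ** 2  # fraction of behavior space mapped
```

The archive itself is the output: a map of which behavioral regions contain high-scoring (here, high-fitness; in the paper, successfully attacking) solutions.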

AI · Bullish · arXiv – CS AI · Feb 27 · 7/10
🧠

OmniGAIA: Towards Native Omni-Modal AI Agents

Researchers introduce OmniGAIA, a comprehensive benchmark for evaluating omni-modal AI agents that can process video, audio, and image data simultaneously with complex reasoning capabilities. They also propose OmniAtlas, a foundation agent that enhances existing open-source models' ability to use tools across multiple modalities, marking progress toward more capable AI assistants.

AI · Bullish · arXiv – CS AI · Feb 27 · 7/10
🧠

Abstracted Gaussian Prototypes for True One-Shot Concept Learning

Researchers introduce Abstracted Gaussian Prototypes (AGP), a new framework for one-shot concept learning that can classify and generate visual concepts from a single example. The system uses Gaussian Mixture Models and variational autoencoders to create robust prototypes without requiring pre-training, achieving human-level performance on generative tasks.
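
The prototype idea can be sketched with a single diagonal Gaussian per concept standing in for the paper's full Gaussian mixture; the concepts, data, and variance floor below are toy assumptions:

```python
import math

def fit_prototype(points):
    # Fit a diagonal 2-D Gaussian to the points of ONE example.
    n = len(points)
    mean = [sum(p[d] for p in points) / n for d in range(2)]
    var = [max(1e-3, sum((p[d] - mean[d]) ** 2 for p in points) / n)
           for d in range(2)]
    return mean, var

def log_likelihood(points, proto):
    mean, var = proto
    return sum(-0.5 * ((p[d] - mean[d]) ** 2 / var[d]
                       + math.log(2 * math.pi * var[d]))
               for p in points for d in range(2))

# Two concepts, each defined by exactly one example (a point cloud).
circle = [(math.cos(t * math.pi / 4), math.sin(t * math.pi / 4))
          for t in range(8)]
line = [(t / 7, 0.0) for t in range(8)]
protos = {"circle": fit_prototype(circle), "line": fit_prototype(line)}

# A slightly shrunken circle should still match the circle prototype.
query = [(0.9 * x, 0.9 * y) for x, y in circle]
pred = max(protos, key=lambda c: log_likelihood(query, protos[c]))
```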

AI · Bullish · IEEE Spectrum – AI · Feb 25 · 7/10
🧠

AI Is Acing Math Exams Faster Than Scientists Write Them

AI systems are advancing rapidly in mathematics: models now solve over 40% of advanced undergraduate- to postdoc-level problems, up from just 2% when the benchmarks were introduced. Google DeepMind's Aletheia has produced autonomous PhD-level research results, while OpenAI solved 5 of 10 extremely difficult research problems in the new First Proof challenge.

AI · Bullish · Google DeepMind Blog · Feb 12 · 7/10
🧠

Gemini 3 Deep Think: Advancing science, research and engineering

Gemini 3 Deep Think represents an updated specialized reasoning mode designed to tackle complex challenges in modern science, research, and engineering. The advancement focuses on enhanced problem-solving capabilities for technical and scientific applications.

AI · Bullish · MIT News – AI · Feb 2 · 7/10
🧠

How generative AI can help scientists synthesize complex materials

MIT researchers developed DiffSyn, a generative AI model that provides recipes for synthesizing new materials. This breakthrough could accelerate scientific experimentation by reducing the time from hypothesis to practical application.

AI · Neutral · Google DeepMind Blog · Dec 11 · 7/10
🧠

Deepening our partnership with the UK AI Security Institute

Google DeepMind and the UK AI Security Institute (AISI) are strengthening their collaboration on critical AI safety and security research. This partnership aims to advance research in AI safety measures and security protocols.

AI · Bullish · OpenAI News · Dec 11 · 7/10
🧠

Ten years

OpenAI publishes a ten-year retrospective tracing its journey from early research to widely deployed AI systems that have transformed capabilities across industries. The company reflects on key lessons learned while reaffirming its commitment to developing artificial general intelligence (AGI) that benefits humanity.

AI · Bullish · MIT News – AI · Dec 5 · 7/10
🧠

MIT researchers “speak objects into existence” using AI and robotics

MIT researchers have developed a speech-to-reality system that combines 3D generative AI with robotic assembly to create physical objects on demand from voice commands. The technology represents a significant advancement in AI-driven manufacturing and automation capabilities.

AI · Neutral · OpenAI News · Nov 7 · 7/10
🧠

Understanding prompt injections: a frontier security challenge

Prompt injections represent a significant security vulnerability in AI systems, requiring specialized research and countermeasures. OpenAI is actively developing safeguards and training methods to protect users from these frontier attacks.

AI · Bullish · OpenAI News · Sep 5 · 7/10
🧠

Why language models hallucinate

OpenAI has published new research explaining the underlying causes of language model hallucinations, arguing that standard evaluations reward confident guessing over admitting uncertainty. The study shows how better-designed evaluation methods can improve AI systems' reliability, honesty, and safety.

AI · Neutral · OpenAI News · Sep 5 · 7/10
🧠

GPT-5 bio bug bounty call

OpenAI has launched a Bio Bug Bounty program inviting researchers to probe GPT-5's safety protocols with universal jailbreak prompts. The program offers rewards of up to $25,000 for successfully identifying vulnerabilities in the model's biological safety measures.

AI · Bullish · Synced Review · Jun 16 · 7/10
🧠

MIT Researchers Unveil “SEAL”: A New Step Towards Self-Improving AI

MIT researchers have developed SEAL, a new framework that enables large language models to self-edit and update their own weights through reinforcement learning. This represents a significant advancement toward creating AI systems capable of autonomous self-improvement.

AI · Bullish · Google DeepMind Blog · Nov 18 · 7/10
🧠

The AI for Science Forum: A new era of discovery

The AI for Science Forum showcases artificial intelligence's transformative potential to accelerate scientific discovery and address global challenges. The forum emphasizes the need for collaboration among scientists, policymakers, and industry leaders to maximize AI's impact on research and innovation.

AI · Bullish · OpenAI News · Oct 23 · 7/10
🧠

Simplifying, stabilizing, and scaling continuous-time consistency models

Researchers have developed improved continuous-time consistency models that achieve sample quality comparable to leading diffusion models while requiring only two sampling steps. This represents a significant efficiency breakthrough in AI model sampling technology.
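
The two-step sampling scheme is simple to sketch. A consistency model f(x_t, t) maps any noisy point on a diffusion trajectory straight to its clean endpoint, so sampling needs only one or two applications of f rather than a long denoising chain. Below, a hand-made 1-D oracle for data concentrated at x = 5.0 stands in for a trained network; the whole setup is illustrative:

```python
import random
random.seed(0)

T = 1.0  # maximum noise level

def f(x_t, t):
    # Oracle consistency function: pull the sample to the data mode,
    # fully at t = T (a stand-in for a trained network).
    return x_t + (5.0 - x_t) * min(1.0, t / T)

z = random.gauss(0, 1) * T                 # pure noise at time T
step1 = f(z, T)                            # step 1: one-shot sample
t_mid = 0.5
x_mid = step1 + random.gauss(0, 1) * t_mid # re-noise to time t_mid
x0 = f(x_mid, t_mid)                       # step 2: one refinement pass
```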

AI · Bullish · OpenAI News · Dec 14 · 7/10
🧠

Superalignment Fast Grants

A new $10 million grant program has been launched to fund technical research focused on aligning and ensuring the safety of superhuman AI systems. The initiative targets key areas including weak-to-strong generalization, interpretability, and scalable oversight methods.

AI · Bullish · OpenAI News · Jun 11 · 7/10
🧠

Improving language understanding with unsupervised learning

Researchers achieved state-of-the-art results on diverse language tasks using a scalable system combining transformers and unsupervised pre-training. The approach demonstrates that pairing supervised learning with unsupervised pre-training is highly effective for language understanding tasks.

AI · Bullish · OpenAI News · Mar 16 · 7/10
🧠

Learning to communicate

OpenAI has published new research demonstrating that AI agents can develop their own communication language. This research explores emergent communication capabilities in artificial intelligence systems.

Crypto · Bullish · Ethereum Foundation Blog · Jan 19 · 7/10
⛓️

An Update on Integrating Zcash on Ethereum (ZoE)

Ethereum R&D team and Zcash Company are collaborating on the Zcash on Ethereum (ZoE) research project, which aims to combine blockchain programmability with privacy features. This joint initiative explores integrating Zcash's privacy capabilities with Ethereum's smart contract functionality.

$ETH
AI · Neutral · arXiv – CS AI · 1d ago · 6/10
🧠

Modeling Co-Pilots for Text-to-Model Translation

Researchers introduce Text2Model and Text2Zinc, frameworks that use large language models to translate natural-language descriptions into formal optimization and constraint-satisfaction models. The work is the first unified approach to cover both problem types with a solver-agnostic architecture, though experiments show that, despite competitive performance, LLMs remain imperfect at the task.

← Prev · Page 11 of 37 · Next →