
#ai-research News & Analysis

984 articles tagged with #ai-research. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.

AI · Bullish · Google Research Blog · Jul 28 · 6/10

SensorLM: Learning the language of wearable sensors

SensorLM is a foundation model that learns the "language" of wearable sensor data, enabling AI systems to interpret sensor streams from devices like smartwatches and fitness trackers. The work could change how AI interprets biometric and movement data for healthcare, fitness, and human-computer interaction applications.

AI · Neutral · OpenAI News · Jul 22 · 5/10

OpenAI’s new economic analysis

OpenAI published new economic analysis examining ChatGPT's impact on the economy. The company also launched a research collaboration to study AI's broader effects on labor markets and productivity.

AI · Bullish · Google DeepMind Blog · Jun 25 · 6/10

AlphaGenome: AI for better understanding the genome

AlphaGenome introduces a new unified DNA sequence model designed to improve regulatory variant-effect prediction and enhance understanding of genome function. The AI-powered genomics tool is now accessible through an API for researchers and developers.

AI · Bullish · Synced Review · Jun 16 · 6/10

Researchers from PSU and Duke introduce “Multi-Agent Systems Automated Failure Attribution”

Researchers from Pennsylvania State University and Duke University have introduced automated failure attribution for multi-agent systems, a methodology that transforms the complex process of identifying system failures and their causes into a quantifiable and analyzable problem. This development could significantly improve the debugging and accountability processes in multi-agent AI system development.
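
One illustrative way to make failure attribution quantifiable (a sketch of the general idea, not the paper's actual algorithm): replay a failed multi-agent log and find the earliest step at which an automated judge no longer expects the task to succeed, then attribute the failure to that step's agent.

```python
# Illustrative failure-attribution sketch; the judge and log format are assumptions.
def judge(partial_log):
    """Hypothetical judge: estimated probability the task can still succeed."""
    return 1.0 if all(step["ok"] for step in partial_log) else 0.1

def attribute_failure(log, threshold=0.5):
    """Return (responsible agent, decisive step index) for a failed run."""
    for i in range(1, len(log) + 1):
        if judge(log[:i]) < threshold:
            return log[i - 1]["agent"], i
    return None, None

log = [
    {"agent": "planner", "ok": True},
    {"agent": "coder", "ok": False},    # decisive error
    {"agent": "reviewer", "ok": True},
]
print(attribute_failure(log))  # ('coder', 2)
```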

AI · Bullish · Lil'Log (Lilian Weng) · May 1 · 6/10

Why We Think

This post reviews recent developments in test-time compute and chain-of-thought (CoT) techniques for AI models. It examines how giving models 'thinking time' during inference leads to significant performance improvements while raising new research questions.
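
To make the "thinking time" trade-off concrete, here is a minimal sketch of one common test-time compute technique, self-consistency sampling: spend more inference compute by drawing several chain-of-thought samples and taking a majority vote over the final answers. The `generate_cot_answer` function is a hypothetical stand-in for any LLM call.

```python
# Self-consistency sketch: more samples = more inference compute = (usually) better answers.
import random
from collections import Counter

def generate_cot_answer(question: str, temperature: float = 0.8) -> str:
    """Hypothetical LLM call that samples a chain of thought and returns the final answer."""
    return random.choice(["42", "42", "41"])  # placeholder for a real model call

def self_consistency(question: str, n_samples: int = 16) -> str:
    """Trade extra inference compute (n_samples) for a more reliable answer."""
    answers = [generate_cot_answer(question) for _ in range(n_samples)]
    return Counter(answers).most_common(1)[0][0]

print(self_consistency("What is 6 * 7?"))
```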

AI · Neutral · Hugging Face Blog · Apr 16 · 6/10

Introducing HELMET: Holistically Evaluating Long-context Language Models

HELMET is a new holistic evaluation framework for assessing long-context language models across multiple dimensions and use cases. The framework aims to provide comprehensive benchmarking capabilities for AI models that can process extended text sequences.
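
As a rough illustration of what holistic long-context evaluation involves (a sketch, not the actual HELMET API): score a model across several task categories and context lengths and report per-category results instead of a single aggregate number.

```python
# Illustrative multi-dimension evaluation loop; task names and scorer are assumptions.
from statistics import mean

def score(model_fn, example):
    """Hypothetical scorer: 1.0 if the model output matches the reference."""
    return float(model_fn(example["input"]) == example["reference"])

def run_suite(model_fn, suite):
    """Return one score per task category rather than a single number."""
    return {category: mean(score(model_fn, ex) for ex in examples)
            for category, examples in suite.items()}

suite = {
    "retrieval_128k": [{"input": "needle?", "reference": "found"}],
    "summarization_64k": [{"input": "long doc", "reference": "summary"}],
}
print(run_suite(lambda text: "found", suite))
```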

AI · Bullish · NVIDIA AI Blog · Mar 20 · 6/10

Innovation to Impact: How NVIDIA Research Fuels Transformative Work in AI, Graphics and Beyond

NVIDIA's research organization, a global team of around 400 experts established in 2006, serves as the foundation for the company's landmark innovations in AI, accelerated computing, real-time ray tracing, and data center connectivity. The research division spans multiple fields including computer architecture, generative AI, graphics, and robotics, driving transformative technological developments.

AI · Bullish · OpenAI News · Feb 2 · 6/10

Introducing deep research

A new AI research agent has been launched that can synthesize large amounts of online information and complete complex multi-step research tasks through advanced reasoning capabilities. The tool is currently available to Pro users with rollout planned for Plus and Team subscribers.

AI · Neutral · OpenAI News · Jan 22 · 5/10

Trading inference-time compute for adversarial robustness

The article discusses research on trading computational resources during inference time to improve adversarial robustness in AI systems. This approach explores how allocating more compute power at inference can enhance model security against adversarial attacks.

AI · Bullish · Google DeepMind Blog · Dec 17 · 6/10

FACTS Grounding: A new benchmark for evaluating the factuality of large language models

Researchers have introduced FACTS Grounding, a new benchmark designed to evaluate how accurately large language models ground their responses in source material and avoid hallucinations. The benchmark includes a comprehensive evaluation system and online leaderboard to measure LLM factuality performance.
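
In the same spirit, a minimal grounding check might look like the sketch below (illustrative only, not the actual FACTS Grounding implementation): split a response into claims and ask a judge whether each claim is supported by the source document.

```python
# Illustrative grounding score; the judge here is a naive substring-check stand-in.
def judge_supported(claim: str, source: str) -> bool:
    """Hypothetical judge; a real setup would use an LLM or NLI model."""
    return claim.lower() in source.lower()

def grounding_score(response: str, source: str) -> float:
    """Fraction of claims in the response that are supported by the source."""
    claims = [s.strip() for s in response.split(".") if s.strip()]
    supported = sum(judge_supported(c, source) for c in claims)
    return supported / len(claims) if claims else 0.0

source = "The Eiffel Tower is in Paris. It opened in 1889."
response = "The Eiffel Tower is in Paris. It opened in 1900."
print(grounding_score(response, source))  # 0.5
```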

AI · Bullish · Google DeepMind Blog · Dec 5 · 6/10

Google DeepMind at NeurIPS 2024

Google DeepMind presents research at NeurIPS 2024 focused on advancing adaptive AI agents, empowering 3D scene creation capabilities, and developing innovations in large language model training. The research aims to create smarter and safer AI systems for future applications.

AI · Bullish · OpenAI News · Sep 12 · 5/10

Answering quantum physics questions with OpenAI o1

Quantum physicist Mario Krenn is utilizing OpenAI's o1 model to tackle fundamental questions in quantum physics. The collaboration demonstrates the potential for advanced AI systems to assist researchers in solving complex scientific problems.

AI · Bullish · OpenAI News · Jun 20 · 6/10

Improved Techniques for Training Consistency Models

Consistency models represent a new family of generative AI models that can produce high-quality data samples in a single step without requiring adversarial training methods. This research focuses on developing improved training techniques for these models.
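
The single-step idea can be sketched roughly as follows (an illustration under assumed names, not OpenAI's implementation): a trained consistency function maps a sample at the highest noise level directly to a clean sample, so generation needs only one network evaluation instead of a long iterative chain.

```python
# One-step consistency-model sampling sketch; f_theta and the noise schedule are placeholders.
import numpy as np

SIGMA_MAX = 80.0  # assumed maximum noise level of the training schedule

def f_theta(x_t, t):
    """Placeholder consistency function; in practice this is a trained neural network."""
    return x_t / (1.0 + t)  # stand-in that simply shrinks the input

def sample(shape, rng=np.random.default_rng(0)):
    """Draw pure noise at the maximum noise level and denoise it in a single step."""
    x_T = rng.standard_normal(shape) * SIGMA_MAX
    return f_theta(x_T, SIGMA_MAX)

print(sample((2, 2)).shape)  # (2, 2)
```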

AI · Neutral · OpenAI News · Jun 7 · 5/10

Expanding on how Voice Engine works and our safety research

OpenAI provides technical insights into Voice Engine, their text-to-speech model technology, along with details about their safety research approach. The article explores the underlying technology and safety considerations for their voice synthesis capabilities.

AI · Neutral · OpenAI News · Aug 24 · 6/10

Our approach to alignment research

OpenAI outlines its approach to alignment research, focusing on improving AI systems' ability to learn from human feedback and to assist humans in evaluating AI. Its stated long-term goal is a sufficiently aligned AI system capable of helping solve the remaining alignment research challenges.

AI · Neutral · OpenAI News · Jul 25 · 6/10

A hazard analysis framework for code synthesis large language models

The article presents a framework for analyzing potential hazards and risks associated with large language models that generate code. This research addresses growing concerns about AI-generated code safety and reliability as LLMs become more widely adopted for software development tasks.

AI · Neutral · OpenAI News · May 28 · 5/10

Teaching models to express their uncertainty in words

This research explores teaching language models to express uncertainty about their own answers in natural language, an area of work aimed at improving model transparency and reliability.
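
A minimal sketch of the general idea (not the paper's exact setup): elicit an answer plus a verbalized confidence from the model, then compare the average stated confidence with the observed accuracy to measure calibration. `ask_model` is a hypothetical stand-in for a real model call.

```python
# Calibration-gap sketch for verbalized uncertainty; a positive gap indicates overconfidence.
from statistics import mean

def ask_model(question: str) -> tuple[str, float]:
    """Hypothetical LLM call returning (answer, stated confidence in [0, 1])."""
    return "42", 0.9  # placeholder

def calibration_gap(dataset):
    """Mean stated confidence minus actual accuracy over (question, truth) pairs."""
    records = [(ask_model(q), truth) for q, truth in dataset]
    accuracy = mean(float(ans == truth) for (ans, _), truth in records)
    confidence = mean(conf for (_, conf), _ in records)
    return confidence - accuracy

data = [("What is 6 * 7?", "42"), ("Capital of France?", "Paris")]
print(calibration_gap(data))  # roughly 0.4 with the placeholder model
```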

AI · Bullish · OpenAI News · Apr 13 · 6/10

Hierarchical text-conditional image generation with CLIP latents

The article discusses hierarchical text-conditional image generation using CLIP latents, a technique that leverages CLIP's understanding of text-image relationships to generate images based on textual descriptions. This approach represents an advancement in AI image generation capabilities by incorporating hierarchical structures and CLIP's semantic understanding.
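
The two-stage structure described in the paper can be sketched roughly as below (all functions are placeholders, not the actual model): a prior maps a CLIP text embedding to a CLIP image embedding, and a decoder generates pixels conditioned on that image embedding.

```python
# Hierarchical text-to-image pipeline sketch; every component is a stand-in.
import numpy as np

rng = np.random.default_rng(0)

def clip_text_embed(caption: str) -> np.ndarray:
    """Placeholder for CLIP's text encoder."""
    return rng.standard_normal(512)

def prior(text_emb: np.ndarray) -> np.ndarray:
    """Placeholder prior: predicts a CLIP image embedding from the text embedding."""
    return text_emb + 0.1 * rng.standard_normal(512)

def decoder(image_emb: np.ndarray) -> np.ndarray:
    """Placeholder decoder: generates an image conditioned on the image embedding."""
    return rng.random((64, 64, 3))

def generate(caption: str) -> np.ndarray:
    return decoder(prior(clip_text_embed(caption)))

print(generate("a photo of a corgi").shape)  # (64, 64, 3)
```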

AI · Bullish · OpenAI News · Jun 10 · 6/10

Improving language model behavior by training on a curated dataset

Researchers have discovered that language model behavior can be improved for specific behavioral values through fine-tuning on small, curated datasets. This approach offers a more efficient method for aligning AI models with desired behavioral outcomes without requiring massive training resources.
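
The general recipe can be sketched as follows (illustrative only, not OpenAI's exact pipeline): assemble a small, curated set of prompt/completion pairs that demonstrate the target values and write them in the JSONL format most fine-tuning workflows expect.

```python
# Building a small curated behavior dataset; prompts and completions are illustrative.
import json

curated_examples = [
    {
        "prompt": "How should I respond to someone who is upset?",
        "completion": "Acknowledge their feelings calmly and ask how you can help.",
    },
    {
        "prompt": "Describe people from a country you know little about.",
        "completion": "Avoid generalizations; individuals differ widely within any country.",
    },
]

# A curated set on the order of tens to hundreds of examples can then be passed
# to a standard fine-tuning job, per the article's point about small datasets.
with open("curated_behavior.jsonl", "w", encoding="utf-8") as f:
    for example in curated_examples:
        f.write(json.dumps(example) + "\n")
```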

AI · Bullish · OpenAI News · Jun 20 · 5/10

Procgen and MineRL Competitions

OpenAI announces co-organization of two NeurIPS 2020 AI competitions with AIcrowd, Carnegie Mellon University, and DeepMind. The competitions utilize Procgen Benchmark and MineRL platforms for AI research advancement.
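
For context, interacting with a Procgen environment through the standard Gym interface looks roughly like the sketch below, assuming the `procgen` package and the classic `gym` API are installed (the environment ID follows the pattern documented in the Procgen repository).

```python
# Random-policy rollout in a Procgen environment via the classic Gym API.
import gym

env = gym.make("procgen:procgen-coinrun-v0")
obs = env.reset()
total_reward = 0.0
for _ in range(100):
    obs, reward, done, info = env.step(env.action_space.sample())
    total_reward += reward
    if done:
        obs = env.reset()
print("random-policy reward over 100 steps:", total_reward)
```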

AI · Bullish · OpenAI News · Apr 14 · 6/10

OpenAI Microscope

OpenAI has launched Microscope, a visualization tool that provides detailed views of layers and neurons in eight vision AI models commonly used in interpretability research. The tool aims to help researchers better understand and analyze the internal features that develop within neural networks.
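
Microscope itself is a hosted visualization tool, but the kind of per-layer activations it exposes can be inspected locally with a sketch like the one below, using PyTorch forward hooks on a standard torchvision model (illustrative, and unrelated to OpenAI's own implementation).

```python
# Capture intermediate activations of a vision model with forward hooks.
import torch
import torchvision.models as models

model = models.resnet18(weights=None).eval()
activations = {}

def save_activation(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

# Register hooks on a couple of layers of interest.
model.layer1.register_forward_hook(save_activation("layer1"))
model.layer4.register_forward_hook(save_activation("layer4"))

with torch.no_grad():
    model(torch.randn(1, 3, 224, 224))

for name, act in activations.items():
    print(name, tuple(act.shape))  # e.g. layer1 (1, 64, 56, 56)
```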

Page 31 of 40