y0news
🧠 AI

12,366 AI articles curated from 50+ sources with AI-powered sentiment analysis, importance scoring, and key takeaways.

AI · Bullish · OpenAI News · Nov 30 · 7/10

Introducing ChatGPT

OpenAI has introduced ChatGPT, a conversational AI model designed to interact through dialogue. The model can answer follow-up questions, admit mistakes, challenge incorrect premises, and reject inappropriate requests.

AI · Bullish · OpenAI News · Nov 3 · 7/10

DALL·E API now available in public beta

OpenAI has launched the DALL·E API in public beta, allowing developers to integrate the AI image generation technology into their applications. This marks a significant step in making advanced AI image generation capabilities more widely accessible to developers and businesses.

AI · Bullish · OpenAI News · Sep 21 · 7/10

Introducing Whisper

OpenAI has trained and open-sourced Whisper, a neural network for speech recognition that approaches human-level robustness and accuracy on English speech. The model represents a significant advancement in AI speech recognition and is freely available to the community.

AI · Bullish · OpenAI News · Jul 20 · 7/10

DALL·E now available in beta

OpenAI is launching DALL·E in beta, inviting 1 million waitlist users over the coming weeks. Users receive free monthly credits to create images, with additional credits available for purchase at $15 per 115 generations.
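At the stated price, the per-generation cost follows directly; a quick check of the arithmetic:

```python
# Worked arithmetic for the stated DALL·E credit pricing:
# $15 buys 115 generations.
price_usd = 15
generations = 115
cost_per_generation = price_usd / generations
print(f"${cost_per_generation:.3f} per generation")  # ≈ $0.130
```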

AI · Bullish · OpenAI News · Jun 23 · 7/10

Learning to play Minecraft with Video PreTraining

Researchers developed a neural network that learned to play Minecraft using Video PreTraining (VPT) on massive unlabeled human gameplay footage with minimal labeled data. The AI can craft diamond tools through standard keyboard and mouse inputs, representing progress toward general-purpose computer-using agents.
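The cloning step described above can be sketched as a standard cross-entropy objective over discrete keyboard/mouse actions. Everything below (sizes, action bins, pseudo-labels) is illustrative, not VPT's actual pipeline:

```python
import numpy as np

# Toy sketch of the behavioral-cloning step in Video PreTraining: given
# frames labeled with actions (here, standing in for labels produced by
# an inverse dynamics model), the policy is trained with cross-entropy
# against the recorded keyboard/mouse action.
rng = np.random.default_rng(0)
n_actions = 8                                # e.g. discretized key/mouse bins
logits = rng.normal(size=(4, n_actions))     # policy outputs for 4 frames
actions = np.array([1, 3, 0, 7])             # pseudo-labels for those frames

# numerically stable softmax cross-entropy
shifted = logits - logits.max(axis=1, keepdims=True)
log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
loss = -log_probs[np.arange(len(actions)), actions].mean()
print(round(float(loss), 3))
```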

AI · Bullish · OpenAI News · Jun 2 · 7/10

Best practices for deploying language models

Cohere, OpenAI, and AI21 Labs have collaboratively developed a preliminary set of best practices for organizations developing or deploying large language models. This represents a significant industry effort to establish standards and guidelines for responsible AI development and deployment.

AI · Bullish · OpenAI News · May 24 · 7/10

Powering next generation applications with OpenAI Codex

OpenAI Codex is now powering 70 different applications across various use cases through the OpenAI API. This represents significant adoption of OpenAI's code generation technology across the developer ecosystem.

AI · Bullish · OpenAI News · Feb 2 · 7/10

Solving (some) formal math olympiad problems

Researchers have developed a neural theorem prover for Lean that successfully solved challenging high-school mathematics olympiad problems, including those from AMC12, AIME competitions, and two problems adapted from the International Mathematical Olympiad (IMO). This represents a significant advancement in AI's ability to handle formal mathematical reasoning and proof generation.
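For readers unfamiliar with Lean, formal statements and their proofs are machine-checkable terms; a toy illustration of the format (vastly simpler than the olympiad problems the prover targets):

```lean
-- A trivially checkable Lean statement; competition problems are stated
-- and proved in the same formal language, just at far greater depth.
example : 2 + 2 = 4 := rfl

-- Universally quantified statements take the same shape.
example : ∀ n : Nat, n + 0 = n := fun n => rfl
```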

AI · Bullish · OpenAI News · Jan 27 · 7/10

Aligning language models to follow instructions

OpenAI has developed InstructGPT models that significantly improve upon GPT-3's ability to follow user instructions while being more truthful and less toxic. These models use human feedback training and alignment research techniques, and have been deployed as the default language models on OpenAI's API.

AI · Bullish · OpenAI News · Dec 16 · 7/10

WebGPT: Improving the factual accuracy of language models through web browsing

OpenAI has fine-tuned GPT-3 to create WebGPT, which can browse the web through a text-based browser to provide more accurate answers to open-ended questions. This development represents a significant advancement in AI factual accuracy by allowing language models to access real-time information beyond their training data.

AI · Bullish · OpenAI News · Aug 10 · 7/10

OpenAI Codex

OpenAI has released an improved version of Codex, their AI system that converts natural language into code. The enhanced system is now available through their API in private beta, marking a significant advancement in AI-powered programming tools.

AI · Bullish · OpenAI News · Jul 28 · 7/10

Introducing Triton: Open-source GPU programming for neural networks

OpenAI has released Triton 1.0, an open-source Python-like programming language that allows researchers without CUDA expertise to write highly efficient GPU code for neural networks. The tool aims to democratize GPU programming by making it accessible to those without specialized hardware programming knowledge while maintaining performance comparable to expert-level code.
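Triton's programming model assigns each "program" one block of data, with a mask guarding the ragged final block. A minimal sketch of that structure, using NumPy as a stand-in, since real Triton kernels (written with `@triton.jit` and the `tl.*` primitives) require a GPU:

```python
import numpy as np

# NumPy stand-in for Triton's blocked-kernel model (illustrative only).
# Each "program" handles one BLOCK-sized tile of the vectors, with a
# mask guarding the ragged final tile -- the same structure a Triton
# vector-add kernel has.
def add_kernel(x, y, out, block=128):
    n = x.shape[0]
    num_programs = (n + block - 1) // block    # ceil-div grid size
    for pid in range(num_programs):            # on a GPU these run in parallel
        offs = pid * block + np.arange(block)  # this program's element offsets
        mask = offs < n                        # guard out-of-bounds lanes
        out[offs[mask]] = x[offs[mask]] + y[offs[mask]]

x = np.arange(300, dtype=np.float32)
y = np.ones(300, dtype=np.float32)
out = np.empty_like(x)
add_kernel(x, y, out)
print(np.allclose(out, x + y))  # True
```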

AI · Neutral · OpenAI News · May 3 · 7/10

Will Hurd joins OpenAI’s board of directors

Former Congressman Will Hurd has joined OpenAI's board of directors to bring public policy expertise to the company. OpenAI states this addition supports their mission to develop general-purpose artificial intelligence that benefits all humanity by combining technology and policy knowledge.

AI · Bullish · OpenAI News · Mar 25 · 7/10

GPT-3 powers the next generation of apps

Over 300 applications are now integrating GPT-3 through OpenAI's API to deliver advanced AI features including search, conversation, and text completion capabilities. This demonstrates significant adoption of GPT-3 technology across various application types and use cases.

AI · Bullish · OpenAI News · Mar 4 · 7/10

Multimodal neurons in artificial neural networks

Researchers discovered multimodal neurons in OpenAI's CLIP model that respond to the same concept whether it is presented literally, symbolically, or conceptually. The finding helps explain CLIP's ability to classify unexpected visual representations accurately and offers insight into how such models learn associations and biases.

AI · Bullish · OpenAI News · Jan 25 · 7/10

Scaling Kubernetes to 7,500 nodes

OpenAI has scaled its Kubernetes clusters to 7,500 nodes, creating infrastructure that supports both large-scale AI models like GPT-3, CLIP, and DALL·E and smaller research projects. This achievement demonstrates significant progress in cloud infrastructure scalability for AI workloads.

AI · Bullish · Hugging Face Blog · Jan 18 · 7/10

How we sped up transformer inference 100x for 🤗 API customers

Hugging Face announced they achieved a 100x speed improvement for transformer inference in their API services. The optimization breakthrough significantly enhances performance for AI model deployment and reduces latency for customers using their platform.

AI · Bullish · OpenAI News · Jan 5 · 7/10

DALL·E: Creating images from text

OpenAI has developed DALL·E, a neural network that generates images from text descriptions. This AI system can create visual content for a wide range of concepts that can be expressed in natural language.

AI · Bullish · OpenAI News · Jan 5 · 7/10

CLIP: Connecting text and images

OpenAI introduces CLIP, a neural network that learns visual concepts from natural language supervision and can perform visual classification tasks without specific training. CLIP demonstrates zero-shot capabilities similar to GPT-2 and GPT-3, enabling it to recognize visual categories simply by providing their names.
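The zero-shot mechanism reduces to a similarity search between one image embedding and the text embeddings of candidate class names. A minimal sketch, with random stand-in embeddings in place of CLIP's real encoders:

```python
import numpy as np

# Sketch of CLIP-style zero-shot classification: embed each class name
# as text, embed the image, L2-normalize both, and pick the class whose
# text embedding has the highest cosine similarity with the image.
# The embeddings here are random stand-ins, not real encoder outputs.
rng = np.random.default_rng(0)
class_names = ["a photo of a cat", "a photo of a dog", "a photo of a car"]
text_emb = rng.normal(size=(3, 512))
image_emb = text_emb[1] + 0.1 * rng.normal(size=512)  # "looks like" class 1

def normalize(v):
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

sims = normalize(text_emb) @ normalize(image_emb)  # cosine similarities
pred = class_names[int(np.argmax(sims))]
print(pred)  # "a photo of a dog"
```

No task-specific training is involved: swapping in a different list of class names changes the classifier.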

AI · Bullish · OpenAI News · Sep 22 · 7/10

OpenAI licenses GPT-3 technology to Microsoft

OpenAI has agreed to exclusively license its GPT-3 technology to Microsoft, allowing the tech giant to integrate the advanced language model into its own products and services. This partnership represents a significant commercial expansion for OpenAI's flagship AI technology.

AI · Bullish · OpenAI News · Sep 4 · 7/10

Learning to summarize with human feedback

Researchers have successfully applied reinforcement learning from human feedback (RLHF) to improve language model summarization capabilities. This approach uses human preferences to guide the training process, resulting in models that produce higher quality summaries aligned with human expectations.
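The reward-modeling half of this approach trains on pairwise human preferences. A minimal sketch of the standard pairwise logistic loss, with illustrative scalar scores standing in for a real reward model's outputs:

```python
import numpy as np

# Reward modeling for RLHF, sketched: given a pair of summaries where a
# human preferred one, the reward model r(.) is trained to score the
# chosen summary higher via the loss -log sigmoid(r_chosen - r_rejected).
def preference_loss(r_chosen, r_rejected):
    return -np.log(1.0 / (1.0 + np.exp(-(r_chosen - r_rejected))))

# The loss shrinks as the model ranks the preferred summary higher.
print(preference_loss(2.0, 0.0) < preference_loss(0.5, 0.0))  # True
```

The learned reward then serves as the objective for a reinforcement-learning step that fine-tunes the summarization policy.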

AI · Bullish · OpenAI News · Jun 17 · 7/10

Image GPT

Researchers demonstrated that transformer models originally designed for language processing can generate coherent images when trained on pixel sequences. The study establishes a correlation between image generation quality and classification accuracy, showing that the generative model learns features competitive with top convolutional networks on unsupervised benchmarks.

AI · Bullish · OpenAI News · Jun 11 · 7/10

OpenAI API

OpenAI has announced the release of an API that will provide developers access to their new AI models. This move opens up OpenAI's latest AI capabilities to third-party developers and applications through a programmatic interface.

AI · Bullish · OpenAI News · May 5 · 7/10

AI and efficiency

A new analysis reveals that the compute required to train a neural network to ImageNet classification performance has halved every 16 months since 2012. Training a network to AlexNet-level performance now takes 44 times less compute than in 2012, far outpacing Moore's Law, which would have yielded only an 11x improvement over the same period.
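The quoted figures can be sanity-checked against each other; a short worked computation, taking the 16-month and 24-month doubling times as given:

```python
import math

# The stated trend: compute to reach AlexNet-level accuracy halves every
# 16 months. Checking consistency with the headline 44x figure:
doublings = math.log2(44)      # ≈ 5.46 halvings of compute
months = doublings * 16        # ≈ 87 months, i.e. ~7.3 years
print(round(months / 12, 1))   # ≈ 7.3

# Moore's Law (doubling roughly every 24 months) over the same span:
moore = 2 ** (months / 24)
print(round(moore))            # ≈ 12, the same order as the quoted ~11x
```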

AI · Neutral · OpenAI News · Dec 5 · 7/10

Deep double descent

Research reveals that deep learning models including CNNs, ResNets, and transformers exhibit a double descent phenomenon where performance improves, deteriorates, then improves again as model size, data size, or training time increases. This universal behavior can be mitigated through proper regularization, though the underlying mechanisms remain unclear and require further investigation.
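The effect can be reproduced in miniature with minimum-norm least squares on random ReLU features. All sizes below are illustrative, and the spike near the interpolation threshold (width roughly equal to the number of training points) depends on the random draw:

```python
import numpy as np

# Minimal random-features sketch of double descent: fit min-norm least
# squares at increasing widths and track test error. With n=20 training
# points, the interpolation threshold sits near width 20, where test
# error typically spikes before descending again.
rng = np.random.default_rng(0)
n_train, n_test, d = 20, 200, 5
w_true = rng.normal(size=d)
X_tr = rng.normal(size=(n_train, d))
y_tr = X_tr @ w_true + 0.5 * rng.normal(size=n_train)  # noisy labels
X_te = rng.normal(size=(n_test, d))
y_te = X_te @ w_true

errors = {}
for width in [5, 10, 20, 40, 80]:
    W = rng.normal(size=(d, width))
    F_tr = np.maximum(X_tr @ W, 0)  # random ReLU features
    F_te = np.maximum(X_te @ W, 0)
    coef, *_ = np.linalg.lstsq(F_tr, y_tr, rcond=None)  # min-norm fit
    errors[width] = float(np.mean((F_te @ coef - y_te) ** 2))
print(errors)
```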

Page 104 of 495