88 articles tagged with #scalability. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.
AI · Bullish · arXiv – CS AI · 4d ago · 7/10
🧠Researchers introduced Webscale-RL, a data pipeline that converts large-scale pre-training documents into 1.2 million diverse question-answer pairs for reinforcement learning training. The approach enables RL models to achieve pre-training-level performance with up to 100x fewer tokens, addressing a critical bottleneck in scaling RL data and potentially advancing more efficient language model development.
AI · Bullish · arXiv – CS AI · 4d ago · 7/10
🧠TensorHub introduces Reference-Oriented Storage (ROS), a novel weight transfer system that enables efficient reinforcement learning training across distributed GPU clusters without physically copying model weights. The production-deployed system achieves significant performance improvements, reducing GPU stall time by up to 6.7x for rollout operations and improving cross-datacenter transfers by 19x.
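The core idea of reference-oriented transfer can be illustrated with a small sketch: rather than pushing a full weight copy to every rollout worker, the trainer publishes a lightweight reference (a version id mapped to a storage handle) and workers resolve it lazily, fetching only when a rollout actually needs the new version. All names below are illustrative, and the in-process dict stands in for what a real system like ROS would presumably implement over shared GPU memory or RDMA-visible buffers:

```python
class WeightStore:
    """Stands in for shared storage (e.g., pinned host memory or RDMA buffers)."""
    def __init__(self):
        self._blobs = {}

    def put(self, version, blob):
        self._blobs[version] = blob
        return version  # here the "reference" is just the version id

    def get(self, ref):
        return self._blobs[ref]


class Trainer:
    def __init__(self, store):
        self.store = store
        self.version = 0

    def publish(self, weights):
        # Publishing is metadata-sized: no weight copy travels to workers.
        self.version += 1
        return self.store.put(self.version, weights)


class RolloutWorker:
    def __init__(self, store):
        self.store = store
        self.ref = None
        self._cached = None  # (version, weights) pair

    def receive_ref(self, ref):
        self.ref = ref  # cheap broadcast: only the reference is sent

    def ensure_weights(self):
        # Lazy resolution: fetch only when the worker is about to run a rollout,
        # and only if its cached version is stale.
        if self._cached is None or self._cached[0] != self.ref:
            self._cached = (self.ref, self.store.get(self.ref))
        return self._cached[1]


store = WeightStore()
trainer = Trainer(store)
workers = [RolloutWorker(store) for _ in range(4)]

ref = trainer.publish({"layer0": [0.1, 0.2]})
for w in workers:
    w.receive_ref(ref)          # O(1) metadata per worker
weights = workers[0].ensure_weights()  # first use triggers the actual fetch
```

The reported stall-time reduction plausibly comes from exactly this decoupling: rollout workers never block on a bulk copy they may not need yet.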
Crypto · Bullish · The Block · Apr 7 · 7/10
⛓️Polygon is set to activate the Giugliano hardfork on April 8, 2024, which will improve transaction finality and integrate fee parameters directly into block headers. This upgrade aims to enhance the network's performance and efficiency for users and developers.
$MATIC
AI · Bullish · arXiv – CS AI · Apr 7 · 7/10
🧠Researchers developed PALM (Portfolio of Aligned LLMs), a method to create a small collection of language models that can serve diverse user preferences without requiring individual models per user. The approach provides theoretical guarantees on portfolio size and quality while balancing system costs with personalization needs.
AI · Bullish · arXiv – CS AI · Apr 7 · 7/10
🧠Researchers introduce k-Maximum Inner Product (k-MIP) attention for graph transformers, enabling linear memory complexity and up to 10x speedups while maintaining full expressive power. The innovation allows processing of graphs with over 500k nodes on a single GPU and demonstrates top performance on benchmark datasets.
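The mechanism behind the memory savings can be sketched in a few lines: each query attends only to its k keys with the largest inner products, so the softmax and the attention weights are materialized over k entries per query instead of all n. This is a minimal NumPy sketch under that assumption (a real system would find the top-k via a maximum-inner-product-search index rather than the dense score matrix computed here):

```python
import numpy as np

def kmip_attention(Q, K, V, k):
    """Softmax attention restricted, per query, to the k keys with the
    largest inner products. Attention weights are O(n_q * k) instead of
    the O(n_q * n_k) of dense attention."""
    scores = Q @ K.T                                       # (n_q, n_k); dense only for illustration
    topk = np.argpartition(-scores, k - 1, axis=1)[:, :k]  # top-k key indices per query
    rows = np.arange(Q.shape[0])[:, None]
    s = scores[rows, topk]                                 # (n_q, k) selected scores
    w = np.exp(s - s.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)                      # softmax over the k keys only
    return np.einsum("qk,qkd->qd", w, V[topk])             # weighted sum of selected values
```

With k equal to the number of keys this reduces exactly to dense attention; shrinking k trades a sparser attention pattern for the linear memory footprint the paper targets.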
AI · Bullish · arXiv – CS AI · Apr 7 · 7/10
🧠Researchers have developed Combee, a new framework that enables parallel prompt learning for AI language model agents, achieving up to 17x speedup over existing methods. The system allows multiple AI agents to learn simultaneously from their collective experiences without quality degradation, addressing scalability limitations in current single-agent approaches.
AI · Bullish · arXiv – CS AI · Apr 6 · 7/10
🧠Researchers introduce Textual Equilibrium Propagation (TEP), a new method to optimize large language model compound AI systems that addresses performance degradation in deep, multi-module workflows. TEP uses local learning principles to avoid exploding and vanishing gradient problems that plague existing global feedback methods like TextGrad.
AI · Bullish · arXiv – CS AI · Mar 17 · 7/10
🧠Researchers developed MegaScale-Data, an industrial-grade distributed data loading architecture that significantly improves training efficiency for large foundation models using multiple data sources. The system achieves up to 4.5x training throughput improvement and 13.5x reduction in CPU memory usage through disaggregated preprocessing and centralized data orchestration.
AI · Bullish · arXiv – CS AI · Mar 17 · 7/10
🧠ICaRus introduces a novel architecture enabling multiple AI models to share identical Key-Value (KV) caches, addressing memory explosion issues in multi-model inference systems. The solution achieves up to 11.1x lower latency and 3.8x higher throughput by allowing cross-model cache reuse while maintaining comparable accuracy to task-specific fine-tuned models.
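The cache-reuse idea can be sketched simply: if several fine-tuned variants keep the base model's key/value projections frozen, their KV entries for the same token prefix are identical, so one cache entry can serve all of them. The sketch below is illustrative only (the class name, the hashing scheme, and the `compute_kv` placeholder are all assumptions, not details from the paper):

```python
import hashlib

class SharedKVCache:
    """One KV cache consulted by multiple model variants that share
    base-model attention projections."""
    def __init__(self):
        self._cache = {}
        self.hits = 0
        self.misses = 0

    @staticmethod
    def _key(layer, tokens):
        digest = hashlib.sha256(" ".join(map(str, tokens)).encode()).hexdigest()
        return (layer, digest)

    def get_or_compute(self, layer, tokens, compute_kv):
        key = self._key(layer, tokens)
        if key not in self._cache:
            self.misses += 1
            self._cache[key] = compute_kv(tokens)  # one model pays the projection cost...
        else:
            self.hits += 1                         # ...every other model reuses the entry
        return self._cache[key]


shared = SharedKVCache()

def fake_kv(tokens):
    # Stands in for the real K/V projection of the shared base layers.
    return [t * 2 for t in tokens]

# Two "different" fine-tuned models serving the same prompt hit one cache:
kv_a = shared.get_or_compute(layer=0, tokens=[1, 2, 3], compute_kv=fake_kv)
kv_b = shared.get_or_compute(layer=0, tokens=[1, 2, 3], compute_kv=fake_kv)
```

The latency and throughput gains reported would then follow from the second and subsequent models skipping both the KV computation and the duplicate memory allocation.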
AI · Bullish · arXiv – CS AI · Mar 17 · 7/10
🧠Researchers introduce BevAD, a new lightweight end-to-end autonomous driving architecture that achieves a 72.7% success rate on the Bench2Drive benchmark. The study systematically analyzes how architectural choices affect closed-loop driving performance, revealing limitations of open-loop dataset approaches and demonstrating strong data-scaling behavior through pure imitation learning.
AI · Neutral · arXiv – CS AI · Mar 12 · 7/10
🧠Researchers propose treating multi-agent AI memory as a computer architecture problem, introducing a three-layer memory hierarchy and identifying critical protocol gaps. The paper highlights multi-agent memory consistency as the most pressing challenge for building scalable collaborative AI systems.
AI · Bullish · OpenAI News · Mar 11 · 7/10
🧠OpenAI has developed an agent runtime that transforms their Responses API from a simple model interface into a full computing environment. The system uses shell tools and hosted containers to enable secure, scalable AI agents that can manage files, execute tools, and maintain state.
🏢 OpenAI
AI · Bullish · arXiv – CS AI · Mar 11 · 7/10
🧠Researchers have developed Variational Mixture-of-Experts Routing (VMoER), a Bayesian framework that enables uncertainty quantification in large-scale AI models while adding less than 1% computational overhead. The method improves routing stability by 38%, reduces calibration error by 94%, and increases out-of-distribution detection by 12%.
AI · Bullish · arXiv – CS AI · Mar 11 · 7/10
🧠Researchers have developed UltraEdit, a breakthrough method for efficiently updating large language models without retraining. The approach is 7x faster than previous methods while using 4x less memory, enabling continuous model updates with up to 2 million edits on consumer hardware.
AI · Neutral · arXiv – CS AI · Mar 9 · 7/10
🧠Researchers propose a framework for decentralized resource allocation in real-time AI services across device-edge-cloud infrastructure. The study shows that dependency graph topology determines whether price-based allocation can work at scale, with hierarchical structures enabling stable pricing while complex dependencies cause instability.
AI · Bullish · arXiv – CS AI · Mar 6 · 7/10
🧠Researchers propose asymmetric transformer attention where keys use fewer dimensions than queries and values, achieving 75% key cache reduction with minimal quality loss. The technique enables 60% more concurrent users for large language models by saving 25GB of KV cache per user for 7B parameter models.
🏢 Perplexity
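The asymmetry can be made concrete with a small sketch: keys live in a reduced dimension d_k < d, so the key half of the per-token cache shrinks proportionally, while queries are projected down to d_k before the dot product and values keep the full width. This is a minimal NumPy sketch under that reading of the summary; the projection matrices are random placeholders, not the paper's trained weights:

```python
import numpy as np

rng = np.random.default_rng(0)
d, d_k, n = 64, 16, 10            # d_k/d = 1/4, so the key cache is 4x smaller

W_q = rng.normal(size=(d, d_k))   # queries projected down to match low-dim keys
W_k = rng.normal(size=(d, d_k))   # keys produced directly in d_k dimensions
W_v = rng.normal(size=(d, d))     # values stay full-width

x = rng.normal(size=(n, d))       # token hidden states
Q, K, V = x @ W_q, x @ W_k, x @ W_v

# Only K (shape n x d_k) and V (n x d) need to be cached per token;
# the cached key bytes drop by the d_k/d ratio.
scores = (Q @ K.T) / np.sqrt(d_k)             # dot products in the small space
w = np.exp(scores - scores.max(axis=1, keepdims=True))
w /= w.sum(axis=1, keepdims=True)
out = w @ V                                    # output keeps the full value dimension
```

Since queries are recomputed every step but keys are cached for the whole context, shrinking only the key dimension is what converts into serving capacity: the cache per user gets smaller while the value path, which carries the output, is untouched.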
AI · Bullish · arXiv – CS AI · Mar 5 · 7/10
🧠Researchers have introduced Agentics 2.0, a Python framework for building enterprise-grade AI agent workflows using logical transduction algebra. The framework addresses reliability, scalability, and observability challenges in deploying agentic AI systems beyond research prototypes.
AI · Bullish · arXiv – CS AI · Mar 5 · 7/10
🧠Researchers introduce GraphMERT, an 80M-parameter AI model that efficiently extracts reliable knowledge graphs from unstructured text data. The system outperforms much larger language models like Qwen3-32B in generating factually accurate and semantically valid knowledge graphs, achieving 69.8% FActScore versus 40.2% for the baseline.
AI · Bullish · arXiv – CS AI · Mar 3 · 7/10
🧠Researchers introduce DRAGON, a new framework that combines Large Language Models with metaheuristic optimization to solve large-scale combinatorial optimization problems. The system decomposes complex problems into manageable subproblems and achieves near-optimal results on datasets with over 3 million variables, overcoming the scalability limitations of existing LLM-based solvers.
$NEAR
AI × Crypto · Bullish · Bankless · Feb 26 · 7/10
🤖The article discusses how AI agents require cryptocurrency infrastructure to achieve scalability. It explores the technological developments needed to build an AI agent economy on the Ethereum blockchain.
$ETH
AI × Crypto · Bullish · CoinTelegraph – AI · Feb 26 · 7/10
🤖Stripe co-founders Patrick and John Collison predict that blockchain networks will need to handle 1 billion transactions per second (TPS) to support widespread adoption of AI agents. This represents a massive scalability challenge for current blockchain infrastructure.
Crypto · Bullish · Bankless · Feb 25 · 7/10
⛓️The Ethereum Foundation has released an updated 'Strawmap' roadmap outlining Ethereum's development priorities and timeline. The roadmap highlights ambitious goals including shielded ETH transfers for enhanced privacy and scaling to 10,000 transactions per second.
$ETH
Crypto · Bullish · The Defiant · Feb 19 · 7/10
⛓️The Ethereum Foundation announced its 2026 Protocol priorities, focusing on scalability, user experience, and security improvements. The network is preparing for the upcoming Glamsterdam upgrade as part of its long-term development roadmap.
$ETH
Crypto · Bullish · Bankless · Feb 16 · 7/10
⛓️Ethereum is advancing its zkEVM technology alongside post-quantum security measures and client-side proving capabilities. These technical developments are converging to create a more efficient and scalable Layer 1 blockchain solution.
$ETH
Crypto · Bullish · Wu Blockchain · Feb 14 · 7/10
⛓️LayerZero has announced the launch of Zero, a new Layer 1 blockchain specifically designed to tackle scalability and privacy issues that have hindered Wall Street's adoption of blockchain technology. This development represents LayerZero's strategic move to bridge traditional finance with blockchain infrastructure.
$AAVE