Decentralized GPU networks are emerging as a cost-effective option for certain AI tasks, particularly inference and routine processing. While training large-scale AI models remains largely confined to hyperscale data centers because of the sheer GPU resources required, advances in open-source models are making consumer GPUs increasingly viable, according to Mitch Liu, co-founder and CEO of Theta Network.
Training frontier AI models still depends heavily on centralized infrastructure: OpenAI used more than 200,000 GPUs for the launch of GPT-5, and Meta trained its Llama 4 model on a cluster of more than 100,000 Nvidia H100 GPUs. Nökkvi Dan Ellidason, CEO of Ovia Systems, likens this approach to constructing a skyscraper, where tightly integrated hardware in a single location is what allows the system to operate efficiently.
Decentralized networks, by contrast, struggle to achieve the tight synchronization that high-performance AI training demands: data-parallel training requires every worker to exchange gradient updates after each training step, and internet latency and unreliable links make that exchange prohibitively slow. The shift toward smaller, more optimized models, however, points to a growing role for decentralized networks in the AI landscape.
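To make the bandwidth bottleneck concrete, here is a minimal back-of-envelope sketch (not from the article) estimating how long one gradient synchronization round would take for a hypothetical 70B-parameter model. It assumes fp16 gradients, a bandwidth-bound ring all-reduce, and rough ballpark link speeds; all figures are illustrative.

```python
# Back-of-envelope sketch: why gradient synchronization over consumer
# internet links stalls data-parallel training. Assumptions: fp16 gradients,
# ring all-reduce, bandwidth-bound (latency ignored), ballpark link speeds.

def allreduce_seconds(params: float, n_workers: int, bandwidth_gbps: float) -> float:
    """Time for one ring all-reduce of fp16 gradients across n_workers."""
    grad_bytes = params * 2                                      # fp16 = 2 bytes/param
    per_worker = 2 * (n_workers - 1) / n_workers * grad_bytes    # ring all-reduce volume
    return per_worker / (bandwidth_gbps * 1e9 / 8)               # Gbps -> bytes/s

PARAMS = 70e9   # hypothetical 70B-parameter model
WORKERS = 64    # hypothetical worker count

# Datacenter-class interconnect (e.g., InfiniBand NDR, ~400 Gbps per link)
print(f"datacenter fabric: {allreduce_seconds(PARAMS, WORKERS, 400):,.1f} s per step")
# Consumer broadband uplink (~100 Mbps = 0.1 Gbps)
print(f"home uplink:       {allreduce_seconds(PARAMS, WORKERS, 0.1) / 3600:,.1f} h per step")
```

Under these assumptions, a single synchronization round takes a few seconds on a datacenter fabric but roughly six hours over residential broadband. That gap is why frontier training stays centralized, while inference, which requires no such lockstep gradient exchange, can run on distributed consumer hardware.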