Most formidable supercomputer ever is warming up for ChatGPT 5 — thousands of ‘old’ AMD GPU accelerators crunched a 1-trillion-parameter model

The most powerful supercomputer in the world has used just over 8% of its GPUs to train a large language model (LLM) with one trillion parameters – roughly the scale rumored for OpenAI's GPT-4.

Frontier, housed at Oak Ridge National Laboratory, used 3,072 of its AMD Instinct MI250X GPUs to train an AI system at the trillion-parameter scale, and just 1,024 of them (roughly 2.7%) to train a 175-billion-parameter model – the same size as GPT-3, the model that originally powered ChatGPT.
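For scale, here is a quick back-of-the-envelope check of those percentages. The total GPU count used below (9,408 nodes with four MI250X accelerators each) is an assumption drawn from ORNL's published Frontier specifications, not a figure stated in this article:

```python
# Sanity-check the GPU shares quoted above.
# ASSUMPTION: Frontier's total of 9,408 nodes x 4 AMD Instinct MI250X GPUs
# comes from ORNL's public specs, not from this article.
TOTAL_GPUS = 9_408 * 4  # 37,632 accelerators

runs = [
    ("1-trillion-parameter", 3_072),
    ("175-billion-parameter", 1_024),
]

for label, gpus in runs:
    print(f"{label} run: {gpus:,} GPUs = {gpus / TOTAL_GPUS:.1%} of Frontier")
# -> roughly 8.2% and 2.7%, in line with the figures in the text
```

Published totals for Frontier vary slightly depending on how nodes and accelerator modules are counted, but the shares land near 8% and 2.7% either way.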


