AI startup: AMD beats NVIDIA on AI GPU

AMD MI300 AI Chip
AI startup TensorWave is among the first to run a public setup built on AMD Instinct MI300X AI accelerators. The CEO believes they are a better choice than NVIDIA's dominant Hopper H100 AI GPU.

AI startup TensorWave is one of the first with a publicly deployed setup powered by AMD Instinct MI300X AI accelerators. According to CEO Jeff Tatarchuk, these accelerators are a far better option than NVIDIA's dominant Hopper H100 AI GPU. TensorWave has begun building AI systems around AMD's latest Instinct MI300X accelerator, which the company plans to lease out at a fraction of the cost of NVIDIA's H100.

TensorWave plans to have 20,000 of AMD's new Instinct MI300X AI accelerators deployed across two facilities before the end of the year, and aims to bring liquid-cooled systems online in 2025. Jeff Tatarchuk says AMD's new Instinct MI300X is "dominant over the H100" on "pure specs," and he is not wrong. But that holds against the original H100 with its 80GB of HBM3, while the improved H200 offers a much larger 141GB of HBM3e memory with up to 4.8TB/sec of memory bandwidth.

With the Blackwell B200 now here, packing 192GB of HBM3e at 8TB/sec of memory bandwidth, that may change. For now, AMD offers the most VRAM on an AI GPU: NVIDIA's H100 tops out at 80GB (96GB models exist, but only in China), and even the upcoming H200 with its 141GB of HBM3e has less capacity, and slower memory, than AMD's Instinct MI300X.
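The memory comparison above can be tabulated. A minimal sketch; the H200 and B200 figures are as quoted in the article, while the H100 and MI300X bandwidth numbers are assumed from the vendors' public spec sheets:

```python
# HBM capacity (GB) and memory bandwidth (TB/s) per accelerator.
# H200/B200 figures as cited above; H100 (SXM) and MI300X bandwidth
# are assumptions taken from publicly listed specs.
gpus = {
    "NVIDIA H100 (80GB)": (80, 3.35),
    "NVIDIA H200": (141, 4.8),
    "NVIDIA B200": (192, 8.0),
    "AMD MI300X": (192, 5.3),
}

# Rank by capacity, then bandwidth, to show why the B200 changes the picture:
# it matches the MI300X on capacity and pulls ahead on bandwidth.
for name, (cap, bw) in sorted(gpus.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name:20s} {cap:4d} GB  {bw:.2f} TB/s")
```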

But AI workloads are not just about raw hardware and VRAM. The accelerators also need to deliver performance on par with NVIDIA's dominant H100. Tatarchuk says there is plenty of enthusiasm for AMD's new Instinct MI300X as an alternative to NVIDIA, but customers aren't yet sure they will get the same performance.
