NVIDIA's Blackwell Ultra Chips Set a New Benchmark for AI Infrastructure
NVIDIA has officially begun shipping its Blackwell Ultra B300 and B300A GPUs, the most powerful AI accelerators the company has produced to date. Built on TSMC's 4NP process with 288 GB of HBM3e memory per GPU, the Blackwell Ultra architecture delivers up to 15 petaflops of dense FP4 inference throughput per chip, roughly a 1.5× step up from the original Blackwell B200 and a generational leap over Hopper-era H100 systems.
For Singapore-based system integrators and data centre operators, this marks a critical procurement decision point. The transition from H100 clusters to Blackwell-based DGX B300 systems and GB300 NVL72 racks requires significant infrastructure investment, both in power density (NVL72 racks draw up to 120 kW each) and in liquid-cooling capacity. CSA Group and major hyperscalers have confirmed deployments across Singapore's Jurong Island and Tuas data centre corridors, and NVIDIA's partner ecosystem is expected to accelerate local adoption through H2 2025.
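As a rough capacity-planning illustration of what that 120 kW per-rack draw means in practice (the site power budget and PUE below are hypothetical values, not real facility data):

```python
# Rough capacity-planning sketch: how many NVL72 racks fit a power budget.
# The 120 kW per-rack draw comes from the article; the 5 MW site budget
# and the PUE of 1.3 are illustrative assumptions only.

RACK_POWER_KW = 120        # GB300 NVL72 per-rack draw (from the article)
GPUS_PER_RACK = 72         # GPUs per NVL72 rack
SITE_BUDGET_KW = 5_000     # hypothetical 5 MW facility power budget
PUE = 1.3                  # hypothetical power usage effectiveness

# Power actually available to IT load after cooling/distribution overhead.
it_power_kw = SITE_BUDGET_KW / PUE

racks = int(it_power_kw // RACK_POWER_KW)
gpus = racks * GPUS_PER_RACK

print(f"IT power available: {it_power_kw:.0f} kW")   # 3846 kW
print(f"Racks supported: {racks} ({gpus} GPUs)")     # 32 racks (2304 GPUs)
```

Under these assumptions, a 5 MW site supports about 32 racks, which shows why the jump in rack-level power density dominates the procurement conversation.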
Beyond raw compute, Blackwell's NVLink 5 interconnect lets a GB300 NVL72 rack present all 72 GPUs as a single NVLink domain with pooled memory, collapsing what was previously a multi-node cluster into one logical system. This has significant implications for enterprise AI workloads such as large-scale model fine-tuning and inference serving, and it reduces the integration burden on local MNCs and government agencies investing in sovereign AI infrastructure.
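To make the pooled-memory point concrete, here is a back-of-the-envelope check. The per-GPU HBM capacity is an assumption (NVIDIA's quoted Blackwell Ultra figure), and the 1-trillion-parameter model is purely illustrative:

```python
# Back-of-the-envelope: does a large model's weights fit in one NVL72 pool?
# HBM_GB_PER_GPU is an assumption (NVIDIA's Blackwell Ultra figure);
# the 1T-parameter model size is illustrative, not a specific product.

GPUS = 72                   # GPUs in one NVL72 NVLink domain
HBM_GB_PER_GPU = 288        # assumed per-GPU HBM3e capacity
FP4_BYTES_PER_PARAM = 0.5   # 4-bit weights = half a byte per parameter

pool_gb = GPUS * HBM_GB_PER_GPU                   # total pooled HBM
params = 1_000_000_000_000                        # illustrative 1T-param model
weights_gb = params * FP4_BYTES_PER_PARAM / 1e9   # FP4 weight footprint

print(f"Pooled HBM: {pool_gb / 1000:.1f} TB")     # 20.7 TB
print(f"FP4 weights: {weights_gb:.0f} GB")        # 500 GB
print(f"Weights fit in pool: {weights_gb < pool_gb}")  # True
```

Real deployments also need headroom for KV caches, activations, and framework overhead, but the arithmetic shows the shift: weight sets that previously had to span multiple nodes now sit comfortably inside a single NVLink domain.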