Nvidia Blackwell

Nvidia Unveils Powerful Generative AI Blackwell Chip and Platform

Nvidia CEO Jensen Huang introduced the massively powerful new Nvidia Blackwell chip and platform during his keynote at this year’s GTC conference. Blackwell is aimed at accelerating generative AI projects while improving their efficiency, a major cost concern in running large, real-time language models. Built on the Blackwell GPU architecture, the platform can run trillion-parameter models at up to 25 times the efficiency of its predecessor.

“Accelerated computing has reached the tipping point — general purpose computing has run out of steam,” Huang said at GTC. “We need another way of doing computing — so that we can continue to scale, so that we can continue to drive down the cost of computing, so that we can continue to consume more and more computing while being sustainable. Accelerated computing is a dramatic speedup over general-purpose computing, in every single industry.”

Named after mathematician David Harold Blackwell, the new platform achieves its capabilities through the new Nvidia GB200 Grace Blackwell Superchip, which connects GPUs to CPUs over a high-speed interconnect, and the Nvidia GB200 NVL72, a system designed for the most compute-intensive workloads. These components are expected to provide up to a 30-fold increase in performance for LLM inference workloads while significantly reducing operational costs and energy usage. Most major tech firms are already lining up to buy Blackwell and integrate it into their offerings, including Amazon Web Services, Dell Technologies, Google, and Microsoft.

The Blackwell platform is also supported by the company’s AI Enterprise operating system. That’s a key element of Nvidia’s long-term strategy, according to Huang. To make it feasible for businesses to integrate the new, more powerful GPUs, Nvidia is rolling out a new approach to creating software. The idea is that instead of writing code from scratch, a company can aggregate various specialized generative AI models and simply tell them the goal of the project, along with examples and other data. These packages of AI models are called NIMs, short for Nvidia inference microservices, and they utilize Nvidia’s models and computing libraries.
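
As a rough illustration of the microservice approach described above, the sketch below sends a prompt to a hypothetical NIM-style service over an OpenAI-compatible HTTP API. The endpoint URL, port, and model identifier are assumptions made for this example, not details from Nvidia’s announcement.

```python
# Minimal sketch: querying a hypothetical NIM-style inference microservice.
# Assumes the service exposes an OpenAI-compatible chat-completions endpoint;
# the URL, port, and model id below are illustrative, not real Nvidia values.
import json
import urllib.request

NIM_URL = "http://localhost:8000/v1/chat/completions"  # hypothetical endpoint
MODEL = "example/llama3-8b-instruct"                    # hypothetical model id

payload = {
    "model": MODEL,
    "messages": [
        # Per the article's framing: state the project's goal plus examples
        # and data, rather than writing bespoke application code.
        {"role": "system", "content": "You summarize quarterly sales data."},
        {"role": "user", "content": "Summarize: Q1 $1.2M, Q2 $1.5M, Q3 $1.1M."},
    ],
    "max_tokens": 128,
}

request = urllib.request.Request(
    NIM_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(request) as response:
    result = json.load(response)
    print(result["choices"][0]["message"]["content"])
```

In this picture, “assembling a team of AIs” amounts to chaining several such microservice calls, with each specialized model handling one step of the workflow.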

“How do we build software in the future? It is unlikely that you’ll write it from scratch or write a whole bunch of Python code or anything like that. It is very likely that you assemble a team of AIs,” Huang said. “The enterprise IT industry is sitting on a goldmine. They have all these amazing tools (and data) that have been created over the years. If they could take that goldmine and turn it into copilots, these copilots can help us do things.”
