Nvidia has announced the GH200, a new chip designed to run artificial intelligence models. The GH200 pairs the same GPU found in the company’s current highest-end AI chip, the H100, with 141 gigabytes of cutting-edge HBM3e memory and a 72-core ARM-based central processor.
The GH200 is designed for inference, the process of using a trained AI model to make predictions or generate content. Inference is computationally expensive, demanding significant processing power every time the model runs. Nvidia says the GH200 will deliver significantly faster inference, making it possible to run larger and more complex AI models.
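To make the term concrete, here is a minimal Python sketch of what one inference call looks like in practice, using the open-source Hugging Face transformers library; the gpt2 model and the prompt are illustrative stand-ins with no connection to Nvidia’s announcement:

    # One inference pass: load a trained model and run it once to
    # generate text. (gpt2 and the prompt are illustrative stand-ins.)
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")

    # Each call like this reads the model's weights and runs a full
    # forward computation, which is why inference hardware such as the
    # GH200 emphasizes memory capacity and bandwidth.
    result = generator("AI chips are designed to", max_new_tokens=20)
    print(result[0]["generated_text"])

Every user request to a deployed model triggers a pass like this, which is why faster inference hardware translates directly into lower serving costs.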
The GH200 is also designed to scale out, meaning it can be deployed across large data centers to serve many AI models simultaneously. This makes it well suited for cloud computing providers and other businesses that run AI workloads at scale.
Nvidia’s announcement comes as the company faces growing competition in the AI chip market from AMD and Google. AMD recently announced its own AI-oriented chip, the MI300X, which supports up to 192GB of memory. Google is also developing its own custom AI chips for inference.
This article is based on reporting from CNBC.