Nvidia (NVDA) made headlines this week at its GTC Conference when it announced that it is building its first standalone CPU. For a company that made its fortune on the power of its graphics cards, it’s a whole new direction.
And according to CEO Jensen Huang, the superchip, named Grace, is a powerful addition to the company’s lineup.
“This is a new growth market for us,” Huang told Yahoo Finance during an interview.
“The entire data center, whether it’s for scientific computing or AI training, or application inference, AI deployment, or edge data centers, all the way down to an autonomous system, like a self-driving car, we have data center-scale products and technologies for all of them,” he added.
Grace, named for computer programming pioneer Grace Hopper, boasts 144 cores and twice the memory bandwidth and power efficiency of leading high-end server chips, according to Nvidia.
The chip, which Nvidia calls a superchip because it’s two CPUs in one, is designed specifically for use in AI systems, an area the company has invested in heavily in recent years.
“For the first time, we are selling CPUs. Today, we plug our GPUs into off-the-shelf CPUs and will continue to do so. The market is really big, there are many different segments,” said Huang.
“Artificial intelligence or scientific computing, the amount of data that we have to move is a lot. This gives us the opportunity to bring a revolutionary type of product to an existing market for a new type of application that is really taking computing by storm.”
In addition to Grace, Nvidia introduced its new Hopper H100 data center GPU. That system, which contains 80 billion transistors, offers a significant performance leap over its predecessor, the A100 GPU, Nvidia said.
GPUs are important for high-performance computing and artificial intelligence applications because they can handle multiple processes at the same time. And Nvidia has used those capabilities for years.
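The parallelism Huang is describing can be sketched in a few lines. The example below uses NumPy as a CPU-side stand-in for a GPU array library (the article doesn’t name one; CuPy is a common GPU-backed equivalent with a near-identical interface): one vectorized call applies the same operation to a million elements at once, which is exactly the data-parallel pattern GPUs accelerate.

```python
import numpy as np

# A million input values, processed with a single vectorized call.
# On a GPU library such as CuPy, the same line would dispatch the
# work across thousands of cores in parallel.
x = np.arange(1_000_000, dtype=np.float32)

# One operation over the whole array, rather than a Python loop
# touching one element at a time.
y = x * 2.0 + 1.0

print(y[:3])  # the first few transformed values
```

This is an illustrative sketch of data parallelism, not Nvidia’s actual software stack; the point is that one instruction stream drives many simultaneous computations.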
“If you think about our business today, it’s really a data center-scale business. We offer GPUs, systems, software, and network switches,” explained Huang.
But as chips continue to shrink and the number of transistors packed into each CPU or GPU increases, there’s always the question of whether chipmakers like Nvidia are pushing the limits of the silicon that makes up their semiconductors.
Huang, however, says that’s not the case, and chipmakers still have plenty of time thanks to the power of cloud computing.
“It is absolutely true that transistor scaling is slowing down. We are getting more transistors, but the…rate of progress has slowed tremendously,” Huang explained.
“In the cloud, you can make computers as big as you want. And, in fact, if you look at the computer that we are announcing today, it is an incredible size. For example, 80 billion transistors, we have eight of those chips in a system. And then we take 32 of those systems and we put them together, and they work like a giant GPU.”
Nvidia, of course, isn’t the only company offering data center GPUs to customers interested in AI and high-performance computing. AMD (AMD) sells its own GPU-powered setup that it claims can easily take on Nvidia’s previous-generation data center GPU, the A100.
Meanwhile, Nvidia says that the Hopper GPU will blow the doors off the A100. Now we just have to find out how it compares to AMD’s offerings.
Do you have a tip? Email Daniel Howley at email@example.com. Follow him on Twitter at @DanielHowley.