NVIDIA Made a CPU.. I’m Holding It.

Hi, this is Wayne again with a topic “NVIDIA Made a CPU.. I’m Holding It.”.
It’S pretty clear where nvidia’s priorities lie these days, we’re here at the computex booth of one of their Partners gigabyte – and this is the entire gaming showcase. That’S because they, like the rest of the industry, understand that the future of computing lies in the data center. That is where the grace Superchip comes in under each of these gigantic heat spreaders are 72 of nvidia’s. Grace CPU course connected together, using what Nvidia calls the Envy link chip to chip interconnect for a total of 144 cores, except that’s just one of the nodes. This Server from gigabyte accepts not one, not two, but four of these modules in its four separate nodes, that is an absolutely mind-bending 576 cores in a 2u server rack.

But these are not the types of CPUs that you have in your gaming PC at home. Those processors from the likes of AMD and Intel are based on the x86 architecture. So, similar to what Apple did with their M-series M1 and M2 processors, NVIDIA is making use of a different processor architecture called Arm. And, uh, we actually did get permission to do this; we're going to be taking a closer look here. It doesn't look much like it, but this is the same style of processor that you might find in your phone. Arm processors have a lot of advantages, first and foremost being that they're typically more power efficient thanks to their relatively lightweight instruction set, so much so that NVIDIA claims these Grace CPUs have twice the performance per watt of the latest x86 chips. But the disadvantage is that they also require software, like your operating system and all the programs you need to run, to be coded and compiled specifically for Arm. Now, for the PC market, because x86 has been the standard for so long, it's difficult to justify switching over to Arm.
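To make that concrete, here is a minimal sketch of my own (not anything from NVIDIA) of why the switch matters: the same source code compiles on both architectures, but the binary it produces only runs on the one it was built for, which is why every operating system and application has to be rebuilt for Arm. The file name is made up; the compile-time macros are the standard GCC/Clang ones.

    // arch_check.cpp - illustrative sketch: one source file, architecture-specific binaries.
    #include <cstdio>

    int main() {
    #if defined(__aarch64__)
        std::printf("Built for 64-bit Arm (aarch64), like Grace or Apple M-series\n");
    #elif defined(__x86_64__)
        std::printf("Built for x86-64, like AMD and Intel desktop CPUs\n");
    #else
        std::printf("Built for some other architecture\n");
    #endif
        return 0;
    }

On a Grace box you would just build this natively with g++; on an x86 workstation you could cross-compile it with something like aarch64-linux-gnu-g++, but the x86 binary itself will never run on the Arm server.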

Switching would cost you so much in terms of backwards compatibility. But in the data center, the types of customers who are going to buy a processor like this are usually developing their own software anyway. Let's say Google, to run the algorithms that power Google Search or YouTube recommendations. For them, switching over to Arm isn't as big a deal, and in fact, companies like Amazon, who are developing their own Arm-based CPUs, are already doing it, and very effectively. I mean, hey, if my next gaming CPU could be half the power draw at the same performance as my current one, I'd be stoked, but this is even better. Imagine if, instead of one computer, you're talking thousands or tens of thousands. The savings start to become so large that it's less a question of "can we afford this migration?" and more a question of "can we afford not to make it?" Now, I didn't ask permission for this part, but nobody seems to be stopping me or even really paying attention to me. So, let's take apart a Grace Superchip. On each Grace Superchip is up to 480 gigabytes of LPDDR5X ECC memory per CPU, and what's really cool is that it can actually be accessed by either CPU over the NVLink interconnect. That's how fast this new NVLink is.
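Here's a quick, hedged illustration of what "either CPU can reach the other's memory" looks like from software, assuming the two Grace CPUs show up to Linux as two NUMA nodes, which is how dual-socket-style parts are normally presented. The libnuma calls are standard; the node numbering is just my assumption.

    // numa_sketch.cpp - hedged sketch, not NVIDIA sample code.
    // Build: g++ numa_sketch.cpp -o numa_sketch -lnuma
    #include <numa.h>
    #include <cstdio>
    #include <cstring>

    int main() {
        if (numa_available() < 0) {
            std::printf("No NUMA support on this system\n");
            return 1;
        }
        // On a Grace Superchip the second CPU would typically be node 1.
        int other_node = numa_max_node();
        size_t size = 1ull << 30;  // 1 GiB test buffer
        void *remote = numa_alloc_onnode(size, other_node);  // memory homed on the other CPU
        if (remote != nullptr) {
            // This thread can read and write the remote memory directly; on Grace,
            // those accesses travel over NVLink-C2C rather than a conventional socket link.
            std::memset(remote, 0xA5, size);
            std::printf("Touched 1 GiB homed on NUMA node %d\n", other_node);
            numa_free(remote, size);
        }
        return 0;
    }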

The only downside to this approach, since we're making comparisons to Apple, is that, just like with your M2 MacBook, you'd better decide how much memory you want in your server right at the time you buy it, unless you want to replace the entire compute engine while you perform a memory upgrade. Given that the rumored price of their H100 GPUs is a hundred thousand dollars, I don't even want to know what this thing costs, but hopefully you get a bit of a discount when you buy it together with the Grace Superchip CPU. Let me show you this. Can't believe they're letting me take this off the wall. Okay, success.

We have dropped nothing important so far today. This is Grace Hopper. On the one side, we've got the same 72-core Grace Arm CPU that we just saw, but on the other side, the, ooh, shiny, latest NVIDIA H100 Hopper GPU. You can probably see where this is going. Just like with the dual-CPU Grace module, these two are also NVLink chip-to-chip interconnected, meaning that the CPU and GPU have a whopping 900 gigabytes per second of theoretical bandwidth to talk to each other. So, for some perspective, a GPU using a full 16-lane Gen 5 PCIe slot would only have about 64 gigabytes a second of peak throughput. That is 1/14th as much as this. And that's far from the only mind-bending number that this thing is capable of. While the CPU side uses the same up to 480 gigabytes of LPDDR5X, the GPU side needs much faster HBM3 memory that runs at a whopping four terabytes per second, about four times faster.
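Just to sanity-check those two numbers, here is a tiny back-of-the-envelope sketch of my own, using the publicly documented PCIe Gen 5 signaling rate (32 GT/s per lane, 128b/130b encoding); that's where the roughly 64 gigabytes per second and the roughly 14x ratio come from.

    // bandwidth_math.cpp - back-of-the-envelope check of the figures quoted above.
    #include <cstdio>

    int main() {
        // PCIe Gen 5: 32 GT/s per lane, 128b/130b encoding, 16 lanes, one direction.
        double pcie_gen5_x16 = 32.0 * (128.0 / 130.0) / 8.0 * 16.0;  // ~63 GB/s
        double nvlink_c2c    = 900.0;                                // GB/s, NVIDIA's quoted figure

        std::printf("PCIe Gen 5 x16: ~%.0f GB/s\n", pcie_gen5_x16);
        std::printf("NVLink-C2C:     %.0f GB/s (~%.0fx the PCIe slot)\n",
                    nvlink_c2c, nvlink_c2c / pcie_gen5_x16);
        return 0;
    }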

That’S why the memory needs to be right on the package right next to the GPU. Now all that is great and cool and all but hbm is very expensive and, as you can see, there’s only so much space here. So the h100 only gets 96 gigabytes of memory, okay yeah for gaming. That certainly sounds like a lot, but AI data sets can involve terabytes of data, so it can get used up very quickly. That’S where the interconnect comes in.

It allows the GPU to access the CPU's memory in a very direct and transparent way, giving the H100 Hopper GPU a functional memory capacity of nearly 600 gigabytes. In practical terms, according to NVIDIA, that puts Grace Hopper anywhere from about two and a half times to nearly four times as fast as an x86 CPU paired with their last-generation A100 GPU. And where things get really wild is in the data center. With an NVLink Switch System, you could connect up to 256 GPUs together, giving them access to up to 150 terabytes of high-bandwidth memory.
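From the software side, this kind of oversubscription is roughly what CUDA's managed (unified) memory already lets you express: a single allocation bigger than the GPU's HBM that kernels can still touch. The sketch below is my own minimal illustration under that assumption, not NVIDIA sample code; on Grace Hopper, the NVLink-C2C link is what makes reaching into the CPU's LPDDR5X fast enough to be genuinely useful, while on a typical PCIe system the same code would usually still run, just slower.

    // oversub_sketch.cu - hedged sketch of GPU memory oversubscription with
    // CUDA managed memory. Build: nvcc oversub_sketch.cu -o oversub_sketch
    #include <cuda_runtime.h>
    #include <cstdio>

    // Grid-stride loop that writes every byte, forcing each page to be
    // resident (or at least reachable) for the GPU.
    __global__ void touch(char *buf, size_t n) {
        size_t stride = (size_t)gridDim.x * blockDim.x;
        for (size_t i = (size_t)blockIdx.x * blockDim.x + threadIdx.x; i < n; i += stride)
            buf[i] = 1;
    }

    int main() {
        size_t free_bytes = 0, total_bytes = 0;
        cudaMemGetInfo(&free_bytes, &total_bytes);

        // Deliberately allocate 1.5x the GPU's physical memory.
        size_t n = total_bytes + total_bytes / 2;
        char *buf = nullptr;
        if (cudaMallocManaged(&buf, n) != cudaSuccess) {
            std::printf("Managed allocation of %.1f GB failed\n", n / 1e9);
            return 1;
        }

        touch<<<1024, 256>>>(buf, n);
        cudaDeviceSynchronize();
        std::printf("GPU touched %.1f GB, more than its %.1f GB of on-board memory\n",
                    n / 1e9, total_bytes / 1e9);

        cudaFree(buf);
        return 0;
    }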

I mean, you guys remember that crazy Mars lander demo that we showed off on the petabyte of flash array? You could load that entire 1-billion-point data set into memory in that configuration and still have 50 terabytes to spare. Now, this module does get more power hungry than the dual-CPU version: a thousand versus 500 watts per module. But I mean, that's for the CPU, the GPU, and the RAM for both of them, and with this kind of performance. Of course, not everybody wants to move to an Arm hybrid CPU-GPU architecture, so NVIDIA is still going to be supporting their, uh, old-fashioned configurations, be they H100 GPUs in a PCIe form factor or their HGX H100 with up to eight SXM5 GPUs.

Each of these draws a massive 700 watts, making an RTX 4090 look like a child's plaything, and supports NVLink between the GPUs and NVSwitch to additional servers. This is the G593-SD0, and Gigabyte was very proud of the fact that it's the first NVIDIA-certified HGX H100 8-GPU server in a 5U chassis. Man, that is a lot of compute in a tiny space. Jake's in my ear here telling me I should pull one of the power supplies, but if you've noticed it getting darker, it's because they're actually shutting down the pre-show and trying to get us out of here. But there is one more thing that we wanted to talk about. Where'd it go? Dang it, Jake. No, oh my God! Oh my God, okay! Well, this is, uh, no wait, this isn't the one I wanted.

Okay, it's a ConnectX-7. This is an even faster network card, so this is probably the first NVIDIA-developed Mellanox network card, given that, uh, the acquisition was, what, about two years ago? Six? Six, yeah. But NVIDIA didn't buy Mellanox just to make faster ConnectX cards. No, it was to make these.

This is a BlueField-3, so it has networking on it. This is a 100-gigabit one, but it's available at speeds up to 400 gigabit. What's really special about it, though, is that it has up to 16 processing cores on it. Why, you might ask? Well, just like in the old days, when we started offloading TCP/IP processing to our network cards rather than having our CPU handle it.

This is going to offload all kinds of interesting things, like encryption of your network traffic or, say, for example, handling and managing your file system. Because when you're someone like AWS and you want to squeeze as much revenue as possible out of every CPU in your data center, you don't want it handling stupid BS that you could just offload to your network card. So the idea here is to free up CPU resources that can be leased to customers by putting them onto the network card itself, and this is especially true for software where the developer sells you a license per core. That's why, even though these are going to be wildly expensive, a lot more than a 4060 Ti, NVIDIA is going to sell shedloads of them. Just like I sold this segue to our sponsor, Pulseway. Are you sick of feeling like a prisoner chained to a desk managing IT systems? Unleash your inner IT hero with Pulseway's remote monitoring and management software. Pulseway's platform gives you the power to manage your IT infrastructure from anywhere, even from the comfort of your own couch, and with real-time alerts and notifications, you can be the first to know about potential issues before anyone else on your team. It's accessible through whatever device is close to you, thanks to their convenient apps, allowing you to control your IT systems like a boss, even if you're lounging in your PJs. So say goodbye to the boring routine of IT management and hello to the fun of being an IT hero with Pulseway's advanced technology. Don't wait.

This is your chance to become a legend in the IT world. Just try Pulseway for free today and experience the power of simplified IT infrastructure management. Click the link below to get started. If you guys enjoyed this video, why don't you check out... oh, the petabyte one, gosh, that's a good one. Well, we're at the Gigabyte booth, come on, uh, the G-RAID one? Yeah, actually, no, new one, new, new one. A three, one, X.

Four. I mean, damn it.