Hi, this is Wayne again with the topic "This is NVIDIA's New GPU."
Here it is, my friends: concrete proof that satire truly is dead. The GPU beside me contains 36 NVIDIA Grace Blackwell Superchips and is estimated to cost over $3 million. Now, obviously, the big heat sink on the side is illustrative. You won't be installing one of these in your gaming rig unless, uh, you happen to have 100,000 watts of power on tap and a building-scale liquid cooling system. But many of the technologies NVIDIA is introducing here will benefit gamers. The biggest one isn't really obvious until you go under the hood. In my hands is one GB200 Superchip.
Now, some of this we've seen before, like this 72-core NVIDIA Grace CPU, but these puppies right here, these are all new and very, very exciting. Do you guys see this tiny, tiny line here, thinner than the width of a human hair? That is the gap between the two Blackwell dies that make up a B200 GPU. Wait a second. Is that not two GPUs? Yes, but also no. While SLI might be dead for consumers, NVIDIA has been hard at work creating interconnects that run at absolutely dizzying speeds, allowing multiple GPU dies to act as a single GPU, allowing multiple GPUs to act as a single superchip, and allowing multiple superchips to act as a single...
Oh no. To act as a single cohesive processing unit. And it is going to unlock gaming experiences, and more, that are going to blow your mind. You speak English, right?
Yes, I speak English. How can I help you? Can you also speak segue to our sponsor? No, I am also skilled in survival and... Yes, you can. Yes, you can. It's fine. Ridge has got your last-minute
Father's Day gift covered with a big sale. Click on our link in the description and get up to 40% off their rings, their wallets, and more. We'll get to the demos in a bit, but first, let's take a closer look at the product that is turning global tech media into Jensen Huang's Swifties. NVIDIA chose not to disclose the number of CUDA cores or Tensor cores, or even the cache sizes, of their new B200 Blackwell GPU, but they did give us some numbers to work with. Apples to apples, it's expected to hit around 10 petaflops at FP8 sparse, which puts it roughly two and a half times faster than last-gen Hopper.
Also, each of these is expected to draw about 1,000 watts, hence the, uh, liquid cooling. Each of these GPUs gets 192 GB of HBM3e high-speed memory running at a casual 8 terabytes per second and is equipped with 1.8 terabytes per second of NVLink. And these numbers get even more ridiculous when we look at the Superchip as a whole. Each Superchip has two B200 GPUs and a Grace CPU for a total of 72 Arm CPU cores and 864 GB of RAM, and draws a total of 2,700 watts. Oh, and by the way, each of the 18 Blackwell compute nodes that make up an NVL72 rack contains two Superchips. Good lord.
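If you want to sanity-check where those rack-level numbers come from, here's a minimal back-of-the-envelope sketch in Python using only the per-GPU figures quoted above; the exact shipping specs may differ slightly.

```python
# Back-of-the-envelope tally of GB200 Superchip / NVL72 numbers,
# using only the approximate figures quoted in the video.

GPU_FP8_PFLOPS = 10        # ~10 petaflops FP8 (sparse) per B200
GPU_HBM_GB = 192           # HBM3e capacity per B200
GPU_HBM_BW_TBPS = 8        # HBM3e bandwidth per B200, TB/s
GPUS_PER_SUPERCHIP = 2
SUPERCHIPS_PER_NODE = 2
NODES_PER_RACK = 18

gpus_per_rack = GPUS_PER_SUPERCHIP * SUPERCHIPS_PER_NODE * NODES_PER_RACK
print(f"GPUs per NVL72 rack:     {gpus_per_rack}")                                 # 72
print(f"Rack FP8 compute:        {gpus_per_rack * GPU_FP8_PFLOPS} PFLOPS")          # ~720
print(f"Rack HBM3e capacity:     {gpus_per_rack * GPU_HBM_GB / 1000:.1f} TB")       # ~13.8
print(f"Aggregate HBM bandwidth: {gpus_per_rack * GPU_HBM_BW_TBPS / 1000:.2f} PB/s")# ~0.58
```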
In California, the rack that I was standing next to in the intro would cost a whopping $30 an hour to run, or about a quarter of a million dollars a year, assuming you're paying residential energy rates. Speaking of running, they literally need to take these demos to another room, so I'm going to have to tell you about the spine on our way out of here. We got our hands, however temporarily, on what NVIDIA is calling the spine of their NVL72 rack.
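That yearly figure is just simple arithmetic on the quoted $30 an hour; the per-kWh rate in the comment below is a ballpark assumption, not an official number.

```python
# Rough yearly energy cost for an NVL72 rack at the quoted $30/hour.
# ($30/hour itself would follow from roughly 120 kW of draw at around
#  $0.25/kWh -- both ballpark assumptions, not official figures.)

cost_per_hour = 30                    # USD, as quoted in the video
hours_per_year = 24 * 365

yearly_cost = cost_per_hour * hours_per_year
print(f"~${yearly_cost:,} per year")  # ~$262,800 -- about a quarter million
```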
This here contains 5,000 wires totaling over two miles in length, and it's cleverly laid out to optimize latency and power efficiency. See, the networking all goes in the middle, right here, and the Blackwell compute nodes, like the one they just took from us, go at the top and the bottom. Now, they could have used fiber optics, except that, uh, that would have cost them a casual 20,000 watts of additional power consumption. So, uh, clever layout for the win. Put it all together and you've got 72 Blackwell GPUs, around 2,600 Grace CPU cores, and roughly 13.8 terabytes of HBM3e memory with over half a petabyte per second of aggregate bandwidth. That's good for 720 petaflops of FP8 training, delivering results upwards of 30 times faster than the previous-generation HGX H100, and, if you didn't notice, with perfect linear scaling, something that is only possible when integrating your system this tightly. Even the placement of the individual blades matters. On the Supermicro rack that we're looking at here, you can see that they've got ten up top and eight at the bottom, with the nine NVLink switch units sandwiched in between. That's because timing the electrical signals matters a lot and is easier on a more symmetrical setup. Now, unfortunately, NVIDIA didn't have a switch for us to show you, so they'll have to be represented by this piece of plastic, but each of the nine units can handle 14.4 terabytes per second of NVLink. It is so integrated that NVIDIA says they think of this entire rack as one massive, power-hungry, single GPU, and it's kind of hard to argue otherwise, other than that most of the time it's not doing graphics, and I thought that's what the G was for. And the craziest part is, we haven't even looked at the craziest systems yet.
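To see why "perfect linear scaling" is worth bragging about, here's a toy Amdahl's-law model, not NVIDIA's benchmark data; the parallel fractions are made-up values purely to show how even a tiny communication overhead eats into a 72-GPU speedup.

```python
# Toy illustration of scaling efficiency across 72 GPUs.
# Any fixed fraction of each step spent on communication/synchronization
# caps the achievable speedup; tight NVLink integration shrinks that fraction.

def amdahl_speedup(n_gpus: int, parallel_fraction: float) -> float:
    """Amdahl's-law speedup over a single GPU."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_gpus)

for p in (0.99, 0.999, 1.0):
    print(f"parallel fraction {p:>5}: {amdahl_speedup(72, p):5.1f}x speedup on 72 GPUs")
# 0.99 -> ~42x, 0.999 -> ~67x, 1.0 -> 72x (perfect linear scaling)
```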
That was all MGX, a standard set of reference designs that's intended to be compatible with multiple generations, hence the MG. HGX is a whole different beast. In it, or rather on it, are eight Blackwell B200 GPUs with a combined 1.44 terabytes of GPU memory. Absolutely ridiculous. But the difference between this and what I just showed you is that any trace of Grace CPUs is completely gone. This is purely a GPU board, because this insanity is meant to be integrated into a partner system.
Like, say, from Supermicro. Now, NVIDIA does sell their own DGX unit with this board and the rest of the components that make up a complete system, but that's mostly intended to be a reference system. These eight GPUs get combined with NVLink, just like the rack setup, for a whopping 72 petaflops of FP8 training while drawing nearly 10,000 watts. Now, naturally, this much power is a little hard to cool, which is why it's so massive, but the good thing about it is it doesn't require messing around with water, or with 120,000-watt racks, if you're installing these into an existing data center,
since those practically don't exist. So you've got to spread them out a little bit according to your power budget, which means you need networking, and that is where NVIDIA's new networking hardware comes in. This Ethernet switch will do something in the neighborhood of, I think, 50 terabits per second of switching, which is all really cool. But what are we doing with this, exactly? I don't know. How about healthcare? The tools I'm looking at right here use machine learning to approximate viral protein folding, generate potential drug molecules to disable them, and then test them, rapidly accelerating drug development.
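For a sense of scale on that switch figure, here's a quick sketch; the ~50 Tb/s is the number quoted above, while the per-port speeds are common data-center link rates assumed here for illustration, not a quoted NVIDIA spec.

```python
# How many high-speed ports a ~50 Tb/s Ethernet switch ASIC can feed.
# Per-port speeds below are typical data-center figures, assumed for illustration.

switch_capacity_gbps = 50_000            # ~50 terabits per second

for port_speed_gbps in (400, 800):
    ports = switch_capacity_gbps // port_speed_gbps
    print(f"{port_speed_gbps} Gb/s ports supported: ~{ports}")
# 400 Gb/s -> ~125 ports, 800 Gb/s -> ~62 ports
```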
Oh, and this is cool: finding exactly what it is you're supposed to be taking a picture of with the ultrasound wand can take a bit of time, so why not let the machine identify it for you? That's a left ventricle, right? Cool. You know what else is cool? Simulations, like the one we're living in. Behind me is Earth-2, a climate and weather simulation platform that can run at such a high resolution that you can determine what's going to happen on a 1 km by 1 km basis, which is pretty cool.
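To put that 1 km resolution in perspective, a quick calculation; the only outside fact used is Earth's surface area of roughly 510 million square kilometers, and the vertical-level count is a hypothetical placeholder.

```python
# How many grid cells a global 1 km x 1 km weather grid implies.

earth_surface_km2 = 510_000_000              # Earth's surface area, approx.
cells_per_layer = earth_surface_km2 // (1 * 1)  # one cell per square kilometer

vertical_levels = 100                        # hypothetical; real models vary
total_cells = cells_per_layer * vertical_levels

print(f"~{cells_per_layer:,} cells per vertical layer")   # ~510 million
print(f"~{total_cells:,} cells with {vertical_levels} levels")
```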
But what if you need to simulate the movement of hot and cold air on a molecular level? Well, you can do that too. That is nuts. Which is all cool, but what if I can't afford a DGX or an MGX to train those data sets? Well, you can still use, or experience, NVIDIA's new NIMs, or NVIDIA Inference Microservices. NIMs are pre-trained, pre-optimized, containerized AI models that you can download and deploy for any number of use cases. And if that all sounded like gobbledygook, um, let's go back to that bilingual demo for a second. Facial animations can be an extremely time-consuming component of game development and are one of the big reasons that localization can be such a challenge.
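Before we get into the demo, here's what "download and deploy" looks like in practice, as a minimal sketch: a locally running, containerized model served over HTTP. The endpoint URL, port, model name, and response shape below are placeholders assumed for illustration; check the documentation of whichever NIM you actually deploy.

```python
# Minimal sketch of querying a locally deployed AI microservice over HTTP.
# Endpoint, model name, and response shape are placeholders (OpenAI-style assumed).
import requests

ENDPOINT = "http://localhost:8000/v1/chat/completions"   # placeholder URL

payload = {
    "model": "example-llm",                               # placeholder model name
    "messages": [{"role": "user", "content": "How do I craft a stone axe?"}],
    "max_tokens": 128,
}

response = requests.post(ENDPOINT, json=payload, timeout=30)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])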
The NIM in use here allows automatic mapping of speech to mouth animations and facial expressions. Meanwhile, this guy takes things two steps further, using NIMs for automatic speech recognition and facial animations, but also a third one that I think is perhaps the most interesting to me. There's a major concern right now in the games industry that AI is going to take jobs away from writers, but this guy uses a NIM for data retrieval that is part of Inworld AI's platform, and this is really cool. Instead of him just crapping out whatever response ChatGPT might throw at you, he's actually got an extensive backstory, one that does need to be written by a human writer, in order to give him a personality and context-specific information that will help you advance the story.
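Here's a toy sketch of that retrieval idea, not Inworld AI's actual platform: pull the most relevant human-written lore snippet for the player's question, then hand it to the dialogue model as context. The character, backstory lines, and matching logic are all hypothetical.

```python
# Toy retrieval step for a backstory-driven NPC (illustration only).

BACKSTORY = [
    "Jin runs a ramen shop in the old district and distrusts the syndicate.",
    "Jin's brother disappeared after taking a courier job from a stranger.",
    "Jin keeps a spare keycard to the warehouse behind the noodle counter.",
]

def retrieve(question: str, documents: list[str]) -> str:
    """Return the lore snippet sharing the most words with the question."""
    q_words = set(question.lower().split())
    return max(documents, key=lambda d: len(q_words & set(d.lower().split())))

player_question = "Where can I find a keycard to the warehouse?"
context = retrieve(player_question, BACKSTORY)
prompt = f"Context: {context}\nPlayer: {player_question}\nJin:"
print(prompt)   # this prompt would then be sent to the dialogue model
```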
"Have you ever tried any of the fine merchandise from lttstore.com?" Now, that is really cool, and we're just scratching the surface right now. Over time, NVIDIA is going to be looking to the gaming community, both developers and gamers, for inspiration for what to do with these. And, oh, I've got one more really cool demo. G-Assist here might just be a tech demo at the moment, but it's a pretty darn compelling one. How do I craft a stone axe? Okay, that is cool. But what dinosaur am I looking at right now? Okay, that's kind of sick. And what's cool is the image recognition and, I believe, the voice-to-text are both running locally on this RTX-series GPU.
I don't know about you guys, but I think this is so cool that I don't know what to say, other than: segue to our sponsor, Backblaze. Losing your data is never fun, so having solid backups of everything is super important, and Backblaze is an affordable, easy-to-use cloud backup solution with plans that start at just $9 a month. You can back up almost anything from your Mac or PC and access it anywhere in the world with their web and mobile apps, and they've restored over 55 billion files, with multiple options for how you can retrieve your data, including having them send a physical hard drive straight to your door. And if you're worried about accidentally deleting files, you can increase your retention history to one year for free. Plus, for organizational and business purposes,
their advanced admin controls are designed for security, scalability, and ransomware resilience. Backblaze has over three exabytes of data under their management and the trust of over half a million customers, including us. That's right, we not only work with them on a sponsored basis, we actually back up our servers nightly to Backblaze. So, starting at $9 a month,
it is hard to find a better investment than your peace of mind, so sign up today and get a free 15-day trial at backblaze.com. If you guys enjoyed this video, uh, why not check out our video from last Computex showing off the Grace CPUs and their last-gen Superchips? We got a little bit more into the weeds and it was very, very cool.