Computex 2023 Show Floor Day 2


Hi, this is Wayne again with the topic “Computex 2023 Show Floor Day 2”.
So, one thing I was looking for at j5create, and everywhere else on the show floor, is 10 gigabit USB-C to 10 gigabit Ethernet. Is that a thing that exists? j5create does have a 10 gigabit USB to 5 gigabit Ethernet adapter, so I guess the USB overhead won't give you any kind of penalty; you should be able to get the raw speed. But really, mostly they were just showing off their 10 gigabit USB Type-C docks with multiple 4K display outputs and a lot of really cool USB stuff, USB accessories. But I want a USB-C 10 gigabit to 10 gigabit Ethernet adapter.
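Just as a back-of-the-envelope check on that (the encoding and overhead figures below are my own assumptions, not anything j5create quoted): even after line encoding and protocol overhead, a 10 gigabit USB link has plenty of headroom for 5GbE, which is why the 10-to-5 adapter shouldn't take a speed penalty.

# Rough sanity check with assumed overhead figures, not vendor numbers.
usb_raw_gbps = 10.0              # USB 3.2 Gen 2 signaling rate
encoding_efficiency = 128 / 132  # 128b/132b line encoding
protocol_overhead = 0.90         # assume ~10% lost to packet/protocol overhead

usable_gbps = usb_raw_gbps * encoding_efficiency * protocol_overhead
print(f"Usable USB bandwidth: ~{usable_gbps:.1f} Gbps")       # ~8.7 Gbps
print(f"Headroom over 5GbE:   ~{usable_gbps - 5.0:.1f} Gbps")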

Does anybody over on this side have a 10 gigabit USB-C to 10 gigabit Ethernet adapter? And the answer is no. Waterproof power delivery systems and distribution systems, so much stuff. We literally have video products from YUAN that can do 8K60 capture, but 10 gigabit Ethernet? No. Why is USB so cursed? MSI is doing the smart charger thing too; everybody's getting in on EVs and power distribution systems and all the upgrades that go with that. I also found some absurdly huge e-ink displays; I really can't wait for those to be affordable enough and available enough that I could just deploy them in my house. Player three has entered the game: AmpereOne processors, dual processor, in this 1U chassis from Gigabyte.


You've got eight memory channels, two 1-gigabit LAN ports built in, twelve 2.5-inch Gen 4 NVMe/SATA hot-swap bays, dual ROM architecture, OCP 3.0 Gen 5 x16 slots, and dual 2000-watt power supplies. So Arm is already ready for prime time; this is a normal 1U server ready to drop into the data center. Maybe you need a little bit more expandability: 2U is an option, which gives you more PCIe connectivity. You get the same OCP 3.0 slots at the back and the same memory configuration. I mean, this could potentially be a really insane memory density. AmpereOne is basically ready to go: dual 5-nanometer-process Arm CPUs, ready to go here today. Arm's a threat, and/or congratulations Arm, depending on your perspective.

You can get an Arm system with a single CPU and you've still got up to 128 cores, 256 cores if you've got dual socket. But you know, 128 cores? Yeah, that's fine. Two full-height, half-length PCIe Gen 5 x16 expansion slots and kind of a lot of power distribution, in case you need to run something that uses a lot of power: dual 1300-watt 80 Plus Platinum redundant power supplies. Again, this is a standard rack configuration ready to drop into the data center, but it's Arm, and it's ready to run your Linux or open source workloads, or really a bunch of workloads even beyond that. It's basically ready to plug in and deploy in the data center. Cost per watt, cost per unit of compute, cost per other thing is going to be the dominator in that conversation.

EDSFF in the server, in the wild! So, the storage we were taking a look at earlier, like from Kioxia, in this form factor, in 1U. Look how much storage you can pack into a 1U server in the EDSFF form factor; this is really a lot of fun. And all of these connections at the rear, they're all PCIe, and you've got to have this kind of a setup to use it. This one is based on fourth-gen Xeon Scalable in this 1U platform; that's what we need in terms of PCIe connectivity. Now, normally there's a GRAID card back there to do the software RAID, or the hardware-accelerated software RAID, through those PCIe connections.

But that's a lot of connectivity needed for that many NVMe drives; well, that many EDSFF drives, which are PCIe connected. So still, kind of appreciate the storage: think about the possible storage in 1U, of insanely high-speed storage. 24, no, 32 drives, times 15, 16 terabytes of capacity each. The editor is going to do the math for us. Look at that number.
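For anyone following along at home, here's roughly the math the editor is doing, taking the numbers as quoted on the floor (32 EDSFF drives in the 15-to-16 terabyte class):

# Raw capacity of the 1U EDSFF box, using the drive counts quoted above.
drives = 32
for capacity_tb in (15.36, 16.0):  # common "15/16 TB class" capacity points
    total_tb = drives * capacity_tb
    print(f"{drives} x {capacity_tb} TB = {total_tb:.2f} TB (~{total_tb / 1000:.2f} PB) in 1U")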


That is an absurdly huge number. So if you're going to have a number that large, you're going to need insanely fast connectivity, right? Well, what about the Mellanox ConnectX-7? General purpose, short depth or medium depth? Yeah, I would say medium, because I would say that's true.


This is a single-socket fourth generation, so Sapphire Rapids. This chassis supports up to four full-height, full-length PCIe slots, plus we've got extra PCIe connectivity on both sides here. There's lots of room for PCIe connectivity and you've still got room for storage. There's also the E263-Z30, and this is a short-depth server, but check it out.

You've got two full-length PCIe Gen 5 slots in the top there, and you've got four more half-length, full-height PCIe slots there, and it's PCIe Gen 5. You've got Sapphire Rapids on top of that, so you can run your... this is your Sapphire Rapids platform.

The full eight memory channels, and you still have a lot of room for everything else you need to do. The storage is at the rear for your operating system, and there's plenty of cooling for whatever you're going to be running in your PCIe peripherals. This is the inference specialist: inference with machine learning. Yeah, check it out: the AMD inferencing cards. These are only eight PCIe lanes each, so they can pack four of them in here in this chassis design. Normally something like this chassis would be used for, you know, a double-height GPU, but in this case these inference cards don't need it.

They still give you the same level of compute, but they're a lot more compact, so they can fit eight of those on a side, 16 per chassis. Density is the name of the game. Then this system is four nodes in a 2U chassis, and this is Genoa. So this is the newer version of the chassis that I've taken a look at before on the channel, except you're supporting up to 96 cores per socket, two sockets per system, four systems per chassis, and you've got all that NVMe connectivity at the front of the chassis: six connections per node. And this is what Jensen was showing us on stage. This platform, the most powerful AI supercomputer, the world's first NVIDIA-certified HGX H100: eight GPUs in 5U.
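Quick density math on that Genoa 2U4N box before we get to the HGX system, using the figures quoted above (96 cores per socket, two sockets per system, four systems per chassis); the per-rack line is just an extrapolation of mine that ignores switches and PDUs:

# Core density for the Genoa 2U4N chassis, from the numbers quoted above.
cores_per_socket = 96
sockets_per_system = 2
systems_per_chassis = 4  # 2U4N

cores_per_2u = cores_per_socket * sockets_per_system * systems_per_chassis
print(f"{cores_per_2u} cores per 2U")                                      # 768
print(f"{cores_per_2u * 21} cores per 42U rack (ignoring switches/PDUs)")  # rough extrapolation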

This is, again, ready to deploy in your existing rack. These are your eight GPUs on the bottom, and this takes up just 4U of rack space. The fifth U is this 1U dual-socket node on the top.

This is where your compute is going to live, and you've got all that high-speed connectivity to your GPUs living in the bottom. Grace, that's the CPU; Hopper, that's the GPU, on one module. And then, wait a minute, should there be another module over here? Yes, very good observation. But wait.

This chassis will actually support four of these. Yeah, well, the heatsink is missing; there should be a heatsink here, because it's going to need a lot of airflow.

It's going to generate a lot of heat. This thing has up to 96 gigabytes of HBM3 GPU memory and 512 gigabytes of LPDDR5X with ECC for CPU memory capacity. It's a 2U 4-node design. This is CPU plus GPU, designed for giant-scale, high-performance AI plus compute.
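Taking the per-module numbers above at face value, and assuming all four Grace Hopper modules in the 2U4N chassis are populated, the per-chassis memory pool works out like this:

# Per-chassis memory totals, assuming four fully populated Grace Hopper modules.
hbm3_per_module_gb = 96      # GPU HBM3, as quoted
lpddr5x_per_module_gb = 512  # CPU LPDDR5X with ECC, as quoted
modules_per_chassis = 4      # 2U4N design

print(f"HBM3 per chassis:    {hbm3_per_module_gb * modules_per_chassis} GB")     # 384 GB
print(f"LPDDR5X per chassis: {lpddr5x_per_module_gb * modules_per_chassis} GB")  # 2048 GB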

It's got triple 3000-watt power supplies, because that's the kind of power load we're up against when we're cramming all of this into 2U. But this is basically a standard rack configuration; it doesn't require anything exotic. You just plug it in, and then you're good to go on whatever your existing networking infrastructure is. With NVIDIA getting into the CPU game a little bit more formally, I guess, this system will give you two CPUs per node from NVIDIA. I mean, yeah, two high-performance CPUs for high-performance compute and cloud computing, but you still have four nodes per chassis. This is the Grace CPU Superchip.

It's designed with up to 144 Arm Neoverse V2 cores, and those have SVE2. There's a 900-gigabyte-per-second NVLink-C2C connection; that's about 7x faster than PCIe Gen 5, if that gives you any idea. Up to one terabyte per second of total memory bandwidth, and up to 960 gigabytes of memory capacity per module.
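The "about 7x faster than PCIe Gen 5" figure checks out if you take the usual comparison point of a single PCIe Gen 5 x16 link at roughly 128 gigabytes per second bidirectional (that baseline is my assumption; the video doesn't spell it out):

# Where the ~7x NVLink-C2C vs PCIe Gen 5 figure comes from.
nvlink_c2c_gbs = 900     # GB/s, as quoted for the Grace CPU Superchip
pcie_gen5_x16_gbs = 128  # GB/s bidirectional, assumed baseline (~64 GB/s each way)

print(f"NVLink-C2C is ~{nvlink_c2c_gbs / pcie_gen5_x16_gbs:.1f}x a PCIe Gen 5 x16 link")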

It is compatible with NVIDIA BlueField and BlueField-3 DPUs. You've also got 16 dedicated NVMe slots at the front, and NVIDIA ConnectX-7 for ultra-high-speed connectivity for whatever you happen to plug in. You know, NVIDIA doesn't care; it's PCIe.

You can use that on your EPYC system, you can use that on your Sapphire Rapids system, you can use that on your new Grace Hopper supercomputer system; that's no problem at all. But what if you could get that kind of physical interface, but in a BlueField DPU? This kind of thing is really exciting, because it lets you move a lot of your software stack onto a PCIe card, potentially, which will take some of the load off of your server. What that means, as a practical matter, is that if, for example, you're using ZFS for storage, you could actually do the ZFS processing directly on this card and then write the processed data to system memory or other PCIe devices in the system. So you can do direct PCIe-to-PCIe I/O. The hardware exists, and some customers have the software and are doing special stuff in order to really take advantage of this. But this is basically a full system on a card. We've even got a teeny tiny CMOS battery located right there.
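To make the ZFS-on-the-DPU idea a bit more concrete: the BlueField card runs its own Arm Linux, so in principle the pool lives on and is managed by the card rather than by the host. The sketch below is purely illustrative and assumes a lot (hypothetical device paths and pool names, ZFS already built for the DPU's OS); it is not a vendor-provided workflow.

# Conceptual sketch only: ordinary ZFS administration, but executed on the
# DPU's own Arm Linux instead of on the host CPU. Device paths, the pool
# name, and the dataset name are hypothetical.
import subprocess

def run(cmd: list[str]) -> None:
    """Run a command on the DPU's OS and raise if it fails."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Hypothetical NVMe namespaces visible to the DPU over PCIe.
devices = ["/dev/nvme0n1", "/dev/nvme1n1", "/dev/nvme2n1", "/dev/nvme3n1"]

# Build the pool and a compressed dataset on the card itself; the host never
# has to spend CPU cycles on checksums, compression, or parity.
run(["zpool", "create", "dpupool", "raidz1", *devices])
run(["zfs", "create", "-o", "compression=lz4", "dpupool/exports"])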

I mean, for all intents and purposes, this thing is doing compute, and it is networked with the host system, but via PCIe. Patriot was also here showing off their Gen 5 SSDs: 12.4 gigabytes per second read and 11.8 gigabytes per second write. It's not quite enterprise storage, but it is PCI Express Gen 5. They're also showing off their fast DDR5 memory: flagship 8000-megatransfer-per-second memory.

They may have made a mistake on their sign. I actually mentioned that. Yes, oh, I'm sorry. Oh my god, you're recording. I promise, I promise I'm a good guy. It's fine, we know the difference. That's actually four thousand.

It was a misprint. Oh, okay, so it's 4000 megahertz, or 8000 megatransfers. We should put a Post-it note over it.
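For anyone confused by the sign mix-up: DDR memory transfers data on both edges of the clock, so the MT/s figure is double the MHz figure.

# DDR clock vs transfer rate: both numbers describe the same kit, in different units.
ddr_clock_mhz = 4000
transfers_per_clock = 2  # "double data rate"
print(f"{ddr_clock_mhz} MHz x {transfers_per_clock} = {ddr_clock_mhz * transfers_per_clock} MT/s")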

So, we were looking at enterprise storage earlier, and I was just commenting that PCIe Gen 5 storage is basically already here in consumer land, even though, in all those servers with all those E1.S drives, nobody's got PCIe Gen 5 storage in them. "I might have something for you to look at." Right. So it seemed like something to ask: why would you make a crazy high-end kit that very few people can run? This is to show... it's that old adage: why do you do this? Because we can. Yeah, if we can build this, we sure can figure out, you know, how to make the more basic gaming kits, like, for example...

These are the new guys. We did show, like, a previous ES, but they come in white as well. So if you don't like the Viper Venom styling, which comes in RGB and non-RGB, these come in RGB and non-RGB with white, and they go up to 7000, or AMD EXPO up to 62... The 7800 kit has 7600 and 7400 profiles, and the 7600 kit has 7400 and 7200.

That's a nice feature. I don't know why everybody doesn't do that. We do it just for more compatibility; it gives people peace of mind, because let's say you think, hey, my board...

Now you don't have to be like, oh, this isn't good for me. You just select the other profile. Makes sense.

I'm over on the other side now and found some more USB peripheral vendors, I guess chipset folks. Five gigabit Ethernet: eminently doable. 10 gigabit: not so much. Maybe ASMedia won't let me down. 240-watt USB charging, but no 10 gigabit Ethernet.

This kiosk accepts a question and then answers it with AI, and it's not even an NVIDIA demo; it's literally ChatGPT processing your request with the most suitable response. "Please give me a moment." That's so cool, huh? There's a whole mechanism inside there, and there's a proper... yeah. We have that, but we don't have 10 gigabit Ethernet in a USB-C form factor.

What kind of clown world is this? PCI Express serial cards? Oh yeah. But no 10 gigabit Ethernet.