Hi, this is Wayne again, with a look at Tyan and Supermicro at COMPUTEX 2023.
Supermicro was here, of course, showing off their chassis: GPU compute, dense CPU compute, blade servers, whatever you're looking for, they're building a chassis for it, even for the new E3.S form factor. Theoretically, E3.S is the form factor with support up to PCIe Gen 6, but they're already showing off PCIe Gen 5 qualified platforms. This is the X13 8U with eight Intel Max 1550 GPUs, 128 gigabytes apiece. That's a lot of GPU horsepower in one package. This is Intel's packaging technology come to the GPU, a little bit ahead of the CPU but parallel to what they're doing with CPUs. Then there's the chassis designed for E3.S: you've got 32 slots, with two PCIe Gen 5 lanes to each device, so no bandwidth limits there, and that's 64 lanes directly to the CPU. But you've actually got two rows of devices, for a total of 128 lanes in the chassis. And this one is a CXL module.
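The lane math quoted above works out cleanly; here's a trivial sketch using the slot and lane counts from the video (illustration only, not a spec sheet):

```python
# Lane math for the E3.S chassis described above (figures as quoted).
SLOTS_PER_ROW = 32      # E3.S bays in one row
LANES_PER_DEVICE = 2    # two PCIe Gen 5 lanes wired to each device
ROWS = 2                # two rows of devices in the chassis

lanes_per_row = SLOTS_PER_ROW * LANES_PER_DEVICE   # lanes straight to one CPU
total_lanes = lanes_per_row * ROWS                 # lanes across the whole chassis

print(lanes_per_row, total_lanes)  # 64 128
```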
Most people don't realize that the E3.S design spec is rated for up to 16 PCIe lanes per device. But right out of the gate we're going to have CXL and up to eight-lane devices; that's a lot of bandwidth in a very small package.
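For a sense of what an eight-lane device gets you, here's a quick back-of-the-envelope calculation. The 32 GT/s rate and 128b/130b encoding are the standard PCIe Gen 5 figures, not numbers from the video:

```python
# Rough per-direction PCIe Gen 5 bandwidth for an x8 E3.S device.
# Assumes 32 GT/s per lane and 128b/130b encoding (standard Gen 5 figures).
GT_PER_S = 32.0
ENCODING = 128 / 130               # 128b/130b line-encoding efficiency

gbps_per_lane = GT_PER_S * ENCODING   # ~31.5 Gb/s usable per lane
gbytes_per_lane = gbps_per_lane / 8   # ~3.94 GB/s per lane
x8_device = 8 * gbytes_per_lane       # ~31.5 GB/s per direction

print(round(x8_device, 1))  # 31.5
```

That's raw link rate per direction, before protocol overhead, so real-world throughput lands a bit lower.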
The E1 modules are more designed for a 1U rack configuration, whereas this is a 2U server, so we've got all the connectivity. You could still use this in a 1U configuration, but if you're going for dense clusters, say a four-node system in 2U, the E1 form factor is going to make a lot more sense.
Supermicro was also showing off their liquid-cooled AI development platform: a distribution block inside a relatively normal-looking tower/rack convertible chassis. You've got your normal-looking NVMe storage at the front, and then everything is about airflow to the distribution block and the radiator, plus the liquid cooling for all of the AI compute that lives inside. Supermicro was also showing off the BlueField-3 DPU. With NVIDIA, basically, they need everybody building their systems.
They can't make systems fast enough, and moving the compute required for 400-gigabit network connectivity off of the CPU (and off of whatever the GPU infrastructure might be) and onto the DPU makes a lot of sense for these AI systems. The network cards have suddenly gotten very complicated; they've basically turned into their own standalone systems that then talk to their host systems via PCIe. So you can do a lot of compute directly on that PCIe card and take a lot of the burden off of the host system. In case you're having trouble picturing what four GPUs might look like mounted inside a system, this will give you a little bit of an idea.
We've got our NVIDIA GPU here, and even though this is a desktop case, the air pressure from Supermicro's fans (just look at these fans; they move an enormous amount of air) will be sufficient cooling for these two-slot GPUs, which don't have their own active cooling the way a normal GPU would. This is an AI-compute, H100-type GPU, so this is what's necessary to keep them cool, but also not super loud.
The first board I took a look at in Tyan's booth is, of course, the AM5 board. The PCIe layout here is very, very smart; we haven't really seen this from any of the other server-focused AM5 boards.
They say they're ready to go for the higher-TDP parts, but you're going to need liquid cooling, so keep that in mind as you think about a chassis for this board. With four DIMMs and standard ATX power, this could be a really good solution for a home-lab or home-server type deployment. You get two Gen 4 M.2 slots, one right on top of the other, and you've still got all this PCIe connectivity, which is really good for the AM5 platform considering the relatively limited PCIe lanes coming out of that socket. It's got onboard IPMI, too, and a pretty good rear I/O configuration in terms of onboard capabilities, including the dedicated IPMI port and VGA video from the IPMI, plus all that PCIe connectivity. If you want something that's a little more suitable for desk-side computing, there's
also this dual-socket configuration for Sapphire Rapids. It's still designed for flow-through cooling, like you would have on a rackmount server, but it can also work in a workstation configuration: you run two dual-slot GPUs in these slots, and then you have these two slots for your high-speed peripherals, DPUs, or whatever else you might need in a workstation, all x16 and PCIe Gen 5. If two sockets is overkill for your application, there's the Tempest HX S5652, a single-socket version of much the same design, which frees up a little more room for more PCIe slots. Again, it's meant for the dual-slot, dual-
GPU configuration, plus you've still got some PCIe slots for whatever else you want to run; you could run up to four dual-slot GPUs with this config, no problem. For some of the more advanced home-lab or embedded solutions, there are also LGA 1700 boards, because don't forget, with an i7 you can run error-correcting memory. They have a SO-DIMM-based solution here with integrated NICs, and a more desktop-ish micro-ATX platform that also has pretty solid built-in audio. Then there's the Tomcat CX S8056 if you want to do two DIMMs per channel. It also gives you an OCP 3.0 slot, onboard IPMI as you would expect, expansion-slot risers here, and a lot of PCIe connectivity at the front edge of the motherboard.
This chassis is the one that will accept either motherboard, the S8056 or the Intel version; both will physically fit, and you've got the OCP 3.0 slot at the back. So if you want to see the S8056 mounted in a chassis, that's what it would look like in 1U. In this configuration it makes a little bit more sense: you've got your OCP 3.0 slot.
You've got your PCIe risers here at the rear, and your two-DIMM-per-channel configuration for Genoa. That was not something we saw at launch, but it's now qualified, in the channel, and basically ready to go. And then all of that PCIe connectivity we saw at the front comes here to 12 NVMe bays in this chassis.
The genius of the board design is that it's exactly the same layout as the other motherboard; you just swap it out, and you could run this in a dual-CPU configuration. Their cloud platforms get the Genoa socket with 12 memory channels; you can run more than 12 DIMMs for capacity if you really need it, but 12 DIMMs is the recommendation. There's your OCP 3.0 slot at the front, and then this is a variation on the IPMI interface that I have not seen in the wild. Having an aftermarket, add-in IPMI is just another layer of security assurance when building the system, because you can have separate quality control, separate production lines, and separate assurance that nobody is putting anything in there that they shouldn't be. And there are four of these nodes in this 2U chassis, so you can have 96 times four CPU cores of total capacity.
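That core arithmetic works out as follows (a trivial sketch; the 96-core figure is the top Genoa part mentioned above):

```python
# Total core capacity of the 2U four-node Genoa system described above.
CORES_PER_CPU = 96   # top-end Genoa SKU quoted in the video
NODES = 4            # independent single-socket nodes in the 2U chassis

total_cores = CORES_PER_CPU * NODES
print(total_cores)  # 384
```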
Four independent systems, and you've still got enough physical room to run those 15-terabyte E1.S NVMe drives we saw earlier. This is the Tyan Transport SX B8261. As configured, you've got 24 NVMe bays at the front, and they're using retimers to drive all of that PCIe connectivity to the front. But with this dual-socket Genoa system you've still got enough PCIe lanes that two x16 slots at the rear are open for high-speed networking, DPUs, or any other kind of high-speed
I/O you want to run. If you don't need 24 NVMe at the front, you can eliminate the risers and actually pack four GPUs at the rear of this chassis instead, so it's a very, very flexible chassis depending on what your needs are. If a 2U configuration is a little more your speed, the motherboards we've already looked at are available in 2U configurations. This is the Transport TS70-B8056. These are 3.5-inch drives at the front, a combination of NVMe and 3.5-inch SATA, and all of your PCIe connectivity is still available at the rear, so you could run full-height, full-length GPU configurations in this. You've also got the two-DIMM-per-channel, single-socket motherboard in this configuration, or you can use retimers: that motherboard has a bunch of connectors which will give you 18 of your NVMe at the front.
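The dual-socket Genoa lane budget behind the 24-NVMe-versus-four-GPU trade-off described a moment ago can be sketched roughly. The x4-per-drive allocation and the 128 usable host lanes are my assumptions, not figures from the video (dual-socket SP5 systems expose on the order of 128 to 160 lanes depending on the inter-socket link configuration):

```python
# Rough lane budget for the dual-socket Genoa system described above.
# Assumptions (not from the video): x4 per NVMe bay, 128 usable host lanes.
USABLE_LANES = 128

nvme_config = 24 * 4 + 2 * 16   # 24 x4 drive bays plus two x16 rear slots
gpu_config = 4 * 16             # drop the drive risers, run four x16 GPUs

assert nvme_config <= USABLE_LANES  # the storage build just fits the budget
print(nvme_config, gpu_config)      # 128 64
```

Under those assumptions the 24-drive configuration consumes the whole budget, which is why the retimers (rather than PCIe switches) can drive every bay at full width.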
The extra NVMe come from retimer cards, but you've still got a lot of PCIe connectivity at the back in this version of the chassis, the TS70A-B8056; you can see all that NVMe at the front. Now, you may have heard of the Open Compute Project, or the OCP standard.
This is the Capri 2 OCP server. Open Compute is its own universe of awesome things, and each one of these is a compute node designed to be installed in the Capri 2 chassis, but in different configurations. This configuration gives you two half-height, half-length PCIe slots plus two standard U.2 bays, and you've still got an OCP 3.0 slot down here.
We've got some sort of 25-gigabit Ethernet interface; depending on the purpose, you can have different configurations, 4x25G, 2x100G, 2x400G, whatever kind of OCP 3.0 peripheral you need. You've got your Genoa socket and four DIMM slots, so it's four channels, and this configuration brings all of your PCIe connectivity to your slots at the rear.
But you've got four E1.S slots, so you can run four NVMe in here in whatever kind of disk configuration you want. This chassis has one full-height, full-length slot and one full-height, half-length slot, which will work for AI accelerators, inference cards, basically anything you want to throw at it, and it's full Gen 5. And in case you think AMD is getting all the Open Compute action, the Open Compute modules are also available in Sapphire Rapids configurations, and that's what they had up and running on the show floor.
You could actually take it for a little bit of a spin in this configuration.