Checking Out The ASRock Rack B650D4U Server Motherboard!

Hi, this is Wendell, and today's topic is "Checking Out The ASRock Rack B650D4U Server Motherboard!"
Boom, would you look at that: B650, AM5, but this looks like a server motherboard. It's just plain, vanilla green; there's not a trace of RGB on this, and that's exactly what it is. This is a server motherboard designed for AM5. We're going to take a close look at it, but we're also going to look at use cases where it would make sense in an enterprise environment, not just the home lab. Oh, I've got something special for you today: we're going to look at this motherboard, but we're also going to look at places where you can shove this motherboard in as an upgrade path. You'd be surprised. We're going to use an AM5 7900, non-X.

That's the low-power version. Twelve cores on this platform will handily outperform a Xeon E5-2650 v2. Now, to be sure, that's a decade-old platform, but some of you are still rocking it out there. You could hollow out that platform and actually make use of it.

It turns out the chassis and power supplies have more longevity than the motherboard, which actually makes sense, but we'll take a closer look at that in a second. This is the -2L2T/BCM version of this motherboard. What that means is that it has built-in 10 Gb Ethernet, and the 10 Gb solution is Broadcom.

That's the BCM57416: two RJ45 10 Gb Ethernet ports, plus we've got two Intel i210 1 Gb Ethernet ports. I might have liked to see 2.5 Gb Ethernet, but support for the i219 is a little hit and miss on the whole VMware type of platform, so it's the enterprise NIC versus the not-quite-enterprise NIC. This board does have a built-in ASPEED AST2600 for out-of-band management, and that has its own separate NIC based around a Realtek RTL8211F. That means you get onboard VGA through the remote management, but you also have DisplayPort and HDMI out, because hey, all of these AM5 CPUs do actually have integrated video. So you've got VGA and integrated video, and you have to use the VGA for the remote console. Okay, I'm fine with that.

PCIe layout is always a challenge on these boards. There are actually a bunch of micro-ATX server motherboards from a bunch of different vendors, and none of their PCIe layouts is really what I would prefer, but to be clear, this PCIe layout is exactly what hyperscalers want.

There are a lot of cloud providers that just want rows and rows and rows of these kinds of motherboards. They don't really need a lot of PCIe slots; it's basically a question of whether you get the 1-gigabit interface or the 10-gigabit interface. They'll build that in, and it doesn't really matter. In fact, the other version of this motherboard, the one without the built-in 10 gigabit, has an extra M.2, because guess what ASRock did: they used the PCIe lanes that would go to the second M.2 slot for a 10 gigabit Ethernet adapter, and in this case it's built in instead of being on an M.2. (We did the M.2 cheat codes video, remember? That was a lot of fun.) We do manage to have three PCIe slots, albeit with a little bit of a strange layout with respect to the lanes.

We have the primary slot, which is x16 PCI Express 5.0, and it really is PCI Express 5.0, holy crap. We also get another four-lane slot, and then a PCI Express 4.0 x1 slot that hangs off the chipset. We also have an M.2 that goes directly to the CPU, and that is PCI Express 5.0.

This is a completely vanilla, bog-standard micro-ATX motherboard that would work in any micro-ATX system or micro-ATX case, but these are also designed to fit rackmount chassis that are set up for a micro-ATX motherboard. We've got a bunch of four-pin fan headers, but they're actually the extended four-pin fan headers.

So if you have high airflow, like if you're trying to cram this into a 1U rack chassis and use it with really, really power-hungry fans, this motherboard does support that. Because, again, I have a feeling that this motherboard is a not-super-modified version of a motherboard that a hyperscaler requested. Specifically, we have dual 8-pin power connectors at the top edge of the motherboard, as well as a 15-pin front VGA header. We've got our standard 24-pin ATX power connector, an out-of-band power-supply management connector at the top edge (which we will come back to), and a hardware TPM header, because sometimes the firmware TPM can still be a little flaky; you can add a TPM module here and make some potential operating-system hurdles go away.

At the bottom edge of the motherboard, we've got our front-panel 20-pin USB 3.2 Gen 1 (5 Gbps) header, a USB 2.0 header for two more USB 2.0 ports, a four-pin front speaker connector (although there's a built-in one), a UART header for an external serial port, LED debug codes, and then debug and LED headers for things like the front-panel connection.

Now, for a home lab, would I recommend a motherboard like this? In the past I definitely would have, because these are manufactured for higher reliability; typically they're better boards, blah blah blah. But ASRock has actually done a really good job of integrating features like automatic power-on if the system is off, resetting the BIOS, doing something sensible if something goes sideways. So generally, for a home lab, you really don't need out-of-band management anymore to be able to remotely manage the thing, because you can manage it from another system through a serial port, or something like a Raspberry Pi KVM, which is $50 to $100 in parts. That really lowers the margin on something like this. That said, these boards are also engineered for 24/7 operation inside a hyperscaler or a large cloud provider.

So you do get some features for that. This is a DDR5 platform, and it's rated for DDR5-5200, but it drops down to DDR5-3600 if you run two DIMMs per channel (four DIMMs total). That means we're looking at 64 to 96 GB as the reasonable maximum for this platform at full speed. You could do 192 GB, but that's going to run at the lower DDR5 speed, so there's going to be a little bit of a performance penalty. Well, it depends on what you're doing; it might not matter. You have to know whether or not you really need DDR5-5200, and if you don't, 192 GB is your memory limit. Otherwise, if you're going to run two DIMMs in the platform, one DIMM per channel, 96 GB is the limit right now. Eventually, maybe, you could do 128 and 256 GB.
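The DIMM-population trade-off just described can be sketched as quick arithmetic. Here's a minimal sketch in Python; the 32 GB and 48 GB module sizes and the DDR5-5200 / DDR5-3600 speeds are the figures discussed above, so treat them as illustrative rather than a validated memory QVL:

```python
# Back-of-the-envelope AM5 memory-population trade-off, using the
# figures discussed in the video (illustrative, not a qualified list).
def memory_options(dimm_sizes_gb=(32, 48), speed_1dpc=5200, speed_2dpc=3600):
    """Enumerate (total_gb, dimm_count, speed_mts) for 1 and 2 DIMMs per channel."""
    options = []
    for size in dimm_sizes_gb:
        options.append((size * 2, 2, speed_1dpc))   # one DIMM per channel: full speed
        options.append((size * 4, 4, speed_2dpc))   # two DIMMs per channel: derated
    return sorted(options)

for total, dimms, speed in memory_options():
    print(f"{total:>4} GB using {dimms} DIMMs at DDR5-{speed}")
```

So the capacity-versus-speed fork is: 96 GB at full speed with two 48 GB UDIMMs, or 192 GB at the derated speed with four.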

Those larger DIMMs are not being manufactured yet. Now, I recently did a video on the N100DC-ITX, which is not enterprise grade at all: four cores, four threads, and you can run a home server or home lab off of it, using some of those non-enterprise features just to get it to turn on. That is 100% an apples-to-oranges comparison. So, if you don't mind noise, something like this could be a good candidate for a retrofit.

It's a dual-socket LGA 2011 system, in our case with E5-2650 v2s, and it's originally a Supermicro chassis. Supermicro is basically the standard in terms of layout, in terms of the motherboard rear-I/O cutout, in terms of a lot of features here. Even the power supply connectors are basically standard and, in fact, forward compatible with our ASRock motherboard. Yeah.

It's a little sacrilegious, putting an ASRock motherboard in a Supermicro chassis, but I won't tell anyone if you don't. This, by the way, is also our test system: dual 2650 v2 versus a single 7900 non-X from AMD. The 7900 wins all day long, single-core and multi-core, and in our use case running virtualization, moving the virtual machines from this over to the other platform was basically seamless and easy. This system has a 10 gigabit LC-connector fiber-optic card; meanwhile, our ASRock motherboard has dual 10 gigabit Broadcom. This is an old Intel card, and even the Broadcom NICs will run circles around this ancient Intel X520.

If you do decide to undertake retro-converting a case like this, you're still going to have to deal with noise, mainly. But when you look at the front, you can see that we're dealing with a huge number of mechanical hard drives, and that's something to keep in mind when you look at the power supply. When you pull the power supply out, you'll see 800 watts on the label, and you'll be tempted to say, okay, it's an 800-watt power supply, I can basically throw anything modern in here. That's not really true, even though socket 2011 was sort of oriented, wattage-wise, around the 12-volt power rail.
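To see why the label is misleading, here's a rough power-budget sanity check. The drive count and per-device wattages below are illustrative assumptions, not measurements from this particular chassis:

```python
# Rough power-budget sanity check for a drive-heavy rackmount chassis.
# Drive count and wattages are illustrative assumptions, not measurements.
PSU_RATED_W = 800        # the "800 watts" printed on the power supply
DRIVES = 24              # hypothetical fully populated front bays
DRIVE_ACTIVE_W = 10      # ballpark for a 3.5" mechanical drive under load
DRIVE_SPINUP_W = 25      # ballpark peak draw per drive during spin-up

def headroom(cpu_and_board_w):
    """Watts left for everything else after drives and the CPU+board budget."""
    return PSU_RATED_W - DRIVES * DRIVE_ACTIVE_W - cpu_and_board_w

print(headroom(170))   # an AM5 7900-class build: real margin remains
print(headroom(400))   # a Threadripper/EPYC-class build: much tighter
print(PSU_RATED_W - DRIVES * DRIVE_SPINUP_W)  # spin-up alone eats most of it
```

Under these assumptions, a 65 W-class AM5 part leaves comfortable headroom, while a 300 W-plus workstation or server CPU does not, which is the point being made about Threadripper and EPYC retrofits.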

In our case, most of our 800-watt power budget is meant to go to the hard drives in the front of this chassis, not to the motherboard. So again, a retro conversion to AM5 makes a lot more sense than a retro conversion to something like Threadripper or EPYC, because those processors are going to use 100 to 200 watts more, especially if you go the Bergamo route versus the Milan route. It really is something you have to be careful with. Now, if you decide not to use half of your drive bays, okay, that can work out a little better, but also keep in mind that this is still a 10-year-old chassis, and the power supply form factor here is not exactly standard.

This particular Supermicro case also uses standard four-pin fan power connectors, but if you're doing your own retro conversion, you might run across fans with more than four pins. Double-check the pinout of any more-than-four-pin fan header on the ASRock motherboard to be sure they match; generally, though, motherboards and server chassis of the last five years are reasonably standardized in that regard, at least as long as it's not Dell or HP.

So, even with all the gotchas and limitations, the question is: should you? Generally, the answer is no, unless you're just up against a wall on budget or longevity. If you've got an application that you're retiring anyway, you just want to make it last a little longer, and you've started having hardware flakiness issues, this could be a reasonable way to solve the problem: just swap the motherboard in the chassis and you're good to go. Oh yeah, and check out our power supply connector.

Remember, I mentioned it at the beginning: get this, a little system-management-bus header. It's just a glorified low-speed, low-wire-count communication interface, but for this particular chassis, the power supply can actually communicate with the ASRock motherboard through this interface. That's not always going to be the case, but in this chassis it is actually true.

So you can plug this into that black header at the top edge of the motherboard and be ready to go, as far as being notified at the out-of-band-management level when one of the power supplies fails. This is an AM5 motherboard that I picked up with my own money. I'm keeping my eyes peeled for something in an AM5 flavor that has, say, two x8 slots, two or three x4 slots, two onboard M.2s, and four to eight onboard SATA ports.

I think that would be the perfect AM5 layout. The performance here is no slouch either. I mean, I'm using this with a 7900, but a 7950X would also work fine in this platform. There's no PBO or overclocking.

The VRM and power delivery are designed for a completely in-spec 7950X, and running a 7950X on this board involves no real performance compromises. There's also something like the A620M Pro RS WiFi. We took a look at that basically on launch day, but it wasn't ready yet; the BIOS has since been updated, and it's a pretty solid board now. I'm going to revisit it, because it's just about the cheapest AM5 motherboard you can possibly get, and its performance is not unreasonable. It doesn't throttle now. It did throttle in the beginning; there were some software hang-ups with that motherboard that were pretty severe if you were going to push the limits with a 7900.

The A620M Pro RS WiFi is a reasonable desktop board, and it does have some of the same BIOS features, where you can just say: hey, whenever you lose power, turn yourself back on, or turn yourself on at this specific time, which can help if you're building something aiming for, you know, five nines. But that motherboard is not really designed for 24/7/365 operation. That said, from my recent trip to AMD, it was nice to see the folks at AMD embracing non-EPYC parts for features that have typically been reserved for EPYC. I mean, AMD has been able to leverage the fact that their Zen cores are very similar, if not basically the same, up and down the stack. Zen 4c, the power-save configuration, is not actually a different instruction set; it's just laid out differently on silicon. It's physically smaller and has a lower power cap, but it gets the job done. The same is basically true in the enterprise. AMD is going so far, so fast in the enterprise that the availability of inexpensive AM5 with a reasonably well-qualified platform means the bottom falls out of the used market.

Where once you could be really excited about getting an old cast-off dual-socket server, now you don't really want it: the power usage is too high, it's too hard to retrofit, and it doesn't make sense from a performance standpoint.

Those 2650 v2s post CPU single-thread scores of around 280 to 300, something like that. The 7900 non-X is north of 700 for the single-thread score. So two 2650s are much slower, single-threaded and basically multi-threaded too, than a single 7900 with only 12 cores. That's sort of the world we live in. I mean, it has been a decade, but look at the lifetime of servers and when people actually bought them: they might have bought the system a couple of years into its service lifetime, then used it for five years. Well, that's seven or eight years right there, so it's only been out of service for a year or two. And then you press it into use in a home-server context? Nah, just get this.
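To put rough numbers on that comparison, here's a back-of-the-envelope sketch using the ballpark scores quoted above. The core counts are the parts' published specs; the naive aggregate (score times cores) ignores SMT and scaling losses, so treat it as illustrative only:

```python
# Rough comparison using the ballpark scores quoted in the video.
# Illustrative only: real multi-core scaling is not score * cores.
XEON_2650V2_ST = 290       # midpoint of the quoted 280-300 range
XEON_CORES_PER_SOCKET = 8  # E5-2650 v2 is an 8-core part
RYZEN_7900_ST = 700        # "north of 700" single-thread score
RYZEN_CORES = 12

single_thread_ratio = RYZEN_7900_ST / XEON_2650V2_ST
dual_xeon_aggregate = XEON_2650V2_ST * XEON_CORES_PER_SOCKET * 2
ryzen_aggregate = RYZEN_7900_ST * RYZEN_CORES

print(f"single-thread advantage: {single_thread_ratio:.1f}x")
print(f"dual Xeon aggregate: {dual_xeon_aggregate}, single 7900 aggregate: {ryzen_aggregate}")
```

Even with this crude model, one 12-core 7900 comes out ahead of two 8-core 2650 v2s in aggregate, while being roughly two-and-a-half times faster per thread.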

I would like to see fully qualified ECC support; that's not really a thing. I mean, AMD says it's up to board vendors to qualify it. ECC does work on here, with ECC UDIMMs, but the thing that's missing is that the platform doesn't report the ECC error to the management bus.
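On Linux, that kernel-side reporting path is typically the EDAC subsystem, which exposes per-memory-controller corrected and uncorrected error counters under sysfs. Here's a minimal sketch for reading them; the paths are the standard EDAC sysfs layout, but whether they're populated at all depends on the board, BIOS, and kernel driver:

```python
# Read corrected/uncorrected ECC error counters from the Linux EDAC
# sysfs interface. Returns an empty dict when EDAC is not loaded,
# so it degrades gracefully on systems without ECC reporting.
from pathlib import Path

EDAC_ROOT = Path("/sys/devices/system/edac/mc")

def ecc_error_counts(root=EDAC_ROOT):
    """Map memory-controller name -> (corrected, uncorrected) counts."""
    counts = {}
    if not root.is_dir():
        return counts
    for mc in sorted(root.glob("mc*")):
        try:
            ce = int((mc / "ce_count").read_text())  # corrected errors
            ue = int((mc / "ue_count").read_text())  # uncorrected errors
        except (OSError, ValueError):
            continue
        counts[mc.name] = (ce, ue)
    return counts

print(ecc_error_counts())  # empty dict if no EDAC driver is loaded
```

Getting those counts from the OS up to a BMC is exactly the part that's board-specific, which is the gap being described here.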

You have to do that with a kernel module, so it's a software thing. It's not like EPYC, where there's a more well-rounded solution, in that the EPYC management platform can handle it for you. Actually, the RAS aspects of the AMD platform are huge; I want to do a separate video on that, because AMD has come a long way very quickly, and the folks they have working on error handling on the platform are asking the right questions. How do we deal with ECC errors? How do we elevate that up to the operating system? How do we have the out-of-band management controller, or the management controller in general, say, "I've detected an ECC error, and it was this DIMM"? AMD has done a lot of work on that, and I can show some pretty interesting stuff, so stay tuned for that video.

This has been a quick look, a quickish look, at the ASRock Rack B650D4U, the one with the 10-gig interface, and maybe some usage and upgrade ideas.

I'm Wendell, this is Level One, and I'm signing out. You can find me in the Level One forums.