Dual Actuators: 500+ MB/sec Mechanical Drives w/ Exos Mach.2

Hi, this is Wayne again, with the topic “Dual Actuators: 500+ MB/sec Mechanical Drives w/ Exos Mach.2”.
Hard drives are mechanical storage; they're precision mechanical instruments. This is a Seagate nine-gigabyte hard drive that is over 30 years old. Yes, I said nine gigabytes. There's something like a hundred dollars of neodymium in here. It has enormous spinning platters: the drive is six inches across, with five-and-a-quarter-inch platters inside.

There's a read/write head that moves across the magnetic platters. Because of the Bernoulli effect it floats very close to, but never touches, the actual platters. And there's a magnetic coil, just like what's in a speaker, except that instead of converting electricity into sound, it converts electricity into mechanical motion. Of course, you could use it for sound too; see also the Floppotron. But really I want to wax poetic about how awesome it is that we have this Rube Goldberg machine of mechanical storage, and we're at the 1-to-10-terabit-per-square-inch levels of density. For the people in the European audience, we're talking about greater than one terabit per square centimeter.

So that means we have a two-dimensional surface and we're sticking a trillion magnetic bits into that tiny little space, and there is a mechanical, yes, physically movable mechanical thing in here that can move to that level of accuracy and find them. Well, maybe not this one, with its nine gigabytes of storage, but this one. These are not super widely available yet, because they're not for the uninitiated to use. This is 18 terabytes.

Okay, you've seen 18-, 20-, and 22-terabyte drives before, but this one has dual read/write heads. It turns out this is actually pretty easy to use with ZFS, and pretty easy to use on Linux, and that's why I'm taking a look. Seagate reached out and said: hey, 18-terabyte drives, do you think you can use these? I said yes, and they said: well, it's a little weird, because they show up as two devices in the system, so how do you set that up with RAID and everything else? And I said: ah, we've been doing that with ZFS for quite a long time on this channel; handling that is no problem at all. These drives are Serial Attached SCSI (SAS), although Seagate has a SATA version of these on the way; I'll probably take a look at the SATA version of those disks a little bit later. The SATA drives actually do show up as one big drive, but they depend on software to take full advantage of the two actuators. These drives are fast for sequential transfers: a single one is able to saturate a SATA link, which is huge for a hard drive.

For sequential transfers, these mechanical drives are basically in the same speed class as a SATA SSD: double the speed you would expect from a conventional mechanical hard drive, because there are actually two read/write heads. There is a little bit of a trick to getting them to work in tandem; we'll talk more about that. So I'm going to set up 12 of these in our 45Drives test storage server; 12 times 18 terabytes is a whole bunch of storage.

We're going to do some other upgrades to our Storinator, and we're going to do some ZFS benchmarks on that. That's pretty much what we're working with in terms of our physical server setup. Because there are 12 drives, there are 24 devices that show up from the perspective of our ZFS pool, because each drive is physically two drives.

If you're setting up something like RAID-Z1, you only have one drive's worth of redundancy. Let's say something happens and a drive goes bad: if the motor goes bad, you're going to lose two of those devices at once, and that's not going to work with RAID-Z1.

Unless, that is, you distribute your vdevs so that you lose the two devices on two different vdevs. In fact, here's a demonstration of that. Our ZFS pool here is 24 devices: two vdevs of 12 devices each, RAID-Z1. I've made sure that each half of each drive is on a separate vdev.
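As a sketch, a layout like that can be put together from the shell. Everything here is a placeholder: the pool name `tank` and the wwn-… IDs stand in for the real `/dev/disk/by-id` entries on your system, and the command is echoed rather than executed so it can be reviewed first.

```shell
#!/bin/sh
# Sketch only: 'tank' and the wwn-... device IDs are placeholders.
# One actuator of each physical drive goes in the first raidz1 vdev and the
# other actuator in the second, so pulling one physical drive costs each
# vdev only a single device: degraded, but no data lost.
# A real pool would list 12 IDs per vdev; echoed here rather than executed.
CMD="zpool create tank \
    raidz1 /dev/disk/by-id/wwn-0x5000c500aaaa0000 \
           /dev/disk/by-id/wwn-0x5000c500bbbb0000 \
    raidz1 /dev/disk/by-id/wwn-0x5000c500aaaa0001 \
           /dev/disk/by-id/wwn-0x5000c500bbbb0001"
echo "$CMD"
```

After creating a pool like this, `zpool status` shows the two raidz1 vdevs separately, which makes it easy to verify that the two halves of any one drive really did land in different vdevs.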

So when I pull one of the drives and we run zpool status, we can see that we're in a degraded state, because we've lost one device in each RAID-Z1 vdev. Now remember, with ZFS I don't actually recommend pulling the malfunctioning drive first. I recommend that you first put in the drive that's going to replace it, so you always want a free slot in your chassis. Our Storinator has 30 bays; use maybe 28 or 29 of them, so that when you do have a drive die, you've got a free slot for the replacement drive. Let it resilver onto the replacement, and then pull the drive that's malfunctioning. In case you're wondering how we keep track of which disk is which, it's the worldwide numbers. If you look in the /dev directory on Linux (ls /dev/disk/by-id), there are all these worldwide numbers (WWNs). The worldwide number is also printed on the drive's label, so if you can see the label, or you make notes about the worldwide number from the label, you can always physically find the disk.
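The replace-then-resilver-then-pull workflow just described might look roughly like this; the pool name and WWN-style IDs are placeholders, and the `zpool replace` is echoed rather than run:

```shell
#!/bin/sh
# Sketch of the replace-then-pull workflow; pool name and WWNs are placeholders.
# 1. 'zpool status tank' identifies the faulted device.
# 2. Insert the new drive in the free bay, then start the resilver:
OLD=/dev/disk/by-id/wwn-0x5000c500aaaa0000   # faulted half of the bad drive
NEW=/dev/disk/by-id/wwn-0x5000c500cccc0000   # matching half of the new drive
echo zpool replace tank "$OLD" "$NEW"
# 3. Wait until 'zpool status' reports the resilver complete, then pull the
#    bad drive. Since each physical drive is two devices, its other half
#    needs its own 'zpool replace' onto the new drive's other half as well.
```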

That part is no problem, and there's one bit in the middle of the worldwide number that tells you whether it's the top or bottom half of the drive. Easy peasy. You can use the script that I wrote on the Level1 forums to help you pick your device groups, and then you can set up your RAID-Z pool and go from there. Or maybe you're using something like TrueNAS and you want to do this through the GUI.

That's a little trickier; I recommend just using the terminal or command-line functionality of those systems to set it up. So for power users, or really anybody watching this channel, using these drives on your system is basically a non-issue. In fact, in our original 172-terabyte storage server video, some versions of those NetApp disk shelves actually contained 48 disks, two disks in a sled, and it works pretty much the same way as this; some of the old posts on the forum about setting that up apply exactly. So we could create a ZFS pool of our 24 virtual devices in RAID-Z2, and then any one of our drives could die and we won't lose any information; if two physical drives die, however, we can lose information. You can also create a RAID-Z3 pool, so that one and a half drives can fail, and that is probably the safest configuration if you're going to run a single vdev.
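Here's a minimal sketch of grouping device IDs by worldwide number from the shell. Two loud assumptions: the real distinguishing bit sits in the middle of the WWN as described above, but for illustration these made-up WWNs differ in their final hex digit, and this is not the actual Level1 forum script.

```shell
#!/bin/sh
# Sketch: group dual-actuator LUNs by WWN. Assumption (check your hardware
# and the drive labels): the two halves of one physical drive get WWNs that
# are identical except for a single digit -- here, the final one.
pair_key() {
    # Drop the last character so both halves of a drive map to the same key.
    printf '%s\n' "${1%?}"
}

# Made-up WWNs: two physical drives, four device nodes.
for wwn in wwn-0x5000c500aaaa0000 wwn-0x5000c500aaaa0001 \
           wwn-0x5000c500bbbb0000 wwn-0x5000c500bbbb0001; do
    printf '%s  %s\n' "$(pair_key "$wwn")" "$wwn"
done | sort   # sorting brings the two halves of each drive together
```

Once the halves are grouped, splitting each pair across your vdev lists is straightforward.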

Now, of course, SATA SSDs, even the lowest-end SSDs, are still going to outperform these mechanical hard drives. When you stripe the two halves together, you're basically doing RAID 0 inside a single mechanical hard drive. But I think Seagate has done the right thing from an engineering perspective, because it would make the drive too complicated and too costly to build some sort of RAID-0 mechanism into the drive's own controller; they would have had to do some real trailblazing there. And who knows, flash drives in another 10 years or so may finally outstrip the capacities here, because 18 terabytes at this price point is really not super expensive. Now, because these are SAS drives, they're not going to plug directly into your motherboard. SAS is not cross-compatible with SATA or anything like that: you can use SATA drives on a SAS controller, but you cannot use SAS drives on a SATA controller. Fortunately, SAS controllers are cheap. You can get SAS 6 or SAS 12 host bus adapters, which is a fancy way of saying "not a RAID controller", on eBay for 10 bucks,

20 bucks, 30 bucks, something like that. SAS, of course, works out of the box on most of the 45Drives Storinator chassis configurations. And if you're doing this on the cheap, there are those LSI disk shelves that I've been seeding into YouTubers' hands since time immemorial.

Those will work great, because they're dual-path; they'll support all of this madness that we're doing. So adding SAS capability to your existing home server, or going nuts with your home lab, is not a big deal. These are starting to be readily available, and the hyperscalers have been using them for a long time. I think Seagate was a little worried about the unwashed masses using a complicated product, but no, this is the right approach. And like I say, there is a SATA version of this disk, but it works a little differently, so I'll cover that in a different video.

I've added another 100-gigabit Ethernet interface to our Falcon Northwest Xeon 56-core system, and yeah, we are copying over the network in excess of five gigabytes per second to a mechanical storage array of just 12 drives. That's pretty much unheard of. I mean, think about this for a second: we're moving a tiny piece of ceramic with some copper coil embedded in it across a magnetic hard drive platter with enough precision and speed that we're dumping five billion bytes per second. I'm convinced: dual-actuator technology is the future. The fact that 12 drives, configured with a little less redundancy than I'd like, but still just 12 drives, can clear five gigabytes per second through our 100-gig link with our Falcon Northwest system is extremely impressive. This is a really awesome technological achievement, and I really hope it enters the mainstream. There may also be coupon codes below; I'm trying to get a coupon code, and at the time I'm shooting this I don't know if that'll actually come through, but there are links below. You should definitely check those out to show interest in dual-actuator drives. That's it for this one from Level One. You can find me with my awesome Falcon Northwest system and its 100-gig Ethernet, and maybe I'll be on the forums later.

I'm signing out; I'll see you there.