Hi, this is Wayne again, with the topic “We were WRONG about RAM – or were we?”.
This video from 2013 has 2.5 million views, and yet it's straight-up wrong. Can you imagine how many gamers, for all these years, we convinced that RAM speed didn't matter? We now know it does matter, but here's a question for you: when did RAM speed start to matter? Were we wrong before, or did something change since baby-faced Linus was still using Windows 7? And if so, what? I can tell you nothing changes about these segues to our sponsor. iFixit: for repairs on the go, iFixit has you covered. Find out more about the ultra-portable Minnow and Moray sets and how they can make your repairs easier at the end of the video.

We've got a few theories about why the impact of RAM speed is different for gaming today than it was nine years ago, and while we all agree that each of them is probably true, we don't know which is the most influential. Since I'm already here, let's start with my theory, which is this: the benchmarks themselves were flawed, and so our results were also flawed. Looking at the old video where we claimed RAM speed didn't matter, we ran a GTX 660 Ti at 1080p max settings with anti-aliasing enabled. We are GPU-bound out the wazoo, which isn't going to tell us much about how the RAM is behaving.
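By the way, if you want to sanity-check results like these yourself, here's a rough sketch of the comparison we keep doing in this video: take the average FPS from a slow-RAM run and a fast-RAM run and see whether the gap clears normal run-to-run noise. The function name and the five percent noise band are illustrative assumptions of mine, not part of any benchmarking tool.

```python
def ram_speed_delta(fps_slow: float, fps_fast: float, noise_pct: float = 5.0) -> str:
    """Relative FPS difference between a slow-RAM run and a fast-RAM run.

    If the delta sits inside normal run-to-run noise, the bottleneck is
    probably somewhere else (like the GPU). The 5% noise band is an
    illustrative assumption, not a measured figure.
    """
    delta_pct = (fps_fast - fps_slow) / fps_slow * 100
    if abs(delta_pct) <= noise_pct:
        return f"{delta_pct:+.1f}% -- within noise, likely bottlenecked elsewhere (e.g. GPU)"
    return f"{delta_pct:+.1f}% -- RAM speed is making a measurable difference"

# Made-up example numbers, not our benchmark data:
print(ram_speed_delta(61.0, 62.5))  # tiny gap -> probably GPU bound
print(ram_speed_delta(61.0, 70.0))  # clear gap -> RAM speed matters here
```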
I need to redo these tests with reasonable settings, so I pulled those old parts out from the archives and built a similar-enough system to rerun Metro: Last Light. The numbers won't quite line up since we don't have our 660 Ti anymore, but we can see how it does with this factory-overclocked 660. At the original benchmark settings, we're once again looking at very little difference between the RAM speeds.
Technically, our 2666 kit runs over five percent faster in minimum frame rates, but that translates into less than a whole frame per second in the real world, so, yeah. If we turn off anti-aliasing, we're still seeing a similar result despite the lesser load on the GPU. It's when we switch over to more modest in-game settings that we see almost no difference across the board. What? Okay, then. I expected that the higher our frame rates, the more RAM speed would matter. I guess the result makes some sense, seeing as the higher the settings, the more data needs to pass from the CPU to the GPU. Maybe if I run a notoriously CPU-bound title at esports settings, we'll get more of an impact from RAM speed.
Kinda. Minimum frame rates creep up as we go up in RAM speed, but it's not by a lot, and average FPS doesn't change by much. Cinebench R15 also doesn't show much of a difference in CPU rendering, although we depart from our original review here with mostly even jumps as we go up the stack in OpenGL. Did I just accidentally prove our old video right? Well, not so fast.
Let's hear Jake's theory: games have gotten a lot bigger over the last nine years, and because of that, a ton of data has to go to the GPU very quickly. Top that off with more intense physics calculations and heavier operating systems with more going on in the background, and you've got yourself a RAM bottleneck. So my theory, or at least part of it, is that RAM speed just didn't matter as much back then, and what changed is actually the games. Which is a good point: we saw this in reverse when we dropped the settings down in Metro and got less disparity between our RAM kits.
Cinebench OpenGL gave us another clue, which is in its CPU utilization: it seems like three to four threads are in use at any given time. So larger, more modern games with bigger assets that use more threads should show a greater disparity between our RAM kits. So, let's test Jake's theory. The most modern title we have that I can easily run on Windows 7 is Shadow of the Tomb Raider, which is both pretty large and uses multiple threads.
And, well, slow RAM hurts a bit at 1080p Ultra, but there's no tangible difference going from 1600 to 2666, despite the immense cost difference when it was new. Cutting the detail level to low narrows the gap even further, which tracks with what we saw in Metro: bigger assets need faster RAM. Or a bigger bag. Hey, backorders for the backpack are up, and so are the reviews; go check them out. You can fit so many assets in there. So Jake's onto something: assets are a factor. But now we had to ask, is that all there is? Let's hear Linus's thoughts. Since we made that video, GPUs have gotten ridiculously powerful. Even just three years later, we went from a GTX 660 with two gigs of RAM and an anemic processor to the 1060, a mid-range GPU that had up to six gigabytes of VRAM, not to mention that it was much more powerful.
So my theory is that as GPUs have gotten more and more powerful, they've also become more and more demanding on the rest of your system, whether we're talking about PCI Express slot bandwidth or, you got it, system memory bandwidth. This is probably the most obvious difference between our old test setup and today's. So, let's test that. I slotted an RTX 2080 Ti into the old bench in place of the GTX 660, and oh boy,
does it ever make a difference. Not just that it's so much faster than the 660, but we finally see scaling between our memory speeds. The difference in Metro is at minimum about five percent going from 1600 to 2666, and the absolute chasm between 800 and 1600 is staggering. Shadow of the Tomb Raider similarly slams the slow memory and gives the fast stuff a roughly 10 to 12 percent advantage over the typical frequency back in the day. CS:GO, despite being notoriously CPU-bound, is ironically less sensitive here, perhaps due to the settings or those old assets, with the biggest difference being in average frame rate. So Linus was onto something here, but there's a twist: Cinebench again doesn't seem to care until we boot up the OpenGL test, where we get similar scaling to the GTX 660. That test must be CPU-limited, since we've removed all the other variables, which is where Alex's theory comes into play. Since 2013, CPUs have gone bonkers fast. We've gone from dual cores being fine and quad cores being high-end, when they couldn't even break four gigahertz; now quad cores are low-end and we're easily pushing past five gigahertz on eight-core processors.
Crazy. Plus, CPUs have gotten way faster in terms of IPC, and they have way bigger caches. Keeping that many cores fed means you need faster RAM. So my theory is that in 2013 we simply didn't have enough cores and general CPU speed for RAM speed to really make a difference.
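To put rough numbers on "keeping that many cores fed", here's the textbook peak-bandwidth math. This is a sketch under simplified assumptions: dual-channel memory, 64 bits per channel, and representative core counts I've picked for illustration; real sustained bandwidth sits below these ceilings.

```python
def peak_bandwidth_gbs(transfer_rate_mts: float, channels: int = 2, bus_width_bits: int = 64) -> float:
    """Theoretical peak DRAM bandwidth in GB/s.

    bandwidth = transfers per second * bytes per transfer per channel * channels.
    Assumes 64 bits per channel; real sustained bandwidth is lower.
    """
    bytes_per_transfer = bus_width_bits / 8
    return transfer_rate_mts * 1e6 * bytes_per_transfer * channels / 1e9

# Illustrative configurations, not our exact bench hardware:
for label, rate, cores in [("DDR3-1600, quad core", 1600, 4),
                           ("DDR3-2666, quad core", 2666, 4),
                           ("DDR5-6000, eight core", 6000, 8)]:
    bw = peak_bandwidth_gbs(rate)
    print(f"{label}: {bw:5.1f} GB/s total, {bw / cores:4.1f} GB/s per core")
```

Notice that even with twice the cores, the modern system only stays ahead per core because the memory got nearly four times faster, which is the crux of Alex's theory.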
Another good point, and to illustrate it, I've taken our modern Core i9-12900K and tweaked its DDR5 to be as slow as possible. Yes, that says DDR5-1600. It benchmarks close to, but is still a little bit faster than, the DDR3-1600 on our old bench.
But with about 2.5 times the latency, it's as close to apples-to-apples as we'll get, and we'll have the bench's parts linked down below. With the RTX 2080 Ti, the difference RAM speed makes is material in Metro, but it's not as major as you might think, at less than 10 percent going from DDR3 to DDR5, and that carries over to CS:GO too. And again, there's virtually no difference in Cinebench whatsoever.
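For the curious, that "about 2.5 times the latency" figure drops out of the standard first-word latency formula: CAS cycles times the cycle time, where the memory clock runs at half the transfer rate. The CAS latencies below are illustrative picks of mine, not the exact timings on our bench.

```python
def first_word_latency_ns(transfer_rate_mts: float, cas_cycles: float) -> float:
    """First-word latency in nanoseconds.

    DDR transfers twice per clock, so one CAS cycle lasts 2000 / (MT/s) ns.
    """
    return cas_cycles * 2000 / transfer_rate_mts

# Illustrative timings, not the exact kits on the bench:
ddr3 = first_word_latency_ns(1600, 9)    # DDR3-1600 CL9 -> 11.25 ns
ddr5 = first_word_latency_ns(1600, 22)   # DDR5 forced to 1600 MT/s at a loose CL22 -> 27.5 ns
print(f"DDR3-1600 CL9:  {ddr3:.2f} ns")
print(f"DDR5-1600 CL22: {ddr5:.2f} ns (~{ddr5 / ddr3:.1f}x the DDR3 latency)")
```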
We need something more modern to really see the difference here, so I fired up F1 2021 and Far Cry 6, with and without HD textures, and yeah, the difference between 1600 and 6000 is incredible, especially in Far Cry 6, where we're looking at upwards of 50 percent. But something magical happens when we look at the numbers for 4800 megatransfers per second RAM versus 6000. Look at that.
Does it remind you of something? Yeah, it's more or less in that zero-to-five-percent range, just like our old bench tests. This time the 2080 Ti is the bottleneck. There's always something, isn't there? For the sake of completeness, when we pair the GTX 660 with the modern system, it's predictably GPU-bound in every scenario except CS:GO, where the extra RAM speed did help pretty significantly in both minimum and average frame rates. With all the testing done, it's time to answer the question: what changed? Our old testing methodology didn't actually end up playing a major role. In fact, it turns out that, anti-aliasing aside, running games at higher quality is a good benchmark for testing RAM speed. Software, on the other hand, did in fact change things, sometimes dramatically, but only to a point. If your CPU and GPU are being properly fed and fully utilized,
it becomes a question of how fast that hardware is rather than how fast the RAM is. And if they aren't being fully utilized, then faster RAM will nudge them closer to peak performance. That means that faster memory today may or may not significantly improve performance depending on the application and your hardware, but later on down the road, when you upgrade your GPU, you'll be bottlenecked far less if you have faster RAM. In other words, our old video wasn't technically wrong, but there's far more to the story than we understood back then, and almost certainly more that we didn't uncover today. What we did learn today was that, compared to CPUs, GPUs are more sensitive to changes in RAM speed, or at least the RTX 2080 Ti is more sensitive to RAM speed than our 12900K is. Our test
suite was limited, so Radeon or Ryzen may scale differently. The big takeaway is this: it's that combination of software complexity, more available bandwidth, and more data-hungry GPU cores that makes memory speed matter. And it does matter, and so does our sponsor, iFixit. You break it, I fix it... not me, of course, but iFixit.
The Moray and Minnow kits are the toolkits for the tinkerer on the go. The pocket-sized Minnow driver kit is only $14.99, with an easy-to-open magnetized case, a built-in sorting tray, 16 different bits, and a handle with a built-in SIM-eject tool. Pretty fancy. For something slightly bigger and longer, the Moray driver kit is only $19.99 and comes with 32 different bits with extended-reach necks for digging into those hard-to-reach nooks and crannies. And all iFixit kits come with a lifetime warranty as well, so you're sure to end up in a landfill somewhere before your iFixit kit does. So check out our links in the description to get yours today. Thanks for watching, guys. Instead of throwing you to that old video, how about checking out our much more recent video exploring whether 8 gigs of RAM is still enough in 2022? The answer might surprise you.