Hi, this is Wayne again, with today's topic: "Can You Have TOO Many CPU Cores?"
Whether we're talking dollars in your bank account, items on a seafood buffet, or dates you've got lined up on Tinder, more is generally considered to be better. That sentiment also seems to hold true for the number of cores in your computer's CPU, at least if you buy into the marketing. But hold on: even though having many cores definitely gives you a boost in multi-threaded applications like rendering 3D animations, there are actually situations where more cores give no benefit whatsoever, or can even hurt your system's performance.
But how could this be? Well, to start off with, the more cores you pack onto a CPU, the more power they need and the more heat they generate. And remember that because CPU cores are crammed into a relatively small space, manufacturers end up working against some serious limits when it comes to thermal design power, or TDP. This means that to prevent the CPU from drawing too much power and producing too much heat, the individual cores have traditionally run at lower clock frequencies to improve efficiency. And even if the advertised boost clock for a CPU with lots and lots of cores appears to be high, it's often the case that it can't maintain those clocks for long periods of time, or that it only hits them when you're running very light applications. So if you're using your computer mostly for applications where single-threaded performance matters more, such as games, that super-expensive 18-core CPU might actually yield a worse experience than something cheaper.

And if you go with a really high core count CPU, there's another wrinkle with how processors with that many cores access the system memory. You see, in some cases these larger CPUs need to have their cores split into two groups, or nodes, of cores, with each group getting its own memory controller and segment of the physical memory, in a scheme called non-uniform memory access, or NUMA. This is generally quicker than the opposite solution, called uniform memory access, or UMA, where all the cores share one big pool of memory. But here's the thing: a CPU that uses NUMA, which is better for latency-sensitive applications, can often struggle when running a single program that uses tons of threads, because of the different memory access times between the nodes and the fact that each node would have to wait on the other one to finish working on the same data. Highly multi-threaded programs like these often don't want to cross nodes, even if it would mean being able to take advantage of the entire CPU.

So back to UMA, then, right? No, because one controller manages all the memory accesses to give every program equal time, rather than allowing more direct access to the memory the way NUMA does, UMA has a built-in performance penalty that increases the more cores your system has to manage. So using a CPU with separate groups of cores means you're going to be subjected to one of these drawbacks, and you're going to take a performance hit either way. These are problems that you simply don't run into on smaller chips with fewer cores, because you're not dealing with multiple nodes.

But getting away from memory access, sometimes the cores themselves are even designed in a way that bottlenecks them the more of them
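As a hands-on aside (a Linux-only sketch, not something from the video): latency-sensitive applications on NUMA systems are often pinned to the cores of a single node so their memory stays local and they never pay the cross-node penalty. The real node-to-core mapping lives under `/sys/devices/system/node/`; here we just pin to CPU 0 as a stand-in for "node 0's cores".

```python
# Hedged sketch: restrict a process to a chosen set of CPUs so the OS
# scheduler never migrates it to cores on another NUMA node.
import os

def pin_to_cpus(cpus):
    """Restrict the calling process to the given set of CPU indices."""
    os.sched_setaffinity(0, cpus)   # pid 0 = the current process
    return os.sched_getaffinity(0)  # read back the effective CPU set

# Pin to CPU 0 (every machine has one) as a stand-in for one node's cores.
allowed = pin_to_cpus({0})
print(allowed)  # {0}
```

Tools like `numactl` do the same thing (plus memory binding) from the command line, which is why server admins reach for them when a workload is NUMA-sensitive.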
you slap onto a chip. Do you remember how, before Ryzen, AMD processors seemed to be significantly slower than Intel's, despite having more cores? Well, a big reason for this was that those old Bulldozer FX processors didn't use full cores. Instead, an FX CPU advertised as having eight cores would in reality have eight integer units but only four floating-point units, shared between the eight cores. If you don't know what a floating-point unit is, you can learn more about that right up here. But the point is, you could think of these CPUs as having four half-cores, which severely hampered their single-threaded performance in some key applications.
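For a quick sense of the distinction (a conceptual illustration, not from the video, and a simplification of how a real pipeline dispatches work): integer units handle whole-number arithmetic, while the floating-point unit handles fractional math. On Bulldozer, two integer cores in a module shared one FPU, so two float-heavy threads would contend for it.

```python
# Conceptual split between the two kinds of arithmetic a core executes.
int_work = 7 * 6          # whole-number math: the job of an integer unit
fpu_work = 7.5 * 6.25     # fractional math: the job of the FPU
print(int_work, fpu_work)  # 42 46.875
```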
Now, this design allowed AMD processors to handle more threads for a cheaper price, but it also meant that their real-world performance lagged way behind Intel's, and the only way AMD could try and compensate was to increase clock speeds, which increased heat output and contributed to AMD's reputation for hot-running CPUs for many years. So what's our bottom line, then? Although both AMD and Intel are using much wiser strategies for their many-core CPUs, and clever boosting techniques to give them similar single-threaded performance to their less costly brethren, if the best sales pitch for a super-premium product is that it doesn't suffer a performance penalty in the applications that you use, well, you'd better make sure you've got a use case for it before spending your hard-earned cash. And no, playing Fortnite at night and watching Techquickie definitely don't count, and neither does using Brilliant, because you can kind of do it anywhere.
Effective learning, guys, is active, not passive, so learning from lectures and videos is just not as fast as diving in and doing things yourself. Brilliant is a problem-solving-based website and app with a hands-on approach, with over 60 interactive courses in math, science, and computer science. Brilliant lets you master concepts by solving fun, challenging problems for yourself rather than by watching someone else do it, and this is really useful if you're in a STEM course. Their courses have storytelling, code-writing, interactive challenges, and problems to solve, and you can check out great courses like the one on search engines, so you can learn how it is that Google answers a question in a fraction of a second, even though there are so many billions of sites to search through. So check it out now, because the first 200 of you to head to brilliant.org/techquickie (we're gonna have that linked below) are gonna get 20% off their annual premium subscription. So, thanks for watching, guys! Like, dislike, check out our other videos, and don't forget to subscribe. That wouldn't be brilliant, would it?