The BIGGEST CPU Ever! – Waferscale Explained

Hi, this is Wayne again with the topic “The BIGGEST CPU Ever! – Waferscale Explained”.
Although transistors still keep on shrinking, it’s getting more and more difficult to pack as many of them onto a chip as we’d like. Partial solutions to this, such as using chiplets to reduce the amount of silicon wasted by manufacturing defects and stacking transistors on top of each other, have been in vogue for a while now, but it might not be too surprising that some manufacturers have decided to simply make the chips themselves bigger. When in doubt, supersize. Now, I’m not saying that your next computer might have a CPU so big it’ll take up half the motherboard, but when you get away from personal computers and start looking at chips that we might see in data centers in the near future, you start seeing some pretty eye-watering stuff. We’re talking about designs like the Wafer Scale Engine 2 from Cerebras, currently the largest chip in the world. It’s built on a seven nanometer process, contains 850,000 cores, and measures a whopping 21.5 centimeters, or 8.5 inches, across. That’s more total area than 25 Ryzen desktop CPUs.
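As a rough sanity check on that area comparison, here’s a minimal back-of-envelope sketch in Python; the 40 mm AM4 package dimension is our own assumption for a “Ryzen desktop CPU”, not a figure from the video.

```python
# Back-of-envelope area comparison (all figures approximate).
wse2_side_cm = 21.5                        # WSE-2 is roughly square, ~21.5 cm per side
wse2_area_mm2 = (wse2_side_cm * 10) ** 2   # ~46,225 mm^2

ryzen_package_mm = 40                      # assumed AM4 package is ~40 mm x 40 mm
ryzen_area_mm2 = ryzen_package_mm ** 2     # ~1,600 mm^2

print(wse2_area_mm2 / ryzen_area_mm2)      # ~28.9, i.e. more area than 25 Ryzen packages
```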

Perhaps unsurprisingly, a chip this big, with this many transistors (2.6 trillion, to be exact), requires a lot of power. The Wafer Scale Engine 2 sucks down 15 kilowatts, so if you were somehow able to drop this into your PC, you’d need 15 1,000-watt power supplies just to keep it fed, and that’s not even counting the rest of the system. But despite this, the new design should actually result in power savings. You see, data centers and supercomputers that do artificial intelligence processing often have to use lots of separate chips, such as GPUs, spread across a large facility. Having the same amount of computing power on just one physical chip is far more power efficient, even if the power consumption rating of that chip is a lot higher than a typical GPU’s. But there are other advantages to this approach besides just saving energy. You might be wondering why we aren’t simply sticking a bunch of chiplets onto one package instead to make something like a really big version of an AMD EPYC processor, and we’ll tell you why right after we thank iFixit for sponsoring this video. iFixit wants to help you fix all of your devices so you never have to pay for a costly replacement again. From your Xbox to your toothbrush, iFixit has parts and guides for almost any device you can think of.
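Before we get back to chiplets, here’s that power math as a quick sketch; the 400-watt per-GPU figure is an assumed round number for comparison, not something quoted here.

```python
# How many 1000 W power supplies would a 15 kW chip need, ignoring efficiency losses?
wse2_power_w = 15_000
psu_capacity_w = 1_000
print(wse2_power_w / psu_capacity_w)        # 15 supplies, before counting the rest of the system

# Compared against a hypothetical 400 W data-center GPU, the same budget covers:
assumed_gpu_power_w = 400                   # assumption, not from the video
print(wse2_power_w / assumed_gpu_power_w)   # ~37 GPUs' worth of power in one package
```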

They have over 70,000 step-by-step guides with photos to make it easy. Check out ifixit.com/techquickie to pick up a repair kit and join the right to repair movement today. So, as versatile as chiplets have been, they still suffer from having more latency than one big monolithic processor. The little interconnects that move data between chiplets, as quick as they may be (and they are fast), are still slower than if you physically put computing units directly adjacent to each other to form one big chip. Ultimately, this means that huge monolithic chips can process more data than a system with the same number of transistors spread out among multiple chips, and when you consider just how much data has to be processed for AI applications and scientific research, it makes a difference.
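To illustrate the shape of that trade-off, here’s a toy model of moving a chunk of data over an on-die link versus a chiplet interconnect; every bandwidth and latency number in it is a made-up placeholder chosen only to show how per-hop latency and link speed stack up, not a measured figure for any real product.

```python
def transfer_time_us(bytes_moved, bandwidth_gb_s, latency_ns, hops):
    """Very rough model: per-hop latency plus serialization time at the link bandwidth."""
    serialization_us = bytes_moved / (bandwidth_gb_s * 1e3)   # GB/s -> bytes per microsecond
    return hops * (latency_ns / 1e3) + serialization_us

chunk = 1_000_000  # move 1 MB between two compute units

# Hypothetical numbers purely for illustration:
on_die  = transfer_time_us(chunk, bandwidth_gb_s=2000, latency_ns=5,   hops=1)
chiplet = transfer_time_us(chunk, bandwidth_gb_s=400,  latency_ns=100, hops=1)

print(f"on-die:  {on_die:.2f} us")   # the monolithic case
print(f"chiplet: {chiplet:.2f} us")  # the die-to-die hop is several times slower here
```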

Wafer scale technology has already drawn interest from diverse industries, including national intelligence and healthcare, but although it has some obvious advantages, that doesn’t necessarily mean it’s the silver bullet for large-scale compute challenges. For instance, one big issue is the fact that these processors are designed to handle lots of data, so they also need access to a lot of memory, and designs with chiplets and larger amounts of memory built on the same package may end up being more popular. This is similar to what Tesla has done with their new D1 chip, which was developed in part to help propel Tesla’s self-driving AI technology. The D1 chip itself is much smaller than the Wafer Scale Engine, but Tesla has included over 11 gigabytes of high-speed SRAM in an arrangement of 25 D1 chips connected together to make a training tile that’s bigger than your head (see the quick per-chip sketch below). And of course, making smaller chips reduces the amount of silicon you’ll waste due to manufacturing errors, as we mentioned earlier. But regardless of whether a particular company is using wafer scale or an arrangement more like Tesla’s, putting a large amount of silicon on one plane may end up becoming an industry trend, because if America has taught us anything, it’s that there’s a deep human need to supersize. If you feel the need, like the video or dislike the video, check out our other videos, and comment below with video suggestions.
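Here’s that per-chip math, a minimal sketch assuming the quoted 11 gigabytes is spread evenly across the 25 D1 chips in a training tile.

```python
# Approximate per-chip SRAM if a training tile's ~11 GB is split evenly over its 25 D1 chips.
tile_sram_gb = 11
d1_chips_per_tile = 25
sram_per_chip_mb = tile_sram_gb / d1_chips_per_tile * 1024
print(f"{sram_per_chip_mb:.0f} MB of SRAM per D1 chip")  # roughly 450 MB each
```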

We make videos here; get your videos here at Techquickie. Don’t forget to subscribe and follow. See you later.