Hi, this is Wayne again with a topic “Intel’s Lunar Lake AI Chip Event: Everything Revealed in 10 Minutes”.
As we launched Core Ultra with Meteor Lake, it also introduced this next generation of chiplet-based design, and Lunar Lake is the next step forward, and I’m happy to announce it today. Lunar Lake is a revolutionary design. It has new IP blocks for the CPU, GPU, and NPU, and it’ll power the largest number of next-gen AI PCs in the industry. We already have over 80 designs with 20 OEMs that will start shipping in volume in Q3. You know, first, it starts with a great CPU, and with that, this is our next-generation Lion Cove processor, which has significant IPC improvements and delivers that performance while also delivering dramatic power-efficiency gains as well.
So it’s delivering Core Ultra performance at nearly half the power that we had in Meteor Lake, which was already a great chip. You know, the GPU is also a huge step forward. It’s based on our next-generation Xe2 IP, and it delivers 50% more graphics performance. Literally, we’ve taken a discrete graphics card and we’ve shoved it into this amazing chip called Lunar Lake. Alongside this, we’re delivering strong AI compute performance with our enhanced NPU, up to 48 TOPS of performance, and, as you heard Satya talk about, our collaboration with Microsoft and Copilot+, along with 300 other ISVs.
Incredible software support, more applications than anyone else. Now, some say that the NPU is the only thing that you need, and simply put, that’s not true. You know, now having engaged with hundreds of ISVs, most of them are taking advantage of CPU, GPU, and NPU performance. In fact, our new Xe2 GPU is an incredible on-device AI performance engine. Only 30% of the ISVs we’ve engaged with are using only the NPU; the GPU and the CPU in combination deliver extraordinary performance. The GPU delivers 67 TOPS with our XMX performance, 3.5x the gains over the prior generation. And since there’s been some talk about this other X Elite chip coming out and its superiority to x86, I just want to put that to bed right now. It ain’t true. You know, Lunar Lake, running in our labs today, outperforms the X Elite on the CPU, on the GPU, and on AI performance, delivering a stunning 120 TOPS of total platform performance, and it’s compatible.
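The platform-TOPS figures quoted on stage can be sanity-checked with some back-of-the-envelope arithmetic. The NPU (48 TOPS) and GPU (67 TOPS) numbers are from the talk; the CPU contribution is not stated, so it is inferred here as the remainder needed to reach the 120 TOPS platform total:

```python
# Back-of-the-envelope check of the stage numbers. The NPU and GPU
# figures are as stated in the talk; the CPU share is an assumption,
# inferred as the remainder of the stated 120 TOPS platform total.
npu_tops = 48                              # enhanced NPU, as stated
gpu_tops = 67                              # Xe2 GPU with XMX, as stated
platform_tops_stated = 120                 # total platform claim

cpu_tops = platform_tops_stated - (npu_tops + gpu_tops)  # implied remainder
platform_tops = npu_tops + gpu_tops + cpu_tops

print(f"Implied CPU share: {cpu_tops} TOPS")       # 5
print(f"Platform total:    {platform_tops} TOPS")  # 120
```

The implied ~5 TOPS CPU share is consistent with the talk's point that the GPU and NPU, not the CPU, carry most of the on-device AI compute.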
So you don’t have any of those compatibility issues. You know, this is x86 at its finest: every enterprise, every customer, every historical driver and capability simply works. This is a no-brainer; everyone should upgrade. And, you know, the final nail in the coffin of this discussion: some say x86 can’t win on power efficiency. Lunar Lake busts this myth as well. This radical new SoC architecture and design delivers unprecedented power efficiency, up to 40% lower SoC power than Meteor Lake, which was already very good. Customers are looking for high-performance, cost-effective GenAI training and inference solutions, and they’ve started to turn to alternatives like Gaudi. You know, they want choice.
They want open software and hardware solutions, and time-to-market solutions at dramatically lower TCOs, and that’s why we’re seeing customers like Naver, Airtel, Bosch, Infosys, and Seekr turning to Gaudi 2, and we’re putting these pieces together. We’re standardizing through the open-source community in the Linux Foundation; we’ve created the Open Platform for Enterprise AI to make Xeon and Gaudi a standardized AI solution for workloads like RAG. So let me start with maybe a quick medical query. Okay, so this is Xeon and Gaudi working together on a medical query. So it’s a lot of private, confidential, on-prem data being combined with an open-source LLM. Exactly, okay, very cool, all right. So, let’s see what our LLM has to say. So you can see, like a typical LLM, we’re getting, you know, the text answer here, standard, but it’s a multimodal LLM.
So we also have this great visual here of the chest X-ray. Okay, I’m not good at reading X-rays, so what does this say? I’m not great either, but, and I’m going to spare you my typing skills, I’m going to do a little cut-and-pasting here. The nice thing about this multimodal LLM is we can actually ask it questions to further illustrate what’s going on here. So this LLM is actually going to analyze this image and tell us a little bit more about this hazy opacity, such as it is. So you can see here, it’s saying it’s down here in the lower left. So once again, just a great example of a multimodal LLM. And as you see, you know, Gaudi is not just winning on price; it’s also delivering incredible TCO and incredible performance. And that performance is only getting better with Gaudi 3. The Gaudi architecture is the only MLPerf-benchmarked alternative to H100s for LLM training and inferencing, and Gaudi 3 only makes it stronger. You know, we’re projected to deliver 40% faster time-to-train than H100s, and 1.5x versus H200s, and, you know, faster inferencing than H100s, delivering 2.3x performance per dollar in throughput versus H100s. And in training, you know, Gaudi 3 is expected to deliver 2x the performance per dollar. You know, and this idea is simply music to our customers’ ears: spend less and get more. It’s highly scalable, uses open industry standards like Ethernet, which we’ll talk more about in a second, and we’re also supporting all of the expected open-source frameworks.
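The Xeon-plus-Gaudi demo above follows the standard RAG pattern: retrieve relevant passages from private on-prem data, then hand them to an open-source LLM as context alongside the user's question. A minimal sketch of that flow, with a toy word-overlap retriever and hypothetical stand-in documents (none of this is Intel's actual stack):

```python
# Minimal RAG sketch of the demo's flow. The documents, the naive
# retriever, and the prompt format are illustrative assumptions; a
# real deployment would use an embedding index and an actual LLM.
def retrieve(query, docs, k=1):
    """Rank docs by naive word overlap with the query (toy retriever)."""
    qwords = set(query.lower().split())
    scored = sorted(docs,
                    key=lambda d: len(qwords & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query, context):
    """Combine retrieved private context with the user query for the LLM."""
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

# Hypothetical private, on-prem records (stand-ins for the demo's data).
docs = [
    "Patient 0042: chest X-ray shows a hazy opacity in the lower left lung.",
    "Patient 0042: blood panel within normal ranges.",
]

query = "What does the chest X-ray show?"
context = "\n".join(retrieve(query, docs))
prompt = build_prompt(query, context)
print(prompt)  # this prompt would then be sent to the open-source LLM
```

The point the demo makes is the division of labor: retrieval and orchestration run on Xeon over confidential data, while the heavy multimodal inference runs on Gaudi.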
Like PyTorch and vLLM. You know, hundreds of thousands of models are now available on Hugging Face for Gaudi, and with our Developer Cloud you can experience Gaudi capabilities firsthand, easily accessible and readily available. But, of course, with this, the entire ecosystem is lining up behind Gaudi 3, and it’s my pleasure today to show you the wall of Gaudi 3. Today we’re launching Xeon 6 with E-cores, and we see this as an essential upgrade for the modern data center.
High core count, high density, exceptional performance per watt. You know, it’s also important to note that this is our first product on Intel 3, and Intel 3 is the third of our five nodes in four years, as we continue our march back to process technology competitiveness and leadership next year. I’d like you to fill this rack with the equivalent compute capability of 2nd Gen Xeon, using Xeon 6.
Okay, give me a minute or two, I’ll make it happen. Okay, get with it, come on, hop to it, buddy. And, you know, it’s important to think about the data centers. You know, every data center provider I know today is being crushed by how they upgrade, how they expand their footprint and their space, their flexibility, you know, for high-performance computing. They have more demands for AI in the data center, and having a processor with 144 cores, versus 28 cores for 2nd Gen, gives them the ability both to consolidate and to attack these new workloads, with performance and efficiency that was never seen before. So, Chuck, are you done? I’m done. I wanted a few more reps, but you said equivalent.
I even put in a little bit more. Okay, so I get it: that rack has become this. And what you just saw was E-cores delivering this distinct advantage for cloud-native and hyperscale workloads: 4.2x in media transcode, 2.6x performance per watt. And from a sustainability perspective, this is just game-changing, you know: a 3:1 rack consolidation over a four-year cycle. Just one 200-rack data center would save 80,000 megawatt hours of energy, and Xeon is everywhere. So imagine the benefits that this could have across the thousands and tens of thousands of data centers. In fact, if just 500 data centers were upgraded with what we just saw, this would power almost 1.4 million Taiwan households for a year, or take 3.7 million cars off the road
for a year, or power Taipei 101 for 500 years. And by the way, this will only get better. And, you know, if 144 cores is good, well, let’s put two of them together and let’s have 288 cores. So later this year, we’ll be bringing the second generation of our Xeon 6 with E-cores, a whopping 288 cores, and this will enable a stunning 6:1 consolidation ratio, a better claim than anything we’ve seen in the industry.
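The consolidation claims above can be lined up with some simple arithmetic. The core counts (28 for 2nd Gen, 144 for Xeon 6 E-core, 288 for the follow-on) and the 3:1 and 6:1 rack ratios are from the talk; note the rack ratios are stated rack-level claims, not a pure core-count division, so the two are computed separately here:

```python
# Hedged arithmetic behind the consolidation claims. Core counts and
# rack ratios are as stated in the talk; everything derived from them
# here is a simple illustration, not an Intel-published calculation.
gen2_cores = 28          # 2nd Gen Xeon, as stated
xeon6_cores = 144        # Xeon 6 with E-cores, as stated
xeon6_next_cores = 288   # second-generation Xeon 6 E-core part, as stated

core_ratio = xeon6_cores / gen2_cores
print(f"Per-socket core ratio: {core_ratio:.2f}x")  # ~5.14x

# Stated rack-level consolidation: 3:1 today, 6:1 with 288 cores.
racks_before = 200                     # the example 200-rack data center
racks_after_3to1 = racks_before // 3   # racks needed after 3:1 consolidation
racks_after_6to1 = racks_before // 6   # racks needed after 6:1 consolidation
print(f"200 racks -> {racks_after_3to1} (3:1) or {racks_after_6to1} (6:1)")
```

Doubling to 288 cores doubling the stated consolidation ratio (3:1 to 6:1) is consistent with the "put two of them together" framing of the second-generation part.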