Intel Went Off the Rails for This

Hi, this is Wayne again with the topic "Intel Went Off the Rails for This".
Speaking of spicy: Intel announces a photonic chip with scads of threads. Scads? Scads. That's a metric term, metric scads; it's a real unit. I don't believe you. It's like a "moment", you know, we use it colloquially, but it's actually a real thing. That one's actually true, though. Yep, yeah, scads is true. Look it up.

Is it actually? Yes. No, I'm not gonna do it. He's gonna do it. No, he has to now. Nope. Anyway: Intel unveiled Puma, a photonic chip based on a custom RISC architecture, with eight cores and 66 threads per core for a total of 528 threads. Modern x86 CPUs usually have only two threads per core, with some exceptions.

Um, such as Intel's Xeon Phi, which had four threads per core; IBM's POWER8, which had eight (which is not a modern x86 chip, but is a modern chip, yep); and Sun SPARC chips, which went up to 16 (also not x86, but yeah). It's just that we said modern x86. I totally got you, yeah; it's worded weird. The chip only uses 75 watts of power and has 32 optical I/O ports that each operate at 32 GB/s per direction. Wow, holy crap. That's a total of one terabyte per second of bandwidth.
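For reference, the quoted figures are internally consistent; here's a quick sketch of the arithmetic, with every number taken from the specs above:

```python
# Sanity-checking the Puma numbers quoted above.
cores = 8
threads_per_core = 66
print(cores * threads_per_core)  # 528 threads total

ports = 32
gbytes_per_sec_per_direction = 32  # per optical I/O port, as quoted
print(ports * gbytes_per_sec_per_direction)  # 1024 GB/s, i.e. about 1 TB/s
```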

There are optical interconnects between chips, so they can be connected together into eight-socket server sleds that in turn interconnect with each other. So, like, optical networking directly to the... I forget what tour we were on. I think Riley called this Sam Altman's wet dream, which I thought was hilarious. We were on a tour where we were looking at a configuration that had basically fiber-optic connectivity between, what were they, daughter boards or something? I can't remember, man, it's escaping me now, but it was super cool, and they were talking to us about the future of scalability in the data center: how, as they get fiber optics closer and closer to the actual dies, we're going to see the lines blur between what is a chip, what is many chips, and what is many servers full of many chips. A computing unit, yeah. The discussion question here is: will this ever make sense for the consumer/prosumer market? Honestly, "ever" is a really, really big word, but I don't see it coming anytime soon. I mean, soon? I don't think so. But ever? I actually do think so. Both Intel and AMD have completely abandoned even HEDT. You're running, you as in Dan, not the audience, you are running, like, LLM AI stuff at home. Yeah. So, you know, Dan isn't everyone in the world currently, but there are really cool applications for having your own version at home that could potentially learn off of you in the long run.

Maybe not so much right now, but in the long run. And also, it keeps it more private; there are benefits. Do you think you could see that becoming a thing? What, running them at home? Yeah, yeah, like, average people. We were really hoping that there was going to be this breakthrough technology which would basically bring data-center-level GPU performance to the home from consumer cards, but that ended up, yeah, not materializing. There's some more development in that space right now. When we talked about that, I hardcore called it, I remember that conversation, that it wasn't going to be possible. Oh yeah, by that point I knew it wasn't going to be possible, but there's some new stuff that's probably kind of exciting in that space too. So that's kind of not going to be happening, but the 4090s and these high-end cards that are consumer grade actually perform a lot better than some of the dedicated machine-learning cards

that Nvidia produces. The only downside is that you can't squish 16 of them together into one big GPU and pool the VRAM; they're not set up for that, and yeah, so that's a disadvantage. But it's really nice. We're kind of doing it. I'm doing it at home more as, like...

The hype has died down now, and so now we're learning. Now we're actually trying to build tools, trying to figure out how they function, trying to understand how they work with GPUs. Why are they limited to a single GPU? Why can't that be spread across multiple GPUs? Why is inference only able to happen in a single section of VRAM? Why can't that be segmented, or something like that? And learning how LLMs function: the different types of tunes and models and LoRAs and all the little settings that you don't get to play with when you're using the big-boy online LLMs. So this all, to me, feels like something that is not consumer-ready yet, but it's in that niche enthusiast space. Actually, it might get to consumers eventually. A lot of people have been producing really, really easy tools: basically, you go to one GitHub page, you download a one-click installer... you've already lost 90% of people. I mean, yeah, but that is way easier than building something from source, for sure. You know what I mean? Yeah, no doubt.
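On the "why is it limited to a single GPU" question: the enthusiast tooling can in fact shard a model across cards. A minimal sketch, assuming the Hugging Face transformers and accelerate libraries are installed; the model name is just an illustrative placeholder, not anything from the show:

```python
# Minimal sketch of multi-GPU LLM inference via layer sharding.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-2-7b-hf"  # hypothetical example model
tokenizer = AutoTokenizer.from_pretrained(model_name)

# device_map="auto" lets accelerate spread the layers across every visible GPU,
# so the weights no longer have to fit in a single card's VRAM.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,
    device_map="auto",
)

prompt = "Why is inference limited to one GPU?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The catch, as described above, is that this is layer-wise splitting, not one big pooled block of VRAM, which is why it still feels like enthusiast territory rather than a consumer product.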

Somebody who's a little bit computer savvy can do it. I mean, the entry point for everybody is just these online websites, right? But then you're just playing with them; you're not, like, tinkering. But it's pretty easy to get going, yeah, and they run on consumer cards. I think we could see this type of tech being in the home for sure. Hell yeah. Just definitely not now. And the nice thing is, if you're worried about privacy, like, I want to play with having a home-assistant AI in my house. Yeah. I don't want all the microphones in my house calling back to Google or Amazon or Microsoft every six seconds, so it runs in my bedroom. You know, that's the kind of space where I'm really seeing this shine. Faron Hardy says: "@Linus, it was the IBM tour where they had, like, eight CPUs sharing cache between chips." Yeah! Oh man, that was cool stuff. So, yeah.
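That privacy angle is straightforward in practice: a local model can be queried entirely over loopback, so nothing leaves the house. A minimal sketch, assuming a llama.cpp-style server is already running on localhost and exposing an OpenAI-compatible chat endpoint (the port, path, and payload shape here are assumptions, not anything from the show):

```python
# Minimal sketch of talking to a locally hosted LLM so no text leaves home.
# Assumes a llama.cpp-style server on localhost:8080 with an
# OpenAI-compatible /v1/chat/completions endpoint (an assumption here).
import json
import urllib.request

payload = {
    "messages": [
        {"role": "system", "content": "You are a private home assistant."},
        {"role": "user", "content": "Turn on the bedroom lights."},
    ],
    "max_tokens": 64,
}

req = urllib.request.Request(
    "http://localhost:8080/v1/chat/completions",  # loopback only
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    reply = json.load(resp)
print(reply["choices"][0]["message"]["content"])
```

Because the request never leaves 127.0.0.1, the microphone-to-model path never touches Google, Amazon, or Microsoft, which is the whole appeal being described.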

If you guys haven't, check out the IBM Z tour that we did. Man, IBM reached out recently. They want to work together again, because they were thrilled with how that video went. And honestly, I was thrilled with how that video went too, but I'm a little bit worried that so much of what we were able to say the first time around is going to be redundant the second time around, because Z moves slowly, and that's kind of the point, right? They're cutting edge, but in sort of a lumbering, big-iron, icebreaker-style kind of way: getting into unfound territory, but chugging, yeah. And so I worry that if I were to go do it again, I'd be like, "What is Z for? The same thing it was for last time, but this one's even more wild and better." But then, maybe I'm underestimating them. Maybe I'm gonna show up and they're going to be like, "Oh yeah, but what about this?"

So, who knows? I was just blown away by the kind of cool stuff they had, everything from their custom memory modules to those optical interconnects and stuff.

It's super cool, yeah.