Nvidia UNLAUNCHES Their RTX 4080 12GB

Hi, this is Wayne again with the topic “Nvidia UNLAUNCHES Their RTX 4080 12GB”.
Why don’t we jump right into the big topic today: Nvidia’s unlaunch of the RTX 4080 12GB. Amazing. First of all, for context, guys, Nvidia announced the RTX 4090, the 4080 with 16 gigs of RAM, and the 4080 with 12 gigs of RAM at the same time, but they were, as they have done in previous generations, planning to stagger out the actual availability of these cards, as well as any specifics when it comes to performance as measured by third-party reviewers. So what that means is that this card was announced, but not necessarily actually launched. And the reason there’s been a controversy, and the reason Nvidia has come out and said “we’re not going to go ahead and launch this,” was that community members felt that the naming for the card was very misleading, and honestly, I feel exactly the same way.

This is exactly what you and I talked about back when Nvidia launched the GTX 680, so that was about ten years ago. The reason we felt that way about it was that the GTX 680 did not feature a top-tier die. It didn’t feature a big-die version of Kepler; it actually featured a sized-down die compared to the previous-generation Fermi flagship, the 580. So we were looking at it going, hey, we couldn’t help noticing this is more of a 70-class card, what’s the deal with that? Because Nvidia has traditionally, at least until quite recently, reserved their eight number, whether we’re going all the way back to, you know, the 8800 GTX; I mean, even the 8800 GTS, I believe, used the G80 die, if I recall correctly. So typically they reserved the number eight for their top-spec chip, and it wasn’t until the 680, as far as I remember anyway (I could be a little fuzzy on the details), that they went and took a tiered-down chip and were like, yeah, you know what, we actually managed to improve performance so much generation over generation that we’re going to take a second-tier chip and brand it as a flagship. So I do object to...

I do object to any naming scheme that seems to be designed to intentionally mislead the consumer. Now that I think about it, we probably should have taken AMD to task for skipping a generation, going from 5000 to 7000 with Ryzen. I find that sort of behavior extremely frustrating and unnecessary when it could be as easy as: hey, the first number is the generation of the product. Now, I think the justification for that is that there was a Zen 3+ 6000-series refresh on mobile, or something mobile.

Nvidia does the same thing; they launch one at just mobile, or, I think, one time it was like just Best Buy or something. Yeah. Well, there was one they completely ignored. I believe it was the GTX 300 series.

They went from 200 to 400 because... I don’t know, except there was, as you said, like an OEM-only card, yeah, that you could get in like a Lenovo pre-built or something, but it was actually a 200-series part that was rebranded, so there was no reason for it to exist at all. And yeah, people are pointing out in the chat that they actually skipped twice, because they also went from Ryzen 3000 to 5000. So they’re just like, yeah, we’re advancing really fast here, you know, we’re really pumping out those new generations. All you’re really doing is racing to a point where you’re going to have to come up with another naming scheme, and that’s going to be annoying. At least do what Intel does and, you know, launch that non-generation in between. Okay, okay, that’s not entirely fair. That’s not entirely fair.

So the point is that I object to any naming scheme that seems to intentionally obfuscate what the product actually is, and when you call this card, this 4080 12 gig, when you call it a 4080 (and I mean, in the last generation they did use a top-tier chip for their 80 card), the implication to me, the consumer, is that this is a top-spec piece of silicon, and the only difference is the RAM capacity.

But in actuality, Nvidia’s own first-party benchmarks showed the 12 gig version underperforming the 16 gig version by up to 30 percent. So the reality is that this is realistically a 4060 Ti; that’s what it actually is, if we were to look at all the different tiers of silicon that Nvidia produces and say, okay, 80 is the top, 70 is often actually the top die but cut down, or sometimes the next one down, and at 60 and 60 Ti things start to get a little bit murky.

In fact, there have been situations where they’ve actually used different dies for a 60 or 60 Ti class product, where, like, it’s a big one that’s cut way down, or a smaller one that has all of its features functional. So, if we’re being realistic, this is like a 60 Ti class product at best, and maybe, with the new 90 making room for it, okay, maybe it’s a 4070. But I think it’s pretty clear that this was misleading. I think a lot of members of the community felt that it was misleading, and we’ve got to at least give Nvidia credit, I guess, for once again, unlike some other mega-corporations, at least having some shame. When we took them to task over their treatment of Hardware Unboxed, did they or did they not reverse course? They did. Okay, sure. When HardOCP took them to town... hold on, hold on. Damage control. When HardOCP took them to task for, what was it, Project Green Light? I don’t remember exactly what it was, but I remember this happening. Yeah, I believe it was the Green Light program, where they clamped down on overclocking partner cards, which probably is, well, not probably, it was one of the considerations for EVGA dropping out of the GPU market. How are you supposed to differentiate your product if you can’t even differentiate it, because Nvidia says no? So, like, the boring dad program is more like it, actually. And they didn’t... no, they didn’t really back down on that.

Okay, fine, so they backed down on one thing, and now a second thing. Yeah, it’s just damage control. I don’t think it’s shame. Well, I mean, that does demonstrate a little bit of shame.

We’ve certainly seen other companies... I think it’s financially attached. Like, I don’t... that’s what I’m saying with damage control: it could have some financial shame to it. Okay, okay, so here, for example: Apple. I don’t think they’re going, “This was wrong.

I feel shame, I’ll take the hit on this and fix it.” I think they’re going, “This was a bad move by us, which financially will not be a good thing for us.

Let us adjust this so that we make the more monies.” I don’t even know if there would have been a financial impact, because, at the end of the day, the consumers who actually look at benchmarks and watch or read reviews are going to know how things stack up. The FPS per dollar is what it ultimately comes down to when you do that, plus some features, right. And for the people who don’t do that, they were just potentially going to buy a cheaper die and think it was a 4080.

I think... I don’t think there’s a financial angle. My assumption is that they would do that, and then people would rage, and the rage would turn into potential sales for the 5000 series, yeah, or the 6000 series, or, yeah, maybe the 7000. So maybe we’ll skip two numbers: 69,000 series, sure, whatever it might be. Let’s go, just do it, AMD. But yeah, I don’t know. I don’t think Nvidia feels shame, personally. Okay, all right, fair enough.

All right, I’ve watched them do many a thing that should result in shame, and I have watched them just not care at all. Yeah. I mean, they literally turned their apology into a marketing exercise, showing lineups of people waiting to buy 4090s. So, okay, I guess that really is more supportive of your position than of mine, but at least they unlaunched it. If they turn around and launch this thing as a 4070, I’m still going to be disappointed, because I still think that’s BS. I think it’s a 4060 Ti. I think it’s a 4060 Ti.

That’s what I think it is. I think that, fully functional... maybe it’s a... maybe it’s a 4070, if the 4080 die is like a 4070 Ti at some point, like if we get a Ti for a small price uplift halfway through the cycle. I mean, you’ve got to assume Nvidia is planning on that now. Oh yeah. I don’t think Nvidia intended to slow down their release of new GPU architectures from, like, every 12, well, maybe more like 15 to 18 months. Actually, they used to come even inside a year, like, way back in the day.

But I don’t think it was their intention to slow down from a yearly-ish cadence to, like, what, two to three years. Someone in the Floatplane chat said, what about the board partners, though? What about them? Nvidia, as far as I can tell, does not care. No. So, yeah. No, I think there are exactly zero Fs given about the board partners. Man, you know what we did not do a good job of spelling out? We did a video, uh...

Has it gone up yet? I actually don’t remember. Maybe it went up. Yeah, yeah, it went up. Yeah, no, it’s up, it’s up. We did a video on using an Arc GPU as a co-processor just for AV1 encoding, yeah, and that’s a pretty sweet video. But we were actually really focused on: if you had a last-gen Nvidia card, and you don’t need additional performance, and you don’t want to pay $900 for a 4060 Ti with the wrong label on it, right, that was kind of the angle for the video. But what we really should have done was focus on: hey, this is the one feature that just kind of kills AMD cards. Their encoder has just not been as good historically. And apparently it’s been improving; I know I keep saying that we need to take a closer look at it. Apparently it’s been improving.
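(For anyone curious how that co-processor setup actually looks in practice, here is a minimal sketch, not taken from the video: it hands AV1 encoding to an Intel Arc card through ffmpeg’s Quick Sync encoder while whatever GPU you game or render on stays uninvolved. It assumes an ffmpeg build with QSV and AV1 support and an Arc GPU in the system; the file names and bitrate are made up for illustration.)

```python
# A minimal sketch, assuming ffmpeg is installed with QSV/AV1 support and an
# Intel Arc GPU is present. File names and bitrate are illustrative only.
import subprocess

def encode_av1_on_arc(src: str, dst: str, bitrate: str = "8M") -> None:
    """Transcode `src` to AV1 using the Arc card's hardware encoder via QSV."""
    cmd = [
        "ffmpeg",
        "-init_hw_device", "qsv=hw",   # bring up the Quick Sync device (the Arc card)
        "-i", src,
        "-c:v", "av1_qsv",             # hardware AV1 encoder exposed through QSV
        "-b:v", bitrate,
        "-c:a", "copy",                # pass the audio through untouched
        dst,
    ]
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    encode_av1_on_arc("capture.mkv", "capture_av1.mkv")
```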

My understanding is it’s still not on par with NVENC, though. So maybe what we should have done is positioned it as: the solution to Nvidia is AMD plus Intel, which, if I had told you that two or three years ago, you’d have told me I was an idiot. Yeah, right. But it’d be cool. And with how much cheaper an equivalently performant Radeon card is right now, you know, like a 6800 or a 6700, man, those things absolutely rip. So if all you need is a good video encoder, with how much cheaper they are, that’s actually a totally viable alternative, man.

Someone in the Floatplane chat said maybe they’re trying to avoid lawsuits like with the GTX 970. I think that’s a completely different situation. Yeah, that’s a different situation. For those of you who are not up on that: basically, they had a version of the 970 where three and a half gigs of the four gigs of memory ran at full speed and the last half a gig ran at a reduced rate, and it was because of the memory bus width on the card, and functionally it made very little difference. Honestly, I felt that they should have disclosed it, for one thing, because they knew, but I felt that one was a bit of a nothingburger in terms of performance, because the reality of it was that the 970 was not such a performance card that you were going to reach the very limits of how much data you needed to put in its VRAM without running into other bottlenecks. Like, it was still...

It was... but I hear you, it was stupid, yeah. It was bad, yeah. But, um, yeah, apparently it was all 970s, not a version.

Oh, sorry, sorry, yeah. It was all 970s; I was mixing up my memory. Sorry. Yeah, so the 970, compared to the 980, which was the full die with the wider memory bus or whatever, had that, like, issue, or something like that, yeah.

So they knew, they knew, and it was bad, but it also didn’t really affect performance, except in some, like, very, very weird edge cases.