Cyberpunk 2077 — GTX 1070 @ 1440p — Will It Play?

Hello and welcome to Tech Deals. Cyberpunk 2077: everybody's favorite, well-optimized 2020 AAA game. Wait, that's not right. Scratch that, reverse it. In any case, this is the second of these videos I've done recently; if you guys continue to watch them, I will continue to make them. Last week I did the GTX 970 4 GB card at 1080p low detail. That one averaged 34 frames per second, with a 1% low of 23 frames per second.

Now, if you look at the real-time performance in the upper left-hand corner of the screen, you might say, "Are you sure you tested the right card here, Tech?" Yes, I did, because this is 1440p high detail, ray tracing off, of course, because it's a GTX, not an RTX card. This just demonstrates how massively faster the GTX 1070 is, how much the extra VRAM really makes the difference, and, frankly, how you can stretch a card to a point, but at some point you're really just asking it to do something it's not meant to do. I can already hear everybody right now: why didn't you test this at 1080p? Why didn't you match the previous card? Well, first of all, there's no reason to run 1080p low detail on a 1070. Yes, it would run very, very well, but it would be kind of ugly and, frankly, it's just unnecessary. I could run 1080p high detail, but then it would not be a one-to-one, apples-to-apples comparison of the 970 versus the 1070.

The 970 is non-functional at high detail. It's like five frames per second; completely, utterly unplayable. So you have to put it at low detail, and putting a 1070 at low detail is just, well...

Why do it? I guess it would be a one-to-one comparison, but the truth of the matter is there's no need.

We have enough VRAM, eight gigs of it, and we have enough performance to get decent, playable results at higher settings. Now, we are at 1440p, and yes, the argument can be made: why didn't I do 1080p? I'll tell you why: because I actually made this particular test to compare against the 2070 and the 3070, and of course those cards would be utterly ridiculous at 1080p.

I mean, you could, but the point is, I don't think very many people bought 2070s and 3070s (assuming you can even find a 3070 in stock) to play at 1080p. There are people who do that, and actually I don't think it's a bad idea, but I don't think most people are doing it. I think 1440p is really the target of the 70-series cards, at least for the average person, and that's why they're being tested here.

I only did the 970 at 1080p low detail because it was that or not include it at all; at high detail, or at 1440p, it was silly. So what's interesting here is that we are getting roughly the same performance at 1440p high on the 1070 as we got at 1080p low on the 970.

The 1070 was an amazing value for the money, and if any of you bought that card back in 2016, you got your money's worth. It's May as I'm recording this, so if you bought it at launch, you've had that card for almost five years, and you have gotten so much performance out of it. And here's what's truly mind-blowing: of course, there's nothing to replace it with at that MSRP unless you get lucky, but if you go on eBay right now, guess what GTX 1070s are currently selling for? That's right, Virginia, tell them what they've won: about 400-ish dollars. 1070s are currently selling for their launch MSRP, used, on eBay, almost five years later.

Okay, you're all going to yell at me when I say cryptocurrency mining, because that term has almost become unmentionable. It's become a dirty word amongst gamers. They're like, "No, don't talk about the crypto mining. I want to play games, man. I want to get a new card and play games."

I know, I know. I don't control the world; the world doesn't change because we're unhappy about this. It just is what it is. But yes, right now a GTX 1070 goes for its launch price on eBay as a used card, so you should have bought one five years ago. I am not going to show you the entire test run today, because otherwise we'd be here for 19 minutes and 7 seconds, which is how long this benchmark actually ran, and I don't think you want to hear me find something to prattle on about for 19 minutes. So I'll try to keep it to about half that.

What I would like to comment on is the real-time performance in the upper left-hand corner of the screen. It is not, in fact, 60 frames per second. However, thanks to our CPU, it is fairly smooth. Not all the time; there are definitely places where you'll notice the lack of performance. But because we have a modern, fast processor, the individual frames are not paced poorly.

If you look at the three real-time numbers, we've got a real-time performance of 26 frames per second right now, a real-time average of 31, and a real-time 1% low of 23; that's pretty close to where it'll end up, actually. The graph beneath those numbers is the frame time graph, meaning how much time passes between individual frames. It is one thing to have a 30 frames per second average; however, if that average comes about because you've got some frames at 15 and some at 45, bouncing back and forth between them, you have a grotesque, unplayable, terrifically awful mess. That particular scenario would be reflected in the 1% low: if you did bounce between 15 and 45, the 1% low would be closer to 15, and you'd go, "Wow, average is 30, 1% low is 15, awful performance." But when the 1% low is 24 and the average is 30, your first thought might be, "Wow, that's a very, very smooth 30," as the sketch below illustrates.
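If you're curious how an overlay turns raw frame times into those figures, here's a minimal Python sketch. The frame-time data is made up to mirror the two scenarios I just described, and the exact definition of the 1% low varies from tool to tool; this version averages the slowest one percent of frames, so treat it as an illustration, not as how any particular overlay actually computes it.

```python
import statistics

def fps_stats(frame_times_ms):
    """Return (average fps, 1% low fps) from per-frame times in milliseconds."""
    total_seconds = sum(frame_times_ms) / 1000.0
    average = len(frame_times_ms) / total_seconds
    # 1% low here: the mean fps of the slowest 1% of frames.
    slowest = sorted(frame_times_ms, reverse=True)
    worst = slowest[: max(1, len(slowest) // 100)]
    one_percent_low = statistics.mean(1000.0 / t for t in worst)
    return average, one_percent_low

# Smooth run: every frame takes ~33 ms, a steady 30 fps.
smooth = [33.3] * 1000

# Janky run: the same ~30 fps average, but one frame in four is a
# 66.7 ms (15 fps) hitch, with 22.2 ms (45 fps) frames in between.
janky = [66.7] * 250 + [22.2] * 750

for name, times in (("smooth", smooth), ("janky", janky)):
    avg, low = fps_stats(times)
    print(f"{name}: average {avg:.0f} fps, 1% low {low:.0f} fps")
# smooth: average 30 fps, 1% low 30 fps
# janky: average 30 fps, 1% low 15 fps
```

Same average, completely different experience, and it's the 1% low that gives the jank away.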

There are a couple of dips, but otherwise it's just fine, no problem, right? Well, yes and no. There's some truth to that, because it is playable. Do not misunderstand me: Cyberpunk 2077 at 1440p high detail, ray tracing off, of course, and no DLSS, because we don't have tensor cores, is playable at 30 frames per second. It wouldn't be my first choice, and I wouldn't recommend everybody strive for this, but it is playable and it does work. A 2070 would be a nice jump up from this and is definitely recommended; if you upgrade to a 2070, you'll be much happier at 1440p than on a 1070. The faster GDDR6 helps, the faster cores help. It's not going to be 100 frames per second, but it will definitely be better than this. Give this video a like and leave a comment down below if you'd like to see the 2070 at 1440p high detail compared to this in the next video.

And if you don't, what do you want to see? I will definitely be reading the comments. Here's something different that I've not done in the past: I am not recording these all in one session. I am recording them one at a time and actually reading your comments before I do the next one. So I did read your comments on the 970 video.

In the past, I used to just sit down and record, like, ten of them and schedule them out for ten weeks, but that prevents me from taking any feedback from you guys, seeing what you want to see, and changing up what I do the following week. So I'm just going to do these one at a time and get feedback from you, because, well, if you guys weren't watching them, there wouldn't be much point in making them, so I do appreciate that you're all here. Look at the driving; I've got to tell you, the visual image on the screen is one thing, controllability is another. How much input lag is there between when you press the W key and the car moves forward? How much input lag between when you press the left mouse button and your character fires? It's not that bad, actually. Now, the CPU usage is fairly low. It was higher in the city.

It was up around 28 to 30 percent in the city; it's down around 15 to 20 percent here in the country, because there's less to render out here. You don't need an i9-10900K at 30 frames per second, because there just aren't enough frames to be delivered. If you had a 2070 or a 3070, then actually it's not a terrible idea. Frankly, an i9-9900K or an i7-10700F would be perfectly fine, and even an i5-10400 would be okay: six cores,

twelve threads. I wouldn't recommend a four-core chip for this. It's fine if you've got an older graphics card like this and you're running at lower frame rates; a 4-core, 8-thread chip at 30 frames per second is really not that bad. But if you have a graphics card capable of doing 60 frames per second, you'll find four cores to be somewhat limiting. And that's a subject I probably don't cover enough: as you scale up your GPU, you need to scale up your CPU. It's not one-size-fits-all when it comes to which CPU you buy for smooth performance, because as the frame rate goes higher, you need more CPU power to prepare all those frames: the world geometry, detail, collisions, and so on. The world modeling takes more CPU power at higher frame rates, as the quick sketch below shows.
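As a back-of-the-envelope illustration (this is generic frame-budget arithmetic, not a profile of Cyberpunk 2077 itself), the time the CPU has to simulate the world and prepare each frame shrinks in direct proportion to the target frame rate:

```python
# Per-frame CPU budget at a few target frame rates. If the simulation
# and draw-call work for one frame doesn't fit in this window, the
# frame rate drops no matter how fast the GPU is.
for fps in (30, 60, 120):
    budget_ms = 1000.0 / fps
    print(f"{fps:>3} fps target -> {budget_ms:4.1f} ms per frame for the CPU")
# 30 fps target -> 33.3 ms per frame for the CPU
# 60 fps target -> 16.7 ms per frame for the CPU
# 120 fps target ->  8.3 ms per frame for the CPU
```

That's why a four-core chip that keeps up fine at 30 frames per second can become the bottleneck at 60: it has half the time to do the same per-frame work.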

Let me know what you think of this video in the comment section below. That's also where you should tell me which graphics card and/or resolution you want to see tested next, and if you want to see a different game, leave that down there as well. All the various links, to Twitter, to Twitch, to Discord, and all of that stuff, will be found in the video description down near the bottom. I look forward to reading what you all have to say. Thank you so much for watching; I will see all of you next time.