The SNEAKY Thing That Can Slow Down Your Games – Upscaling Lag Explained


Hi, this is Wayne again with the topic "The SNEAKY Thing That Can Slow Down Your Games – Upscaling Lag Explained".
There's one big compromise that gamers have had to make for a long time: do you want your games to look better, or do you want them to run faster? Typically, this has meant turning down your graphics settings to get more frames per second, especially if you don't have a high-end graphics card. But today we're instead going to talk about the input lag that gets introduced when you're trying to upscale a game. We consulted with our friend Amine Shaban over at Marseille to put this video together, so we'd like to thank him for his help. Now, to be clear, I am NOT talking about your GPU rendering frames from scratch. What I'm referring to instead is what happens after your GPU finishes rendering a frame, when either the GPU or your display resizes the image to make it fit a certain resolution.

You can see this if you're running a PC game below your monitor's native resolution to improve performance, or if you've hooked up an older console to a modern flat-panel TV. Now, there are different forms of upscaling, some of which look nicer than others, but they all require a certain amount of post-processing time, which can introduce noticeable input lag, meaning there's a delay between when you press a button, move a thumbstick, or move the mouse and the corresponding action appearing on screen. This can seriously hinder gameplay for obvious reasons, especially in older titles like classic platformers, where responsiveness is a huge part of making the game feel like you remember. But why does upscaling introduce so much lag? Well, to get the image looking as nice as possible, some algorithms look at the frames rendered before and after the frame to be upscaled to better understand what a high-res version of the same image is supposed to look like, and then apply what they think are the correct changes to the frame. This method of analyzing multiple frames, which are held in what's called the frame buffer before they're shown to the user, can definitely yield visual improvements.
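As a rough illustration of why that buffering costs you responsiveness (a toy model, not any vendor's actual pipeline), consider that a scaler which analyzes N future frames cannot display the current frame until those frames exist. The frame rate, lookahead depth, and processing time below are made-up example numbers:

```python
# Toy model: a multi-frame scaler that needs N future frames adds
# N frame-times of waiting, plus its own processing time, to input lag.
# All numbers here are illustrative, not measured from real hardware.

def added_lag_ms(fps: float, lookahead_frames: int, processing_ms: float) -> float:
    frame_time_ms = 1000.0 / fps          # time between frames at this fps
    return lookahead_frames * frame_time_ms + processing_ms

# At 60 fps, waiting on 2 future frames plus 5 ms of processing:
print(round(added_lag_ms(60, 2, 5.0), 1))  # → 38.3 (ms of extra input lag)
```

Nearly 40 ms on top of the game's own latency is easily enough to feel in a twitchy platformer, which is why single-frame approaches are attractive.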

But not only is this a computationally time-consuming process that adds lag, it can also result in worse image quality if the frames it's examining are highly compressed, for example if you're watching a movie. An alternative approach to reducing lag is, instead of relying on multiple frames at one time, to have the algorithm look at certain elements of a single frame that human brains are typically sensitive to. For example, Marseille's mClassic HDMI cable has a built-in library of objects, like edges and textures, that we naturally key in on. Think about how jaggies caused by bad anti-aliasing of edges are often really noticeable to us. Interestingly, characters' eyes are also a focus, as humans are psychologically programmed to be very sensitive to what someone else's eyes are doing. This strategy of focusing mostly on key visual elements can greatly reduce lag while improving visual quality, thanks to its reliance on predetermined visual cues for the algorithm to focus on, as well as the fact that it only examines one frame. But like other upsampling methods, it's not perfect. So can we do better?
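To make the single-frame idea concrete, here's a minimal grayscale sketch (my own illustration, and emphatically not how Marseille's hardware works): upscale every pixel cheaply, and separately build an edge mask marking where a scaler would spend its extra effort, since edges are what our eyes key in on:

```python
# Hedged sketch of single-frame, edge-focused upscaling on a grayscale
# image stored as a list of rows. Real scalers do this in dedicated
# hardware; this only shows the concept.

def upscale2x(img):
    """Nearest-neighbour 2x upscale: double every pixel and every row."""
    out = []
    for row in img:
        doubled = [p for p in row for _ in (0, 1)]
        out.append(doubled)
        out.append(list(doubled))
    return out

def edge_mask(img, threshold=32):
    """Flag pixels whose horizontal or vertical gradient exceeds threshold."""
    h, w = len(img), len(img[0])
    mask = [[False] * w for _ in range(h)]
    for y in range(h - 1):
        for x in range(w - 1):
            gx = abs(img[y][x + 1] - img[y][x])
            gy = abs(img[y + 1][x] - img[y][x])
            mask[y][x] = (gx + gy) > threshold
    return mask

img = [[0, 0, 100, 100],
       [0, 0, 100, 100]]
print(len(upscale2x(img)), len(upscale2x(img)[0]))  # → 4 8
print(edge_mask(img)[0])  # → [False, True, False, False] (the 0→100 edge)
```

Only the flagged pixels would then get the expensive treatment (better interpolation, anti-aliasing repair), which is one reason examining a single frame this way can stay fast.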

It turns out the answer is yes, though we might still be some years away from seeing it become widely available. Rather than programming a scaler to spot a few specific elements, computer scientists have been training artificial intelligences to recognize what more complex objects are supposed to look like. Accurately scaling an HD image to 4K, or even 8K, is a very computationally intensive problem.

So large amounts of AI training will reduce the reliance on predefined features and allow a scaler to recognize anything, from whether or not an object is a dog to how to handle scenes with complicated lighting. We're already seeing this to some extent with NVIDIA's Deep Learning Super Sampling, or DLSS, where a supercomputer is fed lots of frames from different games and figures out an algorithm that produces something close to an ideally anti-aliased image. These algorithms are then pushed out to individual users through software updates.
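As a loose analogy for this train-offline, ship-the-result idea (a toy of my own, not NVIDIA's actual DLSS method), the sketch below "trains" a one-parameter sharpening filter against ground-truth samples by picking the parameter with the lowest error, which could then be shipped unchanged to every user:

```python
# Toy analogue of offline training for upscaling: search for the
# post-processing parameter that best reconstructs ground-truth samples,
# then deploy that fixed parameter. All data below is invented.

def sharpen(signal, strength):
    """1D unsharp filter: pixel + strength * (pixel - neighbour average)."""
    out = []
    for i, p in enumerate(signal):
        left = signal[i - 1] if i > 0 else p
        right = signal[i + 1] if i < len(signal) - 1 else p
        out.append(p + strength * (p - (left + right) / 2))
    return out

def train(blurry_samples, sharp_samples, candidates):
    """Offline 'training': pick the strength with the lowest squared error."""
    def err(s):
        total = 0.0
        for blurry, sharp in zip(blurry_samples, sharp_samples):
            restored = sharpen(blurry, s)
            total += sum((a - b) ** 2 for a, b in zip(restored, sharp))
        return total
    return min(candidates, key=err)

sharp = [0.0, 0.0, 12.0, 0.0, 0.0]   # ground-truth "high quality" sample
blurry = [0.0, 3.0, 6.0, 3.0, 0.0]   # the same sample after a mild blur
print(train([blurry], [sharp], [0.0, 0.5, 1.0, 1.5, 2.0]))  # → 1.5
```

The expensive search happens once, offline; end users only run the cheap filter with the learned parameter, which is the same shape of trade-off DLSS exploits at vastly larger scale.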

Not only does this allow gamers to improve how their games look without lowering frame rates, but the more efficient post-processing algorithms optimized through AI should hopefully make games feel more responsive as well. But remember: if you just suck at games like CS:GO because you have straight-up terrible reflexes, AI probably won't help you, so you might want to give turn-based games a shot. Are you concerned about a data breach causing your credit card info to fall into the wrong hands? Then check out today's sponsor, Privacy.com, a free, easy-to-use service that hides your credit card number. You see, it works by creating a virtual credit card that's locked to whichever merchant you're shopping at, so even if the merchant gets hacked, the bad guys won't be able to use your card anywhere they please. And if they try, you'll get a push notification so that you're always in the loop and can cancel the card immediately. Privacy also has a browser extension that auto-fills information for you when you're making a purchase, and they are PCI DSS compliant.

They use military-grade encryption to secure your information, and they offer two-factor authentication. And since Privacy makes money from merchants, there's no cost to you. So if you sign up today, you'll get five bucks.

Five actual bucks, so check it out at privacy.com/techquickie. Thanks for watching, guys! If you liked this video, give it a thumbs up, subscribe, and be sure to hit us up in the comment section with your ideas for future videos we should make about tech topics you want explained. Well, do it!