Inside Google’s Secret Pixel Camera Test Lab (Exclusive)


The Pixel has long been known for having one of the best cameras for still images, but the Pixel 8 series is the first time we're seeing a real emphasis on video. I'm here at the Google Real World Testing Lab with an exclusive first look inside to see what makes the Pixel camera tick, and trust me, when you think lab, you're probably not thinking of this. Google built these spaces to replicate where we actually take photos and videos: there's a cafe and a living room, plus a few secret areas we weren't allowed to show you. The key to testing in this lab, rather than in any cafe on campus, is control.

Every lighting element in each room can be manipulated individually, from color temperature to intensity, and lighting grids on the ceiling recreate every situation from late-evening light to sunrise. Having a controlled environment helps engineers test the same situations over and over again to make sure the phones deliver consistent results. This custom-built phone rig is what they use to test and compare side by side; here we're seeing how Google tested the Pixel's new low-light video enhancement, called Night Sight Video.

So we introduced the original Night Sight feature years ago to help you take ultra-low-light photos, but it was always a struggle to bring it to video, because it's the difference between processing 12-megapixel pictures and over 200 megapixels per second of video. Night Sight Video lets us take all that HDR+ and Night Sight photo processing and bring it to video without having to make any compromises.

So this looks like an everyday cafe, but when the lights go down and the testing starts, what exactly are you looking for? It turns out simple everyday scenes, like mood-lit dinners or a romantic evening, are really challenging for cameras. Let's say you've got two people sitting here: maybe your camera has to decide which of them to focus on. Maybe one of them is closer but not facing the camera, and the other one is facing the camera but further away. A candle is a very small point of incredibly bright light, and worse than that, it moves and casts different shadows across the whole room as it moves. You have to make sure that the flickering of the candle doesn't cause flickering of the exposure; you have to make the camera confident, so it sees things happening around it but stays very smooth and controlled.
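One common way to keep candle flicker out of the exposure, sketched here purely as an illustration and not as Google's actual pipeline, is to temporally smooth the metered scene brightness before it drives exposure, so short-lived spikes from a flame don't turn into visible exposure pumping. The `ExposureSmoother` class and its parameters are hypothetical.

```python
# Illustrative only: a simple temporally smoothed auto-exposure controller.
# Not Google's implementation; the class and its parameters are hypothetical.

class ExposureSmoother:
    def __init__(self, alpha=0.1, max_step_ev=0.05):
        self.alpha = alpha               # exponential-smoothing weight per frame
        self.max_step_ev = max_step_ev   # clamp on exposure change per frame (in EV)
        self.smoothed_ev = None          # currently applied exposure value

    def update(self, metered_ev):
        """Feed the exposure value metered from the current frame,
        return the exposure value the camera should actually apply."""
        if self.smoothed_ev is None:
            self.smoothed_ev = metered_ev
            return self.smoothed_ev
        # Exponential moving average damps short-lived spikes (e.g. candle flicker).
        target = (1 - self.alpha) * self.smoothed_ev + self.alpha * metered_ev
        # Rate limiting keeps any remaining change slow and deliberate.
        step = max(-self.max_step_ev, min(self.max_step_ev, target - self.smoothed_ev))
        self.smoothed_ev += step
        return self.smoothed_ev

# Example: a flickering candle gives noisy per-frame meter readings,
# but the applied exposure barely moves.
smoother = ExposureSmoother()
for metered in [5.0, 5.4, 4.8, 5.5, 4.9, 5.1]:
    print(round(smoother.update(metered), 3))
```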

So you might see tests where a face is both far away and backlit, to really push the camera into a very challenging condition which, for you, might literally just be your Saturday night party, but for the camera is a real test. Processing these low-light videos takes a lot of computational power: a 60-second 4K video at 30 frames a second is equivalent to processing 1,800 photos, which is why Video Boost files get sent to the cloud. Video Boost adjusts dynamic range, color and detail using the same HDR algorithm used for still images. It works on daytime and low-light videos, the latter of which Google calls Night Sight Video. Here's a before and after on the same clip to show you what it looks like with and without Video Boost turned on. Night Sight has been on Pixel phones for stills since 2018, starting with the Pixel 3, but it's taken this long to bring it to video, and it's one of the technologies that really set the tone for other phone makers to roll out their own night modes.
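The processing numbers quoted above are easy to sanity-check. A minimal back-of-the-envelope calculation, assuming "4K" means the common 3840 x 2160 UHD resolution:

```python
# Rough sanity check of the video-processing figures mentioned above.
# Assumes "4K" means 3840 x 2160 (UHD) at 30 frames per second.

fps = 30
duration_s = 60
width, height = 3840, 2160

frames = fps * duration_s                           # 1,800 frames in a 60-second clip
megapixels_per_frame = width * height / 1e6         # ~8.3 MP per 4K frame
megapixels_per_second = megapixels_per_frame * fps  # ~249 MP/s, vs. a single 12 MP photo

print(frames, round(megapixels_per_frame, 1), round(megapixels_per_second, 1))
```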

Now, most phones do a good job of rendering color in daylight, but low light can get really confusing for the camera, especially when there are random textures and words in the scene, like on the Monopoly board. Cameras have what's called a Bayer filter over the sensor that helps them distinguish color, but a certain amount of combining, rendering and algorithmic work has to happen to turn that Bayer data into an RGB image, and in low light that's a lot harder to do. You can end up with rounding errors, for example, that make the green not quite look like the right green. So in a scene like this we have pinks, oranges, yellows and multiple kinds of green, and the human eye is very good at picking up different shades of green. We make sure that all those things look right even in low light, so that we can still have the vivid, saturated colors that Night Sight is known for.
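As a rough illustration of what "turning Bayer data into RGB" means, here is a deliberately simplified sketch, not the Pixel's actual demosaicing pipeline: a 2x2-binning reconstruction of an RGGB mosaic, which also shows how integer rounding can nudge the green channel.

```python
import numpy as np

# Simplified illustration only: real ISPs use far more sophisticated demosaicing.
# An RGGB Bayer mosaic stores one color sample per pixel:
#   R G R G ...
#   G B G B ...
# Here we collapse each 2x2 tile into one RGB pixel (half resolution).

def demosaic_rggb_binned(mosaic: np.ndarray) -> np.ndarray:
    """mosaic: (H, W) raw samples with even H and W, RGGB layout.
    Returns an (H//2, W//2, 3) RGB image."""
    r  = mosaic[0::2, 0::2].astype(np.float32)
    g1 = mosaic[0::2, 1::2].astype(np.float32)
    g2 = mosaic[1::2, 0::2].astype(np.float32)
    b  = mosaic[1::2, 1::2].astype(np.float32)
    g  = (g1 + g2) / 2.0  # average the two green sites in each tile
    return np.stack([r, g, b], axis=-1)

# Tiny example: a 2x2 tile where the two green samples differ by one code value.
tile = np.array([[100, 13],
                 [ 14, 40]], dtype=np.uint16)
rgb = demosaic_rggb_binned(tile)
print(rgb)                             # green comes out as 13.5
print(np.round(rgb).astype(np.uint8))  # rounding to integers shifts the green slightly
```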

When you're looking at all of these test images, these test files, what's determining what is right and what isn't? Is it what you think is right, what the human eye perceives, or is it an algorithm? So you always have to start with a trained eye and your own experience of, well, what would I want the camera to do in a case like this, if this was my home and my family? We can repeat the same test over and over again and determine what the right answer is, and then make sure the camera does that consistently. With this board game over here, there's actually a color chart next to it, so we know exactly what the correct color is: the color chart is calibrated, so we know what those colors are supposed to be. But just producing the correct image doesn't always mean it's the right one. There's always a difference between how you remember a moment, how you want to remember it, and maybe what the color chart said it was, and there's also a balance to find in there.
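One common way a calibrated chart gets used, shown here only as a generic illustration and not as Google's test procedure, is to fit a color-correction matrix so the camera's measured patch colors land on the chart's published reference values. The patch numbers below are made up.

```python
import numpy as np

# Illustration only: deriving a 3x3 color correction matrix from a calibrated
# color chart by least squares. Patch values here are invented; a real chart
# (e.g. a 24-patch target) has published reference colors.

measured = np.array([   # linear RGB as the camera recorded each patch
    [0.40, 0.22, 0.18],
    [0.55, 0.50, 0.42],
    [0.20, 0.30, 0.52],
    [0.35, 0.45, 0.20],
])
reference = np.array([  # what the calibrated chart says those patches should be
    [0.45, 0.20, 0.15],
    [0.60, 0.52, 0.40],
    [0.18, 0.28, 0.55],
    [0.33, 0.48, 0.18],
])

# Solve measured @ M ≈ reference for the 3x3 matrix M.
M, *_ = np.linalg.lstsq(measured, reference, rcond=None)

corrected = measured @ M
print(np.round(corrected, 3))  # corrected patches land close to the reference values
```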

Of course, we're not all just taking photos and videos indoors; Google's testing also happens outside. In this comparison, we're seeing an autofocus test between the two phones when a subject is moving, and the differences in tone mapping. On the technical side, the way I would describe this is: your human eye can see an incredible amount of what we call dynamic range, from detail in a very deep shadow all the way to something very bright, like the sun through a window. You can see a lot of that with your eye; a camera can see a lot less of it, and the formats like JPEG that we use to transmit images can see even less. So when the real world is this big, but the format is that big, we need methods to compress this much range into that much range. Tone mapping is how we do that compression, and really good tone mapping can do a lot of compression while still looking natural, like your eye would have seen it.
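To make the idea of squeezing a wide scene range into a narrow output range concrete, here's a minimal global tone-mapping sketch using the classic Reinhard operator. It's a textbook illustration of the concept, not the tone mapper the Pixel actually uses.

```python
import numpy as np

# Illustration only: a simple global Reinhard tone-mapping operator.
# Real pipelines use far more elaborate, locally adaptive tone mapping.

def reinhard_tonemap(hdr_rgb: np.ndarray) -> np.ndarray:
    """hdr_rgb: (H, W, 3) linear scene-referred values, possibly >> 1.
    Returns display-referred values in [0, 1)."""
    # Per-pixel luminance (Rec. 709 weights).
    lum = (0.2126 * hdr_rgb[..., 0]
           + 0.7152 * hdr_rgb[..., 1]
           + 0.0722 * hdr_rgb[..., 2])
    # Reinhard curve: compresses highlights strongly, leaves shadows nearly linear.
    lum_out = lum / (1.0 + lum)
    # Scale the color channels by the luminance ratio to preserve hue.
    scale = np.where(lum > 0, lum_out / np.maximum(lum, 1e-8), 0.0)
    return hdr_rgb * scale[..., None]

# A deep shadow (0.01), a midtone (0.5) and a bright window (50.0) all land
# inside the displayable range, with the highlight compressed the hardest.
scene = np.array([[[0.01, 0.01, 0.01],
                   [0.5, 0.5, 0.5],
                   [50.0, 50.0, 50.0]]])
print(reinhard_tonemap(scene))
```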

And where would a video be without good audio? Testing, one, two, three.

The usual way of adjusting and improving audio is by frequency tuning, but if you're trying to get rid of sounds like wind, that can make speech sound bad, because speech is also low frequency. So that's where speech enhancement comes in: we use AI.

We have a trained AI model that's very good at identifying the audio that is speech. Once we can identify that speech, we can preserve the speech portion of the audio and reduce the non-speech. Test scene, one, two, three. This is with speech enhancement off. Yeah, pretty noisy scene.

One, two, three. Test scene, one, two, three. This is with speech enhancement on. Let's see if you can hear the difference. You definitely can hear it. That is impressive, because again, you don't sound like you're in a recording studio or super isolated. It's still true to where you were. Yeah, exactly.
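As an illustration of the mask-based idea described above, preserving the parts of the signal identified as speech and attenuating the rest, here's a minimal sketch. The `speech_mask` function is a purely hypothetical stand-in for a trained speech model; none of this is Google's model or pipeline.

```python
import numpy as np
from scipy.signal import stft, istft

# Illustration only: mask-based speech enhancement in the time-frequency domain.
# `speech_mask` is a hypothetical placeholder for a trained speech-detection model.

def speech_mask(magnitude: np.ndarray) -> np.ndarray:
    """Return per-bin values in [0, 1] estimating how speech-like each
    time-frequency cell is. A real system would use a trained neural network;
    this toy version just favors louder-than-average bins as a placeholder."""
    return np.clip(magnitude / (magnitude.mean() + 1e-8), 0.0, 1.0)

def enhance_speech(audio: np.ndarray, sample_rate: int, noise_reduction_db: float = 12.0):
    """Attenuate time-frequency cells the model considers non-speech."""
    _, _, spec = stft(audio, fs=sample_rate, nperseg=512)
    mask = speech_mask(np.abs(spec))
    floor = 10.0 ** (-noise_reduction_db / 20.0)       # how far to duck non-speech
    spec_enhanced = spec * (floor + (1.0 - floor) * mask)
    _, audio_out = istft(spec_enhanced, fs=sample_rate, nperseg=512)
    return audio_out

# Example with synthetic audio: a tone standing in for speech plus broadband noise.
sr = 16000
t = np.arange(sr) / sr
noisy = 0.5 * np.sin(2 * np.pi * 220 * t) + 0.2 * np.random.default_rng(0).normal(size=sr)
cleaner = enhance_speech(noisy, sr)
print(noisy.std(), cleaner.std())  # the enhanced signal carries less energy (non-speech ducked)
```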

Having a controlled space like the Real World Test Lab isn't only important for the software side of things; it actually helps with the hardware development too. When you're building a piece of hardware, you have to make sure that hardware works properly week after week after week, through all the different prototypes and factory versions that you get. You can't do this once and then hope it works forever. With autofocus, you have some lenses that are static and some lenses that move; they slide back and forth. In a case like this, you can have situations where the phone has been sitting like this and the lens is hanging back at the back, and then you flip it up like that to focus. Let's say I took it off the table and went to take a video, and the lens starts moving focus to where it wants to be.

Well, all the grease on that rail has pooled at the back, so you're sort of pushing the grease along with the lens, and that can give a different behavior on the first focus than you might get on subsequent ones, after the grease has been spread through the rails. What makes for really good video quality on a Pixel? What are the criteria that have to be met before you're ready to ship? You can never have a static scene in video, but you don't want things like exposure and focus to waver back and forth.


You want them to lock on and be very confident and very stable. So for a lot of that challenge, a place like this is really helpful, because we can change the lighting conditions and change the scene in controlled ways, and make sure the camera stays locked on to the right focus, the right exposure and the right balance. Getting that right, what we call temporal consistency, the change over time, is really important, and it's very helpful to have a controlled scenario like this to do it in.
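One simple way to think about checking temporal consistency on a repeatable lab scene, offered only as an illustrative metric and not Google's actual test suite, is to track a per-frame statistic such as mean luminance across a clip and flag frames where it jumps by more than some tolerance.

```python
import numpy as np

# Illustrative only: a toy temporal-consistency check for a test clip.
# frames: a sequence of (H, W) luminance images from the same controlled scene.

def flag_exposure_jumps(frames, tolerance=0.03):
    """Return indices of frames whose mean luminance changed by more than
    `tolerance` (relative) versus the previous frame, i.e. visible pumping."""
    means = np.array([float(np.mean(f)) for f in frames])
    rel_change = np.abs(np.diff(means)) / np.maximum(means[:-1], 1e-6)
    return list(np.flatnonzero(rel_change > tolerance) + 1), means

# Synthetic example: a steady scene with one sudden exposure jump at frame 3.
rng = np.random.default_rng(0)
frames = [np.full((4, 4), 0.50) + rng.normal(0, 0.001, (4, 4)) for _ in range(6)]
frames[3] += 0.10  # simulated exposure flicker
jumps, _ = flag_exposure_jumps(frames)
print(jumps)  # -> [3, 4]: the jump up and the recovery back down both get flagged
```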

For lots more on the Pixel camera and how it's all been tested in this cool Real World Lab, you can check out an article by my friend Patrick Holland; it's on CNET now and linked in the description. And of course, click like and subscribe for lots more content, and let me know what other tech you want me to focus on in the next deep dive video. See you.