The problem with under-display cameras

Hi, this is Wayne again with the topic "The problem with under-display cameras."
For years now, phone manufacturers have been trying to eliminate, or at least minimize, the bezels around the screen. They've tried hole-punch cutouts and even little pop-up cameras to maximize screen real estate, but a new solution that's gaining popularity is under-display cameras, which are completely hidden from view. With them, you could have a perfectly seamless screen with nothing breaking up the image, or at least that's the dream. That dream is finally becoming a reality, with three recently released phones using this technology: the Samsung Galaxy Z Fold 3, the Xiaomi Mix 4, and ZTE's Axon 30.

The question is: is the trade-off for a notchless phone worth the worse quality of the front-facing camera? The idea itself isn't actually all that new. Companies like Microsoft have been researching under-display cameras for a long time, but until now it hasn't shown up in consumer products like phones. Having a screen without a notch or a hole punch is wonderful, but in early attempts the area covering the camera was even more distracting than a notch, and the camera quality was, to put it mildly, pretty awful. Hiding a camera under a phone screen is no easy feat. Steven Bathiche from Microsoft's Applied Sciences team walked me through the technology.

"It's kind of like looking through a screen door. If you're looking through a screen door and you have a camera behind the screen, you can see through it. But as a display gets higher and higher resolution, that screen door becomes more dense, and it becomes really hard to see through. You start losing a lot of light. And not only do you lose a lot of light, you get these things that actually affect where the light goes, like diffraction."

"So we can put a camera behind a really big display, because it's got big pixels, which really means it's got a lot of room that isn't a pixel. But when you get really small displays, like you see today in the market, the pixels are really dense. You're talking about, like, 400 dpi in a lot of cases. So what do you do then? That's a really tough screen door to take an image through." The first generation of under-display cameras in phones solved this problem by reducing the pixel density in the area covering the camera. This left the lens less obstructed and therefore let more light through to the sensor, but the image quality still paled in comparison to traditional selfie cameras, not to mention it was very clear where the camera was housed, thanks to a distracting low-resolution patch.
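To put rough numbers on that screen-door picture, here's a small back-of-the-envelope sketch in Python. The pixel pitches and opaque-area sizes are illustrative assumptions, not measurements from any of these phones; the point is only that if the opaque emitters and wiring don't shrink as fast as the pixels do, the open area collapses as density climbs toward 400 ppi.

    # Toy model of the screen-door effect: how much of a display region
    # stays open (non-pixel) at a given pixel density?
    # All figures below are illustrative assumptions, not measured values.

    def open_area_fraction(ppi: float, opaque_um: float) -> float:
        """Fraction of each pixel cell not blocked by emitters/wiring.

        ppi       -- pixels per inch of the display
        opaque_um -- side length (microns) of the opaque block per pixel
        """
        pitch_um = 25.4 * 1000 / ppi           # pixel pitch in microns
        blocked = (opaque_um / pitch_um) ** 2  # opaque share of the cell
        return max(0.0, 1.0 - blocked)

    # A big, low-density display: plenty of room between pixels.
    print(open_area_fraction(ppi=100, opaque_um=120))  # ~0.78 open

    # A ~400 ppi phone display with similar-sized opaque features:
    # the features now fill most of each cell, so far less light passes.
    print(open_area_fraction(ppi=400, opaque_um=55))   # ~0.25 open

With these made-up numbers, the low-density panel leaves almost 80 percent of its area open, while the 400 ppi patch leaves only about a quarter, the same ballpark as the light loss described next.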

Newer-generation phones are able to maintain a higher pixel density of around 400 ppi right over their cameras by using a newer wiring technology that lets slightly more light reach the lens. But even this cuts out nearly 80 percent of the light that would otherwise hit the sensor, which still doesn't give great results, especially in low light. "So there's always a balance, there's a trade-off. You take a camera that potentially might be struggling in a dark environment, and you make it all that much harder for it. So where do you solve the problem? You do your best on the optics side, and eventually, I think, the optics will be able to help and the display technology will be able to help, and maybe get only 40 percent of the light through rather than 20. But you're still losing, like, 60 percent." For the rest, you have to rely on computation to make up for these optical shortcomings.
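To translate those transmission figures into camera terms, here's a quick sketch that converts them into photographic stops and into the extra gain a sensor would need to compensate. The 20 and 40 percent values are just the ones quoted above.

    import math

    def stops_lost(transmission: float) -> float:
        """Photographic stops of light lost at a given transmission (0-1)."""
        return math.log2(1.0 / transmission)

    for t in (0.20, 0.40):  # roughly today's panels vs. the hoped-for future
        print(f"{t:.0%} transmission: {stops_lost(t):.1f} stops lost, "
              f"needs ~{1.0 / t:.1f}x gain or exposure to match")
    # 20% transmission: 2.3 stops lost, needs ~5.0x gain or exposure to match
    # 40% transmission: 1.3 stops lost, needs ~2.5x gain or exposure to match

Five times less light means roughly five times more gain or exposure time, which is exactly where the noise and blur you see in these selfies creep in.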

Manufacturers across the board have to use software to compensate for the challenges that still exist with their hardware. "So we're using AI and neural networks to do things that cameras in phones already do really well today, which is take, you know, snapshots in the dark. They use AI to try to recover as much of the detail that was there, and also, at the same time, correct for any artifacts or aberrations caused by that screen-door effect. It's kind of a twofold problem: you've got to solve it in optics, and you've got to solve it using computation at the end of the day. But in both ways, you're making it hard on the camera to do a great job of capturing a great image."
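As a sketch of what "correcting artifacts caused by the screen-door effect" can mean in code, here's a minimal classical example. Phone makers use trained neural networks for this; a Wiener filter is only a stand-in for the same underlying idea, that if you can model how the display's pixel grid spreads incoming light (its point spread function, or PSF), you can partially invert that blur in the frequency domain. The PSF and SNR here are assumed inputs you'd have to measure or estimate.

    import numpy as np

    def wiener_deconvolve(image: np.ndarray, psf: np.ndarray,
                          snr: float = 100.0) -> np.ndarray:
        """Partially undo a known blur (PSF) with a Wiener filter."""
        # Pad the PSF to the image size and center it on the origin
        # so its FFT represents the same blur the image suffered.
        psf_full = np.zeros_like(image, dtype=float)
        psf_full[:psf.shape[0], :psf.shape[1]] = psf
        psf_full = np.roll(psf_full,
                           (-(psf.shape[0] // 2), -(psf.shape[1] // 2)),
                           axis=(0, 1))

        H = np.fft.fft2(psf_full)   # the blur in the frequency domain
        G = np.fft.fft2(image)      # the blurred image's spectrum
        # Regularized inverse: divide out the blur where the signal is
        # strong, back off where doing so would just amplify noise.
        F = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr) * G
        return np.real(np.fft.ifft2(F))

You'd call it as wiener_deconvolve(hazy_frame, measured_psf, snr=50.0) on a grayscale frame; the snr knob trades ringing artifacts against leftover haze, the same tension any sharpening pipeline has to balance.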

The two phones I have here are devices with a more recent generation of this under-display camera technology. This is the Xiaomi Mix 4, and this is the ZTE Axon 30. They both have a roughly 400 ppi area covering the camera, integrated into the OLED panel, and as you can see, it's pretty difficult to make out, especially with the naked eye. Previous phones that used under-display cameras had a rough, low-pixel-density area that, honestly, I thought looked worse than a notch.

It was really distracting. But here, especially on this Mix 4, which is a really nice-looking phone, I think it looks great. You can barely see the camera under the screen unless you're looking really, really closely, kind of at an angle, or on a white background. But generally, you can't really see this camera.

The problems, of course, come when you open the cameras themselves, and yeah, it's pretty clear just from looking at the live view feed that these cameras are working with some pretty serious hardware limitations. The image is kind of hazy and blown out. It does improve a little after you take a picture: if I pop one, it's still kind of processing, but after it processes, the sharpness and the exposure improve a bit. Still, it's not the best selfie camera around. For some people who don't care too much about the quality of their selfies or their Zoom calls, maybe it is worth it. But if you do care about image quality, or if you want to take selfies in low light, you'll find that the technology really isn't ready yet, at least not for phones. Even experts like Steven don't know if the trade-off is worth it.

"Customers would look at that and see maybe, you know, an immediate benefit in that there's no notch. But front cameras are also doing many other things, like biometric authentication and Windows Hello, so you have to fit more things in there, and it's up in the air whether you can actually do biometrics behind the screen like that. So, you know, are you really solving a problem? I'm not sure yet. I think it's going to be challenging. I also think that sometimes people like to see where the camera is, so again, I'm not really sure whether it is the benefit for customers. Obviously, you have no notch, and you potentially have thin bezels at the top and the bottom, which might have some benefit from an aesthetic point of view. But it's really hard to say. I would say with anything like this, there are basically pros and cons, and you just have to think it through. It depends on the product you're trying to build and the experience you're trying to deliver."

Steven's team at Microsoft has focused its under-display camera efforts on larger video-conferencing devices. They aren't set on solving the cosmetic issue of eliminating bezels or notches; they're tackling a very real-world problem: helping maintain eye contact over video calls. "You know, I was really intent on trying to solve the problem for telepresence, and we've been working in this area for at least 15 years. We wanted to help bring people together over video conferencing and make it feel like it's real. One of the problems you want to solve there is to make the window basically like a real window, like I am talking to you through an actual window rather than a computer screen, which is very two-dimensional and doesn't take into account where you're looking or where your position is, like a real window does. I also wanted to solve for the cues that are really important in person-to-person communication, like eye contact. We have a number of different research threads around trying to solve the problem, both optically and computationally. And of course, one of the nice applications is putting cameras underneath the display: one, for the purposes of putting the camera where it needs to be to help with eye contact, because if you have the camera where you're looking, you are making eye contact; and two, to hide the camera from the bezel."

Now, it's important to note that Steven mentions this tech is only made possible with plastic OLED screens, which explains why you're seeing it in phones first. Most work machines like laptops use LCDs, so it'll still be a minute before under-display cameras start to appear in shipping devices like those. But Microsoft's efforts do show that there are uses for the idea beyond just shrinking phone bezels. Right now, it's more of a neat curiosity than anything else, and I definitely don't think you should buy a phone based on this feature alone. These things tend to get better with time, though, so who knows? Maybe cameras under the screen will soon be as common as fingerprint sensors.

"Okay, I have to speak from my personal point of view rather than from, like, a Microsoft product-making point of view. Personally, I don't mind the bezels. I think the bezel is a great place to put technology. And, you know, me personally, I would like the best front camera possible."
