A leaker has provided further details about the Vivo X100 Pro+ or Vivo X100 Ultra camera flagship, which is expected to be launched in 2024. Based on a prototype, the triple cam equipped with 4.3x optical zoom appears to be quite versatile and can apparently reach ridiculous sounding digital zoom-le...
I really don't get the use of super high resolutions on tiny sensors like that.
Sure, you can have a crazy zoom (aka crop) while still retaining good-enough resolution, but is it really worth it at this point?
All the detriments that minuscule, high-res sensors bring about won't just disappear.
Don't you enjoy photos of blurry gray splotches with AI-oversharpened edges that are supposed to be birds or squirrels?
Despite all the marketing fluff, phone cameras do make small but steady advances. I bet you could get a somewhat acceptable photo at this 200x zoom level, if you shone a pair of 500-watt floodlights at your scene and put your phone on a tripod.
My phone has a 10x zoom option that is barely usable without at least resting it on a surface; I can't imagine trying to take an even half-decent photo at 200x.
Pixel binning as a 'solution' to a problem which needn't even exist in the first place.
Well, I fully agree with this article. There is one other good use of binning/supersampling though, and that is better chroma resolution relative to luma.
But even that won't do much, with all the other shortcomings already present.
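To illustrate the chroma point: on a classic RGGB Bayer sensor, each 2x2 block holds one red, two green, and one blue sample, so binning a block down to one output pixel gives that pixel real measurements of all three channels instead of chroma interpolated from neighbours. A minimal sketch, with made-up sample values and an assumed standard RGGB layout:

```python
import numpy as np

# One 2x2 block of a standard RGGB Bayer mosaic (hypothetical raw values):
#   R  G
#   G  B
bayer = np.array([[120.0,  80.0],
                  [ 90.0,  40.0]])

# Binning the block to a single output pixel: every channel was actually
# measured here, so chroma needs no interpolation from neighbouring blocks.
r = bayer[0, 0]
g = (bayer[0, 1] + bayer[1, 0]) / 2  # average the two green sites
b = bayer[1, 1]
print("binned RGB pixel:", float(r), float(g), float(b))
```

Luma resolution still drops by the binning factor, of course, which is the trade-off being discussed here.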
Let me preface this by admitting that I’m not a camera expert. That being said, some of the claims made in this article don’t make sense to me.
A sensor effectively measures the sum of the light that hits each photosite over a period of time. Assuming a correct signal gain (ISO) is applied, this in effect becomes the arithmetic mean of the light that hits each photosite.
When you split each photosite into four, you have more options. If you simply take the average of the four photosites, the result should in theory be equivalent to the original sensor. However, you could also exploit certain known characteristics of the image as well as the noise to produce an arguably better image, such as by discarding outlier samples or by using a weighted average based on some expectation of the pixel value.
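A toy sketch of that last option, with hypothetical numbers and plain NumPy; the 2x2 grouping and the trimmed average are just one of the possibilities described above, not any vendor's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 4x4 sensor readout (arbitrary digital numbers): a flat gray
# patch plus noise, with one hot pixel thrown in as an outlier.
raw = np.full((4, 4), 100.0) + rng.normal(0, 5, (4, 4))
raw[1, 2] = 900.0  # hot pixel

# Group the photosites into 2x2 bins: bins[i, j] holds the four samples
# belonging to output pixel (i, j).
bins = raw.reshape(2, 2, 2, 2).swapaxes(1, 2).reshape(2, 2, 4)

# Plain average: in theory equivalent to one photosite of four times the area.
plain = bins.mean(axis=-1)

# A "smarter" option: drop the sample farthest from the bin's median before
# averaging, so a single bad photosite can't wreck the output pixel.
med = np.median(bins, axis=-1, keepdims=True)
worst = np.abs(bins - med).argmax(axis=-1)
keep = np.ones(bins.shape, dtype=bool)
np.put_along_axis(keep, worst[..., None], False, axis=-1)
robust = np.where(keep, bins, 0.0).sum(axis=-1) / 3.0

print(plain)   # the hot pixel drags its bin far above the true ~100 level
print(robust)  # the trimmed average stays close to the true gray level
```

The plain average is what naive binning gives you; the trimmed version is one concrete way the extra samples can be exploited beyond simple averaging.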
Statistical photography, aka computational photography, aka supersampling: statistically bin together a number of smaller pixels to cut the noise, producing a picture with lower resolution than the sensor's native one, but better quality.
Federation had a hiccup there; I'm only seeing your reply now.
Supersampling is definitely interesting, but up to what point? On a sensor this small, even something like 48 MP sampled down to 12 already suffers to a degree where I would stop calling it useful.
Don't get me wrong, I can see the use first-hand on my own phone. The second lens it uses for night mode bins 20 MP down to 5, and while the image is brighter than the main lens's, it's just as grainy, and at a much lower output resolution too.
Now granted, my phone is a few years old now, and modern devices surely have better sensors, but no amount of trickery will make up for those physical limitations.
The glass in the lens doesn't even resolve that much detail; I doubt it's physically possible to make a piece of glass that perfect. There is a reason people still buy medium format cameras over full frame: the glass elements can be larger, so small imperfections are a smaller fraction of the lens. This is also why bigger telescopes are always better.
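Even before glass imperfections, plain diffraction backs this up. A quick back-of-envelope check, using assumed numbers (a roughly f/2.6 phone camera and ~0.6 micrometre photosites typical of 200 MP-class sensors, not the actual specs of this device):

```python
# Diffraction sanity check: Airy-disk diameter vs. pixel pitch.
# All numbers are assumptions for a generic modern phone camera.
wavelength_um = 0.55   # green light, middle of the visible band
f_number = 2.6         # assumed aperture
pixel_pitch_um = 0.6   # assumed photosite size on a 200 MP-class sensor

# Airy disk diameter for a circular aperture: 2.44 * lambda * N
airy_um = 2.44 * wavelength_um * f_number

print(f"Airy disk: {airy_um:.2f} um, pixel pitch: {pixel_pitch_um} um")
print(f"Airy disk spans roughly {airy_um / pixel_pitch_um:.1f} pixels")
```

Under these assumptions the Airy disk is several pixels wide, so even a perfect lens physically cannot deliver one distinct point of light per photosite at that pitch.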