Many of these cameras are able to take bracketed[1] exposures, and the SNR in even just one image from such sensors is immense compared to the tiny sensors in phones. Surely with this much more data to work with, HDR is much nicer and without the edge brightening typically seen in phone HDR images.
[1]: https://www.nikonusa.com/en/learn-and-explore/a/tips-and-tec...
At a glance, my Samsung Note 22 Ultra takes better pictures than my Nikon D7500. At a glance.
However, as soon as you want to actually DO anything with it - like view it in any real detail, or on anything but a tiny screen - reality returns. While the phone is absolutely fantastic for a quick snapshot, it just does not come close to the definition of the older camera with its bigger sensor.
Even if SNR isn't really the issue with the tiny sensors (it is), the minuscule lenses used by phones almost certainly don't come anywhere close to the detail fidelity of a lens 300x the size.
Messing around in lightroom etc is just not worth it for hundreds of shots I take per day sometimes.
Camera manufacturers definitely are at a disadvantage when it comes to AI/ML expertise, infrastructure and data compared to Google or Apple. So I do think these big cos. do a better job on average from a purely technical viewpoint. On the other hand, there's no accounting for taste - stylistically, I do agree with many others that they are often too heavy-handed.
People who value quick shots/edits and don't care about quality or editing things later don't mind an iPhone doing all this behind the scenes - but it is irreversible. The sort of error in the article would drive me and other photographers up the wall.
Also, an iPhone has a CPU and ISP that outclass desktops from only a few years ago - camera manufacturers simply don't have the same compute available.
On the other hand, some brands do provide interesting computational photography in their cameras at the very high end. Panasonic mirrorless full frame cameras have a pixel shift mode for super res/no bayer interpolation, with some ability to fix motion between steps. Phase One has frame averaging and dual exposure in their IQ4 digital backs, for sequential capture into a single frame and super high dynamic range respectively.
iPhone HDR produces an HDR image file. Consumer HDR apps do the opposite - they take an HDR raw and tone map an sRGB JPEG out of it.
The only portable format that really supports HDR images is EXR, so if you're not generating that you're not getting it.
(I don't think there's anything that can do deep fusion either, though obviously you need it a lot less.)
What forbids them from buying competitive SoCs from Qualcomm? Pride?
As far as I know, iPhone and Android aren't doing anything that isn't already done by digital cameras. They ramp up the settings on things like noise reduction and sharpness to balance out their tiny sensors, but it's more or less the same algorithms that the cameras are using.
Good cameras even allow you to tweak the settings and control RAW conversion right on the camera. The author could have botched the noise reduction on his Fujifilm to match the iPhone if he wanted to. [0]
[0] https://www.jmpeltier.com/fujifilm-in-camera-raw-converter/
When you take sunlit panoramas, for instance, the iPhone will auto bracket and perform HDR treatment, and it's fantastic: you can see both the ground and the blue sky and the clouds. You can't do that easily with a DSLR, certainly not like a phone is doing right now, which is integrating the last x frames and modulating the digital shutter to capture multiple exposures.
Same when you take night shots, where the iPhone is integrating 1-2 s of sensor frames and compensating for movement with the accelerometer. You can take handheld shots of the sky, which is completely out of the question with a standard DSLR. Maybe Sony can do that with a very fast lens, in-body stabilization and a very high ISO, but that's 10k worth of equipment right there.
I still like my DSLR's "real" bokeh and photos, but some of the innovation in digital photos should spill into DSLRs/mirrorless.
It's the opposite, most digital cameras aren't doing enough compared to smartphones, which have minimum control and look very good at 2M pixels. A proper camera with default settings looks bland in the hands of an amateur (too little saturation) and handles dynamic range poorly (without bracketing or post-processing). Plus it's never going to beat smartphones at connectivity (SNS), or portability.
Most people aren't going to spend time processing RAW files, or try to get in-camera picture control/film simulation etc. to look pleasing like photographers do.
> As far as I know, iPhone and Android aren't doing anything that isn't already done by digital cameras. They ramp up the settings on things like noise reduction and sharpness to balance out their tiny sensors, but it's more or less the same algorithms that the cameras are using.
That is not true AFAIK; phone cameras use things like "AI" enhancement amongst other things. The fake bokeh is one example of this, but there are others, like "AI sharpening" etc. Also, the amount of NR is typically much higher in a phone camera.
You are correct that they typically also employ a lot more range compression and contrast enhancement to make pictures "pop" more, but you can often achieve similar things by adjusting camera settings, especially on lower-grade DSLRs.
Let me figure out when a scene is static enough to combine multiple shifted pictures for better snr. Ideally, let me enable that feature on my camera. Combine that feature with image stabilization, and let me tweak the conversion from multiple images after the fact.
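A rough sketch of what such an in-camera feature could do, assuming frames arrive as pre-aligned arrays, with a crude difference threshold standing in for real motion detection (all parameters here are illustrative):

```python
import numpy as np

def stack_if_static(frames, motion_threshold=4.0):
    """Average a burst of aligned frames for better SNR, but only
    if the scene appears static: averaging N frames reduces random
    noise by roughly sqrt(N). The mean-absolute-difference check is
    a crude stand-in for real motion detection."""
    ref = frames[0]
    for f in frames[1:]:
        if np.mean(np.abs(f - ref)) > motion_threshold:
            return ref  # scene moved: fall back to a single frame
    return np.mean(frames, axis=0)

# Simulated static scene: same signal, independent noise per frame.
rng = np.random.default_rng(0)
signal = np.full((100, 100), 128.0)
burst = [signal + rng.normal(0, 2.0, signal.shape) for _ in range(16)]
result = stack_if_static(burst)
print(np.std(burst[0] - signal))  # single-frame noise, ~2
print(np.std(result - signal))    # stacked noise, ~2/sqrt(16)
```

The tweakable part the comment asks for would essentially be exposing `motion_threshold` and the number of frames to the user, and keeping the originals so the combination can be redone later.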
Bought a Nikon z50 because this was annoying me. Shooting JPEGs and it does a vastly better job and it will still go in my pocket with the 28mm f/2.8 or the 16-50 kit lens. The iPhone is embarrassingly bad in comparison.
In my workflow I use DxO PhotoLab, which has excellent facilities for really bringing out a single image.
Sometimes I wanna go hardcore with a landscape and take multiple shots manually and blend them together using software like Aurora HDR (example: https://www.flickr.com/photos/193526747@N04/52219385902/ ). That image is 5 stacked images combined using a bit of computational photography and adjusted for saturation.
If you want something that will get you decent results fast and work with raws, you can also go with Luminar: https://skylum.com/luminar
An iPhone image looks better at first snap, but my Z5's images blow them out of the water once I give them some love in the edit room.
I was using Hugin to align the images from my Nikon DSLR. I found that you can get to at least double the resolution in both dimensions fairly quickly, but you'll never get to "enhance" like in TV shows.
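The idea behind that resolution doubling can be sketched as naive "shift and add": drop sub-pixel-shifted low-res frames onto a finer grid. This toy version assumes the shifts are already known exactly (in practice that's what the Hugin alignment step estimates):

```python
import numpy as np

def shift_and_add(frames, shifts, scale=2):
    """Naive multi-frame super-resolution ("shift and add"): place
    each low-res frame onto a grid `scale` times finer, at its known
    sub-pixel offset (in low-res pixels), and average overlaps."""
    h, w = frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    cnt = np.zeros_like(acc)
    for frame, (dy, dx) in zip(frames, shifts):
        oy, ox = int(round(dy * scale)), int(round(dx * scale))
        for y in range(h):
            for x in range(w):
                yy, xx = y * scale + oy, x * scale + ox
                if 0 <= yy < h * scale and 0 <= xx < w * scale:
                    acc[yy, xx] += frame[y, x]
                    cnt[yy, xx] += 1
    cnt[cnt == 0] = 1  # leave never-hit grid cells at zero
    return acc / cnt

# Idealized demo: four point-sampled quarter-resolution frames at
# half-pixel offsets reconstruct the full image exactly. Real frames
# are blurred and noisy, so real results fall well short of this --
# which is exactly why "enhance" like in TV shows never happens.
rng = np.random.default_rng(1)
hi = rng.random((8, 8))
shifts = [(0, 0), (0, 0.5), (0.5, 0), (0.5, 0.5)]
frames = [hi[int(2 * dy)::2, int(2 * dx)::2] for dy, dx in shifts]
recon = shift_and_add(frames, shifts, scale=2)
print(np.allclose(recon, hi))  # True in this noise-free toy case
```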
Olympus (OM Systems?) should build this stuff into their cameras. I used to have a bit of inkling to some day "upgrade" to full frame, but not any more.
I'd be interested in computational photography on ILCs if they allowed tuning it -- with phones, it comes with a bunch of other stylistic choices, and I want control over that stuff.
This would also pair well with Fujifilm’s lineup which already includes camera features focused on in-camera processing.
And developing the raw involves picking a white balance and several kinds of lens correction.
Wash your mouth out with soap! I did not spend five thousand dollars on a D850-based macro rig to have it produce results no better in quality than what I can get from my phone.
That said, there can be just as much (or more) "computational photography" going on with a digital camera as there is with a modern phone, the difference is that cameras and processing software give control to the user, and phones typically do not.
Computational photography techniques on smartphones, on the other hand, were always designed around squishy "user perception" goals to make photos look impressive, details be damned.
One can find pathological cases for traditional cameras too - moire is a common problem, Fuji X-Trans sensors historically had a watercolor/worms effect particularly in greenery, etc.
People who think digital cameras do anything close to what modern phones do don't have a clue. They're the opposite of 'Phones can do anything a DSLR can' people and they are just as wrong.
The quality difference is also very obvious compared to my phone even though my camera is easily 8 years old.
It's literal magic.
[1] https://ai.googleblog.com/2018/10/see-better-and-further-wit...
[2] https://petapixel.com/2019/05/28/how-googles-handheld-multi-...
> Slightly more objectionable, but still mostly reasonable, examples of computational photography are those which try to make more creative use of available information. For example, by stitching together multiple dark images to try to make a brighter one. (Dedicated cameras tend to have better-quality but conceptually similar options like long exposures with physical IS.) However, we are starting to introduce the core sin of modern computational photography: imposing a prior on the image contents. In particular, when we do something like stitch multiple images together, we are making an assumption: the contents of the image have moved only in a predictable way in between frames. If you’re taking a picture of a dark subject that is also moving multiple pixels per frame, the camera can’t just straightforwardly stitch the photos together - it has to either make some assumptions about what the subject is doing, or accept a blurry image.
Their point is that it's not magic; these techniques rely on assumptions about the subject being photographed. As soon as those assumptions no longer hold, you start getting weird outputs.
Seen this in cheap point-and-shoot cameras and cheap Chinese phones though.
iPhone photography and videography especially are always overrated by the fanbois and some of the "professionals". While it might look good in "some" pictures thanks to the heavy post processing, it just doesn't have any detail. It might appeal fine at a 100% view of the picture as-is, but even the slightest post processing or editing done on the output pictures ruins them a lot.
One has to depend upon what the developer of the application or the manufacturer thinks is the right picture (and who the hell are they to decide what my photo should look like?) and most of the time they are terribly wrong.
Apple is just overrated and for that matter, even some of the Android's as well.
Raw pics from full frame sensors hold the fort and will continue to do so for a long time to come, unless phones match DSLRs in sensor size and optics size. Until then, "computational photography" will make the pictures look terrible and dictate how they have to look.
I see a lot of comments where folks talk about RAW. But seriously, how does it matter for any normal user who tends to click a pic using the phone instead of a DSLR? If one is photographer, it makes sense, else it is additional workflow to get it in RAW and do the post processing on a computer... I'm just saying...
Thoughts welcome...
There are entire categories of image quality that only Apple seems to bother even trying to improve — and then they leapt past everyone.
A few years ago if you wanted to make a HDR, 4K, 60 fps Dolby Vision wide-gamut video…
That would have cost you. Tens of thousands on cameras, displays, and software. It would have been a serious undertaking involving a lot of “pro” tools and baroque workflows optimised for Hollywood movie production.
With an iPhone I can just hit the record button and it does all of that for me, on the phone!
Did you notice that it also does shake reduction? It’s staggeringly good, about the same as GoPro. Just setting up the stabilisation in DaVinci is half an hour of faffing around.
The iPhone just has it “on”.
I could go on.
A challenge I give people is to take a still photo that is wide gamut, 10-bit, and HDR, and send it to someone else by any method they prefer.
Outside of the Apple ecosystem this is basically impossible in the general case. Everything everywhere is 8-bit SDR sRGB.
Heck, even professional print shops still request files in sRGB!
So yes, the software in the Apple ecosystem does have a big impact on the end result of photography.
I can take a 14-bit dynamic range picture with my Nikon, but I can’t show it to anyone in that quality because of shitty Windows and Linux software, so what’s the point?
I take pics with my Apple iPhone instead. All the people I want to show pictures to have iDevices, so I can share the full HDR quality that the phone camera is capable of, not some SDR version.
However, as far as the iPhone producing HDR HEIF photos - as I recall from some brief reading, it seems like possibly an intentional choice from Apple to do this in an opaque, nonstandard way, so other image pipelines can't easily take advantage of it. I don't really want to give them credit for that.
When it comes out of the ecosystem, it has to understand and speak English, no matter how good it might be in French or German.
So, the argument is pretty subjective and there's no point in continuing, as no one knows what is under the hood and how it appeals to the eyes. It's subjective, and it may or may not have all the required details for it to survive outside of the ecosystem. It's what one calls vendor lock-in. One needs the whole idevice and isoftware ecosystem to function and survive in the iworld. And that world is mostly controlled and directed by the company!
This might look like too much of a deviation from the topic, but it is how the idevices are portrayed to the world and how the ifanbois take it. It is just overrated.
Right, because a 10 bit wide gamut desktop display isn't affordable to the vast majority of people?
"Full frame" cameras do not have the best image quality, and don't even have the best image quality for their price. (eg used medium format film cameras are cheaper.)
They're just the best cameras people have heard of. If you're doing product photography you might want a Phase One instead.
It doesn't matter much though; lighting and lens quality are what really make a photo even in a controlled environment.
You'd be hard pressed to get better quality from a scanned medium format negative than from a modern 60MP full frame sensor. You might just get there if you use a drum scanner, but any faster and more practical scanning process won't get you there.
I love my Nikon but for video I need a tripod to eliminate camera shake. My iPhone gives me stable images every time.
10 years or so ago a variation of this made headlines all over as certain Xerox Workcentres were transposing numbers during scans, due to a compression algorithm that was sometimes matching a different number than the one actually scanned.
https://www.theregister.com/2013/08/06/xerox_copier_flaw_mea...
https://googleprojectzero.blogspot.com/2021/12/a-deep-dive-i...
*Looks up pigeonhole principle*: https://en.wikipedia.org/wiki/Pigeonhole_principle
> If 5 pigeons occupy 4 holes, then there must be some hole with at least 2 pigeons.
This is so obvious.
> This seemingly obvious statement, a type of counting argument, can be used to demonstrate possibly unexpected results. For example, given that the population of London is greater than the maximum number of hairs that can be present on a human's head, then the pigeonhole principle requires that there must be at least two people in London who have the same number of hairs on their heads.
Oh...
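The hair-counting argument is easy to check numerically: with more people than possible hair counts, a collision is guaranteed no matter how the counts are distributed (the numbers below are illustrative, not census data):

```python
import random
from collections import Counter

# Pigeonhole: if there are more "Londoners" than possible hair
# counts, at least two of them must share a hair count.
MAX_HAIRS = 150_000           # generous upper bound per head
POPULATION = 1_000_000        # far more people than hair counts

random.seed(0)
hair_counts = [random.randrange(MAX_HAIRS + 1) for _ in range(POPULATION)]
count, people = Counter(hair_counts).most_common(1)[0]
print(people >= 2)  # True: a collision is guaranteed
```

The Xerox bug is the same principle in reverse: a compressor that maps many distinct input patches onto a smaller dictionary of symbols must, by pigeonhole, sometimes map two different glyphs to the same stored patch.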
The new iPhone and the Pixel 6 both use the same trick, where they have a 50 megapixel sensor (probably the same one, likely from Sony) that produces 12.5 megapixel raw photos with the information of four pixels combined. So the DNG I get from my phone has already had some processing done to it, but not a lot. Also worth noting that both phones have multiple lenses with different focal lengths and sensors, so it matters a lot which one you use. You'd control this via the camera app, typically with its different modes and zoom levels. I'm not sure if it uses exposures from all sensors to calculate a better raw, but that would not surprise me.
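That four-into-one binning can be sketched in a few lines of numpy; this is a toy model of the idea, not the actual ISP pipeline, and it ignores the color filter mosaic:

```python
import numpy as np

def bin_2x2(raw):
    """Sum each 2x2 block of sensor pixels into one output pixel,
    as quad-bayer binning does: e.g. a 48 MP readout becomes a
    12 MP image with roughly 4x the signal per output pixel."""
    h, w = raw.shape
    return raw[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))

# Toy uniformly-exposed frame.
sensor = np.ones((600, 800))
binned = bin_2x2(sensor)
print(binned.shape)   # (300, 400): a quarter of the pixel count
print(binned[0, 0])   # 4.0: four pixels' worth of signal
```

Collecting four pixels' worth of photons per output pixel is where the noise advantage comes from: signal quadruples while uncorrelated read noise only doubles.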
In terms of noise, the image quality is actually very good. I've done some night time photography with both the Pixel 6 and my Fuji X-T30, which is an entry level mirrorless camera. The Fuji has better dynamic range and it shows in the dark, but the noise levels are actually pretty good for a phone camera. Very usable results with some simple post processing, especially compared to my previous Android phone (a Nokia 7 Plus), which was noisy even in daylight. Mostly, doing raw edits is not worth it, but it's nice to have the option. The phone does a decent job of processing and mostly gets it right when it comes to tone mapping and noise reduction. When it matters, I prefer the Fuji. But sometimes the phone is all you have, and you just take a quick photo and it's fine.
A high end full frame camera will get you more and better pixels and more detail. Even an older entry level dslr will generally run circles around smart phone sensors. And that's just the sensor and camera. The real reason to use these is the wide variety of lenses and level of control over the optics that those provide. In phone bokeh is a nice gimmick. But it's a fake effect compared to a nice quality lens. Likewise you can't really fake the look you get with a good portrait lens (the effect that things in the background seem bigger). Phone lenses have a fixed focal range and generally not that much aperture range. There's a reason people pay large amounts of money to own good lenses. They are really nice to use and deliver a great photo without a lot of digital trickery. And they are optimized for different types of photography. There is no one size fits all lens for all photography.
I wonder how hard it is to take 'RAW' photos without adding an app first.
This is a “there’s already a solution, but the average consumer wouldn’t know about it, because the defaults are made for them” type of problem.
One could claim that it's a UI problem, and should be exposed in the Camera app. This may be true, but the files are 10 to 12 times larger, with a real "quality hit", as perceived by the average user, for overall aesthetics. I personally think it should be in the settings menu. It's not something you would want enabled without understanding and intent.
It's a little frustrating that Apple added this feature for this exact kind of thing, and they're, inadvertently or not, getting a little dumped on due to lack of knowledge/research.
These features (Google also attempted to standardize it, not sure they succeeded) were a big deal in the photography world.
Now, are most people going to notice that the iPhone wrecked the text on their subject? Probably not. But they probably also wouldn't notice if the model wasn't applied to the image at all. The median consumer probably mostly benefits from (in terms of how much they like the photo) AE, a bit of curve reshaping (using a smoothed histogram CDF algorithm or something), and maybe some extra saturation.
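That smoothed-histogram-CDF curve reshaping might look something like the sketch below; the parameter choices are illustrative, not any vendor's actual algorithm:

```python
import numpy as np

def cdf_tone_curve(img, blend=0.5):
    """Build a tone curve from the image's own smoothed histogram
    CDF (the classic histogram-equalization idea), blended with the
    identity curve so the effect stays subtle rather than going to
    full equalization."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    hist = np.convolve(hist, np.ones(5) / 5.0, mode="same")  # smooth
    cdf = np.cumsum(hist)
    cdf = cdf / cdf[-1] * 255.0
    lut = (1 - blend) * np.arange(256, dtype=float) + blend * cdf
    return lut[img].astype(np.uint8)

rng = np.random.default_rng(0)
# Low-contrast test image: values bunched between 100 and 140.
flat = rng.integers(100, 140, size=(64, 64)).astype(np.uint8)
out = cdf_tone_curve(flat)
print(flat.std(), out.std())  # contrast (std) increases
```

The `blend` factor is the knob between "do nothing" and "full equalization"; something in between is roughly what makes a default phone photo look punchy without looking obviously processed.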
I never saw any of these phones altering the details like in article.
https://www.35mmc.com/10/01/2015/low-light-fun-ilford-hp5-ei...
ISO 3200 on black-and-white film was pushing it pretty hard. Yet these pictures look good, in a noisy kind of way. Let an algorithm loose on them and it'll "fix" things, first and foremost by smoothing the skin. Even older low-end dedicated digital cameras do this, some brands more than others. The pictures in low light feel more like a badly done painting than a good, honest, albeit noisy photo. One possibility is that the noise from a digital sensor is not as uniformly pleasing as that from film, so it must be masked.
"Phones take amazing snapshots, but dedicated cameras can make better photographs."
The new smartphone cameras are capable of pretty amazing things and they can extend taking good pictures to a whole new audience. If you need the control though that large sensors and specific lenses can bring then you will need a dedicated camera.
When making photos in low light, I always try to lean my phone against something (a bench, a lamppost, a tree, a building) to let the longer exposure be sharper.
Apple power adapters actually have a bunch of text on them printed in unreadable light gray; you can try shooting them and while they're a bit clearer than plain old eyesight, they're still pretty unreadable.
This applies to dedicated cameras too though - physical image stabilization can compensate for camera motion but not for subject motion. The difference is that a) physical IS can compensate throughout each exposure, not just between exposure and b) the photographer is not bound to a black box algorithm but can instead use his own a priori knowledge to align the images if needed.
From this way of looking at things a normal long exposure also imposes a prior assumption (that nothing is moving). It's just that we're used to the artefact that's generated when this prior isn't true (motion blur).
As someone who upgrades every several years, I've been wondering what people who upgrade every year and rave about the camera being better are even seeing at this point.
(Stills only I’m talking about)
Also, the camera, lenses, and sensors don't all update every year. Early on in Apple's tick-tock approach to design iteration, camera updates were the "s" models (the "tock") in the release cycle.
Now they seem to be just incrementing the number and you have to pay attention to what if any changes they make. This time they did 4x pixels and do pixel binning for regular shots and low light.
Of course you cannot really compensate for the lens not resolving enough detail, or not focusing close enough; but since the near-totality of photos taken on a phone will be seen on another mobile device, these are the less important parts of the equation. Correct exposure and good colors always look good, regardless of how much you zoom the photo. OP's use case is very limited, and unfortunately they didn't provide enough context about the nature of the photo.
One aspect that is little discussed is the inflated quality perception of such a photo when seen on the actual device, an iPhone in this case.
iPhones have an incredible screen. OLED, wide gamut color, high PPI. A photo looks radically better on an iPhone compared to opening the same photo on a standard monitor.
A photon can be any color of the rainbow. The reason ink and TVs can get away with using only 3 colors is because our eyes only have 3 types of receptors (cone cells). Each receptor responds to a range of wavelengths. "In-between" wavelengths will trigger multiple receptors. For example, a TV can send a mix of red and green photons and create the same brain signals as yellow photons would. Animals with more types of receptors, such as bees or the mantis shrimp, wouldn't be fooled by a TV with only three base colors.
A camera's sensor performs the same lossy compression as our eyes. Light comes into the camera in a range of wavelengths, and triggers each type of pixel a different amount. Each type of pixel has a sensitivity curve engineered to resemble the sensitivity curve of one of the cone cells in our eyes.
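That collapse from a full spectrum to three numbers is easy to demonstrate. The sketch below uses made-up Gaussian sensitivity curves (real CIE cone fundamentals are asymmetric) and solves for a three-primary mix that produces exactly the same cone response as monochromatic yellow:

```python
import numpy as np

wavelengths = np.arange(400, 701, dtype=float)  # visible range, nm

def band(center, width):
    """Toy Gaussian spectral curve; good enough for the principle."""
    return np.exp(-((wavelengths - center) / width) ** 2)

# Toy S/M/L cone sensitivity curves (peaks near the real values).
cones = np.stack([band(445, 40), band(540, 40), band(565, 40)])

# Monochromatic ~575 nm "yellow" light.
yellow = band(575, 5)

# Three narrow display primaries ("blue", "green", "red").
primaries = np.stack([band(450, 10), band(545, 10), band(620, 10)])

# Solve for primary intensities that produce the SAME cone
# responses as the yellow light (3 equations, 3 unknowns). This is
# a mathematical match; a real display would clip any slightly
# negative coefficient.
A = cones @ primaries.T          # cone response to each primary
coeffs = np.linalg.solve(A, cones @ yellow)
mix = coeffs @ primaries

# The spectra are completely different, yet the eye model can't
# tell them apart: both collapse to the same three numbers.
print(cones @ yellow)
print(cones @ mix)
```

A bee or a mantis shrimp, modeled as a matrix with more rows (more receptor types), would give an overdetermined system with no exact solution - which is exactly why a three-primary TV can't fool them.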
Understanding that natural light isn't just red, green, and blue makes it clear why chromatic aberration can't be fully fixed computationally. A green pixel can't know whether it's receiving green photons that are perfectly aligned, or yellow light that has been displaced and needs to be corrected.
P.S. There are cameras that can "see" a greater range of colors. Search for "spectral cameras" and "infra-red goggles".
PPS This is also why an RGB light strip might look white, but objects illuminated by it might look odd. You might be familiar with the fact that a blue object illuminated by a red light will look black. For the same reason, it's possible for a yellow object to be illuminated by red, green, and blue light and still look black.
PPPS This is also why custom wall paints are a mixture of more than three colors. Two paints may look completely the same, but objects illuminated by the light bounced off the walls look completely different.
PPPPS This is also why high-CRI lightbulbs are a thing. If you get something hot, like the sun or a tungsten filament, it will release photons with a wide range of wavelengths. Neon tubes and LEDs emit only a narrow band of wavelengths, so they must be coated with phosphors that fluoresce, i.e. emit light at a different wavelength than they absorbed. Using more kinds of phosphors is more expensive, but makes it more likely that whatever object is illuminated gets all the wavelengths it is able to reflect.
Once they find a way to interact with the processing engine then the quality will jump again.
For the vast majority of users, the phone camera is super awesome and just fine.
Mirrorless cameras may have been delayed if it weren't for the competition with phones. DSLRs were only around a few years before camera phones.
I'm not sure what the pipeline looks like, but I thought this type of situation was where ProRAW was supposed to be used?
This gives you a clean low-noise image with editing flexibility but it does have the flaws of deconvolution and stacking and AI denoising.
Actual raw from a cell phone is insanely noisy and hideously soft from diffraction in the best of cases.
You can get single shot RAW output from Halide and other third party apps on iPhones. It's actually perfectly usable and not particularly noisy or soft. I haven't personally had any problems with the output of ProRAW (which applies far less aggressive sharpening than the standard JPEG processing). I'm pretty sure the photo in the article would have come out fine if it had been shot using ProRAW.
90% of the time, my iPhone photos are fine straight out of the camera as HEIC. But every once in a while, I get something like is described here (or in several other recent similar articles).
Even for the "true raw" ones, I don't know if they're truly raw. Do they have distortion and light fall-off correction applied?
[0]: https://ai.googleblog.com/2021/04/hdr-with-bracketing-on-pix... [1]: https://dl.acm.org/doi/10.1145/3355089.3356508
The most infuriating thing is that you can usually see the image before post processing if you are quick enough and those look sharp and good, but this trash software can't be turned off.
Are you talking about proRAW? Or JPEGs straight from the Camera app?
It would be really interesting, though, to see an image signal processing expert weigh in on what the algorithm(s) are actually doing in this case.
One very interesting one is ptychography (in microscopy, often Fourier ptychography, since you can use Fourier optics to describe the optical system [0]), which uses a model of an optical system to get an image (IIRC at 7-10x resolution) out of many blurry images, while knowing a bit about the optics in front of your image sensor. It can also work in remote sensing to some degree (better with coherent illumination, though).
Edit: This is not just averaging or maxing pixels, it reconstructs the image using reconstructed phase information from having low-res pictures with different, known illumination or camera positions.
They'll all be manufactured as a one piece glass moulding and single CCD chip - and the whole thing will be very cheap to make, having moved all the difficulty into software.
I have previously discussed having taken photos with a smartphone where certain objects within some images have been so modified by the processing algorithm as to be almost unrecognizable, so I won't repeat those various scenarios here. Instead, I'd like to dwell on the implications of algorithmic image processing for a moment.
Let's briefly look at the issues:
1. Despite a recent announcement by Canon about a large increase in dynamic range in imaging, (https://news.ycombinator.com/item?id=34527687), I'm unaware of any current imaging sensor breakthrough that would vastly improve both resolution and dynamic range. Thus, essentially, we have to live with what we're already capable of physically squeezing into our present smartphones.
2. Manufacturers are improving both image sensors and optics but only incrementally. Thus, with current tech and absence of truly significant breakthroughs, we have to live with the limitations as outlined in the article (aberrations, lens flare, sensor insensitivity etc.).
3. Essentially, we're stymied both by the limitations of current tech and physical (smartphone) size. Usually, to overcome such limitations, we'd fall back on the old truism 'there's no substitute for capacity' and just make things bigger as we did with photographic emulsions, past camera lenses, loudspeakers, pipe organs, etc. but that's not possible here.
4. Outside incremental improvements in hardware—the Law of Diminishing (hardware) Returns having arrived—manufacturers have had to resort to computational methods. The trouble is that it seems with the present algorithms that the Law of Diminishing (computational) Returns is also already upon us, so what does this mean? Quo vadis?
5. Clearly, in its current form computational/algorithmic processing has hit a stumbling block or at least a major hiatus. Here, further incremental improvements are likely using current methods and there's little doubt that they'll be applied to recreational photography (smartphones and such), however, unfortunately, we now have a serious (and very obvious) problem with the authenticity of images taken by these cameras.
Simply, when software starts guessing what's within images, then we've not only lost visual authentication but we have serious downstream issues. It raises questions about whether or not photographic evidence based on computational imaging can be relied upon - or even submitted - as evidence in a court of law (I'd reckon that, without ancillary corroborating evidence, such images would not pass muster if the Rules of Evidence tests were applied).
How serious is this? Clearly, it depends on circumstance, but long before 'guessing-what's-in-the-image' became in vogue, simple compression was 'suspect' in, for example, serious surveillance work, because compression artifacts in an image raised doubts as to what objects actually were. Simply: could objects be identified with 100% certainty? If not, what figure could be placed on such measurements/identifications?
(Such matters are not hypotheticals or idle speculation; I recall in nuclear safeguards a debate over compression artifacts in remote monitoring equipment. Here, authenticating and identifying objects must meet strict criteria, and a failure to authenticate (fully identify) them means a failure of surveillance, which is a big deal! For example, the failure to distinguish between, say, round cables and pipes with 100% certainty could be a serious problem, as the latter could be used to transport nuclear materials - thus it'd be deemed a failure of surveillance. That's not out of the bounds of possibility in a reprocessing plant.)
Obviously, the need to authenticate what's in an image with 100% certainty isn't a daily occurrence for most of us but as these tiny cameras become more and more important and ubiquitous then we'll start seeing them used in areas where their images must be able to be authenticated.
Post haste, we need rules and standards about how these computational algorithms process images and how they should be applied.
6. What's the future? On the hardware side we need better sensors with higher resolution and more sensitivity, and improved optics (that, say, use metamaterials etc.). Such developments are on their way, but don't hold your breath.
Computational/algorithmic processing has the potential to do much, much better, but again don't hold your breath. There's considerable potential to correct focus and aberration problems etc. using both front-end and back-end computational methods ('front-end correcting lenses etc. on-the-fly and back-end as post-image processing) but much work still has to be done. Note: such methods also don't rely on guessing.
What people often forget is that when a lens cannot fully focus or suffers aberrations, etc. information in the incoming light is not lost—it's just jumbled up (remember your quantum information theory).
In the past untangling this mess has been seen as an almost insurmountable problem and it's still a very, very difficult one to resolve. Nevertheless, I'd wager that eventually computational processing of this order will be commonplace, moreover, it'll likely provide some of the most significant advances in imaging we're ever likely to witness.
Two interesting developments here are the pixels in Starvis 2 sensors, which, as a first AFAIK, use a 2.5D structure to increase full-well capacity by a lot. And another, non-production sensor by Sony, where they developed a self-aligning process and pixels are actually split into two layers, with the top layer only carrying the photodiode and the bottom layer entirely dedicated to the readout transistors. That's promising for lower readout noise and also for increasing full-well capacity.
Both these announcements are what I'd call large incremental changes (big changes within existing technologies). If those dynamic range/noise figures turn out to be roughly in line with the publicity then they're much to be welcomed and half the world and I will be glad to see them.
Moreover, such large changes cannot be ignored by other manufacturers otherwise they'd be left behind. That means they'd have to license the tech quickly, that is unless there's some gotcha like ridiculously low or unreliable production yields etc. Anyway, we'll soon see.
I still have some reservations until more info emerges. As I mentioned in the earlier post I hope the changes are mainly real hardware improvements and not just little changes coupled with a great deal of back-end processing. As I mentioned there, we can do without 'smoke-and-mirror' announcements (which, unfortunately, are all too frequent).
How far can a noise reduction algorithm go? Can we use a white painted wall as a mirror?
You mean, "resolution"?
Edit: That kind of resolution is induced by the optics and not by the sensor (if your optics can resolve the target, you can always add more optics to magnify the image if you have a low-pixel-resolution sensor).
Edit2: The poster you replied to is right that optical resolution is a constraint in terms of information that can be reconstructed after being imaged through a specific optical system.
An optical system filters light in phase space (imagine a space of position, angle and intensity of light in each point of the optical system, in a geometrical optics picture) and since some components are cut off, you cannot reconstruct an image to arbitrary fidelity, you lose information (or are stuck with a certain optical resolution).
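The cutoff being described has a standard closed form for a circular aperture. As a worked example (the lens numbers below are illustrative, not measured):

```latex
% Rayleigh criterion: minimum resolvable angular separation for an
% aperture of diameter D at wavelength \lambda:
\theta_{\min} \approx 1.22\,\frac{\lambda}{D}

% Illustrative numbers: a phone lens with an effective aperture of
% D = 4\,\mathrm{mm} at \lambda = 550\,\mathrm{nm} gives
\theta_{\min} \approx 1.22 \times \frac{550\times10^{-9}\,\mathrm{m}}{4\times10^{-3}\,\mathrm{m}}
             \approx 1.7\times10^{-4}\,\mathrm{rad}
```

Detail finer than this angle never reaches the sensor at all, no matter how many pixels sit behind the lens; anything "recovered" below the cutoff has to come from priors, not from the light.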
The word “simply” is doing a lot of work in that last sentence, I’m sure!
https://plato.stanford.edu/entries/computational-philosophy/
(BTW, I just posted same to front page, if the subject interests you & we’re lucky, it’ll generate some discussion)