What about computational methods? I have always wondered how stacking many short exposures without tracking compares to deconvolving a single long exposure. There seems to be software able to do this by taking into account both the motion blur and the PSF of the imaging system:
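For intuition, here's a toy shift-and-add sketch of the stacking half of that comparison (a synthetic one-pixel "star", made-up noise levels, and FFT cross-correlation for frame alignment; not any real astro package):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic scene: one bright "star" pixel on a dark background.
H, W = 32, 32
truth = np.zeros((H, W))
truth[16, 16] = 100.0

def short_exposure(shift):
    """One untracked frame: the star has drifted by `shift`, plus read noise."""
    frame = np.roll(truth, shift, axis=(0, 1))
    return frame + rng.normal(0, 5.0, size=(H, W))

shifts = [(i % 3, i % 2) for i in range(64)]   # small frame-to-frame drift
frames = [short_exposure(s) for s in shifts]

# Align each frame to the first via FFT cross-correlation, then average.
ref = frames[0]
stack = np.zeros_like(ref)
for f in frames:
    corr = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(f))).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    stack += np.roll(f, (dy, dx), axis=(0, 1))
stack /= len(frames)

print("single-frame background std:", frames[0][0:8, 0:8].std())
print("stacked background std:     ", stack[0:8, 0:8].std())
```

Averaging 64 aligned frames knocks the background noise down by roughly 8x (sqrt of 64), while the star stays at full brightness. Deconvolution of one long exposure attacks the blur kernel instead of the noise, which is why combining the two is the interesting case.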
The problem is that the noise can swamp the signal. Another example of this is doing astrophotography during the day. The sun doesn't block anything; it just makes the sky glow, and that glow is noise. Theoretically the sky carries exactly as much signal from space as it does at night, but because the sun adds so much noise it's completely lost.
> "because the sun adds so much noise it's completely lost."
Do you mean that it would be conceptually possible to image planets or even deep-sky objects during the day with incredibly efficient denoising software? (I am a noob in astronomy)
It would be, yes. As early as the 1950s, several avionics companies made daylight-capable star trackers (for jam-resistant long-distance airplane navigation) using chopper techniques. Those trackers were mostly mechanical, except for the electronics that demodulated the star signal from the single-pixel sensor.
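Roughly what a chopper scheme buys you, as a toy lock-in simulation (all numbers invented; the real trackers did this with analog electronics, not code): modulate the star at a known frequency, then correlate the detector output against that reference so the unmodulated daylight background averages away.

```python
import numpy as np

rng = np.random.default_rng(2)

fs = 10_000.0                      # sample rate, Hz
t = np.arange(0, 2.0, 1 / fs)      # 2 seconds of single-pixel data
f_chop = 400.0                     # chopper wheel frequency, Hz

star = 1.0                         # tiny star signal amplitude
sky = 1000.0                       # huge, slowly drifting daylight background
chop = 0.5 * (1 + np.sign(np.sin(2 * np.pi * f_chop * t)))  # 0/1 square wave

# The detector sees background + chopped star + sensor noise.
signal = sky * (1 + 0.01 * np.sin(2 * np.pi * 0.3 * t)) + star * chop
signal += rng.normal(0, 5.0, t.size)

# Lock-in demodulation: multiply by the reference and average.
# The background is uncorrelated with the 400 Hz reference, so it drops out.
ref = np.sin(2 * np.pi * f_chop * t)
amplitude = 2 * np.mean(signal * ref)   # fundamental of the chopped star
est_star = amplitude * np.pi / 2        # square wave fundamental is (2/pi)*star
print("recovered star amplitude:", est_star)
```

The recovered amplitude comes out near 1.0 even though the star is three orders of magnitude fainter than the sky, which is the same trick lock-in amplifiers use to pull signals out of overwhelming backgrounds.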
I suspect diffusion models could shine at denoising single-shot deep-sky images. I'll attempt it when I find the bandwidth. I do a lot of deep-sky landscape photography (IG: @dheeranet), and I want to capture these in one go instead of stacking the ground (untracked) and sky (tracked) separately.