The camera knows which way it's oriented, so it should just write the pixels out in the correct order. Write the upper-left pixel first. Then the next one. And so on. WTF.
What are the arguments for this? It would seem easier for everyone to rotate and then store exif for the original rotation if necessary.
Performance. Rotation during rendering is often free, whereas the camera would need an intermediate buffer + copy if it's unable to change the way it samples from the sensor itself.
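Roughly, the trade-off looks something like this (a minimal sketch using numpy; the frame size and EXIF value are just illustrative):

```python
import numpy as np

# Hypothetical sensor readout: rows arrive top-to-bottom, left-to-right.
frame = np.random.randint(0, 4096, size=(3000, 4000), dtype=np.uint16)

# Option 1: write the pixels as they came off the sensor and record the
# orientation in metadata. No extra pass over the data; the viewer rotates
# at render time, which is typically free on the GPU.
metadata = {"Orientation": 6}  # EXIF value 6 = rotate 90 degrees CW to display

# Option 2: physically rotate before encoding. np.rot90 only returns a view,
# so the copy() is what models the intermediate buffer: a second full-frame
# allocation plus a pass that touches every pixel.
rotated = np.rot90(frame, k=-1).copy()
```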
The hardware is likely optimized for the common case, so I would think rotating in the camera could be a lot slower. It wouldn't surprise me, for example, if there are image sensors out there that can only be read out in top-to-bottom, left-to-right order.
Also, with RAW images and sensors that aren't rectangular grids, I think that would complicate RAW image parsing. Code for that would have to support up to four different formats, depending on how the sensor is designed.
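To make that concrete (a toy sketch, not tied to any particular RAW format): rotating a Bayer mosaic also rotates the colour-filter pattern, so a parser that assumed one fixed layout would have to know which rotated layout it's looking at.

```python
import numpy as np

# A 2x2 Bayer tile for an RGGB sensor, repeated across the frame:
#   R G
#   G B
rggb = np.array([["R", "G"],
                 ["G", "B"]])

# Rotating the mosaic 90 degrees clockwise moves a different colour to the
# top-left corner, so the CFA pattern the parser sees changes too.
print(np.rot90(rggb, k=-1))
# [['G' 'R']
#  ['B' 'G']]   -> this file would now have to be decoded as GRBG, not RGGB
```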
Your raw-image idea is interesting. I'm curious how the arrangement of photosites would play into this.
RAW images aren't JPEGs, so they're not relevant to the discussion.
If a smartphone camera is doing it, then bad camera app!
It's basically a shame that the EXIF metadata contains things that affect the rendering.
This is particularly important on smartphones and other battery-operated devices. However, most smartphones save the photo the same way regardless of orientation and simply add a display-rotation flag to the metadata.
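If you ever need to bake the rotation into the pixels after the fact, something like this works (a small sketch using Pillow; the filenames are placeholders):

```python
from PIL import Image, ImageOps

# The phone saved the pixels in sensor order and only set the EXIF
# Orientation tag.
img = Image.open("photo.jpg")

# exif_transpose applies the Orientation tag to the pixels and clears it,
# so software that ignores EXIF still shows the photo the right way up.
upright = ImageOps.exif_transpose(img)
upright.save("photo_upright.jpg")
```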
It can be super annoying sometimes, as one can't really disable the feature on many devices. =3
Could you explain this one?