The stupid thing is that any device with an orientation sensor still writes the image the wrong way round and then sets an EXIF orientation flag, expecting every viewing application to rotate the image.
The camera knows which way it's oriented, so it should just write the pixels out in the correct order. Write the upper-left pixel first. Then the next one. And so on. WTF.
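For what it's worth, this is roughly the work every viewer has to do today. A minimal sketch using Pillow (the filenames are just placeholders):

```python
from PIL import Image, ImageOps

# Open the image as stored (possibly sideways), then apply the EXIF
# orientation tag so the pixels match how the photo was actually taken.
img = Image.open("photo.jpg")
upright = ImageOps.exif_transpose(img)  # honors EXIF tag 0x0112 (Orientation)
upright.save("upright.jpg")
```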
> The camera knows which way it's oriented, so it should just write the pixels out in the correct order. Write the upper-left pixel first. Then the next one. And so on. WTF.
The hardware is likely optimized for the common case, so I would think writing in any other order could be a lot slower. It wouldn't surprise me, for example, if there are image sensors out there that can only be read out top to bottom, left to right.
Also, with RAW images and sensors that aren't plain rectangular grids, I think that would complicate RAW parsing. Code for that could have to support up to four different layouts, depending on how the sensor is designed.
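To make the four-layouts point concrete, here's a toy sketch (my own illustration, not from any camera SDK): rotating a Bayer mosaic changes which color-filter pattern the demosaicing code has to assume, so a parser would need one case per orientation.

```python
import numpy as np

# Label each photosite of a tiny 4x4 sensor with its RGGB Bayer color.
cfa = np.tile(np.array([["R", "G"], ["G", "B"]]), (2, 2))

# Each 90° rotation of the raw mosaic yields a different 2x2 pattern,
# so a RAW parser would need to handle all four.
for k in range(4):
    pattern = "".join(np.rot90(cfa, k)[:2, :2].ravel())
    print(f"{k * 90:3d}°: {pattern}")
# Prints: 0°: RGGB, 90°: GBRG, 180°: BGGR, 270°: GRBG
```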
TIL, and hard agree (at face value). I've been bitten by this: the same image rendered with a different rotation depending on the application. Very annoying.
What are the arguments for this? It would seem easier for everyone if the camera rotated the pixels itself and only stored the original orientation in EXIF if necessary.
Because your non-smartphone camera doesn't have enough RAM/speed to do that, I assume (at least when in burst mode).
If a smartphone camera is doing it, then bad camera app!
One interesting thing about JPEG is that you can rotate an image with no quality loss. You don't need to convert each 8x8 square to pixels, rotate, and convert back; instead you can transform the squares directly in the encoded form. Rotating each 8x8 square is easy, and rotating the whole image is then just a matter of re-ordering the rotated squares.
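As a sanity check of that claim, here's a small numpy/scipy sketch (my own demo, not production JPEG code): rotating an 8x8 block 90° clockwise in the pixel domain gives the same result as transposing its DCT coefficients and negating the odd horizontal frequencies. One caveat I'm fairly sure of: the whole-image trick is only strictly lossless when the image dimensions are multiples of the block size, so edge blocks don't get chopped.

```python
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(0)
block = rng.random((8, 8))          # one 8x8 pixel block
coeffs = dctn(block, norm="ortho")  # its 2D DCT-II coefficients

# Pixel-domain 90° clockwise rotation: transpose, then flip left-right.
rotated = np.fliplr(block.T)

# Same rotation in the DCT domain: transpose the coefficients, then
# negate every odd horizontal frequency (flipping a signal negates
# its odd DCT-II coefficients).
rot_coeffs = coeffs.T.copy()
rot_coeffs[:, 1::2] *= -1

assert np.allclose(idctn(rot_coeffs, norm="ortho"), rotated)
print("DCT-domain rotation matches pixel-domain rotation")
```

This is the same idea tools like jpegtran use for lossless rotation, just shown on a single block.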