> The camera knows which way it's oriented, so it should just write the pixels out in the correct order. Write the upper-left pixel first. Then the next one. And so on. WTF.
The hardware is likely optimized for the common case, so I'd expect writing the pixels out in a rotated order to be a lot slower. It wouldn't surprise me, for example, if there are image sensors out there that can only be read out in top-to-bottom, left-to-right order.
Also, with RAW images and sensors that aren't simple rectangular grids, I think rotating at capture time would complicate RAW parsing. Code for that could have to support up to four different formats, depending on how the sensor is designed.
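To make that concrete, here's a hypothetical sketch (assuming a plain 2x2 Bayer CFA and even sensor dimensions, not any particular camera's raw format): rotating the raw mosaic changes which of the four patterns (RGGB, GRBG, GBRG, BGGR) the demosaicking code has to assume, which is where the "up to four formats" comes from.

```python
import numpy as np

# Hypothetical sketch: rotating a raw Bayer mosaic 90 degrees changes the
# 2x2 CFA pattern the demosaicker must assume. Assumes even sensor
# dimensions, so the tile at (0,0) stays aligned after rotation.

def cfa_after_rot90(pattern: str, k: int = 1) -> str:
    """Return the 2x2 CFA pattern after rotating the mosaic 90 deg CCW k times."""
    tile = np.array(list(pattern)).reshape(2, 2)   # e.g. [['R','G'],['G','B']]
    rotated = np.rot90(tile, k)                    # same rotation the pixels get
    return "".join(rotated.flatten())

for p in ["RGGB", "GRBG", "GBRG", "BGGR"]:
    print(p, "->", cfa_after_rot90(p))   # e.g. RGGB -> GBRG
```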
At this point I'd expect any camera ASIC to be able to incorporate this logic for plenty-fast processing. Or to do it when writing out the image file, after acquiring the frame to a buffer.
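For what it's worth, once the frame is sitting in a buffer the rotation itself is trivial. A rough sketch of that second approach, assuming a plain interleaved RGB buffer and the usual EXIF orientation values (not any real camera firmware):

```python
import numpy as np

# Rough sketch, not a real camera pipeline: rotate a decoded RGB frame in a
# buffer according to its EXIF-style orientation before writing the file out.

def apply_orientation(frame: np.ndarray, orientation: int) -> np.ndarray:
    """frame: H x W x 3 array. orientation: EXIF orientation tag (1, 3, 6, 8)."""
    if orientation == 3:      # shot upside down: rotate 180 degrees
        return np.rot90(frame, 2)
    if orientation == 6:      # needs 90 degrees clockwise to display upright
        return np.rot90(frame, -1)
    if orientation == 8:      # needs 90 degrees counter-clockwise
        return np.rot90(frame, 1)
    return frame              # 1 (or anything else): leave as-is

# Usage: a 480x640 frame shot with the camera held in portrait (tag 6)
frame = np.zeros((480, 640, 3), dtype=np.uint8)
upright = apply_orientation(frame, 6)
print(upright.shape)  # (640, 480, 3)
```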
Your raw-image point is interesting. I'm curious how the arrangement of the photosites would play into this.