I have been experimenting with acquiring images at different exposures from the video stream, because still-image capture latency is too high. My code currently receives one frame, then sets the exposure to another supported value before the next one.
When capturing images this way, although the zoom factor stays the same (0), the image taken with EExposureNight looks somewhat "zoomed out" compared to the other images. Development is done on a Nokia N95.
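For reference, the bracketing loop looks roughly like the sketch below, assuming the standard S60 camera API (CCamera from ecam.h). The class name, the StoreFrameL() helper, and the KSettleFrames skip count are mine, not part of the API; the skip is a guess at how many frames the driver needs before a new exposure setting takes effect.

```cpp
// Sketch: alternate exposures between viewfinder frames (S60 CCamera).
// Assumes iCamera is reserved and powered on, and the viewfinder is running.
void CBracketGrabber::ViewFinderFrameReady(CFbsBitmap& aFrame)
    {
    if (iSkip > 0)
        {
        --iSkip;                // let the new exposure settle (assumption)
        return;
        }
    StoreFrameL(aFrame);        // hypothetical helper: copy the bitmap for later use
    iNight = !iNight;           // alternate between the two bracketed exposures
    TRAP_IGNORE(iCamera->SetExposureL(
        iNight ? CCamera::EExposureNight : CCamera::EExposureAuto));
    iSkip = KSettleFrames;      // e.g. 2-3 frames; tune per device
    }
```

The "zoomed out" EExposureNight frames appear even though no zoom call is made between the SetExposureL() calls in this loop.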
1) Is there a technical reason for this, i.e. in the sensor hardware, or is it my fault?
2) Would a Partnering API Request solve our problem of taking multiple images at different exposures, with the shortest possible time between shots and the same *visually perceived* zoom factor in each image?
We really need this in order to implement image deconvolution and super-resolution algorithms.
Thanks for any reply,