How to provide custom autofocus and white balance control on Windows Phone 8


One of the outstanding camera features of Lumia Windows Phone 8 devices is the capability to manually control camera parameters such as focus distance, exposure (shutter) time, white balance, and ISO rate. This article explains how to use these manual controls to replace and extend the built-in automatic camera controls with custom algorithms.

Article Metadata
Code Example
Tested with
SDK: Windows Phone 8.0 SDK
Device(s): Nokia Lumia 920, HTC 8X
Compatibility
Platform(s): Windows Phone 8
Platform Security
Capabilities: ID_CAP_ISV_CAMERA
Article
Created: HarMe (01 Aug 2013)
Last edited: hamishwillee (09 Sep 2013)



Introduction

Usually the automatic camera controls provided for focus, exposure, ISO rate, and white balance are sufficient for taking good photos. But there are specific circumstances where specialized algorithms yield better results. For instance, an app intended for quality control in a factory can use particular features (e.g. color hints, shape, edges, etc.) of the products under inspection to find the optimal focus distance. Other possible use cases include QR code detection, object detection, document scanners, etc.

This article describes how to develop custom controls for autofocus and white balance adjustment based on simple algorithms. Along the way we will also learn how to access live frame data and how to offload image processing to a separate thread using the C# BackgroundWorker class.


Pre-requisites

This section describes how to set up the advanced camera functionality in a Windows Phone 8 application that is required for the steps presented in the rest of this article.

First, create a blank Windows Phone 8 Silverlight project using the Visual Studio wizard and open the MainPage.xaml file. The following lines add a VideoBrush element which displays the live camera viewfinder:

<Canvas>
    <Canvas.Background>
        <VideoBrush x:Name="videoBrush">
        </VideoBrush>
    </Canvas.Background>
</Canvas>

In MainPage.xaml.cs add the following code, which sets up the camera hardware and frees resources when the app goes to the background:

private PhotoCaptureDevice _photoDevice = null;

protected override void OnNavigatedTo(System.Windows.Navigation.NavigationEventArgs e)
{
    // Initialize camera
    InitializeCamera(CameraSensorLocation.Back);

    base.OnNavigatedTo(e);
}

protected override void OnNavigatedFrom(System.Windows.Navigation.NavigationEventArgs e)
{
    // Free camera resource when page goes to background. The null check guards
    // against navigating away before the asynchronous initialization completes.
    if (_photoDevice != null)
    {
        _photoDevice.Dispose();
        _photoDevice = null;
    }

    base.OnNavigatedFrom(e);
}

private async void InitializeCamera(CameraSensorLocation sensorLocation)
{
    // Open camera device
    _photoDevice = await PhotoCaptureDevice.OpenAsync(sensorLocation, new Windows.Foundation.Size(640, 480));

    // Display camera viewfinder data in videoBrush element
    videoBrush.SetSource(_photoDevice);

    // Specify method to be called when new viewfinder image data is available
    _photoDevice.PreviewFrameAvailable += photoDevice_PreviewFrameAvailable;
}

void photoDevice_PreviewFrameAvailable(ICameraCaptureDevice sender, object args)
{
    //TODO: here we will add code later
}

Finally, make sure that permissions to access the camera are set correctly in WMAppManifest.xml:

<Capabilities>
    <Capability Name="ID_CAP_ISV_CAMERA" />
</Capabilities>

<Requirements>
    <Requirement Name="ID_REQ_REARCAMERA" />
</Requirements>

Working with viewfinder frames

When a new viewfinder frame is available, the PhotoCaptureDevice.PreviewFrameAvailable event is fired. Within the attached listener the frame data can be accessed. The PhotoCaptureDevice class provides three different methods for getting the frame image data:

  • GetPreviewBufferArgb: plain ARGB (alpha, red, green, blue) image data. Each pixel is stored as one 32-bit integer containing the A, R, G, and B components, e.g. for an image of 200x100 pixels the buffer contains 200*100 int entries (resulting in a total buffer size of 4*200*100 bytes).
  • GetPreviewBufferYCbCr: Y (luminance) data followed by 2x2 subsampled, interleaved Cb/Cr (chrominance, also referred to as U/V) component data (NV12 format). For an image of 200x100 pixels the first 200*100 bytes in the buffer represent the Y components, followed by 200*100/2 bytes of interleaved Cb/Cr color information (resulting in a total buffer size of 1.5*200*100 bytes).
  • GetPreviewBufferY: Y (luminance) data only (the buffer size of a 200x100 pixel image is 200*100 bytes).


The following graphic visualizes the data structure returned by each method:

Colorspaces.png

Tip: The methods GetPreviewBufferYCbCr and GetPreviewBufferY are generally the fastest and should be used whenever possible (e.g. if no RGB component data is required).

Note: Red and blue channels of ARGB data coming from the Windows Phone 8 camera are swapped.
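
To make these buffer layouts concrete, here is a small sketch (assuming a preview frame of width w and height h, an int[] buffer argb filled by GetPreviewBufferArgb, and a byte[] buffer yCbCr filled by GetPreviewBufferYCbCr) that extracts the components of the pixel at position (x, y):

// Sketch only: w, h, x, y, argb, and yCbCr are assumed to be set up as described above.
int pixel = argb[y * w + x];
byte b = (byte)((pixel >> 16) & 0xFF); // red and blue are swapped (see note above)
byte g = (byte)((pixel >> 8) & 0xFF);
byte r = (byte)(pixel & 0xFF);

// NV12: w*h luminance bytes, then one interleaved Cb/Cr pair per 2x2 pixel block
byte lum = yCbCr[y * w + x];
int chromaIdx = w * h + (y / 2) * w + (x / 2) * 2;
byte cb = yCbCr[chromaIdx];
byte cr = yCbCr[chromaIdx + 1];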


Getting live frames and handling data in a separate thread

Image processing is often a time-consuming task. To keep the viewfinder preview fluid and the user interface responsive, all processing is offloaded to a separate worker thread. The C# BackgroundWorker class is used here for simplicity, but implementing a custom thread class works the same way. First the following member variables are defined in MainPage.xaml.cs:

// Background processing
private BackgroundWorker _backgroundWorker = null;
 
// Buffers holding frame data. We need an int buffer for ARGB data and a byte buffer for Y data
private int[] _frameDataRGB = null;
private byte[] _frameDataY = null;
 
// The camera viewfinder preview frame dimension
private Windows.Foundation.Size _previewFrameSize = new Windows.Foundation.Size();
 
// Abstract image processor class
private AFrameProcessor _frameProcessor = null;

The constructor is changed as follows to initialize the worker thread:

public MainPage()
{
    // Init some initial frame processor like WhiteBalanceProcessor or AutoFocusProcessor
    _frameProcessor = new WhiteBalanceProcessor();

    // Method which is called when the state of the frame processor changes
    _frameProcessor.OnFrameProcessorStateChanged += new AFrameProcessor.FrameProcessorStateHandler(frameProcessor_stateChanged);

    // Setup background worker thread
    _backgroundWorker = new BackgroundWorker();
    _backgroundWorker.DoWork += new DoWorkEventHandler(worker_DoWork);

    // Init other components
    InitializeComponent();
}

The methods InitializeCamera and photoDevice_PreviewFrameAvailable are modified as follows:

private async void InitializeCamera(CameraSensorLocation sensorLocation)
{
    // Open camera device
    _photoDevice = await PhotoCaptureDevice.OpenAsync(sensorLocation, new Windows.Foundation.Size(640, 480));

    // Display camera viewfinder data in XAML videoBrush element
    videoBrush.SetSource(_photoDevice);

    // Init frame buffers
    _previewFrameSize = _photoDevice.PreviewResolution;
    _frameDataRGB = new int[(int)_previewFrameSize.Height * (int)_previewFrameSize.Width];
    _frameDataY = new byte[(int)_previewFrameSize.Height * (int)_previewFrameSize.Width];

    // Set reference in the frame processor to the PhotoCaptureDevice
    _frameProcessor.setPhotoCaptureDevice(_photoDevice);

    // Specify method to be called when new viewfinder image data is available
    _photoDevice.PreviewFrameAvailable += photoDevice_PreviewFrameAvailable;
}

void photoDevice_PreviewFrameAvailable(ICameraCaptureDevice sender, object args)
{
    // Start the processing in the background worker if it's not busy
    if (!_backgroundWorker.IsBusy)
    {
        // Copy buffer depending on format required by frame processor
        switch (_frameProcessor.getDataMode())
        {
            case AFrameProcessor.TDataMode.EDataMode_RGBA:
                sender.GetPreviewBufferArgb(_frameDataRGB);
                break;
            case AFrameProcessor.TDataMode.EDataMode_Y:
                sender.GetPreviewBufferY(_frameDataY);
                break;
        }

        // Start background worker
        _backgroundWorker.RunWorkerAsync();
    }
}

The new code in the method InitializeCamera allocates buffers for RGBA and Y viewfinder frames. In photoDevice_PreviewFrameAvailable the frame data is copied into the matching buffer and the background worker thread is started. If the background worker is still busy processing a previous frame, the new frame is skipped.

The worker thread's DoWork implementation looks as follows:

private void worker_DoWork(object sender, DoWorkEventArgs e)
{
    // We pass the current frame buffer to the frame processor based on the format required by the frame processor
    switch (_frameProcessor.getDataMode())
    {
        case AFrameProcessor.TDataMode.EDataMode_RGBA:
            _frameProcessor.addFrame(_frameDataRGB);
            break;
        case AFrameProcessor.TDataMode.EDataMode_Y:
            _frameProcessor.addFrame(_frameDataY);
            break;
    }
}

In the DoWork method the new frame data is added to the frame processor which does further processing.


The abstract class AFrameProcessor is the basis for all frame processors discussed in the remainder of this article. (For the sake of simplicity all code is put into an abstract class; for production code it would be cleaner to specify an interface instead.)

abstract class AFrameProcessor
{
    // Specifies possible frame data types
    public enum TDataMode
    {
        EDataMode_RGBA,
        EDataMode_Y
    }

    // Specifies frame processor states
    public enum TFrameProcessorState
    {
        EState_Idle,
        EState_Processing,
        EState_Success,
        EState_Error
    }

    // Current state
    protected TFrameProcessorState _state = TFrameProcessorState.EState_Idle;

    // If true, then the processor will be reset when the next frame arrives
    protected bool _reset = false;

    // Delegate for status changes
    public delegate void FrameProcessorStateHandler(object sender, EventArgs<TFrameProcessorState> state);
    public event FrameProcessorStateHandler OnFrameProcessorStateChanged;

    // Reference to the PhotoCaptureDevice
    protected PhotoCaptureDevice _photoDevice = null;

    // Returns the data type required for processing
    abstract public TDataMode getDataMode();

    // Set the PhotoCaptureDevice
    public void setPhotoCaptureDevice(PhotoCaptureDevice device)
    {
        _photoDevice = device;
    }

    // Reset frame processor
    public void resetProcessor()
    {
        _reset = true;
    }

    // Methods for adding new frames. Data is passed by reference.
    abstract public void addFrame(byte[] data);
    abstract public void addFrame(int[] data);

    // Helper for firing state event
    protected void fireStateStatus(TFrameProcessorState state, String message)
    {
        if (OnFrameProcessorStateChanged != null)
        {
            OnFrameProcessorStateChanged(this, new EventArgs<TFrameProcessorState>(state, message));
        }
    }
}
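
The state change event above uses a generic EventArgs<TFrameProcessorState> class which is not part of the .NET framework and is not shown in the listings. A minimal sketch matching its usage in this article (a constructor taking the state value and a message, plus a Value property) could look like this:

// Minimal sketch of the generic event argument helper assumed by AFrameProcessor.
// It simply carries the new processor state and a human-readable message.
public class EventArgs<T> : EventArgs
{
    public EventArgs(T value, String message)
    {
        Value = value;
        Message = message;
    }

    public T Value { get; private set; }
    public String Message { get; private set; }
}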

Next, an indicator rectangle is added to MainPage.xaml on top of the viewfinder to visualize the frame processor's state:

<Rectangle x:Name="poiRectangle" Stroke="White" Height="100" Width="200" Canvas.Left="300" Canvas.Top="195"></Rectangle>

The implementation of the frame processor's OnFrameProcessorStateChanged handler is:

private void frameProcessor_stateChanged(object sender, EventArgs<AFrameProcessor.TFrameProcessorState> state)
{
    Color color = Colors.White; // initialize to satisfy definite assignment
    switch (state.Value)
    {
        case AFrameProcessor.TFrameProcessorState.EState_Idle:
            color = Colors.White;
            break;
        case AFrameProcessor.TFrameProcessorState.EState_Error:
            color = Colors.Red;
            break;
        case AFrameProcessor.TFrameProcessorState.EState_Processing:
            color = Colors.Yellow;
            break;
        case AFrameProcessor.TFrameProcessorState.EState_Success:
            color = Colors.Green;
            break;
    }

    Dispatcher.BeginInvoke(() =>
    {
        poiRectangle.Stroke = new SolidColorBrush(color);
    });
}

The border color of the indicator rectangle is changed on the main UI thread using the page's Dispatcher.

The framework for implementing custom camera control algorithms is now set up completely. The remainder of this article discusses actual implementations of various sample algorithms.


Autofocus using contrast

Autofocus (AF) can be implemented as active AF or passive AF. Active AF uses infrared light or ultrasound to measure the focus distance. Passive AF uses either phase detection (which requires additional hardware) or contrast detection. This article concentrates on passive AF with contrast detection, which uses one or more measurement points on the camera sensor to find the best focus distance. The local focus quality of all points is then combined into a global quality metric (e.g. using area-based weighting, a simple mean, position-dependent weighting, etc.). Features for calculating the local focus quality are, for instance, edges and gradients. The principal workflow is depicted here:

Autofocusworkflow.png

  1. Measure focus quality for current focus distance.
  2. Adjust focus distance.
  3. Continue with Step 1 until the best focus distance is found.
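
This loop can be summarized with the following minimal sketch, where setFocusDistance and measureQuality are hypothetical helpers and minDistance, maxDistance, and step describe the supported focus range (the actual implementation follows below):

// Sketch only: helper names and range variables are placeholders.
// Scans the focus range from far to near and keeps the sharpest position.
uint bestDistance = maxDistance;
int bestQuality = int.MinValue;
for (uint d = maxDistance; d >= minDistance + step; d -= step)
{
    setFocusDistance(d);            // step 2: adjust focus distance
    int quality = measureQuality(); // step 1: measure focus quality for this distance
    if (quality > bestQuality)
    {
        bestQuality = quality;
        bestDistance = d;
    }
}
setFocusDistance(bestDistance);     // step 3: apply the best focus distance found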


The following graphic shows how contrast based focus quality calculation based on gradients works in principle:

Autofocuscontrast2.png

Depending on the sharpness (gradient) of edges, the histogram of intensities changes. An image with clear edges shows steep gradients, while in a blurred image the gradients are flatter.


Implementation

First a helper class FocusDistanceMeasurement is defined which stores the sharpness and focus distance values:

class FocusDistanceMeasurement
{
    public UInt32 focusDistance = 0;
    public int sharpness = 0;
}

All autofocus related code is implemented in the class AutoFocusProcessor which extends the abstract class AFrameProcessor:

class AutoFocusProcessor : AFrameProcessor
{
    // Range of focus values provided by the camera driver
    private UInt32 _minValue = 0;
    private UInt32 _maxValue = 0;

    // The focus distance of the last frame
    private UInt32 _focusDistance = 0;

    // Step size used for adjustment
    private UInt32 _focusStepSize = 20; // change this for coarser/finer stepping

    // A list containing the previous measurements
    private List<FocusDistanceMeasurement> _focusHistory = new List<FocusDistanceMeasurement>();

    // The sharpness is calculated by splitting the image into areas of the given size
    private int[] _sharpnessArea = new int[] { 16, 12 }; // make sure that this is a divisor of the preview size
    private int[] _sharpnessMap = null; // holds the sharpness data of the last frame

    // Different autofocus mode implementations
    enum TContrastMode
    {
        EContrastMode_SMD,
        EContrastMode_MIN_MAX,
        EContrastMode_VARIANCE
    };

    private TContrastMode _contrastMode = TContrastMode.EContrastMode_SMD; // change this to test the various modes

    // Not used: this processor only consumes luminance (Y) data, but the
    // abstract base class requires both addFrame overloads to be implemented
    public override void addFrame(int[] data) { }
}

The class variables hold the supported focus distance range, the sharpness map of the current frame, the focus history, and other helper variables. The enum TContrastMode specifies three different focus modes which will be described later. All autofocus algorithms require only luminance information:

public override TDataMode getDataMode()
{
    return TDataMode.EDataMode_Y;
}

Each time a new frame arrives the following code is executed:

public async override void addFrame(byte[] data)
{
    if (_reset) // Reset frame processor
    {
        _state = TFrameProcessorState.EState_Processing;
        fireStateStatus(_state, "Autofocus processing started");

        // Clear history of focus distances
        _focusHistory.Clear();

        // Get the supported focus distances
        CameraCapturePropertyRange range = PhotoCaptureDevice.GetSupportedPropertyRange(CameraSensorLocation.Back, KnownCameraGeneralProperties.ManualFocusPosition);

        _minValue = (UInt32)range.Min;
        _maxValue = (UInt32)range.Max;

        // Set the focus distance to the maximum
        _photoDevice.SetProperty(KnownCameraGeneralProperties.ManualFocusPosition, _maxValue);
        await _photoDevice.FocusAsync();

        _focusDistance = _maxValue + 1; // make sure that this differs for the first run

        _reset = false;
    }
    else if (_state == TFrameProcessorState.EState_Processing)
    {
        UInt32 curValue = (UInt32)_photoDevice.GetProperty(KnownCameraGeneralProperties.ManualFocusPosition);
        if (curValue != _focusDistance) // focus distance has changed
        {
            _focusDistance = curValue;

            // Generate sharpness map
            generateSharpnessMap(data);

            // Check focus change results
            bool focusingFinished = false;
            if (_focusHistory.Count() > 4)
            {
                bool gettingWorse = true;

                // Check if the sharpness is getting continuously worse (by at least 1% per tick).
                // If so, terminate auto focus and set best position.
                for (int i = _focusHistory.Count() - 1; i > _focusHistory.Count() - 4; i--)
                {
                    if ((_focusHistory[i].sharpness * 101) / 100 > _focusHistory[i - 1].sharpness)
                    {
                        gettingWorse = false;
                        break;
                    }
                }

                focusingFinished = gettingWorse;
            }

            // We have either reached the end of the possible focus range, or we found a good distance
            if ((int)_focusDistance - (int)_focusStepSize < _minValue || focusingFinished)
            {
                // Select the best value
                FocusDistanceMeasurement best = _focusHistory[0];
                for (int i = 1; i < _focusHistory.Count(); i++)
                {
                    if (_focusHistory[i].sharpness > best.sharpness)
                    {
                        best = _focusHistory[i];
                    }
                }

                // Set best focus distance
                _photoDevice.SetProperty(KnownCameraGeneralProperties.ManualFocusPosition, best.focusDistance);
                await _photoDevice.FocusAsync();

                // Notify about success
                _state = TFrameProcessorState.EState_Success;
                fireStateStatus(_state, "Focus success at distance " + best.focusDistance);

                // Clean up
                _focusHistory.Clear();
            }
            else if (_state == TFrameProcessorState.EState_Processing)
            {
                // If we have not found a good focus distance yet then try the next one
                _photoDevice.SetProperty(KnownCameraGeneralProperties.ManualFocusPosition, _focusDistance - _focusStepSize);
                await _photoDevice.FocusAsync();
            }
        }
    }
}

When focusing starts (i.e. after _reset has been set to true) the focus distance is set to the maximum (infinity).

Warning: This code gets the maximum focus distance for the device using the value of the property ManualFocusPosition. While this is the correct approach and works for Nokia and HTC devices, at the time of writing there are defects in Huawei and Samsung devices which mean that the wrong values are returned for minimum and maximum. Production code should handle this case.

For each new frame that arrives, a sharpness map is calculated and the sharpness value for the region of interest is stored in the history. When the sharpness gets worse over consecutive steps, or the minimum of the supported focus range is reached, the best focus distance is selected.

Note: After manually setting the focus distance the method PhotoCaptureDevice.FocusAsync() is called to set the focus. On some devices this is not necessary.

The method generateSharpnessMap calculates the overall image sharpness and stores it in the history:

private void generateSharpnessMap(byte[] data)
{
    // Dimensions of the original frame
    int w = (int)_photoDevice.PreviewResolution.Width;
    int h = (int)_photoDevice.PreviewResolution.Height;

    // and the scaled down sub-image
    int ws = w / _sharpnessArea[0];
    int hs = h / _sharpnessArea[1];

    // Generate a new buffer for the sharpness map. CAUTION: in production code we should check the dimension as well
    // because the preview resolution might have changed.
    if (_sharpnessMap == null)
    {
        _sharpnessMap = new int[_sharpnessArea[0] * _sharpnessArea[1]];
    }

    // Calculate sharpness for each sub image
    int x, y;
    int sharpness;
    for (y = 0; y < _sharpnessArea[1]; y++)
    {
        for (x = 0; x < _sharpnessArea[0]; x++)
        {
            switch (_contrastMode)
            {
                case TContrastMode.EContrastMode_MIN_MAX:
                    sharpness = calculateSharpness_minMax(x * ws, y * hs, ws, hs, w, data);
                    break;
                case TContrastMode.EContrastMode_SMD:
                    sharpness = calculateSharpness_difference(x * ws, y * hs, ws, hs, w, data);
                    break;
                case TContrastMode.EContrastMode_VARIANCE:
                    sharpness = calculateSharpness_variance(x * ws, y * hs, ws, hs, w, data);
                    break;
                default:
                    sharpness = 0;
                    break;
            }

            _sharpnessMap[y * _sharpnessArea[0] + x] = sharpness;
        }
    }

    // Get sharpness for the ROI
    int[] sampleRegion = new int[] { 200, 100 }; // we just take the center of the viewfinder here
    int startX = (w - sampleRegion[0]) / 2;
    int startY = (h - sampleRegion[1]) / 2;

    // Get the corresponding region of the ROI in the sharpness map
    int minSx = (int)(startX * _sharpnessArea[0] / w + 0.5f);
    int minSy = (int)(startY * _sharpnessArea[1] / h + 0.5f);
    int maxSx = (int)((startX + sampleRegion[0]) * _sharpnessArea[0] / w + 0.5f);
    int maxSy = (int)((startY + sampleRegion[1]) * _sharpnessArea[1] / h + 0.5f);
    if (maxSx >= _sharpnessArea[0]) maxSx = _sharpnessArea[0] - 1;
    if (maxSy >= _sharpnessArea[1]) maxSy = _sharpnessArea[1] - 1;
    if (minSx >= _sharpnessArea[0]) minSx = _sharpnessArea[0] - 1;
    if (minSy >= _sharpnessArea[1]) minSy = _sharpnessArea[1] - 1;
    if (minSx < 0) minSx = 0;
    if (minSy < 0) minSy = 0;
    if (maxSx < 0) maxSx = 0;
    if (maxSy < 0) maxSy = 0;

    // Get the total sharpness within the ROI
    sharpness = 0;
    for (y = minSy; y <= maxSy; y++)
    {
        for (x = minSx; x <= maxSx; x++)
        {
            sharpness += _sharpnessMap[y * _sharpnessArea[0] + x];
        }
    }

    // Store sharpness information in history
    FocusDistanceMeasurement m = new FocusDistanceMeasurement();
    m.focusDistance = _focusDistance;
    m.sharpness = sharpness;

    _focusHistory.Add(m);
}


Three different methods for calculating sharpness maps are presented in this article:

  1. MIN/MAX method.
  2. SMD method.
  3. Variance method.


MIN/MAX Method

The MIN/MAX algorithm searches for the minimum and maximum luminance values; the difference between maximum and minimum corresponds to the sharpness. The idea behind this is that a big difference equals high contrast. This method is very sensitive to noise.

private int calculateSharpness_minMax(int x, int y, int w, int h, int stride, byte[] data)
{
    int max = int.MinValue;
    int min = int.MaxValue;

    int i, j;
    for (j = y; j < y + h; j++)
    {
        for (i = x; i < x + w; i++)
        {
            int lum = data[j * stride + i];

            max = (lum > max) ? lum : max;
            min = (lum < min) ? lum : min;
        }
    }

    return (max + min > 0) ? (max - min) : 0;
}


SMD Method

The SMD (sum-modulus-difference) method calculates the horizontal and vertical gradients; the maximum corresponds to the sharpest point. The SMD method is more tolerant to noise because it uses local (neighbor) information.

private int calculateSharpness_difference(int x, int y, int w, int h, int stride, byte[] data)
{
    // Horizontal and vertical SMD
    int[] smd = new int[] { 0, 0 };

    int i, j;

    // Horizontal
    for (j = y; j < y + h; j++)
    {
        for (i = x; i < x + w - 1; i++)
        {
            int a = data[j * stride + i];
            int b = data[j * stride + i + 1];
            smd[0] += Math.Abs(a - b);
        }
    }

    // Vertical
    for (j = y; j < y + h - 1; j++)
    {
        for (i = x; i < x + w; i++)
        {
            int a = data[j * stride + i];
            int b = data[(j + 1) * stride + i];
            smd[1] += Math.Abs(a - b);
        }
    }

    return smd[0] + smd[1];
}


Variance Method

The variance method calculates the variance of the image luminance using the identity Var(X) = E[X^2] - (E[X])^2. A higher variance means that there is more contrast in the image.

private int calculateSharpness_variance(int x, int y, int w, int h, int stride, byte[] data)
{
    long lumSum = 0;
    long lumSquaredSum = 0;
    int numPixels = w * h;

    int i, j;

    for (j = y; j < y + h; j++)
    {
        for (i = x; i < x + w; i++)
        {
            int lum = data[j * stride + i];

            // Accumulate the sums in 64-bit integers; dividing each term by
            // numPixels inside the loop would lose the result to integer division
            lumSum += lum;
            lumSquaredSum += lum * lum;
        }
    }

    // Var(X) = E[X^2] - (E[X])^2
    long mean = lumSum / numPixels;
    return (int)(lumSquaredSum / numPixels - mean * mean);
}


White balance from reference color

A popular way to manually adjust white balance is to take a picture of a white colored region (for instance a sheet of paper) and use it as the white reference for the color temperature adjustment. This step usually involves working with RAW image sensor data and color conversion matrices. On Windows Phone there is no access to this information, so a naive approach is implemented using RGB data. This approach adjusts the color temperature until the sensor measurement of the white reference region best matches the reference color white:

Whitebalanceworkflow.png


Implementation

The following code shows the WhiteBalanceProcessor class which extends the abstract class AFrameProcessor:

class WhiteBalanceProcessor : AFrameProcessor
{
    // The amount of Kelvin the color temperature is changed per step. Higher values
    // yield faster results while lower values are more accurate.
    private static UInt32 _adjustmentFactor = 100;

    // Range of color temperatures supported by the camera driver
    private UInt32 _minValue = 0;
    private UInt32 _maxValue = 0;

    // Best color temperature and white value found yet
    private UInt32 _bestTemperature = 0;
    private byte _bestWhitePixelValue = 0;

    // The algorithm uses RGB data
    public override TDataMode getDataMode()
    {
        return TDataMode.EDataMode_RGBA;
    }

    // Not used: this processor only consumes RGB data, but the abstract base
    // class requires both addFrame overloads to be implemented
    public override void addFrame(byte[] data) { }

    public override void addFrame(int[] data)
    {
        // Get white value for ROI
        byte newWhitePixelValue = calculateWhiteValue(data);

        if (_reset) // reset frame processor
        {
            _state = TFrameProcessorState.EState_Processing;
            fireStateStatus(_state, "White balance processing started");

            // Turn automatic white balance off
            _photoDevice.SetProperty(KnownCameraPhotoProperties.WhiteBalancePreset, null);

            // Get supported color temperatures
            CameraCapturePropertyRange range = PhotoCaptureDevice.GetSupportedPropertyRange(CameraSensorLocation.Back, KnownCameraPhotoProperties.ManualWhiteBalance);

            _minValue = (UInt32)range.Min;
            _maxValue = (UInt32)range.Max;

            _bestTemperature = _minValue + ((_maxValue - _minValue) / 2);
            _bestWhitePixelValue = newWhitePixelValue;

            // Set color temperature to maximum
            _photoDevice.SetProperty(KnownCameraPhotoProperties.ManualWhiteBalance, _maxValue);

            _reset = false;
        }
        else if (_state == TFrameProcessorState.EState_Processing)
        {
            UInt32 curValue = (UInt32)_photoDevice.GetProperty(KnownCameraPhotoProperties.ManualWhiteBalance);
            if (curValue - _adjustmentFactor >= _minValue)
            {
                // Check if white value has improved
                if (newWhitePixelValue >= _bestWhitePixelValue)
                {
                    _bestTemperature = curValue;
                    _bestWhitePixelValue = newWhitePixelValue;
                }

                _photoDevice.SetProperty(KnownCameraPhotoProperties.ManualWhiteBalance, curValue - _adjustmentFactor);
            }
            else
            {
                // Success
                // Future extension: a more sophisticated version should now search in this region for the best value by using a smaller adjustment factor
                _photoDevice.SetProperty(KnownCameraPhotoProperties.ManualWhiteBalance, _bestTemperature);

                _state = TFrameProcessorState.EState_Success;
                fireStateStatus(_state, "White balance success");
            }
        }
    }

    // Calculate the average white value for a sample region in the center
    private byte calculateWhiteValue(int[] frameData)
    {
        int w = (int)_photoDevice.PreviewResolution.Width;
        int h = (int)_photoDevice.PreviewResolution.Height;

        // Only pixels within this sample region will be used for calculation
        int[] sampleRegion = new int[] { 10, 10 };

        int startX = (w - sampleRegion[0]) / 2;
        int startY = (h - sampleRegion[1]) / 2;

        int x, y;
        long sum = 0;

        int idx;
        int r, g, b;
        for (y = startY; y < startY + sampleRegion[1]; y++)
        {
            for (x = startX; x < startX + sampleRegion[0]; x++)
            {
                idx = y * w + x;

                // Get RGB components (note the swapped red and blue channels)
                int fd = frameData[idx];
                b = (fd >> 16) & 0xff;
                g = (fd >> 8) & 0xff;
                r = (fd >> 0) & 0xff;

                // Create average. Perfect white is 255.
                sum += (b + g + r) / 3;
            }
        }

        // Scale down to 0...255
        return (byte)(sum / (sampleRegion[0] * sampleRegion[1]));
    }
}

Each time a new frame arrives, the method addFrame(int[]) calculates the "white value" of the region of interest as the mean of the RGB components. An optimal white value is 255, but due to sensor noise, shadows, etc. it is usually smaller. The color temperature is reduced step-wise (using PhotoCaptureDevice.SetProperty(KnownCameraPhotoProperties.ManualWhiteBalance, COLOR_TEMPERATURE_KELVIN)) until the lower range limit is reached. From the set of measurements, the color temperature with the best (closest to 255) white value is set as the optimal white balance.


Remarks

In our tests this fast and simple method yields good results, but it is sensitive to noise and shadows. The naive assumption of calculating the white value as the mean of the R/G/B values can also be problematic (e.g. when there are two dominant colors). More advanced methods could employ other color spaces (for instance HLS/HSV), filtering of the region of interest, and outlier detection and removal. The speed of the presented method could be improved by a coarse-to-fine approach, where first a set of coarse temperatures is tested and then the best matches are used for a local search, as sketched below.
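
A minimal sketch of such a coarse-to-fine search, assuming hypothetical helpers setTemperature and measureWhiteValue (which would apply a temperature, wait for a new frame, and measure the white value) and a supported range minTemp...maxTemp:

// Sketch only: helper names, step sizes, and range variables are placeholders.
uint coarseStep = 500, fineStep = 50;
uint bestTemp = minTemp;
byte bestWhite = 0;

// Coarse scan over the whole supported range
for (uint t = minTemp; t <= maxTemp; t += coarseStep)
{
    setTemperature(t);
    byte white = measureWhiteValue();
    if (white > bestWhite) { bestWhite = white; bestTemp = t; }
}

// Fine scan around the best coarse match
uint lo = (bestTemp > minTemp + coarseStep) ? bestTemp - coarseStep : minTemp;
uint hi = Math.Min(bestTemp + coarseStep, maxTemp);
for (uint t = lo; t <= hi; t += fineStep)
{
    setTemperature(t);
    byte white = measureWhiteValue();
    if (white > bestWhite) { bestWhite = white; bestTemp = t; }
}

setTemperature(bestTemp);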

Note: Due to a bug in the firmware of some Lumia devices, manual adjustment of white balance does not work. A bug report has already been filed. The methods work properly in the WP8 emulator and on other devices.


Summary and Conclusions

This article presented the complete pipeline from setting up advanced camera functionality on Windows Phone 8 to implementing custom camera algorithms using the manual controls provided by the platform. The article also described how to employ multi-threading for image processing. The concept of passive contrast-based autofocus was demonstrated using three different sharpness algorithms. As a second example, a naive white balance algorithm was presented which sets the color temperature based on a white reference color.


Credits

Some of the concepts presented in this article are based on the FCamera project and the TU Berlin image processing lectures.
