

HDR I - Implementing High Dynamic Range filters using Nokia Imaging SDK

Featured Article (15 Sep)

This article first explains how to implement an HDR engine, and then how to combine the Nokia Imaging SDK's filters to create new filters or improve existing ones.

Winner: This article was a winner in the Nokia Imaging Wiki Competition 2013Q3.

Article Metadata
Code Example
Tested with
SDK: Windows Phone 8.0 SDK, Nokia Imaging SDK Beta 1
Device(s): Nokia Lumia 1020, Nokia Lumia 925, Nokia Lumia 920, Nokia Lumia 820
Windows Phone 8
Platform Security
Created: galazzo (22 Aug 2013)
Last edited: galazzo (16 Dec 2013)



High-dynamic-range imaging (HDRI or HDR) is a set of methods used in imaging and photography to capture a greater dynamic range between the lightest and darkest areas of an image than standard digital or photographic methods can. HDR images can represent more accurately the range of intensity levels found in real scenes, from direct sunlight to faint starlight, and are often captured by taking several differently exposed pictures of the same subject.

Non-HDR cameras take pictures at one exposure level with a limited contrast range. This results in the loss of detail in bright or dark areas of a picture, depending on whether the camera had a low or high exposure setting. HDR compensates for this loss of detail by taking multiple pictures at different exposure levels and intelligently stitching them together to produce a picture that is representative in both dark and bright areas.

HDR is also commonly used to refer to display of images derived from HDR imaging in a way that exaggerates contrast for artistic effect.

This article first shows how to implement a "pure" HDR engine for Windows Phone, covering both capturing the images, aligning, and blending them (and briefly discussing the performance on several Lumia devices). It then goes on to introduce an innovative approach to HDR, in which filters from the Nokia Imaging SDK are applied to the pre-blended HDR images. Using this approach we've improved some images over what could be done using pure "HDR". In some cases we've improved some of the filter effects, and in others we've even created some "completely new" effects.

HDR-I process

Image Alignment

The main process behind the HDR technique is to take a sequence of shots at different exposures and blend them. In this context, even if we try to keep the hand very steady, the images will often not be aligned, producing what is called the ghost effect.

To align images I used a technique named Phase Correlation, which leverages properties of the Fast Fourier Transform. The process consists of the following steps:

  • Take the FFT of both images
  • Calculate the cross-power spectrum by multiplying one transform elementwise by the complex conjugate of the other, and normalising by the magnitude
  • Apply the inverse Fourier transform
  • Find the coordinates ( x, y ) of the maximum value in the resulting image

The coordinates of the local maximum represent the translation values and, from the tests made, have the so-called sub-pixel accuracy. In practice it means that we achieve precision at the one-pixel level.
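Purely as an illustration, the steps above can be sketched in Python with a naive DFT (production code would use an FFT; `dft2` and `phase_correlation` are hypothetical names of mine, not part of the DSP library):

```python
import cmath

def dft2(img, inverse=False):
    # Naive O(N^4) 2D DFT of a 2D list of real or complex values.
    h, w = len(img), len(img[0])
    sign = 1 if inverse else -1
    out = [[0j] * w for _ in range(h)]
    for u in range(h):
        for v in range(w):
            s = 0j
            for y in range(h):
                for x in range(w):
                    s += img[y][x] * cmath.exp(sign * 2j * cmath.pi * (u * y / h + v * x / w))
            out[u][v] = s / (h * w) if inverse else s
    return out

def phase_correlation(a, b):
    # Returns (dy, dx) such that b is a circularly shifted by (dy, dx).
    h, w = len(a), len(a[0])
    fa, fb = dft2(a), dft2(b)
    # Cross-power spectrum: conj(Fa) * Fb, normalised by its magnitude
    cross = [[0j] * w for _ in range(h)]
    for u in range(h):
        for v in range(w):
            p = fa[u][v].conjugate() * fb[u][v]
            cross[u][v] = p / abs(p) if abs(p) > 1e-12 else 0j
    corr = dft2(cross, inverse=True)
    # The peak of the inverse transform marks the displacement
    best, dy, dx = -1.0, 0, 0
    for y in range(h):
        for x in range(w):
            if corr[y][x].real > best:
                best, dy, dx = corr[y][x].real, y, x
    return dy, dx
```

For example, an 8×8 image containing a single bright pixel, compared with a copy shifted by (3, 1), yields a peak at (3, 1).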

The following image shows the use of phase correlation to determine the relative translational movement between two images translated by (30,33) pixels. The rightmost image clearly shows a peak in the phase-correlation representation at approximately (30,33).

Phase Correlation, the white point at (30,33) pixel position (image taken from Wikipedia)


A full implementation was added to the DSP library; here is the function's signature:

public async static Task<DSP.CartesianPoint> PhaseCorrelation( int[] image1,  int[] image2, int image_width, int image_height, int area_x, int area_y, int area_side)

The function takes as input the full resolution pixel arrays of both images, but focuses only on a specified area. This allows you, as a further improvement, to process an area other than the centre, where there is better contrast and therefore better results. If we focus on a flat colour area without a good reference point it is difficult to align the images - and this is true for the human eye as well.

  • image1 is the pixel array of the first image
  • image2 is the pixel array of the second image
  • image_width is the width of both images
  • image_height is the height of both images
  • area_x is the x origin of the area to take into account
  • area_y is the y origin of the area to take into account
  • area_side is the side of the square area to take into account

In this implementation I split the massive computation across separate threads, which gives a performance improvement. The code's readability could be improved by moving the thread bodies into separate functions, among other minor improvements, but that is not the focus of this article, which is to learn the processes behind image alignment and HDR. To improve performance appreciably we would need to switch to C++.

Note: High- and mid-range Lumia devices have dual-core processors. I tested performance splitting the computation across more than two threads, but saw no relevant difference.

Using Phase Correlation in your code

WriteableBitmap CapturedImage { get; set; }
WriteableBitmap CapturedImage1 { get; set; }
DSP.CartesianPoint displacement1 = null;
int area_side = 512; // side of the square analysis area (example value)
int[] img_a, img_b;

// ... perform image acquisition and render the shots into CapturedImage and CapturedImage1

img_a = CapturedImage.Pixels;
img_b = CapturedImage1.Pixels;
displacement1 = await DSP.FourierTransform.PhaseCorrelation(img_a, img_b,
    CapturedImage.PixelWidth,
    CapturedImage.PixelHeight,
    ((CapturedImage.PixelWidth - area_side) / 2),
    ((CapturedImage.PixelHeight - area_side) / 2),
    area_side);

HDR Engine

This section provides the code used in the HDR engine, covering both the code to take images at different exposures, and also the blending process.

Before we start, below is an example of an HDR-enhanced image produced without applying filters to the source images. While the Lumia 925 does an impressive job "by default", note the subtle improvement that HDR adds to the light coming through the left lens of the glasses.

Reference Image HDR Image

The images used to produce the HDR above are shown below.

Sub Exposed Normal Exposed Over Exposed Result

Taking three shots with different exposures

As already discussed, HDR works by blending images taken at different exposures in a clever way. While blending more images may result in better quality, three shots is a good compromise between app performance and image quality.

The code below shows how the three shots are taken:

private PhotoCaptureDevice camera;
private WriteableBitmap CapturedImage { get; set; }
private WriteableBitmap CapturedImage1 { get; set; }
private WriteableBitmap CapturedImage2 { get; set; }

private async void OnTakePhotoClicked(object sender, RoutedEventArgs e)
{
    using (MemoryStream streamSubExposed = new MemoryStream())
    using (MemoryStream streamNormalExposed = new MemoryStream())
    using (MemoryStream streamOverExposed = new MemoryStream())
    {
        CameraCaptureSequence cameraCaptureSequence = camera.CreateCaptureSequence(1);
        CameraCapturePropertyRange exposureRange = PhotoCaptureDevice.GetSupportedPropertyRange(CameraSensorLocation.Back,
            KnownCameraPhotoProperties.ExposureCompensation);

        await camera.FocusAsync();

        // Sub exposed shot
        cameraCaptureSequence.Frames[0].CaptureStream = streamSubExposed.AsOutputStream();
        camera.SetProperty(KnownCameraPhotoProperties.ExposureCompensation, exposureRange.Min);
        await camera.PrepareCaptureSequenceAsync(cameraCaptureSequence);
        await cameraCaptureSequence.StartCaptureAsync();

        // Normal exposed shot
        cameraCaptureSequence.Frames[0].CaptureStream = streamNormalExposed.AsOutputStream();
        camera.SetProperty(KnownCameraPhotoProperties.ExposureCompensation, 0);
        await camera.PrepareCaptureSequenceAsync(cameraCaptureSequence);
        await cameraCaptureSequence.StartCaptureAsync();

        // Over exposed shot
        cameraCaptureSequence.Frames[0].CaptureStream = streamOverExposed.AsOutputStream();
        camera.SetProperty(KnownCameraPhotoProperties.ExposureCompensation, exposureRange.Max);
        await camera.PrepareCaptureSequenceAsync(cameraCaptureSequence);
        await cameraCaptureSequence.StartCaptureAsync();

        camera.SetProperty(KnownCameraPhotoProperties.ExposureCompensation, 0);

        streamNormalExposed.Seek(0, SeekOrigin.Begin);
        streamSubExposed.Seek(0, SeekOrigin.Begin);
        streamOverExposed.Seek(0, SeekOrigin.Begin);

        CapturedImage.SetSource(streamNormalExposed); CapturedImage.Invalidate();
        CapturedImage1.SetSource(streamSubExposed); CapturedImage1.Invalidate();
        CapturedImage2.SetSource(streamOverExposed); CapturedImage2.Invalidate();

        // Process alignment
    }
}

Note: The code above uses the whole of the device's exposure range. The choice of exposure range can impact image quality. If a small range is selected you may miss the advantages of the HDR process. A large range sometimes produces low contrast images which become difficult to align - particularly in conditions where the sub exposed image is almost totally dark and the over exposed image is almost totally white.

Radiance Map

For a more accurate result we can use a technique like tone mapping. A tone map is a weighted map that combines information such as brightness, contrast and saturation into a weight that represents the importance of each pixel. Once the tone map is processed, the weights of each of these three layers will be used to merge pixels during the blending process. Without going into boring mathematical detail, I can summarize the steps to build a tone map:

  • Take a copy of the original image and apply a Blur Filter. The level depends on the effect you want.
  • Perform the difference between the original image and the blurred version
  • Apply the Emboss Filter to a copy of the original image. This is the equivalent of the Laplacian operator (to simplify, an edge detector)
  • Sum the result of the first operation with the embossed image

Assuming image1 is a WriteableBitmap:

WriteableBitmap toneMap1 = new WriteableBitmap(CapturedImage.PixelWidth, CapturedImage.PixelHeight);

using (EditingSession editsession = new EditingSession(image1.AsBitmap()))
using (EditingSession blendSession = new EditingSession(image1.AsBitmap()))
{
    // Create the blurred version of the original image
    editsession.AddFilter(FilterFactory.CreateBlurFilter(BlurLevel.Blur1));
    // Perform the difference between the original image and the blurred copy
    editsession.AddFilter(FilterFactory.CreateBlendFilter(blendSession, BlendFunction.Difference));
    // Create the Laplacian of the original image using the emboss filter
    // Add the result of the blur/difference step to the embossed image
    editsession.AddFilter(FilterFactory.CreateBlendFilter(blendSession, BlendFunction.Add));
    // Convert to grayscale, as we only need information on radiance, not colour
    // Render the result
    await editsession.RenderToWriteableBitmapAsync(toneMap1, OutputOption.PreserveAspectRatio);
}

Here is an example result of that code:

Sub Exposed Normal Exposed Over Exposed
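To make the blur-and-difference idea behind the tone map concrete outside the SDK, here is an illustrative Python sketch; a naive box blur stands in for the SDK's blur filter, and the function names are mine:

```python
def box_blur(img, radius=1):
    # Naive box blur on a 2D list of grayscale values; edges are clamped.
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total, count = 0.0, 0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    yy = min(max(y + dy, 0), h - 1)
                    xx = min(max(x + dx, 0), w - 1)
                    total += img[yy][xx]
                    count += 1
            out[y][x] = total / count
    return out

def detail_map(img):
    # |original - blurred|: large near edges and texture, small in flat areas.
    # This mimics the Difference blend between the image and its blurred copy.
    blurred = box_blur(img)
    return [[abs(o - b) for o, b in zip(orow, brow)]
            for orow, brow in zip(img, blurred)]
```

A flat region produces zero detail while an edge produces a strong response, which is why the resulting weights favour well-textured pixels during blending.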

Blending Process

The choice of the blending function is one of the most important decisions in the process, as it can have a huge effect in terms of quality and performance.

The fastest approach is to compute the simple average of the three pixels. While this method is fast and can give good image quality in many common situations, it is naive. Even though mobile devices do not have the computational power of desktop systems, there are other functions which produce better images within an acceptable timeframe.

For better results we want to give more importance to those pixels which are best exposed in the image (i.e. not too bright or too dark) - these will be pixels with values in the middle of the range (128 in the [0, 255] range or 0.5 in a range of [0,1]).

Tip: This best exposed pixel may not be the pixel in the "normal" exposed image - this is after all the whole point of HDR!
Consider a situation where the lumas of the three images are (in the 0 - 255 range): sub exposed 150, normal 167, over exposed 255 (full white). In this case the best pixel is the one in the sub exposed image because it is nearest to the middle of the range (128). It is reasonable to assume it carries more information on colour and saturation than the more burned-out pixels (the normal image at 167); the over exposed "255" pixel should not be considered at all.
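The "nearest to the middle" selection from that example can be expressed directly (values taken from the example above; the helper name is mine):

```python
def distance_from_mid(luma):
    # Distance of an 8-bit luma value from the middle of the range (128)
    return abs(luma - 128)

lumas = {"sub": 150, "normal": 167, "over": 255}
best = min(lumas, key=lambda layer: distance_from_mid(lumas[layer]))
print(best)  # prints "sub": 150 is the luma closest to 128
```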

One of the best functions we might use is the Gaussian, because it smoothly boosts pixels with luma nearest the middle. Unfortunately this function is computationally expensive, so as a compromise we instead consider a triangular function. This function reduces the importance of pixels that are too dark or too bright (in the extreme cases of 0 (black) or 255 (white) the pixel gets a weight of 0):

value = 1 -  Math.Abs( (luma - 128 ) / 128 );

This is a good compromise compared with the Gaussian function. While the shape (behaviour) of the two functions is similar, the triangular function is much more efficient. Benchmarking on the Lumia 925 shows that processing a full resolution image using a Gaussian function takes 32 seconds, compared to 5 for the triangular function.

Note: Passionate photographers might argue that the improved quality of the Gaussian is worth the wait. A future implementation could offer the user a choice of blending function: naive, triangular or Gaussian.

As a further improvement we can use different triangular weights depending on the working layer. For the sub exposed layer we want to give more importance to a pixel as the luma of the over exposed layer increases:

value = luma_over_exposed_level;

This means that the higher the luma in the over exposed layer, the higher the importance of the darkest pixel.

For the over exposed layer we give more importance to a pixel when the sub exposed luma is nearest to zero:

value = (1 - luma_sub_exposed_level);

At first glance it could seem that the two formulas cancel each other out. This is true only in the exact case of evenly spaced lumas between layers, for example [64] (Sub Exposed), [128] (Normal Exposed), [192] (Over Exposed); in all other cases there is a better luma balancing.
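Putting the per-layer weights together, the blend of a single pixel can be sketched as follows (lumas normalised to [0, 1]; this is an illustrative simplification of the weighting used in the C# Blend() function, and the function names are mine):

```python
def triangular_weight(luma):
    # Peaks at mid-grey (0.5) and falls to 0 at pure black or white
    return 1.0 - abs(luma - 0.5) / 0.5

def blend_pixel(sub, normal, over):
    # normal layer: triangular weight (favour well-exposed pixels)
    # sub layer: weighted by the over exposed luma (bright there -> trust the dark shot)
    # over layer: weighted by the darkness of the sub exposed luma
    w_normal = triangular_weight(normal)
    w_sub = over
    w_over = 1.0 - sub
    total = w_sub + w_normal + w_over
    if total == 0:
        return normal
    return (sub * w_sub + normal * w_normal + over * w_over) / total
```

With evenly spaced lumas such as 0.25 / 0.5 / 0.75 the sub and over contributions balance out and the result stays at the normal value, matching the observation above that the two cross-layer formulas cancel only in the evenly spaced case.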

Here is the final Blend() function

private async Task<int[]> Blend(WriteableBitmap image1,
                                WriteableBitmap image2,
                                WriteableBitmap image3,
                                DSP.CartesianPoint displacement1,
                                DSP.CartesianPoint displacement2,
                                bool enhanceShadows = false)
{
    int[] result = new int[CapturedImage.PixelWidth * CapturedImage.PixelHeight];

    // This step creates the radiance map for each of the three images
    WriteableBitmap toneMap1 = new WriteableBitmap(CapturedImage.PixelWidth, CapturedImage.PixelHeight);
    WriteableBitmap toneMap2 = new WriteableBitmap(CapturedImage.PixelWidth, CapturedImage.PixelHeight);
    WriteableBitmap toneMap3 = new WriteableBitmap(CapturedImage.PixelWidth, CapturedImage.PixelHeight);

    using (EditingSession editsession = new EditingSession(image1.AsBitmap()))
    using (EditingSession blendSession = new EditingSession(image1.AsBitmap()))
    {
        editsession.AddFilter(FilterFactory.CreateBlendFilter(blendSession, BlendFunction.Difference));
        editsession.AddFilter(FilterFactory.CreateBlendFilter(blendSession, BlendFunction.Add));
        await editsession.RenderToWriteableBitmapAsync(toneMap1, OutputOption.PreserveAspectRatio);
    }

    using (EditingSession editsession = new EditingSession(image2.AsBitmap()))
    using (EditingSession blendSession = new EditingSession(image2.AsBitmap()))
    {
        editsession.AddFilter(FilterFactory.CreateBlurFilter(BlurLevel.Blur1));
        editsession.AddFilter(FilterFactory.CreateBlendFilter(blendSession, BlendFunction.Difference));
        editsession.AddFilter(FilterFactory.CreateBlendFilter(blendSession, BlendFunction.Add));
        await editsession.RenderToWriteableBitmapAsync(toneMap2, OutputOption.PreserveAspectRatio);
    }

    using (EditingSession editsession = new EditingSession(image3.AsBitmap()))
    using (EditingSession blendSession = new EditingSession(image3.AsBitmap()))
    {
        editsession.AddFilter(FilterFactory.CreateBlendFilter(blendSession, BlendFunction.Difference));
        editsession.AddFilter(FilterFactory.CreateBlendFilter(blendSession, BlendFunction.Add));
        await editsession.RenderToWriteableBitmapAsync(toneMap3, OutputOption.PreserveAspectRatio);
    }

    int captured_image_width = CapturedImage.PixelWidth;
    int captured_image_height = CapturedImage.PixelHeight;
    int captured_image_height_half = captured_image_height / 2;

    // The upper half of the image is processed on one thread...
    Task q1Task = Task.Run(new Action(() =>
    {
        int r1, r2, r3, g1, g2, g3, b1, b2, b3, r = 0, g = 0, b = 0;
        double y1, y2, y3;
        double _y1, _y2, _y3;
        int displacement1_X, displacement1_Y, displacement2_X, displacement2_Y;
        int[] buffer1 = CapturedImage.Pixels;
        int[] buffer2 = CapturedImage1.Pixels;
        int[] buffer3 = CapturedImage2.Pixels;
        int[] bufferToneMap1 = toneMap1.Pixels;
        int[] bufferToneMap2 = toneMap2.Pixels;
        int[] bufferToneMap3 = toneMap3.Pixels;
        double totalLumaWeights = 0;
        int pixel1 = 0, pixel2 = 0, pixel3 = 0;

        for (int y = 0; y < captured_image_height_half; y++)
        {
            for (int x = 0; x < captured_image_width; x++)
            {
                // Clamp the displaced coordinates to the image bounds
                displacement1_X = (x + (int)displacement1.X); if (displacement1_X < 0) displacement1_X = 0; if (displacement1_X >= captured_image_width) displacement1_X = captured_image_width - 1;
                displacement1_Y = (y + (int)displacement1.Y); if (displacement1_Y < 0) displacement1_Y = 0; if (displacement1_Y >= captured_image_height) displacement1_Y = captured_image_height - 1;
                displacement2_X = (x + (int)displacement2.X); if (displacement2_X < 0) displacement2_X = 0; if (displacement2_X >= captured_image_width) displacement2_X = captured_image_width - 1;
                displacement2_Y = (y + (int)displacement2.Y); if (displacement2_Y < 0) displacement2_Y = 0; if (displacement2_Y >= captured_image_height) displacement2_Y = captured_image_height - 1;

                pixel1 = buffer1[x + (y * captured_image_width)];
                pixel2 = buffer2[displacement1_X + (displacement1_Y * captured_image_width)];
                pixel3 = buffer3[displacement2_X + (displacement2_Y * captured_image_width)];

                r1 = ((pixel1 >> 16) & 0xFF);
                r2 = ((pixel2 >> 16) & 0xFF);
                r3 = ((pixel3 >> 16) & 0xFF);
                g1 = ((pixel1 >> 8) & 0xFF);
                g2 = ((pixel2 >> 8) & 0xFF);
                g3 = ((pixel3 >> 8) & 0xFF);
                b1 = (pixel1 & 0xFF);
                b2 = (pixel2 & 0xFF);
                b3 = (pixel3 & 0xFF);

                // Luma of each source pixel, in [0, 1]
                _y1 = ((double)r1 / 255.0) * 0.3 + ((double)g1 / 255.0) * 0.59 + ((double)b1 / 255.0) * 0.11;
                _y2 = ((double)r2 / 255.0) * 0.3 + ((double)g2 / 255.0) * 0.59 + ((double)b2 / 255.0) * 0.11;
                _y3 = ((double)r3 / 255.0) * 0.3 + ((double)g3 / 255.0) * 0.59 + ((double)b3 / 255.0) * 0.11;

                // Tone map weights (with a small offset so no pixel gets zero weight)
                y1 = (0.1 + ((double)(bufferToneMap1[x + (y * captured_image_width)] & 0xFF) / 255));
                y2 = (0.1 + ((double)(bufferToneMap2[displacement1_X + (displacement1_Y * captured_image_width)] & 0xFF) / 255));
                y3 = (0.1 + ((double)(bufferToneMap3[displacement2_X + (displacement2_Y * captured_image_width)] & 0xFF) / 255));

                // Per-layer weighting: triangular for the normal layer, cross-luma for the others
                y1 *= (1 - (Math.Abs(_y1 - 0.5) / 0.5)) * 10;
                y2 *= _y3 * 10;
                y3 *= (1 - _y2) * 10;

                // Enhance Shadows
                if (enhanceShadows)
                {
                    y1 *= (20000 * ((_y1 + _y2 + _y3) / 3));
                    y2 *= (20000 * _y3);
                    y3 *= (20000 * _y2);
                }

                // Normalise the weights and blend
                totalLumaWeights = (Math.Abs(y1) + Math.Abs(y2) + Math.Abs(y3));
                y1 /= totalLumaWeights; y2 /= totalLumaWeights; y3 /= totalLumaWeights;
                r = (int)Math.Floor((double)(r1 * y1 + r2 * y2 + r3 * y3));
                g = (int)Math.Floor((double)(g1 * y1 + g2 * y2 + g3 * y3));
                b = (int)Math.Floor((double)(b1 * y1 + b2 * y2 + b3 * y3));
                if (r < 0) r = 0; if (g < 0) g = 0; if (b < 0) b = 0;
                if (r > 255) r = 255; if (g > 255) g = 255; if (b > 255) b = 255;
                result[x + (y * captured_image_width)] = 0x1000000 | (r << 16) | (g << 8) | b;
            }
        }
    }));

    // ...and the lower half on another
    Task q2Task = Task.Run(new Action(() =>
    {
        for (int y = captured_image_height_half; y < captured_image_height; y++)
        {
            for (int x = 0; x < captured_image_width; x++)
            {
                // The same per-pixel code as in the task above
            }
        }
    }));

    await Task.WhenAll(q1Task, q2Task);
    return result;
}

I tested the above code with three devices: Lumia 925, 920 and 820. The value of pure HDR for the Lumia 820 and 920 was generally fairly obvious. However the Lumia 925 has the same very wide exposure range as the 1020, and exceptional quality in low light conditions. As a result it was much harder to find situations in which HDR added value over the (really impressive!) default performance of the Lumia 925.

Experiments in filtered-layer HDR

Pure HDR blends the pixels of unaltered reference images that are under-, normal-, and over-exposed. While this can create some very impressive improvements to certain types of images (specifically those with under or overexposed areas) it isn't particularly innovative. As the performance of camera sensors gets better (as in the Lumia 925) the areas where pure HDR has such a vivid impact should decrease.

Therefore, to make this article more interesting and innovative, I've also chosen to explore what can be done by pre-filtering the reference images using filters from the Nokia Imaging SDK. The purpose was both to see whether it is possible to further improve images over what can be done using pure HDR, and to see whether I could improve the effects themselves or even create completely new ones.

The approach I followed was to leverage HDR's layered nature, applying different filters to each layer. For example, rather than just applying "blur" and "sepia" to an image, I instead do the following:

  • Sub exposed image - sepia, blur level 2, lomo with blue dominance
  • Normal exposed image - lomo with red dominance
  • Over exposed image - sepia and blur level 7, lomo with green dominance

Note: There is a reason why I applied green dominance to the over exposed image - in the RGB colour model the green component is the most important and carries the most information on brightness. Consider that "night vision" has green as its primary component. Blue is more suitable for dark tones. The red component is assigned by exclusion.

The resulting image produces something new, and which in some cases is a better image or effect than the original.

The Layer approach on HDR filter blending

Note: In this article, when I refer to an effect I mean the HDR combination of Nokia Imaging SDK filters with different settings in different layers. When I refer to a filter I mean the original filter - blur, lomo, etc. As a result the Lomo "effect" is different from the Lomo "filter".

The prototype

I created a prototype to test all possible combinations. As you can see from the image below the prototype allows the filters to be applied dynamically, based on the situation, and of course inspiration.

Screenshot of my prototype app

Note: The prototype UI and performance were sufficient for testing, but have not yet been optimised. There is a toggle button in the top right corner which, if enabled, will load three images named "normalexposed.jpg", "subexposed.jpg" and "overexposed.jpg" from isolated storage.

Here is the sequence of filters for each image layer:

Sub exposed image:

using (EditingSession editsession = new EditingSession(StreamToBuffer(streamSubExposed)))
{
    if (RadColorBoost.IsChecked) editsession.AddFilter(FilterFactory.CreateColorBoostFilter(1.0));
    if (RadCartoon.IsChecked) editsession.AddFilter(FilterFactory.CreateCartoonFilter(true));
    if (RadLomo.IsChecked) editsession.AddFilter(FilterFactory.CreateLomoFilter(0.5, 0.5, LomoVignetting.Medium, LomoStyle.Blue));
    if (RadSepia.IsChecked) editsession.AddFilter(FilterFactory.CreateSepiaFilter());
    if (RadStamp.IsChecked) editsession.AddFilter(FilterFactory.CreateStampFilter(3, 128));
    if (RadNegative.IsChecked) editsession.AddFilter(FilterFactory.CreateNegativeFilter());
    if (RadTemperature.IsChecked) editsession.AddFilter(FilterFactory.CreateTemperatureAndTintFilter(50, 0));
    if (RadRedColorAdjust.IsChecked) editsession.AddFilter(FilterFactory.CreateColorAdjustFilter(0, 0, 0.1));
    if (RadContrast.IsChecked) editsession.AddFilter(FilterFactory.CreateContrastFilter(0.6));
    if (RadBrightness.IsChecked) editsession.AddFilter(FilterFactory.CreateBrightnessFilter(0.6));
    if (RadBlur.IsChecked) editsession.AddFilter(FilterFactory.CreateBlurFilter(BlurLevel.Blur2));

    await editsession.RenderToWriteableBitmapAsync(CapturedImage1, OutputOption.PreserveAspectRatio);
}

Normal exposed image:

using (EditingSession editsession = new EditingSession(StreamToBuffer(streamNormalExposed)))
{
    if (RadRedColorAdjust.IsChecked) editsession.AddFilter(FilterFactory.CreateColorAdjustFilter(0.8, 0, 0));
    if (RadTint.IsChecked) editsession.AddFilter(FilterFactory.CreateTemperatureAndTintFilter(0, 50));
    if (RadTemperature.IsChecked) editsession.AddFilter(FilterFactory.CreateTemperatureAndTintFilter(30, 0));
    if (RadLomo.IsChecked) editsession.AddFilter(FilterFactory.CreateLomoFilter(0.5, 0.5, LomoVignetting.Medium, LomoStyle.Red));
    if (RadContrast.IsChecked) editsession.AddFilter(FilterFactory.CreateContrastFilter(0.3));

    await editsession.RenderToWriteableBitmapAsync(CapturedImage, OutputOption.PreserveAspectRatio);
}

Over exposed image:

using (EditingSession editsession = new EditingSession(StreamToBuffer(streamOverExposed)))
{
    if (RadRedColorAdjust.IsChecked) editsession.AddFilter(FilterFactory.CreateColorAdjustFilter(0, 0.1, 0));
    if (RadColorBoost.IsChecked) editsession.AddFilter(FilterFactory.CreateColorBoostFilter(4.0));
    if (RadCartoon.IsChecked) editsession.AddFilter(FilterFactory.CreateCartoonFilter(true));
    if (RadLomo.IsChecked) editsession.AddFilter(FilterFactory.CreateLomoFilter(0.5, 0.5, LomoVignetting.Medium, LomoStyle.Green));
    if (RadSketch.IsChecked) { editsession.AddFilter(FilterFactory.CreateContrastFilter(0.3)); editsession.AddFilter(FilterFactory.CreateSketchFilter(SketchMode.Color)); }
    if (RadSepia.IsChecked) editsession.AddFilter(FilterFactory.CreateSepiaFilter());
    if (RadSolarize.IsChecked) editsession.AddFilter(FilterFactory.CreateSolarizeFilter((float)1.0));
    if (RadStamp.IsChecked) editsession.AddFilter(FilterFactory.CreateStampFilter(3, 200));
    if (RadContrast.IsChecked) editsession.AddFilter(FilterFactory.CreateContrastFilter(0.3));
    if (RadBrightness.IsChecked) editsession.AddFilter(FilterFactory.CreateBrightnessFilter(0.3));
    if (RadBlur.IsChecked) editsession.AddFilter(FilterFactory.CreateBlurFilter(BlurLevel.Blur7));

    await editsession.RenderToWriteableBitmapAsync(CapturedImage2, OutputOption.PreserveAspectRatio);
}

After that, of course, we proceed with blending as explained for "pure HDR":

int[] buffer = await Blend(CapturedImage, CapturedImage1, CapturedImage2, displacement1, displacement2, UseHDRSaturation.IsChecked);

The sections below show the results of my experimentation.


Glow

The "glow" effect is used to create a "glow" around a particular subject while keeping most of the rest of the image "crisp". The image below shows how we've created a slightly "ghostly" koala on a fairly solid chair - the difference can best be appreciated by looking near its eyes, mouth and the joey (baby koala).

Reference Image Result

The base of the glow effect is the blur filter applied to different pre-blended images in the HDR. It has the following settings:

  • Sub exposed image: Blur Filter level 2
  • Normal exposed image: none
  • Over exposed image: Blur Filter level 7

I used the maximum level of blur on the overexposed image to enhance the subject's brightness, and none on the normal exposed image to keep the detail of our subject. The sub-exposed image has a low level of blur, which has a smoothing effect while still keeping the chair fairly "crisp".

Compare this with a simple blur filter, which affects the whole image.

Blurred Image Glow Image

Color Boost

The color boost effect restores the colour-richness to a "faded" image. The results speak for themselves!

Reference Image Result
Campanile Mirazzano.jpg
Campanile Mirazzano Color Boost.jpg

In this case we combine a normal colour boost with a Lomo filter. Using different percentages we get the warm colours provided by the colour boost while avoiding the "burned pixel" effect of the original filter. The colours are smoother and the behaviour more uniform - there is also more detail in the sky.

Here are the settings:

  • Sub exposed image: Colour Boost level 1.0
  • Normal exposed image: none
  • Over exposed image: Colour Boost level 4.0

The original colour boost is a great effect, but it isn't exactly subtle! Below is a comparison of the original filter and our new effect.

Color Boost filter on reference Image The colour boost effect
Camoanile color boost.jpg
Campanile Mirazzano Color Boost.jpg

Darkness falls

This effect was discovered by accident. Although I'm not a fan of dark culture, the ominous nature of the effect did strike me. The core of this effect is the Stamp Filter combined with the Glow effect seen above and a Lomo effect.

Reference Image Result
Dark image reference.jpg

Here is the configuration:

  • Sub exposed image: Stamp Filter with Smoothness 3, Threshold 128
  • Normal Image: None
  • Over exposed image: Stamp Filter with Smoothness 3, Threshold 200

Note that the threshold for the over-exposed image is higher than for the sub-exposed image: we want to enhance the dark effect, but also to give a different mask area and improve the quality of the effect during blending. Remember that the stamp filter gives you a black and white version of your image.

Obviously the impact of the effect is greatest on an image which looked pleasant to start with, but the effect can further darken even an already ominous image.

Reference Image Result
With addition of Sepia Effect

Gradient Coloured filter

This is an effect I like very much, and one that needed some further code. Expert and professional photographers often apply coloured lens filters in front of the camera. These filters are useful for correcting light aberrations when polarized, but also for adding special effects that enhance the mood of the shot.

Gradient yellow coloured lens filter for high range cams

Here is an example of an orange filter:

Original HDR Image Result

Note the gradient. At the top of the image the filter and the image are fully multiplied; the effect then fades until, at the bottom, the image is equal to the original without any effect.

To understand how the filter is made, have a look at the code below. The basic idea is to create a gradient image with the chosen colour at runtime, and to apply the equivalent of the Multiply operation you get when using the Blend Filter, or when blending levels in Photoshop.

private void GradientFilterAsync(WriteableBitmap image, int color = 0x018000FF, double heightGradient = 0.6)
{
    // Create the buffer for the gradient-filtered image
    int[] gradientPixels = new int[image.PixelWidth * image.PixelHeight];
    // Get a reference to the original image
    int[] pixels = image.Pixels;
    int height = image.PixelHeight;
    int width = image.PixelWidth;
    double gradient = 0;
    double ag = 255;
    // Extract the Red, Green and Blue components of the chosen gradient colour
    int rg = ((color >> 16) & 0xFF);
    int gg = ((color >> 8) & 0xFF);
    int bg = (color & 0xFF);
    int r, g, b, pixel;
    // Calculate the height of the gradient. The passed value has a range of [0, 1].
    // The default value is 0.6, meaning the gradient covers 60% of the image.
    // Below that point no effect is applied and the image keeps its original values.
    heightGradient *= (double)height;
    for (int y = 0; y < height; y++)
    {
        // Calculate the current gradient alpha
        gradient = (double)y / heightGradient; if (gradient > 1) gradient = 1;
        for (int x = 0; x < width; x++)
        {
            ag = (1 - gradient);
            pixel = pixels[x + (y * width)];
            r = ((pixel >> 16) & 0xFF);
            g = ((pixel >> 8) & 0xFF);
            b = (pixel & 0xFF);
            // This is the important alpha-blending operation of the gradient:
            // the pixel Multiply operation.
            // When the gradient value equals one, only the original values are used.
            r = (int)((double)r * gradient) + (int)((double)r * (ag * rg / 255));
            g = (int)((double)g * gradient) + (int)((double)g * (ag * gg / 255));
            b = (int)((double)b * gradient) + (int)((double)b * (ag * bg / 255));
            if (r < 0) r = 0; if (g < 0) g = 0; if (b < 0) b = 0;
            if (r > 255) r = 255; if (g > 255) g = 255; if (b > 255) b = 255;
            gradientPixels[x + (y * width)] = 0x1000000 | (r << 16) | (g << 8) | b;
        }
    }
    gradientPixels.CopyTo(image.Pixels, 0);
}

For a reference on pixel multiplication in image blending, I suggest the Wikipedia page on blend modes.
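As a usage sketch, the method above could be called on the rendered HDR bitmap. The colour value and the hdrBitmap name below are illustrative assumptions, not part of the article's code:

```csharp
// Hypothetical usage, assuming 'hdrBitmap' is the WriteableBitmap holding the HDR result.
// 0x01FF8000 encodes an orange gradient colour (R=0xFF, G=0x80, B=0x00).
GradientFilterAsync(hdrBitmap, 0x01FF8000, 0.6); // orange, fading over the top 60%
hdrBitmap.Invalidate();                          // refresh any Image control bound to it
```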

Here are some examples with different colours:

Green Pink Violet Yellow

Lomo Filter

This section shows a different version of the Lomo filter, smoother than the standard one. The filter was born from reading comments about similar filters on app stores and around the web, where some users complain that the effect is often stronger than desired. So take this filter as it is: a different version of an existing one, based on some user requests.

To obtain this effect I used the following configuration:

  • Sub exposed image
      • Lomo filter with Blue dominance for dark areas
      • Sepia filter
  • Normal exposed image
      • Lomo filter with Red dominance
  • Over exposed image
      • Lomo filter with Green dominance for bright areas
      • Sepia filter

Combining the Lomo filter in this way, the result is almost neutral: the blended Red, Green and Blue dominances carry out the white, or better, the original colour. Mixed with the HDR algorithm, however, and because of the targeted selection of dominance in the different levels, we get a sort of improvement in colour distribution, also in terms of light (the Green component) and shadows (the Blue component). We saw the effect of this in the Color Boost example above, where it attenuated and improved the effect of the Color Boost filter.

The Sepia filter is used to restore the red cast of the classic Lomo filter. You can also use the Antique filter, with very good results in my opinion. Skip this step if you want the neutral effect.

Once again, not applying too many filters to the middle layer (the normal exposed image) preserves more image detail.
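A minimal sketch of this configuration, assuming the Nokia Imaging SDK Beta 1 EditingSession/FilterFactory API, could look as follows. The brightness/saturation values are illustrative assumptions to tune to taste, and subExposedBuffer, normalBuffer and overExposedBuffer are assumed to be the three captured image buffers; check the names against your SDK version:

```csharp
// One filter chain per exposure layer (sketch, not the article's exact code).
EditingSession subExposed = new EditingSession(subExposedBuffer);
subExposed.AddFilter(FilterFactory.CreateLomoFilter(0.5, 0.5, LomoVignetting.Medium, LomoStyle.Blue));
subExposed.AddFilter(FilterFactory.CreateSepiaFilter());

EditingSession normal = new EditingSession(normalBuffer);
normal.AddFilter(FilterFactory.CreateLomoFilter(0.5, 0.5, LomoVignetting.Medium, LomoStyle.Red));

EditingSession overExposed = new EditingSession(overExposedBuffer);
overExposed.AddFilter(FilterFactory.CreateLomoFilter(0.5, 0.5, LomoVignetting.Medium, LomoStyle.Green));
overExposed.AddFilter(FilterFactory.CreateSepiaFilter());
```

After rendering each session to a WriteableBitmap, the three layers are merged by the HDR engine described earlier in the article.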

Reference Image Result
Standard Lomo filter with Red dominance
HDR Lomo filter
Sub Exposed Normal Exposed Over Exposed

As you can see, the pixels are less burned and the effect is softer. I also suggest playing with the Color Boost, Color Adjust and Contrast filters to obtain a different effect, but this way you stay nearest to the original one.

Enhance Brightness

In this section we will show how to use the Sketch filter to enhance brightness. This filter works like the Cartoon filter, but keeps more detail where needed. Here is an example:

Sketch filter example

As you can see, it keeps some colour details of the trees, the leaves and the house.

The idea is to use this filter in colour mode on just one layer, as its effect is quite heavy. The normal exposed layer keeps the image detail, and I didn't want to lose too many shadows, so I chose the over exposed layer; anyway, feel free to test the effect on the other layers. At the end, I corrected the resulting image with saturation and contrast.
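Assuming the Beta 1 FilterFactory API, where CreateSketchFilter takes a SketchMode (Gray or Color), applying the filter to only the over exposed layer could be sketched like this; overExposedBuffer is an assumed name for that layer's image buffer:

```csharp
// Sketch: apply the Sketch filter in colour mode to the over-exposed layer only.
// The other layers go into the HDR merge unfiltered; saturation and contrast
// are then corrected on the merged result, as described above.
EditingSession overExposed = new EditingSession(overExposedBuffer);
overExposed.AddFilter(FilterFactory.CreateSketchFilter(SketchMode.Color));
```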

HDR Shot Original Image Result
iPhone 4Gs
Nokia Lumia 925
Nokia Lumia 925

The iPhone HDR shot cannot really be taken into account: despite HDR, it produced an image that speaks for itself. It was added only to show where we started from. The Lumia 925 shot is very impressive; we have become accustomed to considering such results normal, but this one isn't.

To better understand the result, here is a comparison between the result and the reference image, shot with Nokia Pro Camera and processed with a Brightness filter with a parameter of +0.7.

Original Image Result
Brightness filter
HDR Sketch filter

This example shows the breathtaking power of the Lumia 925. Here is the result of the same shot taken with a Lumia 820, whose basic camera is not comparable with the Lumia 925's. The light conditions are better, but still difficult.

Original Image Result
Shot taken with standard camera
HDR shot with Sketch filter

Because of the better light conditions, the Sketch filter was applied in a softer way, but the brightness was enhanced all the same. Note also the effect of HDR around the spotlights.


Cartoon Filter

This filter has a lot of potential, but it is sometimes undervalued. The proposed technique improves on the original filter, as it gives more detail to the final image. Here is the configuration I used:

  • Sub exposed image: Cartoon Filter with distinct edge option active
  • Normal Image: none
  • Over exposed image: Cartoon Filter with distinct edge option active
Original Image Result
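The configuration above can be sketched as follows, assuming the Beta 1 FilterFactory API where CreateCartoonFilter takes a boolean for the distinct-edges option; the buffer names are assumptions:

```csharp
// Sketch: Cartoon filter with distinct edges on the sub- and over-exposed layers only.
EditingSession subExposed = new EditingSession(subExposedBuffer);
subExposed.AddFilter(FilterFactory.CreateCartoonFilter(true));   // true = distinct edges

EditingSession overExposed = new EditingSession(overExposedBuffer);
overExposed.AddFilter(FilterFactory.CreateCartoonFilter(true));

// The normal-exposure layer is left untouched before the HDR merge.
```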

To appreciate the difference, here is a comparison between the original filter and the HDR one, applied to the same reference image.

Original Cartoon Filter HDR Cartoon Filter

Look at the tree details, and at the other details that disappeared in the original filter, like the shovel leaning against the wall at the bottom left.

Here is the result of mixing the Gradient Coloured filter with the HDR Cartoon filter. My impression is that the result comes nearer to the style of some comics.

Yellow Gradient Filter Pink Filter


Using the Flash

This section shows how to use the flash in HDR, even in good light conditions. I limited the use of the flash to the sub exposed image only: with a short exposure but the flash firing, we enhance some areas without getting the typical lit glow of the flash.

In this example I wanted to use the flash in combination with the Temperature filter, simply because I liked it; the effect is clearly visible.
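Firing the flash for the sub exposed frame only could be sketched with the Windows Phone 8 camera API as follows. Here photoDevice is assumed to be an already opened PhotoCaptureDevice, and minExposure an assumed negative exposure compensation value:

```csharp
// Sketch: enable the flash only while capturing the sub-exposed frame
// (property names from Windows.Phone.Media.Capture).
photoDevice.SetProperty(KnownCameraPhotoProperties.FlashMode, FlashState.On);
photoDevice.SetProperty(KnownCameraPhotoProperties.ExposureCompensation, minExposure);
// ... capture the sub-exposed frame here, then disable the flash
// before capturing the normal and over-exposed frames:
photoDevice.SetProperty(KnownCameraPhotoProperties.FlashMode, FlashState.Off);
```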

Original Shot Result
Sub Exposed

In the sub exposed image you can see the effect of using the flash: there is a yellow glow around the steel details, an effect that can also be seen in the result image. In this case the flash was useful, because the over exposed image is almost burned and the sub exposed image is almost dark, losing detail; without it, the winner would have been just the normal exposed image, giving no improvement except for the Temperature filter, whose choice depends on your preferences and adds no relevant value in this case.

Filter Gallery

This section shows a gallery of some shots.

Tint filter, very low light conditions
This page was last modified on 16 December 2013, at 09:53.