
Effects

Effects process the image in some way, and are the large-scale building blocks of the processing pipeline. The following effects are available in the Nokia Imaging SDK: Filter Effect, HDR Effect, Image Aligner, Interactive Foreground Segmenter, Lens Blur Effect, and Delegating Effect.

Filter Effect

Use the FilterEffect when you want to apply one or more of the many lightweight filters included. The effect applies a list of filters to the image, one by one. A useful analogy is how optical filters can be stacked onto the lens of an SLR camera. The SDK comes with more than 50 filter implementations: Sepia, MagicPen, Antique, etc.

The application might often use only a single FilterEffect in the pipeline, but since a FilterEffect is effectively a “filter group”, it can serve as a “preset” or “module” that can be added to or removed from a processing pipeline.

The following sections detail the use of some of the more complex filters.

Reframing Filter

The ReframingFilter lets the user freely reframe the image by effectively specifying a new “canvas”. A reframing area is placed over the image by specifying a rectangle, a rotation, and optionally a pivot point which otherwise defaults to the center of the reframing area. This rectangle can extend outside the current boundaries of the image, and any such area will be rendered in transparent black.

Here is a code sample that performs three reframing operations on a Ferris wheel image:

  1. The image is reframed as a close up around the Ferris wheel, by setting up a ReframingArea.
  2. The area from step 1 is reframed, rotating the ReframingArea by 25 degrees using the center of the ReframingArea as a pivot point.
  3. The area from step 1 is reframed, rotating the ReframingArea by 25 degrees, this time using the top left corner of the ReframingArea as a pivot point.
using (var imageSource = new StorageFileImageSource(storageFile))
using (var filterEffect = new FilterEffect(imageSource))
using (var renderer = new BitmapRenderer(filterEffect))
{
    var imageInfo = await imageSource.GetInfoAsync();
    var filter = new ReframingFilter();

    // 1. Close up around the Ferris wheel.
    filter.ReframingArea = new Windows.Foundation.Rect(180, 10, 200, 340);
    filter.Angle = 0;
    filterEffect.Filters = new IFilter[] { filter };
    var buffer1 = await renderer.RenderAsync();

    // 2. Rotate 25 degrees around the center (the default pivot point).
    filter.Angle = 25;
    var buffer2 = await renderer.RenderAsync();

    // 3. Rotate 25 degrees around the top left corner of the reframing area.
    filter.PivotPoint = new Windows.Foundation.Point(0, 0);
    var buffer3 = await renderer.RenderAsync();
}
[Figures: original image, first reframing, second reframing, third reframing]

For simple crop operations within the boundaries of the original image, use the CropFilter. To rotate the image by an arbitrary angle while resizing the “canvas” so that the entire original image remains visible, use the RotationFilter.
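As a minimal sketch of these two alternatives (the property names CropArea and RotationAngle are taken from the SDK class reference; verify against your SDK version, and note that the rectangle and angle values here are illustrative only):

```csharp
using (var imageSource = new StorageFileImageSource(storageFile))
using (var filterEffect = new FilterEffect(imageSource))
using (var renderer = new BitmapRenderer(filterEffect))
{
    // Crop to a region fully inside the original image.
    var cropFilter = new CropFilter
    {
        CropArea = new Windows.Foundation.Rect(180, 10, 200, 340)
    };

    // Rotate the cropped result; the canvas is resized so the whole
    // image stays visible.
    var rotationFilter = new RotationFilter { RotationAngle = 25 };

    filterEffect.Filters = new IFilter[] { cropFilter, rotationFilter };
    var buffer = await renderer.RenderAsync();
}
```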

Blend Filter

The BlendFilter takes a foreground image source, and when applied blends this image with the image that the FilterEffect is processing. The resulting image is processed by the next filter in the FilterEffect.Filters list.

If an alpha channel is present in the foreground image, it is used to combine the result of the blend filter with the original foreground image. A grayscale image can be provided as a separate alpha mask, and is then used instead of the alpha channel in the foreground image. The Level property functions as a global alpha value, and is multiplied with each pixel's alpha value to produce the actual value used.
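For example, assuming a blendFilter set up as in the samples in this section, the Level property can be used like this (0.5 is an illustrative value):

```csharp
// Blend the foreground at half strength. Level acts as a global alpha
// multiplier applied on top of any per-pixel alpha or separate mask.
blendFilter.BlendFunction = BlendFunction.Normal;
blendFilter.Level = 0.5;
```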

The following code sample blends an image consisting of a black frame around an otherwise transparent interior onto another image.

using (var backgroundSource = new StorageFileImageSource(backgroundFile))
using (var foregroundSource = new StorageFileImageSource(foregroundFile))
using (var filterEffect = new FilterEffect(backgroundSource))
using (var blendFilter = new BlendFilter(foregroundSource))
using (var renderer = new BitmapRenderer(filterEffect))
{
    blendFilter.BlendFunction = BlendFunction.Normal;
    filterEffect.Filters = new IFilter[] { blendFilter };
    var buffer = await renderer.RenderAsync();
}
[Figures: background image, foreground image, blend result]

The blend filter can also work on an image and a separate alpha mask, represented by a grayscale image. This is useful for several reasons:

  • The GradientImageSource can be used to generate grayscale masks.
  • The output of the InteractiveForegroundSegmenter is a black and white mask, which can be used directly as input to the blend filter.
  • Conserving memory. See the description of the AlphaToGrayscaleFilter below for an explanation of how to save memory when blending is done repeatedly with the same image, or set of images, containing an alpha mask.

The following code sample demonstrates using a foreground image without alpha channel, and a separate grayscale image as alpha mask.

using (var backgroundSource = new StorageFileImageSource(backgroundFile))
using (var foregroundImageSource = new StorageFileImageSource(foregroundImageFile))
using (var foregroundMaskSource = new StorageFileImageSource(foregroundMaskFile))
using (var filterEffect = new FilterEffect(backgroundSource))
using (var blendFilter = new BlendFilter(foregroundImageSource))
using (var renderer = new BitmapRenderer(filterEffect))
{
    blendFilter.MaskSource = foregroundMaskSource;
    blendFilter.BlendFunction = BlendFunction.Normal;
    filterEffect.Filters = new IFilter[] { blendFilter };
    var buffer = await renderer.RenderAsync();
}
[Figures: background image, foreground image, foreground mask, blend result]

Local Blending

Note: Available since version 1.2 beta

Blending can also be done into a target area of the background source. The TargetArea is specified with a rect, using the unit coordinate system of the background image, i.e. the top left corner of the background image is at (0, 0) and the bottom right corner is at (1, 1). The area can also be rotated around its center by setting TargetAreaRotation to the desired angle of counterclockwise rotation.

There is also a TargetOutputOption property that is used to control how the foreground is rendered into the target area. If set to Stretch, the foreground image will be resized to fit the target area exactly. If set to PreserveAspectRatio, the foreground image will be blended into the target area centered and with the original aspect ratio intact. If set to PreserveSize, the size portion of the target area will be ignored, and the foreground image will be blended in its original size.

The following code uses the same input images as the example above, but blends into a smaller area.

using (var backgroundSource = new StorageFileImageSource(backgroundFile)) 
using (var foregroundImageSource = new StorageFileImageSource(foregroundImageFile)) 
using (var foregroundMaskSource = new StorageFileImageSource(foregroundMaskFile)) 
using (var filterEffect = new FilterEffect(backgroundSource)) 
using (var blendFilter = new BlendFilter(foregroundImageSource)) 
using (var renderer = new BitmapRenderer(filterEffect)) 
{
    blendFilter.MaskSource = foregroundMaskSource;
    blendFilter.BlendFunction = BlendFunction.Normal;
    blendFilter.TargetArea = new Rect(0, 0.48, 0.3, 0.3);
    blendFilter.TargetAreaRotation = -3;
    blendFilter.TargetOutputOption = OutputOption.PreserveAspectRatio;
  
    filterEffect.Filters = new IFilter[] { blendFilter };
    var buffer = await renderer.RenderAsync();    
}
[Figure: blend to target area]

Alpha to Grayscale Filter

The AlphaToGrayscaleFilter copies the alpha channel to the color channels, resulting in a grayscale representation of the alpha channel. The alpha channel is set to 255. This filter can be used to split up an image containing alpha information (e.g. coming from a PNG file) into an image with color information only, and a grayscale mask. If this is used as a preprocessing step, these two images can then later be used as inputs e.g. to the blend filter as described above, thus saving memory since JPEG files can be processed much more efficiently than PNG files.

using (var imageSource = new StorageFileImageSource(pngFile))
using (var filterEffect = new FilterEffect(imageSource))
using (var jpegRenderer = new JpegRenderer())
{
    // Render the color information as a JPEG.
    jpegRenderer.Source = imageSource;
    var imageBuffer = await jpegRenderer.RenderAsync();

    // Render the alpha channel as a grayscale JPEG mask.
    var alphaToGrayscaleFilter = new AlphaToGrayscaleFilter();
    filterEffect.Filters = new IFilter[] { alphaToGrayscaleFilter };
    jpegRenderer.Source = filterEffect;
    var maskBuffer = await jpegRenderer.RenderAsync();
}

HDR Effect

Note: Available since version 1.1

The HdrEffect applies local tone mapping to a single image to achieve an HDR-like effect. It can be used to apply an "auto fix" to the image, resulting in improved image quality for the majority of images. It can also be used to apply a more "artistic HDR" to the image.

The Strength property controls how strong the local tone mapping effect will be on the image. With a higher strength setting more noise is introduced, and this can be suppressed using the NoiseSuppression property. If strength is set to a high value and noise suppression is kept low, the effect will produce dramatic, surrealistic images.

The effect also has properties to control global Gamma and Saturation. For both these properties, 1.0 implies no change. For saturation, values lower than 1 will decrease, and values greater than 1 will increase the saturation in the final image. For gamma, values lower than 1 will produce a lighter image, and values greater than 1 will produce a darker image.
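As an illustration of these two properties, a minimal sketch using the same setup as the surrounding examples (the values are chosen for illustration only):

```csharp
using (var source = new StorageFileImageSource(sourceFile))
using (var hdrEffect = new HdrEffect(source))
using (var renderer = new BitmapRenderer(hdrEffect))
{
    hdrEffect.Gamma = 0.8;      // < 1.0 produces a lighter image
    hdrEffect.Saturation = 1.3; // > 1.0 increases saturation
    var buffer = await renderer.RenderAsync();
}
```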

The following example demonstrates how the default settings produce an improved image, and how modifying the settings can result in a much more dramatic image:

using (var source = new StorageFileImageSource(sourceFile)) 
using (var hdrEffect = new HdrEffect(source)) 
using (var renderer = new BitmapRenderer(hdrEffect)) 
{
     var improvedBuffer = await renderer.RenderAsync();
 
     hdrEffect.Strength = 0.9;
     hdrEffect.NoiseSuppression = 0.01;
     var artisticHdrBuffer = await renderer.RenderAsync();
}  
[Figures: original image, image improved with HDR, artistic HDR]

Image Aligner

Note: Available since version 1.2 beta

The ImageAligner is used to align a series of images that differ by a small movement, e.g. a series of images taken in the burst capture mode available in Windows Phone 8.1. Alignment works only for quite small movements, such as those that occur when the user tries to hold the camera still, and degrades quickly if the images move too much. It also requires constant or near-constant exposure settings.

Start the alignment by assigning a list of image sources to the Sources property. Optionally, the ReferenceSource property can be set to specify which image in the list serves as the reference image in the alignment process; the other images are then modified to align with it. If not set, or explicitly set to null, ReferenceSource defaults to the middle element in the source list.

When the sources are set, you can call the CanAlignAsync() method to find out whether a particular image source can be aligned. One or more images may fail to align without the whole alignment process failing. If a source can be aligned, an image source for the aligned image is retrieved by calling AlignAsync(). This method throws an exception if called for a source that cannot be aligned.

The example below tries to align a list of images, using the second source as the reference, and saves the successfully aligned sources. The input and output are visualized as animated GIF images. See the documentation on the GifRenderer for information about how to render animated GIFs.

using (var aligner = new ImageAligner())
using (var renderer = new JpegRenderer()) 
{
    aligner.Sources = unalignedSources;
    aligner.ReferenceSource = unalignedSources[1];
    
    var alignedSources = await aligner.AlignAsync(); 
 
    foreach (var alignedSource in alignedSources) 
    {   
        if (alignedSource != null)
        {
            renderer.Source = alignedSource;
            var alignedBuffer = await renderer.RenderAsync();
            Save(alignedBuffer);
        }
    }
}
[Figures: unaligned images, aligned images]

Interactive Foreground Segmenter

Note: Available since version 1.1

The InteractiveForegroundSegmenter segments the image into foreground and background based on annotations to the image provided by the end-user.

As input, InteractiveForegroundSegmenter takes the image to segment and an annotation image where representative parts of the foreground and background areas in the image have been marked using the foreground and background colors that can be set on the object. Using these annotations it segments the image and generates a mask where foreground is white and background is black.

Here is an example that uses the interactive foreground segmenter and the blend filter to adjust the hue of the foreground of the image. The user provides a "UserAnnotations" image, where the red area represents the foreground of the photo and blue represents the background.

[Figures: main image, user annotations, overlay demo, result mask, final result]

Here is the code used to produce the final result above, assuming the user annotations are loaded with a StorageFileImageSource:

using (var source = new StorageFileImageSource(MainImage))
using (var annotations = new StorageFileImageSource(UserAnnotations))
using (var redCanvas = new ColorImageSource(new Size(300, 370), Color.FromArgb(255, 255, 0, 0)))
using (var segmenter = new InteractiveForegroundSegmenter(source))
using (var filterEffect = new FilterEffect(source))
using (var blendFilter = new BlendFilter(redCanvas, segmenter, BlendFunction.Add, 0.5))
using (var renderer = new JpegRenderer(filterEffect))
{
    segmenter.AnnotationsSource = annotations;
    segmenter.ForegroundColor = Color.FromArgb(255, 251, 0, 0);
    segmenter.BackgroundColor = Color.FromArgb(255, 0, 0, 250);

    filterEffect.Filters = new[] { blendFilter };

    var buffer = await renderer.RenderAsync();
}

One could also use a WriteableBitmap to allow the user to draw on a canvas and then use the resulting image as annotations. Here is a code sample that demonstrates creating a WriteableBitmap, drawing on it, and finally using it as an annotations source:

WriteableBitmap bmp = new WriteableBitmap(100, 100);
bmp.DrawLine(20, 10, 20, 90, System.Windows.Media.Color.FromArgb(foreground.A, foreground.R, foreground.G, foreground.B));
bmp.DrawLine(50, 30, 50, 70, System.Windows.Media.Color.FromArgb(background.A, background.R, background.G, background.B));
bmp.DrawLine(80, 10, 80, 90, System.Windows.Media.Color.FromArgb(foreground.A, foreground.R, foreground.G, foreground.B));

Bitmap userAnnotations = bmp.AsBitmap();

using (var annotations = new BitmapImageSource(userAnnotations))
{
	...
}
Note: WriteableBitmap's extension method AsBitmap can be found in the Nokia.InteropServices.WindowsRuntime namespace. The DrawLine extension method is part of the WriteableBitmapEx library.

Segmentation is usually an iterative process, meaning that the user will start with a crude version of annotations and inspect the output that the InteractiveForegroundSegmenter generates. The user will then find the areas where the segmentation could be improved, add more annotations to the original annotations image, and render again. This process continues until the user is satisfied with the result.

Please note that the segmentation process can fail if there is not enough information in the AnnotationsSource image. The bare minimum is one pixel in each of the foreground and background colors, but usually more are required. If the segmentation cannot be completed successfully, an ArgumentException is thrown with the message "Segmentation could not complete successfully. Try adding more annotations to AnnotationsSource."

Segmentation is an expensive operation, so with default parameters it cannot be performed on arbitrarily large images. To allow processing even of large images, use the Quality property. It affects the working size of the algorithm, so a lower quality setting improves both the memory consumption and the processing time of the effect.
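Both points can be sketched on top of the segmentation example above (the 0.5 quality value is illustrative only):

```csharp
// Lower the working size: less memory, faster processing, coarser mask.
segmenter.Quality = 0.5;

try
{
    var buffer = await renderer.RenderAsync();
}
catch (ArgumentException)
{
    // Not enough annotations: ask the user to mark more foreground
    // and background areas, then render again.
}
```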

Lens Blur Effect

Note: Available since version 1.1

The LensBlurEffect applies blur to an image in a way similar to how out-of-focus areas are rendered by a lens, an effect also known as bokeh. The effect supports setting kernels corresponding to different aperture shapes. There are several predefined shapes included in the SDK (circle, hexagon, flower, star, and heart), and custom, user-defined, shapes are also supported.

Lens blur can be applied to the whole image, or alternatively the user can specify a focus area where no blur will be applied. Different areas of the image can be blurred with different kernels. The user specifies this, and optionally also a focus area, with the kernel map.

A kernel map is a grayscale image where each pixel value is the index of the kernel to apply to the corresponding image pixel. The value 255 is reserved for the focus area. For example, if the center of the image should remain in focus, the center of the kernel map should have the value 255. To blur an area with the first kernel provided, use the value 0; for the second kernel, the value 1; and so forth. LensBlurEffect takes any IImageProvider as its KernelMap input, so the developer can provide the kernel map from a wide range of sources, or generate it with a GradientImageSource or a BufferImageSource.

The following example applies the lens blur effect on the background of the image, while the foreground remains in focus. It uses a mask created by the interactive foreground segmenter as kernel map.

[Figures: main image, image with annotations, result]
using (var source = new StorageFileImageSource(mainImage))
using (var annotations = new StorageFileImageSource(userAnnotations))
using (var segmenter = new InteractiveForegroundSegmenter(source))
using (var lensBlurEffect = new LensBlurEffect(source, new LensBlurPredefinedKernel(LensBlurPredefinedKernelShape.Circle, 30) ))
using (var renderer = new JpegRenderer(lensBlurEffect))
{
    segmenter.AnnotationsSource = annotations;
    segmenter.ForegroundColor = Color.FromArgb(255, 251, 0, 0); 
    segmenter.BackgroundColor = Color.FromArgb(255, 0, 0, 250); 
    
    lensBlurEffect.KernelMap = segmenter;
    var buffer = await renderer.RenderAsync();
}

It should be noted that the lens blur is an expensive operation, requiring far more resources than a normal BlurFilter. The increased complexity buys quality: lens blur produces much more photorealistic results. The cost can be regulated, since the effect can do the bulk of the processing on a smaller image without significantly affecting the quality of the end result. The Quality property controls this working size, allowing the effect to be applied even to large images; a lower quality setting improves both the memory consumption and the processing time of the effect. The size of each kernel used by the effect is also scaled by the Quality property, so the developer does not need to adjust kernel sizes when changing the parameters of the LensBlurEffect. That said, a Quality setting below 1.0 is a compromise, and it will produce worse results in both the focused and the blurred areas of the image.
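Continuing the example above, reducing the working size is a one-line change (0.5 is an illustrative value):

```csharp
// Process at a smaller working size; kernel sizes are scaled automatically.
lensBlurEffect.Quality = 0.5;
var buffer = await renderer.RenderAsync();
```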

Delegating Effect

Use a DelegatingEffect when the image is to be processed by a user-implemented class that implements the ICustomEffect interface. The user class is attached to the DelegatingEffect upon creation. In C#, this is made much simpler by the CustomEffectBase base class. See Custom Sources and Effects.
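As a sketch of the C# route, a minimal custom effect derived from CustomEffectBase might look like this. The OnProcess override and the PixelRegion.ForEachRow helper are assumed from the Custom Sources and Effects documentation; verify them against your SDK version. This example inverts the color channels while preserving alpha:

```csharp
public class InvertEffect : CustomEffectBase
{
    public InvertEffect(IImageProvider source) : base(source)
    {
    }

    protected override void OnProcess(PixelRegion sourcePixelRegion, PixelRegion targetPixelRegion)
    {
        sourcePixelRegion.ForEachRow((index, width, position) =>
        {
            for (int i = 0; i < width; i++)
            {
                // Pixels are packed as ARGB; keep alpha, invert RGB.
                uint pixel = sourcePixelRegion.ImagePixels[index + i];
                uint alpha = pixel & 0xFF000000;
                targetPixelRegion.ImagePixels[index + i] = alpha | (~pixel & 0x00FFFFFF);
            }
        });
    }
}
```

The effect is then used like any other image provider, e.g. as the source of a renderer.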


Last updated 9 June 2014
