Core concepts

In this chapter you will find an overview of the Imaging SDK, an explanation of its basic building blocks, and several code examples. This chapter provides readers with the knowledge they need to use the Imaging SDK at both basic and more advanced levels.

The libraries

Before the functionality provided by the Nokia Imaging SDK can be used, the SDK libraries must be added to the project. This is done using the Visual Studio NuGet package manager. For detailed instructions, please see the chapter Adding libraries to the project.

The bulk of the functionality in the Nokia Imaging SDK is provided as a Windows Runtime Component. One of the key benefits of this is that developers can use the library from the .NET family of languages (C#, VB), from C++, and from JavaScript (although JavaScript is not supported on Windows Phone). More information on the Windows Runtime can be found in the Windows Phone API reference (MSDN) and the API reference for Windows Store apps.

In addition to the Windows Runtime Component, the SDK also contains a .NET class library. This provides APIs that are intended to make the .NET developer's life a bit easier by allowing them to work with .NET types such as Stream and WriteableBitmap, and by providing base classes for implementing custom image sources and effects.

The basic building blocks

The SDK allows accessing image data without decoding a whole JPEG image, enabling fast previews, rotation, and cropping of high-resolution images, as well as applying one or several of the 50+ filter effects provided. All these use cases are implemented by mixing and matching three basic elements: an image source, a renderer, and an effect, to create a chain of operations, or a filter graph.

Because these elements implement the interfaces “IImageProvider” and “IImageConsumer”, they can be connected in different ways, forming links and branches, flexibly expressing a powerful image processing “graph”.
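As an illustrative sketch of such a link (using only classes that appear later in this chapter; the variables stream and writeableBitmap are assumed to exist in the surrounding code), each element can be passed as the source of the next, acting as an IImageProvider for its successor:

```csharp
// Sketch: a linear graph where each element consumes the previous one
// (IImageConsumer) and provides an image to the next one (IImageProvider).
var source = new StreamImageSource(stream);
var antique = new FilterEffect(source)  { Filters = new IFilter[] { new AntiqueFilter() } };
var rotated = new FilterEffect(antique) { Filters = new IFilter[] { new RotationFilter(35.0) } };
var renderer = new WriteableBitmapRenderer(rotated, writeableBitmap);
```

Splitting the filters across two FilterEffect instances is not necessary here, but it illustrates how any provider can feed any consumer to build a longer chain.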

The three categories of elements used to perform image processing:

  • Image source: Often placed at the beginning of the processing pipeline. Originates an image in some way, e.g. by generating or loading it from storage, and sets it up to be used further in the processing pipeline. All image sources implement the interface IImageProvider.

  • Effect: Takes one or more images as input, performs some kind of processing, and outputs one new image. All effects implement the interface IImageConsumer, which enables an effect to accept one primary source image. If secondary sources are required, they are available as additional properties. Also, just like image sources, all effects implement IImageProvider in order to output the resulting image.

  • Renderer: Placed at the end of the pipeline. This renders the resulting image into a certain format or container for application consumption. All renderers implement the interface IImageConsumer, since they need to accept one source image.

The SDK contains a number of concrete implementations of image sources, effects, and renderers, in order to fulfill various needs.

Image source classes are named according to what they accept as input. Examples: StreamImageSource, BufferImageSource, BitmapImageSource.

Effect classes are named according to what kind of processing they perform. One example is FilterEffect, which is able to apply a sequence of the 50+ image filters provided by the SDK.

Renderer classes are named according to what sort of output they produce. Examples: JpegRenderer and BitmapRenderer.
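As a hedged sketch of how only the renderer needs to change when a different output format is wanted (it is assumed here, based on the SDK reference rather than this chapter, that JpegRenderer returns the encoded JPEG data as an IBuffer; source is an existing image source):

```csharp
// Sketch: render the pipeline output as encoded JPEG data instead of a bitmap.
using (var renderer = new JpegRenderer(source))
{
    IBuffer jpegBuffer = await renderer.RenderAsync(); // encoded JPEG bytes
}
```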

Once created, any of these objects can be kept by the application, reused, and reassembled into a different image processing pipeline wherever that is desirable.

A practical example of a pipeline

Let’s say that an image is available in a System.IO.Stream, for example an image selected by the user with the PhotoChooserTask. We choose to apply two filters to that image: an Antique filter followed by a Rotation filter. The resulting image will be rendered into a WriteableBitmap.

[Images: the original image, the image after the “Antique” filter, and the image after the “Rotation” filter.]

The processing pipeline will be set up like this:

  1. A StreamImageSource is created to use a JPEG image in a System.IO.Stream.
  2. A FilterEffect is created and the previous StreamImageSource is passed in as the source. A list of lightweight filters is created and assigned to the FilterEffect.
  3. A WriteableBitmapRenderer is created and the previous FilterEffect is passed in as the source, along with a WriteableBitmap to render into.
  4. The method RenderAsync() on the renderer is called, which results in a WriteableBitmap containing the processed image.
  5. The objects are disposed.

The same example as written in C#:

var filters = new IFilter[]
{
    new AntiqueFilter(),
    new RotationFilter(35.0)
};

using (var source = new StreamImageSource(stream))
using (var filterEffect = new FilterEffect(source) { Filters = filters })
using (var renderer = new WriteableBitmapRenderer(filterEffect, writeableBitmap))
{
    await renderer.RenderAsync();
}

Note the array that is created, and specifically the IFilter interface, which is implemented by all lightweight filters usable in a FilterEffect.

While the asynchronous rendering operation is running, an attempt to call most methods or to change most properties on the involved objects will result in an exception being thrown. This is intentional and protects against unintended results.

After the rendering operation has completed, the application is again free to change properties on any of the objects in the pipeline, for instance the angle of the RotationFilter. To see the new result, just call RenderAsync on the renderer again.

The result can be found in the WriteableBitmap that was passed into the WriteableBitmapRenderer. It is also returned by RenderAsync as an IAsyncOperation<WriteableBitmap>, so the application could pass that IAsyncOperation to another part of the application as a “future result”, without also having to track the original WriteableBitmap object.
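Continuing the example, a second rendering after a property change might look like the sketch below. It assumes the RotationFilter instance was kept in a variable named rotationFilter, and the property name RotationAngle is an assumption taken from the SDK reference, not from this chapter:

```csharp
// Sketch: adjust the rotation angle and render again into the same bitmap.
rotationFilter.RotationAngle = 90.0;  // assumed property name
await renderer.RenderAsync();         // writeableBitmap now holds the new result
```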

Also note that in this example, the objects involved are created and disposed of using a chain of using-statements. This is a good practice when the image processing scenario is simple and self-contained. However, as long as the objects are properly disposed of when not needed, they can just as well be kept as class members or in collections and be reused for multiple renderings.
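As a sketch of the latter approach, the pipeline objects could be kept as members of a class that itself implements IDisposable, so they can be rendered repeatedly and cleaned up once (all names here are illustrative):

```csharp
// Sketch: keep the pipeline as class members and reuse it across renderings.
class FilterPreview : IDisposable
{
    private readonly StreamImageSource _source;
    private readonly FilterEffect _effect;
    private readonly WriteableBitmapRenderer _renderer;

    public FilterPreview(Stream stream, WriteableBitmap target)
    {
        _source = new StreamImageSource(stream);
        _effect = new FilterEffect(_source) { Filters = new IFilter[] { new AntiqueFilter() } };
        _renderer = new WriteableBitmapRenderer(_effect, target);
    }

    // Can be called multiple times, e.g. after changing filter properties.
    public IAsyncOperation<WriteableBitmap> Render()
    {
        return _renderer.RenderAsync();
    }

    public void Dispose()
    {
        _renderer.Dispose();
        _effect.Dispose();
        _source.Dispose();
    }
}
```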

Differences between Windows Phone and Windows

Although the Imaging SDK code is the same on Windows Phone and Windows, there are still some rare cases where the code behaves differently due to OS differences.

CameraPreviewImageSource is fundamentally different between these platforms. Because there is no ICameraCaptureDevice support in Windows, the Windows version of the SDK implements InitializeAsync, StartPreviewAsync, and StopPreviewAsync to support preview.
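On Windows, driving the preview with the methods named above might look roughly like this. The exact method signatures are not given in this chapter, so this is a sketch; in particular, the argument to InitializeAsync (assumed to select a camera device, with an empty string for the default) should be checked against the API reference:

```csharp
// Sketch (Windows only): start and stop the camera preview.
var previewSource = new CameraPreviewImageSource();
await previewSource.InitializeAsync(string.Empty); // assumed: default camera
await previewSource.StartPreviewAsync();
// ... use previewSource as an image source for rendering preview frames ...
await previewSource.StopPreviewAsync();
```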

When working with the WriteableBitmap class, the exact same code that works on Windows Phone may not work on Windows. On Windows Phone, the System.Windows.Media.Imaging.WriteableBitmap class is implemented as a managed class, but on Windows, Windows.UI.Xaml.Media.Imaging.WriteableBitmap is part of the WinRT library. In addition to the different namespaces, there are some slight behavioral differences between the classes, as demonstrated in the following sample:

WriteableBitmap pic = await writeableBitmapRenderer.RenderAsync();
// pic.Invalidate(); // Needed on Windows but not on Windows Phone
imageTarget.Source = pic;

This code runs on Windows Phone without a glitch, but on Windows the Image.Source is not updated. A Windows developer needs to explicitly call Invalidate() on the WriteableBitmap after its content has been refreshed; otherwise, a cached copy will be rendered.

Last updated 20 February 2014
