Partial filter application and blending with Nokia Imaging SDK

Note: This is an entry in the Nokia Imaging Wiki Competition 2013Q3

This article explains how to use the Nokia Imaging SDK to create powerful image editing applications. In this particular application, a user can select part of a picture by 'painting' over it and apply a filter only to the painted region. The filtered region can then be blended back onto the image to create interesting compositions.

Article Metadata
Platform(s): Windows Phone 8
Dependencies: MVVM Light, Windows Phone Toolkit, Coding4Fun
Created: to_pe (30 Aug 2013)
Last edited: to_pe (31 Aug 2013)


Introduction

The Nokia Imaging SDK brings a set of powerful filters which, either alone or combined, can create wonderful images. Applications that rely on filtering technology (e.g. Instagram) apply one or several filters to an entire image in order to make it more appealing. This article takes a different approach: the user selects an arbitrary region of the image and a filter is applied to just that region. The filtered part is then blended back into the image, so the result can feature one or more differently filtered regions. In this article we will show the technical details behind building such an application, along with several examples of interesting effects.

Since we will talk a lot about blending in this article, a definition is needed. Blending is the process of merging two images or, in the case of advanced photo manipulation applications, merging two layers. Since both images or layers are made of pixels, the function that takes two pixels occupying the same position and returns a resulting pixel is called a "blending function". There are several well-known and well-defined blending modes; you can read up on them in the Wikipedia article on blend modes. There is also "alpha blending", which is used when dealing with transparent images; it is not covered in this article.

Note: when testing the application in the emulator, keep in mind that there is a bug when choosing photos from the library. Until the "Photos" application has been opened manually at least once, calling the PhotoChooserTask will make the library appear empty. It is not; you just need to start the "Photos" application once before choosing an image from the library.

Prerequisites

Besides the Nokia Imaging SDK, which is available as a NuGet package, several other packages were used: MVVM Light, the Windows Phone Toolkit and Coding4Fun.

Metro Studio 2 by Syncfusion was used to create icons.

Creating the application

For this particular application, the default Windows Phone 8 template was used. After adding the necessary NuGet packages and adjusting the build configuration (a step specific to the Nokia Imaging SDK; see the SDK documentation for details), we are ready to start using the SDK. The idea behind this application is to leverage the filters that come with the Nokia Imaging SDK to create stunning and surreal images, similar to other applications in the marketplace, but with a twist: let the user pick which parts of the image are filtered, and with which filter.

Here is one example:

Magicpool 0.jpg: User picks an image from the library or takes a picture
Magicpool 1.jpg: User picks a region for filter application
Magicpool 2.jpg: The final result using MagicPenFilter and the Darken blending mode

Unlike traditional filtering applications, the user can choose arbitrary regions for filter application. By applying filters to specific regions, those regions can be highlighted. Here are some more examples:

Antique hinge 0.jpg: Original image
Antique hinge 1.jpg: Select region
Antique hinge 2.jpg: Antique filter
Cartoontable 4.jpg: Emboss filter
Cartoontable 5.jpg: MagicPen filter
Fantasyfootball 0.jpg: Original image
Fantasyfootball 1.jpg: Select region
Fantasyfootball 2.jpg: Cartoon filter

Application architecture and functionality

This is the basic layout of the application:

Smartfilter architecture.jpg: Application architecture

Tool selection overlay.jpg: Tools selection overlay

The application consists of just one main editing page (named MainPage.xaml) which contains the entire application's functionality. The logic is implemented in the backing view model (named MainViewModel). The main functionalities are:

  • Pick an image from the library or take a photo.
  • Choose one of the selection tools (the default is the brush selection tool).
  • Discard all changes.
  • Choose a filter to apply to the selected region.
  • Choose a blending mode.
  • Change the overall shade of gray for the replacement pixels (more on that further down).
  • Save the resulting photo to the phone or share it via the built-in sharing capabilities.

Picking an image is done through the standard PhotoChooserTask and the resulting stream is cached in the application's isolated storage. The next time the user opens the application, the last image from the previous session is used as the starting image. This image-picking part could be extended to allow picking files from a URL, SkyDrive or any other cloud storage provider.
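
As a rough sketch, the pick-and-cache flow could look like this (the file name lastimage.jpg and the LoadImage helper are illustrative, not taken from the attached source):

using System.IO;
using System.IO.IsolatedStorage;
using Microsoft.Phone.Tasks;
 
// Sketch of picking a photo and caching it for the next session.
private readonly PhotoChooserTask _photoChooser = new PhotoChooserTask { ShowCamera = true };
 
public void PickImage()
{
    _photoChooser.Completed += OnPhotoChosen;
    _photoChooser.Show();
}
 
private void OnPhotoChosen(object sender, PhotoResult e)
{
    _photoChooser.Completed -= OnPhotoChosen;
    if (e.TaskResult != TaskResult.OK)
        return;
 
    // Cache the picked stream in isolated storage so the application can
    // restore the last image on the next launch.
    using (var store = IsolatedStorageFile.GetUserStoreForApplication())
    using (var file = store.CreateFile("lastimage.jpg"))
        e.ChosenPhoto.CopyTo(file);
 
    e.ChosenPhoto.Seek(0, SeekOrigin.Begin);
    LoadImage(e.ChosenPhoto); // hypothetical helper that hands the stream to the view model
}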

Once the image is loaded, the user can start defining the region using the built-in selection tools. Three are currently implemented:

  • Brush selection tool: the user paints over the image with a finger to select anything.
  • Magic wand selection tool: by touching a particular area, pixels that are similar to the chosen pixel are selected.
  • Rectangle selection tool: the user draws a rectangle over the image which is then used as the selection mask.

The differences between the selection tools are described in detail below.

By default, the brush selection tool is used. The user can select another tool by pulling down the "selection tools" overlay: swiping down from the top edge of the image brings down the overlay shown on the right. Once a tool is selected, the user can start applying it to the loaded image (if any). The application is designed with extensibility in mind and allows for easy creation of additional selection tools; zooming in or inverting the selection could also be implemented using the same interfaces. All tools implement the ITool interface, while selection tools additionally implement the ISelectionTool interface described below.

Once the user picks an image, a region is defined by applying the selection tools. Even though the undo feature is missing in this early iteration, it would be relatively easy to add by separating selections into "layers". Once the user is satisfied with the selection, the currently selected filter is applied to the defined region and blended back onto the image by pressing the "apply" icon (the one with the check mark). As mentioned before, blending is a process of merging two images to produce a third one. In this case, one image is the original with the selected part "carved out" and replaced with white pixels. The second image contains the carved-out pixels, with the rest of the pixels turned pure white. The replacement pixels can be adjusted using the two sliders on the main page, described in more detail below.


Even after the filter and blending are applied, the user can change both the filter used for the selected region and the blending parameters. This allows for fine tuning and experimentation. Once the user is satisfied with the result, the image can be shared using the built-in sharing capabilities or saved to the media library.

Selection tools

As mentioned before, there are three different selection tools, each implemented with a specific algorithm. They are described here in further detail. Each tool implements the ITool interface:

public interface ITool
{
    double ImageWidth { get; set; }
    double ImageHeight { get; set; }
    WriteableBitmap TargetImage { get; set; }
 
    void Started(ManipulationStartedEventArgs e);
    void Delta(ManipulationDeltaEventArgs e);
    void Ended(ManipulationCompletedEventArgs e);
}

The first two properties (ImageWidth and ImageHeight) are used for the transformation from screen space to image space; in most cases the underlying image is significantly larger than the UI control used to display it. The TargetImage property is needed because a tool might want to inspect the image for its own purposes. The three methods are used for analyzing the user's manipulation.

Using this interface you could implement a crop tool, a color picker and read-only tools in general. To implement a selection tool that generates a mask, several additional members are needed. To accommodate that, another interface is layered on top of ITool:

public interface ISelectionTool : ITool
{
    byte[] MaskBuffer { get; set; }
    int[] Source { get; set; }
    int[] Target { get; set; }
}

Tools that implement this interface get an input pixel array (Source), which should be left unmodified, and an output pixel array (Target) used for building the result. MaskBuffer remembers which pixels have been manipulated. An undo stack could easily be created by chaining tools and putting them on a stack, but due to lack of time this has not been implemented. We can now look at how the individual tools are implemented.

Brush selection tool

Applying the brush selection tool
The brush selection tool is the default tool when starting the application. The brush in this case is a filled circle with a radius of 50 pixels. This can be further extended in three major directions:
  1. The size could be customizable.
  2. Wider selection of brush shapes: square, triangle, star, custom shape, shape combinations, etc.
  3. "Solidness" of the brush - right now it is a filled circle but it can be sparsely filled, hollow or something different (think graffiti brush from MS Paint).

When the user touches the image, the center of the brush is positioned to match the center of the touch point. The user can then drag around and "paint" the area. Unfortunately, due to the missing zoom capability and the fixed brush size, it may be hard to paint the desired region precisely. Still, this brush gives a natural feel to picking the region.

The brush is defined as a mask: a two-dimensional byte array of (2 × BrushRadius + 1) × (2 × BrushRadius + 1) entries, filled with values from 0 to 255. A value of 0 means the pixel is to be left alone; other values signify the "replacement strength". The brush is generated via the following algorithm:

private const int BrushRadius = 50;
private const int Tolerance = 6;
 
private void BuildBrush()
{
    _brushMask = new byte[BrushRadius * 2 + 1, BrushRadius * 2 + 1];
 
    const double threshold = (BrushRadius - Tolerance) * (BrushRadius - Tolerance);
    const double fallout = BrushRadius * BrushRadius;
    const double delta = fallout - threshold;
 
    for (var i = 0; i < BrushRadius * 2 + 1; ++i)
    {
        for (var j = 0; j < BrushRadius * 2 + 1; ++j)
        {
            // distance squared from the brush center
            var d2 = (i - BrushRadius) * (i - BrushRadius) + (j - BrushRadius) * (j - BrushRadius);
            byte a;
            if (d2 <= threshold)
            {
                // inside the solid core: full strength
                a = 0xFF;
            }
            else if (d2 > fallout)
            {
                // outside the brush: leave the pixel alone
                a = 0;
            }
            else
            {
                // in the fade-out ring: interpolate linearly
                var t = fallout - d2;
                a = (byte)(255.0 * t / delta);
            }
 
            _brushMask[i, j] = a;
        }
    }
}

Note that the algorithm generates a soft fade-out which could be used for alpha blending, but this is unused in the current implementation. The brush is applied on a separate thread. Since the code is rather large (it is provided in the attached source), we will just look at the underlying algorithm. When the user interacts with the tool, all points and movements are added to a queue which is processed on another thread. This keeps the application responsive when the user swipes across the screen, since applying the brush is fast compared with generating the final image and refreshing the UI.

public void Started(ManipulationStartedEventArgs e)
{
    lock (_syncLock)
    {
        _touchQueue.Enqueue(Tuple.Create(e.ManipulationOrigin, new Point()));
        _lastPoint = e.ManipulationOrigin;
    }
}
 
public void Delta(ManipulationDeltaEventArgs e)
{
    lock (_syncLock)
    {
        _touchQueue.Enqueue(Tuple.Create(e.ManipulationOrigin, new Point(e.ManipulationOrigin.X - _lastPoint.X, e.ManipulationOrigin.Y - _lastPoint.Y)));
        _lastPoint = e.ManipulationOrigin;
    }
}
 
public void Ended(ManipulationCompletedEventArgs e) { }

On the worker thread, _touchQueue is inspected and values are pulled out. Once there are no values left on the queue (meaning the user is no longer moving a finger across the screen), the brush is applied at the user-selected points. The brush is clipped to the image and every pixel of the brush is processed (all 101 × 101 of them in this case). If the brush has a nonzero value at a given pixel, the pixel is copied from Source to Target and TargetImage is updated to reflect this. Here is a sketch of the algorithm:

for each point to apply
    clip brush to the image
    for each pixel that is inside the image
        if brushmask != 0
            copy pixel from source to target
            make display image pixel white

After the algorithm completes, Target contains only those pixels where the brush had a nonzero value, and TargetImage shows white pixels in their place. The remaining pixels (those where the brush has a zero value) are left intact in TargetImage and stay 0 (black) in Target.
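
In code, the per-point application step could look roughly like this (a condensed sketch using the ISelectionTool members and the _brushMask array shown earlier; the attached source also interpolates between queued points, which is omitted here):

// Apply the brush mask centered at image-space point (cx, cy).
private void ApplyBrushAt(int cx, int cy)
{
    var size = BrushRadius * 2 + 1;
    for (var i = 0; i < size; ++i)
    {
        for (var j = 0; j < size; ++j)
        {
            if (_brushMask[i, j] == 0)
                continue; // brush does not touch this pixel
 
            // clip the brush to the image bounds
            var x = cx - BrushRadius + j;
            var y = cy - BrushRadius + i;
            if (x < 0 || x >= TargetImage.PixelWidth || y < 0 || y >= TargetImage.PixelHeight)
                continue;
 
            var index = x + y * TargetImage.PixelWidth;
            MaskBuffer[index] = Math.Max(MaskBuffer[index], _brushMask[i, j]);
            Target[index] = Source[index];               // copy pixel from source to target
            TargetImage.Pixels[index] = Constants.White; // paint the display pixel white
        }
    }
}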

There are several other optimizations that could be used to speed up the algorithm, but then again, this is not a professional application for image manipulation and the performance issues should be negligible on high-end phones.

Magic wand selection tool

Applying the magic wand selection tool
This selection tool is designed to select an area of "similar looking" pixels. For example, you might want to select a part of the image which has the same color but different lighting, e.g. a door, table, field, sky or water. How similar two pixels are is determined by calculating the distance between their UV components: if the distance is below some threshold value, they are considered "similar", otherwise they are not. The UV components are part of the YUV color space, so RGB values have to be converted using the standard transformation formulas. The distance is the ordinary two-dimensional Euclidean distance.
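
The MathEx helpers are not shown in the article, but a minimal sketch using the standard BT.601 RGB-to-YUV formulas might look like this (illustrative; the exact constants and types in the attached source may differ):

public static class MathEx
{
    // Extract the U and V components of a 32-bit ARGB pixel.
    public static Point GetUV(int pixel)
    {
        var r = (pixel >> 16) & 0xFF;
        var g = (pixel >> 8) & 0xFF;
        var b = pixel & 0xFF;
 
        // standard BT.601 chrominance formulas
        var u = -0.14713 * r - 0.28886 * g + 0.436 * b;
        var v = 0.615 * r - 0.51499 * g - 0.10001 * b;
        return new Point(u, v);
    }
 
    // Squared Euclidean distance between two UV pairs; comparing squared
    // distances against a squared threshold avoids the square root.
    public static double Distance2(Point a, Point b)
    {
        var du = a.X - b.X;
        var dv = a.Y - b.Y;
        return du * du + dv * dv;
    }
}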

The selection is applied at every point the user touches:

public void Started(ManipulationStartedEventArgs e)
{
    Apply(e.ManipulationOrigin);
    TargetImage.Invalidate();
}
 
public void Delta(ManipulationDeltaEventArgs e)
{
    Apply(e.ManipulationOrigin);
    TargetImage.Invalidate();
}
 
public void Ended(ManipulationCompletedEventArgs e)
{
    Apply(e.ManipulationOrigin);
    TargetImage.Invalidate();
}

The algorithm is a flood fill driven by a stack of points to check, with a fixed threshold value. Ideally the threshold would be adjustable by the user, but it is not in the current implementation. Here is the algorithm:

private void Apply(Point p)
{
    // convert from screen space to image space
    // prefix o = original
    var ox = (int)(p.X * TargetImage.PixelWidth / ImageWidth);
    var oy = (int)(p.Y * TargetImage.PixelHeight / ImageHeight);
    if (ox < 0 || ox >= TargetImage.PixelWidth ||
        oy < 0 || oy >= TargetImage.PixelHeight)
    {
        return;
    }
 
    var originalPixel = TargetImage.Pixels[ox + oy * TargetImage.PixelWidth];
    var ouv = MathEx.GetUV(originalPixel);
 
    // apply flood fill algorithm
    var pointsToCheck = new Stack<Tuple<int, int>>();
    pointsToCheck.Push(Tuple.Create(ox, oy));
 
    while (pointsToCheck.Any())
    {
        var pointToCheck = pointsToCheck.Pop();
        var x = pointToCheck.Item1;
        var y = pointToCheck.Item2;
 
        var index = x + y * TargetImage.PixelWidth;
 
        // already processed
        if (MaskBuffer[index] == 255)
            continue;
 
        // test similarity between colors
        var pixel = TargetImage.Pixels[x + y * TargetImage.PixelWidth];
 
        var uv = MathEx.GetUV(pixel);
        if (MathEx.Distance2(ouv, uv) > 150)
            continue;
 
        // apply for this pixel
        MaskBuffer[index] = 255;
        Target[index] = Source[index];
        TargetImage.Pixels[index] = (Constants.White & 0xFFFFFF) | (0xFF << 24);
 
        // push neighbors to the stack (if reachable)
        if (x > 0)
            pointsToCheck.Push(Tuple.Create(x - 1, y));
        if (x < TargetImage.PixelWidth - 1)
            pointsToCheck.Push(Tuple.Create(x + 1, y));
        if (y > 0)
            pointsToCheck.Push(Tuple.Create(x, y - 1));
        if (y < TargetImage.PixelHeight - 1)
            pointsToCheck.Push(Tuple.Create(x, y + 1));
    }
}

This tool is useful for picking out an area of similar-looking colors that would be hard to paint with the regular brush selection tool.

Rectangle selection tool

Applying the rectangle selection tool

This is probably the classic selection tool, familiar to anyone who has ever used an image processing application. The idea is simple: the user touches the screen at one point and drags a finger around to draw the desired rectangle. Once satisfied, the user lifts the finger and the area under the rectangle is used for the selection. To enable a region preview during the dragging process, another interface is used:

public interface IVisualTool
{
    FrameworkElement Element { get; set; }
    Point Position { get; set; }
 
    event EventHandler ElementChanged;
    event EventHandler PositionChanged;
}

The idea behind this interface is that the actual selection happens only after the user finishes the current manipulation; during the dragging period, a UI element conveys what the resulting selection will be. The event handlers signal the parent UI component that the underlying visual part has changed and needs to be updated. The tool spawns the UI element that represents its action. Since positioning in the UI is not something that can be done from the tool itself, the PositionChanged event signals that the position has changed. Positioning can then be done using Margin (if the parent element is a Grid) or, if the parent is a Canvas, using Canvas.SetLeft and Canvas.SetTop.
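
For instance, if the wrapper were a Canvas instead of a Grid, the position notification could be handled like this (a sketch; the application itself uses the Grid/Margin variant shown further below):

// Hypothetical Canvas-based positioning of the preview element.
private void CurrentVisualToolOnPositionChanged(object sender, EventArgs eventArgs)
{
    if (_visualToolElement == null)
        return;
 
    Canvas.SetLeft(_visualToolElement, _currentVisualTool.Position.X);
    Canvas.SetTop(_visualToolElement, _currentVisualTool.Position.Y);
}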

public void Started(ManipulationStartedEventArgs e)
{
    _startPoint = e.ManipulationOrigin;
 
    Position = e.ManipulationOrigin;
    Element = new Rectangle
    {
        Fill = new SolidColorBrush(Colors.White),
        Width = 1,
        Height = 1,
        HorizontalAlignment = HorizontalAlignment.Left,
        VerticalAlignment = VerticalAlignment.Top
    };
 
    RaisePositionChanged();
    RaiseElementChanged();
}
 
public void Delta(ManipulationDeltaEventArgs e)
{
    Position = new Point(Math.Min(_startPoint.X, e.ManipulationOrigin.X),
                         Math.Min(_startPoint.Y, e.ManipulationOrigin.Y));
 
    Element.Width = Math.Abs(_startPoint.X - e.ManipulationOrigin.X);
    Element.Height = Math.Abs(_startPoint.Y - e.ManipulationOrigin.Y);
 
    RaisePositionChanged();
}
 
public void Ended(ManipulationCompletedEventArgs e)
{
    Apply(e.ManipulationOrigin);
 
    Element = null;
    RaiseElementChanged();
    TargetImage.Invalidate();
}
 
In the current application the parent container is a Grid, and the notifications are handled like this:
private void toolsList_SelectionChanged(object sender, SelectionChangedEventArgs e)
{
    if (_currentVisualTool != null)
    {
        _currentVisualTool.ElementChanged -= CurrentVisualToolOnElementChanged;
        _currentVisualTool.PositionChanged -= CurrentVisualToolOnPositionChanged;
    }
 
    // ...
 
    _currentVisualTool = _vm.SelectedTool as IVisualTool;
    if (_currentVisualTool != null)
    {
        _currentVisualTool.ElementChanged += CurrentVisualToolOnElementChanged;
        _currentVisualTool.PositionChanged += CurrentVisualToolOnPositionChanged;
    }
}
 
private void CurrentVisualToolOnElementChanged(object sender, EventArgs eventArgs)
{
    if (_visualToolElement != null)
    {
        _imageWrapper.Children.Remove(_visualToolElement);
        _visualToolElement = null;
    }
 
    _visualToolElement = _currentVisualTool.Element;
    if (_visualToolElement != null)
        _imageWrapper.Children.Add(_visualToolElement);
}
 
private void CurrentVisualToolOnPositionChanged(object sender, EventArgs eventArgs)
{
    if (_visualToolElement != null)
        _visualToolElement.Margin = new Thickness(_currentVisualTool.Position.X, _currentVisualTool.Position.Y, 0, 0);
}

The final algorithm is straightforward (_startPoint was recorded when the manipulation started):

private void Apply(Point finalPoint)
{
    // burn onto image
    var startx = (int)(Math.Min(_startPoint.X, finalPoint.X) * TargetImage.PixelWidth / ImageWidth);
    var starty = (int)(Math.Min(_startPoint.Y, finalPoint.Y) * TargetImage.PixelHeight / ImageHeight);
 
    var endx = startx + (int)(Math.Abs(_startPoint.X - finalPoint.X) * TargetImage.PixelWidth / ImageWidth);
    var endy = starty + (int)(Math.Abs(_startPoint.Y - finalPoint.Y) * TargetImage.PixelHeight / ImageHeight);
 
    for (var x = startx; x < endx; ++x)
    {
        for (var y = starty; y < endy; ++y)
        {
            var index = x + y * TargetImage.PixelWidth;
            Target[index] = Source[index];
            TargetImage.Pixels[index] = Constants.White;
        }
    }
}

Further enhancements of this tool might include positioning, scaling and rotating the rectangle before applying it.

Main application logic

All the logic is implemented in the MainViewModel class. It contains an EditingSession instance (the _editingSession field) which is responsible for blending the layers. Applying the filter to the secondary layer (the one carved out using one or more selection tools) is done with a secondary EditingSession; the result is shown in the preview image and blended back into the main session.

Not all filters are included in this implementation, and for those that require parameters, the values are hard coded at the application's launch. A future version of this application will include all basic filters, with a dedicated UI for adjusting parameters, and also some complex effects. On the other hand, all blend functions are supported and can be tested easily.
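
The article does not show how the filter list is built, but the SelectedFilter.Creator() call in the code below suggests a registry pairing each display name with a creator delegate, roughly like this (a hypothetical sketch; the factory methods follow the beta-era FilterFactory API used elsewhere in this article, and exact signatures may differ between SDK versions):

// Hypothetical filter registry; parameters are hard coded at startup.
public class FilterModel
{
    public string Name { get; set; }
    public Func<IFilter> Creator { get; set; }
}
 
Filters = new List<FilterModel>
{
    new FilterModel { Name = "Antique",   Creator = () => FilterFactory.CreateAntiqueFilter() },
    new FilterModel { Name = "Grayscale", Creator = () => FilterFactory.CreateGrayscaleFilter() },
    new FilterModel { Name = "Cartoon",   Creator = () => FilterFactory.CreateCartoonFilter(true) } // parameter hard coded
};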

The filter and blending are applied when the user presses the apply button, or whenever the user changes the filter or any blending parameter. To provide the parameters the selection tools need, the following fields and properties are used (a wiring sketch follows the list):

  • WriteableBitmap MainImage: used for the UI; bound to SelectedTool.TargetImage.
  • int[] _oldPixels: keeps the original pixels of the selected image; bound to SelectedTool.Source.
  • int[] _buffer: holds the selected region; bound to SelectedTool.Target.
  • byte[] _maskBuffer: marks the selection; bound to SelectedTool.MaskBuffer. In other implementations this would never be part of the main application logic: each applied tool would get its own mask specific to that selection, which would allow applying more than one filter to the same image in different regions.
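
Here is a sketch of how these could be wired into a tool when the user selects it (hypothetical; the control reference and method name are illustrative, but the property names come from the ITool/ISelectionTool interfaces above):

// Hypothetical wiring performed when the user switches tools.
private void AttachTool(ISelectionTool tool)
{
    tool.ImageWidth = _imageControl.ActualWidth;   // on-screen size, used for the
    tool.ImageHeight = _imageControl.ActualHeight; // screen-to-image transformation
    tool.TargetImage = MainImage;
    tool.Source = _oldPixels;      // untouched original pixels
    tool.Target = _buffer;         // receives the carved-out selection
    tool.MaskBuffer = _maskBuffer; // marks which pixels are selected
    SelectedTool = tool;
}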

Here is the ApplyExecute implementation (certain parts were omitted):

private async void ApplyExecute()
{
    try { _editingSession.Undo(); } catch { }
 
    // apply filter to selected pixels only
    PreviewImage = new WriteableBitmap(ImageWidth, ImageHeight);
    _buffer.CopyTo(PreviewImage.Pixels, 0);
 
    using (var ms = PreviewImage.ToStream())
    {
        var editingSession = await EditingSessionFactory.CreateEditingSessionAsync(ms);
        editingSession.AddFilter(SelectedFilter.Creator());
        await editingSession.RenderToWriteableBitmapAsync(PreviewImage);
    }
 
    // remove unnecessary pixels
    var removal = (0xFF << 24) | (Grayscale << 16) | (Grayscale << 8) | (Grayscale);
    for (var i = 0; i < PreviewImage.Pixels.Length; ++i)
    {
        if (_maskBuffer[i] == 0)
            PreviewImage.Pixels[i] = removal;
 
        if (_maskBuffer[i] != 0)
            MainImage.Pixels[i] = ((Grayscale2 << 16) | (Grayscale2 << 8) | Grayscale2) | ((255 - _maskBuffer[i]) << 24);
        else
            MainImage.Pixels[i] = _backup.Pixels[i];
    }
 
    using (var ms = MainImage.ToStream())
        _editingSession = await EditingSessionFactory.CreateEditingSessionAsync(ms);
 
    EditingSession blendingSession;
    using (var ms = PreviewImage.ToStream())
        blendingSession = await EditingSessionFactory.CreateEditingSessionAsync(ms);
 
    _editingSession.AddFilter(FilterFactory.CreateBlendFilter(blendingSession, SelectedBlendFunction));
    await _editingSession.RenderToWriteableBitmapAsync(MainImage);
 
    PreviewImage.Invalidate();
    MainImage.Invalidate();
}

First the selected filter is applied to the selected region. Then the original image is adjusted to keep the original pixels everywhere except the selected region, and the result of the first editing session is blended onto the adjusted original. Two properties used in the code above haven't yet been explained: byte Grayscale and byte Grayscale2. Their default value is 255, which yields white in both adjustments; they can be set via the sliders on the main page and are explained in the section below.

Blending itself is a filter, created using the FilterFactory.CreateBlendFilter method. This is great because blending becomes part of the filter stack, so its effect can be undone.

It is now time to talk about blending.

Blending

As mentioned before, blending is the process of creating a new image from two input images. A blending function can be described as a function that takes two pixels as input and returns a third:

result = f(pixel1, pixel2)

So any function that takes two integers and returns a third is a blending function. However, a handful of classic blending functions appear in practically all modern image processing applications, and they also come with the Nokia Imaging SDK.
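
To make the definition concrete, here is the classic Darken function (the mode used in the Magicpool example earlier) written out by hand. This is purely illustrative; the application lets the SDK's blend filter do the work:

// Illustrative Darken blend: each output channel is the minimum of the
// two input channels. Input and output are 32-bit ARGB pixels.
private static int BlendDarken(int pixel1, int pixel2)
{
    var r = Math.Min((pixel1 >> 16) & 0xFF, (pixel2 >> 16) & 0xFF);
    var g = Math.Min((pixel1 >> 8) & 0xFF, (pixel2 >> 8) & 0xFF);
    var b = Math.Min(pixel1 & 0xFF, pixel2 & 0xFF);
    return (0xFF << 24) | (r << 16) | (g << 8) | b;
}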

Before explaining the individual blending functions, let's see how this application utilizes blending. As we have seen, the user's selection is copied to another image. Here is what that looks like:

Regionselection1 0.jpg: Region selection in the application
Regionselection1 1.jpg: Adjusted original image (user selection is painted white)
Regionselection1 2.jpg: Selected region with filter applied (unused pixels are white)

The rightmost image above is blended on top of the middle image. The order of the images usually matters: swapping them can yield a different result under the same blend function. As mentioned before, the application allows setting the shade of gray for the filler pixels in the two pictures on the right. This has a strong impact on the result, since those pixels take part in the blending (the user's selection splits the image into two parts). Different results could be achieved by using a solid color, a gradient or another image as the filler instead of a shade of gray.

Let's take a look now at each individual blending function.




Summary

The Nokia Imaging SDK is a powerful tool that can be harnessed in different ways. I hope that this approach to building complex compositions yields interesting applications that go beyond simple "filter and share". Since the application described here applies filters to just a certain part of the image (or, conversely, everywhere except a special part), the resulting images can have a surreal or hyper-real feel.

Source Code

Download the source code: PaintingBlendingFilter.zip
