Partial filter application and blending with Nokia Imaging SDK


This article explains how to use Nokia Imaging SDK to create powerful image editing applications. In this particular application, a user can select part of the picture by 'painting' over it and apply filter only to painted region. Then the filtered region can be blended back onto the image to create interesting images.

Winner: This article was a winner in the Nokia Imaging Wiki Competition 2013Q3.

Article Metadata
Code Example
Tested with
SDK: Windows Phone 8.0 SDK, Nokia Imaging SDK Beta 1
Device(s): Nokia Lumia 920
Compatibility
Platform(s): Windows Phone 8
Dependencies: MVVM Light, Windows Phone Toolkit, Coding4Fun, Nokia Imaging SDK
Article
Created: to_pe (30 Aug 2013)
Last edited: hamishwillee (14 Oct 2013)


Introduction

The Nokia Imaging SDK brings a set of powerful filters which, either alone or combined, can create wonderful images. Applications that rely on filtering technology (e.g. Instagram) apply one or several filters to the entire image in order to make it more appealing. This article takes a different approach: the user selects an arbitrary region of the image and a filter is applied to just that region. Afterwards, that part is blended back into the image, so images can feature one or more differently filtered regions. We will show the technical details behind building such an application, along with several examples of interesting effects.

Since we will talk a lot about blending in this article, a definition is needed. Blending is the process of merging two images or, in the case of advanced photo manipulation applications, two layers. Since both images or layers are made of pixels, the function that takes the two pixels occupying the same position and returns the resulting pixel is called a "blending function". There are several well-known blending modes; you can read up on them in the Wikipedia article on blend modes. There is also a technique called "alpha blending", used when dealing with transparent images, which is not covered in this article.

Note: when testing the application in the emulator, keep in mind that there is a bug when choosing photos from the library. Until you manually open the "photos" application, calling the PhotoChooserTask will give the appearance that the library is empty. It is not, but you need to start the "photos" application once before choosing an image from the library.

Note: Feel free to use the source code attached to this article; for the most recent version, refer to the open source repository over at GitHub.

Prerequisites

Besides the Nokia Imaging SDK, which is available as a NuGet package, several other packages have been used.

Metro Studio 2 by Syncfusion was used to create icons.

Creating the application

For this particular application, the default Windows Phone 8 template was used. After adding the necessary NuGet packages and adjusting the build configuration (a step specific to using the Nokia Imaging SDK), we are ready to start using the SDK. The idea behind this application is to leverage the filters that come with the Nokia Imaging SDK to create stunning and surreal images similar to other applications in the marketplace, but with a twist: let the user pick which parts of the image are filtered, and with what filter.

Here is one example:

Magicpool 0.jpg (the user picks an image from the library or takes a picture)
Magicpool 1.jpg (the user picks a region for filter application)
Magicpool 2.jpg (the final result using MagicPenFilter and the Darken blending mode)

Unlike traditional filtering applications, the user can choose arbitrary regions for filter application. By applying filters to specific regions, those regions can be highlighted.

Application architecture and functionality

This is the basic layout of the application.

Application architecture

Tools selection overlay

The application consists of just one main editing page (named MainPage.xaml) which contains the entire application's functionality. The logic is implemented in the backing view model (named MainViewModel). The main functionalities are:

  • Pick an image from the library or take a photo.
  • Choose one of the selection tools (the default is the brush selection tool).
  • Discard all changes.
  • Choose a filter to apply to the selected region.
  • Choose a blending mode.
  • Change the overall shade of gray for the replacement pixels (more on that further down).
  • Save resulting photo to the phone or share it via built-in sharing capabilities.

Picking an image is done through the standard PhotoChooserTask and the resulting stream is cached in the application's isolated storage. The next time the user opens the application, the last image from the previous session will be used as the starting image. This choosing part can be extended to allow picking files from a URL, SkyDrive or any other cloud storage provider.
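As an illustration, here is roughly what that flow could look like. This is a minimal sketch rather than the attached project's exact code; the file name "LastImage.jpg" and the handler name are our own choices.

using System.IO;
using System.IO.IsolatedStorage;
using Microsoft.Phone.Tasks;

private readonly PhotoChooserTask _photoChooserTask = new PhotoChooserTask();

// wire up once, e.g. in the page constructor
public MainPage()
{
    InitializeComponent();
    _photoChooserTask.Completed += OnPhotoChosen;
}

private void OnPhotoChosen(object sender, PhotoResult e)
{
    if (e.TaskResult != TaskResult.OK)
        return;

    // cache the chosen stream in isolated storage for the next session
    using (var store = IsolatedStorageFile.GetUserStoreForApplication())
    using (var file = store.CreateFile("LastImage.jpg"))
    {
        e.ChosenPhoto.CopyTo(file);
    }

    e.ChosenPhoto.Seek(0, SeekOrigin.Begin);
    // ... load e.ChosenPhoto into the editing pipeline ...
}

Calling _photoChooserTask.Show() then launches the picker.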

Once the image is loaded, the user can start defining the region using the built-in selection tools. Three are currently implemented:

  • Brush selection tool - the user can touch the screen to select anything.
  • Magic wand selection tool - by touching a particular area, pixels that are similar to the chosen pixel are selected.
  • Rectangular selection tool - the user draws a rectangle over the image, which is then used as a brush mask.

Here are the differences between the selection tools:

By default, the brush selection tool is used. The user can select another tool by pulling down a "selection tools" overlay. This is done by swiping down from the top edge of the phone (the edge above the image), which brings down the "selection tools" overlay seen on the image to the right. Once a tool is selected, the user can start applying it to the selected image (if any). The application is designed with extensibility in mind and allows for easy creation of additional selection tools. Zooming in or inverting the selection could also be implemented using the same interfaces. All tools implement the ITool interface, while selection tools additionally implement the ISelectionTool interface described below.

Once the user picks an image, a region is defined by applying the selection tools. Even though the undo feature is missing in this early iteration, it is relatively easy to add by separating all selections into "layers". Once the user is satisfied with the selection, the currently selected filter is applied to the defined region and blended back onto the image by pressing the "apply" icon (the one with the check mark). As mentioned before, blending is a process of merging two images to produce a third one. In this case, one image is the original image with part of it "carved out" and replaced with white pixels. The second image contains those carved-out pixels, with the rest of the pixels turned to pure white. The replacement pixels can be adjusted using the two sliders on the main page, which will be described in more detail below.


Even after the filter and blending are applied, the user can change both the filter used for the filtered region and the blending parameters. This allows for "fine tuning" and experimentation. Once the user is satisfied with the results, the image can be either shared using the built-in sharing capabilities or saved to the media library.

Selection tools

As mentioned before, there are three different selection tools, each implemented with a specific algorithm. They are described here in further detail. Each tool implements the ITool interface:

public interface ITool
{
    double ImageWidth { get; set; }
    double ImageHeight { get; set; }
    WriteableBitmap TargetImage { get; set; }

    void Started(ManipulationStartedEventArgs e);
    void Delta(ManipulationDeltaEventArgs e);
    void Ended(ManipulationCompletedEventArgs e);
}

The first two properties (ImageWidth and ImageHeight) are used for the transformation from screen space to image space; in most cases the underlying image will be significantly larger than the UI control used to represent it. The last property is necessary since a tool might want to inspect the image for its own purposes. The three methods in the interface are used for analyzing user manipulation.
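The conversion these properties enable boils down to simple scaling; the same arithmetic appears verbatim in the tools below. A small sketch (the helper names are our own):

// screen-space Point to image-space coordinates
private int ToImageX(Point p)
{
    return (int)(p.X * TargetImage.PixelWidth / ImageWidth);
}

private int ToImageY(Point p)
{
    return (int)(p.Y * TargetImage.PixelHeight / ImageHeight);
}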

Using this interface you might implement a crop tool, a color picker tool and, in general, read-only tools. To implement a selection tool that generates a mask, you need several additional parameters. To accommodate that, another interface is layered on top of the ITool interface:

public interface ISelectionTool : ITool
{
    byte[] MaskBuffer { get; set; }
    int[] Source { get; set; }
    int[] Target { get; set; }
}

Tools that implement this interface have an input pixel array (Source), which should be left unmodified, and an output pixel array (Target), which they use for building the result. MaskBuffer is used to remember which pixels have been manipulated. An undo stack could easily be created by chaining tools and putting them on a stack, but due to lack of time it has not been implemented at this point. We can now see how the individual tools are implemented.

Brush selection tool

Applying the brush selection tool
The brush selection tool is the default tool when starting the application. The brush in this case is a filled circle with a radius of 50 pixels (see the BrushRadius constant below). This can be further extended in three major directions:
  1. The size could be customizable.
  2. A wider selection of brush shapes: square, triangle, star, custom shapes, shape combinations, etc.
  3. "Solidness" of the brush - right now it is a filled circle, but it could be sparsely filled, hollow or something different (think of the graffiti brush from MS Paint).

When the user touches the image, the center of the brush is positioned to match the center of the touch point. The user can drag around and "paint" the area. Unfortunately, due to the missing zoom capability and the inability to customize the brush size, it may be hard to paint the desired region precisely. Still, this brush gives a natural feel when picking the region.

The brush is defined as a mask: a two-dimensional byte array (101 x 101 for a brush radius of 50) filled with values from 0 to 255. The value of 0 means that the pixel is to be left alone; other values signify the "replacement strength". The brush is generated via the following algorithm:

private const int BrushRadius = 50;
private const int Tolerance = 6;

private void BuildBrush()
{
    _brushMask = new byte[BrushRadius * 2 + 1, BrushRadius * 2 + 1];

    const double threshold = (BrushRadius - Tolerance) * (BrushRadius - Tolerance);
    const double fallout = BrushRadius * BrushRadius;
    const double delta = fallout - threshold;

    for (var i = 0; i < BrushRadius * 2 + 1; ++i)
    {
        for (var j = 0; j < BrushRadius * 2 + 1; ++j)
        {
            // distance squared from the brush center
            var d2 = (i - BrushRadius) * (i - BrushRadius) + (j - BrushRadius) * (j - BrushRadius);
            byte a;
            if (d2 <= threshold)
            {
                a = 0xFF;   // fully inside the brush
            }
            else if (d2 > fallout)
            {
                a = 0;      // outside the brush
            }
            else
            {
                // soft fade-out between threshold and fallout
                var t = fallout - d2;
                a = (byte)(255.0 * t / delta);
            }

            _brushMask[i, j] = a;
        }
    }
}

Note that the algorithm generates a soft fade-out which could be used for alpha blending, but it is unused in the current implementation. The brush is applied on a separate thread. Since the code is rather large and is provided as an attachment to this article, we will just take a look at the underlying algorithm. When the user interacts using the tool, all points and movements are added to a queue which is processed on another thread. This keeps the application responsive when the user swipes across the screen, since applying the brush is fast compared with generating the final image and refreshing the UI.

public void Started(ManipulationStartedEventArgs e)
{
    lock (_syncLock)
    {
        _touchQueue.Enqueue(Tuple.Create(e.ManipulationOrigin, new Point()));
        _lastPoint = e.ManipulationOrigin;
    }
}

public void Delta(ManipulationDeltaEventArgs e)
{
    lock (_syncLock)
    {
        _touchQueue.Enqueue(Tuple.Create(e.ManipulationOrigin, new Point(e.ManipulationOrigin.X - _lastPoint.X, e.ManipulationOrigin.Y - _lastPoint.Y)));
        _lastPoint = e.ManipulationOrigin;
    }
}

public void Ended(ManipulationCompletedEventArgs e) { }

On the separate thread, the _touchQueue is inspected and values are pulled out. Once there are no values left on the queue (which means that the user is no longer moving a finger across the screen), the brush is applied to the user-selected points. The brush is clipped to the image and every pixel of the brush is processed (all 101 x 101 of them in this case). If the brush has a nonzero value at a given pixel, the pixel is moved from the Source to the Target and the TargetImage is updated to reflect this. Here is a sketch of the algorithm:

for each point to apply
    clip brush to the image
    for each pixel that is inside the image
        if brushmask != 0
            copy pixel from source to target
            make display image pixel white

After the algorithm completes, the Target will contain only those pixels where the brush has nonzero value and the TargetImage will have white pixels instead of the old ones. The remaining pixels (those where the brush has zero value) are left intact in the TargetImage and are equal to 0 (black) on Target.

There are several other optimizations that could be used to speed up the algorithm, but then again, this is not a professional application for image manipulation and the performance issues should be negligible on high-end phones.
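For completeness, here is a condensed sketch of what applying the brush at one queued image-space point could look like under the mask semantics described above. The real code in the attachment is larger and also interpolates between queued points; this merely follows the pseudocode.

// Sketch: apply the brush centered at image-space point (cx, cy).
private void ApplyBrushAt(int cx, int cy)
{
    for (var i = 0; i < BrushRadius * 2 + 1; ++i)
    {
        for (var j = 0; j < BrushRadius * 2 + 1; ++j)
        {
            // brush cell (i, j) maps to image pixel (x, y)
            var x = cx + j - BrushRadius;
            var y = cy + i - BrushRadius;

            // clip the brush to the image
            if (x < 0 || x >= TargetImage.PixelWidth || y < 0 || y >= TargetImage.PixelHeight)
                continue;

            // skip pixels the brush does not cover
            if (_brushMask[i, j] == 0)
                continue;

            var index = x + y * TargetImage.PixelWidth;
            MaskBuffer[index] = Math.Max(MaskBuffer[index], _brushMask[i, j]);
            Target[index] = Source[index];                // copy the pixel out
            TargetImage.Pixels[index] = Constants.White;  // show it as white
        }
    }
}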

Magic wand selection tool

Applying the magic wand selection tool
This selection tool is designed to select an area of "similar looking pixels". For example, you might want to select a part of the image which has the same color but different lighting, e.g. a door, table, field, sky or water. Determining how similar two pixels are is done by calculating the distance between their UV components: if the distance is below some threshold value, they are considered "similar". The UV components are part of the YUV color space, so the RGB values have to be converted using the standard transformation formulas. The distance is the standard two-dimensional Euclidean distance.
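The MathEx helpers used below are not listed in the article, so here is one plausible implementation using the standard BT.601 YUV weights. Treat the exact coefficients as an assumption, not the project's verbatim code.

public static class MathEx
{
    // Extract the U and V chroma components of an ARGB pixel (BT.601 weights).
    public static Tuple<double, double> GetUV(int pixel)
    {
        var r = (pixel >> 16) & 0xFF;
        var g = (pixel >> 8) & 0xFF;
        var b = pixel & 0xFF;

        var u = -0.14713 * r - 0.28886 * g + 0.436 * b;
        var v = 0.615 * r - 0.51499 * g - 0.10001 * b;
        return Tuple.Create(u, v);
    }

    // Squared Euclidean distance between two UV pairs.
    public static double Distance2(Tuple<double, double> a, Tuple<double, double> b)
    {
        var du = a.Item1 - b.Item1;
        var dv = a.Item2 - b.Item2;
        return du * du + dv * dv;
    }
}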

The selection is applied at every point the user touches:

public void Started(ManipulationStartedEventArgs e)
{
    Apply(e.ManipulationOrigin);
    TargetImage.Invalidate();
}

public void Delta(ManipulationDeltaEventArgs e)
{
    Apply(e.ManipulationOrigin);
    TargetImage.Invalidate();
}

public void Ended(ManipulationCompletedEventArgs e)
{
    Apply(e.ManipulationOrigin);
    TargetImage.Invalidate();
}

The algorithm is a flood fill with a fixed threshold value, implemented with an explicit stack of points to check. Ideally you would want to be able to adjust the threshold value, but that is not possible in the current implementation. Here is the algorithm:

private void Apply(Point p)
{
    // convert from screen space to image space
    // prefix o = original
    var ox = (int)(p.X * TargetImage.PixelWidth / ImageWidth);
    var oy = (int)(p.Y * TargetImage.PixelHeight / ImageHeight);
    if (ox < 0 || ox >= TargetImage.PixelWidth ||
        oy < 0 || oy >= TargetImage.PixelHeight)
    {
        return;
    }

    var originalPixel = TargetImage.Pixels[ox + oy * TargetImage.PixelWidth];
    var ouv = MathEx.GetUV(originalPixel);

    // apply flood fill algorithm
    var pointsToCheck = new Stack<Tuple<int, int>>();
    pointsToCheck.Push(Tuple.Create(ox, oy));

    while (pointsToCheck.Any())
    {
        var pointToCheck = pointsToCheck.Pop();
        var x = pointToCheck.Item1;
        var y = pointToCheck.Item2;

        var index = x + y * TargetImage.PixelWidth;

        // already processed
        if (MaskBuffer[index] == 255)
            continue;

        // test similarity between colors
        var pixel = TargetImage.Pixels[x + y * TargetImage.PixelWidth];

        var uv = MathEx.GetUV(pixel);
        if (MathEx.Distance2(ouv, uv) > 150)
            continue;

        // apply for this pixel
        MaskBuffer[index] = 255;
        Target[index] = Source[index];
        TargetImage.Pixels[index] = (Constants.White & 0xFFFFFF) | (0xFF << 24);

        // push neighbors to the stack (if reachable)
        if (x > 0)
            pointsToCheck.Push(Tuple.Create(x - 1, y));
        if (x < TargetImage.PixelWidth - 1)
            pointsToCheck.Push(Tuple.Create(x + 1, y));
        if (y > 0)
            pointsToCheck.Push(Tuple.Create(x, y - 1));
        if (y < TargetImage.PixelHeight - 1)
            pointsToCheck.Push(Tuple.Create(x, y + 1));
    }
}

This tool is useful when picking out an area of similar looking colors which would be hard to paint using the regular brush selection tool.

Rectangle selection tool

Applying the rectangle selection tool

This is probably the classic selection tool, familiar to anyone who has ever used an image processing application. The idea is simple: the user touches the screen at one point and drags a finger around to draw the desired rectangle. Once the user is satisfied, the finger is lifted and the area under the rectangle is used for the selection. To enable a region preview during the dragging process, another interface is used:

public interface IVisualTool
{
    FrameworkElement Element { get; set; }
    Point Position { get; set; }

    event EventHandler ElementChanged;
    event EventHandler PositionChanged;
}

The idea behind this interface is that the actual selection is made only after the user finishes the current manipulation. During the dragging period a UI element is used to convey information about the selection in progress. Event handlers are used to signal the parent UI component that the underlying visual part has changed and needs to be updated. The tool spawns the UI element that represents the action for this tool. Since positioning in the UI is not something that can be done from the tool itself, the PositionChanged event is used to signal that the position has changed. Positioning can be done using Margin (if the parent element is a Grid) or, in case the parent is a Canvas, using Canvas.SetLeft and Canvas.SetTop.

public void Started(ManipulationStartedEventArgs e)
{
    _startPoint = e.ManipulationOrigin;

    Position = e.ManipulationOrigin;
    Element = new Rectangle
    {
        Fill = new SolidColorBrush(Colors.White),
        Width = 1,
        Height = 1,
        HorizontalAlignment = HorizontalAlignment.Left,
        VerticalAlignment = VerticalAlignment.Top
    };

    RaisePositionChanged();
    RaiseElementChanged();
}

public void Delta(ManipulationDeltaEventArgs e)
{
    Position = new Point(Math.Min(_startPoint.X, e.ManipulationOrigin.X),
                         Math.Min(_startPoint.Y, e.ManipulationOrigin.Y));

    Element.Width = Math.Abs(_startPoint.X - e.ManipulationOrigin.X);
    Element.Height = Math.Abs(_startPoint.Y - e.ManipulationOrigin.Y);

    RaisePositionChanged();
}

public void Ended(ManipulationCompletedEventArgs e)
{
    Apply(e.ManipulationOrigin);

    Element = null;
    RaiseElementChanged();
    TargetImage.Invalidate();
}

In the current application, the parent container is a Grid and the notifications are handled like this:

private void toolsList_SelectionChanged(object sender, SelectionChangedEventArgs e)
{
    if (_currentVisualTool != null)
    {
        _currentVisualTool.ElementChanged -= CurrentVisualToolOnElementChanged;
        _currentVisualTool.PositionChanged -= CurrentVisualToolOnPositionChanged;
    }

    // ...

    _currentVisualTool = _vm.SelectedTool as IVisualTool;
    if (_currentVisualTool != null)
    {
        _currentVisualTool.ElementChanged += CurrentVisualToolOnElementChanged;
        _currentVisualTool.PositionChanged += CurrentVisualToolOnPositionChanged;
    }
}

private void CurrentVisualToolOnElementChanged(object sender, EventArgs eventArgs)
{
    if (_visualToolElement != null)
    {
        _imageWrapper.Children.Remove(_visualToolElement);
        _visualToolElement = null;
    }

    _visualToolElement = _currentVisualTool.Element;
    if (_visualToolElement != null)
        _imageWrapper.Children.Add(_visualToolElement);
}

private void CurrentVisualToolOnPositionChanged(object sender, EventArgs eventArgs)
{
    if (_visualToolElement != null)
        _visualToolElement.Margin = new Thickness(_currentVisualTool.Position.X, _currentVisualTool.Position.Y, 0, 0);
}
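If the parent container were a Canvas instead of a Grid, the same position notification could be handled with the Canvas attached properties, e.g.:

// Sketch: Canvas-based positioning instead of a Margin on a Grid child.
private void CurrentVisualToolOnPositionChanged(object sender, EventArgs eventArgs)
{
    if (_visualToolElement == null)
        return;

    Canvas.SetLeft(_visualToolElement, _currentVisualTool.Position.X);
    Canvas.SetTop(_visualToolElement, _currentVisualTool.Position.Y);
}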

The final algorithm is straightforward (_startPoint was stored when the manipulation started):

private void Apply(Point finalPoint)
{
    // burn onto image
    var startx = (int)(Math.Min(_startPoint.X, finalPoint.X) * TargetImage.PixelWidth / ImageWidth);
    var starty = (int)(Math.Min(_startPoint.Y, finalPoint.Y) * TargetImage.PixelHeight / ImageHeight);

    var endx = startx + (int)(Math.Abs(_startPoint.X - finalPoint.X) * TargetImage.PixelWidth / ImageWidth);
    var endy = starty + (int)(Math.Abs(_startPoint.Y - finalPoint.Y) * TargetImage.PixelHeight / ImageHeight);

    for (var x = startx; x < endx; ++x)
    {
        for (var y = starty; y < endy; ++y)
        {
            var index = x + y * TargetImage.PixelWidth;
            Target[index] = Source[index];
            TargetImage.Pixels[index] = Constants.White;
        }
    }
}

Further enhancements of this tool might include positioning, scaling and rotating the rectangle before applying it.

Main application logic

All the logic is implemented in the MainViewModel class. It contains an EditingSession instance (field _editingSession) which is responsible for blending layers. Applying the filter to a secondary layer (the one carved out using one or more selection tools) is done using a secondary EditingSession and the result is then both shown in the preview image and blended back into the main session.

Not all filters are included in this implementation, and for those that require parameters, the parameters are hard-coded at the application's launch. A future version of this application will include all basic filters, a dedicated UI for adjusting parameters and also some complex effects. On the other hand, all blend functions are supported and can be tested easily.

The filter and blending are applied when the user taps the apply button, or when the user changes the filter or any blending parameter. To supply the parameters necessary for the selection tools to work, the following fields and properties are used:

  • WriteableBitmap MainImage - used for the UI; binds to SelectedTool.TargetImage.
  • int[] _oldPixels - keeps the original pixels from the selected image; binds to SelectedTool.Source.
  • int[] _buffer - used for the selected region; binds to SelectedTool.Target.
  • byte[] _maskBuffer - used for marking the selection; binds to SelectedTool.MaskBuffer. In some other implementations this would never be part of the main application logic; every time you apply a tool you would get another mask specific to that selection. This would allow applying more than one filter to the same image, but in different regions.

Here is the ApplyExecute implementation (certain parts were omitted):

private async void ApplyExecute()
{
    try { _editingSession.Undo(); } catch { }

    // apply filter to selected pixels only
    PreviewImage = new WriteableBitmap(ImageWidth, ImageHeight);
    _buffer.CopyTo(PreviewImage.Pixels, 0);

    using (var ms = PreviewImage.ToStream())
    {
        var editingSession = await EditingSessionFactory.CreateEditingSessionAsync(ms);
        editingSession.AddFilter(SelectedFilter.Creator());
        await editingSession.RenderToWriteableBitmapAsync(PreviewImage);
    }

    // remove unnecessary pixels
    var removal = (0xFF << 24) | (Grayscale << 16) | (Grayscale << 8) | (Grayscale);
    for (var i = 0; i < PreviewImage.Pixels.Length; ++i)
    {
        if (_maskBuffer[i] == 0)
            PreviewImage.Pixels[i] = removal;

        if (_maskBuffer[i] != 0)
            MainImage.Pixels[i] = ((Grayscale2 << 16) | (Grayscale2 << 8) | Grayscale2) | ((255 - _maskBuffer[i]) << 24);
        else
            MainImage.Pixels[i] = _backup.Pixels[i];
    }

    using (var ms = MainImage.ToStream())
        _editingSession = await EditingSessionFactory.CreateEditingSessionAsync(ms);

    EditingSession blendingSession;
    using (var ms = PreviewImage.ToStream())
        blendingSession = await EditingSessionFactory.CreateEditingSessionAsync(ms);

    _editingSession.AddFilter(FilterFactory.CreateBlendFilter(blendingSession, SelectedBlendFunction));
    await _editingSession.RenderToWriteableBitmapAsync(MainImage);

    PreviewImage.Invalidate();
    MainImage.Invalidate();
}

First the selected filter is applied to the selected region. Then the original image is adjusted to keep the original pixels everywhere except in the selected region, and the result of the first editing session is blended onto the adjusted original image. There are two properties used in the code above which haven't yet been explained: byte Grayscale and byte Grayscale2. Their default value is 255, which yields white in both adjustments, and they can be set via the sliders on the main page. They are explained in the sections below.

Blending itself is a filter that can be created using the FilterFactory.CreateBlendFilter method. This is great since blending can become a part of the filter stack and you can undo its effect.

It is now time to talk about blending.

Blending

As mentioned before, blending is a process of creating a new image from two input images. A blending function takes two pixels as input and returns a third one:

result = f(pixel1, pixel2)

So any function that takes two pixel values and returns a third is a blending function. However, there are several classic blending functions present in practically all modern image processing applications, and they also come with the Nokia Imaging SDK.

Before explaining each individual blending function, let's see how this application utilizes blending. As we have seen before, the user selection is copied to another image. Let's see how that looks:

Region selection in the application: Regionselection1 0.jpg
Adjusted original image: Regionselection1 1.jpg (the user selection is painted white)
Selected region with filter applied: Regionselection1 2.jpg (unused pixels are white)

The rightmost image above is blended on top of the middle image. The order of the images usually matters, and swapping them can yield a different result under the same blend function. As mentioned before, the application allows setting the shade of gray for the filling pixels in the two pictures on the right above. This has a strong impact on the result, since those pixels take part in the blending (the user selection splits the image into two parts). Different results could be achieved by using a color, a gradient image or another image as the filler instead of a shade of gray.

Please note that in the formulas below each color component is represented by a value from 0 to 1, not from 0 to 255. Let's now take a look at each individual blending function.
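The SDK applies these functions internally (via FilterFactory.CreateBlendFilter, as seen earlier), but to make the formulas below concrete, here is a sketch of how such a per-pixel function could be applied by hand to 32-bit ARGB pixels. The BlendPixel helper is purely illustrative, not SDK code, and it ignores the alpha channels of the inputs.

private static int BlendPixel(int p1, int p2, Func<double, double, double> f)
{
    // extract one 8-bit channel and normalize it to 0..1
    Func<int, int, double> channel = (p, shift) => ((p >> shift) & 0xFF) / 255.0;
    // clamp to 0..1 and convert back to an 8-bit value
    Func<double, int> toByte = v => (int)(Math.Min(Math.Max(v, 0.0), 1.0) * 255.0);

    var r = toByte(f(channel(p1, 16), channel(p2, 16)));
    var g = toByte(f(channel(p1, 8), channel(p2, 8)));
    var b = toByte(f(channel(p1, 0), channel(p2, 0)));
    return (0xFF << 24) | (r << 16) | (g << 8) | b;
}

// Multiply, f(a, b) = a*b, applied to two (hypothetical) pixel values:
var blended = BlendPixel(pixel1, pixel2, (a, b) => a * b);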

Multiply blend function

This blending function multiplies the color components and is defined as:
f(a, b) = a*b

Let's see this blend function in action.

Region selection in the application: Blending sepia multiply 0.jpg
Adjusted original image: Blending sepia multiply 1.jpg (image without the selected part)
Selected region with filter applied: Blending sepia multiply 2.jpg (selected part with the filter applied)

The final result: the original part remained the same, because the white color has the value (1, 1, 1).

If you lower the slider values, the replacement pixels become darker, which in effect yields a darker picture:

Region selection in the application: Blending sepia multiply1 0.jpg
Adjusted original image: Blending sepia multiply1 1.jpg (image without the selected part)
Selected region with filter applied: Blending sepia multiply1 2.jpg (selected part with the filter applied)

The final result is now slightly darker.

The order of the images is irrelevant and values of 1 and 1 for the upper and lower slider respectively don't affect the image.

Add blend function

This blending function adds the color components and is defined as:
f(a, b) = min(a + b, 1)

All values larger than 1 are clamped to 1 (i.e. nothing can be whiter than white). Let's see this blend function in action.

Additive blending lights up the image. Keep replacement pixels darker so they don't affect the other parts.

The order of the images is irrelevant and values of 0 and 0 for the upper and lower slider respectively don't affect the image.

Color blend function

Unlike the previous two blend functions, this blend function does not operate on individual color channels; instead it works in a different color space. The definition from Wikipedia is:

The Color blend mode preserves the luma of the bottom layer, while adopting the hue and chroma of the top layer.

Let's see this blend function in action.

Color information is mostly lost.

This blend function generally works poorly with the current implementation since shades of gray do not carry as much color as regular colors do.

The order of the images is important. The top slider has no effect on the resulting image. A value of 0 on the bottom slider yields black for the selected region, while 1 yields white.

Colorburn blend function

This blend function "divides the inverted bottom layer by the top layer, and then inverts the result" (from Wikipedia).

Blending sepia colorburn.jpg

The order of the images is important. The upper slider darkens the image at smaller values, while the maximum value has no effect on the unselected region. The lower slider increases the contrast.

Colordodge blend function

This blend function "divides the bottom layer by the inverted top layer" (from Wikipedia).

Blending sepia colordodge.jpg

The order of the images is important. An upper slider value of 0 has no effect on the unselected part, while increasing the value lights it up. The lower slider darkens the selected region when the value is closer to 0 and lightens it when the value is closer to 1.

Overlay blend function

This blend function combines the Multiply and Screen functions: it darkens the resulting image where the first image is darker and lightens it where the first image is lighter. With this filter it is easy to control the dark/light levels in the selected and unselected regions.

Blending sepia overlay.jpg

The order of the images is important. Values of 0.5 for both sliders have a negligible effect. Smaller values for the upper slider darken the unselected region, while higher values lighten it. The lower slider has the same behaviour, but affects the selected region.

Softlight blend function

This blending function is also a combination of the Multiply and Screen blend functions and can be used to control how light or dark each respective part of the image is. The order of the images is important and different implementations give different results. The top slider will never turn the unselected region black or white, even at maximum values; however, the lower slider can turn the selected region black or white.

Blending sepia softlight.jpg

Screen blend function

This blend function multiplies the inverted color values and inverts the result. The formula is:

f(a, b) = 1 - (1 - a)*(1 - b)
Blending sepia screen.jpg

The order of the images is not important, and black in either image has no effect. A white pixel in either image gives a white pixel as the result.

Hardlight blend function

This blend function is equivalent to the Overlay blend function, but with the roles of the two layers swapped.

Blending sepia hardlight.jpg

As with Overlay, the order is important and values of 0.5 for both sliders appear to have no effect. The top slider can turn the unselected region black if the value is 0 and white if the value is 255. The bottom slider can never turn the selected region completely black or white.

Darken, Lighten and Pluslighten blend functions

These filters are similar in nature. The Darken filter creates a new pixel from the lowest components of both pixels, while Lighten takes the largest components. The formulas are:

darken  = [min(r1, r2), min(g1, g2), min(b1, b2)]
lighten = [max(r1, r2), max(g1, g2), max(b1, b2)]

In the case of the Darken filter, maximum slider values leave the picture intact; for the Lighten filter, minimum values do.

Blending sepia darken.jpg (Darken filter)
Blending sepia lighten.jpg (Lighten filter)

Pluslighten gives practically the same results as Lighten.
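In terms of the illustrative BlendPixel helper from the Multiply section, Darken and Lighten are just per-channel min and max:

// Darken and Lighten expressed with the BlendPixel sketch from earlier:
var darkened  = BlendPixel(pixel1, pixel2, Math.Min);
var lightened = BlendPixel(pixel1, pixel2, Math.Max);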

Hue blend function

Similar to the Color filter, it works poorly when blending with grayscale parts. The definition of the blending function is:

The Hue blend mode preserves the luma and chroma of the bottom layer, while adopting the hue of the top layer.

This means that the selected region will keep being gray. This works better with full-color blending.

Exclusion and Difference blend functions

These two blending functions are very similar. Difference subtracts the second pixel from the first (or vice versa, so that the value stays positive). Exclusion is similar, but has lower contrast. The order of the images is not important. A slider value of 0 means no change, while a slider value of 255 inverts the pixels (the upper slider controls the unselected part, the lower slider the selected part).
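For reference, the usual textbook definitions of these two functions in the same normalized notation as the formulas above (the SDK's exact implementation may differ slightly):

difference: f(a, b) = |a - b|
exclusion:  f(a, b) = a + b - 2*a*b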

The results can be scary, creepy or plain psychedelic depending on which two images are blended.

Blending sepia exclusion0.jpg (no change when blending with black)
Blending sepia exclusion1.jpg (unselected part turned to 'negative')
Blending sepia exclusion2.jpg (selected part turned to negative)
Blending cartoon exclusion.jpg ('dark cartoon' feel)
Blending sepia difference.jpg (Difference gives only minor differences compared with Exclusion)

Blender application

As a bonus, in the attached source code you will find another application named Blender. Blender allows you to take two photos and blend them in various ways. You can test the blending modes on real photographs using this application, and both the Color and Hue filters will now make much more sense. I will not dwell on the implementation details since the application is quite simple. All the logic happens in the following lines:

var session = await EditingSessionFactory.CreateEditingSessionAsync(FirstImage.ToStream());
session.AddFilter(FilterFactory.CreateBlendFilter(
    await EditingSessionFactory.CreateEditingSessionAsync(SecondImage.ToStream()), SelectedBlendFunction));
BlendedImage = new WriteableBitmap(FirstImage.PixelWidth, FirstImage.PixelHeight);
await session.RenderToWriteableBitmapAsync(BlendedImage);
BlendedImage.Invalidate();

Let's see the results.

Difference is used to determine whether two pictures are the same, or for alignment. So let's compare the results of Lighten and Pluslighten. The darker the resulting pixels are, the more similar the pixels in the two pictures are.

As you can see, a difference exists, but it is negligible.

Sidenote: Alpha blending

We have seen all the different ways of blending images using the Nokia Imaging SDK. At the beginning of this article another technique was mentioned: alpha blending. While all the blending functions described here operate on RGB values or some other color space, alpha blending takes the alpha channel into account. The alpha channel describes transparency. Photos you take every day don't have an alpha channel, since every pixel is represented just by its color.

So where do we get this alpha channel, you might ask. The answer is in computer-generated images and games. For example, when applying a watermark or adding text or a logo to a picture, you use the alpha channel to describe which parts of the image are transparent and which are opaque. In games, transparent textures are used to represent materials like glass, bottles, etc. Blending functions that take alpha into account differ from those described above and are a topic of their own.
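For the curious, the classic "over" operator that alpha-aware blending builds on can be sketched per color channel as follows (normalized 0..1 values; this is the general formula, not an SDK API):

// The top layer's alpha decides how much of it covers the bottom layer.
private static double AlphaOver(double top, double bottom, double alpha)
{
    return alpha * top + (1.0 - alpha) * bottom;
}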

Summary

The Nokia Imaging SDK is a powerful tool that can be harnessed in different ways. I hope that this approach to building complex compositions yields interesting applications that go beyond the simple "filter and share" pattern. Since the application described here applies filters to just a certain part of the image (or, conversely, everywhere but some special part), the resulting images can have a surreal or hyper-real feel.

A further optimization for these types of applications is delegating all the work on the actual images to another thread and working only in preview mode in the UI. This increases the speed of processing and reduces the strain on the phone, which in effect won't drain the battery as much.

Source Code

Download the source code: File:BlendingApplications.zip
