Template universal app for video recording with MediaCapture using Imaging SDK Filters


This article provides an example app which is intended to be used as a template for applying filters to a real-time video stream, with the ability to record video with the filters applied. The article also describes the key sections of code that are required for creating your own version of this app.

Video: Recorded video using the Nokia Imaging SDK with MagicPenFilter.

Article Metadata
Code Example
Tested with:
SDK: Windows Phone 8.1, Windows 8.1
Device(s): Nokia Lumia 920, Nokia Lumia 820, Windows 8.1 PC
Compatibility
Platform(s): Windows Phone 8
Dependencies: Nokia Imaging SDK
Platform Security
Capabilities: webcam, microphone
Article
Created: leemcpherson (07 Jun 2014)
Last edited: croozeus (30 Jun 2014)


Introduction

With the introduction of universal apps and WinRT for Windows Phone 8.1, the Silverlight/WP8.0 method of accessing the camera has changed. The latest Nokia Imaging SDK includes an IImageProvider called CameraPreviewImageSource which allows you to process frames from a video stream before capturing a final image to process. However, unless you provide your own video encoder, you cannot record the video being output by the CameraPreviewImageSource.

WinRT provides a MediaCapture class which provides video preview, image capture, and video record capabilities. The preview can be output to a new XAML element called a CaptureElement. There is no traditional way to apply the Nokia Imaging SDK to the preview (or record) output of the MediaCapture class. However, the MediaCapture class will accept Media Foundation Transforms through the AddEffectAsync method. This article describes the key sections of code required to add a Media Foundation Transform to the app and how to incorporate the Nokia Imaging SDK into it.

Many parts of this app were inspired by or even pulled directly from code provided by other developers on this site.

The app described in this article can be downloaded as a zip file: File:MediaCaptureNokiaEffects-masterv1.1.zip

or via GitHub here: https://github.com/limefrogyank/MediaCaptureNokiaEffects

C++ Universal Component

The Media Foundation Transform must be coded in C++, and the interfaces that must be implemented within the transform class are quite complicated. Thankfully, there are several samples of a Media Foundation Transform that do not require much modification for our purposes. A good sample to start with is the MediaExtensions sample from the Universal Apps sample set from Microsoft. The sample contains a GrayscaleTransform component on which this app's transform is based.

http://code.msdn.microsoft.com/windowsapps/Media-extensions-sample-7b466096#content

Preparing the C++ Component

Add Media Foundation Libraries

Next, the Media Foundation libraries need to be added to the linker. For each C++ project (WP8.1 and Win8), right-click on the project name and select properties. Expand the Linker group and select Input. Change the Configuration (at the top) from Active to All Configurations and change Platform from Active to All Platforms so that the following changes are applied to every configuration and platform.

To the end of the "Additional Dependencies" field, add mfplat.lib and mfuuid.lib, separated by a semicolon (;).

Add Common Folder from sample & Include Directories

At the root of the sample app (this app or the MediaExtensions app) is a folder labeled Common which contains several C++ header files (AsyncCB.h, CritSec.h, ExtensionsDefs.h, etc.). These are necessary for the media transform code to compile. Copy the entire Common folder to the root of your own solution.

Next, go to the property page for each C++ project in your app and change the configuration and platform to all, as before. Expand the C/C++ menu and click on General. To the end of the Additional Include Directories field, add a semicolon (;) and the path "..\..\Common" so that VS2013 can find the header files we just copied.

Modify pch.h

Replace the contents of your pch.h file with the following:

#pragma once
 
#include <collection.h>
#include <ppltasks.h>
 
// Windows Header Files:
#include <windows.h>
#include <mfapi.h>
#include <mfidl.h>
#include <mferror.h>
#include <d3d11.h>
#include <D2d1helper.h>
 
#include <assert.h>
 
#include <tchar.h>
#include <Strsafe.h>
#include <objidl.h>
#include <new>
 
#include <wrl\client.h>
#include <wrl\implements.h>
#include <wrl\ftm.h>
#include <wrl\event.h>
#include <wrl\wrappers\corewrappers.h>
#include <windows.media.h>
 
 
#include <ExtensionsDefs.h>
 
using namespace Platform;
using namespace Microsoft::WRL;
using namespace Microsoft::WRL::Wrappers;


Add the Nokia Imaging SDK

Add the Nokia Imaging SDK to all projects in the solution using NuGet. You do NOT have to manually add a reference to the Imaging SDK in the C++ projects. It will not appear as a reference under the project properties, but it will be included in the project anyway.

Add NativeBuffer.h

This class is copied entirely from galazzo's article here: http://developer.nokia.com/community/wiki/Nokia_Imaging_SDK_in_native_code#Wrapping_buffers_into_a_Bitmap

The Transform Class

To the Shared C++ project, add both a .h header file and a .cpp file. Creating these two files from scratch is beyond the scope of this article. Instead, copy the contents of ImagingEffect.cpp and ImagingEffect.h from the sample into your project. (These files are too large to post directly in this article.) There are several functions within this class that the next section will highlight so that you can modify or add to them.

Static Transform Functions for NV12 & YUY2

Most of the work of the transform happens within the non-member functions TransformImage_NV12 and TransformImage_YUY2. These functions transform the video frame at the pointer pSrc and copy the new, transformed image to the pointer pDest.

// Wrap the source frame (YUY2) in an Imaging SDK Bitmap.
auto size = Windows::Foundation::Size(dwWidthInPixels, dwHeightInPixels);
auto totalbytes = (int)dwHeightInPixels * (int)dwWidthInPixels * 2; // each 4-byte macropixel encodes 2 pixels (YUYV)
 
Nokia::Graphics::Imaging::Bitmap^ m_BitmapToProcess = AsBitmapYUY2(pSrc, (unsigned int)size.Width, (unsigned int)size.Height);
 
// Feed the source bitmap into the first effect in the chain.
BitmapImageSource^ source = ref new BitmapImageSource(m_BitmapToProcess);
auto first = providers->GetAt(0);
((IImageConsumer^)first)->Source = source;
 
auto last = providers->GetAt(providers->Size - 1);
 
// Render the output of the last effect directly into the destination buffer.
BitmapRenderer^ renderer = ref new BitmapRenderer(last, AsBitmapYUY2(pDest, (unsigned int)size.Width, (unsigned int)size.Height));
 
auto renderOp = renderer->RenderAsync();
auto renderTask = create_task(renderOp);
renderTask.wait();

In order to use the Nokia Imaging SDK, the first step is to transform the raw byte data into a Bitmap. For the NV12 function this is accomplished using another function pulled directly from galazzo's article (linked above). For YUY2, the function was modified for the YUY2 format:

Nokia::Graphics::Imaging::Bitmap^ AsBitmapYUY2(const unsigned char* source, unsigned int width, unsigned int height)
{
    int totalDimensionLength = width * height;
    int size = totalDimensionLength * 2; // YUY2 uses 2 bytes per pixel (each 4-byte macropixel encodes 2 pixels)
 
    // Wrap the raw camera buffer in an IBuffer without copying it.
    ComPtr<ImagingEffects::NativeBuffer> nativeBuffer;
    MakeAndInitialize<ImagingEffects::NativeBuffer>(&nativeBuffer, (byte *)source, size);
    auto iinspectable = (IInspectable *)reinterpret_cast<IInspectable *>(nativeBuffer.Get());
    IBuffer ^buffer = reinterpret_cast<IBuffer ^>(iinspectable);
 
    nativeBuffer = nullptr;
 
    // Pitch is 2 * width because of the 2 bytes-per-pixel packing.
    return ref new Bitmap(Windows::Foundation::Size((float)width, (float)height), ColorMode::Yuv422_Y1UY2V, 2 * width, buffer);
}

The use of the Nokia Imaging SDK in a C++ app is similar to its use in a C# app. First, a source must be created. Then, several effects or filters are applied. Finally, the result is rendered to a designated target. In this app, the source is a BitmapImageSource that takes the previously created Bitmap in its constructor.
 
Next, we retrieve the effects that were created in the C# portion of the app (shown later in the article) from the IVector<IImageProvider^> supplied to the function. The first IImageProvider is cast to an IImageConsumer so that its Source property can be set to the newly generated BitmapImageSource. The last IImageProvider in the IVector list is set as the source of a BitmapRenderer.
 
The BitmapRenderer is created from the last effect and a Bitmap created from the pointer to the destination array, using the same Bitmap conversion functions as before. RenderAsync is called and the function waits until it finishes.

Note: NV12 and YUY2 are two different video formats that use luminance and chroma data instead of straight RGBA data. These are the only two formats used by the Nokia Lumia 920 and the LifeCam HD-5000 webcam.

CImagingEffect::SetProperties

This function allows the C++ transform to receive data from the C# portion of the app. When the transform is applied to the MediaCapture class via AddEffectAsync, only a string identifying the transform is passed, and no instance of the transform is returned to the C# portion of the app. Therefore, one cannot pass data to the transform via conventional means. However, one of the parameters of the AddEffectAsync method takes an IPropertySet interface. This is essentially a dictionary of key/value pairs whose values must be WinRT types.

This parameter comes through to the MF Transform through the SetProperties function.

HRESULT hr = S_OK;
 
try
{
    // The IPropertySet passed to AddEffectAsync arrives here as pConfiguration.
    IPropertySet^ properties = reinterpret_cast<IPropertySet^>(pConfiguration);
    m_imageProviders = safe_cast<IVector<IImageProvider^>^>(properties->Lookup(L"IImageProviders"));
}
catch (Exception ^exc)
{
    hr = exc->HResult;
}
return hr;

The IPropertySet will contain an IVector<IImageProvider^> under the key "IImageProviders". However, the static transform functions cannot access this class-member variable directly, so it must be passed as an argument to the transform functions. This is done in OnProcessOutput:

// Invoke the image transform function.
assert(m_pTransformFn != nullptr);
if (m_pTransformFn)
{
    (*m_pTransformFn)(m_rcDest, outputLock.GetTopRow(), outputLock.GetStride(), inputLock.GetTopRow(), inputLock.GetStride(), m_imageWidthInPixels, m_imageHeightInPixels, m_imageProviders);
}
else
{
    ThrowException(E_UNEXPECTED);
}

Compiler bug for WP8.1 Release target

To fix the compiler error when trying to compile for WP8.1 using the Release profile (noted here), simply add the following dummy class inside your project namespace to the header file.

namespace ImagingEffects // Ensure the namespace matches your project name.
{
    public ref class Dummy sealed
    {
    public:
        property int DummyProp { int get() { return 0; } }
    };
}

C# Universal App

Preparing the main App

In order to use the MF Transform that was just created, there are two steps that must be taken.

  1. The C++ project must be referenced by the C# project: WP8.1 to WP8.1 and Win8 to Win8.
  2. The MF Transform must be registered.

In a desktop app, this would be done in the registry. In a WinRT app, it is done in the manifest file. Right-click on the Package.appxmanifest file in the WP8.1 project, select "Open with...", and choose an XML editor. Between the Applications and Capabilities elements, insert:

<Extensions>
  <Extension Category="windows.activatableClass.inProcessServer">
    <InProcessServer>
      <Path>ImagingEffects.WindowsPhone.dll</Path>
      <ActivatableClass ActivatableClassId="ImagingEffects.ImagingEffect" ThreadingModel="both" />
    </InProcessServer>
  </Extension>
</Extensions>

The corresponding entry for the Win8 app should just change WindowsPhone.dll to Windows.dll.

MediaCapture & CaptureElement

To set up the page for viewing video, simply place a CaptureElement somewhere on the page and set its Source to the MediaCapture instance that has been created.
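
As a minimal sketch of the hookup (PreviewElement is an assumed x:Name for the CaptureElement in the page's XAML; the sample keeps its MediaCapture instance on the App object, as shown later):

// Sketch only: assumes <CaptureElement x:Name="PreviewElement"/> in the page's XAML
// and that app.MediaCapture has already been created and initialized (see below).
var app = (App)Application.Current;
PreviewElement.Source = app.MediaCapture;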

Warning: Failing to shut down the MediaCapture instance after video previewing has been started will result in your mobile device locking up! In one case, it corrupted the phone to the extent that no other solutions could be loaded onto the phone and a hard reset/device wipe had to be performed.

With WinRT, it is more difficult to capture the moment at which you exit or navigate away from the app; relying on the OnSuspending event alone is not enough. One solution is to monitor the CoreWindow's VisibilityChanged event, which fires whenever you switch to or away from the app. In the OnNavigatedTo event handler, add the following:

_devices = await DeviceInformation.FindAllAsync(DeviceClass.VideoCapture);
ListDeviceDetails();
 
await StartMediaCapture();
 
var core = Windows.UI.Core.CoreWindow.GetForCurrentThread();
core.VisibilityChanged += core_VisibilityChanged;

_devices is a DeviceInformationCollection that contains all of the components capable of video capture on your device. The core_VisibilityChanged event handler contains logic that turns the video off when the window is no longer visible and turns it back on when the window becomes visible again.
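
A minimal sketch of such a handler, assuming async Task helpers like the sample's StartMediaCapture and a hypothetical StopMediaCapture that stops the preview and disposes the MediaCapture instance:

// Sketch only: StopMediaCapture is a hypothetical helper that stops the preview and
// disposes the MediaCapture instance; StartMediaCapture is the sample's startup helper.
private async void core_VisibilityChanged(Windows.UI.Core.CoreWindow sender, Windows.UI.Core.VisibilityChangedEventArgs args)
{
    if (args.Visible)
        await StartMediaCapture();   // window became visible again: restart the camera
    else
        await StopMediaCapture();    // window hidden: shut the camera down to avoid lock-ups
}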

Warning: The core_VisibilityChanged event handler will NOT fire when you manually stop debugging in VS2013! If you just stop debugging, the device will lock up the next time any app accesses the camera! Always exit the app, or at least navigate to a different app, so that the event fires and the camera is shut down.

Capture Start

Before you can begin capturing video, the MediaCapture instance must be initialized using InitializeAsync, with the correct VideoDeviceId passed within the MediaCaptureInitializationSettings parameter.

app.MediaCapture = new MediaCapture();
var selectedDevice = _devices.FirstOrDefault(x => x.EnclosureLocation != null && x.EnclosureLocation.Panel == Windows.Devices.Enumeration.Panel.Back);
if (selectedDevice == null)
    selectedDevice = _devices.First();
await app.MediaCapture.InitializeAsync(new MediaCaptureInitializationSettings
{
    VideoDeviceId = selectedDevice.Id
});

The next major step is to select the resolution and ColorMode (SubType) of the preview and record streams. This is accomplished with GetAvailableMediaStreamProperties on the MediaCapture's VideoDeviceController, which returns an IReadOnlyList of IMediaEncodingProperties. Since a video device has been selected, casting each IMediaEncodingProperties to VideoEncodingProperties will yield the required information. The available properties for the Preview and Record streams will differ.

_encodingPreviewProperties = app.MediaCapture.VideoDeviceController.GetAvailableMediaStreamProperties(MediaStreamType.VideoPreview);
_encodingRecorderProperties = app.MediaCapture.VideoDeviceController.GetAvailableMediaStreamProperties(MediaStreamType.VideoRecord);
ListAllResolutionDetails();
 
var selectedPreviewProperties = _encodingPreviewProperties.First(x => ((VideoEncodingProperties)x).Width == 800);
ListResolutionDetails((VideoEncodingProperties)selectedPreviewProperties);
await app.MediaCapture.VideoDeviceController.SetMediaStreamPropertiesAsync(MediaStreamType.VideoPreview, selectedPreviewProperties);
 
var selectedRecordingProperties = _encodingRecorderProperties.First(x => ((VideoEncodingProperties)x).Width == _encodingRecorderProperties.Max(y => ((VideoEncodingProperties)y).Width));
ListResolutionDetails((VideoEncodingProperties)selectedRecordingProperties);
await app.MediaCapture.VideoDeviceController.SetMediaStreamPropertiesAsync(MediaStreamType.VideoRecord, selectedRecordingProperties);
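
The ListResolutionDetails helper called above is not shown in this article; a minimal sketch of what such a helper might look like (the actual implementation in the sample may differ) is:

// Hypothetical sketch of the ListResolutionDetails helper (not shown in the article).
// Requires: using System.Diagnostics; using Windows.Media.MediaProperties;
private void ListResolutionDetails(VideoEncodingProperties properties)
{
    double fps = properties.FrameRate.Numerator / (double)properties.FrameRate.Denominator;
    Debug.WriteLine("SubType: {0}, {1}x{2}, {3} fps", properties.Subtype, properties.Width, properties.Height, fps);
}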

Note: For mobile devices, it is better to use low-bitrate capture resolutions for both the Preview and Record streams due to high processing requirements.

Adding the effect comes next. This is accomplished by passing a string containing the namespace and class name of the effect to the AddEffectAsync method on the MediaCapture instance. The filters to be used are passed within a PropertySet as the third argument of AddEffectAsync. To the PropertySet instance, add a KeyValuePair with "IImageProviders" as the key and a List<IImageProvider> containing all of the effects you wish to apply as the value. The first effect will have its source connected automatically to the input video within the component. All other effects must have their Source property set to the preceding effect. The last effect is the one rendered to the output video.

PropertySet testSet = new PropertySet();
 
FilterEffect effect = new FilterEffect();
LomoFilter lomoFilter = new LomoFilter();
VignettingFilter vignettingFilter = new VignettingFilter();
effect.Filters = new IFilter[] { lomoFilter, vignettingFilter };
 
List<IImageProvider> providers = new List<IImageProvider>();
providers.Add(effect);
 
testSet.Add(new KeyValuePair<string, object>("IImageProviders", providers));

Finally, a call to StartPreviewAsync() will begin the camera preview.
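
Putting the pieces together, the preview-side calls might look like the following sketch, assuming the testSet built above and the app.MediaCapture instance from earlier:

// Attach the effect chain to the preview stream, then start the preview.
// "ImagingEffects.ImagingEffect" must match the ActivatableClassId registered in the manifest.
await app.MediaCapture.AddEffectAsync(MediaStreamType.VideoPreview, "ImagingEffects.ImagingEffect", testSet);
await app.MediaCapture.StartPreviewAsync();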

Recording

It is not necessary to stop the preview when recording is started. However, the effect MUST be applied to only one of these streams; otherwise the effect will run TWICE, on both the preview output and the recording.
 
First, call ClearEffectsAsync and pass MediaStreamType.VideoPreview as an argument to remove the effect from the preview. Then, add the effect to the VideoRecord stream using the same filter parameters before calling StartRecordToStorageFileAsync (or any other recording method).

await app.MediaCapture.ClearEffectsAsync(MediaStreamType.VideoPreview);
// testSet is the same PropertySet of IImageProviders created earlier.
await app.MediaCapture.AddEffectAsync(MediaStreamType.VideoRecord, "ImagingEffects.ImagingEffect", testSet);
await app.MediaCapture.StartRecordToStorageFileAsync(MediaEncodingProfile.CreateMp4(VideoEncodingQuality.HD720p), _tempStorageFile);
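
The sample does not show how recording is stopped; a plausible sketch, assuming you want to stop recording and move the effect back to the preview stream, might be:

// Hypothetical sketch: stop recording, remove the effect from the record stream,
// and re-attach it to the preview stream so the live preview is filtered again.
await app.MediaCapture.StopRecordAsync();
await app.MediaCapture.ClearEffectsAsync(MediaStreamType.VideoRecord);
await app.MediaCapture.AddEffectAsync(MediaStreamType.VideoPreview, "ImagingEffects.ImagingEffect", testSet);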

Note: The code for recording within the sample is not complete. Use with caution.

Sample Store App - Video Filter

If you want to see an example of these classes in a real store app, please download: http://www.windowsphone.com/s?appid=e4559e62-eb1d-4be8-9eec-8df9dd50a1a9
