Qt Camera Guide

From Nokia Developer Wiki
Article Metadata
Code Example
Compatibility
Platform(s): Symbian, Nokia Belle
Article
Created: hamishwillee (28 Aug 2013)
Last edited: hamishwillee (11 Oct 2013)

This is a wiki version of the official Nokia Developer "Camera Guide" for Qt (and Symbian). The original can be found here: File:Camera Guide v1 1 en.pdf

Note.pngNote: This article has only been partially converted from PDF - in order to ensure it is indexed and to allow links to be updated if content moves. The code blocks are not formatted properly and inline code has not been syntax-marked. Feel free to update this version as needed.


Introduction

This document lists the most common terms, abstractions and interfaces related to accessing the camera devices in Nokia Belle mobile phones.

This document does not cover every detail of every API, but provides a starting point for implementing a camera application. It also helps you decide which API to pick to provide the required features.

The APIs that can be used to access the camera interfaces are, from the highest abstraction level to the lowest:

  • Qt Mobility's QML elements,
  • Qt Mobility's Qt camera classes,
  • native Symbian ECAM API.


As an alternative, you can use the

  • native camera application to capture an image.

A higher level of abstraction means simplified and quicker implementation, but also more restricted access to the features of the devices. A lower level provides wider possibilities for accessing the camera features, but the implementation is harder and more time consuming.

If the application only needs to capture an image, it is possible to use the native camera application and receive a notification about the captured image. This is described in the last chapter.

While writing this camera guide, a Camera Demo example application was implemented with the Qt Mobility Qt camera classes. For more information, see Qt Camera Demo

Comparison of the APIs

The following table will help you to pick the appropriate camera API for the application that you are developing. The table lists the common use cases that can or cannot be provided with the corresponding API.

Use case | Qt Mobility QML Camera API | Qt Mobility Qt camera classes | Symbian ECAM API | Native camera application
Show viewfinder image | Supported | Supported | Supported | Supported
Adjust settings | Limited | Well covered | All | Well covered
Image manipulation of the viewfinder image | Not supported | With QAbstractVideoSurface | Supported | Not supported
Use of secondary camera devices | Not supported | Supported | Supported | Supported
Video recording | Not supported | Supported | Supported | Supported
Performance of the viewfinder | Limited | Better | Excellent | Excellent
Difficulty of implementation | Trivial | Easy | Difficult | Easy with code snippet

Using Qt Mobility QML elements

PROS and CONS:
+ Really easy to set up in a QML application
+ Provides all the usual settings of the camera
- Not possible to implement image manipulation on the viewfinder image
- Lacks the option to adjust image processing settings, such as contrast and sharpening
- Does not support the front camera
- Not able to record video

Qt Mobility provides the QML Camera element for easy access to the phone’s primary camera. If the application does not require the more sophisticated features of the camera, the QML API will most likely be sufficient.

An example application that demonstrates the use of the QML Camera element can be found in the Qt documentation: http://doc.qt.nokia.com/qtmobility/declarative-camera.html

Camera settings

The QML API provides all of the camera settings in the same QML element: they are not divided into multiple classes, as they are in Qt Mobility Qt Camera classes. This greatly simplifies the API, but at the same time a lot of features are dropped. For example, the supported features cannot be queried; the developer must know beforehand which will work and which will not. However, the QML API will report an error if there is an attempt to use a feature which is not supported by the backend or the device.

For example, the camera resource can be added and the flash can be set to "on" with just a few lines of code:

import QtMultimediaKit 1.1

Rectangle {
    Camera {
        id: camera
        flashMode: Camera.FlashOn
    }
}

See the documentation of the QML Camera element at http://doc.qt.nokia.com/qtmobility/qml-camera.html for more information.

Viewfinder

The QML Camera element is itself the visible element: it renders the viewfinder image scaled to the dimensions of the element.

import QtMultimediaKit 1.1

Item {
    width: 640; height: 360
    Camera {
        id: camera
        anchors.fill: parent
    }
}

Image capturing

Capturing an image normally follows this process: first, the camera is asked to search and lock in order to obtain proper focus, white balance, and exposure values. When the camera reports that the lock has succeeded, we are ready to capture the image.

The captured image is reported by the imageSaved signal of the Camera element, and the image can be opened, for example, in the external gallery application by using the Qt.openUrlExternally function.

import QtMultimediaKit 1.1
import com.nokia.symbian 1.1

Item {
    width: 360
    height: 640
    Camera {
        id: camera
        onLockStatusChanged: {
            if (lockStatus === Camera.Locked) {
                captureImage();
            }
        }
        onImageSaved: {
            // Open the just taken image in the native gallery application
            Qt.openUrlExternally("file:///" + fileName);
        }
    }
    Button {
        id: button
        onClicked: camera.searchAndLock();
    }
}

Using Qt Mobility Qt Camera API

PROS and CONS:
+ Supports front camera
+ More diverse access to the camera settings
+ Allows implementing image manipulation to the viewfinder image
+ Allows video recording
- Requires more work to integrate the API into a QML application
- The performance of the viewfinder is not as good as the native application
- There are a lot of features that exist in the API but are not supported by the backend, even though the camera devices are able to support them

The Qt Camera API classes are divided into multiple responsibility areas:

  • QCamera – Create and access the camera resources.
  • QCameraExposure – Access and set the exposure settings, such as flash modes and ISO values.
  • QCameraFocus – Access and set the focus settings, such as zoom values.
  • QCameraImageProcessing – Access and set the image processing settings, such as white balance, contrast, and sharpness.
  • QCameraImageCapture – Contains the image capturing settings, such as the resolution of the image. Also provides the API used to take the image.
  • QMediaRecorder – Access and set the video recording settings. Also provides the API used to record the video.
  • QVideoWidget, QGraphicsVideoItem, QAbstractVideoSurface – The first two show the viewfinder image as a widget or a graphics scene item, while the third provides the option to implement your own video surface that shows the viewfinder image.

Qt Mobility Qt Camera classes have the following software and hardware components and abstractions (see the diagram below):

  • Camera devices represent the HW camera devices.
  • Camera interface represents the QCamera to interact with the devices.
  • Settings represent the settings to apply to the QCamera in order to adjust the image and video capturing parameters.
  • System image processing represents the image processing that the underlying system applies to every frame, image, and video frame before they are presented to the application.
  • Frame represents a single frame of the viewfinder image.
  • Image represents the captured image.
  • Video frame represents a single frame of the captured video.

Automatic Parameter Calculation is the part of the camera interface API that calculates camera parameters automatically when required. Exposure time, white balance, and ISO value are examples of these parameters.

See the example application using the Qt Mobility Qt camera classes here: Qt Camera Demo

Symbian capabilities and project settings

Using a camera API on the Symbian platform requires the following capability in the Qt project file (the camera also consumes a lot of memory, so we reserve a lot of heap):

symbian {
    TARGET.CAPABILITY += \
        UserEnvironment
    # Add a lot of heap for the camera.
    TARGET.EPOCHEAPSIZE = 0x20000 0x9000000
}

To handle the HW camera button, the SwEvent capability is required. Note that this capability requires developer certificates, and the application must be Symbian signed at https://www.symbiansigned.com.

symbian {
    TARGET.CAPABILITY += \
        SwEvent
}

Camera states

The camera has three states that must be taken into account when the device is accessed:

  • Unloaded state
  • Loaded state
  • Active state

The camera is initially in unloaded state, meaning that the camera capabilities are unknown and the settings cannot be queried or applied. In the loaded state, the camera capabilities can be queried and settings can be applied, but the viewfinder is still not active. In the active state, the viewfinder image is received and the images or video can be captured.

Calling QCamera::start() will switch the camera to the active state; this may take a few seconds. The camera state can be queried with the method QCamera::state() or by listening to state changes via the QCamera signal:

signals:
void stateChanged(QCamera::State);

To set the camera to the loaded state, the method QCamera::load() can be called. This is useful if there is a need to apply settings before the viewfinder image is shown.
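The three states and the transitions described above can be sketched as a tiny state model in plain C++. This is illustrative only, not the actual Qt API; the names mirror the QCamera states but the class is hypothetical:

```cpp
// Hypothetical model of the three camera states and the calls that move
// between them. QCamera::load() moves Unloaded -> Loaded, QCamera::start()
// moves to Active (loading implicitly if needed), stop() drops back to Loaded.
enum CameraState { UnloadedState, LoadedState, ActiveState };

struct CameraModel {
    CameraState state;
    CameraModel() : state(UnloadedState) {}
    // load(): capabilities become queryable; the viewfinder is still off.
    void load()   { if (state == UnloadedState) state = LoadedState; }
    // start(): activates the viewfinder (implies loading first).
    void start()  { state = ActiveState; }
    // stop(): back to the loaded state; applied settings remain valid.
    void stop()   { if (state == ActiveState) state = LoadedState; }
    void unload() { state = UnloadedState; }
    // Settings can be queried or applied in the loaded and active states.
    bool settingsQueryable() const { return state != UnloadedState; }
};
```

The point of the model is simply that querying or applying settings in the unloaded state is invalid; real code should wait for the stateChanged signal rather than assume the transition is instantaneous.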

Camera settings

The camera settings can be applied when the camera resource is created and it is in loaded state. For example, the flash can be set to be ‘always on’ like this:

camera = new QCamera();
// Make sure that the camera is in loaded state.
camera->load();
QCameraExposure *exposure = camera->exposure();
exposure->setFlashMode(QCameraExposure::FlashOn);

The camera settings might not be available with all camera devices; the support can be queried with the Camera API. For example, the supported ISO sensitivities or maximum digital zoom value can be retrieved in the following way:

// Supported ISO sensitivities
QList<int> sensitivities = camera->exposure()->supportedIsoSensitivities();
// Maximum digital zoom value
qreal maximumZoomValue = camera->focus()->maximumDigitalZoom();
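When a requested value is not in the supported list, a common pattern is to snap it to the nearest supported entry before applying it. A plain C++ sketch of that idea (the helper name is ours, not part of the Qt API; std::vector stands in for QList):

```cpp
#include <vector>
#include <cstdlib>

// Hypothetical helper: clamp a requested ISO value to the nearest entry in
// the list returned by supportedIsoSensitivities(). Assumes the list is
// non-empty; ties keep the earlier entry.
int nearestSupportedIso(const std::vector<int> &supported, int requested)
{
    int best = supported.front();
    for (size_t i = 0; i < supported.size(); ++i) {
        if (std::abs(supported[i] - requested) < std::abs(best - requested))
            best = supported[i];
    }
    return best;
}
```

The chosen value can then be passed to, for example, setManualIsoSensitivity without risking an unsupported setting.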

See the documentation of the Qt Mobility Qt camera classes in http://doc.qt.nokia.com/qtmobility/multimedia.html#camera-classes for more information.

Locking the camera settings

Normally, when the user presses the shutter button halfway down, the focus, exposure, white balance, ISO sensitivity etc. settings are automatically calculated (assuming that the camera is in full automatic mode).

The camera API does all this when the QCamera::searchAndLock() is called.

public slots:
// Locks exposure, white balance and focus.
void searchAndLock();
// Lock only the given parameters.
void searchAndLock(QCamera::LockTypes locks);

The API will report back if the operation succeeded or failed with the following QCamera signals:

signals:
void locked();
void lockFailed();

After the lock has succeeded, the camera settings stay locked even if the camera is pointed at another scene with more or less light, which can lead to over- or underexposed images. To unlock the automatically calculated values, call the unlock method of QCamera:

public slots:
void unlock();
void unlock(QCamera::LockTypes locks);

Viewfinder

QCamera supports multiple ways of attaching a viewfinder directly in software. The attachment is done via the setViewfinder method, which has three overloads:

void setViewfinder(QVideoWidget *viewfinder);
void setViewfinder(QGraphicsVideoItem *viewfinder);
void setViewfinder(QAbstractVideoSurface *surface);

Each of these methods overwrites the previous viewfinder if it has been set; there can only be one attached viewfinder at a time.

The viewfinder can be any of the three objects accepted by the QCamera::setViewfinder method: QVideoWidget, QGraphicsVideoItem, and QAbstractVideoSurface. Each of these objects provides slightly different use cases, and they are meant to be used with different Qt objects:

  • QVideoWidget with widgets,
  • QGraphicsVideoItem with QGraphicsView and
  • QAbstractVideoSurface for more custom solutions.

Performance

In Symbian^3, QCamera is implemented with Symbian's native ECAM camera API. The procedure by which the API itself acquires the camera image is beyond the scope of this document.

The procedure for getting the camera preview into the QCamera's viewfinder with ECAM goes roughly like this:

  • Symbian's CFbsBitmap is acquired by the S60CameraViewFinderEngine::viewFinderBitmapReady method.
  • QPixmap is created from the CFbsBitmap with QPixmap's Symbian-specific method QPixmap::fromSymbianCFbsBitmap. The Qt reference documentation tells us that the pixmap is created by referring to the bitmap handle (instead of copying the data) whenever possible; when this succeeds, this phase should not be very time consuming.
  • QImage is created from the pixmap with QPixmap::toImage(). At this phase, the data is actually duplicated (a deep copy is made).
  • The QImage’s format is checked. If it is something other than ARGB32 or RGB32, it is converted to RGB32.
  • QVideoFrame is created and mapped to refer into the image of the previous phase.
  • QVideoFrame is sent to QCamera’s viewfinder via present().

NOTE: It seems that the original preview image is stored in the graphics memory, and could be used directly if a handle was given by the backend, but this is not currently implemented in Symbian^3.

After getting the QVideoFrame into the Qt’s side, the frame exists in the basic memory as a QVideoFrame object, which cannot be directly drawn by any hardware accelerated graphics system. This is a major issue, because transmitting images this large to graphics memory is very slow (this can be seen in the performance difference when doing anything other than actually drawing the preview image).

When drawing full frames as fast as possible, the fastest way is to use QPainter::drawImage with the frame’s original RGB32 format.

So what can be done to increase the performance? First of all, before any tricks can be applied, the application must choose to use the QAbstractVideoSurface-based viewfinder and manage the drawing by itself.

To speed up the frame rate of the viewfinder image, the amount of data transmitted into the graphics memory must be reduced. This can be done in several ways, but all of them have effects on the quality of the display:

  • When the QImage is scaled down before drawing it, the performance significantly increases. This increase is directly associated with the amount of scaling.
  • If the application does not require new frames to be passed to its viewfinder in real time, it could convert the QImage into a QPixmap and use that for constant drawing (when a QPixmap is created, the image data is transmitted into the graphics memory).
  • The application could try to reduce the number of updates made to the QPixmap (for example, by only updating the rectangles that have been changed most).

There are other ways as well. Especially when using OpenGL rendering, the developer should remember the earlier statement: The only way to gain performance at this phase is to reduce the amount of data transmitted to graphics memory.
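The payoff of scaling down is easy to quantify: the cost is proportional to the number of bytes per frame that must cross into graphics memory. A back-of-the-envelope helper (plain C++, RGB32 at 4 bytes per pixel):

```cpp
// Bytes per RGB32 frame that must be transmitted to graphics memory.
// Halving both dimensions cuts the per-frame transfer by a factor of four.
long frameBytes(int width, int height)
{
    return 4L * width * height;
}
```

For a 640x360 viewfinder this is about 0.9 MB per frame; at half size it drops to roughly 0.23 MB, which is why the frame rate gain tracks the amount of scaling so directly.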

QVideoWidget

This is the easiest way to display a viewfinder image in a Qt application. A widget has some control over the display: size (full screen as well), brightness, contrast, hue and saturation. The QVideoWidget can be used directly, without deriving another object from it.

QGraphicsVideoItem

You can use this when working with QGraphicsView if no real time manipulation of the viewfinder image is required.

QGraphicsVideoItem does not have as much control over the image as QVideoWidget. Only its size and aspect ratio can be set. This is the control that the QML Camera element uses internally.

QAbstractVideoSurface

This is the most complex and most sophisticated way to use the viewfinder. QAbstractVideoSurface allows the application to get “real” pixel access into the preview image. The application can implement the QAbstractVideoSurface interface and set it as a viewfinder to QCamera. QAbstractVideoSurface cannot be used directly, and must always be implemented.

Handles

The QVideoFrame, which is the capsule that carries the frame data when it is transmitted to the viewfinder, supports different kinds of handles. The handle type is graphics system and platform dependent. In theory, this allows QCamera to transmit the frames directly as OpenGL / OpenVG handles (by passing the handle of an existing VGImage or GL texture). However, the current Symbian^3 QCamera does not use the handles, and the frames are only transmitted through ordinary memory. This causes a significant performance loss, as the camera image is first transmitted from graphics memory to ordinary memory and then, when the preview image is rendered with any hardware graphics acceleration, back to graphics memory. So when using the current QCamera of Symbian^3, the developer should only consider QAbstractVideoBuffer::NoHandle solutions in all aspects of the program.

Formats

A custom video surface derived from QAbstractVideoSurface can be written to support several different formats.

One of the formats supported by the device’s internal camera implementation must be supported in order to actually get the frames. When QCamera is initializing the custom video surface, the format to be used is negotiated via the interface’s method:

virtual QList<QVideoFrame::PixelFormat> supportedPixelFormats(
QAbstractVideoBuffer::HandleType handleType = QAbstractVideoBuffer::NoHandle) const = 0;

The implementation of this method should return a list of pixel formats that are supported for the handleType. For example:

QList<QVideoFrame::PixelFormat> VideoSurface::supportedPixelFormats(
QAbstractVideoBuffer::HandleType handleType) const
{
Q_UNUSED(handleType);
return QList<QVideoFrame::PixelFormat>() << QVideoFrame::Format_ARGB32;
}

The code above will declare that this video surface is only able to process frames in ARGB32 format for all types of handle.

Most devices (all Symbian^3 based devices) support RGB32 and ARGB32 formats, which are definitely the easiest formats to use. When working with MeeGo 1.2 Harmattan, however, the only format supported by its camera implementation is UYVY, which makes the processing a lot more difficult, while at the same time much more efficient, since the native format of the camera is much closer to UYVY than any RGB format.
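As an illustration of why UYVY takes more work, here is one common fixed-point BT.601 conversion variant in plain C++. The exact coefficients are an assumption on our part; implementations differ, and the MeeGo backend's actual conversion may not match this:

```cpp
#include <algorithm>
#include <cstdint>

// UYVY packs two pixels into four bytes: U0 Y0 V0 Y1, with the U and V
// chroma samples shared by the pixel pair.
static uint8_t clamp8(int v)
{
    return (uint8_t)std::min(255, std::max(0, v));
}

// Convert one UYVY macropixel (4 bytes) into two RGB32 pixels (0xffRRGGBB),
// using one common fixed-point BT.601 variant (an assumption, see above).
void uyvyToRgb32(const uint8_t uyvy[4], uint32_t rgb[2])
{
    int u = uyvy[0] - 128;
    int v = uyvy[2] - 128;
    for (int i = 0; i < 2; ++i) {
        int c = 298 * (uyvy[1 + 2 * i] - 16);
        uint32_t r = clamp8((c + 409 * v + 128) >> 8);
        uint32_t g = clamp8((c - 100 * u - 208 * v + 128) >> 8);
        uint32_t b = clamp8((c + 516 * u + 128) >> 8);
        rgb[i] = 0xff000000u | (r << 16) | (g << 8) | b;
    }
}
```

Doing this per pixel on the CPU is exactly the overhead that RGB32-native backends avoid, which is why UYVY is harder to work with despite being closer to the camera's native format.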

Implementing the viewfinder

When using custom video surface (QAbstractVideoSurface), the object can do whatever it likes with the incoming frames, including displaying them. This opens interesting possibilities, since the frame can be altered before it is presented on the display. Of course, the viewfinder can just copy the frame’s data and display it on the screen in any way it wants.

When QCamera acquires a new frame, it is sent to the viewfinder via the present method:

virtual bool present(const QVideoFrame &frame) = 0;

This method is where the custom surface receives the actual preview data. When this data (QVideoFrame) is mapped, it works much like a QImage and can easily be wrapped in one. For example:

frame.map(QAbstractVideoBuffer::ReadOnly);
QImage frameAsImage = QImage(frame.bits(), frame.width(), frame.height(),
frame.bytesPerLine(), QImage::Format_RGB32);
// Process the QImage any way you like
frame.unmap();

Between map and unmap, the data can be used according to mapping flags (in the previous example, read only). Note that by default, QImage does NOT copy the data it is referring to (implicit sharing) and therefore after unmap, the image becomes invalid unless it is actually copied:

frame.map(QAbstractVideoBuffer::ReadOnly);
QImage copiedFrame = QImage(frame.width(), frame.height(), QImage::Format_RGB32);
memcpy(copiedFrame.bits(), frame.bits(), frame.mappedBytes());
frame.unmap();

Note that the previous solution will not work if either the frame's or the image's bytesPerLine differs from width * bytesPerPixel. The memcpy creates an actual copy of the image data before the frame is unmapped, so the resulting QImage can be used anywhere in the application without considering the frame's map state. Most QAbstractVideoSurface implementations copy the QVideoFrame into a member such as "copiedFrame" and draw it when required (for example, in the paint event). This can be done by simply using QPainter:

painter->drawImage(boundingRect(), copiedFrame);
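The bytesPerLine caveat mentioned above can be handled with a row-by-row copy that respects both strides. A plain C++ sketch on raw buffers (in the Qt code the source would be frame.bits() / frame.bytesPerLine() and the destination image.bits() / image.bytesPerLine()):

```cpp
#include <cstdint>
#include <cstring>

// Copy 'rows' rows of 'rowBytes' pixel data from src to dst, where each
// buffer may pad its rows to a stride (bytesPerLine) larger than rowBytes.
void copyWithStride(uint8_t *dst, int dstStride,
                    const uint8_t *src, int srcStride,
                    int rowBytes, int rows)
{
    for (int y = 0; y < rows; ++y)
        std::memcpy(dst + y * dstStride, src + y * srcStride, rowBytes);
}
```

When both strides equal rowBytes this degenerates to the single memcpy shown earlier; otherwise it skips the padding bytes that a whole-buffer memcpy would misalign.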

Nothing is forcing the application to use Qt for the image presentation. The object can choose to draw it with very different methods as well. For example, the image can be used to update an OpenGL texture, and then the rendering can be done with direct OpenGL. Just remember that updating OpenGL textures can be slow, and there can be limitations on their resolutions (the image might have to be scaled before updating the texture).

Custom pixel manipulation of a frame

Before its presentation, the data can easily be manipulated with all kinds of traditional pixel manipulation techniques. For example, the following code, when applied to the copiedFrame, would display the overexposed parts as red:

uint *start = (uint*)copiedFrame.bits();
uint *end = start + copiedFrame.width() * copiedFrame.height();
while (start != end) {
    if ((*start & 0x00FFFFFF) == 0x00FFFFFF)
        *start = 0xFFFF0000;
    start++;
}

Or more generally:

uint r, g, b;
uint *start = (uint*)copiedFrame.bits();
uint *end = start + copiedFrame.width() * copiedFrame.height();
while (start != end) {
    // Extract the RGB components from the pixel (stored as 0xffRRGGBB)
    r = ((*start) >> 16) & 255;
    g = ((*start) >> 8) & 255;
    b = (*start) & 255;
    // Do whatever you want with the RGB components of the pixel
    // Reset the pixel with the modified RGB components
    *start = b | (g << 8) | (r << 16) | 0xFF000000;
    start++;
}

When using custom pixel manipulation, be careful that the processing is not too time consuming; a per-pixel loop like this can easily become too slow to be of any practical use.
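As a further sketch in the same style, here is a grayscale effect over an RGB32 (0xffRRGGBB) buffer in plain C++. The helper is hypothetical, and the luma coefficients are one common integer approximation (they sum to 256 so that pure white stays white):

```cpp
#include <cstdint>

// In-place grayscale effect over 'count' RGB32 pixels, using an integer
// luma approximation: y = (77*r + 150*g + 29*b) / 256.
void toGrayscale(uint32_t *pixels, int count)
{
    for (int i = 0; i < count; ++i) {
        uint32_t p = pixels[i];
        uint32_t r = (p >> 16) & 255;
        uint32_t g = (p >> 8) & 255;
        uint32_t b = p & 255;
        uint32_t y = (77 * r + 150 * g + 29 * b) >> 8;
        pixels[i] = 0xff000000u | (y << 16) | (y << 8) | y;
    }
}
```

Because this is a single pass of integer arithmetic with no branches per pixel, it is about as cheap as per-pixel effects get; anything noticeably heavier will quickly drop the viewfinder frame rate.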

An example application called MirrorHouse further demonstrates custom pixel manipulation. The application has several effects that manipulate the camera image in real time.

You can view the project pages for MirrorHouse (search on wiki)

Stability

When working with QAbstractVideoSurface and the OpenVG graphics system (currently the default graphics system in Symbian^3), there can be problems with some devices. The camera frame rate seems to get “jammed” from time to time, dropping down by a huge amount. Sometimes the speed is recovered without any interaction, and sometimes not. When the camera is restarted, the problem is fixed, and it is much less likely that it will happen again. The Camera Demo examples experienced these issues with the Nokia N8-00 device.

Image capturing

Photos are captured by using the class QCameraImageCapture.

The QCameraImageCapture object must be associated with the QCamera instance, and the capture mode must be set to capture still images. The camera must be in active state, which can be achieved by calling QCamera::start. After the camera is in active state, it is a good idea to perform an auto focus by calling QCamera::searchAndLock() to lock the focus. Finally, the image is captured by calling QCameraImageCapture::capture() method.

Note that you must handle the QCamera::locked() and QCamera::lockFailed() signals to find out whether searchAndLock() succeeded.

camera = new QCamera;
imageCapture = new QCameraImageCapture(camera);
camera->setCaptureMode(QCamera::CaptureStillImage);
// Start the camera; it will eventually reach the active state.
camera->start();
// When in the active state, perform auto focus and lock the settings.
camera->searchAndLock();
// Call capture() in the handler of the QCamera locked() signal.
imageCapture->capture();
// After the capture, or when the user releases the camera button,
// it is polite to release the locked parameters.
camera->unlock();

The supported image capturing resolutions can be queried from the QCameraImageCapture object and the desired resolution can be set via the QImageEncoderSettings object.

// Query the supported image capturing resolutions.
QList<QSize> resolutions = imageCapture->supportedResolutions();
// Set the desired resolution and codec.
QImageEncoderSettings imageSettings;
imageSettings.setCodec("image/jpeg");
imageSettings.setResolution(1600, 1200);
// Apply the encoding settings.
imageCapture->setEncodingSettings(imageSettings);

Video capturing

Videos are captured by using the class QMediaRecorder.

As the QCamera is derived from QMediaObject, the QMediaRecorder can record the video stream from the camera.

To record a video, the camera must be set into the video capturing mode by calling the QCamera::setCaptureMode.

camera = new QCamera;
mediaRecorder = new QMediaRecorder(camera);
camera->setCaptureMode(QCamera::CaptureVideo);
// Set the capture location; if none is given, the default location is used.
mediaRecorder->setOutputLocation(QUrl::fromLocalFile(fileName));
// When the camera is in the active state, call record().
mediaRecorder->record();
// The recording is stopped by calling stop().
mediaRecorder->stop();

Note that all of the camera settings, such as zoom, white balance and contrast, can be adjusted while in the recording mode.

To set the video capturing format, resolution and audio settings, use the QVideoEncoderSettings and QAudioEncoderSettings classes. The supported settings can be queried from the QMediaRecorder object, for example by calling QMediaRecorder::supportedAudioCodecs() and QMediaRecorder::supportedResolutions().

QStringList audioCodecs = mediaRecorder->supportedAudioCodecs();
QList<qreal> frameRates = mediaRecorder->supportedFrameRates();
QList<QSize> resolutions = mediaRecorder->supportedResolutions();
QVideoEncoderSettings videoSettings;
videoSettings.setCodec("video/mpeg2");
videoSettings.setResolution(640, 480);
QAudioEncoderSettings audioSettings;
audioSettings.setCodec("audio/vorbis");
audioSettings.setQuality(QtMultimediaKit::HighQuality);
mediaRecorder->setEncodingSettings(audioSettings, videoSettings);

For more information about the video recording, see the documentation of the QMediaRecorder class, at http://doc.qt.nokia.com/qtmobility/qmediarecorder.html.

Using secondary camera devices

The Qt Mobility Qt camera classes support access to secondary camera devices as well. The available devices can be queried with the static method QCamera::availableDevices().

Note that this method must be called before creating an instance of QCamera, as the device is given in the QCamera constructor.

QList<QByteArray> devices = QCamera::availableDevices();
if (devices.count() > 1) {
    // A secondary camera device exists.
    camera = new QCamera(devices[1]);
}

Handling the HW camera button

As mentioned in the project settings section, handling the HW camera button requires the SwEvent capability, which must be set in the project file along with the required libraries.

symbian {
    LIBS *= -lcone -lws32
    TARGET.CAPABILITY += SwEvent
}

Create a class to listen to the camera key events:

#include <QtCore/QObject>

QT_BEGIN_NAMESPACE
QT_FORWARD_DECLARE_CLASS(QWidget)
QT_END_NAMESPACE
QT_USE_NAMESPACE

class QCameraKeyListener : public QObject
{
    Q_OBJECT
public:
    QCameraKeyListener(QWidget *parent = 0);
    ~QCameraKeyListener();
private:
    QWidget *m_widget;
};

The class requires a pointer to the main widget of the application in the constructor, which is usually a QDeclarativeView instance. This is commonly done in the main function like this:

#include <QtGui>
#include <QtDeclarative>
#include "camerakeylistener.h"

int main(int argc, char *argv[])
{
    QApplication app(argc, argv);
    QDeclarativeView view;
    view.setSource(QUrl::fromLocalFile("qml/symbian/Main.qml"));
    view.setGeometry(QApplication::desktop()->screenGeometry());
    view.showFullScreen();
    new QCameraKeyListener(&view);
    return app.exec();
}

Here is the implementation of the QCameraKeyListener:

#include "camerakeylistener.h"
#include <QtGui/QWidget> // WId
#include <eikon.hrh>     // EKeyCamera
#include <coecntrl.h>    // CCoeControl (WId)
#include <w32std.h>      // RWindowBase, RWindowGroup, RWsSession

QCameraKeyListener::QCameraKeyListener(QWidget *widget)
    : QObject(widget),
      m_widget(widget)
{
    if (!m_widget)
        return;
    // Get the view's native Symbian window.
    WId windowId = 0;
    if (m_widget->internalWinId())
        windowId = m_widget->internalWinId();
    else if (m_widget->parentWidget() && m_widget->effectiveWinId())
        windowId = m_widget->effectiveWinId();
    RWindowBase *window =
        windowId ? static_cast<RWindowBase*>(windowId->DrawableWindow())
                 : 0;
    // Get hold of the window group.
    TInt wGroupId = window ? window->WindowGroupId() : 0;
    if (!wGroupId)
        return;
    RWsSession &wsSession = CCoeEnv::Static()->WsSession();
    TUint wGroupHandle = wsSession.GetWindowGroupHandle(wGroupId);
    if (wGroupHandle) {
        RWindowGroup wGroup(wsSession);
        wGroup.Construct(wGroupHandle);
        TBool captureOK = ETrue;
        // Capture the camera key. In Symbian Belle the key ups and downs
        // must also be handled in order to prevent the native camera
        // application from starting.
        captureOK = captureOK && (wGroup.CaptureKey(EKeyCamera, 0, 0, 100) > 0);
        captureOK = captureOK && (wGroup.CaptureKeyUpAndDowns(EStdKeyDevice7, 0, 0,
                                                              100) > 0);
        if (!captureOK)
            qWarning() << "Unable to register for camera capture key events, "
                       << "SwEvent capability may be missing";
    }
}
QCameraKeyListener::~QCameraKeyListener()
{
    if (!m_widget)
        return;
    // Get the view's native Symbian window.
    WId windowId = 0;
    if (m_widget->internalWinId())
        windowId = m_widget->internalWinId();
    else if (m_widget->parentWidget() && m_widget->effectiveWinId())
        windowId = m_widget->effectiveWinId();
    RWindowBase *window =
        windowId ? static_cast<RWindowBase*>(windowId->DrawableWindow())
                 : 0;
    // Get hold of the window group.
    TInt wGroupId = window ? window->WindowGroupId() : 0;
    if (!wGroupId)
        return;
    RWsSession &wsSession = CCoeEnv::Static()->WsSession();
    TUint wGroupHandle = wsSession.GetWindowGroupHandle(wGroupId);
    if (wGroupHandle) {
        RWindowGroup wGroup(wsSession);
        wGroup.Construct(wGroupHandle);
        wGroup.CancelCaptureKey(EKeyCamera);
        // Cancel the same key whose ups and downs were captured above.
        wGroup.CancelCaptureKeyUpAndDowns(EStdKeyDevice7);
    }
}

Note that in Symbian Belle, camera key handling has changed compared to earlier implementations. For more information, see the article Camera key handling in Symbian Belle.

Getting the captured image

The Qt Mobility Qt camera classes pass on the information about the photo that has been taken by using the QCameraImageCapture signals:

signals:
void imageCaptured(int id, const QImage &preview);
void imageSaved(int id, const QString &fileName);

The first signal provides a preview image for the image that has just been taken, while the second gives the absolute path to the fully processed image.

The image can be opened in the native gallery application by using the static method:

QDesktopServices::openUrl(QUrl("file:///" + fileName));

Or in QML:

Qt.openUrlExternally("file:///" + fileName);

Debugging OOM errors

Applications using the camera consume a lot of memory, leading to a higher risk of out of memory (OOM) errors (especially if there are several applications running in the background). As the system runs out of memory, applications may start behaving erratically, making it difficult to detect the root cause of such behaviour. Luckily, there’s a way to determine whether low graphics memory is the probable cause at run-time. The solution is explained in the following wiki article Getting available graphics memory using Qt on Symbian

See also chapter 6.3 of the original PDF guide, which further describes graphics out-of-memory (GOOM) handling.


Using Symbian Camera API

PROS and CONS:
+ Provides full access to the camera devices
+ Better performance when showing the viewfinder image
+ Can be merged within the Qt SDK
- Requires knowledge of Symbian C++ and the underlying camera API
- Requires a lot more time and effort to implement a full application compared to a pure Qt approach

The Symbian^3 native camera API is called ECAM. The ECAM libraries (ecam.lib), headers (ecam.h), and the modules they refer to are not part of the Qt SDK, so the native Symbian^3 SDK must be used instead – either directly, or by merging the required parts into the Qt SDK. In addition, if more advanced control of the camera is required, the SDK API Plug-in must be downloaded as well. This provides the extra control in the form of CCameraAdvancedSettings (ecamadvancedsettings.lib, ecamadvancedsettings.h).

Developers should note that the overall programming model of the Symbian^3 native SDK is different: calls are made asynchronously, which can cause problems and additional work when the ECAM API is used with the Qt framework.

Using the ECAM API is the most efficient way to receive the camera’s viewfinder image in Symbian^3 devices.

The easiest way for a developer to start Symbian^3 development is to download the tools, and then to get your hands dirty with an example. The Symbian^3 native SDK can be found here: http://www.developer.nokia.com/info/sw.nokia.com/id/ec866fab-4b76-49f6-b5a5-af0631419e9c/S60_All_in_One_SDKs.html

Example application with autofocus support: S60 Platform: Camera Example v3.0

Additional information about ECAM:

98499337-5CB4-5A9C-907A-95FF57775983.html

Using native camera application to take a photo

PROS and CONS:
+ Use the image adjustments of the native camera application
+ No implementation of the camera API is required
- The native camera application requires lots of memory, and there may not be enough left over for the developed application

Sometimes the application developer doesn’t need to include a viewfinder or method of adjusting the camera settings. You might only need to capture an image and use it in some way within the application. For these scenarios, it might be sufficient to use the phone’s native camera application to take the image, and then get the information about the photo that has been taken in order to process it in the application.

This kind of scenario is made up of the following steps:

  • Start the native camera application.
  • Register to listen to the events about newly taken images by the native camera application.
  • Read the image and process it in your application.


Starting the native camera application

The native camera application in Symbian can be started by using the QProcess::startDetached() static method. In the code, write the following snippet to start the native camera application:

QProcess::startDetached("CameraApp.exe");

Register to listen for the new images

Implement an active object (Symbian code) that listens for the camera's image-saved events. The active object uses the MCameraFileObserver interface to report the file name of the captured image. Add the following libraries to the project file:

symbian {
    LIBS += -lcone -leikcore -lavkon
}

Here is the header of the active object and the MCameraFileObserver interface:

#ifndef CCAMERAFILEOBSERVER_H
#define CCAMERAFILEOBSERVER_H

#include <e32base.h>
#include <e32std.h>
#include <e32property.h>

class MCameraFileObserver
{
public:
    virtual void NewCameraFileL(const TFileName& aFileName) = 0;
};

class CCameraFileObserver : public CActive
{
public:
    static CCameraFileObserver* NewL(MCameraFileObserver& aObserver);
    static CCameraFileObserver* NewLC(MCameraFileObserver& aObserver);
    ~CCameraFileObserver();

private:
    CCameraFileObserver(MCameraFileObserver& aObserver);
    void ConstructL();

private: // From CActive
    void RunL();
    void DoCancel();
    TInt RunError(TInt aError);

public:
    void Start();
    void GetLastFileL(TFileName& aFileName);

private:
    MCameraFileObserver& iObserver;
    RProperty iProperty;
};

#endif // CCAMERAFILEOBSERVER_H

And here is the implementation of the class. The native camera application is identified by its UID. The class calls MCameraFileObserver::NewCameraFileL() when there is a new image from the native camera application:

#include "CameraFileObserver.h"

// Publish & Subscribe UID and key of the native camera application
const TUid KPSUidCamcorderNotifier = {0x101F882E};
const TUint32 KCamLatestFilePath = 0x00000001;

CCameraFileObserver::CCameraFileObserver(MCameraFileObserver& aObserver) :
    CActive(EPriorityStandard), iObserver(aObserver)
{
    CActiveScheduler::Add(this); // Add to scheduler
}

CCameraFileObserver* CCameraFileObserver::NewLC(MCameraFileObserver& aObserver)
{
    CCameraFileObserver* self = new (ELeave) CCameraFileObserver(aObserver);
    CleanupStack::PushL(self);
    self->ConstructL();
    return self;
}

CCameraFileObserver* CCameraFileObserver::NewL(MCameraFileObserver& aObserver)
{
    CCameraFileObserver* self = CCameraFileObserver::NewLC(aObserver);
    CleanupStack::Pop(); // self
    return self;
}

void CCameraFileObserver::ConstructL()
{
    // Defining the property requires WriteDeviceData if the UID is not the SID of the process
    TInt err = RProperty::Define(KPSUidCamcorderNotifier, KCamLatestFilePath,
                                 RProperty::ELargeText);
    if (!((err == KErrNone) || (err == KErrAlreadyExists)))
    {
        User::LeaveIfError(err);
    }
    err = iProperty.Attach(KPSUidCamcorderNotifier, KCamLatestFilePath);
    User::LeaveIfError(err);
}

CCameraFileObserver::~CCameraFileObserver()
{
    Cancel();          // Cancel any outstanding request
    iProperty.Close(); // Close the property handle
}

void CCameraFileObserver::DoCancel()
{
    iProperty.Cancel();
}

void CCameraFileObserver::RunL()
{
    TInt err = iStatus.Int();
    // Resubscribe before handling the event so that no update is missed
    iProperty.Subscribe(iStatus);
    SetActive(); // Tell scheduler a request is active
    if (err == KErrNone)
    {
        TFileName fn;
        GetLastFileL(fn);
        iObserver.NewCameraFileL(fn);
    }
}

TInt CCameraFileObserver::RunError(TInt /*aError*/)
{
    return KErrNone;
}

void CCameraFileObserver::Start()
{
    Cancel();                     // Cancel any request, just to be sure
    iProperty.Subscribe(iStatus); // Request notification of the next change
    SetActive();                  // Tell scheduler a request is active
}

void CCameraFileObserver::GetLastFileL(TFileName& aFileName)
{
    TInt err = RProperty::Get(KPSUidCamcorderNotifier, KCamLatestFilePath, aFileName);
    User::LeaveIfError(err);
}
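As a hypothetical usage sketch (the class and member names below are illustrative, not from the guide), an application-side class could implement MCameraFileObserver and start the observer like this:

```cpp
// Illustrative only: a class that owns the observer and receives the callbacks.
class CCameraListener : public CBase, public MCameraFileObserver
{
public:
    void StartListeningL()
    {
        iFileObserver = CCameraFileObserver::NewL(*this);
        iFileObserver->Start(); // Begin waiting for new-image notifications
    }

    // From MCameraFileObserver: called with the path of each new image
    void NewCameraFileL(const TFileName& aFileName)
    {
        // Pass aFileName on to the application, e.g. to the Qt side for loading.
    }

private:
    CCameraFileObserver* iFileObserver; // Owned
};
```

The listener must keep the observer alive for as long as notifications are wanted, and delete it when done so that the outstanding subscription is cancelled.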

GOOM handling

When the native camera application is started, the graphics memory of the device may become very low, and the system will begin to release resources that consume it. This means there is a chance that the system will close your application as well.

To prevent this from occurring, you must handle the KGoomMemoryLowEvent event. See the following article to learn how it can be handled: Graphics memory handling

Processing the image

After the path to the image has been resolved, the image can easily be read by using the QImageReader class, which handles format conversion and scaling to the desired size. This is extremely useful, as captured images can be very big, and loading them at full size takes a lot of memory.

In general, the image only needs to be displayed in the same resolution as the application, for example in 360x640, and so it would be a waste of resources to load the image in the original resolution, for example 1832x3264, but only display it in 360x640.

To read the image scaled to the desired resolution, write:

QImageReader reader(m_pathToImage);
if (reader.canRead()) {
    // Scale to the target size while keeping the aspect ratio
    QSize imageSize = reader.size();
    imageSize.scale(size(), Qt::KeepAspectRatio);
    reader.setScaledSize(imageSize);
    QImage image = reader.read();
}

Terms and abbreviations

  • ARGB32: A 32-bit colour space that has alpha, red, green, and blue components of 8 bits each.
  • ECAM: The name of the native Symbian Camera API.
  • GOOM: The Graphics Out Of Memory monitoring process, which sends GOOM events to applications to report low graphics memory status.
  • ISO value: A sensitivity setting of the image sensor, defining how easily the sensor reacts to light. A higher ISO value requires less light to expose the image, but adds more noise to it.
  • OpenGL: A cross-platform API standard for writing applications that produce hardware-accelerated 2D and 3D computer graphics.
  • OpenVG: An API that provides hardware-accelerated 2D graphics.
  • Qt Mobility: A collection of mobility APIs, mostly targeted at mobile devices.
  • RGB32: A 32-bit colour space that has red, green, and blue components of 8 bits each; the remaining 8 bits are unused.
  • SwEvent: A Symbian capability that restricts access to APIs that simulate user input events (e.g. hardware key presses).
  • UYVY: A packed pixel format (colour space) containing a Y (luma) component for each pixel, and U and V (chrominance) components for every other horizontal pixel.

Change history

  • 16 April 2012 1.0 Initial document release
  • 25 July 2012 1.1 Chapter 4.11 “Debugging OOM errors” added.

Copyright © 2012 Nokia Corporation. All rights reserved. Nokia and Nokia Developer are trademarks or registered trademarks of Nokia Corporation. Other product and company names mentioned herein may be trademarks or trade names of their respective owners.

Disclaimer
The information in this document is provided ‘as is’, with no warranties whatsoever, including any warranty of merchantability, fitness for any particular purpose, or any warranty otherwise arising out of any proposal, specification, or sample. This document is provided for informational purposes only. Nokia Corporation disclaims all liability, including liability for infringement of any proprietary rights, relating to implementation of information presented in this document. Nokia Corporation does not warrant or represent that such use will not infringe such rights. Nokia Corporation retains the right to make changes to this document at any time, without notice.

Licence
A licence is hereby granted to download and print a copy of this document for personal use only. No other licence to any other intellectual property rights is granted herein.
