
Fundamentals of Symbian C++/Platform Security

Article Metadata
Created: hamishwillee (10 Jan 2011)
Last edited: hamishwillee (23 Jul 2012)

The Symbian platform supports the development of native (C and C++) code by third parties such as network operators and independent software vendors. Such code may be packaged up in an installation file called a SIS file, and then installed on the phone by the end user.

Before the introduction of platform security, the third-party code could call any of the APIs exposed by the operating system, even those not explicitly published for external use. So the user had to trust the supplier of an application completely if they installed it. Anything they installed could potentially spend their money (for example, by making premium-rate phone calls), access personal or commercially sensitive data (such as calendar information or e-mail messages), or affect the behaviour of other applications (for example, by changing system settings).

'Platform security' is the collective name for a group of technologies whose primary function is to control application access to data and system services.

Platform security gives the user more control by allowing them to install applications that they trust in a limited way: the user can install an application and be confident that it will only do the things it claims it needs to do. For example, a simple game may be refused network access or access to a user’s personal data.

There are three interrelated components of platform security:

  1. The capability model: this says that every process on the device runs with a set of capabilities. Access to certain system services and resources is permitted only to processes that possess specific capabilities.
  2. Process identity:
    • Every independently certified EXE on the device has a globally unique secure identifier. All servers are able to examine the identifier of their client processes, so a server may know exactly which EXE is requesting a given service.
    • Every independently certified EXE on the device may have a vendor identifier that securely identifies the organization that created it.
  3. Data caging: different parts of the file system are restricted, so only processes with specific capability sets, or secure identifiers, can read and/or write particular directories.

The design makes it possible to grant access to sensitive services only to trusted parties. Symbian uses a certification and code-signing scheme to determine which capabilities may be granted and to assign secure identifiers.

The definitive guide to platform security is the book Symbian OS Platform Security by Craig Heath.


The Capability Model

Every binary in the system – including operating system code – has a capability field that defines the capabilities it has requested at build time.

The structure and definition of this are the same for EXEs and for DLLs, but their meaning and the operating system’s use of them is completely different, as I’ll explain in the sections on EXEs and DLLs below.

Capability Field Structure and Definition

The capability field is a bitfield in which each bit represents a single capability (not all bits are used at the moment).

Each capability is logically independent of all the others: holding one capability does not imply holding any other, and there is no hierarchy of capabilities, so there is no direct 'super-user' analog. (TCB comes closest to being an exception: code holding the TCB capability can create and modify executables, and could therefore bypass any security restriction simply by building an executable with whatever capabilities it chooses.)

Capabilities are divided into user capabilities and system capabilities.

User Capabilities

User capabilities are designed to be meaningful to the end user. Depending on the specific security policy chosen by the manufacturer, the phone may present the user with the option of granting user capabilities to an application on installation.

User capabilities are defined for activities that could cost the user money (such as using the network) or violate their privacy (such as accessing the address book).

The complete list of user capabilities is given below:

  • NetworkServices: the ability to make phone calls, send e-mails, and so on
  • LocalServices: the ability to use short-link network services such as Bluetooth
  • ReadUserData: the ability to read the user’s private data
  • WriteUserData: the ability to modify or create the user’s private data
  • Location: the ability to access the device’s location
  • UserEnvironment: the ability to access information about the user’s environment, including recording audio and using the camera

System Capabilities

System capabilities are not expected to be meaningful to the end user, so users are not given the option to grant them.

Some system capabilities allow access to services at a lower level than user capabilities, thereby providing backdoor access to activities already protected by a user capability. For example, direct access to comms device drivers could provide a backdoor to access the network, and direct access to application private data could provide a backdoor to access user data. Additional capabilities are defined for activities that could affect the integrity of the system as a whole, and for certain other quite specialized activities such as digital rights management.

A subset of system capabilities is given below. For the complete list, refer to the Developer Library or the book Symbian OS Platform Security.

  • ReadDeviceData: the ability to read system settings (such as IAP settings)
  • WriteDeviceData: the ability to modify or create system settings
  • CommDD: access to communication device drivers
  • DRM: access to content protected by some form of digital rights management
  • AllFiles: read access to the entire file system, and write access to applications’ private data
  • TCB: read and write access to the part of the file system where binaries are stored
TCB stands for 'trusted computing base,' which is a standard term referring to that part of a system responsible for ensuring the security of all other parts of the system.

The TCB comprises:

  • The kernel
  • The file server
  • The loader
  • The secure installer.

In fact, TCB capability only has an indirect relationship with the trusted computing base itself. Not all components that require TCB capability are actually part of the trusted computing base. For example, an enterprise device management system may need TCB capability because it needs direct read/write access to binaries, even though it is not primarily responsible for maintaining the system security.

But because the capability bits are stored in binaries themselves, the TCB capability enables the code possessing it to assign any set of capabilities to any binary on the device. So TCB-capable code is able to subvert the whole security model, and must be trusted to the highest degree.

Use of EXE Capabilities

Assignment of Process Capabilities

When an EXE is launched, it – and all the DLLs it links to – are loaded into the EXE’s new process. The capability word is copied from the EXE header and assigned to the new process. So the process capabilities are the same as the EXE capabilities. The process capabilities never change during the process lifetime. DLLs run with the capabilities of the process they have been loaded into. See Use of DLL Capabilities.

Let’s consider an example. Suppose the EXE is an application called MyApp.EXE. It links against an application engine DLL, MyAppEngine.DLL. The engine in turn links against the telephony client DLL, ETEL.DLL.

MyApp.EXE has two capabilities: NetworkServices and ReadUserData.

When MyApp.EXE is launched, it is loaded into a new process, along with the two DLLs it links to. The capabilities are copied from MyApp.EXE’s header into the kernel-side representation of the process:

[Figure: Process capabilities]

Process Capability Checking at IPC Boundaries

Suppose MyApp wants to make a phone call. It will call a Dial() method on ETEL.DLL, which will send a message to Symbian’s telephony server via the IPC mechanism. The telephony server requires the NetworkServices capability for Dial() requests. So it looks at the capabilities of the calling process (this information is actually maintained by the kernel, so is not forgeable by the client), and denies the request if the capability is not present:

[Figure: Capability checking]

Servers have a lot of flexibility in how to use capabilities here. They can require capabilities for a client’s Connect() method to succeed, or they can allow connections and police individual API calls. They can also accept or reject calls based on the arguments passed in as well as the capabilities; the file server in particular does this, as we will see.

If a capability check fails, the server may complete the request with an error code such as KErrPermissionDenied, or panic the client.

Servers can implement a security policy for their APIs by deriving from CPolicyServer and supplying a CPolicyServer::TPolicy object, which specifies which check is performed when each IPC method is called.

Use of DLL Capabilities

DLLs also have capabilities, represented in the same way and referring to the same privileges. But the capabilities of a DLL do not affect the capabilities of the process that loads it: process capabilities are entirely defined by the capabilities of the EXE.

The rule for DLL capabilities is: a binary cannot load any DLL that has fewer capabilities than itself. It is enforced by the loader.

The rationale for this rule is as follows: in our example above, we have seen that all code in the process runs at the same capability level. But any given binary cannot possibly ‘know about’ all the other binaries it links to, both directly and indirectly. So how can it trust that the code it is linking to will not abuse the privileges derived from the EXE? For example, suppose MyApp.EXE links to another DLL, which claims to do some entirely innocuous stuff – string to integer conversion, for instance. This DLL may have been supplied by some other third party of whom the developer of MyApp.EXE may know nothing. When MyApp.EXE calls the innocuous DLL function, there is nothing to stop that DLL from making a premium-rate phone call, because the telephony server will only check that the process has NetworkServices, which it does:

[Figure: Capability subversion]

The app engine thinks it’s executing a simple atoi, but it’s actually been subverted to spend the user’s money. This issue is solved by DLL capabilities.

Thus a DLL’s capabilities are a statement of trust: the DLL is trusted not to abuse the privileges it has been granted, and so may safely be loaded into processes running at that capability level. In our example above:

  • If Innocuous.DLL does not have the NetworkServices capability, then MyAppEngine.DLL will fail to load it.
  • If Innocuous.DLL does have NetworkServices, then that means it can be trusted to be loaded into processes running with NetworkServices and not do things like making premium-rate phone calls inside an atoi function.

This means that the capability set for any DLL must be the union of the capability sets of all processes into which it may ever be loaded. The implication is that a general-purpose DLL, which may be loaded into any process, must carry a very broad capability set.

Working Out Which Capabilities You Need

For EXEs, the capability set you need depends on what you want to do, in particular, which server APIs you need to call and what data you need to access. There are not many capabilities and their purpose is usually obvious from their name; you should start off with a good idea of which capabilities your application is likely to need.

If your thread fails a capability check, the server will most probably complete the request with an error code such as KErrPermissionDenied (-46), or may panic the client. In any case, for emulator builds, if the epoc.ini file under /epoc32/data in the SDK installation directory contains the line PlatSecDiagnostics ON, then the server will write a diagnostic line to the debug output, which will look something like this:

*PlatSec* ERROR - Capability check failed – A Message (function number=0x4000100a) from
Thread lbs-application.exe[10285a9b]0001::Main, sent to Server !PosServer, was checked
by Thread EPosServer.EXE[101f97b2]0001::!PosServer and was found to be missing the
capabilities: Location. Additional diagnostic message: Checked by CPolicyServer::RunL

This tells us:

  • The name of the thread whose request failed, in this case lbs-application.exe[10285a9b]0001::Main. The first part of the name will usually be the name of the EXE.
  • The name of the server that rejected the request, which in this case is the positioning server !PosServer.
  • The name of the thread the server was running in, here EPosServer.EXE[101f97b2]0001::!PosServer
  • The name of the missing capability: Location
  • The value of the enumeration for the IPC that failed: 0x4000100a. This enables us to know exactly which operation failed: in this case the enumeration name with the value 0x4000100a is ELbsPosNotifyPositionUpdate, so the operation that failed is a request for a location update.

A simplistic approach to determining the capabilities you need would be to start with zero capabilities and keep adding them until you stop getting failures of this sort. But it should be clear that sometimes messages such as this are indicative of more fundamental problems in the code, so you should start with some expectation of the capabilities you will need and pay attention to any surprising failures. For example, if you only want to read contact details, but the API you are using requires the AllFiles capability, then you should treat the error as an indication that you may need to use a different API, not that you need the AllFiles capability.

This is especially true if the missing capability is a very powerful one such as AllFiles. However, even if it is just an unexpected one, it is worth considering whether there is a better way to achieve the same result.

For DLLs, the capability set you need is the union of the capability sets of all processes into which the DLL may ever be loaded. For an application engine, this would be the same as the application EXE, but for a more general-purpose DLL the set can be much bigger.

If your EXE has more capabilities than the DLL it is linking to, the EXE will fail to launch and the following diagnostic will be written to the debug output:

*PlatSec* ERROR – Capability check failed – Can't load lbs-application.exe because it
 links to lbs-appengine.dll, which has the following capabilities missing: AllFiles 

This tells us:

  • The name of the EXE: lbs-application.exe
  • The name of the DLL with insufficient capabilities: lbs-appengine.dll
  • The name of the missing capability: AllFiles.

How to Assign Capabilities

Capabilities are specified in MMP files using the CAPABILITY keyword followed by a list of the names of the capabilities:

CAPABILITY ReadUserData WriteUserData NetworkServices

The special keyword ALL can be used to include all capabilities, and this can be followed with the names of capabilities preceded by a minus sign to include all capabilities except those listed. So:

CAPABILITY ALL -TCB -DRM

will include all capabilities except TCB and DRM.
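Putting this together, a hypothetical MMP fragment for the MyApp example used earlier might look like this (the TARGET and UID values are invented for illustration):

```
// MyApp.mmp (illustrative fragment; the UID values are made up)
TARGET        MyApp.exe
TARGETTYPE    exe
UID           0x100039CE 0xE1234567
CAPABILITY    NetworkServices ReadUserData
```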

For more information, see Troubleshooting platform security problems.

Process Identity

SecureID (SID)

Every binary in the system contains a 32-bit secure identifier value, referred to as the SID. When an EXE is launched, the SID is copied from the EXE into the process.

  1. The EXE cannot change the SID.
  2. The process SID is always the same as the EXE SID.
  3. If the same EXE is launched multiple times, all the processes will have the same SID.

Exactly as for capabilities, servers can find out the SIDs of their clients, and use this to decide whether or not to service a request. SIDs may be defined for DLLs, but these are not used.

Protected and Unprotected SIDs

SIDs are divided into two ranges:

  1. The protected range, from 0x00000000 to 0x7FFFFFFF
  2. The unprotected range, from 0x80000000 to 0xFFFFFFFF

EXEs that contain SIDs in the protected range must be signed by an approved authority (this is enforced by the software installer at install time). This means that code running on the device can be assured that protected SIDs were properly assigned from the global SID-space. So protected SIDs are globally unique, and can be used to identify individual applications. Protected SIDs are supplied by Symbian Signed, and are specified in the MMP file using the SECUREID keyword. If no SECUREID statement is present in the MMP file, the UID3 is used instead; omitting SECUREID and using UID3 are functionally identical, but using SECUREID makes the intent clearer.

If an application has a secure identifier in the protected range, the system is able to protect it from other applications. This means that it can store private data in its own private directory that is inaccessible to other applications, that other applications are not able to impersonate it, and that other application install packages are not able to alter it. Note that processes with the AllFiles capability can access private directories belonging to other applications. However, AllFiles is only granted to very trusted applications.

EXEs that contain SIDs in the unprotected range are not guaranteed to be unique.

VendorID (VID)

Each binary in the system may optionally also contain a 32-bit vendor identifier, referred to as the VID. As is the case for SIDs and capabilities, the EXE VID is copied into the process, cannot be changed by the EXE itself and can be queried by servers. VIDs are, of course, not unique: all EXEs provided by a single organization will share the same VID value. But all VIDs are protected, so if an EXE contains a VID it must be signed by an approved authority.

VIDs could be used by a device manufacturer to implement a security policy in which only their own applications are allowed to use certain interfaces. In such a case, using an SID or even a list of SIDs is too constraining, because they may not know in advance which of their applications may need to use it. Vendor identifiers are less often useful to other developers, but we could imagine the developer of a suite of applications using the VID to share user preferences across their different applications. VIDs may be obtained from Symbian Signed and built into an EXE using the VENDORID keyword in the MMP file.

Data Caging

Data caging is the term used to describe the practice of restricting access to certain parts of the file system.

The most obvious use for this is to protect the binaries themselves. Since we have seen that capabilities, SIDs and VIDs are stored in the binaries, write access to them needs to be restricted. Additionally, data caging protects read-only resources from accidental or intentional modification by unauthorized code, and provides each EXE with its own private data area.

Data caging is enforced by the file server on a per-directory basis. The following top-level directories have some restrictions placed on them:

  • \sys\ : read access requires AllFiles; write access requires TCB. This contains \sys\bin\ (all binaries are stored under here, and the loader will only load binaries from this location) and \sys\hash\ (hashes of binaries stored on removable media; see below).
  • \resource\ : read access requires no capabilities; write access requires TCB. Read-only resources (such as bitmaps) go here.
  • \private\ : both read and write access require AllFiles, or a SID equal to the subdirectory name. This is the application private data storage: it contains a subdirectory for each EXE that requires it, and the directory name is the SID of the EXE, for example \private\101f7663\.

Access to all other top-level directories is unrestricted.

Data Caging and Removable Media

We have seen that:

  • The capability model enables data caging, because the file server controls access using capabilities.
  • Data caging enables the capability model, because it protects the integrity of the binaries where capabilities are stored.

Removable media threaten the second of these assertions. If we allow binaries to be stored on removable cards, then we can no longer guarantee their integrity. If the card is inserted into a reader attached to a PC, we could directly alter the capabilities inside the binaries. To prevent this, when binaries are installed to a removable drive, the installer calculates a hash of the binary, and stores the hash on the internal drive, under \sys\hash\. Then when the loader loads any binary from a removable drive, it recalculates the hash of the binary and checks that it matches the hash retrieved from the internal \sys\hash\ directory. If it doesn’t match, it fails to load. So:

  • Any binaries on removable media that have not been through the installer will not be run, because the hash will not be found.
  • Any binaries on removable media that have been altered post-install will not be run, because the hash will not match.

Sharing Data Securely

If you need to keep data private, or simply have no need to share it, you can keep it in your private directory. If you need to share your data in a controlled fashion, you need to define a security policy for it. You could implement the policy by implementing a server, but in general you do not have to, because there are mechanisms provided by the OS for sharing data with access control.

In particular:

  • You can define a security policy for a central repository key, or a group of keys. The policies can control read and write access based on a combination of secure identifiers and capabilities, and are defined in the INI file that defines the syntax of the keys themselves and other metadata. See the Central Repository How To guide in the Symbian Developer Library for details on specifying a security policy for keys. Note that third-party developers are not currently able to define new keys or create new repositories.
  • Databases created using Symbian's SQLite component (RSqlDatabase) can be associated with a user-defined security policy, encapsulated as a TSecurityPolicy object, to control who can read and write to the whole database or individual tables, and who can modify the database schema. It is not possible to define new security policies for databases created using Symbian’s DBMS component (RDbNamedDatabase), although you can associate them with one of a number of predefined policies.
  • Access to publish and subscribe properties (RProperty) can be controlled using customized security policies defined as TSecurityPolicy objects. Additionally, for new programs, the category value for a property must now be the same as the defining process's secure ID. This protects the properties a process defines and relies on, stopping other processes from taking them over. It is, in effect, data caging for properties.

The Symbian Press booklet on data sharing and persistence gives an overview of these technologies, and can be found at Data Sharing and Persistence with Symbian C++.

Certification and Platform Security

The Symbian platform distinguishes two main classes of applications: trusted applications and untrusted applications.

Trusted applications are applications that have been signed either directly by an independent authority (for example, Symbian Signed), or indirectly with a key that has itself been certified by such an independent authority. The main criterion is that in some sense an independent authority has been able to exercise some control over the application.

Untrusted applications are applications that have been developed and deployed entirely outside the control of any independent authority. They may be unsigned or, more often, self-signed (signed with a key pair whose accompanying certificate is signed with the same key, rather than by an independent authority).

The concept of trusted and untrusted applications intersects with platform security in two places: secure EXE identity and capability assignment.

Certification and Secure EXE Identity

System software components use EXE secure identifiers to decide whether to grant access to various resources: the most obvious system component and resource being the file server and the application's private data area, respectively. If another EXE has the same SID as mine, whether accidentally or maliciously, it can impersonate my application and gain access to my private data files. Symbian Signed (or some other trusted authority) therefore needs to check that EXEs only use the secure identifiers that have been assigned to them. This is why any EXE containing a secure identifier from the protected space must be signed by an approved authority; it is a consequence of the way secure identifiers are implemented in Symbian OS.

Certification and Capability Assignment

The rules determining what, if any, certification is needed for a given capability set are determined by the device manufacturer as a matter of policy. We can distinguish four general approaches to answering the question of what an application is allowed to do:

  1. An open policy: the application is automatically granted the capability it asks for.
  2. A discretionary policy: the system asks the user if they are prepared to grant the capability.
  3. A controlled policy: the application must be granted the capability by an external authority.
  4. A closed policy: the application is never granted access to the capability.

These different policies represent different trade-offs between openness and control. To simplify, we could say that different stakeholders have different interests here. Developers would like to minimize the barriers to getting their applications written and distributed. Users would like some assurance that the applications they install are reliable. Network operators and device manufacturers want stable devices so as to minimize their support costs and do not always trust users to make good security decisions or even, where applications such as DRM are concerned, to co-operate.

(The reality is more complex, of course: device manufacturers have an interest in promoting third-party development, and developers have an interest in promoting confident users.)

The approach taken by the Symbian platform can be seen as a compromise in which:

  • Applications using only APIs that require no capabilities do not need to be trusted (open).
  • Untrusted applications that require capabilities from a specific subset may be installed if the user agrees (discretionary).
  • Applications that require capabilities outside that set must be trusted (controlled). This set is further subdivided, with more onerous verification requirements being imposed for capabilities considered especially powerful or sensitive such as DRM, AllFiles and TCB.

In theory at least, no capabilities are closed to third parties, although the most sensitive system capabilities (such as DRM and TCB) may be hard to acquire.

The certification options outlined here are subject to change in the future, although the underlying principles are likely to stay the same.

Untrusted Applications

Applications that do not need any capabilities and do not need the protection afforded by having an SID from the protected range do not need to be trusted. Additionally, untrusted applications are typically allowed the following set of capabilities:

  • ReadUserData
  • WriteUserData
  • Location
  • LocalServices
  • NetworkServices
  • UserEnvironment

There are two caveats to this:

  1. Untrusted applications are only allowed these capabilities at the user’s discretion. So, at install time, the installer asks the user whether they want the application to be installed, and lists the capabilities it is requesting. If the user refuses, the application is not installed.
  2. There is no guarantee that all devices will be configured to allow users to grant capabilities at their discretion.

Self-Signed Applications

Technically, untrusted applications may just be unsigned. But, in practice, some manufacturers (for example, Nokia) mandate that all applications are signed. This means that untrusted applications must be self-signed.

For self-signed applications the developer:

  • generates a signing key pair.
  • creates a certificate by signing the public key and their name with the corresponding private key.
  • signs their application with the private key and distributes it along with the certificate.

The Carbide.c++ and Qt Creator IDEs handle most of this for you (including creation of the keys), as discussed in Introduction to Self Signed.

Trusted Applications

To gain access to a wider set of capabilities, applications need to be trusted, and this means the developer needs to interact with Symbian’s certification process.

Symbian Signed defines the certification process and is the main interface to it for developers. Further information about Symbian Signed can be found in the User guide: Symbian Signed.

© 2010 Symbian Foundation Limited. This document is licensed under the Creative Commons Attribution-Share Alike 2.0 license.
Note that this content was originally hosted on the Symbian Foundation developer wiki.
