Symbian OS Platform Security/04. How to Write Secure Applications
Reproduced by kind permission of John Wiley & Sons.
What Is a Secure Application?
Before considering how to write secure applications, it is worth examining what we mean by a secure application and why you would want your application to be secure.
There are many definitions of a secure application involving concepts such as mutual authentication, remote attestation and provable behavior but, although these concepts are useful in their specialist domains, most application writers are looking for a more general definition.
A secure application is one that its audience can trust without being disappointed. Conversely, an insecure application is one that either its audience does not trust or that disappoints those who trust in it. Clearly there may be several categories of interested parties in the audience and they may trust your application in different ways. They may also be disappointed in varying degrees and at varying times, and they may respond in different ways, but, as an application writer, you will need to take this variety into account.
In particular, trust may be based on reasonable or unreasonable expectations. By being aware of the security of your own application you can minimize unreasonable expectations by not overselling it, and maximize sales by not underselling it.
As with most forms of security, application security has two strands to it, the first being an analysis of the potential threats and their impacts and the second being the deployment of appropriate countermeasures.
Analyzing the Threats
Who Is Interested in my Application’s Security?
If you examine the groups of people that trust your application, you can consider what is important to them. This can then guide you in determining what should be protected and how well it should be protected. We will list those people typically involved in an application and what their concerns usually are, but you need to think about which groups are interested in your application, what their concerns will be and whether you want to take their concerns into account when producing your application.
There are several categories of people that are involved with the lifecycle of an application and so have an interest in its security. They have differing but overlapping concerns and need to be considered individually.
The first, of course, is you, the application writer. There are four areas that commonly need to be secured:
- anti-piracy information such as registration codes, expiry dates and trial-mode flags. There may be a significant financial impact if this information is compromised
- intellectual property such as pictures and artwork. You may have expended considerable effort in developing these and wish to prevent unauthorized copies, or you may have licensed them from a supplier under terms that require you to protect them
- information that may devalue your product if it became widely known, such as a map of all of the high-value items in a game
- methods of bypassing parts of your product without your consent, such as using your game engine with a different client or reskinning your media renderer.
Some of these may be of particular interest to developers of software products that compete with yours, and we will see later that this should also guide your security strategy.
The next category of people to consider is the end-users. Their concerns are usually straightforward and are three-fold:
- incurring unauthorized financial cost. This is often caused by messages that are sent, or voice or data calls that are made, without the user’s consent, particularly to high-charging end-points. Less commonly, malware may attempt to use financial information on the mobile phone, such as using account details to make unauthorized transactions
- disclosure of personal information. This is a particularly sensitive area as different users have different concepts of what information is personal and what is a reasonable disclosure
- inability to access valuable data. There are three common cases of this: loss (deletion) of data, corruption of data and data being held in a proprietary format that can no longer be accessed.
Another group with a vested interest in the security of your application is the retailers. Although not usually concerned with specific security issues, other than anti-piracy protection, they will not want to sell products that generate end-user complaints or damage the market for their other products. Network operators will also be concerned about unhappy end-users, particularly if your application gives rise to disputed call charges. In an extreme case, an application that could be provoked to flood the network with call set-up attempts, or other messages, would also arouse the interest of the operator.
The owners of other content on the mobile phone, whether data or applications, will also be concerned with the integrity of your application, even though you do not intend your application to access their content. In particular, they will expect that your application does not cause the following problems:
- the ability to read data that another application protects. A file browser that allowed the viewing of another application’s DRM decryption keys would undermine the integrity of that application
- the ability to subvert data on which another application depends (such as resetting the timestamp on an evaluation copy of an application)
- the ability to use your application as a stepping-stone to increase privileges in an uncontrolled fashion. This, typically, occurs when your application is trusted by another application with more capabilities.
Subverting your application may persuade the higher capability application to misbehave. This is classically known as an ‘escalation of privilege’ attack.
Lastly, the organization that signs your application will usually need assurance that it is secure, especially if your application requests one of the more sensitive capabilities. Their concerns will encompass many of the concerns of the other parties, including those of the owners of other content and applications. They may restrict you from writing a file browser that obtains other applications' decryption keys, but they will also stop other applications from having the capability to read your registration codes, and so they provide a safe but fair playing field for all developers. They can do this by signing applications but also by revoking the signatures of badly behaved applications.
Writing your application in a secure way is a winning strategy for a developer, as signing increases your application’s visibility and the security increases the end-user’s confidence and happiness with your product. An insecure application may lead to poor distribution and, in an extreme case, could lead to the application being blocked, and, possibly most significantly, damage the developer’s reputation.
What Do They Expect to be Secured?
Having decided who the stakeholders in your application’s security are, what their concerns are and which ones are of concern to you, you should produce a list of those areas that you need to protect in your application. The list for a typical application, considering the stakeholders listed above, might be:
- data that is not to be disclosed outside the application. This might include registration codes, etc.
- data that is not to be disclosed except to the user or other trusted software. The user’s contact list may be displayed to the user or sent to a printer but should not be left where an untrusted application could access it. The software that you trust will depend on the data – for example, you may wish to have your artwork displayed but not copied
- data that requires high integrity. This is data that should be protected against inappropriate modification, such as a certificate that identifies the bank to which the user is willing to send their account number and password
- normal behavior that can be used in abnormal ways. For example, an image renderer that is trusted to print DRM-protected content, used in conjunction with a printing module that will print to file, could easily be used to obtain an unprotected electronic copy
- securing the environment against unexpected modification by specific attacks designed to cause abnormal behavior.
Who Is Interested in my Application’s Insecurity?
Now that you have decided what you want to protect, you can consider who might want to attack your application, and why and how they might go about it. This will guide you in deciding the level of risk, and, therefore, the level of security that you need to protect your application. Attacks come in various guises but usually fall into one of four categories:
- attempts to cause damage or inconvenience to the user of the mobile phone, without regard for specific applications
- attempts to trawl the mobile phone for data deemed useful to the attacker. This might include names, addresses, financial account details, passwords and PINs
- attacks specifically on your application for the data it holds or can access
- attacks that specifically use your application as a means of replication, or of elevating the attacker’s capabilities.
Attacks on your application can come from one of several routes.
- other installed software that includes malicious behavior that is either unknown to or condoned by the user
- specific messages sent to the phone using mechanisms such as SMS, MMS, Bluetooth or infrared
- your own user interface. This could be because the authorized user of the mobile phone is attempting to exploit your application (e.g. to avoid registration) or it might be an attempt by an unauthorized user to obtain information from a stolen phone.
What Countermeasures Can Be Taken?
Know What your Application Does
Moving on to the means to protect your application, it may seem strange that the first topic is ensuring that you know what your application does. However, most applications start small and grow in functionality with each release. At the beginning it is clear to the developer what the purpose of the application is, but, as more features and options get added, the breadth of functionality often clouds clarity of purpose. Eventually, some applications reach the point where there are so many combinations of behavior that it is impractical to remember all of the possible internal interactions. This increases the chances that an attacker can cause unexpected behavior or data disclosure. A simple word processor that is enhanced to display DRM-protected embedded images and is further enhanced to support macro directives in the text might suddenly be used to save an image in an unsecured form.
A large piece of code, potentially with many capabilities and with considerable functionality, is not only more difficult to secure but more dangerous when exploited. If your application is reaching this stage, and you have not already done so, you should consider partitioning it into more manageable (and easier to secure) modules. The degree of partitioning will depend on the protection that you require. At the most extreme, using multiple processes gives you the full protection of the operating system, with each process having its own platform security capabilities, so that no piece of code need have more capabilities than it really needs. Processes can only interact through defined APIs, have their own data-caged private filestore and cannot modify each other’s memory, giving you rigid security. The disadvantages of this approach are extra code, more complex MakeSIS package files and a potential impact on performance. Less extreme forms of partitioning include using multiple threads within a process and, of course, the traditional C++ object class structure. Each thread may have a separate memory allocation but this is not protected against corruption from other threads. Likewise classes in the same thread, and threads within a process, all share a common filestore and capability set.
Whatever form of modularity you choose, you should concentrate on the APIs between each component. Good object-oriented design will have taught you encapsulation; this skill can be reused for protection. A secure API will expose only the information that needs to be exposed, ensure that modifications to its data retain consistency, and be at a sufficiently high level that it is not vulnerable to having a series of requests disordered. For example, if a database expected the following sequence of calls:
TRecordId recordId = database.FindRecord(key);
database.LockRecord(recordId);
database.DeleteRecord(recordId);
database.UnlockRecord(recordId);
providing instead the single call:
database.DeleteRecord(key);
will protect against attempts to delete an arbitrary record, to lock the database and never unlock it, or to delete without locking. Even if you haven’t partitioned your application into separate processes, this approach will reduce exploitable programming bugs.
When you consider what your application does (or what you think it does) you should give plug-ins special treatment. If your application allows plug-in code to execute (typically using ECOM), then you may need to limit which plug-ins can be loaded. Fortunately, platform security will not permit DLLs with a lower capability set than your process to be loaded by ECOM, but you may wish to provide further restrictions if you don’t want other developers providing plug-ins for your application. Plug-ins are covered in more detail in Chapter 6.
Mind your Capabilities
Most platform security capabilities have particular associated risks, either individually or in combination. If your application has capabilities, it should be designed to ensure that they cannot be maliciously exploited. The following are some examples of common risks associated with particular capabilities.
NetworkServices, LocalServices
Network connections can incur charges and may affect other users. A destination number used for telephone calls or messaging should be protected against malicious modification. Consider limiting the number of messages sent, or the amount of data transferred, before confirming with the user. This will minimize unexpected costs and reduce the chances of being used as a conduit to broadcast malware.
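The counting logic behind such a confirmation gate is straightforward. The sketch below is generic C++ rather than Symbian OS idiom; the class name and threshold are illustrative assumptions, not platform APIs:

```cpp
#include <cstddef>

// Sketch: gate outgoing messages behind periodic user confirmation.
// After iLimit silent sends, the application must ask the user before
// sending any more. All names here are illustrative.
class TMessageGate {
public:
    explicit TMessageGate(std::size_t aLimit) : iLimit(aLimit), iSent(0) {}

    // Returns true if another message may be sent without asking the user.
    bool CanSendSilently() const { return iSent < iLimit; }

    // Record one sent message.
    void RecordSend() { ++iSent; }

    // Call once the user has explicitly confirmed further sending.
    void UserConfirmed() { iSent = 0; }

private:
    std::size_t iLimit;
    std::size_t iSent;
};
```

The same gate can also be applied to the volume of data transferred rather than the number of messages.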
ReadUserData, ReadDeviceData, Location, UserEnvironment
Applications with these capabilities have access to information that the user may not want others to see. Be wary of where you store this information (log files are sometimes a source of leaks of information) and be particularly stringent if your application has NetworkServices or LocalServices capabilities (or communicates with other processes that do) as this gives an opportunity for remote snooping.
AllFiles, Tcb
Applications with these capabilities have unrestricted access to the file system (AllFiles can read everything and write all but \sys and \resource; Tcb can write to \sys and \resource). It is imperative that they take the highest level of precautions against race conditions with creation, deletion and rollback, and are allowed minimal run-time variations in behavior. It is well worth partitioning the trusted code into a separate process and ensuring that it rigorously identifies which processes it communicates with.
PowerMgmt
Applications with this capability can kill other processes, switch off the power or switch it on again. These operations should not be performed purely at the behest of other applications – user authorization should be sought. Any code that repeats such operations should limit the recurrence rate (or count) so that the user does not lose control of the mobile phone.
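A minimal recurrence limit can be sketched as follows. This is generic C++ with illustrative names, not a Symbian OS API; the current time is passed in as a parameter so that the policy itself is deterministic and easy to test:

```cpp
#include <cstdint>

// Sketch: refuse to repeat a sensitive operation (e.g. power-off) more
// often than once per interval, keeping the user in control of the phone.
// A real implementation would read the system clock instead of taking
// the time as an argument.
class TRecurrenceLimiter {
public:
    explicit TRecurrenceLimiter(std::int64_t aMinIntervalUs)
        : iMinIntervalUs(aMinIntervalUs), iLastUs(-1) {}

    // Returns true and records the attempt if enough time has passed
    // since the last permitted operation; otherwise refuses.
    bool TryOperation(std::int64_t aNowUs) {
        if (iLastUs >= 0 && aNowUs - iLastUs < iMinIntervalUs)
            return false;   // too soon: suppress the repeat
        iLastUs = aNowUs;
        return true;
    }

private:
    std::int64_t iMinIntervalUs;
    std::int64_t iLastUs;
};
```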
ProtServ
This capability is specifically relevant to servers rather than normal applications. However, you may have partitioned your application to include such a server. It is essential that a server using this capability does not usurp the name of any other protected server and so you should choose the most specific name that is reasonable. More details on secure servers can be found in Chapter 5.
Keep your Data Protected
Keep your Data Files Private
The introduction of platform security in Symbian OS has included the ability to place an application’s data in a secure data ‘cage’ where nothing else can access it. This alone is probably your most powerful tool in protecting your application so make sure that you use it to the full! Each application has a directory under \private in which only that application can create, read and write files.
You should always plan to use your application’s private directory to store all of your files unless there is a specific reason to the contrary. When you install your SIS file, you can arrange for files to be put into your private directory, which the installer will create for you automatically. If you haven’t done so, your application can create the directory itself when it runs.
RFs fs;
User::LeaveIfError(fs.Connect());
TInt err = fs.CreatePrivatePath(EDriveC);
if (err != KErrAlreadyExists && err != KErrNone)
    {
    User::Leave(err); // could not create the private directory
    }
One common reason for placing a file outside a private directory is to allow many different processes to access it. However, it is rare to have a file that you want to share with all processes; you usually wish to share with a particular group. In this case, you should consider writing a server to provide and police access to the data, so that you can still keep the file in a private area. A simple way to do this is for a handle to the file to be passed by the server to any other process that you are willing to grant access to (see Chapter 5). This may not give a sufficient level of interlocking or granularity for many applications but, if it is appropriate for you, it is easy to implement. If you have small quantities of data to share then you also have the options of using either the central repository or the secure DBMS server to store the data (see Chapter 7 for more detail on the various options available for sharing data securely).
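The policing idea can be sketched independently of the IPC details. In this generic C++ illustration (the class and the numeric client IDs are hypothetical stand-ins for a server and its clients' Secure IDs, not Symbian OS APIs), a broker releases the protected data only to clients on its allow-list:

```cpp
#include <cstdint>
#include <optional>
#include <set>
#include <string>

// Sketch: the data stays in the server's private area; clients are
// identified by an ID (standing in for a Symbian Secure ID) and only
// allow-listed clients receive the data. The real mechanism would pass
// an RFile handle over IPC rather than copying the payload.
class CDataBroker {
public:
    void AllowClient(std::uint32_t aSecureId) { iAllowed.insert(aSecureId); }

    void SetPayload(const std::string& aData) { iPayload = aData; }

    // Returns the protected payload only for authorized clients.
    std::optional<std::string> RequestData(std::uint32_t aSecureId) const {
        if (iAllowed.count(aSecureId) == 0)
            return std::nullopt;   // unknown client: refuse
        return iPayload;
    }

private:
    std::set<std::uint32_t> iAllowed;
    std::string iPayload;
};
```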
Do make sure that data that you intend to keep in your private directory remains there at all times. A common error is to create temporary files in public directories where they can be read or modified by other processes. Even exclusive access won’t protect your applications if the mobile phone reboots at the wrong time.
Expect External Access to your Files
Earlier we said that no other application could access your application’s private directory. Strictly speaking this isn’t true as an application with AllFiles capability has full access to all private directories. It is unlikely that any application would be signed to give it an AllFiles capability unless the signing authority was confident that the capability would not be misused. Of the few components that actually have this capability, examples are the native software installer itself (which needs to access private directories to allow applications to be installed and uninstalled) and the secure backup and restore engine (which needs to access private directories to read their backup configuration files and to backup and restore the selected files). You need to consider your application’s interaction with these two components, together with the possibility of direct access to the file system by bypassing the Symbian OS; these cases are described below.
Don’t Trust Removable Media
If your private directory is on a drive that is on a removable medium, for example, a Memory Stick or MMC, then there is nothing to stop the user from removing the card, placing it in a PC and reading it there.
Data on removable media can be accessed without going through the Symbian OS protection mechanisms when it has been removed from the mobile phone.
Whether this is an issue for your specific application is for you to decide but you should consider two points:
- Could your application use the card to store data that you’ve identified in the category that you don’t want the user to access?
- Could your application store data on the card that the user wouldn’t want to be retrieved from a stolen card?
If the user has all their account details stored in your banking application, it doesn’t matter how many passwords your application needs from the user if a thief can bypass it entirely by taking the card out of the mobile phone! Some phones support password-protected cards but you shouldn’t assume that all cards on all mobile phones have this protection.
You can force particular files to be installed on particular drives by explicitly specifying the drive letter in the target path in the PKG file, instead of using the ‘!’ character that lets the user select the drive at installation time. You should use this sparingly however as it stops the user from making the most of their total storage space and may even prevent them from installing your application at all. Obviously your choices will be guided by the size and sensitivity of your files and the space on the mobile phone but some options to consider are:
- Only store sensitive data on an internal drive.
- If you are protecting access to the data with a user-supplied password or PIN, your application could encrypt the stored data with that password or PIN and the user could select the drive of their choice at install time.
- If you have no user-supplied encryption key, your application could create one and store it on an internal drive, using it to encrypt any sensitive data stored on a removable drive.
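The division of labor (key kept on the internal drive, ciphertext stored on the removable one) can be sketched as below. The XOR "cipher" is only a placeholder to keep the example self-contained; a real application must use a proper cipher from the platform's cryptography APIs:

```cpp
#include <cstddef>
#include <string>

// Sketch: symmetric transformation applied to data before writing it to
// a removable drive; the key itself stays on an internal drive. XOR is
// NOT a real cipher - it is used here only so the sketch runs without a
// crypto library. aKey must be non-empty.
std::string XorCrypt(const std::string& aData, const std::string& aKey) {
    std::string out(aData);
    for (std::size_t i = 0; i < out.size(); ++i)
        out[i] = static_cast<char>(out[i] ^ aKey[i % aKey.size()]);
    return out;   // applying the same function twice restores the input
}
```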
You can tell if a particular drive is removable by calling RFs::Drive and examining the iDriveAtt member of the returned TDriveInfo for the KDriveAttRemovable attribute.
TDriveInfo info;
User::LeaveIfError(fs.Drive(info, EDriveE));
if (info.iDriveAtt & KDriveAttRemovable)
    {
    // Removable
    }
else
    {
    // Not removable
    }
The encryption methods available to you depend on the particular SDK that you are using.
Of course, even an internal drive is vulnerable to someone willing to take the mobile phone’s circuit boards to pieces and pull the data out of the storage chips. As you’ve already considered what might be of value and who might conduct an attack, you’ll be able to tell if that’s a risk that you’re willing to take.
The second problem that you may have with removable drives is that the user may modify the contents of the card on a PC and then reinsert it on the mobile phone. You may not care if the user fakes their high score in your game, but you might be more concerned if the expiry date for a trial version of your application is changed. You cannot prevent tampering when the card is out of the mobile phone, so any data that must be tamper-proof should be stored on an internal drive. However, it is often more efficient to use tamper-detection, where the user can change the data but the application can detect this and deal with the situation appropriately. For example, if a user did change their high-score table, the application could reset it to zero. If a change was made to the expiry date of an evaluation copy of an application it could simply refuse to run.
The simplest form of tamper-detection is to store a cryptographic hash of the vulnerable data on an internal drive. A hash is similar to a checksum but has the property that it is particularly difficult to change data while keeping the same hash, whereas that is relatively easy to do with a checksum. Symbian OS uses this technique itself when program binaries are installed on removable media, to ensure that their capabilities and other properties of the binaries (as well as the code itself) are not modified after installation. Just in case someone wires up a removable card so that they can modify it without removing it from the mobile phone, Symbian OS checks the hash each time the binary is loaded. You may wish to use a similar level of security, or you may decide only to check when the card is changed while your application is running, e.g. by using the RFs::NotifyDismount API.
Symbian OS provides a range of hash functions (also known as message digests or one-way functions) – such as SHA-1 and MD5, which are implemented by the CSHA1 and CMD5 classes – that you can use to do this.
CSHA1* sha = CSHA1::NewL();
CleanupStack::PushL(sha);
TPtrC8 data(_L8("The quick brown fox jumped over the lazy dog"));
TBuf8<128> hash = sha->Hash(data);
CleanupStack::PopAndDestroy(sha);
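The surrounding store-and-verify pattern is the same whichever hash you choose. The sketch below uses FNV-1a purely so that it remains self-contained; FNV-1a is not cryptographically strong, and in a real application you would substitute SHA-1 as shown above:

```cpp
#include <cstdint>
#include <string>

// FNV-1a: a stand-in hash used only to keep this sketch self-contained.
// It is NOT cryptographic; substitute SHA-1 (CSHA1) in real code.
std::uint64_t Fnv1a(const std::string& aData) {
    std::uint64_t h = 1469598103934665603ull;
    for (unsigned char c : aData) {
        h ^= c;
        h *= 1099511628211ull;
    }
    return h;
}

// At write time: store aData on the removable drive and keep the
// returned hash on an internal drive.
std::uint64_t SealData(const std::string& aData) { return Fnv1a(aData); }

// At read time: recompute the hash and compare it with the stored one
// before trusting the data.
bool VerifyData(const std::string& aData, std::uint64_t aStoredHash) {
    return Fnv1a(aData) == aStoredHash;
}
```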
Don’t Trust Backup and Restore
Backups are things that we all know should be done regularly and we see them as something that reduces risks rather than creating them. Although they do indeed reduce the risks of data loss or corruption, they add some security risks in return.
The basic purpose of a backup is to copy your data and code from the mobile phone and put it on the user’s PC; however, once it is there it can be examined outside of the normal security constraints. This gives rise to the same sort of risks that removable media presents, but with the added risk that the contents of an internal drive may also be present in the backup.
Data in a backup can be accessed without going through the Symbian OS protection mechanisms while the data is away from the mobile phone.
For backup, there are six different categories into which application data may fit (although not all are likely to apply to a single application):
- There may be some data that it doesn’t make sense to back up or restore under any circumstances:
  - It can be easily recreated (e.g. a cache of downloaded web pages).
  - It is dependent on the particular mobile phone (e.g. the IMEI).
  - It is not meaningful to back up and restore the data (e.g. the total duration of all voice calls made so far).
  - It is dangerous to do so (e.g. the usage credits for your application).
- Some data cannot be safely backed up because it is inherently transient (e.g. a temporary file used for spooling) and may be changing while the backup is in progress. As active applications are usually terminated before a backup starts (and restarted after), this will only apply in unusual cases.
- Some data is never backed up because you don’t want it disclosed under any circumstances. For example, if the user had provided a PIN to allow your financial software package to access their bank account over the Internet, you may decide that it is better to force the user to re-enter it if needed rather than risk its accidental disclosure.
- Some data needs protecting against unauthorized access (e.g. from malware on a PC holding the backup) but does not need protection from the authorized user. Such data might include contacts, call records etc. This data can be protected by encrypting it with a user-provided key and, depending on the particular models of mobile phones that your application is aimed at, this encryption may be done automatically for you by the backup software. If you rely on this, do bear in mind that users are usually not good at choosing strong passwords or PINs.
- Some data is only safe to have backed up provided that it cannot be extracted from the backup. The classic example for this would be an authorization or decryption key. This can be done by protecting it with a key that remains on the mobile phone and again, depending on the model, this may be done for you by the backup software. Of course, this means that the data can only be restored to the same mobile phone it was backed up from. It isn’t possible to have data that can be restored on any arbitrary new mobile phone but cannot be decoded on a PC – if you’re not convinced, remember that the PC can run an emulator of the phone!
- Lastly, and probably most frequently, you may have data that can be backed up for which you need no protection. As an example, you may decide that web bookmarks are not confidential. Others might think that they are which is why you have to choose carefully – security almost always has a cost, in terms of implementation effort, performance penalty, or user inconvenience which should be weighed against the benefit.
The easiest approach is to ensure that all the data in a given file is to be backed up with the same constraints (this is usually the case anyway). The backup process ‘fails safe’ in that it protects your application if you do nothing to handle backups. Unfortunately, it does this by backing up none of your application’s program binaries or private files. You can ensure that your files are backed up by creating a backup_registration.xml file in your private directory. The most straightforward form of this, which will back up all of your files, is:
<?xml version="1.0" standalone="yes"?>
<backup_registration>
    <passive_backup>
        <include_directory name = "\" />
    </passive_backup>
    <system_backup/>
    <restore requires_reboot = "no"/>
</backup_registration>
The directory name ‘\’ specifies the whole private directory of the application; however, you are more likely to want to be selective about the files that get backed up so that you can pick specific files and directories:
<?xml version="1.0" standalone="yes"?>
<backup_registration>
    <passive_backup>
        <include_directory name = "save" />
        <include_directory name = "data" />
        <exclude_file name = "data\count" />
        <exclude_file name = "pid" />
    </passive_backup>
    <system_backup/>
    <restore requires_reboot = "no"/>
</backup_registration>
Note that the <system_backup/> directive causes your program binary files to be backed up as well. Your binaries were in the original SIS file that was installed, so you will not usually want to keep them secret. However, if you do (for example, if they were DRM-protected), remove the directive from the file. If you do this, remember that, if the user loses their mobile phone, the binaries won’t be in the backup when they restore to its replacement, and they will need to keep the original SIS file in order to reinstall the application.
If your data protection requirements are stronger, or you want to back up selected parts of files rather than whole files, you can use an active backup, where your application passes data of your choosing, packaged with whatever protection you decide to implement. You can do this by providing an implementation of MActiveBackupDataClient, using it to create a CActiveBackupClient, and specifying <active_backup> in your registration file.
A useful way of getting the benefits of an active backup without having to write too much code is to register for both an active and a passive backup. When the backup engine starts, your application will be asked to provide an active backup. Instead, your application can use the opportunity to extract the particular data that you want backed up into a suitably protected file, and return no data for the active backup. The passive backup can then back up the specially created file.
The problems with restore are similar to those of writable removable media. However, you do have one extra pair of defenses. If you have an active backup you can filter the data on restore and if you have a passive backup only the directories (or files) listed in your backup registration file will be restored, so you can check these before, perhaps, moving the contents to a different location. Remember that if you use a more generic registration file (as in the first example above) rather than a more specific file (as in the second example), you should be alert for the appearance of extra files from a modified backup.
Lastly, when deciding your backup and restore policy, don’t forget any data that you may have in the central repository or the DBMS server.
Don’t Trust Import Directories
Although other processes cannot write to your application’s private directory (with the exception of processes with AllFiles as noted above), if you have a private directory with a sub-directory called import (e.g. \private\13579BDF\import) then the native software installer will allow other SIS files to deliver files into that directory. If you expect to make use of this, then you should bear in mind that the files could come from any unverified, untrusted source and may arrive at any time (for example while your application is in the middle of checking the directory). You may wish to move such files to a separate directory before checking them but remember that the copies won’t be deleted if the other SIS file is subsequently uninstalled.
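A simple first line of defense is to validate file names before processing anything found in the import directory. This generic C++ sketch (the accepted extension is an illustrative assumption) rejects names that could escape the directory or that the application does not expect:

```cpp
#include <string>

// Sketch: accept only the data files this application expects to find
// in its import directory. The ".dat" extension is an illustrative
// assumption, not a platform convention.
bool IsAcceptableImportName(const std::string& aName) {
    // Reject anything that could point outside the import directory.
    if (aName.empty() || aName.find("..") != std::string::npos)
        return false;
    if (aName.find('\\') != std::string::npos ||
        aName.find('/') != std::string::npos)
        return false;
    // Accept only the expected file type.
    const std::string ext = ".dat";
    return aName.size() > ext.size() &&
           aName.compare(aName.size() - ext.size(), ext.size(), ext) == 0;
}
```

Remember that even a file with an acceptable name still carries untrusted content and must be parsed defensively.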
Be Careful Who You Talk to
No useful application runs without communicating with other processes, even if these are just the file server or the window server. You should ensure that any process with which you are communicating is the one that you expect before sending it confidential data or trusting data from it.
Connecting to Standard Servers
How can you be sure, when you write your data to a file, that you are talking directly to the file server and not another server that is making a copy of the data before passing it on?
Usually you will access the servers provided by Symbian OS through a client library, which is responsible for creating the link with the correct process. The client code connects to the desired server (using RSessionBase::CreateSession) by specifying the server's name. This has been the traditional means of identifying servers in Symbian OS, but it always carried an associated risk: a name such as RandomServer meant only that that particular server had registered the name first when the system started up, not that it was the genuine random number server. The standard Symbian OS servers now have names beginning with '!' to guarantee their identity. The random number server is now securely named !RandomServer, so you can be reassured that you are connecting to what you expect. (If you're using a random number to generate an encryption key, you want to be confident about your source!) What stops another server from also calling itself !RandomServer? If you try it, you'll find that you need the ProtServ capability to register a name beginning with '!'.
Communicating with Other Processes
Earlier we suggested sharing file handles with processes that you trust. Each process has a unique Secure ID (SID) but, if your application has several processes or you are developing a suite of interoperating applications, you may find it easier to use the Vendor ID (VID). Each process has a VID, which is taken from the VID of the EXE from which it was created. You can set this using the VENDORID keyword in the MMP file that you use to compile your application. Any non-zero value requires your application to be signed, and the signing authority will check that no one else uses your VID and that you use no one else's VID. This means that you can be confident that other applications communicating with yours are part of your application suite by checking that their VIDs match your own. You can determine the VID of a process using the RProcess::VendorId API and its SID using RProcess::SecureId. Servers can use similar methods on an RMessage2, which is described in Chapter 5.
Know your Creator
Some applications allow the process that creates them to pass in options or data at the point of creation, using RProcess::SetParameter in the parent process and User::GetTIntParameter or User::GetDesParameter in the child process. If you have a helper application that does this (for example, to exercise capabilities that you don't want to give to the creator process), it should be suspicious of any data that is passed in until it has identified the creating process as one that it trusts. One way to do this is to use User::CreatorSecureId or User::CreatorVendorId. However, a more complete solution is to use the TSecurityPolicy class. Although this class is more commonly used in servers, its CheckPolicyCreator method is ideal for applications to check out their creating processes.
Be Careful in your Error Messages
No one is going to be so foolish as to generate an error such as 'the third character of your password was entered incorrectly' (although there have been successful attacks on passwords that time the error response to determine how many characters were correct). Nonetheless, some error messages can be too informative. For example, if your application provides a search function on a database of people's names, some of which are revealed only when a PIN is entered, displaying the message 'This entry is PIN protected' rather than 'Not found' allows someone with access to the mobile phone to tell whether a particular person's name is in the database without knowing the PIN. Consider the trade-offs you wish to make between being helpful and disclosing too much.
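The PIN-protected name lookup above can be sketched in standard C++. The point is that the absent case and the protected-but-locked case return byte-for-byte identical answers, so an attacker cannot distinguish them. The `Entry` structure and function names are invented for the example.

```cpp
#include <cassert>
#include <map>
#include <string>

// Illustrative sketch: a lookup that returns the same "Not found" answer
// whether the entry is absent or merely PIN-protected, so someone without
// the PIN cannot probe which names exist in the database.
struct Entry { std::string name; bool pinProtected; };

std::string LookupMessage(const std::map<std::string, Entry>& db,
                          const std::string& name, bool pinEntered)
{
    auto it = db.find(name);
    if (it == db.end())
        return "Not found";
    if (it->second.pinProtected && !pinEntered)
        return "Not found";   // do NOT say "This entry is PIN protected"
    return it->second.name;
}
```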
Be Careful with your Log Files
Log files can be useful when developing an application and also help to diagnose problems after your application has been delivered. They are also a common cause of accidental exposure of information. Before you finish your application you should consider:
- whether you still need any logs
- where logs should be stored (and how you expect them to be accessed)
- what information needs to be logged.
For example, it may be useful for an email client to log messages of the form ‘Connected to server at date/time, downloaded 3 messages’. It may be less desirable to record the server name, still less any user name and password, or the sender and recipient email addresses.
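A simple way to enforce such a policy is to build each log line from a whitelist of harmless fields, rather than logging the whole session and trying to strip secrets out afterwards. A minimal standard C++ sketch (the function name is invented for the example):

```cpp
#include <cassert>
#include <string>

// Illustrative sketch: compose the log line only from fields known to be
// harmless. The server name, user name and password are never passed in,
// so they can never reach the log by accident.
std::string MakeLogLine(const std::string& timestamp, int messageCount)
{
    return "Connected at " + timestamp + ", downloaded " +
           std::to_string(messageCount) + " messages";
}
```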
Check your Inputs
As well as being cautious about where your input data comes from, it’s important to check the actual data provided. Failure to check input data is the single biggest cause of technical security weaknesses in computing. Symbian OS is less prone to attacks such as buffer overflow than many other systems through its use of descriptors, but care is still needed.
Decide When to Check
The golden rule with input (particularly user input) is ‘always check before use’. For efficiency it is best to check only once and this usually means placing the checks at the point of input. It is all too easy to assume that a later check is unnecessary because it was performed earlier and vice versa, so try to have all your checks in one place, where possible, and make sure that any exceptions to this are documented in all the places where the checks are done.
The main exception to the ‘check once at input’ guideline is when the checks that you are performing are against things that might change. For example, a file that might not exist when you perform the check may have appeared by the time that you come to use the filename, or a future time for an appointment may be in the past by the time that the user has provided the appointment details. Of course, your application should be robust against such things anyway but it is worth minimizing the window of risk.
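The appointment example can be made concrete: validate the time against a caller-supplied clock both when it is entered and again when it is committed, since the user may take minutes to finish entering details. This is a standard C++ sketch with invented function names; passing the current time in explicitly also makes the check easy to test.

```cpp
#include <cassert>
#include <ctime>

// Illustrative sketch: a time that was in the future at the input check
// can be in the past by the time the appointment is committed.
bool IsInFuture(std::time_t when, std::time_t now) { return when > now; }

// Re-check at the point of use; never rely on the earlier input check.
bool CommitAppointment(std::time_t when, std::time_t commitNow)
{
    return IsInFuture(when, commitNow);
}
```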
It is also important to ensure that your checks cannot be bypassed. If you cross a process or DLL boundary, consider whether something may subvert the data on the way. You should ensure that there is no way that the subsequent pieces of your code can be reached without going through the checking code.
Decide What to Check
Few applications need to check all of their data and few need to check none of it. Most applications are in the dangerous middle ground where you, the application author, must decide what needs checking and what is safe to ignore. A financial application may filter the numeric values that it reads, but should it also check the length of a filename? If the file contains a graphical image, might it be corrupt?
Decide How to Check
The checks that you make depend on the semantics of the data of your particular application. However, the following list may help:
- Are unbounded pointers used when descriptors should be used instead?
- Are any data items allocated to be 'big enough'? Is this maximum sized for the largest non-malicious use, and could maliciously constructed input overflow it?
- Might the assumptions for size requirements change in the future? Will the code fail securely if these change?
- Are limit checks applied in both directions? For example, an integer count may be zero (or negative) as well as too large, and an absolute filename may be too short as well as too long.
- Are you making assumptions about which values the user will use, such as assuming that only the ASCII subset of Unicode will be used?
If you have the opportunity, letting the user select from a set of values that are known to be valid removes the risk. For example, letting the user pick a date from a displayed calendar ensures that only displayable dates can be selected. Do be cautious that subsequent selections or input may make the original input invalid (for example, selecting ‘January’ from a list of months followed by ‘31’ from a list of days followed by a reselection of ‘February’).
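The month/day reselection problem above shows why the *combination* of inputs must be re-validated after every change, even when each value was picked from a list of individually valid options. A standard C++ sketch of such a combined check (the function name is invented for the example):

```cpp
#include <cassert>

// Illustrative sketch: January 31 is valid, but if the user then reselects
// February the stale day value must be caught by re-validating the whole
// date. February is adjusted for leap years.
bool IsValidDate(int year, int month, int day)
{
    if (month < 1 || month > 12 || day < 1)
        return false;
    static const int daysIn[] = {31,28,31,30,31,30,31,31,30,31,30,31};
    int max = daysIn[month - 1];
    bool leap = (year % 4 == 0 && year % 100 != 0) || (year % 400 == 0);
    if (month == 2 && leap)
        max = 29;
    return day <= max;
}
```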
Take Special Care with Filenames
With the advent of platform security you may have noticed that many of the Symbian OS APIs now expect to be passed file handles rather than filenames. This means that the client of the API has to open the file itself, confirming to the API provider that the client has legitimate access to the file. Where possible, you should follow the same approach when accepting files across process boundaries; this is particularly important if your process has more sensitive capabilities. Incidentally, remember that the receiver of a file handle can discover the name of the file (using RFile::FullName), so don't put information in filenames that you don't want to leak.
More often, though, you will be in the unfortunate position of having to handle filenames, and these require special consideration for two reasons. The first problem is determining the canonical name of a file. Depending on the circumstances, a filename may be relative to a current directory, may contain special sequences such as '.', '..' or '*', and may be on a substituted drive. Selection from a list (e.g. a file browser) is the easiest way of avoiding this problem, but be wary of filenames read from files that may have been tampered with (simply rejecting any unexpected sequences may suffice in this case).
The second problem with filenames is that many applications need to discriminate between files that exist to help the application work (e.g. a license key for an image editor) and files that exist for the application to work on (e.g. an image for an image editor). The names of the first group of files are often hard-coded into your application; the names of the second group are frequently user selected. Allowing that selection to encompass a file from the first group may well be dangerous (as an extreme example, consider a word processor that allowed the reading and modification of a file containing a license expiry date). You may choose to protect against this by using a naming convention such as file extensions or by putting all of the files that support your application in a sub-directory with a name that you filter out.
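The support-file/working-file distinction can be enforced with a simple filter applied before any user-selected file is opened as a document. This standard C++ sketch uses an invented support sub-directory name (`appdata`) and reserved extension (`.lic`); a real check would first canonicalize the name (case, drive, '..' sequences, substituted drives) as discussed above.

```cpp
#include <cassert>
#include <string>

// Illustrative sketch: refuse to open, as a user document, any file that
// lives in the application's own support sub-directory or that carries
// the extension reserved for support files.
bool IsSupportFile(const std::string& path)
{
    return path.find("\\appdata\\") != std::string::npos ||
           (path.size() >= 4 &&
            path.compare(path.size() - 4, 4, ".lic") == 0);
}

bool MayOpenAsDocument(const std::string& path)
{
    return !IsSupportFile(path);
}
```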
Why Abnormal Situations are Often Exploited
There are many error conditions and unusual events that could occur during the execution of an application. For an application to be secure, it not only needs to handle such events but also needs to handle them securely.
Code is frequently designed, written, reviewed and tested assuming normal, successful execution, with abnormal conditions and error handling added as an afterthought or even omitted altogether. When present, such handling is often little more than ensuring that all allocated data is on the CleanupStack if an out-of-memory Leave occurs. This means that the behavior of the application in unusual conditions has often not been considered and, because of this, creating unusual circumstances is a popular line of attack for a malicious user or malicious code. You may have dismissed some combinations of events as too improbable to worry about but, once a successful attack has been discovered, many minds will be ensuring that that particular combination happens on demand.
What Events may Occur?
Symbian OS is designed for limited resource devices and as such it is normal practice for code to be written with an expectation that memory allocations might fail. Code is less often written assuming that power may fail at any time or that the mobile phone may be rebooted. Even though the battery may have a full charge, a user might remove it at the least convenient time, for example to put it in a charger, giving your application little or no time to clean up.
Other events may happen during run-time and are usually indicated by error codes being returned from the system APIs. Ignoring error codes is a common practice among programmers but it implies that the programmer knows that the call can never fail (which may be true but only rarely) or that they don’t care if it fails (which again is rarely the case) or that they are hoping to be lucky. If you are that confident in your luck, you should consider a career trading stocks rather than writing software! If you’re not that confident, check your error values.
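The habit being recommended is to check and propagate the result of *every* call, including cleanup calls. This standard C++ sketch returns 0 on success and a negative code otherwise, in the spirit of Symbian's KErrNone convention; the file format, function name and error values are invented for the example.

```cpp
#include <cassert>
#include <cstdio>
#include <string>

// Illustrative sketch: reading a "remaining credits" file can fail at any
// point (media card removed, file missing, corrupt contents), so every
// call's result is checked and reported rather than ignored.
int ReadCredits(const std::string& path, int& credits)
{
    std::FILE* f = std::fopen(path.c_str(), "r");
    if (!f)
        return -1;                     // e.g. file absent or media removed
    int rc = (std::fscanf(f, "%d", &credits) == 1) ? 0 : -2;
    if (std::fclose(f) != 0 && rc == 0)
        rc = -3;                       // even close can fail; don't hide it
    return rc;
}
```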
Some errors are easily produced by an untrained user, such as removing a media card at the wrong point (for example, when your application is about to update the 'remaining credits' file). Others may need specific software to exploit them (for example, grabbing a semaphore at the wrong time, crashing your server, or deleting or creating public files). It may take a skilled programmer to produce software that creates such problems for your application but, once that software exists and lets attackers break into your application, many copies of it will be made.
Handling Unusual Events
Some errors may be handled by terminating the application. If you take this option, you should ensure that before terminating, your application always leaves a secure, consistent environment. A secure environment means that all sensitive data is caged and any resources are tidied up. In particular you should look out for any resources such as network connections that may be reused by other processes, particularly if your application has already provided the authentication or authorization to use them. A consistent environment means that your application can be run successfully when restarted without being confused by the state it finds.
Always leave your external state (especially filestore) in a form that is safe if your application suddenly terminates.
If your application implements some form of roll-forward or rollback on restart, be particularly careful that the list of operations to be rolled forward or back is secured!
For most errors, your application will continue to execute. The chief risk here is that subsequent code may assume that the previous code has executed successfully. For example, code that loops, prompting the user for a password, but exits the loop during an ‘out of memory’ error does not mean that the password was correctly entered.
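The password-loop pitfall can be avoided by making the loop report *why* it exited, so code after the loop cannot mistake an error exit for successful authentication. A standard C++ sketch with invented names; the input source is injected so that error paths can be exercised:

```cpp
#include <cassert>
#include <functional>
#include <string>

// Illustrative sketch: callers must not treat "left the loop" as
// "password correct"; an out-of-memory or I/O error leaves the user
// unauthenticated.
enum class AuthResult { Ok, WrongPassword, Error };

AuthResult PromptForPassword(const std::function<int(std::string&)>& readLine,
                             const std::string& expected, int maxTries)
{
    for (int i = 0; i < maxTries; ++i)
    {
        std::string attempt;
        if (readLine(attempt) != 0)
            return AuthResult::Error;   // failure is NOT success
        if (attempt == expected)
            return AuthResult::Ok;
    }
    return AuthResult::WrongPassword;
}
```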
Summary
This chapter has explained how to build up a clear view of what you want to protect, why you want to protect it, how you are going to protect it and who you are protecting it from.
First we covered how to analyze the security threats your application may face, based on consideration of who will need your application to be trustworthy, what they expect to be protected (including the need to avoid disclosing inadvertently the data of other applications), and who might want to attack your application’s security.
Then we discussed the steps that can be taken to address those threats, including structuring your application with security in mind, using capabilities responsibly, and protecting data. This includes consideration of the risks of removable media, of backup and restore operations, and of other ways that your data may be inadvertently exposed. Finally, we covered some tips on secure implementation, including the importance of verifying both input data and its source, and of considering the unexpected situations that your application might have to deal with. Ultimately, you are the person who knows your application best. When designing, think about what to protect. When coding, always think 'what if...?' And when you think that you've finished, always ask yourself 'how would I exploit this?'
© 2010 Symbian Foundation Limited. This document is licensed under the Creative Commons Attribution-Share Alike 2.0 license. See http://creativecommons.org/licenses/by-sa/2.0/legalcode for the full terms of the license.
Note that this content was originally hosted on the Symbian Foundation developer wiki.