
Mobile Application Security Problems as a Result of Insufficient Developer Attention

May 27, 2018 by David Balaban

In the second half of 2017, developers uploaded about 2,800 applications to Google Play every day. Each of these applications handles a certain amount of data that is stored locally or transmitted over cellular and Wi-Fi networks.

It is obvious that mobile application data is a key target for malefactors: not only do they steal data, but they also manipulate it in their own interests. This gives rise to a range of problems, such as fake and alternative (often unreliable) applications, malware, data leakage, poorly protected data or data-protection errors, and a variety of tools for accessing and decrypting data.

How Developers Can Affect Security

There are many different opinions about a developer's impact on security.

  1. The developer does not always respond promptly to security reports that point out shortcomings in the application.
  2. The developer has done everything in their power to ensure security, so any problems that arise are the user's fault. This attitude can persist until the situation is publicized, often with the involvement of journalists.
  3. It is time to stop blaming developers for all the security sins. In other words, developers are not the only ones involved in creating an application or product; there are also testers and product managers. Each of them performs a role, and together they are supposed to ensure the overall security of the developed product. This opinion was published in an article about a year ago. The idea of splitting responsibility according to roles was repeated several times throughout the piece and was complemented by the conclusion that security problems always appear when someone develops a prototype into a commercially successful product. Yet a split of functions implies a division of responsibility, and, sadly, claims regarding the security of an application have a hard time finding their final destination.

Such opinions shift the focus away from security, and developers do not feel responsible for their work until information about the problems hits the press.

Developers have always been, and will remain, the first to deal with an application: they are the ones who can immediately fix problems in it and make it more secure. Being proactive at an early stage also helps developers maintain a sufficient level of security. However, users should keep in mind that some circumstances are beyond the developers' control, and time frames for troubleshooting may differ greatly. Even if a developer just wants to update the application, they create a new build, send it to the app store, and wait for it to be approved by the moderators, which may take several days. Then users have to wait for the update if automatic updates are turned off on their devices. Thus, a developer only partly controls the process. The question arises: is it worth cultivating and encouraging such delays? That would be counterproductive, because the developer, who knows this situation better than anyone, can plan ahead (for example, the introduction of new functionality and error fixes) and in general can keep a checklist of what must be done on time with each update.

It looks like everyone follows best practices, but a closer look reveals nothing but trouble.

People have many needs (personal and professional), two or more devices and, as a result, more than one service provider and dozens of applications to meet those needs. Despite such flexibility and variety, all this software has many common features. For example, the same technologies for storing, transferring, and protecting data, even with different implementations, may have similarities at the architectural level.

As for security mechanisms, some of them already work by default and do not require developer involvement, while others do. For example, SSL/TLS, which is used in many applications, is vulnerable to man-in-the-middle (MITM) attacks when it is implemented incorrectly. Meanwhile, iOS 10 (and higher) and Android 7 (and higher) already have mechanisms for preventing MITM attacks, which also help to avoid government interception of data in countries such as Kazakhstan and Thailand.

These mechanisms are enabled by default regardless of the developer's actions, although they can still be implemented in slightly different ways. In addition, Android has a completely independent mechanism, while iOS has a managed one that makes it possible to allow third-party SSL certificates.
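As a rough illustration, the sketch below (which assumes the OkHttp library; it is not named in this article) shows how a client that relies on the platform's default certificate validation keeps these protections intact, and how the typical "incorrect implementation" disables them.

```kotlin
import okhttp3.OkHttpClient

// A default client relies on the platform's own certificate and hostname checks,
// so the OS-level MITM protections described above remain in effect.
val safeClient = OkHttpClient()

// The typical "incorrect implementation" looks like the commented-out code below:
// a trust-all hostname verifier (or TrustManager) that silently disables
// certificate validation and re-opens the door to MITM attacks.
// import javax.net.ssl.HostnameVerifier
// val unsafeClient = OkHttpClient.Builder()
//     .hostnameVerifier(HostnameVerifier { _, _ -> true })  // never ship this
//     .build()
```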

In addition to network problems, there are also problems with locally stored data (the internal memory of mobile devices). Most applications actively use local storage for caching data, optimizing network usage, and providing quick access to user data (conversations, files, multimedia, etc.). Many of these applications store user data with poor protection or no protection at all; for example, a private key may be located right next to the encrypted data.
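One way to avoid keeping the key next to the data, sketched below under the assumption of an Android app, is to generate the key inside the Android Keystore (API 23+), so the key material never ends up in the application's local storage.

```kotlin
import android.security.keystore.KeyGenParameterSpec
import android.security.keystore.KeyProperties
import javax.crypto.KeyGenerator

// Generate an AES key inside the Android Keystore; the key material never
// leaves the keystore, so it is not stored next to the data it protects.
// The alias name is illustrative.
fun createCacheEncryptionKey(alias: String = "local_cache_key") {
    val keyGenerator = KeyGenerator.getInstance(
        KeyProperties.KEY_ALGORITHM_AES, "AndroidKeyStore"
    )
    keyGenerator.init(
        KeyGenParameterSpec.Builder(
            alias,
            KeyProperties.PURPOSE_ENCRYPT or KeyProperties.PURPOSE_DECRYPT
        )
            .setBlockModes(KeyProperties.BLOCK_MODE_GCM)
            .setEncryptionPaddings(KeyProperties.ENCRYPTION_PADDING_NONE)
            .build()
    )
    keyGenerator.generateKey()
}
```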

There are also specialized forensic solutions in which the results of studying these problems are turned into programmatic methods for accessing application data and separately stored user data. The presence of various security problems often makes such data retrieval easy and convenient.

Illustrative case-study problems

Unsafe data storage

A large number of applications that store user accounts, geolocation, or banking information use the plain-text approach. For many people, this may mean uninstalling the application as soon as the fact becomes widely known.

Mixing data between applications (each application may hold different data) usually means there is a likelihood of compromising additional user accounts. For example, an application that is not a social network may offer authorization via social networks, and the account data involved (such as the login and an OpenID token) may be stored in clear text.
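A minimal sketch of storing such a token encrypted rather than in clear text is shown below; it assumes an Android app and the Jetpack Security library (androidx.security:security-crypto), and the file and key names are illustrative.

```kotlin
import android.content.Context
import androidx.security.crypto.EncryptedSharedPreferences
import androidx.security.crypto.MasterKey

// Store an authorization token in an encrypted preferences file instead of
// plain-text SharedPreferences.
fun storeAuthToken(context: Context, token: String) {
    val masterKey = MasterKey.Builder(context)
        .setKeyScheme(MasterKey.KeyScheme.AES256_GCM)
        .build()

    val prefs = EncryptedSharedPreferences.create(
        context,
        "auth_prefs",  // file name is illustrative
        masterKey,
        EncryptedSharedPreferences.PrefKeyEncryptionScheme.AES256_SIV,
        EncryptedSharedPreferences.PrefValueEncryptionScheme.AES256_GCM
    )

    prefs.edit().putString("oauth_token", token).apply()  // values are encrypted on disk
}
```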

Unsafe communication

In many cases, even the biggest app developers do not follow the recommendations for correctly implementing security mechanisms. SSL is the most popular way of securing communication in applications, and every security guide states that certificates, including the root certificate, must be verified to prevent data breaches. However, the lack of proper verification often makes it possible to intercept mobile application traffic.
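One common hardening step beyond default validation is certificate pinning. The sketch below again assumes the OkHttp library; the hostname and the SHA-256 pin are placeholders, not real values.

```kotlin
import okhttp3.CertificatePinner
import okhttp3.OkHttpClient

// Pin the expected public-key hash for a host; connections whose certificate
// chain does not contain a matching key are rejected.
val certificatePinner = CertificatePinner.Builder()
    .add("api.example.com", "sha256/AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=")
    .build()

val pinnedClient = OkHttpClient.Builder()
    .certificatePinner(certificatePinner)
    .build()
```

Pins have to be rotated together with the server certificates they match, which is one reason this step is often skipped in practice.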

Data Leaks

Many popular mobile applications, especially games, often collect a huge amount of personal data for profiling the device owner and personalizing the experience. This includes data such as age, gender, geolocation, social network activity, habits (favorite routes, visited locations, musical and culinary preferences), and much more. In this case, there is a possibility that at least two or three applications may disclose information about the user. Moreover, collecting such information often violates the requirements of many regulations and security recommendations, considering the goals and functions of the application. The problem is aggravated by the fact that user agreements often fail to reflect the essence of the protection mechanisms or the protected data, or contain inaccuracies. Moreover, hardly anyone reads them.

It is very important for users to limit applications' access to their personal data using operating system controls, to check permissions during installation, and to read the descriptions in the app markets as well as the reviews of security specialists.
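For reference, the runtime permission model (Android 6.0+) is what gives users this control: an application has to ask before touching sensitive data, as in the rough sketch below (the permission and request code are illustrative).

```kotlin
import android.Manifest
import android.app.Activity
import android.content.pm.PackageManager
import androidx.core.app.ActivityCompat
import androidx.core.content.ContextCompat

// Check whether location access has been granted and, if not, ask the user.
// The user can refuse here or revoke the permission later in system settings.
fun requestLocationIfNeeded(activity: Activity, requestCode: Int = 100) {
    val granted = ContextCompat.checkSelfPermission(
        activity, Manifest.permission.ACCESS_FINE_LOCATION
    ) == PackageManager.PERMISSION_GRANTED

    if (!granted) {
        ActivityCompat.requestPermissions(
            activity,
            arrayOf(Manifest.permission.ACCESS_FINE_LOCATION),
            requestCode
        )
    }
}
```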

Third-party code

Using third-party libraries helps speed up development and lets you take advantage of other developers' expertise in your own application. However, that expertise brings along all of its problems too, including security problems. This practice can effectively split protection into two or more parts: the same piece of data, for example a password, may end up in several places (in different user scenarios) with different levels of protection, as happens, for example, with Facebook.

Deferred security fix

Publishing an application to the Play Market does not mean that the development cycle is over. On the contrary, developers are only beginning to add new features, fix security bugs, inadvertently introduce new security errors, remove old functions, and so on, which allows attackers not only to detect but also to effectively exploit the problems that arise.

Several examples of problematic application versions from the same period:

Starbucks application and password storage in clear text

Panda SM Manager iOS Application – MITM SSL Certificate Vulnerability

Indirect leakage from the Tinder application

Mobile Data Leakage Report

Data leakage through ad trackers

In the long run

Any application can be reliable, weak, or unreliable at any given time. Application updates also undermine the idea of getting "fixes" with each new version (the new version may be less secure than the previous one), and for some applications updating is mandatory. In practice, actual security problems continue to be ignored.

As a result, security vulnerabilities appear that allow malefactors to compromise users again and again. It would seem that the more experienced the development team, the fewer such problems should show up in the final product. However, having studied popular applications, we do not see that. This is because teams focus on a working product as the primary measure of progress, i.e., a stable, working product is the indicator of success. At the same time, developers can still create reliable applications and keep them secure and up to date by using the right tools and sets of requirements.
