The increase in the use of mobile applications comes with an increase in hackers targeting them. A Symantec survey found that 1 in 36 devices had high-risk apps installed. Mobile applications are seen by hackers as a rich source of consumer data. In addition, valuable IP may be vulnerable through apps, either within the app itself, or via connected databases.
Although there are services or toolkits that purport to make app development cheap and/or simple, if you value your reputation or customer data, it is best to proceed with caution and prioritize app security.
Securing mobile apps is similar to securing desktop applications; it is, however, often more complex, as mobile apps frequently rely on external connectivity to back-end server systems, increasing the potential attack surface.
Here, we’ll look at some of the key areas to consider to protect your mobile apps against cyberattacks.
Securing the connection to your backend server
For apps that rely on data from back-end systems via API calls, it is critical that HTTPS is used for all connections. It is equally important that your TLS configuration is actually secure: do not, for example, install a trust manager or SSLSocketFactory that accepts any certificate, and never disable hostname verification. (Android's documentation specifically warns against insecure SSLSocketFactory usage.)
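One way to go beyond default certificate validation is certificate pinning: the app ships with the fingerprint of the server certificate it expects and rejects everything else. The sketch below (class name and the EXPECTED_PIN placeholder are ours, not from any particular library) shows the idea with a custom X509TrustManager; a production app would normally pin the public key rather than the whole certificate, and combine pinning with standard chain validation.

```java
import java.security.MessageDigest;
import java.security.cert.CertificateException;
import java.security.cert.X509Certificate;
import javax.net.ssl.X509TrustManager;

// Sketch of certificate pinning: trust only the certificate whose SHA-256
// fingerprint we ship with the app. EXPECTED_PIN is a placeholder; compute
// the real value from your server's certificate before using this.
public class PinnedTrustManager implements X509TrustManager {
    static final String EXPECTED_PIN = "<your-server-cert-sha256-hex>";

    // Hex-encoded SHA-256 of the DER-encoded certificate bytes.
    public static String fingerprint(byte[] certDer) throws Exception {
        byte[] digest = MessageDigest.getInstance("SHA-256").digest(certDer);
        StringBuilder sb = new StringBuilder();
        for (byte b : digest) sb.append(String.format("%02x", b));
        return sb.toString();
    }

    @Override
    public void checkServerTrusted(X509Certificate[] chain, String authType)
            throws CertificateException {
        try {
            if (!fingerprint(chain[0].getEncoded()).equals(EXPECTED_PIN)) {
                throw new CertificateException("Certificate pin mismatch");
            }
        } catch (CertificateException e) {
            throw e;
        } catch (Exception e) {
            throw new CertificateException(e);
        }
    }

    @Override
    public void checkClientTrusted(X509Certificate[] chain, String authType)
            throws CertificateException {
        throw new CertificateException("Client authentication not supported");
    }

    @Override
    public X509Certificate[] getAcceptedIssuers() {
        return new X509Certificate[0];
    }
}
```

Note the contrast with a trust-all SSLSocketFactory: this trust manager fails closed, throwing on anything that does not match the pin.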
Also, because HTTP basic authentication is now considered insecure, REST APIs should be protected with token-based authentication such as OAuth2 access tokens (commonly issued as JWTs). When user account data is accessed, these tokens must be obtained through a secure, interactive login with the user. For mobile apps, the most appropriate OAuth2 flow is the Authorization Code grant with PKCE. Don't store client secrets in your app code; they are easily extracted from a shipped binary.
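The PKCE part of that flow is straightforward to implement with the standard JDK. As a sketch (class and method names are ours), the code below generates a high-entropy code_verifier and derives the code_challenge using the S256 method defined in RFC 7636; the verifier stays on the device and is sent only when exchanging the authorization code for tokens.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.security.SecureRandom;
import java.util.Base64;

// PKCE helper per RFC 7636: a random code_verifier plus its S256 challenge.
public class Pkce {
    private static final Base64.Encoder B64URL =
            Base64.getUrlEncoder().withoutPadding();

    // 32 random bytes -> 43-character base64url string (RFC minimum is 43).
    public static String newCodeVerifier() {
        byte[] bytes = new byte[32];
        new SecureRandom().nextBytes(bytes);
        return B64URL.encodeToString(bytes);
    }

    // S256 method: code_challenge = BASE64URL(SHA-256(ASCII(code_verifier))).
    public static String codeChallenge(String verifier) {
        try {
            byte[] digest = MessageDigest.getInstance("SHA-256")
                    .digest(verifier.getBytes(StandardCharsets.US_ASCII));
            return B64URL.encodeToString(digest);
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e); // SHA-256 is always available
        }
    }
}
```

The challenge goes in the initial authorization request; because only the hash leaves the device up front, an attacker who intercepts the authorization code cannot redeem it without the original verifier.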
For any professional application (mobile, desktop or web) it should be considered essential to perform a source code analysis using a third-party system. (Examples include Checkmarx and Appknox.) Such analysis will often reveal vulnerabilities that you would otherwise miss, especially in third-party or open-source libraries.
External code review is much more likely to reveal vulnerabilities compared to in-house reviews and should be considered a fundamental part of a professional development process.
Any sensitive data must be protected against unauthorized access. Credentials and other secrets, if stored at all, belong in the platform's secure store (the iOS Keychain, backed by iCloud Keychain, or the Android Keystore) rather than in less secure locations such as plist files or shared preferences. Any other user or sensitive data requiring local storage should always be encrypted.
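On Android, the encryption key itself should be generated inside the Android Keystore so it never touches app storage; as a platform-neutral sketch of just the encryption step (class name is ours), the code below encrypts a blob with AES-GCM, prepending the random IV to the ciphertext so decryption needs only the key and the stored bytes.

```java
import java.security.SecureRandom;
import java.util.Arrays;
import javax.crypto.Cipher;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

// Encrypt locally stored data with AES-GCM (authenticated encryption).
// Layout of the stored blob: [12-byte IV][ciphertext][16-byte GCM tag].
public class LocalCrypto {
    private static final int IV_LEN = 12;    // 96-bit IV, recommended for GCM
    private static final int TAG_BITS = 128; // 16-byte authentication tag

    public static byte[] encrypt(SecretKey key, byte[] plaintext) throws Exception {
        byte[] iv = new byte[IV_LEN];
        new SecureRandom().nextBytes(iv);    // fresh IV per encryption
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(TAG_BITS, iv));
        byte[] ciphertext = cipher.doFinal(plaintext);
        byte[] blob = Arrays.copyOf(iv, IV_LEN + ciphertext.length);
        System.arraycopy(ciphertext, 0, blob, IV_LEN, ciphertext.length);
        return blob;
    }

    public static byte[] decrypt(SecretKey key, byte[] blob) throws Exception {
        GCMParameterSpec spec =
                new GCMParameterSpec(TAG_BITS, Arrays.copyOfRange(blob, 0, IV_LEN));
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.DECRYPT_MODE, key, spec);
        return cipher.doFinal(Arrays.copyOfRange(blob, IV_LEN, blob.length));
    }
}
```

GCM is a good default here because it authenticates as well as encrypts: tampering with the stored blob makes decryption fail rather than silently return garbage.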
Although code libraries should be kept up to date to pick up security patches, be wary of blindly updating open-source dependencies. There have been incidents in which malware was injected into an open-source library and then pulled in, undetected, by automated builds configured to always use the latest version.
Therefore, for security, pin your build process to known-good versions of open-source libraries and monitor for security updates to apply deliberately. (This reinforces the point about incorporating third-party code analysis into your build process.)
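In a Gradle-based Android build, for example, pinning means declaring exact dependency versions and, optionally, making the build fail if any dynamic version slips in. A minimal sketch (okhttp is just an illustrative dependency):

```groovy
dependencies {
    // Pin an exact, audited release; avoid '+' or 'latest.release' versions.
    implementation 'com.squareup.okhttp3:okhttp:4.12.0'
}

configurations.all {
    resolutionStrategy {
        failOnDynamicVersions()   // reject '+'-style version ranges
        failOnChangingVersions()  // reject SNAPSHOT-style versions
    }
}
```

With this in place, a library update only happens when a developer deliberately bumps the version string and the change goes through review.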
Avoid using passcodes for authentication to your application
Where possible, use a biometric login (fingerprint or face recognition); according to Apple, these are generally more secure than passcodes. Biometric APIs are now widely available in both the iOS and Android SDKs.
To help users spot suspicious activity, always notify them when their account is signed into from a new or unknown device. This also goes a long way toward assuring users that you take their security and privacy seriously.
Cloud audit logs can be of great benefit when tracking down suspicious activity from hacked applications or misused or unauthorized attempts to use your application’s APIs.
Development or debug code
You must absolutely ensure that there are no test credentials, keys or certificates, nor any debug or test code left in your production application. These are perfect targets for hackers but are, surprisingly, often overlooked.
Make reverse-engineering difficult
Reverse engineering is a common way for attackers to learn how an app operates. Several countermeasures can be applied here; debugger detection is one of the most common. The application embeds debugger-detection checks at various points and, when one trips, alters its normal behavior: refusing to communicate correctly with the server, declining to store data, and so on.
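On Android the usual check is android.os.Debug.isDebuggerConnected(); since that API only exists on-device, the sketch below (class and method names are ours) illustrates the same idea on a plain JVM by looking for an attached JDWP debugging agent in the runtime's startup arguments.

```java
import java.lang.management.ManagementFactory;

// Debugger-detection sketch. On Android you would instead call
// android.os.Debug.isDebuggerConnected(); on a desktop JVM a debugger is
// attached via the JDWP agent, which appears in the JVM's input arguments.
public class DebugCheck {
    public static boolean debuggerLikelyAttached() {
        for (String arg : ManagementFactory.getRuntimeMXBean().getInputArguments()) {
            if (arg.contains("jdwp")) {
                return true; // e.g. -agentlib:jdwp=transport=dt_socket,...
            }
        }
        return false;
    }
}
```

An app would sprinkle calls like this through several code paths, not just one, so that stripping a single check does not defeat the protection.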
Another deterrent involves performing critical tasks on your server. For example, performing digital signing or encryption on the server would avoid storing keys in the application.
Coding in C++ can make your application more resistant to reverse-engineering than using Java, which can be relatively easy to decompile. Be aware too that some cross-platform development tools (e.g., Xamarin) may create an application that is easy to decompile, especially on the Android platform.
Filter user input
User input may end up embedded in a database query or otherwise create an opening for malicious code injection. Always validate and filter user input, restricting the allowed characters as appropriate, and use parameterized queries on the server side rather than building query strings by concatenation.
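The safest filtering approach is an allow-list: accept only the characters a field legitimately needs, rather than trying to enumerate dangerous ones. A minimal sketch (class name and the example username policy are ours):

```java
import java.util.regex.Pattern;

// Allow-list input validation: accept only expected characters and lengths.
public class InputFilter {
    // Example policy: 3-32 characters, letters, digits, underscore, dot, dash.
    private static final Pattern USERNAME =
            Pattern.compile("^[A-Za-z0-9_.-]{3,32}$");

    public static boolean isValidUsername(String input) {
        return input != null && USERNAME.matcher(input).matches();
    }
}
```

Anything containing quotes, semicolons or whitespace fails the pattern outright, so classic injection payloads never reach the query layer.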
Making mobile apps resistant to hacking is a major task. It should be among the first considerations when initially working out the function and architecture of any app, and plenty of time and resources should be allocated to it. Companies like LaLiga have discovered, to their cost, what can happen if you don't take this seriously: LaLiga was fined $280,000 for privacy violations caused by its app.
By using security by design as a remit in app development, you can create apps that are useful, fun and secure.
- Symantec Security Center, Broadcom
- SSLSocketFactory, developers.android.com
- About Touch ID advanced security technology, support.apple.com
- LaLiga fined $280K for soccer app’s privacy-violating spy mode, TechCrunch