Introduction

Cloud computing and mobile applications are radically changing the way we do business. Enterprises are building applications more rapidly than ever before, often using Agile development processes and then expanding their internal development programs with third-party software, open-source libraries, and components, each of which adds to the overall threat exposure.

An application or software “vulnerability” is fundamentally a flaw, loophole, or weakness in the application that leads it to process critical data insecurely. By exploiting these vulnerabilities, cyber-criminals can gain access to an enterprise system or software and steal confidential data. The most common software vulnerabilities include escalation of privilege, buffer overflow, and input/output validation vectors such as SQL injection, cross-site request forgery (CSRF), and cross-site scripting (XSS).

Securing software and applications is among the major challenges faced by the industry. Still, in the eyes of many software developers, security is an impediment and a roadblock to the overall development process.

Every software program or application has its own development lifecycle, which encompasses the following phases: initiation, development or acquisition, implementation, operations, maintenance, and disposal. Collectively, these phases are called the system development lifecycle (SDLC). Each phase has explicit goals and requirements, and each has its own set of security controls to be adhered to and practiced.

What Is a Security Control?

Security controls are technical and administrative defenses and security measures for countering and minimizing the loss or unavailability of services and applications due to vulnerabilities.

Security controls are referenced all the time but are rarely defined. They can be technical or administrative and can be further classified as preventive, detective, or corrective in nature.

  • Preventive controls are used to prevent the threat from coming in contact with the vulnerabilities or loopholes identified within an application or software package.
  • Detective controls are used to identify threats to the application ecosystem; i.e., threats existing in the application or software.
  • Corrective controls primarily focus on mitigating or moderating the effects of the threat being manifested in an application or software.

Security of the Software Environment

During the software development, operation, and maintenance processes, vulnerabilities offer entry points to attack systems, sometimes at a very deep level. Vulnerabilities in Web applications have been frequently used in this manner.

On the other hand, the greatest risk to any system comes from those who are developing the system and the corresponding software. Application and system developers introduce security exposures into the system, either maliciously or unintentionally. Mitigating these types of risks involves security analysis and an investigation of all vulnerabilities identified in the software development life cycle (SDLC) or secure software development life cycle (SSDLC) models. Examining the vulnerabilities for all of the SDLC models is tremendously labor intensive, but some areas are common and crucial to most models: the development team, configuration management practices, and quality assurance practices. Each part of the SDLC emphasizes the difference between secure coding and securing the environment by:

  • reviewing the threat agents and corresponding threat exposure;
  • assessing configuration management’s part in securing the environment;
  • and, finally, discussing methods to monitor for malicious intent.

A few of the major considerations include:

  1. Management framework: environment-wide security program planning and management that provides a security framework for managing risk and continuing the cycle of activity: developing and practicing security policies, assigning responsibilities, and monitoring the adequacy of the entity’s application-related controls.
  2. Access controls that limit or detect access to computer resources, i.e., data, programs, equipment, and facilities, thus protecting these resources against unauthorized modification, loss, and disclosure.
  3. System software controls that limit and monitor access to the powerful programs and sensitive files that
    * control the computer hardware
    * secure applications supported by the system
  4. Service continuity controls to make certain that, when unexpected events occur across services, critical operations continue without interruption.
  5. Segregation of duties: policies, procedures, and an organizational structure established so that no single individual or employee can control key aspects of computer-related operations and thereby conduct unauthorized activities or gain unauthorized access to assets or records.

Security Weakness and Vulnerabilities at the Source-Code Level

The most commonly debated aspect of vulnerabilities at the source code level is protecting an application from deliberate misuse such as buffer overflows and similar flaws. These attack vectors are discovered and exploited time and again at the source code level, which makes defending source code a vital part of application security.

Vulnerabilities at the source code level can be managed in multiple ways. The key source-code vulnerabilities include:

  • Input validation
  • Source code design
  • Information leakage (Inappropriate error handling)
  • API security issues
  • Direct object reference
  • Weak session management
  • Using HTTP GET query strings
  • Service usage

Input Validation

Input validation attacks generally happen when there’s a failure to appropriately validate data at the entry and exit points of the application prior to using the data. This flaw fundamentally leads to almost all of the key vulnerabilities in web applications, including but not restricted to cross-site scripting, injection-based attacks, file system attacks, and even buffer overflows.

Data from external entities should never be trusted and should always be validated. Large-scale, complex applications often have a large number of entry points, which makes it challenging for developers to enforce validation across all of them. This is why it is generally recommended that security controls have a central implementation (including libraries that call external security services). For example, to prevent injection attacks, developers should build a centralized module or function that validates each input parameter against a strict format stipulating exactly which types of input are allowed; a minimal sketch of such a validator follows the list below. Such implementations are highly useful for preventing injection flaws, which probe all the possible forms of input available in the application to determine whether the application adequately validates data prior to using it. Input validation vulnerabilities include but are not restricted to:

  • Cross-site scripting
  • Buffer overflow
  • SQL injection
  • Format bug
  • XPATH Injection
  • Cross-site request forgery
  • LDAP Injection
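
As a concrete illustration of the centralized approach described above, here is a minimal whitelist validator sketched in Python. The field names, patterns, and the validate_input function are hypothetical; real rules depend on the application’s actual inputs, and validation should complement, not replace, parameterized queries and output encoding.

    import re

    # Hypothetical whitelist of expected formats, one strict pattern per input parameter.
    VALIDATION_RULES = {
        "username": re.compile(r"[A-Za-z0-9_]{3,32}"),
        "email":    re.compile(r"[^@\s]+@[^@\s]+\.[A-Za-z]{2,}"),
        "order_id": re.compile(r"\d{1,10}"),
    }

    def validate_input(field, value):
        """Central validation routine: reject anything that does not match
        the strict format defined for the given field."""
        rule = VALIDATION_RULES.get(field)
        if rule is None:
            raise ValueError(f"No validation rule defined for field '{field}'")
        if not isinstance(value, str) or not rule.fullmatch(value):
            raise ValueError(f"Invalid value for field '{field}'")
        return value

    # Validate every parameter at the entry point, before it reaches business
    # logic or a database query (which should itself be parameterized).
    validate_input("username", "alice_01")       # passes the whitelist
    # validate_input("username", "alice'; --")   # raises ValueError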

Buffer Overflow

A buffer overflow occurs when a service or a program attempts to put more data into a buffer than it can hold or when an application attempts to put data into a memory area past a buffer. Buffer overflow errors are characterized by the overwriting of memory fragments of a process which generally must not be modified, intentionally or unintentionally.

A buffer overflow attack is an anomaly wherein a service or request, while writing data to a buffer, overruns the buffer’s boundary and overwrites adjacent memory locations. Buffer overflow attacks most often occur when code operates on fixed-size character buffers.

Buffer overflows can corrupt data, crash a program, or cause the execution of malicious code. From an availability perspective, these attacks generally lead to crashes or put the application or program into an infinite loop.

Executing third-party code: buffer overflows can often be used to execute arbitrary third-party code which, against the program’s implicit security policy, lies outside what the program was intended to execute.
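
Buffer overflows arise mainly in languages with manual memory management, but the defensive principle of checking input length against the destination buffer before copying can be sketched in Python. The buffer size and function below are illustrative only.

    BUFFER_SIZE = 64  # illustrative fixed-size buffer

    def copy_into_buffer(data: bytes) -> bytearray:
        """Copy untrusted input into a fixed-size buffer, rejecting anything
        that would overrun it; in an unsafe language, omitting this length
        check is exactly what allows adjacent memory to be overwritten."""
        if len(data) > BUFFER_SIZE:
            raise ValueError(f"{len(data)} bytes exceeds the {BUFFER_SIZE}-byte buffer")
        buffer = bytearray(BUFFER_SIZE)
        buffer[:len(data)] = data        # bounded copy
        return buffer

    copy_into_buffer(b"short message")   # fits within the buffer
    # copy_into_buffer(b"A" * 1024)      # rejected instead of overflowing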

Privilege Escalation

Privilege escalation takes place when a user gets access to resources and functionality that they are not authorized or generally allowed to access. Such elevation or changes in the authorized access should be prevented by the application. This vulnerability is typically caused by a flaw in the application. The result is that the application executes actions and events with privileges beyond those initially intended by the software developer or system administrator.

The degree of escalation depends on which privileges the attacker holds by default and on what privilege level a successful exploit against a vulnerability can reach. For example, a software design or programming error that allows a user to gain an extra level of privilege after successful authentication at least limits the degree of escalation, because the user is already authorized to hold some privilege. A remote cyber-criminal gaining super-user privileges without any form of authentication represents a much greater degree of escalation and is thus far more severe.

Vertical escalation occurs when it is possible to access resources and features granted to more privileged accounts, e.g., acquiring administrative privileges in an application. Horizontal escalation occurs when it is possible to access resources granted to a similarly configured account at the same privilege level as the attacker, e.g., accessing information belonging to a different user in an online retail portal.
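
The sketch below, using hypothetical users and orders, shows the kind of ownership and role check an application must perform on every request to block horizontal escalation and to keep vertical privilege explicit.

    # Hypothetical in-memory data; a real application would query its own data store.
    ORDERS = {
        101: {"owner": "alice", "total": 59.99},
        102: {"owner": "bob",   "total": 12.50},
    }
    ADMINS = {"carol"}   # vertical privilege must be granted explicitly, never inferred

    def get_order(current_user: str, order_id: int) -> dict:
        """Return an order only if the caller owns it or holds the admin role,
        blocking horizontal escalation (reading another user's data)."""
        order = ORDERS.get(order_id)
        if order is None:
            raise KeyError("Order not found")
        if current_user != order["owner"] and current_user not in ADMINS:
            raise PermissionError("Access denied: not the owner of this order")
        return order

    get_order("alice", 101)     # allowed: alice owns order 101
    # get_order("bob", 101)     # PermissionError: horizontal escalation blocked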

Configuration Management

Configuration management can be considered one of the key tools for securing the environment. It is one of the most important techniques for controlling the overall integrity of the application code and is a significant constituent of a robust security process.

Using inspections as the primary safeguard against development exposures limits the cost savings promised by Agile development practices and does not provide comprehensive protection against a developer wishing to introduce malicious vulnerabilities into the application. In configuration management, the areas with the greatest effect on security are configuration audit and configuration control.

Many of the features essential for configuration control can be delivered through version control tools. Most version control systems permit any individual with authorized access to check out source code without further validation. In a secure environment (i.e., one that follows and carries out leading cyber-security practices), however, a version control tool should integrate with the defect-tracking application and record the identity of each developer who accesses a specific source module or function. This ensures that users access code only according to the privileges associated with their roles in the development effort.

Integrating version control tools with code security analysis tools is useful because it helps secure the code repositories as well. Developers often copy source code from a tested component, investigate the methods another developer used to address a specific issue, or require read access to source modules they are not currently maintaining. This access provides an opportunity for insiders to research how to introduce malicious functions into another source module. By logging all accesses to source modules, a security administrator can monitor access to the source code and enable alerting.

Configuration audits are the other technique for making a development project more secure. Most regulatory agencies require compliance audits to identify the safety measures for critical or high-reliability applications and to provide an independent review of the delivered product.

A configuration audit addresses the requirement of assuring that the delivered product or application does not expose enterprise assets to risk from either software defects or malicious functionality. To increase confidence that delivered software contains neither, security auditors should validate all the security controls implemented within the application. This is particularly significant with interpreted languages, such as Python and other scripting languages, since a defect can eventually permit the entry of malicious code.

Security of Application Programming Interfaces

An application programming interface (API) is a set of procedures, protocols, and tools for building applications. An API specifies how application modules should interact, so securing APIs is one of the most critical success factors for any application.

Generally, when we speak of API security, we are really speaking about a diverse array of security concepts spanning several categories, depending on the API interfaces and the services they provide:

  • Authentication—which assists in reliably identifying a user
  • Authorization—which provides identified user access to precise resources and data
  • Encryption—which hides information to prevent unauthorized access
  • Signatures—which ensure information integrity
  • Vulnerability management—to prevent attacks and damage to consumers or providers

Authentication and Authorization

Authentication and authorization are commonly used together. Authentication is used to dependably determine the identity of a user, while authorization is used to determine what resources and features the identified user should have access to.

On the web, authentication is most often performed via a dialog that prompts for a username and password. For additional security, software certificates, biometrics, and hardware keys may be used. Once the user is authenticated, the system then determines which resources that user is allowed to access.

For APIs, it is common to use some form of access token, which may be obtained through an external or third-party process, e.g., when someone signs up for the API, or through a separate mechanism such as OAuth. The token is passed with each request to the API and is validated by the API before the request is processed, as in the sketch below.
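
The following is a minimal sketch of that validation step in Python, assuming a bearer-token scheme and an in-memory token store; a real deployment would validate tokens against an identity provider or an OAuth introspection endpoint rather than a hard-coded dictionary, and the token value shown is purely illustrative.

    import hmac

    # Hypothetical store of issued tokens; real systems use a database or an
    # OAuth introspection endpoint instead of a hard-coded mapping.
    ISSUED_TOKENS = {"k7Jd93hf0PqX": "client-42"}   # token -> client identity

    def authenticate_request(headers: dict) -> str:
        """Validate the access token passed with each API request and return
        the client identity, or raise before any processing happens."""
        auth = headers.get("Authorization", "")
        if not auth.startswith("Bearer "):
            raise PermissionError("Missing or malformed Authorization header")
        token = auth[len("Bearer "):]
        for issued, client in ISSUED_TOKENS.items():
            # compare_digest avoids leaking token contents via timing differences
            if hmac.compare_digest(token, issued):
                return client
        raise PermissionError("Unknown or revoked token")

    client = authenticate_request({"Authorization": "Bearer k7Jd93hf0PqX"})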

Encryption and Signatures

Encryption is generally used to hide information from those who are not authorized to view it. On the Internet, TLS is often used to encrypt HTTP messages sent and received by web browsers or API clients. A limitation of TLS is that it applies only to the transport layer; data that also needs to be protected at other layers requires separate security solutions and protocols.

Signatures are used to confirm that API requests or responses have not been tampered with in transit. Even if the message itself is unencrypted, it must be protected against modification and should reach its destination intact.

Encryption and signatures are often used in combination: the signature can be encrypted so that only certain parties can validate it, or the encrypted data can be signed to further ensure that the information is neither seen nor modified by unwanted parties. A minimal signing-and-verification sketch follows.
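
Here is a minimal sketch of request signing with an HMAC in Python, assuming a shared secret provisioned out of band; the secret and payload are placeholders, and production systems would also sign a timestamp or nonce to prevent replay.

    import hashlib
    import hmac

    # Placeholder shared secret; in practice it is provisioned per client and
    # stored in a secrets manager, never hard-coded.
    SHARED_SECRET = b"example-shared-secret"

    def sign(payload: bytes) -> str:
        """Produce the HMAC-SHA256 signature a client attaches to its request."""
        return hmac.new(SHARED_SECRET, payload, hashlib.sha256).hexdigest()

    def verify(payload: bytes, signature: str) -> bool:
        """Recompute the signature server-side and compare in constant time,
        proving the payload was not modified in transit."""
        return hmac.compare_digest(sign(payload), signature)

    body = b'{"order_id": 101, "amount": 59.99}'
    sig = sign(body)
    assert verify(body, sig)                    # intact message passes
    assert not verify(body + b"tampered", sig)  # modified message is rejected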

Vulnerabilities

The area of security vulnerabilities is wide and diverse. There are numerous attack vectors with different methods and targets. One way to categorize vulnerabilities is by target area:

  • Network and OS—issues in the operating system and network components, e.g., buffer overflows, socket flooding, denial-of-service (DoS) or distributed denial-of-service (DDoS) attacks, etc.
  • Application layer—issues in the hosting application server and related services (e.g., message parsing, session hijacking, security misconfigurations, input validation errors, etc.)
  • API/component—functional issues in the actual API (e.g., injection attacks, data exposure, incomplete access control)


Best Practices for API Security Awareness

A serious approach to security testing should include (but not necessarily be limited to):

  1. Requirements—Make security visible in the requirements and backlog processes, on the same level as other areas such as performance, usability, etc.
  2. Knowledge—Invest in security know-how and testing among your software developers and testers, so they are aware of the known security breaches and how to guard against them.
  3. Prevention—Test and assess security early in the project; don’t leave it until the end, right before production.
  4. Monitoring—Continuously monitor your software for security vulnerabilities using open-source tools or customized solutions, with a focus on how new changes can have unwanted side effects.
  5. Awareness—Make use of free tools and resources (like those available at OWASP) to get an overview of pertinent vulnerabilities and make sure they do not affect the corresponding applications and projects.

Conclusion

In this article, we discussed multiple areas of the software development environment that are crucial from a CISSP perspective. From a technical standpoint, input validation is one of the major areas assessed by the OWASP Top 10, and buffer overflows and privilege escalation are the other major areas to focus on. In addition, we covered concepts and processes of the software management life cycle, such as configuration management, along with the areas to emphasize when securing APIs.

Be Safe
