The operating system is the fundamental part of our computing environment: the mothership that controls the hardware and the applications that provide the tools of computing. But the operating system is also a security weak point. Like any other software, operating systems have vulnerabilities, and update rituals such as Patch Tuesday are legendary.

Operating systems have vulnerabilities that cybercriminals can exploit to great effect. Among the top 50 products with the most distinct recorded vulnerabilities, seven of the top 10 are operating systems. Vulnerabilities lead to exploits; one example is the cross-platform remote access Trojan (RAT) named CrossRAT. This Trojan can infect Windows, macOS, Linux and Solaris systems, manipulate files and run malicious executables remotely.

Having a vulnerable operating system is bad enough if you are an enterprise user. But what if you are using a vulnerable operating system to pilot a commercial aircraft? Or a vulnerable system that counted the votes that put a government into power? It is use cases such as these that have prompted a deeper look into this most fundamental aspect of secure systems — the OS.

What Is a Provably Secure Operating System?

The idea of provably secure operating systems first arose from the challenge of proving, mathematically, that something is secure. To prove that X + Y = Z, you must be able to supply a mathematical or logical proof. The same applies to secure systems: can the design itself embody a logic from which security can be formally proven? "Provable" here means demonstrable by proof, not merely probable.
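To make the X + Y = Z analogy concrete: in a modern proof assistant such as Lean, even a trivial arithmetic claim is accepted only once a machine-checked proof is supplied. This sketch (Lean 4 syntax) shows the flavor of what "formally proven" means, as opposed to merely tested:

```lean
-- A machine-checked proof that 2 + 3 = 5 (Lean 4).
-- `rfl` succeeds only because both sides compute to the same value;
-- proving a security property of an OS design works the same way in
-- principle, just over far richer statements.
example : 2 + 3 = 5 := rfl
```

Proving security properties of an operating system follows the same principle, but the statements are about access rights and information flow rather than arithmetic.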

Various papers during the 1970s proposed the idea of having security as an innate requisite in the kernel of an OS. In a 1975 paper by Neumann et al., "A Provably Secure Operating System," the substance and architecture of a Provably Secure Operating System (PSOS) were proposed. The system was specified as a "general-purpose operating system, whose security properties can be formally proven." The design offered specifications and requirements for 11 levels.

The design was based on the SRI Hierarchical Development Methodology (HDM), a design methodology that separates out "design, data representation, and implementation." HDM is particularly applicable to large-scale developments such as an operating system, and it places security considerations at the center of the design.

(Source: Neumann, “A Provably Secure Operating System,” 1975)

A later paper published in 1979, by Neumann and Feiertag, was titled "The foundations of a provably secure operating system (PSOS)" and further divided the PSOS into 17 hierarchical layers, numbered 0 through 16, with each layer's objects developed with intrinsic security. This layering of objects in an OS is one of the specific requirements behind the achievable production of a secure operating system.

The design of the PSOS was written in SPECIAL, the SPECIfication and Assertion Language, which is suited to large-scale systems such as operating systems. SPECIAL can be used in the formal proof of the properties of software systems.

The PSOS hierarchy consists of objects that are accessed via capabilities. A capability can be neither altered nor forged. Each capability has a unique identifier and carries a set of access rights; once a capability is created, it is fixed for good. In the PSOS, each object name is mapped to a capability. The access rights form the basis of the security of the OS: "a process cannot pass certain capabilities to other processes or to place these capabilities in storage locations (e.g., a directory or inter-process communication channel) accessible to other processes."
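The flavor of an unforgeable, immutable capability can be sketched in ordinary code. This is only an illustration of the concept, not the PSOS mechanism itself — in the real design, unforgeability is enforced by the architecture, whereas here a frozen Python dataclass merely models it, and all names (`Capability`, `mint_capability`, the directory) are hypothetical:

```python
import itertools
from dataclasses import dataclass

# Monotonically increasing source of unique capability identifiers.
_next_id = itertools.count(1)

@dataclass(frozen=True)  # frozen: fields cannot be altered after creation
class Capability:
    uid: int               # unique identifier
    rights: frozenset      # immutable set of access rights

def mint_capability(rights):
    """Create a new capability with a fresh unique identifier."""
    return Capability(uid=next(_next_id), rights=frozenset(rights))

# As in PSOS, a directory maps object names to capabilities.
directory = {}
cap = mint_capability({"read", "write"})
directory["report.txt"] = cap

# Attempting to alter a capability after creation fails: it is "set in stone."
try:
    cap.rights = frozenset({"read", "write", "execute"})
except AttributeError:
    print("capability is immutable")
```

Looking up `"report.txt"` in the directory yields the capability, and the capability (not the name) is what grants access — the same name-to-capability mapping the PSOS papers describe.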

The Abstraction Layers in a PSOS

The 1979 paper outlined the abstraction layers, numbered 0 through 16, in the PSOS.

  • 16 User request interpretation
  • 15 User environments and name spaces
  • 14 User input-output
  • 13 Procedure records
  • 12 User processes and visible input-output
  • 11 Creation and deletion of user objects
  • 10 Directories
  • 9 Abstract object manager
  • 8 Segments and windows
  • 7 Pages
  • 6 System processes and system input-output
  • 5 Primitive input-output
  • 4 Arithmetic and other basic procedures
  • 3 Clocks
  • 2 Interrupts
  • 1 Registers and other storage
  • 0 Capabilities

(Source: Neumann and Feiertag, “The foundations of a provably secure operating system (PSOS),” 1979)

The capability mechanism mentioned previously is the foundation stone of the PSOS and occupies layer 0. All of the other layers are constructed on top of layer 0.

Capabilities are vital to the security of the system. They carry fine-grained access controls, and they have many of the properties of data, so they can be written into and out of storage. They act as security agents within the OS, serving as names or tokens for all objects of the PSOS. In a PSOS, security ultimately rests on the capability being unforgeable and unchangeable.

Physical resources sit above layer 0, including primary and secondary storage, processors and input and output devices. Virtual resources are constructed from physical resources.

Layers 10 through 16 include the community and user abstractions. Community programs are typically directories or user processes.

The whole is viewed by Neumann et al. as a "family of systems" in which you can "choose the level that provides the best set of resources to fulfill the needs of the desired system without having to include unnecessary facilities."

The Power of the Capability

The capability, and its fundamental place in the overall design, is the core of the provable security of the PSOS. Summing up what Neumann et al. see as the remit of the capability:

  1. It is simple, with no policy embedded in the mechanism. Importantly, a capability is unforgeable.
  2. Capabilities cannot be bypassed.
  3. The use of capabilities encourages the use of data and procedure abstractions.
  4. Capability use extends to user programs.
  5. The access rights of a capability limit the operations allowed against an object.
  6. Capabilities can spawn new capabilities that inherit a subset of the original's access rights. Access rights cannot be amplified in this manner, and the spawning action can itself be prevented.
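Point 6 — deriving a weaker capability while forbidding any amplification of rights — can be sketched as follows. This is an illustrative model under assumed names (`Capability`, `restrict`), not the PSOS interface:

```python
import itertools
from dataclasses import dataclass

_ids = itertools.count(1)

@dataclass(frozen=True)
class Capability:
    uid: int
    rights: frozenset

def restrict(cap, subset):
    """Spawn a new capability carrying only a subset of cap's rights.

    Amplification (requesting a right the parent lacks) is refused,
    mirroring point 6 above: inherited rights can shrink, never grow.
    """
    subset = frozenset(subset)
    if not subset <= cap.rights:
        raise PermissionError("cannot amplify access rights")
    return Capability(uid=next(_ids), rights=subset)

parent = Capability(uid=next(_ids), rights=frozenset({"read", "write"}))
child = restrict(parent, {"read"})   # allowed: {"read"} is a subset
print(sorted(child.rights))          # ['read']
```

Asking `restrict(parent, {"read", "execute"})` would raise, because `"execute"` is not among the parent's rights — the subset check is the whole security argument in miniature.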

Do We Need Our Operating Systems to Be Provably Secure?

We have to ask ourselves this question because of the limitations of the state of the art in PSOS design. Do we really need the OS to be provably secure? In a world where human beings find ways around pretty much everything, does a PSOS actually deliver security gains?

There are drawbacks to the hierarchical layered approach of a PSOS. Modern development and DevOps practices make continuous design modifications that cut across layers, which does not fit the strict hierarchy of a PSOS. This is compounded by the need for modern operating systems to communicate across layers in many ways; applying that to a PSOS would likely hurt performance.

Yet some work does continue on provably secure operating systems, such as the high-assurance Muen separation kernel, which echoes the Neumann PSOS ideas of component isolation and policy-controlled communication across layers.

And, of course, there are always people trying to prove that provably secure isn’t actually secure.

One thing is sure: in the world of security, there is no such thing as 100% secure — there are only levels of risk mitigation. A secure OS would not prevent a phishing email from stealing a user’s credentials via a spoof website. Sometimes, it’s important to remember that.


Sources

  1. Analyzing CrossRAT, Objective-See
  2. Neumann et al., “A Provably Secure Operating System,” USAECOM, 1975
  3. Levitt et al., “The SRI hierarchical development methodology (HDM) and its application to the development of secure software,” 1980
  4. Neumann and Feiertag, “The foundations of a provably secure operating system (PSOS),” 1979
  5. Lawrence Robinson and Olivier Roubine, “SPECIAL — A SPECification and Assertion Language,” 1977
  6. R.J. Lipton, “Breaking Provably Secure Systems”