The Rainbow Series (but mainly the Orange Book really)

This post provides a brief overview of what the Rainbow Series of books is, and a summary of the Orange Book contained within that series.

The Rainbow Series (also known as the Rainbow Books) is a series of computer security standards and guidelines published by the US Department of Defense (DoD) in the 1980s and 1990s. A list of the publications, and copies of them, can be found on FAS. The objective of the standards within the books is to describe the process of evaluation for trusted systems. These standards may also be used as part of a company's procurement criteria.

The Orange Book: Aims
The Orange Book contains the Trusted Computer System Evaluation Criteria (TCSEC). It can be used to evaluate, classify and select computer systems being considered for the processing, storage and retrieval of sensitive or classified information. The criteria are designed to be applied to an overarching system design, rather than to each individual component of a system. In the book, the criteria are used to classify systems into four broad hierarchical divisions:

  • Division D: Minimal protection
  • Division C: Discretionary protection
  • Division B: Mandatory protection
  • Division A: Verified protection

The criteria used to classify systems were created with three objectives in mind:

  • To provide users with a measure of how trustworthy a system can be considered for the processing of sensitive or classified materials
  • To provide to manufacturers an idea of the standards of security that they should aim for when designing systems
  • To provide a basis for specifying security requirements during the IT acquisition process

Requirements for secure processing can be broken down into two main types:

  • specific security feature requirements
  • assurance requirements

The Orange Book: Fundamental Computer Security Requirements
Before the specifics of a computer security system can be discussed, a statement of requirements is needed. This should aim to address questions such as “What do we really mean when we call a system ‘secure’?” In general, secure systems should control (through the usage of various security features) access to information such that only people with the correct authorisation are able to access/process/read/write/create/delete sensitive information. From this basic statement of objective, six fundamental requirements can be derived:

  • 1: Security policy: As described in a previous post, there must be an explicit and well-defined security policy enforced in a system. The use of rules such as no read-up in various security architectures can be seen in the above linked post.
  • 2: Marking: Objects should be marked with a security classification level. This should allow users of a system to identify the classification of material at a glance, and the system should apply different modes of access to materials depending on the material’s markings.
  • 3: Identification: Individual subjects must be identified. This facilitates the checking of users to ensure that they have a sufficient level of authorisation to access sensitive material. This identification/authorisation system must be securely maintained by the computer system, and be associated with all parts of the system performing security-relevant actions.
  • 4: Accountability: Logs of actions must be kept so that any actions affecting the security of the system can be traced back to the responsible party. The system must be able to select which information is required to be recorded, to minimise the expense of auditing logs and to facilitate efficient log analysis. Log information must be protected from modification by unauthorised users.
  • 5: Assurance: The system must contain hardware/software mechanisms that allow for the system to be independently evaluated to check for compliance with the above requirements 1-4. It should be possible to carry out these checks in a secure manner.
  • 6: Continuous protection: The trusted mechanisms that enforce the requirements 1-4 must be continuously protected against tampering/unauthorised changes. A system cannot be considered secure if the measures used to check for security compliance can themselves be compromised.
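Requirement 4 (accountability) can be illustrated with a toy audit trail that records only selected security-relevant events and ties each one to an identified subject. This is purely a sketch to make the requirement concrete; the Orange Book specifies requirements, not an implementation, and all names here are hypothetical:

```python
import time

# Toy audit trail: record only selected security-relevant events
# (requirement 4), each attributed to an identified subject.
AUDITED_EVENTS = {"login", "read_classified", "policy_change"}

audit_log = []

def audit(subject, event, detail=""):
    # Selective recording keeps the log small and analysis cheap,
    # as the requirement's note on minimising auditing expense suggests.
    if event in AUDITED_EVENTS:
        audit_log.append((time.time(), subject, event, detail))

audit("alice", "login")
audit("alice", "list_files")  # not security-relevant: not recorded
audit("alice", "read_classified", "report.txt")
print([entry[1:3] for entry in audit_log])
# [('alice', 'login'), ('alice', 'read_classified')]
```

A real system would additionally have to protect the log itself from unauthorised modification (requirement 4) and keep the audit mechanism under continuous protection (requirement 6).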

These requirements form the basis for the evaluation criteria applicable to each evaluation division and class. The full application of each of these requirements to each class can be read in The Orange Book.

The Orange Book: Division D: Minimal Protection
Division D consists of only one class, as it is reserved for systems that have been evaluated but fail to meet the requirements for any of the higher evaluation classes.

The Orange Book: Division C: Discretionary Protection
Classes in Division C provide for discretionary protection and, through the inclusion of audit capabilities, for accountability of subjects and the actions they take.

  • Class C1: Discretionary security protection. In this class, the Trusted Computing Base (TCB) of the system nominally satisfies the discretionary security requirements by providing separation of users and data. A class C1 system will also contain some form of controls capable of enforcing access limitations on an individual basis. This should allow users to protect private projects/data from others, and keep other users from accidentally reading or destroying their data. Class C1 is intended for environments in which cooperating users process data all at the same level of sensitivity.
  • Class C2: Controlled access protection: Class C2 systems have more finely grained discretionary access control models than those seen in a class C1 system, making users more accountable for their actions. This is done via the use of login procedures, auditing of security-relevant events, and resource isolation.
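The discretionary controls described above can be sketched as a simple per-object access control list, where each object's owner decides which other users may access it. A minimal illustration under assumed semantics (the class names, users and permission strings are all hypothetical, not taken from the TCSEC):

```python
# Minimal sketch of discretionary access control (DAC):
# each object carries an owner-managed access control list.
class DacObject:
    def __init__(self, owner):
        self.owner = owner
        self.acl = {owner: {"read", "write"}}  # owner starts with full access

    def grant(self, granter, user, perms):
        # Discretionary: access is extended at the owner's discretion.
        if granter != self.owner:
            raise PermissionError("only the owner may grant access")
        self.acl.setdefault(user, set()).update(perms)

    def allowed(self, user, perm):
        return perm in self.acl.get(user, set())

doc = DacObject(owner="alice")
doc.grant("alice", "bob", {"read"})
print(doc.allowed("bob", "read"))   # True
print(doc.allowed("bob", "write"))  # False
```

The key point of "discretionary" is visible in `grant`: protection depends on individual owners choosing to restrict access, which is exactly the weakness the mandatory controls of Division B address.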

The Orange Book: Division B: Mandatory Protection
In Division B classes, the notion of a TCB that preserves the integrity and sensitivity labels of objects and uses them to enforce a set of mandatory access control rules is a key requirement. The system developer must provide evidence of the security policy model on which the TCB was based, alongside a specification for the TCB. Evidence must be provided to demonstrate that the reference monitor concept has been implemented.

  • Class B1: Labeled security protection. A class B1 system must meet all of the requirements of a class C2 system. In addition, a statement of the security policy model, data labeling, and mandatory access control over named subjects/objects must be present. The capability must exist for the accurate labeling of exported information. Testing must be performed on the security of the system, and any flaws should be removed.
  • Class B2: Structured protection. In a class B2 system, the TCB is based on a clearly defined and documented security policy model. All of the access control enforcement found in a class B1 system should be included and extended to all objects in the Automatic Data Processing (ADP) system. In addition, the problem of covert channels must be addressed. The TCB must be structured so as to form a distinction between protection-critical and non-protection-critical elements. The TCB interface should be well defined, and through well-defined design and implementation it should be possible to test the system more thoroughly to perform a more complete review of it. Authentication methods should be strengthened, and trusted facility management should be provided in the form of support for system administrator and operator functions. Stringent configuration management controls should be imposed. A class B2 system should be relatively resistant to penetration.
  • Class B3: Security domains. The TCB of a class B3 system sets stringent requirements for the reference monitor; it must mediate all accesses of subjects to objects, be tamper-proof, and be small enough to be subjected to strict analysis and testing. The TCB should be structured to exclude code that is not essential to security policy enforcement. The TCB must also have minimal complexity, which represents a need for a high degree of systems engineering in the design/implementation of the system. A security administrator is supported, system recovery procedures are required, and auditing mechanisms are expanded to alert said administrator to security-relevant events. A class B3 system should be highly resistant to penetration.
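The mandatory access control required throughout Division B can be sketched with the classic Bell-LaPadula rules mentioned earlier: no read-up (the simple security property) and no write-down (the *-property). A minimal illustration with hypothetical levels; a real TCB would enforce this check inside a tamper-proof reference monitor rather than in ordinary application code:

```python
# Minimal sketch of mandatory access control over sensitivity labels,
# following the Bell-LaPadula rules: no read-up, no write-down.
LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP SECRET": 3}

def mac_check(subject_level, object_level, mode):
    s, o = LEVELS[subject_level], LEVELS[object_level]
    if mode == "read":   # simple security property: no read-up
        return s >= o
    if mode == "write":  # *-property: no write-down
        return s <= o
    raise ValueError("unknown access mode")

print(mac_check("SECRET", "TOP SECRET", "read"))    # False: read-up denied
print(mac_check("SECRET", "CONFIDENTIAL", "read"))  # True
print(mac_check("SECRET", "CONFIDENTIAL", "write")) # False: write-down denied
```

Unlike the discretionary controls of Division C, no user or owner can override these label comparisons; that is what makes the policy "mandatory".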

The Orange Book: Division A: Verified Protection
Division A systems are characterised by the use of formal security verification methods which ensure that both mandatory and discretionary security controls employed within the system can effectively protect classified/sensitive materials stored/processed by the system. The TCB should be extensively documented to demonstrate that all security requirements are met in every aspect of design, development and implementation.

  • A class A1 system is functionally equivalent to a class B3 system, in that no additional architectural features or policy changes are required. The difference between these two classes stems from the analysis attainable from the highly complete design specifications and verification techniques. This analysis makes it possible for there to be a high level of assurance that the TCB has been correctly implemented. This assurance is developmental in nature, starting with a formal model of the security policy and a Formal Top-Level Specification (FTLS) of the design. There are five important criteria for class A1 design verification:
    -A formal model of the security policy must be clearly identified and documented, with the inclusion of a mathematical proof that the model is consistent with its axioms and is sufficient to support the security policy
    -An FTLS must be produced that includes abstract definitions of the functions performed by the TCB, and of the hardware/firmware mechanisms that are implemented to enforce separated execution domains
    -The FTLS of the TCB must be shown to be consistent with the model via formal techniques (such as verification tools) where possible, and informal ones otherwise
    -In turn, the elements of the TCB implementation must be shown to be consistent with the elements of the FTLS. The FTLS must express the unified protection mechanism required to satisfy the security policy, and it is the elements of this protection mechanism that are mapped to the elements of the TCB
    -Covert channels must be identified and analysed through the use of formal analysis techniques. Covert timing channels may be identified using informal techniques. The continued existence of any identified covert channels within a system must be justified.
  • Beyond class A1: Most envisioned future security enhancements that would take a system beyond class A1 are beyond current technological capabilities. However, some discussion of future possibilities can be found within The Orange Book.
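As an illustration of the kind of statement a formal security policy model contains, the Bell-LaPadula simple security property (no read-up), on which the Orange Book's mandatory policy discussion draws, can be written roughly as follows (the symbol names here are one common notation, not the Orange Book's own):

```latex
% Simple security property (no read-up): subject s may read object o
% only if s's clearance dominates o's classification.
\forall s \in S,\ \forall o \in O:\quad
  (s, o, \mathrm{read}) \in A \implies f_S(s) \ge f_O(o)
```

In an A1 development, statements of this kind are proved consistent with the model's axioms, and the FTLS is then shown (formally where tools allow) to be consistent with them.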
Published in Information Security
