COM379 Introduction - University of Sunderland
History of Security Standards
University of Sunderland
CSEM02
Harry R Erwin, PhD
Roadmap
• History
– Multics
– The Bell/LaPadula Model
– The TCSEC (Orange Book) standards
– The TNI (Red Book) standards
• Recent Practice
– Common Criteria
– Protection Profiles
• Assessment
Resources
• Karger, P.A. and R.R. Schell, Thirty Years Later: Lessons
from the Multics Security Evaluation. RC 22534 (W0207134), July 31, 2002, IBM Research Division: Yorktown
Heights, NY.
• Bell, D.E. and L.J. LaPadula, Secure Computer Systems:
Unified Exposition and Multics Interpretation. MTR 2997,
April 1974, The MITRE Corporation: Washington DC.
• Department of Defense Trusted Computer System
Evaluation Criteria. DoD 5200.28-STD, December 26,
1985, Department of Defense: Washington DC.
• Trusted Network Interpretation. NCSC-TG-005, July 31,
1987, National Computer Security Center: Washington
DC.
More Recent Resources
• Stoneburner, G., CSPP - Guidance for COTS Security Protection
Profiles. NISTIR 6462, December 1999, NIST: Gaithersburg, MD.
• Information technology - Security techniques — Evaluation criteria for
IT security — Part 1: Introduction and general model. ISO/IEC
15408-1: 1999 (E), December 18, 1998, ISO/IEC.
• Information technology - Security techniques — Evaluation criteria for
IT security — Part 2: Security functional requirements. ISO/IEC
15408-2: 1999 (E), December 18, 1998, ISO/IEC.
• Information technology - Security techniques — Evaluation criteria for
IT security — Part 3: Security assurance requirements. ISO/IEC
15408-3: 1999 (E), December 18, 1998, ISO/IEC.
Other Papers
• Thompson, K., Reflections on Trusting Trust.
Communications of the ACM, 1984. 27(8): p. 761-763.
• Thompson, K., On Trusting Trust. Unix Review, 1989.
7(11): p. 70-74.
• Karger, P.A., M.E. Zurko, et al., A Retrospective on the
VAX VMM Security Kernel. IEEE Transactions on
Software Engineering, 1991. 17(11): p. 1147-1165.
• Biba, K.J., Integrity Considerations for Secure Computer
Systems. ESD-TR-76-372, MTR-3153, April 1977, The
MITRE Corporation: Bedford, MA.
History
• Multics
• Unix
• TCSEC
• Common Criteria
• Current Practice
Security
• ‘Freedom from undesirable events’—hence much broader
than the usual concept.
• In the UK, three elements of security (in the narrow
sense) are often listed:
– Confidentiality—‘protection of data from unauthorized access.’
– Integrity—‘protection of data from unauthorized modification.’
More generally, the maintenance of certain desirable
conditions over time.
– Availability—‘the system is usable by authorized users.’
• Trust is also becoming important—various relationships
between a trustor and a trusted entity.
Multics
• A USAF operating system, 1965-1976, designed
to provide multi-level security (MAC) for a
command and control system. Conceptually based
on CTSS, the first time-sharing system at MIT.
• Security was designed into the system. Its security
kernel was about 2.5 times smaller than SELinux’s.
• Remains considerably more secure than Windows
XP and UNIX/MacOS X.
• Primary worked example of class B2 security (q.v.).
• ‘no buffer overflows’, but still vulnerable to
malicious software.
Areas of Concern
• Malicious developers
• Trapdoors during distribution
• Boot-sector viruses
• Compiler trapdoors (see the papers by Ken
Thompson)
Auditing and Intrusion Detection
• The Multics evaluation found that auditing could be
bypassed during the installation of malicious
software, either by modifying or deleting the audit
records or by experimenting with a duplicate
machine to see what would be detected.
Multics Report Conclusions
• Classes C2 and B1 only suitable for benign,
closed environments.
• Class A1 required for open, hostile
environments. Class A1 requires proving
the system is secure. This leads to the topic
of models.
Security Models
• A security model is a set of rules that can be
formally proven to enforce a security policy.
• There are models for confidentiality, integrity, and
availability. These models conflict, and there is no
model for trust.
• The oldest of these models is the Bell/LaPadula
model.
• A good integrity model of the time is Biba.
The Bell/LaPadula Model
• Abstract representation of computer security as it
applies to classified information. The goal is to
control the access of active subjects to passive
objects according to a defined policy.
• Modes of access: read and write in various
combinations.
• Access permission is stored in a matrix.
• Security classifications are defined as level and
compartment (e.g., TOP SECRET ULTRA).
Security Access Rules
• Simple security property: for a subject to access an
object, the level and compartments of the subject
must dominate those of the object. (Dominant
levels are higher and dominant compartments are
supersets.)
• The *-property prevents a malicious subject from
‘leaking’ information: if a subject has read access
to document A and write access to document B,
then the level and compartments of document B
must dominate those of document A.
• Finally, anything not prohibited is allowed. This is
known as the discretionary property.
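The two access rules above can be sketched in a few lines of code. This is an illustrative model only (level names, compartment names, and function names are my own, not from the lecture or the original report): labels are (level, compartments) pairs, and dominance is "higher-or-equal level and superset of compartments", exactly as defined in the simple security property.

```python
# Illustrative sketch of the Bell/LaPadula access rules (hypothetical
# names; not the original MITRE formalization).

LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP SECRET": 3}

def dominates(a, b):
    """Label a dominates label b: level is higher or equal AND
    compartments are a superset (both conditions from the slide)."""
    a_level, a_comps = a
    b_level, b_comps = b
    return LEVELS[a_level] >= LEVELS[b_level] and a_comps >= b_comps

def may_read(subject, obj):
    # Simple security property: the subject must dominate the object.
    return dominates(subject, obj)

def may_flow(src, dst):
    # *-property: information may flow from src to dst only if the
    # destination's label dominates the source's ("no write down").
    return dominates(dst, src)

analyst = ("TOP SECRET", {"ULTRA"})
report  = ("SECRET", set())

print(may_read(analyst, report))   # True: TOP SECRET ULTRA dominates SECRET
print(may_flow(analyst, report))   # False: writing down would leak
print(may_flow(report, analyst))   # True: writing up is permitted
```

Note how the *-property makes "write up" legal but "write down" illegal — which is exactly why, as a later slide observes, objects emerging from such a process end up highly classified.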
BLP and Security
• The Bell/LaPadula model can be implemented in a security
kernel and will work correctly.
• The kernel has to be monolithic—that is, a single thread
mediates all access and handles every access
atomically.
• Implication: distributed security cannot enforce the BLP
model. It can only detect after the fact that a violation has
occurred (Abrams, personal communication). This resulted
in the withdrawal of the Trusted Network Interpretation
(q.v.).
• An utter pain operationally, as objects emerging from the
process are highly classified.
The Covert Channel Problem
• Early in the development of computer security,
covert channels were discovered.
• A covert channel is a communication channel that
can be exploited to transfer information in a
manner that violates the system's security policy.
• Two types of covert channels:
– storage channels and
– timing channels.
• 100 bits per second is very bad; 1 bit per second is
tolerable and common. Covert channels remain a
problem today.
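A storage channel of the kind defined above can be simulated in a few lines. This sketch is purely illustrative (the shared flag and all names are my invention): a high-level sender and a low-level receiver share no authorized channel, but any globally observable condition — a lock, a file's existence, a disk-full state — lets the sender encode one bit per tick.

```python
# Hypothetical simulation of a storage covert channel: the secret is
# never "read" through any authorized interface, yet it leaks via
# observable system state.

class SharedResource:
    """Stands in for any globally visible condition, e.g. whether a
    lock is held or a well-known file exists."""
    def __init__(self):
        self.busy = False

def observe(resource):
    """Low side samples the condition — an apparently harmless read."""
    return 1 if resource.busy else 0

def leak(resource, bits):
    """High side toggles the condition once per tick; the low side
    observes each tick and reconstructs the bit string."""
    received = []
    for bit in bits:
        resource.busy = (bit == 1)        # covert 'write'
        received.append(observe(resource))  # covert 'read'
    return received

secret = [1, 0, 1, 1, 0]
leaked = leak(SharedResource(), secret)
print(leaked == secret)  # True: the policy saw no illegal access
```

At one observable toggle per scheduling tick, the bandwidth of such a channel is what the slide's rough numbers describe — easy to push past 1 bit/s, hard to drive to zero.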
UNIX
• UNIX security was designed in direct opposition
to Multics.
• It enforces discretionary rather than mandatory
access, relying on the owners of the objects to
judge who may have access.
• Intended for a benign environment where users are
trusted. “System high” security.
• Covert channels not an issue.
• Basis (with development) for class C2.
The TCSEC (Orange Book)
Standards
• Descriptions from the standard
• Class D
• Class C1
• Class C2
• Class B1
• Class B2
• Class B3
• Class A1
Class D
• This class is reserved for those systems that
have been evaluated but that fail to meet the
requirements for a higher evaluation class.
• Older operating systems.
Class C1, Discretionary Security
Protection
The Trusted Computing Base (TCB) of a class (C1)
system nominally satisfies the discretionary security
requirements by providing separation of users and data.
It incorporates some form of credible controls capable
of enforcing access limitations on an individual basis,
i.e., ostensibly suitable for allowing users to be able to
protect project or private information and to keep other
users from accidentally reading or destroying their
data. The class (C1) environment is expected to be one
of cooperating users processing data at the same
level(s) of sensitivity.
Class C2, Controlled Access
Protection
• Systems in this class enforce a more finely
grained discretionary access control than
(C1) systems, making users individually
accountable for their actions through login
procedures, auditing of security-relevant
events, and resource isolation.
• Current commercial operating systems.
Class B1, Labelled Security
Protection
Class (B1) systems require all the features
required for class (C2). In addition, an
informal statement of the security policy
model, data labelling, and mandatory access
control over named subjects and objects
must be present. The capability must exist
for accurately labelling exported
information. Any flaws identified by testing
must be removed.
Class B2, Structured Protection
In class (B2) systems, the TCB is based on a clearly defined and
documented formal security policy model that requires the
discretionary and mandatory access control enforcement found in
class (B1) systems be extended to all subjects and objects in the ADP
system. In addition, covert channels are addressed. The TCB must be
carefully structured into protection-critical and non-protection-critical
elements. The TCB interface is well-defined and the TCB design and
implementation enable it to be subjected to more thorough testing and
more complete review. Authentication mechanisms are strengthened,
trusted facility management is provided in the form of support for
system administrator and operator functions, and stringent
configuration management controls are imposed. The system is
relatively resistant to penetration.
Class B3, Security Domains
The class (B3) TCB must satisfy the reference monitor
requirements that it mediate all accesses of subjects to
objects, be tamperproof, and be small enough to be
subjected to analysis and tests. To this end, the TCB is
structured to exclude code not essential to security
policy enforcement, with significant system
engineering during TCB design and implementation
directed toward minimizing its complexity. A security
administrator is supported, audit mechanisms are
expanded to signal security-relevant events, and system
recovery procedures are required. The system is highly
resistant to penetration.
Class A1, Verified Design
Systems in class (A1) are functionally equivalent to those in class
(B3) in that no additional architectural features or policy requirements
are added. The distinguishing feature of systems in this class is the
analysis derived from formal design specification and verification
techniques and the resulting high degree of assurance that the TCB is
correctly implemented. This assurance is developmental in nature,
starting with a formal model of the security policy and a formal top-level specification (FTLS) of the design. In keeping with the
extensive design and development analysis of the TCB required of
systems in class (A1), more stringent configuration management is
required and procedures are established for securely distributing the
system to sites. A system security administrator is supported.
Fundamental TCSEC
Requirements
• Security Policy
• Marking
• Identification
• Accountability
• Assurance
• Continuous Protection
Security Policy
There must be an explicit and well-defined security policy enforced
by the system. Given identified subjects and objects, there must be a
set of rules that are used by the system to determine whether a given
subject can be permitted to gain access to a specific object. Computer
systems of interest must enforce a mandatory security policy that can
effectively implement access rules for handling sensitive (e.g.,
classified) information. These rules include requirements such as: No
person lacking proper personnel security clearance shall obtain access
to classified information. In addition, discretionary security controls
are required to ensure that only selected users or groups of users may
obtain access to data (e.g., based on a need-to-know).
Marking
Access control labels must be associated with
objects. In order to control access to information
stored in a computer, according to the rules of a
mandatory security policy, it must be possible to
mark every object with a label that reliably
identifies the object's sensitivity level (e.g.,
classification), and/or the modes of access
accorded those subjects who may potentially
access the object.
Identification
Individual subjects must be identified. Each
access to information must be mediated based on
who is accessing the information and what classes
of information they are authorized to deal with.
This identification and authorization information
must be securely maintained by the computer
system and be associated with every active
element that performs some security-relevant
action in the system.
Accountability
Audit information must be selectively kept and
protected so that actions affecting security can be
traced to the responsible party. A trusted system
must be able to record the occurrences of security-relevant events in an audit log. The capability to
select the audit events to be recorded is necessary
to minimize the expense of auditing and to allow
efficient analysis. Audit data must be protected
from modification and unauthorized destruction to
permit detection and after-the-fact investigations
of security violations.
Assurance
The computer system must contain hardware/software mechanisms
that can be independently evaluated to provide sufficient assurance
that the system enforces requirements 1 through 4 above. In order to
assure that the four requirements of Security Policy, Marking,
Identification, and Accountability are enforced by a computer system,
there must be some identified and unified collection of hardware and
software controls that perform those functions. These mechanisms are
typically embedded in the operating system and are designed to carry
out the assigned tasks in a secure manner. The basis for trusting such
system mechanisms in their operational setting must be clearly
documented such that it is possible to independently examine the
evidence to evaluate their sufficiency.
Continuous Protection
The trusted mechanisms that enforce these basic
requirements must be continuously protected
against tampering and/or unauthorized changes.
No computer system can be considered truly
secure if the basic hardware and software
mechanisms that enforce the security policy are
themselves subject to unauthorized modification
or subversion. The continuous protection
requirement has direct implications throughout the
computer system's life-cycle.
Class C2 Requirements
• Discretionary Access Control
• Object Reuse
• Identification and Authentication
• Audit
• System Architecture
• System Integrity
• Security Testing
• Documentation
(TCB = Trusted Computing Base)
Discretionary Access Control
The TCB shall define and control access between named users and
named objects (e.g., files and programs) in the ADP system. The
enforcement mechanism (e.g., self/group/public controls, access
control lists) shall allow users to specify and control sharing of those
objects by named individuals, or defined groups of individuals, or by
both, and shall provide controls to limit propagation of access rights.
The discretionary access control mechanism shall, either by explicit
user action or by default, provide that objects are protected from
unauthorized access. These access controls shall be capable of
including or excluding access to the granularity of a single user.
Access permission to an object by users not already possessing access
permission shall only be assigned by authorized users.
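The requirement above can be illustrated with a minimal ACL model. This is a sketch under assumed names (the `GROUPS` table, the `ACL` class, and its fields are hypothetical, not from the standard): access may be granted to named individuals or defined groups, exclusion works at single-user granularity, and unlisted users are denied by default.

```python
# Hypothetical sketch of C2-style discretionary access control:
# named users and groups, per-user exclusion, deny by default.

GROUPS = {"project-x": {"alice", "bob"}}   # illustrative group membership

class ACL:
    def __init__(self):
        self.allowed_users = set()
        self.allowed_groups = set()
        self.denied_users = set()   # exclusion "to the granularity of a single user"

    def permits(self, user):
        if user in self.denied_users:    # explicit exclusion wins
            return False
        if user in self.allowed_users:   # named individual
            return True
        # defined groups of individuals
        return any(user in GROUPS.get(g, set()) for g in self.allowed_groups)

acl = ACL()
acl.allowed_groups.add("project-x")
acl.denied_users.add("bob")   # bob is in the group but individually excluded

print(acl.permits("alice"))   # True  (via group membership)
print(acl.permits("bob"))     # False (excluded at single-user granularity)
print(acl.permits("carol"))   # False (objects protected by default)
```

The deny-by-default branch at the end is what the requirement's "either by explicit user action or by default, provide that objects are protected" clause demands.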
Object Reuse
All authorizations to the information contained
within a storage object shall be revoked prior to
initial assignment, allocation or reallocation to a
subject from the TCB's pool of unused storage
objects. No information, including encrypted
representations of information, produced by a
prior subject's actions is to be available to any
subject that obtains access to an object that has
been released back to the system.
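The object-reuse rule amounts to "scrub before reallocation". A minimal sketch (class and method names are assumed for illustration) shows the discipline: storage returned to the pool is zeroed at release time, so no block handed to a new subject can carry a prior subject's data.

```python
# Illustrative sketch of the object-reuse requirement: revoke all
# prior contents of a storage object before it is reallocated.

class StoragePool:
    def __init__(self, n_blocks, block_size=8):
        self.free = [bytearray(block_size) for _ in range(n_blocks)]

    def release(self, block):
        block[:] = bytes(len(block))   # scrub: zero the block on release
        self.free.append(block)

    def allocate(self):
        block = self.free.pop()
        # Invariant: a reused block carries no residue from a prior subject.
        assert all(b == 0 for b in block)
        return block

pool = StoragePool(1)
buf = pool.allocate()
buf[:6] = b"secret"        # a prior subject writes sensitive data
pool.release(buf)
fresh = pool.allocate()    # a new subject receives the same block
print(bytes(fresh))        # all zeros: nothing survives reallocation
```

Scrubbing at release (rather than at allocation) also narrows the window in which residue sits in the free pool — either point satisfies the letter of the requirement, which only demands revocation "prior to initial assignment, allocation or reallocation".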
Identification and Authentication
The TCB shall require users to identify themselves to it
before beginning to perform any other actions that the TCB
is expected to mediate. Furthermore, the TCB shall use a
protected mechanism (e.g., passwords) to authenticate the
user's identity. The TCB shall protect authentication data so
that it cannot be accessed by any unauthorized user. The
TCB shall be able to enforce individual accountability by
providing the capability to uniquely identify each individual
ADP system user. The TCB shall also provide the
capability of associating this identity with all auditable
actions taken by that individual.
Audit
The TCB shall be able to create, maintain, and protect from modification or
unauthorized access or destruction an audit trail of accesses to the objects it protects.
The audit data shall be protected by the TCB so that read access to it is limited to
those who are authorized for audit data. The TCB shall be able to record the
following types of events: use of identification and authentication mechanisms,
introduction of objects into a user's address space (e.g., file open, program
initiation), deletion of objects, and actions taken by computer operators and system
administrators and/or system security officers, and other security relevant events.
For each recorded event, the audit record shall identify: date and time of the event,
user, type of event, and success or failure of the event. For
identification/authentication events the origin of request (e.g., terminal ID) shall be
included in the audit record. For events that introduce an object into a user's address
space and for object deletion events the audit record shall include the name of the
object. The ADP system administrator shall be able to selectively audit the actions
of any one or more users based on individual identity.
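The record fields the requirement enumerates map directly onto a small data structure. The sketch below uses assumed field and function names (nothing here is prescribed by the standard): each record carries date/time, user, event type, and success/failure, with origin for identification/authentication events and object name for object events, plus the required ability to audit selectively by individual identity.

```python
# Hypothetical audit-record structure carrying the fields the C2
# audit requirement lists, with selective per-user filtering.

from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class AuditRecord:
    user: str
    event_type: str                    # e.g. "login", "file_open", "delete"
    success: bool
    origin: Optional[str] = None       # e.g. terminal ID, for I&A events
    object_name: Optional[str] = None  # for object introduction/deletion
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

def audit_for(records, user):
    """Selectively audit the actions of one user, by individual identity."""
    return [r for r in records if r.user == user]

log = [
    AuditRecord("alice", "login", True, origin="tty3"),
    AuditRecord("bob", "file_open", True, object_name="/etc/passwd"),
    AuditRecord("alice", "delete", False, object_name="report.txt"),
]
print([r.event_type for r in audit_for(log, "alice")])  # ['login', 'delete']
```

Protecting such a log from modification and limiting read access to authorized auditors — the other half of the requirement — is an access-control property of the TCB, not of the record format.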
System Architecture
The TCB shall maintain a domain for its own
execution that protects it from external
interference or tampering (e.g., by modification of
its code or data structures). Resources controlled
by the TCB may be a defined subset of the
subjects and objects in the ADP system. The TCB
shall isolate the resources to be protected so that
they are subject to the access control and auditing
requirements.
System Integrity
Hardware and/or software features shall be
provided that can be used to periodically
validate the correct operation of the on-site
hardware and firmware elements of the
TCB.
Security Testing
The security mechanisms of the ADP system shall
be tested and found to work as claimed in the
system documentation. Testing shall be done to
assure that there are no obvious ways for an
unauthorized user to bypass or otherwise defeat
the security protection mechanisms of the TCB.
Testing shall also include a search for obvious
flaws that would allow violation of resource
isolation, or that would permit unauthorized
access to the audit or authentication data. (See the
Security Testing guidelines.)
Documentation
• You must provide:
– Security Features User's Guide
– Trusted Facility Manual
– Test Documentation
– Design Documentation
TNI (Red Book)
• Trusted Network Interpretation
– Interpretations of the TCSEC for networks
– Added security services
• Did not include
– COMSEC
– Emanations security
– Physical security
• NTCB means the Network Trusted Computing
Base.
Some Important Network
Interpretations for C2 Systems
• Note that “users” do not include “operators,”
“system programmers,” “technical control
officers,” “system security officers,” and other
system support personnel. They are distinct
from users and are subject to the Trusted Facility
Manual and the System Architecture requirements.
Such individuals may change the system
parameters of the network system, for example, by
defining membership of a group. These
individuals may also have the separate role of
users.
Some Important Network
Interpretations for C2 Systems
• Policies:
– SECRECY POLICY: The network sponsor shall define
the form of the discretionary secrecy policy that is
enforced in the network to prevent unauthorized
users from reading the sensitive information
entrusted to the network.
– DATA INTEGRITY POLICY: The network sponsor
shall define the discretionary integrity policy to prevent
unauthorized users from modifying, viz., writing,
sensitive information. The definition of data
integrity presented by the network sponsor refers to the
requirement that the information has not been
subjected to unauthorized modification in the network.
Some Important Network
Interpretations for C2 Systems
• When group identifiers are acceptable for access
control, the identifier of some other host may be
employed, to eliminate the maintenance that
would be required if individual identification of
remote users was employed. However, it must be
possible from that audit record to identify
(immediately or at some later time) exactly the
individuals represented by a group identifier at the
time of the use of that identifier.
Some Important Network
Interpretations for C2 Systems
• The NTCB shall ensure that any storage
objects that it controls (e.g., Message
buffers under the control of a NTCB
partition in a component) contain no
information for which a subject in that
component is not authorized before granting
access. This requirement must be enforced
by each of the NTCB partitions.
Some Important Network
Interpretations for C2 Systems
• In cases where the NTCB is expected to mediate actions
of a host (or other network component) that is acting on
behalf of a user or group of users, the NTCB may
employ identification and authentication of the host (or
other component) in lieu of identification and
authentication of an individual user, so long as the
component identifier implies a list of specific users
uniquely associated with the identifier at the time of its use
for authentication. This requirement does not apply to
internal subjects.
Some Important Network
Interpretations for C2 Systems
• The sponsor must select which events are
auditable. If any such events are not
distinguishable by the NTCB alone, the
audit mechanism shall provide an interface,
which an authorized subject can invoke
with parameters sufficient to produce an
audit record. These audit records shall be
distinguishable from those provided by
the NTCB.
Some Important Network
Interpretations for C2 Systems
• In the context of a network system, “other
security relevant events” might require:
– Identification of each access event
– Identification of the starting and ending times
of each access event
– Identification of security-relevant exceptional
conditions
– Utilization of cryptographic variables
– Changing the configuration of the network
Some Important Network
Interpretations for C2 Systems
• Testing of a component will require a testbed
that exercises the interfaces and protocols of
the Component including tests under exceptional
conditions. The testing of a security
mechanism of the network system for meeting this
criterion shall be an integrated testing
procedure involving all components containing
an NTCB partition that implement the given
mechanism. This integrated testing is additional
to any individual component tests involved in the
evaluation of the network system.
Some Important Network
Interpretations for C2 Systems
• User documentation should describe user
visible protection mechanisms at the
global (network system) level and at the
user interface of each component, and the
interaction among these.
Some Important Network
Interpretations for C2 Systems
• The manual shall contain specifications and
procedures for the network configuration.
– The hardware configuration of the network itself;
– Attaching new components to the network;
– The case where certain components may periodically
leave the network and then rejoin;
– Network configuration aspects that can impact the
security of the network system;
– Loading or modifying NTCB software or firmware
• Any assumptions about security of a given
network should be clearly stated.
Communications Service
• The communications service itself has to be
evaluated in terms of
– Design encapsulation
– Testing
– Design verification
– Configuration management
– Network control
Network Security Services
• Communications integrity
– Authentication, field integrity, and non-repudiation
• Denial of service (DoS)
– Continuity of operations, protocol-based protection from DoS, and
network management
• Compromise protection
– Data confidentiality, traffic flow confidentiality, and selective
routing
TCSEC and Commercial
Systems
• Examples of Class D systems
– Windows 98
– MacOS 9
• Examples of Class C2
– Windows NT
– Windows XP
– Unix and Linux
– MacOS X
• Example of Class A1
– DEC built a VAX system to A1 standard: “The Brick”
The TCSEC in Practice
• Experience with the TCSEC was negative. It was
used as the basis for the evaluation of a number of
systems, but the following issues were
encountered:
– Enormous bureaucracy and paperwork burden
– Any hardware or software change forced reevaluation
of the entire system
• Eventually, formal evaluation was abandoned, and
this led to the development of the Common
Criteria.
Common Criteria
• An international standard extending the TCSEC
• Different approach—no certification, rather the
vendor documents the security requirements met
by the system in a Protection Profile (PP) using a
standard security taxonomy. The user then
compares their security requirements (defined in
the same taxonomy) against the PP. Evaluation
still key.
• CCTool is an expert system to support the process.
• A bit behind the power curve.
Practical Experience
• The security problems over the last few years
have:
– Confirmed the correctness of the TCSEC/CC
requirements.
– Provided a great deal of experience with practical
security. The TCSEC/CC has not been particularly
useful in dealing with the current threat.
• We probably know enough now to rewrite the
TCSEC/CC in much greater detail.
Lessons Learnt
• The TCSEC standards were definable, but
certification of systems with varying software and
hardware to those standards was impossible.
• Certification of systems as meeting the Common
Criteria is still a serious burden.
• We are beginning to understand what beyond the
Common Criteria has to be done.
Assessment
• The original problem was understanding how to
secure a system formally. That was either solvable
(single computers) or infeasible (networks). Now
there are two open problems:
• How to make confidentiality, integrity,
availability, and trust compatible.
• How to address the practical problems of securing
systems on the network.
• Welcome to the future…