Outline for May 30, 2000
- Greetings and felicitations!
- Vulnerabilities Models
- RISOS (1975), intended to help managers and others understand integrity problems
- PA (1976-78), automated checking of programs
- NSA, contents unknown but similar to PA and RISOS
- Aslam, fault-based; for C programs
- Landwehr, classify according to attack purpose as well as type; based on RISOS
- Bishop, still being developed
- RISOS (Research Into Secure Operating Systems); Abbott et al.
- Improper parameter validation
- Inconsistent parameter validation
- Implicit sharing of privileged data
- Asynchronous validation/incorrect serialization (e.g., TOCTTOU; see the sketch after this list)
- Inadequate identification/authorization/authentication
- Violable prohibition/limit
- Exploitable logic error
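To make the TOCTTOU entry above concrete, here is a minimal sketch in C of the classic access()/open() race in a setuid program; the path name and the simplified error handling are illustrative, not taken from a particular system.

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        const char *path = "/tmp/userfile";   /* hypothetical user-controlled path */

        /* Time of check: verify the *real* uid may read the file. */
        if (access(path, R_OK) != 0) {
            perror("access");
            return 1;
        }

        /* Time of use: in the window between the two calls the attacker can
         * replace /tmp/userfile with a symlink to a protected file; open()
         * runs with the effective (privileged) uid and follows the link. */
        int fd = open(path, O_RDONLY);
        if (fd < 0) {
            perror("open");
            return 1;
        }
        close(fd);
        return 0;
    }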
- PA (Protection Analysis); Bisbey et al.
- Improper protection domain; 5 subclasses
- Improper initial protection domain
- Improper isolation of implementation details
- Improper change (e.g., TOCTTOU flaws)
- Improper naming
- Improper deletion/deallocation
- Improper validation (see the bounds-check sketch after this list)
- Improper synchronization; 2 subclasses
- Improper divisibility
- Improper sequencing
- Improper choice of operand and operation
Note: PA classes map into RISOS classes and vice versa
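As a concrete instance of the improper validation class (improper parameter validation in RISOS terms), a minimal C sketch with a hypothetical handler that copies caller-supplied input into a fixed buffer without a length check:

    #include <stdio.h>
    #include <string.h>

    /* Hypothetical request handler: the missing length check is the flaw. */
    static void handle_request(const char *user_input)
    {
        char buf[64];
        strcpy(buf, user_input);     /* improper validation: no bounds check */
        printf("processing: %s\n", buf);
    }

    int main(int argc, char *argv[])
    {
        if (argc > 1)
            handle_request(argv[1]); /* attacker-supplied input */
        return 0;
    }

The repair is to validate the length against sizeof(buf), or use a bounded copy, before the data ever reaches the buffer.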
- Flaw Hypothesis Methodology
- Information gathering -- emphasize use of sources such as manuals,
protocol specs, design documentation, social engineering, source code,
knowledge of other systems, etc.
- Flaw hypothesis -- old rule of "if it's forbidden, try it; if it's required,
don't do it"; knowledge of other systems' flaws and analysis of interfaces
are particularly fruitful; go after assumptions and trust relationships
- Flaw testing -- see if the hypothesized flaw holds; preferably do not try
it out, but study the system closely enough to see whether it will work,
design the attack, and be able to show why it works; sometimes an actual
test is necessary -- never use a live production system, and be sure it is
backed up!
- Flaw generalization -- given a flaw, look at its causes and try to generalize.
Example: UNIX environment variables (see the sketch below).
- (sometimes) Flaw elimination -- fix it; may require a redesign so that
penetrators cannot exploit the same flaw again
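A sketch in C of how the environment-variable example generalizes: a hypothetical setuid-root utility calls system(), so the command name is resolved through the invoker's PATH (and, in older shells, split using IFS), both of which the attacker controls. Classic UNIX behavior is assumed; the program and path are made up.

    #include <stdlib.h>

    /* Hypothetical setuid-root utility. */
    int main(void)
    {
        /* Flaw: "ls" is found via the caller's PATH, which the attacker
         * controls; a malicious "ls" placed first on PATH runs with
         * root privileges. */
        system("ls /var/spool/jobs");

        /* Generalization: any inherited environment state (PATH, IFS, ...)
         * is attacker-supplied input.  Sanitize the environment or use
         * absolute paths, e.g. system("/bin/ls /var/spool/jobs"),
         * or avoid system() entirely. */
        return 0;
    }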
- Example penetrations
- MTS
- Burroughs
- Principles of Secure Design
- Refer to both designing secure systems and securing existing systems
- Speaks to limiting damage
- Principle of Least Privilege
- Give process only those privileges it needs
- Discuss use of roles; examples of systems which violate this (vanilla UNIX) and which maintain this (Secure Xenix)
- Examples in programming (making things setuid to root unnecessarily, limiting protection domain; modularity, robust programming); see the sketch below
- Example attacks (misuse of privileges, etc.)
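A minimal sketch in C of the programming side of least privilege: a setuid-root program does its one privileged step, then drops back to the invoking user's identity. The ordering and error handling are simplified, and the privileged step itself is left abstract.

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(void)
    {
        /* ... do the one step that needs root (bind a low port, open a
         * protected log file, etc.) ... */

        /* Drop to the real (invoking) user.  Check the result: a failed,
         * unnoticed drop defeats the principle. */
        if (setuid(getuid()) != 0) {
            perror("setuid");
            exit(1);
        }

        /* From here on the process holds only the user's privileges. */
        return 0;
    }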
- Principle of Fail-Safe Defaults
- Default is to deny (see the sketch below)
- Example of violation: su program
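A minimal default-deny sketch in C, with a hypothetical policy_lookup() standing in for the real policy: access is granted only on an explicit allow, so errors and unanticipated cases fall through to denial.

    #include <stdbool.h>
    #include <string.h>

    /* Hypothetical policy lookup: 1 = explicit allow, 0 = deny, -1 = error. */
    static int policy_lookup(const char *user, const char *object)
    {
        /* Stand-in policy: only "alice" may read "report". */
        if (strcmp(user, "alice") == 0 && strcmp(object, "report") == 0)
            return 1;
        return 0;
    }

    /* Fail-safe default: anything other than an explicit allow --
     * including a lookup error -- is treated as a denial. */
    static bool access_allowed(const char *user, const char *object)
    {
        return policy_lookup(user, object) == 1;
    }

    int main(void)
    {
        return access_allowed("bob", "report") ? 0 : 1;   /* denied by default */
    }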
- Principle of Economy of Mechanism
- KISS principle
- Enables quick, easy verification
- Example of complexity: sendmail
- Principle of Complete Mediation
- All accesses must be checked
- Forces system-wide view of controls
- Sources of requests must be identified correctly
- Source of problems: caching (because it may not reflect the state of the system correctly); examples are race conditions, DNS poisoning (see the sketch below)
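A minimal C sketch of the caching problem: a cached permission decision survives a later revocation, while re-checking on every access (complete mediation) sees it. The authority and policy here are hypothetical.

    #include <stdbool.h>
    #include <stdio.h>

    /* Hypothetical authority; the policy can change (revocation). */
    static bool permission_revoked = false;

    static bool check_permission(const char *user, const char *file)
    {
        (void)user; (void)file;
        return !permission_revoked;
    }

    /* Violates complete mediation: the first answer is reused, so a later
     * revocation is never seen. */
    static bool cached_ok, cache_valid;

    static bool may_read_cached(const char *user, const char *file)
    {
        if (!cache_valid) {
            cached_ok = check_permission(user, file);
            cache_valid = true;
        }
        return cached_ok;                        /* may be stale */
    }

    /* Complete mediation: every access goes back to the authority. */
    static bool may_read_mediated(const char *user, const char *file)
    {
        return check_permission(user, file);
    }

    int main(void)
    {
        may_read_cached("alice", "f");           /* allowed; answer cached */
        permission_revoked = true;               /* policy changes */
        printf("cached: %d  mediated: %d\n",
               may_read_cached("alice", "f"),    /* still 1: stale */
               may_read_mediated("alice", "f")); /* 0: revocation seen */
        return 0;
    }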
- Principle of Open Design
- Designs are open so everyone can examine them and know the limits of the security provided
- Does not apply to cryptographic keys
- Acceptance of reality: they can get this info anyway
- Principle of Separation of Privilege
- Require multiple conditions to be satisfied before granting permission/access/etc. (see the sketch below)
- Advantage: 2 accidents/errors/etc. must happen together to trigger failure
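A minimal C sketch of separation of privilege in the spirit of BSD su, where becoming root requires both wheel-group membership and the root password; the two checks below are hypothetical stubs.

    #include <stdbool.h>
    #include <string.h>

    /* Hypothetical stand-ins for a group lookup and a password check. */
    static bool in_wheel_group(const char *user)
    {
        return strcmp(user, "alice") == 0;       /* pretend alice is in wheel */
    }

    static bool gave_root_password(const char *user)
    {
        (void)user;
        return false;                            /* pretend the password check failed */
    }

    /* Separation of privilege: both independent conditions must hold;
     * compromising either one alone is not enough. */
    static bool may_become_root(const char *user)
    {
        return in_wheel_group(user) && gave_root_password(user);
    }

    int main(void)
    {
        return may_become_root("alice") ? 0 : 1; /* denied: only one condition met */
    }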
- Principle of Least Common Mechanism
- Minimize sharing
- New service: in kernel or as a library routine? Latter is better, as each user gets their own copy
- Principle of Psychological Acceptability
- Willingness to use the mechanisms
- Understanding model
- Matching user's goal