Outline for May 30, 2000

  1. Greetings and felicitations!
  2. Vulnerabilities Models
    1. RISOS (1975), intended to help managers, administrators, and others understand operating system integrity problems
    2. PA (1976-78), automated checking of programs
    3. NSA, contents unknown but similar to PA and RISOS
    4. Aslam, fault-based; for C programs
    5. Landwehr, classify according to attack purpose as well as type; based on RISOS
    6. Bishop, still being developed
  3. RISOS (Research Into Secure Operating Systems); Abbott et al.
    1. Improper parameter validation
    2. Inconsistent parameter validation
    3. Implicit sharing of privileged data
    4. Asynchronous validation/incorrect serialization (e.g., TOCTTOU; see the sketch after this list)
    5. Inadequate identification/authorization/authentication
    6. Violable prohibition/limit
    7. Exploitable logic error
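      Note: a minimal C sketch of class 4 (the classic TOCTTOU flaw), assuming
      a setuid-root program and a made-up path /tmp/userfile; the gap between
      the access() check and the open() is the vulnerability.

        /* Time-of-check-to-time-of-use sketch.  The file name and the
         * setuid-root context are assumptions for illustration. */
        #include <fcntl.h>
        #include <stdio.h>
        #include <unistd.h>

        int main(void)
        {
            const char *path = "/tmp/userfile";   /* hypothetical path */
            int fd;

            /* Time of check: may the *real* user write this file? */
            if (access(path, W_OK) != 0) {
                perror("access");
                return 1;
            }

            /* Time of use: by now the attacker may have replaced the file
             * with a symlink to a protected file; open() runs with the
             * effective (root) UID and follows the link. */
            fd = open(path, O_WRONLY);
            if (fd < 0) {
                perror("open");
                return 1;
            }
            (void)write(fd, "overwritten\n", 12);
            close(fd);
            return 0;
        }

      One fix is to open the file first and validate the resulting descriptor
      with fstat(), so the check and the use refer to the same object.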
  4. PA (Protection Analysis); Bisbey et al.
    1. Improper protection domain; 5 subclasses
      1. Improper initial protection domain
      2. Improper isolation of implementation details
      3. Improper change (TOCTTOU flaws)
      4. Improper naming
      5. Improper deletion/deallocation
    2. Improper validation (see the sketch after this list)
    3. Improper synchronization; 2 subclasses
      1. Improper divisibility
      2. Improper sequencing
    4. Improper choice of operand or operation
      Note: PA classes map into RISOS classes and vice versa
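      Note: a short C sketch of "improper validation", using a made-up routine
      copy_name() with a fixed 64-byte buffer; the flawed version never checks
      the length of its input.

        #include <stdio.h>
        #include <string.h>

        /* Improper validation: no bound on the caller-supplied string,
         * so a long name overruns the 64-byte buffer. */
        void copy_name(const char *src)
        {
            char name[64];
            strcpy(name, src);            /* flaw: length never checked */
            printf("hello, %s\n", name);
        }

        /* Proper validation rejects input that does not fit. */
        int copy_name_checked(const char *src, char *dst, size_t dstlen)
        {
            if (src == NULL || strlen(src) >= dstlen)
                return -1;                /* refuse oversized input */
            strcpy(dst, src);
            return 0;
        }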
  5. Flaw Hypothesis Methodology
    1. Information gathering -- emphasize use of sources such as manuals, protocol specs, design documentation, social engineering, source code, knowledge of other systems, etc.
    2. Flaw hypothesis -- old rule of "if forbidden, try it; if required, don't do it"; draw on knowledge of other systems' flaws; analysis of interfaces is particularly fruitful; go after the assumptions and trust relationships the system makes
    3. Flaw testing -- see if the hypothesized flaw holds; preferably do not try it out, but examine the system closely enough to see whether it will work, design the attack, and be able to show why it works; sometimes an actual test is necessary -- never use a live production system, and be sure it is backed up!
    4. Flaw generalization -- given a flaw, look at its causes and try to generalize. Example: UNIX environment variables (sketched after this list).
    5. (sometimes) Flaw elimination -- fix it; may require redesigning the system so penetrators cannot exploit the flaw
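      Note: a hedged C sketch of the environment-variable generalization in
      step 4, assuming a setuid-root program and a hypothetical helper named
      "backup"; because system() starts a shell that searches the caller's
      PATH, the caller chooses which "backup" actually runs.

        #include <stdlib.h>

        int main(void)
        {
            /* Flawed: with PATH=/tmp, the shell run by system() executes
             * /tmp/backup with the program's root privileges. */
            (void)system("backup");

            /* Safer: reset the environment and use an absolute path
             * (the path below is made up for the example). */
            if (setenv("PATH", "/bin:/usr/bin", 1) != 0)
                return 1;
            (void)system("/usr/local/sbin/backup");
            return 0;
        }

      The generalization: any privileged program that trusts inherited
      environment variables (PATH, IFS, and so on) has the same class of flaw.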
  6. Example penetrations
    1. MTS
    2. Burroughs
  7. Principles of Secure Design
    1. Apply both to designing secure systems and to securing existing systems
    2. Also speak to limiting the damage an attack can cause
  8. Principle of Least Privilege
    1. Give process only those privileges it needs
    2. Discuss use of roles; examples of systems which violate this (vanilla UNIX) and which maintain this (Secure Xenix)
    3. Examples in programming (making things setuid to root unnecessarily; limiting the protection domain; modularity; robust programming); see the sketch after this list
    4. Example attacks (misuse of privileges, etc.)
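      Note: a sketch of the programming point in item 3, assuming a C daemon
      that needs root only to bind a reserved port; the UID/GID value 65534
      ("nobody") is a placeholder for a real unprivileged account.

        #include <arpa/inet.h>
        #include <netinet/in.h>
        #include <stdio.h>
        #include <string.h>
        #include <sys/socket.h>
        #include <unistd.h>

        int main(void)
        {
            struct sockaddr_in addr;
            int s = socket(AF_INET, SOCK_STREAM, 0);

            memset(&addr, 0, sizeof(addr));
            addr.sin_family = AF_INET;
            addr.sin_addr.s_addr = htonl(INADDR_ANY);
            addr.sin_port = htons(80);    /* reserved port: needs root */

            if (s < 0 || bind(s, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
                perror("bind");
                return 1;
            }

            /* Drop the group first, then the user, and check both: a
             * silent failure to drop is itself a privilege flaw. */
            if (setgid(65534) != 0 || setuid(65534) != 0) {
                perror("drop privileges");
                return 1;
            }

            /* The rest of the daemon runs without root privileges. */
            return 0;
        }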
  9. Principle of Fail-Safe Defaults
    1. Default is to deny
    2. Example of violation: su program
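      Note: a minimal sketch of default-deny, using a hypothetical check
      allowed(); every path that is not an explicit grant, including the
      error paths, falls through to "deny".  (The su violation cited above is
      the opposite: a classic version of su, unable to open the password
      file, assumed the system was broken and gave the user a root shell --
      it failed open instead of failing closed.)

        #include <stdbool.h>
        #include <string.h>

        /* allowed() is a made-up access check for illustration. */
        bool allowed(const char *user, const char *resource)
        {
            if (user == NULL || resource == NULL)
                return false;             /* error: deny */
            if (strcmp(user, "root") == 0)
                return true;              /* explicit grant */
            if (strcmp(resource, "public") == 0)
                return true;              /* explicit grant */
            return false;                 /* default: deny */
        }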
  10. Principle of Economy of Mechanism
    1. KISS principle
    2. Enables quick, easy verification
    3. Example of complexity: sendmail
  11. Principle of Complete Mediation
    1. All accesses must be checked
    2. Forces system-wide view of controls
    3. Sources of requests must be identified correctly
    4. Source of problems: caching (the cached result may not reflect the current state of the system); examples are race conditions and DNS cache poisoning
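      Note: a hedged C sketch of the caching problem; check_permission(), the
      cache, and the names are invented for the example.  The first answer is
      cached, so later requests are never re-checked even after the owner
      revokes access.

        #include <stdbool.h>
        #include <string.h>

        /* Stand-in for the real check against current permissions. */
        static bool check_permission(const char *user, const char *file)
        {
            return strcmp(user, "alice") == 0 && strcmp(file, "notes") == 0;
        }

        static char cached_user[64], cached_file[64];
        static bool cached_ok, cache_valid;

        /* Violates complete mediation: a stale cached "yes" keeps
         * granting access after permissions change. */
        bool can_read(const char *user, const char *file)
        {
            if (cache_valid && strcmp(user, cached_user) == 0 &&
                strcmp(file, cached_file) == 0)
                return cached_ok;

            cached_ok = check_permission(user, file);
            strncpy(cached_user, user, sizeof(cached_user) - 1);
            strncpy(cached_file, file, sizeof(cached_file) - 1);
            cache_valid = true;
            return cached_ok;
        }

      DNS cache poisoning and file-access races are the same problem at
      different layers: a decision based on stale or forgeable state.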
  12. Principle of Open Design
    1. Designs are open so everyone can examine them and know the limits of the security provided
    2. Does not apply to cryptographic keys; the secrecy lies in the keys, not in the design
    3. Acceptance of reality: attackers can get this information anyway
  13. Principle of Separation of Privilege
    1. Require multiple conditions to be satisfied before granting permission/access/etc.
    2. Advantage: 2 accidents/errors/etc. must happen together to trigger failure
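      Note: a minimal sketch, in the spirit of Berkeley su requiring both
      wheel-group membership and the root password; the string of group names
      and the boolean password result are simplifications for illustration.

        #include <stdbool.h>
        #include <string.h>

        /* Separation of privilege: both conditions must hold, so one
         * failure alone (a leaked password, a misconfigured group) does
         * not grant access. */
        bool may_become_root(const char *groups, bool password_ok)
        {
            bool in_wheel = strstr(groups, "wheel") != NULL;
            return in_wheel && password_ok;
        }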
  14. Principle of Least Common Mechanism
    1. Minimize sharing
    2. New service: in kernel or as a library routine? Latter is better, as each user gets their own copy
  15. Principle of Psychological Acceptability
    1. Willingness to use the mechanisms
    2. Understanding model
    3. Matching user's goal

