
13. Vulnerabilities and Threats in Distributed Systems

* Prof. Bharat Bhargava
Department of Computer Sciences and
Center for Education and Research in Information Assurance and Security (CERIAS), Purdue University
www.cs.purdue.edu/people/bb

In collaboration with: Prof. Leszek Lilien, Western Michigan University and CERIAS

* Supported in part by NSF grants IIS-0209059 and IIS-0242840

From Vulnerabilities to Losses

- Growing business losses due to vulnerabilities in distributed systems
  - Identity theft in 2003 – expected loss of $220 bln worldwide; 300%(!) annual growth rate [csoonline.com, 5/23/03]
  - Computer virus attacks in 2003 – estimated loss of $55 bln worldwide [news.zdnet.com, 1/16/04]
- Vulnerabilities occur in: hardware / networks / operating systems / DB systems / applications
- Loss chain (sketched below):
  - Dormant vulnerabilities enable threats against systems
  - Potential threats can materialize as (actual) attacks
  - Successful attacks result in security breaches
  - Security breaches cause losses
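The loss chain can be read as a strictly ordered pipeline; the minimal sketch below (our own illustration, with invented stage labels) makes the point that breaking any single link stops everything downstream.

```python
# A minimal sketch of the loss chain; the stage labels are our own.
LOSS_CHAIN = [
    "dormant vulnerability",   # flaw exists but is unexploited
    "potential threat",        # an entity could exploit the flaw
    "actual attack",           # the threat materializes
    "security breach",         # the attack succeeds
    "loss",                    # the breach costs the business money
]

def prevented_by_breaking(link: str) -> list:
    """Everything downstream of a broken link never happens."""
    return LOSS_CHAIN[LOSS_CHAIN.index(link) + 1:]

print(prevented_by_breaking("potential threat"))
# ['actual attack', 'security breach', 'loss']
```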

Vulnerabilities and Threats

- Vulnerabilities and threats start the loss chain, so it is best to deal with them first
- Dealing with vulnerabilities:
  - Gather information on vulnerabilities and security incidents in metabases and notification systems, then disseminate it
  - Example vulnerability and incident metabases: CVE (Mitre), ICAT (NIST), OSVDB (osvdb.com)
  - Example vulnerability notification systems: CERT (SEI-CMU), Cassandra (CERIAS-Purdue)
- Dealing with threats:
  - Threat assessment procedures: specialized risk analysis using, e.g., vulnerability and incident information
  - Threat detection / threat avoidance / threat tolerance

Outline

1. Vulnerabilities
2. Threats
3. Examples of Mechanisms to Reduce Vulnerabilities and Threats
   3.1. Applying Reliability and Fault Tolerance Principles to Security Research
   3.2. Fraud Countermeasure Mechanisms

Vulnerabilities - Topics

- Models for Vulnerabilities
- Fraud Vulnerabilities
- Vulnerability Research Issues

Models for Vulnerabilities (1)

- A vulnerability in the security domain is like a fault in the reliability domain:
  - A flaw or a weakness in system security procedures, design, implementation, or internal controls
  - Can be accidentally triggered or intentionally exploited, causing security breaches
- Modeling vulnerabilities:
  - Analyzing vulnerability features
  - Classifying vulnerabilities
  - Building vulnerability taxonomies
  - Providing formalized models
- System design should not let an adversary know vulnerabilities unknown to the system owner

Models for Vulnerabilities (2)

- Diverse models of vulnerabilities in the literature, in various environments and under varied assumptions; examples follow
- Analysis of four common computer vulnerabilities [17]
  - Identifies their characteristics, the policies violated by their exploitation, and the steps needed for their eradication in future software releases
- Vulnerability lifecycle model applied to three case studies [4]
  - Shows how systems remain vulnerable long after security fixes (see the sketch below)
  - Vulnerability lifetime stages: appears, discovered, disclosed, corrected, publicized, disappears
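A minimal sketch of the point in [4], using made-up dates: the window of exposure is driven by when a fix is actually deployed, not by when it is released.

```python
# Illustrative lifecycle dates for one hypothetical vulnerability (all values invented).
from datetime import date

lifecycle = {
    "appears":    date(2003, 1, 10),
    "discovered": date(2003, 4, 2),
    "disclosed":  date(2003, 4, 20),
    "corrected":  date(2003, 5, 5),    # vendor releases a patch
    "publicized": date(2003, 5, 12),   # exploit details become widely known
    "disappears": date(2004, 2, 1),    # exploitation finally dies out
}

# A site that patches late stays exposed long after the official fix exists.
patch_deployed_on = date(2003, 11, 15)
exposed_after_fix = (patch_deployed_on - lifecycle["corrected"]).days
print(f"days exposed after the fix existed: {exposed_after_fix}")  # 194
```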

Models for Vulnerabilities (3)

- Model-based analysis to identify configuration vulnerabilities [23]
  - Formal specification of desired security properties
  - Abstract model of the system that captures its security-related behaviors
  - Verification techniques to check whether the abstract model satisfies the security properties
- Kinds of vulnerabilities [3]
  - Operational, e.g., an unexpected broken linkage in a distributed database
  - Information-based, e.g., unauthorized access (secrecy/privacy), unauthorized modification (integrity), traffic analysis (the inference problem), and Byzantine input

Models for Vulnerabilities (4)

- Not all vulnerabilities can be removed, and some should not be, because:
  - Vulnerabilities create only a potential for attacks; some cause no harm over the system's entire life cycle
  - Some known vulnerabilities must be tolerated due to economic or technological limitations
  - Removal of some vulnerabilities may reduce usability (e.g., requiring a password for each resource request lowers usability)
  - Some vulnerabilities are a side effect of a legitimate system feature (e.g., the setuid UNIX facility creates vulnerabilities [14])
- Threat assessment is needed to decide which vulnerabilities to remove first (see the prioritization sketch below)
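As a hedged illustration of such prioritization, the sketch below assumes a simple likelihood x impact score; the vulnerability names and numbers are invented.

```python
# Toy prioritization: all names, likelihoods, and impacts are illustrative only.
vulns = [
    {"name": "setuid-misuse",         "likelihood": 0.4, "impact": 7},
    {"name": "unpatched-db-listener", "likelihood": 0.7, "impact": 9},
    {"name": "verbose-error-pages",   "likelihood": 0.9, "impact": 2},
]

for v in vulns:
    v["risk"] = v["likelihood"] * v["impact"]   # crude risk score

# Remove or mitigate the highest-risk vulnerabilities first.
for v in sorted(vulns, key=lambda v: v["risk"], reverse=True):
    print(f'{v["name"]:24s} risk = {v["risk"]:.1f}')
```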

Fraud Vulnerabilities (1)

- Fraud: a deception deliberately practiced in order to secure unfair or unlawful gain [2]
- Examples:
  - Using somebody else's calling card number
  - Unauthorized selling of customer lists to telemarketers (an example of the overlap of fraud with privacy breaches)
- Fraud can make systems more vulnerable to subsequent fraud
  - Hence the need for protection mechanisms to avoid future damage

Fraud Vulnerabilities (2)

- Fraudsters [13]:
  - Impersonators: illegitimate users who steal resources from victims (for instance, by taking over their accounts)
  - Swindlers: legitimate users who intentionally benefit from the system or other users by deception (for instance, by obtaining legitimate telecommunications accounts and using them without paying the bills)
- Fraud involves abuse of trust [12, 29]
  - A fraudster strives to present himself as a trustworthy individual and friend
  - The more trust one places in others, the more vulnerable one becomes

Vulnerability Research Issues (1)

- Analyze the severity of a vulnerability and its potential impact on an application
  - Qualitative impact analysis: expressed as a low/medium/high degree of performance/availability degradation
  - Quantitative impact analysis: e.g., economic loss, measurable cascade effects, time to recover
- Provide procedures and methods for efficient extraction of the characteristics and properties of known vulnerabilities
  - Analogous to understanding how faults occur
  - Tools searching for known vulnerabilities in metabases cannot anticipate attacker behavior
  - Characteristics of high-risk vulnerabilities can be learned from the behavior of attackers, e.g., using honeypots


Vulnerability Research Issues (2)

- Construct comprehensive taxonomies of vulnerabilities for different application areas
  - Medical systems may have critical privacy vulnerabilities
  - Vulnerabilities in defense systems compromise homeland security
- Propose good taxonomies to facilitate both prevention and elimination of vulnerabilities
- Enhance metabases of vulnerabilities/incidents
  - Reveals characteristics for preventing not only identical but also similar vulnerabilities
  - Contributes to identification of related vulnerabilities, including dangerous synergistic ones
  - A good model for a set of synergistic vulnerabilities can lead to uncovering gang attack threats or incidents

Vulnerability Research Issues (3)

- Provide models for vulnerabilities and their contexts
  - Different kinds of vulnerabilities are emphasized in different contexts
  - The challenge: how a vulnerability in one context propagates to another (if Dr. Smith is a high-risk driver, is he a trustworthy doctor?)
- Devise quantitative lifecycle vulnerability models for a given type of application or system
  - Exploit unique characteristics of the vulnerabilities and of the application/system
  - In each lifecycle phase: determine the most dangerous and most common types of vulnerabilities, and use knowledge of those types to prevent them
  - Best defensive procedures adaptively selected from a predefined set

Vulnerability Research Issues (4)

- Lifecycle models help solve several problems:
  - Avoiding system vulnerabilities most efficiently, by discovering and eliminating them at the design and implementation stages
  - Evaluating/measuring vulnerabilities at each lifecycle stage, in system components, subsystems, and the system as a whole
  - Assisting in the most efficient discovery of vulnerabilities before they are exploited by an attacker or a failure
  - Assisting in the most efficient elimination/masking of vulnerabilities (e.g., based on principles analogous to fault tolerance), OR keeping an attacker unaware or uncertain of important system parameters (e.g., by using non-deterministic or deceptive system behavior, increased component diversity, or multiple lines of defense)

Vulnerability Research Issues (5)

- Provide methods of assessing the impact of vulnerabilities on security in applications and systems
  - Create formal descriptions of the impact of vulnerabilities
  - Develop quantitative vulnerability impact evaluation methods
  - Use the resulting ranking for threat/risk analysis
- Identify the fundamental design principles and guidelines for dealing with system vulnerabilities at each lifecycle stage
- Propose best practices for reducing vulnerabilities at all lifecycle stages (based on the above principles and guidelines)
- Develop interactive or fully automatic tools and infrastructures encouraging or enforcing use of these best practices
- Other issues:
  - Investigate vulnerabilities in security mechanisms themselves
  - Investigate vulnerabilities due to non-malicious but threat-enabling uses of information [21]

Outline

1. Vulnerabilities
2. Threats
3. Examples of Mechanisms to Reduce Vulnerabilities and Threats
   3.1. Applying Reliability and Fault Tolerance Principles to Security Research
   3.2. Fraud Countermeasure Mechanisms

Threats - Topics

- Models of Threats
- Dealing with Threats
  - Threat Avoidance
  - Threat Tolerance
  - Fraud Threat Detection for Threat Tolerance
- Fraud Threats
- Threat Research Issues

Models of Threats

- Threats in the security domain are like errors in the reliability domain
  - Entities that can intentionally exploit or inadvertently trigger specific system vulnerabilities to cause security breaches [16, 27]
- Attacks or accidents materialize threats (changing them from potential to actual)
  - Attack: an intentional exploitation of vulnerabilities
  - Accident: an inadvertent triggering of vulnerabilities
- Threat classifications [26]:
  - Based on actions: threats of illegal access, destruction, modification, and emulation
  - Based on consequences: threats of disclosure, (illegal) execution, misrepresentation, and repudiation

Dealing with Threats

- Dealing with threats:
  - Avoid (prevent) threats in systems
  - Detect threats
  - Eliminate threats
  - Tolerate threats
- Deal with threats based on the degree of risk acceptable to the application
  - Avoid/eliminate threats to human life
  - Tolerate threats to noncritical or redundant components

Dealing with Threats – Threat Avoidance (1)

- Design of threat avoidance techniques is analogous to fault avoidance (in reliability)
- Threat avoidance methods are frozen after system deployment
  - Effective only against less sophisticated attacks
  - Sophisticated attacks require adaptive schemes for threat tolerance [20]
- Attackers have motivation, resources, and the whole system lifetime to discover its vulnerabilities
  - They can discover holes in threat avoidance methods

Dealing with Threats – Threat Avoidance (2)

- Understand threat sources:
  - Understand threats posed by humans, their motivation, and potential attack modes [27]
  - Understand threats due to system faults and failures
- Example design guidelines for preventing threats:
  - A model for secure protocols [15]
  - Formal models for analysis of authentication protocols [25, 10]
  - Models for statistical databases to prevent data disclosures [1]

Dealing with Threats – Threat Tolerance

- Useful features of the fault-tolerant approach:
  - Not concerned with each individual failure; don't spend all resources on dealing with individual failures
  - Can ignore transient and non-catastrophic errors and failures
- Need an analogous intrusion-tolerant approach to deal with lesser and common security breaches
- E.g., intrusion tolerance for database systems [3] (see the phase sketch below):
  - Phase 1: attack detection (optional; e.g., majority voting schemes don't need detection)
  - Phases 2–5: damage confinement, damage assessment, reconfiguration, continuation of service; these can be implicit (e.g., voting schemes follow the same procedure whether attacked or not)
  - Phase 6: report the attack, which feeds repair and fault treatment (to prevent a recurrence of similar attacks)
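A minimal phase skeleton for that cycle, assuming detection happened upstream; the function names and the Alert type are our own placeholders, not an interface defined in [3].

```python
# A minimal sketch of the intrusion-tolerance cycle from [3]; every function
# below is a placeholder stub of our own, not an API from the paper.
from dataclasses import dataclass

@dataclass
class Alert:
    component: str
    description: str

def confine_damage(alert: Alert) -> None:        # Phase 2
    print(f"quarantining {alert.component}")

def assess_damage(alert: Alert) -> list:         # Phase 3
    return [alert.component]                     # pretend only one component is corrupted

def reconfigure(damaged: list) -> None:          # Phase 4
    print(f"restoring {damaged} from clean replicas")

def continue_service() -> None:                  # Phase 5
    print("service continues, possibly degraded")

def report_attack(alert: Alert) -> None:         # Phase 6: feeds repair / fault treatment
    print(f"reported: {alert.description}")

def tolerate_intrusion(alert: Alert) -> None:
    """Phase 1 (detection) happened upstream and produced the alert."""
    confine_damage(alert)
    damaged = assess_damage(alert)
    reconfigure(damaged)
    continue_service()
    report_attack(alert)

tolerate_intrusion(Alert("orders-db", "suspicious bulk update"))
```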

Dealing with Threats – Fraud Threat Detection for Threat Tolerance

- Fraud threat identification is needed
- Fraud detection systems:
  - Widely used in telecommunications, online transactions, and insurance
  - Effective systems use both fraud rules and pattern analysis of user behavior
  - Challenge: a very high false alarm rate, due to the skewed distribution of fraud occurrences (see the base-rate sketch below)
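The false-alarm problem follows directly from Bayes' rule; the sketch below uses invented rates to show that even a good detector produces mostly false alarms when fraud is rare.

```python
# Why skewed fraud rates cause high false-alarm rates: all numbers are
# illustrative assumptions, not measurements from any deployed system.
fraud_rate = 0.001          # 1 in 1000 transactions is fraudulent
detection_rate = 0.95       # P(alarm | fraud)
false_positive_rate = 0.01  # P(alarm | legitimate)

p_alarm = detection_rate * fraud_rate + false_positive_rate * (1 - fraud_rate)
p_fraud_given_alarm = detection_rate * fraud_rate / p_alarm

print(f"P(fraud | alarm) = {p_fraud_given_alarm:.3f}")
# ~0.087: over 90% of alarms are false, even with a seemingly strong detector.
```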

Fraud Threats

- Analyze the salient features of fraud threats
- Some salient features of fraud threats [9]:
  - Fraud is often a malicious opportunistic reaction
  - Fraud escalation is a natural phenomenon
  - Gang fraud can be especially damaging; gang fraudsters can cooperate in misdirecting suspicion onto others
  - Individuals and gangs planning fraud thrive in fuzzy environments; they use fuzzy assignments of responsibilities to participating entities
  - Powerful fraudsters create environments that facilitate fraud (e.g., CEOs involved in insider trading)

Threat Research Issues (1)

- Analysis of known threats in context:
  - Identify (in metabases) known threats relevant for the context
  - Find salient features of these threats and associations between them (threats can also be associated via their links to related vulnerabilities)
  - Infer threat features from the features of the vulnerabilities related to them
  - Build a threat taxonomy for the considered context
- Propose qualitative and quantitative models of threats in context, including lifecycle threat models
- Define measures to determine threat levels
- Devise techniques for avoiding/tolerating threats via unpredictability or non-determinism
- Detect known threats and discover unknown threats

Threat Research Issues (2)

- Develop quantitative threat models using analogies to reliability models
  - E.g., rate threats or attacks using time and effort random variables, and describe the distribution of their random behavior
  - Mean Effort To security Failure (METF): analogous to the Mean Time To Failure (MTTF) reliability measure
  - Mean Time To Patch and Mean Effort To Patch (new security measures): analogous to the Mean Time To Repair (MTTR) reliability measure and the METF security measure, respectively (see the formulas below)
- Propose evaluation methods for threat impacts
  - A mere threat (a potential for attack) has its own impact
  - Consider threat properties: direct damage, indirect damage, recovery cost, prevention overhead
  - Consider interaction with other threats and with defensive mechanisms
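One way to make the analogy concrete (our own formalization, not taken from the slides, abbreviating Mean Time/Effort To Patch as MTTP/METP) is to write each measure as the expectation of the corresponding random variable:

```latex
\[
\mathrm{MTTF} = \mathbb{E}[T_{\mathrm{fail}}], \qquad
\mathrm{METF} = \mathbb{E}[E_{\mathrm{breach}}], \qquad
\mathrm{MTTR} = \mathbb{E}[T_{\mathrm{repair}}],
\]
\[
\mathrm{MTTP} = \mathbb{E}[T_{\mathrm{patch}}], \qquad
\mathrm{METP} = \mathbb{E}[E_{\mathrm{patch}}]
\]
```

Here T_fail is the random time to an accidental failure, E_breach the attacker effort expended until a security breach, T_repair the time to repair, and T_patch, E_patch the time and effort to produce and deploy a fix.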

Threat Research Issues (3)

- Invent algorithms, methods, and design guidelines to reduce the number and severity of threats
  - Consider injecting unpredictability or uncertainty to reduce threats
  - E.g., reduce data transfer threats by sending portions of critical data through different routes (see the sketch below)
- Investigate threats to security mechanisms themselves
- Study threat detection
  - It might be needed for threat tolerance
  - Includes investigation of fraud threat detection
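A minimal sketch of the split-route idea, assuming a simple two-way XOR split (our own illustration, not a protocol from the slides): an eavesdropper on either single route sees only uniformly random bytes.

```python
import os

def split_two_ways(data: bytes) -> tuple:
    pad = os.urandom(len(data))                      # share sent over route A
    share = bytes(a ^ b for a, b in zip(data, pad))  # share sent over route B
    return pad, share

def recombine(share_a: bytes, share_b: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(share_a, share_b))

secret = b"wire-transfer: account 0042, amount 1000"   # illustrative payload
a, b = split_two_ways(secret)
assert recombine(a, b) == secret
# Intercepting only one route yields no information about the payload.
```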

Products, Services and Research Programs for Industry (1)

- There are numerous commercial products and services, and some free products and services. Examples follow; notation used below: Product (Organization)
- Example vulnerability and incident metabases:
  - CVE (Mitre), ICAT (NIST), OSVDB (osvdb.com), Apache Week Web Server (Red Hat), Cisco Secure Encyclopedia (Cisco), DOVES (Computer Security Laboratory, UC Davis), DragonSoft Vulnerability Database (DragonSoft Security Associates), Secunia Security Advisories (Secunia), SecurityFocus Vulnerability Database (Symantec), SIOS (Yokogawa Electric Corp.), Verletzbarkeits-Datenbank (scip AG), Vigil@nce AQL (Alliance Qualité Logiciel)
- Example vulnerability notification systems:
  - CERT (SEI-CMU), Cassandra (CERIAS-Purdue), ALTAIR (esCERT-UPC), DeepSight Alert Services (Symantec), Mandrake Linux Security Advisories (MandrakeSoft)
- Example other tools (1):
  - Vulnerability assessment tools (for databases, applications, web applications, etc.): AppDetective (Application Security), NeoScanner@ESM (Inzen), AuditPro for SQL Server (Network Intelligence India Pvt. Ltd.), eTrust Policy Compliance (Computer Associates), Foresight (Cubico Solutions CC), IBM Tivoli Risk Manager (IBM), Internet Scanner (Internet Security Systems), NetIQ Vulnerability Manager (NetIQ), N-Stealth (N Stalker), QualysGuard (Qualys), Retina Network Security Scanner (eEye Digital Security), SAINT (SAINT Corp.), SARA (Advanced Research Corp.), STAT-Scanner (Harris Corp.), StillSecure VAM (StillSecure), Symantec Vulnerability Assessment (Symantec)
  - Automated scanning tools / vulnerability scanners: Automated Scanning (Beyond Security Ltd.), ipLegion/intraLegion (E*MAZE Networks), Managed Vulnerability Assessment (LURHQ Corp.), Nessus Security Scanner (The Nessus Project), NeVO (Tenable Network Security)

Products, Services and Research Programs for Industry (2)

- Example other tools (2):
  - Vulnerability and penetration testing: Attack Tool Kit (Computec.ch), CORE IMPACT (Core Security Technologies), LANPATROL (Network Security Syst.)
  - Intrusion detection systems: Cisco Secure IDS (Cisco), Cybervision Intrusion Detection System (Venus Information Technology), Dragon Sensor (Enterasys Networks), McAfee IntruShield (McAfee), NetScreen-IDP (NetScreen Technologies), Network Box Internet Threat Protection Device (Network Box Corp.)
  - Threat management systems: Symantec ManHunt (Symantec)
- Example services:
  - Vulnerability scanning services: Netcraft Network Examination Service (Netcraft Ltd.)
  - Vulnerability assessment and risk analysis services: ActiveSentry (Intranode), Risk Analysis Subscription Service (Strongbox Security), SecuritySpace Security Audits (E-Soft), Westpoint Enterprise Scan (Westpoint Ltd.)
  - Threat notification: TruSecure IntelliSHIELD Alert Manager (TruSecure Corp.)
  - Patches: Software Security Updates (Microsoft)
- More on metabases/tools/services: http://www.cve.mitre.org/compatible/product.html
- Example research programs:
  - Microsoft Trustworthy Computing (Security, Privacy, Reliability, Business Integrity)
  - IBM: Almaden (information security); Zurich (information security, privacy, and cryptography); Secure Systems Department; Internet Security group; Cryptography Research Group

Outline

1. Vulnerabilities
2. Threats
3. Examples of Mechanisms to Reduce Vulnerabilities and Threats
   3.1. Applying Reliability and Fault Tolerance Principles to Security Research
   3.2. Fraud Countermeasure Mechanisms

Applying Reliability Principles to Security Research (1)

- Apply the science and engineering from Reliability to Security [6]
- Analogies in basic notions [6, 7]:
  - Fault – vulnerability (enabled by a fault)
  - Error – threat (enabled by a vulnerability)
  - Failure/crash (materializes a fault, consequence of an error) – security breach (materializes a vulnerability, consequence of a threat)
- Time–effort analogies: the time-to-failure distribution for accidental failures – the distribution of effort expended to breach for intentional security breaches [18]
- This is not a "direct" analogy: it considers important differences between Reliability and Security
  - Most important: intentional human factors in Security

Applying Reliability Principles to Security Research (2)

- Analogies from fault avoidance/tolerance [27]:
  - Fault avoidance – threat avoidance
  - Fault tolerance – threat tolerance (gracefully adapts to threats that have materialized)
  - Maybe threat avoidance/tolerance should be named vulnerability avoidance/tolerance (to be consistent with the vulnerability–fault analogy)
- Analogy: to deal with failures, build fault-tolerant systems; to deal with security breaches, build threat-tolerant systems

Applying Reliability Principles to Security Research (3)

- Examples of solutions using fault tolerance analogies:
  - Voting and quorums (see the sketch below)
    - To increase reliability: require a quorum of voting replicas
    - To increase security: make forming voting quorums more difficult
    - This is not a "direct" analogy but a kind of "reversal" of it
  - Checkpointing applied to intrusion detection
    - To increase reliability: use checkpoints to bring the system back to a reliable (e.g., transaction-consistent) state
    - To increase security: use checkpoints to bring the system back to a secure state
  - Adaptability / self-healing
    - Adapt to common and less severe security breaches as we adapt to everyday and relatively benign failures
    - Adapt to the timing / severity / duration / extent of a security breach
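A minimal replica-voting sketch (our own illustration of the quorum idea): with 2f+1 replicas the correct reply wins as long as at most f replicas are faulty or compromised, so an attacker must subvert a quorum rather than a single node.

```python
from collections import Counter
from typing import Optional

def majority_vote(replies: list) -> Optional[str]:
    """Return the reply backed by a strict majority, or None if no quorum forms."""
    value, votes = Counter(replies).most_common(1)[0]
    quorum = len(replies) // 2 + 1
    return value if votes >= quorum else None   # None: treat the result as suspect

replies = ["balance=100", "balance=100", "balance=999"]   # one compromised replica
print(majority_vote(replies))                             # -> balance=100
```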

Applying Reliability Principles to Security Research (4)

- Beware: Reliability analogies are not always helpful
  - Differences exist between seemingly identical notions; e.g., "system boundaries" are less open for Reliability than for Security
  - No simple analogies exist for intentional security breaches arising from planted malicious faults
    - In such cases, the analogy of time (Reliability) to effort (Security) is meaningless
    - E.g., sequential time vs. non-sequential effort; long time duration vs. "nearly instantaneous" effort
  - No simple analogies exist when attack efforts are concentrated in time
    - As before, the analogy of time to effort is meaningless

Outline

1. Vulnerabilities
2. Threats
3. Examples of Mechanisms to Reduce Vulnerabilities and Threats
   3.1. Applying Reliability and Fault Tolerance Principles to Security Research
   3.2. Fraud Countermeasure Mechanisms

Overview - Fraud Countermeasure Mechanisms (1)

- The system monitors user behavior and decides whether the user's behavior qualifies as fraudulent
- Three types of fraudulent behavior are identified (see the sketch below):
  - "Uncovered deceiving intention": the user misbehaves all the time
  - "Trapping intention": the user behaves well at first, then commits fraud
  - "Illusive intention": the user exhibits cyclic behavior, with longer periods of proper behavior separated by shorter periods of misbehavior
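A toy classifier over a per-transaction behavior history (our own illustration; the actual mechanisms in the extended presentation are more sophisticated):

```python
# history: one flag per transaction, True = proper behavior, False = misbehavior.
def classify_intention(history: list) -> str:
    bad = history.count(False)
    if bad == 0:
        return "no fraud observed"
    if bad == len(history):
        return "uncovered deceiving intention"   # misbehaves all the time
    if all(history[:-bad]) and not any(history[-bad:]):
        return "trapping intention"              # behaves well at first, then defects
    return "illusive intention"                  # proper and improper periods alternate

print(classify_intention([False, False, False]))                   # uncovered
print(classify_intention([True, True, True, False, False]))        # trapping
print(classify_intention([True, True, False, True, True, False]))  # illusive
```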

Overview - Fraud Countermeasure Mechanisms (2)

- System architecture for swindler detection:
  - Profile-based anomaly detector: monitors suspicious actions, searching for identified fraudulent behavior patterns
  - State transition analysis: provides a state description when an activity results in entering a dangerous state
  - Deceiving intention predictor: discovers deceiving intention based on satisfaction ratings (see the sketch below)
  - Decision making: decides whether to raise a fraud alarm when a deceiving pattern is discovered
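A minimal satisfaction-based predictor, assuming exponentially decayed ratings (our own illustration, not the algorithm from the extended presentation): recent ratings weigh more, so a swindler who builds trust and then defects is flagged quickly.

```python
def deception_suspected(ratings: list, decay: float = 0.7,
                        threshold: float = 0.5) -> bool:
    """ratings: satisfaction scores in [0, 1], oldest first."""
    score = weight = 0.0
    for i, r in enumerate(reversed(ratings)):   # newest rating gets weight 1
        w = decay ** i
        score += w * r
        weight += w
    return (score / weight) < threshold

honest   = [0.9, 0.8, 0.9, 0.85, 0.9]
swindler = [0.9, 0.9, 0.9, 0.2, 0.1]    # trust built up, then exploited
print(deception_suspected(honest))      # False
print(deception_suspected(swindler))    # True
```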

Overview - Fraud Countermeasure Mechanisms (3)

- The experiments performed validated the architecture: all three types of fraudulent behavior were quickly detected
- More details on "Fraud Countermeasure Mechanisms" are available in the extended version of this presentation at: www.cs.purdue.edu/people/bb#colloqia

Summary

Presented:
1. Vulnerabilities
2. Threats
3. Mechanisms to Reduce Vulnerabilities and Threats
   3.1. Applying Reliability and Fault Tolerance Principles to Security Research
   3.2. Using Trust in Role-based Access Control
   3.3. Privacy-preserving Data Dissemination
   3.4. Fraud Countermeasure Mechanisms

Conclusions

- An exciting area of research
- 20 years of research in Reliability can form a basis for vulnerability and threat studies in Security
- Need to quantify threats, risks, and potential impacts on distributed applications
- Do not be terrorized and do not act scared; adapt and use resources to deal with different threat levels
- Government, industry, and the public are interested in progress in this research

References (1)

1. N.R. Adam and J.C. Wortmann, "Security-Control Methods for Statistical Databases: A Comparative Study," ACM Computing Surveys, Vol. 21, No. 4, Dec. 1989.

2. The American Heritage Dictionary of the English Language, Fourth Edition, Houghton Mifflin, 2000.

3. P. Ammann, S. Jajodia, and P. Liu, "A Fault Tolerance Approach to Survivability," in Computer Security, Dependability, and Assurance: From Needs to Solutions, IEEE Computer Society Press, Los Alamitos, CA, 1999.

4. W.A. Arbaugh et al., "Windows of Vulnerability: A Case Study Analysis," IEEE Computer, Vol. 33, No. 12, pp. 52-59, Dec. 2000.

5. A. Avizienis, J.C. Laprie, and B. Randell, "Fundamental Concepts of Dependability," Research Report N01145, LAAS-CNRS, Apr. 2001.

6. A. Bhargava and B. Bhargava, "Applying Fault-Tolerance Principles to Security Research," Proc. IEEE Symposium on Reliable Distributed Systems, New Orleans, Oct. 2001.

7. B. Bhargava, "Security in Mobile Networks," NSF Workshop on Context-Aware Mobile Database Management (CAMM), Brown University, Jan. 2002.

8. B. Bhargava (ed.), Concurrency Control and Reliability in Distributed Systems, Van Nostrand Reinhold, 1987.

9. B. Bhargava, "Vulnerabilities and Fraud in Computing Systems," Proc. Intl. Conf. IPSI, Sv. Stefan, Serbia and Montenegro, Oct. 2003.

10. B. Bhargava, S. Kamisetty, and S. Madria, "Fault-Tolerant Authentication and Group Key Management in Mobile Computing," Proc. Intl. Conf. on Internet Computing, Las Vegas, June 2000.

11. B. Bhargava and L. Lilien, "Private and Trusted Collaborations," Proc. Secure Knowledge Management (SKM 2004): A Workshop, Amherst, NY, Sep. 2004.

References (2)

12. B. Bhargava and Y. Zhong, "Authorization Based on Evidence and Trust," Proc. Intl. Conf. on Data Warehousing and Knowledge Discovery (DaWaK 2002), Aix-en-Provence, France, Sep. 2002.

13. B. Bhargava, Y. Zhong, and Y. Lu, "Fraud Formalization and Detection," Proc. Intl. Conf. on Data Warehousing and Knowledge Discovery (DaWaK 2003), Prague, Czechia, Sep. 2003.

14. M. Dacier, Y. Deswarte, and M. Kaâniche, "Quantitative Assessment of Operational Security: Models and Tools," Technical Report, LAAS Report 96493, May 1996.

15. N. Heintze and J.D. Tygar, "A Model for Secure Protocols and Their Compositions," IEEE Transactions on Software Engineering, Vol. 22, No. 1, 1996, pp. 16-30.

16. E. Jonsson et al., "On the Functional Relation Between Security and Dependability Impairments," Proc. 1999 Workshop on New Security Paradigms, Sep. 1999, pp. 104-111.

17. I. Krsul, E.H. Spafford, and M. Tripunitara, "Computer Vulnerability Analysis," Technical Report COAST TR 98-07, Dept. of Computer Sciences, Purdue University, 1998.

18. B. Littlewood et al., "Towards Operational Measures of Computer Security," Journal of Computer Security, Vol. 2, 1993, pp. 211-229.

19. F. Maymir-Ducharme, P.C. Clements, K. Wallnau, and R.W. Krut, "The Unified Information Security Architecture," Technical Report CMU/SEI-95-TR-015, Oct. 1995.

20. N.R. Mead, R.J. Ellison, R.C. Linger, T. Longstaff, and J. McHugh, "Survivable Network Analysis Method," Technical Report CMU/SEI-2000-TR-013, Pittsburgh, PA, Sep. 2000.

21. C. Meadows, "Applying the Dependability Paradigm to Computer Security," Proc. Workshop on New Security Paradigms, Sep. 1995, pp. 75-81.

References (3)

22. P.C. Meunier and E.H. Spafford, "Running the Free Vulnerability Notification System Cassandra," Proc. 14th Annual Computer Security Incident Handling Conference, Hawaii, Jan. 2002.

23. C.R. Ramakrishnan and R. Sekar, "Model-Based Analysis of Configuration Vulnerabilities," Proc. Second Intl. Workshop on Verification, Model Checking, and Abstract Interpretation (VMCAI'98), Pisa, Italy, 2000.

24. B. Randell, "Dependability – a Unifying Concept," in Computer Security, Dependability, and Assurance: From Needs to Solutions, IEEE Computer Society Press, Los Alamitos, CA, 1999.

25. A.D. Rubin and P. Honeyman, "Formal Methods for the Analysis of Authentication Protocols," Tech. Rep. 93-7, Dept. of Electrical Engineering and Computer Science, University of Michigan, Nov. 1993.

26. G. Song et al., "CERIAS Classic Vulnerability Database User Manual," Technical Report 2000-17, CERIAS, Purdue University, West Lafayette, IN, 2000.

27. G. Stoneburner, A. Goguen, and A. Feringa, "Risk Management Guide for Information Technology Systems," NIST Special Publication 800-30, Washington, DC, 2001.

28. M. Winslett et al., "Negotiating Trust on the Web," IEEE Internet Computing, Special Issue on Trust Management, Vol. 6, No. 6, Nov. 2002.

29. Y. Zhong, Y. Lu, and B. Bhargava, "Dynamic Trust Production Based on Interaction Sequence," Tech. Rep. CSD-TR 03-006, Dept. of Computer Sciences, Purdue University, Mar. 2003.

The extended version of this presentation is available at: www.cs.purdue.edu/people/bb#colloqia

Thank you!
