Transcript Slide 1

TESTABILITY

• Operability: The test functions quickly and efficiently.

• Observability: The test is open, clear, and traceable.

• Controllability: The test and its results can be controlled.

• Decomposability: The test can be decomposed into its components and evaluated.

• Simplicity: The test itself and its results are simple and understandable.

• Stability: The test is robust, valid, and consistent.

• Understandability: The test is understandable.


TESTABILITY

Kaner, Falk, and Nguyen [KAN93] suggest the following attributes of a "good" test:

1. A good test has a high probability of finding an error.
2. A good test is not redundant.
3. A good test should be "best of breed" [KAN93].
4. A good test should be neither too simple nor too complex.

WHITE-BOX TESTING

• Logic errors and incorrect assumptions are inversely proportional to the probability that a program path will be executed.

• We often believe that a logical path is not likely to be executed when, in fact, it may be executed on a regular basis.

• Typographical errors are random.


BASIS PATH TESTING
• Flow Graph Notation
• Cyclomatic Complexity
• Deriving Test Cases
• Graph Matrices

Flow Graph Notation

The structured constructs in flow graph form: sequence, if, while, until, and case, where each circle represents one or more nonbranching PDL or source code statements.

Figure 1.1. Graph Notation
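From such a flow graph, cyclomatic complexity is computed as V(G) = E − N + 2, where E is the number of edges and N the number of nodes; it gives an upper bound on the number of independent paths a basis-path test set must cover. A minimal Python sketch; the flow graph below is a hypothetical example, not one taken from the slides:

```python
# Cyclomatic complexity V(G) = E - N + 2 for a flow graph
# given as an adjacency list {node: [successor nodes]}.
def cyclomatic_complexity(graph):
    n = len(graph)                                 # N: number of nodes
    e = sum(len(succ) for succ in graph.values())  # E: number of edges
    return e - n + 2

# Hypothetical flow graph: an if decision inside a while loop.
flow_graph = {
    1: [2],      # entry
    2: [3, 6],   # while decision: enter body or exit
    3: [4, 5],   # if decision
    4: [2],      # then-branch, back to the loop test
    5: [2],      # else-branch, back to the loop test
    6: [],       # exit
}
print(cyclomatic_complexity(flow_graph))  # 7 - 6 + 2 = 3
```

V(G) = 3 here matches the two predicate nodes plus one, so at least three test cases are needed to exercise a basis set of independent paths.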


CONTROL STRUCTURE TESTING
• Condition Testing
• Data Flow Testing
• Loop Testing
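Loop testing, the last item above, typically exercises a simple loop with 0, 1, 2, a typical number m, n−1, n, and n+1 passes, where n is the maximum allowable number of iterations. A small Python sketch of that heuristic (the helper name is our own, not from the slides):

```python
# Iteration counts to try when loop-testing a simple loop that
# allows at most n passes: 0, 1, 2, m (typical), n-1, n, n+1.
def loop_test_pass_counts(n, m=None):
    typical = m if m is not None else n // 2
    return sorted({0, 1, 2, typical, n - 1, n, n + 1})

print(loop_test_pass_counts(10))  # [0, 1, 2, 5, 9, 10, 11]
```

The n+1 case should be rejected by the loop's bounds check; the others should complete normally.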


BLACK-BOX TESTING
• Graph-Based Testing Methods
• Equivalence Partitioning
• Boundary Value Analysis
• Comparison Testing
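Boundary value analysis complements equivalence partitioning by choosing test values at and just beyond the edges of each partition. A minimal sketch, assuming a hypothetical input field that accepts integers in a closed range [lo, hi]:

```python
# Boundary value analysis for an integer field valid in [lo, hi]:
# test just below, at, and just above each boundary, plus a nominal value.
def boundary_values(lo, hi):
    nominal = (lo + hi) // 2
    return sorted({lo - 1, lo, lo + 1, nominal, hi - 1, hi, hi + 1})

# E.g., a field documented to accept integers from 1 to 100:
print(boundary_values(1, 100))  # [0, 1, 2, 50, 99, 100, 101]
```

The out-of-range values (0 and 101 here) should be rejected; the in-range values should be accepted.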

TESTING FOR SPECIALIZED ENVIRONMENTS AND APPLICATIONS
• Testing GUIs
• Testing of Client/Server Architectures
• Testing Documentation and Help Facilities
• Testing for Real-Time Systems

TESTING GUIs (Graphical User Interface)

For windows:

• Will the window open properly based on related typed or menu-based commands?

• Can the window be resized, moved, and scrolled?

• Is all data content contained within the window properly addressable with a mouse, function keys, directional arrow keys, and the keyboard?

• Does the window properly regenerate when it is overwritten and then recalled?

• Are all functions that relate to the window available when needed?

• Are all functions that relate to the window operational?


• Are all relevant pull-down menus, tool bars, scroll bars, dialog boxes, buttons, icons, and other controls available and properly displayed for the window?

• When multiple windows are displayed, is the name of the window properly represented?

• Is the active window properly highlighted?

• If multitasking is used, are all windows updated at appropriate times?

• Do multiple or incorrect mouse picks within the window cause unexpected side effects?

• Does the window properly close?


For pull-down menus and mouse operations:

• Is the appropriate menu bar displayed in the appropriate context?

• Does the application menu bar display system-related features (e.g., a clock display)?

• Do pull-down operations work properly?

• Do breakaway menus, palettes, and tool bars work properly?

• Are all menu functions and pull-down subfunctions properly listed?

• Are all menu functions properly addressable by the mouse?

• Are the text typeface, size, and format correct?

• Are menu functions highlighted (or grayed-out) based on the context of current operations within a window?

• Does each menu function perform as advertised?

• Are the names of menu functions self-explanatory?

• Is help available for each menu item, and is it context sensitive?

• Are mouse operations properly recognized throughout the interactive context?

• If multiple clicks are required, are they properly recognized in context?

• If the mouse has multiple buttons, are they properly recognized in context?

• Do the cursor, processing indicator (e.g., an hour glass or clock), and pointer properly change as different operations are invoked?


Data Entry:

• Is alphanumeric data entry properly echoed and input to the system?

• Do graphical modes of data entry (e.g., a slide bar) work properly?

• Is invalid data properly recognized?

• Are data input messages intelligible?


A STRATEGIC APPROACH TO SOFTWARE TESTING
• Verification and Validation
• Organizing for Software Testing
• A Software Testing Strategy
• Criteria for Completion of Testing

Verification and Validation

Figure 1.2. Achieving software quality (software engineering methods, formal technical reviews, standards and procedures, SQA, measurement, and testing all contribute to quality)


Organizing for Software Testing

Two points of view:
• Constructive
• Destructive

A Software Testing Strategy

Figure 1.3. Testing strategy (system engineering, requirements, design, and code correspond to system test, validation test, integration test, and unit test, respectively)


A Software Testing Strategy

Figure 1.4. Software testing steps (the testing "direction" runs from unit test through integration test to high-order tests, moving outward from code through design to requirements)


UNIT TESTING
• Unit Test Considerations
• Unit Test Procedures

Unit Test Considerations

Figure 1.5. Unit test (test cases exercise the module's interface, local data structures, boundary conditions, independent paths, and error handling paths)


Unit Test Procedures

Figure 1.6. Test environment (a driver invokes the module to be tested, stubs stand in for subordinate modules, and test cases target the interface, local data structures, boundary conditions, independent paths, and error handling paths; results are collected)
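In code, the driver supplies test cases to the module under test while stubs replace the modules it calls. A minimal Python sketch; the module, stub, and test data are hypothetical:

```python
# Stub: stands in for a subordinate module (e.g., a sensor reader)
# that is not yet integrated; it returns a fixed, known value.
def read_sensor_stub():
    return 42

# Module under test: reports an alarm when the reading exceeds a threshold.
def check_alarm(read_sensor, threshold):
    return read_sensor() > threshold

# Driver: feeds test cases to the module and collects the results.
def run_unit_tests():
    cases = [(10, True), (42, False), (100, False)]  # (threshold, expected)
    return all(check_alarm(read_sensor_stub, t) == exp for t, exp in cases)

print(run_unit_tests())  # True when every case passes
```

Passing the stub in as a parameter keeps the module testable in isolation; the real sensor reader is substituted later during integration.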


INTEGRATION TESTING
• Top-Down Integration
• Bottom-Up Integration
• Regression Testing

Top-Down Integration

Figure 1.7. Top-down integration (a module hierarchy M1 through M8)


Top-Down Integration

Figure 1.8. Stubs (Stub A through Stub D; arrows show the direction of data flow)
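The figure's stubs differ in how much behavior they simulate. A sketch of the usual gradations, from least to most realistic; all names and return values are hypothetical:

```python
# Gradations of stub behavior, from least to most realistic.
def stub_show_call():
    """Only records that it was invoked."""
    return "called"

def stub_show_parameter(x):
    """Echoes the parameter it was passed back to the tester."""
    return f"received {x}"

def stub_return_fixed():
    """Returns a constant value to its caller."""
    return 0

def stub_table_lookup(key):
    """Returns a canned value selected by the parameter."""
    table = {"open": 1, "close": 2}
    return table.get(key, -1)

print(stub_table_lookup("open"))   # 1
print(stub_table_lookup("reset"))  # -1
```

The richer the stub, the more realistic the top-down test, but also the more throwaway code must be written.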


Bottom-Up Integration

Figure 1.9. Bottom-up integration (drivers D1, D2, and D3 exercise clusters 1, 2, and 3 beneath modules Ma, Mb, and Mc)


Bottom-Up Integration

Figure 1.10. Drivers (Driver A invokes the subordinate; other drivers send a parameter from a table or external file, display the parameter, or combine drivers B and C; arrows show the direction of information flow)


VALIDATION TESTING
• Validation Test Criteria
• Configuration Review
• Alpha and Beta Testing

SYSTEM TESTING
• Recovery Testing
• Security Testing
• Stress Testing
• Performance Testing

THE ART OF DEBUGGING
• The Debugging Process
• Psychological Considerations
• Debugging Approaches

The Debugging Process

Figure 1.11. Debugging (test cases are executed and results compared; debugging yields suspected causes, additional tests, identified causes, corrections, and finally regression tests)


SOFTWARE QUALITY
• McCall's Quality Factors
• FURPS
• The Transition to a Quantitative View

McCall's Quality Factors
• Correctness
• Reliability
• Efficiency
• Integrity
• Usability
• Maintainability
• Flexibility
• Testability

Figure 1.12. McCall's software quality factors (product revision: maintainability, flexibility, testability; product transition: portability, reusability, interoperability; product operation: correctness, reliability, usability, integrity, efficiency)


Figure 1.13. Quality factors and metrics (software quality metrics — auditability, accuracy, communication commonality, completeness, complexity, conciseness, consistency, data commonality, error tolerance, execution efficiency, expandability, generality, hardware independence, instrumentation, modularity, operability, security, self-documentation, simplicity, system independence, traceability, and training — are cross-mapped to the quality factors)


A FRAMEWORK FOR TECHNICAL SOFTWARE METRICS
• The Challenge of Technical Metrics
• Measurement Principles
• The Attributes of Effective Software Metrics

Measurement Principles
• Formulation
• Collection
• Analysis
• Interpretation
• Feedback

The Attributes of Effective Software Metrics
• Simple and computable
• Empirically and intuitively persuasive
• Consistent and objective
• Consistent in its use of units and dimensions
• Programming language independent
• An effective mechanism for quality feedback

METRICS FOR THE ANALYSIS MODEL
• Function-Based Metrics
• The Bang Metric
• Metrics for Specification Quality

Function-Based Metrics

Figure 1.14. Part of the analysis model for SafeHome software (a data flow diagram relating the user, the SafeHome user interaction function, the test sensor function, and the monitoring & response subsystem through flows such as password, panic button, activate/deactivate, zone inquiry, sensor inquiry, zone settings, messages, sensor status, alarm alert, and system configuration data)


The data flow diagram is evaluated to determine the key measures required for computation of the function point metric:
• number of user inputs
• number of user outputs
• number of user inquiries
• number of files
• number of external interfaces

Function-Based Metrics

Measurement parameter           Count     Weighting factor (simple / average / complex)     Total
number of user inputs             3    ×   3 / 4 / 6    =    9
number of user outputs            2    ×   4 / 5 / 7    =    8
number of user inquiries          2    ×   3 / 4 / 6    =    6
number of files                   1    ×   7 / 10 / 15  =    7
number of external interfaces     4    ×   5 / 7 / 10   =   20
count total                                                  50

(each total in this example uses the simple weighting factor)
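The count total feeds the standard function point formula FP = count total × (0.65 + 0.01 × ΣFi), where ΣFi sums the 14 complexity-adjustment answers (0 to 5 each). A Python sketch using the counts from the table; the ΣFi value below is an assumed example, not from the slides:

```python
# Simple-complexity weights and counts from the table above.
WEIGHTS_SIMPLE = {"inputs": 3, "outputs": 4, "inquiries": 3,
                  "files": 7, "interfaces": 5}
counts = {"inputs": 3, "outputs": 2, "inquiries": 2,
          "files": 1, "interfaces": 4}

count_total = sum(counts[k] * WEIGHTS_SIMPLE[k] for k in counts)
print(count_total)  # 9 + 8 + 6 + 7 + 20 = 50

# FP = count_total * (0.65 + 0.01 * sum(Fi)); assume the 14
# adjustment questions sum to 46 (a hypothetical figure).
sum_fi = 46
fp = count_total * (0.65 + 0.01 * sum_fi)
print(round(fp, 2))  # 55.5
```

With a different ΣFi the adjustment factor ranges from 0.65 (all answers 0) to 1.35 (all answers 5), so FP can swing ±35% around the raw count total.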

METRICS FOR THE DESIGN MODEL
• High-Level Design Metrics
• Component-Level Design Metrics
• Interface Design Metrics

High-Level Design Metrics

Figure 1.16. Morphology metrics (a program architecture of nodes a through r connected by arcs; the depth and width of the structure are marked)
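For morphology metrics such as these, a simple overall size measure is size = n + a, where n is the number of nodes (modules) and a is the number of arcs (lines of control). A Python sketch on a small hypothetical architecture, not the one in the figure:

```python
# size = n + a for a program architecture given as {module: [children]}.
architecture = {
    "a": ["b", "c", "d"],
    "b": ["e"],
    "c": ["f", "g"],
    "d": [],
    "e": [], "f": [], "g": [],
}
n = len(architecture)                                 # nodes (modules)
a = sum(len(kids) for kids in architecture.values())  # arcs (control lines)
print(n + a)  # size = 7 + 6 = 13
```

Depth (longest root-to-leaf path) and width (maximum nodes at any level) can be read off the same structure to compare candidate architectures.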