CIS 573 Fall 2012
Midterm Exam Study Guide
This document lists the important questions that you should be able to answer about the assigned readings, as well as some other questions brought up in lecture.
Note that this list implies nothing about the types of questions that will appear on the exam, or about their level of difficulty.
Part 1: Intro to Software Testing
B. Kitchenham and S.L. Pfleeger, "Software quality: the elusive target"
P. Ammann and J. Offutt, Introduction to Software Testing, chapter 1
- What are the "views" of software quality that the authors identify? How do they differ from each other?
- How can software quality be measured?
- What are the important features of McCall's quality model? What are its shortcomings?
- What are the important features of the ISO 9126 quality model? What are its shortcomings?
P.C. Jorgensen, Software Testing: A Craftsman's Approach, chapters 5 and 6 (available in Blackboard)
- What is the definition of "software testing"? What are its goals?
- What are the differences between "validation", "verification", and "testing"?
- What are the definitions of "fault", "error", and "failure"?
- What are the definitions of "test requirement", "coverage", and "coverage level"?
- What is the definition of "subsumption"?
Other topics covered in lecture
- What is meant by "criteria" and "coverage"?
- What is the difference between black-box testing and white-box testing?
- What is exhaustive testing?
- What is meant by equivalence partitioning, boundary analysis, and robustness testing?
- What is the single fault assumption? What are its implications?
- What is meant by weak normal equivalence class testing? strong normal? weak robust? strong robust?
- What is a control flow graph? How is it used in white-box testing?
- What is meant by statement coverage, branch coverage, and path coverage? How are they related to white-box testing?
- What is a path condition? Why does path coverage subsume statement and branch coverage?
- Lecture notes: intro to testing, test case generation
- What is a "test oracle"? What does it mean for an oracle to be "sound"? "complete"?
- What are the differences between "black-box" and "white-box" testing?
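To make the coverage questions above concrete, here is a small illustrative sketch (the function is hypothetical, not from the readings) showing why statement coverage does not subsume branch coverage:

```python
def sign_nonneg(x):
    """Return 1 for positive x, 0 otherwise (hypothetical example)."""
    result = 0
    if x > 0:
        result = 1
    return result

# A single test executes every statement (100% statement coverage):
assert sign_nonneg(5) == 1
# ...but the false branch of "if x > 0" was never taken.
# Branch coverage additionally requires a test like:
assert sign_nonneg(-3) == 0
```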
Part 2: Testing Strategies and Adequacy
L.A. Clarke, "A System to Generate Test Data and Symbolically Execute Programs"
J.H. Andrews et al., "Is mutation an appropriate tool for testing experiments?"
- What are the goals of symbolic execution? How is it related to white-box testing?
- What can you learn about a program from symbolic execution?
- What are the practical limitations of symbolic execution?
- What does it mean for a path condition to be satisfiable? What is the implication if it is not?
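As a quick worked example of path conditions (the function is hypothetical), the comments below give the path condition for each path through a small program; one path's condition is unsatisfiable, so symbolic execution would report that path as infeasible:

```python
def f(x):
    if x > 10:
        if x < 5:
            # path condition: x > 10 AND x < 5 -- unsatisfiable (dead code)
            return "impossible"
        # path condition: x > 10 AND x >= 5, i.e. x > 10
        return "big"
    # path condition: x <= 10
    return "small"

# No concrete input can reach the infeasible path:
assert all(f(x) != "impossible" for x in range(-1000, 1000))
```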
Other topics covered in lecture
- What is meant by mutation analysis? What are its strengths/weaknesses as an adequacy criterion?
- Lecture notes: symbolic execution, test suite adequacy
- What is meant by prime path coverage? edge-pair coverage?
- What are the strengths/weaknesses of these structural criteria?
- What is meant by data flow criteria? What are their strengths/weaknesses?
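A minimal sketch of mutation analysis (functions are hypothetical): a mutant is produced by a small syntactic change, and a test suite is judged by whether it "kills" the mutant, i.e., distinguishes it from the original:

```python
def abs_val(x):
    return x if x >= 0 else -x

# Mutant: the relational operator ">=" is flipped to "<="
def abs_val_mutant(x):
    return x if x <= 0 else -x

# A weak suite that only probes x == 0 fails to kill the mutant:
assert abs_val(0) == abs_val_mutant(0) == 0   # mutant survives
# Adding a nonzero input kills it -- the outputs differ:
assert abs_val(7) == 7
assert abs_val_mutant(7) == -7
```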
Part 3: Other Testing Approaches
L.A. Clarke and D.S. Rosenblum, "A historical perspective on runtime assertion checking in software development"
C. Murphy and G. Kaiser, "Empirical Evaluation of Approaches to Testing Applications without Test Oracles"
- What are "invariants" and "assertions"? What is the difference between a pre-condition and a post-condition?
- How are invariants used to assist the testing process?
- How can runtime assertion checking be used to detect faults in applications without test oracles?
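A small sketch of runtime assertion checking (the function is hypothetical): the pre-condition constrains the input, and the post-condition acts as a partial test oracle even when no expected output is known in advance:

```python
def int_sqrt(n):
    # Pre-condition: the caller must supply a non-negative input.
    assert n >= 0, "pre-condition violated: n must be non-negative"
    r = 0
    while (r + 1) * (r + 1) <= n:
        r += 1
    # Post-condition: r is the integer floor of the square root of n.
    assert r * r <= n < (r + 1) * (r + 1), "post-condition violated"
    return r
```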
D. Beyer et al., "The software model checker BLAST"
- What is "metamorphic testing"? How is it used to create additional test cases?
- How can metamorphic testing be used to detect faults in applications without test oracles? How is it similar to the "N-version programming" or "pseudo-oracle" approach?
- What is the difference between metamorphic testing, statistical metamorphic testing, and heuristic metamorphic testing?
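A minimal metamorphic-testing sketch (the checker function is hypothetical): we may not know the correct value of sin(x) for an arbitrary x, but the metamorphic relation sin(pi - x) = sin(x) lets us check the implementation without an oracle:

```python
import math

def mt_check_sine(x):
    """Check the metamorphic relation sin(pi - x) == sin(x)
    for the implementation under test (here, math.sin)."""
    return math.isclose(math.sin(math.pi - x), math.sin(x), abs_tol=1e-9)

# The relation holds for a follow-up input derived from any original input:
assert mt_check_sine(0.7)
```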
Other topics covered in lecture
- What is software model checking? How is it used in verification?
- How are counterexamples used in model checking?
- What are the practical limitations of model checking?
- Lecture notes: property-based testing, model checking, integration testing
- What does it mean for a property to be necessary? sufficient? sound?
- What is the difference between verification and testing?
- What is the difference between top-down integration and bottom-up integration?
- What is a test stub? test driver?
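To illustrate the last two terms (all names here are hypothetical), a test stub stands in for a component the unit under test depends on, while a test driver invokes the unit and checks its results:

```python
# Unit under test: depends on a lower-level "database" component.
def count_active_users(db):
    return sum(1 for u in db.fetch_users() if u["active"])

# Test stub: replaces the real database during top-down integration.
class StubDatabase:
    def fetch_users(self):
        return [{"active": True}, {"active": False}, {"active": True}]

# Test driver: calls the unit with the stub and checks the result.
def run_driver():
    result = count_active_users(StubDatabase())
    assert result == 2
    return result
```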
Part 4: Debugging and Regression Testing
K. Gallagher and D. Binkley, "Program slicing"
J.A. Jones, M.J. Harrold, J. Stasko, "Visualization of test information to assist fault localization"
- What is the definition of a program slice?
- What are data dependence and control dependence? How are they used to determine program slices?
- What is the difference between a forward slice and a backward slice? Between a static slice and a dynamic slice?
- How are slices used in debugging? What are some of the limitations of using slicing to localize faults in a program?
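A small worked example of a backward slice (the program is hypothetical): the comments mark which statements belong to the slice on the value of `total` at the final line, via data and control dependence:

```python
# Backward slice criterion: value of `total` at the last line.
n = 5
total = 0                    # in slice: data dependence (defines total)
product = 1                  # NOT in slice: never affects total
for i in range(1, n + 1):    # in slice: control dependence (guards the update)
    total += i               # in slice: data dependence
    product *= i             # NOT in slice
print(total)                 # slicing criterion
```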
M.J. Harrold et al., "Regression test selection for Java software"
- In determining the color, why do the authors consider the percentage of passing (or failing) test cases that include a particular statement, as opposed to the number of passing (or failing) test cases?
S. Elbaum, A.G. Malishevsky, and G. Rothermel, "Prioritizing test cases for regression testing"
- In practice, why is test case selection necessary for regression testing?
- What is the definition of a "safe" regression test selection technique?
- How is a "dangerous entity" defined? How are dangerous entities detected in the algorithm described in the paper?
- What are the special features of Java that affect regression testing?
Other topics covered in lecture
- In practice, why is test case prioritization necessary for regression testing?
- What test case prioritization techniques are described in the paper? How do the "total" techniques differ from their corresponding "additional" techniques?
- Which of the techniques are specific to regression testing?
- How is fault exposing potential (FEP) measured? How is the fault index (FI) determined? How are they different?
Part 5: Reliability and Fault Tolerance
M.R. Lyu, Handbook of Software Reliability Engineering, chapter 1
Z. Xie, H. Sun, and K. Saluja, "A survey of software fault tolerance techniques"
- What is the definition of "software reliability"? What is MTTF/MTBF?
- What is an operational profile? How is it used in determining software reliability?
- What is the difference between reliability estimation and reliability prediction?
- How are software reliability models used?
- What is the difference between fault prevention, fault removal, fault forecasting, and fault tolerance?
- What is meant by "design diversity"? What is a Recovery Block? What is N-version Programming? What are their advantages/disadvantages as fault tolerance techniques?
- What is meant by "data diversity"? What is a Retry Block? How is it different from a Recovery Block?
- What is an Input Domain Reliability Model (IDRM)? What is a Software Reliability Growth Model (SRGM)? How are they related to software reliability?
- What is the goal of fault tolerant software?
- What is the difference between an interface exception, a local exception, and a failure exception?
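A minimal Recovery Block sketch (all names are hypothetical): the primary version runs first, and if its result fails the acceptance test (or it raises), control falls back to an alternate version:

```python
def recovery_block(inputs, primary, alternates, acceptance_test):
    """Recovery Block sketch: try each version in order until one
    produces a result that passes the acceptance test."""
    for version in [primary] + alternates:
        try:
            result = version(inputs)
        except Exception:
            continue                      # treat a crash as a failed version
        if acceptance_test(inputs, result):
            return result
    raise RuntimeError("all versions failed the acceptance test")

buggy_sort = lambda xs: xs                # faulty primary version (does nothing)
safe_sort = lambda xs: sorted(xs)         # independently developed alternate
is_sorted = lambda xs, ys: ys == sorted(xs)   # acceptance test

# The primary fails the acceptance test, so the alternate takes over:
assert recovery_block([3, 1, 2], buggy_sort, [safe_sort], is_sorted) == [1, 2, 3]
```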
Part 6: Performance and Efficiency
- What are some of the tradeoffs that software engineers make when they attempt to improve the speed of their code?
- What are some of the rules of thumb for making code faster?
- What performance-related issues could arise from using threads in Java?
- What is meant by "lazy evaluation"?
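A quick sketch of lazy evaluation in Python (names are hypothetical), using a generator: values are computed only on demand, so even an infinite stream can be consumed safely:

```python
import itertools

def naturals():
    """Infinite stream of natural numbers; values are produced lazily."""
    n = 0
    while True:
        yield n
        n += 1

# Only the first five values are ever computed:
first_five = list(itertools.islice(naturals(), 5))
assert first_five == [0, 1, 2, 3, 4]
```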
Part 7: Security
H.H. Thompson, "Why security testing is hard"
D.J. Bernstein, "Some thoughts on security after ten years of qmail 1.0"
- What are the four different types of security vulnerabilities? How are each of them exploited by attackers? How can you as a tester create test cases to check for them?
- What is meant by "trusted code"? What is "untrusted code"?
- What is the "principle of least privilege"?
- What is meant by "code-volume-minimization"? How does it increase the security of the code?
- What is meant by the "CIA Model" of security?
- What are some of the tradeoffs involved when trying to make a system secure?
- What assumptions are made when creating an attack model?
- What is the difference between a buffer overflow and an integer overflow? How might an attacker use these to his/her advantage?
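To make integer overflow concrete: Python's own integers are arbitrary-precision and do not overflow, but fixed-width integers (as in C) wrap around silently. This sketch uses `ctypes` to mimic a signed 32-bit integer; an attacker can exploit such wraparound, e.g., to make a length check pass with a huge value:

```python
import ctypes

x = ctypes.c_int32(2**31 - 1)   # largest signed 32-bit value: 2147483647
x.value += 1                    # wraps around instead of growing
assert x.value == -2**31        # now -2147483648
```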
Part 8: Usability
J. Noyes, "The Human Factors Toolkit" (available in Blackboard)
J. Rubin and D. Chisnell, Handbook of Usability Testing, chapter 3 (available in Blackboard)
- What are the five fundamental fallacies of user interface design?
- What is meant by "user-centered design"?
- What are subjective methods of evaluating a user interface? What are objective methods? What are empirical methods? What factors affect the method that you should use?
- What is a Formative Usability Study? What is the difference between analytic methods and empirical methods?
- What is a Summative Usability Study? How is it different from a Formative Usability Study?
- What activities related to user interface design happen during the Needs Analysis and Requirements Gathering stage?
- What is the difference between a Hi-Fi prototype and a Lo-Fi prototype?
Last updated: Thu Oct 18, 1:12pm