This hands-on course provides test engineers with advanced skills in test analysis, design, and execution, and with the ability to define and carry out the tasks required to put a test strategy into action. Attendees will learn how to analyze a system in light of the user’s quality expectations, and how to evaluate system requirements in formal and informal reviews, using their understanding of the business domain to judge requirement validity. They will learn to analyze, design, implement, and execute tests, using risk considerations to determine the appropriate effort and priority for each test, and to report on testing progress with the evidence needed to support their evaluations of system quality. Throughout, attendees will learn how to carry out a testing effort that supports both explicit and implicit testing objectives.
By the end of this course, an attendee should be able to:
• Perform the appropriate testing activities based on the software development lifecycle being used.
• Determine the proper prioritization of the testing activities based on the information provided by the risk analysis.
• Select and apply appropriate testing techniques to ensure that tests provide an adequate level of confidence, based on defined coverage criteria.
• Provide the appropriate level of documentation relevant to the testing activities.
• Determine the appropriate types of functional testing to be performed.
• Assume responsibility for the usability testing of a given project.
• Effectively participate in formal and informal reviews with stakeholders, applying knowledge of typical mistakes made in work products.
1. Testing Process
1.2 Testing in the Software Development Lifecycle
1.3 Test Planning, Monitoring and Control
1.3.1 Test Planning
1.3.2 Test Monitoring and Control
1.4 Test Analysis
1.5 Test Design
1.5.1 Concrete and Logical Test Cases
1.5.2 Creation of Test Cases
1.6 Test Implementation
1.7 Test Execution
1.8 Evaluating Exit Criteria and Reporting
1.9 Test Closure Activities
2. Test Management: Responsibilities for the Test Analyst
2.2 Test Progress Monitoring and Control
2.3 Distributed, Outsourced and Insourced Testing
2.4 The Test Analyst’s Tasks in Risk-Based Testing
2.4.2 Risk Identification
2.4.3 Risk Assessment
2.4.4 Risk Mitigation
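The risk-based testing tasks above (identification, assessment, mitigation) are commonly driven by a simple likelihood-times-impact score. The following is a minimal sketch of that idea; the risk items and the 1–5 scales are illustrative assumptions, not material from the course:

```python
# Hedged sketch of risk-based test prioritization: each product risk is
# scored as likelihood x impact, and test effort is ordered by that score.
# The risk items and scale values below are illustrative only.
risks = [
    {"item": "payment calculation", "likelihood": 4, "impact": 5},
    {"item": "report formatting",   "likelihood": 2, "impact": 2},
    {"item": "login session",       "likelihood": 3, "impact": 4},
]

for risk in risks:
    risk["score"] = risk["likelihood"] * risk["impact"]

# Highest-scoring risks are tested first and most thoroughly.
for risk in sorted(risks, key=lambda r: r["score"], reverse=True):
    print(f'{risk["item"]}: risk score {risk["score"]}')
```

In practice the scales, the scoring formula, and the mapping from score to test depth are agreed with stakeholders during risk assessment rather than fixed in code.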
3. Test Techniques
3.2 Specification-Based Techniques
3.2.1 Equivalence Partitioning
3.2.2 Boundary Value Analysis
3.2.3 Decision Tables
3.2.4 Cause-Effect Graphing
3.2.5 State Transition Testing
3.2.6 Combinatorial Testing Techniques
3.2.7 Use Case Testing
3.2.8 User Story Testing
3.2.9 Domain Analysis
3.2.10 Combining Techniques
3.3 Defect-Based Techniques
3.3.1 Using Defect-Based Techniques
3.3.2 Defect Taxonomies
3.4 Experience-Based Techniques
3.4.1 Error Guessing
3.4.2 Checklist-Based Testing
3.4.3 Exploratory Testing
3.4.4 Applying the Best Technique
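Two of the specification-based techniques listed above, equivalence partitioning and boundary value analysis, can be illustrated with a short sketch. The function under test and its valid range are hypothetical, chosen only to show how test values are derived:

```python
def is_valid_age(age: int) -> bool:
    """Hypothetical system under test: accepts ages 18..65 inclusive."""
    return 18 <= age <= 65

# Equivalence partitioning: one representative value per partition.
partitions = {
    "below range": (10, False),
    "in range":    (40, True),
    "above range": (70, False),
}

# Boundary value analysis: values at and immediately adjacent to each boundary.
boundaries = [(17, False), (18, True), (19, True),
              (64, True), (65, True), (66, False)]

for value, expected in list(partitions.values()) + boundaries:
    assert is_valid_age(value) == expected, f"unexpected result for {value}"
print("all partition and boundary checks passed")
```

Note how the two techniques complement each other: partitioning keeps the test set small, while boundary analysis targets the edges where off-by-one defects typically hide.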
4. Testing Software Quality Characteristics
4.2 Quality Characteristics for Business Domain Testing
4.2.1 Accuracy Testing
4.2.2 Suitability Testing
4.2.3 Interoperability Testing
4.2.4 Usability Testing
4.2.5 Accessibility Testing
5. Reviews
5.2 Using Checklists in Reviews
6. Defect Management
6.2 When Can a Defect be Detected?
6.3 Defect Report Fields
6.4 Defect Classification
6.5 Root Cause Analysis
7. Test Tools
7.2 Test Tools and Automation
7.2.1 Test Design Tools
7.2.2 Test Data Preparation Tools
7.2.3 Automated Test Execution Tools
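As a minimal illustration of the automated test execution tools listed above, the sketch below uses Python’s built-in unittest framework; the function under test and its behavior are assumptions for the example, not part of the course material:

```python
import unittest

def apply_discount(price: float, percent: float) -> float:
    """Hypothetical function under test: apply a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class DiscountTests(unittest.TestCase):
    # Each test method encodes one expected behavior; the tool runs
    # them all and reports pass/fail without manual checking.
    def test_typical_discount(self):
        self.assertEqual(apply_discount(100.0, 25), 75.0)

    def test_zero_discount_is_identity(self):
        self.assertEqual(apply_discount(80.0, 0), 80.0)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(50.0, 150)

if __name__ == "__main__":
    unittest.main(exit=False, verbosity=2)
```

Commercial and open-source execution tools differ in scale and reporting, but all follow this pattern: expected results are captured once, then checked automatically on every run.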