Testing is the process of evaluating a system or its component(s) with the intent of finding whether it satisfies the specified requirements or not.
Testing is executing a system in order to identify any gaps, errors, or missing requirements contrary to the actual requirements.
Audience
This dictionary is an effort to put almost all the terms related to Software Testing in one place and explain them with suitable examples. The target audience for this dictionary is Software Testing Professionals, Software Quality Experts, and Software Developers.
Prerequisites
Before proceeding with the terms given in this dictionary, you should have a basic understanding of the software development life cycle (SDLC). A basic understanding of software programming in any programming language is also required.
What is Acceptance Testing?
Acceptance testing is a testing technique performed to determine whether or not the software system has met the requirement specifications. The main purpose of this test is to evaluate the system's compliance with the business requirements and verify whether it has met the criteria required for delivery to end users.
There are various forms of acceptance testing:
- User Acceptance Testing
- Business Acceptance Testing
- Alpha Testing
- Beta Testing
Acceptance Testing - In SDLC
The following diagram illustrates where acceptance testing fits in the software development life cycle.
The acceptance test cases are executed against the test data or using an acceptance test script, and the results are then compared with the expected results.
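As a minimal sketch of this execute-and-compare step, the snippet below pairs test data with expected results and reports pass or fail for each case; the login() function and its credentials are hypothetical placeholders, not part of any real system.

```python
# A minimal acceptance-test sketch: each case pairs test data with an
# expected result, and the actual outcome is compared against it.

def login(username, password):
    # Hypothetical stand-in for the system under test.
    return username == "admin" and password == "secret"

acceptance_cases = [
    # (description, test data, expected result)
    ("valid credentials are accepted",   ("admin", "secret"), True),
    ("invalid credentials are rejected", ("admin", "wrong"),  False),
]

for description, data, expected in acceptance_cases:
    actual = login(*data)
    status = "PASS" if actual == expected else "FAIL"
    print(f"{status}: {description} (expected={expected}, actual={actual})")
```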
Acceptance Criteria
Acceptance criteria are defined on the basis of the following attributes:
- Functional Correctness and Completeness
- Data Integrity
- Data Conversion
- Usability
- Performance
- Timeliness
- Confidentiality and Availability
- Installability and Upgradability
- Scalability
- Documentation
Acceptance Test Plan - Attributes
The acceptance test activities are carried out in phases. First, the basic tests are executed; if the test results are satisfactory, the more complex scenarios are then executed.
The acceptance test plan has the following attributes:
- Introduction
- Acceptance Test Category
- Operating Environment
- Test Case ID
- Test Title
- Test Objective
- Test Procedure
- Test Schedule
- Resources
The acceptance test activities are designed to reach one of the following conclusions:
- Accept the system as delivered
- Accept the system after the requested modifications have been made
- Do not accept the system
Acceptance Test Report - Attributes
The acceptance test report has the following attributes:
- Report Identifier
- Summary of Results
- Variations
- Recommendations
- Summary of To-Do List
- Approval Decision
What is Accessibility Testing?
Accessibility testing is a subset of usability testing wherein the users under consideration are people of all abilities and disabilities. The significance of this testing is to verify both usability and accessibility.
Accessibility testing aims to cater to people with different abilities, such as:
- Visual Impairment
- Physical Impairment
- Hearing Impairment
- Cognitive Impairment
- Learning Impairment
A good web application should cater to all sets of users, NOT just people with disabilities. These include:
- Users with poor communications infrastructure
- Older people and new users, who are often unfamiliar with computers
- Users with old systems (NOT capable of running the latest software)
- Users with non-standard equipment
- Users with restricted access
How to Perform Accessibility Testing
The Web Accessibility Initiative (WAI) describes the strategy for preliminary and conformance reviews of web sites, and it includes a list of software tools to assist with conformance evaluations. These tools range from checkers for specific issues, such as colour blindness, to automated tools that spider an entire site.
Web accessibility Testing Tools
| Product | Vendor | URL |
| --- | --- | --- |
| AccVerify | HiSoftware | http://www.hisoftware.com |
| Bobby | Watchfire | http://www.watchfire.com |
| WebXM | Watchfire | http://www.watchfire.com |
| Ramp Ascend | Deque | http://www.deque.com |
| InFocus | SSB Technologies | http://www.ssbtechnologies.com/ |
Role of Automated Tools in Accessibility Testing
The automated accessibility testing tools mentioned above are very good at identifying pages and lines of code that need to be manually checked for accessibility. They can:
- Check the syntax of the site's code
- Search for known patterns that humans have listed
- Identify pages containing elements that may cause problems
- Identify some actual accessibility problems
- Identify some potential problems
Interpreting the results from the automated accessibility testing tools requires experience in accessibility techniques along with an understanding of technical and usability issues.
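As an illustration of the kind of pattern check these tools automate, the sketch below flags <img> elements that lack an alt attribute (a common accessibility problem) using Python's standard html.parser module; the sample page is hypothetical, and real tools run many such checks.

```python
# A minimal sketch of one automated accessibility check: flag <img>
# tags that have no alt attribute.
from html.parser import HTMLParser

class ImgAltChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.problems = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs for the start tag.
        if tag == "img" and "alt" not in dict(attrs):
            line, _ = self.getpos()
            self.problems.append(f"<img> missing alt attribute (line {line})")

page = '<html><body><img src="logo.png"><img src="x.png" alt="x"></body></html>'
checker = ImgAltChecker()
checker.feed(page)
for problem in checker.problems:
    print(problem)
```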
What is Active Testing?
Active testing is a testing technique in which the tester introduces test data and analyses the results.
During active testing, a tester builds a mental model of the software under test, which continues to grow and become more refined as the interaction with the software continues.
How Do We Perform Active Testing?
- At the end of each and every action performed on the application under test, we need to check whether the application appears to fulfill the client's needs.
- If it does not, either the application needs to be adapted, or we have found a problem in the application. Continuously engaging in the testing process helps us come up with new ideas, test cases, and test data.
- At the same time, we need to note down things we might want to return to later, or follow up on them with the concerned team, eventually finding and pinpointing problems in the software.
- Hence, any application under test needs active testing, which involves testers who spot the defects.
What is Actual Outcome?
The actual outcome, also known as the actual result, is the result a tester gets after performing the test.
The actual outcome is always documented along with the test case during the test execution phase. After performing the tests, the actual outcome is compared with the expected outcome and the deviations are noted. A deviation, if any, is known as a defect.
In short, after getting the actual outcome, we can mark the scenario as pass or fail.
While developing the test cases, we usually have the following fields:
- Test Scenario
- Test Steps
- Parameters
- Expected Result
- Actual Result
Example:
Let us say that we need to check an input field that can accept a maximum of 10 characters.
While developing the test cases for the above scenario, the test cases are documented in the following way. In the example below, the first case is a PASS scenario while the second case is a FAIL.
| Scenario | Test Step | Expected Result | Actual Outcome |
| --- | --- | --- | --- |
| Verify that the input field accepts a maximum of 10 characters | Log in to the application and key in 10 characters | The application should accept all 10 characters. | The application accepts all 10 characters. |
| Verify that the input field does not accept more than 10 characters | Log in to the application and key in 11 characters | The application should NOT accept all 11 characters. | The application accepts all 11 characters. |
If the expected result does not match the actual result, we log a defect. The defect goes through the defect life cycle, and the testers retest the scenario after the fix.
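The table above can be expressed as a small check, as sketched below; accept_input() is a hypothetical placeholder for the application's input handling, assumed here to truncate input at 10 characters.

```python
# A sketch of the 10-character input-field scenario from the table:
# each case compares the actual outcome against the expected result.

MAX_LENGTH = 10

def accept_input(text):
    # Hypothetical field behaviour: keep at most MAX_LENGTH characters.
    return text[:MAX_LENGTH]

def run_case(scenario, text, expected_length):
    actual = len(accept_input(text))
    result = "PASS" if actual == expected_length else "FAIL"
    print(f"{result}: {scenario} (expected {expected_length}, got {actual})")

run_case("field accepts 10 characters", "a" * 10, 10)
run_case("field does not accept an 11th character", "a" * 11, 10)
```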
What is Adhoc Testing?
When software testing is performed without proper planning and documentation, it is said to be Adhoc Testing. Such tests are executed only once, unless defects are uncovered.
Adhoc tests are done after formal testing has been performed on the application. Adhoc methods are the least formal type of testing, as they do NOT follow a structured approach. Hence, defects found using this method are hard to replicate, as there are no test cases aligned to those scenarios.
Testing is carried out using the tester's knowledge of the application, and the tester tests randomly without following the specifications/requirements. Hence, the success of Adhoc testing depends upon the capability of the tester who carries out the test. The tester has to find defects without any proper planning and documentation, relying solely on intuition.
When to Execute Adhoc Testing?
Adhoc testing can be performed when there is limited time to do exhaustive testing, and it is usually performed after the formal test execution. Adhoc testing will be effective only if the tester has an in-depth understanding of the system under test.
Forms of Adhoc Testing:
- Buddy Testing: Two buddies, one from the development team and one from the test team, mutually work on identifying defects in the same module. Buddy testing helps the testers develop better test cases, while the development team can also make design changes early. This kind of testing usually happens after unit testing is completed.
- Pair Testing: Two testers are assigned the same modules; they share ideas and work on the same systems to find defects. One tester executes the tests while the other records notes on their findings.
- Monkey Testing: Testing is performed randomly, without any test cases, in order to break the system (a small sketch follows this list).
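As a rough sketch of monkey testing, the snippet below hammers a function with random strings and reports any crash that is not a clean rejection; parse_age() is a hypothetical stand-in for the system under test.

```python
# A minimal monkey-testing sketch: feed random input to the system
# under test and report any unexpected failure.
import random
import string

def parse_age(text):
    # Hypothetical code under test: convert text to an age in years.
    value = int(text)
    if not 0 <= value <= 150:
        raise ValueError("age out of range")
    return value

random.seed(42)  # make the random run reproducible
for _ in range(1000):
    length = random.randint(1, 5)
    candidate = "".join(random.choice(string.printable) for _ in range(length))
    try:
        parse_age(candidate)
    except ValueError:
        pass  # a clean rejection of bad input is acceptable
    except Exception as exc:  # anything else is a potential defect
        print(f"Possible defect for input {candidate!r}: {exc!r}")
```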
Various Ways to Make Adhoc Testing More Effective
- Preparation: By getting the defect details of a similar application, the probability of finding defects in the application under test is higher.
- Creating a Rough Idea: With a rough idea in place, the tester has a focused approach. It is NOT required to document a detailed plan of what to test and how to test it.
- Divide and Rule: By testing the application part by part, we have better focus and a better understanding of any problems found.
- Targeting Critical Functionalities: A tester should target those areas that are NOT covered while designing test cases.
- Using Tools: Defects can also be brought to light by using profilers, debuggers, and even task monitors. By being proficient in these tools, one can uncover several defects.
- Documenting the Findings: Though testing is performed randomly, it is better to document the tests if time permits and to note down any deviations. If defects are found, corresponding test cases are created so that testers can retest the scenario.
What is Age Testing?
It is a testing technique that evaluates a system's ability to perform in the future, and it is usually carried out by test teams. Age testing measures how significantly performance might drop as the system gets older.
Let us also understand the concept of Defect Age. It is measured in terms of two parameters:
1. Phases
2. Time
Defect Age - Phases:
Defect age in phases is defined as the difference between the defect injection phase and the defect detection phase.
Parameters:
1. The 'defect injection phase' is the phase of the software development life cycle in which the defect was introduced.
2. The 'defect detection phase' is the phase of the software development life cycle in which the defect was identified.
Formula:
Defect Age in Phase = Defect Detection Phase - Defect Injection Phase
Example:
Consider that the SDLC methodology we have adopted has the following phases:
1. Requirements Development
2. Design
3. Coding
4. Unit Testing
5. Integration Testing
6. System Testing
7. Acceptance Testing
If a defect is identified in Unit Testing (4) and the defect was introduced in the Design stage (2) of development, then the Defect Age is (4) - (2) = 2.
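The same arithmetic can be written out directly, as in the sketch below, which uses the phase list from the example above.

```python
# Defect Age in Phase = Defect Detection Phase - Defect Injection Phase,
# computed over the SDLC phases from the example above.
PHASES = [
    "Requirements Development", "Design", "Coding", "Unit Testing",
    "Integration Testing", "System Testing", "Acceptance Testing",
]

def defect_age_in_phases(injection_phase, detection_phase):
    return PHASES.index(detection_phase) - PHASES.index(injection_phase)

# Introduced in Design (phase 2), found in Unit Testing (phase 4): age 2.
print(defect_age_in_phases("Design", "Unit Testing"))  # prints 2
```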
Defect Age - Time:
Defect age in time is defined as the time difference between the date the defect was detected and the current date, provided the defect is still open.
Parameters:
1. Defects are in "Open" and "Assigned" status, NOT just in "New" status.
2. Defects "Closed" as "non-reproducible" or "duplicate" are NOT considered.
3. The difference in days or hours is calculated from the defect open date to the current date.
Formula:
Defect Age in Time = Defect Fix Date (or Current Date) - Defect Detection Date
Example:
If a defect was detected on 05/05/2013 11:30:00 AM and closed on 23/05/2013 12:00:00 PM, the Defect Age is calculated as follows:
Defect Age = 23/05/2013 12:00:00 PM - 05/05/2013 11:30:00 AM
Defect Age in Days = 18 days (approximately)
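The same calculation can be checked with Python's standard datetime module, using the dates from the example above.

```python
# Defect Age in Time = Defect Fix Date (or Current Date) - Detection Date
from datetime import datetime

detected = datetime(2013, 5, 5, 11, 30)   # 05/05/2013 11:30:00 AM
closed   = datetime(2013, 5, 23, 12, 0)   # 23/05/2013 12:00:00 PM

age = closed - detected
print(age.days)  # 18 (plus half an hour)
```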
Outcome:
For assessing the effectiveness of each phase and of any review/testing activities: the lower the defect age, the better the effectiveness.
What is Manual Testing?
Manual testing is a testing process that is carried out manually in order to find defects, without the use of automation tools or scripting.
A test plan document is prepared to act as a guide for the testing process, so as to achieve complete test coverage.
Following are the testing techniques that are performed manually during the test life cycle:
- Acceptance Testing
- White Box Testing
- Black Box Testing
- Unit Testing
- System Testing
- Integration Testing
What is Model-Based Testing?
Model-based testing is a software testing technique in which the test cases are derived from a model that describes the functional aspects of the system under test. It makes use of a model to generate tests, and it includes both offline and online testing.
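As a small sketch of the idea, the model below describes a hypothetical door as a finite state machine, and test sequences are derived by walking the model rather than being written by hand.

```python
# A minimal model-based testing sketch: the system under test is
# modelled as (state, action) -> next state, and test sequences are
# generated by traversing the model. The door model is hypothetical.
MODEL = {
    ("closed", "open"):   "opened",
    ("opened", "close"):  "closed",
    ("closed", "lock"):   "locked",
    ("locked", "unlock"): "closed",
}

def derive_tests(start, depth):
    """Enumerate every action sequence of the given length from the model."""
    if depth == 0:
        return [[]]
    tests = []
    for (state, action), next_state in MODEL.items():
        if state == start:
            for rest in derive_tests(next_state, depth - 1):
                tests.append([action] + rest)
    return tests

for sequence in derive_tests("closed", 2):
    print(sequence)  # ['open', 'close'] and ['lock', 'unlock']
```

Each generated sequence can then be replayed against the real system, comparing its observed behaviour with the state the model predicts.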
Model-Based Testing - Importance:
- Unit testing won't be sufficient to check the functionalities.
- It ensures that the system behaves correctly for a given sequence of actions.
- Model-based testing has been adopted as an integrated part of the testing process.
- Commercial tools have been developed to support model-based testing.
Advantages:
- Higher level of Automation is achieved.
- Exhaustive testing is possible.
- Changes to the model can be easily tested.
Disadvantages:
- Requires a formal specification or model to carry out testing.
- Changes to the model might result in a different set of tests altogether.
- Test Cases are tightly coupled to the model.
What is Modified Condition Coverage?
Modified Condition/Decision Coverage (MC/DC) enhances the condition/decision coverage criteria by requiring that each condition be shown to independently affect the outcome of the decision. This kind of testing is performed on mission-critical applications whose failure might lead to death, injury, or monetary loss. Designing for Modified Condition/Decision Coverage requires a more thoughtful selection of test cases, carried out on a standalone module or on integrated components (a worked example follows the list below).
Characteristics of Modified Condition Coverage:
- Every entry and exit point in the program has been invoked at least once.
- Every decision has been tested for all the possible outcomes of the branch.
- Every condition in a decision in the program has taken all possible outcomes at least once.
- Every condition in a decision has been shown to independently affect that decision's outcome.
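As a worked illustration, consider the hypothetical decision (A or B) and C. MC/DC can be achieved with n + 1 = 4 test cases for the n = 3 conditions: each pair of cases below differs in exactly one condition, and that change alone flips the decision's outcome, demonstrating the condition's independent effect.

```python
# A worked MC/DC sketch for the hypothetical decision (A or B) and C.
def decision(a, b, c):
    return (a or b) and c

mcdc_cases = [
    # (a, b, c, expected outcome)
    (True,  False, True,  True),   # baseline
    (False, False, True,  False),  # flipping A alone flips the outcome
    (False, True,  True,  True),   # flipping B alone flips it (vs. case 2)
    (True,  False, False, False),  # flipping C alone flips it (vs. case 1)
]

for a, b, c, expected in mcdc_cases:
    assert decision(a, b, c) == expected
print("All 4 MC/DC cases pass (n + 1 cases for n = 3 conditions)")
```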
What is Modularity Driven Testing?
Modularity driven testing is an automation testing framework in which small, independent modules of automation scripts are developed for the application under test. These individual scripts are combined to form a test that realizes a particular test case, as in the sketch below.
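A minimal sketch of the idea follows; the login, search, and logout modules and their assertions are hypothetical placeholders for real automation scripts.

```python
# A modularity-driven sketch: small, independent script modules are
# composed into a complete test realizing one test case.
def login(session, user):
    session["user"] = user          # module 1: authentication step
    return session

def search(session, term):
    session["results"] = [term]     # module 2: search step
    return session

def logout(session):
    session.clear()                 # module 3: teardown step
    return session

def test_search_while_logged_in():
    # The test case is assembled from the independent modules above.
    session = login({}, "alice")
    session = search(session, "testing")
    assert session["results"] == ["testing"]
    logout(session)
    assert "user" not in session
    print("test_search_while_logged_in passed")

test_search_while_logged_in()
```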