
1. Requirements-based testing

Requirements-based testing is a testing approach in which test cases, conditions, and data are derived
from the requirements. It covers functional tests as well as non-functional attributes such as
performance, reliability, and usability.

Stages in Requirements-based Testing:


 Define Test Completion Criteria - Testing is complete only when all the functional and
non-functional testing is complete.
 Design Test Cases - A test case has five parameters: the initial state or precondition,
data setup, the inputs, expected outcomes, and actual outcomes.
 Execute Tests - Execute the test cases against the system under test and document the
results.
 Verify Test Results - Verify that the expected and actual results match.
 Verify Test Coverage - Verify that the tests cover both the functional and non-functional aspects of
the requirement.
 Track and Manage Defects - Any defect detected during the testing process goes through
the defect life cycle and is tracked to resolution. Defect statistics are maintained, which
give the overall status of the project.
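The five test-case parameters can be sketched as a simple data structure (the field names and the login example are illustrative, not from any standard):

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    """One requirements-based test case with the five parameters above."""
    precondition: str         # initial state the system must be in
    data_setup: dict          # data that must exist before execution
    inputs: dict              # values fed to the system under test
    expected_outcome: str     # result the requirement says we should see
    actual_outcome: str = ""  # filled in during the Execute Tests stage

    def passed(self) -> bool:
        # Verify Test Results: expected and actual outcomes must match
        return self.actual_outcome == self.expected_outcome

tc = TestCase(
    precondition="user is logged out",
    data_setup={"account": "alice"},
    inputs={"username": "alice", "password": "s3cret"},
    expected_outcome="login succeeds",
)
tc.actual_outcome = "login succeeds"  # recorded after execution
print(tc.passed())  # True
```

Recording the actual outcome separately from the expected one is what makes the Verify Test Results stage a mechanical comparison.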

Requirements Testing process:


 Testing must be carried out in a timely manner.
 The testing process should add value to the software life cycle; hence it needs to be effective.
 Testing the system exhaustively is impossible; hence the testing process also needs to be
efficient.
 Testing must provide the overall status of the project; hence it should be manageable.

Need for Requirements-based Testing


 It provides solutions to problems identified in the project.
 It discovers and fixes low-quality requirements, producing valid input that contributes
greatly to defining a clear scope for the project.
 It provides a set of quality assurance activities and management tools that help get the
requirements right from the outset.
 It makes it possible to discover requirements errors before they become extremely expensive to fix,
and to manage the inevitable changes during the software life cycle.

The RBT methodology


1. Validate requirements against objectives.
Optimize project scope by ensuring that each requirement satisfies at least one business objective. If
there is no match between the requirements and business objectives (if “what” does not match the
“why”), refinement is necessary.
2. Apply use cases against requirements. Some organizations document their requirements with
use cases. Map the requirements against a task-oriented or interaction-oriented view of the system. If
one or more use cases cannot be addressed by the requirements, then the requirements are not
complete.
3. Perform an initial ambiguity review. An ambiguity review is a technique for identifying and
eliminating ambiguous words, phrases, and constructs. It is not a review of the content of the
requirements. The ambiguity review produces a higher-quality set of requirements for review by the
rest of the project team.

Figure - Requirements-based testing process flow

4. Perform domain expert reviews. Feedback from users and domain experts should be used to refine
the requirements before additional work is done.
5. Structure and formalize requirements. To systematically achieve high test coverage, formal and
structured representations of the requirements need to be created. Multiple techniques can be used to
provide structure and formality to natural-language requirements. The purpose of these techniques
is to reveal the cause-effect relationships embedded within the requirements, that is, to express
requirements as a set of conditions (causes) and resulting actions (effects).
6. Perform logical consistency checks and design test cases. A set of logical test cases can be
defined (manually or automatically) that is exactly equivalent to the functionality captured in the
requirements. However, this set of test cases may include many redundant cases (i.e. cases overlapping
with other test cases).
7. Review of test cases by requirements authors. The designed test cases are reviewed
by the requirements authors. If there is a problem with a test case, the requirements associated with
the test case can be corrected and the test cases redesigned.

8. Validate test cases with the users/domain experts. If there is a problem with the test case, the
requirements associated with it can be corrected and the test case redesigned. Users/domain experts
obtain a better understanding of what the deliverable system will be like.
9. Review of test cases by developers. The test cases are also reviewed by the developers. By doing
so, the developers understand what they are going to be tested on and obtain a better understanding
of what they are expected to deliver.
10. Use test cases in design review. The test cases restate the requirements as a series of
causes and effects. As a result, the test cases can be used to validate that the design is robust enough
to satisfy the requirements. If the design cannot meet the requirements, then either the requirements
are infeasible or the design needs rework.
11. Use test cases in code review. Each code module must deliver a portion of the requirements.
The test cases can be used to validate that each code module delivers what is
expected.
12. Verify code against the test cases derived from requirements. The final step is to build test
cases from the logical test cases that have been designed by adding data and navigation to them, and
executing them against the code to compare the actual behavior to the expected behavior.
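Steps 5 and 6 can be illustrated with a toy example: a requirement is expressed as conditions (causes) and a resulting action (effect), and logical test cases are then enumerated from the cause combinations. The withdrawal rule here is hypothetical, not from any real system:

```python
from itertools import product

# Hypothetical requirement: "A withdrawal is approved only if the card
# is valid AND funds are sufficient."  Causes -> effect:
def effect(card_valid: bool, funds_sufficient: bool) -> str:
    return "approve" if card_valid and funds_sufficient else "decline"

# Step 6: enumerate logical test cases, i.e. every combination of the
# causes paired with the effect the requirement demands.
logical_test_cases = [
    {"card_valid": cv, "funds_sufficient": fs, "expected": effect(cv, fs)}
    for cv, fs in product([True, False], repeat=2)
]

for case in logical_test_cases:
    print(case)
```

The enumeration is exactly equivalent to the stated functionality; redundancy reduction (dropping overlapping cases) would be a further pass over this list.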

2. Positive and Negative testing

Software testing is a process of verification and validation that checks whether the software application
under test works as expected. To test the application, we provide inputs and check whether the
results match what is specified in the requirements. This testing activity is carried out to find
defects in the code and improve the quality of the software application. Testing of an application can be
carried out in two different ways: positive testing and negative testing.

Positive Testing:
Positive testing is a testing process in which the system is validated against valid input data. In this
testing, the tester checks only valid sets of values and verifies that the application behaves as
expected for its expected inputs. The main intention of this testing is to check that the software
application does not show an error when it is not supposed to, and shows an error when it is supposed to. Such
testing is carried out with a positive point of view, executing only the positive scenarios.
Positive testing tries to prove that a given product always meets the requirements
and specifications.

Example of Positive Testing:


Consider a scenario where you want to test an application containing a simple textbox for entering an
age, where the requirements say it should accept only integer values. Providing only positive
integer values and checking whether the application works as expected is positive testing.

Figure – Positive Testing
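A minimal sketch of this positive test in code, assuming a hypothetical `validate_age` function that implements the integer-only requirement:

```python
def validate_age(value: str) -> int:
    """Hypothetical validator: the requirement says only integer values."""
    if not value.isdigit():
        raise ValueError("age must be an integer")
    return int(value)

# Positive testing: feed only valid inputs and expect normal behaviour.
for valid_input in ["0", "25", "99"]:
    assert validate_age(valid_input) == int(valid_input)
print("all positive cases passed")
```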

Negative Testing:
Negative testing is a testing process in which the system is validated against invalid input data.
A negative test checks whether the application behaves as expected with invalid inputs. The main
intention of this testing is to catch the application not showing an error when it is supposed
to, or showing an error when it is not supposed to. Such testing is carried out with a negative point of
view, executing test cases only for invalid sets of input data.

Negative testing identifies inputs the system was not designed to handle, or handles poorly, by
providing a variety of invalid inputs. The main reason behind negative testing is to check
the stability of the software application against a wide variety of incorrect input data.

Example of Negative Testing


Consider the same age textbox example, which should accept only integer values. Here,
provide characters such as "abcd" in the age textbox and check the behavior of the application: it
should either show a validation error message for all invalid (non-integer) inputs, or
not allow non-integer values to be entered at all.

Figure – Negative Testing
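The corresponding negative test can be sketched as follows, again using a hypothetical `validate_age` function for the integer-only requirement:

```python
def validate_age(value: str) -> int:
    # Hypothetical validator: the requirement says only integer values.
    if not value.isdigit():
        raise ValueError("age must be an integer")
    return int(value)

# Negative testing: invalid inputs must be rejected with a validation error.
for invalid_input in ["abcd", "12a", "-5", ""]:
    try:
        validate_age(invalid_input)
    except ValueError:
        pass  # expected: the application reports a validation error
    else:
        raise AssertionError(f"{invalid_input!r} was wrongly accepted")
print("all negative cases rejected as expected")
```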

Negative testing helps improve the test coverage of the software application under test.
Both positive and negative testing approaches are equally important for making the application
more reliable and stable.

Positive Test Scenarios:


 Password textbox should accept 6 characters.
 Password textbox should accept up to 20 characters.
 Password textbox should accept any value between 6 and 20 characters in length.
 Password textbox should accept all numeric and alphabetic values.
Negative Test Scenarios:
 Password textbox should not accept fewer than 6 characters.
 Password textbox should not accept more than 20 characters.
 Password textbox should not accept special characters.
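These scenarios can be expressed as assertions against a hypothetical `is_valid_password` function; the 6-20 alphanumeric rule is taken from the scenarios above:

```python
def is_valid_password(pw: str) -> bool:
    # Hypothetical rule from the scenarios: 6-20 chars, letters/digits only.
    return 6 <= len(pw) <= 20 and pw.isalnum()

# Positive scenarios: values inside the rule must be accepted.
assert is_valid_password("abc123")          # exactly 6 characters
assert is_valid_password("a" * 20)          # exactly 20 characters
assert is_valid_password("Passw0rd12")      # mixed letters and digits

# Negative scenarios: values outside the rule must be rejected.
assert not is_valid_password("abc12")       # fewer than 6 characters
assert not is_valid_password("a" * 21)      # more than 20 characters
assert not is_valid_password("pass@word1")  # special character
print("all password scenarios hold")
```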

In both types of testing, the following need to be considered:


 Input data
 Action which needs to be performed
 Output Result

Testing Techniques used for Positive and Negative Testing:


The following techniques are used for positive and negative validation:
 Boundary Value Analysis
 Equivalence Partitioning

Boundary Value Analysis:
This is a software testing technique in which test cases are designed to include values at
the boundary. If the input data lies within the boundary value limits, it is said to be positive
testing. If the input data is picked from outside the boundary value limits, it is said to be negative
testing.

Figure – Boundary Value Analysis for Positive and Negative Testing


For example -
A system accepts the numbers 0 to 10; all other numbers are invalid values.
Under this technique, the boundary values -1, 0, 1, 9, 10, and 11 will be tested.
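The boundary selection for the 0-10 example can be generated mechanically; this is a sketch of the technique, not a specific tool:

```python
def boundary_values(low: int, high: int) -> list:
    # Classic boundary value analysis: just below, at, and just above
    # each limit of the accepted range.
    return [low - 1, low, low + 1, high - 1, high, high + 1]

valid_range = range(0, 11)  # the system accepts 0..10
for v in boundary_values(0, 10):
    label = "positive" if v in valid_range else "negative"
    print(f"{v:>3}: {label} test input")
```

Values falling inside the range drive the positive tests; the two just outside drive the negative tests.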

Equivalence Partitioning:
This is a software testing technique that divides the input data into a number of partitions. Values from
each partition must be tested at least once. Partitions with valid values are used for positive testing,
while partitions with invalid values are used for negative testing.

Figure - Equivalence Partitioning for Positive and Negative Testing
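A minimal sketch of equivalence partitioning for the same 0-10 field; the partition representatives are arbitrary choices:

```python
# One representative value per partition is enough under this technique.
partitions = {
    "below range (invalid)": -5,
    "within range (valid)": 5,
    "above range (invalid)": 50,
}

def accepts(value: int) -> bool:
    # Hypothetical system rule: only 0..10 is accepted.
    return 0 <= value <= 10

for name, representative in partitions.items():
    result = "accepted" if accepts(representative) else "rejected"
    print(f"{name}: {representative} -> {result}")
```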

3. Compatibility testing

 It is a type of non-functional testing.


 Compatibility testing is a type of software testing used to ensure compatibility of the
system/application/website with various other objects, such as web browsers,
hardware platforms, users (in the case of a very specific requirement, such as a user who
speaks and reads only a particular language), operating systems, etc. This type of testing
helps find out how well a system performs in a particular environment, including the
hardware, network, operating system, and other software.
 It is basically the testing of the application or product against its computing
environment.
 It tests whether the application or software product is compatible with the hardware,
operating system, database, or other system software.

Types of compatibility tests


Let's look at the compatibility testing types:
 Hardware: Checks that the software is compatible with different hardware configurations.
 Operating Systems: Checks that your software is compatible with different operating
systems such as Windows, Unix, and Mac OS.
 Software: Checks that your developed software is compatible with other software. For
example, an MS Word application should be compatible with other software such as MS Outlook,
MS Excel, and VBA.
 Network: Evaluates the performance of the system in a network with varying parameters such as
bandwidth, operating speed, and capacity. It also checks the application in different networks with
all the parameters mentioned earlier.
 Browser: Checks the compatibility of your website with different browsers such as Firefox, Google
Chrome, and Internet Explorer.
 Devices: Checks the compatibility of your software with different devices such as USB devices,
printers, scanners, other media devices, and Bluetooth.
 Mobile: Checks that your software is compatible with mobile platforms such as Android and iOS.
 Versions of the software: Verifies that your software application is compatible with
different versions of other software, for instance checking that your application works with
Windows 7 and its service packs.

There are two types of version checking:


 Backward compatibility Testing is to verify the behavior of the developed
hardware/software with the older versions of the hardware/software.
 Forward compatibility Testing is to verify the behavior of the developed
hardware/software with the newer versions of the hardware/software.
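A toy sketch of what such version checks test, using a build that declares the range of data-format versions it handles (all version numbers here are hypothetical):

```python
# A build declares which data-format versions it can handle.
MIN_SUPPORTED = 2  # oldest format this build still reads (backward)
MAX_SUPPORTED = 5  # newest format this build already understands (forward)

def compatible(data_version: int) -> bool:
    return MIN_SUPPORTED <= data_version <= MAX_SUPPORTED

# Backward compatibility test: data produced by an older version.
assert compatible(2)
# Forward compatibility test: data produced by a newer version.
assert compatible(5)
# Outside the declared range: incompatible.
assert not compatible(1) and not compatible(6)
print("version compatibility checks hold")
```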

Tools for compatibility testing


Adobe Browser Lab - Browser Compatibility Testing:
This tool helps check your application in different browsers.
Secure Platform - Hardware Compatibility Tool:
This tool includes the necessary drivers for a specific hardware platform and provides
information for checking the CD burning process with CD burning tools.
Virtual Desktops - Operating System Compatibility:
This is used to run applications on multiple operating systems as virtual machines. Any
number of systems can be connected and the results compared.

Compatibility testing process


1. The initial phase of compatibility testing is to define the set of environments or platforms the
application is expected to work on.
2. The tester should have enough knowledge of the platforms/software/hardware to understand
the expected application behavior under different configurations.
3. The environment needs to be set up for testing with different platforms, devices, and networks to
check whether the application runs well under the different configurations.
4. Report the bugs, fix the defects, and re-test to confirm the fixes.
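The process above can be sketched as a configuration matrix driven by a placeholder smoke test; the browser and OS names are examples only, and the test body is a stub:

```python
from itertools import product

# Step 1: define the set of environments the application must work on.
browsers = ["Firefox", "Chrome", "Edge"]
operating_systems = ["Windows", "Linux", "macOS"]

def run_smoke_test(browser: str, os_name: str) -> bool:
    # Placeholder for steps 2-3: launch the app in this configuration
    # and check that it behaves as expected.
    return True  # this sketch assumes the app passes everywhere

# Steps 3-4: execute against every configuration and report failures.
results = {
    (b, o): run_smoke_test(b, o)
    for b, o in product(browsers, operating_systems)
}
failures = [cfg for cfg, ok in results.items() if not ok]
print(f"{len(results)} configurations tested, {len(failures)} failures")
```

In practice the Cartesian product grows quickly, which is why real compatibility matrices are usually pruned to the configurations users actually run.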
4. User documentation testing

User documentation covers all the manuals, user guides, installation guides, setup guides, read-me
files, software release notes, and online help provided along with the software to help the
end user understand the software system. User documentation testing has two
objectives:
1. To check that what is stated in the document is available in the software.
2. To check that what is in the product is explained correctly in the document.
This testing plays a vital role, as users will refer to these documents when they start using the
software at their location. A badly written document can put off a user and bias them against the
product even if the product offers rich functionality.

Defects found in the user documentation need to be tracked to closure like any regular software
defect, because these documents are the first interaction users have with the product. Good
user documentation helps reduce customer support calls, and the effort and money spent on it
are a valuable long-term investment for the organization. Testing documentation
involves the documentation artifacts that should be developed before or during the testing of the
software.

Documentation for software testing helps in estimating the testing effort required, test coverage,
requirement tracking/tracing, etc. This section describes some commonly used
documentation artifacts related to software testing, such as:
 Test Plan
 Test Scenario
 Test Case
 Traceability Matrix

(i) Test Plan


A test plan outlines the strategy that will be used to test an application, the resources that will be
used, the test environment in which testing will be performed, the limitations of the testing and the
schedule of testing activities. Typically the Quality Assurance Team Lead will be responsible for
writing a Test Plan. A test plan will include the following.
 Introduction to the test plan document
 Assumptions made when testing the application
 List of test cases included in testing the application
 List of features to be tested
 The approach to use when testing the software
 List of deliverables that need to be tested
 The resources allocated for testing the application
 Any risks involved during the testing process
 A schedule of tasks and milestones for the testing

(ii) Test Scenario


A test scenario is a one-line statement that tells what area of the application will be tested. Test scenarios are used to
ensure that all process flows are tested from end to end. A particular area of an application can have
as little as one test scenario or up to a few hundred, depending on the magnitude and complexity
of the application. The terms test scenario and test case are used interchangeably; the main
difference is that a test scenario has several steps, whereas a test case has a single step. When
viewed from this perspective, test scenarios are test cases, but they include several test cases and the
sequence in which they should be executed. Apart from this, each test is dependent on the output of
the previous test.

(iii) Test Case


Test cases involve a set of steps, conditions, and inputs that can be used while performing
testing tasks. The main intent of this activity is to determine whether the software passes or fails in
terms of its functionality and other aspects. There are many types of test cases: functional,
negative, error, logical, physical, UI test cases, etc. Furthermore, test cases are
written to keep track of the testing coverage of the software. Generally, there is no formal template
used for test case writing. However, the following main components are always
available and included in every test case:
 Test case ID
 Product module
 Product version
 Revision history
 Purpose
 Assumptions
 Pre-conditions
 Steps
 Expected outcome
 Actual outcome
 Post-conditions
Many test cases can be derived from a single test scenario. In addition, multiple test cases are
sometimes written for a single piece of software; collectively, these are known as test suites.

(iv) Traceability Matrix


A traceability matrix (also known as a Requirement Traceability Matrix, or RTM) is a table used
to trace the requirements during the software development life cycle. It can be used for forward
tracing (i.e. from requirements to design or coding) or backward tracing (i.e. from coding to requirements).
There are many user-defined templates for the RTM. Each requirement in the RTM document is linked
with its associated test case, so that testing can be done as per the stated requirements.
Furthermore, each bug ID is also included and linked with its associated requirements and test case. The
main goals of this matrix are to:
 Make sure the software is developed as per the stated requirements.
 Help find the root cause of any bug.
 Help trace the developed documents during the different phases of the SDLC.
Importantly, documentation keeps a step-by-step record of processing and results, which is kept
as reference material. A well-documented project has a higher level of maturity
and is more successful than an undocumented one.
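The forward and backward tracing described above can be sketched with a small dictionary-based RTM; all requirement, test-case, and bug IDs here are illustrative:

```python
# Minimal requirement traceability matrix.
rtm = {
    "REQ-001": {"test_cases": ["TC-01", "TC-02"], "bugs": ["BUG-7"]},
    "REQ-002": {"test_cases": ["TC-03"], "bugs": []},
    "REQ-003": {"test_cases": [], "bugs": []},  # not yet covered
}

# Forward tracing: which requirements still lack a test case?
untested = [req for req, row in rtm.items() if not row["test_cases"]]
print("requirements without tests:", untested)

# Backward tracing: which requirement does a bug trace back to?
def requirement_for_bug(bug_id: str):
    for req, row in rtm.items():
        if bug_id in row["bugs"]:
            return req
    return None

print(requirement_for_bug("BUG-7"))
```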

Benefits
 Highlights problems overlooked during reviews.
 Ensures consistency between the documentation and the product, thus minimizing possible defects
reported by customers.
 Results in less difficult support calls.
 New programmers and testers who join a project group can use the documentation to learn
the external functionality of the product.

 Customers need less training and can proceed more quickly to advanced training and product
usage.

5. Domain testing

Domain testing can be considered the next level of testing, based on domain
knowledge and expertise in the domain of the application. It requires a critical understanding of the
day-to-day business activities for which the software is written. This type of testing requires business
domain knowledge rather than knowledge of what the software specification contains or how the
software is written.

The test engineers performing this type of testing are selected because they have in-depth knowledge
of the business domain, which reduces the effort and time required for training the testers and
increases the effectiveness of domain testing. Domain testing is the ability to design
and execute test cases that relate to the people who will buy and use the software. It helps in
understanding the problems they are trying to solve and the ways in which they are using the
software to solve them. It is also characterized by how well an individual test engineer understands
the operation of the system and the business processes that the system supports.

Domain testing involves testing the product without going through the logic built into the product:
the business flow determines the steps, not the software under test. This is also called "business
vertical testing". Domain testing is done after all components are integrated and after the product has
been tested using other black-box approaches.
