Está en la página 1de 6

/opt/scribd/conversion/tmp/scratch1/20551997.

doc

such as planning and control, choosing test conditions, including deviations from the plan. It involves taking o Tests designed by a person(s) from a different
1. Fundamentals of testing designing test cases and checking results, evaluating actions necessary to meet the mission and objectives organization or company (i.e. outsourcing or
1.1 Why is testing necessary? (K2) exit criteria, reporting on the testing process and of the project. In order to control testing, it should be certification by an external body).
LO-1.1.1 Describe, with examples, the way in which a system under test, and finalizing or closure (e.g. after a monitored throughout the project. Test planning takes People and projects are driven by objectives. People
defect in software can cause harm to a test phase has been completed). Testing also includes into account the feedback from monitoring and control tend to align their plans with the objectives set
person, to the environment or to a company. (K2) reviewing of documents (including source code) and activities. Test planning and control tasks are defined in by management and other stakeholders, for example,
LO-1.1.2 Distinguish between the root cause of a static analysis. Both dynamic testing and static testing Chapter 5 of this syllabus. to find defects or to confirm that software works.
defect and its effects. (K2) can be used as a means for achieving similar 1.4.2 Test analysis and design (K1) Therefore, it is important to clearly state the objectives
LO-1.1.3 Give reasons why testing is necessary by objectives, and will provide information in order to Test analysis and design is the activity where general of testing. Identifying failures during testing may be
giving examples. (K2) improve both the system to be tested, and the testing objectives are transformed into tangible test perceived as criticism against the product and against
LO-1.1.4 Describe why testing is part of quality development and testing processes. conditions and test cases. the author. Testing is, therefore, often seen as a
assurance and give examples of how testing There can be different test objectives: Test analysis and design has the following major tasks: destructive activity, even though it is very constructive
contributes to higher quality. (K2) o finding defects; o Reviewing the test basis (such as requirements, in the management of product risks. Looking for failures
LO-1.1.5 Recall the terms error, defect, fault, failure o gaining confidence about the level of quality and architecture, design, interfaces). in a system requires curiosity, professional pessimism,
and corresponding terms mistake and bug. (K1) providing information; o Evaluating testability of the test basis and test a critical eye, attention to detail, good communication
o preventing defects. with development peers, and experience on which to
1.2 What is testing? (K2) objects.
base error guessing. If errors, defects or failures are
LO-1.2.1 Recall the common objectives of testing. (K1) The thought process of designing tests early in the life o Identifying and prioritizing test conditions based on
cycle (verifying the test basis via test design) can help communicated in a constructive way, bad feelings
LO-1.2.2 Describe the purpose of testing in software analysis of test items, the specification,
to prevent defects from being introduced into code. between the testers and the analysts, designers and
development, maintenance and operations behaviour and structure.
Reviews of documents (e.g. requirements) also help to developers can be avoided. This applies to reviewing
as a means to find defects, provide confidence and o Designing and prioritizing test cases.
prevent defects appearing in the code. Different as well as in testing. The tester and test leader need
information, and prevent defects. (K2) o Identifying necessary test data to support the test
viewpoints in testing take different objectives into good interpersonal skills to communicate factual
1.3 General testing principles (K2) account. For example, in development testing (e.g.
conditions and test cases. information about defects, progress and risks, in a
LO-1.3.1 Explain the fundamental principles in testing. o Designing the test environment set-up and constructive way. For the author of the software or
component, integration and system testing), the main
identifying any required infrastructure and tools.
1.4 Fundamental test process (K1) objective may be to cause as many failures as possible document, defect information can help them improve
LO-1.4.1 Recall the fundamental test activities from so that defects in the software are identified and can be 1.4.3 Test implementation and their skills. Defects found and fixed during testing will
fixed. In acceptance testing, the main objective may be save time and money later, and reduce risks.
planning to test closure activities and the execution (K1) Communication problems may occur, particularly if
main tasks of each test activity. (K1) to confirm that the system works as expected, to gain Test implementation and execution is the activity where testers are seen only as messengers of unwanted
confidence that it has met the requirements. In some
1.5 The psychology of testing (K2) cases the main objective of testing may be to assess
test procedures or scripts are specified by combining news about defects. However, there are several ways
LO-1.5.1 Recall that the success of testing is influenced the test cases in a particular order and including any to improve communication and relationships between
the quality of the software (with no intention of fixing
by psychological factors (K1): other information needed for test execution, the testers and others:
defects), to give information to stakeholders of the risk
o clear test objectives determine testers’ effectiveness; environment is set up and the tests are run.
of releasing the system at a given time. Maintenance Page 18
o blindness to one’s own errors; testing often includes testing that no new defects have Page 16 o Start with collaboration rather than battles – remind
o courteous communication and feedback on defects. been introduced during development of the changes. Test implementation and execution has the following
everyone of the common goal of better quality systems.
LO-1.5.2 Contrast the mindset of a tester and of a During operational testing, the main objective may be major tasks:
o Communicate findings on the product in a neutral,
developer. (K2) to assess system characteristics such as reliability or o Developing, implementing and prioritizing test cases.
fact-focused way without criticizing the person who
availability. Debugging and testing are different. Testing o Developing and prioritizing test procedures, creating
Page 11 created it, for example, write objective and factual
can show failures that are caused by defects. test data and, optionally, preparing test harnesses and incident reports and review findings.
1.1 Why is testing necessary (K2) Debugging is the development activity that identifies writing automated test scripts. o Try to understand how the other person feels and
the cause of a defect, repairs the code and checks that o Creating test suites from the test procedures for
Terms the defect has been fixed correctly. Subsequent efficient test execution.
why they react as they do.
Bug, defect, error, failure, fault, mistake, quality, risk. o Confirm that the other person has understood what
confirmation testing by a tester ensures that the fix o Verifying that the test environment has been set up
1.1.1 Software systems context (K1) you have said and vice versa.
does indeed resolve the failure. The responsibility for correctly.
Software systems are an increasing part of life, from each activity is very different, i.e. testers test and o Executing test procedures either manually or by Page 19
business applications (e.g. banking) to consumer
products (e.g. cars). Most people have had an
developers debug. The process of testing and its
activities is explained in Section 1.4.
using test execution tools, according to the planned
sequence.
2. Testing throughout the
experience with software that did not work as expected.
Software that does not work correctly can lead to many
Page 14 o Logging the outcome of test execution and recording software life cycle (K2)
the identities and versions of the software under test,
problems, including loss of money, time or business 1.3 General testing principles test tools and testware.
2.1 Software development models
reputation, and could even cause injury or death. LO-2.1.1 Understand the relationship between
1.1.2 Causes of software defects
(K2) o Comparing actual results with expected results.
development, test activities and work products in the
o Reporting discrepancies as incidents and analyzing
A human being can make an error (mistake), which Terms them in order to establish their cause (e.g. a defect in
development life cycle, and give examples based on
Exhaustive testing. project and product characteristics and context (K2).
produces a defect (fault, bug) in the code, in software the code, in specified test data, in the test document, or LO-2.1.2 Recognize the fact that software development
or a system, or in a document. If a defect in code is Principles a mistake in the way the test was executed).
models must be adapted to the context of project and
executed, the system will fail to do what it should do (or A number of testing principles have been suggested o Repeating test activities as a result of action taken product characteristics. (K1)
do something it shouldn’t), causing a failure. Defects in over the past 40 years and offer general guidelines for each discrepancy. For example, re-execution of a LO-2.1.3 Recall reasons for different levels of testing,
software, systems or documents may result in failures, common for all testing. test that previously failed in order to confirm a fix and characteristics of good testing in any life cycle
but not all defects do so. Defects occur because human Principle 1 – Testing shows presence of defects (confirmation testing), execution of a corrected test model. (K1)
beings are fallible and because there is time pressure, Testing can show that defects are present, but cannot and/or execution of tests in order to ensure that defects
complex code, complexity of infrastructure, changed prove that there are no defects. Testing reduces the have not been introduced in unchanged areas of the 2.2 Test levels (K2)
technologies, and/or many system interactions. probability of undiscovered defects remaining in the software or that defect fixing did not uncover other LO-2.2.1 Compare the different levels of testing: major
Failures can be caused by environmental conditions as software but, even if no defects are found, it is not a defects (regression testing). objectives, typical objects of testing, typical targets of
well: radiation, magnetism, electronic fields, and testing (e.g. functional or structural) and related work
pollution can cause faults in firmware or influence the
proof of correctness. 1.4.4 Evaluating exit criteria and products, people who test, types of defects and failures
Principle 2 – Exhaustive testing is impossible
execution of software by changing hardware conditions. Testing everything (all combinations of inputs and reporting (K1) to be identified. (K2)
1.1.3 Role of testing in software preconditions) is not feasible except for trivial cases. Evaluating exit criteria is the activity where test 2.3 Test types (K2)
Instead of exhaustive testing, risk analysis and execution is assessed against the defined objectives. LO-2.3.1 Compare four software test types (functional,
development, maintenance and priorities should be used to focus testing efforts. This should be done for each test level. non-functional, structural and changerelated) by
operations (K2) Principle 3 – Early testing Evaluating exit criteria has the following major tasks: example. (K2)
Rigorous testing of systems and documentation can Testing activities should start as early as possible in the o Checking test logs against the exit criteria specified LO-2.3.2 Recognize that functional and structural tests
help to reduce the risk of problems occurring during software or system development life cycle, and should in test planning. occur at any test level. (K1)
operation and contribute to the quality of the software be focused on defined objectives. o Assessing if more tests are needed or if the exit LO-2.3.3 Identify and describe non-functional test types
system, if defects found are corrected before the Principle 4 – Defect clustering criteria specified should be changed. based on non-functional requirements. (K2)
system is released for operational use. Software testing A small number of modules contain most of the defects o Writing a test summary report for stakeholders. LO-2.3.4 Identify and describe test types based on the
may also be required to meet contractual or legal discovered during pre-release testing, or are 1.4.5 Test closure activities (K1) analysis of a software system’s structure or
requirements, or industry-specific standards. responsible for the most operational failures. Test closure activities collect data from completed test architecture. (K2)
1.1.4 Testing and quality (K2) Principle 5 – Pesticide paradox activities to consolidate experience, testware, facts and LO-2.3.5 Describe the purpose of confirmation testing
With the help of testing, it is possible to measure the If the same tests are repeated over and over again, numbers. For example, when a software system is and regression testing. (K2)
eventually the same set of test cases will no longer find
quality of software in terms of defects found, for both
any new defects. To overcome this “pesticide paradox”,
released, a test project is completed (or cancelled), a 2.4 Maintenance testing (K2)
functional and non-functional software requirements milestone has been achieved, or a maintenance LO-2.4.1 Compare maintenance testing (testing an
and characteristics (e.g. reliability, usability, efficiency, the test cases need to be regularly reviewed and release has been completed. existing system) to testing a new application with
maintainability and portability). For more information on revised, and new and different tests need to be written Test closure activities include the following major tasks: respect to test types, triggers for testing and amount of
non-functional testing see Chapter 2; for more to exercise different parts of the software or system to o Checking which planned deliverables have been testing. (K2)
information on software characteristics see ‘Software potentially find more defects.
delivered, the closure of incident reports or raising of LO-2.4.2 Identify reasons for maintenance testing
Engineering – Software Product Quality’ (ISO 9126). Principle 6 – Testing is context dependent
change records for any that remain open, and the (modification, migration and retirement). (K1)
Testing can give confidence in the quality of the Testing is done differently in different contexts. For
documentation of the acceptance of the system. LO-2.4.3. Describe the role of regression testing and
software if it finds few or no defects. A properly example, safety-critical software is tested differently
o Finalizing and archiving testware, the test impact analysis in maintenance. (K2)
designed test that passes reduces the overall level of from an e-commerce site.
environment and the test infrastructure for later reuse. Page 20
risk in a system. When testing does find defects, the Principle 7 – Absence-of-errors fallacy
o Handover of testware to the maintenance
quality of the software system increases when those Finding and fixing defects does not help if the system
built is unusable and does not fulfill the users’ organization. 2.1 Software development models
defects are fixed. Lessons should be learned from o Analyzing lessons learned for future releases and
previous projects. By understanding the root causes of needs and expectations. Terms
projects, and the improvement of test maturity. Commercial off-the-shelf (COTS), iterative-incremental
defects found in other projects, processes can be Page 15
Page 17 development model, validation, verification, V-model.
improved, which in turn should prevent those defects
from reoccurring and, as a consequence, improve the 1.4 Fundamental test process 1.5 The psychology of testing Background
quality of future systems. This is an aspect of quality Terms Testing does not exist in isolation; test activities are
assurance. Confirmation testing, retesting, exit criteria, incident, (K2) related to software development activities. Different
Page 12 regression testing, test basis, test condition, test Terms development life cycle models need different
coverage, test data, test execution, test log, test plan, Error guessing, independence. approaches to testing.
Testing should be integrated as one of the quality
assurance activities (i.e. alongside development test procedure, test policy, test strategy, test suite, test
Background 2.1.1 V-model (sequential
summary report, testware.
standards, training and defect analysis). The mindset to be used while testing and reviewing is development model) (K2)
1.1.5 How much testing is enough? Background different to that used while developing software. With Although variants of the V-model exist, a common type
Deciding how much testing is enough should take The most visible part of testing is executing tests. But the right mindset developers are able to test their own of V-model uses four test levels, corresponding to the
account of the level of risk, including technical and to be effective and efficient, test plans should also code, but separation of this responsibility to a tester is four development levels.
business product and project risks, and project include time to be spent on planning the tests, typically done to help focus effort and provide additional The four levels used in this syllabus are:
constraints such as time and budget. (Risk is discussed designing test cases, preparing for execution and benefits, such as an independent view by trained and o component (unit) testing;
further in Chapter 5.) Testing should provide sufficient evaluating status. professional testing resources. Independent testing o integration testing;
information to stakeholders to make informed decisions The fundamental test process consists of the following may be carried out at any level of testing. o system testing;
about the release of the software or system being main activities: A certain degree of independence (avoiding the author o acceptance testing.
tested, for the next development step or handover to o planning and control; bias) is often more effective at finding defects and In practice, a V-model may have more, fewer or
customers. o analysis and design; failures. Independence is not, however, a replacement different levels of development and testing, depending
o implementation and execution; for familiarity, and developers can efficiently find many
Page 13 on the project and the software product. For example,
o evaluating exit criteria and reporting; defects in their own code. Several levels of there may be component integration testing after
1.2 What is testing? (K2) o test closure activities. independence can be defined: component testing, and system integration testing after
Although logically sequential, the activities in the o Tests designed by the person(s) who wrote the
Terms process may overlap or take place concurrently. software under test (low level of independence).
system testing. Software work products (such as
Debugging, requirement, review, test case, testing, test business scenarios or use cases, requirements
1.4.1 Test planning and control (K1) o Tests designed by another person(s) (e.g. from the specifications, design documents and code) produced
objective.
Test planning is the activity of verifying the mission of development team). during development are often the basis of testing in one
Background testing, defining the objectives of testing and the o Tests designed by a person(s) from a different or more test levels. References for generic work
A common perception of testing is that it only consists organizational group (e.g. an independent test team) or products include Capability Maturity Model Integration
specification of test activities in order to meet the
of running tests, i.e. executing the software. This is part test specialists (e.g. usability or performance test (CMMI) or ‘Software life cycle processes’ (IEEE/IEC
objectives and mission.
of testing, but not all of the testing activities. Test specialists). 12207). Verification and validation (and early test
Test control is the ongoing activity of comparing actual
activities exist before and after test execution: activities
progress against the plan, and reporting the status,
/opt/scribd/conversion/tmp/scratch1/20551997.doc

design) can be carried out during the development of module. Both functional and structural approaches may interoperability with specific systems, and may be LO-3.1.2 Describe the importance and value of
the software work products. be used. Ideally, testers should understand the performed at all test levels (e.g. tests for components considering static techniques for the assessment of
2.1.2 Iterative-incremental architecture and influence integration planning. If may be based on a component specification). software work products. (K2)
integration tests are planned before components or Specification-based techniques may be used to derive LO-3.1.3 Explain the difference between static and
development models (K2) systems are built, they can be built in the order required test conditions and test cases from the functionality of dynamic techniques. (K2)
Iterative-incremental development is the process of for most efficient testing. the software or system. (See Chapter 4.) Functional LO-3.1.4 Describe the objectives of static analysis and
establishing requirements, designing, building and testing considers the external behaviour of the software reviews and compare them to dynamic testing. (K2)
testing a system, done as a series of shorter
2.2.3 System testing (K2)
System testing is concerned with the behaviour of a (black-box testing). A type of functional testing, security 3.2 Review process (K2)
development cycles. Examples are: prototyping, rapid testing, investigates the functions (e.g. a firewall)
application development (RAD), Rational Unified whole system/product as defined by the scope of a LO-3.2.1 Recall the phases, roles and responsibilities
development project or programme. In system testing, relating to detection of threats, such as viruses, from of a typical formal review. (K1)
Process (RUP) and agile development models. The malicious outsiders. Another type of functional testing,
resulting system produced by an iteration may be the test environment should correspond to the final LO-3.2.2 Explain the differences between different
target or production environment as much as possible interoperability testing, evaluates the capability of the types of review: informal review, technical review,
tested at several levels as part of its development. An software product to interact with one or more specified
increment, added to others developed previously, forms in order to minimize the risk of environment-specific walkthrough and inspection. (K2)
failures not being found in testing. System testing may components or systems. LO-3.2.3 Explain the factors for successful
a growing partial system, which should also be tested.
Regression testing is increasingly important on all include tests based on risks and/or on requirements 2.3.2 Testing of non-functional performance of reviews. (K2)
iterations after the first one. Verification and validation specifications, business processes, use cases, or other software characteristics (non- 3.3 Static analysis by tools (K2)
can be carried out on each increment. high level descriptions of system behaviour, LO-3.3.1 Recall typical defects and errors identified by
interactions with the operating system, and system functional testing) (K2)
2.1.3 Testing within a life cycle Non-functional testing includes, but is not limited to, static analysis and compare them to reviews and
resources. System testing should investigate both dynamic testing. (K1)
model (K2) functional and non-functional requirements of the performance testing, load testing, stress testing,
usability testing, maintainability testing, reliability testing LO-3.3.2 List typical benefits of static analysis. (K1)
In any life cycle model, there are several characteristics system. Requirements may exist as text and/or models. LO-3.3.3 List typical code and design defects that may
of good testing: Testers also need to deal with incomplete or and portability testing. It is the testing of “how” the
system works. Non-functional testing may be be identified by static analysis tools. (K1)
o For every development activity there is a undocumented requirements. System testing of
functional requirements starts by using the most performed at all test levels. The term non-functional Page 29
corresponding testing activity.
o Each test level has test objectives specific to that appropriate specification-based (black-box) techniques
for the aspect of the system to be tested.
testing describes the tests required to measure
characteristics of systems and software that can be 3.1 Static techniques and the test
level.
o The analysis and design of tests for a given test For example, a decision table may be created for quantified on a varying scale, such as response times process (K2) 15 minutes
combinations of effects described in business rules. for performance testing. These tests can be referenced
level should begin during the corresponding
Structure-based techniques (white-box) may then be to a quality model such as the one defined in ‘Software Terms
development activity. Engineering – Software Product Quality’ (ISO Dynamic testing, static testing, static technique.
used to assess the thoroughness of the testing with
o Testers should be involved in reviewing documents
respect to a structural element, such as menu structure 9126). Background
as soon as drafts are available in the development life
or web page navigation. (See Chapter 4.) An Page 26 Unlike dynamic testing, which requires the execution of
cycle.
independent test team often carries out system testing. software; static testing techniques rely on the manual
Page 21 2.3.3 Testing of software examination (reviews) and automated analysis (static
Test levels can be combined or reorganized depending
2.2.4 Acceptance testing (K2) structure/architecture (structural analysis) of the code or other project documentation.
Acceptance testing is often the responsibility of the
on the nature of the project or the system architecture. testing) (K2) Reviews are a way of testing software work products
customers or users of a system; other stakeholders
For example, for the integration of a commercial off- (including code) and can be performed well before
may be involved as well. The goal in acceptance Structural (white-box) testing may be performed at all
the-shelf (COTS) software product into a system, the dynamic test execution. Defects detected during
testing is to establish confidence in the system, parts of test levels. Structural techniques are best used after
purchaser may perform integration testing at the reviews early in the life cycle are often much cheaper to
the system or specific non-functional characteristics of specification-based techniques, in order to help
system level (e.g. integration to the infrastructure and remove than those detected while running tests (e.g.
the system. Finding defects is not the main focus in measure the thoroughness of testing through
other systems, or system deployment) and acceptance defects found in requirements). A review could be done
acceptance testing. Acceptance testing may assess the assessment of coverage of a type of structure.
testing (functional and/or non-functional, and user entirely as a manual activity, but there is also tool
system’s readiness for deployment and use, although it Coverage is the extent that a structure has been
and/or operational testing). support. The main manual activity is to examine a work
is not necessarily the final level of testing. For example, exercised by a test suite, expressed as a percentage of
Page 22 the items being covered. If coverage is not 100%, then product and make comments about it. Any software
a large-scale system integration test may come after
work product can be reviewed, including requirements
2.2 Test levels (K2) 40 minutes the acceptance test for a system. more tests may be designed to test those items that
were missed and, therefore, increase coverage. specifications, design specifications, code, test plans,
Page 24 test specifications, test cases, test scripts, user guides
Terms Acceptance testing may occur as more than just a Coverage techniques are covered in Chapter 4.
At all test levels, but especially in component testing or web pages. Benefits of reviews include early defect
Alpha testing, beta testing, component testing (also single test level, for example:
and component integration testing, tools can be used to detection and correction, development productivity
known as unit, module or program testing), driver, field o A COTS software product may be acceptance tested
measure the code coverage of elements, such as improvements, reduced development timescales,
testing, functional requirement, integration, integration when it is installed or integrated.
statements or decisions. Structural testing may be reduced testing cost and time, lifetime cost reductions,
testing, non-functional requirement, robustness testing, o Acceptance testing of the usability of a component
based on the architecture of the system, such as a fewer defects and improved communication. Reviews
stub, system testing, test level, test-driven
may be done during component testing. can find omissions, for example, in requirements, which
development, test environment, user acceptance calling hierarchy. Structural testing approaches can
o Acceptance testing of a new functional enhancement are unlikely to be found in dynamic testing. Reviews,
testing. also be applied at system, system integration or
may come before system testing. acceptance testing levels (e.g. to business models or static analysis and dynamic testing have the same
Background Typical forms of acceptance testing include the objective – identifying defects. They are
menu structures).
For each of the test levels, the following can be following: complementary: the different techniques can find
identified: their generic objectives, the work product(s) User acceptance testing 2.3.4 Testing related to changes different types of defects effectively and efficiently.
being referenced for deriving test cases (i.e. the test Typically verifies the fitness for use of the system by (confirmation testing (retesting) and Compared to dynamic testing, static techniques find
basis), the test object (i.e. what is being tested), typical business users. causes of failures (defects) rather than the failures
defects and failures to be found, test harness Operational (acceptance) testing
regression testing) (K2) themselves. Typical defects that are easier to find in
requirements and tool support, and specific approaches After a defect is detected and fixed, the software should reviews than in dynamic testing are: deviations from
The acceptance of the system by the system
and responsibilities. be retested to confirm that the original defect has been standards, requirement defects, design defects,
administrators, including:
successfully removed. This is called confirmation.
2.2.1 Component testing (K2) o testing of backup/restore;
Debugging (defect fixing) is a development activity, not
insufficient maintainability and incorrect interface
Component testing searches for defects in, and verifies o disaster recovery; specifications.
a testing activity. Regression testing is the repeated
the functioning of, software (e.g. modules, programs, o user management; testing of an already tested program, after modification, Page 30
objects, classes, etc.) that are separately testable. It o maintenance tasks;
may be done in isolation from the rest of the system, o periodic checks of security vulnerabilities.
to discover any defects introduced or uncovered as a
result of the change(s). These defects may be either in
3.2 Review process (K2) 25 min
depending on the context of the development life cycle Contract and regulation acceptance testing the software being tested, or in another related or Terms
and the system. Stubs, drivers and simulators may be Contract acceptance testing is performed against a unrelated software component. It is performed when Entry criteria, formal review, informal review, inspection,
used. Component testing may include testing of contract’s acceptance criteria for producing custom- the software, or its environment, is changed. The extent metric, moderator/inspection leader, peer review,
functionality and specific non-functional characteristics, developed software. Acceptance criteria should be of regression testing is based on the risk of not finding reviewer, scribe, technical review, walkthrough.
such as resource-behaviour (e.g. memory leaks) or defined when the contract is agreed. Regulation
robustness testing, as well as structural testing (e.g. acceptance testing is performed against any
defects in software that was working previously. Tests Background
should be repeatable if they are to be used for The different types of reviews vary from very informal
branch coverage). Test cases are derived from work regulations that must be adhered to, such as confirmation testing and to assist regression testing.
products such as a specification of the component, the (e.g. no written instructions for reviewers) to very formal
governmental, legal or safety regulations. Regression testing may be performed at all test levels,
software design or the data model. Typically, (i.e. well structured and regulated). The formality of a
Alpha and beta (or field) testing and applies to functional, non-functional and structural
component testing occurs with access to the code review process is related to factors such as the maturity
Developers of market, or COTS, software often want to testing. Regression test suites are run many times and
being tested and with the support of the development of the development process, any legal or regulatory
get feedback from potential or existing customers in generally evolve slowly, so regression testing is a
environment, such as a unit test framework or requirements or the need for an audit trail. The way a
their market before the software product is put up for strong candidate for automation.
debugging tool, and, in practice, usually involves the review is carried out depends on the agreed objective
sale commercially. Alpha testing is performed at the
programmer who wrote the code. Defects are typically Page 27 of the review (e.g. find defects, gain understanding, or
developing organization’s site. Beta testing, or field
discussion and decision by consensus).
fixed as soon as they are found, without formally
recording incidents. One approach to component
testing, is performed by people at their own locations. 2.4 Maintenance testing (K2) 3.2.1 Phases of a formal review (K1)
Both are performed by potential customers, not the
testing is to prepare and automate test cases before developers of the product. Organizations may use other Terms A typical formal review has the following main phases:
coding. This is called a test-first approach or test-driven terms as well, such as factory acceptance testing and Impact analysis, maintenance testing. 1. Planning: selecting the personnel, allocating roles;
development. This approach is highly iterative and is site acceptance testing for systems that are tested Background defining the entry and exit criteria for more formal
based on cycles of developing test cases, then building before and after being moved to a customer’s site. Once deployed, a software system is often in service review types (e.g. inspection); and selecting which
and integrating small pieces of code, and executing the for years or decades. During this time the system and parts of documents to look at.
Page 25
component tests until they pass. its environment are often corrected, changed or 2. Kick-off: distributing documents; explaining the
2.2.2 Integration testing (K2) 2.3 Test types (K2) 40 minutes extended. Maintenance testing is done on an existing objectives, process and documents to the participants;
operational system, and is triggered by modifications, and checking entry criteria (for more formal review
Integration testing tests interfaces between Terms types).
components, interactions with different parts of a Black-box testing, code coverage, functional testing, migration, or retirement of the software or system.
system, such as the operating system, file system, Modifications include planned enhancement changes 3. Individual preparation: work done by each of the
interoperability testing, load testing, maintainability participants on their own before the review meeting,
hardware, or interfaces between systems. There may testing, performance testing, portability testing, (e.g. release-based), corrective and emergency
be more than one level of integration testing and it may changes, and changes of environment, such as noting potential defects, questions and comments.
reliability testing, security testing, specification-based 4. Review meeting: discussion or logging, with
be carried out on test objects of varying size. For testing, stress testing, structural testing, usability planned operating system or database upgrades, or
example: patches to newly exposed or discovered vulnerabilities documented results or minutes (for more formal review
testing, white-box testing. types). The meeting participants may simply note
1. Component integration testing tests the interactions of the operating system. Maintenance testing for
between software components and is done after Background migration (e.g. from one platform to another) should defects, make recommendations for handling the
component testing; A group of test activities can be aimed at verifying the include operational tests of the new environment, as defects, or make decisions about the defects.
2. System integration testing tests the interactions software system (or a part of a system) based on a well as of the changed software. Maintenance testing 5. Rework: fixing defects found, typically done by the
between different systems and may be done after specific reason or target for testing. A test type is for the retirement of a system may include the testing of author.
system testing. In this case, the developing focused on a particular test objective, which could be data migration or archiving if long data-retention 6. Follow-up: checking that defects have been
organization may control only one side of the interface, the testing of a function to be performed by the periods are required. In addition to testing what has addressed, gathering metrics and checking on exit
so changes may be destabilizing. Business processes software; a non-functional quality characteristic, such been changed, maintenance testing includes extensive criteria (for more formal review types).
implemented as workflows may involve a series of as reliability or usability, the structure or architecture of regression testing to parts of the system that have not 3.2.2 Roles and responsibilities (K1)
systems. Cross-platform issues may be significant. the software or system; or related to changes, i.e. been changed. The scope of maintenance testing is A typical formal review will include the roles below:
The greater the scope of integration, the more difficult it confirming that defects have been fixed (confirmation related to the risk of the change, the size of the existing o Manager: decides on the execution of reviews,
becomes to isolate failures to a specific component or testing) and looking for unintended changes system and to the size of the change. Depending on allocates time in project schedules and determines if
system, which may lead to increased risk. (regression testing). A model of the software may be the changes, maintenance testing may be done at any the review objectives have been met.
developed and/or used in structural and functional or all test levels and for any or all test types.
Page 23 o Moderator: the person who leads the review of the
testing, for example, in functional testing a process flow Determining how the existing system may be affected
Systematic integration strategies may be based on the document or set of documents, including planning the
model, a state transition model or a plain language by changes is called impact analysis, and is used to
system architecture (such as top-down and review, running the meeting, and follow-up after the
specification; and for structural testing a control flow help decide how much regression testing to do.
bottom-up), functional tasks, transaction processing meeting. If necessary, the moderator may mediate
model or menu structure model. Maintenance testing can be difficult if specifications are
sequences, or some other aspect of the system or between the various points of view and is often the
component. In order to reduce the risk of late defect 2.3.1 Testing of function (functional out of date or missing. person upon whom the success of the review rests.
discovery, integration should normally be incremental testing) (K2) Page 28 o Author: the writer or person with chief responsibility
rather than “big bang”. Testing of specific non-functional
characteristics (e.g. performance) may be included in
The functions that a system, subsystem or component
are to perform may be described in work products such
3. Static techniques (K2) 60m for the document(s) to be reviewed.
o Reviewers: individuals with a specific technical or
integration testing. At each stage of integration, testers as a requirements specification, use cases, or a 3.1 Static techniques and the test business background (also called checkers or
concentrate solely on the integration itself. For functional specification, or they may be undocumented. process (K2) inspectors) who, after the necessary preparation,
example, if they are integrating module A with module B The functions are “what” the system does. Functional identify and describe findings (e.g. defects) in the
LO-3.1.1 Recognize software work products that can
they are interested in testing the communication tests are based on functions and features (described in product under review. Reviewers should be chosen to
be examined by the different static techniques. (K1)
between the modules, not the functionality of either documents or understood by the testers) and their
/opt/scribd/conversion/tmp/scratch1/20551997.doc

represent different perspectives and roles in the review o Detecting dependencies and inconsistencies in During test design the test cases and test data are Decision tables are a good way to capture system
process, and should take part in any review meetings. software models, such as links. created and specified. A test case consists of a set of requirements that contain logical conditions, and to
o Scribe (or recorder): documents all the issues, o Improved maintainability of code and design. input values, execution preconditions, expected results document internal system design. They may be used to
problems and open points that were identified during o Prevention of defects, if lessons are learned in and execution post-conditions, developed to cover record complex business rules that a system is to
the meeting. development. certain test condition(s). The ‘Standard for Software implement. The specification is analyzed, and
Looking at documents from different perspectives and Typical defects discovered by static analysis tools Test Documentation’ (IEEE 829) describes the content conditions and actions of the system are identified. The
using checklists can make reviews more effective and include: of test design specifications (containing test conditions) input conditions and actions are most often stated in
efficient, for example, a checklist based on o referencing a variable with an undefined value; and test case specifications. Expected results should such a way that they can either be true or false
perspectives such as user, maintainer, tester or o inconsistent interface between modules and be produced as part of the specification of a test case (Boolean). The decision table contains the triggering
operations, or a checklist of typical requirements components; and include outputs, changes to data and states, and conditions, often combinations of true and false for all
problems. o variables that are never used; any other consequences of the test. If expected results input conditions, and the resulting actions for each
Page 31 have not been defined then a plausible, but erroneous, combination of conditions. Each column of the table
o unreachable (dead) code;
result may be interpreted as the correct one. Expected corresponds to a business rule that defines a unique
3.2.3 Types of review (K2) o programming standards violations;
results should ideally be defined prior to test execution. combination of conditions, which result in the execution
A single document may be the subject of more than o security vulnerabilities; During test implementation the test cases are of the actions associated with that rule. The coverage
one review. If more than one type of review is used, the o syntax violations of code and software models. developed, implemented, prioritized and organized in standard commonly used with decision table testing is
order may vary. For example, an informal review may Static analysis tools are typically used by developers the test procedure specification. The test procedure (or to have at least one test per column, which typically
be carried out before a technical review, or an (checking against predefined rules or programming manual test script) specifies the sequence of action for involves covering all combinations of triggering
inspection may be carried out on a requirements standards) before and during component and the execution of a test. If tests are run using a test conditions. The strength of decision table testing is that
specification before a walkthrough with customers. The integration testing, and by designers during software execution tool, the sequence of actions is specified in a it creates combinations of conditions that might not
main characteristics, options and purposes of common modeling. Static analysis tools may produce a large test script (which is an automated test procedure). The otherwise have been exercised during testing. It may
review types are: number of warning messages, which need to be well various test procedures and automated test scripts are be applied to all situations when the action of the
Informal review managed to allow the most effective use of the tool. subsequently formed into a test execution schedule software depends on several logical decisions.
Key characteristics: Compilers may offer some support for static analysis, that defines the order in which the various test Page 39
o no formal process; including the calculation of metrics. procedures, and possibly automated test scripts, are
o there may be pair programming or a technical lead Page 34 executed, when they are to be carried out and by 4.3.4 State transition testing (K3)
reviewing designs and code; whom. The test execution schedule will take into A system may exhibit a different response depending
o optionally may be documented; 4. Test design techniques account such factors as regression tests, prioritization, on current conditions or previous history (its state). In
this case, that aspect of the system can be shown as a
o may vary in usefulness depending on the reviewer; 4.1 The test development process and technical and logical dependencies.
state transition diagram. It allows the tester to view the
o main purpose: inexpensive way to get some benefit. LO-4.1.1 Differentiate between a test design Page 37 software in terms of its states, transitions between
Walkthrough specification, test case specification and test procedure
Key characteristics: specification. (K2) 4.2 Categories of test design states, the inputs or events that trigger state changes
(transitions) and the actions which may result from
o meeting led by author;
o scenarios, dry runs, peer group;
LO-4.1.2 Compare the terms test condition, test case techniques (K2) 15 minutes those transitions. The states of the system or object
and test procedure. (K2) under test are separate, identifiable and finite in
o open-ended sessions; LO-4.1.3 Evaluate the quality of test cases. Do they: Terms
Black-box test design technique, experience-based test number. A state table shows the relationship between
o optionally a pre-meeting preparation of reviewers, o show clear traceability to the requirements; the states and inputs, and can highlight possible
review report, list of findings and scribe (who is not the o contain an expected result. (K2) design technique, specification-based test design
technique, structure-based test design technique, transitions that are invalid. Tests can be designed to
author); LO-4.1.4 Translate test cases into a well-structured test cover a typical sequence of states, to cover every state,
o may vary in practice from quite informal to very procedure specification at a level of detail relevant to white-box test design technique.
to exercise every transition, to exercise specific
formal; the knowledge of the testers. (K3) Background sequences of transitions or to test invalid transitions.
o main purposes: learning, gaining understanding, 4.2 Categories of test design The purpose of a test design technique is to identify State transition testing is much used within the
defect finding. test conditions and test cases. It is a classic distinction embedded software industry and technical automation
Technical review
techniques (K2) to denote test techniques as black box or white box. in general. However, the technique is also suitable for
Key characteristics: LO-4.2.1 Recall reasons that both specification-based Black-box techniques (which include specification- modeling a business object having specific states or
o documented, defined defect-detection process that (black-box) and structure-based (whitebox) approaches based and experienced-based techniques) are a way to testing screen-dialogue flows (e.g. for internet
to test case design are useful, and list the common derive and select test conditions or test cases based on
includes peers and technical experts; applications or business scenarios).
techniques for each. (K1) an analysis of the test basis documentation and the
o may be performed as a peer review without 4.3.5 Use case testing (K2)
LO-4.2.2 Explain the characteristics and differences experience of developers, testers and users, whether
management participation; Tests can be specified from use cases or business
between specification-based testing, structure-based functional or non-functional, for a component or system
o ideally led by trained moderator (not the author); scenarios. A use case describes interactions between
testing and experience-based testing. (K2) without reference to its internal structure. White-box
o pre-meeting preparation; actors, including users and the system, which produce
o optionally the use of checklists, review report, list of 4.3 Specification-based or black-box techniques (also called structural or structure-based
a result of value to a system user. Each use case has
techniques) are based on an analysis of the structure of
findings and management participation; techniques (K3) the component or system. Some techniques fall clearly preconditions, which need to be met for a use case to
o may vary in practice from quite informal to very LO-4.3.1 Write test cases from given software models work successfully. Each use case terminates with post-
into a single category; others have elements of more
formal; using the following test design techniques: (K3) conditions, which are the observable results and final
than one category. This syllabus refers to specification-
o main purposes: discuss, make decisions, evaluate o equivalence partitioning; state of the system after the use case has been
based or experience-based approaches as black-box
alternatives, find defects, solve technical problems and o boundary value analysis; techniques and structure-based as white-box completed. A use case usually has a mainstream (i.e.
check conformance to specifications and standards. o decision table testing; techniques. most likely) scenario, and sometimes alternative
Inspection o state transition testing. Common features of specification-based techniques: branches. Use cases describe the “process flows”
Key characteristics: LO-4.3.2 Understand the main purpose of each of the o Models, either formal or informal, are used for the through a system based on its actual likely use, so the
o led by trained moderator (not the author); four techniques, what level and type of testing could test cases derived from use cases are most useful in
specification of the problem to be solved, the software
o usually peer examination; use the technique, and how coverage may be uncovering defects in the process flows during real-
or its components.
o defined roles; measured. (K2) world use of the system. Use cases, often referred to
o From these models test cases can be derived
o includes metrics; LO-4.3.3 Understand the concept of use case testing as scenarios, are very useful for designing acceptance
systematically.
o formal process based on rules and checklists with and its benefits. (K2) tests with customer/user participation. They also help
Common features of structure-based techniques: uncover integration defects caused by the interaction
entry and exit criteria; 4.4 Structure-based or white-box o Information about how the software is constructed is
and interference of different components, which
o pre-meeting preparation; used to derive the test cases, for example, code and
o inspection report, list of findings;
techniques (K3) individual component testing would not see.
LO-4.4.1 Describe the concept and importance of code design.
o formal follow-up process; o The extent of coverage of the software can be Page 40
coverage. (K2)
o optionally, process improvement and reader; LO-4.4.2 Explain the concepts of statement and measured for existing test cases, and further test cases
can be derived systematically to increase coverage.
4.4 Structure-based or white-box
o main purpose: find defects. decision coverage, and understand that these concepts
Walkthroughs, technical reviews and inspections can can also be used at other test levels than component Common features of experience-based techniques: techniques (K3) 60 minutes
be performed within a peer group – colleagues at the testing (e.g. on business procedures at system level). o The knowledge and experience of people are used Terms
same organizational level. This type of review is called LO-4.4.3 Write test cases from given control flows to derive the test cases. Code coverage, decision coverage, statement
a “peer review”. using the following test design techniques: o Knowledge of testers, developers, users and other coverage, structure-based testing.
o statement testing; stakeholders about the software, its usage and its
Page 32
environment. Background
3.2.4 Success factors for reviews o decision testing. (K3) Structure-based testing/white-box testing is based on
LO-4.4.4 Assess statement and decision coverage for o Knowledge about likely defects and their distribution.
Success factors for reviews include: an identified structure of the software or system, as
o Each review has a clear predefined objective. completeness. (K3) Page 38 seen in the following examples:
o The right people for the review objectives are 4.5 Experience-based techniques 4.3 Specification-based or black- o Component level: the structure is that of the code
involved. Page 35 itself, i.e. statements, decisions or branches.
o Defects found are welcomed, and expressed LO-4.5.1 Recall reasons for writing test cases based on box techniques (K3) 150 minutes o Integration level: the structure may be a call tree (a
objectively. intuition, experience and knowledge about common Terms diagram in which modules call other modules).
o People issues and psychological aspects are dealt defects. (K1) Boundary value analysis, decision table testing, o System level: the structure may be a menu structure,
with (e.g. making it a positive experience for the LO-4.5.2 Compare experience-based techniques with equivalence partitioning, state transition testing, use business process or web page structure.
author). specification-based testing techniques. (K2) case testing. In this section, two code-related structural techniques
o Review techniques are applied that are suitable to for code coverage, based on statements and decisions,
4.6 Choosing test techniques (K2) 4.3.1 Equivalence partitioning (K3) are discussed. For decision testing, a control flow
the type and level of software work products and LO-4.6.1 List the factors that influence the selection of Inputs to the software or system are divided into groups diagram may be used to visualize the alternatives for
reviewers. the appropriate test design technique for a particular that are expected to exhibit similar behaviour, so they each decision.
o Checklists or roles are used if appropriate to kind of problem, such as the type of system, risk, are likely to be processed in the same way.
increase effectiveness of defect identification. customer requirements, models for use case modeling, Equivalence partitions (or classes) can be found for 4.4.1 Statement testing and coverage
o Training is given in review techniques, especially the requirements models or tester knowledge. (K2) both valid data and invalid data, i.e. values that should (K3)
more formal techniques, such as inspection. be rejected. Partitions can also be identified for outputs, In component testing, statement coverage is the
Page 36
o Management supports a good review process (e.g. internal values, time-related values (e.g. before or after assessment of the percentage of executable
by incorporating adequate time for review activities in 4.1 The TEST DEVELOPMENT an event) and for interface parameters (e.g. during statements that have been exercised by a test case
project schedules). integration testing). Tests can be designed to cover
o There is an emphasis on learning and process PROCESS (K2) 15 minutes partitions. Equivalence partitioning is applicable at all
suite. Statement testing derives test cases to execute
specific statements, normally to increase statement
improvement. Terms levels of testing. Equivalence partitioning as a coverage.
Page 33 Test case specification, test design, test execution technique can be used to achieve input and output 4.4.2 Decision testing and coverage
schedule, test procedure specification, test script, coverage. It can be applied to human input, input via
3.3 Static analysis by tools (K2) traceability. interfaces to a system, or interface parameters in
Decision coverage, related to branch testing, is the
assessment of the percentage of decision outcomes
Terms Background integration testing.
(e.g. the True and False options of an IF statement)
Compiler, complexity, control flow, data flow, static The process described in this section can be done in 4.3.2 Boundary value analysis (K3) that have been exercised by a test case suite. Decision
analysis. different ways, from very informal with little or no Behaviour at the edge of each equivalence partition is testing derives test cases to execute specific decision
Background documentation, to very formal (as it is described more likely to be incorrect, so boundaries are an area outcomes, normally to increase decision coverage.
The objective of static analysis is to find defects in below). The level of formality depends on the context of where testing is likely to yield defects. The maximum Decision testing is a form of control flow testing as it
software source code and software models. Static the testing, including the organization, the maturity of and minimum values of a partition are its boundary generates a specific flow of control through the decision
analysis is performed without actually executing the testing and development processes, time constraints values. A boundary value for a valid partition is a valid points. Decision coverage is stronger than statement
software being examined by the tool; dynamic testing and the people involved. During test analysis, the test boundary value; the boundary of an invalid partition is coverage: 100% decision coverage guarantees 100%
does execute the software code. Static analysis can basis documentation is analyzed in order to determine an invalid boundary value. Tests can be designed to statement coverage, but not vice versa.
locate defects that are hard to find in testing. As with what to test, i.e. to identify the test conditions. A test cover both valid and invalid boundary values. When 4.4.3 Other structure-based
reviews, static analysis finds defects rather than condition is defined as an item or event that could be designing test cases, a test for each boundary value is
failures. Static analysis tools analyze program code verified by one or more test cases (e.g. a function, chosen. Boundary value analysis can be applied at all techniques (K1)
transaction, quality characteristic or structural element). test levels. It is relatively easy to apply and its defect There are stronger levels of structural coverage beyond
(e.g. control flow and data flow), as well as generated
Establishing traceability from test conditions back to the finding capability is high; detailed specifications are decision coverage, for example, condition coverage
output such as HTML and XML.
specifications and requirements enables both impact helpful. This technique is often considered as an and multiple condition coverage. The concept of
The value of static analysis is:
analysis, when requirements change, and requirements extension of equivalence partitioning. It can be used on coverage can also be applied at other test levels (e.g.
o Early detection of defects prior to test execution.
coverage to be determined for a set of tests. During equivalence classes for user input on screen as well at integration level) where the percentage of modules,
o Early warning about suspicious aspects of the code
test analysis the detailed test approach is implemented as, for example, on time ranges (e.g. time out, components or classes that have been exercised by a
or design, by the calculation of metrics, such as a high test case suite could be expressed as module,
to select the test design techniques to use, based on, transactional speed requirements) or table ranges (e.g.
complexity measure. component or class coverage. Tool support is useful for
among other considerations, the risks identified (see table size is 256*256). Boundary values may also be
o Identification of defects not easily found by dynamic the structural testing of code.
Chapter 5 for more on risk analysis). used for test data selection.
testing.
4.3.3 Decision table testing (K3) Page 41
/opt/scribd/conversion/tmp/scratch1/20551997.doc

Page 45 may take over the role of tester, keeping some degree of technology and/or business domain experts outside
4.5 Experience-based techniques of independence. Typically testers at the component the test team.
Terms 5.1 Test organization (K2) 30 min and integration level would be developers, testers at o Regression-averse approaches, such as those that
Exploratory testing, fault attack. Terms the acceptance test level would be business experts include reuse of existing test material, extensive
Tester, test leader, test manager. and users, and testers for operational acceptance automation of functional regression tests, and standard
Background testing would be operators. test suites.
Experienced-based testing is where tests are derived 5.1.1 Test organization and Different approaches may be combined, for example, a
from the tester’s skill and intuition and their experience Page 47
independence (K2) risk-based dynamic approach.
with similar applications and technologies. When used
to augment systematic techniques, these techniques
The effectiveness of finding defects by testing and 5.2 Test planning and estimation The selection of a test approach should consider the
reviews can be improved by using independent testers. context, including:
can be useful in identifying special tests not easily Options for independence are:
Terms o Risk of failure of the project, hazards to the product
captured by formal techniques, especially when applied Test approach
o No independent testers. Developers test their own and risks of product failure to humans, the environment
after more formal approaches. However, this technique
code. 5.2.1 Test planning (K2) and the company.
may yield widely varying degrees of effectiveness, This section covers the purpose of test planning within
o Independent testers within the development teams. o Skills and experience of the people in the proposed
depending on the testers’ experience. A commonly development and implementation projects, and for
o Independent test team or group within the techniques, tools and methods.
used experienced-based technique is error guessing. maintenance activities. Planning may be documented
Generally testers anticipate defects based on organization, reporting to project management or o The objective of the testing endeavour and the
executive management. in a project or master test plan, and in separate test mission of the testing team.
experience. A structured approach to the error guessing plans for test levels, such as system testing and
technique is to enumerate a list of possible errors and o Independent testers from the business organization o Regulatory aspects, such as external and internal
or user community. acceptance testing. Outlines of test planning
to design tests that attack these errors. This systematic regulations for the development process.
o Independent test specialists for specific test targets documents are covered by the ‘Standard for Software
approach is called fault attack. These defect and failure o The nature of the product and the business.
Test Documentation’ (IEEE 829). Planning is influenced
lists can be built based on experience, available defect such as usability testers, security testers or certification Page 49
by the test policy of the organization, the scope of
and failure data, and from common knowledge about testers (who certify a software product against
why software fails. Exploratory testing is concurrent test standards and regulations).
testing, objectives, risks, constraints, criticality,
testability and the availability of resources. The more
5.3 Test progress monitoring and
design, test execution, test logging and learning, based o Independent testers outsourced or external to the
on a test charter containing test objectives, and carried organization.
the project and test planning progresses, the more control (K2) 20 minutes
information is available, and the more detail that can be
out within time-boxes. It is an approach that is most For large, complex or safety critical projects, it is
included in the plan. Test planning is a continuous Terms
useful where there are few or inadequate specifications usually best to have multiple levels of testing, with Defect density, failure rate, test control, test monitoring,
activity and is performed in all life cycle processes and
and severe time pressure, or in order to augment or some or all of the levels done by independent testers. test report.
activities. Feedback from test activities is used to
complement other, more formal testing. It can serve as Development staff may participate in testing, especially
at the lower levels, but their lack of objectivity often
recognize changing risks so that planning can be 5.3.1 Test progress monitoring (K1)
a check on the test process, to help ensure that the adjusted. The purpose of test monitoring is to give feedback and
most serious defects are found. limits their effectiveness. The independent testers may
have the authority to require and define test processes 5.2.2 Test planning activities (K2) visibility about test activities. Information to be
Page 42 Test planning activities may include: monitored may be collected manually or automatically
and rules, but testers should take on such process-
4.6 Choosing test techniques (K2) related roles only in the presence of a clear o Determining the scope and risks, and identifying the and may be used to measure exit criteria, such as
coverage. Metrics may also be used to assess
management mandate to do so. objectives of testing.
Terms The benefits of independence include: o Defining the overall approach of testing (the test progress against the planned schedule and budget.
No specific terms. Common test metrics include:
o Independent testers see other and different defects, strategy), including the definition of the test levels and
o Percentage of work done in test case preparation (or
Background and are unbiased. entry and exit criteria.
o Integrating and coordinating the testing activities into percentage of planned test cases prepared).
The choice of which test techniques to use depends on o An independent tester can verify assumptions
o Percentage of work done in test environment
a number of factors, including the type of system, people made during specification and implementation the software life cycle activities: acquisition, supply,
regulatory standards, customer or contractual development, operation and maintenance. preparation.
of the system.
requirements, level of risk, type of risk, test objective, o Making decisions about what to test, what roles will o Test case execution (e.g. number of test cases
Drawbacks include:
documentation available, knowledge of the testers, time o Isolation from the development team (if treated as perform the test activities, how the test activities should run/not run, and test cases passed/failed).
and budget, development life cycle, use case models be done, and how the test results will be evaluated. o Defect information (e.g. defect density, defects found
totally independent).
and previous experience of types of defects found. o Independent testers may be the bottleneck as the o Scheduling test analysis and design activities. and fixed, failure rate, and retest results).
Some techniques are more applicable to certain o Scheduling test implementation, execution and o Test coverage of requirements, risks or code.
last checkpoint.
situations and test levels; others are applicable to o Developers may lose a sense of responsibility for evaluation. o Subjective confidence of testers in the product.
all test levels. quality. o Assigning resources for the different activities o Dates of test milestones.
Page 43 Testing tasks may be done by people in a specific defined. o Testing costs, including the cost compared to the
testing role, or may be done by someone in another o Defining the amount, level of detail, structure and benefit of finding the next defect or to run the next test.
5. Test management (K3) role, such as a project manager, quality manager, templates for the test documentation. 5.3.2 Test Reporting (K2)
5.1 Test organization (K2) developer, business and domain expert, infrastructure o Selecting metrics for monitoring and controlling test Test reporting is concerned with summarizing
LO-5.1.1 Recognize the importance of independent or IT operations. preparation and execution, defect resolution and risk information about the testing endeavour, including:
testing. (K1) 5.1.2 Tasks of the test leader and issues. o What happened during a period of testing, such as
LO-5.1.2 List the benefits and drawbacks of o Setting the level of detail for test procedures in order dates when exit criteria were met.
independent testing within an organization. (K2)
tester (K1) to provide enough information to support reproducible o Analyzed information and metrics to support
In this syllabus two test positions are covered, test
LO-5.1.3 Recognize the different team members to be test preparation and execution. recommendations and decisions about future actions,
leader and tester. The activities and tasks performed by
considered for the creation of a test team. (K1) 5.2.3 Exit criteria (K2) such as an assessment of defects remaining, the
people in these two roles depend on the project and
LO-5.1.4 Recall the tasks of typical test leader and economic benefit of continued testing, outstanding
product context, the people in the roles, and the The purpose of exit criteria is to define when to stop
tester. (K1) risks, and the level of confidence in tested software.
organization. Sometimes the test leader is called a test testing, such as at the end of a test level or when a set
5.2 Test planning and estimation K2 manager or test coordinator. The role of the test leader of tests has a specific goal. The outline of a test summary report is given in
LO-5.2.1 Recognize the different levels and objectives Typically exit criteria may consist of: ‘Standard for Software Test Documentation’ (IEEE
may be performed by a project manager, a
of test planning. (K1) o Thoroughness measures, such as coverage of code, 829).
development manager, a quality assurance manager or
LO-5.2.2 Summarize the purpose and content of the functionality or risk. Metrics should be collected during and at the end of a
the manager of a test group. In larger projects two
test plan, test design specification and test procedure o Estimates of defect density or reliability measures. test level in order to assess:
positions may exist: test leader and test manager.
documents according to the ‘Standard for Software Test o The adequacy of the test objectives for that test
Typically the test leader plans, monitors and controls o Cost.
Documentation’ (IEEE 829). (K2) the testing activities and tasks as defined in Section level.
o Residual risks, such as defects not fixed or lack of
LO-5.2.3 Differentiate between conceptually different 1.4. o The adequacy of the test approaches taken.
test coverage in certain areas.
test approaches, such as analytical, model based, o The effectiveness of the testing with respect to its
Page 46 o Schedules such as those based on time to market.
methodical, process/standard compliant, objectives.
Typical test leader tasks may include: Page 48
dynamic/heuristic, consultative and regression averse.
o Coordinate the test strategy and plan with project 5.3.3 Test control (K2)
LO-5.2.4 Differentiate between the subject of test
managers and others.
5.2.4 Test estimation (K2) Test control describes any guiding or corrective actions
planning for a system and for scheduling test Two approaches for the estimation of test effort are taken as a result of information and metrics gathered
execution. (K2) o Write or review a test strategy for the project, and
covered in this syllabus: and reported. Actions may cover any test activity and
LO-5.2.5 Write a test execution schedule for a given set test policy for the organization.
o The metrics-based approach: estimating the testing may affect any other software life cycle activity or task.
of test cases, considering prioritization, and technical o Contribute the testing perspective to other project
effort based on metrics of former or similar projects or
and logical dependencies. (K3) activities, such as integration planning. Page 50
based on typical values.
LO-5.2.6 List test preparation and execution activities o Plan the tests – considering the context and Examples of test control actions are:
o The expert-based approach: estimating the tasks by
that should be considered during test planning. (K1) understanding the test objectives and risks – including o Making decisions based on information from test
the owner of these tasks or by experts.
LO-5.2.7 Recall typical factors that influence the effort selecting test approaches, estimating the time, effort monitoring.
Once the test effort is estimated, resources can be
related to testing. (K1) and cost of testing, acquiring resources, defining test o Re-prioritize tests when an identified risk occurs (e.g.
identified and a schedule can be drawn up.
LO-5.2.8 Differentiate between two conceptually levels, cycles, and planning incident management. software delivered late).
The testing effort may depend on a number of factors,
different estimation approaches: the metrics based o Initiate the specification, preparation, implementation o Change the test schedule due to availability of a test
including:
approach and the expert-based approach. (K2) and execution of tests, monitor the test results and environment.
o Characteristics of the product: the quality of the
LO-5.2.9 Recognize/justify adequate exit criteria for check the exit criteria. o Set an entry criterion requiring fixes to have been
specification and other information used for test models
specific test levels and groups of test cases (e.g. for o Adapt planning based on test results and progress retested (confirmation tested) by a developer before
(i.e. the test basis), the size of the product, the
integration testing, acceptance testing or test cases for (sometimes documented in status reports) and take accepting them into a build.
complexity of the problem domain, the requirements for
usability testing). (K2) any action necessary to compensate for problems. reliability and security, and the requirements for Page 51
5.3 Test progress monitoring and o Set up adequate configuration management of

control (K2)
testware for traceability.
documentation.
o Characteristics of the development process: the 5.4 Configuration management
o Introduce suitable metrics for measuring test stability of the organization, tools used, test process, Terms
LO-5.3.1 Recall common metrics used for monitoring progress and evaluating the quality of the testing and
test preparation and execution. (K1) skills of the people involved, and time pressure. Configuration management, version control.
the product. o The outcome of testing: the number of defects and
LO-5.3.2 Understand and interpret test metrics for test o Decide what should be automated, to what degree, Background
reporting and test control (e.g. defects found and fixed, the amount of rework required. The purpose of configuration management is to
and how.
and tests passed and failed). (K2) o Select tools to support testing and organize any 5.2.5 Test approaches (test establish and maintain the integrity of the products
LO-5.3.3 Summarize the purpose and content of the (components, data and documentation) of the software
test summary report document according to the
training in tool use for testers. strategies) (K2)
o Decide about the implementation of the test One way to classify test approaches or strategies is or system through the project and product life cycle.
‘Standard for Software Test Documentation’ (IEEE For testing, configuration management may involve
environment. based on the point in time at which the bulk of the test
829). (K2) ensuring that:
o Write test summary reports based on the information design work is begun:
5.4 Configuration management (K2) gathered during testing. o Preventative approaches, where tests are designed o All items of testware are identified, version
LO-5.4.1 Summarize how configuration management as early as possible. controlled, tracked for changes, related to each other
Typical tester tasks may include:
supports testing. (K2) o Review and contribute to test plans. o Reactive approaches, where test design comes after and related to development items (test objects) so that
traceability can be maintained throughout the test
5.5 Risk and testing (K2) o Analyze, review and assess user requirements, the software or system has been produced.
process.
LO-5.5.1 Describe a risk as a possible problem that specifications and models for testability. Typical approaches or strategies include:
o All identified documents and software items are
would threaten the achievement of one or more o Create test specifications. o Analytical approaches, such as risk-based testing
referenced unambiguously in test documentation.
stakeholders’ project objectives. (K2) o Set up the test environment (often coordinating with where testing is directed to areas of greatest risk.
For the tester, configuration management helps to
LO-5.5.2 Remember that risks are determined by system administration and network management). o Model-based approaches, such as stochastic testing
uniquely identify (and to reproduce) the tested item,
likelihood (of happening) and impact (harm resulting if it o Prepare and acquire test data. using statistical information about failure rates (such as test documents, the tests and the test harness.
does happen). (K1) o Implement tests on all test levels, execute and log reliability growth models) or usage (such as operational During test planning, the configuration management
LO-5.5.3 Distinguish between the project and product the tests, evaluate the results and document the profiles). procedures and infrastructure (tools) should be chosen,
risks. (K2) deviations from expected results. o Methodical approaches, such as failure-based documented and implemented.
LO-5.5.4 Recognize typical product and project risks. o Use test administration or management tools and (including error guessing and fault-attacks),
(K1) experienced-based, check-list based, and quality Page 52
test monitoring tools as required.
LO-5.5.5 Describe, using examples, how risk analysis
and risk management may be used for test planning.
o Automate tests (may be supported by a developer or characteristic based.
o Process- or standard-compliant approaches, such as
5.5 Risk and testing (K2) 30 min
a test automation expert).
Page 44 o Measure performance of components and systems those specified by industry-specific standards or the Terms
various agile methodologies. Product risk, project risk, risk, risk-based testing.
5.6 Incident Management (K3) (if applicable).
LO-5.6.1 Recognize the content of an incident report o Review tests developed by others. o Dynamic and heuristic approaches, such as Background
People who work on test analysis, test design, specific exploratory testing where testing is more reactive Risk can be defined as the chance of an event, hazard,
according to the ‘Standard for Software Test
test types or test automation may be specialists in to events than pre-planned, and where execution and threat or situation occurring and its undesirable
Documentation’ (IEEE 829). (K1)
these roles. Depending on the test level and the risks evaluation are concurrent tasks. consequences, a potential problem. The level of risk
LO-5.6.2 Write an incident report covering the
related to the product and the project, different people o Consultative approaches, such as those where test will be determined by the likelihood of an adverse event
observation of a failure during testing. (K3)
coverage is driven primarily by the advice and guidance
/opt/scribd/conversion/tmp/scratch1/20551997.doc

happening and the impact (the harm resulting from that o Global issues, such as other areas that may be Configuration management (CM) tools are not strictly techniques used, what is measured and the coding
event). affected by a change resulting from the incident. testing tools, but are typically necessary to keep track language. Code coverage tools measure the
5.5.1 Project risks (K2) o Change history, such as the sequence of actions of different versions and builds of the software and percentage of specific types of code structure that have
Project risks are the risks that surround the project’s taken by project team members with respect to the tests. been exercised (e.g. statements, branches or
capability to deliver its objectives, such as: incident to isolate, repair, and confirm it as fixed. Configuration Management tools: decisions, and module or function calls). These tools
o Organizational factors: o References, including the identity of the test case o Store information about versions and builds of show how thoroughly the measured type of structure
specification that revealed the problem. software and testware. has been exercised by a set of tests.
o skill and staff shortages;
The structure of an incident report is also covered in o Enable traceability between testware and software Security tools
o personal and training issues;
the ‘Standard for Software Test Documentation’ (IEEE work products and product variants. Security tools check for computer viruses and denial of
o political issues, such as: service attacks. A firewall, for example, is not strictly a
- problems with testers communicating their needs and test results; 829). o Are particularly useful when developing on more
- failure to follow up on information found in testing and reviews than one configuration of the hardware/software testing tool, but may be used in security testing.
Page 55 Security testing tools search for specific vulnerabilities
(e.g. not improving development and testing practices). environment (e.g. for different operating system
o improper attitude toward or expectations of testing versions, different libraries or compilers, different of the system.
(e.g. not appreciating the value of finding defects during Page 56 browsers or different computers). 6.1.6 Tool support for performance
testing).
o Technical issues:
6. Tool support for testing 6.1.3 Tool support for static testing and monitoring (K1)
Review tools
o problems in defining the right requirements; 6.1 Types of test tool (K2) Review tools (also known as review process support
Dynamic analysis tools (D)
LO-6.1.1 Classify different types of test tools according Dynamic analysis tools find defects that are evident
o the extent that requirements can be met given tools) may store information about review processes,
to the test process activities. (K2) only when software is executing, such as time
existing constraints; store and communicate review comments, report on dependencies or memory leaks. They are typically used
o the quality of the design, code and tests. LO-6.1.2 Recognize tools that may help developers in
defects and effort, manage references to review rules in component and component integration testing, and
o Supplier issues: their testing. (K1)
and/or checklists and keep track of traceability between when testing middleware.
o failure of a third party; 6.2 Effective use of tools: potential documents and source code. They may also provide Performance/Load/Stress testing tools
o contractual issues. benefits and risks (K2) aid for online reviews, which is useful if the team is Performance testing tools monitor and report on how a
When analyzing, managing and mitigating these risks, LO-6.2.1 Summarize the potential benefits and risks of geographically dispersed. system behaves under a variety of simulated usage
the test manager is following well established project test automation and tool support for testing. (K2) Static analysis tools (D) conditions. They simulate a load on an application, a
management principles. The ‘Standard for Software LO-6.2.2 Recognize that test execution tools can have Static analysis tools support developers, testers and database, or a system environment, such as a network
Test Documentation’ (IEEE 829) outline for test plans different scripting techniques, including data driven and quality assurance personnel in finding defects before or server. The tools are often named after the aspect of
requires risks and contingencies to be stated. keyword driven. (K1) dynamic testing. Their major purposes include: performance that they measure, such as load or stress,
o The enforcement of coding standards.
5.5.2 Product risks (K2) 6.3 Introducing a tool into an so are also known as load testing tools or stress testing
o The analysis of structures and dependencies (e.g. tools. They are often based on automated repetitive
Potential failure areas (adverse future events or
hazards) in the software or system are known as
organization (K1) linked web pages). execution of tests, controlled by parameters.
LO-6.3.1 State the main principles of introducing a tool o Aiding in understanding the code. Monitoring tools
product risks, as they are a risk to the quality of the
into an organization. (K1) Static analysis tools can calculate metrics from the Monitoring tools are not strictly testing tools but provide
product, such as:
LO-6.3.2 State the goals of a proof-of-concept/piloting code (e.g. complexity), which can give valuable information that can be used for testing purposes and
o Failure-prone software delivered.
phase for tool evaluation. (K1) information, for example, for planning or risk analysis. which is not available by other means. Monitoring tools
o The potential that the software/hardware could
LO-6.3.3 Recognize that factors other than simply continuously analyze, verify and report on usage of
cause harm to an individual or company. Page 59
acquiring a tool are required for good tool support. (K1) specific system resources, and give warnings of
o Poor software characteristics (e.g. functionality, Modeling tools (D)
Page 57 Modeling tools are able to validate models of the possible service problems. They store information
reliability, usability and performance). about the version and build of the software and
o Software that does not perform its intended
functions.
6.1 Types of test tool (K2) 45 min software. For example, a database model checker may
find defects and inconsistencies in the data model; testware, and enable traceability.
Risks are used to decide where to start testing and Terms other modeling tools may find defects in a state model 6.1.7 Tool support for specific
where to test more; testing is used to reduce the risk of Configuration management tool, coverage tool, or an object model. These tools can often aid in application areas (K1)
an adverse effect occurring, or to reduce the impact of debugging tool, dynamic analysis tool, incident generating some test cases based on the model (see Individual examples of the types of tool classified above
an adverse effect. Product risks are a special type of management tool, load testing tool, modeling tool, also Test design tools below). The major benefit of can be specialized for use in a particular type of
risk to the success of a project. Testing as a risk-control monitoring tool, performance testing tool, probe effect, static analysis tools and modeling tools is the cost application. For example, there are performance testing
activity provides feedback about the residual risk by requirements management tool, review tool, security effectiveness of finding more defects at an earlier time tools specifically for web-based applications, static
measuring the effectiveness of critical defect removal tool, static analysis tool, stress testing tool, test in the development process. As a result, the analysis tools for specific development platforms, and
and of contingency plans. comparator, test data preparation tool, test design tool, development process may accelerate and improve by dynamic analysis tools specifically for testing security
test harness, test execution tool, test management tool, having less rework. aspects. Commercial tool suites may target specific
Page 53
A risk-based approach to testing provides proactive
unit test framework tool. 6.1.4 Tool support for test specificat. application areas (e.g. embedded systems).
opportunities to reduce the levels of product risk, 6.1.1 Test tool classification (K2) Test design tools Page 61
starting in the initial stages of a project. It involves the There are a number of tools that support different Test design tools generate test inputs or executable
identification of product risks and their use in guiding aspects of testing. Tools are classified in this syllabus tests from requirements, from a graphical user 6.1.8 Tool support using other tools
according to the testing activities that they support. interface, from design models (state, data or object) or The test tools listed here are not the only types of tools
test planning and control, specification, preparation and
Some tools clearly support one activity; others may from code. This type of tool may generate expected used by testers – they may also use spreadsheets,
execution of tests. In a risk based approach the risks
support more than one activity, but are classified under outcomes as well (i.e. may use a test oracle). The SQL, resource or debugging tools (D), for example.
identified may be used to:
o Determine the test techniques to be employed. the activity with which they are most closely associated. generated tests from a state or object model are useful Page 62
Some commercial tools offer support for only one type for verifying the implementation of the model in the
o Determine the extent of testing to be carried out.
o Prioritize testing in an attempt to find the critical of activity; other commercial tool vendors offer suites or software, but are seldom sufficient for verifying all 6.2 Effective use of tools:
defects as early as possible. families of tools that provide support for many or all of aspects of the software or system. They can save
valuable time and provide increased thoroughness of
potential benefits and risks (K2)
o Determine whether any non-testing activities could these activities. Testing tools can improve the efficiency
of testing activities by automating repetitive tasks. testing because of the completeness of the tests that Terms
be employed to reduce risk (e.g. providing training to Data-driven (testing), keyword-driven (testing), scripting
Testing tools can also improve the reliability of testing the tool can generate. Other tools in this category can
inexperienced designers). language.
by, for example, automating large data comparisons or aid in supporting the generation of tests by providing
Risk-based testing draws on the collective knowledge
and insight of the project stakeholders to determine the
simulating behaviour. Some types of test tool can be structured templates, sometimes called a test frame, 6.2.1 Potential benefits and risks of
intrusive in that the tool itself can affect the actual that generate tests or test stubs, and thus speed up the
risks and the levels of testing required to address those
outcome of the test. For example, the actual timing may test design process. tool support for testing (for all tools)
risks. Simply purchasing or leasing a tool does not guarantee
be different depending on how you measure it with Test data preparation tools
To ensure that the chance of a product failure is success with that tool. Each type of tool may require
different performance tools, or you may get a different Test data preparation tools manipulate databases, files
minimized, risk management activities provide a additional effort to achieve real and lasting benefits.
measure of code coverage depending on which or data transmissions to set up test data to be used
disciplined approach to: There are potential benefits and opportunities with the
coverage tool you use. The consequence of intrusive during the execution of tests. A benefit of these tools is
o Assess (and reassess on a regular basis) what can use of tools in testing, but there are also risks.
tools is called the probe effect. Some tools offer to ensure that live data transferred to a test
go wrong (risks). environment is made anonymous, for data protection. Potential benefits of using tools include:
support more appropriate for developers (e.g. during
o Determine what risks are important to deal with. o Repetitive work is reduced (e.g. running regression
o Implement actions to deal with those risks.
component and component integration testing). Such 6.1.5 Tool support for test execution tests, re-entering the same test data, and checking
tools are marked with “(D)” in the classifications below.
In addition, testing may support the identification of new and logging (K1) against coding standards).
risks, may help to determine what risks should be 6.1.2 Tool support for management Test execution tools o Greater consistency and repeatability (e.g. tests
reduced, and may lower uncertainty about risks. of testing and tests (K1) Test execution tools enable tests to be executed executed by a tool, and tests derived from
Page 54 Management tools apply to all test activities over the automatically, or semi-automatically, using stored requirements).
entire software life cycle. inputs and expected outcomes, through the use of a o Objective assessment (e.g. static measures,
5.6 Incident management (K3) Test management tools scripting language. The scripting language makes it coverage).
Characteristics of test management tools include: possible to manipulate the tests with limited effort, for
Terms example, to repeat the test with different data or to test
o Ease of access to information about tests or testing
o Support for the management of tests and the testing (e.g. statistics and graphs about test progress, incident
Incident logging, incident management.
activities carried out. a different part of the system with similar steps. rates and performance).
Background o Interfaces to test execution tools, defect tracking Generally these tools include dynamic comparison Risks of using tools include:
Since one of the objectives of testing is to find defects, tools and requirement management tools. features and provide a test log for each test run. o Unrealistic expectations for the tool (including
the discrepancies between actual and expected o Independent version control or interface with an Test execution tools can also be used to record tests, functionality and ease of use).
outcomes need to be logged as incidents. Incidents external configuration management tool. when they may be referred to as capture playback o Underestimating the time, cost and effort for the
should be tracked from discovery and classification to o Support for traceability of tests, test results and tools. Capturing test inputs during exploratory testing or initial introduction of a tool (including training and
correction and confirmation of the solution. In order to unscripted testing can be useful in order to reproduce
incidents to source documents, such as requirements external expertise).
manage all incidents to completion, an organization and/or document a test, for example, if a failure occurs.
specifications. o Underestimating the time and effort needed to
should establish a process and rules for classification. Test harness/unit test framework tools (D)
o Logging of test results and generation of progress achieve significant and continuing benefits from the tool
Incidents may be raised during development, review, A test harness may facilitate the testing of components
reports. (including the need for changes in the testing process
testing or use of a software product. They may be or part of a system by simulating the environment in
o Quantitative analysis (metrics) related to the tests and continuous improvement of the way the tool is
raised for issues in code or the working system, or in which that test object will run. This may be done either
(e.g. tests run and tests passed) and the test object used).
any type of documentation including requirements, because other components of that environment are not
(e.g. incidents raised), in order to give information o Underestimating the effort required to maintain the
development documents, test documents, and user yet available and are replaced by stubs and/or drivers,
information such as “Help” or installation guides. about the test object, and to control and improve the test assets generated by the tool.
or simply to provide a predictable and controllable
Incident reports have the following objectives: test process. o Over-reliance on the tool (replacement for test
environment in which any faults can be localized to the
o Provide developers and other parties with feedback Page 58 object under test. A framework may be created where design or where manual testing would be better).
about the problem to enable identification, isolation and Requirements management tools part of the code, object, method or function, unit or 6.2.2 Special considerations for
correction as necessary. Requirements management tools store requirement component can be executed, by calling the object to be
statements, check for consistency and undefined
some types of tool (K1)
o Provide test leaders a means of tracking the quality tested and/or giving feedback to that object. It can do Test execution tools
of the system under test and the progress of the (missing) requirements, allow requirements to be this by providing artificial means of supplying input to Test execution tools replay scripts designed to
testing. prioritized and enable individual tests to be traceable to the test object, and/or by supplying stubs to take output implement tests that are stored electronically. This type
o Provide ideas for test process improvement. requirements, functions and/or features. Traceability from the object, in place of the real output targets. of tool often requires significant effort in order to
Details of the incident report may include: may be reported in test management progress reports.
Page 60 achieve significant benefits. Capturing tests by
o Date of issue, issuing organization, and author. The coverage of requirements, functions and/or
Test harness tools can also be used to provide an recording the actions of a manual tester seems
o Expected and actual results. features by a set of tests may also be reported.
execution framework in middleware, where languages, attractive, but this approach does not scale to large
o Identification of the test item (configuration item) and Incident management tools
operating systems or hardware must be tested numbers of automated tests. A captured script is a
Incident management tools store and manage incident
environment. together. They may be called unit test framework tools linear representation with specific data and actions as
reports, i.e. defects, failures or perceived problems and
o Software or system life cycle process in which the when they have a particular focus on the component part of each script. This type of script may be unstable
anomalies, and support management of incident
incident was observed. test level. This type of tool aids in executing the when unexpected events occur.
reports in ways that include:
o Description of the incident to enable reproduction component tests in parallel with building the code. Page 63
o Facilitating their prioritization.
and resolution, including logs, database dumps or Test comparators A data-driven approach separates out the test inputs
o Assignment of actions to people (e.g. fix or Test comparators determine differences between files,
screenshots. (the data), usually into a spreadsheet, and uses a more
confirmation test). databases or test results. Test execution tools typically
o Scope or degree of impact on stakeholder(s) generic script that can read the test data and perform
o Attribution of status (e.g. rejected, ready to be tested include dynamic comparators, but post-execution
interests. the same test with different data. Testers who are not
or deferred to next release). comparison may be done by a separate comparison
o Severity of the impact on the system. familiar with the scripting language can enter test data
These tools enable the progress of incidents to be tool. A test comparator may use a test oracle,
o Urgency/priority to fix. monitored over time, often provide support for statistical for these predefined scripts. In a keyword-driven
o Status of the incident (e.g. open, deferred, duplicate, especially if it is automated. approach, the spreadsheet contains keywords
analysis and provide reports about incidents. They are Coverage measurement tools (D)
waiting to be fixed, fixed awaiting retest, closed). also known as defect tracking tools. describing the actions to be taken (also called action
Coverage measurement tools can be either intrusive or words), and test data. Testers (even if they are not
o Conclusions, recommendations and approvals. Configuration management tools non-intrusive depending on the measurement familiar with the scripting language) can then define
/opt/scribd/conversion/tmp/scratch1/20551997.doc

tests using the keywords, which can be tailored to the development ..8, 10, 11, 12, 13, 14, 17, 19, special considerations for some types of tool62
application being tested. Technical expertise in the 20, 22, 23, 26, 29, 30, 33, 36, 42, 45, 47, test case ................................................ 36
scripting language is needed for all approaches (either 48, 51, 52, 54, 59, 60, 67 specification-based technique.... 26, 37, 38
by testers or by specialists in test automation). development model.......................... 19, 20 specification-based test design technique37
Whichever scripting technique is used, the expected drawbacks of independence................... 45 specification-based testing......... 25, 34, 35
results for each test need to be stored for later driver................................................ 22, 59 stakeholders..12, 13, 16, 17, 23, 37, 43, 53
comparison. dynamic analysis tool ....................... 57, 60 state transition testing ................ 34, 38, 39
Performance testing tools dynamic testing ...............13, 28, 29, 33, 58 statement coverage................................ 40
Performance testing tools need someone with expertise embedded system.................................. 60 statement testing.............................. 34, 40
in performance testing to help design the tests and emergency change................................. 27 static analysis................................... 29, 33
interpret the results. enhancement ................................... 24, 27 static analysis tool28, 33, 57, 58, 59, 60, 63
Static analysis tools entry criteria ........................................... 30 static technique ................................ 28, 29
Static analysis tools applied to source code can enforce equivalence partitioning ................... 34, 38 static testing ..................................... 13, 29
coding standards, but if applied to existing code may error ................................10, 11, 17, 41, 48 stress testing.............................. 25, 57, 60
generate a lot of messages. Warning messages do not error guessing ............................ 17, 41, 48 stress testing tool ............................. 57, 60
stop the code being translated into an executable exhaustive testing .................................. 14 structural testing....................22, 25, 26, 40
program, but should ideally be addressed so that exit criteria13, 15, 16, 30, 31, 43, 46, 47, 49 structure-based technique................ 37, 40
maintenance of the code is easier in the future. A expected result................16, 34, 36, 46, 63 structure-based test design technique37, 40
gradual implementation with initial filters to exclude experience-based technique ...... 35, 37, 41 structure-based testing..................... 34, 40
some messages would be an effective approach. experience-based test design technique 37 stub .................................................. 22, 59
Test management tools exploratory testing...................... 41, 48, 59 success factors ...................................... 32
Test management tools need to interface with other factory acceptance testing ..................... 24 system integration testing ................ 20, 22
tools or spreadsheets in order to produce information in failure10, 11, 13, 14, 17, 19, 22, 23, 29, 33, system testing .....13, 20, 22, 23, 24, 47, 69
the best format for the current needs of the 41, 44, 48, 49, 52, 53, 58, 59, 69 technical review ......................... 28, 30, 31
organization. The reports need to be designed and failure rate ........................................ 48, 49 test analysis ..........................15, 36, 46, 47
monitored so that they provide benefit. fault .......................................10, 11, 41, 59 test approach ........................36, 46, 48, 49
Page 64 fault attack.............................................. 41 test approach ................................... 47, 48
field testing....................................... 22, 24 test basis................................................ 15
6.3 Introducing a tool into an follow-up .......................................... 30, 31 test case 13, 14, 15, 16, 22, 25, 29, 34, 35,
formal review.................................... 28, 30 36, 37, 38, 39, 40, 43, 49, 54, 59, 69
organization (K1) 15 minutes functional requirement...................... 22, 23 test case specification................ 34, 36, 54
Terms functional specification........................... 25 test design.............................................. 36
No specific terms. functional task ........................................ 23 test case .................................... 13, 25, 36
functional test......................................... 25 test closure................................. 10, 15, 16
Background functional testing .................................... 25 test condition.......................................... 36
The main considerations in selecting a tool for an
functionality .........22, 23, 25, 47, 52, 62, 64 Page 76
organization include:
impact analysis .......................... 19, 27, 36 test conditions.................13, 15, 25, 36, 37
o Assessment of organizational maturity, strengths
incident15, 16, 18, 22, 44, 46, 54, 57, 58, 62 test control ............................15, 43, 49, 50
and weaknesses and identification of opportunities for incident logging ...................................... 54
an improved test process supported by tools. test coverage ..............................15, 47, 48
incident management................. 46, 54, 57 test data.. 15, 16, 36, 38, 46, 57, 59, 62, 63
o Evaluation against clear requirements and objective incident management tool ................ 57, 58
criteria. test data preparation tool ..................57, 59
incident report .................................. 44, 54 test design13, 15, 20, 34, 35, 36, 37, 41, 46,
o A proof-of-concept to test the required functionality independence ............................ 17, 45, 46 48, 57, 59, 62
and determine whether the product meets its informal review........................... 28, 30, 31 test design specification......................... 43
objectives. inspection..............................28, 30, 31, 32 test design technique ............34, 35, 36, 37
o Evaluation of the vendor (including training, support inspection leader.................................... 30 test design tool..................................57, 59
and commercial aspects). integration13, 20, 21, 22, 23, 26, 33, 38, 39, TEST DEVELOPMENT PROCESS............36, 73
o Identification of internal requirements for coaching 40, 43, 46, 57, 60, 69 test effort................................................ 48
and mentoring in the use of the tool. integration testing20, 21, 22, 23, 26, 33, 38, test environment 15, 16, 22, 23, 46, 49, 50,
Introducing the selected tool into an organization starts 43, 57, 60, 69 59
with a pilot project, which has the following objectives: interoperability testing ............................ 25 test estimation........................................ 48
o Learn more detail about the tool. introducing a tool into an organization56, 64 test execution13, 15, 16, 29, 33, 36, 41, 43,
o Evaluate how the tool fits with existing processes Page 75 56, 57, 59
and practices, and determine what would need to ISO 9126...............................11, 25, 27, 65 test execution schedule ......................... 36
change. development model................................ 20 test execution tool16, 36, 56, 57, 59, 60, 62
o Decide on standard ways of using, managing, iterative-incremental development model20 test harness ....................16, 22, 51, 57, 59
storing and maintaining the tool and the test assets keyword-driven approach....................... 63 test implementation.....................15, 36, 47
(e.g. deciding on naming conventions for files and keyword-driven testing........................... 62 test leader.......................17, 43, 45, 46, 54
tests, creating libraries and defining the modularity of kick-off ................................................... 30 test leader tasks..................................... 46
test suites). learning objective... 8, 9, 10, 19, 28, 34, 43, test level 19, 20, 22, 25, 26, 27, 34, 38, 40,
o Assess whether the benefits will be achieved at 56, 69, 70 42, 43, 46, 47
reasonable cost. load testing .................................25, 57, 60 test log ..................................15, 16, 41, 59
Success factors for the deployment of the tool within an load testing tool.................................57, 60 test management ........................43, 57, 58
organization include: test case ................................................ 36 test management tool .......................57, 63
o Rolling out the tool to the rest of the organization maintainability testing............................. 25 test manager.................................8, 45, 52
incrementally. maintenance testing..........................19, 27 test monitoring ............................46, 49, 50
o Adapting and improving processes to fit with the use management tool ..................46, 57, 58, 63 test objective.......13, 20, 25, 41, 42, 46, 49
of the tool. maturity.................................16, 30, 36, 64 test objective.......................................... 13
o Providing training and coaching/mentoring for new metric..........................................30, 31, 43 test oracle .........................................59, 60
users. mistake .......................................10, 11, 16 test organization .................................... 45
o Defining usage guidelines. modelling tool......................................... 59 test plan . 15, 16, 29, 43, 46, 47, 51, 52, 53,
o Implementing a way to learn lessons from tool use. moderator .........................................30, 31 73
o Monitoring tool use and benefits.\ monitoring tool ............................46, 57, 60 test planning .......15, 16, 43, 47, 51, 53, 73
non-functional requirement .........19, 22, 23 test planning activities............................ 47
Page 66 non-functional testing........................11, 25 test procedure...........15, 16, 34, 36, 43, 47
13. Index objectives for testing .............................. 13 test procedure specification ..............34, 36
action word ............................................ 63 operational acceptance testing .............. 24 test progress monitoring ........................ 49
alpha testing .....................................22, 24 operational test ...........................13, 21, 27 test report..........................................43, 49
architecture ...............15, 19, 21, 23, 25, 26 patch...................................................... 27 test reporting.....................................43, 49
archiving ...........................................16, 27 peer review .......................................30, 31 test script ....................................16, 29, 36
automation ............................................. 26 performance testing ..............25, 57, 60, 63 test strategy ..........................15, 46, 47, 48
benefits of independence....................... 45 performance testing tool .............57, 60, 63 test suite ................................................ 26
benefits of using tool .............................. 62 pesticide paradox................................... 14 test summary report ........15, 16, 43, 46, 49
beta testing .......................................22, 24 portability testing.................................... 25 test tool classification ............................. 57
black-box technique....................34, 37, 38 probe effect............................................ 57 test type ................................19, 25, 27, 46
black-box test design technique............. 37 product risk ...........................17, 43, 52, 53 test-driven development......................... 22
black-box testing.................................... 25 project risk ..................................12, 43, 52 tester 10, 13, 17, 30, 35, 39, 41, 43, 45, 46,
bottom-up............................................... 23 prototyping ............................................. 20 51, 62, 67
boundary value analysis ...................34, 38 quality 8, 10, 11, 12, 13, 18, 25, 34, 36, 45, tester tasks............................................. 46
bug....................................................10, 11 46, 48, 52, 54, 58 test-first approach .................................. 22
capture playback tool ............................. 59 rapid application development (RAD)..... 20 testing and quality .................................. 11
captured script ....................................... 62 Rational Unified Process (RUP)............. 20 testing principles .............................. 10, 14
checklists ....................................30, 31, 58 recorder ................................................. 30 testware ....................15, 16, 46, 51, 58, 60
choosing test technique ......................... 42 regression testing......15, 16, 19, 25, 26, 27 tool support .....................22, 29, 40, 56, 62
code coverage ................25, 26, 34, 40, 57 Regulation acceptance testing............... 24 tool support for management of testing and
commercial off the shelf (COTS)............ 21 reliability..............11, 13, 25, 47, 48, 52, 57 tests................................................... 57
off-the-shelf............................................ 20 reliability testing ..................................... 25 tool support for performance and monitoring 60
compiler ................................................. 33 requirement.........13, 20, 22, 29, 31, 57, 58 tool support for specific application areas60
complexity.............................11, 33, 48, 58 requirements management tool ........57, 58 tool support for static testing .................. 58
component integration testing20, 22, 26, 57, requirements specification ................23, 25 tool support for test execution and logging59
60 responsibilities ............................22, 28, 30 tool support for test specification............ 59
component testing20, 22, 24, 26, 34, 39, 40 re-testing.... 26, See confirmation testing, tool support for testing...................... 56, 62
configuration management ...43, 46, 51, 57 See confirmation testing tool support using other tool................... 61
configuration management tool.........57, 58 review13, 18, 28, 29, 30, 31, 32, 33, 45, 46, top-down ................................................ 23
Configuration management tool ............. 57 52, 54, 57, 58, 67, 70 traceability...........34, 36, 46, 51, 57, 58, 60
confirmation testing...13, 15, 16, 19, 25, 26 review tool.............................................. 58 transaction processing sequences ......... 23
contract acceptance testing ................... 24 review tool.............................................. 57 types of test tool............................... 56, 57
control flow............................25, 33, 34, 40 reviewer ........................................... 30, 31 unit test framework................22, 57, 59, 60
coverage 15, 22, 25, 26, 34, 36, 37, 38, 40, risk11, 12, 13, 14, 22, 23, 26, 27, 35, 36, 42, unit test framework tool .............. 57, 59, 60
47, 48, 49, 57, 58, 60, 62 43, 47, 48, 50, 52, 53, 58 upgrades................................................ 27
coverage tool ....................................57, 60 risk-based approach............................... 53 usability.....................11, 24, 25, 43, 45, 52
custom-developed software ................... 24 risk-based testing....................... 48, 52, 53 usability testing ................................ 25, 43
data flow ................................................ 33 risks ......................................11, 22, 47, 52 use case test.................................... 34, 38
data-driven approach............................. 63 risks of using tool ................................... 62 use case testing ......................... 34, 38, 39
data-driven testing ................................. 62 robustness testing.................................. 22 use cases..............................20, 23, 25, 39
debugging.......................13, 22, 26, 57, 61 roles ................8, 28, 30, 31, 32, 45, 46, 47 user acceptance testing ................... 22, 24
debugging tool ............................22, 57, 61 root cause ........................................ 10, 11 validation................................................ 20
decision coverage.............................34, 40 scribe ............................................... 30, 31 verification.............................................. 20
decision table testing ........................34, 38 scripting language...................... 59, 62, 63 version control.................................. 51, 57
decision testing .................................34, 40 security ...............24, 25, 33, 45, 48, 57, 60 V-model ................................................. 20
defect10, 11, 12, 13, 14, 16, 17, 19, 22, 23, security testing ................................. 25, 60 walkthrough................................ 28, 30, 31
25, 26, 28, 29, 30, 31, 32, 33, 35, 37, 38, security tool...................................... 57, 60 white-box test design technique ....... 37, 40
39, 41, 42, 43, 45, 47, 48, 49, 52, 53, 54, simulators............................................... 22 white-box testing .............................. 25, 40
57, 58, 59, 60, 69 site acceptance testing........................... 24
defect density....................................47, 49 software development .......8, 10, 11, 19, 20
defect tracking tool............................57, 58 software development model ................. 20

También podría gustarte