QA/QC SOLUTIONS

TESTING THE PROGRESS OF THE SOFTWARE SYSTEM

PRACTICE OBJECTIVE

The objective of this practice is to provide the testing group with a simple testing tool for measuring the progress of software development. The tester needs to know the progress of the system under development. It is the purpose of project management systems and accounting systems to monitor this progress. However, many of these systems are more budget and schedule oriented than project completion oriented.

A progress measurement method should meet several criteria. First and foremost, the method should be objective; degrees of completion should not be estimated. Ideally, the measurement method should be clear-cut enough to allow any project member to make the actual measurement.

Second, the resolution of the measuring scheme should be sufficiently fine to measure incremental progress on a weekly or monthly basis. The measurement should be timely in that it measures the current state of development. Providing accurate, current performance information on a periodic basis can be a positive motivating factor for a programming staff.

Finally, the method must be efficient. It should require minimum resources to collect, collate, and report performance data, and minimum time to interpret the results. Systems that require constant input from the programming staff, updates by clerical personnel, or integration of large amounts of data by management go unused.

The suggested test is a simple point accumulation system for measuring progress. The accumulated points can then be compared to the progress reported by the project management or accounting system. If there is a significant difference between the two progress measurements, the tester can challenge the validity of the results produced by the project management and/or accounting system.

PRACTICE WORKBENCH

The workbench for determining project status is illustrated in Figure 1. The workbench shows that the input is the project status data, which can be collected from project personnel or captured from automated project status software systems. The workbench calculates status by accumulating points earned for completed work versus total points needed to complete the project. The calculation is then subject to reasonableness checks; if it appears reasonable, a project status report is prepared for use by the testers. Note that this report can also be used by I/S management and project personnel.

Figure 1. Determining Project Status Workbench

The point system for progress measurement during software development provides an objective, accurate, efficient means of collecting and reporting performance data in an engineering field that often lacks visibility. The method uses data that is based on deliverable software items and that is collected as a normal part of the development process. The results are easily interpreted and can be presented in a number of formats and subdivisions. The scheme is flexible and can be modified to meet the needs of projects, both large and small.

Copyright 1997 Quality Assurance Institute Orlando, Florida
TESTING PROGRESS OF SOFTWARE SYSTEM
April 1997
QC. 4.1.1
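The cross-check described above can be sketched in a few lines. This is a minimal illustration, not part of the practice; in particular the 10-point threshold for a "significant difference" is an assumed value.

```python
# Compare the point system's percent complete against the project
# management system's figure and flag a significant difference for the
# tester to challenge. The 10-point threshold is an illustrative choice.

def progress_discrepancy(point_pct, pm_pct, threshold=10.0):
    """True when the two progress figures differ enough to warrant a challenge."""
    return abs(point_pct - pm_pct) > threshold

print(progress_discrepancy(48.0, 65.0))  # True: a 17-point gap is worth challenging
print(progress_discrepancy(48.0, 52.0))  # False: within normal variation
```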

INPUT PRODUCTS

There are two categories of input products, as follows:

Project work units - The project needs to be divided into work units, which require approximately the same amount of effort to complete. Examples of work units could be program modules, or project deliverables such as internal design, requirements, and subcontracts to third parties. If one work unit appears significantly larger than another, it should be broken down into smaller pieces if possible; if not, a weight might be assigned to that unit to indicate its relationship to the other units.

Project milestones - These are the checkpoints that the work units must pass through to be completed. The work units are normally tied to the software development life cycle. It is important that there is some means of validating that work is completed when a work unit reaches a checkpoint. Ideally, the milestones or checkpoints are points when one aspect of work is completed and the work unit is turned over to another group to perform another milestone effort.

IMPLEMENTATION PROCEDURES

The point system is really an extension of the milestone system. In its simplest form, this method ensures that each software module follows a similar development process and that several clearly identifiable milestones exist within that process. For example, on one software project ten modules will be developed and four milestones will define the development process. The milestones are:

Review and acceptance of design

Completion of code walkthrough

Verification of test results

Module release

Each milestone for each software item is worth a point. At each design review, code walkthrough, test verification, or release, a milestone is achieved and a corresponding point earned. By maintaining all the modules and milestones achieved in a file or spreadsheet and creating a few simple report generators, the software development team can devise an objective, accurate, and timely measure of performance. Figure 2 shows what a simple status report might look like, in which a maximum of 40 points can be earned.

This simplified scheme works well when all modules have the same complexity and each of the milestones represents an approximately equal amount of work. Weighting factors are used to handle modules of varying complexity or milestones representing unequal effort to complete.

Figure 2. Simple Status Report

             Design   Code   Test   Release   Points Earned
Module A       1       1                           2
Module B       1                                   1
Module C       1                                   1
Module D       1       1      1                    3
Module E       1       1                           2
Module F       1                                   1
Module G       1       1                           2
Module H       1       1      1       1            4
Module I       1                                   1
Module J       1       1                           2
Totals        10       6      2       1           19

Percent Complete = 19/40 = 48%

The heart of the system is the file or spreadsheet and a few simple reports. The file contains one record for each item to be tracked; each record contains fields to indicate whether a particular milestone has been met. It is recommended that fields be included to allow for a description of the item, the responsible analyst, work package identification, and various file identification fields. Figure 3 shows a sample record layout.
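The simple point system of Figure 2 can be sketched in code. This is a minimal illustration of the arithmetic; the per-module counts mirror the figure.

```python
# One point per milestone per module; percent complete is points earned
# divided by points possible. Data mirrors Figure 2 (10 modules, 4 milestones).

MILESTONES = ["design", "code", "test", "release"]

# Number of milestones met per module, in life-cycle order.
milestones_met = {
    "A": 2, "B": 1, "C": 1, "D": 3, "E": 2,
    "F": 1, "G": 2, "H": 4, "I": 1, "J": 2,
}

earned = sum(milestones_met.values())                # 19
possible = len(milestones_met) * len(MILESTONES)     # 40
percent = round(earned * 100 / possible)             # 48
print(f"{earned}/{possible} = {percent}%")
```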

Updating the file can be as straightforward as modifying records with a line editor, or as complex as building a special-purpose, interactive update program. Access should be limited to avoid unauthorized modification of the file, particularly if some of the other uses of the file are sensitive to change. The milestone status fields are updated, in some cases manually, as milestones are met. Once a milestone is achieved, a program librarian or other authorized individual updates the status file. In other cases, a computer program could determine that a milestone has occurred (e.g., an error-free compilation or a successful test run) and automatically update its status.

After the file is built, report generator programs are written to print the status. For smaller projects, a program that simply prints each record, sums the points defined and earned, and calculates the percentage of points earned may be sufficient. Larger projects may need several reports for different subsystems or summary reports that emphasize change.

KEY CONCEPT: THE IMPLEMENTATION PROCEDURES BECOME AN EASY-TO-PERFORM TASK IF A SPREADSHEET PACKAGE IS USED TO PROCESS THE DATA.

Extensions

Several extensions can be added to the scheme as described so far. The first is a method of weighting modules or milestones. Although treating all modules equally on a large program (e.g., more than 1,000 modules) appears to give accurate results, smaller programs with few modules may require that modules be weighted to yield a sufficiently accurate performance measurement. In addition, there may be a tendency to do all the "easy" modules first to show progress early.

A similar argument can be made for weighting milestones. Depending on the acceptance criteria to meet a milestone, some may involve more work than others; achieving the more difficult milestones represents accomplishing a greater amount of work. In some cases, a combination of module and milestone weights may interact. For example, the amount of design work for a module written for an earlier project may be considerably less than for one designed from scratch, but the amount of effort to code the routine might be greater because an unfamiliar language may be involved.

The weighting scheme is easily implemented by assigning points to each milestone for all modules. As a milestone is earned, the assigned points are added to the total earned and divided by the total defined points to compute the percentage of completion. The number of points assigned to each milestone is proportional to the difficulty of its achievement and relates directly to the estimated number of hours needed for its completion. It is recommended that points first be assigned to each of the modules and then reapportioned to the milestones.

A second extension is adding selection and sorting options to the report programs. Selection options allow the user to select entries by a field (e.g., work package number, file name, software family tree component, or responsible analyst). Once the entries of interest are selected, the sort option allows the user to order the entries by a key. The defined and earned points are summed from the selected entries, and the percent-complete figure calculated. Reports can then be printed listing all modules and the percent complete for a certain analyst, work package, or other selected criterion. It is helpful to allow Boolean operations on selection fields (e.g., Analyst A AND Subsystem B) and to provide for major and minor sort fields (e.g., listing modules in alphabetic order by analyst).

A third extension includes target dates and actual completion dates for each module record. In this extension, the individual milestone status fields are replaced by two dates. The first date field is a target date for when the milestone should be met. Target dates need not be used for all modules or milestones, but they are useful when an interdependency exists between a particular module milestone and another element in the system. These interdependencies may exist in the design stage to some extent, but they become very important during the integration phase of a project.

The completion date field signals when the milestone is achieved. If the points assigned to milestones that have an actual date entered in the file are added, the percent-complete number can be computed.

Using the two date fields has two advantages: allowing schedule interdependencies to be monitored, and providing a record for future analysis. If the date fields are made selectable and sortable, additional reports can be generated.

Assuming that an integration milestone has been identified, a list of modules can be selected by work package number, family tree identification, or individual module name. Target dates can then be entered, and as the date comes closer, lists of all modules that have a particular due date and have not been completed can be provided to the responsible analyst or work package manager. Judicious use of these lists on a periodic basis can be an effective tool for monitoring and motivating the programming staff to ensure that the milestone is met. Usually, several of these lists in various stages are active at once as key milestones come up. Choosing one major milestone per month and starting the list several months in advance of the target date is very effective. Having more milestones than this tends to set up multiple or conflicting goals for the individual analysts. In addition, the lists need to be started far enough in advance to allow suitable time for the work to be completed and to institute contingency work plans if problems arise.
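The due-date lists described above can be sketched as a simple selection over the status file. The record fields here are assumptions patterned on the text, not the practice's actual file layout.

```python
# Select modules whose target date for a milestone has arrived but whose
# actual completion date is still blank (None). Sample records are
# illustrative, loosely styled after Figure 4.
from datetime import date

records = [
    {"module": "F.UDHEAD", "target": date(1995, 4, 15), "actual": date(1995, 4, 11)},
    {"module": "F.UDLIST", "target": date(1995, 4, 15), "actual": None},
    {"module": "F.UDOPT",  "target": date(1995, 5, 21), "actual": None},
]

def overdue(records, as_of):
    """Modules whose target date has arrived without a completion date."""
    return [r["module"] for r in records
            if r["actual"] is None and r["target"] <= as_of]

print(overdue(records, date(1995, 4, 15)))  # ['F.UDLIST']
```

Run periodically, such a list gives the responsible analyst or work package manager exactly the items at risk for an upcoming milestone.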

It should be noted that meeting these interdependency dates is really separate from performance measurement. It is possible that in a given subsystem the performance may be adequate (e.g., it is 75% complete) but a key integration event may have been missed. The manager must be aware of both elements. If performance is acceptable but an integration event has been missed, it may mean that personnel are not concentrating on the right items and need to be redirected.

Rolling Baseline (i.e., Changing the Requirements to be Implemented by the Project)

A potential problem with the point system arises from an effect known as a rolling baseline, which occurs over the life of a program as new items (e.g., modules) are continually defined and added to the status file. This has the effect of changing the baseline, causing percent-complete numbers to vary independently of milestones earned. During periods when few new items are added to the file, the percent-complete figure accurately reflects real performance. At other times, new items are added as quickly as previously defined milestones are met, and reported progress tends to flatten out. In some cases, more new items are added than old items completed, and negative progress is reported.

This problem is overcome by freezing the baseline for a unit of work or work package and reporting progress on the unit. That is, once a package of work is defined, no new points are allocated to it. If certain modules must be split up for the sake of modularity or computing efficiency, the points are likewise split up. When the scope of work changes because of an oversight or contract change, the effort is reprogrammed and new work packages are created, or existing work packages are expanded with a corresponding increase of points.

This has the effect of measuring performance only on active or open work packages, not on the system as a whole. Because performance is being compared with an established schedule that is also made up of units of work, however, the comparison is valid and useful.

CHECK PROCEDURES

The point system for measuring project progress is a system built on logic, not statistics. Thus, it is subject to error. The checking procedures need to do these two things:

Validate that the point calculation was performed correctly - This can be accomplished by double-checking one's own work, or by having a second party check the work.

Test the reasonableness of the progress calculation - Three ways have proven effective in verifying reasonableness. First, progress calculated by the point system can be compared against project status calculated by the project status system. Second, the status calculated by the point system can be verified as reasonable by project personnel. Third, the progress calculated by the point system can be compared against other projects of similar size at approximately the same milestones. If these methods indicate the point-calculated project progress is reasonable, the testers can use it for test purposes. If the status appears unreasonable, then additional checking needs to take place to determine the correct project status at this point in time.

DELIVERABLES

Several detail and summary reports can be generated from the data file. The most encompassing detail report is a listing of all elements, useful in creating inventory lists of software items to be delivered and used during certain systems development audits. Other lists may be sorted or selected by work package or family tree identification number.


Such lists show the status of specific modules within subsets of the work breakdown structure or functional items of the system. Other sorts or selections by responsible analyst show the status of a particular individual's effort. Figures 4 and 5 show sample detail reports.

The summary reports simply total the items in each selected category. For example, a detail status listing of all elements within a work package can generate a summary that indicates the total number of elements and the number of elements that have met each milestone. Each of the milestones can also be expressed as a percentage. The summary reports can then be used to report performance by whatever category is needed (e.g., work package, family tree element, responsible analyst, event milestone). Figures 6 and 7 show sample summary reports.

Collecting data from several summary runs allows rates of completion to be calculated and current trends or performance predictions to be made.

USAGE TIPS

The point method for tracking software progress can be used by the team in the following ways:

Validating software progress tracking results

Test planning

Test status reporting

Validating Software Progress Tracking Results

The point system can be used by the test group to develop a progress evaluation that can be compared with the project leader's progress reports. If the two results are approximately the same, the test team can validate that the project team's progress reports are reasonable. This is simply an extension of what testing usually does, but it can be a valuable extension for I/S management.

Test Planning

The point method of progress tracking indicates when testing should occur. This can assist in planning the use of test resources.

Test Status Reporting

The point system also indicates when testing should be complete and modules can be released to production. The information contained in the point system is the same information the test manager needs for reporting test results.
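The summary reports described above reduce to counting, per milestone, how many items have met it and expressing that as a percentage of all items. A minimal sketch with illustrative data:

```python
# Total the items that have met each milestone and express each count as
# a percentage of all items in the selected category (see Figure 6 for
# the style of output this feeds).

MILESTONES = ["design", "code", "test", "release"]

# Milestones met per item (illustrative data).
items = [
    {"design", "code"},
    {"design"},
    {"design", "code", "test"},
    {"design"},
]

def summarize(items):
    total = len(items)
    summary = {}
    for m in MILESTONES:
        count = sum(1 for met in items if m in met)
        summary[m] = (count, round(100 * count / total))
    return summary

print(summarize(items))
# {'design': (4, 100), 'code': (2, 50), 'test': (1, 25), 'release': (0, 0)}
```

Saving the output of several such runs over time gives the completion rates and trend predictions mentioned above.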

Figure 3. File Layout

Figure 4. Detail Interdependency Listing

Interdependency Status Report

File Name   ID        RA    Description
F.UDHEAD    DF-U150   MKM   PRINT HEADING FOR DELTA LISTING (CONFIG)
F.UDLIST    DF-U151   MKM   PRINT DELTA LISTING (CONFIG)
F.UDLTST    DF-U152   MKM   START UDELTA SUBTASKING (CONFIG)
F.UDMAT     DF-U153   MKM   CHECK BUFFERS FOR MATCH (CONFIG)
F.UDMOVE    DF-U154   MKM   MOVE DATA INTO MEMORY (CONFIG)
F.UDOPT     DF-8155   MKM   SET OPTIONS IN DELTA (CONFIG)

(Each row also carries Design, Code, Test, and Release columns holding a target date and an actual completion date per milestone, ranging from 01/14/95 to 05/21/95.)
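The record layout of Figure 3 did not survive reproduction, so this dataclass is a sketch built only from the fields the text recommends: item description, responsible analyst, work package identification, and, per the third extension, a target/actual date pair for each milestone. Field names are assumptions.

```python
# A status-file record patterned on the fields the practice recommends.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class StatusRecord:
    file_name: str            # e.g., "F.UDHEAD"
    item_id: str              # e.g., "DF-U150"
    analyst: str              # responsible analyst, e.g., "MKM"
    description: str
    work_package: str = ""
    targets: dict = field(default_factory=dict)   # milestone -> target date
    actuals: dict = field(default_factory=dict)   # milestone -> completion date

rec = StatusRecord("F.UDHEAD", "DF-U150", "MKM",
                   "PRINT HEADING FOR DELTA LISTING (CONFIG)")
rec.actuals["design"] = date(1995, 1, 27)
print(rec.analyst, rec.actuals["design"])  # MKM 1995-01-27
```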

Figure 5. Detail Status Listing

Work Package Status Report
WP: TACTICS LIBRARY SOFTWARE
MANAGER: NFB

Work                           Milestones                      Module Status
Package  File Name  Weight  Design  Code  Test  Release  Status Code  Score  % Complete
173F     F.LEDCPY      8       2      2     2      2          3          4       50
173F     F.LEDEL       8       2      2     2      2          3          4       50
173F     F.LEDFIL     44      11     11    11     11          1         11       25
173F     F.LEDINF     20       5      5     5      5          1          5       25
173F     F.LEDPRT     12       3      3     3      3          7         12       75
173F     F.LIBEDT     16       4      4     4      4          3          8       75
173F     F.LIBGEN     28       7      7     7      7         15         28      100
173F     F.LTACGN     16       4      4     4      4          3          8       50
173F     F.LTACID      8       2      2     2      2         15          8      100
173F     F.LTASTA     32      16      0     0     16          7         16       50
173F     F.LTCMPR     16       8      0     0      8         15         16      100
173F     F.LTCMST     56      28     14    14      0          0          0        0
173F     F.LTCVRT     12       3      0     0      3          0          0        0
173F     F.LTGNUM     12       3      3     3      3          0          0        0
173F     F.LTINIT     12       3      3     3      3          0          0        0
173F     F.LTMDID     16       4      4     4      4          0          0        0
173F     F.LTREC      32       8      8     8      8          0          0        0
173F     F.LTSSTM     48      24      6    12      6          1         24       50
173F     F.LTUCHK      8       4      1     2      1          3          5       63
173F     F.LTUCVT     12       6      2     3      1          7         11       92
173F     F.LTVALU      8       4      1     2      1         15          8      100
TOTALS:  21 items    424                                                168       40
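The Status Code column in Figure 5 reads naturally as a bitmask over the four milestones (1 = design, 2 = code, 4 = test, 8 = release); that decoding is an inference from the printed values (1, 3, 7, 15), not something the text states. Under that reading, a module's score is the sum of the points of the milestones whose bits are set.

```python
# Decode a Figure 5 status code and compute the module's earned score.
MILESTONE_BITS = {"design": 1, "code": 2, "test": 4, "release": 8}

def module_score(points, status_code):
    """Sum the points of the milestones marked met in the status code."""
    return sum(p for m, p in points.items() if status_code & MILESTONE_BITS[m])

# F.LEDCPY: 2 points per milestone, design and code met (status code 3).
print(module_score({"design": 2, "code": 2, "test": 2, "release": 2}, 3))   # 4
# F.LTUCHK: status code 3 -> design (4) + code (1) = 5, i.e., 5/8 = 63%.
print(module_score({"design": 4, "code": 1, "test": 2, "release": 1}, 3))  # 5
```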

Figure 6. Summary Report

Status Summary
WORK PACKAGE 1234

                         Design       Code         Test         Release      Total
Total Items              24           24           24           24           96
Target Complete          10    42%    7     29%    3     13%    0     0%     20    21%
Actual Complete           9    38%    5     21%    1      4%    0     0%     15    16%
Late                      1     4%    2      8%    2      8%    0     0%      5     5%
Less Than 1 Week Late     0           1            0            0
1-2 Weeks Late            1           0            2            0
2-4 Weeks Late            0           1            0            0
4-8 Weeks Late            0           0            0            0
More Than 8 Weeks Late    0           0            0            0
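The lateness buckets of Figure 6 can be sketched as a date comparison; the figure does not state its boundary convention, so treating an item as late strictly after its target date is an assumption here.

```python
# Classify how late an incomplete milestone is, in the week buckets
# used by the Figure 6 summary report.
from datetime import date

def lateness_bucket(target, as_of):
    days = (as_of - target).days
    if days <= 0:
        return "on time"
    if days < 7:
        return "less than 1 week late"
    if days < 14:
        return "1-2 weeks late"
    if days < 28:
        return "2-4 weeks late"
    if days < 56:
        return "4-8 weeks late"
    return "more than 8 weeks late"

print(lateness_bucket(date(1995, 4, 15), date(1995, 4, 25)))  # 1-2 weeks late
```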
Figure 7. Summary Status Report

Work Package Summary Report

Work                                                            Milestones                     WP Status
Package  Description                     Manager  Weight  Design   Code   Test  Release   Score  % Complete
173G     Scan library software           NFB        480     120     120    120    120       150      31
173H     PPG library software            NFB        296      74      74     74     74        74      25
173K     Emitter scripting: EMTR 1-50    NFB       2500    2250     250      0      0      1035      42
17A1     TD Reporting CPPS               TJR        310     155     155      0      0       310     100
17A3     TD Reporting SW development     TJR       1230     375     375    240    240       575      47
17A4     Scan processor documentation    TJR       1078     863     215      0      0         0       0
17A5     Tims, debug, SVL documentation  TJR       7420    6550     870      0      0      3465      47
17A7     Software dev. tools document    TJR       4818    3563    1255      0      0      3563      73
TOTALS:                                           18132   13950    3314    434    434      9192      51