
Software Testing

Practical Skills for Testing Practitioners


2009

Testing Best Practices (2)


Yue Zhao
Consulting Scientist
Institute for Software Research
Carnegie Mellon University

Objectives
This lecture will explore best practices for software testing, and help you:

Enhance your understanding and skills for effective testing work.
Implement a state-of-the-art testing program.


Structure the Development Approach to Support Effective Unit Testing


Having several developers examine the source code and unit-test results may increase the effectiveness of the unit-testing process.
The developer of a component is in the best position to update its unit tests as modifications are made to the code.
Unit tests must be written in an appropriate language, one capable of exercising the code or component in question.
It is usually effective to assign a given developer to work with a single system layer.


Develop Unit Tests in Parallel or Before the Implementation


Developing unit tests prior to the actual software is a useful practice.
It forces the software to be developed in a way that meets each requirement, and focuses the developer's effort on solving the exact problem rather than building a larger solution that merely happens to satisfy the requirement.
It also provides a useful reference for determining what the developer intended to accomplish.
To ease the development of unit tests, developers should consider an interface-based approach to implementing components.
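As a minimal sketch of this test-first idea (the function apply_discount and its behavior are hypothetical, chosen only for illustration), the unit tests below are written to encode the exact requirement before the implementation exists:

    import unittest

    # Hypothetical component under test; in test-first development this body
    # is written only after the tests below already exist and fail.
    def apply_discount(price, percent):
        """Return price reduced by the given percentage."""
        if not 0 <= percent <= 100:
            raise ValueError("percent must be between 0 and 100")
        return round(price * (1 - percent / 100), 2)

    class ApplyDiscountTest(unittest.TestCase):
        def test_ten_percent_discount(self):
            # Encodes the requirement precisely: 10% off 200.00 is 180.00.
            self.assertEqual(apply_discount(200.0, 10), 180.0)

        def test_rejects_invalid_percent(self):
            # The tests also pin down the error behavior the requirement implies.
            with self.assertRaises(ValueError):
                apply_discount(200.0, 150)

    if __name__ == "__main__":
        unittest.main()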


Make Unit-Testing Execution Part of the Build Process


Unit-test programs can also be used to verify that the latest version of a software component functions as expected, before other components that depend on it are compiled.
Requiring each build to execute the associated unit tests keeps the unit-test cases from becoming outdated.
Adding automated unit-test execution to the build adds another dimension of quality to the build.
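A minimal sketch of such a build gate, assuming the unit tests live in a tests/ directory discoverable by Python's unittest module (the directory name and the build step are hypothetical placeholders):

    import subprocess
    import sys

    def build_with_unit_test_gate():
        # ... compile or package the component here (project-specific) ...
        # Then run the associated unit tests; a non-zero exit code fails the build.
        result = subprocess.run(
            [sys.executable, "-m", "unittest", "discover", "-s", "tests"]
        )
        if result.returncode != 0:
            sys.exit("Build failed: unit tests did not pass")

    if __name__ == "__main__":
        build_with_unit_test_gate()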


Know the Different Types of Testing-Support Tools


Types of tools:
Test-procedure generators
Code-coverage analyzers
Memory-leak detection tools
Metrics-reporting tools
Usability-measurement tools
Test-data generators
Test-management tools
Network-testing tools
GUI-testing tools
Load, performance, and stress testing tools
Specialized tools


Consider Building a Tool Instead of Buying One


There are situations that leave no choice but to build the tool:

Operating-system incompatibility
Application incompatibility
Specialized testing needs

Steps in building a tool:


Determine the resources, budget, and schedule for the effort.
Get buy-in and approval from management.
Manage the tool's source code in version control, along with the rest of the system.
Treat the development of the testing tool as a development project in its own right.
As with any piece of code, test the home-grown testing tool itself to verify that it works according to its requirements.


Know the Impact of Automated Tools on the Testing Effort


Automated-testing efforts sometimes fail because of unrealistic expectations, incorrect implementation, or selection of the wrong tool. Avoid these common misconceptions:

Multiple tools are often required.
The testing effort does not decrease.
Testing schedules do not decrease.
Automated testing follows the software development life cycle.
A somewhat stable application is required.
Not all tests should be automated.
The tool's entire cost includes more than its shelf price.
Training is required.
Testing tools can be unpredictable.


Focus on the Needs of Your Organization


Best practices to consider when choosing a testing tool:
Decide which type of testing life-cycle tool is needed.
Identify the various system architectures.
Determine whether more than one tool is required.
Understand how data is managed by the application under test.
Review help-desk problem reports.
Know the types of tests to be developed.
Know the schedule.
Know the budget.


Test the Tools on an Application Prototype


It is usually impossible to have the vendor demonstrate the tool on the application being tested, since the system under test is often not yet available during the tool-evaluation phase.
Instead, the development staff can create a system prototype for evaluating a set of testing tools.
The staff member evaluating test tools must also have the appropriate background.


Do Not Rely Solely on Capture/Playback


Capture/playback mechanisms can enhance the testing effort, but should not be the sole method used in automated testing. Capture/playback scripts must be modified after the initial recording. Other limitations of capture/playback:

Hard-coded data values
Non-modular, hard-to-maintain scripts
Lack of standards for reusability

To avoid the problems associated with unmodified capture/playback scripts, development guidelines for reusable scripts should be created (see the sketch below).
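A sketch of the difference, using a purely hypothetical GUI-driver object (ui), since the actual API depends on the tool in use:

    # A raw recording typically hard-codes every value and step:
    #
    #   ui.type_text("username", "jsmith")
    #   ui.type_text("password", "s3cret")
    #   ui.click("login_button")
    #
    # A guideline-compliant rewrite turns the recording into a reusable,
    # parameterized module that any test script can call with its own data:

    def login(ui, username, password):
        """Shared navigation module; data values come in as parameters."""
        ui.type_text("username", username)
        ui.type_text("password", password)
        ui.click("login_button")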

Develop a Test Harness When Necessary


A test harness can be a tool that performs automated testing of the core components of a program or system.
Here the term refers to code developed in-house that tests the underlying logic of an application.
Although creating a test harness can be time-consuming, it offers various advantages, including deeper coverage of sensitive applications and the ability to compare two applications that cannot be tested using a single off-the-shelf test tool.
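A minimal harness sketch, with parse_amount standing in as a hypothetical piece of the application's underlying logic:

    # Home-grown harness: feeds input/expected pairs straight to the core
    # logic, bypassing the user interface entirely.
    def parse_amount(text):
        """Hypothetical core routine under test."""
        return round(float(text.replace(",", "")), 2)

    CASES = [
        ("1,234.50", 1234.50),
        ("0", 0.0),
        ("99.999", 100.0),
    ]

    def run_harness():
        failures = 0
        for raw, expected in CASES:
            actual = parse_amount(raw)
            if actual != expected:
                failures += 1
                print("FAIL: %r -> %r (expected %r)" % (raw, actual, expected))
        print("%d of %d cases passed" % (len(CASES) - failures, len(CASES)))

    if __name__ == "__main__":
        run_harness()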


Use Proven Test-Script Development Techniques


Consider the following techniques when developing test scripts with functional testing tools:

Use a data-driven framework: values are read from spreadsheets or data pools rather than hard-coded into the script (see the sketch after this list).
Develop scripts in modules, each performing a separate part of the job.
Modularize user-interface navigation to increase the reusability of test scripts.
Separate common actions into shared script libraries usable by all test engineers; this can greatly enhance the efficiency of the testing effort.
Pre-built libraries of functions for certain testing tools are available on the Internet.
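A sketch of the data-driven technique in Python's unittest, assuming a hypothetical add() function and a data file add_cases.csv with columns a, b, and expected:

    import csv
    import unittest

    def add(a, b):
        """Hypothetical function under test."""
        return a + b

    class DataDrivenAddTest(unittest.TestCase):
        def test_cases_from_csv(self):
            # Values live in the data file, not the script, so new cases
            # can be added without touching any code.
            with open("add_cases.csv", newline="") as f:
                for row in csv.DictReader(f):
                    with self.subTest(row=row):
                        self.assertEqual(
                            add(int(row["a"]), int(row["b"])),
                            int(row["expected"]),
                        )

    if __name__ == "__main__":
        unittest.main()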

Automate Regression Tests When Feasible


Consider the following questions for developing an effective regression test program:
When should regression tests be performed?
What should be included in the regression test?
How can the regression test suite be optimized and improved?
Why automate regression tests?
How does one analyze the results of a regression test?

Before automating a regression test suite, it is important to determine that the system is stable and that its functionality, underlying technology, and implementation are not constantly changing.

Implement Automated Builds and Smoke Tests


Automated builds are typically executed once or twice per day, using the latest set of stable code.
A smoke test is a condensed version of a regression test suite; it focuses automated testing on the critical, high-level functionality of the application.
Implementing automated builds can greatly streamline the efforts of the development and configuration management teams.
In addition to automating the software build, an automated smoke test can further optimize the development and testing environment.
The typical software build sequence is as follows (a sketch of automating it appears after the list):

Software build
Smoke test
Regression test
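A sketch of that sequence as a script; every command and script name below is a hypothetical placeholder for the project's own build and test entry points:

    import subprocess
    import sys

    STEPS = [
        ["make", "build"],                        # software build
        [sys.executable, "smoke_tests.py"],       # smoke test (condensed suite)
        [sys.executable, "regression_tests.py"],  # full regression suite
    ]

    # Run each step in order and stop at the first failure, so a broken
    # build never reaches the more expensive test stages.
    for step in STEPS:
        if subprocess.run(step).returncode != 0:
            sys.exit("Sequence stopped: %s failed" % " ".join(step))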

Do Not Make Nonfunctional Testing an Afterthought


Ideally, nonfunctional considerations are investigated early, during an application's architecture and design phases. When planning a software project, consider the following nonfunctional risks:

Poor performance
Incompatibility
Inadequate security
Insufficient usability


Conduct Performance Testing with Production-Sized Databases


It is recommended that testing teams, and perhaps development teams, use production-sized databases that include a wide range of possible data combinations and scenarios while the application is still under development.
One way to gather realistic data is to poll potential or existing customers to learn about the data they use or plan to use in the application.
Building a large database early helps bring the following issues to the surface while there is still time to address them (see the sketch after this list):

Disk space
Processing power
Bandwidth
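A sketch of building such a database early, here loading a million illustrative rows into SQLite (the table and the volume are hypothetical; a real project would target its production database engine):

    import random
    import sqlite3

    conn = sqlite3.connect("perf_test.db")
    conn.execute("CREATE TABLE IF NOT EXISTS orders (id INTEGER, amount REAL)")

    # Bulk-load production-scale volumes so disk-space, processing-power,
    # and bandwidth problems surface during development, not after release.
    rows = ((i, random.uniform(1, 5000)) for i in range(1_000_000))
    conn.executemany("INSERT INTO orders VALUES (?, ?)", rows)
    conn.commit()
    conn.close()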


Tailor Usability Tests to the Intended Audience


There are several ways to determine the needs of the target audience from a usability perspective:

Subject-matter experts
Focus groups
Surveys
Study of similar products
Observation of users in action

An effective tool in the development of a usable application is the user-interface prototype. Later in the development cycle, end-user representatives or subject-matter experts should participate in usability tests.

Consider All Aspects of Security, for Specific Requirements and System-Wide


Security requirements, like other nonfunctional issues, should be associated with each functional requirement.
With the security-related requirements properly documented, test procedures can be created to verify that the system meets them.
If the security risk associated with an application has been determined to be substantial, it is worth investigating options for outsourcing security-related testing.
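As a sketch of turning a documented security requirement into a test procedure, suppose the requirement reads "lock the account after three failed logins"; AuthService below is a hypothetical stand-in for the system under test:

    import unittest

    class AuthService:
        """Hypothetical authentication component."""
        MAX_ATTEMPTS = 3

        def __init__(self):
            self.failed = 0
            self.locked = False

        def login(self, password):
            if self.locked:
                return False
            if password != "correct-password":
                self.failed += 1
                self.locked = self.failed >= self.MAX_ATTEMPTS
                return False
            return True

    class LockoutRequirementTest(unittest.TestCase):
        def test_account_locks_after_three_failures(self):
            auth = AuthService()
            for _ in range(3):
                self.assertFalse(auth.login("wrong"))
            # Requirement: even the correct password is now rejected.
            self.assertFalse(auth.login("correct-password"))

    if __name__ == "__main__":
        unittest.main()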


Investigate the System's Implementation to Plan for Concurrency Tests


In a multiuser system, concurrency is a major issue that the development team must address. It is important to design tests to verify that the application properly handles concurrency, following the concurrency model that has been selected for the project. Testing application concurrency can be difficult. A combination of manual and automated techniques can be used for concurrency testing.
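A sketch of the automated side, using threads to hammer one hypothetical shared operation; the final assertion flags lost updates if the concurrency handling is wrong:

    import threading

    counter = {"value": 0}
    lock = threading.Lock()

    def increment(n):
        # Stand-in for the application operation whose concurrency
        # behavior is being probed.
        for _ in range(n):
            with lock:  # removing this lock makes the race observable
                counter["value"] += 1

    threads = [threading.Thread(target=increment, args=(10_000,)) for _ in range(8)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    assert counter["value"] == 80_000, "lost updates under concurrent access"
    print("concurrency check passed:", counter["value"])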


Set Up an Efficient Environment for Compatibility Testing


Testing an application for compatibility can be a complex job. With all the possible configurations and potential compatibility concerns, it is probably impossible to explicitly test every permutation. Software testers should consider ranking the possible configurations in order, from most to least common, for the target application. Testers must identify the appropriate test cases and data for compatibility testing. A beta-test program is another potential source of information on end-user configurations and compatibility issues.
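A sketch of ranking the configuration matrix; the platform names and usage shares are invented purely for illustration:

    import itertools

    oses = ["Windows XP", "Windows Vista", "Mac OS X"]
    browsers = ["IE 7", "IE 8", "Firefox 3"]
    databases = ["Oracle", "SQL Server", "MySQL"]

    # Even this tiny matrix yields 27 permutations; real matrices explode.
    all_configs = list(itertools.product(oses, browsers, databases))

    # Rank by estimated share of the user base (hypothetical figures) and
    # explicitly test only the most common configurations.
    usage_share = {
        ("Windows XP", "IE 7", "SQL Server"): 0.35,
        ("Windows XP", "Firefox 3", "MySQL"): 0.20,
    }
    ranked = sorted(all_configs, key=lambda c: usage_share.get(c, 0.0), reverse=True)
    print("Top configurations to test:", ranked[:3])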

Clearly Define the Beginning and End of the Test-Execution Cycle


Regardless of the testing phase, it is important to define the entrance criteria and the exit criteria for software test-execution cycles. Examples of entrance criteria for starting to test a specific build:

All unit and integration tests have been executed successfully.
The software builds without any issues.
The build passes a smoke test.
The build has accompanying documentation describing what's new in the build and what has been changed.
Defects have been repaired and are ready for retesting.
The source code is stored in a version-control system.

Isolate the Test Environment from the Development Environment


The test environment must be separated from the development environment to avoid costly oversight and untracked changes to the software during testing. Without a separate test environment, the testing effort is likely to encounter several of the following problems:
Changes to the environment
Version management
Changes to the operating environment


Implement a Defect-Tracking Life Cycle


The defect-tracking life cycle is a critical aspect of the testing program. Each test team must perform defect reporting using a defined process that includes the following steps:

Analysis and defect-record entry
Prioritization
Recurrence
Closure


Track the Execution of the Testing Program


Sample progress metrics include:
Test-procedure execution status (%) = number of test procedures executed / total number of test procedures.
Defect aging = span from the date a defect was opened to the date it was closed.
Defect fix time to retest = span from the date a defect was fixed and released in a new build to the date it was retested.
Defect trend analysis = trend in the number of defects found as the testing life cycle progresses.
Quality of fixes = number of errors (newly introduced, or recurring in previously working functionality) remaining per fix.
Defect density = total number of defects found for a requirement / number of test procedures executed for that requirement.
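A worked example of two of these metrics, with invented numbers purely for illustration:

    # Test-procedure execution status: 180 of 240 procedures executed.
    executed, total = 180, 240
    execution_status = executed / total * 100      # 75.0 %

    # Defect density for one requirement: 6 defects over 30 procedures.
    defects, procedures = 6, 30
    defect_density = defects / procedures          # 0.2 defects per procedure

    print("Execution status: %.1f%%" % execution_status)
    print("Defect density: %.2f" % defect_density)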


Summary
In this lecture, testing best practices are covered in the following testing phases/activities:
Automated Testing Tools
Automated Testing Best Practices
Nonfunctional Testing
Managing Test Execution
