
Software Testing

Objectives of Testing
Executing a program with the intent of finding an error.
To check that the system meets the requirements and can be executed successfully in the intended environment.
To check that the system is fit for purpose.
To check that the system does what it is expected to do.
A good test case is one that has a high probability of finding an as-yet undiscovered error. A successful test is one that uncovers an as-yet undiscovered error.
A good test is not redundant. A good test should be best of breed. A good test should be neither too simple nor too complex.

Testing Levels
Unit testing
Integration testing
System testing
Acceptance testing

Unit Testing
The most micro scale of testing. Tests are done on particular functions or code modules. It requires knowledge of the internal program design and code, and is done by programmers, not by testers.
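As a sketch of what such a developer-written unit test looks like, the following uses Python's unittest module against a hypothetical apply_discount function; both the function and its rules are illustrative, not taken from this document:

```python
import unittest

def apply_discount(price, percent):
    """Return price reduced by percent, rounded to cents (hypothetical unit under test)."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    # Each test exercises one behavior of the unit, including error handling.
    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_zero_discount_returns_original_price(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percent_raises(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)
```

Run with `python -m unittest` from the file's directory; each test either passes silently or reports a failure against the unit's expected behavior.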

Entry Criteria
Business requirements are at least 80% complete and have been approved to date. Technical design has been finalized and approved. The development environment has been established and is stable. Code development for the module is complete.

Exit Criteria
Code is under version control. No known major or critical defects prevent any module from moving to the next level of testing.

Unit testing
Objectives: To test the function of a program or unit of code, such as a program or module; to test internal logic; to verify internal design; to test path and condition coverage; to test exception conditions and error handling
When: After modules are coded
Input: Internal Application Design; Master Test Plan; Unit Test Plan
Output: Unit Test Report
Who: Developer
Methods: White Box testing techniques; Test Coverage techniques
Tools: Debug; Re-structure; Code Analyzers; Path/statement coverage tools
Education: Testing Methodology; Effective use of tools

Integration Testing
Testing of combined parts of an application to determine their functional correctness. Integration testing examines all the components and modules that are new, changed, affected by a change, or needed to form a complete system. Integration testing requires involvement of other systems and interfaces with other applications. The parts can be code modules, individual applications, or client/server applications on a network.

Entry Criteria
System testing has been completed and signed off. Outstanding issues and defects have been identified and documented. Test scripts and schedule are ready. The integration testing environment is established.

Exit Criteria
All systems involved passed integration testing and meet the agreed-upon functionality and performance requirements. Outstanding defects have been identified, documented, and presented to the business sponsor. Stress, performance, and load tests have been satisfactorily conducted. The implementation plan is in final draft stage. A testing transition meeting has been held and everyone has signed off.

Integration testing
Objectives: To technically verify proper interfacing between modules and within sub-systems
When: After modules are unit tested
Input: Internal & External Application Design; Master Test Plan; Integration Test Plan
Output: Integration Test Report
Who: Developers
Methods: White and Black Box techniques; Problem/Configuration Management
Tools: Debug; Re-structure; Code Analyzers
Education: Testing Methodology; Effective use of tools
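A minimal sketch of an integration test in Python, exercising the interface between two hypothetical, already unit-tested modules; the InventoryStore and OrderService names and behavior are illustrative, not from this document:

```python
class InventoryStore:
    """Hypothetical unit-tested module: tracks stock levels by SKU."""
    def __init__(self):
        self._stock = {}

    def add(self, sku, qty):
        self._stock[sku] = self._stock.get(sku, 0) + qty

    def remove(self, sku, qty):
        if self._stock.get(sku, 0) < qty:
            raise ValueError("insufficient stock")
        self._stock[sku] -= qty

    def level(self, sku):
        return self._stock.get(sku, 0)

class OrderService:
    """Hypothetical unit-tested module that depends on InventoryStore."""
    def __init__(self, store):
        self.store = store

    def place_order(self, sku, qty):
        # The interface under test: placing an order must update the store.
        self.store.remove(sku, qty)
        return {"sku": sku, "qty": qty, "status": "confirmed"}

def test_order_updates_inventory():
    # Integration test: verify the two modules cooperate correctly.
    store = InventoryStore()
    store.add("A-1", 10)
    service = OrderService(store)
    order = service.place_order("A-1", 3)
    assert order["status"] == "confirmed"
    assert store.level("A-1") == 7
```

Each module may pass its own unit tests and still fail here if the interface between them (who updates stock, and when) was misunderstood; that gap is exactly what integration testing targets.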

Integration testing has a number of sub-types of tests that may or may not be used, depending on the application being tested or expected usage patterns.

Compatibility Testing Compatibility tests ensure that the application works with
differently configured systems, based on what the users have or may have. When testing a web interface, this means testing for compatibility with different browsers and connection speeds.

Performance Testing Performance tests are used to evaluate and understand the
application's scalability when, for example, more users are added or the volume of data increases. This is particularly important for identifying bottlenecks in high-usage applications. The basic approach is to collect timings of the critical business processes while the test system is under a very low load (a "quiet box" condition) and then collect the same timings with progressively higher loads until the maximum required load is reached. For a data retrieval application, reviewing the performance pattern may show that a change needs to be made in a stored SQL procedure or that an index should be added to the database design.
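The timing-collection approach described above can be sketched in Python: time a stand-in critical process at a quiet-box baseline and then at progressively higher loads. The workload and load levels here are illustrative only:

```python
import time
import statistics

def timed(fn, *args):
    """Return the wall-clock time one call to fn takes, in seconds."""
    start = time.perf_counter()
    fn(*args)
    return time.perf_counter() - start

def critical_process(n):
    # Stand-in for a critical business process: sort n records.
    sorted(range(n, 0, -1))

# Quiet-box baseline first, then progressively higher loads.
for load in (1_000, 10_000, 100_000):
    samples = [timed(critical_process, load) for _ in range(5)]
    print(f"load={load:>7}: median {statistics.median(samples) * 1000:.2f} ms")
```

Plotting or tabulating the medians against load exposes the performance pattern; a super-linear growth curve points at the kind of bottleneck (missing index, slow stored procedure) the text mentions.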

Stress Testing Stress testing is performance testing at higher than normal simulated loads. Stressing runs the system or application beyond the limits of its specified requirements to determine the load under which it fails and how it fails. A gradual performance slow-down leading to a non-catastrophic system halt is the desired result, but if the system will suddenly crash and burn, it's important to know the point where that will happen. Catastrophic failure in production means beepers going off, people coming in after hours, system restarts, frayed tempers, and possible financial losses. This test is arguably the most important test for mission-critical systems.

Load Testing Load tests are the opposite of stress tests. They test the capability of the application to function properly under expected normal production conditions, and they measure the response times for critical transactions or processes to determine whether they are within the limits specified in the business requirements and design documents, or whether they meet Service Level Agreements. For database applications, load testing must be executed against a current production-size database. If some database tables are forecast to grow much larger in the foreseeable future, then serious consideration should be given to testing against a database of the projected size.
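A toy Python sketch of the stress-test idea: ramp a simulated load upward until a response-time budget is exceeded, reporting both the load at which the system fails and how it failed (how long the failing request took). The workload, limit, and ramp values are all invented for illustration:

```python
import time

RESPONSE_LIMIT = 0.05  # seconds; stand-in for a Service Level Agreement budget

def handle_request(queue_depth):
    # Stand-in workload whose cost grows with the simulated load.
    total = 0
    for i in range(queue_depth * 200):
        total += i
    return total

def find_breaking_load(max_load=10_000, step=100):
    """Ramp the simulated load until the response-time budget is exceeded."""
    for load in range(step, max_load + 1, step):
        start = time.perf_counter()
        handle_request(load)
        elapsed = time.perf_counter() - start
        if elapsed > RESPONSE_LIMIT:
            return load, elapsed  # the point of failure, and how it failed
    return None, None  # survived the entire ramp

load, elapsed = find_breaking_load()
```

A load test uses the same harness differently: instead of ramping to failure, it fixes the load at the expected production level and asserts that the measured times stay inside the agreed limits.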

Systems Testing
Objectives: To test the co-existence of products and applications that are required to perform together in the production-like operational environment (hardware, software, network); to ensure that the system functions together with all the components of its environment as a total system; to ensure that system releases can be deployed in the current environment
When: After system testing; often performed outside of the project life-cycle
Input: Test Strategy; Master Test Plan; Systems Integration Test Plan
Output: Systems Integration Test Report
Who: System Testers
Methods: White and Black Box techniques; Problem/Configuration Management
Tools: Recommended set of tools
Education: Testing Methodology; Effective use of tools

Acceptance Testing
Objectives: To verify that the system meets the user requirements
When: After System Testing
Input: Business Needs & Detailed Requirements; Master Test Plan; User Acceptance Test Plan
Output: User Acceptance Test Report
Who: Users / End Users
Methods: Black Box techniques; Problem/Configuration Management
Tools: Compare, keystroke capture & playback, regression testing
Education: Testing Methodology; Effective use of tools; Product knowledge; Business Release Strategy

Testing Methodologies & Types
Black box testing
White box testing
Incremental testing

Black box testing treats the system as a black box: it does not explicitly use knowledge of the internal structure or code. The tester need not know the internal workings of the application to perform black box testing. The main focus of black box testing is on the functionality of the system as a whole.

Black Box Testing
Also called functional or closed-box testing. The artifact is tested from an external point of view: input goes in, output comes out, and the specifications are used to generate the test data. E.g.: a data sorting function is tested on different sets of data. Data can be randomly generated based on the input types. The tester need not look into the internal working of the application to perform black box testing.
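The sorting example can be sketched as a black-box check in Python: test data is generated randomly from the declared input type, and results are judged only against the external specification (output in ascending order, same elements as the input), never against the implementation:

```python
import random
from collections import Counter

def sort_records(values):
    # The implementation is opaque to the black-box tester; only the
    # external specification matters.
    return sorted(values)

def satisfies_spec(inputs, outputs):
    """Check the spec: ascending order, and no element lost or duplicated."""
    ascending = all(a <= b for a, b in zip(outputs, outputs[1:]))
    same_elements = Counter(inputs) == Counter(outputs)
    return ascending and same_elements

# Test data randomly generated from the declared input type (lists of ints).
random.seed(42)
for _ in range(100):
    data = [random.randint(-1000, 1000) for _ in range(random.randint(0, 50))]
    assert satisfies_spec(data, sort_records(data))
```

Because satisfies_spec checks only externally observable properties, the same test suite works unchanged if sort_records is rewritten with a completely different internal algorithm.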

Black Box Testing Includes
Functional Testing
User Acceptance Testing (UAT)
Alpha and Beta Testing
Regression Testing

Functional Testing
Software is tested against its functional requirements. Test cases are written to check that the application behaves as expected. Includes testing of: user commands, data manipulation, business processes, user screens, and integrations.

User Acceptance Testing (UAT)
Also called end-user testing. The software is handed over to the user, i.e., the customer is enabled to determine whether or not to accept the system. The software is tested in the "real world" by the intended audience. UAT example: a free trial or test version of software distributed over the Web.

Alpha and Beta Testing
It is impossible for the software developer to foresee how the customer will use a program, so software companies let end users test the application to uncover errors. Users explore the software to find defects. Alpha testing is conducted at the developer's site by a customer, in a controlled environment. Beta testing comes after alpha testing: a beta test is conducted at one or more customer sites by the end-user of the software, in the absence of the developer. A beta test is a "live" application of the software in an environment that cannot be controlled by the developer.

Regression testing

Regression testing is any type of software testing that seeks to uncover new software bugs, or regressions, in existing functional and nonfunctional areas of a system after changes, such as enhancements, patches or configuration changes, have been made to them. The intent of regression testing is to ensure that a change, such as a bugfix, did not introduce new faults. Common methods of regression testing include rerunning previously run tests and checking whether program behavior has changed and whether previously fixed faults have re-emerged. Regression testing can be used to test a system efficiently by systematically selecting the appropriate minimum set of tests needed to adequately cover a particular change.
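A minimal sketch of the rerun-and-compare approach: inputs that previously passed, together with their recorded outputs, are kept as "golden" cases and rerun after every change. The compute_total function and its cases are hypothetical:

```python
def compute_total(prices, tax_rate):
    """Function under change: a bug fix or enhancement may alter its behavior."""
    return round(sum(prices) * (1 + tax_rate), 2)

GOLDEN_CASES = [
    # (inputs recorded earlier, output recorded when the test last passed)
    (([10.0, 20.0], 0.10), 33.0),
    (([5.0], 0.0), 5.0),
    (([], 0.25), 0.0),
]

def run_regression_suite():
    """Rerun every recorded case; each mismatch is a regression."""
    regressions = []
    for (prices, rate), expected in GOLDEN_CASES:
        actual = compute_total(prices, rate)
        if actual != expected:
            regressions.append((prices, rate, expected, actual))
    return regressions
```

An empty result means the change introduced no regressions in the covered behavior; any entry shows exactly which previously correct behavior changed, and how.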

White Box Testing
White-box testing is testing that takes into account the internal mechanism of a system or component (IEEE, 1990). It deals with the internal logic and structure of the code. The tester has to deal with the code and hence needs knowledge of both coding and logic. The tester looks into the code to find out which unit, statement, or chunk of the code is malfunctioning. White box testing includes unit/module testing.
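A small Python illustration of white-box thinking: the tests below are chosen by reading the code's branches, one case per path. The classify_payment function and its rules are invented for illustration:

```python
def classify_payment(amount, is_member):
    # Two decision points give four paths a white-box tester aims to cover.
    if amount <= 0:
        return "rejected"
    if is_member and amount > 100:
        return "discounted"
    return "standard"

# One test per path, derived from the code itself rather than the spec:
assert classify_payment(-5, False) == "rejected"    # first branch taken
assert classify_payment(150, True) == "discounted"  # both conditions true
assert classify_payment(150, False) == "standard"   # member check fails
assert classify_payment(50, True) == "standard"     # amount check fails
```

A black-box tester might never try the (50, True) combination; the white-box tester picks it precisely because the code shows that path exists.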

White Box Testing Process
1. Perform risk analysis to guide the whole testing process.
2. Develop a test strategy that defines what testing activities are needed to accomplish the testing goals.
3. Develop a detailed test plan that organizes the subsequent testing process.
4. Prepare the test environment for test execution.
5. Execute test cases and communicate results.

Smoke Testing
The term smoke testing originally refers to physical tests made on closed systems of pipes to check for leaks. In software, it is the first test made after assembly or repairs, run to provide some assurance that the system will not crash outright, i.e., that the system is ready for more stressful testing.
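A sketch of a software smoke test in Python: a handful of the most basic operations are run once, and any failure blocks deeper testing. The DummyApp and its start/ping/stop interface are hypothetical stand-ins for a real system under test:

```python
def smoke_test(app):
    """Run the most basic operations once; return the names of failed checks."""
    checks = {
        "starts": app.start(),
        "responds": app.ping() == "pong",
        "stops": app.stop(),
    }
    # An empty list means the build is ready for more stressful testing.
    return [name for name, ok in checks.items() if not ok]

class DummyApp:
    # Stand-in for the real application under test.
    def start(self):
        return True

    def ping(self):
        return "pong"

    def stop(self):
        return True

assert smoke_test(DummyApp()) == []
```

The point is breadth over depth: each check is trivial on its own, but together they prove the build is stable enough to be worth testing further.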

Agile Testing
Agile testing is a software testing practice that follows the principles of agile software development. It involves all members of a cross-functional agile team, with special expertise contributed by testers, to ensure delivering the business value desired by the customer at frequent intervals, working at a sustainable pace.

Incremental Testing
A disciplined method of testing the interfaces between unit-tested programs as well as between system components. It involves adding unit-tested program modules or components one by one, and testing each resulting combination.

There are two types of incremental testing:
Top-down testing starts from the top of the module hierarchy and works down to the bottom. Modules are added in descending hierarchical order.
Bottom-up testing starts from the bottom of the hierarchy and works up to the top. Modules are added in ascending hierarchical order.
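Top-down incremental testing can be sketched in Python with a stub standing in for a lower-level module until it is integrated; Checkout, TaxService, and the tax rates here are all invented for illustration:

```python
class TaxServiceStub:
    """Placeholder for a lower-level module not yet integrated."""
    def rate_for(self, region):
        return 0.10  # canned answer, enough to exercise the top module

class Checkout:
    """Top of the hierarchy: tested first in top-down incremental testing."""
    def __init__(self, tax_service):
        self.tax_service = tax_service

    def total(self, subtotal, region):
        return round(subtotal * (1 + self.tax_service.rate_for(region)), 2)

# Increment 1: top module plus stub.
assert Checkout(TaxServiceStub()).total(100.0, "EU") == 110.0

# Increment 2: the real lower-level module replaces the stub once it is
# unit tested, and the combination is tested again.
class TaxService:
    RATES = {"EU": 0.20, "US": 0.07}

    def rate_for(self, region):
        return self.RATES.get(region, 0.0)

assert Checkout(TaxService()).total(100.0, "EU") == 120.0
```

Bottom-up testing inverts the pattern: the real TaxService would be tested first through a small driver, and Checkout would be added on top afterwards.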

Testing Levels / Techniques

                      White Box   Black Box   Incremental   Thread
Unit Testing              X
Integration Testing       X           X            X
System Testing                        X                        X
Acceptance Testing                    X
