
Software Testing

by

QA Department
<dd-Mmm-yy>

Agenda

About the Software Product
Addressing Defects
Role of Software Testing
The Goal of Software Testing

Software Test Life Cycle [STLC] - Phases


Software Testing Types
Writing the Test Strategy, Test Plan and Test Cases

Defect reporting standards

Typical Software Product

Good Software Product

A good software product is expected to meet the following criteria:
Fulfill all customer and end-user requirements, such as:
Functional requirements
Non-functional requirements
Fit for the environment and fit for use
No critical defects, and only an acceptable number of minor bugs

Addressing Defects

Software Testing Definitions

Process of exercising or evaluating a system or system component by manual or automated means to verify that it satisfies the specified requirements - (IEEE 83a)

Structured process that uncovers the defects in a software product (Myers)
Destructive in nature - dismantling the wishful assumption that the code is bug-free (Boris Beizer)
Testing is a process of executing a program with the intent of finding errors (Myers)

Roles of Software Testing

Primary role:
Verification - are we doing things right? (process)
Validation - are we doing the right things? (requirements)
Secondary role:
Confidence to ship the product
Insight into the software process
Improving the software testing process

The Goal of Software Testing

The goal of the software testing process is to ensure verification and validation of the following:

Verified for:
Testability
Test factors

Validated against:
Specified requirements
Users' expectations
Product success criteria

Testability and Test Factors


Testability:
Operability
Controllability
Observability
Simplicity
Understandability
Suitability
Stability
Accessibility
Navigability
Scalability
Context sensitivity
Structural continuity

Test factors:
Ease of use
Continuity of processing
Coupling
Portability
Performance
Correctness
File integrity
Service levels
Authorization and access control
Compliance
Reliability
Audit trail

Users' Expectations

Reliable software:
Should not crash
Should not cause loss of life, money or property
Should not cause loss or alteration of data
If it fails, should do so gracefully
If it fails, should recover from the failure easily

Attractive software:
Pleasant to use and attractive to the prospective user
Attractive and consistent visual design

Compatible software:
Should run on all intended hardware platforms
Should run on all intended software configurations
Should not interfere with the operation of other software
Should work with earlier versions of the same software

Efficient software:
Should meet the end users' expectations
Fit for use by design and by nature
Should not monopolize hardware or software resources

Usable software:
Should not annoy its intended user
Consistent in its design and not confusing
Not overly complex to learn
Supporting documentation, such as online help
Should always indicate to the user what is happening
Should not take control away from the user without any indication

Installable software:
Should install quickly and easily
Should not interfere with any other software
Should follow the principles of good software
Should provide additional information for unsuccessful installations
Should uninstall cleanly

Product Success Criteria


The success of a product may depend on some key factors of the product. These differ for each type of product, based on the targeted user market, user culture, season, price factor, etc. Some major product success criteria are:
Functionality
Usability
Likeability
Maintainability
Interoperability
Reliability

Tester's Role Today

Tester's Thinking

Are the requirements complete?
Is the design scalable?
Will the software be maintainable?
How will the user experience be?
Are the builds stable?
Is the product ready?
Is the number of bugs acceptable?

Aspects that Hinder Effective Testing

Optimism
Negative attitude towards effective testing
Ego
Not wanting to fail
Conflict between testers and developers
Testing is the least structured activity
Testing is expensive
Delivery commitments
Lack of proper resources

Software Test Management and Test Process

Effective test process characteristics
Test approach
Collaborative approach
Concept of STLC
Testing tasks
Test deliverables
Test audit

Software Test Management


Step #1: Initial Test Planning Activities
Verification at every phase
Appropriate and required level of testing

Step #2: Defining and Managing the Test Objectives
Identification of:
Functions and logic to be tested
Program constraints
Software states
Input and output data conditions
Usage scenarios
Driven by concerns, risks, and business logic

Software Test Management (Contd)

Step #3: Test Progress Monitoring Activities
Accurate time accounting information
Product and test quality tracking activities

Step #4: Test Configuration Control Activities
Identifying and managing key test work products
Require careful planning and configuration control practices
Deal with test artifacts

Result: Quality and Productivity Improvements

Collaborative Approach
Roles of developers and testers are as follows:

Developer briefs tester on:
How the product/module works
The intention of each feature
Feature areas that are more error- or risk-prone
Schedule for delivering the remaining feature sets
Concerns about hidden/abstract activities

Tester briefs developer on:
Test coverage, including types and levels of testing
Feature(s) as understood by the testers
Information required to make feature(s) more granular
Testing phases and schedule
What testers expect from the development team
Methods/details of status reporting and defect tracking

Software Development Life Cycle

Software Test Life Cycle (STLC)

Concept of STLC

The Software Test Life Cycle (STLC) is the road map to project/product success
It involves continuous testing of the system during the development cycle of the project
In STLC, the results of the development process are evaluated to determine the correctness of the implementation

STLC Tasks

To standardize the testing process
To eliminate redundancy in functional testing
To enforce defect prevention activities
To incorporate configuration control
STLC is a systematic approach

STLC Phases

Prepare the test plan
Test case design
Testing or test execution
Bug reporting, analysis and regression testing
Inspection and release
Customer acceptance
Test summary analysis and test metrics

Preparing Test Plan

Preparing the test plan involves the following activities:
Analyze the scope of the project
Product/test requirement document
Develop risk assessment criteria
Identify acceptance criteria
Document the product definition and testing strategies
Define problem reporting procedures
Prepare the master test plan

Test Case Designing

This involves the following activities:
Set up the test environment
Design test cases: requirement-based and code-based test cases
Decide which test cases, if any, should be automated

Testing or Test Execution

Test execution involves:
Running initial test cycles, bug fixes and re-testing
Final testing and implementation
Setting up a database to track components of the automated testing system, i.e. reusable modules

Bug/ Defect Reporting

The tasks involved are:
Identify the bugs and defects as per the test cases
Report the bugs/defects through the bug reporting system
Analyze the error/defect/bug
Debug the system
Regression testing
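
A defect report written against these standards typically carries a fixed set of fields. Below is a minimal sketch of such a record in Python; the field names, severity scale and status values are illustrative assumptions, not the schema of any particular bug tracking system.

from dataclasses import dataclass, field
from enum import Enum

class Severity(Enum):
    CRITICAL = 1  # crash, data loss, no workaround
    MAJOR = 2     # key feature broken, workaround exists
    MINOR = 3     # cosmetic or low-impact issue

@dataclass
class DefectReport:
    # Hypothetical record; fields chosen to match common reporting practice.
    defect_id: str
    title: str
    severity: Severity
    found_in_build: str
    steps_to_reproduce: list = field(default_factory=list)
    expected_result: str = ""
    actual_result: str = ""
    status: str = "Open"  # e.g. Open -> Assigned -> Fixed -> Verified -> Closed

bug = DefectReport(
    "DEF-101", "Login fails for empty password", Severity.MAJOR, "build 1.2.3",
    steps_to_reproduce=["Open login page", "Leave password blank", "Click Login"],
    expected_result="Validation message shown",
    actual_result="Unhandled exception",
)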

Inspection and Release

The activities are:
Maintaining the configuration of related work products
Final review of testing
Metrics to measure improvements
Replication of the product
Product delivery records
Evaluate test effectiveness

Customer/ Client Acceptance

Customer acceptance testing includes:
Software installation
Providing support during acceptance testing
Analyzing and addressing errors/defects/bugs
Tracking changes and maintenance
Final testing and implementation
Submission to the customer and sign-off
Updating the respective process documents

STLC: Best Practices

The test process is complex and time consuming, so the testing life cycle should start at the same time as the development life cycle; otherwise it may lead to:
Slippage in the time schedule
Delays in exposing gaps/flaws in the application definition

Testing Tasks

Regular tasks
Ongoing activities
Test deliverables
Test audit

Regular Testing Tasks

Test strategy
Test planning - unit, integration and system
Test design
Test execution and status reporting
Reporting test results
Test deliverables
Test audit
Test metrics

Ongoing Activities

Configuration management and change control
Independent verification and validation (IV&V)
Interim reviews
Milestone reviews
Group reviews
Peer reviews
Quality process compliance checks

Ongoing Activities (Contd)

Configuration audits
Process audits
Technical audits
Back-up/security audits
Risk management and project management
Status and progress update meetings

Test Deliverables

Test scripts/cases/scenarios
Post-test validation form
Test summary report
Benchmarking report
Records of the test process

Test Deliverables (Contd)

Frozen configuration items in the configuration management tool
Problem reports indicating the list of open bugs
Release sign-off
Test metrics
Test library for future use

Test Audit

To verify adherence to the test process in a project
To ensure tests are practical, understood and followed
To identify adherence to plans
To ensure the integrity of status reports
To ensure test deliverables
To ensure review records
To ensure risk mitigation
To record the findings
To improve the test process

Test Strategy

Should be developed separately for each project
Defines the scope and general direction for testing of the project
High-level, prepared together with the project plan
Answers questions such as:
What are the trade-offs?
Who will conduct the testing? (organization level)
How much testing will be done?
What tools will be used?

Test Exit Criteria

Have all test cases been executed at least once?
Have all requirements been tested or verified?
Test documentation: have all documents been updated and submitted to the configuration manager?
Have all test incidents been analyzed and resolved?
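
The "all requirements tested or verified" criterion can be checked mechanically against a requirements-to-tests traceability matrix. The sketch below uses hypothetical requirement and test case IDs; it is an illustration of the idea, not a prescribed format.

# Every requirement must map to at least one executed test case.
# The matrix and the executed set below are illustrative data.
traceability = {
    "REQ-001": ["TC-01", "TC-02"],
    "REQ-002": ["TC-03"],
    "REQ-003": [],  # no test case yet: blocks the exit decision
}
executed = {"TC-01", "TC-02", "TC-03"}

untested = [req for req, cases in traceability.items()
            if not any(tc in executed for tc in cases)]

if untested:
    print("Exit criteria not met; untested requirements:", untested)
else:
    print("All requirements covered by at least one executed test.")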

Testing Techniques

Specification based (black box/functional testing):
Equivalence partitioning
Cause-effect graphing
Boundary value analysis
Category partition
Formal specification based
Control flow based criteria
Data flow based criteria
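
As an illustration of equivalence partitioning and boundary value analysis, consider a hypothetical rule that valid applicant ages are 18 to 65 inclusive. The function and the pytest cases below are a sketch under that assumption: one representative input per partition, plus the values on either side of each boundary.

import pytest

def validate_age(age):
    # Hypothetical rule: valid applicants are 18-65 inclusive.
    return 18 <= age <= 65

@pytest.mark.parametrize("age, expected", [
    (10, False),  # partition: below the valid range
    (30, True),   # partition: inside the valid range
    (90, False),  # partition: above the valid range
    (17, False),  # boundary: just below the lower bound
    (18, True),   # boundary: the lower bound itself
    (65, True),   # boundary: the upper bound itself
    (66, False),  # boundary: just above the upper bound
])
def test_validate_age(age, expected):
    assert validate_age(age) == expected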

Testing Techniques (Contd)


Fault based:
Error guessing
Mutation testing
Fault seeding

Usage based:
Statistical testing

Specific techniques:
Object-oriented testing
Component-based testing
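
To illustrate mutation testing: a mutant is a copy of the code with one small deliberate fault, and a test suite is considered sensitive if it "kills" (fails on) the mutant. The example below is a sketch with a hand-made relational-operator mutant; real mutation tools generate such mutants automatically.

def is_adult(age):
    # Original code under test.
    return age >= 18

def is_adult_mutant(age):
    # Mutant: ">=" deliberately changed to ">".
    return age > 18

# A boundary-value test kills this mutant: the two versions disagree at 18.
assert is_adult(18) is True
assert is_adult_mutant(18) is False  # the suite detects (kills) the mutant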

Testing Techniques (Contd)

Code based (white box/structural testing):
Statement coverage
Edge coverage (branch coverage)
Condition coverage
Path coverage
Cyclomatic complexity
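
A sketch of how these criteria relate, using a small hypothetical function with two independent decisions: two well-chosen inputs achieve full statement and branch coverage, while cyclomatic complexity bounds the number of basis paths to test.

def classify(x):
    # Hypothetical function with two decisions.
    if x < 0:            # decision 1
        label = "negative"
    else:
        label = "non-negative"
    if x % 2 == 0:       # decision 2
        label += ", even"
    return label

# Branch coverage: each decision outcome is taken at least once.
assert classify(-2) == "negative, even"   # x < 0 True,  even True
assert classify(5) == "non-negative"      # x < 0 False, even False

# Cyclomatic complexity: V(G) = decisions + 1 = 2 + 1 = 3, the number of
# linearly independent paths a basis-path test set must cover.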

Software Testing Phases

Unit testing
Functional testing
Integration testing
System testing
Acceptance testing
Interface testing
Regression testing
Special testing

Unit Testing

Purpose:
Typos
Basic logic problems
Syntax errors
Assumptions/pre-conditions
The developer should test basic functionality and normal processing paths

Unit Testing (Contd)

Expectations:
Every new or modified path/line of code should be executed and tested
Code inspection should verify functionality
Possible values should be tested for data entry fields
Error cases should be verified and required to end gracefully
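
A minimal pytest sketch of these expectations, built around a hypothetical parse_quantity helper (the function and its 1-100 rule are assumptions for illustration): it exercises the normal path, the data-entry boundaries, and error cases that must end gracefully with a clear exception rather than a crash.

import pytest

def parse_quantity(text):
    # Hypothetical helper: an order quantity must be an integer from 1 to 100.
    try:
        value = int(text)
    except ValueError:
        raise ValueError("not a number: %r" % text)  # end gracefully
    if not 1 <= value <= 100:
        raise ValueError("quantity out of range: %d" % value)
    return value

def test_normal_path():
    assert parse_quantity("10") == 10

def test_boundary_values():
    assert parse_quantity("1") == 1
    assert parse_quantity("100") == 100

def test_error_cases_end_gracefully():
    with pytest.raises(ValueError):
        parse_quantity("abc")
    with pytest.raises(ValueError):
        parse_quantity("0")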

Unit Testing (Contd)

Return values
Data accuracy
Decimal places
Help screens
Performance tests
Memory leaks
Specialized hardware

Unit Testing (Contd)

Affected documentation
Interface
Local data structures
Boundary conditions
Independent paths
Error handling paths

Functional Testing

Purpose:
Test small groups of modules that are functionally related

Expectations:
Passing parameters
Functional outputs or module exit values
Error cases handled gracefully

Functional Testing (Contd)

Report formats
Help screens
Performance tests
Clean-up
Memory leaks
Documentation

Integration Testing

Purpose:
Test all functional groups and areas
Testable requirements
Hardware/software specifications
External interfaces
Performance tests
Documentation

System Testing

Purpose:
Test the entire system as a whole

Assumptions - completed:
Unit testing
Functional testing
Integration testing

System Testing (Contd)

Expectations:
Verification of the system against the software requirements
Verification from a business workflow perspective
Final verification of requirements and design

System Testing (Contd)

External interfaces
Performance tests
Affected documents
Non-testable requirements

Interface Testing

Purpose:
Test interfaces with the external systems

Assumptions:
Completed unit, functional and integration tests
All critical errors fixed

Interface Testing (Contd)

Both normal cases and exceptions should be tested, on both sides of the interface (if both sides exchange data)
The interface should be tested for handling the normal amount and flow of data as well as peak processing volumes and traffic
If appropriate, the batch processing or file transmission window should be tested to ensure that both systems complete their processing within the allocated time

Performance Testing

Purpose:
To verify that the system meets the performance requirements

Assumptions/pre-conditions:
System testing completed successfully
Ensures there is no unexpected performance behavior prior to acceptance testing
Tests should use business cases, including normal, error and unlikely/ad-hoc cases

Performance Testing (Contd)

Performance tests include:
Load test
Stress test
Volume test
Test data
Response time
End-to-end tests and workflows should be performed
Use a tracking tool to compare results across runs
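
A minimal sketch of a response-time check in plain Python; the 200 ms threshold, the measured operation and the percentile choice are illustrative assumptions, and real load, stress and volume tests would normally use a dedicated tool.

import time

def p95_response_time_ms(operation, runs=100):
    # Run the transaction `runs` times and return the 95th-percentile
    # response time in milliseconds.
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        operation()  # the transaction under test
        samples.append((time.perf_counter() - start) * 1000)
    samples.sort()
    return samples[int(0.95 * len(samples)) - 1]

# Illustrative requirement: 95% of transactions complete within 200 ms.
p95 = p95_response_time_ms(lambda: sum(range(10_000)))
assert p95 <= 200, "performance requirement not met: p95 = %.1f ms" % p95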

Regression Testing

Purpose:
To ensure that areas that were modified, as well as areas that were not directly modified, have not been adversely or unexpectedly affected by the changes made while bug fixing

Assumptions/pre-conditions:
System testing completed
High-priority errors fixed

Regression Testing (Contd)

Expectations:
End-to-end coverage with a few targeted test cases
Typical tests include:
Normal/typical workflows
High-volume exceptions
Affected areas
Results should be compared with those from previous releases
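
Comparing results with previous releases can be automated as a golden-file check: the output captured from the last release is stored and diffed against the current build. The file path and the report function below are illustrative assumptions.

from pathlib import Path

def generate_report():
    # Stand-in for the workflow under regression test.
    return "total=42\nstatus=ok\n"

# Output captured from the previous release (hypothetical path).
golden = Path("golden/report_v1.txt")

actual = generate_report()
expected = golden.read_text()

# An unexpected difference flags a potential regression; an intentional
# change means the golden file must be re-baselined after review.
assert actual == expected, "regression: output differs from previous release"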

(User) Acceptance Testing

Purpose:
To verify the system from the users' perspective

Assumptions/pre-conditions:
System and regression testing completed
Configuration manager

(User) Acceptance Testing (Contd)

Test data
Final versions of all documents are ready
Overview of the testing procedures
Exit decision
Specific procedures
Acceptance criteria must be documented

(User) Acceptance Testing (Contd)

Expectations:
Verification from the users' perspective
Performance testing should be conducted again
Extra time
User manuals provided to the testers
Non-testable requirements
Review with the sponsor and user
Plans for the implementation

Field Testing

Purpose:
To verify the system in the actual user environment

Assumptions/pre-conditions:
System and acceptance testing successfully completed

Field Testing (Contd)

Expectations:
Verification of the system working in the actual user environment
Pilot test with the final product
The pilot system should continue to work when a problem occurs

Thank You

Q&A
