
Verification vs. Validation

Verification - Are we building the product right? It is the process of determining whether or not the products of a given phase of software development fulfill the requirements established during the previous phase.

Validation - Are we building the right product? It is the process of evaluating software at the end of its development to ensure that it is free from failures and complies with the requirements.

Test Strategy and Test Plan


Test Strategy: A Test Strategy document is a high-level document, normally developed by the project manager. It defines the testing approach used to achieve the testing objectives and is normally derived from the Business Requirement Specification document. The Test Strategy is a static document, meaning that it is not updated very often. It sets the standards for testing processes and activities, and other documents, such as the Test Plan, draw their contents from the standards set in the Test Strategy document. Some companies include the test approach or strategy inside the Test Plan, which is fine and is usually the case for small projects. For larger projects, however, there is one Test Strategy document and a separate Test Plan for each phase or level of testing.

Components of the Test Strategy document:

- Scope and objectives
- Business issues
- Roles and responsibilities
- Communication and status reporting
- Test deliverables
- Industry standards to follow
- Test automation and tools
- Testing measurements and metrics
- Risks and mitigation
- Defect reporting and tracking
- Change and configuration management
- Training plan

Test Plan: The Test Plan document, on the other hand, is derived from the Product Description, the Software Requirement Specification (SRS) or the Use Case documents. It is usually prepared by the Test Lead or Test Manager, and its focus is to describe what to test, how to test, when to test and who will do which test. It is not uncommon to have one Master Test Plan as a common document for all the test phases, with each test phase having its own Test Plan document. There is much debate as to whether the Test Plan should also be a static document like the Test Strategy mentioned above, or whether it should be updated regularly to reflect changes in the direction and activities of the project.

Components of the Test Plan document:

- Test plan identifier
- Introduction
- Test items
- Features to be tested
- Features not to be tested
- Test techniques
- Testing tasks
- Suspension criteria
- Feature pass/fail criteria
- Test environment (entry criteria, exit criteria)
- Test deliverables
- Staffing and training needs
- Responsibilities
- Schedule

This is a standard approach to preparing Test Plan and Test Strategy documents, but the details can vary from company to company.

Smoke testing and Sanity testing


SMOKE TESTING:

Smoke testing originated in the hardware testing practice of turning on a new piece of hardware for the first time and considering it a success if it does not catch fire and smoke. In the software industry, smoke testing is a shallow and wide approach: all areas of the application are tested without going into too much depth in any of them. A smoke test is scripted, either as a written set of tests or as an automated test, and it is designed to touch every part of the application in a cursory way. It is shallow and wide. Smoke testing is conducted to ensure that the most crucial functions of a program are working, without bothering with finer details (it is often used as build verification). Smoke testing is a routine health check-up of a build of an application before taking it into in-depth testing.
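As a minimal sketch of what such a suite can look like (pytest is assumed as the test runner, and the myapp module and its functions are hypothetical placeholders, not a real API), a smoke suite touches a few crucial areas of a fresh build in a cursory way:

# Smoke-test sketch: shallow and wide checks on a new build.
# "myapp", create_app(), test_client() and get_db_connection() are hypothetical.
import pytest

myapp = pytest.importorskip("myapp")  # skip the whole suite if the build produced no package

def test_application_starts():
    # Shallowest possible check: the application object can be created at all.
    assert myapp.create_app() is not None

def test_home_page_is_reachable():
    # Cursory check on one crucial area, without exercising its details.
    client = myapp.create_app().test_client()
    assert client.get("/").status_code == 200

def test_database_connection_opens():
    assert myapp.get_db_connection() is not None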

SANITY TESTING:

A sanity test is a narrow regression test that focuses on one or a few areas of functionality. Sanity testing is usually narrow and deep, and a sanity test is usually unscripted. It is used to determine that a small section of the application is still working after a minor change. It is a cursory form of testing, performed whenever such a quick check is sufficient to prove that the application is still functioning according to specification, and it is a subset of regression testing. In short, sanity testing verifies that the specified requirements are still met after a change.
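By contrast with the smoke sketch above, a sanity check after a small change is narrow and deep: only the area that changed is re-tested, but with several probing inputs. The sketch below assumes a hypothetical apply_discount function that was just fixed (pytest assumed as the runner):

# Sanity-test sketch: only the changed discount logic is exercised, but in some depth.
# "myapp.pricing.apply_discount" is a hypothetical function under test.
from myapp.pricing import apply_discount

def test_discount_still_correct_after_fix():
    assert apply_discount(100.0, percent=10) == 90.0   # typical case
    assert apply_discount(100.0, percent=0) == 100.0   # no discount
    assert apply_discount(0.0, percent=50) == 0.0      # zero-price edge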

Software Testing Life Cycle (STLC):


1. Requirements stage
   a. Requirement Specification documents
   b. Functional Specification documents
   c. Use Case documents
   d. Test Traceability Matrix for identifying test coverage
2. Test Plan
   a. Test scope, test environment
   b. Different test phases and test methodologies
   c. Manual and automation testing
   d. Defect management, configuration management, risk management, etc.
3. Test Design
   a. Test case preparation
   b. Test Traceability Matrix for identifying test cases
   c. Test case reviews and approval
4. Test Execution
   a. Executing test cases
   b. Capture, review and analyze test results
5. Defect Tracking
   a. Find the defect and track it to closure
6. Bug Reporting
   a. Report the defect in a tool or in Excel sheets
7. Regression / Retesting

Software Testing Methods



- White box testing
- Black box testing
- Gray box testing

- Unit testing
- Integration testing
- Regression testing
- Usability testing
- Performance testing
- Scalability testing
- Software stress testing
- Recovery testing
- Security testing
- Conformance testing
- Smoke testing
- Compatibility testing
- System testing
- Alpha testing
- Beta testing

Software Testing Methodologies



- Waterfall model
- V model
- Spiral model
- RUP (Rational Unified Process)
- Agile model
- RAD (Rapid Application Development)

Software Testing Techniques


Normally, software testing is carried out at all stages of the software development life cycle. The advantage of testing at all stages is that it helps to find different defects at different stages of development, which minimizes the cost of the software: it is easier to log and fix defects in the early stages of development. Once the entire product is ready, the cost of fixing a defect increases, because a number of other components may depend on the component that contains the defect. Software testing techniques are broadly divided into two categories: static techniques and dynamic techniques.

Static Software Testing Techniques: In this type of technique, a component is tested without executing the software; a static analysis of the code and its documents is carried out instead. There are different static techniques.

Review: Review is a powerful static technique that is carried out in the early stages of the software testing life cycle. Reviews can be either formal or informal. Inspection is the most documented and formal review technique; in practice, however, the informal review is probably the most commonly used. In the initial stages of development, the number of people attending a review, whether formal or informal, is small, but it increases in the later stages of development. A peer review is a review of a software product undertaken by the peers and colleagues of the author of the component, to identify defects and to recommend improvements to the system where required. The types of reviews are:

Walkthrough: The author of the document under review guides the participants through the document, explaining his or her thought process, in order to reach a common understanding and to gather feedback on the document under review.

Technical Review: A peer group discussion whose focus is to achieve consensus on the technical approach taken while developing the system.

Inspection: Also a type of peer review, where the focus is on a visual examination of the various documents to detect defects in the system. This type of review is always based on a documented procedure.

Static Analysis by Tools: Static analysis tools focus on the software code. They are used by software developers before, and sometimes during, component and integration testing. Typical checks include:

Coding Standards: A check is performed to verify adherence to coding standards.

Code Metrics: Code metrics measure structural attributes of the code. As the system becomes increasingly complex, they help in deciding between design alternatives, especially when redesigning portions of the code.

Code Structure: The three main aspects of code structure are control flow structure, data flow structure and data structure.
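As a rough sketch of what a code-metrics check can look like (the function below is a made-up illustration, not any particular tool), a cyclomatic-complexity estimate can be obtained by walking the parsed source without ever executing it:

# Code-metrics sketch: rough cyclomatic complexity = branch points + 1,
# computed by static analysis of the syntax tree (no execution of the code).
import ast

BRANCH_NODES = (ast.If, ast.For, ast.While, ast.Try, ast.BoolOp)

def cyclomatic_complexity(source):
    tree = ast.parse(source)
    branches = sum(isinstance(node, BRANCH_NODES) for node in ast.walk(tree))
    return branches + 1

sample = '''
def classify(x):
    if x < 0:
        return "negative"
    for _ in range(x):
        pass
    return "non-negative"
'''
print(cyclomatic_complexity(sample))  # 3: one "if", one "for", plus the straight-line path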

Dynamic Software Testing Techniques: In dynamic software testing techniques, the code is actually executed and tested for defects. This category is further divided into three sub-categories: specification based techniques, structure based techniques and experience based techniques. Each of them is described below.

Specification Based Testing Techniques: These are procedures used to derive and/or select test cases based on an analysis of the functional or non-functional specification of a component or system, without any reference to its internal structure. This is also known as black box testing or input/output driven testing, because the tester has no knowledge of how the system is structured inside. The tester concentrates on what the software does, not on how it does it. Functional testing concentrates on what the system does, along with its features or functions; non-functional testing concentrates on how well the system does something. There are five main specification based testing techniques:

1. Equivalence Partitioning: Test cases are designed to execute representative inputs from equivalence partitions (equivalence classes), such that every partition is covered at least once. The idea is to divide a set of test conditions into groups or sets that can be considered the same: if any value from a group is used in the system, the result should be the same. This reduces the number of test cases to execute, since only one condition from each partition needs to be tested. Example: if 1 to 100 are the valid values, the valid range can be partitioned into 1 to 50 and 51 to 100, so 1, 50 and 100 are values for which the system has to be checked. It does not end there: the system also has to be checked against invalid partitions, so values such as -10 and 125 represent the invalid partitions. While choosing values for the invalid partitions, the values should be away from the valid boundaries. (A code sketch follows this list.)

2. Boundary Value Analysis: A boundary value is an input or output value that is on the edge of an equivalence partition, or at the smallest incremental distance on either side of an edge. This technique tests the boundaries between partitions, covering both valid and invalid boundaries. Example: if 1 to 99 are the valid inputs, then 0 and 100 are invalid values, so the test cases should be designed to include the values 0, 1, 99 and 100 to check the behaviour of the system. (See the sketch after this list.)

3. Decision Table: This technique focuses on business logic or business rules. A decision table is also known as a cause-effect table; it records combinations of inputs with their associated outputs, and these combinations are used to design test cases. The technique works well in conjunction with equivalence partitioning. The first task is to identify a suitable function whose behaviour reacts to a combination of inputs. If there is a large number of conditions, dividing them into subsets helps to arrive at accurate results. With two conditions there are 4 combinations of inputs; with 3 conditions there are 8 combinations, with 4 conditions 16 combinations, and so on. (A decision-table sketch follows this list.)

4. State Transition Testing: This technique is used where any aspect of the component or system can be described as a finite state machine. The test cases are designed to execute valid and invalid state transitions. In any given state, one event can give rise to only one action, but the same event from another state may cause a different action and a different end state. (A state-transition sketch follows this list.)

5. Use Case Testing: This technique identifies test cases that exercise the whole system on a transaction-by-transaction basis from beginning to end. The test cases are designed to execute real-life scenarios, and they help to uncover integration defects.
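A minimal sketch of points 1 and 2 above (pytest is assumed as the test runner, and validate_age is a made-up stand-in so the example can run): one representative value per partition plus the boundary values gives a small but systematic test set.

# Equivalence partitioning and boundary value analysis sketch.
import pytest

def validate_age(value):
    # Stand-in implementation: hypothetical rule that 1..99 is the valid range.
    return 1 <= value <= 99

# Equivalence partitioning: one representative value per partition is enough.
@pytest.mark.parametrize("value, expected", [
    (50, True),     # valid partition 1..99
    (-10, False),   # invalid partition below the range
    (125, False),   # invalid partition above the range
])
def test_equivalence_partitions(value, expected):
    assert validate_age(value) == expected

# Boundary value analysis: the edges of the valid partition and their invalid neighbours.
@pytest.mark.parametrize("value, expected", [
    (0, False), (1, True), (99, True), (100, False),
])
def test_boundary_values(value, expected):
    assert validate_age(value) == expected
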
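A sketch of point 3: with two conditions there are four combinations, and each row of the decision table pairs a combination of causes with its expected effect. The approve_loan rule below is a hypothetical stand-in.

# Decision-table sketch: every combination of the two conditions is tested.
import itertools

def approve_loan(has_account, good_credit):
    # Hypothetical business rule: approve only when both conditions hold.
    return has_account and good_credit

DECISION_TABLE = {
    (True,  True):  True,
    (True,  False): False,
    (False, True):  False,
    (False, False): False,
}

def test_every_rule_in_the_decision_table():
    for combination in itertools.product([True, False], repeat=2):
        assert approve_loan(*combination) == DECISION_TABLE[combination]
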
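A sketch of point 4: a hypothetical order modelled as a finite state machine with the valid transitions new -> paid -> shipped. The tests cover one valid path and one invalid transition (pytest assumed).

# State-transition sketch over a made-up order state machine.
import pytest

VALID_TRANSITIONS = {
    ("new", "pay"): "paid",
    ("paid", "ship"): "shipped",
}

def transition(state, event):
    try:
        return VALID_TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"invalid transition: event '{event}' in state '{state}'")

def test_valid_transition_path():
    assert transition("new", "pay") == "paid"
    assert transition("paid", "ship") == "shipped"

def test_invalid_transition_is_rejected():
    with pytest.raises(ValueError):
        transition("new", "ship")   # an unpaid order cannot be shipped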

Structure Based Testing Techniques: Structure based testing techniques serve two purposes: test coverage measurement and structural test case design. They are a good way to generate additional test cases that differ from the existing test cases derived from the specification based techniques. This approach is also known as white box testing.

Test Coverage: The degree, expressed as a percentage, to which a specified coverage item has been exercised by a test suite. The basic coverage measure is:

Coverage = (number of coverage items exercised / total number of coverage items) * 100%

There is a danger in using the coverage measure: contrary to common belief, 100% coverage does not mean that the code is 100% tested.

Statement Coverage and Statement Testing: Statement coverage is the percentage of executable statements that have been exercised by a particular test suite. Note that a statement may sit on a single line or be spread over several lines; one line may contain more than one statement, or only part of a statement, and statements may contain other statements inside them. The formula for statement coverage is:

Statement Coverage = (number of statements exercised / total number of statements) * 100%

Decision Coverage and Decision Testing: Decision statements are statements such as if statements, loop statements and case statements, where there are two or more possible outcomes from the same statement. The formula for decision coverage is:

Decision Coverage = (number of decision outcomes exercised / total number of decision outcomes) * 100%

Decision coverage is stronger than statement coverage: 100% decision coverage always guarantees 100% statement coverage, but not the other way around. While checking decision coverage, each decision needs to have both a true and a false outcome. (A short sketch of the difference follows below.)

Other Structure Based Testing Techniques: Apart from the structure based techniques mentioned above, there are some other techniques as well. They include linear code sequence and jump (LCSAJ) coverage, modified condition/decision coverage (MCDC), path testing, etc.
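As an illustration of that difference (a minimal sketch; the absolute function is a made-up example and pytest is assumed as the runner), a single test can reach 100% statement coverage while decision coverage stays at 50%:

def absolute(x):
    if x < 0:          # the only decision: it has a True and a False outcome
        x = -x
    return x

def test_statement_coverage_only():
    # x = -3 executes all three statements: statement coverage = 3/3 * 100% = 100%.
    # Only the True outcome of "x < 0" is taken: decision coverage = 1/2 * 100% = 50%.
    assert absolute(-3) == 3

def test_adds_the_missing_decision_outcome():
    # x = 4 exercises the False outcome, bringing decision coverage to 100%.
    assert absolute(4) == 4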

Experience Based Testing Techniques

Although testing needs to be rigorous, systematic and thorough, there are some non-systematic techniques that are based on a person's knowledge, experience, imagination and intuition; an experienced bug hunter is often able to locate an elusive defect in the system. The two techniques that fall under this category are:

Error Guessing: A test design technique in which the experience of the tester is used to hunt for elusive bugs that may be present in the component or system due to errors made. This technique is often used after the formal techniques have been applied, and it has proved to be very useful. A structured approach to error guessing is to list the possible defects that could be present in the system and then design test cases in an attempt to reproduce them.

Exploratory Testing: Exploratory testing (sometimes informally called 'monkey testing') is a hands-on approach with minimal planning but maximum test execution: test design and test execution happen simultaneously, without formally documenting the test conditions, test cases or test scripts. This approach is useful when the project specifications are poor or when the time at hand is extremely limited.

There are different software testing estimation techniques; one of them involves consulting the people who will perform the testing activities and the people who have expertise in the tasks to be done. The software testing techniques used to test a project depend on a number of factors, chiefly the urgency of the project, the severity of the project, the resources at hand, and so on. Not all techniques will be used in every project; the techniques to apply are decided according to organizational policies.
