

1. Acceptance Testing: Formal testing conducted to determine whether or not a system satisfies its acceptance criteria and to enable the customer to determine whether or not to accept the system. It is usually performed by the customer.
2. Accessibility Testing: Type of testing which determines the usability of a product for people with disabilities (deaf, blind, mentally disabled, etc.). The evaluation process is conducted by persons with disabilities.
3. Active Testing: Type of testing that consists of introducing test data and analyzing the execution results. It is usually conducted by the testing teams.
4. Agile Testing: Software testing practice that follows the principles of the agile manifesto, emphasizing testing from the perspective of customers who will utilize the system. It is usually performed by the QA teams.
5. Age Testing: Type of testing which evaluates a system's ability to perform in the future. The evaluation process is conducted by testing teams.
6. Ad-hoc Testing: Testing performed without planning and documentation; the tester tries to 'break' the system by randomly exercising the system's functionality. It is performed by the testing teams.
7. Alpha Testing: Type of testing of a software product or system conducted at the developer's site. It is usually performed by the end user.
8. Assertion Testing: Type of testing that verifies whether the conditions confirm the product requirements. It is performed by the testing teams.
9. API Testing: Testing technique similar to unit testing in that it targets the code level. API testing differs from unit testing in that it is typically a QA task and not a developer task.
10. All-pairs Testing: Combinatorial testing method that tests all possible discrete combinations of input parameters. It is performed by the testing teams.
11. Automated Testing: Testing technique that uses automation testing tools to control the environment set-up, test execution and results reporting. It is performed by a computer and is used inside the testing teams.
12. Basis Path Testing: A testing mechanism which derives a logical complexity measure of a procedural design and uses it as a guide for defining a basis set of execution paths. It is used by testing teams when defining test cases.
13. Backward Compatibility Testing: Testing method which verifies the behavior of the developed software with older versions of the test environment. It is performed by testing teams.
14. Beta Testing: Final testing before releasing the application for commercial purposes. It is typically done by end-users or others.
15. Benchmark Testing: Testing technique that uses representative sets of programs and data designed to evaluate the performance of computer hardware and software in a given configuration. It is performed by testing teams.
16. Big Bang Integration Testing: Testing technique which integrates individual program modules only when everything is ready. It is performed by the testing teams.
17. Binary Portability Testing: Technique that tests an executable application for portability across system platforms and environments, usually for conformance to an ABI specification. It is performed by the testing teams.
18. Boundary Value Testing: Software testing technique in which tests are designed to include representatives of boundary values. It is performed by the QA testing teams.

19. Bottom-Up Integration Testing: In bottom-up integration testing, the modules at the lowest level are developed first, and the other modules which go towards the 'main' program are integrated and tested one at a time. It is usually performed by the testing teams.
20. Branch Testing: Testing technique in which all branches in the program source code are tested at least once. This is done by the developer.
21. Breadth Testing: A test suite that exercises the full functionality of a product but does not test features in detail. It is performed by testing teams.
22. Black Box Testing: A method of software testing that verifies the functionality of an application without having specific knowledge of the application's code/internal structure. Tests are based on requirements and functionality. It is performed by QA teams.
23. Code-driven Testing: Testing technique that uses testing frameworks (such as xUnit) that allow the execution of unit tests to determine whether various sections of the code are acting as expected under various circumstances. It is performed by the development teams.
24. Compatibility Testing: Testing technique that validates how well software performs in a particular hardware/software/operating system/network environment. It is performed by the testing teams.
25. Comparison Testing: Testing technique which compares the product's strengths and weaknesses with previous versions or other similar products. Can be performed by testers, developers, product managers or product owners.
26. Component Testing: Testing technique similar to unit testing but with a higher level of integration; testing is done in the context of the application instead of just directly testing a specific method. Can be performed by testing or development teams.
27. Configuration Testing: Testing technique which determines the minimal and optimal configuration of hardware and software, and the effect of adding or modifying resources such as memory, disk drives and CPU. It is usually performed by performance testing engineers.
28. Condition Coverage Testing: Type of software testing where each condition is executed by making it true and false, in each of the ways, at least once. It is typically done by the automation testing teams.
29. Compliance Testing: Type of testing which checks whether the system was developed in accordance with standards, procedures and guidelines. It is usually performed by external companies which offer the "Certified OGC Compliant" brand.
30. Concurrency Testing: Multi-user testing geared towards determining the effects of accessing the same application code, module or database records. It is usually done by performance engineers.
31. Conformance Testing: The process of testing that an implementation conforms to the specification on which it is based. It is usually performed by testing teams.
32. Context-Driven Testing: An agile testing technique that advocates continuous and creative evaluation of testing opportunities in light of the potential information revealed and the value of that information to the organization at a specific moment. It is usually performed by agile testing teams.
33. Conversion Testing: Testing of programs or procedures used to convert data from existing systems for use in replacement systems. It is usually performed by the QA teams.
34. Decision Coverage Testing: Type of software testing where each condition/decision is executed by setting it to true/false. It is typically done by the automation testing teams.
35. Destructive Testing: Type of testing in which the tests are carried out to the specimen's failure, in order to understand a specimen's structural performance or material behaviour under different loads. It is usually performed by QA teams.
36. Dependency Testing: Testing type which examines an application's requirements for pre-existing software, initial states and configuration in order to maintain proper functionality. It is usually performed by testing teams.
37. Dynamic Testing: Term used in software engineering to describe the testing of the dynamic behavior of code. It is typically performed by testing teams.

38. Domain Testing: White box testing technique which checks that the program accepts only valid input. It is usually done by software development teams and occasionally by automation testing teams.
39. Error-Handling Testing: Software testing type which determines the ability of the system to properly process erroneous transactions. It is usually performed by the testing teams.
40. End-to-End Testing: Similar to system testing; involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate. It is performed by QA teams.
41. Endurance Testing: Type of testing which checks for memory leaks or other problems that may occur with prolonged execution. It is usually performed by performance engineers.
42. Exploratory Testing: Black box testing technique performed without planning and documentation. It is usually performed by manual testers.
43. Equivalence Partitioning Testing: Software testing technique that divides the input data of a software unit into partitions of data from which test cases can be derived. It is usually performed by the QA teams.
44. Fault Injection Testing: Element of a comprehensive test strategy that enables the tester to concentrate on the manner in which the application under test is able to handle exceptions. It is performed by QA teams.
45. Formal Verification Testing: The act of proving or disproving the correctness of the intended algorithms underlying a system with respect to a certain formal specification or property, using formal methods of mathematics. It is usually performed by QA teams.
46. Functional Testing: Type of black box testing that bases its test cases on the specifications of the software component under test. It is performed by testing teams.
47. Fuzz Testing: Software testing technique that provides invalid, unexpected, or random data to the inputs of a program; a special area of mutation testing. Fuzz testing is performed by testing teams.
48. Gorilla Testing: Software testing technique which focuses on heavy testing of one particular module. It is performed by quality assurance teams, usually when running full testing.
49. Gray Box Testing: A combination of black box and white box testing methodologies: testing a piece of software against its specification but using some knowledge of its internal workings. It can be performed by either development or testing teams.
50. Glass Box Testing: Similar to white box testing; based on knowledge of the internal logic of an application's code. It is performed by development teams.
51. GUI Software Testing: The process of testing a product that uses a graphical user interface, to ensure it meets its written specifications. This is normally done by the testing teams.
52. Globalization Testing: Testing method that checks the proper functionality of the product with any of the culture/locale settings, using every type of international input possible. It is performed by the testing team.
53. Hybrid Integration Testing: Testing technique which combines top-down and bottom-up integration techniques in order to leverage the benefits of both kinds of testing. It is usually performed by the testing teams.
54. Integration Testing: The phase in software testing in which individual software modules are combined and tested as a group. It is usually conducted by testing teams.
55. Interface Testing: Testing conducted to evaluate whether systems or components pass data and control correctly to one another. It is usually performed by both testing and development teams.
56. Install/Uninstall Testing: Quality assurance work that focuses on what customers will need to do to install and set up the new software successfully. It may involve full, partial or upgrade install/uninstall processes and is typically done by the software testing engineer in conjunction with the configuration manager.

57. Internationalization Testing: The process which ensures that a product's functionality is not broken and all messages are properly externalized when it is used in different languages and locales. It is usually performed by the testing teams.
58. Inter-Systems Testing: Testing technique that focuses on testing the application to ensure that the interconnection between applications functions correctly. It is usually done by the testing teams.
59. Keyword-Driven Testing: Also known as table-driven testing or action-word testing; a software testing methodology for automated testing that separates the test creation process into two distinct stages: a planning stage and an implementation stage. It can be used by either manual or automation testing teams.
60. Load Testing: Testing technique that puts demand on a system or device and measures its response. It is usually conducted by performance engineers.
61. Localization Testing: Part of the software testing process focused on adapting a globalized application to a particular culture/locale. It is normally done by the testing teams.
62. Loop Testing: A white box testing technique that exercises program loops. It is performed by the development teams.
63. Manual Scripted Testing: Testing method in which the test cases are designed and reviewed by the team before executing them. It is done by manual testing teams.
64. Manual-Support Testing: Testing technique that involves testing of all the functions performed by people while preparing data and using these data in the automated system. It is conducted by testing teams.
65. Model-Based Testing: The application of model-based design for designing and executing the necessary artifacts to perform software testing. It is usually performed by testing teams.
66. Mutation Testing: Method of software testing which involves modifying a program's source code or byte code in small ways in order to test sections of the code that are seldom or never accessed during normal test execution. It is normally conducted by testers.
67. Modularity-Driven Testing: Software testing technique which requires the creation of small, independent scripts that represent modules, sections, and functions of the application under test. It is usually performed by the testing team.
68. Non-Functional Testing: Testing technique which focuses on testing a software application for its non-functional requirements. Can be conducted by performance engineers or by manual testing teams.
69. Negative Testing: Also known as "test to fail"; a testing method where the tests' aim is to show that a component or system does not work. It is performed by manual or automation testers.
70. Operational Testing: Testing technique conducted to evaluate a system or component in its operational environment. It is usually performed by testing teams.
71. Orthogonal Array Testing: Systematic, statistical way of testing which can be applied in user interface testing, system testing, regression testing, configuration testing and performance testing. It is performed by the testing team.
72. Pair Testing: Software development technique in which two team members work together at one keyboard to test the software application. One does the testing and the other analyzes or reviews the testing. This can be done between one tester and a developer or business analyst, or between two testers, with both participants taking turns at driving the keyboard.
73. Passive Testing: Testing technique that consists of monitoring the results of a running system without introducing any special test data. It is performed by the testing team.
74. Parallel Testing: Testing technique whose purpose is to ensure that a new application which has replaced its older version has been installed and is running correctly. It is conducted by the testing team.
75. Path Testing: Typical white box testing whose goal is to satisfy coverage criteria for each logical path through the program. It is usually performed by the development team.
76. Penetration Testing: Testing method which evaluates the security of a computer system or network by simulating an attack from a malicious source. Such tests are usually conducted by specialized penetration testing companies.

77. Performance Testing: Functional testing conducted to evaluate the compliance of a system or component with specified performance requirements. It is usually conducted by the performance engineer.
78. Qualification Testing: Testing against the specifications of the previous release, usually conducted by the developer for the consumer, to demonstrate that the software meets its specified requirements.
79. Ramp Testing: Type of testing that consists of raising an input signal continuously until the system breaks down. It may be conducted by the testing team or the performance engineer.
80. Regression Testing: Type of software testing that seeks to uncover software errors after changes to the program (e.g. bug fixes or new functionality) have been made, by retesting the program. It is performed by the testing teams.
81. Recovery Testing: Testing technique which evaluates how well a system recovers from crashes, hardware failures, or other catastrophic problems. It is performed by the testing teams.
82. Requirements Testing: Testing technique which validates that the requirements are correct, complete, unambiguous, and logically consistent, and allows designing a necessary and sufficient set of test cases from those requirements. It is performed by QA teams.
83. Security Testing: A process to determine that an information system protects data and maintains functionality as intended. It can be performed by testing teams or by specialized security-testing companies.
84. Sanity Testing: Testing technique which determines if a new software version is performing well enough to accept it for a major testing effort. It is performed by the testing teams.
85. Scenario Testing: Testing activity that uses scenarios based on a hypothetical story to help a person think through a complex problem or system for a testing environment. It is performed by the testing teams.
86. Scalability Testing: Part of the battery of non-functional tests; tests a software application for measuring its capability to scale up, be it the user load supported, the number of transactions, the data volume, etc. It is conducted by the performance engineer.
87. Statement Testing: White box testing which satisfies the criterion that each statement in a program is executed at least once during program testing. It is usually performed by the development team.
88. Static Testing: A form of software testing where the software isn't actually executed; it checks mainly the sanity of the code, algorithm, or documentation. It is used by the developer who wrote the code.
89. Stability Testing: Testing technique which attempts to determine if an application will crash. It is usually conducted by the performance engineer.
90. Smoke Testing: Testing technique which examines all the basic components of a software system to ensure that they work properly. Typically, smoke testing is conducted by the testing team immediately after a software build is made.
91. Storage Testing: Testing type that verifies that the program under test stores data files in the correct directories and that it reserves sufficient space to prevent unexpected termination resulting from lack of space. It is usually performed by the testing team.
92. Stress Testing: Testing technique which evaluates a system or component at or beyond the limits of its specified requirements. It is usually conducted by the performance engineer.
93. Structural Testing: White box testing technique which takes into account the internal structure of a system or component and ensures that each program statement performs its intended function. It is usually performed by the software developers.
94. System Testing: The process of testing an integrated hardware and software system to verify that the system meets its specified requirements. It is conducted by the testing teams in both the development and target environments.
95. System Integration Testing: Testing process that exercises a software system's coexistence with others. It is usually performed by the testing teams.

96. Top-Down Integration Testing: Testing technique that involves starting at the top of the system hierarchy, at the user interface, and using stubs to test from the top down until the entire system has been implemented. It is conducted by the testing teams.
97. Thread Testing: A variation of the top-down testing technique where the progressive integration of components follows the implementation of subsets of the requirements. It is usually performed by the testing teams.
98. Upgrade Testing: Testing technique that verifies that assets created with older versions can be used properly and that the user's learning is not challenged. It is performed by the testing teams.
99. Unit Testing: Software verification and validation method in which a programmer tests whether individual units of source code are fit for use. It is usually conducted by the development team.
100. User Interface Testing: Type of testing which is performed to check how user-friendly the application is. It is performed by testing teams.
101. Usability Testing: Testing technique which verifies the ease with which a user can learn to operate, prepare inputs for, and interpret outputs of a system or component. It is usually performed by end users.
102. Volume Testing: Testing which confirms that any values that may become large over time (such as accumulated counts, logs, and data files) can be accommodated by the program and will not cause the program to stop working or degrade its operation in any manner. It is usually conducted by the performance engineer.
103. Vulnerability Testing: Type of testing which regards application security and has the purpose of preventing problems which may affect the application's integrity and stability. It can be performed by internal testing teams or outsourced to specialized companies.
104. White Box Testing: Testing technique based on knowledge of the internal logic of an application's code; includes tests like coverage of code statements, branches, paths and conditions. It is performed by software developers.
105. Workflow Testing: Scripted end-to-end testing technique which duplicates specific workflows which are expected to be utilized by the end user. It is usually conducted by testing teams.
Incremental integration testing: Bottom-up approach for testing, i.e. continuous testing of an application as new functionality is added. Application functionality and modules should be independent enough to test separately. Done by programmers or by testers.
Integration testing: Testing of integrated modules to verify combined functionality after integration. Modules are typically code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems.
Functional testing: This type of testing ignores the internal parts and focuses on whether the output is as per requirement or not. Black-box type testing geared to the functional requirements of an application.
System testing: The entire system is tested as per the requirements. Black-box type testing that is based on the overall requirements specifications and covers all combined parts of a system.
End-to-end testing: Similar to system testing; involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.
Sanity testing: Testing to determine if a new software version is performing well enough to accept it for a major testing effort. If the application is crashing during initial use, then the system is not stable enough for further testing, and the build is assigned back to be fixed.
Regression testing: Testing the application as a whole after the modification of any module or functionality. It is difficult to cover the entire system in regression testing, so automation tools are typically used for these testing types.
Acceptance testing: Normally this type of testing is done to verify whether the system meets the customer-specified requirements. The user or customer does this testing to determine whether to accept the application.

Load testing: Performance testing to check system behavior under load. Testing an application under heavy loads, such as testing a web site under a range of loads, to determine at what point the system's response time degrades or fails.
Stress testing: The system is stressed beyond its specifications to check how and when it fails. Performed under heavy load, such as putting in numbers beyond storage capacity, running complex database queries, or giving continuous input to the system or database.
Performance testing: Term often used interchangeably with stress and load testing; checks whether the system meets performance requirements. Different performance and load tools are used for this.
Usability testing: User-friendliness check. The application flow is tested: can a new user understand the application easily, and is proper help documented wherever the user gets stuck? Basically, system navigation is checked in this testing.
Install/uninstall testing: Tests full, partial, or upgrade install/uninstall processes on different operating systems under different hardware and software environments.
Compatibility testing: Testing how well software performs in a particular hardware/software/operating system/network environment, and in different combinations of the above.
Comparison testing: Comparison of product strengths and weaknesses with previous versions or other similar products.
In black box testing we just focus on the inputs and outputs of the software system, without bothering about internal knowledge of the software program.

Such a black box can be any software system you want to test: for example, an operating system like Windows, a website like Google, a database like Oracle, or even your own custom application. Under black box testing, you can test these applications by just focusing on the inputs and outputs, without knowing their internal code implementation.
Black box testing - Steps: Here are the generic steps followed to carry out any type of black box testing.

1. Initially, the requirements and specifications of the system are examined.
2. The tester chooses valid inputs (positive test scenario) to check whether the SUT processes them correctly, and also chooses some invalid inputs (negative test scenario) to verify that the SUT is able to detect them.
3. The tester determines the expected outputs for all those inputs.
4. The software tester constructs test cases with the selected inputs.
5. The test cases are executed.
6. The software tester compares the actual outputs with the expected outputs.
7. Defects, if any, are fixed and re-tested.
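The steps above can be made concrete with a small, self-contained sketch. Note this is only an illustration: validate_age() is a hypothetical system under test (SUT), and the test cases are derived purely from its assumed specification ("accept ages 18 to 60 inclusive"), not from its implementation.

import unittest

def validate_age(age):
    # Hypothetical SUT; included only so the example runs on its own.
    return 18 <= age <= 60

class BlackBoxExample(unittest.TestCase):
    def test_valid_input(self):
        # Positive scenario: a valid input with expected output True
        self.assertTrue(validate_age(30))

    def test_invalid_input(self):
        # Negative scenario: the SUT must detect out-of-range inputs
        self.assertFalse(validate_age(17))
        self.assertFalse(validate_age(61))

if __name__ == "__main__":
    unittest.main()

Running the file executes the test cases and compares actual outputs with expected outputs, matching steps 5 and 6 above.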

Types of Black Box Testing: There are many types of black box testing, but the following are the prominent ones:

Functional testing: This black box testing type is related to the functional requirements of a system; it is done by software testers.
Non-functional testing: This type of black box testing is not related to the testing of specific functionality, but to non-functional requirements such as performance, scalability and usability.
Regression testing: Regression testing is done after code fixes, upgrades or any other system maintenance to check that the new code has not affected the existing code.

Tools used for Black Box Testing: The tools used for black box testing largely depend on the type of black box testing you are doing.

For functional/regression tests you can use QTP.
For non-functional tests you can use LoadRunner.
Black box testing strategy: The following are the prominent test strategies amongst the many used in black box testing:

Equivalence Class Testing: Used to minimize the number of possible test cases to an optimum level while maintaining reasonable test coverage.
Boundary Value Testing: Focused on the values at boundaries. This technique determines whether a certain range of values is acceptable to the system or not. It is very useful in reducing the number of test cases, and is most suitable for systems where the input lies within certain ranges.
Decision Table Testing: A decision table puts causes and their effects in a matrix; each column holds a unique combination (a small sketch follows below).
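To make the decision-table idea concrete, here is a minimal sketch. The login rule, the input names and the expected messages are all made-up illustration values; each entry corresponds to one column of a decision table, i.e. one unique combination of causes mapped to its effect.

# Each row: one decision-table column (causes -> expected effect)
decision_table = [
    # (valid_user, valid_password) -> expected message
    ((True,  True),  "login ok"),
    ((True,  False), "wrong password"),
    ((False, True),  "unknown user"),
    ((False, False), "unknown user"),
]

def login(valid_user, valid_password):
    # Hypothetical SUT implementing the rule under test
    if not valid_user:
        return "unknown user"
    return "login ok" if valid_password else "wrong password"

for (valid_user, valid_password), expected in decision_table:
    actual = login(valid_user, valid_password)
    assert actual == expected, f"{(valid_user, valid_password)}: {actual!r}"
print("all decision-table combinations pass")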

Comparison of Black Box and White Box Testing: While white box testing (unit testing) validates the internal structure and working of your software code, the main focus of black box testing is on the validation of your functional requirements. To conduct white box testing, knowledge of the underlying programming language is essential; current-day software systems use a variety of programming languages and technologies, and it is not possible to know all of them. Black box testing provides abstraction from code and focuses the testing effort on the software system's behaviour. Also, software systems are not developed in a single chunk: development is broken down into different modules, and black box testing facilitates testing the communication amongst modules (integration testing). When you push code fixes into your live software system, a complete system check (black box regression tests) becomes essential. White box testing has its own merits, though, and helps detect many internal errors which may degrade system performance.
Black Box Testing and the Software Development Life Cycle (SDLC): Black box testing has its own life cycle, called the Software Test Life Cycle (STLC), and it is relevant to every stage of the Software Development Life Cycle.

Requirement: This is the initial stage of the SDLC, in which requirements are gathered. Software testers also take part in this stage.
Test Planning & Analysis: The testing types applicable to the project are determined. A test plan is created which determines possible project risks and their mitigation.
Design: In this stage test cases/scripts are created on the basis of the software requirement documents.
Test Execution: In this stage the prepared test cases are executed. Bugs, if any, are fixed and re-tested.

***************STLC************************
The different stages in the Software Test Life Cycle are:
- Requirement Analysis
- Test Planning
- Test Case Development
- Test Environment Setup
- Test Execution
- Test Cycle Closure

Each of these stages has definite entry and exit criteria, and activities and deliverables associated with it.

In an ideal world you would not enter the next stage until the exit criteria for the previous stage are met. But practically this is not always possible. So for this tutorial, we will focus on the activities and deliverables for the different stages in the STLC. Let's look into them in detail.
Requirement Analysis
During this phase, the test team studies the requirements from a testing point of view to identify the testable requirements. The QA team may interact with various stakeholders (client, business analyst, technical leads, system architects, etc.) to understand the requirements in detail. Requirements can be either functional (defining what the software must do) or non-functional (defining system performance, security, availability). Automation feasibility analysis for the given testing project is also done in this stage. Activities

- Identify the types of tests to be performed.
- Gather details about testing priorities and focus.
- Prepare the Requirement Traceability Matrix (RTM).
- Identify the test environment details where testing is to be carried out.
- Automation feasibility analysis (if required).

Deliverables

- RTM
- Automation feasibility report (if applicable)

Test Planning: This phase is also called the Test Strategy phase. Typically, in this stage, a senior QA manager determines the effort and cost estimates for the project and prepares and finalizes the Test Plan. Activities

- Preparation of the test plan/strategy document for various types of testing
- Test tool selection
- Test effort estimation
- Resource planning and determining roles and responsibilities
- Training requirements

Deliverables

- Test plan/strategy document
- Effort estimation document

Test Case Development: This phase involves the creation, verification and rework of test cases and test scripts. Test data is identified/created, reviewed and then reworked as well. Activities

- Create test cases and automation scripts (if applicable)
- Review and baseline the test cases and scripts
- Create test data (if the test environment is available)

Deliverables

- Test cases/scripts
- Test data

Test Environment Setup: The test environment determines the software and hardware conditions under which a work product is tested. Test environment set-up is one of the critical aspects of the testing process and can be done in parallel with the Test Case Development stage. The test team may not be involved in this activity if the customer/development team provides the test environment; in that case the test team is required to do a readiness check (smoke testing) of the given environment. Activities

- Understand the required architecture and environment set-up, and prepare a hardware and software requirement list for the test environment.
- Set up the test environment and test data.
- Perform a smoke test on the build (a minimal sketch follows this list).
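A readiness check like the one described above can be as simple as probing a few well-known URLs of the deployed build. This is only a sketch: the URLs and the pass criterion (HTTP 200) are assumptions, and a real checklist would come from the project's test plan.

from urllib.request import urlopen

SMOKE_CHECKS = [
    ("application is reachable", "http://test-env.example.com/health"),
    ("login page is served",     "http://test-env.example.com/login"),
]

def run_smoke_tests():
    results = []
    for name, url in SMOKE_CHECKS:
        try:
            ok = urlopen(url, timeout=5).status == 200
        except OSError:           # covers URLError, timeouts, refused connections
            ok = False
        results.append((name, ok))
        print(f"{'PASS' if ok else 'FAIL'}: {name}")
    return all(ok for _, ok in results)

if __name__ == "__main__":
    env_ready = run_smoke_tests()  # feeds the "Smoke test results" deliverable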

Deliverables

- Environment ready with test data set up
- Smoke test results

Test Execution: During this phase the test team carries out the testing based on the test plans and the test cases prepared. Bugs are reported back to the development team for correction, and retesting is performed. Activities

- Execute tests as per plan
- Document test results and log defects for failed cases
- Map defects to test cases in the RTM
- Retest the defect fixes
- Track the defects to closure

Deliverables

- Completed RTM with execution status
- Test cases updated with results
- Defect reports

Test Cycle Closure: The testing team meets, discusses and analyzes testing artifacts to identify strategies that should be implemented in the future, taking lessons from the current test cycle. The idea is to remove process bottlenecks for future test cycles and to share best practices for any similar projects in the future. Activities

- Evaluate cycle completion criteria based on time, test coverage, cost, software, critical business objectives and quality.
- Prepare test metrics based on the above parameters.
- Document the learning from the project.
- Prepare the test closure report.
- Report qualitatively and quantitatively on the quality of the work product to the customer.


- Analyze test results to find the defect distribution by type and severity.

Deliverables

- Test closure report
- Test metrics

Equivalence Partitioning: In this method the input domain data is divided into different equivalence data classes. This method is typically used to reduce the total number of test cases to a finite set of testable test cases while still covering maximum requirements. In short, it is the process of taking all possible test cases and placing them into classes; one test value is picked from each class while testing.
Example: If you are testing an input box accepting numbers from 1 to 1000, there is no use in writing a thousand test cases for all 1000 valid input numbers, plus other test cases for invalid data. Using the equivalence partitioning method, the test cases can be divided into three sets of input data, called classes. Each test case is a representative of its respective class, so in this example we can divide our test cases into three equivalence classes of valid and invalid inputs.
Test cases for an input box accepting numbers between 1 and 1000 using equivalence partitioning:
1) One input data class with all valid inputs: pick a single value from the range 1 to 1000 as a valid test case. If you select other values between 1 and 1000 the result is going to be the same, so one test case for valid input data should be sufficient.
2) An input data class with all values below the lower limit, i.e. any value below 1, as an invalid input data test case.
3) Input data with any value greater than 1000, representing the third, invalid input class.
Using equivalence partitioning you have thus categorized all possible test cases into three classes; test cases with other values from any class should give you the same result. We have selected one representative from every input class to design our test cases. Test case values are selected in such a way that the largest number of attributes of each equivalence class can be exercised. Equivalence partitioning uses the fewest test cases to cover the maximum requirements.
Boundary value analysis: It is widely recognized that input values at the extreme ends of the input domain cause more errors in the system: more application errors occur at the boundaries of the input domain. The boundary value analysis testing technique is used to identify errors at the boundaries rather than errors that exist in the center of the input domain. Boundary value analysis is the next step after equivalence partitioning for designing test cases: test cases are selected at the edges of the equivalence classes.
Test cases for an input box accepting numbers between 1 and 1000 using boundary value analysis:
1) Test cases with test data exactly on the boundaries of the input domain, i.e. values 1 and 1000 in our case.
2) Test data with values just below the extreme edges of the input domain, i.e. values 0 and 999.
3) Test data with values just above the extreme edges of the input domain, i.e. values 2 and 1001.
Boundary value analysis is often considered part of stress and negative testing.
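The 1-to-1000 example above translates directly into executable checks. In this sketch, accept_number() is a stand-in for the real input-box validation; the test values and expected results are exactly the ones derived in the text.

def accept_number(value):
    # Hypothetical SUT: the input box accepts integers 1 to 1000
    return 1 <= value <= 1000

# Equivalence partitioning: one representative per class
ep_cases = [
    (500,  True),   # class 1: valid values 1..1000
    (-5,   False),  # class 2: values below the lower limit
    (1500, False),  # class 3: values above the upper limit
]

# Boundary value analysis: on, just below, and just above each boundary
bva_cases = [
    (1, True), (1000, True),    # exactly on the boundaries
    (0, False), (999, True),    # just below the boundaries
    (2, True), (1001, False),   # just above the boundaries
]

for value, expected in ep_cases + bva_cases:
    assert accept_number(value) == expected, f"failed for {value}"
print("all EP and BVA cases pass")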


Note: There is no hard-and-fast rule to test only one value from each equivalence class you created for the input domains. You can select multiple valid and invalid values from each equivalence class according to your needs and previous judgment. E.g., if you divided the 1 to 1000 input values into a valid data equivalence class, then you can select test case values like 1, 11, 100, 950, etc. The same applies for the test cases with invalid data classes. This should be a very basic and simple example for understanding the boundary value analysis and equivalence partitioning concepts.
What is White Box Testing? White box testing (WBT) is also called structural or glass box testing. White box testing involves looking at the structure of the code. When you know the internal structure of a product, tests can be conducted to ensure that the internal operations are performed according to the specification and that all internal components have been adequately exercised. White box testing is coverage of the specification in the code.
Code coverage:
- Segment coverage: ensure that each code statement is executed once.
- Branch coverage (or node testing): coverage of each code branch in all possible ways.
- Compound condition coverage: for multiple conditions, test each condition with multiple paths and combinations of the different paths to reach each condition.
- Basis path testing: each independent path in the code is taken for testing.
- Data flow testing (DFT): in this approach you track specific variables through each possible calculation, thus defining the set of intermediate paths through the code. DFT tends to reflect dependencies, mainly through sequences of data manipulation. In short, each data variable is tracked and its use is verified. This approach tends to uncover bugs such as variables used but not initialized, or declared but not used, and so on.
- Path testing: all possible paths through the code are defined and covered. It is a time-consuming task.
- Loop testing: these strategies relate to testing single loops, concatenated loops, and nested loops. Independent and dependent code loops and values are tested by this approach.
Why do we do white box testing? To ensure (a small sketch follows this list):

- that all independent paths within a module have been exercised at least once;
- that all logical decisions are verified on their true and false values;
- that all loops are executed at their boundaries and within their operational bounds;
- that internal data structures are valid.
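A minimal white-box sketch of the ideas above: the functions are made-up examples, and the tests are chosen so that both outcomes of each decision point execute (branch coverage, which here also gives statement coverage) and so that a loop runs zero, one and several times (loop testing).

def classify(amount):
    if amount < 0:           # decision point
        return "invalid"     # true branch
    return "ok"              # false branch

# One test per branch: both outcomes of the condition are exercised
assert classify(-1) == "invalid"   # condition true
assert classify(10) == "ok"        # condition false

def total(values):
    s = 0
    for v in values:         # loop under test
        s += v
    return s

# Loop testing: boundary and operational iteration counts
assert total([]) == 0            # loop body skipped
assert total([5]) == 5           # one iteration
assert total([1, 2, 3]) == 6     # several iterations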

Need for White Box Testing: to discover the following types of bugs:

- Logical errors, which tend to creep into our work when we design and implement functions, conditions or controls that are out of the program's normal flow.
- Design errors due to the difference between the logical flow of the program and the actual implementation.
- Typographical and syntax errors.


Skills Required: We need to write test cases that ensure complete coverage of the program logic. For this we need to know the program well, i.e. we should know the specification and the code to be tested, and have knowledge of programming languages and logic.
Limitations of WBT: It is not possible to test each and every path of the loops in a program, which means exhaustive testing is impossible for large systems. This does not mean that WBT is not effective: selecting important logical paths and data structures for testing is practically possible and effective.
Black box testing treats the system as a black box, so it doesn't explicitly use knowledge of the internal structure or code. In other words, the test engineer need not know the internal working of the black box or application. The main focus in black box testing is on the functionality of the system as a whole. The term behavioral testing is also used for black box testing, and white box testing is also sometimes called structural testing. Behavioral test design is slightly different from black-box test design because the use of internal knowledge isn't strictly forbidden, but it is still discouraged. Each testing method has its own advantages and disadvantages. There are some bugs that cannot be found using only black box or only white box testing. The majority of applications are tested by the black box testing method, and we need to cover the majority of test cases so that most of the bugs will get discovered by black box testing. Black box testing occurs throughout the software development and testing life cycle, i.e. in the unit, integration, system, acceptance and regression testing stages.
Tools used for Black Box testing: Black box testing tools are mainly record and playback tools. These tools are used for regression testing, to check whether a new build has created any bugs in previously working application functionality. These record and playback tools record test cases in the form of scripts like TSL, VB Script, JavaScript or Perl.
Advantages of Black Box Testing:
- The tester can be non-technical.
- Used to verify contradictions between the actual system and the specifications.
- Test cases can be designed as soon as the functional specifications are complete.
Disadvantages of Black Box Testing:
- The test inputs need to be drawn from a large sample space.
- It is difficult to identify all possible inputs in limited testing time, so writing test cases is slow and difficult.
- There is a chance of leaving paths unidentified during this testing.
Methods of Black Box Testing:
Graph Based Testing Methods: Every application is built up of some objects. All such objects are identified and a graph is prepared. From this object graph each object relationship is identified, and test cases are written accordingly to discover the errors.
Error Guessing: This is purely based on previous experience and the judgment of the tester. Error guessing is the art of guessing where errors may be hidden. There are no specific tools for this technique; write the test cases that cover all the application paths.
Boundary Value Analysis: Many systems have a tendency to fail at boundaries, so testing the boundary values of an application is important. Boundary Value Analysis (BVA) is a functional testing technique where the extreme boundary values are chosen. Boundary values include maximum, minimum, just inside/outside boundaries, typical values, and error values.
- Extends equivalence partitioning.
- Test both sides of each boundary.
- Look at output boundaries for test cases too.
- Test min, min - 1, max, max + 1, and typical values.
BVA techniques:
1. Number of variables:


For n variables, BVA yields 4n + 1 test cases.
2. Kinds of ranges: generalizing ranges depends on the nature or type of the variables.
Advantages of Boundary Value Analysis:
1. Robustness testing: Boundary Value Analysis plus values that go beyond the limits.
2. Min - 1, Min, Min + 1, Nom, Max - 1, Max, Max + 1.
3. Forces attention to exception handling.
Limitations of Boundary Value Analysis: Boundary value testing is efficient only for variables with fixed value ranges, i.e. boundaries.
Equivalence Partitioning: Equivalence partitioning is a black box testing method that divides the input domain of a program into classes of data from which test cases can be derived. How this partitioning is performed while testing:
1. If an input condition specifies a range, one valid and two invalid classes are defined.
2. If an input condition requires a specific value, one valid and two invalid equivalence classes are defined.
3. If an input condition specifies a member of a set, one valid and one invalid equivalence class is defined.
4. If an input condition is Boolean, one valid and one invalid class is defined.
Comparison Testing: Different independent versions of the same software are compared to each other for testing in this method.
Error guessing: A test data selection technique. The selection criterion is to pick values that seem likely to cause errors. Error guessing is based mostly upon experience, with some assistance from other techniques such as boundary value analysis. Based on experience, the test designer guesses the types of errors that could occur in a particular type of software and designs test cases to uncover them. For example, if any type of resource is allocated dynamically, a good place to look for errors is in the de-allocation of resources: are all resources correctly de-allocated, or are some lost as the software executes?
1-d) Desk checking: Desk checking is conducted by the developer of the system or program. The process involves reviewing the complete product to ensure that it is structurally sound and that the standards and requirements have been met. This is the most traditional means for analyzing a system or program.
1-e) Control Flow Analysis: It is based upon a graphical representation of the program process. In control flow analysis, the program graph has nodes which represent a statement or segment, possibly ending in an unresolved branch. The graph illustrates the flow of program control from one segment to another, as illustrated through branches. The objective of control flow analysis is to determine potential problems in logic branches that might result in a loop condition or improper processing.
What is a test case? A test case has components that describe an input, action or event and an expected response, to determine if a feature of an application is working correctly (definition by glossary). There are levels into which each test case will fall, in order to avoid duplicated effort.
Level 1: In this level you write the basic test cases from the available specification and user documentation.
Level 2: This is the practical stage, in which writing test cases depends on the actual functional and system flow of the application.
Level 3: This is the stage in which you group some test cases and write a test procedure. A test procedure is nothing but a group of small test cases, a maximum of about 10.
Level 4: Automation of the project.
This minimizes human interaction with the system, so that QA can focus on testing the currently updated functionality rather than staying busy with regression testing. So you can observe a systematic growth from no testable items to an automation suite.
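Such an automation suite could generate its boundary cases mechanically. The earlier BVA note states that for n variables the technique yields 4n + 1 test cases: hold every variable at its nominal value and, one variable at a time, substitute min, min + 1, max - 1 and max, plus the single all-nominal case. A sketch of that rule (the variable names and ranges are made up):

def bva_cases(ranges):
    # ranges: {name: (min, max)}; returns the 4n + 1 BVA test cases
    nominal = {name: (lo + hi) // 2 for name, (lo, hi) in ranges.items()}
    cases = [dict(nominal)]                      # the single all-nominal case
    for name, (lo, hi) in ranges.items():
        for value in (lo, lo + 1, hi - 1, hi):   # 4 cases per variable
            case = dict(nominal)
            case[name] = value
            cases.append(case)
    return cases

cases = bva_cases({"age": (18, 60), "quantity": (1, 100)})
print(len(cases))  # 4 * 2 + 1 = 9 test cases for n = 2 variables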


**************SOFTWARE METRICS*********************

- Metrics can be defined as STANDARDS OF MEASUREMENT.
- A metric is a unit used for describing or measuring an attribute.
- Test metrics are the means by which software quality can be measured.
- Test metrics provide visibility into the readiness of the product and give a clear measurement of the quality and completeness of the product.

WHY DO WE NEED METRICS?
- You cannot improve what you cannot measure.
- You cannot control what you cannot measure.
- Without measurement it is impossible to tell whether the implemented process is improving or not.
- Metrics help in taking decisions for the next phase of activities.

Metrics help in understanding the type of improvement required, and in taking decisions on process or technology changes.

WHY METRICS IN SOFTWARE TESTING? There will be certain questions during and after testing, such as:
- How long will it take to test?
- How bad/good is the product?
- How many bugs still remain in the product?
- Will testing be completed on time?
- Was the testing done effectively?
- How much effort went into testing the product?
To answer these questions properly we need some type of measurement and record keeping to justify the answers. This is where testing metrics come into the picture.
TYPES OF METRICS
Base metrics (Direct Measure)
- Base metrics constitute the raw data gathered by the test engineers throughout the testing effort.
- Base metrics are used to provide project status reports to the test lead and to the project manager.
- Base metrics provide the input data to feed into the formulas used to derive calculated metrics.
Examples of base metrics are:

- # of test cases
- # of test cases executed
Calculated Metrics (Indirect Measure)
- Calculated metrics convert the base metric data into more useful information.


- Calculated metrics are generally prepared by the test lead and are used to track the progress of the project at different levels: at module level, at tester level, and for the project as a whole.
- Calculated metrics provide valuable information that, when used and implemented, often leads to significant improvements in the overall SDLC.
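Turning base metrics into calculated metrics is just arithmetic over the raw counts. A minimal sketch, using the counts from the Test Case Defect Density example in the next section:

base = {
    "developed": 1360,   # base metrics: raw counts gathered by testers
    "executed": 1280,
    "passed": 1065,
    "failed": 215,
}

# Calculated metrics derived from the base counts
pct_passed = base["passed"] / base["executed"] * 100
defect_density = base["failed"] / base["executed"] * 100
print(f"% of test cases passed: {pct_passed:.1f}")       # 83.2
print(f"test case defect density: {defect_density:.1f}") # 16.8, as in the worked example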

CALCULATED METRICS AND PHASES: The following calculated metrics are created at the Test Reporting phase or Post Test Analysis phase:
- % of Test Cases Passed
- % of Test Coverage
- % of Defects Corrected
- % of Test Cases Blocked
- % of Rework
- % of Test Effectiveness
- 1st Run Fail Rate
- Defect Discovery Rate
- Overall Fail Rate
- Test Case Defect Density
TEST CASE DEFECT DENSITY: The number of errors found in test cases versus test cases developed and executed: (Defective Test Cases / Total Test Cases Executed) * 100.
Example: The total number of test cases developed is 1360, total test cases executed is 1280, total test cases passed is 1065, and total test scripts failed is 215. So the Test Case Defect Density is (215 / 1280) * 100 = 16.8 %. The 16.8 % value can also be called the Test Case Efficiency %, which depends upon the total number of test cases which found defects.
DEFECT SLIPPAGE RATIO: Number of bugs reported from production versus number of defects reported during execution: (No. of Defects Slipped / (No. of Defects Raised - No. of Defects Withdrawn)) * 100.
Example: The customer reported 21 defects, the total number of defects found while testing is 267, and the total number of invalid defects is 17. So the slippage ratio is [21 / (267 - 17)] * 100 = 8.4 %.
REQUIREMENT VOLATILITY METRIC: This metric ensures that the requirements are normalized or defined properly while estimating. Number of requirements agreed versus number of requirements changed: (No. of Requirements Added + Deleted + Modified) * 100 / No. of Original Requirements.


Example: The SVN 1.3 release has 67 requirements initially; later 7 new requirements are added, 3 requirements are deleted from the initial set, and 11 requirements are modified. Hence requirement volatility is calculated as (7 + 3 + 11) * 100 / 67 = 31.34 %. This means that almost one third of the requirements changed after the initial identification of requirements.
The test metrics should be reviewed and interpreted on a regular basis throughout the test effort, and particularly after the application is released into production.
***IMPORTANT POINTS***
1. Cost of Finding a Defect in Testing (CFDT) = Total effort spent on testing / Defects found in testing. Note: total time spent on testing includes the time to create, review, rework and execute the test cases and record the defects; it should not include time spent fixing the defects.
2. Test Case Adequacy: the number of actual test cases created versus estimated test cases at the end of the test case preparation phase. Calculated as No. of actual test cases / No. of test cases estimated.
3. Test Case Effectiveness: the effectiveness of the test cases, measured by the share of defects found using the test cases. Calculated as No. of defects detected using test cases * 100 / Total no. of defects detected.
4. Effort Variance = {(Actual Effort - Estimated Effort) / Estimated Effort} * 100.
5. Schedule Variance = {(Actual Duration - Estimated Duration) / Estimated Duration} * 100.
6. Schedule Slippage: the amount of time a task has been delayed from its original baseline schedule; the difference between the scheduled start or finish date for a task and the baseline start or finish date. Calculated as ((Actual End Date - Estimated End Date) / (Planned End Date - Planned Start Date)) * 100.
7. Rework Effort Ratio = (Actual rework effort spent in that phase / Total actual effort spent in that phase) * 100.
8. Review Effort Ratio = (Actual review effort spent in that phase / Total actual effort spent in that phase) * 100.
9. Requirements Stability Index = {1 - (Total no. of changes / No. of initial requirements)}.
10. Requirements Creep = (Total no. of requirements added / No. of initial requirements) * 100.
11. Weighted Defect Density: WDD = (5 * count of fatal defects) + (3 * count of major defects) + (1 * count of minor defects).
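The formulas above are straightforward to automate. A minimal sketch, checked against the worked examples in the text (the effort figures in the last call are made-up illustration values):

def defect_slippage_ratio(slipped, raised, withdrawn):
    # (Defects slipped / (Defects raised - Defects withdrawn)) * 100
    return slipped / (raised - withdrawn) * 100

def requirement_volatility(added, deleted, modified, initial):
    # (Added + Deleted + Modified) * 100 / Original requirements
    return (added + deleted + modified) * 100 / initial

def effort_variance(actual, estimated):
    # ((Actual - Estimated) / Estimated) * 100
    return (actual - estimated) / estimated * 100

print(round(defect_slippage_ratio(21, 267, 17), 1))    # 8.4, matching the example
print(round(requirement_volatility(7, 3, 11, 67), 2))  # 31.34, matching the example
print(round(effort_variance(120, 100), 1))             # 20.0 (made-up effort figures)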


Note: The values 5, 3 and 1 correspond to severities as follows: Fatal = 5, Major = 3, Minor = 1.
*******DEFECT REMOVAL DENSITY**********
Test Efficiency vs Test Effectiveness
Software Test Efficiency:
- It is internal to the organization: how many resources were consumed and how much of those resources were utilized.
- Software test efficiency is the number of test cases executed divided by unit of time (generally per hour).
- Test efficiency measures the amount of code and testing resources required by a program to perform a particular function.
Here are some formulas to calculate software test efficiency (for different factors):
- Test efficiency = (total number of defects found in unit + integration + system testing) / (total number of defects found in unit + integration + system + user acceptance testing)
- Testing efficiency = (No. of defects resolved / Total no. of defects submitted) * 100
Software Test Effectiveness covers three aspects:
- How much the customer's requirements are satisfied by the system.
- How well the customer specifications are achieved by the system.
- How much effort is put into developing the system.
Software test effectiveness judges the effect of the test environment on the application. Here are some formulas to calculate software test effectiveness (for different factors):
- Test effectiveness = Number of defects found / Number of test cases executed
- Test effectiveness = (total number of defects injected + total number of defects found) / (total number of defects escaped) * 100
- Test effectiveness = Loss due to problems / Total resources processed by the system
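Two of the formulas above as a runnable sketch. The input counts are made-up illustration values; only the formulas themselves come from the text.

def testing_efficiency(defects_resolved, defects_submitted):
    # Testing efficiency = (resolved / submitted) * 100, per the text
    return defects_resolved / defects_submitted * 100

def test_effectiveness(defects_found, test_cases_executed):
    # Test effectiveness = defects found / test cases executed, per the text
    return defects_found / test_cases_executed

print(testing_efficiency(180, 200))   # 90.0 %
print(test_effectiveness(50, 1000))   # 0.05 defects per executed test case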

Software Test Metrics: Software test metrics are used in decision making. Test metrics are derived from raw test data, because what cannot be measured cannot be managed. Hence test metrics are used in test management; they help in showcasing the progress of testing. Some of the software test metrics are as below.
What is a Test Summary? It is a document summarizing testing activities and results, containing an evaluation of the test items.
Requirements Volatility: Formula = {(No. of requirements added + No. of requirements deleted + No. of requirements modified) / No. of initial approved requirements} * 100. Unit of measure = percentage.


Review Efficiency
Components: number of critical, major and minor review defects; effort spent on review in hours. Weightage factors for defects: Critical = 1, Major = 0.4, Minor = 0.1.
Formula = No. of weighted review defects / Effort spent on reviews. Unit of measure = defects per person-hour.
Productivity in Test Execution
Formula = No. of test cases executed / Time spent in test execution. Unit of measure = test cases per person per day. Here the time is the cumulative time of all the resources.
Example: 1000 test cases were executed in a cycle by 4 resources. Resource 1 executed 300 test cases in 2 days, resource 2 executed 400 test cases in 3 days, resource 3 executed 75 test cases in 1 day, and resource 4 executed 225 test cases in 4 days. The cumulative time spent executing the 1000 test cases is therefore 10 man-days, so Productivity in Test Execution = 1000 / 10 = 100, i.e. 100 test cases per person per day.
Defect Rejection Ratio: Formula = (No. of defects rejected / Total no. of defects raised) * 100. Unit of measure = percentage.
Defect Fix Rejection Ratio: Formula = (No. of defect fixes rejected / No. of defects fixed) * 100. Unit of measure = percentage.
Delivered Defect Density
Components: number of critical, major and minor defects. Weightage factors for defects: Critical = 1, Major = 0.4, Minor = 0.1.
Formula = (No. of weighted defects found during validation/customer review + acceptance testing) / (Size of the work product). Unit of measure = defects for the work product / cycle.
Outstanding Defect Ratio: Formula = (Total number of open defects / Total number of defects found) * 100. Unit of measure = percentage.
COQ (Cost of Quality): Formula = [(Appraisal Effort + Prevention Effort + Failure Effort) / Total Project Effort] * 100. Unit of measure = percentage.
************SOFTWARE TESTING TOOLS********
Software Testing Tools: Software testing tools can be classified into the following broad categories:
* Test Management Tools
* White Box Testing Tools

************SOFTWARE TESTING TOOLS********
Software Testing Tools: The software testing tools can be classified into the following broad categories:
* Test Management Tools
* White Box Testing Tools
* Performance Testing Tools
* Automation Testing Tools
Test Management Tools: Some of the objectives of a test management tool are listed below. However, all of these characteristics may not be available in one single tool, so the team may end up using multiple tools, with each tool focusing on a set of key areas.
* To manage requirements.
* To manage manual test cases, suites and scripts.
* To manage automated test scripts.
* To manage test execution and the various execution activities (recording results, etc.).
* To be able to generate various reports with regard to status, execution, etc.
* To manage defects - in other words, a defect tracking tool.
* Configuration management / version control (for example, for controlling and sharing the test plan, test reports, test status, etc.).
Some of the tools which can be used, along with their key areas of expertise, are as below.
Uses of Telelogic/IBM DOORS:
1. Used for writing requirements/test cases.
2. Baseline functionality available.
3. The document can be exported into Microsoft Excel/Word.
4. Traceability matrix implemented in DOORS, so the requirements can be mapped to the test cases and vice versa.
Uses of HP Quality Center:
1. Used for writing requirements/test cases.
2. Used for baselining of documents.
3. For exporting of documents.
4. Traceability matrix, so the requirements can be mapped to the test cases and vice versa.
Some of the defect tracking tools are:
* IBM Lotus Notes
* Bugzilla (Open Source/Free)
Some of the other test management tools are:
* Bugzilla Testopia
* qaManager
* TestLink
Configuration Management Tools: Roger Pressman, in his book Software Engineering: A Practitioner's Approach, states that CM "is a set of activities designed to control change by identifying the work products that are likely to change, establishing relationships among them, defining mechanisms for managing different versions of these work products, controlling the changes imposed, and auditing and reporting on the changes made." In software testing, configuration management plays the role of tracking and controlling changes in the various test components (for example, controlling and sharing the test plan, test reports, test status, etc.). Configuration management practices include revision control and the establishment of baselines. Some of the configuration management tools are:
* IBM ClearCase
* CVS
* Microsoft VSS
White Box Testing Tools: Some of the aspects and characteristics required in a white box testing tool are:


To check the Code Coverage
Code coverage is a measure used in software testing. It describes the degree to which the source code of a program has been tested. It is a form of testing that inspects the code directly and is therefore a form of white box testing. Some of the tools available in this space are:
Tools for C / C++: * IBM Rational PureCoverage * Cantata++ * Insure++ * BullseyeCoverage
Tools for C# .NET: * NCover * Testwell CTC++ * Semantic Designs Test Coverage * TestDriven.NET * Visual Studio 2010 * PartCover (Open Source) * OpenCover (Open Source)
Tools for COBOL: * Semantic Designs Test Coverage
Tools for Java: * Clover * Cobertura * Jtest * Serenity * Testwell CTC++ * Semantic Designs Test Coverage
Tools for Perl: * Devel::Cover
Tools for PHP: * PHPUnit with Xdebug * Semantic Designs Test Coverage
To check Coding Standards
A comprehensive list of tools for checking coding standards can be found in the List of tools to check coding standards.
To check the Code Complexity
Code complexity (cyclomatic complexity) is a measure of the number of linearly-independent paths through a program module and is calculated by counting the number of decision points found in the code (if, else, do, while, throw, catch, return, break etc.). Some of the free tools available for checking code complexity are: * devMetrics by Anticipating Minds * Reflector Add-In
To check Memory Leaks
A memory leak happens when an application/program has utilized memory but is unable to release it back to the operating system. A memory leak can reduce the performance of the computer by reducing the amount of available memory. Eventually, in the worst case, too much of the available memory may become utilized and all or part of the system or device stops working correctly, the application fails, or the system crashes.
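The same failure pattern can be sketched in Python: a module-level cache that only ever grows keeps references alive, so the memory is never released (a hypothetical illustration; the tools listed below target C/C++ programs, where the effect is the same):

    # A classic leak pattern: an unbounded module-level cache.
    _cache = {}

    def process_request(request_id, payload):
        # Every request's payload is cached forever and never evicted,
        # so the process's memory grows without bound.
        _cache[request_id] = payload
        return len(payload)

    # Simulate a long-running service: retained memory climbs steadily.
    for i in range(100_000):
        process_request(i, "x" * 1024)   # roughly 100 MB retained, never released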

Some of the free tools available for checking memory leak issues are: * Valgrind * Mpatrol * Memwatch
********PERFORMANCE TESTING TOOLS*******
AgileLoad, Load Impact, Keynote Test Perspective, Monitis, SilkPerformer (Borland), Rational Performance Tester (IBM), AppLoader (NRG Global), QTest (Quotium), RTI, Apica LoadTest, Forecast, HP LoadRunner, WAPT
*******AUTOMATION TESTING TOOLS********
QTP (HP QuickTest Professional), Watir, Tosca Testsuite, Selenium, Rational Functional Tester, TestComplete, TestPartner, SOAtest, TestDrive
*********AUTOMATED TESTING****************
Test automation demands considerable investment of money and resources. Successive development cycles require execution of the same test suite repeatedly. Using a test automation tool it is possible to record this test suite and re-play it as required. Once the test suite is automated, no human intervention is required. This improves the ROI of test automation. The goal of automation is to reduce the number of test cases to be run manually, not to eliminate manual testing altogether. (Read more at http://www.guru99.com/automation-testing.html) Automated testing is important for the following reasons:

* Manual testing of all workflows, all fields and all negative scenarios is time- and cost-consuming.
* It is difficult to test multilingual sites manually.
* Automation does not require human intervention; you can run automated tests unattended (overnight).
* Automation increases the speed of test execution.
* Automation helps increase test coverage.
* Manual testing can become boring and hence error-prone.

Which Test Cases to Automate? Test cases to be automated can be selected using the following criteria to increase the automation ROI:

* High-risk, business-critical test cases
* Test cases that are executed repeatedly
* Test cases that are very tedious or difficult to perform manually
* Test cases which are time-consuming

The following categories of test cases are not suitable for automation:

* Test cases that are newly designed and not executed manually at least once
* Test cases for which the requirements change frequently
* Test cases which are executed on an ad-hoc basis

Automation Process: The following steps are followed in an automation process. (1) Test tool selection


Test tool selection largely depends on the technology the Application Under Test is built on. For instance, QTP does not support Informatica, so QTP cannot be used for testing Informatica applications. It is a good idea to conduct a Proof of Concept of the tool on the AUT. (2) Define the scope of automation. The scope of automation is the area of your Application Under Test which will be automated. The following points help determine scope:

* Features that are important for the business
* Scenarios which have a large amount of data
* Common functionalities across applications
* Technical feasibility
* Extent to which business components are reused
* Complexity of test cases
* Ability to use the same test cases for cross-browser testing

(3) Planning, Design and Development. During this phase you create the automation strategy and plan, which contains the following details:

* Automation tools selected
* Framework design and its features
* In-scope and out-of-scope items of automation
* Automation test bed preparation
* Schedule and timeline of scripting and execution
* Deliverables of automation testing

(4) Test Execution. Automation scripts are executed during this phase. The scripts need input test data before they are set to run. Once executed, they provide detailed test reports. Execution can be performed using the automation tool directly or through the test management tool, which will invoke the automation tool. Example: Quality Center is a test management tool which in turn will invoke QTP for execution of automation scripts. Scripts can be executed on a single machine or a group of machines. The execution can be done overnight, to save time. (5) Maintenance. As new functionalities are added to the System Under Test with successive cycles, automation scripts need to be added, reviewed and maintained for each release cycle. Maintenance becomes necessary to improve the effectiveness of the automation scripts.
Automation tools: The following are the most popular test tools:
QTP: HP's QuickTest Professional (now known as HP Functional Test) is the market leader in functional testing tools. The tool supports a plethora of environments including SAP, Java and Delphi, amongst others. QTP can be used in conjunction with Quality Center, which is a comprehensive test management tool.
Rational Robot: It is an IBM tool used to automate regression, functional and configuration tests for client-server, e-commerce as well as ERP applications. It can be used with Rational Test Manager, which aids in test management activities.


Selenium: It is an open-source web automation tool. It supports all types of web browsers. Despite being open source, it is actively developed and supported. How to Choose an Automation Tool? Selecting the right tool can be a tricky task. The following criteria will help you select the best tool for your requirement:

* Environment support
* Ease of use
* Testing of databases
* Object identification
* Image testing
* Error recovery testing
* Object mapping
* Scripting language used
* Support for various types of test, including functional, test management, mobile, etc.
* Support for multiple testing frameworks
* Easy to debug the automation software scripts
* Ability to recognize objects in any environment
* Extensive test reports and results
* Minimal training cost for the selected tools

Tool selection is one of the biggest challenges to be tackled before going for automation. First, identify the requirements, explore various tools and their capabilities, set the expectations from the tool, and go for a Proof of Concept. Framework in Automation: A framework is a set of automation guidelines which help in:

* Maintaining consistency of testing
* Improving test structuring
* Minimizing usage of code
* Reducing maintenance of code
* Improving re-usability
* Letting non-technical testers be involved in the code
* Reducing the training period for using the tool
* Involving data wherever appropriate

There are four types of framework used in software automation testing:

1. Data-Driven Automation Framework
2. Keyword-Driven Automation Framework
3. Modular Automation Framework
4. Hybrid Automation Framework
A sketch of the data-driven pattern follows below.
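As a concrete illustration, here is a minimal Python sketch of the data-driven pattern, with the test logic written once and driven row by row from the data (the login fields and the check_login stand-in are hypothetical):

    # Test data would normally live outside the code (CSV file, spreadsheet,
    # database); it is inlined here as rows to keep the example self-contained.
    TEST_DATA = [
        {"username": "alice", "password": "correct-password", "expected": "pass"},
        {"username": "alice", "password": "wrong-password", "expected": "fail"},
        {"username": "", "password": "correct-password", "expected": "fail"},
    ]

    def check_login(username, password):
        # Hypothetical stand-in for the application under test.
        return username == "alice" and password == "correct-password"

    def run_data_driven_suite(rows):
        # One generic script, driven row by row from the data source.
        for row in rows:
            actual = "pass" if check_login(row["username"], row["password"]) else "fail"
            verdict = "OK" if actual == row["expected"] else "MISMATCH"
            print(f"{verdict}: login({row['username']!r}) expected {row['expected']}, got {actual}")

    run_data_driven_suite(TEST_DATA)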

Automation Best Practices: To get the maximum ROI from automation, observe the following:
* The scope of automation needs to be determined in detail before the start of the project. This sets expectations from automation right.

* Select the right automation tool: a tool must not be selected based on its popularity but on its fit to the automation requirements.


* Choose an appropriate framework.
* Scripting standards: standards have to be followed while writing the scripts for automation (see the sketch below). Some of them are:
o Create uniform scripts, comments and indentation of the code.
o Adequate exception handling - how errors are handled on system failure or unexpected behavior of the application.
o User-defined messages should be coded or standardized for error logging, so that testers can understand them.
* Measure metrics: the success of automation cannot be determined by comparing the manual effort with the automation effort alone, but also by capturing the following metrics:
o Percent of defects found
o Time required for automation testing for each and every release cycle
o Minimal time taken for release
o Customer satisfaction index
o Productivity improvement
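As an illustration of the exception-handling and error-logging standards above, here is a hypothetical Python helper that logs a standardized message and lets the run continue instead of aborting:

    import logging

    logging.basicConfig(level=logging.INFO,
                        format="%(asctime)s %(levelname)s %(name)s: %(message)s")
    log = logging.getLogger("automation.suite")

    def run_test_step(step_name, action):
        """Run one scripted step; on failure, log a standardized message and continue."""
        try:
            action()
            log.info("Step '%s' passed", step_name)
            return True
        except Exception as exc:
            # Standardized, tester-readable message for the error log.
            log.error("Step '%s' failed: %s - continuing with next step", step_name, exc)
            return False

    def post_payment():
        raise ValueError("payment screen element not found")   # simulated failure

    run_test_step("open payment screen", lambda: None)   # passes
    run_test_step("post payment", post_payment)          # fails, is logged, run continues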

The above guidelines, if observed, can greatly help in making your automation successful. Benefits of automated testing: The following are the benefits of automated testing:

* 70% faster than manual testing
* Wider test coverage of application features
* Reliable results
* Ensures consistency
* Saves time and cost
* Improves accuracy
* Human intervention is not required during execution
* Increases efficiency
* Better speed in executing tests
* Re-usable test scripts
* Test frequently and thoroughly
* More cycles of execution can be achieved through automation
* Early time to market

What is the difference between a stub and a driver? Stubs are dummy modules known as "called programs", used in integration testing (top-down approach) when sub-programs are under construction. Drivers are also dummy modules, known as "calling programs", used in bottom-up integration testing when main programs are under construction. There is also another type of integration testing which uses both stubs and drivers; it is called Sandwich Integration Testing. (A sketch of a stub and a driver follows the table below.)
Unit test suite: a sequence of many unit tests.
The other important technique is regression testing. In this technique, you maintain a suite of tests (called the regression suite), which are usually run nightly as well as before every check-in. Every time you have a bug fix, you add one or more tests to the suite. The purpose is to stop you from re-introducing old bugs that have already been fixed.
Pointless testing: testing the same basic case more than one way, or testing things so trivial that they really do not need to be tested (like auto-generated getters and setters).
There are several different levels of testing that are done throughout the software development process. These are outlined in the table below:

Test Type | Description | White or Black Box?
Acceptance | Testing conducted by a customer to verify that the system meets the acceptance criteria of the requested application. | Black Box
Integration | Tests the interaction of small modules of a software application. | White or Black Box
Unit | Tests a small unit (i.e. a class) of a software application, separate from other units of the application. | White Box
Regression | Tests new functionality in a program. Regression testing is done by running all of the previous unit tests written for a program; if they all pass, the new functionality is added to the code base. | White Box
Functional and System | Verifies that the entire software system satisfies the requirements. | Black Box
Beta | Ad-hoc, third-party testing. | Black Box
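Returning to the stub/driver question above, a minimal Python sketch of both (the module names are hypothetical):

    # Top-down integration: the lower-level payment module is not ready yet,
    # so a stub (a "called program") returns a canned response in its place.
    def payment_module_stub(amount):
        return {"status": "approved", "amount": amount}

    def order_module(amount, payment_module):
        # Higher-level module under test; calls whatever payment module it is given.
        return payment_module(amount)["status"] == "approved"

    # Bottom-up integration: the main program is not ready yet, so a driver
    # (a "calling program") exercises the finished lower-level module.
    def driver():
        assert order_module(100, payment_module_stub) is True
        print("order_module integration checks passed")

    driver()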

There are 4 types of unit testing:
i. Basic Path Coverage - the continuation or continuity of the program is tested.
ii. Control Structure Coverage - after completion of basic path coverage, the program is tested in terms of input/output.
iii. Program Technique Coverage - program execution time is taken into consideration; if the time is not acceptable, the programmer changes the coding without disturbing the functionality.
iv. Mutation - mutation means change. One known change is made and the previously applied tests are run again; if they succeed in identifying the change, then the program is sound.
There are many types of unit testing:
1. Boundary conditions - part of unit testing which ensures that the module operates as desired within the specified boundaries (see the sketch below).
2. Independent paths - part of unit testing which ensures that all statements in a unit/module have been executed at least once.
3. Module interface - a type of unit testing which tests whether information flows into and out of the unit/module in a proper manner.
4. Local data structure - a type of unit testing which ensures that temporarily stored data maintains its integrity while an algorithm is being executed.
5. Error-handling paths - after successful completion of the various tests, error-handling paths are tested.
Unit testing techniques: A number of effective testing techniques are usable in the unit testing stage. The testing techniques may be broadly divided into three types: Functional Testing, Structural Testing, and Heuristic or Intuitive Testing. The defects in software can in general be classified as omissions, surprises and wrong implementations. Omissions are requirements that are missed in the implementation, surprises are implementations that are not found in the requirements, and a wrong implementation is an incorrect implementation of a requirement.
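As a sketch of the "boundary conditions" type above, here is a unit test that probes values on and around the specified boundaries (the grade() function and its 0-100 range are hypothetical):

    import unittest

    def grade(score):
        """Return 'pass' for scores 40..100 and 'fail' for 0..39 (hypothetical spec)."""
        if not 0 <= score <= 100:
            raise ValueError("score out of range")
        return "pass" if score >= 40 else "fail"

    class TestGradeBoundaries(unittest.TestCase):
        def test_values_on_and_around_the_boundaries(self):
            self.assertEqual(grade(0), "fail")     # lower boundary
            self.assertEqual(grade(39), "fail")    # just below the pass mark
            self.assertEqual(grade(40), "pass")    # on the pass mark
            self.assertEqual(grade(100), "pass")   # upper boundary
            with self.assertRaises(ValueError):
                grade(101)                         # just outside the valid range

    if __name__ == "__main__":
        unittest.main()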


Figure 1 (Unit Testing Techniques) shows the major categories of testing techniques and what types of defects they are effective against. While functional testing techniques help catch omissions and wrong implementations, structural testing techniques help catch surprises and wrong implementations. Heuristic or intuitive testing techniques help catch all types of defects, but intuitive testing is effective only when complementing the systematic functional and structural testing techniques.
Functional testing techniques (some examples):
- Boundary Value Analysis: testing the edge conditions of boundaries.
- Equivalence Partitioning: grouping test cases into classes in which executing one test case is equivalent to executing any other test case in the same group.
- Cause-Effect Graphing: when the behaviour of the unit under test is specified as cause and effect, design test cases that validate this relationship.
Structural testing techniques (some examples):
- Statement Coverage: identify test cases such that every line of code is executed in one test case or another.
- Branch Coverage: identify test cases such that every branch of code is executed in one test case or another. 100% branch coverage automatically assures 100% statement coverage.
- Condition Coverage: identify test cases such that the condition in each predicate expression is evaluated in all possible ways.
- Modified Condition-Decision Coverage: identify test cases such that each Boolean operand can independently affect the outcome of a decision.
INTEGRATION TESTING TYPES: 1. BIG BANG 2. TOP DOWN 3. BOTTOM UP
Test Strategy: A Test Strategy document is a high-level document normally developed by the project manager. This document defines the testing approach used to achieve the testing objectives. The Test Strategy is normally derived from the Business Requirement Specification document. The Test Strategy document is a static document, meaning that it is not updated too often. It sets the standards for testing processes and activities, and other documents such as the Test Plan draw their contents from the standards set in the Test Strategy document. Some companies include the Test Approach or Strategy inside the Test Plan, which is fine and is usually the case for small projects. However, for larger projects there is one Test Strategy document and a number of Test Plans for each phase or level of testing.
Components of the Test Strategy document:

* Scope and objectives
* Business issues
* Roles and responsibilities
* Communication and status reporting
* Test deliverables
* Industry standards to follow
* Test automation and tools
* Testing measurements and metrics
* Risks and mitigation
* Defect reporting and tracking
* Change and configuration management
* Training plan

Test Plan: The Test Plan document, on the other hand, is derived from the Product Description, the Software Requirement Specification (SRS), or Use Case documents. The Test Plan document is usually prepared by the Test Lead or Test Manager, and the focus of the document is to describe what to test, how to test, when to test and who will do which test. It is not uncommon to have one Master Test Plan as a common document for all the test phases, with each test phase having its own Test Plan document. There is much debate as to whether the Test Plan document should also be a static document like the Test Strategy document mentioned above, or whether it should be updated every so often to reflect changes in the direction of the project and its activities. My own personal view is that when a testing phase starts and the Test Manager is controlling the activities, the test plan should be updated to reflect any deviation from the original plan. After all, planning and control are continuous activities in the formal test process.

* Test plan ID
* Introduction
* Test items
* Features to be tested
* Features not to be tested
* Test techniques
* Testing tasks
* Suspension criteria
* Features pass or fail criteria
* Test environment (entry criteria, exit criteria)
* Test deliverables
* Staff and training needs
* Responsibilities
* Schedule

This is a standard approach to preparing test plan and test strategy documents, but things can vary from company to company. Testing methodologies means the different testing techniques we apply, such as black box, white box and unit testing.

Test Automation Methodologies: The following is a description of two methods that have proven effective in implementing an automated testing solution.
"Functional Decomposition" Method: The main concept behind the "Functional Decomposition" script development methodology is to reduce all test cases to their most fundamental tasks, and to write user-defined functions, business function scripts, and "sub-routine" or "utility" scripts which perform these tasks independently of one another. In general, these fundamental areas include:
- Navigation (e.g. "Access Payment Screen from Main Menu")
- Specific Business Function (e.g. "Post a Payment")
- Data Verification (e.g. "Verify Payment Updates Current Balance")
- Return (e.g. "Return to Main Menu")
In order to accomplish this, it is necessary to separate data from function. This allows an automated test script to be written for a business function, using data files to provide both the input and the expected-results verification. A hierarchical architecture is employed, using a structured or modular design.


The highest level is the Driver script, which is the engine of the test. The Driver begins a chain of calls to the lower-level components of the test. Drivers may perform one or more test case scenarios by calling one or more Main scripts. The Main scripts contain the test case logic, calling the Business Function scripts necessary to do the application testing. All utility scripts and functions are called as needed by Driver, Main, and Business Function scripts.
- Driver Scripts: perform initialization (if required), then call the Main scripts in the desired order.
- Main Scripts: perform the application test case logic using Business Function scripts.
- Business Function Scripts: perform specific business functions within the application.
- Subroutine Scripts: perform application-specific tasks required by two or more Business Function scripts.
- User-Defined Functions: general, application-specific, and screen-access functions. (Note that functions can be called from any of the above script types.)
Processing note: if an error occurs in the processing such that we cannot continue with the test case, a "FALSE" condition is returned to the calling script. This script, in turn, returns the "FALSE" condition to its calling script, and so on, until control is returned back to the Driver script. If the test case dependencies have been properly controlled, the Driver can then continue with the next test case; otherwise the Driver would have to exit.
Advantages:
* Utilizing a modular design, and using files or records to both input and verify data, reduces redundancy and duplication of effort in creating automated test scripts.
* Scripts may be developed while application development is still in progress. If functionality changes, only the specific "Business Function" script needs to be updated.
* Since scripts are written to perform and test individual business functions, they can easily be combined in a "higher-level" test script in order to accommodate complex test scenarios.
* Data input/output and expected results are stored as easily maintainable text records. The user's expected results are used for verification, which is a requirement for system testing.
* Functions return "TRUE" or "FALSE" values to the calling script, rather than aborting, allowing for more effective error handling and increasing the robustness of the test scripts. This, along with a well-designed "recovery" routine, enables "unattended" execution of test scripts.
Disadvantages:
* Requires proficiency in the tool's scripting language.
* Multiple data files are required for each test case. There may be any number of data inputs and verifications required, depending on how many different screens are accessed. This requires data files to be kept in separate directories by test case.
* The tester must not only maintain the detailed test plan with specific data, but must also re-enter this data in the various data files.
* If a simple text editor such as Notepad is used to create and maintain the data files, careful attention must be paid to the format required by the scripts/functions that process the files, or processing errors will occur.
Automation Framework: A test automation framework consists of a set of assumptions, concepts and tools that provide support for automated software testing. The main advantage of such a framework is the low cost of maintenance: if there is a change to any test case, only the test case file needs to be updated; the Driver script and Startup script remain the same.
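A skeletal Python sketch of this Driver / Main / Business Function hierarchy, including the FALSE-propagation described in the processing note (all names are hypothetical; a real script would call the test tool's API):

    def post_payment(amount):
        """Business Function script: returns False instead of aborting on error."""
        if amount <= 0:
            return False                     # FALSE propagates to the calling script
        print(f"posted payment of {amount}")
        return True

    def payment_test_case(test_data):
        """Main script: test case logic built from Business Function calls."""
        return post_payment(test_data["amount"])

    def driver(test_cases):
        """Driver script: initializes, then calls the Main scripts in order."""
        for data in test_cases:
            ok = payment_test_case(data)
            print(f"test case {data['id']}: {'PASS' if ok else 'FAIL'} - continuing")

    driver([{"id": 1, "amount": 50}, {"id": 2, "amount": -1}])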
Choosing the right framework/scripting technique helps in keeping costs low. The costs associated with test scripting are due to development and maintenance efforts, and the scripting approach used during test automation affects those costs. Various framework/scripting techniques are generally used:
1. Linear (procedural code, possibly generated by tools like those that use record and playback)
2. Structured (uses control structures - typically if-else, switch, for, while conditions/statements)
3. Data-driven (data is persisted outside of tests in a database, spreadsheet, or other mechanism)


4. Keyword-driven
5. Hybrid (two or more of the above patterns are used)
The testing framework is responsible for:
1. Defining the format in which to express expectations.
2. Creating a mechanism to hook into or drive the application under test.
3. Executing the tests.
4. Reporting results.
**********TEST AUTOMATION PROCESS***********
#1 Test Automation - Plan
This is the first step in the test automation process. The major action item here is to create a plan that specifies purpose, scope, strategies, major requirements, schedule and budget.
#2 Test Automation - Design and Development
The major action item here is to create a detailed automation solution. This will address the major objectives and meet all the automation requirements; it is a detailed breakup addressing the majority of the automation plan items. In the development phase the various test automation frameworks and scripts are developed.
#3 Test Automation Tool - Preparation
The major action item here is to evaluate the various automation tools and decide on a tool to be used for the project. This is more of a feasibility study; at this stage an in-house tool can also be developed (if feasible). Once the tool is decided upon, it is deployed with the various configurations required for the project.
#4 Test Automation Solution - Deployment
Once the tool and the scripts are ready, they are integrated together and deployed on the test environment.
#5 Test Automation - Review
The working of the automation solution is reviewed to identify issues and limitations and to provide feedback. This helps to further enhance the solution.
****FUNCTIONAL REQUIREMENTS:- GIVING THE DESIRED OUTPUT. NON-FUNCTIONAL:- RELIABILITY, PERFORMANCE, THROUGHPUT.****
Waterfall model

What is it?


In the waterfall model, software development progresses through various phases like Requirements Analysis, Design etc. sequentially. In this model, the next phase begins only when the earlier phase is completed.
What Is The Testing Approach? The first phase in the waterfall model is the requirements phase, in which all the project requirements are completely defined before testing starts. During this phase, the test team brainstorms the scope of testing and the test strategy, and drafts a detailed test plan. Only once the design of the software is complete will the team move on to execution of the test cases, to ensure that the developed software behaves as expected. In this methodology, the testing team proceeds to the next phase only when the previous phase is completed.
Advantages: This model is very simple to plan and manage. Hence, projects where requirements are clearly defined and stated beforehand can be easily tested using the waterfall model.
Disadvantages: In the waterfall model, you can begin the next phase only once the previous phase is completed. Hence, this model cannot accommodate unplanned events and uncertainty. This methodology is not suitable for projects where the requirements change frequently.
Iterative development

What is it? In this model, a big project is divided into small parts, and each part is subjected to multiple iterations of the waterfall model. At the end of an iteration, a new module is developed or an existing module is enhanced. This module is integrated into the software architecture and the entire system is tested all together.
What is the testing approach? As soon as an iteration is completed, the entire system is subjected to testing. Feedback from testing is immediately available and is incorporated in the next cycle. The testing time required in successive iterations can be reduced based on the experience gained from past iterations.
Advantages: The main advantage of iterative development is that test feedback is immediately available at the end of each cycle.
Disadvantages: This model increases communication overheads significantly, since at the end of each cycle feedback about deliverables, effort etc. must be given.
Agile methodology


What is it? Traditional software development methodologies work on the premise that software requirements remain constant throughout the project. But with an increase in complexity, the requirements undergo numerous changes and continuously evolve. At times, the customer himself is not sure what he wants. Though the iterative model addresses this issue, it is still based on the waterfall model. In Agile methodology, software is developed in incremental, rapid cycles. Interactions amongst customers, developers and clients are emphasized rather than processes and tools. Agile methodology focuses on responding to change rather than on extensive planning.
What is the testing approach? Incremental testing is used in agile development methods, and hence every release of the project is tested thoroughly. This ensures that any bugs in the system are fixed before the next release.
Advantages: It is possible to make changes in the project at any time to comply with the requirements. This incremental testing minimizes risks.
Disadvantages: Constant client interaction means added time pressure on all stakeholders, including the client themselves and the software development and test teams.
Extreme programming

What is it?


Extreme programming is a type of agile methodology which believes in short development cycles. A project is divided into simple engineering tasks. Programmers code a simple piece of software and get back to the customer for feedback. Review points from the customer are incorporated and the developers proceed with the next task. In extreme programming, developers usually work in pairs. Extreme programming is used in places where customer requirements are constantly changing.
What Is The Testing Approach? Extreme programming follows test-driven development, which proceeds as follows:
1. Add a test case to the test suite to verify the new functionality which is yet to be developed.
2. Run all the tests; the new test case must fail, since the functionality is not coded yet.
3. Write some code to implement the feature/functionality.
4. Run the test suite again; this time the new test case should pass, since the functionality has now been coded.
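A minimal red-green illustration of these four steps in Python (the apply_discount feature is hypothetical):

    import unittest

    # Step 1: add a test for functionality that does not exist yet.
    class TestDiscount(unittest.TestCase):
        def test_fifty_percent_discount(self):
            self.assertEqual(apply_discount(200.0, 50), 100.0)

    # Step 2: at this point the suite fails (apply_discount is undefined) - "red".

    # Step 3: write just enough code to implement the feature.
    def apply_discount(price, percent):
        return price * (1 - percent / 100)

    # Step 4: run the suite again; the new test now passes - "green".
    if __name__ == "__main__":
        unittest.main()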

Advantages: Customers having only a vague software design in mind can use extreme programming. Continuous testing and continuous integration of small releases ensure that the software code delivered is of high quality.
Disadvantages: Meetings amongst the software development team and clients add to the time requirements.

