
A Reflection on Test Automation

Automated software testing is nothing more than automating an existing manual software testing process. It requires a solid testing infrastructure and a thoughtful software testing life cycle, both supported and valued by management. Test automation is not obviously the right thing to do until questions such as why, what, when and how to automate have been answered. Moreover, it is an expensive process, contrary to what test tool vendors would like us to believe: it can take between 3 and 10 times longer to develop an automated test suite than to create and execute manual test cases. The costs of test automation include personnel to support automation for the long term, a dedicated test environment, and the purchase, development and maintenance of tools. The benefit of test automation comes only from running the automated tests on every subsequent release, and only after a careful cost/benefit analysis has been made beforehand. Test automation can provide valuable assistance if it is done by the right people, in the right environment and where it makes sense to do so. As a matter of fact, it is an addition to your testing process: it does not replace manual testing, nor does it enable you to downsize your testing department.

The Triangular Concept of Communication in Test Automation


As a professional tester, you know that not everything always goes smoothly on your project: applications not delivered on time, test-blocking defects, insufficiently trained people, unmotivated personnel, etc. People tend to take these problems into account when setting up project milestones in their test plan, but one aspect is often overlooked: a detailed description of how the major parties involved in the project will COMMUNICATE.

Be aware of the importance of good communication between users, developers and testers. Start the communication channel as soon as possible. Gather information from the developers regarding regularly planned changes in the application and develop your automation scripts based on the development life cycle. Have users (manual testers) and the test automation team use the same test designs to run tests and have a standard way of filling out data. All agreements and procedures should be well-documented so new people get the picture easily. Avoid misunderstandings at a later stage by making sure both users and developers understand and agree upon well-defined business requirements for the system about to be developed. Appoint one person (plus backup) on each team to serve as main contact.

A Perspective on Test Automation


Many people think that test automation is obviously the right thing to do and do not bother to state what they hope to get from it. The key is to follow the rules of software development when automating testing. This includes answering questions like why, what, when and how to automate. To make any effective use of automated testing, a structured process of software testing must already be in place; otherwise there is no real point in trying to automate something that does not exist. Why should we automate? The effort of test automation is an investment: more time and resources are needed up front in order to obtain the benefits later on. It should be understood that the benefits of test automation come mainly from running the automated tests on every subsequent release. Some of the reasons why one would choose to automate tests include:

- improve test coverage during regression testing
- prevent previous defects from reappearing in new releases
- speed up testing to accelerate releases
- reduce the costs of testing
- ensure consistency
- improve the reliability of testing
- allow testers to focus more on test depth instead of repetitive work
- run tests over and over again with less overhead

When should we automate? Developers nowadays produce code faster than ever and with more complexity than ever before. Advances in code generation tools and code reuse are making it difficult for testers to keep up with software development. Test automation, especially if applied only at the end of the testing cycle, will not be able to keep up with these advances; automating at the early stages is therefore the best strategy. Like structured software testing, test automation also has its planning, design, testing and implementation phases. The best approach is to integrate and synchronise these phases of test automation with the Implementation Model of structured software testing.

Test Automation Tips


When starting to use test automation tools, you are likely to experience more or less serious problems that might even endanger the final outcome of your project. This article provides some practical guidelines for test automation that help you avoid unnecessary problems. Some of them may seem like common sense, but we consider them worth mentioning here, since we have seen projects jeopardised because one or more of them had not been taken into account.

- Do not try to automate every requirement. For some tests, the effort needed to set up automation is simply too high.
- Test automation is software development: a test script is code. This not only implies that you need people with programming experience in your test team, but also that your testers, operating to a certain extent as software developers, must be disciplined and create scripts according to predefined standards.
- Use "general purpose" scripts for initialisation of test data, for navigation from one screen in the application to another, and so on. Also make sure that only one version of these scripts exists, instead of every tester developing his own.
- Separate the test data from the actions. Put the test data in separate data files. This makes it much easier to add or remove test data and also facilitates maintenance of the scripts (see the sketch below).
- Make your scripts robust. Try to anticipate problems. If necessary, program your script in such a way that it skips some tests rather than blocking the execution of your entire test set.
- Make test scripts independent. Do not base your tests on the data generated by the previous script(s); make sure you always have control over your test data.

Do not be scared off by these warnings. The same experience that revealed these pitfalls also proved that a good automated test set really speeds up the turnaround time of regression test execution. This allows you to focus on the dynamic areas of your application.
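To illustrate the tip on separating test data from actions, here is a minimal sketch using Python and pytest; the data file name, its columns and the login action are hypothetical, not taken from the article:

    # Minimal data-driven test sketch: the test data lives in a separate CSV file
    # (login_data.csv with columns user, password, expected) and the script only
    # encodes the actions. File name, columns and the login action are hypothetical.
    import csv
    import pytest

    def load_cases(path="login_data.csv"):
        """Read the test cases from the external data file."""
        with open(path, newline="") as f:
            return list(csv.DictReader(f))

    def perform_login(user, password):
        """Placeholder for the real action script that drives the application."""
        return "ok" if password else "error"

    @pytest.mark.parametrize("case", load_cases())
    def test_login(case):
        assert perform_login(case["user"], case["password"]) == case["expected"]

Adding or removing a test case now only means editing the data file; the action script stays untouched.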

What is Automation?

Automated software testing means automating the existing process of manual software testing. This implies that a structured manual software testing process already exists. Automation is not an island unto itself: it requires a solid testing infrastructure and a thoughtful software testing life cycle, both supported and valued by management. Automation often looks like a dream come true to software testers, who are usually under pressure to do more testing in less time. Performing manual testing, especially the labour-intensive tasks, is usually unappealing, so automation seems to be the solution that makes the job simpler and helps testers meet unrealistic schedules. There is still a common myth in the testing community that test automation tools alone can solve our software testing problems. Many people think that test automation is obviously the right thing to do and do not bother to state what they hope to get from it. Doing it properly includes answering questions like why, what, when and how to automate. Contrary to what test tool vendors would like us to believe, automated testing is an expensive process. Studies show that it can take between 3 and 10 times longer to develop an automated test suite than to create and execute manual test cases. The costs of test automation include personnel to support automation for the long term, a dedicated test environment, and the purchase, development and maintenance of tools. The benefits of test automation only come from running the automated tests on every subsequent release, and only after carefully making a cost/benefit analysis beforehand, that is, after making an informed decision about what is best for your situation. Test automation can add a lot of complexity and cost to a test team's effort. In addition, problems like unrealistic expectations, poor testing practices, a false sense of security, maintenance costs, and other technical and organisational problems might arise. But it can also provide valuable assistance if it is done by the right people, in the right environment and where it makes sense to do so. Test automation has its limitations. It does not replace manual testing, which will find more defects than automated testing. As a matter of fact, automated testing is an addition to your testing process and does not enable you to downsize your testing department.

Starting Test Automation - Introduction


If you start test automation, you should opt for a long-term, strategic solution. The effort is an investment: more time and resources are needed up front in order to benefit later on. Automation for a single test cycle does not pay off, and you and I know any release will take a number of test cycles. That is why, before venturing into test automation, realistic expectations should be defined by investing thoroughly in planning. Furthermore, test automation is a "change management" process, and this requires management commitment. In an initial stage, the development of a test automation strategy requires mapping out what is to be automated, where in the testing life cycle, how it is going to be done, how the scripts will be maintained, and what the expected costs and benefits will be.

Firstly, ensure that a structured software testing process is already in place. If not, concentrate on implementing a good structured software testing process in parallel; without a good testing methodology, test automation will never succeed. During the planning phase, specify your goals, scope, organisation, milestone plan, budget, test environment, and the roles and responsibilities for your automation. This preliminary stage also involves the evaluation, selection and training of automated testing tools. Defect tracking procedures and the defect workflow should also be defined and documented. In general, define the strategy for how you intend to implement and maintain your test automation process. Plan to achieve small successes and grow steadily: it is better to make a small investment and see the effort it really takes before trying to automate the whole regression test suite. It is important to implement a strategy that keeps maintenance costs to a minimum. Maintenance costs are usually more significant for automation than for manual tests, and if test scripts are not maintained for reuse, test automation will have little value. Below you will find a typical process for implementing a structured test automation framework.

Figure 1. Structured Test Automation Process overview.

Starting Test Automation - Planning Phase


To Automate or not to Automate

An organisation might decide to introduce automation without analysing whether automation is appropriate or not. Expectations might be too high, as managers and software engineers quite often consider automated testing to be the "silver bullet" for all quality-related problems. It is therefore important to make a thorough investigation and analysis. Below you'll find some guidelines that will make it easier to decide whether automation really is a solution to the problem.

Figure 2. The process of decision making.

At first, investigate whether a structured testing process is already in place. If not, try implementing a testing process in parallel with automation; before even starting to think about automation, a structured testing process should at least exist. If a structured testing process is already present, an investigation should be done to find out whether automation really is the solution to your problem (i.e. take into account the number of planned releases and test cycles). You will only benefit from automation if more than two cycles or releases are planned, which is mostly the case in reality. Acquiring management's commitment is also a crucial element before going ahead with the automation process, assuming their expectations of automated testing are realistic and they are aware of the costs of introducing the tool and of allocating appropriate staff and resources. After acquiring management commitment, a Test Automation Assessment report should be prepared to obtain backing for the budget and the resources needed.

Measuring Test Automation - Regression Testing


There are several reasons why one would like to measure test automation, e.g. to know whether it was a good investment, to monitor improvement, to evaluate and compare alternatives, to have early warning, to be able to make predictions, to benchmark against a standard, etc. Before we plan to measure, we should always know and understand our objectives. As someone once said: "Is fuel economy a useful measure for an automobile? Not if you want to know whether it will fit into your garage!"

However, reducing costs does not necessarily mean a shorter time to market. It can be advantageous to accept a high investment just to reduce the time to market, provided it gives the company a competitive edge. A cost/benefit analysis should always reflect the customer's business perspective and goals. So what are we supposed to measure? The answer is not easy to find when it comes to test automation. An important starting point is to know what your objectives are and to measure the attributes related to them. Which attributes of test automation can we measure? Listed below are some of the attributes that can be measured.

Maintainability
Definition: The effort needed to update the test automation suites for each new release.
Possible measurements: For example, the average work effort in hours needed to update a test suite.

Reliability
Definition: The accuracy and repeatability of your test automation.
Possible measurements: The number of times a test failed due to defects in the tests or in the test scripts.

Flexibility
Definition: The ease of working with all the different kinds of automation testware.
Possible measurements: The time and effort needed to identify, locate, restore, combine and execute the different test automation testware.

Efficiency
Definition: The total cost related to the effort needed for the automation.
Possible measurements: Monitoring over time the total cost of automated testing, i.e. resources, material, etc.

Portability
Definition: The ability of the automated tests to run on different environments.
Possible measurements: The effort and time needed to set up and run test automation in a new environment.

Robustness
Definition: The effectiveness of automation on an unstable or rapidly changing system.
Possible measurements: The number of tests failed due to unexpected events.

Usability
Definition: The extent to which automation can be used by different types of users (developers, non-technical people, other users, etc.).
Possible measurements: The time needed to train users to become confident and productive with test automation.

Measurements may be quite different from project to project and one cannot know what is best unless one has clearly understood the objectives of the project. For example, for software that is regularly changing, with frequent releases on many platforms, the important attributes will be ease of maintaining the tests and - of course - portability.

Figure 3. A highly maintainable, portable and usable automation suite.

Figure 4. Example of a highly flexible, robust but less usable and portable automation suite

Even though several attributes of automation can be measured, measuring them all is not really necessary. Begin measuring the few that are really important depending on the goals you want to achieve. A few things to measure could be e.g. average time to automate a test case, total effort or time spent on maintaining the test suite, number of tests run per cycle or number of cycles completed per release.
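As a small illustration of this kind of bookkeeping, the sketch below (Python, with hypothetical figures and field names) derives two of the measurements just mentioned from simple per-release records:

    # Hypothetical per-release records: hours spent maintaining the suite,
    # number of automated tests run and number of test cycles completed.
    releases = [
        {"release": "1.0", "maintenance_hours": 40, "tests_run": 1200, "cycles": 3},
        {"release": "1.1", "maintenance_hours": 24, "tests_run": 1350, "cycles": 3},
        {"release": "2.0", "maintenance_hours": 60, "tests_run": 1500, "cycles": 4},
    ]

    avg_maintenance = sum(r["maintenance_hours"] for r in releases) / len(releases)
    tests_per_cycle = sum(r["tests_run"] for r in releases) / sum(r["cycles"] for r in releases)

    print(f"average maintenance effort per release: {avg_maintenance:.1f} hours")
    print(f"average number of tests run per cycle:  {tests_per_cycle:.0f}")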

The Triangular Concept of Communication in Test Automation


As a professional tester, you know that not everything always goes smoothly on your project: the application isn't delivered on time, defects block testing, people are insufficiently trained, personnel are unmotivated, etc. People tend to take these problems into account when setting up project milestones in their test plan, but one aspect is often overlooked: a detailed description of how the major parties involved in the project will COMMUNICATE. Below we will investigate the relationship between three parties: users, developers and testers. Setting up a proper communication channel may take some time, but many of the problems mentioned can be solved or even avoided.

Figure 1. Basic Triangular Communication Channel (seems obvious but often overlooked).

A tester is always in the midst of problems. Let's have a look at a few examples.

Test Automation Engineer - Developer: Take a scenario where a new version of a web-based application is released for testing. Running your test automation scripts, you notice that object names have been altered without prior notification. Because you will have quite a maintenance task updating your scripts, a conflict may arise. If the developers had informed the automation team, maintenance could have been done before the release (at a time when practically everybody is just waiting to start testing).

User (Business) - Automation Tester: Imagine a complex application where users (business experts) have developed test designs coached by a few professional testers. The idea behind this approach is to obtain good-quality test designs by mixing business knowledge with expert use of testing techniques. Next to manual execution, the test designs will be used to automate the testing. The latter makes the situation more complex when no standard procedures and templates are defined for designing the tests. Hence, when design and test data problems demand a solution, regular communication is not enough: both teams (manual and automation) should work together as one.

User (Business) - Developer: This concerns technical versus business knowledge. These people often find it hard to communicate because they have different views on the application during development. Even though the defined business requirements should be the common ground for communication, both parties often misinterpret them. A tester's task should be to ensure and verify that users and developers have a common understanding of the requirements and to make sure the application becomes what it should be. This is not always an easy job.

I think most of you will admit to having encountered some of these situations. So what should you do about it? Being aware of the importance of good communication between these groups is already a big step forward. Start the communication channel as soon as possible in order to avoid problems; once bad habits have set in (i.e. no proper communication channels), it can be a tough job to change them. Have regular meetings with developers (depending on the release schedule) regarding application alterations, and make sure you receive all necessary information about the regularly planned changes in the application. The idea is to develop your automation scripts based on the development life cycle. If there is no real planning of the development life cycle, it becomes difficult for an automation team to predict changes, causing additional script maintenance effort. Make sure that users (manual testers) and the test automation team use the same test designs to run tests and that you have a standard way of filling out data. All agreements and procedures should be well documented so that new people can easily get the picture. Ensure regular communication in order to synchronise the needs of both teams. At the start of the project, both users and developers should understand and agree upon well-defined business requirements for the system about to be developed. If the business rules are not yet defined, a lot of misunderstandings may arise at a later stage (knowing that developers are working on a very technical level). Make sure to have one person (plus a backup) in each team who serves as the main contact point. This will make things a lot easier in case of problems: if that person cannot answer your question immediately, he can consult his team in order to provide an answer.

"Key-phrase" Approach to Test Automation

Definition: A key-phrase approach is an application-independent automation framework designed to process our tests. The tests are developed using a key-phrase vocabulary that describes the business processes of the AUT (Application Under Test) and that is independent of the test automation tool used to execute them.

Why: The advantage of this approach is that it enables us to develop automated tests that are business driven, without worrying about the technicality that lies beneath them. Testers can develop their tests independently of the technical scripts needed to run them. They will be more concerned with what the test case does instead of how it does it. Tests become more descriptive, more business related and more independent of the test tools and their scripting languages.

Figure 1. Pure separation of the business from the technical.

More and more test cases can be run without the need to increase the number of scripts: the number of scripts becomes proportional to the size of the AUT rather than to the number of tests. Tests can be developed and maintained independently of the test tool or platform. Thousands of tests can be run without the necessity of adding more scripts, reducing maintenance costs substantially. In addition, non-technical personnel can easily use the approach, as they do not need to deal with the scripts and can concentrate mostly on the test scenarios.

How: To implement the key-phrase approach it is important that the business requirements and functional specification are defined, ready and documented. One should plan enough time to analyse and understand what the system is supposed to do and how it is supposed to work in order to develop the key-phrases. They will be the basis for developing the test designs or scenarios in the next phase, especially the prologue, epilogue, navigation and validation key-phrases and the test data necessary for formulating test designs. Key-phrases should purely reflect the business processes of the AUT and should be readable, understandable, meaningful and consistent, as they are meant to describe what and how the AUT is supposed to function.
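To make this concrete, a key-phrase test design for a hypothetical insurance quotation process might look like the listing below (purely an illustration, not an example taken from the article):

    Prologue   | Login          | broker01, secret
    Navigation | OpenScreen     | NewQuotation
    Action     | EnterCustomer  | name=Smith, birthdate=1970-05-12
    Action     | SelectProduct  | CarInsurance
    Validation | CheckPremium   | expected=325.50
    Epilogue   | Logout         |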

Figure 2. The process of key-phrase analysis and development.

During the execution process, all the necessary interpretation from the key-phrase (business) level into the technical level is handled by a single control script which is generic in nature. Furthermore, all the automation testware needed can be developed outside this control script. This level of separation of the business from the automation scripts does require a layer of technical implementation, but it is advantageous in the long run and, once implemented, it does not require a high maintenance effort.
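A minimal sketch of such a generic control script is shown below, written in Python purely for illustration; in practice it would live in the test tool's own scripting language, and all key-phrase and function names here are hypothetical:

    # Generic control script sketch: read key-phrase rows and dispatch each one
    # to a technical implementation function.
    def do_login(user, password):
        print(f"login as {user}")

    def do_open_screen(screen):
        print(f"open screen {screen}")

    def do_enter_customer(*fields):
        print(f"enter customer data: {fields}")

    def do_select_product(product):
        print(f"select product {product}")

    def do_check_premium(expected):
        print(f"check that premium equals {expected}")

    def do_logout():
        print("logout")

    # The key-phrase vocabulary: business wording mapped onto technical testware.
    KEYPHRASES = {
        "Login": do_login,
        "OpenScreen": do_open_screen,
        "EnterCustomer": do_enter_customer,
        "SelectProduct": do_select_product,
        "CheckPremium": do_check_premium,
        "Logout": do_logout,
    }

    def run_design(path):
        """Execute a pipe-separated key-phrase design: category | key-phrase | arguments."""
        with open(path) as f:
            for line in f:
                if not line.strip():
                    continue
                _category, keyphrase, args = [part.strip() for part in line.split("|")]
                KEYPHRASES[keyphrase](*[a.strip() for a in args.split(",") if a.strip()])

    if __name__ == "__main__":
        run_design("new_quotation_design.txt")

The point of this design is that adding a new business operation means adding one function and one vocabulary entry, not new test scripts.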

Figure 3. An overview of the core low-level key-phrase engine.

The key-phrase approach to automation is not about automation scripts or testing tools; it is a concept that tries to bring structured testing and structured test automation together into one coherent process. It is a methodology that forces us to structure not only our automation but also our testing process, independently of whatever test tool we use. It is commonly known that test automation is nothing but automating the testing process, and it is the quality of our testing process that will in the end determine the quality of our automation. Finally, we see more and more research and experiments being carried out worldwide on integrated approaches to test automation, and several exciting concepts are popping up. Hopefully test automation will continue to evolve until the "silver bullet" myth becomes a reality.

The Basics of Implementing Performance Testing


In this article I will try to highlight the major points that are important to remember when implementing performance testing. Please note that each of these points is just a guideline and should be worked out and tackled in detail depending on your specific situation.

1. Define your goals

- Is your goal to verify system capacity?
- Is your goal to verify performance requirements?
- Is your goal to verify the scalability of your system (preparing for growth and change)?
- Is your goal to determine optimal hardware/application configurations?
- Is your goal to find bottlenecks?
- Is your goal to gauge how well your products stack up against the competition? etc.

You can address all these objectives at once if you want, that is, if you have all the time and money. Prepare your strategy and plan for how you intend to tackle your performance objectives and how you intend to measure and test them. Define clearly what you want to measure and the pass/fail criteria (determining the measurement criteria for the performance tests is easier once you have determined what your performance test objectives are).

2. Formulate your performance requirements and identify the performance-critical business processes

- What are the business SLAs?
- Which business processes are performance critical?
- How many transactions are defined?
- How many transactions per second need to be processed?
- How many concurrent users, and how many users in total, is the application supposed to support?
- What are the profiles of your real users?
- What is the ratio between the different user profiles?
- What are the acceptable response times for the transactions?
- How many operations at what time of the day?
- What are the risks if the business process fails?
- Which protocols are supported? etc.

3. Develop performance test scenarios and define metrics and measurements

After you have defined WHAT your goals are, you will need to work out the details of HOW you intend to achieve them. You will need to formulate detailed test procedures and define the measurements that you need for each individual test case. For example, you might want to measure:

- response time as perceived by the end user
- system behaviour under load
- number of concurrent users
- application server performance
- web server performance
- database server performance
- middleware performance
- performance of network components, etc.

In addition you will need to describe, for example, how many users, which user profiles, at what time of the day, how many iterations, what think time and what ramp-up are needed to achieve the objective of your test cases. You will not only simulate the real user and system working situation, but you will also need to predict different 'what if' scenarios.

Finally, you might also want to select the appropriate server resource and other measurements, for example CPU usage, memory, disk traffic, swap in/out rate, incoming/outgoing packet rate, throughput, hits per second, etc., and know how to use them in your test cases and interpret your results correctly.

4. Script development, test execution and result analysis

Forget about doing your performance tests manually with a stopwatch! There are lots of performance testing tools out there. Ensure that you have gone through a proper testing tool evaluation and selection process before committing to any tool (see our previous articles on testing tool evaluation and selection). For example, your tool needs to simulate your real users correctly, support the required number of virtual user licences and also suit your organisation's system engineering environment. Unfortunately, cost plays a major role in most organisations when making decisions about testing tools, but a careful cost/benefit analysis should be done before coming to any conclusion. Script development depends very much on the tool selected for testing. Even though scripting languages differ from tool to tool, the common point of most performance testing tools is that they all try to simulate the behaviour of your real users. Do not forget that a tool is nothing but a means to help you achieve the objectives of your tests, so focus on your test scenarios. It is not executing a performance test that is difficult but interpreting the collected system performance metrics to pinpoint your performance bottlenecks. Therefore ensure that all your system and test tool parameters are correctly set up before executing your tests. This will guarantee that your results are interpreted correctly and your advice for fine-tuning the system will be much appreciated. Do not forget that overlooking a small configuration setting somewhere in your system or in your tool can sometimes make a huge difference in performance, causing unnecessary panic.
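Purely as an illustration of what a load-generation script does under the hood, here is a stripped-down sketch in Python; a real project would use a dedicated tool as discussed above, and the URL, user count, iteration count and think time are made-up values:

    # Stripped-down load generator: a number of virtual users repeatedly request one
    # URL, pause for a think time, and the script reports response-time statistics.
    import statistics
    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    URL = "http://testserver.example/app/login"   # hypothetical transaction
    VIRTUAL_USERS = 25
    ITERATIONS = 10
    THINK_TIME = 2.0  # seconds a virtual user waits between requests

    def virtual_user(user_id):
        timings = []
        for _ in range(ITERATIONS):
            start = time.perf_counter()
            try:
                urllib.request.urlopen(URL, timeout=30).read()
            except Exception:
                pass  # a real script would log the failed request
            timings.append(time.perf_counter() - start)
            time.sleep(THINK_TIME)
        return timings

    with ThreadPoolExecutor(max_workers=VIRTUAL_USERS) as pool:
        results = [t for per_user in pool.map(virtual_user, range(VIRTUAL_USERS)) for t in per_user]

    print(f"average response time: {statistics.mean(results):.2f} s")
    print(f"90th percentile:       {statistics.quantiles(results, n=10)[-1]:.2f} s")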

5. Types of performance tests

Even though different people give different definitions of performance testing, I usually group load, volume and stress testing under performance testing. The difference between these three types of performance tests lies in their objectives:

- Load testing: to verify that your system can sustain the requested number of users with acceptable response times.
- Volume testing: to determine the maximum number of concurrent users your system can manage without failure (to benchmark the maximum loads).
- Stress testing: to load your system over extended periods of time to validate its stability and reliability (e.g. memory leaks).

As each of the test types above has its own objectives, plan carefully which types of test you need to achieve your objective and which ones to execute first. Remember that some tests take longer to execute than others; some will load the system until it breaks, while others will load it only modestly, depending on what you intend to measure.

6. Test vs. production environment

The key requirement for a test environment is an accurate representation of the target production environment. Please keep in mind that accurate does not necessarily mean identical: sometimes it is costly and unrealistic to set up a test environment identical to production, especially with multi-tiered systems. Perform upgrades to the hardware, operating systems, database and application server software in the test environment before upgrading the production environment. After the test environment has been checked successfully, the upgrades to the production environment will be fast, with less downtime. Most performance problems are not due to software code but can be attributed to the set-up and configuration of the system architecture. Therefore, if surprises are to be avoided, try to set up and configure the system architecture of your test environment to represent your final production environment.

7. Define clearly the roles and responsibilities of all involved parties

Unlike other types of testing project, in performance testing you will be working VERY closely with different parties, among which are (depending on your organisation):

- system architecture experts
- software and middleware specialists
- database administrators
- network administrators
- mainframe specialists
- end users
- management, etc.

Planning and executing your performance tests in co-ordination with all these different parties is one of the toughest challenges you will face when implementing performance testing, especially when dealing with multi-tiered systems. Performance testing is team work: not only do you need to verify the versions, settings and configurations of the individual system components with the responsible parties, but your test results will also be the basis for the other teams to pinpoint the bottlenecks and tune the system. You will need to sit down together with the other teams to analyse and interpret results and discuss the next strategies. Therefore, appropriate communication and decision-making channels should be established to facilitate co-operation between the parties, because most of the time performance issues need immediate intervention and, above all, the active participation of all parties.

Give Me a Test Hook or Else


In case you don't know it: without automation, software testing is hopeless. The latest development tools and component libraries allow developers to build so much functionality so fast that it is literally impossible, actually inconceivable, to test the finished applications by hand. Most test budgets are measured in fractions of development, so it's not as if you can spend more time and people testing than you do developing. Of course it's easy for developers to automate unit testing; after all, they have control of the source. They can use debuggers, instrument their code, insert breakpoints, whatever. But if you have ever tried to automate black box testing, you quickly discover that test tools can't drive applications whose components aren't strictly vanilla. Custom controls, third-party components, complex objects within containers and just about any user-defined or modified object classes give test tools fits. They can't get the object names, let alone the methods and properties needed to interact with them. In my experience, more than half of all test automation time is spent, or rather wasted, trying to deal with these complications, and in too many cases automation fails completely.

But what is really inexcusable is that it doesn't have to be that way. Most test tools provide source code implants or DLL files that, when compiled into the code, give the tools access to the object names, methods and properties that are needed to do test automation. Adding this capability takes minutes, usually only a single line of code, but it can make the difference between automated and manual testing. So what's the problem? In my opinion, it's either ignorance, paranoia or pure laziness.

Ignorance, because a lot of companies think that if they compile in a test hook, then do the testing, then remove it for shipment, they have not really tested the production code. This is nonsense. True, the production code does not have the same hook as the tested version, but as long as the source is otherwise identical between the two compiles, the only difference is the test hook. The key is that these hooks don't do anything unless called. They are usually just a DLL that lets the tool inside the application's process space so it can see the objects. They only provide information; they don't alter or create it. So compiling the code without the hook should have zero effect on the application functionality. But if you just can't shake this superstition, then leave the hook in when you ship.

This is where paranoia comes in. Haven't you just created a security problem? After all, now someone could use that hook to spy on your software. To that I say: so what? Let's face it, the newer runtime-based languages (Java, .NET) are basically interpreted anyway. You can reverse the original source code out to the letter. So don't kid yourself that someone might be able to somehow peek into your software just because of a hook. Good grief, there are OS security holes right now that let strangers across the globe take complete control of your whole computer and everything it is attached to. If you are still really freaked out about it, then identify those objects that are high risk and make their methods and properties private. Even the hooks can't get into those. But for goodness' sake don't do it to any that are needed for interaction with user interface objects; since they are exposed to the user there can't be much to hide in the first place, and in the second place you'll cripple automation.

Which leads me to the last problem: laziness. I go nonlinear when I hear developers complain about the "effort" it takes to address automated testability. Last time I checked it takes only a few minutes to either add or remove lines of code, and since compilation is usually automated anyway it may take zero time after the initial setup. If you can't be bothered to invest a few precious minutes to save your company weeks or months of work, or to enable automation that could make a difference in orders of magnitude in quality or time to market, maybe you should just retire since you don't really want to work in the first place. But in the final analysis, the reason doesn't really matter. The real question is not whether you should provide a test hook for automation, it is why would there even be a controversy in the first place. Why would management even contemplate, let alone tolerate, applications requiring manual testing that increases costs while reducing quality? Why aren't developers required to deliver test hooks as a matter of course? If you think about it, it's completely crazy to even argue about it. The benefits are so undeniable and the risks are so debatable that there should not even be a question, let alone a war. Of course I could be wrong, but in 20 years of test automation I have never had or even heard of a test hook backfiring. Has anyone else?
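To illustrate the principle in the simplest possible terms, here is a toy sketch in Python (not the DLL mechanism the article describes, and every name in it is hypothetical): a hook that merely reads object names and properties, and that is only active when explicitly enabled.

    # Toy illustration of the test-hook principle: the hook only exposes object names
    # and property values when explicitly asked, and is gated behind a flag, so a build
    # that never enables it behaves identically.
    import os

    class OrderForm:
        def __init__(self):
            self.customer = ""
            self.total = 0.0

        def submit(self):
            return f"order for {self.customer}: {self.total}"

    def test_hook(obj):
        """Return the names and current values of an object's attributes.
        The hook only reads information; it never alters application behaviour."""
        return {name: getattr(obj, name) for name in vars(obj)}

    form = OrderForm()
    form.customer, form.total = "ACME", 99.95

    # Only a test build (or a test run) switches the hook on.
    if os.environ.get("ENABLE_TEST_HOOK") == "1":
        print(test_hook(form))   # {'customer': 'ACME', 'total': 99.95}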

Principles of Performance Testing


Picture this: you have convinced management of the need for performance testing your web site or multi-client client/server application. You even managed to get a budget. Now, how do you start? First of all, gather the right people to start an interdisciplinary task force. Since people from different departments will be working together, it's important to straighten out the roles and responsibilities. Defining your strategy starts by setting the goal. Performance testing is a wide term and can mean different things: you will have to choose between different types of performance testing in order to predict the real-world impact on your system (load testing, endurance testing, stress testing, ...). Once this is clear, you need to gather information on system usage. What different types of transactions (login, search, buy something, ...) exist, and which are critical from a performance point of view? In many cases, 10% of the transactions constitute 90% or more of the load on the system. How many users do you expect during peak time and what kind of user profiles do they have? Based on the information you have gathered, calculate the necessary workload (number of users) and determine the necessary duration and workload patterns. You also decide what kind of measurements are required; you will at least want to measure the typical performance characteristics: throughput, (perceived) response time and availability. By now, you have obtained the information needed to come to a well-founded decision on tool selection. The classic performance testing tool lets you generate a large number of virtual users, imitating real business processes. However, if you want to check the complete system architecture, you might need extra tools for database monitoring, log analysis, etc.

Once your tool is in place, the rest of the test environment needs to be set up before virtual users can start attacking your application. Make sure that the features of the entire infrastructure can be exercised exactly the same way as in production. One solution is to use a scaled-down version of the production system (e.g. 2 instead of 6 web servers) and extrapolate the results.

The execution phase is iterative: test a little, tune a little. Save the crucial measurements as a benchmark in order to compare and check whether system adjustments have had an impact. After each test run, the log files, reports and graphs should provide enough input to detect the system's bottleneck. Isolation testing can be done on the specific module or system component where you think the bottleneck resides. Try to define the minimum test needed to reproduce the defect. Your job is not over once the web site is up and running. Especially for web sites, monitoring in production is still required. Monitoring allows you to check that the tolerance levels for the response times of business process transactions are not violated.
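Returning to the workload calculation mentioned above, here is a rough worked example with made-up figures; it uses the common Little's Law approximation, which is an assumption on my part rather than something prescribed by the article:

    # Rough sizing of the number of virtual users needed to generate a target load,
    # using Little's Law: users = throughput * (response time + think time).
    peak_transactions_per_hour = 18_000                       # hypothetical business figure
    target_throughput = peak_transactions_per_hour / 3600.0   # = 5 transactions per second
    expected_response_time = 2.0                              # seconds per transaction
    think_time = 8.0                                          # seconds a user pauses between actions

    virtual_users = target_throughput * (expected_response_time + think_time)
    print(f"virtual users needed: {virtual_users:.0f}")       # -> 50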

Performance Fine-Tuning - A Case Study


In previous articles, we explained the need for network monitoring in order to find bottlenecks in an application. Now we take it a step further and look at a case of application fine-tuning and elimination of bottlenecks. Performance fine-tuning is the activity of increasing the performance of software, while capacity planning is deciding what hardware to purchase to fulfil a given role. We were testing an application that would enable brokers to manage insurance applications online. In brief, the main functionality had already been approved by the acceptance test team, but when we were called in the application was still facing a huge performance problem. From the start we emphasised that, before starting tests with multiple users, all errors for a single user needed to disappear from the logs. When for weeks in a row no satisfactory improvements were made, we finally got their attention. We introduced a benchmark: the cumulative response time of the different screens accessed when running a standard business process with a single user or multiple users. After a year, this benchmark was still being obtained while generating multiple errors. The performance test team entered the project at a time when the application was in such an advanced stage of development that everyone had by then realised that performance was becoming the issue. Running a single user consumed up to 50 percent of the application server's CPU resources. As long as this situation persisted, the application evidently had problems running more than three users. As a result, several architectural changes were implemented and the development team fixed the (Java) code on the application server. After several patches and new releases had been installed, CPU usage decreased to below 25 percent for a single user with a single CPU on the application server. Error removal further decreased CPU usage; however, the application still had problems running more than 25 users concurrently, while it was designed for about 250 concurrent users. The code and architectural changes for performance took such a big part of the budget that the final delivery could no longer be postponed, no matter how good or bad the application performed. The deadline had already been shifted by almost a year, so no more major delays could be allowed.

What I am trying to show here is that part of this debacle could have been avoided by setting up a performance strategy from day 1. The deadline would not have been crossed - and definitely not by a whole year - if we had been there from the start of the project to deploy a performance strategy at the highest level: analysing business processes and tuning the rules of the business, tuning the application design and the structure of and access to the database, analysing components and testing at integration level. During all these steps, effective performance analysis should be done, based on a proven methodology; but first of all, you need to understand your environment.

Typical web performance problems and their causes:
A. Long response time from the end users' point of view
B. Long response time as measured by the servers
C. Memory leaks
D. High CPU usage
E. Too many open connections between the application and end users
F. Lengthy queues for end-user requests
G. Too many table scans of the database
H. Database deadlocks
I. Erroneous data returned
J. HTTP errors

Application of Statistics in Load Testing


Performance testers mostly use common sense to determine how many tests to run. In most cases that's quite alright. But when response time figures have to be presented to management, even good testers have the urge to run far too many tests just to be sure. So, instead of wasting valuable time, it is more appropriate to base the number of tests on some statistical theory. By doing this, you can clearly motivate the number of iterations you run and maybe give your conscience a rest. Imagine the following. You are willing to assume your measurements are distributed normally around a mean. You are at ease with using the standard deviation of a benchmark test (50 iterations minimum) as the real standard deviation, and you feel comfortable letting the error be a function of the standard deviation. In this situation, the answer is nothing more than straightforward: use between 10 and 25 samples. In the case of 10 samples, you have 90 percent certainty (confidence level = 90) that your real transaction average is located between the sample average minus half of the standard deviation and the sample average plus half of the standard deviation. If you want 95 percent certainty, you need to run 15 tests, and for 99 percent, 27 iterations.

When you are looking for an absolute error that does not depend on the standard deviation, things get a bit more complicated. A presentation can seem clearer when the error is expressed as a function of the mean. If this is the case, you should run a different number of tests for each transaction, because their standard deviations will differ. This can make sense for some of us, but in the end the result can be too high for some transactions when the standard deviation increases.

The underlying formula is:

    E = Z(α/2) * σ / √n,   hence   n = (Z(α/2) * σ / E)²

E is the allowed error you are willing to accept and corresponds to the half-width of the confidence interval; Z(α/2) is the critical value, which can be found in the Gauss table for the normal distribution; and σ is the standard deviation. Solve the equation for n and, when you have found n, round up to the next whole number. So, in practice we use both approaches: the error as a function of the standard deviation and as a function of the mean. Of course the results can always be manipulated based on the error we are willing to accept. We wanted the real mean to lie between the sample mean plus and minus half of the standard deviation with a confidence level of 99; used in the formula above (in our situation), that meant 25 tests had to be run. Requiring the real mean to lie between the sample mean plus and minus a tenth of the mean for the main business process resulted in 28 tests to be run. I suggested using this number of iterations only in a final stage, not during fine-tuning. During fine-tuning we used a confidence level of 90, which resulted in 11 and 12 runs respectively; we chose 10, corresponding to an 88% certainty with a relatively small error (0.5σ and 0.1μ). To determine whether or not your population is normal, you could do a χ² goodness-of-fit test for the normal distribution, but that is a different topic. What you should do to increase the chances of having good samples and benchmark tests is to remove extreme values and start from there.

Transaction     | n (E=5 s) | n (E=2 s) | n (E=0.5σ) | n (E=0.1σ) | n (E=0.1μ) | Z(α/2) | σ       | μ
Transaction 1   | 5         | 31        | 27         | 664        | 100        | 2.575  | 00:04.3 | 00:11.1
Transaction 2   | 4         | 25        | 27         | 664        | 95         | 2.575  | 00:03.8 | 00:10.1
Transaction 3   | 1         | 1         | 27         | 664        | 46         | 2.575  | 00:00.5 | 00:01.9
Transaction 4   | 1         | 5         | 27         | 664        | 22         | 2.575  | 00:01.6 | 00:08.8
Whole Contract  | 12        | 71        | 27         | 664        | 28         | 2.575  | 00:06.5 | 00:31.9

Table 1: Illustration of the above analysis for a confidence level of 99 (Z(α/2) = 2.575).
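For those who prefer to compute n directly, a small helper using only the Python standard library reproduces the Transaction 1 figures from the table (the σ and μ values are the ones shown above):

    # Compute the required number of iterations n from E = Z(a/2) * sigma / sqrt(n),
    # i.e. n = (Z(a/2) * sigma / E) ** 2, rounded up to the next whole number.
    from math import ceil
    from statistics import NormalDist

    def samples_needed(sigma, error, confidence=0.99):
        z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)   # z = 2.575 for 99 percent
        return ceil((z * sigma / error) ** 2)

    sigma, mean = 4.3, 11.1                          # Transaction 1: sigma and mean in seconds
    print(samples_needed(sigma, error=5.0))          # E = 5 s        -> 5
    print(samples_needed(sigma, error=2.0))          # E = 2 s        -> 31
    print(samples_needed(sigma, error=0.5 * sigma))  # E = 0.5*sigma  -> 27
    print(samples_needed(sigma, error=0.1 * mean))   # E = 0.1*mean   -> 100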

Figure 2. Example of test results used for the above analysis.

Measure Twice, Cut Once


It is an old carpenter's rule: measure twice, cut once. But it applies just as much when you cut your first code for test scripts for automated load and stress testing. In place of the carpenter's trusty set square and tape measure, we now have a host of open-source test ware that can help concentrate your testing effort where it is most needed. In this workshop, we shall analyse static log data prior to the creation of LoadRunner scripts.

Our scenario is that of a major upgrade of a pre-existing application. The application uses a web-based architecture and is a good fit for load testing using the HTTP recording mode of LoadRunner. On the server side, we have Apache, which by default provides a standard log file (see Figure 1).

Figure 1. Apache log file.

A prerequisite of this analysis technique is to capture log files that provide a good representation of typical usage data, and this is entirely application-specific. The capture period may be hours, days, weeks or longer. It may be that the usage reflects different types of users: readers, people who update, and administrators. In all cases, the scope should match the testing requirements.

Stage 1 - Import

As log files are generally unwieldy, the first hop of our journey is to import them into a relational format. We used a Perl script to parse the Apache logs and to import the data into a MySQL database (see Figure 2). Perl and MySQL are both open-source tools, but there is nothing to prevent the same technique being adopted in Visual Basic with Oracle, Python with Access, or PL/I or COBOL with DB2.

Figure 2. Populate a SQL database with log file data.

On Windows NT and Windows 2000, there are APIs that can access the system logs, which are not held in plain text format as they are on other platforms; these can still be used to populate a relational database.
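By way of illustration, the same import step could be done with Python and SQLite instead of Perl and MySQL; the log file name, table layout and regular expression below are illustrative, not taken from the project described:

    # Parse an Apache access log (common log format) and load it into SQLite,
    # then take a first look at the instruction mix (queries vs. updates).
    import re
    import sqlite3

    LOG_LINE = re.compile(
        r'(?P<host>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] "(?P<method>\S+) (?P<url>\S+)[^"]*" '
        r'(?P<status>\d{3}) (?P<size>\S+)'
    )

    db = sqlite3.connect("usage.db")
    db.execute("CREATE TABLE IF NOT EXISTS hits (host TEXT, time TEXT, method TEXT, url TEXT, status INTEGER)")

    with open("access.log") as log:
        for line in log:
            m = LOG_LINE.match(line)
            if m:
                db.execute("INSERT INTO hits VALUES (?, ?, ?, ?, ?)",
                           (m["host"], m["time"], m["method"], m["url"], int(m["status"])))
    db.commit()

    for pattern, label in (("%query%", "queries"), ("%update%", "updates")):
        count, = db.execute("SELECT COUNT(*) FROM hits WHERE url LIKE ?", (pattern,)).fetchone()
        print(label, count)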

Stage 2 - Extraction

Now that our usage data is stored in a uniform format, we can begin extraction and analysis. There are a number of tools to use for this:
1) Perl or other scripting languages: write custom programs to extract the data and combine it.
2) Excel: use the Data menu to create queries on the data.
3) GUI database tools like TOAD (for Oracle).

Figure 3. Querying the data from a Perl script.

What we are looking for are items such as the proportion of updates to queries. If, for instance, the log captures a URL such as 'http://www.myapp.com/query.pl?since=56' for queries since 56 days ago and 'http://www.myapp.com/update.pl?cust_id=109?item_id=2077?quantity=6' for updates, then queries need to be formulated to search out the 'query' and 'update' strings and count them.

Stage 3 - Additional analysis

There is a good opportunity here to do additional analysis. Suppose our application has logged 1200 updates and 8000 queries, but the log contains 10000 rows: there are 800 rows unaccounted for. Always check these. They may represent attempted hacks or application errors: who knows? In any case, they are worth knowing about. These results are used to provide the profile of the instruction mix for the load generation scripts. Note that the "Profile" page is displayed by default whenever the user first logs in and therefore accounts for a high percentage of the total even though users do not specifically request it.

Stage 4 - Reporting and using the data

Now that we have our usage data, when we create a pair of scenarios in LoadRunner (scenarios to exercise the application), we can easily write a driver routine as follows. This driver will ensure that the testing of the application respects known usage.
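The original driver listing is not reproduced in this extract; below is a sketch of what such a routine might look like, using a weighted random choice so that the generated load matches the measured instruction mix (the proportions follow the 8000/1200/800 example above, and the scenario names are hypothetical):

    # Driver sketch: pick the next scenario according to the measured usage mix,
    # so that the generated load respects known usage.
    import random

    USAGE_MIX = {"query_scenario": 0.80, "update_scenario": 0.12, "other_scenario": 0.08}

    def next_scenario():
        """Return the name of the scenario a virtual user should run next."""
        names = list(USAGE_MIX)
        return random.choices(names, weights=list(USAGE_MIX.values()), k=1)[0]

    # Each virtual user would call next_scenario() before every iteration, so that over
    # time roughly 80% of iterations exercise queries, 12% updates and 8% the rest.
    print([next_scenario() for _ in range(10)])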

Figure 4. Consolidated usage profile presented in a report.

Conclusion
Static analysis of log data from a hosting environment provides a useful way to determine how best to test an application, both for performance and, potentially, for correctness or security. Open-source tools can cater for log analysis at little cost, provided that your team has the requisite skills and tools; these are often the ones they use day in, day out, or else "weekend" skills. The time spent is easily recouped by the time saved waiting for seldom-needed tests to complete their execution. Subsequent testing can reuse the methodology, thereby amortising the additional cost over successive deliveries and maintaining quality at nominal cost. Encourage your developers to make full use of the logging and tracing infrastructure: it means fewer proprietary developments to support.

Overview on Testing Tools Evaluation and Selection Process


You're planning to develop software and you want to ensure good quality. Testing it thoroughly is the thing to do. If management approves the proposal to automate part of the manual testing, which tool would you select? Before deciding which tool to buy, broaden your perspective: don't just think about the project you are responsible for, but inform yourself about future projects in the organisation. Ideally, the test tool should fit both the criteria of the organisation's system engineering projects and the needs of a pilot project that is still in a very preliminary stage. In practice, most projects are already in the system design phase when the matter of test tool selection is raised. Once management has given the green light for automated testing, a detailed list of test tool requirements should be developed, based on the general needs combined with the specific needs of the project at hand. If you want to gain broad support, include management, project staff and end-users' expectations in the evaluation criteria. Questions to ask are: How will the tool be used within the organisation? Will other groups and departments use the tool? What is the most important function of the tool, and what is the least important? How will the tool mainly be used? How portable must the tool be? Do not forget to include the expectations of the (possibly already existing) test team itself. As in most cases a single tool will not live up to all organisational test tool interests and requirements, it should at least meet the more immediate requirements. The next step is to define the criteria for a tool evaluation environment, based upon an analysis of the system and software architectures available in your organisation. You then identify the test tool types (e.g. for regression testing, volume or stress testing, usability testing) that might apply to the particular project; for each phase in the testing life cycle, there is a tool that can support you. Once identification is finished, determine a list of possible test tools. These tools should then be screened using the pilot project environment, resulting in a detailed evaluation report. The report contains the results of the different tools measured against the evaluation criteria. If you want to evaluate one or more test tools, it is better to first test the tool in an isolated test environment and then apply it to a pilot project. Before doing this, however, let the tool vendor demonstrate how the tool operates.

Together with the evaluation of the tool, one has to evaluate the structure of the testing team: do you foresee a centralised test team or do you opt for a distributed test team? A centralised test team often prefers a more powerful automated test tool with great flexibility, programming language capabilities and growth potential. A decentralised organisation will be better served by a user-friendly tool, minimising the cost and time associated with learning how to use it. Choosing the tool is one thing; identifying the people who will perform the evaluation of the test tool is also critical. Evaluating and choosing an automated test tool is not a simple thing to do. In a big organisation, take sufficient time to scrutinise the tool in order to be sure that it can be used on other projects. If there isn't any tool that really meets the evaluation criteria, do not decide to buy one anyway, because by doing so you can lose a lot of credibility and endanger future test automation projects.

"For the most part, testers have been testers, not programmers. Consequently, the simple commercial solutions have been far too complex to implement and maintain; and they become shelfware."

"Highly repeatable testing can actually minimize the chance of discovering all the important problems, for the same reason that stepping in someone else's footprints minimizes the chance of being blown up by a land mine."


"A tool is only as good as the process being used to implement the tool. How a tool is implemented and used is what really matters"

"If software defects were gold, then software testing would be gold mining. Test planning would be the geologist's surveys and preparation activities done before mining takes place: The geologist establishes a plan and strategy, increasing the probablity that digging at particular spots using a particular strategy would prove to be successful." "Why is there so much involved with automation? The answer is straightforward: because its difficult. Decisions about which tool(s) to use, how to architect and implement a test suite, and who will do the work are complicated. Then there are software design considerations, supporting scripts necessary to run the automation, source control, and library construction. Add to that the complexity of managing a large automation suite, continued maintenance of scripts, and adding to the test suite with new functionalityone can easily be left with the looming question, "Is this really going to be worth it?" Software testing is like fishing, but you get paid.
