Automated software testing is nothing but automating the existing process of manual software testing. It requires a solid testing infrastructure and a thoughtful software testing life cycle, both supported and valued by management. Test automation is not obviously the right thing to do without first answering why, what, when and how to automate. Moreover, it is an expensive process, contrary to what test tool vendors would like us to believe: it can take three to ten times longer to develop an automated test suite than to create and execute manual test cases. The costs of test automation include personnel to support it in the long term, a dedicated test environment, and the purchase, development and maintenance of tools. The benefit of test automation comes only from running the automated tests on every subsequent release, and after carefully making a cost/benefit analysis beforehand. Test automation can provide valuable assistance if it is done by the right people, in the right environment, and where it makes sense to do so. It is an addition to your testing process; it does not replace manual testing, nor does it enable you to downsize your testing department.
Be aware of the importance of good communication between users, developers and testers, and open the communication channel as soon as possible. Gather information from the developers about regularly planned changes in the application, and develop your automation scripts based on the development life cycle. Have users (manual testers) and the test automation team use the same test designs to run tests, and agree on a standard way of filling out data. All agreements and procedures should be well documented so that new people get the picture easily. Avoid misunderstandings at a later stage by making sure both users and developers understand and agree upon well-defined business requirements for the system about to be developed. Appoint one person (plus a backup) on each team to serve as the main contact.
Why automate? Test automation can: improve test coverage during regression testing; prevent previously found defects from reappearing in new releases; speed up testing to accelerate releases; reduce the cost of testing; ensure consistency; improve the reliability of testing; let testers focus on test depth instead of repetitive work; and allow tests to be run over and over again with less overhead.
When should we automate? Developers can nowadays produce code faster, and with more complexity, than ever before. Advances in code generation tools and code reuse make it difficult for testers to keep up with software development. Test automation, especially if applied only at the end of the testing cycle, will not be able to keep up with these advances; automating at the early stages is therefore the best strategy. Like structured software testing, test automation has its own planning, design, testing and implementation phases. The best approach is to integrate and synchronise these phases of test automation with the Implementation Model of structured software testing.
What is Automation? Automated software testing means automating the existing process of manual software testing, which implies that a structured manual software testing process already exists. Automation is not an island unto itself: it requires a solid testing infrastructure and a thoughtful software testing life cycle, both supported and valued by management. Automation is often the fantasy of software testers, who are usually under pressure to do more testing in less time. Manual testing, especially its labour-intensive tasks, is unappealing, so automation seems to be the solution that will make the job simpler and help meet unrealistic schedules. There is still a common myth in the testing community that test automation tools alone can solve our software testing problems. Many people assume that test automation is obviously the right thing to do and do not bother to state what they hope to get from it, which includes answering why, what, when and how to automate. Contrary to what test tool vendors would like us to believe, automated testing is an expensive process: studies show that it can take three to ten times longer to develop an automated test suite than to create and execute manual test cases. The costs of test automation include personnel to support it in the long term, a dedicated test environment, and the purchase, development and maintenance of tools. The benefits of test automation come only from running the automated tests on every subsequent release, and after carefully making a cost/benefit analysis beforehand, that is, after making an informed decision about what is best for your situation. Test automation can add a lot of complexity and cost to a test team's effort, and problems such as unrealistic expectations, poor testing practices, a false sense of security, maintenance costs, and other technical and organisational issues might arise.
But it can also provide valuable assistance if it is done by the right people, in the right environment, and where it makes sense to do so. Test automation has its limitations. It does not replace manual testing, which will find more defects than automated testing. As a matter of fact, automated testing is an addition to your testing process and does not enable you to downsize your testing department.
Firstly, ensure that a structured software testing process is already in place; if not, concentrate on implementing one in parallel. Without a good testing methodology, test automation will never succeed. During the planning phase, specify the goals, scope, organisation, milestone plan, budget, test environment, and roles and responsibilities for your automation. This preliminary stage also involves the evaluation, selection and training of automated testing tools. Defect tracking procedures and the defect workflow should also be defined and documented. In general, define the strategy for how you intend to implement and maintain your test automation process. Plan to achieve small successes and grow steadily: it is better to make a small investment and see the effort it really takes before trying to automate the whole regression test suite. It is important to implement a strategy that keeps maintenance costs to a minimum. Maintenance costs are usually more significant for automation than for manual tests, and if test scripts are not maintained for reuse, test automation will have little value. Below you will find a typical process for implementing a structured test automation framework.
First, investigate whether a structured testing process is already in place; if not, try implementing one in parallel with automation. Before even starting to think about automation, a structured testing process should at least exist. If it does, investigate whether automation is really the solution to your problem (taking into account, for instance, the number of planned releases and test cycles). You will only benefit from automation if more than two cycles or releases are planned, which is mostly the case in practice. Acquiring management's commitment is also crucial before going ahead with the automation process: their expectations towards automated testing must be realistic, and they must be aware of the costs of introducing the tool and of allocating appropriate staff and resources. After acquiring management commitment, a Test Automation Assessment report should be prepared to obtain backing for the budget and the resources needed.
However, reducing costs does not necessarily mean a shorter time to market. A high investment can be worthwhile just to reduce the time to market, provided it gives the company a competitive edge. A cost/benefit analysis should always reflect the customer's business perspective and goals. So what are we supposed to measure? The answer is not easy to find when it comes to test automation. An important starting point is to know what your objectives are and to measure the attributes related to them. Which attributes of test automation can we measure? Listed below are some of them: Maintainability Definition: The effort needed to update the test automation suites for each new release. Possible measurements: For example, the average work effort in hours to update a test suite.
Reliability Definition: The accuracy and repeatability of your test automation. Possible measurements: The number of times a test failed due to defects in the tests or in the test scripts.
Flexibility Definition: The ease of working with all the different kinds of automation testware. Possible measurements: The time and effort needed to identify, locate, restore, combine and execute the different test automation testware.
Efficiency Definition: The total cost related to the effort needed for the automation. Possible measurements: Monitoring over time the total cost of automated testing, i.e. resources, material, etc.
Portability Definition: The ability of the automated tests to run on different environments. Possible measurements: The effort and time needed to set up and run test automation in a new environment.
Robustness Definition: The effectiveness of automation on an unstable or rapidly changing system. Possible measurements: The number of tests failed due to unexpected events.
Usability Definition: The extent to which automation can be used by different types of users (developers, non-technical people, etc.). Possible measurements: The time needed to train users to become confident and productive with test automation.
Measurements may be quite different from project to project and one cannot know what is best unless one has clearly understood the objectives of the project. For example, for software that is regularly changing, with frequent releases on many platforms, the important attributes will be ease of maintaining the tests and - of course - portability.
Figure 4. Example of a highly flexible, robust but less usable and portable automation suite
Even though several attributes of automation can be measured, measuring them all is not really necessary. Begin by measuring the few that really matter for the goals you want to achieve, e.g. the average time to automate a test case, the total effort or time spent maintaining the test suite, the number of tests run per cycle, or the number of cycles completed per release.
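Tracking such metrics does not need tooling support to get started. The sketch below computes the example measurements just mentioned from recorded effort figures; the sample numbers are invented for illustration and would come from your own time tracking in practice.

```python
# Minimal sketch of computing a few test automation metrics per release.
# The sample figures are illustrative assumptions, not real project data.
from statistics import mean

# hours spent automating each new test case in this release (sample data)
automation_hours = [1.5, 2.0, 0.75, 3.0]
# hours spent maintaining the existing suite, per test cycle (sample data)
maintenance_hours = [4.0, 2.5, 3.5]
# number of automated tests executed in each cycle (sample data)
tests_run_per_cycle = [120, 118, 125]

print(f"avg time to automate a test case: {mean(automation_hours):.2f} h")
print(f"total maintenance effort:         {sum(maintenance_hours):.1f} h")
print(f"avg tests run per cycle:          {mean(tests_run_per_cycle):.0f}")
```

Plotting these figures over several releases is what makes them useful: a steadily rising maintenance total, for instance, is an early warning that the suite is not being maintained for reuse.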
Figure 1. Basic Triangular Communication Channel (seems obvious but often overlooked). A tester is always in the midst of problems; let's look at a few examples. Test automation engineer vs. developer: Take a scenario where a new version of a web-based application is released for testing. Running your test automation scripts, you notice that object names have been altered without prior notification. Because you now face quite a maintenance task updating your scripts, a conflict may arise. If the developers had informed the automation team, the maintenance could have been done before the release (at a time when practically everybody is just waiting to start testing).
User (business) vs. automation tester: Imagine a complex application where users (business experts) have developed test designs coached by a few professional testers. The idea behind this approach is to get good-quality test designs by mixing business knowledge with expert use of testing techniques. Next to manual execution, the test designs will be used to automate the testing. The latter becomes more complex when no standard procedures and templates are defined for designing the tests. Hence, when design and test data problems demand a solution, regular communication is not enough: both teams (manual and automation) should work together as one. User (business) vs. developer: This concerns technical versus business knowledge. These people often find it hard to communicate because they have different views of the application during development. Even though the defined business requirements should be the common ground for communication, both parties usually misinterpret them. A tester's task should be to ensure and
verify that users and developers have a common understanding of the requirements, and to make sure the application becomes what it should be. This is not always an easy job. Most of you will admit to having encountered some of these situations. So what should you do about it? Being aware of the importance of good communication between these groups is already a big step forward. Start the communication channel as soon as possible in order to avoid problems; once certain processes have been set up without proper communication channels, it can be a tough job to change them. Hold regular meetings with developers (depending on the release schedule) about application alterations, and make sure you receive all necessary information about the regularly planned changes in the application. The idea here is to develop your automation scripts based on the development life cycle. If there is no real planning of the development life cycle, it becomes difficult for an automation team to predict changes, causing additional script maintenance effort. Make sure that users (manual testers) and the test automation team use the same test designs to run tests and that you have a standard way of filling out data. All agreements and procedures should be well documented so that new people can easily get the picture. Ensure regular communication in order to synchronise the needs of both teams. At the start of the project, both users and developers should understand and agree upon well-defined business requirements for the system about to be developed. If the business rules are not yet defined, a lot of misunderstandings may arise at a later stage (since developers work at a very technical level). Finally, have one person (plus a backup) in each team who serves as the main contact point. This will make things a lot easier in case of problems.
If that person cannot answer your question immediately, he or she can consult the team in order to provide an answer.
Definition: An application-independent automation framework designed to process our tests. The tests are developed using a key-phrase vocabulary that describes the business processes of the AUT (Application Under Test) and that is independent of the test automation tool used to execute them. Why: The advantage of this approach is that it enables us to develop automated tests that are business-driven, without worrying about the technicality that lies beneath them. Testers can develop their tests independently of the technical scripts needed to run them; they are concerned with what the test case does instead of how it does it. Tests become more descriptive, more business-related and more independent of the test tools and their scripting languages.
Figure 1. Pure separation of the business from the technical. More and more test cases can be run without the need to increase the number of scripts; the number of scripts becomes proportional to the size of the AUT rather than to the number of tests. Tests can be developed and maintained independently of the test tool or platform, and thousands of tests can be run without the need to add more scripts, reducing maintenance costs substantially. In addition, non-technical personnel can easily work with it, as they need not deal with the scripts and can concentrate on the test scenarios. How: To implement the key-phrase approach, the business requirements and functional specification must be defined, ready and documented. Plan enough time to analyse and understand what the system is supposed to do, and how, in order to develop the key-phrases. They will be the basis for developing the test designs or scenarios in the next phase, especially the prologue, epilogue, navigation and validation key-phrases and the test data needed to formulate test designs. Key-phrases should purely reflect the business processes of the AUT and should be readable, understandable, meaningful and consistent, as they are meant to describe what the AUT is supposed to do and how it is supposed to function.
Figure 2. The process of key-phrase analysis and development. During the execution process, all the necessary interpretation from the key-phrase (business) level into the technical level is handled by a single control script, which is generic in nature. Furthermore, all the automation testware needed can be developed outside this control script. This level of separation of business from the automation script requires a layer of technical implementation, but is advantageous in the long run; in addition, once implemented it does not require a high maintenance effort.
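The structure described above can be sketched in a few lines. The key-phrase names ("login", "create_order", "check_balance") and the test design below are invented for illustration; a real vocabulary would mirror the AUT's actual business processes, and the technical layer would drive a real test tool rather than append to a list.

```python
# Sketch of a key-phrase (keyword-driven) engine.

# Business layer: a test design as business-readable rows of
# (key-phrase, arguments). Testers work only at this level.
test_design = [
    ("login", {"user": "alice", "password": "secret"}),
    ("create_order", {"item": "widget", "qty": 3}),
    ("check_balance", {"expected": 100}),
]

# Technical layer: one small function per key-phrase. Only this layer
# would talk to the test tool / AUT; here it just records what it did.
results = []

def login(user, password):
    results.append(f"logged in as {user}")

def create_order(item, qty):
    results.append(f"ordered {qty} x {item}")

def check_balance(expected):
    results.append(f"balance checked against {expected}")

KEYPHRASES = {"login": login, "create_order": create_order,
              "check_balance": check_balance}

def control_script(design):
    """Generic control script: interprets key-phrases, knows no business logic."""
    for phrase, args in design:
        KEYPHRASES[phrase](**args)

control_script(test_design)
print(results)
```

Note that adding a thousand more rows to `test_design` requires no new scripting at all, which is exactly the maintenance advantage the approach claims: scripts grow with the AUT, not with the number of tests.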
Figure 3. An overview of the core low-level key-phrase engine. The key-phrase approach to automation is not about automation scripts or testing tools; it is a concept that tries to bring structured testing and structured test automation into one coherent process. It is a methodology that forces us to structure not only our automation but also our testing processes, independent of any test tool we use. It is commonly known that test automation is nothing but automating the testing process, and it is the quality of our testing process that will in the end determine the quality of our automation. Finally, more and more research and experiments on integrated approaches to test automation are being carried out worldwide, and several exciting concepts are popping up. Hopefully test automation will continue to evolve until the "silver bullet" myth becomes a reality.
Is your goal to verify performance requirements? To verify the scalability of your system (preparing for growth and change)? To determine optimal hardware/application configurations? To find bottlenecks? To gauge how well your product stacks up against the competition?
You could address all these objectives at once, if you have the time and money. Prepare your strategy and plan for how you intend to tackle your performance objectives and how you intend to measure and test them. Define clearly what you want to measure and the pass/fail criteria (determining the measurement criteria for the performance tests is easier once you have determined your performance test objectives). 2. Formulate your performance requirements and identify the performance-critical business processes. What are the business SLAs? Which business processes are performance-critical? How many transactions are defined? How many transactions per second need to be processed? How many concurrent users, and how many users in total, is the application supposed to support? What are the profiles of your real users? What is the ratio between the different user profiles? What are the acceptable response times for the transactions? How many operations at what time of the day? What are the risks if a business process fails? Which protocols are supported?
3. Develop performance test scenarios and define metrics and measurements. After you have defined WHAT your goals are, you need to work out the details of HOW you intend to achieve them. Formulate detailed test procedures and define the measurements you need for each individual test case. For example, you might want to measure: end-user response time; system behaviour under load; the number of concurrent users; application server performance; web server performance; database server performance; middleware performance; the performance of network components.
In addition, you will need to describe, for example, how many users, which user profiles, at what time of day, how many iterations, what think time, and how to ramp up, in order to achieve the objectives of your test cases. You will not only simulate the real user and system working situation, but also need to predict different 'what if' scenarios.
Finally, you might also want to select the appropriate server resource and other measurements, such as CPU usage, memory, disk traffic, swap-in/out rate, incoming/outgoing packet rate, throughput or hits per second, and know how to use them in your test cases and interpret your results correctly. 4. Script development, test execution and result analysis. Forget about doing your performance tests manually with a stopwatch! There are lots of performance testing tools out there. Make sure you have gone through a proper testing tool evaluation and selection process before committing to any tool (see our previous articles on testing tool evaluation and selection). For example, your tool needs to simulate your real users correctly, support the required number of virtual user licences, and suit your organisation's system engineering environment. Unfortunately, cost plays a major role in most organisations when deciding on testing tools, but a careful cost/benefit analysis should be done before coming to any conclusion. Script development depends largely on the tool selected. Even though scripting languages differ from tool to tool, the common feature of most performance testing tools is that they all try to simulate the behaviour of your real users. Do not forget that a tool is nothing but a means to help you achieve the objectives of your tests, so focus on your test scenario. It is not executing performance tests that is difficult, but interpreting the collected system performance metrics to pinpoint your performance bottlenecks. Therefore, make sure that all your system and test tool parameters are correctly set up before executing your tests. This will ensure that your results are interpreted correctly and that your advice for fine-tuning the system is well founded.
Do not forget that overlooking a small configuration setting somewhere in your system or your tool can make a huge difference in performance, causing unnecessary panic.
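What a load-testing tool does underneath the scenario parameters discussed above (number of users, iterations, think time, response-time collection) can be sketched as follows. The transaction here is a stand-in simulated with `time.sleep`; a real test would issue requests against the AUT, and a real tool would add proper ramp-up, pacing and reporting.

```python
# Minimal load-test sketch: N virtual users each run a transaction
# repeatedly with think time, and response times are collected.
import random
import threading
import time
from statistics import mean

response_times = []
lock = threading.Lock()

def transaction():
    """Stand-in for one business transaction against the AUT."""
    time.sleep(random.uniform(0.01, 0.05))  # simulated server work

def virtual_user(iterations, think_time):
    for _ in range(iterations):
        start = time.perf_counter()
        transaction()
        elapsed = time.perf_counter() - start
        with lock:
            response_times.append(elapsed)
        time.sleep(think_time)  # think time between transactions

# 10 concurrent virtual users, 5 iterations each
users = [threading.Thread(target=virtual_user, args=(5, 0.01))
         for _ in range(10)]
for u in users:
    u.start()
for u in users:
    u.join()

print(f"{len(response_times)} transactions, "
      f"avg response {mean(response_times) * 1000:.1f} ms, "
      f"max {max(response_times) * 1000:.1f} ms")
```

Even this toy version shows why parameter choices matter: doubling the user count or shrinking the think time changes the load profile entirely, which is exactly what the 'what if' scenarios mentioned above are meant to explore.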
5. Types of performance tests. Different people give different definitions of performance testing; I usually put load, volume and stress testing under it. The difference between these three types of performance tests lies in their objectives: Load testing: to verify that your system can sustain a requested number of users with acceptable response times. Volume testing: to determine the maximum number of concurrent users your system can manage without failure (to benchmark the maximum loads). Stress testing: to load-test your system over extended periods of time to validate its stability and reliability (e.g. to find memory leaks).
As each of the test types above has its own objectives, plan carefully which types of tests you need to achieve your objective and which ones to execute first. Remember that some tests take longer to execute than others; some will load the system until it breaks, while others will load it only modestly, depending on what you intend to measure. 6. Test vs. production environment. The key requirement for a test environment is the accurate representation of the target production environment. Please keep in mind that accurate does not
necessarily mean identical. Sometimes it can be costly and unrealistic to set up a test environment identical to production, especially with multi-tiered systems. Perform upgrades to the hardware, operating systems, database and application server software in the test environment before upgrading the production environment. After the test environment has been checked successfully, the upgrades to the production environment will be fast, with less downtime. Most performance problems are not due to software code but can be attributed to the set-up and configuration of the system architecture. Therefore, if surprises are to be avoided, try to set up and configure the system architecture of your test environment to represent your final production environment.
7. Define clearly the roles and responsibilities of all involved parties. Unlike other types of testing projects, in performance testing you will be working VERY closely with different parties, among which (depending on your organisation): system architecture experts, software and middleware specialists, database administrators, network administrators, mainframe specialists, end users and management.
Planning and executing your performance tests in coordination with all these parties is one of the toughest challenges you will face when implementing performance testing, especially when dealing with multi-tiered systems. Performance testing is teamwork: not only do you need to verify the versions, settings and configurations of your individual system components with the responsible parties, but your test results will be the basis for the other teams to pinpoint the bottlenecks and tune the system. You will need to sit down with the other teams to analyse and interpret results and discuss the next strategies. Therefore, appropriate communication and decision-making channels should be established to facilitate cooperation between the parties, because most of the time performance issues need immediate intervention and, above all, the active participation of all the parties.
whose components aren't strictly vanilla. Custom controls, third-party components, complex objects within containers and just about any user-defined or modified object classes give test tools fits. They can't get the object names, let alone the methods and properties needed to interact with them. In my experience, more than half of all test automation time is spent (wasted, really) trying to deal with these complications, and in too many cases automation fails completely. But what is really inexcusable is that it doesn't have to be that way. Most test tools provide source code implants or DLL files that, when compiled into the code, give the tools access to the object names, methods and properties needed to do test automation. Adding this capability takes minutes (usually only a single line of code) but it can make the difference between automated and manual testing. So what's the problem? In my opinion, it's either ignorance, paranoia or pure laziness. Ignorance, because a lot of companies think that if they compile in a test hook, do the testing, then remove the hook for shipment, they have not really tested the production code. This is nonsense. True, the production code does not have the same hook as the tested version, but as long as the source is otherwise identical between the two compiles, the only difference is the test hook. The key is that these hooks don't do anything unless called. They are usually just a DLL that lets the tool inside the application's process space so it can see the objects. They only provide information; they don't alter or create it. So compiling the code without the hook should have zero effect on the application's functionality. But if you just can't shake this superstition, then leave the hook in when you ship. This is where paranoia comes in. Haven't you just created a security problem? After all, now someone could use that hook to spy on your software. To that I say: so what? Let's face it.
The newer runtime-based languages (Java, .NET) are basically interpreted anyway; you can reverse the original source code out to the letter. So don't kid yourself that someone might somehow peek into your software just because of a hook. Good grief, there are OS security holes right now that let strangers across the globe take complete control of your whole computer and everything it is attached to. If you are still really worried about it, then identify those objects that are high risk and make their methods and properties private; even the hooks can't get into those. But for goodness' sake don't do it to any that are needed for interaction with user interface objects: since they are exposed to the user there can't be much to hide in the first place, and in the second place you'll cripple automation.
Which leads me to the last problem: laziness. I go nonlinear when I hear developers complain about the "effort" it takes to address automated testability. Last time I checked, it takes only a few minutes to add or remove lines of code, and since compilation is usually automated anyway, it may take zero time after the initial setup. If you can't be bothered to invest a few precious minutes to save your company weeks or months of work, or to enable automation that could make orders-of-magnitude differences in quality or time to market, maybe you should just retire, since you don't really want to work in the first place. But in the final analysis, the reason doesn't really matter. The real question is not whether you should provide a test hook for automation; it is why there would even be a controversy in the first place. Why would management even contemplate, let alone tolerate, applications requiring manual testing that increases costs while reducing quality? Why aren't developers required to deliver test hooks as a matter of course? If you think about it, it's completely crazy to even argue about it. The benefits are so undeniable and the risks so debatable that there should not even be a question, let alone a war. Of course I could be wrong, but in 20 years of test automation I have never had, or even heard of, a test hook backfiring. Has anyone else?
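The "does nothing unless called" property of a test hook can be illustrated with a toy sketch. Everything here is hypothetical (the `AUTOMATION_HOOK` environment variable, the `OrderForm` object); compiled languages would typically gate the hook with a conditional build step or an optional DLL rather than an environment check, but the principle is the same: the hook is read-only and inert unless explicitly invoked.

```python
# Sketch of an automation hook that exposes object names/properties to a
# test tool only when explicitly enabled; otherwise it is never called.
import os

class OrderForm:
    """A hypothetical UI object in the application under test."""
    def __init__(self):
        self.submit_button = "btnSubmit"
        self.total_field = "txtTotal"

def automation_hook(obj):
    """Expose an object's names and properties to a test tool. Read-only:
    it reports what exists but never alters or creates anything."""
    return {name: value for name, value in vars(obj).items()}

form = OrderForm()
if os.environ.get("AUTOMATION_HOOK") == "1":
    print(automation_hook(form))  # the test tool can now see the objects
# With the variable unset, the hook is dead code: the application
# behaves exactly as a build without the hook would.
```

This is the whole argument in miniature: shipping with or without the gated hook changes nothing about the application's behaviour, which is why the controversy the article describes is hard to justify.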
real business processes. However, if you want to check the complete system architecture, you might need extra tools for database monitoring, log analysis and so on. Once your tool is in place, the rest of the test environment needs to be set up before the virtual users can start attacking your application. Make sure that the features of the entire infrastructure can be exercised exactly the same way as in production. One solution is to use a scaled-down version of the production system (e.g. 2 instead of 6 web servers) and extrapolate the results. The execution phase is iterative: test a little, tune a little. Save the crucial measurements as a benchmark in order to compare and check whether the system adjustments have had an impact. After each test run, the log files, reports and graphs should provide enough input to detect the system's bottleneck. Isolation testing can be done on the specific module or system component where you think the bottleneck resides; try to define the minimum test needed to reproduce the defect. Your job is not over once the web site is up and running: especially for web sites, monitoring in production is still required. Monitoring allows you to check that the tolerance levels for the response times of business process transactions are not violated.
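Such a tolerance check is straightforward to automate. The sketch below flags transactions whose measured response times exceed an assumed tolerance level; the transaction names, times and the 3-second threshold are all invented sample data, and in practice the measurements would come from log files or a monitoring agent.

```python
# Sketch of checking production response times against a tolerance level.
TOLERANCE_SECONDS = 3.0  # assumed SLA for business transactions

# measured response times per business transaction (sample data)
measurements = {
    "login": [0.8, 1.1, 0.9],
    "search": [2.5, 3.4, 2.9],   # one sample over tolerance
    "checkout": [1.9, 2.2, 2.0],
}

# collect, per transaction, the samples that violate the tolerance
violations = {tx: [t for t in times if t > TOLERANCE_SECONDS]
              for tx, times in measurements.items()}
violations = {tx: v for tx, v in violations.items() if v}
print(violations)
```

A real monitor would of course run continuously and alert on violations rather than print them, but the comparison at its core is exactly this.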
performed. The deadline had already shifted by almost a year, so no more major delays could be allowed. What I am trying to show here is that part of this debacle could have been avoided by setting up a performance strategy from day one. The deadline would not have been crossed, and definitely not by a whole year, if we had been there from the start of the project to deploy a performance strategy at the highest level: analysing business processes and tuning the rules of the business, tuning the application design and the structure of and access to the database, analysing components and testing at integration level. During all these steps, effective performance analysis should be done, based on a proven methodology; but first of all, you need to understand your environment. Typical web performance problems and their causes: A. Long response time from the end-users' point of view B. Long response time as measured by the servers C. Memory leaks D. High CPU usage E. Too many open connections between the application and end-users F. Lengthy queues for end-user requests G. Too many table scans of the database H. Database deadlocks I. Erroneous data returned J. HTTP errors
the standard deviation and the sample average plus half of the standard deviation. If you want 95 percent certainty, you need to run 15 tests; for 99 percent, 27 iterations. When you are looking for an absolute error that does not depend on the standard deviation, things get a bit more complicated. A presentation can seem clearer when the error is expressed as a function of the mean. In that case, you should run a different number of tests for each transaction, because their standard deviations will differ. This can make sense, but the resulting number of runs can become very high for transactions with a large standard deviation.
E = z_{α/2} · σ / √n
E is the allowed error you are willing to accept and corresponds to half the width of the confidence interval; σ is the standard deviation of the transaction's response time. z_{α/2} is the critical value and can be found in the Gauss table for the normal distribution. Solve the equation for n and, when you have found n, round up to the next whole number. So, practically, we use both approaches: the error as a function of the standard deviation and as a function of the mean. Of course, the results can always be manipulated through the error we are willing to accept. We wanted the real mean to lie between the sample mean plus and minus half of the standard deviation, with a confidence level of 99%. Using that in the formula above meant, in our situation, that 27 tests had to be run. Requiring the real mean to lie between the sample mean plus and minus a tenth of the mean for the main business process resulted in 28 tests. I suggested using this number of iterations only in the final stage, not during fine-tuning. There we used a confidence level of 90%, which resulted in 11 and 12 runs respectively. We chose 10, corresponding to roughly 88% certainty with a relatively small error (0.5σ and 0.1μ). To determine whether or not your population is normal, you could do a χ²-test (goodness of fit) for the normal distribution, but that is a different topic. What you should do to increase the chances of having good samples and benchmark tests is to remove extreme values and start from there.

Number of test runs per transaction for various allowed errors E (z_{α/2} = 2.575, i.e. 99% confidence):

Transaction (σ)     E = 5 sec   E = 2 sec   E = 0.5σ   E = 0.1σ   E = 0.1μ
1 (σ = 4.3 s)           5           31          27         664        100
2 (σ = 3.8 s)           4           25          27         664         95
3 (σ = 0.5 s)           1            1          27         664         46
4 (σ = 1.6 s)           1            5          27         664         22
5 (σ = 6.5 s)          12           71          27         664         28
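The sample-size calculation can be sketched in a few lines of Python. The function simply solves the formula above for n and rounds up; the example values use z = 2.575 (99% confidence) and the per-transaction standard deviations listed in the table.

```python
import math

def sample_size(z, sigma, error):
    """Number of test runs needed: n = (z * sigma / E)^2, rounded up."""
    return math.ceil((z * sigma / error) ** 2)

Z_99 = 2.575  # critical value z_{alpha/2} for a 99% confidence level

# Absolute error of 5 seconds for a transaction with sigma = 4.3 s:
print(sample_size(Z_99, 4.3, 5))        # 5 runs

# Error of half the standard deviation (E = 0.5*sigma): sigma cancels
# out, so the answer is the same for every transaction.
print(sample_size(Z_99, 1.0, 0.5))      # 27 runs

# Error of a tenth of the standard deviation (E = 0.1*sigma):
print(sample_size(Z_99, 1.0, 0.1))      # 664 runs
```

Note how sharply the required number of runs grows as the allowed error shrinks, which is why the tighter criteria are reserved for the final stage rather than for fine-tuning.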
Our scenario is that of a major upgrade of a pre-existing application. The application uses a web-based architecture and is a good fit for load testing using the HTTP recording mode of LoadRunner. On the server side, we have Apache, which by default provides a standard log file (see figure 1).
Figure 1. Apache log file.

A pre-requisite of this analysis technique is to capture log files that provide a good representation of typical usage data, and this is entirely application-specific. The capture period may be hours, days, weeks or longer. It may be that the usage reflects different types of users: readers, people who update, and administrators. In all cases, the scope should match the testing requirements.

Stage 1 - Import

As log files are generally unwieldy, the first hop of our journey is to import them into a relational format. We used a Perl script to parse the Apache logs and to import the data into a MySQL database (see figure 2). Perl and MySQL are both open-source tools, but there is nothing to prevent the same technique being adopted in Visual Basic with Oracle, Python with Access, or PL/I or COBOL with DB2.
Figure 2. Populate a SQL database with logfile data. On Windows NT and Windows 2000, there are APIs that can access the system logs, which are not held in plain text format as they are on other platforms: these can still be used to populate a relational database.
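The article's stage 1 used a Perl script feeding MySQL; as a minimal sketch of the same idea, here is a Python version that parses common-format Apache log lines and loads them into SQLite. The regular expression, table layout and sample lines are illustrative assumptions, not the article's actual script.

```python
import re
import sqlite3

# Apache "common" log format: host ident user [timestamp] "request" status bytes
LOG_LINE = re.compile(
    r'^(?P<host>\S+) \S+ \S+ \[(?P<ts>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<url>\S+) [^"]*" (?P<status>\d{3}) (?P<bytes>\S+)'
)

def import_log(lines, db):
    """Parse raw log lines and store the fields in a relational table."""
    db.execute("""CREATE TABLE IF NOT EXISTS access_log
                  (host TEXT, ts TEXT, method TEXT, url TEXT,
                   status INTEGER, bytes INTEGER)""")
    for line in lines:
        m = LOG_LINE.match(line)
        if m:  # silently skip malformed lines in this sketch
            db.execute(
                "INSERT INTO access_log VALUES (?,?,?,?,?,?)",
                (m["host"], m["ts"], m["method"], m["url"],
                 int(m["status"]),
                 int(m["bytes"]) if m["bytes"].isdigit() else 0))
    db.commit()

sample = [
    '10.0.0.1 - - [01/Jan/2004:10:00:00 +0000] "GET /query.pl?since=56 HTTP/1.0" 200 1234',
    '10.0.0.2 - - [01/Jan/2004:10:00:05 +0000] "GET /update.pl?cust_id=109 HTTP/1.0" 200 512',
]
conn = sqlite3.connect(":memory:")
import_log(sample, conn)
print(conn.execute("SELECT COUNT(*) FROM access_log").fetchone()[0])  # 2
```

An in-memory SQLite database keeps the sketch self-contained; against a real MySQL instance only the connection call would differ.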
Stage 2 - Extraction

Now that our usage data is stored in a uniform form, we can begin extraction and analysis. There are a number of tools to use for this:
1) Perl or other scripting languages. Write custom programs to extract and combine the data.
2) Excel. Use the Data menu to create queries on the data.
3) GUI database tools such as TOAD (for Oracle).
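Once the data sits in a relational table, this kind of counting can be expressed directly in SQL. A small sketch follows, with SQLite standing in for the database; the table name, column name and sample URLs are assumptions for illustration.

```python
import sqlite3

# Hypothetical access_log table as produced by a stage-1 import; the
# URL column is classified by the 'query' and 'update' markers.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE access_log (url TEXT)")
conn.executemany("INSERT INTO access_log VALUES (?)", [
    ("/query.pl?since=56",),
    ("/query.pl?since=7",),
    ("/update.pl?cust_id=109&item_id=2077&quantity=6",),
    ("/favicon.ico",),   # neither query nor update: worth investigating
])

counts = dict(conn.execute("""
    SELECT CASE
             WHEN url LIKE '%query%'  THEN 'query'
             WHEN url LIKE '%update%' THEN 'update'
             ELSE 'other'
           END AS kind,
           COUNT(*)
    FROM access_log
    GROUP BY kind
""").fetchall())
print(counts["query"], counts["update"], counts["other"])  # 2 1 1
```

The 'other' bucket is exactly the kind of unaccounted-for residue that deserves a closer look.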
Figure 3. Querying the data from a Perl script.

What we are looking for are items such as the proportion of updates to queries. If, for instance, the log captures a URL such as 'http://www.myapp.com/query.pl?since=56' for queries since 56 days ago, and 'http://www.myapp.com/update.pl?cust_id=109&item_id=2077&quantity=6' for updates, then queries need to be formulated to search out the 'query' and 'update' strings and count them.

Stage 3 - Additional analysis

There is a good opportunity here to do additional analysis. Suppose our application has logged 1200 updates and 8000 queries, but the log contains 10000 rows. There are 800 rows unaccounted for. Always check these. They may represent attempted hacks or application errors: who knows? In any case, they are worth knowing about. These results are used to provide the profile of the instruction mix for the load generation scripts. Note that the "Profile" page is displayed by default whenever a user first logs in and therefore accounts for a high percentage of the total even though users do not specifically request it.

Stage 4 - Reporting and using the data

Now that we have our usage data, when we create a pair of scenarios in LoadRunner (scenarios to exercise the application), we can easily write a driver routine. This driver will ensure that the transactions are generated in the same proportions as those observed in the production logs.
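A driver of this kind might look like the following Python sketch. LoadRunner scripts are normally written in its own scripting language, so this is only an illustration of the weighting logic; the function names are hypothetical and the mix figures reuse the 8000/1200/800 split from the example above.

```python
import random

# Observed instruction mix from the log analysis (illustrative figures):
# 8000 queries, 1200 updates, 800 other rows.
MIX = {"query": 8000, "update": 1200, "other": 800}

def pick_scenario(mix, rng=random):
    """Pick the next virtual-user action with probability proportional
    to its observed frequency in the production logs."""
    names = list(mix)
    weights = [mix[n] for n in names]
    return rng.choices(names, weights=weights, k=1)[0]

# Each virtual-user iteration would run the scenario returned here, so
# the generated load matches the observed usage profile.
sample = [pick_scenario(MIX) for _ in range(10000)]
print(sample.count("query") / len(sample))   # approximately 0.8
```

Over many iterations the simulated mix converges on the production mix, which is the whole point of grounding the load profile in real log data.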
Conclusion
Static analysis of log data from a hosting environment provides a useful way to determine how best to test an application, both for performance and, potentially, for correctness or security. Open-source tools can cater for log analysis at little outlay, provided that your team has the requisite skills and tools; these are often the ones they use day in, day out, or are "weekend" skills. The amount of time spent is easily recouped by that saved in waiting for seldom-needed tests to complete their execution. Subsequent testing can reuse the methodology, thereby amortising the additional cost over successive deliveries and maintaining quality at a nominal cost. Encourage your developers to make full use of the logging and tracing infrastructure: it means fewer proprietary developments to support.
Together with the evaluation of the tool, one has to evaluate the structure of the testing team: do you foresee a centralised test team, or do you opt for a distributed test team? A centralised test team often prefers a more powerful automated test tool with great flexibility, programming-language capabilities and growth potential. A decentralised organisation will be better served by a user-friendly tool, minimising the cost and time associated with learning how to use it. Choosing the tool is one thing; identifying the people who will perform the evaluation of the test tool is also critical. Evaluating and choosing an automated test tool is not a simple thing to do. In a big organisation, take sufficient time to scrutinise the tool in order to be sure that it can be used on other projects. If there isn't any tool that really meets the evaluation criteria, do not decide to buy one anyway, because by doing so you can lose a lot of credibility and endanger future test automation projects.

"For the most part, testers have been testers, not programmers. Consequently, the simple commercial solutions have been far too complex to implement and maintain; and they become shelfware."

"Highly repeatable testing can actually minimize the chance of discovering all the important problems, for the same reason that stepping in someone else's footprints minimizes the chance of being blown up by a land mine."
Effective Software Testing: 50 Specific Ways to Improve Your Testing Automated Software Testing: Introduction, Management, and Performance
Testing Applications on the Web: Test Planning for Internet-Based Systems Managing the Testing Process: Practical Tools and Techniques for Managing Hardware and Software Testing, 2nd Edition
Rapid Testing
Automated Web Testing Toolkit: Expert Methods for Testing and Managing Web Applications
"A tool is only as good as the process being used to implement the tool. How a tool is implemented and used is what really matters"
"If software defects were gold, then software testing would be gold mining. Test planning would be the geologist's surveys and preparation activities done before mining takes place: The geologist establishes a plan and strategy, increasing the probablity that digging at particular spots using a particular strategy would prove to be successful." "Why is there so much involved with automation? The answer is straightforward: because its difficult. Decisions about which tool(s) to use, how to architect and implement a test suite, and who will do the work are complicated. Then there are software design considerations, supporting scripts necessary to run the automation, source control, and library construction. Add to that the complexity of managing a large automation suite, continued maintenance of scripts, and adding to the test suite with new functionalityone can easily be left with the looming question, "Is this really going to be worth it?" Software testing is like fishing, but you get paid.