If there are no requirements, how will you write your test plan?

If there are no requirements, we try to gather as many details as possible from:

  • Business Analysts
  • Developers (if accessible)
  • Previous version documentation (if any)
  • Stakeholders (if accessible)
  • Prototypes

How to calculate sessions per hour in performance testing? - Little's Law

Calculating sessions per hour: For this discussion, we define a session as the total time for a user to finish one complete set of transactions. We may wish to know the number of sessions that will be completed for any given number of virtual users.

Example 1: If a baseline test shows that a user type takes a total of 120 seconds for a session, then in an hour-long steady-state test this user type should be able to complete 3600 / 120 = 30 sessions per hour. Twenty of these users will complete 20 x 30 = 600 of these sessions in an hour. In other cases, we have a set number of sessions we want to complete during the test and want to determine the number of virtual users to start.

Example 2: Using the same conditions as in our first example, if our target rate is 500 sessions per hour, then 500 / 30 = 16.7, or 17 virtual users. A formula called Little's Law states this calculation of virtual users in slightly different terms.
Using Little's Law with Example 2:

V.U. = R x D
where R = transaction rate (sessions per unit time) and
D = duration of the session.
If our target rate is 500 sessions per hour (0.139 sessions/sec) and our duration is 120 seconds, then Virtual Users = 0.139 x 120 = 16.7, or 17 virtual users.

Case Studies – Identifying Performance-testing Objectives

Case Study 1
Scenario

A 40-year-old financial services company with 3,000 employees is implementing its annual Enterprise Resource Planning (ERP) software upgrade, including new production hardware. The last upgrade resulted in disappointing performance and many months of tuning during production.


Performance Objectives

The performance-testing effort was based on the following overall performance objectives:
Ensure that the new production hardware is no slower than the previous release.
Determine configuration settings for the new production hardware.
Tune customizations. 

Performance Budget/Constraints

The following budget limitations constrained the performance-testing effort:
No server should have sustained processor utilization above 80 percent under any anticipated load. (Threshold)
No single requested report is permitted to lock more than 20 MB of RAM and 15-percent processor utilization on the Data Cube Server.
No combination of requested reports is permitted to lock more than 100 MB of RAM and 50-percent processor utilization on the Data Cube Server at one time. 

Performance-Testing Objectives

The following priority objectives focused the performance testing:
Verify that there is no performance degradation over the previous release.
Verify the ideal configuration for the application in terms of response time, throughput, and resource utilization.
Resolve the existing performance inadequacies of the Data Cube Server.

Questions
The following questions helped to determine relevant testing objectives:
  1. What is the reason for deciding to test performance? 
  2. In terms of performance, what issues concern you most in relation to the upgrade? 
  3. Why are you concerned about the Data Cube Server? 
Case Study 2
Scenario

A financial institution with 4,000 users distributed among the central headquarters and several branch offices is experiencing performance problems with business applications that deal with loan processing.
Six major business operations have been affected by problems related to slowness as well as high resource consumption and error rates identified by the company’s IT group. The consumption issue is due to high processor usage in the database, while the errors are related to database queries with exceptions.

Performance Objectives
The performance-testing effort was based on the following overall performance objectives:
  • The system must support all users in the central headquarters and branch offices who use the system during peak business hours. 
  • The system must meet backup duration requirements within the minimal possible timeframe. 
  • Database queries should be optimal, resulting in processor utilization no higher than 50-75 percent during normal and peak business activity. 
Performance Budget/Constraints

The following budget limitations constrained the performance-testing effort:
  • No server should have sustained processor utilization above 75 percent under any anticipated load (normal and peak) when users in headquarters and branch offices are using the system. (Threshold) 
  • When system backups are being performed, the response times of business operations should not exceed, by more than 8 percent, the response times experienced when no backup is being done. 
  • Response times for all business operations during normal and peak load should not exceed 6 seconds. 
  • No errors that may result in the loss of user-submitted loan applications are allowable during transaction activity in the database. 
Performance-Testing Objectives

The following priority objectives focused the performance testing:
  • Help to optimize the loan-processing applications to ensure that the system meets stated business requirements. 
  • Test for 100-percent coverage of all six business processes affected by the loan-processing applications. 
  • Target database queries that were confirmed to be extremely sub-optimal, with improper hints and nested sub-query hashing. 
  • Help to remove superfluous database queries in order to minimize transactional cost. 
  • Tests should monitor for relevant component metrics: end-user response time, error rate, database transactions per second, and overall processor, memory, network, and disk status for the database server. 
Questions
The following questions helped to determine relevant testing objectives:
  1. What is the reason for deciding to test performance? 
  2. In terms of performance, what issues concern you most in relation to the queries that may be causing processor bottlenecks and transactional errors? 
  3. What business cases related to the queries might be causing processor and transactional errors? 
  4. What database backup operations might affect performance during business operations? 
  5. What are the timeframes for backup procedures that might affect business operations, and what are the most critical scenarios involved in that timeframe? 
  6. How many users are there, and where are they located (headquarters, branch offices) during times of critical business operations? 

These questions helped performance testers identify the most important concerns in order to help prioritize testing efforts. The questions also helped determine what information to include in conversations and reports.

Case Study 3
Scenario

A Web site is responsible for conducting online surveys with 2 million users in a one-hour timeframe. The site infrastructure was built with wide area network (WAN) links all over the world. The site administrators want to test the site’s performance to ensure that it can sustain 2 million user visits in one hour. 

Performance Objectives

The performance-testing effort was based on the following overall performance objectives:
The Web site must be able to support a peak load of 2 million user visits in a one-hour timeframe.
Survey submissions should not be compromised due to application errors.

Performance Budget/Constraints

The following budget limitations constrained the performance-testing effort:
No server can have sustained processor utilization above 75 percent under any anticipated load (normal and peak) during submission of surveys (2 million at peak load).
Response times for all survey submissions must not exceed 8 seconds during normal and peak loads.
No survey submissions can be lost due to application errors. 

Performance-Testing Objectives

The following priority objectives focused the performance testing:
  • Simulate one user transaction scripted with 2 million total virtual users in one hour distributed among two datacenters, with 1 million active users at each data center. 
  • Simulate the peak load of 2 million user visits in a one-hour period. 
  • Test for 100-percent coverage of all survey types. 
  • Monitor for relevant component metrics: end-user response time, error rate, database transactions per second, and overall processor, memory, network and disk status for the database server. 
  • Test the error rate to determine the reliability metrics of the survey system. 
  • Test by using firewall and load-balancing configurations.

Questions


The following questions helped to determine relevant testing objectives:
  1. What is the reason for deciding to test performance? 
  2. In terms of performance, what issues concern you most in relation to survey submissions that might cause data loss or user abandonment due to slow response time? 
  3. What types of submissions need to be simulated for surveys related to business requirements? 
  4. Where are the users located geographically when submitting the surveys?

Test Strategy Vs Test Planning

Test Strategy:

A Test Strategy document is a high-level document, normally developed by the project manager. This document defines the "Testing Approach" to achieve testing objectives. The Test Strategy is normally derived from the Business Requirement Specification (BRS) document.

The Test Strategy document is a static document, meaning that it is not updated too often. It sets the standards for testing processes and activities, and other documents such as the Test Plan draw their contents from those standards set in the Test Strategy document.
Some companies include the "Test Approach" or "Strategy" inside the Test Plan, which is fine and is usually the case for small projects. However, for larger projects, there is one Test Strategy document and a number of Test Plans, one for each phase or level of testing.

Components of the Test Strategy document :
  • Scope and Objectives 
  • Business issues 
  • Roles and responsibilities 
  • Communication and status reporting 
  • Test deliverables 
  • Industry standards to follow 
  • Test automation and tools 
  • Testing measurements and metrics 
  • Risks and mitigation 
  • Defect reporting and tracking 
  • Change and configuration management 
  • Training plan 

Test Plan:

The Test Plan document, on the other hand, is derived from the Product Description, Software Requirement Specification (SRS), or Use Case documents.

The Test Plan document is usually prepared by the Test Lead or Test Manager, and the focus of the document is to describe what to test, how to test, when to test, and who will do which test.
It is not uncommon to have one Master Test Plan as a common document for all the test phases, with each test phase having its own Test Plan document.

There is much debate as to whether the Test Plan document should also be a static document like the Test Strategy document mentioned above, or whether it should be updated regularly to reflect changes in the direction of the project and its activities.

My own personal view is that when a testing phase starts and the Test Manager is “controlling” the activities, the test plan should be updated to reflect any deviation from the original plan. After all, Planning and Control are continuous activities in the formal test process. 

Components of the Test Plan document:
  • Test Plan ID 
  • Introduction 
  • Test items 
  • Features to be tested 
  • Features not to be tested 
  • Test techniques 
  • Testing tasks 
  • Suspension criteria 
  • Feature pass or fail criteria 
  • Test environment (entry criteria, exit criteria) 
  • Test deliverables 
  • Staff and training needs 
  • Responsibilities 
  • Schedule 
This is a standard approach to preparing test plan and test strategy documents, but things can vary from company to company.

Factors Affecting Performance of web application

It has been known for years that although software development constantly strives for improvement, it will never be 100% perfect. An application's performance, in turn, can only be judged against its performance objectives.

Performance problems affect all types of systems, regardless of whether they are client/server or Web application systems. It is imperative to understand the factors affecting system performance before embarking on the task of handling them.

Generally speaking, the factors affecting performance may be divided into two large categories: project management oriented and technical.
Project Management Factors Affecting Performance

In the modern Software Development Life Cycle (SDLC), the main phases are subject to time constraints in order to address ever-growing competition.

This causes the following project management issues to arise:
  •  Shorter coding time in development may lead to a lower quality product due to a lack of concentration on performance.
  •  Chances of missing information due to the rapid approach may disqualify the performance objectives.
  •  Inconsistent internal designs may be observed after product deployment, for example, too much cluttering of objects and sequence of screen navigation.
  • Higher probability of violating coding standards, resulting in unoptimized code that may consume too many resources.
  •  Module reuse for future projects may not be possible due to the project specific design.
  • Module may not be designed for scalability.
  • System may collapse due to a sudden increase in user load.

Technical Factors Affecting Performance
While project management related issues have great impact on the output, technical problems may severely affect the application’s overall performance. The problems may stem from the selection of the technology platform, which may be designed for a specific purpose and does not perform well under different conditions.

Usually, however, the technical problems arise due to the developer's negligence regarding performance. A common practice among many developers is not to optimize the code at the development stage. This code may unnecessarily utilize scarce system resources such as memory and processor. Such coding practices may lead to severe performance bottlenecks such as:
  • memory leaks
  • array bound errors
  • inefficient buffering
  • too many processing cycles
  • a large number of HTTP transactions
  • too many file transfers between memory and disk
  • inefficient session state management
  • thread contention under maximum concurrent users
  • poor architecture sizing for peak load
  • inefficient SQL statements
  • lack of proper indexing on the database tables
  • inappropriate configuration of the servers
These problems are difficult to trace once the code is packaged for deployment and require special tools and methodologies. Another cluster of technical factors affecting performance is security.
Performance of the application and its security are commonly at odds, since adding layers of security (SSL, private/public keys, and so on) is extremely computation-intensive. Network-related issues must also be taken into account, especially with regard to Web applications. They may come from various sources, such as:
  • older or unoptimized network infrastructure 
  • slow Web site connections, leading to network traffic and hence poor response times 
  • imbalanced load on servers, affecting performance 

Performance Testing Estimation Preparation

It depends on which estimation technique you are using. If it is WBS (Work Breakdown Structure), then you will have to break all the performance testing activities into smaller parts and then, using your prior experience, estimate the number of days for each activity. Also keep some spare time for each activity, so that you have a buffer in case of any environmental or deployment delay or issue. In the case of WBS, the following activities can be considered:

A. Planning 
1. Understanding of the application 
2. Identifying the NFRs 
3. Finalizing the workload model 
4. Setup of the test environment, tools and monitors 
5. Preparation of the test plan 

B. Preparation 
1. Creation and validation of test scripts 
2. Creation of test data 
3. Creation of business scenarios 
4. Getting approval 

C. Execution 
1. Run a dummy test 
2. Baseline test 
3. Upgrade or tune the environment (if needed) 
4. Baseline test 2 
5. Final performance run 
6. Analysis 
7. Final performance run 2 
8. Benchmarking, etc. 

D. Reporting 
1. Creation of the performance test report 
2. Review with seniors or peers 
3. Update the report 
4. Publish the final report 
5. Getting sign-off

Why response time of a page does not equal the sum of its requests

The response time for a page typically differs from the sum of its requests. This does not mean that your data is incorrect. The difference can be caused by concurrent requests, page connection times, inter-request delays, and custom code within a page.

The most common reason for the sum of the individual request times within a page to exceed the total page response time is that requests are often sent concurrently (in parallel) to a server. Thus some of the individual request response times overlap so the sum of the request response times would exceed the page response time.

Additionally, the page response time can exceed the sum of the individual request response times within the page for the following reasons:

  • The individual request response times do not include time to establish connections but the page response time does include the connection request time. 
  • Inter-request delays are not reflected in the individual request response time but are reflected in the page response time. 
  • Custom code placed within a page is executed serially (after waiting for all previous individual requests to complete) and thus contributes to the page response time. It does not affect individual request response times. However, we recommend that you place custom code outside of a page, where it will not affect page response time.

Transactional Concurrency in Load testing

How many transactions will need to run per minute if a load test has to run for 2 hours with 5000 users, assuming the average length of a transaction is 5 minutes?

Solution:


Duration of load test: 120 minutes
User load: 5000
Average length of transaction: 5 minutes
No. of transactions per minute: ?

No. of transactions performed by a single user in 120 minutes = 120 minutes / 5 minutes = 24 transactions

No. of transactions performed in 2 hours by 5000 users = 5000 * 24 = 120,000 transactions

No. of transactions per minute = No. of transactions performed during 2 hours by 5000 users / duration in minutes = 120,000 / 120 = 1,000 transactions/minute

Save a Dynamic Parameter Value in a Text File Using LoadRunner Scripting

I have a value which is dynamic for each iteration. I have captured that value using the web_reg_save_param (correlation) function. Any enhancement to the script is most welcome.
Action_Main_URL()
{
	int i;
	char Length[100];
	long file;
	char *filename = "c:\\Session.txt";

	/* Open the file in append mode so each iteration adds a new value. */
	if ((file = fopen(filename, "a+")) == NULL)
	{
		lr_output_message("Unable to create %s", filename);
		return -1;
	}

	/* Capture the dynamic session ID from the response body. */
	web_reg_save_param("Cor_Session_Id", "LB= value='", "RB='", "Ord=6", "IgnoreRedirections=Yes", "Search=Body", "RelFrameId=1", LAST);

	web_url("Workplace",
		"URL=http://server/Workplace",
		"Resource=0",
		"RecContentType=text/html",
		"Referer=",
		"Snapshot=t1.inf",
		"Mode=HTML",
		LAST);

	lr_start_transaction("TS_Main_URL_Login");

	web_reg_find("Text=Record Search and Check-In", "SaveCount=Value_Count",
		LAST);

	web_submit_data("WcmSignIn.jsp",
		"Action=http://server/Workplace/WcmSignIn.jsp?eventTarget=signInModule&eventName=SignIn",
		"Method=POST",
		"RecContentType=text/html",
		"Referer=http://server/Workplace/WcmSignIn.jsp?targetUrl=WcmDefault.jsp&targetBase=http%3A%2F%2Fserver%2FWorkplace&sessionId={Cor_Session_Id}&originIp=10.x.x.x&originPort=",
		"Snapshot=t2.inf",
		"Mode=HTML",
		ITEMDATA,
		"Name=targetBase", "Value=http://server/Workplace", ENDITEM,
		"Name=originPort", "Value=", ENDITEM,
		"Name=targetUrl", "Value=Default.jsp", ENDITEM,
		"Name=encodedSessionId", "Value=null", ENDITEM,
		"Name=originIp", "Value=10.x.x.x", ENDITEM,
		"Name=sessionId", "Value={Cor_Session_Id}", ENDITEM,
		"Name=browserTime1", "Value=Sat Jan 1 05 EST 2011", ENDITEM,
		"Name=browserTime2", "Value=Wed Jun 15 05 EDT 2011", ENDITEM,
		"Name=browserOffset1", "Value=300", ENDITEM,
		"Name=browserOffset2", "Value=240", ENDITEM,
		"Name=clientTimeZone", "Value=", ENDITEM,
		"Name=appId", "Value=Workplace", ENDITEM,
		"Name=userId", "Value=userid", ENDITEM,
		"Name=password", "Value=password", ENDITEM,
		EXTRARES,
		"Url=images/web/common/Banner.jpg", "Referer=http://server/Workplace/HomePage.jsp?mode=reset", ENDITEM,
		LAST);

	/* Verify the landing page before ending the transaction. */
	if (atoi(lr_eval_string("{Value_Count}")) > 0)
	{
		lr_output_message("Page found successfully.");
	}
	else
	{
		lr_error_message("Page is not found.");
		fclose(file);
		lr_exit(LR_EXIT_MAIN_ITERATION_AND_CONTINUE, LR_FAIL);
		return 0;
	}

	lr_end_transaction("TS_Main_URL_Login", LR_AUTO);

	/* Write only the captured value (not the whole 100-byte buffer). */
	sprintf(Length, "\n%s,", lr_eval_string("{Cor_Session_Id}"));
	i = fwrite(Length, strlen(Length), 1, file);
	if (i > 0)
		lr_output_message("Successfully wrote %d record", i);
	fclose(file);
	return 0;
}

Requirement Gathering for Performance test Project

Here are the Ideal Requirements to be included while developing a Performance test plan.

• Deadlines available to complete performance testing, including the scheduled deployment date.
• Whether to use internal or external resources to perform the tests. This will largely depend on time scales and in-house expertise (or lack thereof).
• Test environment design agreed upon. Remember that the test environment should be as close an approximation of the live environment as you can achieve and will require longer to create than you estimate.
• Ensuring that a code freeze applies to the test environment within each testing cycle.
• Ensuring that the test environment will not be affected by other user activity. Nobody else should be using the test environment while performance test execution is taking place; otherwise, there is a danger that the test execution and results may be compromised.
• All performance targets identified and agreed to by appropriate business stakeholders. This means consensus from all involved and interested parties on the performance targets for the application. 
• The key application transactions identified, documented, and ready to script. Remember how vital it is to have correctly identified the key transactions to script. Otherwise, your performance testing is in danger of becoming a wasted exercise.
• Which parts of transactions (such as login or time spent on a search) should be monitored separately. This will be used in Step 3 for “checkpointing.”
• Identify the input, target, and runtime data requirements for the transactions that you select. This critical consideration ensures that the transactions you script run correctly and that the target database is realistically populated in terms of size and content. Data is critical to performance testing. Make sure that you can create enough test data of the correct type within the time frames of your testing project. You may need to look at some form of automated data management, and don’t forget to consider data security and confidentiality.
• Performance tests identified in terms of number, type, transaction content, and virtual user deployment. You should also have decided on the think time, pacing, and injection profile for each test transaction deployment.
• Identify and document server, application server, and network KPIs. Remember that you must monitor the application landscape as comprehensively as possible to ensure that you have the necessary information available to identify and resolve any problems that occur.
• Identify the deliverables from the performance test in terms of a report on the test’s outcome versus the agreed performance targets. It’s a good practice to produce a document template that can be used for this purpose.
• A procedure is defined for submission of performance defects discovered during testing cycles to the development team or application vendor. This is an important consideration that is often overlooked. What happens if, despite your best efforts, you find major application-related problems? You need to build contingency into your test plan to accommodate this possibility. There may also be the added complexity of involving offshore resources in the defect submission process. If your plan is to carry out the performance testing in-house, then you will also need to address the following points relating to the testing team.