Performance Testing Tools

How to choose a random value from a list:


To demonstrate, we will use the sample HP Web Tours application, which lets us book flight tickets.
The depart and arrive selections were recorded in the request below:
web_submit_data("reservations.pl",
        "Action=http://127.0.0.1:1080/cgi-bin/reservations.pl",
        "Method=POST",
        "RecContentType=text/html",
        "Referer=http://127.0.0.1:1080/cgi-bin/reservations.pl?page=welcome",
        "Snapshot=t4.inf",
        "Mode=HTML",
        ITEMDATA,
        "Name=advanceDiscount", "Value=0", ENDITEM,
        "Name=depart", "Value=London", ENDITEM,
        "Name=departDate", "Value=03/28/2014", ENDITEM,
        "Name=arrive", "Value=Paris", ENDITEM,
        "Name=returnDate", "Value=03/29/2014", ENDITEM,
        "Name=numPassengers", "Value=1", ENDITEM,
        "Name=roundtrip", "Value=on", ENDITEM,
        "Name=seatPref", "Value=Aisle", ENDITEM,
        "Name=seatType", "Value=Coach", ENDITEM,
        "Name=.cgifields", "Value=roundtrip", ENDITEM,
        "Name=.cgifields", "Value=seatType", ENDITEM,
        "Name=.cgifields", "Value=seatPref", ENDITEM,
        "Name=findFlights.x", "Value=45", ENDITEM,
        "Name=findFlights.y", "Value=6", ENDITEM,
        LAST);

LoadRunner recorded the script above when depart was selected as "London" and arrive as "Paris".
Now we want to supply random depart and arrive values from the list of available options.

Solution:

The simple solution is to capture the values and parameterize them.
Instead, let us capture the values at runtime using correlation and pick a random value programmatically:

1.     Capture the list of values.
2.     Get a random value from array.
3.     Use the captured value in the next web_submit_data or web_url call.

If we check the code generation log, we see the values captured for depart and arrive.



Instead of discussing this theoretically, let us walk through the action below.
Explanatory comments are included where required.

Action()
{

    int place_count,i;
    char Place[100];
    web_reg_save_param("places","LB=<option value=\"","RB=\">","ORD=ALL",LAST);
/* web_reg_save_param should be placed just above the request.
   Here we want to exclude the double quotes, so we used \" in both LB and RB.
   We also used ORD=ALL to capture all the options. */

    lr_start_transaction("Flights");

    web_url("welcome.pl",
        "URL=http://127.0.0.1:1080/cgi-bin/welcome.pl?page=search",
        "Resource=0",
        "RecContentType=text/html",
        "Referer=http://127.0.0.1:1080/cgi-bin/nav.pl?page=menu&in=home",
        "Snapshot=t3.inf",
        "Mode=HTML",
        LAST);

    lr_end_transaction("Flights",LR_AUTO);
  

//Capturing the Number of places found using correlation

    place_count=atoi(lr_eval_string("{places_count}"));
    lr_output_message("Number of places= %d",place_count);
    
// output: Action.c(47): Number of places= 18
//Here I have used lr_output_message, the output in the Replay Log is shown along with the Line number.

    for(i=1;i<=place_count;i++)
    {
        sprintf(Place, "{places_%d}", i);
        // save the captured value into the parameter "city"
        lr_save_string(lr_eval_string(Place), "city");
        lr_message(lr_eval_string("{city}"));

    }

/*
Output obtained from above For Loop:

Frankfurt
London
Los Angeles
Paris
Portland
San Francisco
Seattle
Sydney
Zurich
Frankfurt
London
Los Angeles
Paris
Portland
San Francisco
Seattle
Sydney
Zurich
*/
   
/* Checking the output above, the values are duplicated. There are 9 cities in
   the list, but we captured 18: 9 from depart and 9 from arrive. Since the two
   lists are identical, we will select from only the first half, as below.
*/
    //code to select random value
    //(place_count)/2 will select only 9 out of 18.

    sprintf (Place,"{places_%d}",1 + rand() % (place_count/2) );

      // save the selected value into the parameter "depart"
      lr_save_string(lr_eval_string(Place), "depart");
      lr_message("City Selected for Depart : %s", lr_eval_string("{depart}"));

      //Output: City Selected for Depart : Seattle
//Here I have used lr_message, so in the Replay log the message is displayed without the Line number


    sprintf (Place,"{places_%d}",1 + rand() % (place_count/2) );
      // save the selected value into the parameter "arrive"
      lr_save_string(lr_eval_string(Place), "arrive");
      lr_message("City Selected for Arrival : %s", lr_eval_string("{arrive}"));

      // Output: City Selected for Arrival : Portland
      
// Parameterizing the depart date as today's date
    lr_save_datetime("%m/%d/%Y", DATE_NOW, "departDate");
    lr_output_message("Depart Date is %s",lr_eval_string("{departDate}"));

    //Output: Action.c(103): Depart Date is 03/29/2014

// Parameterizing the return date as today's date + 3 days

    lr_save_datetime("%m/%d/%Y", DATE_NOW + ONE_DAY * 3, "returnDate");
    lr_output_message("Return Date is %s",lr_eval_string("{returnDate}"));

    //Output: Action.c(106): Return Date is 04/01/2014
    
// Parameterizing and passing the values of depart, departDate, arrive and returnDate
    lr_start_transaction("Find Flight");

    web_submit_data("reservations.pl",
        "Action=http://127.0.0.1:1080/cgi-bin/reservations.pl",
        "Method=POST",
        "RecContentType=text/html",
        "Referer=http://127.0.0.1:1080/cgi-bin/reservations.pl?page=welcome",
        "Snapshot=t4.inf",
        "Mode=HTML",
        ITEMDATA,
        "Name=advanceDiscount", "Value=0", ENDITEM,
        "Name=depart", "Value={depart}", ENDITEM,
        "Name=departDate", "Value={departDate}", ENDITEM,
        "Name=arrive", "Value={arrive}", ENDITEM,
        "Name=returnDate", "Value={returnDate}", ENDITEM,
        "Name=numPassengers", "Value=1", ENDITEM,
        "Name=roundtrip", "Value=on", ENDITEM,
        "Name=seatPref", "Value=Aisle", ENDITEM,
        "Name=seatType", "Value=Coach", ENDITEM,
        "Name=.cgifields", "Value=roundtrip", ENDITEM,
        "Name=.cgifields", "Value=seatType", ENDITEM,
        "Name=.cgifields", "Value=seatPref", ENDITEM,
        "Name=findFlights.x", "Value=45", ENDITEM,
        "Name=findFlights.y", "Value=6", ENDITEM,
        LAST);

    lr_end_transaction("Find Flight",LR_AUTO);
    return 0;
}
As an improvement, we can add a check that the randomly selected depart and arrive cities are not the same.


LoadRunner Runtime Settings: A Practical Guide to Realistic Workload Modeling

When running training sessions or mentoring engineers, one question comes up repeatedly: “What runtime settings should we use?” As if there exists a universal configuration that works for every performance test. There isn’t. Runtime settings must always align with the objective of the test. However, in practice, there is a core set of principles that apply to most real-world scenarios. Understanding these is far more valuable than memorizing settings.

For scripts that support multiple actions, the most effective approach is to model business processes explicitly. Create a separate action for each business transaction and assign percentage weightings to reflect real production usage. This provides a simple and scalable workload model. Complex constructs such as sequential execution or action blocks are rarely required. One exception is when fractional execution is needed, for example 0.1 percent. Since LoadRunner supports only integer percentages, this can be approximated using nested blocks. In most performance tests, scripts are executed based on duration, not iteration count. Iteration-based execution is typically limited to debugging in VuGen.

Pacing is often misunderstood and misused. The option “as soon as previous iteration ends” is useful for debugging or data validation but should not be used for load testing. Fixed pacing can lead to synchronized user behavior where multiple users hit the system at the same time, creating artificial spikes and unrealistic load patterns. A better strategy is to use random pacing intervals while ensuring that the average iteration time aligns with the target throughput. The lower bound of pacing must not exceed the maximum execution time of the business process, otherwise the system will generate fewer transactions than intended.

Logging introduces overhead and should be used carefully. During debugging, full logging is appropriate. During load execution, logging should be minimized. A balanced approach is to enable extended logging but configure it to log only when errors occur. This provides visibility without impacting performance.

Think time simulates real user behavior and must not be ignored. A good practice is to use randomized think time, typically 50 to 150 percent of recorded values. Think time should only be ignored during debugging or data setup. Removing think time during load testing results in unrealistic, bursty traffic that does not reflect actual user behavior.

Additional attributes are often overlooked but highly useful. They allow runtime parameterization without modifying the script. For example, defining a parameter such as ServerName allows you to switch environments directly from the Controller. This simplifies testing across multiple environments.

Several miscellaneous settings are important. Continue on error should be used only if the script includes explicit error-handling logic. Failing open transactions on error should always be enabled to ensure accurate reporting. Generating snapshots on error is useful for debugging and failure analysis. Running virtual users as threads is generally preferred because it reduces memory consumption. Running as a process should only be used when required, such as when dealing with non-thread-safe code. Automatic transaction creation options should be avoided. Transactions should be explicitly defined using lr_start_transaction to maintain control and clarity.

Network speed simulation should be used intentionally. In most scenarios, virtual users should use maximum bandwidth. If bandwidth constraints need to be tested, such as for mobile users, this should be done in a separate scenario. Mixing different bandwidth profiles in a single test can distort results.

Browser emulation is often misunderstood. The User-Agent setting only changes the HTTP header sent to the server. It matters only if the application serves different content based on browser type. Otherwise, it has no impact on performance behavior.

Proxy configuration should reflect the actual production architecture. If users do not go through a proxy in production, it should not be included in the test. Proxies can introduce additional latency, create misleading errors, and interfere with load balancing behavior. They should only be included if they are explicitly part of the system under test.

Download filters help control external dependencies. A common use case is blocking third-party analytics or tracking domains. A better approach is to explicitly allow only required domains rather than selectively blocking some. This reduces the risk of missing hidden dependencies.

Content checks are one of the most underutilized features. Without validation, failed responses may still be counted as successful. Content checks ensure that responses are not only fast but also correct. This makes test results meaningful and reliable.

Runtime settings are not just configuration parameters. They define the behavioral model of your load test. If configured incorrectly, throughput will be inaccurate, concurrency will be unrealistic, and bottlenecks will be misidentified.

Performance testing is not about executing scripts. It is about accurately simulating user behavior at scale. Runtime settings are where system behavior, user interaction patterns, and workload distribution come together. Getting them right ensures that your test reflects production reality. Getting them wrong makes even a perfectly scripted test meaningless.

A System-Level Approach to Achieving Target Throughput: Pacing

Performance testing is often treated as a mechanical exercise—configure users, add think time, set pacing, and execute. Yet in real-world systems, this approach frequently fails. Not because the tools are wrong, but because the model of load generation is misunderstood. This is especially evident when teams attempt to achieve a precise throughput target (TPS/TPM) using virtual users.


The hidden problem lies in misinterpreting pacing. In many projects, pacing is treated as a simple tuning knob: increase pacing to reduce load, decrease it to increase load. While directionally correct, this view is incomplete. Throughput is not controlled by pacing alone. It is controlled by the total cycle time of a user.


For a single transaction per iteration, each virtual user operates in a loop consisting of response time (RT), think time (TT), and pacing (P). The cycle time per user is defined as CT = RT + TT + P. System throughput then becomes TPS = N / CT, where N is the number of users. This leads to the key relationship: RT + TT + P = N / TPS.


This means that achieving a target TPS is not about tuning pacing in isolation. It is about solving a system equation.


Consider a practical example. Suppose the target throughput is 50 TPS, with 100 users and an average response time of 0.5 seconds. Each user must contribute equally, so the required cycle time per user is CT = 100 / 50 = 2 seconds. Out of this, 0.5 seconds is consumed by the system. The remaining time must be controlled by the test model. Therefore, TT + P = 2 - 0.5 = 1.5 seconds.


This means each user must execute one transaction every 2 seconds, spending 1.5 seconds outside the system (think time plus pacing). This results in 0.5 TPS per user and 50 TPS overall.


The critical insight here is that many performance strategies fail because think time and pacing are added arbitrarily, without anchoring them to throughput equations. The result is a common pattern: tests pass in controlled environments, but production systems fail under real conditions because the workload model is not mathematically aligned with system behavior.


Many teams introduce random pacing to simulate real-world variability. While conceptually valid, this approach has a subtle flaw. Throughput is inversely proportional to pacing, expressed as TPM = (N × 60) / P. Because of this inverse relationship, a uniform distribution of pacing values does not produce the throughput implied by their average: the achieved load follows the mean of 1/P, which is higher than 60 divided by the arithmetic mean of the pacing values.


For example, if pacing is randomly varied between 72 and 360 seconds, the system will not naturally average to the desired TPM unless the distribution is carefully controlled. This creates a mismatch between expected and actual load patterns.


In modern distributed systems, this becomes even more critical. Kubernetes scheduling, autoscaling delays, and external dependencies introduce variability that amplifies modeling inaccuracies. A robust load model must therefore be deterministic at the macro level, ensuring correct TPS, while allowing controlled variability at the micro level to simulate realistic user behavior.

A better approach is to start with throughput equations and define cycle time mathematically. Where possible, use arrival-rate or throughput-based workload models instead of purely user-driven models. Pacing should act as a stabilizer, not the primary control mechanism. Randomness should be introduced carefully, ensuring it aligns statistically with the desired throughput.


Performance engineering today is no longer about scripting users. It is about modeling systems. If the load model is not mathematically consistent, the results are operationally irrelevant. Understanding pacing is not about configuring delays, but about understanding how user behavior, system latency, and throughput interact as a unified system.


I work at the intersection of Performance Engineering, SRE, Distributed Systems, and perfMLOps, focusing on how system behavior—not just code—determines real-world performance.