Flex Protocol Scripting in LR

Introduction

Adobe Flex is a software development kit released by Adobe Systems for the development and deployment of cross-platform rich Internet applications based on the Adobe Flash platform. Flex applications can be written using Adobe Flex Builder or the freely available Flex compiler from Adobe.
Developers use two core languages to create Flex applications. The first core language is MXML, the Macromedia Flex Markup Language, which includes a rich set of XML tags that allows developers to lay out user interfaces. Some MXML constructs allow you to call remote objects, store returned data in a model, and apply a custom look and feel to MXML components.
The second core language for Flex development is ActionScript 2.0, which is similar to JavaScript. ActionScript elements coded inside MXML pages provide robust event handling that allows the application to respond to dynamic user interactions. Unlike JavaScript, ActionScript runs inside the Flash plug-in, so there is no need to write several versions of the same code to support different browsers.
The Flex server is responsible for translating the MXML and ActionScript components into Flash bytecode in the form of .SWF files. The SWF file is executed on the client in the Flash runtime environment. The Flex server provides other services such as caching, concurrency, and handling remote object requests.


Flex Protocol with LR

VuGen allows you to create Vusers that emulate the protocol suite provided with the Flex 2 SDK.

RIAs are lightweight online programs that provide users with more dynamic control than with a standard web page. Like Web applications built with AJAX, Flex applications are more responsive, because the application does not need to load a new Web page every time the user takes action. However, unlike working with AJAX, Flex is independent of browser implementations such as JavaScript or CSS. The framework runs on Adobe's cross-platform Flash Player. 

Prerequisites
  •     LoadRunner 11.0 supports Flex with Patch 3
  •     JRE 6.0
  •     Adobe Flash Player 10 or higher

Environment Variables

Verify that the following environment variables are set in the Windows operating system.

The environment variables can be reached by following these steps:

1.    Right-click “My Computer” and go to Properties.
2.    Go to the Advanced tab.
3.    Click the Environment Variables button.

Click the “New” button under System variables and enter the values below:

Variable name: HP_FLEX_JAVA_LOG_FILE
Variable value: C:\flex.log

Variable name: VUGEN_PATH
Variable value: C:\Program Files\HP\Virtual User Generator\

Variable name: ANALYSIS_PATH
Variable value: C:\Program Files\HP\LoadRunner\

* HP_FLEX_JAVA_LOG_FILE is used to generate the log file that helps us identify the classes involved in a particular transaction.
This log file is very useful for debugging. Ensure that there are no errors in the flex log file after recording.
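
Alternatively, the same variables can be set from an elevated command prompt with setx (the /M switch writes system-level variables). A sketch using the same values as above:

rem Set the Flex-related variables system-wide (run from an elevated prompt).
rem Note: cmd treats \" as an escaped quote, so the trailing backslash is
rem omitted here; add it back via the GUI if your setup requires it.
setx HP_FLEX_JAVA_LOG_FILE C:\flex.log /M
setx VUGEN_PATH "C:\Program Files\HP\Virtual User Generator" /M
setx ANALYSIS_PATH "C:\Program Files\HP\LoadRunner" /M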


Recording Options

The following recording options need to be considered before recording in the Flex protocol:

·         Go to Tools > Recording Options in LoadRunner.

·         Under Script, select the “Generate recorded event logs” check box. This setting generates the log files used for debugging.

·         Under the Protocols tab, select all three check boxes:

o    Action Message Format (AMF)
o    Flex
o    Web (HTTP/HTML)

·         Under Recording, select HTML-based script.

·         Click the HTML Advanced button and select “A script that contains explicit URLs only (e.g., web_url, web_submit_data)” and, for non-HTML-generated elements, “Record in separate steps and use concurrent groups”.
·         Under Code Generation, select “Encode AMF3 using external parser” and provide the locations of the JAR files below under the Value column.

o    flex-messaging-common.jar
o    flex-messaging-core.jar
o    Any application-specific JARs (the JARs depend on the transaction and should be verified before recording each transaction)

·         Under Port Mapping, click Options. Under Advanced Port Mapping Settings, change the Log level to Advanced Debug. This setting enables generation of the flex log on the C:\ drive.
  
·         The Advanced tab under HTTP properties is the standard one. 

·         Under Correlation tab, uncheck the “Enable correlation during recording” check box.
 


Post Recording Verification

After recording, verify the following:

1.    A file named “flex.log” should be generated on the C:\ drive.

2.    flex_amf_call steps should be generated with readable XML, not binary format, for all requests (see the sketch below).
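
For reference, a decoded step looks roughly like the sketch below. The argument keywords, endpoint, and message body are illustrative assumptions only (the exact form varies by LoadRunner version); the point is that the MESSAGE portion should contain readable XML rather than raw binary data:

// Rough, illustrative shape of a decoded Flex AMF step -- keyword names
// and the XML body are assumptions, not the exact generated code; check
// your own recorded script for the real form.
flex_amf_call("getProductList",
    "URL=http://server/messagebroker/amf",   // illustrative endpoint
    "Snapshot=t1.inf",
    MESSAGE,
    "<AMFMessage>"
    "  <object type=\"flex.messaging.messages.RemotingMessage\">"
    "    <string name=\"destination\">productService</string>"
    "    ..."
    "  </object>"
    "</AMFMessage>",
    LAST);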


Correlation in Flex Scripts

Flex applications often contain dynamic data, data that changes each time you run the script. For example, the object name may change from run to run.

When you record a Vuser script, VuGen records a set of data and argument values. When you replay the script, however, the server may reject these arguments and issue an error. This error could be the result of dynamic data that is outdated and no longer accepted by the server.
To overcome this, you apply correlation to your script:

➤ Save the server response in preparation for extracting the required values.
➤ Extract the required values from the server response.
➤ Save the values to a parameter.
➤ Use those parameters as input to your Flex requests.

These errors are not always obvious, and you may only detect them by carefully examining Vuser log files. If you encounter an error when running your Vuser, examine the script at the point where the error occurred. Often, correlation will solve the problem by enabling you to use the results of one statement as input for another.
To perform correlation:

1. Locate the step in your script that failed due to dynamic values that need correlation.

   Use the Replay Log to assist you in finding the problematic step.

2. Identify the server response with the correct value in one of the previous steps.

   Double-click the error in the Replay Log to go to the step with the error. Examine the preceding steps in Tree View and look for the value in the Server Response tab.


3. Save the entire server response to a parameter.

   Before you extract the value, the entire server response should be saved to a parameter as follows:

➤ Right-click the step node (in the left Action pane) corresponding to the server response containing the value and select Properties.
➤ In the Flex Call Properties dialog box, type a Response parameter name.
➤ Click OK to save the new parameter name.

4. Save the original server response value to a parameter.

➤ In the Replay Snapshot: Response Data, right-click the node above the value (for example, string), and select Save value in parameter.



 
➤ In the XML Parameter Properties dialog, specify a parameter Name. You will use this name in subsequent steps.

➤ Click OK. The script will now contain a new function, lr_xml_get_values (see the sketch after this procedure).

5. Insert the parameter in the subsequent calls.

   In VuGen Edit view, beginning with the call that failed, replace the value in all subsequent calls to the object with the parameter that you defined:

➤ Right-click the step node (in the Action pane) corresponding to the failed call and select Properties.
➤ Locate the argument that required correlation.
➤ In the Value box, type the parameter name in curly brackets, for example, {ParamValue_string}.



➤ Click OK.
 
6. Run the script.

Make sure that VuGen properly substitutes the argument value with the parameter value that you saved.
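
Putting the steps together, the correlated section of the script follows the pattern sketched below. lr_xml_get_values and its XML/Query/ValueParam arguments are standard LoadRunner; the response parameter name and query path are illustrative assumptions:

// The server response of the Flex call was saved to the parameter
// "FlexResponse" via the Flex Call Properties dialog (step 3 above).

// Extract the dynamic value from the saved response. "Query" is an
// XPath expression to the node holding the value; the path used here
// is an illustrative assumption.
lr_xml_get_values("XML={FlexResponse}",
                  "Query=/amf/body/object/string",
                  "ValueParam=ParamValue_string",
                  LAST);

// Optional: verify the extracted value in the replay log.
lr_output_message("Correlated value: %s",
                  lr_eval_string("{ParamValue_string}"));

// Subsequent calls then reference {ParamValue_string} in place of the
// recorded literal value (step 5 above).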

Some important JAR files needed are flex-messaging-common.jar and flex-messaging-core.jar (see Recording Options above).

We need the application-specific JAR files as well, along with these, from the developers to generate the decoded AMF calls in the scripts; otherwise we cannot parse and correlate the requests.

SAR Commands in UNIX

Using sar you can monitor the performance of various Linux subsystems (CPU, memory, I/O, etc.) in real time.
Using sar, you can also collect all performance data on an on-going basis, store them, and do historical analysis to identify bottlenecks.

Sar is part of the sysstat package.
This article explains how to install and configure sysstat package (which contains sar utility) and explains how to monitor the following Linux performance statistics using sar.
  1. Collective CPU usage
  2. Individual CPU statistics
  3. Memory used and available
  4. Swap space used and available
  5. Overall I/O activities of the system
  6. Individual device I/O activities
  7. Context switch statistics
  8. Run queue and load average data
  9. Network statistics
  10. Report sar data from a specific time
This is the only guide you’ll need for sar utility. So, bookmark this for your future reference.

I. Install and Configure Sysstat

Install Sysstat Package

First, make sure the latest version of sar is available on your system. Install it using any one of the following methods depending on your distribution.
sudo apt-get install sysstat
(or)
yum install sysstat
(or)
rpm -ivh sysstat-10.0.0-1.i586.rpm

Install Sysstat from Source

wget http://pagesperso-orange.fr/sebastien.godard/sysstat-10.0.0.tar.bz2

tar xvfj sysstat-10.0.0.tar.bz2

cd sysstat-10.0.0

./configure --enable-install-cron
Note: Make sure to pass the option --enable-install-cron. This does the following automatically for you. If you don’t configure sysstat with this option, you have to do this ugly job yourself manually.
  • Creates /etc/rc.d/init.d/sysstat
  • Creates appropriate links from /etc/rc.d/rc*.d/ directories to /etc/rc.d/init.d/sysstat to start the sysstat automatically during Linux boot process.
  • For example, /etc/rc.d/rc3.d/S01sysstat is linked automatically to /etc/rc.d/init.d/sysstat
After the ./configure, install it as shown below.
make

make install
Note: This will install sar and other sysstat utilities under /usr/local/bin
Once installed, verify the sar version using “sar -V”. Version 10 is the current stable version of sysstat.
$ sar -V
sysstat version 10.0.0
(C) Sebastien Godard (sysstat  orange.fr)
Finally, make sure sar works. For example, the following gives the system CPU statistics 3 times (with 1 second interval).
$ sar 1 3
Linux 2.6.18-194.el5PAE (dev-db)        03/26/2011      _i686_  (8 CPU)

01:27:32 PM       CPU     %user     %nice   %system   %iowait    %steal     %idle
01:27:33 PM       all      0.00      0.00      0.00      0.00      0.00    100.00
01:27:34 PM       all      0.25      0.00      0.25      0.00      0.00     99.50
01:27:35 PM       all      0.75      0.00      0.25      0.00      0.00     99.00
Average:          all      0.33      0.00      0.17      0.00      0.00     99.50

Utilities part of Sysstat

Following are the other sysstat utilities.
  • sar collects and displays ALL system activities statistics.
  • sadc stands for “system activity data collector”. This is the sar backend tool that does the data collection.
  • sa1 stores system activities in binary data file. sa1 depends on sadc for this purpose. sa1 runs from cron.
  • sa2 creates daily summary of the collected statistics. sa2 runs from cron.
  • sadf can generate sar report in CSV, XML, and various other formats. Use this to integrate sar data with other tools.
  • iostat generates CPU and I/O statistics.
  • mpstat displays CPU statistics.
  • pidstat reports statistics based on the process id (PID)
  • nfsiostat displays NFS I/O statistics.
  • cifsiostat generates CIFS statistics.
This article focuses on sysstat fundamentals and sar utility.

Collect the sar statistics using cron job – sa1 and sa2

Create sysstat file under /etc/cron.d directory that will collect the historical sar data.
# vi /etc/cron.d/sysstat
*/10 * * * * root /usr/local/lib/sa/sa1 1 1
53 23 * * * root /usr/local/lib/sa/sa2 -A
If you’ve installed sysstat from source, the default location of sa1 and sa2 is /usr/local/lib/sa. If you’ve installed using your distribution update method (for example: yum, up2date, or apt-get), this might be /usr/lib/sa/sa1 and /usr/lib/sa/sa2.

/usr/local/lib/sa/sa1

  • This runs every 10 minutes and collects sar data for historical reference.
  • If you want to collect sar statistics every 5 minutes, change */10 to */5 in the above /etc/cron.d/sysstat file.
  • This writes the data to /var/log/sa/saXX file. XX is the day of the month. saXX file is a binary file. You cannot view its content by opening it in a text editor.
  • For example, if today is the 26th day of the month, sa1 writes the sar data to /var/log/sa/sa26
  • You can pass two parameters to sa1: interval (in seconds) and count.
  • In the above crontab example, sa1 1 1 means that sa1 collects sar data 1 time with a 1-second interval (every 10 minutes).

/usr/local/lib/sa/sa2

  • This runs close to midnight (at 23:53) to create the daily summary report of the sar data.
  • sa2 creates the /var/log/sa/sarXX file (note that this is different from the saXX file created by sa1). The sarXX file created by sa2 is an ASCII file that you can view in a text editor.
  • This will also remove saXX files that are older than a week. So, write a quick shell script that runs every week to copy the /var/log/sa/* files to some other directory to do historical sar data analysis.
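
A minimal sketch of that weekly copy job (the archive location, script path, and schedule are assumptions; adjust to taste):

#!/bin/sh
# archive_sa.sh - copy the sysstat data files aside before sa2's
# one-week cleanup removes them.
ARCHIVE=/var/log/sa-archive/$(date +%Y%m%d)
mkdir -p "$ARCHIVE"
cp -p /var/log/sa/* "$ARCHIVE"

Then schedule it from cron, for example once a week early on Sunday:

0 1 * * 0 root /usr/local/bin/archive_sa.sh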

II. 10 Practical Sar Usage Examples

There are two ways to invoke sar.
  1. sar followed by an option (without specifying a saXX data file). This will look for the current day’s saXX data file and report the performance data that was recorded up to that point for the current day.
  2. sar followed by an option, additionally specifying a saXX data file using the -f option. This will report the performance data for that particular day (XX is the day of the month).
In all the examples below, we are going to explain how to view certain performance data for the current day. To look for a specific day, add “-f /var/log/sa/saXX” at the end of the sar command.
Every sar command will have the following as the first line of its output.
$ sar -u
Linux 2.6.18-194.el5PAE (dev-db)        03/26/2011      _i686_  (8 CPU)
  • Linux 2.6.18-194.el5PAE – Linux kernel version of the system.
  • (dev-db) – The hostname where the sar data was collected.
  • 03/26/2011 – The date when the sar data was collected.
  • _i686_ – The system architecture
  • (8 CPU) – Number of CPUs available on this system. On multi core systems, this indicates the total number of cores.

1. CPU Usage of ALL CPUs (sar -u)

This gives the cumulative real-time CPU usage of all CPUs. “1 3” reports every 1 second, a total of 3 times. Most likely you’ll focus on the last field, “%idle”, to see the CPU load.
$ sar -u 1 3
Linux 2.6.18-194.el5PAE (dev-db)        03/26/2011      _i686_  (8 CPU)

01:27:32 PM       CPU     %user     %nice   %system   %iowait    %steal     %idle
01:27:33 PM       all      0.00      0.00      0.00      0.00      0.00    100.00
01:27:34 PM       all      0.25      0.00      0.25      0.00      0.00     99.50
01:27:35 PM       all      0.75      0.00      0.25      0.00      0.00     99.00
Average:          all      0.33      0.00      0.17      0.00      0.00     99.50
Following are few variations:
  • sar -u Displays CPU usage for the current day that was collected until that point.
  • sar -u 1 3 Displays real time CPU usage every 1 second for 3 times.
  • sar -u ALL Same as “sar -u” but displays additional fields.
  • sar -u ALL 1 3 Same as “sar -u 1 3″ but displays additional fields.
  • sar -u -f /var/log/sa/sa10 Displays CPU usage for the 10th day of the month from the sa10 file.

2. CPU Usage of Individual CPU or Core (sar -P)

If you have 4 Cores on the machine and would like to see what the individual cores are doing, do the following.
“-P ALL” indicates that it should display statistics for ALL the individual cores.
In the following example, under the “CPU” column, 0, 1, 2, and 3 indicate the corresponding CPU core numbers.
$ sar -P ALL 1 1
Linux 2.6.18-194.el5PAE (dev-db)        03/26/2011      _i686_  (8 CPU)

01:34:12 PM       CPU     %user     %nice   %system   %iowait    %steal     %idle
01:34:13 PM       all     11.69      0.00      4.71      0.69      0.00     82.90
01:34:13 PM         0     35.00      0.00      6.00      0.00      0.00     59.00
01:34:13 PM         1     22.00      0.00      5.00      0.00      0.00     73.00
01:34:13 PM         2      3.00      0.00      1.00      0.00      0.00     96.00
01:34:13 PM         3      0.00      0.00      0.00      0.00      0.00    100.00
“-P 1” indicates that it should display statistics only for the 2nd core (note that core numbering starts from 0).
$ sar -P 1 1 1
Linux 2.6.18-194.el5PAE (dev-db)        03/26/2011      _i686_  (8 CPU)

01:36:25 PM       CPU     %user     %nice   %system   %iowait    %steal     %idle
01:36:26 PM         1      8.08      0.00      2.02      1.01      0.00     88.89
Following are few variations:
  • sar -P ALL Displays CPU usage broken down by all cores for the current day.
  • sar -P ALL 1 3 Displays real time CPU usage for ALL cores every 1 second for 3 times (broken down by all cores).
  • sar -P 1 Displays CPU usage for core number 1 for the current day.
  • sar -P 1 1 3 Displays real time CPU usage for core number 1, every 1 second for 3 times.
  • sar -P ALL -f /var/log/sa/sa10 Displays CPU usage broken down by all cores for the 10th day of the month from the sa10 file.

3. Memory Free and Used (sar -r)

This reports the memory statistics. “1 3” reports every 1 second, a total of 3 times. Most likely you’ll focus on “kbmemfree” and “kbmemused” for free and used memory.
$ sar -r 1 3
Linux 2.6.18-194.el5PAE (dev-db)        03/26/2011      _i686_  (8 CPU)

07:28:06 AM kbmemfree kbmemused  %memused kbbuffers  kbcached  kbcommit   %commit  kbactive   kbinact
07:28:07 AM   6209248   2097432     25.25    189024   1796544    141372      0.85   1921060     88204
07:28:08 AM   6209248   2097432     25.25    189024   1796544    141372      0.85   1921060     88204
07:28:09 AM   6209248   2097432     25.25    189024   1796544    141372      0.85   1921060     88204
Average:      6209248   2097432     25.25    189024   1796544    141372      0.85   1921060     88204
Following are few variations:
  • sar -r
  • sar -r 1 3
  • sar -r -f /var/log/sa/sa10

4. Swap Space Used (sar -S)

This reports the swap statistics. “1 3” reports every 1 second, a total of 3 times. If “kbswpused” and “%swpused” are at 0, then your system is not swapping.
$ sar -S 1 3
Linux 2.6.18-194.el5PAE (dev-db)        03/26/2011      _i686_  (8 CPU)

07:31:06 AM kbswpfree kbswpused  %swpused  kbswpcad   %swpcad
07:31:07 AM   8385920         0      0.00         0      0.00
07:31:08 AM   8385920         0      0.00         0      0.00
07:31:09 AM   8385920         0      0.00         0      0.00
Average:      8385920         0      0.00         0      0.00
Following are few variations:
  • sar -S
  • sar -S 1 3
  • sar -S -f /var/log/sa/sa10
Notes:
  • Use “sar -R” to identify number of memory pages freed, used, and cached per second by the system.
  • Use “sar -H” to identify the hugepages (in KB) that are used and available.
  • Use “sar -B” to generate paging statistics, i.e., the number of KB paged in (and out) from disk per second.
  • Use “sar -W” to generate page swap statistics, i.e., pages swapped in (and out) per second.

5. Overall I/O Activities (sar -b)

This reports I/O statistics. “1 3” reports every 1 second, a total of 3 times.
The following fields are displayed in the example below.
  • tps – Transactions per second (this includes both read and write)
  • rtps – Read transactions per second
  • wtps – Write transactions per second
  • bread/s – Bytes read per second
  • bwrtn/s – Bytes written per second
$ sar -b 1 3
Linux 2.6.18-194.el5PAE (dev-db)        03/26/2011      _i686_  (8 CPU)

01:56:28 PM       tps      rtps      wtps   bread/s   bwrtn/s
01:56:29 PM    346.00    264.00     82.00   2208.00    768.00
01:56:30 PM    100.00     36.00     64.00    304.00    816.00
01:56:31 PM    282.83     32.32    250.51    258.59   2537.37
Average:       242.81    111.04    131.77    925.75   1369.90
Following are few variations:
  • sar -b
  • sar -b 1 3
  • sar -b -f /var/log/sa/sa10
Note: Use “sar -v” to display number of inode handlers, file handlers, and pseudo-terminals used by the system.

6. Individual Block Device I/O Activities (sar -d)

To identify the activities of individual block devices (i.e., a specific mount point, LUN, or partition), use “sar -d”.
$ sar -d 1 1
Linux 2.6.18-194.el5PAE (dev-db)        03/26/2011      _i686_  (8 CPU)

01:59:45 PM       DEV       tps  rd_sec/s  wr_sec/s  avgrq-sz  avgqu-sz     await     svctm     %util
01:59:46 PM    dev8-0      1.01      0.00      0.00      0.00      0.00      4.00      1.00      0.10
01:59:46 PM    dev8-1      1.01      0.00      0.00      0.00      0.00      4.00      1.00      0.10
01:59:46 PM dev120-64      3.03     64.65      0.00     21.33      0.03      9.33      5.33      1.62
01:59:46 PM dev120-65      3.03     64.65      0.00     21.33      0.03      9.33      5.33      1.62
01:59:46 PM  dev120-0      8.08      0.00    105.05     13.00      0.00      0.38      0.38      0.30
01:59:46 PM  dev120-1      8.08      0.00    105.05     13.00      0.00      0.38      0.38      0.30
01:59:46 PM dev120-96      1.01      8.08      0.00      8.00      0.01      9.00      9.00      0.91
01:59:46 PM dev120-97      1.01      8.08      0.00      8.00      0.01      9.00      9.00      0.91
In the above example “DEV” indicates the specific block device.
For example: “dev53-1″ means a block device with 53 as major number, and 1 as minor number.
The device name (DEV column) can display the actual device name (for example: sda, sda1, sdb1, etc.) if you use the -p option (pretty print), as shown below.
$ sar -p -d 1 1
Linux 2.6.18-194.el5PAE (dev-db)        03/26/2011      _i686_  (8 CPU)

01:59:45 PM       DEV       tps  rd_sec/s  wr_sec/s  avgrq-sz  avgqu-sz     await     svctm     %util
01:59:46 PM       sda      1.01      0.00      0.00      0.00      0.00      4.00      1.00      0.10
01:59:46 PM      sda1      1.01      0.00      0.00      0.00      0.00      4.00      1.00      0.10
01:59:46 PM      sdb1      3.03     64.65      0.00     21.33      0.03      9.33      5.33      1.62
01:59:46 PM      sdc1      3.03     64.65      0.00     21.33      0.03      9.33      5.33      1.62
01:59:46 PM      sde1      8.08      0.00    105.05     13.00      0.00      0.38      0.38      0.30
01:59:46 PM      sdf1      8.08      0.00    105.05     13.00      0.00      0.38      0.38      0.30
01:59:46 PM      sda2      1.01      8.08      0.00      8.00      0.01      9.00      9.00      0.91
01:59:46 PM      sdb2      1.01      8.08      0.00      8.00      0.01      9.00      9.00      0.91
Following are few variations:
  • sar -d
  • sar -d 1 3
  • sar -d -f /var/log/sa/sa10
  • sar -p -d

7. Display context switch per second (sar -w)

This reports the total number of processes created per second and the total number of context switches per second. “1 3” reports every 1 second, a total of 3 times.
$ sar -w 1 3
Linux 2.6.18-194.el5PAE (dev-db)        03/26/2011      _i686_  (8 CPU)

08:32:24 AM    proc/s   cswch/s
08:32:25 AM      3.00     53.00
08:32:26 AM      4.00     61.39
08:32:27 AM      2.00     57.00
Following are few variations:
  • sar -w
  • sar -w 1 3
  • sar -w -f /var/log/sa/sa10

8. Reports run queue and load average (sar -q)

This reports the run queue size and the load average of the last 1 minute, 5 minutes, and 15 minutes. “1 3” reports every 1 second, a total of 3 times.
$ sar -q 1 3
Linux 2.6.18-194.el5PAE (dev-db)        03/26/2011      _i686_  (8 CPU)

06:28:53 AM   runq-sz  plist-sz   ldavg-1   ldavg-5  ldavg-15   blocked
06:28:54 AM         0       230      2.00      3.00      5.00         0
06:28:55 AM         2       210      2.01      3.15      5.15         0
06:28:56 AM         2       230      2.12      3.12      5.12         0
Average:            3       230      3.12      3.12      5.12         0
Note: The “blocked” column displays the number of tasks that are currently blocked and waiting for I/O operation to complete.
Following are few variations:
  • sar -q
  • sar -q 1 3
  • sar -q -f /var/log/sa/sa10

9. Report network statistics (sar -n)

This reports various network statistics: for example, the number of packets received (transmitted) through the network card, packet failure statistics, etc. “1 3” reports every 1 second, a total of 3 times.
sar -n KEYWORD
KEYWORD can be one of the following:
  • DEV – Displays network devices vital statistics for eth0, eth1, etc.,
  • EDEV – Display network device failure statistics
  • NFS – Displays NFS client activities
  • NFSD – Displays NFS server activities
  • SOCK – Displays sockets in use for IPv4
  • IP – Displays IPv4 network traffic
  • EIP – Displays IPv4 network errors
  • ICMP – Displays ICMPv4 network traffic
  • EICMP – Displays ICMPv4 network errors
  • TCP – Displays TCPv4 network traffic
  • ETCP – Displays TCPv4 network errors
  • UDP – Displays UDPv4 network traffic
  • SOCK6, IP6, EIP6, ICMP6, UDP6 are for IPv6
  • ALL – This displays all of the above information. The output will be very long.
$ sar -n DEV 1 1
Linux 2.6.18-194.el5PAE (dev-db)        03/26/2011      _i686_  (8 CPU)

01:11:13 PM     IFACE   rxpck/s   txpck/s   rxbyt/s   txbyt/s   rxcmp/s   txcmp/s  rxmcst/s
01:11:14 PM        lo      0.00      0.00      0.00      0.00      0.00      0.00      0.00
01:11:14 PM      eth0    342.57    342.57  93923.76 141773.27      0.00      0.00      0.00
01:11:14 PM      eth1      0.00      0.00      0.00      0.00      0.00      0.00      0.00

10. Report Sar Data Using Start Time (sar -s)

When you view historical sar data from the /var/log/sa/saXX file using the “sar -f” option, it displays all the sar data for that specific day, starting from 12:00 a.m.
Using the “-s hh:mi:ss” option, you can specify the start time. For example, if you specify “sar -s 10:00:00”, it will display the sar data starting from 10 a.m. (instead of from midnight), as shown below.
You can combine the -s option with other sar options.
For example, to report the load average on the 23rd of this month starting from 10 a.m., combine the -q and -s options as shown below.
$ sar -q -f /var/log/sa/sa23 -s 10:00:01
Linux 2.6.18-194.el5PAE (dev-db)        03/26/2011      _i686_  (8 CPU)

10:00:01 AM   runq-sz  plist-sz   ldavg-1   ldavg-5  ldavg-15   blocked
10:10:01 AM         0       127      2.00      3.00      5.00         0
10:20:01 AM         0       127      2.00      3.00      5.00         0
...
11:20:01 AM         0       127      5.00      3.00      3.00         0
12:00:01 PM         0       127      4.00      2.00      1.00         0
There is no option to limit the end time. You just have to get creative and use the head command as shown below.
For example, starting from 10 a.m., if you want to see 7 entries, you have to pipe the above output to “head -n 10”.
$ sar -q -f /var/log/sa/sa23 -s 10:00:01 | head -n 10
Linux 2.6.18-194.el5PAE (dev-db)        03/26/2011      _i686_  (8 CPU)

10:00:01 AM   runq-sz  plist-sz   ldavg-1   ldavg-5  ldavg-15   blocked
10:10:01 AM         0       127      2.00      3.00      5.00         0
10:20:01 AM         0       127      2.00      3.00      5.00         0
10:30:01 AM         0       127      3.00      5.00      2.00         0
10:40:01 AM         0       127      4.00      2.00      1.00         2
10:50:01 AM         0       127      3.00      5.00      5.00         0
11:00:01 AM         0       127      2.00      1.00      6.00         0
11:10:01 AM         0       127      1.00      3.00      7.00         2

Mitigation of Performance Testing Impediments

An impediment is anything that prevents people from doing their job. Here are some impediments that performance testing teams have encountered.

A. Unavailability of subject matter / technical experts such as developers and operations staff.

B. Unavailability of applications to test due to delays or defects in the functionality of the system under test.

C. Lack of connectivity/access to resources due to network security ports being unavailable or other network blockage.

D. The script recorder fails to recognize applications (due to non-standard security apparatus or other complexity in the application).

E. Not enough test data to cover unique conditions necessary during runs that usually last several hours.

F. Delays in obtaining or having enough software licenses and hardware in the performance testing environment.

G. Lack of correspondence between versions of applications in performance versus in active development.

H. Managers not familiar with the implications of ad-hoc approaches to performance testing.

Fight Or Flight? Proactive or Reactive?

      Some call the list above "risks" which an organization may theoretically face.

      Risks become "issues" when they actually impact a project.

      A proactive management style at a particular organization sees value in investing up-front to ensure that desired outcomes occur rather than "fight fires" which occur without preparation.

      A reactive management style at a particular organization believes in "conserving resources" by not dedicating resources to situations that may never occur, and addressing risks when they become actual reality.

 
Subject Matter Expertise

      The Impediment
      Knowledge about a system and how it works is usually not readily available to those outside the development team.

      What documents exist are often one or more versions behind what is under development.

      Requirements and definitions are necessary to determine whether a particular behavior is intended or is a deviation from a requirement.

      Even if load testers have access to up-to-the-minute wiki entries, load testers usually are not free to interact as a peer of developers.

      Load testers are usually not considered a part of the development team or even the development process, and are therefore perceived as an intrusion by developers.

      To many developers, performance testers are a nuisance who waste time poking around a system that is "already perfect" or "one we already know is slow".

      What can reactive load testers do?
      Work among developers and eavesdrop on their conversations (like those studying animals in the wild).

      What can proactive load testers do?
      Up-front, an executive formally establishes expectations for communication and coordination between developers and load testers.

      Ideally, load testers participate in the development process from the moment a development team is formed so that they are socially bonded with the developers.

      Recognizing that developers are under tight deadlines, the load test team member defines exactly what is needed from the developer and when it is needed.

      This requires up-front analysis of the development organization:

          o the components of the application
          o which developers work on which component
          o contact information for each developer
          o existing documents available and who wrote each document
          o comments in blogs written by each developer

      An executive assigns a "point person" within the development organization who can provide this information.

      Assignments for each developer need to originate from the development manager under whom that developer works.

            When one asks/demands something without the authority to do so, that person will over time be perceived as a nuisance.

            No one can serve two masters. For you will hate one and love the other; you will be devoted to one and despise the other.

      A business analyst who is familiar with the application's intended behavior makes a video recording of the application using a utility such as Camtasia from TechSmith. A recording has the advantage of capturing the timing as well as the steps.

              

The U.S. military developed the web-based CAVNET system to collaborate on innovations to improvise around impediments found in the field.


Availability of applications

      The Impediment
      Parts of an application under active development become inaccessible while developers are in the middle of working on them.

      The application may not have been built successfully. There are many root causes for bad builds:

          o Specifications of what goes into each build are not accurate or complete.
          o Resources intended to go into a particular build are not made available.
          o An incorrect version of a component is built with newer incompatible components.
          o Build scripts and processes do not recognize these potential errors, leading to build errors.
          o Inadequate verification of build completeness.

      What can reactive load testers do?
      Frequent (nightly) builds may give testers more opportunities than losing perhaps weeks waiting for the next good build.

      Testers switch to another project/application when one application cannot be tested.

      What can proactive load testers do?
      Use a separate test environment that is updated from the development system only when parts of the application become stable enough to test.

      Have a separate test environment for each version so that work on a prior version can occur when a build is not successful on one particular environment.

      Develop a "smoke test" suite to ensure that applications are testable.

      Coordinate testing schedules with what is being changed by developers.

      Analyze the root causes of why builds are not successful, and track progress on eliminating those causes over time.

              
Connectivity/access to resources

      The Impediment
      Workers may not be able to reach the application because of network (remote VPN) connectivity or security access.

      What can reactive load testers do?
      Work near the physical machine.

      Grant unrestricted access to those working on the system.

      What can proactive load testers do?
      Analyze the access for each functionality required by each role.

      Pre-schedule when those who grant access are available to the project.

              
 Script Recorder Recognition

      The Impediment
      Load test script creation software such as LoadRunner works by listening to and capturing what goes across the wire, and displaying those conversations as script code that may be modified by humans.

      Such recording mechanisms are designed to recognize only standard protocols going through the wire.

      Standard recording mechanisms will not recognize custom communications, especially within applications using advanced security mechanisms.

      Standard recording mechanisms also have difficulty recognizing complex use of JavaScript or CSS syntax in SAP portal code.

      What can reactive load testers do?
      Skip (de-scope) portions which cannot be easily recognized.

      What can proactive load testers do?
      To ensure that utility applications (such as LoadRunner) can be installed, install them before locking down the system.


              
 Test Data

      The Impediment
      Applications often only allow a certain combination of values to be accepted. An example of this is only specific postal zip codes being valid within a certain US state.

      Using the same value repeatedly during load testing does not create a realistic emulation of actual behavior because most modern systems cache data in memory, which is 100 times faster than retrieving data from a hard drive.

      This discussion also includes role permissions having a different impact on the system. For example, the screen of an administrator or manager would have more options. The more options, the more resources it takes just to display the screen as well as to edit input fields.

      A wide variation in data values forces databases to take time to scan through files. Specifying an index used to retrieve data is the most common approach to make applications more efficient.

      Growth in the data volume handled by a system can render indexing schemes inefficient at the new level of data.

      What can reactive load testers do?
      Use a single role for all testing.

      Qualify results from each test with the amount of data used to conduct each test.

      Use trial-and-error approaches to find combinations of values which meet field validation rules.

      Examine application source code to determine the rules.

      Analyze existing logs to define the distribution of function invocations during test runs.

      What can proactive load testers do?
      Project the likely growth of the application in terms of impact on the number of rows in each key data entity. This information is then used to define the growth in rows in each table.

      Define procedures for growing the database size, using randomized data values in names.

              
 Test Environment

      The Impediment
      Creating a separate environment for load testing can be expensive for a large, complex system.

      In order to avoid overloading the production network, the load testing environment is often set up so that no communication is possible with the rest of the network. This makes it difficult to deploy resources into the environment and then retrieve run result files from it.

      A closed environment requires its own set of utility services such as DNS, authentication (LDAP), time synchronization, etc.

      What can reactive load testers do?
      Change network firewalls temporarily while using the development environment for load testing (when developers do not use it).

      Use the production fail-over environment temporarily and hope that it is not needed during the test.

      What can proactive load testers do?
      Build up a production environment and use it for load testing before it is used in actual production.

              
Correspondence Between Versions

      The Impediment
      Defects found in the version running on the perftest environment may not be reproducible by developers in the development/unit test environments running a different (more recent) version.

      Developers may have moved on to a different version, different projects, or even different employers.

      What can reactive load testers do?
      Rerun short load tests on development servers. If the server is shared, the productivity of developers would be affected.

      What can proactive load testers do?
      Before testing, freeze the total state of the application in a full backup so that the exact state of the system can be restored, even after changes are made to diagnose or fix the application on the system where the defect was found.

      Run load tests with trace logging enabled. Note that this does not exactly duplicate how the system actually runs in production mode.

              
 Ad-hoc Approaches

      The Impediment
      Most established professional fields (such as accounting and medicine) have laws, regulations, and defined industry practices which give legitimacy to certain approaches. People are trained to follow them. The consequences of certain courses of action are known.

      But the profession of performance and load testing has not matured to that point.

      The closest industry document, ITIL, is not yet universally adopted. And ITIL does not clarify the work of performance testing in much detail.

      Consequently, each individual involved with load testing is likely to have his/her own opinions about what actions should be taken.

      This makes rational exploration of the implications of specific courses of action a conflict-ridden and thus time-consuming and expensive endeavor.

      What can reactive load testers do?
      Allocate time for planning before starting actual work, until concurrence on the project plan is achieved among the stakeholders.

      Revise project completion estimates or scope as new information becomes available.

      What can proactive load testers do?
      Before the project gets under way, agree on the rationale for elements of the project plan and who will do what when (commitments of tasks and deliverables). This is difficult for those who are not accustomed to being accountable, and requests for it may result in withdrawal or other defensive behavior.

      Identify alternative approaches and analyze them before managers come up with them themselves.

      Up-front, identify how to contact each stakeholder and keep them updated at least weekly, and immediately if decisions impact what they are actively working on.

      If a new manager is inserted in the project after it starts, review the project plan and rationale for its elements.

Top 10 performance issues with a Database

Here is a list of the top 10 performance issues with a database and their most probable solutions.

Too many calls to the DB – There might be multiple trips to a single DB from various middleware components, in which any of the following scenarios can occur:


1. More data is requested than necessary, primarily for faster rendering (but in the end slowing down overall performance).
2. Multiple applications request the same data.
3. Multiple queries are executed which in the end return the same result.
This kind of problem generally arises when there is too much object orientation. The key is to strike a balance between how many objects to create and what to put in each object. Object-oriented programming may be good for maintenance, but it surely degrades performance when objects are not handled correctly.

Too much synchronization – Most developers tend to over-synchronize: large pieces of code are synchronized, often wrapped by even larger synchronized blocks. This is generally fine under low load, but under high load the performance of the application will definitely take a beating. How do you determine whether the application has sync issues? The easiest way (though not 100% foolproof) is to chart CPU time and execution time.

CPU time – the time spent on the CPU by the executed code.
Execution time – the total time the method takes to execute, including CPU, I/O, waiting to enter sync blocks, etc.
Generally, the gap between the two times is waiting time. If our troublemaking method makes neither an I/O call nor an external call, then the slowness is most probably caused by a sync issue.
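
As a minimal sketch of that heuristic in C (the method name is an illustrative stand-in): clock() measures CPU time, gettimeofday() measures wall-clock execution time, and the difference approximates waiting time.

#include <stdio.h>
#include <time.h>
#include <sys/time.h>

void suspect_method(void) { /* the method under suspicion */ }

int main(void)
{
    struct timeval t0, t1;
    clock_t c0, c1;
    double wall, cpu;

    gettimeofday(&t0, NULL);   /* wall-clock (execution) time */
    c0 = clock();              /* CPU time */
    suspect_method();
    c1 = clock();
    gettimeofday(&t1, NULL);

    wall = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
    cpu  = (double)(c1 - c0) / CLOCKS_PER_SEC;

    /* A large gap with no I/O or external calls suggests time spent
       waiting, e.g. blocked trying to enter a synchronized section. */
    printf("execution=%.3fs cpu=%.3fs waiting=%.3fs\n", wall, cpu, wall - cpu);
    return 0;
}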

Joining too many tables – The worst kind of SQL issues creep up when too many tables are joined and data has to be extracted from them. Sometimes it is just unfortunate that so many tables must be joined to pull out the necessary data.
There are two ways to attack this problem:
1) Is it possible to denormalize a few tables so that each holds more of the needed data?
2) Is it possible to create a summary table, holding most of the information, that is updated periodically?
Returning a large result set – Generally, no user will go through thousands of records in a result set. Most users will limit themselves to the first few hundred (or the first 3-4 pages). By returning all the results, the developer is not only slowing the database but also choking the network. Breaking the result set into batches on the database side will generally solve this issue (though this is not always possible); see the sketch below.
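
For example, on the database side a page of rows can be returned at a time. The sketch below uses SQL Server 2012+ OFFSET/FETCH syntax; the table, columns, and page size are illustrative assumptions:

-- Return rows 201-300 (page 3) instead of the whole result set.
SELECT order_id, customer_id, order_date
FROM   orders
ORDER  BY order_date DESC
OFFSET 200 ROWS FETCH NEXT 100 ROWS ONLY;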

Joining tables in the middleware – SQL is a fantastic language for data manipulation and retrieval. There is simply no need to move data to a middle tier and join tables there. Joining data in the middle tier generally causes:
1. Unnecessary load on the network, which has to transport data back and forth.
2. Increased memory requirements on the application server to handle the extra load.
3. A drop in server performance, as the app tier is mainly held up processing large queries.
The best way to approach this problem is to use inner and outer joins right in the database itself. This way, all the power of SQL and the database is utilized to process the query.
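
A sketch of the idea, with illustrative table and column names: instead of fetching customers and orders separately and matching them in application code, let the database perform the join in one round trip:

SELECT c.customer_name, o.order_id, o.total
FROM   customers c
INNER JOIN orders o ON o.customer_id = c.customer_id
WHERE  o.order_date >= '2011-01-01';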

Ad hoc queries – Just because SQL gives you the ability to create and use ad hoc queries, there is no point in abusing them. In quite a few cases, ad hoc queries create more mess than the advantage they bring. The best way is to use stored procedures. This is not always possible; when ad hoc queries are necessary, there is no option but to use them, but whenever possible it is recommended to use stored procedures. The main advantages of stored procedures are:
1. Precompiled and ready.
2. Optimized by the DB.
3. The stored procedure lives on the DB server, i.e., there is no network transmission of large SQL requests.
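
As a small sketch (SQL Server syntax; the procedure, table, and column names are assumptions):

-- Compiled and optimized once on the DB server; afterwards only the
-- short call "EXEC dbo.GetCustomerOrders 42" crosses the network.
CREATE PROCEDURE dbo.GetCustomerOrders
    @CustomerId INT
AS
BEGIN
    SELECT order_id, order_date, total
    FROM   orders
    WHERE  customer_id = @CustomerId;
END;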

Lack of indices – You see that the data is not large, yet the DB seems to be taking an abnormally long time to retrieve the results. The most probable cause is a missing or misconfigured index. At first sight this might seem trivial, but when the data grows large it plays a significant role: there can be a significant hit in performance if the indices are not configured properly.
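
A minimal sketch with illustrative names: if the slow query filters on customer_id, an index on that column lets the database seek directly to the matching rows instead of scanning the whole table:

-- Without this index, "WHERE customer_id = ?" forces a full table scan.
CREATE INDEX ix_orders_customer_id ON orders (customer_id);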

Fill factor – One of the other things to consider along with indexing is the fill factor. MSDN describes fill factor as a percentage that indicates how much the Database Engine should fill each index page during index creation or rebuild. The fill-factor setting applies only when the index is created or rebuilt. Why is this so important? If the fill factor is too high, then when new records are inserted the DB will more often than not have to split index pages (page splitting) into new pages. This is very resource intensive and causes fragmentation. On the other hand, a very low fill factor means that a lot of space is reserved for the index alone. The easiest way to decide is to look at the type of queries that come to the DB: if there are mostly SELECT queries, then it is best to leave the default fill factor; if there are lots of INSERT, UPDATE, and DELETE operations, a fill factor other than 0 or 100 can be good for performance, provided the new data is evenly distributed throughout the table.
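
For instance, in SQL Server the fill factor can be set per index at creation or rebuild time (index and table names as in the earlier sketch):

-- Leave 20% free space in each leaf page to absorb inserts
-- without page splits.
CREATE INDEX ix_orders_customer_id ON orders (customer_id)
WITH (FILLFACTOR = 80);

-- Or when rebuilding an existing index:
ALTER INDEX ix_orders_customer_id ON orders
REBUILD WITH (FILLFACTOR = 80);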

My query was fine last week but it is slow this week?? – We get to see a lot of this. The load test ran fine last week, but this week the search page is taking a long time. What is wrong with the database? The main issue could be that the execution plan (the way the query gets executed on the DB) has changed. The easiest way to check is to get the current explain plan and the explain plan from the previous week, compare them, and look for the differences.
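
How you capture the plan depends on the database. In Oracle, for example (the query itself is an illustrative assumption):

-- Capture and display the execution plan for the suspect query.
EXPLAIN PLAN FOR
SELECT order_id, total FROM orders WHERE customer_id = 42;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

SQL Server users can instead enable SET SHOWPLAN_XML ON (or save the graphical plan) and diff the plans captured in the two weeks.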

High CPU and Memory Utilization on the DB – There is a high CPU and high Memory utilization on the database server. There could be a multitude of possible reasons for this.
1. See if there are full table scans happening (soln: create index and update stats)
2. See if there is too much context switching (soln: increase the memory)
3. Look for memory leaks (in terms of tables not being freed even after their usage is complete) (soln: recode!)

There can be many more reasons, but these are the most common ones.

Low CPU and memory utilization yet poor performance – This is another case (though not frequent): the CPU and memory are optimally used, yet the performance is still slow. This usually comes down to one of two reasons:
1. Bad network – the database server is waiting for a socket read or write
2. Bad disk management – the database server is waiting for a disk controller to become free

As always, these are only the most common database performance issues that might come up in any performance test. There are many more of them out there.