Lesson 7: Analyzing Your Scenario
In the previous lessons you learned how to design, control, and execute a scenario run. Once you have run a load test against your server, you will want to analyze the run and pinpoint the problems that must be eliminated to improve system performance.
The graphs and reports produced during your analysis session present important information about the performance of your scenario. Using these graphs and reports, you can pinpoint and identify the bottlenecks in your application, and determine what changes need to be made to your system to improve its performance.
In this lesson, you will learn how to analyze the results of a scenario run.
The aim of the analysis session is to find the failures in your system’s performance and then pinpoint the source of these failures, for example:
- Were the test expectations met? What was the transaction response time on the user's end under load? Did the system meet, or deviate from, its SLA goals? What was the average response time of the transactions?
- What parts of the system could have contributed to the decline in performance? What was the response time of the network and the servers?
- Can you find a possible cause by correlating the transaction times with the back-end monitor metrics?
In the following sections, you will learn how to open LoadRunner Analysis, and build and view graphs and reports that will help you find performance problems and pinpoint the sources of these problems.
Open HP LoadRunner Analysis.
On the LoadRunner machine, double-click the Analysis icon on the Desktop. LoadRunner Analysis opens.
Open the analysis session file.
For the purpose of this section in the tutorial, in order to illustrate more diverse results, we ran a scenario similar to those you ran in the previous lessons. This time, however, the scenario incorporated 70 Vusers rather than 10 Vusers. You will now open the analysis session created from the results of this scenario.
Analysis contains the following primary panes:
- Session Explorer. In the upper left pane, Analysis shows the reports and graphs that are open for viewing. From here you can display new reports or graphs that do not appear when Analysis opens, or delete ones that you no longer want to view.
- Properties pane. In the lower left pane, the Properties pane displays the details of the graph or report you selected in the Session Explorer. Fields that appear in black are editable.
- Graph Viewing pane. In the upper right pane, Analysis displays the graphs. By default, the Summary Report is displayed in this area when you open a session.
- Legend pane. In the lower right pane, you can view data from the selected graph.
Note: There are additional panes that can be accessed from the toolbar. These panes can be dragged and dropped anywhere on the screen.
In this section, you will be introduced to the Service Level Agreement, or SLA.
SLAs are specific goals that you define for your load test scenario. Analysis compares these goals against performance-related data that LoadRunner gathers and stores during the run, and then determines the SLA status (Pass or Fail) for the goal.
For example, you can define a specific goal, or threshold, for the average transaction time of a transaction in your script. After the test run ends, LoadRunner compares the goals you defined against the actual recorded average transaction times. Analysis displays the status of each defined SLA, either Pass or Fail. For example, if the actual average transaction time did not exceed the threshold you defined, the SLA status will be Pass.
As part of your goal definition, you can instruct the SLA to take load criteria into account. This means that the acceptable threshold will vary depending on the level of load, for example, Running Vusers, Throughput, and so on. As the load increases, you can allow a higher threshold.
Depending on your defined goal, LoadRunner determines SLA statuses in one of the following ways:
- SLA status determined at time intervals over a timeline. Analysis displays SLA statuses at set time intervals (for example, every 5 seconds) over a timeline within the run.
- SLA status determined over the whole run. Analysis displays a single SLA status for the whole scenario run.
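The difference between the two modes can be sketched in a few lines of Python. This is illustrative only, not LoadRunner's internal logic; the threshold and the per-interval averages are hypothetical values.

```python
# Sketch of the two SLA evaluation modes: per-interval statuses over a
# timeline, versus a single status for the whole run. Illustrative
# Python, not LoadRunner code; all numbers are hypothetical.
THRESHOLD = 10.0                       # defined SLA goal, in seconds
samples = [4.2, 6.1, 11.8, 9.7, 13.4]  # avg response time per 5-second interval

# Mode 1: an SLA status for each time interval over the timeline
per_interval = ["Pass" if t <= THRESHOLD else "Fail" for t in samples]

# Mode 2: a single SLA status for the whole run, here judged on the
# whole-run average
whole_run_avg = sum(samples) / len(samples)
whole_run = "Pass" if whole_run_avg <= THRESHOLD else "Fail"

print(per_interval)   # ['Pass', 'Pass', 'Fail', 'Pass', 'Fail']
print(whole_run)      # Pass: the whole-run average (9.04 s) is under 10 s
```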
SLAs can be defined either before running a scenario in the Controller, or after in Analysis itself.
In the following section, you will define an SLA using the HP Web Tours example. Assume that the administrator of HP Web Tours would like to know whenever the average transaction time of the check_itinerary transaction exceeds certain values. To do this, you select the transaction and then set threshold values. These threshold values are the maximum amounts of time that would be acceptable as average transaction times.
You will also set these threshold values to take certain load criteria into account; in this case Running Vusers. In other words, as the number of running Vusers increases, the threshold value rises.
This is because although the HP Web Tours administrator would like the average transaction times to be as low as possible, it is understood that at certain times of the year it is reasonable to assume that the HP Web Tours site will have to handle a higher load than at other times of the year. For example, during peak travel season, a higher number of travel agents log on to the site to book flights, check itineraries, and so on. Given this understandably higher load, at these times a slightly longer average transaction time will be acceptable.
You will set the SLA to take three load scenarios into account: light load, average load, and heavy load. Each scenario will have its own threshold value.
You will define an SLA in Analysis after the scenario run.
Note: It is preferable to define an SLA in the Controller before a scenario run. However, for the purposes of this tutorial, because you are not analyzing the same test scenario that you ran in previous lessons, you will define the SLA in Analysis. To define an SLA in the Controller, click New in the Service Level Agreement section of the Design tab.
You will now define an SLA that will set specific goals for the average transaction time of the check_itinerary transaction in the sample session file.
The average transaction times will be measured at set time intervals within the run.
To define an SLA:
- Open the SLA wizard.
In LoadRunner Analysis, select Tools > Configure SLA Rules. The Service Level Agreement dialog box opens.
Click New to open the Service Level Agreement wizard.
Note: The first time you open the Service Level Agreement wizard, the Start page is displayed. If you do not want this page to be displayed the next time you run the wizard, select the Skip this page next time check box.
- Select a measurement for your goal.
Select the transactions to monitor.
In the Select Transactions page, select a transaction to monitor from the Available Transactions list.
Set the load criteria.
In the Set Load Criteria page, you instruct the SLA to take different load scenarios into account.
- Select Running Vusers from the Load Criteria dropdown list.
Set the Load Values to look like the following example:
In the above screen, you set the SLA to define an acceptable average transaction time over three potential load scenarios:
- Light load. Between 0 and 19 Vusers
- Average load. Between 20 and 49 Vusers
- Heavy load. More than 50 Vusers
- Click Next.
Set threshold values.
In the Set Threshold Values page, you define the acceptable average transaction times for the check_itinerary transaction.
Set the threshold values to look like the following example:
You just indicated that the following average transaction times are acceptable:
- Light load. 5 seconds or less
- Average load. 10 seconds or less
- Heavy load. 15 seconds or less
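The load-dependent thresholds you just set amount to a simple lookup followed by a comparison. The sketch below mirrors the example values; the function names are illustrative, not part of any LoadRunner API.

```python
# Illustrative sketch of an SLA with load criteria: pick the threshold
# for the current load level, then compare the measured average
# transaction time against it. Not LoadRunner code.

def threshold_for_load(running_vusers: int) -> float:
    """Return the acceptable average transaction time in seconds."""
    if running_vusers < 20:      # light load: 0-19 Vusers
        return 5.0
    if running_vusers < 50:      # average load: 20-49 Vusers
        return 10.0
    return 15.0                  # heavy load: 50 or more Vusers

def sla_status(avg_transaction_time: float, running_vusers: int) -> str:
    limit = threshold_for_load(running_vusers)
    return "Pass" if avg_transaction_time <= limit else "Fail"

print(sla_status(8.2, 35))   # Pass: 8.2 s under average load (limit 10 s)
print(sla_status(8.2, 15))   # Fail: 8.2 s under light load (limit 5 s)
```

Note how the same 8.2-second average passes under average load but fails under light load; this is exactly the "higher threshold under higher load" behavior described above.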
Save the SLA.
To save the SLA and close the wizard, click Next, then Finish, and then Close on the pages that follow.
Analysis applies your SLA settings to the Summary Report. The report is then updated to include all the relevant SLA information.
The Summary Report tab displays general information and statistics about the scenario run, as well as all relevant SLA information: for example, the worst-performing transactions in terms of defined SLAs, how specific transactions performed over set time intervals, and the overall SLA statuses. You open the Summary Report from the Session Explorer.
What are the overall scenario statistics?
In the Statistics Summary section, you can see that a maximum of 70 Vusers ran in this test. Other statistics such as the total/average throughput, and the total/average hits are also displayed.
What were the worst performing transactions?
The 5 Worst Transactions table shows up to five of the worst-performing transactions for which SLAs were defined.
You can see that over the duration of the check_itinerary transaction, the SLA threshold was exceeded 66.4% of the time. The average percentage by which it exceeded the SLA threshold over the whole run was 200.684%.
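Figures of this kind can be derived from per-interval measurements along the following lines. This is a sketch with hypothetical numbers, not the tutorial's recorded dataset.

```python
# Sketch: deriving "exceeded the SLA X% of the time" and "average
# exceed percentage" style figures from per-interval data. The numbers
# here are hypothetical, not the tutorial's actual results.
threshold = 10.0
intervals = [12.0, 8.0, 25.0, 30.0, 9.0]  # avg response time per interval

over = [t for t in intervals if t > threshold]
pct_of_time_exceeded = 100.0 * len(over) / len(intervals)

# average percentage by which the failing intervals went over the threshold
avg_exceed_pct = sum(100.0 * (t - threshold) / threshold for t in over) / len(over)

print(f"{pct_of_time_exceeded:.1f}% of intervals exceeded the SLA")   # 60.0%
print(f"exceeded by {avg_exceed_pct:.1f}% on average")                # 123.3%
```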
Over which time intervals was the SLA threshold exceeded?
The Scenario Behavior Over Time section shows how each transaction performed during different time intervals. The green squares show time intervals where the transaction performed within the SLA threshold, red squares where the transaction failed, and gray squares where no relevant SLA was defined.
You can see that check_itinerary, the transaction for which you defined an SLA, exceeded its threshold in most of the time intervals.
What was the overall transaction performance?
The Transaction Summary lists a summary of the behavior of each transaction.
You can also see that the check_itinerary transaction failed 28 times.
Review the response times of each transaction. The 90 Percent column displays the response time of 90% of the executions of a particular transaction. You can see that 90% of the check_itinerary transactions that were performed during the test run had a response time of up to 65.754 seconds. This is roughly double its average response time of 32.826 seconds, which means that many occurrences of this transaction had a very high response time.
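The 90 Percent figure is a 90th-percentile calculation over a transaction's individual response times. The sketch below uses the common nearest-rank definition (percentile definitions vary slightly between tools) and hypothetical data, not the tutorial's measurements.

```python
# Sketch: the "90 Percent" column is the 90th percentile of a
# transaction's response times -- 90% of executions completed at or
# below this value. Hypothetical data, nearest-rank method.
times = sorted([3.1, 5.0, 8.2, 12.4, 20.0, 31.5, 40.2, 55.0, 61.3, 65.8])

def percentile(sorted_values, pct):
    """Nearest-rank percentile: the smallest value covering pct% of samples."""
    rank = max(1, round(pct / 100 * len(sorted_values)))
    return sorted_values[rank - 1]

p90 = percentile(times, 90)
avg = sum(times) / len(times)
print(p90, round(avg, 3))   # 61.3 30.25
```

A 90th percentile far above the average, as in this example, is the same skew the Summary Report reveals for check_itinerary: a sizable share of executions are much slower than the average suggests.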
Note how the SLA Status column shows the relevant overall status for transactions in the SLA: Fail for check_itinerary.
You can access available graphs from the Session Explorer pane. You will now view and analyze the Average Transaction Response Time graph.
- Open the Average Transaction Response Time graph.
In the Session Explorer under Graphs, select Average Transaction Response Time. The Average Transaction Response Time graph opens in the graph viewing area.
Note: If no graphs are displayed in the Session Explorer pane, right-click the Graphs node and select the Transactions: Average Transaction Response Time node in the Open a New Graph dialog box. Click Open Graph to add the graph to the Session Explorer pane.
In the Legend pane, click the check_itinerary transaction. The check_itinerary transaction is highlighted in the graph.
The points on the graph represent the average time of a transaction at a specific time during the scenario. Hold your cursor over a point in the graph. A yellow box appears, and displays the coordinates of that point.
Analyze the results.
Note how the average transaction time of the check_itinerary transaction fluctuates greatly, and reaches a peak of 75.067 seconds, 2:56 minutes into the scenario run.
On a well-performing server, the transactions would follow a relatively stable average time. At the bottom of the graph, note how the logon, logoff, book_flight, and search_flight transactions have more stable average times.
In the previous section you saw instability in your server’s performance. Now you will analyze the effect of 70 running Vusers on the system’s performance.
Study the behavior of the Vusers.
In the Session Explorer, under Graphs, click Running Vusers. The Running Vusers graph opens in the graph viewing area.
You can see that there was a gradual start of running Vusers at the beginning of the scenario run. Then, for a period of 3 minutes, 70 Vusers ran simultaneously, after which the Vusers gradually stopped running.
Filter the graph so that you see only the time slice when all the Vusers ran simultaneously.
When you filter a graph, the graph data is narrowed down so that only the data for the condition that you specified is displayed. All other data is hidden.
- Right-click the graph and select Set Filter/Group By, or alternatively, click the Set Filter/Group By button on the Analysis toolbar.
- In the Filter Condition area, select the Values column of the Scenario Elapsed Time row.
- Click the down-arrow and specify a time range from 000:01:30 minutes to 000:03:45 minutes.
- Click OK.
In the Graph Settings dialog box, click OK.
The Running Vusers graph now displays only those Vusers running between 1:30 minutes and 3:45 minutes of the scenario run. All other Vusers have been filtered out.
Note: To clear the filter, right-click the graph and select Clear Filter/Group By, or alternatively, click the Clear Filter and Group By button on the Analysis toolbar.
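Conceptually, filtering by Scenario Elapsed Time just discards every data point outside the chosen window. A minimal sketch, with hypothetical sample data rather than LoadRunner output:

```python
# Sketch of what a Set Filter/Group By time-range filter does: keep
# only the data points inside the chosen time slice. Each sample is
# (elapsed_seconds, running_vusers); the data is hypothetical.
samples = [(30, 10), (60, 25), (120, 55), (150, 70), (200, 70), (260, 40)]

start, end = 90, 225            # 1:30 to 3:45, expressed in seconds
window = [(t, v) for (t, v) in samples if start <= t <= end]
print(window)                   # [(120, 55), (150, 70), (200, 70)]
```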
Correlate the Running Vusers and Average Transaction Response Time graphs to compare their data.
You can join two graphs together to see the effect of one graph’s data upon the other graph’s data. This is called correlating two graphs.
For example, you can correlate the Running Vusers graph with the Average Transaction Response Time graph to see the effect of a large number of Vusers on the average time of the transactions.
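Underneath, correlating two graphs means comparing their series over the same timeline; a Pearson correlation coefficient near 1 indicates the two measurements rose and fell together. The sketch below uses hypothetical data and is not LoadRunner's algorithm:

```python
# Sketch: Pearson correlation between two series sampled over the same
# timeline. Hypothetical data; r close to 1 means response time climbs
# with the load.
from math import sqrt

vusers       = [10, 20, 35, 50, 66, 70, 70]
avg_resp_sec = [2.1, 3.0, 5.5, 9.8, 22.0, 60.3, 75.1]

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson(vusers, avg_resp_sec)
print(round(r, 2))   # strongly positive
```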
- Right-click the Running Vusers graph and select Clear Filter/Group By.
- Right-click the graph and select Merge Graphs.
- From the Select graph to merge with list, select Average Transaction Response Time.
- Under Select type of merge, select Correlate, and click OK.
The Running Vusers and Average Transaction Response Time graphs are now displayed in one graph, the Running Vusers - Average Transaction Response Time graph.
Analyze the correlated graph.
In this graph you can see that as the number of Vusers increases, the average time of the check_itinerary transaction gradually increases. In other words, the average time increases as the load increases.
At 66 Vusers, there is a sudden, sharp increase in the average response time. This was the point at which the test broke the server: response times clearly began to degrade when more than 66 Vusers were running simultaneously.
Saving a template
So far you have filtered a graph and correlated two graphs. The next time you analyze a scenario, you might want to view the same graphs, with the same filter and merge conditions applied. You can save your merge and filter settings into a template, and apply them in another analysis session.
To save your template:
- Select Tools > Templates. The Apply/Edit Template dialog box opens.
- In the Templates pane, click the New button. The Add New Template dialog box opens.
- Enter an appropriate name for the template and click OK.
- Click Save and close to close the Apply/Edit Template dialog box.
The next time you open a new Analysis session and want to use a saved template:
- Select Tools > Templates. The Apply/Edit Template dialog box opens.
- Select your template from the list, and click Save and close.
Until now, you have seen that an increase in load on the server had a negative impact on the average response time of the check_itinerary transaction.
You can drill down further into the check_itinerary transaction to see which system resources may have negatively influenced its performance.
The Auto-correlate tool can merge all the graphs that contain data that could have had an effect on the response time of the check_itinerary transaction, and pinpoint what was happening at the moment the problem occurred.
From the graph tree, select the Average Transaction Response Time graph.
Look at the check_itinerary transaction, particularly at the slice of elapsed time between 1 and 4 minutes. The average response time started to increase almost immediately, until it peaked at nearly 3 minutes.
- Filter the Average Transaction Response Time graph to display only the check_itinerary transaction.
- Auto-correlate the graph.
- Right-click the graph, and select Auto Correlate.
In the Auto Correlate dialog box, make sure that the measurement to correlate is check_itinerary, and set the time range from 1:20 to 3:40 minutes - either by entering the times in the boxes, or by dragging the green and red poles into place along the Elapsed Scenario Time axis.
The auto-correlated graph opens in the graph viewing area. The check_itinerary transaction is highlighted.
The auto-correlated graph is given a default name, Auto Correlated Graph.
- Rename the graph.
- In the Session Explorer, under Graphs, right-click Auto Correlated Graph, and select Rename Item. The graph name becomes editable.
- Type Auto Correlated - check_itinerary and press Enter, or click anywhere in the Analysis window.
Analyze the auto-correlated graph.
In the Legend pane below the graph, from the Graph column, scroll down to the Windows Resources: Pool Nonpaged Bytes and Private Bytes measurements.
In the Measurement and Correlation Match columns, you can see that these memory-related measurements have a Correlation Match of over 70% with the check_itinerary transaction. This means that the behavior of these elements was closely related to the behavior of the check_itinerary transaction during the specified time interval.
We can conclude that exactly when the check_itinerary transaction’s response time peaked, there was a shortage of system memory resources.
In addition to the graphs that appear in the graph tree at the start of an analysis session, you can display different graphs to get other information about your scenario run.
Click Graph > Add New Graph.
The Open a New Graph dialog box opens and lists the categories of graphs that contain data and can be displayed.
- Vusers. Displays information about the Vusers and their status.
- Errors. Displays error statistics.
- Transactions. Displays data about transactions and their response times.
- Web Resources. Displays hits, throughput, and connection data.
- Web Page Diagnostics. Displays data about each monitored Web page in your script.
- System Resources. Displays system resource usage data.
- Display a new graph.
- In the Open a New Graph dialog box, click the “+” next to a category to expand it.
- Select a graph and click Open Graph.
- Click Close to close the Open a New Graph dialog box.
Now open several additional graphs to understand more about your scenario run.
You can publish the findings from your analysis session in an HTML or Microsoft Word report. The report is created using a designer template, and includes explanations and legends of the presented graphs and data.
The HTML report can be opened and viewed in any browser.
To create an HTML report:
- Click Reports > HTML Report.
- Specify a file name for your report, and the path where you want to save it.
Analysis creates the report and displays it in your Web browser. Note how the layout of the HTML report is very similar to the layout of your analysis session. You can click on the links in the left pane to see the various graphs. A description of each graph is given at the bottom of the page.
Microsoft Word Reports
You can present your analysis session in a Microsoft Word report. The Word report is more comprehensive than the HTML report, because you have the option to include general information about the scenario, measurement descriptions, and so on. You can also format the report to include your company’s name and logo, and the author’s details.
Like any Microsoft Word file, the report is editable, so you can add further comments and findings after you build the report.
To create a Microsoft Word report:
Click Reports > New Report.
The New Report dialog box opens.
- In the General tab:
From Based on template, select Detailed report (for single run).
Enter a title for your report.
Enter the author’s name, job title, and the company’s name.
- In the Format tab:
By default, the report will be built with a title page, table of contents, graph details and descriptions, and measurement descriptions. You can select options which add script details into the report, allowing you to view thumbnail images of the business process steps.
You can include a company logo by selecting Include company logo and browsing to the file location. The logo must be a .bmp file.
- In the Content tab:
Select which sections of your scenario run and analysis session you want to include in your report.
For the purpose of this tutorial, you will add an executive summary to the Content Items list.
Click the Add button to open the Add Content Items window. Check Executive Summary in the grid and click OK. The Executive Summary item is added to the list in the Content Items pane.
Enter the following text into the edit box:
- Objectives: The objectives of the test scenario were to....
- Conclusions: The conclusions I reached are as follows:
- In the Content Items pane, select the Largest URLs by Average Kbytes item and click the Delete button. This will exclude this graph from the report.
Change the order in which to display the items in the report.
- In the Content Items pane, select Workload Characteristics. Click on Average Hits per Second in the Selected Columns list.
- Click the Down arrow until the item appears under Total Transactions Number. In the report, the Average Hits per Second item will follow the Total Transactions Number item.
The data is gathered and the report is created in a Microsoft Word file, which opens in Microsoft Word.
In addition to the graphs that you generated during your analysis session, the report includes an objective and a conclusion, and other sections and graphs that you chose to include while building the report.
In this lesson you learned the basics of defining a Service Level Agreement, analyzing a scenario run, and publishing your results in a report.
You have learned that performance problems can be pinpointed by studying various graphs that show bottlenecks on the server, possibly due to too heavy a load. You have seen that you can pinpoint the sources of these bottlenecks by configuring graphs to display correlated data.