
Finding the haystack: troubleshooting a hard-to-find software performance problem

By Ben Rowan

A client came to us with a problem: The latest release of business-critical, web-based software had been deployed into their Production environment. On the morning of the release, the service desk was flooded with calls from users reporting slow performance.

The release had been through the same performance testing process as all prior releases. Testing noted a small increase in measured response times, but users were reporting page load times of several seconds. The issues reported couldn’t be readily reproduced, and the vital statistics (application server resource consumption, etc.) looked good. There were no reports of other desktop or web applications running slow. So what was the problem, and how did we find it? This blog will be split into four parts:

Part one: On the hunt

When trying to find a problem like this one, it’s important to understand what work has already been done to assess the known performance impacts associated with the release. This information can be used to do three things:
  1. Determine whether the reported user experience is consistent with the performance test results. This helps to separate an “expected” change in performance from an “unexpected” change.
  2. Determine whether the performance seen in Production differs significantly from the performance seen during testing. If so, the problem may be environment or deployment related (for example, thread pools not correctly sized during deployment).
  3. Most importantly, identify any gaps between how real users access the system and how the performance testing was conducted.

Performance testing

The “what” and the “how”

The application was performance tested using Microsoft Visual Studio and WildStrait PI, which apply load at the HTTP layer (emulating the requests made by a real web browser). Timing points were gathered by grouping individual HTTP response times together into a “transaction.” The transaction time represents the total time a user waited for the web application to provide the information required to draw the web page.
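As a rough illustration (not WildStrait PI’s actual output, and how a given tool groups and aggregates timings is tool-specific), the idea looks something like the sketch below, where the page name and timings are invented and the individual response times are simply summed:

```python
# Illustration only (not WildStrait PI output): a "transaction" time is built up
# from the individual HTTP response times that make up a page load. Here the
# page name and timings are invented, and the timings are simply summed.
from typing import Dict, List

def transaction_time(http_timings_ms: List[float]) -> float:
    """Total time the user waited for the responses needed to draw the page."""
    return sum(http_timings_ms)

# Hypothetical timings for one "Search" page load: HTML, two AJAX calls, CSS.
page_timings: Dict[str, List[float]] = {
    "Search": [320.0, 180.0, 95.0, 60.0],
}

for name, timings in page_timings.items():
    print(f"{name}: {transaction_time(timings):.0f} ms")  # Search: 655 ms
```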

When HTTP emulation is done properly, a webserver should not be able to distinguish the traffic it generates from traffic generated by a real web browser. HTTP emulation allows for accurate measurement of the stability, capacity and response time of a webserver, and provides an indication of the best-case user experience.

What this approach lacks is the ability to measure browser-side processing (JavaScript, etc.). With web-based applications it is often the case that factors such as browser-side performance and internet connection performance are out of scope for performance testing: While it’s useful to understand how these factors affect usability and the user experience, the end-user’s hardware and internet connection are ultimately outside the control of the system’s owners. In this engagement, browser performance risk was mitigated by having real users exercise the system from a real browser while HTTP load was applied via the test rig.

Releases of the application undergo performance testing in a non-functional test environment that is as close to Production as possible (server resources, data volumes, user volumes, etc.). The test workload encompasses some 27 different transaction types, both front-end GUI transactions and back-end SOAP/JMS requests. The overall system workload was expected to remain unchanged for this release (no new functionality was added, and business transaction volumes were unchanged by the release).

Performance test transaction response time results

The results of the performance tests executed in the test environment showed that, overall, the 80th percentile response time had increased from about 850ms to 950ms, and the 90th percentile from about 1.6 seconds to 1.7 seconds. In this context, 100ms is a relatively small change. (I’d challenge anyone to consistently tell the difference between an 850ms and a 950ms page draw!) Manual testing executed in parallel with the automated test suite did not identify any performance problems.

None of those results aligned with what users were reporting.

Production versus performance test environments

Most webservers record information about the requests they receive and the responses they provide. Among other things, this information normally includes the URL of the request, the request method (GET/POST/LOCK, etc.), the HTTP response code (200, 304, etc.) and – if configured to do so – the response time. While it is difficult to process these logs into a format that makes them directly comparable with WildStrait PI transaction response times, it was possible to directly compare the logs from the performance test (NFR) and Production environments (a rough sketch of this kind of comparison follows the list below). We found that:

  • Webserver request volumes were a little higher in NFR than in Production. This is good, as it tells us the test workload slightly overstated the Production workload.
  • Webserver response times were substantially the same between the NFR and Production environments.
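For illustration, the NFR-versus-Production comparison can be sketched roughly as below. The log file names, the position of the response-time field and its units are assumptions for the example – real access-log formats vary with the webserver and its configuration – and the same approach works equally well when grouping by host or by URL:

```python
# Rough sketch: compare response times recorded in webserver access logs from the
# NFR and Production environments. Assumes the response time (in ms) has been
# configured as the last field of each log line - an assumption, not a standard.
from pathlib import Path
from typing import List

def response_times_ms(log_path: str) -> List[float]:
    """Pull the per-request response time (assumed last field, in ms) from an access log."""
    times = []
    for line in Path(log_path).read_text().splitlines():
        fields = line.split()
        try:
            times.append(float(fields[-1]))
        except (IndexError, ValueError):
            pass  # skip blank or malformed lines
    return times

def percentile(data: List[float], pct: float) -> float:
    """Nearest-rank percentile; coarse, but fine for a sanity comparison."""
    ordered = sorted(data)
    idx = max(0, int(round(pct / 100 * len(ordered))) - 1)
    return ordered[idx]

for env, path in [("NFR", "nfr_access.log"), ("Production", "prod_access.log")]:
    times = response_times_ms(path)
    print(f"{env}: {len(times)} requests, "
          f"p80={percentile(times, 80):.0f} ms, p90={percentile(times, 90):.0f} ms")
```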

This analysis tells us there’s no gross difference between the performance of NFR and Production: We’d expect users to see a response time increase of about 100ms following this release, not the several seconds being reported to the service desk.

Further analysis was conducted to confirm that all application hosts, webservers and application servers reported the same performance. This consistency told us the problem wasn’t isolated to a single application host, webserver or application server – all instances were performing comparably.

While the comparable performance between NFR and Production suggested there were no resource-related limitations, we did confirm that the key resources (CPU, memory/swapping, network and disk) were operating comfortably at both the VM and physical levels.
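The real checks came from the platform’s standard monitoring tooling; as a minimal sketch of the kind of spot check involved, something like the following (using the third-party psutil library) gives a quick read of a single host’s vital statistics:

```python
# Quick spot check of the "vital statistics" on a single host using psutil
# (pip install psutil). Illustrative only - the actual investigation relied on
# the environment's existing monitoring.
import psutil

cpu = psutil.cpu_percent(interval=1)   # % CPU averaged over a 1-second sample
mem = psutil.virtual_memory()          # physical memory usage
swap = psutil.swap_memory()            # swap usage (a proxy for paging pressure)
disk = psutil.disk_io_counters()       # cumulative disk I/O since boot
net = psutil.net_io_counters()         # cumulative network I/O since boot

print(f"CPU: {cpu:.0f}%  Memory: {mem.percent:.0f}%  Swap: {swap.percent:.0f}%")
print(f"Disk read/write bytes: {disk.read_bytes}/{disk.write_bytes}")
print(f"Net sent/recv bytes:   {net.bytes_sent}/{net.bytes_recv}")
```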

Elimination, part one

Measurement of response times at the HTTP layer neatly encapsulates the performance of all components “behind” the webserver:

  • Application servers
  • Database server
  • Underlying storage (SAN), including database storage
  • Network interconnects between the above systems

So we could conclude from our investigation that the performance problem was very unlikely to be found server-side.

This is a good starting point, as it narrows the problem space considerably. But it does leave a long list of possibilities: Everything in front of the webserver. This includes:

  • Load balancer (as this is outside the webserver response time measurements)
  • Network (WAN and LAN) between the client and the server
  • End-user computing environment
  • Web browser and client side script

Look out for the next article in this series, 'Part two: Following the trail to performance problems in front of the webserver'.
