In this article, we explore the concept of performance modelling and how it can be used to build an accurate system performance model, which in turn forms the basis of realistic performance test scenarios.

 

What is Performance Modelling / Volumetric Analysis?

Performance modelling is also known as volumetric analysis or volumetric modelling. It is the process of using volumetric data (expected or historical) about a system's usage to create a statistical model of how load will be applied to the system.

This should not be confused with capacity planning, although the output of performance modelling can play a major part in any future capacity planning undertakings.

 

Why is this modelling important?

The creation of an accurate model provides insight into how a proposed system will behave, or how an existing system behaves at present. It is a prerequisite for creating realistic performance test scenarios.

An inaccurate model will lead to the creation of unrealistic performance test scenarios and ultimately deliver inaccurate results.

 

Inputs to create a Model

All of this data can be sourced from historical records, or from expected values typically determined by business analysts.

The three types of statistics that should be collected are as follows.

 

a) User profile statistics

The most common user profile is a user role: for example, a job, a department, or a position. In an investment bank, some users will be front-office users, others middle-office users, and some back-office users. Even though they may be using the same system, their usage patterns will differ because they access different parts of the system.

Not all users will act in the same way or do the same things. Take front-office users as an example: some will work faster than others, creating five trades an hour while others create three on average. A profile is typically a combination of business processes that a group of users will perform.

For every profile, you will need to establish:

  • How many users of that profile are at each location?
  • How many times a typical user of that profile will execute each transaction?
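As an illustration, a profile can be captured as a named mix of business processes with an average hourly rate per process. The profile names, process names, and rates below are entirely hypothetical:

```python
# A user profile is a combination of business processes and the average
# number of times a user of that profile performs each one per hour.
# All names and rates here are illustrative, not real data.
profiles = {
    "front_office": {"create_trade": 5, "amend_trade": 2},
    "middle_office": {"validate_trade": 8},
    "back_office": {"settle_trade": 6, "generate_report": 1},
}

def hourly_load(profile_name, user_count):
    """Transactions per hour generated by a group of users of one profile."""
    rates = profiles[profile_name]
    return {process: rate * user_count for process, rate in rates.items()}

print(hourly_load("front_office", 10))
# {'create_trade': 50, 'amend_trade': 20}
```

Multiplying each profile's rates by its user count at each location is exactly how the per-location tables later in this article are derived.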

 

b) Transaction statistics

Here we define a transaction as a business process: for instance, the creation of a trade or the settlement of a trade. This is the most important set of volumetric data you will need in order to build a model. For each transaction included, you need to establish the peak number of executions per hour.

For every transaction you will need to know:

  • How many users will be executing it from each location?
  • How many times it will be executed from each location?
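A minimal sketch of how this data might be recorded and the per-transaction peak hour derived from it. The transaction name, locations, and hourly counts are made-up figures for illustration:

```python
# Hourly execution counts per transaction, per location (illustrative data).
# counts[transaction][location] is a 24-element list, one entry per hour.
counts = {
    "create_trade": {
        "London": [0] * 8 + [30, 45, 60, 40] + [0] * 12,
        "Pune":   [0] * 3 + [20, 25, 15] + [0] * 18,
    },
}

def peak_hour(transaction):
    """Return (hour, total) for the hour with the most executions
    of a transaction, summed across all locations."""
    totals = [sum(loc[h] for loc in counts[transaction].values())
              for h in range(24)]
    best = max(range(24), key=lambda h: totals[h])
    return best, totals[best]

print(peak_hour("create_trade"))  # (10, 60): hour 10:00-11:00, 60 executions
```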

 

c) User Location statistics

If users are at different geographical locations, there will be an impact on performance which needs to be considered. 

An application hosted in London Docklands will be slower when accessed from Pune than it would be when accessed from the City.

This data will be used to create a model that takes the different time zones into account.

To model user locations, you will need to know:

  • Which locations will the application be accessed from?
  • How many users are sitting at each location?
  • How many users of each profile are sitting at each location?
  • How many of each transaction will be executed per hour? (This can be either for every hour or just for the peak hour.)
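The location statistics can be sketched as a simple nested table of users per location, broken down by profile. All locations, profiles, and headcounts below are hypothetical:

```python
# Users per location, broken down by profile (illustrative figures).
users_by_location = {
    "London":   {"front_office": 40, "middle_office": 25},
    "Pune":     {"back_office": 60},
    "New York": {"front_office": 20},
}

def total_users(location):
    """Total headcount at one location, across all profiles."""
    return sum(users_by_location[location].values())

def users_of_profile(profile):
    """Total users of a given profile, across all locations."""
    return sum(p.get(profile, 0) for p in users_by_location.values())

print(total_users("London"))             # 65
print(users_of_profile("front_office"))  # 60
```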

 

Once you have the three sets of volumetric data detailed in the diagram below (user numbers by profile, user numbers by location, and transactional volumes), the next step is to model the peak hour of the application, identifying when the greatest expected peak takes place so that it can be simulated.

 

Image 1

In addition, so that we take all elements into account, we need to create a 24-hour model of the application: some users may work in different time zones or shifts, and all of this will affect the peaks and spikes within the application.

We can create this 24-hour ‘system activity model’, detailing users (with locations) and the transactions running on the system, by building the following tables from the data collected previously:

    • Task Distribution (over a 24-hour period on the busiest day)
    • User Distribution (over a 24-hour period on the busiest day)
    • User Profiles per Location (for Peak Hour) 
    • Transaction Volumes per User Profile (for Peak Hour)
    • Transaction Volumes per Location (for Peak Hour)

 

1. Task Distribution (over a 24-hour period on the busiest day)

This is a model of the transactional activity over the course of 24 hours on the busiest day. It allows you to identify the busiest (peak) hour and the total number of transactions per day.

Graph 1

From the above table, we can now create a graph representing the expected real-world transactional usage model.

The following is an example of a 24-hour graph of the number of transactions (per transaction).

Bar Chart 1
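The same task distribution can be summarised programmatically: given a list of hourly transaction totals, the daily total and the peak hour fall out directly. The 24 hourly values below are placeholders, not figures from the tables above:

```python
# Total transactions per hour over the busiest day (24 illustrative values,
# hour 0 = midnight-01:00).
hourly_totals = [5, 2, 1, 1, 3, 8, 20, 45, 90, 120, 150, 130,
                 100, 110, 125, 115, 95, 60, 30, 15, 10, 8, 6, 4]

daily_total = sum(hourly_totals)
peak = max(range(24), key=lambda h: hourly_totals[h])

print(f"Daily total: {daily_total} transactions")
print(f"Peak hour: {peak}:00-{peak + 1}:00 "
      f"({hourly_totals[peak]} transactions)")
```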

 

 

2. User Distribution (over a 24-hour period on the busiest day)

 

This is a model of the users on the busiest day, over the course of 24 hours. It shows the total number of users using the system in the busiest (peak) hour.

Graph 2.1

From the above table, we can now create a graph representing the expected real-world user usage model.

The following is an example of a 24-hour graph of the number of users (by location).

 

Bar Chart 2

3. User Profiles per Location (for the peak hour)

It should look something like the example below, based on the peak hour being 10:00-11:00 in the user distribution table:

Graph 3.1

4. Transaction Volumes per User Profile (for the peak hour)

It should look something like the example below, based on the peak hour being 10:00-11:00 in the task distribution table:

Graph 4.1

 

5. Transaction Volumes per Location (for the peak hour)

It should look something like the example below, based on the peak hour being 10:00-11:00 in the task distribution table:

Graph 5.1

Using the above data, we can start creating realistic performance scenarios based on the peak hour.

Based on the data, we know that during the peak hour:

  • 479 transactions were executed by 276 different users
  • 479 transactions were executed across 5 different locations
  • 4 different user profiles were used across the 5 locations.
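These peak-hour figures translate directly into scenario parameters. A rough sketch of the arithmetic, using the 479 transactions and 276 users above (how you apply the resulting pacing depends on your load tool):

```python
# Peak-hour figures from the model above.
transactions_per_hour = 479
concurrent_users = 276

# Target system throughput for the test scenario.
throughput_per_sec = transactions_per_hour / 3600

# Average transactions each virtual user must complete in the hour,
# and the pacing interval (seconds between iteration starts) needed
# to hit the target rate with this many users.
tx_per_user = transactions_per_hour / concurrent_users
pacing_seconds = 3600 / tx_per_user

print(f"Throughput target: {throughput_per_sec:.3f} tx/s")
print(f"Pacing per user: {pacing_seconds:.0f} s between iterations")
```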

 

In conclusion, by sourcing the relevant data to build an accurate system performance model, we can create performance test scenarios that validate whether the proposed system will work under its expected peak conditions.

To see how SQA Consulting can assist your company with performance testing your applications, please contact us.
