Apart from functional testing, there are many other quality indicators the Sqills Testing Team is involved in. One of these is performance. Testing the performance of S3 Passenger is an integral part of the suite's release cycle. Load testing is the part of performance testing where we check whether a change in our code slows down the application. This differs from stress testing, where you want to see how much strain a system can handle before it breaks. Today I want to focus on how load testing of S3 Passenger is done at Sqills. I will start with a look at some of the tooling, followed by an example of a test run.
Tooling – Gatling
A little over a year ago, JMeter, the industry standard for a long time, was our go-to performance-testing tool. However, we were not quite satisfied with the results we got. As our monitoring improved, it became clear that the way JMeter works (synchronously, with a constant number of open threads) does not quite match the load profile we see on our customers' systems.
Enter Gatling…
This is a relatively new tool that proved to offer a much more production-like and consistent load on our systems. In contrast to JMeter, which has a UI to set up your tests, Gatling lets you write your tests in Scala, a functional programming language running on the Java Virtual Machine (JVM). This approach, while it has a steeper learning curve, makes Gatling very flexible. Finally, Gatling also generates very nice reports that give a good representation of the performance of each individual call.
Figure 1: Gatling Load-test Run report
The table in figure 1 is an overview of the performance indicators of the (JSON) requests as displayed on the front page of a Gatling report. Per call, you can see the number of requests made, the average requests per second, the minimum and maximum response times, and the different percentiles in milliseconds. For example, these percentiles show that 99% of the journey-search calls had a response time of < 79 ms. This is valuable information when analysing test results: a single call that timed out (for whatever reason) can greatly increase the mean response time, giving a false impression of the performance of this call, while the 99th percentile will still show that the call itself is at the desired speed.
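To make that concrete, here is a small plain-Scala illustration (the numbers are made up, not from an actual run) of how a single timeout distorts the mean while leaving the 99th percentile intact:

```scala
// 99 healthy responses of ~70 ms plus one 30-second timeout.
val normal: Seq[Double] = Seq.fill(99)(70.0)
val withTimeout: Seq[Double] = normal :+ 30000.0

def mean(xs: Seq[Double]): Double = xs.sum / xs.size

// Nearest-rank percentile: the value at the p-th rank of the sorted samples.
def percentile(xs: Seq[Double], p: Double): Double = {
  val sorted = xs.sorted
  sorted(math.ceil(p / 100.0 * sorted.size).toInt - 1)
}

println(mean(withTimeout))           // 369.3 ms – wildly off
println(percentile(withTimeout, 99)) // 70.0 ms – still representative
```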
Figure 2: Gatling Load-test Run report details
The report above displays some more metrics of a Gatling run. These graphs show the number of users per scenario (more on that later) and a response-time distribution over all requests. This information is available across all calls, and it is also possible to drill down into each individual call.
Tooling – Monitoring
When an application is being tested, proper monitoring of your system's performance can make all the difference between having to guess why a response is slow and being able to pinpoint it exactly. While Gatling provides a very detailed report, it cannot be aware of the internal workings of our application. New Relic, and to a lesser extent Zabbix, offer us this insight.
Though we expect to move to Prometheus soon, we currently use Zabbix to monitor our servers, usually Docker containers each running a component of S3 Passenger. Metrics like CPU load, CPU utilization, memory usage, and swap usage can be viewed for each machine running in the application suite. For example, when a load test is running, heavy load on one container and low load on another identical one might indicate something is amiss with the load balancing, and a slight but steady increase in memory usage can indicate a memory leak.
Figure 3: Zabbix Screens overview
New Relic, on the other hand, provides us with real-time insight into the core of the system. Each individual call (internal and external) can be measured and analysed in terms of its different components. In the figure below, you can see the getBooking call. This call consists of its own internal logic, which takes time to process, but it is also dependent on other application components. New Relic allows us to pinpoint exactly what is slow about a specific call. It can go even further: if any call passes a predefined threshold, New Relic records a complete trace detailing all actions within that component and their corresponding processing times. Naturally, this is very helpful when doing detailed performance analysis.
Figure 4: getBooking call breakdown in New Relic
Tooling – Other
There is quite a bit of other tooling in place to automate our tests and analyse the results.
To analyse log-files and error messages we use Sentry (Open-source error tracking that helps developers monitor and fix crashes in real time) and Kibana (monitoring of log files).
In order to prepare our test environment and start from the same baseline every time, we empty the database using a Postgres Docker image and some SQL commands, and we use Postman to make REST calls to whichever environment we specify in order to reset caches.
Since load testing has its own place in the SDLC (Systems Development Lifecycle), we naturally want to be able to run it in our Continuous Integration / Continuous Deployment pipeline. We use Jenkins to do this.
Preparing and running a load-test
Now for the more technical stuff: I would like to show you how our tests are set up on the inside. We will focus on the BOOKING flow. First, we look at the different variables and how they are used to create versatile tests. We then look at what a Simulation is, and how a Scenario inside a simulation is built up. Finally, we look at the definition of the JSON request.
Variables
Let's start with a look at the project's variables. There are many we want to be able to tweak, from the SUT (System Under Test) to the amount of load we want to place on a specific process flow. Some examples of these variables:
`DURATION` => time the simulation will run (in human-readable form)
`DUR` => how many days ahead the departure can be, starting from today + 1 (default = 31)
`RET` => days after the departure date the return booking should be made (default = 2)
`CAL` => days after the departure date the calendar call is made (default = 14)
`LOOK_RATE` => number of users in the LOOK scenario
`PROV_RATE` => number of users in the Provisional Booking scenario
`BOOK_RATE` => number of users in the Complete Booking scenario
`V1_RATE` => number of users in the V1 scenario
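As an aside, a human-readable `DURATION` value can be parsed with Scala's standard duration parser; here is a minimal sketch (the variable name `duration` and the default value are ours for illustration):

```scala
import scala.concurrent.duration.{Duration, FiniteDuration}

// Parse a human-readable value such as DURATION="10 minutes".
val duration: FiniteDuration =
  Duration(sys.env.getOrElse("DURATION", "10 minutes")) match {
    case fd: FiniteDuration => fd
    case _                  => throw new IllegalArgumentException("DURATION must be finite")
  }

println(duration.toSeconds) // 600 when DURATION is unset
```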
Not every website visit results in a finished booking, so we need to be able to vary the load per flow. Depending on the test, we might want a LOOK:BOOK ratio of 10:1. We also want a large enough load on our booking servers, so we specify rates of 72:8; since the BOOK scenario also goes through the LOOK scenario, the effective ratio is (72 + 8):8 = 10:1.
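Hypothetical numbers make the arithmetic explicit (these rates are examples, not our production settings):

```scala
// BOOK users also traverse the LOOK calls, so the effective LOOK load
// is the sum of both rates.
val lookRate = 72.0 // users/sec running only the LOOK scenario
val bookRate = 8.0  // users/sec running the full BOOK scenario

val effectiveLookToBook = (lookRate + bookRate) / bookRate
println(effectiveLookToBook) // 10.0 – the desired 10:1 LOOK:BOOK ratio
```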
Below, the environment variable BOOK_RATE is read from the system and stored in the variable bookRate.
// parseDouble is a small helper that wraps s.toDouble in a scala.util.Try
val bookRate: Double =
  parseDouble(sys.env.apply("BOOK_RATE")).getOrElse(1d)
Not only environment variables are used; we also have several functions that create "random" values. This is our first look at Scala, the language the project is written in. A good example is related to the environment variable DUR. This value specifies how many days in advance a booking can be made. This matters because booking on only one day would quickly fill up the available seats, while it is also not possible to book arbitrarily far in the future. Therefore, we use the DUR value to pick a random offset (fixedDur), and a little function sets a departure date between tomorrow (not today, since we might run the test at 23:59, when there would be no availability) and tomorrow + DUR.
def setDates(): Map[String, String] = {
  val fixedDur: Int = Random.nextInt(dur) + 1
  Map(
    "PREV_DATE" -> date.plusDays(fixedDur - 1).format(DateTimeFormatter.ISO_LOCAL_DATE),
    "DEPARTURE_DATE" -> date.plusDays(fixedDur).format(DateTimeFormatter.ISO_LOCAL_DATE),
    "NEXT_DATE" -> date.plusDays(fixedDur + 1).format(DateTimeFormatter.ISO_LOCAL_DATE),
    "RETURN_DATE" -> date.plusDays(rtn + fixedDur).format(DateTimeFormatter.ISO_LOCAL_DATE),
    "CALENDAR_DATE" -> date.plusDays(cal + fixedDur).format(DateTimeFormatter.ISO_LOCAL_DATE)
  )
}

val setDate: Iterator[Map[String, String]] = Iterator.continually(setDates())
Finally, the variable setDate is set using Scala's Iterator.continually function (from the standard library; Gatling accepts such an iterator as a feeder). This generates a new set of dates for every scenario iteration. As you can see, several other relevant dates are calculated as well.
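The lazy-feeder behaviour can be seen with plain Scala (no Gatling needed; the map key here is arbitrary):

```scala
import scala.util.Random

// Each next() call re-runs the thunk and produces a fresh map,
// just as the setDate feeder above produces fresh dates per iteration.
val feeder: Iterator[Map[String, Int]] =
  Iterator.continually(Map("OFFSET" -> (Random.nextInt(31) + 1)))

val first  = feeder.next()
val second = feeder.next()
println(first("OFFSET"), second("OFFSET")) // two independent draws in 1..31
```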
Simulations
Environment variables like BOOK_RATE are used to specify the load in a Gatling Scenario. A Simulation is a collection of several Scenarios. Below is an example of a Simulation that handles the most common requests.
Here you can see that the variable bookRate (from BOOK_RATE) is mapped to the salesFlowScenario.
setUp(
  Scenarios.orientationScenario.inject(constantUsersPerSec(oriRate) during duration),
  Scenarios.journeySearchScenario.inject(constantUsersPerSec(lookRate) during duration),
  Scenarios.provisionalBookingScenario.inject(constantUsersPerSec(provRate) during duration),
  Scenarios.salesFlowScenario.inject(constantUsersPerSec(bookRate) during duration),
  Scenarios.afterSalesScenario.inject(constantUsersPerSec(afterRate) during duration),
  Scenarios.returnScenario.inject(constantUsersPerSec(returnRate) during duration),
  Scenarios.v1Scenario.inject(constantUsersPerSec(v1Rate) during duration),
  Scenarios.calendarScenario.inject(constantUsersPerSec(calendarRate) during duration),
  Scenarios.salesFlowScenarioWithCancellation.inject(constantUsersPerSec(bookAndCancelRate) during duration)
).protocols(httpConf)
Scenarios
Simulations are divided further into Scenarios. A scenario can consist of some feeder functions (for specifying ODs (origin–destination pairs), services, or random values), followed by step executions, finishing with a Complete Booking action. Below you can see the salesFlowScenario, which follows the BOOK flow.
val salesFlowScenario: ScenarioBuilder = scenario("SalesFlow")
  .feed(servicesFeeder)
  .feed(randval.randVal)
  .feed(randval.setDate)
  .exec(token.requestNew)
  .exec(bookingFlow.user)
  .exec(bookingFlow.stations)
  .exec(orientationFlow.calendar)
  .exec(orientationFlow.journeySearch)
  .doIf(session => session("tariffCode").asOption[String].exists(_.trim.nonEmpty)) {
    exec(bookingFlow.provisionalBooking)
      .doIf(session => session("bookingNumber").asOption[String].exists(_.trim.nonEmpty)) {
        exec(bookingFlow.completeBooking)
      }
  }
Some steps depend on the response to earlier requests. For example, to be able to make a provisionalBooking call, you need a tariff code from the journeySearch response. The value tariffCode, if it exists in the earlier response, is passed to the next step in the scenario. If it does not exist (because there is no availability), Gatling considers this user's run through the scenario finished.
Request-building
In the code block below you can see how the actual provisionalBooking call is made. The type of request (POST) is defined first, the headers are added, and then the body is built. The value ${tariffCode}, taken from the journeySearch response, is visible in this provisionalBooking request. The other variables are values taken from the different feeder functions. At the bottom, the .check function fetches the booking number from the JSON response and saves it as bookingNumber, to be used in the completeBooking step.
val provisionalBooking: ChainBuilder =
  exec(http("booking")
    .post("/api/v2/booking")
    .headers(headers)
    .body(StringBody(
      """
      {
        "segments": [
          {
            "origin": "${ORIGIN}",
            "destination": "${DESTINATION}",
            "direction": "outbound",
            "service_name": "${SERVICE}",
            "service_identifier": "${SERVICE}|${DEPARTURE_DATE}",
            "start_validity_date": "${DEPARTURE_DATE}",
            "items": [
              {
                "passenger_id": "passenger_1",
                "tariff_code": "${tariffCode}"
              }
            ]
          }
        ],
        "passengers": [
          {
            "type": "A",
            "id": "passenger_1",
            "disability_type": "NH",
            "number": 1
          }
        ]
      }
      """
    ))
    .check(jsonPath("$.data.booking.booking_number").optional.saveAs("bookingNumber")))
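The effect of .optional can be mimicked in plain Scala with a regex (an illustration only, not the Gatling internals; the JSON strings are made up):

```scala
// Save the booking number only when the response actually contains one.
val pattern = """"booking_number"\s*:\s*"([^"]+)"""".r

def extractBookingNumber(json: String): Option[String] =
  pattern.findFirstMatchIn(json).map(_.group(1))

val confirmed = """{"data":{"booking":{"booking_number":"ABC123"}}}"""
val soldOut   = """{"data":{"booking":{}}}"""

println(extractBookingNumber(confirmed)) // Some(ABC123)
println(extractBookingNumber(soldOut))   // None – completeBooking is skipped
```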
Wrapping up
This is as deep as the rabbit hole goes. I have tried to give both a high-level overview of and a deeper insight into the load-testing process of S3 Passenger. I hope this blog has given you a good picture of this activity. If you have any further questions or remarks on this topic, they can be directed to rick.overmars@sqills.com.
Want to know more about load testing?
Get in contact and let’s discuss the details!