Tuesday, June 28, 2016

Running Gatling load tests in Docker containers via Jenkins

Gatling is a modern load testing tool written in Scala. As part of the Jenkins setup I am in charge of, I wanted to run load tests using Gatling against a collection of pages for a given website. Here are my notes on how I managed to do this.

Running Gatling as a Docker container locally

There is a Docker image already available on DockerHub, so you can simply pull down the image locally:


$ docker pull denvazh/gatling:2.2.2

Instructions on how to run a container based on this image are available on GitHub:

$ docker run -it --rm -v /home/core/gatling/conf:/opt/gatling/conf \
-v /home/core/gatling/user-files:/opt/gatling/user-files \
-v /home/core/gatling/results:/opt/gatling/results \
denvazh/gatling:2.2.2

Based on these instructions, I created a local directory called gatling, and under it I created 3 sub-directories: conf, results and user-files. I left the conf and results directories empty, and under user-files I created a simulations directory containing a Gatling load test scenario written in Scala. I also created a file in the user-files directory called urls.csv, containing a header named loc and a URL per line for each page that I want to load test.
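
At this point the gatling directory tree looks like this (conf and results are empty for now, and the order of the find output may differ on your system):

$ find .
.
./conf
./results
./user-files
./user-files/urls.csv
./user-files/simulations
./user-files/simulations/Simulation.scala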

Assuming the current directory is gatling, here are examples of these files:

$ cat user-files/urls.csv
loc
https://my.website.com
https://my.website.com/category1
https://my.website.com/category2/product3

$ cat user-files/simulations/Simulation.scala

package my.gatling.simulation

import io.gatling.core.Predef._
import io.gatling.http.Predef._
import scala.concurrent.duration._

class GatlingSimulation extends Simulation {

  val httpConf = http
    .baseURL("http://127.0.0.1")
    .acceptHeader("text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8")
    .doNotTrackHeader("1")
    .acceptLanguageHeader("en-US,en;q=0.5")
    .acceptEncodingHeader("gzip, deflate")
    .userAgentHeader("Mozilla/5.0 (Windows NT 5.1; rv:31.0) Gecko/20100101 Firefox/31.0")

  val scn1 = scenario("Scenario1")
    .exec(Crawl.crawl)

  val userCount = Integer.getInteger("users", 1)
  val durationInSeconds  = java.lang.Long.getLong("duration", 10L)
  setUp(
    scn1.inject(rampUsers(userCount) over (durationInSeconds seconds))
  ).protocols(httpConf)
}

object Crawl {

  val feeder = csv("/opt/gatling/user-files/urls.csv").random

  val crawl = exec(
    feed(feeder)
      .exec(http("${loc}").get("${loc}"))
  )
}


I won't go through the different ways of writing Gatling load test scenarios here. There are good instructions on the Gatling website -- see the Quickstart and the Advanced Tutorial. The scenario above reads the urls.csv file; each virtual user picks a random URL from it and issues a GET request against that URL.

I do want to point out 2 variables in the above script:

  val userCount = Integer.getInteger("users", 1)
  val durationInSeconds  = java.lang.Long.getLong("duration", 10L)

These variables specify the total number of users we want to inject and the duration of the ramp-up. They are used in the inject call:

scn1.inject(rampUsers(userCount) over (durationInSeconds seconds))

The special thing about these 2 variables is that they are read from Java system properties, which reach the Gatling JVM via JAVA_OPTS. So if you pass a -Dusers Java option and a -Dduration Java option, Gatling will pick them up and set the userCount and durationInSeconds variables accordingly. This is a good thing, because it allows you to specify those numbers outside of Gatling, without hardcoding them in your simulation script. The Gatling documentation has more info on passing parameters via the command line.
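
For example, with the plain Gatling bundle (outside of Docker), you could set those properties through JAVA_OPTS when invoking gatling.sh -- a sketch that assumes the bundled launcher passes JAVA_OPTS through to the JVM, as the Docker image's entrypoint does:

$ JAVA_OPTS="-Dusers=10 -Dduration=60" ./bin/gatling.sh -s my.gatling.simulation.GatlingSimulation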

While pulling the Gatling docker image and running it is the simplest way to run Gatling, I prefer to understand what's going on in that image. I started off by getting the Dockerfile from GitHub:

$ cat Dockerfile

# Gatling is a highly capable load testing tool.
#
# Documentation: http://gatling.io/docs/2.2.2/
# Cheat sheet: http://gatling.io/#/cheat-sheet/2.2.2

FROM java:8-jdk-alpine

MAINTAINER Denis Vazhenin

# working directory for gatling
WORKDIR /opt

# gatling version
ENV GATLING_VERSION 2.2.2

# create directory for gatling install
RUN mkdir -p gatling

# install gatling
RUN apk add --update wget && \
  mkdir -p /tmp/downloads && \
  wget -q -O /tmp/downloads/gatling-$GATLING_VERSION.zip \
  https://repo1.maven.org/maven2/io/gatling/highcharts/gatling-charts-highcharts-bundle/$GATLING_VERSION/gatling-charts-highcharts-bundle-$GATLING_VERSION-bundle.zip && \
  mkdir -p /tmp/archive && cd /tmp/archive && \
  unzip /tmp/downloads/gatling-$GATLING_VERSION.zip && \
  mv /tmp/archive/gatling-charts-highcharts-bundle-$GATLING_VERSION/* /opt/gatling/

# change context to gatling directory
WORKDIR  /opt/gatling

# set directories below to be mountable from host
VOLUME ["/opt/gatling/conf", "/opt/gatling/results", "/opt/gatling/user-files"]

# set environment variables
ENV PATH /opt/gatling/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
ENV GATLING_HOME /opt/gatling

ENTRYPOINT ["gatling.sh"]

I then added a way to pass JAVA_OPTS via an environment variable. I added this line after the ENV GATLING_HOME line:

ENV JAVA_OPTS ""
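
With that change, the tail end of my modified Dockerfile looks like this:

ENV GATLING_HOME /opt/gatling
ENV JAVA_OPTS ""

ENTRYPOINT ["gatling.sh"]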

I dropped this Dockerfile in my gatling directory, then built a local Docker image off of it:

$ docker build -t gatling:local .

I then invoked 'docker run' to launch a container based on this image, using the csv and simulation files from above. The current directory is still gatling.

$ docker run --rm -v `pwd`/conf:/opt/gatling/conf -v `pwd`/user-files:/opt/gatling/user-files -v `pwd`/results:/opt/gatling/results -e JAVA_OPTS="-Dusers=10 -Dduration=60" gatling:local -s MySimulationName

Note the -s flag, which denotes a simulation name (it can be any string you want). If you don't specify this flag, the gatling.sh script, which is the ENTRYPOINT in the container, will wait for user input and you will not be able to fully automate your load test.

Another thing to note is the use of JAVA_OPTS. In the example above, I pass -Dusers=10 and -Dduration=60 as the two JAVA_OPTS parameters. The JAVA_OPTS variable itself is passed to 'docker run' via the -e option, which tells Docker to replace the default value of ENV JAVA_OPTS (the empty string) with the value passed via -e.
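
If you want to convince yourself that -e really overrides the Dockerfile default, you can bypass the entrypoint and just print the container's environment -- a quick sanity check, nothing more:

$ docker run --rm --entrypoint env gatling:local | grep JAVA_OPTS
JAVA_OPTS=

$ docker run --rm --entrypoint env -e JAVA_OPTS="-Dusers=10 -Dduration=60" gatling:local | grep JAVA_OPTS
JAVA_OPTS=-Dusers=10 -Dduration=60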

Running Gatling as a Docker container from Jenkins

Once you have a working Gatling container locally, you can upload the Docker image built above to a private Docker registry. I used a private EC2 Container Registry (ECR).  
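
The exact commands depend on your registry, but for ECR the push looked roughly like this (ACCOUNT_ID and the region are placeholders, and aws ecr get-login was the ECR authentication mechanism at the time):

$ $(aws ecr get-login --region us-east-1)
$ docker tag gatling:local ACCOUNT_ID.dkr.ecr.us-east-1.amazonaws.com/gatling:local
$ docker push ACCOUNT_ID.dkr.ecr.us-east-1.amazonaws.com/gatling:local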

I also added the gatling directory and its sub-directories to a GitHub repository called devops.

In Jenkins, I created a new "Freestyle project" job with the following properties:

  • Parameterized build with 2 string parameters: USERS (default value 10) and DURATION in seconds (default value 60)
  • Git repository - add URL and credentials for the devops repository which contains the gatling files
  • An "Execute shell" build command similar to this one:
docker run --rm -v ${WORKSPACE}/gatling/conf:/opt/gatling/conf -v ${WORKSPACE}/gatling/user-files:/opt/gatling/user-files -v ${WORKSPACE}/gatling/results:/opt/gatling/results -e JAVA_OPTS="-Dusers=$USERS -Dduration=$DURATION"  /PATH/TO/DOCKER/REGISTRY/gatling -s MyLoadTest 


Note that we mount the gatling directories as Docker volumes, similarly to when we ran the Docker container locally, only this time we specify ${WORKSPACE} as the base directory. The 2 string parameters USERS and DURATION are passed as variables in JAVA_OPTS.

A nice thing about running Gatling via Jenkins is that the reports are available in the Workspace directory of the project. If you go to the Gatling project we created in Jenkins, click on Workspace, then on gatling, then results, you should see directories named gatlingsimulation-TIMESTAMP for each Gatling run. Each of these directories should have an index.html file, which will show you the Gatling report dashboard. Pretty neat.
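
On disk, the contents of one of these run directories look something like this (the timestamp and the exact file list will vary with the Gatling version):

$ ls gatling/results/
gatlingsimulation-1467130000000

$ ls gatling/results/gatlingsimulation-1467130000000/
index.html  js  simulation.log  style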

