Friday, December 29, 2017

Experiences with the Kong API Gateway

Kong is an open-source API Gateway that you can install on premises if you don't want to, or cannot, use the AWS API Gateway. The benefits of putting an API Gateway in front of your actual API endpoints are many: rate limiting, authentication, security, better logging, etc. Kong offers all of these and more via plugins. I've experimented with Kong a bit and I am writing down my notes here for future reference.

Installing Kong on Ubuntu 16.04

Install Kong from deb package

# wget
# dpkg -i kong-community-edition-0.11.2.xenial.all.deb

Install PostgreSQL

# apt-get install postgresql

Create kong database and user

# su - postgres
$ psql
psql (10.1)
Type "help" for help.

postgres=# CREATE USER kong; CREATE DATABASE kong OWNER kong;
postgres=# ALTER USER kong PASSWORD 'somepassword';

Modify kong configuration

# cp /etc/kong/kong.conf.default /etc/kong/kong.conf

Edit kong.conf and set:

pg_host =
pg_port = 5432
pg_user = kong
pg_password = somepassword
pg_database = kong

Run kong migrations job

# kong migrations up

Increase open files limit

# ulimit -n 4096

Start kong

# kong start

Install kong dashboard

Install updated version of node first.

# curl -sL -o
# bash
# apt-get install nodejs
# npm install -g kong-dashboard

Create systemd service to start kong-dashboard with basic auth

I specified port 8001 for kong-dashboard, instead of the default port 8080.

# cat /etc/systemd/system/
[Unit]
Description=kong dashboard

[Service]
ExecStart=/usr/local/bin/kong-dashboard start --kong-url http://localhost:8001 --basic-auth someadminuser=someadminpassword


# systemctl daemon-reload
# systemctl start kong-dashboard

Adding an API to Kong

I used the Kong admin dashboard to add a new API Gateway object which points to the actual API endpoint I have. Note that the URL as far as Kong is concerned can be anything you want; in this example I chose /kong1. The upstream URL is your actual API endpoint. The Hosts value is a comma-separated list of the host headers you want Kong to reply to. Here it contains the domain name for my actual API endpoint, as well as a new domain name that I will use later in conjunction with a load balancer in front of Kong.

  • Name: my-kong-api
  • Hosts:,
  • URIs: /kong1
  • Upstream URL:

At this point you can query the Kong admin endpoint for the existing APIs. We’ll use HTTPie (on a Mac you can install HTTPie via brew install httpie).

$ http
HTTP/1.1 200 OK
Access-Control-Allow-Origin: *
Connection: keep-alive
Content-Type: application/json; charset=utf-8
Date: Fri, 29 Dec 2017 16:23:59 GMT
Server: kong/0.11.2
Transfer-Encoding: chunked

{
    "data": [
        {
            "created_at": 1513401906560,
            "hosts": [
                "",
                ""
            ],
            "http_if_terminated": true,
            "https_only": true,
            "id": "ad23a91a-b76a-417d-9889-0088a13e3419",
            "name": "my-kong-api",
            "preserve_host": true,
            "retries": 5,
            "strip_uri": true,
            "upstream_connect_timeout": 60000,
            "upstream_read_timeout": 60000,
            "upstream_send_timeout": 60000,
            "upstream_url": "",
            "uris": [
                "/kong1"
            ]
        }
    ],
    "total": 1
}
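Since jq may not be installed everywhere, here is a small sketch that slices the response with python3. The heredoc is a trimmed-down copy of the output above, so it runs without a live Kong instance; in practice you would pipe the admin endpoint's output into the same one-liner.

```shell
# Pull API names out of an /apis response body; the heredoc stands in for
# the live admin endpoint, so this runs without a running Kong.
cat <<'EOF' | python3 -c 'import json,sys; d=json.load(sys.stdin); print(*(a["name"] for a in d["data"]))'
{"data": [{"name": "my-kong-api", "uris": ["/kong1"]}], "total": 1}
EOF
```

This prints the name of each registered API, one way to spot-check the gateway configuration from a script.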

Note that any operation that you do via the Kong dashboard can also be done via API calls to the Kong admin backend. For lots of examples on how to do that, see the official documentation as well as this blog post from John Nikolai.
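For example, the API object above could be created with a single call to the admin backend. This is a sketch against the 0.11.x /apis endpoint; the hostnames are placeholders rather than the real domain names from this post, and the command is printed first so you can inspect it before running it.

```shell
# Sketch: recreate the dashboard steps via the Kong admin API (0.11.x).
# Hostnames below are placeholder assumptions, not the post's real domains.
cmd=(curl -i -X POST http://localhost:8001/apis
     --data name=my-kong-api
     --data hosts=api.example.com,kong.example.com
     --data uris=/kong1
     --data upstream_url=https://api.example.com
     --data https_only=true)
echo "${cmd[@]}"      # inspect the call first
# "${cmd[@]}"         # uncomment to run against a live Kong admin port
```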

Using the Kong rate-limiting plugin

In the Kong admin dashboard, go to Plugins -> Add:
  • Plugin name: rate-limiting
  • Config: minute = 10

This means that we are limiting the rate of calls to our API to 10 per minute. You can also specify limits for other units of time; see the rate-limiting plugin documentation for more details.
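The counting idea behind this setting can be sketched with a toy fixed-window loop. This is an illustration only; the real accounting, per consumer and per time window, happens inside Kong.

```shell
# Toy simulation of "minute = 10": the first 10 requests in a window pass,
# the rest are rejected with a 429 until the window resets.
limit=10
count=0
for i in $(seq 1 12); do
  if [ "$count" -lt "$limit" ]; then
    count=$((count + 1))
    echo "request $i: 200 (X-RateLimit-Remaining-minute: $((limit - count)))"
  else
    echo "request $i: 429 rate limit exceeded"
  fi
done
```

Requests 1 through 10 are allowed, with the remaining count ticking down to 0; requests 11 and 12 are rejected, mirroring the header behavior shown in the next section.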

Verifying that our API endpoint is rate-limited

Now we can verify that the API endpoint is working by issuing a POST request to the Kong URL (/kong1) which points to our actual API endpoint:

$ http --verify=no -v POST ""
POST /kong1?query=version HTTP/1.1
Accept: */*
Accept-Encoding: gzip, deflate
Connection: keep-alive
Content-Length: 0
User-Agent: HTTPie/0.9.9

HTTP/1.1 200 OK
Connection: keep-alive
Content-Length: 43
Content-Type: application/json
Date: Sat, 16 Dec 2017 18:16:53 GMT
Server: nginx/1.10.3 (Ubuntu)
Via: kong/0.11.2
X-Kong-Proxy-Latency: 86
X-Kong-Upstream-Latency: 33
X-RateLimit-Limit-minute: 10
X-RateLimit-Remaining-minute: 9

   "data": {
       "version": "21633901644387004745"

A few things to notice in the call above:

  • we are making an SSL call; by default Kong exposes SSL on port 8443
  • we are passing a query string to the /kong1 URL endpoint; Kong will pass this along to our actual API
  • the HTTP reply contains 2 headers related to rate limiting:
    • X-RateLimit-Limit-minute: 10 (this specifies the rate we set for the rate-limiting plugin)
    • X-RateLimit-Remaining-minute: 9 (this specifies the remaining calls we have)

After 9 requests in quick succession, the rate-limiting headers are:

X-RateLimit-Limit-minute: 10
X-RateLimit-Remaining-minute: 1

After 10 requests:

X-RateLimit-Limit-minute: 10
X-RateLimit-Remaining-minute: 0

After the 11th request we get an HTTP 429 error:

HTTP/1.1 429
Connection: keep-alive
Content-Type: application/json; charset=utf-8
Date: Sat, 16 Dec 2017 18:21:19 GMT
Server: kong/0.11.2
Transfer-Encoding: chunked
X-RateLimit-Limit-minute: 10
X-RateLimit-Remaining-minute: 0

   "message": "API rate limit exceeded"

Putting a load balancer in front of Kong

Calling Kong APIs on non-standard port numbers gets cumbersome. To make things cleaner, I put an AWS ALB in front of Kong. I added a proper SSL certificate via AWS Certificate Manager for the domain and associated it to a listener on port 443 of the ALB. I also created two Target Groups, one for HTTP traffic to port 80 on the ALB and one for HTTPS 
traffic to port 443 on the ALB. The first Target Group points to port 8000 on the server running Kong, because that's the port Kong uses for plain HTTP requests. The second Target Group points to port 8443 on the server running Kong, for HTTPS requests.

I was now able to make calls such as these:

$ http -v POST ""
POST /kong1?query=version HTTP/1.1
Accept: */*
Accept-Encoding: gzip, deflate
Connection: keep-alive
Content-Length: 0
User-Agent: HTTPie/0.9.9

HTTP/1.1 200 OK
Connection: keep-alive
Content-Length: 42
Content-Type: application/json
Date: Fri, 29 Dec 2017 17:42:14 GMT
Server: nginx/1.10.3 (Ubuntu)
Via: kong/0.11.2
X-Kong-Proxy-Latency: 94
X-Kong-Upstream-Latency: 29
X-RateLimit-Limit-minute: 10
X-RateLimit-Remaining-minute: 9

    "data": {
        "version": "21633901644387004745"

Using the Kong syslog plugin

I added the syslog plugin via the Kong admin dashboard. I set the following values:

  • server_errors_severity: err
  • successful_severity: notice
  • client_errors_severity: err
  • log_level: notice

I then created a file called kong.conf in /etc/rsyslog.d on the server running Kong:

# cat kong.conf
if ($programname == 'kong' and $syslogseverity-text == 'notice') then -/var/log/kong.log
& ~

# service rsyslog restart

Now every call to Kong is logged in syslog format in /var/log/kong.log. The message portion of the log entry is in JSON format.
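To see what that looks like, here is a synthetic log line in the shape just described (syslog prefix followed by a JSON message; the field values are made up for illustration), with the JSON payload stripped out and pretty-printed:

```shell
# Synthetic example of a /var/log/kong.log entry; strip everything up to
# "kong: " with shell parameter expansion, then pretty-print the JSON payload.
line='Dec 29 17:42:14 myhost kong: {"client_ip":"10.0.0.1","response":{"status":200}}'
echo "${line#*kong: }" | python3 -m json.tool
```

The same parameter-expansion trick works in ad-hoc pipelines over the real log file when you want to feed only the JSON portion to a downstream tool.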

Sending Kong logs to AWS ElasticSearch/Kibana

I used the process I described in my previous blog post on AWS CloudWatch Logs and AWS ElasticSearch. One difference was that for the Kong logs I had to use a new index in ElasticSearch. 

This was because the JSON object logged by Kong contains a field called response, which was clashing with another field called response already present in the cwl-* index in Kibana. What I ended up doing was copying the Lambda function used to send CloudWatch logs to ElasticSearch and replacing cwl- with cwl-kong- in the transform function. This created a new index cwl-kong-* in ElasticSearch, and at that point I was able to add that index to a Kibana dashboard and visualize and query the Kong logs.
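The renaming itself is trivial; as a toy illustration, here is what the prefix swap does to a daily index name (the actual edit happens inside the Lambda's transform function, not on the command line):

```shell
# Toy illustration of the index rename: a daily CloudWatch Logs index name
# with the cwl- prefix swapped for cwl-kong-.
echo 'cwl-2017.12.29' | sed 's/^cwl-/cwl-kong-/'
```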

This is just scratching the surface of what you can do with Kong.
