Installing Kong on Ubuntu 16.04
Install Kong from deb package
# wget -O kong-community-edition-0.11.2.xenial.all.deb "https://bintray.com/kong/kong-community-edition-deb/download_file?file_path=dists/kong-community-edition-0.11.2.xenial.all.deb"
# dpkg -i kong-community-edition-0.11.2.xenial.all.deb
Install PostgreSQL
# apt-get install postgresql
Create kong database and user
# su - postgres
$ psql
psql (10.1)
Type "help" for help.
postgres=# CREATE USER kong; CREATE DATABASE kong OWNER kong;
postgres=# ALTER USER kong PASSWORD 'somepassword';
Modify kong configuration
# cp /etc/kong/kong.conf.default /etc/kong/kong.conf
pg_host = 127.0.0.1
pg_port = 5432
pg_user = kong
pg_password = somepassword
pg_database = kong
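If you prefer to script this step, the five settings above can be applied in one pass with sed. A minimal sketch: on a real host you would point CONF at /etc/kong/kong.conf; here it runs against a temp file seeded with the commented-out defaults the package ships, so you can try it safely.

```shell
# Sketch: uncomment and set the Postgres values in kong.conf with sed.
# Point CONF at /etc/kong/kong.conf on a real install; this demo uses a
# temp copy seeded with commented-out defaults.
CONF=$(mktemp)
printf '%s\n' '#pg_host = 127.0.0.1' '#pg_port = 5432' '#pg_user = kong' \
              '#pg_password =' '#pg_database = kong' > "$CONF"
for kv in 'pg_host = 127.0.0.1' 'pg_port = 5432' 'pg_user = kong' \
          'pg_password = somepassword' 'pg_database = kong'; do
  key=${kv%% *}                                   # e.g. pg_host
  sed -i "s|^#\{0,1\}${key} .*|${kv}|" "$CONF"    # uncomment and set the value
done
grep '^pg_' "$CONF"
```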
Run kong migrations job
# kong migrations up
Increase open files limit
# ulimit -n 4096
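Note that ulimit only affects the current shell session. To make the limit survive reboots, one option is an entry in /etc/security/limits.conf (a sketch, assuming Kong runs as root as in this walkthrough; adjust the user otherwise):

```
# /etc/security/limits.conf (append)
root    soft    nofile    4096
root    hard    nofile    4096
```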
Start kong
# kong start
Install kong dashboard
Install an updated version of Node.js first.
# curl -sL https://deb.nodesource.com/setup_6.x -o nodesource_setup.sh
# bash nodesource_setup.sh
# apt-get install nodejs
# npm install -g kong-dashboard
Create systemd service to start kong-dashboard with basic auth
The --kong-url flag points the dashboard at Kong's admin API on port 8001; the dashboard itself listens on its default port 8080.
# cat /etc/systemd/system/multi-user.target.wants/kong-dashboard.service
[Unit]
Description=kong dashboard
After=network.target
[Service]
ExecStart=/usr/local/bin/kong-dashboard start --kong-url http://localhost:8001 --basic-auth someadminuser=someadminpassword
[Install]
WantedBy=multi-user.target
# systemctl daemon-reload
# systemctl start kong-dashboard
Adding an API to Kong
I used the Kong admin dashboard to add a new API object which points to the actual API endpoint I have. Note that the URI, as far as Kong is concerned, can be anything you want; in this example I chose /kong1. The upstream URL is your actual API endpoint. The Hosts value is a comma-separated list of the host headers you want Kong to reply to. Here it contains the domain name for my actual API endpoint (stage.mydomain.com) as well as a new domain name that I will use later in conjunction with a load balancer in front of Kong (stage-api.mydomain.com).
- Name: my-kong-api
- Hosts: stage.mydomain.com, stage-api.mydomain.com
- URLs: /kong1
- Upstream URL: https://stage.mydomain.com/api/graphql/pub
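If you'd rather skip the dashboard, the same object can be created against Kong's admin API on port 8001. The sketch below only assembles and prints the form-encoded payload; the final line, left commented, assumes the admin API is reachable on localhost.

```shell
# Build the form payload for POST /apis (Kong 0.11 admin API)
payload='name=my-kong-api'
payload="${payload}&hosts=stage.mydomain.com,stage-api.mydomain.com"
payload="${payload}&uris=/kong1"
payload="${payload}&upstream_url=https://stage.mydomain.com/api/graphql/pub"
echo "$payload"
# curl -s -X POST http://localhost:8001/apis -d "$payload"
```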
At this point you can query the Kong admin endpoint for the existing APIs. We’ll use HTTPie (on a Mac you can install HTTPie via brew install httpie).
$ http http://stage.mydomain.com:8001/apis
HTTP/1.1 200 OK
Access-Control-Allow-Origin: *
Connection: keep-alive
Content-Type: application/json; charset=utf-8
Date: Fri, 29 Dec 2017 16:23:59 GMT
Server: kong/0.11.2
Transfer-Encoding: chunked
{
"data": [
{
"created_at": 1513401906560,
"hosts": [
"stage-api.mydomain.com",
"stage.mydomain.com"
],
"http_if_terminated": true,
"https_only": true,
"id": "ad23a91a-b76a-417d-9889-0088a13e3419",
"name": "my-kong-api",
"preserve_host": true,
"retries": 5,
"strip_uri": true,
"upstream_connect_timeout": 60000,
"upstream_read_timeout": 60000,
"upstream_send_timeout": 60000,
"upstream_url": "https://stage.mydomain.com/api/graphql/pub",
"uris": [
"/kong1"
]
}
],
"total": 1
}
Using the Kong rate-limiting plugin
In the Kong admin dashboard, go to Plugins -> Add:
- Plugin name: rate-limiting
- Config: minute = 10
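Equivalently, the plugin can be attached from the command line. A sketch of the admin API call (again only printing the payload here; the commented curl assumes Kong's admin API on localhost:8001):

```shell
# Attach rate-limiting (10 requests/minute) to the API via the admin API
payload='name=rate-limiting&config.minute=10'
echo "$payload"
# curl -s -X POST http://localhost:8001/apis/my-kong-api/plugins -d "$payload"
```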
Verifying that our API endpoint is rate-limited
Now we can verify that the API endpoint is working by issuing a POST request to the Kong URL (/kong1) which points to our actual API endpoint:
$ http --verify=no -v POST "https://stage.mydomain.com:8443/kong1?query=version"
POST /kong1?query=version HTTP/1.1
Accept: */*
Accept-Encoding: gzip, deflate
Connection: keep-alive
Content-Length: 0
Host: stage.mydomain.com:8443
User-Agent: HTTPie/0.9.9
HTTP/1.1 200 OK
Connection: keep-alive
Content-Length: 43
Content-Type: application/json
Date: Sat, 16 Dec 2017 18:16:53 GMT
Server: nginx/1.10.3 (Ubuntu)
Via: kong/0.11.2
X-Kong-Proxy-Latency: 86
X-Kong-Upstream-Latency: 33
X-RateLimit-Limit-minute: 10
X-RateLimit-Remaining-minute: 9
{
"data": {
"version": "21633901644387004745"
}
}
A few things to notice in the call above:
- we are making an SSL call; by default Kong exposes SSL on port 8443
- we are passing a query string to the /kong1 URL endpoint; Kong will pass this along to our actual API
- the HTTP reply contains 2 headers related to rate limiting:
- X-RateLimit-Limit-minute: 10 (this specifies the rate we set for the rate-limiting plugin)
- X-RateLimit-Remaining-minute: 9 (this specifies the remaining calls we have)
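When scripting this check, the remaining quota can be pulled out of the response headers with a small helper. The sketch below is demoed on literal header lines; in practice you would pipe the output of `http --headers ...` through it.

```shell
# Extract the X-RateLimit-Remaining-minute value from response headers on stdin
remaining() {
  grep -i '^X-RateLimit-Remaining-minute:' | tr -d '\r' | awk '{print $2}'
}
printf 'X-RateLimit-Limit-minute: 10\nX-RateLimit-Remaining-minute: 9\n' | remaining   # -> 9
```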
After 9 requests in quick succession, the rate-limiting headers are:
X-RateLimit-Limit-minute: 10
X-RateLimit-Remaining-minute: 1
After 10 requests:
X-RateLimit-Limit-minute: 10
X-RateLimit-Remaining-minute: 0
After the 11th request we get an HTTP 429 error:
HTTP/1.1 429
Connection: keep-alive
Content-Type: application/json; charset=utf-8
Date: Sat, 16 Dec 2017 18:21:19 GMT
Server: kong/0.11.2
Transfer-Encoding: chunked
X-RateLimit-Limit-minute: 10
X-RateLimit-Remaining-minute: 0
{
"message": "API rate limit exceeded"
}
Putting a load balancer in front of Kong
Calling Kong APIs on non-standard port numbers gets cumbersome. To make things cleaner, I put an AWS ALB in front of Kong. I added a proper SSL certificate via AWS Certificate Manager for the domain stage-api.mydomain.com and associated it with a listener on port 443 of the ALB. I also created two Target Groups, one for HTTP traffic to port 80 on the ALB and one for HTTPS traffic to port 443 on the ALB. The first Target Group points to port 8000 on the server running Kong, because that's the port Kong uses for plain HTTP requests. The second Target Group points to port 8443 on the server running Kong, for HTTPS requests.
I was now able to make calls such as these:
$ http -v POST "https://stage-api.mydomain.com/kong1?query=version"
POST /kong1?query=version HTTP/1.1
Accept: */*
Accept-Encoding: gzip, deflate
Connection: keep-alive
Content-Length: 0
Host: stage-api.mydomain.com
User-Agent: HTTPie/0.9.9
HTTP/1.1 200 OK
Connection: keep-alive
Content-Length: 42
Content-Type: application/json
Date: Fri, 29 Dec 2017 17:42:14 GMT
Server: nginx/1.10.3 (Ubuntu)
Via: kong/0.11.2
X-Kong-Proxy-Latency: 94
X-Kong-Upstream-Latency: 29
X-RateLimit-Limit-minute: 10
X-RateLimit-Remaining-minute: 9
{
"data": {
"version": "21633901644387004745"
}
}
Using the Kong syslog plugin
I added the syslog plugin via the Kong admin dashboard. I set the following values:
- server_errors_severity: err
- successful_severity: notice
- client_errors_severity: err
- log_level: notice
I then created a file called kong.conf in /etc/rsyslog.d on the server running Kong:
# cat kong.conf
if ($programname == 'kong' and $syslogseverity-text == 'notice') then -/var/log/kong.log
& ~
# service rsyslog restart
Now every call to Kong is logged in syslog format in /var/log/kong.log. The message portion of the log entry is in JSON format.
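Since /var/log/kong.log sits outside the default rotation set, it is worth adding a logrotate entry as well. A sketch for a hypothetical /etc/logrotate.d/kong (adjust frequency and retention to taste):

```
/var/log/kong.log {
    daily
    rotate 7
    compress
    missingok
    notifempty
    postrotate
        invoke-rc.d rsyslog rotate > /dev/null
    endscript
}
```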
Sending Kong logs to AWS ElasticSearch/Kibana
I used the process I described in my previous blog post on AWS CloudWatch Logs and AWS ElasticSearch. One difference was that for the Kong logs I had to use a new index in ElasticSearch.
This was because the JSON object logged by Kong contains a field called response, which was clashing with another field called response already present in the cwl-* index in Kibana. What I ended up doing was copying the Lambda function used to send CloudWatch logs to ElasticSearch and replacing cwl- with cwl-kong- in the transform function. This created a new index cwl-kong-* in ElasticSearch, and at that point I was able to add that index to a Kibana dashboard and visualize and query the Kong logs.
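The index-name change amounts to a one-line substitution in the Lambda source. A sketch of the idea, demoed on a stand-in line rather than the real blueprint code (the file name index.js and the variable shown are assumptions):

```shell
# Swap the index prefix in a copy of the Lambda's transform function.
# The line below is a stand-in for the blueprint's index-name assignment.
SRC=$(mktemp)
echo "var indexName = 'cwl-' + timestamp.getUTCFullYear();" > "$SRC"
sed -i "s/'cwl-'/'cwl-kong-'/" "$SRC"
cat "$SRC"
```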
This is just scratching the surface of what you can do with Kong. Here are a few more resources:
- Webinar on API & Microservices with Kong
- Presentation on API Gateway Pattern and Kong in a Microservices World
- Blog post on API authentication with Kong