Thursday, December 17, 2009

Deploying Tornado in production

We've been using Tornado at Evite very successfully for a while, for a subset of the functionality on our web site. While the instructions in the official documentation make it easy to get started with Tornado and get an application up and running, they don't go very far in explaining the issues you face when deploying to a production environment -- things such as running the Tornado processes as daemons, logging, and automated deployments. Here are some ways we found to solve these issues.

Running Tornado processes as Unix daemons

When we started experimenting with Tornado, we ran the Tornado processes via nohup. It did the job, but it was neither solid nor elegant. I ended up using the grizzled Python library, specifically its os module, which encapsulates best practices in Unix system programming. This module offers a function called daemonize, which converts the calling process into a Unix daemon. The function's docstring says it all:

Convert the calling process into a daemon. To make the current Python
process into a daemon process, you need two lines of code:

from grizzled.os import daemon
daemon.daemonize()

I combined this with the standard library's os.execv, which replaces the current (already daemonized) process with another program -- in my case, the Tornado process.

I'll show some code in a second, but first I also want to mention...

Logging

We use the standard Python logging module and send the log output to stdout/stderr. However, we also wanted to rotate the log files using the rotatelogs utility, so we looked for a way to pipe stdout/stderr to the rotatelogs binary while also daemonizing the Tornado process.
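For reference, here is a minimal sketch of the application-side logging setup this implies; the logger name and format string are illustrative, not our actual configuration:

import logging
import sys

# send all log output to stdout; the daemonization code below
# pipes stdout/stderr into the rotatelogs binary
logging.basicConfig(
    stream=sys.stdout,
    level=logging.INFO,
    format='%(asctime)s %(levelname)s %(name)s: %(message)s',
)

logger = logging.getLogger('myapp.web')
logger.info('Tornado app starting')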

Here's what I came up with (the stdout/stderr redirection was inspired by this note on StackOverflow):


import os
import sys
from socket import gethostname

from grizzled.os import daemonize

PYTHON_BINARY = "python2.6"
PATH_TO_PYTHON_BINARY = "/usr/bin/%s" % PYTHON_BINARY
ROTATELOGS_CMD = "/usr/sbin/rotatelogs"
LOGDIR = "/opt/tornado/logs"
LOGDURATION = 86400  # rotate logs every 24 hours

logdir = LOGDIR
logger = ROTATELOGS_CMD
hostname = gethostname()

# service is the name of the Python module pointing to your Tornado web server,
# for example myapp.web
service = "myapp.web"

execv_args = [PYTHON_BINARY, "-m", service]
logfile = "%s_%s_log.%%Y-%%m-%%d" % (service, hostname)
pidfile = "%s/%s.pid" % (logdir, service)
logpipe = "%s %s/%s %d" % (logger, logdir, logfile, LOGDURATION)
execv_path = PATH_TO_PYTHON_BINARY

# open the pipe to rotatelogs
so = se = os.popen(logpipe, 'w')

# re-open stdout without buffering (Python 2)
sys.stdout = os.fdopen(sys.stdout.fileno(), 'w', 0)

# redirect stdout and stderr to the rotatelogs pipe opened above
os.dup2(so.fileno(), sys.stdout.fileno())
os.dup2(se.fileno(), sys.stderr.fileno())

# daemonize the calling process, then replace it with the Tornado process;
# no_close=True keeps the redirected stdout/stderr open across daemonization
daemonize(no_close=True, pidfile=pidfile)
os.execv(execv_path, execv_args)


The net result of all this is that our Tornado processes run as daemons, and the logging output is captured in files managed by the rotatelogs utility. Note that it is easy to switch out rotatelogs and use scribe instead (which is something we'll do very soon).

Automated deployments

I already wrote about our use of Fabric for automated deployments. We use Fabric to deploy Python eggs containing the Tornado code to each production server in turn. Each server runs N Tornado processes, and we run nginx to load balance between all M x N tornados (M servers with N processes each).
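To give an idea of what this looks like on the nginx side, here is a minimal sketch of an upstream block fronting the Tornado processes; the host names and ports are hypothetical, and the real configuration has more tuning:

upstream tornados {
    server web1:8001;
    server web1:8002;
    server web2:8001;
    server web2:8002;
}

server {
    listen 80;
    location / {
        proxy_pass http://tornados;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}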

Here's how easy it is to run our Fabric-based deployment scripts:

fab -f fab_myapp.py nginx disable_tornado_in_lb:web1
fab -f fab_myapp.py web1 deploy
# run automated tests, then:
fab -f fab_myapp.py nginx enable_tornado_in_lb:web1

We first disable web1 in the nginx configuration file (as detailed here), then deploy the egg to web1, run a battery of tests against web1 to make sure things look good, and finally re-enable web1 in nginx. Rinse and repeat for all the other production web servers.
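To give a flavor of what fab_myapp.py might contain, here is a sketch of the tasks involved; the host names, paths, and sed expressions are hypothetical stand-ins, not our actual deployment code:

from fabric.api import env, put, sudo

def nginx():
    # subsequent tasks run on the nginx load balancer (hypothetical host)
    env.hosts = ['lb1']

def web1():
    # subsequent tasks run on the web1 application server
    env.hosts = ['web1']

def disable_tornado_in_lb(webhost):
    # comment out webhost's upstream entries in the nginx config, then reload
    sudo("sed -i 's/^\\( *server %s:\\)/#\\1/' /etc/nginx/conf.d/tornados.conf" % webhost)
    sudo("/etc/init.d/nginx reload")

def enable_tornado_in_lb(webhost):
    # uncomment webhost's upstream entries, then reload
    sudo("sed -i 's/^#\\( *server %s:\\)/\\1/' /etc/nginx/conf.d/tornados.conf" % webhost)
    sudo("/etc/init.d/nginx reload")

def deploy():
    # push the new egg and restart the local Tornado processes
    put('dist/myapp-1.0-py2.6.egg', '/tmp/myapp.egg')
    sudo('easy_install /tmp/myapp.egg')
    sudo('/etc/init.d/tornado restart')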

6 comments:

Big 40wt Svetlyak said...

Why not try using Python logging with syslog-ng, which is able to send logs over the network?

Grig Gheorghiu said...

Yes, you're right, we could have just as well used syslog-ng and logrotate. In fact we'll revisit our logging strategy soon in order to include scribe in it.

Jeremy Fincher said...

Shameless self-promotion, perhaps, but you might want to look into finitd for your daemonization and logging needs. It has the advantage of being something you can use consistently with programs in different languages, rather than just Python.

Alexander Gorban said...

Why not simply use the daemon module?

Grig Gheorghiu said...

Alexander -- thanks for the pointer. I wasn't aware of PEP 3143, I'll check it out.

boyombo said...

I tried to follow your link to grizzled and found they have moved to http://software.clapper.org/grizzled-python/
Please update your link
