How to auto-reload a server on changes with doit

5 03 2012

The other day a friend told me he was using wsgiref.simple_server and asked if doit could be used to auto-reload the server. My first answer was: do not use wsgiref :)

But you might prefer to use wsgiref & doit for auto-reload for two reasons: doit can be bundled in a single file and included in the project, and you might want explicit control over when the server should be reloaded.

server.py:

from wsgiref.simple_server import make_server, demo_app

httpd = make_server('', 8000, demo_app)
print("Serving HTTP on port 8000...")

# Respond to requests until process is killed
httpd.serve_forever()

dodo.py:

import subprocess
import glob
import os
import signal

def start_server(server_cmd, pid_filename, restart=False):
    # check whether the server is already running
    if os.path.exists(pid_filename):
        if restart:
            stop_server(pid_filename)
        else:
            msg = "It seems the server is already running, check the file %s"
            print(msg % pid_filename)
            return False

    # start server
    process = subprocess.Popen(server_cmd.split())

    # create pid file
    with open(pid_filename, 'w') as pid_file:
        pid_file.write(str(process.pid))
    return True

def stop_server(pid_filename):
    # check whether the server is running
    if not os.path.exists(pid_filename):
        return
    # try to terminate/stop server's process
    with open(pid_filename) as pid_file:
        pid = int(pid_file.read())
        try:
            os.kill(pid, signal.SIGTERM)
        except OSError:
            pass #ignore errors if process does not exist
    # remove pid file
    os.unlink(pid_filename)


########################################

DOIT_CONFIG = {'default_tasks': ['restart']}

PID_FILENAME = 'pid.txt'
START_SERVER = 'python server.py'

def task_start():
    return {'actions': [(start_server, (START_SERVER, PID_FILENAME,))]}

def task_stop():
    return {'actions': [(stop_server, (PID_FILENAME,))]}

def task_restart():
    return {'actions': [(start_server, (START_SERVER, PID_FILENAME, True))],
            'file_dep': glob.glob('*.py'),
            'uptodate': [False],
            }

In order to auto-reload/restart the server we need to be able to start and stop it with two independent commands.
So when starting the server we create a text file containing the PID of the server process.

The start_server function takes a boolean parameter ‘restart’ to control the behaviour when there is already a pid file.

task_restart

Usually ‘file_dep’ is used to indicate when a task is up-to-date, but in this case we use it just to trigger a re-execution of the task in ‘auto’ mode.

So apart from the action to restart the server, the task’s file_dep controls which files to watch for modifications. Since we always want to start the server when the task is called, we set the ‘uptodate’ parameter to false.
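If you also want a restart when non-python files change, just extend file_dep. A minimal variation of the task above (the templates/*.html pattern is an assumption about your project layout):

def task_restart():
    # watch python sources plus (hypothetically) HTML templates
    watched = glob.glob('*.py') + glob.glob('templates/*.html')
    return {'actions': [(start_server, (START_SERVER, PID_FILENAME, True))],
            'file_dep': watched,
            'uptodate': [False],
            }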

To use it just type:

$ doit auto restart
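When you are done, the server can be stopped with the stop task:

$ doit stop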




A faster (but incomplete) implementation of SCons on top of doit

19 07 2011

Motivation

doit is an automation tool. It is a kind of build-tool, but more generic…

My motivation was to demonstrate how to create a specialized interface for defining tasks. doit can be used for many different purposes, so its default interface can be quite verbose if compared to tools created to solve one specific problem.

Instead of creating an interface myself I decided to use an existing interface. I picked SCons. So the goal was to be able to build C/C++ project using an existing SConstruct file without any modification. And of course it should be as good as SCons on dependency tracking, ensuring always a correct result.
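For reference, this is the flavour of SConstruct file I wanted to support (plain SCons syntax, nothing specific to this project):

# SConstruct
env = Environment()
env.Program('hello', ['hello.c'])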

A secondary goal was to make it fast.

Note that this implementation is very far from complete. I only implemented the bare minimum to get some benchmarks running.

Implementation

I won't go into the gory details… Just a few notes. You can check the code here.

In docons.py there is an implementation of the API available in SConstruct files. When a “Builder Method” (like Program or Object) is executed, a reference to the builder is saved in a global variable. These “Builder Methods” are actually implemented as classes that can generate dictionaries representing doit tasks.
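A rough sketch of the idea (the class body and the gcc command line are illustrative, not the actual docons code):

BUILDERS = []  # global registry, filled while the SConstruct file executes

class Object(object):
    """Records a compile request; can later emit a doit task dict."""
    def __init__(self, source):
        self.source = source
        self.target = source.rsplit('.', 1)[0] + '.o'
        BUILDERS.append(self)

    def to_task(self):
        return {'name': self.target,
                'actions': ['gcc -c -o %s %s' % (self.target, self.source)],
                'file_dep': [self.source],
                'targets': [self.target]}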

In doit the configuration file that defines your tasks is called dodo.py. In this case the end user won't edit this file directly. dodo.py will import the SCons API namespace from docons, then it will execfile the SConstruct file and collect the tasks from the “Builder Methods”.
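Schematically, dodo.py does something like this (a sketch only; ‘api’ and ‘BUILDERS’ are placeholder names, not docons' actual attributes, and execfile is Python 2, like the rest of the code from that time):

import docons

def task_build():
    namespace = dict(docons.api)       # expose Program, Object, ...
    execfile('SConstruct', namespace)  # run the user's build description
    for builder in docons.BUILDERS:
        yield builder.to_task()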

Creating tasks for compile/link is straightforward. The hard part is automatically finding the dependencies in the source code and taking them into account in your tasks. To find the dependencies (the #include's in C code) I am using the same C preprocessor module used by SCons.

SCons uses the concept of a “Scanner” function associated with a Builder. In doit the implicit dependencies are “calculated” in a separate task. The dependencies are then put into the build (compile/link) tasks through calc_dep (calculated dependencies).
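To give an idea of the mechanism, here is a minimal calc_dep example, with a naive regex scan standing in for the real C preprocessor (main.c is just a placeholder):

import re

def scan_includes(source):
    # toy scanner; the real implementation uses SCons' C preprocessor
    with open(source) as source_file:
        headers = re.findall(r'#include\s+"(.+?)"', source_file.read())
    return {'file_dep': headers}

def task_scan():
    return {'actions': [(scan_includes, ('main.c',))]}

def task_compile():
    return {'actions': ['gcc -c -o main.o main.c'],
            'file_dep': ['main.c'],
            'calc_dep': ['scan'],
            'targets': ['main.o']}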

A faster implementation

It seems SCons creates the whole dependency graph before starting to execute the tasks/builders. Because of this it doesn’t scale so well when the number of files increases.

doit creates the dependency graph dynamically during task execution. But even on a no-op build it will end up with a complete graph, because it checks all tasks' dependencies.

tup is a build-tool that saves the dependency graph in a SQLite database. I decided to give its approach a try. So I created “dup” – sorry for the name :) . It still reads the build configuration from SConstruct files, but it keeps a SQLite database mapping each target to all of its dependencies. This enables much faster no-op and incremental builds. The underlying dependency graph of a target is only built if required.
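As a toy illustration of the storage idea (not dup's actual schema):

import sqlite3

conn = sqlite3.connect('deps.db')
conn.execute('CREATE TABLE IF NOT EXISTS deps (target TEXT, dep TEXT)')

def save_deps(target, deps):
    # replace the stored dependency list of a single target
    conn.execute('DELETE FROM deps WHERE target = ?', (target,))
    conn.executemany('INSERT INTO deps VALUES (?, ?)',
                     [(target, dep) for dep in deps])
    conn.commit()

def deps_of(target):
    # reads only the requested target's rows, never the whole graph
    rows = conn.execute('SELECT dep FROM deps WHERE target = ?', (target,))
    return [dep for (dep,) in rows]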

Benchmarks

I did some very basic benchmarks of SCons and my two SCons implementations. The benchmarks were created using the gen-bench script from the wonderbuild benchmarks.

The gen-bench script was run with the arguments “50 100 15 5”. This generates 10000 tiny interdependent C++ source and header files, to be built into 50 static libs.

All benchmarks were run on an intel i3 550 quad-core (3.20GHz), running Ubuntu 10.10, python2.6.6, doit 0.13.0, SCons 2.0.0. All benchmarks were run using a single process.

The graph doesn’t include the full build time, which was: SCons=266 seconds, docons=249 seconds, dup=253 seconds.

  • no-op 1 lib -> no-operation build when all files are up-to-date and only one of the 50 libraries was selected to be built (scons build-scons/lib_0/lib0.a)
  • no-op -> no-operation build
  • partial build – cpp file -> only the file lib_1/class_0.cpp was modified; rebuilds 1 object file and 1 lib
  • partial build – hpp file -> only the file lib_0/class_0.hpp was modified; rebuilds 30 object files and 14 libs

Analysis

  1. comparing SCons and docons on a no-op build, you can see that doit is considerably faster than SCons at creating the dependency graph and checking what to build.
  2. comparing “no-op 1 lib” with “no-op”, you can see how both SCons and docons suffer a performance degradation from creating the dependency graph (5.2 and 3.6 times slower, respectively), while dup's no-op build is barely influenced by the size of the dependency graph.
  3. all 3 solutions show almost no difference between a no-op build and a partial/incremental build where a single source file is modified.
  4. as the number of built objects increases, the advantage of dup over docons is reduced, because the dependency graph of the affected tasks needs to be built.

Should you stop using SCons?

Probably not. The implementation is very incomplete and probably buggy in many ways. This code was written just as a proof of concept (and for fun) to check how powerful, flexible and fast doit can be.

I personally have no interest in developing a C/C++ build tool, and if I were to build one I would create a different interface from the one used by SCons.





appengine & virtualenv

21 11 2010

UPDATE: updated for appengine 1.6.1; now uses gaecustomize.py

This article will explain how to setup Google AppEngine (GAE) with virtualenv.

GAE does not provide a “setup.py” to make the SDK “installable”; it is supposed to be used from a folder without being “installed”. GAE actually forbids the use of any python library in the site-packages folder. All included libraries must be in the same folder as your application; this allows GAE to automatically find and upload third-party libraries together with your application code when you upload the code to GAE servers.

So what would be the advantages of using virtualenv with GAE? The main reason is to have an environment to run unit-tests and functional tests. It also allows us to use the interactive shell to perform operations on the DB, and it ensures you are using the correct python version.
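For example, the SDK's testbed module lets you write plain unittest test-cases against in-memory service stubs. A minimal sketch:

import unittest
from google.appengine.ext import testbed

class DatastoreTest(unittest.TestCase):
    def setUp(self):
        # activate an in-memory datastore stub for this test
        self.testbed = testbed.Testbed()
        self.testbed.activate()
        self.testbed.init_datastore_v3_stub()

    def tearDown(self):
        self.testbed.deactivate()

    def test_put_and_get(self):
        pass  # datastore operations against the stub go here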

Step 0 – install App Engine SDK

Make sure you use 1.6.1 or later, as an important bug was fixed in this release. Install it as described in the official docs. Tested with virtualenv 1.6.4.

Step 1 – create and activate a virtualenv

Same as usual…


$ virtualenv --python python2.5 --no-site-packages gae-env
$ source gae-env/bin/activate

Step 2 – add google_appengine path

Add a path configuration file named “gae.pth” to the virtualenv site-packages with the path to google_appengine. This way google_appengine will be in sys.path, enabling it to be imported by other modules.

You will need to adjust the content of the file according to where you created your virtualenv and google_appengine location. Mine looks like this:


$ cat gae-env/lib/python2.5/site-packages/gae.pth
../../../../google_appengine

Simple test to make sure your gae.pth is correct:

(gae-env)$ python
>>> from google import appengine

If you did not get any exception you are good to go.

Step 3 – fix path for third-party libs

The AppEngine SDK comes with a few third-party libraries. They are not in the same path as google’s libraries. If you look at dev_appserver.py you will see a function called fix_sys_path; this function adds the paths of the third-party libraries to python’s sys.path. One option would be to add these paths to gae.pth… But I prefer to use the function fix_sys_path, so there is less chance of problems with future releases of the SDK.

Note that this will not look at your config in app.yaml, so you might need to add some extra paths yourself. The example below uses webob version 1.1.1 instead of the default one.

Path configuration files can also execute python code if the line starts with an import. Add a module gaecustomize.py to site-packages:

gae-env/lib/python2.5/site-packages/gaecustomize.py

def fix_sys_path():
    try:
        import sys, os
        from dev_appserver import fix_sys_path, DIR_PATH
        fix_sys_path()
        # must be after fix_sys_path
        # uses non-default version of webob
        webob_path = os.path.join(DIR_PATH, 'lib', 'webob_1_1_1')
        sys.path = [webob_path] + sys.path
    except ImportError:
        pass

And modify gae.pth so it calls the above module:

gae-env/lib/python2.5/site-packages/gae.pth

../../../../google_appengine
import gaecustomize; gaecustomize.fix_sys_path()

For some unknown reason gae.pth is processed twice, and on the first pass google_appengine is not added to sys.path. That's why I explicitly call the function fix_sys_path.

Check if it is working fine:


(gae-env)$ python
>>> import yaml

Again, you should not get any exceptions here…

Step 4 – add dev_appserver.py to bin

Not really required but handy.


gae-env/bin $ ln -s ../../google_appengine/dev_appserver.py .

Conclusion

Now you have an isolated environment running AppEngine! But pay attention: libraries used by your production code should not be installed in your virtualenv; you should do it the “GAE way” and link them from your application folder. Install in the virtualenv only stuff used by your tests. Check the site.py docs for more details on using .pth files.