ipythonblocks.org Move: Part 4 – Application Updates

This is Part 4 in a series of blog posts describing my move of ipythonblocks.org from Rackspace to Heroku. In this post I’ll describe the updates I’ve made to the application layer of ipythonblocks.org. The other posts are:

  • Part 1: Introduction and Architecture
  • Part 2: Data Migration
  • Part 3: Database Interface Updates
  • Part 4: Application Updates

The application logic is largely unchanged in this update; the bulk of the changes are to support providing SQLAlchemy sessions for database access during requests. (See Part 3 for a discussion of the database interface layer of ipythonblocks.org.)

Application Overview

ipythonblocks.org is powered by Tornado, which combines an application framework, web server, and asynchronous features. On Heroku the application is started with the command

python -m app.app --port=$PORT --db_url=$DATABASE_URL

$PORT and $DATABASE_URL are environment variables provided by Heroku to specify, respectively, which port to listen on and where to find the Postgres database attached to the app. These are parsed from the command line by Tornado’s options module and made available globally on the tornado.options.options variable.
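As a sketch of how that parsing works (the option names come from the start command above; the defaults and values here are illustrative):

```python
from tornado.options import define, options

# Option names match the start command above; defaults are illustrative.
define('port', type=int, default=8888, help='port to listen on')
define('db_url', type=str, default=None, help='SQLAlchemy database URL')

# Tornado normally parses sys.argv; an explicit list is passed here for clarity.
options.parse_command_line(
    ['app.py', '--port=5000', '--db_url=postgresql://localhost/ipb'])

print(options.port)    # 5000
print(options.db_url)  # postgresql://localhost/ipb
```

After parsing, any module in the app can read `tornado.options.options.db_url` without the value being threaded through function arguments.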

SQLAlchemy Sessions

Engine and Session Factory

ipythonblocks.org uses the SQLAlchemy ORM to interact with the database, which requires sessions. I could in theory create new connections to the database inside of every function in the database interface, but it’s more efficient to let SQLAlchemy manage connections via an Engine and session factory. These are meant to live for the lifetime of an application process, so they can be created at application startup time. With Tornado this can be accomplished by creating a subclass of the Tornado application class and adding engine/sessionmaker creation to its __init__:

class AppWithSession(tornado.web.Application):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.engine = sa.create_engine(tornado.options.options.db_url)
        self.session_factory = sessionmaker(bind=self.engine)

An application instance is created once at application startup time and made available to request handlers. The tornado.options.options.db_url variable is the result of the command-line option parsing mentioned above.
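Wiring this together at startup might look something like the following sketch (the empty handler list and the main function are illustrative stand-ins, not the app’s actual code, and the sqlite default is only there so the snippet runs on its own):

```python
import sqlalchemy as sa
import tornado.ioloop
import tornado.options
import tornado.web
from sqlalchemy.orm import sessionmaker

tornado.options.define('port', type=int, default=8888)
tornado.options.define('db_url', type=str, default='sqlite://')  # illustrative default


class AppWithSession(tornado.web.Application):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        # The engine and session factory live as long as the application process.
        self.engine = sa.create_engine(tornado.options.options.db_url)
        self.session_factory = sessionmaker(bind=self.engine)


def main():
    tornado.options.parse_command_line()
    app = AppWithSession([])  # the real handler routes are omitted here
    app.listen(tornado.options.options.port)
    tornado.ioloop.IOLoop.current().start()
```

Request handlers can then reach the factory as `self.application.session_factory`, which is exactly how the sessions below are created.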

Request Scoped Sessions

The engine and session factory help manage database connections, but for the actual database communication I need session instances. The goal is to have one session per request, opened at the beginning of the request and closed at the end. Tornado has prepare and on_finish methods that run before and after requests, but they are disconnected from the request logic and don’t give you the opportunity to catch exceptions and roll back changes. Instead, I decided to add a context manager to my request handlers and wrap all DB accesses within it. Here’s how the context manager is added to a specialized subclass of Tornado’s request handler class (the context manager itself is borrowed straight from the SQLAlchemy docs):

class DBAccessHandler(tornado.web.RequestHandler):
    @contextmanager  # contextlib.contextmanager
    def session_context(self):
        session = self.application.session_factory()
        try:
            yield session
            session.commit()
        except Exception:
            session.rollback()
            raise
        finally:
            session.close()

This can then be called from within request handlers to get a session without having to worry about committing, rolling back, or closing it. For example, here’s the request handler for ipythonblocks.org/random:

class RandomHandler(DBAccessHandler):
    def get(self):
        with self.session_context() as session:
            hash_id = dbi.get_random_hash_id(session)

        self.redirect('/' + hash_id, status=303)

Testing the App

Testing the application requires a slightly different database strategy than I used in Part 3 for the database interface module. There, nothing was ever committed, so at the end of each test I could roll back an open transaction to restore the DB; while testing the application, however, things do get committed. To restore the database for the next test I have testing.postgresql fully tear down and rebuild the database. Luckily, testing.postgresql has a feature that makes this process somewhat faster by caching the initial database files for reuse, and I use this feature when setting up a test database factory at the test module level:

def setup_module(module):
    module.PG_FACTORY = testing.postgresql.PostgresqlFactory(
        cache_initialized_db=True)

def teardown_module(module):
    module.PG_FACTORY.clear_cache()
setup_module and teardown_module are features of pytest that are run once before and after all the tests in the module. The module argument passed in is the test module itself, which is a bit of a trip. It can be used to set global values within the module namespace that can be accessed later. I use this tactic here instead of pytest’s amazing fixtures because to test a Tornado app I have to use unittest subclasses, which are not compatible with fixtures. (More on that below.)

At the level of individual tests I use setup and teardown methods to create sessions and to configure the tornado.options.options variable with the URL of the test database:

class UtilBase(tornado.testing.AsyncHTTPTestCase):
    def setup_method(self, method):
        self.postgresql = PG_FACTORY()
        self.engine = sa.create_engine(self.postgresql.url())
        self.Session = sessionmaker(bind=self.engine)
        self.session = self.Session()
        tornado.options.options.db_url = self.postgresql.url()

    def teardown_method(self, method):
        self.session.close()
        self.engine.dispose()
        self.postgresql.stop()
These are run before and after each test method in the file.

Testing Tornado Apps

Testing the application means simulating real requests that return synchronously within a test. That would be complicated to manage on my own, but most web frameworks/servers, Tornado included, come with utilities to help with testing. In Tornado’s case I need to subclass AsyncHTTPTestCase, which is itself a subclass of unittest.TestCase. This means I can’t use many of my favorite pytest fixtures, but I do get Tornado’s testing features. You can see above that I’ve already subclassed AsyncHTTPTestCase to create a test superclass with fixtures, so my test cases subclass UtilBase. As an example, here are a few of the tests of the /post endpoint for sending new grids to ipythonblocks.org:

class TestPostGrid(UtilBase):
    app_url = '/post'
    method = 'POST'

    def test_json_failure(self):
        response = self.get_response('{"asdf"}')
        assert response.code == 400

    def test_validation_failure(self):
        response = self.get_response('{"asdf": 5}')
        assert response.code == 400

    def test_returns_url(self):
        req = request()
        response = self.get_response(json.dumps(req))

        assert response.code == 200
        assert 'application/json' in response.headers['Content-Type']

        body = json.loads(response.body)
        assert body['url'] == 'http://www.ipythonblocks.org/bizkiL'

To see more, the bulk of the application code changes are in this commit.

What’s Next

That’s it! There were other changes that I haven’t highlighted in these posts, mostly related to logging now that I’m logging only to standard out and not to files. If you’re interested in the full set of differences made while doing this migration from Rackspace to Heroku you can see them here.

ipythonblocks.org is now up and running on Heroku: http://www.ipythonblocks.org/ (The URL now contains a www. because of Heroku’s architecture and because Hover doesn’t support compatible DNS records.) ipythonblocks has the updated URL as of version 1.7.1 and I’ve tested it with a new grid. Please give it a try and thanks for reading!
