Brandon Konkle

Principal Engineer, type system nerd, Rust enthusiast, supporter of social justice, loving husband & father, avid comic & manga reader, studying Japanese.

I’m a Software Architect with more than 15 years of experience creating high performance server and front-end applications targeting web and mobile platforms, & today I lead a team at Formidable Labs.

Django Dev, Test, and Prod Environments Revisited

Back in November, I posted a detailed entry about the environments I used to develop, test, and deploy my Django applications. Since then, I've made a lot of changes to my configuration that I feel have helped boost my productivity and effectiveness. I wanted to talk a little bit about these changes here, and invite readers to share ideas below in the comments about how they do things differently.

This article was written to build upon the original entry and highlight the changes. You may want to read that entry first to better understand what I refer to here.

I still use Ubuntu exclusively on my desktop, laptop, and servers. I've never been more convinced that the era of the Microsoft monopoly is coming to a close, and I never cease to be amazed by the work that the Ubuntu developer community puts out. I also still develop with the latest version of Eclipse, using the Aptana, PyDev, and Subclipse plugins. One new plugin I've incorporated, however, is MercurialEclipse.

Version Control

Yes, I've been won over by the dark side of distributed version control systems. There are several reasons that I've decided to switch from Subversion to Mercurial for my internal source control. One of the biggest reasons is speed. I was already noticing sluggishness in my big all-in-one SVN repo. I started researching alternatives, and found a lot of anecdotal evidence suggesting that distributed systems were much faster. This article by Robert Fendt came out after I switched, but it further reinforces that point. SVN lags behind the three major DVCS systems in most operations.

Another advantage I wanted was the ability to make multiple "personal" commits to my local copy of the repository before pushing major changesets to the central repository (which itself is really just a convention in a DVCS).

Mercurial was the obvious choice for me because it was based on Python, the language I am hopelessly biased towards. (Edit: See comments below - Bazaar is also based on Python.) Creating a new repository in Mercurial takes all of a couple seconds. This turned out to be a major benefit to me. As I mentioned in my previous environment entry, I was organizing all of my projects under one SVN repo. With Mercurial, it was very easy to create separate repositories for each project which are automatically picked up by the Web interface.
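
Setting up one of those per-project repositories really is just a matter of running hg init at the project root - something like this, with a placeholder path rather than my real layout:

cd ~/projects/myproject
hg init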

I no longer use the branches, tags, and trunk convention because I've found it's easier for me without them. When I need to test something, I push my development repo to my central server. I then pull from the testing server, update, and test. When I'm ready to deploy, I push any changes back to the central server and then use Fabric to pull the changes to the production server and update. I'll talk more on Fabric later in this post.
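
To make that workflow concrete, the round trip looks roughly like this (assuming the central server is configured as each clone's default path, and leaving the Fabric half for later):

# On a development machine: send local changesets to the central server
hg push

# On the testing server: grab the new changesets and update the working copy
hg pull
hg up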

Settings

I've tried a couple of different ways to manage settings across the development, testing, and production environments. I've finally settled on an approach using Python's config file parser that is completely automatic and invisible to me during regular use. First, I filled in my settings.py file with all of my production settings. Then, on each of my machines, I created a directory in my home folder called .adoleo and placed a small config file inside it. In that config file, I created a section called environment with a single setting called type. On my development machines I set the type to dev, on the testing server I set it to test, and on the production server I set it to prod. That way, settings.py can read the config file with Python and automatically override the production settings with the appropriate values for whatever environment it finds itself in.
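
The config file itself is tiny. On a development box, for example, it contains nothing more than something like this (the section and option names just need to match what settings.py reads later):

[environment]
type = dev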

The top of my settings.py file contains my production settings, like any other Django project would:

DEBUG = False
TEMPLATE_DEBUG = DEBUG

ADMINS = (
    ('Brandon Konkle', 'brandon.konkle@adoleo.com'),
)

MANAGERS = ADMINS

DATABASE_ENGINE = 'my_db_engine'
DATABASE_NAME = 'my_db_name'
DATABASE_USER = 'my_db_user'
DATABASE_PASSWORD = 'my_db_password'
DATABASE_HOST = ''
DATABASE_PORT = ''

And continuing on as usual until I get down to the bottom of the settings file:

# Import config to determine environment, and then override prod settings
import os
import ConfigParser

config_loc = os.path.join(os.path.expanduser('~'), '.adoleo', 'config.file')

env_type = False

if os.path.exists(config_loc):
    config = ConfigParser.SafeConfigParser()
    config.read(config_loc)

    try:
        env_type = config.get('environment', 'type')
    except ConfigParser.Error:
        # Missing section or option - stick with the production defaults
        pass

if env_type == 'dev':
    DEBUG = True

    DATABASE_ENGINE = 'my_dev_db_engine'
    DATABASE_NAME = 'my_dev_db_name'
    DATABASE_USER = 'my_dev_db_user'
    DATABASE_PASSWORD = 'my_dev_db_password'

    MEDIA_ROOT = '/my/dev/media/root'

    TEMPLATE_DIRS = (
        '/my/dev/templates',
    )
elif env_type == 'test':
    DATABASE_USER = 'my_test_db_user'
    DATABASE_PASSWORD = 'my_test_db_password'

    MEDIA_ROOT = '/my/test/media/root/'

    TEMPLATE_DIRS = (
        '/my/test/templates',
    )

This way, if the config file contains a dev or test indicator, I override the needed settings with values for that particular environment. If the value is prod, or there is a problem reading the config file, it defaults to the production settings.

Development Database Storage

For a while I ran a full-scale PostgreSQL server on each of my development machines. As I mentioned in my previous post, I had to run pg_dump every time I needed to switch between development machines. Also, if I needed to wipe and reinstall a dev machine, I had to recreate the PostgreSQL environment completely. Fixtures are one way to address this problem, but I ran into some issues that made them a bit tricky.

My solution was to use SQLite for development. I don't know why I didn't think of it earlier - it's so easy and simple! I go ahead and commit my local database file to the Mercurial repository, and then if I need to switch machines I can run a simple hg pull && hg up on the new machine and instantly use the current dev database.
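
In settings terms, that just means the dev override points at a SQLite file living inside the project directory, so it travels with the repository. A rough sketch (the dev.db filename and its location are my own illustration, not necessarily the layout you'd choose):

if env_type == 'dev':
    DEBUG = True

    # A SQLite file stored alongside settings.py, so it gets committed
    # and pulled with the rest of the Mercurial repository
    DATABASE_ENGINE = 'sqlite3'
    DATABASE_NAME = os.path.join(os.path.dirname(__file__), 'dev.db')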

Deploying to Testing and Production with Fabric

I've recently started using the excellent Fabric tool to deploy my projects to testing and production. Working with Mercurial was a challenge at first until I discovered the -R option, which designates the repository to work with. In my fabfile.py script I first set a few details based on the environment:

repo_user = 'mercurialuser'
repo_pass = 'mercurialpassword' # Could also use getpass()
repo_url = 'central.mercurialserver.com/reponame'

def test():
    global repo_loc  # Make the repo location visible to deploy() below
    config.fab_hosts = ['testing.server.com']
    repo_loc = '/mercurial/test/repo/location'

def production():
    global repo_loc
    config.fab_hosts = ['production.server.com']
    repo_loc = '/mercurial/prod/repo/location'

Then I use commands like this to deploy:

def deploy():
    run('hg -R %s pull http://%s:%s@%s' % (repo_loc, repo_user,
                                           repo_pass, repo_url))
    run('hg -R %s up' % repo_loc)
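
With those functions in place, a deployment is just a matter of chaining the environment command with deploy on the command line - for example (assuming the Fabric release from around this time, which runs the listed commands in order):

fab test deploy
fab production deploy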

That's it for now. What tricks do you use for managing your Django environments?
