ipythonblocks.org Move: Part 4 – Application Updates

This is Part 4 in a series of blog posts describing my move of ipythonblocks.org from Rackspace to Heroku. In this post I’ll describe the updates I’ve made to the application layer of ipythonblocks.org. Other posts are:

  • Part 1: Introduction and Architecture
  • Part 2: Data Migration
  • Part 3: Database Interface Updates
  • Part 4: Application Updates

The application logic is largely unchanged in this update; the bulk of the changes support providing SQLAlchemy sessions so the database can be accessed during requests. (See Part 3 for a discussion of the database interface layer of ipythonblocks.org.)
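
To illustrate the kind of pattern that enables, here is a generic sketch (not the actual ipythonblocks.org code; the handler name and connection URL are placeholders) of a base Tornado handler that opens a SQLAlchemy session before each request and closes it afterwards:

import tornado.web
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

# Placeholder connection URL; the real app gets this from --db_url.
engine = create_engine("postgresql://localhost/example")
Session = sessionmaker(bind=engine)

class BaseHandler(tornado.web.RequestHandler):
    def prepare(self):
        # Runs before the handler method for every request.
        self.session = Session()

    def on_finish(self):
        # Runs after the response has been sent.
        self.session.close()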

Application Overview

ipythonblocks.org is powered by Tornado, which combines an application framework, web server, and asynchronous features. On Heroku the application is started with the command

python -m app.app --port=$PORT --db_url=$DATABASE_URL

$PORT and $DATABASE_URL are environment variables provided by Heroku to specify, respectively, which port to listen on and where to find the Postgres database attached to the app. These are parsed from the command line by Tornado’s options module and made available globally on the tornado.options.options variable. Continue reading “ipythonblocks.org Move: Part 4 – Application Updates”
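
A minimal sketch of that startup pattern (a generic illustration; the option defaults and the empty handler list are placeholders, not the real app code):

from tornado.options import define, options, parse_command_line
import tornado.ioloop
import tornado.web

define("port", type=int, default=8888, help="port to listen on")
define("db_url", type=str, default="", help="database connection URL")

def main():
    parse_command_line()  # populates tornado.options.options from sys.argv
    app = tornado.web.Application(handlers=[])  # real handlers omitted
    app.listen(options.port)
    tornado.ioloop.IOLoop.current().start()

if __name__ == "__main__":
    main()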

ipythonblocks.org Move: Part 3 — Database Interface

This is Part 3 in a series of blog posts describing my move of ipythonblocks.org from Rackspace to Heroku. In this post I’ll describe the updates I’ve made to the database interface module of ipythonblocks.org. Other posts are:

  • Part 1: Introduction and Architecture
  • Part 2: Data Migration
  • Part 3: Database Interface Updates
  • Part 4: Application Updates

The big change to the database interface module was the switch from dataset to SQLAlchemy for database abstraction. This involved using the ORM models described in Part 2, removing the JSON de/serialization functions that were needed with SQLite, removing the use of memcached, and updating the tests to use a Postgres database to match production. The full diff is here, but I’ll break down the important points below. Continue reading “ipythonblocks.org Move: Part 3 — Database Interface”
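
As background, the standard SQLAlchemy “session scope” pattern that this kind of switch usually builds on looks roughly like the sketch below (a generic example, not the exact ipythonblocks.org helper; the connection URL is a placeholder):

from contextlib import contextmanager
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

engine = create_engine("postgresql://localhost/ipythonblocks")  # placeholder URL
Session = sessionmaker(bind=engine)

@contextmanager
def session_scope():
    # Provide a transactional scope around a series of operations.
    session = Session()
    try:
        yield session
        session.commit()
    except Exception:
        session.rollback()
        raise
    finally:
        session.close()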

ipythonblocks.org Move: Part 2 – Data Migration

This is Part 2 in a series of blog posts describing my move of ipythonblocks.org from Rackspace to Heroku. In Part 1 of this series I described my motivation for the move and the broad changes I expect to make as part of the migration. In this post I’ll describe the grid data model and how I migrated the existing grid data from SQLite to Postgres. Other posts are:

  • Part 1: Introduction and Architecture
  • Part 2: Data Migration
  • Part 3: Database Interface Updates
  • Part 4: Application Updates

Continue reading “ipythonblocks.org Move: Part 2 – Data Migration”
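
As a taste of what the post covers, here is a hypothetical SQLAlchemy ORM model for a posted grid; the table and column names are illustrative assumptions, not the real schema:

from sqlalchemy import Column, DateTime, Integer, String
from sqlalchemy.dialects.postgresql import JSONB
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Grid(Base):
    __tablename__ = "grids"  # assumed table name

    id = Column(Integer, primary_key=True)
    created_at = Column(DateTime)
    ipb_version = Column(String)  # hypothetical metadata column
    grid_data = Column(JSONB)     # block colors stored as JSON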

ipythonblocks.org Move: Part 1

This is Part 1 in a series of blog posts describing my move of ipythonblocks.org from Rackspace to Heroku. In this first post I’ll describe the existing deployment and what I intend to change during the migration. Other posts are:

  • Part 1: Introduction and Architecture
  • Part 2: Data Migration
  • Part 3: Database Interface
  • Part 4: Application Updates

As a side project I maintain a Python library called ipythonblocks that displays colored grids in a Jupyter Notebook. (See also this intro blog post.) It can be useful for teaching or for a bit of fun art. I also maintain the website ipythonblocks.org that allows users to post their ipythonblocks grids on the internet to be shared. ipythonblocks.org has been hosted on Rackspace since I first launched it, but now I’m migrating the site to Heroku for easier maintenance and deployment. Continue reading “ipythonblocks.org Move: Part 1”
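
For anyone who hasn’t seen ipythonblocks before, here’s a small usage sketch of the kind of grid it draws in a notebook (the checkerboard pattern is arbitrary):

from ipythonblocks import BlockGrid

grid = BlockGrid(8, 8, fill=(0, 0, 200))  # 8 x 8 grid of blue blocks
for block in grid:
    if (block.row + block.col) % 2 == 0:
        # recolor alternating blocks red
        block.red = 200
        block.blue = 0
grid.show()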

Palettable 3.0 Released

I’m happy to announce the release of Palettable version 3.0. Palettable is a Python library that packages a variety of color palettes for use with matplotlib or really anywhere. Here’s the full diff since the last release.

This release includes a number of new palettes.

The new cmocean, matplotlib, and MyCarta palettes are created from data that contains 256 color points per palette. By default, palettes are created with lengths of 3 to 20 colors, but you can request longer ones via the get_map function. For example, to get the matplotlib Viridis palette with 200 color points:

In [1]: import palettable

In [2]: palettable.matplotlib.get_map('Viridis_200')
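
Palette objects also expose matplotlib-friendly attributes such as mpl_colors and mpl_colormap, so using a palette as a colormap can look roughly like this sketch (the data here is made up):

import numpy as np
import matplotlib.pyplot as plt
from palettable.matplotlib import Viridis_20

data = np.random.random((10, 10))
plt.imshow(data, cmap=Viridis_20.mpl_colormap)  # palette as a matplotlib colormap
plt.show()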

You can find Palettable on the web at:

P.S.: Here’s a little demo notebook.

Python 3 Universally

Frustration

A Python user is starting a project and thinks to themselves, “Yay, new code! I can use Python 3 for this!” They install the latest Anaconda for Py3 and get to work. A few days and hundreds of lines of code later, they find out that a particular library they need (maybe imposm.parser) only supports Python 2. Our well-intentioned user sighs, re-installs Anaconda for Py2, and carries on. Maybe next time (or maybe not). (This is a semi-autobiographical story.)

Elsewhere, a Python library maintainer is excited about Py3’s new asyncio module and could put it to immediate use but doesn’t want to alienate users who are stuck on Py2.


There are many valid reasons to be using Py2 today: a dated dependency, the inertia of existing code, not wanting to break a working setup, not knowing how/why to switch, and lack of time.

There are also many valid reasons for wanting to develop exclusively for Py3: access to new features, reduced support burden, simplified maintenance, wanting to get ahead of the 2020 end-of-support for Py2, and lack of time. These tensions have the potential to create much frustration in the Python community, but I think with some intentional effort on the part of Python developers and leaders it will all be fine. Continue reading “Python 3 Universally”

C Extensions for Python 2 and 3

One of my upcoming tasks at work is converting Pandana to support both Python 2 and 3. The tricky bit is that Pandana has a C extension written in plain C using the Python 2 C-API, which is not compatible with Python 3.

It seems like the best way to have a C extension that supports both Python 2 and 3 is to not write the extension in C. These days there are a number of alternatives that allow you to write interfaces in Python or something like Python (Cython). I decided to make a sample project with some C functions to wrap so that I could try out CFFI, Cython, and the standard library ctypes module.
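
As a small taste of the ctypes approach, wrapping a hypothetical C function double add(double, double) from a compiled shared library (the library name here is made up) looks roughly like:

import ctypes

# Load a hypothetical shared library built from the sample C code.
lib = ctypes.CDLL("./libcext23.so")
lib.add.argtypes = (ctypes.c_double, ctypes.c_double)
lib.add.restype = ctypes.c_double

print(lib.add(1.5, 2.5))  # prints 4.0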

You can find the project with examples of all three and a longer writeup at https://github.com/jiffyclub/cext23. Pull requests are welcome on the repo with further examples!

Office Hours at SciPy 2015

Edit: 2015/7/1: Sessions will now be held each day of the conference during the afternoon coffee breaks from 3 – 3:30 PM.

Edit: 2015/7/2: Sessions will be held in room 210 on the main level of the conference center.

This year at SciPy 2015 I’d like to run some informal “office hours” sessions to help people with any questions they might have. I can imagine questions about:

  • scientific Python libraries (NumPy, SciPy, Pandas, matplotlib…)
  • software installation (Anaconda, conda, pip…)
  • software packaging
  • Git & GitHub
  • the command line (shell)
  • web applications
  • much more!

The sessions will be during the afternoon coffee breaks 3 – 3:30 PM each day of the conference (Wednesday – Friday). The SciPy organizers have very kindly reserved room 210 for the sessions. Follow me on Twitter for any last minute updates.

If there seems to be significant interest I’ll try to find times for some additional sessions, but that might be hard to do.

Whether you’ve got questions or answers, I hope you’ll join!

SciPy 2015 Talk Help

In the interest of helping to improve the diversity and beginner friendliness of the SciPy conference, I’m offering to help first-time speakers from underrepresented groups with their talk proposals and potential talk preparations for SciPy 2015. If that sounds like you and you’d like my help editing a proposal and/or preparing a talk, send me an email.

Notes:

  • The deadline for proposals is April 1
  • The conference is July 8-10 in Austin, Texas
  • SciPy has a Code of Conduct
  • SciPy is committed to diversity
  • SciPy has some financial aid
  • I will be at the conference
  • I’m not a conference organizer, but I have in the past helped with talk selection (and may again this year)
  • I have never given a talk at SciPy (except lightning talks)

P.S. If you’re looking for some talk ideas, try this post.

SciPy 2015 Talk Ideas

The SciPy 2015 call for proposals is open until April 1. In case anyone wants to give a talk but doesn’t have an idea, I came up with a few:

  • introduction to testing with a focus on numerics
  • guide to profiling
  • introduction to packaging and distribution
  • scidb
  • xray
  • bcolz
  • which tool to use for which job (cover core packages)
  • data visualization options
  • write a numpy ufunc in Python, Cython, and C
  • roundup of high-performance options (C, Cython, Numba, Parakeet, etc.)

Thanks to Rob Story for some suggestions. If you’ve got ideas for talks you’d like to see, leave a comment!

(I will be at SciPy 2015, but I’m organizing a Software Carpentry tutorial so I probably won’t be submitting a talk proposal.)

P.S. If you’re a first-time speaker from an underrepresented group thinking about giving a talk at SciPy 2015, I’m offering to help with proposal editing and talk prep.
