A Python user is starting a project and thinks to themselves, “Yay, new code! I can use Python 3 for this!”. They install the latest Anaconda for Py3 and get to work. A few days and hundreds of lines of code later they find out that a particular library they need (maybe imposm.parser) only supports Python 2. Our well-intentioned user sighs, re-installs Anaconda for Py2, and carries on. Maybe next time (or maybe not). (This is a semi-autobiographical story.)
Elsewhere, a Python library maintainer is excited about Py3’s new asyncio module and could put it to immediate use but doesn’t want to alienate users who are stuck on Py2.
There are many valid reasons to be using Py2 today: a dated dependency, the inertia of existing code, not wanting to break a working setup, not knowing how/why to switch, and lack of time.
There are also many valid reasons for wanting to develop exclusively for Py3: access to new features, reduced support burden, simplified maintenance, wanting to get ahead of the 2020 end-of-support for Py2, and lack of time. These tensions have the potential to create much frustration in the Python community, but I think with some intentional effort on the part of Python developers and leaders it will all be fine.
One of my upcoming tasks at work is converting Pandana to support both Python 2 and 3. The tricky bit is that Pandana has a C extension written in plain C using the Python 2 C-API, which is not compatible with Python 3.
It seems like the best way to have a C extension that supports both Python 2 and 3 is to not write the extension in C. These days there are a number of alternatives that let you write interfaces in Python or a Python-like language (Cython). I decided to make a sample project with some C functions to wrap so that I could try out CFFI, Cython, and the standard library ctypes module.
You can find the project with examples of all three and a longer writeup at https://github.com/jiffyclub/cext23. Pull requests are welcome on the repo with further examples!
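To give a flavor of the ctypes approach (the only one of the three that needs no compilation step), here's a minimal sketch. The sample project wraps its own C functions; as a stand-in, this wraps sqrt from the system math library, and it runs unchanged on Python 2 and 3:

```python
import ctypes
import ctypes.util

# Locate the C math library; on some platforms math functions
# live in the main C library, so fall back to that.
libm_path = ctypes.util.find_library("m") or ctypes.util.find_library("c")
libm = ctypes.CDLL(libm_path)

# Declare the C signature: double sqrt(double)
libm.sqrt.argtypes = [ctypes.c_double]
libm.sqrt.restype = ctypes.c_double

print(libm.sqrt(9.0))  # 3.0
```

Wrapping your own extension works the same way: compile your C code to a shared library and point ctypes.CDLL at it, declaring argtypes/restype for each function you call.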
Edit: 2015/7/1: Sessions will now be held each day of the conference during the afternoon coffee breaks from 3 – 3:30 PM.
Edit: 2015/7/2: Sessions will be held in room 210 on the main level of the conference center.
This year at SciPy 2015 I’d like to run some informal “office hours” sessions to answer any questions people might have. I can imagine questions about:
- scientific Python libraries (NumPy, SciPy, Pandas, matplotlib…)
- software installation (Anaconda, conda, pip…)
- software packaging
- Git & GitHub
- the command line (shell)
- web applications
- much more!
The sessions will be during the afternoon coffee breaks, 3 – 3:30 PM, each day of the conference (Wednesday – Friday). The SciPy organizers have very kindly reserved room 210 for the sessions. Follow me on Twitter for any last-minute updates.
If there seems to be significant interest I’ll try to find times for some additional sessions, but that might be hard to do.
Whether you’ve got questions or answers, I hope you’ll join!
In the interest of helping to improve the diversity and beginner friendliness of the SciPy conference, I’m offering to help first-time speakers from underrepresented groups with their talk proposals and potential talk preparations for SciPy 2015. If that sounds like you and you’d like my help editing a proposal and/or preparing a talk, send me an email.
- The deadline for proposals is April 1
- The conference is July 8-10 in Austin, Texas
- SciPy has a Code of Conduct
- SciPy is committed to diversity
- SciPy has some financial aid
- I will be at the conference
- I’m not a conference organizer, but I have in the past helped with talk selection (and may again this year)
- I have never given a talk at SciPy (except lightning talks)
P.S. If you’re looking for some talk ideas, try this post.
The SciPy 2015 call for proposals is open until April 1. In case anyone wants to give a talk but doesn’t have an idea, I came up with a few:
- introduction to testing with a focus on numerics
- guide to profiling
- introduction to packaging and distribution
- which tool to use for which job (cover core packages)
- data visualization options
- write a numpy ufunc in Python, Cython, and C
- roundup of high-performance options (C, Cython, Numba, Parakeet, etc.)
Thanks to Rob Story for some suggestions. If you’ve got ideas for talks you’d like to see, leave a comment!
(I will be at SciPy 2015, but I’m organizing a Software Carpentry tutorial so I probably won’t be submitting a talk proposal.)
P.S. If you’re a first-time speaker from an underrepresented group thinking about giving a talk at SciPy 2015, I’m offering to help with proposal editing and talk prep.
I wrote brewer2mpl a couple years ago to help people use colorbrewer2 color palettes in Python. Since then it’s expanded to include palettes from Tableau and the whimsical Wes Anderson Palettes Tumblr, and there’s plenty of room for more palettes from other sources. To encompass the growing scope, brewer2mpl has been renamed to Palettable! (Thanks to Paul Ivanov for the name.)
The Palettable API has also been updated for the IPython age. All available palettes are now loaded at import and are available for your tab-completion pleasure. Need the YlGnBu palette with nine colors? That’s now available at palettable.colorbrewer.sequential.YlGnBu_9. Reversed palettes are also available with an _r suffix.
I hope you find Palettable useful! You can find it on the web at:
P.S.: Here’s a little demo notebook.
It has been over two years since Erik Bray and I made the first release of SnakeViz 0.1, a tool for visualizing performance profiles of Python code. It had multiple performance bottlenecks, but it worked just well enough that it took me a long time to prioritize making improvements. That time has finally come around and I’m happy to announce that SnakeViz 0.2 is now available!
The look and feel of SnakeViz remains much the same (see a screenshot), but there are some new things on the screen:
- Detailed function information shown when hovering over the visualization
- A call stack list for tracking where you are when zooming the visualization
- Controls for the depth of the displayed call tree
- Filtering of functions that take up relatively little time
Under the Hood
The first release of SnakeViz had some performance bottlenecks:
- It tried to transfer a complete call tree from the server to the client as JSON
- It tried to display the entire call tree in the sunburst visualization
Those bottlenecks limited the usefulness of SnakeViz for profiles that contained calls to a lot of functions. Version 0.2 is an almost complete rewrite intended to make SnakeViz work with larger profiles.
The first limitation is addressed by moving the building of call trees into the client application. Profile data is passed from the server to the client in close to the same form as it’s available from Python’s pstats module. Once in the client, the profile data is used to construct call trees on demand for visualization.
The second limitation is addressed by limiting how much of the profile is visualized at once. Call trees are built only to a user-specified depth, and users can opt to omit from the display functions that account for relatively little time (the “depth” and “cutoff” controls).
I and others have tested SnakeViz 0.2 with some fairly large profiles and found that it holds up. You can read more about SnakeViz in the updated docs. Please give it a try! Issues can be reported on GitHub.
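To try it on your own code you first need a profile file. A minimal sketch using only the standard library's cProfile and pstats modules, which produce the same data SnakeViz reads (the function and file names here are just for illustration):

```python
import cProfile
import os
import pstats
import tempfile

def work():
    # Something small to profile.
    return sum(i * i for i in range(10000))

profiler = cProfile.Profile()
profiler.enable()
work()
profiler.disable()

# Dump stats in the binary format cProfile writes with -o;
# this is the file format SnakeViz opens.
path = os.path.join(tempfile.mkdtemp(), "program.prof")
profiler.dump_stats(path)

# SnakeViz ships profile data to the browser in close to the
# form pstats exposes here.
stats = pstats.Stats(path)
print(stats.total_calls)
```

With SnakeViz installed, the saved file can then be visualized with `snakeviz program.prof` from the command line (or `python -m cProfile -o program.prof my_script.py` to profile a whole script in one step).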