Scientists spend an increasing amount of time building and using software. However, most scientists are never taught how to do this efficiently. As a result, many are unaware of tools and practices that would allow them to write more reliable and maintainable code with less effort. We describe a set of best practices for scientific software development that have solid foundations in research and experience, and that improve scientists' productivity and the reliability of their software.
Over the course of eight days in October I taught at three Software Carpentry boot camps in California. It was utterly exhausting and tremendously rewarding and hopefully I’ll do it again sometime. I want to take a moment to post a little feedback from the third boot camp and mention some of the things I learned on the trip.
When teaching unit testing, it would be great to have a way to run nose or pytest inside an IPython Notebook. For example, a %nosetests magic could do test collection in the Notebook namespace and then do its usual run and reporting. Of course it's always possible to write test functions and then just call them, but having a report that compiles everything in one place is nice. Plus it could look just like nosetests called from the command line.
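In the meantime, the manual approach looks something like this: define test functions in a notebook cell and call them yourself. (The functions here are just illustrative, not from any real notebook.)

```python
# Define a function and some tests for it in one notebook cell...
def add(a, b):
    return a + b

def test_add_ints():
    assert add(1, 2) == 3

def test_add_strings():
    assert add("a", "b") == "ab"

# ...then call each test by hand in another cell. Each call either
# passes silently or raises an AssertionError; there's no
# consolidated report like the one nosetests prints at the end.
test_add_ints()
test_add_strings()
```

This works, but as the test count grows you have to remember to call every function, and a single failure stops the cell instead of being tallied alongside the passes.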
Unfortunately for this idea, these testing frameworks have mostly been engineered to do their test collection with the file system as the starting point. In a couple hours of fiddling I couldn't figure out how to get either nose or pytest to do test collection in a notebook's namespace. I'm sure it could be done with enough hacking.
Just for kicks, though, I threw together a little IPython line magic that does its own limited test collection, running, and reporting. You can see it via nbviewer and grab it on GitHub. This magic only collects functions whose names begin with "test", and the reporting doesn't include tracebacks when there are failures or errors. But you do get the exceptions themselves.
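The core of such a magic can be sketched without any IPython machinery: scan a namespace for callables whose names start with "test", run each one, and sort the outcomes into passes, failures, and errors the way nose does. This is an illustrative reimplementation of the idea (the `run_tests` and `%runtests` names are made up), not the actual code from the notebook on GitHub:

```python
def run_tests(namespace):
    """Collect and run callables whose names start with 'test'.

    Returns a dict mapping outcome ('pass', 'fail', 'error') to lists
    of test names, mimicking nose's pass/fail/error distinction:
    an AssertionError is a failure, any other exception is an error.
    """
    results = {"pass": [], "fail": [], "error": []}
    for name in sorted(namespace):
        obj = namespace[name]
        if name.startswith("test") and callable(obj):
            try:
                obj()
            except AssertionError:
                results["fail"].append(name)
            except Exception:
                results["error"].append(name)
            else:
                results["pass"].append(name)
    return results
```

In an actual line magic this would be wired up to the notebook's namespace, roughly:

```python
# Hypothetical registration sketch -- IPython provides
# register_line_magic and get_ipython(), but this exact magic
# name and wiring are assumptions, not the published code.
from IPython.core.magic import register_line_magic

@register_line_magic
def runtests(line):
    return run_tests(get_ipython().user_ns)
```

so that running `%runtests` in a cell collects from whatever test functions are currently defined in the notebook.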