When you write a bunch of code with complex internal dependencies in a compiled language, you need to build it. Unfortunately, we have not really solved this problem. I’ve been thinking a lot about building software for the past five years or so, and I wanted to write down some of those thoughts in a few posts.
It’s hard to believe, but good version control (git, hg, svn) is a relatively recent development. In fact, I think the idea of using hash trees to track version history dates to 2003 in Monotone. I had the misfortune to use CVS fairly extensively prior to the 2003–2007 explosion in distributed version control tools. While there are still interesting issues in version control, particularly at the largest scales, at small scale (say, less than 1 million lines of code in 10k files) it is a well-solved problem.
With git I can go back to any point in time and see exactly the source code I had. Amazing! But getting the source code is not usually all I want: what I want is the ability to reconstruct the artifacts exactly as they would have been built at that time. This is called reproducibility, and it’s not just for scientists. Havoc Pennington wrote an excellent pitch for this recently.
Havoc lays out many reasons you want reproducibility. It’s not too hard to think of even more reasons, but to be perfectly honest, to me it just seems like an obviously nice and simple property that there is little reason to give up when it is within our grasp. I think the main reason we have not demanded this property is that few have named and contemplated it. Who is in the anti-reproducibility camp?
On the other hand, who is totally committed to reproducibility? I would say I am only aware of two projects that represent the reproducibility wing of the reproducibility party: Nix and Bazel. Nix is a package manager that aims to build software packages at specific versions totally reproducibly, without the packages conflicting with one another in a DLL-hell situation. Bazel is the open source version of Google’s internal generic build tool.
I describe Bazel as make done right. Bazel is currently a few things: a way to specify a graph of build targets, a system that tracks exactly which nodes need to be rebuilt, and a highly constrained, sandboxed extension system that makes it tractable to write custom reproducible build rules.
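To make the first two points concrete, here is a minimal sketch of what a graph of build targets looks like in a BUILD file. The load path and target names here are illustrative, not from a real project:

```python
# BUILD file sketch; load path and target names are illustrative.
load("@io_bazel_rules_scala//scala:scala.bzl", "scala_library")

# One node in the build graph: a function of its declared sources.
scala_library(
    name = "core",
    srcs = glob(["core/src/main/scala/**/*.scala"]),
)

# Another node, with a declared edge to :core. Bazel rebuilds it only
# when its own sources or the output of :core actually change.
scala_library(
    name = "server",
    srcs = glob(["server/src/main/scala/**/*.scala"]),
    deps = [":core"],
)
```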
I’ve been working with Bazel for the past year on Scala support. Bazel’s model makes perfect sense to someone coming from functional programming: a build should be a pure function of the current state of the repo, and each target should be a pure function of its declared inputs. If the hashes of the dependencies don’t change, I don’t need to rebuild the output. Bazel takes this even further: we can think of tests as a build process whose outputs are test results. If the inputs to a test don’t change, we don’t need to rerun the test, just present the cached results. It’s memoized pure functions all the way down.
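Continuing the sketch above (names still illustrative), a test is just another target that declares its inputs, which is what lets Bazel treat its result as a cacheable output:

```python
load("@io_bazel_rules_scala//scala:scala.bzl", "scala_test")

# A test is just another target; its "output" is the test result.
# If neither these sources nor :core change, `bazel test //:core_test`
# can serve the cached result instead of re-running the test.
scala_test(
    name = "core_test",
    srcs = glob(["core/src/test/scala/**/*.scala"]),
    deps = [":core"],
)
```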
I think we may be on the cusp of a period in build tooling similar to distributed version control in 2003 when it comes to reproducibility. I think ten years from now, the new kids will come in baffled that we thought it was okay to have version control where checking out a version and rebuilding it might give different results, or might not build at all.
In my next post I’ll go into detail using Scala as a case study for Bazel, and share some of the pros and cons of using Bazel to build a language like Scala.