When we set out to compare the performance of two algorithms, the quality of the comparison depends wholly on the quality of the two implementations. If an algorithm is particularly difficult to implement correctly, if it is poorly understood or documented, or if its performance varies widely with any number of idiosyncratic settings, then any comparison against another algorithm is suspect.
In our case, we wanted to compare a Chaitin-Briggs graph-coloring register allocator against a Chow and Hennessy priority-based allocator, but it seemed impossible to get a fair comparison. The Chaitin-Briggs allocator is relatively well understood. It has been studied extensively. It has few parameters to guide its behavior. In contrast, Chow's allocator, although often cited as a viable competitor to the graph-coloring allocator, is none of these things. It has been implemented very few times, and its presence in the literature is more historic than analytic. Worst of all, it involves a number of implementation decisions that not only individually affect the quality of the generated code but also interact unpredictably.
Adaptivity suggests a solution to this problem. By allowing a search technique to optimize an algorithm's parameter settings, we can make definitive statements when comparing two algorithms.
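The idea can be sketched concretely. The following is a minimal illustration, not our actual framework: a simple hill-climbing search over a hypothetical set of numeric allocator settings, where `evaluate` stands in for compiling a benchmark with those settings and measuring the cost of the resulting code (lower is better). The parameter names and the stand-in cost function are invented for illustration only.

```python
import random

def adaptive_search(evaluate, params, steps=200, seed=0):
    """Hill-climb over a dict of numeric settings, minimizing evaluate().

    `evaluate` is a stand-in for running the allocator with the given
    settings and measuring the cost of the code it produces.
    """
    rng = random.Random(seed)
    best = dict(params)
    best_cost = evaluate(best)
    for _ in range(steps):
        cand = dict(best)
        key = rng.choice(sorted(cand))      # pick one setting to perturb
        cand[key] += rng.uniform(-0.5, 0.5)
        cost = evaluate(cand)
        if cost < best_cost:                # keep only improving moves
            best, best_cost = cand, cost
    return best, best_cost

# Stand-in cost surface: pretend the best weights are (2.0, 1.0).
# In practice this would be a compile-and-measure step.
def fake_cost(p):
    return (p["spill_weight"] - 2.0) ** 2 + (p["loop_weight"] - 1.0) ** 2

start = {"spill_weight": 0.0, "loop_weight": 0.0}
best, cost = adaptive_search(fake_cost, start)
```

Any number of search techniques (random restart, simulated annealing, genetic search) could replace the hill climber; the point is only that the human is removed from the parameter-tuning loop, so each algorithm competes at its tuned best.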
In this paper, we present an implementation of a priority-based register allocator as described by Chow and Hennessy. We show how to implement the allocator in an adaptive framework, and we show that the code it produces consistently matches the performance of code produced by a Chaitin-Briggs graph-coloring register allocator.
The Chaitin-Briggs algorithm has received so much attention in the literature that we felt it fair to use a non-adaptive implementation of it as our baseline.
Our results show that the two algorithms deliver similar performance, but only when we use an adaptive search algorithm to tune the Chow allocator. We believe this is the first time these two algorithms have been fairly compared side by side, and we argue that adaptivity is an effective tool for experimental computer science.