Introduction to Ruby

  • David M. Peixotto
  • Rice Computer Science Club
  • 2006-10-17
  • pdf
I gave an introduction to the Ruby programming language to the Rice Computer Science Club. The talk covers the basics of the language with a focus on the features that make Ruby great, such as dynamic code generation.

Chow and Hennessy vs. Chaitin-Briggs Register Allocation: Using Adaptive Compilation to Fairly Compare Algorithms

  • Keith D. Cooper, Timothy J. Harvey, and David M. Peixotto
  • SMART Workshop (at HiPEAC 2008)
  • 2008-01-27
  • pdf
I gave this talk at the SMART (Statistical and Machine learning approaches to ARchitectures and compilaTion) workshop. It describes my work on using adaptive compilation to fairly compare the performance of two register allocation algorithms. We use adaptive compilation to make up for our lack of experience with one of the algorithms by having the compiler automatically derive effective tuning parameters for it. This gives us high confidence that a poor parameter selection is not the cause of any poor performance we observe.
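
To give a sense of the feedback loop involved, here is a minimal Haskell sketch of one way such a parameter search could work. It is purely illustrative: the `costOf` function stands in for compiling and measuring a benchmark with a given parameter setting, and none of these names come from the actual adaptive compiler used in this work.

```haskell
-- Hypothetical sketch of adaptive compilation as a feedback loop:
-- repeatedly perturb the allocator's tuning parameters and keep the
-- setting with the best measured cost (e.g. spill count or run time).
module AdaptiveTune where

import Data.List (minimumBy)
import Data.Ord (comparing)

type Params = [Int]          -- e.g. priority weights for the allocator
type Cost   = Double         -- e.g. dynamic spill cost of the compiled code

-- Stand-in for "compile the benchmark with these parameters and measure it".
costOf :: Params -> Cost
costOf ps = fromIntegral (sum (map (\p -> (p - 7) ^ (2 :: Int)) ps))

-- Generate neighboring parameter settings by nudging one parameter at a time.
neighbors :: Params -> [Params]
neighbors ps =
  [ take i ps ++ [p + d] ++ drop (i + 1) ps
  | (i, p) <- zip [0 ..] ps
  , d      <- [-1, 1]
  ]

-- Simple hill climbing: stop when no neighbor improves the measured cost.
tune :: Params -> Params
tune ps
  | costOf best < costOf ps = tune best
  | otherwise               = ps
  where
    best = minimumBy (comparing costOf) (neighbors ps)

main :: IO ()
main = print (tune [0, 0, 0])   -- converges to [7,7,7] for this toy cost
```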

Tuning a Priority-Based Register Allocator Using Adaptive Compilation

  • Keith D. Cooper, Timothy J. Harvey, and David M. Peixotto
  • Rice Graduate Student Colloquium
  • 2008-02-11
  • pdf
I gave this talk to the Rice Computer Science graduate student colloquium. It describes the work in my Master's thesis on using adaptive compilation to tune the performance of a global register allocation algorithm.

Parallelizing F# for Multi-Core Hardware

  • Keith D. Cooper, David M. Peixotto, and Vivek Sarkar
  • Rice Corporate Affiliates Meeting
  • 2008-10-17
  • pdf
We explore compilation and runtime strategies for explicit and automatic parallelization of codes in the F# programming language. F# is a variant of the functional programming language ML that runs on the Microsoft .NET platform. We expect the functional nature of F# to be beneficial in the context of multi-core hardware, and we seek ways to exploit this advantage. Our initial research examines the performance of several F# benchmarks that have been parallelized by hand. Each benchmark is parallelized using both the built-in F# async construct and the Microsoft Parallel Extensions to the .NET framework. The results indicate that F# is a good language for writing parallel computations and offers opportunities for both automatic and user-directed parallelization.
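
As a rough illustration of parallelizing a pure functional computation by hand, here is a sketch in Haskell rather than F# (it is not taken from the benchmarks in this talk). In F# the same shape is expressed with the async construct or the Parallel Extensions; here the sequential and parallel versions differ only in the evaluation strategy applied.

```haskell
-- Illustrative only: parallelizing independent, pure work items.
module ParSketch where

import Control.Parallel.Strategies (parList, rseq, using)

-- Some compute-heavy work per item (a stand-in for a benchmark kernel).
fib :: Int -> Integer
fib n = go n
  where
    go 0 = 0
    go 1 = 1
    go k = go (k - 1) + go (k - 2)

-- The two versions differ only in the evaluation strategy.
resultsSeq, resultsPar :: [Integer]
resultsSeq = map fib [28 .. 34]
resultsPar = map fib [28 .. 34] `using` parList rseq

main :: IO ()
main = print (sum resultsPar == sum resultsSeq)  -- run with +RTS -N to use multiple cores
```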

The Habanero Multicore Software Project

  • David M. Peixotto, Keith D. Cooper, John Mellor-Crummey, and Vivek Sarkar
  • Microsoft External Research Symposium
  • 2009-03-30
  • pdf
A poster presented at the Microsoft External Research Symposium containing a brief overview of the Habanero and HPCToolkit projects, along with detailed information on phasers and our early work on Concurrent Collections.

Low-Level Compiler Optimizations for High-Level Functional Programming Languages

  • David M. Peixotto and Keith D. Cooper
  • Rice Corporate Affiliates Meeting
  • 2009-10-14
  • pdf
Functional programming languages provide the programmer with a high-level declarative interface for describing algorithms. This high-level interface poses challenges for efficiently compiling these languages for execution on traditional machines. Many years of research have been invested to bring the efficiency of these high-level languages closer to that of traditional compiled languages, such as Fortran and C. Much of this past work relies on high-level transformations that are enabled by the functional semantics of the input language. In this research, we investigate how traditional compilation techniques can be leveraged to improve the performance of high-level functional programming languages.

Low-Level Behavior of Lazy Functional Programs

  • David M. Peixotto and Keith D. Cooper
  • Rice Graduate Student Colloquium
  • 2010-03-08
  • pdf
Lazy functional programming languages are well known for their expressiveness, clean semantics, and slow execution time. While many years of research have been invested to bring the efficiency of these high-level languages closer to that of traditional compiled languages such as Fortran and C, much of this past work relies on high-level transformations made possible by the functional semantics of the input language. In our research we investigate how traditional compilation techniques can be leveraged to improve the performance of high-level lazy functional programming languages.

Traditional compiler optimizations were developed for imperative languages and it is not clear whether they can be applied in the back-end of a compiler for a lazy functional language. In order to shed some light on whether we can hope to apply these optimizations, we examine several low-level characteristics of lazy functional programs. In this talk we present our investigation of programs written in Haskell, a state-of-the-art lazy functional programming language. We attempt to quantify the similarity of Haskell programs to C programs based on the execution metrics of dynamically instrumented binaries.

We draw three conclusions from our experiments. First, traditional Haskell programs exhibit low-level behavior that is readily distinguished from that of C programs. Second, modern Haskell programs written explicitly for efficient parallel execution show behavior much more similar to traditional C programs. Third, our results indicate that traditional compiler optimizations may find more opportunities when performed as link-time and run-time optimizations. These results suggest that traditional optimizations can have a place in a Haskell compiler, particularly as parallel programs become more common with the rise of multi-core computing.
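
As a small illustration (not code from the talk) of the kind of source-level difference that produces the low-level behavior we measure: a lazy left fold allocates a long chain of thunks, while its strict counterpart behaves much more like the equivalent C loop.

```haskell
-- Illustrative only: both sums compute the same value, but at the machine
-- level the lazy version allocates a chain of unevaluated thunks, while the
-- strict version forces the accumulator at each step and runs as a tight
-- loop. Differences like this help distinguish Haskell and C low-level profiles.
module LazyVsStrict where

import Data.List (foldl')

sumLazy :: [Int] -> Int
sumLazy = foldl (+) 0       -- builds (((0+1)+2)+3)+... as thunks

sumStrict :: [Int] -> Int
sumStrict = foldl' (+) 0    -- forces the accumulator at each step

main :: IO ()
main = do
  print (sumStrict [1 .. 1000000])
  print (sumLazy   [1 .. 1000000])  -- may need a larger stack for big inputs
```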


Fibon -- a New Benchmark Suite for Haskell

  • David M. Peixotto and Keith D. Cooper
  • Haskell Implementors Workshop
  • 2010-10-01
  • pdf
Accurate benchmarking is a critical step in the development of compiler optimizations. Inaccurate measurements and unrealistic programs can produce results that lead to incorrect conclusions about the effectiveness of an optimization. In this talk we discuss the Fibon benchmark suite, a new benchmark suite that embraces modern Haskell programs written by the Haskell community. We motivate the creation of a new benchmark suite by discussing our experience with existing benchmarks during our exploration of opportunities for applying low-level compiler optimizations to Haskell codes.

CnC is Deterministic

  • David M. Peixotto and Jens Palsberg
  • CnC Workshop
  • 2010-10-06
  • pdf
We define the syntax and operational semantics of Featherweight CnC. We use these semantics to prove that Featherweight CnC is deterministic.
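
The Haskell fragment below is only an informal illustration, not the formal semantics from the talk. It sketches the single-assignment item collections that the determinism argument rests on: each tag may be written at most once, and a conflicting second put is an error, so the final contents cannot depend on the order in which steps execute.

```haskell
-- Illustrative sketch of CnC-style single-assignment item collections.
module CnCSketch where

import qualified Data.Map as Map

type Tag   = Int
type Items = Map.Map Tag Int

-- put: write an item; a second write to the same tag must agree or it fails.
put :: Tag -> Int -> Items -> Either String Items
put t v items =
  case Map.lookup t items of
    Nothing               -> Right (Map.insert t v items)
    Just v' | v' == v     -> Right items
            | otherwise   -> Left ("conflicting put at tag " ++ show t)

-- get: read an item if it has been produced.
get :: Tag -> Items -> Maybe Int
get = Map.lookup

main :: IO ()
main =
  -- The two puts commute: either order yields the same final collection.
  print (put 1 10 Map.empty >>= put 2 20)
```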

Trace-Based Runtime Optimization for Haskell

  • David M. Peixotto
  • PhD Proposal
  • 2011-09-13
  • pdf
Slides used at my PhD proposal.

Low-Level Haskell Traces

  • David M. Peixotto and Keith D. Cooper
  • IFL (Implementation and Application of Functional Languages)
  • 2011-10-05
  • pdf
Slides presented at IFL 2011.

Although many high-level optimizations are typically performed by optimizing compilers for functional languages, much less attention has been paid to the profitability of low-level optimizations. We argue that for statically-typed lazy languages such as Haskell, low-level optimizations are best performed over program traces. Trace-based dynamic compilation is a technique for optimizing programs by performing optimizations on frequently executed program traces at runtime.

In this paper we demonstrate the performance benefits of trace-based optimization by presenting a hand-coded case study of building and optimizing program traces for Haskell. We then explore the use of DynamoRIO, a modern dynamic binary trace-based system, to automatically build program traces. We examine the performance impact of running Haskell programs under DynamoRIO and compare the results to C and C++ programs from the SPEC benchmark suite.

Our results show that while trace-based compilation is a promising technique, more work is needed to make it efficient for Haskell programs. Future work will explore techniques for making trace-based compilation of Haskell programs efficient and profitable.
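
For readers unfamiliar with trace selection, the sketch below shows the basic idea in Haskell. It is a hypothetical illustration, not the mechanism used by DynamoRIO or by our hand-coded case study: count executions of candidate trace heads and, once a head becomes hot, record one loop iteration's worth of blocks as a trace to hand to the optimizer.

```haskell
-- Illustrative trace selection over a stream of executed basic-block ids.
module TraceSelect where

import qualified Data.Map as Map

type Block = Int

hotThreshold :: Int
hotThreshold = 50

-- Walk the execution stream, bumping a counter each time a candidate head is
-- seen; when a head crosses the threshold, capture the blocks executed until
-- the head is reached again (one loop iteration) as a trace.
selectTraces :: [Block] -> [[Block]]
selectTraces = go Map.empty
  where
    go _ [] = []
    go counts (b : rest)
      | n >= hotThreshold =
          let trace = b : takeWhile (/= b) rest
          in  trace : go (Map.delete b counts) rest
      | otherwise = go counts' rest
      where
        counts' = Map.insertWith (+) b 1 counts
        n       = Map.findWithDefault 0 b counts'

main :: IO ()
main =
  -- A toy stream: a loop over blocks [1,2,3] repeated many times.
  print (take 1 (selectTraces (concat (replicate 100 [1, 2, 3]))))
```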


Low-Level Haskell Code: Measurements and Optimization Techniques

  • David M. Peixotto
  • PhD Defense
  • 2012-04-03
  • pdf
Slides used at my PhD defense.