Due: 20 April 2015 at 11:59:59pm
Download: hw6.zip
To Submit: Go Here
This is a group project. Teams of size 1 or 2 are acceptable. If you are working in a team of two, only one student needs to submit the project.
Getting Started
Many of the files in this project are taken from the earlier projects. Only the new files and their uses are listed below; only those preceded by an asterisk need to be modified for this assignment.
llprograms/*.ll - example .ll programs used in testing
lib.ml - set and map modules (enhanced with printing)
solver.ml - the general-purpose iterative dataflow analysis solver
cfg.ml - "view" of LL control-flow graphs as dataflow graphs
analysis.ml - helper functions for propagating dataflow facts
opt.ml - optimizer
liveness.ml - sample implementation of a dfa analysis (liveness)
*alias.ml - alias analysis
*dce.ml - dead code elimination optimization
*constprop.ml - constant propagation analysis & optimization
analysistests.ml - test cases

(used in developing our tests, but perhaps useful to you:)
printanalysis.ml - a standalone program to print the results of an analysis
dumpgraph.ml - a standalone program to print dataflow graph info
Overview
The OAT compiler we have developed so far produces fairly inefficient code, since it performs no optimizations at any stage of the compilation pipeline. In this project, you will implement a selection of dataflow analyses and optimizations at the level of our LLVMlite intermediate representation in order to improve code size and speed. These include unreachable code elimination, dead code elimination, and constant propagation.
Provided Code
The provided code makes extensive use of modules, module signatures, and functors. These aid in code reuse and abstraction. If you need a refresher on OCaml functors, we recommend reading through Chapter 9 of Real World OCaml.
In lib.ml, we provide you with a number of useful modules, module signatures, and functors for the assignment, including:
- OrdPrintT: A module signature for a type which is both comparable and can be converted to a string for printing. This is used in conjunction with some of our other custom modules described below. Wrapper modules Lbl and Uid satisfying this signature are defined later in the file for the Ll.lbl and Ll.uid types.
- SetS: A module signature for a module which extends OCaml's built-in sets to include string conversion and printing capabilities.
- MakeSet: A functor which creates an extended set (SetS) from a type which satisfies the OrdPrintT module signature. This is applied to the Lbl and Uid wrapper modules to create a label set module LblS and a UID set module UidS.
- MapS: A module signature for a module which extends OCaml's built-in maps to include string conversion and printing capabilities. Three additional helper functions are also included: update for updating the value associated with a particular key, find_or for performing a map look-up with a default value to be supplied when the key is not present, and update_or for updating the value associated with a key if it is present, or adding an entry with a default value if not.
- MakeMap: A functor which creates an extended map (MapS) from a type which satisfies the OrdPrintT module signature. This is applied to the Lbl and Uid wrapper modules to create a label map module LblM and a UID map module UidM. These map modules have fixed key types, but are polymorphic in the types of their values.
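For example, here is a minimal sketch of how the functors are intended to be used, assuming OrdPrintT requires a compare and a to_string function (check lib.ml for the exact declarations). Lbl and Uid play the role of Int_ord below for the Ll.lbl and Ll.uid types:

    (* A sketch (not from the provided code): wrap a comparable, printable
       type, then apply MakeSet / MakeMap to it.  The field names compare
       and to_string are our reading of OrdPrintT; see lib.ml. *)
    module Int_ord = struct
      type t = int
      let compare = compare
      let to_string = string_of_int
    end

    module IntS = MakeSet (Int_ord)
    module IntM = MakeMap (Int_ord)

    let s = IntS.add 2 (IntS.add 1 IntS.empty)   (* the set {1, 2} *)
    let m = IntM.add 1 "one" IntM.empty          (* the map 1 |-> "one" *)

The LblS, UidS, LblM, and UidM modules described above are exactly this pattern applied to labels and uids.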
Task I: Dataflow Analysis
Your first task is to implement a version of the worklist algorithm for solving the dataflow equations presented in lecture 21. Since we'll be defining several analyses, we'd like to reuse as much code as possible between them. In lecture, we saw that each analysis differs only in the choice of lattice, the gen and kill sets, the direction of the analysis, and whether we take the meet or the join to compute the flow into a node. We can take advantage of this by writing a generic solver as an OCaml functor and instantiating it with these parameters.
The Algorithm
Assuming only that we have a directed graph where each node is labeled with a dataflow fact and a flow function, we can compute a fixpoint of the flow on the graph as follows:
let w = new set with all nodes
repeat until w is empty
    let n = w.pop()
    old_out = out[n]
    let in = combine(preds[n])
    out[n] := flow[n](in)
    if (!equal old_out out[n])
        for all m in succs[n], w.add(m)
end
Here equal, combine and flow are abstract operations that will be instantiated with lattice equality, meets or joins and the function computed by the gen and kill sets of the analysis, respectively. Similarly, preds and succs are the graph predecessors and successors in the flow graph, and do not correspond to the control flow of the program. They can be instantiated appropriately to create a forwards or backwards analysis.
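To make the shape of the loop concrete, here is a self-contained OCaml sketch of the worklist iteration. It is not the solve function you have to write -- it uses plain lists and a Hashtbl instead of the project's graph and fact types, and all of the parameter names are placeholders for the operations that DFA_GRAPH and FACT actually provide:

    (* Generic worklist fixpoint sketch: 'node is a node identifier, 'fact a
       dataflow fact.  We assume preds/succs only mention nodes appearing in
       [nodes].  A Queue (which may hold duplicates) stands in for the
       pseudocode's set; the result is the same fixpoint. *)
    let solve_sketch
        ~(nodes   : 'node list)
        ~(preds   : 'node -> 'node list)
        ~(succs   : 'node -> 'node list)
        ~(equal   : 'fact -> 'fact -> bool)
        ~(combine : 'fact list -> 'fact)        (* meet or join *)
        ~(flow    : 'node -> 'fact -> 'fact)    (* from the gen/kill sets *)
        ~(init    : 'node -> 'fact)
      : ('node * 'fact) list =
      let out  = Hashtbl.create 16 in
      let work = Queue.create () in
      List.iter (fun n -> Hashtbl.replace out n (init n); Queue.add n work) nodes;
      while not (Queue.is_empty work) do
        let n       = Queue.pop work in
        let old_out = Hashtbl.find out n in
        let in_fact = combine (List.map (Hashtbl.find out) (preds n)) in
        let new_out = flow n in_fact in
        Hashtbl.replace out n new_out;
        if not (equal old_out new_out) then
          List.iter (fun m -> Queue.add m work) (succs n)
      done;
      List.map (fun n -> (n, Hashtbl.find out n)) nodes

For a forwards analysis, preds and succs follow the edges of the flow graph; for a backwards analysis they are swapped, which is exactly what is meant above by instantiating them appropriately.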
Getting Started and Testing
Be sure to review the comments in the DFA_GRAPH and FACT module signatures in solver.ml, which define the parameters of the solver. Make sure you understand what each declaration in these signatures does -- your solver will need to use each one (other than the printing functions)!
Your only task is to fill in the solve function in the Solver.Make functor in solver.ml. The input to the function is a flow graph labeled with the initial facts. It should compute the fixpoint and return a graph with the corresponding labeling. You may find the set datatype from lib.ml useful for manipulating sets of nodes.
To test your solver, we have provided a full implementation of a liveness analysis in liveness.ml. Once you've completed Task I, the liveness tests in the test suite should all pass. These tests compare the output of your solver on a number of programs with pre-computed solutions in analysistests.ml. Each entry in this file describes the set of uids that are live-in at a label in a program from ./llprograms. To debug, you can compare these with the output of the Graph.to_string function on the flow graphs you will be manipulating.
Task II: Alias Analysis and Dead Code Elimination
The goal of this section is to implement a simple dead code elimination optimization that can also remove store instructions when we can prove that they have no effect on the result of the program. Though we already have a liveness analysis, it doesn't give us enough information to eliminate store instructions: even if we know that the UID of the destination pointer is dead after a store and is never used by a load in the rest of the program, we still cannot remove the store instruction.
The problem is that there may be different UIDs that name the same stack slot -- in other words it may have aliases. There are a number of ways this can happen after a pointer is returned by alloca:
- The pointer is used as an argument to a getelementptr or bitcast instruction
- The pointer is stored into memory and then later loaded
- The pointer is passed as an argument to a function, which can manipulate it in arbitrary ways
Some pointers are never aliased -- for example, the pointers that the Oat frontend generates for local variables. We can find such uses of alloca using a simple version of an alias analysis.
Alias Analysis
We have provided some code to get you started in alias.ml. You will have to fill in the initial state of the graph, the flow function, and the lattice operations. The type of lattice elements, fact, is a map from UIDs to symbolic pointers of type SymPtr.t. Your analysis should compute, at every program point, the set of UIDs of pointer type that are in scope and, additionally, whether each such pointer is the unique name for a stack slot according to the rules above. See the comments in alias.ml for details. In particular, you will need to implement:
- Alias.insn_flow: the flow function over instructions
- Alias.Fact.combine: the combine function for alias facts
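As a rough illustration of what the combine function has to do, here is a sketch of a pointwise combine over two alias facts. The constructor names Undef, Unique, and MayAlias and the join ordering below are only illustrative stand-ins for whatever SymPtr actually declares in alias.ml:

    (* Illustrative only: a three-point per-uid lattice and a pointwise
       combine.  The real constructors and ordering live in SymPtr. *)
    type sym = Undef | Unique | MayAlias

    let join (a : sym) (b : sym) : sym =
      match a, b with
      | MayAlias, _ | _, MayAlias -> MayAlias   (* most conservative *)
      | Unique, _   | _, Unique   -> Unique
      | Undef, Undef              -> Undef

    let combine_two (f1 : sym UidM.t) (f2 : sym UidM.t) : sym UidM.t =
      UidM.merge
        (fun _uid p1 p2 ->
           match p1, p2 with
           | Some a, Some b              -> Some (join a b)
           | Some a, None | None, Some a -> Some a   (* in scope on one side only *)
           | None, None                  -> None)
        f1 f2

If the combine required by the FACT signature takes a whole list of predecessor facts, a pairwise helper like this can simply be folded over that list.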
Dead Code Elimination
Now we can use our liveness and alias analyses to implement a dead code elimination pass. We will simply compute the results of the analyses at each program point, then iterate over the blocks of the CFG, removing any instructions that do not contribute to the output of the program.
- For all instructions except store and call, the instruction can be removed if the UID it defines is not live-out at the point of definition
- A store instruction can be removed if we know the UID of the destination pointer is not aliased and not live-out at the program point of the store
- A call instruction can never be removed
In dce.ml, you will only need to fill out the dce_block function that implements these rules.
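The heart of dce_block is a per-instruction keep-or-drop decision along the lines of the sketch below. Everything here other than UidS and Ll.uid is a placeholder of our own (live_out, is_call, store_dest, is_unaliased); the real information comes from the liveness and alias facts you computed, in whatever form dce.ml exposes them:

    (* Sketch of the keep/drop rule for one instruction, phrased abstractly. *)
    let keep_insn
        ~(live_out : UidS.t)              (* uids live-out at this instruction *)
        ~(is_call : bool)                 (* is this instruction a call? *)
        ~(store_dest : Ll.uid option)     (* Some p if this is a store to uid p *)
        ~(is_unaliased : Ll.uid -> bool)  (* alias analysis: p names a unique slot *)
        (uid : Ll.uid)                    (* the uid this instruction defines *)
      : bool =
      if is_call then true                           (* calls are never removed *)
      else match store_dest with
        | Some p -> not (is_unaliased p && not (UidS.mem p live_out))
        | None   -> UidS.mem uid live_out            (* keep only if its def is live *)

dce_block can then filter each block's instruction list with a test of this shape.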
Task III: Constant Propagation
Compilers don't often directly emit dead instructions like the ones found by the analysis in the previous section. Instead, dead code is often produced as a result of other optimizations that execute parts of the original program at compile time, for instance constant propagation. In this section you'll implement a simple constant propagation analysis and constant folding optimization.
Start by reading through constprop.ml. Constant propagation is similar to the alias analysis from the previous section. Dataflow facts will be maps from UIDs to the type SymConst.t, which corresponds to the lattice from the lecture slides. Your analysis will compute the set of UIDs in scope at each program point, and the integer value of any UID that is computed as the result of a series of binop and icmp instructions on constant operands. More specifically:
- The flow out of any binop or icmp whose operands have been determined to be constants is the incoming flow with the defined UID mapped to Const with the expected constant value
- The flow out of any binop or icmp with a NonConst operand sets the defined UID to NonConst
- Similarly, the flow out of any binop or icmp with an UndefConst operand sets the defined UID to UndefConst
- A store or call of type Void sets the defined UID to UndefConst
- All other instructions set the defined UID to NonConst
You will need to fill in:
- Constprop.insn_flow with the rules defined above
- Constprop.Fact.combine with the combine operation for the analysis
- Constprop.cp_block (inside the run function) with the code needed to perform the constant propagation transformation
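For the binop and icmp cases of Constprop.insn_flow, the folding step boils down to something like the sketch below. The SymConst constructor names come from the rules above, but the int64 payload of Const and the eval parameter are assumptions of ours; check SymConst's actual definition in constprop.ml:

    (* Sketch: fold a binop/icmp whose operands have already been looked up
       in the incoming fact.  [eval] stands for the concrete arithmetic or
       comparison, e.g. Int64.add for an add instruction. *)
    let fold_insn (eval : int64 -> int64 -> int64)
        (x : SymConst.t) (y : SymConst.t) : SymConst.t =
      match x, y with
      | SymConst.Const a, SymConst.Const b          -> SymConst.Const (eval a b)
      | SymConst.NonConst, _ | _, SymConst.NonConst -> SymConst.NonConst
      | _                                           -> SymConst.UndefConst

The store / void-call and catch-all rules above then just set the defined UID directly in the outgoing map, for which the map operations and update helpers described in lib.ml should suffice.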
Task IV: Graded Test Cases
We have provided you with numerous tests of the analyses and optimizations, but our tests are far from exhaustive. For this task, you should create (either by hand or by compiling from OAT) an LLVM IR program that exhibits "interesting" results after "full" optimization.
Unlike the provided optimization tests, which are designed to check the correctness of a particular optimization pass (i.e. dce or constprop), the test you submit should be interesting when these passes are interleaved. In particular, the test case you submit should consist of three things:
- a (small) integer constant n indicating the number of iterations of optimization to be performed,
- a "source" test.ll file, and
- the "targe" testopt.ll file, which is the expected result after optimization
This test case should be usable with the executed_fullopt_file function provided for you in gradedtest.ml. It will be called as:
executed_fullopt_file n [("test.ll", "testopt.ll")]
Grading
Projects that do not compile will receive no credit! Your team's grade for this project will be based on:
- 90 Points: the various automated tests that we provide.
- 5 Points for posting an interesting test case to Piazza. (Graded manually.)
- 5 Points divided among the test cases created by other groups. (Graded manually.)