% !TEX root = thesis.tex
\subsection{Comparison with Related Work}
In this section we discuss how our ideas connect with previous work. For existing OSR implementations, in which the generation of compensation code is left to code optimizers, we refer the reader to \mysection\ref{ss:osr-llvm-related}.
\paragraph*{Correctness of Compiler Optimizations.} Translation validation~\cite{Pnueli98, Necula00} tackles the problem of verifying that the optimized version of a specific input program is semantically equivalent to the original program. Lacey \etal~\cite{Lacey02, Lacey04} propose to express optimizations as rewrite rules with CTL formulas as side conditions, showing how to prove such transformations correct. Several works~\cite{Lerner03, Lerner05, Kundu09} investigate how to automatically prove the soundness of optimizations expressed in terms of transformation rules. In particular, a further step towards generality is made in~\cite{Kundu09}: proving the equivalence of {\em parameterized programs} makes it possible to prove the correctness of transformation rules once and for all. We believe that this approach deserves further investigation in the OSR context, as it could provide a principled way to compute mappings between equivalent points in different program versions in the presence of complex optimizations.
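To give the flavor of this line of work, a constant-propagation step might be phrased as a rewrite rule guarded by a temporal side condition, loosely in the spirit of~\cite{Lacey02}; the notation below is our own sketch, not theirs:
\[
n:\; x := v \;\Longrightarrow\; n:\; x := c
\qquad \text{if} \qquad
n \models \overleftarrow{A}\,\big(\neg\mathit{def}(v)\;\mathcal{W}\;\mathit{stmt}(v := c)\big)
\]
that is, the use of $v$ at node $n$ may be replaced by the constant $c$ provided that, on every backward path from $n$, the variable $v$ is not redefined before the assignment $v := c$ is reached.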
While all the aforementioned works focus on proving optimizations sound, in this thesis we aim at proving OSR correct in the presence of optimizations. Of a different flavor, but in a spirit similar to ours, Guo and Palsberg~\cite{Guo11} use bisimulation to study which optimizations of a tracing JIT compiler are sound. OSR is used in traditional JIT compilation to generate efficient code for a whole method, while a tracing JIT performs aggressive optimizations on a linear sequence of instructions, bailing out through guarded side exits when the control flow diverges from the recorded trace.
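As a minimal illustration of the tracing model, the sketch below shows a compiled linear trace with guarded side exits. It is our own example rather than code from~\cite{Guo11}, and \texttt{guard\_fail} is a hypothetical deoptimization hook:
\begin{verbatim}
/* Sketch of a compiled linear trace (C). guard_fail() is a
   hypothetical hook that hands (i, acc) back to the interpreter
   when an assumption recorded during tracing no longer holds. */
extern long guard_fail(long i, long acc);

long trace_sum(const long *a, long n, long i, long acc) {
    for (;;) {
        if (!(i < n))                  /* guard: recorded loop condition */
            return acc;                /* normal exit from the trace     */
        if (a[i] < 0)                  /* guard: trace assumed a[i] >= 0 */
            return guard_fail(i, acc); /* side exit: control diverges    */
        acc += a[i];                   /* straight-line trace body       */
        i++;
    }
}
\end{verbatim}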
\paragraph*{Debugging Optimized Code.} Hennessy's seminal paper~\cite{Hennessy82} has sparked considerable interest in the debugging of optimized code over the past three decades (e.g.,~\cite{Coutant88, Adl-Tabatabai96, Wu99, Jaramillo00, Barr14}). In particular, some works~\cite{Hennessy82, Wu99} attempt to reconstruct source-level variable values in the presence of certain optimizations. We discuss our connections with them in \mysection\ref{se:CS-debug}, where we explore the end-to-end utility of our OSR mappings in the context of a source-level debugger.
\paragraph*{Other Related Work.} Program slicing techniques~\cite{Weiser82, Weiser84, Korel88, Agrawal90} have found many diverse applications, including program debugging, comprehension, analysis, testing, verification, and optimization. Given a slicing criterion consisting of a program point $P$ and a set of variables used at $P$, program slicing computes the subset of the program, its slice, containing the statements that may affect the values of those variables at $P$ in terms of data and control dependencies~\cite{Tan16}. We believe that the simple ideas behind our \buildcomp\ algorithm could be improved by taking advantage of this wealth of analysis techniques.
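As a toy illustration of our own, consider slicing the fragment below on the criterion $(P, \{\texttt{sum}\})$: the slice retains the statements that may affect \texttt{sum} at $P$ through data or control dependencies, and drops the computation of \texttt{prod} entirely:
\begin{verbatim}
#include <stdio.h>

/* Slicing criterion: (P, {sum}), where P is the printf below. */
int main(void) {
    int n = 5;                       /* in slice (controls the loop) */
    int sum = 0;                     /* in slice (data dependence)   */
    int prod = 1;                    /* sliced away                  */
    for (int i = 1; i <= n; i++) {   /* in slice (control + data)    */
        sum += i;                    /* in slice (defines sum)       */
        prod *= i;                   /* sliced away                  */
    }
    printf("%d\n", sum);             /* program point P              */
    return 0;
}
\end{verbatim}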
Another interesting work is~\cite{Bhandari15}, which explores deoptimization in the presence of exceptions for the loop tiling transformation. To allow out-of-order updates to be rolled back, an algorithm identifies a minimal number of elements to back up and generates the necessary code. Intuitively, supporting deoptimization for complex loop transformations may be costly in both space and time, but in the OSR context such flexibility/performance trade-offs are still largely unexplored.
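To make the underlying space/time trade-off concrete, the sketch below, which is our own illustration and not the algorithm of~\cite{Bhandari15}, backs up the elements of each tile before updating them in place, so that an exception raised mid-tile can restore the saved values before resuming in the untiled code:
\begin{verbatim}
#include <string.h>

#define TILE 64   /* illustrative tile size */

/* Hypothetical sketch: back up a tile before updating it in place,
   so that deoptimization on an exception can roll the tile back. */
void scale_tiled(double *a, long n, double k, double *backup) {
    for (long t = 0; t < n; t += TILE) {
        long len = (n - t < TILE) ? (n - t) : TILE;
        memcpy(backup, a + t, (size_t)len * sizeof *a); /* backup  */
        for (long i = 0; i < len; i++)
            a[t + i] *= k;   /* updates a rollback must undo       */
        /* on an exception: memcpy(a + t, backup, len * sizeof *a)
           restores the tile before control leaves the loop        */
    }
}
\end{verbatim}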