2.4 Catch a Tail Call
Many people wonder why, in 2021, there is still interest in the Lisp language. After all, this is a language born at the end of the 1950s, whose syntax has hardly changed since then.
Yet some stubborn souls keep it alive against all odds. It is full of parentheses, with unreadable constraints on the position of operators and functions. And as if that were not enough, it has its worshippers, absolute fans ready to do anything to evangelize the masses of unbelievers.
They are convinced that one must abjure Python, Java or C++ to follow the sacred ways of Clojure, Scheme or Common Lisp.
So what does this strangely archaic language offer that makes some programmers go nuts?
First of all, Lisp is perhaps the simplest language to implement. Indeed, Lisp is remarkably regular, since operators and function calls have exactly the same prefixed syntax.
Let's compare the following Python code:
def test(x):
y = 9*x/2+3*x
return y
print(test(10))
with the following Lisp code:
(defun test(x)
   (+ (* 9 (/ x 2)) (* 3 x))
)
At first glance, the Python code seems quite simple.
However, due to the operator precedence rules, this expression proves to be much more complicated to interpret than expected.
Try to compute test(10) by hand, and you will see that the task is not easy.
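If you want to check your mental arithmetic, Python itself can arbitrate (a quick verification, not part of the original text):

```python
x = 10
# With Python's precedence rules this reads as ((9 * x) / 2) + (3 * x)
print(9 * x / 2 + 3 * x)  # 75.0
```

Note that `/` is true division in Python 3, hence the float result.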
The Lisp version, which at first glance seems heavier and more complicated, has no ambiguity at all. Although it is heavily burdened with parentheses, the order of execution is quite obvious.
In fact, if one had to give a preliminary interpretation to this expression, it is very likely to look like this:
Don't parentheses now seem like an elegant solution to transcribe this tree into a textual form?
Let's be honest: nobody would write such an ambiguous Python expression. Good computer scientists would implement it in the following form:
def test(x):
y = (9*(x/2))+(3*x)
return y
They would add parentheses to make this expression more explicit... Almost as many as in the Lisp version...
What makes Lisp very interesting is that the syntax of the language merges with its Abstract Syntax Tree.
In a way, What You See Is What You Execute...
Building an Abstract Syntax Tree is usually an important step when compiling any kind of code, be it Python, Java or Haskell. Arithmetic expressions in Python, for instance, require complex intermediate procedures to be transformed into trees.
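To make this tree concrete, Python's standard ast module can display the structure it builds for the expression above (a quick illustration of Python's own machinery, nothing to do with LispE):

```python
import ast

# Parse the arithmetic expression into Python's abstract syntax tree.
tree = ast.parse("9 * x / 2 + 3 * x", mode="eval")

# The dump shows nested BinOp nodes: the textual expression has
# already become a tree of Add, Div and Mult operations.
print(ast.dump(tree.body))
```

The nesting of BinOp nodes in the output is exactly the tree that the parentheses of the Lisp version spell out directly.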
Parsing Lisp, on the other hand, is pretty straightforward...
It consists of first segmenting the code into a list of tokens. Then, the list is sequentially traversed. When an open parenthesis is met, the parser goes into recursion, which it leaves when a closing parenthesis is found. Thus, at each recursion point, one can intercept the compilation of an expression to interpret it or modify it locally if necessary.
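The recursive reader described above fits in a few lines. Here is a toy sketch in Python (hypothetical helper names, not LispE's actual C++ parser): tokenize, then recurse on each opening parenthesis and return on the matching closing one.

```python
def tokenize(code):
    # Surround parentheses with spaces, then split into tokens.
    return code.replace("(", " ( ").replace(")", " ) ").split()

def parse(tokens):
    expr = []
    while tokens:
        token = tokens.pop(0)
        if token == "(":
            # An opening parenthesis triggers a recursion...
            expr.append(parse(tokens))
        elif token == ")":
            # ...and the matching closing parenthesis ends it.
            return expr
        else:
            expr.append(token)
    return expr

print(parse(tokenize("(+ (* 9 (/ x 2)) (* 3 x))")))
# [['+', ['*', '9', ['/', 'x', '2']], ['*', '3', 'x']]]
```

Each recursion point in `parse` is exactly where an expression can be intercepted, rewritten, or, as below, flagged.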
In LispE, this allows in particular to apply macros or to compose expressions based on map, filter, takewhile and the like.
We will also connect the tail call detection mechanism to it...
Beneath this barbaric term is in fact an extremely effective mechanism for transforming a recursion into an iteration. More precisely, in such a function, the last call coincides with the final result. To better understand what this definition hides, one only has to look at the following code:
# Yes, it's Python, the idea is to show that this problem is universal
def fact(x):
if x == 1:
return 1
else:
return x * fact(x - 1)
The above version is not a tail call: you have to unwind the whole stack of calls before getting the result.
Simply adding another parameter to this function is enough to make it a tail call. The goal here is that the end of the calculation must coincide with the last call of the function:
def fact(x,y):
if x == 1:
return y
else:
return fact(x - 1, x*y)
# The first call is:
fact(3,1)
As you can see in this version, the intermediate calculations are performed as the recursion progresses.
When x == 1, y already contains our result.
The Python code above can be simply translated into LispE as follows:
(defun fact(x y)
(if (eq x 1)
y
(fact (- x 1) (* x y))
)
)
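This accumulator pattern is precisely what a tail-call optimizer exploits: since nothing remains to be done after the recursive call, the call frame can be reused, which amounts to a plain loop. Here is a hand-written Python equivalent (a sketch of the effect, not something LispE generates):

```python
def fact_iterative(x, y=1):
    # Overwriting x and y in place at each step is exactly what
    # reusing the stack frame does in a tail call optimization.
    while x != 1:
        x, y = x - 1, x * y
    return y

print(fact_iterative(5))  # 120
```

No matter how large x is, this version uses constant stack space.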
The objective is to make LispE detect when a function call can be executed in tail call mode.
And of course, the immediate question is: how to identify a tail call?
First of all, let us recall that the construction of the Abstract Syntax Tree is the result of a non-terminal recursion. We go down in recursion at each opening parenthesis and we go up at each closing parenthesis. Each call returns a sub-list which is stored in the current list when the recursion is completed. Once all the instructions have been analyzed, this current list is in turn returned to the upper level.
The trick is the following: If a function only contains one block of instructions, we will mark the head of this block as a possible candidate for a tail call.
(defun fact(x y)
(if (eq x 1) ; <--- this if is marked as potentially a tail call
y
(fact (- x 1) (* x y))
)
)
The trick here is very simple. When the if is executed, it passes the baton on to the selected branch, which therefore systematically receives the flag at each call.
(defun fact(x y)
(if (eq x 1); the 'if' transmits its flag to the next instruction to be executed
y
(fact (- x 1) (* x y)); <-- so we flag this instruction
)
)
- If it is a recursive call, it will be treated as a tail call .
- If it is another if, it will receive the flag in turn.
- If it is a completely different instruction, the mark will have no effect.
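As a toy illustration of the three cases above, here is a hypothetical Python sketch (my own helper, operating on nested lists such as those built by a reader like the one earlier, not LispE's actual C++ implementation) that collects the calls ending up flagged:

```python
def tail_calls(expr, name):
    """Return the calls to `name` reachable in tail position of `expr`."""
    if not isinstance(expr, list) or not expr:
        return []                    # an atom: nothing to flag
    if expr[0] == "if":
        # The 'if' transmits its flag to both of its branches
        # (expr[1] is the condition, which is never in tail position).
        calls = []
        for branch in expr[2:]:
            calls.extend(tail_calls(branch, name))
        return calls
    if expr[0] == name:
        return [expr]                # a recursive call: treated as a tail call
    return []                        # any other instruction: the mark has no effect

body = ["if", ["eq", "x", "1"], "y",
        ["fact", ["-", "x", "1"], ["*", "x", "y"]]]
print(tail_calls(body, "fact"))
# [['fact', ['-', 'x', '1'], ['*', 'x', 'y']]]
```

Nested if blocks propagate the flag all the way down, exactly as in the bullet points above.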
In LispE, a call identified as a tail call only goes through part of the function execution code.
Thus, instead of adding a new variable zone on the execution stack, as in a normal call, the current zone is reused and the local variables are overwritten with their new values. We then return from the call without executing the body of the function, returning a particular value instead: _TERMINAL.
In other words, when a tail call is detected:
- The argument stack is updated
- We return the value _TERMINAL.
// caller_function is initialized at the first call
if (caller_function == this && isTailCall()) {
    update_stack();
    return _TERMINAL;
}
But how does the complete execution of the function take place?
Indeed, if the call to the function consists in simply updating the argument stack, the whole thing must be executed until the result is obtained...
The solution is quite simple...
We embed each execution of the function body in a loop, whose condition depends on its output.
If the call is not a tail call, this loop will have no other effect than an additional test...
do {
element = eval(function_body);
}
while (element == _TERMINAL);
A value is returned for each execution of the function body.
- If this value is _TERMINAL, it means that the argument stack has been updated and a new iteration is needed.
- Otherwise, the function has returned an actual value and the loop is automatically stopped.
Thus, the execution of a function can be summed up in the following code:
if (caller_function == this && isTailCall()) {
update_stack();
return _TERMINAL;
}
// Note that the first call will go here
// since caller_function has not been updated yet
create_new_stack();
update_stack();
caller_function = this;
do {
element = eval(function_body);
}
while (element == _TERMINAL);
What makes this code very simple is that the decision to enter tail call mode is purely local. It does not involve any particular compilation of the code, no complex analysis to find out exactly where these calls will take place. The mechanism only requires marking the potential tail call sites and letting the interpreter dynamically decide whether these calls will be treated as such.
The whole thing requires less than 20 lines of code out of the 40,000 that make up the interpreter, a minor cost for a mechanism that avoids exploding the stack during complex computations.
Here is a last example where more than 2,000,000 recursive calls are executed:
; It also works for lambdas...
(
(lambda (x y)
(if (eq x 0)
(* 2 y)
(self (- x 1) (+ y 2))
)
)
2000000 1
)
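For comparison, the same loop-based strategy can be reproduced in plain Python with a trampoline. This is a sketch of the principle only (the names step, trampoline and the _TERMINAL sentinel are mine, not LispE's API): instead of recursing, the body returns a _TERMINAL marker together with the updated arguments, and a driver loop keeps calling it.

```python
_TERMINAL = object()  # plays the role of LispE's _TERMINAL value

def step(x, y):
    # One execution of the lambda's body from the example above.
    if x == 0:
        return 2 * y                       # an actual result: stop the loop
    return _TERMINAL, (x - 1, y + 2)       # "tail call": only new arguments

def trampoline(x, y):
    # The interpreter's do/while loop: iterate as long as _TERMINAL comes back.
    result = step(x, y)
    while isinstance(result, tuple) and result[0] is _TERMINAL:
        result = step(*result[1])
    return result

# 2,000,000 "recursive calls" without ever growing the stack.
print(trampoline(2000000, 1))  # 8000002
```

A naively recursive version of the same function would hit Python's recursion limit almost immediately; the trampoline, like LispE's _TERMINAL loop, runs in constant stack space.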