SICP 6: Metalinguistic Abstraction

Reading Time: 5 minutes


I recently took Dave Beazley’s week-long course on The Structure and Interpretation of Computer Programs. In this series, I’ll share memorable insights from the course (here’s where you can see all the posts so far).

In this post, we’ll talk about delayed evaluation—specifically, about the implementation details of building an interpreter for a Scheme-like language that defaults to lazy evaluation instead of eager evaluation.

To demonstrate this, we’ll build a Scheme interpreter in Scheme (or rather Racket, a dialect of Scheme). Why bootstrap the interpreter like this?

Because in Scheme, the code is data.

For most programming languages, the front end of the interpreter needs to take in the source code as a string and do a fair amount of parsing to produce a list of tokens, let alone an abstract syntax tree denoting the order in which the operations should be evaluated.

Scheme does not require so many translation steps because its syntax explicitly encodes the tokens and their evaluation order. Parentheses surround each expression to be evaluated; there is no need for precedence rules because operations happen in order from the innermost set of parentheses to the outermost. In fact, when we were building a parser in Java for the Lox programming language, our abstract syntax tree representation looked almost exactly like Scheme.

This is powerful. Because we can pass the source code directly into an interpreter to be parsed as data, we can skip a lot of the interpreter front end skullduggery. This rudimentary standard Scheme interpreter in Racket (with applicative-order, or eager, evaluation) spans fewer than 200 lines.

This is especially useful in a teaching environment because we can dive directly into complex implementation questions. How will we manage scope and local environments? When will we evaluate operations? How should we store unevaluated operations? 

We talked a lot about these questions as we re-implemented our interpreter such that our customized Scheme would do normal-order, or lazy, evaluation. (I remember the terms ‘normal order’ and ‘applicative order’ by reminding myself that it’s normal to be lazy sometimes 🍹).

So what does this entail?

We make three structural changes to our applicative-order interpreter in order to default to normal order for all operations.

1. We don’t return a result from evaluation: we succeed or fail.

Because we wait to evaluate each expression until we absolutely must, processing an expression can no longer immediately return its value. Instead, we supply two branches of behavior: success and failure.
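One way to picture this (a hypothetical sketch, not the course’s exact code, and `try-lookup` is a made-up name): rather than returning a value, the evaluator invokes one of two continuations it was handed.

```scheme
;; Hypothetical sketch: looking up a variable either succeeds with its
;; value or invokes the failure branch when the binding is absent.
(define (try-lookup name env succeed fail)
  (let ((binding (assq name env)))
    (if binding
        (succeed (cdr binding))
        (fail))))

(try-lookup 'x '((x . 42))
            (lambda (value) value)   ; success branch: receive the value
            (lambda () 'unbound))    ; failure branch: no value to receive
;; → 42
```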

2. We move the evaluation operation to the last possible second.

Rather than evaluate everything right away, we encapsulate forcing evaluation in a set of functions to use when we absolutely must evaluate each of our expressions:
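The forcing pair in SICP’s lazy evaluator (§4.2) is a reasonable sketch of what this looks like—assuming the usual thunk accessors (`thunk?`, `thunk-exp`, `thunk-env`) and the interpreter’s own `eval`:

```scheme
;; actual-value evaluates an expression and then forces the result;
;; force-it turns a thunk into the value it stands for.
(define (actual-value exp env)
  (force-it (eval exp env)))

(define (force-it obj)
  (if (thunk? obj)
      (actual-value (thunk-exp obj) (thunk-env obj))
      obj))
```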

When is it that we absolutely must evaluate each of our expressions? There are a few cases.

 A) When it is passed to a primitive procedure that will use the value

In this case, we save in the environment the procedure for evaluating an expression, and that procedure runs when a piece of code accesses that binding in the environment:
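Applying a primitive, then, means forcing every operand down to an actual value first. A sketch in the SICP §4.2 style, where `actual-value` is the forcing function:

```scheme
;; Force each operand before handing the values to a primitive.
(define (list-of-arg-values exps env)
  (if (null? exps)
      '()
      (cons (actual-value (car exps) env)
            (list-of-arg-values (cdr exps) env))))
```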

 B) When it is the value of the predicate of a conditional:
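A conditional has to force its predicate to decide which branch to take. With SICP-style selectors (`true?`, `if-predicate`, `if-consequent`, `if-alternative` are assumed helpers), that looks roughly like:

```scheme
;; Force the predicate; the chosen branch stays lazy.
(define (eval-if exp env)
  (if (true? (actual-value (if-predicate exp) env))
      (eval (if-consequent exp) env)
      (eval (if-alternative exp) env)))
```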

C) When it is the value of an operator that is about to be applied as a procedure:
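Before anything can be applied, the operator position itself must be forced, while the operands are passed along unevaluated. A sketch (`eval-application` is a hypothetical name; `operator`/`operands` are the usual selectors, and `apply` here is the interpreter’s own apply, which takes the environment so it can delay operands):

```scheme
;; Force the operator; leave the operands unevaluated so apply can
;; decide whether to force (primitives) or delay (compound procedures).
(define (eval-application exp env)
  (apply (actual-value (operator exp) env)
         (operands exp)
         env))
```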

3. In the meantime, we store unevaluated expressions as “thunks.”

We need a way of organizing code without evaluating that code. So we encapsulate the concept of a ‘thunk’—an un-evaluated piece of code combined with the environment in which that piece of code should evaluate (containing any variables that the thunk needs).

Note that we tag each of these data structures with the name 'thunk.
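A thunk is just a tagged list holding the unevaluated expression together with its environment—a sketch in the style of SICP’s lazy evaluator:

```scheme
(define (tagged-list? obj tag)
  (and (pair? obj) (eq? (car obj) tag)))

;; Package an expression with its environment, without evaluating it.
(define (delay-it exp env)
  (list 'thunk exp env))

(define (thunk? obj) (tagged-list? obj 'thunk))
(define (thunk-exp thunk) (cadr thunk))
(define (thunk-env thunk) (caddr thunk))
```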

Suppose we wanted to memoize these thunks: that is, evaluate them only when necessary, store the value upon that first evaluation, and access that stored value rather than the computation to reach it. How might our data structure change to accommodate such a caching scheme?

Perhaps upon evaluation, we could replace the thing that the name of the thunk points to in our environment. Instead of pointing to an un-evaluated expression and a local environment, the name would instead point to the value, without the environment (since we don’t need it to evaluate anymore):
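SICP’s memoizing version of `force-it` does exactly this: it mutates the thunk in place, retagging it as an `'evaluated-thunk`, storing the value, and dropping the environment. (Note this relies on R5RS-style mutable pairs; in `#lang racket` you would need mutable cons cells.)

```scheme
(define (evaluated-thunk? obj)
  (tagged-list? obj 'evaluated-thunk))

(define (thunk-value evaluated-thunk)
  (cadr evaluated-thunk))

(define (force-it obj)
  (cond ((thunk? obj)
         (let ((result (actual-value (thunk-exp obj)
                                     (thunk-env obj))))
           (set-car! obj 'evaluated-thunk)
           (set-car! (cdr obj) result)  ; replace the expression with its value
           (set-cdr! (cdr obj) '())     ; forget the env so it can be collected
           result))
        ((evaluated-thunk? obj) (thunk-value obj))
        (else obj)))
```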

We don’t even need to check if the thunk has been evaluated before evaluating it: since our Scheme interpreter returns an evaluated expression unchanged, we can evaluate whatever the value is to which the name of the thunk refers.

Returning Briefly to Streams

It’s worth noting that, when we worked with streams, the tail of the stream remained unevaluated, but the car (head) of the stream did have to be evaluated. In our lazy Scheme, that wouldn’t need to be the case. Instead, everything could go un-evaluated until it was absolutely needed.
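For reference, `cons-stream` in a standard Scheme delays only its second argument, which is why the head gets evaluated eagerly—roughly:

```scheme
;; cons-stream must be a macro: an ordinary procedure would evaluate
;; both arguments. Only the tail is wrapped in a promise.
(define-syntax cons-stream
  (syntax-rules ()
    ((_ head tail) (cons head (delay tail)))))

(define (stream-car s) (car s))
(define (stream-cdr s) (force (cdr s)))
```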

Is this bending your mind? Are you excited for more? In the next post, we’ll talk about non-deterministic programming and building a Scheme interpreter that can help us solve logic puzzles.

If you liked this piece, you might also like:

The Crafting Interpreters series

The time and space efficiency series

Seniority means saying no (since people who like this series are probably experienced enough for senior matters to be relevant)
