SICP 2: State and Modularity

Reading Time: 7 minutes


I recently took Dave Beazley’s week-long course on The Structure and Interpretation of Computer Programs. In this series, I’ll share memorable insights from the course (here’s where you can see all the posts so far).

In this post, we’ll look at some of the concepts from the first chapter of the book.

Briefly: Scheme

The examples and exercises in the book use the Scheme programming language. I want to get to the concepts, so I’ll show you some Scheme here as a warmup and link you to this tutorial in case you want to go deeper on Scheme.

Here’s what Scheme looks like:
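These particular expressions are my own, but they’re representative:

```scheme
; Prefix notation: the operator comes first, and a pair of
; parentheses delimits each expression.
(+ 1 2)          ; 3
(* 3 (+ 2 4))    ; 18

(define (square x) (* x x))
(square 5)       ; 25
```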

Things to notice:

  • The operator comes first.
  • A set of parentheses explicitly demarcates the scope of each piece of code.

We’ll return to the significance of these characteristics in a future post.

Scheme uses applicative (eager) evaluation. So the following code will never finish:
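A minimal sketch of the kind of program meant here (the names are mine): because arguments are evaluated before the procedure body runs, `(loop)` recurses forever, even though `first-arg` never uses its second argument.

```scheme
(define (loop) (loop))

(define (first-arg a b) a)

; Under applicative-order (eager) evaluation, (loop) is evaluated
; before first-arg is applied -- so this call never returns.
(first-arg 1 (loop))
```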

New variables and functions can live inside the scope of other functions, as you see in the subcomponents of sqrt here. Note that the variable x remains accessible throughout the scope of sqrt, including inside the encapsulated functions.
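The snippet being described is along the lines of the block-structured `sqrt` from SICP §1.1.8; here is a sketch:

```scheme
(define (square x) (* x x))
(define (average a b) (/ (+ a b) 2))

(define (sqrt x)
  ; x stays in scope for every procedure defined inside sqrt.
  (define (good-enough? guess)
    (< (abs (- (square guess) x)) 0.001))
  (define (improve guess)
    (average guess (/ x guess)))
  (define (sqrt-iter guess)
    (if (good-enough? guess)
        guess
        (sqrt-iter (improve guess))))
  (sqrt-iter 1.0))
```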

One thing you might notice about the above examples: you don’t see any for loops or while loops. You do see one function, though, that appears to do something repeatedly: sqrt-iter checks a condition called good-enough? and, if the result is false, it calls itself.

A Loopless Language

Scheme doesn’t have loops: to perform an operation on each item in a collection, we instead use recursion.

Let’s stop for a moment and focus on our experiences with programming languages like Swift, Kotlin, Java, JavaScript, C#, Ruby, or Python. How often do you loop over something in these languages, counting all your maps, filters, and applys? Constantly.

How often do you use recursion in these languages? I have used it three times in a professional capacity. All of those occasions had to do with database queries that traversed a tree whose depth was unknown when the function was called.

For this reason, I conceptualized looping as a subset of collections operations that met two conditions:

a) a one-dimensional collection
b) whose length we knew when the function got called.

For collections where one or both of these things were not true, we’d need a more general-purpose collection operation. That’s what we get with recursion. We just don’t run into it often because the subset of collections operations that looping can handle covers the majority of cases that we run into professionally.

SICP changed my perspective on this, because it demonstrates how either recursion or iteration might be used to handle the same collection operation.

Here is how we might sum a collection of numbers recursively in Scheme:
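A minimal version (the name `sum-to` is mine):

```scheme
; Recursive process: each call defers an addition
; until the recursion reaches 0.
(define (sum-to n)
  (if (= n 0)
      0
      (+ n (sum-to (- n 1)))))

(sum-to 10)   ; 55
```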

This would work, but as n increases, it takes longer and, more troublingly, uses more memory. Each call defers an addition until the recursion bottoms out, building up a chain of pending operations that only then collapses into a sum.

Compare that to this version. Instead of the function calling itself directly, the function contains an inner, iterative function that receives the numbers yet to be summed and the running total of the numbers processed so far:
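A sketch of that iterative shape (again using my own names):

```scheme
(define (sum-to n)
  ; iter carries the whole state of the computation in its
  ; arguments: what's left to add, and the running total.
  (define (iter remaining total)
    (if (= remaining 0)
        total
        (iter (- remaining 1) (+ total remaining))))
  (iter n 0))

(sum-to 10)   ; 55, with no chain of deferred additions
```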

This iterative implementation passes the state of the progressing operation to each successive step. The call therefore doesn’t need to keep the individual items that have already been processed, reducing memory load.

For any collection operation in Scheme, you can convert between an iterative and a recursive approach. In fact, SICP dedicates a legion of exercises to doing exactly this.

This feels surprising because most programming languages I’ve used (including those with functional features that let me perform collections operations on streams) treat iteration and recursion as fundamentally different things: one we use all the time, the other almost never.

This isn’t because they’re fundamentally different operations. It’s that those languages have introduced syntactic affordances for collection operations, and those affordances implicitly convey a perspective about when iteration should be used and when recursion should be used.

There’s also something to be explored in this passage of state. We’ll come back to this in a moment.


Higher Order Procedures

Scheme allows you to pass functions around as arguments to other functions. This allows us to abstract away common behavior, like the accumulation of the results of an operation on a series of numbers:
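In the spirit of SICP §1.3.1, here is a `sum` procedure that takes the per-term operation as an argument:

```scheme
; term maps each index to the value being accumulated;
; next advances the index.
(define (sum term a next b)
  (if (> a b)
      0
      (+ (term a)
         (sum term (next a) next b))))

(define (inc n) (+ n 1))
(define (cube x) (* x x x))

(sum cube 1 inc 10)              ; sum of the cubes of 1 through 10
(sum (lambda (x) x) 1 inc 100)   ; 1 + 2 + ... + 100 = 5050
```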

This allows us to incrementally build functions with other functions. We see this ability represented with lambdas and higher order procedures in Scheme. We sometimes see this represented with closures and decorators in other languages.

SICP drives this point home by providing exercises in which we repeatedly compose more and more functions together. This allows each function to retain a name and a scope, and it allows programmers to assemble procedures that share a pattern out of the same parts. We see that in this example, where we pass an improvement function to an iterative improver, letting us progressively guess at both square roots and cube roots.
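A sketch in the spirit of SICP exercise 1.46: `iterative-improve` takes a test and an improvement step and returns a guessing procedure, which both `sqrt` and `cube-root` can then share.

```scheme
(define (iterative-improve good-enough? improve)
  (define (try guess)
    (if (good-enough? guess)
        guess
        (try (improve guess))))
  try)

(define (sqrt x)
  ((iterative-improve
     (lambda (g) (< (abs (- (* g g) x)) 0.001))
     (lambda (g) (/ (+ g (/ x g)) 2)))               ; Newton's step for sqrt
   1.0))

(define (cube-root x)
  ((iterative-improve
     (lambda (g) (< (abs (- (* g g g) x)) 0.001))
     (lambda (g) (/ (+ (/ x (* g g)) (* 2 g)) 3)))   ; Newton's step for cube roots
   1.0))
```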

By passing around functions, we can also repeat the same function on the same data a number of times, as we do in this case for smoothing out the results of a square:
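Along the lines of SICP exercises 1.43 and 1.44: `repeated` composes a function with itself n times, and `smooth` averages a function over a small neighborhood, so n-fold smoothing is just `((repeated smooth n) f)`:

```scheme
(define dx 0.00001)

(define (compose f g)
  (lambda (x) (f (g x))))

(define (repeated f n)
  (if (= n 1)
      f
      (compose f (repeated f (- n 1)))))

(define (smooth f)
  (lambda (x)
    (/ (+ (f (- x dx)) (f x) (f (+ x dx)))
       3)))

(define (square x) (* x x))
(((repeated smooth 3) square) 2)   ; the square function, smoothed three times, at 2
```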

State and Modularity as Tools for the Engineering Process

The passage of functions and the passage of state together allow us to perform an operation, perform a test on the result, and keep performing that same operation (or a different operation) if the test fails. We can also save off these states, if we like, to log the progress of a procedure, or to pause a procedure, or back it up.

Or, if it’s a long running process with the potential to fail, we could save off the progress at the point of failure, debug the failure, and restart the process from the failure point rather than starting the whole thing over. These types of operations frequently come up on the wish lists of machine learning engineers who use the scikit-learn library. They’ll start to train a model, run out of memory, and lose all the work done so far. This especially sucks if it was some kind of deep learning model that takes four days to run and failed three days in.

How do these models converge on their answers? The process is called gradient descent. Would it be possible to implement this in such a way that state is passed from one step to the next and saved off in the event of a failure?

More broadly, how do the choices that we make in programming language design affect the way its programmers are encouraged, or limited, to manage operations and state?

Now that an exploration of the first chapter of SICP has raised this question, I’d like to think more about it.

In the meantime, we’ll move on to the second chapter of the book, which deals with the representation of data structures.

If you liked this post, you might also like:

This ongoing series tracking my progress through Crafting Interpreters by Bob Nystrom

This piece that dives into the scipy CSR sparse matrix

This applied data science case study
