Live Coding: The Abstraction Gradient

Reading Time: 14 minutes


We’ve been driving hard on features lately in our coding live streams. Today, we step back and refactor. This refactor focuses on a crucial concept for building discoverable, maintainable code. I call it the abstraction gradient.*

*Wait, I’m lost. What are we doing? Read about why I do live coding.

To understand abstraction gradients, let’s start with an example from biology. How do our brains process visual stimuli?

The Brain

10-second biology lesson: Different areas of the brain are responsible for processing different sensory inputs. The V1 area specializes in taking in the visual field and detecting its most primitive elements: edges and lines, light and shadow. That information goes to the secondary visual cortex, which specializes in detecting distinct shapes. The shape information goes to an area that specializes in detecting objects among the shapes, and so on to different areas for more complex levels of organization, like chairs, dogs, and faces.

Now, suppose we wrote some code to model the brain, and our SecondaryVisualCortex class had methods like .detectCircle(), .detectSliver(), .detectBlob() …and also .detectKoala(). That would be weird, right?

When a developer who doesn’t know much about our program looks for koala detection, their understanding of the brain isn’t going to help them find it. They would expect koala detection to happen further along, in an area responsible for collating more complex shapes than circles, slivers, and blobs.
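
Here's a minimal sketch of that mismatch, with invented class and method names (this is an illustration, not real neuroscience or real Zooniverse code): koala detection gets moved out of the shape-level class and into a class further along the gradient that collates shapes into objects.

```javascript
// Hypothetical sketch: each "brain area" class handles one level of
// abstraction. Shapes here, objects further along.
class SecondaryVisualCortex {
  constructor(shapes = []) { this.shapes = shapes; }
  detectCircle() { return this.shapes.filter(s => s.kind === "circle"); }
  detectSliver() { return this.shapes.filter(s => s.kind === "sliver"); }
  detectBlob()   { return this.shapes.filter(s => s.kind === "blob"); }
  // No .detectKoala() here: koalas are objects, not primitive shapes.
}

// Koala detection lives at the next level of the gradient, where
// shapes are collated into complex objects.
class ObjectRecognitionArea {
  constructor(cortex) { this.cortex = cortex; }
  detectKoala() {
    return this.cortex
      .detectBlob()
      .some(blob => blob.furry && blob.eatsEucalyptus);
  }
}
```

Now a developer hunting for koala detection finds it where their mental model of the system predicts it should be.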

The more the responsibilities of our pieces of code match their levels of abstraction, the more leverage we give a developer who doesn’t have context (either someone else, or our future selves) to figure out what the code is doing. This is called discoverability. Discoverability directly affects code’s maintainability: if I can’t tell what it’s doing, I can’t confidently fix it.

When working with abstraction gradients in code, we want to answer four questions:

  1. What are the different pieces of code in play for the feature we’re working on?
  2. What responsibilities do each of those pieces of code fulfill?
  3. What types of features do these pieces of code affect?
  4. Do the responsibilities we identified in question 2 match the types of features we identified in question 3?

To illustrate this, I drew pictures. And you get to watch me draw them!

Pretty cool drawing, right?

In this post, you’ll find:

  1. Two embedded live coding videos. I had to run to a dinner thing while in the middle of fixing something on-stream. So I did an encore stream later to finish the job.
  2. A link to the commits on GitHub that we made in the live stream, so you can review the code yourself.
  3. An outline of the topics we talk about in the live stream, with links to time stamps on the video.

Show Notes: Examining the Abstraction Gradient

We’re creating a mode for the Zooniverse mobile app that tailors the user experience for museums, so exhibit designers can showcase one project for their guests to learn about and participate in.

Museum-goers are really good at getting museum devices out of the state they’re supposed to be in, and sometimes even onto Facebook instead (naughty!). So we have restricted navigation to make it harder to accidentally (or on purpose) navigate away from the project being featured at the museum.
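
As a rough sketch of what restricted navigation could look like (the route names and the museum-mode flag here are hypothetical, not the actual Zooniverse implementation): a single predicate decides whether a destination is reachable, and museum mode shrinks the set of allowed destinations.

```javascript
// Hypothetical sketch of restricted navigation for a museum mode.
// Route names and the museumMode flag are illustrative only.
function canNavigateTo(routeName, museumMode) {
  if (!museumMode) return true; // normal mode: all navigation allowed
  // Museum mode: only screens for the featured project are reachable.
  const allowedInMuseumMode = ["ProjectHome", "Classify", "About"];
  return allowedInMuseumMode.includes(routeName);
}
```

With a check like this at the navigation layer, wandering off to the project list (or Facebook) becomes a no-op rather than something each screen has to defend against.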

We have already made this change by the time the live stream starts. But let’s take a step back and talk about an important part of code design: abstraction gradients, a term I coined precisely for this live stream and then immediately forgot in front of the camera.

It was on the tip of my tongue!

You can explore the code!

This repo is public on GitHub, so you can see each commit we make, including the commit message and the full change set. If you’d like to review or copy-paste any part of the code we wrote, you’ll find it here.

Our commits, in the order we made them:

  1. Remove project-specific contextual information from the navigation bar

  2. Repair default and preview background colors on navigation bar

  3. Unify a duplicate color in the theme file under the name $zooniverseTeal

You can also get a transcript or closed captioning for either of these live streams.

In this post, I explain how you can get those things in one or two clicks on YouTube.

Here’s the First Video:

Outline: What we cover in this live stream

1:42 – The coding starts here. We do a recap of what feature we’re building, what we have accomplished so far, and where we’re going next.

2:41 – Review of what we have done together in the app in the previous live streams.

3:47 – Reviewing a new feature that I have added on my own for museum mode. I give you a chance to guess what it is, and then we talk about what I have done and why.

7:00 – This is where I start talking about the abstraction gradient, the critical concept that frames the rest of our session.

15:29 – Here is the most condensed, salient example of an abstraction gradient.

20:30 – In which I explain our approach to component design (which takes the abstraction gradient into account)…

22:38 – …and contrast it with a dodgy implementation of a component design convention in reactive programming in general (and React frameworks specifically).

23:49 – Here, we walk through a technique for analyzing legacy code—code illustration! We already explained the thing we’re illustrating at least twice, so that part won’t be new to you. This frees up your brain cells to focus on the technique of code illustration, including the representation of code pieces, control flow, responsibilities, and scopes.

I typically do this step with pens on a physical sheet of paper, so you get to see me learn to use a programmatic diagramming tool (Miro) in real time. I thought this would be annoying for you. Viewers tell me it’s kinda fun.

“I don’t want these instructions! Just let me struggle in peace!”

39:50 – Here is the climactic part where we discuss the implications of abstraction gradients.

With a bonus: because I have used a programmatic drawing tool instead of pen and paper, I can move the responsibilities within scopes to illustrate the point I’m making, for your viewing pleasure!

“Be normal, Chelsea.”

47:45 – And now, we arrive at our code solution! So we presented a problem, illustrated the parts of the problem, discussed a framework for thinking about the problem, and arrived at the solution.

Have I succeeded in telling you a story? Do I know any code storytellers who might evaluate this for me? I’ll bake you a lava cake.

Come on. How could you possibly resist lava cake?

52:20 – Talking about tradeoffs. We have to make a decision here about writing DRY code versus code that explains itself at a point where developers are likely to see it. Over time, developers build up a collection of heuristics upon which they weigh tradeoffs and arrive at decisions about how to write their code. Here, I’m making explicit the tradeoff I have considered, the heuristic I have used, and the choice I have made. We do this hundreds of times an hour as programmers, much the way a driver does while cruising down the highway. What heuristics do you use when you evaluate tradeoffs in writing your code?
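
Here's a tiny sketch of that tradeoff, with hypothetical names and values (not the Zooniverse code): the DRY version centralizes the style in one constant, while the explicit version repeats the values right where a reader will actually encounter them.

```javascript
// Option A: DRY. One shared constant — but a reader at the call site
// has to chase the name to learn the actual values.
const HEADER_STYLE = { backgroundColor: "#00979D", elevation: 0 };
const headerOptionsDry = () => ({ ...HEADER_STYLE });

// Option B: explicit. Slightly repetitive — but a reader sees the
// values right where the header is configured.
// (The hex color and keys here are made up for illustration.)
const headerOptionsExplicit = () => ({
  backgroundColor: "#00979D", // teal header background
  elevation: 0,               // flat header: no Android shadow
});
```

Neither option is universally right; the heuristic is about where the next reader is most likely to be standing when they need the information.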

1:00:30 – Live debugging. We finish the change we meant to make, but we have broken something else. Here for about five minutes, you hear me narrate my hypotheses about what is wrong (“I wonder if”, “I suspect that”), the experiments I use to test those hypotheses (“what if we try”, “let’s do”), and the results I get from my experiments (“Aha!”, “wow, that’s unexpected”).

The debugging wizardry you witness and/or hear about from wizened developers? It’s that: hypothesis, test, use results to generate new hypothesis.

Seasoned pros might do this fast. They might not narrate it, so it looks even faster. And they might get a head start from all the experiments they have already run and remembered from their careers. But the process we do here is the same process they’re doing.

1:05:12 – How do we push a commit with known issues that we need to fix before merge? Here, we write a commit message to explain exactly what comes next and record our list of things we want to try next time. As much as I wanted to keep going on this code, I had to rush to dinner with a friend.

So we ended this live stream and picked up where we left off in a second one:

Outline for This Video

1:20 – The coding starts here. We recap what we were working on when we finished up the last live stream, and then continue our work on figuring out why our headers aren’t the color we expected.

5:30 – We discover some code that doesn’t work. I show you some of the techniques I use to debug React Native apps, and we confirm a suspicion we developed last time: that some code we had found was usually not doing anything. We also discover why it isn’t working, which gives us latitude to change it with confidence.

9:29 – We iterate through three ways to change the broken code, ending with the one that makes us (well, me) the happiest.

17:00 – We introduce a guard clause, plus some of my convictions about how to write one.

You hear me explicitly stating my assumptions (“if this is working as we expect, we should see a black background…”).
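
As a sketch of the idea (the function and values here are hypothetical, not the stream's actual code): a guard clause handles the exceptional case first and returns early, so the main path stays flat and unindented.

```javascript
// Guard clause sketch: deal with the missing-value case up front,
// then let the main path read without nesting.
// The function name and fallback color are invented for illustration.
function headerBackground(projectColor) {
  if (!projectColor) return "black"; // guard: no project color supplied

  return projectColor;
}
```

Compare that to wrapping the whole body in `if (projectColor) { … } else { … }`: same behavior, but the guard version states its precondition in one line and gets out of the way.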

21:18 – Let’s talk about scope. In part 1 of this stream, we put blue boxes around pieces of code to represent their scope. Here, we discuss how scope generalizes to any variable: a variable’s scope is the region of code within which its name resolves to the value it points to.
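
That definition fits in a tiny sketch (the variable names here are made up for illustration): the inner block is the "blue box" around `margin`, and outside that box the name no longer resolves.

```javascript
// `margin` is scoped to the inner block: that block is its "blue box".
function outer() {
  const results = [];
  {
    const margin = 16;       // visible only inside this block
    results.push(margin * 2);
  }
  // Referencing `margin` here would throw a ReferenceError:
  // the name is out of scope.
  return results;
}
```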

25:01 – The titles aren’t showing up on my headers. Why not? We find the reason, then take the opportunity to discuss the importance of doing a line-by-line review prior to committing our change sets.

33:58 – We talk about tradeoffs in code design. “We’re gonna be explicit and make something that is easier to read and harder to modify under the assumption that programmers, most of the time, are reading a line of code rather than modifying it.”

35:18 – Discovering scope overreach and deciding to leave a refactor undone. How do we decide whether to tackle a change or leave it alone for now? If we leave it alone, how do we signal that this still needs to be changed later?

38:25 – And here, we add a refactor to the scope of our work rather than leave it alone. How is this refactor different from the one above that we decide to leave for later? When and how do we perform this refactor to minimize the risk that it disrupts making our other changes?

Feel free to cogitate on this in your free time, and when you’re ready, you’ll find some answers in the work we do at 47:52.

42:49 – My heterodox opinion on including TODO in code. “It’s worth rethinking the choice to not provide context for the future on the assumption that no one will act on it.”
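
For illustration, here's what a context-rich TODO might look like (the variable, file name, and reasoning below are invented, not the Zooniverse code): it says what should change, why it hasn't yet, and what the next person should check before acting on it.

```javascript
// TODO: this color is duplicated in the theme file; unify the two
//       under one named theme variable. Left for now because the
//       theme file is shared with screens we haven't tested yet —
//       verify those screens on a device before changing it.
// (Hypothetical example: the name and hex value are illustrative.)
const navBarColor = "#00979D";
```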

44:38 – We review our changes to make a commit. Mostly we’re ready to compose our message and commit, but we do modify a thing or two.

50:02 – We review and commit the refactor we discussed at 38:25. Bonus items: my controversial inclinations on how to name widely scoped style variables, plus the one weird trick I use in my commit messages to anticipate future complications and signal the commit where I might have caused them.

“Chelsea, why didn’t you just test that all of that was working on Android during the live stream?” I don’t explain this on the stream, but in this case I have a resource limit. Android Studio, the Android emulator, Xcode, the iOS simulator, and my live streaming software are all computationally intensive programs. In this stream, I already have three of those running simultaneously on my laptop. Adding two more would have slowed down my machine to the point where we would have spent another 25 minutes getting through this, if we could do it at all. So, since the first place I would test it would be on whatever device I’m already running (to ensure that I haven’t made code-related mistakes), that’s what we do here. Later I’ll check it on Android.

52:29 – Updating snapshot tests. Those familiar with React Native development might know this process well :).

54:40 – Witness an interactive rebase. For some of you this will be old hat, but when I was learning to rebase interactively, I did not find a lot of step-by-step examples. So, here is one.

56:10 – We push!

New For this Live Stream:

  1. I got a couple of comments about the lighting on me, so I changed that slightly from the first part to the second part of this stream. I cannot guarantee that this change is permanent; I stream from where I can, and who knows where that will be? I will try. Because I love you. (But the point is the code, and you can see that, right?)
  2. Last time, we introduced an explicit refactor list. This time, since we had a break in the middle where we needed to push code, we included our list directly in a commit message. I encourage heavy-handed use of version control for transferring context. Think of context-rich commit messages as the code equivalent of director interviews from your favorite movies. You pay extra to have a copy of the movie with that director interview. And the director knows you love it; that’s why they record the interview. You’re directing this code, so be sure to provide all the juicy context and anecdotes for your adoring fans!*
  3. We did a code illustration! I hope you enjoyed it. Please let me know if it would be helpful for me to continue doing these.

*Or whomever gets stuck maintaining this thing when you win the lottery and move to wine country.

That’s it for this session, y’all. See you soon!

If you liked this piece, you might also like:

This post on my entree into live coding (plus, how to get a transcript of a YouTube video)

This talk about the technology and psychology of refactoring—in which you’ll hear me explain some of what you see me do when I code.

This 3 part series about time and space efficiency—In which I approach the topic of performance from the perspective of a code sample. Why make things fast? Why make them take up less space? And how do we evaluate the tradeoffs?
