Live Coding: Exploring an API

Reading Time: 10 minutes

I live stream some of my development work on an open source React Native mobile application for The Zooniverse.

The last time I did this, we worked on some themes for the volunteer-facing UI. Today, we’ll start switching between those themes based on information we get from an HTTP API.

In this post, you’ll find:

  1. Two embedded live coding videos. A meeting came up while we were in the middle of doing something on-stream. So I did an encore stream later to finish the job.
  2. A link to the commits on GitHub that we wrote during the live stream, so you can review the code yourself.
  3. An outline of the topics we talk about in the live stream, with links to time stamps on the video.

Show Notes: Exploring an API

In today’s live stream, we are working on museum mode for the Zooniverse mobile app.

This mode will tailor the user experience of our mobile app for use in museums, so exhibit designers can showcase a particular project for their guests to learn about and participate in. Museum exhibits tend to be dark places, so it would be jarring to have a white screen on the app. For this reason, we are implementing a dark mode that will activate when we know the project we are showing is being displayed in a museum.

This requires us to change several visual elements of the app based on a response to an API call. We implemented those visual changes in our last live stream, and we used a placeholder value that would evaluate to ‘true’ to turn on the changes so we could check that they looked good.

In this live stream, we figure out the API call we need to make to determine whether a project is supposed to be in museum mode for a given user.

You can explore the code from these live streams!

This repo is public on GitHub, so you can see each commit we make, including the commit message and the full set of changes we made to the code. If you’d like to review or copy-paste any part of the code we wrote, you’ll find it here.

Our commits, in the order we made them:

  1. Determine at fetch time whether projects should be in museum mode.

  2. Use project museum mode attribute to determine app colors

  3. Adjust props validation for project.in_museum_mode
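As a rough sketch of the idea behind the first and third commits (only the attribute name `in_museum_mode` comes from the commit messages above; the function name and response handling are assumptions for illustration):

```javascript
// Hypothetical sketch: decide at fetch time whether a project is in
// museum mode, coercing whatever the API sends (missing, null, etc.)
// into a real boolean so the UI code can rely on it.
function withMuseumMode(rawProject) {
  return {
    ...rawProject,
    in_museum_mode: rawProject.in_museum_mode === true,
  };
}

const project = withMuseumMode({ id: '42' });
// project.in_museum_mode is false when the API omits the attribute
```

Normalizing the flag at fetch time means every downstream component can trust a plain boolean instead of re-checking the raw API value.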

You can also get a transcript or closed captioning for either of these live streams.

In this post, I explain how you can get those things in one or two clicks on YouTube.

Here’s the First Video:

Outline: What we cover in this live stream

11:44: The coding starts here. We do a recap of what feature we’re building, what we have accomplished so far, and where we’re going next.

15:36: Intro to the Zooniverse API: this is the API that will send us the information we’re looking for. The API, like the mobile app, is open source, so you can see the repo on GitHub.

16:10: I cannot seem to authenticate via the Zooniverse API with Postman. I show you Postman, I show you the issue, and I share this documentation on how to do it (although the documentation is out of date). My offer to pair with somebody on this for a recorded pairing video is still good for now, but I’ll give it another go this week.

19:41: Alternatives to an HTTP client for exploring an API.

  1. Documentation. I’m a big fan of reading documentation. That said, it’s not uncommon for documentation to be out of date. So what other options do we have for learning about an API?
  2. Tracing API client libraries. Here is the blog post I mention in the video that deep-dives into how to study other people’s code.
  3. Experimenting in the code. Here, we modify the code of the client app and make predictions about what we expect to see. Here, our react native debugging tools help us out, because we can print things to the console and run our app to see the responses we receive from the API.

24:10: A protracted exercise in iterative logging. You see me work all the way through our API calls to figure out which attribute on which call I will need for the theme switch.

I log things out when debugging React Native code, rather than setting breakpoints, for two reasons. First, JavaScript is an interpreted language. I find setting breakpoints much less reliable for interpreted languages like JavaScript, Ruby, or Python than for compiled ones like Java or Swift. Second, we’re working with code here that is designed to run asynchronously, so breaking in one part of the program wouldn’t (and shouldn’t) stop other parts. This can be confusing: the app keeps running, or appears to run normally, even when an expected input never arrived. So I let the whole thing run and log my points of interest.
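A minimal sketch of that iterative-logging style (the `fetchProject` function and the response shape are hypothetical stand-ins for the real API call):

```javascript
// Sketch: log the input, the raw response, and the derived value,
// rather than pausing an async pipeline with breakpoints.
async function fetchProject(id) {
  // Stand-in for a real network request to the API
  return { id, title: 'Example Project', in_museum_mode: true };
}

async function loadTheme(projectId) {
  console.log('requesting project', projectId);   // log the input
  const project = await fetchProject(projectId);
  console.log('received project', project);       // log the raw response
  const dark = project.in_museum_mode === true;
  console.log('museum mode?', dark);              // log the derived value
  return dark ? 'dark' : 'light';
}
```

Each log line narrows down where an unexpected value first appears, without stopping the rest of the app.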

58:02: Break.

1:02:40: On the value of developing software exploration skills. In this part of the video I make a cup of tea and talk about why programming is hard. If you’ve read this post, you’ve experienced a taste of this conversation. In this part of the video, I say a version of something that I said in that post:

The thing about computers is that their evolution depends on the admirable, herculean efforts of a staggering number of fallible human beings, all over the world, coordinating their efforts very poorly. So programming doesn’t make sense, the systems don’t all work the same way…it’s hard.

So it behooves us to learn skills that allow us to spelunk unknown code the way cavers would spelunk an unknown cave. It looks dark and dangerous in there; maybe there are high cliffs or deep pits we don’t see until we’re practically on top of them. It would be far easier for us to stand outside and criticize the programmers who made this cave. But with some spelunking skills, we figure out how to take our first steps inside, and then further inside. The process (usually) remains slow, but it becomes less daunting.

Image from https://www.wired.com/2015/09/dinko-stopic-cave-photography/

1:05:06: An aside to talk about cool planetarium stuff. I won’t spoil it here in the show notes 😊

1:11:03: We begin making a refactor list. This is a technique that I started using while pair programming; when I notice something I’d like to fix, I can make a note of it and continue along my previous train of thought with the knowledge that I can come back later to fix other stuff.

1:11:30: Live debugging. Folks really appreciated seeing some live debugging in the last live stream, so I wanted to point out another opportunity to see that in this live stream. In fact, in this debugging example, you get to see a few things:

  1. I’m relatively successful at narrating my thought process, so you can understand the decisions I’m making.
  2. You get to see me make mistakes and miss things. As I said for the last video, I want you to know that I make mistakes. I want you to know, the next time you make a mistake, that it happens to all of us. Mistakes don’t mean you can’t write code; they mean you’re just like the rest of us 😂.

In fact, this debugging session runs all the way up to the end of the first video, where I stop because I had to run to a meeting. We pick it up again in the second live coding video:

Outline for This Video

1:48: The coding starts here. We recap what we were working on when we finished up the last live stream, and then continue our work on figuring out why our method isn’t returning the boolean we expect for each project.

9:40: When simplifying code meets improving performance. I have created a couple of examples in the past about evaluating whether and how to focus on code performance.

The short version: there are two kinds of performance. First, there’s how fast the code runs. Second, there is how fast (or rather, how easily) a developer can maintain or modify the code. Tech teams often prematurely optimize on the first kind of performance and cost their team the second kind of performance—which can end up being more costly (if things get so bad that a rewrite is needed, it’s far more costly).

Milliseconds often don’t matter to users. However, the biggest code speed problems often crop up at the interfaces between apps and their databases, or apps and their external APIs. In the code we’re writing, we have an interaction with an external API. In this case, rather than making the code more complicated, a more consolidated, faster API interaction also makes the code cleaner.
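Here’s a sketch of that trade-off (the request shapes and the `fakeRequest` helper are invented for illustration; real code would hit the API):

```javascript
// fakeRequest simulates a network round trip with some latency.
function fakeRequest(result, ms) {
  return new Promise(resolve => setTimeout(() => resolve(result), ms));
}

// Before: two sequential round trips, and two places for bugs to hide.
async function twoCalls() {
  const project = await fakeRequest({ id: '1' }, 50);
  const flags = await fakeRequest({ in_museum_mode: true }, 50);
  return { ...project, ...flags };
}

// After: one consolidated response carries everything we need.
// Fewer round trips, and the code is simpler to read, too.
async function oneCall() {
  return fakeRequest({ id: '1', in_museum_mode: true }, 50);
}
```

This is the happy case where the faster version is also the more maintainable one; when the two kinds of performance conflict, it’s worth measuring before complicating the code.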

16:47: Fun with the order of an “or” ( || ) operator. This technique might be old hat to the senior programmers in the crowd, but for folks who are newer to the use of the logical operators, this part is worth a watch. We finish up this part at 20:36.
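The gist, if you can’t watch: `||` evaluates left to right and stops at the first truthy operand, so the order of operands matters when they differ in cost or side effects. A minimal demonstration:

```javascript
// Record which operands actually get evaluated.
const calls = [];
function check(name, value) {
  calls.push(name);   // note that this operand ran
  return value;
}

const result = check('cheap', true) || check('expensive', true);
// Only the first operand runs: calls is ['cheap'], result is true
```

Putting the cheap (or most-likely-true) check first means the expensive one often never runs at all.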

18:31: Making a commit. Now let’s evaluate if there are refactors we want to make.

19:28: break

23:20: We return to the refactor list. It’s time to clean up some code.

28:50: We establish an expectation before checking our result. By explicitly expressing what we think our code should be doing, we can more quickly get to the result we want and more clearly remember differences between what we wanted to see and what we’re actually seeing.
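In code, this practice can be as simple as writing the expectation down before inspecting the output (the project object here is a stand-in for a real API response):

```javascript
// Sketch: state the expectation first, then compare it to the actual result.
const project = { id: '1', in_museum_mode: true };

const expected = true;                  // what we believe the flag should be
const actual = project.in_museum_mode;  // what the code actually produced

console.log({ expected, actual, match: expected === actual });
```

Writing `expected` down first keeps you honest: any mismatch is immediately visible instead of being rationalized away after the fact.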

31:37: Fixing some issues before our final commit. This piece of cleanup sometimes feels tedious and annoying, but a little bit of maintenance at every change adds up over time to a much cleaner, more maintainable code base.

34:42: One last mistake. Can you catch what I’ve done here before I do?

39:15: Check out our design tool! We use InVision to share designs across the mobile team.

41:44: Fixing up some things about the format of our props validation.

43:25: Clearing our refactor list. Before I make final commits, I like to clear my refactor list, then perform any final cleanup like removing logging statements or superfluous code comments.

48:55: Reviewing our changes to commit.

51:40: Final commit changes, with an interactive rebase. Then at 52:58, we push!

2:58:11: Pushing to GitHub and creating the Pull Request: In which you can see the process that our team uses to work on code together.

New for This Live Stream:

  1. I turned on presentation assistant. This means that, each time I use a keyboard shortcut (also called a key binding) in my IDE (Webstorm), a little text bubble appears at the bottom to tell you what I pressed and what it does. I try to narrate these, but in the event that I miss them, I want you to have those labels available.
  2. We introduced an explicit refactor list. In previous videos we implicitly had a refactor list stored in our memories, but it’s helpful to write it down.
  3. I monitored comments! There were no comments, but I’m glad to have this figured out so that, when these streams are announced earlier and to a wider audience, I can (hopefully) interact with you while we code together.

That’s it for this session, y’all. See you soon!

If you liked this piece, you might also like:

This post on my entrée into live coding (plus, how to get a transcript of a YouTube video)

The series on reducing job interview anxiety—no relation to this post, but people seem to like this series.

This talk about the technology and psychology of refactoring—in which you’ll hear me explain some of what you see me do when I code.
