Would you like to watch me code? Now you can.

Reading Time: 8 minutes


I live stream some of my development work on an open source React Native mobile application for The Zooniverse.

First, I share my screen. Then I write code for a few hours, narrating my process the entire way. It’s streamed to YouTube, and you get to tune in.

If you can’t tune in while I’m coding, you can watch it later, too.

I haven’t decided the details of how I’ll announce live streams yet, but for now I’ll probably announce them in some Slack channels, and maybe on Twitter.

Why I live code

  1. To help others learn. I have found relatively few demonstrations of sound engineering principles in live coding. So I’d like to contribute that to the internet.
  2. To help me learn. I love to teach and share, and I’m always looking to learn new ways to do that. I’d also like to selfishly capture other programmers’ wisdom by giving them opportunities to armchair pilot my programming.
  3. To generate interest in open source contributions to a React Native project. JavaScript frameworks enjoy a lot of popularity in commercial client applications. But most well-known open-source React work is in the framework itself (or the plugins)—not apps that use the framework. The Zooniverse mobile app presents a unique opportunity for folks who want to build their React Native portfolio to work on a client app like the ones they’d be asked to work on in a commercial enterprise.

How I live code

While I’m live coding, I stream my entire screen using OBS: open-source software that gamers use to livestream their gameplay. I owe a debt of gratitude to Joel Grus for helping me get started with it, and since then I’ve had a lot of success Googling whatever I want to do.

I stream to YouTube. I chose YouTube over Twitch because the videos stay up in perpetuity on YouTube, rather than coming down after two weeks on Twitch.

What I have learned so far:

Here is my first live stream (~10 minutes), where I introduce the project itself. The goal for this stream was to get something streamed, so by that metric, it succeeds! I reflected on what I learned from this stream on Twitter, but I’ll also share it here:

  1. The messing up is constant. I cannot expect the same level of polish from live coding that I can from edited blog posts.
  2. I am capable of raising one eyebrow. The way Laura Prepon does this as Alex Vause on Orange is the New Black makes me green with envy. I have even practiced in the mirror a few times. Evidently I can already do it! Just not on command.
  3. I need to think about what my audience sees while I’m working. In this stream, my webcam blocks part of my simulator screen.

I fixed the screen blocking issue in the second livestream, which lasts about three hours and includes actual programming. The goal for this stream was to practice pointing at parts of the screen, narrating my work, and throwing up a wait screen during break periods. I did not announce this stream, so no random people saw it live. I also didn’t know how to use the pop-out comments feature on YouTube when I streamed this, so even if someone had watched, I would not have seen their comments. That’s on the list for next time.

Takeaways from the second livestream:

  1. I’d like to look at the camera, rather than above it, while I’m explaining things. That’s a byproduct of the way I am sitting and how I have my camera set up. My plan is to get some googly eyes and put them on either side of my camera so I have a reminder to look into it.
  2. I’d like to turn on Presentation Assistant for the next stream. Presentation Assistant is a plugin for JetBrains IDEs that shows the key binding on the screen when someone uses a keyboard shortcut. Throughout the video I narrate my keyboard shortcuts, but the YouTube automatic transcription isn’t super accurate with them. Presentation Assistant would add a visual aid for this.
  3. I think I’m ready to take comments as I stream. After this stream ended, I learned how to get a pop-out comment box from a YouTube live stream. I can show that on my screen as a component of my screen share, and I can also announce the streams ahead of time to give folks a chance to join in.
  4. In this live stream, I also narrate a list of refactors that we will perform. While this works fine, I’d like to keep the digital equivalent of a notepad onscreen, so I can take notes about things I’d like to revisit during a session.

How to get a transcript of a live stream

It would take a long time to transcribe 3 hours of narration. YouTube has an auto-generated transcript that mostly works (with the exception of the aforementioned key bindings).

To see that transcript in the form of closed captioning, you can hover your cursor over the video and then click the closed captioning button, pointed out in the following screenshot:

youtube closed captioning

You can also see the transcript alongside the video as it is playing, with the line selected that corresponds to what is currently being said. To get that, click the kebab menu (circled below) and from the menu, choose “Open transcript” (pointed at below).

transcript youtube

This way, you can read along while I talk (or whoever is narrating your YouTube video of choice).

Making programming videos accessible and useful

Once I finished my second live stream, I had a three-hour coding video on my hands. No matter what wonderful moments happen in that video, people don’t sit down and watch three hours of coding. I wanted to make something that folks would find more useful.

So I texted the following to about two dozen of my friends:

I have a question for you. Suppose someone had live coded for 3 hours and narrated their process the whole time. Now they have a video on youtube of them doing this, and you are a person who wants to learn more about how to write the stack they are writing. What would make that video most useful and accessible to you?

1. Copyable Snippets of the code being shared on screen

2. A transcript of everything the host is saying

3. An outline of the video that lists what topics were covered and the time stamps of when exactly in the video they were covered

4. Nothing; just the video would be useful on its own

5. Nothing would make the video at all useful

6. Something else (feel free to specify)

Almost everyone wanted either number 1 or number 3. From what I could tell, the full-time programmers leaned toward 3, and the folks who primarily serve other functions (and sometimes code to support those functions) chose 1.

To make the videos more useful, my plan is to produce a set of show notes containing:

  1. A link to the commit, or commits, we work on in the live stream
  2. An outline of topics from the video, with time stamps.

The first show notes are forthcoming. I have discovered that they take a lot of work.

Conclusion

Lately I have begun live streaming some of my development work on The Zooniverse open source React Native mobile application. I’m doing it to help others learn, to help myself learn, and to create a visible, accessible starting point for folks who would like to contribute to an open source React app.

I live stream with OBS to YouTube, so the videos will stay up in perpetuity. So far I have done two live streams, and each time I pick up new skills, new options, and new things to improve.

YouTube provides a transcription service for the videos, which I hope will make them easier to digest. To further ease this process, you can expect to see show notes about each of the live streams after the fact (at least, until I decide that’s way too much work).

If you liked this piece, you might also like:

The rest of the behind-the-scenes posts—many of which talk about other semi-creative, semi-technical endeavors like this one

This two-post series on time and space efficiency—in which I break down programming decisions just like I do in live streams

The API design series—debunking all the myths about your favorite HTTP protocol. Sorry, not sorry 😉
