Live Stream Director’s Cut: Adding Interactivity

Reading Time: 6 minutes

I’m live streaming, from start to finish, my programming of an iOS app to help folks explore and translate Scottish Gaelic phrases.

Edinburgh, Scotland

The last post about this provided links and director’s cuts for streams 6-9, in which we begin adding code functionality to the home view. In this post, I’ll cover the streams where we add events—stuff that the home view does in response to taps on the screen. In particular, when someone taps on a blog post preview, a drawer should animate up from the bottom of the view that loads that blog post in full. Then, when they tap the “Done” button, it should animate back down.

Our first step will be to prepare our blog previews to accept taps. Because this isn’t a button or a similar view that accepts taps by default, we’ll use a gesture recognizer:
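Here’s a minimal sketch of what that wiring might look like. The outlet and method names here are my own stand-ins, not necessarily the ones from the stream:

```swift
import UIKit

class HomeViewController: UIViewController {
    // Hypothetical outlet; the real storyboard connection may be named differently.
    @IBOutlet weak var blogPostPreview: UIView!

    override func viewDidLoad() {
        super.viewDidLoad()
        // Plain UIViews ignore touches unless user interaction is enabled.
        blogPostPreview.isUserInteractionEnabled = true
        let tap = UITapGestureRecognizer(target: self,
                                         action: #selector(didTapBlogPostPreview))
        blogPostPreview.addGestureRecognizer(tap)
    }

    @objc func didTapBlogPostPreview() {
        // This is where we'll slide the drawer up and load the post.
    }
}
```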

Theoretically we could also do this with IBActions, but we’ll have many, many buttons in this app on which to demonstrate those :). Gesture recognizers represent a superset of button functionality, because they can be used to get views to respond to all kinds of touches onscreen; the “tap” gesture (which buttons surface as the touchUpInside event) is just one of those, and the standard one for buttons.

Now it’s time to implement loading the blog post and sliding our drawer up.

We went with a WKWebView in this app to load the posts while keeping the person inside the app, rather than kicking them out to Safari.
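As a sketch, assuming the drawer exposes a container view for its content, loading a post could look something like this. Both `drawerView` and the function name are stand-ins, not the app’s actual names:

```swift
import UIKit
import WebKit

// A minimal sketch: load a post into a WKWebView that fills the drawer,
// so the reader stays in the app instead of bouncing out to Safari.
func loadPost(at url: URL, into drawerView: UIView) {
    let webView = WKWebView(frame: drawerView.bounds)
    webView.autoresizingMask = [.flexibleWidth, .flexibleHeight]
    webView.load(URLRequest(url: url))
    drawerView.addSubview(webView)
}
```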

The other exciting part of this stream is that you get to see how to create views programmatically in iOS. Up until now, we have almost exclusively used storyboards, but in this case, we want to generate a view that waits offscreen for an animation to be called, and then slides in.
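A sketch of that idea, parking a programmatically created drawer just below the visible area; the sizing and styling here are illustrative, not the app’s:

```swift
import UIKit

// Build the drawer in code and position it offscreen, below the bottom
// edge of the parent view, so a later animation can slide it into view.
func makeDrawer(in parent: UIView) -> UIView {
    let drawer = UIView(frame: CGRect(x: 0,
                                      y: parent.bounds.height, // just offscreen
                                      width: parent.bounds.width,
                                      height: parent.bounds.height * 0.9))
    drawer.backgroundColor = .systemBackground
    parent.addSubview(drawer)
    return drawer
}
```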

We also discuss the two different animation APIs in iOS, where they each come from, and how I recommend using each one.
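As a sketch of one of those, here is the drawer slide written with the UIKit block-based API (UIView.animate); the other API, Core Animation, works at the layer level, and the stream covers when each one fits. Durations and function names are illustrative:

```swift
import UIKit

// Slide the drawer up into view, and back down when "Done" is tapped.
func slideDrawerUp(_ drawer: UIView, in parent: UIView) {
    UIView.animate(withDuration: 0.3) {
        drawer.frame.origin.y = parent.bounds.height - drawer.frame.height
    }
}

func slideDrawerDown(_ drawer: UIView, in parent: UIView) {
    UIView.animate(withDuration: 0.3) {
        drawer.frame.origin.y = parent.bounds.height
    }
}
```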

Then, of course, there’s testing.

Here’s the thing about iOS: it is, hands down, no question, by far, the least cooperative framework in which to implement automated testing. In particular, it’s difficult to write view logic unit tests without tightly coupling to the iOS implementation in general, and ViewController instantiation in particular.

I think one reason that the presenter pattern enjoys so much popularity in enterprise mobile applications is that it’s the closest you can get to tearing tested logic away from ViewController instantiation. You can inject a presenter into the ViewController, have the ViewController pass it data and views, have the presenter spit back results, and then test the presenter separately from the ViewController. If you have a lot of logic that you’re not required to put in the ViewController, great. But selectors and the like, which we use for adding responses to events, don’t move out of the ViewController so easily. Moreover, if only 5% of your ViewController code can be moved out, moving it out adds indirection for very little gain in conciseness or verifiability.
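As a sketch, with entirely hypothetical names, the shape looks like this: the ViewController passes data in, the presenter hands results back, and the test touches only the presenter:

```swift
import XCTest

// Hypothetical model and presenter; the point is that neither needs UIKit.
struct BlogPost {
    let title: String
}

struct BlogPostPreviewPresenter {
    func displayTitle(for post: BlogPost) -> String {
        post.title.isEmpty ? "Untitled" : post.title
    }
}

// The unit test exercises the presenter without instantiating a ViewController.
class BlogPostPreviewPresenterTests: XCTestCase {
    func testFallsBackToUntitledForEmptyTitles() {
        let presenter = BlogPostPreviewPresenter()
        XCTAssertEqual(presenter.displayTitle(for: BlogPost(title: "")), "Untitled")
    }
}
```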

Your other option for testability, besides moving code out of ViewControllers, is to wrest control of ViewController instantiation from iOS. You can do it, but at what cost? It’s such a massive workaround that it involves largely abandoning the use of Interface Builder for segueing from screen to screen. Interface Builder exists so designers (well, in practice, developers) can review and change the user experience diagram by clicking and dragging. Remember: the point of a well-tested, well-refactored code base is to make the code easy to change. Interface Builder makes the UI, and especially screen-to-screen transitions, easy to change. These are also the exact kinds of things that are likely to change in an app. Breaking that to preserve testing purity isn’t worth it to me.

So another solution is to capitulate on the unit tests and instead ensure that the UI works correctly through feature tests. These run rather slowly, but they work, and they don’t require us to throw out everything Apple ever did to make iOS apps easier to write in order to make them testable. Here’s their caveat: they’re much easier to write after the UI is in place than before. Why? Xcode provides a tool that lets us open the simulator and use the app while it records all the events we generate as XCUITest API calls. Then we can go into the test, with the list of events, and insert assertions where we want to check that certain things are true, or not true. That’s what you see me do in the above video.
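A recorded-then-annotated feature test might look roughly like this. The accessibility identifiers are stand-ins, and the assertions are the parts added by hand after recording:

```swift
import XCTest

class DrawerUITests: XCTestCase {
    func testTappingAPreviewOpensTheDrawerAndDoneClosesIt() {
        let app = XCUIApplication()
        app.launch()

        // The recorder captures interactions like this tap.
        app.otherElements["blogPostPreview"].tap()

        // Hand-inserted assertion: the post loaded in a web view.
        XCTAssertTrue(app.webViews.firstMatch.waitForExistence(timeout: 5))

        app.buttons["Done"].tap()

        // Wait out the slide-down animation before asserting the drawer is gone.
        expectation(for: NSPredicate(format: "exists == false"),
                    evaluatedWith: app.webViews.firstMatch)
        waitForExpectations(timeout: 5)
    }
}
```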

Now that my functionality is neatly tested, I want to try a refactor in which I extract common, copied-and-pasted UI elements into a reusable custom one.

Stream 13 was my first failure at doing that. I time-boxed the attempt because the benefit we stand to gain in this app from making this change is minor, but I wanted to try it. A half hour later, I hit my time-box and called it.

But a few days later, I was feeling stubborn. So I tried again.

Still no dice. So instead of trying a third time, I finally convinced myself to move on to the next feature, which was…

Visiting a site in a slide-up drawer when someone taps a button. Wow! It’s so similar to the feature we added for the blog post previews! So we got to reuse a lot of our code from that, and update the XCUITest to boot, so that all of our drawer events have test coverage.
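Since buttons accept taps out of the box, the new wiring amounts to an IBAction that calls into the same drawer path the previews use. A sketch, with hypothetical names throughout:

```swift
import UIKit

extension HomeViewController {
    // Hypothetical action; presentDrawer(loading:) stands in for whatever
    // shared method the blog post previews already call.
    @IBAction func visitSiteTapped(_ sender: UIButton) {
        guard let url = URL(string: "https://example.com") else { return }
        presentDrawer(loading: url)
    }
}
```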

In the next series, I think we might move to a different screen and work on something new in this app for a while :).

If you liked this piece, you might also like:

The teaching tag, if you’re learning something from these streams and want to see more about how I teach

How to Jump-Start a New Programming Language, or maybe, even, gain a more concrete mental model of the one you already use!

How does git detect renames?—This piece is about how git detects renames, but it’s also about how to approach questions and novel problems in programming in general.
