How to Be a 10x Developer

Reading Time: 13 minutes

It’s not a clickbait title, but I understand why it looks that way.

I’ve made my thoughts about the 10x developer trope extremely clear. I think that someone who produces 10x as much code as someone else has left a wake of destruction that will slow down other developers. I don’t think 10x counts if you achieve it by slowing down everyone else. I think 10x has to take into account how you impact other people.

And that, paradoxically, suggests a path for how to become a 10x developer: empower and enable other people such that nine additional developers’ worth of work gets done. You can do that by making nine other developers’ workflows 2x smoother. You can also do that by making 18 other developers’ workflows 50% smoother, or 90 other developers’ workflows 10% smoother.

[Image: owl holding a red leaf in its beak. PC @templephotobetsy on Instagram]

At some point, the biggest gains in your X are gonna come from improving the ecosystem for other developers. On occasions when I have succeeded at that, from driving workplace changes to teaching computer science classes, I have noticed that I’m using one or more of three techniques. So I thought I’d list them here.

First, establish the motivation.

Learning is work. Sometimes, it’s boring or difficult work. Folks don’t undertake that kind of work unless they understand why they should. That’s not as simple as telling them why. Understanding comes from engaging with the material.

Suppose you want to make a major code change that affects your team’s workflow. If you blaze forward and do that without buy-in, the team will resent it (as we’ve discussed in more detail here). So I recommend starting with individual conversations about the reasons for the change:

What is bothering you about your team’s current workflow that motivates you to push your solution? Are you annoyed at having to write the same code over and over in different places? Are you tired of trying to poke your way around an opaque tool? Are you losing track of things that cost you time and money? Before you push your solution, pull your coworkers to it by making sure they feel the pain that you feel.

Before I do a major refactor, I’ll deliberately pair program with key people on my team on the kind of problems that my refactor would solve. I want them to experience the pain firsthand; I want to hear them say ‘oh my god this is annoying.’ After I have that admission of pain from a critical mass of people on my team, I’ll propose my solution, and I’ll offer to spearhead the refactor. I will craft a concise explanation of how our workflow changes after the refactor, and I’ll point out why I think the new way is nicer for us than the way we’re doing it now.  I have ‘negative testimonials’ about the old way, which I can whip out to capture the attention and agreement of others on my team. This approach allows me to muster support for my refactor and then, in spearheading the refactor, look like I’m taking one for the team.

Imagine the same scenario, same refactor, but instead of socializing my solution, I go off on my own, do the refactor, and push. My team didn’t understand the pain that the refactor solved, so all they’re seeing is the pain of learning the new way to do things. To them, it looks like I made a unilateral decision that screwed them over—the opposite of what a team player would do. Keep in mind, in both circumstances, I wrote the same code. The only difference is how I introduced it.

In effect, even the right code changes need to be sold.

How do we know what the right code changes are? This one is tricky, but one guiding principle has helped me out a lot:

Look for bonehead solutions to convoluted technical issues.

Michael Feathers talks a great deal about technical simplification options in his various discussions of edge-free programming (here’s a more detailed look at some of those principles). But by way of example, here are a few simple solutions to complex problems that I’m especially proud of:

  • I solved a thorny issue with a sometimes camouflaged, sometimes unreliable delete button by ripping it out and making it so you could click anywhere on the object being deleted to delete it (here’s the PR if you want more details).
  • I resolved the issue of a laggy metronome in React-Native, where computationally expensive operations can delay scheduled tasks. The app also happened to need to play backing tracks at various multiples of their original speed. So I recorded a sound file of a metronome at 120 BPM, calculated what speed to play it at by dividing the BPM the musician requested by 120, and wrote one method that shelled out to expo-av to play accurate-to-tempo sound files for both the metronome and the backing tracks (there’s a sketch of the arithmetic after this list).
  • I resolved an issue of regulating how often a person should be allowed to log their mood by realizing the issue was fake (an assumed foil to “let’s make sure they log it at least this often”) and letting them log as often as they want (more details here).
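
To make the arithmetic in the metronome bullet concrete, here’s a sketch in Python (the real code lived in React Native and shelled out to expo-av; playback_rate is a hypothetical name, not from the original app):

def playback_rate(requested_bpm, recorded_bpm=120):
    # The metronome file was recorded at 120 BPM, so playing it back at
    # requested_bpm / recorded_bpm of its original speed clicks at the
    # requested tempo
    return requested_bpm / recorded_bpm

playback_rate(150)  # 1.25: play the recording 25% faster
playback_rate(90)   # 0.75: play it at three-quarters speed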

Each of these solutions made the code cleaner and eliminated multiple problems with the product. But they were not genius solutions. In fact, they were easy to miss precisely because complex problems don’t scream “I have a bonehead solution.”

One drawback: the tech industry generally rewards churning out tons of code over elegant solutions. Chen Lin recently tweeted about his experience with this at Uber. Chen is responding to this commentary, which starts out salty but goes on to articulate a rarely-discussed element of how the tech industry functions: tech executives have to make their work appear innovative in order to secure VC funding. So they’re forced to artificially “make it innovative” by filing patents or hopping on the latest trends. In the case of Uber and iOS development, that imperative burned them badly.

Academia is similar: Dr. Jeremy Watt, author of Machine Learning Refined, articulates how academics have to make their work appear novel in order to get published, and they have to get published to keep their jobs, so they’re forced to artificially differentiate similar concepts—to convolute rather than simplify.

There’s a great explanation from Mekka Okereke about how to counteract some of this as an engineer. It shares a lot with our “first, establish the motivation” approach: namely, starting from the problem, not the solution.

Also, a giant caveat: this kind of solution is not always there. Sometimes the solution just has to be clever, or complicated, or messy. Sadly, I can’t give you an ironclad heuristic for finding bonehead solutions or for confirming that one exists. I can tell you this: in most cases where I have found one, I found it by popping up a level from the implementation to the design.

Also, this recommendation applies specifically to technical solutions: it does not apply, for example, to racism or fascism. The problems we’re talking about here are de facto circumscribed by the scope of an application. When we’re talking about a generational cultural, political, and structural phenomenon, “just rip out X” doesn’t work.

White supremacy culture in tech alone is so insidious and multifaceted that I’ve got an ongoing series about reimagining different parts of tech through a not-white-supremacist lens. But also, listen to Marco Rogers about this caveat. He knows more things than I do.

ANYWAY, anyway, anyway.

So we’ve talked about selling our changes, and we’ve talked about making changes that reduce the scope of complexity. Naturally, the third thing I find myself doing is selling new knowledge by reducing the scope of complexity. Here we go:

Try to spread intuition with as few prerequisites as possible.

One of my Mobile Software Development students asked me what frameworks he should consider for testing a Python app he’s working on. Here’s what I said:

In Python there are two leading options: unittest and pytest. Both work fine. When I’m working on teams, I tend to go with pytest because the output is a little nicer. 

But…when I’m teaching Python, I don’t use a testing framework at all. I write methods that call the method I’m testing, and then I use Python’s built-in assert statement to check the state of the modified objects or return values. Reason being, I don’t want to jump into the complexity of all the different stuff you can use to test until students understand why we test. I try to use the minimum toolset that gets the point across.

One of the big things that educational approaches manage to screw up, in my view, is introducing prerequisites that make the topic in question harder instead of easier. Fluent Forever founder Gabriel Wyner articulates how language learning often depends on translation—which is a whole, separate, difficult skill. So when I’m trying to help someone build intuition, I’ll try to eliminate as many dependencies as possible.

I’ll show you an example for teaching testing in Python. I pulled this example out of a notebook that I share with my Emergent Works and Justice Through Code mentee. She was working on writing a Sudoku solver. Of course, to solve a Sudoku, it helps to know when a Sudoku is solved.

Suppose we start with a similar but simpler puzzle: each row and each column must contain exactly one copy of each number, like this:

puzzle = [
    [1, 2, 3, 4],
    [2, 1, 4, 3],
    [3, 4, 1, 2],
    [4, 3, 2, 1],
]

This is a valid puzzle, albeit smaller than your typical 9 x 9.

It shows a way of representing our problem space—a list of lists. We could make a Puzzle class for this, with all kinds of methods on it. I’m not doing that yet. I’m doing the minimum thing that gets the point across: we need a way to represent our data. Here is a way to represent our data with minimal dependencies and indirection.

We’re gonna do the same thing for the test, no test framework involved (yet!):

def valid_puzzle(puzzle):
    try: 
        # Python is duck-typed, so here we check that the input is the type we expect
        is_a_list = type(puzzle).__name__ == "list"
        assert(is_a_list)
    
        # Check that the puzzle is a square
        number_of_rows = len(puzzle)
        number_of_columns = len(puzzle[0])
        assert(number_of_rows == number_of_columns) # Same width and height
        assert(len(set([len(row) for row in puzzle])) == 1) # Every row has same length
    
        # Check the numbers used for the solution 
        for row in puzzle:
            assert(len(set(row)) == len(row)) # Only one of any given number in the row
            
            # Numbers range from 1 to however big the row is
            assert(max(row) == len(row)) # In a 4-item row, the biggest number is 4, etc
            assert(min(row) == 1) # The smallest number is always 1
        
        # Since we checked the maximum and minimum number for the rows,
        # and since we checked that the data is a square,
        # we can assume the max number in the columns and don't have to check it.

        # Check that there's only one of any given number in the column
        for index in range(len(puzzle)):
            column = []
            for row in puzzle:
                column.append(row[index])
            assert(len(set(column)) == len(column))
        
        return "This solution is valid!"
    except:
        return "Sorry! Not a valid puzzle solution."

What don’t we care about? We don’t want to assert that the puzzle is 9×9 (and in fact our current 4×4 example is too small to pass if we assert that). So instead we make the checker work for any size of puzzle:

bigger_puzzle = [
    [1, 2, 3, 4, 5, 6, 7],
    [1, 2, 3, 4, 5, 6, 7],
    [1, 2, 3, 4, 5, 6, 7],
    [1, 2, 3, 4, 5, 6, 7],
    [1, 2, 3, 4, 5, 6, 7],
    [1, 2, 3, 4, 5, 6, 7],
    [1, 2, 3, 4, 5, 6, 7],
]
valid_puzzle(bigger_puzzle)
>> Sorry! Not a valid puzzle solution.

It works at standard size, too.

ninex9_puzzle = [
    [3, 1, 6, 5, 7, 8, 4, 9, 2],
    [5, 2, 9, 1, 3, 4, 7, 6, 8], 
    [4, 8, 7, 6, 2, 9, 5, 3, 1], 
    [2, 6, 3, 4, 1, 5, 9, 8, 7], 
    [9, 7, 4, 8, 6, 3, 1, 2, 5], 
    [8, 5, 1, 7, 9, 2, 6, 4, 3], 
    [1, 3, 8, 9, 4, 7, 2, 5, 6], 
    [6, 9, 2, 3, 5, 1, 8, 7, 4], 
    [7, 4, 5, 2, 8, 6, 3, 1, 9] 
]
valid_puzzle(ninex9_puzzle)
>> This solution is valid!

So we have a solution. Now, let’s talk about the problems with our solution. What sucks about it?

  • Well, if a solution isn’t valid, the error message doesn’t tell you which assertion failed or why.
  • If it throws an error besides an AssertionError, you will not know.
  • Once we have defined our valid_puzzle function, we have to explicitly call it somewhere to get it to run.

This is where I introduce the idea of a test framework. Both pytest and unittest include features to:

  • Automatically run all the tests in a test class
  • Bubble up the exception if the code being tested throws one
  • Explain, if a test fails, what the test was looking for and what it got instead

Then we talk about how to make all these things happen.

To automatically run all the tests in a test class, we can use one of Python’s data model methods to get the list of methods and attributes, filter out the attributes with another data model method, then call every method. Here’s a blog post that demonstrates almost that exact operation. But wait! We don’t want to call every method on the class—that would include all the special methods! How do we get around that? Well, one way to do it is to put the same prefix at the start of every test method’s name, and filter for that. Now students understand why a test framework might require a test method name to start with “test”!
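
Here’s a minimal sketch of that runner. TestPuzzle and run_tests are hypothetical names, and the test methods lean on the valid_puzzle function and puzzles we defined above:

class TestPuzzle:
    def test_accepts_a_valid_puzzle(self):
        assert valid_puzzle(puzzle) == "This solution is valid!"

    def test_rejects_repeated_columns(self):
        assert valid_puzzle(bigger_puzzle) == "Sorry! Not a valid puzzle solution."

def run_tests(test_class):
    instance = test_class()
    # dir() lists every attribute name on the instance, including special
    # methods like __init__, so we filter for our agreed-upon "test" prefix
    for name in dir(instance):
        if name.startswith("test"):
            method = getattr(instance, name)
            if callable(method):  # skip plain attributes that happen to match
                method()
                print(name, "passed")

run_tests(TestPuzzle)
>> test_accepts_a_valid_puzzle passed
>> test_rejects_repeated_columns passed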

To bubble up the exception, we’d have to catch AssertionError specifically in that except block up there. Then we could either handle other kinds of errors with a second except block, or just let them propagate up to the caller.
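
As a sketch of what that looks like (valid_puzzle_v2 is a hypothetical name, with the body of checks elided):

def valid_puzzle_v2(puzzle):
    try:
        # ...all the same assert checks as valid_puzzle above...
        assert type(puzzle).__name__ == "list"
        return "This solution is valid!"
    except AssertionError:
        # Only a failed assert means "invalid puzzle"
        return "Sorry! Not a valid puzzle solution."
    # Any other error (a TypeError, an IndexError) now propagates to the
    # caller instead of disappearing into a bare except.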

To provide clearer output, we’d need to get creative with how we store the values we pass into the asserts, so that they can be fished back out and printed when we catch the AssertionError.
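
The lowest-tech version of this is the optional message argument that Python’s assert already accepts, shown below on our square check. (Real frameworks get fancier; pytest, for example, rewrites the assert itself so it can report the values for you.)

number_of_rows = len(puzzle)
number_of_columns = len(puzzle[0])
# If this check fails, the AssertionError carries the message, so whoever
# catches it can print which check failed and with what values
assert number_of_rows == number_of_columns, (
    f"Expected a square puzzle, got {number_of_rows} rows "
    f"and {number_of_columns} columns"
)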

This is where I can introduce testing libraries. Because we have taken the time to develop intuition about how these libraries work, they no longer sound like magic. Students understand how they work because they implemented a simple one and then, at a minimum, talked through how a real one would need to work. pytest is no longer some kind of enigmatic genius box. I find that this approach doesn’t just transfer knowledge—it helps new Python developers build their confidence and their skills.

Whatever multiplier I have as a developer, I think I get from doing these three things.

I sell my solutions by engaging with others first on the motivation for them. I try to find bonehead solutions to convoluted issues. And I try to eliminate barriers to entry when I’m helping others build intuition.

If you liked this piece, you might also like:

This workshop on analyzing risk in a software system

The ongoing advanced programming with Python series

This lively talk about what counts as a programming language!
