I was researching Numenta, a company with a mission to reverse-engineer the human neocortex in software. One of the engineers gave a talk that introduced Numenta’s software at OSCON 2013. He offered a couple of resources for audience members who wanted to learn more about the science: a white paper on hierarchical temporal memory and a book called On Intelligence: How a New Understanding of the Brain Will Lead to the Creation of Truly Intelligent Machines. In the book, Jeff Hawkins and Sandra Blakeslee summarize a hypothesis on the structure and function of the neocortex. As I read through the book, I took notes along the way and synthesized what I was learning in pictures. The result fit onto one page:
I doubt that these notes are coherent enough to serve as a Cliff’s Notes for the book itself, but they help me to synthesize and remember what the book is saying.
Since the advent of the Turing Test, the software engineering community has broadly taken “intelligence” in machines to mean “indistinguishability from humans.” We have seen this definition represented in pop culture, from R2D2 and C3PO to the 2013 movie Her. The problem is that if we define machine intelligence that way, then indistinguishability becomes the goal of machine intelligence. And that goal isn’t especially useful.
We don’t need machines to do what humans do; we have humans to do what humans do. Instead, intelligent machines can fill in where humans fall short: solving complex calculations, overcoming cognitive biases, and reaching conclusions quickly.
Hawkins and Blakeslee favor a more specific, less anthropocentric definition of intelligence: the capacity to remember and predict patterns in the world. The neocortex forms its predictions about the world by drawing analogy to patterns it has seen in the past.
A simple example goes something like this: I saw a door here last time I came through this room, so I predict there should be a door here again.
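To make the memory-and-prediction idea concrete, here is a toy sketch in Python. This is my own illustration, not Numenta’s HTM algorithm: a trivially simple memory that records which observation followed each context, then predicts the most common follower.

```python
from collections import defaultdict, Counter

class SequenceMemory:
    """Toy memory-prediction sketch: remember which observation followed
    each context, then predict the most frequent follower. A crude
    stand-in for the idea in the book, not Numenta's HTM."""

    def __init__(self):
        self.followers = defaultdict(Counter)

    def observe(self, sequence):
        # Remember each (context -> next observation) transition.
        for context, nxt in zip(sequence, sequence[1:]):
            self.followers[context][nxt] += 1

    def predict(self, context):
        # Predict the observation that most often followed this context.
        counts = self.followers.get(context)
        if not counts:
            return None  # no memory of this context, so no prediction
        return counts.most_common(1)[0][0]

memory = SequenceMemory()
memory.observe(["hallway", "door", "room", "hallway", "door", "room"])
print(memory.predict("hallway"))  # prints "door"
```

Having walked from the hallway through a door before, the memory expects a door again; for a context it has never seen, it makes no prediction at all.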
A more obscure analogy, something we might describe someone as intelligent for noticing, goes something like this: I want to find a way to teach this concept. I have never seen it taught effectively before, but I saw a different concept taught effectively through an interactive activity. I wonder if I can adapt that interactive approach to teach my concept.
We think of creativity as a uniquely human characteristic, but this definition of intelligence allows for machines to be creative, too. Creativity comes from the capacity to draw more distant, obscure analogies and use them to express ideas with a new perspective. Can you identify the analogies being drawn in the three creative works below?



We don’t yet have scientific evidence to support everything in Hawkins and Blakeslee’s hypothesis; the authors readily admit that several of its details will probably turn out to be inaccurate or incorrect. But the overarching idea gives us a framework for understanding what the neocortex is doing, and it offers an account of how the neocortex does it that feels achievable to model in software.
Even if it isn’t exactly how the human brain makes us human, it gives us a way to build machines that can remember and predict much the same way humans do.