Machine Learning Intuition: Understanding Taylor Series Approximation

We have talked before about the intuition behind cost function optimization in machine learning. We took a look at where cost functions come from and what they look like. We also talked about how one might get to the bottom of one with gradient descent.

When you open a machine learning textbook, you’ll see much more math than we used in that introduction. But the math is, in fact, important; the math gives us tools that we can use to quickly find the minima of our cost functions.

We talked about derivatives in the last post, and we’re talking about Taylor series in this post. Both of these tools come from calculus and help us identify where to find the minima on our cost functions.
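To give a flavor of what a Taylor series does, here is a minimal sketch (not from the post itself) that approximates e^x by summing the first few terms of its Taylor series at zero; the more terms we keep, the closer the approximation gets:

```python
import math

def taylor_exp(x, n_terms):
    # Approximate e^x with the first n_terms of its Taylor series at 0:
    # e^x ≈ sum over k of x**k / k!
    return sum(x ** k / math.factorial(k) for k in range(n_terms))

# With 10 terms, the approximation at x = 1 agrees with e
# to roughly six decimal places.
print(round(taylor_exp(1.0, 10), 6), round(math.e, 6))
```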

Continue reading “Machine Learning Intuition: Understanding Taylor Series Approximation”

Machine Learning Intuition: Using Derivatives to Minimize the Cost Function

We have talked before about the intuition behind cost function optimization in machine learning. We took a look at where cost functions come from and what they look like. We also talked about how one might get to the bottom of one with gradient descent.

When you open a machine learning textbook, you’ll see much more math than we used in that introduction. But the math is, in fact, important; the math gives us tools that we can use to quickly find the minima of our cost functions.

We’re going to talk about two of those tools: derivatives (in this post) and Taylor series (in the next post). Both of these tools come from calculus and help us identify where to find the minima on our cost functions.
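As a taste of how a derivative helps minimize a cost function, here is a minimal gradient descent sketch on a toy quadratic cost (an illustration, not code from the post): at each step we move opposite the derivative, so we walk downhill toward the minimum.

```python
def cost(w):
    # A toy cost function with its minimum at w = 3.
    return (w - 3) ** 2

def derivative(w):
    # The derivative of the cost tells us which way is uphill.
    return 2 * (w - 3)

def gradient_descent(w, learning_rate=0.1, steps=100):
    # Repeatedly step against the derivative's sign,
    # which moves w toward the cost function's minimum.
    for _ in range(steps):
        w -= learning_rate * derivative(w)
    return w

# Starting from w = 0, descent converges very close to 3.
print(round(gradient_descent(0.0), 4))
```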

Continue reading “Machine Learning Intuition: Using Derivatives to Minimize the Cost Function”

One-Page Notes: Weapons of Math Destruction, by Cathy O’Neil

Folks ask me about the dangers of trusting computer-generated algorithms and artificial intelligence. The conversation usually brings up a future scenario in which the machines outsmart humans.

But there’s a more immediate problem: we trust machines to build algorithms from the incomplete or biased data we feed them, and those algorithms perpetuate poor and unfounded decisions under a veneer of scientific objectivity, simply because a computer made the call.

Continue reading “One-Page Notes: Weapons of Math Destruction, by Cathy O’Neil”

One-Page Notes: On Intelligence, by Hawkins and Blakeslee

I was researching Numenta, a company with a mission to reverse-engineer the human neocortex in software. One of the engineers gave a talk introducing Numenta’s software at OSCON 2013. He offered a couple of resources for audience members who wanted to learn more about the science: a white paper on hierarchical temporal memory and a book called On Intelligence: How a New Understanding of the Brain Will Lead to the Creation of Truly Intelligent Machines. In the book, Jeff Hawkins and Sandra Blakeslee summarize a hypothesis on the structure and function of the neocortex. As I read through the book, I took notes along the way and synthesized what I was learning in pictures. The result fit onto one page:

Continue reading “One-Page Notes: On Intelligence, by Hawkins and Blakeslee”

Machine Learning, Part 2: Classification

I’m working my way through a Coursera specialization on Machine Learning. The specialization includes several courses, the first of which provides a high-level overview of ML. I finished that one before I began talking about the coursework on my blog because I didn’t want to identify myself as a student of machine learning until I had actually followed through on something.

Going forward, I’ll share a post on each of the in-depth classes in the specialization. The first in-depth class was called Regression, and you can find my post about that course right here. The second class is called Classification, and this post will cover key concepts from the course as well as my observations and any supplementary readings that help me understand the material.

*Publishing unfinished. I’ll update this post as the course continues.*

Continue reading “Machine Learning, Part 2: Classification”

OpenGov Hack Night: Dan Platt and Craig Booth on Chicago Crime Stories

Tonight at OpenGov Hack Night in 1871, Dan Platt and Craig Booth of Narrative Science came in to give us a tour of ‘Chicago Crime Stories,’ which the Eventbrite description calls:

an application utilizing Chicago open data to produce narrative on crime in any Chicago neighborhood.

The app takes mountains of data about Chicago crime and presents a story for whichever neighborhood you specify on the homepage.

Continue reading “OpenGov Hack Night: Dan Platt and Craig Booth on Chicago Crime Stories”

Windy City Rails: Mark Menard on Upfront Design

“You’re not a junior developer anymore: you’re here, at a conference…you have a right to change the code, and that includes its design.”

When Mark Menard stood up today at Windy City Rails to give his talk, “Let’s Do Some Upfront Design,” he examined the refactoring of code, including tests, and brought us to the realization that “All code is an impediment to change.” How do we stop that from happening?


Continue reading “Windy City Rails: Mark Menard on Upfront Design”
