Believe it or not, academic mathematicians are as likely to have seen APL on a blackboard in graduate school as you are to have seen it on a computer screen. APL didn’t start out as a computer language at all.

Instead, Kenneth Iverson pioneered its use in the 1960s as a shorthand for mathematical expressions. In the same way that XML extends HTML, you can think of APL as an extension of the canonical set of mathematical symbols.
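To give a taste of that notation (a standard textbook example, not one drawn from this article), the arithmetic mean of a vector can be written as a single APL expression:

```apl
⍝ X is a numeric vector; +/ sums its elements, ⍴ gives its length
X ← 1 2 3 4
(+/X) ÷ ⍴X    ⍝ 2.5
```

The density is the point: one line of symbols replaces a summation formula and a division, which is exactly the kind of compression Iverson was after on the blackboard.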

The ‘programming’ in its name has nothing to do with computer programming; Iverson explains it this way in his 1962 book on the language, A Programming Language:

Applied mathematics is largely concerned with the design and analysis of explicit procedures for calculating the exact or approximate values of various functions. Such explicit procedures are called algorithms or programs. Because an effective notation for the description of programs exhibits considerable syntactic structure, it is called a programming language.

Even though APL didn’t start out as a tool for computers, it has a bit of a cult following among programmers. And taking an interest in the language is by no means impractical: plenty of software still in production relies on APL, and there’s still a market for programmers who can write it.

So how did this odd-looking shorthand make the jump to computers, and from there how did it worm its way into the bits and bytes that make up today’s applications? This interview from the ’70s provides some interesting insight into what people thought about the language over 40 years ago.

I miss computers with blinking lights.