A thought that struck me in the swimming pool today.

Lazy lists are another kind of late-binding.

Everyone hypes lazy lists by saying “wow! you can even have infinitely long lists!”

To which the natural response is, “So what? The universe is finite and my life and this program are certainly finite, why should I care?”

But, instead think about this. “Infinite lists” are really “unbounded lists” are really “when you create them, you don’t know how long you want them to be” lists.

It’s another part of the program, the “consumer” of the list, which knows how long you want them to be.

For example, I’m currently generating patterns in Clojure + Quil. Here’s one which is basically formed by filling a grid with a number of tiles.


The important thing here is that, although I know I want a repeated pattern of different tiles, I don’t need to know how many of these tiles in total I’m actually going to need to fill my grid, so I just create an infinite list, using

(cycle [tile1 tile2 tile3 tile4])

Then I have a function which zips the list of tiles with a list of positions to be filled. The list of tiles is infinite, but the list of positions on the grid isn’t, so the zipping ends up with a finite grid of tiles.
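To make that concrete, here’s a minimal sketch of the idea (the tile keywords and the grid dimensions are placeholders for this example, not my actual Quil code):

```clojure
;; An unbounded, lazy stream of tiles — no length decided here.
(def tiles (cycle [:tile1 :tile2 :tile3 :tile4]))

;; A finite list of [x y] grid positions — this is the "consumer"
;; that knows how many tiles are actually needed.
(def positions
  (for [y (range 3) x (range 4)] [x y]))

;; Zipping the infinite tiles against the finite positions
;; yields exactly one [position tile] pair per grid cell.
(def grid (map vector positions tiles))

(count grid)  ;; => 12
```

The length of the result is determined entirely by `positions`; `tiles` never had to commit to one.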

In a sense, this is yet another example of “late-binding” (ie. leaving decisions as late as possible). It allows the names of data-structures to stand for things (data-structures) whose shape and extent are determined much later.

My Quora answer to the question “Is it still reasonable to say mainstream languages are generally trending towards Lisp, or is that no longer true?”

Based on my recent experiments with Haskell and Clojure.

Lisp is close to a pure mathematical description of function application and composition. As such, it offers one of the most concise, uncluttered ways to describe graphs of function application and composition; and because it’s uncluttered with other syntactic constraints it offers more opportunities to eliminate redundancy in these graphs.

Pretty much any sub-graph of function combination can be refactored out into another function or macro.
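As a toy illustration (the names here are invented for the example): suppose the same sub-graph of calls — sort descending, then take the first three — appears in two expressions.

```clojure
;; Before: the sub-graph (take 3 (sort > ...)) is duplicated.
(defn podium     [race-times] (take 3 (sort > race-times)))
(defn top-scores [scores]     (take 3 (sort > scores)))

;; After: because Lisp code is just nested function application,
;; the shared sub-graph lifts out directly as a composition.
(def top3 (comp (partial take 3) (partial sort >)))

(top3 [5 1 9 3 7 2])  ;; => (9 7 5)
```

No syntactic ceremony stands between the duplicated shape and the extracted function.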

This makes it very powerful, concise and expressive. And the more that other programming languages try to streamline their ability to express function combination, the more Lisp-like they will get.

Eliminating syntactic clutter to maximize refactorability will eventually make them approximate Lisp’s “syntaxlessness” and “programmability”.

In that sense, Paul Graham is right.

HOWEVER, there’s another dimension of programming languages which is completely orthogonal to this, and which Lisp doesn’t naturally touch on: the declaration of types, and describing the graph of type-relations and compositions.

Types are largely used as a kind of security harness, so the compiler or editor can check that you aren’t making certain kinds of mistakes, and can infer certain information, allowing you to leave some of the work to them. Types can also help the compiler optimise code in many ways: safer concurrency, compiling the code to machine-code with less of the overhead of an expensive virtual machine, and so on.

Research into advanced type management happens in the ML / Haskell family of languages, and perhaps Coq etc.

Ultimately programming is about transforming input data into output data, and function application and composition are sufficient to describe that. So if you think POWER in programming is just about the ability to express data-transformation, then Lisp is probably the most flexibly expressive language for doing that, and is therefore still at the top of the hierarchy, the target to which other programming languages continue to aspire.

If you think that the job of a programming language is ALSO to support and protect you when you’re trying to describe that data-transformation, then POWER is also what is being researched in these advanced statically-typed languages. And mainstream languages will also be trying to incorporate those insights and features.