One form of PL nihilism is the claim that "all languages are the same." This is not so: general-purpose languages have largely converged on procedures, but languages that differ nontrivially are in productive use across computing.


C, Standard ML

Lexically scoped, procedural languages with multiple arguments and one return value (and destination) have found great success; that languages as different as C and Standard ML converged on this shape surely vindicates the paradigm.

There is also a trick introduced by Fortran, allowing us to write

v := x ⋅ y + z ⋅ w

rather than

t0 ← x ⋅ y
t1 ← z ⋅ w
v ← t0 + t1

as one would with machine registers. This trick is now used in essentially every language.
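The equivalence can be sketched in Python (the function names are mine): both forms compute v = x ⋅ y + z ⋅ w, but only the second names its intermediates.

```python
# The "Fortran trick": write the computation as a single expression
# and let the compiler invent the temporaries.
def dot2_expression(x, y, z, w):
    return x * y + z * w

# What the expression lowers to: register-style code where every
# intermediate value gets an explicit name.
def dot2_temporaries(x, y, z, w):
    t0 = x * y
    t1 = z * w
    v = t0 + t1
    return v
```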


SQL

SQL is substantially different from imperative or functional languages: one defines queries rather than a series of steps, a style rooted in relational algebra. Notably, Datalog was an antecedent, putting logic programming to productive use.
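A minimal sketch using Python's built-in sqlite3 module (the table and column names are invented for illustration): the query states which result is wanted, not the steps for scanning rows.

```python
import sqlite3

# In-memory database with a toy 'employees' table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (name TEXT, dept TEXT, salary INT)")
conn.executemany(
    "INSERT INTO employees VALUES (?, ?, ?)",
    [("ada", "eng", 120), ("bob", "eng", 100), ("cyd", "ops", 90)],
)

# Declarative: we state the result we want -- average salary per
# department -- and the engine chooses how to compute it.
rows = conn.execute(
    "SELECT dept, AVG(salary) FROM employees GROUP BY dept ORDER BY dept"
).fetchall()
```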

Regular Expressions

Regular expressions were discovered by Kleene while considering a question with biological roots, the behavior of nerve nets: what inputs would such an organism accept?

So regular expressions were not designed as a programming language; Kleene's work was firmly in mathematics, notation included. Nevertheless, regular expressions are a language: one that defines pattern-accepting machines.

Regular expressions are so superior for defining input scanners that lexer generators generate code in (say) C from regular expressions; general-purpose languages are far more cumbersome for this task.
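A sketch in Python rather than generated C (the token names are invented): a handful of regular expressions constitute the entire scanner specification, which would be far more cumbersome as hand-written loops.

```python
import re

# Each token class is one regular expression; the alternation of
# named groups is the whole scanner.
TOKEN_RE = re.compile(r"""
    (?P<NUMBER>\d+)
  | (?P<IDENT>[A-Za-z_]\w*)
  | (?P<OP>[+*])
  | (?P<SKIP>\s+)
""", re.VERBOSE)

def tokenize(text):
    """Yield (kind, lexeme) pairs for every non-whitespace token."""
    for m in TOKEN_RE.finditer(text):
        if m.lastgroup != "SKIP":
            yield (m.lastgroup, m.group())

tokens = list(tokenize("x1 + 42 * y"))
```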

Interestingly, one does not need the Fortran trick here: a regular expression is already a single expression, with no intermediate results to name.



Concatenative Languages

Joy does not require lexical scoping and allows multiple return values. A thoroughly concatenative style also has no need for the Fortran trick. Notably, it was subsequently discovered that in concatenative programming, pattern matching is the inverse of construction.
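A tiny stack interpreter in Python (the word set and representation are my own invention) shows why no temporaries are needed: intermediate values simply accumulate on the stack.

```python
# A minimal concatenative evaluator: a program is a sequence of
# words, each of which transforms the stack.
def run(program, stack=()):
    stack = list(stack)
    for word in program:
        if callable(word):
            word(stack)
        else:
            stack.append(word)  # literals push themselves
    return stack

def mul(s):
    b, a = s.pop(), s.pop()
    s.append(a * b)

def add(s):
    b, a = s.pop(), s.pop()
    s.append(a + b)

# x y * z w * +  computes x*y + z*w with no named temporaries.
result = run([2, 3, mul, 4, 5, mul, add])
```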

Though theoretically interesting, concatenative languages remain obscure.


APL

As I pointed out, APL is truly different because of its forks and trains; in the J dialect one can write avg =: +/ % #, for instance.

Such tacit style can be rewritten into an explicit style with typical lexical scoping, but in any case it is a truly heterodox way to write code.
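The rewrite into explicit style can be sketched in Python (the combinator name fork is mine): a fork (f g h) applied to y computes g(f(y), h(y)), so +/ % # is sum divided by length.

```python
from operator import truediv

# A J-style fork (f g h): applied to y, it computes g(f(y), h(y)).
def fork(f, g, h):
    return lambda y: g(f(y), h(y))

# avg =: +/ % #  --  the sum of a list divided by its length.
avg = fork(sum, truediv, len)
```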