I used to work with a number of LISP machine believers at the MIT AI Lab/CSAIL. They all had more modern computers for day-to-day tasks, but used the lispm for most of their programming. This wasn't that long ago (I left in 2010), and I suspect that those machines will remain in active use for as long as people can keep them running.
They all believed that the loss of the lisp machine was a serious loss to society and were all very much saddened by it. I never used the system enough to come to my own conclusions in that regard, but it was interesting food for thought. As somebody for whom Linux/POSIX is very deeply entrenched, would I even recognize a truly superior system if it was dropped in my lap? More importantly, would society in general? The superior technology is rarely the "winner".
It was by far the most productive programming environment I have ever used. The level of integration of the editor, debugger, IO system, and interpreted and compiled code is unparalleled. Interestingly it philosophically descended from MACLISP development on a machine (PDP-10) that was designed with Lisp in mind and that had an O/S (ITS) whose "shell" was a debugger, so you could also do pretty tightly coupled development with EMACS (in TECO) and your code in a mix of interpreted and compiled Lisp. In theory this deep level of integration need not be Lisp-specific, but I haven't seen it that often.
The closest I've used were the three environments at PARC when I was there: Smalltalk, Mesa/Cedar and Interlisp-D. When I use Xcode or Eclipse I feel removed from the machine. In these other environments I felt simultaneously able to think at a higher level and yet more tightly coupled to the hardware.
I've used various GNU Emacs modes and the coupling between them and the runtime environment is not tight enough. Today I use SLIME+SBCL and it's OK. It too lacks the tight coupling of the lispm. However for production we'll end up re-coding in C++ for performance.
PS: A good friend of mine scorns the lispm-style of development as "programming by successive approximation." There's some truth in that.
Well, with exploratory programming you tend to build up a couple of data structures, add some functions to manipulate them, and then extend out from there. In a more structured way you think up front about a lot more things. In the second you probably doodle some stuff out on the whiteboard or on paper before coding; in the first you probably simply start with an empty buffer and type straight into the REPL. There's a time and place for each and certainly not a well-defined dividing line between the two (except in some highly structured UML/TDD processes which may not even exist any more outside aerospace).
For example, I talked about the Lisp implementation of the code I'm working on: for deployment it looks like it'll be an implementation in C++, informed by what we end up learning about performance and by the implementation decisions that were particularly good or particularly bad (or that we iterated on several times before settling on something good), and one that handles memory management more directly.
It was fun, although at that point in my life I was not comfortable with statically typed languages, so it was good for me as well.
I really just experimented in it and the (more welcoming to me) Smalltalk environment. I used InterLisp-D as my "day job" language (actually we implemented 3-Lisp in it, with some custom microcode).
BTW there was a good paper from the Mesa group which I can't find online (my copy must be buried in a box someplace) comparing the performance of counted strings vs delimited strings (e.g. [3, 'f', 'o', 'o'] vs ['f', 'o', 'o', \0] in C syntax). According to the paper the counted strings were much faster. All three languages (Smalltalk, Mesa and Lisp) used counted strings.
Thankfully Rich Hickey & co wrote Clojure, so we can program in a modern Lisp on the Java Virtual Machine and in the browser (ClojureScript); even the .NET CLR is supported!
Though it kind of sucks that Tail Call Elimination is such a difficult task on the JVM.
Scheme kind of gets you thinking in a way that works iteratively, but gets expressed recursively. It's easy to read, and can make for some great optimisation without being premature.
The JVM does not really support this style of programming - despite LISP's syntax leading towards it.
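To make that concrete, here's a minimal Clojure sketch (the function names are made up): since the JVM won't eliminate even a self tail call, Clojure makes you request the jump explicitly with loop/recur.

    ;; Naive recursion: (+ n ...) is not a tail call, and the JVM would
    ;; push a frame even if it were, so large inputs blow the stack:
    (defn sum-to [n]
      (if (zero? n)
        0
        (+ n (sum-to (dec n)))))

    ;; The Clojure workaround: loop/recur compiles to a goto, so this
    ;; runs in constant stack space:
    (defn sum-to* [n]
      (loop [i n, acc 0]
        (if (zero? i)
          acc
          (recur (dec i) (+ acc i)))))

    (sum-to* 1000000) ;=> 500000500000

The compiler does verify that recur sits in tail position, which is a nice consolation prize, but it only covers self-calls, not general tail calls.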
I wouldn't say that it was appropriate there, either.
I really wish I could get my hands on a LISP Machine.
I love the idea of LISP being so close to the metal, but that power means some design tradeoffs.
Scheme makes sense with TCE.
I'm not sure InterLISP and the like need it - memory is limited and you work without so many of the system overheads I'm used to with modern systems. Iteration is less costly there, and that makes it easier to avoid recursive designs (which are more expensive).
In simple terms:
Why worry about blowing up a stack you don't need, when you've thought long and hard before you allocated it?
The designers argued that a million-line system (the OS + basic applications) was easier to debug/develop without having TCO everywhere. It makes stack traces useless, unless one thinks of clever ways to keep tail calls recorded, which makes it complex/tricky. The basic machine architecture is a stack machine with compact and nice instructions; stack traces were useful then. The compiler also was not very sophisticated when it comes to optimizations.
I've long thought the right thing for Common Lisp would be to provide a way to declare sets of functions such that any tail call from one member of the set to another would be TCO'd; all other calls would push stack. This lets you write sets of mutually tail-recursive routines without forcing TCO on the entire system.
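Clojure's trampoline is an existing opt-in mechanism in roughly this spirit: each function in the set returns a thunk instead of making its tail call, and trampoline bounces between them on constant stack. A minimal sketch, with hypothetical my-even?/my-odd?:

    (declare my-odd?)

    (defn my-even? [n]
      (if (zero? n) true #(my-odd? (dec n))))

    (defn my-odd? [n]
      (if (zero? n) false #(my-even? (dec n))))

    (trampoline my-even? 1000000) ;=> true, without a million stack frames

The difference is that trampoline makes the callers pay (a closure per hop) rather than teaching the compiler about the set, which is what the proposed declaration would do.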
It's an entirely reasonable tradeoff, which I understand. But not having TCO is a strong negative from the language perspective, IMHO. Not an impossibly bad one, but TCO is really nice to have.
Funny enough, it seems JavaScript will get TCO before Java. JavaScript, for f*ck's sake! This will give even more ammunition to the people who claim JavaScript is 'Scheme in C's clothing'.
However, a named let in Scheme is not a loop. It's still a lambda.
Which means you can have nested lets with TCE, and you can construct them on the fly, depending on what your needs are, using patterns like currying.
That flexibility just doesn't work on the JVM. You can force it, but it'll be slow and horrible compared to another pattern - and I don't think a LISP should tell me how to do something. That's Python's ideology.
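Here's roughly what forcing it looks like in Clojure - a sketch with a made-up count-down: the local loop fn is still a first-class value, but each iteration has to return a thunk for trampoline to bounce on, so you pay a closure allocation per step.

    (defn count-down [n]
      (letfn [(step [i]
                (if (zero? i)
                  :done
                  #(step (dec i))))]  ; a thunk stands in for the tail call
        (trampoline step n)))

    (count-down 1000000) ;=> :done, no stack overflow

In Scheme the named let would just be (let loop ((i n)) ...) and the recursive call would be a jump; no thunks, no driver loop.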
I believe the whole point was to target the JVM, because of reuse and maturity. And actually it's not stuck in just the Java ecosystem; Clojure got wings years ago. You can find it inside a browser today too :)
I wonder if there has ever been talk of a native Clojure? I guess it may not be very usable without the JVM ecosystem, though. Frankly, I find JVM library calls from Clojure quite ugly; they really stand out in the code (mostly because of the mix of Clojure's lower-case-dash-delimited variable and fn naming convention and Java's mixed-case/camelCase naming style).
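A small made-up example of the clash - kebab-case Clojure wrapped around camelCase interop:

    (import 'java.util.UUID)

    ;; lower-case-dash-delimited Clojure name, camelCase Java methods:
    (defn fresh-request-id []
      (.toUpperCase (.toString (UUID/randomUUID))))

It's all perfectly functional; it just never stops looking like two languages spliced together.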
The problem with all languages that decide to implement their own runtime, instead of building on top of the JVM or .NET ecosystems, is that their native code generation and GC implementations are always going to be worse.
Also there is the issue of having to implement the whole set of third-party libraries from scratch, just like PyPy and JRuby have issues using libraries that rely on the CPython or Ruby FFI.
So unless you get a set of developers really committed to going through the effort of making it succeed, everyone will ignore it.
The closest thing to a native Clojure is Pixie[1]. As the authors note, it's a "Clojure inspired lisp", not a "Clojure Dialect".
> Pixie implements its own virtual machine. It does not run on the JVM, CLR or Python VM. It implements its own bytecode, has its own GC and JIT. And it's small. Currently the interpreter, JIT, GC, and stdlib clock in at about 10.3MB once compiled down to an executable.
Yeah, but for many of us the Java ecosystem is a feature.
You only get tooling comparable to Visual VM or Mission Control in commercial Common Lisps.
Also I think many that bash Java don't realise it is the only language ecosystem that matches C and C++ in availability across OSes, including many embedded ones.
It is a consequence of being an enterprise language.
I imagine you never had the pleasure of doing enterprise distributed computing projects via CORBA, DCOM, SUN-RPC, DCE in C, C++, Visual Basic and Smalltalk.
Guess where those enterprise architects moved on.
EDIT: Should have mentioned Delphi and Objective-C as well.
Indeed. It's actually my preferred language for writing code. It may not have as many libraries, and it might not be as mature as, say, CL, but it's just so pleasant to program in.
The blub paradox would indicate that you wouldn't recognize it.
Richard P. Gabriel's famous "Lisp: The Good News, The Bad News, And How To Win Big" (aka "Worse is Better") discusses this very thing. I'd recommend reading it.
But frankly, it's hard to say that an environment is objectively better. What one person may view as a step up, another may view as a step down, and we all have a kneejerk reaction to unfamiliar environments. The lispm is a really nice environment, to be sure, but I'll likely never know if it's better. Emacs will have to be good enough (which it certainly is).
There are still significant things on the list below that you can't do with Linux, C, etc. So, yeah, I'd say a modern version of Genera would give you a worthwhile experience.