Since the article hammers on about Cyc -- I architected Cyc at MCC, rebuilding the project from the bottom up and developing a layered architecture (which others then took in directions I hadn't imagined) that really emphasized the reusability and component architecture that was possible with the lispm (I had used their predecessors at MIT and at other research labs as well). I agree with the 100K LoC in a year, to an order of magnitude at least. It was definitely the most productive I have been in my life, but then again it was a research project, not an end-user package.
I had two machines in my office at MCC, one with a color monitor (in addition to the stock B&W), both with maxed-out RAM (8 MW; each word, I believe, was 40 bits: 36 bits of machine word + 4 of ECC?). I can't imagine how much it cost -- probably enough to buy a small house in Austin.
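For scale, assuming my recollection of 8 MW with 36 data bits per word is right, that works out to about 36 MB of usable memory per machine -- a quick back-of-the-envelope:

    ;; 8M words x 36 data bits, converted to megabytes
    ;; (assumes the 8 MW / 36-bit figures above are correct)
    (/ (* 8 1024 1024 36) (* 8 1024 1024.0))  ; => 36.0

An enormous amount for the era, trivial today.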
I never used the "complete" key or several of those far away ones as they were too distant from the home row. Tab completed as it had in Emacs for years (I started using Emacs in 1977 and still use it as my development environment). I still use the same "dynamic window" typing approach described in the article, but with a C++ app passing JSON data to a web app. You'll notice the article references Gene Ciccarelli -- Emacs began as his TECO init file around 1976 or so.
(I also used D-machines at PARC before moving to Cyc -- a very different experience).
> The whole point was to make an individual or a small group very productive and let them manage high complexity.
And I think it was very impressive.
In contrast, nowadays we seem to get the most interest and discussion here for projects that aim to make programming more accessible (beginner-friendly, child-friendly, end-user programmable) and/or more suitable for large-team and crowd programming (type systems, golang, git, ...)
Do we still try—do we even want—to design for individual programmer's productivity and mastery of high complexity?
Nowadays is not different from the old days. People say the same things about investing in programmer productivity now that they did thirty years ago.
There have been some truly great programming systems, and I've been lucky enough to work with some of them. They didn't grow out of any larger, industry-wide push for better programmer productivity, though, because there's never been any such industry-wide push, and they didn't result in any such push.
Rather, every once in a while, the right circumstances arise for the development of a great programming system--we get someone with a vision, and it's the right kind of vision, and for a while, for one reason or another, they have the support of someone with a sufficient budget, and sufficient influence over how it gets spent, and they're able to find the right people to realize the vision, and we get a great programming system. Then we get a Lisp Machine or a Smalltalk-80 or a SK8, and for a while we get a community of users who learn and appreciate this great programming system and come to rely on it. They may come to feel that certain truths of programming have been revealed, and that subsequent work will have to acknowledge those truths, and so progress has been made, and further progress will follow. I know I felt that way.
Then reality sets in. The great programming environment is a creature of its ecosystem. Technology moves on, and platforms with it. The great programming system depends on its ecosystem, is intertwined with it. Indeed, one of the things that makes it great is its ability to subsume and transform its ecosystem, to master it and render it easy for the programmer to master.
These same characteristics, though, mean that it's hard to move it to new ecosystems. The great programming environment is a large, complex project. It is time-consuming to port, the more so because you have to port enough of the ecosystem with it to make it compelling.
Because it's large and complex, it's hard to communicate what makes it great. Its greatness is not self-evident at a glance; in fact, it's challenging to describe. It's not about this feature or that; it's not about a list of features with checkboxes. It's about how the entire suite of features is designed to work together to empower human behavior.
Describing what makes it great is like trying to convey a sense of a complex object graph to someone. If you describe particular objects, your listener shrugs and says, "How is that different from X?" If you try to succinctly summarize the whole thing and its advantages, the listener says, "You're handwaving." They won't understand until they have learned and understood the entire graph, but whether or not to invest the effort to do that is precisely what they're trying to decide.
So your great environment loses its platform and needs to be on a new one. But the new one is inadequate precisely because it's new, and the environment is large and complex. You need investment of time and effort and therefore of money to bring it up on the new platform. To get that you need buy-in from someone who can budget for it. But the people who can budget for it are generally not programmers, and almost certainly not programmers familiar with your great environment, which means they do not understand why the investment might be worthwhile. They might be willing to listen, or even predisposed to trust you, but you have trouble explaining.
Meanwhile, a simple but just-good-enough compiler and standard library and text editor get built for or ported to the new platform for the cost of a few weekends' hobby time, and because it's there and it works, it gets used. Sure, it's utter crap compared to what you know is possible, but it works now, and people can make things with it now, and it's like what they've seen before, and they don't have to take a leap of faith and learn some weird and esoteric complicated system that somebody says is great while waving his hands.
Bear in mind that I actually do think that some of these great old programming systems are genuinely great and worth the time to learn, use, and even port or reimplement. But that doesn't change the dynamics that have kept us abandoning great old systems and starting over with stone knives and bearskins once a generation or so, and nothing I see now leads me to believe things will be any different in the future.
If you really want to work with one of these kinds of systems, your best bet is to work on one of these kinds of systems, as a hobby or a mission or whatever you want to call it.
Mm. I've been wondering recently why it is that programmers don't have more specialized tools (which incidentally reminds me of this lecture: Bret Victor -- Seeing Spaces: https://www.youtube.com/watch?v=klTjiXjqHrQ).
It's interesting to think that high-performance traders have an entire market dedicated to specialized computer interfaces, terminals, etc. all focused on improving their productivity and ability to reason about the stock exchange.
As the top poster says about the Bloomberg terminals:
""1. You have to have a Bloomberg keyboard to work in finance. It's not even a question. It's a COGS for anybody who wants to work in finance or trading.""
At the end of the day, the reason such a market hasn’t grown around programmers the way it has around financial instrument traders is because programmers are still in the “expense” column while traders are in the “income” column or at worst, the “cost of sales” column. Interestingly enough, many fintech companies tend to avoid this classification of their programmers. The closer programmers get to the incoming money spigot, the less they are treated as “expense” column line items.
The Symbolics systems are very nice but very niche. They only really work when you can ship a product with a complete supporting stack, down to the hardware.
You couldn't build an equivalent for web or app development, because the technology doesn't support an equivalent approach.
You certainly couldn't sell them to the public and leave them open, because they're too easy to break. You might have been able to sell a commoditised application layer with commoditised hardware to the public, but that would have required an extra few zeroes on the end of R&D and marketing costs.
You could argue that web technology should have started with an open approach like this. But there would have been consequences, not least of which in browser design, which would have gone from being a fairly trivial parser/renderer to a complete VM running on non-optimal consumer hardware, and the fact that without commoditisation, development systems would have cost five or even six figures.
The benefits might have taken a while to become obvious.
It's possible that functional VM web technology would have made the web work better after the initial take-up issues, and we'd have something sleek and elegant instead of the awful glued-together patchwork of mediocre hack technologies we have today. But it's also possible the adoption costs would have strangled the web at birth.
We'll never know, but IMO it's an interesting question.
This kind of competition is something that I'd really like to see being analysed in academic CS. It would be useful if the tradeoffs were understood theoretically instead of just being historical curiosities.
As CTO of my last company I tried to optimize for developer productivity. This goal requires careful shepherding of the team and of choices along the way. I find this attention to detail is often lost, or never striven for, in favor of other priorities. After a while, the team loses sight of what productive feels like.
TL;DR These lisp machine articles are interesting, in part, because they remind us what productive feels like.
"The NYT article talks about computer-supported work in general. I will explain how the Symbolics Lisp environment made the software developer more productive. The Symbolics Lisp Machine was a high-tech solution. There are also useful lower-tech solutions. Many Lisp programmers like the relative 'simplicity' of just an Emacs window with a 'slave' Lisp process (say, SLIME + Emacs). Some of these developers are using just a terminal and Emacs. It gives them an integrated environment that is both simple and effective. The Symbolics Lisp Machine environment is more complex. It has been designed to support the development of novel and complex software, especially AI software. The whole point was to make an individual or a small group very productive and let them manage high complexity. Artificial Intelligence software often was very complex (Cyc is an example for that)."
Somehow whenever Lisp is the topic we end up with writing like the above, hand-waving about some mysterious properties of Lisp that will bring us to programmer Nirvana.
It seems modern workstations have similar features to the described system, with much higher bloat, which certainly makes it impressive for its time. I wish the article had put more weight on the modern context and toned down the propaganda a bit.
It's not just Lisp, it's the lispm. This was true of Interlisp-D as well (the PARC Lisp environment that ran on the Dandelion (AKA the Star) as well as other D-machines). It had two factors:
1 - Lisp is a dynamic language -- even compiled code can be modified and superseded. This makes rapid prototyping really easy, it makes exploratory programming really easy, and it makes it easy to incrementally provide core improvements (see the sketch after these two points).
2 - As the entire system is in Lisp ("it's Lisp all the way down"), there are no barriers (technical or conceptual) between user code and system code. When I was developing Cyc I made the frame data structures (Units and Slots in the parlance) first-class objects that behaved exactly as if they were part of the base system distribution. They looked like a fundamental data structure, just like conses, arrays, integers, etc., which meant people could write code using them in ways I hadn't anticipated. Coupled with the introspective nature of Lisp, you could easily write patches or new functionality that took advantage of what was on the system. And the debugger was always running, so instead of a core dump you could poke around and see what happened with all the dynamic data intact. (In the D-machine case, on the project I worked on at PARC we actually added some instructions (wrote microcode) to the machine for this purpose.)
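Neither factor required anything exotic; both survive in plain Common Lisp. A minimal sketch (the names here are hypothetical illustrations -- the real Cyc Units and Slots were far richer):

    ;; Factor 2: a user-defined type that behaves like a native one.
    ;; UNIT and its accessors are illustrative names, not Cyc's actual code.
    (defclass unit ()
      ((name  :initarg :name  :accessor unit-name)
       (slots :initarg :slots :accessor unit-slots :initform '())))

    ;; Hooking the standard printer makes units display like built-ins
    ;; everywhere: the REPL, the inspector, the always-running debugger.
    (defmethod print-object ((u unit) stream)
      (print-unreadable-object (u stream :type t)
        (format stream "~A" (unit-name u))))

    ;; Factor 1: redefine compiled code in the running image; every
    ;; caller immediately sees the superseding definition.
    (defun slot-count (u) (length (unit-slots u)))
    (compile 'slot-count)

    ;; ...later, without restarting anything:
    (defun slot-count (u) (/ (length (unit-slots u)) 2)) ; slots held as a plist
    (compile 'slot-count)

On the lispm this uniformity extended all the way down to the system's own conses and arrays, which is what erased the boundary between user code and system code.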
Now, let's compare this (good and bad) with the state of the art today:
We definitely have languages heavily influenced by this work: C++ classes can be indistinguishable from built-in ones (the entire STL is written in C++), and Clojure, js, etc. are mainstream dynamic languages.
But we still have gain mismatches at interfaces: calling into the kernel is expensive; you pay a cost in performance and functionality when interfacing between languages (e.g., a lot of boxing/unboxing when you interface C++ to Python); and the build and development tools are typically disjoint from the program under development, which means the code and the tools can't introspect on their relationship with each other. On the other hand, a Pascal or C program on an MIT-style Lispm (Symbolics/LMI/TI) had the same calling conventions and datatypes as a Lisp program and could intercall freely and be debugged cleanly.
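To make that boundary cost concrete: even calling a one-argument C function from Common Lisp today goes through an explicit foreign-function layer such as CFFI, which has to marshal every value across -- exactly the step the Lispm's shared calling conventions made unnecessary. A rough sketch:

    ;; (ql:quickload :cffi)  ; load the FFI library first
    ;; Calling C's strlen via CFFI: the Lisp string is copied into a
    ;; freshly allocated C buffer, the call is made, and the buffer is
    ;; freed again -- marshalling work a Lispm simply didn't need.
    (cffi:foreign-funcall "strlen"
                          :string "hello"
                          :int)            ; => 5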
And people are less interested in that these days: the dominant mode of programming is to assemble large numbers of black boxes and hope that they work. That gives me the heebie-jeebies, but perhaps that's just snobbery on my part: tons of useful systems are being built that way, and by a much larger set of people than used to be considered programmers.
There were downsides of course, many of which are the reason these kinds of architectures no longer exist. The biggest one is the two-edged sword of the simplicity and complexity of Lisp: it's a very simple language with many sharp tools in the toolbox; a new user could come up to speed very rapidly, but most commonly a new user ended up fashioning a thick rope with which they then proceeded to hang themselves. It's a systems programming language, and its very simplicity makes it too complex for most people for application development. The same issue affects C++ today: it's an immensely powerful and expressive systems programming language, but also a lot to get your hands around. Compare that to Go, which explicitly tries to head in the opposite direction.
Another "downside" is exploratory programming: I still use that approach but in terms of full disclosure: a good friend of mine denigrates it as "programming by successive approximation". I can see his point.
And of course the lack of barriers would make applications quite vulnerable in today's security environment.
It's still by far the most productive environment I've ever used, but I'm not sure it has a place in today's world.