A familiar example is “goto” versus structured programming. “goto” is supremely expressive—conditional jumps are sufficient to express all of the usual structured programming constructs. But sacrificing “goto” enables much more powerful reasoning about programs because it restricts the shape of the control flow graph to common patterns: “while”, “do…while”, “for”.
One of the core features of functional programming is “recursion schemes”, that is, generalising patterns of data flow. While “goto” lets you manipulate control flow arbitrarily, it turns out that you don’t usually want arbitrary control flow—and the same is true for data, so rather than “for” loops, it’s often easier to think in terms of the common patterns: “map”, “reduce”, “filter”.
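To make that concrete, here's a small sketch in Haskell (my own toy example, not from the article): the same computation written once as hand-rolled recursion, and again with the data-flow patterns named.

```haskell
import Data.Char (toUpper)

-- Explicit recursion: the data-flow analogue of hand-rolled control flow.
-- The reader has to re-derive the pattern (a filter followed by a map)
-- by inspecting every clause.
shoutLongWords :: [String] -> [String]
shoutLongWords [] = []
shoutLongWords (w:ws)
  | length w > 4 = map toUpper w : shoutLongWords ws
  | otherwise    = shoutLongWords ws

-- The same data flow with its schemes named: the shape is evident at a
-- glance, and each stage can be read and reasoned about separately.
shoutLongWords' :: [String] -> [String]
shoutLongWords' = map (map toUpper) . filter ((> 4) . length)

-- And "reduce" is just a fold:
totalLength :: [String] -> Int
totalLength = foldr ((+) . length) 0
```

Once the pattern is named, you reason about what each stage does instead of how the traversal is wired.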
> rather than “for” loops, it’s often easier to think in terms of the common patterns: “map”, “reduce”, “filter”.
It is. My lament there is that, at least in my experience, optimizers haven't caught up in this department. So while I invariably start with the higher-order operations, all too often the profiler subsequently informs me that I need to replace that code with a for loop.
I suspect that side effects are the culprit there: in a language without referential transparency, the optimizer is very limited in the kinds of reasoning it can do about a function. I'm not quite ready to give up side effects just yet. I do wish I could work in a language that only allows them in functions that explicitly declare they want to use them, though.
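For what it's worth, that's essentially what Haskell gives you: effects show up in a function's type, so the pure parts stay fully transparent to the optimizer. A minimal sketch:

```haskell
-- A pure function: referentially transparent, so the compiler may inline,
-- reorder, deduplicate, or fuse calls to it without changing behaviour.
double :: Int -> Int
double x = x * 2

-- An effectful function must say so in its type: the IO in the result is
-- the explicit declaration that it wants to perform side effects.
reportDouble :: Int -> IO ()
reportDouble x = putStrLn ("doubled: " ++ show (double x))

main :: IO ()
main = reportDouble 21  -- effects are confined to the IO-typed call chain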
While there is some truth to the author's point, and we can see the effect a lot in practice, he makes the mistake of confusing "useful" with "analyzable in FP terms".
This exact point (comprehensibility vs. power) is why Smalltalk is so amazing: it adds a lot of power while at the same time being more comprehensible, not less. That's no small feat, and IMHO one that is both under-appreciated and under-analyzed.
He isn't talking about comprehensibility but about analyzability. For instance, a type system normally improves comprehensibility, but it can sometimes require rewriting your algorithm in a less clear way, reducing how easy it is to understand. In contrast, it always improves analyzability, since you can say more about the program without going into details.
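A small concrete case of that (a standard parametricity example, not from the article): in Haskell, a sufficiently polymorphic type already tells you a lot about a function before you ever read its body.

```haskell
-- Parametricity lets you reason from the type alone: a total function of
-- type a -> a can only be the identity (ignoring bottoms), because it
-- knows nothing about a.
f :: a -> a
f x = x

-- Likewise, any g :: [a] -> [a] can only drop, duplicate, or rearrange
-- elements; it can never invent new ones. That's analyzability: strong
-- statements about the program "without going into details".
g :: [a] -> [a]
g = reverse . take 3
```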
This is really natural in mathematics and logic—it's the tension between consistency and completeness.
Essentially, you have a tension between laws and models. A larger number of laws means more reasoning power at the expense of fewer models (possibly 0, in which case you're inconsistent). A larger number of models means greater realizability but fewer laws (possibly 0, in which case you've gained nothing through this exercise).
It's always a game of comparative linguistics—does this abstraction pay its way? Well, without it we have this set of laws and with it we gain this new set. Are things in balance such that the abstraction is realizable and the laws are meaningful? That's your tradeoff.
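Here's a toy illustration of that trade in Haskell (my example, not the parent's): adding a law shrinks the set of models but buys new reasoning power.

```haskell
import Data.List.NonEmpty (NonEmpty)
import Data.Semigroup (sconcat)

-- One law (associativity): lots of models. "Pick the shorter string" is a
-- Semigroup, but it has no sensible identity element, so it never gets to
-- be a Monoid -- a model the extra law excludes.
newtype Shortest = Shortest String deriving Show

instance Semigroup Shortest where
  Shortest a <> Shortest b = Shortest (if length b < length a then b else a)

-- With only associativity, folding needs at least one element:
shortestOf :: NonEmpty Shortest -> Shortest
shortestOf = sconcat

-- Two laws (associativity + identity): fewer models, more reasoning power.
-- Because [] is mempty for lists, folding an empty collection is now
-- well-defined -- the identity law is exactly what pays for it.
allWords :: [[String]] -> [String]
allWords = mconcat
```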
I’ve certainly been more impressed by creative expression in restricted media, and I’m not sure that’s only due to the restrictions—it seems people actually perform better when trying to circumvent limitations. Examples that come to mind include pixel art, sonnets, pointillism, and origami.
For programming languages, a paradigm that lets you be "as expressive as necessary" is LOP[1] (Language-Oriented Programming).
With it you can basically choose your level of expressiveness while you develop your program (with different levels for different parts of the program). The frustrating trade-off between expressiveness and readability[2] is the main reason I love this paradigm. Unfortunately, LOP has never really been trendy. I hope that changes soon.
[2] For programming languages I would rather speak of readability (as opposed to analyzability), because a program is often not set in stone but "alive" (it's modified and enhanced over time). This probably doesn't apply to the other fields discussed in the article, though.
This trade-off is also why I love Lisp. Although you can easily write something totally incomprehensible, people well versed in Lisp manage to be very expressive with a very powerful language. Of course, Lisp macros are the key.
It's extremely easy to do LOP with Lisp thanks to s-expressions (all those parentheses :). But you can be much more expressive with LOP.
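Even without macros you can get a weaker version of this with a plain embedded DSL; here's a toy sketch in Haskell rather than Lisp (all names invented for illustration). Macros go further because they can reshape the syntax itself, whereas an embedded DSL only picks the vocabulary.

```haskell
-- A tiny embedded language for one corner of a program: a vocabulary of
-- route predicates. The "level of expressiveness" is exactly what we
-- choose to put into it.
type Path = [String]

data Route = Exact Path | Prefix Path | AnyOf [Route]

matches :: Route -> Path -> Bool
matches (Exact p)  q = p == q
matches (Prefix p) q = p == take (length p) q
matches (AnyOf rs) q = any (`matches` q) rs

-- Programs in the little language read close to the problem domain:
adminArea :: Route
adminArea = AnyOf [Prefix ["admin"], Exact ["users", "audit"]]
```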
If you ban some components (for your whole program), then I guess you would lose some expressiveness.
Also, I don't know whether you can categorize "components" as more or less readable. For some problems a "component" might be highly readable while being totally inappropriate for another. I see some cases where using more components makes the code more readable.
It's an art: using the language features (paradigms, etc.) and design patterns that will be the most readable while still powerful enough to solve your problem. I love doing this with Lisp or Ruby, which are languages with a very modular syntax. But it's time-consuming, so I only do it for substantial projects I really care about. It's often more efficient, and less bug-prone (more secure, etc.), to use a conventional, well-established programming style (think RoR, for example), because it's well tested by many.
I really like this post; it's a nice look at an issue that comes up over and over again in designing systems.
One thing that occurs to me is that it relates to what part of a system I identify with. For example, I think very differently about a dictatorship if I imagine myself the benevolent dictator than if I think of myself as a citizen.
I also feel very differently about the different sorts of system depending on what I'm up to. When I'm mainly exploring a problem space, I want powerful tools, but when I'm building something to last, I want something safely constrained. In the former, I place my identity with the lone author, where the power to do the unexpected is vital. In the latter, I identify with those maintaining and debugging a system, where the power to do anything is a giant pain.
"That is, the more expressive a language or system is, the less we can reason about it, and vice versa. The more capable the system, the less comprehensible it is."
What makes these assertions true? Research, data, polls, etc. would be helpful; it is hard to accept such wide-ranging claims without some evidence.
Also, could someone please post the effective definitions of "expressiveness" and "capability" as used in the post?
An example (somewhat) from the business world: integrated vs modular.
For example, Apple (iOS) is integrated: they have complete freedom to design their hardware and software stack from the ground up. This allows them full expressiveness.
In contrast, Google (Android) is modular: they integrate and source components from commodity vendors. It's less expressive, and perhaps a degraded experience compared to Apple, but much more modular.
The interesting bit is that the traditional wisdom in business is "in the end modular approaches to technology always defeat integrated approaches", although how this will play out in Apple's case is hard to predict, because I suppose Apple is kind of special.
However, if we apply that traditional business wisdom to the software domain here, I wonder whether the same holds: the expressive/integrated approach is more powerful initially (where the modular one might not even get "off the ground"), but over time it loses ground to the constrained/modular approach as the modular benefits are allowed to scale to their full potential.
When my kids and I play with Legos, I think back to when I was a kid, and all the things we could build with our big box of maybe 20 different kinds of blocks. They were very compositional.
Today, we have a million different pieces from various sets, and it is difficult to put them together in ways that conform to what we imagine, so they aren't as much fun.