When my brother was a little kid, he had trouble learning to tie his shoes. He wanted to use velcro shoes, so he got them. Does he wear velcro shoes today? No! He learned to tie his shoes, even though it wasn't an intuitive process. He learned partly out of necessity, but also because tying knots is a generally useful skill (aside from attaching shoes to feet) and because learning new things is an important part of life.
If you stick to intuitive operations, you'll suffer a lack of perspective. You never expand your intuition. Better metaphors are those that more closely encapsulate the facts of the problem at hand, not those that more nicely fit human mental models. The human mental model is wrong by default. Aristotle's intuition about gravity was wrong, and a significant amount of damage would be done by forcing discussion of gravity to match what he found intuitive.
People can learn new notations. They do it every day. People can adapt to complex metaphors. But complex problems will never simplify themselves to fit human preconceptions and intuition.
If you're dealing with a set-like problem, then a set-based language is good. If you're dealing with an event-like problem, then an event-based language is good. Human intuition is not an important factor, save for our bias toward seeing a given problem as set-like or event-like. An arbitrary problem may be better modeled using another frame of mind. But you won't see that if you're requiring that problems be solved using 'intuitive' language.
And yet wildly different paradigms are used to solve the same problems every day. A video game (for example) can be programmed imperatively, or reactively, or with OOP, or following a compositional pattern, and so forth. All of these metaphors are broad enough to express computation in general. What I'm saying is, when designing general languages, it's worth consciously factoring in the general ways in which the human mind works. It's not the only factor, but it's one that's often ignored.
To be doubly clear: I'm not saying that unintuitive, highly-formalized syntaxes are useless, just like assembly isn't useless. Each is crucial for certain uses. What I am saying is, they shouldn't be necessary for (and often aren't even well-suited to) solving the average programming problem. We don't write software in assembly any more, but we haven't evolved as far beyond its paradigms as we like to think.
There is a huge number of problems that software engineers work on every day, whose domain (what's being modeled) is fully understood by many laypersons (or nontechnical subject-matter experts). And yet those laypeople lack the ability to express their ideas to a computer, and those engineers waste huge amounts of effort translating those simple ideas into needlessly esoteric code. This is a fundamental failure in language design, and it needs to be addressed.
Lol, like what? Go have a layperson check if a file exists. Or pass a substring of UTF-8 to a C function. Or check equality of two floating point calculations. Or multiply two signed integers together. Or fix anything that doesn't work due to performance problems.
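To make the floating-point example concrete, here's a minimal sketch in Python (the same behavior holds in any language using IEEE 754 doubles): the "obvious" equality check fails, and the usual remedy is a tolerance-based comparison such as `math.isclose`.

```python
import math

# Binary floating point cannot represent 0.1, 0.2, or 0.3 exactly,
# so the intuitive equality check fails.
a = 0.1 + 0.2
b = 0.3
print(a == b)              # False, despite what intuition says
print(abs(a - b))          # a tiny but nonzero error (~5.5e-17)

# The conventional fix: compare within a tolerance.
print(math.isclose(a, b))  # True
```

A layperson's intuition says `0.1 + 0.2 == 0.3` should obviously be true; knowing why it isn't, and what to do instead, is exactly the kind of edge-case knowledge being pointed at here.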
And when they get all of these wrong, ask them to show you how to debug and correct a useful but 'simple' program in production.
Lay people can't express their ideas to a computer because their ideas don't work in a computer. The code is esoteric because logic is esoteric, and the human brain is just not good at reasoning about edge cases without years of practice and experience.
>I'm not saying that unintuitive, highly-formalized syntaxes are useless, just like assembly isn't useless.
Ok, good. So why did you object to the existence of LISP? Why would you say it is a mistake?
In any case, my overall point in all of this is that there's a difference between simplifying a language and simplifying the learning experience. We should always be looking to make things easier to learn, but that doesn't actually require changes to the language. You can do that by finding better ways of teaching, better documentation, and better explanations for how and why things work.