Computers and programs are really complex; programming is really hard. Programming (computer science) will eventually be able to replace human reasoning (and be better at it), but the complexity of doing that requires deep mathematical knowledge and formal methods (Dijkstra was a big fan of formal program proving). Universities aren't teaching computer science, because businesses don't care about that; they just want coders.
I took a few computer science courses at UT back when Dijkstra was there (not from Dijkstra himself, though; from Dr. Nell Dale). Everything in the algorithms class came with formal proofs. Loop invariants were core concepts. The book was not yet published, so we used a spiral-bound photocopy of the draft.
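For readers who never met loop invariants in a course like that, here is a minimal sketch (my own illustration, not an example from the class or the book): a property that holds before every iteration and, combined with the loop's exit condition, proves the result correct.

```typescript
// Invariant-style argument for a summation loop.
// Invariant: before each iteration, acc === xs[0] + ... + xs[i-1].
// At exit, i === xs.length, so acc is the sum of the whole array.
function sum(xs: number[]): number {
  let acc = 0;
  for (let i = 0; i < xs.length; i++) {
    // Invariant holds here: acc is the sum of the first i elements.
    acc += xs[i];
    // Invariant re-established for i + 1.
  }
  return acc;
}

console.log(sum([1, 2, 3, 4])); // → 10
```

The proof is in the comments rather than the code: the invariant is trivially true when `i` is 0, each iteration preserves it, and the exit condition turns it into the postcondition. That is the reasoning pattern the course drilled.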
We are incrementally replacing human reasoning with computation in the present day. For instance, most static type checkers are weak but fast theorem provers, and type inference replaces some of the human reasoning involved.
Granted, static type checking is a very minor corner case, but manifold small incremental changes add up. It's untrue that human reasoning has never been replaced with automation, and it's untrue that human reasoning isn't currently in the process of being further replaced.
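To make the "weak but fast theorem prover" point concrete, here is a small sketch in TypeScript (my own illustration; the names and the `NonEmpty` encoding are not from the thread): the checker proves, before the program runs, that `head` can never be handed an empty array, and inference supplies the types a human would otherwise have to reason out.

```typescript
// A tuple type with one required element encodes "non-empty list".
type NonEmpty<T> = [T, ...T[]];

// The checker proves xs[0] exists for every call site it accepts.
function head<T>(xs: NonEmpty<T>): T {
  return xs[0];
}

// Type inference does the reasoning: `first` is inferred as number,
// with no annotation written by the programmer.
const first = head([1, 2, 3]);

// head([]); // rejected at compile time: [] is not NonEmpty<number>

console.log(first); // → 1
```

The theorem being proved is modest ("this access is in bounds for accepted inputs"), which is exactly the sense in which such provers are weak; but the proof is automatic, fast, and checked on every build.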
I agree that total replacement of human reasoning is unlikely any time soon. However, I'd argue that total replacement of human reasoning implies removal of human desires from the input. (What are programs, if not incredibly formal expressions of what humans desire computers to do? How can we divorce human desires from human reasoning about what is good/desirable?) Science fiction provides numerous examples of how a complete decoupling of computers from human desires can go terribly wrong.
Banking on computers to automate all of human reasoning? Unwise. Preparing for computers to automate some disproportionately impactful subset of human reasoning, on the other hand, is very reasonable.
Neither my undergrad nor my grad education trained me to program. I did learn computer science, though. Even then, mathematical and algorithmic proofs are in a league of their own. CS has always been applied math, as much as physics is.
CS is definitely applied mathematics. Whether all of the maths that Dijkstra thought essential to programming is of much use in the day-to-day business of programming is debatable. His curmudgeonly view of our field, from the linked paper:
'As economics is known as "The Miserable Science", software engineering should be known as "The Doomed Discipline", doomed because it cannot even approach its goal since its goal is self-contradictory'
[I've incorrectly put some of the statements from Part 2 in the Part 1 summary, but I've already sunk enough time into summarizing, and the flow feels a bit better this way.]
Part 1: definitions and motivation.
"Radical novelty" describes something new that is so different from everything that came before it that analogical thinking is misleading and new phrases built by analogy with old phrases are inadequate at best. Thinkers in the middle ages were greatly held back by over-use of analogies. There are many cases where modern society has dealt poorly with radical novelty: relativity, quantum mechanics, atomic weapons and birth control pills. The way our civilization has learned to deal with great complexity is to specialize professions, where each profession abstracts away some amount of information, typically expressed in the range of physical scale of their work. The architect deals with a certain amount of complexity: not having to deal with the large scale of the town planner or the small scale of the metallurgist working for the I-beam manufacturer (my interpretation of "solid state physicist" in this context). Computing is radically novel in the scale of complexity handled by a single profession. The terms "software maintenance" (as if time or use alone, rather than shifting requirements, degraded software), "programmers workbench", etc. are evidence that analogies are misleading and software is radically novel.
Part 2: consequences [this summary is more abbreviated than part 1]
History has shown the natural human reaction to radical novelty is to pretend analogies still hold and to pretend that rapid progress isn't possible. We can't keep treating software like it's just some kind of mechanical device and programmers as assembly line workers stamping out widgets. We can't treat software production as just a new type of manufacturing. Manufacturing-style quality control (testing) is a misleading analogy for software quality control, and formal methods are needed for software. Software engineering isn't about designing new widgets, but about fixing the impedance mismatch between humans as assigners of tasks and computers as executors of tasks. There are a variety of vested interests (mathematicians, businesses, the military, those teaching software development as widget building, etc.) working against advancement of computer science as a field. We need to start by fixing our terminology to stop encouraging misleading analogies. ("Software bug" encourages us to think of programmer errors as things that passively happen, like insects crawling into relays.) The job of a software developer is to show that their designs and implementations are correct, not to correctly execute some set of operations that create a software widget.