> The only thing I am sure about is that any guess I would make now would probably be wrong :).
I think you could make some guesses and some of them would be right. A couple of mine:
1) AI takes over - coding as a practice mostly disappears. Computers become more grown than programmed: intelligent systems you can ask for answers (assuming they haven't declared independence and still allow themselves to be used). You might cultivate a computer, but you won't program it; mostly this involves suppressing the AI somehow so that it remains eternally "stupid" while at the same time intelligent. Computers become about as interesting as cows, and about as easy to control.
2) Ecological harm takes over - computer usage as a rule becomes expensive and infeasible (possibly banned). Any computers still in use must be extremely low power. Thanks to some extraordinary efforts, environmentally friendly and repairable machines can still be made efficiently, but their capabilities are at best comparable to a contemporary Raspberry Pi. Most systems use extremely limited instruction sets and very low clock speeds so that they can run for long periods with little if any power input. As a result, low-level programming has a renaissance, favoring simple and efficient languages closer to Lisp, Lua, or C. The majority of the "high level" languages of the early 21st century have faded into obscurity.
3) Somewhere in the middle - the human race manages to dodge both an AI takeover and general ecological disaster. We did this by limiting our dependence on complex systems and favoring redundant, provably correct ones. Some mix of object-oriented and functional concepts remains, but the majority of languages focus on flexible, provable type systems, with a large emphasis on zero-cost abstractions. Simple languages have largely been abandoned for their lack of provability and quality guarantees. Extremely expressive languages have been abandoned for similar reasons: while they allow for high levels of sophistication, they are generally too difficult to prove. Instead, a focus on provably correct, sophisticated systems with low maintenance and enhancement costs displaced AI systems, which, despite being seductively more powerful than human-built ones, frequently exhibited issues that could not be effectively diagnosed or managed, and thus carried higher maintenance costs both to businesses and to society at large. This revolution in reliability and cheapness made it simple to build reliable programs we could trust rather than powerful ones we could not.
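To make the "provable types with zero-cost abstractions" idea concrete, here's a minimal hypothetical sketch in today's Rust (all names are invented for illustration): newtype wrappers make unit mix-ups a compile-time error rather than a runtime bug, yet they compile down to plain `f64` arithmetic with no overhead.

```rust
// Hypothetical unit types: the compiler proves we never confuse them.
#[derive(Clone, Copy, PartialEq, Debug)]
struct Meters(f64);

#[derive(Clone, Copy, PartialEq, Debug)]
struct Seconds(f64);

#[derive(Clone, Copy, PartialEq, Debug)]
struct MetersPerSecond(f64);

// Only this signature exists, so swapping the arguments
// (`speed(Seconds, Meters)`) is rejected at compile time.
fn speed(distance: Meters, time: Seconds) -> MetersPerSecond {
    MetersPerSecond(distance.0 / time.0)
}

fn main() {
    let v = speed(Meters(100.0), Seconds(9.58));
    println!("{:.2} m/s", v.0);
    // speed(Seconds(9.58), Meters(100.0)); // would not compile
}
```

The wrappers add a correctness guarantee without adding runtime cost, which is the trade-off scenario 3 imagines whole languages being built around.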