> "(you can just write doWhatIMean() and the compiler will automatically choose the optimal implementation for your current problem and platform - where if you'd specified the implementation details yourself, you'd be sub-optimal in many cases)"
How far along are we with declarative programming, in the sense of being expressive and expecting the compiler to understand what we mean? I'd argue that, short of strong AI, humans will always be better off writing imperative code than relying on a compiler for declarative code, for anything beyond relatively simple, isolated pieces of code or algorithms. In general we have far more experience with and knowledge of the target system and environment than the compiler does. This is less true for JIT compilers, but they come with their own overhead.
I'd really love to see superoptimization[1] done as a service for compilers. Say you have a function whose semantics the compiler is fully certain about. The compiler fingerprints this function and uploads the fingerprint plus a behavioral analysis to <somewhere in the cloud>, where it is exhaustively optimized by brute-forcing all the meaningful instruction sequences that conform to the function's semantics. Once the optimal instruction sequence is found, it's associated with the fingerprint, stored in a database, and returned to the compiler. Now, whenever someone else writes the exact same function (or code with the exact same semantics), the compiler queries the <some cloud service> for the optimal piece of code. Of course, a system like this would need more information about the actual target of the code (CPU architecture; the cost of things like memory accesses, cache misses, branch mispredictions, etc.).
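To make the shape of this concrete, here's a minimal toy sketch in Python. Everything in it is hypothetical (the tiny ISA, the sampled-I/O "fingerprint", the dict standing in for the cloud service); a real system would need a sound equivalence check (e.g. an SMT solver) rather than sampled inputs, and a per-target cost model rather than "fewest instructions":

    import hashlib
    import itertools

    # Toy ISA: each instruction maps a register file (r0, r1, r2) to a new one.
    TOY_ISA = {
        "inc r0":    lambda r: (r[0] + 1, r[1], r[2]),
        "dec r0":    lambda r: (r[0] - 1, r[1], r[2]),
        "add r0 r1": lambda r: (r[0] + r[1], r[1], r[2]),
        "mov r1 r0": lambda r: (r[0], r[0], r[2]),   # r1 := r0
        "shl r0":    lambda r: (r[0] * 2, r[1], r[2]),
    }

    # Sample inputs standing in for the "behavioral analysis" (unsound: a
    # real service must prove equivalence, not just test it on samples).
    SAMPLES = [(x, y, 0) for x in range(-4, 5) for y in range(-4, 5)]

    def run(program, regs):
        for insn in program:
            regs = TOY_ISA[insn](regs)
        return regs[0]  # r0 is the observable result

    def fingerprint(program):
        """Hash the program's observable behavior on the sample inputs."""
        outputs = repr([run(program, s) for s in SAMPLES]).encode()
        return hashlib.sha256(outputs).hexdigest()

    def superoptimize(reference, max_len=3):
        """Brute-force the shortest instruction sequence that matches
        `reference` on every sample (length as a stand-in cost model)."""
        target = [reference(s) for s in SAMPLES]
        for length in range(1, max_len + 1):
            for cand in itertools.product(TOY_ISA, repeat=length):
                if all(run(cand, s) == t for s, t in zip(SAMPLES, target)):
                    return list(cand)
        return None

    # The "cloud service" degenerates to a dict keyed by fingerprint: the
    # first compiler to ask pays for the search; later compilers submitting
    # semantically identical code get a cache hit.
    CACHE = {}

    def lookup_or_optimize(program):
        fp = fingerprint(program)
        if fp not in CACHE:
            CACHE[fp] = superoptimize(lambda s: run(program, s))
        return CACHE[fp]

    # "2*x + 1" written naively as three instructions...
    naive = ["mov r1 r0", "add r0 r1", "inc r0"]
    print(lookup_or_optimize(naive))  # -> ['shl r0', 'inc r0']

Even this toy blows up combinatorially (|ISA|^length candidates per query), which is exactly why it would only pay off as a shared service: the search cost is amortized across everyone who ever compiles code with that fingerprint.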
[1]: https://en.wikipedia.org/wiki/Superoptimization (The system I roughly described is far more extensively analyzed in some of the papers. Really great read!)