Very interesting, but if memory serves, Microsoft had this idea maybe in ’96. Though patent-expiry-wise, the timing of Cap’n Proto might make a bit of sense.
Java was still pretty brand spanking new in ’96. And RMI wasn’t inflicted on us until ’97. But half the stuff we use was invented in the ’70s and explored in the ’80s anyway.
The classes in college that I have used the most by far are the series that covered logic and set theory, and the one on distributed computing. The latter does take some of the fun out of people rediscovering things though.
IIRC, Berkeley had an OS with swarm computing and process migration around ’87. It wasn’t until 20 years later that the speed of disk vs networking had a similar imbalance again. And of course the mainframe guys must constantly think the rest of us are idiots.
The idea of creating a program and sending it across an abstraction boundary for execution, to avoid the cost of navigating that boundary for each step of the program, is reasonably trivial. It was an approach I used in my first job to optimize access to data in serialized object heaps: rather than deserialize the whole session object heap for each web request (costly), I'd navigate the object graph within the binary blob, which meant following offsets, arrays, etc. The idea of a 'data path' program for fetching data occurred to me almost immediately.
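The "data path" idea above can be sketched in a few lines. This is a hypothetical illustration, not the original system: the path is a tiny program of navigation instructions that gets interpreted on the far side of the boundary in one call, instead of one crossing per step. The `run_path` function, the `("field", ...)`/`("index", ...)` instruction names, and the sample data are all invented for the example.

```python
# Hypothetical "data path" program: instead of one boundary crossing per
# navigation step, the client ships the whole path and it is executed in
# a single call on the other side.

def run_path(root, path):
    """Interpret a tiny path program against nested dicts/lists.

    Each instruction is ("field", name) or ("index", i); the interpreter
    just walks the structure step by step and returns the final node.
    """
    node = root
    for op, arg in path:
        if op == "field":
            node = node[arg]        # follow a named field
        elif op == "index":
            node = node[arg]        # follow an array index
        else:
            raise ValueError(f"unknown op {op!r}")
    return node

# One round trip fetches a deeply nested value.
session = {"user": {"orders": [{"total": 99}, {"total": 12}]}}
program = [("field", "user"), ("field", "orders"),
           ("index", 1), ("field", "total")]
print(run_path(session, program))  # prints: 12
```

The same shape works whether the "boundary" is a network hop, a process boundary, or (as in the comment) the cost of deserializing a blob per access.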
In the end, a different approach turned out to be more usable: very short-lived objects that were little more than wrappers around offsets into the heap, combined with type info. With generational GC it was plenty fast too; the abstraction stack was thin, and the main cost avoided was materializing the whole heap.
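A minimal sketch of that wrapper-around-an-offset idea, under invented assumptions (the record layout, the `Ref` class, and the tag values are all made up for illustration): each accessor reads just the bytes it needs with `struct.unpack_from`, following offsets through the blob, so no object graph is ever materialized.

```python
import struct

# Hypothetical serialized heap: records appended into one byte blob.
# Layout (little-endian):
#   tag 1 = User: int32 tag, int32 id, int32 offset-of-name-record
#   tag 2 = Str:  int32 tag, int32 length, then UTF-8 bytes

def write_str(buf, s):
    data = s.encode("utf-8")
    off = len(buf)
    buf += struct.pack("<ii", 2, len(data)) + data
    return off

def write_user(buf, uid, name_off):
    off = len(buf)
    buf += struct.pack("<iii", 1, uid, name_off)
    return off

class Ref:
    """Short-lived wrapper: just (blob, offset) plus typed accessors."""
    __slots__ = ("blob", "off")

    def __init__(self, blob, off):
        self.blob = blob
        self.off = off

    def tag(self):
        return struct.unpack_from("<i", self.blob, self.off)[0]

    def user_id(self):
        assert self.tag() == 1
        return struct.unpack_from("<i", self.blob, self.off + 4)[0]

    def user_name(self):
        assert self.tag() == 1
        name_off = struct.unpack_from("<i", self.blob, self.off + 8)[0]
        return Ref(self.blob, name_off).str_value()  # follow one offset

    def str_value(self):
        assert self.tag() == 2
        n = struct.unpack_from("<i", self.blob, self.off + 4)[0]
        start = self.off + 8
        return self.blob[start:start + n].decode("utf-8")

# Build a tiny heap, then read one field without deserializing anything else.
heap = bytearray()
name_off = write_str(heap, "alice")
user_off = write_user(heap, 42, name_off)
u = Ref(bytes(heap), user_off)
print(u.user_id(), u.user_name())  # prints: 42 alice
```

The wrappers are cheap to allocate and die young, which is exactly the case generational GC handles well; the blob itself stays inert.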
The extent to which computer programmers literally don't read anything about computing history or alternative systems is quite funny. I can't wait until, sometime in the next few years, someone announces they're "modernizing the cloud operating system" and "unlocking the synergies between microservices" or whatever by... reinventing MsgSend, which we all knew about 30 years ago at this point.
To save you some time and test my prediction powers: they'll get $50 million in funding and almost flop until they find a way to stick Linux inside of a virtual machine (so it can act like a glorified driver for you, for device compatibility, because it turns out burning cash on trying to replicate 10 million lines of drivers isn't smart). It will grow Docker compatibility at some point. 90% of the development will come from one group of ~6 people. Then they'll get bought out and the project will die overnight. The end.