
One big problem we're now backing into is having incompatible paradigms in the same language. Pure callback, like JavaScript, is fine. Pure threading with locks is fine. But having async/await and blocking locks in the same program gets painful fast and leads to deadlocks, especially when the two systems don't understand each other's locking. (Go tries to get this right, with unified locking; Python doesn't.)
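
To make the failure mode concrete, here is a minimal Python sketch (the code is mine, not from the comment above): a blocking threading.Lock held across an await can never be released once another coroutine blocks the event-loop thread trying to acquire it, because the loop itself is stuck.

    import asyncio
    import threading

    lock = threading.Lock()  # blocking lock; knows nothing about the event loop

    async def holder():
        with lock:                  # acquired on the event-loop thread
            await asyncio.sleep(1)  # yields while still holding the lock

    async def waiter():
        lock.acquire()              # blocks the event-loop thread itself,
        lock.release()              # so holder() can never resume and release

    async def main():
        await asyncio.gather(holder(), waiter())

    # asyncio.run(main())  # hangs; an asyncio.Lock avoids this for pure-async code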

The same is true of functional programming. Pure functional is fine. Pure imperative is fine. Both in the same language get complicated. (Rust may have overdone it here.)

More elaborate type systems may not be helpful. We've been there in other contexts, with SOAP-type RPC and XML schemas, superseded by the more casual JSON.

Mechanisms for attaching software unit A to software unit B usually involve one being the master defining the interface and the other being the slave written to the interface. If A calls B and A defines the interface, A is a "framework". If B defines the interface, B is a "library" or "API". We don't know how to do this symmetrically, other than by much manually written glue code.
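
As a rough illustration of that asymmetry (the names below are made up for the example), compare a library call with a framework callback in Python:

    import json

    # Library style: B (JsonCodec) defines the interface, A calls it on A's schedule.
    class JsonCodec:
        def encode(self, obj) -> str:
            return json.dumps(obj)

    def application(codec: JsonCodec) -> str:
        return codec.encode({"x": 1})              # A calls B

    # Framework style: A (Framework) defines the interface, B is written to it,
    # and A decides when B runs (inversion of control).
    class Handler:
        def on_request(self, path: str) -> str:
            raise NotImplementedError

    class Framework:
        def __init__(self, handler: Handler):
            self.handler = handler
        def serve_once(self, path: str) -> str:
            return self.handler.on_request(path)   # A calls into B

    class MyApp(Handler):                          # B, written to A's interface
        def on_request(self, path: str) -> str:
            return "hello from " + path

    # print(application(JsonCodec()))              # library
    # print(Framework(MyApp()).serve_once("/"))    # framework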

Doing user-defined work at compile time is still not going well. Generics and templates keep growing in complexity. Making templates Turing-complete didn't help.



> incompatible paradigms

See Architectural Mismatch or, Why it's hard to build systems out of existing parts[1]

Yes, we are not good at it, but we need to do it. Very often, the dominant paradigm is not appropriate for the application at hand, and often no single paradigm is appropriate for the entire application.

For example, UI programming does not really fit call/return well at all[2].

> attaching software unit A to software unit B .. master/slave

Case in point: this is largely due to the call/return architectural style being so incredibly dominant that we don't even see it as a distinct style, with alternatives. I am calling it 'The Gentle Tyranny of Call/Return'.

[1] http://www.cs.cmu.edu/afs/cs.cmu.edu/project/able/www/paper_...

[2] http://dl.ifip.org/db/conf/ehci/ehci2007/Chatty07.pdf


> See Architectural Mismatch or, Why it's hard to build systems out of existing parts

If you want to study that, look at ROS, the Robot Operating System. ROS is a piece of middleware for interprocess communication on Linux, plus a huge collection of existing robotics, image processing, and machine learning tools that have been hammered into using that middleware. The dents show. There's so much dependency and version pinning that just installing it without breaking an Ubuntu distribution is tough. It does sort of work, and it's used by many academic projects.

In a more general sense, we don't have a good handle on "big objects". Examples of "big objects" are a spreadsheet embedded in a word processor document or an SSL/TLS system. Big objects have things of their own to do and may have internal threads of their own. We don't even have a good name for these. Microsoft has Object Linking and Embedding and the Component Object Model, which date from the early 1990s and live on in .NET and the newer Windows Runtime. These are usually implemented, somewhat painfully, through the DLL mechanism, shared memory, and inter-process communication. All of this is somewhat alien to the Unix/Linux world, which never really had things like that except as emulations of what Microsoft did.

"Big object" concepts barely exist at the language level. Maybe they should.


So this might just be me over-fitting my new obsession to everything in the world, or alternatively I might just be out of my depth here, but could it be argued that Elixir's (or rather Erlang's) OTP approach solves or sidesteps most, if not all, of the issues you mention?

Starting a separate 'Erlang process' for all async stuff, for example, seems so wonderfully simple to me compared to the async mess I find in JS, and applying various patterns(?) to that (Task, GenServer, Supervisor) still provides a lot of freedom without incompatibility.

Please correct me if I'm wrong though. I'm still in the research phase so I haven't even written much Elixir/Erlang yet...


Keep in mind that Graydon mainly thinks in terms of low-level compilers, system-level languages, and software you ship to your users' computers, not a language that lives on your servers and provides a service over the network, nor distributed systems.

In that case, a model à la Erlang may totally make sense, but it creates quite a high overhead in terms of runtime, deployment, and impedance mismatch with the rest of the ecosystem. There are use cases of course, and interesting possible solutions like Pony are slowly appearing, but we still do not have a good solution around that.

For a pure "server" or "distributed systems" setting, or a machine dedicated to your software, you are right in my opinion. Add on top of that that Erlang knows its limits and provides you with ways to easily interact with languages with different paradigms.

It may make sense to keep these different paradigms isolated but make it easy for them to talk to each other and exchange data.


> Pure functional is fine. Pure imperative is fine. Both in the same language get complicated.

Perhaps my programming language vocabulary is the limiting factor here, but I understand “pure functional” to refer to a non-sequential computation (no ordering, just a pure transformation) and “pure imperative” to be monadic computation in Haskell (a sequence of steps, executed one after the other). I don’t see why these two could be considered incompatible — indeed, monadic computations make little sense without pure functions to transform the values inside the monad.

Can you clarify?



