General purpose programming languages' speed of light (tratt.net)
43 points by ltratt on April 9, 2013 | hide | past | favorite | 58 comments


This "Magellanic view" of programming language exploration doesn't seem quite right. Programming language design is mostly not about finding virgin territory by inventing brand new ideas (which was admittedly easier in the era when high-level languages were brand new – i.e. in the 1950s). Rather, it's mostly about finding unexplored folds hidden nearby in the vast, combinatorial manifold of ways to combine existing ideas in a single, coherent language. In my experience, people who have spent a lot of time designing languages are the most sympathetic to people trying new permutations – precisely because they are so painfully aware of all the awful compromises they were forced to make in their own designs, and because they understand that much better ways of combining those features might be tantalizingly close.

Scala is a great example: the innovation of the language is not so much in new language features, but rather in its ingenious combination of so many powerful features into a single, coherent system. Of course, some may argue that Scala has too many features (I'm a bit terrified of it), but it's indisputable that putting all those pieces together in a way that works is a tour de force of language design.


Am I the only one who sees no clothes on this article?

I mean, it seems that the person who wrote it spends a lot of time thinking about programming and much less time programming; that's how he ends up with meta-ideas that are interesting but also mostly wrong.

Languages are substantially different. You don't (and can't) understand every concept under the hood to drive the thing. You can't add and subtract features, they're interdependent.


LISPs allow features to be added. The under-the-hood concepts are very simple.

It may be an error to see programming languages as technologically similar to computer hardware. There's no Moore's Law of programming languages, and it's hard to say that there are widely accepted analogs to 45nm and 90nm fabrication.

Instead, computer programming languages are like other languages. They are more or less appropriate for a particular task in a particular context. Clojure isn't objectively a more advanced LISP. However, it is more appropriate for concurrency, particularly in a Java-dominated context.

Clojure primarily seeks to solve a different set of problems than Common Lisp. The C Programming Language is tailored for a different set of problems than ALGOL 68 or Fortran. Perl a different set than C. And so it goes.

Languages are abandoned because they lack expressiveness relevant to the goals of the programmer, not because a new language ships with 16GB and 5GHz.


The comment about the last 20 years also makes me twitch a bit. That sort of thing always makes me wonder if it's supposed to be about the state of actual usable programming languages or of recent PL research. Either way, the first two things I think of when asked about recent advances are substructural types (maybe stretching the 20-year limit) and higher-order contracts, and these are both available in things intended as general-purpose languages (Rust and Racket).


Yeah, it's almost like he's making a definition that's impossible to satisfy: the changes in mainstream languages (since 1993!) don't count because the features already existed in academic languages, but the new features in academic languages don't count because they aren't mainstream.


>Languages are substantially different. You don't (and can't) understand every concept under the hood to drive the thing. You can't add and subtract features, they're interdependent.

All the above are patently false.

-- Most languages in common use are NOT substantially different. Not just obvious things like Java/C#, or F#/Haskell/OCaml/ML, but even C# to Ruby or Objective-C to JavaScript are not that different, much less fundamentally.

-- For most languages a lot of good programmers can and DO understand "every concept under the hood" (VM internals, runtime costs, etc).

-- And you sure CAN (and language designers do all the time) add and subtract features. They add features from other languages and they remove some to create theirs. And then, when they reach version 2.0 of the language, they cherry-pick and add some more features from other languages, or remove out-of-favor features.


Adding features to languages happens all the time. Closures were added to C++. Generics to Java. Classes to PHP. Many other features were added to various languages over time.
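Python shows the same pattern: dataclasses, added to the language in Python 3.7, folded in boilerplate that used to be written by hand. A minimal sketch:

```python
from dataclasses import dataclass

# Before dataclasses (Python 3.7) this class needed a hand-written
# __init__, __repr__ and __eq__; the decorator now generates them.
@dataclass
class Vec:
    x: int
    y: int

p = Vec(1, 2)
```

Structural equality comes for free: `Vec(1, 2) == Vec(1, 2)` is True without any extra code.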

I don't understand your objection.


What I meant is that you never say "we had those ten features and we added five more, now we have fifteen".

You can't really count features and derive something meaningful from that. Every other new feature is a game changer. Or sometimes it is barely visible.


He sort of concedes as much re: static typing

https://twitter.com/laurencetratt/status/321612459741958146


A bit off topic, but I've been learning about languages like ML and Scala, and what does everybody have against static typing? I feel like if we used type systems better, we'd have a lot fewer problems. You can prove that your programs have no bugs! That's much stronger than unit testing.

When thinking 'static types', does everybody just think C/C++/Java? Is it upfront costs? My first ML program, I took half an hour to write a function that output all the words in a trie. It gets easier, and more interesting afterward, but I might have given up had it not been for a class. Are static types too rigid for prototyping? Scala, Haskell and OCaml have REPLs.
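For what it's worth, the trie exercise mentioned above is short once the representation is fixed; here's a sketch in Python with type hints (the dict-of-dicts trie shape is my own assumption for illustration, not from the ML original):

```python
from typing import Dict, List

# A trie node maps characters to child nodes; an empty-string key
# marks the end of a word. This shape is assumed for illustration.
Trie = Dict[str, "Trie"]

def words(trie: Trie, prefix: str = "") -> List[str]:
    """Return every word stored in the trie, in sorted order."""
    out: List[str] = []
    for ch, child in sorted(trie.items()):
        if ch == "":
            out.append(prefix)          # a complete word ends here
        else:
            out.extend(words(child, prefix + ch))
    return out

# A trie holding "a", "an" and "ant".
t: Trie = {"a": {"": {}, "n": {"": {}, "t": {"": {}}}}}
```

Here `words(t)` yields `["a", "an", "ant"]`.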

Some of the "experimental features of static type systems" like dependent types are really powerful; you can make some strong proofs about the logic of your program. I'd be willing to give up (or at least try going without) duck typing for that.


Yeah, a lot of people's only experience with static types is C or Java. (A programming-language-theory PhD student friend of mine was first exposed to C while writing her dissertation. "This isn't a type system at all!" was the diagnosis.)

Also, sometimes in Haskell it's so hard to write a function that mostly works (because the type system sees your mistake but you don't) that you can't get enough of your half-baked idea down to see it clearly enough to figure out how it's supposed to really work correctly.

Also, I've seen plenty of programs that provably have no bugs, but still grow to exhaust all system memory or occasionally take several hours to return from a function call for no apparent reason. And those are a major pain to diagnose.


>Also, I've seen plenty of programs that provably have no bugs, but still grow to exhaust all system memory or occasionally take several hours to return from a function call for no apparent reason

I think you mean programs that type checked, not programs that provably had no bugs.


What if those programs were guaranteed to do the right thing, if only they had enough time or space to finish? Maybe they're still "correct".

Or, to put it another way: our existing type systems don't make code "provably correct", and when people say that they do (like the commenter above), they're trying to talk you into something.


>our existing type systems don't make code "provably correct"

That is what I am telling you.

>and when people say that it does (like the commenter above)

They said no such thing.


Yes, you're telling me something that I was already trying to demonstrate with my example.


I can't speak to your motive, all I can do is read what you posted. What you posted was wrong, and I explained that.


What I posted was obviously self-contradictory, which is not the same as wrong.


What you posted was wrong, like I said. There are programs that have proofs demonstrating they are free of bugs. You have not seen them do either of the things you claim to have seen. You were stating something false.


Some of the "experimental features of static type systems" like dependent types are really powerful; you can make some strong proofs about the logic of your program.

Yes, it is nice to be able to write proofs and have them checked mechanically. However, for nontrivial programs, even stating correctness (to a level of precision a proof checker can understand) is often very difficult. Once you've figured out how to do that, you still have to take the time to write the proof. There is also a cost associated with this capability even if you don't exercise it: dependent type systems (aside from things like Dependent ML, which restrict type indices to a much smaller language) tend not to play nicely with side effects like mutation, I/O, exceptions, nontermination, etc. So the proof system is not just something that's there for those times when you need it and fades away when you don't.


People really don't like spending 6 hours trying to get something to compile.

Compiling and running a possibly incorrect program that seems to work is more rewarding than trying to figure out how exactly it was that your compiler inferred a type different than you intended.


I have a bowl. What types should it accept?

Well certainly soup. And nuts. And everything in between.

Plus washers and ball bearings and my son's head for haircuts. Solids, liquids and gases need to implement the Bowlable interface. Just to be <type> safe, vacuums should probably implement Bowlable as well.

I've got some classes to write and inheritance to track. Then all that's left is to implement Microwavable, Edible, Spoonable, and Unhealthy. Then I can enjoy my Maruchan Ramen.

Bad things will happen if I put a can of Rustoleum in my Microwave and power it up on high for five minutes. So I'm not saying type safety doesn't matter. Just that it is not always better.
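Taking the joke half-seriously: a structural type system sidesteps most of the class-hierarchy bookkeeping, because nothing has to declare that it implements Bowlable; it only has to fit the shape. A sketch using Python's typing.Protocol (Python 3.8+; all names invented for the bowl example):

```python
from typing import Protocol

class Bowlable(Protocol):
    def fits_in_bowl(self) -> bool: ...

# Soup never mentions Bowlable; a structural checker (e.g. mypy)
# accepts it anyway because it has the right shape.
class Soup:
    def fits_in_bowl(self) -> bool:
        return True

def serve(item: Bowlable) -> str:
    return "served" if item.fits_in_bowl() else "rejected"
```

No inheritance to track: `serve(Soup())` type-checks and returns "served".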


Put pesticide in the bowl. Try to get soup out of the bowl. Feed to son. This old debate will rage on forever.


>what does everybody have against static typing?

A lack of understanding. People get very upset when you say this, but I have yet to see a single person argue against static typing accurately. Every single argument (oh look, a bunch have been posted in reply to you even) I've seen has been based on exactly this fallacious reasoning:

>When thinking 'static types', does everybody just think C/C++/Java?

Yep. Including the people who write dynamically typed languages even:

http://www.artima.com/intv/strongweak.html


Interesting concept, but the flaw at the heart is the presupposition that lack of progress right now means we are at the ultimate limit of what can be accomplished. Imagine if cavemen learning to paint on walls said, well we haven't improved in a few millennia, so this is probably the most complex thing that can be represented by drawings. Or what about math stopping with Euclid? It was thousands of years later that progress happened.

Technology comes in fits and starts. A lot of new things happened in the 50s and 60s, we are still trying to figure out ways to use and apply them. Just because someone thought about and prototyped something then doesn't mean it's not new when that feature goes mainstream (e.g. garbage collection in Java, channels for concurrency in Go, etc).

For a long time we couldn't break the sound barrier, which is a limit, but not a speed-of-light limit. Just because progress is stalled now doesn't mean there will never be progress in the future.


The author addresses this in the post. First by comparing major technological advances to earthquakes: "they occur at unpredictable intervals, with little prior warning before their emergence." At the end of the post he says that he doesn't think we've reached an ultimate limit and that the language design space needs to be explored more fully.


Which would mean even he doesn't agree with his ridiculous main point. But I think you're being too generous.


I think it's more of a thought experiment; just an interesting idea to entertain even if we haven't reached the limit. The more immediately applicable point is that of a cognitive limit, more so than the suggestion of a technological limit.


I think the author did address this, but you raise an interesting analogy. I remember Carl Sagan made a point in Cosmos that the natural sciences lost millennia of progress thanks to the dominance of Platonic thought and mysticism over empirical research and observation. I do not think it will take millennia to see it, but perhaps in a few decades we will see the present circumstances of programming language application in a similar light.


>" Imagine if cavemen learning to paint on walls said, well we haven't improved in a few millennia, so this is probably the most complex thing that can be represented by drawings."

The analogy between painting and programming languages has some precedent around the HN community. But the idea that contemporary painted images are more advanced than those on the walls of Lascaux is suspect because it is premised upon our acceptance of a belief that the ancient paintings were not well suited to their purpose.

It is more likely that the opposite is the case. Today, most paintings carry little significant meaning to their author's larger community. The odds that Salvador Dali's butcher was impacted by the armlessness of Venus de Milo are pretty low - never mind the $86 clown-on-steel collage at the Mission Thrift Store.

Art and programming languages have evolved. But that evolution is Darwinian not teleological. High-level languages are better adapted to the humans who write them but not the machines which run them.


But when our hypothetical Blub programmer looks in the other direction, up the power continuum, he doesn't realize he's looking up. What he sees are merely weird languages. He probably considers them about equivalent in power to Blub, but with all this other hairy stuff thrown in as well. Blub is good enough for him, because he thinks in Blub.

http://www.paulgraham.com/avg.html


We went backwards in development tools when we went to the web. It was necessary, but the RAD tools (Delphi, Visual Basic, yes, that VB) were far ahead of where we are today in the ability to put basic pieces together to make an application. I've not used Visual Studio in a long time, but I would hope they retained their core philosophy of reuse.


A symptom of this: we have Object Oriented languages that can't produce UI elements that are objects. And that's because we can't figure out how to let objects live from one web request to another? And that's because despite all the new database paradigms, nobody has made a widget-object model that can be saved to the database and reloaded transparently, and that's because our models of serializability of objects are brittle and unsafe (see the recent YAML bugs), and that's because we have no good systems for seamlessly migrating data structures from one version to another


Objects do too much to live in a heterogeneous world. We'd do better serializing maps (JSON is popular, of course, but EDN is better) and operating on them using generic functions rather than trying to bind functionality to data. It's easy enough to serialize the properties of a UI widget when the functionality is data-driven and lives elsewhere.


I think you're probably right? I mean, with the tools available in 2013, I certainly agree that the Clojure way is the least crazy. But I miss some OOP stuff when I'm doing that - like, I kinda like the organizational qualities of OOP; lots of the time I wish I had ruby-style polymorphism (on immutable structures?) instead of having to put all the logic in one function. Maybe that means I should just learn how to use defprotocol. But I guess that still doesn't give you an event model - what's the EDN equivalent of onClick ?


You should learn how to use records, protocols and hierarchies if you want to organize things that way in Clojure. You'll have to be a bit more ad-hoc about the same kinds of things when you're sending them to Javascript or storing them in a database.


RAD (Delphi, Visual Basic, yes that VB) were far ahead of where we are now today in the ability to put basic pieces together to make an application

Have you tried Django? It provides some rather large building blocks that fit together nicely and reasonable abstraction for constructing your own.


I've always had a kind of feeling that there is an upper bound on the rate at which a human can articulate an (original) idea/program/function in explicit enough terms for a computer to then run it.

Even to do this in the first place will always require a bare minimum understanding of the language of logic; basic control flow statements, variables and so on.

This is why I feel things like Bret Victor's idea of the 'MathKiller' (which as I understand the idea, is his general term for a hypothetical universally-intuitive computing environment which can model anything) are goals which we can only ever approach asymptotically - there will always be some uncharted waters where the only option available to those who want to explore further is simply to straight up write some code.

I guess the point I'm trying to make is that I feel improving the sophistication of the programming languages we use or changing the core paradigms upon which they are based will not help the situation; improving the sophistication of the tools we use to write them with will.


The author's argument is a classic literary criticism / philosophy argument and I'm kicking myself trying to remember which specific person/theory from philosophy / lit crit he is more or less paraphrasing.

Note that I'm not by any means accusing the original author of plagiarism or of trying to pull off a stunt. Convergent evolution makes perfect sense. Although comp sci languages are a bit more rigid than other human languages, trying to express creative / complicated stuff in a human-created language is an old, heavily discussed problem, even if the original author doesn't know it. Does this go all the way back to Plato? I can't remember, it's just too early in the morning.

The bright side is "we" as a species have been churning out new lit more or less continuously for a couple millennia, so even if lit production dies off at some point in the future (and I don't think it will) that means that in computer languages we "only" have a couple more millennia of productive programming left.


I agree in the sense that I don't think there's going to be a killer new programming language or language paradigm that's going to overturn the existing languages within the problem domains they're suited for.

Since the current high-level languages are so extensible, the new paradigm seems to be moving beyond programming languages to a higher level of abstraction based on frameworks and DSLs. We're not just "writing code," we're always writing code that writes code (that writes code, and so on, of course, until it's a stream of 0s and 1s). We do this because working at a higher level of abstraction is usually more productive. The recent crop of programming frameworks are just another layer on top of this. Next I guess we will have some sort of meta-frameworks on top of those. So the specific language choice will only matter insofar as it's a part of the framework stack.
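"Code that writes code" is already mundane at the small scale; for instance, Python's collections.namedtuple synthesizes a whole class from a one-line description:

```python
from collections import namedtuple

# namedtuple is a small code generator: it builds a class definition
# (with __init__, __repr__ and field accessors) from the names given.
Point = namedtuple("Point", ["x", "y"])
p = Point(x=2, y=3)
```

The generated class behaves like one written by hand: `p.x` is 2 and `p` prints as `Point(x=2, y=3)`.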


I like to envision something like 'augmented coding', where the concepts are driven by the programmer, and some friendly bot fills in the nitty gritty. I'm not talking about something like a GUI driven language, more like autocomplete - just that the autocomplete is doing way more than calling up a list of method names.


Charles Simonyi proposed doing this with human bots in his thesis back in 1976: http://www.parc.com/content/attachments/meta-programming-csl...

Seldom have I encountered a book that made me want to throw it at the wall. Wrong, wrong, wrong, but everybody should read it.


Stronger static typing could help with that a bit. After all, if you have powerful types the auto-complete could help more.

Also, if you have relatively simple types and building blocks, there are only a few simple functions of any given type.
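This is the idea behind type-directed search tools like Haskell's Hoogle: a sufficiently precise type leaves few candidate implementations. At the extreme, parametricity says a total function of generic type a -> a can only be the identity. A sketch of that type in Python:

```python
from typing import TypeVar

T = TypeVar("T")

def f(x: T) -> T:
    # Knowing nothing about T, the only value of type T we can
    # produce is the argument itself: f must be the identity.
    return x
```

An autocomplete that understood the type `T -> T` could, in principle, fill in this body unaided.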


Something like a Lisp macro but including user (i.e. developer) interaction?

On second thoughts - that sounds a bit too much like a Visual Studio wizard/template.


In a sense, that's what a good optimizing compiler is. You were probably thinking of something more abstract, but I submit that the difference is more quantitative than qualitative. We already have the kind of thing you want; you just want a better one.


I don't understand. When I say, in Ruby, `"hello".upcase`, I'd say I'm expressing a concept, and a whole lot of nitty gritty - memory allocation, object instantiation, method lookup, etc, is being "filled in" for me.

This works because the code I type still expresses exactly what I want. The thing the computer can't do for me is specify exactly what I want.

Are you looking for more of the same, or some "guess what I want"?


Why not make the bot a function and just call it in your code?

There are things like this already; for example, scaffolding in Rails.


Yes; we would first need some unambiguous way of communicating with the IDE what we want it to do, and we'll call that a "programming language"


Sounds like AI and maybe machine learning. Once we get to that point though, why do we need a person telling the bot what to do?


If you don't intentionally give it a god, repeated experimental evidence is that it'll invent (at least) one for itself, so you'd best keep yourself in the loop, unless you really trust evolution or don't really care what happens.


Hate to say it, but... "Clippy" in your IDEs, but way, way better. Or to give more detail, "intelligent" agents that know some things about your domain, rather than just the language like we have now with IntelliJ, ReSharper, etc.

And I still think what Simonyi and crew are doing over at http://www.intentsoft.com/ is the future of development, even if their specific products never make it into the mainstream


A post on the future of programming languages without mentioning Agda, Coq, or OMeta?

I think we are just now getting beyond the "low-hanging fruit" era of programming languages.


Haskell's type system can be unwieldy (and at the edge of experimentation, it often is), but that's because it's not so different from a programming language itself. That said, both the type system and Haskell itself are built from incredibly simple pieces.


How would the invention of scissors fit into that reasoning?

I vote for the tooling option or, to be more precise, for a different combination of tooling and features.


I hope the author spends some time learning about programming languages and programming language research. The core point is simply false, and the evidence to support it ranges from flimsy to nonsense.

"The plain truth about programming languages is that while there have been many small gains in the last 20 years, there have been no major advances"

There have been many major advances. The fact that they were not included in java does not mean they do not exist.

>There have been no new paradigms

Arguably there have only ever been two paradigms: imperative and functional (object oriented and procedural simply being minor variations of the imperative paradigm). It is not reasonable to expect entirely new paradigms to be discovered on anything other than an incredibly rare basis.

>I'm not even aware of major new language features

The first thing this should do is trigger your "I should research new language features" instinct, not your "I should assume there are none" instinct.

>beyond some aspects of static type systems

So, he does know some, but chooses to ignore them because why exactly?

>The core of virtually every extant programming language is largely similar.

No. And it is entirely possible for new things to replace old things, programming languages are not required to take the C++ approach of accumulating every possible feature that has ever existed.

>Some of the things Haskell and Scala's type systems can express are astonishing; but I have also seen each type system baffle world-renowned experts

That is an awfully bold statement to just pull out of nowhere with nothing to back it up. Who are these experts, and what baffled them exactly? Haskell has been a hotbed of programming language research in the last decade, with a large number of advances being made and being put into actual use, then more advancements being built on top of those. Dismissing the entire concept of type systems based on an unnamed "expert" who was somehow "baffled" by some unmentioned aspect of the language is insane.

>Verification techniques and tools have made major advances

Yeah, like those crazy type system things you just dismissed as being unconvincing, oversold, and baffling to experts. Go learn Agda and then tell me nothing new has happened in 20 years.


> I hope the author spends some time learning about programming languages and programming language research.

The author is a programming language academic. See: http://tratt.net/laurie/research/pubs/


I am aware. And as I said, I hope he spends some time learning about programming languages and programming language research. Not just a tiny subset of virtually identical languages that are not used for research. Look at what he has published. His work has been entirely in the world of unityped languages, and his characterization of type system research is "some stuff has happened, but I don't know what it is, and it's types, so who cares". Making claims about programming languages as a whole requires a far broader knowledge base than he appears to have acquired. Very deep knowledge in a very small space is certainly very useful, but not for making general statements about broad topics.


So, he does know some, but chooses to ignore them because why exactly?

My initial guess as to why was that a lot of type systems research seems to be more focused on systems that are only really accessible to other researchers (i.e. not usable by rank-and-file programmers).


>My initial guess as to why was that a lot of type systems research seems to be more focused on systems that are only really accessible to other researchers

Except that quite a bit has gone on in Haskell, and is in practical real-world use right now.



