
Now, I often get downvoted because I (wrongly) chose a hill to die on... but this one? Come on guys!? JavaScript is described in a standard that's like a 50-page PDF, most of which isn't code... Python has hundreds of modules in its standard library. It's humongous compared to JavaScript. I don't understand why this is such a controversial idea...


I didn't downvote you (I'm just reading this thread for the first time), but I don't get your original argument. Usually benchmarking two languages means either:

* microbenchmarks comparing speed of doing stupid things (like adding 10000 integers or sorting a list with bubblesort 1000 times)

* 'real world'-ish benchmarks comparing idiomatic solutions in two languages doing the same thing

In both cases it doesn't matter (much) how big a standard library is. If you want to compare two languages doing something complex, you need an implementation of that something (standard or not) for both languages.

But maybe I (and possibly others) have missed your point?


Well, people who write such benchmarks have no business writing benchmarks, and the comments on such comparisons would usually say as much.

Such benchmarks don't compare anything in a meaningful way, and, in the second case, don't even compare what they claim to compare (you aren't comparing the languages if you run some third-party code on top of them, which has nothing to do with how the language itself is implemented).

These "benchmarks" just score some holy-war points for people who either irrationally like, or irrationally hate, a particular language...


> you don't compare the languages if you run some third-party code on top of it, which has nothing to do with how the language itself is implemented

What if "the language itself" provides arbitrary precision arithmetic by wrapping GMP, and GMP is not written in "the language itself"?

https://gmplib.org/

https://hackage.haskell.org/package/integer-gmp
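For what it's worth, the Python side of this comparison is similar: the built-in `int` is arbitrary precision, but in CPython the implementation lives in C, not Python. A minimal illustration:

```python
# Python's built-in int is arbitrary precision out of the box; in CPython
# the implementation (Objects/longobject.c) is C, not Python -- much like
# Haskell's Integer being backed by GMP.
x = 2 ** 1000           # far beyond any machine word size
print(x % 10)           # -> 6 (powers of 2 end in the cycle 2, 4, 8, 6)
print(len(str(x)))      # -> 302 decimal digits
```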


A language either has a standard that defines what the language is (and isn't), or it has something similar to a standard (which makes the definition more vague). In either case, there will be some sort of document, accepted by the majority of the language's users, that states what is and isn't the language.

If the language's document states that the language has arbitrary precision operations, then the authors of the language are free to implement it however they want, be it monkeys with an abacus; it's still part of the language.


So we're in a "no true Scotsman" situation then, where no benchmark is ever useful.

I kind of disagree on the meaninglessness of microbenchmarks: they give a feel for the performance, even if they aren't a perfectly rigorous apples-to-apples comparison.

Like, if decoding a large JSON document takes 3 milliseconds in one language and 2 minutes in another, that signals that the second language is a worse fit for certain projects, even if the benchmark isn't super rigorous.
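To make the kind of microbenchmark being discussed concrete, here's a minimal sketch using Python's standard `json` and `timeit` modules; the document size and repeat count are arbitrary choices, not anything from the OP's test:

```python
import json
import timeit

# Build a moderately large JSON document (size chosen arbitrarily).
doc = json.dumps([{"id": i, "name": f"item-{i}"} for i in range(10_000)])

# Time repeated decoding; this measures the stdlib parser, not "the language"
# in any deep sense -- which is exactly the objection raised upthread.
elapsed = timeit.timeit(lambda: json.loads(doc), number=10)
print(f"10 decodes took {elapsed:.3f}s")
```

The same shape of test in Node.js (`JSON.parse` in a loop) would give a number you can line up next to this one, with all the caveats about confounds discussed below.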


How did you get this impression? No, we are not in a no-true-Scotsman situation.

In order to establish that one language is faster than another, you can devise a convincing metric. It has to state which aspects you are taking into account and what your baseline assumptions about measuring speed are. Those using your experiment will then be able to make practical use of your results, because they will be able to interpret what you mean by the speed of a language.

The test in the OP gives nothing like that. It ignores a lot of common testing practice: it fails to control for many obvious confounds, fails to provide any sensible explanation of what is meant by the speed of the language, and uses unreliable measuring techniques. It's just a hands-down awful way to test anything by the standards of people in the field of performance testing.

To further illustrate my point: suppose you trust the OP when they say that Python and JavaScript are roughly similar when it comes to the speed of calculating Fibonacci numbers. You then take a 16-core server and run 16 Python processes calculating Fibonacci numbers, and in another instance you take the same server and run 16 JavaScript processes doing the same... only to discover that, e.g., JavaScript now does 16 times better than Python (because Python decided to run all the programs on the same core).

Note, I'm not saying that that's how Python actually works. All I'm saying is that the author doesn't control for this very obvious feature of contemporary hardware and never explicitly states any baseline assumptions about how the program is supposed to be executed. This is bad testing practice; intentionally or not, it can be used to score "political" points, to promote one language over another.
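The parallel scenario described above can be sketched with Python's standard `multiprocessing` module; the recursive workload and per-worker input are illustrative assumptions, not the OP's actual test:

```python
import time
from multiprocessing import Pool, cpu_count

def fib(n: int) -> int:
    # Deliberately naive recursion: a pure-CPU workload of the kind
    # these microbenchmarks typically use.
    return n if n < 2 else fib(n - 1) + fib(n - 2)

if __name__ == "__main__":
    # One task per core; whether the OS actually spreads the worker
    # processes across cores is exactly the confound discussed above.
    start = time.perf_counter()
    with Pool(cpu_count()) as pool:
        pool.map(fib, [28] * cpu_count())
    print(f"{cpu_count()} workers: {time.perf_counter() - start:.2f}s")
```

Comparing this wall-clock time against a single-process run of the same workload is one way to make the scheduling assumption explicit instead of leaving it unstated.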


I dislike the habit we see here of downvoting without commenting on why the comment is being downvoted.

We can only guess, but I believe it's because, even though it's true that Python packs a lot more into its standard library than JS (more batteries means more time spent maintaining those batteries and less time spent optimizing the interpreter), standard benchmarks do away with all of that and focus on simple problems, like adding two numbers, to see how each language behaves.



