
Did supercomputers ever produce something meaningful or did advancement usually come out of more "scrappier" setups?

I remember hearing a lot about rankings of supercomputers, but less so about what they actually achieved.



Yeah, they do all the time. I remember in my parallel computing course back in grad school, where we got to use an 800-core test machine, people were running simulations of different weather patterns and climate change, earthquake simulations, and whatnot. A lot of that can be done by taking advantage of all of those cores. Academia especially relies on these machines to get closer to the "physics", within clear, discrete limitations.


What is the difference between a supercomputer and a million ordinary servers connected together?


Typically:

  1. Low-latency network, 1-2us. Most servers can't ping their local switch that quickly, let alone the most distant switch in a 1M-node system.
  2. High-bandwidth network, at least 200 Gbit/s.
  3. A parallel filesystem.
  4. Very few node types.
  5. Network topology designed for low latency/high bandwidth: things like a hypercube, dragonfly, or fat tree.
  6. A software stack that is aware of the topology and makes use of it for efficiency and collective operations.
  7. Tuned system images to minimize noise, maximize efficiency, and reduce context switches and interrupts. Reserving cores for handling interrupts is common at larger core counts.
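To get a feel for why point 1 matters at scale, here's a back-of-envelope Python sketch. The tree-shaped allreduce hop count and the ~100us commodity-network figure are my own illustrative assumptions, not from the comment above:

```python
import math

def allreduce_latency_us(nodes: int, hop_latency_us: float) -> float:
    """Rough model of a tree-based allreduce: reduce up the tree and
    broadcast back down, ~2 * log2(N) hops total, each paying one
    network latency. Ignores bandwidth and software overhead."""
    hops = 2 * math.ceil(math.log2(nodes))
    return hops * hop_latency_us

# 1M nodes: ~40 hops either way, so per-hop latency dominates.
hpc_fabric = allreduce_latency_us(1_000_000, 1.5)    # ~1.5us per hop
commodity = allreduce_latency_us(1_000_000, 100.0)   # ~100us per hop
print(f"HPC fabric: {hpc_fabric}us, commodity: {commodity}us")
```

On those assumptions a single collective costs ~60us on an HPC fabric versus ~4ms on a commodity network, and tightly coupled simulations do collectives constantly, which is why the fabric latency is the first thing on the list.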


Simplicity of the programming model, basically, though in the end it all just comes down to bandwidth and latency.


Communication speed / latency is a big one. Sometimes it matters how quickly extremely large volumes of data can be sent between cores.


Supercomputers exist in meaningful part to compensate for our lack of ability to do nuclear tests. This is why the national labs run them.


> Did supercomputers ever produce something meaningful

Supercomputers do all the hard work at research universities all the time. Hell, astrophysics and research involving telescopes and observatories use 'em all the time.


Yes, absolutely. Most climate models run on supercomputers, same with molecular dynamics, large scale fluid dynamics, energy systems simulations and of course a whole lot of weapons research.


> supercomputers ever produce something meaningful

Absolutely, they contribute to research all of the time.

Some of them have pages where they list research outputs that they enabled (though this is of course limited to what authors tell them about!).


Ever-better weather forecasts, for one. I remember that, about two decades ago, weather forecasts were still rather wobbly and could only see a couple of days into the future. Now a 10-day forecast is routine, and surprisingly good. Much of that improvement came about as a result of more powerful supercomputers.


Weather forecasts are vastly more accurate because of supercomputers. And they're improving all the time.


We've never had architectures that scale so effectively, unlocking new cognitive capabilities by just increasing parameters/exaflops/datasets without writing a lot more code or changing the architecture. Ilya Sutskever mentioned this in some interview, that transformers are the first with that property but probably won't be the last or best.


Most of the research done at CERN, for example.

Besides the outcomes that were adopted by industry: before cloud computing there was grid computing, created exactly to manage such resources at scale.

https://en.wikipedia.org/wiki/Worldwide_LHC_Computing_Grid


I think part of this comes down to your definition of "supercomputer", but I mean, pretty much the entire internet is powered by servers. I've never worked there, but I'm assuming that AWS data centers have very powerful computers designed to handle thousands of VMs/containers each, and I suspect with all the AI hype, a large percentage of them have very beefy GPUs in there as well.

If you're talking about the more stereotypical "high performance supercomputers", I think that they are still used very liberally within the defense industry. I think Lockheed Martin, for example, uses them for CFD analysis.


Google might've been built on a laptop, but it can't scale on a laptop. Same applies to coding an algorithm on a scrappy setup, and then scaling it to sequence DNA or simulate a phenomenon.


Not sure what this means, because Google effectively scaled on laptops (generic x86).


In my head the way I differentiate "supercomputers" (national labs) and "warehouse-scale computers" (google/amazon/azure) is:

1. Workload: for national labs this is mostly sparse fp64, in my understanding; warehouse-scale computing is lots of integer work, highly branchy, with lots of pointer chasing, stuff like that.

2. Latency/reliability vs. throughput: warehouse-scale computing jobs often run at awful utilization, in the 5-20% range depending on how you measure, in order to absorb shocks of various kinds and provide nice abstractions for developers. Fundamentally these systems are used live by humans, and human time is very valuable, so making sure the system stays up and returns quickly is paramount. In my understanding, supercomputing workloads are much more throughput-oriented: you need to do an enormous amount of computation to get some answer, but it doesn't much matter whether the answer comes in one week or two.

3. Interconnect: warehouse-scale computing workloads are mostly fairly separable, and the place where different requests become entangled is the database. In the supercomputing world, in my understanding, there are significant interconnect needs all the time, so extremely high-performance networking is emphasized.
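One standard way to make the utilization point concrete is basic queueing theory. This is my own illustration, not from the parent comment, and it assumes a simple M/M/1 queue, which real serving systems only loosely resemble:

```python
def mm1_slowdown(utilization: float) -> float:
    """M/M/1 queue: mean time in the system, relative to the bare
    service time, is 1 / (1 - rho). Latency blows up as utilization
    (rho) approaches 100%, so latency-sensitive fleets keep headroom."""
    if not 0.0 <= utilization < 1.0:
        raise ValueError("utilization must be in [0, 1)")
    return 1.0 / (1.0 - utilization)

# At 20% utilization a request sees ~1.25x its bare service time;
# at 90% it sees ~10x. A throughput-oriented batch job doesn't care
# about per-request latency, so it can run the machine flat out.
print(mm1_slowdown(0.2), mm1_slowdown(0.9))
```

That asymmetry is exactly why a serving fleet at 5-20% utilization can still be well-engineered, while a supercomputer scheduler aims to keep every node busy.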


Nice ontological classification, thank you!


Yes, especially when you account for shared university supercomputers, and especially when it took a supercomputer to do much of anything.

Generally, given that there are far more less-powerful computers, accessible to much scrappier interests, one would expect more innovation to happen on them.


Weather simulations and forecasting are very useful to society, and practically all publicly available weather forecasting datasets were computed on some supercomputer cluster.



