Weak vs. Strong Memory Models (2012) (preshing.com)
49 points by Tomte on Feb 13, 2022 | hide | past | favorite | 14 comments


> The C11 and C++11 programming languages expose a weak software memory model which was in many ways influenced by the Alpha.

What this doesn't mention is that the software memory model for a systems language like C++ should be pretty much as weak as possible, so that the compiler can generate good code on platforms that range from strong to weak. If the software memory model is weaker than your hardware's, it just works, because the compiler knows which hardware reorderings are possible. If it's stronger, optimal code generation is pretty much impossible, because you're forced to take the stronger semantics even in places where you don't need them.


Yup, Java with its strong memory model, introduced with JSR 133, gave a whole lot of different headaches to the people implementing the JVM on ARM. I'm still not sure whether it's 100% reliable compared to the x86 implementation.

My friends' practical experiences suggest it's still bugged here and there.

Edit: you gotta love how people on HN will downvote you for sharing factual information on a topic. Are we levelling down to Reddit's standards?


Can you please elaborate? Software running against a strong memory model is actually simpler and less buggy than software written against a weak memory model. In a weak memory model each atomic operation has 5 modes, whereas in a strong model they're all equivalent. Most programmers can barely write simple atomic code, and the weak memory model is trickier to wrap one's head around (at least I still struggle to properly understand it versus using high-level shortcut heuristics).

A strong memory model is also trivial to implement on a weak-memory system, whereas the reverse is impossible. I can believe that Java on ARM can have bugs (fewer customers, relatively newer than x86, etc.). It can also have bugs because someone wrote weak operations that were only tested on a strong-memory-model machine (i.e. latent bugs that wouldn't surface until run on a weak-memory machine). Or heck, it can have bugs precisely because someone wanted to leverage weak operations for performance but did so incorrectly.

Side note: The M1 is an interesting CPU because it implements both strong and weak memory models directly in HW.

I don’t know why you got downvotes but what you provided was more of an opinion than fact (my post too is opinion).


> SW running against a strong memory model is actually simpler and less buggy than SW written against a weak memory model

This is exactly what my comment is implying. On x86 you have very strong memory-ordering consistency in the form of TSO. This not only helps when implementing synchronisation primitives but in practice also enables techniques such as synchronisation piggybacking, which isn't safe on weaker models such as ARM's.

> what you provided was more of an opinion than fact

Don't really know how to respond to that, beyond: if the differences between memory models are a matter of opinion rather than fact, then the following wiki page shouldn't even exist:

https://en.m.wikipedia.org/wiki/Memory_ordering


I think you misunderstood what I was saying. Implementing a strong memory model on a weak-memory system is trivial. I'm not sure what point you're trying to make: that Java using a strong memory model for the language somehow causes problems for porting the VM to ARM's weaker model? Any porting issues would result from trying to eke out more performance by taking advantage of weaker atomic ops in the C implementation of the VM. (A strong memory model, by the way, is widely accepted as an incorrect abstraction for the HW because it leaves so much performance on the table; x86 is stuck with it for back-compat reasons.) None of this would be visible to running Java programs themselves, and arguably Java choosing a strong consistency model is the right choice considering how much of a foot-gun a weak memory model implies.

A factual post might try to back up the claim with specific bugs or more detailed analysis that has been done.


I must say, as a non-expert on the topic, I don't understand the downvotes. I wish people who downvoted would take a moment to explain what they disliked.


Maybe downvotes could require a comment, or else you lose karma?


I'm running a Java server that is heavily concurrent on ARM; what bugs were mentioned, in particular?


Can you recommend some good resources on memory/consistency models, memory ordering, memory barriers, and related subjects? I find that in most blogs everything is explained piecemeal and never comprehensively. The problem is that there are both compiler orderings and barriers vs. processor orderings and barriers, and you have to know both for an effective understanding.


Not ideal but a good place to start would be SO:

https://stackoverflow.com/questions/tagged/memory-barriers

https://stackoverflow.com/questions/tagged/memory-model

https://stackoverflow.com/questions/tagged/java-memory-model

Also, the x86 tag on SO has plenty of memory model-focused discussions although it's only TSO.


I've been collating resources I've found here https://www.philipzucker.com/notes/CS/Concurrency/

The replies to this John Regehr thread were particularly rich with links https://twitter.com/johnregehr/status/1451355617583460355?s=...


FYI: You may find this book useful for a comprehensive (though dated) overview of concurrency paradigms: Foundations of Multithreaded, Parallel, and Distributed Programming by Gregory Andrews (https://www2.cs.arizona.edu/~greg/).


The OP's blog links its learning resources, which include some freely available books, like https://cdn.kernel.org/pub/linux/kernel/people/paulmck/perfb...


https://man7.org/linux/man-pages/man2/membarrier.2.html provides a good example of when a processor memory barrier is eliminated but a compiler memory barrier is still required.



