> Well, you have to have multiple threads of control to run on multiple processors simultaneously. And if you're running on multiple processors simultaneously, you have multiple threads of control.
That depends on the semantics of your programming language. You may have a different perspective than the author, but I think his explanation is clear.
Edit: to add a concrete example, I am currently working on a concurrent Haskell program that simulates a system of communicating nodes. However, I am running it on my single-core netbook. So we have indeterminacy and concurrent semantics without parallelism.
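That scenario can be sketched in a few lines. This is a hypothetical illustration, not the actual program: when a GHC binary is built without the `-threaded` flag (or run with `+RTS -N1`), all of its lightweight threads are interleaved on a single core, so `forkIO` gives you concurrent semantics and scheduler-dependent ordering with no parallelism at all.

```haskell
-- Hypothetical sketch of two communicating "nodes" in Concurrent Haskell.
-- Without GHC's -threaded flag, both forkIO threads are interleaved on
-- one core: concurrency without parallelism.
import Control.Concurrent (forkIO)
import Control.Concurrent.Chan (newChan, readChan, writeChan)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)

main :: IO ()
main = do
  chan <- newChan          -- message channel between the two nodes
  done <- newEmptyMVar     -- signals that the receiver has finished

  -- Node A: sends a few messages, then a stop marker.
  _ <- forkIO $ mapM_ (writeChan chan) ["ping", "ping", "stop"]

  -- Node B: echoes messages until it sees the stop marker.
  _ <- forkIO $
    let loop = do
          msg <- readChan chan
          if msg == "stop"
            then putMVar done ()
            else putStrLn msg >> loop
    in loop

  takeMVar done            -- block main until node B is done
```

Both threads make progress and their interleaving is up to the runtime scheduler, yet only one instruction stream ever executes at a time.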
Well, not being someone from the functional programming world, I don't think the explanation is clear at all.
So you're saying that concurrency is a matter of the semantics used in programming, while parallelism is about the underlying problem?
I might be able to agree with that, but if that's the argument, that is what the author of the blog post should have stated.
Anyway, please let me know if I'm understanding your point.