Frankly, this isn't much of an explanation. I mean, what does it mean for one thing to be "near" to another, in this case? (Answer: nothing, which is why we don't use terms like "near" in computer science.)
By parallel programs I think he means that the multiple modules are never near the same thing and thus never near each other.
I think what you're saying here is that some problems are "embarrassingly parallel," in the sense that the problem can be broken into many completely independent sequential computations.
But I don't think that's what the guy who wrote the article was talking about. If so, I think he would have said so.
What 'near'/'far' means depends on the space you're talking about. If we're talking about the space of memory locations, then two modules are near each other if they both have access to the same storage location. You can break that nearness, for instance, by making things asymmetric: allow one module only read access and the other only write access. In that case, what you've done is make it so that the triangle inequality no longer holds on the space of memory locations (at least with respect to those two modules).
It wasn't an analogy but it can be made as rigorous as you like.
Let S be the set of (addressable) storage locations, and let d(x, y) = |x - y|, the distance between storage locations (a nonnegative integer). The triangle inequality holds: d(x, z) <= d(x, y) + d(y, z). d is also clearly symmetric, and d(x, y) = 0 exactly when x = y; thus S and d form a metric space.
Suppose module A contains a pointer to storage location x.
Suppose module B contains a pointer to storage location y.
Suppose A and B each contain a function which reads from z and writes to x or y respectively.
If, with respect to A, d(x, z) = n < infinity, then, using pointer arithmetic, A can write to z and thus write to y. The distance between A and B is bounded (effectively it's n).
On the other hand, if, with respect to A, d(x, z) = infinity (the value infinity might mean d threw an exception or something), then the distance between A and B is effectively infinite.
So the concurrency/parallelism dichotomy boils down to this:
Programs in which the distance between modules is finite are concurrent; programs in which the distance between modules is infinite are parallel.
This example uses memory locations, but the construction applies just as easily to other relevant spaces.