Actually, most of the uses of the word "emergent" I encounter are exactly as the LW article describes. If you replaced the word "emergent" with "magic", you'd learn nothing more and nothing less from the sentence. [0]
As for feeling foolish after playing with evolutionary algorithms - I'm not the OP, but I can relate somewhat, given how I saw people learn evolutionary algorithms and neural networks at my university (and I'm pretty sure it's not a local phenomenon). Evolutionary algorithms are usually explained as inspired by biological evolution, with an implicit (and sometimes explicit) note that "evolution made us, therefore evolution is superpowerful, therefore evolutionary algorithms - which are just evolution in code - will be superpowerful too!". Except they're not, and that whole line of reasoning is bullshit. It's a belief in the Random Number God. Throw enough shit at the wall and something will stick. Evolution is terribly, terribly inefficient, and so are evolutionary algorithms.
Sure, this inefficiency gives them some interesting properties that may help them escape particular types of local optima, etc. But those are mathematical features of a class of algorithms; they have nothing to do with, and borrow no power from, evolution or magic.
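To make the "throw stuff at the wall" point concrete, here's a minimal Python sketch of what a basic evolutionary algorithm boils down to: random perturbation plus keep-the-best selection. The toy fitness function and all the parameters are made up for illustration, not taken from any real library:

    import random

    def fitness(x):
        # Toy objective to maximize. Any black-box score works here.
        return -(x - 3.14) ** 2

    def evolve(pop_size=50, generations=200, mutation_scale=0.5):
        # "Evolution" reduced to its computational core:
        # random perturbation (mutation) + keep-the-best (selection).
        population = [random.uniform(-10, 10) for _ in range(pop_size)]
        for _ in range(generations):
            # Selection: keep the fitter half.
            population.sort(key=fitness, reverse=True)
            survivors = population[: pop_size // 2]
            # Mutation: offspring are noisy copies of the survivors.
            offspring = [s + random.gauss(0, mutation_scale) for s in survivors]
            population = survivors + offspring
        return max(population, key=fitness)

    print(evolve())  # converges near 3.14, slowly and noisily

The injected noise is also what occasionally knocks the search out of a local optimum - a property of stochastic search in general, nothing specifically "evolutionary" about it.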
The whole problem stems from people trying to transfer the virtues of biology to computing via a surface metaphor. There's a post on LW that covers it nicely:
"So... why didn't the flapping-wing designs work? Birds flap wings and they fly. The flying machine flaps its wings. Why, oh why, doesn't it fly?"
Or, about neural networks:
"A backprop network with sigmoid units... actually doesn't much resemble biology at all. Around as much as a voodoo doll resembles its victim. The surface shape may look vaguely similar in extremely superficial aspects at a first glance. But the interiors and behaviors, and basically the whole thing apart from the surface, are nothing at all alike. All that biological neurons have in common with gradient-optimization ANNs is... the spiderwebby look."
I encounter a lot of similar "medieval thinking" in CS departments. I don't know why. It probably goes hand in hand with not caring about how the world actually works.
[0] - It's also a good trick I picked up while hanging around on LW: if you don't know why something happens, label it as unknown explicitly. Say "this process is driven by magic" or "caused by Divine Intervention" instead of inventing equally uninformative but sciency-sounding labels like "emergent behaviour" or "spontaneous self-organization". This way you'll never forget that your theory still has holes that need to be filled in, and you won't accidentally confuse yourself (or others).
Who is saying that the analogously named models are equivalent to the analogs themselves? Analogous terminology and metaphors exist so that people can ease themselves into a deeper understanding of a subject. I agree that some people can incorrectly draw grand conclusions from a simple name, but that doesn't mean the people who use these "things" with these "names" understand their mechanisms only on a superficial level. Seriously, good luck explaining any model without introducing an analog that we, as humans, can relate to.
Sure, I'm not saying analogies are the problem. They're not; we need them - they're an important part of our cognition.
My point was twofold: a/ beware inference from surface analogies - always strive to understand where the border between similar and different properties lies; and b/ it does happen - people do draw conclusions and build their understanding on superficial analogies in a systematic way. That's why I mentioned the anecdote about the CS students I know; I've seen it in real life. Also known as cargo-culting, it's unfortunately not a rare phenomenon.
Also, I'm not the CDDDDARP ((cddddar this-comment)-poster), but I hazard a guess that he "felt sort of foolish" because he discovered he had accidentally done some inferring from surface analogies and ended up disappointed.