
I'm not sure the phenomenon you've encountered is unique to the evolutionary computation community: if you go to an academic conference dedicated to a specific methodology, the attendees have a vested interest in that methodology (they may have built their careers on it), so that hammer will certainly end up finding many questionable 'nails'.

In my opinion, EAs are most useful when you don't have a more specialized algorithm fitted to the problem, and particularly when a gradient of your objective function cannot be computed. For example, when you are trying to evolve a controller for a many-jointed robot in a complex simulation with a high-level objective function.
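To make the "only objective evaluations, no gradient" point concrete, here is a minimal (1+1) evolution strategy. It is a toy sketch, not a production optimizer; the objective and all parameters are illustrative:

```python
import random

def one_plus_one_es(objective, x0, sigma=0.1, iterations=1000):
    """Minimal (1+1) evolution strategy: mutate the parent, keep the
    better of parent and child. Uses only objective evaluations."""
    parent = list(x0)
    best = objective(parent)
    for _ in range(iterations):
        child = [xi + random.gauss(0.0, sigma) for xi in parent]
        score = objective(child)
        if score < best:  # minimization
            parent, best = child, score
    return parent, best

# A non-differentiable black-box objective with minimum at (3, -1)
f = lambda x: abs(x[0] - 3) + abs(x[1] + 1)
x, fx = one_plus_one_es(f, [0.0, 0.0])
```

Real EA libraries add step-size adaptation, populations, and recombination; the point here is only that nothing in the loop needs a derivative.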



Thank you for your comment. I hope I am not being too harsh; I have just been very peeved by my experience. The downvotes are already here, which is not unexpected.

From my experience (and this was a while ago, so I will be glad if it has changed), the community prefers to push its techniques as snake oil rather than nail down their characteristics and show how to match a technique to the function I want to optimize. I want principled guidelines that generalize beyond anecdotes. I have no problem with communities pushing their techniques; on the contrary, my disenchantment is with them not doing this.

I want the community to produce re-usable pieces of interesting/novel information that I can use when I am faced with optimizing a function.

You mentioned non-differentiable functions. Now let's take a look at a subclass of these: convex non-differentiable functions. There are very efficient methods for these (e.g., subgradient and bundle methods).
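For instance, the classical subgradient method handles a kinked convex objective with a diminishing step size. A minimal sketch; the step-size rule and the example function are my own choices:

```python
import math

def subgradient_method(f, subgrad, x0, steps=500):
    """Subgradient method with diminishing step size 1/sqrt(k).
    Tracks the best iterate, since f need not decrease monotonically."""
    x = x0
    best_x, best_f = x0, f(x0)
    for k in range(1, steps + 1):
        x = x - (1.0 / math.sqrt(k)) * subgrad(x)
        if f(x) < best_f:
            best_x, best_f = x, f(x)
    return best_x, best_f

# f(x) = |x - 2|: convex, with a kink exactly at the minimizer
f = lambda x: abs(x - 2)
subgrad = lambda x: 1.0 if x > 2 else (-1.0 if x < 2 else 0.0)
x, fx = subgradient_method(f, subgrad, 0.0)
```

With a diminishing step size the best iterate converges for any convex f, which is exactly the kind of provable guarantee I'm asking for.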

Consider another class; let's throw away convexity and consider functions that are non-differentiable and very rough locally, but well behaved once passed through a low-pass filter. Then, again, we know what to do.
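A sketch of that idea in the spirit of Gaussian (randomized) smoothing: estimate the gradient of the filtered function by sampling and descend on that. The objective and every parameter here are illustrative choices of mine:

```python
import math
import random

def smoothed_grad(f, x, sigma=0.5, samples=200):
    """Two-sided Monte Carlo estimate of the derivative of the
    Gaussian-smoothed function E[f(x + sigma*u)], with u ~ N(0, 1)."""
    acc = 0.0
    for _ in range(samples):
        u = random.gauss(0.0, 1.0)
        acc += (f(x + sigma * u) - f(x - sigma * u)) * u
    return acc / (2.0 * sigma * samples)

def smoothed_descent(f, x0, lr=0.1, iterations=200):
    x = x0
    for _ in range(iterations):
        x -= lr * smoothed_grad(f, x)
    return x

# Quadratic bowl plus a high-frequency ripple; the ripple vanishes
# under the Gaussian low-pass filter, leaving the smooth bowl
f = lambda x: (x - 1.0) ** 2 + 0.1 * math.sin(50.0 * x)
x = smoothed_descent(f, 5.0)  # lands near the minimizer x = 1
```

Convolving with a Gaussian attenuates the high-frequency ripple almost entirely, so the sampled gradient is effectively that of the underlying bowl.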

Consider functions that are a difference of two convex functions, each potentially non-differentiable (this is a huge class: the difference need be neither differentiable nor convex). Again, we have good ideas about what to do.
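One such idea is the DC algorithm (DCA): linearize the concave part at the current iterate and solve the resulting convex subproblem. A toy sketch with a 1-D example of my own choosing; in general DCA only guarantees convergence to a critical point, though here it happens to reach the global minimizer:

```python
def dca(solve_convex, subgrad_h, x0, iterations=50):
    """DC algorithm for f = g - h (g, h convex): pick y in the
    subdifferential of h at x_k, then minimize the convex
    surrogate g(x) - y*x."""
    x = x0
    for _ in range(iterations):
        y = subgrad_h(x)
        x = solve_convex(y)  # argmin_x g(x) - y*x
    return x

# f(x) = x^2 - 2|x - 1|: nonconvex and non-differentiable,
# decomposed as g(x) = x^2 and h(x) = 2|x - 1|
subgrad_h = lambda x: 2.0 if x > 1 else (-2.0 if x < 1 else 0.0)
solve_convex = lambda y: y / 2.0  # argmin_x x^2 - y*x
x = dca(solve_convex, subgrad_h, 3.0)  # reaches x = -1
```

Each subproblem is convex even though f itself is not, which is what makes the decomposition useful.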

I think building this decision function, Problem_type -> preferred_algorithm, is a very useful exercise. What annoys me is that the GA community seems uninterested in this and takes cheap shots by presenting anecdotes.

Prove properties of your techniques, and I will buy them by the bagful.


Maybe not exactly what you want, but this book has some useful information: http://cleveralgorithms.com/nature-inspired/index.html .

Check out the "Heuristics" section of each algorithm's description. For example:

"Differential evolution was designed for nonlinear, non-differentiable continuous function optimization."

"NSGA was designed for and is suited to continuous function multiple objective optimization problem instances."



