Well, not quite. While SVMs gained a lot of popularity for having nice properties, e.g.:

1) they pose a convex optimization problem, which means a unique solution and means a lot of already-existing optimization technology can be reused

2) the "kernel trick" which enables us to learn in complicated spaces without computing the transformations

3) they can be trained online, which makes them great for huge datasets (here point 2 might not apply directly, but there are ways around that; if anyone's interested I can point out some papers). The second sketch after this list shows the linear case.
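To make point 2 concrete, here's a minimal sketch; scikit-learn and its toy make_circles problem are my choices for illustration, not anything from the course. The two classes are not linearly separable in the raw 2-D space, but an RBF-kernel SVM separates them without ever materializing the corresponding (infinite-dimensional) feature space:

    from sklearn.datasets import make_circles
    from sklearn.svm import SVC

    # Two concentric circles: not linearly separable in the raw 2-D space.
    X, y = make_circles(n_samples=200, factor=0.3, noise=0.05, random_state=0)

    # The RBF kernel k(x, x') = exp(-gamma * ||x - x'||^2) corresponds to an
    # infinite-dimensional feature space, but we only ever evaluate k on
    # pairs of points -- the transformation itself is never computed.
    clf = SVC(kernel="rbf", gamma=2.0)
    clf.fit(X, y)
    print("training accuracy:", clf.score(X, y))  # essentially 1.0 here

And a hedged sketch of point 3: scikit-learn's SGDClassifier with hinge loss is (up to the optimizer) a linear SVM, and partial_fit lets you feed it the data one mini-batch at a time instead of holding a huge dataset in memory. The stream here is synthetic, just to keep the example self-contained:

    import numpy as np
    from sklearn.linear_model import SGDClassifier

    rng = np.random.default_rng(0)
    clf = SGDClassifier(loss="hinge")  # hinge loss => linear SVM objective
    classes = np.array([0, 1])

    # Pretend each mini-batch streams in from a dataset too big for memory.
    for _ in range(100):
        X_batch = rng.normal(size=(32, 10))
        y_batch = (X_batch[:, 0] > 0).astype(int)  # synthetic labels
        clf.partial_fit(X_batch, y_batch, classes=classes)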

Meanwhile, there is an ongoing craze about deep belief networks, developed by Hinton et al. (Hinton is teaching this course); they came up with an algorithm that can train DBNs reasonably well (there are local optima and such, so it's far from ideal). Some of the reasons they're popular:

1) they seem to be the winning algorithm for many competitions/datasets, ranging from classification in computer vision to speech recognition and, if I'm not mistaken, even parsing. They are, for example, used in newer Android phones.

2) DBNs can be used in an unsupervised mode to _automatically_ learn different representations (features) of the data, which can then be used in subsequent stages of the classification pipeline (see the sketch after this list). This makes them very interesting: while labelled data can be hard to come by, we have a lot of unlabelled data thanks to the Internet. As for what they can do, see the work by Andrew Ng's group, which automatically learned a cat detector.

3) DBNs are "similar" to biological neural networks, so one might think they have the necessary richness for many interesting AI applications.
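As a rough illustration of point 2, here's a hedged sketch using a single RBM (the building block of a DBN, which is just a stack of them) to learn features without labels and then feeding those features to a plain classifier. scikit-learn's BernoulliRBM and the digits dataset are my choices for the example, not anything Hinton's papers prescribe:

    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression
    from sklearn.neural_network import BernoulliRBM
    from sklearn.pipeline import Pipeline

    X, y = load_digits(return_X_y=True)
    X = X / 16.0  # scale pixels to [0, 1] for the Bernoulli visible units

    # The RBM stage learns a new representation from X alone (it ignores y);
    # only the logistic regression on top uses the labels. A DBN would stack
    # several RBMs, each trained on the previous one's hidden activations.
    model = Pipeline([
        ("rbm", BernoulliRBM(n_components=64, learning_rate=0.05, n_iter=20,
                             random_state=0)),
        ("clf", LogisticRegression(max_iter=1000)),
    ])
    model.fit(X, y)
    print("training accuracy:", model.score(X, y))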



"SVMs. . .3)can be trained online, which makes them great for huge datasets (here the point 2) might not apply - but there exist ways - if someone's interested I can point out some papers)"

Please do. I want to read more about SVMs since I haven't heard that much about them.



