Well, I think for one, the field of AI is massive and spans an enormous amount of research over 60 to 70 years. You wouldn't know about most of the breakthroughs unless you had studied in the field.
An obvious choice would be ridiculously large neural networks. More generally, any problem where you start from a random seed and converge on a solution benefits from launching several different random seeds to avoid local minima. Search algorithms (chess programs, etc.) also often parallelize very well.
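To make the random-restart idea concrete, here's a minimal sketch (my own toy example, not from any particular library): plain gradient descent on a one-dimensional function with two minima, run from several random starting points, keeping whichever result scores best. A single run can easily land in the shallower local minimum; multiple seeds make finding the global one far more likely.

```python
import random

def f(x):
    # A 1-D function with two minima: a shallow local one near x ≈ 0.96
    # and the global one near x ≈ -1.06.
    return x**4 - 2 * x**2 + 0.5 * x

def grad(x):
    # Derivative of f, used by gradient descent.
    return 4 * x**3 - 4 * x + 0.5

def descend(x, lr=0.01, steps=2000):
    # Plain gradient descent from a single starting point.
    for _ in range(steps):
        x -= lr * grad(x)
    return x

def best_of_restarts(n_restarts, seed=0):
    # Run descent from several random seeds and keep the best minimum found.
    rng = random.Random(seed)
    candidates = [descend(rng.uniform(-2.0, 2.0)) for _ in range(n_restarts)]
    return min(candidates, key=f)
```

The restarts are independent, which is exactly why this pattern maps so cleanly onto parallel hardware: each seed can run on its own core (or GPU thread block) with no communication until the final reduction.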
From my brief exposure, here are some big algorithms and some applications in roughly chronological order. Definitely not exhaustive. Those in the know...feel free to correct me:
- Least squares regression (prediction- used everywhere)
- Fisher's discriminant (classification tasks)
- Perceptron networks (classification tasks)
- Markov Models/Hidden Markov Models (Speech/Handwriting recognition)
* Machine Learning community develops out from AI *
- Support Vector Machines (Image recognition/ classification)
- Expectation Maximization (prediction)
- Relevance vector machines (classification / prediction)
- Gaussian processes
- Predictive sampling
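Since the first item on the list really is used everywhere, here's what it looks like in practice: a minimal one-variable least squares fit using the closed-form solution (my own sketch, no libraries assumed).

```python
def least_squares_fit(xs, ys):
    # Ordinary least squares for y = a*x + b, via the closed-form
    # solution for the simple one-variable case.
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)                      # variance term
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))    # covariance term
    a = sxy / sxx            # slope
    b = my - a * mx          # intercept
    return a, b
```

For example, `least_squares_fit([0, 1, 2, 3], [1, 3, 5, 7])` recovers the line y = 2x + 1 exactly, since those points lie on it.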
My prediction (heh, pun intended) is that you'll see enormous changes in the field when processing on GPUs becomes much more available. Some algorithms are simply difficult to research because labs don't have access to fast enough machines. There is also a growing effort to port these algorithms to a MapReduce framework so they can run at scale (check out Apache Mahout). Lastly, I'm slightly biased towards machine learning, as that's where I chose to do my grad research. I'm not sure which problem domains fall under AI vs. ML vs. statistics...I tend to clump them all together.
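As a toy illustration of the MapReduce point (this is not Mahout code, just my own sketch of the idea): least squares ports naturally because its sufficient statistics are plain sums. Each mapper can compute partial sums over its own shard of the data, and a single reducer adds them up before the final closed-form solve.

```python
from functools import reduce

def mapper(shard):
    # shard: a list of (x, y) pairs held by one worker.
    # Emit the partial sufficient statistics for this shard.
    n = len(shard)
    sx = sum(x for x, _ in shard)
    sy = sum(y for _, y in shard)
    sxx = sum(x * x for x, _ in shard)
    sxy = sum(x * y for x, y in shard)
    return (n, sx, sy, sxx, sxy)

def reducer(a, b):
    # Combine two sets of partial statistics by elementwise addition.
    return tuple(u + v for u, v in zip(a, b))

def fit(shards):
    # Map over shards, reduce the partials, then solve y = slope*x + intercept.
    n, sx, sy, sxx, sxy = reduce(reducer, map(mapper, shards))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return slope, intercept
```

The mappers never need to see each other's data, which is the whole point: the same pattern (shard, compute local statistics, combine) is what lets these algorithms run at scale on commodity clusters.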