
The benefit of these AI-generated simulation models as a training mechanism is that they add robustness without requiring a large training set. The recombinations can open up wider areas of the space to explore and learn from, while starting from a smaller basis space.

To pick an almost trivial example, take OCR digit recognition. You train on the original data set, but also on information-preserving skews and other transforms of that data set to add robustness (stretched digits, rotated digits, etc.). The core operation is taking a smallset in some space (the original training data) and producing a bigset in that same space (the generated training data).
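
A minimal sketch of that smallset-to-bigset expansion, assuming the digits are 2D numpy arrays (e.g. 28x28 MNIST images) and using scipy's rotate/zoom; the function names and parameter ranges here are illustrative, not from any particular library:

    import numpy as np
    from scipy.ndimage import rotate, zoom

    def augment_digit(image, rng):
        """Apply a random information-preserving transform to one digit image."""
        # Small random rotation keeps the digit recognizable.
        angle = rng.uniform(-15, 15)
        out = rotate(image, angle, reshape=False, mode="constant", cval=0.0)

        # Mild horizontal/vertical stretch, then crop/pad back to the original size.
        fy, fx = rng.uniform(0.9, 1.1, size=2)
        stretched = zoom(out, (fy, fx), mode="constant", cval=0.0)
        out = np.zeros_like(image)
        h = min(out.shape[0], stretched.shape[0])
        w = min(out.shape[1], stretched.shape[1])
        out[:h, :w] = stretched[:h, :w]
        return out

    def expand_smallset(smallset, copies_per_example=10, seed=0):
        """Produce a bigset by applying random transforms to each smallset example."""
        rng = np.random.default_rng(seed)
        bigset = []
        for image in smallset:
            bigset.extend(augment_digit(image, rng) for _ in range(copies_per_example))
        return bigset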

For something simple like digit recognition, many of these transforms can be written as plain algorithms, but for more complex problems an ML model could plausibly do a good job of learning how to generate bigset candidates from the smallset.
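
As a deliberately crude stand-in for such a learned generator, one could fit a simple generative model to the smallset and sample candidates from it; the sketch below uses a Gaussian mixture in pixel space via scikit-learn purely to illustrate the idea (in practice something like a VAE or diffusion model would be a better fit), and the function names are hypothetical:

    import numpy as np
    from sklearn.mixture import GaussianMixture

    def learn_generator(smallset, n_components=5, seed=0):
        """Fit a simple generative model to the flattened smallset images."""
        X = np.stack([img.ravel() for img in smallset])
        gmm = GaussianMixture(n_components=n_components,
                              covariance_type="diag",
                              random_state=seed)
        gmm.fit(X)
        return gmm

    def generate_bigset_candidates(gmm, n_samples, image_shape):
        """Sample new candidates from the learned model and reshape into images."""
        samples, _ = gmm.sample(n_samples)
        return [s.reshape(image_shape) for s in samples]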


