Hacker News

How is object recognition possible on an iPhone without a huge dataset like the one Google uses on Google Photos' AI backend? It has to be worse than Google Photos. Can someone clarify?


You only need large datasets to train a model; the result can be shipped around pretty easily. My company builds an app, http://forevery.com/, which actually runs a neural net on your phone to organize your photos.


You only need a huge dataset to train the network. The trained network itself is (relatively) tiny, so it can be loaded and run on iPhones.
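To see why the shipped artifact is small, note that a trained net is just its weights, so its size is roughly parameter count times bytes per parameter. A back-of-the-envelope sketch (the layer shapes below are made up for illustration, not Apple's actual model):

```python
def model_size_mb(param_counts, bytes_per_param=4):
    """Total size in MB for float32 weights (4 bytes each)."""
    return sum(param_counts) * bytes_per_param / 1e6

# Hypothetical layer parameter counts for a small image classifier.
layers = [3 * 3 * 3 * 64,      # first conv: 3x3 kernels, 3 -> 64 channels
          3 * 3 * 64 * 128,    # second conv: 3x3 kernels, 64 -> 128 channels
          7 * 7 * 128 * 1024,  # dense layer on the 7x7x128 feature map
          1024 * 1000]         # classifier over 1000 labels

print(round(model_size_mb(layers), 1))  # -> 30.1
```

Tens of megabytes for the weights, versus the terabytes of images used to train them.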


There's a whole subfield of machine learning, called model distillation or model compression, concerned with shrinking large neural nets (or ensembles of them) to fit on small devices like phones and tablets. Reducing the space needed by 10x typically costs only 1-2% in accuracy.
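A minimal sketch of the core distillation idea: the small "student" model is trained to match the big "teacher" model's temperature-softened output distribution rather than hard labels. The logits and temperature here are illustrative values, not from any real model:

```python
import numpy as np

def softmax(z, T=1.0):
    """Softmax with temperature T; higher T gives a softer distribution."""
    z = np.asarray(z, dtype=float) / T
    e = np.exp(z - z.max())
    return e / e.sum()

# Hypothetical teacher logits for one image: confident in class 0, but the
# relative scores of the other classes carry useful information too.
teacher_logits = np.array([6.0, 2.0, 1.0])

# At high temperature the distribution softens, exposing those similarities.
soft_targets = softmax(teacher_logits, T=4.0)

def distillation_loss(student_logits, teacher_logits, T=4.0):
    """Cross-entropy of the student's softened outputs against the teacher's."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return float(-(p * np.log(q)).sum())
```

The student minimizes this loss over the training set; because the soft targets encode far more signal per example than one-hot labels, a much smaller network can get close to the teacher's accuracy.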

But what I like about it is that trained weights can be reused across tasks, and training a new network can be sped up by starting from a previously trained one (transfer learning).
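A toy sketch of why starting from a previous result is cheap: in a common transfer-learning setup, the pretrained feature-extraction layers are frozen and only the final classification head is retrained on the new task. The layer names and parameter counts below are hypothetical:

```python
# Hypothetical layer -> parameter-count map for a pretrained image network.
pretrained = {"conv1": 1_728, "conv2": 73_728, "dense": 6_422_528, "head": 1_024_000}

# Freeze everything but the classification head, so only a small fraction
# of the weights needs updating on the new task.
frozen = {name: n for name, n in pretrained.items() if name != "head"}
trainable = pretrained["head"]

print(f"trainable share: {trainable / sum(pretrained.values()):.0%}")  # -> 14%
```

Training only that last layer needs far fewer examples and far less compute than training the whole network from scratch.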


When you say relatively, just how many MB of data would a trained net be that can do the kind of object recognition and photo grouping they showed at the keynote?



