It's the age-old thin client vs. fat client debate repeating itself. It seems like as the chips and tools mature, we'll see more and more model deployments on customer hardware. Transmitting gigabytes of sensor/input data to a nearby data center for real-time results just isn't feasible for most applications.
There are probably lots of novel AI/ML applications that remain to be built because of this limitation. Probably also good fodder for backing your way into a startup idea as a technologist.