
I recently had a very real-world project that was forced to abandon some promising methods because they were taking too long to train.


Was it not possible to speed things up with more powerful compute?


No. Resources are not infinite, and we were already at the edge of the resources that most sites where training would be done could be expected to have.


Thanks. I think you are a valid exception to what I said. I should have known that using words like "nobody" would not go over well on HN (but it's tedious to type "a very large percentage"), despite that statement being a verbatim quote from one of the world's leading ML engineers and, to me, not controversial.

I do consider the cloud both widely available and near-infinite in its capacity to add resources.

If adding resources really is not economically feasible, then the performance gains were not as promising as they seemed (whether cloud or on-site).


> Thanks. I think you are a valid exception to what I said. I should have known that using words like "nobody" would not go over well on HN (but it's tedious to type "a very large percentage")

In the future, you could use “most”.


So the problem in my case is twofold:

1) The ML experts in the field have pretty much all settled on the need for a uniform method to train models, but with each model trained on-site (see the sketch after this list).

2) While the cloud might be near-infinite in terms of adding capacity, "Hey guys, let's stage up some health-data-compatible AWS instances to do something that was a side project we're not even sure will work" in what is always a cash-starved part of healthcare is...well...a pretty big ask.
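To make point 1 concrete, here's a minimal Python sketch of that setup, assuming a uniform training procedure run independently at each site because the data can't leave it. All names here (Site, train_locally) are illustrative, not from the actual project:

    # One uniform training procedure, run independently per site.
    # Names are hypothetical; this is a sketch, not the real pipeline.
    from dataclasses import dataclass, field

    @dataclass
    class Site:
        name: str
        records: list = field(default_factory=list)  # data that must stay on-site

    def train_locally(site: Site, epochs: int = 10) -> dict:
        # The procedure (architecture, hyperparameters, preprocessing) is
        # identical everywhere; only the data differs, so each site ends
        # up with its own model, trained on whatever hardware it has.
        model = {"site": site.name, "weights": [0.0] * 4}
        for _ in range(epochs):
            for record in site.records:
                pass  # a gradient step on the local data would go here
        return model

    # Every hospital runs the same routine on its own records:
    models = [train_locally(Site("clinic_a")), train_locally(Site("clinic_b"))]

The consequence is that the compute budget is set by the least-resourced site, not by what any one well-funded site (or the cloud) could provision.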



