
In terms of capturing the screen input for training, the fps is capped at 10. This follows the number suggested in NVIDIA's paper[1], which cautions against overfitting. The program captures faster when the steering angle gets larger, because a larger angle suggests that a curvy road is ahead, which is the more challenging scenario for the program.
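A minimal sketch of that angle-dependent capture rate. The 10 fps floor comes from the comment above; the 30 fps ceiling, the 25-degree full-lock angle, and the linear scaling between them are all assumptions for illustration, not details from the original program.

```python
# Sketch: scale the screen-capture rate with |steering angle|.
# BASE_FPS is the paper-suggested floor; MAX_FPS, MAX_ANGLE_DEG,
# and the linear ramp are hypothetical choices.
BASE_FPS = 10.0       # straight road: capture slowly to avoid overfitting
MAX_FPS = 30.0        # sharp curves: capture the harder scenarios more densely
MAX_ANGLE_DEG = 25.0  # assumed full-lock steering angle

def capture_fps(steering_angle_deg: float) -> float:
    """Return the capture rate for the current steering angle."""
    frac = min(abs(steering_angle_deg) / MAX_ANGLE_DEG, 1.0)
    return BASE_FPS + frac * (MAX_FPS - BASE_FPS)
```

On a straight road (`capture_fps(0.0)`) this yields the 10 fps floor, and at or beyond full lock it saturates at the ceiling rather than growing without bound.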

In terms of testing, the max fps is determined by the performance of the GPU and the complexity of the model. I couldn't find any definitive studies on the matter, but anywhere from 10-30 fps seems to work OK.

[1] https://arxiv.org/abs/1604.07316



Add an RNN with a 10-frame history on top of NVIDIA's model and you get the winning Udacity challenge entry (mentioned on selfdrivingcars.mit.edu). I assume you also programmed this model during Udacity's Self-Driving Car Nanodegree.
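The winning entry's exact architecture isn't spelled out here, but the idea of "an RNN with a 10-frame history on top of the CNN" can be sketched as follows. This is a vanilla RNN in plain NumPy; the feature dimension, hidden size, and random weights are all stand-ins (a trained model would use learned weights and likely an LSTM/GRU).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: FEAT is the per-frame CNN feature vector length,
# HIDDEN is the RNN state size, HISTORY is the 10-frame window.
FEAT, HIDDEN, HISTORY = 1152, 64, 10

# Randomly initialised weights stand in for trained parameters.
W_xh = rng.normal(0, 0.01, (FEAT, HIDDEN))   # input -> hidden
W_hh = rng.normal(0, 0.01, (HIDDEN, HIDDEN)) # hidden -> hidden (recurrence)
W_hy = rng.normal(0, 0.01, HIDDEN)           # hidden -> steering angle

def rnn_steering(frame_features: np.ndarray) -> float:
    """Run a vanilla RNN over a window of per-frame CNN features
    (oldest frame first) and emit one steering-angle prediction."""
    h = np.zeros(HIDDEN)
    for x in frame_features:
        h = np.tanh(x @ W_xh + h @ W_hh)
    return float(h @ W_hy)

# Stand-in CNN features for a 10-frame window.
window = rng.normal(size=(HISTORY, FEAT))
angle = rnn_steering(window)
```

The key difference from the frame-by-frame NVIDIA model is that the recurrence lets the prediction depend on the last 10 frames instead of a single image, which helps on curves where motion context matters.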



