
How much of that camera data is labeled for distance measurement use cases?


Because cars (generally) move, probably most of it. You can compare successive frames and, knowing how far the car travelled in between, infer distance from the parallax (rough sketch below).
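As a rough illustration of the successive-frames idea (my own sketch, not anything from Tesla's pipeline): under pure forward ego motion over a static scene, a point's depth follows from how quickly it expands away from the focus of expansion, given the distance travelled between the two frames. All names and numbers below are made up.

```python
import numpy as np

def depth_from_forward_motion(pt_prev, pt_curr, foe, ego_translation_m):
    """Depth in metres (at the earlier frame) of a static point tracked across
    two frames while the camera moves forward by ego_translation_m.

    pt_prev, pt_curr: (x, y) pixel positions of the same point in the two frames.
    foe: (x, y) focus of expansion in pixels (the principal point for pure
         forward motion along the optical axis).
    """
    r1 = np.linalg.norm(np.subtract(pt_prev, foe))   # radial distance, frame 1
    r2 = np.linalg.norm(np.subtract(pt_curr, foe))   # radial distance, frame 2
    if r2 <= r1:
        raise ValueError("point must expand away from the FOE for this model")
    # Pinhole model: r = f * R / Z, so Z1 = T * r2 / (r2 - r1)
    return ego_translation_m * r2 / (r2 - r1)

# Example: car travels 1.5 m between frames; a point moves from 100 px to
# 110 px away from the FOE -> depth of about 16.5 m at the earlier frame.
print(depth_from_forward_motion((740, 400), (750, 400), (640, 400), 1.5))
```

Obviously this breaks down for points near the focus of expansion and for anything that is itself moving, which is part of why you'd want an independent label source.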


I think Tesla actually used radar data to provide ground truth for this, so they don't even need frame-to-frame comparison to generate labels. What happens if things have the same relative speed when you're doing the labeling? Or in low-light conditions? How do you account for per-part variation in lens and sensor design, and how it throws off your predictions?
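If radar really was the label source, the pipeline could look roughly like this sketch: project time-synchronised radar returns into the camera image and attach each return's range to the bounding box it lands in. This is purely a guess at the shape of such a system, not Tesla's actual method; the calibration parameters and function names are hypothetical.

```python
import numpy as np

def project_to_image(point_cam, fx, fy, cx, cy):
    """Project a 3D point (camera coordinates) to pixel coordinates."""
    X, Y, Z = point_cam
    return np.array([fx * X / Z + cx, fy * Y / Z + cy])

def label_boxes_with_radar(boxes, radar_points_cam, fx, fy, cx, cy):
    """boxes: list of (x1, y1, x2, y2) detections in one camera frame.
    radar_points_cam: radar returns already transformed into the camera frame.
    Returns a list of (box, range_m) pairs usable as distance labels."""
    labels = []
    for box in boxes:
        x1, y1, x2, y2 = box
        best = None
        for p in radar_points_cam:
            if p[2] <= 0:                      # behind the camera, skip
                continue
            u, v = project_to_image(p, fx, fy, cx, cy)
            if x1 <= u <= x2 and y1 <= v <= y2:
                rng = float(np.linalg.norm(p))
                best = rng if best is None else min(best, rng)  # nearest return wins
        if best is not None:
            labels.append((box, best))
    return labels
```

Taking the nearest return inside a box is a crude association heuristic; a real pipeline would also have to deal with overlapping boxes, radar clutter, and timestamp alignment, which is where the low-light and same-relative-speed questions above start to bite.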



