
This article only considers static images. Lidar static "images" necessarily contain depth information, so of course they'll yield better depth estimates.

But that's really beside the point because the world is not static and any system attempting self-driving will need to take that into account.

Using parallax measurements, which is what Tesla says it is doing, you can dramatically improve depth estimates by comparing multiple frames of 2D images.
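For intuition, here's a minimal sketch of the underlying triangulation (my own illustration, not Tesla's actual pipeline): if you know how far the camera moved between two frames (the baseline, e.g. from ego-motion), the pixel shift of a tracked point gives you its depth via the classic pinhole relation Z = f*B/d. All names below are made up for the example:

    import numpy as np

    def depth_from_parallax(x1, x2, focal_px, baseline_m):
        # Pixel shift (disparity) of the same point seen in two frames.
        disparity = np.abs(x1 - x2)
        # Guard against divide-by-zero for points at "infinity".
        disparity = np.maximum(disparity, 1e-6)
        # Classic triangulation: Z = f * B / d.
        return focal_px * baseline_m / disparity

    # Example: a feature shifts 8 px between two frames taken 0.3 m apart,
    # with a focal length of 1000 px.
    print(depth_from_parallax(412.0, 420.0, focal_px=1000.0, baseline_m=0.3))
    # -> 37.5 (meters)

Note the built-in weakness: disparity shrinks with distance, so depth accuracy from parallax degrades for far-away objects, which is exactly where lidar keeps its edge.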

Also, just a reminder that Tesla uses radar in conjunction with the cameras.



This was my question as well. How well do these systems perform over a stream of data?

I am no expert in this field: how does tracking actually work once there's a time dimension? There must be some sort of "state" carried over frame by frame? What is the "size" of this state? And how do you stop objects from just disappearing and reappearing for a few frames? You can often see this latter effect in automatic labeling demos on GitHub.
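Not an expert answer either, but as a rough sketch: trackers in the SORT family keep a small per-object state (position and velocity) in a Kalman filter, predict it forward each frame, and match the predictions against new detections. A track that misses a few detections is "coasted" on its prediction instead of deleted, which is what suppresses the disappear/reappear flicker. A minimal single-object, 1-D illustration (hypothetical names, my own simplification):

    import numpy as np

    class Track1D:
        # Constant-velocity Kalman filter for one object along one axis.
        # The per-frame "state" is tiny: a 2-vector [position, velocity]
        # plus a 2x2 covariance matrix.
        def __init__(self, x0, dt=1.0):
            self.x = np.array([x0, 0.0])                 # [position, velocity]
            self.P = np.eye(2) * 10.0                    # state uncertainty
            self.F = np.array([[1.0, dt], [0.0, 1.0]])   # motion model
            self.H = np.array([[1.0, 0.0]])              # we only observe position
            self.Q = np.eye(2) * 0.01                    # process noise
            self.R = np.array([[1.0]])                   # measurement noise
            self.missed = 0                              # consecutive missed frames

        def predict(self):
            self.x = self.F @ self.x
            self.P = self.F @ self.P @ self.F.T + self.Q
            return self.x[0]                             # predicted position

        def update(self, z):
            if z is None:                                # detector missed this frame:
                self.missed += 1                         # coast on the prediction
                return
            self.missed = 0
            y = z - self.H @ self.x                      # innovation
            S = self.H @ self.P @ self.H.T + self.R
            K = self.P @ self.H.T @ np.linalg.inv(S)     # Kalman gain
            self.x = self.x + (K @ y).ravel()
            self.P = (np.eye(2) - K @ self.H) @ self.P

    # Object moving ~2 px/frame; the detector drops frames 2 and 3,
    # but the track keeps moving on its predicted velocity.
    track = Track1D(x0=0.0)
    for frame, z in enumerate([2.1, 4.0, None, None, 10.2, 12.0]):
        pred = track.predict()
        track.update(np.array([z]) if z is not None else None)
        print(f"frame {frame}: predicted {pred:.1f}, missed {track.missed}")

Multi-object versions (SORT, DeepSORT) add an assignment step on top of this (Hungarian matching on IoU or appearance embeddings) and only delete a track after N consecutive misses, which is why they look much more stable than the frame-by-frame labeling demos.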



