Hacker News

Impressive demo.

Any FSD startup that put their money on LiDAR is even more screwed now.



Disagree there. Humans have massive compute, dual optics, and amazing filters.

Computer vision has 1-2 of those three, and I don't think we are near an AGI for self-driving yet. Driving is, IMO, an AGI-level task.

Does your dataset have a crocodile in it? Does your monocular depth model get fooled by a billboard that's just a photo?


>Does your monocular depth model get fooled by a billboard that's just a photo?

This is actually a pretty clever example. I tried a few billboards on the online demo, and since these models are regressive they output the mean of the possible outputs. Sometimes the model is perplexed and doesn't seem to know whether to output something completely flat or something with actual depth, and by being perplexed it outputs something in between.
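That "outputs the mean" behavior falls directly out of the MSE training objective: when the same input appears with conflicting depth labels, the loss-minimizing prediction is their average. A minimal toy sketch (made-up numbers, no real depth model):

```python
import numpy as np

# Toy illustration of the billboard ambiguity: the same input effectively
# carries two conflicting depth labels -- the flat surface of the board
# itself, and the depth of the scene printed on it. (Both numbers are
# hypothetical.)
flat_depth = 2.0    # distance to the billboard surface, in meters
scene_depth = 30.0  # depth the printed scene appears to have
labels = np.array([flat_depth, scene_depth])

# For a regressor trained with MSE, the optimal prediction for an input
# with conflicting targets is their mean -- the "in between" output
# described above: neither flat nor the depicted depth.
prediction = labels.mean()
print(prediction)  # 16.0
```

This is why a perplexed model produces a half-extruded billboard rather than committing to either interpretation.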


AGI is a pretty fuzzy term whose goalposts will shift, just as "AI" has. You can define it that way tautologically, but I can easily see a world where we have self-driving cars yet standalone AI scientists don't exist. Does that mean we have AGI because we have self-driving cars, or that we don't, because it's not general enough to also tackle other human endeavors?


That's only a "happy path" attitude.

How well would a monocular depth model operate with headlights moving toward it at night? How about in rain, snow, or fog?

I'm not saying LiDAR is the only way, but I don't see a reason to use this as a solution.

I'm not saying this isn't valuable. I used to work in the 3D/metaverse space; getting depth from a single photo and being able to recreate a 3D scene from it is very valuable, and is the future.



