
I roughly agree. I think the only other piece that is critical to mention would be remote humans for support in extremely awkward situations, to help get the car back on track, with those interventions feeding back in as RLHF.

This requires that the car can always come to a safe stop, which I think the LLM-based driver should be very capable of doing.



Automatic emergency braking would be a good first step; it would certainly solve the case in the article where the car drives through downed power lines.

I think the logical next step is to have the LLM output the driving path, similar to how GPT-4 outputs SVGs. Feed in everything you have, raw images, depth maps, VRU positions, nav cues, and ask the LLM to output a path.
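As a rough sketch of what that might look like: if the LLM emitted its path in SVG-style syntax (one plausible format, since models are already fluent in it), the downstream planner would just need to parse it into waypoints. The path format and coordinate convention here are assumptions for illustration, not anything from a real system.

```python
import re

def parse_path(svg_path: str) -> list[tuple[float, float]]:
    """Parse an SVG-style path string ("M x,y L x,y ...") into
    (x, y) waypoint tuples for a downstream trajectory follower."""
    pairs = re.findall(r"(-?\d+(?:\.\d+)?)\s*,\s*(-?\d+(?:\.\d+)?)", svg_path)
    return [(float(x), float(y)) for x, y in pairs]

# Hypothetical LLM output: waypoints in ego-vehicle coordinates (meters).
llm_output = "M 0,0 L 2.0,0.1 L 4.0,0.4 L 6.0,0.9"
waypoints = parse_path(llm_output)
print(waypoints)
```

The appeal of a text path format is that it keeps the LLM's output auditable and easy to sanity-check (e.g. reject paths that jump too far between waypoints) before anything reaches the controls.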



