Show HN: Real-world speedrun timer that auto-ticks via vision on smart glasses (github.com/realcomputer)
4 points by tash_2s 22 hours ago | 3 comments
I built a hands-free HUD for smart glasses that runs a real-world speedrun timer and auto-splits based on what the camera sees. Demo scenario: making sushi.

Demo: https://www.youtube.com/watch?v=NuOVlyr-e1w

Repo: https://github.com/RealComputer/GlassKit

I initially tried a multimodal LLM for scene understanding, but the latency and consistency were not good enough for this use case, so I switched to a small object-detection model (a fine-tuned RF-DETR) that just runs an inference loop over the camera feed. This also makes on-device/offline use feasible (today it still runs via a local server).
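
For the curious, the core loop is roughly this shape. This is a simplified sketch, not the exact code in the repo: detect() stands in for the RF-DETR inference call, and the split labels are made up for the example.

    import time

    import cv2  # pip install opencv-python

    # Ordered split triggers -- illustrative labels, not the real ones.
    SPLITS = ["rice_spread", "fish_placed", "roll_cut"]

    def detect(frame):
        """Stand-in for the fine-tuned RF-DETR inference call.
        Returns a list of (label, confidence) pairs for the frame."""
        raise NotImplementedError

    def run(camera_index=0, threshold=0.8, debounce=5):
        cap = cv2.VideoCapture(camera_index)
        start = time.monotonic()
        split_idx = 0
        hits = 0  # consecutive frames confirming the current split
        while split_idx < len(SPLITS):
            ok, frame = cap.read()
            if not ok:
                break
            labels = {label for label, conf in detect(frame) if conf >= threshold}
            # Require several consecutive detections before splitting,
            # so one noisy frame can't advance the run.
            hits = hits + 1 if SPLITS[split_idx] in labels else 0
            if hits >= debounce:
                print(f"SPLIT {SPLITS[split_idx]}: {time.monotonic() - start:0.2f}s")
                split_idx += 1
                hits = 0
        cap.release()

The debounce is the important bit in practice: requiring a few consecutive positive frames keeps a single flickery detection from triggering a split.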

Cooking feels like a perfect fit for smart glasses (hands busy, lots of short steps), but I have not seen many apps that work reliably in a real kitchen. It feels like the hardware is finally reaching the point where this becomes practical.

I can imagine corps using this HUD with a vision model for worker supervision. (Manna, anyone?)

    Cheeseburger Assembly

        [x] Toast bun
        [x] Place bun
        [x] Add ketchup
        [x] Add onions
        [ ] Add pickle
        [ ] Place cheese slice
        [ ] Add patty
        [ ] Place bottom bun
        [ ] Wrap and flip burger

    TIME REMAINING: 00:09.01
    UNITS THIS HOUR: 60
    ACCURACY: 99.8%
    KEEP UP THE GOOD WORK!

Totally. I think this kind of thing sits right on that line: it can help someone (hands-free guidance, training, accessibility, staying in flow), and it can also slide into a pretty dystopian "score the worker" surveillance HUD. My intent here is the former: personal "real-world speedrun" / practice tooling, not a manager dashboard or productivity policing.