Hacker News | yeldarb's comments

Yes, it should.


thanks!


We (Roboflow) have had early access to this model for the past few weeks. It's really, really good. This feels like a seminal moment for computer vision. I think there's a real possibility this launch goes down in history as "the GPT Moment" for vision. The two areas I think this model is going to be transformative in the immediate term are for rapid prototyping and distillation.

Two years ago we released autodistill[1], an open source framework that uses large foundation models to create training data for training small realtime models. I'm convinced the idea was right, but too early; there wasn't a big model good enough to be worth distilling from back then. SAM3 is finally that model (and will be available in Autodistill today).
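The basic workflow looks roughly like this (a minimal sketch using the existing GroundedSAM connector; the folder paths and ontology are placeholders, and the SAM3 connector should slot in the same way once it's published):

  from autodistill.detection import CaptionOntology
  from autodistill_grounded_sam import GroundedSAM
  from autodistill_yolov8 import YOLOv8

  # Use a large foundation model to auto-label a folder of raw images...
  base_model = GroundedSAM(ontology=CaptionOntology({"forklift": "forklift"}))
  base_model.label(input_folder="./images", output_folder="./dataset")

  # ...then train a small realtime model on the generated labels.
  target_model = YOLOv8("yolov8n.pt")
  target_model.train("./dataset/data.yaml", epochs=100)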

We are also taking a big bet on SAM3 and have built it into Roboflow as an integral part of the entire build and deploy pipeline[2], including a brand new product called Rapid[3], which reimagines the computer vision pipeline in a SAM3 world. It feels really magical to go from an unlabeled video to a fine-tuned realtime segmentation model with minimal human intervention in just a few minutes (and we rushed the release of our new SOTA realtime segmentation model[4] last week because it's the perfect lightweight complement to the large & powerful SAM3).

We also have a playground[5] up where you can play with the model and compare it to other VLMs.

[1] https://github.com/autodistill/autodistill

[2] https://blog.roboflow.com/sam3/

[3] https://rapid.roboflow.com

[4] https://github.com/roboflow/rf-detr

[5] https://playground.roboflow.com


SAM3 is probably a great model to distill from when training smaller segmentation models, but isn't their DINOv2 a better example of a large foundation model to distill from for various computer vision tasks? I've seen it used as a starting point for models doing segmentation and depth estimation. Maybe there's a v3 coming soon?

https://dinov2.metademolab.com/


DINOv3 was released earlier this year: https://ai.meta.com/dinov3/

I'm not sure if the work they did with DINOv3 went into SAM3. I don't see any mention of it in the paper, though I just skimmed it.


We used DINOv2 as the backbone of our RF-DETR model, which is SOTA on realtime object detection and segmentation: https://github.com/roboflow/rf-detr

It makes a great target to distill SAM3 to.


> It makes a great target to distill SAM3 to.

Could you expand on that? Do you mean you're starting with the pretrained DINO model and then using SAM3 to generate training data to make DINO into a segmentation model? Do you freeze the DINO weights and add a small adapter at the end to turn its output into segmentations?


I was trying to figure out from their examples, but how are you breaking up the different "things" that you can detect in the image? Are you just running it with each prompt individually?


The model supports batch inference, so all prompts are sent to the model, and we parse the results.


Thanks for the links! Can we run RF-DETR in the browser for background removal? This wasn't clear to me from the docs.


We have a JS SDK that supports RF-DETR: https://docs.roboflow.com/deploy/sdks/web-browser



Wonder if you can subtract these vectors to get the opposite effect and what that ends up being for things like sycophancy or hallucination.

I also wonder what other personality vectors exist.. would be cool to find an “intelligence” vector we could boost to get better outputs from the same model. Seems like this is likely to exist given how prompting it to cosplay as a really smart person can elicit better outputs.
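For the subtraction idea, here's a minimal sketch of what that looks like mechanically (my assumption of the setup, not the original authors' code; the layer index, vector, and coefficient are placeholders):

  import torch

  # Hedged sketch: add a precomputed persona/steering vector into one
  # layer's residual stream via a forward hook. A positive coefficient
  # amplifies the trait; a negative one is the "subtraction" above.
  def steer(layer, vector, coeff):
      def hook(module, inputs, output):
          hidden = output[0] if isinstance(output, tuple) else output
          hidden = hidden + coeff * vector.to(hidden.dtype)
          return (hidden,) + output[1:] if isinstance(output, tuple) else hidden
      return layer.register_forward_hook(hook)

  # e.g. steer away from a hypothetical "sycophancy" direction:
  # handle = steer(model.model.layers[20], sycophancy_vec, coeff=-4.0)
  # ...generate...
  # handle.remove()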


Is this a new product or a marketing page tying together a bunch of the existing MediaPipe stuff into a narrative?

Got really excited then realized I couldn’t figure out what “Google AI Edge” actually _is_.

Edit: I think it’s largely a rebrand of this from a couple years ago: https://developers.googleblog.com/en/introducing-mediapipe-s...


Hey all, sharing a project we made in 2 hours at the Vercel+NVIDIA hackathon last week.

While the app is cool, the thing that blew my mind is that the entire app was coded by Vercel's v0 agent. In other words: I did not write a single line of code to create the app (though my teammate did write the backend scraper & DB filler by hand).

[1] Writeup: https://blog.roboflow.com/nycerebro/

[2] Repo (including the generated code + initial meaty prompts): https://github.com/yeldarby/nycerebro

[3] v0 session: https://v0.dev/chat/nyc-erebro-app-RwzRUEMGveH?b=b_6AuWalvG7...


I've been reflecting a bit on this and remembering what it used to be like when I did hackathons regularly a decade or so ago. This project seems on par with the type of 48-hour hackathon project I used to do (assuming CLIP had existed), but now I was able to do it in 2 hours instead of 48.

I can't imagine someone non-technical building something like this with prompting. The success of the project was highly dependent on my direction of the model to do what I wanted it to do (even though I gave it leeway in exactly how to do it). It did feel a bit like managing another engineer to do something vs doing it myself.

I don't use agents like this in my day to day work yet (I experimented with OpenHands a couple of months ago but it was frustrating, expensive, and took just as long as doing the task myself). But I'm thinking I probably will be a year from now.

A few times when the model got stuck I copy/pasted some stuff into o1 and pasted its response back into v0 (felt kind of like "escalating" to a more senior engineer) and that helped it get unstuck. Future models will be even more capable than o1. I imagine there will likely need to be a UI for "bringing in the big guns" of a smarter model in the future even if the grunt-work is done by a fast+cheap base model.

There's probably also something to letting the model "speak its native tongue". I don't know next.js but letting the model work with patterns it's been trained on probably helped it be more effective (compared to having OpenHands work in my own codebase using a structure it's unfamiliar with).


Is there any Docker alternative on Mac that can utilize the MPS device in a container? ML stuff is many times slower in a container on my Mac than running outside


The issue you're running into is that to run docker on mac, you have to run it in a vm. Docker is fundamentally a linux technology, so first emulate x86_64 linux, then run the container. That's going to be slow.

There are native macOS containers, but they aren't very popular.


Docker can run an ARM64 Linux kernel; there's no need to emulate x86.


You still pay the VM penalty, though it's a lot less bad than it used to be. And the Arm MacBooks are fast enough that IME they generally compare well against Intel Linux laptops even then. But it sounds like first-class GPU access (not too surprisingly) isn't there yet.


Podman-Desktop can do it


More context from Jeremy Howard (fast.ai): https://x.com/jeremyphoward/status/1857765905188651456


It’s sad that only Google can (and honestly a bit surprising that Google hasn’t) use multimodal video models to index the semantic contents & transcripts of these videos for search. Huge long tail of unique content.


It is! I've often wanted to search for something that happens in a video instead of just the title, description, and keywords.


It would be a total game changer and super helpful for a few months; then people would start abusing it to gain views anyway, just like the current web.


If you do it naively, your video frames will buffer waiting to be consumed, causing a memory leak and an eventual crash (or a quick crash if you're running on a device with constrained resources).

You really need to have a thread consuming the frames and feeding them to a worker that can run on its own clock.
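A minimal sketch of that pattern, assuming OpenCV (the RTSP URL and run_model are placeholders):

  import threading
  import cv2

  # Reader thread drains the stream as fast as frames arrive so nothing
  # buffers; consumers grab only the latest frame on their own clock.
  class LatestFrame:
      def __init__(self, src):
          self.cap = cv2.VideoCapture(src)
          self.frame = None
          self.lock = threading.Lock()
          threading.Thread(target=self._reader, daemon=True).start()

      def _reader(self):
          while True:
              ok, frame = self.cap.read()
              if not ok:
                  break
              with self.lock:
                  self.frame = frame  # overwrite; stale frames are dropped

      def read(self):
          with self.lock:
              return self.frame

  # stream = LatestFrame("rtsp://example.com/stream")
  # while True:
  #     frame = stream.read()
  #     if frame is not None:
  #         run_model(frame)  # hypothetical; runs at its own pace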


Sorry for the newbie question.

Under Windows, say that I have an RTSP stream (or something similar).

Would you use a single Python script, and with which one of these multithreading solutions?

1. import concurrent.futures

2. import multiprocessing

3. import threading


That's not how loop devices work on Linux.

