We (Roboflow) have had early access to this model for the past few weeks. It's really, really good. This feels like a seminal moment for computer vision. I think there's a real possibility this launch goes down in history as "the GPT Moment" for vision.
The two areas where I think this model is going to be transformative in the immediate term are rapid prototyping and distillation.
Two years ago we released autodistill[1], an open source framework that uses large foundation models to generate training data for small realtime models. I'm convinced the idea was right, but too early; there wasn't a big model good enough to be worth distilling from back then. SAM3 is finally that model (and will be available in Autodistill today).
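For a sense of what that flow looks like, here's a minimal sketch using the existing Autodistill plugins (GroundedSAM as the base model, YOLOv8 as the target; a SAM3 base model would slot into the same interface, and the folder names are just illustrative):

    from autodistill.detection import CaptionOntology
    from autodistill_grounded_sam import GroundedSAM  # swap in the SAM3 base model once the plugin lands
    from autodistill_yolov8 import YOLOv8

    # Large, slow foundation model auto-labels the raw images from text prompts
    base_model = GroundedSAM(ontology=CaptionOntology({"shipping container": "container"}))
    base_model.label(input_folder="./images", output_folder="./dataset")

    # Small, fast model trains on the auto-generated labels and is what you deploy
    target_model = YOLOv8("yolov8n.pt")
    target_model.train("./dataset/data.yaml", epochs=200)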
We are also taking a big bet on SAM3 and have built it into Roboflow as an integral part of the entire build and deploy pipeline[2], including a brand new product called Rapid[3], which reimagines the computer vision pipeline in a SAM3 world. It feels really magical to go from an unlabeled video to a fine-tuned realtime segmentation model with minimal human intervention in just a few minutes (and we rushed the release of our new SOTA realtime segmentation model[4] last week because it's the perfect lightweight complement to the large & powerful SAM3).
We also have a playground[5] up where you can play with the model and compare it to other VLMs.
SAM3 is probably a great model to distill from when training smaller segmentation models, but isn't their DINOv2 a better example of a large foundation model to distill from for various computer vision tasks? I've seen it used as a starting point for models doing segmentation and depth estimation. Maybe there's a v3 coming soon?
Could you expand on that? Do you mean you're starting with the pretrained DINO model and then using SAM3 to generate training data to make DINO into a segmentation model? Do you freeze the DINO weights and add a small adapter at the end to turn its output into segmentations?
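Concretely, something like this is what I'm picturing (rough sketch only; assumes the DINOv2 ViT-S/14 checkpoint from torch.hub, and the head and sizes are just illustrative):

    import torch
    import torch.nn as nn

    # Frozen DINOv2 backbone + tiny trainable adapter head, trained on SAM3-generated mask labels
    backbone = torch.hub.load("facebookresearch/dinov2", "dinov2_vits14")
    for p in backbone.parameters():
        p.requires_grad = False  # keep the foundation model frozen

    class SegHead(nn.Module):
        # Per-patch linear classifier, upsampled back to pixel resolution
        def __init__(self, dim=384, num_classes=2, patch=14):
            super().__init__()
            self.patch = patch
            self.classifier = nn.Conv2d(dim, num_classes, kernel_size=1)

        def forward(self, patch_tokens, image_size):
            h, w = image_size[0] // self.patch, image_size[1] // self.patch
            feats = patch_tokens.permute(0, 2, 1).reshape(-1, patch_tokens.shape[-1], h, w)
            return nn.functional.interpolate(self.classifier(feats), size=image_size, mode="bilinear")

    head = SegHead()
    x = torch.randn(1, 3, 224, 224)
    with torch.no_grad():
        tokens = backbone.forward_features(x)["x_norm_patchtokens"]  # (B, 256, 384) for a 224x224 input
    masks = head(tokens, (224, 224))  # only the head's weights get gradients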
I was trying to figure this out from their examples: how are you breaking up the different "things" that you can detect in the image? Are you just running it with each prompt individually?
Wonder if you can subtract these vectors to get the opposite effect and what that ends up being for things like sycophancy or hallucination.
I also wonder what other personality vectors exist. It would be cool to find an “intelligence” vector we could boost to get better outputs from the same model. Seems like this is likely to exist, given how prompting it to cosplay as a really smart person can elicit better outputs.
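For what it's worth, the "subtract the vector" part is mechanically simple once you have the vector; a rough sketch with a PyTorch forward hook (the layer choice, vector, and coefficient are all illustrative):

    import torch
    import torch.nn as nn

    # Additive steering: add (coefficient * vector) to a layer's hidden states.
    # A negative coefficient subtracts the direction, i.e. the "opposite effect".
    def make_steering_hook(steering_vector, coefficient=-1.0):
        def hook(module, inputs, output):
            hidden = output[0] if isinstance(output, tuple) else output
            hidden = hidden + coefficient * steering_vector.to(hidden.device, hidden.dtype)
            if isinstance(output, tuple):
                return (hidden,) + tuple(output[1:])
            return hidden
        return hook

    # Toy demo with a Linear layer standing in for a transformer block;
    # in practice the vector comes from contrasting activations on paired prompts.
    layer = nn.Linear(8, 8)
    sycophancy_vector = torch.randn(8)
    handle = layer.register_forward_hook(make_steering_hook(sycophancy_vector, coefficient=-1.0))
    out = layer(torch.randn(2, 8))  # outputs now have the direction subtracted
    handle.remove()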
Hey all, sharing a project we made in 2 hours at the Vercel+NVIDIA hackathon last week.
While the app is cool, the thing that blew my mind is that the entire app was coded by Vercel's v0 agent. In other words: I did not write a single line of code to create the app (though my teammate did write the backend scraper & DB filler by hand).
I've been reflecting a bit on this and remembering what it used to be like when I did hackathons regularly a decade or so ago. This project seems on par with the type of 48-hour hackathon project I used to do (assuming CLIP had existed), but now I was able to do it in 2 hours instead of 48.
I can't imagine someone non-technical building something like this with prompting alone. The success of the project depended heavily on my directing the model to do what I wanted (even though I gave it leeway in exactly how to do it). It did feel a bit like managing another engineer to do something vs doing it myself.
I don't use agents like this in my day-to-day work yet (I experimented with OpenHands a couple of months ago but it was frustrating, expensive, and took just as long as doing the task myself). But I'm thinking I probably will be a year from now.
A few times when the model got stuck I copy/pasted some stuff into o1 and pasted its response back into v0 (felt kind of like "escalating" to a more senior engineer) and that helped it get unstuck. Future models will be even more capable than o1. I imagine there will need to be a UI for "bringing in the big guns" of a smarter model in the future, even if the grunt work is done by a fast+cheap base model.
There's probably also something to letting the model "speak its native tongue". I don't know Next.js, but letting the model work with patterns it's been trained on probably helped it be more effective (compared to having OpenHands work in my own codebase using a structure it's unfamiliar with).
Is there any Docker alternative on Mac that can utilize the MPS device in a container? ML stuff is many times slower in a container on my Mac than running outside of one.
The issue you're running into is that to run Docker on a Mac, you have to run it in a VM. Docker is fundamentally a Linux technology, so your Mac has to boot a Linux VM (and emulate x86_64 on top of that if the image isn't an arm64 build) before it can run the container. That's going to be slow.
There are native macOS containers, but they aren't very popular.
You still pay the VM penalty, though it's a lot less bad than it used to be. And the Arm MacBooks are fast enough that IME they generally compare well against Intel Linux laptops even then. But it sounds like first-class GPU access (not too surprisingly) isn't there yet.
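You can see the fallback directly if you're using PyTorch: the MPS backend just isn't exposed inside the Linux VM, so everything silently runs on CPU.

    import torch

    # Prints "mps" on bare-metal macOS with a recent PyTorch build;
    # inside a Docker (Linux VM) container it falls back to "cpu".
    device = "mps" if torch.backends.mps.is_available() else "cpu"
    print(f"Using device: {device}")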
It’s sad that only Google can use multimodal video models to index the semantic contents & transcripts of these videos for search (and honestly a bit surprising that it hasn’t). Huge long tail of unique content.
If you do it naively, your video frames will buffer waiting to be consumed, causing a memory leak and an eventual crash (or a quick crash if you’re running on a device with constrained resources).
You really need to have a thread consuming the frames and feeding them to a worker that can run on its own clock.
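A minimal sketch of that pattern with OpenCV (any capture API with a blocking read works the same way; the class name and source are illustrative):

    import threading
    import cv2

    class LatestFrameReader:
        # Drains the capture on its own thread so frames never pile up;
        # the worker always grabs the most recent frame on its own clock.
        def __init__(self, source=0):
            self.cap = cv2.VideoCapture(source)
            self.lock = threading.Lock()
            self.frame = None
            self.running = True
            threading.Thread(target=self._drain, daemon=True).start()

        def _drain(self):
            while self.running:
                ok, frame = self.cap.read()
                if not ok:
                    break
                with self.lock:
                    self.frame = frame  # older, unprocessed frames are simply dropped

        def latest(self):
            with self.lock:
                return self.frame

        def stop(self):
            self.running = False
            self.cap.release()

    # Worker loop runs at whatever rate the model allows:
    # reader = LatestFrameReader("rtsp://...")
    # while True:
    #     frame = reader.latest()
    #     if frame is not None:
    #         run_inference(frame)  # hypothetical model call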