Agree that it's not exactly the same, all analogies have holes, they're simplifications after all.
I guess I'm wary of the messaging because I'm a developer 99% thanks to FOSS, and to being able to learn from FOSS projects how to build similar things myself. Without FOSS, I probably wouldn't have been able to "escape" the working class my family was "stuck in" when I grew up.
I want to do whatever I can to make sure others have the same opportunity, and whether the weights themselves are FOSS doesn't matter: others cannot learn how to create their own models just by looking at the weights. You also need to be able to learn the model architecture, the training process, and which datasets the models use, otherwise you won't get very far.
> This kind of purity test mindset doesn't help anyone. They are shipping the most modifiable form of their model.
It does help others who might be stuck in the same situation I was stuck in; that's not nothing, nor is it about "purity". They're not shipping the most open model they can: they could have done something like OLMo (https://github.com/allenai/OLMo), which can teach people how to build their own models from scratch.
Thank you, it sometimes feels weird to argue against people who are generally pro-FOSS but are somehow fine with misleading statements when it comes to LLMs. I'm glad at least one other person can see through it; it's encouraging that I'm on the right track :)
I'm not sure I'd even call Llama "open weights". To me that would mean I can download the weights freely (you cannot download the Llama weights without signing a license agreement) and use them freely (you cannot); on top of that, you need to add a notice from Meta/Llama to everything that uses Llama, saying:
> prominently display “Built with Llama” on a related website, user interface, blogpost, about page, or product documentation.