> you load a small part of the model, then team up with people serving the other parts to run inference or fine-tuning.
If multiple people participate in a fine-tuning session, you have to trust all of them. You also have to trust everybody for inference, but at least none of them can scramble the model itself.
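To make the trust distinction concrete, here is a toy sketch (plain PyTorch, not any real library's API, and every class and function name below is made up) of the setup the quote describes: each peer serves a few frozen blocks, and the client chains activations through them. A peer can tamper with the activations it returns, which is why you trust everyone during inference, but it only ever holds frozen weights and never receives updates from you.

```python
# Toy simulation of pipeline-parallel inference across untrusted peers.
# Hypothetical names; this only illustrates the trust model described above.
import torch
import torch.nn as nn

class Peer:
    """Simulates one participant serving a contiguous slice of the model."""
    def __init__(self, blocks: nn.ModuleList):
        self.blocks = blocks.eval()              # weights stay frozen on the peer
        for p in self.blocks.parameters():
            p.requires_grad_(False)

    def forward_pass(self, hidden: torch.Tensor) -> torch.Tensor:
        # The peer sees (and could corrupt) your activations -- hence the
        # trust requirement for inference -- but it never gets gradients
        # that would let it alter the shared weights.
        with torch.no_grad():
            for block in self.blocks:
                hidden = block(hidden)
        return hidden

def run_inference(hidden: torch.Tensor, pipeline: list[Peer]) -> torch.Tensor:
    """Client-side loop: hand activations from one peer to the next."""
    for peer in pipeline:
        hidden = peer.forward_pass(hidden)
    return hidden

# Toy example: 8 identical transformer blocks split across 4 peers.
dim = 64
blocks = nn.ModuleList(
    nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True) for _ in range(8)
)
peers = [Peer(blocks[i:i + 2]) for i in range(0, 8, 2)]
out = run_inference(torch.randn(1, 16, dim), peers)
print(out.shape)  # torch.Size([1, 16, 64])
```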
This is all covered in the docs if you click through past the landing page. If you want to propagate changes to others, you need to set up your own swarm; you can't go tuning things on random participants. You can read more at:
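As a rough illustration of what "set up your own swarm" amounts to (every name below is invented for the sketch, not taken from the project's docs, CLI, or config format): you bootstrap the swarm from peers you control, and fine-tuning updates only ever propagate among the machines joined to that bootstrap set.

```python
# Hypothetical sketch of a private-swarm configuration. None of these names
# come from the project's actual API; they only illustrate the idea that
# weight changes stay inside a swarm you control.
from dataclasses import dataclass, field

@dataclass
class SwarmConfig:
    model_name: str
    # Bootstrap addresses of machines you (or people you trust) run.
    # The public network is never joined, so strangers can't serve layers
    # in -- or receive weight updates from -- this swarm.
    initial_peers: list[str] = field(default_factory=list)
    allow_public_peers: bool = False

private_swarm = SwarmConfig(
    model_name="my-org/llm-70b",
    initial_peers=[
        "/ip4/10.0.0.5/tcp/31337/p2p/QmBootstrapPeerA",
        "/ip4/10.0.0.6/tcp/31337/p2p/QmBootstrapPeerB",
    ],
)

# Each trusted machine would start a server against the same bootstrap peers,
# and fine-tuning jobs submitted to this swarm would only touch replicas
# held by those machines.
print(private_swarm)
```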
Maybe this could be solved with opt-in (or opt-out via banning) federation, similar to Mastodon. Instead of one network you could have a bunch of different networks, each focused on the interests of a different community. Or maybe, as someone with a node, you could "subscribe" to different communities that use different filtering and prioritization mechanisms for task assignments; a rough sketch of that idea follows.
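Here is what that subscription model could look like in miniature. This assumes nothing about how the real network represents tasks; the types and fields are all made up for illustration.

```python
# Sketch of a node subscribing to communities, each with its own filtering
# and prioritization rules for task assignments. All names are hypothetical.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Task:
    community: str        # e.g. "science-llms", "code-assistants"
    kind: str             # "inference" or "finetune"
    priority: int

@dataclass
class Subscription:
    community: str
    accept: Callable[[Task], bool]   # community-specific filtering rule
    weight: float = 1.0              # how strongly this node favors the community

class Node:
    def __init__(self, subscriptions: list[Subscription]):
        self.subs = {s.community: s for s in subscriptions}

    def choose(self, queue: list[Task]) -> list[Task]:
        """Keep tasks from subscribed communities that pass that community's
        filter, then order them by the node's own weighted priority."""
        accepted = [t for t in queue
                    if (s := self.subs.get(t.community)) and s.accept(t)]
        return sorted(accepted,
                      key=lambda t: self.subs[t.community].weight * t.priority,
                      reverse=True)

node = Node([
    Subscription("science-llms", accept=lambda t: t.kind == "inference"),
    Subscription("code-assistants", accept=lambda t: True, weight=2.0),
])
queue = [Task("science-llms", "finetune", 5),
         Task("science-llms", "inference", 3),
         Task("code-assistants", "inference", 1)]
# The finetune task is filtered out; the rest are ordered by weighted priority.
print(node.choose(queue))
```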
I do love the general direction, and I think it's inevitable that training will move toward being more decentralized like this. It's also the best chance we have at disrupting the centralization of "Open"AI and their ilk. I say the earlier we figure this out, the better, but it's not an easy problem to solve cleanly. And, not to be that guy, but maybe we could add some cryptocurrency incentives to the mix... conveniently enough, the crypto miners already have the GPUs ready to go!