Exactly. Sure, this particular example is driven by political rage, but the underlying issue is that the maintainers of these models are altering them to conform to an agenda. It's not even surprising that people focus on the political-rage aspect of it, because that same rage is the source of the agenda in the first place. It's a concerning precedent to set, because what other non-political modifications might be in the model?
Well, every model is altered to conform to an agenda. You train it on data you have personally picked (and which is therefore subject to your own biases), and you guide its training toward the goal you want the model to achieve. If you were doing the training, your own agenda would come into play. Google's agenda is to make something very general that works for everyone.
So if you're trying to be as unbiased as humanly possible, you might say, just use the raw datasets that exist in the world. But we live in a world where the datasets themselves are often biased.
Bias in ML and other types of models is well-documented, and can cause very real repercussions. Poor representation in datasets can cause groups to be unfairly disadvantaged when an insurance premium or mortgage is calculated, for example. It can also mean your phone's ML photography system doesn't expose certain skin colors very well.
Even if a model were trained on a statistically representative dataset (e.g. roughly 2/3 of the US is white), you want it to work for ALL your customers, not just 2/3 of them. Since ML is largely statistics, the trained model will pick up on "most of this dataset is white" and its results will reflect that. So it is 100% necessary to make adjustments if you want the model to work accurately for everyone, and not just for the dominant population in the dataset.
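To make that concrete, here's a toy sketch (nothing to do with Google's actual pipeline; the groups, sizes, and thresholds are all made up) showing how a model trained on data where one group makes up 2/3 of the examples fits that group best, and how simply reweighting samples by group frequency narrows the gap:

    # Toy example: one group is 2/3 of the data, the other 1/3, and the
    # "correct" decision threshold differs between them. All numbers invented.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    def make_group(n, threshold):
        X = rng.normal(size=(n, 1))             # one feature
        y = (X[:, 0] > threshold).astype(int)   # label uses a group-specific threshold
        return X, y

    X_maj, y_maj = make_group(2000, threshold=0.0)   # majority group (~2/3)
    X_min, y_min = make_group(1000, threshold=1.0)   # minority group (~1/3)
    X = np.vstack([X_maj, X_min])
    y = np.concatenate([y_maj, y_min])
    group = np.array([0] * len(y_maj) + [1] * len(y_min))

    def per_group_accuracy(model):
        return {g: round(model.score(X[group == g], y[group == g]), 3) for g in (0, 1)}

    # Unweighted: the decision boundary lands near the majority group's threshold.
    plain = LogisticRegression().fit(X, y)
    print("unweighted:", per_group_accuracy(plain))

    # Reweighted so each group contributes equal total weight to the loss.
    w = np.where(group == 0, 1.0 / len(y_maj), 1.0 / len(y_min))
    balanced = LogisticRegression().fit(X, y, sample_weight=w)
    print("reweighted:", per_group_accuracy(balanced))

The unweighted model parks its boundary near the majority group's optimum; the reweighted one trades a little majority accuracy for a noticeably better fit on the minority group. That's the kind of deliberate adjustment being discussed, just in miniature.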
Even if we aren't using these models for much yet, a racist AI model would seriously damage people's trust in and reliance on these models. As a result, training models to avoid bias is 100% an important part of the agenda, even when the agenda is just creating a model that works well for everyone.
Obviously, that's gone off the rails a bit with these examples, but it is a real problem nonetheless. (And training a model to understand the difference between our modern world and what things were like historically is a complex problem, I'm sure!)
I'm pretty sure this whole story with Gemini, and now this, has already done far more damage to people's trust in these models than any implicit bias in the training data ever did.