I genuinely think there is potential for a silly Internet tradition here. Google should pick a bunch of candidate winners by ML, hire them six months before, fire them all a week before the awarding, and then programmatically re-hire them moments before the awarding. It can't be more malicious than most academic pranks, and it shouldn't matter whether the conspiracy is real; it'll just be funny.
I think calling it a conspiracy theory is a bit of a stretch. I could be wrong. I agree that's how it should be. But I don't get the impression there are a lot of fans of Google on the Prize Committee. Either way, it's not something that matters too much. Just a thought.
> I think calling it a conspiracy theory is a bit of a stretch.
It's the textbook definition of a conspiracy theory, isn't it? I mean, a group conspiring not to award the most prestigious prize in science to someone who deserved it because of who his employer was, and then suddenly awarding it once he switched employers?
> But I don't get the impression there are lot of fans of Google in the Prize Committee.
This is a conspiracy-oriented line of reasoning. Who anyone's employer was is something that never used to surface when discussing Nobel prizes. Suddenly it became the basis of a theory about how people conspired first to withhold the award and then to grant it, and somehow the guy's accomplishments don't even register in the discussion.
That's what these conspiracy theories bring to the table.
I get what you're saying. I have no evidence and no inside information. It could be a conspiracy, but I doubt it. It could just be multiple individuals independently being uncomfortable with tacitly approving a huge company they see as potentially responsible for privacy problems, ethics problems and AI misuse. I don't see these as invalid concerns either. And I don't see being conflicted about giving an award to an employee of a company tied to big ethical concerns as anti-science or having a lack of integrity.
Discovering breakthroughs in machine learning is a profound achievement and deserves to be recognized. Wielding powerful tools against humanity for the sake of money, not so much. But, like I said, I could be dead wrong, and this is probably why I wouldn't be a good person to serve on one of these committees.
His age too, and the fact that Google lost prestige in AI over the years.
It's the company that didn't see the potential of Transformers, and that presented a half-assed Bard when LLMs were already in production at other companies.
But Hinton was not a fan of LLMs anyway; he argued that backprop is not what the brain does and that we should do better than these models. I'd say Google would be a great place for someone who thinks like that.