> At some point, some SEO figured out that random() was always returning 0.5. I'm not sure if anyone figured out that JavaScript always saw the date as sometime in the Summer of 2006, but I presume that has changed. I hope they now set the random seed and the date using a keyed cryptographic hash of all of the loaded JavaScript and page text, so it's deterministic but very difficult to game.
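For what it's worth, a minimal sketch of that kind of keyed-hash seeding, assuming Node-style crypto, a hypothetical secret key, and a stand-in PRNG (none of this is what Google actually does):

```ts
import { createHmac } from "node:crypto";

// Hypothetical: derive a deterministic 32-bit seed from a secret key plus
// everything that could influence rendering (loaded scripts, page text).
function deriveSeed(secretKey: string, loadedScripts: string[], pageText: string): number {
  const hmac = createHmac("sha256", secretKey);
  for (const src of loadedScripts) hmac.update(src);
  hmac.update(pageText);
  return hmac.digest().readUInt32BE(0); // first 4 bytes of the digest as the seed
}

// Simple seeded PRNG (mulberry32) standing in for the browser's Math.random.
function seededRandom(seed: number): () => number {
  let s = seed >>> 0;
  return () => {
    s = (s + 0x6d2b79f5) >>> 0;
    let t = Math.imul(s ^ (s >>> 15), s | 1);
    t = t ^ (t + Math.imul(t ^ (t >>> 7), t | 61));
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

// Before rendering, the crawler could pin both randomness and time to values
// derived from the page itself, so re-renders are identical but hard to game:
//   Math.random = seededRandom(deriveSeed(key, scripts, text));
//   Date.now = () => fixedTimestampDerivedFromSameHash;
```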
I don't get why the rendering had to be deterministic. Server-side-rendered HTML documents can also contain random data, and that doesn't seem to prevent Google from doing "duplicate elimination".
Byte-for-byte de-duping of search results is perfect and fairly cheap (a rough sketch follows this comment). Fuzzy de-duping is more expensive and imperfect. Users get really annoyed when a single query gives them several results that look like near copies of the same page.
Tons of pages have minor modifications made by JavaScript, but only a very small percentage have JavaScript modifications that, when analyzed, actually improve search results.
So, if JavaScript analysis isn't deterministic, it has a small negative effect on the search results for many pages, which offsets the positive effect it has on the small number of pages it helps.
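A sketch of why the byte-for-byte case is so cheap, assuming rendered pages are compared by a content hash (the names here are made up for illustration):

```ts
import { createHash } from "node:crypto";

interface RenderedPage {
  url: string;
  body: Buffer; // bytes of the rendered HTML
}

// Exact de-duplication: two results are duplicates only when their rendered
// bytes hash identically, so one hash and one set lookup per page is all it costs.
function dedupeExact(pages: RenderedPage[]): RenderedPage[] {
  const seen = new Set<string>();
  const unique: RenderedPage[] = [];
  for (const page of pages) {
    const fingerprint = createHash("sha256").update(page.body).digest("hex");
    if (!seen.has(fingerprint)) {
      seen.add(fingerprint);
      unique.push(page);
    }
  }
  return unique;
}

// Any nondeterminism in rendering (random seeds, the current date) changes the
// bytes, the fingerprints stop matching, and the cheap path above falls apart.
```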