Yeah, this is a holdover from where LLMs grew out of: academia. "Technical report" is what you reach for when you don't want to compare to actual competitive baselines.
I'm sorry, this is nonsense. Technical reports exist to fill in information that is useful for readers but not necessary to understand the key contributions of the work, and/or that doesn't fit within the journal or conference's page limit. I'm not sure where you got the idea that they're something people write to avoid competitive baselines; IME, the peer-reviewed portion of the publication is far more likely to contain misleading benchmarks than the technical report, since the paper is trying to "sell" the work in a way the technical report is not.
What this is actually an instance of is Google's approach to academic publishing: release a paper that contains almost no actionable information, but which is treated as important and publishable solely because it came from Google and is therefore used in industry. We've seen this many times before--e.g. the original Spanner paper, which was so light on details and so confusing that they needed to release a follow-up paper several years later just to explain what the system was even using the atomic clocks for!
I agree that's what TRs are for. However, my point is that if you want to publish academic writing without peer review, a TR is one way to go about it. You can also just publish a preprint somewhere, which - surprise surprise - is also common for these same actors.
I get what you're saying; I just think this is more of a Google thing than a TR thing. Their peer-reviewed papers have the same issues as their preprints, TRs, and whitepapers, generally speaking--Google researchers feel no incentive to actually share how they did things, perform accurate or up-to-date comparisons against comparable frameworks, or even bother outlining their key contributions, because they know the paper will be published, widely read, widely cited, and influential even if they don't do any of those things. It's gotten to the point that I suspect it's actually house policy to strip their papers of specific details as much as possible, presumably to protect what they perceive as Google's competitive advantage; it makes no sense otherwise that wildly different papers, from different authorship groups across so many areas of CS, could all have the same problems.
This is (IMO) quite different from, e.g., academics publishing misleading benchmarks, which is more often a matter of being wedded to a bad idea because you've spent years of work on it and your position is at risk if you don't end up outperforming existing approaches. I can often still get a lot out of papers with misleading benchmarks, even if what I get is "don't try this technique, it doesn't work." Whereas I frequently get nothing at all out of Google publications. If I had to describe the way Google seems to view academic publishing in one word, it would be "marketing"--it's advertising aimed at getting people to come work at Google or use their products, not something written with the intent of advancing the wider state of the art, or even the less noble goal of justifying the time and money they put into whatever they're writing about.