It could first judge whether the PR is frivolous, then attempt to review it, then escalate to a human if necessary.
The problem is that GitHub, or whatever system hosts the process, would need to actively prevent projects from being DDoSed with PR reviews, since running AI reviews costs real money.
It's stated like a consultant giving architectural advice. The problem is that it has become socially acceptable to use LLMs for absolutely anything, and in bulk. Before, you strove to live up to your own standards, and people valued authenticity. Now it seems like we're all striving for the holy grail of conventional software engineering: the Average.
It is absolutely not socially acceptable, and it's getting tiring to hear people like you blithely declare that it is. Maybe in your particular circles it's socially acceptable to not give a single shit, take no pride in the slop you throw at people, and expect them to wade through it no questions asked? But not for the rest of us.
Maybe I didn't state my point clearly. That was a comment about an earlier experience here on HN: someone was asked whether they'd used AI to write, and their response was "why not use it if it's better than my own?" If that's the reasoning people give, and they aren't self-aware enough to be embarrassed by it, then there must be a lot of people who think that way.
The bottleneck is not coding or creating a PR; the bottleneck is the review.