If this feature isn't already part of the Claude API, it likely will be at some point, and then many Claude requests will be automated, with no way to distinguish user-driven requests from the rest.
Simply put, at the end of the day you lose: AI blocking will not work.
I mean, currently the AI requests come from the datacenter running the AI, but eventually one of two things will happen:
1) AI models will get small/fast enough to run on user hardware and use the user's resources. End result? You lose. The user will set their own headers (see the sketch below) and sites will play the impossible game of identifying AI.
2) AI services will figure out how to route the requests, by any number of methods, so they appear to come from the user anyway. End result? You lose. The sites attempting to block will play a cat-and-mouse game of figuring out what is and isn't AI.
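To illustrate scenario 1: a minimal sketch, assuming Python with the requests library, of a locally run agent presenting itself as an ordinary browser. The URL and User-Agent string are purely illustrative.

    # Minimal sketch: a locally-run agent fetching a page while presenting
    # itself as an ordinary browser. The URL and User-Agent string are
    # illustrative; a client can set whatever headers it likes.
    import requests

    headers = {
        "User-Agent": (
            "Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
            "AppleWebKit/537.36 (KHTML, like Gecko) "
            "Chrome/124.0.0.0 Safari/537.36"
        ),
        "Accept-Language": "en-US,en;q=0.9",
    }

    resp = requests.get("https://example.com/article", headers=headers, timeout=10)
    print(resp.status_code, len(resp.text))

From the server's side that request looks like any other browser hit, and scenario 2 ends up in the same place once the AI service relays traffic through the user's machine, because it then arrives from the user's own IP.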
Note, this doesn't mean AI blocking isn't worth doing; if nothing else it reduces load on the servers. It's just not a long-term winning strategy.
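For the load-reduction case, turning away the crawlers that do identify themselves is cheap. Here is a rough sketch using Python's standard-library WSGI server; the user-agent substrings are illustrative, not an exhaustive list.

    # Rough sketch: refuse self-declared AI crawlers to cut server load.
    # The marker list is illustrative, not exhaustive, and only catches
    # bots that identify themselves honestly in the User-Agent header.
    from wsgiref.simple_server import make_server

    AI_BOT_MARKERS = ("gptbot", "claudebot", "ccbot")

    def app(environ, start_response):
        ua = environ.get("HTTP_USER_AGENT", "").lower()
        if any(marker in ua for marker in AI_BOT_MARKERS):
            start_response("403 Forbidden", [("Content-Type", "text/plain")])
            return [b"AI crawling is not permitted.\n"]
        start_response("200 OK", [("Content-Type", "text/plain")])
        return [b"Hello, human (probably).\n"]

    if __name__ == "__main__":
        make_server("", 8000, app).serve_forever()

Anything that spoofs a browser User-Agent, as in the earlier sketch, walks straight past this, which is exactly the point.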
You may not be able to stop AIs from crawling web sites through technological means. But you can confiscate all the resources of the company that owns the AI.