It seems plausible that as more sites adopt this kind of technology, automated access to the web (e.g. scraping) will become harder -- for whatever purpose, good or ill. Hiding and detection have long been an "arms race". I can only hope that reasonable uses of automation remain feasible.
Just use a bot with a clear User Agent, not "Mozilla/5.0 (iPad; CPU OS 7_1_2 like Mac OS X) AppleWebKit/537.51.2 (KHTML, like Gecko) CriOS/36.0.1985.57 Mobile/11D257 Safari/9537.53".
And, don't forget to start by reading my /robots.txt.
If you behave yourself and abide by the rules, why should I ban your bot?
If for whatever reason I don't want to allow your bot in, you might still try contacting me to ask; perhaps I could arrange for your bot to scrape my site.
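The polite-bot behavior described above can be sketched in a few lines of Python using the standard library's `robotparser`. The bot name and site URL below are hypothetical placeholders, and `parse()` is fed inline rules so the example is self-contained (a real bot would call `read()` to fetch the live `/robots.txt`):

```python
# Minimal sketch of a well-behaved scraper: it identifies itself with a
# clear User-Agent and consults robots.txt before fetching anything.
from urllib import robotparser

# A descriptive User-Agent (hypothetical), not a browser impersonation string.
USER_AGENT = "ExampleResearchBot/1.0 (+https://example.com/bot-info)"

rp = robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")
# rp.read()  # would fetch and parse the live file; inline rules used instead:
rp.parse([
    "User-agent: *",
    "Disallow: /private/",
])

print(rp.can_fetch(USER_AGENT, "https://example.com/public/page"))   # True
print(rp.can_fetch(USER_AGENT, "https://example.com/private/data"))  # False
```

A bot that checks `can_fetch()` before every request, and sends an honest User-Agent header, gives the site owner both the means and a reason to leave it unbanned.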
Automated web access to my site must obey my rules because you're using my bandwidth and resources.
This comment seems like a non-sequitur. My comment had nothing to do with a particular site, much less "your" site.
I was making a general comment about automation and detection. If the detection gets better than the automation, it could change the dynamic. There is no fixed rule that says that content providers will or will not allow scraping based on robots.txt or other guidelines. Some could elect to disallow any/all robot behavior, if they have the capability.
That strongly depends on the data you have. I have written bots to scrape sites with government data so that I could do searches that weren't possible using their online forms. I did not look at, nor attempt to obey, their robots.txt, nor would I have given a rat's ass about breaking whatever they put up to stop me.
There is also the argument that when you make something available on the web you make it available to everybody.
It was good enough for 4chan, a site that used to receive countless spam posts from probably dozens of botters before they got reCAPTCHA. The botters probably kept trying to spam afterwards, without much success.