The rule set must be more complex than that. I often use a VPN, which results in captchas on many sites, but I never get one on Google. I guess the 300 queries/IP limit only kicks in if other parameters indicate crawling.
But it's a bit clunky. I was running searches through an embedded WebBrowser control in a C# application (which is really an embedded Internet Explorer) and was very quickly presented with a captcha. A human was viewing the results, but a script was constructing the query string, and that was enough to be labelled a crawler.
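For context, a minimal sketch of the pattern described above: the WinForms `WebBrowser` control renders results for a human, while code builds the search URL. The class and method names here are hypothetical; only `WebBrowser.Navigate` and `Uri.EscapeDataString` are real framework APIs.

```csharp
using System;
using System.Windows.Forms;

// Hypothetical form: a script constructs the query string,
// but a person reads the rendered results in the embedded IE control.
public class SearchForm : Form
{
    private readonly WebBrowser browser = new WebBrowser { Dock = DockStyle.Fill };

    public SearchForm()
    {
        Controls.Add(browser);
    }

    public void RunSearch(string terms)
    {
        // The programmatically built URL is presumably part of what makes
        // the traffic look scripted, even with a human viewing the page.
        string url = "https://www.google.com/search?q=" + Uri.EscapeDataString(terms);
        browser.Navigate(url);
    }
}
```

Even this trivial usage was apparently enough to trip the crawler detection.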
Often all it takes is using WebBrowser at all. I would love to use WebBrowser for small search utilities because it's so easy to use, but it seems to be a magnet for problems.
Yeah, it was a very general example; there is at least one rule based on rate limiting too, and this 300 queries/IP limit is what I have seen on average.