> How in the world would making it more clear how to write more secure extensions possibly worsen the extension store's malware problem?
Unfortunately, information that helps the good guys get their extensions past the audit check is exactly the same information that helps the bad guys get their extensions in too. The bad guys simply move on to the next security flaw that Google hasn't anticipated.
Maybe the bad guys use some common tactics to get their scam extensions into the store which the good guys don't, and which are easy for Google to detect and flag. If you release a list of known no-no's, the bad guys just get smarter and avoid them.
This obviously skews in favour of refusing some good extensions to keep most bad ones out.
In terms of Google already auditing every app, check out the source code for Dark Reader https://github.com/darkreader/darkreader. It's fairly complex. I can only imagine how many extensions are as, or more, complex than that. I wonder how much auditing is done manually vs automated.
> Unfortunately, information that helps the good guys get their extensions past the audit check is exactly the same information that helps the bad guys get their extensions in too.
This is just an assertion I'm wrong. It can't possibly persuade me or the people upvoting me. Would you be persuaded by me just asserting you're wrong?
> In terms of Google already auditing every app, [e.g.] the source code for Dark Reader [is] fairly complex.
Firstly, Apple is able to do it, so there's no reason to make excuses for Google. See anecdotes elsewhere in the thread about how Apple attaches screengrabs, explains rejections over the phone, and even decompiles apps to point to the exact methods/lines of code in apps it rejects from the iOS App Store, even small free ones: https://news.ycombinator.com/item?id=23170498
And anyway, without reading a single line of Dark Reader's source code, I can deduce plenty of permissions it shouldn't need, e.g. "cookies" (which it doesn't ask for). Without reading a single line of PushBullet's source code, one can easily deduce it shouldn't need access to "https://*/*", and indeed, it doesn't, yet it asked for it.
Can you explain concretely (not just by pointing to unnamed "no-no's") how harmful side-effects could result from Google telling PushBullet that "https://*/*" specifically was in violation?
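To make the permission distinction concrete, here is a minimal sketch of what a narrowly-scoped manifest.json could look like for a hypothetical push-notification extension. The extension name, the API host, and the permission set are all illustrative assumptions, not PushBullet's actual manifest:

```json
{
  "name": "Hypothetical Push Extension",
  "manifest_version": 3,
  "permissions": ["storage", "notifications"],
  "host_permissions": ["https://api.example.com/*"]
}
```

The over-broad version differs by a single line: `"host_permissions": ["https://*/*"]`, which grants access to every HTTPS site the user visits instead of just the extension's own backend. That one-line diff is exactly what a rejection notice could point at.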
> If you release a list of known no-no's, the bad guys just get smarter and avoid them.
Firstly, isn't bad guys avoiding no-no's exactly what we want?
If you're saying that there may be some probabilistic red flags that Google uses to find possible bad guys—sure, that could be true; I have no idea and neither do you, you admitted it was speculation. But in this case, "Request access to the narrowest permissions necessary to implement your product's features or services." is not a probabilistic red flag, it's a hard rule.
Again, concretely how could harmful side-effects result from Google pointing out the specific violation?