Actually, I think that could have been fixed. If the originating site signed a message, and each relaying site signed it in turn, then one would be able to calculate a reputation score for sites. Spammy originating sites (or originating domains pretending to be relaying sites) would be easily identifiable, and thus blockable.
Recall that the way Usenet works is that each site gets its new feeds from one or more other sites.
Imagine that I'm site A, connected to sites B and C. B is connected to D and E; C is connected to D, F & G. Whenever a user at a site submits an article, that site attests 'this message was sent via site X.' Whenever a site sends a message to another site that it received from somewhere else, it attests 'this message was sent via site X.' As part of the setup process, sites exchange signing keys (recall that Usenet peering was fairly static, something arranged by email).
So, any messages local to Site A don't need any attestation: they were generated locally, and read locally. Any messages submitted to Site B and sent to A are provably from Site B (and A would in fact reject any message claiming to be from Site B but lacking a signature). Likewise, any message submitted to Site E would be signed by E, then signed by B, then arrive at A. In the normal course of events, A doesn't care about Site E at all; he cares about the quality of his feed from B.
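The chain of attestations above can be sketched in a few lines. This is a hypothetical toy model, not a real NNTP implementation: HMAC stands in for proper public-key signatures (the symmetric key doubles as the verification key that peers would exchange at peering setup), and the site names and key scheme are invented for illustration.

```python
import hashlib
import hmac

# Toy keys: in reality each site would distribute a verification key to
# its peers when the feed is arranged by email.
SITE_KEYS = {name: name.encode() * 8 for name in "ABCDEFG"}

def sign(site, body):
    """Site attests: 'this message was sent via site X'."""
    mac = hmac.new(SITE_KEYS[site], body, hashlib.sha256).hexdigest()
    return (site, mac)

def relay(message, via):
    """Each relaying site appends its own attestation over the article."""
    message["path"].append(sign(via, message["body"]))
    return message

def verify_last_hop(message, expected_peer):
    """A receiving site only checks the peer that handed it the article."""
    site, mac = message["path"][-1]
    if site != expected_peer:
        return False
    expected = hmac.new(SITE_KEYS[site], message["body"],
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(mac, expected)

# A user posts at E; the article travels E -> B -> A.
msg = {"body": b"hello, usenet", "path": []}
relay(msg, "E")   # E attests the origin
relay(msg, "B")   # B attests the relay hop
assert verify_last_hop(msg, "B")      # A accepts: its feed from B checks out
assert not verify_last_hop(msg, "C")  # A rejects a forged claim of coming via C
```

Note that `verify_last_hop` only inspects the final signature: as described above, A doesn't care about E at all, only about the quality of its direct feed from B.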
Say a spam arrives at A. He knows that it didn't come from C, because it's signed by B. So he returns it to B, stating, 'yo, police your users!' B can see his own signature on the spam, so he knows that he can trust the origination of the message from his point of view (whether it's his own user, or one from Sites D or E); in fact, B could automatically forward the spam notification to the originating site if it weren't his own.
Let's say that B is actually malicious, and has invented a hundred fake sites that he pretends to forward messages from. I don't care: any message I get from B is signed by B: if I receive enough spams from him, I can choose to just refuse articles from him.
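The reputation logic above is equally simple to sketch: each site keeps a purely local spam tally per direct feed, and cuts a feed that crosses a threshold. This is a hypothetical illustration; the threshold and the idea of a flat counter (rather than, say, a rate over a time window) are assumptions of mine, not part of the original proposal.

```python
from collections import Counter

SPAM_THRESHOLD = 100  # per accounting window; the actual tuning is a guess

spam_counts = Counter()
blocked_feeds = set()

def report_spam(feed):
    """Called when spam arrives via a feed. The receiving site doesn't care
    who originated the article, or whether the feed's upstream sites are
    real or invented: the feeding peer is accountable for what it sends."""
    spam_counts[feed] += 1
    if spam_counts[feed] >= SPAM_THRESHOLD:
        blocked_feeds.add(feed)  # stop accepting articles from this peer

def accept_article(feed):
    return feed not in blocked_feeds

for _ in range(SPAM_THRESHOLD):
    report_spam("B")            # B keeps feeding spam...
assert not accept_article("B")  # ...so the site cuts B's feed
assert accept_article("C")      # C's feed is unaffected
```

Because the tally is local, B's hundred invented upstream "sites" change nothing: every spam it forwards counts against B's own feed.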
> (X) Requires immediate total cooperation from everybody at once
Some solutions do require complete cooperation. Given that Usenet is effectively dead anyway, is a solution which kills Usenet and replaces it with Usenet-Prime really all that different an outcome?
> (X) Anyone could anonymously destroy anyone else's career or business
Completely and totally wrong: in the model I'm talking about, reputation is local to my site, not shared.
> (X) Lack of centrally controlling authority for email
Nope: no centrally-controlling authority is needed: each site only cares about the relationships it has established, and doesn't care about downstream relationships ('police yourself!').
> (X) Dishonesty on the part of spammers themselves
Nope: dishonesty doesn't matter because each site only cares about the quality of the feed it receives from other sites. A dishonest site will have a poor-quality feed, and may be disciplined by being ignored.
> (X) Ideas similar to yours are easy to come up with, yet none have ever been shown practical
Seems quite practical to me. 'We've not bothered to try' doesn't count as a proof of impracticality.
> (X) Blacklists suck
Better than whitelists, no? At some point one has to refuse to do business with bad actors; that's a blacklist.
> (X) Countermeasures must work if phased in gradually
Why? As noted, Usenet is basically dead by now; how would killing it and recreating a new, more trustworthy one not have been better?
> (X) Why should we have to trust you and your servers?
I think maybe you thought that I was proposing some sort of centralised reputation system, which would indeed be crazy.
> A dishonest site will have a poor-quality feed, and may be disciplined by being ignored.
So you're really just doing current antispam, with some additional identification which may or may not be respected (its value lies in forcing manual labor on initial key exchange) and may or may not be scalable (say you put a new honest mailserver online -- are you going to statically exchange keys with all other servers out there? And how do you verify a key is still valid at any point? Are you going to pull something out of DNS? this sounds familiar...). Hardly a silver bullet IMHO, sorry.
> Completely and totally wrong, since in the model I'm talking about reputation is local to my site, not shared.
It doesn't matter: you're a stolen key away from suffering. If I get your key and start signing spam with it, sending it all over the place, you'll soon be overflowed by "yo!"s, you'll have to regenerate keys, rebuild trust (a process you've forced to be slow), and your mail won't go through for days or weeks. The typical exactness of caching algorithms will ensure that minor servers will still be "yo!"ing you for months or years, and again your mail won't go through.
We really have no idea how many private keys get stolen every day. They are just not deployed unless they produce valuable activity (accessing a control panel, signing a malicious driver and so on). Spamming is an immediately-valuable activity, so they would become prized loot overnight.
Also, we've seen enough attacks on HTTPS and other "secure" implementations by now, that we should know signatures are not a panacea.
And all this effort just to add "another signal" to the pile of checks we already do...
> Better than whitelists, no?
They are effectively the same thing, which is why the following option in the list is "Whitelists suck".
> (X) Countermeasures must work if phased in gradually
Why?
Are you going to have "email switch day"? Good luck with that. Look at IPv4 vs IPv6, email is on that scale.
EDIT: apologies, I see you were concentrating on NNTP. However, I don't think the two worlds are particularly dissimilar -- NNTP just never reached the scale and criticality of SMTP, at which point your solution becomes basically unmanageable.
> its value relies in forcing manual labor on initial key exchange
Setting up NNTP peers has always been manual, and one has always gotten one's newsfeed through one's peers, which is why this can work; as your edit noted, this can't work for SMTP because it's a different model.
> If I get your key and start signing spam with it, sending it all over the place, you'll soon be overflowed by "yo!"s, you'll have to regenerate keys, rebuild trust (a process you've forced to be slow), and your mail won't go through for days or weeks.
And for NNTP that would be okay.
Solving the SMTP spam problem is something else entirely, and I wouldn't want to take on that burden. But Usenet could have been saved.
Please don't dust that off. It did way more damage to the web than any spammer ever did.
In addition to that, you failed to understand what signatures do: there is no way you can sign a message with anybody else's key. And it doesn't require immediate cooperation if a signed message with a good reputation is used as one of several spam signals. You want to discuss viagra? Well, better sign your message with a public key and have a track record of not being a spammer.
Blacklists are the de facto solution that we implemented because using that list prevented any consensus on any other method of combating spam.
How is "a list of bad signatures" different from existing blacklists? It isn't. Spammers can just generate new signatures faster than you can mark them as bad... same as IPs, really. And if I manage to get somebody else's private key (which happens so often these days, it's not even funny), I can ruin someone else's career while I push my spam.
Unless, of course, you have a closed system everyone cooperates in; which is nothing like current email and cannot be rolled on gradually.
Honestly now? That list hurts because it's true, and you know it. You cannot wipe out spam without losing the decentralization element built into current protocols. Your "solution" would just add yet another "signal" to the long list of hacks spammers routinely implement.
(Not that I really care: I use gmail, trading my privacy for some of the best antispam systems money can buy, and I'm happy enough. I'm just a bit too old to still believe spam can be fixed, or that "webs of trust" can work at significant scale.)
> Unless, of course, you have a closed system everyone cooperates in; which is nothing like current email and cannot be rolled on gradually.
I was writing about NNTP, not SMTP—they are different.
> You cannot wipe out spam without losing the decentralization element built into current protocols.
NNTP was never decentralised like SMTP is. Each NNTP site had a very limited number of feeds, set up on a social basis. There was no concept that anyone could submit articles anywhere; rather, users submitted articles to their local site, which forwarded them on. In this environment, cutting bad feeds out would have worked (or at least, kept spam to a manageable level, which is all anyone really cares about).
Which is why GP said: "And if I manage to get somebody else's private key (which happens so often these days, it's not even funny), I can ruin someone else's career while I push my spam."
Also, the biggest problem with PGP-type signing as a mechanism to control spam is not whether you can spoof the signature or not, it's that there's nothing in it that stops the spammer from generating a new identity once the current identity is tarnished.
With NNTP, you pretty much have to log in to a system in order to post... this means you are trackable, and if the origin server is doing the signing, this can be traced... It's generally very easy to trace a message through its server chain.
That would require a central authority to verify signatures. Otherwise intermediate sites could just make up new identities once old ones became tarnished.
And a central authority is not really all that decentralized, is it?
> That would require a central authority to verify signatures.
Nope. Each site would be its own verifying authority. All I care about as a site is that the sites which feed me are responsible for what they send me; all each of them cares about is the same. If any one site gets bad enough, I'll start to reject their articles.