It'd be better, of course, if we didn't rely on Signal not storing all that metadata, and instead used a protocol which made it impossible for anyone to be in a position to choose whether or not to store it. Unfortunately, the protocols that enable truly traffic-analysis-resistant messaging (I believe the Pynchon Gate[1] is currently the best of breed) tend to have increased latency and consume greatly increased bandwidth.
I don't really know what the solution is, but I'm very uneasy about the central point of failure Open Whisper Systems is. Moxie's previous points about the difficulty of upgrading a federated protocol[2] are correct, but I think that despite the difficulty it's important to do.
To me, the big question is what a trustworthy political solution would look like.
I see this desire raised a lot, in contexts from HN to Valley-mocking pieces on how encryption is no substitute for advocacy. I completely understand the instinct, but every incarnation of it seems to struggle with the same question. Namely: how do you know when you've won?
Restrictions against collecting data on US citizens didn't produce the expected results. Testimony to Congress didn't accurately depict what's collected, even in secret. In the early days, the very existence of these agencies was classified, to help circumvent restrictions on existing agencies. Years ago, back in the Puzzle Palace days, the DoJ cited systematic criminality but concluded that it was unable to prosecute it.
So... what does winning look like? What regulation, what testimony, what promise could possibly convince people that a solution had been reached, even for the moment?
You never win. To use a controversial example: Who thinks abortion rights people "won" with Roe v Wade? Their opponents have been relentlessly chipping away at that "victory" ever since. When you make something a political issue, you are guaranteeing that it cannot be won with any kind of finality.
I agree, but I'm talking about a scale even shorter than that.
Roe v Wade was a clear and unambiguous advance for abortion rights, and the battle lines are now arrayed somewhere different than they were before Roe. The fight isn't over, but it's fairly clear who holds what.
I'm talking about even knowing when you've made progress. If a federal directive came through tomorrow expansively forbidding the NSA from collecting data on US citizens, privacy advocates wouldn't even hope that bulk surveillance of citizens would stop. They know better, because it basically happened, and the definitions of words got rearranged until the program could continue unabated.
Political issues aren't settled until they fade into consensus belief, but it's usually possible to make progress and then defend it. On surveillance and privacy, there's no law or court decision or whistleblower or even prosecution that can guarantee things aren't continuing exactly the way you didn't want them to.
Exactly. This is why the fight to maintain gun rights is a never ending battle as well. Arming yourself with a gun is the best defense against a violent attacker. Legally enshrining that right is important. But even if we didn't have that right, we could still defend ourselves clandestinely via illegal means and home made weapons.
Similarly, when it comes to security, privacy, and anonymity, the best defense is to arm yourself with mathematical security, and to enshrine our right to those defenses in law. A subpoena only works if there is information that can be handed over in the first place. If strong and private encryption is made illegal, then the best defense is still to use that technology clandestinely.
I feel safer when every private citizen I interact with uses strong encryption. I do not feel safer when every private citizen I interact with carries a gun.
I genuinely appreciate your response but it is not an argument.
Fallacy [1]:
> I feel x
isn't an argument. For example: I feel safer with a gun.
Fallacy [2]:
> ... when every private citizen I interact with carries a gun.
Having the right to carry a gun or the right to encryption does not imply everyone will or must carry one or use it at all times.
With encryption, criminals around you can plan attacks, steal your identity, and trade child pornography without fear of the prying eyes of law enforcement ever discovering the evidence. That would probably make a large number of other people 'feel' unsafe as well. Feeling a certain way isn't an argument.
Of course emotions are an argument. We're not Vulcans. Pretending that emotional impact is an irrelevant factor is a great way to win an argument without ever making anyone care what you said.
But there are no objective conclusions in politics, and asking for such is shutting the door to any useful progress. Ideally, a good political solution is one where all involved parties "feel" that they have realized more of their demands than the others -- not one where one party gets all the spoils based on winning 51% of an artificial binary vote.
I feel (part of) the reason your society is in political gridlock is because everyone keeps looking for that mythical "objective" proof that ensures a 100% victory for their side. But that's just another unicorn.
The challenge I proposed is intended to be an objective exercise; otherwise it's pointless, because everyone can feel however they want.
Here are some examples of how both a gun and encryption can be used for the same end goal.
X can secure a financial transaction
X can stop a thief from obtaining my credit card information
X can stop someone from forcibly obtaining my identity
X can stop an attacker from obtaining private data stored in my home.
The only thing I've been able to think of that applies to encryption and does not apply to a gun is:
Encryption can verify that a message actually came from me: anyone can check, using my public key, a signature made with my private key.
This is objectively true for encryption and objectively false for a firearm. Also a firearm doesn't really help with anything on the internet except maybe a shady craigslist transaction in a dark parking lot. But I meant to imply that a realistic and suitable physical analogy can be applied.
To me, the goal isn't the only thing that matters. How you achieve that goal matters too, and "using a threat of violence" ranks pretty low on the ladder of civility. Trying to equate the arguments based on goal alone is starting your argument from a false equivalence.
nah, I'm still right where I started - arguments that ignore emotional impacts are great little learning exercises but pointless if you are trying to achieve something in the real world. Good luck out there.
There have been countless times where courts have told three-letter agencies to stop doing things, and they have stopped. The judiciary has the power to protect us, much more than we give it credit for.
There's still rule of law, and the executive mostly listens to what the judiciary tells it to do. For all its flaws, some of our institutions work pretty well compared to most places. I cannot think of another country where judges are able to override heads of state on substantial policy outcomes.
I'd actually be interested to see examples. Most of the ones I know are of courts and Congress ordering three letter agencies to stop doing things, and being lied to and ignored.
The infamous one: in its early days, the NSA was ordered to stop surveillance of US citizens. It went before the Church Committee and testified that the relevant sites had been closed for more than a year. This was a lie, bottom to top. The sites were actively operating as those words were spoken, and they weren't closed down until journalist James Bamford exposed the lie. https://theintercept.com/2014/10/02/the-nsa-and-me/
There's a list a mile long of similar stories. Court decisions, executive orders, and acts of Congress have bounced off these agencies without result.
I'd like to see examples, but I agree that you're not wrong in general. The agencies do respond to court decisions sometimes. My point is that when legal compliance is a coin flip and there's no way to check for results, you can't be sure that legal decision has changed anything at all.
I think part of the problem in the US is the culture where, for example, police departments compete for resources, basically to make their own department bigger. One of the ideas in the Ron Paul movement was that government needs to be smaller. Of course, in the real world it probably should not go as far as Ron Paul suggests, but....
> Their opponents have been relentlessly chipping away at that "victory" ever since.
There, the "opponents" are a subset of society that have a legitimate right not to agree with the decision, and they are acting within the public framework of our governance to overturn it.
Here, the "opponents" of strict privacy rights are spooks and crooks in government and international corporations. It is an entirely different matter.
Established inherent rights -- specifically the rights of free speech, freedom of assembly, and protection from unreasonable search and seizures -- need to be protected in context of new capabilities afforded by modern communication, surveillance, and data retention technologies.
Corporations will not pull a dissenting "Roe vs Wade" that would challenge citizen rights. Just let them try that.
Overreaching elements and sub-systems of the government can try to present cases where our (updated) rights present obstacles to the performance of their legitimate, legally mandated activities. And there is ample precedent for oversight of such matters.
A technological cold war with government and industry is not a realistic option. First of all, it is politically useless, since that approach implies that the constitutional framework and our entire system are in effect broken. Second, the "mathematical" bit in secure and private mediated communication systems is the only element where one could possibly argue for parity between the contending parties' capabilities. Why pick a losing fight when there remains the constitutional field, where we have the upper hand by definition?
Enacting laws under the guise of trying to improve women's health with the effect of closing abortion clinics is only surface level "acting within the public framework of our governance". These laws are ruled unconstitutional by the Supreme Court. I claim the legislators and their supporters knew this before they enacted the laws. Unfortunately they also know that it will take the Supreme Court time to make this ruling. And in that time the state laws will act to close many abortion clinics. It doesn't seem very legitimate to me.
> When you make something a political issue, you are guaranteeing that it cannot be won with any kind of finality.
But you also set up the infrastructure to fight the good fight forever. Which is what it takes to make democracy work, and work well.
Because everything important is a political issue, whether you want it to be or not. The Superconducting Supercollider, which was as clear a piece of pure science as you could imagine, was killed by politics. End to end encryption could be too.
Honestly, a major reason we are in this mess now is that for decades Silicon Valley has avoided politics and tried to pretend that the federal government does not exist. Now that it can't be ignored anymore, the tech industry does not have any of the civic institutions needed to build broad public support for its issues.
After Roe, antiabortion strategies had to change. You can say that this was a weakness of Roe, but I think the situation is complex enough that I wouldn't attribute it to a single root cause. I prefer to think of this in terms of the Red Queen Syndrome[1]: solving problems reveals new ones.
Roe v. Wade was still a great start and major victory, relatively speaking. Change takes time.
It's a bit more complicated here, though, since so much of this activity is clandestine. We can't know what rules they may be breaking (unintentionally or otherwise).
At this point, winning looks like the people responsible for abusing their power and overstepping their constitutional authority going to jail. And for a long time.
As long as the only consequence of illegal activity that violates the constitutional rights of citizens is being told to stop, there will be continued efforts to chip away and push the envelope for what they can get away with. If it were made apparent that there are personal consequences above and beyond the scope of their jobs, perhaps some of the people in those jobs would more carefully consider the constitutionality of their actions.
You can no longer decouple strong cryptography from the global economy; that could be considered a strong political solution, because it represents a loss of political control.
Essentially, there is no trustworthy political solution, because politics can always change. As long as the people say, “I've got nothing to hide”, surveillance fans and fear mongers will always find the necessary support.
I always ask, "what happens if it's suddenly illegal to be gay, drink, take selfies, or worse?" I don't think anyone who said "I've got nothing to hide" has ever left a conversation with me without at minimum having to answer that question.
What's scary is that some people seem to want that stuff — as long as it's not them getting taken away and thrown in a cell.
Political solutions change as politics change: I think it's better to be mathematically secure than politically secure, since the one is forever and the other only sure until the next election.
A political decision to stop sabotaging technological solutions would be a good starting point. Even if changed later, all the technologies already developed would still be available.
Being provably secure is great, but this is a tall order -- there are always conditions to satisfy (solution is secure if A, B and C and governments and other attackers might invalidate those by a tap point, decree, a court action, etc.).
> A political decision to stop sabotaging technological solutions would be a good starting point.
In America, we have had those, and it hasn't helped. We have a First Amendment and a Second Amendment, and yet we have campaign-finance restrictions and gun control.
Political decisions simply don't stand. It's terribly sad.
Interesting perspective. I think the majority of the rest of the world is in awe that those two specific restrictions (such as they are today) are so inadequate at protecting anybody (from the rich and crazy people with guns, respectively).
The Bill of Rights was never intended to protect citizens from each other, the intent was very clearly to protect citizens and states against the federal government. Even with this limited scope, political pressures have overcome those two amendments (and most of the others). The parent is very right to point out that it is hard to restrain the majority, even with the constitution (and/or law) on your side.
I don't think it's sad, politics are supposed to change. Obviously you can be sad at specific changes, but the idea that political decisions in general can change is a good thing.
So we went from a time when tracking or reading your communications (mail) was unconstitutional to the present, where the government can track you, listen in on you, and record all of your (electronic) communications 'just in case' they might want to check what you've been doing. Doesn't leave me inspired with optimism.
I'm only responding to "Political decisions simply don't stand. It's terribly sad." I'm sure we can all think of many examples of good political changes. In fact, I think most people would agree that good political changes vastly outnumber bad ones. Blaming current problems on the entire nature of politics seems a little overly broad. We would not be better off if we froze all politics forever in whatever your favorite year was.
"Mathematical security" can be politically banned, is banned in many parts of the world. You only have the option to use mathematical security because other people have been doing the politics for you.
You cannot protect your privacy with just mathematics. It won't help against government tracking your cell phone location or looking into your bank account.
There are plausible solutions for both of those, or at least a clear direction to look for solutions. Unfortunately, there tends to be a critical mass that decentralized systems need to reach in terms of adoption before they become truly feasible.
Consumers aren't interested in buying a "UWB mesh hub" or some such, which wouldn't appear to do anything but drain batteries. But they might, for example, buy a car stereo or security system that uses a wireless device to deliver specific features, and which also happens to help saturate the city with a p2p mesh network.
Likewise with cryptocurrency, most people are not interested in the hassle for some intangible privacy benefit. But a lot of people might be interested in a crypto video game currency that can be easily traded, even between games. Or perhaps a currency-like mechanism to implement quotas on the mesh.
They wouldn't need to outlaw or ban all cryptography, only the particular communication systems that don't have "lawful intercept" capability. There are already laws for this on the books, although they've generally been interpreted in such a way so as not to apply to non-telephony products so far.
TBH it probably wouldn't require that much Orwellian apparatus; you just make the software slightly harder to use than it already is today, and network effects basically ensure that only people who are really interested in communications without government interception (who the government is presumably interested in) are using it. Then you can start doing endpoint attacks, deanonymization via compromised downloads, etc.
The government's -- and I don't mean just the U.S.'s, but most large governments', I think -- ideal for Internet communications is something similar to the telephone network circa 1975. They're fine with privacy between one individual and another (i.e. keeping your conversations private from your neighbors), but they aren't going to be satisfied with any technology that prevents wiretaps by state-controlled apparatus.
I am not exactly bullish on the ability of technology or technologists to resist this, over the long run. Unless there is a widespread and overwhelming realization on the part of individuals that governments shouldn't have this ability, and I don't think that consensus exists even in the liberal West if you frame the question even moderately advantageously to the government, then they will get it. There will always be pockets of noncompliance, and an ensuing cat-and-mouse game, but the steady state will likely be one that deters mainstream usage.
And if we really are seeing the end of Anglo-American geopolitical dominance in favor of countries whose political systems emphasize stability and harmony over individual rights and dissent, then it becomes very difficult to see that consensus ever manifesting itself at a meaningful global level.
> Trump said Tuesday that he would be "fine" with restoring provisions of the Patriot Act to allow for the bulk data collection, something candidates such as former Florida Gov. Jeb Bush have also called for that was banned with the passage of the USA Freedom Act, which Cruz supported.
Well I don't think I'm going to agree given the only person who made it to the GE is a guy who supports surveillance.
RSA, ATT, and Verizon all took bribes from the NSA to screw over their users. At least one worked with DEA as well. These were leaked in prominent media with exposure to millions of American consumers and business people. So, let's test your theory:
RSA Net Income (2011-2015): 426M 320M -306M -115M 79M
Verizon Net same period: 2.4B 875M 11.5B 9.63B 17.88B
ATT Net same period: 3.94B 7.26B 18.25B 6.22B 13.35B
RSA took quite a hit, but it could be the market as well. I don't know what its status was pre-Snowden, but much of the hit comes during the year of the leaks. Revenue dropped a billion or two, with profits turning to losses but rebounding to $75 million in 2015. Verizon and ATT are doing great. Other profitably managed companies that cooperate tightly with Washington are Microsoft, IBM, Google, and Oracle. Their net incomes are in the billions.
So, I think the market data indicates you're wrong even in the worst scenario for working with the surveillance state. Also, the more lock-in the business has, the better it does despite any evil choice it makes. Rule of thumb.
It already is for exports, unless you get a license. They probably approve it, but they left high-assurance security plus a bunch of other stuff classified as munitions. The "victory" of the Crypto Wars was for mass-market stuff that's basically insecure.
Note: I could be really misreading the material, due to not being a lawyer or poring over regs all the time. I think it says all this stuff is still 5A002 (munitions) outside the exemptions they compromised on.
Ex post facto laws are unconstitutional, and something like this (where millions would instantly be in violation) would absolutely, 100% end up in front of the Supreme Court.
Some examples of when a law can be retroactively applied:
When the Securities and Exchange Commission decides that something is a security, it retroactively applies the civil and criminal compliance back to 1934, because it was always a security. I mean, you can argue it in front of a judge if you want, but that's how they established jurisdiction.
Same goes for discretionary tax law at the IRS, or any regulatory agency.
I agree it's a problem, but if you live your life under your own version of reality, it is easy to get railroaded in the dragnet.
> When the Securities and Exchange Commission decides that something is a security, it retroactively applies the civil and criminal compliance back to 1934 because it was always a security.
That's not a retroactive application of the law. If they are correct in their interpretation of the law, it was already the law. If they are incorrect, the courts will not allow it (whether the enforcement concerns acts before or after the determination by the SEC.)
The issue is that they, and other administrative organs, often change their interpretation of what the law "has always meant". Particularly irksome when they issue private letters with differing interpretations and then override all of them with subsequent administrative rulings.
> The issue is that they, and other administrative organs, often change their interpretation of what the law "has always meant".
Yes, and if that conflicts with what the courts believe the law has always meant, those decisions won't survive contact with the legal system. An ex post facto law is a law creating (or enhancing) criminal penalties for acts that exist before the law is passed. Changing administration interpretations are like changing prosecutorial priorities (and the former comes with a lot more notice and specificity than the latter) -- they only have effect so long as they are within the bounds of what the courts will accept was covered by the law when it was passed.
The courts will accept a hell of a lot, due to Chevron deference. Statutes will use a term (eg, "readily convertible" or "replica"), and the administrative agency will decide this term means different things depending on the day of the week and which party is in power. All it has to be is "plausible", not consistent, and the courts will defer to their interpretation.
Just a note: there is a possibility that the person you replied to is comfortable with circular logic about why the behavior is not controversial, under the supposition that "the law is the law." This may be a semantic discussion about why it is not "retroactive" in a legally damning sense, despite the fact that somebody can still be civilly and criminally sanctioned for something they did in the past, long before representatives of the government decided that person's prior actions would fall under their jurisdiction.
I'm not sure what you mean, all of those agencies are formed under the constitution and the systems that support them are also abiding by the constitution
The Constitution states, 'no Bill of Attainder or ex post facto Law shall be passed'; to the extent that those executive agencies' enabling legislation permits ex post facto regulations, that legislation is unconstitutional.
They're definitely not. They've resisted due process for some time. NSA's reps even argued in the Jewel case that the judicial branch shouldn't be allowed involvement at all. Which is sort of the status quo for intelligence agencies and courts. Prosecutors similarly have a combination of broad powers, immunity from common abuses, and virtually no accountability. The government as it exists certainly doesn't run within the framework of the Constitution, except in a partial way.
But they can instead make it illegal to use such software, which would not be an ex post facto law (it would only apply to uses of said software after the law was enacted).
Yes. However, that doesn't change the fact that largely harmless casual users ended up losing years of their lives, and had their ability to earn a good living destroyed in the process.
Think democracy. We have to fight for democracy over and over, and we have a technical solution in the form of an election process that's designed to make tampering hard and in the form of institutions controlling each other for a good reason. There is no political solution to the risk of putting all your trust in a single person, aka a dictatorship, there is only a technical solution, and that is democracy: A system of government that avoids the single point of failure at great cost.
Such a thing isn't coming. The population at large doesn't care, so no major party candidate will ever fall on this line. We have to fight using technology until it becomes a politically relevant issue (which may never happen).
Maybe the pirates win the upcoming Icelandic election. They give asylum to Snowden and decide to become kind of like the Estonia of online privacy. Maybe they invest public funds in accelerators and scholarships etc. With a political climate like that, Icelandic businesses could make stronger claims about protecting data, making that a point of global competition. And then maybe the US politics could slowly start to change.
You want the government to give up the right to access communications that it has the ability to access? You're talking about a fundamental weakening of government greater than has ever been attempted. Governments have always had the power to access your mail and papers and such, the only changes over time is the legal hurdles they must use to exercise that power.
This may seem like a small nitpick to some, but I think it's extremely important that people remember that, at least in the US, rights are reserved for The People; the government, by definition, does not have rights, only authority (explicit and implicit).
It's not even a nitpick as much as a distraction from what's being talked about. In the sense that I used the word "right", it can be exchanged for the word "power" and have the exact same meaning. In fact, I make that exchange in my comment. What exactly is your point?
My point is that terms like 'right', 'power', and 'authority' are not interchangeable, they mean different things. There are other tangential points, but this entire thread is essentially about people objecting to government exercising power without authority in a way that violates rights. I hope that illustrates both the differences between these terms and why it's important to use the correct language in this discussion.
The problem is, I don't think you can call it a form of giving up, or a fundamental weakening of power.
We are creating absurd amounts of information compared to before. Just because the US Gov could access the measly amount of info that was generated before doesn't mean that it should be able to access the crazy amounts created now. Only from a very narrow perspective can anyone call this a "fundamental weakening of government". Compared to before the internet, they're still drowning in insane amounts of data.
We can also add that if they can access some things, they will manipulate their way into accessing more things. Which means that reducing privacy and security is just optimization for them. And that will have costs beyond the US Government's own doing.
Sure, the current law climate seems to be that they can access it. But that climate was created with pushes from LE agencies and ignorant politicians. You may argue that that has always been the case, but clearly there's increasing demand for this to be decided democratically. So US Gov "giving up" this "right" might be the thing that democracy wants.
I'm also kind of tired of having to fight my own government every step of the way. My conclusion is different though: I'm no longer interested in fighting for political solution that will be overturned the next time we turn our backs, meaning constant fighting and more time where shitty laws apply than not. Hence, I prefer a technical solution.
Can someone explain how people are imagining protocols that do not create / store metadata? This seems fundamentally impossible on a packet-switched network. After all, the data has a source and a destination, and goes through infrastructure that's tappable (and in large part already tapped) by a state-level actor.
About the only thing that comes to my mind would be a digital equivalent to broadcasting a radio signal - a protocol, under which everyone receives all communication that's done over that protocol, but each person can only decrypt the part that's addressed to them directly. This would reduce the metadata to "who's broadcasting", without revealing the listener.
EDIT: Some back-of-the-napkin calculations on such broadcast protocol:
I took a look at today's communication with my SO; rounding somewhat up, it would be ~100 messages of on average 50 characters, going in both directions. That gives, using 2 bytes per character and multiplying by 1.5 to account for protocol-related padding:
- 15000 bytes / user / day of a single conversation
Say this protocol has 1M users; that gives us:
(50 * 100 * 2 * 1.5 * 1000000) / (1000 * 1000 * 1000)
15 GB of data, spread over the whole day.
Seems manageable; especially if one were to bucket it by e.g. hour by default, or less if the client is active and streaming data continuously. Definitely a mobile bandwidth killer, though.
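The arithmetic above can be checked in a few lines; all the inputs (100 messages, 50 characters, 2 bytes per character, 1.5x padding, 1M users) are the rough estimates from the comment, not measurements:

```python
# Back-of-the-napkin bandwidth for a naive "everyone receives everything" broadcast.
msgs_per_day = 100       # messages per user per day, both directions
chars_per_msg = 50       # average message length
bytes_per_char = 2
padding_factor = 1.5     # protocol-related overhead
users = 1_000_000

bytes_per_user = msgs_per_day * chars_per_msg * bytes_per_char * padding_factor
total_gb = bytes_per_user * users / 1000**3

print(bytes_per_user)  # 15000.0 bytes / user / day
print(total_gb)        # 15.0 GB / day across all users
```

Note that this counts only payload: in a real broadcast system every user must also *download* the full 15 GB/day stream (or some bucketed slice of it), which is where the mobile-bandwidth pain comes from.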
Look into verifiable shuffles, private information retrieval, and dining cryptographer networks.
By combining such techniques, it's possible to be much much better than naive broadcast, and also possible to use a client/server architecture so that low bandwidth endpoints can participate.
Have a look at systems such as Dissent and Riffle.
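To make the dining-cryptographers idea concrete, here is a toy single-round DC-net sketch. It assumes pairwise one-time pads have already been distributed out of band; real systems such as Dissent add collision handling and disruption resistance on top of this core trick:

```python
import secrets

def dcnet_round(n, sender, message):
    """One DC-net round: n participants, one of whom broadcasts a message."""
    length = len(message)
    # Pairwise shared pads: participants i and j hold the same random pad.
    pads = [[None] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            pads[i][j] = pads[j][i] = secrets.token_bytes(length)

    outputs = []
    for i in range(n):
        out = bytes(length)
        for j in range(n):
            if j != i:
                out = bytes(a ^ b for a, b in zip(out, pads[i][j]))
        if i == sender:
            # The sender additionally XORs in the message.
            out = bytes(a ^ b for a, b in zip(out, message))
        outputs.append(out)

    # XOR of all broadcasts: every pad appears exactly twice and cancels,
    # leaving the message. No single output reveals who the sender was.
    result = bytes(length)
    for out in outputs:
        result = bytes(a ^ b for a, b in zip(result, out))
    return result

print(dcnet_round(3, sender=1, message=b"hi"))  # b'hi'
```

The anonymity comes from the cancellation: an observer sees n uniformly random-looking strings whose XOR is the message, with no way to tell which participant injected it.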
The Pynchon Gate solution boils down to private information retrieval: a number of distribution servers hold all blocks of information, and one then downloads from each server the XOR of certain blocks, such that XORing all blocks together yields the single block one is interested in.
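A minimal two-server version of that XOR trick looks like this (toy code: the real Pynchon Gate design uses more servers and adds integrity checking, but the information-theoretic core is the same):

```python
import secrets

def pir_query(num_blocks, wanted_index):
    """Build one query per server; neither mask alone reveals wanted_index."""
    mask1 = [secrets.randbelow(2) for _ in range(num_blocks)]  # uniformly random subset
    mask2 = mask1.copy()
    mask2[wanted_index] ^= 1  # same subset with the wanted block toggled
    return mask1, mask2

def server_answer(db, mask):
    """Each server XORs together the blocks its mask selects."""
    acc = bytes(len(db[0]))
    for block, bit in zip(db, mask):
        if bit:
            acc = bytes(a ^ b for a, b in zip(acc, block))
    return acc

db = [b"blockA__", b"blockB__", b"blockC__", b"blockD__"]  # all blocks equal length
m1, m2 = pir_query(len(db), 2)
a1 = server_answer(db, m1)
a2 = server_answer(db, m2)
# The two answers differ only in the wanted block, so XORing them recovers it.
recovered = bytes(x ^ y for x, y in zip(a1, a2))
print(recovered)  # b'blockC__'
```

As long as the two servers don't collude, each sees only a uniformly random subset of block indices, so neither learns which block the client actually wanted.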
One way your broadcast can be achieved is through a newsgroup. The best an observer can do is identify who is a member and who posts messages. But so long as the messages are encrypted, they can't tell who gets which message, since everyone in the newsgroup gets every message.
Still not perfectly metadata proof, but it carries a lot less metadata than peer-to-peer messages.
This has actually been done for decades with Usenet. Every encrypted message winds up on a common list, and clients occasionally scan the list to see if they have a key to decrypt any new message. The metadata is then that you put a message there, or that you asked for a list of new messages, though that can be reduced to "you used Tor", which until such a message server became commonplace might be better.
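The scan-and-trial-decrypt pattern can be sketched like this. This is a toy scheme, not real cryptography: a SHA-256-derived keystream stands in for proper encryption, and an HMAC tag lets the intended reader recognize their messages; an actual system would use an AEAD with fresh nonces per message:

```python
import hashlib
import hmac

def post(board, key, plaintext):
    # Toy "encryption": XOR against a key-derived stream (plaintext <= 32 bytes),
    # plus an HMAC tag so only the key holder can recognize the message.
    stream = hashlib.sha256(b"stream" + key).digest()
    ct = bytes(p ^ s for p, s in zip(plaintext, stream))
    tag = hmac.new(key, ct, hashlib.sha256).digest()
    board.append((tag, ct))

def scan(board, key):
    # Each client tries its key against every posted message; only messages
    # whose tags verify are decrypted. An observer sees who fetched the list,
    # but not which messages (if any) were theirs.
    found = []
    for tag, ct in board:
        if hmac.compare_digest(tag, hmac.new(key, ct, hashlib.sha256).digest()):
            stream = hashlib.sha256(b"stream" + key).digest()
            found.append(bytes(c ^ s for c, s in zip(ct, stream)))
    return found

board = []
post(board, b"alice-key", b"hello alice")
post(board, b"bob-key", b"hello bob")
print(scan(board, b"alice-key"))  # [b'hello alice']
```

The cost, as with any broadcast scheme, is that every client downloads and tests every message, which is exactly the bandwidth trade-off discussed above.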
In addition to all the systems and protocols mentioned by sibling posters, I'll add ricochet:
https://ricochet.im/
It deals with the metadata leaks by using Tor. Every user runs a Tor hidden service, and the user's identity is the address of that service. So no single point in the chain can tell who is talking to whom, without having to resort to a broadcast protocol like the one you describe.
Blockchains can do this, but they create relatively absurd systems of who can and can't send messages based on how much they've mined. Blockchains are HORRIBLE for encrypted messaging, since your message is PERMANENTLY part of the chain. So if the encryption is broken, or your password gets leaked, ANYONE can read your messages.
In most cases a DHT is far simpler. Naturally some nodes can be evil and log metadata; that's a risk, but it also exists in blockchains. Furthermore, DHTs are lower latency than blockchains.
Blockchains aren't a magic fix-all solution for network consistency/privacy models. Outside of currency transactions it is a horrible distributed-system model.
Also if you look at it in terms of the CAP Theorem:
Consistency (every read receives the most recent write or an error): Not true for blockchains. All reads are dirty; the further back in time you go, the higher the probability that a read isn't dirty.
Availability (every request receives a response, without guarantee that it contains the most recent version of the information): 100% true.
Partition tolerance (the system continues to operate despite arbitrary partitioning due to network failures): Yes, the system will continue to function, but you can suffer data loss when the partition is healed.
Sorta, but not quite. The only similarity is that you download global state. But you could continuously throw that state away if you don't need archiving of messages / long history.
The paper's conclusion says the cost of running Vuvuzela is pretty high per month, so you would need a benevolent millionaire to make it happen.
As for Signal: what every prosecutor wants is metadata to show the court that user A was in communication with user B. The actual contents of the messages aren't important, especially in a conspiracy case; if user B is an informant, their word against yours plus metadata showing you communicating is good enough.
Besides forcing Signal to keep this metadata in the future, I wonder if they can just obtain it themselves by watching all traffic on their federated servers and timing it to discover communication networks.
The cost of running Vuvuzela is dominated by bandwidth, and the paper used AWS prices to estimate the cost; purchasing IP transit directly would lead to about an order of magnitude reduction in costs (still non-trivial, of course).
Order of magnitude? 1 Gbit on Amazon costs $22k/month. Even an order of magnitude cheaper sounds pretty damn expensive, and two orders of magnitude still isn't "cheap".
It's really easy to underestimate how big of a ripoff EC2 bandwidth pricing is.
1 Gbit in Chattanooga, TN costs $350 a month. You've got to buy some servers and maybe pay a colocation fee, but that's way less than $22,000 a month. Actually, while I was looking that up, a recent press release says they're deploying 10 Gbit to homes for $299 a month. Things may have improved on the business side, too. :)
On the expensive end you'd be looking at around $600 for that at $NOT_AMAZON, and on the cheap end you'd get it for "free", because plenty of hosts have excess bandwidth.
DO doesn't separately charge for BW or enforce any limitations AFAIK.
> Unfortunately, the protocols that enable truly traffic–analysis-resistant messaging (I believe the Pynchon Gate[1] is currently the best-of-breed) tend to have increased latency and consume greatly-increased bandwidth.
One recent project that validates this is from ACM SOSP'15 titled "Vuvuzela: scalable private messaging resistant to traffic analysis"[1] (open-access URL):
> Vuvuzela has a linear cost in the number of clients, and experiments show that it can achieve a throughput of 68,000 messages per second for 1 million users with a 37-second end-to-end latency on commodity servers.
> Vuvuzela works by routing user messages through a chain of servers, as shown in Figure 1, where each of the servers adds cover traffic to mask the communication patterns of users.
Similar to the P2P project Bitmessage[2] where clients receive and forward traffic not related to themselves.
There were some comments on HN discussing FreeNet and how it similarly forwards traffic unrelated to an individual client, but which has led to conviction by police, unfortunately.
>> Freenet ... which has led to conviction by police, unfortunately.
> Just for using it? That's crazy! Do you have a source?
Hm, I admit fault, hastily writing the above reply. I do not have sources for actual convictions, so what I wrote is not validated.
s/has led to/may risk/
Source[1] that I read prior to my comment, which is under the thread[2] "Suspect jailed indefinitely for refusing to decrypt hard drives". The discussion was along the lines of, if you have encrypted data, and the state "knows" it has illegal content, your not decrypting it makes you liable for it. Thus the extrapolation to use of Freenet, which forwards encrypted content from others, and is heavily littered with CP[3], according to HN commenters.
No, not just for using it that I am aware of. I just mean that pure anonymity on Freenet is not the absolute best, and they suggest trusted peers only. Traffic analysis (among other things) is a hard problem to solve without introducing noise and requiring large bandwidth. I think what the parent was saying is that it's unfortunate that [1] is possible (without making any statements supporting the criminals, of course).
What metadata? All they were able to produce was whether or not a phone number was associated with Signal at all, and the last time that phone number's account pinged the Signal service for any reason. They produced virtually no metadata to the investigation.
Only because they don't store it. They are able to choose to store it at any point; we can only rely on their honesty (and lack of compulsion). It's better to have a protocol in which there isn't any significant metadata to choose to store.
I don't distrust them today, but I have no way of knowing what their future behaviour will be. I'd prefer not to have to trust.
How many people build from source? Moxie worked to stop distribution of binaries outside the Play Store, which is not unreasonable in itself but leaves a nice central point to target individuals.
Who owns Open Whisper Systems? I know Twitter bought Whisper Systems, and with it TextSecure and RedPhone, though I'm unsure who currently owns Open Whisper Systems and its products such as Signal (I assume Twitter does). Twitter is also rumoured to be for sale, with Microsoft looking at acquiring it, so the future of Signal not keeping this metadata depends on who acquires it.
Open Whisper Systems is a nonprofit founded by Moxie Marlinspike in 2013, and it's a different entity than the "Whisper Systems" Twitter bought earlier.
Yeah, I wish he went with another name, too. Not only is Open Whisper Systems making it more confusing, but it's also quite a mouthful.
Not by subpoena alone, but in theory a court order could order them to modify their software (server or client) to collect data. They could fight that court order, and in particular it seems like they'd have a good case on the grounds that such an order would destroy their entire business, but legally a court could at least attempt to issue such an order.
I'm more concerned about this NSA/FBI partnership that secretly forces cooperation somehow. It might apply up to and including forcing a backdoor into the binary. Here's the program for US cooperation:
Notice how the lower levels of secrecy talk about how the companies cooperate or are partners. Then, you hit TS/COMINT and TS/ECI to find additional detail:
"the fact that FBI provides assistance with compelled and cooperative partnerships associated with WHIPGENIE"
"details of FBI assistance with compelled and cooperative partnerships with WHIPGENIE"
So, it's a backdooring program with major U.S. companies whose assistance the FBI "compels" when they don't willingly cooperate. I have no idea what that entails, but it works. It might work better on smaller firms with fewer resources, or maybe less, as I think damaging an income stream matters more to greedy CEOs at public firms. So, who knows.
Personally, I think they're pulling a variation of Escobar's silver-or-lead policy. You take generous bribes to do what they want or you take generous donations of lead. In this case, legal fines or imprisonment rather than actual lead.
They would also be able to figure out what phone numbers it was communicating with, at what times and how often - they do the routing. We're relying on them not to store that metadata, which is the problem.
No, not really - they can decide at any time to start storing that metadata of their own will, use it internally, sell it to advertisers or turn it over via a direct law enforcement portal if they want. Their access alone is in many ways a problem.
Of course, we are agreeing. But as the other front page story shows us, even companies with the best of intentions and made up of people with strict moral codes devoted to providing their users a secure product can be usurped by government forces with ulterior motives.
I believe Whisper doesn't want this data. I can even trust that they would never collect it of their own volition. But it's irrelevant if they can simply be compelled to collect it, or even worse, someone within their organization can be compelled to secretly collect it and may even do a better job at this secret collection than Mayer's lackies at Yahoo!.
How would they be able to do that? Can you quote where in the article you got that and how that would happen?
"Notably, things we don't have stored include anything about a user's contacts (such as the contacts themselves, a hash of the contacts, any other derivative contact information), anything about a user's groups (such as how many groups a user is in, which groups a user is in, the membership lists of a user's groups), or any records of who a user has been communicating with."
If they don't store the contacts, or even a hash of the contact, how can you figure out who was talking to who?
> If they don't store the contacts, or even a hash of the contact, how can you figure out who was talking to who?
Think of it like email. The body is encrypted, the subject is encrypted, any attachments are encrypted, but in order for them to route the message to the correct destination every message you send still has a "To:" field - their server still decides who to send the notification to.
They don't need to read the contacts or anything of the sort - they just read the "To:" field.
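A sketch of the idea, using an assumed envelope shape (this is not Signal's actual wire format; the field names are made up for illustration):

```python
import json

# What a routing server might see: the payload is opaque ciphertext, but
# the envelope it needs in order to deliver the message is not.
envelope = {
    "to": "+15551234567",               # needed to route: visible to the server
    "timestamp": 1475625600,            # when the message passed through
    "payload": "base64-ciphertext...",  # opaque to the server
}

# The server never decrypts `payload`, yet who-talked-to-whom-and-when is
# sitting right there in the envelope if it ever chooses to log it.
metadata = {k: v for k, v in envelope.items() if k != "payload"}
print(json.dumps(metadata))
```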
"Please don't insinuate that someone hasn't read an article. "Did you even read the article? It mentions that" can be shortened to "The article mentions that."
> Unfortunately, the protocols that enable truly traffic–analysis-resistant messaging (I believe the Pynchon Gate[1] is currently the best-of-breed) tend to have increased latency and consume greatly-increased bandwidth.
Yeah both of these are a pretty large problem for a messaging service designed to replace SMS on phones. Really anything other than a forum or email replacements will have a very hard time getting adopted if they have too much latency.
So, when is it going to be considered misconduct for Dana Boente and the (not so) honorable Theresa Buchanan to tack on gag orders for no good reason? How do we change that? Calling our representatives in Congress won't help. Signing petitions is laughable. I'm at a loss for how to change this as a regular citizen.
I'm confused. It seems you have more information than the blog post and attached documents entail.
Do you know both how long the investigation would be under for? Do you know the timing?
Stating there's no good reason is not true: it's quite possible that a gag order is issued to protect the investigation, including the identification of suspects, the number of suspects (at least two in this case), changes in behavior (e.g. a switch from Signal to smoke signals, fax, or just laying low for a while), etc.
What happens when two days after a terrorist attack, OWS publishes a subpoena for the first time? I for one welcome that they go through the official channels to get the redacted version approved. Let's not botch investigations for the sake of pitchforking the "everything should be public" slogans.
I think a middle ground is possible in this specific case. The government could have said "we're placing a selective gag order, meaning you can't publish this notice but you can publish the redacted notice that we've helpfully attached", rather than wait for OWS to file a petition to publish the redacted version.
Uh, this isn't the direction I was expecting this to go. When you start killing judges, corrupt or not, you have Mexico. I was thinking more along the lines of being disbarred and exposed to civil and criminal liability.
Gag orders should be required to have a short duration and, once lifted, be subject to an open and thorough examination, where it's the state's responsibility to prove that the gag was necessary. Failing that, the prosecutor and signing judge would be liable.
The problem is our lack of political momentum and any lever for average citizens to use to move the judicial system to reform.
When has assassination ever accomplished what the assassin wanted? Most successful political movements, at least in recent history, do not involve killing anybody.
With higher profile assassinations, the results seem to be less predictable. But I think we tend to ignore all the smaller assassinations that gave just the right edge to the opposition since those people just fade into the background noise.
Do you have an example? If you're talking about government-sponsored assassinations, I think that's less relevant, we're talking about how "normal" people can influence politicians.
The assassination of JFK & RFK put Johnson into power. Power mongers pulled all kinds of schemes from there. The assassination of MLK, per testimony in a court case, was intended to stop an overthrow of the U.S. government by masses tired of their shit. That didn't happen. Numerous witnesses to various instances of intelligence misconduct never lived to say so in court. Those things are still debatable due to missing information. And so on.
Assassinations that have worked so far were done by the power establishment to expand or protect its power. They tend to work. They're rarely needed, though, as they can do things like media spins, disenfranchising voters, regular arrests, and so on.
As I said in another comment, this thread was about how average people could influence politicians. Government-sponsored assassinations are a different discussion.
Assassinating the most corrupt officials on a regular basis could have similar influence. They'd know they could pull schemes for money but would die past a certain point of damage to the people or erosion of their end of the constitution. It works similarly.
It would just be harder as they'd have (a) lots more security, (b) most police going after whoever killed them while simultaneously ignoring whatever the politicians were doing, and (c) media protecting them since corruption works to media owners' benefit as well.
Your first sentence assumes we know the assassins' motives. I question that.
Your second sentence assumes that any assassinations would be publicly known (eg assassination of a public figure). Assassination of non-public figures may well be impactful and yet go largely unheeded.
I'd have a vastly different view on politics if I believed in the Kennedy assassination being an inside job, as your offhand "grassy knoll" comment suggests you do.
In all seriousness, and meaning no disrespect, have you researched the Kennedy assassination and come away with the impression there was no conspiracy?
Because I didn't even think it was particularly controversial that the official story is bunk. Making no more assumptions about the circumstances of his death (ie, who or how), a conspiracy would be required to produce an official explanation that was lean of some inconvenient facts. But I'm always in the market for convincing evidence that I'm wrong.
Yeah, I've been quite interested in it and other conspiracy theories, like the 9/11 conspiracy theory (also bunk), etc.
Most of what these conspiracy theories consist of is just anomaly hunting: you take a really complex event and try to find any sort of unusual thing, which of course you're going to find, since it's a big and complex event.
Check out this interview (around 40m) in [1] with Gerald Posner, the author of Case Closed[2] (a non-conspiracy Kennedy assassination book) for a good start down the rabbit hole of non-conspiracy coverage of the Kennedy assassination.
Sorry for the late reply, but I didn't mean to imply anything about an inside job or other conspiracy theory. My point was that a sure-fire way to affect a politician is to kill them. Everything else is a "this might work..." proposition. I prefer absolutes.
I see how my comment could have been misinterpreted by so many people in hindsight. Sometimes I look at things through a very specific-to-my-experience lens. In another life, myself and squadmates(think coworkers) would get orders such as "Cleric(foreign equivalent of mayor/publicly-supported-bad-actor) <John Doe> and all his/her supporters are threats and should be neutralized". That didn't mean we should protest their policies or otherwise attempt to subvert them...those aren't guaranteed solutions. The finality of killing is a guaranteed solution. If they become a martyr and create even more undesirables through their demise...don't despair - just accept that the list of undesirables has grown and await the very predictable orders. It's tried, true, and only protested until the protester's name joins the list and is subsequently checked off.
I'm not too surprised to see an attempted overreach by federal investigators. Too bad there's no measure of meaningful accountability here.
Outside the usual "let's ask for more than we're legally entitled to" shtick, there's nothing particularly alarming about this subpoena; it was narrowly focused on two phone numbers, of which only one was a Signal user.
It's good on OWS to fight so hard for transparency.
Funny that Open Whisper Systems essentially wrote in the closing section that the FBI should come back with a court order or search warrant to get more data, but left out the critical information that even then the FBI will not get more data, because Open Whisper Systems has no technical ability to produce it at all.
It's volatile data exchanged between the clients only, but not centrally stored anywhere (contrary to all other secure chat systems out there).
The FBI has probably no idea how Signal works, what is stored and what not.
Even a grand-jury subpoena has no chance of producing more data. But maybe they can force them to re-implement Signal with a government backdoor (because it's a police state, after all), and that's what Open Whisper Systems is really objecting to?
Or just logging the metadata? (Which, btw, DuckDuckGo does, even though it slows down their webserver by at least 20%.)
Or did they just try to mess with the FBI lawyers?
They do start with "Although OWS does not have, and therefore cannot produce, other categories of information listed in the subpoena ...". So they do invite a search warrant, but they warn that nothing should be expected from it.
What is the solution here? The solution that minimizes damage involves everyone building from source and connecting in a peer to peer fashion that makes it pretty difficult to push a malicious update if you're looking for targeted surveillance.
However, even this requires an understanding government that isn't willing to poison the well in order to get to the target. A government that justifies dragnet (and whose agencies allegedly buy and sit on a stash of zero days) isn't something I'd trust to be bothered by the idea of leaving many people vulnerable in order to catch one bad guy.
I know it sounds trite, but technology will not provide a full solution here. We need a lot of lobbying and a lot of PR to have any chance. Coordination will be very challenging when our goals vary so wildly. But I guess we need to ask ourselves where we stand on this issue. Given that we have difficulty getting almost half of the people to even bother registering and showing up to vote, this is an uphill task.
The obvious solution is a federated protocol. There's no reason for Whisper or Google to be involved in routing messages except to own the system's concept of identity.
Trust in binaries is a harder problem, but reproducible builds are probably an important part of it. If several separate entities vouch for the binary, you have reason to believe that what you run corresponds to the published source code.
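The vouching idea can be sketched as a quorum check over independent attestations (the builder names and quorum threshold here are hypothetical; real systems also need signatures over the attested hashes):

```python
import hashlib

def digest(binary: bytes) -> str:
    return hashlib.sha256(binary).hexdigest()

def trusted(binary: bytes, attestations: dict, quorum: int = 2) -> bool:
    # Accept the binary only if enough independent builders, each having
    # reproduced the build from the published source, vouch for its hash.
    h = digest(binary)
    return sum(1 for a in attestations.values() if a == h) >= quorum

# Hypothetical attestations from three independent builders.
binary = b"...app bytes..."
attestations = {
    "builder-a": digest(binary),
    "builder-b": digest(binary),
    "builder-c": digest(binary),
}

print(trusted(binary, attestations))      # True
print(trusted(b"tampered", attestations)) # False
```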
I'm really happy they provided documentation on how to fight an unconstitutional gag order on a subpoena. They put gag orders on subpoenas they're not supposed to all the time, and it's good to show people an "easy" way to fight them.
FYI, Signal has access to all metadata about messages and calls (but not the content of course). They claim not to store it and I believe them for now but someone else could be storing it.
They don't have access to group message membership directly. A group appears as a bunch of one to one messages between the participants, so they might still be able to infer it.
I think you misread woah's comment. Signal has access to the metadata but chooses not to store it in order to be able to remain unresponsive to queries like this.
So the real threat here is that the FBI wants to come in with a search warrant, install one of their famous splitters (a Windows machine, btw) and route all metadata traffic to the NSA.
To legally get contact data in real-time for one criminal under investigation. Essentially the Lavabit case.
Or is that what they called PRISM? Legal route splitting at the endpoint, i.e. legally declared as such by the secret FISA court, because there's oversight...
Notably, the source for the voice call server is not available as far as I know, and there's no guarantee that the text messaging server is running in an unaltered state on their production servers.
The messages and calls are routed through their servers, so they could (but do not) store whatever metadata is required to route messages and calls. This would at least be the recipient's account and IP address, the sender's IP address (but not necessarily their account), and the current time.
I believe the GP is echoing zeverb's sentiment, that it would be preferable if OWS (Signal) could not even be ordered to collect such data.
The thing I don't understand about the GP's post is "but someone else could be storing it". I would expect the entire message (including headers / metadata) to be encrypted in transit, with a pinned key, so that only OWS has access to the routing metadata. Please correct me if I'm wrong.
Check the metadata portion. One thing to note: this isn't surprising at all. All of the centralized IM servers can do this, and usually more. The alternatives that try to minimize or obfuscate metadata are far from market-ready.
Yes, some MITM entity between OWS and the internet would be able to deduce who is speaking with whom once a large enough number of messages has been sent. In fact, any big ISP could do that for parties communicating from inside its network.
That's very neat, and I'm really glad to see privacy-enhancing technologies working.
I'm curious what type of metadata Facebook would have from the Signal integrations with WhatsApp and Messenger. Is there more, less, or the same? Has anyone looked into this?
I really hate that every messaging app nowadays requires a phone number to use. Sure, it makes some things easier, but it's very difficult to get a phone number anonymously.
They should include an email signup option, or even better just a username/password option, although that would cause some issues with spammers, which can probably be mitigated in other, more creative ways.
I actually trust OWS in this case. They have taken every precaution to make data seizure (all but) impossible.
Signal is the best shot we have at widespread, usable private communications at this point. It's about time we get around to supporting it. Be pragmatic.
At the bottom of the gag order it states that OWS "may disclose the attached subpoena to an attorney for [OWS] for the purpose of receiving legal advice".
Is that required to be there? Is it just a courtesy? It would be unconstitutional otherwise, but I don't know. It just seems odd. Is the attorney now bound by the gag order?
Well maybe they could contact an attorney through an online chat system that does leak conversations and surprisingly posts those private conversations about shady things govt wants to their website front page. What a bummer. Some lawyer needs to create www.gagorderattorney.com
"In the "first half of 2016" (the most specific we're permitted to be)"
I note that the documents use a proportional width font, and there's been previous research into using the width of blacked-out sections of redacted documents along with information about the font to work out possible character combinations that fit appropriately...
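A toy illustration of that attack: with a proportional font, the measured width of a blacked-out span constrains which strings can fit under it. The per-character widths and candidate list below are entirely made up; a real attack would measure the document's actual font metrics.

```python
# Hypothetical per-character advance widths (arbitrary units) for a
# proportional font; real metrics would come from the PDF's embedded font.
CHAR_WIDTH = dict(zip(
    "abcdefghijklmnopqrstuvwxyz",
    [10, 10, 9, 10, 10, 6, 10, 10, 4, 4, 9, 4, 16,
     10, 10, 10, 10, 7, 9, 6, 10, 9, 14, 9, 9, 9],
))

def width(word: str) -> int:
    return sum(CHAR_WIDTH[c] for c in word)

# Candidate strings for the redacted span, and the width we "measured".
candidates = ["smith", "jones", "li", "hernandez"]
redaction_width = width("jones")  # pretend this was measured from the PDF

# Keep only candidates whose rendered width matches within a tolerance.
fits = [w for w in candidates if abs(width(w) - redaction_width) <= 1]
print(fits)  # ['jones']
```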
Can a privacy service really be built in the US, and that too in SF, which is ground zero for the fantastic new surveillance economy being imagined and built?
We know that freedom-loving software engineers, after decades of posturing, have long since folded and left Snowden holding the baby.
We also know companies here are either closely linked to intelligence agencies or bending over backwards to cooperate.
We know the executive branch is in the middle of a full-blown identity crisis over whether they are the good guys or bad guys of the world. Closely followed by a legal system that has developed a third-world-regime-like affinity for blanket gag orders and rubber-stamping with 100% approval rates. This is a bit like tasking the fox with protecting the hens.
What stops a government-friendly company from acquiring Whisper Systems, or Whisper itself from being some sort of release-valve operation?
WhatsApp does store metadata in plaintext which makes it susceptible to law enforcement or 3rd parties.
The contents of the messages are still end-to-end encrypted; that being said WhatsApp does default to backing up chats in the cloud and those could be subpoenaed by a government.
And presumably you have a record of those same chats revealing the content was, "I'm not interested in participating in your conspiracy to commit fraud, stop talking to me."
If you meant to imply they could abuse this capability to get a warrant, I'll be concerned about it when they have any trouble getting warrants.
It might be worth considering why authorities think banning or regulating encryption is tractable.
- 100+ years of business telecommunications without significant strong encryption.
- Robust wiretapping and law enforcement access laws and practices that mean there is NO place or piece of information within US sovereign territory that is inaccessible to an authorized agent of the state.
- they have the expectation of total control. Hell, beat cops can shoot you over minor "comply or die" orders.
- Crypto isn't about your email or even evidence in a particular case, it is about the completeness and totality of their authority.
- States around the world routinely decimate their populations in civil wars and massacres to ensure the same people remain in power. From the LE perspective, anyone who threatens the sovereignty of the state is a terrist they would compete for the opportunity to shoot.
Hackers don't get it. If the crypto debate ever gets real, you cannot imagine how real it will get.
100% to EFF because they specialize in issues important to me more than ACLU, which has a broader mandate, and because EFF's budget is about 10% of ACLU's.
The ACLU does a lot of important work but goes far beyond tech. You should take a closer look at the ACLU to look at what else they do and how you feel about that.
> All message contents are end to end encrypted, so we don't have that information either.
The way I'm reading/understanding this is that they have the encrypted messages but don't specify whether they are stored. However, since the messages are encrypted, they don't have the message contents. In conclusion, they may have all the messages saved, albeit in encrypted form and with minimal metadata.
Did I come to the right conclusion? Or does Signal not store the encrypted message data either?
Considering the way they claim to minimize the metadata stored, I wouldn't expect them to store encrypted message content after it is delivered to the client.
It'd be difficult to delete metadata about a message, but still keep the content. And they are claiming to not retain message metadata.
At the risk of sounding clueless, how is it possible that Signal can’t say which account belongs to which phone number when the only concept of a username on Signal is that of a verified phone number? When I initiate a conversation, am I not at that moment using a lookup which now people are saying does not exist!?
What do they mean by "upstream and downstream providers"? I would think it'd mean the ISP associated with the IP address from the logs, if they had it? Not sure why they worded it that way, or if they meant something else.
While the GGP idea was misinformed and basically outright dumb, it doesn't mean that just because a politician chooses to use a tool or a person at a given time, they will protect that person or tool in the future. The world is full of examples to the contrary.
Bottom line: just because Hillary's campaign is using Signal, it doesn't mean that in the future her administration won't gag them or make legislation available in order to use them to spy on their own citizens.
In the future please don't blank out the officer's email address, especially when they (as explained in the reply) overstep their bounds in terms of what they are allowed to request, thus essentially abusing their position and the trust we as citizens have provided to them.
The redacted letter was provided by the court, so I assume that they weren't willing to disclose it and revealing it would be a violation of the court order.
[1] http://freehaven.net/anonbib/cache/sassaman:wpes2005.pdf
[2] https://whispersystems.org/blog/the-ecosystem-is-moving/