
Thank you to everybody who cautioned against judgment before hearing the whole story. Here is my response: https://www.facebook.com/notes/alex-stamos/bug-bounty-ethics...


I think the root cause of the problem is the unclear policy by FB. Privilege escalation can be hard to catch, and can be a separate bug in and of itself, even if it requires a separate exploit to get the initial privileges.

The published policy didn't say anything about not doing what he did. I'm not going to argue that what he did should or shouldn't be ok, but FB has no control over what other people do. Yeah, maybe it'd be better if people asked for clarification first instead of asking forgiveness, but there's no way to force them to do that. FB does have control over what their policy says and allows/disallows. If you don't want people to exfiltrate any data and look at it on a local machine instead of just keeping a session on the exploited machine, then put that in the policy. If you don't want people poking around for other exploits after gaining access, then spell that out in the policy.

The point of the policy isn't to stop everyone. Sure it will stop some/most people, but some people don't listen. The point is that when it happens again you can point to the clear policy and say "you're an asshole, we're not paying you because you violated our explicit policy, and we are reviewing what you did with our lawyers to see if we should notify law enforcement".

Yes, doing this fix/policy update now doesn't fix this situation, but it prevents anyone else from doing something similar and claiming ignorance of this situation and FB's position.


Correct, the policy isn't clear and needs improvement. The bug bounty's policy definitely falls under the CSO's purview. So even if you approve of Alex's handling of the matter, you can't forgive him for running a sloppy bug bounty program. It would be one thing if he said mea culpa and admitted they could do better. But there's not one iota of regret, remorse, or apology in Alex's response for not making things clearer.

If you're going to persecute someone on details, you had better make sure your policy is very detailed, not vague, and not left open to interpretation. In this regard, Mr. Stamos failed.


The policy reads clear enough to me to warrant a huge reward. Adding additional conditions after the fact is dealing in bad faith.


Most RCE bugs can be compounded into major data dumps. That doesn't make each individual RCE a million-dollar bug.


Every RCE that could have been sold on the black market for a million dollars is worth almost the same reward (modulo the advantage of being legal).


Think about this guy being a Russian hacker instead: selling the ability to access restricted accounts, pose as an Instagram administrator, and, presumably, access users' data freely.


I would have come here to say this if you had not said it already.

A major root cause is that the published guidelines say nothing directly about exfiltrating sensitive data. This leads to legitimate confusion for exactly the reasons given. The actual policies make sense given what the published guidelines say, but that's not good enough.

The policy needs to be changed. Not by much, but it needs changing. Here is a Responsible Disclosure Policy that might work better than your current one:

We expect to have a reasonable time to respond to your report before making any information public, and not to be put at any unnecessary risk from your actions. Specifically you should avoid invading privacy, destroying data, interrupting or degrading services, and saving our operational data outside of our network. We will not involve law enforcement or bring any lawsuits against people who have followed these common sense rules.


Why do the policy specifics matter? A blackhat won't be respecting those rules, and won't need to negotiate a reasonable payday with facebook.

The real issue here is facebook's poor infrastructure security and slow response time. If the exploit had been previously reported, why was the privilege escalation still possible? Why did a (supposedly) known-to-be-vulnerable host have access to secret information at all?

The exfiltration of data may have been unethical, but facebook has no one to blame but themselves for it even being possible.


> Why do the policy specifics matter?

Companies take big risks in running bounty programs. They are giving hackers permission to test their live site. This isn't something that is popular with everyone inside a company. Bounty hunters need to respect that bounty programs are a two way street. If you find a serious issue like remote code execution you need to be extra careful. Wineberg was an experienced hunter. He should have known better.


Usually, serious security issues require some kind of escalation, and escalation probably requires, at some point, exfiltration of (non-personal) data. If the rules of the program are that restrictive, I don't know how many serious bugs will be found by "ethical" hackers...


That might not be the point. The point might be to allow the intersection between what is palatable for the company and what serious exploits white hat hackers can come up with.

No company wants to include in their privacy policy that anyone can legally access and download your data if they are trying to perform exploits on the system.


No, the root cause is having a 2 year old, known RCE that was only patched after this researcher got SSL certs and app signing certs.


The policy hasn’t been changed, though. There’s still no explicit statement that privilege escalation invalidates a report: https://facebook.com/whitehat


Let's take a step back here: Facebook threatened to have a security analyst arrested for demonstrating and promptly disclosing the full extent of a serious exploit in a non-destructive manner. Whatever other behavior he engaged in that was unnecessary or ineligible for the bug bounty program, that's incredibly unethical on your part. Especially so, because you clearly didn't believe he was going to do any damage to your system or you would've actually called the FBI instead of someone he worked with.

So, you just wanted to cause him reputational damage and personal problems as an act of petty retaliation. You're right on some of the technical issues here, but in terms of ethics, your behavior has been far worse than his. I don't think you realize how much long-term damage you're doing to your relationship with the wider security community by threatening to jail people who were at no point acting maliciously and at no point caused any damage.


This isn't all that complicated, as far as I can tell.

Guy discloses a vulnerability. He knows it potentially has wide reaching security concerns, and downloads enough data to prove that if necessary.

Guy gets shortchanged on the bounty, indicating that either a) Facebook is trying to shortchange him, or b) Facebook doesn't realize how big a vulnerability this truly is.

Everything about Facebook's response indicates b): they didn't realize how big a vulnerability this truly was. Otherwise, the data he downloaded would have been useless by the time he used it.

You can argue that the guy "went rogue" by holding information hostage, but the fact is he deserved to be paid more, and he was able to prove it. Now Facebook looks bad.


Guy discloses vulnerability. Facebook is not as impressed as guy would have hoped. Maybe it's because he's one of several people to disclose the same vulnerability. Maybe there are just a lot of vulnerabilities (they've paid out 4.3m in bounties).

Guy's reaction to rejection: take hostages and threaten Facebook. Facebook moves to defense and cuts guy off.

You are not a good neighbor if you kidnap someone's family to prove to them that their busted lock is a big deal. You show them their lock is busted and trust they can figure out what harm that could lead to. The alternative is companies being hostile to people just looking around their locks, which is the world of the 1990s and 2000s that responsible researchers are trying to avoid going back to.


This is, of course, Facebook's narrative, which conflicts with Wes's.

One obvious hole I can see in Facebook's story is that they insinuate that Wes broke back into the server after they disputed the bounty. If this were true, they did nothing in response to the problems Wes found for over a month.

If you look at Wes's timeline, he says access to the server was no longer possible a few days after he filed the second report.

It comes down to who you believe. Personally, I find Wes to be more credible. It sounds like it was most likely a misunderstanding by Facebook. Now they are doing damage control.


"With the newly obtained AWS key... I queued up several buckets to download, and went to bed for the night."

He definitely took data off of Facebook's server.

Also, you misunderstand: the access denial he mentions was a firewall change earlier in his story. That was merely when he was speculating about other systems he could have penetrated, completely separate from the S3 buckets he took data from.

From Facebook's perspective it could very well have seemed like he went back for the goods, since he submitted three separate reports, the last of which triggered the response. But this is also irrelevant; the question is whether he took data off, and the answer is unambiguously yes, by Wes's own admission.


Honestly, I think he did go too far downloading the S3 data, but nothing in their policy stated or implied that was against the rules. He did not violate their written guidelines. And so, Facebook should have paid him (and then changed their policy), even if begrudgingly.


Here is what's happening right now:

FB: He's an experienced bug bounty hunter and should know where reasonable borders are.

All the experienced security guys itt: He's an experienced bug bounty hunter and should know where reasonable borders are or at least not pivot/escalate without asking. Also never dump and hold data.

Everyone else: What he did isn't technically against the rules FB wrote, so they are screwing him, despite it also being written that they have sole discretion.


> All the experienced security guys itt...

Ah, so those who disagree are inexperienced? No true Scotsman indeed!


How is that a "no true scotsman"? Most people in this thread commenting have not indicated they work in the infosec industry.

(For the record, I do, though I'm not sure I'd flatter myself by saying I'm "experienced" exactly.)


The problems I have with your absolute statement:

* You are stating that all (not some) experienced security folks agree unanimously. The implication is that those who disagree are not "experienced security guys" (as you called them: "everyone else"); they are the ones who aren't true Scotsmen

* you assume those who don't explicitly indicate that they work in infosec industry do not work in the infosec industry

* also, you don't need to be "experienced" in the infosec industry to be correct / wrong.


I wasn't the one who made the comment you're referring to. I'm just saying there is no evidence of a "no true Scotsman" here, as far as I can tell.


Apologies - didn't notice you weren't the OP. IMO, the "no true Scotsman" is implied (though it might be unintentional).


The general theme of the thread seems to be security industry people, like tptacek (or commenters self-identifying as being in the industry), expressing concern with the researcher's actions (while still admitting Facebook didn't handle it well). The primarily negative comments don't seem to have a specific affiliation tied to them. And given HN's demographic, odds are much more of them are developers than are infosec people.

I don't think the person you were replying to was suggesting that any infosec people who fully support the researcher aren't real infosec workers. I just don't think he saw any who even claimed to be.


I disagree; this is non-customer, non-financial data, which is often considered fair game because downloading data is useful for locating many security bugs. Source code or config data is a prime target, but so are network diagrams, etc.

Defense in depth means every defense needs to be validated, not just the outer layers.

PS: Further, if FB says they know about a bug then anything he downloaded could easily be in the wild and should be investigated.


This. Literally every single person who identified themselves as in the security fields that I saw said the researcher went too far.

What's really getting to me is the overwhelming number of responses containing the idea that everything not explicitly banned is permitted, despite the recipient saying "No" (even indirectly, or without justification) at some point. How to deal with the grey area of consent is something every adult should know, and it's worrying to me that so many here seem to feel entitled to whatever they can take as long as it wasn't explicitly forbidden.

Obviously FB should update their policy, but at the same time it's important that we as the community use this as an opportunity to learn and discuss where the implicit boundaries are, where one needs clear-cut agreement to proceed.

Consent is sexy.


I'm a security guy and I think what he did towards the end is dubious and strange, but again, he was following their guidelines as written.


I disagree. It's not about whether or not he downloaded the data. That is an undisputed fact between both parties.

The question seems to be if he did it in good faith and within the rules of the bug bounty program.


No. The question is whether FB understood the severity of the bug and paid in proportion to its severity. When you run a bounty program, that's what you do.


This whole thing is silly. Facebook (or any other tech company) has a lot of flexibility and hardly any accountability in defining what a "million dollar bug" is. You really can't believe they are going to just hand over $1M because you think it's a $1M bug. It very well may be, but in the end Facebook will be the one deciding the value of said bug, and you will have nothing to do with their decision, so assume they just won't pay it.


Sure, they'll be the one deciding. Except, that other bounty hunters are watching their reaction and their fairness in paying out people for their work.

The next $1M bug that gets discovered will probably go out onto the black market because of Mr. Alex's actions here.


No, the free market decides the value of the bug. You can either pay that value to a white hat to find it or wait til a black hat sells it.

Facebook has now demonstrated that they will not only not pay you, but will attack you publicly, slander you, and threaten you. Now what does that mean for the next hacker who comes along? Someone who is clean and wants to stay clean will avoid Facebook. Someone who isn't will realize that Facebook is now an easier target because the clean guys are staying away.


Exactly this. Facebook have just demonstrated that, at best, they'll get an anonymous warning and then all their private keys dumped onto pastebin when they do nothing.

At best.


I don't think he is claiming $1 million for the bugs; he mostly wanted to share the whole story (that title was just to get some eyeballs, instead of using, say, "Facebook cheated me").


At no point did he take hostages. It's that sort of thinking that led to all this drama in the first place. He did, however, disclose, which is pretty reasonable considering a lot of us are trusting these services to protect our information.

What if Instagram bled all your browser information, so people could fingerprint billions of people and figure out who is surfing their sites (and find their pictures)? What if there are pics on Instagram that people rely on being private?


Downloading data is where he crossed the line and what I meant by hostage:

"Wes was not happy with the amount we offered him, and responded with a message explaining that he had downloaded data from S3 using the AWS key..."


You make "downloading" sound more sinister than it is. Downloading something from the network is the only way to see that it's there or know what it is. There is no substantial difference between downloading and viewing in this case.


> "With the newly obtained AWS key... I queued up several buckets to download, and went to bed for the night."

This isn't about whether viewing files on the internet technically counts as downloading them; this is about retrieving files of enough size and quantity that you have to queue them up for an overnight download.


He kept it for a month. That is different than looking at it.


Under the assumption that the keys would be revoked, it's just trash anyway; it would have been useless. But apparently they didn't realize how serious this was, otherwise they would have revoked the key. A month is plenty of time to change critical S3 credentials.


And how long does your browser cache the pages and assets you've looked at?


"Maybe it's because he's one of several people to disclose the same vulnerability"

The thing that gets me about this whole situation is that Facebook either didn't understand the extent of the vulnerability (which seems to be the case to me, and in which case I think Wes Wineberg should have been rewarded far greater than they did for showing them how serious it was, though I wouldn't say this is literally a "million dollar" bug) or they were grossly negligent for not patching it up a lot sooner than they did. They can't have it both ways.

Are they bad at managing their bug bounty program, or just bad at responding to serious security issues? It has to be one or the other.


I'm not sure you understand how the law works


I'm not sure anyone really understands how the law works when it comes to bug bounty programs and legal retaliation by companies. Is there any case law precedent yet?


In most cases where the opposing parties are one large publicly-traded company and one small company or individual, the law works like this:

* little guy offends large company, usually through some totally well-meaning and innocent activity that, if illegal at all, is only so due to obscure, obsolete, and/or obtuse laws

* large company unleashes unholy wrath of $1000/hr law firm on little guy threatening to destroy little guy's world if he doesn't immediately comply with all demands

* lawyers laugh at the plight of little guy and say it doesn't matter what he thinks because he can't afford to oppose large company

* little guy is forced to comply no matter how absurd large company's demands are, because only other large companies can oppose large company in court

* should the large company feel inclined to sue the little guy even after he acquiesced to their ridiculous demands, little guy loses all of his possessions in his attempt to pay legal fees. little guy will run out of money before the case wraps, resulting in him getting saddled with a judgment for massive personal liability (cf. Power Ventures)

* large company is free to make the same infractions whenever they feel it's appropriate to do so, because what are you gonna do, sue them? (cf. practically every company who has ever brought a CFAA claim; Google's whole business is violating the CFAA, as well as various copyright laws)

* bonus points: large company has friends in the prosecutor's office and gets the little guy brought up on life-destroying criminal charges (cf. Aaron Swartz). if the case makes it to trial, little guy spends time in jail (cf. weev)

I don't think I missed anything.


Total aside: I have a startup idea to throw a wrench into your accurate depiction of how things currently play out: the little guy hires a full-time lawyer from the large pool of unemployed lawyers, and suddenly has legal counsel at a reasonable (relative) price for an extended time. Suddenly the little guy has more of a fighting chance against the lawsuit, instead of having to pay counsel at $1,000/hr. (He can add a full-time lawyer for a year at the clip of every two weeks of his adversary's costs.)


Especially when Facebook expressly authorizes this type of activity (to some degree). The relevant passage is cited in the original article.


I'm not sure in this case, that's true. But whether or not this was illegal I generally support skirting laws if it makes everyone else more secure. To that end, I also support Snowden.


Laws aside, USD 2,500 for all that data? Hmmm, is our data that cheap?


Sounds like FB acted pretty unprofessionally both in the infrastructure department and in handling of the situation. You had some embarrassing mistakes and instead of acknowledging them you tried to scare the reporter into shutting up and leaving you alone. That part is pretty clear. Whether he violated your rules and how much you pay him I don't care.


Especially in the infrastructure department. This is the huge story here.. putting all your creds on S3 in the open protected by one key?? Craziness.


Yes, exactly this. Without escalating an RCE, how would he have been able to expose this absolutely huge flaw? The initial report was inconsequential, but this seems like at the very least a much more than $2500 bug. If things like this are considered "unethical" it kind of makes finding million dollar bugs in a bug bounty close to impossible.


I agree. According to Stamos, though, there was no flaw:

> The fact that AWS keys can be used to access S3 is expected behavior and would not be considered a security flaw in itself.


If he thinks that is how it should be and nothing needs to change, then god save their user data. He conveniently left out the lack of key separation and the privilege escalation shown by the researcher.


Yeah, that's like gaining root access on a server and being told "well, the fact that those commands will execute is merely Linux working as designed". Talk about missing the point...


That surprised me too. Of course, AWS keys can be used to access S3, but I don't see how exposing private AWS keys on a public facing server can be "expected behavior".
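
The key-separation point can be made concrete. Here is a minimal sketch (the bucket name and Sid are hypothetical) of the kind of scoped IAM policy that would keep a key sitting on a single web host from reading every bucket in the account:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "WebHostReadOwnAssetsOnly",
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::example-app-assets/*"
    }
  ]
}
```

A key scoped like this can still serve the application, but it can't list buckets or touch the ones holding certificates and signing keys; those would need separate credentials that never land on an internet-facing box.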


He only got $2500 because the bug had already been reported by others. Most programs pay nothing in that case.


Then the first one to report it should have been paid a lot more than the $2500. The fact is that FB didn't understand the impact of the bug, and it needed Wes to show them how severe the bug was.

And once they knew how severe it was, they ought to have acknowledged the severity and paid him a lot more.


I feel that privilege escalation/lateral movement is implicitly excluded from almost all bug bounty programs; most researchers know that.

It's a really grey area beyond an initial 'access bug', so it pays not to go there. Otherwise, where should Wes have stopped? Keep proving more vulnerabilities until he's downloaded their code? Or got private photos of Zuckerberg's kid? Just to show that it is indeed a serious bug?


Why would private keys be on any system somehow accessible from the internet? Gotta put it all in the cloud?


Yeah, scaring the guy via his employer and telling him that kind of bug is worth USD 2,500 makes me think about how important my data is to them.


The hypothetical question Facebook should ask is:

"If the security researcher did not disclose the RCE, but instead sold it to highest bidder, how much would that likely pay in this situation?"

Paying security researchers to properly disclose is a way of financially encouraging the right behavior. While it may be tough to stomach a large payout for responsible disclosure, do you really want them considering the alternative? It's like tipping in a restaurant to ensure food quality.


Agreed. To me as an outsider, this escalation bug looks like a maximum-severity bug, definitely dwarfing any particular admin console vulnerability, and the process the researcher claims to have followed was pretty much necessary to demonstrate it. Whether or not he followed the letter of the policy, by responsibly reporting the escalation he fulfilled the spirit of it.


How is this unprofessional behaviour?

They are trying to condone data access that, in all honesty, borders on unethical behaviour.

Any professional who participates in a company's bug bounty should respect that company's rights as well.

Whether the accessible keys were a technical blunder is secondary. The researcher's actions, a) accessing data he did not need to, and b) making this into a big deal when he was the one not respecting the bug bounty's limits, make this a case for FB.


I am not saying that the sec researcher is right here. I don't care about him; he is just some random guy who wants publicity. Talking about FB is more interesting, because it is a huge public corporation which should behave smartly. But if you want to talk ethical/not ethical: he found a serious problem in their infrastructure. Had he not looked at the data ("respected their privacy"), he wouldn't have found it. You can't make an omelette without breaking eggs. Perhaps this is more penetration testing than bug bounty stuff, but again, I don't care. He found stuff. He didn't use it (AFAIK) for anything bad. FB should thank him and quickly fix their process. Complaining to his boss and acting all pissed suggests that they do not understand that they messed up big time.


I am not saying that the sec researcher is right here. I don't care about him, he is just some random guy who wants publicity. Talking about FB is more interesting

You're right. An important thing has gotten lost in the shuffle. We should be pointing and laughing at Facebook, and then, when the giggling dies down, asking: if something this bad, with such a "trivial" vuln, managed to get published, what else have their now-proven-to-be-shitty practices left open?

He found stuff. He didn't use it (AFAIK) for anything bad.

Reminds me of the way business dudes and non-security devs used to react before security got all popular and legit. And they could have avoided this whole public brouhaha if communication had been better between the tester and the product staff. Classic blunder.

Complaining to his boss and acting all pissed suggests that they do not understand that they messed up big time.

They jumped to contacting someone over his head before engaging in real talk with him. And then their public response is covering their ass by arguing over the fine print of how he shouldn't have been poking around where he was.

Obviously there are differences, but similarities are fun too!


So the bits where you lost the ssl keys, auth cookie keys, app signing keys, push notification keys - and had to ask him (via his employer) about what data he'd accessed are all true? Implying you have no records of who else might have done this and acquired those keys?

Boggle!


That's one interpretation. The other is that by placing faith in their honesty, you get a list of what he took without spending the time to do forensics on the systems, and can therefore change the keys sooner.


"Placing faith in them being honest", in the same conversation where you're telling their uninvolved employer that what they found is "trivial and of little value", while threatening them with Facebook's legal team and law enforcement?

Doesn't pass the sniff test from here.

(Admittedly there's no doubt an iceberg-sized bit of this whole drama that neither side are admitting exists.)


The bigger issue here, and the one that Alex at Facebook seems to gloss over - if Wes got this data using a 2 year old well known exploit -- then who else got it without anyone knowing?

While Alex may have a right to be upset at Wes for taking data, Alex should recognize that Wes is likely the least of his worries now. Wes wasn't/isn't a professional security researcher... and he was still able to do this. That should frighten Alex, and Facebook should have rewarded Wes much more generously for forcing this issue to be taken care of.


"This bug has been fixed, the affected keys have been rotated, and we have no evidence that Wes or anybody else accessed any user data."


> and we have no evidence that Wes or anybody else accessed any user data

This raises way more questions than it answers. Most notably: why aren't you recording who accesses user data?


Reads to me that they are recording the access and no-one did access it.


Not necessarily. If that were the case, wouldn't they use the stronger, "and we have evidence that no data was accessed through this exploit"? The fact is: They can't possibly protect user data once the private SSL keys leak. At that point anyone can intercept user data on third-party, non-Facebook servers if they're affiliated with an ISP, wifi hotspot or other point of access. Anyone could send targeted phishing emails for their servers: How would they know, if the SSL cert looks legit and the DNS is regionally poisoned?


Because of the sequence of events that played out...


Yes and he got paid for it.


I'm not quite sure I understand your point? Of course he got paid, that's how bug bounties work... that doesn't detract in any way from the point I made above.


And I don't understand yours. You were concerned about people other than Wes accessing the same data via the same flaw; Alex said that did not happen.


But until Wes told them, they had no evidence that Wes was accessing the data! Or are you saying that they did have evidence, but chose to take a "wait and see" approach to someone gaining control of their entire platform?


No, he claimed _not to have any evidence_ that it did happen.

"Quick, shut off the logging on those servers, so we don't have any record of who logged in on them!"


Alex said they "have no evidence" it happened, which is classic slippery legalese. From that phrase it is reasonable to infer either that they have evidence of absence, or absence of evidence, which are not the same thing.


It's standard wording for something like this even if they had 100% evidence of absence.


Correct. It's the standard wording, whether or not they actually have evidence. Therefore we cannot assume, as you have earlier in this thread, that they do in fact have it.


This response deepens my concern about the situation, rather than alleviating it. In this response, you make it sound like calling this security researcher's employer's CEO was a reasonable escalation of the situation, and that is deeply concerning to me, especially given the actual text of the post Wes published here.

It also appears, based on your post, that you think that stating, approximately, "I hope we don't need to contact our legal teams or law enforcement about this," does not constitute a threat of legal or law-enforcement action, and I also find that deeply troubling. While you could make a legal distinction that these weren't technically threats of such action, any reasonable person in the researcher's position would be positively idiotic if he/she failed to feel threatened in that way by such statements.


I told Jay that we couldn't allow Wes to set a precedent that anybody can exfiltrate unnecessary amounts of data and call it a part of legitimate bug research, and that I wanted to keep this out of the hands of the lawyers on both sides. I did not threaten legal action against Synack or Wes....

In case it isn't clear, most people will interpret "I want to keep this out of the hands of lawyers" exactly as a threat to start legal action. To be honest I'm not really sure how else it should be interpreted?


"I want to keep this out of the hands of lawyers" is almost universally understood to mean "please do what I say so that I don't have to sue you, which is what I will do if you do not comply".


Maybe someday the response to this sort of threat will be "In the interests of sharing, I already passed on this information to your favorite class action law firm and the media. It's already in the hands of lawyers and your company is already being sued."


Alex, I am always open to hearing from both sides. But despite your reply, I unfortunately see wrongdoing on both sides. I don't think you discussed this message with your public relations department or your reputation management team.

OK, so let's look at this - your response showed us one extremely important issue: no clear rules in your system. By exploiting your system, Wes actually exploited your lack of rules regarding the handling of white hat hackers.

Listen, a hacker should exploit ALL possible issues. He exploited your weakest one - the rules behind the system. Close the case - reward him XX,XXX for exploiting the weakness in your policy for dealing with white hat hackers, and spend as much again to bulletproof that policy. Do not reward him for the hacks that were unethical, as that would be wrong, but do it for the other exposure - the dent in your white hat hacker system.


The lesson here is when you find Operations issues (particularly Security Operations) at Facebook don't report them. Those make the CSO look bad directly.


Yep. Code bugs, no problem. Engineers don't report to Alex!


Ok, so here's the thing. Your $2500 payout was not commensurate with the severity of the bug. It ought to have been more. A LOT more.

You're basically telling bounty hunters to not go any further to "prove" the severity of the bug because you're saying, "Trust us. We'll measure the maximum impact and reward you fairly"

And yet, you're not being fair at all. So the bounty hunter needs to "prove" the severity of the bug for you. You're digging your own grave here by not acting in good faith. The next guy who finds a good bug is not going to disclose it to you - he's going to sell it on the black market for a few hundreds of thousands. Or millions.


The real question is: did you rotate the keys (and do further hardening, I hope!) because of the vuln report Wes made? If so, then you should be grateful for his work pointing out the single point of failure in your AWS S3 security, and you should have rewarded him handsomely.


I think that is the key right there. It seems like sensu.instagram.com was simply firewalled at first and the AWS keys were not changed. He was then awarded the bounty for reporting this bug. Afterward he demonstrated that the AWS keys were another vulnerability, and it wasn't until after he reported this that the AWS keys were rotated.

To me, this demonstrates that had Wes not reported the AWS keys, Facebook would never have rotated them. I would argue that the fact that Facebook found it necessary to take action to resolve Wes's third vulnerability submission could be considered an admission of its legitimacy as a bug, and therefore that the bug is indeed worthy of a bounty.


I can't work out how to not make this sound almost infinitely cynical, but their ssl key expires in 13 days - they only had to shut him up for another few weeks and they could have pretended they weren't currently MITM-able:

https://www.instagram.com

Not Valid After: Thursday, 31 December 2015 11:00:00 pm Australian Eastern Daylight Time

Maybe they'll upgrade it to something better than: Signature algorithm SHA1withRSA WEAK
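Both claims above (the expiry date and the SHA1withRSA signature) are checkable with openssl. Against a live host you would pipe `openssl s_client -connect host:443` into `openssl x509`; as a self-contained sketch, the certificate below is a throwaway self-signed one (13-day validity chosen to mirror the comment, not Instagram's actual cert), and the same two inspection commands are then run against it:

```shell
# Generate a throwaway self-signed certificate valid for 13 days.
# (Stand-in only; for a live host, use
#   openssl s_client -connect www.instagram.com:443 </dev/null | openssl x509 ...)
openssl req -x509 -newkey rsa:2048 -nodes -sha256 \
  -keyout /tmp/demo-key.pem -out /tmp/demo-cert.pem \
  -days 13 -subj "/CN=demo.example"

# Print the "Not Valid After" date the parent comment refers to
openssl x509 -in /tmp/demo-cert.pem -noout -enddate

# Print the signature algorithm; SHA1withRSA would appear here as
# "sha1WithRSAEncryption" (this throwaway cert uses SHA-256 instead)
openssl x509 -in /tmp/demo-cert.pem -noout -text | grep "Signature Algorithm"
```

The `-enddate` output is what browser certificate viewers render as "Not Valid After".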


Does this have anything to do with the SHA1 sunset on 31 December?


That'll be why the key expires on Dec 31 even though it was only issued back in April.

It doesn't explain why Instagram has been happily using a known-compromised wildcard ssl key for two weeks now.

Makes you wonder who actually values and protects Instagram's user privacy more - the researcher or the Facebook CSO...


>Makes you wonder who actually values and protects Instagram's user privacy more - the researcher or the Facebook CSO...

No, I don't wonder about this at all.


Different key, dude. We rotated what was exposed.


So this new rotated key I'm seeing that has an April 2015 start date is a different key to the one your team replaced after it expired and broke everything back in April?

What a coincidence...


[flagged]


> Do you believe that after this chain of events anyone still believes you?

Personal attacks, which this crosses into, are not allowed on Hacker News. Please comment civilly or not at all.


I don't see that as uncivil or a personal attack. It's either a reasonable direct question or a rhetorical one. And as a rhetorical question, it's not a personal attack, but rather makes the point that other posts seem to damage his credibility.


It's obviously not a direct question (there are people defending him in this thread, so of course he "believes" that), and as a rhetorical one it implies that he is lying. That's not a civil debate tactic—there's a reason why parliamentary systems expel people for using it.

Everyone needs to err in favor of respect when addressing someone on the other side of an argument, especially when one's passions are agitated, because the default is to forget all that.


Seems like a reasonable, if rhetorical, question. Hope Alex doesn't complain to his employer about it though ;-P


I am not talking about the person, but the company.

And I am sorry, but after these acts the company has taken, the little bit of trust that was left in the company is gone.

I am sorry if it sounded like a personal attack, that was not intended.


OH COME ON Dang, Alex called up Wes's employer and threatened him with criminal charges and then had the balls to lie about it in his facebook post that he didn't "Threaten". Are you seriously defending this??


Asking HN users to be civil defends nothing except civility.

There's a relevant general point here though. Reactions like this, and many others in this thread, are reflexive. That's really not what this site is for. Good comments for HN aren't reflexive, they're reflective. Practicing that distinction is the most important thing for being a contributor here, and it's orthogonal to one's actual views.


Asking HN users to be civil defends nothing except civility.

This would only be true if that request were applied equally whenever HN users were uncivil. As it stands, it does generally come off as defending specific users.

...it's orthogonal to one's actual views.

Believing this is going to make you a worse moderator -- this is "fair and balanced"-style thinking. There are many perspectives whose projection onto comment reflectivity is anything but zero.


> if that request were applied equally whenever HN users were uncivil

That's asking us to operate like machines—supermachines, in fact, with incivility detection and moderation powers. That's unrealistic. HN users' capacity to be uncivil exceeds our capacity to ask them not to, so the latter maxes out.

> it does generally come off as defending specific users

We try hard not to play favorites. I'm biased, of course, but there's more than one kind of bias here. People are more likely to notice us criticizing a comment they identify with than the cases that go the other way. We're biased to notice what we dislike and assign more weight to it.

> Believing this is going to make you a worse moderator

In that case I'm a bad moderator already, because everything I've learned about HN is packed into what I said there.


outstanding question.


If he had reported the keys along with the original submission, I think it's safe to assume they probably would have rewarded him handsomely.

Instead, he sat on the keys for over a month, and in the meantime used them to download everything he could find onto his personal computer. Simply testing that the keys were live and disclosing this immediately would have been more than enough proof of a bug here.

Edit: downvoters - please explain how using keys to access production systems for over a month without disclosing is acceptable white-hat behavior?


They said they did rotate the keys.


Bug bounties are supposed to represent a high-probability payoff of a lesser amount of money for finding a bug. This is in comparison to going the black hat sales route, where the probability of a sale might be lower, but the payoff might be higher. I can imagine one or two state actors who might pay top dollar to have the keys to the kingdom of a major social network.

All I'll remember of this entire story is the outcome- huge vulnerability found (high black market value), and Facebook is talking about lawyers and paying small bounties. Nobody will remember that technically he broke a rule that wasn't well explained. The next Wes will have his major vulnerability in hand, and have this story in his mind. It may change his decisions.

Make this right. Even if you are in the right, who cares? You need the perception of your program to be impeccable, paying more than researchers expect. Facebook can afford it more than they can afford to blemish the image of their bug bounty. Invite Wes to help you rewrite the confusing parts of the rules. Leave that story in everyone's memories instead.


According to the rules https://www.facebook.com/whitehat/ "We only pay individuals"

Wes COULDN'T have been working for Synack to find bugs as your program doesn't even allow for it.


And according to the update on the post, Alex chose to contact Wes's 'company' (that he had contracted for) even though Wes had not contacted them through the company email (meaning Alex sought out a way to intimidate Wes). Seems incredibly petty and intimidating of Alex, and reflects poorly on Facebook imo.


Yea, wouldn't want to "set a precedent" that infosec researchers will be rewarded for doing the right thing.

Next time someone uncovers your private keys at least they'll know upfront that there is no money in doing the right thing which might just make selling them to the highest bidder seem like a more compelling option.


With regard to to your final sentence: "Condoning researchers going well above and beyond what is necessary to find and fix critical issues would create a precedent that could be used by those aiming to violate the privacy of our users, and such behavior by legitimate security researchers puts the future of paid bug bounties at risk." Regardless of whether one thinks Weinberg's actions were ill-advised, there seems to be a general consensus that they were instrumental in the discovery of some very critical issues, and that you are lucky it was he who found them.


There is a definite issue with the Facebook bug bounty program in that there are many serious issues with the platform that don't fit within the relatively narrow parameters of the program. I reported an issue that enabled anyone to customize a wall post that says it goes to any site of my choosing in the post (cnn.com, whitehouse.gov, etc), completely customize both the content and photo of the post, and have the link actually go to a URL of my choosing instead of the domain it shows in the post. Examples at [1] and [2].

This issue, which enables uber-credible phishing and other attacks with the assistance of Facebook (since Facebook falsely reports to the user that the link goes to a credible domain of the attacker's choosing while actually sending them to any URL controlled by the attacker), was rejected. Not only was I told that it was not a bug that I could be paid for, but that it really wasn't a bug at all, and that they would do nothing about it.

If these kinds of serious issues are essentially ignored because they don't meet the very narrow guidelines set forth in the bug bounty program, Facebook is going to miss a massive number of problems with its platform.

[1] http://prntscr.com/9fj40t

[2] http://prntscr.com/9fj46h


Thanks for the response, but why did you start by contacting the CEO of Synack instead of the researcher directly?


> At this point, it was reasonable to believe that Wes was operating on behalf of Synack. His account on our portal mentions Synack as his affiliation, he has interacted with us using a synack.com email address, and he has written blog posts that are used by Synack for marketing purposes.

I feel like that bullet point answers your question pretty well.


Sorry but this is a coverup that Alex is using to defend himself. He had easy access to Wes, as Wes was actually demanding a reply via Facebook's own system in place to communicate with researchers, and not receiving one.

Alex would have been aware via the original RCE bug that Wes was reporting on behalf of himself and not his employer. Also, it is reasonable that Wes would have mentioned that he is reporting the bug on behalf of his employer from the beginning.

I presume that Alex knew these things, but he decided to take a more dramatic approach to get Wes to stop, by contacting his employer. It obviously would be leverage, and Alex knew that he could also leverage his position at Facebook to use a security firm in the industry (who would understandably not want to do anything to jeopardize its relationship with one of the largest internet companies in the world) to ask their employee to stop.

I do not believe that Alex legitimately believed that Synack (Wes' employer) was behind the research, but he knew it would be an effective way to stop Wes from continuing, so he decided to pull those strings.


I'm more questioning the flow of researcher reports vulnerability, company awards bounty, researcher disputes bounty value, CSO of company contacts CEO of researcher's company. Is that normal escalation procedure?


Wait, you just made something up.

Even the researcher doesn't claim that Alex contacted the CEO of Synack because of a dispute over the bounty.

Rather, it's the other way around: the researcher disputed the bounty, and did so by revealing that he'd retained AWS credentials from Instagram long after they'd closed the vulnerability that he used to get them.

Alex contacted the CEO of Synack to ensure the credentials weren't used, because if they were, Alex couldn't control Facebook's response: they'd have a bug bounty participant who has essentially "gone rogue" and is exploiting Facebook servers long after they've told him to stop. They need him to stop.


The "bug" here is that they aren't really keeping track of their AWS buckets and keys at all. Least privilege, access logging, remote IP flagging, etc. These operational failures are ostensibly the responsibility of the CSO.

I'm not saying this researcher was 100% in the right, but this is the CSO ass covering. "Don't pay attention to the obvious operational deficits, the problem is the researcher overreaching."

A simple phone call directly to the researcher that cut through the bullshit would have made everything better. But he had to make sure it didn't get out and the only way he could do that was by using the only leverage he had: The researcher's employer.


Alex has in the last few months built one of the best teams in application security at Facebook (Facebook security is now seemingly most of O.G. iSEC Partners). I get it, everyone hates big companies, and especially evil Facebook, but come on. They know what they're doing.

If you understand how security works inside of big companies, this is a really silly theory to run with. CSOs are happy when shit like this gets discovered, because it gives them ammunition to get the rest of the company to adjust policies.

If you were working from the understanding that a CSO comes in and just immediately tells a team of (what is it) NINE THOUSAND developers how to do stuff differently... no. That's not how it works.

The problem is that nobody at Facebook with the possible exception of like 10 people none of whom are Alex can make huge operational changes like "change all the ways we store keys across an entire huge business unit". So, you tell Alex you took AWS credentials he didn't know existed and you're going to start mining them for a story you're bringing to the media, and now Alex is in a position where he's NOT ALLOWED to sit back and try to manage the situation himself.

Delete the keys or I have to tell legal what's happening.

The researcher NEEDED TO HEAR THAT.


>> Delete the keys or I have to tell legal what's happening.

>> The researcher NEEDED TO HEAR THAT.

I'm not in security, but from the outside looking in, how things worked out just doesn't smell right.

If "the researcher NEEDED TO HEAR THAT" is the priority, then why waste time looking up who the guy works for and calling them instead?

The simplest and most obvious way to tell the researcher is to tell him directly in the clearest way possible. It isn't as though there wasn't a pre-existing line of communication with the researcher.


My reading of tptacek's subtext is that Facebook wanted to show the researcher that they were really, ALL-CAPS serious, as in "get you fired and ruin-your-livelihood if you don't stop" serious. These mafia tactics are fine because the Facebook CSO "built a good team and knows what he is doing"


If FB wanted to show the researcher that they were really, ALL-CAPS serious, then they would talk to him directly, as in "you've got stolen data and we're going to have the FBI arrest you, seize your computers, put you in jail, and ruin-your-livelihood if you don't stop" serious.

So I still don't see how calling the guy's boss trumps that in terms of scariness. Because if I'm the wronged party (i.e., FB), that's what I'd do if I couldn't resolve it amicably.


If we are disagreeing, I don't quite follow your argument - I never said that this was the worst/scariest thing Facebook could to do (there's no upper limit). What I meant was that the action by Facebook was intended to intimidate (and not that the specific form of intimidation was the worst possible)


We're not disagreeing. I think your interpretation of tptacek's subtext is the same as mine.

In some of his posts, he has been, however, comparing the researcher's dump to criminal activity -- something I am not in disagreement with.

His implication that calling the researcher's boss is a sensible approach to intimidating the researcher for potentially criminal activity -- that in particular seems like a stretch if he's being truly objective.


...which all adds up to a smell of "tptacek knows that team is good, because he's on it".


I don't doubt that he's put together a great application security team. Or that he even knows his shit. And I do understand how it works. CSOs are happy when this kind of shit gets discovered when they can't get other teams onboard to fix it. They're unhappy when it gets discovered when they intentionally ignore it in favor of another initiative (particularly if there's a paper trail showing that someone brought it to their attention). Or when they've already spent a bunch of money and resources fixing it only for everyone to find that they haven't fixed it at all.

There are basic things you can do to mitigate or isolate damage in AWS and they either aren't doing it or have done it badly. Even if he couldn't convince the rest of the company that god-mode keys are bad, he still could have built out some basic infrastructure to track when and where they keys were being used from so red flags could be raised when some random IP address is being used to pull down several buckets.
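For what it's worth, the "basic things" above map to standard IAM controls. As a minimal sketch - the bucket name and office CIDR below are invented for illustration, and the policy is only written out and syntax-checked here, not attached to any real key (that step would use something like `aws iam put-user-policy`) - a key scoped to read-only access on one bucket, denied entirely from unknown IP ranges, might look like:

```shell
# Sketch of a least-privilege IAM policy: read-only access to a single
# (hypothetical) bucket, with all use denied outside a known CIDR block.
# The bucket name and IP range are illustrative, not Instagram's.
cat > /tmp/s3-least-priv.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::example-tools-bucket",
        "arn:aws:s3:::example-tools-bucket/*"
      ]
    },
    {
      "Effect": "Deny",
      "Action": "*",
      "Resource": "*",
      "Condition": { "NotIpAddress": { "aws:SourceIp": "203.0.113.0/24" } }
    }
  ]
}
EOF

# Sanity-check the JSON before it would be attached to a user or role
python3 -m json.tool /tmp/s3-least-priv.json > /dev/null && echo "policy OK"
```

With a policy like this, the exfiltration scenario in the thread fails twice over: the key can't touch other buckets, and requests from a researcher's home IP are denied outright (and show up in CloudTrail/S3 access logs as the "red flag" the parent comment describes).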


You're perfectly right, but his employer didn't need to hear it. And that's the whole crux of the matter.


If you read the article, his company does security research and found a vulnerability in Hotmail. Plus he was using his company's email address.

> At this point, it was reasonable to believe that Wes was operating on behalf of Synack. His account on our portal mentions Synack as his affiliation, he has interacted with us using a synack.com email address, and he has written blog posts that are used by Synack for marketing purposes.

That's a big mistake. DO NOT EVER USE YOUR COMPANY EMAIL ADDRESS if you are doing this on your own. The employer has the right to know. Imagine using a company email address on Ashley Madison. Yeah, plenty of people were embarrassed after that hack.


Actually his write up makes pretty clear that he didn't use his company email until after Alex went over his head to the CEO.

Second, everything else being equal, Alex going to the CEO without calling or mailing the researcher first was a mistake. Going to someone's boss and saying "please do something, I don't want to get the lawyers involved" IS an implicit legal threat, both to synack and the researcher.


What Alex wrote is a bit interesting given his update (emphasis my own):

>At this point, it was reasonable to believe that Wes was operating on behalf of Synack. His account on our portal mentions Synack as his affiliation, he has interacted with us using a synack.com email address, and he has written blog posts that are used by Synack for marketing purposes.

According to Alex this is the timeline:

1. Researcher not happy with sum

2. Researcher already in contact using Synack email address

3. Alex calls Synack CEO

From the researcher's blog:

>I never contacted Facebook or Alex using my work email account. It was only after Alex contacted my employer via email that I sent a reply from my work account. Alex indirectly contacted me at work, not the other way around.

This means that either Alex is lying, or he is telling exactly the facts needed to lead to a specific conclusion and nothing more, or the researcher is lying. And he's "written blog posts that are used by Synack"? Come now. That reads a lot like someone looking for a third item so they can make a comma-separated list of reasons. His post smells like bullshit.


I like how we're talking about Stamos warning a guy running around with stolen AWS credentials for all of Instagram in the same fashion as we'd talk about a DMCA threat. "Implicit legal threat"? There's nothing "implicit" or subtle about what was happening here.


You seem levelheaded throughout the thread and make good points more articulately than I ever could but this seems a bit emotionally involved.

It's possible that we're all correct: This guy could be a wildcard researcher that plays fast and loose and the CSO could be covering his own ass. You say he's building a first rate application security team. Is it hard to believe that he could have made the mistake of focusing almost exclusively on that?


LAUGH .. I love it, even his staunch defender friend says he's lying: " I did not threaten legal action against Synack or Wes "

https://www.facebook.com/notes/alex-stamos/bug-bounty-ethics...


I get your point in these threads, but unless I'm misunderstanding, who cares about stolen, potentially undeleted Amazon creds? Revoke the key in the portal and be done with it?

Given who I'm replying to, I'm assuming that I'm missing some key piece of the puzzle.

(And I totally acknowledge it doesn't change the circumstances of what either side has done, I'm just curious)


The point is, having those is a prosecutable offense, if Facebook chose to prosecute. So it's a big threshold to cross legally, even if not meaningful from a programmer's perspective.


Facebook's terms say they will not prosecute or report whitehats to law enforcement. Facebook could prosecute, at the price of some goodwill from the security industry (or part of it). I'm sure a competent lawyer could mount a robust defence for the security researcher (beyond reasonable doubt, IMO).


You're missing my point and talking about something entirely different from what I'm talking about. I'm not talking about whether Facebook will prosecute and what the consequences of that will be (whether they'll win or lose whatever).

I'm just pointing out that taking AWS keys is a big deal, because it's legally a big deal.


Facebook's disclosure policy reads:

>If you give us reasonable time to respond to your report before making any information public, and make a good faith effort to avoid privacy violations, destruction of data, and interruption or degradation of our service during your research, we will not bring any lawsuit against you or ask law enforcement to investigate you.

IANAL, but it could be argued (in court) that he had Facebook's permission to get the AWS keys. In his opinion (and mine) he made good faith efforts to avoid privacy violations.

Facebook's official disclosure policy has legal weight. There is a legal concept (whose name is escaping me) that could apply, which in layman's terms says the official disclosure policy gives him Facebook's tacit approval - I first heard about it in Oracle v. Google, where Google argued that a blog post congratulating Google provided tacit approval.


The part you emphasized is dependent on the first part of that sentence, however. In this hypothetical lawsuit, Facebook's lawyers would easily be able to argue that they would not have done anything over the initial exploit, or even over demonstrating that he had recovered valid AWS keys, but that attempting to hoover up data from S3, etc. violated the "good faith effort to avoid privacy violations" part.


>but that attempting to hoover up data from S3

That's a mischaracterization given his description. He examined the filenames/metadata specifically to avoid buckets that might contain user data.


1. This assumes his description of his actions is completely accurate

2. This assumes that he was perfectly accurate in his assessment of an unfamiliar project's naming conventions, data structure, etc.

3. This assumes that he was perfectly reliable in making the actual copies and didn't accidentally include potential personal data (e.g. who knows what might be in a log file?)

The problem is that we're talking about someone who already decided to exceed the bounds of what was clearly protected under the bounty program. He'd already reported the initial vulnerability and been paid for it but waited until later to mention that he'd copied a bunch of other data, had access to critical infrastructure, and wanted more money.

It seems fairly likely that this wasn't malicious but rather just poor judgement, but that makes it very hard to assume that outside of that one huge lapse in judgement he did everything correctly. It's really easy to see why Facebook couldn't trust his word at that point since it's already far outside normal ethical behaviour.


To your first point: There's being skeptical and then there's calling someone a liar without actually calling them a liar because you don't have any justification for doing so. This is far from the first time I've seen this on HN and it's really not okay. There's no point in speculating about the veracity of this person's statements until there's a reason to.

To the second and third: They only require that a researcher "...make a good faith effort to avoid privacy violations..." and I'd say he met that. You can argue that the entire endeavor wasn't in good faith but he certainly made a significant and conscious effort to avoid private data.

I think his biggest lapse in judgement was that he brought security operations issues to light in a bug bounty program run by the people that would be most embarrassed by them. Application security bugs are created by the engineering team and the CSO's application security team fixes them (or advises or whatever). Security operations issues are entirely the responsibility of the CSO's department.

Facebook (as an organization) should be thanking him. While he didn't expose application security bugs he exposed significant operational issues and blind spots. Keys with far too much access, lack of log inspection, lack of security around what IP addresses a key can be used from, etc. Operational issues and lapses in operational security are what got Twitter in hot water with the FTC in 2010. It's not as easy to play cowboy with operations as it used to be.

The CSO hasn't been around for long but by all accounts he poured a lot of effort into hiring an application security team. Perhaps that's his specialty but even one experienced technical manager hired for security operations could have caught these basic issues. They probably wouldn't have addressed the lack of least privilege in that time frame but they could have easily spun up logging to catch some rando on an unknown IP address using their keys.

But like I said, he hasn't been there for long so I don't blame him for the failure. What I do blame him for is calling up the employer to threaten them as leverage to shut up the researcher. I blame him for posting a thinly veiled justification for doing so. He could have addressed this openly, talked to the guy directly and went to the other C-level execs with it as a justification for getting everyone on board with fixing it but he tried to keep it contained to his department.

I understand how he must feel being the new guy who's responsible for the outcome but not for creating it. I know he'll get questions that he might not be able to answer since they probably aren't logging bucket access. Questions like, "Who else got a copy of these keys and what did they access?" Saying "I don't know and we may never know" in response to that, even if you weren't in charge more than three months ago, is rough.


Again, you're quibbling about legal details that are not relevant to my point. I'm pointing out that his actions are a big deal because they crossed a legal threshold where a company would have a somewhat decent case to prosecute you. I don't care whether or not they would succeed.


Then why the immediate escalation?

Wouldn't it have made more sense to contact the researcher directly, rather than using his position of power to pressure the researcher's company's CEO?

Why not assume good faith? (Which is what I would think a white hat bug bounty program should assume)


I am not sure what part of

"he has interacted with us using a synack.com email address,"

invalidates my reading that he was using his company's email?


Bottom of his post he replies to Alex his post:

> I never contacted Facebook or Alex using my work email account. It was only after Alex contacted my employer via email that I sent a reply from my work account. Alex indirectly contacted me at work, not the other way around.

If that is true, it was either poor judgement on Alex's part or bad intent to call Synack.


It says nothing about who initiated contact using his company email address. It could have said, "he contacted us using" or "his facebook account was associated with" but instead it says "he has interacted with us using". Sometimes what's not said tells us as much if not more.


The Facebook reply does not state that he was using his company email address to report issues or to communicate prior to them reaching out. The researcher says that he only used the email after Facebook got his employer involved.

The Facebook post does not, in any way, contest that.


Technically it doesn't contest that, but he uses multiple weak points in the bullet prior to the one about contacting the employer's CEO. The intent was clearly to establish that his decision to contact the researcher's employer had merit. It was carefully written so it remained factual while implying things that aren't.

I'm not disagreeing with you, only making it clear that yeukhon was played by Alex exactly as intended so he'd be out there defending him on sites like HN.


He didn't use his company address until he was contacted through his company.


In any case, one question remains: how does Facebook define a "million dollar" bug if the security team is not aware of the damage it can do? Since this is not the first time this bug was reported, did they actually give a big bounty to the person who made the initial report (given that it can lead to this much damage)? Or just another small bounty, saying that it's not a very important security flaw?


There are enough laws against "cybercrime". If Alex felt threatened, he should have escalated the issue to the FBI. There is no good reason to call the employer. By doing so, Alex threatened to fuck up Wesley's life.

edit: Or - after calling the CEO - he should have contacted Wesley directly so they could deescalate the problem together.


> The researcher NEEDED TO HEAR THAT.

I don't disagree. But why go through his employer, when they already had a direct line to the researcher himself?


Intimidation.


Relatively new security teams are almost useless. In 2-3 years FB might have its shit together, but three months is nowhere near long enough to fix their problems.


Agreed. After looking at his LinkedIn profile, it's hard to blame Alex for the problems, as he's only been there a short while. However, he can be blamed for creating all this unnecessary drama.


If you understood how big companies work, you'd know it takes more than a few months to build "one of the best teams". This is one thing in Alex's favor though, he's new to the job. Still, if you also understood how big companies work, you'd know that everyone hates the drama queen.

The right move here would have been not to threaten Wes, to pay him, and to just update the policy.

Lesson learned for Alex and his friends: Do not threaten individual contributors or suffer massive freaking drama. Thank you internet.


> I'm not saying this researcher was 100% in the right, but this is the CSO ass covering. "Don't pay attention to the obvious operational deficits, the problem is the researcher overreaching."

The response from FB's CSO is very specific to a very specific blog publication. Not regarding the flaws in how their AWS Buckets are used.


I'm not sure what you're getting at.


Your statement:

> "Don't pay attention to the obvious operational deficits, the problem is the researcher overreaching."

mischaracterizes the response by FB's CSO as one that is attempting to draw criticism away from operational flaws by instead placing focus/blame on the researcher's methodology.


I disagree.

A security researcher went public with a story of "I found this massive security hole and Facebook tried to avoid paying what I thought it was worth, and then threatened me with legal action"

The response that Alex thinks he needs to make is "my actions were reasonable because ..."

From external appearances it seems as though he is more concerned about looking like a heavy-handed, lawyer-invoking, CSO than the publicity around FB having an unpatched RCE that allowed access to highly-privileged AWS keys.

What he chooses to write about is a reflection of what he saw as the most important news in the original blog post.

I suspect he's actually right. The blog post will probably raise more bad publicity around the way FB handled the research & disclosure than the existence of the bug, and it's the piece that needs to be resolved well.


You're right, that was the purpose of trying to keep him quiet by contacting the CE-freaking-O of his place of employment with an implicit legal threat. The blog post is an attempt to do damage control when he realized the researcher wasn't going to put up with that and went public.


> by revealing that he'd retained AWS credentials from Instagram long after they'd closed the vulnerability that he used to get them.

How would that change anything?

If Facebook did rotate all keys the moment the researcher reported it, they made no difference.

If Facebook did not, then they aren’t taking care of their security properly.


Without defending the researcher here, I thought that was the weakest point in Facebook's response. Was he interacting with Facebook using his synack.com email address during this exchange rather than at some point in the past? Was he signed up on Facebook with his synack.com address? (I haven't used the bug bounty program but it appears to require a user account.) Did he mention his employment with Synack in the course of the exchange? If any of those things were true, I suspect they'd say so, rather than leaving it at "has interacted..."

I don't know, if the guy was just shaking them down then maybe trying to get him fired is indeed a reasonable thing to do, but I don't buy that anyone would have just assumed under the circumstances that he was doing all of this on the clock.


"I never contacted Facebook or Alex using my work email account. It was only after Alex contacted my employer via email that I sent a reply from my work account. Alex indirectly contacted me at work, not the other way around."


I don't think it does. Wes asked for communication via Facebook's own tools for it, didn't get it, and they went around him to his boss. That's crap.

Now, Wes exfiltrating data rather than just looking at it? Not cool. But Facebook's side of the story is just as biased as his.


But it seems obvious that in doing so he wasn't acting in good faith.


Yeah, why not just a quick email- "Hey are you working for Synack here or independently?"


Supposedly he was using his synack email address, why would they assume he worked independently?


He posted a reply on his blog saying that he only used his synack email address after the initial exchange with the Synack CEO.


> At this point, it was reasonable to believe that Wes was operating on behalf of Synack.

Huh? How did you make this connection? Why would he then report his findings to you?

From my point of view, contacting his employer was clearly meant as a gut punch.


This section was 100% written by a lawyer, and is intended to sound obvious without in fact being obvious at all.


Shame on you for contacting his employer directly. This teaches a good lesson to all the black, grey and white hats out there. Next time they'll know to just p0wn to 0wn.


Imo you are just trying to cover yourself, and poorly. You should accept the guilt of having had a server with a well-known vulnerability that held the keys to the kingdom, instead of blaming everything on Wes.


Um... Have to side with Wes here. Your rules were not nearly adequate, and instead of going at Wes directly with adequate and in-depth communication, the CSO went after his employer - which is _not_ ethical.


Sorry, but it looks like your technical issue has become a PR issue. Contacting his employer was an act of intimidation, and no amount of cover-up will make up for it.


Quite frankly I'm not surprised Wes is sour about how this was handled and the amount granted as bounty.

It's very rare for a single vulnerability to grant you the keys to the kingdom. If you check pwn2own, the vast majority of the hacks leverage more than one. Most major attacks start with a small bug.

The real severity of a vulnerability is how far it can be pushed to broaden its scope. In this case that admin panel was just an entry point to a whole chain of security SNAFUs (AWS keys in files at a multi-billion-dollar internet company, seriously?).

To reiterate, he got access to:

- source code
- AWS keys
- a plethora of 3rd-party platform keys
- a bunch of private keys
- user data

This might not be the million dollar bug, but close.

Just think about what an actual attacker could have done with it:

- log in as / impersonate ANY Instagram account
- impersonate the whole of Instagram (code + SSL keys!)
- inject malware into the Instagram app and sign it with Instagram's own keys
- download tons of user data
- wreak havoc in AWS (possibly expanding his access; we don't know what else he would have been able to reach had he spent weeks, not hours, exploring)

This is not a missing permission check allowing you to delete other people's photos. This is huge, and based on that, credit and a significantly higher bounty are due.

Aside from that, the handling of the whole matter was not good:

- if your policy is not precise, interpret it to your disadvantage; you screwed up by not making it clear
- contacting his boss should only happen (if at all) after he himself has been asked for the same account of events
- the post about "bug bounty ethics" misses the point. Following your logic, the Heartbleed investigation should have ended the moment someone discovered a buffer over-read, without exploring where it leads.
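On the "AWS keys in files" point: credentials of that shape are trivially discoverable by machine, which is what makes leaving them on disk so dangerous. A minimal sketch (the helper name and scanned directory are mine; the `AKIA` prefix plus 16 uppercase alphanumerics is the documented format of long-term IAM access key IDs):

```shell
#!/bin/sh
# Sketch: scan a directory tree for strings shaped like AWS access key
# IDs. Long-term IAM access key IDs start with "AKIA" followed by 16
# uppercase letters or digits, so a plain grep turns them up in config
# files, source code, or logs -- exactly the SNAFU described above.
scan_for_aws_keys() {
    grep -rEn 'AKIA[0-9A-Z]{16}' "$1"
}
```

An attacker who lands on a box can run the same one-liner, which is the point: secrets sitting in files are found in seconds.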


"I did say that Wes's behavior reflected poorly on him and on Synack, and that it was in our common best interests to focus on the legitimate RCE report and not the unnecessary pivot into S3 and downloading of data."

You lost me at this point. Who do you think you are really?


He must be pretty delusional if he thinks that's an OK thing to write on a blog. If I was him I'd deny, deny, deny or try and make it seem a whole lot less sinister than it is.


> The fact that AWS keys can be used to access S3 is expected behavior and would not be considered a security flaw in itself.

Isn't it a security flaw that a single AWS key was able to access all of Instagram's data?


No excuses for contacting his employer though. Just plain intimidation.


You talk about ethics like it is an entirely black and white concept. I would consider a lot of Facebook's practices unethical in comparison to my own set of ethics. There are ethical dilemmas, which are basically what most discussion about ethics is about to begin with. You use the word unethical but without discussing ethical dilemmas, and that makes your argument weak even though you potentially have a very strong argument.


What he did do is expose that you guys don't know how to use AWS and S3. Those keys should never have been on a server in the first place. I think it would have been in your best interest to fix it and pay him. Now that other hackers know Instagram sucks at server management, it is only a matter of time before someone finds another key. Guess what they are not going to do? They are not going to report it; they will download and sell your info.


I hope someone calls your CEO and talks to him about your conduct.


I'm not really impressed by your reaction...


If the intention of a bug bounty program is white-hat disclosure, you have done pretty much everything you can to ensure vulnerabilities are dealt with in a black-hat manner instead.

Well done.


> The fact that AWS keys can be used to access S3 is expected behavior and would not be considered a security flaw in itself.

A security "mistake" then? :)


Thank you for the response, Alex, especially the details about the researcher's email address and affiliation. It makes your actions seem reasonable, in my opinion. As a security researcher, I personally would not be dissuaded from reporting to the Facebook Whitehat program due to this incident.

I'm glad companies can offer transparency like this.


I think his response was too personal. They're both adults, and calling the CEO of the company that employs him to make a point, just because you can, is to me way too close for comfort.

There were other personal attacks in his response that I've talked about here: https://news.ycombinator.com/item?id=10755402


> Thank you for the response, Alex... It makes your actions seem reasonable... I'm glad companies can offer transparency like this.

The people who like you the most are the easiest to persuade.


> At no time did we say that Wes could not write up the bug, which is less critical than several other public reports that we have rewarded and celebrated.

There is no bug more critical than one that results in complete access to Instagram infrastructure. Sure, the bug is stupid, but you are fooling yourself.


Couldn't it be argued that Instagram's choice to store private keys in a third-party system (Amazon) is a million(s?) dollar bug?


Why have you not rotated your private keys?

  notBefore=Apr 14 00:00:00 2015 GMT
  notAfter=Dec 31 12:00:00 2015 GMT
(Feel free to respond here if you want to pay me the bug bounty for this)


    $ echo | openssl s_client -connect www.instagram.com:443 2>/dev/null | openssl x509 -noout -dates
    notBefore=Apr 14 00:00:00 2015 GMT
    notAfter=Dec 31 12:00:00 2015 GMT
AWS bucket creds are not the same thing as SSL certs, and were most likely scoped to only the relevant S3 buckets, which are totally separate from any load balancers.
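If the creds really were bucket-scoped, the blast radius would have been small. A minimal sketch of what that looks like as an IAM policy (bucket name and action list are hypothetical, purely illustrative):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::example-app-assets",
        "arn:aws:s3:::example-app-assets/*"
      ]
    }
  ]
}
```

A key attached to a policy like this can touch exactly one bucket; the haul described in the blog post (SSL keys, source code, user data) suggests the compromised key was scoped far more broadly.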


I never claimed that AWS bucket creds were the same thing as SSL certs.


Then rotating their SSL keys shouldn't be relevant.


Unless I'm misunderstanding, it's relevant because this researcher was able to access (from the blog):

> SSL certificates and private keys, including both instagram.com and *.instagram.com

If this researcher was able to access it via not much more than a hole that was _already reported multiple times_, then I think it's not a stretch to think that [many?] other less honest parties could (and in my opinion most likely do) already have it.

If it were me, even if it's definitely only a single researcher who got access (and it doesn't sound to me like they know for sure, but regardless), something _that_ sensitive would have to be rotated anyway. If it were someone outside the teams that strictly require access to it operationally, I'd rotate it, let alone someone outside the company.
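Whether a rotation actually happened is cheap to verify from the outside. A sketch assuming nothing beyond stock openssl (the helper name is mine): hash the public key inside a certificate; if the private key was really replaced, the fingerprint changes.

```shell
#!/bin/sh
# Sketch: print a SHA-256 fingerprint of the public key inside a PEM
# certificate. Comparing this fingerprint before and after a claimed
# rotation shows whether the keypair actually changed -- a reissued
# certificate over the same old key keeps the same fingerprint.
cert_pubkey_fingerprint() {
    openssl x509 -in "$1" -noout -pubkey \
        | openssl pkey -pubin -outform DER 2>/dev/null \
        | openssl dgst -sha256 -r \
        | cut -d' ' -f1
}
```

The same pipeline works against a live endpoint by feeding it the certificate retrieved with `openssl s_client`, as in the expiry-date check quoted earlier in the thread.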


Going to his employer instead of talking to him directly was just petty.


Sorry Alex, you're in the wrong here. Your threats to go to law enforcement completely undermine the credibility of your bug bounty program. Your publicly calling another professional "unethical" is a serious charge for what is a grey area at best, and the facts and history of issues reported by this person would not lead a reasonable person to conclude malice. And ignoring him but going to his boss, that's just petty.

Not even one attempt to talk to the guy like an adult about what he was doing? You couldn't even be bothered to say anything?

You'd be amazed how a polite reply to the effect of, "thanks, you've proven your point, and we are getting a little uncomfortable with where this is headed" might have solved all of this. If he ignored you and kept hacking after that, by all means steamroll him, but if you don't even have that much respect for your peers, I'm not sure why you bother with the bounty program.


Agreed. You have quite a list of arguments defending the researcher, when his track record alone should have been enough to prove his good will. Despite the landslide of evidence of good will, Facebook decided to act in bad faith. Unacceptable; I hope other researchers read and remember this story.


CXOs do not talk directly to anyone other than CXOs, right?




Yep, my opinion of Facebook reinforced to the highest extent. Utter amateurism and disgusting behaviour. What an absolutely idiotic way to handle this situation, and coming from the very top. I haven't used Facebook in years, thank you for an excellent reminder to delete my Instagram account.

edit: Alex, how about the "shit, we really fucked up; I apologise to our users, yadda yadda" blog post?



