Hacker News

They meant "will have worked out enough of the kinks by their 5th successful launch so as to be operating at a high success rate."

Lots of falcons failed, but you don't see anybody worried about their payload or crew on Falcon 9 these days.



As long as the organization builds and maintains a culture that aggressively seeks out problems large and small, and proactively fixes approximately all of them, you will end up with a program with a high rate of operational success.

It's the starting-point vs. slope issue: the steeper slope always wins eventually.

The problem is maintaining that aggressive problem-seeking culture after long periods of success.


This is part of the culture distinction. SpaceX is attacking problems, but IMO they often don't pursue a root-cause understanding. That leaves the possibility of unknown latent risk.

As an example, they had failures related to their COPVs rupturing. On the one hand, they addressed the problem by redesigning the system. On the other, they never fully investigated why the COPVs were failing in the first place; NASA decided to fund that investigation on its own. One possible consequence is that the redesign didn't fully address the risk, because the root cause was never fully understood.


>"will have worked out enough of the kinks by their 5th successful launch so as to be operating at a high success rate."

I agree with the first part, but the second half extrapolates too far IMO. One successful launch tells us very little about operational reliability.


But the apples-to-oranges comparison here is many launches (SpaceX, some successful and some not) vs. one successful launch (traditional modern aerospace).

From an outcome perspective, it's hard to see the lower launch rate ever coming out ahead.

You don't have economies of manufacturing scale, because your assembly rate is so low that it doesn't make sense not to treat each vehicle as exquisite.

You don't have rapid iteration on manufacturing improvements, because the tyranny of safety checks balloons the time from an identified fix to flight.

And most importantly, you leave yourselves extremely vulnerable to unknown unknowns that you can't imagine in the design phase.

For example, if NASA had been launching the shuttle more rapidly, with the bulk of those being uncrewed launches, they probably would have picked an uncrewed launch to test expanding the temperature bounds at Cape Canaveral, and Challenger would have exploded without a crew.

As it was, NASA's shuttle launches were so rare that there was no acceptable launch rate and no low-impact launch opportunities to do so. So they tested it on a crewed mission, with disastrous results.

Point being: they backed themselves into a low-volume/high-risk corner of their own strategic design.

SpaceX's most brilliant achievement was using Starlink to artificially boost launch demand and give them a minimally-profitable/break-even place to sink higher-risk launches.


> Point being: they backed themselves into a low-volume/high-risk corner of their own strategic design

Because they knew they had to "get it right" the first time: a bunch of buffoons (Congress) would consider any crash or explosion a failure and pull funds immediately.


>Because they knew they had to "get it right" the first time: a bunch of buffoons (Congress) would consider any crash or explosion a failure and pull funds immediately.

That’s not the way CCP (Commercial Crew Program) contracting works.


I agree with many of your points, but it comes across as slightly biased because you don't acknowledge a single downside to any of them.

>You don't have rapid iteration on manufacturing improvements, because the tyranny of safety checks on manufacturing time balloons

Rapid iteration has obvious benefits. But there are also downsides: it makes it harder to arrive at a stable, reliable design, it introduces vendor issues, etc. Tesla is also known for iterating faster than its major competitors, but that has resulted in logistics and reliability issues.

IMO SpaceX's most brilliant achievement was leveraging government contracts to work out the kinks of their designs, which could then be leveraged at lower risk for Starlink. In effect, they let the public bear the risk (because the government is really the only entity capable of shouldering a risk of that size for an unproven quantity) and then transitioned to a private revenue stream in Starlink. (I'm not saying that as a slight, btw; I think it's mutually beneficial.)


Frankly, comparing Tesla and SpaceX is getting to be a tiresome argument. They're owned by the same "eccentric" billionaire, but he's not an engineer, and they don't share staff, facilities, or manufacturing beyond "hey, this alloy is pretty good."

SpaceX's strategy for the Falcon 9 worked: it's one of the most reliable rockets in the world, and it flies more often than any other.


>Frankly comparing Tesla and SpaceX is getting to be a tiresome argument.

You're focused on the wrong takeaway. You're making this about a person, I'm talking about a process. I used Tesla because it's easy to see how one culture translates to the other. Insert any company that uses rapid iteration in place of Tesla if you prefer.

The point is that there are certain circumstances where rapid iteration is useful and others where it is less so. When reliability matters, rapid iteration may be working against you. (It's a continuum, of course, so the real question is where that tipping point is.)

>SpaceX's strategy for the Falcon 9 worked

The point I'm driving at is that there's a distinction between finding out why certain iterations didn't work vs. just changing the design without fully understanding the failure mechanisms of the first. One leads to greater understanding than the other. It's the difference between an engineer's mindset and a scientist's mindset.


Again: the Falcon 9 works. It works now. It is a rocket, built by the same company which is building Starship.

You are driving at a point by pretending there's some important difference because it's "in this industry".

It's the same industry. Building the same type of product. By the same company.


No. I am not saying it’s the “industry” as much as the context of risk. That's why it doesn't matter if the analogy is Tesla or some other safety-critical manufacturer. To be clearer: how many F9 launches have carried humans?

Now go look at the history of Shuttle for the equivalent number of launches at that risk level. Would you claim they are equivalent in terms of human-rated safety?

If not, it’s only because you have the benefit of knowing the long-tail probabilities with the Shuttle.

>It's the same industry. Building the same type of product.

By extension of your logic, Starship should then already have the same launch reliability as F9. So either this is an example of a low-probability event, or your logic is flawed.


There were 135 space shuttle launches, of which 2 failed. There have been 162 launches of the F9 block 5 with 0 failures. Why do you think we have more knowledge of the long-tail probability of the Shuttle than F9?

True, few of those F9 missions were crewed, but that's the point. There's no difference between a crewed and an uncrewed F9 launch vehicle, so there's no reason to think the presence of humans would change the risk. So you get to accumulate most of that reliability data without putting people at risk.
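The failure-count comparison above can be made concrete with a little arithmetic. This is only a sketch under assumptions nobody in the thread specified (a uniform Beta(1, 1) prior and independent, identically risky launches), contrasting the Shuttle's 2 failures in 135 launches with Block 5's 0 in 162:

```python
# Posterior for per-launch failure rate p with a uniform Beta(1, 1) prior:
# after f failures in n launches, p ~ Beta(f + 1, n - f + 1), whose mean
# is (f + 1) / (n + 2) -- Laplace's rule of succession.

def posterior_mean_failure_rate(failures, launches):
    """Laplace rule-of-succession estimate of per-launch failure probability."""
    return (failures + 1) / (launches + 2)

shuttle = posterior_mean_failure_rate(2, 135)    # ~0.0219
f9_block5 = posterior_mean_failure_rate(0, 162)  # ~0.0061

# "Rule of three": with 0 failures in n trials, an approximate 95%
# upper confidence bound on the failure rate is 3 / n.
rule_of_three_upper = 3 / 162                    # ~0.0185

print(f"Shuttle posterior mean failure rate: {shuttle:.4f}")
print(f"F9 Block 5 posterior mean:           {f9_block5:.4f}")
print(f"F9 Block 5 ~95% upper bound (3/n):   {rule_of_three_upper:.4f}")
```

Note that the zero-failure record still leaves a nontrivial upper bound on the true failure rate, which is the "good or lucky" ambiguity discussed further down the thread.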


>Why do you think we have more knowledge of the long-tail probability of the Shuttle than F9?

Because the nature of the two programs was fundamentally different. The Shuttle was a product contract, while CCP is a service contract. On the former, the government has much more control and specifies more rigorous acceptance criteria, which generally gives a much higher pedigree of quality control. On the latter, they take a much more hands-off approach and have limited insight.

As an analogy, imagine you are making a big bet on acquiring a software company. One company gives you its source code, shows you all its most recent static analysis and unit test results, lets you interview its programmers, etc. The other allows you none of that, but gives you a chance to play around on its website to see for yourself. Both systems seem to work when you try the end product, but which do you have higher confidence in?

At the end of the day, "reliability" is just a measure of how much confidence we have that a product will do what we ask of it, when we ask.


Are you asserting that we gain better insight into the reliability of a system by thinking about it deeply rather than by observing it perform its function? Because I don't believe that for a minute.

I'm not saying you can get by without thinking, but it's difficult for humans to estimate the reliability of a complicated system. Reality, though, has no problem doing it.

Plus, I think your analogy is flawed. NASA surely has a more hands-off approach on the CCP than on the Shuttle, but to say it's hands-off is misleading. They do have a lot of access.


No, I'm not saying by "thinking about it" (although that has its place). Everything I listed is a form of testing. But there's a distinction between iterative testing at a lower level, and end-to-end testing. Again, both have their place.

Take the example of the F9 strut failure. They could have tested the material outside the final flight configuration and saved themselves a lot of trouble. Instead they chose to forgo that testing and 'tested' it as part of their launch configuration. (I put it in quotes because it's not clear to me that this was a conscious testing decision.)

There’s also a difference between “we’re not completely sure of the fundamental principles, but our testing indicates it works” and “our testing indicates it works and we have a solid understanding why”. The latter allows you to know the limits of your application much more readily. The risk in the former is that you don’t know what you don’t know, so you can never be wholly sure if you’re good or lucky. And luck can be fickle. And this is also where rapid iteration can lead to issues: the more you change, the less sure you can be about whether your results are attributable to luck.

>Plus, I think your analogy is flawed. NASA surely has a more hands-off approach on the CCP than on the Shuttle, but to say it's hands-off is misleading. They do have a lot of access.

They have many engineers who want more access and are effectively told to back off, because that's not their place under this type of contract. So I'm not saying they have no access; I'm saying they have very limited access by comparison. A better analogy would have been that they get the results of a select few tests, but not all of them.


Considering his vast knowledge and expertise in the field, he might as well be called an engineer. In fact, probably more so than many real engineers.


I think the risk is that both are driven by what the CEO finds cool, which may or may not be what's most effective.


And SLS is that much further behind.


No doubt, but operational tempo is a different issue than system reliability.



