
The last quote from Desair really sums it up nicely, "The language of finance can be insidious. Words like leverage and concepts like diversification can morph from narrow financial terms into much more general ways of understanding the world."

This kind of thinking has infected corporate America, which is optimizing return-based measures--typically IRR--rather than profit-based measures. That kind of thinking will lead you to believe offshoring your fabs is a good idea, because it reduces assets in the denominator.

On the other hand, financial thinking can help you better understand the world. One of the more powerful insights that comes to mind is Merton's model of corporate capital structure. It turns out that equity is "equivalent" to a long call on firm assets and debt is "equivalent" to a risk-free bond and a short put on assets.

Seeing things this way tells you something about how firms are run. Equity owners (management) have an incentive to increase asset volatility, which increases the call value. This value is taken straight from the bond holders' short put. This is why you see buybacks in situations where buybacks seem crazy (Intel in 8/2020).

https://www0.gsb.columbia.edu/faculty/ssundaresan/papers/Mer...
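
To make the payoff decomposition concrete, here's a minimal Python sketch of Merton's model (all numbers are hypothetical; equity is priced as a Black-Scholes call on firm assets):

    # Merton structural model: equity = call on assets, debt = assets - equity.
    from math import log, sqrt, exp
    from statistics import NormalDist

    N = NormalDist().cdf  # standard normal CDF

    def merton_equity_debt(V, F, r, sigma, T):
        """V: firm asset value, F: debt face value, r: risk-free rate,
        sigma: asset volatility, T: debt maturity in years."""
        d1 = (log(V / F) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
        d2 = d1 - sigma * sqrt(T)
        equity = V * N(d1) - F * exp(-r * T) * N(d2)  # long call on assets
        debt = V - equity  # risk-free bond minus the short put
        return equity, debt

    # Raising asset volatility transfers value from debt holders to equity:
    for sigma in (0.2, 0.4):
        e, d = merton_equity_debt(V=100, F=80, r=0.02, sigma=sigma, T=5)
        print(f"sigma={sigma}: equity={e:.2f}, debt={d:.2f}")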


> That kind of thinking will lead you to believe offshoring your fabs is a good idea, because it reduces assets in the denominator.

I am under the impression that offshoring fabs is a good idea due to drastically lower labor costs and other costs such as complying with environmental regulations as well as legal costs arising from those regulations. At least it was, and now there may be supply chain advantages to the offshore fabs.


>This kind of thinking has infected corporate America, which is optimizing return based measures--typically IRR--rather than profit based measures.

In uni, this difference was explicitly used to differentiate investors and financiers.

Investors are picky where they put their money and want profit.

Financiers want every dollar always working and covet IRR.

... of course, out in the real world it isn't so cut and dried.


What's the difference between the US and a despotic regime? We have a gulag system. There is a degree of legal nihilism that borders on lawlessness. For example, the DOJ charged someone with a federal crime for throwing an imported tequila bottle. When the laws mean anything and nothing, you have rule by fiat. There's corruption everywhere...


I’m sure there’s corruption to be found everywhere; however, in some places you have recourse, in others you don’t. Borders are quite special areas that one ought to take with utmost seriousness anywhere. You can imagine what would happen if you threw out some coke in the Kuala Lumpur or Singapore airports.


Who said anything about a border? A person was arrested in Seattle for throwing a Jose Cuervo bottle imported from Mexico. The importation was the "foreign or interstate" nexus. Do you own anything that is not imported? That's legal nihilism.


> The difference being that then we saluted getting access to raw data, whereas now we are being conditioned to consider them "misinformation".

I like this point. I wish it would be reported as an "influence operation" rather than "misinformation". The most effective propaganda is 80% true.




I don't mean to blame you, but for the sake of anyone reading this -- you could absolutely have negotiated this down with the hospital by pleading financial hardship.

I'm not trying to justify the system overall, but please don't take a $150k bill at face value, if you can't reasonably afford to pay it.


> if you can't reasonably afford to pay it.

It's true you can negotiate, but it's also true their bar for "reasonably afford to pay it" absolutely can include emptying your savings and other assets.


Hospitals' bar for "reasonably afford" is low, in my experience. They want proof that you make poverty-line wages or below to write it off, and below 300% of the poverty line (around 38k) for reductions.


To totally write off, sure.

But they will often vastly reduce the bill and/or offer 0% interest long-term payment plans.


There's more information here as well. Cloudflare was apparently operating network connected facial recognition cameras in their offices.

I'm not someone who's crazy about privacy, but this is a pretty dark indicator for a company housing DNS query records. Maybe it's time for someone to build a proxy for tunneling Cloudflare DoH/DoT over Tor or some other free mixing network.


How is that something to be worried about? There are companies out there that try to monitor if employees are in rooms/areas that they're not supposed to be. You can do that with badges/RFID but then people can take a card or slide by in various ways. (Happens all the time at big companies - people just tailgate) If anything, they might be taking privacy more seriously by not letting people without authorized access into secure areas.

I think you give up any sense of privacy as to where you're located in an office or where you've been in an office when you decide to work in an office owned by some employer. I don't know why there'd be any expectation there.


I find it fascinating how okay you are with your employer tracking you. We aren’t to the life-contract part of the dystopia yet; quit trying to skip ahead and give away your freedom so easily.


If you are in a secure area, like a server room for example, it's perfectly normal for there to be badged entry, cameras everywhere etc. There will also be signs everywhere telling you this.

If it's really secure there will be monitoring of all entrances, including corridors. (And there will still occasionally be people successfully tailgating, usually for perfectly innocent reasons like forgetting their badges at their desks etc. Real security is all sorts of fun.)


I've gotten stuck in a datacenter because I forgot the correct badge-out process. Tripped an alarm and got stuck in a man-trap.

For the uninformed - badge in to open the entrance door. The room then locks and you use your badge to open the exit.


Interesting. Seems like that would be a fire hazard, or was there a hold-to-escape type crashbar?


In fire/hazard conditions, security systems are required (at least in Australia) to permit free handle egress from any point in the building to a fire escape.

Any access control system has the capability to integrate with a fire system and allow this.


I believe the general policy in the States is "one swift motion" to exit a room, which is why you see mostly crashbars and lever handles as egress, typically on push doors for the primary egress path.

In secured areas where they want you to swipe out or places where they might get tripped accidentally, they sometimes have like a 15 second lockout before actually tripping the door.

I've been in just one server room, and they just had a motion-deactivated maglock tied to an electric strike, so in the case of a power outage a simple mechanical lock could be opened, but otherwise you need to badge in/out.


Some facilities get exceptions to fire policy and require employees to go through training of sorts. Diablo Canyon Nuclear Power Plant is one place I visited that did not have emergency egress. No badge? Call the guards, that is the only way out.


Fire hazard and false imprisonment. Ask Walmart. You accidentally lock someone in the store and you are looking at a civil rights lawsuit. You cannot restrict another human's movement without due process.


Never been to a Walmart specifically, but every store around here broadcasts a "closing the store in X minutes, please be outside of building by then" message on the loudspeakers a bunch of times, then the employees pack some things up and walk through the entire shop to check if everything is ok and the security personnel, being the last to leave, check all cameras before finally locking up.

This seems like plenty of due diligence for the store not to be liable if someone gets locked inside.


Obviously not the same type of facility, but I have seen buildings where the closing of smoke shutters opens otherwise locked doors, revealing an alternate fire escape from the corridor to the stairwell.


I'm okay with my employer tracking me if I'm on their premises using their property that they've given me. I'm not okay with them knowing anything I do or where I am outside of work, but if I'm at work then I'd be confused if I wasn't being tracked in some way.

Not at all in the paranoid "are you slacking off?!" sense, but just security information like knowing when I've been in a server room, or knowing if my work computer sent traffic to a known botnet C&C. If there's a security or theft incident and they don't know who's been in their building or what their computers are doing, it's pretty much impossible to investigate anything.

I understand that in places like Europe there's a very different culture and workers have a lot of protections from things employers may want to do, but not everyone around the world feels that way. Basic record-keeping of when badge-restricted doors and computers are authenticated to doesn't feel invasive to me in the slightest, even if others may strongly feel it is invasive.

There are many things I would find egregiously invasive, such as a manager inspecting all the websites someone visits to assess how productive they are, or timing people's bathroom breaks, but I just avoid such companies.


I don’t understand why people think the employer cannot check whether the employee is slacking off.

Maybe what we should prevent is the employer keeping months of footage and only bringing it up against the employee later, but if the employer uses the camera to tell an employee within 24hrs that he needs to ramp up, it feels ok. Maybe we should impose rules like “24hrs max” and “can’t be used as legal evidence, only raised verbally.”


> I don’t understand why people think the employer cannot check whether the employee is slacking off.

On some level it depends on what 'slacking off' means.

I've had employers where 'slacking off' meant actively doing some %mundane/repetitive/unnecessary% task with every moment of my free time. We were literally pulling the finish off the counters; there was no need to keep dusting them.

I've had software shops where reading integration documentation was 'slacking off'.

An interesting data point: in Germany, MS Office doesn't track how long you have been editing a document. My understanding is this is because the law there more or less says if you pay someone to do a task, you aren't supposed to (i.e. can't) care about how long it took them to actually do it, as long as it was done on time.

So I guess that's my problem. There's a very fine line between employers using surveillance to catch 'bad actors' and employers using surveillance as another tool to bully substandard work conditions onto people.


My guess is that micromanagement actually decreases quality and productivity as well, just due to the disconnect between management opinions and real-world employee experience. If you are judging performance on the output correctly, the employee will, out of own self-interest, maximize the quality and quantity of the output while minimizing their own effort expended in creating it.


100% of the information an employer needs to determine my productivity can be found by looking at my work output. They don't need to know what I'm doing or looking at at any given moment. The results speak for themselves.

I'm sure they have a legal right to check (in the US), but I really wouldn't want to work for such a company and would immediately start looking for a new job if it happened to me.


The day one says I'm "slacking off because we noticed inactivity on your laptop" is when I stand up and walk out the door. Hasn't happened yet but I suspect it will at some point.


Yeah I’m waiting for it too. It’ll be funny because otherwise people have had nothing but good things to say about my output.


Cloudflare sells security to people. If you don’t want to work at a company that has security requirements like that, don’t work there. Lots of people choose to donate their fingerprints, facial data, life history, and polygraphs to work for the government. That’s their choice to make.


Whether that info is well taken care of is one thing; whether it ends up floating on the internet is another. RFID badge data floating on the net is fairly useless, however other personal data could be very toxic in the wrong hands. And this info usually leaks, which is why it's not a great idea to let it outside the network, let alone record it in the first place.


> If that info is well taken care of

It isn't. https://en.wikipedia.org/wiki/United_States_Office_of_Person...


How is being recorded by your employer while on their premises giving away your freedom? It would be a different thing if they were tracking you out of work, but when you enter a premises owned by a business you kind of implicitly agree to be surveilled by them, as it is their right and freedom to protect their assets.


I worked in the defense sector.

We were tracked by contract (badge into building, badge into area). We couldn’t leave the work area unattended, which was a pain, so there were “processes in place” (last-person badge, etc.).

Generally they left you alone unless you were cheating. (Having someone badge you in when you weren’t there was a fireable offense.)

I don’t miss it, but it wasn’t that bad. Of course, with the work network not on the internet, what else could we do but work...


It's only natural, really, in this race to the bottom. If your zero-hour contract doesn't pay the bills, you're not just unworried about tracking; you'd take anything that might show how hard you've tried.


One of the biggest use cases presently is SARS-CoV-2 tracing, to figure out who needs to be notified that they were in proximity for X time of someone with COVID-19.


It really comes down to how it's used. As another commenter pointed out, any company using badges to swipe into doors can track your movements. Most cameras are positioned near entry doors, exteriors, or public areas as it is. The main difference here is the amount of information collected on an unauthorized entrant, and the fact that maybe badge-borrowing doesn't go unnoticed anymore.

I really doubt Cloudflare is the type of company to be tracking where each employee is and whether they are taking too many bathroom breaks. It's definitely an area abuse is possible, but probably not an area it's likely in Cloudflare's case.


> It really comes down to how it's used

I absolutely agree. The thing that concerns me is these cameras sitting on the internet. It says something about how overworked the security team is. I trust that they have good faith, but I don't know if they have the resources they need.


What is a specific credible concern about cameras with public API?


People other than the ones you agreed to let monitor you, well, monitoring you. Also, it's a major risk to the company itself, who knows what can be read off of employees screens if they're compromised.


Yeah, it seemed far fetched to me at first but I guess surveillance might be useful to a 3rd party. I read an article a while back on how people make equity trades based on data found through satellite images of refinery tanks and whatnot. I guess unsecured internal surveillance cameras could allow an outsider to find out if a company was really busy or just faking it.


Every IOT device is an attack vector against the network.


I wonder if there is a way to dumb down IOT devices so they can’t be an attack vector like that.


Lock the memory so that updates are physical-only, and restart regularly to flush in-memory malware. Not 100% secure and very inconvenient, so people prefer to isolate IOT on its own network and preferably have good network security, like putting the devices behind a VPN/firewall/other gatekeeper.

Actually, if you want to have IOT access from outside the network, the best approach is to close all ports and have the device initiate the connection to a control server. The device is dark when scanned, while a heartbeat signal ensures connectivity. This requires good security on the control server, but that is okay because server security is much better understood and does not suffer from the constraints of embedded software.
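
As a sketch of that phone-home pattern (the endpoint, interval, and command handling below are all hypothetical):

    # Outbound-only heartbeat: the device keeps no listening ports open and
    # periodically polls a control server for commands over HTTPS.
    import time
    import urllib.request

    CONTROL_SERVER = "https://control.example.com/heartbeat"  # hypothetical endpoint
    INTERVAL_SECONDS = 60

    def handle_command(command: str) -> None:
        # Placeholder: a real device would verify a signature before acting.
        print("received:", command)

    def heartbeat_loop():
        while True:
            try:
                # Outbound request only; the device stays dark to inbound scans.
                with urllib.request.urlopen(CONTROL_SERVER, timeout=10) as resp:
                    command = resp.read().decode()
                if command.strip():
                    handle_command(command)
            except OSError:
                pass  # server unreachable; retry on the next beat
            time.sleep(INTERVAL_SECONDS)

    if __name__ == "__main__":
        heartbeat_loop()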


Someone wanting to break in can check if anyone is there or see where easy to steal stuff is kept? Or on a larger scale you might leak when and how security guards make their rounds.


> Cloudflare was apparently operating network connected facial recognition cameras in their offices.

We do not use that feature and do not intend to.


But did you ever use it?


No, this was never in active use.


If they did, wouldn't it be a good sign if someone came along and said, hey, this doesn't align with our values, and was able to get it removed?


Yes it would be, but I wasn't after good signs.


When I read a paper like this I'm looking for four things: (1) the data, (2) the benchmarks, (3) the architecture, (4) the controls/ablation.

1. The data:

"We used a sample of 1,085,795 participants from three countries (the U.S., the UK, and Canada; see Table 1) and their self-reported political orientation, age, and gender. Their facial images (one per person) were obtained from their profiles on Facebook or a popular dating website... Facial images were processed using Face++37 to detect faces. Images were cropped around the face-box provided by Face++ (red frame on Fig. 1) and resized to 224 × 224 pixels."

2. The benchmarks:

"For example, when asked to distinguish between two faces—one conservative and one liberal—people are correct about 55% of the time."

3. The controls:

"What would an algorithm’s accuracy be when distinguishing between faces of people of the same age, gender, and ethnicity? To answer this question, classification accuracies were recomputed using only face pairs of the same age, gender, and ethnicity."

A. A complaint:

Geography and income are two powerful conditioners. These can leak in so many ways: uncropped background (geography), image color and quality (income), eyeglass shape (geography and income). This study really needs more controls. Geography and income would be a nice start.
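
For anyone curious what that control looks like mechanically, here's a rough sketch (field names and the scoring function are hypothetical stand-ins, not the paper's code):

    # Recompute pairwise accuracy using only demographically matched pairs.
    import itertools

    def matched_pair_accuracy(people, score):
        """people: dicts with 'age', 'gender', 'ethnicity', 'label' (1=conservative).
        score: model output, higher = more likely conservative."""
        correct = total = 0
        for a, b in itertools.combinations(people, 2):
            if a["label"] == b["label"]:
                continue  # need one conservative and one liberal
            if (a["age"], a["gender"], a["ethnicity"]) != (b["age"], b["gender"], b["ethnicity"]):
                continue  # the control: same age, gender, and ethnicity only
            conservative = a if a["label"] == 1 else b
            liberal = b if conservative is a else a
            correct += score(conservative) > score(liberal)
            total += 1
        return correct / total if total else float("nan")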


What stood out to me was

> Their facial images (one per person) were obtained from their profiles on Facebook or a popular dating website

so of course the first thing that comes to mind is "how good of a predictor is just knowing which of those two sites the image came from?"


> Geography and income are two powerful conditioners. These can leak in so many ways: uncropped background (geography), image color and quality (income), eyeglass shape (geography and income). This study really needs more controls. Geography and income would be a nice start.

But then the data wouldn't represent the natural world: nature as it is.

Raw data is the correct thing to use, because it's what a hypothetical other person would also use if you ran the same experiment yourself.


Uh, the headline claim is about faces, how does it make sense to then insist that you must leave the background in?


This reminds me of an early ML study about detecting skin cancer from pictures with a high accuracy rate.

The problem was that the ML model ended up being a ruler classifier, because most of the pictures with skin cancer also happened to have a ruler in them to measure the size.


Or the commercial model that identifies criminals from their photograph. Turns out people who frown are criminals. People who smile aren't. Or so you'd believe if you anchored your expectations comparing mug shots to social media profile pictures.


That wasn't the claim. The claim here is that we should scrub certain faces from the dataset in order to change the dataset in a certain favorable way.


No that's not the claim. A control is to understand how your model works, it's not what you release as the final product.


It would be nice to see a logistic regression using at least some of the features known to be useful (including geography and income).

That way we can see how much of the performance is from magic AI pixie dust, and how much is from basic 19th century statistics.
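
For concreteness, here's the kind of baseline being asked for, as a hedged sketch -- the features below are random stand-ins (so it scores ~0.5), not the study's data:

    # Logistic-regression baseline on tabular covariates, no images involved.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n = 1000
    X = np.column_stack([
        rng.integers(0, 50, n),          # geography bucket (hypothetical encoding)
        rng.normal(50_000, 20_000, n),   # income
        rng.integers(18, 80, n),         # age
    ])
    y = rng.integers(0, 2, n)  # political label; random here, real in the study

    baseline = LogisticRegression(max_iter=1000)
    print(cross_val_score(baseline, X, y, cv=5).mean())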

Every time I read a paper like this, I have this Margaret Mitchell talk [1] in the back of my mind.

[1] https://youtu.be/XR8YSRcuVLE


Yep, these papers don't usually pass the sniff test. My bet is you can predict the phone brand from the camera grain and that correlates with geography & income.


Some problems that Fukushima had: 1950s vintage design, active cooling system, backup power at sea level in a seismically active area. This kind of failure was not just predictable, it was predicted.

People travel to Japan from around the world to learn how to build earthquake resistant structures. Their nuclear engineers are top-notch. It was the bureaucracy that failed, not the talent.

In short, the problems were human not technical. People get complacent and greedy. They use every procedural tool they have to delay upgrades, maintenance, and improvement. I think that is at the core of most nuclear skepticism. Does anyone honestly think the United States has institutions sound enough to safely manage nuclear power over multiple decades? Or will they neglect basic maintenance and upgrades?


I think that the US Nuclear Regulatory Commission is world-leading. They identify problems proactively and require operators to phase in safety upgrades even for plants built 50 years ago. I live near an operating nuclear reactor and I prefer it over any form of fossil plant. Power reactors operating in the United States are reliable, safe, and have extremely low life cycle emissions of greenhouse gases.

Unfortunately, one of the most common refrains from nuclear boosters is that nuclear power is over-regulated. I don't want American nuclear plants held to the same lax safety/environmental standards as fossil plants. If we used taxes to internalize the costs of pollution from fossil-fired plants, low cost natural gas plants probably wouldn't be pushing reactors into early retirement. But leveling the playing field by slashing nuclear safety/inspection down to the low standard expected of fossil plants is the wrong way to go.

I am open to specific proposals for reducing regulations in the nuclear sector if there are regulations that impose additional process overhead, don't actually serve a purpose, and survive only from inertia. I wouldn't be surprised to hear that there are some of these. But I've been discussing nuclear power for 20+ years, starting back on Usenet, and specific proposals are much less common than generic "get rid of red tape" bluster.


Here, I'll come up with a proposal. If Congress is serious about climate change, then they can ask (and allocate the budget for) the Department of Energy to procure and operate a bunch of naval nuclear reactors. With whatever internal regulations they have, the US Navy has not had a single incident in their entire history of operating nuclear reactors. They are also quite cost effective; for example, the cost of the 2 A1B reactors [1] that power a Gerald Ford carrier is about $1 BN. That comes to about $2BN/GW, which is about a fifth of what a civilian reactor costs. The US Navy builds about 1 carrier every 4 years, so that comes to 1 reactor every other year. If the DoE gets the Congressional mandate to procure a few reactors per year, the cost is surely going to come down. Also, these reactors don't need refueling for about 2 decades, while civilian reactors are refueled every 1.5 years.

[1] https://en.wikipedia.org/wiki/A1B_reactor


This is not a very good idea for several reasons. Naval reactors require fuel that is much more enriched than normal reactors use. They also produce significantly less electricity. The Palo Verde facility produces 3GW of electricity and cost $11B in 2019 dollars. Each of the A1B reactors generates 125 MW. The life span of the reactor is not specified, but its predecessor, the A4W, had a 23 year life span. By comparison, new nuclear plants are slated to last 50-80 years.

The net cost per GWh of electricity of the naval reactor is significantly worse than commercial plants. This is to be expected, because naval reactors are built to be compact and withstand the rocking of a ship at sea. Commercial reactors can leverage the efficiency of larger scale, and are built to be much more long lasting.


An A1B generates 125 MW of electricity, but also 260 MW of additional thermal power used to drive the propellers. If you convert the latter to electricity at 45% efficiency (typical for a generation IV nuclear power plant steam turbine), you get 117 MW, for a total of 242 MW. Two reactors could then produce about 0.5 GW. At a $1 BN cost, that's $2 BN / GW.

Palo Verde was brought online more than 30 years ago. If you look at Vogtle 3-4 (to be brought online in the next 2 years... if we are lucky) or Hinkley Point C, you'll see projected costs of respectively $25 BN for 2.5 GW and $32 BN for 3.2 GW. In both cases that comes at $10 BN/ GW. That is 5 times more expensive than the naval reactor.
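
Spelling out that arithmetic (all figures as quoted above):

    # $/GW comparison: A1B naval reactors vs. Vogtle 3-4 and Hinkley Point C.
    a1b_electric_mw = 125
    a1b_propulsion_thermal_mw = 260
    turbine_efficiency = 0.45

    per_reactor_mw = a1b_electric_mw + a1b_propulsion_thermal_mw * turbine_efficiency  # 242 MW
    two_reactors_gw = 2 * per_reactor_mw / 1000   # ~0.48 GW for $1 BN
    naval_bn_per_gw = 1 / two_reactors_gw         # ~$2.1 BN/GW

    vogtle_bn_per_gw = 25 / 2.5                   # $10 BN/GW
    hinkley_bn_per_gw = 32 / 3.2                  # $10 BN/GW
    print(naval_bn_per_gw, vogtle_bn_per_gw, hinkley_bn_per_gw)  # ~5x cheaper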

Now, as you said, the cost of a naval reactor is very likely inflated by the exacting demands of its military usage. It needs to be compact, to work on a rocking ship, presumably it needs to be able to survive a certain amount of abuse that's to be expected if a ship/boat actually participates in combat, and I'm sure there are 100 other things that I'm missing here. All these factors make military devices absurdly expensive compared to the same devices intended for civilian use.

The logical conclusion is that if DoE wants to repurpose naval reactors for civilian use, then it can achieve significant cost savings. What I'm saying is that even not factoring these savings in, you still end up 5 times cheaper than the civilian reactors that are currently being built.

Edit: The lifespan of a Gerald Ford-class carrier is expected to be 50 years. The Nimitz aircraft carrier was launched 49 years ago. They do not replace their reactors. So, a naval reactor is designed to work for at least 50 years.


You also need to build a secondary containment vessel for the reactor, which is a significant expense. Because the cost of this containment is a function of surface area and generating capacity is a function of volume it's better to increase size. You also need to build steam turbines, heat exchangers, transformers, etc. The cost of the reactor is only a portion of the cost of the whole nuclear plant.

> Palo Verde was brought online more than 30 years ago. If you look at Vogtle 3-4 (to be brought online in the next 2 years... if we are lucky) or Hinkley Point C, you'll see projected costs of respectively $25 BN for 2.5 GW and $32 BN for 3.2 GW. In both cases that comes at $10 BN/ GW. That is 5 times more expensive than the naval reactor.

And by comparison you have the Taishan plant built for $7.5B with 3.5 GW generating capacity. If we want to go around cherry-picking examples we can also cherry-pick the cheap plants.

We have already tried using maritime nuclear reactors for grid generation. The first nuclear plants brought online for grid generation were maritime reactors repurposed for grid production. Larger purpose-built reactors won out.


Vogtle and Hinkley aren't cherry-picked expensive plants; they're middle of the road.

VC Summer is expensive, many billions spent and nothing to come of it ever.

Where do your cost numbers for Taishan come from? How do you arrive at costs that are believable from massive Chinese construction, or at least a cost that might be transferable at all to the rest of the world?

The history of nuclear is very clear: costs keep increasing throughout construction, just enough that, taking the sunk cost fallacy into account, it makes sense to soldier on. VC Summer overshot that, and had massive corruption in the auditing of all parts of the project. Somehow Vogtle continues.

We literally do not know how to build nuclear in a cost-effective manner any more. We can't structure contracts in the right way; we can't perform engineering to a high enough degree to produce buildable plans. At Vogtle they literally poured the wrong concrete, and had to go back and get the design recertified with the NRC, because the original design was impossible to build, and on site they just plowed ahead with what they thought they could build. This is the level of incompetence, ball-dropping, and bad contract structure.

Perhaps this sort of thing is fixable, but not on any reasonable timeline. The management is rotten from the top, so there's nobody that we can even order a nuclear reactor from.

Suppose you had $7.5B and wanted 3GW of nuclear at one of the many sites in the US that would welcome nuclear and its jobs. Who do you even bring that money to in order to build it? Rosatom? Are they going to meet NRC standards?


Great; by that same logic, just as you'd use a commercial reactor over a naval reactor, you should also just use a different power source over nuclear.


What other power source generates carbon-free energy without intermittency or geographic dependency?


The construction, fuelling and cleanup of a site is far from carbon zero. There is also a geographic dependency, or should be.

Nuclear power puts out more CO2 than solar or wind according to Nature (hydro isn’t mentioned for some reason).

“carbon emissions ranged from 1.4 grammes of carbon dioxide equivalent per kilowatt-hour (gCO2e/kWh) of electricity produced up to 288 gCO2e/kWh. Sovacool believes the mean of 66 gCO2e/kWh to be a reasonable approximation.”

https://www.nature.com/articles/climate.2008.99


Solar + wind with storage and a grid. The parts are all there, and it's cheaper than nuclear today.


No, the storage part is not there. Hydroelectric storage is expensive, takes a long time to build, and is geographically dependent to boot. Only ~5 minutes of global electricity storage can be provided with batteries using all known lithium deposits. Only 19 minutes worth of storage is available with all the lithium we can mine with today's equipment [1].

This is why plans for a solar and wind grid assume that some silver bullet is going to provide dirt-cheap and nigh-infinitely scalable storage.

1. https://dercuano.github.io/notes/lithium-supplies.html


These are not particularly relevant or helpful comparisons for knowing whether lithium ion is ready to deploy now (it is), or whether storage will be achievable with lithium ion and other chemistries (it will).

This is only looking at currently known reserves, a number which has doubled in only a few years. It also compares it to total energy consumption, a meaningless comparison for the coming decades.

Further, the same industrial capacity for lithium ion batteries also works for sodium chemistries. We have only focused on lithium because the primary applications are in mobile things at the moment: cars and mobile devices, where the weight advantage of lithium is important.

For grid storage, weight and specific energy are not important, and sodium chemistries will be ideal. There are also entire classes of flow chemistries that are in their infancy.

But what is mature and cost effective is lithium ion storage. In the only place where we have open data about the feelings of investors, the PJM and ERCOT interconnection queues, storage is being deployed in GW comparable to new natural gas GW. This number alone, the GW and not the GWh, tells us that investors think this new tech is ready and deployable. And it is falling in cost exponentially. Other battery tech is following and dropping in cost too, but lithium ion is benefitting from having existing markets that can fund massive learning.


> This is only looking at currently known reserves, a number which has doubled in only a few years.

False. It is estimating the total amount of accessible lithium, not just the known reserves.

> For grid storage, weight and specific energy are not important, and sodium chemistries will be ideal. There are also entire classes of flow chemistries that are in their infancy.

Feel free to cite this as an option once sodium batteries actually become available at scale. Until then this amounts to, "hope some future solution solves storage."

> But what is mature and cost effective is lithium ion storage. The only place where we have open data about the feelings of investors, the PJM and ERCOT interconnection queues, storage is being deployed in GW comparable to new natural gas GW.

This is not even remotely true. We don't even have 1 GWh of battery storage [1]. Sure, we're not deploying "new" natural gas because energy demand is decreasing and we already have existing natural gas plants. But the point is that battery deployment is nowhere near the scale required.

> And it is falling in cost exponentially. Other battery tech is following and dropping in cost too, but lithium ion is benefitting from having existing markets that can fund massive learning.

Cost is a function of supply and demand. If you actually try to use lithium ion batteries for grid storage, this will create massive demand and thus increase cost. Again, there is insufficient accessible lithium to provide even half an hour of energy storage.

1. http://css.umich.edu/factsheets/us-grid-energy-storage-facts...


False on all counts.

The GitHub estimate is only using known resources and reserves, a number which goes up every year as we discover more. It is not an estimate of total accessible lithium. Lithium resources, the type where we get most of our lithium, increased from 40M tons to 80M tons from 2016 to 2020 estimates, and will continue to increase:

https://en.wikipedia.org/wiki/Lithium#Reserves

> This is not even remotely true. We don't even have 1 GWh of battery storage [1].

I don't know where that number comes from on that page, but it's wrong. More than 2GWh were connected to the US grid in Q4 2020 alone:

https://pvbuzz.com/woodmac-new-battery-storage-systems-q4-20...

And even if your number were right, it doesn't address the core point that battery storage deployment is growing at an absolutely incredible pace. In cost-competitive grids, it's replacing natural gas:

https://rmi.org/clean-energy-is-canceling-gas-plants/

> Cost is a function of supply and demand

This is just bad economics. These all affect each other. As production costs fall for lithium ion batteries, demand is growing, as shown by that RMI document. The cost of batteries is not falling because demand is falling; the cost of a lithium ion battery is primarily determined by manufacturing costs at the moment. The input cost of lithium is not going up because there's not enough lithium. And if the supply of lithium does get constrained in the future, then there are alternative chemistries that are not supply limited.


> The GitHub estimate is only using known resources and reserves, a number which goes up every year as we discover more. It is not an estimate of total accessible lithium.

Yes, it is. 5 minutes is the amount provided by known reserves. 19 minutes is what can be provided with all accessible lithium. This is known reserves, plus the amount we expect to find later.

> I don't know where that number comes from on that page, but it's wrong. More than 2GWh were connected to the US grid in Q4 2020 alone:

Which amounts to a whopping... 14 seconds worth of energy storage.

> And even if your number were right, it doesn't address the core point that battery storage deployment is growing at an absolutely incredible pace. In cost-competitive grids, it's replacing natural gas:

17 GW of natural gas was constructed in Texas alone. In fact, not even all of Texas, just the part serviced by ERCOT. Your claim "storage is being deployed in GW comparable to new natural gas GW" is not even remotely true, and your own sources prove it.

> This is just bad economics. These all affect each other. As production costs fall for lithium ion batteries, demand is growing, as shown by that RMI document. The cost of batteries is not falling because the demand is falling, the cost of lithium ion battery is primarily determined by manufacturing costs at the moment. The input costs of lithium is not going up because there's not enough lithium. And if supply of lithium does get constrained in the future, then there are alternative chemistries that are not supply limited.

The assumption that the price of lithium won't go up if we try to use it for grid storage is bad economics. Let me put the staggering mismatch between battery supply and storage demand in perspective:

* The US alone uses 500 GWh of electricity each hour. The world uses 2.5 TWh of electricity every hour.

* The entire world produces ~300 GWh of lithium ion batteries annually [1].

If we actually tried to provision one hour's worth of electricity storage, the price of batteries would skyrocket, because there isn't enough supply to meet demand. We couldn't provision one hour's worth of storage even if we bought every single lithium ion battery produced anywhere in the world for a whole year.
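
A quick check of that mismatch, using the figures above:

    # Annual world battery production vs. one hour of electricity demand.
    us_hourly_gwh = 500        # US consumption per hour
    world_hourly_gwh = 2500    # world consumption per hour
    annual_battery_gwh = 300   # world lithium-ion production per year [1]

    print(annual_battery_gwh / us_hourly_gwh)     # 0.6 -> under 1 hour of US demand
    print(annual_battery_gwh / world_hourly_gwh)  # 0.12 hours -> ~7 minutes of world demand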

And this issue is going to become even worse as we switch from fossil fuels to electricity for heating, transportation, industrial chemical production, and so forth.

1. https://cleantechnica.com/2019/04/14/global-lithium-ion-batt...


Nuclear power plants are thermal power plants and that means they need cooling. The power density of nuclear power plants is so high that most of them can't be placed near rivers because rivers have a variable flow rate.

If the flow rate is too low you risk killing aquatic life in the river ecosystem, so instead the nuclear plant is turned off. You can avoid this by placing the nuclear power plant near the ocean. That's what the Japanese did with the Fukushima power plant, even though it's in a tsunami-prone area.


What gives you the idea that nuclear power plants can't be placed near rivers? Almost all that aren't on the coast are near rivers.

And they don't need to use potable water. The Palo Verde plant uses wastewater.

Because humans need water to survive, all population centers are built with access to water. Thus, cooling is available pretty much anywhere one would want to build a nuclear plant.


So you can build nuclear in a tsunami zone? in a seismic zone? in an area without cooling?

So you can mine and enrich uranium without carbon?

Nuclear does none of the things you fantasize it to do really.


> So you can build nuclear in a tsunami zone? in a seismic zone? in an area without cooling?

Yes, you harden the structure against tsunamis and earthquakes. That's part of why nuclear plants are so expensive.

Atmospheric cooling can indeed be done anywhere; it's just typically easier and more efficient to use water cooling. And since humans need water to survive, and population centers are thus built near sources of water, water cooling is almost always an option. Also, nuclear plants can be cooled with seawater.

This is in stark contrast to hydroelectricity which needs both a river and a valley to be viable. Geothermal power needs magma near enough to the surface to heat water into steam.

> So you can mine and enrich uranium without carbon?

I don't see why not. Use electricity produced by nuclear plants to drive centrifuges. Also use said electricity to power mining equipment.

And you didn't answer my question: What other carbon-free sources provide energy 24/7, besides ones that need very specific geography like hydroelectricity and geothermal power?


> What other carbon-free sources provide energy 24/7

Wind + solar + biofuels + waste + batteries.

Batteries are primarily for peak usage, and it could be car batteries (V2G).

Biofuels are primarily for seasonal usage (e.g. winter).

Nuclear is too expensive if you take into account the risks, which are currently externalized.


There is not enough accessible lithium to provide nearly enough storage [1]. 5 minute with known deposits, and 19 minutes estimated to be accessible with current mining techniques.

Biofuels are low energy density, and don't provide nearly enough power. Not to mention they aren't carbon-free. Burning biofuels releases carbon into the atmosphere that would otherwise be trapped.

1. https://dercuano.github.io/notes/lithium-supplies.html


Your source seems to be low by about 2 orders of magnitude on the energy density of lithium. They assume ~100% of a battery is made of lithium. There are only 200-300g of lithium metal per kwh in a lithium ion battery[0,1], or 12-18MJ per kg.

[0] http://www.meridian-int-res.com/Projects/How_Much_Lithium_Pe...

[1] https://www.researchgate.net/deref/http%3A%2F%2Fgreet.es.anl... (page 10)


Battery: it doesn't have to be lithium (even though currently all planned ones use lithium-ion). Sodium-sulphur would be an option as well.

Biofuels are low energy density: this isn't about aviation or transportation, so that's not a concern at all.

Biofuels don't provide enough power: citation needed (are you moving the goalpost again?) - note that most energy will come from wind and the sun, so there is relatively little need for biofuels.

Burning biofuels releases carbon into the atmosphere that would otherwise be trapped: No, it would be released anyway (well, unless you bury it really deep).

The problem with nuclear power is cost, due to high risks. And even then, the insurance (which is really expensive for nuclear plants) doesn't cover all the risks. The biggest risk is externalized: if, e.g., a power plant in Switzerland blew up, almost the whole country would become uninhabitable. And there is no insurance company paying for that.


> Battery: it doesn't have to be lithium (even thought, currently all planned ones use lithium-ion). Sodium-sulphur would be an option as well.

Right: we assume some other form of energy that has yet to be commercialized will provide cheap storage. Get back to me when this solution actually demonstrates feasibility.

> Biofuels are low energy density: this isn't about aviation or transportation, so that's not a concern at all. Biofuels don't provide enough power: citation needed (are you moving the goalpost again?) - note that most energy will come from wind and the sun, so there is relatively little need for biofuels.

Biomass provides 1MWh per ton of dry wood [1]. On average, forests have 38 tons per acre [2]. The US consumes 11.5TWh of electricity daily, so this works out to roughly 303,000 acres per day. The US has ~750 million acres of forest. So we have about 2,480 days worth of biomass energy. Or about 6-7 years.
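
A back-of-envelope check of that arithmetic, using the stated inputs:

    # Days of US electricity obtainable by burning the entire US forest stock.
    mwh_per_acre = 1 * 38          # 1 MWh/ton of dry wood, 38 tons/acre
    us_daily_mwh = 11.5e6          # 11.5 TWh/day
    forest_acres = 750e6

    acres_per_day = us_daily_mwh / mwh_per_acre   # ~303,000 acres/day
    days = forest_acres / acres_per_day           # ~2,480 days
    print(acres_per_day, days, days / 365)        # roughly 6-7 years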

Sure, forests grow, but they take longer than 6 years to grow. Also the figure of energy was in raw BTUs, so the actual electricity generated is only about ~50% of that.

> Burning biofuels releases carbon into the atmosphere that would otherwise be trapped: No, it would be released anyway (well, unless if you burry it really deep).

It would be trapped in the form of trees and vegetation. If burning biofuels doesn't release carbon into the atmosphere why are people concerned about deforestation?

> The problem with nuclear power is cost, due to high risks. And even then, the insurance (which is really expensive for nuclear plants) doesn't cover all the risks. The biggest risk is externalized: if e.g. a power plant in Switzerland would blow up, almost the whole country would be become un-inhabitable. And there is no insurance company paying for that.

This is not even remotely true. The plants in Switzerland have secondary containment. Even Chernobyl, which had no secondary containment, created an exclusion zone of 40x40km. "Almost the whole country would become uninhabitable" is laughable. It really just demonstrates that aversion to nuclear is not based on rational thinking.

1. https://www.nacdnet.org/wp-content/uploads/2016/06/AppendixA...

2. https://www.nrs.fs.fed.us/fia/maps/nfr/descr/xlivebiohw.asp


> we assume some other form of energy that has yet to be commercialized

Both sodium-sulphur and lithium-ion are commercialized and widely used already (currently pumped storage is a lot more widely used, but it's not possible everywhere). [1]

Biofuels: as I wrote, it is only needed to fill the gaps [3], e.g. in winter, not to power 100%. It is already widely used, for example in Europe [2]. And it's not wood (CO2 is trapped in wood for some time, but not in vegetation). This doesn't displace forests.

> The plants in Switzerland have secondary containment.

So did Fukushima. There were many problems with nuclear plants in Switzerland, e.g. [4]. There is no 100% safety. In Switzerland, most people live in cities... sure, you could still live in the mountains, right.

> It really just demonstrates that aversion to nuclear is not based on rational thinking.

Actually, it is based on rational thinking. As the catastrophic events in Fukushima and Chernobyl, and the near catastrophes elsewhere, have shown, nuclear power is dangerous. The population has to bear that risk. The companies would just go bankrupt. The insurance would only cover a small part of the costs.

1. https://en.wikipedia.org/wiki/Battery_storage_power_station

2. https://www.iea.org/data-and-statistics/?country=UK&fuel=Ene...

3. https://www.iea.org/articles/how-biogas-can-support-intermit...

4. https://en.wikipedia.org/wiki/M%C3%BChleberg_Nuclear_Power_P...


Biomass generates ~10% of the electricity in one European country. Biomass is useful in countries like Brazil where extensive farmland means biodiesel is a viable automobile fuel. But for grid generation, the watts per acre are insufficient.

Globally, biomass is used for 0.7% of total energy demand [1]. Almost all of it is for fuel; it doesn't even make it onto the chart for electricity generation.

> So did Fukushima. There were many problems with nuclear plants in Switzerland, e.g. [4]. There is no 100% safety. In Switzerland, most people live in cities... sure, you could still live in the mountains, right.

And the secondary containment at Fukushima meant that most of the radiation was contained. Fukushima is already being resettled. You harbor this skewed perception where nuclear catastrophes render massive swathes of the earth uninhabitable: "almost the whole country [Switzerland] would become uninhabitable". No, it would not. Even an uncontained meltdown resulted in a 40x40km exclusion zone, and a contained one is much less drastic. Three Mile Island didn't even result in any permanent exclusion zone.


Easy, there’s many:

1) Solar + transmission lines. It’s always sunny somewhere

2) wind + transmission lines, always windy somewhere

3) use some energy produced by hydro to manufacture some concrete river beds and reservoirs

4) use some of the energy produced by 1-3 to dig real deep for geothermal everywhere

5) Ocean thermal energy conversion

Don’t get me wrong, I’m not anti-nuclear; I’m a huge fan of the big reactor in the sky. It produces all we need with perfect reliability, so there’s no reason to do something as dumb as trying to build terrestrial reactors.


> 1) Solar + transmission lines. It’s always sunny somewhere

> 2) wind + transmission lines, always windy somewhere

These don't produce power consistently. That's why one would need to build redundancy. Also it's not always sunny somewhere, unless you build transcontinental transmission lines. And even then, there's a period of time where most sunlight is hitting the pacific ocean.

> 4) use some of the energy produced by 1-3 to dig real deep for geothermal everywhere

> 5) Ocean thermal energy conversion

Both of these are geographically dependent. Might as well have just said hydroelectricity.


> Solar + transmission lines. It’s always sunny somewhere

Unless it is sunny 24/7 in a given country or even group of countries (e.g. the EU), this is not viable.

Countries will not give up their energy security and put themselves at the mercy of the other side of the planet (where it is sunny), plus whoever might want to damage those transmission lines and cripple a country. It is already an issue with oil and gas.


> So you can build nuclear in a tsunami zone? in a seismic zone?

Why not? It worked at Onagawa: https://www.sciencedirect.com/science/article/pii/S259012302...


It’s also failed. Why not just avoid it? That’s the approach taken according to your link.

The solution to the problems faced at Onagawa was to decommission the plant, and this process would take longer than the duration for which the plant actually ran.

“the 2011 events strongly influenced the decision to decommission the Onagawa Unit 1 early, brought to attention the length of the decommissioning process (which will surpass the operation stage)”


The decision to decommission the plant is political, not technical.

Onagawa was closer to the epicenter than Fukushima and suffered no ill effects. It can be done; the main difference between Onagawa and Fukushima is that they were owned by different companies, and one company took safety seriously.

For more reading, see: https://thebulletin.org/2014/03/onagawa-the-japanese-nuclear...

and IAEA report: https://www.iaea.org/newscenter/focus/actionplan/reports/ona... which the article is based on.


The US stockpile of HEU would be depleted a lot faster this way, but enrichment could start again. I don't see major downsides to this proposal. Thanks for providing a specific and plausible idea!


That's a great idea. I'm trying and failing to find gotchas.


There is a potential gotcha: proliferation potential. The naval reactors use highly enriched uranium; if it falls in the wrong hands, you can end up with someone being able to build a bomb. That's why I said such a program needs to be run by the Department of Energy, the same department that has to maintain the nukes. I don't have a personal objection to this, but a lot of people would be unhappy with an essentially military program to be established for a problem that is not military in nature.


Hopefully the DOE will continue to be led by a person who didn't previously campaign on disbanding it!


> I think that the US Nuclear Regulatory Commission is world-leading.

Comparisons are irrelevant - Japan's regulations weren't the worst in the world when Fukushima happened => something similar could happen anywhere else (a lot of factors influence a lot of decisions/policies - the past years demonstrated that even the US nowadays isn't the most stable country).

> They identify problems proactively... even for plants built 50 years ago.

Maybe they do "now" (I'm not a "pro" in this area, therefore I can neither confirm nor deny that), but in any case there are never guarantees about the future. Additionally, that "even for" sounds ugly: that MUST happen as long as such a plant exists.

> I live near an operating nuclear reactor and I prefer it over any form of fossil plant.

It's well-known that fossil plants are the absolute worst => such a change isn't a great improvement from my point of view.

> Power reactors operating in the United States are reliable, safe...

So far, and the terms are flexible - incidents did happen with civilian & military reactors; Wikipedia has a lot of nice-to-read articles with timelines, analysis, etc.

> ...and have extremely low life cycle emissions of greenhouse gases

Correct - basically almost 0 (the truck that delivers the uranium slabs probably does generate some gases, maybe as well the mine & plant that create them, but not a lot compared to gas & oil & coal). But then... that's it? No other remark about what is generated, where to put it, and how to take care of it for the next 10000 years?

I'm definitely not/never going to approve any measure to deregulate a sector which has a near-infinite potential impact when something breaks and/or something is not properly taken care of.


Sorry but I don't trust any nuclear plant in the US to put safety over profits over the long term, especially after all the illogical deregulation done the last 4 years.

There are also already some questions on safety in regards to current plants. They're constantly loosening tolerances and changing the way tests are performed to make otherwise-failed tests fall within acceptable limits. Plus the plants are already operating at 2x their engineered lifespan. Yeah, no thank you.


How much are loosening regulations a concern for nuclear in the US?

Obviously utilities in general haven't fared well as of late (Texas), nor nuclear in the past (e.g. Rocky Flats). But as a foreigner, my impression is that, as far as nuclear power is concerned, the DOE has been doing an OK job of late. Could you share the specifics of the tests you are referring to?


The first link makes me absolutely furious. There's too much to quote from here, but this succinct excerpt touches on the water test. It goes into more detail in another part of the article. The post has numerous examples of very concerning issues.

> When valves leaked, more leakage was allowed — up to 20 times the original limit. When rampant cracking caused radioactive leaks from steam generator tubing, an easier test of the tubes was devised, so plants could meet standards.

https://www.nbcnews.com/id/wbna43455859

> The proposal comes as most of the nation’s nuclear power plants, which were designed and built in the 1960s or 1970s, are reaching the end of their original 40- to 50-year operating licenses. Many plant operators have sought licenses to extend the operating life of their plants past the original deadlines, even as experts have warned that aging plants come with heightened concerns about safety.

https://www.nytimes.com/2019/07/17/climate/nrc-nuclear-inspe...

> The nuclear industry is also pushing the NRC to cut down on safety inspections and rely instead on plants to police themselves. The NRC “is listening” to this advice, the Associated Press reported last month. “Annie Caputo, a former nuclear-energy lobbyist now serving as one of four board members appointed or reappointed by President Donald Trump, told an industry meeting this week that she was ‘open to self-assessments’ by nuclear plant operators, who are proposing that self-reporting by operators take the place of some NRC inspections.”

https://newrepublic.com/article/153465/its-not-just-pork-tru...


Thank you for the detailed and insightful reply!


The Union of Concerned Scientists has posted a great blog series "Role of Regulation in Nuclear Plant Safety." It's written by Dave Lochbaum, a degreed nuclear engineer who worked at American nuclear plants for 17 years. I think it's a better overview of NRC action and plant safety than any one incident. I've collected all the links here.

Series introduction: https://allthingsnuclear.org/dlochbaum/role-of-nuclear-regul...

Flooding at Nine Mile Point: Regulation and Nuclear Power Safety #1 https://allthingsnuclear.org/dlochbaum/flooding-at-nine-mile...

Three Mile Island Intruder: #2 https://allthingsnuclear.org/dlochbaum/three-mile-island-int...

Empty Pipe Dreams at Palo Verde: #3 https://allthingsnuclear.org/dlochbaum/empty-pipe-dreams-at-...

Yankee Rowe and Reactor Vessel Safety: #4 https://allthingsnuclear.org/dlochbaum/yankee-rowe-and-react...

Flooding at a Florida Nuclear Plant: #5 https://allthingsnuclear.org/dlochbaum/flooding-at-a-florida...

Containment Design Flaw at DC Cook Nuclear Plant: #6 https://allthingsnuclear.org/dlochbaum/containment-flaw-at-d...

Pipe Rupture at Surry Nuclear Plant Kills Four Workers: #7 https://allthingsnuclear.org/dlochbaum/pipe-rupture-at-surry

Anticipated Transient Without Scram: #8 https://allthingsnuclear.org/dlochbaum/anticipated-transient...

Naughty and Nice Nuclear Nappers: #9 https://allthingsnuclear.org/dlochbaum/naughty-and-nice-nucl...

Breaking Containment at Crystal River 3: #10 https://allthingsnuclear.org/dlochbaum/breaking-containment-...

Fatal Accident at Arkansas Nuclear One: #11 https://allthingsnuclear.org/dlochbaum/fatal-accident-at-ark...


I'm fine with rational regulation and good safety inspections. Here's an example of a regulatory framework that needed reform:

Several years ago I got to attend a meeting between a bunch of people from advanced nuclear startups, and a former head of the NRC. The startup people said their biggest problem was that the NRC required near-complete blueprints before they would even look at the design. Then they would give a flat yes or no. If yes then you still had just a paper reactor, and if no then you were out of business.

Getting to that point required several hundred million dollars. That's a pretty difficult environment for investors. They said just a more phased process would help a lot. The NRC person was unsympathetic, said it wasn't the NRC's job to help develop new nuclear technology, and was uninterested in climate change.

Fortunately Congress has gotten involved since then and things seem to be improving.


I live near one too. I'm not sure whether I should be impressed that disaster was averted in 2002 or concerned about how close things got:

https://en.wikipedia.org/wiki/Davis%E2%80%93Besse_Nuclear_Po...

It would be nice to hear from someone who is more knowledgeable on the subject than myself.


If an accident almost happened in 2002, then the probability of one only increases as the plant ages.

I personally wouldn’t feel safe, in the long run, buying a house or living near an aging nuclear power plant.


> I am open to specific proposals for reducing regulations in the nuclear sector if there are regulations that impose additional process overhead, don't actually serve a purpose, and survive only from inertia. I wouldn't be surprised to hear that there are some of these. But I've been discussing nuclear power for 20+ years, starting back on Usenet, and specific proposals are much less common than generic "get rid of red tape" bluster.

Some specific proposals would be to put a minimum fuel-mass threshold on the existing nuclear power plant regulations, and to create a new license class with loosened containment requirements for reactor designs that are passively safe by nature. Existing regulations are written around reactor designs that hold thousands or tens of thousands of kilograms of nuclear fuel that must be moderated and kept in check. That is clearly not a viable option for nuclear long term against natural gas and renewables, because the cost of manpower and materials scales poorly. The regulatory overhead, transportation, and storage costs for that much radioactive fuel are prohibitive on their own, so we really need to focus on making progress on low-fuel-mass fission reactors, which are impossible under the current regulatory regime.

Designs like the nuclear lightbulb - studied and tested by UTC under a NASA Mars program contract in the late 60s/early 70s - take tens of kilograms of fuel and heat and compress it until it reaches criticality at hundreds of atmospheres and thousands of degrees. Any failure in the system causes a loss of pressure, and the core returns to subcritical; even in a worst-case scenario like a conventional bomb exploding in the reactor chamber, it would be a minor incident on the level of Three Mile Island. Many tweaks have been theorized, though not tested, that would make the reactor even safer. However, any design like this requires regular maintenance of the reactor and completely different levels of containment that are either prohibitively expensive or impossible right now.


There was a long list of engineering failures at Fukushima. The idea with airplanes is not "design this component so it cannot fail" but "design the system so it can tolerate component failure". Fukushima had a list of failures it could not tolerate.

You mentioned one, the vulnerability of the backup power to the seawall being overtopped. The generators could have been put on a raised platform. There were others:

1. the hydrogen was vented into an enclosed space

2. no way to add water to the cooling system with a gravity-fed device

3. critical machinery was located in the reactor building

4. no way to bring in electric power from elsewhere


The problem with fault tolerance is that it allows the normalization of deviance, since something is always failing, but it's okay because there is always a backup (until there isn't).

The bigger issue with nuclear power is that we can trust humans to keep up the level of effort to keep it working without a fault for a few decades, maybe centuries if we're lucky, but there's no way you can operate a plant for a millennium without a catastrophic accident, and accidents take much more than a thousand years to clean up. So it's all totally imbalanced unless you just assume we'll have fusion in fifty years, so nothing matters. But I don't think we can assume that anymore.


> The problem with fault tolerance is that it allows

We do that with airplanes. Think about it - you're flying at 30,000 feet, 500 mph, 50 degrees below zero, no land in sight over the North Atlantic, in a tin balloon loaded to the gills with fuel and two flaming engines.

And yet you're perfectly safe.

How did that come about? Tolerance of failure.


The machines are designed to tolerate fault, but the FAA is designed to not let you take off unless you do a checklist that proves all the engines are working, not just the one you need for a crippled landing. So the system as a whole requires that the FAA not give in to the pressure from industry to sign off on less fault tolerance. It's a difficult issue for systemantics.


Um, there isn't a backup if the backup isn't operable before you take off.


That's their point. If the FAA didn't mandate it you can be sure budget airlines would be taking off and doing routes with one engine broken :-)


With wing-mounted engines on two-engine airliners, there is physically no way to take off on one from anywhere other than a dry lake bed. The thrust from the operating engine will introduce more yaw than the rudder, nosewheel steering, and wheel brakes can counteract.

Even tail-mounted engines (with a shorter coupling arm to the centerline) will typically have a Vmcg (roughly, the speed below which directional control on the ground is lost with one engine inoperative) that will preclude takeoff on one (physically, not by regulation) from available runways.


That's not a fault of the airliner design.

> you can be sure

Not likely. Who you gonna get to fly it? Who you gonna get to pay to fly in it?


> Who you gonna get to fly it? Who you gonna get to pay to fly in it?

a. people in less developed countries

b. people in said countries with less money to spend than others

this has happened before (South America in the 70s, etc.), which is why the aviation industry has the regulatory system it has today


Really? You know of examples of passenger planes taking off with only one engine turning? Or of any twin engine airplane doing this deliberately (other than a test flight or desperate emergency, like the volcano is gonna blow any moment).


I was talking about the situation of planes taking off in barely flyable/safe conditions that would not be allowed under modern FAA regulations, which I think is the larger point being argued, not the single-engine or propeller case.

Sorry if we ended up talking past each other.


Airplanes are highly standardized. Dozens or hundreds of essentially the same model are built. A few of them are built specifically to be tested in various ways - even crashed and burned - to make sure they behave reasonably in such situations.

Civilian nuclear reactors are mostly built by the handful, rarely by the dozen. This makes learning from past mistakes and taking preventative measures across the fleet hard.

I think France has partly solved this exactly by having a small number of standardized reactor designs, and a number of nuclear plants which can be run in a reasonably uniform way.


> Airplanes are highly standardized.

Not really. Every one coming off the line is different. They are constantly being improved. Every part on the airplane is carefully tracked, from manufacturing lot to which airplane each is installed on. Everything is designed by engineers, not custom made on the spot by a mechanic.


Yes, French nuclear power plants were standardized and built in batches ("séries", in French). This does not magically create conditions for a perfect design and building process. See for example https://theecologist.org/2016/sep/29/sizewell-b-and-27-other...

Planes aren't perfectly safe (my brother was killed when SR111 crashed in 1998 after an in-flight fire).

Anyone who prefers not to be exposed to a plane crash can abstain from traveling by plane. Anyone who prefers not to be exposed to nuclear reactor boo-boos and hot waste has no real way to do so.


The failure points aren't always the aspects engineered by anyone related to airplane manufacture. Swissair 111 may have come down due to a fault in the wiring for its add-on entertainment system.

https://www.swissinfo.ch/eng/electrical-fire-downed-swissair...


Not just tolerance of failure. Also strict incident investigations and reporting requirements, including for "near misses"; also a strong safety culture made possible by strong unions and strict seniority-based promotion rules; also...


> a strong safety culture made possible by strong unions and strict seniority-based promotion rules

??


Pilots can't get ahead by cutting corners, and (to a somewhat lesser extent) it's hard for maintenance people to be pressurised to sign off on unsafe work.


Strict seniority rules mean no incentive for doing quality work above the minimum.


Indeed, but also no incentive for bypassing safety checks that are redundant most of the time (which is how you get the normalisation of deviance that eventually leads to catastrophe). Sometimes that's the right tradeoff.


Could nuclear power plant operators be incentivised by using "proactive risk detection" metrics or something along those lines?

I suppose there will then be incentives to exaggerate or "invent" new risks for career advancement, but is that not preferable to the alternative?


All metrics are gameable. I think I once saw a study that suggested that every metric applied to professionals ended up having a net negative effect on actual productivity - by and large people understand their job and want to do it well, and while a metric may incentivise the few that don't, it also ends up distracting the majority.



Look at how few of them there are, despite millions and millions of flights.

It's an absolutely incredibly good safety record. You're much safer flying across the Atlantic than driving to the airport.


Hundreds of millions in the U.S. alone, over that 18-year span.


I think we need to look at what France is doing. They seem to have a good safety culture as a society: roughly 70% of their electricity is nuclear, has been for decades, and they've never had a serious accident. They have also never had a serious high-speed train accident. They seem to be able to build these things considerably cheaper than we can in Britain, and way cheaper than you can in America. They are a first-world country with living conditions equivalent to the UK's, so unlike comparisons with China, where many people blame poor working conditions and under-regulation for the cheapness, you can't make the same argument against France. By the way, I don't know if that's true about regulations in China (who does?), but it is an argument many people make that is much more easily refuted by comparing with France instead.


Complex systems should be assumed to run in a partially broken state. Accidents are what happens when things break faster than failsafes and operators can react.

That's not to say I like nuclear power - IMHO the opportunity cost is too high. I could build, operate, and decommission a renewable solar or wind plant in the time it takes to plan a new nuclear plant.


Part of the reason some fault tolerance measures were neglected was that discussing backup plans was seen as a sign of weakness and was often leveraged by the opposition.

"You sound like you're looking forward to some disaster coming with those plans" worked in Japan in those times. It still does, to some extent.


I'm going to need some evidence for this claim, because it runs quite counter to the timeline I'm familiar with.

Opposition to nuclear power on safety grounds did not start until well after construction had begun on the US's reactors. And for nearly all US reactors, the utilities had already realized that they had over-ordered nuclear reactors in the 1970s, and that there were far too many construction delays and cost overruns for nuclear to be cost effective.

This is detailed in a 1985 Forbes cover article, Nuclear Foibles, which is not anti-nuclear, but is withering about the mismanagement of nuclear in the US. Here's the only reprint I have found, which has a weird rant about Gore at the top that can be ignored:

http://blowhardwindbag.blogspot.com/2011/04/forbes-article-r...

The idea that designs from the early 1970s refused to plan for failure because of some theoretical opposition, when there was basically no opposition to our greatest period of building nuclear reactors, doesn't make much sense to me.


Sorry, I was trying to discuss the Japanese political climate, but my writing wasn't the best. As for evidence, it's hard to find a well-compiled list, but Fukushima did have a number of safety issues left unaddressed for reasons other than budget.

The off-site disaster control center built 5 km (3 mi) from the site, effectively on-site; all backup generators placed at basement levels; and the recently discovered issue of emergency vent lines terminating inside the containment building all come to mind.


Not sure about that interpretation - usually "fault tolerance" just means "additional costs"…


> The idea with airplanes is not "design this component so it cannot fail" but "design the system so it can tolerate component failure".

That's not true. Yes, a lot of systems on airplanes are redundant, but there are also plenty of "you die if this breaks" parts, so we build them N times stronger than any load we can imagine ever happening... and teach pilots not to do things that would make failure more likely. On a helicopter there is a single Jesus nut; if it breaks, the rotor is gone.

In rock climbing as well, there is redundancy where there can be, but some things are simply built strong to the point where, under most foreseeable conditions, the component will not break. (Twin and half ropes aside, the most common dynamic ropes for lead climbing, the belay device, belay loop, belay carabiner, and harness are all built for the worst case without redundancy.)


> there are plenty of you die if this breaks so we build it N times stronger than we can imagine it every happening

That's simply not true. Every component is redundant. Nothing is built "N" times stronger. The safety factor is 50% stronger than the maximum anticipated load.

(I worked for 3 years at Boeing designing flight critical systems for the 757.)
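To make the 150% figure concrete, here's a minimal sketch with a purely hypothetical limit load; the 1.5 factor for transport-category aircraft comes from 14 CFR 25.303:

    # Limit load: maximum load expected in service.
    # Ultimate load: what the structure must carry without failure.
    limit_load_n = 100_000.0                  # hypothetical limit load, newtons
    factor_of_safety = 1.5                    # 14 CFR 25.303, transport category
    ultimate_load_n = factor_of_safety * limit_load_n
    print(f"Design ultimate load: {ultimate_load_n:,.0f} N")  # 150,000 N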

> are all built for worst case without redundancy

Why I'm not going rock climbing.


Is the Jesus nut redundant? Is the jackscrew nut for the elevator redundant? (One famously stripped and caused 30 minutes of inverted flight as the crew tried to save the airplane, which eventually crashed into the ocean.) They improved the design after that, but it's still one mechanism and one screw. There are simply no completely reliable planes or helicopters without some form of single-point reliability being required.


> is the jackscrew nut for the elevator redundant?

Yes. (It's for the stabilizer, not the elevator.) First off, the jackscrew is hollow and has a rod running through the center to keep it together if it cracks through. Secondly, the nut rides on steel balls in grooves. If the nut cracks and all the balls fall out, there are solid ice scrapers attached to the nut at each end that fit in the grooves, but don't contact them under normal operation. The ice scrapers peel any ice off the grooves so it doesn't jam the nut. But the scrapers are also strong enough to hold the nut in place if the balls fall out.

This is on the 757. I don't know the setup on the McDonnell-Douglas bird that crashed due to nut failure, except it's a much older design. I don't know if it had the ice scrapers on it, for example.

BTW, the jackscrew is made by Saginaw Gear. It's made from the finest steel forging money can buy, and Saginaw has been making them for a long time and knows what they're doing.

After the first trim gearbox assembly arrived, Boeing's test group had the job of applying the ultimate load, 150%, to it to see if it would buckle, crack, or bend. The test guys told me they were going to bust it. They set up a big old steel I-beam pinned at one end and my poor little jackscrew gearbox pinned at the other end. A hydraulic ram was applied to the I-beam, and the test guy cranked up the pressure.

The I-beam bent into a bow.

HAHAHAHAHAHAHAHAHHAHAHAAA I love Saginaw Gear.

> there are simply no completely reliable planes and helicopters without some form of single point reliability being required.

Helicopters, you're right. They won't survive losing a blade. Planes, you're incorrect.


P.S. My very first assignment at Boeing was to determine the size of that jackscrew needed to carry the load. I panicked, and went to my lead engineer. He laughed, and said "you know how to do column buckling calculations, right?" I said yes, and he said go to it.

After 3 years of working on the gearbox I knew everything there was to know about it, including all the failure modes anyone could think of. I was also fortunate to have a couple of Boeing's best engineers mentoring me.

It's redundant.
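For the curious, the column-buckling check mentioned above is the classic Euler formula; a minimal sketch with entirely hypothetical dimensions (the real jackscrew numbers aren't given here):

    import math

    # Euler critical load for a pin-ended column: P_cr = pi^2 * E * I / L^2
    E = 200e9                   # Young's modulus of steel, Pa
    d = 0.05                    # hypothetical shaft diameter, m
    I = math.pi * d**4 / 64     # second moment of area, solid round section
    L = 1.5                     # hypothetical unsupported length, m
    P_cr = math.pi**2 * E * I / L**2
    print(f"Critical buckling load ~ {P_cr / 1e3:.0f} kN")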


Is the main spar counted as a single component?


The main spar structure is redundant.


It's tolerant of random failure of individual components, yes, but the entire spar could fail under an overload condition. For this failure mode, the only way to ensure a suitably low failure rate is by setting an appropriate safety factor.


> but the entire spar could fail under an overload condition.

Each component individually is designed at 150% of the maximum load ever expected.

The spar has redundant components. Any part of the spar can crack all the way through, and it will still fly safely.


Redundancy protects against some failure modes (e.g. unrevealed fatigue cracking) but not overload, which is a common-mode failure that doesn't care about redundancy if the load is high enough. It becomes a matter of "probability of exceedance".

Electrical/mechanical systems are different and can usually be separated/segregated etc, but there is only one structure.
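A sketch of what "probability of exceedance" means in practice, assuming a purely hypothetical Gaussian load distribution in place of real gust-load statistics:

    from statistics import NormalDist

    limit_load = 1.0                         # normalized limit load
    ultimate_load = 1.5 * limit_load         # the 150% design strength
    loads = NormalDist(mu=0.5, sigma=0.2)    # hypothetical in-service loads
    p_exceed = 1 - loads.cdf(ultimate_load)  # chance one load exceeds the structure
    print(f"Probability of exceedance per load: {p_exceed:.1e}")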


There was a famous crash where the pilot flew through some wake turbulence and caused the tail to fall off through improper rudder inputs. At a certain point there is only one of something.


The rudder structure is redundant as well. That particular accident was caused by unexpectedly high loads on the rudder, not a lack of redundancy.


And it seems likely that with enough operating plants, there will always be engineering failures. Aeroplanes sometimes fail catastrophically too, of course.


Do they? I can't really recall an instance of catastrophic airplane failure over the last decade outside of 737 MAX certification / regulatory capture issues

I also think the number of airplanes in existence is higher than the number of nuclear reactors we'd need for nuclear to be a strong power source, and I also suspect that airplanes face slightly more volatile conditions.


> outside of 737 MAX certification / regulatory capture issues

Well there's one example! Why would you discount it?


It's a key example, and is the same failure mode nuclear power has.

Nuclear power could be engineered to be at least as safe as (most) commercial flight.

But it won't be - and this is absolutely predictable. Because of politics and money.

There is no answer to this, except to fix politics and money and make them as safe as commercial flight.

That's a whole different scale of problem to fixing climate change.

IMO this isn't a utopian fantasy, it's absolutely critical for species survival. But it doesn't look as if we're going to be starting the process any time soon.

Exporting the same problems to Mars or upload space or wherever won't solve them either.


Right, fair question. I read "engineering failures" above, so I want to highlight that this isn't so much an engineering failure as it is a capitalistic failure driven by incestuous relationships in US aerospace.

I do totally agree this is a real risk for any domain, especially energy, which has so much money flowing through it; I just don't think "engineering" is actually where these things fail.


> it is a capitalistic failure driven by incestuous relationships in US aerospace.

What do you call Chernobyl? A socialistic failure driven by inept bureaucracy and central planning?


Both are institutional failures.

We don't have any technical defense against institutional failure. In some places and times there are cultural defenses, but those are often seen to erode.

The best defense is not to need any. There is much less need to defend against institutional failure in the case of renewables, because the technical failures to guard against have limited impact, well constrained in cost, time and space.


Honestly, I'm not well educated on Chernobyl's mode of failure or political incentive structures. I'd probably agree with your implication that if procedures can't be followed consistently and successfully, then that is exactly an engineering failure, but as I said, I do not know this circumstance.


> Aeroppanes sometimes fail catastrophically too of course

It's become extremely rare.


Well sure. But while extremely rare is fine for aeroplanes, it's less clear that it's fine for nuclear reactors. So far we've been lucky that none of the big incidents have affected a major metropolitan area.

I'm not completely anti-nuclear. But it seems clear to me that it should be seen as a stepping-stone technology on the way to a renewables + storage future rather than a long-term solution.


> Does anyone honestly think the United States has institutions sound enough to safely manage nuclear power over multiple decades?

Seeing as they have done so for 70 years, yes. I don't just think it, I observe that it has safely managed nuclear power. All of the plants have run safely, save for Three Mile Island. And even in that case, safety measures worked and the secondary containment prevented large scale contamination.

I don't think it's infallible. But it's aware of its own fallibility and enforces measures like secondary containment.


> I observe that it has safely managed nuclear power.

This is not a correct statement. You cannot assert, for instance, that a plant with the pressure vessel head corrosion issue at Davis-Besse[1] was being 'safely managed'.

[1] https://en.wikipedia.org/wiki/Davis%E2%80%93Besse_Nuclear_Po...


I'm not sure I follow. The vessel head corrosion was detected, and the Nuclear Regulatory Commission had the plant shut down. How does a story of a safety issue being detected and operations ceased accordingly indicate unsafe management? It demonstrates the opposite.


The vessel head had corroded completely through the 6.63" steel pressure head, and the pressure vessel was relying only on the inner cladding to contain pressure. They were just a transient away, for years, from a steam explosion that would completely disassemble the pressure vessel and core and would place maximal stress on the containment building itself.

The issue was only "detected", after being covered up for years by falsified reports, when the engineer doing inspections decided to turn himself in.

There is no way this condition can be regarded as safe operation, and if that is what you are arguing there can be no question that it is flat wrong.

There are many, many of these kinds of situations where, just by the grace of whatever, we dodged a bullet and didn't have the catastrophe. You can't count those situations as adding to a cherry-picked "safe operation record".

There is a huge difference between "didn't explode today" and "can't explode ever". We have spent too many days, months, years in the former rather than the latter. The so-called safety record is a lie.


> steam explosion that would completely disassemble the pressure vessel and core and would place maximal stress on the containment building itself.

This is venturing into the realm of hyperbole, at best. Nothing in your link mentions an explosion that would "completely disassemble the pressure vessel". Stress on the containment building isn't mentioned at all. These statements seem to be of your own invention.

Can you substantiate your claim that a pressure vessel failure stood to compromise the containment building?

> There is a huge difference between "didn't explode today" and "can't explode ever"

Again, we set up our safety measures such that the danger is contained even if a meltdown occurs. Even the most scrutinized designs may fail. Humans are never perfect. You're right: no plant can guarantee that it can't fail. That's why safety measures are built to withstand failure.


I know nothing about DOE/NRC inspection requirements, but…

> The issue was only "detected", after being covered up for years by falsified reports, when the engineer doing inspections decided to turn himself in.

Is it really policy that the same inspector can be responsible for successive inspections accumulating to years? That would be stark raving insanity for any critical systems.

Financial businesses have a traditional 2-week enforced vacation for critical systems employees. This is not an aggressive work-life balance effort. :)


It's down to luck that the corrosion was detected before a serious incident occurred.


While there are a lot of human faults in this disaster (I think it is hard to deny that generators in the basement were a bad idea), it is also a complicated problem. One factor that isn't frequently brought up is that the Tōhoku earthquake was the 4th largest ever recorded and the largest in Japan, at magnitude 9.1 (the second largest recorded in Japan was an 8.5 in 1896, and the second largest theorized was an 8.9 in the year 869; remember, this scale is not linear). Fukushima wouldn't have happened with an 8.5. A big reason this is important is that it really sets this event apart from Chernobyl, which I'd argue depended much more on human error and bureaucracy.
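To put "not linear" in numbers: seismic energy scales roughly as 10^(1.5 × M), the standard Gutenberg-Richter energy relation, so the gap between an 8.5 and a 9.1 is large. A quick check:

    # Energy ratio between two earthquake magnitudes: E is proportional to 10^(1.5 * M)
    ratio = 10 ** (1.5 * (9.1 - 8.5))
    print(f"A 9.1 releases about {ratio:.1f}x the energy of an 8.5")  # ~7.9x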

But that means the problem was both human and technical. What counted as good-enough regulation was the issue, because it is hard to predict earthquakes and even harder to design for earthquakes we've never seen before. No one thought a 9.1 magnitude earthquake would hit Japan, and nuclear safety margins are typically orders of magnitude above what is needed (see radiation dosages); this is a good thing (even though many who are pro-nuclear, but never worked in the industry, claim we're too strict).

But you are right that there is infighting between the scientists/engineers and the bureaucrats. But that's been true for every industry I've been a part of. I'm just trying to say that the story of why Fukushima happened is substantially more complicated than I see in the general discussions here on HN, Reddit, or elsewhere.


I just think that people need to put things into perspective. The tsunami that caused Fukushima was far more damaging than the nuclear event, but people seem to only remember the nuclear event. I think in our minds we make these events far, far more serious than they were. Not that they were not serious, but everything in life is a tradeoff, and you need to look at what you are trading and what you are getting.


No. The tsunami was extremely damaging and the death count was shocking. But it’s over.

The nuclear event had fewer immediate deaths, but the whole area is still unlivable, the sea is still getting more polluted every second, and nothing is over - and won't be for hundreds of years at least, unless we engineer a way to deal with the core of the reactor.


> The tsunami that caused Fukushima was far more damaging than the nuclear event, but people seem to only remember the nuclear event.

Because we still live with the nuclear accident, while the tsunami damage is mostly repaired? Do you want to swim in the water in front of the plant? I probably wouldn't.


I think what bugs me more is the armchair expertise, or rather the confidence behind it. It's the people whose arguments essentially boil down to experts being idiots who can't see things that are clearly obvious. I don't see these people as significantly different from anti-vaxxers. Both do real harm to society and make it substantially more difficult to solve the issues at hand, because we're distracted by misinformation that often radicalizes others.

Don't get me wrong, I'm happy that people are researching and learning. I like that people question authority and expertise too. But there is a balance here. You can say things with confidence even if you have only read a few Wikipedia articles, but if someone disagrees with you, don't pull out a baseball bat. I find this behavior common on places like HN and Reddit. I often find that the real answers are buried deep in a thread because they are complicated and nuanced, or are missing entirely.

I don't think I'm immune to this behavior either, but I do try to use the Gell-Mann amnesia effect as a metric to check myself, and I think there are other good strategies we should utilize and encourage. But I don't think our society encourages honesty over simplicity.


The reason why I find this logic faulty is that none of the "known defects" were sufficient to shut down the plant. It's not a question of bureaucracy -- rather, the bureaucrats are on the other side. To cover their asses, they issue constant statements of imminent danger, and since those dangers never manifest, nobody believes them anymore.

If anyone took the warnings seriously the plant would have been shut down ages ago. And that's the problem with tail disasters -- they happen so infrequently that the system is assumed to be redundant to all of them, so even a "failure" as predicted would be met by a failsafe.

That's why I personally have turned against nuclear power. It's too complicated and the risks live out on the tail, and they are large (though not "fat" in the Taleb sense -- they're still bounded geographically).


> In short, the problems were human not technical.

I disagree. Problems have been in both the human and technical realms and, even worse, there is no way to clearly disentangle those two factors. Good arguments are given in Charles Perrow's classic work "Normal Accidents" [1]. It is worth citing the three main conditions which will result in an accident probability greater than acceptable:

1. The system is complex

2. The system is tightly coupled

3. The system has catastrophic potential

[1] https://en.wikipedia.org/wiki/Normal_Accidents


I agree with the point about technical vs. people. On HN, people are more familiar with how this applies to software. It may be technically possible to write bug-free safety-critical code in C. But in the real world we are all human and make mistakes, and we don't have any choice about that. The existence of a hypothetical perfect solution is not a good defense.


It's not only that, but also that nuclear plants are run by for-profit organisations, where cutting corners will at some level be appreciated to ensure the bottom line.

/edit: it's funny that here many are calling for tougher regulation, while in other posts many who are pro-nuclear complain about too much red tape and too much security.


> nuclear plants are run by for-profit organisations, where cutting corners will at some level be appreciated to ensure the bottom line

How does that explain the inept handling of Chernobyl?


Cutting corners was appreciated to meet arbitrary plans and quotas. The failure mode wasn't that different.


Personal profit, zealotry, career seeking, incompetence, design flaws, political agendas. This also includes the design phase. The RBMK reactor was an irresponsible design from the outset, even without the unknowns.

Graphite-moderated reactors are prone to graphite cracking, as also evidenced by the UK's AGR reactor fleet. Maybe pebble bed reactors are safer, because new pebbles are continuously fed in and the spent ones are extracted for reprocessing. We'll see how the HTR-10 and the HTR-PM fare.


From what I know about this accident, the "profit" in this case was the higher-ups running the plant not losing face.


Good luck designing and operating a complex, dangerous system with purely altruistic, utterly selfless people.


> Good luck designing and operating a complex, dangerous system with purely altruistic, utterly selfless people.

I know you're being sarcastic, but you just have to run it by the book. No need to be a saint.


What I'm saying is a proper organization takes advantage of peoples' base motives, instead of trying to defy them.

Free markets work so well for that reason.


I don't think this works in any safety-relevant industry - or else there are not many proper organizations. Most regulations are a response to accidents.


Lawsuits and loss of reputation have halved the value of Boeing since the 737 MAX mistakes.


Yet, no one personally responsible has suffered in the slightest.


I think one of the test pilots was brought up on charges for lying to the FAA. There may be others. I haven't followed that aspect that closely.

All who owned stock in the company had that cut in half. Lots more lost their bonuses. The CEO lost his job.


Golden parachute. Millionaires losing bonuses or equity never need to tighten their belts. Almost everybody who lost equity had no say.

The test pilot would be covering for a person actually responsible. Charges were probably to force fingering that person. But if that person wasn't charged, then it is all just business expense.

The losses are borne by pension funds.


Did “profit” just get redefined?


I know nobody likes the wiseass but

Profit = to gain an advantage from something: profit from sth/doing sth I profited enormously from working with her.

https://dictionary.cambridge.org/de/worterbuch/englisch/prof...


The handling, as in, the reaction once the top officials actually understood the magnitude of the situation, was nothing short of spectacular.

No expense was spared cleaning up the mess, removing the top layer of soil at a massive scale and enclosing the failed reactor in a sarcophagus. This expense and the reputation damage contributed considerably to bringing about the end of the USSR.

You've got to keep in mind how little was known about the lethality and handling of radiation back then, compared to today. In fact, a good chunk of today's knowledge comes from Chernobyl.


While we learned a lot from Chernobyl, the culture back then was already very fearful of nuclear.


I mean more like having tools and equipment to handle the situation that previously simply didn't exist.

From hazmat suits to robots, what was available was extremely basic.


It's not a universal rule that all for-profit companies will "cut corners." Airplanes are vastly safer than other forms of transportation. When the public found out that Boeing cut corners with the design of the 737 MAX, their share price dramatically plummeted, signaling that this was the wrong decision.

Also, cutting corners isn't necessarily worse for nuclear than for other energy sources. More people die from wind turbine accidents than from nuclear power.


> run by for-profit organisations, where cutting corners will at some level be appreciated to ensure the bottom line

as though government bureaucrats and congressmen don't like coming in under budget, future consequences be damned.


Coming in under budget seems like a rare problem when building nuclear plants.

While they're in operation, I think congressmen and bureaucrats don't even know the real costs.


> the problems were human not technical..

The logical conclusion is that if we are to continue building nuke plants, we need to keep humans out of the picture.

We have no technology to ensure that huge institutions handling existentially hazardous technology do not become corrupt and irresponsible. The solution we know of is to avoid handing over such technology to the control of readily corruptible institutions.

Corruption is arguably the chief purpose of almost any past nuclear power initiative. A public works project that involves tens of billions of dollars almost inevitably devolves into a nest of corruption, whether it's a nuke plant, a new urban tunnel (cf. Big Dig, NY 2nd Ave), or US military procurement. There is a reason why small and portable nuke generation has not been able to compete: there is little scope for corruption in small nukes.

Wind and solar power are not subject to such systemic failures, and are also quite a lot cheaper than nukes, and getting cheaper every year. To prevent failures, we just need to shut down the nuke plants and replace them with solar and wind power. The only remaining question around renewables concerns storage of peak power output for dead times. But power storage is low-tech, thus low-risk, with numerous alternatives--gravity, pressurized-air, chemical--vying simply for the title of cheapest.


The challenge at its core is ownership, and we don't do well at keeping organizations from trending toward bureaucracy. As much as people hate bureaucracy, they love the order and predictability it produces.

It's probably why nuclear power has a ways to go, and it isn't the tech that needs an upgrade; it's the people and the philosophy.


It's worth pointing out that the US Navy's carriers and submarines are all powered by nuclear reactors with service lives in the 3+ decade range, and it has worked astonishingly well. It's not completely without incident, but wow, yeah, civilian nuclear power could really work if held to military standards of engineering and maintenance.


It's worth pointing out that US civilian nuclear power plants with service lives in the 3+ decade range have worked astonishingly well. It's not completely without incident, but wow, yeah, military nuclear power could really work if held to the same standards of engineering and maintenance.


Naval reactors are different: their scale is two orders of magnitude smaller. One could make other safety guarantees at that scale. They also use highly enriched uranium, which means no refuelling is needed during the service life of the reactor. SMRs can also make some of these safety guarantees.


They do refuel naval reactors [https://en.wikipedia.org/wiki/Refueling_and_overhaul].

Given the (relatively) small size of these reactors, why can't they put 100 of them on site at a NPS?


> Given the (relatively) small size of these reactors, why can't they put 100 of them on site at a NPS?

That's what companies developing SMRs aim to do. For instance, NuScale has a design using up to 12 modules of 77 MWe each, every module sitting in a separate stainless-steel-lined concrete pool of water. The modules are quite innovative and incorporate passive safety features such as natural circulation, redundant passive decay heat removal, and gravity-driven safety systems.

https://www.nuscalepower.com/benefits/safety-features
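Back-of-the-envelope on the scale, derived from the module figures above:

    modules = 12
    mwe_per_module = 77
    print(modules * mwe_per_module)  # 924 MWe, roughly one large conventional unit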


> Their nuclear engineers are top-notch. It was the bureaucracy that failed, not the talent.

This.

All of the articles I've read about the disaster continually scapegoated the engineers as the reason for the failure, allowing the politicians and government to get a free pass. I'm not sure why this was the case, considering Japanese engineers are some of the best, but the vilification never sat well with me, because it unfairly cast a negative cloud over every Japanese engineer.


TEPCO dropped the ball pretty massively too, although IMO it should be the government's responsibility to assume that power operators are going to drop the ball, and not allow them to.


I can't find the article for the life of me. From what I can remember, soon after the disaster, something came out about another plant that was "hit" by high waters as well. The difference was, their seawall was stupid high. One of the civil engineers during development fought tooth and nail to build the excessively high wall compared to what the government building code required. If I remember correctly, it ended up being only two or so meters taller than the tsunami that hit them. The engineer had a really baller statement in the article about how bureaucrats are useless and shouldn't have an opinion when it comes to life safety. I wish I could find it.



How the hell did you find it so quick?

I'm surprised I remembered it relatively well. Though it was a 1 meter buffer between the tsunami and the seawall, and the quote about bureaucrats is better in his own words:

>"Matsunaga-san hated bureaucrats," Oshima said. "He said they are like human trash. In your country, too, there are probably bureaucrats or officials who never take final responsibility.


Divination via DuckDuckGo :-)

From words in your comment, I used this search string: japan nuclear plant saved engineer battle high sea wall

The result was third in the returned list. (Settings - safe search off, global region selected.)

Edit to amend: words and concepts in your comment. Also, I'm chuffed to have been able to be of use.


You... you're beautiful. Thank you. You may have just made me a convert to DDG.


Aw! The beautiful person in this effort was Karen Spärck Jones [1], who gave us tf–idf [2].

[1] https://en.wikipedia.org/wiki/Karen_Sp%C3%A4rck_Jones

[2] https://en.wikipedia.org/wiki/Inverse_document_frequency

But thank you.
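For the curious, a minimal tf-idf sketch in the standard formulation (the toy documents below are made up, and real search engines layer much more on top):

    import math
    from collections import Counter

    docs = [
        "japan nuclear plant saved engineer battle high sea wall".split(),
        "nuclear regulation safety inspection report".split(),
        "sea wall tsunami flooding coastal town".split(),
    ]
    N = len(docs)

    def tfidf(term, doc):
        tf = Counter(doc)[term]            # term frequency in this document
        df = sum(term in d for d in docs)  # documents containing the term
        return tf * math.log(N / df) if df else 0.0

    # Rare, specific words like "engineer" score higher than common ones
    print(tfidf("engineer", docs[0]), tfidf("nuclear", docs[0]))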


> Does anyone honestly think the United States has institutions sound enough to safely manage nuclear power over multiple decades? Or will they neglect basic maintenance and upgrades?

Objectively, yes. There hasn't been a major nuclear reactor leak in the ~75 years the nuclear industry has existed in the USA. Even Three Mile Island, the worst disaster the US ever saw, was fully contained due to regulator-forced safeguards.


I'm not sure why you're downvoted.

After 50+ years of routine operation generating a nontrivial proportion of energy, we can look back at a decent amount of data. And what we see is that nuclear has been remarkably safe. Up here in Canada, coal mine disasters alone have killed far more people. When you start adding in air pollution and other such nasties, it's an enormously vast gulf in lethality.

A cynical take. Estimate how many people would have died from air pollution due to a coal power plant generating the same amount of electrical energy as the reactor at Chernobyl that blew up. Estimate how many died from Chernobyl. The reasonable estimates of the high end of the former and the low end of the latter are overlapping. It's not entirely preposterous to suggest that replacing unscrubbed coal plants with shoddy reactors that simply explode after 20 years of operation could actually save lives on net.


https://en.wikipedia.org/wiki/Three_Mile_Island_accident

We got super super lucky. And there's some debate about how bad the accident was with regard to the NRC's monitoring.

Frankly, the whole plant was a disaster in the making. There were tons of warning lights and other systems, but they were essentially useless because they constantly flashed, often for poorly understood reasons.

Three Mile Island is an excellent engineering case study of what not to do with monitoring. We got very VERY lucky it was as small as it was.


Sure, all of which are problems we've since fixed. But the core point is that there wasn't a major release of radiation like Chernobyl, and the reason why is that there was a regulator-imposed safeguard in place: the containment building.

There were a lot of things that went wrong at TMI. Many of the lessons learned from that were incorporated into future designs. But one thing that went very right was that there was defense in depth, so that N different things would have to go wrong to create a nuclear disaster. And in this case the number of failures was less than N. That's an engineering and regulatory success story.


"Wasn't a major release" meaning what?

A large amount of radioactive krypton gas was "vented", meaning it was released to spill down to the river and gas anyone who lived nearby. There was no tracking, so we don't know who or how many were exposed, or how much.


We can certainly ballpark estimate how much gas was vented--we knew the pressures and duration of the vent.
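For what it's worth, the ballpark is just the ideal gas law; a sketch with entirely hypothetical numbers (the real volume, pressures, and temperature would come from NRC records):

    R = 8.314  # J/(mol*K), universal gas constant

    def moles_vented(volume_m3, p_start_pa, p_end_pa, temp_k):
        """Ideal-gas estimate of gas released: n = dP * V / (R * T)."""
        return (p_start_pa - p_end_pa) * volume_m3 / (R * temp_k)

    # Hypothetical: 50,000 m^3 containment vented from 1.3 atm to 1.0 atm at 320 K
    n = moles_vented(50_000, 1.3 * 101_325, 1.0 * 101_325, 320)
    print(f"~{n:.1e} mol of gas")  # radioactive krypton is a small fraction of this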

But this is a night-and-day comparison with, say, Chernobyl, where the core was exposed and burning unmitigated for nine days. Many orders of magnitude more radiation was released.


Everybody agrees Chernobyl was the worst. But that doesn't mean the others were picnics. A common thread is systematically discounting harm to people exposed. With such pervasive dishonesty throughout the industry, rigorous oversight has proved impossible, in practice.

The nuke navy is always cited as having no incidents, but that doesn't pass the smell test. Military failures are easily classified and buried.


There was no luck about it. It was a meltdown, and the pressure vessel was compromised. Secondary containment saved the day. Three Mile Island didn't become a Chernobyl not because of luck, but because the US didn't cheap out and skip building a concrete condom over the reactor like the Soviets did.


Even if Chernobyl had had a containment vessel, what would the best case scenario have been? I'm not an expert, but the blast threw the multi-ton slab of steel and concrete that was the lid into the night sky; surely the containment vessel would have had a giant hole in it - keeping some of the radiation out of the atmosphere, of course, but not all of it. One reason I think Three Mile Island wasn't as bad is that nobody in the West was crazy enough to build an RBMK.


The whole point of the containment building is to contain a pressure vessel failure. American containment buildings are built to withstand impact of a fully loaded passenger airliner. That's why the containment vessels are reinforced concrete more than a meter thick.

If Chernobyl had secondary containment, the burning fuel rods would not have been exposed directly to atmosphere. Basically, if you have a fire emitting toxic soot it's a lot better to have this fire happen in a concrete dome versus totally exposed.


False, see my SSFL links elsewhere in the thread. Direct link:

https://news.ycombinator.com/item?id=26348051


The US Navy runs a really safe nuclear power program, but it costs a lot. If you could combine the organizational fortitude of the nuclear Navy managing the plants and the maintenance with a reasonable cost, I think that would be sufficient to ensure safety, as long as the designs are also done well. Personally, I think nuclear power is too risky for commercial use. I would put my money in HVDC links, pumped hydro storage, geothermal, and overbuilding renewables as the best path to reducing emissions, and continue to use natural gas for emergency generators for hospitals and the like.


There is also a human element. Unit 1 had been retrofitted with an Isolation Condenser, which is capable of cooling the core and preventing a meltdown without needing the pumps that couldn't run due to lack of power. This is exactly the type of upgrade people often suggest.

Unfortunately, for reasons that are still murky, this system wasn't activated, and Unit 1 melted down. The problems at Unit 1 also contributed to the problems at other units, causing radiation hazards, diverting personnel and attention, etc.

In fact, a larger version of this system is touted as one of the major safety features of the newer AP1000 plants, because all it requires is that you open a couple of valves, and the reactor can be safely shut down as long as you add water every couple of days. Unfortunately, at Fukushima they didn't open the valves.

All of that said, the absolute damage from the accidents at Fukushima was tiny in comparison to the other damage from the tsunami, and much less than the damage of operating coal plants with no accidents whatsoever in Japan.


Also from my understanding the earthquake wasn't really much of an issue for most of Japan. It was the tsunami that we don't yet have good protections against.

Why do humans of the 21st century love building delicate structures on the shoreline at sea level? Historical civilizations generally avoided building on the coast, very likely for good reasons, both for disaster resistance and for military reasons. Most ancient cities of the world are not located on the oceanside, but rather along inland rivers or smaller bodies of water, or at least within some safe distance of the coast.

Recent modern cities seem to love building on the coast -- New York, Shanghai, Shenzhen, Singapore, Los Angeles, Vancouver, Dubai -- all these had relatively little history or at least were nothing more than small towns until the past couple hundred years, and are all terrible places to build a city in terms of tsunami resistance.


Moving goods by sea is vastly cheaper than by land: for all those cities, being a port is why they are significant economic engines. And power plants need cooling and can use sea water for that purpose.


Ancient cities needed to drink the freshwater from the rivers.

Now we have man-made reservoirs and aqueducts to deliver drinking water to the coasts.

Japan certainly seems to have cities along its rivers, but it also has a lot of costal cities (presumably because it's a small island nation, unlike, say, European civilizations).

For Fukushima in particular, I was under the impression they were using the ocean water to cool the plant itself. (Under non-meltdown conditions, you can transfer heat without contaminating the water itself...)


Los Angeles was founded pretty far from the coast, and at a decent elevation, roughly 77 m / 253 feet. It just expanded in every direction. Santa Monica is protected by cliffs as well. Farther south isn't so lucky.


Wouldn’t the plant have been located where demand and cooling capacity were co-located?


And yet only 1 person was killed when it melted down. But somehow, it gets more attention than the tens of thousands who were killed in the tsunami.


Human death is not the only dimension that matters. The cleanup could easily cost over $500 billion. That is a massive opportunity cost.


Only 1, unless you count the rest.


> People get complacent and greedy. They use every procedural tool they have to delay upgrades, maintenance, and improvement.

That is the core argument for a meaningful regulatory regime.

Large-scale base-load generators only work in a business sense with predictable, steady demand. The price of that guaranteed demand is a near-fixed, managed return on assets and tight regulatory oversight.


This is arguably worse than an engineering failure though... In an engineering failure we can identify a concrete reason for the failure and integrate that into our engineering knowledge. On the other hand, we will never be able to eliminate the human / bureaucratic element.


Bureaucracy destroys. Regulation is useful if applied effectively; however, ever-growing bureaucracy and the laziness of average people tend to take hold without constant pressure or growth. Working as a programmer in government has made me realize this.


All problems are human, not technical, when it comes to engineering failures.


That's the problem I have with nuclear: it's not the technology, it's that our species is not necessarily well-suited to managing the risks associated with nuclear (with some exceptions, maybe France?).


not so sure about that:

https://en.wikipedia.org/wiki/Nuclear_power_in_France#Accide...

France often hides minor stuff, which often results in these more severe events. France also has only 3 reactors, as far as I know, that were built in the 2000s.

BTW, that's also my take: as long as they are operated to turn a profit, or in a way where somebody might gain something, it will be basically impossible to have "safe" nuclear power. Humans are dangerous.


The French were just more responsible than the Soviets, and don't have to deal with earthquakes and tsunamis as much as the Japanese. They also invested a lot in several nuclear designs, committed to nuclear power, and are therefore quite experienced.


I can't find any specifics about the 1980 partial meltdown at Saint-Laurent, but it seems a serious accident could bankrupt France in an instant.

https://www.businessinsider.com/potential-cost-of-a-nuclear-...


> Does anyone honestly think the United States has institutions sound enough to safely manage nuclear power over multiple decades?

if US institutions can't manage nuclear power, what else can't they manage?


- a global pandemic

- a war that lasts longer than 1 month

- media legitimacy

- global financial system


Capitol security?


Of course nuclear plants would not be safe if “the president” ordered citizens to attack them.


That is an interesting idea. What would happen in a civil war? Would the engineers stay at their posts, safely shutting down the reactors?

Nuclear energy demands a level of civilization that we simply cannot guarantee.


>...Does anyone honestly think the United States has institutions sound enough to safely manage nuclear power over multiple decades?

All indications are that much was learned by industry and the NRC after TMI: "...The NRC said the TMI accident also led to increased identification, analysis and publication of plant performance information, and recognising human performance as “a critical component of plant safety”. Key indicators of plant safety performance in the US have improved dramatically. Those indicators show:

• The average number of significant reactor events over the past 20 years has dropped to nearly zero.

• Today there are far fewer, much less frequent and lower risk events that could lead to a reactor-core damage.

• The average number of times safety systems have had to be activated is about one-tenth of what it was 22 years ago.

• Radiation exposure levels to plant workers have steadily decreased to about one-sixth of the 1985 exposure levels and are well below national limits.

• The average number of unplanned reactor shutdowns has decreased by nearly ten-fold. In 2007 there were about 52 shutdowns compared to about 530 shutdowns in 1985."

https://www.nucnet.org/news/three-mile-island-led-to-sweepin...

No one ever promised that there would never be a nuclear accident - that would be unrealistic for any power source. But historically nuclear power has been much safer than all the alternatives that have been available. If only other power sources were as safe:

https://www.statista.com/statistics/494425/death-rate-worldw...

https://ourworldindata.org/safest-sources-of-energy

https://www.nextbigfuture.com/2011/03/deaths-per-twh-by-ener...

https://www.forbes.com/sites/jamesconca/2012/06/10/energys-d...

Unfortunately, anything at all related to nuclear is covered by the media orders of magnitude more than other power sources, so many people have an understandable misperception that it is more dangerous than other sources of power. 200 thousand people had to be evacuated in CA a couple of years ago because a lack of maintenance on a hydroelectric dam could have led to catastrophic failure. We got lucky that time, as the rains stopped just in time, but how much did the media cover that story? How much would the media have covered it if 200 thousand people had been evacuated because of a nuclear power plant?

A recent Harvard study shows that pollution from fossil fuels is much worse than previously thought; they estimate that it is responsible for more than 8 million deaths yearly. We need to move away from burning fossil fuels, and we need to use all the tools that are available.

https://www.seas.harvard.edu/news/2021/02/deaths-fossil-fuel...

It is possible there will be some major advances in grid storage that will allow us to stop using natural gas to cover for the intermittent nature of wind and solar. In that case - great! But... what if that doesn't pan out? The dangers we are facing in the coming decades are immense. Texas has shown us what happens with even a small disruption of energy. If it came down to a situation where you were forced to choose, would you prefer the world to suffer through catastrophic climate change rather than use nuclear power?


> the intermittent nature of wind and solar

is kinda dependent on your region.

where I live, wind is pretty unreliable except at a narrow band of latitude.

but solar is -very- reliable. more like regular than intermittent. So we have two kinds of storage requirements: short term buffers for 15 mins of passing cloud cover, and overnight. Because of this manageable profile our economy is swiftly ramping up solar not just for current demand but in pursuit of 10x cheap new power to drive new industry.


It isn't as easy as you are implying. Trying to rely only on intermittent power sources has huge storage requirements due to weather along with daily/seasonal variation. If grid energy storage was a simple problem it would have been done decades ago.

For example, one estimate is that for Germany to rely on solar and wind would require about 6,000 pumped storage plants which is literally 183 times their current capacity: >...Based on German hourly feed-in and consumption data for electric power, this paper studies the storage and buffering needs resulting from the volatility of wind and solar energy. It shows that joint buffers for wind and solar energy require less storage capacity than would be necessary to buffer wind or solar energy alone. The storage requirement of over 6,000 pumped storage plants, which is 183 times Germany’s current capacity, would nevertheless be huge.

https://www.econstor.eu/bitstream/10419/144985/1/cesifo1_wp5...
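To get a feel for the scale, here is a back-of-envelope sketch in Python. Every figure is an assumption for illustration except the 183x multiple, which comes from the paper (Germany's current pumped-storage fleet holds very roughly 40 GWh of energy):

  # Back-of-envelope sketch; all figures are assumptions except the
  # 183x multiple taken from the cited paper.
  current_storage_gwh = 40                 # assumed German pumped-storage capacity
  required_gwh = current_storage_gwh * 183
  print(required_gwh)                      # ~7,300 GWh of storage implied
  print(required_gwh / 6000)               # ~1.2 GWh per plant, a plausible size

Even if the assumed starting capacity is off by a factor of two in either direction, the implied build-out is enormous.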

There is a large variation in daily electrical usage (particularly in summer months). For example in the US: https://www.eia.gov/todayinenergy/detail.php?id=42915

Contrary to what advocates claim, people have been looking at grid energy storage for decades and it isn't as simple as they claim. As Bill Gates said in an interview: "…They have this statement that the cost of solar photovoltaic is the same as hydrocarbon’s. And that’s one of those misleadingly meaningless statements. What they mean is that at noon in Arizona, the cost of that kilowatt-hour is the same as a hydrocarbon kilowatt-hour. But it doesn’t come at night, it doesn’t come after the sun hasn’t shone, so the fact that in that one moment you reach parity, so what? The reading public, when they see things like that, they underestimate how hard this thing is. So false solutions like divestment or “Oh, it’s easy to do” hurt our ability to fix the problems. Distinguishing a real solution from a false solution is actually very complicated."

https://www.theatlantic.com/magazine/archive/2015/11/we-need...

Gates is investing in 4th gen nuclear and energy storage companies so he is putting his money where his mouth is.


> It isn't as easy as you are implying.

I'm not saying it's easy, I'm saying it's happening.


>...I'm saying it's happening.

Hopefully someday, but it's not happening yet. Those big battery farms installed by Tesla (et al) are used primarily for grid stabilization. Most current grid storage is pumped hydro and that has limited potential to expand. Like I said, it is possible there will be some major advances in grid storage that will allow us to stop using natural gas to cover for the intermittent nature of wind and solar. In that case - great! But... what if that doesn't pan out? The dangers we are facing in the coming decades are immense. Texas has shown us what happens with even a small disruption of energy. If it came down to a situation where you were forced to choose, would you prefer the world to suffer through catastrophic climate change rather than use nuclear power?


I'm no power engineer, I'm just assuming that the people pouring literally $billions into huge-scale solar infra in my region know what they're doing.


Yeah, it is always some human who fails but in the case of this tech, the failure becomes a catastrophe.

So we either take the human factor or the technology out of the equation. Right now we can only do that with the tech.


No, even with the biggest failures, nuclear is still vastly more safe than the alternatives.

What is catastrophic is the news reporting, particularly in Germany.

For example, Fukushima was caused by a Tsunami.

Death Toll

   Tsunami:    15,899
   Fukushima:       1



  —> OMG FUKUSHIMA!!!!  <—

If you looked at the press coverage, you would think it was the other way around, that there was this Tsunami but it wasn't a big deal and there was the huge "catastrophe" of Fukushima. And many people do believe, fervently, that this is the case, that it was the other way around.

But it wasn't.

In fact, in Germany Fukushima is considered a "Super-GAU", with a GAU being the "Größter Anzunehmender Unfall", the largest potential accident. So "GAU" itself is already the superlative, but no, we have to rhetorically top the superlative, make it the superest largerest.

And that's for an accident that has caused a single death (a worker recently died, and it is considered likely to have been an effect of the accident; before that, the death toll from the accident was zero).

The only thing that's a Super-GAU is the hyperbole of the hyperventilating press coverage.


How is it that you consider only the immediate deaths from the event, and not the follow-up casualties, the evacuation measures, and everything else that hangs on this? Do you really think your opponent is so stupid? And yes, I did look at the press coverage a lot since I was in Tokyo at that time. But I also looked at it later on, and no, I did not think it was the other way around. However, I'm also not so blind as to ignore all the other consequences this catastrophe had for the region and the people who lived/live there.


Because I also only considered the immediate deaths from the Tsunami. And actually, the 1 death is a follow-up casualty, it wasn't immediate. So if we really only count immediate deaths, that number is 0. Zero.

--> OMG FUKUSHIMA!!! <--

What the long-term death rate is going to be is very unclear, partly because even the worst-case estimates (those that had to be continuously revised downward) show increases in the cancer rate of less than a percent, so completely lost in the noise of other effects, and completely impossible to trace.

Now to the evacuation.

"Many deaths are attributed to the evacuation and subsequent long-term displacement caused by mass evacuation that was not necessary for the most part"

My emphasis.

https://en.wikipedia.org/wiki/Fukushima_Daiichi_nuclear_disa...

The same happens to be true for Chernobyl, where the health-effects due to the evacuation far exceed the health-effects due to radiation. Whereas for example the wildlife in both exclusion zones is doing just swimmingly.

So:

Fear of nuclear is killing more people than nuclear.

This is generally true, because the use of nuclear energy has saved over a million people from premature death and will (or would) save millions more:

https://blogs.scientificamerican.com/the-curious-wavefunctio...

But somewhat surprisingly, it is also true when nuclear goes wrong, when there are accidents. Check out the decennial Chernobyl reports by the WHO, they are absolutely fascinating. Spoiler alert: with each report, so every ten years, they massively reduced their estimate of how many people would die as a result, usually by an order of magnitude.

Now that doesn't mean that there should not have been any evacuation, but in both cases it was both too widespread and way too long.


You completely missed the message here. I wonder if it was intentional. Let me repeat it again:

It's not only deaths that count when saying what a "safe" technology is. A technology that leads to whole regions being evacuated, including all the economic, social, and environmental fallout resulting from that, IS NOT SAFE.


Hmm...you appear to have missed: "caused by mass evacuation that was not necessary for the most part"

What leads to whole regions being evacuated is the exact irrational fear and panic-mongering you promulgate.

Once again: irrational fear of nuclear kills way more people than nuclear.


I did not miss that.

I just don't consider some nuclear fanboy's after-the-fact one-sentence opinion a viable argument. Especially not if its main aim is to derail and/or cloud the actual facts.


You missed the fact that "caused by mass evacuation that was not necessary for the most part" was a direct quote from the respective Wikipedia page, backed up by the data (see the WHO reports on Chernobyl etc.).

Of course, you believe that all data that contradicts your irrational beliefs must be just opinions by "fanbois", because to actually check up on the facts would mean risking shattering your strongly held but weakly backed belief system.


Neither you nor this paper considers the fact that if the disaster had become worse, people would complain: why didn't you evacuate? Saying AFTER THE FACT that it was unnecessary is completely useless and ignorant. It's not like it won't happen again with the next disaster.

This is like saying that the airbag or the safety belts in my last car were unnecessary since I didn't have an accident which would justify them.


That's not a "fact". That's a counterfactual which you hypothesise, without any reason or evidence whatsoever, will have horrible consequences. And an analogy that doesn't work. As we say in German: "Nicht alles was hinkt ist auch ein Vergleich" ("not everything that limps is a comparison").

Your seatbelt analogy, apart from being pulled out of thin air, has absolutely nothing to do with what happened. The seatbelts are preventative measures before an accident happens. These were measures after the accident happened that were way over the top. A better analogy is a doctor seeing a bruise on an arm and deciding to amputate the arm, just to be safe. And then amputating both legs as well, because "better safe than sorry".

The "cure" is far worse than the disease.

With Chernobyl there was the excuse that they didn't know better, they only found out in the decades after the accident that their initial estimates for the harm caused by the radiation were way too high, as in several orders of magnitude off. I really recommend reading the WHO reports[1], they were an actual eye opener for me, because they contradicted what I "knew" to be the case.

Again, they were not off in terms of the scale of the accident, they were off on the effects of an accident of a particular scale. Not like your seat-belt analogy at all.

Now a good question to ask is why they were off by so much. It looks like the linear no-threshold (LNT) model of radiation damage is simply wrong[2]. As far as I can tell, this model was never actually validated by data, it was just assumed to be the case, and if you look at the "pro" voices, they also provide no evidence for it, just the view that the lack of evidence means it should be treated as true, which is...odd.

With Fukushima, there is less of an excuse, as they could and should have known. See also J-value assessment of relocation measures following the nuclear power plant accidents at Chernobyl and Fukushima Daiichi [3]. Money quote:

"•Relocation was unjustified for 75% of the 335,000 people relocated after Chernobyl.

• Relocation was unjustified for the 160,000 people relocated after Fukushima."

Mandatory evacuation of residents during the Fukushima nuclear disaster: an ethical analysis[4]:

"We examine the measures from an ethical perspective and argue that if the government's aim was to avoid health risks posed by radiation exposure, then ordering compulsory expulsion of all residents cannot be ethically justified. We assert that the government may not have ordered the mandatory evacuation solely based on health risks, but rather to maintain public order."

[1] https://www.who.int/publications/i/item/9241594179

[2] https://en.wikipedia.org/wiki/Linear_no-threshold_model#Cont...

[3] http://www.sciencedirect.com/science/article/pii/S0957582017...

[4] https://academic.oup.com/jpubhealth/article/34/3/348/1557028


Considering NASA recommends that nuclear power be "significantly expanded" despite its drawbacks, I think they are sound enough. The US has a pretty squeaky clean record when it comes to nuclear safety and storage protocols.

Also, the current status quo of "look we built all this renewable energy! just ignore all those gas peaking plants propping them up!" has to end.

Nuclear is green. Renewables + gas is not renewable, not sustainable and not green.


>The US has a pretty squeaky clean record when it comes to nuclear safety and storage protocols.

One single nuclear site is consuming 10% of the DoE's budget, and it's still leaking. https://www.tri-cityherald.com/news/local/hanford/article228...


Allow me to introduce you to the Santa Susana Field Laboratory meltdown/explosion, which remains relatively unknown due to extensive cover-ups over decades:

https://en.wikipedia.org/wiki/Santa_Susana_Field_Laboratory

https://en.wikipedia.org/wiki/Sodium_Reactor_Experiment

Still not fully cleaned up.


The Hanford site is a WWII era nuclear weapons facility and is not at all comparable with nuclear reactors for power generation.


while the comment you're replying to didn't make a distinction, i'll make the distinction that that was a nuclear weapons production facility (run by the federal government). further, some of it was constructed during WWII for the manhattan project.

so... not great handling, true. strong evidence about how nuclear power plants will be operated in the future? no.


The NRC does not regulate defense nuclear facilities.


Is Figure 8 an unconditional empirical CDF of inter-arrival times? Apart from the heavy right tail (which covers ~0.01% of the data), it looks pretty exponential to me. If I'm understanding what I'm seeing, it sounds like the homogeneous Poisson assumption was pretty solid, especially considering its purpose. Maybe it would have been more accurate to say "there's a mixture of two Poissons: the bulk and the network disruption". But I think that possibility would occur to most people reading the paper at the time.

Also, Figure 7 seems to show very little change in mean block inter-arrival time.

In fairness the authors say, "Performing the Lilliefors test on the LR data rejects the null hypothesis that block mining intervals are exponentially distributed, at a significance level of α= 0.05." But this isn't physics. We want to know how useful the approximation is, and whether there is a similarly tractable one with better predictive power.
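For anyone who wants to poke at this themselves, here is a minimal sketch of that test using statsmodels. The `gaps` array below is synthetic by assumption; with the paper's actual block inter-arrival times (in seconds) you would load those instead:

  # Minimal sketch: Lilliefors test of exponentiality for inter-arrival
  # times. The data below is synthetic; substitute real block timestamps.
  import numpy as np
  from statsmodels.stats.diagnostic import lilliefors

  rng = np.random.default_rng(1)
  gaps = rng.exponential(scale=600.0, size=50_000)  # ~10-minute target spacing

  stat, pval = lilliefors(gaps, dist='exp')
  print(stat, pval)  # rejecting at alpha = 0.05 says the fit isn't exact,
                     # not that the exponential approximation is useless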


> Is Figure 8 an unconditional empirical CDF of inter-arrival times?

My understanding is that it's the inter-arrival times after some cleaning and resampling. If I've understood correctly, when they resampled the data, they did so uniformly between the neighbours of the points they omitted, which would actually make the data appear more like an exponential distribution.

> Especially considering its purpose. Maybe it would have been more accurate to say "there's a mixture of two Poissons: the bulk and the network disruption".

Could be. Could also follow a power law or a phase type distribution.

> But this isn't physics. We want to know how useful the approximation is, and whether there is a similarly tractable one with better predictive power.

It's worse, it's math :-) I take your point though, it all comes down to what you're trying to do. If inter-arrival times did follow an exponential distribution with parameter $\lambda$, then we'd have finite variance and I'd be pretty confident that I could build a performant predictive model. The presence of a heavy right tail makes me think otherwise.
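To make the mixture idea concrete, here is a toy hyperexponential (a mixture of two exponentials) in numpy; all parameters are invented for illustration. The mean barely moves, but the extreme quantile, which is what the tail of a CDF like Figure 8 shows, blows up:

  # Toy hyperexponential: mostly ~10-minute arrivals plus a rare,
  # much slower "disruption" regime. All parameters are assumptions.
  import numpy as np

  rng = np.random.default_rng(0)
  n = 10_000_000
  gaps = rng.exponential(scale=600.0, size=n)        # bulk regime
  m = rng.random(n) < 1e-4                           # ~0.01% of arrivals
  gaps[m] = rng.exponential(scale=60_000.0, size=m.sum())  # disruption regime

  pure = rng.exponential(scale=600.0, size=n)        # pure-exponential baseline
  print(pure.mean(), gaps.mean())                    # ~600 s vs ~606 s
  print(np.quantile(pure, 0.99999), np.quantile(gaps, 0.99999))
  # extreme quantile: roughly 7e3 s vs 1.4e5 s with these made-up numbers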


It's been a while, but I just read an article arguing the case that Len Sassaman was a Satoshi. It was a neat article, so I watched one of Len's Defcon talks about remailers from waaay back in the day.

In his talk, Len mentioned that most remailer security analysis assumes homogeneous Poisson email arrivals. He pointed out how bad an assumption that is for email.

I still think it was a solid assumption in the Bitcoin white paper.

https://leung-btc.medium.com/len-sassaman-and-satoshi-e483c8...


Feynman's lectures on the character of physical law should be something everyone sees in school.

   In general we look for a new law by the following process: first we guess it, then we compute the consequences of the guess... and then we compare those computation results to experiment.  

   If it disagrees with experiment it is wrong. In that simple statement is the key to science.  It doesn't make a difference how beautiful your guess is.  It doesn't make a difference how smart you are, who made the guess, or what his name is.  If it disagrees with experiment it's wrong, that's all there is to it.

   Notice however that we never prove it right... In the future there could be a wider range of experiments, or you could compute a wider range of consequences, and you may discover then that the thing is wrong. That's why laws like Newton's laws for the motions of planets last such a long time... It took several hundred years before the slight error in the motion of Mercury was observed.
https://www.youtube.com/watch?v=EYPapE-3FRw
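Since the passage reads almost like pseudocode, here is a toy of Feynman's loop in Python. Everything is invented for illustration: the "guess" is the pendulum law T = 2*pi*sqrt(L/g), and the "experiment" is simulated with noise:

  # Toy of guess -> compute consequences -> compare with experiment.
  # The "experiment" here is simulated; all numbers are invented.
  import math, random

  random.seed(42)
  g_guess = 9.8                              # the guessed constant in the law
  for length in (0.5, 1.0, 2.0):             # pendulum lengths in metres
      predicted = 2 * math.pi * math.sqrt(length / g_guess)
      measured = 2 * math.pi * math.sqrt(length / 9.81) + random.gauss(0, 0.01)
      # if the disagreement exceeds the measurement error, the guess is wrong
      print(length, round(predicted, 3), round(measured, 3))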


Then there is the Asimov quote : "John, when people thought the earth was flat, they were wrong. When people thought the earth was spherical, they were wrong. But if you think that thinking the earth is spherical is just as wrong as thinking the earth is flat, then your view is wronger than both of them put together."

https://chem.tufts.edu/answersinscience/relativityofwrong.ht...

A good scientific model is less wrong than the one preceding it.


The key to the quote above is that nothing is proven right.

That is the fundamental flaw of science; if you understand this, you understand science.

In science and therefore reality as we know it, nothing can ever be proven true. It is fundamentally impossible. Every single claim made by every single scientist since the beginning of science has never ever been proven true.

This “flaw” is what drives most of the debate and misunderstandings about science in the cultural and political arena. For example... No scientist can ever prove global warming to be true... it is fundamentally impossible. Hence the debate rages on endlessly.


>> In science and therefore reality as we know it, nothing can ever be proven true.

In engineering we take the science and use it to predict how a design will work. When things work as intended it is a confirmation that the science is useful, if not "correct".

To me that is what science does. It allows us to make useful predictions that can inform decisions.


No that is not science.

Science is a process of falsification. You have a hypothesis and you attempt to falsify the hypothesis.

What you're describing is a model that has stood the test of falsification. Someone comes up with a mathematical model that can predict the future. They test that model scientifically to see if they can falsify it.

If they fail to falsify the model then at best they say this model might as well be true because we couldn't determine otherwise.

That is science.

When you use the model to predict how something will happen in the real world for some real-world application, that is called "engineering."

Engineering is application, science is best attempt verification.


I think you're splitting hairs, but ok. You describe the process of science and how it produces models. I described how we use those models in engineering - not the science I suppose, but the output (useful models) from it. In a sense, engineers are hoping their efforts confirm the model, not falsify it :-)

I think there is some important stuff in this, so maybe splitting hairs to explain accurately is important.


I'm not splitting hairs.

There are people called "scientists" who test models with scientific experiments. Then there are people called "engineers" who use the models. If our social structure splits the difference by occupation, how am I splitting hairs?

Literally if you didn't run an experiment with a hypothesis you aren't doing science, and people in society therefore won't refer to you as a scientist.


Every engineer tests a hypothesis by designing things based on that hypothesis and then testing that they work as intended. We tend to use the hypotheses that have already been elevated to the status of theory, though.

I've done some greenfield science as an engineer, and plenty of scientists do some engineering. Titles don't really mean that much to me. It's all sort of a continuum and our place on it isn't strictly defined.


Ok let's be real. You're the one splitting hairs now. There is a clear difference between what either occupation does but of course sometimes a scientist needs to do carpentry or machinist work to build the tools for his experiment. Does this make machining and carpentry the same thing as science? No.


Your arrogance is astounding.


And denialists exploit the fact that science isn't a process of proving things true. Scientific American recently had an essay on the Denialist Playbook that I found interesting - https://www.scientificamerican.com/article/the-denialist-pla...


For sure. If someone could come up with a way to prove anything definitively true, most of these arguments would be over.

It is this flaw in science that is the origin of all the controversial debates that surround science, religion, global warming, and other contested topics.

Nothing can be proven and therefore reality will always be open to interpretation.


Just to clarify, things can be proven false, they just can't be proven true. That's why we speak of a good scientific theory being falsifiable - there must be a way to show it's not true.

The Scientific Method then becomes a process of incremental refinement. Newton's theory of gravity didn't become "wrong" just because Einstein's theory was more accurate - heck we still teach, and use, Newton's theory to this day in both high school and college. It's useful. We just know that all bets are off when velocities approaching the speed of light are involved. Turns out that's not usually the case in everyday situations. We suspect that Einstein's theory may not be the last word on gravity either - but the cases where the theory breaks down are getting more and more extreme. It's not like apples are going to start falling up from trees because we changed our theory!


Yeah I'm already aware of this. Almost no one else is though.

Keep in mind though, incremental refinement is not a given. There is still no way to know if each new theory is actually closer to the truth. In fact it may even be a step backwards.

Additionally limited accuracy in observation tools actually make it impossible to falsify anything either. Can you trust that the critical observation was 100% accurate? But tbh this is just splitting hairs.


> Incremental refinement is not a given

I was wondering if you were going to call me out on that!

> Additionally limited accuracy in observation tools actually make it impossible to falsify anything either

That's a problem with GR too - it's true only within our current ability and accuracy to test that it's true. Better measurement may suddenly reveal something to not be true that we currently believe to be true.

c'est la vie!


I have seen so many people interpret the standard cautious phrase "there is no evidence for..." to somehow mean "there is evidence against..."


> No scientist can ever prove global warming to be true

Or to be false. Focusing on "true" vs. "false" leads nowhere except, as you note, to endless debate.

The real question is, do we have models that can make reasonably accurate predictions? That is the right question for ordinary citizens, concerned about what, if any, political policies they should support or oppose, to be asking about any scientific claims.

In the case of global warming, the answer to that question is no; we have models, and they make predictions, but the predictions aren't very good, and they haven't gotten any better over the last few decades despite a lot of effort. That means we should be very careful putting much confidence in those models.


Falsification in science is possible.

If I hypothesize that all zebras have stripes then observing one zebra with spots falsifies the entire hypothesis. This is definitive.

However, no amount of zebras that I observe with stripes can ever prove my hypothesis correct. I can observe 500 zebras all with stripes and at any point in time the 501st zebra can have spots. I can observe 1 billion zebras with stripes and the possibility still remains open that the next zebra I see has spots.

Science is not symmetric. Falsification is possible, proof is not.


> Falsification in science is possible.

Falsification of really, really simple hypotheses (like "all zebras have stripes", or the classic "all ravens are black") is possible, yes.

But no hypothesis of any real significance in science is that simple. In any scientific model of any significance, there are always ways to patch the model to account for new observations. In a model that is destined to be supplanted, the patching gets more and more cumbersome and less and less plausible over time; but there is no hard and fast breaking point at which there is too much patching, that's always a judgment call about which different scientists can disagree. Also, unless and until there is some new model available that can account for all the same observations, including the new ones, in a simpler and more intuitively plausible way, preferably also with more accurate predictions, scientists will continue to try to use the old patched model because there is no alternative.


Except science _can_ set error bounds. No, global warming can never be "true" from the perspective of 100% accurate with no room for error. Science is able to set error bounds though. The latest particle physics results come with "this theory is correct to within 99.99% of the theoretical model". That means that even if the theory is wrong, it can account for 99.99% of all observations you make. That is the power. Obviously climate change models aren't like particle physics so we don't expect such tight error bounds (too many sources of error). However, the question you must ask is "are the error bars sufficient for the decision I have to make?".

My conclusion thus is that unlike dietary science or pop psychology, the evidence here is very likely real, despite any doubts that may persist. I could of course be wrong. That's a fundamental truth one has to admit to oneself & other peers who understand how science works. That's not a political truth that would make sense to admit because saying "I can never know but I'm pretty sure" has been weaponized into "you don't know anything & you're wrong". Same reason when speaking among science-literate friends I say "global warming" but am careful to say "climate change" to everyone else because the warming aspect got corrupted into "you said it's global warming but winter was especially cold this year".

Maybe there has been massive social pressure within the scientific community toward confirmation bias. Certainly the community does self-police & ridicule anyone who doesn't subscribe to the consensus. Why do I think the theory is reliable & it's not a massive conspiracy (intentional or otherwise)? I trust that there are lots of very bright people who have studied the math, from within & without the field, & validated that the models don't have any fundamental issues of any kind (conceptual, numerical issues, computer sims are solid, etc). I trust that people outside of the field who have related degrees have validated huge swaths of it. I trust that technological advances have provided us with exponentially better sensors & monitoring and that has fed back into exponentially better simulations to validate models. And with all of that development, over the course of 20 years, the results haven't changed & no new hypotheses have borne out.

The scientific community is both small & large. It's large in that there will always be some amount of bad behavior somewhere, bad or unethical science, wrong results, mistaken theories that take hold for a time, etc. It's small though in the sense that when there's a very real problem, it can be brought to light & it's really hard to suppress that knowledge. That's why the chorus about global warming has been increasing. Oil companies knew about this as a problem & suppressed their own voices in the debate because it would hurt their bottom lines. In fact, they often fund the opposition. This isn't a conspiracy theory. There are court documents showing this.

A well-informed person can use all of this knowledge to make a guess about who to listen to. A less-informed person will let themselves be swayed by the opinion they want to hear, or use it as a "fuck you" to the scientific community for ruining their career choice.


No. This is incorrect. You're talking about model accuracy and observation accuracy.

I am talking about something far more fundamental. Using axiomatic logic and probability, you cannot prove anything in science, even with observations that are 100% accurate and 100% precise. This literally has fundamental consequences for our interpretation of reality as we know it, and it has shaped our perception of reality and our science as well.

This occurs because at any point in time a new observation can be made that falsifies a theory. Let's say you have a hypothesis that all zebras have stripes. You can observe 500 zebras with 100% accuracy and see that all of those zebras have stripes. But at any point in the future you can happen upon a hidden island with 2 million zebras that have spots instead. 500 observations is minuscule in the face of 2 million, and it literally renders your initial hypothesis ludicrous. "Zebras are creatures that are more likely to have spots than stripes" is the complete U-turn conclusion based on the new observations.

Keep in mind the new conclusion occurred regardless of how methodical and accurate your initial observations were. The accuracy of the observations is completely and utterly irrelevant, because at any point in time I can encounter yet another new island with 1 billion zebras with stripes, rendering my second conclusion completely wrong, again.

This is the fundamental flaw of science. It is far more fundamental than limited accuracy in observational measurements.

For example, take Newton's laws of motion. There is no 99% right or wrong on that model. Assuming that our observations are accurate, Newton's laws of motion are 100% wrong.

Yes, they may be accurate numerically to a certain extent, but the theory has ultimately been falsified, and we now know relativity is a more accurate description. However, keep in mind that even relativity is not "proven"; it can never be proven, and it will always be open to a complete reversal the same way Newtonian motion was.

In fact, Newton's laws of motion are the perfect example. They were the ultimate example of scientific verification. All experiments pointed to the theory being completely accurate; to disbelieve the science was to disbelieve reality. It was at the time equivalent to disbelieving evolution.

This is the fundamental flaw of science. Nothing can ever truly be proven. Everything, even the fundamental pillars of reality we rely on today, from Newton's laws to evolution, can never actually be proven to be true and is always open to a complete rewrite.

This is the exact reason why people can pick and choose the reality they believe in, whether it be Christianity or evolution. Neither can in actuality be proven; nothing has been and nothing ever will.


You seem to be very far down the nihilism philosophy. I have a viewpoint that nihilism isn't a useful philosophy & doesn't yield any particularly meaningful insights that help you find success in this world. It's very much, at least to me, of the same vein as the Omphalos hypothesis (also known as Last Thursdayism by atheists such as myself) which says "Sure sure. You've got all these fancy theories. But how do you *know* the universe wasn't created in its current state Last Thursday & so all your measurements are meaningless?".

Worrying about an epistemological definition of "truth" that is different from the scientific one is equally unhelpful. Scientific philosophy, & the inquiry stemming from it, actually yields results in any field you look into & builds on itself. Worrying about a higher-order definition of truth and certainty that only exists in your own mind (since no two people will agree) is irrelevant & unhelpful. Medicine has come a very long way from where it was & our understanding of it is drastically better than it was. Is it perfect? No. Is it infallible? No. Does it matter? Not really, because at the end of the day it's infinitely better than where we started & continuing along this path will continue to yield results over time.

> In fact, Newton's laws of motion are the perfect example. They were the ultimate example of scientific verification. All experiments pointed to the theory being completely accurate; to disbelieve the science was to disbelieve reality. It was at the time equivalent to disbelieving evolution.

I'm always fascinated by people who claim that Einstein's theory of relativity somehow undermines the bedrock of scientific inquiry when it's 100% the thing that supports it. Newton's theories weren't wrong. They were 100% correct for the environments we were testing them in. Like all the equations behind the theory of relativity: if you turn down the speed & mass variables to everyday human values, they literally turn into the classical Newtonian mechanics equations. The *only* instance in which you should be questioning the scientific validity of a field is when there are competing theories & the experiments themselves don't really help make decisions. Like dietary science. That's a field that constantly produces contradicting results. There's definitely some good advice but it's mostly hokum except for the parts that actually intersect with medical research or have really wide studies done, because of the problems of limited observations. Same with pop psychology & other human-centered inquiries that don't have external sensors against which to measure results & large sample sizes to deal with the variation. Physical inquiries suffer very few of these problems & are easier to experiment with.

If it helps you, the scientific method of inquiry is in some ways directly supported as a fundamental tenet of mathematics (via the fields of probability/calculus). If you sample an underlying distribution enough times with a random enough sample (no bias that's causing you to overlook things), the more the samples match your estimate of the distribution, the less likely it is that your estimate and reality diverge. That resolves, at least for me, the philosophical conundrum of "what is truth" and "have you really done enough measurements". For religious arguments, your form of argument is "the God of the gaps" or "God of the cracks". If you just focus on a crack, all you can see is all that empty space & not the bridge that the crack is a non-critical part of. Even science's philosophy is underpinned by a mathematical truth & our challenges sticking to it are our own failures, not those of science. I recognize this sounds like religion, but the difference is:

* Falsifiability. Good scientists will very quickly discourage any attempt at scientific inquiry of anything that can't be disproven through experimentation.

* Free sharing of knowledge. We're not as great here because of the economic realities of our society, but certainly better than religious organizations that tend to have more of their documentation in private vaults. That being said, this is the most fair point of criticism against scientific inquiry for me & the one where today's scientific industry gets closest to religion.

* Consistency of conclusions. It doesn't matter if a discovery fails to take hold. Over time the same thing gets rediscovered eventually. Like calculus being simultaneously invented by Newton & Leibniz. Good ideas just have their time & inevitability comes from a build-up of knowledge. Religions don't really share this property. Neither does philosophy, which just has a bunch of models & no way to model/investigate them. Philosophy is useful as a hypothesis generation machine or maybe as a way to examine how humans can improve the scientific field. That's about it & we need to be quick to discard it when science starts providing answers.

* Belief or lack of it doesn't matter. Science is about making predictions. If the predictions are based on faulty science, they'll not hold up over time. If the predictions do hold up, then they're more likely to be right. Probability is where this gets tricky, especially so when polling human sentiments. That is walking a razor's edge.

Still, science is the only philosophy that's actually yielded tangible results consistently over any period of time. Religion & other philosophies have not.


> You seem to be very far down the nihilism philosophy. I have a viewpoint that nihilism isn't a useful philosophy & doesn't yield any particularly meaningful insights that help you find success in this world. It's very much, at least to me, of the same vein as the Omphalos hypothesis (also known as Last Thursdayism by atheists such as myself) which says "Sure sure. You've got all these fancy theories. But how do you know the universe wasn't created in its current state Last Thursday & so all your measurements are meaningless?".

What I'm talking about isn't a philosophy. This is the fundamental tenet as illustrated by academia. I'm not pulling this out of my ass. This is what educated scientists understand about science. If you don't know this you literally don't know what you're talking about. I am not arguing my opinion here, I am arguing the academic definition of science.

To prove it to you I'll literally quote Einstein:

  "No amount of experimentation can ever prove me right; a single experiment can prove me wrong."
If you don't understand why he said the above quote, you don't understand science the way a physicist or a scientist understands science. In fact, the above was said in reference to Einstein's and Newton's theories.

What Einstein is basically saying is this: science can never prove anything to be correct. It can ONLY falsify things.

>I'm always fascinated by people who claim that Einstein's theory of relativity somehow undermines the bedrock of scientific inquiry when it's 100% the thing that supports it. Newton's theories weren't wrong. They were 100% correct for the environments we were testing them in.

You're fascinated that the entire academic definition of science is different from your own personal definition? Your misunderstanding of science is the real enigma here.

Nothing is undermined. I'm not against science; I am simply elucidating to you what science is, in the sense that science can never prove anything to be true. Science can ONLY falsify things. It is a very limited tool, but it is also the only tool we have.

I'm an atheist like you, I get where you're coming from. But you have not explored science deeply enough. Look deeper into this, as you are not understanding what is going on here. I am not arguing for religion or creationism or any of that BS as "valid". I am simply stating a fundamental, well-known flaw of science that is known by all people who know the technical definition of science.

Additionally, Newton's theory is 100% wrong in every environment. It only appears to be correct given the limited accuracy of tooling. When you increase the accuracy of the observation, the environment is irrelevant; it is always wrong.

>If it helps you, the scientific method of inquiry is in some ways directly supported as a fundamental tenet of mathematics via calculus.

This is highly highly misguided. Logic and Science are completely separate. This is well known among people who understand the concept.

Logic is a game with rules, axioms, and a well-understood domain. We create the rules and the universe, and therefore we're able to prove things within that universe.

Science is not the same. It is not an axiomatic game created by us. Science is the consequence of applying certain assumptions to a universe we did not create but only participate in.

We assume two things to be true in science. We assume logic is true, and that rules like induction will always work even though we have no means of verifying that they will. We also assume probability works. We assume rolling a six-sided die will produce a certain outcome based off of probability, and we again currently have no way of verifying why or how this occurs. We just assume it.

Based off of these two assumptions we can create the scientific method. But this method is limited, as it can only axiomatically falsify things. We can never prove anything to be true with science. This occurs, again, because the domain of the real world is not limited like it is in our logical games of math. At any point in time the domain can change or shift, and we can encounter a new, unexpected observation that changes the entire arena.

Again, this isn't some BS I'm pulling out of my ass. This is science as Feynman and Einstein understood it. You lack understanding and I suggest you read up on the notion of what "proof" and science is.

Proof is only relevant in maths and logic, it is irrelevant in science and therefore reality as we know it. Science is the best tool we have but it is highly highly limited in the sense that it can never actually prove anything.

What ends up happening is science at best produces conclusions in the form of "We think this is true because our repeated attempts to falsify this hypothesis have failed." It can never produce anything definitive.


I find this too simple, because it assumes the experiment is correct. Experiments can have errors, and being able to doubt experiments, especially when you have disagreements between experiments, is a very important step in science.

This isn't to say that one should hold to a belief without any evidence to back it, only that we consider the possibilities that experiments themselves are flawed and take that into account, such as by designing seemingly unrelated experiments to test a single guess.

To give another Feynman quote.

>We have learned a lot from experience about how to handle some of the ways we fool ourselves. One example: Millikan measured the charge on an electron by an experiment with falling oil drops, and got an answer which we now know not to be quite right. It's a little bit off because he had the incorrect value for the viscosity of air. It's interesting to look at the history of measurements of the charge of an electron, after Millikan. If you plot them as a function of time, you find that one is a little bit bigger than Millikan's, and the next one's a little bit bigger than that, and the next one's a little bit bigger than that, until finally they settle down to a number which is higher.

>Why didn't they discover the new number was higher right away? It's a thing that scientists are ashamed of—this history—because it's apparent that people did things like this: When they got a number that was too high above Millikan's, they thought something must be wrong—and they would look for and find a reason why something might be wrong. When they got a number close to Millikan's value they didn't look so hard. And so they eliminated the numbers that were too far off, and did other things like that ...

https://en.wikipedia.org/wiki/Oil_drop_experiment#Millikan's...


> In general we look for a new law by the following process: first we guess it, then we compute the consequences of the guess... and then we compare those computation results to experiment. If it disagrees with experiment it is wrong. In that simple statement is the key to science. It doesn't make a difference how beautiful your guess is. It doesn't make a difference how smart you are, who made the guess, or what his name is. If it disagrees with experiment it's wrong, that's all there is to it.

That elides a lot of complexity, mostly around the validity of the experiment design and execution. Confounding factors, confirmation bias, selection bias, p-hacking, etc. can all skew the results in ways both overt and subtle.

The following quote from the movie 'Thank you for smoking' may illustrate:

"This is where I work, the Academy of Tobacco Studies. It was established by seven gentlemen you may recognize from C-Span. These guys realized quick if they were gonna claim cigarettes were not addictive they better have proof. This is the man they rely on, Erhardt Von Grupten Mundt. They found him in Germany. I won't go into the details. He's been testing the link between nicotine and lung cancer for thirty years, and hasn't found any conclusive results. The man's a genius, he could disprove gravity."

So no, 'if it disagrees with experiment it is wrong' is a gross oversimplification.

Not to mention that there may be very good reasons that an experiment can't (or shouldn't) be done, or that various 'natural experiments' must be relied upon.


I like it, but in every theory there is always some edge case where the theory doesn't work. In a sense, every model in natural science is wrong, but some are more right than others.


It seems to me this is exactly the "science as inquiry" model that the OP is arguing against.


Treating the words of authority like scripture is what got us into this mess.


I feel like authorities betraying public trust is a large part of what got us into this mess. I trust the scientific process. I don't trust the people involved in the process and in charge of making and following policy based on scientific and technological process.


One thing that allows for this is the oft neglected "method of discovery".

The usual "scientific process" concerns itself with how to keep an experiment correct/truthful - but how do we decide what experiments to conduct in the first place? Scientific funding can be biased towards the successes of the past, and so future funding can be guided by political agendas.


I think you are missing the point.

Pop culture thinks science means space, chemicals, and electronics. We aren't even hearing the words of authority in this case. A scientist describing what science is doesn't sound like indoctrination or blind faith to me.


Science is a method for understanding the world, not an ideology.

I do give most non scientific people the benefit of the doubt. They are at least trying. What is really disturbing is that this is a real problem within sections of the scientific community and especially within the public relations of the scientific community. “Science Communicator” is almost synonymous with this trap. I think it stems from oversimplification and a need to generate funding for research.


Agreed, let’s not!

What part of what Feynman said do you disagree with?


It boggles the mind to see this quote by Feynman when the entire premise of the article is that this is already how we teach science and it leads to a whole host of problems.


I read the article and disagree with its conclusions about science education. I think science education isn’t actually taught the way the article says it is. At least that’s not how I was taught it in good public schools until college, at which point it was taught the way Feynman says. If it actually was taught that way at earlier points, I think we’d be in a better place.

We teach a bizarro world version of Feynman's quote, where it's more like the way we teach the formulas in math. It's the right formula that they're teaching, but the way it's taught encourages its use as a black box.


Better, for sure, I agree. But is it enough? The OP argues it's not.


Asking if people read the post is against HN guidelines:

‘Please don't comment on whether someone read an article. "Did you even read the article? It mentions that" can be shortened to "The article mentions that."’

https://news.ycombinator.com/newsguidelines.html


I read the article. It seems to be an argument that the problems with trust in science come from teaching a kind of logical empiricism, itself a straw man, and not what you’d come away with from listening to Feynman.

There is nothing in the article that supports this claim about the cause of the ‘problem’ or even a really usable understanding of what the problem is, other than that cigarette companies have been able to make people doubt things they shouldn’t have doubted, and that climate deniers and other kinds of ‘deniers’ are doing the same building on the work of the PR firms used by the cigarette companies.

The best I can read as what the problem is, is that we think that teaching people differently could make them immune to propaganda, and so the problem is that we aren’t teaching them differently. I think this is highly questionable.

It argues that we should instead teach science based on a feminist critique.

As far as I’m concerned the article is complete bullshit. It sets up a straw man, and then argues to a completely unsupported conclusion. It’s terrible.

Having said that here’s what it touches on that is useful:

Teaching scientific method alone as ‘science’ is outdated. Science is part of public discourse and so it is important for students to understand how science works as social, sociological, and political processes, as much as it is an epistemological method.

From this point of view, I respect the various ideas that they are advancing as valid for study. However ‘valid to study’ is very different from attempting to claim that science education should be based on this ideology, and is an obvious political land-grab which must be rejected.

The article makes a bunch of naked assertions as if they are simply facts about reality, when in fact they are very much the subject of social science itself.

Consider these statements:

> Believing based on trust is a pervasive human practice, not confined to scientific inquiry.

Seems true enough, right?

> It starts in infancy, when children learn language, everyday facts, and even religious beliefs from their caregivers.

Does it? This is definitely not settled science. Obviously children learn from caregivers, but what they trust is a deeper question, and a subject of study. Also, the sources are far wider than ‘caregivers’.

This seems like an attempt to anchor the conversation in a blank slate kind of conception of the emergence of belief. This is discredited in social science, but is popular in the humanities.

> Trust is only as reliable as the source of the knowledge; when that source is unreliable, we sometimes regard beliefs based on trust as the product of indoctrination.

‘Sometimes’ being the operative word.

> Trust can be eroded when there is evidence of the unreliability of the source.

Yes, but research has shown that it can be strengthened when there is evidence of the unreliability of the source, too. It depends on other factors, such as who is providing the evidence. Again this statement is simply not objective or reflective of current social science.

> Reflective knowledge should therefore include some account of the reliability of the source of knowledge.

If you delete all of the preceding elements, and just say:

Reflective knowledge should include some account of the reliability of the source of knowledge.

We are left with a reasonable proposal.

Weirdly, one which is already common amongst science students, who don’t ignore the reliability of the results on which they base their work.


I once thought that by leaving my old religion I was leaving dogma behind. But it appears that in the absence of one dogma/doctrine, people will create another.


1. The mind always tries to simplify. One way that manifests is in dogma. Many people are satisfied, or just don't have the time and energy, to continually update or critically assess their own beliefs (not that this is justified, just an explanation). This is the entire reason formal science education exists at the highest levels. We didn't evolve to be critical or rational. But at our best we can learn to approach it collectively.

2. Many people think that what they learn in school or from other authorities is immutable fact and that's the end of discussion. This is an unfortunate failing of the education system.


People are people. Take away any differences in appearances, beliefs, communities, etc. and people will still find reasons to divide up and seek to destroy the outsiders.


> Since nobody knows how to reliably ship secure commercial software, liability will mostly have the effect of making it difficult to start new software businesses.

I think you would agree that having critical string parsing logic written in C and shipped in an opaque binary is both negligent and more dangerous than a reasonable person would expect. Traditional products liability claims focus on those kinds of questions: was there negligent conduct? is the product more dangerous than a reasonable person would expect?

Attaching some kind of liability is a good way to encourage the industry to settle on standards of non-negligent conduct. If you provide vendors an opt-out mechanism (e.g. disclose your source and there's no liability), it will not dramatically interfere with the cadence of development.

Common law is very well suited to establishing standards in a dynamic environment. It got us through the first industrial revolution.

