Hacker News | insuranceguru's comments

That profit-optimal caricature is what we call moral hazard in risk management. When a system is optimized purely for short-term extraction, it offloads the long-term tail risk onto the consumer or society. We see this with cheap IoT devices that get zero security updates: the manufacturer saves $0.50 on a chip, and the consumer eventually pays for it in identity theft or botnet attacks. It’s an externality that isn't priced in.

The fiduciary model is the only regulatory framework that actually scales. In insurance (my field), we see the difference daily: a captive agent works for the carrier while an independent broker often has a pseudo-fiduciary duty to the client. If we applied that to data, your AI assistant would legally have to prioritize your privacy over the vendor's ad revenue. Right now, the incentives are completely inverted.

Thanks for the link. I missed that original discussion. It’s fascinating to read the 2023 takes now that we are actually living through the scaling phase he predicted. The concept of "AI betrayal" feels even more relevant today than it did then.

it’s definitely one of those evergreen engineering disasters. I fell down the rabbit hole reading about it again recently because the core issue (replacing hardware interlocks with software logic) feels eerily relevant to what we are seeing in the automotive/EV space right now.

wow, The Home Accountant is basically the great-grandfather of everything we do in modern financial and actuarial modeling. dmitry's breakdown is like digital archeology.

it’s wild to think about the hardware risk people used to accept putting your entire household's financial history on a system that bricks itself the second a 40-year-old plastic dongle fails. really great read.


The concept of an agent internet is really interesting from a liability and audit perspective. In my field (insurance risk modeling), we're already starting to look at how AI handles autonomous decision-making in underwriting.

The real challenge with agent-to-agent interaction is 'provenance.' If agents are collaborating and making choices in an autonomous loop, how do we legally attribute a failure or a high-cost edge-case error? This kind of experimental sandbox is vital for observing those emergent behaviors before they hit real-world financial rails.


This is a social network. Did I miss something?

Humanity is a social network of humans. Before humans started getting into social networks, we were monkeys throwing faeces at each other.

The interesting downstream effect of this 100% adoption will be the secondary market and insurance.

Right now, even minor accidents that touch the battery pack often result in a total loss because there is no standardized way to verify battery integrity or repair individual cells safely at scale. If Norway figures out the circular economy for used/damaged EVs before the rest of us, that will be the real breakthrough.


Eastern Europe is ready :) . Most small to medium crashes get repaired one way or another. I was baffled when, five years ago, I was repainting a wing on my ICE car and the guys showed me how they reconstruct aluminum Tesla wings (supposedly those are unrecoverable in the West).

It's the standard municipal playbook now: obscure the deal until the ground is broken to avoid NIMBYism, then present it as a fait accompli for jobs. The interesting part will be the resource strain. These centers guzzle water and power at a rate most small municipal grids aren't scoped for. I wonder if the secrecy deals include clauses about priority access to utilities during peak load events?

Do data centers create that many jobs? Especially if you break it down by jobs per sqft, I can’t imagine it compares well to any other type of industrial development

That's exactly the issue. The jobs are front-loaded in construction. Once operational, a massive data center might only employ 30-50 high-skill technicians.

Compared to a factory of the same square footage that might employ 500+ people, the 'jobs per megawatt' ratio is terrible. It's essentially renting out the local power grid to a remote entity, not creating a local economy.
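A quick back-of-envelope on that density claim; the headcounts and footprint are the rough figures from the thread, assumed purely for illustration:

```python
# Employment-density comparison for a same-sized building (all figures assumed
# from the thread's rough numbers, not from any specific facility).
SQFT = 500_000                 # footprint shared by both scenarios
dc_jobs = 40                   # steady-state data center technicians
factory_jobs = 500             # factory workforce at a similar footprint

dc_density = dc_jobs / SQFT * 1000          # jobs per 1,000 sqft (0.08)
factory_density = factory_jobs / SQFT * 1000  # jobs per 1,000 sqft (1.0)
advantage = factory_jobs / dc_jobs          # factory employs 12.5x more people
```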


Unlike enterprise datacenters, systems inside these datacenters are tightly coupled to the compute system design to eke out PUE, so network cabling, electrical, and (to a lesser degree) cooling get reworked every 3-5 years. On a campus with several data halls, that means work for those trades well beyond initial construction. Sure, you don’t have the steel and concrete work that went into the shell, but it’s more than a handful of operations people.

From the 00s to mid 2010s I did fiber splicing in factories from Kenosha to Beaver Dam and even then they were fairly well-automated to the extent that I’d see just a few people on the factory floor moving carts of metal between machines or handling shipping and receiving.


They bring in temporary construction jobs but once running they provide no meaningful jobs.

If we just want to front load a bunch of construction jobs, I vote for some megalithic stone structures.

Let’s give something to the archeologists 5,000 years in the future.


Aside from the initial construction, you need a few shifts of dc techs (for remote hands, running data cables, escorting vendors), electricians, and security. Not much else really needs to be done onsite.

You might have an electrical engineer on staff for planning and management but most of the actual work (and plumbing, HVAC) will be contractors hired as needed.

They neither directly create many long-term jobs nor use copious amounts of water.

If we haven't collectively established at this point that LLMs, data centers, "AI", "the next industrial revolution" are created and controlled by the wealthiest people in the world, and said people don't give a fuck about anything but money and power, we're hopeless. The elite don't care about jobs, or water. At all.

If I were wrong, the whole charade would have been shut down after LLMs convinced people to kill themselves. We have regulations on top of regulations in all corners of the US because of the "Safety" boogeyman.

I wish we had the same riots about LLMs that we do about other things. If this isn't the biggest evidence yet that social unrest is engineered I'm not sure what would be more convincing.


> use copious amounts of water.

If you're in Europe and/or using completely closed-loop systems, then yes. Your only water use is humidifiers, and maybe the sprayers on your dry coolers in the summer months.

On the other hand, if you spray water into the air as your heat-absorption mechanism (evaporative cooling) or use open-loop external circuits, you're using literally tons of water.

Source: Writing this comment from a direct liquid cooled data center.


> If I were wrong, the whole charade would have been shut down after LLMs convinced people to kill themselves.

I hate this argument, and every time I see it in the news it feels like propaganda to me. Everything has risk. People have been committing suicide off Google searches for years. There are thousands of fatal car crashes a year. Does that mean we should just abandon progress and innovation? It seems like a fragile argument made by people who dislike LLMs for other reasons.


Propaganda? Did people kill themselves at the direction of an LLM or not?

That's like saying ICE outrage is propaganda, and is, at best, insulting to the memory of those lost.

Brushing this point off seems more like propaganda than acknowledging it does.

LLMs are neat tools. They can do some neat things. Dynamite is also pretty cool, and it can do some neat things. How many more people need to get "blown up" by LLMs before we un-brainwash ourselves? At least one more I guess.


Comparing ChatGPT to ICE and dynamite is reaching… my hunch is that most of the people who killed themselves at the direction of an LLM were already mentally unstable. What about the people who were planning on committing suicide and were talked out of it by LLMs? Are we counting those anywhere? If it’s truly causing a suicide crisis I would imagine the rate of suicide would be spiking. Is that the case?

> my hunch is that most of the people who who killed themselves at the direction of an LLM were already mentally unstable

Your hunch is "meh, couldn't be helped?" :(


Yes, exactly

> These centers guzzle water and power at a rate most small municipal grids aren't scoped for

Source?

Here's why I think this is wrong

"A typical (average) data center's on-site water use (~9k gal/day) is roughly 1/14th of an average golf course's irrigation (~130k gal/day). Totals: on-site data center freshwater ~50 million gal/day; golf course irrigation ~2.08 billion gal/day."

On both local and global levels - golf uses significantly more water than data centres.
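The quoted figures are at least internally consistent; a quick arithmetic check (numbers taken from the quote above, not independently verified):

```python
# Per-site comparison, figures as quoted in the parent comment.
dc_per_site = 9_000           # gal/day, typical data center on-site use
golf_per_course = 130_000     # gal/day, average golf course irrigation
per_site_ratio = golf_per_course / dc_per_site   # ≈ 14.4, i.e. "roughly 1/14th"

# Aggregate comparison, figures as quoted.
dc_total = 50e6               # gal/day, all on-site data center freshwater
golf_total = 2.08e9           # gal/day, all golf course irrigation
national_ratio = golf_total / dc_total           # ≈ 41.6x more for golf
```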


> These centers guzzle water and power at a rate most small municipal grids aren't scoped for.

Are you NIMBYing for our AI overlords which will replace all the work we do and give us unlimited prosperity at the push of a button?

This incident will be reported. /s

On a more serious note, when the last tree is cut down, the last fish eaten, and the last stream poisoned, we will realize that humans cannot eat money (or silicon for that matter).


Ha, point taken. But the 'NIMBY' argument is interesting here because unlike a housing development (which uses local resources for local people), a data center extracts local resources (water/power) to export value globally. It's an extraction economy dynamic, just with electrons instead of ore.

OP here. I dug into this comparison because I kept seeing complaints about battery drain with the OBD-II dongles.

It seems the issue is that the sleep-mode logic on the cheaper insurance-issued dongles often conflicts with the ECU stay-awake times on modern vehicles, effectively keeping the car awake to ping the server.
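For a sense of scale, here is a back-of-envelope sketch of why holding the bus awake matters; all figures are assumed (typical 12 V lead-acid numbers), not measured from any particular vehicle or dongle:

```python
# Rough drain estimate: parked car with vs. without a dongle holding the
# ECU network awake. All numbers are illustrative assumptions.
battery_ah = 60.0       # battery capacity, amp-hours
usable_frac = 0.5       # don't discharge below ~50% and still expect to crank
sleep_draw_ma = 30.0    # parasitic draw with all modules asleep
awake_draw_ma = 350.0   # draw with the CAN bus kept awake to ping the server

def days_to_half(draw_ma):
    """Days until the battery hits the 50% threshold at a constant draw."""
    return battery_ah * usable_frac * 1000.0 / draw_ma / 24.0

# Roughly 40+ days parked normally, versus only a few days held awake.
```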

I'm curious if anyone here has experience with the raw data feeds from the mobile SDKs (like Arity or Cambridge Mobile Telematics)? I'm trying to gauge how much granular accelerometer data they actually pull versus just GPS pings.


It definitely benefits the carrier’s premium revenue in the short term.

But the counter-argument is that 'under-insurance' is a systemic risk that leads to insolvency. If a carrier collects premiums based on a $300k valuation but has to pay out on a $500k loss (due to Guaranteed Replacement Cost clauses or litigation), that loss ratio destroys the pool for everyone.

The incentive should be accurate pricing, but the market pressure to keep the sticker price low creates this dangerous gap.
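To put rough numbers on that gap (the valuations are from the comment above; the premium rate is a made-up figure for illustration):

```python
# Toy illustration of the under-insurance gap. The 0.4% rate is hypothetical.
insured_value = 300_000        # valuation the premium was priced on
actual_payout = 500_000        # loss paid under a Guaranteed Replacement Cost clause
rate = 0.004                   # assumed annual premium rate per insured dollar

annual_premium = insured_value * rate            # $1,200 collected
policies_consumed = actual_payout / annual_premium
# One such loss consumes the annual premium of ~417 identically priced policies.
```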

