Hacker News | willio58's comments

Tangentially related, but it is increasingly obvious that there's an ever-growing chasm between these two aspects of medicine in the U.S.:

- What's possible for medical professionals to do for certain conditions, in large part due to the amazing levels of investment into research and implementation.

- How difficult it is for ordinary people to receive care. Primarily due to private insurance companies intentionally making it more difficult to get care.

Like, the fact that we're successfully giving stem cell therapy to fetuses is amazing, yet any time I go to a doctor's office or bloodwork company I hear an elderly person explaining to the front desk that they've been on the same insurance for decades and only recently started receiving bills they can't afford, or the front desk person explaining that Medicare no longer covers them for a routine thing.

Ideally, we could have both great research _and_ great general care in this country. I just don't know if I will ever see that day.


I think the largest issue with health care right now is that the US is artificially shrinking the supply of doctors. This is due to:

1. Medical school class sizes not increasing with the population

2. The US having an artificially small number of residency slots

These are largely due to AMA lobbying, afaik, and bad bills. But if we allowed every qualified medical student to enroll and gave a residency slot to every graduate, we could really shrink the gap within a decade.


Does that matter, though? My impression is that most people don't see doctors anymore. Every urgent care visit I've had in the past few years has been with a physician assistant or nurse. Same for our pediatrician; I can't remember the last time we saw her instead of one of the nurses.

I actually have a routine visit with a specialist at one of the top hospital systems in the country in 2 days, and I see in the portal I'm seeing a "CRNP, MSN", not a doctor.


This effect is because of the doctor shortage, though.

I am in the process of trying to find a primary care provider, and I can't find anyone accepting new patients.

At bigger places you basically see the doctor for two minutes when you actually need one. I went to an ortho surgeon who had a dozen patients “seeing them” at the same time, as he just went between rooms while nurses prepped everything.


I went down a Reddit rabbit hole, a sub called /r/noctor. Basically people, mostly doctors, complaining about the prevalence of nurse practitioners, PAs practicing independently/outside of their scope, etc. The general consensus I see there is that the only people benefiting from this are private equity firms trying to squeeze out more profit, since they bill the same whether you see a doctor or an NP. This in turn has the effect that it no longer makes sense financially to go through so much school and take on so much debt.

The primary utility of most medical professionals is to act as a gatekeeper to distinguish me from a drug-seeker. They are glorified security guards around medication. Fortunately, I always get what I want.

As an internist (not in the US), I would like to put in my two cents to say this is just wrong.

The primary utility of most medical professionals is to diagnose and treat a condition correctly. In the ER and elsewhere, the correct diagnosis is indeed often "drug seeking behaviour". And this is also a major aspect of medicine that many relatively healthy people interface with and remember. They are in pain for whatever reason, they desire to be relieved of said pain, and that desire puts them into contact with the skepticism and hesitancy around opioids that physicians have built up out of unfortunate necessity. It's often a hurtful and protracted experience, and so they remember it and form opinions like yours.

But this area of contact with medicine is a tiny, very visible tip of a much larger iceberg. Your description of "security guard around medication" is not strictly wrong for my field, seeing as internal medicine is largely about administering the right drug at the right time, but 99% of the drugs we guard are not desirable at all to any drug-seeker. They are potent, full of side effects, and sometimes potentially deadly. But they do work. And you do not see any of this until you get properly sick, which to most people does not happen very often (at least until they approach 70). And when it does happen, most people tend to focus on the one little side of the iceberg they come into contact with. But it is there, and it is about much more than distinguishing you from a drug seeker.


No professional has ever taken kindly to being told their primary function. The notion of greater grandeur infects everyone from janitor to president. I'm not foolish enough to tell doctors these things. If I did, I doubt I'd get what I want.

There are limits, naturally. I don't really expect to fit the percutaneous pins into my hand myself, even if I had a third hand capable of equal dexterity. But if I have to sing a song, you can be sure the song is sung. It's no different from selling B2B SaaS. You just need to make the sale.


I'm sure that's at least somewhat correct, but if I offered a similar reply, I could say that amateurs rarely take kindly to being told that they do not understand what they are talking about. Dunning-Kruger is endemic, and especially prevalent in populations making reductive comments about a group of professionals they maintain an adversarial relationship with.

My point was not about the emotional experience of being presented with a certain viewpoint of the function of physicians. My point was simply that if you look at the details of what physicians actually do, the stated viewpoint is wrong.

Of course, "primary function" is a somewhat subjective concept that you could define however you'd like, so it is more or less unfalsifiable as a standpoint.


Haha that is just as true. I suppose I should say “the primary function to me of doctors who are not family members is”. They are a vending machine with a code and fortunately I know the code.

Others need to be told to “advocate for themselves”. I simply get what I want and it always works.


What exactly is the problem with giving drugs to someone who might be a drug seeker? Is it worth letting someone sit in pain on the chance you might allow an addict to get high?

Harm reduction by just giving drugs to addicts in an organized fashion is honestly a strategy that might work fine on a societal level, and I'm not against it (although I am unsure about the details of implementations). However, when your society does not practice it, and the ER/family med practitioner becomes the one point of contact for potentially cheap drugs, you run into some practical problems over time. Essentially you can't have an open "drug seekers in line B" policy due to legal issues, so drug seekers have to lie about being in pain and figure out a convincing lie.

Let us say they try to simulate acute appendicitis. If they do this convincingly, they will get an acute CT with contrast. In my hospital system these machines, and the interpretation of the resulting images, are expensive and resource-constrained, especially during the evening and at night, meaning that prioritising one patient will generally mean that another, let us say a patient in the process of having a very real stroke, might get delayed if traffic is high.

This is beyond the fact that roughly 30-120 minutes of the physician's time in the ER will be wasted examining the patient, ordering blood work and imaging, writing notes, and so on, which means that another patient's time, someone often literally waiting in line for your time, is being wasted. Furthermore, this kind of clientele has an unfortunate tendency to become unpleasant when you tell them that you can't find any reason for their pain or give opioids, which is an extremely unpleasant and frankly often traumatic experience for green doctors who enlisted in this career with the goal of aiding the sick. You can only get threatened, spat upon or assaulted so many times and maintain your professional enthusiasm. Many quit for this reason. And for the ones who don't, being forced into the role of distinguishing between drug seekers and non drug seekers will generally turn you into a more unpleasant human being.

In summary, mostly due to unfortunate societal circumstances, you really, really, really do not want to encourage drug seekers to try their luck. It is an expensive waste of everyone's time, in circumstances where both money and time are tight.

Conversely, you really cannot predict in advance which of your opioid-naive patients will become addicts because of the opioids that you gave them, which effectively means that you've fucked their life forever. Opioids are really, really dangerous. Sometimes people are obviously in pain and you open the tap quickly. But there's a name for the historical consequence of playing fast and loose with pain relief: it's called the opioid epidemic.


The largest issue in American health care is private equity and middlemen raising the cost of everything.

Edit: if doctor scarcity were the issue, then doctors would have a lot more leverage in salary negotiations than they do, which is to say they don't have much, because hiring practices are limited by what they can bill, which they have no power over.


Private equity is the effect, not the cause. We need them to create efficiency because of the shenanigans the AMA guild pulled in limiting doctor supply. Just allow people to take an exam to get credentialed; we'd have foreign doctors flown in by the hundreds of thousands and care would be as cheap as it is in India.

Private equity doesn't create efficiencies. The real world is not some MicroEcon 101 class.

> “As our investigation revealed, these financial entities are putting their own profits over patients, leading to health and safety violations, chronic understaffing, and hospital closures. Take private equity firm Leonard Green and hospital operator Prospect Medical Holdings: documents we obtained show they spent board meetings discussing profit maximization tactics—cost cutting, increasing patient volume, and managing labor expenses—with little to no discussion of patient outcomes or quality of care at their hospitals. And while Prospect Medical Holdings paid out $645 million in dividends and preferred stock redemption to its investors—$424 million of which went to Leonard Green shareholders—it took out hundreds of millions in loans that it eventually defaulted on. Private equity investors have pocketed millions while driving hospitals into the ground and then selling them off, leaving towns and communities to pick up the pieces.”

https://www.grassley.senate.gov/news/news-releases/private-e...


Private equity does not create efficiency and we do not need it. What they do is take on debt to buy healthy companies, transfer the debt onto them, and then kill them.

None of that is efficiency in any reasonable sense.


That doesn’t make sense - private equity has done the same thing in completely orthogonal industries, like manufacturing.

Ugh, I wish this braindead populist 'private equity bogeyman' meme that's taken hold of reddit-types would die.

No, private equity is not the reason healthcare costs in the US are out of control, you can even ask chatgpt.

PE is a third-tier, mild symptom in certain niche health markets that sits downstream of all the structural root issues created by the twisted public/private incentive-misalignment nightmare of US healthcare.


People would have an opportunity to change their stance if you explained why they should hold a different one, with evidence and persuasion. Berating them and then saying they are wrong without explaining why is not going to change anyone's mind.

I used to do that when HN was a more rational, thinking-man's place years ago.

It's been poisoned by the hysterical climate of US politics like everything else on the internet, so there are no thinking men left.

It's a lost cause. If I were to explain the situation rationally I would get downvoted for not cheering on the shooting of CEOs.

Upvote communities are all dead and dying. There are no more interesting conversations happening in them anymore.


The AMA has been lobbying for more residencies for years. And residencies are the bottleneck.

What may be necessary is for other countries to be better. These treatments and studies don't just affect Americans but everyone everywhere, and if there are enough signals of "this treatment saves kids abroad but we can't afford it in the US because of policy", MAYBE said policy will change. Maybe. Not likely, because the corporations have control over the government, and the US government system is stuck in laws drafted in the 1700s.

Does that really happen "any time you go to a doctor's office"?

That aside, what if novel therapies like this are linked to the fact that US healthcare is expensive? If you make it cheap -- as in other countries -- there's less incentive for companies to invest and you get less research and fewer breakthroughs. Also fewer doctors, hospital beds, and more rationing.

In an ideal world, everyone would have exactly the right amount of healthcare. But our world isn't ideal, it runs on incentives, and it's not clear to me that all the hand-wringing over US healthcare will lead to positive changes.


> Does that really happen "any time you go to a doctor's office"?

Yes. I recently made a resolution to get established with all the medical professionals I don't have set up: a primary care provider, dermatologist, etc. Over the past two months I've visited and had to go back a couple of times. I've literally overheard insurance-related issues in all cases, whether it was the person in line before me or just people complaining while I'm in the waiting room.

Just last week I was waiting to get my blood drawn and the woman at the front desk, after continued prodding by an elderly man frustrated with his lack of coverage, said out loud, “Well, that's insurance in America for you. Go ahead and call the number on the back of your insurance card, because we can't do anything for you.” Just deeply disheartening stuff, watching a man in his late 80s spend 15 minutes being tossed between automated insurance phone menus without realizing he simply won't get the help he needs.


Well, I just had an MRI and didn't see one elderly person complaining about bills. Guess that cancels our anecdotes out.

This point of view runs directly against mutually agreed upon matters of fact: https://petrieflom.law.harvard.edu/2022/03/15/ama-scope-of-p...

The US healthcare system is not a market system nor did it occur naturally. Do you have any conflicts of interest that could cause you to have an emotional need to misunderstand basic information about it?


> That aside, what if novel therapies like this are linked to the fact that US healthcare is expensive?

you don't have to wonder, people have been writing about this as a major factor of costs for nearly 50 years


The US is a country of cowboys. There is literally nothing that can be considered fair. The only thing that is left is the kindness of its people. If that deteriorates, well...

Hank Green, in collaboration with Cal Newport, just released a video where Cal makes the argument for exactly that: that for many reasons, not least cost, smaller, more targeted models will become more popular for the foreseeable future. Highly recommend this long video posted today: https://youtu.be/8MLbOulrLA0

Just cancelled. I'll give my money to a company with leaders who have a modicum of backbone.

Thanks for the reminder. Doing the same now.

The little respect I had left for Sam is now wiped. Makes me sick.

Growing up I always thought AI would be this beautiful tool, this thing that opens the gates to a new society where work becomes optional in a way. But I failed to think about human greed.

I remember following OpenAI way back when it was a non-profit explaining how uncontrolled AI could be highly detrimental. Now Sam has not only taken that non-profit and made it for-profit; it seems he's making the most evil decisions he can for a buck.

Cancel your subscription, tell your friends to. And vote to heavily tax these companies and their leaders.


> i'd rather it feel awkward and human than efficient and cold.

So deeply ironic considering he claims he’s doing this because AI can do the jobs these people did.

These billionaires will learn one day that removing humans doesn’t stop at the bottom layer. It’ll continue to happen at layers above until their own position starts to be put into question. They’ll realize those people who are removed due to AI taking their jobs still need to put food on their tables. It’ll take time, but ultimately there are only so many ways that can go. The answer will be extreme taxation on the billionaires.


I do genuinely wonder about the endgame here. Why would the objective winners of the _current_ system, our billionaire class, want to disrupt that system? Do they really believe that they will necessarily be winners in the new world too, are they that arrogant?

They already understand the current system and status quo is going away. They understand, on some level, the consequences of the technocapitalist system they've built and perpetuated.

They're making their own accommodations, rather than trying to change the course: https://www.theguardian.com/news/2022/sep/04/super-rich-prep...


I think assuming human agency (building technocapitalism, correcting course) or the possibility to escape capitalism and its consequences (in bunkers), underestimates what capitalism is.

Astro isn’t solving the same surface as Next. Astro is great for static sites with some dynamic behavior. The same could be said about Next, depending on how you write your code, but Next can also be used for highly dynamic websites. Using Astro for a highly dynamic website is like jamming a square peg into a round hole.

We use Astro for our internal dev documentation/design system and it’s awesome for that.


But presumably if you could do this for Next it would be at least as easy for Astro?

Astro is not a static site builder.

Someone at my job just said yesterday “I can’t see a reason to hire any additional devs in the future with AI being the way it is now”

I disagreed vehemently, but it’s really gotten me thinking about just how screwed some orgs are. Especially those with poor technical leadership. Like I can try to convince people otherwise but ultimately they’re not going to believe it until they see productivity reduce by relying solely on AI.

The other part that's odd to me is that I feel like once we truly reach a point where dev jobs are made irrelevant, that level of intelligence will make essentially all white-collar jobs irrelevant, all the way up to the CEO. So it's kind of a race to all of us not having jobs. It's just funny that the higher-ups at some companies are so delusional as to think what they do won't also be replaced.


Based on that list it boils down to 2 things it seems:

- cost (no longer a problem)

- Too much code needed, which bloats the data pipelines. Does anyone have any actual evidence of this being the case? Yes, code would be needed, but why is that innately a bad thing? "Bloated data pipelines" feels like another hand-wave when, if you do it right, it's fine. As proven by Waymo.

Really curious if any Tesla engineers feel like this is still the best way forward or if it’s just a matter of having to listen to the big guy musk.

I’ve always felt that relying on vision only would be a detriment, because even humans with good vision get into circumstances where they get hurt by temporary vision hindrances: think heavy snow, heavy rain, heavy fog, or even just cresting a hill at a certain time of day when the sun flashes you.


Just for the record, though, Musk isn't blindly anti-LIDAR. He has said (and I think this is an objective fact) that all existing roads and driving are designed around vision, which is what all humans use, so vision should technically be sufficient. SpaceX uses LIDAR for its docking systems.

I would argue that yes, we do use vision but we get that "lidar depth" from our stereo vision. And that used to be why I thought cameras weren't enough.

But then look at all the work on gaussian splatting (where you take multiple 2D samples and build a 3D world out of them). So you could probably get 80% of the way there with just that.

The ethos of many Musk companies (you'll hear this from many engineers that work there) is simplify, simplify, simplify. If something isn't needed, take it out. Question everything that might be needed.

To me, LIDAR is just one of those things in that general pattern of "if it isn't absolutely needed, take it out" – and the fact that FSD works so well without it proves that it isn't required. It's probably a nice to have, but maybe not required.


Humans aren't using only fixed vision for driving. This is such a tiresome thing to see repeated in every discussion about self driving.

You're listening to the road and the sounds of cars around you. You're feeling vibration from the road. You're feeling feedback through the steering wheel. You're using a combination of monocular and binocular depth perception, plus your eyes are not fixed-focal-length "cameras". You're moving your head to change the perspective you see the road from. Your inner ear is telling you about your acceleration and orientation.


And also, even with the suite of sensors that humans have, their vision perception is frequently inadequate and leads to crashes. If vision was good enough, "SMIDSY" wouldn't be such an infamous acronym in vehicle injury cases.

For those of us not aware of Australian cycling jargon, "SMIDSY" means "Sorry, Mate, I Didn't See You".

The issue is clearly attention, not vision, when it comes to humans. If we could actually process 100% of the visual information in our field of view, then accidents would probably go down a shitload.

Humans have both issues. There are many human failures that are distinctly vision issues and not attention-related, e.g. misestimation of depth/speed, obscured or obstructed vision, optical focus issues, insufficient contrast or exposure, etc.

But how many of those crashes not caused by inattention could have been avoided with less idiocy and more defensive driving? I mean, yes, we can't see as well in fog, but that's why you should slow down.

Again, I'm still not saying that humans don't make bad decisions. I'm saying that, unequivocally, they also get into accidents while paying attention and being careful, as a result of misinterpretation or failure of their senses. These accidents are also common, for example:

* someone parking carefully, misjudges depth perception, bumps an object

* person driving at night, their eyes failed to perceive a poorly lit feature of the road/markings/obstacles

* person driving and suddenly blinded by bright object (the sun, bright lights at night)

* person pulling out into traffic who misinterprets their depth perception and therefore misjudges the speed of approaching traffic

* people can only focus their eyes at one distance at a time, and it takes time to refocus at a different distance. It is neither unsafe nor unexpected for humans to check their instruments while driving, but it can take the human eye hundreds of milliseconds to focus under normal circumstances. If you look down, focus, look back up, and focus, as quickly as you can at highway speeds, you will have travelled quite a long distance.

These types of failures can happen not as a result of poor decision making, but of poor perception.
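Back-of-the-envelope, the refocus point above works out to a surprising distance. A quick sketch (the 70 mph speed and 200 ms per refocus are illustrative assumptions, not measured values):

```python
# How far a car travels during a look-down-then-back-up refocus cycle.
# Assumed numbers: 70 mph highway speed, ~200 ms per refocus each way.
MPH_TO_MPS = 0.44704

speed_mps = 70 * MPH_TO_MPS            # ~31.3 m/s
refocus_round_trip_s = 2 * 0.200       # focus down, then back up

blind_distance_m = speed_mps * refocus_round_trip_s
print(f"{blind_distance_m:.1f} m travelled while refocusing")  # ~12.5 m
```

So even a brief instrument check at highway speed means covering roughly a dozen metres without a focused view of the road.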


> But how many of those crashes not caused by inattention could have been avoided with less idiocy and more defensive driving?

Most of them.

We can lump together "inattention" and "idiocy" for the purposes of this conversation, because both could be massively alleviated by a good self-driving car without lidar.

If you look at the parallel comments, you'll see that the majority of accidents and fatalities indeed come from these two factors combined (two-thirds coming from distraction, speeding, and impaired driving), and that kube-system is having to resort to ridiculous fallacies to try to dispute the empirical data that is available.


I didn’t claim vision was responsible for the majority of accidents anywhere in this thread.

> There are many human failures which are distinctly a vision issue and not attention related

Which are a tiny minority. The largest causes of crashes in the US are attention/cognition problems, not vision problems. Most traffic systems in western countries (probably in others, too, but I don't have personal experience), and in particular the US, are designed to limit visibility problems and do so very effectively.


That sounds more like a personal opinion, because I don’t think that data is particularly easy to objectively collect.

Regardless, it is irrelevant to the point. Whatever the number may be, lapses in human visual perception are responsible for some crashes.


> That sounds more like a personal opinion, because I don’t think that data is particularly easy to objectively collect.

That sounds like a personal opinion?

Maybe do the bare minimum of research before spouting yours.

DOT says that only 5% of crashes are caused by low visibility during weather events.[1]

In 2023, the combined causes of alcohol, speeding, and distracted driving (all cognitive/attention issues) caused 67% of highway deaths. [2]

I was able to find these in 30 seconds. You did zero research to confirm whether your belief was correct before asserting that my claim was opinion. That's pathetic.

> Regardless it is irrelevant to the point.

And your point is therefore irrelevant to the discussion at hand, because the person you were replying to did not claim that vision had no safety impact, but that it had little safety impact:

> the issue is clearly attention not vision when it comes to humans. if we could actually process 100% of the visual information in our field of view, then accidents would probably go down a shit load.

...and, as we can clearly see, the issue is attention (and some bad decision making), not vision.

[1] https://ops.fhwa.dot.gov/weather/roadimpact.htm

[2] https://www.adirondackdailyenterprise.com/opinion/columns/sa...


None of those things you cited is “human vision or perception”

“Low visibility during weather events” is a small subset of this.

A ridiculously common example of the limitations of human vision is when people hit curbs while parallel parking, because of the inherent limitations of relying on depth perception to estimate the exact position of the vehicle when it cannot otherwise be directly seen. Go look in a parking lot and see how common curbed wheels are.

Also, NHTSA estimates that they don’t have any information for 60% of incidents, because they go unreported.


> None of those things you cited is “human vision or perception”

> “Low visibility during weather events” is a small subset of this.

You're still refusing to do the most basic research or even read my comment:

> In 2023, the combined causes of alcohol, speeding, and distracted driving (all cognitive/attention issues) caused 67% of highway deaths.

Do the math. 100% - 67% is 33%. Even literally not opening Google, you can already deduce that the maximum fraction of fatalities caused by vision is 33%.

Given that you aren't interested in reading or researching and instead just want to push your opinion as fact, I think your claims can be safely discarded.

Edit: Because you're editing your comment because you realize that you're making an absolute fool of yourself:

> A ridiculously common example of the limitations of human vision is when people hit curbs parallel parking

A completely irrelevant distraction - this causes virtually zero accidents and even fewer fatalities, and you know it.

> Also, NHTSA estimates that they don’t have any information for 60% of incidents, because they go unreported.

Aha, so now you actually did research, and found that all of the available data supports my claims, so you're attempting to undermine it. Nice try. "Estimates" vs. actual numbers isn't really a contest.

Come back when you have actual data - until then, you're just continuing to undermine your own point with your ridiculous fallacies and misdirections - because if you actually had a defensible claim, you'd be able to instantly pull out supporting evidence.


Dude, you're arguing with a straw man.

I'm not arguing about fatalities or relative percentages of contributing factors, nor am I arguing that alcohol/speeding/attention are not all also issues. They are, you're right.

The only thing I argued is that "lapses in human visual perception are responsible for some crashes", which is a fact.


Attention is perhaps the limiting factor, but being able to look in two directions at once would help, and would help greatly if we had more attention capacity. E.g. any time you change lanes you have to alternate between looking behind, beside, and in front, and that greatly reduces reaction time should something unexpected happen in the direction you aren't currently looking...

In theory, a computer should be able to do the same. It could do sensor fusion with even more sense modalities than we have. It could have an array of cameras and potentially out-do our stereo vision, or perhaps even use some lightfield magic to (virtually) analyze the same scene with multiple optical paths.
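To put a rough number on the stereo-vision point, here is a toy pinhole-stereo sketch using the standard depth = f * B / disparity relation (the 1000 px focal length and 0.3 m baseline are hypothetical camera parameters, not from any real system). It shows why a fixed one-pixel disparity error costs far more metres of depth at 50 m than at 5 m:

```python
# Toy pinhole-stereo model: depth = focal_length * baseline / disparity.
def stereo_depth(f_px: float, baseline_m: float, disparity_px: float) -> float:
    return f_px * baseline_m / disparity_px

f_px, B = 1000.0, 0.3                          # hypothetical camera
for true_depth in (5.0, 50.0):
    d = f_px * B / true_depth                  # ideal disparity in pixels
    err = stereo_depth(f_px, B, d - 1.0) - true_depth  # 1 px measurement error
    print(f"depth {true_depth:>4.0f} m -> disparity {d:5.1f} px, "
          f"1 px error adds {err:.2f} m")
```

With these assumed numbers, a 1-pixel error adds about 0.08 m at 5 m but roughly 10 m at 50 m of true depth, which is one reason stereo alone gets coarse at range.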

However, there is also a lot of interaction between our perceptual system and cognition. Just for depth perception, we're doing a lot of temporal analysis. We track moving objects and infer distance from assumptions about scale and object permanence. We don't just repeatedly make depth maps from 2D imagery.

The brute-force approach is something like training visual language models (VLMs). E.g. you could train on lots of movies and be able to predict "what happens next" in the imaging world.

But, compared to LLMs, there is a bigger gap between the model and the application domain with VLMs. It may seem like LLMs are being applied to lots of domains, but most are just tiny variations on the same task of "writing what comes next", which is exactly what they were trained on. Unfortunately, driving is not "painting what comes next" in the same way as all these LLM writing hacks. There is still a big gap between that predictive layer, planning, and executing. Our giant corpus of movies does not really provide the ready-made training data to go after those bigger problems.


Putting your point another way, in order to replicate an average human driver’s competence you would need to make several strong advancements in the state of the art in computer vision _and_ digital optics.

In India (among other places), honking is essential to reducing crashes.

We often greatly underestimate and undervalue the role of our ears relative to vision. As my film director friend says, 80% of the impact of a movie is in the sound.


The day a Waymo can functionally navigate the streets of Mumbai is the day we will really have achieved L5.

I'm positive that Teslas have gyroscopes and accelerometers in them. Our eyes actually have a fairly small focal-length range due to the fixed nature of our cornea, since we can only change focal length by flexing the crystalline lens.

Beyond 20 meters, motion vision is more accurate than stereoscopic vision. What is lidar helping to solve here?

Waymo claims its system, which uses a combination of LIDAR & vision, resolves objects up to 500 meters away

https://waymo.com/blog/2024/08/meet-the-6th-generation-waymo...

This company claims their LIDAR works conservatively at 250m, and up to 750m depending on reflectivity

https://www.cepton.com/driving-lidar/reading-lidar-specs-par...


Most of what you said has nothing to do with lidar vs camera

What I said contrasts "vision-only" systems (what Musk has claimed will be enough for FSD) with sensor-fusion systems (what everybody else having success in this space uses).

Mentioning gaussian splatting as a reason we don't need lidar depth is a great example of Musk-esque technobabble: seemingly correct at the surface level, but nonsense to any practitioner. One of the biggest problems of all SfM techniques is that the results are scale-ambiguous, so they do not in fact recover the crucial real-world depth measurement you get from lidar.

Now you might say "use a depth model to estimate metric depth." But spend 5 minutes thinking about why a magic math box that pretends to recover real depth from a single 2D image is a very, very sketchy proposition when it needs to be correct for emergency braking rather than some TikTok bokeh filter, and you will see that doesn't get you far either.
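The scale ambiguity is easy to demonstrate: scale the whole scene and the camera baseline by the same factor and every image observation is identical, so nothing computed from the pixels alone can recover metric depth. A toy pinhole-camera sketch (all numbers made up for illustration):

```python
import numpy as np

f = 1000.0  # focal length in pixels (hypothetical)

def project(point, cam_x):
    # Pinhole projection from a camera translated along x by cam_x
    x, y, z = point
    return np.array([f * (x - cam_x) / z, f * y / z])

point = np.array([2.0, 0.5, 10.0])  # a point 10 m away
baseline = 0.3                      # camera separation in meters

# Original scene, seen from both cameras
p_left = project(point, 0.0)
p_right = project(point, baseline)

# Scene scaled 5x: point 5x farther, baseline 5x wider
s = 5.0
p_left_s = project(point * s, 0.0)
p_right_s = project(point * s, baseline * s)

# Identical pixels in both cases -> scale is unrecoverable from images alone
assert np.allclose(p_left, p_left_s)
assert np.allclose(p_right, p_right_s)
```

Lidar breaks the ambiguity because it measures range directly instead of inferring it.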


This is not really true if you have multiple cameras with a known baseline, or well-known motion characteristics like you get from an accelerometer plus wheel-speed sensors.
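With a known baseline the ambiguity does go away, since metric depth follows directly from disparity (illustrative numbers, assuming calibrated rectified stereo):

```python
f = 1000.0  # focal length in pixels (assumed calibration)
B = 0.3     # known metric baseline between the two cameras

def depth_from_disparity(d_px):
    # Classic rectified-stereo relation: Z = f * B / d
    return f * B / d_px

# A point at 10 m produces a disparity of f*B/10 = 30 px,
# and the known baseline lets us invert that back to meters
assert abs(depth_from_disparity(30.0) - 10.0) < 1e-9
```

The catch, as the disparity falloff discussion above suggests, is how quickly that inversion degrades with range and calibration error.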

> So that should technically be sufficient

Sufficient to build something close to human performance. But self driving cars will be held to a much higher standard by society. A standard only achievable by having sensors like LiDAR.


If a self-driving car had the exact vision of humans it would still be better, because it has better reaction times. Never mind the fact that humans can't actually process all the visual information in our field of view, because we don't have the broad attention to do that. It's very obvious that you can get superhuman performance with just cameras.

Whether that's worth completely throwing away LiDAR is a different question, but your argument is just obviously false.
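Back-of-the-envelope on what reaction time alone buys (all numbers assumed for illustration, not measurements of any real system):

```python
# Stopping-distance comparison at highway speed
v = 30.0          # m/s (~108 km/h)
decel = 7.0       # m/s^2, typical dry-road braking
t_human = 1.5     # assumed human perception-reaction time, s
t_machine = 0.1   # assumed machine reaction time, s

def stopping_distance(t_react):
    # Distance covered during reaction plus braking distance v^2 / (2a)
    return v * t_react + v**2 / (2 * decel)

extra = stopping_distance(t_human) - stopping_distance(t_machine)
print(round(extra, 1))  # ~42 m of extra distance from reaction time alone
```

The braking term cancels, so the gap is pure reaction time: v * (1.5 - 0.1) = 42 m at this speed.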


This reminds me of the time I was distantly following a Waymo car at speed on 101 in Mountain View during rush hour. The Waymo brake lights came on first followed a second or two later by the rest of the traffic.

Better reaction times only matter if the decisions are the same / better in every case. Clearly we are not there on that aspect of it yet.

Deciding to crash faster, or "tell human to take over" really fast is NOT better.


Even if they weren't going to be held to a higher standard for widespread acceptance, tens of thousands of people a year in the US die due to humans driving badly. Why would we not try to do better than that?

Because that's an acceptable loss and better costs more!

Teslas have at least three forward-facing cameras, giving them plenty of data for depth vision.

They also have several cameras all around providing constant 360° vision.


Sufficient if all else were equal. But the human brain and artificial neural networks are clearly not equal. This is setting aside the whole question of whether we hope to equal human performance or exceed it.

That doesn't matter. It's not like we use 100% of our brain capacity for driving.

In fact, that's why radio/music/podcasts thrive. Because we're bored when we drive. We have conversations, etc. We daydream.

As long as the skills relevant to actually driving are on parity with humans, the rest doesn't matter.

In fact, in a recent podcast, Musk mused that you actually may have a limit of how smart you want a vehicle model to be, because what if IT starts to get bored? What will it do? I found that to be an interesting (and amusing) thought exercise.


To do gaussian splatting anywhere near in real time, you need good depth data to initialize the gaussian positions. This can of course come from monocular depth but then you are back to monocular depth vs lidar.

LIDAR also struggles in heavy rain, snow, fog, and dust. Check how Waymo handles such conditions.

It's not only failing; it's causing false positives.


Why is this getting downvoted? It's good faith and probably more accurate than not.

> and the fact that FSD works so well without it proves that it isn't required

The reports that Tesla submits on Austin Robotaxis include several of them hitting fixed objects. This is the same behavior that has been reported on for prior versions of their software of Teslas not seeing objects, including for the incident for which they had a $250M verdict against them reaffirmed this past week. That this is occurring in an extensively mapped environment and with a safety driver on board leads me to the opposite conclusion that you have reached.


If Waymo has proven their model works, why is the silly automaker doing several orders of magnitude more autonomous miles?

They aren't. Tesla has logged some 800k total miles with their robotaxi vehicles, including miles with safety drivers. Waymo has logged 200M driverless miles. That's 0.4% of the mileage, with the most generous possible framing.
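Using the figures as stated in the thread, the 0.4% works out directly:

```python
# Mileage figures quoted in the comment above
tesla_robotaxi_miles = 800_000
waymo_driverless_miles = 200_000_000

share = tesla_robotaxi_miles / waymo_driverless_miles
print(f"{share:.1%}")  # 0.4%
```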

My understanding is that there's more data processing required with cameras because you need to estimate distance from stereoscopic vision. And as it happens, the required chips for that have shot up in price because of the AI boom.

But I think costs were just part of the reason why Elon decided against Lidar. Apparently, they interfere with each other once the market saturates and you have many such cars on the same streets at the same time. Haven't heard yet how the Lidar proponents are planning to address that.


How does Waymo handle it now? There are many videos of Waymo depots with dozens of cars not running into each other.


Lidar critics like to pretend that anti-collision is not a well-studied branch of computer science and telecoms. Wi-Fi, Ethernet, and cellphones all work well simultaneously, despite all participants sharing the same physical medium.
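The shared-medium trick those systems use can be sketched in a few lines: contenders pick random slots and back off exponentially after collisions. This is a toy CSMA-style model of the general idea, not how any particular lidar vendor handles mutual interference.

```python
import random

random.seed(0)  # deterministic for the example

def contend(n_tx, window=4, max_rounds=40):
    """Each round, pending transmitters pick a random slot in the
    current window; anyone who shares a slot collides and retries
    in a doubled window (exponential backoff)."""
    pending, rounds = n_tx, 0
    while pending and rounds < max_rounds:
        slots = [random.randrange(window) for _ in range(pending)]
        pending = sum(slots.count(s) > 1 for s in slots)
        window *= 2  # back off: fewer collisions next round
        rounds += 1
    return pending  # transmitters still colliding at the end

# A lone transmitter always gets through immediately
assert contend(1) == 0
```

As the window grows, collision probability drops geometrically, which is why a crowded medium still drains.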

I'm not a Lidar critic. I'm really just curious how they're addressing it, or plan to.

And there’s no subscription right?


iCloud subscription.


The other difference between Steve and Tim is Steve would have never been caught dead giving a gold gift to a sitting president. It comes off as desperate and evil, two things Steve would have hated associating with Apple.

