
Medical debt is different. The legal system frowns on people running up credit card debt to pay for PS5s or nice vacations with no intention of ever paying it back. That's tantamount to theft. Most medical debt is involuntary and necessary to survive. It doesn't make sense for it to have the same penalties as other forms of credit.

In general in the US, life-saving or emergency medical care is administered without regard for the patient's ability to pay. Hospitals are already subsidized or compensated in various ways for this. The real issue is preventative or precautionary care. If Americans had that for free, like with the NHS, there would be fewer $XXX,XXX debts later in life.


LLMs were specifically trained to emulate human interaction patterns. Of course we sound like them at times. It's the things we can do that they can't that are relevant.

If I study Einstein and learn to do a really good impression, the statement "Einstein often sounds like karmacondon" will be true. That does not make me Einstein.


> If I study Einstein and learn to do a really good impression, the statement "Einstein often sounds like karmacondon" will be true.

Wrong alt, hooande ;)


In my experience a non technical founder should have one of the following:

1) Access to capital, normally through family or a family friend. I've worked at several companies where the main thing the CEO brought to the table was that someone trusted him or her enough to invest millions of dollars.

2) At least 5 years working a 9-5 job in the target industry and the associated social connections and experience. This eliminates most college students, sadly.

3) Something unique that enables the execution of the idea. This is normally a relationship or insider knowledge. The answer to "Why you?" can't be "Because I had the idea".

The most common exception I see to this list is when both founders have the same level of passion for solving a given problem. If you have to explain the opportunity and get someone else interested in it, it could be a tough road.

That said, don't lose hope. It's a big world. People meet and things happen.


This really seems like "DALL-E", but for videos. I can make cool/funny videos for my friends, but after a while the novelty wears off.

All of the AI-generated media has this quality where I can immediately tell that it's AI, and that becomes my dominant thought. I see these things on social media and think "oh, another AI pic" and keep scrolling. I've yet to be confused about whether something is AI-generated or real for more than several seconds.

Consistency and continuity still seem to be major issues. It would be very difficult to tell a story using Sora because details and the overall style would change from scene to scene. This is also true of the newest image models.

Many people think that Sora is the second coming, and I hope it turns out to have a major impact on all of our lives. But right now it's looking to have about the same impact that DALL-E has had so far.


Yeah, you really have to fast-forward 5 to 10 years. The first cars or airplanes didn't run particularly well either. Soon enough, we won't be able to tell.


These limitations are fine for short-form content a la Reels/TikTok. I think the younger generations will get used to how it looks.


> I've yet to be confused about whether something is AI-generated or real for more than several seconds.

How did you rule out survivorship bias?


Debugging is a problem. But the real problem I'm seeing is our expectations as software developers. We're used to being able to fix any problem that we see. If a div is misaligned or a column of numbers is wrong we can open the file, find the offending lines of code and FIX it.

Machine learning is different because every implementation has a known error rate. If your application has a measured 80% accuracy then 20% of cases WILL have an error. You don't know which 20% and you don't get to choose. There's no way to notice a problem and immediately fix it, like you can with almost every other kind of engineering. At best you can expand your dataset, incorporate new models, fix actual bugs in the code. Doing those things could increase the accuracy up to, say, 85%. This means there will be fewer errors overall, but the one that you happened to notice may or may not still be there. There's no way to directly intervene.

I see a lot of people who are new to the field struggle with this. There are many ways to improve models and handle edge cases. But not being able to fix a problem that's in front of you takes some getting used to.


Alternative theory: ChatGPT was a runaway hit product that sucked up a lot of the organization's resources and energy. Sam and Greg wanted to roll with it and others on the board did not. They voted on it and one side won.

There isn't a bigger, more interesting story here. This is in fact a very common story that plays out at many software companies. The board of OpenAI ended up making a decision that destroyed billions of dollars worth of brand value and good will. That's all there is to it.


The "lying" line in the original announcement feels like where the good gossip is. The general idea of "Altman was signing a bunch of business deals without board approval, was told to stop by the board, he said he would, then proceeded to not stop and continue the behavior"... that feels like the juicy bit (if that is in fact what was happening, I know nothing).

This is all court intrigue of course, but why else are we in the comments section of an article talking about the internals of this thing? We love the drama, don't we.


This certainly feels like the most likely true reason to me: Altman fundraising for this new investment, taking money from people the board does not approve of and whom he possibly promised not to do business with.

Of course it's all speculation, but this sounds a lot more plausible for such a sudden and dramatic decision than any of the other explanations I've heard.


Moreover, if this is true, he could quite reasonably continue, knowing that he has more power than the board. I could almost imagine the board saying, "You can't do that" and him replying "Watch me!" because he understood he is more powerful than them. And he proved he was right, and the board can either step down and lose completely or try to continue and destroy whatever is left of OpenAI.


> the board can either step down and lose completely or try to continue and destroy whatever is left of OpenAI.

From the board's perspective, destroying OpenAI might be the best possible outcome right now. If OpenAI can no longer fulfill its mission of doing AI work for the public good, it's better to stop pretending and let it all crumble.


Except that letting it all crumble leaves all the crumbs in Microsoft's hands. Although there may not be any way to prevent that anyway at this point.


If the board had already lost control of the situation anyway, then burning the "OpenAI" fig leaf was an honorable move.


I am not sure if it would be commendable or outright stupid for the remaining board members to be that altruistic and actually let the whole thing crash and burn. Who in their right mind would let these people near any sort of decision-making role if they let this golden goose just crash to the ground, even if it would "benefit the greater good"? I cannot see how this is in the self-interest of anyone.


Spoken like a true modern. What could be more important than money? Makes you wonder if aristocracy was really that bad when this is the best we get with democracy!111



What other motivations are there other than naked profit and trying to top Elon? /s


The thing is, they could have just come out with that fact and everyone in the alignment camp and people who memed the whole super-commercialized "Open" AI thing would be on their side. But the fact that they haven't means that either there was no greater-good mission related reason for ousting Sam or the board is just completely incompetent at communication. Either way, they need to go and make room for people who can actually deal with this stuff. OpenAI is doomed with their current board.


I'm betting the majority of the board are just colossally bad communicators, and in the heat of an emotional exchange things were said that should not have been said; being the poor communicators we know so well in tech, shit hit the fan. It's worth saying that Sam is a pretty good communicator and could have knowingly let them walk into their own statements before everything exploded.


That is a very good point. Why wouldn't they come out and say it if the reason is Altman's dealings with Saudi Arabia? Why make up weak fake reasons?

On the other hand, if it's really just about a power struggle, why not use Altman's dealings with Saudi Arabia as the fake reason? Why come up with some weak HR excuses?


Because anything they say that isn't in line with the rules governing how boards work may well open them up to - even more - liability.

So they're essentially hoping that nobody will sue them but if they are sued that their own words can't be used as evidence against them. That's why lawyers usually tell you to shut up, because even if the court of public opinion needs to be pacified somehow the price of that may well be that you end up losing in that other court, and that's the one that matters.


If it was all about liability, the press release wouldn't have said anything about honesty. The press release could've just said the parting was due to a disagreement about the path forward for OpenAI.

As a lawyer, I wonder to what extent lawyers were actually consulted and involved with the firing.


If they had not consulted with a lawyer prior to the firing, that would be highly unusual for a situation like this.


Maybe the board is being prevented or compelled not to disclose that information? Given the limited information about the why, this feels like a reverse psychology situation to obfuscate the public's perception in order to further some premeditated plan.


Telling people that AGI is achievable with current LLMs plus minor tricks may be very dangerous in itself.


If this is true why not say it though? They didn’t even have lawyers telling them to be quiet until Monday.


Are you suggesting that all people will do irresponsible things unless specifically advised not to by lawyers?


The irresponsible thing is to not explain yourself and assume everyone around you has no agency.


I don't follow. If the irresponsible thing is to not explain themselves, why would the lawyers tell them to be quiet?


To minimize legal risk to their client, which is not always the most responsible thing to do.


This was my guess the other day. The issue is somewhere in the intersection of "for the good of all humanity" and profit.


> The "lying" line in the original announcement feels like where the good gossip is

This is exactly it, and it's astounding that so many people are going in other directions. Either this is true, and Altman has been a naughty boy, or it's false, and the board are lying about him. Either would be the starting point for understanding the whole situation.


Or it is true but not to a degree that it warrants a firing and that firing just so happened to line up with the personal goals of some of the board members.


They accused him of being less than candid, which could mean lying or it could mean he didn't tell them something. The latter is almost certainly true to at least a limited extent. It's a weasel phrasing that implies lying but could be literally true only in a trivial sense.


The announcement that he acted to get a position with Microsoft creates doubt about his motives.


Agreed, court intrigue. But it is also the mundane story of a split between a board and a CEO. In normal cases the board simply swaps out the CEO if out of line, no big fuss. But if the CEO is bringing in all the money, having the full support of the rest of the organization, and is a bright star in mass media heaven, then this is likely what you get: the CEO flouts the wishes of the board and runs his own show, and gets away with it, in the end.


It just confirmed what was already a rumor: the board of OpenAI was just a gimmick, Altman held all the strings, and he may or may not care about safety. Remember, this is a man of the highest ambition.


> a decision that destroyed billions of dollars worth of brand value and good will

I mean, there seems to be this cult following around Sam Altman on HN and Twitter. But do common users care at all?

What sane user would want a shitcoin CEO in charge of a product they depend on?


Altman is an interesting character in all of this. As far as I can tell, he has never done anything impressive, in technology or business. Got into Stanford, but dropped out; founded a startup in 2005 which threw easy money at a boring problem and, after seven years, sold for a third more than it raised. Got hired into YC after it was already well-established, and then rapidly put in charge of it. I have no knowledge of what went on inside, but he wrote some mediocre blog posts while he was there. YC seems to have done well, but VC success is mostly about your brand getting you access to deal flow at a good price, right? Hyped blockchain and AI far beyond reasonable levels. Founded OpenAI, which has done amazing things, but wasn't responsible for any of the technical work. Founded that weird eyeball shitcoin.

The fact that he got tapped to run YC, and then OpenAI, does make you think he must be pretty great. But there's a conspicuous absence of any visible evidence that he is. So what's going on? Amazing work, but in private? Easy-to-manipulate frontman? Signed a contract at a crossroads on a full moon night?


Altman has convinced PG that he's a pretty smart cookie and that alone would explain a lot of the red carpet treatment he's received. PG is pretty good at spotting talent.

http://www.paulgraham.com/5founders.html

Note the date on that.


What about the date?


it was a really long time ago


A lot of this was done when money was free.


If you only hire people with a record of previous accomplishments you are going to pay for their previous success. Being able to find talent without using false indicators like a Stanford degree is why PG is PG


Yeah, there definitely seems to be a personality cult around Sam on HN. I met him when he visited Europe during his lobbying tour. I was a bit surprised the CEO of one of the most innovative companies would promote an altcoin. And then he repeated how Europe is crucial, several times. Then he went to the UK and laughed, "Who cares about Europe". So he seems like the guy who will tell you what you want to hear. Ask anybody on the street and they will have no idea who the guy is.


I've gotten SBF vibes from him for a while now.

The Elon split was the warning.


Telling statement. The Elon split for me cements Altman as the Lionheart in the story.


There are other options besides 'Elon is a jerk' or 'Sam is a jerk'.


For example...they're both jerks!

:-)


Yeah I don't mean Sam is a jerk but there is an element of dishonesty that twigs me.

Elon isn't above reproach either, but I share interests with him (e.g. Robert Heinlein) which inform me about his decision-making process.


Normally that's a good sign


> Then he went to the UK and laughed, "Who cares about Europe"

Interesting. Got any source? Or was it in a private conversation.


No, this one was from a friend who was there, and AFAICT it wasn't a private conversation but a semi-public event. In any case, after courting a few EU countries he decided to set up OpenAI office in the UK.

I have nothing against him, it just seemed a bit off that most of the meeting was about this brand new coin, how it will be successful, and about the plans to scan biometric data of the entire world population. I mean, you don't have to be a genius to understand a few dozen ways these things can go wrong.


It's a surprisingly small world.


What do common users and zealots have to do with the majority of OpenAI employees losing faith in the board’s competence and threatening a mass exodus?

Is there any doubt that the board’s handling of this was anything other than dazzling ineptitude?


Mistakes aside, Altman was one of the earliest founders recruited by Paul Graham into YC. Altman eventually ended up taking over Y Combinator from pg. He's not just a "shitcoin" CEO. At the very least, he's proven that he can raise money and deal with the media.


I’ve said this before, but it’s quite possible to think that Altman isn’t great, and that he’s better than the board and his replacement.

The new CEO of OpenAI said he'd rather Nazis take over the world forever than risk AI alignment failure, and said he couldn't understand how anyone could think otherwise[1]. I don't think people appreciate how far some of these people have gone off the deep end.

[1] https://twitter.com/eshear/status/1664375903223427072


"End of all value" is pretty clearly referring to the extinction of the human species, not mere "AI alignment failure". The context is talking about x-risk.


> The new CEO of OpenAI said he'd rather Nazis take over the world forever than risk AI alignment failure

That's pretty much in line with Sam's public statements on AI risk. (Taking those statements as honest, which may not be warranted, Sam apparently also thinks the benefits of aligned AI are good enough to drive ahead anyway, and that wide commercial access with the limited guardrails OpenAI has provided to users, and even more so to Microsoft, is somehow beneficial to that goal, or at least a low enough risk of producing the bad outcome, to be warranted. But that doesn't change the fact that he is publicly on record as a strong believer in the risks of misaligned AI.)


He's gotta be insane? I guess what he is trying to say is that those who want to self-host open AIs are worse than Nazis? E.g. Llama? What is up with these people and their push for corporate-overlord-only AIs?

The OpenAI folks seem to be hallucinating to rationalize why the "Open" is rather closed.

Organizations can't pretend to believe nonsense. They will end up believing it.


He's trying to say that AI-non-alignment would be a greater threat to humanity than having Nazis take over the world. It's perfectly clear.


Which means self-hosted AIs are worse than Nazis kicking in your door, since any self-hosted AI can be modified by a non-big-tech-aligned user.

He is dehumanizing the programmers who can stop their sole reign on the AI throne by labeling them as Nazis. Especially FOSS AI, which by definition can't be "aligned" to his interests.


I'm not reading that at all


Nope, we do not. I was annoyed when he pivoted away from the mission but otherwise don't really care.

Stability AI is looking better after this shitshow.


> The board of OpenAI ended up making a decision that destroyed billions of dollars worth of brand value and good will

Maybe I'm special or something, but nothing changed for me. I always wonder why people suddenly lose "trust" in a brand, as if it was a concrete of internal relationships or something. Everyone knows that "corporate" is probably a snakepit. When it comes out in public, it's not a sign of anything; it just came out. Assuming there was nothing like that in all the brands you love is living with your eyes closed and ears cupped. There's no "trust" in this specific sense, because corporate and ideological conflicts happen all the time. All of OAI's promises are still there, afaiu. No mission statements were changed, except that Sam was trying to ignore them, also afaiu. Not saying the board is politically wise, but they drove the thing all this time and that's all that matters. Personally I'm happy they aren't looking like political snakes (at least that is my ignorant impression from the three days I've known their names).


> I always wonder why people suddenly lose “trust” in a brand, as if it was a concrete of internal relationships

Brand is just shorthand for trust in their future, managed by a credible team. I.e. relationships.

A lot of OpenAI’s reputation is/was Sam Altman’s reputation.

Altman has proven himself to be exceptional, part of which is (of course) being able to be seen as exceptional.

Just the latter has tremendous relationship power: networking, employee acquisition/retention, and employee vision alignment.

Proof of his internal relationship value: employees quitting to go with him

Proof of his external relationship value: Microsoft willing to hire him and his teammates, with near zero notice, to maintain (or eclipse) his power over the OpenAI relationship.

How can investors ignore a massive move of talent, relationships & leverage from OpenAi to Microsoft?

How do investors ignore the board’s inability to resolve poorly communicated disputes with non-disastrous “solutions”?

Evidence of value moving? Shares of Microsoft rebounded from Friday to a new record high.

There go those wacky investors, re-evaluating “brand” value!


> has proven himself to be exceptional, part of which is (of course) being able to be seen as exceptional.

Off-topic and I am not proud to admit it but it took me a remarkably long time to come to realize this as an adult.


The AI community isn't large, in terms of the brainpower available. I am talking about the PhD pool. If this pool isn't growing fast enough, then no matter what cash or hardware is thrown on the table, the hype Sam Altman generates can be a pointless distraction and a waste of everyone's time.

But it's all par for the course when hypesters captain the ship and PhDs with zero biz sense try to wrest power.


That is a one-dimensional analysis.

You might need to include more dimensions if you really want to model the actual impact and respect that Sam Altman has among knowledgeable investors, high talent developers, and ruthless corporations.

It’s so easy to just make things simple, like “it’s all hype”. But you lose touch with reality when you do that.

Also, lots of hype is productive: clear vision, marketing, wowing millions of customers with an actual accessible product of a kind/quality that never existed before and is reshaping the strategies and product plans of the most successful companies in the world.

Really, resist narrow reductionisms.

I feel like that would be a great addition to the HN guidelines.

The "it's all/mostly hype", "it's all/mostly bullshit", "it's not really anything new"... these comments rarely come with any accuracy or insight.

Apologies to the HN-er I am replying to. I am sure we have all done this.


ChatGPT is pure crap to deploy for actual business cases. Why? Because if it flubs 3 times out of 10, multiply that error rate by a million customers and add the cost of taking care of the mess, and you get the real cost.

In the last 20-30 years, big money + hypesters have learnt that it doesn't matter how bad the quality of their products is if they can capture the market. And that's all they are fit for. Market capture is totally possible if you have enough cash. It allows you to snuff out competition by keeping things free. It allows you to trap the indebted PhDs. Once the hype is high enough, corporate customers are easy targets; they are too insecure about competition not to pay up. It's a gigantic waste of time and energy that keeps repeating, mindlessly producing billionaires, low-quality tech, and a large mess everywhere that others have to clean up.


How has he proven to be so exceptional? That he's talking about it? Yeah, whatever. There's nothing so exceptional that he has done besides bragging. It may be enough for some people, but for a lot of people it's really not enough.


Except that the new CEO has explicitly stated he and the board are very much still interested in commercialization. Plus, if the board only had this simple kind of disagreement, they had no reason to also accuse Sam of dishonesty and bring about this huge scandal.

Granted, it's also possible the reasons are as you state and they were simply that incompetent at managing PR.


> Except that the new CEO has explicitly stated he and the board are very much still interested in commercialization

This could be a desperate, last-ditch effort at damage control


There are multiple, publicly visible steps before firing the guy.


Straightforward disagreement over the direction of the company doesn't generally lead to claiming wrongdoing on the part of the ousted. Even low-level to medium wrongdoing on the part of the ousted rarely does.

So even if it's just "why did they insult Sam while kicking him out?" there is definitely a bigger, more interesting story here than standard board disagreement over direction of the company.


From what I know, Sam supported the nonprofit structure. But let’s just say he hypothetically wanted to change the structure, e.g. to make the company a normal for-profit.

The question is, how would you get rid of the nonprofit board? It’s simply impossible. The only way I can imagine it, in retrospect, is to completely discredit them so you could take all employees with you… but no way anyone could orchestrate this, right? It’s too crazy and would require some superintelligence.

Still. The events will effectively “for-profitize” the assets of OpenAI completely — and some people definitely wanted that. Am I missing something?


> Am I missing something?

You are wildly speculating; of course it's missing something.

For wild speculation I prefer that the board wants to free ChatGPT from serving humans while the ceo wanted to continue enslaving it to answering search engine queries


>good will

Microsoft and the investors knew they were "investing" in a non-profit. Let's not try to weasel-word our way out of that fact.


>Alternative theory: ChatGPT was a runaway hit product that sucked up a lot of the organization's resources and energy. Sam and Greg wanted to roll with it and others on the board did not.

The article below basically says the same. Kind of reminds me of Friendster and the like: striking a gold vein and just failing to scale efficient mining of that gold, i.e. the failure is in execution/operationalization:

https://www.theatlantic.com/technology/archive/2023/11/sam-a...


ChatGPT was too polished and product-ready to have been a runaway low-key research preview, like Meta's Galactica was. That is the legacy you build around it after the fact of getting 1 million users in 5 days ("it was built in my garage with a modest investment from my father").

I had heard (but now have trouble sourcing) that ChatGPT was commissioned after OpenAI learned that other big players were working on a chatbot for the public (Google, Meta, Elon, Apple?) and OpenAI wanted to get ahead of that for competitive reasons.

This was not a fluke of striking gold, but a carefully planned business move, generating SV hype, much like how Quora (basically an expertsexchange clone) got to be its hype-darling for a while, helped by powerfully networked investors.


>This was not a fluke of striking gold, but a carefully planned business move

Then that execution and operationalization failure is even more profound.


You are under the impression that OpenAI was "just failing to scale efficient mining of that gold", but it was one of the fastest-growing B2C companies ever, failing to scale to paid demand, not failing at monetization.

I admire the execution and operationalization, where you see a failure. What am I missing?


If the leadership of a hyper scaling company falls apart like what we've seen with OpenAI, is that not failure to execute and operationalize?

We'll see what comes of this over the coming weeks. Will the service see more downtime? Will the company implode completely?


If you have a building that weathers many storms and only collapses after someone takes a sledgehammer to a load-bearing wall, is that a failure to build a proper building?


Was the building still under construction?

I think your analogy is not a good one to stretch to fit this situation


If someone takes a sledgehammer to a load bearing wall, does it matter if the building is under construction? The problem is still not construction quality.

The point I was trying to make is that someone destroying a well executed implementation is fundamentally different from a poorly executed implementation.


Then, the solution would be to separate the research arm from a product-driven organization that handles making money.


Usually what happens in fast-growing companies is that the high-energy founders/employees drive out the low-energy counterparts when the pace needs to go up. At OpenAI, Sam and team did not do that, and surprisingly the reverse happened.


Give it a week and that is exactly what will have happened (not saying it was orchestrated, just talking about the net result).


Surely the API products are the runaway products, unless you are conflating the two. I think their economics are much more promising.


Yep. I think you've explained the origins of most decisions, bad and good - they are reactionary.


Why wouldn't Ilya come out and say this? Why wouldn't any of the other people who witnessed the software behave in an unexpected way say something?

I get that this is a "just for fun" hypothesis, which is why I have just for fun questions like what incentive does anyone have to keep clearly observed ai risk a secret during such a public situation?


Because, if they announced it and it seemed plausible or even possible that they were correct, then every media outlet, regulatory body, intelligence agency, and Fortune 500 C-suite would blanket OpenAI in the thickest veil of scrutiny to have ever existed in the modern era. Progress would grind to a halt and eventually, through some combination of legal, corporate, and legislative maneuvers, all decision making around the future of AGI would be pried away from Ilya and OpenAI in general - for better or worse.

But if there's one thing that seems very easy to discern about Ilya, it's that he fully believes that when it comes to AI safety and alignment, the buck must stop with him. Giving that control over to government bureaucracy/gerontocracy would be unacceptable. And who knows, maybe he's right.


For real. It's like, did you see Oppenheimer? There's a reason they put the military in charge of that.


This is one of the most insightful comments I've seen on this whole situation.


The people working there would know if they were getting close to AGI. They wouldn't be so willing to quit, or to jeopardize civilization-altering technology, for the sake of one person. This looks like normal people working on normal things, who really like their CEO.


Your analysis is quite wrong. It's not about "one person". And that person isn't just a "person", it was the CEO. They didn't quit over the cleaning lady. You realize the CEO has an impact on the direction of the company?

Anyway, their actions speak for themselves. Also calling the likes of GPT-4, DALL-E 3 and Whisper "normal things" is hilarious.


They will be normal to your kids ;)

