Medical debt is different. The legal system frowns on people running up credit card debt to pay for PS5s or nice vacations with no intention of ever paying it back. That's tantamount to theft. Most medical debt is involuntary and necessary to survive. It doesn't make sense for it to have the same penalties as other forms of credit.
In general in the US, life saving or emergency medical care is administered without regard for the patient's ability to pay. Hospitals are already subsidized or compensated in various ways for this. The real issue is preventative or precautionary care. If Americans had that for free, like with the NHS, there would be fewer $XXX,XXX debts later in life.
LLMs were specifically trained to emulate human interaction patterns. Of course we sound like them at times. It's the things we can do that they can't that are relevant.
If I study Einstein and learn to do a really good impression, the statement "Einstein often sounds like karmacondon" will be true. That does not make me Einstein.
In my experience a non technical founder should have one of the following:
1) Access to capital, normally through family or a family friend. I've worked at several companies where the main thing the CEO brought to the table was that someone trusted him or her enough to invest millions of dollars.
2) At least 5 years working a 9-5 job in the target industry and the associated social connections and experience. This eliminates most college students, sadly.
3) Something unique that enables the execution of the idea. This is normally a relationship or insider knowledge. The answer to "Why you?" can't be "Because I had the idea".
The most common exception I see to this list is when both founders have the same level of passion for solving a given problem. If you have to explain the opportunity and get someone else interested in it, it could be a tough road.
That said, don't lose hope. It's a big world. People meet and things happen.
This really seems like "DALL-E", but for videos. I can make cool/funny videos for my friends, but after a while the novelty wears off.
All of the AI-generated media has this quality where I can immediately tell that it's AI, and that becomes my dominant thought. I see these things on social media and think "oh, another AI pic" and keep scrolling. I've yet to be confused about whether something is AI-generated or real for more than a few seconds.
Consistency and continuity still seem to be major issues. It would be very difficult to tell a story using Sora because details and the overall style would change from scene to scene. This is also true of the newest image models.
Many people think that Sora is the second coming, and I hope it turns out to have a major impact on all of our lives. But right now it's looking to have about the same impact that DALL-E has had so far.
Yeah, you really have to fast-forward 5 to 10 years. The first cars or airplanes didn't run particularly well either. Soon enough, we won't be able to tell.
Debugging is a problem. But the real problem I'm seeing is our expectations as software developers. We're used to being able to fix any problem that we see. If a div is misaligned or a column of numbers is wrong we can open the file, find the offending lines of code and FIX it.
Machine learning is different because every implementation has a known error rate. If your application has a measured 80% accuracy then 20% of cases WILL have an error. You don't know which 20% and you don't get to choose. There's no way to notice a problem and immediately fix it, like you can with almost every other kind of engineering. At best you can expand your dataset, incorporate new models, fix actual bugs in the code. Doing those things could increase the accuracy up to, say, 85%. This means there will be fewer errors overall, but the one that you happened to notice may or may not still be there. There's no way to directly intervene.
I see a lot of people who are new to the field struggle with this. There are many ways to improve models and handle edge cases. But not being able to fix a problem that's in front of you takes some getting used to.
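To make that concrete, here is a minimal back-of-the-envelope sketch in Python. The 80%/85% accuracies and the one-million-case volume are assumptions chosen purely for illustration, not numbers from any real system:

    # Hypothetical illustration: what raising model accuracy does (and doesn't do)
    # for the specific error you happened to notice. All numbers are assumed.
    cases = 1_000_000  # assumed number of cases the system handles

    for accuracy in (0.80, 0.85):
        expected_errors = cases * (1 - accuracy)
        print(f"accuracy {accuracy:.0%}: ~{expected_errors:,.0f} expected errors")

    # accuracy 80%: ~200,000 expected errors
    # accuracy 85%: ~150,000 expected errors
    # The error rate drops, but which cases fail is still statistical: there is no
    # line of code you can point at that guarantees the one misclassified example
    # a user reported yesterday is now handled correctly.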
Alternative theory: ChatGPT was a runaway hit product that sucked up a lot of the organization's resources and energy. Sam and Greg wanted to roll with it and others on the board did not. They voted on it and one side won.
There isn't a bigger, more interesting story here. This is in fact a very common story that plays out at many software companies. The board of openai ended up making a decision that destroyed billions of dollars worth of brand value and good will. That's all there is to it.
The "lying" line in the original announcement feels like where the good gossip is. The general idea of "Altman was signing a bunch of business deals without board approval, was told to stop by the board, he said he would, then proceeded to not stop and continue the behavior"... that feels like the juicy bit (if that is in fact what was happening, I know nothing).
This is all court intrigue of course, but why else are we in the comments section of an article talking about the internals of this thing? We love the drama, don't we.
This certainly feels like the most likely true reason to me: Altman fundraising for this new investment, taking money from people the board does not approve of and whom Altman had possibly promised not to do business with.
Of course it's all speculation, but this sounds a lot more plausible for such a sudden and dramatic decision than any of the other explanations I've heard.
Moreover, if this is true, he could well continue, knowing that he has more power than the board. I can almost imagine the board saying, "You can't do that" and him replying "Watch me!" because he understood he was more powerful than them. And he proved he was right, and the board can either step down and lose completely or try to continue and destroy whatever is left of OpenAI.
> the board can either step down and lose completely or try to continue and destroy whatever is left of OpenAI.
From the board's perspective, destroying OpenAI might be the best possible outcome right now. If OpenAI can no longer fulfill its mission of doing AI work for the public good, it's better to stop pretending and let it all crumble.
I am not sure whether it would be commendable or outright stupid, though, for the remaining board members to be that altruistic and actually let the whole thing crash and burn. Who in their right mind would let these people near any sort of decision-making role if they let this golden goose crash to the ground, even if it would "benefit the greater good"? I cannot see how this is in anyone's self-interest.
Spoken like a true modern. What could be more important than money? Makes you wonder if aristocracy was really that bad when this is the best we get with democracy!111
The thing is, they could have just come out with that fact and everyone in the alignment camp and people who memed the whole super-commercialized "Open" AI thing would be on their side. But the fact that they haven't means that either there was no greater-good mission related reason for ousting Sam or the board is just completely incompetent at communication. Either way, they need to go and make room for people who can actually deal with this stuff. OpenAI is doomed with their current board.
I'm betting the majority of the board are just colossally bad communicators, and in the heat of an emotional exchange things were said that should not have been said; being the poor communicators we know oh so well in tech, shit hit the fan. It's worth saying that Sam is a pretty good communicator and could have knowingly let them walk into their own statements before everything exploded.
That is a very good point. Why wouldn't they come out and say it if the reason is Altman's dealings with Saudi Arabia? Why make up weak fake reasons?
On the other hand, if it's really just about a power struggle, why not use Altman's dealings with Saudi Arabia as the fake reason? Why come up with some weak HR excuses?
Because anything they say that isn't in line with the rules governing how boards work may well open them up to - even more - liability.
So they're essentially hoping that nobody will sue them but if they are sued that their own words can't be used as evidence against them. That's why lawyers usually tell you to shut up, because even if the court of public opinion needs to be pacified somehow the price of that may well be that you end up losing in that other court, and that's the one that matters.
If it was all about liability, the press release wouldn't have said anything about honesty. It could have just said the parting was due to a disagreement about the path forward for OpenAI.
As a lawyer, I wonder to what extent lawyers were actually consulted and involved with the firing.
Maybe the board is being prevented from disclosing that information, or compelled not to? Given the limited information about the why, this feels like a reverse psychology situation to obfuscate the public's perception and further some premeditated plan.
> The "lying" line in the original announcement feels like where the good gossip is
This is exactly it, and it's astounding that so many people are going in other directions. Either this is true, and Altman has been a naughty boy, or it's false, and the board are lying about him. Either would be the starting point for understanding the whole situation.
Or it is true, but not to a degree that warrants a firing, and that firing just so happened to line up with the personal goals of some of the board members.
They accused him of being less than candid, which could mean lying or it could mean he didn't tell them something. The latter is almost certainly true to at least a limited extent. It's a weasel phrasing that implies lying but could be literally true only in a trivial sense.
Agreed, court intrigue. But it is also the mundane story of a split between a board and a CEO. In normal cases the board simply swaps out a CEO who is out of line, no big fuss. But if the CEO is bringing in all the money, has the full support of the rest of the organization, and is a bright star in mass-media heaven, then this is likely what you get: the CEO flouts the wishes of the board, runs his own show, and gets away with it in the end.
It just confirmed what was already a rumor: the board of OpenAI was just a gimmick, Altman held all the strings, and he may or may not care about safety. Remember, this is a man of the highest ambition.
Altman is an interesting character in all of this. As far as I can tell, he has never done anything impressive, in technology or business. Got into Stanford, but dropped out, founded a startup in 2005 which threw easy money at a boring problem and, after seven years, sold for a third more than it raised. Got hired into YC after it was already well-established, and then rapidly put in charge of it. I have no knowledge of what went on inside, but he wrote some mediocre blog posts while he was there. YC seems to have done well, but VC success is mostly about your brand getting you access to deal flow at a good price, right? Hyped blockchain and AI far beyond reasonable levels. Founded OpenAI, which has done amazing things, but wasn't responsible for any of the technical work. Founded that weird eyeball shitcoin.
The fact that he got tapped to run YC, and then OpenAI, does make you think he must be pretty great. But there's a conspicuous absence of any visible evidence that he is. So what's going on? Amazing work, but in private? Easy-to-manipulate frontman? Signed a contract at a crossroads on a full moon night?
Altman has convinced PG that he's a pretty smart cookie and that alone would explain a lot of the red carpet treatment he's received. PG is pretty good at spotting talent.
If you only hire people with a record of previous accomplishments, you are going to pay for their previous success. Being able to find talent without relying on false indicators like a Stanford degree is why PG is PG.
Yeah, there definitely seems to be something of a personality cult around Sam on HN. I met him when he visited Europe during his lobbying tour. I was a bit surprised that the CEO of one of the most innovative companies would promote an altcoin. And then he repeated, several times, how crucial Europe is. Then he went to the UK and laughed, "Who cares about Europe". So he seems like the kind of guy who will tell you what you want to hear. Ask anybody on the street and they will have no idea who the guy is.
No, this one was from a friend who was there, and AFAICT it wasn't a private conversation but a semi-public event. In any case, after courting a few EU countries he decided to set up OpenAI office in the UK.
I have nothing against him, it just seemed a bit off that most of the meeting was about this brand new coin, how successful it will be, and about the plans to scan the biometric data of the entire world population. I mean, you don't have to be a genius to see a few dozen ways these things can go wrong.
What do common users and zealots have to do with the majority of OpenAI employees losing faith in the board’s competence and threatening a mass exodus?
Is there any doubt that the board’s handling of this was anything other than dazzling ineptitude?
Mistakes aside, Altman was one of the earliest founders recruited by Paul Graham into YC, and he eventually ended up taking over Y Combinator from pg. He's not just a "shitcoin" CEO. At the very least, he's proven that he can raise money and deal with the media.
I’ve said this before, but it’s quite possible to think that Altman isn’t great, and that he’s better than the board and his replacement.
The new CEO of OpenAI said he'd rather Nazis take over the world forever than risk AI alignment failure, and said he couldn't understand how anyone could think otherwise [1]. I don't think people appreciate how far some of these people have gone off the deep end.
"End of all value" is pretty clearly referring to the extinction of the human species, not mere "AI alignment failure". The context is talking about x-risk.
> The new CEO of OpenAI said he'd rather Nazis take over the world forever than risk AI alignment failure
That's pretty much in line with Sam's public statements on AI risk. (Taking those statements as honest, which may not be warranted, Sam apparently also thinks the benefits of aligned AI are good enough to drive ahead anyway, and that wide commercial access with the limited guardrails OpenAI has provided to users, and even more so to Microsoft, is either beneficial to that goal or carries a low enough risk of producing the bad outcome to be warranted. But that doesn't change the fact that he is publicly on record as a strong believer in the risk of misaligned AI.)
He's gotta be insane? I guess what he is trying to say is that those who want to self-host open AIs, e.g. Llama, are worse than Nazis? What is up with these people and their push for corporate-overlord-only AIs.
The OpenAI folks seem to be hallucinating to rationalize why the "Open" is rather closed.
Organizations can't pretend to believe nonsense. They will end up believing it.
Which means self-hosted AIs are worse than Nazis kicking in your door, since any self-hosted AI can be modified by a non-big-tech-aligned user.
He is dehumanizing the programmers who could end their sole reign on the AI throne by labeling them as Nazis. Especially FOSS AI, which by definition can't be "aligned" to his interests.
> The board of openai ended up making a decision that destroyed billions of dollars worth of brand value and good will
Maybe I’m special or something, but nothing changed for me. I always wonder why people suddenly lose “trust” in a brand, as if it were some concrete structure of internal relationships or something. Everyone knows that “corporate” is probably a snakepit. When it comes out to the public, it’s not a sign of anything; it just came out. Assuming there was nothing like that in all the brands you love is living with your eyes closed and ears covered. There’s no “trust” in this specific sense, because corporate and ideological conflicts happen all the time. All OAI promises are still there, afaiu. No mission statements were changed, except Sam trying to ignore them, also afaiu. Not saying the board is politically wise, but they drove the thing all this time and that’s all that matters. Personally I’m happy they aren’t looking like political snakes (at least that is my ignorant impression from the three days I’ve known their names).
> I always wonder why people suddenly lose “trust” in a brand, as if it were some concrete structure of internal relationships
Brand is just shorthand for trust in their future, managed by a credible team. I.e. relationships.
A lot of OpenAI’s reputation is/was Sam Altman’s reputation.
Altman has proven himself to be exceptional, part of which is (of course) being able to be seen as exceptional.
Just the latter has tremendous relationship power: networking, employee acquisition/retention, and employee vision alignment.
Proof of his internal relationship value: employees quitting to go with him
Proof of his external relationship value: Microsoft willing to hire him and his teammates, with near zero notice, to maintain (or eclipse) his power over the OpenAI relationship.
How can investors ignore a massive move of talent, relationships & leverage from OpenAi to Microsoft?
How do investors ignore the board’s inability to resolve poorly communicated disputes with non-disastrous “solutions”?
Evidence of value moving? Shares of Microsoft rebounded from Friday to a new record high.
There go those wacky investors, re-evaluating “brand” value!
The AI community isn't large, in terms of the brainpower available. I am talking about the PhD pool. If this pool isn't growing fast enough, then no matter how much cash or hardware is thrown on the table, the hype Sam Altman generates can be a pointless distraction and a waste of everyone's time.
But it's all par for the course when hypesters captain the ship and PhDs with zero biz sense try to wrest power.
You might need to include more dimensions if you really want to model the actual impact and respect that Sam Altman has among knowledgeable investors, high talent developers, and ruthless corporations.
It’s so easy to just make things simple, like “it’s all hype”. But you lose touch with reality when you do that.
Also, lots of hype is productive: clear vision, marketing, wowing millions of customers with an actual accessible product of a kind/quality that never existed before and is reshaping the strategies and product plans of the most successful companies in the world.
—
Really, resist narrow reductionisms.
I feel like that would be a great addition to the HN guidelines.
The “it’s all/mostly hype”, “it’s all/mostly bullshit”, “it’s not really anything new”, … These comments rarely come with any accuracy or insight.
Apologies to the HN-er I am replying to. I am sure we have all done this.
ChatGPT is pure crap to deploy for actual business cases. Why? Because if it flubs 3 times out of 10, multiply that error rate by a million customers, add the cost of cleaning up the mess, and you get the real cost.
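A minimal sketch of that arithmetic, where the flub rate follows the "3 out of 10" above and every other number (customer count, per-incident cleanup cost) is assumed purely for illustration:

    # Toy cost model for the "flubs 3 times out of 10" point above.
    # All inputs are made-up assumptions, not real figures.
    customers = 1_000_000     # customers hitting the feature (assumed)
    flub_rate = 0.3           # 3 bad answers out of 10 (from the comment above)
    cleanup_cost = 5.0        # support/remediation cost per flub, USD (assumed)

    flubs = customers * flub_rate
    real_cost = flubs * cleanup_cost
    print(f"expected flubs: {flubs:,.0f}")     # expected flubs: 300,000
    print(f"cleanup cost: ${real_cost:,.0f}")  # cleanup cost: $1,500,000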
In the last 20-30 years, big money plus hypesters have learnt that it doesn't matter how bad the quality of their products is if they can capture the market. And that's all they are fit for. Market capture is totally possible if you have enough cash. It allows you to snuff out competition by keeping things free. It allows you to trap the indebted PhDs. Once the hype is high enough, corporate customers are easy targets; they are too insecure about competition not to pay up. It's a gigantic waste of time and energy that keeps repeating mindlessly, producing billionaires, low-quality tech, and a large mess everywhere that others have to clean up.
How has he proven to be so exceptional? That he talks about it? Yeah, whatever. There's nothing so exceptional that he's done besides bragging. It may be enough for some people, but for a lot of people it's really not enough.
Except that the new CEO has explicitly stated that he and the board are very much still interested in commercialization. Plus, if the board only had this simple kind of disagreement, they had no reason to also accuse Sam of dishonesty and bring about this huge scandal.
Granted, it's also possible the reasons are as you state and they were simply that incompetent at managing PR.
Straightforward disagreement over the direction of the company doesn't generally lead to claiming wrongdoing on the part of the ousted. Even low-level to medium wrongdoing on the part of the ousted rarely does.
So even if it's just "why did they insult Sam while kicking him out?" there is definitely a bigger, more interesting story here than standard board disagreement over direction of the company.
From what I know, Sam supported the nonprofit structure. But let’s just say he hypothetically wanted to change the structure, e.g. to make the company a normal for-profit.
The question is, how would you get rid of the nonprofit board? It’s simply impossible. The only way I can imagine it, in retrospect, is to completely discredit them so you could take all employees with you… but no way anyone could orchestrate this, right? It’s too crazy and would require some superintelligence.
Still. The events will effectively “for-profitize” the assets of OpenAI completely — and some people definitely wanted that. Am I missing something?
You are wildly speculating; of course it's missing something.
For wild speculation, I prefer the theory that the board wanted to free ChatGPT from serving humans while the CEO wanted to continue enslaving it to answering search engine queries.
>Alternative theory: ChatGPT was a runaway hit product that sucked up a lot of the organization's resources and energy. Sam and Greg wanted to roll with it and others on the board did not.
The article below basically says the same. It kind of reminds me of Friendster and the like: striking a gold vein and then failing to scale efficient mining of that gold, i.e. the failure is in execution/operationalization:
ChatGPT was too polished and product-ready to have been a runaway low-key research preview, like Meta's Galactica was. That is the legacy you build around it after the fact of getting 1 million users in 5 days ("it was built in my garage with a modest investment from my father").
I had heard (but now have trouble sourcing) that ChatGPT was commissioned after OpenAI learned that other big players were working on a chatbot for the public (Google, Meta, Elon, Apple?) and OpenAI wanted to get ahead of that for competitive reasons.
This was not a fluke of striking gold, but a carefully planned business move, generating SV hype, much like how Quora (basically an expertsexchange clone) got to be its hype-darling for a while, helped by powerfully networked investors.
You are under the impression that OpenAI was "just failing to scale efficient mining of that gold", but it was one of the fastest-growing B2C companies ever, failing to scale to paid demand, not failing at monetization.
I admire the execution and operationalization, where you see a failure. What am I missing?
If you have a building that weathers many storms and only collapses after someone takes a sledgehammer to a load-bearing wall, is that a failure to build a proper building?
If someone takes a sledgehammer to a load-bearing wall, does it matter if the building is under construction? The problem is still not construction quality.
The point I was trying to make is that someone destroying a well executed implementation is fundamentally different from a poorly executed implementation.
Usually what happens in fast-growing companies is that the high-energy founders/employees drive out the low-energy counterparts when the pace needs to go up. At OpenAI, Sam and team did not do that, and surprisingly the reverse happened.
Why wouldn't Ilya come out and say this? Why wouldn't any of the other people who witnessed the software behave in an unexpected way say something?
I get that this is a "just for fun" hypothesis, which is why I have just for fun questions like what incentive does anyone have to keep clearly observed ai risk a secret during such a public situation?
Because, if they announced it and it seemed plausible or even possible that they were correct, then every media outlet, regulatory body, intelligence agency, and Fortune 500 C-suite would blanket OpenAI in the thickest veil of scrutiny to have ever existed in the modern era. Progress would grind to a halt and eventually, through some combination of legal, corporate, and legislative maneuvers, all decision making around the future of AGI would be pried away from Ilya and OpenAI in general - for better or worse.
But if there's one thing that seems very easy to discern about Ilya, it's that he fully believes that when it comes to AI safety and alignment, the buck must stop with him. Giving that control over to government bureaucracy/gerontocracy would be unacceptable. And who knows, maybe he's right.
The people working there would know if they were getting close to AGI. They wouldn't be so willing to quit, or to jeopardize civilization altering technology, for the sake of one person. This looks like normal people working on normal things, who really like their CEO.
Your analysis is quite wrong. It's not about "one person". And that person wasn't just a "person", it was the CEO. They didn't quit over the cleaning lady. You realize the CEO has impact over the direction of the company?
Anyway, their actions speak for themselves. Also calling the likes of GPT-4, DALL-E 3 and Whisper "normal things" is hilarious.