
How do we know the mission got thrown out a window? After days of intense controversy, the board has yet to clearly explain how Altman was betraying the mission.

Did he ignore safety? Did he defund important research? Did he push forward on projects against direct objections from the board?

If there’s a good reason, then let everybody know what that is. If there isn’t, then what was the point of all this?



He went full-bore on commercialization, scale, and growth. He started to ignore the 'non-profit mission'. He pushed out shoddy, underprovisioned products to be first to market. While talking about safety out of one side of his mouth, he was pushing the typical profit-driven hypergrowth mindset out of the other: "move fast and break things", "build a moat and become a monopoly asap".

Not to mention that he was aggressively fundraising for two companies that would either be OpenAI's customers or sell products to OpenAI.

If OpenAI wants commercial hypergrowth, pushing out untested stuff as quickly as possible in typical SV style, they should get Altman back. But that does seem to contradict their mission. Why are they even a nonprofit? They should just restructure into a full for-profit juggernaut and stop living in contradiction.


ChatGPT was underprovisioned relative to demand, but the demand was unprecedented, so it's not really fair to criticize much on that front.

(It would have been a much bigger blunder to, say, build out 10x the capacity before launch, without knowing whether demand would materialize to support it.)

Also, ChatGPT's capabilities are what drove the huge demand, so I'm not sure how you can argue it is "shoddy".


Shipping a broken product is a typical strategy to gain first-mover advantage and try to build a moat. Even if it's mostly broken, if it's high value, people will sign up and try to use it.

Alternatively, you can restrict signups and do gradual rollout, smoothing out kinks in the product and increasing provisioning as you go.

In 2016/17 Coinbase was totally broken. Constantly going offline, fucking up orders, taking 10 minutes to load the UI, UI full of bugs, etc. They could have restricted signups but they didn't want to. They wanted as many signups as possible, and decided to live with a busted product and "fix the airplane while it's taking off".

This is all fine; you just need to know your identity. For a company that keeps talking about safety, being careful about what it builds, what it puts out in the wild, and its potential externalities, acting recklessly Coinbase-style does not fit the rhetoric. It's the exact opposite of it.


In what way is ChatGPT broken? It goes down from time to time and has minor bugs, but other than that, the main problem is hallucination, a well-known limitation of all current LLM products.

This hardly seems equivalent to what you describe from Coinbase, where no doubt people were losing money due to the bad state of the app.

For most startups, one of the most pressing priorities at any time is trying to not go out of business. There is always going to be a difficult balance between waiting for your product to mature and trying to generate revenue and show progress to investors.

Unless I’m totally mistaken, I don’t think that OpenAI’s funding was unlimited or granted without pressure to deliver tangible progress. Though I’d be interested to hear if you know differently. From my perspective, OpenAI acts like a startup because it is one.


A distasteful take on an industry-transforming company. For one, I'm glad OpenAI released models at the pace they did, which not only woke up Google and Meta but also breathed new life into a tech scene that had been subsumed by web3. If products like GitHub Copilot and ChatGPT are your definition of "shoddy", then I'd like nothing more than for Sam to accelerate!


I'm just saying that they should stop talking about "safety", while they are releasing AI tech as fast as possible.


Because the mission is visibly abandoned. There's nothing "open" about OpenAI. We may not know how the mission was abandoned, but we know Sam was CEO, and hence responsible.


There was never anything open about OpenAI. If there were, I'd have access to their training data, training infra setup, and weights.

The only thing that changed is the reason why the unwashed masses aren't allowed to see the secret sauce: from alignment to profit.

A plague on both their houses.


They don't publish papers now; they actually published papers and code before.

No doubt OpenAI was never a glass house... but it seems extremely disingenuous to say their behavior hasn't changed.


What was "open" about it before that?


The first word in their company name.


Isn't Ilya even more against opening up models? OpenAI is more open in one way: it's easier to get API access (compared to, say, Anthropic).


What was "open" before ChatGPT?



In terms of the LLMs, it was abandoned after GPT-2, when they realised the dangers of what was coming with GPT-3/3.5. Better to paywall and monitor access than open-source it and let it loose on the world.

i.e. the original mission was never viable long-term.


> How do we know the mission got thrown out a window?

When was the last time OpenAI openly released any AI?


Whisper v3, just a couple weeks ago https://huggingface.co/openai/whisper-large-v3



Whisper maybe?


Exactly.

All this "AI safety" stuff is at this point pure innuendo.



