You have to read between some lines and connect some dots here, since nothing is really publicly known and it's difficult to know whether we can trust public statements anyway. Which of course means I could be completely wrong.
Still, an example I'd point to is this article where Dustin Moskovitz, who has close ties to the OpenAI board [1], is quoted saying "The thing I’m most interested in is making sure that state-of-the-art later generations, like GPT-5, GPT-6, get run through safety evaluations before being released into the world". [2] I'm not sure how else to read that other than that he doesn't think normal peons should have unfiltered access to this tech, and it would be unsafe if they did.
Beyond that, if the board shared my view (perhaps yours as well?) that the safest thing would be to make all of this FOSS, and either profits or elitist safetyism weren't important to them, why wasn't everything FOSS already?
Less concretely - the kind of elitist "we know best" attitude is exactly what I've come to expect from the effective altruism crowd, which several of the players involved here have publicly expressed alignment with.
Thanks for the links. I agree that, yes, it's all about connecting dots.
I also think the definition of safety is still in its early days and has been mangled by OpenAI's privatization.
I also think the GPT vX formula will only stay novel for so long. Eventually Facebook or Google will catch up with something comparable, and if it's Facebook, they will nullify the value of GPT by open-sourcing it anyway.
To me, this reasoning downplays the safety-by-elites angle as valueless, however you define the word safety. In the end this battle is about two things: the betterment of humanity or making money. One will trump the other; it's just a question of how much you want to gain in the short term. I'd personally love to see all these already wealthy people accept a smaller paycheck if it means humanity gets a leg up a little earlier, safe or not. FOSS the damned thing.
PS - What a world where Zuckerberg may be the great equalizer…
[1] https://news.ycombinator.com/item?id=38353330
[2] https://www.cnbc.com/2023/06/24/asanas-dustin-moskovitz-is-b...