Much as I like the idea of punishing Boeing, this doesn't make any sense in terms of personal safety. Airliner crashes are so unlikely to kill you that, in terms of your own personal safety, it just isn't worth worrying about.
No, what I said was correct. It simply isn't the case that avoiding Boeing airliners appreciably improves your personal safety.
Airliner crashes always make the news, so many casual observers overestimate their frequency by orders of magnitude. The statistics remain clear as day: airliner crashes are incredibly rare. They're incomparably rarer than road traffic accidents, for instance. Most years, no US airlines have any fatal crashes.
I recommend this YouTube video as a general overview of aviation safety. It's about the Kobe Bryant helicopter crash, but also covers airliner safety and big-picture aviation safety.
Edit for context (thanks /u/janice1999): there are 11,182 Airbus A320s and ~8,400 Boeing 737 NG/MAX aircraft, so even pro-rated by fleet size, Boeing's recent planes come out worse; and the A320 has been out a few years longer, too.
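For illustration, the pro-rating argument can be sketched like this (the fleet sizes are the ones quoted above; the incident counts are hypothetical placeholders, not real crash statistics):

```python
# Pro-rating incidents by fleet size (sketch).
# Fleet sizes are from the comment above; the incident counts
# below are HYPOTHETICAL placeholders, not real figures.
fleets = {
    "Airbus A320 family": {"delivered": 11182, "incidents": 5},
    "Boeing 737 NG/MAX":  {"delivered": 8400,  "incidents": 5},
}

for name, d in fleets.items():
    # Incidents per 1,000 airframes delivered.
    rate = d["incidents"] / d["delivered"] * 1000
    print(f"{name}: {rate:.2f} incidents per 1,000 airframes")
```

With equal incident counts, the larger A320 fleet comes out with the lower per-airframe rate, which is the point being made.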
We'd probably also want to see separate stats for issues that occur shortly before landing or after takeoff -- stuff that may be more likely to come up with every flight regardless of duration.
Airbus’s human-factors engineering was so bad that it led experienced pilots to fly a perfectly good aircraft into the ocean. They were repeatedly warned about this and still have not fully fixed it.
Airbus’s flight controls worked sufficiently well that the pilots could still keep the plane in the air and then land it successfully despite the massive damage to the plane.
They acted on it. Wikipedia writes: "On 12 August 2009, Airbus issued three mandatory service bulletins, requiring that all A330 and A340 aircraft be fitted with two Goodrich 0851HL pitot tubes and one Thales model C16195BA pitot (or, alternatively, three of the Goodrich pitot tubes); Thales model C16195AA pitot tubes were no longer to be used."
Yes, they were.
They iced over, and the airspeed readings they reported disagreed with one another.
Because of this, the autopilot disengaged and the flight controls switched into alternate law 2.
The pilot flying failed to realise that his inputs now had a much bigger effect than in normal flight. He panicked, and on multiple occasions he also failed to relinquish control to the pilot with far more experience on that model.
How are the pitot tubes not the root cause?
This is incorrect. The correct analogy would be if I set the speed of my automated truck to 20 mph over the speed limit, earning me 10% more income for 10 years. Then the truck crashes and burns, and my neighbors pay for it.
I’d agree that’s what happens with the big rescue loans that save businesses. But that’s not what’s happening here. There’s just such extreme hyperbole about how this removes all risk for banks.
I guess where I can meet you in the middle is this: in this crash from excessive speed (not over the speed limit, but only because they lobbied to have the speed limit raised), the customers are taken care of, the business owner loses his business, and his competitors have to pay for the cleanup.
I’m curious how the banks feel about this. I really don’t believe that doubt about the banking industry is in their favor, even if it could be a differentiator in theory. The amount they’ll pay is a tiny fraction compared to the market cap lost this week.
OpenAI CEO Sam Altman's take is that they will only ever allow API access to their models, to avoid misuse.
I don't get HN's take with wanting everything open sourced. Some things are expensive to create and dangerous in the wrong hands. Not everything can and should be open sourced.
Can you think of any non-weapons examples where centralization/gatekeeping of a tech meaningfully and causally benefited society or a technology itself?
Actually, thinking about my own question, I'm even inclined to remove the non-weapons qualifier. The most knee-jerk response, nuclear weapons, is perhaps the best example of unexpected benefit. The 'decentralization' of nuclear weapons is undoubtedly why the Cold War was the Cold War and not World War 3, and similarly why we haven't seen an open war between nations with nuclear weapons. 'One power to rule over all' suddenly turned into "war with this country no longer has a win scenario", effectively ending open warfare between nuclear nations.
There's also the inevitability/optics argument. There are already viable open source alternatives [1], and should this tech ultimately prove viable/useful that will only be the beginning. So there certainly will be "ai" that will be open, it just won't come from OpenAI(tm)(c).
I agree with your view that nuclear weapons on both sides prevent war. However, they’ve only ever been developed by a small number of capable and motivated nations, with considerable resources involved. The later ones (North Korea, Pakistan) developed them while other nations tried to prevent them from doing so.
If ML models continue their exponential growth in size, a similar outcome is possible.
I really don’t like the argument that you should make things free just because it makes the world better. What happened to ownership and respecting the effort it takes to create something?
I see a similar line of reasoning can be used to justify theft from the rich.
I'm not entirely sure whether to praise or condemn them for it, but OpenAI has chosen to keep their initial introduction/company plan publicly available on their site: https://openai.com/blog/introducing-openai/
----
"OpenAI is a non-profit artificial intelligence research company. Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. Since our research is free from financial obligations, we can better focus on a positive human impact.
...
As a non-profit, our aim is to build value for everyone rather than shareholders. Researchers will be strongly encouraged to publish their work, whether as papers, blog posts, or code, and our patents (if any) will be shared with the world. We’ll freely collaborate with others across many institutions and expect to work with companies to research and deploy new technologies."
----
My mocking about OpenAI(tm)(c) was not just juvenile "Micro$oft" type nonsense. At some point they discovered they could make a buck, and their ideology suddenly shifted 180. I have no qualms whatsoever about businesses pursuing profit, but the entity currently known as OpenAI couldn't be much further from the principles and values OpenAI was founded on, and their name itself is rapidly trending towards becoming a "Don't Be Evil" type of sardonicism. If this was Microsoft, Google, or other such companies operating in this way - I wouldn't have any expectation of anything besides what OpenAI is doing.
Companies tend to get quite a lot of credit when claiming some socially motivated interest, probably much more than deserved. So when they turn against those ideals, it should be noted - loudly.
> What happened to ownership and respecting the effort it takes to create something?
Ironic, considering that the current generation of AI models is more or less copyright laundering for the big corporations. GitHub Copilot is an extreme example: using GPL projects to generate "proprietary" closed-source code. What happened to ownership and respecting the effort it takes to create something?
It sounds almost like a rehashing of Locke's labor theory of property wrt ownership of land that's very popular with classical liberals and libertarians. As that goes, land is initially nobody's, but when some person applies labor to improve or develop it somehow, that labor being "mixed in" makes the whole thing the property of the laborer.
Here, instead of common land, what we have is the common content. And they're saying that, by "developing" that content into a model that can do more useful things, the authors of the model are entitled to full private property rights on it.
I really hope that's not where we're going to end up, legally speaking.
A cluster of 6 year old 24GB NVIDIA Teslas should do the trick...they run for about $100 apiece. Put 12 or so of them together and you have the VRAM for a GPT3 clone.
Amazon has them listed at $200, but still, that's only $2,400 for 12 of them.
Still, it adds up once you get the hardware you'd need to NVLink 12 of them, and on top of that, the power/performance you get probably isn't great compared to modern compute.
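The back-of-envelope math above can be sketched like this (the $200 price and 24 GB per card are from the comments; the per-card power draw is an assumed ballpark, not a quoted spec):

```python
# Back-of-envelope totals for the 12-card cluster described above.
# $200/card and 24 GB/card come from the comments; the 250 W
# per-card figure is an ASSUMED ballpark for an older Tesla.
num_cards = 12
price_per_card = 200   # USD, used, per the comment
vram_per_card = 24     # GB
power_per_card = 250   # W, assumed ballpark

total_cost = num_cards * price_per_card    # 2400 USD
total_vram = num_cards * vram_per_card     # 288 GB
total_power = num_cards * power_per_card   # 3000 W

print(f"Cards: ${total_cost}, VRAM: {total_vram} GB, "
      f"~{total_power / 1000:.1f} kW under load")
```

Even before the interconnect hardware, the power bill alone is a real cost at roughly 3 kW under sustained load.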
Wonder what your volume would have to be before getting a box with 8 A100's from Lambdalabs would be the better tradeoff.
If you have time to wait for results then sure, it could work in theory. But in practice they're so slow and power-inefficient (compared to newer nodes) that no one uses them for LLMs; that's why they cost ~$200 used on eBay.
I just checked eBay and they are shockingly cheap. I can't even get DDR3 memory for the price they're selling 24 GB of GDDR5... with a GPU thrown in for free.
Why is this? Did some large cloud vendor just upgrade?
Are there any deals like this on AMD hardware? Not having to deal with proprietary binary drivers is worth a lot of money and reduced performance to me. A lot.
These are pretty old, and all the companies are upgrading. But no one is upgrading from AMD hardware - basically no companies care if they use proprietary drivers. They want a good price-to-performance ratio, so they use NVIDIA stuff.
My take is that it's not good for democracy when the CEO of a private company is the one who decides what constitutes "misuse" and whose hands are "wrong" when it comes to access to a major technological breakthrough.
Yeah, just like the existence of Windows prevented Linux from ever existing.
The wrong hands have the money to seek alternatives. All this policy does is keep it out of the hands of the public, and ensure that whatever open alternatives start up won't be OpenAI's.
The secret to beating Isshin is that you don't try to kill him. Think of him as a friend you're trying to hang around with and not let go of. Once you get muscle memory for all his move sets, you can sense that you can end the game whenever you want.
There was an event with several road closures, and it managed to reroute using an alternative path.
Sometimes the Waymo was cheaper than Uber, and at other times it was twice as expensive.