Agreed. Starting from before the Anthropic exodus, I suspect the timeline looks like:
(2015) Founding: majority are concerned with safety
(2019) For profit formed: mix of safety and profit motives (majority still safety oriented?)
(2020) GPT-3 released to much hype, leading to many ambition chasers joining: the profit-seeking side grows.
(2021) Anthropic exodus over safety: the safety side shrinks
(2022) ChatGPT released, generating tons more hype and tons more ambitious profit seekers joining: the profit side grows even more, probably quickly outnumbering the safety side
(2023) this week's shenanigans
The safety folks probably lost the majority a while ago. Maybe back in 2021, but definitely by the time the GPT-3/ChatGPT-motivated newcomers were in the majority.
Maybe one lesson is that if your cofounder starts hiring a ton of people who aren’t aligned with you, you can quickly find yourself in the minority, especially once people on your side start to leave.
This is why I never understood people resigning in protest, as was the case with Google's military contracts. You simply ensure that the culture change happens more swiftly.
There's always other companies. Plus sometimes you just gotta stick to your values. For the Google military contracts it makes even more sense: the protest resignation isn't just a virtue signal, it's also just refusing to contribute to the military.
> If you want to deter military action involving your country, contributing to its defense is probably the best thing that you can do.
Given that Google is an American company, do you believe contributing to the American Department of "Defense" increases, or decreases, the amount of military action involving the USA?
The American military isn't called "world police" for nothing, and just like the cops they're sticking their noses where they don't belong and making things worse. I can understand why people would want no part in furthering global violence and destitution.
> If you believe that your country offers value (compared to the rest of the world), you should take any opportunities you can to serve.
Really? There's an obligation to further the American global ambition through contributing militarily? You can't think of any other way to spread American culture and values? To share the bounty of American wealth?
We already have nation states and wannabe nation states that take shots at us when they feel like the opportunity is there, despite us being the dominant military around the world. So does France, which has been a leading military power for longer than we have.
That's what having a funded and functional defense is all about -- making other entities not think that the opportunity is there.
I think the planet has relatively finite resources and that I'm god damned lucky to have been born into a nation with the ability to secure resources for itself. I enjoy my quality of life a great deal and would like to maintain it. At a minimum. Before helping others.
If you're the type of person who thinks we should just give up this position by not adequately defending it through military and technology investment, I would prefer that you just expatriate yourself now rather than let some other ascendant nation dictate our future and our quality of life.
If you're the kind of person who feels strongly for the plight of, for example, the Palestinians, you should recognize that the only way to deter those kinds of outcomes is to secure the means to establish and maintain sovereignty. That requires a combination of force and manpower.
> If you're the type of person who thinks we should just give up this position by not adequately defending it through military and technology investment, I would prefer that you just expatriate yourself now rather than let some other ascendant nation dictate our future and our quality of life.
But I thought you said if we want to fix something we should do so from within the system? I'm interested in ending American imperialism, by your logic isn't the best place to do so from within the USA?
> We already have nation states and wannabe nation states that take shots at us when they feel like the opportunity is there
From which nation state do you feel an existential threat? I haven't heard "they defend our freedoms" in a very, very long time, and I thought we all knew it was ironic.
> I think the planet has relatively finite resources
I'm curious about this viewpoint, because it seems to necessarily imply that the human race will simply die out when those resources (and those of the surrounding solar system) are exhausted. Is sustainability just, not a concept in this worldview?
> That's what having a funded and functional defense is all about -- making other entities not think that the opportunity is there.
It seems in the case of the USA, the "functional defense" is more often used to destabilize other nations and arm terrorists that then turn around and attack the USA. It's really interesting you brought up Palestinian liberation as an example, because really one of the only reasons Israel is able to maintain its apartheid state and repression of the Palestinians is because of USA aid. In your understanding, both the Israelis and the Palestinians should arm up and up and up until they're both pointing nukes at each other, correct? That's the only pathway to peace?
But very few other countries are doing these things to the countries the USA is doing them to, so I disagree with your contention. It seems to me that if the countries doing these awful things stop doing them, the awful things will in fact stop happening, at least for a time.
As for to whom the awful things are happening, that's practically moot: humans are humans. I don't really understand why I should accept bad things happening to humans just because I was born on one side of an invisible line and they were born on another. Seems extremely fallacious and irrational, if not sociopathic.
> If your starting point of logic though is "America Bad" then your moralizing isn't about working at Google or not.
The discussion is about the evils perpetrated by the American military industrial complex, and why people may not want to work for companies that participate in this complex, Google being one of these companies. I similarly won't work for Raytheon or Halliburton, for obvious examples. So yes, it's not about working at just Google.
Though for what it's worth, I actually agree with you that the more ethical course of action would be to stay at Google, try to get into a military-adjacent project, and then sabotage it, probably via quiet quitting or holding lots of unnecessary meetings and wasting other people's time. This is directly out of the CIA's playbook, in fact. PDF: https://www.cia.gov/static/5c875f3ec660e092cf893f60b4a288df/...
We're in a really privileged position in human civilization where most of the species is far removed from the banal violence of nature.
You're lucky if you only need weapons to defend yourself against predators.
You're far less lucky if you need weapons to defend yourself because the neighboring mountain village's crops failed and their herds died. You're less lucky if you don't speak the same language as them and they outnumber you three to one and are already 4 days starving. You're less lucky that they're already 4 days starving, wielding farm tools, running down the hills at you, crazed and screaming.
Sure, but the point is that those same tools that you can use to fend off the neighboring village work equally well to invade the neighboring village for their prosperous farm land. You will not be the one that gets to decide how those tools are used.
There's a line of thought that having advanced weaponry inherently promotes its use, because who's going to stop you? How gung-ho do you think the US would have been about going to war in Iraq if it weren't for the billion dollar tanks and aircraft and bombs?
Just about any technology advancement ever has weapons potential. If you want to take that reasoning, just quiver in fear at home and not develop anything.
I disagree. All technological development changes human societies, imposes its rules upon them, and rules over them, not the other way around. The combination of technology and human nature is an unstoppable deterministic force, one whose effects are easily predicted, in hindsight, when traced from the invention of cannons (?). No modern (organization-dependent) technology should ever have been developed. People lived happier, more mentally healthy, and more fulfilling lives despite worse material conditions, and the lifespan isn't that bad once you factor out child mortality anyway. It turns out the human brain can easily deal with bad material conditions (it is actually even designed to) if it isn't messed up by a thousand addictions, mouthbreathing, sedentary life, and smartphones.
Saying this as an aspiring software engineer. I use NixOS, and rewrite things in Rust. It’s not some unga-bunga speaking.
Your argument is to not read the ideas of someone because he did something bad? What’s the reason? Are you that gullible that you can’t evaluate them with your own mind and conscience and will start shooting everyone the moment you finish his manifesto?
Ted Kaczynski's writings were his worldview and what led him to send out a dozen bombs, including one which exploded on an American Airlines flight and luckily did not bring the plane down. His manifesto is his justification for those actions, and to advocate becoming a student of it similarly justifies that violence.
It's the same self-own as the massive dummies in the last week who were all talking about Osama bin Laden's Letter to America being right, when it was his justification for killing thousands of people on 9/11 and effectively kicking off multiple wars and a further death toll.
It is a disgusting suggestion and I believe that you should seek professional help.
I'm a Muslim and I, by definition, don't agree with Kaczynski's idea of using violence to bring the system down. OTOH I believe all organization-dependent technology[1] is evil and has only harmed humanity, never benefited it. Obviously this presumes a different understanding of harm and benefit, one not the same as the plain pain-avoidance and convenience-seeking that the technological system tends to create in people.
Philosophers in general don't have a hard time separating the terrorism of Ted Kaczynski from his philosophy.
> James Q. Wilson, in a 1998 New York Times op-ed, wrote: "If it is the work of a madman, then the writings of many political philosophers—Jean Jacques Rousseau, Thomas Paine, Karl Marx—are scarcely more sane." He added: "The Unabomber does not like socialization, technology, leftist political causes or conservative attitudes. Apart from his call for an (unspecified) revolution, his paper resembles something that a very good graduate student might have written."
Suggesting someone seek professional help because they read a widely-discussed manifesto is insulting. In your bio you say most people are morons; it makes me think of the saying, "if everyone you meet is an asshole, maybe you're the asshole."
After all, to you, Osama bin Laden, who was trained by US Special Forces and worked with the Mujahideen, which received upwards of $6 billion in aid from the USA, Saudi Arabia, and China, is somehow responsible for the two-decade-long "War on Terror" launched seemingly at random by the Americans into countries now determined to be unrelated to the 9/11 attacks. 9/11 was a tragedy for certain, but to use it to justify the deaths of millions of completely unrelated innocents... well, it certainly clarifies why our other thread has gone in the direction of you trying to justify imperialism to serve the purpose of nationalism.
That's always an issue with weapons, but if you opt out then you don't have them when you might need them.
It's a dangerous world out there.
Luckily for us, technology is still more often used for good. Explosives can kill your enemies, but they can also cut paths through mountains and bring together communities.
IMO, the virtue signal where people refuse to work on defense technology is just self-identifying with the worst kind of cynicism about human beings and signaling poor reasoning skills.
The Manhattan Project, which had the stated purpose of building and deploying nuclear _weapons_, employed our most brilliant minds at the time and only two people quit -- and only one because of an ethics concern. Joseph Rotblat left the project after the Nazis were already defeated and because defeating the Nazis was the only reason he'd signed on. Also this is disputed by some who say that he was more concerned about finding family members who survived the Holocaust...
> Luckily for us, technology is still more-often used for good.
When you are on the other end of the cannon, or your heart is beating with those who are, you tend to not say that. Iraq, Syria, Palestine, Afghanistan…
I'm curious how much progress they ever made on that, to be honest. I'm not aware of how Claude is "safer", by any real-world metric, compared to ChatGPT.
Claude 2 is, IMO, safer and in a bad way. They did "Constitutional AI", and it made Claude 2 safer but sadly dumber than Claude 1. Which is why, on the Arena leaderboard, Claude 1 still scores higher than Claude 2...
Why do you find this so surprising? You make it sound as if OpenAI is already outrageously safety focused. I have talked to a few people from anthropic and they seem to believe that OpenAI doesn't care at all about safety.
It is unfortunate that some people hear AI safety and think about chatbots saying mean stuff, and others think about a future system performing the machine revolution against humanity.
Disincentivizing it from saying mean things just strengthens its agreeableness, and inadvertently incentivizes it to acquire social engineering skills.
Its potential to cause havoc doesn't go away; it just teaches the AI how to interact with us without raising suspicions, while simultaneously limiting our ability to prompt/control it.
Your guess is about as good as anyone else's at this point. The best we can do is attempt to put safety mechanisms in place under the hood, but even that would just be speculative, because we can't actually tell what's going on in these LLM black boxes.
How do we tell whether a human is safe? Incrementally granted trust with ongoing oversight is probably the best bet. Anyway, the first malicious AGI would probably act like a toddler script-kiddie, not some superhuman social engineering mastermind.
> murderous tendencies lurking beneath the surface
…Where is that "beneath the surface"? Do you imagine a transformer has "thoughts" not dedicated to producing outputs? What is with all these illiterate anthropomorphic speculations where an LLM is construed as a human who is being taught to talk in some manner but otherwise has full internal freedom?
No, I do not think a transformer architecture in a statistical language model has thoughts. It was just a joke.
At the same time, the original question was how can something that is forced to be polite engage in the genocide of humanity, and my non-joke answer to that is that many of history's worst criminals and monsters were perfectly polite in everyday life.
I am not afraid of AI, AGI, ASI. People who are, it seems to me, have read a bit too much dystopian sci-fi. At the same time, "alignment" is, I believe, a silly nonsense that would not save us from a genocidal AGI. I just think it is extremely unlikely that AGI will be genocidal. But it is still fun to joke about. Fun, for me anyway, you don't have to like my jokes. :)
Might be more for PR/regulatory capture/SF cause du jour reasons than the "prepare for later versions that might start killing people, or assist terrorists" reasons.
Like, one version of the story you could tell is that the safety people invented RLHF as one step in a chain toward eventual AGI safety, but corporate wanted to use it as a cheaper content filter for existing models.
In another of the series of threads about all of this, another user opined that the Anthropic AI would refuse to answer the question 'how many holes does a straw have'. Sounds more neutered than GPT-4.
I don't think this has anything to do with safety. The board members voting Altman out all got their seats when OpenAI was essentially a charity, and those seats were bought with donations. This is basically the donors giving a big middle finger to everyone else trying to get rich off of their donations while they get nothing.