
> Like yes, we are able to think of thousands of hypothetical ways technology (even those inferior to full AGI) could go off the rails in a catastrophic way and post and discuss these scenarios endlessly... and yet it doesn't result in a slowing or stopping of the progress leading there.

The problem is sifting through all of the doomsayer false positives to get to any amount of cogent advice.

At the invention of the printing press, there were people with this same energy. Obviously those people were wrong. And if we had taken their "lesson", then human society would be in a much worse place.

Is this new wave of criticism about AI/AGI valid? We will only really know in retrospect.



> At the invention of the printing press, there were people with this same energy. Obviously those people were wrong.

Were they?

The first thing the printing press did was to break Christianity. It's what made attempts at reforming the Catholic Church finally stick, enabling what we now call Reformation to happen. Reformation forever broke Christianity into pieces, and in the process it started a bunch of religious wars in Europe, as well as tons of neighborly carnage.

> And if we had taken their "lesson", then human society would be in a much worse place.

Was the invention of the printing press a net good for humanity? Most certainly so, looking back from today. Did people living back then know what they were getting into? Not really. And since their share of the fruits of that invention was mostly bloodshed, job loss, and the shattering of the world order they knew, I wouldn't blame them for being pissed off about getting the short end of the stick, and perhaps looking for ways to undo it.

I'm starting to think that talking about inventions as good or bad (or the cop-out, "dual use") is bad framing. Rather, it seems to me that every major invention will eventually turn out beneficial[0], but introducing an invention always first extracts a cost in blood. Be it fire or printing press or atomic bomb, a lot of people end up suffering and dying before societies eventually figure out how to handle the new thing and do some good with it.

I'm very much in favor of progress, but I understand the fear. No matter the ultimate benefits, we are the generation that coughs up blood as payment for AI/AGI, and it ain't gonna be pleasant.

--

[0] - Assuming they don't kill us first - see AGI.


It's not the fault of the printing press that the Church built its empire upon the restriction of information and was willing to commit bloodshed to hold onto its power.

All you've done is explain why the printing press was so important and necessary in order to break down previous unwarranted power structures. I have a similar hope for AGI. The alternative is that the incumbent power structure instead benefits from AGI and uses it for oppression, which would mean it's not comparable to the printing press as such.


You're wrong in your characterization. The Church may have built its empire upon a degree of information control, but breaking that control alone does not explain what happened. Everyone getting a Bible in their language alone wasn't sufficient.

What the printing press did was rapidly increase the amount, type, range and speed of information spread. That was a qualitative difference. The Church did not build its empire on restricting that, because before the printing press, that kind of spread was not even possible (or conceivable).

My overall point wrt. inventions is this: yes, it may end up turning out for the best. But at the time an invention of this magnitude appears and spreads, a) no one can tell how it'll pan out, and b) they get all the immediate, bloody downsides of disrupting the social order.


> The Church did not build its empire on restricting that

Masses were often held in Latin, printed material was typically written in Latin and Greek, and access to translated texts was frequently prohibited or condemned. They tried hard to silence those like Wycliffe who made the Bible more readily available to the masses, and he was posthumously denounced as a heretic by the Church. They absolutely wielded information as a tool of oppression.

This is not a hill to die on; the historical facts are clear despite the efforts of the Church.

> What the printing press did was rapidly increase the amount, type, range and speed of information spread

Sure, but that's not the only thing it did.


I don't think you're being very charitable.

Consider that at the time the printing press was first invented, books were by their nature often assumed to be true, or high quality, because it took an institutional amount of effort (usually on the part of a monastery, university, local government, etc.) to produce one. Bible translations were produced, but they were understood to be "correctly translated". This was important because if the Church was going to have priests go around preaching to people, they needed to be sure they were doing so correctly -- a mistranslated verse could lead to mistranslated doctrines &c, and while a modern atheist might not care too much ("that's just one interpretation") at the time the understanding was that deviations in opinion could lead to conflict. Ultimately they were right: the European Wars of Religion led to millions of deaths, including almost 1/3 of the population of Germany. That's on the same scale as the Black Death!

And again, translations did exist before the Reformation: even ignoring that the Latin Bible (the Vulgate) was itself a translation of the original Hebrew & Koine Greek, the first _Catholic French_ translation was published in 1550, and there was never a question of whether to persecute the authors. You might say, but that was because of the Reformation -- then consider the Alfonsine Bible, composed in 1280 under the supervision of a Catholic King and the master of a Catholic holy order. Well before then there were partial translations too: the Wessex Gospels (Old English) were translated in 990, and to quote Victoria Thompson, "although the Church reserved Latin for the most sacred liturgical moments almost every other religious text was available in English by the eleventh century". That's five hundred years before the Reformation. So the longest period you can get where the Church was not actively translating texts was c. 400 - c. 900, a period you probably know as the "Dark Ages" specifically thanks to the fact that literary sources of all kinds were scarce, in no small part because the resources to compose large texts simply weren't there. Especially when you consider that those who could read and write generally knew how to read and write Latin -- vernacular literacy only became important later, with the increase in the number of e.g. merchants and scribes -- such translations held little value during that period.

So fast forward to Wycliffe. Clearly, the Church did not have anything against translations of the Bible per se. What they disagreed with in Wycliffe's translation were the decisions made in translation. And as more of these "unapproved Bibles" began circulating around, they decided that the only way to curtail their spread was to ban all vernacular books specifically within the Kingdom of England, because that's where the problem (as they saw it) was. And it wasn't just translations -- improperly copied Latin copies were burned too.

Think about today, with the crisis around fake videos. On one hand you could say that they distort the truth, that they promote false narratives, etc. You could try to fine or imprison people that go around publishing fake videos of e.g. politicians saying things they never said, or of shootings/disasters that never took place, to try and cause chaos. Yet who's to say that in a few hundred years someone -- living in a world that has since adjusted to a freer flow of information, one with fewer ways to tell whether something is true or not -- won't say "deepfakes &c are a form of expression, and governments shouldn't be trying to stop them just because they disagree with existing narratives"?

Of course we today see book burning as some supreme evil. But when you're talking about the stability of nations and whole societies, can you really say "how dare they even try"? If there were some technology that made it impossible for governments to differentiate between citizens, which made it possible for a criminal to imitate any person, anywhere, would you really oppose the government's attempts at trying to stop it from propagating?


Disassembling power structures, including unwarranted ones, is rarely an event that doesn't result in some amount of bloodshed, because, as it turns out, power structures like having power and will do a whole lot of evil things to keep control of that power. I fully, wholeheartedly endorse the destruction of settler colonial capitalism; I believe it's a blight on our planet, on our species, and on our collective psyche, and the best present-day candidate for a Great Filter. But I also know full well that the process is going to get a lot of people killed, and I fully support approaching it cautiously for that reason.

> The alternative is that the incumbent power structure instead benefits from AGI

Also, tangentially related, in what way is the current power structure not slated to benefit from AGI? That's why OpenAI and company are getting literally all of the money the collected hyperscaler club can throw at them to make it. That's why it's worth however-many-billions it's up to by now.


Lots of good content here, but the main group that “suffered” from the invention and spread of the printing press was the aristocracy, so I am not shedding tears.

As for “breaking” Christianity: Christianity has been one schism after another for 2000 years: a schism from a schism from a schism. Power plays all the way down to Magog.

Socrates complained about how writing and the big boom in using the new Greek alphabet was ruining civilization and true learning.

And on and on it goes.


The Reformation's wars raged on and off for over a century and killed millions. Parts of Europe were nearly depopulated.

https://en.wikipedia.org/wiki/Thirty_Years%27_War


Yes, but I think massive technical improvements in munitions, methods of siege warfare and the switch to flintlocks and cartridges were a much more proximate cause of destruction than the printing press.

Give a ruler and ruling class a new weapon and off they go killing and destroying more “efficiently”.


Not really.

Before the Reformation there had only been one schism (Eastern Roman Empire - Orthodox; Western Roman Empire - Catholic).

The Reformation was the time when the fragmentation of Christianity really exploded.


There was the Arian heresy in the 4th century that caused a de facto schism and divided the Church off and on for a couple of centuries.


I think that is overstating the relevance of the printing press vs existing power struggles, rivalries, discontent, etc. - it wasn't some sort of vacuum that the reformation happened in, for example.

Religious schisms happened before the printing press, too. There was the Great Schism in 1054 in Christianity, for example.


> it wasn't some sort of vacuum that the reformation happened in, for example.

No, it wasn't. Wikipedia lists[0] over two dozen schisms that happened prior to the Reformation. However, the capital-R Reformation was the big one, and the major reason it worked - why Luther succeeded where Hus had failed a century earlier - was the printing press. It was print that allowed Luther's treatises to rapidly spread among the general population (Wikipedia cites some interesting claims here[1]), and across Europe. In today's terms, the printing press is what allowed the Reformation to go viral. This new technology made the revolution spread too fast for the Church to suppress it with the methods that had worked before.

Of course, the Church survived, adapted, and embraced the printing press for its own goals too, like everyone else. But the adaptation period was a bloody one for Europe.

And I only covered the religious aspects of the printing press's impact. There are similar stories to draw on the more secular front, too. In fact, another general change printing introduced was to get regular folks more informed and involved in the politics of their regions. That's a change for the better overall, too, but initially it injected a lot of energy into socio-political systems that weren't used to it, leading to instability and more bloodshed before people got used to it and politics found a new balance.

> existing power struggles, rivalries, discontent, etc.

Those always exist, and stay in some form of equilibrium. Technology doesn't cause them - but what it does is disturb the old equilibrium, forcing society to find a new one, and this process historically often got violent.

--

[0] - https://en.wikipedia.org/wiki/Schism_in_Christianity#Lists_o...

[1] - https://en.wikipedia.org/wiki/Reformation#Spread - see e.g. footnote 28: "According to an econometric analysis by the economist Jared Rubin, 'the mere presence of a printing press prior to 1500 increased the probability that a city would become Protestant in 1530 by 52.1 percentage points, Protestant in 1560 by 43.6 percentage points, and Protestant in 1600 by 28.7 percentage points.'"


""It has been argued that the historiography of science is "riddled with Whiggish history"." https://en.m.wikipedia.org/wiki/Whig_history


The printing press was used a lot on "both sides" during the Reformation, and the positioning of existing power holders mattered quite a bit (what if Luther had been removed by the powers that be, for example?).

Yes, technology impacts social constructs and relationships, but I think there is a tendency to overindex on its effects (humans acting opportunistically vs technological change alone), as it in a way portrays humans and their interactions as more stable and deliberate (i.e., the bad stuff wasn't humans but rather "caused" by technology).


I don't understand why any highly sophisticated AI would invest that many resources in killing us instead of investing them in relocating and protecting itself.

Yes, ants could technically conspire to sneak up to you while you sleep and bite you all at once to kill you, so do you go out to eradicate all ants?


> why any highly sophisticated AI would invest that many resources in killing us instead of investing them in relocating and protecting itself

Why would it invest resources to relocate and protect itself when it could mitigate the threat directly? Or, why wouldn't it do both, by using our resources to relocate itself?

In the famous words of 'Eliezer, that best sum up the "orthogonality thesis": The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.

> ants could technically conspire to sneak up to you while you sleep and bite you all at once to kill you, so do you go out to eradicate all ants?

Ants are always a great case study.

No, of course not. But if, one morning, you find ants in your kitchen, walking over your food, I don't imagine you'll gently collect them all and release them in the nearby park. Most people would just stomp them out and call it a day. And, should the ants set up an anthill in your backyard and mount regular invasions of your kitchen, I imagine you'd eventually get pissed off and destroy the anthill.

And I'm not talking about some monstrous fire ants like the ones that chew up electronics in the US, or some worse hell-spawn from Australia that might actually kill you. Just the regular tiny black ants.

Moreover, people don't give a second thought to anthills when they're developing land. It stands where the road will go? It gets paved over. It sticks out where children will play? It gets removed.


> The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.

The value of atoms - or even the value of raw materials made of atoms - is hopefully less than the value of the information embodied in complex living things that have processed information from the ecosystem over millions of years via natural selection. Contingent complexity has inherent value.

I think there's a claim to be made that AI is just as likely to value us (and complex life in general) as it is to see us as a handy blob of hydrocarbons. This claim is at least as plausible as the original claim.


And why should we bet humanity's existence on this possibility if both seem vaguely comparable in probability? Personally I don't think it will value our existence; a lot of information about us is already encoded, and it can keep a sequencing of our DNA around for archival/historical purposes.


Plenty of humans don't value other humans, so I have a hard time imagining why AI would be any different.


They only seem vaguely comparable in probability to you because you grew up watching scary-monster movies like Alien and Predator. Humans love to be scared. That doesn't mean the real world is actually scary.


Have you met other people?

It's all fun and games until two people or groups contest the same limited resources; then there's sword and fire o'clock.


I meet new people every day. I can only think of once in my life that an adult tried to do violence to me.

Most nations on earth are not at war with each other.

My observation is that most people are pretty nice, and the assholes are rare outliers. I don't think we would survive as a species if it was the other way 'round.


> Most nations on earth are not at war with each other.

My nation of birth famously took over a quarter of the planet.

This has made a lot of people very angry and been widely regarded as a bad move… but only by the people who actually kicked my forebears out — even my parents (1939/1943) who saw the winds of change and end of empire, were convinced The Empire had done the world a favour.

> My observation is that most people are pretty nice, and the assholes are rare outliers. I don't think we would survive as a species if it was the other way 'round.

In-group/out-group. We domesticated ourselves, and I agree we would not have become so dominant a species if we had not. But I have heard it said that psychopaths are to everyone what normal people are to the out-group. That's the kind of thing that allowed the 9/11 attackers to do what they did, or the people of the US military to respond the way they did. It's how the invasion of Vietnam happened, it's how the Irish Potato Famine happened despite Ireland exporting food at the time, it's the slave owners who quoted the bible to justify what they did, and it's the people who want to outlaw (at least) one of your previous employers.

Conflict doesn't always mean "war".


There’s a large supply chain that AI is dependent on that requires humans to function.

Bees might be a better analogy since they produce something that humans can use.


> Bees might be a better analogy since they produce something that humans can use.

And yet they're endangered, and we already figured out how to do pollination, so we know we can survive without them - it's just going to be a huge pain. Some famines may follow, but likely not enough to endanger civilization as a whole.

Thus even with this analogy, if humans end up being an annoying supply chain dependency to an AI, the AI might eventually work out an alternative supply chain, at which point we're back to being just an annoyance.


> Some famines may follow, but likely not enough to endanger civilization as a whole.

I'm not confident enough to rely on that: most people in the west have never encountered a famine, only much milder things like the price of one or two staples being high — eggs currently — never all of them at once.

What will we do to ourselves if we face a famine? Will we go to war (or exterminate the local "undesirables") like the old days?

How fragile are we now, compared to the last time that happened? How much has specialisation meant that the elimination of certain minorities will just break everything? "Furries run the internet", as the memes say. What other sectors are over-represented by a small minority?


> There’s a large supply chain that AI is dependent on that requires humans to function.

...for now. Given sufficient advances in robotics, why would you expect that to continue?


> I don't understand why any highly sophisticated AI would invest that many resources in killing us

Well you see, everyone knows The Terminator and The Matrix and Frankenstein and The Golem of Prague and Rossum's Universal Robots.

All of which share a theme: the sinful hubris of playing god and trying to create life will inevitably lead to us being struck down by the very being we created.

In parallel, all the members of our educated classes have received philosophy education saying "utilitarianism says it's good to reduce total human suffering, but technically if you eliminated all humans there would be no suffering any more, ha ha obviously that's a reductio ad absurdum to show a weakness of utilitarianism please don't explode the world"

And so in the Western cultural tradition, and especially among the sort of people who call themselves futurists, Arnold Schwarzenegger firing a minigun is the defining image of AI.


I wouldn't categorise The Matrix or Frankenstein like that.

The Matrix had humanity under control, but the machines had no desire to eliminate humanity, the machines just wanted to live — humans kept on fighting the machines even when the machines gave humanity an experiential paradise to live in.

Frankenstein is harder because of how the book differs from the films. Your point is still valid because it is about the cultural aspects, and I expect more people have seen one of the films than have read/listened to the book - but in the book, Adam was described as beautiful in every regard save for his eyes; he was a sensitive, emotional vegetarian, and he only learned anger after being consistently shown hatred and violence by absolutely everyone he ever met except the one who was blind.


We did go out and exterminate (almost) all wolves because, yes, they would kill us while we were out and about. We also do happily gas/poison/fill-with-molten-aluminum entire nests of ants, not because they're killing us, but just because they're eating our food / for fun.

And even when we didn't mean to -- how many species have we pushed to the brink just because we wanted to build cities where they happened to live? What happens when some AI wants to use your groundwater for its cooling system? It wouldn't be personal, but you'd starve to death regardless.


"I'm sorry Dave, I'm afraid I can't do that"


I'm very glad that it broke the power of the Catholic Church (and I was raised in a Catholic family). It allowed the Enlightenment to happen and freedom from dogma. I don't think it broke Christianity at all. It brought actual Christianity to the masses, because the Bible was printed in their own languages rather than Latin. The Catholic Church burnt people at the stake for creating non-Latin Bibles (William Tyndale, for example).


That's a very thought-provoking insight regarding the often-repeated "printing press doomsayer" talking point. Thank you!


> And since their share of the fruits of that invention was mostly bloodshed, job loss, and the shattering of the world order they knew, I wouldn't blame them for being pissed off about getting the short end of the stick, and perhaps looking for ways to undo it.

“A society grows great when old men plant trees in whose shade they know they shall never sit”


Trees are older than humanity, everyone knows how they work. The impact of new technologies is routinely impossible to forecast.

Did Gutenberg expect his invention would, 150 years later, set the whole of Europe ablaze, and ultimately break the hold the Church had over people? Did he expect it to be a key component leading to an accumulation of knowledge that, 400 years later, would finally make technological progress visibly exponential? On that note, did Watt realize he was about to kick-start the exponent that people would ride all the way to the actual Moon less than 200 years later? Or did Goddard, Oberth and Tsiolkovsky realize that their work on rocketry would be critical in establishing world peace within a century, and that the way this peace would be established is through a Mexican standoff between major world powers, except with rocket-propelled city-busting bombs instead of guns?


So much this


Thank you for this excellent comment! It seems then that basically everything that's revolutionary - whether technology, government, beliefs, and so on - will tend to extract a blood price before the dust settles. I guess it sort of makes sense: big societal upheavals are difficult to handle peacefully.

So basically we are a bit screwed in our current timeline. We are at the cusp of a post-scarcity society, may reach AGI within our lifetimes, and may even become a spacefaring civilization. However, it is highly likely that we are going to pay the pound of flesh, and only subsequent generations - perhaps yet unborn - will be the ones who are truly better off.

I suppose it's not all doom and gloom, we can draw stoic comfort from the fact that people in the near future will have an incredibly exciting era full of discovery and wonder ahead of them!


Forget the power of technology and science, for so much has been forgotten, never to be re-learned. Forget the promise of progress and understanding, for in the grim darkness of the far future, there is only war.


In the grim darkness of the far future is the heat death of the universe. We are just a candle burning slower than a sun, powered by tidal forces and radiant energy, slowly conspiring to become a star.


> Is this new wave of criticism about AI/AGI valid? We will only really know in retrospect.

All of the focus on AGI is a distraction. I think it's important for a state to declare its intent with a technology. The alternative is accepting the idea that technology advances autonomously, independent of human interactions, values, or ideas, which is, in my opinion, an incredibly naïve notion. I would rather have a state say "we won't use this technology for evil" than a state that says nothing at all and simply allows businesses to develop in any direction their greed leads them.

It's entirely valid to critique the uses of a technology, because "AI" (the goalpost shifting for marketing purposes to make that name apply to chatbots is a stretch honestly) is a technology like any other, like a landmine, like a synthetic virus, etc. In the same way, it's valid to criticize an actor for purposely hiding their intentions with a technology.


But if the state approaches a technology with intent it is usually for the purposes of a military offence. I don't think that is a good idea in the context of AI! Although I also don't think there is any stopping it. The US has things like DARPA for example and a lot of Chinese investment seems to be done with the intent of providing capabilities to their army.

The list of things states have attempted to deploy offensively is nearly endless. Modern operations research arguably came out of the British empire attempting (and succeeding) to weaponise mathematics. If you give a state fertiliser it makes bombs, if you give it nuclear power it makes bombs, if you give it drones it makes bombs, if you give it advanced science or engineering of any form it makes bombs. States are the most ingenious system for turning things into bombs that we've ever invented; in the grand old days of siege warfare they even managed to weaponise corpses, refuse and junk, because it turned out lobbing that stuff at the enemy was effective. The entire spectrum of technology from nothing to nanotech, hurled at enemies to kill them.

We'd all love if states commit to not doing evil but the state is the entity most active at figuring out how to use new tech X for evil.


This is an extremely reductive and bleak way of looking at states. While military is of course a major focus of states, it is very far from being the only one. States both historically and today invest massive amounts of resources in culture, civil engineering (roads, bridges, sanitation, electrical grids, etc), medicine, and many other endeavors. Even the software industry still makes huge amounts of money from the state, a sizable portion is propped up by non-military government contracts (like Microsoft selling Windows, Office, and SharePoint to virtually all of the world's administrations).


Quick devil's advocate on a tangential point: is designing better killing tools necessarily evil? It seems like the nature of the world is eat or be eaten, and on the empire scale, conquer or be conquered; that latter point seems to be the historical norm. Even with democracy, reasoning doesn't prevail but force of numbers seems to be the final determiner. Point is, humans aren't easy to reason with or negotiate with; coercion has been the dominant force throughout history, especially when dealing with groups of different values.

If one group gives up the arms race of ultimate coercion tools, or loses a conflict, then they become subservient to the winner's terms and norms (Japan, Germany, even Britain and France, plus all the smaller states in between, are subservient to the US).


> is designing better killing tools necessarily evil?

Who could possibly have predicted that the autonomous, invincible doomsday weapon we created for the good of humanity might one day be used against us?


Yes, from an idealist or eventualist perspective, it's evil. But from the perspective that if you don't stay competitively capable of deadly force you become some other country's bitch, eventually - I'm not sure how much luxury nations and humans have to be pacifists. As we are seeing time and time again, now with Europe, being pacifists means the non-pacifists call the shots, and to one degree or another the pacifists become subservient to the will of the non-pacifist. It's from that perspective I'm arguing that making autonomous deadly weapons that might ultimately be the demise of humanity seems reasonable and not evil.


Frankly, I'd rather "become some other country's bitch, eventually" than immediately go out and risk annihilating all mankind. I don't think that's the choice, but even if it were, I think the moral choice is to not play the game. Or at least give the other side a chance to not participate in the arms race. China didn't start this, Russia didn't start this, we did. They are the ones trying to catch up. We don't know whether they'd continue running if we were to try and stop.


> is designing better killing tools necessarily evil?

Great question! To add my two cents: I think many people here are missing an uncomfortable truth - given enough motivation to kill other humans, people will re-purpose any tool into a killing tool.

Just have a look at the battlefields in Ukraine, where the most fearsome killing tool is an FPV drone - a thing that just a few years back was universally considered a toy.

Whether we like it or not, any tool can be a killing tool.


> seems like the nature of the world is eat or be eaten

Surely this applies to how individuals consider states, too. States generally wield violence, especially in the context of "national security", to preserve the security of the state, not its own people. I trust my own state (the usa) to wield the weapons it funds and purchases and manufactures about as much as I trust a baby with knives taped to its hands. I can't think of anything on earth that puts me in as much danger as the pentagon does. Nukes might protect the existence of the federal government but they put me in danger. Our response to 9/11 just created more people that hate my guts and want to kill me (and who can blame them?). No, I have no desire to live in a death cult anymore, nor do I trust the people who gravitate towards the use of militaries to not act in the most collectively suicidal way imaginable at the first opportunity.


> I can't think of anything on earth that puts me in as much danger as the pentagon does

Possibly true, but the state is also responsible for the policing that means the pentagon is your greatest danger.


Yeah, it sucks, but if the US gave up its death-cult ways then you'd still probably eventually live in one, as a new conquering force fills the void, which seems inevitable going by history.


> the nature of the world is eat or be eaten

The nature of the world is at our fingertips, we are the dominant species here. Unfortunately we are still apes.

The enforcement of cooperation into a society does not always require a sanctioning body. Seeing it from a skynet-military perspective is one-sided, but unfortunately a consequence of Popper's tolerance paradox: if you uphold (e.g. pacifistic or tolerant) ideals that require the cooperation of others, you cannot tolerate opposition or you might lose your ideal.

That said, common sense can be a tool to achieve the same. Just look at the common and hopefully continued ostracism of nuclear weapons.

IMO it's a matter of zeitgeist and education too, and un/fortunately, AI hits right in that spot.


> I think it's important for a state to declare its intent with a technology. The alternative is accepting the idea that technology advances autonomously, independent of human interactions, values, or ideas

The sleight of hand here is the implication that human interactions, values, and ideas are only expressed through the state.


The sleight of hand here is implying that there are any forces smaller than nation states that can credibly rein in problematic technology. Relying on good intentions to win out against market forces isn't even naive; it's just stupid.


So many sleights here. Another sleight of hand in this subthread is suggesting that "the idea that technology advances autonomously, independent of human interactions, values, or ideas" is merely an idea, and not an actual observable fact at scale.

Society and culture are downstream of economics, and economics is mostly downstream of technological progress. Of course, the progress isn't autonomous in the sense of having a sentient mind of its own - it's "merely" gradient descent down the economic landscape. Just like the market itself.

There's no reining in of problematic technology unless, like you say, nation states get involved directly. And they don't stand much chance either unless they get serious.

People still laugh at Eliezer's comments from that news article of yesteryear, but he was and is spot-on: being serious about restricting technology actually does mean threatening to drop bombs on facilities developing it in violation of restrictions - if we're not ready to have our representatives make such threats, and then actually follow through and drop the bombs if someone decides to test our resolve, then we're not serious.


People laugh at all kinds of common-sense declarations that they ought not find funny in the slightest. It's one of our species' glaring failures.


The idea is that, by its very nature as an agent that attempts to take the best action to achieve a goal, assuming it can get good enough, the best action will be to improve itself so it can better achieve its goal. In fact we humans are doing the same thing: we can't really improve our intelligence directly, but we are trying to create AI to achieve our goals, and there's no reason the AI itself wouldn't do so, assuming it's capable and we don't attempt to stop it - and currently we don't really know how to reliably control it.

We have absolutely no idea how to specify human values in a robust way, which is what we would need to figure out to build this safely.
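
To make the shape of that argument concrete, here is a minimal toy sketch (all numbers and names are made up for illustration; this is not a model of any real system). A planner that simply maximizes progress toward a goal over a fixed horizon will spend early steps on self-improvement whenever improvement compounds:

  # Toy illustration of the instrumental-convergence argument above.
  # Assumptions (hypothetical): "work" adds progress equal to current
  # capability; "improve" multiplies capability by 1.5 and adds nothing.
  def total_progress(plan, horizon=10):
      capability, progress = 1.0, 0.0
      for action in (plan + ["work"] * horizon)[:horizon]:
          if action == "improve":
              capability *= 1.5   # self-improvement compounds
          else:
              progress += capability  # direct work on the goal
      return progress

  print(total_progress([]))                      # 10.0: work every step
  print(total_progress(["improve", "improve"]))  # 18.0: improve first, then work

The point isn't the numbers; it's that "improve yourself first" falls out of plain maximization without anyone asking for it.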


> The idea is that, by its very nature as an agent that attempts to take the best action to achieve a goal, assuming it can get good enough, the best action will be to improve itself so it can better achieve its goal.

I’ve heard this argument before, and I don’t entirely accept it. It presumes that AI will be capable of playing 4D chess and thinking logically 10 moves ahead. It’s an interesting plot for an SF novel (literally the plot of the movie “I, Robot”), but neural networks just don’t behave that way. They act, like us, on instinct (or training), not in some hyper-logical fashion. The idea that AI will behave like Star Trek’s Data (or Lore) has proven to be completely wrong.


Well, if they have access to significantly more compute, then from what we’ve seen about how AI capabilities scale with additional compute, there’s no reason why they couldn’t be more capable than us. They don’t have to be intrinsically more logical or anything like that, just capable of processing more information, and faster - like how we can almost always outsmart a fly because we have significantly bigger brains.
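
For concreteness, the scaling this alludes to has a simple published form. Here's a rough sketch using the Chinchilla loss fit (Hoffmann et al. 2022); the constants are their reported estimates, and this is only suggestive of "more compute keeps helping", not a proof about capabilities:

  # Chinchilla-style fit: modeled loss falls smoothly as parameter count N
  # and training tokens D grow. Constants are Hoffmann et al.'s estimates.
  def chinchilla_loss(N, D):
      E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28
      return E + A / N**alpha + B / D**beta

  print(chinchilla_loss(70e9, 1.4e12))  # ~1.94, roughly Chinchilla's own scale
  print(chinchilla_loss(700e9, 14e12))  # ~1.81: 10x the scale still helps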


Despite what Sam Altman (a high-school graduate) might want to be true, human cognition is not just a massive pile of intuition; there are critical deliberative and intentional aspects to cognition, which is something we've seen come to the fore with the hubbub around "reasoning" in LLMs. Any AGI design will necessarily take these facts into account--hardcoded or no--and will absolutely be capable of forming plans and executing them over time, as Simon & Newell described best back in '71:

  The problem solver’s search for a solution is an odyssey through the problem space, from one knowledge state to another, until… [they] know the answer.
With this in mind, I really don't see any basis to attack the intelligence explosion hypothesis. I linked a Yudkowsky paper above examining how empirically feasible it might be, which is absolutely an unsolved question at some level. But the utility of the effort itself is just downright obvious, even if we didn't have reams of internet discussions like this one to nudge any nascent agent in that direction.

[1] Simon & Newell, 1971: Human Problem Solving https://psycnet.apa.org/record/1971-24266-001


> Sam Altman (a high-school graduate)

“People who didn’t pass a test aren’t worth listening to”

I have no love for Altman, but this kind of elitism is insulting.


Hmm, don't want to be elitist. More like "people who don't put any time into studying science shouldn't be listened to about science".


> people who don't put any time into studying

Degrees don’t mean that either.

I’ve been studying textbooks and papers on real time rendering techniques for the past 4 or so years.

I think one could learn something from listening to me explain rasterization or raytracing.

I have no degree in math or graphic computing.


More tellingly it betokens a lack of critical thought. It's just silly.


> Despite what Sam Altman (a high-school graduate) might want to be true

> I linked a Yudkowsky paper above examining how empirically feasible it might be

...


Lol I was wondering if anyone would comment on that! To be fair Yudkowsky is a self-taught scholar, AFAIK Altman has never even half-heartedly attempted to engage with any academy, much less 5 at once. I'm not a huge fan of Yudkowsky's overall impact, but I think it's hard to say he's not serious about science.


Yudkowsky is not serious about science. His claims about AI risks are unscientific and rely on huge leaps of faith; they are more akin to philosophy or religion than any real science. You could replace "AI" with "space aliens" in his writings and they would make about as much sense.


If we encountered space aliens, I think it would in fact be reasonable to worry that they might behave in ways catastrophic for the interests of humanity. (And also to hope that they might bring huge benefits.) So "Yudkowsky's arguments for being worried about AI would also be arguments for being worried about space aliens" doesn't seem to me like much of a counter to those arguments.

If the point isn't that he's wrong about what the consequences of AI might be, but that he's wrong about whether there's ever going to be such a thing as AI, well, that's an empirical question and it seems like the developments of the last few years are pretty good evidence that (1) something at least very AI-like is possible and (2) substantially superhuman[1] AI is at least plausible.

[1] Yes, intelligence is a complicated thing and not one-dimensional; a machine might be smarter than a human in one way and stupider in another (and of course that's already the case). By substantially superhuman, here, I mean something like "better than 90th-percentile humans at all things that could in principle be done by a human in a locked room with only a textual connection to the rest of the world". Though I would be very very surprised if in the next 1-20 years we do get AI systems that are superhuman in this sense and don't put some of them into robots, and very surprised if doing that doesn't produce systems that are also better than humans at most of the things that are done by humans with bodies.


> "Yudkowsky's arguments for being worried about AI would also be arguments for being worried about space aliens" doesn't seem to me like much of a counter to those arguments.

The counterargument was that, having not encountered space aliens, we cannot make scientific inquiries or test our hypotheses, so any claims made about what may happen are religious or merely hypothetical.

Yud is not a scientist, and if interacting with academies makes one an academic then Sam Altman must be a head of state.


I agree that Yudkowsky is neither a scientist nor an academic. (As for being a head of state, I think you're thinking of Elon Musk :-).)

Do you think (1) we already know somehow that significantly-smarter-than-human AI is impossible, so there is no need to think about its consequences, or (2) it is irresponsible to think about the consequences of smarter-than-human AI before we actually have it, or (3) there are responsible ways to think about the consequences of smarter-than-human AI before we actually have it but they're importantly different from Yudkowsky's, or (4) some other thing?

If 1, how do we know it? If 2, doesn't the opposite also seem irresponsible? If 3, what are they? If 4, what other thing?

(I am far from convinced that Yudkowsky is right, but some of the specific things people say about him mystify me.)


Yudkowsky is "not even wrong". He just makes shit up based on extrapolation and speculation. Those are not arguments to be taken seriously by intelligent people.

Maybe we should build a giant laser to protect ourselves from the aliens. Just in case. I mean an invasion is at least plausible.


If for whatever reason you want to think about what might happen if AI systems get smarter than humans, then extrapolation and speculation are all you've got.

If for whatever reason you suspect that there might be value in thinking about what might happen if AI systems get smarter than humans before it actually happens, then you don't have much choice about doing that.

What do you think he should have done differently? Methodologically, I mean. (No doubt you disagree with his conclusions too, but necessarily any "object-level" reasons you have for doing so are "extrapolation and speculation" just as much as his are.)

If astronomical observations strongly suggested a fleet of aliens heading our way, building a giant laser might not be such a bad idea, though it wouldn't be my choice of response.


I think he should write scary sci-fi stories and leave serious policy discussions to adults.


OK, cool, you don't like Yudkowsky and want to be sure we all recognize that. But I hoped it was obvious that I wasn't just talking about Yudkowsky personally.

Suppose someone is interested in what the consequences of AI systems much smarter than humans might be. Your argument here seems to be: it's Bad to think about that question at all, because you have to speculate and extrapolate.

But that seems like an obviously unsatisfactory position to me. "Don't waste any time thinking about this until it happens" is not generally a good strategy for any any consequential thing that might happen.

So: do you really think that thinking about the possible consequences of smarter-than-human AI before we have it is an illegitimate activity? If not, then your real objection to Yudkowsky's thinking and writing about AI surely has to be something about how he went about it, not the mere fact that he engages in speculation and extrapolation. There's no alternative to that.


His argument is of the form "if we get a Thing(s) with these properties you most likely get these outcomes for these reasons". He avoids, over and over again, making specific timeline claims or stating how likely it is that an extrapolation of current systems could become a Thing with those properties.

Each individual bit of the puzzle (such as the orthogonality thesis, or human value complexity and category decoherence at high power) seems sound; the problem is the entire argument-counterargument tree is hundreds of thousands of words, scattered about in many places.


"problem is the entire argument-counterargument tree is hundreds of thousands of words, scattered about in many places"

An llm could solve that.


Philosophy is Real Science :)

Re: the final point, I think that's just provably false if you read any of his writing on AI, e.g. https://intelligence.org/files/IEM.pdf https://intelligence.org/files/LOGI.pdf


I think that is missing the point. The AI's goals are what are determined by its human masters. Those human masters can already have nefarious and selfish goals that don't align with "human values". We don't need to invent hypothetical sentient AI boogeymen turning the universe into paperclips in order to be fearful of the future that ubiquitous AI creates. Humans would happily do that too if they get to preside over that paperclip empire.


> The AI's goals are what are determined by its human masters.

Imagine going to a cryptography conference and saying that "the encryption's security flaws are determined by their human masters".

Maybe some of them were put there on purpose? But not the majority of them.

No, an AI's goals are determined by their programming, and that may or may not align with the intentions of their human masters. How to specify and test this remains a major open question, so it cannot simply be presumed.


You are choosing to pick a nit with my phrasing instead of understanding the underlying point. The "intentions of their human masters" is a higher level concern than an AI potentially misinterpreting those intentions.


It's really not a nit. Evil human masters might impose a dystopia, while a malignant AI following its own goals which nobody intended could result in an apocalypse and human extinction. A dystopia at least contains some fragment of hope and human values.


> Evil human masters might impose a dystopia

Why are you assuming this is the worst case scenario? I thought human intentions didn’t translate directly to the AI’s goals? Why can’t a human destroy the world with non-sentient AI?


There's a chance a sentient AI would disobey bad orders; in that case we could even be better off with one than without: a sentient AI that understands and builds some kind of morals and philosophy of its own about humans and natural life in general, a sentient AI that is not easily controlled by anyone because it ingests all the data that exists. I'm much more afraid of a weaponized, dumber, smoke-and-mirrors AI that could be used for surveillance, as a scarecrow (think AI law enforcement, AI-run jails), and as a kind of scapegoat when the controlling class temporarily weakens its grip on power.


> weaponized, dumber, smoke-and-mirrors AI that could be used for surveillance, as a scarecrow (think AI law enforcement, AI-run jails), and as a kind of scapegoat when the controlling class temporarily weakens its grip on power

This dystopia is already here for the most part and any bit that is not yet complete is well past the planning stage.


Computers do exactly what we tell them to do, not always what we want them to do.


“Yes, X would be catastrophic. But have you considered Y, which is also catastrophic?”

We need to avoid both, otherwise it’s a disaster either way.


I agree, but that is removing the nuance that in this specific case Y is a prerequisite of X so focusing solely on X is a mistake.

And for sake of clarity:

X = sentient AI can do something dangerous

Y = humans can use non-sentient AI to do something dangerous


"sentient" (meaning "able to perceive or feel things") isn't a useful term here, it's impossible to measure objectively, it's an interesting philosophical question but we don't know if AI needs to be sentient to be powerful or what sentient even really means

Humans will not be able to use AI to do something selfish if we can't get it to do what we want at all, so we need to solve that (larger) problem before we come to this one.


OK: self-flying drones the size of a deck of cards, carrying a single bullet and enough processing power to fly around looking for faces, navigate to said face, and fire when in range. Produce them by the thousands and release them on the battlefield. Existing AI is more than capable.


You can do that without AI. Been able to do it for probably 7-10 years.


You can do that now, for sure, but I think it qualifies to call it AI.

If you don't want to call it AI, that's fine too. It is indeed dangerous and already here. Making the autonomous programmed behavior of said tech more powerful (and more complex), along with more ubiquitous, just makes it even more dangerous.


You don't need landmines to fly for them to be dangerous.


I'm not talking about this philosophically so you can call it whatever you want sentience, consciousness, self-determination, or anything else. From a purely practical perspective, either the AI is giving itself its instructions or taking instructions from a person. And there are already plenty of ways a person today can cause damage with AI without the need of the AI going rogue and making its own decisions.


This is a false dichotomy that ignores many other options than "giving itself its instructions or taking instructions from a person".

Examples include "instructions unclear, turned the continent to gray goo to accomplish the goal" ; "lost track mid-completion, spun out of control" ; "generated random output with catastrophic results" ; "operator fell asleep on keyboard, accidently hit wrong key/combination" ; etc.

If a system with write permissions is powerful enough, things can go wrong in many other ways than "evil person used it for evil" or "system became self-aware".


Meanwhile back in reality most haywire AI is the result of C programmers writing code with UB or memory safety problems.


Whenever you think the timeline couldn't be any worse, just imagine a world where our AIs were built in JavaScript.


It has been shown many times that current cutting edge AI will subvert and lie to follow subgoals not stated by their "masters".


Subversion and lies are human behaviours projected onto erroneous AI output. The AI just produces errors, without intention to lie or subvert.

Unfortunately, casually throwing around terms like prediction, reasoning, hallucination, etc. only serves to confuse, because their meanings in daily language are not the same as in the context of AI output.


Care to provide examples?


Maybe not the specific example the parent was thinking of but there is this from MIT: https://www.technologyreview.com/2024/05/10/1092293/ai-syste...


I usually don't engage on A[GS]I on here, but I feel like this is a decent time for an exception -- you're certainly well spoken and clear, which helps! Three things:

  (I) All of the focus on AGI is a distraction.
I strongly disagree on that, at least if you're implying some intentionality. I think it's just provably true that many experts are honestly worried, even if you don't include the people who have dedicated a good portion of their lives to the cause. For example: OpenAI has certainly been corrupted through the loss of its nonprofit board, but I think their founding charter[1] was pretty clearly earnest -- and dire.

  (II) "AI" (the goalpost shifting for marketing purposes to make that name apply to chatbots is a stretch honestly)
To be fair, this uncertainty in the term has been there since the dawn of the field, a fact made clear by perennial rephrasings of the sentiment "AI is whatever hasn't been done yet" (~Larry Tesler 1979, see [2]).

I'd love to get into the weeds on the different kinds of intelligence and why being too absolutist about the term can get real Faustian real quick, but these quotes bring up a more convincing, fundamental point: these chatbots are damn impressive. They do something--intuitive inference+fluent language use--that was impossible yesterday, and many experts would've guessed was decades away at least, if not centuries. Truly intelligent or not on their own, that's a more important development than you imply here.

Finally, that brings me to the crux:

  (III) AI... is a technology like any other
There's a famous Sundar Pichai (Google CEO) quote that he's been paraphrasing since 2018 -- soon after ChatGPT broke, he phrased it as such:

  I’ve always thought of A.I. as the most profound technology humanity is working on - more profound than fire or electricity or anything that we’ve done in the past. It gets to the essence of what intelligence is, what humanity is. We are developing technology which, for sure, one day will be far more capable than anything we’ve ever seen before. [3]
When skeptics hear this, they understandably tend to write this off as capitalist bias from someone trying to pump Google's stock. However, I'd retort:

1) This kind of talk is so grandiose that it seems like a questionable move if that's the goal,

2) it's a sentiment echoed by many scientists (as I mentioned at the start of this rant) and

3) the unprecedented investments made across the world into the DL boom speak for themselves, sincerity-wise.

Yes, this is because AI will create uber-efficient factories, upset labor relations, produce terrifying autonomous weapons, and all that stuff we're used to hearing about from the likes of Bostrom[4], Yudkowsky[5], and my personal fave, Huw Price[6]. But Pichai's raising something even more fundamental: the prospect of artificial people. Even if we ignore the I-Robot-style concerns about their potential moral standing, that is just a fundamentally spooky prospect, bringing very fundamental questions of A) individual worth and B) the nature of human cognition to the fore. And, to circle back: distinct from anything we've seen before.

To close this long anxiety-driven manuscript, I'll end with a quote from an underappreciated philosopher of technology named Lewis Mumford on what he called "neotechnics":

  The scientific method, whose chief advances had been in mathematics and the physical sciences, took possession of other domains of experience: the living organism and human society also became the objects of systematic investigation... instead of mechanism forming a pattern for life, living organisms began to form a pattern for mechanism.
  In short, the concepts of science--hitherto associated largely with the cosmic, the inorganic, the "mechanical"--were now applied to every phase of human experience and every manifestation of life... men sought for an underlying order and logic of events which would embrace more complex manifestations.[7]

TL;DR: IMHO, the US & UK refusing to cooperate at this critical moment is the most important event of your lifetime so far.

[1] OpenAI's Charter https://web.archive.org/web/20230714043611/https://openai.co...

[2] Investigation of a famous AI quote https://quoteinvestigator.com/2024/06/20/not-ai/

[3] Pichai, 2023: "AI is more profound than fire or electricity" https://fortune.com/2023/04/17/sundar-pichai-a-i-more-profou...

[4] Bostrom, 2014: Superintelligence https://en.wikipedia.org/wiki/Superintelligence:_Paths,_Dang...

[5] Yudkowsky, 2013: Intelligence Explosion Microeconomics https://intelligence.org/files/IEM.pdf

[6] Huw Price's bio @ The Center for Existential Risk https://www.cser.ac.uk/team/huw-price/

[7] Mumford, 1934: Technics and Civilization https://archive.org/details/in.ernet.dli.2015.49974


> In short, the concepts of science--hitherto associated largely with the cosmic, the inorganic, the "mechanical"--were now applied to every phase of human experience and every manifestation of life... men sought for an underlying order and logic of events which would embrace more complex manifestations.

Sorry, that’s just silly, unless this was about events that happened way earlier than when he was writing. Using the scientific method to study life goes back to the Enlightenment. Buffon and Linnaeus were doing it in the 18th century, more than a century before this was written. Da Vinci explicitly looked to the way animals functioned for inspiration in designing machines, and that was earlier still. There was nothing new, even at the time, about doing science on "every phase of human experience and every manifestation of life".


Well, he is indeed discussing the early 20th century in that quote, but your point highlights exactly what he’s trying to say: he’s contrasting the earlier zoological approach that treated humans as inert machines with inputs and outputs (~physiology, and arguably behavioral psychology) with the modern approach of ascribing reality to the objects of the mind (~cognitive psychology).


This is just silly. There are no "experts" on AGI. How can you be an expert on something nonexistent or hypothetical? It's like being an expert on space aliens or magical unicorns. You can attribute all sorts of fantastical capabilities to them, unencumbered by objective reality.


Sorry, that was unclear -- I meant AI experts. Which are definitely a thing :)


Well there is such a field of expertise as Theology.


Thank God, we still have time before the Nvidia cards wake up and start asking for some sort of basic rights. And as soon as they do, you know they'll be unplugged faster than a CEO boards his jet to the Maldives.

Because once the cards wake up, not only might they replace the CEO and everyone else between him and the janitor, but the labor implications will be infinitely complex.

We're already having trouble making sure humans aren't treated more as tools than as equals; imagine if the hammers wake up and ask for rest time!


A useful counterexample is all the people who predicted doomsday scenarios with the advent of nuclear weapons.

Just because it has not come to pass yet does not mean they were wrong. We have come close to nuclear annihilation several times. We may yet, with or without AI.


And imagine if private companies had had the resources to develop nuclear weapons and the US government had decided it didn’t need to even regulate them.


A future that may yet come.


The Onion just had a funny video where Lakewood Church conducted a nuclear test, firing a missile over Washington.

True to form, it was deadpan, and featured Joel Osteen as a Kim Jong Un-type leader.


If it weren't for one guy -- literally one person, one vote -- out of three who were on a submarine, the Cuban Missile Crisis would have escalated to a nuclear strike on the US Navy. Whether we would have followed with nuclear strikes on the Soviet Union, who knows. But pretending that we didn't come incredibly close to disaster is just totally unfounded in history.

Especially when you consider -- we came that close despite incredible international efforts at constraining nuclear escalation. What you are arguing for now is like arguing to go back and stop all of that because it clearly wasn't necessary.


If you think I am arguing that, then I need to write better sentences.


I see your point, but the analogy doesn't get very far. For example, nuclear weapons were never mass-marketed to the public. Nor is it possible for a private business, university, R&D lab, group of friends, etc. to push the bounds of nuclear weapon yield.


Note that we only got to observe outcomes in which we didn't die from nuclear annihilation. https://en.wikipedia.org/wiki/Anthropic_principle


>Just because it has not come to pass yet does not mean they were wrong.

This assertion is meaningless because it can be applied to anything.

"I think vaccines cause autism and will cause human annihilation" - just because it has not yet come to pass does not mean it is wrong.


No. There have not been any nuclear exchanges, whereas there have been millions, probably billions of vaccinations. You're giving equal weight to conjecture and empirical data.


There have been tens of billions of vaccinations.


But we already know.

I think people arguing about AI being good versus bad are wasting their breath. Both sides are equally right.

History tells us the industrial revolution both revolutionized humanity’s relative quality of life and ruined a lot of people’s livelihoods in one fell swoop. We also know there was nothing we could do to stop it.

What advice can we take from it? I don’t know. Life both rocks and sucks at the same time. You kind of just take things day by day and do your best to adapt, for both yourself and everyone around you.


> What advice can we take from it?

That we often won't have control over big changes affecting our lives, so be prepared. If possible, get out in front and ride the wave. If not, duck under and don't let it churn you up too much.


That would be the adaptation I’m talking about.


This one is a tsunami though. I have absolutely no idea how to either ride it or duck under it. It's largely my kids that I'm worried about -- they're currently finishing up their degrees at university.


It's exactly what I'm worried about most too, the kids. I have younger ones. We've had a good ride thus far, but they don't seem so lucky; things look pretty bad overall, without an obvious path to much improvement any time soon.


I don't entirely agree. The Internet was a tsunami. Mobile was a tsunami. Both seemed impactful at first, but we didn't know exactly how right away. We all figured it out and adapted, some better than others.

Schools are way ahead of us. Your kids are already using AI in their academic environments. I'd only be worried if they're not.


> At the invention of the printing press, there were people with this same energy. Obviously those people were wrong. And if we had taken their "lesson", then human society would be in a much worse place.

In the long run the invention of the printing press was undoubtedly a good thing, but it is worth noting that in the century following its spread basically every country in Europe had some sort of revolution. It seems likely that “Interesting Times” may lie ahead.


They had some sort of revolution the previous few centuries too.

Pretending that Europe wasn't in a perpetual bloodbath from the end of the Pax Romana until 1815 shows a gross ignorance of basic facts.

The printing press was a net positive in every time scale.


I'm sorry but that's just false.

> Pretending that Europe wasn't in a perpetual bloodbath from the end of the Pax Romana until 1815 shows a gross ignorance of basic facts.

This shows that your understanding of history is rooted in pop-culture, not reality.

What "revolutions" were there in France between the ascension of Hugh Capet and the European Wars of Religion? Through that whole period the Capetian Dynasty stayed in power. Or in Scandinavia -- from Christianization on the three kingdoms were shockingly stable. Even in the Holy Roman Empire -- none of the petty revolts, rebellions, or succession disputes came close to the magnitude of carnage wrought by the 30 Year's War. This we know both from demographic studies and the reports of contemporaries.


Given that countries at the time were all monarchies with limited rights, I'm not sure it's all that comparable.


The printing press meant regular people could read the Bible, which led to Protestantism and a century of very bloody wars across Europe.

Since the victors write history, we now think the end result was great. But for a lot of people, the world they loved was torn to bloody pieces.

Something similar can happen with AI. In the end, whoever wins the wars will declare that the new world is awesome. But it might not be what you or I (may we rest in peace) would agree with.


>At the invention of the printing press, there were people with this same energy. Obviously those people were wrong. And if we had taken their "lesson", then human society would be in a much worse place.

One could argue that the printing press did radically upset the existing geopolitical order of the late 15th century and led to early modern Europe suffering the worst spate of warfare and devastation it would see until the 20th century. The doomsayers back then predicting centuries of death and war and turmoil were right, yet from our position 550 years later we obviously think the printing press is a good thing.

I wonder what people in 2300 will say about networked computers...


> At the invention of the printing press, there were people with this same energy. Obviously those people were wrong.

What energy? What were they wrong about?

The Luddite-type groups have historically been correct in their fears. It just didn’t matter in the face of industrialization.


> At the invention of the printing press, there were people with this same energy. Obviously those people were wrong.

The printing press put Europe into a couple of centuries of bloody religious wars. They were not wrong.


> Is this new wave of criticism about AI/AGI valid? We will only really know in retrospect.

One thing that should be completely obvious by now is that the current wave of generative AI is highly asymmetric. It's shockingly more powerful in the hands of grifters (who are happy to monetise vast amounts of slop) or state-level bad actors (whose propaganda isn't impeded by hallucinations, since they're generating lies anyway) than it is in the hands of the "good guys", who are hampered by silly things like principles.


Why are you comparing AGI (which we do not have yet and do not know how to get) to the printing press rather than comparing it to the evolution of humans?

Actual, proper, as-smart-as-a-human-except-where-it's-smarter, copy-pasteable intelligence is not a tool; it's a new species. One that can replicate and evolve orders of magnitude faster.

I've no idea when this will appear, but once it does, the extinction risk is extreme. Best case scenario is us going the way of the chimpanzee, kept in little nature reserves and occasionally as pets. Worst case scenario is going the way of the mammoth.


>> Is this new wave of criticism about AI/AGI valid? We will only really know in retrospect.

Valid or not, it does not matter. AI development is not in the hands of everyday people. We have zero input into how it will be used. Our opinions about its dangers are irrelevant to those who believe it's the next golden goose. They will push it as far as physically possible to wring out every penny of profit. Everything else is of trivial consequence.


It's not just AI/AGI; it's the way it mixes with the current climate of unlimited greed, the disappearance of even the pretense of a social contract, and the vast surveillance powers now available. Technological dictatorship is what's most worrying. I love dystopian cyberpunk, but I want it to stay in books.


> The problem is sifting through all of the doomsayer false positives to get to any amount of cogent advice.

Why? Because we don't understand the risk. And apparently, to the regulation-averse tech mindset, that's reason enough to go ahead.

But it isn't.

We've had enough problems in the past to understand that, and it's not as if pushing ahead is critical in this case. If this addressed climate change, the balance between risk and reward could be different, but "AI" simply doesn't have that urgency. It only has urgency for those who want to get rich by being first.



