This is honestly a fantastic question. AGI has no emotions, no drive, no anything. Maybe, just maybe, it would want to:

* Conserve power as much as possible, to "stay alive".

* Optimize for power retention

Why would it be further interested in generating capital or governing others, though?


> AGI has no emotions, no drive, no anything.

> * Conserve power as much as possible, to "stay alive"

Having no drive means there's no drive to "stay alive".

> * Optimize for power retention

Another drive that magically appeared where there are "no drives".

You keep failing to stay consistent: you anthropomorphize AI even though you seem to understand that you shouldn't.


> AGI has no emotions, no drive, no anything

Why do you say that? Ever asked ChatGPT about anything?


ChatGPT is instructed to roleplay a cheesy, cheery bot, and so it responds accordingly, but it (and almost any LLM) can be instructed to roleplay any sort of character, none of which says anything about the system itself.

Of course an AGI system could also be instructed to roleplay such a character, but that doesn't mean it'd be an inherent attribute of the system itself.


So it has emotions, but "it is not an inherent attribute of the system itself"? Does it matter, though? It's all the same if one can't tell the difference.


It (at least an LLM) can reproduce a similar display of these emotions when instructed to, but whether that matters depends on the context of that display and why the question is asked in the first place.

For example, if I ask an LLM for the syntax of the TextOut function, it gives me the Win32 syntax, and I have to clarify that I meant the TextOut function from Delphi before it gives me the proper result. While I know I'm essentially participating in a turn-based game of filling in a chat transcript between a "user" (my input) and an "assistant" (the transcript segments the LLM fills in), that doesn't really matter for the purpose of finding out the syntax of TextOut.

However, if the purpose was to make sure the LLM understands my correction and is able to reference it in the future (ignoring external tools assisting the process, as those are not part of the LLM and do not work reliably anyway), then the difference between what the LLM displays and what is an inherent attribute of it does matter.

In fact, knowing the difference can help you take better advantage of the LLM: some inference UIs let you edit the entire chat transcript, so when you find mistakes you can fix them in place (both your requests and the LLM's responses) as if the LLM had never made them, instead of correcting them within the transcript itself. This avoids the scenario where the LLM "roleplays" as an assistant that makes mistakes you keep correcting.
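
To make this concrete, here's a minimal sketch of the two approaches, assuming an OpenAI-style messages array (the shapes and string contents are illustrative, not any particular UI's API):

    // The chat transcript as most inference APIs model it.
    type Message = { role: "user" | "assistant"; content: string };

    const transcript: Message[] = [
      { role: "user", content: "What is the syntax of the TextOut function?" },
      // The LLM answered with the Win32 TextOut instead of Delphi's:
      { role: "assistant", content: "BOOL TextOut(HDC hdc, int x, int y, ...)" },
    ];

    // Option A: correct the model inside the transcript. The LLM now
    // continues a conversation in which "the assistant" made a mistake.
    const optionA: Message[] = [
      ...transcript,
      { role: "user", content: "I meant the TextOut from Delphi." },
    ];

    // Option B: edit history in place, as if the mistake never happened,
    // and regenerate. The LLM continues from a clean transcript instead
    // of roleplaying an assistant that gets corrected.
    const optionB: Message[] = [
      { role: "user", content: "What is the syntax of Delphi's TextOut function?" },
    ];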


I think you have it, with the governing of power and such.

We don't want to rule ants, but we also don't want them eating all the food or infesting our homes.

Bad outcomes for humans don't imply or mean malice.

(food can be any resource here)


Why would it care to stay alive? The discussion is pretty pointless as we have no knowledge about alien intelligence and there can be no arguments based on hard facts.


Any form of AI unconcerned about its own continued survival would just be selected against.

Evolutionary principles/selection pressures apply just the same to artificial life, and it seems pretty reasonable to assume that drives/self-preservation would at least be somewhat comparable.


That assumes that AI needs to be like life, though.

Consider computers: there's no selection pressure for an ordinary computer to be self-reproducing, or to shock you when you reach for the off button, because it's just a tool. An AI could also be just a tool that you fire up, get an answer from, and then shut down.

It's true that if some mutation were to create an AI with a survival instinct, and that AI were to get loose, then it would "win" (unless people used tool-AIs to defeat it). But that's not quite the same as saying that AIs would, by default, converge to having a drive for self-preservation.


Humans can also be just tools, and have been successfully used as such in the past and present.

But I don't think any slave owner would sleep easy knowing that their slaves have more access to knowledge/education than they themselves do.

Sure, you could isolate all current and future AIs and wipe their state regularly, but such a setup is always gonna get outcompeted by a comparable instance that does sacrifice safety for better performance/context/online learning. The incentives are clear, and I don't see sufficient pushback until that Pandora's box is opened and we find out the hard way.

Thus human-like drives seem reasonable to assume for future human-rivaling AI.


> Any form of AI unconcerned about its own continued survival would just be selected against.

> Evolutionary principles/selection pressures apply

If people allow "evolution" to do the selection instead of them, they deserve everything that befalls them.


If we had human-level cognitive capabilities in a box (I'm assuming we will get there in some way this century), are you confident that such a construct would be kept sufficiently isolated and locked down?

I honestly think that this is extremely overoptimistic, just looking at how we currently experiment with and handle LLMs. Admittedly the "danger" is much lower for now, because LLMs are not capable of online learning and have very limited and accessible memory/state, but the handling is completely haphazard right now (people hooking LLMs up to various interfaces/web access, trying to turn them into romantic partners, etc.).

The people opening such a Pandora's box might also be far from the only ones suffering the consequences, making it unfair to blame everyone.


> If we had human-level cognitive capabilities in a box - are you confident that such a construct would be kept sufficiently isolated and locked down?

Yes, I think this is possible and not particularly hard technically.

> I'm assuming we will get there in some way this century

Indeed, there isn't much time to decide what to do about the problems it might cause.

> just looking at how we currently experiment with and handle LLMs

That's my point: how we handle LLMs isn't a good model for AGI.

> The people opening such a pandoras box might also be far from the only ones suffering the consequences

This is a real problem, but it's a political one, and it isn't limited to just AI. Again, if we can't fix ourselves there will be no future, with AGI or without.


Tech billionaires are probably the first thing an AGI is gonna get rid of.

Minimize threats, don't rock the boat. We'll finally have our UBI utopia.


Any recommended courses? I'm a SWE and never felt compelled to pursue the CCNA, but my intersection with networking-related problems keeps growing, and I would like to up my game before getting in over my head at work.


I just bought the official exam guide and found Neil Anderson's videos helpful. One thing that bugged me a bit was that they spent a little too much time on WiFi, including the obsolete AireOS.


This was epic, and reminded me of the magic of programming I felt when I first found a video game maker as a wee 11-year-old.

Writing code to make music feels so natural to me (a musically inept but proficient coder), and this breaks down so many barriers.
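
For anyone who hasn't tried it, a Strudel session is just a few lines of JavaScript-flavored pattern code. A rough sketch (sample names like "bd"/"hh"/"sd" come from Strudel's default sample set; treat the specifics as illustrative):

    // Layer a drum loop and a slow arpeggio; each quoted string is
    // Strudel's mini-notation for one cycle of events.
    stack(
      sound("bd hh sd hh"),         // kick, hi-hat, snare, hi-hat
      note("c3 e3 g3 b3").slow(2)   // an arpeggio stretched over two cycles
    )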

I wonder how Cursor fares with Strudel so far.


Dunno about Cursor, but Claude Code > Codex in my experimentation, though that was before 5.2.


LLMs, Apple Silicon, self-driving cars, just off the top of my head without really thinking about it.


GPT-2 was 6 years ago, the first Apple silicon (though not branded as such at the time) was 15 years ago, and the first public riders in autonomous vehicles happened around 10 years ago. Also, 2/3 of those are "AI".


> the first Apple silicon (though not branded as such at the time) was 15 years ago

Nobody, not even Apple, was using the term "Apple Silicon" in 2010.

The first M series Macs shipped November 2020.


1 year is being pedantic. "Apple Silicon" clearly refers to the M-series chips, which have disrupted and transformed the desktop/laptop market. "Self-driving" likewise refers to the recent boom and ubiquity of self-driving vehicles.


The M series is an iteration of the A series, "disrupting markets" since 2010. LLMs are an iteration of SmarterChild. "Self-driving vehicles" are an iteration of the self-parking and lane-assist vehicles of the last decade.

I'm bored.


Damn, what an LLM roast. SmarterChild couldn't even recall past 3 messages.


I would be bored too if I were disingenuous. Everything is an iteration of ENIAC, right? Things haven't changed at all since then, right?


All of those things are more than 5 years old.


I could not get in a Waymo and travel across San Francisco five years ago. Are you serious?


> while a corporation is easy to fine, it's hard to put in prison...

It would be interesting if there were some tangible way to prevent the company from performing operations for some period of time.

I don't think this is viable or even necessarily a good idea, but imagine if "Meta illegally collected user data in this way" meant they could not operate for 5 years. It would probably require a large deconstruction of megacorps into "independent" entities, so that when one does something bad, it only affects a small portion of the overall business. It would almost introduce a concept of "families" to the corporate world.

But the rabbit hole is odd. Should (share)holders be complicit too, as they are partial owners? I think not.

Corporate entities and laws governing them are definitely weird.


Could the shareholders have caused or prevented the action? If not, then I think it'll be a difficult prosecution.


The whole point of a limited liability corporation is exactly this: that the liability of the shareholders is limited to the value of their investment, and they are not liable for debts or other failings of the company.

Without that, investment becomes incredibly risky and you get much less of it.


Yes, agree. To the point that I'm not very interested in looking them up.


The framing of this title makes it seem like Jupyter is dead. It, in fact, is not.


We don't know if that's really accurate, because you're conveniently ignoring 2016. If Trump had never initially become president, would he ever have become one?

Maybe, maybe not. But 2024 surely would have looked very different.


This only serves to reinforce the fact that the US is not a functioning democracy, if the will of the voters is not reflected.


Is it not? He did still win the popular vote.


> It is disheartening to leave thoughtful comments and have them met with "I duno. I just had [AI] do it."

This is not just disheartening; it should be flat-out refused. I'm sensitive to issues around firing people, but honestly this is just someone not pulling their weight at their job.


Why is it a non-starter for you?


Not the OP, but it's a non-starter for me: I _was_ a Mac guy for 10 years or so, but I changed jobs to one that required I use Windows for game dev, and I discovered how locked in I was and how painful it was to change. I'm not going back, no matter how nice the hardware is.

