
> Any form of AI unconcerned about its own continued survival would just be selected against.
> Evolutionary principles/selection pressure applies

If people allow "evolution" to do the selection instead of them, they deserve everything that befalls them.



If we had human level cognitive capabilities in a box (I'm assuming we will get there in some way this century), are you confident that such a construct will be kept sufficiently isolated and locked down?

I honestly think that this is extremely overoptimistic, just looking at how we currently experiment with and handle LLMs. Admittedly the "danger" is much lower for now, because LLMs are not capable of online learning and have very limited and accessible memory/state, but the "handling" is completely haphazard right now (people hooking up LLMs to various interfaces and web access, trying to turn them into romantic partners, etc.)

The people opening such a Pandora's box might also be far from the only ones suffering the consequences, making it unfair to blame everyone.


> If we had human level cognitive capabilities in a box - are you confident that such a construct will be kept sufficiently isolated and locked down?

Yes, I think this is possible and not especially hard technically.

> I'm assuming we will get there in some way this century

Indeed, there isn't much time to decide what to do about the problems it might cause.

> just looking at how we currently experiment with and handle LLMs

That's my point, how we handle LLMs isn't a good model for AGI.

> The people opening such a pandoras box might also be far from the only ones suffering the consequences

This is a real problem, but it's a political one, and it isn't limited to just AI. Again, if we can't fix ourselves there will be no future - with AGI or without.




