
1. The religious nut doesn't have the knowledge or the skills right now, but AI might enable them.

2. Accessibility of information makes a huge difference. Prior to 2020 people rarely stole Kias or catalytic converters. When knowledge of how to do this (and, for catalytic converters, of their resale value) became widely available (i.e., trending on TikTok), thefts became frequent. The only barrier which disappeared from 2019 to 2021 was that the information became very easily accessible.

Your last two questions are not counterarguments: AIs already outperform the median biology student, and removing sites from the internet is obviously not feasible. It is easier to stop foundation model development than to censor the internet.

> What is to stop someone from training a model on such data anytime they want?

Present proposals are to limit GPU access and compute for training runs. Data centers are kind of like nuclear enrichment facilities: they are hard to hide, they require large numbers of dual-use components that can be regulated (centrifuges vs. GPUs), and their large power requirements make them show up on aerial imaging.



What happens if someone develops a highly effective distributed training algorithm permitting a bunch of people with gaming PCs and fast broadband to train foundation models in a manner akin to Folding@Home?

If that happened, open efforts could marshal tens or hundreds of thousands of GPUs.

Right now the barrier is that training requires too much synchronization bandwidth between compute nodes, but I’m not aware of any hard mathematical reason there couldn’t be an algorithm that syncs far less. Even if it were less efficient, that could be overcome by the sheer number of nodes you could marshal.
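One known family of algorithms that trades synchronization frequency for extra local compute is local SGD (the idea behind federated averaging). A toy sketch on a made-up least-squares problem, purely to illustrate the communication pattern; the problem, names, and hyperparameters here are all illustrative, not a recipe for real foundation-model training:

  import numpy as np

  # Toy sketch of local SGD: each worker takes H local steps between
  # synchronizations, cutting network traffic by roughly H x compared
  # with averaging gradients on every step.
  rng = np.random.default_rng(0)
  X = rng.normal(size=(1000, 10))       # fake dataset (least squares)
  true_w = rng.normal(size=10)
  y = X @ true_w

  def local_sgd(n_workers=8, rounds=50, local_steps=20, lr=0.01):
      shards = np.array_split(np.arange(len(X)), n_workers)
      w = np.zeros(10)                      # shared model parameters
      for _ in range(rounds):               # one sync per round, not per step
          replicas = []
          for shard in shards:              # each worker, in real life in parallel
              w_local = w.copy()
              for _ in range(local_steps):  # H cheap local steps, no network
                  idx = rng.choice(shard, size=32)
                  grad = X[idx].T @ (X[idx] @ w_local - y[idx]) / 32
                  w_local -= lr * grad
              replicas.append(w_local)
          w = np.mean(replicas, axis=0)     # the only communication step
      return w

  w = local_sgd()
  print("error:", np.linalg.norm(w - true_w))

The only network traffic is one parameter exchange per round, so sync cost drops by roughly the number of local steps. Whether anything like this scales to foundation-model sizes over consumer broadband is exactly the open question above.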


Is that a serious argument against an AI pause? There are potential scenarios in which regulating AI is challenging, so it isn't worth doing? Why don't we stop regulating nuclear material while we're at it?

In my mind, the existential risks make regulating large training runs worth it. Should distributed training runs become an issue, we can figure out a way to inspect them too.

To respond to the specific hypothetical: if that scenario happens, it will presumably be driven either by a botnet, by a large group of wealthy hobbyists, or by a corporation or nation state intent on circumventing the pause. Botnets have been dismantled before, and large groups of wealthy hobbyists tend to be interested in self-preservation (at least more so than individuals). Corporate and state actors defecting on international treaties can be penalized via standard mechanisms.


You are talking about some pretty heavy-handed authoritarian stuff to ban math on the basis of hypothetical risks. The nuclear analogy isn’t applicable because we all know that A-bombs really work. There is no proof of any outsized risk from real-world AI beyond that of other kinds of computing that can be used for negative purposes, like encryption or cryptocurrency.

Here’s a legit question: you say pause. Pause until what? What is the go condition? You can never prove an unbounded negative like “AI will never ever become dangerous,” so I would think there is no go condition anyone could agree on.

… which means people eventually just ignore the pause when they get tired of it and the hysteria dies out. Why bother then?



