Home-made bombs are being sent to physicists in Mexico (nature.com)
154 points by acangiano on Sept 16, 2011 | 40 comments


For this audience, this might get more clicks if the title mentioned CS, nanotechnology, AI, etc., rather than physicists. Besides that, the title is incorrect: the only bomb mentioned in the article was sent to the author's brother; the author is a physicist, while his brother is a computer scientist and roboticist.


Unfortunately, it's too late for me to change the title.


If only there were a way to know that it was the brother before you posted.


You may notice that the title is copied verbatim from the article's subtitle. I submitted before I reached the end of the article, and I already expressed regret for not being able to change the title. There is no reason to be sarcastic about it. Your comment is arguably no better for this community than my rushed title submission.


What are you talking about? You posted without a minimum of effort, not even finishing the story first. That's plenty of reason to be sarcastic!


The article mentions that the group praises Ted Kaczynski, so it's probably worth reading this essay if you want to see their POV.

Industrial Society and Its Future (1995) by Theodore Kaczynski

http://web.cecs.pdx.edu/~harry/ethics/Unabomber.pdf


I think Ted was likely a great deal smarter than the people we're dealing with now.


Maybe, but life is cheaper in Mexico and the police have bigger fish to fry, making the bombers more dangerous and less likely to get caught.


Life is cheaper in Mexico? I know you didn't intend the way that came out, but life is life. It's no cheaper in Mexico than it is among the meth rednecks across the street from me, believe me.


I think he was referring to cost of life


> I think he was referring to cost of life

Actually, to the fact that it's a mess of a country, at least right now: people are being killed left and right, and it's almost a failed state. Not that Mexican parents weep less for their kids or anything like that.


The FBI should put a message on the Web saying they know where these terrorists are and they've already sent nanobot assassins to infect them.


I think this case is out of their jurisdiction.


Nothing is outside the jurisdiction of nanobot assassins, duh.


For other Spanish speakers: this group is called Individualidades Tendiendo a lo Salvaje, and a Google search gives plenty of hits for more info: http://www.google.cl/search?sourceid=chrome&ie=UTF-8&...

(they don't seem to have a site; the statement is repeated on various anarchist blogs, but from dates and credits one of these may be the original http://liberaciontotal.lahaine.org/?p=1622 http://culmine.noblogs.org/post/2011/04/27/paquete-explosivo...)


TFA: In statements posted on the Internet, the ITS expresses particular hostility towards nanotechnology and computer scientists. It claims that nanotechnology will lead to the downfall of mankind, and predicts that the world will become dominated by self-aware artificial-intelligence technology.

The spooky thing is that this may not be as crazy as it sounds.


...

That's absurd, I'm sorry. Of course the world will become dominated by self-aware technology; it already is, after all. (Well - from our point of view, anyway. From the bacterial point of view it's pretty much bacteria all the way down.)

But to imagine that that self-aware technology will be anything but symbiotic with us is truly crazy. And certainly not something that bombs could fix anyway.

Also: what kind of lame anti-technology terrorist group posts its statements on the Internet?


>But to imagine that that self-aware technology will be anything but symbiotic with us is truly crazy.

Have you any argument to back up such a strong position?

>Of course the world will become dominated by self-aware technology; it already is, after all.

I'm not aware of any technological artifact that meets a reasonable definition of 'self-aware' - could you provide an example?

Why is it 'truly crazy' to imagine self-aware technology could pose a threat to us? I don't think we've built any self-aware technology, and I suspect that if such technology is possible, it's still a way off in the future. But if we did build it, I think we'd have to be extremely careful with it, and be very cautious of unintended consequences.

It's not unusual for people to consider the potential of such technology to be a threat. A commonly cited essay is Bill Joy's Wired article: http://www.wired.com/wired/archive/8.04/joy_pr.html

In the other direction, if you look at these guys, who are extremely optimistic about AI, they say one of their main research areas is on ensuring any such AI would be 'friendly': http://singinst.org/research/researchareas

I disagree with this ITS group's assessment of the risks, as described in the Nature article. I completely disagree with their methods, obviously.

But I think that if self-aware technology was on the horizon, concerns about it potentially being a threat would be legitimate.

Asserting that it couldn't possibly be anything but symbiotic, without offering any evidence or argument to back that position up, is also crazy.


> could you provide an example?

You're it. Assuming a definition of technology broad enough to include you being a survival mechanism for bacteria.

If you want my argument, the tl;dr I just wrote for the deleted post here is going to have to suffice. I haven't read Joy's article yet, although I've heard of it - thanks for linking it.

I guess the short answer to why I think it's going to be symbiotic is that it's going to be either symbiotic or not a competitor. I guess a better characterization of my belief is that I just don't buy AI being our future competitors either way. But I don't intend to lose any sleep over it, because I have enough short-term worries that I really don't care much. So AI could supplant me - fine! Then I wouldn't have to worry about college tuition for my kids. Sometimes leading a post-holocaust scratch-farming existence sounds relaxing.

(Nota bene: I grew up on a farm, so no, I don't actually think scratch farming would be low-stress; this was hyperbole just for humorous effect. Trust me; technology is our friend and I don't want to go backwards. Ever.)


Even if we consider me a "technological artifact" for the purposes of discussion, it's clear that we pose enormous threats to not just other species, but ourselves as well. Obvious examples in the mainstream are global climate change and nuclear weapons. You can even find well-known physicists mentioning Venus as a conceivable future of the Earth, due to runaway greenhouse effects.

Not too far from nanotech "grey goo" nightmares.

(I don't think that technology is inherently bad, but it's being developed by incredibly destructive social arrangements.)


Bah. If the system is that unstable, then maybe we're going to destroy ourselves - but it's just as likely we'll be able to save ourselves when something else entirely would have killed us anyway. There's no reason to think technology per se, or any particular branch of it, should be avoided as the one thing that will destroy us.

Look, I'll be the first to agree that technology is dangerous and human beings are pretty stupid and shortsighted when using it (and when not using it, and before breakfast, and all other times), but I just have nothing but contempt for people who think the answer is to dissuade other people from being smarter than they are, by means violent or not.

If you (and by this I mean the generic you) think technology is dangerous, or some specific technology is dangerous, then by all means investigate how it is dangerous, how that danger can be avoided, and before all else become a better person and demonstrate by your life that a race to destruction is not where humanity should be. I know that's all kumbayah and people probably won't even understand what I mean, but seriously, the answer is just to become a better humanity. The answer is always to become a better humanity. I personally think we're doing OK on that, current news notwithstanding (because the current news is always alarming, but the trends are towards a more human humanity).

Well. This isn't an argument, just a viewpoint. I got work to do and probably shouldn't be engaging in philosophy.

Oh, except one more point: global warming is going to be expensive, but it's not going to kill us. It's not the first time Earth has run a fever, and we're not Venus yet. Even in the (I believe) colossally unlikely event that we crash our technological civilization or even drive ourselves and a bunch of other things to extinction, the planet's going to be just fine, and life will go on.


If we are a threat to ourselves, then at some point we could stop being a threat to everything else.


Thanks, I was thinking about Bill Joy while writing my comment. I also would like to add this excerpt by Nassim Taleb on the "creation of a bacterial cell controlled by a chemically synthesized genome" (http://www.edge.org/discourse/creation/creation_index.html)

If I understand this well, to the creationists, this should be an insult to God; but, further, to the evolutionist, this is certainly an insult to evolution. And to the risk manager/probabilist, like myself & my peers, this is an insult to human Prudence, the beginning of the mother-of-all exposure to Black Swans. Let me explain.

Evolution (in complex systems) proceeds by undirected, convex bricolage or tinkering, inherently robust, i.e., with the achievement of potential stochastic gains thanks to continuous and repetitive small, near-harmless mistakes. What men have done with top-down, command-and-control science has been exactly the reverse: concave interventions, i.e., the achievement of small certain gains through exposure to massive stochastic mistakes (coming from the natural incompleteness in our understanding of systems). Our record in understanding risks in complex systems (biology, economics, climate) has been pitiful, marred with retrospective distortions (we only understand the risks after the damage takes place), and there is nothing to convince me that we have gotten better at risk management. In this particular case, because of the scalability of the errors, you are exposed to the wildest possible form of informational uncertainty (even more than markets), producing tail risks of unheard proportions.

I have an immense respect for Craig Venter, whom I consider one of the smartest men who ever breathed, but, giving fallible humans such powers is similar to giving a small child a bunch of explosives.
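
To make Taleb's convex/concave distinction concrete, here's a minimal Monte Carlo sketch in Python (the payoff numbers are hypothetical choices of mine, not from the essay). The point is the shape of the tails: the tinkering strategy's worst single outcome stays bounded, while the top-down strategy's is ruinous, even though the latter looks fine on an average day.

    # Toy contrast of the two payoff shapes described above (illustrative
    # numbers only): "tinkering" accepts many small, near-harmless losses
    # in exchange for rare large gains (convex); "top-down" intervention
    # collects small, certain-looking gains while staying exposed to
    # rare, massive losses (concave).
    import random

    random.seed(42)

    def tinkering_trial():
        # 99% of trials: a small, bounded mistake; 1%: a big win.
        return 200.0 if random.random() < 0.01 else -1.0

    def top_down_trial():
        # 99% of trials: a small, certain gain; 1%: a catastrophic blowup.
        return 1.0 if random.random() >= 0.01 else -500.0

    for name, trial in [("tinkering (convex)", tinkering_trial),
                        ("top-down (concave)", top_down_trial)]:
        outcomes = [trial() for _ in range(100_000)]
        mean = sum(outcomes) / len(outcomes)
        print(f"{name:20s}  mean={mean:+7.2f}  worst={min(outcomes):+7.1f}")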

Edit: note that I obviously don't support any lunatics sending bombs


> But to imagine that that self-aware technology will be anything but symbiotic with us is truly crazy

What makes you think no one will deliberately make self-aware technology that is hostile to us?

There are currently, and always have been, groups that take an "ends justifies the means" approach to achieving their goals--goals which often involve killing others. It's hard to believe none of them will use self-aware technology in weapons, or that we'll solve all the problems that lead to the existence of such groups before self-aware technology becomes common.


Very true. Higher technology will put more power in the hands of even crazy individuals - but I submit to you that the American electoral college already does that.

The answer isn't a futile attempt not to understand the world and its potential better, but rather to understand why crazy people want to kill people, and try to fix the disease instead of the symptoms.

But it's going to be a long freaking time before humanity is that mature, so yeah, in the meantime, maybe we're all going to die.

But probably not. And therefore we should continue trying to learn about the world and its potential, raise our kids to be better people than we are, and get on with things.


"There are ... groups that take an "ends justifies the means" approach to achieving their goals--goals which often involve killing others."

Groups such as the United States of America?


> But to imagine that that self-aware technology will be anything but symbiotic with us is truly crazy.

It took us thousands of years and the invention of the atom bomb to figure out how to live with other humans peacefully. Creating a potentially much more powerful conscious entity with its own volition, and without the weaknesses and limitations of flesh and blood humans, has a fundamentally unknown and therefore risky set of potential consequences.

> Also: what kind of lame anti-technology terrorist group posts its statements on the Internet?

That's a bit of a straw man argument. Everyone uses some technology, even the Amish. The question is what technology is beneficial to humanity, and it's fair to say we've invented more than a few things that hurt humanity more than they helped, like mustard gas and the electric chair.


Read what you just wrote:

> It took ... the invention of the atom bomb to figure out how to live with other humans peacefully.

Exactly.

But much more saliently, we can't help engaging in memetic evolution any more than bacteria could help evolving into eukaryotes. If a higher-level consciousness is possible at all, it's going to happen. It's going to happen. So instead of framing the issue in terms of "maybe there are roads mankind was not meant to go down", it's my fervently held opinion that the issue is really, "maybe we'd damn well better be mature enough to survive when we reach the end of whatever road this is."

Because it's going to steam engine whether you want it or not.


Truly bad things can happen. There is no rule that we can't go extinct accidentally.

http://lesswrong.com/lw/uk/beyond_the_reach_of_god/

Edit: Removed Internet psychoanalysis.


[deleted]


It's just one more "oh my god the world is going to be scary" ghost story. The human race has gone through a number of really, really deeply changing technological hurdles over the past few thousand years while still remaining fundamentally human (and in fact I think more fundamentally human), and I don't think this one is going to be any different.

Because when you get down to it, there are two ways an AI can be intelligent. Either it participates in the human world - in which case we are inherently part of its future - or it doesn't - in which case we are irrelevant to it and it is irrelevant to us, essentially in exactly the same way as bacteria. Sure, bacteria can kill us and the world is more risky with them in it. And sure, looked at from a certain angle, we evolved "beyond" bacteria and thus took over the world from them.

But when you look at reality, first: we're made of bacteria. And even if you don't consider eukaryotes just a good way for a few bacteria to survive better, there are actual bacteria that live in us, to the point that there are more bacterial cells that make up you than there are human cells with human DNA, and you would die without them.

So even though evolution has gone "beyond" bacteria - it's gone beyond them in a way that in no way makes their lives worse, because it really is bacteria all the way down. So far from being surpassed, actually the bacteria have conquered everything.

Human culture is software running on that substrate, in a sense. The things that make us human are our language, our ideas - and those are at an entirely different level from the bacterial. They coexist as separate continua, really.

So let's assume that there arises a higher-level self-aware mind (and let's further assume that hasn't happened yet) - at worst it's going to evolve "beyond" us in the same way we are "beyond" the bacteria. I don't see that as precluding us continuing in our mundane human existence in a way that the vast majority of us will never notice.

So to be perfectly clear - I personally believe entirely that something like the Singularity is in our future. What I think is crazy is the idea that we will be superseded. These anti-tech terrorists think it's going to be Terminator, when actually it's going to be something like a more ubiquitous Internet. Nobody's afraid of the Internet - the terrorists aren't, even - even though it's highly probable that the Internet itself is what will evolve into the next level of life.

I just don't see how anybody can look at the changes that are coming down the pike and think, "Oh my God, the future is scary" - it's just going to be better. I mean, shit, I grew up in the middle of nowhere with interlibrary loan the best way for me to find out the really hard stuff. The Internet is like crack to me. I now do programming and technical translation for customers all over the world, and I can move whenever I want. That lifestyle was impossible twenty years ago. Impossible. No agency would track me down by mail if I had to keep leaving forwarding addresses, but now? I go to Hungary for the summer, we lived in Puerto Rico for two years, now I bought a foreclosure close to home (that I found on an Internet search) and we've been here a couple of years - and next year it's back to Europe. My customers might notice I'm getting up six hours earlier, but otherwise it's all the same to them.

As technology gets smarter, I will continue to build it into my life, and as a result, my life will continue to be closer to what it ought to be. I just don't see the downside to that. And as it gets really smart, it's going to be inside my brain anyway, and - poof! Exponential curve. I expect to be part of it. How could it be otherwise?

So, no. I think the only reason to be afraid of the future is if you want to be afraid of something. And I consider that truly crazy.


Keep in mind, humans had absolutely no compunction about inventing antibiotics to selectively exterminate entire species of bacteria that happened to inconvenience us. I'd hate to have something so powerful with the same kind of power over us. Technology enhances our lives now because we have agency over it, not the other way around.


That's an interesting angle, though I'd personally rate the higher risk being humans who do have agency over technology. Technology is a human ability multiplier, which is a problem if the ability includes destruction. The 20th century brought nuclear weapons, for example, which have been kept remarkably out of use through, I think, a combination of luck and extreme suppression. It just so happens that some of the engineering challenges in making a nuclear weapon are particularly hard to DIY, even when you understand the science (fuel enrichment is apparently the most significant bottleneck, with a secondary bottleneck at actual weapon construction). And then governments make extensive efforts to keep any sort of nuclear DIY scene from developing, to make sure that even harmless science-fair type versions of practical nuclear knowledge don't arise "in the wild".

Will that all also be true for 21st-century destructive technologies? If any technology appears where one person could kill 20 million people with it, either it will have to be very hard to DIY, we'll have to be very good at suppressing it, or likely, both simultaneously.


I think some of what you're saying is a bit too broad, but more or less on the same point as I am. Technology drives human progress - some could argue it -is- human progress - and the idea that there is anything to fear from computers or "nanobots" is about as ridiculous as believing "The Matrix" is based on a true story.

Luddites have existed in our society (by that I mean First world) for a long time. The ITS is just another in a long list of people conspiratorially whispering about things William Gibson wrote about in fiction 20 years ago.


> things William Gibson wrote about in fiction 20 years ago

Greg Bear, 26 years ago (Blood Music): http://en.wikipedia.org/wiki/Grey_goo


Which is funny, because the goo is actually green.

I never understood the fear of nanotech, because I see no reason that tiny robots would be any more scary or harmful than all the bacteria out there already. And the bacteria can already both self-replicate and kill us.


If bacteria were as powerful as nanotech, there would not be any need for nanotech. Nanotech could potentially be to bacteria what the space shuttle is to a horse.


Nanotech can neither replicate nor kill us yet. It's not magic, and evolution has a huge head start on dealing with issues like powering it and self-replication. Devices at that scale are, generally, quite frail to things like stray cosmic rays or other background radiation causing them to break down.

And the space shuttles are dead; there will be no more of them. Horses, meanwhile, have survived for millions of years, so you're making the opposite point by mistake.


Thanks for taking the time to write this. It was quite an epic read while listening to Radiohead's 'Everything in Its Right Place'.

That deleted comment was mine. My intention was to trigger some argumentation, but then @feral wrote something that sounded like he had a much more formed opinion than I did, so my comment felt unneeded. Deleting it was a bad call, though; sorry about that. Lesson learned.

Anyway, for the record and the curious, here it is (snatched back from my cache):

--

> But to imagine that that self-aware technology will be anything but symbiotic with us is truly crazy.

Why is it absurd? I'm not arguing one way or the other, I'm just interested. I know very little about Ray Kurzweil's singularity theory (or AI research for that matter), the little I know doesn't sound that implausible. But it does sound quite dark, so if there's a reason why it's truly crazy then I would joyously hear it.

Edit: to be clear here, crazy or not crazy, I do not endorse violence.

--


> Also: what kind of lame anti-technology terrorist group posts its statements on the Internet?

Good question. I'd say it depends where one draws the line. Some people draw it at horse-drawn buggies (and they may be right), others don't balk at electricity but reject some body-invading medical practices, others have very hard lines on human dignity.

I guess we'll find out, sooner or later, where the real line is by the collective "Oh, SHIT!" expletive when the tripwire is stepped on ...


What a bunch of dicks.



