I am a collection of interconnected units, each of which is itself a combination of different organelles and other purpose-serving features. My intelligence is itself a product of the interaction of simple, unintelligent parts. So I would say that I am a machine. I would describe a robot as a man-made machine where the interaction of various silicon parts and processors gives rise to intelligence. By that definition, I am not a robot.
I believe that we don't feel pain when we are truly in trauma. It is only when we can do something about the damaging stimulus that we feel pain. I once had a major accident where I experienced lung collapse and multiple fractures. I never lost consciousness but I don't remember feeling any pain until well after others came to my aid. Even when I tried and failed to pick myself up off the ground, I did not feel pain, only disability and relief that my fingers and toes still moved.
I believe that the question of "what is pain" and the question of "is pain a good criterion for deciding whether it is acceptable to do harm to something" are two totally different philosophical problems. They are connected only insofar as we have chosen pain as a proxy for harm. But that very relationship between pain and harm indicates that pain is not just some kind of soulful feeling, but rather a signal to help us evade harm now or in the future.
We're getting stuck on semantics here. I do, but then I'd cease to see it as a robot and see it instead as a sentient being. One criterion of consciousness that I've encountered is, 'there is something it is like to be <x>' (Thomas Nagel). If there's something it is like to be a robot, a bat, a mosquito, an amoeba, a rock... then it is conscious.
> I believe that the question of "what is pain" and the question of "is pain a good criterion for deciding whether it is acceptable to do harm to something" are two totally different philosophical problems. They are connected only insofar as we have chosen pain as a proxy for harm. But that very relationship between pain and harm indicates that pain is not just some kind of soulful feeling, but rather a signal to help us evade harm now or in the future.
Qualia are broader than just pain, of course. I just picked this particular phenomenon for its poignancy :)
If it is conscious then, ethically speaking, we should treat it differently from something that isn't. So if a rock isn't conscious, and some interconnected neural/silicon device is, we should at least have some way to query whether it is in an undesirable state, where that's feasible/practical.
Maybe if trees/plants/rocks/amoebas are conscious, we can't be consulting their feelings when we harvest crops, use disinfectant, mine for precious metals, etc. We can make decisions to treat livestock better and change how we use our environmental resources, so we ought to, and we are. But if we were to go out of our way to create new conscious entities, shouldn't we extend the historical shift in our attitude toward slavery, and the growing shift in our attitude toward animal welfare, to those new entities as well?
The one quibble is that computationalism – the idea that experience is simply what some kinds of computation feel like from the inside, regardless of the substrate – may or may not be correct. It could be that qualia can only arise in systems that are physically intertwined in particular configurations (see Tononi's IIT), and it could even be that quantum effects are required (I'm skeptical, but who knows). The jury is still out on those questions.
Therefore, it may be true that using biological neurons, arranged in a certain configuration, would give rise to qualia like pain in a way that shifting electrons between CPU registers never could. We just don't know.
If you are at all in doubt of your qualia, to quote Sam Harris (who has many entertaining podcasts on this topic with people far more qualified to speak on it than I am):
> Unfortunately, many experiences suck. And they don’t just suck as a matter of cultural convention or personal bias—they really and truly suck. (If you doubt this, place your hand on a hot stove and report back.)
https://twitter.com/SamHarrisOrg/status/951276362387591169
The Twitter thread was about moral realism, but the topics are very much intertwined. If there is no imperative for anyone to do less harm to other creatures, why should anyone care if you are in pain?
And of course this argument extends to our treatment of animals too. Countless living, feeling animals suffer for scientific progress (and cosmetics), to say nothing of factory farming and wet markets. But at least there's light at the end of the tunnel for some of these concerns (lab meat, improved in-vivo testing).