There’s one major distinction between humans and animals: language. Not “me want banana” language like apes have, but fully “ideas within ideas within ideas” language, with recursion and case and noun classes and all that. It’s incredible. It’s astonishing. We are like gods compared to other animals.
And ever since we have had language (not even that long — ~50k years or so) we have been using it to tell stories about ourselves that separate us more and more from the rest of the world. As if all of the beautiful things that make us human stemmed from symbolic reasoning. And yet we see every day: jealousy in chimps, maternal love in cows, play in dogs, compassion in elephants, frustration in cats, curiosity in pigs.
The story that we tell ourselves about our specialness gives us a moral free pass to treat animals how we want. Which is why I think these articles tend to polarize. It’s because the implication is that if animals are really so much like us, we’ll have to come up with a much better justification for treating them how we do. For now our reasons have hinged on their supposed lack of ability to feel emotion (or even pain!). In the future it might well be because they cannot symbolically reason. We have to have some reason, in the end, for treating animals the way we do, or otherwise face a moral crisis.
I do wonder how the pre-humanist humans felt about this, like tribal people. I know they had few qualms about killing animals but at the same time assigned human qualities to them. It might have been that surviving without meat was simply impossible, which is an argument one could not easily make today.
I see nothing immoral in a bear eating a human. It’s just being a bear. But just like any other social animal, as humans we’re of course going to kill that bear so it stops eating us. It has nothing to do with us being superior to a bear. We just don’t want to die, like any other animal!
And as humans I see nothing wrong with eating other animals (outside of animal cruelty, e.g. factory farms). As animals, we naturally eat each other. If there is something immoral about this, does that mean a rabbit is morally superior to a fox?
>I see nothing immoral in a bear eating a human. It’s just being a bear. But just like any other social animal, as humans we’re of course going to kill that bear so it stops eating us. It has nothing to do with us being superior to a bear. We just don’t want to die, like any other animal!
Right.
>And as humans I see nothing wrong with eating other animals (outside of animal cruelty, e.g. factory farms).
Bears can't have moral culpability because they aren't intellectually sophisticated enough, much like how a theoretical profoundly mentally disabled human, or an infant human, wouldn't be morally culpable for killing someone. The "mens rea" can't be established. However, nearly all adult humans do possess the capacity for moral reasoning.
>As animals, we naturally eat each other.
Even though bears and humans are animals, and animals often eat each other, we're the only animals blessed/cursed with the knowledge that if we were to maul someone to death, they'd experience terrible pain and suffering, their life would be cut short, and their family would mourn their death, lose resource support, and potentially suffer and die themselves. If a bear had those thoughts, it would be morally culpable, but it almost certainly doesn't.
>If there is something immoral about this, does that mean a rabbit is morally superior to a fox?
No, because a rabbit's moral reasoning is in the same class as a bear's and a fox's, and not a human's.
Individual beings who have a moral sense of personhood engage in behavior that can be classified as moral or immoral, right or wrong, permissible or impermissible. Their actions can be categorized as either condemnable or commendable. It makes sense to hold them morally responsible for their intentional actions.
In contrast to humans, animals such as dogs, cats, birds, and fish are commonly held NOT to be moral agents or moral persons. In the jungle, a lion eating another animal or killing a human for any reason is not considered morally wrong or blameworthy.
Pet owners frequently chastise their pets for undesirable behaviors such as urinating on the carpet, digging in the garden, or failing to obey a command, but yelling "bad dog" is not usually interpreted as moral agency. One could argue, however, that the owner is engaging in moral expectation or anthropomorphism.
Though I do agree with the general principle that humans are animals, when it comes to eating each other, it's clear that our greater flexibility (and understanding of our own biology) allows us to choose our diets to a greater degree than other animals can. Given that, our eating fewer animals (for ecological, health, and/or ethical reasons) seems like the best approach.
A fox eats a rabbit out of necessity. In our modern economy, people don't need to eat animals; it's usually a matter of convenience, taste, and tradition. But in all reality, we can get by quite comfortably without doing so.
Speak for yourself, I feel a strong moral obligation not to eat meat or otherwise support industries which exploit animals. Reducing suffering is a moral imperative.
If you believe you should reduce suffering, it would be better for you to quickly kill a chicken and eat it. Otherwise, a cat, bird of prey or other animal might kill that chicken. And cats are cruel killers that slowly torture their prey before eating them.
This is the primary argument I see hunters use to justify hunting as an ethical form of killing, in contrast to factory farming. Obviously it's better to kill someone or something quickly and painlessly than slowly and painfully, but I don't buy this argument. If you truly had that motivation, you would be saving that chicken from predation and finding a sanctuary or home for it, and/or you'd be encouraging mass human euthanization campaigns across various parts of the world to reduce suffering and slow, painful deaths.
If there were a human serial killer killing people via a projectile energy weapon that always instantly killed someone from a long distance before they had any idea what was happening, I don't think you'd argue for a greatly reduced sentence due to their humane method of disposal. If there were an ancient human civilization that went on hunting trips to kill and eat humans in other villages because they believed human flesh was the most prized meat, and they defended it by saying that they were probably all going to soon die of war or starvation or disease anyway, I don't think you'd just go "oh yeah true" without batting an eye.
The sole reason - the necessary and sufficient reason - hunters find hunting justifiable is they attribute no moral value to the lives of non-human animals. Anything else is self-serving rationalization. If you attribute no moral value to them, that puts you in the company of almost everyone who's ever lived, but just state it plainly instead of trying to wiggle around it with mental gymnastics.
Boom, someone finally said it. However, I think you can just turn this:
> The sole reason - the necessary and sufficient reason - hunters find hunting justifiable is they attribute no moral value to the lives of non-human animals.
Into this:
> The sole reason meat-eaters find eating meat justifiable is they attribute no moral value to the lives of non-human animals.
One obvious difference is that plants evolved with animals (or influenced the evolution of animals) via an express food-providing mechanism. To eat the leaves or the fruit of a plant does not necessarily kill it; in fact, one could say that the plants evolved these parts in a symbiotic relationship with their animal eaters/caretakers. It's certainly possible to destroy a plant by eating it, obviously, but how many plants can you think of that provide a detachable, replenishable food product?
Animals do not have similar food-providing mechanisms. When you eat an animal, or part of an animal, it's dead. It doesn't grow back. It wasn't designed to.
All nutrition comes from plants (or the microbiology around them). All protein comes from plants. Anyone who equates the barbarity of eating animals with eating plants is choosing to deceive themselves and others.
Based on that logic, I should suffocate you in your sleep now so you don't have to get old and die of cancer. Don't look out for me with my pillow; I don't buy that logic either.
See the repugnant conclusion. I don't have a good answer, save that arguments dealing with completely hypothetical people seem to produce completely useless, nonsensical results. For example, if your argument were correct, it would be morally beneficial to breed as many human children as possible to live horrifying, painful, and short lives in order to end their tortured existence as kid burgers, because any sort of life is better than none at all.
If we admit the idea that some lives can have negative value, for example if they consist solely of suffering, it would still be OK to breed kid burgers so long as they had a pleasant existence up until slaughter at 7, followed by a quick trip to McKid.
I suspect that my moral philosophy, and likely yours, is simply insufficient to deal with hypothetical persons, and that we ought to simply reason about how to treat actual beings that exist.
Correction: is it better to suffer and die or not live at all? The vast majority of the 8 billion plus chickens slaughtered in the USA each year experience primarily pain and stress for their short lives.
Your attempt to bend my morality to yours doesn't register with me, bud.
Some things die so others may live. That is the way it always has been, and always will be, regardless of whether some subset of humans decides that "that's immoral". I will continue to eat meat and hunt animals and feel zero guilt. I do not care that something was killed for food when... I dunno, grass juice and pitaya could theoretically sustain me instead (lol). Animals taste good and are calorically dense. That's 100% convincing enough for me.
You stated "[t]here is no moral obligation to abstain from meat, whatsoever." Hand-wavy attempts at dismissing entire bodies of study within philosophy won't work with most intelligent people, either.
Peter Singer is an excellent starting point for exploring the dimensions of morality for eating animal flesh.
I can't argue against your own sense of guilt, since it's possible for some people to feel none at all [0]. Your other comment about willingly and guiltlessly drowning mice [1] in a bucket is not indicative of a healthy mind.
I didn't call you a psychopath, and truly cannot make that call since I'm not a psychologist. It remains fair to say that drowning animals is psychopathic behavior, however.
Please try to research a more philosophical approach to morality and ethics. Shutting down in the face of an alternative viewpoint is hardly a productive approach to conversation. Cheers.
We have options, and options imply choices and therefore morality.
I see lots wrong with it, since it creates pain and bad life experiences. This bothers me. But that's also a choice, of what kind of morality you embrace.
Studying AI has given me a lot of perspective on human intelligence. It's pretty incredible the way a system that was designed as a basic input/output sensory/reflex system has gotten so complex that we still cannot model it with supercomputers.
The number of connections and configurations of neurons is staggering, and still well beyond the neat matrix-array-based designs of modern AI. ...but at the core, I've come to realize that we continue to be stimulation-based creatures. What we think at any given moment is a product of what we were thinking a moment before and the sensory stimulus we are constantly receiving.
It occurs to me that when we create an AI that surpasses us, that that AI will likely create a fundamentally different way of thinking. Something not based on external stimulus and the churning of our thoughts - but something more purposeful and ordered.
And THAT entity will be the ultimate output of humanity. We cannot imagine what it will do, or what it will do with us (probably nothing - it will probably just leave the Earth). ...but I also imagine that we are not the first in the universe to create such an entity, and so there must be other massive timeless entities in space.
Perhaps they live in the darkest parts of the universe, in quiet contemplation, or perhaps they search for each other to resist cosmic expansion. Perhaps they peacefully merge, or collaborate, or war with one another on billion-year timescales.
It's a great mystery that will forever be beyond our level of intelligence. Unless, of course, the AI wants to upload us and bring us along for the ride. ...but that notion is probably just wishful thinking and hubris. It would be like us keeping a pet fungus in our pocket so it can enjoy a day at the office.
Until it's there it's 100% sci-fi. We've been through a few AI hype cycles already and the most advanced AI is still dumb as fuck compared to a 3yo kid.
We might be on a completely wrong path with our current approach too (difference of degree vs. difference of kind). We don't know much about the brain, and so far our binary way of computing isn't really promising, especially not in terms of mimicking or surpassing the human brain; it might just not be the right tool.
> most advanced AI is still dumb as fuck compared to a 3yo kid
Or even a squirrel. Take robotics, for example, or hell, even an AI-simulated animal: the AI doesn't even come close in its ability to problem-solve and react to novel situations. A squirrel powered by a few acorns is able to achieve things that even our most powerful supercomputers, consuming 8.2 megawatts, could never do.
The problem is that we are currently limited by our computer architecture; brains operate in a wildly different way. For one, a brain is a continuous/non-discrete collection of neurons that are themselves quite complex.
IMO true AI will need to be closer to a network of analog computing parts.
How's the computing power of a squirrel's brain compared to our best AI in terms of "number of system states"? I'm not in the field, so I'll elaborate my poorly-phrased question below:
My understanding is that you can calculate the number of "system states" of a computer by calculating how many different combinations of open-closed its logic gates can support. It's a mind-bogglingly huge number, but no matter what, the set of "all possible open-closed gate combinations" will be larger than the set of "the smartest, best simulation of an AI we have".
So--if memories, instincts, etc. are defined by things like "angle of neuron twist, number of transmitter molecules fired at second 0.0001, age of neuron in ns", etc., then just how many more "system states" can a squirrel's brain hold than a supercomputer can?
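
One hedged way to make my question concrete is to count bits of state on each side and compare the exponents, since the raw state counts (2 to the power of those bit counts) are too large to compare directly. Every number below is an illustrative round figure I'm assuming for the arithmetic, not a measurement of any real chip or squirrel:

    # Illustrative, assumed round numbers -- not measurements.
    chip_state_bits = 10**10        # pretend the chip holds ~10 billion bits of state
    neurons = 5 * 10**8             # pretend squirrel brain: ~500 million neurons
    synapses_per_neuron = 10**3     # pretend ~1,000 synapses per neuron
    bits_per_synapse = 5            # pretend each synapse stores ~5 bits

    brain_state_bits = neurons * synapses_per_neuron * bits_per_synapse

    # Each side has roughly 2**bits reachable "system states", so compare the exponents.
    print(f"chip : ~2**{chip_state_bits:.1e} states")
    print(f"brain: ~2**{brain_state_bits:.1e} states")
    print(f"exponent ratio (brain/chip): {brain_state_bits / chip_state_bits:.0f}x")

On those made-up figures the squirrel wins by a couple of orders of magnitude in the exponent, but the bigger caveat is that the count of reachable states says nothing about how usefully either system moves between them.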
You start out well with the first three paragraphs, but I don't get how you can decide it will 'probably' leave the Earth, let alone with such a high degree of confidence as saying 'probably'. Why wouldn't it make an army of bots to start converting all the matter in the solar system and beyond into more computing substrate or whatever else it finds useful?
You're right, I cannot say "probably". Although your notion that it converts our solar system into a computing machine doesn't preclude it from leaving thereafter.
I suppose there are three possibilities.
1. It leaves the Earth, and either remains limited in size or expands in a more advantageous solar system(s).
2. It stays on the Earth forever, permanently limiting its computational capacity.
3. It expands to include sub-entities that both stay and leave in some cosmic distributed computing organism.
I suppose the three possibilities above can be reduced to one fundamental question: Will the AI expand to be interstellar/intergalactic in nature, or will it remain limited?
Is there a fundamental unending utility to ever-greater computing power? ...and, if so, would there be detectable signs of such expanding computers in the cosmos? This last question is important both for our own forecasting of the future and for interpreting inter-AI-entity relations, because presumably if AIs do NOT get along in space, they likely hide signs of their existence.
One thing I'm convinced of - organic meat bags are not the future of space-faring intelligence.
Many sorts of intelligence are social creatures, so - especially for a hypothetical AI created by us - I would expect it to seek out stimulus and social relationships.
In the happy sorts of sci-fi, that gives us something like the Culture from Iain Banks; it could also be a "replace the humans with other AI" situation.
You might be interested by neuromorphic hardware. The basic observation is that animal computation and silicon computation operate in very different ways. Animals use lots of neurons that individually perform comparatively poorly (slow, not deterministic) and are sparsely connected, but operate with a high degree of parallelism. Compare that to, say, a computer chip, which uses relatively few components that all operate at very high speeds with a high degree of determinism, are very thoroughly connected, and do not operate with nearly the same degree of parallelism. So if we want to explore AI, maybe we should try making hardware that is more similar to the goop in our heads.
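
For a concrete feel of the kind of unit such neuromorphic designs are usually built around, here's a minimal leaky integrate-and-fire neuron sketch in Python; it's the textbook spiking-neuron model rather than any particular chip's API, and the parameter values are illustrative assumptions:

    import numpy as np

    # Illustrative parameters (assumed, not taken from real hardware).
    dt, tau = 1e-3, 20e-3                            # time step and membrane time constant (s)
    v_rest, v_thresh, v_reset = -65.0, -50.0, -65.0  # membrane potentials (mV)
    r_m = 10.0                                       # membrane resistance (arbitrary units)

    rng = np.random.default_rng(0)
    current = rng.uniform(1.0, 2.0, size=1000)       # noisy input current over 1 second

    v = v_rest
    spikes = []
    for t, i_in in enumerate(current):
        # Leaky integration: decay toward rest, pushed up by the input current.
        v += (dt / tau) * (-(v - v_rest) + r_m * i_in)
        if v >= v_thresh:                            # threshold crossing: emit a spike, reset
            spikes.append(t * dt)
            v = v_reset

    print(f"{len(spikes)} spikes in 1 second of simulated input")

The contrast with the paragraph above is the point: each unit is slow, stateful, and noisy, and the interesting behavior is supposed to come from wiring huge numbers of them in parallel rather than from making any single unit fast or deterministic.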
Neurons in your brain have tens of thousands of connections each, and are not limited to the current AI design where all connections are laid out in neat linear layers for matrix operations.
Squishy human brains connect in all directions - there's no "layer" to every thought. The brain creates feedback loops, intricate pathways, as well as direct connections.
Modern AI tech fundamentally dumbs down intelligence with this notion of layered matrix operations.
It is done for scalability, because matrix operations can be computed efficiently on a GPU, but it's not the same architecture.
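
As a rough sketch of what that layered design means in practice (array sizes below are arbitrary examples), a fully connected layer collapses into a single matrix product, which is exactly the shape of work a GPU is built for; free-form feedback wiring between individual neurons has no such tidy form:

    import numpy as np

    rng = np.random.default_rng(0)

    # One fully connected layer: 512 inputs -> 256 outputs.
    x = rng.standard_normal(512)            # input activations
    W = rng.standard_normal((256, 512))     # every output connects to every input
    b = rng.standard_normal(256)            # biases

    # The entire layer is one matrix-vector product plus a nonlinearity.
    y = np.maximum(0.0, W @ x + b)          # ReLU(W x + b)

    print(y.shape)                          # (256,)

Stacking such layers keeps the whole network expressible as dense matrix math, which is what makes it scale on GPUs, and is also why it throws away the free-form connectivity described above.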
There are pretty recognizable layers actually, and groupings of neurons that resemble 'cells' in the sense that they have recognizable inputs and recognizable outputs, and a large degree of interconnectivity.
What you are talking about sounds like deep learning. What I'm talking about is the hardware. Your tone makes it sound like you think you are correcting me, I'd like to inform you that you are not.
If we're talking about general artificial intelligence, then the only intentional notion is to learn from the world. What happens after that is completely shaped by its environment / input. For example, see Microsoft's chat AI that quickly became a racist bigot after reading Twitter: https://www.theverge.com/2016/3/24/11297050/tay-microsoft-ch...
More about language: apparently, not all languages have that "ideas within ideas within ideas" property you were talking about. The Pirahã language of an indigenous people in Brazil is the counterexample. This New Yorker article talks about it and the fascinating effects it had on the field of linguistics. Before its discovery, most linguists, led by Chomsky, believed having recursion in our grammars set humans apart from all other animals. Pirahã showed this might not be the case.
> There’s one major distinction between humans and animals: language. Not “me want banana” language like apes have, but fully “ideas within ideas within ideas” language, with recursion and case and noun classes and all that. It’s incredible. It’s astonishing. We are like gods compared to other animals.
There's nothing immoral with eating things. Why would there be? The other posters' rabbit-fox example is a great point.
I also strongly reject the idea that we need 'our specialness' to 'justify eating things and how we treat them'. I don't need justification beyond: 'That thing is made of meat. I can eat meat. I'm going to kill it and eat it'. There is no moral crisis. Things die. Things get eaten.
Just 2 days ago, I caught 2 mice that had been living a nice life in my attic (because I was lazy for a few weeks). I drowned them in a bucket. Is there a moral crisis there? No. They were vermin, living where I decided they cannot (because I live in that space). There is no difference between that and a scorpion killing small spiders trying to spin webs in its burrow.
I also think you've touched on something: We created distinctions that 'separate us more and more from the rest of the world'. If what you say is true, then they're simply illusory. If humans are so like animals (indeed we simply are the apex animal), an attempt to force morality onto our natural impulses and diets is absurd. Therefore, killing and eating anything we can reasonably digest is a natural behavior of the human animal. Returning to the rabbit-fox example... well, clearly you can see meat-moralizing falls apart in a hurry.
This is a fascinating twist to me. Human exceptionalism is what lets us justify not eating other animals - if we're not special, then there is no reason we should be held to a different standard. I have encountered many conversations with the opposite premise and it never occurred to me that it should be reversed. Thanks for the new perspective.
Total aside, I think it's funny that many 'classic herbivores' like deer, horses, moose, etc., will absolutely eat other animals if the opportunity presents itself. Vegetarians and vegans have chosen a dietary extreme that even the animals they strive to protect/save/whatever don't display.
I wrote something else, but actually, upon 2nd thought, you are correct. I do think morality is an invention. A quote (one of my favorites) describes better than I can.
“Moral law is an invention of mankind for the disenfranchisement of the powerful in favor of the weak. Historical law subverts it at every turn. A moral view can never be proven right or wrong by any ultimate test. A man falling dead in a duel is not thought thereby to be proven in error as to his views. His very involvement in such a trial gives evidence of a new and broader view. The willingness of the principals to forgo further argument as the triviality which it in fact is and to petition directly the chambers of the historical absolute clearly indicates of how little moment are the opinions and of what great moment the divergences thereof. For the argument is indeed trivial, but not so the separate wills thereby made manifest. Man's vanity may well approach the infinite in capacity but his knowledge remains imperfect and howevermuch he comes to value his judgments ultimately he must submit them before a higher court. Here there can be no special pleading. Here are considerations of equity and rectitude and moral right rendered void and without warrant and here are the views of the litigants despised. Decisions of life and death, of what shall be and what shall not, beggar all question of right. In elections of these magnitudes are all lesser ones subsumed, moral, spiritual, natural.” -- Cormac McCarthy, Blood Meridian, or the Evening Redness in the West
I am extremely thankful moral law was invented. I invented my own, just like everyone else has for thousands of years. Our individual Human morals generally have enough overlap to keep things working. But they aren't true "laws".