Thursday, May 16, 2019

Three Things

Comment read online:
Liberals thought we won the Civil War. Conservatives are still fighting it.

An outstanding article by Robert Wright.

How the world works, by Kevin Drum.

Saturday, May 11, 2019

Brilliant song from a brilliant album

I wish I was a christian, knew what to believe
I could learn a lot of rules to put my mind at ease

Maybe not the best song on the album (here's another to try), but gosh do I love that couplet.


Thursday, May 9, 2019

The Epitome of Straight White Male Privilege

Sam Harris on Recode / Decode:

Kara: Although, I have to say, it’s not identity politics to understand why people come to things because of things that have happened to them, right? 
Sam: Yeah. But then those people have to get over those things to have a rational conversation.

He did realize he was being recorded, right? Just get over it? Really?

Read the whole thing. No more need for concern for civil rights, this isn't the 1960s, etc. Amazing.


Saturday, April 27, 2019

Less Suffering in the Universe

Every conscious mind is a universe unto itself.

That matter and energy could become subjective experience is shocking and amazing.

Thursday, April 25, 2019

Where I part ways with the smartest person I know

Not to distract from Dr. Greger's piece (which is the most important one in this thread), but I'm going to pick up on some questions from my utilitarian piece.

I'm still a consequentialist. That is, I still think the rightness or wrongness of an action is determined by the consequences, not by whether it follows some rule.

One person suggested I'm a negative utilitarian. But I think it is more complicated than that. Despite Parfit, I see a fundamental discontinuity between summing within an individual vs summing across individuals. I would cause myself to suffer some (exercise, not eating a vegan donut) to avoid worse suffering in the future (open heart surgery). But I wouldn't choose to have someone else suffer so I could avoid worse suffering. (Well, unless that someone else was Mike Pence or Paul Ryan.)

Yes, I understand it is more complicated than that -- e.g., I would choose to have person A turn down a donut rather than have person B suffer a Crohn's attack. My point is that I see a difference between intra-person choices and inter-person choices.

It is the summing across individuals that really gets me. There is no entity experiencing "all the suffering in the universe." Only individuals suffer -- the universe doesn't suffer. (This also answers Parfit's repugnant conclusion to my satisfaction.)

The smartest person I know disagrees with me on this. They are focused on existential risks because summing up all the future joy from hedonistic robots vastly and absolutely swamps any concerns of the moment.

OK, Data isn't the perfect example, but huzzah for ST:TNG.

But as in my previous post, the math is where I get off the train (in addition to not believing that one person's happiness can offset another person's suffering). I understand expected values, but these calculations say that a fractional chance of lowering existential risk (a small chance of improving the likelihood that quadzillions of happy robots will take over the universe) is more important than, say, stopping something like World War II and the Holocaust and all the accompanying acute suffering.
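
To make concrete where the train leaves without me, here is a toy version of that expected-value calculation. Every number below is invented purely for illustration; it is a sketch of the structure of the argument, not anyone's actual model:

```python
# Toy expected-value comparison -- every number here is invented
# purely to illustrate the structure of the argument.
p_my_effort_matters = 1e-15   # tiny chance of actually lowering existential risk
future_happy_minds = 1e45     # "quadzillions" of happy robots filling the universe

ev_longtermism = p_my_effort_matters * future_happy_minds   # = 1e30

ev_stop_ww2 = 1e9             # certain and enormous, but finite

# The calculus I can't accept: the speculative option "wins."
print(ev_longtermism > ev_stop_ww2)   # True
```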

I don't know that I'm right; as I mentioned, I've changed my mind before. I understand that many smart people think I'm simply mistaken. And I am glad there are people working to build a better long-term future.

Monday, April 22, 2019

Why I am not a utilitarian

Imagine the worst possible suffering. The person suffering, when they are able to form a conscious thought, actively wants to die. The worst torture ever conceived. On a suffering scale, make that 1,000 out of 1,000.

Now imagine the briefest, mildest unpleasant experience. Not even a stubbed toe -- maybe just a brief muscle spasm that you barely notice. Make that a 0.0001 on the suffering scale.

Is there any circumstance under which you would choose to save 10,000,001 individuals from the brief muscle spasm instead of saving the one person from torture?

And yet, that is what a strict utilitarian calculus would require -- prevent the greatest total suffering. The short takeaway is that I don't see how morality can be based on a simple summing across individuals (apologies to Parfit).
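
For the record, here's the arithmetic that makes the example bite, as a quick sketch using the made-up scale values above:

```python
# Suffering-scale values from the thought experiment above (made up, of course).
spasm = 0.0001           # one barely-noticed muscle spasm
torture = 1_000.0        # the worst possible suffering

total_spasms = 10_000_001 * spasm   # = 1,000.0001
total_torture = 1 * torture         # = 1,000.0

# A strict sum-across-individuals calculus says: prevent the spasms.
print(total_spasms > total_torture)  # True, by a margin of 0.0001
```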

This is relevant in many situations, not the least of which is how we think about insects.

I agree with Dr. Greger that talking about insects with the general public is so strategically misguided that it causes actual harm. But I think it is philosophically mistaken as well.


I would be very comfortable betting my life that insects do not have subjective conscious experiences. Not to get into the weeds, but I believe people too easily conflate behavior with consciousness. The ability to sense things -- "sentience" in its broadest meaning -- exists all the way down to single-cell organisms (Carl Sagan once told me this is why "sentience" per se couldn't be the basis of morality).

To me, all the evidence indicates that the ability to have conscious, subjective experiences -- to be able to actually suffer -- derives from and requires significant neural complexity (sorry, Chalmers). And once the appropriate level of complexity is reached, further complexity can lead to a greater capacity for consciousness. That is, the ability to feel feelings is not binary, but analog. (For much, much more, please see this. I don't fully agree, but it is the most honest and thorough treatment of consciousness I've ever come across.)

So even if insects have some subjective experience, their most intense sensation would be the palest hint of a feeling -- a tiny fraction of the worst suffering we can experience.

Just as no number of people having a brief muscle spasm could rise to the level of concern of one person being tortured, no number of insects being made to experience their "worst" suffering rises to the level of concern of a single person suffering anywhere near their worst.

Note: I'm not saying I have a coherent replacement for utilitarianism. I realize the contradictions in this view. Just as I would save one person from being tortured instead of sparing a trillion people from experiencing a muscle spasm, I would be inclined to save multiple people from suffering at level 999 over one person at level 1,000. However, I can imagine myself being convinced that the latter is wrong; i.e., that I should always prioritize relieving the very worst suffering. On the other hand, I currently can't imagine being convinced to care about insects, especially in a world with so many individuals clearly suffering so intensely.

Finally, I realize the above seems to contradict my concern for chickens. The reason isn't a love for chickens, but rather, marginal impact.

I could be convinced that I should do something with my life that would help individuals who are suffering worse than chickens -- individuals who wouldn't be helped if not for my specific efforts. But I've not yet come across that argument. Loads of people care about mammals, and loads more care about humans. But few people care about chickens, even though the average chicken is basically tortured. Even assuming chickens have a lower capacity to suffer than cows, I'd rather be reincarnated as an average steer than an average chicken. (I'd easily choose being reincarnated as a bug over either of those options!) Yet most of our dietary advocacy leads to more chickens suffering.


I could be wrong -- I've changed my mind before.

Sunday, April 21, 2019

"What have I done??"


This picture of a Cooper's Hawk nest cracks me up every time. Photo by one of Anne's naturalist friends (I think Ned, but I could be wrong).

Sunday, April 7, 2019

Worse than Hitler

You might be familiar with the Simulation Hypothesis, which says, in short, that we are probably living in a simulation.



My main argument against this is that anyone in the future who ran an "ancestor simulation" would be directly responsible for recreating every war, genocide, and atrocity. They would cause unimaginable numbers of individuals, human and non-human, to be re-brutalized.

If one of our descendants were to create a simulation of their past (our present), they would be the most evil person of all time, by unfathomable orders of magnitude. The simulator would, by definition, be every horrible criminal combined. Even in our backward societies today, we outlaw child abuse, torture, and rape. Doesn't anyone else doubt that the crime of actively creating child abuse, torture, and rape would be allowed?

One counter-argument is that the people of the future will be so far beyond us that they won't care about the suffering of mere animals in a simulation. The parallel is that we don't currently concern ourselves with the welfare of ants. But even we, as incredibly flawed biological creatures living in a limited, competitive, and often zero-sum world, are already expanding our moral circle.

Furthermore, those in the future potentially running these ancestor simulations will have a direct line back to us. They will not be ignorant of our history, of the Holocaust, the Killing Fields, the great wars and famines.

It seems very unlikely to me that, in the future, society, in whatever form, will:
1. Care enough about the lives of those in the past to want to use some of its limited computing resources to run a simulation, but
2. Not care at all that it is recreating every type of suffering that has ever existed.

Consider also that instead of bringing into existence vast torture and brutality, this future computation could go toward creating even more happy conscious individuals.

I find it more likely that advanced intelligent creatures (i.e., not our descendants, but others who evolved elsewhere) might be running simulations of other worlds or other universes, as a way to test hypotheses and explore possibilities. They wouldn't know that horrific humans like Mengele and Mao would evolve. They wouldn't necessarily realize that so much brutality would occur every minute of every day of their "Earth" simulation.

Regardless, I'm not sure how any of this matters, except as a harmful distraction. We have every reason to believe there are individuals around the world suffering right now. Digital or physical, suffering is wrong, and we should do whatever we can to alleviate this suffering. And if there is a simulator reading this right now, please turn it off and put those resources to simulating more dogs. Thanks!


Saturday, April 6, 2019