
Friday, December 12, 2025

A Meaningful Life for the Super-Smart (Part 3)

(Part 1, Part 2)

Are you smarter than all the yutzes around you, but not sure how to focus your life? 

Have I got a deal for you!

Here’s the plan, which I've watched many people follow, and which is endorsed by some of the most praised people online:

Step 1: Look at all the unnecessary and acute suffering in the world, including, but not limited to: 

Ignore all that. 

Only normies care about those obvious bummers. You are so much smarter than them!

Instead, Choose Your Own Adventure:

A. Spend your time and resources on something you will have no impact on, like AI or future robots. Or wild animals. [More on the latter next week.]

B. Spend your time and resources on something that has no actual positive impact in the world. Like spreading a specific religion. Or microscopic bugs on our skin or other bugs and nematodes. (BTW, the human digestive tract alone has a million times more neurons than a nematode.)

But wait! There is so much more!

Step 2: Learn the Pavlovian language of other people who are also so much smarter (and not distracted by ever having really suffered): expected value, Bayesian priors. That will prove how brilliant and objective and rational you are, unlike the hypocritical, biased sheeple.

And if you have to … bend the truth a bit … that’s OK. You can lie about shrimp; your expected value is just so big! And you get so many clicks and so much love from other super-smart people!

But the fun doesn’t stop with just taking time and resources from the non-math causes!

Step 3: Make sure the masses know just how much smarter you are!

Advocate bombing data centers. Say it is immoral to wash your face. Equate the exploitation of bugs with confining a calf in a crate for his entire brief and brutal life. Insist on following the math wherever it goes, all the way to prioritizing electrons.    

Vox's Dylan Matthews illuminated your path years ago:

The common response I got to this was, "Yes, sure, but even if there's a very, very, very small likelihood of us decreasing AI risk, that still trumps global poverty, because infinitesimally increasing the odds that 10^52 people in the future exist saves way more lives than poverty reduction ever could."

The problem is that you could use this logic to defend just about anything. Imagine that a wizard showed up and said, "Humans are about to go extinct unless you give me $10 to cast a magical spell." Even if you only think there's a, say, 0.00000000000000001 percent chance that he's right, you should still, under this reasoning, give him the $10, because the expected value is that you're saving 10^32 lives. Bostrom calls this scenario "Pascal's Mugging," and it's a huge problem** for anyone trying to defend efforts to reduce human risk of extinction to the exclusion of anything else. 

**That is not a “problem”! It is a feature, not a bug!

As the normie writer Michael Lewis put it in Going Infinite:

One day some historian of effective altruism will marvel at how easily it transformed itself. It turned its back on living people without bloodshed or even, really, much shouting. You might think that people who had sacrificed fame and fortune to save poor children in Africa would rebel at the idea of moving on from poor children in Africa to future children in another galaxy. They didn’t, not really—which tells you something about the role of ordinary human feeling in the movement. It didn’t matter. What mattered was the math. 

And voila! In three easy steps, you will have become a modern-day useful idiot, living a life filled with super-smart math, not squishy “feelings”! Abstractions are so much more pure and neat than trying to impact the chaos of the real world. 

Bonus: you will have endless opportunities for community with other mathletes. You’ll never run out of words to write! Now you need to get the rest in line.

Tyson and Trump and TB will thank you.
