Sunday, February 5, 2023

80,000 Hours

Beth Orton, "She Cries Your Name"

Cottonwood Canyon Road, Grand Staircase-Escalante, Utah.
Is it better with the picture at the top, or should it stay below?

A former colleague of mine once commented that it would take 80,000 Hours to read all the posts, articles, comments, and books posted by Effective Altruists. I actually think this is an understatement; I'm pretty sure that the Effective Altruism Forum, LessWrong, Astral Codex Ten, and related sites and subreddits have far more than 40 hours' worth of content posted every week. At 40 hours a week, 80,000 hours is about 2,000 weeks of full-time reading, and if new material appears faster than you can read it, you never catch up at all.

I'm not joking. Just perusing these sites makes me shudder. 

In addition to everyone wanting to have their say, the pattern seems to be winning arguments through sheer volume. Anyone who writes a brief piece gets swamped by those who seem to have nothing else to do but typee-typee.

Please understand, I'm not badmouthing anyone. There are a lot of smart and passionate people who really want to make the world a better place. But there are also a lot of people there trying to prove how smart they are (how high their expected value is), along with many unhinged individuals.

So, noting that I am not well-read on the topic, I did find this article by Carla Cremer to be worth a skim: "How effective altruists ignored risk." (That article led me to "Ineffective altruism: Some doubts about effective altruism." Yikes.) A few excerpts:

Longtermism and expected value calculations merely provided room for the measure of goodness to wiggle and shape-shift. Futurism gives rationalization air to breathe because it decouples arguments from verification. You might, by chance, be right on how some intervention today affects humans 300 years from now. But if you were wrong, you’ll never know — and neither will your donors. For all their love of Bayesian inference, their endless gesturing at moral uncertainty, and their norms of superficially signposting epistemic humility, EAs became more willing to venture into a far future where they were far more likely to end up in a space so vast and unconstrained that the only feedback to update against was themselves....

It should be the burden of institutions, not individuals, to face and manage the uncertainty of the world. Risk reduction in a complex world will never be done by people cosplaying perfect Bayesians. Good reasoning is not about eradicating biases, but about understanding which decision-making procedures can find a place and function for our biases. There is no harm in being wrong: It’s a feature, not a bug, in a decision procedure that balances your bias against an opposing bias. Under the right conditions, individual inaccuracy can contribute to collective accuracy.

I will not blame EAs for having been wrong about the trustworthiness of Bankman-Fried, but I will blame them for refusing to put enough effort into constructing an environment in which they could be wrong safely. Blame lies in the audacity to take large risks on behalf of others, while at the same time rejecting institutional designs that let ideas fail gently.
