Sunday, January 23, 2022

What I want from long-termists

I continually rip on longtermists on this blog (e.g., 1, 2). There is one main thing I'd like from that community:

A recognition of sign-uncertainty (aka cluelessness). That is, we don't know whether our actions aimed at the long-term future will have a positive or negative impact. There are plenty of examples, but one involves work on AI. It is entirely possible that by trying to rein in / slow down the development of AI in the United States (e.g., to force researchers to stop and try to address the alignment problem), an unfettered AI from China could get there first and pre-empt every other attempt.

I don't buy the "AI is a threat to humanity" / "AI will be our god" framing. But if you did believe that, it seems really difficult to feel confident that your actions would actually increase the probability of a good outcome.

Also, maybe a regular and overt admission of opportunity costs; e.g., that writing an endless series of million-word essays about a million years from now means you are actively choosing not to help people who are suffering terribly right now.



