In brief, the longtermists claim that if humanity can survive the next few centuries and successfully colonize outer space, the number of people who could exist in the future is absolutely enormous. According to the “father of Longtermism”, Nick Bostrom, there could be something like 10^58 human beings in the future, although most of them would be living “happy lives” inside vast computer simulations powered by nanotechnological systems designed to capture all or most of the energy output of stars. (Why Bostrom feels confident that all these people would be “happy” in their simulated lives is not clear. Maybe they would take digital Prozac or something?) Other longtermists, such as Hilary Greaves and Will MacAskill, calculate that there could be 10^45 happy people in computer simulations within our Milky Way galaxy alone. That’s a whole lot of people, and longtermists think you should be very impressed.
So the question is: If you want to do “the most good”, should you focus on helping people who are alive right now or these vast numbers of possible people living in computer simulations in the far future? The answer is, of course, that you should focus on these far-future digital beings.
Phil Torres
In the article I linked previously, the author writes at one point that he believes Elon Musk wants to do some flavor of good. And here we have some clues as to what Musk considers ‘good’ for the future of humanity. I would take this with a grain of salt, as we can’t definitively know what his plans and expectations are, but it seems plausible considering the kinds of projects he’s involved in.
This doesn’t mean we should entirely neglect current problems, as the longtermists would certainly tell us, but in their view we should help contemporary people only insofar as doing so will ensure that these future people will exist. This is not unlike the logic that leads corporations to care about their employees’ mental health. For corporations, people are not valuable as ends in themselves. Instead, good mental health matters because it is conducive to maximizing profit, since healthy people tend to be more productive. Corporations care about people insofar as doing so benefits them.
For longtermists, morality and economics are almost indistinguishable: Both are numbers games that aim to maximize something. In the case of businesses, you want to maximize profit, while in the case of morality, you want to maximize “happy people”. It’s basically ethics as capitalism.
Musk has explicitly said that buying Twitter is about the future of civilization. That points to his peculiar notion of philanthropy: the idea that no matter how obnoxious, puerile, inappropriate or petty his behavior — no matter how destructive or embarrassing his actions may be in the present — by aiming to influence the long-term future, he stands a chance of being judged by all those happy people in future computer simulations as having done more good, overall, than any single person in human history so far. Step aside, Mahatma Gandhi, Mother Teresa and Martin Luther King Jr.
This vision for the future of the human race is relatively common in science fiction; one of the best examples is Greg Egan’s Diaspora. The thing is, in Diaspora, becoming virtual was a refuge from the environmental calamities on Earth: not a decision taken freely or lightly, but a last resort to ensure the survival of humanity. To me, this signifies that humanity failed to think long-term and large-scale; otherwise it would have prevented a climate catastrophe, and biological and cybernetic humans could have coexisted and pursued alternate paths of development.
The notion that Musk’s actions today will bring about this longtermist Eden is extremely arrogant and egotistical. If we’re talking about a distant future, then by definition no one currently alive can know the ultimate outcome of their actions, so we cannot judge people by this hypothetical compounded ‘future good’, but only by the consequences of their actions during our lifetimes. Even if this pan-galactic virtual civilization eventually emerges, its members will have so little in common with present-day humans that we cannot presume to guess what they will consider ‘happiness’ or how they will judge the actions of their distant ancestors. If we as a species make it that far, it’s entirely possible that most historical records of the 21st century will have been lost during the (unavoidable) upheavals on the way there. Maybe Musk should next invest in a disaster-proof bunker in a remote location to preserve his memoirs…