15 Comments
Dec 31, 2021 · edited Dec 31, 2021 · Liked by Sam Atis

Math/econ here. On Pascal's mugger, our probability that the mugger will keep their promise is allowed to decrease with how much money they promise, so there is no reason the expected return has to increase with the size of the promise.

For the lottery example, we need to distinguish between expected value and expected utility. Even if the expected return on the lottery were positive, almost everyone is risk-averse. That is why you said you would still probably not buy such a lottery ticket.

An effective altruist is already working directly with utilities in their calculations and might not need to make this distinction. However, perhaps the appropriate societal utility function (as a function of each individual's utility) is itself somewhat risk-averse.
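A minimal numerical sketch of that value-versus-utility distinction (the wealth figure, the bet, and the log utility below are illustrative assumptions of mine, not anything from the comment):

```python
import math

# Hypothetical gambler: wealth 100, log utility, offered a fair coin flip
# that pays +120 on heads and costs 90 on tails. All numbers are made up.
wealth = 100.0
p_win, gain, loss = 0.5, 120.0, 90.0

expected_value = p_win * gain - (1 - p_win) * loss   # +15: positive EV
eu_take = p_win * math.log(wealth + gain) + (1 - p_win) * math.log(wealth - loss)
eu_skip = math.log(wealth)

print(f"EV of the bet: {expected_value:+.2f}")        # +15.00
print(f"EU if taken:   {eu_take:.3f}")                # ~3.85
print(f"EU if skipped: {eu_skip:.3f}")                # ~4.61
# A concave (risk-averse) utility can rationally decline a positive-EV bet.
```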

Pascal's wager makes a lot of assumptions in the face of deep uncertainty. As an atheist conditioning on my being wrong, I have no idea what God would want. If I had to guess, God would probably prefer a humanist over a selfish and disingenuous monkey throwing darts at the wall. Similarly, I'm doubtful God would want me to subscribe on the selfish, microscopic chance you put in a good word.

author

>So, there is no reason the expected return should increase with their promise.

This seems wrong to me, unless you argue that the chance the mugger will keep their promise continues to decrease with how much money they promise indefinitely. Is a mugger who claims he has access to another dimension less likely to be telling the truth if he promises ten trillion dollars than if he promises one trillion dollars?


Yes, that is what I'm arguing. As you say, my belief would have to be at least inversely proportional: if the offer goes up by a factor of ten then my belief would have to go down by at least a factor of ten.

(In truth, I have already assessed this mugger to be unhinged, and I don't think they are more than some constant factor more likely than any other random person to give me $x. At this point I'm more worried about getting away from them than about any possible rewards.)
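As a minimal sketch of what that inverse-proportionality assumption does to the expected value (the constant c below is a purely hypothetical credence scale):

```python
# Assume, purely for illustration, that P(mugger pays x) = c / x for a small
# constant c. Then the expected return is pinned at c no matter the promise.
c = 1e-9  # hypothetical baseline credence scale
for promised in (1e6, 1e9, 1e12, 1e15):
    p_pays = c / promised
    ev = p_pays * promised            # always equals c
    print(f"promise ${promised:.0e}: P(pays)={p_pays:.1e}, EV=${ev:.1e}")
# EV stays at c dollars however large the promise gets, so bigger promises
# do not make handing over money any more attractive.
```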

author

I don't think it makes much sense for your belief to be inversely proportional to the amount offered by the mugger.

Dec 31, 2021 · edited Dec 31, 2021

Fair, but then, what would be reasonable beliefs? Like what would your beliefs be?

author

Well I think the expected value calculation is that you ought to give him the money, which is why I'm sceptical that EV is the right way of making this decision.

Dec 31, 2021 · edited Dec 31, 2021

Right, your point is that EV becomes problematic when it comes to infinities. The counterpoint is that you need to specify both a prior and a utility function to do these calculations, and there are many reasonable choices (to me at least) under which I never reach the reductio ad absurdum of giving all my money to a philosophical mugger.

I mean, just consider the vast amount of uncertainty in this situation. Once we suspend our standard view of reality to allow for the possibility that this guy is telling the truth, there are an infinite number of good and bad things that could come out of the interaction.

Jan 9, 2022 · edited Jan 9, 2022

Piggybacking off this comment, I think an analogous argument is that probabilities can become arbitrarily small as well. This allows us to have finite expected values even when the possible benefit grows without bound.
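A standard toy illustration of this (my own example, not the commenter's):

$$p_n = 2^{-n}, \quad v_n = n \;\Longrightarrow\; \mathbb{E}[V] = \sum_{n=1}^{\infty} \frac{n}{2^{n}} = 2,$$

so the possible payoff $v_n$ grows without bound while the expectation stays finite.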

The same thing applies to large finite payoffs paired with extremely small probabilities. E.g., even if your vote has a utilitarian value of (marginal difference in policy quality) x (population size) = very big, your vote only swings the election with probability on the order of 1/(population size), so the EV of your vote is more like (difference in policy quality) x (1 person).
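A back-of-the-envelope version of that voting arithmetic (all of the numbers below are hypothetical placeholders, not estimates):

```python
# Hypothetical election: the huge total stakes are multiplied by a roughly
# 1/population chance of being the decisive vote, and the EV collapses.
population = 10_000_000
policy_gap_per_person = 50.0        # assumed utility gap between candidates
p_decisive = 1 / population         # rough chance a single vote is pivotal

total_stakes = policy_gap_per_person * population   # "very big"
ev_of_voting = p_decisive * total_stakes
print(ev_of_voting)  # 50.0 -- roughly one person's policy gap, not the huge total
```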

Note also that most of the time a really large number can't just be treated as infinite. Genuine infinities are very hard to construct in most circumstances: e.g., if a person promised access to money/gold from another dimension, that still wouldn't be worth infinite money on Earth because of inflation, global currency supplies, scarcity, etc.


> I’m used to using EV and Expected Utility when thinking about what the rational thing to do is [...]

I think the problem lies in *merely* using an EV calculation to determine what the rational decision is. Rather, a better procedure uses both an EV calculation *as well as* a bet-sizing calculation. For an example of a bet-sizing method, see the Kelly criterion <https://en.wikipedia.org/wiki/Kelly_criterion>. [1]

In essence, a decision to act involves a cost, i.e., an expenditure of some resources (often called the "bet size" in discussions about the Kelly strategy, which tend to involve examples about gambling). For example, in the mugger case, the cost is whatever dollar amount you are handing over to the mugger. In the Pascal's wager case, the cost is the utility loss one takes from adopting a Christian lifestyle (if there is such a utility loss). For voting, the cost is the time you take to read up on candidates and go vote.

The important insight is that if you want to maximize the growth of your wealth (or your cumulative net "utils"), there is an optimal expenditure size for each betting opportunity. Pay either more or less than the optimal amount and the expected value of the logarithm of your capital balance [2] goes *down*, even if the individual transaction has a positive expected value! If you are too far off the optimal bet size, the expected log-growth can even turn negative, in which case your capital tends toward zero over repeated bets.
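For reference, the quantity being maximized here is the standard Kelly log-growth rate (textbook form; the notation is mine, not the commenter's):

$$g(f) = p\,\ln(1 + b f) + (1 - p)\,\ln(1 - f), \qquad f^{*} = \arg\max_{f} g(f) = p - \frac{1 - p}{b},$$

where $p$ is the win probability, $b$ is the net payout per unit staked, and $f$ is the fraction of capital bet.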

A simple case demonstrating this would be a positive EV lottery ticket. In the United States, there are a few national lotteries which work on a system whereby if there is no winner during one time period, the prize money is rolled forward to the next time period, increasing the prize amount for the next time. In these types of lotteries, it is not too uncommon for there to be a string of no-winner periods, which eventually results in the prize amount being so large that lottery tickets have a positive EV. Suppose for example that the Mega Millions lottery has accumulated a prize of $1 billion USD, the cost of a ticket is $2, and the chance of winning is 1 in 250 million. The expected value of a ticket is thus $4, twice the cost of the ticket. Most asset classes have ROIs nowhere near 100%, so does this mean that if you have $2 to invest, it should go into buying a lottery ticket instead of into a stock index fund? If you merely compare the one-year-forward EV of any given stock index versus the 100% ROI from a lottery ticket, then the answer appears to be, yes, buy the lottery ticket.

But when you look at the situation with your Kelly-strategy glasses on, you see that the optimal fraction of your wealth to bet, given the parameters above, is 1/250,000,000 - (1 - 1/250,000,000)/500,000,000 ≈ 1/500,000,000. Let's say your net worth is $1 million USD. The optimal bet size is then 1/5th of a penny. (And if, like most people, you have less than $1 million USD, the optimal bet size is even less.) Since the minimum "investment" is a $2 ticket, and tickets cannot be bought in fractional quantities, after rounding to the nearest whole number we see that the rational number of tickets to buy is 0. That is, despite the huge expected ROI for "investing" in a lottery ticket, you should not buy one. A strategy of repeatedly buying lottery tickets under these circumstances produces a negative growth rate for your capital, which eventually leads to bankruptcy with 100% probability in the limit of an infinite sequence of such transactions. (Though under other circumstances, such as a vastly greater initial wealth or the ability to buy up a large fraction of the tickets at once, it could be rational to buy lottery tickets.)
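For anyone who wants to check the arithmetic, here is a small sketch that re-derives those figures with the standard Kelly fraction f* = p - q/b (the numbers are the commenter's hypothetical Mega Millions scenario, not real lottery odds):

```python
import math

# Hypothetical scenario from the comment: $1B prize, $2 ticket,
# 1-in-250,000,000 win probability, $1M net worth.
prize, ticket, p_win = 1_000_000_000, 2, 1 / 250_000_000
q_lose = 1 - p_win
b = (prize - ticket) / ticket              # roughly 500,000,000-to-1 net odds

ev_per_ticket = p_win * prize              # ~$4, twice the $2 ticket price
f_star = p_win - q_lose / b                # ~1 / 500,000,000 of your bankroll

net_worth = 1_000_000
print(f"EV per ticket:  ${ev_per_ticket:.2f}")
print(f"Kelly fraction: {f_star:.2e}")
print(f"Optimal stake:  ${net_worth * f_star:.4f}")   # about a fifth of a penny

# Expected log-growth of actually buying one $2 ticket: negative, as claimed.
f_ticket = ticket / net_worth
g = p_win * math.log(1 + b * f_ticket) + q_lose * math.log(1 - f_ticket)
print(f"Log-growth per $2 ticket: {g:.2e}")           # about -2e-06
```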

Applying this reasoning to the mugger scenario: since the win probability is extremely low, the optimal bet size may well be so small that it isn't reasonable to give them so much as a penny.

And of note to the Pascal's wager scenario: even the promise of an infinite reward does not necessarily increase the optimal bet size past a certain maximum. If there is a non-zero probability of losing the bet, there is no reward large enough to make the optimal bet size your entire capital. (However, I did once hear a well-known preacher say that the proper interpretation of Pascal's argument was that the rewards for Christian living *in this life* were such that it was worth doing even if we could not calculate what would happen to us in the next life. Under this interpretation, the "cost" of the bet is negative. That is, being a Christian is a free lunch, and it is rational to take the bet for that reason. The reader may judge the salience of this view for themselves.)
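In Kelly terms, that cap can be read off the formula above in one line:

$$\lim_{b \to \infty}\left(p - \frac{1 - p}{b}\right) = p < 1 \quad \text{whenever } p < 1,$$

so even an arbitrarily large promised payout never pushes the optimal stake up to your entire bankroll.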

[1] For technical reasons, the Kelly criterion is a bit simplistic and should only be applied in real life with caution. It is nevertheless a great way to introduce oneself to the underlying concept of bet sizing. See the Wikipedia article for more discussion of some nuances.

[2] Or the logarithm of your net cumulative "utils".


Similarly, the utilitarian's sole purpose in life should be maximizing utility by converting people to Christianity in any way possible.

One thing worth mentioning is risk aversion: you could decline an option with positive expected value simply because you are risk-averse. I think effective altruists should be risk-neutral in their charitable investments, and for this reason they should be willing to make extremely risky investments. See here: https://parrhesia.substack.com/p/should-effective-altruists-make-risky
