Math/econ here. On Pascal's mugger, our probability that the mugger will keep their promise is allowed to decrease with how much money they promise. So, there is no reason the expected return should increase with their promise.
For the lottery example, we need to distinguish between expected value and expected utility. Even if the expected return on the lottery were positive, almost everyone is risk-averse, which is why you said you would still probably not buy such a lottery ticket.
An effective altruist is already working directly with utilities in their calculations and might not need to make this distinction. However, perhaps the appropriate societal utility function (as a function of each individual's utility) is itself somewhat risk-averse.
Pascal's wager makes a lot of assumptions about uncertainty. As an atheist conditioning on my being wrong, I have no idea what God would want. If I had to guess, God would probably prefer a humanist over someone who selfishly and disingenuously picked a religion like a monkey throwing a dart at the wall. Similarly, I'm doubtful God would want me to subscribe on the selfish, microscopic chance that you put in a good word.
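To make that value/utility distinction concrete, here is a minimal sketch with made-up numbers (the wealth, ticket price, prize, and win probability below are my own choices, not anything from the thread): a lottery ticket with positive expected value that a risk-averse, log-utility agent still declines.

```python
import math

# Made-up numbers: a lottery ticket with positive expected value.
wealth = 10_000          # hypothetical current wealth
ticket_price = 10
prize = 50_000
p_win = 0.00025          # chosen so the ticket's expected value is positive

ev = p_win * prize - ticket_price
print(f"expected value of buying: {ev:+.2f}")   # +2.50, so a pure EV-maximiser buys

# A risk-averse agent with log utility still declines.
u = math.log
eu_buy = p_win * u(wealth - ticket_price + prize) + (1 - p_win) * u(wealth - ticket_price)
eu_skip = u(wealth)
print(f"expected utility change from buying: {eu_buy - eu_skip:+.6f}")  # negative, so decline
```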
>So, there is no reason the expected return should increase with their promise.
This seems wrong to me, unless you argue that the chance the mugger will keep their promise continues to decrease with how much money they promise indefinitely. Is a mugger who claims he has access to another dimension less likely to be telling the truth if he promises ten trillion dollars than if he promises one trillion dollars?
Yes, that is what I'm arguing. As you say, my belief would have to be at least inversely proportional: if the offer goes up by a factor of ten then my belief would have to go down by at least a factor of ten.
(In truth, I have already assessed this mugger to be unhinged, and I don't think they are much more than some constant factor more likely to give me $x than another random person. At this point I'm more worried about getting away from them than about any possible rewards.)
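As a toy illustration of that "at least inversely proportional" point (the decay rates and the constant below are invented for the example, not anyone's actual prior): if my credence that the mugger pays $x falls at least as fast as 1/x, the expected return stops growing no matter how large the promise gets.

```python
# Toy priors, purely illustrative; the constant c and the decay rates are my own inventions.
def credence_inverse(x, c=1e-3):
    """Credence that the mugger actually pays $x, falling exactly like 1/x."""
    return min(1.0, c / x)

def credence_faster(x, c=1e-3):
    """Credence falling faster than 1/x (here like 1/x**1.5)."""
    return min(1.0, c / x ** 1.5)

for promise in (1e6, 1e9, 1e12):
    ev_inv = credence_inverse(promise) * promise    # constant at c: the EV never grows
    ev_fast = credence_faster(promise) * promise    # shrinks as the promise grows
    print(f"promise ${promise:.0e}: EV with 1/x prior = {ev_inv:.1e}, with faster decay = {ev_fast:.1e}")
```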
I don't think it makes much sense for your belief to be inversely proportional to the amount offered by the mugger.
Fair, but then, what would be reasonable beliefs? Like what would your beliefs be?
Well I think the expected value calculation says that you ought to give him the money, which is why I'm sceptical that EV is the right way of making this decision.
Right, your point is that EV is problematic when it comes to infinities. The counterpoints are that you need to specify both a prior and a utility function to do these calculations, and there are many reasonable ones (to me at least) under which I don't reach the reductio ad absurdum conclusion of giving all my money to a philosophical mugger.
I mean, just consider the vast amount of uncertainty in this situation. Once we suspend our standard view of reality to allow for this guy maybe telling the truth, there are an infinite number of good/bad things that could happen from this interaction.
Seems like large finite numbers will do the trick rather than just infinities. Utility function seems like it would work for the mugger example ordinarily, but if we specify that the only goal of the scenario is to maximise money, does it still work?
Yes, we could certainly construct beliefs and utility functions such that we'd pay the mugger. My point is that we can also very clearly construct them such that we don't. It's not necessarily a problem with expected utility theory but with our choice of priors.
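One concrete construction along those lines (the bounded utility function, wealth level, and credence below are my own toy choices, not the commenter's): pair any bounded utility with a small fixed credence that the mugger pays at all, and handing over the wallet never wins, however large the promise.

```python
import math

# Toy bounded utility over wealth, u(w) = 1 - exp(-w / SCALE); it never exceeds 1,
# so even an astronomical promise can add at most p_pay * 1 in expectation.
SCALE = 1e5                 # made-up curvature parameter
def u(wealth):
    return 1.0 - math.exp(-wealth / SCALE)

wealth = 10_000.0           # made-up starting wealth
demand = 50.0               # what the mugger asks for
p_pay = 1e-6                # fixed, tiny credence that the mugger pays anything at all

for promised in (1e6, 1e12, 1e100):
    eu_pay = p_pay * u(wealth - demand + promised) + (1 - p_pay) * u(wealth - demand)
    eu_keep = u(wealth)
    print(f"promise {promised:.0e}: paying is better? {eu_pay > eu_keep}")
# Prints False every time: the capped upside never beats the sure loss of the $50.
```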
I think the point about uncertainty is a critical one. We are asked to calculate conditional on our being completely wrong about how reality works, and the point is that we have no idea how to form probabilities there. Given that our basic understanding of reality is approximately correct, I'd assign literally 0 probability to the mugger giving me a quadrillion dollars.
Breaking it up into these two cases, I can say what the right decision is in case 1, and in case 2 that I have no concept of what the right decision is. Knowing only this, my decision under said uncertainty would be to run from the mugger.
I accept that if you assign literally zero probability, then it isn't worth doing, but I'm not sure I'm willing to say the chance is literally zero. Interesting points though.
Fyi, you'd enjoy the two envelopes paradox.
I'm saying that in case 1, where I know how to compute the probability (the mugger is offering me a quadrillion dollars of value through a dimensional portal, and I assume dimensional portals don't exist), it's literally zero. I'm not saying the overall probability is literally zero.
Piggybacking off this comment, I think an analogous argument to this is that probabilities can become infinitely small as well. This allows us to have finite expectation values even when the benefit is infinitely valued.
The same thing applies to large finite payoffs with extremely small probabilities. E.g., even if your vote would have a utilitarian value of (marginal difference in policy quality) × (population size) = very big, your vote only swings the election with probability of roughly 1/(population size), so the EV of your vote is more like (difference in policy quality) × (1 person); a quick numeric sketch follows below.
Note also that, most of the time, really large numbers can't just be treated as infinite. Infinities are very hard to construct in most circumstances; e.g., if a person promised access to money/gold from another dimension, that still wouldn't be worth infinite money on Earth because of inflation, global currency supplies, scarcity, etc.
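To spell out the voting back-of-the-envelope above (illustrative numbers only, not real estimates of any election): the population terms cancel, leaving roughly one person's worth of policy difference.

```python
# Illustrative numbers only, not real estimates: the population terms cancel.
population = 10_000_000
policy_gain_per_person = 100.0        # dollars of value per person if the better policy wins
p_pivotal = 1.0 / population          # rough chance that one vote swings the outcome

total_benefit = policy_gain_per_person * population   # "very big": $1,000,000,000
ev_of_voting = p_pivotal * total_benefit              # back down to $100
print(ev_of_voting)                                   # 100.0: one person's policy difference
```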