There are a ton of interesting challenges to utilitarianism, and I thought there might be some value in putting them together in one place. Most of the ‘classic’ challenges aren’t ones that I find particularly troubling. Take the ‘transplant problem’:
I will say, I think the 'status quo bias' objection to the experience machine argument relies on a sleight-of-hand, making asymmetric situations out to be symmetric. Of course, some people really would exit the experience machine if asked, even knowing that suffering awaits them: they have a felt need for freedom or truthfulness experienced as an intrinsic value. (For what I take to be a good illustration of this basic idea, see the behaviour of the character Maeve in Westworld Season 1: although her life within the 'machine' is certainly not one of pure bliss, it's still illustrative of what the basic motivation might be.) But that isn't my primary concern: I think that, even if you assume that you shouldn't exit the machine, you can consistently and without status quo bias say that you shouldn't enter it either.
If you are told 'you are already in the experience machine, would you like to come out?', you'd be emerging into a world you'd never lived in, where you'd never built relationships or had concrete projects or goals or aspirations. You'd have never related to this world at all; the only world you care about would be the one inside the machine. By contrast, if someone comes up to you and says 'you live in reality, but would you like to enter the machine?', you _do_ have a connection to this world. You have a family, you probably have friends, you probably have goals and dreams; while you also potentially have a lot of suffering, you might choose not to sacrifice your 'categorical desires' (as they're called) to get rid of it. But if you're already in the machine, you probably don't _have_ any categorical desires, at least as ordinarily understood.* As such, the situations are asymmetric. If I were told that I was in the experience machine, I probably wouldn't leave, because nothing of particular value to me would be waiting in the real world. But it's not valid at all to infer that, rationally, I should get in the experience machine if given the choice, because there _are_ things of particular value to me in this world. This isn't status quo bias: it's a rational response to my existing patterns of desire and value.
Nozick's objection to utilitarianism _just is_ that it takes these asymmetric situations to be symmetric. Things like relationships, or being-in-the-world, or projects and aspirations, simply cannot be represented in classical utilitarianism except insofar as they might cause positive or negative affect; but as a matter of fact they are _intrinsically_, not just instrumentally, relevant to our ethical decisions. The entire point of the experience machine argument is to point out that classical utilitarianism cannot understand the importance of categorical desires. The status quo bias response, which assumes that the two situations are symmetrical and thus ignores the relevance of categorical desire, does not refute the argument: it strengthens it.
*You likely would have some analogue of categorical desires, desires directed towards the world of the machine that are not conditional on your presence in the machine (as opposed to desires directed towards the real world that are not conditional on you being alive). But this would make the situation even more asymmetrical, and only strengthen my point, by providing reasons to stay in the machine that do not correspond with reasons to enter it.
I guess most people think that the point of charity is to build a good society. From that point of view, if people think a society with animal shelters is a good society, giving to animal shelters is entirely correct.
Utilitarians disagree. They think the point of charity is to build not a good society but a good world. There is one big problem with this idea: the world consists of different societies, and those societies never have entirely peaceful intentions towards one another. If most peoples try to build good societies while a few try to build a good world, those few risk being overrun by the good societies.
I escape both the Repugnant Conclusion and the St Petersburg Paradox by not assigning any positive value at all to the creation of a new life, however good that life is expected to be.
I feel pretty comfortable about this, although I believe it does lead to other unintuitive conclusions, and I *do* ascribe negative value to the creation of lives of suffering, which looks a bit inconsistent.
I'm with you on the organ donor one, I think (in principle, yes; in virtually any practical situation that looks a bit like this, clearly no); I feel uncomfortable about the simulation one, but do believe my discomfort is irrational.
I don't really have a solution for Pascal's Mugging, though.
Excellent summary of some serious objections to utilitarianism. I have one thought on the St. Petersburg paradox, though I admit I have not thought it through carefully. If time is finite, then the number of times the button can be pressed is finite, so there is never a guarantee that the universe will be destroyed, and pressing as many times as possible does yield some chance of immense value. This seems acceptable to me. If time is infinite, then the universe will eventually be destroyed. Still, the expected rate of value accumulation at any given finite time is greater with more button presses: it's (rate of utility accumulation given no presses at time t) × 2^n × 0.51^n, where n is the number of presses by time t. Especially since both pressing and not pressing may result in infinite net utility in this case, I have no idea what to make of it. Infinities in moral reasoning in general make me squirm; maybe I need to read more about infinite ethics.
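A quick sketch of the arithmetic above, assuming (as the formula does) that each press doubles the universe's value with probability 0.51 and destroys it with probability 0.49. The expected multiplier on the rate of value accumulation after n presses is 2^n × 0.51^n = 1.02^n, which grows without bound even as the chance of survival shrinks towards zero:

```python
def expected_rate_multiplier(n: int) -> float:
    """Expected multiplier on the rate of utility accumulation after n presses.

    Each press doubles value with probability 0.51, so the expectation per
    press is 2 * 0.51 = 1.02, compounding to 1.02^n over n presses.
    """
    return (2 * 0.51) ** n


def survival_probability(n: int) -> float:
    """Probability the universe still exists after n presses."""
    return 0.51 ** n


# The tension: expected value rises geometrically while survival probability
# falls geometrically.
for n in (0, 1, 10, 100):
    print(f"n={n:>3}  E[multiplier]={expected_rate_multiplier(n):.4g}  "
          f"P(survive)={survival_probability(n):.4g}")
```

This is just the usual St. Petersburg shape: the expectation recommends pressing forever, even though doing so guarantees eventual destruction in the infinite-time case.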
All meta-ethical traincars end in a train wreck near Reductio station. Your best bet is to mind the is-ought gap on your way out, and respect the other passengers. 😉
Here I advocate going all the way to crazytown (with the sort-of exception of the St Petersburg thing). https://benthams.substack.com/p/going-all-the-way-to-crazy-town
One thing worth noting: those paradoxes and the like arise for every satisfiable axiology. Everyone thinks there's a prima facie duty to promote utility -- thus this is just as much a problem for anyone else as for the utilitarian.