Ends Don't Justify Means (Among Humans) (lesswrong.com)
45 points by ektimo on Jan 9, 2011 | 61 comments


The ends are the only way to justify the means; what the sloppy thinkers who recite "The ends don't justify the means" really seem to mean is that some means are not justifiable. Which is undeniably true, but because they don't really think their position/slogan through, they often end up gathering a bunch of nonsense along with it.


You're right. Slaughtering 20 junior school students and a crossing guard is not an acceptable means to get your daughter from school to her dentist appointment on time.

It is however acceptable to kill 20 enemy combatants to rescue a dignitary from a besieged embassy.


While this helps get the point across, your example doesn't provide comparable events. The distinction is in weighing the results of actions against the actions themselves. Is taking one life justified to save another? Is it measured by the greater good, such that the life that is saved will provide greater contributions than the one that is taken? Or is it measured in that all life is inherently to be treated as equal? And if all life is treated as inherently equal, is it a numbers game, where taking 100 lives is justified so long as it saves 101 or more?

The phrase "The ends don't justify the means" is not hyperbole to discredit accomplishments, nor is it meant to remove the ends from the equation. It changes the equation to provide the necessity of weighing the ends against the methods rather than measuring the ends independent of the context used to accomplish them.

What is even more interesting is the opposite. "The ends not justifying the means" is typically used for weighing the actions necessary for a favorable outcome. If looked at from the opposite perspective (do the means justify the ends?), it presents a more difficult scenario. If X people die, but only appropriate means were used (and other means would have saved those X people), should those who chose the appropriate actions be held responsible for the failure to provide a favorable outcome? Or should they be heralded for making the difficult decision to use only morally acceptable practices?


>"It is however acceptable to kill 20 enemy combatants to rescue a dignitary from a besieged embassy."

Even if the dignitary was Von Ribbentrop?

http://en.wikipedia.org/wiki/Joachim_von_Ribbentrop

Semi-deontological ethical rules suffer the same problems as deontological ethical rules.


Germans (or at least Nazis) would, at the time, have considered it to be entirely worth it. I believe that the intended context was that the dignitary was a member of the same organization/country as the people trying to rescue him/her, while the enemy combatants would be from a group opposed to that country. Your response, on the other hand, seems to suppose the existence of a third party (the dignitary, enemy combatants, and in addition a group which dislikes the dignitary and knows that he's a horrible person).


Although you are correct that I am assuming a third party, it is only in the role of judging the described actions as just or unjust that a third party is assumed...namely the gentle reader - a necessary assumption in any discussion of hypothetical ethical scenarios as the example of the school children illustrates.

There is a convention to these things - one doesn't assume that the dead school children were carriers of an incurable virus and that only their death prevented a deadly pandemic etc.

As I am sure you recognize, a patriotic motivation does not make an action right - even if your assumption that the rescuers and rescued share political affiliation might often lead one to assume you hold such a belief.


Deontological ethics are hierarchically ordered. If one must choose between a lesser and a greater good, one must choose the greater good. But that doesn't make the action ontologically right.

I.e. stealing qua stealing is always wrong, but it is right to steal to feed a starving family, if stealing is necessary. However, stealing is still wrong.


>"it is right to steal to feed a starving family, if stealing is necessary"

This slips into what I called "semi-deontological" ethics, which has the same issues as deontological ethics, i.e. that it is possible to provide a counter-example for any generic ethical rule. Until you get down to an actual example one may invoke Nazis, fatal pandemics, or lawyers tied to railroad tracks.


I don't think that while in war you weigh the enemies' lives (especially the combatants') the same way you weigh those of your own side. At least not until you approach anywhere close to a massacre of enemy combatants.


No, when people say the ends don't justify the means, they mean that even if certain means lead to a very good outcome, they should still never be done. So, these means could be justified in some sense, yet they are still unacceptable.


The objection he is making is essentially the objection a rule-consequentialist would make: although in this particular situation it would produce better consequences to kill the innocent man and save the 5 people, overall it would produce worse consequences if everyone followed the rule that killing innocent people is an acceptable means to some end.


I think it's a bit more subtle than that, although he does point out the danger of imitators. See my reply at http://news.ycombinator.com/item?id=2088562.


I'm not sure.

Here is the rule: "Do no harm."

Why should we follow it?

A rule-consequentialist would say, "Because if everyone followed that rule all the time it would lead to better consequences, on average, than if people tried to calculate the consequences of each individual action and act to maximize them."

Why would that lead to better consequences?

A rule-consequentialist could say, "Because people are quite bad at calculating consequences, especially in the tumultuous time before making a critical ethical decision. While we have had the time to think and properly calculate the hypothetical consequences of everyone following the rule."

I think that's exactly the argument Eliezer is making.


Put like that, I agree - I had a different meaning of "rule-consequentialist" in mind than you, apparently. (I'm not too familiar with the vocabulary of such discussions, sorry!) Thanks for clarifying.


Don't worry; like any academic field, philosophy is filled with jargon, but knowing the jargon doesn't make you any smarter and not knowing it doesn't make you any dumber.

Both you and Eliezer made good arguments and points, they just happened to already have a name in the philosophical jargon.

If you're interested, the Stanford Encyclopedia of Philosophy at plato.stanford.edu is the greatest repository of philosophical knowledge on the Internet.

They have an article about rule-consequentialism: http://plato.stanford.edu/entries/consequentialism-rule/


Too much faith in 'friendly AI'. Friendly my ass! Doesn't this guy consume science fiction?

Seriously, I would extend his argument to assume that all 'hardware is corrupted' as he puts it, though I think it's more accurate to say 'all software is buggy'. For this amazing AI to even exist, it probably had to be pretty damn selfish, pushing out all other potential AIs to get to the top of the heap.

Just saying.


But people do make these decisions all the time. Closing watertight doors on a warship? The first responders at Chernobyl? Kamikaze?

Not only do we condemn other innocents to death, but also ourselves.


This is an incredible stretch of what I'd take "The Ends Don't Justify The Means" to be. The whole "killing one person to save ten" thing really isn't related.

The way I see it, it's about externalities. I push someone onto a train track to save ten people in a mineshaft, fine. Somebody else watches me push someone on a train track, doesn't see the mineshaft, and goes on to push 25 people onto train tracks in the future. Perhaps that's contrived, but when he describes coups in the same terminology it's a lot less so.


So let me get this straight.

The author is saying that his answer to the classic hypothetical dilemma in ethics of "is it right to do harm (murder one) to prevent a greater harm (death of five)", which is usually framed in utilitarian-ish debates, is:

a hypothetical incorruptible super awesome version of me would murder the innocent person but as I am not that type of being (and only that type of being can answer the question) I am not going to answer the question

But why not go all in and pose it thus: "is it better for one person to suffer eternal suffering to free all others from any type of suffering"? And that's why Jesus died for our sins you know. And that really happened. And Jesus was smarter than a hypothetical incorruptible super awesome version of you. Therefore ... I'm not sure where this is going.

Maybe what I'm trying to say is (and by all means argue the toss with me and don't shoot me down) Eliezer Yudkowsky sounds a lot smarter than he actually is.


That is not an adequate summary of the point. The point that you are addressing is that if we are to truly take this as a question about "what should a real person do?", the question can be rephrased without loss as "You are a person standing before the track and you know with 100% certainty that if you flip the switch that one person is 100% likely to die and if you do nothing that five people are 100% likely to die." and his response is that it isn't even possible in theory for a person to know these things with 100% certainty. The key phrase is "I can't occupy the epistemological state you want me to imagine".

I would also draw your attention to the first sentence of the next paragraph: "Now, to me this seems like a dodge." This isn't the core point of the essay, and the more I stare at it, the more it does seem like it's five paragraphs accidentally ripped from another essay ("And now the philosopher comes" -> "Now to me this seems like a dodge"); if you just cut those five out entirely it seems more focused, and those five paragraphs can spin off into another interesting essay. (One that would, I think, conclude that this is actually just a way of rephrasing the idea that philosophical hypotheticals are actually useless by virtue of being impossibly overspecified which itself comes from impossible oversimplification, and in general the hypothetical question "What if an absolutely mathematically impossible thing happened?" is not a fruitful line of thought.)


Great response.

I did not highlight the distinction that in this class of hypotheticals the lesser harm requires action and the greater harm requires one to simply do nothing, to stand idly by as it were - oh my god, I can hear the voice of my prof in my head from days gone by as I clarify this point. Still, the causal link remains: one can either choose to act or not (or insist that you cannot even begin to play the game, as was done here). But apart from that action/inaction subtlety I have to disagree with you here; it is an accurate, if not entirely straight-faced, summary.

Look, you might have a good working understanding of "can't occupy the epistemological state" (oh really, why? because I haven't achieved the level of perfection of my future hypothetical self) but I find it fairly meaningless. Hint: substitute epistemological with ethical or even aesthetical to see if such an assertion becomes any more meaningful. Note: I am not saying that I am positioning myself against the "you can't even begin to play (or, I'm not playing) the game" stance or some variation thereof, as my response to this dilemma would probably be something along these lines, given my aversion to hypothetical thought experiments such as this, which I feel contribute very little to the debates in morality and ethics.

"This isn't the core point of the essay." But this is not the case, surely. The essay makes many points, sure, but this chain of reasoning is, I believe, fairly central, and although it could be excised I believe that the author formulated the whole essay this way for a reason. This post-singularity being's properties are analysed in the light of a very classic problem in philosophy. If you look at the comments you will see that a poster points out that "regular" philosophers invoke mythical beings such as 'angels' or 'ideally rational agents', which are non-tech versions of what is going on here. I don't think it's a dodge, it doesn't even seem like a dodge, and the author didn't even need to point this out. Where I'm coming from is that this ground has been covered, and it has been covered in language that is not obfuscated. The jargon salad does nothing more than communicate "look at me, I'm so clever", which is why I claim that the author sounds smarter than he actually is.

"just a way of rephrasing the idea that philosophical hypotheticals are actually useless by virtue of being impossibly overspecified which itself comes from impossible oversimplification" This would be something a logical positivist would say. It's something I'm very inclined towards. I agree that hypotheticals like this generate a good amount of noise and heat, but they fail to be constructive or advance our understanding of ethical questions beyond perhaps showing what ethical norms a person subscribes to, to wit: all life is sacred and one is commanded by a supreme being to do no harm, all life has intrinsic worth/value so you shall never through action do harm, you shall optimize for the greater good, and so on and so on.


The debate has gone on great without me, but the only thing I would point out and verify is that in the context of Eliezer's writing, as was pointed out, the "epistemological state" is definitely going to be the assigned probabilities the entity is carrying around internally for Bayesian updates. He may not have spelled that out this time, but where the term might be fuzzy in other people's hands I do feel like I know fairly precisely what he means, and where the fuzziness may be isn't relevant to today's debate. (Also, I'm just alluding to the fact that I think I know what it means; this is not itself an explanation, just a labeling.)


Heh, welcome back :)

Okay - then I need to brush up on this Bayesian stuff; care to point me in the right direction? I only have undergrad-level maths, so nothing too tricky please! I really haven't meant to raise anybody's hackles here.



> but I find it fairly meaningless

Why? I don't find your hint helpful; I have a pretty good idea what an epistemological state is but no idea what an "ethical state" or "aesthetical state" would be.

The meaning seems perfectly clear to me: human nature, with all its cognitive biases and imperfections of memory and perception and limited thinking speed and imagination and so forth, makes it very unlikely that you will ever really be in the situation of having to choose between definitely-for-certain killing one person and definitely-for-certain letting one die, with definitely-for-certain no other options.

Now, that isn't (as jerf already pointed out) EY's actual argument, it's a hypothetical argument he put in someone's mouth and described as a dodge. He is, I think, endorsing something along the same lines:

It is unlikely that you will ever be in such a situation and, empirically, situations at all like that are very rare. So quite likely, even if you think you are in such a situation the probability that you actually are is low. On the other hand, you're extremely likely to encounter situations where you have the opportunity to harm people while convincing yourself you're doing good overall.

Accordingly, it may very well be that net expected utility is optimized by having you follow principles like "the end doesn't justify the means", even when it seems to you that you're in an exceptional case where you shouldn't. In other words, consequentialists should sometimes behave like deontologists.

But some hypothetical superintelligence (EY isn't AIUI talking about a perfected future hypothetical self, by the way) might well have a much better ability to tell what situation it's in and what options it has, and much less tendency to be corrupted and self-deceiving in the same ways as we are. If so, it would not be appropriate for it to operate on the principle that the ends don't justify the means -- at least, not if the ultimate goal is to maximize net expected utility. Consequentialist AI-designers might not do best to program their AIs to act like deontologists.

I have no idea what makes you think that EY's aim is to say "look at me, I'm so clever"; for what it's worth, I find his argument clearer and less word-salad-y than yours. (Especially the weird paragraph about Jesus.)

And no, your purported summary is not accurate, for at least the following reasons: (1) EY didn't say anything about hypothetical future versions of himself, and (2) as jerf pointed out EY said in so many words that the "I refuse to answer your question because I couldn't in that epistemological state" response is "a dodge" and that one ought to have better answers to such questions. (Though he does think -- as AIUI you do too -- that this specific question may not deserve a more serious answer.)


If epistemological state means anything at all it means (the possibility of) being able to hold a certain belief or acquire certain knowledge. It is as meaningful or as meaningless as using the phrase "ethical state" or the phrase "aesthetical state" and I stand by that claim.

If EY had meant "human nature, with all its cognitive biases and imperfections of memory and perception and limited thinking speed and imagination and so forth, makes it very unlikely that you will ever really be in the situation of having to choose between definitely-for-certain killing one person and definitely-for-certain letting one die, with definitely-for-certain no other options", then maybe he should have spelled all that out, don't you think?

I reassert that very little new was said in this article and what was said was wrapped in a ton of verbiage.

He says (and I paraphrase): take this hypothetical utilitarian dilemma, then imagine this being that is qualitatively different from you or me. I imagine the being would respond thus owing to its special ability, but as I am not worthy of a micron of its circuitry I would have to choose otherwise, as I do not have this special-ness. And he goes on to say: this so happens to coincide with the old maxim "the end doesn't justify the means", but I'm not saying that this is an intrinsic law or anything and I certainly wouldn't constrain our robotic overlords to it; they may very well judge it right to sacrifice one person now to save many later, and I'd go along with that.

He might be saying "It is unlikely that you will ever be in such a situation and, empirically, situations at all like that are very rare", as you suggest, but then again that does not seem to jibe with what he actually says: "think the universe is sufficiently unkind that we can justly be forced to consider situations of this sort" and: "But any human legal system does embody some answer to the question 'How many innocent people can we put in jail to get the guilty ones?', even if the number isn't written down."

What is AIUI by the way? And I know I say future perfect hypothetical self at times and perfected other being at times but it doesn't alter what I'm saying - you'll grant that a superintelligence could theoretically maybe possibly fold all the remaining meat-machines into itself (don't you?), at least that appears to be one claim of singularity-types. Oh, I also find most of the singularity arguments compelling just in case you think I'm against super AIs or anything.


AIUI = "as I understand it". Sorry for any confusion.

> If epistemological state means anything at all ...

It is clear (to me, anyway) that by "epistemological state" Yudkowsky means "state of beliefs and knowledge" rather than what you say is the only thing it can possibly mean. Why do you think the only thing it could mean is what you state?

(I think he should have said "epistemic" rather than "epistemological".)

> If EY had meant ... then maybe he should have spelled all that out

Maybe. But what he wrote was pretty long already, and "since I am running on corrupted hardware" (which is what EY did write) amounts to much the same thing. There's nothing a writer can do to guarantee that every single reader will understand correctly.

> I reassert that very little new was said in this article

So you do. But you're reading only a portion of it; you make claims about its overall purpose which are clearly contradicted by the article itself (hint 1: "to me this seems like a dodge"; hint 2: "I now move on to my main point", followed by a statement of that point which is not anything like "how can I best respond to trolley problems?" or "our robotic overlords will be vastly superior to ourselves"); you ignore large parts of it altogether. Why should anyone care whether, treating it thus, you find anything new in it?

> and what was said was wrapped in a ton of verbiage

Well, yes, Yudkowsky is not the most concise writer in the world. I think that may be partly because he's found that being terser gets him misinterpreted more often. From your consistently inaccurate paraphrases and summaries here, it seems to me that his main problem probably wasn't excess verbosity.

> that does not seem to jibe with what he actually says: ...

Situations where you're in the sort of epistemic position described in trolley problems are very rare. Situations where you can, and maybe should, harm some people to benefit others are not so rare.

I dare say there are ways in which a superintelligence could "fold all the remaining meat-machines into itself". It's not so clear that any of them would result in there being a superintelligence which is a "version of" any of those meat-machines.

I neither know nor care exactly what your attitude to super AIs is. I do think, for what it's worth, that pretty much everything you've said here on the subject has an unpleasantly sneering tone which you might want to lose if you don't want to give the impression of being "against super AIs or something".


Hi gjm11,

I want to impress upon you sincerely that I am not sneering. I do, as I have said (and I stick by it), dislike EY's writing on a stylistic level. I'm not going to hide that. This is partly down to personal taste. I think that style says a hell of a lot about the substance of a person's thought†. I have read countless academic and academic-type essays where people try to mask their lack of knowledge with a hailstorm of jargon, and moreover lack the decency to take the time to go through their reasoning in plain and simple language. It really bugs me; EY may not be guilty of it, but it sure looks like it to me.

The very first commenter on his essay said: swap 'perfect-tech-being' for 'angel' and you get a philosophical debate as old as utilitarianism itself. But you're right, it is a nice singularity slant on an old problem, and in this way it is novel; still, we could go through the whole of philosophy of mind, or even philosophy of religion, and substitute daemons and angels for super-AIs, and the claims already made therein would not change much.

Both you and jerf have taken the time to show me the thinking behind the surface and I thank you for that. I will be more generous in future.

Epistemological properly means 'of or relating to a theory of knowledge', so "epistemological state" cannot really mean "state of beliefs and knowledge"; as you say, "epistemic state" would have been a lot closer to this. I think it is best to avoid words like epistemological unless you happen to be Per Martin-Löf‡ or someone of that calibre, because we all fail at wielding such terms judiciously (including me of course), and that's not meant to be snarky!

Concision! Oh yes please. What are the virtues of philosophical writing? Brevity. Clarity. Humour. A sharp use of metaphor. A Himalayan perspective :)

http://ebooks.adelaide.edu.au/s/schopenhauer/arthur/lit/chap...

‡ Check this out for a jaw-dropping walk through the gardens of logic / epistemology: http://docenti.lett.unisi.it/files/4/1/1/6/martinlof4.pdf


There's one significant difference between posing the problem in terms of AIs and posing it in terms of angels: if AIs are going to exist we'll have to design them[1] and whoever's designing them will be trying to ensure that their behaviour fits (something like) our values; whereas if angels exist, they were designed by someone else whose values may be quite different from our own and it's no business of ours to decide how they should behave.

[1] Perhaps indirectly.

Yeah, I like concise writing too. Concise and clear is even better. Concise, clear and funny, better still. Yudkowsky doesn't do too well on conciseness, but I think he does just fine on clarity and humour. (You might want to bear in mind that the article linked from here is part of a lengthy ongoing series (perhaps I should say: series of series) that EY was writing at the time; it's doubtless clearer when read in the context of the rest of it.)

Although "epistemic" would have been better, I really don't think "epistemological" need have been such a roadblock. If someone refers to an organism's "biological makeup" or "physiological condition", I hope it would be clear that they mean the kind of makeup/condition with which biology/physiology is concerned, rather than the organism's pet theories about biology and physiology. So also with "epistemological state".

What do other users of such terminology mean by it? I just asked Google for <<<"epistemological state">>> and of the first page results I reckon: first one is this discussion; second one is EY's meaning; third is ambiguous; fourth is yours; fifth is EY's; sixth is EY's (and says in so many words: 'Philosophers tend to suppose that one's "epistemological state" is constituted by beliefs'; the authors are philosophers); seventh is yours; eighth is ambiguous but I think nearer EY's; ninth is a sort of hybrid, nearer to yours; for the tenth (of which I can see only the snippet Google provides, the rest being behind a paywall) I can't tell. Some of those hits are from people whose use of philosophical language I wouldn't trust for an instant, but at least four seem reputable. (I am not sure whether to be relieved or alarmed that the ones that look reputable to me on other grounds are also the ones that favour EY's usage; perhaps I'm suffering from some bias or something.) It seems like EY's usage is pretty reasonable. I still think "epistemic" is better; as you may have noticed, he's now changed it.


> (I think he should have said "epistemic" rather than "epistemological".)

Fixed.


Eliezer's answer seems to be "do no harm, even when this seems to be beneficial, as your judgment is likely to be wrong".

For instance, quite a few dictators seize power and suppress dissent because they honestly believe that's the best for the people. It's all too easy to convince oneself that something that is convenient for oneself but harmful to another is, ultimately, "for the greater good".

Of course, exceptions to the "do no harm" rule do exist. However, the probability of the current case being an exception may be vanishingly small, even if you are convinced it is an exception. If this probability is indeed very small, the "do no harm" rule produces a better expected outcome than a "pragmatic/utilitarian" point of view for a human (imperfect) agent.

In the philosopher's "100% sure" case, a true mathematical weighting can be made and (true) utilitarianism wins; but no human can ever be that sure.
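To make that expected-outcome comparison concrete, here is a minimal sketch (my own illustration; the probabilities and body counts are hypothetical and not from the article or this thread) of why an imperfect agent can do better on average by treating "do no harm" as near-absolute:

    # Minimal sketch of the expected-outcome argument above.
    # Hypothetical numbers: you believe harming 1 person will save 5,
    # but your belief that the dilemma is genuine may be mistaken.

    def expected_deaths(p_real, override_rule):
        """Expected deaths, given probability p_real that the five really are
        doomed unless you act, under the two policies."""
        if override_rule:
            # Kill the one whether or not the threat was real.
            return p_real * 1 + (1 - p_real) * 1
        # Follow "do no harm": the five die only if the threat was real.
        return p_real * 5

    for p in (0.05, 0.20, 0.90):
        print(f"p_real={p:.2f}  override={expected_deaths(p, True):.2f}  "
              f"follow_rule={expected_deaths(p, False):.2f}")

    # Following the rule wins whenever p_real < 1/5: a fallible human who tends
    # to overestimate p_real does better with the near-absolute rule, while a
    # perfectly calibrated agent could safely act on the case-by-case sums.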


There's a difference between sacrificing another and sacrificing yourself.


May we never be placed in that position ever in our lives.


Did you miss the bit where Jesus asserted power over death and came back to life after three days? The sacrifice was a blood sacrifice, not a soul sacrifice. Eternal suffering doesn't come into it.


Theologically, Jesus is held to have suffered all that humans can suffer. So, even though He was resurrected after three days, that doesn't mean He couldn't have experienced an eternity of torment. By His own account, He experienced complete abandonment by God.


It said he was tested (tempted) in all ways. Hell isn't a test or temptation, so I don't think it counts.


Haha. No, I didn't miss that bit. I had it pretty well drummed into me as a kid in good old catholic Ireland. I'm no theologian but I was told he died for our sins which was awfully nice of him. I'm sure everything was roses post-mortem.

I was merely bringing the thought experiment to its logical utilitarian conclusion and showing that in doing so it has some mythical/biblical/primordial resonances.


It is never right, to do wrong, for a chance to do right.


Surely a sufficiently advanced AI would simply send a warning signal to the train driver to "hit the brakes quick". Next problem please.


The means produce the ends.


A creature running under that paradigm is abandoning their humanity. We can only exist as human beings with rights if we respect our fellows' human rights.


So if your daughter is kidnapped by a Colombian drug cartel and you decide that, since you're a badass, you're going to go save her. Then you kill 20 baddies that are themselves cutthroat murderers who would have undeniably killed a greater number of people. Can you really tell me that the ends wouldn't justify the means?

If MI-5 kills 4 terrorists who were preparing to bomb a bus station with hundreds if not thousands of people, are they in the wrong for not respecting the rights of the terrorists? Or would they be in the wrong for not protecting the citizens of the country they're supposed to be protecting?

The ends will always justify the means in the mind of the person doing the act. What makes the actions of one person (or government) right is that in the minds of other people the means are also justifiable.

If someone is willing to not respect my rights and kill me to steal my wallet, I'm willing to forgo their rights, because let's be honest, the ends do justify the means if it benefits not only yourself but the rest of the population.


I'd be totally justified in trying to save my daughter and nobody would be justified in stopping me. This is not a case of ends justifying the means because my opponents are all unjust. I am not sacrificing innocents to get her back. However, if innocents happened to be in the way, I'd have to make a choice in either abandoning my rights or not; the innocents would be entitled to retaliate.

In the anti-terrorism case, if the presumed terrorists were in fact innocents, then they (or their champions) would have a right to retaliate against the aggressors or against their commanders.

In our minds and to ourselves, we are always justified, but we can't justify ourselves from a moral standpoint, nor plead our case if the innocents have decided to retaliate.


Rights are a social contract. You give up your rights once you violate those of others.


Oh I agree. The moment you go on a killing rampage to rescue your daughter from the cartels, you justify other people taking action against you to protect themselves. The thing is, there needs to be someone that resigns their rights and does the right (so to speak) thing. Someone needs to keep the terrorists at bay. Someone needs to protect the citizens from the gangs. Morally objectionable? Maybe. In the eyes of a few. In the eyes of the rest, these people are heroes.


No, those actions are morally right. Those people have already violated their part in the social contract.


What do you mean by that? Why is it so? What is a "right", in fact?

You've just offered another slogan. You're going to have to work harder than that if you want to argue against consequentialism --- the idea that what matters are the consequences of each action, not the principles it contravenes.


What ends are good in consequentialism? Deontological morals, to one degree or another, are necessary to avoid moral relativism. If you agree moral relativism is bad, you agree consequentialism is bad.


Ultimately consequentialism requires you to adopt some set of moral axioms that define what ends are good. If you adopt zero such axioms, then you're back at nihilism. Once you have your minimal set of axioms, you then reason about consequences of the action to try to figure out its likely effects on net utility.

Most consequentialists adopt a "golden rule" sort of axiom about minimising suffering. An alternate formulation is about maximising agents' preferences. Generally people's intuitions about morality are broadly similar, so the challenge is to handle the corner cases most efficiently. But the basic moral relativism problems don't occur for consequentialists.

It's difficult to arrive at a plausible set of moral axioms that are going to lead you to positions such as "homosexuality is unethical", "might is right", or "slavery is ethically neutral". Very few consequentialists arrive at these positions, as far as I'm aware. Some consequentialists do end up back at what are essentially deontological positions, however, by re-deriving them consequentially. They argue that it's ineffective to try to reason ethically on a case-by-case basis. I think this is an empirical question about what is practical for most humans.


I do not understand how that follows. It's entirely possible to imagine a world in which a god/God judges you by the results of your actions instead of your intentions. (E.g. a woman may have been raped, but pre-marital sex still condemns her to Hell; or, perhaps less unfairly, you're not "saved through faith alone" but rather judged by how many people you've positively affected.)


That's deontological, i.e. certain events have inherent value, instead of being judged by their effects. Deontological morality means the effects chain is terminated. Otherwise there is no termination and no value ascribed to an action.


[I'm not too familiar with the (English) vocabulary used in these discussions, so I'll try to expand my previous comment a little and lay off the jargon. If you still disagree, can you point out where?]

> If you agree moral relativism is bad, you agree consequentialism is bad.

I disagreed with this statement.

My examples were meant to say: it's possible to have "god-given morals" (and thus no "moral relativism") while still judging acts by their consequences ("consequentialism").

In such a universe, wearing a sexy dress into a bad neighborhood may be morally fine (act ok), but if this causes you to get raped (pre-marital sex bad) you're still going straight to Hell (so act ok, results bad -> bad). Conversely, if I killed my neighbour for no good reason (act bad), and my neighbour happened to be 1920s Hitler (results good), I would be rewarded richly (so act bad, results good -> good).

Some disclaimers: this may be based on a misunderstanding of the words you used; I don't think the universe I sketched is the universe in which we live; and of course rape is not the victim's fault, and not wearing sexy dresses may not be enough to prevent it from happening (for the sake of discussion, though, in this particular instance it wouldn't have happened if the victim had worn a more conservative garment instead.)


In my opinion this hypothetical God that judges actions based on their consequences simply allows you to derive a consequentialist position in a deontological framework.

In consequentialism, you adopt some set of moral axioms, and say "this is how I'm going to define what worlds are good and bad, and I'm going to judge actions according to the worlds they are likely to create". This deontological version is instead saying, "I'm going to imagine there's a god, who reasons morally as follows...", and then saying the god reasons consequentially.

I think this derivation path does get you away from the "moral relativism" that's at the bedrock of a consequentialist position --- you've got to adopt some axioms. But it only does this by imagineering this "god" that behaves in an arbitrary way. All this is doing is pretending that the axiom you desire is a property of the universe you inhabit, rather than a property of you.


By running under the paradigm that they assume they are flawed? That's the core claim in there.

>..if we respect our fellows' human rights

This is a problematic clause. Whose rights do you disrespect: the one you pushed, or the 5 you could have saved?


Deontological ethics are hierarchical. Greater goods take precedence over lesser goods. But killing someone for a greater good doesn't make killing good, i.e. it isn't justified.


/me reads wikipedia article

Huh. By the sound of it, they take the stance that violence against non-violence is a flat-out unjustifiable act, regardless of the consequences.

But on those grounds, can't you say the trolley is a violent enemy that must be stopped, so violent acts are justifiable? You certainly can't expect to cause violence only to the enemy, especially if you consider psychological violence to be violence (and why wouldn't it be? Torture is torture). By causing violence only to the enemy, you may be causing psychological violence to anyone who happens to be in the blast radius / capable of witnessing your violence, thus you are causing violence to them.

So it's a moral goal for a flawless world with no-one else in it. Of course, if the whole world were like this, then pushing someone in front of the trolley would be justifiable, because they would have not been justified in not throwing themselves in front of such a violent enemy, if they were aware and able to do so.

I don't buy it. Ideals are worthless if they can only exist in an impossible world.


There's the rub though. How do you respect others' rights? Would enslaving 10 people to set 100 free not be respecting others' rights (at least in the long term)? Now you can argue whether or not that's the right way to respect others' rights, but it's respecting the rights of others nonetheless.


Rights are characteristics of the individual, even if the individual lives in a social context. So those 10 have as much right as those 100. Sacrificing the few for the sake of the many means you yourself are never justifiably safe; you can always be in the minority.


Sure. But you have to sacrifice someone's rights, don't you? By not enslaving those 10 people, you're sacrificing the rights of those 100 people. And while that doesn't mean that you're ever safe, it does guarantee that you have the highest chance of being safe.


When you are acting like that, you are not sacrificing other people's rights, but your own. That is, you are opening yourself up for rightful retaliation from the victims, or whoever might want to champion their cause (friends, families etc). You may be able to physically defend yourself, but you won't have moral grounds to plead your case. In other words, if you act like an animal, you can be treated as such.


> We can only exist as human beings with rights if we respect our fellows' human rights.

What are said rights?

For example, do I have a right to food? If so, who is obligated to provide it? ("govt" isn't an answer.)

How about a right to live in the southwest US? (I may require a dry climate for health reasons.) How about with an ocean view for my peace of mind? How about a right to live near people whom I like, or away from folks whom I don't like?



