toxin puzzle, toxic debate

Violet and I were discussing the reading and realized we interpreted a section of Blackburn rather differently. I will present my case for why I think Blackburn is making a normative statement on page 189. 


Here on page 189, Blackburn is describing the toxin puzzle, in which you must truly intend to consume the toxin, even though you would drink it only after already receiving the reward. 


I am arguing that Blackburn thinks that we, as a society, must “cultivate a disposition” under which we could truly intend to take the toxin and then actually take it. Violet disagrees that he makes any such normative statement! On the following page, he describes how treating a previously formed disposition to take the toxin as sufficient reason not to rethink your decision after receiving the reward would normally be a dogmatic and unadmirable trait. However, I think he highlights that it is not normally seen as admirable precisely because he wants to cultivate a society in which that kind of steadfastness is admirable. 


He thinks that as a society we should be “moved to co-operate not just because of principle, or altruism, but just because they would feel uncomfortable with themselves if, for instance, it turned out that they had played hawk and the other dove.” He wants us to view a year out of prison differently if it came at the cost of another person's imprisonment, rather than valuing it just tout court. 


I will add that though he makes it abundantly clear that either choice is rational, I do think he advances a normative claim that one choice, or disposition, is better for society. 


Without the disposition described above, trust would be unable to form. On page 196, he describes how being trusted can cause you to act in a trustworthy way. But unless you had such a disposition, you would not take into account whether you are trusted when deciding how to act. 


Comments

  1. Violet's original and unrelated post: My long-standing confusion with MAD

    I’ve always had the following concern with mutually assured destruction. Surely the reason that countries refrain from nuking each other is not the rational conclusion that dropping just one weapon would inevitably lead to everyone involved (or everyone on Earth) being obliterated.

    Even if dropping the first bomb resulted in a chain of retaliation, I would think surely someone would swallow their pride and refrain from dropping the very last one, even if it meant leaving the interaction worse off relative to the other guy. Surely rational actors consider aggregate cost and not just the cost facing them.

    As long as someone had to make a conscious decision to drop the final bomb, surely rationality would keep them from being the one to finally obliterate everyone.

    I would think the expected value of an action depends on the expected values of the alternative courses of action, and when the stakes suddenly become everyone on Earth dying, someone would swallow their pride.

    Similarly, in the Centipede problem, why is it not rational to incorporate the opportunity cost of forgone collaboration into the decision to help for the first time? I would think the cost and benefit of helping, and the probability of my neighbor helping me back, are critically important to the conclusions drawn from it. If it is relatively easy to help my partner and the possibility of collaboration allows for any benefit, losing out once doesn’t seem like that big of a deal.

    Shouldn’t the cost of never helping in the first place incorporate the lost opportunity of the potential to cooperate and receive potential benefits all the way up to the maximum possible? This is where I see this as similar to MAD: say we help each other 99 times, and my neighbor decides to cash out and not help me on the last one. Surely I would be content with my surplus of 98 units of benefit, and not rationally conclude that I shouldn’t have helped the last time? Why do I care so much?

    The problem’s conclusion seems to implicitly make assumptions about what people care about – losing out on one favor to my neighbor is much more costly to me than the probability of benefit. But is this necessarily true?

    I would think calculating the expected value of possible collaboration could allow the first person to make the right move. Surely it is possible that helping my neighbor once, and risking only the cost of that one favor, can be rational.

    If I help my neighbor once, I risk doing something that has no benefit to me if my neighbor doesn’t return the favor. But as long as I don’t know that my neighbor wouldn’t help me, not taking this risk definitively forgoes all possible benefit. The opportunity cost of never taking the risk should be calculated into the decision to make the first move.

    Perhaps I would like Gauthier, and Blackburn is responding exactly to my concerns with his distinction between theoretical and empirical situations. Or perhaps this just comes down to a definitional problem, contingent upon whether “rational” requires only considering your own best interest while knowing everyone else does the same. But even so, what does this show about the real world, if in every case this is not what people actually think? Or do we implicitly calculate the probability of what other people include in their calculations?

  2. Violet's response to Bika:

    Hi Bika! Thanks for your post.

    Perhaps my point will come across as chicken-shit, but I nonetheless feel that the central aim of his argument is descriptive. He responds to Gauthier and others about what "rationality" is, and his discussion of how trust is formed and which dispositions allow for benefit in different situations is merely descriptive. He argues that co-operation is not necessarily "rational," but depends on the concerns of each actor.

    Notably, he does not say that one "should" be "moved to co-operate," as you say, but rather that "someone may be moved to co-operate ..." (189). This is a small point but demonstrative of his tone.

    Because cooperation is clearly better for everyone, he (and hopefully most people) would argue that it would be better if people had the dispositions which allowed them to cooperate. But I maintain that the overwhelming focus of his argument is to describe how collaboration happens despite collaboration not being rational for every actor.

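Violet's expected-value reasoning in the first comment can be made concrete with a toy model. Everything here is hypothetical illustration, not from the reading: assume each favor costs the giver 1 unit and benefits the receiver by 2, and that the neighbor reciprocates with some probability. Helping first then has positive expected value whenever reciprocation is likelier than the cost-to-benefit ratio, and even a defection on the final round of a long exchange, as in the "99 favors" scenario, leaves the helper well ahead.

```python
def ev_first_favor(p_reciprocate, benefit, cost):
    """Expected value of extending the first favor in a one-shot model:
    pay `cost` now, receive `benefit` back with probability `p_reciprocate`.
    (Hypothetical model, not Blackburn's or Gauthier's formalism.)"""
    return p_reciprocate * benefit - cost

def net_after_rounds(rounds, benefit, cost):
    """Net payoff from a long exchange of favors in which the neighbor
    defects only on the very last round: I gave `rounds` favors at `cost`
    each but received only `rounds - 1` favors at `benefit` each."""
    return (rounds - 1) * benefit - rounds * cost

# Helping first is worthwhile once reciprocation is likelier than
# cost / benefit (here, 1/2):
print(ev_first_favor(0.9, benefit=2, cost=1))   # positive
print(ev_first_favor(0.4, benefit=2, cost=1))   # negative

# The "help 99 times, get stiffed on the last one" scenario:
print(net_after_rounds(99, benefit=2, cost=1))  # prints 97
```

On this sketch, backward induction's conclusion (never help) depends on assuming defection is certain; once the neighbor's cooperation gets any credible probability, the commenter's intuition that the first favor can be rational goes through.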
