Quote of the Day

My wife commented to me that one of the unfortunate things about philosophers is that they aren’t mathematicians. When philosophers discover a paradox, they write paper after paper after paper about the paradox, trying to resolve it. When mathematicians discover a paradox, they re-evaluate the model that caused it. Philosophers seem to have a lot of trouble actually evaluating underlying models, though. Instead, they assume some model is correct (in this case, their harm-based moral model) and they criticize the details of conclusions people draw from other models.

I don’t think it ever even occurs to them that their model of “harm” might not be the best one.

-Commenter LotharBot in a comment on my “Test of Moral Intuitions” post. (LotharBot’s subsequent comment is also well worth reading.)

UPDATE: Mrs. LotharBot elaborates in the comments.

5 thoughts on “Quote of the Day”

  1. The first half of the sentence was “My wife commented to me that…” ;) She deserves most of the credit for that insight.

  2. Sorry. I replaced your wife’s words in the quote to make it clear that the ideas are hers.

  3. Lotharbot’s wife here.

    I think I may have been careless with language. Strictly speaking, either camp (philosophers or mathematicians) can abide a paradox–it is contradictions that they seek to avoid. But the frequent error philosophers make is to ignore the models involved, resulting in mistaking a paradox for a contradiction, or (sometimes sillier) manufacturing a paradox.

    When faced with an apparent contradiction, it is certain that an error lies somewhere, whether in poor reasoning, contradictory axioms, or a bad model. In the first two cases, it is a bona fide contradiction, but in the last it is simply a counterintuitive result.

    When faced with difficulties, mathematicians habitually question all three, knowing that their understanding of the underlying models is one of the things that may be flawed. Philosophers by and large don’t seem to have this habit: they question axioms/definitions and reasoning, but rarely question whether something they think they understand “really works that way.”

    That’s the frustration I have with reading philosophers. All too often they go on for pages upon pages of improbable and uncomfortable conclusions, all built on an assumption about how something like “harm” or “goodness” works–when five minutes’ reflection on the topic of “does it really work that way?” reveals a whole host of more likely solutions.

    The quiz that sparked the comment makes the error several times on its analysis page. As an example, though it pays lip service to the idea that it is possible to see harm in the scenarios depicted in the quiz, it goes on to say that arguing harm results from them “is not an easy argument to make” and “will require a good deal of thought.” Of course, this is quite an outrageous statement to make unless they and their audience have a shared technical understanding of what harm means and how it works. At best, they could say, “With our understanding of harm due to physical and psychological effects, we think it would be very difficult to argue these actions cause any harm.” But no one can reasonably say how hard or easy it is to argue something in someone *else’s* model! This statement completely fails to take seriously the possibility that there might be a competing mental model of harm in which the harm from those actions is immediate and obvious. A few moments’ thought yields some popular concepts of harm in terms of character formation, lost opportunities, symbolic meaning, spiritual effects, etc., which wildly change the amount of harm done in the circumstances. Painstakingly precise logical analysis is undercut by sloppy analysis of available models.

    And it’s got a really bad one at the end. They write, “… if one wants to argue that an act can be wrong without harm, or the possibility of harm, then it is necessary to think carefully about how one justifies the attribution of wrongdoing, in order to avoid at least some notion of harm – however broad – entering into the moral calculus.”

    This could mean either that moral systems not based on harm must exclude it entirely, or that people usually include the notion of harm in moral decisions even when they think they aren’t doing so. If it is the latter, it again fails to take seriously the possibility of competing moral models. (“Oh sure, you can have a model which doesn’t invoke harm… but we suspect you won’t really mean it.”) If it is the former (and the general context inclines me to think it is), it is even worse: a terrible blind spot that arises from an unquestioned and overly restrictive model of moral systems–namely, that they derive from a single axiom or value. It is quite possible to develop a moral system that rejects actions which cause harm, and also rejects other actions for other reasons.

    It is frustrating to see people take such painstaking care to get the logic right, and not take five minutes to think about the models. Sadly, this is more often the case than not when I read philosophy, even the good stuff.

  4. -Thanks for that elaboration.

    -The term “frame blindness” also comes to mind.

    -Is not the Rawlsian critique of laissez-faire based on assumptions about the knowability of secondary and tertiary effects of one’s behavior? IOW, are you not asked to accept one of Rawls’s models as a premise?

    -Behaviorists might make some of the same points that you did, e.g., in criticizing notions of “mind” and mind/body dualism.
