Update and reboot: Sam Harris has responded to my blog post reacting to his TED talk. In the initial version of this response-to-the-response-to-the-response-to-the-talk, I let myself get carried away with irritation at this tweet, and thereby contributed to the distraction from substantive conversation. Bad blogger. In any event, Sam elaborates his position in some detail, so I encourage you to have a look if you are interested, although it didn't change my mind on any issue of consequence.

There are a number of posts out there by people who know what they are talking about and surely articulate it better than I do, including Russell Blackford and Julian Sanchez (who, one must admit, has a flair for titles), and I should add Chris Schoen. But I wanted to try to clarify my own view on two particular points, so I put them below the fold. I went on longer than I intended to (funny how that happens). The whole thing was written in a matter of minutes -- have to get back to real work -- so grains of salt are prescribed.

First, the role of consensus. In formal reasoning, we all recognize the difference between axioms and deductions. We start by assuming some axioms, and the laws of logic allow us to draw certain conclusions from them. It's not helpful to argue that the axioms are "wrong" -- all we are saying is that if these assumptions hold, then we can safely draw certain conclusions.

A similar (although not precisely analogous) situation holds in other areas of human reason, including both science and morality. Within a certain community of like-minded reasoners, a set of assumptions is taken for granted, from which we can draw conclusions. When we do natural science, we assume that our sense data is more or less reliable, that we are not being misled by an evil demon, that simpler theories are preferable to complicated theories when all else is equal, and so forth.
Given those assumptions, we can go ahead and do science, and when we disagree -- which scientists certainly do -- we can usually assume that the disagreements will ultimately be overcome by appeal to phenomena in the natural world, since as like-minded reasoners we share common criteria for adjudicating disputes. Of course there might be some people who refuse to accept those assumptions, and become believers in astrology or creationism or radical epistemological skepticism or what have you. We can't persuade those people that they're wrong by using the standards of conventional science, because they don't accept those standards (even when they say they do). Nevertheless, we science-lovers can get on with our lives, pleased that we have a system that works by our lights, and in particular one that is pragmatically successful at helping us deal with the world we live in.

When it comes to morality, we have a very similar situation. If we all agree on a set of starting moral assumptions, then we constitute a functioning community that can set about figuring out how to pass moral judgments. And, as I emphasized in the original post, the methods and results of science can be extremely helpful in that project -- which is the important and interesting point on which we all agree, and which is why it's a shame to muddy the waters by denying the fact/value distinction or stooping to insults. But I digress.

The problem, obviously, is that we don't all agree on the assumptions, as far as morality is concerned. Saying that everyone, or at least all right-thinking people, really wants to increase human well-being seems pretty reasonable, but when you take the real world seriously it falls to pieces. And to see that, we don't have to contrast the values of fine upstanding bourgeois Americans with those of Hitler or Jeffrey Dahmer. There are plenty of fine upstanding people -- you can easily find them on the internet! -- who think that human well-being is maximized by an absolute respect for individual autonomy, where people have equal access to primary goods but are given the chance to succeed or fail in life on their own. Other people think that a more collective approach is called for, and that it is appropriate for some people to cede part of their personal autonomy -- for example, by paying higher taxes -- in the name of the greater good.

Now, we might choose to marshal arguments in favor of one or another of these viewpoints. But those arguments would not reduce to simple facts about the world that we could in principle point to; they would be appeals to the underlying moral sentiments of the individuals involved, which may very well end up being radically incompatible.

Consider an example: killing a seventy-year-old person (against their will) and transplanting their heart into the body of a twenty-year-old patient might add more years to the young person's life than the older person could be expected to have left. Despite the fact that a naive utility-counting would argue in favor of the operation, most people (not all) would judge it to be immoral. But what if a deadly virus threatened to wipe out all of humanity, and (somehow) the cure required killing an unwilling victim? Most people (not all) would argue that we should reluctantly take that step. (Think of how many people are in favor of involuntary conscription.)

Does anyone think that empirical research, in neuroscience or anywhere else, is going to produce a quantitative answer to the question of exactly how much harm would need to be averted to justify sacrificing someone's life? "I have scientifically proven that if we can save the lives of 1,634 people, it's morally right to sacrifice this one victim; but if it's only 1,633, we shouldn't do it." At bottom, the issue is this: there exist real moral questions that no amount of empirical research alone will help us solve.
If you think that it's immoral to eat meat, and I think it's perfectly okay, neither one of us is making a mistake in the sense that Fred Hoyle was making a mistake when he believed that conditions in the universe were essentially unchanging over time. We're just starting from different premises.

The crucial point is that the difference between sets of incompatible moral assumptions is not analogous to the difference between believing in the Big Bang vs. believing in the Steady State model; rather, it is analogous to the difference between believing in science and being a radical epistemological skeptic who claims not to trust their sense data. In the cosmological-models case, we trust that we agree on the underlying norms of science and together form a functioning community; in the epistemological case, we don't agree on the underlying assumptions, and we can only hope to agree to disagree and work out social structures that let us live together in peace.

None of which means that those of us who do share common moral assumptions shouldn't set about the hard work of articulating those assumptions and figuring out how to maximize their realization, a project of which science is undoubtedly going to be an important part. That is what we should have been talking about all along.

The second point I wanted to mention was the justification we might have for passing moral judgments on others. Not to be uncharitable, but it seems that the biggest motivation most people have for insisting that morals can be grounded in facts is that they want it to be true -- because if it's not true, how can we say the Taliban are bad people? That's easy: the same way I can say radical epistemological skepticism is wrong. Even if there is no metaphysically certain grounding from which I can rationally argue with a hard-core skeptic or a Taliban supporter, nothing stops me from using the fundamental assumptions that I do accept, and acting accordingly.
There is a weird sort of backwards logic that gets deployed at this juncture: "if you don't believe that morals are objectively true, you can't condemn the morality of the Taliban." Why not? Watch me: "the morality of the Taliban is loathsome and should be resisted." See? I did it!

The only difference is that I can present logical reasons in support of that conclusion only to other members of my moral community who proceed from similar assumptions. For people who don't, I can't prove that the Taliban are immoral. But so what? What exactly is the advantage of being in possession of a rigorous empirical argument that the Taliban are immoral? Does anyone think they will be persuaded? How we actually act in the world in the face of things we perceive to be immoral seems to depend in absolutely no way on whether we pretend that morality is grounded in facts about Nature. (Of course there exist people who will argue that the Taliban should be left alone because we shouldn't pass our parochial Western judgment on their way of life -- and I disagree with those people, because we clearly do not share underlying moral assumptions.)

Needless to say, it doesn't matter what the advantage of a hypothetical objective morality would be -- even if the world would be a better place if morals were objective, that doesn't make it true. That's the most disappointing part of the whole discussion: seeing people purportedly devoted to reason concoct arguments in favor of a state of affairs because they want it to be true, rather than because it is.