The pair may come from the same publisher (Princeton), but they couldn't be more different.
Blind Spots is a good book. It tells a story in a clear and compelling fashion, which is what a book is for.
The story is that we often act unethically, not because we're faced with ethical questions and decide to pick the "bad" option, but because we fail to see that there is an ethical issue at all.
This is not the same as saying that 'the road to hell is paved with good intentions'. That old phrase warns against trying to be good and, as a result, causing evil because your plans go wrong. Blind Spots is saying that even if all of your attempts to be good work out just fine, you might still cause evil.
For example, you could be a good employee who never calls in sick unnecessarily, is kind to your friends and colleagues, and donates generously to charity.
Unfortunately, you're an accountant connected to Enron, and your work - ultimately - consists of defrauding innocent people. But of course, you don't think of it like that, because we don't tend to think about things "ultimately".
Which is hard to disagree with. At worst, you could say it's obvious, although I think it's still something we ought to be reminded of. That's not all there is to the book, though: it also discusses how this happens and suggests ways to avoid it within organizations.
For example, the authors show how setting up rewards and punishments to "make people be ethical" can make them less so, by encouraging people to think of the issue as a personal trade-off between gain and loss, rather than as an ethical dilemma - what the authors call "ethical fading".
A day-care centre was annoyed at the fact that some parents were picking up their children late. This was antisocial because it meant staff had to work late into the evening.
So they started charging parents a late fee. Not a big one, but enough to send people a message: this is wrong, don't do it. But in fact, late pickups became more common.
Previously, many people were making an effort to be on time, as a matter of principle. Once the fees were in place, it stopped being an ethical issue and just became a financial trade-off: is it worth paying the fee to get an extra hour?
Of course, you could make the fees higher to get around this, but even then, you've caused ethical fading, and you'll be relying on the sanctions from that point on.
Braintrust, by contrast, is just not a good read. The bulk of the book consists of discussions of various neurotransmitters and brain areas and how they may be related to human social behaviour. Oxytocin, for example, may make us behave all trusting and kindly, as it's involved in maternal bonding. There's a long discussion of the neurochemistry of male sexual behaviour in voles.
It's not clear how this is relevant to ethics. Whether it's oxytocin that does it, or something else, and whether voles are a useful model of human behaviour or not, clearly sometimes we trust people and sometimes we don't. That's psychology. And biology can't yet explain it.
Churchland doesn't claim that the various biological concepts she covers can fully explain anything, and she doesn't pretend that all of these findings are rock solid. Which is good, because they can't, and they're not. So why spend well over half of the book talking about them?
Churchland's big idea seems to be that human morality emerges out of our more general capacity for sociability. Hence all the stuff about oxytocin and "the social brain". OK. But I'd have said that's a given - there's obviously some relation between sociability and morality.
I think there is an interesting idea in here, albeit not very clearly expressed, namely that morality isn't a special function of the brain, but just one of the many forms that our social cognition can take.
In other words, I think the claim is that ethics isn't just related to sociability, it is sociability. Even asocial animals care about their own welfare, in terms of pleasure and pain; social ones become social when they extend this caring to others; intelligent social animals including humans and maybe some primates also have a system for inferring the motivations and thoughts of others.
At the end of the book, Churchland stops reviewing neuroscience and starts talking about the implications for philosophy. This is the best section of the book, but it's too short.
Churchland makes the interesting point, for example, that when we are considering philosophical "ethical dilemmas", like the famous trolley problems, we may not be applying any kind of ethical "rules" as such. Rather, she thinks that our moral reasoning is pretty much a kind of pattern recognition based on previous experience - like all our other social reasoning.
Someone who'd just read a book about the horrors of Stalinism might tend to adopt an anti-consequentialist, every-life-is-sacred approach. Whereas if you'd just watched a movie in which the hero, reluctantly but rightly, decides to sacrifice one guy to save many others, you might do the opposite. The ethical "rules" might then be confabulated to cover it.
This is a nice idea. It's open to criticism, but it's a serious suggestion, and one that deserves a decent discussion. Sadly, there isn't one. If only there were more room in the book for this kind of stuff - but oxytocin covers so many pages.
Basically, the good parts of this book are not about the brain at all.
Reading Braintrust is like going on a date but then bumping into an annoying friend who insists on coming along for dinner. Jesus, The Brain, you want to say. I like you and all, but seriously, you are getting in the way right now.