In a recent series of articles, New Scientist magazine explored what their lead article called “The Great Illusion of the Self.”
The article gave more space to why we don’t know much of anything about our selves than to what we do know, or think that we know, for “While it seems irrefutable that we must exist in some sense, things get a lot more puzzling once we try to get a better grip of what having a self actually amounts to.”
According to the article, we are sure of three things about our selves. We are continuous. We are unified. And we are agents.
“All of these beliefs appear to be blindingly obvious and as certain as can be”; yet “as we look at them more closely, they become less and less self-evident.”
Human intelligence is unique, and it isn’t.
Our intellects are unique, in the sense that no other animal more than remotely approaches the power of the human brain, a power that includes the remarkable ability both to become aware of its own activity and to think about itself. Cognition and metacognition, on a scale no other animal even approaches.
Our intellects are not unique, in the sense that our formidable mental powers result from the action and interaction of the same neural raw material that composes all synaptic systems, large and small. A hundred neurons or a hundred million neurons is a difference of scale — a very significant difference — not a difference of kind.
The idea that all brains fall somewhere along the same neural continuum is reinforced by David Robson’s “Hive minds: Honeybee intelligence creates a buzz,” published by New Scientist on November 28th.
I’ve argued here more than once that, when it comes to psychology, measurement trumps interpretation. That’s one big reason that I am less critical of brain scans than some others are. To the extent that you have to interpret a game or speculate about a gesture, you’re on potentially shaky ground.
A newly published study illustrates some of the problems that can plague research that appears to be empirical but really isn’t.
The study, “Social Evaluation or Simple Association? Simple Associations May Explain Moral Reasoning in Infants,” published by PLOS ONE on August 8th, re-evaluates a landmark experiment that used a toy scenario to conclude that infants have an innate preference for “moral” helpers.
In the latest issue of Philosophy Now, Raymond Tallis takes a semi-serious look at the great unknown, the under-examined third of our lives in which we are asleep.
The tone of Tallis’s article comes from the fact that he, like the rest of us, doesn’t know the first thing about sleep. Not just what it is and why we do it, but what it means to our concepts of consciousness and self that every night we lose control, passing from a world of physical perception to another of mental impression.
It’s hard to imagine a case that could bring the questions about moral and legal responsibility raised by neuroscience more front and centre than the upcoming investigation of the motives and culpability of Aurora mass murderer James Holmes.
We now know that Holmes was seeing a psychiatrist before his rampage. And his dazed behaviour during his first court appearance suggests that he may be so seriously unhinged that it will be very difficult to hold him criminally responsible for his actions.
Many people suspect that the neuroscience student’s “insanity” is a carefully and cynically planned “get out of jail free” card. They worry that he’ll get away with it, avoiding the harshest versions of the retribution that his crime deserves.
There have been several posts here on whether and to what extent neuroscience should mitigate criminal responsibility, most recently “But it’s the other guy in my brain who’s guilty.”
Many experts, among them Michael Gazzaniga (article here), have argued that brain science is nowhere near complete or definitive enough to inform decisions about criminal guilt. One of the most recent forays into this contentious legal arena is a New York Times piece by psychologists John Monterosso and Barry Schwartz.
Two new online articles explore the brain centres that may be responsible for self-awareness.
The first article begins with a question: how do we become conscious after sleep? The question can be rephrased to ask what brain areas become more active as we wake and regain normal self-awareness.
Whatever your definition of consciousness, or your opinion of brain scan studies, unless you’re up for some form of dualism there’s no real disputing that every cognitive state is associated with specific brain processes.
Science Daily published online a summary of new research into the brain states of “lucid dreamers,” people who, though asleep, are aware that they are dreaming. Their brain activity at the moment of achieving this “dreaming awareness” is more easily measured than that of typical, non-conscious dreamers.
When a personality that’s not me commits a crime, is it a fair punishment to incarcerate the body we share?
And if it’s not, then doesn’t a part of me that I don’t even know get away with it, even get away with murder?
These are the kinds of brain-twisting questions that loom over criminal justice thanks to advances in neuropsychology. And these are the questions that give nightmares to the many who worry about a science-induced end to criminal justice as we know it.
In “Split personality crime: who is guilty?” — a soon-to-be-“paywalled” article published by New Scientist on July 5th — Jessica Hamzelou reports on a study of patients diagnosed with dissociative identity disorder (DID), also known as multiple personality disorder.
Last time, I wrote a rather frustrated little piece about how hard it is for a species as habitually irrational as we are to have real democracy.
The more I thought about what I’d written, and about the many books and articles that had prompted it, the more I appreciated an article that I had read way back in April. (In online terms, that’s a couple of decades ago, not just a couple of months.)
In a Scientific American blog piece titled “The Irrationality of Irrationality: The Paradox of Popular Psychology,” Samuel McNerney cautions us to tread lightly when we draw conclusions from the recent flood of popularized psychology explanations of how and why we’re not really rational creatures at all — at least, not often, and never entirely.
Much of this week’s American political news has been dominated by two high-profile and highly anticipated Supreme Court decisions.
The first decision struck down much of Arizona’s intrusion into immigration law, on the grounds not that the law violates individual rights but on the narrower legal grounds that immigration is a federal concern. The second, even more prominent decision gave Barack Obama a win (and Mitt Romney a campaign issue) on medical care.
But it’s neither of these decisions about which I want to write.
Instead, I’m motivated by the less-trumpeted and more predictable Supreme Court decision that upheld the Republican Wyoming legislature’s repeal of a law banning large third-party campaign contributions. This decision was along the same 5-4 ideological lines that had previously removed campaign contribution limits from federal elections.
One recent posting was about the claim of evolutionary psychologists that our basic political orientations are more innate than learned. Today, we take a tangential look at the subject by examining a brain study that indicates that basic personality traits may be tied to neural activity in specific areas of the brain.
Oh, great, some of you must be thinking — evolutionary psychology and brain scans. But patience. I’m not going to advocate anything, just report what the study says.