
A Neurological Approach to Skepticism

Published by Steven Novella under Uncategorized Comments: 7


A recent Op-Ed in the New York Times by Sam Wang and Sandra Aamodt, called Your Brain Lies to You, discusses many themes I have covered in this blog (here and here for example). The piece appears to be a preview of their upcoming book, Welcome to Your Brain: Why You Lose Your Car Keys but Never Forget How to Drive and Other Puzzles of Everyday Life, but it is an excellent summary of many skeptical principles, namely that we cannot trust our memories.

They write:

The brain does not simply gather and stockpile information as a computer's hard drive does. Facts are stored first in the hippocampus, a structure deep in the brain about the size and shape of a fat man's curled pinkie finger. But the information does not rest there. Every time we recall it, our brain writes it down again, and during this re-storage, it is also reprocessed. In time, the fact is gradually transferred to the cerebral cortex and is separated from the context in which it was originally learned. For example, you know that the capital of California is Sacramento, but you probably don't remember how you learned it. This phenomenon is known as source amnesia: we remember facts but not necessarily how we learned them. In the context of our evolutionary milieu this was probably not a problem; it was better to dedicate brain power to remembering important stuff rather than to how we learned it. Or, this could just be an unintended consequence of how the brain works. Either way, in our modern society, filled with misinformation, ideological distortion, pseudoscience, denial, revisionism, and simply bad information and unsubstantiated rumors spreading at the speed of the internet, it is critically important to know and remember where information came from.

I face this every day as a physician. I have a lot of medical knowledge in my head that I learned from years of classes, reading journals, and listening to lectures. For certain issues (controversial topics, recent changes in thinking, or those within my area of expertise) I can know the literature pretty well. But for much of my clinical knowledge I know the facts but not their sources. If I had to know even just the most critical literature on every single medical fact I would need to have about 10 times as much information crammed into my head. This is just not practical.

To compensate, professionals in highly demanding areas like medicine rely upon what we euphemistically call our "ectopic brains." I may not know everything off the top of my head, but I know where to look it up when I need to.

It turns out that knowing the sources for information is very important. Some facts in medicine are well established. But others were just guesses made years ago that somehow became entrenched in established medical knowledge without ever being properly studied. Yet many physicians may just remember both as disconnected facts. Recognition of this problem was part of the impetus for Evidence-Based Medicine (EBM): going back and looking at the actual evidence for everything we think we know.

Source amnesia is not just a problem for information-intensive professions. Actually, professionals typically have an infrastructure to deal with this failing of human memory, like searchable catalogues of published research. For everyday use, the internet is becoming humanity's ectopic brain, which is overall a good thing, but it merely displaces the problem, because you still have to consider the source of the information you are reading on the internet.

The problem of source amnesia is far worse than just having a poor memory for the source of information, for the same process also leads to the loss of important bits, such as whether or not the information is true. As a result, when people are told a claim and also that the claim is a myth, days or weeks later they are likely to remember the claim but not that it was a myth.

Ack! This cuts right to the heart of what skeptics do: pointing out myths and misinformation. If we are not careful we can actually increase belief in the very myths we are trying to correct. Therefore, when Jenny McCarthy goes around spouting that vaccines cause autism, the scientific community can carefully correct all of her nonsense and misinformation, but the public is likely to remember, "didn't I hear somewhere about vaccines and autism?" The damage is done.

What's the solution? I think there are several ways to exploit this understanding of human memory. On a personal level I think that we need to consciously apply mental "metatags" to all information. For each factual claim (beyond the mundane everyday stuff) we need to make a mental effort to attach to that claim its source, whether it is actually true, and how reliable that knowledge is. This is a mental habit we need to encourage and practice.

As a skeptic I find that I have learned to do this as a necessary strategy for remembering skeptical information. Skeptics are used to asking "is it true?", and so for each claim (especially controversial ones) we habitually focus on this question. As this habit evolves we next ask "how do we know?" Ideally for each such claim we would have mentally attached metatags, such as: claim: vaccines cause autism; status: false antivaccinationist myth; source: NeuroLogica blog summarizing extensive epidemiological research. This then leads to the utility of having a skeptical ectopic brain on the internet: a convenient way to find and verify sources. Much of this infrastructure already exists. For example, most skeptics know that Snopes.com is a good place to start to see if a claim is true or a myth.
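
To make the metatag idea concrete, here is a minimal sketch of what such a claim record might look like if it were stored explicitly rather than mentally. This is purely illustrative; the field names and values are hypothetical, not part of any real system:

```python
# A minimal sketch of the "mental metatag" idea as an explicit record.
# Field names and example values are hypothetical, mirroring the
# claim/status/source tags described above.
from dataclasses import dataclass

@dataclass
class ClaimRecord:
    claim: str        # the factual assertion itself
    status: str       # e.g. "true", "false", "uncertain"
    source: str       # where the assessment came from
    reliability: str  # how solid the underlying knowledge is

example = ClaimRecord(
    claim="Vaccines cause autism",
    status="false (antivaccinationist myth)",
    source="NeuroLogica blog summarizing extensive epidemiological research",
    reliability="high: multiple large studies",
)
print(f"{example.claim!r} -> {example.status}")
```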

In addition to consciously attaching important information to alleged facts, such as whether or not it is true or false, it also helps to put individual claims or facts into a deeper context. For example, it is much easier to remember that the association between vaccines and autism is a myth if one understands its place within the anti-vaccinationist movement. This way people build a knowledge framework where it is much easier to keep individual facts straight. Our memory for individual facts, if they are not connected to such a framework, will tend to drift over time, becoming progressively distorted and even reversed.

In terms of skeptical activism, knowledge of this aspect of human memory can help skeptics frame their message. We do not, for example, want to mention a myth that is not already generally known just for the purpose of refuting it. We also need to be conscious of how we state things. Rather than saying that the claim that people use 10% of their brain is a myth, we should say that people use 100% of their brain: first establish the framework of the positive, true statement.

Also, we need to emphasize teaching the tools of how to think, rather than just telling people what to think. Along these lines we need to teach people how we know what we know in science, not just the current findings of science. If you teach the process of arriving at a conclusion, that automatically gives people a framework to help remember information correctly, and it also gives them the ability to reproduce the argument and re-arrive at the correct conclusion, rather than having to remember it correctly by rote.

Consciousness is undoubtedly one of the most complex and interesting phenomena in the universe. Wrapping our minds around the concept of mind has vexed philosophers and scientists for centuries, perhaps because it is the task of the brain trying to understand itself. This has led to many theories and bizarre beliefs about consciousness: that it is non-physical, that it is due to quantum weirdness, or that it requires new laws of nature to explain. And yet modern philosophers and neuroscientists are increasingly of the opinion that perhaps it's not such a hard problem after all. Perhaps the real trick is realizing that it's not even a problem at all. Yesterday I wrote my most recent reply to Michael Egnor's rather lame attempt at defending what is called Cartesian dualism: the notion that consciousness requires the addition of something nonphysical. Ironically he invoked the writings of David Chalmers to his cause, not realizing (or not caring) that Chalmers is a harsh critic of Cartesian dualism and rather supports what he calls naturalistic dualism. Chalmers believes that the something extra required to explain consciousness is a new law of nature, not a non-physical spiritus.

Today I will discuss Chalmers' proposed solution (actually he points the way to a solution but acknowledges he does not yet have one) and his major critic, Daniel Dennett.

The Problem of Subjectivity

At the core of the debate over consciousness is the nature of subjective experience. Why is it that we have subjective experience, that we feel, and that we have the sense that we exist? Why aren't we, as David Chalmers asks, just zombies carrying out all of the processes of life without experiencing it?

Part of the enduring controversy over this question is that it intersects with a deeper question about the nature of science itself. Can science rely upon subjective experience to understand nature? Or (a related question) can science formulate models of reality that include elements that cannot be objectively observed and tested? If we don't allow subjectivity, then how do we study the phenomenon of subjective experience itself? I admit this is a thorny problem. Former Buddhist monk B. Alan Wallace has written a number of books and articles advocating, for this reason, the reintroduction of subjective experience into scientific thinking. I disagree with him on this point, and also on the way in which he invokes quantum mechanics to support his form of dualism, but that is a topic for a future post (perhaps later this week).

The question of whether or not science should restrict itself to the observable and testable has plagued the world of physics since the time of Einstein and the death of the classical model of nature. In the classical world there was no doubt that what we observe about the world is the same as the way the world actually is. Quantum mechanics put an end to this simplicity.

In the quantum world, elegant experiments have shown us that nature, at its fundamental level, is counterintuitive. Matter exists simultaneously as both a wave and a particle, and the properties of matter can only be described statistically, based upon probability. The Heisenberg uncertainty principle, which holds that we can only know so much about the physical properties of matter, laid waste to classical determinism.
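
For reference (this is standard physics, not something specific to this discussion), the uncertainty principle is usually written as a hard lower bound on how precisely position and momentum can be known simultaneously:

```latex
\Delta x \,\Delta p \;\ge\; \frac{\hbar}{2}
```

No improvement in measurement technique can beat this bound; it is a property of nature, not of our instruments.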

This led to the great question: is quantum mechanics simply a system that accurately describes and predicts what we see during experiments, or does it describe the way nature actually is? And (here's the really important bit) is there a difference?

Ironically, Albert Einstein revolutionized physics with relativity theory by being the first physicist to say that the variability of time and space is not just a system for explaining observation: it actually describes the way the universe works. Prior to Einstein, physicists invoked the ether, an unseen and unknown substance permeating all space and through which light propagated. The ether was a fix to rescue classical physics from the observations of reality. Einstein then, later in life, took the opposite position by resisting the notion that quantum mechanics actually described nature, as opposed to just our experiments. (This is summarized by his famous quip, "God does not play dice with the universe.") Einstein wanted (but failed) to develop a unified field theory to explain what was really going on (to rescue the classical world he helped destroy), while younger physicists, such as Max Born, argued that what we see is what we get: science must restrict itself to what can be tested and observed.

So What is Consciousness?

This same type of dilemma that so vexed Einstein is now faced by philosophers and scientists trying to crack the consciousness nut. Everyone (outside of Cartesian dualists) seems to agree that neuroscience is making tremendous progress in explaining what Chalmers calls the "easy problems" of neuroscience: explaining observable, objective, measurable mental phenomena as brain states and function. But neuroscience has not offered an explanation for the subjective experience of consciousness, the "hard problem." (Chalmers outlines his views in this paper published in 1995.)

Chalmers believes that something more is needed than a reductionist description of brain function. He thinks there is a higher-order natural process going on, an actual new law of nature as fundamental as electromagnetism, that explains how brain activity causes consciousness.

I think this is akin to Einstein's desire to find an underlying unifying theory of physics that would do away with all the apparent quantum weirdness. Chalmers wants to find a grand unification theory of consciousness. His imagined new law of nature is his ether of the mind.

Chalmers may ultimately be correct (just as Einstein may ultimately turn out to have been correct: we may yet find some deeper reality underlying quantum mechanics). I cannot think of any reason why Chalmers must be wrong; there may be some higher-order process going on. But at this point I find Chalmers' proposition as unnecessary as the ether.

Further, Chalmers also says that perhaps consciousness arises from some property of nature that science cannot discover, even in principle. This gets back to the argument about the nature of science: do the methods of science tell us how nature is, or just about that part of nature that science can test? I believe it must be the latter; untestable notions have no useful place in science. Chalmers' untestable law of consciousness cannot lead anywhere.

Emergent Phenomenon

Perhaps the most direct challenge to Chalmers has come from philosopher Daniel Dennett. He has raised a number of excellent points challenging Chalmers' contention of the hard problem. Particularly revealing is the analogy to vitalism, the notion of a vital life force that separates animate from inanimate objects. He writes:

The easy problems of life include those of explaining the following phenomena: reproduction, development, growth, metabolism, self-repair, immunological self-defense, . . . . These are not all that easy, of course, and it may take another century or so to work out the fine points, but they are easy compared to the really hard problem: life itself. We can imagine something that was capable of reproduction, development, growth, metabolism, self-repair and immunological self-defense, but that wasn't, you know, alive. The residual mystery of life would be untouched by solutions to all the easy problems. In fact, when I read your accounts of life, I am left feeling like the victim of a bait-and-switch.

This imaginary vitalist just doesn't see how the solution to all the easy problems amounts to a solution to the imagined hard problem. Somehow this vitalist has got under the impression that being alive is something over and above all these subsidiary component phenomena. I don't know what we can do about such a person beyond just patiently saying: your exercise in imagination has misfired; you can't imagine what you say you can, and just saying you can doesn't cut any ice. (Dennett, 1991, p. 281-2)

The vitalists of old thus claimed that a vital force was necessary to explain life. Biologists then proceeded to explain all the components of life until nothing was left for vitalism to explain, except a vague sense that being alive is somehow a thing unto itself. Dennett compares this to the dualists (all dualists, including naturalistic dualists like Chalmers): if neuroscience can explain all of the cognitive phenomena that we can observe (memory, perception, communication, reflection, etc.), then perhaps it has explained consciousness as well as biology has explained life. There would then be no more need for a separate explanation for consciousness than there is for a vitalistic force to explain life.

In other words, life is what we collectively call a host of complex and organized chemical reactions resulting in the ability to use energy for growth and reproduction. Life emerges from the component parts that make it up, but there is no extra thing that is life.

Likewise, perhaps consciousness is what emerges when the brain is actively engaged in its various functions. When we are perceiving stimuli, keeping information in our working memory and manipulating it, carrying on an internal conversation with ourselves, etc., all of these things add up to consciousness without the need for any extra added thing. In this sense consciousness is an emergent phenomenon, not a new law of nature or a bit of mysterious magic. While I admit it is difficult to fully comprehend this notion, I find it the most compelling of all the options.

Chalmers' primary objection comes down to: why do we experience anything? Why aren't we zombies carrying out all the functions we ascribe to consciousness without being conscious? I think the simple answer is: what's the difference? What if carrying out all the functions of consciousness IS consciousness?

Dennett explains it thusly:

What impresses me about my own consciousness, as I know it so intimately, is my delight in some features and dismay over others, my distraction and concentration, my unnamable sinking feelings of foreboding and my blithe disregard of some perceptual details, my obsessions and oversights, my ability to conjure up fantasies, my inability to hold more than a few items in consciousness at a time, my ability to be moved to tears by a vivid recollection of the death of a loved one, my inability to catch myself in the act of framing the words I sometimes say to myself, and so forth. These are all merely the performance of functions or the manifestation of various complex dispositions to perform functions. In the course of making an introspective catalogue of evidence, I wouldn't know what I was thinking about if I couldn't identify them for myself by these functional differentia. Subtract them away, and nothing is left beyond a weird conviction (in some people) that there is some ineffable residue of qualitative content bereft of all powers to move us, delight us, annoy us, remind us of anything.

I would also add that there are different states of consciousness that correlate with different brain states: not only does function change, but subjective consciousness changes as well. For example, dreaming is a form of consciousness. You are still yourself and you are aware, and many of the components of consciousness are there, but it is also different. One difference is that reality testing (a specific cognitive function) is not as active. That is why dreams often make sense to you while you are dreaming but then seem unreal to your awake consciousness. Being inebriated is also an altered state of consciousness, as are all so-called encephalopathies, where overall brain function is impaired.

In other words, if you change or impair the "easy problem" functional components of consciousness you also change consciousness. The simplest explanation for this is that consciousness emerges from those functional components.

Conclusion

Philosophers and scientists still struggle to put into words exactly what consciousness is, and it does defy easy conceptualization. But I think the best explanation, and one that is consistent with all observable phenomena (both what we can objectively measure and what we subjectively experience), is that consciousness is an emergent property of all that the brain does. I do not think we need to invoke quantum weirdness, and I do not think we need to appeal to unfalsifiable inherent laws of nature, nor to non-physical causes.

At least that's what I think.

This is a method of teaching I use as a medical educator. I try, whenever possible, to get my students and residents to figure out the answers for themselves rather than just give them the answers. And I always try to give them some framework to hang new information on: why does this make sense given what else we know?

This also all fits in well with the scientific approach. All of science is describing one reality, and so it should all fit together and make sense as a whole. There are also more basic intellectual tools like logic that apply more generally, even beyond the discipline of science. It therefore makes sense to try to hook everything up as much as possible to see connections between various parts of science as well as between science and other intellectual disciplines.

Making these connections has the added benefit of aiding our poor human memory, which always needs an anchor. Disconnected factoids will tend to drift off into fantasy land, while well-connected information will tend to stay put.

In this way science, memory, learning, and skepticism all come together: more connections.

Your Baby Can Read - Not!


Published by Steven Novella under Uncategorized Comments: 31

I have received numerous questions recently regarding the latest infomercial craze, called Your Baby Can Read. This is a program that promises to teach infants and toddlers how to read, giving them a jump start on their education. Their website claims:

A baby's brain thrives on stimulation and develops at a phenomenal pace: nearly 90% during the first five years of life! The best and easiest time to learn a language is during the infant and toddler years, when the brain is creating thousands of synapses every second, allowing a child to learn both the written word and spoken word simultaneously, and with much more ease.

This is mostly true; in fact, the first four years of life are not only the best time to learn a language, they are the only time that language itself can be acquired. If a child is completely deprived of exposure to language during this time the neuro-developmental window will close. People can still, of course, learn second languages after the age of four, but it is more difficult, and their brains will never be as hard-wired for those second languages as they are for a primary language learned before age four. But the company goes off the rails of evidence when it conflates language with reading. There is no window of opportunity for reading like there is with language; adults who have never read can learn how to read. And while our brains are pre-programmed to absorb language, reading is more of a cultural adaptation.

The site also abuses evidence when it claims that:

Studies prove that the earlier a child learns to read, the better they perform in school and later in life.

Yes, but this might have something to do with smarter kids being able to learn to read earlier. Also, smarter parents, or just parents in a more stable and nurturing environment, may be more likely to read to their children early. What we have is correlational data with lots of variables. None of this necessarily means that forcing kids to learn to read early has any advantage.

In general, studies of neurological development and education show that forcing kids to learn some task before their brains are naturally ready does not confer any advantage. You cannot force the brain to develop quicker or better. In fact, it seems that children need only a minimally stimulating environment for their brain development program to unfold as it is destined to.

This further means that the whole "baby genius" industry for anxious parents is misguided. This is just the latest incarnation of this fiction.

There is another layer to this debate, however: that between phonics and "whole word" or "whole language" reading. One school of thought believes that children learn to read by first mastering the sounds that letters make and then putting them together (a la Hooked on Phonics). The second school of thought believes that children read whole words, and therefore can be taught to memorize whole words, with phonemic understanding coming later in its own time.

In recent years the phonics side of this debate has been dominant in the education community, but the whole word group remains a vocal minority.

However, it also seems that there is an emerging third group who combine the two methods in a practical way. People read both by constructing words from their phonetic parts and by memorizing and reading whole words. Have you ever received this e-mail:

Arocdnicg to rsceearch at Cmabrigde Uinervtisy, it deosnt mttaer in waht oredr the ltteers in a wrod are, the olny iprmoatnt tihng is taht the frist and lsat ltteer are in the rghit pcale. The rset can be a toatl mses and you can sitll raed it wouthit pobelrm. Tihs is buseace the huamn mnid deos not raed ervey lteter by istlef, but the wrod as a wlohe.

This would seem to support the whole word school of thought. However, we also learn new words by sounding them out, and still have to do this for uncommon words. So a blended approach seems practical and is gaining acceptance.
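
Incidentally, the transformation in that e-mail is easy to reproduce. Here is a small Python sketch (illustrative only; real reading research is far more careful) that keeps the first and last letter of each word and shuffles the interior:

```python
# Scramble the interior letters of each word, keeping the first and
# last letters fixed, as in the famous chain e-mail above.
import random
import re

def scramble_word(word: str) -> str:
    if len(word) <= 3:
        return word  # nothing to shuffle
    middle = list(word[1:-1])
    random.shuffle(middle)
    return word[0] + "".join(middle) + word[-1]

def scramble_text(text: str) -> str:
    # Only runs of letters are scrambled; punctuation stays in place.
    return re.sub(r"[A-Za-z]+", lambda m: scramble_word(m.group()), text)

print(scramble_text("According to research, the order of interior letters matters less."))
```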

The Your Baby Can Read program is an extreme whole word approach. Infants and toddlers are taught to memorize words, which they can then recognize and name from memory, even before they can understand what they are reading. Critics of this approach claim that this is not really reading, just memorization and association. Some even caution that by taking an extreme whole word approach, phonemic understanding can be delayed and the net result can be negative.

Others are critical of this entire approach of forced learning at a very young age. It is more productive, they argue, to give the child a loving, supportive environment and let their brain develop as it will. You are far better off spending your time playing with and bonding with your child than engaging in drills or having them sit in front of a video.

There also does not appear to be any evidence that programs like Your Baby Can Read have any long-term advantage. Their website does not provide links to any published studies to support their claims. Regarding the founder it declares:

Dr. Titzer's research has been published in scientific journals, including the prestigious Psychological Review.

True, but misleading, as a PubMed search on "Titzer R" came up with only two publications, neither of which has anything to do with learning to read. His Wikipedia page claims that he has published no scholarly work on infant reading.

Conclusion

While the background concepts are quite interesting, the bottom line is that we have another product being marketed to the public with amazing claims and no rigorous scientific evidence to back them up. This product also falls into the broader category of gimmicky products claiming to make children smarter or more successful academically.

Anxious parents wanting to give their kids every advantage are a great marketing demographic, in that they are easily exploited. But like all gimmicky schemes promising easy answers to complex or difficult problems (weight loss, relationships, or academic success), in the end it is likely to be nothing but a costly distraction from more common-sense approaches, like just spending quality time with your kids and giving them a rich and safe environment. What such products often really provide is a false sense of control.

Recently, the following question appeared in my Topic Suggestion thread:

hi dr novella,

though a little past the time of your debate with homeopaths at the University of Connecticut Health Center: A Debate: Homeopathy Quackery Or A Key To The Future of Medicine? (2007), im wondering why in your response to the actual debate on your blog you respond in the comments section to a post:

The bottom line is that homeopathy is a tangle of magical thinking, every element of which lacks a theoretical or empirical basis.

im unsure how you can make this statement when Dr. Rustom Roy disproved one of your main arguments, that homeopathic medicines are merely placebos, showed evidence that the structure and thus function of water can be changed for extended periods of time. this evidence presented refutes that the remedies are merely water, the inert substance that we all think it is. your quote above entirely ignores and contradicts the evidence that was shown to you.

this would be an interesting topic of discussion.

thanks.

Rustom Roy

I also recently learned that Rustom Roy died on August 26th, just a few weeks ago, at the age of 86. Roy was one of those enigmatic scientists who, on the one hand, had a conventional career as a materials scientist. But he also harbored a strong belief in the supernatural and pursued those longings in parallel to material science. Those interests most notably crossed in his claim that water has memory and that this can be an explanation for homeopathic effects.

During my one encounter with Roy, at the UCONN debate referenced above, at one point Roy declared that the materialist paradigm is dead. Clearly he wished it to be so, but in my opinion reports of the death of materialism are premature. Roy's evidence, at least what he was offering at the time, was John of God. This is a side issue, but John of God is a self-styled healer in Brazil who has been thoroughly exposed by Joe Nickell and others. He uses a combination of old carnie tricks and faith-healing revival techniques to ply his trade. I found it very enlightening that Roy was completely taken in by a fairly unremarkable charlatan, and thought that sufficient evidence to overturn the materialist paradigm. This was also not an isolated lapse. Roy founded the Friends of Health, dedicated to promoting a spiritual holistic approach to healing. Clearly this was a deeply held spiritual belief for Roy, and it is unfortunate that he allowed his religious beliefs to taint his science.

Homeopathy

There are many reasons why homeopathy is pure pseudoscience. First, it was invented rather than discovered: Hahnemann developed his principles of homeopathy from anecdote and superstition, without any chain of scientific research, evidence, or reasoning.

Homeopathy's three laws are also made-up superstition, and two hundred years of subsequent scientific advance have borne this out. Scientific knowledge builds on itself, and when someone discovers a fundamental property of nature, that leads to further discoveries and a deepening understanding. Homeopathy led to nothing. The law of similars is the notion that like cures like: that a small dose of a substance will cure whatever symptoms it causes in a high dose. This, however, is not based upon anything in biology or chemistry. It is often falsely compared to the response to vaccines, but this is not an apt analogy.

The law of infinitesimals, the notion that substances become more potent when diluted, violates the law of mass action and everything we know about chemistry. Also, many homeopathic remedies are diluted past the point where even a single molecule of the original substance is likely to be left behind. Hahnemann believed that the water retained the magical essence of the substance (homeopathy is a vitalistic belief system), and this also led to the more recent attempts to explain homeopathy in terms of water memory, which I will discuss further below. In addition to the basic principles of homeopathy being superstitious and contradicted by modern science, the clinical evidence also shows that homeopathic remedies do not work. There have been hundreds of clinical studies of homeopathy, and taken as a whole this vast literature shows that homeopathic remedies are indistinguishable from placebos.
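
To put numbers on the "not a single molecule" point (a standard back-of-the-envelope calculation): a 30C remedy has been diluted 1:100 thirty times, so even starting from a full mole of active ingredient the expected number of surviving molecules is

```latex
N \;=\; \underbrace{6.022\times10^{23}}_{\text{Avogadro's number}} \times \left(10^{-2}\right)^{30} \;\approx\; 6\times10^{-37} \;\ll\; 1
```

Anything beyond roughly 12C (a dilution factor of 10^-24) already makes it unlikely that even one molecule of the original substance remains.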

This is not even a scientific controversy; the evidence is overwhelming: homeopathy cannot work and does not work. Only ideology, wishful thinking, and scientific illiteracy keep it alive.

Water Memory

Modern defenders have desperately tried to justify homeopathy with scientific-sounding explanations, but they have failed miserably. The results are often hilarious (at least to those who have the slightest familiarity with actual science). One such attempt is the notion that water is capable of having memory, that it can physically remember the chemical properties of substances that have been diluted in it. The notion of water memory was first raised by French homeopath Jacques Benveniste in 1988. He was not studying the water structure itself, just trying to demonstrate that water can retain the memory of antibodies or other substances diluted in it. His research, however, has been completely discredited; among the many flaws in his methods, his lab was cherry-picking data, using improper statistics, and discounting data points that did not fit their desired results.

Roy, however, was referring to later research which he believed showed that water molecules are like bricks: they can be used to build structures that contain greater complexity and information than the bricks themselves. Specifically, that water molecules could encode in their structure the chemical properties of what was diluted in them.

However, the evidence does not support this claim. What has been demonstrated is that water molecules do form transient bonds with other water molecules, creating a larger ultrastructure, but these water structures are extremely short-lived. They are not permanent. In fact, research shows that water molecules very efficiently distribute energy from these bonds, making them extremely ephemeral. One such research paper concludes:

Our results highlight the efficiency of energy redistribution within the hydrogen-bonded network, and that liquid water essentially loses the memory of persistent correlations in its structure within 50 fs.

That's 50 femtoseconds, or 50 quadrillionths (50 x 10^-15) of a second. Contrary to Roy's claims, water does not hold memory. In fact it is characterized by being extremely efficient at not holding a memory. Scientists can argue about whether or not under certain conditions water can display ultrastructure lingering for longer than femtoseconds, but they are arguing over tiny fractions of a second.

There is no evidence that water can retain these structures for a biologically meaningful amount of time. It is amazing that Roy and others so enthusiastically extrapolated from the claim (itself probably bogus) that water can hold structures slightly longer than previously believed to the notion that this can explain the biological effectiveness of homeopathy. Let's take a close look at the non-trivial steps they glossed over.

If this kind of water memory is an explanation for homeopathy, then these structures would have to survive not only in a sample of water, but through the physical mixing of that water with other water. In fact, they would have to transfer their structure, like a template, to surrounding water molecules. This would need to be reliably repeatable over many dilutions. Then these structures would have to survive transfer to a sugar pill (homeopathic remedies are often prepared by a drop of the water being placed onto a sugar pill).

These water structures would then have to be transferred to the sugar molecules, because before long the water will evaporate. This pill will then sit on a shelf for days, months, or years, finally to be consumed by the gullible. The sugar pill will be broken down in the stomach, the sugar molecules digested, absorbed into the bloodstream, and then distributed through the blood to the tissues of the body.

Presumably whatever molecules are retaining this alleged ultrastructure are sticking together throughout this process, and finding their way to the target organ where they are able to have their chemical/biological effect.

Absurd does not even begin to cover the leaps of logic being committed here. In short, invoking water memory as an explanation for homeopathic effects just adds more layers of magical thinking to the notion of homeopathy; it does not offer a plausible explanation (even if water memory were real, which it isn't).

Chemical bonds (some chemical bonds) are strong enough to survive this process intact and make it through the body to the target tissue, where they can bind to receptors or undergo their chemical reactions. Even most chemicals, however, cannot make it through this biological gauntlet with their chemical activity intact, which is why the bioavailability of many potential drugs is too low for them to be useful as oral agents. They are simply broken down by the digestive process. The ephemeral bonds of this alleged water memory, in other words, if this fiction of water memory even existed, would have a bioavailability of zero.

Conclusion

Rustom Roy had a respectable career as a materials scientist, but likely his name would be unknown to the public were it not for his side interest in magical healing and homeopathy, which is certain to eclipse his more conventional career. His claim that water memory provides a scientific explanation for the action of homeopathic preparations is pure pseudoscience. It does not hold up to a review of the published scientific evidence, or even just thinking through how such water memory could exert a biological effect.

Homeopathy, as the water memory claims demonstrate, has become nothing but a desperate enterprise of piling pseudoscience on top of pseudoscience.

It is common knowledge that the human brain is horrifically complicated, perhaps the single most complex thing of which humans are aware. I am often asked if we understand how the brain works, often phrased to imply a false dichotomy, a yes-or-no answer. Rather, we understand quite a bit about how the brain is organized, what functions it has and how they work and connect together, and we know quite a bit about brain physiology, biochemistry, and electrical function. But there is also a great deal we do not know: layers of complexity we have not yet sorted out. I would not say that the brain is a mystery, but rather that we understand a lot and also have much to discover.

One aspect of brain function that is an active area of investigation is the overall organization of brain systems. Specifically, to what degree is the brain organized into discrete modules or regions that carry out specific functions, vs distributed networks that carry out those functions? I have written about this debate before, concluding that the answer is both. As is often the case in science, when there are two schools of thought, each with compelling evidence in their favor, it often turns out that both schools are correct. I would summarize our current knowledge (I would not call this a consensus, as there is still vigorous debate on this issue, but this is how I put the evidence together) as the brain being comprised of identifiable regions that are specialized for a specific type of information processing. These regions, or modules, connect and communicate with other regions in networks. Some of these networks represent discrete functions themselves, but they may also just be ways for different modules to communicate the results of their processing to other modules. A given module may participate in many networks, although they will tend to cluster around the same theme.

In the last decade fMRI technology has steadily improved, giving researchers the ability to visualize modules and networks in action. Studies with fMRI are still very tricky, and it seems that there are a lot of false positives out there, but there are some quality studies as well. The view of brain function emerging from these studies supports the general notion of modules involved in networks. A new study reports on efforts to map neural networks in the brain and finds that they can be correlated with specific cognitive functions, lending support to the role of networks in brain function. From the abstract:

The cognitive domains included processing speed, memory, language, visuospatial, and executive functions. We examined the association of these cognitive assessments with both the connectivity of the whole brain network and individual cortical regions. We found that the efficiency of the whole brain network of cortical fiber connections had an influence on processing speed and visuospatial and executive functions. Correlations between connectivity of specific regions and cognitive assessments were also observed, e.g., stronger connectivity in regions such as superior frontal gyrus and posterior cingulate cortex were associated with better executive function. Similar to the relationship between regional connectivity efficiency and age, greater processing speed was significantly correlated with better connectivity of nearly all the cortical regions. For the first time, regional anatomical connectivity maps related to processing speed and visuospatial and executive functions in the elderly are identified.

Essentially they found that the strength and speed of connections among different regions of the brain (networks) were the best predictor of cognitive function in the elderly.
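
The "efficiency" in studies like this is commonly the graph-theoretic notion of global efficiency: the average inverse shortest-path length between all pairs of regions. Here is a minimal sketch using the networkx library (the region names and connections are invented purely for illustration):

```python
# Global efficiency of a toy brain-region graph.
# Higher values (closer to 1) mean any region can reach any other
# region in fewer hops, i.e. a better-integrated network.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("superior_frontal_gyrus", "posterior_cingulate"),
    ("posterior_cingulate", "precuneus"),
    ("precuneus", "occipital_cortex"),
    ("superior_frontal_gyrus", "dorsolateral_prefrontal"),
])

# Mean of 1/shortest_path_length over all pairs of nodes
print(nx.global_efficiency(G))
```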

These results make sense given my summary above: brain functions depend upon specialized regions engaging in specific networks, and further upon communication among various regions and networks. As overall processing speed decreases we would predict functional decline in many areas, especially global functional areas like executive function (the ability to strategically plan and control our own behavior).

While this study, combined with other studies, does support the role of networks, it does not eliminate the role of modules in brain function. Atrophy or damage to specific brain regions also correlates with impaired function. So again we keep coming back to the conclusion that brain function is a combination of modules engaging in networks, and damage to either will reduce function.

I suspect that as our tools improve and our mapping and models become more detailed we will discover further layers of complexity only hinted at now. For example, we may find that some cognitive functions require combinations of networks (networks of networks of modules). The number of possible patterns of activity increases exponentially with each layer of complexity, so researchers definitely have their work cut out for them.

While research such as this demonstrates that we are making steady progress in understanding the brain, we are also getting to the point where we have a better idea of how much we still do not know. Also, progress in neuroscience has clearly been accelerating recently, boosted by new technologies such as fMRI, EEG mapping, and transcranial magnetic stimulation. Because these technologies are fundamentally computer-based, Moore's law is in full effect, and so they are likely to continue to improve geometrically (doubling every 18 months or so) along with other computer-based technology.
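
As a back-of-the-envelope version of that doubling claim (the 18-month figure is the estimate above; the arithmetic is straightforward), capability after t years is

```latex
P(t) \;=\; P_0 \cdot 2^{\,t/1.5}, \qquad \text{so over a decade } \; P(10) \approx 2^{6.7}\,P_0 \approx 100\,P_0 .
```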

Further, there are parallel research programs attempting to create virtual models of the brain, and they are making steady progress. At the same time, computer scientists are trying to develop better artificial intelligence systems, which draw from neuroscience and might in turn feed back into our understanding of it.

In short we are at the beginning of what may be the progressive merger of neuroscience and computer science. And this process is accelerating.

Smiling Babies, fMRI, Brain Modules, and Neural Networks

Published by Steven Novella under Skepticism Comments: 12

I have two daughters, about to turn nine and six. They are, in my completely subjective and biased assessment, the most adorable things in the universe. They evoke in me a powerful and complex set of emotions, an experience that every parent understands and no non-parent can truly appreciate.

Despite concerns about the testability of evolutionary psychological explanations, it seems obvious that such a reaction in a parent would be favored by natural selection, as would be any features in a child that provoked such a response from their parents. This in turn suggests that much of the response of a parent to their child is hard-wired in the brain and genetically determined. This doesn't rule out cultural and learned influences; it merely suggests that a strong parenting tendency is coded in the genes. A recent bit of research supporting this notion was published in the latest issue of Pediatrics: "What's in a smile? Maternal brain responses to infant facial cues." The study uses fMRI, which images blood flow to the brain, from which the relative activity of various brain regions can be inferred, to measure the reaction of mothers to various pictures: their own child (happy, neutral, and sad) and another child (happy, neutral, and sad). The results:

Key dopamine-associated reward-processing regions of the brain were activated when mothers viewed their own infant's face compared with an unknown infant's face. These included the ventral tegmental area/substantia nigra regions, the striatum, and frontal lobe regions involved in (1) emotion processing (medial prefrontal, anterior cingulate, and insula cortex), (2) cognition (dorsolateral prefrontal cortex), and (3) motor/behavioral outputs (primary motor area). Happy, but not neutral or sad own-infant faces, activated nigrostriatal brain regions interconnected by dopaminergic neurons, including the substantia nigra and dorsal putamen. A region-of-interest analysis revealed that activation in these regions was related to positive infant affect (happy > neutral > sad) for each own-unknown infant-face contrast.

In order to interpret what this study tells us I must first back up and discuss some basic principles.

fMRI

I have written a great deal about fMRI studies. This is the latest state-of-the-art neuroscience research tool, but in order to interpret studies like this it is important to understand its limitations. First, as I stated above, fMRI measures blood flow to the brain. Brain tissue that is more active, because its neurons are firing, will require more blood flow, and therefore blood flow can be used to infer relative brain activity. The theory is that when a subject is performing some task, the fMRI will show which parts of the brain are involved in that task.

The primary limitation of this use of fMRI is that it is impossible to know what is going on inside someone's mind. We cannot know what a subject is actually thinking, feeling, remembering, or attending to. We can only make a rough estimate based upon one of two basic strategies: we can expose the subject to some stimulus and then image their response to that stimulus, or we can make them perform a task. This is a reasonable approach: if someone is looking at a picture, regardless of what else might be going on in their mind, at least we know they are looking at the picture.

But all the uncontrolled bits of mental activity will act like noise concealing the signal of interest. To control for this, researchers generally look at multiple trials of multiple individuals and then use statistical analysis to pick the signal out of the noise: what brain region activity do the subjects have in common? Again, this is a reasonable approach, but it is important to understand how tricky and difficult using fMRI is. Without a solid protocol and careful analysis, the blotches of computer-enhanced colors produced by the fMRI are little more than a Rorschach test; researchers can see whatever they want in them. It also means that results should be reproducible before we put too much faith in them. Having said that, I think fMRI can be a powerful tool when used properly to help understand how the different parts of the brain are hooked up and interact with each other.
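
As a toy illustration of that trial-averaging logic (not the actual pipeline of any real fMRI analysis package), here is a sketch with simulated data: noise everywhere, a real signal in a few "voxels", and a per-voxel test to recover it:

```python
# Toy illustration of pulling a signal out of trial-to-trial noise,
# loosely analogous to averaging over many trials in fMRI analysis.
# All numbers are invented for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_trials, n_voxels = 50, 100

# Noise everywhere; a true activation (+0.5) only in the first 10 voxels.
data = rng.normal(0.0, 1.0, size=(n_trials, n_voxels))
data[:, :10] += 0.5

# One-sample t-test per voxel: is the mean response reliably above zero?
t, p = stats.ttest_1samp(data, popmean=0.0, axis=0)
active = np.where(p < 0.001)[0]  # a simple (uncorrected) threshold
print("Voxels flagged as active:", active)
```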

Brain Modules vs Neural Networks

The results of fMRI studies are only meaningful when they are interpreted within a paradigm of brain function and organization. Right now neuroscientists are working to form a consistent and predictive model of brain organization. At one end of the spectrum there are those who advocate what some call the "modular brain": specific anatomical parts of the brain serve specific functions. They are like modules: the fear module causes fear when activated, the anxiety module produces anxiety, and the subtraction module performs subtraction calculations, etc.

The neural networks paradigm, on the other hand, emphasizes the network of connections between different parts of the brain as primarily important to function. In this model there is no anxiety module, but rather a certain pattern of networks in a specific individual will produce anxiety. But some of the same parts of the brain active during anxiety may also take part in other networks that will produce other emotions, like happiness.

The modular paradigm works a bit better for fMRI studies. If a part of the brain lights up during an activity, then researchers can conclude that that part of the brain serves the function under study. The neural network paradigm makes fMRI studies more difficult (but not impossible) to interpret: the parts of the brain that light up are involved in the network, but it is not clear what they are doing. Michael Shermer wrote an interesting article for Scientific American called The Brain is Not Modular, where he argues that fMRIs have led some scientists to over-apply the modular metaphor. In the comments section of the article, neuroscientist Marco Iacoboni critiques Shermer's article. The result is a good summary of the debate over modularity.

This is a complex and rapidly evolving area of research, and I do not pretend to be an expert, as it is not my area of research. But here is my best shot at synthesizing what I have read. It seems that in practice most neuroscientists are somewhere in the middle between modules and networks, combining both concepts. I think this is the right approach because I think the brain combines both approaches. There do appear to be modules in the brain; we had evidence for this long before fMRI scans. As a clinical neurologist I can examine a patient with a stroke and in most cases tell you the exact size and location of the lesion that we will see on the MRI, based solely on the deficits on exam. If a patient has an isolated Wernicke's aphasia (an inability to understand verbal commands), then they will have a medium-sized lesion in the angular gyrus of the dominant (usually left) hemisphere. That piece of the brain serves a very specific purpose.

But while the more basic or straightforward cortical functions are highly modular, the more complex or sophisticated higher cortical functions are not. There does not appear to be a piece of the brain that provides attention or concentration, or creativity. These seem to follow more of a network model of brain organization.

It is also possible that while there are modules in the brain, their function may be more abstract, and the specific effect they have may vary depending upon which other connected parts of the brain are also active. The trick for neuroscientists may be finding the most accurate way to describe the complex and abstract function of the various brain modules. There may be a layer or two of hidden complexity in what the fMRI studies are actually telling us. It seems that the more we study the brain the more complex a puzzle it becomes, and perhaps we have not yet crossed the threshold where new information isn't just giving us more questions.

I don't mean to downplay the vast amount of information we already have about brain function. As I said, I put this into practice almost every day. Rather, I think that fMRI scanning has given us a new window into brain function, and it is revealing a new layer of depth to its complexity. The modularity debate reflects our current struggle to understand this new depth. As is typical of scientific endeavors, the debate is healthy and is likely to lead to improved models of brain organization and function.

Smiling Babies

With all this in mind, what does this new study of mothers looking at their smiling infants tell us? The abstract's conclusion says:

When first-time mothers see their own infant's face, an extensive brain network seems to be activated, wherein affective and cognitive information may be integrated and directed toward motor/behavioral outputs. Dopaminergic reward-related brain regions are activated specifically in response to happy, but not sad, infant faces. Understanding how a mother responds uniquely to her own infant, when smiling or crying, may be the first step in understanding the neural basis of mother-infant attachment.

It is no surprise that something as complex as a mother's reaction to her infant is very complex; I think I could have told you that without this study. But it does give us some specific information. There is an emotional component to the reaction, one that involves the reward center of the brain. This means that smiling babies make their mothers feel good in a way that reinforces the behavior. Some news stories about this study have likened this response to the high addicts get from crack. I think that's a stretch. The reward system seems to be one of those multi-purpose modules where the significance of its activity can only be understood in the context of the network in which it is firing.

The study also suggests that the emotions generated by seeing one's smiling infant are connected to, or trigger, certain thoughts. These emotions and thoughts also connect to specific behaviors. Although the study did not say it, I strongly suspect that one of those behaviors is the smiling, waving, and cooing that most parents will give back to their smiling infants. It likely also connects to more complex behaviors, like the instinct to protect and nurture the adorable little bundles of joy.

Crying babies (as any parent can tell you) elicit a very different emotional response. This is a very negative experience: one is motivated to stop the crying as soon as possible, and failure to do so quickly may result in guilt, shame, and feelings of inadequacy. It is no surprise that a crying baby elicits no reward response: must make the baby smile and laugh. Anything to make it smile... Aahhh! It's smiling, how adorable.

Categorizing Brain Function


Published by Steven Novella under Neuroscience Comments: 49
This week on the SGU I will be interviewing Jon Ronson about his latest book, The Psychopath Test, just being released in the US. I am not going to write about the book here (I will do that after the interview, although I have already read a preview copy). Rather, as a prelude to the interview, I want to discuss some background thoughts about how we think about brain function in the context of psychology and psychiatry. What I am actually going to give you is my own current synthesis, acknowledging that there is lots of wiggle room for interpretation and opinion, and that my own thoughts have been constantly evolving over the years.

Hardwiring

It is somewhat of a false dichotomy to think of brain function in terms of hardware and software. That compelling computer analogy tends to break down when you apply it to the brain, because in the brain hardware and software are largely the same thing. Memories are stored in the same neurons that make up the basic structure of the brain, and experiences can alter that structure (to some degree) over time. The brain is neither hardware nor software; it's wetware.

But it is still useful to think of brain function in terms of long-term structures in the brain (modules and networks that make up the basic functioning of the brain and change slowly, if at all, over time) and short-term structures and processes that subsume short-term memory, our immediate experiences, mood and emotions, and our attention and thoughts. The latter is as close as we get to software.

First let's consider what processes lead to the basic neuroanatomy of the brain: the factors that determine, for example, that the occipital lobe will process visual information, and that it will do so in a very specific way. We can talk about Wernicke's area in the brain because everyone seems to have one; it is always in the same place (although it can sometimes be on the opposite side), always serves the same function (to process language), and always makes the same connections to other parts of the brain. Yes, there is variation in neuroanatomy, just as there is variation in every biological parameter, but the consistency at this scale is very high.

As we delve into finer and finer details of anatomy, individual variation becomes greater and greater. While everyone has a Wernicke's area, some people seem to be born with greater language facility (perhaps potential is a better term) than others. What determines this?

It is clear that the ultimate cause is our genes; they contain the instructions for growing a brain. To borrow an analogy from Richard Dawkins (which was pointed out to me by a friend of mine, Daniel Loxton), the genes are not a blueprint but rather a cookbook. In other words, they do not determine where every neuron goes. They determine the rules by which the neurons are laid out, and by following those rules greater complexity emerges. Patterns are repeated, and neurons are mapped to the body and to sensory input. Our sensory cortex, for example, maps itself onto the surface of the body. This is a dynamic process that requires information input, and it is for this reason that the brain can contain much more information than the entire genome, let alone just those genes that code for brain proteins.

In addition to genes there are also epigenetic factors: environmental influences on how the genes are expressed. Genes can be turned on and off in various cell populations, and further, this is not a binary state, meaning that genes can be turned on to various degrees. A particular gene can be a little active, making a small amount of protein, or can be very active and crank out large amounts of its protein. The environment in the womb, for example, exerts powerful epigenetic influences on gene expression in the developing fetus. This includes the stress of the mother, the diet of the mother, and the levels of various hormones in the blood.

The third factor is developmental. The genes, modified by epigenetic factors, may have a plan for the brain, but that plan still needs to be executed in the developmental process. And that process can go awry, or be interfered with by external factors, like infection or the presence of a twin.

The combination of genetic, epigenetic, and developmental factors then results in the final structure of the brain. Now it gets really interesting, and increasingly difficult to make categorical statements.

Environment

The brain is an organ evolved to interact with the environment, to be adaptive, and to learn. That is the whole point of having a brain: to respond to the environment much more quickly than genes themselves can allow (even with epigenetic factors, which do allow for single-generation responses to the environment). It is therefore no surprise that after birth (and one can argue even before birth), as the brain grows and matures, it is adapting itself to the environment and responding to all the various inputs it is receiving. Experiences, culture, family life, and other environmental factors all influence brain function.

The never-ending question, however, is to what degree are the functions of the brain determined by hardwiring (shorthand for the genetic, epigenetic, and developmental factors I described above) vs environmental factors. Here is where opinion seems to outweigh evidence. My personal opinion is that both are involved in almost every aspect of brain function, and to various degrees. Some aspects of brain function are dominantly determined by hardwiring. This applies to all the basic functions of the brain, like vision, motor function, wake-sleep cycle, breathing, and the like. Other aspects are perhaps dominantly determined by environment, such as language and culture. And many things are somewhere in the middle.

Most relevant to psychiatry is the question of personality. To what degree are individual personality traits determined by hardwiring vs environmental factors? Here our ability to categorize brain function is stretched to the breaking point. Scientists argued bitterly about where to draw the line around the category of "planet." They had to deal with only a few variables: size, shape, gravitational influence, the presence of moons, and perhaps a couple of others. And yet what they found was a confusing continuum of objects, and no truly objective way of using the identified variables to come up with an operational definition for planet that was not controversial.

Psychologists and psychiatrists have hundreds of variables to consider, which interact with each other in complex ways. Categorization is all but hopeless. However, there still appear to be islands of stability, personality profiles that peak above the noise and can be identified and treated as real entities. But we can never get away from the complexity.
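To picture what an "island of stability" might look like statistically, here is a toy sketch (in Python with NumPy; the trait axes, data, and clustering method are all illustrative assumptions, not how psychiatric categories are actually derived): dense clusters in a noisy two-dimensional "trait space" can be recovered even though no sharp category boundary exists.

```python
# Dense "profiles" in a noisy trait space can be picked out as
# clusters even though no hard category boundary exists. The
# axes, data, and method here are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)

# Two dense profiles plus diffuse background, in a hypothetical
# 2-D trait space (say, anxiety vs. extroversion scores).
island_a = rng.normal(loc=[-2.0, 2.0], scale=0.4, size=(100, 2))
island_b = rng.normal(loc=[2.5, -1.0], scale=0.4, size=(100, 2))
noise = rng.uniform(low=-4.0, high=4.0, size=(80, 2))
points = np.vstack([island_a, island_b, noise])

def kmeans(data, k, iters=50):
    """Plain k-means: alternate assignments and centroid updates."""
    centers = data[rng.choice(len(data), size=k, replace=False)]
    for _ in range(iters):
        dists = np.linalg.norm(data[:, None] - centers[None], axis=2)
        labels = dists.argmin(axis=1)
        centers = np.array([
            data[labels == i].mean(axis=0) if np.any(labels == i)
            else centers[i]
            for i in range(k)
        ])
    return centers, labels

centers, _ = kmeans(points, k=2)
print("recovered cluster centers:\n", centers)
# The recovered centers sit near the two dense profiles; the
# "islands" peak above the noise even though many points fit
# neither category well.
```

The point of the sketch is only that recognizable peaks can exist in a continuum; the background points, which belong to no cluster, are the reminder that the complexity never goes away.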

Let me back up a bit, however, and get back to personality traits. The first challenge is identifying what these traits actually are. Is there really a part of the brain that determines how extroverted vs introverted we are? Is extroversion even a real brain function, or is it the end result of deeper underlying functions? This gets to one of the problems with thinking about human psychology. We are generally identifying three factors: mood, thoughts, and behaviors. We largely rely upon people to tell us how they feel and what they are thinking, and we can observe behavior. We then infer from these three end results what the underlying personality traits might be. We are like chemists before the periodic table of elements was formulated. We are not sure if we are dealing with the fundamental units of personality (although I think we are in some cases). It is still very much a work in progress.

However, there is another layer of complexity in that mood, thought, and behavior occur within various simultaneous contexts. Movies exploit this all the time: we may see a character behaving in a certain way that seems puzzling, or that makes us jump to certain conclusions about their personality. Only later is the context revealed, and we realize that the character was simply reacting to their situation in a way we might feel is reasonable. The issue of context is critical.

So mood, thought, and behavior are the end results of underlying personality tendencies interacting with the environment. The environment not only includes the immediate situation, but also the recent experiences of a person, and even the long-term experiences that may have taught them to react in a certain way, their family life, their culture, and any subcultures in which they may be involved. Before any conclusions can be drawn about a person's personality, we must therefore know a great deal about their individual context.

Another layer of complexity is that individual personality traits, assuming we can even identify them, do not exist in isolation but also interact with each other. Someone who is extroverted and aggressive will behave differently from someone who is extroverted and shy, or extroverted and highly empathic.

The number of variables we are dealing with now is staggering, and the result is chaos (in the mathematical sense).
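For readers who have not met chaos in the mathematical sense, here is a minimal sketch (in Python; purely illustrative, modeling nothing about the brain): a completely deterministic rule, the classic logistic map, whose trajectories from nearly identical starting points diverge until they are effectively unrelated.

```python
# A fully deterministic rule whose behavior is exquisitely
# sensitive to initial conditions: the logistic map at r = 4,
# the textbook example of mathematical chaos. Nothing here is
# specific to the brain; it only illustrates the term.

r = 4.0
x1, x2 = 0.300000, 0.300001  # starting points differ by 1e-6

for step in range(1, 41):
    x1 = r * x1 * (1 - x1)
    x2 = r * x2 * (1 - x2)
    if step % 10 == 0:
        print(f"step {step:2d}: x1={x1:.6f}  x2={x2:.6f}  "
              f"gap={abs(x1 - x2):.6f}")
# Within a few dozen steps the two trajectories are completely
# decorrelated: long-range prediction is hopeless in practice,
# even though the rule itself is simple and deterministic.
```

Conclusion: The Challenge of Psychology/Psychiatry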

At this point it should seem like folly to place a label on someone's psychological condition, and to some extent it is. However, as I said, there are recognizable islands of stability in the chaotic sea of psychology. Some people have a personality trait that sits at one extreme end of human variation and tends to dominate their mood, thought, and behavior. For example, someone may have their anxiety cranked up to maximum, to the point that they are anxious in situations that would not make most people anxious. Their anxiety controls their life, and overshadows other aspects of their personality.

They are still an individual, with many other personality traits and their own complex individual context, and therefore they are different from every other person with anxiety. But it is still meaningful to think of them as a person with an anxiety disorder, and to treat the anxiety to bring it down to a more functional level. The overwhelming complexity of the human brain does not mean we should throw up our hands and abandon all attempts to help people with what can meaningfully be called psychological disorders.

But it does mean that we need to proceed with extreme caution. We need to be skeptical of the tentative labels that we use to help guide our thinking about treatment. No person can be reduced to a label, to a single feature about them. People are not "schizophrenics"; they are complex individuals who have a suite of personality tendencies that together fit into a vague and fuzzy, if still recognizable, category we call schizophrenia. And this is not just being PC; it reflects the importance of recognizing how we think about brain function at every level, with all of the limitations that are implied.

I had all this in mind when I read The Psychopath Test by Jon Ronson, which details his personal journey to understand just a single psychiatric diagnosis, and the quagmire it led him to. I look forward to discussing his book with him this week.
