Friday, December 18, 2009

Move over, monkeys, there's a new smart animal in town

We used to think that the use of tools was a hallmark of our human species. Then we learned that in fact, some primates also use tools. For example, orangutans will use a stick to poke at ant hills to collect the fleeing ants for a tasty snack. The real piece of humble pie came when we discovered that birds also use tools. That’s right. Tiny-brained crows are able to use a curved stick to get at a treat in a tight spot. So I can’t say that it came as a shock to learn that the octopus, an invertebrate, also recently joined the smarty-pants tool-using club.

Most people who spend all day procrastinating at their desks with internet access have probably already seen this neat video featuring an octopus carrying around a coconut shell and hiding in it:




In a recent article published in the journal Current Biology (which includes the video above), researchers describe how octopuses (also called octopodes, I had to look this up) carry coconut shells for later defensive use. The octopuses are seen scurrying around (over distances of up to 20 meters!) with the shells cumbersomely positioned between their arms, and later assembling the two coconut shell halves to hide inside. The authors stress that the interesting feature of this behavior is that while the octopuses are carrying the shells, they are at increased risk of predation, because they move more slowly than normal and their heads are exposed. Therefore, the only benefit of carrying the shells is their future use as a shelter. Apparently, that's an amazing display of foresight for an organism that uses most of its brain cells to control its too many limbs.


While the video is definitely captivating, I think that whether this study is groundbreaking or not depends a lot on the definition of “tool use”. In the article, the researchers state that “a tool provides no benefit until it is used for a specific purpose”, so shelters like those of hermit crabs don't qualify. But even though the shells are only used at a later time, I'm not sure about calling them “tools”. I guess I mostly see them as a shelter. Is a shelter a defensive tool? Ah, semantics…



Reference: Defensive tool use in a coconut-carrying octopus. (2009) Finn JK, Tregenza T, Norman MD. Current Biology 19(23):R1069-70.

Friday, December 11, 2009

Personal space invaders

My last post was about H.M., a man who was missing a part of his brain called the hippocampus. Studying H.M. helped us greatly improve our understanding of several functions of this brain region. Needless to say, after all the attention generated by the study of this patient, every neuroscientist out there was on the hunt for patients missing other parts of their brain. By then, the era when we could butcher people just for kicks was gone, so scientists had to wait around.

Luck struck a group of scientists from California recently. They came across patient S.M., a 42-year-old woman missing her entire amygdala, a part of your brain not very far from the hippocampus. Studying this woman confirmed a lot of things we already knew about the amygdala, for example that it's important for fear. However, S.M. taught us something new: the amygdala is the part of your brain that controls your perception of an elusive concept: personal space.

The study of S.M. consisted of having her indicate at which point she started to feel uncomfortable as a person approached her from across the room. This preferred chin-to-chin distance was then compared with that of healthy, age-matched controls. S.M.'s preferred distance was significantly smaller than that of the control subjects. To make sure this wasn't due to some random fluke, quite a few factors that may influence personal space were controlled for, including presence or absence of eye contact, familiarity with the person approaching, etc. All in all, there was really no situation that could make S.M. uncomfortable, even when the person moved toward her all the way to the point of touching. The weird thing is that S.M. knew she should feel uncomfortable, and she understood the concept of personal space, but she just wasn't experiencing it.


It always amazes me to find out that something so vague, so variable (people who live in densely populated places typically have smaller personal spaces), and so elusive is actually regulated by a huge chunk of your brain. It leads me to wonder: is there a part of us, of our mind, of our personality, that isn’t already hardwired in our brain?


While S.M. gave us great insight into the biological basis of personal space, the greatest contribution of this study is possibly the ugliest journal cover image of all time:


Is the point of this story really conveyed by
a woman's face in some guy's armpit?

Reference: Personal space regulation by the human amygdala. (2009) Kennedy D.P., Glascher J., Tyszka J.M., Adolphs R. Nature Neuroscience, 12(10):1226-7.

Friday, December 4, 2009

A memorable amnesiac

When little Henry Molaison was 7 years old, he fell off his bicycle. Little did he know that this event was possibly the first link in a chain of events that eventually made him the most famous patient in the history of neuroscience.

Sometime after his bicycle accident, Henry started having seizures. At first they were just little seizures. Then, when he was 16, he had his first major seizure, and it all went downhill from there. Henry was unable to hold a job and did not respond to the anticonvulsant drugs available at the time, so he and his family considered a brain operation. His surgeon, Dr. Scoville, wanted to try a new experimental type of surgery, and Henry went for it. So in 1953, when Henry was just 27 years old, he got both his medial temporal lobes removed (kind of like a lobotomy, but instead of removing the front of the brain, they removed part of the sides, just about at the level of your ears).

The operation was extremely successful: the seizures stopped! Unfortunately, this happy outcome was overshadowed by a very strange “side effect”: Henry now suffered from severe anterograde amnesia, meaning he could no longer form new memories. He also suffered from retrograde amnesia: he could not remember events from the three to four days prior to the operation, as well as some events from a more distant past. But the anterograde amnesia was the most astonishing. You could have a conversation with Henry, then leave the room for a few minutes, and when you came back, it was as if he had never met you. Can you imagine?

Dr. Scoville called on one of his friends, Dr. Penfield (that's right, Wilder Penfield, from Montréal!) who then sent his graduate student, Brenda Milner, to study Henry. Even after all that had happened to him, Henry remained a friendly guy and was okay with being studied extensively. The knowledge we gained from Henry (known as H.M. until his death to preserve his anonymity) laid the foundations for much of what we know today about memory (which, I agree, isn't all that much, but still). For example, prior to Henry, it was thought that memories formed all over the brain. Studying him taught us that instead it is the hippocampus, a part of the brain that was removed during his operation, that is crucial in forming memories. Studying Henry also taught us that there are multiple memory systems, a most unexpected discovery. While Henry couldn't remember what he had for breakfast 10 minutes ago, he could learn a new task, like drawing a star while only looking at his hand in a mirror (do try this at home to understand the challenge it represents). He would get progressively better and faster at it but never remembered doing the task before. This went on until one day he drew the star and exclaimed: “This was much easier than I thought it would be!”. This led to the notion that there are different types of memory, such as declarative (conscious knowledge of facts and events) and procedural (skill-based knowledge).


The reason I'm bringing up Henry today is that this week, one year after his death, neuroscientists at UCSD started slicing his brain into extremely thin slices. These slices will be available for scientists all over the world to look at, analyze and study. Even though he's gone, there is no doubt we still have much to learn from the most famous neuroscience patient.



Reference: The legacy of patient H.M. for neuroscience (2009) Squire, L. R. Neuron 61(1):6-9

Thursday, November 26, 2009

The reason why I write this blog

Just spotted this headline in a Canadian online news aggregator:

“Severe reactions to H1N1 shot: one death probed”

Then, almost at the very end of the article:

“On the issue of serious adverse reactions to the H1N1 vaccine, Butler-Jones said the rate of anaphylactic events was about 0.32 cases for every 100,000 doses of vaccine delivered - a figure that's within the norm for mass vaccination efforts.”

I just got my shot.

Do we really need two to tango?

It's not easy finding an adequate male for reproduction. He needs to be manly, but not macho. He needs to be funny, but not immature. He needs to be romantic, but not needy. Ever wonder why we bother to go through the whole finding-a-mate dance when some species can just self-fertilize? Many animals and plants reproduce like us by outcrossing (which means two parents), but there are also a number of selfing species (meaning they reproduce by self-fertilization or asexually). Oddly, when you look at the numbers, outcrossing doesn't make a lot of sense. When selfing organisms (for example, aphids) reproduce, 100% of the offspring can make more offspring. When outcrossing organisms reproduce, we end up producing a variable proportion of those pesky boys (50% in the case of humans), who really are no good when it comes to having babies (except maybe for this guy). For many years, scientists have been speculating as to the evolutionary benefit of this numerical disadvantage. Recently, researchers tackled the question in a new way: by recreating evolution experimentally.

First, an important term to define: evolution. For the sake of this post, let’s use a simple definition: evolution is the change in the genetic material (genes, made from DNA) of a population of organisms from one generation to the next. Variations in the genetic material can occur in a few different ways, but a main one is mutations. Mutations can arise due to different factors: for example, a mistake can be made when the DNA is being copied during cell division, or the DNA can be damaged due to exposure to radiation or chemicals.


The study, published recently in the journal Nature, looks at a type of worm, C. elegans. Populations of this worm are composed of males and hermaphrodites, meaning it can reproduce either by selfing (hermaphrodites fertilizing themselves) or by outcrossing with males. The researchers were able to genetically engineer these worms to make two different populations: one that can only reproduce by selfing, and one that can only reproduce by outcrossing. This created a very valuable tool to look at how these populations deal with various evolutionary hurdles.


The researchers took both populations of worms (the selfing worms and the outcrossing worms) and exposed them to a chemical that increases the rate of mutations (a way to mimic a “sped up” evolution). They also created an environment where each population had to navigate a worm-scale obstacle course to reach its food. These two steps were important because they both imposed strong selection pressure. Once the experiment was set up, all the researchers did was let the worms reproduce for 50 generations and watch which population did better.


Male readers, you're safe! Even with all the hurdles, the outcrossing population of worms managed to maintain its fitness (its evolutionary health) over the course of the experiment. The selfing populations of worms, however, showed a significant decline in fitness. To make sure this effect was not just a fluke, the researchers tried a different hurdle: they exposed both populations of worms to a disease-causing bacterium. Initially, this bacterium caused an 80% mortality rate in both the outcrossing and the selfing worms. This means that the worm populations quickly had to adapt, either by avoiding the bacteria or by becoming resistant to them. This experiment confirmed what the researchers saw previously: the outcrossing population adapted rapidly to the bacteria and showed a significant increase in fitness over 40 generations, while the selfing population did not manage to adapt.


While this experiment may seem like a no-brainer (if we didn't need males, they probably wouldn't be around anymore, so they must be useful for *something*), it represents the first experimental test of the selective pressures that favor the evolution and maintenance of outcrossing.
By digging into the genetics of the worms, the researchers were able to come up with two explanations for the usefulness of outcrossing. The first explanation is that outcrossing reduces the effect of harmful mutations. For example, if part of my DNA is damaged, it can be compensated for in my children if my partner's DNA is intact. If I weren't mixing my DNA with someone else's, my offspring would inevitably inherit my defective DNA, and this would weaken the population. The second reason is that in selfing organisms, mutations (good or bad) are trapped in a single genetic background. This means that a beneficial mutation can never combine with another that may have occurred in a different genetic background. Therefore, beneficial mutations can never add up or even synergize. This results in stalling evolutionary fitness.
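If you like to tinker, that first explanation (outcrossing masking harmful mutations) is easy to see in a toy simulation. Below is a minimal sketch of my own, not the model from the paper: individuals carry two genome copies, harmful mutations only hurt when both copies carry them, and selfers quickly end up with two matching damaged copies.

```python
import random

def mean_fitness(selfing, pop_size=200, generations=50, mut_rate=0.2):
    # Each individual carries two genome copies; we track the number
    # of harmful mutations on each copy.
    pop = [[0, 0] for _ in range(pop_size)]
    for _ in range(generations):
        new_pop = []
        while len(new_pop) < pop_size:
            mom = random.choice(pop)
            dad = mom if selfing else random.choice(pop)  # selfing: one parent
            child = [random.choice(mom), random.choice(dad)]
            if random.random() < mut_rate:                # new random mutation
                child[random.randrange(2)] += 1
            # Mutations are recessive here: only the cleaner copy matters,
            # so one intact copy masks the damage entirely.
            fitness = 0.9 ** min(child)
            if random.random() < fitness:                 # selection
                new_pop.append(child)
        pop = new_pop
    return sum(0.9 ** min(ind) for ind in pop) / pop_size

print(mean_fitness(selfing=True))   # fitness erodes over the generations
print(mean_fitness(selfing=False))  # outcrossing keeps mutations masked
```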

So while it’s sometimes hard to find Mr. Dreamy, it seems like in the long run, it’s worth it.


Meet C. elegans, evolutionary tool extraordinaire

Reference: Mutation load and rapid adaptation favour outcrossing over self-fertilization. (2009) Morran LT, Parmenter MD, Phillips PC. Nature, 462(7271):350-2.

Tuesday, November 17, 2009

The fine print

Don’t you just hate it when you sign up for a new telephone/internet/cable plan thinking the offer is such a good deal, only to find out three months later that after the “introductory period” you are actually charged way more? You typically only make that mistake once and then learn to read the pesky fine print. In science, just like in advertising, the claims and the fine print need to be scrutinized. And typically, the bolder the claim, the more attention is paid to the fine print. For example, if I’m going around boasting that I discovered that a molecular component Y of the protein X interacts with a sub-component of the peptide Z, chances are, no one will really care enough to read about it (ah, the joys of the PhD thesis). But, if I’m going around claiming that I have a vaccine that prevents AIDS, you can be sure people will read the fine print.

In September of this year, a press conference was held to announce the results of the largest AIDS-vaccine study ever conducted. The study, funded in part by the US Army and having cost a whopping $105 million, was billed as a “first success” in the search for an AIDS vaccine, a “yes we can” moment. However, not unlike the story about the Darwinian fossil, this press conference came before the findings were published in a peer-reviewed journal. When the paper came out in October and the fine print was read by all, the excitement dropped significantly: as it turns out, many results from the study were negative, and the positive results showed that the vaccine only protects about a third of the people who get it, and only for a short while.

The actual article presents the results from three different analyses. The first analysis looked at all the participants in the study (over 16,000 people). For this group, the vaccine had 26% efficacy. Sounds like a good start, right? The problem is that the p-value for this effect was 0.08. This means that even if the vaccine did nothing at all, there would be an 8% chance of seeing an effect at least this large just by luck. This is quite a bit over the golden scientific cutoff of 5%, so the result should be considered not significant. The second analysis started with the same 16,000 participants, then excluded around 4,000 people because they didn't follow the protocol exactly (for example, they didn't get the vaccine at the correct time). Logically, the results should look better. Interestingly, they don't. In this case, the vaccine still showed a modest protective effect, but the p-value was now 0.16, meaning there's an even bigger chance this is just a fluke. Finally, the third analysis looked at the initial 16,000 people minus 7 who turned out to have been infected with HIV before the study even started. In that case, the vaccine still showed a similar effect (about 30% efficacy), but this time, the p-value was under the cutoff at a less-than-impressive 0.04. Phew! Something to brag about during the press conference!
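For the curious, here is roughly how those two numbers are computed. This is a minimal sketch with made-up counts (the post doesn't give the trial's raw numbers), just to show the mechanics:

```python
from scipy.stats import fisher_exact

# Hypothetical group sizes and infection counts, for illustration only
vaccinated, placebo = 8000, 8000
infected_vax, infected_placebo = 51, 74

# Efficacy: how much lower the infection rate is in the vaccinated group
efficacy = 1 - (infected_vax / vaccinated) / (infected_placebo / placebo)
print(f"efficacy: {efficacy:.0%}")  # ~31%

# The p-value asks: if the vaccine did nothing, how likely is a split
# between the two groups at least this lopsided?
table = [[infected_vax, vaccinated - infected_vax],
         [infected_placebo, placebo - infected_placebo]]
_, p_value = fisher_exact(table)
print(f"p-value: {p_value:.2f}")
```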

The negative, statistically insignificant results, combined with a few other issues (for example, the short-term protection offered by the vaccine, only about a year), have drawn a number of criticisms, not necessarily of the study, but of the bragging. As for the study itself, opinions are divided. Some see it as a glimmer of hope and an encouraging start; many see it as a weak effect, mostly not statistically significant, and possibly a waste of (a lot of) money.


With all the vaccine controversies and conspiracy theories going around these days, all I have to say is throw this new one in the mix.



Reference: Vaccination with ALVAC and AIDSVAX to Prevent HIV-1 Infection in Thailand. (2009) Rerks-Ngarm S. et al. New Engl J Med. [Epub ahead of print]

Wednesday, November 4, 2009

To panic or not to panic: An interview with the swine flu (part II)

When the H1N1 story erupted in the media a little while back, I wrote a short Q and A post to give a scientific perspective on the topic. After that I really didn’t give much thought to the H1N1 flu. Since I don’t have a television at home and never listen to the radio, I live in a kind of media void. It’s glorious in there, let me tell you. Unfortunately, I was recently at a relative’s place and watched the news. I was shocked to see an endless stream of panic-inducing warnings and news about the H1N1, so the microbiologist in me decided to revive the topic here on Scientific Chick with the latest scientific information. So here you have it, a second exclusive interview with Mr. Swine Flu himself.

Scientific Chick: Mr. Flu, thanks for agreeing to come back on Scientific Chick. How have you been? It seems you are gaining in strength and causing more severe illness.


Swine Flu:
I've been well, thank you, but please note that I've changed my name to H1N1. As you know, we are now in the regular flu season, so I am quite busy going around and infecting people. However, unfortunately for my plans to dominate the Earth, I have not been causing increasingly severe flu. It may seem that way because with more people infected, the seriously ill patients become more apparent, but I'm still the same guy.


SC: How do you feel about our new H1N1 vaccine?


H1N1:
I find it quite sad. You see, the vaccine is composed of my virus brothers, completely killed and inactivated. When talk of the vaccine started, I had a glimmer of hope that maybe, just maybe, we could infect humans through the vaccine, but alas, that's not possible. There is no chance a vaccine containing my dead relatives will give you the flu.


SC: Surely with a vaccine so new, there is some chance of things going awry for us humans?

H1N1:
I wish! It is often assumed that because this vaccine is new, it is untested and unsafe. Unfortunately for me, because I am so similar to my seasonal cousin, the H1N1 vaccine has been produced the same way regular flu vaccines are produced every year. Health organizations (like the NIH) around the world have conducted rigorous clinical trials showing that the vaccine is both safe and effective. It's been licensed by all the relevant government agencies, and even though I tried to be very sneaky by showing up unexpectedly like I did, no shortcuts were taken.


SC: Most formulations of the vaccine contain thimerosal, though, so if we don't catch you, you'll at least have the consolation that we'll suffer from mercury poisoning and all the associated conditions.


H1N1: Well, that would be nice, but you are grossly exaggerating. While it is tempting to blame thimerosal (a mercury-derived preservative) for a number of conditions, there is just no scientific evidence that thimerosal is unsafe. Since the hypothesis that thimerosal causes autism broke out many years ago, scientists have been working very hard to prove or disprove that link. Interestingly, some of the best, largest, most well-controlled and unbiased clinical studies came out of this controversy, all concluding that there is no link. The irony is that there is more mercury in a can of tuna than in any vaccine.


SC: So really, if I wanted to give up vaccines for fear of thimerosal, I’d also have to give up ahi tuna tacos? That’s just not a possibility. What about adjuvants in vaccines? How do you feel about those?


H1N1:
I like adjuvants, because they boost your immune response to the vaccine, so less inactivated virus is needed per dose, which means fewer deaths in my family. That being said, whether you receive a vaccine with or without an adjuvant depends on where you live. In the USA, no adjuvants will be used. In Canada and some European countries, the vaccines contain adjuvants. Adjuvants are not new, and they also have a good safety track record. I hear a lot of concerns about squalene being used as an adjuvant, but you find squalene in olive oil.


SC: I hear a lot of discussions about Guillain-Barré Syndrome. Should we worry?


H1N1:
Guillain-Barré Syndrome (GBS) occurs when your body's immune system turns against its own nerve cells, and this leads to paralysis. If caught early enough, it can be reversed. And yes, vaccines are among the many risk factors for GBS, at a rate of about one in a million. Guess what else is a risk factor for GBS? Me! The nasty ol' flu. Pick your odds.


SC: Are you hiding in my tasty pork tenderloin and bacon?


H1N1:
No. You can only catch me through coughing or sneezing droplets from someone who is already infected, or through touching something contaminated and then letting your hands get to your face before they get to a sink to be washed.


SC: Should I wear a mask if I want to avoid you?


H1N1:
Also a no. The Public Health Agency of Canada doesn’t recommend wearing surgical masks to avoid catching me. While this may sound counter-intuitive, there is actually scientific evidence that shows that this is not an effective way to prevent flu transmission in the general public. People tend to use the masks incorrectly, contaminate themselves when putting the mask on or taking it off, and increase their risk of infection by trapping me near their mouth. That would really make it too easy for me.


SC: H1N1, thank you.
While it was lovely having you, I hope this was the last time.

H1N1:
Thank you. Did you want to come closer? I have a secret for you…



This plush H1N1 virus is safe to cuddle with!


References:

Autism and vaccination-the current evidence. (2009) Miller L, Reynolds J. J Spec Pediatr Nurs 14(3):166-72.

A Novel Influenza A (H1N1) Vaccine in Various Age Groups. (2009) Zhu FC et al. N Engl J Med. Oct 21.

The H1N1 flu pandemic. What you need to know. (2009) Mayo Clin Womens Healthsource (11):4-5.


The Centre for Disease Control and Prevention – www.cdc.gov
Public Health Agency of Canada – www.publichealth.gc.ca

Sunday, October 25, 2009

A new kind of mind control

Remember subliminal messages? Those images supposedly flashing by too quickly for your mind to register, but still managing to convince you to drink more soft drinks, eat more fries, buy a luxury car? While those days may not be over yet, new forms of mind control (albeit more biological than psychological) are emerging thanks to some of the tiniest of creatures, bacteria.

A friend of mine, Cal, recently alerted me to an interesting article about optogenetics. If you’re not familiar with the word, that’s because it’s very new, and it essentially means playing with light and genetics at the same time. It’s all the rage in neuroscience right now and articles such as the one I’ll be describing in this post are popping up every week.


It all starts with a tiny little pump called halorhodopsin, found in salt-loving microbes (halobacteria). This pump sits at the surface of cells and pumps chloride ions from outside the cell to the inside (table salt is sodium chloride: same chloride). Cells use chloride for different reasons, but this pump can be especially relevant for brain cells (called neurons). Neurons pass information to one another through electrical currents. And it just so happens that chloride ions are negatively charged. That means that if many chloride ions accumulate inside a neuron, the cell becomes increasingly negatively charged, making it harder to reach the voltage threshold it needs to fire and pass signals on to other neurons.


Other types of chloride pumps already exist in your brain cells, but almost nobody makes a fuss about those. So how is halorhodopsin different? This is where the “opto” from “optogenetics” comes in. This particular pump is activated by light. This means that if neurons have this special pump, you can control whether they are active or not just by flashing a light onto them.
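To make the chloride logic concrete, here is a toy neuron model (a minimal sketch of my own, nothing to do with the paper's actual methods): a leaky integrate-and-fire cell where turning on the "light" adds a steady negative chloride current that keeps the voltage below the firing threshold.

```python
def count_spikes(light_on, steps=2000, dt=0.1):
    v, rest, threshold = -65.0, -65.0, -50.0   # membrane voltage (mV)
    tau = 10.0                                  # membrane time constant
    spikes = 0
    for _ in range(steps):
        drive = 20.0                            # steady excitatory input
        chloride = -15.0 if light_on else 0.0   # light-gated Cl- current
        v += dt * ((rest - v) + drive + chloride) / tau
        if v >= threshold:                      # threshold crossed: spike!
            spikes += 1
            v = rest                            # reset after the spike
    return spikes

print(count_spikes(light_on=False))  # fires at a steady rate
print(count_spikes(light_on=True))   # silenced: never reaches threshold
```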


In a recent paper published in PNAS, researchers genetically engineered zebrafish so that their brain cells expressed the special light-activated chloride pump. The researchers then recorded the electrical signals generated by the brain cells (they look like spikes, much like what you would see on an EEG). I don't know if you can imagine what kind of feat that represents, but I'd like to make a motion to modernize the saying “like finding a needle in a haystack” to “like poking an electrode into a brain cell of a live fish”. Once they knew what the signals looked like in normal conditions, they shone the light on the fish*, and amazingly, all the brain cells went quiet. It worked! The light activated the pump, negatively charged chloride ions accumulated in the cells, and it became too difficult for the cells to reach their spiking threshold.



The black lines are the current spikes that normally occur when brain cells transmit information. The yellow section is when the researchers shone the light: no more spikes.


Now a fish doesn't have that many brain cells to start with, and since it spends most of its life moving, it's pretty safe to assume that a large portion of the fish's brainpower is devoted to swimming. The researchers thought they had a pretty cool tool to test this, and so they did. They put a bunch of genetically engineered zebrafish in a dish, watched them swim around for a bit, and then shone a light on them*. Sure enough, the fish stopped moving and lost coordination. I realize we're talking about lousy, bottom-of-the-food-chain fish here, but think about it: *that's* mind control.


The article goes to great lengths, detailing what specific part of the brain controls the swimming behavior and describing control experiments that confirm this isn't a fluke (i.e. the fish aren't just spooked by the light). All things considered, it's a very elegant example of how to use optogenetics to better understand the brain. And the relevance of these advances lies in an increased understanding not only of the brain, but also of diseases of the brain. Recently, these new techniques gave us important insights into Parkinson's disease in animal models.


What about using these tools as ways not only to understand disease, but also to treat them? What if we made our own brain cells express this special pump so we could use light to activate or inhibit different areas of our brains? While this may seem like science fiction right now, don’t be so sure. I attended a talk on optogenetics recently, and the researcher firmly believed that this emerging field of neuroscience would eventually cure blindness. In the meantime, let’s see if you can think of all the ethical questions this would raise…


* For these experiments, the researchers used fish at the larval stage. The skin of the fish at that point is transparent, which allows the light to reach the brain cells.


This little zebrafish is doing his best to contribute to science.

Reference: Optical control of zebrafish behavior with halorhodopsin. (2009) Arrenberg, A.B., Del Bene, F., Baier, H. Proc Natl Acad Sci USA, 106(42):17968-73.

Monday, October 12, 2009

Yet another reason for a good night's sleep

How much do you sleep at night?

If you’re like most of the people I know, the answer is “not enough”. There’s a reason Starbucks coffee shops are popping up literally meters away from one another. Everybody has a reason to be sleep-deprived: new kid, big job, World of Warcraft, etc. So what if we’re cutting the night short a few hours? Other than the need for an overpriced coffee (or two, or three), it should be just fine, right?


Maybe not, if you believe the latest research on sleep and Alzheimer’s disease.


Alzheimer's disease, a debilitating form of memory loss and cognitive decline, is the most common form of dementia. It is thought to be caused at least in part by amyloid beta (A-beta), a peptide (short protein). Your brain cells (neurons) normally make some A-beta. The problem that arises with Alzheimer's disease is that neurons make too much A-beta, and these molecules clump together in chunks. It's those A-beta chunks that are toxic, and their formation is concentration-dependent, which means the more A-beta you have floating around, the higher the probability of toxic chunks forming.


The recent article published in the journal Science looks at levels of A-beta in the brains of normal mice and in the brains of a mouse model of Alzheimer’s disease. The researchers studied the mice when they were 3 months of age, so well before big deposits and chunks of A-beta start occurring.


The interesting finding of this study is that the levels of A-beta in the brains of both types of mice were significantly correlated with the amount of time the mice spent awake. More time spent awake led to more A-beta. Because the normal control mice also exhibited this relationship, it is not linked to the disease; it's just a normal fluctuation of A-beta levels tied to the sleep-wake cycle. To be certain this link was relevant to human physiology, the researchers also tested this in healthy humans and, sure enough, they observed the same correlation.


Not surprisingly, when the researchers proceeded to sleep-deprive the mice, the mice showed an even larger increase in A-beta levels. This increase was also observed when the mice were given a drug that promotes wakefulness (don't extrapolate this to coffee just yet… But maybe keep it in mind…). The study also points out that sleep-deprived Alzheimer's mice showed much greater numbers of A-beta chunks (the toxic stuff) than non-sleep-deprived mice.


If you come to Scientific Chick for relevant findings, this one is for you. The study essentially implies that optimizing sleep time could potentially inhibit the formation of chunks of toxic A-beta and slow the progression of Alzheimer’s disease.


We all know that Alzheimer’s disease is terrible, and that sleeping in is glorious. Let’s just put two and two together, shall we? Easier said than done, I know…


Mr. Minou gave up on caloric restriction but approves of this new approach to ward off age-related diseases.

Reference: Amyloid-beta dynamics are regulated by orexin and the sleep-wake cycle. (2009) Kang JE, Lim MM, Bateman RJ, Lee JJ, Smyth LP, Cirrito JR, Fujiki N, Nishino S, Holtzman DM. Science Sep 24. [Epub ahead of print]

Monday, September 28, 2009

Yet another reason to exercise

Last weekend I went for a bike ride, and when I reached the bottom of the big hill leading to UBC, I noticed quite a bit of activity going on. I didn't pay too much attention at first, but once I was booting up the hill, I was passed by several senior citizens on top-notch bicycles and I started getting curious. I asked a person who seemed to be volunteering for the event what was going on. As it turns out, I was cycling right in the middle of the BC Seniors Games. Now for those of you who might not know me, my thesis research has to do with aging and the brain, and nothing warms my heart like witnessing older adults and seniors exercising. I had just hit the jackpot!

The reason I'm so thrilled to see seniors exercise is that exercise is the single best thing they can do to preserve their brains. Today's paper highlights recent research done in California that shows just that.

First, a bit of background. You have a gene called APOE (mice also have it). It comes in 3 flavors, and everyone carries a combination of two of them, one from each parent: APOE2 (not important for today's article), APOE3 and APOE4. If you got lucky and scored the APOE3 kind, all is well. If you happen to be in the 20-25% of the population carrying the APOE4 kind, you may be in trouble: APOE4 is a known risk factor for Alzheimer's disease. Does it mean you'll get Alzheimer's disease for sure? No, but you are 10 to 30 times more at risk of developing Alzheimer's disease if you carry the APOE4 gene.

In this paper, researchers compared old APOE3 (normal) and APOE4 (at risk for dementia) mice. In general, aged APOE4 mice experience cognitive decline faster and earlier than APOE3 mice. The researchers were interested in studying whether exercise (running on a mouse wheel!) had any effect on this cognitive decline.

The researchers used cognitive tasks that rely on a part of the brain that's important for memory, the hippocampus. One of the tasks, called place recognition, involves putting a mouse in an arena with two objects. The mouse is then removed from the arena, one object is moved, and the mouse is put back in the arena. Presumably, a normal mouse will then spend more time exploring the object in the new location. For this task, the aged APOE4 mice were initially impaired compared with the APOE3 mice: during the second trial of the task, they tended to explore both objects for similar amounts of time, instead of spending more time on the object at the new location. This result suggests that the APOE4 mice were unable to remember the initial object locations well. The good news? Mice that exercised did significantly better at this task. Interestingly, this was true for both APOE3 and APOE4 mice. Even more interestingly, exercise improved the scores of both types of mice on all the tasks that tested the hippocampus.

What's going on in the brains of these exercising mice? It is thought that exercise increases the levels of a protein called BDNF (for Brain-Derived Neurotrophic Factor). BDNF regulates many important functions in the brain, including the making of new neurons and the making of new connections between neurons, and these effects are thought to be important for memory.

Regular readers of Scientific Chick know not to get too excited when I report about animal studies. Well, I'm happy to add that the results that were observed in those mice were also observed in humans. In fact, there are countless human studies out there that confirm that physical activity is a powerful way to improve and maintain your cognitive abilities.

When I try to urge certain people to exercise (you know who you are), I almost always hear the same excuse: “Well, my uncle so-and-so never got off his couch and he lived to be 100!” In some cases, heredity can be on your side, that's true. But genetics can be quite the lottery, and it's important to keep in mind that several forms of cognitive decline, including the most common form of Alzheimer's (called “sporadic” in scientific lingo), are not hereditary.

So to all my older readers out there, I'll see you on the road at next year's BC Seniors Games. And if you're not ready for cycling, there's always the cribbage category.


Winners from this year's BC Seniors Games, cycling event. This could be you!


Reference: Exercise improves cognition and hippocampal plasticity in APOE epsilon4 mice. (2009) Nichol K, Deeny SP, Seif J, Camaclang K, Cotman CW. Alzheimers Dement. 5(4):287-94.

Wednesday, September 16, 2009

Children see, children do, but monkeys know better

I usually like to blog about recent articles, and I try to limit myself to papers published in the last 2-3 years. I'm going to make an exception this time and write about a publication from way back (2005). By scientific standards, 5 years ago is practically ancient (kind of like computer standards), but bear with me, this is going to be worth it (unlike a 5-year-old computer).

Researchers from the UK were interested in finding out more about learning patterns, and about how our learning patterns differ from one of our close cousins, the chimpanzee. To test this, they subjected human children 2-4 years old and chimpanzees 2-6 years old to a simple task: retrieving a treat from a box.

In the first set of experiments, the researchers gave the chimps an opaque black box and showed them how to open it to retrieve a treat inside. This wasn't a simple pull-the-top-off kind of box, though. The box seemingly could only be opened following a series of specific steps: pulling a bolt, putting a stick in a hole, opening a door, etc. Chimps are quick learners, though, and by imitating the researcher, they were soon able to retrieve the treat, no problem.

How well did the human children do at the same task? Quite well. They too were able to learn how to retrieve the treat from the black box by copying all the steps the researcher showed them. It would have been slightly worrying otherwise. I mean, we've taken over the world, right? Surely we can teach our young how to open a silly box, right?

The second set of experiments was almost exactly the same as the first one, except this time the box was made of clear plastic instead of being opaque. The researchers went through the same process of teaching the chimps how to open the box. But there's a catch: with a transparent box, it became very obvious that most of the steps supposedly needed to open the box were irrelevant. All you had to do was open the door. Chimpanzees, our closest living relatives, are quite smart, and dropped all the unnecessary steps. They didn't bother with the bolt and the stick and all those irrelevant actions: they went straight for the door and grabbed the treat.

When it was time for the children to be tested on the clear box, they too got shown how to open it by the researchers, including all the unnecessary steps. And when it was their turn to do it, they obviously…

Started pulling the bolt, putting the stick in the hole, etc.

Wait, what?

Did I get that wrong? I must have messed up the subjects… Wait… Nope. Those monkeys just fed us a piece of humble pie.

The researchers suggest that in the case of this study, the difference between how the chimps and the children performed the task may have to do with a different focus of attention. Children pay more attention to the process of opening the box and the actions of the researcher, while chimps have their eyes on the prize, focusing more on the goal than on the process. The researchers conclude by saying that imitation may be a human strategy that is often employed at the expense of efficiency.

The interesting thing about this article is that some news reports and magazine descriptions of this experiment concluded, more or less, that our children imitated even the irrelevant steps of the task because that was the smarter thing to do, twisting the story around to make it sound like humans were somehow superior to the chimpanzees. Start your debate engines, but my opinion is that we should stop considering ourselves so superior. The results of this study are pretty straightforward. Do we really have to come up with a twisted interpretation of the results to make us sound like the winners? I would love it if we could just look at this experiment and say, hey, what do you know, we can learn something from the chimps.

Eyes on the prize, people. Eyes on the prize.

This monkey is laughing at you.


Reference: Causal knowledge and imitation/emulation switching in chimpanzees (Pan troglodytes) and children (Homo sapiens). (2005) Horner V., Whiten A. Anim Cogn 8(3):164-81.

Tuesday, September 1, 2009

Conquering cancer one virus at a time

Back in June, I participated in The Ride to Conquer Cancer, a 2-day bike ride between Vancouver and Seattle to raise money for BC Cancer. It was an extremely moving, positive and rewarding experience. It also gave me a chance to eat a piece of humble pie when 70-year-old cancer survivors (identified by flags on their bikes, adding to their wind resistance) would pass me going up the hill. The good news is that at the end of the 272 km, I was still smiling:

The bad news is that I didn’t conquer cancer.

Cancer research is well-funded, popular, and has been around for quite some time. So why can’t we get rid of this disease? The problem with cancer is that it’s tremendously difficult to target. Unlike cells infected by viruses and bacteria, cancer cells don’t display any obvious flags that something is wrong with them, which makes them challenging to distinguish from healthy cells. Therefore, most treatments for cancer involve killing a number of healthy cells, and that’s just not ideal.

Progress is being made, though, as a recent publication in the journal PNAS suggests. In this paper, a collaboration between researchers in California and Japan led to the discovery of a new way of identifying tumors for easier removal. The researchers rely on an unlikely ally: viruses.

The researchers genetically engineered a special type of virus to carry a gene that codes for a fluorescent protein, GFP (for Green Fluorescent Protein, as simple as that!). If all the cells in your body were to be infected by that virus, you would glow (kind of like the famous puppy). While this would immediately up your popularity ranking at any science party, it doesn't do much for treating cancer. So the researchers took it one step further and engineered the virus so that it would only express the fluorescent protein (make the cell glow) if the cell has active telomerase. Telomerase is an enzyme involved in the replication of cells. If the telomerase enzyme is active when it shouldn't be, it can cause cells to divide indefinitely, creating tumors. In fact, it is thought that over 90% of human tumors show telomerase activation. To sum it up, cells are infected with a virus that carries a gene for a fluorescent protein, but only cancerous cells have active telomerase, the switch that turns on the fluorescence. The result? Glowing tumors.
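Just to make the logic of the trick crystal clear, here is a toy sketch of my own (the names and data structures are invented for illustration; this is not how you would model real biology): the reporter works like an AND gate, glowing only in cells that both carry the viral GFP gene and have active telomerase.

```python
def infect(cell):
    """The engineered virus delivers the GFP gene to any cell it reaches."""
    cell["has_gfp_gene"] = True

def glows(cell):
    # The GFP gene sits behind a telomerase-dependent switch, so the
    # protein is only made where telomerase is active (i.e. in tumors)
    return cell.get("has_gfp_gene", False) and cell["telomerase_active"]

healthy = {"telomerase_active": False}
tumor = {"telomerase_active": True}
for cell in (healthy, tumor):
    infect(cell)  # the virus infects healthy and cancerous cells alike

print(glows(healthy), glows(tumor))  # False True: only the tumor lights up
```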

The benefits of these findings are two-fold. First, glowing tumors mean that surgeons can precisely remove the tumors without having to also remove a chunk of healthy tissue “just to make sure”. Second, tumors have a nasty habit of hitching a ride in your lymphatic system or your blood and disseminating throughout your body, making it very difficult to take out every little bit of sprouting tumor. With this innovation, all those little disseminated tumors can be identified and removed. Those two benefits together could greatly reduce the chance of a relapse, an important consideration when treating cancer. The researchers tested their mutant virus in two different animal models of cancer (colon and lung) with great success. While I'm usually worried at the prospect of glowing body parts, this research could have a big impact on cancer treatment.

If I want to give myself a chance to conquer cancer in 2010, I should probably spend less time on the computer and more time on the bike...


Glowing tumors


Reference: In vivo internal tumor illumination by telomerase-dependent adenoviral GFP for precise surgical navigation. (2009) Kishimoto H, Zhao M, Hayashi K, Urata Y, Tanaka N, Fujiwara T, Penman S, Hoffman RM. Proc Natl Acad Sci USA 106(34):14514-7.

Wednesday, August 26, 2009

Dating advice courtesy of your friendly neighborhood monkey

Dating advice websites abound with many different kinds of advice: good advice (don’t pick your nose), strange advice (date according to your blood type), not so good advice (Leos dating Leos need to hire domestic helpers). One suggestion that seems to pop up quite frequently is to mimic the body language of your date. He picks up his drink for a sip, you pick up yours. He strokes his hair, you stroke yours. He picks his nose, you pick your nose. You get the idea. As it turns out, in a social interaction context, we humans tend to unintentionally imitate what others are doing. Think of this as a team-building exercise: it’s a behavior that helps establish rapport, empathy and other fuzzy feelings toward each other.

A recent publication in the journal Science shows that behavior matching leading to increased rapport is not exclusive to humans, and it can also occur between different species.


The study looks at capuchin monkeys, a very social primate species. During the experiments, the monkey is given a ball to play with. On either side of the monkey’s cage stands an experimenter. One experimenter is mimicking what the monkey is doing with the ball (poking it, pounding it against the wall, trying to eat it… Mmmmm…), while the other experimenter is also playing with the ball, but not mimicking the monkey. The researchers show that not only do the monkeys look at the imitators more, they also spend more time hanging out close to the imitators and prefer to interact with the imitators in a food exchange game. The authors also did an important series of control experiments to show that these effects were not due to the monkeys perceiving more attention from the imitators.


So why are we subconsciously hard-wired to constantly be playing Simon Says? It is thought that the positive feelings resulting from behavior matching played an important role in human evolution by leading to higher levels of tolerance and by preventing aggressive behavior (to put it simply, by preventing us from beating each other up). The same principle probably applies to other primates, and the empathic connection that results from imitation may explain the altruistic tendencies observed in the behavior of capuchin monkeys.


Next time you catch yourself winking back at someone who winked at you, or laughing at a joke you didn’t get because everyone else is laughing, keep in mind it’s for the greater good. After all, even monkeys know that imitation is the sincerest form of flattery.


Scientific Chick readers applying their new found knowledge
(Image from Loldogs)


Reference: Capuchin monkeys display affiliation toward humans who imitate them. (2009) Paukner, A., Suomi, S.J., Visalberghi, E., Ferrari, P.F. Science 325:880-882.

Thursday, August 13, 2009

Need some real estate advice? Ask an ant.

What does it mean to be rational?

In a biological context, rationality means that when animals (including humans) make a decision, they choose the option that maximizes their fitness benefit. For example: my cat, Mr. Minou, prefers to eat canned food rather than grass. Canned food provides him with more nutrients, more protein, and more energy than grass: it has a fitness benefit for Mr. Minou. Even if he is presented with a third option, say, dry cat kibbles, Mr. Minou still prefers canned food, because it still has the highest fitness benefit (and, obviously, kibbles taste gross).


Pretty straightforward so far. So, being the advanced species we are, humans must be rational beings, right?


Wrong.


Here is another example. Let’s say you’re shopping for a new house. You have two equally important criteria for your new house: it must have big windows to get lots of natural light in, and it must have a big garage to store all your stuff. There are two houses on the market. House A has big windows but a small garage. House B has a big garage but small windows. Rationally, humans in this position have a 50% chance of picking either house.


Suddenly, a third house is made available on the market. House C has big windows, but NO garage. In a situation like this, humans overwhelmingly put rationality aside and shift their pick to house A, the one with the large windows but the small garage, because the perceived value of house A increases when it is compared with the other available houses. However, the actual values of houses A and B are unchanged!


The reason for this shift is that as decision-makers, we don’t assign absolute values to options, we assign relative values. We like to compare. And comparing can be misleading.
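Here is a little sketch of that idea (my own toy model, not from the study): give a chooser a bonus for options that dominate others on the menu, i.e. options that "look good next to" an alternative, and a tie between A and B flips as soon as the dominated decoy C appears.

```python
def dominates(x, y):
    """x is at least as good as y everywhere, strictly better somewhere."""
    return (all(a >= b for a, b in zip(x, y))
            and any(a > b for a, b in zip(x, y)))

def perceived_value(option, menu):
    # absolute value, plus a bonus for every option this one dominates
    return sum(option) + sum(dominates(option, other) for other in menu)

A, B = (3, 1), (1, 3)   # (windows, garage): equally good in absolute terms
menu = [A, B]
print(perceived_value(A, menu), perceived_value(B, menu))  # 4 4: a tie

C = (3, 0)              # decoy: big windows, NO garage (dominated by A)
menu = [A, B, C]
print(perceived_value(A, menu), perceived_value(B, menu))  # 5 4: A wins
```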


Recently, two American researchers wondered about the rationality of collective animals, such as ants. The researchers figured that just as the choices we make result from the complex interactions of many brain cells, the decisions that an ant colony makes might similarly stem from a complex network of interacting insects. Ant societies act as unitary decision-makers, jointly deciding on things like a single travel direction or a nest site. The researchers decided to take advantage of the ants' nest-seeking strategies to test their rationality.


The ants in the study live in natural holes like hollow branches, and are able to emigrate to a new nest if their current nest is damaged. Colonies seeking a new nest reach consensus on the better site among the available options based on entrance size, cavity dimensions, interior light level, etc. The way a colony reaches consensus is fascinating: a few scout ants head out to assess the quality of potential homes. When a scout finds a potential new home, it leaves to recruit more scouts, who will then recruit more scouts, and so on. The strength of this technique lies in one key fact: the higher the quality of the nest an ant finds, the faster it will recruit other ants. Eventually, a recruitment threshold (a quorum) is reached, non-scout ants are brought over, and the entire colony moves.
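This recruit-the-recruiters scheme is simple enough to simulate. Below is a minimal sketch (my own toy model, not the authors' code) where a scout's chance of committing to a site is proportional to the site's quality, and the colony picks whichever site first gathers a quorum of committed scouts:

```python
import random

def choose_nest(qualities, quorum=30, max_steps=10_000):
    """Return the index of the first nest site to reach a quorum of scouts."""
    committed = [0] * len(qualities)   # scouts currently recruiting per site
    for _ in range(max_steps):
        site = random.randrange(len(qualities))   # a scout visits one site
        # Better sites convert visiting scouts into recruiters more often,
        # so their committed count grows faster
        if random.random() < qualities[site]:
            committed[site] += 1
        if committed[site] >= quorum:
            return site                # quorum reached: the colony moves here
    return committed.index(max(committed))

# Site 1 is twice as good as site 0; the colony almost always picks it
print(choose_nest([0.3, 0.6]))
```

Note that no individual scout ever compares two sites; the comparison happens implicitly, through which site's recruitment snowballs first.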


The researchers first established that ant colonies prefer nests that have small openings and low inside light levels. They then assessed the susceptibility of ant colonies to irrationality by comparing the colonies' preference for new nests with different attributes, in a way very similar to my house example: nest A has a dim interior but a large entrance, and nest B has a brighter interior but a small entrance. In this case, the ant colonies showed no preference for either site, which is very rational of them.


The researchers then added one of two decoy nests. Decoy nest A2 was just as dim as nest A, but had an even larger entrance diameter. Decoy nest B2 had the same entrance diameter as B but was even brighter than B. In summary, each decoy nest had a good feature equivalent to that of A or B, and the other feature was worse.


Well, the study suggests that we should turn to ants for real estate advice: the presence of either decoy did not affect the proportion of colonies choosing A or B. This means that even with a decoy present, the ant colonies recognized that A and B had equal fitness values, and the presence of the decoy did not change the fitness values of the original nest sites.


So what is the ants’ secret for being so rational? The most plausible explanation is that for the most part, each scout ant only visits one site. If it’s good, it recruits, and if it’s crummy, it moves on. No comparing with the one next door. In this case, the fact that individuals in the decision-making strategy lack either the opportunity or the ability to compare all the options offers some protection against irrational, fitness-reducing errors.


Is this relevant for us, other than the piece of humble pie we must eat when realizing that ants can be more rational than we are? Well, when faced with a decision, it can be helpful to remember to evaluate each option for its absolute value, and not its relative value.


Mr. Minou himself occasionally forgoes rationality and chooses to eat grass. I suspect he only does it for the pleasure of watching me wash puke from the floor afterwards. Maybe in some twisted way, making sure I clean up after him confers some kind of fitness benefit on him…



Mr. Minou being irrational

Reference: Rationality in collective decision-making by ant colonies. Edwards SC, Pratt SC. Proc Biol Sci (2009) Jul 22 [Epub ahead of print]

Tuesday, August 4, 2009

Aging is optional! Take two of these pills and call me in the morning.

We’re getting older.

Not exactly a surprise, I know, but I didn’t mean you and me are getting older. I meant we are getting older, as a population. In 2001, one Canadian in eight was aged 65 or older. By 2026, one in five will be 65 or older. So what should we do with an increasingly aged population? Well, this being a North American consumer culture, the sensible thing to do is try to sell them stuff. I mean, think of the size of the market!


Right now, a significant amount of research is being devoted to aging. The main focus is to try to slow down aging (partly by developing marketable supplements and such). As some of you might know, even my own PhD thesis project is on how to slow aging in the brain. Loyal readers of ScientificChick.com will also be aware of recent articles about caloric restriction, a potential way to keep old age at bay. Thankfully, a recent publication in Nature suggests a much easier way to live longer: forget starvation, all you have to do is pop a(nother) pill!


In this article, American researchers show that mice that eat rapamycin supplements starting at 600 days of age (senior citizens in mouse years) live longer, up to 14% longer for females and 9% longer for males. What’s more, rapamycin supplementation did not change the causes of death. The researchers propose that this drug could be acting by postponing death from cancer, by delaying mechanisms of aging, or both.


How does rapamycin work? Well, as you might expect with a miracle drug like this, we’re not really sure. Rapamycin is an inhibitor of a pathway called mTOR. The mTOR pathway has many functions in your cells, like coordinating the survival response arising when there are changes in nutrient and energy availability, and dealing with potentially deadly stresses, such as oxidative stress (the kind of stress fancy juices packed with antioxidants are supposed to battle). Since the mTOR pathway acts kind of like a central sensor of cell health, it makes sense that it would be implicated in regulating lifespan. Exactly how rapamycin is working its magic, though, is probably what the researchers are trying to figure out for their next article.


Could the increase in longevity following rapamycin supplementation be related to the effects seen with caloric restriction (the “eat less, live longer” paradigm)? Well, mice on rapamycin show no change in body weight, so we know the drug is not acting through a caloric restriction mechanism. The converse, however, may be true: it is thought that the beneficial effects of diet restriction may also be due to an inhibition of the mTOR pathway.


So don’t throw out the double-stuffed Oreos just yet, but don’t eat half the box either: rapamycin pills for humans won’t be on the shelves tomorrow. While mTOR inhibitors are currently being used to treat a few conditions (transplant rejection and some cancers, for example), there’s still a lot of work to do to tease out all the potential interactions and side effects.


Longevity in pill form? To me, it would feel like cheating the system. And if there’s one thing we keep learning over and over in the life sciences, it’s that trying to cheat Mother Nature always has some unintended consequences.



A great illustration of the aging mouse by TS Rogers

Reference: Rapamycin fed late in life extends lifespan in genetically heterogeneous mice. (2009) Harrison DE, Strong R, Sharp ZD, Nelson JF, Astle CM, Flurkey K, Nadon NL, Wilkinson JE, Frenkel K, Carter CS, Pahor M, Javors MA, Fernandez E, Miller RA. Nature 460(7253):392-5.

Sunday, July 26, 2009

The fountain of youth revisited

Not too long ago, I wrote about my love of brownies and an article on caloric restriction. I wasn’t really planning on bringing up this topic again so soon but a recent Science paper on caloric restriction in monkeys is getting so much media attention that I just had to throw in my two cents.

In the article, a group of American researchers study control and calorie-restricted (30% fewer calories) monkeys over 20 years. What they show is that the calorie-restricted monkeys have a reduced incidence of age-associated death, diabetes, cancer, cardiovascular disease and brain atrophy compared to the control monkeys. From the sounds of it, we can stop looking for the fountain of youth (it’s in Florida, by the way). The media absolutely loves this story, and news reports and videos are quick to claim that caloric restriction increases longevity in our closest cousins, so it must be good for us as well.

First, a disclaimer from the friendly folks at ScientificChick.com: In recent years, solid, convincing and well-controlled studies have shown some benefits of caloric restriction in various types of experimental subjects, ranging from yeast to humans. I won’t go back into the pros and cons of caloric restriction in this post. There is good evidence out there that it can be beneficial in some instances, and also good evidence that it’s not for everyone. That being said, I believe there are many problems with this particular Science paper on caloric restriction.

In my opinion, a major issue with the findings is that the control monkeys (the ones not on caloric restriction) are fed ad libitum (meaning they can eat as much as they want). You might be able to guess the problem already, but let me give you an example just in case: I have a cat, and if I were to offer him a constant supply of what seems to me like gross, bland cat food, he would keep eating it until he slipped into a food coma. I think this goes for most species, including us (ever heard of the candy jar experiment?). Therefore, it’s very hard to judge if monkeys who eat as much as they want are eating the amount of food they should naturally be eating. Chances are they are eating more (breakfast, lunch and dinner are not served at regular hours in the wild). And this is particularly relevant because eating too much (or obesity) happens to be an important risk factor for all the diseases the study looks at (diabetes, cardiovascular problems, cancer, etc.).

Another issue with the article is that few of the findings show a statistically significant difference between the control and the calorie restricted groups, even though the researchers are studying a reasonably large number of monkeys. When your results are statistically significant, it means that what you are observing is unlikely to have occurred by chance. This concept is a hallmark of solid and convincing science findings and the media should be very careful not to hype findings that aren’t statistically significant. In addition, almost every single news article on this publication claimed that caloric restriction had an effect on longevity. While the study looks at age-associated diseases, the longevity (or life expectancy) parameter is not assessed at all (though the researchers do mention they plan on assessing this in the future).
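To make that notion concrete, here is a minimal sketch of a permutation test in Python. This is the textbook logic behind “unlikely to have occurred by chance”, not the paper’s actual analysis, and the numbers are invented for illustration:

```python
import random

# Hypothetical disease counts per group (NOT the real monkey data):
# 1 = developed an age-related disease, 0 = did not.
control = [1, 1, 1, 0, 1, 0, 1, 1, 0, 1]
restricted = [0, 1, 0, 0, 1, 0, 0, 1, 0, 0]

observed_diff = sum(control) / len(control) - sum(restricted) / len(restricted)

# Permutation test: if the diet truly made no difference, the group
# labels are arbitrary, so shuffling them should produce differences
# as large as the observed one fairly often.
pooled = control + restricted
n_extreme = 0
n_shuffles = 10_000
for _ in range(n_shuffles):
    random.shuffle(pooled)
    fake_control = pooled[:len(control)]
    fake_restricted = pooled[len(control):]
    diff = (sum(fake_control) / len(fake_control)
            - sum(fake_restricted) / len(fake_restricted))
    if diff >= observed_diff:
        n_extreme += 1

p_value = n_extreme / n_shuffles
print(f"observed difference: {observed_diff:.2f}, p = {p_value:.3f}")
# A small p (conventionally < 0.05) means a difference this large
# rarely arises from shuffled labels alone.
```

If shuffled labels reproduce a difference as big as the real one often, plain luck remains a perfectly good explanation, and that’s exactly the kind of result the media shouldn’t be hyping.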

Lastly, and perhaps most disturbing from my scientist’s point of view, the lead researcher in this study happens to be co-founder and member of the board of LifeGen Technologies, a company focusing on the impact of dietary interventions on the aging process. A little research on this company made it very clear to me that the more people buy into this whole caloric restriction business, the more money the company makes. If that’s not a conflict of interest, I don’t know what is.

Now if you’ll excuse me, a new cupcake store just opened across from my building, and I must significantly increase the quality of my life by going over and eating a cupcake.


My cat, Mr Minou, is not a fan of caloric restriction.


Reference: Caloric restriction delays disease onset and mortality in rhesus monkeys. Colman RJ, Anderson RM, Johnson SC, Kastman EK, Kosmatka KJ, Beasley TM, Allison DB, Cruzen C, Simmons HA, Kemnitz JW, Weindruch R. Science. 2009 Jul 10;325(5937):201-4.

Sunday, July 5, 2009

Who wants a memory booster?

One of my first posts was about erasing memories. That may be useful if you suffer from post-traumatic stress disorder or if you just sat through the latest installment of the Transformers movies. However, I can think of more people who would benefit from memory enhancement than from memory erasure. One recent publication in Science hints that this may be just around the corner.

First, how do we know what animals remember? One way to test memory in rats is by using object recognition. You present the rat with two identical objects and let the animal explore them for a few minutes. Then you replace one of the objects with a new object, and typically, the rat will spend more time exploring the new object than the old one (presumably because the rat remembers the old one). By testing rat visual memory performance using this simple paradigm, the researchers established that rats were able to retain information about an object for up to 45 minutes, but after 60 minutes the objects were forgotten and treated as new. The researchers then injected a special protein into a specific part of the rats’ visual cortex, a part of the brain that is important for processing visual information. Following the injection, the rats were tested again for object recognition, and lo and behold, they were now able to remember object information for longer than 45 minutes. How much longer? 60 minutes? 100 minutes? 1000 minutes? Actually, it was 14 months. The rats went from being able to remember an object for 45 minutes to being able to remember it for 14 months.
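(An aside for the quantitatively inclined: results from this task are usually summarized with a simple discrimination index computed from exploration times. The formula below is the standard one for novel-object recognition, but the numbers are invented, and the paper itself may score the task differently.)

```python
def discrimination_index(time_novel: float, time_familiar: float) -> float:
    """Standard novel-object recognition score.

    Ranges from -1 to 1: values near 0 mean the rat treats both
    objects the same (no memory of the familiar one); positive
    values mean it prefers the novel object (i.e., it remembers).
    """
    total = time_novel + time_familiar
    return (time_novel - time_familiar) / total

# Invented exploration times (in seconds), for illustration only:
print(discrimination_index(40, 20))  # 0.33 -> remembers the old object
print(discrimination_index(30, 30))  # 0.0  -> both objects look "new"
```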


Now the relevance of this article mainly lies in the identification of the function of a part of the visual cortex. To confirm their findings, the researchers took control rats (that didn’t receive the special protein) and inactivated the brain cells in the section of interest of the visual cortex (ok, they destroyed them). Those rats couldn’t remember objects at all. Interestingly, the researchers also showed that if you inject the special protein, then introduce a new object, and then destroy the brain cells, the rats will still remember the object for a long time, meaning this specific region of the visual cortex is important for making new memories but not for storing them. These are all important findings that further our understanding of visual memory.


But 14 months?? Surely this kind of memory enhancement won’t go unnoticed. The researchers claim that “the role of the RGS-14 protein in the enhancement of visual memory makes this protein an important pharmaceutical target for the treatment of (...) memory defects as well as for boosting the memory capacity”. That being said, I don’t think a memory-boosting drug will hit the shelves anytime soon. First, in the article, the researchers have to inject the protein directly into a specific brain region, and I certainly wouldn’t volunteer for that. Second, any such treatment would be toying with an important, ubiquitous protein that has many functions, and it’ll be a while before we tease out all the potential pitfalls of that.


Regardless, with the aging population and the ever-increasing need (or want?) for maximum brain performance, there is a huge market for memory enhancers and the race is on to develop the first one. Now is the time to ask and answer all the ethical questions that surround this issue. If you had access to memory enhancers, would you use them? What if they were really expensive? What if they had detrimental side effects? What if they had detrimental side effects and everyone in school or work used them to enhance their performance relative to people who don’t use them (Tour de France, anyone?)?


Memory enhancers: useful drugs or can of worms?




The object recognition task


Reference: Role of layer 6 of V2 visual cortex in object-recognition memory. Lopez-Aranda MF, Lopez-Tellez JF, Navarro-Lobato I, Masmudi-Martin M, Gutierrez A, Khan ZU. Science. 2009 Jul 3;325(5936):87-9.

Sunday, June 28, 2009

The trouble with the tube (part 2 of 2)

There is a wealth of literature on the negative impacts of television watching on developing children, and in my last post, I wrote about how certain types of television programs may lead to attentional problems later in life. I myself blame countless hours spent watching the Fresh Prince of Bel-Air for my affinity for men who can dance. To balance the argument, can television be good for infants and children? Kids certainly claim they need the extra channels for “educational” purposes! There is overwhelming evidence that certain programs like Sesame Street have many educational benefits. It’s been shown that these programs increase school readiness and improve vocabulary scores in children who start watching at age 3 or older. Nowadays, though, videos and DVDs like the Baby Einstein products are being marketed for much younger children, as young as one month old in some instances. Even though the American Academy of Pediatrics favors zero exposure to television for very young children, children under two spend on average one to two hours a day in front of the tube. Who could blame them (or their parents) when media producers claim that their videos and DVDs have developmental benefits? Who wouldn’t want their child to become the next Einstein?

However exciting the claims may be, it remains unclear whether children under two can actually benefit from the information from a television screen. A 2007 study showed that children who watched more baby videos and DVDs knew fewer words than those who didn’t. In order to examine the relationship between infant DVDs and language, researchers from California studied the language skills of two groups of children 12 to 15 months old. The first group (called the control group) just went about their daily routines. The second group (the DVD group) was instructed to watch the DVD Baby Wordsworth (a DVD from the Baby Einstein company that highlights vocabulary words) at multiple time points for the duration of the study. Before and after the study, the children’s vocabulary was evaluated using a standard test.

Surprisingly, the results of the study show no significant differences between the control group and the DVD group on language skills at any time point (either before or after the study). I see this as good news, because while watching the DVD isn’t helping the children learn new words, it’s not keeping them from learning either. In addition to these results, the researchers were insightful enough to study other predictors (predictors are essentially other variables that may affect the outcome of the study). Interestingly (but not surprisingly), the amount of time a child was read to was the best predictor of a higher vocabulary score. And, since by now most of my readers realize the importance of controls, this relationship is true even when controlling for age, gender, income, parent education and development level.
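For readers wondering what “controlling for” means in practice: you put the other variables into the same regression model, so the predictor you care about is evaluated with the rest held fixed. Here is a hypothetical sketch using the statsmodels library; the data and variable names are invented, I’ve shortened the covariate list, and this is not the study’s actual model:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Invented data standing in for the study's (one row per child).
rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "reading_time": rng.uniform(0, 60, n),        # minutes read to per day
    "age": rng.uniform(12, 15, n),                # months
    "income": rng.normal(50, 15, n),              # arbitrary units
    "parent_education": rng.integers(10, 20, n),  # years of schooling
})
# Build in a reading-time effect (plus noise), mimicking the study's finding:
df["vocab_score"] = (20 + 0.5 * df["reading_time"]
                     + 0.3 * df["parent_education"] + rng.normal(0, 5, n))

# "Controlling for" = including the other variables in the same model,
# so reading_time is assessed with age, income and education held fixed.
model = smf.ols("vocab_score ~ reading_time + age + income + parent_education",
                data=df).fit()
print(model.params["reading_time"], model.pvalues["reading_time"])
```

If the reading-time coefficient survives with the covariates in the model, the relationship isn’t just a stand-in for, say, family income, which is the point the researchers were making.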

Why doesn’t the DVD help the children learn new words? There are many potential explanations for this. Maybe the DVD is just not that educational. Maybe the DVD just doesn’t attract the infant’s attention. What’s even more likely is that young children just can’t process information from the television.

My dad used to read me a story every single night, and this went on well after I was able to read by myself. I still have, and will cherish forever, my favourite book of tales. I’m really grateful for all the hours my parents spent reading to me. Maybe it counteracted all the stupidity I was exposed to during my Fresh Prince phase.



Best book ever!

Reference: Just a talking book? Word learning from watching baby videos. Robb, M.B., Richert, R.A., Wartella, E.A. British Journal of Developmental Psychology 2009 27(1):27-45.

 