Sunday, February 28, 2010

Art is a kind of innate drive that seizes a human being and makes him its instrument. To perform this difficult office it is sometimes necessary for him to sacrifice happiness and everything that makes life worth living for the ordinary human being. --Carl Jung

Sunday, February 21, 2010

Quantum Mind: http://en.wikipedia.org/wiki/Quantum_mind
Social Neuroscience: http://en.wikipedia.org/wiki/Social_neuroscience
Gödel's incompleteness theorems: http://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_theorem

Theory of mind

Theory of mind is the ability to attribute mental states—beliefs, intents, desires, pretending, knowledge, etc.—to oneself and others and to understand that others have beliefs, desires and intentions that are different from one's own. Though there are philosophical approaches to issues raised in such discussions, theory of mind as such is distinct from the philosophy of mind.

Philosophical roots

Contemporary discussions of ToM have their roots in philosophical debate—most broadly, from the time of Descartes’ "Second Meditation," which set the groundwork for considering the science of the mind. Most prominent recently are two contrasting approaches, in the philosophical literature, to theory of mind: theory-theory and simulation theory. The theory-theorist imagines a veritable theory—"folk psychology"—used to reason about others' minds. The theory is developed automatically and innately, though instantiated through social interactions.[13]

On the other hand, simulation theory suggests ToM is not, at its core, theoretical. Two kinds of simulationism have been proposed.[14] One version (Alvin Goldman's) emphasizes that one must recognize one's own mental states before ascribing mental states to others by simulation. The second version of simulation theory proposes that each person comes to know his or her own and others' minds through what Robert Gordon[15] names a logical "ascent routine" which answers questions about mental states by re-phrasing the question as a metaphysical one. For example, if Zoe asks Pam, "Do you think that dog wants to play with you?", Pam would ask herself, "Does that dog want to play with me?" to determine her own response. She could equally well ask that to answer the question of what Zoe might think. Both hold that people generally understand one another by simulating being in the other's shoes.

One of the differences between the two theories that have influenced psychological consideration of ToM is that theory-theory describes ToM as a detached theoretical process that is an innate feature, whereas simulation theory portrays ToM as a kind of knowledge that allows one to form predictions of someone's mental states by putting oneself in the other person's shoes and simulating them. These theories continue to inform the definitions of theory of mind at the heart of scientific ToM investigation.

The philosophical roots of the Relational Frame Theory account of ToM arise from contextual psychology and refer to the study of organisms (both human and non-human) interacting in and with a historical and current situational context. It is an approach based on contextualism, a philosophy in which any event is interpreted as an ongoing act inseparable from its current and historical context and in which a radically functional approach to truth and meaning is adopted. As a variant of contextualism, RFT focuses on the construction of practical, scientific knowledge. This scientific form of contextual psychology is virtually synonymous with the philosophy of operant psychology.[16]

Mirror neuron


A mirror neuron is a neuron that fires both when an animal acts and when the animal observes the same action performed by another.[1] Thus, the neuron "mirrors" the behavior of the other, as though the observer were itself acting. Such neurons have been directly observed in primates, and are believed to occur in humans and other species including birds. In humans, brain activity consistent with that of mirror neurons has been found in the premotor cortex and the inferior parietal cortex.

Some scientists consider mirror neurons one of the most important recent discoveries in neuroscience. Among them is V.S. Ramachandran, who believes they might be very important in imitation and language acquisition.[2] However, despite the popularity of this field, to date no widely accepted neural or computational models have been put forward to describe how mirror neuron activity supports cognitive functions such as imitation.[3]

The function of the mirror system is a subject of much speculation. Many researchers in cognitive neuroscience and cognitive psychology consider that this system provides the physiological mechanism for perception–action coupling (see the common coding theory). These mirror neurons may be important for understanding the actions of other people, and for learning new skills by imitation. Some researchers also speculate that mirror systems may simulate observed actions, and thus contribute to theory of mind skills,[4][5] while others relate mirror neurons to language abilities.[6] It has also been proposed that problems with the mirror system may underlie cognitive disorders, particularly autism.[7][8] However, the connection between mirror neuron dysfunction and autism remains speculative, and it is unlikely that mirror neurons are related to many of the important characteristics of autism.

Possible functions

Understanding intentions

Many studies link mirror neurons to understanding goals and intentions. Fogassi et al. (2005)[33] recorded the activity of 41 mirror neurons in the inferior parietal lobe (IPL) of two rhesus macaques. The IPL has long been recognized as an association cortex that integrates sensory information. The monkeys watched an experimenter either grasp an apple and bring it to his mouth or grasp an object and place it in a cup. In total, 15 mirror neurons fired vigorously when the monkey observed the "grasp-to-eat" motion, but registered no activity while exposed to the "grasp-to-place" condition. For four other mirror neurons, the reverse held true: they activated in response to the experimenter eventually placing the apple in the cup but not to eating it. Only the type of action, and not the kinematic force with which models manipulated objects, determined neuron activity. It was also significant that neurons fired before the monkey observed the human model starting the second motor act (bringing the object to the mouth or placing it in a cup). Therefore, IPL neurons "code the same act (grasping) in a different way according to the final goal of the action in which the act is embedded".[33] They may furnish a neural basis for predicting another individual’s subsequent actions and inferring intention.[33]

Empathy

Stephanie Preston and Frans de Waal,[34] Jean Decety,[35][36] and Vittorio Gallese[37][38] have independently argued that the mirror neuron system is involved in empathy. A large number of experiments using functional MRI, electroencephalography and magnetoencephalography have shown that certain brain regions (in particular the anterior insula, anterior cingulate cortex, and inferior frontal cortex) are active when a person experiences an emotion (disgust, happiness, pain, etc.) and when he or she sees another person experiencing an emotion.[39][40][41][42][43][44][45] However, these brain regions are not quite the same as the ones which mirror hand actions, and mirror neurons for emotional states or empathy have not yet been described in monkeys. More recently, Christian Keysers at the Social Brain Lab and colleagues have shown that people who are more empathic according to self-report questionnaires have stronger activations both in the mirror system for hand actions[46] and the mirror system for emotions[44], providing more direct support for the idea that the mirror system is linked to empathy.

Language

In humans, functional MRI studies have reported areas homologous to the monkey mirror neuron system in the inferior frontal cortex, close to Broca's area, one of the hypothesized language regions of the brain. This has led to suggestions that human language evolved from a gesture performance/understanding system implemented in mirror neurons. Mirror neurons have been said to have the potential to provide a mechanism for action understanding, imitation learning, and the simulation of other people's behaviour.[47] This hypothesis is supported by some cytoarchitectonic homologies between monkey premotor area F5 and human Broca's area.[48] Rates of vocabulary expansion are linked to the ability of children to vocally mirror nonwords and so acquire new word pronunciations. Such speech repetition occurs automatically, rapidly,[49] and separately in the brain from speech perception.[50][51] Moreover, such vocal imitation can occur without comprehension, as in speech shadowing[52] and echolalia.[53]

Autism

Some researchers claim there is a link between mirror neuron deficiency and autism. In typical children, EEG recordings from motor areas are suppressed when the child watches another person move, and this is believed to be an index of mirror neuron activity. However, this suppression is not seen in children with autism.[7] Also, children with autism have less activity in mirror neuron regions of the brain when imitating.[8] Finally, anatomical differences have been found in the mirror neuron related brain areas in adults with autism spectrum disorders, compared to non-autistic adults. All these cortical areas were thinner and the degree of thinning was correlated with autism symptom severity, a correlation nearly restricted to these brain regions.[54] Based on these results, some researchers claim that autism is caused by a lack of mirror neurons, leading to disabilities in social skills, imitation, empathy and theory of mind.

Theory of mind

In the philosophy of mind, mirror neurons have become the primary rallying call of simulation theorists concerning our 'theory of mind.' 'Theory of mind' refers to our ability to infer another person's mental state (i.e., beliefs and desires) from their experiences or their behavior. For example, if you see a girl reaching into a jar labeled 'cookies,' you might assume that she wants a cookie (even if you know the jar is empty) and believes that there are cookies in the jar.

There are several competing models which attempt to account for our theory of mind; the most notable in relation to mirror neurons is simulation theory. According to simulation theory, theory of mind is available because we subconsciously empathize with the person we're observing and, accounting for relevant differences, imagine what we would desire and believe in that scenario.[55][56] Mirror neurons have been interpreted as the mechanism by which we simulate others in order to better understand them, and therefore their discovery has been taken by some as a validation of simulation theory (which appeared a decade before the discovery of mirror neurons).[57] More recently, Theory of Mind and Simulation have been seen as complementary systems, with different developmental time courses.[58][59][60]

Gender differences

The issue of gender differences in empathy is quite controversial and subject to social desirability and stereotypes. However, a series of recent studies conducted by Yawei Cheng, using a variety of neurophysiological measures, including MEG,[61] spinal reflex excitability,[62] and electroencephalography,[63][64] has documented the presence of a gender difference in the human mirror neuron system, with female participants exhibiting stronger motor resonance than male participants.

Criticism

Although many in the scientific community have been excited about the discovery of mirror neurons, some researchers express skepticism with regard to the claims that mirror neurons can explain empathy, theory of mind, and so on. Greg Hickok, a cognitive neuroscientist at UC Irvine, has stated that "there is little or no evidence to support the mirror neuron=action understanding hypothesis and instead there is substantial evidence against it."[65] Hickok also published a detailed analysis of these problems in his paper, "Eight problems for the mirror neuron theory of action understanding in monkeys and humans."[66] The eight problems he refers to are:

  • There is no evidence in monkeys that mirror neurons support action understanding.
  • Action understanding can be achieved via non-mirror neuron mechanisms.
  • M1 contains mirror neurons.
  • The relation between macaque mirror neurons and the “mirror system” in humans is either non-parallel or undetermined.
  • Action understanding in humans dissociates from neurophysiological indices of the human “mirror system.”
  • Action understanding and action production dissociate.
  • Damage to the inferior frontal gyrus is not correlated with action understanding deficits.
  • Generalization of the mirror system to speech recognition fails on empirical grounds.

Wednesday, February 17, 2010

The Evolution of Illumination

Research Blogging / by Dave Munger

One of the most alluring visual feasts in the movie Avatar was its alien biosphere of glowing plants and animals. Nearly every living thing on the moon Pandora seemed to shimmer and sparkle—sometimes in response to touch, other times as an expression of emotion. It’s something that separates this magical world of make-believe from the real world here on Earth.

Or is it? While bioluminescent organisms are perhaps not as common in the real world as they are in science fiction, they do exist, in a surprising variety of places. I first encountered them at night on a dock near my childhood home in Seattle. Initially the waters of Puget Sound seemed dark, but dipping a hand revealed a luminous surprise—tiny glowing bits appeared, like underwater sparks, wherever my hand disturbed the water. Then I saw a glowing fish swim by, leaving a luminous trail. The fish wasn’t actually glowing; rather, it was causing tiny bioluminescent dinoflagellates to glow as it passed them. This may be a defense mechanism for the dinoflagellates. Since any movement by their predators causes them to glow, this light may attract other, larger predators that could then do away with the danger.

Image courtesy of Jeremy Marr

Christie Lynn, a graduate student in cell and molecular biology at the University of Hawaii, points out that dinoflagellates aren’t the only creatures in the sea that glow. Indeed, at depths of greater than 1,000 meters, where no light from the surface can reach, it has been estimated that nearly 90 percent of creatures emit some kind of light. These aren’t just microorganisms: Fish, squid, jellyfish, and shrimp are also commonly bioluminescent at these depths.

Their lights serve a variety of purposes: camouflage, attracting mates, and attracting (or distracting) prey have all been observed. In animals with nervous systems, in most cases, neural activity initiates the bioluminescence. But in the velvet belly lantern shark, Lynn says, researchers found that the glowing was not caused by nerve cells. Instead, it seemed, certain hormones controlled the glow: melatonin and prolactin turned it on, and a hormone called alpha-MSH turned it off. This makes some sense, as melatonin is activated by darkness (it helps control sleeping behavior in humans). This species of shark uses glowing as a form of camouflage. It swims around 500 meters below the surface, and its glowing belly, matched to the dim light filtering down from above, makes it less visible from below.

Bioluminescence isn’t limited to the deep, dark portions of the ocean. Lucas Brouwers is a graduate student in Molecular Mechanisms of Disease in Nijmegen, the Netherlands. He blogs about a coral that is ordinarily a dull shade of brown, but glows in a vivid rainbow of colors under certain conditions. The coral’s glow is due to the dinoflagellates living inside it in a symbiotic relationship. Bioluminescence is most commonly a yellow-green color, whether in fireflies or phytoplankton, so naturally researchers have been interested in how the wide array of colors exhibited by these corals evolved.

Using a very clever technique, Steven F. Field and Mikhail V. Matz of the University of Texas at Austin reconstructed the evolution of the proteins responsible for the coral dinoflagellates’ luminescence—all in a petri dish. Their results were published last September in the journal Molecular Biology and Evolution. Field and Matz examined all the different possible mutations of the bacterial genome between a green ancestor and modern red-glowing bacteria, using 20,000 different cell cultures. Through a process of elimination, they identified 20 critical mutations in the genome that were responsible for the variety of colors we see today. Interestingly, these mutations are epistatic: that is, individually they don’t result in much difference, but combined, they result in the vibrant, bold colors of the coral’s glow, ranging from blue to red. The researchers were even able to illustrate these genetic relationships by using colonies of the host bacteria to construct a living phylogenetic tree.

Of course, the creature many of us associate with bioluminescence doesn’t live in the oceans at all. Zen Faulkes, a biologist at the University of Texas–Pan-American, uncovered a study about the glowing “firefly,” actually one of several glowing beetles. When I first saw fireflies after I moved to the southern US, I wondered how that glow could possibly be beneficial. Wouldn’t it attract predators? A team led by Paul R. Moosman studied how insect-eating bats respond to the glowing fireflies. They found relatively few remnants of fireflies in bat droppings, and caged bats rejected pieces of fireflies as food. Perhaps the glow of a firefly serves as a signal to potential predators that they are distasteful or poisonous, just like red berries signal danger to herbivores. Indeed, the researchers did find that some bats attacked glowing lures less than non-glowing lures, although Faulkes says the results weren’t conclusive for all bat species that were studied.

So while it’s possible that one purpose of the glow of fireflies is as a warning for predators, clearly much remains to be learned about the function of bioluminescence.

A Review of SuperFreakonomics

by P.J. Rooks

MAYBE YOUR DOCTOR IS THE SORT THAT ALWAYS MAKES A BIG PRODUCTION of washing his hands. Sweeping into the examining room, the busy guy makes a beeline for the sink. He scrupulously scrubs. He diligently dries. He even turns the faucet off with a paper towel. Then, turning to face you, he straightens his tie, opens your chart and asks, “Now, what seems to be the problem?” But maybe that’s the question you should be putting to him — or more precisely, “What’s wrong with this picture?” While his hands are (or were) squeaky clean, he’s just dragged them across a splattered palette of germs collected from every patient he’s seen today, yesterday, or maybe even last week, and now — open up and say “aah” — those hands are headed straight for your unguarded mouth. Is your chart really that dirty? No, but doctors wearing ties, it seems, could be hazardous to your health.

Physicians’ ties as carriers of germs: another one down for econo-rogues Steven Levitt and Stephen Dubner who, continuing in their mission to explore the hidden side of everything, are back from the huge success of their original best-seller, Freakonomics, with an upsized encore. SuperFreakonomics is another enlightening hodge-podge of skepticism, myth-busting, and counter-intuitive common sense.

What you think you know may not be so, according to Levitt and Dubner. Take Kitty Genovese, for example. Stabbed to death in the courtyard of her New York apartment complex in 1964 while numerous neighbors looked on from their windows, she screamed for help repeatedly during the 35-minute attack and not one of them called the police. Involuntarily martyred to the future of Sociology 101 discussion sessions, Ms. Genovese has kept college students across the nation busy for the last four decades analyzing why she had it in her stars to become such an icon of “bystander apathy.” However, as Levitt and Dubner suggest, perhaps this torrent of brainpower would have been better spent analyzing the police records instead. The case was almost immediately vaulted to urban legend status, and one rather important point seems to have gone missing from the popular recollection — the police were called. And while they may have botched the response, they aced the cover-up.

So you see, people aren’t so bad after all. Or are they? Levitt and Dubner impart a brief history of how some researchers have moved beyond the psychologist’s couch to study human nature. Lab games like Ultimatum and Dictator, which test whether people are basically generous or self-serving, carry wide social appeal but have shown wavering results. Early experimenters found that, given a 20-dollar bill and the option to share (or not) with a stranger, most people chose to spread the wealth around a little (roughly, a 60/40 or 70/30 split is common). A slight rule change in later experiments, however, found people instead taking money from their peers. What to make of all this? The tie-breaker came when another researcher noticed that his subjects’ behavior seemed to be affected by the mere fact that they were being watched. The reality isn’t quite so terribly glum, however. In a later experiment, those who worked for their 20 bucks, for the most part, neither shared nor stole but respected the property of their unknown neighbors.

Doctors’ ties, scandalous homicides, selfishness vs. fairness — what does any of this have to do with economics? Economists and others saddled with the task of quantifying the real world have moved beyond the crusty confines of spreadsheets and statistical formulas and into a murky subterrain of lost truths, missed data, unexpected outcomes, and most importantly, human incentives. Forget about supply and demand, wages and market forces. “Human behavior is influenced by a dazzlingly complex set of incentives, social norms, framing references, and the lessons gleaned from past experience — in a word, context,” Levitt and Dubner explain. “We act as we do because, given the choices and incentives at play in a particular circumstance, it seems most productive to act that way. This is also known as rational behavior, which is what economics is all about.”

SuperFreakonomics peeks into the fascinating intellectual journeys of economists, doctors, safety analysts, climatologists (and yes, even a hooker) who have challenged the wisdom of the status quo and emerged with bold insights that carry the power to transform society. Here are just a few of the unconventional conclusions from SuperFreakonomics:

  • The “eat local” movement actually does more environmental harm than good.
  • Ditto for the Endangered Species Act.
  • Terrorists are more likely to come from affluence than poverty.
  • Car seats don’t work for kids over age two.
  • Trees may be causing more global warming than humans.
  • Unless you’re dying, heading for the hospital may not be your best emergency strategy.

Levitt and Dubner even have an inexpensive plan to save the planet from global warming … or is it cooling? But the best news yet is that the world and all its millions of moving parts have still not been completely explained. Having failed in their self-assigned though no-less-Herculean Freakonomics quest to explore (in 207 pages or less) the hidden side of everything, Levitt and Dubner are forced to “admit to lying in our previous book.” SuperFreakonomics is, as the title suggests, bigger and better than ever, and yet the authors warn that even after a second installment, there are still a few things that remain to be addressed. It may be a pie-in-the-sky goal, but readers and fans of these not-so-dismal scientists can hope that the pursuit of everything will continue to be a long and wordy chase.

Thursday, February 11, 2010

Some animals can instinctively solve navigational problems that have baffled humans for centuries.

The nervous system of the desert ant Cataglyphis fortis, with around 100,000 neurons, is about 1 millionth the size of a human brain. Yet in the featureless deserts of Tunisia, this ant can venture over 100 meters from its nest to find food without becoming lost. Imagine randomly wandering 20 kilometers in the open desert, your tracks obliterated by the wind, then turning around and making a beeline to your starting point—and no GPS allowed! That’s the equivalent of what the desert ant accomplishes with its scant neural resources. How does it do it?

“Jason,” a graduate student studying the development of human and animal cognition, discusses a remarkable series of experiments on the desert ant on his blog The Thoughtful Animal. In work spanning more than 30 years, researchers from RĂĽdiger Wehner’s laboratory at the University of Zurich’s Institute of Zoology carefully tracked the movements of ants in the desert as the insects foraged for food. One of the researchers’ key questions was how the ants calculated the direction to their nest. To rule out the possibility that the ants used landmarks as visual cues, despite the relatively featureless desert landscape, the researchers engaged in a bit of trickery. They placed a food source at a distance from a nest, then tracked the nest’s ants until the ants found the food. Once the food was found, the ants were relocated from that point, so that the true way back to their nest now lay in a different direction. The relocated ants nevertheless walked off in the same compass direction they would have taken had they never been moved, even though that heading now led away from the nest. This suggested that the ants are not following landmarks, but orienting themselves relative to an internal navigation system or (as turned out to be the case) the position of the Sun in the sky.

No matter how convoluted a route the ants take to find the food, they always return in a straight-line path, heading directly home. The researchers discovered that the ants’ navigation system isn’t perfect; small errors arise depending on how circuitous their initial route was. But the ants account for these errors as well, by walking in a corrective zigzag pattern as they approach the nest.

So how do the ants know how far to travel? It could still be that they are visually tracking the distance they walk. The researchers tested this by painting over the ants’ eyes for their return trip, but the ants still walked the correct distance, indicating that the ants are not using sight to measure their journeys.

Another possibility is that the ants simply count their steps. In a remarkable experiment published in Science in 2006, scientists painstakingly attached “stilts” made of pig hairs to some of the ants’ legs, and clipped other ants’ legs short, once the ants had reached their food target. If the ants counted their steps on the journey out, then the newly short-legged ants should stop short of the nest, while stilted ants should walk past it. Indeed, this is what occurred! Ants count their steps to track their location. (If only you had remembered to do this before you started on your 20-kilometer desert trek…)
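Taken together, the step counter and the sun compass amount to a path-integration, or dead-reckoning, scheme: keep a running sum of every stride's displacement, and the negative of that sum is always the straight-line vector home. Below is a minimal Python sketch of that bookkeeping; it is purely illustrative, with a made-up random walk and an assumed constant stride length rather than data from the studies discussed here.

    # Illustrative path-integration sketch; the random walk and stride length are invented.
    import numpy as np

    rng = np.random.default_rng(0)
    step_length = 1.0                 # assumed constant stride length (arbitrary units)
    position = np.zeros(2)            # running sum of all stride displacements

    for _ in range(500):              # a meandering outward search path
        heading = rng.uniform(0.0, 2.0 * np.pi)  # compass direction of this stride
        position += step_length * np.array([np.cos(heading), np.sin(heading)])

    home_vector = -position           # the integrated path, pointed back at the nest
    distance = np.linalg.norm(home_vector)
    bearing = np.degrees(np.arctan2(home_vector[1], home_vector[0]))
    print(f"walk home {distance:.1f} units on a bearing of {bearing:.1f} degrees")

(The real ants are thought to combine a stride counter with a celestial compass rather than keeping Cartesian coordinates; the sketch only illustrates why summing displacements is enough to point home.)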

But other creatures have different navigation puzzles to solve. In a separate post, Jason explains a study showing how maternal gerbils find their nests. When a baby is removed from the nest, the gerbil mother naturally tries to find and retrieve it. Researchers placed one of the babies in a cup at the center of a platform, shrouded in darkness. When the mother found the baby, the platform was rotated. Did she head for the new position of her nest, with its scents and sounds of crying babies? No, she went straight for the spot where the nest had been, ignoring all these other cues. For gerbils, clearly, relying on their internal representation of their environment normally suffices, so the other information goes unheeded.

Migratory birds, on the other hand, must navigate over much larger distances, some of them returning to the identical geographic spot year after year. How do they manage that trick? One component, University of Auckland researcher and teacher Fabiana Kubke reports, is the ability to detect the Earth’s magnetic field. Though we’ve known about this avian sixth sense for some time, the location of a bird’s magnetic detector is still somewhat of a mystery. Last November, however, a team led by Manuela Zapka published a letter in Nature that narrowed the possibilities. Migratory European robins have magnetic material in their beaks, but also molecules called “cryptochromes” in the back of their eyes that might be used as a sort of compass. The team systematically cut the connections between these two areas and the robins’ brains, finding that the ability to orient to compass points was disturbed only when the connection to the cryptochromes was disrupted.

Much remains to be learned about how birds can successfully migrate over long distances. Unlike ants and gerbils, they can easily correct for large displacements in location and still return to the correct spot. As researchers learn more about how animals—including humans—navigate, look for more discussion of their results on ResearchBlogging.org.

Saddle-node bifurcation

In the mathematical area of bifurcation theory a saddle-node bifurcation or tangential bifurcation is a local bifurcation in which two fixed points (or equilibria) of a dynamical system collide and annihilate each other. The term 'saddle-node bifurcation' is most often used in reference to continuous dynamical systems. In discrete dynamical systems, the same bifurcation is often instead called a fold bifurcation. Another name is blue skies bifurcation in reference to the sudden creation of two fixed points.

If the phase space is one-dimensional, one of the equilibrium points is unstable (the saddle), while the other is stable (the node).

The normal form of a saddle-node bifurcation is:

\frac{dx}{dt}=r+x^2

Here x is the state variable and r is the bifurcation parameter.

  • If r < 0 there are two equilibrium points, a stable equilibrium point at -\sqrt{-r} and an unstable one at +\sqrt{-r}.
  • At r = 0 (the bifurcation point) there is exactly one equilibrium point. At this point the fixed point is no longer hyperbolic. In this case the fixed point is called a saddle-node fixed point.
  • If r > 0 there are no equilibrium points.
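As a quick illustration of this classification, here is a small Python sketch (not part of the original text) that finds the equilibria of the normal form dx/dt = r + x^2 and reads off their stability from the sign of f'(x) = 2x:

    # Equilibria of dx/dt = r + x**2 and their stability, from the sign of f'(x) = 2x.
    import numpy as np

    def equilibria(r):
        """Return (x*, stability) pairs for the saddle-node normal form."""
        if r > 0:
            return []                                   # no fixed points
        if r == 0:
            return [(0.0, "saddle-node (non-hyperbolic)")]
        x = np.sqrt(-r)
        return [(-x, "stable"), (x, "unstable")]        # negative slope => stable

    for r in (-1.0, 0.0, 1.0):
        print(r, equilibria(r))

Running it for r = -1, 0, and 1 reproduces the three cases listed above.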

A saddle-node bifurcation occurs in the consumer equation (see transcritical bifurcation) if the consumption term is changed from px to p, that is, if the consumption rate is constant and not in proportion to the resource x.

Saddle-node bifurcations may be associated with hysteresis loops and catastrophes.

Phase portrait showing a saddle-node bifurcation.

An example of a saddle-node bifurcation in two-dimensions occurs in the two-dimensional dynamical system:

 \frac {dx} {dt} = \alpha - x^2
 \frac {dy} {dt} = - y.

As can be seen from phase portraits plotted while varying the parameter α:

  • When α is negative, there are no equilibrium points.
  • When α = 0, there is a saddle-node point.
  • When α is positive, there are two equilibrium points: a saddle point and a node (either an attractor or a repellor).
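The same classification can be checked numerically: the Jacobian of this system is diagonal, [[-2x, 0], [0, -1]], so the eigenvalues at each equilibrium say directly whether it is the stable node or the saddle. The following Python sketch is illustrative and not part of the original text:

    # Fixed points of dx/dt = alpha - x**2, dy/dt = -y, classified by Jacobian eigenvalues.
    import numpy as np

    def fixed_points(alpha):
        if alpha < 0:
            return []                          # no equilibria
        if alpha == 0:
            return [0.0]                       # the saddle-node point
        return [np.sqrt(alpha), -np.sqrt(alpha)]

    for alpha in (-0.5, 0.0, 0.5):
        for x_star in fixed_points(alpha):
            jac = np.array([[-2.0 * x_star, 0.0], [0.0, -1.0]])
            eigs = np.linalg.eigvals(jac)
            if np.all(eigs < 0):
                kind = "node (stable)"
            elif np.prod(eigs) < 0:
                kind = "saddle"
            else:
                kind = "non-hyperbolic (saddle-node)"
            print(f"alpha={alpha:+.1f}, x*={x_star:+.3f}, eigenvalues={eigs}, {kind}")

For positive α this prints one stable node and one saddle, matching the description above.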

Re-Engineering the Human Immune System

Written By: Derya Unutmaz and Gary Marcus

Swine Flu. Spanish Flu. SARS. Almost every year, it seems, there is a new virus to watch out for. Roughly thirty thousand Americans die annually from a new flu strain — meaning roughly one flu fatality for every two victims of car accidents — and there is always the possibility that we will do battle with a much deadlier strain of flu virus, such as the one (cousin to the current swine flu) that killed 50 million people in 1918.

Currently, our bodies’ responses are, almost literally, catch as catch can. The immune system has two major components. Innate immunity responds first, but its responses are generic, its repertoire built-in and its memory nonexistent. On its own, it would not be enough. To deal with chronic infection and to develop responses targeted to specific pathogens, the body also relies on a second, “acquired” immune system that regulates and amplifies the responses of the inbuilt system, but also allows the body to cope with new challenges. Much of its action turns on the production of antibodies, each of which is individually tailored to the physical chemistry of a particular alien invader. In the best case, the immune system creates an antibody that is a perfect match to some potential threat, and, more than that, the acquired immune system maintains a memory of that antibody, better preparing the body for future invasions by the same pathogen. Ideally, the antibody in question will bind to — and ultimately neutralize or even kill — the potentially threatening organisms.


Alas, at least for now, the process of manufacturing potent antibodies depends heavily on chance, and on a type of lymphocyte known as the B cell. In principle, B cells have the capacity to recombine to form a nearly infinite variety of antibodies: roughly 65 different “V regions” in the genome can combine with roughly 25 “D regions” and 6 “J regions,” which further undergo random mutations. In practice, getting the right antibody depends on getting the right combination at the right time. Which combinations emerge at any given moment in any given individual is a function of two things: the repertoire of antibody molecules a given organism has already generated, and a random interplay of combination and mutation that is much like natural selection itself — new B cells that are effective in locking onto enemy pathogens persist and spread; those that do a poor job tend to die off.
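For a rough sense of scale, the gene-segment counts quoted above already give a sizeable combinatorial base before any mutation enters the picture. A back-of-the-envelope calculation, ignoring junctional diversity and chain pairing (each of which multiplies the total enormously):

    # Germline V-D-J combinations from the rough counts quoted above.
    v_regions, d_regions, j_regions = 65, 25, 6
    print(v_regions * d_regions * j_regions)   # 9750 combinations before any mutation

Junctional mutation and chain pairing push this base number up by many orders of magnitude, which is why the passage can speak of a nearly infinite variety of antibodies.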

Unfortunately, there is no guarantee that this system will work. In any given individual there may be no extant antibody that is sufficiently close. If there is a hole in a given individual's repertoire, that individual may never develop an adequate antibody. Even if there is an adequate starting point, the immune system still may fail to generate a proper antibody. The most useful mutations may or may not emerge, in part because the whole system is governed by a second type of immune cell known as the T cell. The job of T cells is to recognize small fragments of viral proteins, known as peptides, and then help the B cells produce antibodies. Like B cells, T cells also have a recombinative system, generating billions of different receptors, only a few of which will recognize a given viral antigen. In effect, two separate systems must independently identify the same pathogen in order for the whole thing to work. At its best, the system is remarkably powerful — a single exposure to a pathogen can elicit a protective antibody that lasts a lifetime; people who were exposed to Spanish flu in 1918 still retain relevant antibodies today, 91 years later. But the system can be hit-or-miss. That same Spanish flu claimed 50 million lives, and there is no assurance that any given person will be able to generate the antibodies they need, even if they are vaccinated.

Artwork courtesy of Derya Unutmaz, MD

IMMUNITY 2.0

For now, the best way to supplement the body’s own defenses is through vaccines, but vaccines are far from a panacea. Each vaccine must be prepared in advance, few vaccines provide full protection to everybody, and despite popular misconception, even fewer last a lifetime. For example, smallpox vaccinations were lifelong, but tetanus vaccines generally last 5-10 years. There is still no vaccine for HIV infection. And when it comes to bacteria like tuberculosis, current vaccines are almost entirely ineffective. What’s more, the whole process is achingly indirect. Vaccines work by first stimulating B cells and T cells in order to induce production of antibodies. They don’t directly produce the needed antibodies. Rather, they try (not always successfully) to get the body to generate its own antibodies. In turn, stimulation of T cells requires yet another set of cells — called dendritic cells — and the presence of a diverse set of molecules called the major histocompatibility complex, creating still further complexity in generating an immune response.

Our best hope may be to cut out the middleman. Rather than merely hoping that the vaccine will indirectly lead to the antibody an individual needs, imagine if we could genetically engineer these antibodies and make them available as needed. Call it immunity-on-demand.

At first blush, the idea might seem farfetched. But there’s a good chance this system, or something like it, will actually be in place within decades. For starters, as mentioned above, every T cell and B cell expresses a unique receptor that recognizes a very small piece of a foreign structure, such as a protein, from viruses or bacteria. Recent advances in genetic technology have made it possible to reprogram B cells, directly or through stem cells, to produce antibodies against parts of viral or bacterial proteins. Similarly, a new clonal army of T cells genetically engineered to recognize parts of a virus or bacterium would help the B cells produce potent antibodies against the soft spots of these pathogens, antibodies that could then neutralize or kill them.

Influenza Virus and Antibodies

Already scientists at Caltech, headed by Nobel laureate David Baltimore, have engineered stem cells that can be programmed into B cells, which produce potent antibodies against HIV. Meanwhile, cancer researcher Steven Rosenberg at NIH has been engineering clonal T cells capable of recognizing tumors and transferring these cells to patients with a skin cancer called melanoma. His work has shown promising results in clinical trials. Together, these results could lay the groundwork for a new future, in which relevant antibodies and T cell receptors are directly downloaded, rather than indirectly induced.

Of course, many challenges remain. The first is to be able to better understand the pathogens themselves: each has an Achilles’ heel, but we’ve yet to find a fully systematic way of finding any given pathogen’s weakness, a prerequisite for any system of immunity on demand. It will also be important to develop structural models to artificially create the antibodies and T cell receptors that can recognize these regions. Eventually, as computational power continues to grow and as our structural biology knowledge increases, we may be able to design artificial vaccines completely in silico. For now, this is more dream than reality.

The real obstacle, however, is not the creation or the manufacture of protective antibodies against pathogens, but the delivery of those antibodies or cells into the body. Currently the only way to deliver antibodies into the body is difficult and unreliable. One needs to isolate stem or immune cells (B and T cells) from each individual patient and then custom-tailor the receptors for their genetic backgrounds, a process that is far too expensive to implement on a mass scale. Stem cells, nonetheless, do offer real promise. Already it seems plausible that in the future, bioengineers could create new stem cells from your blood cells and freeze them until needed. If there were to be a deadly new virus, bioprogrammers could design the potential immune receptors and genetically engineer and introduce them into your stored stem cells, which can then be injected into your blood. Eventually it may even be possible to deliver the immune receptor genes directly into your body, where they would target the stem cells and reprogram them.


All this is, of course, a delicate proposition. In some ways, an overactive immune system is as much of a risk as an underactive one: more than a million people worldwide a year die from collateral damage, like septic shock after bacterial infection, and inflammations that may ultimately induce chronic illness such as heart disease and perhaps even cancer. Coping with the immune system’s excesses will require advances in understanding the precise mechanisms of immune regulation. This fine-tuning of the immune response could also have the bonus effect of preventing autoimmune diseases.

We are not sure when this will all happen, but there’s a good chance it will, and perhaps the only question is when. There was a great leap forward in medicine when sterilization techniques were first implemented. Here’s to the hope that the fruits of information technology can underwrite a second, even bigger leap.

Derya Unutmaz is an Associate Professor of Microbiology and Pathology at N.Y.U. School of Medicine. His current research is focused on understanding the function of the human immune system.

Gary Marcus is an author and a Professor of Psychology at NYU. His most recent book is Kluge: The Haphazard Construction of the Human Mind.

When it comes to scientific publishing and fame, the rich get richer and the poor get poorer. How can we break this feedback loop?

For to all those who have, more will be given, and they will have an abundance; but from those who have nothing, even what they have will be taken away.
—Matthew 25:29

Sociologist Robert K. Merton was the first to publish a paper on the similarity between this phrase in the Gospel of Matthew and the realities of how scientific research is rewarded, though he was likely not the first to note that famous scientists reap more credit than unknowns. Even if two researchers do similar work, the more eminent of the pair will get more acclaim, Merton observed—more praise within the community, more or better job offers, better opportunities. And it goes without saying that even if a graduate student publishes stellar work in a prestigious journal, their well-known advisor is likely to get more of the credit.

Merton published his theory, called the “Matthew Effect,” in 1968. At that time, the average age of a biomedical researcher in the US receiving his or her first significant funding was 35 or younger. That meant that researchers who had little in terms of fame (at 35, they would have completed a PhD and a post-doc and would be just starting out on their own) could still get funded if they wrote interesting proposals. So Merton’s observation about getting credit for one’s work, however true in terms of prestige, wasn’t adversely affecting the funding of new ideas.

But that has changed. Over the last 40 years, the importance of fame in science has increased. The effect has compounded because famous researchers have gathered the smartest and most ambitious graduate students and post-docs around them, so that each notable paper from a high-wattage group bootstraps their collective power. The famous grow more famous, and the younger researchers in their coterie are able to use that fame to their benefit. The effect of this concentration of power has finally trickled down to the level of funding: The average age on first receipt of the most common “starter” grants at the NIH is now almost 42. This means younger researchers without the strength of a fame-based community are cut out of the funding process, and their ideas, separate from an older researcher’s sphere of influence, don’t get pursued. This causes a founder effect in modern science, where the prestigious few dictate the direction of research. It’s not only unfair—it’s also actively dangerous to science’s progress.

It’s time to start rethinking the way we reward and fund science. How can we fund science in a way that is fair? By judging researchers independently of their fame—in other words, not by how many times their papers have been cited. By judging them instead via new measures, measures that until recently have been too ephemeral to use.

Right now, the gold standard worldwide for measuring a scientist’s worth is the number of times his or her papers are cited, along with the importance of the journal where the papers were published. Decisions of funding, faculty positions, and eminence in the field all derive from a scientist’s citation history. But relying on these measures entrenches the Matthew Effect: Even when the lead author is a graduate student, the majority of the credit accrues to the much older principal investigator. And an influential lab can inflate its citations by referring to its own work in papers that themselves go on to be heavy-hitters.

But what is most profoundly unbalanced about relying on citations is that the paper-based metric distorts the reality of the scientific enterprise. Scientists make data points, narratives, research tools, inventions, pictures, sounds, videos, and more. Journal articles are a compressed and heavily edited version of what happens in the lab.

The scientific paper—a vehicle for spreading information about techniques and ideas, not about a researcher’s worth—wasn’t intended for the uses we’ve devised for it. Now that we have other ways of assessing scientists’ merits, we should turn to those instead.

We have the capacity to measure the quality of a scientist across multiple dimensions, not just in terms of papers and citations. Was the scientist’s data online? Was it comprehensible? Can I replicate the results? Run the code? Access the research tools? Use them to write a new paper? What ideas were examined and discarded along the way, so that I might know the reality of the research? What is the impact of the scientist as an individual, rather than the impact of the paper he or she wrote? When we can see the scientist as a whole, we’re less prone to relying on reputation alone to assess merit.

Multidimensionality is one of the only counters to the Matthew Effect we have available. In forums where this kind of meritocracy prevails over seniority, like Linux or Wikipedia, the Matthew Effect is much less pronounced. And we have the capacity to measure each of these individual factors of a scientist’s work, using the basic discourse of the Web: the blog, the wiki, the comment, the trackback. We can find out who is talented in a lab, not just who was smart enough to hire that talent. As we develop the ability to measure multiple dimensions of scientific knowledge creation, dissemination, and re-use, we open up a new way to recognize excellence. What we can measure, we can value.

We don’t have to throw away the citation. Indeed, we shouldn’t. The citation has evolved to its position of importance because it’s a solid data point of value. We just have to put it into the context of all the other data points that it has always lived amongst, but have—until now—been prohibitively expensive to measure and reward.

“This all goes back to the idea of ‘carrying capacity’,” Sutton adds. “You can only put so many goldfish in a tank before they’re killed by their own wastes, and you can only raise so many livestock on a certain size of land.” Humans are in a unique position, he says, because Earth’s carrying capacity for us depends intimately on how we choose to live and use technology: Billions of bicycle-riding, solar-panel-building vegetarians will produce a different planet than billions of Hummer-driving, coal-burning carnivores. Thus, the influence of technology on Earth’s carrying capacity is difficult to predict and discern. “Some cultures will use technology to increase their environmental impact; some will use it to decrease it,” Sutton says. “But nighttime satellite imagery may offer a good proxy for its use now.”

Saturday, February 6, 2010

Negative Results

http://www.jnr-eeb.org/index.php/jnr

http://wiki.bioinformatics.ucdavis.edu/index.php/Main_Page

http://seqanswers.com/

http://jsur.org/

http://www.jnrbm.com/

The problem with the JSUR model, and the nature of discovery

I expect JSUR will be a great way to comment on methods and techniques. Indeed, it will codify a trend that has been going on for some time: public protocol knowledge sharing. Many sites like openwetware, seqanswers or the UC Davis bioinformatics wiki have been doing this for a while, not to mention a plethora of blogs. Scientists are willing to share their experience with working protocols and procedures, and if this sharing of knowledge can now be monetized into that all-important coin of academia, the peer-reviewed publication, all the better.

So where is the problem? The problem lies with discovery, and the credit given for it. It would be very hard to get anyone to share awkward, unexpected or yet-uninterpreted results. First, as I said, no one wants to look like an idiot. Second, unexpected or yet-uninterpreted results are often viewed as a precursor to yet another avenue of exploration. A scientist would rather pursue that avenue, with the hope of the actual meaningful discovery occurring in the lab. At most, there will be a consultation with a handful of trusted colleagues in a closed forum. If the results are made public, someone else might take the published unexpected and uninterpreted results, interpret them using complementary knowledge gained in their own lab, and publish them as a bona fide research paper. The scientist who catalyzed the research paper with his JSUR publication receives, at best, secondary credit. The story of Rosalind Franklin’s under-appreciated contribution to the discovery of the structure of DNA comes to mind. Watson and Crick used the X-ray diffraction patterns generated by Franklin to solve the three-dimensional structure of the DNA molecule, yet she was not given co-authorship on the paper. (And she did not even make the results public; they were shared without her knowledge.) Unexpected results are viewed either as an opportunity or an embarrassment, and given the competitive nature of science, no one wants to advertise either: the first for fear of getting scooped, the second for fear of soiling a reputation. I expect JSUR will have a harder time filling the odd-results niche, but I hope I am wrong.

But if you have protocols you are willing to share… what are you waiting for? Get out those old lab notebooks, 00README files, and forum posts and start editing them into a paper. You are sitting on a goldmine of publishable data and you did not even realize it.

SUR? Yes, sir.

The most exciting phrase to hear in science, the one that heralds new discoveries, is not ‘Eureka!’, but ‘That’s funny…’ -Isaac Asimov

Thanks to Ruchira Datta for pointing out this one.

Science is many things to many people, but any lab-rat will tell you that research is mainly long stretches of frustration, interspersed with flashes of satisfying success. The best laid schemes of mice and men gang aft agley. A scientist’s path contains more leads into blind alleys than anything else, and meticulous experimental preparation only serves to somewhat mitigate the problem, if you’re lucky. This doesn’t work, that doesn’t work either, and this technique worked perfectly in Dr. X’s lab, so why can’t I get it to work for me? My experiment was invalidated by my controls; my controls didn’t work the way controls were supposed to work in the first place. I keep getting weird results from this assay. I can’t explain my latest results in any coherent way… These statements are typical of daily life in the lab.

This stumped and stymied day-to-day life is not the impression of science we get from reading a research paper, listening to a lecture, or watching a science documentary. When science is actually presented, it seems that the path to discovery was carefully laid out, planned, and flawlessly executed, a far cry from the frustrating, bumbling mess that really led to the discovery. There are three chief reasons for the disparity between how research is presented and what really goes on. First, no one wants to look like an idiot, least of all scientists, part of whose professional trappings is strutting their smarts. Second, there are only so many pages in which to write a paper, one hour to present a seminar, or one hour for a documentary: there is no time to present all the stuff that did not work. Third, who cares about what didn’t work? Science is linked to progress, not to regress. OK, you had a hard time finding this out; we sympathize and thank you for blazing the trail for the rest of us. Make a note for yourself not to go into those blind alleys that held you back for years and move on. We’re not interested in your tales of woe.

Only maybe these tales of woe should be interesting to other people. If you make your negative results public, that could help others avoid the same pitfalls. If you share the limits of a technique, a protocol or a piece of software, then someone can avoid using it in a way that does not work. A lab’s publications are really just the tip of the iceberg of its accumulated knowledge. Every lab has its own oral tradition of accumulated do’s and dont’s. Not oral in the literal sense: they may even be written down for internal use, but never published. UPDATE (2-FEB-2010): Most peer-reviewed journals don’t like stuff that does not work. Thanks to Mickey Kosloff for pointing out the Journal of Negative Results in Biomedicine and the Journal of Negative Results – Ecology and Evolutionary Biology.

Until now.

The Journal of Serendipitous and Unexpected Results aims to help us examine the sunken eight-ninths of the scientific knowledge iceberg, in life science and in computer science. (So it covers an additional field beyond JNRBM and JNR-EEB.) From JSUR’s homepage:

Help disseminate untapped knowledge in the Computational or Life Sciences

Can you demonstrate that:

* Technique X fails on problem Y.
* Hypothesis X can’t be proven using method Y.
* Protocol X performs poorly for task Y.
* Method X has unexpected fundamental limitations.
* While investigating X, you discovered Y.
* Model X can’t capture the behavior of phenomenon Y.
* Failure X is explained by Y.
* Assumption X doesn’t hold in domain Y.
* Event X shouldn’t happen, but it does.



Monday, February 1, 2010

Fine Print on Perfectionism: What It Is & What It Isn’t

By Pavel G. Somov, Ph.D.

Perfectionism is the central feature of OCPD (obsessive compulsive personality disorder), not to be confused with OCD (obsessive compulsive disorder; think Tony Shalhoub in the TV series “Monk” or Jack Nicholson’s character in “As Good As It Gets”). Perfectionism, as part of OCPD, is also characterized by such traits as an over-concern with details, excessive devotion to work and productivity (at the expense of leisure), excessive conscientiousness, scrupulousness, thriftiness, inflexibility and rigidity in issues of morality and ethics, reluctance to delegate tasks, and reluctance to relinquish control (Pfohl & Blum, 1991).

Perfectionism is mostly a result of learning, programming and conditioning. I see it as an ingenious adaptation to a hyper-critical, high-pressure, invalidating environment, a psychological self-defense strategy that unfortunately creates more problems than it solves. Most of the perfectionists I have worked with had perfectionistic or narcissistic parents. Aside from parental influence, the extent of perfectionism depends on the culture you live in. Some societies are more culturally perfectionistic than others. The so-called “developed societies,” for example, tend to emphasize “efficiency, punctuality, a willingness to work hard, and orientation to detail,” i.e. the very traits that may accompany perfectionism and OCPD (Millon et al., 2000, p. 174).

Perfectionism can be directed at oneself and/or at others (Flett & Hewitt, 2002). Self-directed (inwardly-oriented) perfectionists are notoriously hard on themselves: if they make a mistake they shred themselves to pieces in ruminating bouts of merciless self-scrutiny. Whereas self-directed perfectionists are their own worst critics, other-directed (outwardly-focused) perfectionists are tough on others and are easily frustrated by others’ imperfections. The literature on perfectionism also distinguishes between generalized (or “extreme”) perfectionism (in which perfectionists pursue “extreme standards across a variety of life domains”) and situational perfectionism (in which perfectionism is limited to specific areas of life) (Flett & Hewitt, 2002, p. 16). Situational perfectionism is, up to a point, adaptive. Indeed, some jobs have extremely narrow margins of error and require high level technical precision (e.g. surgeon) and protocol compliance (e.g. Secretary of State). When, however, perfectionism becomes a way of living (rather than a way of earning a living) then you have a case of generalized or “extreme” perfectionism.

If you are recognizing yourself in any of this, worry not: chances are your prognosis is good! How can I assert that without knowing you? My work with perfectionists has taught me that perfectionists are a highly motivated lot, perfectly positioned for a self-help approach. There’s one speed-bump, however: a perfectionist in treatment/therapy – just like anywhere else – wishes to excel. That, of course, boomerangs, because the relentless striving to stop being perfectionistic reinforces the very perfectionism that the perfectionist is trying to overcome. The intuitive strategies (of, say, banning perfectionistic thoughts) tend to fall short, which calls for more paradoxical, out-of-left-field, lateral (pattern-interrupting), and experiential interventions. In sum, the trick is not to fight your perfectionism but to let go of it. As for the know-how of this elusive “letting go” process, it is too long a story to describe in a blog post. We (your mind and mine) will have to meet on the pages of my upcoming book (Present Perfect) to take this discussion to another level.

Breakfast of Consciousness

By Pavel G. Somov, Ph.D.

“Once I, Zhuang Zhou, dreamt that I was a butterfly fluttering about happily. I did not know that I was Zhou. Suddenly, I awoke, and there I was, Zhou again. I did not know whether it was Zhou dreaming that he was a butterfly or a butterfly dreaming that it was Zhou” (Zhuangzi).

Are you awake or asleep? Are you sure? How do you know which is which?

“When they were dreaming, they did not know it was a dream. They even tried to interpret their dreams while they were dreaming. They did not know that it was a dream until they awoke. <…> The foolish think that they are awake and insist that they know it. <…> I who say that you are dreaming am also dreaming” (Zhuangzi).

As an awakening to what is, is Enlightenment then a form of insomnia, an inability to fall back asleep into the dream that one is awake?

“A man who sleeps cannot ‘do.’ With him everything is done in sleep. Sleep is understood here not in the literal sense of our organic sleep, but in the sense of associative existence. First of all he must awake. Having awakened, he will see that as he is he cannot ‘do’” (Gurdjieff).

A reaction is not an action, but a re-enactment of the previously learned response to a given stimulus, triggered by the confusion of what is with what once was… Unless consciously chosen by us, life happens – to us – much like a dream in which we passively default to our perceptual-behavioral presets…

As an awakening to what is, is Enlightenment nothing more than a form of lucid dreaming in which one chooses to dream that he or she is awake? A kind of Reality TV with a running ticker tape of “YOU ARE DREAMING… YOU ARE DREAMING… YOU ARE DREAMING…”

“There is nothing but mind” (Niguma).
“The mind indeed is of the form of space. <…> The mind is the past. The mind is all. But in reality there is no mind” (Dattatreya).

Hmm… So, is all our existential sense-making just a circular attempt to deconstruct the non-sense of our mental constructions? A solipsist tail-chasing spin around one’s own I-axis?

“Election time’s coming.
Who you gonna vote for?
If I was President,
I’d get elected on Friday,
Assassinated on Saturday,
Buried on Sunday, then go back to work on Monday.
If I was President, if I was President, if I was President” (Wyclef Jean).

No, not a non sequitur… but a circadian reincarnation theory of sorts: where sleep is a death of conscious presence and an awakening is a kind of rebirth… if one, indeed, re-awakens and not just journeys from nocturnal sleep to a waking dream…


So, what are you doing this morning: waking up… or falling asleep? Will today be just another day that “happens” to you or will you “do” some conscious living?

Whatever you elect it to be, preside over your reality… Snuff out the illusions…

At any rate: have some consciousness for breakfast!