Saturday, April 24, 2010

Sergey Buldyrev

A study from Sergey Buldyrev and colleagues was published in Nature the day before Eyjafjallajökull’s eruption. The researchers investigated catastrophic failures in complex networked systems—systems like the closely coupled infrastructures underlying modern transportation, electricity distribution, telecommunications, and financial transactions. These systems are constructed from many interdependent nodes, which gives them greater stability and resilience: If one node fails, material, money, energy, or people are routed through other nodes, and functionality is maintained. But past a certain critical threshold of node failures, the system fragments and cannot function.

Buldyrev’s team modeled how disruptions percolate through a tightly linked pair of idealized interdependent networks, and found a counter-intuitive result: The failure of even a small number of nodes in one network can cause additional failures in the second. These failures can then feed back into the first network and cause yet more node failures. In other words, the greatest strength of an interdependent network in isolation is also the greatest weakness of interdependent networks as a whole. Two closely linked, highly resilient systems can suffer catastrophic failure through initially small disruptions that would have been essentially harmless to either network individually. What’s true for two linked networks presumably holds for larger assemblages.
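The cascade Buldyrev's team analyzed can be sketched in a few dozen lines. The following toy simulation is my own minimal sketch, not the paper's model: the one-to-one dependency (node i in network A depends on node i in network B, and vice versa) and the Erdős-Rényi random topology are simplifying assumptions. A node keeps functioning only while it sits in the giant connected component of its own network and its partner across the coupling still functions; each round of failures in one network feeds back into the other.

```python
import random

def giant_component(nodes, edges):
    """Largest connected component among `nodes`, using only edges
    whose endpoints both survive."""
    adj = {v: [] for v in nodes}
    for a, b in edges:
        if a in adj and b in adj:
            adj[a].append(b)
            adj[b].append(a)
    seen, best = set(), set()
    for start in nodes:
        if start in seen:
            continue
        comp, stack = {start}, [start]
        seen.add(start)
        while stack:
            v = stack.pop()
            for w in adj[v]:
                if w not in seen:
                    seen.add(w)
                    comp.add(w)
                    stack.append(w)
        if len(comp) > len(best):
            best = comp
    return best

def cascade(n, mean_degree, removed_fraction, seed=1):
    """Mutual percolation on two coupled Erdős-Rényi networks:
    node i in network A depends on node i in network B and vice versa.
    Returns the fraction of nodes still functioning after the cascade."""
    rng = random.Random(seed)
    p = mean_degree / (n - 1)
    def er_edges():
        return [(i, j) for i in range(n) for j in range(i + 1, n)
                if rng.random() < p]
    edges_a, edges_b = er_edges(), er_edges()
    alive = set(range(n))
    alive -= set(rng.sample(range(n), int(removed_fraction * n)))  # attack A
    while True:
        # a node survives only inside the giant component of BOTH
        # networks; each shrink in one network ricochets into the other
        surviving = giant_component(giant_component(alive, edges_a), edges_b)
        if surviving == alive:
            return len(alive) / n
        alive = surviving

mild = cascade(1000, 4.0, 0.15)   # small attack: most of the system survives
harsh = cascade(1000, 4.0, 0.60)  # past the critical point: near-total collapse
print(mild, harsh)
```

With a mild attack the coupled system retains most of its nodes; push the removed fraction past the critical threshold and the mutual giant component collapses almost entirely, the abrupt first-order transition the paper describes.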

A better message for Earth Day would be more frightening, and closer to the truth: save the humans. The planet will endure our collective ravages and the biosphere will eventually rebound. The world has certainly changed over the past forty years, but we have changed even more. As the products of countless interdependent complex systems, we seem to somehow harbor their flaws. Now, we’re making them manifest, eliminating the interstices that used to protect and insulate the thin veneer of life that glosses this planet. Our greatest strength has become our greatest weakness; we are complex, but fragile. Our systems are connecting, with each node at our fingertips. Alas, if only their failures could teach us to fly.

Sunday, April 11, 2010

“Conciseness in art is essential and a refinement. The concise man makes one think; the verbose bores. Always work towards conciseness.” – Édouard Manet

Saturday, April 10, 2010

Good to Great by Jim Collins

“Good is the enemy of great.”

Magnifying the Quantum World by Dave Munger

In the 1870s, when Max Planck was still a young German university student, his professor Philipp von Jolly discouraged him from continuing to pursue physics, reportedly saying that nothing was left to discover in the field except for a few minor details.

Undaunted, Planck became a professor of physics at the University of Berlin, and by 1900 had developed a theory that would turn physics upside-down: Electromagnetic energy could only be emitted in discrete packets, or “quanta.” The field of quantum mechanics was born, and its ramifications continue to echo through physics today. Indeed, modern quantum researchers aren’t just filling in minor details; they’re still adding in leaps and bounds to our knowledge of how the world fundamentally works.

Planck’s breakthrough came out of his studies of “black bodies,” idealized objects that perfectly absorb and then re-emit electromagnetic radiation. In reality, nothing can absorb light so perfectly, but many real-world objects, like a hunk of iron, absorb and emit electromagnetic radiation much as a black body would. As an iron ingot is heated, it begins to emit electromagnetic radiation across a spectrum of frequencies. When it’s quite hot, the ingot glows red—and as its temperature rises further, the ingot will progressively turn orange, then yellow, then white. These are only the frequencies we can see—the ingot, of course, is emitting invisible electromagnetic radiation too, at frequencies like infrared. Planck studied this “black-body spectrum,” analyzing precise measurements of how changing temperature affected the radiation a black body emitted. In that work, he came to realize that the energy emitted at any frequency couldn’t vary smoothly and continuously; it changed in discrete steps. Planck never fully grasped the implications of his discovery, but Einstein and other physicists soon began to see its reach. Their conclusion: Everything in the universe—energy, light, particles, and all the macroscopic objects they form and influence—is somehow quantized, and subject to strange probabilistic behavior that defies classical explanation. In the quantum world, objects can be in multiple places at the same time, can simultaneously harbor mutually exclusive states, and can pop in and out of existence spontaneously. Even Richard Feynman, the Nobel-Prize-winning physicist who arguably had a better grasp of quantum mechanics than anyone else in the 20th century, quipped that no one really understood it.
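The black-body spectrum Planck explained has a closed form, and the color shift of the heated ingot falls right out of it. A short sketch (standard textbook physics, not drawn from Munger's article) that evaluates Planck's law and numerically locates the emission peak, which slides to shorter, bluer wavelengths as temperature rises:

```python
import math

# physical constants (SI units)
h = 6.62607015e-34   # Planck constant, J*s
c = 2.99792458e8     # speed of light, m/s
kB = 1.380649e-23    # Boltzmann constant, J/K

def planck_radiance(wavelength_m, temp_k):
    """Planck's law: spectral radiance of a black body (W / m^2 / sr / m)."""
    a = 2.0 * h * c**2 / wavelength_m**5
    return a / math.expm1(h * c / (wavelength_m * kB * temp_k))

def peak_wavelength(temp_k):
    """Numerically locate the wavelength of maximum emission."""
    lams = [i * 1e-9 for i in range(100, 20000)]   # 100 nm .. 20 um grid
    return max(lams, key=lambda lam: planck_radiance(lam, temp_k))

# As the ingot heats, the peak shifts toward shorter (bluer) wavelengths:
for t in (1000, 3000, 6000):
    print(t, "K ->", round(peak_wavelength(t) * 1e9), "nm")
```

At 1000 K the peak sits deep in the infrared, which is why a merely warm ingot radiates heat we feel but barely glows; by 6000 K, roughly the Sun's surface temperature, the peak lands in the visible band.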

On Closing the Culture Gap

by Paul Ehrlich

Many are aware that climate disruption may cause horrendous problems, but few seem to realize that this peril is not the only potentially catastrophic one and may not even be the most serious threat we face. Humanity finds itself in a desperate situation, but you’d never know it from listening to the media and the politicians. Loss of the biodiversity that runs human life-support systems, toxification of the planet, the risk of pandemics that increase in lockstep with population growth, and the possibility of nuclear resource wars all could be more lethal. We are finally, however, starting to understand the patterns of culture change and the role of natural selection in shaping them. And since everything from weapons of mass destruction to global heating is the result of changes in human culture over time, acquiring a fundamental understanding of cultural evolution just might be the key to saving civilization from itself.

The change will begin with clearing up the misapprehensions; even climate disruption, for instance, is widely misunderstood. Sea-level rise, displacing tens of millions of people, may be the least of it. Changing patterns of precipitation, which will likely persist over the next millennium, will create vast problems for agriculture. So will the melting of the mountain snows and glaciers that are so critical to the flows of water upon which food production depends. Furthermore, the temperature sensitivity of crops and impairment of natural pest controls will make maintaining crop yields in many areas ever more difficult. Melting of the Himalayan “water tower” (the ice and snow on those mountains and on the Tibetan plateau), combined with reduced productivity of wheat and rice, now imperils the nutrition of some 1.6 billion nuclear-armed people in south Asia. Worldwide, we face the possibility of today’s billion hungry people becoming several billion starving to death.

To help avert such an outcome, humanity must revise civilization’s water-handling infrastructure for maximum flexibility. And that could be a minor chore compared with the necessary restructuring of the world’s energy economy in the next few decades, or addressing the racism, sexism, and economic inequity that make environmental problems so difficult to solve in the first place.

Educated people generally realize that humanity’s negative impacts on our life-support systems are tightly tied to population size; for instance, the more people, the more greenhouse gases are added to the atmosphere. But few realize that the 2.5 billion people projected to be added to the human population by midcentury will have a much greater destructive impact than the last 2.5 billion. People are smart and therefore naturally use the most concentrated, highest-grade resources first. So each additional person must be fed from more marginal land, equipped with objects made of metal won from poorer ores, supplied with water from more distant sources or expensively purified, and so on. Similarly, while politicians and many economists believe that increasing consumption is the cure for all economic ailments, it is only because they do not understand that the human economy is a wholly owned subsidiary of nature’s economy. They have not yet learned that it is the aggregate consumption of Homo sapiens that is destroying our natural capital. They are unwitting victims of the culture gap.

For most of our species’ existence, all members of hunter-gatherer bands possessed virtually the entire body of their group’s non-genetic information—its culture. But since the agricultural revolution, and especially in the past century or two, that situation has changed completely. No living person knows even a billionth of the cultural information possessed by humanity. No reader of Seed could assemble a 747 from its parts, let alone tell how each part was manufactured, where, and from what. Of course, there’s no way to close that enormous culture gap now. But critical parts of it could be filled in, so that most people would know, for example, what an ecosystem service is, the difference between ozone depletion and climate disruption, the biological significance of skin pigmentation, and the importance of the second law of thermodynamics.

Scientists today believe that such critical information must be disseminated and quickly acted upon to avoid catastrophe. But that is not happening, as indicated by the “much talk, little action” status of climate change. The central need is clearly not for more natural science research (although in many areas it would be very helpful). Rather, the social sciences and humanities need to be reorganized and refocused—“rebooted”—to provide better understanding of human behaviors and how they can be altered. Our civilization must move toward the formation of a sustainable, empathic, global family. Its members must be able to cooperate intensively to deal with global problems before it is too late.

That’s why a group of natural scientists, social scientists, and scholars from the humanities decided to inaugurate a Millennium Assessment of Human Behavior (MAHB, pronounced “mob”). It was so named to emphasize that it is human behavior, toward one another and toward the ecosystems that sustain us all, that requires both better understanding and rapid modification. The idea is that the MAHB might become a basic mechanism to expose society to the full range of population-environment-resource-ethics-power issues, and sponsor research on how to turn that knowledge into the required actions. Perhaps most important, the MAHB would stimulate a broad global discussion involving the greatest possible diversity of people, about what people desire, the ethics of those desires, and which are possible to meet in a sustainable society. It would, I hope, serve as a major tool for altering the course of cultural evolution.

The MAHB would differ from other global efforts such as the IPCC and the Millennium Ecosystem Assessment in that public input and outreach would play a much more dominant role. Local MAHB discussion groups are already forming, and I hope that the MAHB will kick off with a world megaconference around 2012 on the scale of the United Nations Conference on Environment and Development that was held in Rio de Janeiro two decades earlier. The purpose of the MAHB conference would be to initiate a continuing global discussion, creating the MAHB as a new, semipermanent institution. The whole MAHB program is now at a preliminary stage, and the need for input from those accustomed to working in the social sciences and humanities, in the media, in the business community, in NGOs, as well as the general public, is obvious.

The really big MAHB issue is how to ethically reorganize global civilization and consciously reshape its norms so that humanity can transition to a sustainable and fair society. Finding convergence will be challenging: There are few ethical universals, and it is highly unlikely that many will be agreed upon soon, although most ethical systems do converge on some basic elements (for instance, murder and cheating are wrong).

We clearly need an international discussion of such contentious topics as the degree to which wealth should be redistributed from rich to poor, what people owe to future generations, or the kind of population-control programs that are ethically justified. Such discussions should involve not just “leaders” but as many diverse publics as possible. Humanity’s future hangs on finding broad agreement on such major eco-ethical decisions. One role of the MAHB would be to facilitate discussion and debate of those usually ignored topics. The MAHB could also serve as a rallying point for myriad organizations now fighting for environmental quality and social justice. The discourse would emphasize that our brilliant, dominant species has undermined its own life-support systems. It now faces a daunting array of self-generated threats, ones that the human family could cooperatively organize to fight and, with luck, overcome, thereby avoiding the first collapse of a global civilization.

Paul R. Ehrlich is Bing Professor of Population Studies at Stanford University and president of Stanford’s Center for Conservation Biology.

Dr. Ehrlich hopes that MAHB can become the focus of badly needed new, coordinated efforts by social scientists, scholars in the humanities, members of the business community, and the media alike. If you are willing to get involved, go to the MAHB website.

Monday, April 5, 2010

James Sturm: The reasons are unimportant

James Sturm is a cartoonist and co-founder of the Center for Cartoon Studies in White River Junction, Vermont. He is the author of the best-selling and award-winning graphic novel The Golem's Mighty Swing, chosen as the Best Graphic Novel of 2000 by Time magazine. In 2007, his trilogy of historical graphic novels was collected in a volume entitled Sturm's America: God, Gold, and Golems.

I like the question "Why Do You Make Art?" because it assumes what I do is art. A flattering assumption. The question also takes me back to my freshman year of college, when questions like "What is nature?" and "Is reality a wave or a circle?" were earnestly debated (usually late at night and after smoking too much weed).

Twenty-five years later, I'd like to think I am a little more clear-headed regarding this question. Perhaps the only insight I've gained is the knowledge that, first, I have no idea and, second, the reasons are unimportant. Depending on my mood, on any given day, I could attribute making art to a high-minded impulse to connect with others, or to understand the world, or a narcissistic coping mechanism, or a desire to be famous, or therapy, or my religious discipline, or a need for a sense of control, or a desire to surrender control, etc., etc., etc. Whatever the reason, an inner compulsion exists, and I continue to honor this internal imperative. If I didn't, I would feel really horrible. I would be a broken man. So whether attempting to make art is noble or selfish, the fact remains that I will do it nevertheless. Anything past this statement is speculation. I would be afraid that by proclaiming why I make art I would be generating my own propaganda.

Saturday, April 3, 2010

Poetry

Abecedarian
"Abecedarian poems are now most commonly used as mnemonic devices and word games for children, such as those written by Dr. Seuss and Edward Gorey."

Anaphora
"As one of the world’s oldest poetic techniques, anaphora is used in much of the world’s religious and devotional poetry, including numerous Biblical Psalms."

Ballad
"Their subject matter dealt with religious themes, love, tragedy, domestic crimes, and sometimes even political propaganda."

Ballade
"One of the principal forms of music and poetry in fourteenth- and fifteenth-century France."

Blues Poem
"A blues poem typically takes on themes such as struggle, despair, and sex."

The Bop
"Not unlike the Shakespearean sonnet in trajectory, the Bop is a form of poetic argument consisting of three stanzas."

Cento
"From the Latin word for 'patchwork,' the cento is a poetic form made up of lines from poems by other poets."

Chance Operations
"A chance operation can be almost anything from throwing darts and rolling dice, to the ancient Chinese divination method, I-Ching, and even sophisticated computer programs."

Cinquain
"Examples of cinquains can be found in many European languages, and the origin of the form dates back to medieval French poetry."

Dramatic Monologue
"The poet speaks through an assumed voice—a character, a fictional identity, or a persona."

Ekphrasis
"Modern ekphrastic poems have generally shrugged off antiquity's obsession with elaborate description, and instead have tried to interpret, inhabit, confront, and speak to their subjects."

Elegy
"The traditional elegy mirrors three stages of loss. First, there is a lament, then praise for the idealized dead, and finally consolation and solace."

Epic
"Elements that typically distinguish epics include superhuman deeds, fabulous adventures, highly stylized language, and a blending of lyrical and dramatic traditions."

Epigram
"Candy is dandy, but liquor is quicker."

Found Poem
"The literary equivalent of a collage, found poetry is often made from newspaper articles, street signs, graffiti, speeches, letters, or even other poems."

Ghazal
"Traditionally invoking melancholy, love, longing, and metaphysical questions, ghazals are often sung by Iranian, Indian, and Pakistani musicians."

Haiku
"Often focusing on images from nature, haiku emphasizes simplicity, intensity, and directness of expression."

Limerick
"A popular form in children’s verse, the limerick is often comical, nonsensical, and sometimes even lewd."

Ode
"Originally accompanied by music and dance, and later reserved by the Romantic poets to convey their strongest sentiments."

OULIPO
"Although poetry and mathematics often seem to be incompatible areas of study, OULIPO seeks to connect them."

Pantoum
"The pantoum originated in Malaysia in the fifteenth century as a short folk poem, typically made up of two rhyming couplets that were recited or sung."

Prose Poem
"Just as black humor straddles the fine line between comedy and tragedy, so the prose poem plants one foot in prose, the other in poetry, both heels resting precariously on banana peels."

Renga
"Renga began over seven hundred years ago in Japan to encourage the collaborative composition of poems."

Rondeau
"The rondeau began as a lyric form in thirteenth-century France, popular among medieval court poets and musicians."

Sapphic
"The sapphic dates back to ancient Greece and is named for the poet Sappho, who left behind many poem fragments written in an unmistakable meter."

Sestina
"The thirty-nine-line form is attributed to Arnaut Daniel, the Provencal troubadour of the twelfth century."

Sonnet
"From the Italian sonetto, which means 'a little sound or song,' the sonnet is a popular classical form that has compelled poets for centuries."

Tanka
"One of the oldest Japanese forms, tanka originated in the seventh century, and quickly became the preferred verse form in the Japanese Imperial Court."

Terza Rima
"Invented by the Italian poet Dante Alighieri in the late thirteenth century to structure his three-part epic poem, The Divine Comedy."

Triolet
"The earliest triolets were devotionals written by Patrick Carey, a seventeenth-century Benedictine monk."

Villanelle
"Strange as it may seem for a poem with such a rigid rhyme scheme, the villanelle did not start off as a fixed form."

Deus ex machina

A deus ex machina (Latin, "god from the machine") is a plot device whereby a previously intractable problem is suddenly and abruptly solved through the often contrived introduction of a new character, ability, or object. Critics generally consider it a poor storytelling technique because it undermines the story's internal logic, although sometimes a story is deliberately constructed to go that route.

Nietzsche argues that the deus ex machina creates a false sense of consolation that ought not to be sought in phenomena and this denigration of the plot device has prevailed in critical opinion. Some 20th-century revisionist criticism suggests that the deus ex machina cannot be viewed in these simplified terms and argues rather that the device allows mortals to "probe" their relationship with the divine. Rush Rehm in particular cites examples of Greek tragedy in which the deus ex machina serves to complicate the lives and attitudes of characters confronted by the deity whilst simultaneously bringing the drama home to its audience.

Pygmalion effect

The Pygmalion effect, or Rosenthal effect, refers to the phenomenon in which the greater the expectation placed upon people (often children, students, or employees), the better they perform. The effect is named after Pygmalion, the Cypriot sculptor of Greek myth who, in Ovid's telling, fell in love with a female statue he had carved out of ivory.

The Pygmalion effect is a form of self-fulfilling prophecy: people on whom low expectations are placed internalize their negative label and fail, while those given positive labels succeed accordingly. Within sociology, the effect is often cited with regard to education and social class.

"How we believe the world is and what we honestly think it can become have powerful effects on how things will turn out."- James Rhem


Stereotype threat

Stereotype threat "is a disruptive concern, when facing a negative stereotype, that one will be evaluated based on this stereotype." One published meta-analysis conducted by Walton & Spencer (2009) found significant evidence that stereotype threat impairs the standardized test performances of African Americans and women on the SAT. However, an unpublished meta-analysis of 55 published and unpublished studies shows mixed evidence of this effect.

Definition

Stereotype threat is "a disruptive concern, when facing a negative stereotype, that one will be evaluated based on this stereotype." Stereotype threat has been shown to undermine the performance of members of a number of groups in a number of domains. “Culturally-shared stereotypes suggesting poor performance of certain groups can, when made salient in a context involving the stereotype, disrupt performance of an individual who identifies with that group” (Steele & Aronson, 1995).

Although Steele and Aronson focused on how an emphasis on race affects test performance, similar studies have demonstrated the same results for an emphasis on gender. In other studies, researchers found that “consistent exposure to stereotype threat (e.g., faced by some ethnic minorities in academic environments and women in math) can reduce the degree to which individuals value the domain in question” (Aronson et al., 2002; Osborne, 1995; Steele, 1997). Research has also found that individuals vary in how strongly stereotype threat affects them as members of a given group:

"…some members may be more vulnerable to its negative consequences than others; factors such as the strength of one’s group identification or domain identification have been shown to be related to one's subsequent vulnerability to stereotype threat" (http://www.reducingstereotypethreat.org/definition.html)

Further research has also found that when an individual identifies with a specific group, performance can be negatively affected, because of concerns that they will, in fact, confirm the negative stereotypes of that group.

Ontological Paradox

An ontological paradox is a paradox of time travel that questions the existence and creation of information and objects that travel in time. It is very closely related to the predestination paradox and usually occurs alongside it. In simpler terms: an object is brought back in time and turns out to be the very object that was brought back in the first place, leaving the loop with no point of origin.

Definition

Because of the possibility of influencing the past while time traveling, one way of explaining why history does not change is to say that whatever has happened was meant to happen. A time traveler attempting to alter the past in this model, intentionally or not, would only be fulfilling his role in creating history, not changing it. The Novikov self-consistency principle proposes that contradictory causal loops cannot form, but that consistent ones can. This principle, however, only applies if the time travel involves a wormhole or some other mechanism that returns the traveler to the same universe in which he started. With actual timeline replication, which is what much of fiction calls "time travel," this need not be the case.

However, a scenario can occur where items or information are passed from the future to the past, which then become the same items or information that are subsequently passed back. This not only creates a loop, but a situation where these items have no discernible origin. Physical items are even more problematic than pieces of information, since they should ordinarily age and increase in entropy according to the second law of thermodynamics. But if they age by any nonzero amount at each cycle, they cannot be the same item to be sent back in time, creating a contradiction.

The paradox raises the ontological questions of where, when and by whom the items were created or the information derived. Time loop logic operates on similar principles, sending the solutions to computation problems back in time to be checked for correctness without ever being computed "originally."

It is sometimes called the bootstrap paradox, in reference to the expression "pulling yourself up by your bootstraps"; the term was popularized by Robert A. Heinlein's story By His Bootstraps.
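The Novikov principle's demand for consistency has a compact computational analogue. In this toy sketch (entirely hypothetical: the mod-7 "universe" rule is invented for illustration), a value received from the future determines what the past later sends back, and only fixed points of that mapping, loops that reproduce themselves, count as consistent histories:

```python
def consistent_histories(universe, candidates):
    """Toy model of the Novikov self-consistency principle: a value
    sent back in time must produce a history that sends back the same
    value.  `universe` maps a received value to the value later sent
    back; only fixed points (universe(x) == x) are self-consistent
    causal loops, and contradictory loops simply never form."""
    return [x for x in candidates if universe(x) == x]

# Invented example: the past receives x, and events then unfold so
# that the future sends back (x * x) mod 7.  Which loops survive?
loops = consistent_histories(lambda x: (x * x) % 7, range(7))
print(loops)
```

In the same spirit, "time loop logic" would send a candidate solution back to be verified: only a correct answer yields a contradiction-free loop, so the answer is never computed "originally."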

Self-fulfilling prophecy

A self-fulfilling prophecy is a prediction that directly or indirectly causes itself to become true by the very terms of the prophecy itself, through positive feedback between belief and behavior. Although examples of such prophecies can be found in literature as far back as ancient Greece and ancient India, it is 20th-century sociologist Robert K. Merton who is credited with coining the expression "self-fulfilling prophecy" and formalizing its structure and consequences. In his book Social Theory and Social Structure, Merton gives an example: Roxanna falsely believes that her marriage will fail, and her fear of that failure in fact causes the marriage to fail.

The self-fulfilling prophecy is, in the beginning, a false definition of the situation evoking a new behaviour which makes the original false conception come 'true'. This specious validity of the self-fulfilling prophecy perpetuates a reign of error. For the prophet will cite the actual course of events as proof that he was right from the very beginning.

In other words, a prophecy declared as truth when it is actually false may sufficiently influence people, either through fear or logical confusion, so that their reactions ultimately fulfill the once-false prophecy.

Robert K. Merton's concept of the self-fulfilling prophecy stems from the Thomas theorem, which states that "If men define situations as real, they are real in their consequences." According to Thomas, people react not only to the situations they are in, but also, and often primarily, to the way they perceive the situations and to the meaning they assign to these perceptions. Therefore, their behavior is determined in part by their perception and the meaning they ascribe to the situations they are in, rather than by the situations themselves. Once people convince themselves that a situation really has a certain meaning, regardless of whether it actually does, they will take very real actions in consequence.

Friday, April 2, 2010

Synthetic Biology

http://www.synbiosafe.eu/DVD/Trailer.html

Emotion’s Alchemy

© May Lesser

You’re at breakfast enjoying a mouthful of milk when it happens: the zygomatic muscles, anchored at each cheekbone, tug the corners of your mouth backwards and up. Orbicular muscles encircling your eyeballs slowly squeeze tight beneath wrinkling skin. A 310-millisecond-long noise explodes from your throat, extending to a frequency of 10,000 Hertz. Five shorter pulses of the “h” sound follow, five times per second, hovering around 6 Hertz, each lasting a fifteenth of a second. Your heart reaches 115 beats per minute. Blood vessels relax. Muscle tone softens. Abdominal muscles clench. The soft tissue lining your upper larynx vibrates 120 times per second as air blasts past. The milk spews forth. You are laughing.

Laughter, real laugh-till-you-cry laughter, is one of many human emotional expressions. Arguably, laughing and its tearful counterpart, crying, are the loudest, most intrusive non-linguistic expressions of our species. But for all of that familiarity, they are little-understood behavioral mysteries parading in the light of everyday experience. Though evolutionary biologists have long explored the mammalian origins of emotional expression, human laughs and cries only rarely become subjects of cognitive neuroscience. But that may not stay the case. Laughing and crying, being live demonstrations of emotion and its social expression, provide new entryways into the tangled pathways of the brain.

For centuries, philosophers and physiologists have puzzled over the phenomenon of emotion. Where are joy, sadness, fear located in the “gelatinous substance” of the brain? wondered nineteenth-century phrenologist Franz Gall. How is emotion’s expression related to subjective feeling? In the 1880s, psychologist William James and physician Carl Lange suggested we don’t cry because we are sad; rather, “we feel sorry because we cry, angry because we strike, afraid because we tremble.” But other theories reigned. And though the James-Lange theory has had a resurgence in recent decades, not until fMRI technology revealed images of the emotional brain could we begin to empirically explore Shakespeare’s musing in The Merchant of Venice: “Tell me where is fancy bred / Or in the heart, or in the head?”

A way of coming to a more integrated understanding of emotion is to surrender to the boundless accessibility of laughing and crying. I spent the last year occupied with such a task. The search for answers led me to areas as new to science as the mirror-neuron system, as painful as neurological disorders, and as artistic as method acting. There emerged a uniquely human science of emotion that begins to sew closed the doggedly dualistic notions of mind and body, heart and head.

A Ball of Emotion
“Try and keep your head still,” a soft voice murmured. “Just follow my fingers with your eyes.” The woman in the wheelchair couldn’t. At each attempted ascent, her eyes fell to center, unable to find visual anchor.

“How are you doing?”

“I’m fine,” she voiced over the course of eight seconds, her eyes calm and accepting. “I’m de-al-ing with i-t.” An array of wrinkles, right then, grew from her squinting eyelids. Rivulets of tears washed into the creases, bathing her cheeks in a Saran-wrap sheen. “Don’t mi-nd me,” she blurted.

Then, before anyone could reach out a hand in comfort, her jaw dropped and peals of laughter exploded into the boxy beige examination room at the Stanford University Neurology Clinic.

Dr. Josef Parvizi was unfazed. He sat on a short stool, his scarlet tie dangling as he leaned into her wheelchair, softly grasping her shoulder and stroking her hand. Parvizi, a neurologist at the Stanford School of Medicine, has a faint Iranian accent and an unwaveringly calm oval face. He’s seen patients like Nicole before.

Nicole’s crying started again, as though a memory had triggered a reaction that she would normally keep inside, like a filter between private thought and public expression was missing. Her sister, a rusty-haired woman clutching a leather bag, spoke for her. “So many things seem to upset her. There’s no rhyme or reason for an outburst,” she said. The doctor nodded. “It sounds like there are no brakes, like in a car. The brakes aren’t working so well for her emotion.” Her sister sighed in agreement. Nicole stopped rocking the wheelchair and tried once again to answer the doctors’ questions.

For the past 12 years, Nicole, 51, has lived with a progressing case of multiple sclerosis, a disorder in which the immune system attacks the body’s own central nervous system, slowly nibbling away at the brain’s ability to send signals and coordinate muscle movement and cognition. Her MS has taken away her ability to walk and has limited her speech. Now the disease jeopardizes her ability to control her expressions of happiness and sadness. Today, Parvizi believes he will diagnose Nicole with a disorder called pathological laughing and crying, or PLC.

PLC develops after a brain injury, stroke, or seizure, or, as with Nicole, during a neurodegenerative disorder. Usually, a lesion or tumor has encroached upon brain structures that govern emotional suppression and expression. The episodes of laughing or crying seem to deploy without reason; in fact, they result from lowered emotional thresholds. A passing funny thought that a healthy person could easily suppress triggers laughter in Nicole. She experiences a rift between what she expresses and what she actually feels: her laughter is a vast overestimation of her true feelings.

Cerebellar Vigilance
Parvizi asked Nicole to hold up her arm for a few seconds. Her raised arm shivered back and forth like a broken compass. The doctors looked at each other, recognizing the symptom. “Cerebellar ataxia,” Parvizi mouthed to another doctor. Cerebellar ataxia, a hallmark sign of multiple sclerosis, is the loss of muscle coordination. The cerebellum, a fist-sized, 150-gram chunk of tissue, sits between the bottom of the brain and the top of the spinal cord. The structure accounts for 10 percent of the brain’s total volume, yet it contains half of all its neurons. It coordinates involuntary, moment-to-moment muscle movements, fine-tuning motions we don’t need to think about to perform. When compromised by brain damage, the cerebellum (Latin for “little brain”) can’t relay proper instructions to the brainstem, which executes many prepackaged muscle movements, including the diaphragm and facial contractions of laughing and crying.

Back in 2001, Parvizi was a graduate student at the University of Iowa College of Medicine. He and his colleagues were studying a middle-aged landscaper who had suffered a stroke the year before and had been left with unexplained episodes of laughing and crying. A CAT scan revealed damaged tissue in his cerebellum and brainstem, not surprising for a stroke victim. But the finding that the cerebellum could be a leading antagonist in the wrenching drama of PLC was something new—and perhaps game-changing—for emotional science.

The old explanation for PLC dates back to 1924, when neurologists worked with limited anatomical data. It held that the healthy frontal lobe of the cerebral cortex regulates the emotional structures buried deeper in the brain. In that view, when the “higher” brain areas that endow us with rational, voluntary behavioral control fail, wild, pathological emotions are unleashed. But this voluntary-pathway theory cannot explain why PLC patients often have no problem performing voluntary facial movements; they can even mimic laughing and crying. Parvizi and his team knew that something had to be going wrong with involuntary, automatic behavior patterns.

The seeming neurological magic through which an emotionally loaded stimulus turns into a physical expression is no simple process. But unlike the turn-of-the-century scientists, neuroscientists now know that it involves constant communication between networks. In neuroscience terms, the major players are “induction sites” and “effector sites.” Induction sites, such as the amygdala or ventral striatum, pair a stimulus with an emotion. “You can think of an induction site like a switchboard deciding that when a snake comes, the best output is a sense of fear,” explains Parvizi. Effector sites, such as regions of the brainstem, execute the actual physical expression of that emotion. They are the warehouses producing the act of laughing or crying: raising the facial muscles, spreading the lips, producing tears.

Laughing and crying provide new entryways into the tangled pathways of the brain.

Induction and effector sites do not operate in a linear, step-by-step fashion in a healthy brain. Instead, Parvizi’s research suggests, the cerebellum could be intercepting the induction signals before they reach the effector site, like a checkpoint. The “mini-brain” then makes sure our behavior plays appropriately in the social context, deploying a lifetime of cultural learning. It’s an idea that adds an entirely new continent to the map of emotion: Rather than the brain’s frontal lobe serving as the geographic hotspot of rational decision making, instructions from the frontal lobe, along with autobiographical memories and tactile and visual data sent from other brain areas, wind up at the cerebellum. The cerebellum then adjusts the emotional response to match the social setting. Finally, the brainstem executes the response. Making sure that what would have been a shriek of laughter in the café is a soft giggle in a classroom is the cerebellum’s constant chore. But when this disciplinarian is ailing, as in some cases of PLC, behaviors can swing wildly.

Parvizi’s PLC research has led him to believe that emotions, instead of being consciously controlled, are spontaneous reactions that rely on an intact involuntary brain system to be appropriately projected into the world. This distinction has major implications for our belief in self-control. Throughout cognitive neuroscience’s history, it’s been assumed that the brain’s evolutionarily newer frontal lobe regulates the more primitive regions of emotion, desire, and instinct, “as if there are beasts living in the basement, and the tower controls those beasts,” Parvizi says. He calls this an outdated Victorian-era bias that insists our free will should be able to conquer instinct. In fact, the brain’s structures are more interdependent. And those beasts of emotion are much, much more complex.

He says that we certainly can consciously control our expressions, even during those perilous mouthfuls of milk. We have both voluntary and involuntary systems, but it seems the brain relies on autopilot settings far more than on conscious direction. “It’s an old notion that we regulate our behavior through a very conscious process, through a hierarchical top-down process,” he says. “My idea is that we respond automatically in a context and that automatism is built partly from our culture.” In other words, early childhood socialization and lifetime experiences, coded into memories, factor into our automatic emotional responses. In Japan, for example, where emotional restraint is valued, people tend to avoid overt emotional displays. Parvizi acknowledges that this is an area wide open for debate. It is not yet clear, for instance, whether those cultural presets are stored in the cerebellum or sent there from other brain areas.

The evaluation in the Stanford Neurology Clinic ended. Diagnosis: pathological laughing and crying induced by multiple sclerosis. Nicole was wheeled out with a prescription for an antidepressant that should raise her brain’s emotional threshold and, with luck, dampen her haphazard emotional outbursts. If the treatment works, it will take more than a passing sad memory to trigger her tears. The space where Nicole had sat was suddenly quiet. “And this is something we see over and over,” Parvizi said, turning to me. “The problem isn’t a lack of voluntarism. It’s something much more.”

Acted Emotions
And then there are individuals who, unlike those patients with PLC, are so in control of emotional expression that they can willingly propel their bodies into the involuntary displays of laughing and crying. Intimate understanding of their own emotional physiology allows them to trigger or squelch emotional phenomena. As Hamlet puzzled, “Is it not monstrous that this player here, but in a fiction, in a dream of passion, could force his soul so to his own conceit, that from her working all his visage waned, tears in his eyes, distraction in his aspect, a broken voice, and his whole function suiting with forms to his own conceit?” The expression of genuine emotion without any personal reason to feel it is the prerogative of the performer, or the “player” in Shakespeare’s day. The talented performer spends hours refining and practicing the ability to laugh and cry in a matter of seconds in front of a sea of onlookers. For the actress, mastering emotion is artistry; for the neuroscientist, it is elusive science.


Josef Parvizi’s former professor back in Iowa was Antonio Damasio, now a neuroscientist at the University of Southern California. He has long been determined to understand how circumstances trigger emotions and how emotions then become feelings, in actors and everyone else. He developed today’s leading theory of emotion, the somatic marker hypothesis, which builds on the work of giants before him, such as Carl Lange and William James, the scholars who first proposed that feelings arise from perceptions of our body state. Of course, as Damasio’s more nuanced research methods have revealed, it is a bit more complicated than that.

To distinguish between human emotion and feeling, Damasio starts at the beginning. He sees emotion as a package of survival tools that originally evolved to help living beings navigate their environment safely, providing bodily warnings of dangerous situations. These responses later evolved to cause positive and negative feelings, which extended the impact of emotions by leaving a permanent stamp on memory. Over millions of years, this feedback process between organism and environment birthed foresight, and eventually, the human ability to respond to situations creatively.

Emotions familiar to us, such as happiness or anger, require an initial stimulus: a sight, smell, or memory. Physical changes follow. Feelings unfurl. Stimuli can even be simple actions. Back in 1992, psychologist Paul Ekman found that voluntary smiles and grimaces produce changes in the autonomic nervous system. His study participants actually began to feel happy or sad or angry after following instructions to set their facial muscles in certain positions. “Psychologically unmotivated and ‘acted’ emotional expressions have the power to cause feeling,” Damasio writes. Enter the actress.

Once More, with Feeling
Sheila Donio first attempted to cry onstage as the character Rizzo in a stage production of Grease in 2001. She has acted since childhood and settled into a professional acting career as a teenager in São Paulo, Brazil. “As I knew I wanted to cry on a specific scene,” she explained, “I started to work on Rizzo’s emotions at home, listening to the song used right before my crying scene. Studying Rizzo’s emotions with that specific soundtrack made my brain connect one thing with the other.” Method acting, a set of techniques devised in the 1930s by Constantin Stanislavski and later adapted by director Lee Strasberg, emphasizes this use of sense memory. Students of the method learn to use personal memories of sensory details to trigger authentic physiological reactions.

Teaching herself, Sheila used this process to tap into the pathways of her brain responsible for the generation of crying. Crying on command became second nature. “Every time I heard that song, I would start to feel her anxieties and frustrations and the buttons for crying would show up in my body, ready to be pressed.” In fact, Sheila’s method of manipulating her body’s physiology is a living demonstration of Damasio’s theory of emotion.

In 2000, Damasio and his colleagues published a landmark study in the field of emotion and feeling. The team asked 41 people to recall a particularly vivid emotional episode from their lives, a memory charged with happiness, sadness, anger, or fear. (In a prior screening session, only participants shown to experience emotional changes when recalling past events had been selected.) Hooked up to a PET scanner, which detects activity in specific brain regions, the participants relived the chosen experience. As instructed, each made a hand movement upon beginning to feel the anger, happiness, or sadness.

Electrodes measuring the volunteers’ physiology (things like heart rate and sweat levels in the skin) registered drastic changes before the hands were raised. In other words, Damasio’s team found that people reported feeling an emotion only after its physical eruption. “It’s very important for you to think of emotion as an action, so crying is a component of emotion, never as a part of feeling. Feeling is a perception of the action we have,” he told me. For Sheila, of course, only the tears confirm that conjuring the emotionally tinted memory of Rizzo’s song, or “pushing her buttons for crying,” has triggered an authentic emotional cascade.

In that same study, Damasio found that the body-sensing region of the brain, the somatosensory cortex, came online as the feelings arose. Later, in 2006, he reported that each basic emotion (happiness, sadness, anger, and fear) has a distinct cardio-respiratory pattern. Linking these data sets together, in a technology-age tweaking of the James-Lange theory, Damasio suggests that feelings arise from “maps” continually forming in brain regions such as the somatosensory cortex. The brain doesn’t have simple “on” and “off” emotional switches. It is always in flux. Feelings are more than the brain’s perception of emotion; they are a constant process of mapping shifting body states.

Sheila makes daily use of those “maps.” “I study how my body reacts when I am crying for real, in real life. It’s all about breathing, for me. I get myself on the highway that leads me to cry. When I do improv theatre, this is how I find my emotions in 30 seconds,” she said. As Sheila adjusts her inhalations and exhalations, her somatosensory cortex detects the body map for crying. Genuine sadness follows the tears. The tears amplify the feelings, triggering sharper emotion, creating a positive feedback loop. What Sheila describes as a “highway,” Damasio thinks of more as a two-way traffic rotary.

‘E’-motion
Emotion in acting is not all about conjuring tears through physiological manipulation and memory recall. The audience in the back row needs to recognize the crying or joyous body just as intensely as the people in the front row do. That’s why the performer must play to the visual brain, or to the mirrors reflecting within it.

Our brains can “mirror” the actions of those we watch. We feel our muscles clench as we watch a figure skater twist in the air; we crack a smile as a stage performer grins. That’s the work of the proposed “human mirror neuron network,” part of our visual brain. Swaths of neurons in the human premotor cortex activate both when we perform an action and when we watch someone else perform it. The young science of our mirroring ability is rapidly gaining a spot in emotional neurobiology. After all, “motion” and “emotion” live just one letter apart.

In 1995, Vittorio Gallese of the University of Parma in Italy discovered mirror neurons in macaque monkeys. His continued explorations of mirroring behavior have most recently focused on the contagious nature of action and sound. He had a professional actor and actress perform sorrow and joy without uttering words—laughing and crying. He showed participants silent recordings of the actors’ performances and measured the movement of their facial muscles; in a second condition, the participants heard only the sound of laughing or crying. “The results are pretty interesting,” he reported at a mirror neuron conference in 2007. “If you see someone laughing, you have strong activation of your zygomatic muscle, which is active when you laugh. If you see someone crying, you have an activation of the corrugator supercilii. The same results are obtained with sound.” So, whether we hear laughing or crying, or watch the actions in silence, our smiling and frowning muscles automatically begin to respond. In essence, our emotions are contagious.

Sophie Scott, a neuroscientist at University College London, pressed “play” in iTunes, and a cacophony of laughter and shrieks (as well as gagging and groans) attacked the air, making us both cringe and smile. These were the sound samples used in a 2006 study of emotional mirroring. Twenty subjects listened to the samples, both positive and negative emotional vocalizations, while their brains were scanned with fMRI. They were told not to move their faces.

She was looking at the brain’s premotor cortex, a slice of which houses the neurons that control the facial muscles of smiling and laughing, the ones actors use so much. Her research team analyzed the participants’ brain activation while they heard the amused sounds; the disgust sounds served as controls. Even though the subjects were not actually smiling or laughing, the predicted slice of premotor cortex became active when they heard the delight noises. They were experiencing other people’s apparent happiness through sound alone. In essence, their brains were starting to share a laugh.

This brings us back to Shakespeare’s cogent demand: “Tell me where is fancy bred, / Or in the heart, or in the head?” Always immersed in theatre, he knew implicitly that authentic exuberance involved no forced smiles, but pink cheeks, watery eyes, and quickened breath. The evidence stood in front of him. Hundreds of years later, the technology of neuroscience provides a more complicated answer. Bodily emotion and moody feelings, head and heart, are constantly intertwined, reciprocal, looping processes. They do not exist separately. Once made visible, emotion’s expression depends on some exacting neural architecture, machinery that operates without permission, exposing our feelings to others. Still, to see how far fancy can travel outside the body, we’ve never needed fMRI scans. Just smile as you pass someone on the sidewalk and watch for the smile back.

Our Mind-betraying Eyes

Thoughts are generally believed to be of the ether. Intangible, ephemeral, hidden from sight. But scientists are beginning to identify the many ways that cognitive abstractions are transferred to the physical world. Last week, researchers from Australia and Switzerland reported that they can essentially predict “what number you’re thinking of” with a look into your eyes. More accurately, they can predict the relative size of that number in a random series. The team had a group of participants call out 40 numbers (from 1 to 30) as randomly as possible. The researchers recorded the volunteers’ average horizontal and vertical eye position a split second before each number was called out, and were able to reliably forecast whether the next number would be higher or lower. When the subjects looked to the left and down, the number was smaller; when they looked up and to the right, it was larger. Not only did the direction of eye movement indicate the relative size of the next number, but surprisingly, the degree of movement predicted the magnitude of the numerical shift. This is great news if you’re a magician or a card shark. But beyond the purview of children’s birthday parties and card tables, the implications of the study are decidedly more profound. The results confirm earlier findings that we mentally reference an “imaginary number line” when thinking about numbers and make a solid case for how subtly these abstractions can direct body movements. They illuminate the remarkable connections between supposedly abstract thought processes, body mechanics, and the choices we make.—GB

Bursts: The Hidden Pattern Behind Everything We Do

By Albert-László Barabási (Dutton)
In his first book, Linked, Barabási introduced us to the interrelatedness of the universe and to the emerging field of network science. Here, the physicist shows how to use that knowledge to predict seemingly random human behavior. Or the spread of a viral epidemic through populations. Or the convoluted trails that money follows. Like the “unexplained” erratic motion of tiny objects floating through water that fascinated Einstein at the turn of the 20th century, apparent stochasticity, says Barabási, can all be explained—and predicted—by elegant mathematical formulas. And for the first time in history, we’re beginning to have the right data to plug into such formulas. Using algorithms built in his lab, fueled by reams of data we unthinkingly create in our daily digital interactions (carrying around and communicating with mobile devices, withdrawing money from ATMs, making online purchases), Barabási demonstrates how much of human activity occurs in quantifiable patterns known as “bursts.” These bursts seem to define us: from our emailing and web-browsing patterns to how we move about the world. But in Bursts, this realization surfaces only as the sum effect of a nigh-schizoid storyteller’s account of historical and personal events. Driven by colorful characters and an experimental plot structure that jumps between ostensibly unrelated narratives, the book weaves a bloody crusade, the papacy, 9/11, and FBI surveillance into a tidy package. The effect is enthralling: less like listening to a lecture at a research conference, and more like sitting at a bar with a clever friend who charms you with his semi-implausible anecdotes. After nursing the last beer, beyond being amused, you’ll have learned something truly profound about the curious paths of human activity.