Wednesday, September 30, 2009

Is saving our atmosphere killing our seas? Biofuels may stifle global warming, but scientists warn that agricultural runoff causes new problems.

Each year in April and May as farmers in the central US fertilize their crops, nearly 450 thousand metric tons of nitrates and phosphates pour down the Mississippi River. When these chemicals reach the Gulf of Mexico, they cause a feeding frenzy as photosynthetic algae absorb the nutrients. It’s a boom-and-bust cycle of epic proportions: The algae populations grow explosively, then die and decompose. This process depletes the water of oxygen on a vast scale, creating a suffocating “dead zone” the size of Massachusetts where few, if any, animals can survive.

The EPA has been working to reduce the size of the dead zone, with a goal of shrinking it to about 5,000 square kilometers—a quarter of its current size—by 2015. But a new study in Environmental Science & Technology shows that other efforts to preserve the environment may be exacerbating the dead zone. Kristopher Hite, a graduate student in biochemistry at Colorado State University, explains the implications of the study on his blog, Tom Paine’s Ghost.

The study examined the implications of a 2007 law that requires the US to annually produce 36 billion gallons of biofuels by 2022. Barring major biofuel production breakthroughs from sources like algae or microbes, most of this fuel will come from crops grown in the central US; the fertilizers and other agricultural waste they produce will flow straight down the Mississippi and feed the dead zone. Hite says the study, led by Christine Costello, found that meeting this goal will make it impossible for the EPA to reach its target reduction in the size of the dead zone. Even if fertilizer-intensive corn is replaced with more eco-friendly crops like switchgrass, the vast increase in agricultural production will cause the dead zone to grow unless preventive measures are taken.

So what can be done about it? The Society for Conservation Biology suggests that increasing the size of wetlands or other buffer zones around the source of the pollution—the farms themselves—could help.

Unfortunately, artificial wetlands have their own negative ecological side effects. As this post at Conservation Maven shows, some created wetlands are dominated by invasive species. Apparently, the heavy equipment used to build the sites also compacts the soil in a way that makes it more difficult for native species to flourish.

But not all human-made wetlands are bad. Conservation Maven also points to a Swedish study which found that less-diverse wetlands dominated by tall plants are actually more efficient at removing nitrogen from runoff than many other sites. So creating wetlands can be a very effective means of removing pollutants from water, even if local biodiversity suffers. The current pace of biofuel development, however, exceeds the capacity of available wetlands.

Hite remains an optimist, pointing to new technology that uses fungi to convert the cellulose in wood chips, corn stalks, and other agricultural “waste” into biofuels. If this can be done efficiently, we could eventually harvest several times more energy from the same amount of cropland. Even while acknowledging that we may still face problems like the Gulf’s dead zone, Hite believes that ultimately technology can help us prevent greater ecological disasters like global warming.

But should Hite even be making this case? How do we decide whether it’s ecologically sensible to produce biofuels or build wetlands? Some have argued that the advocacy of scientists like Hite and websites like Conservation Maven is misplaced. Shouldn’t scientists just be interested in giving us the facts, staying removed from policy decisions and letting the general public and politicians decide how to act? Doesn’t becoming an advocate introduce bias into the scientific process, potentially tarnishing results?

James Hrynyshyn is a freelance journalist and unapologetic environmental advocate who says that many of the best scientists, from Albert Einstein to Carl Sagan to NASA’s James Hansen, have also been important policy advocates. On his blog, The Island of Doubt, Hrynyshyn cites a May paper in Conservation Biology by Michael Nelson and John Vucetich, who argue that scientists’ advocacy positions can easily be separated from scientific truths. For instance, late in his life the great chemist Linus Pauling damaged his reputation by peddling vitamin C as a cure-all, but that didn’t take away from his earlier scientific contributions, which had earned him the Nobel Prize in Chemistry (he also won the Nobel Peace Prize).

More importantly, Hrynyshyn says, it’s unfair and unwise to restrict individuals—who are interested citizens as well as working scientists—from participating in the political process, especially when those individuals have knowledge and expertise that applies directly to important problems. Conservation biologists can both alert us to potential ecological disasters and offer insight into how to solve them. Why not tap their expertise to help form policy decisions?

There’s much more discussion of ecology—and ecologists’ role in creating environmental policy—at ResearchBlogging.org.

Thursday, September 17, 2009

Optical illusions may seem to deceive, but they actually reveal truths about how our brains construct reality

Stare at the red dot in the center of the figure for a minute or two. Before long, the green ring will disappear—it simply seems to fade into the white background. There are no tricks: This is a simple, static image file. The effect has been known for more than two centuries and is named for its discoverer, Ignaz Paul Vital Troxler (1780–1866), a Swiss physician and philosopher. “Troxler fading” is actually related to what you experience when you get “dizzy”: You become so habituated to a phenomenon (spinning in a circle or seeing a green ring in your peripheral vision) that you stop noticing it’s there. Or, rather, you don’t realize that your perceptual system has begun actively ignoring it. It’s only when your circumstances change that you see what the phenomenon has done to your perceptual system. When you stop spinning, the world seems to keep spinning, but in reverse. When you look away from the green ring, you see a red ring in the same part of your visual field.

Occasionally an illusion attracts widespread notice online, perhaps because it was posted by a popular blogger, but it’s rare that we see a scientific explanation of how the illusion works. That’s beginning to change. Each year, at the meeting of the Vision Sciences Society, the Neural Correlate Society holds a contest where vision scientists share their latest, greatest optical illusions. This year’s winner is entitled “The Break of the Curveball” and was created by Arthur Shapiro, Zhong-Lin Lu, Emily Knight, and Robert Ennis. Shapiro, an associate professor of psychology at American University, is also a blogger and an avid baseball fan. (He once referred to pro football games in the early fall during the baseball playoffs as “preseason games that count.”) He has a full explanation of the illusion on his blog, but to my mind an even more impressive illusion is this one. (Click and watch this before continuing!)

When you shift your focus from the red dot to the yellow dot, the motion of the balls rotating around the dots appears to reverse. Again, Shapiro explains the effect on his blog. As with Troxler fading, the effect is due to our perceptual system’s limited ability to process information outside of a central focal region. When you look directly at the red dot, you can see that the surrounding circles are moving in one direction while the shaded patterns inside the circles are moving in the opposite direction. But you can’t process all the information about the circles ringing the yellow dot in your peripheral vision, so the pattern moving inside the circles dominates.

But our perception of the world doesn’t rely solely on vision. We use all our senses to build a representation of what the world is really like. Many illusions occur because what we perceive with one sense conflicts with another. Varun Sreenivasan, a graduate student at the EPFL in Lausanne, Switzerland, has written an amazing account of a 2003 study on the “rubber hand effect.” The basic premise is this: If your real hand is hidden behind a screen and you see a fake hand in its place, then you can “feel” it when a researcher touches the fake hand. Neuroscientists K. Carrie Armel and V.S. Ramachandran wanted to see when the illusion broke down.

First, they asked volunteers to place one hand behind a screen. An experimenter scratched the table in front of the screen while either scratching or not scratching the hidden hand. The participants reported “feeling” a scratch on their real hand whether or not it was actually being scratched.

They also experimented with an extremely unrealistic rubber hand and arm, much longer than a real arm. The researchers bent one of the fake fingers back to what would be a very painful position while lifting the volunteer’s real (hidden) finger only slightly. The participants said they felt real pain, which was only slightly less intense with the extra-long rubber arm than with a realistic rubber arm. A measure of skin conductance showed a dramatic physiological response in the volunteers as well. Clearly, seeing a finger being bent contributes greatly to our experience of pain.

These illusions are not only fascinating to observe and experience, they also tell us a great deal about how our perceptual system functions. We receive so many inputs from the environment that the brain must prioritize which inputs to trust. Illusions represent the boundaries between conflicting inputs to the perceptual system, and by uncovering them—and often explaining them on their blogs—researchers can also uncover how the brain itself works. You can follow that conversation at ResearchBlogging.org.

Wednesday, September 16, 2009

Studying the Strangest Man

Paul Dirac (left) and Richard Feynman. From The Strangest Man. Photograph by A. John Coleman, courtesy AIP Emilio Segre Visual Archives, Physics Today collection.

For more than five years, former physicist Graham Farmelo devoted himself to unlocking the secrets of one of the most important and curious figures of 20th century science, Paul Dirac. Dirac was born in 1902 and died in 1984, and though he was lionized by his peers for his fundamental work in quantum mechanics (among other things, he predicted the existence of antimatter and won a Nobel Prize when he was only 31), his legacy has fared poorly among the general public. During his research, Farmelo found that most residents of the “famous” physicist’s hometown of Bristol didn’t even know who Dirac was. Unquestionably, this is due to Dirac’s reclusive and taciturn behavior; his reticence was so extreme that it inspired his fellow physicists to invent an unofficial unit of measure for the smallest number of words a person could speak in polite company: a “Dirac,” roughly one word per hour.

But as Farmelo delved deeper into Dirac’s life for his new biography, The Strangest Man, he discovered surprising complexity and contradiction that give a new appreciation of the physicist’s character: Despite what many perceived as a lack of empathy, Dirac married, raised children, and forged several close lifelong friendships. Despite his professed distaste for unscientific reasoning, in his later life he became increasingly obsessed with philosophical, even religious, questions. And despite his love for the rarefied subject of theoretical physics, Dirac also had a passion for “lowbrow” cartoons and comic books.

Farmelo spoke with Seed’s Lee Billings about the process of researching the book and his astonishing hypothesis that could explain, once and for all, Dirac’s enigmatic behavior.

Seed: What motivated you to spend five years writing a book about Paul Dirac?
Graham Farmelo: I used to be a theoretical physicist, and I can say that everyone in that profession is interested in Dirac. He’s often said to be “the first really modern theoretician” or “the theorist’s theorist.” I remember as an undergraduate coming across my first taste of Dirac’s physics, something called Fermi-Dirac statistics, which governs the transistors and electron flow in your computer. I was blown away, a bit like a young music student listening to Beethoven’s “Moonlight Sonata.” Dirac’s first papers on quantum mechanics still look modern, more than those of any of his fellow pioneers. The mathematical imagination and beauty of those articles is amazing. I wanted to write a biography of him to try to communicate the power and scope of his work to non-specialists who are nevertheless curious about science, and to try to understand his remarkable personality.

In my time in physics, I met quite a few “Dirac fanatics,” people who are obsessive about him. I’m speaking to you from the Institute for Advanced Study in Princeton, and I’ve spent several lunchtimes recounting to the physicists here some new “Dirac stories.”

Seed: “Dirac stories?” Can you give me some examples?
GF: Certainly. At the end of a lecture, Dirac agreed to answer questions. Someone in the audience piped up: “I didn’t understand the equation on the top right of the blackboard, professor.” Dirac was silent for more than a minute. When the moderator asked him if he’d like to answer the question, Dirac shook his head and said, “That wasn’t a question. It was a comment.”

Here’s another: Over dinner one evening at Saint John’s College, Cambridge, an American visitor who was desperate to meet the formidable Dirac steeled himself to ask, “Are you going on vacation this summer, professor?” Silence. About 20 minutes later, Dirac turned to the visitor and said, “Why do you ask?”

Seed: He sounds like quite a deep, literal thinker. Did Dirac have any interests outside physics?
GF: Yes, a lot, but he just didn’t talk about them. He read widely, from Tolstoy to John le Carré. Among artists, he loved Rembrandt and Salvador Dali. Like Einstein’s, Dirac’s taste in music was mainly classical, but in later life he had a thing about Cher. To settle a dispute with his wife, he bought a second television so that he could watch a Cher special while she watched the Oscars.

Seed: The book includes several revelatory passages documenting Dirac’s personal life. How did you research and verify that material?
GF: I devoted a lot of time to tracking down Dirac’s surviving friends, people who knew him very well. The most important one I found was his last great friend, Leopold Halpern, an expert on relativity who slept in the open air, refused to wash with soap, and liked to slice open baked potatoes with a karate chop. A few years ago, when Halpern was at death’s door with prostate cancer, he flew across the country to Florida, where Dirac spent the latter part of his life, just so he could row me up Wakulla Springs. He and Dirac used to go rowing every weekend. That was a special trip for me: Even now I’m looking at my arm and there are goose bumps. He showed me places where they talked, even where they went skinny dipping. Two and a half months later, Halpern died.

I spent several months consulting the Dirac archive at Florida State University in Tallahassee, which was virtually untouched. Dirac was an FSU professor for the last 14 years of his life. I found amazing things, not just letters from great physicists like Heisenberg and Schrödinger but also a remarkable cache of weekly letters from Dirac’s mother, spanning almost 20 years. Many historians would’ve probably turned their noses up at these, but I found in them a dramatic story that illuminates Dirac’s home life and upbringing. I was also blessed with beginner’s luck when I happened to meet Dirac’s younger daughter at a centenary celebration of his birth. We hit it off well, and one day in her kitchen while I was visiting her, she showed me something like 120 private letters between Dirac and his first serious girlfriend, later his wife. Keep in mind, this man hardly spoke a word, and here he was opening up, writing whole pages—epics for him. I couldn’t believe my luck. Here was Dirac talking about his father, with whom he didn’t get along at all, and about what it felt like to be conscious that he was unlike most other people, unable to empathize with them. This is just my opinion here, but I believe he demonstrated many symptoms of what we now call autism, though that condition had not been identified at the time.

Seed: You think Dirac had undiagnosed autism?
GF: I did not go into this book project thinking Dirac was autistic in any way. When I started researching him all those years ago, I barely even knew what the term “autism” meant, and certainly didn’t apply it to Dirac. But as I researched, I encountered rumors about Dirac being autistic, about Einstein being autistic, and speculations that autism was more prevalent in scientists and mathematicians. So during one of my stays at Cambridge, I went to see Simon Baron-Cohen, who is arguably Britain’s leading expert on autism. He knew nothing about Dirac, but, to my amazement, he began describing patterns of behavior that exactly correspond to Dirac’s. Let me stress that this is just a hypothesis, and that I’m personally very skeptical of attempts to psychoanalyze people who are dead. This isn’t theoretical physics; I can’t do a slam-dunk experiment to prove it.

Seed: What were some of the behavioral indicators?
GF: There are many of them: inability to empathize, extreme taciturnity and literal-mindedness, a passion for routine, narrow interests, a lack of physical coordination, dislike of sudden loud noises, and so on. Many of the “Dirac stories” told by physicists are, in my opinion, actually autism stories. When people laugh at these things, they forget that what they’re actually doing is mocking.

Seed: Do you think those traits might have helped him in his work or given him a unique perspective?
GF: Well, he was certainly as focused as a laser and as logical as a computer. He also had a fascinating way of looking at mathematics. He had a phrase, “My equation is smarter than I am.” He really did think that a good equation could be more intelligent than its creator. There’s a kind of mysticism in that. In the last 15 or 20 years of his life, he became obsessed with the philosophy that, for a piece of mathematics to be useful in fundamental physics, it must be beautiful. For instance, he thought the theory of photon and electron interactions—what we call quantum electrodynamics—was ugly, so he wouldn’t accept it. He had this extremely rigorous sense of beauty, and saw each successive revolution in physics progressing through increasingly beautiful mathematics.

Dirac, to his dying breath, pursued this quest for mathematical beauty. For him, everything apart from that principle was just details. The job of the fundamental theorist was to look for mathematically beautiful laws. That’s why the string theorists are on the right track, even though there aren’t experiments to bear them out at the moment.

Seed: So Dirac would be a fan of string theory, you think?
GF: Well, when people get old, they tend to think that everything’s gone to the dogs, and there was an element of that in Dirac, who took virtually no interest in the latest findings in his field. But if you apply his idea about sticking to mathematically beautiful generalizations of past theories and to hell with experiments in the short term, then this philosophy should embolden string theorists, yes.

Sunday, September 13, 2009

The Great Blue Hole - Belize Lighthouse Reef

The Great Blue Hole is a large underwater sinkhole off the coast of Belize. It lies near the center of Lighthouse Reef, a small atoll 100 kilometres (62 mi) from the mainland and Belize City.
The diameter of the circular reef area stretches for about 1,000 feet and provides an ideal habitat for corals to attach and flourish. The coral actually breaks the surface in many sections at low tide. Except for two narrow channels, the reef surrounds the hole. The hole itself is the opening to a system of caves and passageways that penetrate this undersea mountain. In various places, massive limestone stalactites hang down from what was once the ceiling of air-filled caves before the end of the last Ice Age. When the ice melted, the sea level rose, flooding the caves.


For all practical purposes, the depth of more than 400 feet makes the Blue Hole a bottomless pit. The walls are sheer from the surface down to a depth of approximately 110 feet, where you begin to encounter stalactite formations that angle back, allowing you to dive underneath monstrous overhangs. Hovering amongst the stalactites, you can't help but feel humbled by the knowledge that the massive formation before you once stood high and dry above the surface of the sea, eons ago. The feeling is enhanced by the dizzying effect of breathing nitrogen at depth. The water is motionless, and the visibility often approaches 200 feet as you break through a very noticeable thermocline.

Almost all the divers who visit Belize are keen to add this splendid dive site to their list of conquests. When they understand what the hole is and how it was formed, it makes the dive all the more exciting. The Blue Hole is a "karst-eroded sinkhole." It was once a cave at the center of an underground tunnel complex whose ceiling collapsed. Some of the tunnels are thought to be linked right through to the mainland, though this has never been conclusively proved. The mainland itself has many water-filled sinkholes that are connected to caves and tunnels.


At some time many millions of years ago, two distinct events occurred. First, a major earthquake probably caused the cave ceiling to collapse, forming the sinkhole. The upheaval also tilted Lighthouse Reef to an angle of around 12 degrees. All along the walls of this former cavern are overhangs and ledges housing Pleistocene stalactites, stalagmites, and columns.

Some of the stalactites now hang at an angle, yet we know they cannot form at any angle other than perfectly vertical. In addition, there are stalactites that formed after the earthquake, and others that formed both before and after that cataclysmic event: in these, the top of the stalactite hangs at an angle while the bottom is vertical.

At that time sea levels were much lower than they are today, and the second major event was to change all this. At the end of the Great Ice Age the glaciers melted and sea levels throughout the world rose considerably. This process occurred in stages. The evidence for this is the series of shelves and ledges, carved into the limestone by the sea, which run around the complete interior circumference of the Blue Hole at various depths.

Saturday, September 12, 2009

The Sentence of John L. Brown

Oh, from the fields of cane,
From the low rice-swamp, from the trader's cell;
From the black slave-ship's foul and loathsome hell,
And coffle's weary chain;
Hoarse, horrible, and strong,
Rises to Heaven that agonizing cry,
Filling the arches of the hollow sky,
How long, O God, how long?


THE SENTENCE OF JOHN L. BROWN.

John L. Brown, a young white man of South Carolina, was in 1844 sentenced to death for aiding a young slave woman, whom he loved and had married, to escape from slavery. In pronouncing the sentence Judge O'Neale addressed to the prisoner these words of appalling blasphemy:

You are to die! To die an ignominious death--the death on the gallows! This announcement is, to you, I know, most appalling. Little did you dream of it when you stepped into the bar with an air as if you thought it was a fine frolic. But the consequences of crime are just such as you are realizing. Punishment often comes when it is least expected. Let me entreat you to take the present opportunity to commence the work of reformation. Time will be furnished you to prepare for the great change just before you. Of your past life I know nothing, except what your trial furnished. That told me that the crime for which you are to suffer was the consequence of a want of attention on your part to the duties of life. The strange woman snared you. She flattered you with her words; and you became her victim. The consequence was, that, led on by a desire to serve her, you committed the offence of aiding a slave to run away and depart from her master's service; and now, for it you are to die! You are a young man, and I fear you have been dissolute; and if so, these kindred vices have contributed a full measure to your ruin. Reflect on your past life, and make the only useful devotion of the remnant of your days in preparing for death. Remember now thy Creator in the days of thy youth is the language of inspired wisdom. This comes home appropriately to you in this trying moment. You are young; quite too young to be where you are. If you had remembered your Creator in your past days, you would not now be in a felon's place, to receive a felon's judgment. Still, it is not too late to remember your Creator. He calls early, and He calls late. He stretches out the arms of a Father's love to you--to the vilest sinner--and says: "Come unto me and be saved." You can perhaps read. If so, read the Scriptures; read them without note, and without comment; and pray to God for His assistance; and you will be able to say when you pass from prison to execution, as a poor slave said under similar circumstances: "I am glad my Friday has come." If you cannot read the Scriptures, the ministers of our holy religion will be ready to aid you. They will read and explain to you until you will be able to understand; and understanding, to call upon the only One who can help you and save you--Jesus Christ, the Lamb of God, who taketh away the sin of the world. To Him I commend you. And through Him may you have that opening of the Day-Spring of mercy from on high, which shall bless you here, and crown you as a saint in an everlasting world, forever and ever. The sentence of the law is that you be taken hence to the place from whence you came last; thence to the jail of Fairfield District; and that there you be closely and securely confined until Friday, the 26th day of April next; on which day, between the hours of ten in the forenoon and two in the afternoon, you will be taken to the place of public execution, and there be hanged by the neck till your body be dead. And may God have mercy on your soul!

No event in the history of the anti-slavery struggle so stirred the two hemispheres as did this dreadful sentence. A cry of horror was heard from Europe. In the British House of Lords, Brougham and Denman spoke of it with mingled pathos and indignation. Thirteen hundred clergymen and church officers in Great Britain addressed a memorial to the churches of South Carolina against the atrocity. Indeed, so strong was the pressure of the sentiment of abhorrence and disgust that South Carolina yielded to it, and the sentence was commuted to scourging and banishment.


Ho! thou who seekest late and long
A License from the Holy Book
For brutal lust and fiendish wrong,
Man of the Pulpit, look!
Lift up those cold and atheist eyes,
This ripe fruit of thy teaching see;
And tell us how to heaven will rise
The incense of this sacrifice--
This blossom of the gallows tree!

Search out for slavery's hour of need
Some fitting text of sacred writ;
Give heaven the credit of a deed
Which shames the nether pit.
Kneel, smooth blasphemer, unto Him
Whose truth is on thy lips a lie;
Ask that His bright winged cherubim
May bend around that scaffold grim
To guard and bless and sanctify.

O champion of the people's cause
Suspend thy loud and vain rebuke
Of foreign wrong and Old World's laws,
Man of the Senate, look!
Was this the promise of the free,
The great hope of our early time,
That slavery's poison vine should be
Upborne by Freedom's prayer-nursed tree
O'erclustered with such fruits of crime?

Send out the summons East and West,
And South and North, let all be there
Where he who pitied the oppressed
Swings out in sun and air.
Let not a Democratic hand
The grisly hangman's task refuse;
There let each loyal patriot stand,
Awaiting slavery's command,
To twist the rope and draw the noose!

But vain is irony--unmeet
Its cold rebuke for deeds which start
In fiery and indignant beat
The pulses of the heart.
Leave studied wit and guarded phrase
For those who think but do not feel;
Let men speak out in words which raise
Where'er they fall, an answering blaze
Like flints which strike the fire from steel.

Still let a mousing priesthood ply
Their garbled text and gloss of sin,
And make the lettered scroll deny
Its living soul within:
Still let the place-fed, titled knave
Plead robbery's right with purchased lips,
And tell us that our fathers gave
For Freedom's pedestal, a slave,
The frieze and moulding, chains and whips!

But ye who own that Higher Law
Whose tablets in the heart are set,
Speak out in words of power and awe
That God is living yet!
Breathe forth once more those tones sublime
Which thrilled the burdened prophet's lyre,
And in a dark and evil time
Smote down on Israel's fast of crime
And gift of blood, a rain of fire!

Oh, not for us the graceful lay
To whose soft measures lightly move
The footsteps of the faun and fay,
O'er-locked by mirth and love!
But such a stern and startling strain
As Britain's hunted bards flung down
From Snowdon to the conquered plain,
Where harshly clanked the Saxon chain,
On trampled field and smoking town.

By Liberty's dishonored name,
By man's lost hope and failing trust,
By words and deeds which bow with shame
Our foreheads to the dust,
By the exulting strangers' sneer,
Borne to us from the Old World's thrones,
And by their victims' grief who hear,
In sunless mines and dungeons drear,
How Freedom's land her faith disowns!

Speak out in acts. The time for words
Has passed, and deeds suffice alone;
In vain against the clang of swords
The wailing pipe is blown!
Act, act in God's name, while ye may!
Smite from the church her leprous limb!
Throw open to the light of day
The bondman's cell, and break away
The chains the state has bound on him!

Ho! every true and living soul,
To Freedom's perilled altar bear
The Freeman's and the Christian's whole
Tongue, pen, and vote, and prayer!
One last, great battle for the right--
One short, sharp struggle to be free!
To do is to succeed--our fight
Is waged in Heaven's approving sight;
The smile of God is Victory.

Thursday, September 10, 2009

The Asymmetry of Life

Left-right inequality has significance far beyond that of mirror images, touching on the heart of existence itself. From subatomic physics to life, nature prefers asymmetry to symmetry. There are no equal liberties where neutrinos and proteins are concerned. In the case of neutrinos, particles that spill out of the sun’s nuclear furnace and pass through you by the trillions every second, only leftward-spinning ones exist. Why? No one really knows.

Proteins are long chains of amino acids that can be either left- or right-handed. Here, handedness has to do with how these molecules interact with polarized light, rotating it either to the left or to the right. When synthesized in the lab, amino acids come out fifty-fifty. In living beings, however, all proteins are made of left-handed amino acids. And all sugars in RNA and DNA are right-handed. Life is fundamentally asymmetric.

Is the handedness of life, its chirality (think chiromancer, which means “palm reader”), linked to its origins some 3.5 billion years ago, or did it develop after life was well on its way? If one traces life’s origins from its earliest stages, it’s hard to see how life began without molecular building blocks that were “chirally pure,” consisting solely of left- or right-handed molecules. Indeed, many models show how chirally pure amino acids may link to form precursors of the first protein-like chains. But what could have selected left-handed over right-handed amino acids? My group’s research suggests that early Earth’s violent environmental upheavals caused many episodes of chiral flip-flopping. The observed left-handedness of terrestrial amino acids is probably a local fluke. Elsewhere in the universe, perhaps even on other planets and moons of our solar system, amino acids may be right-handed. But only sampling such material from many different planetary platforms will determine whether, on balance, biology is left-handed, right-handed, or ambidextrous.

Marcelo Gleiser is the Appleton Professor of Natural Philosophy at Dartmouth College. His forthcoming book, Imperfect Creation: Cosmos, Life, and Nature’s Hidden Code, will be published by Free Press in 2010.

Interesting Links

http://seedmagazine.com/portfolio/05_rider-on-the-storm.html
http://seedmagazine.com/content/article/molecular_mimicry/

Saturday, September 5, 2009

The Origin of the Mind

  • Charles Darwin argued that a continuity of mind exists between humans and other animals, a view that subsequent scholars have supported.
  • But mounting evidence indicates that, in fact, a large mental gap separates us from our fellow creatures. Recently the author identified four unique aspects of human cognition.
  • The origin and evolution of these distinctive mental traits remain largely mysterious, but clues are emerging slowly.

Not too long ago three aliens descended to Earth to evaluate the status of intelligent life. One specialized in engineering, one in chemistry and one in computation. Turning to his colleagues, the engineer reported (translation follows): “All of the creatures here are solid, some segmented, with capacities to move on the ground, through the water or air. All extremely slow. Unimpressive.” The chemist then commented: “All quite similar, derived from different sequences of four chemical ingredients.” Next the computational expert opined: “Limited computing abilities. But one, the hairless biped, is unlike the others. It exchanges information in a manner that is primitive and inefficient but remarkably different from the others. It creates many odd objects, including ones that are consumable, others that produce symbols, and yet others that destroy members of its tribe.”

“But how can this be?” the engineer mused. “Given the similarity in form and chemistry, how can their computing capacity differ?” “I am not certain,” confessed the computational alien. “But they appear to have a system for creating new expressions that is infinitely more powerful than those of all the other living kinds. I propose that we place the hairless biped in a different group from the other animals, with a separate origin, and from a different galaxy.” The other two aliens nodded, and then all three zipped home to present their report.

A Patchwork Mind: How Your Parents' Genes Shape Your Brain

  • When passing on DNA to their offspring, mothers silence certain genes, and fathers silence others. These imprinted genes usually result in a balanced, healthy brain, but when the process goes awry, neurological disorders can result.
  • Imprinting errors are responsible for rare disorders such as Angelman and Prader-Willi syndromes, and some scientists are beginning to think imprinting might be implicated in more common illnesses such as autism and schizophrenia.
  • Even typical brains are the result of asymmetric contributions from Mom and Dad. Higher cognitive function seems to be disproportionately controlled by Mom’s genes, whereas the drive to eat and mate is influenced by Dad’s.

It is true that children inherit 23 chromosomes from their mother and 23 complementary chromosomes from their father. But it turns out that genes from Mom and Dad do not always exert the same level of influence on the developing fetus. Sometimes it matters which parent you inherit a gene from—the genes in these cases, called imprinted genes because they carry an extra molecule like a stamp, add a whole new level of complexity to Mendelian inheritance. These molecular imprints silence genes; certain imprinted genes are silenced by the mother, whereas others are silenced by the father, and the result is the delicate balance of gene activation that usually produces a healthy baby.

Depressingly Easy

  • Rates of depression have risen in recent decades, at the same time that people are enjoying time-saving conveniences such as microwave ovens, e-mail, prepared meals, and machines for washing clothes and mowing lawns.
  • People of earlier generations, whose lives were characterized by greater efforts just to survive, paradoxically, were mentally healthier. Human ancestors also evolved in conditions where hard physical work was necessary to thrive.
  • By denying our brains the rewards that come from anticipating and executing complex tasks with our hands, the author argues, we undercut our mental well-being.


Depression's Evolutionary Roots

Depression seems to pose an evolutionary paradox. Research in the US and other countries estimates that between 30 and 50 percent of people have met current psychiatric diagnostic criteria for major depressive disorder sometime in their lives. But the brain plays crucial roles in promoting survival and reproduction, so the pressures of evolution should have left our brains resistant to such high rates of malfunction. Mental disorders should generally be rare — why isn’t depression?

This paradox could be resolved if depression were a problem of growing old. The functioning of all body systems and organs, including the brain, tends to deteriorate with age. This is not a satisfactory explanation for depression, however, as people are most likely to experience their first bout in adolescence and young adulthood.

Or, perhaps, depression might be like obesity — a problem that arises because modern conditions are so different from those in which we evolved. Homo sapiens did not evolve with cookies and soda at the fingertips. Yet this is not a satisfactory explanation either. The symptoms of depression have been found in every culture which has been carefully examined, including small-scale societies, such as the Ache of Paraguay and the !Kung of southern Africa — societies where people are thought to live in environments similar to those that prevailed in our evolutionary past.

There is another possibility: that, in most instances, depression should not be thought of as a disorder at all. In an article recently published in Psychological Review, we argue that depression is in fact an adaptation, a state of mind which brings real costs, but also brings real benefits.

One reason to suspect that depression is an adaptation, not a malfunction, comes from research into a molecule in the brain known as the 5HT1A receptor. The 5HT1A receptor binds to serotonin, another brain molecule that is highly implicated in depression and is the target of most current antidepressant medications. Rodents lacking this receptor show fewer depressive symptoms in response to stress, which suggests that it is somehow involved in promoting depression. (Pharmaceutical companies, in fact, are designing the next generation of antidepressant medications to target this receptor.) When scientists have compared the functional part of the rat 5HT1A receptor to that of humans, they have found the two to be about 99 percent similar, which suggests that the receptor is so important that natural selection has preserved it. The ability to “turn on” depression would seem to be important, then, not an accident.

This is not to say that depression is not a problem. Depressed people often have trouble performing everyday activities: they can’t concentrate on their work, they tend to isolate themselves socially, they are lethargic, and they often lose the ability to take pleasure from activities such as eating and sex. Some can plunge into severe, lengthy, and even life-threatening bouts of depression.

So what could be so useful about depression? Depressed people often think intensely about their problems. These thoughts are called ruminations; they are persistent, and depressed people have difficulty thinking about anything else. Numerous studies have also shown that this thinking style is often highly analytical: depressed people dwell on a complex problem, breaking it down into smaller components, which they consider one at a time.

This analytical style of thought, of course, can be very productive. Each component is less difficult than the whole, so the problem becomes more tractable. Indeed, when you are faced with a difficult problem, such as a math problem, feeling depressed is often a useful response that may help you analyze and solve it. For instance, in some of our research, we have found evidence that people who get more depressed while they are working on complex problems in an intelligence test tend to score higher on the test.

Analysis requires a lot of uninterrupted thought, and depression coordinates many changes in the body to help people analyze their problems without getting distracted. In a region of the brain known as the ventrolateral prefrontal cortex (VLPFC), neurons must fire continuously for people to avoid being distracted. But this is very energetically demanding for VLPFC neurons, just as a car’s engine eats up fuel when going up a mountain road. Moreover, continuous firing can cause neurons to break down, just as the car’s engine is more likely to break down when stressed. Studies of depression in rats show that the 5HT1A receptor is involved in supplying neurons with the fuel they need to fire, as well as preventing them from breaking down. These important processes allow depressive rumination to continue uninterrupted with minimal neuronal damage, which may explain why the 5HT1A receptor is so evolutionarily important.

Many other symptoms of depression make sense in light of the idea that analysis must be uninterrupted. The desire for social isolation, for instance, helps the depressed person avoid situations that would require thinking about other things. Similarly, the inability to derive pleasure from sex or other activities prevents the depressed person from engaging in activities that could distract him or her from the problem. Even the loss of appetite often seen in depression could be viewed as promoting analysis because chewing and other oral activity interferes with the brain’s ability to process information.

But is there any evidence that depression is useful in analyzing complex problems? For one thing, if depressive rumination were harmful, as most clinicians and researchers assume, then bouts of depression should be slower to resolve when people are given interventions that encourage rumination, such as having them write about their strongest thoughts and feelings. However, the opposite appears to be true. Several studies have found that expressive writing promotes quicker resolution of depression, and they suggest that this is because depressed people gain insight into their problems.

There is another suggestive line of evidence. Various studies have found that people in depressed mood states are better at solving social dilemmas. And these would seem to be precisely the kind of problems that are difficult enough to require analysis and important enough to drive the evolution of such a costly emotion. Consider a woman with young children who discovers her husband is having an affair. Is the wife’s best strategy to ignore it, or to force him to choose between her and the other woman and risk abandonment? Laboratory experiments indicate that depressed people are better at solving social dilemmas because they analyze more thoroughly the costs and benefits of the different options they might take.

Sometimes people are reluctant to disclose the reason for their depression because it is embarrassing or sensitive, because they find it painful, because they believe they must soldier on and ignore their problems, or because they have difficulty putting their complex internal struggles into words.

But depression is nature’s way of telling you that you’ve got complex social problems that the mind is intent on solving. Therapies should try to encourage depressive rumination rather than try to stop it, and they should focus on trying to help people solve the problems that trigger their bouts of depression. (There are several effective therapies that focus on just this.) It is also essential, in instances where there is resistance to discussing ruminations, that the therapist try to identify and dismantle those barriers.

When one considers all the evidence, depression seems less like a disorder where the brain is operating in a haphazard way, or malfunctioning. Instead, depression seems more like the vertebrate eye—an intricate, highly organized piece of machinery that performs a specific function.

Tuesday, September 1, 2009

Solipsism

Solipsism is the philosophical idea that one's own mind is all that exists. As an epistemological or ontological position, it holds that knowledge of anything outside one's own mind is unjustified: the external world and other minds cannot be known and might not exist. In the history of philosophy, solipsism has served as a skeptical hypothesis.

Explanation

Denial of materialist existence, in itself, does not constitute solipsism. Possibly the most controversial feature of the solipsistic worldview is the denial of the existence of other minds. Since qualia, or personal experiences, are private and ineffable, another being's experience can be known only by analogy.

Philosophers try to build knowledge on more than an inference or analogy. The failure of Descartes' epistemological enterprise popularized the idea that all certain knowledge may end at "I am thinking; therefore I am" (cogito ergo sum).[1]

The theory of solipsism also merits close examination because it relates to three widely held philosophical presuppositions, which are themselves fundamental and wide-ranging in importance. These are that:

  1. My most certain knowledge is the content of my own mind—my thoughts, experiences, affects, etc.;
  2. There is no conceptual or logically necessary link between the mental and the physical—between, say, the occurrence of certain conscious experiences or mental states and the 'possession' and behavioral dispositions of a 'body' of a particular kind (see the brain in a vat); and
  3. The experience of a given person is necessarily private to that person.

Solipsism is not a single concept but instead refers to several worldviews whose common element is some form of denial of the existence of a universe independent from the mind of the agent.

History

Gorgias (of Leontini)

Solipsism is first recorded with the Greek presocratic sophist Gorgias (c. 483–375 BC), who is quoted by the Roman skeptic Sextus Empiricus as having stated:

  1. Nothing exists;
  2. Even if something exists, nothing can be known about it; and
  3. Even if something could be known about it, knowledge about it can't be communicated to others.

This can be expressed as:

It is always known that I exist, for “my mind is the only thing I know exists.” Someone else could be dreaming up my thoughts, but only I know my own thoughts; I have no knowledge of anyone else's thoughts, and I cannot know whether other minds exist at all, or whether they control their own thoughts.

Much of the point of the Sophists was to show that "objective" knowledge was a literal impossibility. (See also comments credited to Protagoras of Abdera.) The influence of the Sophists has been severely downplayed; however, modern linguistic philosophy appears to have its roots in their teachings.

Descartes

René Descartes. Portrait by Frans Hals, 1648.

The foundations of solipsism are in turn the foundations of the view that the individual's understanding of any and all psychological concepts (thinking, willing, perceiving, etc.) is accomplished by making analogy with his or her own mental states; i.e., by abstraction from inner experience. And this view, or some variant of it, has been influential in philosophy since Descartes elevated the search for incontrovertible certainty to the status of the primary goal of epistemology, whilst also elevating epistemology to "first philosophy".

Brain in a vat

In philosophy, the brain in a vat is an element used in a variety of thought experiments intended to draw out certain features of our ideas of knowledge, reality, truth, mind, and meaning. It is drawn from the idea, common to many science fiction stories, that a mad scientist might remove a person's brain from the body, suspend it in a vat of life-sustaining liquid, and connect its neurons by wires to a supercomputer which would provide it with electrical impulses identical to those the brain normally receives. According to such stories, the computer would then be simulating a virtual reality (including appropriate responses to the brain's own output) and the person with the "disembodied" brain would continue to have perfectly normal conscious experiences without these being related to objects or events in the real world.

The simplest use of brain-in-a-vat scenarios is as an argument for philosophical skepticism and solipsism. A simple version of this runs as follows: Since the brain in a vat gives and receives exactly the same impulses as it would if it were in a skull, and since these are its only way of interacting with its environment, then it is not possible to tell, from the perspective of that brain, whether it is in a skull or a vat. Yet in the first case most of the person's beliefs may be true (if he believes, say, that he is walking down the street, or eating ice cream); in the latter case they are false. Since, the argument says, you cannot know whether you are a brain in a vat, you cannot know whether most of your beliefs are completely false. Since, in principle, it is impossible to rule out your being a brain in a vat, you cannot have good grounds for believing any of the things you believe; you certainly cannot know them.

This argument is a contemporary version of the argument given by Descartes in Meditations on First Philosophy (which he eventually rejects) that he could not trust his perceptions on the grounds that an evil demon might, conceivably, be controlling his every experience. It is also more distantly related to Descartes' argument that he cannot trust his perceptions because he may be dreaming (Descartes' dream argument is preceded by Zhuangzi in "Chuang Chou dreamed he was a butterfly".). In this latter argument the worry about active deception is removed.

Philosophical responses

Such puzzles have been worked over in many variations by philosophers in recent decades. Some, including Barry Stroud, continue to insist that such puzzles constitute an unanswerable objection to any knowledge claims.[1] Others have argued against them, most notably Hilary Putnam.[2] In the first chapter of his Reason, Truth, and History, Putnam claims that the thought experiment is inconsistent on the grounds that a brain in a vat could not have the sort of history and interaction with the world that would allow its thoughts or words to be about the vat that it is in.

In other words, if a brain in a vat stated "I am a brain in a vat", it would always be stating a falsehood. If the brain making this statement lives in the "real" world, then it is not a brain in a vat. On the other hand, if the brain making this statement is really just a brain in the vat, then by stating "I am a brain in a vat" what the brain is really stating is "I am what nerve stimuli have convinced me is a 'brain,' and I reside in an image that I have been convinced is called a 'vat'." That is, a brain in a vat would never be thinking about real brains or real vats, but rather about images sent into it that resemble real brains or real vats. This of course makes our definition of "real" even more muddled. This refutation of the vat theory is a consequence of Putnam's endorsement, at that time, of the causal theory of reference. Roughly, in this case: if you've never experienced the real world, then you can't have thoughts about it, whether to deny or affirm them. Putnam contends that by "brain" and "vat" the brain in a vat must be referring not to things in the "outside" world but to elements of its own "virtual world"; and it is clearly not a brain in a vat in that sense. A further problem is that the supposed brain in a vat cannot have any evidence for being a brain in a vat, because that would amount to saying "I have what nerve stimuli have convinced me is evidence of my being a brain in a vat" and also "Nerve stimuli have convinced me of the fact that I am a brain in a vat."

Many writers, however, have found Putnam's proposed solution unsatisfying, as it appears, in this regard at least, to depend on a shaky theory of meaning: that we cannot meaningfully talk or think about the "external" world because we cannot experience it, which sounds like a version of the outmoded verification principle.[3] Consider the following quote: "How can the fact that, in the case of the brains in a vat, the language is connected by the program with sensory inputs which do not intrinsically or extrinsically represent trees (or anything external) possibly bring it about that the whole system of representations, the language in use, does refer to or represent trees or any thing external?" Putnam here argues from the lack of sensory inputs representing (real-world) trees to our inability to meaningfully think about trees. But it is not clear why the referents of our terms must be accessible to us in experience. One cannot, for example, have experience of other people's private states of consciousness; does this imply that one cannot meaningfully ascribe mental states to others?

Subsequent writers on the topic, especially among those who agree with Putnam's claim, have been particularly interested in the problems it presents for content: that is, how, if at all, the brain's thoughts can be about a person or place with which it has never interacted and which perhaps does not exist.