Friday, June 26, 2009

ENHANCED: Optogenetics


Brain control has always been tricky, particularly when it comes to the brain trying to control itself. We have many indirect methods — drugs, meditation, education, travel, etc. — but people have always wanted quick and reliable control of their brain states. In practice, that means changing the activity of a particular area of the brain. Switching the drives and mental states we need on and off would be considerably less frustrating than the slow, effortful transitions nature has given us. And so we are entering the era of a new set of technologies for direct neural control.

The best current technology combines psychosurgery and implantation. Right now, hard-to-treat disorders can get a difficult direct neural treatment called Deep Brain Stimulation, or DBS. DBS is like a pacemaker for the brain. An electrode is snaked down to the area associated with the disorder being treated and left in place. After the surgery has healed, the implant pulses current at a frequency that either activates or quiets the area responsible for the condition. Reaching cells farther from the electrode means passing more current through everything nearby. DBS is by far the most precise clinical procedure for controlling areas of the brain, but it’s still disappointingly non-specific. Since DBS involves brain surgery, it’s generally a treatment of last resort, but it has shown good results for previously untreatable cases of Parkinson’s, chronic pain, and depression. Electrode implantation is an extreme measure, not likely to be widely used.

Dr. Karl Deisseroth of Stanford University can go one better. He’s developed a technique called optogenetics that combines genetic engineering, lasers, neurology, and surgery to create a direct control mechanism. Optogenetics uses a brain-cell switch with two genetic parts. The first is a gene taken from an alga, which activates the cell in the presence of blue light — a response the alga uses to turn toward light for photosynthesis. In a neuron, that activation fires the cell. The second comes from an archaeon, a salt-loving extremophile, and responds to yellow light by pumping chloride ions into the cell. In a brain cell, that means not firing at all.

To get the genes in place, Deisseroth’s team opens the skull and uses a pipette to apply a nonreproducing adenovirus to the desired brain area. The virus is genetically configured to deliver both genes into a single cell type, and only that cell type takes them up. Once the “light switch” genes are in place, those brain cells are light-sensitive, and a 50-micrometer fiber-optic cable is fed to the area. In this way, researchers can target very specific deep brain structures, areas currently too deep and fragile for most psychosurgery. Once the researcher attaches the other end of the cable to a laser, he or she has absolute and flawless control over that group of neurons: blue light on, yellow light off.

Dr. Deisseroth is a psychiatrist as well as a bioengineer, and he envisions using optogenetics in place of DBS’s not-so-deep cousin, Vagus Nerve Stimulation. Much like DBS, VNS uses an electrode to treat depression and epilepsy, but it targets the point where the vagus nerve passes through the neck rather than deep brain structures. It can still cause problems in many patients — sleep apnea, throat pain, coughing, and voice changes are the main complaints. Deisseroth believes optogenetics might reduce the side effects of VNS by targeting the treatment precisely, rather than just shocking the neck region.

All this points to easier and more effective neural control. We’re still far from knowing which cells do what, and further from orchestrating treatments and enhancements for specific conditions. But for the first time we can map and build useful handles on the very things that make us ourselves.

- Quinn Norton

Thursday, June 25, 2009

Four color theorem


In mathematics, the four color theorem, or the four color map theorem, states that given any separation of a plane into contiguous regions, called a map, the regions can be colored using at most four colors so that no two adjacent regions have the same color. Two regions are called adjacent only if they share a border segment, not just a point.

Three colors are adequate for simpler maps, but an additional fourth color is required for some maps, such as a map in which one region is surrounded by an odd number of other regions that touch each other in a cycle. The five color theorem, which has a short elementary proof, states that five colors suffice to color a map and was proven in the late 19th century; however, proving four colors suffice turned out to be significantly harder. A number of false proofs and false counterexamples have appeared since the first statement of the four color theorem in 1852.
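That odd-ring example is easy to check mechanically. Below is a minimal sketch in Python — the map, with one region ringed by three mutually touching neighbors, is invented for illustration — that tries to color it by backtracking: three colors fail, four succeed.

```python
# Toy check of the four color idea: color a map, given as an adjacency
# list, so that no two touching regions share a color. The map here is
# a hypothetical worst case: a center region ringed by three neighbors
# that also touch each other, which forces a fourth color.

ADJACENT = {
    "center": {"a", "b", "c"},
    "a": {"center", "b", "c"},
    "b": {"center", "a", "c"},
    "c": {"center", "a", "b"},
}

def color_map(adjacent, num_colors):
    """Return a region -> color dict using at most num_colors, or None."""
    regions = list(adjacent)
    coloring = {}

    def backtrack(i):
        if i == len(regions):
            return True
        region = regions[i]
        for color in range(num_colors):
            # Legal if no already-colored neighbor uses this color.
            if all(coloring.get(nbr) != color for nbr in adjacent[region]):
                coloring[region] = color
                if backtrack(i + 1):
                    return True
                del coloring[region]
        return False

    return coloring if backtrack(0) else None

print(color_map(ADJACENT, 3))  # None — three colors are not enough
print(color_map(ADJACENT, 4))  # a valid coloring, e.g. center=0, a=1, b=2, c=3
```

The theorem's content is precisely that the second call succeeds for every planar map, no matter how large; brute-force checks like this can only confirm it map by map.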

Despite the motivation from coloring political maps of countries, the theorem is not of particular interest to mapmakers. According to an article by the math historian Kenneth May, “Maps utilizing only four colours are rare, and those that do usually require only three. Books on cartography and the history of mapmaking do not mention the four-color property.”

The four color theorem was proven in 1976 by Kenneth Appel and Wolfgang Haken. It was the first major theorem to be proven using a computer. Appel and Haken's approach started by showing there is a particular set of 1,936 maps, each of which cannot be part of a smallest-sized counterexample to the four color theorem. Appel and Haken used a special-purpose computer program to check that each of these maps had this property. Additionally, any map (whether or not it is a counterexample) must contain a portion that looks like one of these 1,936 maps. Showing this required hundreds of pages of hand analysis. Appel and Haken concluded that no smallest counterexample exists, because any such counterexample would have to contain, yet could not contain, one of these 1,936 maps. This contradiction means there are no counterexamples at all, and the theorem is true. Initially, their proof was not accepted by all mathematicians, because the computer-assisted portion was infeasible for a human to check by hand. Since then the proof has gained wider acceptance, although doubts remain.

To dispel remaining doubt about the Appel–Haken proof, a simpler proof using the same ideas and still relying on computers was published in 1997 by Robertson, Sanders, Seymour, and Thomas. Additionally, in 2005 the theorem was proven by Georges Gonthier with general-purpose theorem-proving software (the Coq proof assistant).

Tuesday, June 23, 2009

For particle physicists who study phase transitions, a traffic jam is simply a solid made up of idling cars

A few years ago, Swiss economists Bruno Frey and Alois Stutzer announced the discovery of a new human foible, which they called “the commuter’s paradox.” They found that when people are choosing where to live, they consistently underestimate the pain of a long commute. This leads people to mistakenly believe that a mansion in the suburbs, with its extra bedroom and sprawling lawn, will make them happier, even if living there forces them to drive an additional 45 minutes to work. It turns out, however, that traffic is torture, and the big house isn’t worth it. According to the calculations of Frey and Stutzer, a person with a one-hour commute has to earn 40 percent more money to be as satisfied with life as someone who walks to the office.

Long commutes make us unhappy because the flow of traffic is inherently unpredictable. As a result, we never adapt to the suffering of rush hour. (Ironically, if traffic were always bad, and not just usually bad, it would be easier to deal with.) As the Harvard University psychologist Daniel Gilbert notes, “Driving in traffic is a different kind of hell every day.”

But why is traffic so unpredictable? After all, the number of cars on a highway during a typical weekday rush hour is fairly constant. And yet, even when there are no accidents—and most traffic isn’t caused by collisions—the speed of traffic can undergo dramatic and seemingly inexplicable shifts.

The key to understanding traffic jams is something known as “critical density,” or the number of vehicles that any road can efficiently accommodate. Past this threshold, when too many cars are trying to cram onto the same six lanes of asphalt, the flow of traffic starts to break down. At this point, congestion becomes all but inevitable, as even seemingly insignificant events, such as a single driver tapping on the brakes, can trigger a cascade of brake lights. That’s when the highway becomes a parking lot.

While the concept of critical density has been repeatedly demonstrated using computer simulations—drivers are surprisingly easy to model as a system of interacting particles—it wasn’t until last year that this theory of traffic was experimentally confirmed. A team of physicists at Nagoya University wanted to see how many cars could maintain a constant speed of 19 mph around a short circular track. It turned out that the critical number was 22: Once that density was reached, tiny fluctuations started to reverberate around the track, which caused the occasional spontaneous standstill. As the scientists note, this is actually a pretty familiar scenario for particle physicists, who are used to studying phase transitions, such as the transformation of liquid water into solid ice. In this case, the critical threshold is temperature, which triggers clusters of molecules to slow down and form a crystal lattice, which then spreads to nearby molecules. A traffic jam is simply a solid made up of idling cars.
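The flavor of those simulations is easy to reproduce. The sketch below implements the classic Nagel–Schreckenberg cellular automaton — a standard particle model of traffic from the physics literature, not the Nagoya group's actual setup — with illustrative parameters. Past a critical density, random slowdowns stop dying out and the average speed collapses.

```python
import random

# Nagel-Schreckenberg traffic model: cars as particles on a circular
# road of discrete cells. Each step, every car accelerates toward the
# speed limit, brakes to avoid the car ahead, and occasionally slows
# at random. All parameters below are illustrative.

ROAD_CELLS = 200   # length of the circular track, in car-sized cells
V_MAX = 5          # speed limit, in cells per time step
P_SLOW = 0.3       # chance per step that a driver randomly brakes

def average_speed(num_cars, steps=1000):
    positions = sorted(random.sample(range(ROAD_CELLS), num_cars))
    speeds = [0] * num_cars
    total = 0.0
    for step in range(steps):
        for i in range(num_cars):
            # Empty cells between car i and the car ahead of it.
            gap = (positions[(i + 1) % num_cars] - positions[i] - 1) % ROAD_CELLS
            speeds[i] = min(speeds[i] + 1, V_MAX, gap)       # accelerate, then brake
            if speeds[i] > 0 and random.random() < P_SLOW:   # random slowdown
                speeds[i] -= 1
        positions = [(p + v) % ROAD_CELLS for p, v in zip(positions, speeds)]
        if step >= steps // 2:                               # measure after settling
            total += sum(speeds) / num_cars
    return total / (steps - steps // 2)

for density in (0.05, 0.10, 0.15, 0.25, 0.40):
    n = int(ROAD_CELLS * density)
    print(f"density {density:.2f} ({n} cars): mean speed {average_speed(n):.2f}")
```

Run it and the phase transition shows up in the printout: at low densities the mean speed hovers near the limit, and past a threshold jams condense and the average falls sharply.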

The hope, of course, is that by understanding traffic jams we can learn to prevent them. Tom Vanderbilt, in his authoritative book Traffic, describes a simple experiment performed by the Washington Department of Transportation that involved a liter of rice, a plastic funnel, and a glass beaker. When the rice was poured through the funnel all at once, it took 40 seconds to empty; the density of jostling grains impeded the flow. However, when the grains were poured in a gradual stream, it took only 27 seconds for the rice to pass through. What seemed slower turned out to be about 30 percent faster — 13 of the original 40 seconds shaved off. This helps explain why traffic engineers are so eager to install red lights on highway onramps: By slowing traffic before it enters the concrete funnel, they hope to prevent the road from exceeding its critical density.

And then there’s the insect solution. Dirk Helbing, a “congestion expert” at the Dresden University of Technology, constructed a network of “carriageways” between an ant nest and a source of sugar. Within a few hours, the ants located the most direct route to the sugar, which became dense with hungry insects.

If ants were people, this density would eventually lead to gridlock. However, Helbing discovered that just as the carriageway approached its breaking point, ant “traffic cops” physically blocked the road. This forced the ants to find another route to the sugar, and thus prevented a traffic jam.

Obviously, human cops can’t shut down the interstate just because it’s busy. But the hope is that humans will be able to build smarter GPS networks. By taking real-time traffic data into account, devices linked to this system will direct drivers away from saturated roads. Is the highway approaching critical density? If so, take surface streets. Traffic will always be caused by a miserable sort of randomness—the stochasticity of too many automotive particles in too small a space—but there’s nothing inevitable about gridlock. We can learn to drive like ants, even if we still insist on driving in from the suburbs.
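The routing idea is conceptually simple: weight each road by a travel time that climbs steeply once the road nears critical density, then run an ordinary shortest-path search. Here is a toy sketch — the cost function, the densities, and the little road graph are all invented, and no real navigation API is involved:

```python
import heapq

def travel_time(length_km, density, critical=0.25):
    """Hypothetical cost: free flow below critical density, steep penalty above."""
    penalty = 1.0 if density < critical else 1.0 + 10.0 * (density - critical)
    return length_km * penalty

# Road graph: node -> [(neighbor, length in km, current density), ...]
ROADS = {
    "home":    [("highway", 2.0, 0.40), ("surface", 3.0, 0.10)],
    "highway": [("office", 8.0, 0.40)],
    "surface": [("office", 12.0, 0.10)],
    "office":  [],
}

def best_route(graph, start, goal):
    """Dijkstra's algorithm over congestion-weighted travel times."""
    frontier = [(0.0, start, [start])]
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for nxt, length, density in graph[node]:
            heapq.heappush(
                frontier, (cost + travel_time(length, density), nxt, path + [nxt])
            )
    return float("inf"), []

print(best_route(ROADS, "home", "office"))
# -> (15.0, ['home', 'surface', 'office']): the shorter highway route
#    costs 25.0 once its jam penalty is applied, so surface streets win.
```

Feed the same search live density readings instead of hard-coded numbers and you have, in miniature, the ant strategy: close off the saturated route before it seizes up.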

Beyond a Theory of Everything

On the very large and very small versus the very, very complex.

The LHC hasn’t yet provided its first results, the much-anticipated answers to questions we’ve been asking for so long. But they should surely come in 2009, bringing us closer to understanding the bedrock nature of particles, space, and time — toward a unified theory of the basic forces. This would push forward a program that started with Newton (who showed that the force that made the apple fall was the same one holding the planets in orbit), and continued through Faraday, Maxwell, Einstein, Weinberg/Salam, and others in a distinguished roll call.

Most exciting of all would be clues to the ultimate unification between the force of gravity, which governs cosmic scales, and the forces of the microworld. Indeed, the quest for a unified theory engages huge numbers of the most talented young theorists (too many, in my opinion — most would derive more satisfaction, and contribute more to science, if they focused on other scientific frontiers that are less intensively studied). But while unified theories are sometimes called “theories of everything,” this phrase is misleading and hubristic. Such theories offer absolutely zero help to 99 percent of scientists. Chemists and biologists don’t fret about their ignorance of subnuclear physics, still less about the mysterious “deep structure” of space and time.

String theory, or some alternative to it, might indeed unify two great scientific frontiers, the very big and the very small — and that would be an immense intellectual triumph. But a third frontier, the very complex, is perhaps the most challenging of all.

In terms of scale, the most complex entities we know of — ourselves — are midway between atoms and stars. It would take about as many human bodies to make up a star as there are atoms in each of us. Living things are very large compared to atoms: They must be big enough to have layer upon layer of intricate structure. But they cannot be too large, otherwise they would be crushed by gravity.
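That equivalence survives a quick back-of-the-envelope check; the round numbers below are standard estimates, not figures from Rees's essay.

```python
# Order-of-magnitude check: humans sit roughly midway between atoms
# and stars. Round-number assumptions: the Sun masses ~2e30 kg, a
# person ~70 kg, and a 70 kg body holds roughly 7e27 atoms.

SUN_MASS_KG = 2e30
BODY_MASS_KG = 70.0
ATOMS_PER_BODY = 7e27   # a commonly quoted estimate

bodies_per_star = SUN_MASS_KG / BODY_MASS_KG
print(f"bodies per star: {bodies_per_star:.1e}")   # ~2.9e28
print(f"atoms per body:  {ATOMS_PER_BODY:.1e}")    # 7.0e27

# The two counts agree to within a factor of a few -- the same order
# of magnitude, which is all the "midway" claim requires.
```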

It may seem topsy-turvy, then, that astronomers can speak confidently about things billions of light-years away, whereas things on the seemingly more graspable human scale, such as theories of diet and child care, are notorious for their lack of consensual progress. But stars are simple. They’re so big and hot that their content is broken down into simple atoms; none matches the intricate structure of even an insect, let alone the human brain.

The sciences are sometimes likened to different floors of a tall building: particle physics on the ground floor, then the rest of physics, then chemistry, then biology, and so forth, all the way up to psychology, with the economists in the penthouse. There is a corresponding hierarchy of complexity — atoms, molecules, cells, organisms, ecosystems, and so forth.

But the analogy of a building is poor. A building, especially a high one, needs secure foundations. But the “higher level” sciences that deal with complex systems each have their own autonomous concepts. An insecure base doesn’t imperil them, as it would a building.

Scientists who try to understand why flows go turbulent, how taps drip, or why waves break, treat the fluid as a continuum: Subatomic details are irrelevant. Even if we could solve Schrödinger’s equation for all the atoms in the flow, the solution would offer no insight into turbulence.

We can predict with confidence that an albatross will return to its nest having wandered 10,000 kilometers or more over the southern ocean. Such a prediction would be impossible — not just in practice, but even in principle — if we considered the albatross as an assemblage of electrons, protons, and neutrons. Animal behavior is best understood in terms of goals and survival rather than any concepts used by physicists or chemists.

Finding the “read out” of the human genome, discovering the string of molecules that encode our genetic inheritance, is an amazing achievement. But it is just the prelude to the far greater challenge of post-genomic science: understanding how the genetic code triggers the assembly of proteins and expresses itself in a developing embryo. Other aspects of biology, especially the nature of the brain, pose challenges at a higher level of the hierarchy that can barely yet be formulated.

Problems in biology and in environmental and human sciences remain unsolved because scientists have yet to elucidate the complex patterns, structures, and interconnections — not because we don’t understand subatomic physics well enough.

If Newton and Einstein are the icons of “unification,” then Charles Darwin is the icon of complexity. In the famous concluding words of On the Origin of Species, he showed how “whilst this planet has gone cycling on according to the fixed law of gravity, from so simple a beginning endless forms most beautiful and most wonderful have been, and are being, evolved.”

2009 is a double anniversary for Darwin: the 200th of his birth, and the 150th of the publication of his great book. The focus on his intellectual legacy will be a fitting reminder that our beautiful and wonderful everyday world presents intellectual challenges just as daunting as those of the cosmos and the quantum.  — Sir Martin Rees is president of The Royal Society and Master of Trinity College, Cambridge.

Wednesday, June 17, 2009

Darvaz: The Door to Hell

door to hell in Darvaz 1

This place in Turkmenistan is called “The Door to Hell” by locals. It is situated near the small town of Darvaz. The story goes back about 35 years, to when geologists drilling for gas suddenly struck an underground cavern so big that the entire drilling site, with all its equipment and camps, collapsed deep underground. No one dared go down because the cavern was filled with gas, so the geologists ignited it to keep poisonous gas from escaping the hole. It has been burning ever since — 35 years without pause. Nobody knows how many tons of good gas have burned away over those years; the supply seems all but infinite.



Thursday, June 11, 2009

In the industrial model of student mass production, the teacher is the broadcaster. A broadcast is by definition the transmission of information from transmitter to receiver in a one-way, linear fashion. The teacher is the transmitter and the student is a receptor in the learning process. The formula goes like this: "I'm a professor and I have knowledge. You're a student, you're an empty vessel and you don't. Get ready, here it comes. Your goal is to take this data into your short-term memory and through practice and repetition build deeper cognitive structures so you can recall it to me when I test you."... The definition of a lecture has become the process in which the notes of the teacher go to the notes of the student without going through the brains of either.

Wednesday, June 10, 2009

MIRROR NEURONS and imitation learning as the driving force behind "the great leap forward" in human evolution

By V.S. Ramachandran

The discovery of mirror neurons in the frontal lobes of monkeys, and their potential relevance to human brain evolution — which I speculate on in this essay — is the single most important "unreported" (or at least, unpublicized) story of the decade. I predict that mirror neurons will do for psychology what DNA did for biology: they will provide a unifying framework and help explain a host of mental abilities that have hitherto remained mysterious and inaccessible to experiments.

There are many puzzling questions about the evolution of the human mind and brain:

1) The hominid brain reached almost its present size — and perhaps even its present intellectual capacity — about 250,000 years ago. Yet many of the attributes we regard as uniquely human appeared only much later. Why? What was the brain doing during the long "incubation" period? Why did it have all this latent potential for tool use, fire, art, music and perhaps even language — potential that blossomed only considerably later? How did these latent abilities emerge, given that natural selection can only select expressed abilities, not latent ones? I shall call this "Wallace's problem", after the Victorian naturalist Alfred Russel Wallace, who first proposed it.

2) Crude "Oldowan" tools — made by just a few blows to a core stone to create an irregular edge — emerged 2.4 million years ago and were probably made by Homo habilis, whose brain size (700 cc) was halfway between that of modern humans (1,300 cc) and chimps (400 cc). After another million years of evolutionary stasis, aesthetically pleasing "symmetrical" tools began to appear, associated with a standardization of production technique and artifact form. These required switching from a hard hammer to a soft (wooden?) hammer while the tool was being made, in order to ensure a smooth rather than a jagged, irregular edge. And lastly, the invention of stereotyped "assembly line" tools (sophisticated symmetrical bifacial tools) that were hafted to a handle took place only 200,000 years ago. Why was the evolution of the human mind "punctuated" by these relatively sudden upheavals of technological change?

3) Why the sudden explosion (often called the "great leap") in technological sophistication, widespread cave art, clothes, stereotyped dwellings, etc. around 40,000 years ago, even though the brain had achieved its present "modern" size almost a million years earlier?

4) Did language appear completely out of the blue as suggested by Chomsky? Or did it evolve from a more primitive gestural language that was already in place?

5) Humans are often called the "Machiavellian Primate," referring to our ability to "read minds" in order to predict other people's behavior and outsmart them. Why are apes and humans so good at reading other individuals' intentions? Do higher primates have a specialized brain center or module for generating a "theory of other minds," as proposed by Nick Humphrey and Simon Baron-Cohen? If so, where is this circuit, and how and when did it evolve?