Thursday, February 11, 2010

When it comes to scientific publishing and fame, the rich get richer and the poor get poorer. How can we break this feedback loop?

For to all those who have, more will be given, and they will have an abundance; but from those who have nothing, even what they have will be taken away.
—Matthew 25:29

Sociologist Robert K. Merton was the first to publish a paper on the similarity between this phrase in the Gospel of Matthew and the realities of how scientific research is rewarded, though he was likely not the first to note that famous scientists reap more credit than unknowns. Even if two researchers do similar work, the most eminent of the pair will get more acclaim, Merton observed—more praise within the community, more or better job offers, better opportunities. And it goes without saying that even if a graduate student publishes stellar work in a prestigious journal, their well-known advisor is likely to get more of the credit.

Merton published his theory, called the “Matthew Effect,” in 1968. At that time, the average age at which a biomedical researcher in the US received his or her first significant funding was 35 or younger. That meant that researchers who had little in terms of fame (at 35, they would have completed a PhD and a post-doc and would be just starting out on their own) could still get funded if they wrote interesting proposals. So Merton’s observation about getting credit for one’s work, however true in terms of prestige, wasn’t adversely affecting the funding of new ideas.

But that has changed. Over the last 40 years, the importance of fame in science has increased. The effect has compounded because famous researchers have gathered the smartest and most ambitious graduate students and post-docs around them, so that each notable paper from a high-wattage group bootstraps the group’s collective power. The famous grow more famous, and the younger researchers in their coterie are able to use that fame to their benefit. The effect of this concentration of power has finally trickled down to the level of funding: The average age at first receipt of the most common “starter” grants at the NIH is now almost 42. This means younger researchers without the strength of a fame-based community are cut out of the funding process, and their ideas, separate from an older researcher’s sphere of influence, don’t get pursued. This causes a founder effect in modern science, where the prestigious few dictate the direction of research. It’s not only unfair—it’s also actively dangerous to science’s progress.

It’s time to start rethinking the way we reward and fund science. How can we fund science in a way that is fair? By judging researchers independently of their fame—in other words, not by how many times their papers have been cited. By judging them instead via new measures, measures that until recently have been too ephemeral to use.

Right now, the gold standard worldwide for measuring a scientist’s worth is the number of times his or her papers are cited, along with the importance of the journal where the papers were published. Decisions of funding, faculty positions, and eminence in the field all derive from a scientist’s citation history. But relying on these measures entrenches the Matthew Effect: Even when the lead author is a graduate student, the majority of the credit accrues to the much older principal investigator. And an influential lab can inflate its citations by referring to its own work in papers that themselves go on to be heavy-hitters.
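To make the mechanics concrete, here is a minimal sketch in Python, with invented paper and citation numbers, of how citation-derived metrics respond to self-citation. The paper list, the self-citation counts, and the h-index-style summary are all illustrative assumptions of mine rather than anything specified above; the point is simply that a lab citing its own work lifts exactly the numbers these metrics reward.

```python
# Hypothetical papers from a single lab. Citation and self-citation counts
# are invented purely for illustration.
papers = [
    {"title": "Paper A", "citations": 40, "self_citations": 5},
    {"title": "Paper B", "citations": 12, "self_citations": 6},
    {"title": "Paper C", "citations": 8,  "self_citations": 5},
    {"title": "Paper D", "citations": 4,  "self_citations": 3},
]

def h_index(citation_counts):
    """Largest h such that at least h papers each have at least h citations."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, count in enumerate(counts, start=1):
        if count >= rank:
            h = rank
    return h

raw = [p["citations"] for p in papers]
external = [p["citations"] - p["self_citations"] for p in papers]

print("Total citations, all sources:   ", sum(raw))            # 64
print("Total citations, external only: ", sum(external))       # 45
print("h-index, all sources:           ", h_index(raw))        # 4
print("h-index, external only:         ", h_index(external))   # 3
```

Once the lab’s own citations are set aside, both the tally and the summary score drop, which is the inflation described above.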

But what is most profoundly unbalanced about relying on citations is that the paper-based metric distorts the reality of the scientific enterprise. Scientists make data points, narratives, research tools, inventions, pictures, sounds, videos, and more. Journal articles are a compressed and heavily edited version of what happens in the lab.

The scientific paper—a vehicle for spreading information about techniques and ideas, not about a researcher’s worth—wasn’t intended for the uses we’ve devised for it. Now that we have other ways of assessing scientists’ merits, we should turn to those instead.

We have the capacity to measure the quality of a scientist across multiple dimensions, not just in terms of papers and citations. Was the scientist’s data online? Was it comprehensible? Can I replicate the results? Run the code? Access the research tools? Use them to write a new paper? What ideas were examined and discarded along the way, so that I might know the reality of the research? What is the impact of the scientist as an individual, rather than the impact of the paper he or she wrote? When we can see the scientist as a whole, we’re less prone to relying on reputation alone to assess merit.
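As a purely hypothetical illustration of what recording those dimensions might look like, the sketch below (in Python) turns the questions above into fields on a per-researcher record. The schema and the field names are assumptions of mine, not something the essay prescribes.

```python
from dataclasses import dataclass, fields

@dataclass
class ResearcherProfile:
    """One hypothetical record per researcher, mirroring the questions above."""
    data_online: bool            # Was the data posted publicly?
    data_comprehensible: bool    # Could someone else make sense of it?
    results_replicated: bool     # Has an outside group replicated the results?
    code_runs: bool              # Does the published code actually run?
    tools_accessible: bool       # Can others get and use the research tools?
    enabled_new_papers: bool     # Has the work been reused in new papers?
    dead_ends_documented: bool   # Were examined-and-discarded ideas written up?

def dimensions_met(profile: ResearcherProfile) -> list[str]:
    """Names of the dimensions this researcher currently satisfies."""
    return [f.name for f in fields(profile) if getattr(profile, f.name)]

example = ResearcherProfile(True, True, False, True, True, False, True)
print(dimensions_met(example))
```

Even a crude record like this keeps information a citation count throws away, such as whether anyone outside the lab could actually rerun the work.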

Multidimensionality is one of the few counters to the Matthew Effect we have available. In communities where this kind of meritocracy prevails over seniority, like Linux or Wikipedia, the Matthew Effect is much less pronounced. And we have the capacity to measure each of these individual factors of a scientist’s work, using the basic discourse of the Web: the blog, the wiki, the comment, the trackback. We can find out who is talented in a lab, not just who was smart enough to hire that talent. As we develop the ability to measure multiple dimensions of scientific knowledge creation, dissemination, and re-use, we open up a new way to recognize excellence. What we can measure, we can value.

We don’t have to throw away the citation. Indeed, we shouldn’t. The citation has evolved to its position of importance because it’s a solid data point of value. We just have to put it into the context of all the other data points it has always lived among, data points that have, until now, been prohibitively expensive to measure and reward.
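One way to read putting the citation into that context is as a weighted combination in which citations stay in the mix but no longer stand alone. The sketch below, again in Python, does that with weights and inputs I have invented for illustration; it is not a proposal for particular numbers, only a picture of the citation living among the other data points.

```python
# Invented weights: citations still count, but so do the other signals.
# Every signal is assumed to be pre-scaled to the range 0..1.
WEIGHTS = {
    "normalized_citations": 0.3,   # citation count scaled within a field
    "data_openness":        0.2,
    "replicability":        0.2,
    "tool_and_code_reuse":  0.2,
    "web_discussion":       0.1,   # blog posts, wiki edits, comments, trackbacks
}

def composite_score(signals: dict[str, float]) -> float:
    """Weighted sum of 0..1 signals; a missing signal contributes nothing."""
    return sum(WEIGHTS[name] * signals.get(name, 0.0) for name in WEIGHTS)

# A hypothetical early-career researcher: modest citations, strong openness.
early_career = {
    "normalized_citations": 0.2,
    "data_openness": 0.9,
    "replicability": 0.8,
    "tool_and_code_reuse": 0.7,
    "web_discussion": 0.6,
}
print(round(composite_score(early_career), 2))  # 0.6
```

Because each signal is scaled to the same range before weighting, a researcher strong on openness and replicability can score respectably even while their citation count is still catching up.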
