Here's Nancy Kanwisher's suggestion on how to improve the field of neuroimaging:
NIH sets up a web lottery, for real money, in which neuroscientists place bets on the replicability of any published neuroimaging paper. NIH further assembles a consortium of respected neuroimagers to attempt to replicate either a random subset of published studies, or perhaps any studies that a lot of people are betting on. Importantly, the purchased bets are made public immediately (the amount and number of bets, not the names of the bettors), so you get to see the whole neuroimaging community’s collective bet on which results are replicable and which are not. Now of course most studies will never be subjected to the NIH replication test. But because they MIGHT be, the votes of the community are real....
First and foremost, it would serve as a deterrent against publishing nonreplicable crap: If your colleagues may vote publicly against the replicability of your results, you might think twice before you publish them. Second, because the bets are public, you can get an immediate read of the opinion of the field on whether a given paper will replicate or not.

This is very similar to Robin Hanson's suggestion, and since I assume Kanwisher came up with the idea independently, the convergence bodes well for its success. Both Hanson and Kanwisher are motivated to promote an honest consensus on scientific questions.
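To make the mechanics concrete, here is a minimal sketch of how such a replication bet could be tallied and settled. It assumes a parimutuel pool in which winners split the losers' stakes in proportion to their own; the payout rule, names, and numbers are my illustrative assumptions, not part of Kanwisher's proposal.

```python
# Sketch of the proposed replication bet. Assumes a parimutuel pool;
# all names and figures are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class Bet:
    stake: float          # dollars wagered
    will_replicate: bool  # the bettor's prediction

def public_tally(bets: list[Bet]) -> dict:
    """The immediately public signal: amounts and counts per side, no names."""
    yes = [b for b in bets if b.will_replicate]
    no = [b for b in bets if not b.will_replicate]
    return {"yes_count": len(yes), "yes_total": sum(b.stake for b in yes),
            "no_count": len(no), "no_total": sum(b.stake for b in no)}

def settle(bets: list[Bet], replicated: bool) -> list[float]:
    """Parimutuel payout: each winner recovers their stake plus a
    stake-proportional share of the losing pool; losers get nothing."""
    losing_pool = sum(b.stake for b in bets if b.will_replicate != replicated)
    winning_pool = sum(b.stake for b in bets if b.will_replicate == replicated)
    if winning_pool == 0:  # nobody bet on the actual outcome
        return [0.0 for _ in bets]
    return [b.stake * (1 + losing_pool / winning_pool)
            if b.will_replicate == replicated else 0.0
            for b in bets]

bets = [Bet(100, True), Bet(50, False), Bet(25, False)]
print(public_tally(bets))             # the field's collective read on the paper
print(settle(bets, replicated=True))  # [175.0, 0.0, 0.0]
```

Note that the deterrent operates even on papers that are never tested: `public_tally` exposes the community's money-weighted judgment the moment bets are placed, while `settle` only runs for the subset the consortium actually attempts to replicate.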
When John Ioannidis came to give a talk at the NIH (which was interesting), I asked him (skip to 101:30) for his thoughts on this idea. He laughed and said that he had proposed something similar.
Could this actually happen? Over the next ten years, I'd guess almost certainly not in this precise form: first, real-money betting of this kind is largely illegal in the US, and second, such markets seem unlikely to scale all that well.
However, the randomized replication portion of the idea seems doable in the near term. This is actually now being done for psychology (the Reproducibility Project: Psychology), which is a laudable effort. It seems to me that randomized replications are likely precursors to any prediction markets, so this is what interested parties should be pushing now.
One objection is that these systems might encourage scientists to undertake more incremental research, as opposed to game-changing research. I have two responses. First, given the current incentives in science (i.e., the primacy of sexy publications), this might actually be a useful countervailing force.
Second, it seems possible (and useful) to set up long-standing prediction markets for a field, such as "Will the FDA approve an anti-amyloid antibody drug to treat Alzheimer's disease in the next ten years?" This would allow scientists to point to the impact their work had on major questions, quantified by the (log) change in that market's time series after a publication.
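As a rough sketch of that quantification: one reasonable reading of "(log) changes" is the shift in the log-odds of the market's probability from just before a publication to shortly after it. The window length, function names, and price data below are my assumptions for illustration.

```python
import math

def log_odds(p: float) -> float:
    return math.log(p / (1 - p))

def publication_impact(prices: dict[str, float], pub_date: str,
                       window_days: int = 7) -> float:
    """Impact score for a publication: log-odds shift in the market's
    probability from the day before `pub_date` to `window_days` later.
    `prices` maps ISO dates to market probabilities in (0, 1)."""
    dates = sorted(prices)
    i = dates.index(pub_date)
    before = prices[dates[max(i - 1, 0)]]
    after = prices[dates[min(i + window_days, len(dates) - 1)]]
    return log_odds(after) - log_odds(before)

# Hypothetical market: "Will the FDA approve an anti-amyloid antibody
# drug to treat Alzheimer's disease in the next ten years?"
prices = {"2024-03-01": 0.20, "2024-03-02": 0.20, "2024-03-09": 0.35}
print(publication_impact(prices, pub_date="2024-03-02"))  # ~0.77
```

A positive score means the field updated toward the outcome after the paper appeared; attributing the whole shift to a single publication is of course only defensible when nothing else moved the market in that window.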