Friday, January 18, 2013

Idea Friday: Changing the Scientific Publication

The math people (always ahead of the curve) are making a push to do an end run around the publishing industry, and it reminded me of an idea I've been kicking around in my head for a while now.  Scientists out there, let me know how crazy you think it would be.

First, some problems with the publication:
  • A scientist's career is largely determined by her/his publications.  More = better.  Higher quality journals = better.
  • Publishing in a high-impact journal particularly increases the pressure to cram manuscripts with data.  Think Cell papers with their 14 figures and 7 supplementals...
  • Fitting all that data together requires molding it into the Story
    • The Story is often great and can help the reader see how things fit together
    • Sometimes, though, the Story can skew the interpretation of the data too much, like when you force a jigsaw piece somewhere it doesn't quite belong.
    • The Story almost never reflects how and why the research was actually done - it's applied post hoc.  (In science, you're guaranteed to hear "I tried this and this, but that didn't work, so now I've got such and such and I need to figure out how I'm supposed to tell a Story with it all.")
  • You can publish small studies - but they'll end up in low-impact journals (unless it's a great discovery).  And don't expect to be able to break your work into individual pieces and publish them separately - journals don't like it when what you're publishing is a simple extension of what you did last time.  Understandably, they'd rather hoard the novelty all at once!
  • As a result, the ratio of what gets published in science to the actual science that gets done is kinda low.
  • Therefore, there's a massive inefficiency in scientific communication among scientists.  Experiments get done, don't fit with anything else, and get put on a shelf.  They might be relevant to other researchers, but those researchers never find out about them unless the results come up serendipitously at a conference or something.
  • On top of that, there's an even worse inefficiency when it comes to quality information being communicated from scientists to the public. And it's the public who, by the way, pays for the research in the first place.
So here's my proposal to take a bite out of those two inefficiencies:

The Publication and Required Outreach Grant 
(PRO grant?  eh?  EH??? ok maybe not but name is unimportant)

  • To change the system, you have to use the one lever that controls scientists more than getting published: getting funded.
  • The NIH starts by establishing a high-value, high-prestige grant.  In the neuro and bio worlds, I'm talking something like HHMI-level.
  • As a requirement for receiving the grant, you must publish a single figure's worth of data every two months.
  • This figure is created and written up by the researcher, then uploaded to a special NIH website built for this purpose.
  • In addition, the researchers must produce a layman's explanation of their data, to be published alongside it for the public's consumption.  A simple button on the website could switch the content between layman mode and scientist mode.
  • Gradually shift most NIH grants to include these requirements, and you've created an entirely new world of research communication.
Here's a fake FAQ to spell out some details:

Would the published experiment/figure have to be related to what the grant funded?
No.  The purpose of this grant is simply to get scientists sharing small bits of otherwise unpublishable data with each other, as well as informing the public about what it is they do.  The work published wouldn't even necessarily have to have been performed while the grant was active - why not share old stuff too?  This keeps scientists happy, since they now have a citation-ready outlet for work they couldn't do anything with otherwise.

What about peer review?
Well, there isn't any - not before publication, anyway.  This is essentially an open-review format.  Once the experiment is up, the grant officer at the NIH or another administrator will monitor the feedback posted about it.  If there seem to be serious flaws in the design, the experiment is marked Insufficient and does not count towards meeting the grant's publication requirement unless it is modified or replaced.  A bit of human judgment is required here - not perfect!

What happens if you don't do it, or upload crap?
If the researcher doesn't upload their single-figure experiment, or if the experiment is of poor quality as judged by feedback from other scientists and the publishing agency, then that experiment is deemed Insufficient.  In that case, the researcher must submit something else or be penalized monetarily - for instance, by not receiving part or all of the next chunk of funding allocated to their grant.

Wait what, the grant isn't funded all at once?
Right.  Funding would be provided yearly (as it is for many current grants), contingent on the successful publication of a single-figure experiment every two months.

What if the layman version is garbage?
The same rule applies.  At the end of the day, quality judgments reside with the NIH.

What happens when this thing is applied to all grants and the lazy scientists complain?
Scientists might complain about the extra work, or about having work that doesn't lend itself well to one-off experiments (fMRI stuff, maybe).  First, I think there would of course be exceptions, and perhaps this requirement should never be applied to all grants.  Second, if a scientist didn't want to meet some or all of the reporting requirements, he or she could forfeit a portion of the money.  That money could then be used to fund the maintenance and administration of the NIH website.  I expect this would be infrequent, though, since academic institutions would pressure researchers to take as much money as possible in order to maximize what they can collect in overhead.

What about my scientific career?
Well, the grants would be competitive enough at first that just having one would be a feather in your cap.  In addition, the experiments themselves should be considered publication-quality and citation-ready once open review determines that there are no major problems with them.  The idea is to create an outlet for small reports that's still relatively high-impact.  At first, the high impact comes from the prestige of the PRO grants; later, it comes from the fact that everyone is using the same service.

There are additional benefits for the graduate students and postdocs who would, naturally, have to do the actual work.  Writing the layman explanations would train them to explain what they do to the public, and that skill could be a strong point on a resume if they ever want to go into a more teaching-based career.

Isn't the publishing industry going to be pissed?
This doesn't affect them too much.  The Story will still be there, and publishing houses will still maintain their control over longer-form publications.  Scientists will keep their best/hottest material for those pieces, but they'll now have an additional outlet for interesting one-off experiments and neat observations that otherwise would never get shared.  That said, I still heartily support the so-far-failed legislative proposals to make all publicly funded research available for public consumption.  But note that open access alone doesn't really enhance the public's understanding of science - without training, people will find the papers just as incomprehensible as we did.  This idea at least forces researchers to frame their work with an understandable structure.

 ________________________

Thoughts?
