The following is a guest post by Daniel Mietchen.
Researchers spend a lot of their time thinking about how to test assumptions or hypotheses and how to separate different effects that jointly influence an observation or measurement. In their famous 1887 experiment, for instance, Michelson and Morley took great care to measure the speed of light both along the direction of the Earth's motion through space and perpendicular to it. Within a small observational error, the two speeds were identical, which provided the first crucial hints that the speed of light might actually be a constant in a given medium, and that there may be no ether involved in transmitting light through space.
Surprisingly, similar rigor is not normally applied to the practice of research itself. We do not know which research funding and evaluation schemes are best suited to making specific kinds of research most efficient. We keep using the Journal Impact Factor to evaluate articles, researchers, institutions and all sorts of other non-journal things, despite knowing that it is ill-suited for those purposes. And we do not know whether the status quo of keeping the research process out of public view (and publishing some rough summary at the end) is actually beneficial to the research system as a whole.
We want to tackle the latter issue by putting research practice to a test in which we compare the efficiency of traditional research with that of open science. While there is some anecdotal evidence, this has never been investigated systematically before. That’s why we are organizing a session at OKFest (Thursday, July 17 • 14:00 – 15:00) to develop a framework for:
- showing that open science can be more efficient than traditional approaches under some conditions
- exploring the space of meaningful conditions in a systematic fashion
- understanding what outcomes can be properly compared in a practical experiment
- actually putting open and traditional research to an efficiency test
- identifying funders that may be interested in supporting such an efficiency test
We hope to see you there in person or via the session’s Etherpad.
The idea is to turn this framework into a research proposal that stands a realistic chance of getting funded in some way. The outcomes of the session will then be fed into a whole-day open grant writing session at Open Science, Open Issues on August 19, at the end of which we hope to have a draft proposal that covers all major aspects of the topic and can easily be adapted for submission to suitable funding schemes around the globe.
Even if these proposals are rejected, the submissions will help raise awareness of the issue among funders and reviewers. And if such a proposal actually gets funded, then we can finally put research itself to the test and find out whether openness increases research efficiency or not.