
Musings on what should be published

May 24, 2014

I just reviewed a paper that wasn't stupid and asked an important question. The trouble was that it was thin, and a null result. It used 80 participants across 4 cells, with no repeated measures. The authors weakly replicated one finding, but found no effect for what was most likely the question they were really after.

I'm getting very sensitive to the file-drawer problem. If we have sensible data, should it languish? Yet there is also a problem with cluttering up the journals with short, underpowered studies.

I left it up to the editor (who is my colleague) to reject it.

What I would have wanted to see was, first, better power, and then follow-up work on the particular question.
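
As a rough illustration of why roughly 20 participants per cell is thin (a sketch only; the effect size and the two-group contrast here are my assumptions, not the reviewed paper's design or analysis), a quick power calculation for a between-subjects comparison:

```python
# Rough power sketch for a between-subjects two-group comparison.
# Assumptions (not from the reviewed paper): alpha = .05, two-sided test,
# and a "medium" standardized effect of d = 0.5.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Power with 20 participants per group (roughly 80 participants in 4 cells,
# if the key contrast ends up being between two of the cells).
power_20 = analysis.power(effect_size=0.5, nobs1=20, alpha=0.05, ratio=1.0)
print(f"Power with n = 20 per group, d = 0.5: {power_20:.2f}")  # about 0.33

# Sample size per group needed to reach 80% power for the same effect.
n_needed = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8, ratio=1.0)
print(f"n per group for 80% power at d = 0.5: {n_needed:.0f}")  # about 64
```

The point is only the order of magnitude: with 20 per cell, even a medium effect would be detected only about a third of the time, so a null result tells us very little.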

But this makes me think about publishing policy. I really understand the desire to publish things that "work" (except that the indications of what works are so weak in psychology). It is like wanting to unveil the final sculpture, the polished version of the violin concerto, the bug-free version of the software, not all the sketches, wrong steps, and other discards along the way. You want to publish a real Finding, even if (as in all research) it is tentative.

But the sketches, wrong turns, pilots, and honing carry information of their own. At least sometimes it is really important to know what doesn't work. And, as was evident from the special issue on replication, there is work out there that people informally know does not work, but that is not in the public record because the failures to replicate have not been published.

We had a brief discussion about this at last year's "solid science" meeting. Joe Simmons said that there really are loads of pilot studies of ideas that turned out to be crap, which don't need to clutter up cyberspace and our ability to navigate information, whereas Jelte Wicherts thought it is really important to have a complete data record.

I'm very ambivalent. There is so much data collected (I'm thinking of a lot of final theses) where the research is the equivalent of an arts-and-crafts project: it shows that you can do the work, but it doesn't really add to the research record.

Or what about all those pilots you run to tweak your instruments and methods? Meehl, in his theory-of-science videos, suggested collecting that information in short communications, just for the record.

I'm thinking of two file drawers I have. One of them really demonstrates that the phenomenon we were testing doesn't exist; it is a boundary condition. As such, it might have been important to have it out there (5 studies, each with 90 people across 3 conditions, repeated measures). I have another set of 9 studies on threat and attention that are more of the "tweak the paradigm" type. Something happened, but it was terribly messy to interpret, so we kept looking for an angle where the results could be clearer and more interpretable. How do you make that distinction?

I have some idea that it would be nice if one could spend that time with the sketches: once something works, one replicates, and one publishes only when one feels fairly certain that there is something there (possibly including links to the sketches). Which, of course, is not how it is done right now, because of the incentive structure.

 

 

