Data-peeking? If you do it right, it might be the right thing to do.
When I first read about the questionable practices that researchers engaged in, the one that surprised me most was data-peeking. Because, of course, I had done that, and my advisor knew about it, and there had been no feedback that it was a no-no. No, we did not engage in the “topping up until below .05” practice that some seem to have done, like counting out so many pieces of caramel. It was more a matter of looking after we had collected 15 participants in each group, to see what things looked like. Or the time I had collected 30 in each group, and we decided that for a cognition and emotion experiment looking at fear and sadness, perhaps we needed to up the power a bit. So we collected 20 more participants per cell.
Not sure if this one went anywhere. So much of what I did at grad school ended up in some file drawer or other.
It seemed sensible to me. You wanted to know how things were going, so you could either abort or make the necessary changes. Plus, as someone who decided to combine the Christmas practices of both the US and Sweden (meaning I could open presents on the morning of the 24th), I found it hard to resist taking a peek.
In fact, at one point my peeking caught, early on, a version where my assistant had made a programming mistake. (Not his fault; I had been unclear.)
I perfectly understand the reason. Now. With all the talk about questionable practices.
And, here comes Daniël Lakens with a nice little paper (and blog post) about how I shouldn’t feel so naughty. There IS a way to data-peek, and still be good. In fact, these are procedures worked out in medical research where perhaps it is a good idea to know early on whether you are killing your participants, either by feeding them bad drugs, or by not feeding them the healing stuff.
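To make the idea concrete, here is a minimal sketch in Python of the kind of corrected interim look this involves. To be clear, the constants, function names, and simple z-test below are my own illustrative assumptions, not Lakens’s code: with two planned analyses, a Pocock-style correction tests each look at roughly α = .0294 instead of .05, so the overall Type I error rate stays near .05, and you may stop early if an interim result clears that stricter bar.

```python
import math
import random

# With two planned looks, a Pocock-style boundary tests each look at
# roughly alpha = .0294, keeping the overall Type I error near .05.
# (Illustrative constant; real designs use software to compute boundaries.)
POCOCK_ALPHA_2_LOOKS = 0.0294

def z_test_p(group_a, group_b):
    """Two-sided p-value from a simple z-test on two group means.
    (A t-test would be more appropriate at small n; this is a sketch.)"""
    na, nb = len(group_a), len(group_b)
    ma, mb = sum(group_a) / na, sum(group_b) / nb
    va = sum((x - ma) ** 2 for x in group_a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in group_b) / (nb - 1)
    z = (ma - mb) / math.sqrt(va / na + vb / nb)
    # Two-sided p via the standard normal CDF: Phi(x) = 0.5 * (1 + erf(x / sqrt(2)))
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def sequential_experiment(effect=0.5, n_per_look=30, seed=1):
    """Peek once halfway through; stop early only if the interim p-value
    clears the corrected alpha, otherwise collect the rest and test again."""
    rng = random.Random(seed)
    a, b = [], []
    for look in (1, 2):
        a += [rng.gauss(0.0, 1.0) for _ in range(n_per_look)]
        b += [rng.gauss(effect, 1.0) for _ in range(n_per_look)]
        p = z_test_p(a, b)
        if p < POCOCK_ALPHA_2_LOOKS:
            return look, p, "stopped early: significant at corrected alpha"
    return 2, p, "stopped at planned maximum n"

print(sequential_experiment())
```

The point of the stricter per-look threshold is exactly what bothered the field about peeking: every extra look at the data is an extra chance for a false positive, so the looks have to be planned and paid for in advance.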
I really enjoyed it.
It stays, somewhat gently, on the NHST side of things, but hints at the Bayesian alternative. Perhaps a nice first step.
(Now, if only I had time to sit down and learn to do Bayesian analysis…)