Other collateral damage of iffy methods?
Sanjay Srivastava (who by now should be well known to my handful of readers, as I link to him quite a bit) comments on the recent Psychological Science proposal. He likes points 1 and 3, and has issues with 2.
But what caught my eye (well, actually Dorothy Bishop’s eye in my Twitter feed) was this comment. Yes, as she said, kinda dark…
I must confess that when I read the list of questionable practices in False-Positive Psychology, I was kind of shocked, because, yes, some of those are the kind of practices you do. I have peeked at data (it kind of feels like not wanting to wait to open Christmas presents). I haven’t done the “add one more” thing, though, because it is so obviously iffy. And I have loads of unpublished data, because I can’t make sense of it (overall it did not lead to anything publishable; maybe it’s time I learned to do meta-analyses).
I’m kind of clueless when it comes to competitive pursuits (I hate competing. I niche), so I have kept no track of things like impact factors, or of what it takes to please the publishers. Then again, I’m not very successful either.
But I talked with a couple of really good master’s students not too long ago. They are planning their master’s thesis research and wanted to replicate and extend something, and they were kind of wrestling with how to extend it, because they have been taught so firmly that they have to do something new.
Well, that is what got us into this mess in the first place. Among other things. Doing something that is truly new and interesting most likely takes a lot of training. There is a lot you need to know before you start seeing the interesting unexpected things (lots of the low-hanging fruit has already been picked). It also involves a great deal of risk (most new things don’t pan out). And right now I think psychology really needs to deal with robustness. If a study done in Arizona could be replicated in Sweden, Denmark, and New Zealand, that would be quite important for our understanding.
So why don’t we get rewarded for this? Why is the teaching of budding researchers focused (in part) on how to game the system? Not through outright fraud, but…
And this leads me into musings about fraud, iffy practices, and competition. Yes, I have talked about people like Stapel, and Hauser, and Schön, and others. This summer and early autumn also saw the fall of a star blogger (Jonah Lehrer), which was kind of odd to follow in the twitterverse and the blogosphere. Or, for that matter, Lance Armstrong. Competition has its points, but it also brings out sly and deceptive practices when it is fierce enough. Talent is a dime a dozen, but the slots for fame and fortune are few. So those who reach the top slots may not be the best, but the ones most willing to bend rules and deceive. How do you change the incentives? I have no idea.