In the spring, I’ll be teaching the advanced social psychology course again, with a handful of my colleagues. They are student-led seminar classes – your basic grad school seminar style – and they are a lot of fun. The students responsible focus on part of the chapter (we are using Taylor & Fiske, which is great, but incredibly dense), and bring in original literature in their presentation. The literature must be empirical work – no reviews – as we want them to engage more with how things are actually done.
In the past (due to circumstances beyond our control – read: the former dean of the social science faculty) we covered T & F in two weeks. Exhilarating and completely exhausting.
Having since wrested control back, we now spread T & F over several weeks. So, I figure, it is time to get more serious about looking at the data. I want the students not only to present, but to pick apart and ask: are the conclusions really reasonable, given the evidence? Because one of the problems, I think, is that you get so into the narrative, and so little into the actual calibration, that it is easy to believe in what are really fairy tales.
Two that have yet to make it onto that blog (but likely will) are this one from James Thompson on whether talking to children really affects their intellect. (N = 29? Correlation? Vague controlling for IQ? Researchers, you have to get better at controlling for individual differences). And this one from Rolf Zwaan testing out the nifty p-hacking app.
I actually suggested to the master’s students’ group that they use Rolf’s 50-question post, and the original paper, for a journal club meeting, and evidently that ended up being quite successful. If students can do this for themselves, we should be able to incorporate it into our classes.
Look at this Beayuoootifol graph from the “multiple labs reproduction projects”, from the reproducibility project. (OK, you have to click through to view it)
Take 13 interesting results. See if we can reproduce them across multiple labs.
10 did most definitely. One is borderline. Two did not.
Isn’t it great? So proud of fellow psychologists! (Note, I have absolutely nothing to do with this. I’m just totally BIRGing*. Has BIRGing been replicated btw?)
I gather Daniel Lakens has accepted the manuscript for publication. Yay for psychology!
I think I should also put in a link here for Etienne Le Bel and Christopher Wilbur’s replication attempt of heavy secrets on steep hills. A non-replication this time. (Alas, behind a paywall). The original journal did not adhere to Sanjay Srivastava’s proposed “pottery barn rule”. We will remain mum about some of the reasons.
I think this is also a good time to go visit Rolf Zwaan’s blog again. He wrote about Etienne’s replication attempt, prior to its publication, and I think it is illuminating.
Also, a good reminder of Greg Francis’ stance – the fact that someone else cannot replicate a piece of research should not reflect on the original researcher. We are in a messy field. Not everything will pan out. We are testing theories, not people.
*Basking In Reflected Glory, for those not initiated. Kinda like the moon.
The other day, I posted (on my other blog) a kind of Darwinian analysis of the scientists’ predicament – too many scientists, and a struggle for survival ensues (the aim of science may suffer).
Today, I had this wonderful piece tweeted in – I think the first one was Kate Clancy – Alexandre Afonso on how academia resembles a drug gang. An inspiration for him was a chapter in Freakonomics discussing the allure of dealing drugs rather than, say, flipping burgers. I’ve read that too, and it was also among the background thoughts in my own rambling piece, though I think the comparison of art and science (as fields that will eventually be divided into the celebrated and the unpaid amateurs) was something I first got from my mentor Charles.
The gang analogy isn’t new. I tweeted in this piece by Thomas Scheff about a week ago. (In a slightly different format. I found it on my own blog, actually – my memory, it ain’t what it used to be. Or, possibly, now with the net I can find out how it actually is.)
Must be something in the air – or perhaps abduction to the best explanation (excuse me – my reviewing is bleeding through) – but Curt Rice also tweeted in this piece suggesting academia is like a fraternity.
Just so tribal, like David Hull suggested.
But that divide between the tenured and the pretenders also suggests Peter Turchin’s analysis of the overproduction of elites – a bimodal distribution of haves and have-nots, visible in the tenured and the adjuncts.
Perhaps, rather, the twilight…
*In my life prior to academia, I worked in a place that, well, looked like Office Space. Not quite as soul-sucking, though. And, I managed to commute against traffic….
When I first read about the questionable practices that researchers engaged in, the one that surprised me the most was data-peeking. Because, of course, I had done that, and my advisor knew about it, and there was no feedback about it being a no-no. No, we did not engage in the “topping up until below .05” practice that some seem to have done, like counting up so many pieces of caramel. It was more like looking after collecting 15 in each group, to see what things looked like. Or the time I had collected 30 in each group, and we decided that for a cognition and emotion experiment looking at fear and sadness, perhaps we needed to up the power a bit. So we collected 20 more participants per cell.
Not sure if this one went anywhere. So much of what I did at grad school ended up in some file drawer or other.
It seemed sensible to me. You wanted to know how things were going, so you could either abort or make necessary changes. Plus, as someone who decided to combine the Christmas practices of both the US and Sweden (meaning I could open presents on the morning of the 24th), I found it hard to resist seeing how things were going.
In fact, at one time my peeking practice stopped a version short where my assistant had made a programming mistake. (Not his fault; I had been unclear.)
I perfectly understand the reason. Now. With all the talk about questionable practices.
And, here comes Daniël Lakens with a nice little paper (and blog post) about how I shouldn’t feel so naughty. There IS a way to data-peek, and still be good. In fact, these are procedures worked out in medical research where perhaps it is a good idea to know early on whether you are killing your participants, either by feeding them bad drugs, or by not feeding them the healing stuff.
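If you want to see the problem (and one fix) in numbers, here is a little simulation I find instructive. To be clear, this is my own illustration, not Lakens’s actual procedure: I’m assuming a simple z-test with known variance, two looks at the data (at 15 and then 30 per group, like my old habit), and the classic Pocock bound for two looks, a nominal alpha of about .0294 per look.

```python
import math
import random

def z_test_p(xs, ys):
    """Two-sided p-value for a difference in means, assuming unit variance."""
    n = len(xs)
    z = (sum(xs) / n - sum(ys) / n) / math.sqrt(2.0 / n)
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))

def simulate(n_sims=4000, looks=(15, 30), naive_alpha=0.05,
             pocock_alpha=0.0294, seed=1):
    """Simulate null experiments with an interim peek at the data.

    Returns the false-positive rate when testing naively at .05 at
    each look, and when using the Pocock-adjusted threshold instead.
    """
    rng = random.Random(seed)
    naive = pocock = 0
    for _ in range(n_sims):
        # No true effect: both groups drawn from the same distribution.
        xs = [rng.gauss(0, 1) for _ in range(looks[-1])]
        ys = [rng.gauss(0, 1) for _ in range(looks[-1])]
        # Peek after the first `look`, then test again at the end.
        ps = [z_test_p(xs[:n], ys[:n]) for n in looks]
        if any(p < naive_alpha for p in ps):
            naive += 1
        if any(p < pocock_alpha for p in ps):
            pocock += 1
    return naive / n_sims, pocock / n_sims
```

Run it and the naive peeker rejects the null around 8% of the time instead of the advertised 5%, while the adjusted threshold stays close to 5%. Which is exactly the point: peeking is fine, if you plan for it.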
I really enjoyed it.
It stays, somewhat gently, on the side of NHST, but hints at the Bayesian. Perhaps a nice first step.
(Now, if I only had time to sit down and learn doing Bayesian analysis…)
This is the second fall that I have taught theory of science to the master’s students. And, as this is more an avocation than the center of what I’m supposed to research (but I am always interested in the periphery), it takes me quite a bit of work to prepare for the lectures, and I have loads more to learn. Which, well, I like. I like theory of science. I like thinking about how we know things, and whether we really can know these things – boring into the mysteries.
So, for this year, I first went through all the Meehl lectures, which was wonderful.
One of my buddies (who is taking the class) started likening me bringing up Meehl to “and by the way, Carthage should be destroyed”.
I also, actually, actively listened through Kuhn’s “The Structure of Scientific Revolutions”. Well, I got a horrible cold, so I figured that was good use of my prep time when I could not keep my mind straight. It actually helps having gone through the actual text, rather than just knowing it through hearsay. And, he is not that difficult. (Possibly because it is 50 years later, and I’ve read both sociology of science and work generated through the science wars – guess which side I took.) He just goes on and on and on and on…. Gosh, Tom, could you possibly edit that down a bit?
And, I wanted to get to Lakatos. Daniel Lakens said good things about him. It was interesting hearing Meehl tell personal anecdotes about him. And, so Keith Laws started tweeting in quotes, and eventually, tweeted in these recordings of his lectures.
My workload isn’t quite at the point where I can sit down and listen to them leisurely, but I will, and then I can come here and find them again.
Meanwhile, I hope you are enjoying them too.
My blogs have been lonely. I’ve been pulled away from updating by my paying job. I’m teaching both theory of science and evolutionary science this fall, both courses very new, and both outside my training (but within my interests). I just spent a lovely hour instructing people about the logic behind, and the construction of, the IAT, and RT measures in general, basically off the cuff, and wondered why I insist on teaching in areas where I don’t have lots of expertise. But, of course, it is fun (although exhausting). I got into this mess because I’m curious.
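(For the curious: the core of conventional IAT scoring – the D score of Greenwald and colleagues, roughly – is just the latency difference between the incompatible and compatible blocks, scaled by the pooled standard deviation of both blocks. Here is a bare-bones sketch of that idea, leaving out most of the real preprocessing, such as error penalties and participant exclusions.)

```python
import statistics

def iat_d_score(compatible_rts, incompatible_rts):
    """Sketch of an IAT-style D score.

    Slower responses in the incompatible block (in ms) push the
    score above zero, suggesting a stronger implicit association
    with the compatible pairing.
    """
    # Drop implausibly slow trials (> 10 s), a common preprocessing step.
    comp = [rt for rt in compatible_rts if rt < 10_000]
    incomp = [rt for rt in incompatible_rts if rt < 10_000]
    # Note: pooled SD across both blocks, not a within-block SD.
    pooled_sd = statistics.stdev(comp + incomp)
    return (statistics.mean(incomp) - statistics.mean(comp)) / pooled_sd
```

The scaling by the pooled SD is the clever bit: it turns a raw millisecond difference into an individual effect size, which makes scores comparable across people who differ in overall response speed.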
Also, I got invited to be a core member of a new blog connected to the Open Science Project. How could I say no?
Of course, I promptly got buried under prep work (and, well, actually an original article), and have only been able to contribute marginally to the planning. For example, I had no time to write an original post for the launch (it is still languishing in my “blog” folder).
But, launch we did, just the other day. Here is the inaugural post by Denny Borsboom.
And, I managed to get in a quickie – prompted by the Science sting on OA journals, and a long lab session with my very independent master’s students allowing me some time to write.
I’m very excited about this. It is a good crowd, and another venue for getting the word out about Open Science and Robust Science.
I’ll still blog here and at my sister blog. Not everything I write about is a good fit for OSC, especially when I get into reviews or musings that have more to do with my own research interests. Now that my duties are, I hope, becoming less pressing and time-consuming, I have a ton of interesting links that I want to share.
But, please come and visit our new blog. I think exciting things will come out of it.
The other day, Emily Nagoski linked to her latest blog post on Twitter, in which she asks: what if “the latest science” isn’t what you need to read about?
Hers is about sex (she calls herself a sex nerd. Follow her to satiate your scientific curiosity about sex). She brings up sex manuals from the ’20s and the Hite Report (which is contemporary with my teens), and how much of it reads like it could have been written now. How do we forget?
My forays into fixing science also made me happily cross paths with Fred Hasselman, who has a feature on his blog called “respect your elders”. I think most of us involved in fixing science know full well that there have been Cassandras (usually male) talking about the problems with psychological research, but he brings it home in full force by linking to and citing them.
He was the person who linked in the wonderful Paul Meehl lecture series (which I’m now listening through for the second time). Done in 1989. The same year I went back to school. In another post he quotes a number of questions posed by Tukey that seem very contemporary. From 1954. My parents weren’t even married then.
What is this? Why can’t we keep track of our past?
Other examples: the big article about emotion in the Boston Globe, focusing on Lisa Feldman-Barrett. Paradigm shift? Everything we know has been wrong? (Nice commentary here.) Now, I enjoy LFB’s research (except when she attacks the universality strawman). But there is no paradigm shift. A dimensional – not categorical – view (which says nothing about universality) has been around since the ’40s, and is quite contemporary. Some proponents: Schlosberg (’40s), Osgood (late ’50s), Tellegen (’60s, ongoing), Lang (’70s, ongoing). And, of course, Russell, who probably should be considered one of the intellectual fathers of LFB, if we go by the Hullian ideas of scientific inheritance. Even if I’m more categorical, my reading of the literature (much of it from the ’80s, ’90s, and early 2000s) suggests that both conceptualizations are important and neither wins. It all depends on what your questions are. I think, rather, we are now poised to ask questions we could not ask earlier (perhaps). But those questions have hung around a long time.
Another area that truly alarms me in its blindness to history is clinical. For the moment it seems to be confined to the UK, and it is much better documented and argued by Keith Laws and HuwTube. But it reminds me of the arguments I was reading in the late ’70s, when I first became interested in psychology, and the Left fell in love with Laing and with letting the mad out into the street, and the Right concurred – for just about diametrically opposite reasons (in my strawman rendition).

Or the damned horrors of the mid-to-late eighties, when everybody had a repressed memory of past childhood sexual abuse to recover, and people dug up the McMartin preschool to find remains of satanic sexual practices that took place only in a deranged mind, and then in the minds of children who either ended up convinced or agreed just to get the badgering to stop. Of course, it propelled Elizabeth Loftus and much interesting research on how memory works, but there were some real victims there at the hands of earnest but irresponsible therapists.

Just within the past few days, Sweden’s very own manufactured serial killer has been exonerated, and an inquest will be started into what went wrong. (And, as has been pointed out, the perpetrators of eight murders are still out there.) Of course, one could turn around and blame psychiatry, but I was hanging with the psychologists, and it was there too. (Incidentally, Sweden was not at the forefront of this craze. When I first heard about the Thomas Quick affair, I had come far enough in my studies, and read enough about memory construction, that I immediately disbelieved it. Rather sad being proven right. A quirky coincidence I just found out: he was born in the village where my grandparents lived until I was 9. Small world.)
Why don’t we know our history? Even as shallow a history as mine – and I’m now 54, so middle-aged, but not exactly ancient.
The new is interesting, yes. But we do need to anchor ourselves in what has come before! Perhaps we need to move ahead much more slowly. Much, much, much more slowly. Avoid our love of the new, as with fashion (at my age, you’ve seen fashions come back several times, and you stop caring as much anyway). I’m struck by a passage where Meehl talks about Skinner (whom he personally knew and had argued with) and about the knowledge not being ready! And, later, how Meehl says there are plenty of interesting questions to ask that we just don’t know enough right now to actually ask.
I understand the lure of the New New Thing. To go where no one has gone before. It is also encoded in how we do things – the request for the novel. Something added. Something new! But when the new becomes like fashion, not like extending frontiers, it becomes stale and outmoded, just like fashion. And we make the same mistakes over and over again.
When the Chris Chambers/Marcus Munafo manifesto was published in the Guardian, a Twitter debate broke out about the pros and cons of pre-registration. Not everybody is enamored with the idea (as I have linked in on these pages before), and one person distinctly against was Sophie Scott. She promised to put her thoughts down. Here they are, in Times Higher Education. Along with the article, she linked in this series of commentaries from people who have reservations about pre-registration, and their reasons for being either ambivalent or against.
Right now (July 25, 2013) there is a Twitter debate going on, and Pete Etchells has more or less promised to collate it. I’ll link that in when it happens. Debates are always good. It is so easy to go “what could possibly go wrong” when one is enthusiastic.*
One of the worries is that this is yet more hoops to jump through, and puts even more power in the hands of journals and peer reviewers (and the possibility of blocking science that goes against… well. Some people clearly will stop at nothing to suppress a view they deem wrong, so I can kind of see it).
There is also a worry that non-preregistered and exploratory work will be seen as “lesser” than the pre-registered work, and a worry that it will water down the research even further.
Lots of interesting points being brought up.
I don’t think registered reports are a panacea for fixing what ails science. But, hey, let’s try something – or let’s try a lot of different things. Rolf Zwaan and Micah Allen had an exchange where one of the points was to let it be a trial run, with business as usual (or other alternative models) as comparisons. See how things work out. We are researchers, after all. Kind of… exploratory. Not all journals are implementing it right away, and I don’t think anyone is touting it as the one fix.
I’m sure there will be more blogging about this, as it evolves, but here is one from Daniel Lakens.
*On edit – Pete is quick! Here is the storify!
Fred issued a bit of a challenge for people to go listen (he has it under his “respect your elders” series. After all, the problems psychology has are not new). I like multiple motivators, and went listening as I’m gearing up to teach philosophy of science/theory of science again this fall. It is also relevant for my interest in fixing science. And, I like this stuff anyway. Multiple little nudges, and I’m pushed into this particular strange attractor.
The lectures are about 90 minutes long. I’m on the fourth today (the 8th) after having started yesterday. He is that engaging! (I never understood this no-lecture thing. I like lectures. It’s like being told a story!)
Lectures 1 and 2 are very much history, but fascinating. He intertwines logical positivism – its rise and fall – with anecdotes about the people involved, as he was personal friends with many of them, and is also well aware (as he says) that it is just engaging and informative to also hear about who these people were. He was such a good psychologist.
But I wanted to mention a bit about lecture 3, where he talks about the context of discovery and the context of justification. The context of discovery covers things like the dream of the snake biting its tail that inspired the discovery of the structure of the benzene ring, or the forgotten petri dish with the penicillin mould. The context of justification is then the scientific explanation (but, as he points out very strongly, these overlap!).
But he also talks about another piece of context/distortion (you should all go and listen), and that is the fee that universities take out of grants, which in turn pushes researchers to pursue topics that don’t necessarily interest them, but that will get them grants. It predates this analysis by Paula Stephan (I link in Stephen Hsu’s blog – he has some quotes, and links to the Nature commentary), where she talks about universities turning into malls, where researchers rent space and the university gains reputation to attract students. He talks about the rise of publish-or-perish, and the corrosive effect that has on science.
He also has an interesting commentary on “best practices” information vs data (as he’s been a practicing clinician) which I think all clinicians should consider. (Even if I’m not a clinician, I get involved in teaching ours, and this is something I hear about a lot).
In 1989. At a time when I had not yet completely formulated my intention to get a PhD, when I had just slowly started getting math books to refresh my high school math so I could do dynamics.
The issues are old. As a lot of us know. Now, how to break the inertia, nudge the research into a more beneficial strange attractor….
I enjoyed this conversation between Jon Haidt and Phil Tetlock. (Disclosure: I read the transcript rather than listening. They say it is edited, but I figure that is so it is comprehensible. I’m sure you have experienced true transcripts and how difficult they are.)
One thing that stuck out was the following quote.
Even more important than that, because I think that we’re so limited in our ability to behave ethically in the face of situational pressures, I want to teach our students how to do ethical systems design, how to take all the flaws and weirdnesses of human nature and work with them to design organizations and startup companies where people are always concerned about their reputation. People are concerned about reputation even more than money in most cases. How can we set things up so that people will, in a sense, guard their reputation by doing the right thing? That’s the most important single principle
It struck me that this is something to aim for in science too. Push the system (as it works now) towards something more robust against our human frailties. I haven’t really thought through how that would work, or what needs to be tweaked.
It is clear that one cannot rely on whistleblowing, because humans know (probably instinctively) that whistleblowers will be treated extraordinarily badly – treated as traitors. (I did read an interview with someone doing research on whistleblowers not too long ago – I don’t recall now where it was or who it was by, but it was in connection with the Snowden affair. And I’ve also linked in what happened to Robert Trivers when he was sure research results were faked.)