
Science problems made Cracked. Will it jump the shark next?

Science made Cracked, and not in a good way.

Six Shocking Studies that Show that Science is Totally Broken.*

The headlines of the six:

#6. A Shocking Amount of Medical Research Is Complete Bullshit
#5. Many Scientists Still Don’t Understand Math
#4. … And They Don’t Understand Statistics, Either
#3. Scientists Have Nearly Unlimited Room to Manipulate Data
#2. The Science Community Still Won’t Listen to Women
#1. It’s All About the Money
Of course, anybody reading here knows about these problems, and has for a long time. But this just shows how urgent it is not to ignore the damned problems!
I’m saying three words first:
Tax payer funded.
And then one more:
Trust.

*(Prove. They wrote Prove. You cannot do that when you are not doing maths. They show, demonstrate, illustrate, raise the confidence, are consistent with, gah. Pass the smelling salts. OK, as you were. I’m sure some smartass will comment that it is just fine to say prove.)

A new round on Social Priming.

PoPS has a section in its new issue containing responses from the pro-priming people. Alas, it is behind a paywall, but at least some readers here will have access. It is an interesting read, although I don’t agree with some of it. (My position, somewhat vaguely, is that I’m sympathetic to the idea behind behavioral priming – that we are sensitive to our surroundings and respond to them in ways we are not really aware of – but I suspect that the conceptualization of it is problematic. Don’t ask me to come up with a better one.)

But, I also wanted to link in Daniel Lakens’ blog response to the special issue, which, of course, is open to anybody with access to the net. I thought it was a very nice response.

Reforming Academia

From Dynamic Ecology, thoughts about how to change funding schemes to ensure an academy focused on research, not prestige. I found the first answer quite interesting. But I have never heard Canada held up as a model before (poor Canadians).

From What’s the PONT comes an intriguing post about the scaling problem. It may not be possible to scale up things that work on a small scale. There is a limit to the economy of scale: at some point in the scaling up, something gets lost (perhaps it passes a kind of bifurcation or critical point). I think this is something to keep in mind when we try to educate more and more with less and less. As the unraveling of the MOOCs shows, it just won’t work. (And people who had looked at this before basically said “I told you so”. Not quite me, I must confess, until someone pointed out that long-distance education is an old gambit, and the problems don’t go away just because we have fancy new tools.) Even Sebastian Thrun has admitted it. A snarkier version comes from Rebecca Schuman in Slate.

Universities have been hoping to make money on patents from their researchers’ work. This is most definitely the hope at Lund, and I read about it in Paula Stephan’s book. But it is a poor bet. Most of it won’t pay off.

Samuel Arbesman says, first, to bring back the generalists (yay, I say, as I can’t make up my mind whether I’m interested in emotion, modeling, evolutionary psychology, methodology, behavioral economics, chaos theory, philosophy…), but also that innovation and research are no longer in the academy, but among the startups. Going Changizi, as I like to say.

Publishing and open access world links.

And, in this post, I link in things related to publishing and open access.

Randy Schekman won the Nobel Prize and dissed the glam mags (that is, Nature, Science and Cell). Here is his piece in The Conversation on how to break free from glam. Not everyone took kindly to what he said. Here is Opiniomics considering that he may be a hypocrite, given that he has published in the glams. But perhaps that was before they were truly glam. Hypocrite or no, I think it is something that needs to be discussed even more than it is. But I don’t think it is really the glams’ fault. Glams wouldn’t be glams if there wasn’t a market clamoring for them – like those deciding on grants and careers counting up glossy covers. Yes, science as Hollywood. Vote for the sexiest research project of the year! The Ronin Institute articulated this well.

Related: here is Stephen Curry on the problem with the glam magazines. It is a commentary on a debate that he links to (confession: I haven’t watched it – two hours!), but I think his commentary is worth it, sans watching.

Elsevier, the publisher that seems to be everybody’s favorite hate-target, started telling researchers and everybody else to take down the PDFs of their own (Elsevier-published) research. Which, well, they are legally allowed to do, as we regularly sign away our rights. But it has been a sort of tacit custom that you get to keep your PDFs on your home page – sort of like being allowed to have multiple copies of your records, I guess. I think it is time to consider better ways of publishing.

Here are some thoughts on that: first, Micah Allen’s call for self-publication instead of going via publishers. Then Shauna Gordon-McKeon’s three-part series Chasing Paper from the OSC blog; parts 2 and 3 are linked here. For full disclosure, I’m affiliated with the OSC blog.

The PeerJ blog has a nice interview with Dorothy Bishop where they discuss open access, and her experience with PeerJ.

A paper from PLOS ONE compared post-publication peer review, impact factor and number of citations. None is a really good measure of, well, impact, it seems. And here is something from Science critiquing the h-index.
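(For anyone who hasn’t run into it: the h-index is simply the largest h such that h of your papers have at least h citations each. Here is a toy computation in Python – the citation counts are made up, obviously.)

```python
# Toy h-index: the largest h such that h papers have at least h citations each.
def h_index(citations):
    cites = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(cites, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

print(h_index([42, 17, 9, 6, 5, 3, 1, 0]))  # -> 5: five papers with at least 5 citations
```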

More to come.

Linking in the stats

This fall, I squirreled away 400 e-mails to myself with links to interesting papers or blogs or commentary. Lots of them were things I wanted to stick on my blogs. Now I will try to slowly weed myself down to none again. It will likely result in multiple linking posts, so I declare this to be the first in the series, and it will be all about stats.

First up, Telliamed Revisited’s post on the 10 commandments of statistics. Post it prominently on the classroom walls.

This one I have linked to before, but, hey, let’s repeat the good stuff: the p-curve page. It includes the paper, the app, the user’s guide and supplementary materials. Use it on your favorite area of research.
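The logic is easy to see in a toy simulation (my own sketch, not the actual p-curve app): when there is a real effect, the significant p-values pile up near zero – a right-skewed p-curve – and when there is nothing there, they are spread roughly evenly between 0 and .05.

```python
# Toy p-curve logic: distribution of the *significant* p-values from simulated
# two-group experiments, with and without a true effect. Not the p-curve app.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def significant_pvalues(effect, n=30, studies=5000):
    ps = []
    for _ in range(studies):
        a = rng.normal(0.0, 1.0, n)
        b = rng.normal(effect, 1.0, n)
        p = stats.ttest_ind(a, b).pvalue
        if p < .05:
            ps.append(p)
    return np.array(ps)

bins = [0, .01, .02, .03, .04, .05]
for label, effect in [("true effect (d = 0.5)", 0.5), ("null effect", 0.0)]:
    ps = significant_pvalues(effect)
    counts, _ = np.histogram(ps, bins=bins)
    print(label, np.round(counts / counts.sum(), 2))  # right-skewed vs. roughly flat
```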

Speaking of p-curves, here is a paper (PDF) from Gelman and Loken on how multiple comparisons can be a problem even when all practices are non-questionable. (Now, I hope that link will work.)
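Their point is easy to convince yourself of with a small simulation (my own toy example, not theirs): there is no effect anywhere, every single test is done honestly, but the researcher gets to report whichever of a few defensible analyses happens to come out significant – and the false-positive rate climbs well above the nominal 5%.

```python
# Toy "forking paths" simulation: no true effects, honest tests, but four
# defensible analyses per dataset (two outcome measures, with and without
# restricting to a plausible moderator subgroup). Reporting any significant
# one inflates the false-positive rate well beyond 5%.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n, sims = 40, 5000
false_positives = 0

for _ in range(sims):
    group = rng.integers(0, 2, n)   # two conditions
    sex = rng.integers(0, 2, n)     # a "theoretically relevant" moderator
    candidate_ps = []
    for _dv in range(2):
        dv = rng.normal(size=n)     # outcome unrelated to anything
        candidate_ps.append(stats.ttest_ind(dv[group == 0], dv[group == 1]).pvalue)
        candidate_ps.append(stats.ttest_ind(dv[(group == 0) & (sex == 0)],
                                            dv[(group == 1) & (sex == 0)]).pvalue)
    if min(candidate_ps) < .05:
        false_positives += 1

print(false_positives / sims)  # noticeably above .05, despite no true effects
```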

A path to learning is to get exposed to What Not To Do! And the least painful way to do that is to observe others’ failures, or at least read about them somewhat in the abstract. Statistics Done Wrong is an excellent opportunity to do this. It is, um, amazing to realize how many of those misconceptions one has held…

NeoAcademic is a blog from an I/O perspective (industrial/organizational psychology, that is), and Richard Landers posted a series of commentaries on a paper comparing null hypothesis significance testing with effect sizes. I link in the last one (because that is the one I sent myself), but you can easily get to the other installments from his post.

I also think I’ve linked in Felix Schönbrodt’s post before, but it is also worth repeating: at what sample size do correlations stabilize?
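It is a fun question to get a feel for by simulation: draw an ever-growing sample from a population with a known correlation and watch the estimate wander around before it settles down. A minimal sketch (the population r of .3 and the sample sizes are my choices, not Schönbrodt’s):

```python
# Watch a sample correlation "stabilize": the population correlation is .3,
# but small-sample estimates can land almost anywhere.
import numpy as np

rng = np.random.default_rng(7)
rho = 0.3
sample = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=1000)

for n in (20, 50, 100, 250, 500, 1000):
    r = np.corrcoef(sample[:n, 0], sample[:n, 1])[0, 1]
    print(f"n = {n:4d}   r = {r: .2f}")
```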

The collected works of Tukey. In Google Books.

Well, I’m down to November. There is more to come, but I have to sort through it. Probably a second post of stats links.

2013 in review – WordPress thingy (feeling self-indulgent now)

The WordPress.com stats helper monkeys prepared a 2013 annual report for this blog.

Here’s an excerpt:

A New York City subway train holds 1,200 people. This blog was viewed about 6,200 times in 2013. If it were a NYC subway train, it would take about 5 trips to carry that many people.

Click here to see the complete report.

Thoughts about how to use HIBAR (Had I Been A Reviewer) in teaching.

In the spring, I’ll be teaching the advanced social psychology course again, with a handful of my colleagues. They are student-led seminar classes – your basic grad school seminar style – and they are a lot of fun. The students responsible focus on part of the chapter (we are using Taylor & Fiske, which is great, but incredibly dense) and bring in original literature in their presentation. The literature must be empirical work – no reviews – as we want them to engage more with how things are actually done.

In the past (due to circumstances beyond our control – read: the former dean of the social science faculty) we covered T & F in two weeks. Exhilarating and completely exhausting.

As we have since wrested control back, we now spread T & F over several weeks. So, I figure, it is time to get more serious about looking at the data. I want the students not only to present, but to pick the papers apart and ask: are the conclusions really reasonable, given the evidence? Because one of the problems, I think, is that you get so into the narrative, and so little into the actual calibration, that it is easy to believe in what are really fairy tales.

I think, as inspiration, I’m going to use Dan Simons’ HIBAR idea. Several of these critical looks at papers are collected on the HIBAR blog.

Two that have yet to make it onto that blog (but likely will) are this one from James Thompson on whether talking to children really affects their intellect (N = 29? Correlation? Vague controlling for IQ? Researchers, you have to get better at controlling for individual differences), and this one from Rolf Zwaan testing out the nifty p-hacking app.

I actually suggested to the master’s students’ group that they use Rolf’s 50-question post, and the original paper, for a journal club meeting, and evidently that ended up being quite successful. If students can do this for themselves, we should be able to incorporate it in our classes.

Calooo Calay, what a happy day!

Look at this beayuoootiful graph from the “multiple labs reproduction project”, from the reproducibility project. (OK, you have to click through to view it.)

Take 13 interesting results. See if we can reproduce them across multiple labs.

Ten most definitely did. One is borderline. Two did not.

Isn’t it great? So proud of fellow psychologists! (Note, I have absolutely nothing to do with this. I’m just totally BIRGing*. Has BIRGing been replicated btw?)

Ed Yong wrote it up.

Twice!

I gather Daniel Lakens has accepted the manuscript for publication. Yay for psychology!

As a brief reminder (prompted by my buddy Andrew) – replication is nice, but theory is also needed. (Another reason to link in both Andrew and Denny Borsboom on our OSC blog).

I think I should also put in a link here to Etienne Le Bel’s and Christopher Wilbur’s replication attempt of heavy secrets on steep hills. A non-replication this time. (Alas, behind a paywall.) The original journal did not adhere to Sanjay Srivastava’s proposed “Pottery Barn rule”. We will remain mum about some of the reasons.

I think this is also a good time to go visit Rolf Zwaan’s blog again. He wrote about Etienne’s replication attempt, prior to its publication, and I think it is illuminating.

Also, a good reminder of Greg Francis’ stance – the fact that someone else cannot replicate a piece of research should not reflect on the original researcher. We are in a messy field. Not everything will pan out. We are testing theories, not people.

*Basking In Reflected Glory, for those not initiated. Kinda like the moon.

Damn it feels good to be an Academic

The other day, I posted (on my other blog) a kind of Darwinian analysis of the scientist’s predicament – too many scientists, a struggle for survival ensues (and the aims of science may suffer).

Today, I had this wonderful piece tweeted in – I think the first to tweet it was Kate Clancy – Alexandre Afonso on how academia resembles a drug gang. An inspiration for him was a chapter in Freakonomics discussing the allure of being involved in drugs rather than, say, flipping burgers. I’ve read that too, and it was also among the background thoughts in my own rambling piece, though I think the comparison of art and science (as fields that will eventually be divided into the celebrated and the unpaid amateurs) was something I first got from my mentor Charles.

The gang analogy isn’t new. I tweeted in this piece by Thomas Scheff about a week ago. (In a slightly different format. I found it on my own blog, actually – my memory, it ain’t what it used to be. Or, possibly, now with the net, I can find out how it actually is.)

Must be something in the air – or perhaps abduction to the best explanation (excuse me – my reviewing is bleeding through) – but Curt Rice also tweeted in this piece suggesting academia is like a fraternity.

Just so tribal, like David Hull suggested.

But also, that divide between the tenured and the pretenders suggests Peter Turchin’s analysis of the overproduction of elites – the bimodal distribution of haves and wants showing up as the tenured and the adjuncts.

Perhaps, rather, the twilight…

*

*In my life prior to academia, I worked in a place that, well, looked like Office Space. Not quite as soul-sucking, though. And I managed to commute against traffic…

Data-peeking? If you do it right, it might be the right thing to do.

When I first read about the questionable practices that researchers engage in, the one that surprised me the most was data-peeking. Because, of course, I had done that, and my advisor knew about it, and there was no feedback about it being a no-no. No, we did not engage in the “topping up until below .05” practice that some seem to have done, like counting out so many pieces of caramel. It was more like looking after collecting 15 in each group, to see what things looked like. Or the time I had collected 30 in each group, and we decided that for a cognition and emotion experiment looking at fear and sadness, perhaps we needed to up the power a bit. So we collected 20 more per cell.

Not sure if this one went anywhere. So much of what I did in grad school ended up in some file drawer or other.

It seemed sensible to me. You wanted to know how things were going, so you could either abort or make necessary changes. Plus, as someone who decided to combine the Christmas practices of both the US and Sweden (meaning I could open presents on the morning of the 24th), I found it hard to resist seeing how things were going.

In fact, at one point my peeking cut short a version where my assistant had made a programming mistake. (Not his fault; I had been unclear.)

I understand the reasoning perfectly. Now. With all the talk about questionable practices.

And here comes Daniël Lakens with a nice little paper (and blog post) about how I shouldn’t feel so naughty. There IS a way to data-peek and still be good. In fact, these are procedures worked out in medical research, where it is perhaps a good idea to know early on whether you are killing your participants, either by feeding them bad drugs or by not feeding them the healing stuff.

I really enjoyed it.

It stays, somewhat gently, on the side of NHST, but hints at the Bayesian. Perhaps a nice first step.

(Now, if only I had time to sit down and learn to do Bayesian analysis…)
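Should I ever try it, the bare-bones recipe looks roughly like this – a minimal sketch of a two-look design with a Pocock-style corrected alpha, my own toy numbers rather than Lakens’ worked examples:

```python
# Bare-bones two-look sequential design: peek halfway and again at the end,
# testing each look against a corrected alpha (~.0294, the approximate Pocock
# boundary for two looks) so the overall Type I error stays near 5%.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2013)
ALPHA_PER_LOOK = 0.0294

def sequential_study(effect, n_per_look=25):
    a = rng.normal(0.0, 1.0, 2 * n_per_look)
    b = rng.normal(effect, 1.0, 2 * n_per_look)
    for n in (n_per_look, 2 * n_per_look):      # interim look, then final look
        if stats.ttest_ind(a[:n], b[:n]).pvalue < ALPHA_PER_LOOK:
            return True, n                      # stop early, effect declared
    return False, 2 * n_per_look

# Under the null the hit rate stays near .05; with a real effect many studies
# can stop at the halfway point, saving participants.
for label, effect in [("null", 0.0), ("d = 0.6", 0.6)]:
    results = [sequential_study(effect) for _ in range(5000)]
    hits = np.mean([r[0] for r in results])
    mean_n = np.mean([r[1] for r in results])
    print(f"{label:8s} significant: {hits:.3f}   mean n per group: {mean_n:.0f}")
```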
