
GDPR is coming – Are we prepared?

The other day, my chair roped me into a meeting with the director of the new super nifty, super secure, super large storage and processing system for all kinds of data that we want to keep and play with, but need to keep secure. Finally!

A new law, the General Data Protection Regulation (GDPR), is coming into effect in the EU, with strict rules about collecting, storing, and sharing data about individuals. It is stricter than the current Swedish law, which guides the Ethics committee I sit on. This will matter for those of us who advocate Open Science/Open Data. And I don’t see a lot of discussion about it.

The law, as I understand it, is mainly there to restrict what the international behemoths of data-gathering (e.g. google, facebook) get to do with all the metrics they collect from our searches, our gps-tracked wanderings and our participation in crappy fun facebook tests of which Orc we are (I’d like to be Snaga); perhaps to force them to remove that hatchet job on a character that now turns up as the first google search. (Or, possibly, to make sure that the fake plastic-eating-fish story is not all over the first search page, with the information that it was retracted for fraud ending up many scrolls down.)

But this can very much impact how we do research, especially the type of research where we collect potentially sensitive information (illness, politics, religion, sex, crime) and possibly identifying information – that is, information that could be triangulated back to an individual. This encompasses a lot of social, clinical and medical sciences, and may very well impact our ability to share data with other researchers, both inside and outside the EU, unless we start planning now for how to handle it.

We want open data. We want participant protection. We need to stay within the law as it changes.

Bully for you, Chilly for me: Scientific fame

...not that kind of psychologist

Perspectives on Psychological Science published an invited symposium on eminence in psychology, which starts with Robert Sternberg’s introductory article, “Am I famous yet? Judging Scholarly Merit in Psychological Science: An Introduction”*

As Bobbie Spellman pointed out on facebook – Only One Woman. Guess what topic?

Sure, judging scholarly merit is an interesting question (Meehl discussed it in his recorded last lecture series – along with its problems), and inquiring into why some individuals are considered eminent in a field, and others not, is certainly a legitimate area of research both in psychology and sociology (not to speak of history).

But the question – and the answers – seem ill-posed. Science is about ideas. It is about advancing knowledge. It is created by people, but most likely not by individuals, and they seem to be looking for a way of discovering the features of individuals that can predict…


Just a few links on open data

I’m going to talk to one of the librarians for Lund’s own open data planning. Just for that, I’m collecting a few links about open data that I came across in the last few days. Figure I could just as well add them here, because I’m likely to want to get back to them, and perhaps others would too.

 

APS finding a home for your open science.

On de-identification

Institute for social sciences, conference on reproducibility and transparency

 

Alert: New app for analysing p-values.

OOOOOOH, nice new shiny stats app: The p-checker

(Posted on the Shiny apps site, no less).

I hold Felix Schönbrodt responsible (he tweeted it in).

I haven’t played around with it yet, but it must be shared, and must be placed somewhere I might find it later.

I couldn’t help associating it with those lines on pregnancy tests, though. So, now I want to have a line that indicates some kind of “yes”.
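Since I want to find the logic again later: the core of that kind of p-value checking is the right-skew test for evidentiary value. Below is a minimal sketch in Python – not the p-checker’s actual code, and the p-values at the bottom are made up – of one simple version of it, which combines pp-values (p divided by .05) with Fisher’s method. Under the null of no true effect plus selective reporting of significant results, the significant p-values should be uniform between 0 and .05.

```python
# Minimal sketch of a right-skew ("evidentiary value") test on a set of
# significant p-values. Not the p-checker's implementation, just the basic idea.
from math import log
from scipy.stats import chi2

def right_skew_test(p_values, alpha=0.05):
    """Fisher-style combination of pp-values (p / alpha) for p < alpha."""
    sig = [p for p in p_values if 0 < p < alpha]
    if not sig:
        raise ValueError("no significant p-values to analyse")
    # Under the null of no evidentiary value, p / alpha is uniform on (0, 1),
    # so -2 * sum(log(p / alpha)) follows a chi-square with 2k degrees of freedom.
    chi_sq = -2 * sum(log(p / alpha) for p in sig)
    df = 2 * len(sig)
    return chi_sq, chi2.sf(chi_sq, df)

# Made-up p-values from some hypothetical literature:
chi_sq, p = right_skew_test([0.001, 0.012, 0.030, 0.049, 0.004])
print(f"chi2 = {chi_sq:.2f}, p = {p:.4f}")  # a small p suggests evidentiary value
```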

Looking for unpublished data for Creativity Meta-Analysis. Plz spread the word

I’m one of the supervisors here, and would like to spread the word (and maybe get some data). After all, doing more meta-analyses is likely part of fixing science!

Subject: Meta-Analysis: Call for Unpublished Data on the Relation between Creativity and Self-efficacy


We are conducting an exhaustive search of the published literature, and are now making a call to gather findings that are unpublished, or soon to be published. We are also interested in unpublished thesis data. We are especially interested in the zero-order correlations between ANY creativity measures (including self-rated creativity) and self-efficacy beliefs (general self-efficacy, as well as creative self-efficacy).

If you believe your study qualifies for inclusion, we are requesting details about the characteristics of the measurements, as well as of your sample, plus the study design. The associated effect sizes are also desirable.
Alternatively, we would be happy if you could provide us with your data and any information required to determine how the variables might be coded.

We will only use the data for the purpose of the meta-analysis and we will delete the data afterward.
You can contribute your unpublished data via email to jen.haase9@gmail.com
Similarly, if you have any questions about this study, please do not hesitate to get in contact.
Thank you for your assistance and contribution to our work. We will gladly send you a copy of the meta-analysis once it is published.

Best regards,

Jennifer Haase

Master’s student at Lund University, Department of Psychology

Eva Hoff, Ph. D.

Lund University, Department of Psychology

Åse Innes-Ker

Lund University, Assoc. Prof. Psychology

Two critiques, and a faith restorer

I wanted to share links to some recent blog posts that I thought were interesting. The first is by Scott Atran (who researches terrorism), posting on Peter Turchin’s Social Evolution forum. Scott recently had a commentary up in Nature discussing how difficult it is to even get permission to study terrorism (in part, he claims, due to ethics committees being set up to protect middle-class students). The post is an interesting discussion of research on humans, past and present (and much of psychology is, of course, research on humans). Scott Atran: Psychology, anthropology and a science of human beings.

The faith restorer is from Michael McCullough’s “Social Science Evolving” blog, where he discusses a p-curve exercise that he used in one of his classes. He had his students form teams and then select a literature on which to do a p-curve analysis. For all 10 topics, the data showed evidentiary value! It sounds like such a good project for students to do, and it makes me feel a bit better on this day of numerate chickens.

I thought of that blog as a temperate response to this post on the Error Statistics Philosophy blog.

I think her critique is fair. But there is evidence among the less charismatic areas of social psychology.

2014 in review

The WordPress.com stats helper monkeys prepared a 2014 annual report for this blog.

Here’s an excerpt:

A San Francisco cable car holds 60 people. This blog was viewed about 2,000 times in 2014. If it were a cable car, it would take about 33 trips to carry that many people.

Click here to see the complete report.

Stapel’s derailment – Now in English thanks to Nick Brown.

Nick Brown (who blew up positive psychology’s metaphorical use of the Lorenz butterfly attractor as just so much nonsense) took it upon himself to translate Stapel’s autobiography. And he is making it available for free. Right here. Come on, download it. I know you want to.

I did.

On Trust, and the Process of Science

Some weeks ago, there were two tweet streams about trust in science.

The first included Akira O’Connor’s successful campaign against a rejection based on a single review wherein he was accused of p-hacking. Evidently he is not alone when it comes to this experience. From being a high-trust endeavor, where at worst you might be accused of doing inane and misguided research, there is now suspicion that you are fudging your research (but see Data Colada’s excellent tutorial on how to respond to suggestions of p-hacking).

The second was from Keith Laws, stating that pre-registration is not keeping the sloppiness and the HARKing in check, as journals don’t always hold researchers to their preregistration.

Trust.

In short supply.

When I re-read David Hull’s “Science as a Process” this summer, I ran across his claim that scientists very rarely falsified results. That is not because scientists are a particularly virtuous group – he states quite strongly that scientists are human, with all the foibles of ambition, self-serving biases and querulousness, as well as the standard issue of nice traits, and that this doesn’t matter for science to work. The reason outright fraud was so rare is that it harms knowledge and ALL of the knowledge workers. As a scientist, you need to trust that what came before works because, as important as reproducibility is, very few have the time to spend reproducing earlier results. We must trust results. They can be flawed, but they must be honest.

But, why was this enough? Well, his model of how the scientific process in the long run accumulates more knowledge, despite being done by flawed human beings, is one of replication and selection: An evolutionary process. Each scientist wants their ideas to spread, to replicate, to be selected, and one of the mechanisms for this is credit. I have a good idea, I test it and publish. You build on it, and give me credit for the good idea.

If I put out an idea based on faked results, my ideas will be selected against, rather swiftly, once found out. That is, you’re dead. Would any of you cite Stapel? Even his non-indicted papers? How about Marc Hauser? Do we really, really know about Förster? Would you cite him without careful scrutiny?

At the time Hull was writing this (the book was published in 1988), science was, perhaps, smaller. His test groups were two branches of classification scientists – those who work on how to classify species of animals, plants, protists and the like. The two groups he followed seemed somewhat intimate, and entangled in discussion. The work was published in this one journal, where about 60% of papers sent in were published.* For many of the 40% that were not published, it was because the authors never re-submitted. There was a great deal of scrutiny. A faker might very well be discovered early on, and would be out of the science pool.

Stealing, he claims, was tolerated (as in plagiarizing and appropriating other people’s ideas), because it only hurt the individual stolen from. Fraud hurts everybody.

That fraud hurts a sizeable proportion of scientists, and science itself, is still true, of course (as does less-than-robust science, which is perhaps behind the accusations of p-hacking, though not behind the sloppiness with pre-registrations).

So what has happened, if anything?**

As I, and many others before me, have pointed out, science is now a huge enterprise which overproduces scientists. This makes the competition for slots to get to do science that much fiercer – in true evolutionary manner. Evolutionary processes filter for the fittest something, but whether this something coincides with what humans consider good (in this case, increased true knowledge) is not guaranteed at all. Evolution, at its tritest, is: whatever survives, survives.

Towards the end of his book, Hull asks a number of questions that are outstanding from his evolutionary model. One of them is: what happens if competition sharpens? Competition has always been a part of science, but Hull also spends a great deal of time demonstrating how important cooperation is for science to function well, and for science to produce more and more reliable knowledge. Citation is the minimum of cooperation – all of us need to rely on the work of other scientists in order to advance our ideas, and we need to acknowledge their work. But he goes further, demonstrating that you need cooperative allies – demes. You may not all agree, but usually there is some idea or concept that you agree upon, that you are all working on, and that you have a similar view of. This could be Darwinism or Cladistics (from his book). It could also be Social Priming, Persuasion, Emotion, what have you. There can be skirmishes, where one group – deme – marshals evidence for their idea against the ideas of another group (Categorical vs. Dimensional concepts of emotion; Cladistics vs. Phenetics; Darwinism vs. Idealism – the latter two from Hull). This arguing can be fruitful, and in itself advances science. Having allies is important. Hull demonstrates quite well that ideas that have only a single proponent, or proponents who cannot cooperate, do not survive well.

Hull also mentions, towards the end of the book, that career concerns (rarely mentioned, but of course mattering) tended to align with the more vocal concerns about getting the science as right as possible. Doing good science in a productive deme got you published and cited more, and could be transformed into better career opportunities and resources for continuing to drive the idea forward.

Perhaps it is here things have broken down, in the increased competitiveness – I think Shauna Gordon-McKeon’s “When science selects for fraud” lays this out very well. Career concerns are no longer as well aligned with good science. In fact, they can interfere with it, as has been discussed over and over again in various blogs. (Both Jelte Wicherts and Brian Nosek brought that up in the “beyond questionable science” symposium. Worth a second look here.)

So, together, the sheer size, the lack of good demes and the competitiveness may have diluted how effectively the processes in science select against fraud and cheats.

Honest signals, and their faking.

If you look at game theory/evolutionary models of how trust can be maintained, there must be some means for the cooperative individuals to protect themselves against the untrustworthy (inspection), and some means of making it costlier to cheat (e.g. damaged reputation). I mused on a model based on Robert Frank’s emotion model in this blog post, but there is plenty of work looking at how to disincentivize cheating.
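To make that logic concrete, here is a toy sketch – emphatically not Robert Frank’s model, and every number in it is invented – of how inspection probability and reputation cost interact. Cheating only spreads when the expected cost of getting caught is smaller than the gain from cheating.

```python
# Toy replicator-style dynamic: the share of cheaters grows or shrinks with
# their payoff relative to honest researchers. All parameters are invented.
def cheater_share(generations=200, cheat_gain=0.3,
                  inspection_rate=0.2, reputation_cost=3.0, start=0.10):
    share = start
    for _ in range(generations):
        honest_payoff = 1.0                                    # baseline payoff
        cheat_payoff = 1.0 + cheat_gain - inspection_rate * reputation_cost
        mean_payoff = share * cheat_payoff + (1 - share) * honest_payoff
        if mean_payoff <= 0:
            return 0.0
        share = min(1.0, max(0.0, share * cheat_payoff / mean_payoff))
    return share

# With regular inspection and a real reputation cost, cheating dies out;
# make inspection rare and it takes over.
print(cheater_share(inspection_rate=0.2))    # -> close to 0
print(cheater_share(inspection_rate=0.02))   # -> close to 1
```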

Concern about reputation (as gossip and reputation are a way to keep cheating in check) is one route towards maintaining trust. In science that would be having a reputation as a good, honest scientist.*** But reputation can be gamed. In my marketing psychology course, based on Cialdini’s “Influence”, we discuss how authority can be co-opted through, for example, clothing or titles. When the field is large and impersonal, as most scientific fields are now, the indicators may be very much removed from actual performance – indicators like the number of publications, in which journals, and with what number of citations – and here journals are also working on maintaining their reputation, perhaps by being known for flashy discoveries, or high rejection rates, none of which necessarily correlates highly with increasing actual knowledge (as the high retraction rate from the glam magazines perhaps indicates; lots of work has been done on this). Publications, journals, and citations are then not necessarily honest signals of high quality, but sometimes, like the king snake or the cuckoo, mimicry.

Routine inspection (peer review) is somewhat costly, but should be a way of ferreting out at least some of the cheaters. Yet a surprising number of papers have been through peer review without the problems being discovered. Perhaps, as Frank suggested, inspection got lax because scientists generally trusted that other scientists were honest. The larger the proportion of honest cooperators, the less time needs to be devoted to inspection (and that time can then be devoted to other, more productive activities).

When the fields are huge, there is not enough nearness to the agents to allow verification and inspection. What rises to the top may not be those who do solid work, but those who can project well – possibly a kind of narcissism.

I don’t know how to restore trust. But, the ease of establishing social connections via twitter and blogs may make it easier for us to share what doesn’t work, so we don’t end up like this poor bug (thanks to Felicia Felisberti who tweeted it in).

Efforts to do post-publication peer review also allow more public scrutiny of results, from scientists both friendly and unfriendly towards those ideas. (Friendliness is not a requirement. If you are against a theory you may be more likely to find its holes than if you love it. Hull points out that this kind of friendliness is not a requirement for science to go forward, as much as some of us would like it to be so.) And, perhaps, lifting up how incredibly important cooperation and collaboration are. Competition has its points, but when you use that as the only gauge, you get the Lance Armstrong effect. One can argue about the goodness or badness of that in sports, which I tend to think of as trivial. It is not trivial when your ostensible goal is to increase our knowledge about the world.

*(There is a whole chapter analyzing who accepted papers from which group, specifically to investigate whether there were obvious biases against the opposite camp. Conclusion – not really.)

**I’m making the assumption that there is an increase in fraud. There certainly has been an increase in less than robust science. Feel free to contest.

***According to Hull there are a couple of other issues involved here, which have to do with whether one chooses to do solid but not very exciting research, or risky research. Plodding puzzle solving is low risk, and a way of maintaining a solid reputation as trustworthy. Taking more risks could result in a very high reputation if the research pans out, but one risks taking a big hit to one’s reputation if it doesn’t, or if the exciting research too frequently turns out not to be robust. This is entirely under the assumption that both the plodding and the risky work are done honestly.

****I have adopted Simine Vazire’s footnotes.

On Null results, refined.

The other day, JP de Ruiter tweeted in:

[Embedded tweet from JP de Ruiter]

He has a point.

And, well, we do not want to use the sleight of stats Keith Laws suggests.

[Embedded tweet from Keith Laws]

Which, as the post just preceding this one shows, I have been pondering before, and I’m far from the only one pondering it. (Hey, it is my blog. I get to repeat myself. I think I’m sketching….)

Unlike Animal Farm animals, all studies with null results are not created equal. All of us know the standard reason, passed down through the training generations, why null results are not published: There are many reasons why a study doesn’t work out, and a lot of them are scientifically entirely uninteresting. The uninteresting reasons range from poorly thought-through methods, badly chosen stimuli, errors in timing, and badly run studies, to crappy conceptualization – like those unhappy families, though terribly uninteresting to write tomes about. This is what we remind our students of when, with feeble hope, they pipe up that it would be really interesting to know what doesn’t work.

Sure. But the universe of “doesn’t work” is endless. Only things that don’t work in interesting ways are informative. Which, well, raises the question: what is an interesting way?

I know of two papers that published null results prior to the replication flurry. On one, my advisor was a co-author along with June Tangney and others, on certain aspects of Higgins’ Self-Discrepancy theory. The second was work by Jari Hietanen, where he looked at whether the emotional expression of a centered face with eyes pointing in either direction mattered in an attention paradigm (bear with me). That is, are we more likely to be lured by the eye-direction of a frightened face (as evidenced by faster reaction times when the target appears in the gazed-at direction, and slower when it appears in the opposite direction) than by other emotional expressions? He didn’t find that in 5 different experiments, using different depictions of faces. Both papers involved multiple studies and multiple variants of stimuli and paradigms. Tangney et al. also included an alternative prediction. Lots of work. Perfectly reasonable. Rarely seen.
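For anyone who has not met that paradigm: the cueing effect is usually just mean reaction time on invalid trials (target opposite the gazed-at direction) minus mean reaction time on valid trials (target in the gazed-at direction), computed separately for each expression. A minimal sketch of that computation, with hypothetical column names and made-up reaction times:

```python
# Computing a gaze-cueing effect per emotional expression.
# The data frame, column names, and reaction times are all made up.
import pandas as pd

trials = pd.DataFrame({
    "emotion":  ["fear", "fear", "fear", "fear",
                 "neutral", "neutral", "neutral", "neutral"],
    "validity": ["valid", "invalid", "valid", "invalid",
                 "valid", "invalid", "valid", "invalid"],
    "rt_ms":    [412, 447, 399, 452, 421, 438, 415, 430],
})

mean_rt = trials.groupby(["emotion", "validity"])["rt_ms"].mean().unstack()
cueing_effect = mean_rt["invalid"] - mean_rt["valid"]  # bigger = stronger luring
print(cueing_effect)
```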

But, there are a lot of other types of null results.

Across the street from where I work, there is a museum called “Skissernas museum” – the museum of sketches. It is filled with earlier drafts, sketches, and preliminary models of artwork that is officially displayed in museums, or as sculptures in squares, and in some cases well known.

A piece of art is not created from blank thoughts to the finished product in one go. Before it come the sketches, the attempts, the miniature models. Even I, in my feeble amateur painting, spent a bit of time sketching.

This is how I think about my spiders and snakes and attention (insert Oh My here) work, which has yet to see the light of day. We got something in each study, but could not interpret it. So, we kept tweaking them. Changing a thing here or a thing there. Alas, I left for Sweden before we had a tweak that gave us clear results.

A lot of the file drawer may be just this kind of work. Sketches. Drafts. Preliminary work.

Some are more like our tweaking of a Stapel Ebbinghaus study (as far as I know, based on genuine data) where, instead of social categories, we used emotional expressions. The non-results of that one probably linger comfortably in that file drawer – or landfill, as is the case now (I emptied the drawers out myself). We gave it a good try, it didn’t work, oh well, it was a bit of a long shot (although I have seen it done lately. Gasp).

Then there are those that may be informative in a different way. I think the five variants of testing whether emotional state influenced perceptual processing of emotion-congruent faces might have deserved a null-publish. We thought it might work, it didn’t, and we had some ideas why (and also, as a warning: don’t waste your time doing this).

And then there are the even more troubling kinds – when researchers have attempted to replicate, fairly directly, some interesting effect that has already been published, and do not get it.

Pre-registration takes care of some of that, but that is for fairly late in the game. Here things are well thought out, one can run a full-blown hypothesis test that may or may not work out, and people are willing to bet both time and money on setting it up. But not all of the attempts are of that kind.

These last couple of types are the ones that are missing, and that would be informative for research.

But the rest? The sketches? And all those attempts that find no results for reasons that have nothing to do with what is tested, but everything to do with the execution (and one has to remember that we likely all make these kinds of mistakes along the way – problems with stimuli, with collection, with design, with thinking things through – that are only evident in hindsight). What to do with them? Not all are strategic cases where you run a lot of studies and publish the ones that “worked”. They just didn’t work.

Publish? As if the literature isn’t crowded enough as it is. Even Skissernas Museum limits itself to fairly late prototypes and sketches.

Paul Meehl suggested that it might be a good idea to have some place summarizing the pilot work that didn’t work out, so that others do not take that particular wrong turn. (Some turns are just so attractive that we may go down them multiple times, only to find a dead end.)

For some areas that may be very interesting to formalize. But keeping it all may be like insisting on plastering every scribble of your kids’ daycare work on the wall.

Perhaps one of the issues is also that the criteria for publishing have been too lenient, or that the method for determining what is real (a.k.a. null-hypothesis testing) is just too weak. Yes, I know, lots of people think that, and have said so for a long time! (I just re-read Meehl’s paper on Sir Karl and Sir Ronald, where he chides null-hypothesis testing for being much too light a challenge for a hypothesis. Put them to risk!)
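Meehl’s complaint is easy to demonstrate: with a large enough sample, a negligible true effect will reject the nil hypothesis, so surviving that test is hardly a risky prediction. A small illustration with made-up numbers (the point prediction at the end is purely hypothetical):

```python
# With n = 100,000 even a trivially small true effect (0.02 SD) comfortably
# rejects the nil hypothesis, illustrating how weak a hurdle that test is.
import numpy as np
from scipy.stats import ttest_1samp

rng = np.random.default_rng(42)
sample = rng.normal(loc=0.02, scale=1.0, size=100_000)

t, p = ttest_1samp(sample, popmean=0.0)
print(f"t = {t:.2f}, p = {p:.2g}")           # 'significant', but nearly meaningless

# A riskier, Meehl-style test would be a specific point (or narrow-range)
# prediction, which a negligible effect would fail:
predicted = 0.40                             # hypothetical point prediction
print(abs(sample.mean() - predicted) < 0.05) # False: the risky prediction fails
```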

*Yeah, I realize I covered this in my earlier post too. But it is my blog so I get to repeat myself if I want to. Perhaps I’m sketching.