Ethical systems design for Science?
I enjoyed this conversation between Jon Haidt and Phil Tetlock. (Disclosure: I read the transcript rather than listening. It is described as edited, but I assume that is only to make it comprehensible; anyone who has seen a verbatim transcript knows how difficult they are to read.)
One thing that stuck out was the following quote.
Even more important than that, because I think that we’re so limited in our ability to behave ethically in the face of situational pressures, I want to teach our students how to do ethical systems design, how to take all the flaws and weirdnesses of human nature and work with them to design organizations and startup companies where people are always concerned about their reputation. People are concerned about reputation even more than money in most cases. How can we set things up so that people will, in a sense, guard their reputation by doing the right thing? That’s the most important single principle
It struck me that this is something to aim for in science as well: push the system (as it works now) towards something more robust against our human frailties. I haven't really thought through how that would work, or what would need to be tweaked.
It is clear that one cannot rely on whistleblowing, because humans know (probably instinctively) that whistleblowers will be treated extraordinarily badly, essentially as traitors. Not too long ago I read an interview with someone doing research on whistleblowers (I no longer recall where it appeared or who it was by, but it was in connection with the Snowden affair), and I have also linked to what happened to Robert Trivers when he was sure research results had been faked.