The other day, I had a conversation with an academic on Twitter who retweeted a link to the Morris & Krieger study *Does Male Circumcision Affect Sexual Function, Sensitivity, or Satisfaction? A Systematic Review*. This study has been fully discredited.
A number of individuals, myself included, called this researcher out. We pointed to numerous other studies as counter-examples. This researcher couldn't be convinced. After all, the study was a "systematic review," which, in the academic (and Wikipedia) communities, apparently means it is infallible.
I pointed out that the "study" was performed by a pro-circumcision fanatic who was also a circumfetishist. Academics, apparently, put no stock in these details: they assume that each researcher is ethical, unbiased, and would never skew results.
I pointed out that the "study" only looked at 36 of 2675 studies on the subject. Apparently, looking at less than 2% of the studies on a subject (about 1.3%, in this case) qualifies as "systematic" in the research community and is fairly standard. Who knew? But of course the selection criteria couldn't be skewed! How could a researcher possibly creatively select studies to show a specific result? This doesn't happen, apparently.
This researcher stubbornly refused to believe that there were any flaws in this study, insisting that it represented scientific fact. She pointed to another "systematic review" that, even in its summary, admitted it had reviewed "low quality studies" and that "more analysis was needed".
My interaction with this academic researcher highlighted the complete and abject failure of medical research to properly vet research and to present the truth. The reason for this failure appears to be naïveté and special interest. The article Lies, Damned Lies, and Medical Science highlights a number of failings of medical research. The failings that I have identified are:
- The belief that each researcher is honest, ethical, and credible
- The belief that there is no reason to vet a researcher's pedigree or motivations
- The belief that only the "type of study" rather than the study methods and/or inclusion criteria is important (this is a failure of Wikipedia as well)
- Dubious methods for determining scientific consensus
The reality is that lots of research is done for dishonest reasons. Lots of researchers are out to promote a certain point of view. If there were research commissioned by BP showing that there was no significant harm to wildlife after the Gulf oil spill, would you believe it? Of course not—only a fool would believe this research to be credible. But in the medical community (and Wikipedia), credibility has no value. Any attempt to point out a researcher's conflicts of interest, or their financial, personal, or political motivations, is considered a valueless ad-hominem (personal) attack and is rejected out-of-hand. In the "real" (non-academic) world, there are dishonest people. Let's not be foolish.
The idea that the "type of study" makes a particular study infallible is nonsense. Researchers can choose their own inclusion criteria for systematic reviews. Just as you wouldn't trust BP's results on Gulf oil spill effects, you can't assume that Morris's selection criteria for his studies are unbiased either. If his criteria include only studies that measure effects on glans sensitivity (and nothing else), then he will get the desired result. If you choose to study the part of the penis that is the least sensitive and present in both intact and circumcised men, you'll likely get the outcome you want. Since both academics and Wikipedians fail to vet study criteria, they fail to produce the truth.
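The effect of cherry-picked inclusion criteria is easy to demonstrate with a small, entirely hypothetical simulation (the numbers below are made up for illustration and have nothing to do with any real study): even when the full body of evidence averages out to no effect, a review that only admits studies past a favorable threshold will report a strong one.

```python
import random

# Hypothetical illustration: 100 simulated "studies" whose effect sizes
# are drawn from a distribution centered on zero (i.e., no true effect).
random.seed(0)
studies = [random.gauss(0.0, 1.0) for _ in range(100)]

# Honest aggregation: average over ALL studies -- lands near zero.
all_mean = sum(studies) / len(studies)

# Biased "systematic review": inclusion criteria admit only studies
# that already show a clearly positive effect.
included = [s for s in studies if s > 0.5]
biased_mean = sum(included) / len(included)

print(f"All {len(studies)} studies, mean effect: {all_mean:+.2f}")
print(f"{len(included)} 'included' studies, mean effect: {biased_mean:+.2f}")
```

The biased average is guaranteed to exceed the honest one, by construction: the conclusion was baked into the inclusion criteria before a single study was "reviewed".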
More than one Wikipedian has mentioned that scientific consensus is determined by (literally) counting up the number of studies showing a certain conclusion. Not only does Wikipedia's own policy on this subject fail to confirm the validity of this method, but it makes very little practical sense. Clearly, researchers with a common point of view (such as oil companies, genital mutilation advocates, etc.) will commission research to try to "prove" that their viewpoint is fact. Those with a financial interest in study outcome will fund lots of researchers at lots of universities. And what if that money comes with required study criteria? I asked multiple times for the Wikipedians to provide me a source that confirms that counting up studies is a way to prove scientific consensus. Not surprisingly, none was given.
I honestly believe that the academic I encountered on Twitter was quite foolish and naïve. Wikipedians and other academics share that same mindset, and it has many shortcomings. We do not live in a perfect world, and we cannot assume that every person or researcher is honest. To believe a "systematic review" without vetting its methods or its authors, or without validating its results against other studies, is dangerous and foolish.
Science works to prove theories by aggregating facts and evidence. Medical research may or may not be scientific. To blindly assume that all research is science is lunacy.