
Another guy wrote: “RCTs remain the gold standard”

I am not anti-RCT, nor do I prefer other methodologies across the board. However, people often fail to think accurately about what RCTs can and cannot do.

Let’s consider the title statement in light of what others have written about RCTs.

“In 2010, I wrote:

Randomized controlled trials (RCTs) have well-known problems with realism or [external] validity (a problem that researchers try to fix using field experiments, but it’s not always possible to have a realistic field experiment either), and cost/ethics/feasibility (which pushes researchers toward smaller experiments in more artificial settings, which in turn can lead to statistical problems).

Beyond these, there is the indirect problem that RCTs are often overrated—researchers prize the internal validity of the RCT so much that they forget about problems of external validity and problems with statistical inference. We see that all the time: randomization doesn’t protect you from the garden of forking paths, but researchers, reviewers, publicists and journalists often act as if it does. I still remember a talk by a prominent economist several years ago who was using a crude estimation strategy—but, it was an RCT, so the economist expressed zero interest in using pre-test measures or any other approaches to variance reduction. There was a lack of understanding that there’s more to inference than unbiasedness.”

“…researchers prize the internal validity of the RCT so much that they forget about problems of external validity and problems with statistical inference.” Let that sink in.
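The variance-reduction point is worth making concrete. Here is a minimal simulation (hypothetical numbers, not from any study cited here) showing that in a randomized experiment, both the raw difference-in-means and an estimate that adjusts for a pre-test measure are unbiased, but the adjusted estimate is noticeably less noisy. Unbiasedness alone is not the whole story.

```python
# Hypothetical simulation: pre-test adjustment reduces variance in an RCT.
# Both estimators below are unbiased for the true effect (0.5); the one
# that uses the pre-test measure has a smaller standard error.
import random
import statistics

random.seed(1)

def one_trial(n=200, effect=0.5):
    # Pre-test score strongly predicts the post-test outcome.
    pre = [random.gauss(0, 1) for _ in range(n)]
    treat = [i % 2 for i in range(n)]          # alternating randomization
    post = [0.8 * p + effect * t + random.gauss(0, 0.5)
            for p, t in zip(pre, treat)]

    # Unadjusted estimate: difference in post-test means.
    t_mean = statistics.mean(y for y, t in zip(post, treat) if t)
    c_mean = statistics.mean(y for y, t in zip(post, treat) if not t)
    unadjusted = t_mean - c_mean

    # Crude adjustment: difference in (post - pre) gain scores, a simple
    # stand-in for regressing the outcome on the pre-test measure.
    gains = [y - p for y, p in zip(post, pre)]
    tg = statistics.mean(g for g, t in zip(gains, treat) if t)
    cg = statistics.mean(g for g, t in zip(gains, treat) if not t)
    adjusted = tg - cg
    return unadjusted, adjusted

results = [one_trial() for _ in range(2000)]
sd_unadj = statistics.stdev(r[0] for r in results)
sd_adj = statistics.stdev(r[1] for r in results)
print(f"SD of unadjusted estimate: {sd_unadj:.3f}")
print(f"SD of gain-score estimate: {sd_adj:.3f}")
```

Across repeated trials, the gain-score estimator's spread is well under the unadjusted one's, even though both average out to the true effect. That is the "more to inference than unbiasedness" point in miniature.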

Link follows…

“Although Shadish is reluctant to describe RCTs as the gold standard because the phrase connotes perfection, he does describe himself as a “huge fan” of the methodology.”

Me too.

“Experiments, says Breckler, typically involve a trade-off between internal validity — the ability to trace causal inferences to the intervention — and external validity — the generalizability of the results.

“What people seem to fail to recognize is that the perfect RCT is designed strictly with internal validity in mind,” he says.”

RCTs only apply to the specific cohort studied. Of course, if RCTs are done without paying attention to population variables, then the interpretation risks distribution fallacies. For example, RCT studies of hormone replacement therapy missed the cancer risk to women nearing menopause that was caught by retrospective cohort studies.
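The distribution fallacy can be sketched in a few lines. The numbers below are invented for illustration (this is not a model of the actual HRT trials): when a treatment's effect depends on a population variable such as age, an RCT enrolling one age range can show no harm while the same treatment harms a cohort the trial did not sample.

```python
# Hypothetical sketch of the external-validity trap: a treatment whose
# harm is concentrated in younger subjects looks safe in an RCT that
# enrolls only older ones. All risk numbers here are invented.
import random

random.seed(2)

def risk(age, treated):
    # Assumed effect structure: harmless for older subjects, but raises
    # event risk for subjects under 55.
    base = 0.10
    harm = 0.08 if (treated and age < 55) else 0.0
    return base + harm

def simulate(ages, n=20000):
    # Randomize subjects drawn from the given age pool; return the
    # treated-vs-control risk difference.
    arm = {True: [0, 0], False: [0, 0]}  # treated? -> [events, count]
    for _ in range(n):
        age = random.choice(ages)
        treated = random.random() < 0.5
        event = random.random() < risk(age, treated)
        arm[treated][0] += event
        arm[treated][1] += 1
    return arm[True][0] / arm[True][1] - arm[False][0] / arm[False][1]

# An RCT that enrolls only older subjects sees roughly zero excess risk...
older_cohort_effect = simulate(ages=[60, 65, 70])
# ...while the same treatment in a younger cohort shows the harm.
younger_cohort_effect = simulate(ages=[48, 50, 52])
print(f"Risk difference, older cohort:   {older_cohort_effect:+.3f}")
print(f"Risk difference, younger cohort: {younger_cohort_effect:+.3f}")
```

Internal validity is perfect in both simulated trials; the first one is simply answering a question about a different population than the one at risk.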

“No one suggests that researchers give up RCTs. Instead, they urge the supplementation of RCTs with other forms of evidence.”



If prospective studies correspond to Prometheus (“forethought”), then retrospective studies correspond to Epimetheus (“afterthought”). Epimetheus had to clean up a lot after Prometheus.

Thinking in terms of “gold standard” is muddled. Thinking in terms of strengths and weaknesses of various methods is clear-headed.

Consider two studies: one is an RCT and the other is a retrospective study. Both appear to look at the same issue. However, the RCT fails to address the central hypothesis, while the retrospective study addresses it successfully. Anyone guided by the title statement would prefer the RCT despite the fact that it is utterly irrelevant to the hypothesis in question. This is why we must be aware of the strengths and weaknesses of RCTs vis-à-vis other methodologies.