Here are a couple of statements about the justification of the sample size from reports of clinical trials in high-impact journals (I think one is from JAMA and the other from NEJM): We estimated that a sample size of 3000 … would provide 90% power to detect an absolute difference of 6.3 percentage points in the rate of … Continue reading Sample size statement translation
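The arithmetic behind a statement like that can be sketched with a standard normal-approximation power calculation for comparing two proportions. The split of 3000 into 1500 per arm and the 50% control event rate below are assumptions for illustration, not figures from either trial; only the 6.3 percentage-point difference comes from the quote.

```python
from math import sqrt
from statistics import NormalDist

def power_two_proportions(p1, p2, n_per_arm, alpha=0.05):
    """Approximate power of a two-sided test comparing two proportions."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # about 1.96 for alpha = 0.05
    se = sqrt(p1 * (1 - p1) / n_per_arm + p2 * (1 - p2) / n_per_arm)
    return z.cdf(abs(p1 - p2) / se - z_alpha)

# Assumed numbers: 1500 per arm, hypothetical 50% control rate,
# and the quoted absolute difference of 6.3 percentage points.
print(round(power_two_proportions(0.50, 0.437, 1500), 2))
```

With these assumed inputs the power lands a little above the quoted 90%, which shows how sensitive such statements are to the unstated baseline rate.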
Author: Simon
Andrew Gelman agrees with me!
Follow-up to The Fragility Index for clinical trials from Evidence-based everything I’ve slipped in my plan to do a new blog post every week, but here’s a quick interim one. I blogged about the fragility index a few months back (http://blogs.warwick.ac.uk/simongates/entry/the_fragility_index/). Andrew Gelman has also blogged about this, and thought much the same as I did (OK, I … Continue reading Andrew Gelman agrees with me!
Bayesian methods and trials in rare and common diseases
One of the places where Bayesian methods have made some progress in the clinical trials world is in very rare diseases. And it’s true, traditional methods are hopeless in this situation, where you can never get enough recruits to get anywhere near the sample size that they demand for an “adequately powered” study, and … Continue reading Bayesian methods and trials in rare and common diseases
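The contrast can be made concrete with a minimal beta-binomial sketch: even with a handful of patients, a Bayesian analysis returns a usable posterior rather than an "underpowered" verdict. The trial numbers and the uniform prior below are entirely hypothetical.

```python
import random

random.seed(1)

# Hypothetical rare-disease trial: 3 responses out of 8 patients,
# with a uniform Beta(1, 1) prior on the response rate.
responses, n = 3, 8
a, b = 1 + responses, 1 + (n - responses)  # posterior is Beta(4, 6)

# Monte Carlo summary of the posterior.
draws = [random.betavariate(a, b) for _ in range(100_000)]
post_mean = sum(draws) / len(draws)                      # near 4/10 = 0.40
p_above_20 = sum(d > 0.20 for d in draws) / len(draws)   # P(rate > 20%)
print(round(post_mean, 2), round(p_above_20, 2))
```

Eight patients say nothing under a conventional power calculation, but the posterior still supports direct probability statements such as "the response rate exceeds 20% with probability about 0.91".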
“The probability that the results are due to chance”
One of the (wrong) explanations that you often see of what a p-value means is “the probability that data have arisen by chance.” I think people may struggle to see why this is wrong, as I did for a long time. A p-value is the probability of getting the data (or more extreme data) if … Continue reading “The probability that the results are due to chance”
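That conditional ("if the null hypothesis is true") is the whole point, and a simulation makes it explicit: p-values are computed in a world where the null holds, so under the null they fall below 0.05 about 5% of the time. This is an illustrative sketch, not anything from the quoted post.

```python
import random
from statistics import NormalDist

random.seed(42)
z = NormalDist()

def p_value(stat):
    """Two-sided p-value: probability, under the null, of a test
    statistic at least as extreme as the one observed."""
    return 2 * (1 - z.cdf(abs(stat)))

# Simulate 100,000 z-statistics from a world where the null is TRUE.
stats = [random.gauss(0, 1) for _ in range(100_000)]
frac_sig = sum(p_value(s) <= 0.05 for s in stats) / len(stats)
print(round(frac_sig, 2))  # close to 0.05 by construction
```

Note what the simulation cannot tell you: the probability that the null is true. Every one of those p-values was generated with the null true, small p-values included.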
Statistical significance and decision-making
One of the defences of the use of traditional “null hypothesis significance testing” (NHST) in clinical trials is that, at some point, it is necessary to make a decision about whether a treatment should be used, and “statistical significance” gives us a way of doing that. I hear versions of this argument on a regular … Continue reading Statistical significance and decision-making
“Something is rotten in the state of Denmark”
The DANISH trial (in which, pleasingly, the D stands for “Danish”, and it was conducted in Denmark too), evaluated the use of Implantable Cardioverter Defibrillators (ICD) in patients with heart failure that was not due to ischaemic heart disease. The idea of the intervention is that it can automatically restart the heart in the event of a … Continue reading “Something is rotten in the state of Denmark”
Classical statistics revisited
I’ve written before about the use of the term “classical” to refer to traditional frequentist statistics. I recently found that E. T. Jaynes had covered this ground over 30 years ago. In “The Intuitive Inadequacy of Classical Statistics” [1] he writes: What variety of statistics is meant by classical? J.R. Oppenheimer held that in science the … Continue reading Classical statistics revisited
Radio 4 does statistical significance
There was an item on “Today” on Radio 4 on 22 September about Family Drug and Alcohol Courts – which essentially are a different type of court system for dealing with issues about the care of children in families affected by drugs and alcohol. I know nothing about the topic, but it seems they offer … Continue reading Radio 4 does statistical significance
Feel the Significance
Pleasantly mangled interpretation of p-values that I came across recently: (STT is Student t-test and WTT is Wilcoxon t-test) “The two-tailed z-tests produced calculated p-values of < 1.0 × 10−6 for STT and WTT at α = 0.05. As the calculated p-values are much less than α, the Null Hypothesis is rejected which therefore proves that there is a significant … Continue reading Feel the Significance
The Fragility Index for clinical trials
Disclaimer: The tone of this post may have been affected by the results of the British EU referendum. There has been considerable chat and Twittering about the “fragility index” so I thought I’d take a look. The basic idea is this: researchers get excited about “statistically significant” (p<0.05) results, the standard belief being that if … Continue reading The Fragility Index for clinical trials
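The index as usually described can be sketched in a few lines: starting from the trial's 2x2 table, flip non-events to events in the arm with fewer events, recomputing Fisher's exact test each time, until p >= 0.05; the number of flips is the fragility index. The table in the example is made up, and the sketch assumes arm 1 is the fewer-events arm.

```python
from math import comb

def fisher_two_sided(a, b, c, d):
    """Fisher's exact test (two-sided) for the 2x2 table [[a, b], [c, d]],
    rows = treatment arms, columns = event / no event."""
    n, r1, k = a + b + c + d, a + b, a + c
    def pmf(x):
        # Hypergeometric probability of x events in arm 1, margins fixed.
        return comb(r1, x) * comb(n - r1, k - x) / comb(n, k)
    p_obs = pmf(a)
    lo, hi = max(0, k - (n - r1)), min(r1, k)
    # Sum over all tables at least as unlikely as the observed one.
    return sum(pmf(x) for x in range(lo, hi + 1) if pmf(x) <= p_obs * (1 + 1e-9))

def fragility_index(a, b, c, d, alpha=0.05):
    """Flip non-events to events in arm 1 (assumed the fewer-events arm)
    until significance at `alpha` is lost; count the flips."""
    flips = 0
    while fisher_two_sided(a, b, c, d) < alpha and b > 0:
        a, b, flips = a + 1, b - 1, flips + 1
    return flips

# Made-up extreme table: 0/10 events in arm 1 vs 10/10 in arm 2.
print(fragility_index(0, 10, 10, 0))  # 6 flips to lose significance
```

Even this deliberately extreme toy table loses significance after a handful of flipped outcomes, which is the observation the fragility-index literature builds on.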