EVIDENCE OF EFFICACY
Randomised controlled trials (RCTs)
19. Randomised Controlled Trials (RCTs) are the best
way of determining whether a cause-effect relationship
exists between a treatment and an outcome.[23]
Well designed RCTs have the following important features:
- randomisation: patients should
be randomly allocated to placebo (dummy treatment)[24]
or treatment groups; this ensures that there are no systematic
differences between patient groups that may affect the outcome;
- controlled conditions: aside from the treatment
given, all patients should be treated identically, whether in
placebo or treatment groups; this excludes other factors
from influencing the outcome;
- intention-to-treat analysis: patients are analysed
within their allocated group even if they did not receive the
intervention; this maintains the advantages of randomisation,
which may be lost if patients withdraw or fail to comply;
- double blinding: patients and clinicians should
remain unaware of which patients received placebo or treatment
until the study is completed; this eliminates the possibility
of preconceived views of patients and clinicians affecting the
outcome; and
- placebo controlled: where there is no appropriate
alternative treatment to compare against, the intervention
under consideration is tested against a dummy treatment (placebo)
to see whether it has any benefit or side effects.
20. In clinical research, it is widely accepted that
RCTs are the best way to evaluate the efficacy of different treatments
and distinguish them from placebos. However, some supporters of
homeopathy claim that RCTs are not an appropriate way to test
homeopathy because "they are far less suitable when studying
the overall effects of a holistic therapy in a complex organism
with multiple problems".[25]
We do not agree. If homeopathic products (or any medicinal
product) are more than placebos, and all other elements of
the "holistic" care package are the same (controlled),
it should be possible to see differential results between the
test substance and the placebo. We
consider that conclusions about the evidence on the efficacy of
homeopathy should be derived from well designed and rigorous randomised
controlled trials (RCTs).
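The logic of paragraph 20 can be shown with a purely illustrative
simulation. The sketch below (in Python, using invented figures rather
than data from any actual trial) randomly allocates simulated patients
to placebo or treatment groups and compares the outcomes of the two
groups; if the "treatment" is nothing more than a placebo, no
systematic difference emerges.

    import math
    import random
    import statistics

    random.seed(1)  # fixed seed so the illustration is reproducible

    def simulate_trial(n_per_group, treatment_effect):
        """Simulate a two-arm randomised trial. Outcomes are drawn from a
        normal distribution; the treatment group's mean is shifted by
        treatment_effect (zero for a pure placebo). Purely illustrative."""
        placebo = [random.gauss(0.0, 1.0) for _ in range(n_per_group)]
        treated = [random.gauss(treatment_effect, 1.0) for _ in range(n_per_group)]
        return placebo, treated

    def t_statistic(a, b):
        """Two-sample t statistic (equal-variance form)."""
        na, nb = len(a), len(b)
        pooled_var = ((na - 1) * statistics.variance(a) +
                      (nb - 1) * statistics.variance(b)) / (na + nb - 2)
        return (statistics.mean(b) - statistics.mean(a)) / math.sqrt(
            pooled_var * (1 / na + 1 / nb))

    # A treatment with a genuine effect produces a clear differential result...
    placebo, treated = simulate_trial(n_per_group=100, treatment_effect=0.5)
    print("genuine effect, t =", round(t_statistic(placebo, treated), 2))

    # ...whereas a "treatment" that is only a placebo does not.
    placebo, dummy = simulate_trial(n_per_group=100, treatment_effect=0.0)
    print("placebo only,   t =", round(t_statistic(placebo, dummy), 2))

Large values of t (roughly above 2 in absolute size) indicate a
difference unlikely to be due to chance alone; with a genuine effect of
this size and 100 patients per group, the first comparison would
normally cross that threshold, while the placebo comparison would not.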
Meta-analyses and systematic reviews
21. There may be variation in the results produced
by different RCTs, particularly if there are many trials with
low statistical power, that is, small trials with low numbers
of participants. When trials produce varying results, proponents
on either side of an argument can "cherry-pick" data
that support their position. This is a
situation we wish to avoid. We can do so by turning to two types
of analysis of clinical trials to help us appraise the evidence:
meta-analyses and systematic reviews.
22. Meta-analyses combine the results of trials,
increasing the sample size and statistical power of the data.
Meta-analyses may reveal statistically significant trends that
were not apparent by studying the trials individually. When pooling
data, it is important to ensure that the data are comparable.
It is preferable that a meta-analysis include only well designed
trials, since these produce the most rigorous data. When
meta-analyses are conducted on less well designed trials, the
design flaws should be recognised and the diminished power of
the data acknowledged.
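To illustrate how pooling increases statistical power, the sketch
below combines several small trials using the standard inverse-variance
(fixed-effect) method. The trial figures are invented for the purpose
of the example and are not drawn from any homeopathy study.

    import math

    # Invented example data: each pair is (effect estimate, standard error)
    # from one small trial. None of these figures come from real trials.
    trials = [(0.30, 0.25), (0.10, 0.30), (0.25, 0.28), (0.15, 0.26)]

    # Inverse-variance pooling: each trial is weighted by 1 / SE^2, so
    # larger, more precise trials contribute more to the pooled estimate.
    weights = [1 / se ** 2 for _, se in trials]
    pooled_effect = sum(w * est for (est, _), w in zip(trials, weights)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))

    print("pooled effect:        ", round(pooled_effect, 3))
    print("pooled standard error:", round(pooled_se, 3))
    # The pooled standard error (about 0.14) is smaller than that of any
    # individual trial, which is why a meta-analysis can reveal a trend
    # that no single small trial had the power to detect.

Pooling in this way assumes the trials are genuinely comparable;
combining poorly designed trials produces a precise-looking but
unreliable estimate, which is why design quality matters.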
23. Systematic reviews refer to the process of collecting,
reviewing and presenting all the available evidence, for example,
by selecting trials listed in the PubMed database[26]
that meet pre-defined criteria. Systematic reviews often, but
not always, include a meta-analysis.[27]
24. Properly conducted systematic reviews have the
following important features:
- the prior determination and
explanation of eligibility criteria (which will allow or disallow
inclusion of published studies) for the systematic review;
- a literature search looking
for all potentially relevant published studies;
- examination of the methodology of all potential
candidate studies to ensure that they fit the eligibility criteria;
this includes clear rules about the design and methodology of
such studies;
- assembly of the most complete dataset feasible;
- analysis of the results of included studies,
with statistical analysis (meta-analysis) if appropriate; and
- a critical summary of the systematic review,
including identification of the "confidence intervals"[28]
and "statistical significance"[29]
of any findings (illustrated in the sketch below).
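Purely by way of illustration, the sketch below takes a pooled effect
estimate and its standard error (invented figures, as in the earlier
example) and derives the 95% confidence interval and two-sided p-value
that the critical summary of a review would report.

    import math

    def summarise(pooled_effect, pooled_se):
        """Return a 95% confidence interval and two-sided p-value for a
        pooled estimate, using the normal approximation."""
        half_width = 1.96 * pooled_se
        ci = (pooled_effect - half_width, pooled_effect + half_width)
        z = abs(pooled_effect / pooled_se)
        p_value = 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))
        return ci, p_value

    # Invented example figures, not taken from any real review.
    ci, p = summarise(pooled_effect=0.21, pooled_se=0.14)
    print("95% confidence interval:", tuple(round(x, 2) for x in ci))
    print("two-sided p-value:      ", round(p, 3))
    # A confidence interval that includes zero (as it does here) means the
    # finding is not statistically significant at the conventional 5% level.

The confidence interval and the p-value convey the same information in
different forms: a 95% interval that excludes zero corresponds to a
p-value below 0.05.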
25. We
expect the conclusions on the evidence for the efficacy of homeopathy
to give particular weight to properly conducted meta-analyses
and systematic reviews of RCTs.
THE DISTINCTION BETWEEN EFFICACY
AND EFFECTIVENESS
26. It has been suggested that it is useful to draw
a distinction between efficacy and effectiveness.[30]
Dr Peter Fisher, Director of the Royal London Homeopathic Hospital,
explained the difference:
In simple terms the distinction is between ideal
conditions and real world conditions: efficacy being ideal
conditions and effectiveness being real world conditions.[31]
27. Professor Edzard Ernst, Director of the Peninsula
Medical School, gave the following example:
Efficacy tests whether treatment works under ideal
conditions; for instance, a hypertensive agent may well be effective
under ideal conditions and then will not work in the real world
because people experience side-effects.[32]
28. The opposite might also occur: a product might
not work in "ideal" conditions, but may appear effective
in "the real world". In the case of homeopathy, arguments
have predominantly centred on whether or not it is a placebo
treatment. If homeopathy were better than a placebo treatment,
one would expect tests of efficacy to show that it is efficacious,
and "real world" tests of effectiveness to show that
it may or may not be effective. If homeopathy were a placebo treatment,
it would fail tests of efficacy, but in tests of effectiveness
it would appear to be effective for some conditions and some patients,
but not for others.
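The reasoning in paragraph 28 can be restated compactly. The short
sketch below simply re-expresses those expected patterns; it involves
no data and no analysis of any evidence.

    # Expected pattern of results under each scenario, restating paragraph 28.
    EXPECTED_RESULTS = {
        "better than placebo": {
            "efficacy trials": "show efficacy",
            "effectiveness studies": "may or may not appear effective",
        },
        "placebo only": {
            "efficacy trials": "fail to show efficacy",
            "effectiveness studies": "appear effective for some conditions "
                                     "and patients, but not for others",
        },
    }

    for scenario, outcomes in EXPECTED_RESULTS.items():
        print(f"If homeopathy is {scenario}:")
        for study_type, outcome in outcomes.items():
            print(f"  {study_type}: {outcome}")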
A summary of the logical outcomes depending on whether
homeopathy is or is not a placebo