APPENDIX 3
Memorandum from Professor Nancy Cartwright,
London School of Economics and Political Science
SUMMARY
One of the questions posed concerns the mechanisms
to ensure that policies are based on evidence. But it is equally
important to attend to the methods available for using evidence.
In particular I shall briefly set out my view that there is a
fundamental difficulty present in various methods that policy
makers are now being urged to employ to evaluate and use evidence,
particularly scientific evidence.
I am a Professor in the London School of Economics
Department of Philosophy, Logic and Scientific Method, specializing
in the methodology of both the natural and social sciences. My most
recent work concerns the nature of evidence for evidence-based
policy. I am not myself ideological about any particular methods:
for example, I am not a Bayesian, nor an anti-Bayesian; I investigate
the advantages and disadvantages of a great variety of methods,
both "hard" and "soft"; and I have done special studies on topics
ranging from the use of quantum physics to build lasers to the
evidence for causal connections between health and status.
1. It is widely claimed that evidence can
be assessed in terms of certain standard, privileged techniques,
such as randomized clinical trials. I believe that the privileging
of these procedures as a basis for policy is a serious mistake.
On the one hand these procedures are themselves fallible, especially
when we have to make inferences from test situations to the real
situations in which policy will be implemented. On the other hand,
such privileging ignores a host of other relevant information, much of which we
have paid dearly for through research councils and the like, and
which, all-told, can point in a different direction from the privileged
techniques. All methods require assumptions as inputs and in every
case the output conclusion can only be as secure as the input
assumptions. For any given question, what matters is to understand
which input assumptions, for which methods, are most secure.
2. The use of evidence-ranking systems seems
to be spreading fast. I think this is badly misguided. Many of
these systems suggest basing decisions on only the top-ranked
evidence, if there is any such. But the best decisions are made
on the basis of the total evidence. This will include a
great deal of evidence not rated by most evidence-ranking systems
and a great deal that may merit a low rank, which we are thus
told to ignore, without consideration of the amount, the source,
or the overall pattern. This includes evidence that is merely
"suggestive"; results that count as evidence by the
hypothetico-deductive method, which methodologists have long touted
as the principal method of physics but which is despised by most
ranking schemes; derivation from theory; consequences of econometric
modelling; and so forth. On the contrary, it is best to look at
everything, taking into account how secure each result is and
how heavily it weighs for the proposal and also taking into account
the overall pattern of the evidence.
3. There is also a movement that suggests
that evidence collected by agencies that know nothing about the
subject matter, such as consultancy firms, will be better, since
the agency will have no stake in the results. But it is widely
recognized that good studies generally require huge amounts of
background knowledge, deployed in subtle ways. There is a related
widespread assumption that the goodness of a study can be evaluated
through a formal checklist. But a large body of work shows that,
on the contrary, expertise and implicit knowledge and practices
matter tremendously.
4. From a methodological perspective, there
are two fundamental unresolved problems we face in using evidence
for policy. First, the value of evidence cannot be checked by
mechanical procedures. Second, it is wasteful to ignore any evidence,
and doing so can lead to disastrous consequences. But there are also no good
mechanical procedures for combining evidence of disparate sorts,
for seeing how the pieces fit into a total picture. These are
difficult problems that must be dealt with using good sense and
intelligence. Trying to substitute flawed mechanical procedures
in a drive for "objectivity" or transparency will generally
lead to flawed outcomes.
January 2006