Can you explain the Frequency and Propensity Theories versus the Bayesian Confirmation Theory?
Well, I'm feeling particularly masochistic this evening, so I'll actually attempt some sort of answer to
this question. First, I have some weak understanding of the frequentist position, and some weak
understanding of Bayesian statistics, but I do not know what you mean by "Propensity" theory. So I'll
skip that. Moving right along, I should tell you that this area is really a horrible one, and not only to
outsiders. There has been, and still is, a continuous and sometimes acrimonious debate between
frequentists (the tradition of Fisher, Neyman, and Pearson) and Bayesians. There are journals
that frown on or refuse confidence intervals (some Bayesian journals), and others that
will not publish an article without them. Article after article, book after book has been
written expounding and arguing for one of these viewpoints and against the other. So any summary I
can give will probably be too brief, inconclusive, confusing, and debatable, if not simply incoherent.
But here goes.
The difference is between a picture of reality in which there are definite properties of things that we
can discover if we're ingenious enough, and one in which we are floating in a kind of flux, from which
we (hopefully) extract regularities. That is, the former, frequentist approach assumes that there are
properties of coins, say, which ensure that if we flip them enough times, we will get half heads and
half tails (neglecting the edges). The Bayesian approach says something like: that's all very well, but
in the real world all we can do is start by assuming something, e.g., that the coin will land heads half
the time and tails half the time, toss the coin, and see what happens. There are no assumptions in the latter
approach as to what the coin "really" is, just an estimate of how the tests are going to turn out. One
starts by assuming something or other (a "prior"), flips the coin, and modifies one's initial estimate on
the basis of that flip. And so forth. At this point, the frequentist starts making remarks to the effect that
there isa real world out there somewhere, with real things that have real properties, and we need to
find out what all those things are, so let's samplethose properties with some coin flipping. The
Bayesian replies, yes, sure, and how do we do that,all we're really doing is flipping the coin, right?
They start different journals, and off we go.
So given the frequentist approach, what you want to do is, under suitably controlled conditions, take
samples and compare them with the image of reality you have: a real coin has equal chance of heads
or tails. You look at your samples and use the math to see how likely those samples are, given what
you think a real coin is. If they're way off, then you either look at how you're sampling, or
you revise your ideas about coins.
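Not part of the original answer, but the frequentist check just described can be sketched in a few lines of Python. The idea: assume the coin really is fair, then compute how probable an outcome at least as extreme as the observed one would be (the 60-heads-in-100-flips numbers are my own illustration, not from the source):

```python
from math import comb

def binomial_pvalue(heads, flips, p=0.5):
    """Two-sided exact binomial p-value: the probability, under a fair
    coin, of any outcome no more likely than the one observed."""
    def prob(k):
        # Probability of exactly k heads in `flips` tosses of a p-coin.
        return comb(flips, k) * p**k * (1 - p)**(flips - k)
    observed = prob(heads)
    # Sum the probabilities of all outcomes at most as likely as the observed one.
    return sum(prob(k) for k in range(flips + 1) if prob(k) <= observed + 1e-12)

# 60 heads in 100 flips: p ~ 0.057, borderline at the usual 5% level.
print(round(binomial_pvalue(60, 100), 3))
```

If the p-value is tiny, the frequentist concludes the "real coin" picture (or the sampling) needs revising; otherwise the data are compatible with a fair coin.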
In the Bayesian approach, you start with a prior estimate, that heads and tails have equal probability,
for example, take the samples, do the math, and find what the new probabilities are based on the
initial estimate and the new data. You just keep doing that until you get a good idea of where the
system is going, i.e., until things don't change too much after a while. At that point maybe you say
something about what real coins are like, or maybe you don't, if you're really hard-core.
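Again not in the original answer, but the Bayesian update loop just described has a standard minimal form, the conjugate Beta-Binomial update: the prior is a Beta(alpha, beta) distribution over P(heads), and each batch of flips just adds to the pseudo-counts (the batch numbers below are my own illustration):

```python
def update(alpha, beta, heads, tails):
    """Conjugate Beta prior update for coin flips: the posterior is
    obtained by adding the observed counts to the prior pseudo-counts."""
    return alpha + heads, beta + tails

# Beta(1, 1) is the flat "no strong opinion" prior.
a, b = 1.0, 1.0
for heads, tails in [(7, 3), (6, 4), (55, 45)]:  # successive batches of flips
    a, b = update(a, b, heads, tails)
    print(f"posterior mean for P(heads) = {a / (a + b):.3f}")
```

Each print shows the current estimate drifting as data accumulate; "things don't change too much after a while" corresponds to the posterior mean settling down as the counts grow.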
Both of these approaches have problems, but the Bayesian one is really the one favored now in many
areas, and I also like it myself. After all, shouldn't you do science by taking into account as much of
the previous data as you can, changing your idea of reality as you go, based on that data, rather than
keeping a constant idea with which to compare your data? Perhaps it comes down to a kind of
Kuhnian question: shouldn't one be on one's toes in case a paradigm shift has to be made? I think so.
You might take a look at what Popper (frequentist) and Carnap (Bayesian) have to say on this, but really,
there's tons of recent literature, easy to look up. I liked: Resnik, M. D. (1997) Choices: an introduction
to decision theory, but it's not my area, and there are probably better introductions.
Steven Ravett Brown