On Sun, 23 Feb 2003, Ryan Davies wrote:
Hey Professor King,
I'm a little confused by the way you define the negative
binomial distribution in your lecture notes. Sheldon Ross gives
P{X = n} = ((n - 1) choose (r - 1))*p^r*(1-p)^(n-r)
and the intuitive explanation that n is the number of trials of a
binomial variable until r successes are attained. I can't see how this
is equivalent to the distribution you give in your lecture notes. So I
was wondering if they are in fact the same and I just can't spot how, or
if they're different (and if so, why they're called the same thing).
Thanks,
Ryan
It's a good question. These distributions can often be used for different
purposes, and Ross gives probably the more common usage, but one less useful
for us. What I did in my notes and in my book was to find the mean of the
distribution, E(Y), and call that mu, and the variance, Var(Y), and call that
mu*sigma^2. Then I set the mean (a function of p, r, n) equal to mu and
the variance (also a function of p, r, n) equal to mu*sigma^2. Then, fixing
n, I solved for p and r. Finally, I substituted the right-hand sides of these
equations in for p and r, and the result was what I give. (I might have r and
n switched; I don't have his book in front of me at the moment.) Give it
a try if you have time; this is exactly the kind of thing you will often
need to do if you are trying to adapt a distribution for a new purpose.
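One plausible reading of the substitution described above (my reconstruction, not a quote from the notes): if Y counts the failures before the r-th success, then E(Y) = r(1-p)/p and Var(Y) = r(1-p)/p^2. Setting E(Y) = mu and Var(Y) = mu*sigma^2 and solving gives p = 1/sigma^2 and r = mu/(sigma^2 - 1), with sigma^2 > 1. A quick sketch that checks this round-trip numerically:

```python
# Hedged sketch: verify that the reparameterization p = 1/sigma^2,
# r = mu/(sigma^2 - 1) recovers mean mu and variance mu*sigma^2.
# This assumes Y counts failures before the r-th success; if the notes
# count trials instead, the algebra shifts slightly.

def nb_mean_var(r, p):
    """Mean and variance of Y = failures before the r-th success."""
    mean = r * (1 - p) / p
    var = r * (1 - p) / p ** 2
    return mean, var

def reparam_to_standard(mu, sigma2):
    """Map (mu, sigma^2), sigma^2 > 1, back to the standard (r, p)."""
    p = 1.0 / sigma2
    r = mu / (sigma2 - 1.0)
    return r, p

mu, sigma2 = 4.0, 2.5
r, p = reparam_to_standard(mu, sigma2)
mean, var = nb_mean_var(r, p)
print(mean, var)  # should recover mu and mu * sigma2
```

Note that r need not be an integer here, which is exactly why this form is more flexible for modeling overdispersed counts.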
I'm a little confused about "setting the variance to mu*sigma^2" - isn't
sigma^2 the old variance, i.e. r(1-p)/p^2? Also, do you think you could help
me understand why the gamma distribution is the proper distribution to
draw lambda from? (And also why the beta is chosen for the beta-binomial
for the similar purpose.)