hi!
Quick (but fundamental) question.
In exercise 1 gamma needs to be positive, so I re-parameterized... In
order to get the right MLE from optim() I didn't forget to transform
back ("re-parameterize" again), and I get the same result as the one I
found analytically. However, things change when I look for the SE: I
get different answers analytically and with R (using the Hessian). I
know this comes from the re-parameterization, because I don't have this
issue when I rewrite the whole function without re-parameterizing
gamma. So my questions are:
- if I re-parameterize, how do I apply the transformation to the
Hessian to get the right result? And how does that fit with the
section notes that follow: why do we take pnorm (the first
transformation applied in the ll.binom function) of
"opt.1000 - 1.96*se", and not of "se", for instance?
# binomial log-likelihood (N = # of trials for each observation)
ll.binom <- function(par, y, N){
  # re-parameterize pi: pnorm maps the real line into [0, 1],
  # so optim() can search over all reals
  p <- pnorm(par)
  # log-likelihood
  out <- sum(y*log(p) + (N-y)*log(1-p))
  return(out)
}
# compare to Wald CI
se <- sqrt(solve(-optim(par=2, fn=ll.binom, y=samp.1000, N=10,
                        method="BFGS", control=list(fnscale=-1),
                        hessian=TRUE)$hessian))
wald.ci <- c(pnorm(opt.1000 - 1.96*se), pnorm(opt.1000 + 1.96*se))
wald.ci # 0.7364839 0.7535663
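For context, here is a self-contained version of the comparison I mean. I simulate the data myself (set.seed(1) and p = 0.75 are made-up values, not the course data), and compute the SE on the pnorm scale via what I believe is the delta method, se(p.hat) ~ dnorm(theta.hat) * se(theta.hat):

```r
# simulated binomial data (hypothetical values, not the course data)
set.seed(1)
N <- 10
samp <- rbinom(1000, size = N, prob = 0.75)

# same log-likelihood as in the notes, re-parameterized through pnorm
ll.binom <- function(par, y, N){
  p <- pnorm(par)
  sum(y*log(p) + (N-y)*log(1-p))
}

fit <- optim(par = 2, fn = ll.binom, y = samp, N = N,
             method = "BFGS", control = list(fnscale = -1),
             hessian = TRUE)

theta.hat <- fit$par                    # MLE on the unconstrained scale
se.theta  <- sqrt(solve(-fit$hessian))  # SE on the unconstrained scale
p.hat     <- pnorm(theta.hat)           # MLE of the probability

# delta method: se(p.hat) ~= |g'(theta.hat)| * se(theta.hat),
# with g = pnorm, so g' = dnorm
se.p <- dnorm(theta.hat) * se.theta

# analytic SE of a binomial proportion, for comparison
se.analytic <- sqrt(p.hat * (1 - p.hat) / (1000 * N))
```

On this simulated data the delta-method SE and the analytic SE agree to several decimals, which is exactly the agreement I'm failing to get in the gamma exercise.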
- if I do not re-parameterize, in order to be done with it and have
both my analytical and my R results match, how can I justify not
re-parameterizing gamma?!
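In case it helps to see what I mean for the gamma case: assuming a re-parameterization gamma = exp(theta) to keep it positive (my guess at what the exercise intends, and with a hypothetical exponential model rather than the actual exercise 1 model), the same delta-method logic would give se(gamma.hat) ~ gamma.hat * se(theta.hat), since d/dtheta exp(theta) = exp(theta):

```r
# hypothetical example: y ~ Exponential(rate = gamma), gamma > 0,
# re-parameterized as gamma = exp(theta) so optim() is unconstrained
set.seed(2)
y <- rexp(500, rate = 2)

ll.exp <- function(theta, y){
  g <- exp(theta)                  # gamma = exp(theta) > 0
  sum(dexp(y, rate = g, log = TRUE))
}

fit <- optim(par = 0, fn = ll.exp, y = y, method = "BFGS",
             control = list(fnscale = -1), hessian = TRUE)

g.hat    <- exp(fit$par)                # transform the MLE back
se.theta <- sqrt(solve(-fit$hessian))   # SE on the theta scale
se.g     <- g.hat * se.theta            # delta method: g'(theta) = exp(theta)

# analytic SE of the exponential-rate MLE, for comparison: gamma.hat / sqrt(n)
se.analytic <- g.hat / sqrt(length(y))
```

With that back-transformation the Hessian-based SE and the analytic one line up for me, which makes me think the two questions above have the same answer.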
thanks!
charlotte