Hi 2001,
Quick FYI: I just uploaded an edited version of PS5, i.e. different from the
version posted earlier today. The changes are minor, but they should make
parts d-f clearer.
Also, note that the data come in a matrix, but R reads each element as a
character. Most operations we use in R require numeric rather than
character elements, so one fix is to use something like:
X <- matrix(data = NA, nrow = 23, ncol = 3)
colnames(X) <- c("Intercept","Temperature","Pressure")
X[,1] <- 1
X[,2] <- as.numeric(data[,3])
X[,3] <- as.numeric(data[,4])
y <- as.numeric(data[,2])
Cheers,
Jenn
My quadratic approximation seems to have a maximum near my MLE, but
it is vertically way off. I'm not sure whether this should matter, but
it doesn't look like that in the example, so I think it means something
has gone wrong. I was hoping somebody could take a look at my code for
the quadratic approximation and give me a clue as to where I am
going wrong. I have implemented the Fisher information as follows:
fi <- function(par, x = dat$yearsinoffice){
  lambda <- par[1]
  out <- -1 * (((-1 * length(x)) / mean(x)^2) +
               ((2 * x + 2 * length(x)) / mean(x)^3))
  return(out)
}
I made vectors for plotting the log-likelihood (ll) and quadratic
approximation like this:
a <- seq(from = 3, to = 10, by = .01)
out <- c()
for (i in 1:length(a)){
  out[i] <- ll(a[i], x = dat$yearsinoffice)
}
out2 <- c()
for (i in 1:length(a)){
  out2[i] <- -1/2 * fi(mle.num$par, x = dat$yearsinoffice) * (a[i] - mle.num$par)^2
}
This does not seem to cause problems for the maximum value of my
quadratic approximation, but it makes it far enough off on the y
axis that I can't get them on the same chart. I know the MLE should
not be affected by shifting the curve up and down, but this makes me
suspicious that I've implemented something wrong and that my confidence
intervals might be wrong as a result.
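A hedged guess at the offset: the usual quadratic (Taylor) approximation of the log-likelihood includes the constant term ll(mle), i.e. ll(mle) - (1/2)*I(mle)*(a - mle)^2; out2 above drops that constant, so the approximation peaks at 0 instead of at ll(mle). A toy sketch, assuming an exponential model parameterized by its mean mu (the actual ll() and yearsinoffice data will differ):

```r
# Toy sketch: exponential model with mean mu (an assumption --
# substitute your own ll() and data).
set.seed(1)
x <- rexp(40, rate = 1/6)                 # hypothetical stand-in data
n <- length(x)

ll     <- function(mu, x) -length(x) * log(mu) - sum(x) / mu
mu.hat <- mean(x)                         # the MLE under this model
info   <- n / mu.hat^2                    # observed information at mu.hat

a     <- seq(from = 3, to = 10, by = .01)
exact <- sapply(a, ll, x = x)

# Quadratic approximation WITH the constant term ll(mu.hat):
quad <- ll(mu.hat, x) - 0.5 * info * (a - mu.hat)^2

# Dropping ll(mu.hat) shifts the whole curve so it peaks at 0.  That is
# harmless for the MLE and for curvature-based confidence intervals,
# but the curve will no longer overlay the log-likelihood.
plot(a, exact, type = "l", ylab = "log-likelihood")
lines(a, quad, lty = 2)
```

Adding the log-likelihood at the MLE as a constant in out2 should put both curves on the same vertical scale.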
A few questions on calculating standard errors (question e):
If I am using my formula for the Fisher information to calculate the SE, what
can I do to check that I am in fact getting the right SE?
I am trying to use the Hessian output from optim() to get the variance-
covariance matrix (as in the section R code) and calculate the SE. However, if
I use different reparameterizations in my log-likelihood function, the Hessian
output changes. What happens to the Hessian when there is a reparameterization,
and is there a way to undo the reparameterization so that it leads to the
right variance-covariance matrix?
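On the reparameterization question, one standard answer is the delta method: the Hessian from optim() is on whatever scale you optimized over, and the variance transforms by the squared derivative of the back-transformation. A sketch assuming, hypothetically, that the reparameterization is theta = log(lambda) in an exponential model (substitute your own model and transformation):

```r
# Hypothetical setup: exponential data with rate lambda, optimized over
# theta = log(lambda) so the parameter is unconstrained.
set.seed(1)
x <- rexp(50, rate = 0.5)

# Negative log-likelihood on the theta scale:
# l(lambda) = n*log(lambda) - lambda*sum(x), with lambda = exp(theta)
nll.theta <- function(theta, x) -(length(x) * theta - exp(theta) * sum(x))

fit <- optim(par = 0, fn = nll.theta, x = x, method = "BFGS",
             hessian = TRUE)

# The Hessian (and hence the variance) is on the theta scale ...
var.theta <- 1 / fit$hessian[1, 1]

# ... and the delta method maps it back:
# Var(lambda.hat) ~ (d lambda / d theta)^2 * Var(theta.hat),
# where d lambda / d theta = exp(theta) = lambda.
lambda.hat <- exp(fit$par)
var.lambda <- lambda.hat^2 * var.theta
se.lambda  <- sqrt(var.lambda)
```

A sanity check on whether the SE is right: var.lambda should match what you get by optimizing directly over lambda (for this toy model, analytically lambda.hat^2 / n).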
thanks,
Jane
I am still hung up on how to compute the Fisher information. Could
somebody briefly describe how it was computed in the example in
section? As I mentioned before, I have just been putting a negative
sign in front of the second derivative of the log-likelihood
function, but this produces results that are not intuitive (such as
-n in the in-class example).
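For what it's worth, the sign convention is the usual stumbling block: the observed information is minus the second derivative of the log-likelihood evaluated at the MLE, and it should come out positive. A sketch, assuming an exponential model with rate lambda (the in-class example may differ), comparing the analytic information with the numeric Hessian from optim():

```r
# Assumed model: x_i ~ Exponential(lambda), so
#   l(lambda)   = n*log(lambda) - lambda*sum(x)
#   l''(lambda) = -n / lambda^2
# and the observed information is -l''(lambda) = n / lambda^2,
# which is positive, as it should be.
set.seed(1)
x <- rexp(50, rate = 0.5)

ll <- function(lambda, x) length(x) * log(lambda) - lambda * sum(x)
fi <- function(lambda, x) length(x) / lambda^2

# optim() minimizes, so hand it -ll; the returned Hessian is then the
# second derivative of -ll, i.e. already the observed information.
fit <- optim(par = 1, fn = function(p) -ll(p, x),
             method = "Brent", lower = 0.01, upper = 10, hessian = TRUE)

fit$par            # close to the analytic MLE, 1 / mean(x)
fit$hessian[1, 1]  # close to fi(fit$par, x), and positive
```

A negative result like -n usually points to a dropped minus sign, or to evaluating the second derivative at the wrong quantity (e.g. the data mean where the parameter should be).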
We are new to R and are trying to use the pdata.frame command in the plm
package (panel model estimation for mixed-effects models). We receive this
message: "Error: could not find function "pdata.frame"", even though we have
installed the plm and Ecdat packages. If you could provide us with any
suggestions on how to proceed, we would greatly appreciate it.
Best,
Kelly, Mila and Maria Angelica
_____________________________________
Kelly Bay
Brown University
Department of Political Science
Kelly_Bay at brown.edu
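The usual cause of "could not find function" after a successful install is that the package was downloaded but never attached in the current session with library(). A sketch (the toy panel data here are hypothetical):

```r
# install.packages("plm")   # downloads the package (needed once)
library(plm)                # attaches it (needed in every new session)

# Quick check that pdata.frame() is now visible, on a toy panel
# (hypothetical data -- substitute your own):
toy <- data.frame(id   = rep(1:3, each = 2),
                  year = rep(2000:2001, times = 3),
                  y    = rnorm(6))
p <- pdata.frame(toy, index = c("id", "year"))
class(p)   # "pdata.frame" "data.frame"
```

Installing a package and loading it are separate steps, so library(plm) has to be run again in every R session before pdata.frame() is available.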
I'm going through the section code and trying to figure out why
you've used I() in drawing the first graph; it doesn't seem to make
any difference if I take it out. I've checked the help files, but
they aren't a great deal of help.
Jeremy
Dr Jeremy Hodgen
Senior Lecturer in Mathematics Education
King's College London
Department of Education and Professional Studies
Franklin-Wilkins Building
Waterloo Bridge Wing
150 Stamford Street
London SE1 9NH
Tel: 020 7848 3102
Fax: 020 7848 3182
E-mail: jeremy.hodgen at kcl.ac.uk
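A hedged guess at what I() does: inside a formula, operators like + and ^ have special model-building meanings, and I() tells R to treat the expression as literal arithmetic instead; outside a formula it is just the identity function, so removing it there changes nothing, which may be why the graph looks the same. A sketch of a case where it does matter:

```r
# In a model formula, x^2 without I() is expanded by the formula
# language (it collapses back to x), so no squared term is fitted.
set.seed(1)
x <- 1:10
y <- x^2 + rnorm(10)

fit.no.I   <- lm(y ~ x + x^2)     # intercept + x: 2 coefficients
fit.with.I <- lm(y ~ x + I(x^2))  # intercept + x + x^2: 3 coefficients

length(coef(fit.no.I))    # 2
length(coef(fit.with.I))  # 3
```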
I've managed to install Zelig. Thanks all for the helpful suggestions.
--Laurence
On Fri, Mar 7, 2008 at 1:06 AM, Jane E. Vaynman <jvaynman at fas.harvard.edu>
wrote:
>
> Laurence-
>
> For Mac, I did this:
>
> Go to the Packages & Data heading in the top navigation and select
> Package Installer. In the repositories drop-down, select "Other
> Repository" and, in the field to the right, enter
> http://gking.harvard.edu. Uncheck the "binary format packages"
> checkbox, then hit the Get List button. Zelig should come up along
> with others; select it and press Install.
>
> -jane
Hi all,
I'm still trying to install Zelig on Mac OS X (10.4.11). Following the
instructions in the handbook doesn't seem to work, whether I type the
commands in R or try to download the package manually and load it into R.
Error messages I get when I try it in R include the following:
Warning: unable to access index for repository
http://gking.harvard.edu/bin/macosx/universal/contrib/2.6
trying URL 'http://cran.cnr.berkeley.edu/bin/macosx/universal/contrib/2.6/zoo_1.4-2.tgz'
...and later...
Warning message:
package 'Zelig' is not available
I would appreciate any suggestions for getting Zelig to install properly.
Thanks,
Laurence
In parts b and c, I was unsure how to treat N observations with T trials
each. Are we just calculating the probability of k successes in N*T trials,
or something else? I did it this way, but it seemed that the problem would
have just said N trials, since we would have done essentially the same thing.
I hope my question is clear. Basically, I want to know whether
this information changes the problem from the standard way of
thinking about a binomial distribution (a number of trials, k, p) to
something else (for example, where a successful observation is one
in which all T trials succeed, etc.)
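One way to check the pooling intuition: if all N*T trials are independent with the same p, the total number of successes is Binomial(N*T, p), and the likelihood for p is the same (up to a constant) whether you treat the data as N separate Binomial(T, p) observations or as one pooled count. A sketch with made-up numbers (N, T, and p here are hypothetical):

```r
set.seed(1)
N <- 20; trials <- 5; p.true <- 0.3
k <- rbinom(N, size = trials, prob = p.true)  # successes per observation

# Log-likelihood from N separate Binomial(trials, p) observations ...
ll.sep  <- function(p) sum(dbinom(k, size = trials, prob = p, log = TRUE))
# ... and from one pooled Binomial(N * trials, p) count:
ll.pool <- function(p) dbinom(sum(k), size = N * trials, prob = p,
                              log = TRUE)

# Both are maximized at the same p.hat = sum(k) / (N * trials):
p.sep  <- optimize(ll.sep,  c(0.01, 0.99), maximum = TRUE)$maximum
p.pool <- optimize(ll.pool, c(0.01, 0.99), maximum = TRUE)$maximum
```

By contrast, if a "success" meant an observation in which all T trials succeed, each observation would be Bernoulli(p^T) and the count of such observations would be Binomial(N, p^T), which is a genuinely different problem.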
Thanks,
Keith