Hi all,
PS 4 has been graded and will be returned on Monday if you turned in a hard
copy. Those who only submitted online already have their grades in their
dropbox. Just a few comments:
Problem 1:
- One assumption that many people left out is that we only observe the sum
of the Bernoulli trials for a binomial.
- When shifting the curve up and down, remember that you should be adding or
subtracting a constant, since it's a log-likelihood. If we were plotting the
likelihood instead, we would be multiplying by a constant.
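To see the add-a-constant vs. multiply-by-a-constant point numerically, here is a small sketch (in Python rather than R, purely for illustration; the numbers n = 10, y = 7, pi = 0.6 are made up). The constant chosen here is the log of the binomial coefficient, the usual term dropped from the kernel:

```python
import math

# Hypothetical binomial data: n = 10 trials, y = 7 observed successes.
n, y, pi = 10, 7, 0.6

# Log-likelihood kernel (constant term dropped): y*log(pi) + (n-y)*log(1-pi)
ll = y * math.log(pi) + (n - y) * math.log(1 - pi)

# Shifting the log-likelihood by a constant c...
c = math.log(math.comb(n, y))  # the dropped binomial-coefficient term
shifted_ll = ll + c

# ...is the same as multiplying the likelihood by exp(c):
lik = math.exp(ll)
scaled_lik = lik * math.exp(c)

# exp(ll + c) == exp(ll) * exp(c), so the two agree:
print(abs(math.exp(shifted_ll) - scaled_lik) < 1e-12)  # True
```

The shape of the log-likelihood (and the location of its maximum) is unchanged by the shift, which is why the constant can be dropped.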
- When reparameterizing pi, many of you said that we need to do this because
we cannot take the log of a negative number. While this is true, the main
reason we reparameterize is that pi must lie in [0,1] because it is a
probability. Hence, while we can take log(2), 2 is not an appropriate value
for pi.
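The standard way to enforce that constraint is the logit reparameterization: optimize over an unconstrained theta and map it back through the inverse logit, so every candidate value is a legal probability. A minimal sketch (Python for illustration; the function name inv_logit is my own):

```python
import math

def inv_logit(theta):
    """Map an unconstrained real theta to pi in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-theta))

# No matter what value the optimizer tries for theta,
# pi = inv_logit(theta) is always a valid probability:
for theta in (-5.0, 0.0, 2.0, 5.0):
    pi = inv_logit(theta)
    assert 0.0 < pi < 1.0
```

This is why the reparameterized log-likelihood can be maximized over the whole real line without ever producing an inadmissible pi like 2.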
Problem 2:
- While most of you were able to understand the indicator variables, many of
you programmed it so that the indicator variable was generated inside the LL
function. The indicator variable should instead be just another variable in
your data, not something generated inside the function. This keeps the
function as flexible as possible: if you generated the indicator inside your
function, your code would not work if I told you that all odd-numbered
observations came from one distribution and all even-numbered ones came from
another.
- Some of you programmed the LL function using ifelse statements or some
variant of them. While you may arrive at the correct answer, the ifelse
statements are completely unnecessary: you can program the actual
log-likelihood in one line using the indicator variable.
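To make both points concrete, here is an illustrative sketch (in Python rather than R, and using two normal densities with known sigma = 1 as stand-ins for the problem's actual distributions; the names ll, y, and z are mine). The indicator z is passed in as data, and the log-likelihood itself is one line with no ifelse:

```python
import math

def ll(params, y, z):
    """Log-likelihood where the indicator z (supplied as data, not
    built inside the function) flags which of two normal means
    generated each observation. Additive constants are dropped."""
    mu1, mu2 = params
    # One line: the indicator switches between the two log-densities.
    return sum(z_i * (-0.5 * (y_i - mu1) ** 2)
               + (1 - z_i) * (-0.5 * (y_i - mu2) ** 2)
               for y_i, z_i in zip(y, z))

# Because z comes in as data, any assignment works -- e.g. odd-numbered
# observations from one distribution, even-numbered from the other:
y = [1.1, 4.9, 0.8, 5.2]
z = [1, 0, 1, 0]
print(ll((1.0, 5.0), y, z))
```

When z_i = 1 the first term contributes and the second vanishes, and vice versa, so the single sum handles both distributions with no branching.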
I encourage you all to look at the answer key.
--
Patrick Lam
Department of Government and Institute for Quantitative Social Science,
Harvard University
http://www.people.fas.harvard.edu/~plam