Yep, that's what I meant. Sorry about not being clear earlier.
-Bernard
-----------------------
Bernard L. Fraga
Ph.D. Student, Harvard University
Government and Social Policy
bfraga at
fas.harvard.edu
-----------------------
Not sure what you mean by "specify error artificially" (perhaps you are
talking about how we generated our fake data?), but I hope the following
explanation helps.
The model from section (specifically the part on simulating quantities
of interest) was a linear regression model. So sigma squared is part of
the model and part of the likelihood that we maximize. It is a necessary
parameter and must be estimated and simulated along with the other
parameters (i.e., the beta coefficients). Whether or not there is a
sigma squared parameter simply depends on your model -- your model tells
you what parameters to estimate, simulate, etc.
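To make that concrete, here is a minimal sketch of the whole pipeline in Python with numpy/scipy (the course code is in R; the fake data, variable names, and optimizer settings here are all made up for illustration). The key move is the same as in the section code: log(sigma^2) sits in the last slot of the parameter vector, gets maximized over along with the betas, and gets simulated from the same multivariate normal.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(12345)

# Fake data from a known linear model: y = X beta + N(0, sigma^2).
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
true_beta = np.array([1.0, 2.0, -1.0])
true_sigma2 = 4.0
y = X @ true_beta + rng.normal(scale=np.sqrt(true_sigma2), size=n)

def neg_loglik(par, X, y):
    # The parameter vector is (beta_0, ..., beta_k, log(sigma^2)):
    # sigma^2 is estimated right alongside the betas.
    beta, log_s2 = par[:-1], par[-1]
    s2 = np.exp(log_s2)  # reparameterize so sigma^2 stays positive
    resid = y - X @ beta
    return 0.5 * (len(y) * np.log(2 * np.pi * s2) + resid @ resid / s2)

res = minimize(neg_loglik, x0=np.zeros(X.shape[1] + 1),
               args=(X, y), method="BFGS")
beta_hat = res.x[:-1]
sigma2_hat = np.exp(res.x[-1])  # "pull out" sigma^2 from the last slot

# Simulation step, as in the section code: draw parameters from a
# multivariate normal centered at the MLEs (BFGS's inverse-Hessian
# approximation stands in for the variance-covariance matrix here).
par_draws = rng.multivariate_normal(res.x, res.hess_inv, size=1000)
beta_draws = par_draws[:, :-1]
sigma2_draws = np.exp(par_draws[:, -1])  # last column is log(sigma^2)
```

The exp/log reparameterization is why the R code below applies `exp()` to the fifth column of the draws: the optimizer works on an unconstrained scale, and exponentiating maps each draw back to a positive sigma^2.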
Best,
Miya
On Sat, Mar 7, 2009 at 6:16 PM, Bernard L. Fraga <bfraga at fas.harvard.edu> wrote:
Sounds good. Quick question: how do we pull out the sigma^2 from a
model when we don't specify error artificially (like in the section
slides)?
Thanks in advance.
-Bernard
-----------------------
Bernard L. Fraga
Ph.D. Student, Harvard University
Government and Social Policy
bfraga at
fas.harvard.edu
-----------------------
On Mar 7, 2009, at 1:44 AM, Patrick Lam wrote:
> Recall that sigma^2 is a parameter just like the betas. It is the
> variance of the normal distribution that the y's are drawn from.
> In our setup, we are trying to maximize over all the parameters
> (which include 4 betas and 1 sigma^2), so the last term of the
> parameter vector happens to be the sigma^2 in the way we set it
> up. It's no different than if we were trying to maximize for two
> pi's in the last HW, where the last item in the parameter vector
> was pi_2.
>
> On Fri, Mar 6, 2009 at 8:11 PM, charlotte cavaille <charlotte.cavaille at gmail.com> wrote:
> Dear Miya
>
> I have a question on the lecture notes: in the following piece of R
> code I don't get the part in bold:
>
> # Predicted Values
> set.seed(12345)
> M <- 1000
> par.draws <- mvrnorm(M, mu=opt.new.par, Sigma=opt.new.vcv)
> beta.draws <- par.draws[,-5]
> sigma2.draws <- exp(par.draws[,5])
> X.c <- apply(X.data, 2, mean)
> mu.c <- c()
> for(i in 1:nrow(beta.draws)){
> mu.c[i] <- X.c %*% beta.draws[i,]
> }
> m <- 1
> Y.c.predict <- c()
> for(i in 1:length(mu.c)){
> Y.c.predict[i] <- rnorm(m, mean=mu.c[i], sd= sqrt(sigma2.draws[i]))
> }
> mean(Y.c.predict); sd(Y.c.predict)
> #84.92893
> #2.005887
>
>
> why is the 5th column of the output of a multivariate normal
> sampling that of the variances?
>
> Thanks!
>
> Charlotte
>
> _______________________________________________
> gov2001-l mailing list
> gov2001-l at
lists.fas.harvard.edu
>
http://lists.fas.harvard.edu/mailman/listinfo/gov2001-l
>
>
>
>
> --
> Patrick Lam
> Department of Government and Institute for Quantitative Social
> Science, Harvard University
>
http://www.people.fas.harvard.edu/~plam
--
Miya Woolfalk
Ph.D. Student
Harvard University
Government and Social Policy