Hi all,
I am having trouble loading ussu.rdata. When I type ussu <- load("ussu.rdata") and
then print ussu, I get ".Traceback" "ussu". I've looked at the file in a text editor
and in Excel, and I can't make out the table of numbers. If anyone else has
successfully loaded the file, I would appreciate his or her suggestions.
Thanks,
Laurence
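For reference, load() restores the saved objects into the workspace and returns only a
character vector of their names, so assigning its result to ussu replaces the restored
data with that vector of names, which matches the ".Traceback" "ussu" output above. A
minimal sketch of the usual pattern (assuming the file really does contain an object
named ussu):

# load() puts the saved objects into the current environment and returns their names
loaded <- load("ussu.rdata")
loaded        # character vector of restored object names, e.g. "ussu"
ls()          # ussu should now appear in the workspace
head(ussu)    # inspect the restored object directly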
Dear List,
In Problem 1, Part C, when it says "in a table, present the following," am I right
in thinking that the question just asks for one number (like a single percentage)?
Or is this table supposed to have 78 rows, with a True/False statement for each
observation?
Thanks
Didi
Hello classmates,
I am running into a snag with my negative binomial log likelihood
function that I cannot seem to fix. My code and error are below:
> #NEGATIVE BINOMIAL LOG-LIKELIHOOD FUNCTION
> ll.negbin <- function(par, X, Y){
+ theta <- X%*%par[1:ncol(X)]
+ gamma <- par[(ncol(X)+1)]
+ sigma2 <- exp(gamma)
+ out <- sum(lgamma((theta/(sigma2 - 1)) + 1) - lgamma(Y + 1) -
+   lgamma(theta/(sigma2 - 1)) + Y*log((sigma2 - 1)/sigma2) -
+   (theta/(sigma2 - 1))*log(sigma2))
+ }
>
> #OPTIMIZE
> opt <- optim(c(0,0,0,0,0), ll.negbin, X = X, Y = Y, method = "BFGS",
+   control = list(fnscale = -1), hessian = TRUE)
Error in optim(c(0, 0, 0, 0, 0), ll.negbin, X = X, Y = Y, method = "BFGS", :
  initial value in 'vmmin' is not finite
So there's some problem with my log likelihood function, but I'm not
sure what it is. To elaborate on the problem, here are the results
when I evaluate the function outside of optim:
> test <- ll.negbin(par=c(0,0,0,0,0), X=X, Y=Y);test
[1] NaN
What is my problem here?
Thank you,
Keith
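For anyone hitting the same NaN, tracing the function at those starting values shows
where it breaks. A quick sketch (assuming X has four columns, as the five starting
values suggest):

par0 <- rep(0, 5)
theta <- X %*% par0[1:ncol(X)]    # all zeros at the starting values
sigma2 <- exp(par0[ncol(X) + 1])  # exp(0) = 1 exactly
sigma2 - 1                        # 0, so every division below is by zero
theta / (sigma2 - 1)              # 0/0 = NaN, which propagates through lgamma() and sum()

So the all-zero start is outside the region where this parameterization is finite: it
needs sigma2 > 1 and theta > 0 before the log-likelihood, and hence optim's initial
value, is finite.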
Did anyone else get this warning?
> snctmodel <- zelig(RES ~ IMPORT + COST + TARGET + COOP +
TARGET*COOP, model="oprobit", data=snct)
Warning message:
In function (formula, data, weights, start, ..., subset, na.action, :
design appears to be rank-deficient, so dropping some coefs
What causes this? Does it mean the estimates are wrong?
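One way to see what the warning is about: a rank-deficient design means some columns of
the model matrix are linear combinations of others, so the fitter drops the redundant
coefficients rather than estimating them (which is what the message says). A quick check
on the design, as a sketch assuming snct is loaded; note also that TARGET*COOP already
expands to TARGET + COOP + TARGET:COOP in an R formula:

mm <- model.matrix(~ IMPORT + COST + TARGET + COOP + TARGET*COOP, data = snct)
ncol(mm)      # number of columns the formula generates
qr(mm)$rank   # if smaller than ncol(mm), some columns are redundant and get dropped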
Yes, you can use PowerPoint, IFF (so that's if and ONLY if) you send us your slides by
Wednesday midnight, so we can fit them into the slideshow with the abstracts.
But remember that you only have three minutes!
j.
From: Jose Luis Romo Cruz [mailto:jose_luis_romo at ksg08.harvard.edu]
Sent: Tuesday, April 15, 2008 8:00 PM
To: jhainmueller at gmail.com; jmlarson at fas.harvard.edu
Cc: viridianarios at gmail.com
Subject: Abstract and Title
Hi,
Attached you will find our abstract and title of our article. We have some
results that we would like to present to the group. Can we use Power Point?
Thanks a lot
Jose Luis Romo Cruz
Harvard University | Kennedy School of Government
Master in Public Policy 2008
(857) 233 7289
jose_luis_romo at ksg08.harvard.edu
OK, I think I understand now (I misremembered the number in the PS and thought you were
referring to Part A), so ignore my previous comment.
For Part C you should do exactly what you had in mind: switch IMPORT for
each observation and then check whether these N counterfactuals are inside the
convex hull of the actually observed data. Hth, Jens
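For reference, the flip-and-check step could be set up roughly as below. This is only a
sketch: it assumes the covariate data frame is called Xobs and uses King and Zeng's
WhatIf package for the convex hull test, neither of which is spelled out above.

library(WhatIf)                 # convex hull / extrapolation checks
Xcf <- Xobs                     # counterfactuals: same data with IMPORT switched
Xcf$IMPORT <- 1 - Xobs$IMPORT   # 1 -> 0 and 0 -> 1 for every observation
hull <- whatif(data = Xobs, cfact = Xcf)
hull$in.hull                    # TRUE/FALSE for each of the N counterfactuals
mean(hull$in.hull)              # share of counterfactuals inside the convex hull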
From: Marcy McCullaugh [mailto:marcy52080 at gmail.com]
Sent: Tuesday, April 15, 2008 12:47 PM
To: jhainmueller at gmail.com
Subject: Re: [gov2001-l] Clarification
Hi Jens,
Yes, I understood that for Part B and did so accordingly, but I am still
confused as to how to set up Part C, which is what my e-mail was asking
about. It says to create a factual dataset "as the world was observed,"
which I thought referred to just the original sanctions dataset, and a
counterfactual dataset "where IMPORT is switched for all observations." I
assumed this meant we're supposed to switch the 1's to 0's and the 0's to
1's, but I just wanted a clarification. If we are supposed to do a
treatment vs. control set-up for the convex hull question a la the section
notes and Part B, then the question was worded quite unclearly.
Thank you,
Marcy
On Tue, Apr 15, 2008 at 9:31 AM, Jens Hainmueller <jhainmueller at gmail.com>
wrote:
The question asks for the difference in the expected probabilities between a
state of the world where IMPORT is 0 and one where it is 1, so you should set the
X accordingly (it says nothing about imports varying by person).
Hth,
jens
From: gov2001-l-bounces at lists.fas.harvard.edu
[mailto:gov2001-l-bounces at lists.fas.harvard.edu] On Behalf Of Marcy
McCullaugh
Sent: Tuesday, April 15, 2008 12:26 PM
To: gov2001-l at lists.fas.harvard.edu
Subject: [gov2001-l] Clarification
Hi,
I am a little confused about the wording in Q1, Part C:
For the counterfactual dataset, are we supposed to switch all import
observations, where a 1 becomes 0 and a 0 becomes 1, or are we just supposed
to switch all the 0's to 1's per #2 in Part B?
Thanks,
Marcy
--
Marcy E. McCullaugh
Ph.D. student
Department of Political Science
University of California, Berkeley
210 Barrows Hall
Berkeley, CA 94720
Exchange Scholar 2007-08
Department of Government and
Davis Center for Russian and Eurasian Studies
Harvard University
1730 Cambridge Street, 3rd Floor
Cambridge, MA 02138
Hi all,
I understand that a bootstrap (like we did in an early problem set) is
one way to estimate the variability of a point estimate. However, I am
not sure how one should interpret bootstrapped standard errors: it
seems like they should almost always be larger than the original (say,
OLS) errors, because we're always using a subset of the data and so
fewer degrees of freedom with each simulation. How can one use
the bootstrap to show that the standard errors for an estimation
technique used on the whole dataset underestimate the variability in
the point estimates? Does the difference between the regular and
bootstrapped standard errors just have to be huge?
--
Jon Bischof
Graduate Student
Department of Government
Harvard University
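As a concrete reference point, a nonparametric bootstrap resamples the rows with
replacement, so each replicate uses the full n rather than a subset, and the bootstrap
standard error is just the standard deviation of the re-estimated coefficient across
replicates. A minimal sketch with simulated data (not the problem-set data):

set.seed(1)
n <- 200
dat <- data.frame(x = rnorm(n))
dat$y <- 1 + 2 * dat$x + rnorm(n)

fit <- lm(y ~ x, data = dat)

# Resample rows with replacement, re-fit, and collect the slope estimates.
B <- 1000
boot.slopes <- replicate(B, {
  idx <- sample(nrow(dat), replace = TRUE)
  coef(lm(y ~ x, data = dat[idx, ]))["x"]
})

sd(boot.slopes)                      # bootstrap standard error of the slope
summary(fit)$coefficients["x", 2]    # analytic OLS standard error, for comparison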
Hey all,
I'm having trouble with the log-likelihood for the Poisson
distribution. Gary's book says that the LL function is:
lambda <- X%*%par
out <- sum((lambda%*%t(y))-exp(lambda))
However, lambda %*% t(y) is an n x n matrix and exp(lambda) is an n x 1 matrix, so
they cannot be subtracted. Furthermore, we want in the end a k x 1
vector of betas, and there is no dimension of length k in either of these
matrices. Clearly I have somehow misinterpreted the slides, but I'm
not sure how. Any ideas for getting out the betas?
--
Jon Bischof
Graduate Student
Department of Government
Harvard University
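For reference, the term inside the sum is elementwise (y_i times x_i'beta for each
observation), so no outer product is involved, and the betas come out of the optimizer
rather than the likelihood itself. A minimal sketch of that reading (assuming X is the
n x k design matrix and Y the length-n count vector):

# Poisson log-likelihood, dropping the -log(y!) constant:
# sum_i [ y_i * (x_i'beta) - exp(x_i'beta) ]
ll.poisson <- function(par, X, Y) {
  lambda <- X %*% par              # n x 1 linear predictor
  sum(Y * lambda - exp(lambda))    # elementwise product, then sum over observations
}

opt <- optim(rep(0, ncol(X)), ll.poisson, X = X, Y = Y,
             method = "BFGS", control = list(fnscale = -1), hessian = TRUE)
opt$par                            # the k estimated betas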