Dear Class,
My partner and I are clustering the standard errors in a new way (one that we
think makes more sense in theory than what the authors were doing). Now, while
some clustered standard errors are very slightly bigger than the lm output,
some are actually smaller (in the same regression). Does anybody have any
intuition about what this might mean?
-Leslie and Adela
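A quick sketch of the intuition: the cluster-robust variance replaces lm's classical "meat" with per-cluster sums of scores, so it can come out either larger or smaller than the classical one depending on the sign of the within-cluster correlations. The example below is a minimal base-R construction with simulated data (all names are illustrative), computing both side by side:

```r
# Sketch: cluster-robust standard errors by hand (base R), compared
# against summary(lm(...))'s classical SEs. Data are simulated.
set.seed(1)
n  <- 200
cl <- rep(1:20, each = 10)                  # 20 clusters of 10 obs
x  <- rnorm(n)
y  <- 1 + 2 * x + rnorm(n) + rnorm(20)[cl]  # cluster-level noise
fit <- lm(y ~ x)

X <- model.matrix(fit)
u <- residuals(fit)
bread <- solve(crossprod(X))
# "meat": sum over clusters g of (X_g' u_g)(X_g' u_g)'
meat <- Reduce(`+`, lapply(split(seq_len(n), cl), function(i) {
  s <- crossprod(X[i, , drop = FALSE], u[i])
  tcrossprod(s)
}))
G <- length(unique(cl)); k <- ncol(X)
adj  <- (G / (G - 1)) * ((n - 1) / (n - k))  # common small-sample correction
V_cl <- adj * bread %*% meat %*% bread

se_classical <- sqrt(diag(vcov(fit)))
se_cluster   <- sqrt(diag(V_cl))
cbind(se_classical, se_cluster)
```

If the within-cluster correlation of the scores (residual times regressor) is negative for some coefficients, the clustered SE for those coefficients can fall below the classical one, so mixed directions within a single regression are not by themselves a sign of an error.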
_______________________________________________
gov2001-l mailing list
gov2001-l(a)lists.fas.harvard.edu
http://lists.fas.harvard.edu/mailman/listinfo/gov2001-l
--
gov2001 mailing list served by Harvard-MIT Data Center
List Address: gov2001(a)lists.gking.harvard.edu
Subscribe/Unsubscribe: http://lists.gking.harvard.edu/?info=gov2001
Dear Gov2001 Alumni-to-be:
In the past, I have kept this email list around for the year until the next
cohort starts the following spring, so that opportunities that might interest
the group (jobs, projects, collaborations, data, etc.) can be shared. Last
year, we switched to a Gov2001 Facebook group. So if you're so inclined,
and I hope you are, please join. Just click this:
http://www.facebook.com/home.php?sk=group_179456425415040&ap=1
Gary
--
*Gary King* - Albert J. Weatherhead III University Professor - Director,
IQSS - Harvard University
GKing.Harvard.edu <http://gking.harvard.edu/> - King(a)Harvard.edu -
@kinggary<http://twitter.com/kinggary>- 617-500-7570 - Asst 495-9271 -
Fax 812-8581
With regard to my last email, if it helps at all, the error that I'm getting
when I try to use fixed effects with ivreg is
Error in linearHypothesis.default(object, Rmat, vcov. = vcov., test = ifelse(df
> :
there are aliased coefficients in the model
It looks like aliased coefficients arise when some regressors are exact linear
combinations of others (perfect collinearity), for example when fixed-effect
dummies overlap with the intercept or with other variables, rather than from a
bimodal likelihood.
Anybody know how to fix this?
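A minimal base-R reproduction of the symptom, with simulated data, may help diagnose it: when one column of the design matrix is an exact linear combination of others, lm reports that coefficient as NA, and alias() names the dependency.

```r
# Sketch: "aliased coefficients" = perfectly collinear regressors.
# x3 is an exact linear combination of x1 and x2, so lm drops it.
set.seed(2)
x1 <- rnorm(50); x2 <- rnorm(50)
x3 <- x1 + x2                  # exact linear dependence
y  <- x1 - x2 + rnorm(50)
fit <- lm(y ~ x1 + x2 + x3)
coef(fit)    # the x3 coefficient is NA: aliased
alias(fit)   # shows x3 = x1 + x2
```

The usual fix is to find and drop the redundant column(s), e.g. a fixed-effect dummy that duplicates the intercept or another variable, before handing the model to ivreg / linearHypothesis.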
Hi Class,
Sorry for all the email harassment. I am using fixed effects for villages with
an IV. I have a few questions:
1) This doesn't work with ivreg, so I'm just doing each stage separately (i.e.,
taking the fitted values of the first stage and putting them into the second
stage with all the same controls). Is that valid? Am I cheating?
2) While it does report the coefficients, I'm getting messages like "2 not
defined because of singularities". I checked, and these are some of the
villages with very few observations. Should I just ignore this and
proceed?
Thanks!
-Leslie
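On question 1: running the two stages by hand gives the right 2SLS point estimates, but the second-stage lm standard errors are not valid, because lm treats the fitted values as fixed regressors. A common manual correction is to recompute the residuals with the original endogenous variable. A sketch with simulated data (a single instrument z for an endogenous x; all names are illustrative):

```r
# Sketch: manual 2SLS. The second-stage lm coefficients are the 2SLS
# estimates, but its SEs are wrong: residuals must be recomputed with
# the ORIGINAL endogenous regressor, not the fitted values.
set.seed(3)
n <- 500
z <- rnorm(n)
v <- rnorm(n)
x <- z + v                     # endogenous: shares v with the error
y <- 1 + 2 * x + v + rnorm(n)

stage1 <- lm(x ~ z)
xhat   <- fitted(stage1)
stage2 <- lm(y ~ xhat)         # coefficients = 2SLS estimates
b <- coef(stage2)

# Corrected SEs: residuals use the actual x, not xhat
u  <- y - b[1] - b[2] * x
Xh <- cbind(1, xhat)
V  <- sum(u^2) / (n - 2) * solve(crossprod(Xh))
sqrt(diag(V))                  # valid 2SLS standard errors
```

So the manual two-step procedure is not cheating for the coefficients, but the SEs straight off the second-stage summary() should not be reported.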
Hey class,
We're working on adding village-level fixed effects to our replication model.
However, some villages have as few as 6 observations and others have over one
hundred. It seems strange to add a dummy for a village with so few
observations.
Should we just do it anyway? Additionally, there are as many as 500 villages in
our data set. Do people include fixed effects dummies when there are so many
like that?
-Leslie
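On the "500 dummies" worry: one standard trick is the within transformation, which subtracts group means from y and x and then runs OLS with no dummies at all; by the Frisch-Waugh-Lovell result the slope estimates are identical to the dummy-variable regression. A base-R sketch with simulated data (names are illustrative):

```r
# Sketch: demeaning (the "within" transformation) recovers the same
# slope as including one dummy per village, without building a huge
# model matrix. 'village' is the grouping factor.
set.seed(4)
n <- 1000
village <- factor(sample(1:50, n, replace = TRUE))
x <- rnorm(n)
y <- 2 * x + rnorm(50)[village] + rnorm(n)

y_w <- y - ave(y, village)     # subtract village means
x_w <- x - ave(x, village)
fit_within  <- lm(y_w ~ x_w - 1)
fit_dummies <- lm(y ~ x + village)
all.equal(unname(coef(fit_within)),
          unname(coef(fit_dummies)["x"]))  # identical slopes
```

One caveat: the demeaned regression's degrees of freedom need adjusting by the number of groups if you use its standard errors directly.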
Gov 2001,
I hope to see you all at the party tomorrow at noon! It's going to be a blast.
Don't forget, you can find directions to Gary's house at
http://gking-projects.iq.harvard.edu/directions/. If you are going by
subway and live on the Red Line, it can take a little bit of time to get
there (it's toward the end of one of the Green Line branches), so plan
accordingly.
Brandon
Hi everyone,
You may already know about this, but I wanted to share an R package that I've
found very useful: apsrtable. It works like xtable, but the output looks
more professional and you can export multiple models to a table at the same
time. The only catch is that in LaTeX you have to put
\usepackage{dcolumn} in the preamble so that the tables' decimal-aligned
columns are recognized.
Also, if you need to rotate a table sideways in LaTeX, add
\usepackage{rotating} to the preamble and then use
\begin{sidewaystable} instead of \begin{table} when you want to start a
sideways table.
Best,
Erin
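Putting the two tips together, a minimal preamble sketch (the \input filename is illustrative):

```latex
% Minimal preamble sketch for apsrtable output and sideways tables
\documentclass{article}
\usepackage{dcolumn}   % apsrtable aligns columns on the decimal point
\usepackage{rotating}  % provides the sidewaystable environment
\begin{document}
% \input{mytable.tex}  % paste or \input the apsrtable output here
\begin{sidewaystable}
  % a wide table goes here
\end{sidewaystable}
\end{document}
```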
Hi class,
I'm trying to plot confidence intervals using Zelig. We did this once for a
p-set without Zelig, but the rest of my analysis uses Zelig, so I want to
stick with it. I'm getting an error, though: "Error in
grep(tmp1, tmp) : invalid 'pattern' argument". I was wondering if someone
knows what it is I might be doing wrong. Thanks!
Rachel
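While someone tracks down the Zelig error, a base-R fallback in the spirit of the p-set: simulate coefficient vectors from their estimated sampling distribution and summarize the implied predictions, which is essentially what Zelig's simulation step does internally. A sketch with simulated data (all names are illustrative):

```r
# Fallback sketch (base R): simulate from the estimated sampling
# distribution of the coefficients to get confidence intervals for
# predicted values.
set.seed(5)
x <- rnorm(100); y <- 1 + 2 * x + rnorm(100)
fit <- lm(y ~ x)

b <- coef(fit); V <- vcov(fit)
# Draw 1000 coefficient vectors: b + z L, where V = L'L (Cholesky)
L <- chol(V)
draws <- matrix(rnorm(1000 * length(b)), 1000) %*% L
draws <- sweep(draws, 2, b, `+`)

# Predicted values over a grid of x, with 95% intervals
grid <- seq(min(x), max(x), length.out = 25)
pred <- draws %*% rbind(1, grid)              # 1000 x 25
ci <- apply(pred, 2, quantile, probs = c(0.025, 0.5, 0.975))
# plot(grid, ci[2, ], type = "l"); lines(grid, ci[1, ], lty = 2)
```

The ci matrix then holds the lower bound, median, and upper bound for each grid point, ready for lines() or polygon() shading.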
Hi everyone. In class today, we're going to pose this question (edited from
Dan Altman). I hope you'll have some thoughts about it. See you soon...
What happens when the only assumptions are unbelievable?
In my subfield (International Relations), 99% of work relies on a heroic and
likely unrealistic ignorability assumption.
How seriously should this work be taken? Why shouldn't one adopt a somewhat
nihilistic attitude towards virtually the whole of this body of work?
Gary
Hi Class,
I have a quick question that I was hoping someone could help me with.
I have created this table (see below) that shows how many cases are
decided by a court in each quarter, and how many judges were serving
on the court in that quarter. I am hoping to find a way to
divide the values in the table by their value for the ACTIVEJ
variable. In other words, in 20001 there were 668 cases decided by 10
judges. I'm not sure how I can divide the values, though, so that I can
get the number of cases per judge.
Any suggestions?
Adam
> table(case.overrule.10$JUDGQY, case.overrule.10$ACTIVEJ)

            8   9  10  11  12
    20001   0   0 668   0   0
    20002   0   0 784   0   0
    20003   0   0 676   0   0
    20004   0   0 720   0   0
    20011   0   0 675   0   0
    20012   0 602   0   0   0
    20013 795   0   0   0   0
    20014 615   0   0   0   0
    20021   0 616   0   0   0
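Since each row has exactly one nonzero cell, the number of active judges for a quarter is simply the column name of that cell, so cases per judge is the row total divided by that name. A sketch reconstructing a small table of the same shape (the raw vectors here are simulated stand-ins for the real data frame):

```r
# Sketch: each row of table(JUDGQY, ACTIVEJ) has one nonzero cell, so
# the active-judge count is the column name where that cell sits.
JUDGQY  <- rep(c("20001", "20012", "20013"), times = c(668, 602, 795))
ACTIVEJ <- rep(c(10, 9, 8),                 times = c(668, 602, 795))
tab <- table(JUDGQY, ACTIVEJ)

cases  <- rowSums(tab)                             # cases per quarter
judges <- as.numeric(colnames(tab)[max.col(tab)])  # the nonzero column
cases / judges                                     # cases per judge
```

On the real data the same three lines should work on tab <- table(case.overrule.10$JUDGQY, case.overrule.10$ACTIVEJ), giving e.g. 668 / 10 = 66.8 cases per judge for 20001.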