Hi,
I'm doing problem 1.1 in problem set 6 and the output is only giving me coefficients, residual deviance & AIC. There are no standard errors or t values reported (as shown in the section example). I've checked all my variables and they're all reading in as full length vectors, so I'm not sure what the problem is. Anyone have any ideas? (final.data is my name for the data set)
z.out <- zelig(factor(attention) ~ educ + income + black + age + dem + rep +
                   lib + con + contact + black:contact,
               model = "oprobit", data = final.data)
z.out
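One thing worth checking, as a sketch: printing a zelig object typically shows only the coefficients, while the standard errors and t values usually come from the summary method:

```r
# printing z.out alone shows only coefficients; summary()
# typically reports standard errors and t values as well
summary(z.out)
```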
Thanks.
Laurel
_______________________________________________
gov2001-l mailing list
gov2001-l(a)lists.fas.harvard.edu
http://lists.fas.harvard.edu/mailman/listinfo/gov2001-l
Hey folks,
Does anybody know which Iacus article we are supposed to read for next week?
There are two posted on the website and both seem to be about matching...
Thanks!
Best,
Leslie
--
gov2001 mailing list served by Harvard-MIT Data Center
List Address: gov2001(a)lists.gking.harvard.edu
Subscribe/Unsubscribe: http://lists.gking.harvard.edu/?info=gov2001
Hi everyone --
Just a reminder that your replicated results are due tomorrow in section (at
7 PM). You should submit a CD-ROM with (1) a PDF of the paper you've chosen
for replication, (2) the data used in the replication, (3) the code you used
for the replication, and (4) a draft of your paper with little text but with
the key results replicated and nicely presented in tables and figures, along
with a proposed table of contents. The CD-ROM will be for the group that is
re-replicating your results. In addition, please submit a hard copy of your
draft paper to Brandon and me.
If you can't make it to section to turn in your replication, please either
give it to someone who is coming to section, or e-mail me so we can meet in
CGIS before section to get it.
Last, the re-replication assignments are posted on the website under Final
Paper Information. During section, you can hand your CD-ROM to the group
that is re-replicating your results. We will talk about the re-replication
in section tomorrow and post specific instructions then.
Best,
Molly
Hi Gov 2001-ers,
My partner and I are having some problems reproducing a table, and we think
it has to do with clustered standard errors and/or probit "marginal
effects". The paper we are replicating uses the following Stata code for
one of its regressions:
dprobit applydummy `controls01' if sample, cluster(cluster)
predict apply1
return list
where "applydummy" is the dependent variable (a 0 or a 1) and `controls01'
is a Stata local macro holding a set of explanatory variables (which we have). A few questions:
1. Apparently "dprobit" in Stata gives "probit marginal effects". Does
anyone know what "probit marginal effects" are, how they differ from the
probit models/regressions we've learned in class, and how to program them in
R?
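For what it's worth, dprobit reports changes in the predicted probability (dF/dx, evaluated at the covariate means by default) rather than the latent-index coefficients that a plain probit reports. A rough sketch of computing those by hand in R, where x1, x2, and df are hypothetical stand-ins for the actual controls and data:

```r
# sketch: probit "marginal effects" at the covariate means,
# roughly what Stata's dprobit reports for continuous variables
# (x1, x2, and df are hypothetical stand-ins)
fit  <- glm(applydummy ~ x1 + x2,
            family = binomial(link = "probit"), data = df)
X    <- model.matrix(fit)
b    <- coef(fit)
xbar <- colMeans(X)
dFdx <- dnorm(sum(xbar * b)) * b  # phi(xbar'b) * b; ignore the intercept entry
```

For dummy regressors, dprobit instead reports the discrete change in probability from 0 to 1, which would need to be computed as a difference of two pnorm() values rather than with this formula.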
2. We think that the Stata file is using clustered robust standard errors
for this regression (clustering on the variable 'cluster'). However, we
have not been able to reproduce these standard errors in R (although we have
tried the "clx" command that was recommended in an earlier email). I know
several other people have been dealing with clustered standard errors--can
anyone that was successful give us some pointers?
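In case it helps, the sandwich package's vcovCL() is a ready-made alternative to a hand-rolled clx function. A minimal sketch, again with hypothetical variable names:

```r
# sketch: cluster-robust standard errors for a probit in R,
# clustering on df$cluster (names hypothetical)
library(sandwich)
library(lmtest)
fit <- glm(applydummy ~ x1 + x2,
           family = binomial(link = "probit"), data = df)
coeftest(fit, vcov = vcovCL(fit, cluster = df$cluster))
```

Note these are clustered SEs for the index coefficients; dprobit's output is on the marginal-effect scale, so the numbers will not match it directly.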
THANK YOU!
Liz and Erin
Hey class,
Does anybody know how clustered standard errors affect the degrees of freedom of
an F statistic? My partner and I would be eternally grateful. We are trying to
compute the first-stage F stat for an IV, but we are getting the wrong answer
because the authors use clustered standard errors. I believe we just need to
change the degrees of freedom in the F test, but I don't know how.
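One convention (the one Stata follows for cluster-robust tests) is to use G - 1 denominator degrees of freedom, where G is the number of clusters. A hedged sketch for the single-excluded-instrument case, with hypothetical variable names:

```r
# sketch: first-stage F with cluster-robust variance and
# denominator df = (number of clusters - 1)
library(sandwich)
fs <- lm(endog ~ instr + x1, data = df)   # first-stage regression
V  <- vcovCL(fs, cluster = df$cluster)    # cluster-robust vcov
b  <- coef(fs)["instr"]
Fstat <- b^2 / V["instr", "instr"]        # Wald F for one instrument
G  <- length(unique(df$cluster))
pf(Fstat, df1 = 1, df2 = G - 1, lower.tail = FALSE)  # p-value
```

With several excluded instruments the numerator would be a joint Wald statistic (divided by the number of instruments) rather than a single squared coefficient.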
Best,
Leslie and Adela
Hi 2001,
We are trying to implement nearest neighbor matching using Zelig. We are
interested in the average treatment effect of UNSC membership. The matching
is based on population, per capita income, and level of democracy using the
four closest matches. The Stata code is
nnmatch delta4GDPpcWB year4SC demaut lpopWB lGDPpcWB if
(year4SC==0|year4SC==1)&unmem==1, tc(att) m(4)
We have tried to implement this in Zelig using MatchIt in the following way:
m.out1 <- matchit(year4SC ~ demaut+lpopWB+lGDPpcWB, method = "nearest", data
= subset1)
z.out1 <- zelig(delta4GDPpcWB ~ demaut+lpopWB+lGDPpcWB, data =
match.data(m.out1, "control"), model = "normal.bayes")
x.out1 <- setx(z.out1, data = match.data(m.out1, "treat"), cond = TRUE)
s.out1 <- sim(z.out1, x = x.out1)
summary(s.out1)
However, this fails to replicate the Stata results. Does anyone have any
thoughts here? Or, more technically, does anyone know how we can specify
the number of matches (we want four)?
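On the number of matches: matchit()'s nearest-neighbor method takes a ratio argument, which looks like the analogue of nnmatch's m(4). A sketch of the modified call:

```r
# sketch: 1-to-4 nearest-neighbor matching; "ratio" sets how many
# control units are matched to each treated unit
m.out1 <- matchit(year4SC ~ demaut + lpopWB + lGDPpcWB,
                  method = "nearest", ratio = 4, data = subset1)
```

Even with the same number of matches, nnmatch and MatchIt use different matching algorithms and distance metrics, so the results may still differ somewhat.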
Thanks in advance,
Erin and Volha
Hi everyone,
I have a problem reading a .dta file into R.
In the original Stata file, there is no missing data for some of the
variables we want to use, but when I load the dataset into R, missing
values appear.
What can I do about that?
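A common culprit is value-labeled variables being converted to factors on import, with unlabeled codes becoming NA. One thing to try, as a sketch with a hypothetical filename:

```r
# sketch: re-read the .dta without converting labeled values to
# factors, then count the NAs per variable to locate the problem
library(foreign)
d <- read.dta("mydata.dta", convert.factors = FALSE)
colSums(is.na(d))
```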
thanks!
erru
2011/3/19 Brandon Stewart <brandonmstewart(a)gmail.com>
> Also check out the bigdata package. If you go with Gary's suggestion, the
> "scan" command reads files in line by line.
>
> Brandon
>
>
> On Sat, Mar 19, 2011 at 12:10 PM, Gary King <king(a)harvard.edu> wrote:
>
>> don't read in the data set all at once. just read in a bit at a time and
>> process it. you probably don't need 850megs in memory all at once.
>> Gary
>> --
>> *Gary King* - Albert J. Weatherhead III University Professor - Director,
>> IQSS - Harvard University
>> GKing.Harvard.edu <http://gking.harvard.edu/> - King(a)Harvard.edu -
>> @kinggary <http://twitter.com/kinggary> - 617-500-7570 - Asst 495-9271 -
>> Fax 812-8581
>>
>>
>>
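Gary's bit-at-a-time suggestion might look like the following for a plain-text export of the data (the filename and the processing step are hypothetical):

```r
# sketch: stream a large text file in chunks instead of loading
# all 850MB into memory at once
con <- file("bigfile.csv", open = "r")
header <- readLines(con, n = 1)          # keep the column names
repeat {
  chunk <- readLines(con, n = 10000)     # 10,000 lines at a time
  if (length(chunk) == 0) break
  # ...parse chunk and keep only the variables/rows you need...
}
close(con)
```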
>> On Sat, Mar 19, 2011 at 12:00 PM, Slawa Rokicki <slawa.rokicki(a)gmail.com>wrote:
>>
>>> Hi,
>>>
>>> We are working with a HUGE dataset (850MB). R is not happy. We were
>>> thinking of using SPSS to cut down our dataset into a working one with only
>>> the variables we need. Otherwise, I googled it and found a package called
>>> filehash that could potentially work, but would require a lot of changes in
>>> our code.
>>>
>>> Do other people have this problem? Any advice?
>>>
>>> Thanks!
>>> Slawa
>>>
>>>
>>>
--
Erru Yang
Global Health and Population
Harvard School of Public Health
631-880-9605
Hi All-
I am trying to use a small sample correction for a seemingly unrelated
regression (SUR) model. When I try to use the estfun() function (which I've
used for corrections to other models) I get the error "Error in
UseMethod("estfun") : no applicable method for 'estfun' applied to an object
of class "c('multiple', 'systemfit')". So presumably estfun can't deal with
multiple dependent variables.
I found this documentation on Zelig (
http://people.iq.harvard.edu/~falimadh/inst/doc/sur.pdf) and this
documentation on Stata (http://www.stata.com/help.cgi?sureg), but I'm still
not sure how to apply it...
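A heavily hedged sketch of one generic possibility: fit the SUR with systemfit and rescale the covariance by a degrees-of-freedom factor by hand. Whether n/(n - k) is the correction the paper intends would need checking against the Stata documentation; the equations and names below are hypothetical:

```r
# sketch: SUR via systemfit with a hand-applied n/(n - k)
# small-sample inflation of the standard errors
library(systemfit)
eqs <- list(first = y1 ~ x1 + x2, second = y2 ~ x1 + x3)
fit <- systemfit(eqs, method = "SUR", data = df)
n <- nrow(df)
k <- length(coef(fit)) / length(eqs)    # avg. parameters per equation
se.adj <- sqrt(diag(vcov(fit)) * n / (n - k))
```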
Any advice would be appreciated.
Hope y'all are having nice breaks!
Julie
Hey class,
When trying to use the *pdata.frame* function on a data frame, we get an error
"series are constants and have been removed". Does anyone have an idea
of what the problem may be and how to deal with it?
The current code is as follows:
data <- pdata.frame(mydata, index = c("ccode","year"), drop.index = TRUE,
row.names = TRUE)
Or perhaps you know of an alternative way to create a data structure with
both individual and time indexes?
The goal is to lag some variables while taking into consideration the
individual data.
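For what it's worth, that message appears to be plm dropping columns that are constant across the panel, and it usually isn't fatal; the remaining series should still lag correctly. A sketch of a within-unit lag, where x is a hypothetical variable name:

```r
# sketch: lag a variable within each ccode using plm's pdata.frame
library(plm)
p <- pdata.frame(mydata, index = c("ccode", "year"))
p$x.lag1 <- lag(p$x, 1)   # plm's lag method respects the panel index
```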
Thanks,
Volha