Hi all,
This has come up a few times today, so I thought I'd clarify --
Even if you are prevented by the original author from uploading or
distributing the data, you should still create a replication file and
upload what you can (including the R code that you used). You should
also get and report the UNF and cite it in your paper. Here's an
example:
Gary King; Langche Zeng, 2006, "Replication Data Set for 'When Can
History be Our Guide? The Pitfalls of Counterfactual Inference'"
hdl:1902.1/DXRXCFAWPK UNF:3:DaYlT6QSX9r0D50ye+tXpA== Murray Research
Archive [distributor]
I've been closely monitoring people's submissions to the Dataverse, and I don't
want to release any replication archives that don't have data files,
abstracts, and R code. If you are in a position where you can't upload
data, but you have the other components, let me know and I will
release your study.
Maya
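(For anyone computing a UNF in R: a minimal sketch, assuming the CRAN
package "UNF" and its unf() function; the built-in iris data stands in
for your replication data set.)
    # install.packages("UNF")  # assumed CRAN package
    library(UNF)
    unf(iris)  # prints the data set's UNF signature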
Hi everyone,
I just posted the final exam for those extension school students who are not
submitting a final paper. You can access it on the course website in the Final
Exam folder in the Problem Sets section. The exam will be due one week from
today. Good luck!
Iain
Dear all,
I am having trouble interpreting the rho statistic for a Prais-Winsten
regression with panel-corrected standard errors and AR(1) errors. When
fixed effects are introduced in the regression, the rho statistic we are
looking at changes from 0.110 to -0.001. Some of the significant
coefficients in the regression also change dramatically. Should we be
concerned about a rho statistic so close to zero? Is there an optimal
rho for this kind of model?
Thank you very much!
Andrei
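(For context: rho here is the AR(1) coefficient of the error process,
e(i,t) = rho * e(i,t-1) + u(i,t), so it measures how much first-order
serial correlation remains in the residuals once the regressors,
including any fixed effects, are accounted for. A minimal sketch of
estimating it in R, assuming the CRAN "prais" package; the simulated
panel is a placeholder:)
    library(prais)
    # placeholder panel: 10 units observed for 20 years
    mydata <- data.frame(unit = rep(1:10, each = 20),
                         year = rep(1:20, times = 10),
                         x = rnorm(200))
    mydata$y <- 1 + 2 * mydata$x + rnorm(200)
    pw <- prais_winsten(y ~ x + factor(unit),
                        data = mydata, index = c("unit", "year"))
    summary(pw)  # the reported rho is the estimated AR(1) coefficient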
Hey folks,
Just got some feedback from the Dataverse team that some of you are
releasing replication archives without completely cataloging what's going on
(e.g., missing abstracts). So, just a friendly request to be comprehensive
in your cataloging. It helps the staff and it also helps future scholars
find your work down the road.
Maya
Iain, Maya,
We're wondering how much this paper should be able to "stand alone"
versus requiring that the reader be familiar with the paper we are
replicating. This affects how much of our paper will be taken up with
background explaining what the original paper did. For now we're keeping
it short, with about two pages of background on the original paper, and
devoting the bulk to how we evaluated and augmented it. Does that seem
right? Also, is there a general guideline on how long the paper should be?
Thanks!!
EXL
--
ERIC LIN
Technology and Operations Management
Harvard Business School
Boston, MA 02163
elin at hbs.edu
We have a table in LaTeX that fits on the page, but it has a *lot* of
columns, so it ends up looking uncentered: there is a left margin, but
the table runs off to the right. It fits, but it looks odd. Is there
any way to resize or re-center the table so it isn't off-kilter like
this?
Thanks!
~Meryl
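(Two common fixes, sketched in the minimal example below: scale the
tabular to the text width with \resizebox from the graphicx package, or
keep its natural size and center the overhang with \makebox so it
protrudes equally into both margins. The table contents are
placeholders:)
    \documentclass{article}
    \usepackage{graphicx}  % provides \resizebox
    \begin{document}
    \begin{table}
      \centering
      % Option 1: shrink the tabular so it spans exactly \textwidth
      \resizebox{\textwidth}{!}{%
        \begin{tabular}{rrrrrrrr}
          1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 \\
        \end{tabular}}
      \caption{A wide table scaled to the text width}
    \end{table}
    % Option 2: keep the natural size but center the overhang:
    % \makebox[\textwidth][c]{\begin{tabular}{...}...\end{tabular}}
    \end{document}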
Hi all,
As you know, the final paper is due Thursday April 29th at 5pm. Please see
the syllabus on the course website for the course policy on extensions.
There is now a dropbox available on the website in the Course Dropboxes
section. Please submit your papers in the Final Paper dropbox; there is no
need to print out a hard copy or to submit data or code.
Best, Iain
How can I save output when working with R in the RCE?
It would be useful to save the R workspace, in case I wish to work on the
output further on my own computer.
Regards, Nicola
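(A minimal sketch using base R; the fitted model and file names are
placeholders. The saved .RData file can then be downloaded from the RCE
and loaded on your own machine:)
    fit <- lm(dist ~ speed, data = cars)  # stand-in example object
    save.image("myproject.RData")         # saves the whole workspace
    save(fit, file = "fit.RData")         # or just selected objects
    # later, on your own computer:
    load("myproject.RData")
    # plain-text console output can be captured with sink():
    sink("output.txt"); print(summary(fit)); sink()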
Hi all,
For those of you who are extension school students (who are not submitting a
final paper), we will release the final exam this Thursday at 6pm. Extension
school students who are submitting a final paper do not need to submit a
final exam. The exam will be due one week later on Thursday May 6th at 6pm.
When we post the exam we will provide instructions about where to locate it
on the course website. The exam itself will contain instructions on
submission and a procedure for asking clarifying questions.
Best, Iain
Hey class,
I have two quick questions regarding Amelia. Apologies if they have
already been answered. First, will the result be different if I average
my imputed values across the 5 data sets first and then run the model
on the averaged data? Second, given my results on the 5 data sets, what
is the proper way to get the "averaged" standard error? Do I take the
average of the standard errors?
Thanks,
Iza
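(A sketch of the standard pooling approach, Rubin's rules, which fits
the model on each imputed data set separately rather than averaging the
imputations first, and then combines the results. It assumes Amelia's
usual output structure and the "africa" example data that I believe
ships with the package; the regression itself is a placeholder:)
    library(Amelia)
    data(africa)
    a.out <- amelia(africa, m = 5, ts = "year", cs = "country")
    # fit the same model on each of the m imputed data sets
    fits  <- lapply(a.out$imputations,
                    function(d) lm(gdp_pc ~ trade + civlib, data = d))
    m     <- length(fits)
    coefs <- sapply(fits, coef)                               # k x m
    ses   <- sapply(fits, function(f) coef(summary(f))[, 2])  # k x m
    q.bar <- rowMeans(coefs)       # pooled point estimates
    w.bar <- rowMeans(ses^2)       # average within-imputation variance
    b     <- apply(coefs, 1, var)  # between-imputation variance
    se.pooled <- sqrt(w.bar + (1 + 1/m) * b)
    cbind(estimate = q.bar, se = se.pooled)
(If memory serves, Amelia's mi.meld() function performs this same
pooling given matrices of estimates and standard errors.)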