On Fri, 3 May 2002, McElroy, Brendan wrote:
Dear Gary,
Thanks for your response to the above request. I think I understand what
you mean by estimating theta: take a random sample of the dataset, run EMis
and take the average vector of coefficients and variance-covariance matrix
from the five imputed datasets and input them back into Amelia using the
AMmupr and AMsigpr options. I'm not sure what to do next, since I still
can't load the full dataset into Amelia. Can you help?
I'd take a random sample and put that into Amelia to get estimates of
theta. Then, at least using the Gauss version, you would be able to use
that estimated theta matrix to produce the imputations on (all sequential)
subsets of the data. We should automate this, but I don't think it's been
done yet.
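[For readers of the archive: the procedure Gary describes can be sketched in a few lines. This is NOT the Amelia implementation — Amelia's EMis uses EM with importance sampling, while this toy version estimates theta (the mean vector and covariance matrix) from the complete cases of a random subsample, then imputes each sequential chunk of the data under a multivariate-normal model with that theta held fixed. All names, sizes, and the conditional-mean fill-in are illustrative assumptions; proper multiple imputation would draw from the conditional distribution rather than plug in its mean.]

```python
import numpy as np

rng = np.random.default_rng(0)

def estimate_theta(sample):
    """Estimate the mean vector and covariance matrix ("theta") from the
    complete cases of a random subsample. (Amelia's EMis handles the
    missing cells via EM; complete-case estimation keeps this sketch short.)"""
    complete = sample[~np.isnan(sample).any(axis=1)]
    return complete.mean(axis=0), np.cov(complete, rowvar=False)

def impute_chunk(chunk, mu, sigma):
    """Fill missing cells in one chunk, conditioning on the observed cells,
    with theta = (mu, sigma) held fixed across all chunks."""
    out = chunk.copy()
    for row in out:
        miss = np.isnan(row)
        if not miss.any():
            continue
        if miss.all():
            row[:] = mu  # no observed cells: fall back to the mean
            continue
        obs = ~miss
        # E[x_miss | x_obs] = mu_m + S_mo S_oo^-1 (x_obs - mu_o)
        s_oo = sigma[np.ix_(obs, obs)]
        s_mo = sigma[np.ix_(miss, obs)]
        row[miss] = mu[miss] + s_mo @ np.linalg.solve(s_oo, row[obs] - mu[obs])
    return out

# Simulated data standing in for the real dataset; 400,751 rows would be
# processed the same way, chunk by chunk, without loading theta-estimation
# work onto the full file.
n, k = 5000, 4
data = rng.multivariate_normal(np.zeros(k), np.eye(k) + 0.5, size=n)
data[rng.random((n, k)) < 0.1] = np.nan  # ~10% missing at random

# Step 1: estimate theta on a random sample only.
mu, sigma = estimate_theta(data[rng.choice(n, 1000, replace=False)])

# Step 2: impute sequential subsets of the data with theta fixed.
imputed = np.vstack([impute_chunk(data[i:i + 1000], mu, sigma)
                     for i in range(0, n, 1000)])
```

The point of the two-step split is the one Gary makes below: estimating theta is the expensive part, and a random sample loses little efficiency there, while the per-observation imputation step is cheap and can be run over subsets of any size.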
I've also got a second request. I can't seem to find anything on the STATA
combining commands 'mi', either on STATA's net resources or on the
statalist. Apparently there was a zip file on your website containing the
ado files at some stage. If you still have them, can you send them on to
me, or can you let me know where to go to get them?
I'd have a look at Clarify, also at my web page. Clarify is Amelia-ready.
Gary
Thanks again.
Brendan McElroy
HRB Research Fellow
Departments of Economics and General Practice
Aras na Laoi
University College Cork
Western Road
Cork
Ireland
Tel: +353 21 490 3522
-----Original Message-----
From: Gary King [mailto:king@harvard.edu]
Sent: 27 April 2002 23:02
To: McElroy, Brendan
Cc: 'amelia(a)latte.harvard.edu'
Subject: Re: [amelia] altering memory allocation in Amelia
I don't know if it's ever been tested with that many observations.
I think a reasonable procedure would be to take a random sample just for
estimating theta and then do the imputation for each observation.
That would be computationally efficient and wouldn't lose very much
statistical efficiency. I'd have to look to see whether it's possible to do
this with the present version of Amelia...
Gary
On Fri, 26 Apr 2002, McElroy, Brendan wrote:
I'm new to Amelia and I'm having problems with memory size. I have a
STATA dataset with nine variables and 400,751 observations, weighing in
at 7.8Mb.
I can load three of the variables - cost, age and sex - into Amelia (for
Windows) but the program crashes when I try to run it. There are only 29
missing records on the age variable and the other two are fully coded. The
program crashes immediately when I try to load the full dataset. One of the
variables - disability - has 58,162 missing records and this is what I
really want Amelia to help with. I guess I've two related questions: Is my
dataset so big that multiple imputation will take too long and I should
revert to something like listwise deletion or least squares imputation,
both of which STATA can handle easily? If it's not too big, how do I alter
the memory space allocated by the program?
Yours sincerely,
Brendan McElroy
HRB Research Fellow
Departments of Economics and General Practice
Aras na Laoi
University College Cork
Western Road
Cork
Ireland
Tel: +353 21 490 3522
-
amelia mailing list served by Harvard-MIT Data Center
List Address: amelia(a)latte.harvard.edu
Subscribe/Unsubscribe:
http://lists.hmdc.harvard.edu/?info=amelia