If you are saying that your quantity of interest is the value of the missing cell itself, then I'd take a large number of Amelia runs (i.e., set m = 100 or 1000). You will then get m completed data sets; average the imputed values of that cell to get your point estimate, and take their standard deviation or a confidence interval to quantify the uncertainty.
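A minimal R sketch of this suggestion, using Amelia's documented interface (`amelia()` returns an object whose `$imputations` component is a list of completed data frames); here `df`, `i`, and `j` are placeholders for your data frame and the row/column of the missing cell:

```r
library(Amelia)

# One Amelia run with a large number of imputations
a.out <- amelia(df, m = 100)

# Collect the imputed value of the cell across all m completed data sets
vals <- sapply(a.out$imputations, function(d) d[i, j])

mean(vals)                       # point estimate of the missing cell
sd(vals)                         # its standard deviation
quantile(vals, c(0.025, 0.975))  # a simple 95% interval
```

With m this large, the spread of `vals` directly reflects the imputation uncertainty for that single cell.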
Gary

On Tuesday, October 25, 2011, Fernando Mayer wrote:
Hello Amelia list,

Nearly one year ago I posted a question to this list (it can be
seen here [1]) about a dataset I was imputing with Amelia.
The objective of those imputations was only to complete the missing
data and stop there; no further analysis was to be made.

Now I have a similar dataset which I'm trying to impute, with the same
objective. In summary, I'm using Amelia to impute with, say, m = 15, and
using the mean and variance of these 15 imputations as my final result
(more specifically, the mean is the quantity of interest). However,
I've noticed that when I make two or more Amelia runs with m = 15, I
can get very different final results (the means), most likely due
to the high variability of the data itself. I understand this is
normal, since Amelia uses bootstrapped data to generate each
imputation, so the results are expected to differ. However, the large
discrepancy I'm getting across different Amelia runs is a problem, since
I'm using only the mean of m imputations as the final result. So, if I
use one Amelia run, my result is totally dependent on what happened in
that single run.

What I'm trying to do now is to get more "consistent" results, in the
sense that my final result does not depend on only one Amelia run. To
achieve this, I thought of using the Central Limit Theorem (CLT) to
get my final mean, as follows:

1) Run the same Amelia model 1000 times, with m = 15.
2) Within each of the 1000 runs, extract the mean of the m
imputations, so I have 1000 means (assuming each Amelia run is
independent of the others, I treat these means as iid random
variables).
3) Calculate the mean of the distribution of these 1000 means, which
should be approximately normal by the CLT (with E(\bar{X}) = \mu
and Var(\bar{X}) = \sigma^2/n).
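The three steps above could be sketched in R as follows (again assuming a data frame `df` and a target cell at row `i`, column `j`; these names are placeholders, and `p2s = 0` is Amelia's documented argument for silencing screen output):

```r
library(Amelia)

# Step 1 and 2: repeat the same Amelia model 1000 times with m = 15,
# and within each run take the mean of the 15 imputed values of the cell
run.means <- replicate(1000, {
  a.out <- amelia(df, m = 15, p2s = 0)
  mean(sapply(a.out$imputations, function(d) d[i, j]))
})

# Step 3: the final estimate is the mean of the 1000 run-level means
mean(run.means)
hist(run.means)  # should look approximately normal, per the CLT
```

Note that, for the point estimate, this pools 15 x 1000 imputed values in total, which is why the result stabilizes across repetitions.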

I've made a few sets of 1000 Amelia runs following this pseudo-code,
and the final results are very similar among them (i.e., they have
almost identical normal distributions and very similar final means).
To me this seems more reasonable than using only one Amelia run to
extract a mean, but I would like to hear your opinion about this, and
in particular whether this is a valid methodology for what I'm trying
to do with Amelia.

Thank you very much in advance.

[1] http://lists.gking.harvard.edu/lists/amelia/2010_09/msg00012.html

---
Fernando Mayer
URL: http://sites.google.com/site/fernandomayer
e-mail: fernandomayer [@] gmail.com
-
Amelia mailing list served by Harvard-MIT Data Center
[Un]Subscribe/View Archive: http://lists.gking.harvard.edu/?info=amelia
More info about Amelia: http://gking.harvard.edu/amelia


--
Gary King
Albert J. Weatherhead III University Professor - Director, IQSS - Harvard University
GKing.Harvard.edu - King@Harvard.edu - @kinggary - 617-500-7570 - Asst 495-9271 - Fax 812-8581