Matt,

Thanks. What about pooled median values? Using this code to "average" medians across imputations would be inappropriate, no?

Skip 

On Tue, May 31, 2016 at 4:43 PM, Matt Blackwell <mblackwell@gov.harvard.edu> wrote:
Hi Skip, 

Unfortunately, there is no canned way to get all summary statistics, but you can use the "mi.meld" function to take a list of quantities of interest (e.g., means) and their SEs and combine them using the Rubin rules. If you don't care about the uncertainty and just want to average the quantities, then you can use these two bits of code:

## print the estimated means for each variable
rowMeans(sapply(imp$imputations, colMeans))

## print the estimated SDs for each variable
rowMeans(sapply(imp$imputations, function(x) apply(x, 2, sd)))
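
If you do want the combined uncertainty as well, something along these lines should work with mi.meld (a rough sketch, assuming `imp` is your amelia() output, all columns are numeric, and you are happy treating sd/sqrt(n) as the within-imputation standard error of each mean):

## one row per imputation, one column per variable
means <- t(sapply(imp$imputations, colMeans))
ses   <- t(sapply(imp$imputations, function(x) apply(x, 2, sd) / sqrt(nrow(x))))

## combine with the Rubin rules
pooled <- mi.meld(q = means, se = ses)
pooled$q.mi   ## pooled means
pooled$se.mi  ## pooled SEs (within- plus between-imputation variance)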

Hope that helps!

Cheers,
Matt

~~~~~~~~~~~
Matthew Blackwell
Assistant Professor of Government
Harvard University

On Tue, May 31, 2016 at 12:29 PM, Skip Barbour <russellbarbour@gmail.com> wrote:
Is there a way of getting pooled descriptive statistics from the imputed datasets? I guess this must be spelled out somewhere, but reviewing the Amelia and Zelig documentation I have not yet found it. Sorry for asking such a basic question.

Skip Barbour

On Thu, Jan 21, 2016 at 11:05 PM, Matt Blackwell <mblackwell@gov.harvard.edu> wrote:
Hi Sean, 

Apologies for taking so long to get back to you on this. I think what you are trying to accomplish is less suited to multiple overimputation. What moPrep is trying to do here is use the variance of the mismeasured observations relative to the variance of the gold-standard observations. But if all of your mismeasured observations have zero variance (since they are all 0!), then this strategy won't work. Thus, you can do one of two things (rough sketches of both are below):

1) Provide a standard error of the measurement error for those observations (using the error.sd argument)

2) Simply set those observations to NA and impute them as usual in amelia() (possibly with the bounds argument to make sure the imputations are positive)
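
Roughly, the two options might look like this (an untested sketch; the error.sd value and the upper bound are placeholders you would set for your data, and I'm assuming VS is the fourth column of dat):

## option (1), sketch: supply a measurement-error SD for the flagged
## observations via error.sd (the 0.5 is purely illustrative)
mopd <- moPrep(dat, VS ~ VS, subset = VS < .0001, error.sd = 0.5)
a.mo <- amelia(mopd, m = 5)

## option (2), sketch: treat the zeros as missing and impute with a
## positivity bound; each bounds row is (column index, lower, upper)
dat$VS[dat$VS < .0001] <- NA
bds <- matrix(c(4, 0, 1000), nrow = 1, ncol = 3)
a.out <- amelia(dat, m = 5, bounds = bds)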

Hope that helps!

Cheers,
Matt

~~~~~~~~~~~
Matthew Blackwell
Assistant Professor of Government
Harvard University

On Wed, Dec 30, 2015 at 2:24 PM, Sean Kates <sk5350@nyu.edu> wrote:
After updating to the newest version of Amelia (1.7.4), I tried overimputing a dataset that has incorrect values in one of its variables. All of the erroneous observations are measured identically (as zeros, where they should be positive). The code I originally used is below; it triggers a warning of the type: "Some observations estimated with negative measurement error variance. Set to gold standard."

dat <- data.frame(A, B, C, VS)
mopd <- moPrep(dat, VS ~ VS, subset = VS < .0001)

I looked through the GitHub code to see what causes this error (other than, of course, the negative error variance) and, more importantly, how to activate gold.standard (which for my purposes is the rest of the values of VS) and presumably fix this issue. After trying quite a few different possible codings, I can't get it to work. I either receive the same error or a host of errors about how I've included gold.standard in the call. I would think it should be easy, since I'm basically bifurcating my data (all values under some threshold form the subset measured with error; all values above it can be considered gold-standard data), but I can't figure it out. Thanks for any help you can give,

Sean

--
Amelia mailing list served by HUIT
[Un]Subscribe/View Archive: http://lists.gking.harvard.edu/?info=amelia
More info about Amelia: http://gking.harvard.edu/amelia
Amelia mailing list
Amelia@lists.gking.harvard.edu

To unsubscribe from this list or get other information:

https://lists.gking.harvard.edu/mailman/listinfo/amelia





--
There is nothing so fatal to character as half finished tasks.
David Lloyd George



