In addition to the newer "multicore" abilities you mention, a small empirical prior will speed up convergence. The "empri" argument sets an empirical/ridge prior. A value of half to one percent of the sample size would be small, would aid numerical stability,
and would be unlikely to noticeably change results (unless you are using time-series cross-sectional data, in which case you might use one percent of the sample within any cross-sectional unit).
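For instance, a minimal sketch of what that call might look like (here "mydata" is a placeholder for your data frame, and the multicore settings mirror the ones you describe):

```r
library(Amelia)

n <- nrow(mydata)

# empirical/ridge prior at 0.5% of the sample size,
# small enough to leave results essentially unchanged
a.out <- amelia(mydata, m = 1,
                empri = 0.005 * n,
                parallel = "multicore", ncpus = 6)
```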
The "tolerance" argument changes the point at which the EM algorithm is judged to have converged, and setting it larger (like .001, or even .005) is probably quite safe. We were very conservative with this tolerance choice, and should reexamine other options
to set it dynamically.
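As a sketch, relaxing the convergence criterion would look like this ("mydata" again a placeholder; the default tolerance is stricter than either value suggested above):

```r
library(Amelia)

# loosen the EM convergence tolerance to stop the algorithm earlier;
# .001 (or even .005) is probably quite safe
a.out <- amelia(mydata, m = 1, tolerance = 0.001)
```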
Best,
James.
--
James Honaker, Senior Research Scientist
//// Institute for Quantitative Social Science, Harvard University
I'm looking to speed up the run time of a single imputation on a large data set with repeated measures that takes many hours. Will running the imputation in parallel with the parallel="multicore" option and 6 cores speed up the run time of a single imputation,
or will it only speed up the run time of multiple imputations (by running them simultaneously)? What are my best options for making the single imputation run faster while minimizing any sacrifices in imputation accuracy?
Many thanks!
-Isaac