Tomas
Thanks for the response. The first method you propose for exploration
could be very useful. I guess I didn't think of it in these terms, and
I probably should have.
The second approach is very similar to what I was thinking of doing if
all else fails. I would have imputed the expected values and just used
that for EFA. (I would do this either through imputing a large number
of times and averaging the imputed values, or just use the EM
algorithm probably through SPSS MVA.) I guess this does the same
thing that your approach proposes.
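The cell-wise averaging of many imputations described above can be sketched as follows (a minimal numpy sketch; the array names, dimensions, and random data are made up for illustration):

```python
import numpy as np

# Hypothetical data: m completed copies of an n x p dataset from the
# imputation model, stacked into shape (m, n, p). The imputed cells
# differ across the m copies; the observed cells would be identical.
rng = np.random.default_rng(0)
m, n, p = 5, 100, 4
imputed = rng.normal(size=(m, n, p))

# Average the m completed datasets cell by cell to obtain a single
# "expected value" dataset that could then be fed into an EFA.
averaged = imputed.mean(axis=0)

print(averaged.shape)
```

As the paragraph that follows notes, this single averaged dataset discards the between-imputation variability, which is exactly the uncertainty problem being discussed.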
The problem with this approach is that it underestimates the
uncertainty of the unknown values. For most analyses, Rubin's rules
can be used to combine the point estimates as long as they come with
a standard error. Mplus will now give you standard errors for the
loadings, so that is very useful. But the decision on the number of
factors to extract is problematic. (Even without any missing data it
is problematic.) Let's say I establish a rule on the number of factors
to extract. I impute m times, analyze the m datasets, and I get 3
factors for m/3 of the datasets, 4 for m/3, and 5 for m/3. How many
to extract then? Go with the median? Why? Why not?
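For the part of this that Rubin's rules do handle, pooling a point estimate (say, one loading) across the m imputations is mechanical; a minimal sketch, with made-up estimates and standard errors:

```python
import numpy as np

def rubin_pool(estimates, std_errors):
    """Combine m point estimates and their standard errors with
    Rubin's rules: pooled estimate is the mean; total variance is
    within-imputation variance plus (1 + 1/m) times the
    between-imputation variance."""
    estimates = np.asarray(estimates, dtype=float)
    std_errors = np.asarray(std_errors, dtype=float)
    m = len(estimates)
    qbar = estimates.mean()              # pooled point estimate
    w = (std_errors ** 2).mean()         # within-imputation variance
    b = estimates.var(ddof=1)            # between-imputation variance
    t = w + (1 + 1 / m) * b              # total variance
    return qbar, np.sqrt(t)

# Toy example: one loading estimated in each of m = 5 imputed datasets.
est, se = rubin_pool([0.62, 0.58, 0.60, 0.65, 0.55], [0.05] * 5)
print(round(est, 3), round(se, 3))
```

This pools a continuous estimate; it does not resolve the discrete number-of-factors decision raised above, which is exactly where the rules give no guidance.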
Of course, one solution is not to bother. Just do it and consider the
problem when (if) it arises. I can totally imagine that all
imputations would give me the same results. Problem solved then.
There also might be something else I am missing. It is not entirely
clear what the advantage of any of the above is over listwise
deletion, etc.
Thanks for your quick input.
L
On Dec 7, 2009, at 5:58 PM, Tomáš Kubiš wrote:
Hi Levente,
if I understood your issue right, you have MI data and want to
conduct FA on these data. I faced the same problem and came up with
two ways to conduct such an analysis.
First, you can use the imputations separately, conduct the FA on each
one of them, and observe whether there are significant differences in
the factor loadings in comparison to an unimputed dataset. Moreover,
you can observe whether the imputations differ among themselves. For
pure exploration, this will help you to identify dimensions, and you
can generalize the result by building an average of the loadings for
each factor, thus getting one set of factors.
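A minimal numpy sketch of that averaging step, with made-up loading vectors; one caveat worth flagging is that factor loadings are only identified up to sign (and, with several factors, rotation), so the solutions should be aligned before averaging or the mean can cancel out:

```python
import numpy as np

# Hypothetical loading vectors for one factor from m = 3 separate EFAs,
# one per imputed dataset. The second solution has a flipped sign.
loadings = np.array([
    [0.70, 0.60, 0.10],
    [-0.72, -0.58, -0.12],
    [0.68, 0.61, 0.09],
])

# Align signs against the first imputation's solution: flip any vector
# whose dot product with the reference is negative, then average.
ref = loadings[0]
signs = np.sign(loadings @ ref)
aligned = loadings * signs[:, None]
avg = aligned.mean(axis=0)
print(np.round(avg, 3))
```

With more than one factor, factors would also need to be matched across imputations before this element-wise average makes sense.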
Second, you can take the covariance or correlation matrices of the
imputed datasets and average them. Then you can conduct the FA on
this averaged correlation or covariance matrix. You have one matrix
and thus again get one set of factors.
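This second route can be sketched in a few lines of numpy (random data stands in for the imputed datasets, and a plain eigendecomposition with the Kaiser criterion stands in for a full EFA routine; both are placeholders, not the method itself):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, p = 5, 200, 4

# Hypothetical stand-in for m completed datasets of shape (n, p);
# in practice these would come from the imputation model.
datasets = rng.normal(size=(m, n, p))

# Correlation matrix of each completed dataset, then the element-wise
# mean across the m matrices.
corrs = np.array([np.corrcoef(d, rowvar=False) for d in datasets])
mean_corr = corrs.mean(axis=0)

# Feed the averaged matrix into the extraction step; eigenvalues of
# the correlation matrix, largest first.
eigenvalues = np.linalg.eigvalsh(mean_corr)[::-1]
n_factors = int((eigenvalues > 1).sum())  # Kaiser rule as a placeholder
print(mean_corr.shape, n_factors)
```

The averaged matrix is symmetric with a unit diagonal, so any EFA routine that accepts a correlation matrix as input could be run on it directly.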
As you said, it is difficult to find literature and I didn't find
support for either of these two methods. I would be inclined to use
the second one and average the correlation matrices, because the
correlation matrix is the input into the factor analysis.
Good luck with your analysis and I hope that you will get some more
scientific help.
Regards,
Tomas
2009/12/7 Levente Littvay <levi(a)littvay.com>
Dear Amelia List
Does anyone know of a good and accessible way to analyze multiply
imputed data using exploratory factor analysis? (Possibly, would
Zelig know how to do this?) Anything else? SPSS won't do it.
Mplus won't do it (as far as I can see). I found very little by way
of published work on the topic. I could use a bit of guidance.
Thanks
Cheers
Levente Littvay
Assistant Professor
Department of Political Science
Central European University
-
Amelia mailing list served by Harvard-MIT Data Center
[Un]Subscribe/View Archive: http://lists.gking.harvard.edu/?info=amelia
More info about Amelia: http://gking.harvard.edu/amelia