Hi Ömer,
It seems as though you are running into memory limits in R itself. Note
that using "intercs = TRUE" together with "polytime = 2" will add 3*K
columns to the data, where K is the number of dyads (cross-section
units). Given your description of the data, the resulting data set could
be extremely large. You might want to run Amelia on a smaller subset of
the data first to see how the imputations go, and then tentatively test
out smaller imputation models.
Hope that helps!
Cheers,
matt.
~~~~~~~~~~~
Matthew Blackwell
Assistant Professor of Political Science
University of Rochester
url: http://www.mattblackwell.org
On Thu, Feb 7, 2013 at 7:24 AM, OMER FARUK Orsun <oorsun(a)ku.edu.tr> wrote:
Dear Lister,
I am using Amelia II (Version 1.6.4) with a 500 GB computer
specification. My data consist of directed dyads, and my imputation
model has 94 variables and 493,853 observations. I use the following
command:
library(Amelia)
library(foreign)

mydata <- read.dta("data.dta")
set.seed(1234)
a.out <- amelia(mydata, m = 10, p2s = 2, tolerance = 0.005,
                empri = .1 * nrow(mydata), ts = "year", cs = "dyadid",
                polytime = 2, intercs = TRUE)
After 7 hours, I receive the following message:
amelia starting
beginning prep functions
Error in cbind(deparse.level, ...) :
  resulting vector exceeds vector length limit in 'AnswerType'
I have already searched the Amelia II and R archives, but I was not
able to locate a solution.
I would deeply appreciate any help!
Best Regards,
Ömer
_________________________________
Ömer Faruk Örsün
PhD Candidate
Department of International Relations
Koç University
CAS 289
_________________________________
--
Amelia mailing list served by HUIT
[Un]Subscribe/View Archive:
http://lists.gking.harvard.edu/?info=amelia
More info about Amelia:
http://gking.harvard.edu/amelia