Currently I am looking for an efficient way to perform the Welch two-sample t-test (the t.test function in "R", TTEST with heteroscedastic populations in Microsoft Excel) on about 100 million pairs of vectors. The elements of the vectors are extracted from a database. Once the results are computed and stored in a database table, a correction for multiple comparisons (such as Bonferroni or FDR) has to be applied to the 100 million p-values.
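For reference, the statistical part can be expressed directly in "R": the Welch statistic can be computed in a vectorized way from per-pair summary statistics (avoiding 100 million individual t.test calls), and p.adjust handles the Bonferroni/FDR step. A rough sketch, assuming the per-pair means, variances and sample sizes (hypothetical names m1, v1, n1, m2, v2, n2) have already been pulled from the database:
[code]
# Welch two-sample t-test from summary statistics, vectorized over all pairs.
# m1, v1, n1 and m2, v2, n2 are numeric vectors holding the mean, variance and
# size of each of the two samples in a pair (hypothetical names).
welch_p <- function(m1, v1, n1, m2, v2, n2) {
  se2 <- v1 / n1 + v2 / n2                       # squared standard error
  t   <- (m1 - m2) / sqrt(se2)                   # Welch t statistic
  df  <- se2^2 / ((v1 / n1)^2 / (n1 - 1) +       # Welch-Satterthwaite
                  (v2 / n2)^2 / (n2 - 1))        # degrees of freedom
  2 * pt(-abs(t), df)                            # two-sided p-value
}

p      <- welch_p(m1, v1, n1, m2, v2, n2)
p_bonf <- p.adjust(p, method = "bonferroni")     # Bonferroni correction
p_fdr  <- p.adjust(p, method = "BH")             # Benjamini-Hochberg FDR
[/code]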
My proposed approach uses MySQL and "R"; the problem is the large number of SQL queries to be executed and the lack of out-of-the-box parallelism in both MySQL and "R".
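One idea I have to cut down on round trips is to fetch the data in batches (one query per block of pairs instead of one query per pair) and spread the batches over several cores. A rough sketch of that idea, assuming a hypothetical measurements table keyed by pair_id and using the DBI/RMySQL and parallel packages on a Unix-like machine:
[code]
library(DBI)
library(parallel)

# Process one batch of pair ids: open a connection per worker (connections
# cannot be shared across forked processes), fetch all values for the batch
# in a single query, then compute the Welch p-value for each pair.
# Table and column names (measurements, pair_id, sample, value) are made up.
process_batch <- function(pair_ids) {
  con <- dbConnect(RMySQL::MySQL(), dbname = "stats", host = "localhost")
  on.exit(dbDisconnect(con))
  sql <- sprintf("SELECT pair_id, sample, value FROM measurements WHERE pair_id IN (%s)",
                 paste(pair_ids, collapse = ","))
  d <- dbGetQuery(con, sql)
  sapply(split(d, d$pair_id), function(g)
    t.test(g$value[g$sample == 1], g$value[g$sample == 2])$p.value)  # Welch is t.test's default
}

# Illustrative only: 10,000 pairs per query, batches fanned out over 8 cores.
n_pairs <- 100e6
batches <- split(seq_len(n_pairs), ceiling(seq_len(n_pairs) / 10000))
p <- unlist(mclapply(batches, process_batch, mc.cores = 8))
[/code]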
The backup plan is PostgreSQL with custom PostgreSQL functions, written either in C++ or PL/R.
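With the PL/R route, the per-pair test could be pushed into the database so the vectors never leave the server. A minimal sketch, assuming PL/R is installed and each sample is passed as a float8 array (the table and column names in the usage comment are made up):
[code]
-- Requires the PL/R language to be installed in the database.
-- The two samples arrive as float8 arrays, which PL/R exposes as R vectors.
CREATE OR REPLACE FUNCTION welch_p(x float8[], y float8[]) RETURNS float8 AS $$
  # Welch's test is the default for R's t.test (var.equal = FALSE)
  t.test(x, y)$p.value
$$ LANGUAGE 'plr';

-- Hypothetical usage: each row of "pairs" holds the two vectors for one pair.
-- INSERT INTO results SELECT pair_id, welch_p(x_values, y_values) FROM pairs;
[/code]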
Would Pentaho make any decisive difference by:
- enabling multicore and/or cluster computation for more efficient resource usage?
- providing a simple, out-of-the-box approach to this class of problem (statistical functions + large data)?
The second question is about reporting: can Pentaho output a report from two databases? It would have to issue a query to each database and merge the results in memory. Since the databases are very large compared with the relatively small report (which would fit in memory), it is impractical to copy the databases into a third database and perform a classic SQL join.
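To be clear about what I mean by merging in memory, outside Pentaho it would amount to something like the following (a sketch in "R" with DBI; the drivers, connection details, and table/column names are only placeholders):
[code]
library(DBI)

# One connection per database; each query returns a small result set that
# fits comfortably in memory. All names here are hypothetical.
con1 <- dbConnect(RMySQL::MySQL(), dbname = "db_a", host = "host_a")
con2 <- dbConnect(RPostgreSQL::PostgreSQL(), dbname = "db_b", host = "host_b")

a <- dbGetQuery(con1, "SELECT id, p_value FROM test_results WHERE p_adjusted < 0.05")
b <- dbGetQuery(con2, "SELECT id, description FROM annotations")

report <- merge(a, b, by = "id")   # the join, done in memory on the small result sets
[/code]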
Thank you,
Razvan
PS. I put "R" in quotes because I kept getting this message:
We're sorry, but your post appears to contain abbreviations that we don't like people to use at the Ranch. Because JavaRanch is an international forum, many of our members are not native English speakers. For that reason, it's important that we all try to write clear, standard English, and avoid abbreviations and SMS shortcuts. See here for more of an explanation. Thanks for understanding.
If the abbreviation occurs within code, you can use code tags to post it successfully.
The specific error message is: "r" is a silly English abbreviation; use "are" instead.