I originally posted this on "Distributed computing" but didn't get a final answer. Here is the question:

We found it slow to run some complex Java 2D graphics (like a go game) on a Unix box, but faster on Windows (maybe many people have noticed this). However, the database is installed on the Unix machine, and we need to query it before creating the Java 2D graphics. Which of the following two approaches makes more sense?

1. Use RMI to split the code between Unix and Windows: let the Unix part access the local database, then pass the generated objects to Windows and let the code on Windows render the Java 2D graphics. But this requires sending objects over the network, which may slow down the process.

2. Run the application mainly on Windows: fetch the raw data from the Unix database directly, build the customized objects on Windows, and render the graphics there. This way we don't need to send any Java objects across the network, right? But I am not sure how slow it is to access a database on a Unix box from Windows, or what potential problems may arise.
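One cheap way to get a feel for the cost of option 1 is to serialize a candidate object graph and look at the byte count, since RMI ships arguments and return values via standard Java serialization. This is only a sketch: `BoardState` is a made-up stand-in for whatever objects you would actually pass, and serialized size is just a proxy for network cost, not a full benchmark.

```java
import java.io.ByteArrayOutputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class SerializedSizeCheck {

    // Hypothetical stand-in for the objects the Unix side would
    // generate and pass to the Windows side over RMI.
    static class BoardState implements Serializable {
        private static final long serialVersionUID = 1L;
        int[][] stones = new int[19][19]; // a go board
        long moveNumber = 42;
    }

    // Serialize the object the same way RMI would and return the size.
    static int serializedSize(Serializable obj) throws Exception {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(obj);
        }
        return bytes.size();
    }

    public static void main(String[] args) throws Exception {
        System.out.println("serialized bytes: "
                + serializedSize(new BoardState()));
    }
}
```

If the objects turn out to be small (a few kilobytes), the RMI transfer in option 1 is unlikely to be the bottleneck; if they are large, option 2 (raw rows over JDBC) may move less data.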
The Performance forum is probably a better place for your question. I'd say you need to profile your application (using something like Borland OptimizeIt) to find out what the bottleneck is before you make any decisions about how to distribute it. Is it the network? The database? The graphics? Inefficient algorithms? There is no need to guess -- the profiler will tell you and steer you in the right direction.
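Even without a full profiler, the advice above can be approximated by wrapping each suspect stage in a timer and comparing the numbers. A minimal sketch; the stage bodies here are placeholders for your actual database query and 2D rendering code:

```java
public class StageTimer {

    // Run one stage of the application and return its wall-clock
    // duration in milliseconds.
    static long timeMillis(Runnable stage) {
        long start = System.nanoTime();
        stage.run();
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) {
        // Placeholders: substitute the real database access and
        // Java 2D drawing calls here.
        long dbMs = timeMillis(() -> { /* run the database query */ });
        long drawMs = timeMillis(() -> { /* build and paint the 2D scene */ });
        System.out.println("db=" + dbMs + "ms draw=" + drawMs + "ms");
    }
}
```

If the database stage dominates, distributing the graphics work won't help much; if rendering dominates, moving it to the faster Windows box (either option) is the right instinct.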