
Which approach is better?

 
rastogi payam
Ranch Hand
Posts: 47
We have a large file on disk (~1 GB). We are reading that file into memory using the java.nio MappedByteBuffer class, and we have limited the in-memory mapping size to 1 MB. Now we need to get the data from this in-memory file, process it, and pass it to the socket in chunks of 10 bytes.
Given this information, which approach is better?
1. Copy the entire mapped data (1 MB) into an array and then process it.
2. In a loop, copy one 10-byte chunk at a time into an array and process it.
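For concreteness, here is a minimal sketch of the two approaches side by side. The file name `demo.bin` is made up, and a 1 MB test file stands in for one mapped window of the real 1 GB file; the commented-out `send(...)` calls mark where the socket write would go.

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class ChunkDemo {
    static final int CHUNK = 10;

    // Approach 1: one bulk copy of the whole mapped region into an array,
    // then slice the array into 10-byte chunks.
    static int processBulk(MappedByteBuffer buf) {
        byte[] all = new byte[buf.remaining()];
        buf.get(all);                        // single bulk copy out of the mapping
        int chunks = 0;
        for (int off = 0; off + CHUNK <= all.length; off += CHUNK) {
            // send(all, off, CHUNK);        // e.g. OutputStream.write(all, off, CHUNK)
            chunks++;
        }
        return chunks;
    }

    // Approach 2: copy 10 bytes at a time straight out of the mapping.
    static int processChunked(MappedByteBuffer buf) {
        byte[] chunk = new byte[CHUNK];
        int chunks = 0;
        while (buf.remaining() >= CHUNK) {
            buf.get(chunk);                  // one small copy per iteration
            // send(chunk, 0, CHUNK);
            chunks++;
        }
        return chunks;
    }

    public static void main(String[] args) throws IOException {
        try (RandomAccessFile raf = new RandomAccessFile("demo.bin", "rw")) {
            raf.setLength(1024 * 1024);      // 1 MB stand-in for one mapped window
            FileChannel ch = raf.getChannel();
            MappedByteBuffer a = ch.map(FileChannel.MapMode.READ_ONLY, 0, ch.size());
            MappedByteBuffer b = ch.map(FileChannel.MapMode.READ_ONLY, 0, ch.size());
            System.out.println(processBulk(a));     // prints 104857
            System.out.println(processChunked(b));  // prints 104857
        }
    }
}
```

Both produce the same 104857 complete chunks; they differ only in how many copy operations touch the mapped buffer.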
 
Claude Moore
Ranch Hand
Posts: 873
I'd bet on approach 1, because (a) in neither scenario can you avoid processing the 1 GB of data in very small chunks, and (b) I presume I/O on the original 1 GB file is minimized if you copy larger blocks of data into memory at once.
To be honest, the real bottleneck will be pushing the data through the socket, regardless of how you read the original bulk data.
 
fred rosenberger
lowercase baba
Bartender
Posts: 12527
according to the given information, the best way is to write the code in whatever way is cleanest and easiest to read, understand, and maintain. Your specs have no speed requirements, so forget about performance entirely. You will almost ALWAYS be better off writing code that makes sense than attempting micro-optimizations that add hundreds of hours to coding and debugging time.
 
Ulf Dittmer
Rancher
Posts: 42972
My gut feeling is that handling 1 MB chunks gives you a much better data-handling-to-data-processing ratio, and is thus preferable.

I agree with Fred that clean code is preferable until you have proven that you need to optimize for speed. But the differences between the two approaches in terms of code cleanliness are likely to be small, so you might as well choose whichever approach gives you better performance.
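One way to keep the clean 10-byte processing loop while still hitting the socket in large writes is to wrap the socket's stream in a BufferedOutputStream. This is only a sketch of that idea, not anything from the thread: `sendInChunks` is a made-up name, and a ByteArrayOutputStream stands in for the real `socket.getOutputStream()`.

```java
import java.io.BufferedOutputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;

public class BufferedSend {
    // Writes data in 10-byte chunks, but the BufferedOutputStream batches
    // them into ~1 MB writes, so the small-chunk loop costs few syscalls.
    static void sendInChunks(byte[] data, OutputStream rawSocketOut) throws IOException {
        OutputStream out = new BufferedOutputStream(rawSocketOut, 1024 * 1024);
        for (int off = 0; off < data.length; off += 10) {
            out.write(data, off, Math.min(10, data.length - off));
        }
        out.flush();    // push any partially filled buffer to the socket
    }

    public static void main(String[] args) throws IOException {
        byte[] data = new byte[1234];
        ByteArrayOutputStream sink = new ByteArrayOutputStream(); // stand-in for a socket stream
        sendInChunks(data, sink);
        System.out.println(sink.size()); // prints 1234
    }
}
```

With this in place, the choice between copying 1 MB or 10 bytes at a time out of the mapping matters even less for socket throughput.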
 