
Copying a large file from a NAS drive takes a long time

 
Greenhorn
Posts: 24
We have the code below to copy a zip file of more than 170 GB from a NAS drive, and it takes a very long time. Please advise how to improve performance.


 
Ranch Hand
Posts: 417
Java
Hello,

Hehehe, I bet it is slow copying 170 GB in chunks of 1 KB ;-)

Just use a bigger buffer, for example 8, 16, or 32 MB. You can go even higher depending on the memory resources available. Experiment and find out what works best for you.
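
For illustration, a minimal sketch of such a copy loop with an 8 MB buffer; the file paths are placeholders, not the original poster's actual ones:

import java.io.*;

public class BigBufferCopy {
    public static void main(String[] args) throws IOException {
        // Hypothetical paths; substitute the real NAS source and local target.
        File source = new File("/mnt/nas/archive.zip");
        File target = new File("/data/archive.zip");

        byte[] buffer = new byte[8 * 1024 * 1024]; // 8 MB instead of 1 KB

        try (InputStream in = new FileInputStream(source);
             OutputStream out = new FileOutputStream(target)) {
            int read;
            while ((read = in.read(buffer)) != -1) {
                out.write(buffer, 0, read);
            }
        }
    }
}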


Take care,
 
A.J. Côté
Ranch Hand
Posts: 417
Java
You may also have a look at java.io.BufferedInputStream and BufferedOutputStream, although I don't think they are needed for a simple copy operation; just making your own buffer bigger should achieve the same thing. Using them could even make things a little slower, since you would be double-buffering.

Nevertheless, keep these two in mind for cases where you do not have the liberty to use your own buffer, for example when a third-party library wants an InputStream or OutputStream as a parameter. In those cases, wrapping the streams in their buffered versions with a well-sized buffer will generally speed things up.

Be aware that the default buffer size is only 8 KB, so make it bigger by creating the streams with the constructor that lets you specify the size.
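
For example, when an API insists on doing its own small reads from a stream you hand it, the wrapping might look like this sketch; consume() is a made-up stand-in for the third-party call, and the path is a placeholder:

import java.io.*;

public class BufferedWrapping {
    // Stand-in for a third-party API that reads the stream itself.
    static void consume(InputStream in) throws IOException {
        while (in.read() != -1) { /* the library does its own small reads */ }
    }

    public static void main(String[] args) throws IOException {
        int bufferSize = 8 * 1024 * 1024; // the default would be only 8 KB

        try (InputStream in = new BufferedInputStream(
                new FileInputStream("/mnt/nas/archive.zip"), bufferSize)) { // hypothetical path
            consume(in);
        }
    }
}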

Cheers,

 
A.J. Côté
Ranch Hand
Posts: 417
Java
This will sound awfully complex, so please just ignore it if it seems like too much. You should achieve reasonable performance simply by making your buffer bigger, e.g. changing 1024 to a much larger value. That's it.

Anyway, if you are really a performance maniac, you could use PipedOutputStream and PipedInputStream so that reading from the NAS and writing to the local disk overlap. The piped streams need two threads in order to work correctly: one thread fills the pipe from the NAS drive while another one writes to the local disk. That should be fast!

bufferSize = 8 MB // for example

Thread 1:
copy from the NAS InputStream to a PipedOutputStream using your own bufferSize buffer

Thread 2:
copy from the PipedInputStream (created with bufferSize or bufferSize*2) to the local file using a second bufferSize buffer
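
A minimal two-thread sketch of that idea, assuming placeholder paths and an 8 MB buffer, with error handling kept to a minimum:

import java.io.*;

public class PipedCopy {
    public static void main(String[] args) throws Exception {
        final int bufferSize = 8 * 1024 * 1024; // 8 MB, tune to taste

        // Pipe with a large internal buffer; the writer side blocks only
        // when the reader side falls behind by more than this amount.
        PipedOutputStream pipeOut = new PipedOutputStream();
        PipedInputStream pipeIn = new PipedInputStream(pipeOut, 2 * bufferSize);

        // Thread 1: read from the NAS and push into the pipe.
        Thread reader = new Thread(() -> {
            try (InputStream in = new FileInputStream("/mnt/nas/archive.zip"); // hypothetical path
                 OutputStream out = pipeOut) {
                byte[] buf = new byte[bufferSize];
                int n;
                while ((n = in.read(buf)) != -1) {
                    out.write(buf, 0, n);
                }
            } catch (IOException e) {
                e.printStackTrace();
            }
        });

        // Thread 2: drain the pipe and write to the local disk.
        Thread writer = new Thread(() -> {
            try (InputStream in = pipeIn;
                 OutputStream out = new FileOutputStream("/data/archive.zip")) { // hypothetical path
                byte[] buf = new byte[bufferSize];
                int n;
                while ((n = in.read(buf)) != -1) {
                    out.write(buf, 0, n);
                }
            } catch (IOException e) {
                e.printStackTrace();
            }
        });

        reader.start();
        writer.start();
        reader.join();
        writer.join();
    }
}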
 