I would really like some elaboration on what is going on here. I understand the general concept: the data is read as bytes from the src file into a buffer of a fixed size X (an array of 1024 bytes in this case). The loop pulls as much as it is allotted from the src, writes what it has (the len variable counts how many bytes were actually read), then fetches the next segment of bytes, and repeats until it runs out of bytes to get from the src.
Is there a history to this method?
Is there anywhere I can get information or insight into the background knowledge this structure builds upon? It honestly confuses me.
I/O operations are generally very expensive, so you want to minimize them. Instead of doing a read and a write for every single byte of the source file, this code does one of each for every 1024 bytes, which typically improves performance a lot.
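A minimal sketch of the kind of loop being discussed (the class name and method are illustrative, not from the original code; note the `!= -1` end-of-stream check, which comes up again below):

```java
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class BufferedCopy {
    // Copies src to dst, reading up to 1024 bytes per I/O operation.
    static void copy(String src, String dst) throws IOException {
        try (InputStream in = new FileInputStream(src);
             OutputStream out = new FileOutputStream(dst)) {
            byte[] buffer = new byte[1024]; // one read fills up to 1024 bytes
            int len;                        // bytes actually read on this pass
            while ((len = in.read(buffer)) != -1) { // -1 signals end of stream
                out.write(buffer, 0, len);  // write only the bytes just read
            }
        }
    }
}
```

Each iteration costs one read and one write regardless of how many bytes (up to 1024) were fetched, which is where the saving over byte-at-a-time copying comes from.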
It's not really necessary to write code like this yourself. Classes like BufferedInputStream do this for you, and for copying files you can use the transferTo()/transferFrom() methods of FileChannel.
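For the FileChannel approach mentioned above, a sketch might look like this (class and method names are illustrative; the loop is needed because transferTo() is allowed to copy fewer bytes than requested):

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class ChannelCopy {
    // Copies src to dst using FileChannel.transferTo(),
    // which can delegate the copy to the operating system.
    static void copy(Path src, Path dst) throws IOException {
        try (FileChannel in = FileChannel.open(src, StandardOpenOption.READ);
             FileChannel out = FileChannel.open(dst,
                     StandardOpenOption.WRITE,
                     StandardOpenOption.CREATE,
                     StandardOpenOption.TRUNCATE_EXISTING)) {
            long pos = 0;
            long size = in.size();
            // transferTo may transfer fewer bytes than asked for,
            // so keep going until the whole file has been copied.
            while (pos < size) {
                pos += in.transferTo(pos, size - pos, out);
            }
        }
    }
}
```

There is no buffer management in sight: the channel implementation handles that, often more efficiently than a hand-rolled loop.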
Dave Tolls wrote:That code will occasionally fail.
It's a common mistake I've seen, but the while clause should be: while ((len = in.read(buffer)) != -1)
If you use the current '> 0', it is possible for the reader to pick up 0 bytes on a read (for example, if something else is using the disk at the time). 0 does not mean EOF; -1 does.
Oh, thanks for this; I'll be sure to amend my practice.
Also, thanks for the feedback. It helped clear up the picture I had assembled in my head of what was going on in the code.
I suppose it was mostly confirmation I was looking for, but I'll definitely have to keep things like those FileChannel transfer methods in mind when revising this code.