Hi,
I was wondering if there is a way to do IO efficiently on large files. Generally, we use buffered streams, which read/load the entire file into memory; we then perform the transformation and write the transformed file back. The problem is that if hundreds of users are working on the same file, or on different large files, concurrently, loading all of them and transforming them can consume a lot of memory. I thought of using file streams instead, but the disk reads/writes could slow down the whole processing. Is there an efficient way of doing this in memory itself? Any help would be appreciated.
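For context, this is roughly the pattern I mean, as a minimal Java sketch. The file names and the toy uppercase "transformation" are just placeholders for illustration:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class WholeFileTransform {
    // Placeholder transformation: uppercase ASCII letters in place.
    static byte[] transform(byte[] data) {
        for (int i = 0; i < data.length; i++) {
            data[i] = (byte) Character.toUpperCase((char) data[i]);
        }
        return data;
    }

    public static void main(String[] args) throws IOException {
        Path in = Path.of("input.txt");
        Path out = Path.of("output.txt");
        Files.write(in, "hello world".getBytes());

        // Load the entire file into memory at once -- this is the step
        // that doesn't scale when hundreds of large files are being
        // processed concurrently.
        byte[] data = Files.readAllBytes(in);

        // Transform in memory and write the result back to disk.
        Files.write(out, transform(data));

        System.out.println(new String(Files.readAllBytes(out))); // HELLO WORLD
    }
}
```

With many concurrent users, each call to `readAllBytes` holds a full copy of its file on the heap for the duration of the transformation, which is where the memory pressure comes from.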