Memory mapping is intimately linked to the virtual memory behavior of the OS. These numbers will vary by OS type, version and configuration.
The NIO MappedByteBuffer is a
Java object wrapped around a chunk of mapped native memory. Unlike C, where the mmap() system call returns a pointer (a virtual address) that your code can dereference directly, the JVM must mediate access to the mapped memory through the buffer's methods. There is also some overhead in setting everything up.
If you're just reading a file sequentially, creating a MappedByteBuffer is unnecessary. Use the read() method of FileChannel and read into a ByteBuffer; that's what they're for. It may take longer to set up the MappedByteBuffer than it would to just read the data.
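For example, a plain sequential read loop might look like this (a minimal sketch; the file name and buffer size are placeholders):

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class SequentialRead {
    public static void main(String[] args) throws IOException {
        Path path = Path.of("data.bin");                         // placeholder file name
        try (FileChannel channel = FileChannel.open(path, StandardOpenOption.READ)) {
            ByteBuffer buffer = ByteBuffer.allocate(64 * 1024);  // 64 KB chunks
            while (channel.read(buffer) != -1) {                 // -1 means end of file
                buffer.flip();                                   // switch from filling to draining
                // ... process buffer.remaining() bytes here ...
                buffer.clear();                                  // ready for the next read
            }
        }
    }
}
```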
Under the hood, most OSes use memory mapping to perform file I/O anyway, and many do predictive read-ahead buffering to boost sequential throughput. If you map the file yourself, you may not get that benefit.
MappedByteBuffers are good for mapping large files and for implementing a sort of persistent, shared memory.
Depending on the OS's virtual memory design, mapping a huge file may consume little or no physical memory or swap space up front (because the pages are backed by the file itself rather than by swap). This lets you appear to have a humongous data array in memory all at once; the data is paged in and out dynamically as it is accessed. It's similar to accessing a file randomly, but you don't need to seek and read chunks: it appears to all be there, all the time.
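A minimal sketch of random access through a read-only mapping (the file name is a placeholder; note that a single MappedByteBuffer can cover at most about 2 GB, so a truly huge file needs several mappings):

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class MappedRandomAccess {
    public static void main(String[] args) throws IOException {
        Path path = Path.of("huge.dat");                          // placeholder file name
        try (FileChannel channel = FileChannel.open(path, StandardOpenOption.READ)) {
            // A single mapping is limited to Integer.MAX_VALUE bytes (~2 GB).
            long size = Math.min(channel.size(), Integer.MAX_VALUE);
            MappedByteBuffer map = channel.map(FileChannel.MapMode.READ_ONLY, 0, size);

            // Touch arbitrary offsets; pages are faulted in from the file on demand,
            // no explicit seek-and-read needed. (Assumes the file is non-empty.)
            byte first = map.get(0);
            byte middle = map.get((int) (size / 2));
            System.out.println(first + " " + middle);
        }
    }
}
```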
If multiple parties map the same file, any updates made by one of them are seen immediately by all the others. This could be used as a status area or scoreboard for cooperating processes to communicate (not all of which need to be Java, by the way). And the content will remain in the file after all the processes exit.
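A rough sketch of that scoreboard idea, assuming a small shared file (scoreboard.dat here is a placeholder) that every cooperating process maps read-write:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class Scoreboard {
    public static void main(String[] args) throws IOException {
        Path path = Path.of("scoreboard.dat");                   // placeholder shared file
        try (FileChannel channel = FileChannel.open(path,
                StandardOpenOption.CREATE, StandardOpenOption.READ, StandardOpenOption.WRITE)) {
            // Make sure the file is at least 1 KB so the mapping is valid everywhere.
            if (channel.size() < 1024) {
                channel.write(ByteBuffer.allocate(1), 1023);
            }
            // Every process that maps this region sees the same bytes.
            MappedByteBuffer board = channel.map(FileChannel.MapMode.READ_WRITE, 0, 1024);

            // Bump a counter at offset 0; other mappers see the new value right away.
            int current = board.getInt(0);
            board.putInt(0, current + 1);

            // The data persists in the file after all processes exit;
            // force() asks the OS to flush it to storage promptly.
            board.force();
        }
    }
}
```

Real cooperating processes would still need to agree on the layout of the region and, if they write concurrently, supply their own synchronization (a FileLock or an agreed-upon protocol); the mapped file gives you shared bytes, not coordination.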
Mapped files have their uses, but mapping a file is not necessarily a faster way to read it.