The File.lastModified() javadoc says: "A long value representing the time the file was last modified, measured in milliseconds since the epoch (00:00:00 GMT, January 1, 1970), or 0L if the file does not exist or if an I/O error occurs."
But on Linux this method rounds the value down to the nearest second, unlike on Windows. (For example, if a file is modified at 1173423665215 ms, the method returns 1173423665000 on Linux.)
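A quick way to see the effect is a set/read round-trip: setLastModified() asks the filesystem to store a millisecond value, and lastModified() reports what it actually kept. A minimal sketch (the temp file and sample timestamp are just for illustration):

```java
import java.io.File;
import java.io.IOException;

public class MtimeGranularity {
    public static void main(String[] args) throws IOException {
        File f = File.createTempFile("mtime", ".tmp");
        long wanted = 1173423665215L;   // a timestamp with a non-zero millisecond part
        f.setLastModified(wanted);      // request millisecond precision
        long got = f.lastModified();    // read back what the filesystem stored
        System.out.println("wanted=" + wanted + " got=" + got);
        // On a filesystem with one-second granularity (e.g. ext3) this
        // typically prints got=1173423665000; on NTFS the milliseconds
        // usually survive.
        f.delete();
    }
}
```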
1) Is there a way to avoid this rounding on Linux? OR
2) Are there any other methods available to get the file's modification time in milliseconds?
3) Is there any method to find out the platform's timestamp precision (1 ms on Windows, 1 s on Linux)?
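On (2): this thread predates it, but JDK 7's java.nio.file API later added Files.getLastModifiedTime(), which returns a FileTime that can carry sub-second detail where the filesystem records it. Whether you actually get anything finer than a second still depends on the filesystem (ext3 stores whole seconds; ext4 and NTFS store finer). A sketch, assuming a file at the hypothetical path somefile.txt:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.attribute.FileTime;
import java.util.concurrent.TimeUnit;

public class ModifiedTime {
    public static void main(String[] args) throws IOException {
        Path p = Paths.get("somefile.txt");   // hypothetical path
        FileTime ft = Files.getLastModifiedTime(p);
        System.out.println("millis: " + ft.toMillis());
        // FileTime can expose finer units where the filesystem records
        // them (e.g. nanoseconds on ext4); on ext3 this is still whole seconds.
        System.out.println("nanos:  " + ft.to(TimeUnit.NANOSECONDS));
    }
}
```

On (3), there is no direct API for querying a platform's timestamp granularity; the set/read round-trip shown above is a workable probe.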
Originally posted by bart zagers: The question is, do you really need millisecond precision? As far as I know, even Windows does not give you true millisecond precision, only an "approximation".
Millisecond precision might not be of much relevance to Rajesh (though his question suggests otherwise), but the Java documentation claims:
Returns: A long value representing the time the file was last modified, measured in milliseconds since the epoch (00:00:00 GMT, January 1, 1970), or 0L if the file does not exist or if an I/O error occurs
which is simply wrong. It doesn't.
java.io.File.lastModified() itself calls java.io.FileSystem.getLastModifiedTime(File f), but I don't have the source of the native implementation. Perhaps I should look into it.
I am not worried about millisecond precision; I am looking for a way to get the same behavior across platforms.
As part of my requirements, I store the time the file was fetched (System.currentTimeMillis()) and, before updating the file, compare that fetched time with file.lastModified() to find out whether the file was updated after the point we read it.
But because of this precision difference, the code behaves differently on different platforms!
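One way to sidestep the mismatch is to compare the filesystem's timestamp with itself rather than with System.currentTimeMillis(): snapshot lastModified() when you read the file, and later test the current value for inequality. Both readings then share the same clock and the same granularity. A minimal sketch (class and method names are mine, not from any library):

```java
import java.io.File;

/**
 * Sketch: a staleness check that does not depend on timestamp
 * granularity or on System.currentTimeMillis(). Snapshot the file's
 * own lastModified() at read time and later compare values.
 */
public class FileSnapshot {
    private final File file;
    private final long mtimeAtRead;   // filesystem timestamp, same granularity both times

    public FileSnapshot(File file) {
        this.file = file;
        this.mtimeAtRead = file.lastModified();   // take the snapshot before reading
    }

    /** True if the file's timestamp has changed since the snapshot. */
    public boolean isStale() {
        return file.lastModified() != mtimeAtRead;
    }
}
```

The remaining blind spot is a writer that modifies the file within the same timestamp tick (one second on ext3); if that matters, compare the file length or a content hash as well.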