java - read from a file only when write is completed

 
Greenhorn
Posts: 26
Hi,
My requirement is to run a thread as a separate process to write bulk data to a file using Java. I must be able to read from the file only when the thread's write operation is completed. Is this possible in Java? If so, could you please give me sample code for that?
Thanks in advance.
[ August 12, 2008: Message edited by: Bear Bibeault ]
 
Ranch Hand
Posts: 479
Hi,

I think if you extend the Observable class and implement the Observer interface, and let the Observable (the write operation) notify the Observer (the read operation) once it has completed, you would be able to get it working.
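For illustration, a minimal sketch along those lines (the class names, file name, and data are made up):

import java.io.FileWriter;
import java.io.IOException;
import java.util.Observable;
import java.util.Observer;

// The writer extends Observable and notifies its observers only after the
// file has been written and closed.
class BulkFileWriter extends Observable implements Runnable {
    private final String fileName;

    BulkFileWriter(String fileName) {
        this.fileName = fileName;
    }

    public void run() {
        try {
            FileWriter out = new FileWriter(fileName);
            try {
                out.write("...bulk data...");   // placeholder for the real bulk write
            } finally {
                out.close();                    // close before notifying
            }
            setChanged();
            notifyObservers(fileName);          // tell observers the write is done
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}

// The reader implements Observer and only touches the file from update().
class FileReadingObserver implements Observer {
    public void update(Observable source, Object arg) {
        String fileName = (String) arg;
        System.out.println("Write finished, safe to read " + fileName);
        // ... open and read the file here ...
    }
}

public class ObserverDemo {
    public static void main(String[] args) {
        BulkFileWriter writer = new BulkFileWriter("bulk.dat");
        writer.addObserver(new FileReadingObserver());
        new Thread(writer).start();
    }
}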

Cheers,
Raj.
 
Bartender
Posts: 4179
Another route would be to take advantage of the ExecutorService framework in java.util.concurrent.

You would start a service and submit the writing task to it, getting a Future object in return.

Your reader then uses the Future#get() method to wait until the writer is done, so it can begin its work.

See: the java.util.concurrent API
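For example, a minimal sketch of that approach (the file name and the write itself are placeholders):

import java.io.FileWriter;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class WriteThenRead {
    public static void main(String[] args) throws Exception {
        final String fileName = "bulk.dat";
        ExecutorService executor = Executors.newSingleThreadExecutor();

        // Submit the writing task; the Future completes when run() returns.
        Future<?> writeDone = executor.submit(new Runnable() {
            public void run() {
                try {
                    FileWriter out = new FileWriter(fileName);
                    try {
                        out.write("...bulk data...");
                    } finally {
                        out.close();
                    }
                } catch (Exception e) {
                    throw new RuntimeException(e);
                }
            }
        });

        writeDone.get();        // blocks until the writer has finished (or failed)
        executor.shutdown();

        // Safe to read the file here.
        System.out.println("Writer finished; reading " + fileName);
    }
}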
 
Sheriff
Posts: 67746
"arun", you have previously been warned on one or more occasions regarding adjusting your display name to meet JavaRanch standards. This is not optional. Please take a look at the JavaRanch Naming Policy and adjust your display name to match it prior to your next post.

Your display name must be a first and a last name separated by a space character, and must not be obviously fictitious.

Be aware that accounts with invalid display names are disabled.

bear
JavaRanch Sheriff
 
Saloon Keeper
Posts: 27764
Thanks for fixing your display name, arun.

It looks like most of the answers come from people thinking specifically about Java's concurrency features, but you didn't actually say that was your environment. I'll step back a little and cover some basic strategies.

Regardless of the language, the hardest thing about this problem is figuring out when the file writing is complete. It's a problem that gave someone I know a lot of grief, since he didn't know some of the tricks I do.

You can tell when the write is complete in one of 3 primary ways:

1. You can monitor the file size and when it stops growing, assume it's done.

2. You can post an indicator (sentinel) when the operation is done.

3. You can have the writing task notify the consumer task(s) when the operation is done. This can further be split into connecting the two tasks directly or having them connect to a shared publish/subscribe facility.

Monitoring file size is unreliable. On the Internet, data transmission can vary in speed and stall for long periods of time. To allow for this, you have to set a fairly long timeout period, which delays the point at which the consumer can actually start working with the data. Also, a transmission failure in which part of the data arrives but not all of it cannot easily be detected.

There's another problem: the polled file size may not accurately reflect the number of bytes written. Some OSes update it only in block-sized amounts, and I've even seen cases where the size wasn't updated at all until the file was closed.
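For what it's worth, a polling loop along these lines is about as good as this approach gets (the poll interval and "stable" count are arbitrary, and all of the caveats above still apply):

import java.io.File;

public class SizePoller {
    // Returns once the file size has stopped changing for a few polls.
    public static void waitUntilStable(File file) throws InterruptedException {
        long lastSize = -1;
        int stablePolls = 0;
        while (stablePolls < 3) {               // require 3 consecutive unchanged polls
            long size = file.length();          // 0 if the file doesn't exist yet
            if (size > 0 && size == lastSize) {
                stablePolls++;
            } else {
                stablePolls = 0;
                lastSize = size;
            }
            Thread.sleep(5000);                 // 5-second poll interval (arbitrary)
        }
    }
}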

Sentinels come in many forms. One of the most common is to simply send a second, very short file after the first. This file might be a single byte, or even zero bytes if that's allowed, since it's the existence of the file that counts, not its content. Because the sentinel is only sent after the data, this is fairly reliable, assuming your transmission facilities don't have asynchronous capabilities that cause data to be received out of order. Of course, one downside is that the sending process has to be modified to send the sentinel file after the data file.
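A minimal sketch of the consumer side of that (both file names are made up; the producer would create the empty .done file after closing the data file):

import java.io.File;

public class SentinelWait {
    public static void main(String[] args) throws Exception {
        File data = new File("bulk.dat");
        File sentinel = new File("bulk.dat.done");

        while (!sentinel.exists()) {            // the data file is complete once the sentinel appears
            Thread.sleep(1000);
        }
        System.out.println("Sentinel present; safe to read " + data);
    }
}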

A sentinel file can also signal that a whole set of files is ready for use. I worked on a mainframe system decades ago where that was a core feature of the processing cycle.

The final approach requires that both producers and consumers adopt a common inter-process communication facility. Depending on the application, this facility may be simple or complex. One of the simplest of all is a semaphore, but that requires both tasks to run in the same JVM. At the other extreme, you might use a large-scale, general-purpose facility such as JMS.
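For the same-JVM case, a minimal sketch; a CountDownLatch stands in here for the semaphore, but the idea is the same: the writer signals and the reader waits.

import java.util.concurrent.CountDownLatch;

public class LatchHandoff {
    public static void main(String[] args) throws InterruptedException {
        final CountDownLatch writeFinished = new CountDownLatch(1);

        Thread writer = new Thread(new Runnable() {
            public void run() {
                // ... write and close the file here ...
                writeFinished.countDown();      // signal: the file is complete
            }
        });

        writer.start();
        writeFinished.await();                  // the reader blocks until the writer signals
        // ... safe to read the file here ...
    }
}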

A variation of this third approach is possible on systems where you can hook into the filesystem itself. In that case, you add logic, triggered by the file-close routine, that posts the event on whatever notification channels you are monitoring.
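As an aside, java.nio.file.WatchService (added in Java 7, after this thread was written) gets close to this idea, though it reports create and modify events rather than true close events, so a sentinel convention is still assumed in this sketch (the directory name and the .done suffix are made up):

import java.nio.file.FileSystems;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardWatchEventKinds;
import java.nio.file.WatchEvent;
import java.nio.file.WatchKey;
import java.nio.file.WatchService;

public class DirectoryWatcher {
    public static void main(String[] args) throws Exception {
        Path dir = Paths.get("incoming");       // must be an existing directory
        WatchService watcher = FileSystems.getDefault().newWatchService();
        dir.register(watcher, StandardWatchEventKinds.ENTRY_CREATE);

        while (true) {
            WatchKey key = watcher.take();      // blocks until something happens in the directory
            for (WatchEvent<?> event : key.pollEvents()) {
                Path created = (Path) event.context();
                if (created.toString().endsWith(".done")) {     // sentinel naming convention
                    System.out.println("Sentinel " + created + " arrived; data file is ready");
                }
            }
            key.reset();
        }
    }
}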
[ August 13, 2008: Message edited by: Tim Holloway ]
 
Author and all-around good cowpoke
Posts: 13078
The more I think about this, the harder it gets.
For example, suppose the writing app posts a JMS message saying it is done, but then gets more data, opens the file again and writes more.

It would appear that some sort of coordinated handshake is required so that the writing process "knows" not to write new data when the reading process is in mid file. This handshake could be done with JMS but working with sockets might be simpler.
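For illustration, the writer's side of such a socket handshake might look something like this (the host, port, and message strings are made up; the reader would run a matching ServerSocket and reply READ_DONE once it has finished with the file):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;

public class WriterHandshake {
    public static void main(String[] args) throws Exception {
        Socket socket = new Socket("localhost", 9000);
        PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
        BufferedReader in = new BufferedReader(new InputStreamReader(socket.getInputStream()));

        // ... write and close the data file here ...
        out.println("WRITE_DONE");              // tell the reader the file is complete

        String reply = in.readLine();           // block until the reader says it is finished
        if ("READ_DONE".equals(reply)) {
            // ... safe to reopen the file and write more data ...
        }
        socket.close();
    }
}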

So the important question is "Does the bulk data writing process ever need to open the file again to write more?"

Bill
 
Tim Holloway
Saloon Keeper
Posts: 27764
Hmmmm. It does say "when the write operation is completed", not "when the file write is completed (and the file is closed)."

At its extreme, you end up with the case of a file follower such as the classic Unix command:

tail -f logfile

I believe in that particular case, the tail app does something like this: spit out everything it finds, note the end-of-file location, close the file, sleep, wake, reopen the file, seek to the noted location, and repeat.

This isn't a universal solution, since buffering differences on some architectures can cause partial line output (in the case of a text file).
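A minimal sketch of that kind of follower in Java, using RandomAccessFile to remember the position between polls (the file name and sleep interval are made up, and the partial-line caveat above applies):

import java.io.RandomAccessFile;

public class FileFollower {
    public static void main(String[] args) throws Exception {
        long position = 0;
        while (true) {
            RandomAccessFile file = new RandomAccessFile("growing.log", "r");
            file.seek(position);                // pick up where the last pass left off
            String line;
            while ((line = file.readLine()) != null) {
                System.out.println(line);       // may emit a partial last line
            }
            position = file.getFilePointer();   // note the end-of-file location
            file.close();
            Thread.sleep(2000);                 // sleep, then repeat
        }
    }
}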

When following a file that posts change events, I normally consolidate the events if they come in faster than I can deal with them, though in some cases that's not feasible.

Obviously, there's no one-size-fits-all solution. It would depend on the platform(s) involved, the resources available and the needs of the applications.
 