Java 1.5 introduced a new package called java.util.concurrent that provides many utility classes to help implement multithreaded applications. Some of the most useful classes are related to JavaDoc:java.util.concurrent.ExecutorService. Simply put, an executor is a class that allows you to run tasks in the background. You create an object of a class that implements either the Runnable interface or the Callable interface and submit it to the executor, and the executor runs it on a background thread. Java provides an implementation of ExecutorService that encapsulates a pool of threads, called JavaDoc:java.util.concurrent.ThreadPoolExecutor. This class provides a very powerful yet simple way of executing tasks in the background.
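
To make that concrete, here is a minimal sketch of submitting a task to an ExecutorService (the class name and pool size are arbitrary choices for the example):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class HelloExecutor {
    public static void main(String[] args) {
        // Create a pool with a couple of background threads (pool size chosen arbitrarily here)
        ExecutorService executor = Executors.newFixedThreadPool(2);

        // Submit a Runnable; the executor runs it on one of its background threads
        executor.submit(new Runnable() {
            public void run() {
                System.out.println("Running in " + Thread.currentThread().getName());
            }
        });

        // No more tasks will be submitted; let running tasks finish, then stop the pool
        executor.shutdown();
    }
}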



Why thread pool?

A question that comes up often is: why should I even use thread pools? I have an application that needs to execute several things in parallel, or maybe I have some processing that takes hours to complete, and I want to split it into chunks and run them in parallel on multiple threads. Why can't I just create a thread for every task that needs to run in parallel, start all the threads, and then wait for them to complete? Simple, no? Not quite. There are two drawbacks to this approach of laissez-faire threading.


  • Threads are not cheap.


  • When a program starts a thread, the OS has to do some initialization to set up the thread's context. Once a thread starts, from the OS's point of view it can be in one of several states: running, waiting, or stopped. When it's running, it is executing some code, and the OS will try its best to assign a CPU to execute it. When the thread is waiting for some IO to occur, or waiting on a synchronized method or a locked object, it is in the wait state. A running thread is costly because it is actively asking for the CPU's attention, and the OS has to do a context switch whenever it switches the CPU between threads. Initializing a thread is also costly, though not as costly as a running thread, because the OS has to spend some effort setting up the thread's context. A thread in the wait state costs almost nothing; the only cost is keeping the thread's context in memory. Most modern OSes have very efficient implementations of semaphores and mutexes, which is what the JVM uses internally when a thread waits on a synchronized call or a locked object.


  • Resources are limited

  • Modern computers are very powerful. They have lots of CPUs, loads of memory, and even more disk space, and they are getting faster and bigger every day. Powerful as they are, they aren't magic boxes. You cannot run 10,000 threads and expect your program to run 10,000 times faster. Computing resources are always going to be limited, no matter how vast they might be. Every piece of code you write is going to use some CPU, some memory, and some IO, and each thread that executes your code uses that CPU, memory, and IO. If you start 10 threads, you use 10 times the resources; if you start 100 threads, you use 100 times the resources. Pretty soon, you are going to run out of resources.

    This is why it is often not beneficial to start a thread for every task. First of all, if you have 10,000 tasks, you cannot start 10,000 threads. The number of threads should be constrained by the resources each task needs and the resources available. For example, suppose your task does a lot of computation in memory, which means it is going to be active 100% of the time, and it needs 500 MB of memory to execute. If you have 6 CPUs and 2 GB of memory, you are going to be memory bound; since 500 MB * 4 = 2 GB, you cannot have more than 4 tasks running at the same time. On the other hand, if you had 4 GB of memory, you would be CPU bound: you cannot have more than 6 tasks running at the same time, because each task demands the attention of a CPU and you have only 6 CPUs. Also, because it is so much cheaper to have a thread waiting, if you are going to be constantly running background tasks it is better to start threads once and keep them waiting for work than to start a new thread every time a task needs to be executed.

    Let's say you are writing a server application that forks off some background processing every time a client makes a request. Also, let's say your background processing is CPU bound and you have 64 CPUs. You are planning for an average load of 40 concurrent requests, but want to be able to support peaks of 1,000 to 10,000 requests. Rather than starting a thread for every request, it's much better to start a pool of, say, 60 threads that wait for requests to arrive. Whenever a request comes in, hand it off to a waiting thread. When the thread finishes executing the task, it goes back to waiting for the next one. If the number of requests is much larger than the number that can be executed simultaneously, the requests wait in a queue for a thread to become available. (See the sketch after this list.)

    Certainly, there are situations when it's cheaper to start and stop threads yourself, which is why Java hasn't removed the Thread API. If you have a long-lived task that executes very sporadically (say, a maintenance job that runs nightly), it's better to just start a thread for it when you need it. However, if you are going to be executing a lot of background tasks frequently, it's better to use a thread pool.
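
As a rough illustration of the server scenario above, here is a sketch of handing requests off to a pool of waiting threads (the pool size of 60, the dispatch method, and the handleRequest helper are assumptions made up for the example):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class RequestDispatcher {
    // 60 worker threads wait for work; extra requests queue up until a thread is free
    private final ExecutorService workers = Executors.newFixedThreadPool(60);

    // Called by the server for every incoming request
    public void dispatch(final String request) {
        workers.submit(new Runnable() {
            public void run() {
                handleRequest(request); // CPU-bound processing happens on a pool thread
            }
        });
    }

    private void handleRequest(String request) {
        // ... do the actual work here ...
    }
}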




How does a thread pool work?

A thread pool is designed using the producer-consumer pattern. The pattern is made up of one (or more) producers, a queue, and multiple consumers. The consumers wait for objects to appear on the queue, and the producers are responsible for putting objects on the queue. When an object appears on the queue, a consumer pops it off and consumes it.

In this case, each thread in the thread pool acts as a consumer, and the objects in the queue are tasks that implement either the Runnable or the Callable interface. The producer is any code that feeds tasks to the queue; this could be your main method, or the method that is invoked when your application receives a request. The Runnable interface has a run method that contains the code to be executed by the background thread; the Callable interface has a call method that serves the same purpose. When the producer puts a Runnable or Callable on the queue, one of the idle threads picks it up and invokes its run/call method. While a thread is executing the run/call method, it is considered active and will not pick up any other tasks from the queue. In the meantime, if another task is placed on the queue, the next idle thread picks it up and starts executing it. When a thread is done executing a task, it goes back to the idle state and waits for another task to appear on the queue.
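
To make the producer-consumer structure visible, here is a sketch that builds a ThreadPoolExecutor directly on top of an explicit work queue (the pool size and the number of tasks are arbitrary values chosen for the example):

import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class ProducerConsumerPool {
    public static void main(String[] args) throws InterruptedException {
        // The work queue: producers put tasks here, pool threads (consumers) take them off
        LinkedBlockingQueue<Runnable> workQueue = new LinkedBlockingQueue<Runnable>();

        // 4 consumer threads; core and max pool size are arbitrary values for this sketch
        ThreadPoolExecutor pool = new ThreadPoolExecutor(4, 4, 0L, TimeUnit.MILLISECONDS, workQueue);

        // The "producer": this loop feeds tasks onto the queue via execute()
        for (int i = 0; i < 10; i++) {
            final int taskNumber = i;
            pool.execute(new Runnable() {
                public void run() {
                    System.out.println("Task " + taskNumber + " consumed by " + Thread.currentThread().getName());
                }
            });
        }

        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
    }
}

In practice you rarely call the ThreadPoolExecutor constructor yourself; the Executors factory methods described later build instances like this one for you.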

Runnable vs Callable

Runnable and Callable are logically equivalent, except that the Callable interface's call method has a return value, whereas the Runnable interface's run method returns void. Use Runnable when you are not interested in the result of the background operation, and use Callable when you want to get the result of the operation.

For example, let's say you are writing a DataDumper application where you read a file, parse each line, and insert the data into a database, and you want a task for each line. You should use Runnable here, because all you are doing is inserting data into the database and you don't need to return any results to the caller. You could implement it something like this:
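
(A minimal sketch; the file name data.txt and the parse and insertIntoDatabase helpers are placeholders, not part of any real API.)

import java.io.BufferedReader;
import java.io.FileReader;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class DataDumper {
    public static void main(String[] args) throws Exception {
        ExecutorService executor = Executors.newFixedThreadPool(4);

        BufferedReader reader = new BufferedReader(new FileReader("data.txt"));
        String line;
        while ((line = reader.readLine()) != null) {
            final String currentLine = line;
            // One Runnable per line: parse it and insert it; no result is returned
            executor.submit(new Runnable() {
                public void run() {
                    insertIntoDatabase(parse(currentLine));
                }
            });
        }
        reader.close();

        // No more tasks; wait for the submitted tasks to finish, then exit
        executor.shutdown();
        executor.awaitTermination(1, TimeUnit.HOURS);
    }

    private static Object parse(String line) { /* parse the line */ return line; }

    private static void insertIntoDatabase(Object record) { /* JDBC insert goes here */ }
}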


Now let's say you are writing an application that does some computation based on the data in each line, and at the end you want to aggregate the results. In this case, if you have a task for each line, you should use Callable, because you want to return data from each task to the producer so the producer can aggregate the results. It might look something like this:
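
(Again a sketch; readLines and computeValue are made-up helpers standing in for your own file reading and per-line computation.)

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class LineAggregator {
    public static void main(String[] args) throws Exception {
        ExecutorService executor = Executors.newFixedThreadPool(4);
        List<Future<Long>> results = new ArrayList<Future<Long>>();

        // One Callable per line; each returns a partial result to the producer
        for (final String line : readLines()) {
            results.add(executor.submit(new Callable<Long>() {
                public Long call() {
                    return computeValue(line); // some computation based on the line
                }
            }));
        }

        // Aggregate the partial results; get() blocks until each task is done
        long total = 0;
        for (Future<Long> result : results) {
            total += result.get();
        }
        System.out.println("Total: " + total);

        executor.shutdown();
    }

    private static List<String> readLines() { /* read the file */ return new ArrayList<String>(); }

    private static long computeValue(String line) { /* parse and compute */ return line.length(); }
}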



Future

What is this Future class, you say? Java can predict the future? the FUTURE.... THE FUTUUURRE!! No, Java doesn't predict the future. Future is basically a wrapper class that wraps the result of a Callable. When you submit a Callable task to the executor, it returns a Future object. The Future object has a get method that lets you retrieve the result of that Callable (the value that the call method returned). If the task is complete, get returns the result of the Callable immediately; if the task is still running or waiting in the queue, get blocks until the task completes. Future also has helper methods that let you check whether the task is done, which allows you to do some progress tracking. For example, if you submit 100 tasks to the queue, you will have 100 Future objects, and you can calculate how many of them are done by counting the Future objects whose isDone method returns true.

Actually, submitting a Runnable also returns a Future object. The get method of a Future associated with a Runnable task returns null once the task is complete, so you can use Future to track the progress of Runnable tasks as well.

In fact, since Future provides the status of each task, you can use Futures instead of the shutdown/awaitTermination pair shown in the code above. This means you don't have to shut down the thread pool after submitting the tasks; you can keep the thread pool alive and keep submitting tasks to it.
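
For illustration, here is a rough sketch of both ideas, assuming you have collected the Future objects returned by submit into a list (the method names below are made up for the example):

import java.util.List;
import java.util.concurrent.Future;

public class ProgressTracker {
    // Counts how many of the submitted tasks have finished so far
    static int countDone(List<Future<Long>> futures) {
        int done = 0;
        for (Future<Long> future : futures) {
            if (future.isDone()) {
                done++;
            }
        }
        return done;
    }

    // Blocks until every task has finished, without shutting the pool down
    static void waitForAll(List<Future<Long>> futures) throws Exception {
        for (Future<Long> future : futures) {
            future.get();
        }
    }
}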



Different kinds of thread pools

Java comes with several kinds of thread pools. JavaDoc:java.util.concurrent.Executors is a utility class that provides factory methods to create them.

  • Fixed thread pool

  • This is a thread pool where the number of threads is always fixed. You create it using the Executors.newFixedThreadPool method. Use this kind of pool when your use case requires a large number of tasks to start at the same time (for example, when a job gets broken down into smaller tasks).

  • Cached thread pool

  • This is a thread pool that creates new threads as it needs them. If there are idle threads in the pool, they execute the tasks; if there are no idle threads, the pool creates new ones. If a thread sits idle for 60 seconds, the pool stops it. You create this using Executors.newCachedThreadPool. Use it when you have very frequent, very short-lived jobs. This kind of pool can slow down the whole system if there are spikes in the number of tasks, so be careful with this one.

  • Single threaded thread pool

  • This is simply a thread pool with one thread. You create it using Executors.newSingleThreadExecutor. Use it if you want to run only one background task at a time.
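
For reference, here is a minimal sketch showing the three factory methods side by side (the pool size of 10 is an arbitrary choice for the example):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PoolKinds {
    public static void main(String[] args) {
        // A pool with exactly 10 threads
        ExecutorService fixed = Executors.newFixedThreadPool(10);

        // A pool that grows on demand and retires threads idle for 60 seconds
        ExecutorService cached = Executors.newCachedThreadPool();

        // A pool with a single background thread
        ExecutorService single = Executors.newSingleThreadExecutor();

        fixed.shutdown();
        cached.shutdown();
        single.shutdown();
    }
}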



When not to use thread pool

There is a common misconception that just making your application multithreaded will make it faster. That is not always true. Parallelizing any solution adds overhead, and running it multithreaded may actually make it run slower. The gains of multithreading are realized only when you are doing lots and lots of processing.

Ideally, the tasks should be independent of each other. If the tasks share a common resource, that resource has to be synchronized, which means threads will end up waiting for each other. Ideally, a task shouldn't go through synchronization blocks at all. If your application needs heavy synchronization, take a step back and ask yourself: "Have I designed this the right way?" and "Does the problem lend itself to parallelization?" Note that the database is a shared resource too. Most databases put read/write locks on rows when you operate on them, which can cause threads to wait for each other if they try to operate on the exact same record.

Parallelizing an application can also mean a loss of transactionality. Let's say you had a huge data load operation that took hours to execute, you ran the whole thing in one transaction, and your whole design relied on the fact that if something failed in the middle, the transaction would roll back. Guess what? If you parallelize the data load by splitting the records into batches and creating a task per batch, each task has to run in its own transaction; you cannot keep one transaction open across threads. If one task fails, you can stop all the other tasks, but then you will have to manually roll back the data that was already committed.



    ThreadsAndSynchronizationFaq
     