
Doug Lea PooledExecutor won't grow larger than minimumPoolSize

 
Michael Remijan
Author
Is anyone familiar with the Doug Lea PooledExecutor thread pool? I've configured my pool to have minimumPoolSize=10 and maximumPoolSize=50. I wrote a test which executes 200 runnables. I expected the thread pool size - as returned by getPoolSize() - to eventually reach 50, indicating that I've gotten to the maximum number of threads I configured for the pool. However, this does not seem to happen. The 200 runnables get queued up - using a LinkedQueue - but no matter how many are in the queue, the size of the pool - as returned by getPoolSize() - never gets bigger than minimumPoolSize. It's as if minimumPoolSize were being used as the maximum number of threads the pool may have. Has anyone else seen this behavior? Is this the way it's supposed to work?
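Roughly, my test looks like this (a sketch rather than my exact code; the sleeping task and the loop stand in for my real runnables, and the class/method names are from EDU.oswego.cs.dl.util.concurrent):

import EDU.oswego.cs.dl.util.concurrent.LinkedQueue;
import EDU.oswego.cs.dl.util.concurrent.PooledExecutor;

public class PoolSizeTest {
    public static void main(String[] args) throws InterruptedException {
        // Unbounded LinkedQueue, maximum of 50 threads, minimum of 10.
        PooledExecutor pool = new PooledExecutor(new LinkedQueue(), 50);
        pool.setMinimumPoolSize(10);

        for (int i = 0; i < 200; i++) {
            pool.execute(new Runnable() {
                public void run() {
                    try { Thread.sleep(1000); } catch (InterruptedException e) { }
                }
            });
            // Never reports more than 10, even with a long backlog in the queue.
            System.out.println("pool size = " + pool.getPoolSize());
        }
        pool.shutdownAfterProcessingCurrentlyQueuedTasks();
    }
}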
 
Steve Luke
Bartender
I assume you are talking about this: PooledExecutor, which also has a built-in equivalent in Java 1.5+: ThreadPoolExecutor.

Both versions of the pool behave the same way, and the behavior can be summed up from the API:

Queueing
...
When given a choice, this pool always prefers adding a new thread rather than queueing if there are currently fewer than the current getMinimumPoolSize threads running, but otherwise always prefers queuing a request rather than adding a new thread. Thus, if you use an unbounded buffer, you will never have more than getMinimumPoolSize threads running.



So as you add new tasks to the Executor: if there are fewer than 'minimum' threads, a new thread is added. If the 'minimum' threads have already been created, the task gets queued. If the queue is full, more threads are created, up to the 'maximum' thread count. So you are probably either working with an unbounded queue (so the queue never fills up) or you are still in the range where the queue has yet to fill.
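A quick sketch with the Java 1.5 ThreadPoolExecutor shows the difference (the sizes and the sleeping task are just example values, not anything from your code):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class QueueVsThreads {
    public static void main(String[] args) {
        // Unbounded queue: the pool never grows past the core (minimum) size of 10.
        ThreadPoolExecutor unbounded = new ThreadPoolExecutor(
                10, 50, 60L, TimeUnit.SECONDS, new LinkedBlockingQueue<Runnable>());

        // Bounded queue of 50: once 50 tasks are waiting, extra threads are
        // created, up to the maximum of 50.
        ThreadPoolExecutor bounded = new ThreadPoolExecutor(
                10, 50, 60L, TimeUnit.SECONDS, new ArrayBlockingQueue<Runnable>(50));

        submit("unbounded", unbounded); // pool size tops out at 10
        submit("bounded", bounded);     // pool size climbs toward 50

        unbounded.shutdown();
        bounded.shutdown();
    }

    private static void submit(String label, ThreadPoolExecutor pool) {
        for (int i = 0; i < 100; i++) { // 10 to threads + 50 queued + 40 extra threads
            pool.execute(new Runnable() {
                public void run() {
                    try { Thread.sleep(1000); } catch (InterruptedException e) { }
                }
            });
            System.out.println(label + " pool size = " + pool.getPoolSize());
        }
    }
}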
 
Michael Remijan
Author
Is there a way to change this behavior? It seems a little counterintuitive to me that if I add, say, 1000 Runnables to the queue, my pool would not grow to 50 when I set minimumPoolSize=10 and maximumPoolSize=50. I want to be able to start background processing, so I don't want to block when adding to the pool, but at the same time I want my pool to grow to its maximum size to process all the Runnables. I'm guessing setting both the minimum and maximum pool size to 50 (or whatever I want the max to be) would be the way to do this?
 
Steve Luke
Bartender
Yeah, the easiest route is to set both min and max to 50, but then you have to worry about timeouts. In some implementations the threads below the minimum count don't time out, which means you will always have 50 threads even if they are all idle.
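With the built-in ThreadPoolExecutor the min == max setup looks something like this (the numbers are just examples); note that allowCoreThreadTimeOut was only added in Java 6, so on 1.5 the 50 'core' threads really will hang around when idle:

import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class FixedFiftyPool {
    public static ThreadPoolExecutor create() {
        // core == max == 50: the pool grows straight to 50 as tasks arrive.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                50, 50, 30L, TimeUnit.SECONDS, new LinkedBlockingQueue<Runnable>());
        // Java 6 and later only: let even the 'core' threads die after 30s idle,
        // otherwise all 50 threads stick around forever.
        pool.allowCoreThreadTimeOut(true);
        return pool;
    }
}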

The alternative is to provide a fixed-size queue with a task rejection policy (sorry, I forget the exact term and can't look it up now, but it is in the API) that reacts to rejected tasks in a specific fashion. The default is to wait until the task can be added to the queue, but there are alternatives, and if one doesn't suit your needs you can implement your own.
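(If I remember right, the term in java.util.concurrent is RejectedExecutionHandler; in the Doug Lea library the equivalent is the blocked execution policy you set with methods like waitWhenBlocked() or runWhenBlocked().) A sketch of the bounded-queue idea with the caller-runs policy, just as one example, with placeholder sizes:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class BoundedQueuePool {
    public static ThreadPoolExecutor create() {
        // Queue holds at most 100 waiting tasks; when it is full the pool grows
        // toward 50 threads, and once both are exhausted the caller-runs policy
        // executes the task on the submitting thread instead of dropping it.
        return new ThreadPoolExecutor(
                10, 50, 60L, TimeUnit.SECONDS,
                new ArrayBlockingQueue<Runnable>(100),
                new ThreadPoolExecutor.CallerRunsPolicy());
    }
}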
 
Michael Remijan
Author
In the documentation for EDU.oswego.cs.dl.util.concurrent.PooledExecutor, right above the part you quoted, it also says:

If you are sure that this cannot happen, then you can instead supply a queue of some sort (for example, a BoundedBuffer or LinkedQueue) in the constructor. This will cause new commands to be queued in cases where all MaximumPoolSize threads are busy.



This really sounds to me like once the maximum number of threads - as defined by MaximumPoolSize - is reached, commands will start going into the queue. The pool should then grab the commands out of the queue. Once processing has quieted down, the pool should return to its minimum size.

But, as you pointed out, in the documentation about queues it is stated that the queue is preferred once MinimumPoolSize is reached. This behavior is clearer looking at the code. First, threads up to MinimumPoolSize are created. After that is reached, an attempt is made to put the command into the queue. Only if the attempt to enqueue fails does the pool start to grow, up to the maximum. If the maximum has been reached, the rejection policy is asked what to do.

This behavior is odd and I can't quite figure out why it works this way. When I think of a thread pool, I think of a minimum number of threads sitting there waiting to process commands. A command comes in and goes to a thread; if there are no threads available, new threads are added until the maximum is reached. If the maximum is reached and commands are still coming in, then the requests start queuing. If the maximum size of the queue is then reached, some sort of rejection policy is consulted on what to do next. I'd like a pool which does this.
 
Steve Luke
Bartender

Michael Remijan wrote:...
This behavior is odd and I can't quite figure out why it works this way.



Yeah, when I think of how it works I tend to think of there being two thread pools. The first (from 0 to minimumThreadCount) is the main thread pool, which represents your best-effort compromise between speed (more threads) and resources. The queue and the threads between minimum and maximumThreadCount are fallback behaviors. What happens when your best compromise between speed and resources leaves tasks unable to complete? Queue them up so they can be gotten to when a thread has time. But even that may need a fallback mechanism: if tasks need to be done in a timely fashion, or just storing those tasks is memory intensive, then you need some way to relieve the burden, and this is where the extra thread pool (the count between minimum and maximum threads) comes into play. Using more than the minimum number of threads is non-optimal, so try to avoid it; if you can't, then go ahead and make more threads, but let those threads time out if they don't get used, so we drop back down to the optimal situation when we can.

This mental picture is helped by the Java 1.5 java.util.concurrent implementation, which calls the threads between 0 and minimumThreadCount the 'core pool.' I have no idea if this is how Doug Lea envisioned it when he developed the idea, but it works for me.

When I think of a thread pool, I think of a minimum number of threads sitting there waiting to process commands. A command comes in and goes to a thread; if there are no threads available, new threads are added until the maximum is reached. If the maximum is reached and commands are still coming in, then the requests start queuing. If the maximum size of the queue is then reached, some sort of rejection policy is consulted on what to do next. I'd like a pool which does this.



So basically this is what you want to do, if I understand:
When a new task comes in ...
1) Either pre-start the minimum threads, or prefer making a new thread when fewer than the minimum number of threads exist
2) If the minimum threads already exist, send the task to an already present, idle thread
3) If there are no idle threads and fewer than the maximum threads are running, add a new thread to execute the task
4) If the maximum number of threads are already running, queue the task
5) If the queue is full, send it to a rejection policy

This is essentially what happens when you have minimum == maximum thread count, with the exception of step 2. If that step is important to you, then you might consider using a synchronous queue to send tasks to the pool. A synchronous queue is essentially a zero-sized blocking queue (I think in the library you are using they are called SynchronousChannel), so when a task is added to the queue it is either immediately taken by a thread or queueing fails, forcing a new non-core thread to be made. If all threads have already been made, it would be sent to the rejection policy.
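In the java.util.concurrent version that would look something like this (the sizes are placeholders; SynchronousQueue is the zero-capacity hand-off I mean):

import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class HandOffPool {
    public static ThreadPoolExecutor create() {
        // A submitted task is either taken immediately by an idle thread, or the
        // hand-off fails and the pool creates a new thread (up to 50). Only when
        // all 50 threads are busy does the rejection policy see the task.
        return new ThreadPoolExecutor(
                10, 50, 30L, TimeUnit.SECONDS, new SynchronousQueue<Runnable>());
    }
}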

And here is where your custom code comes in. You implement a rejection policy which takes the failed task and re-queues it into a fixed-size queue, and if that fixed-size queue fills up, passes it on to one of the pre-implemented rejection policies. The trick is then to get tasks from this queue back into the execution circuit, which you could do either by using a 're-filling' thread which attempts to re-execute the task, or by modifying the Worker implementation to check this back-up queue before using the 'new task' queue.
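One possible sketch of that idea using the java.util.concurrent names (RequeueHandler and the queue size are made up for illustration, and the 're-filling' part is only described in a comment):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.RejectedExecutionHandler;
import java.util.concurrent.ThreadPoolExecutor;

public class RequeueHandler implements RejectedExecutionHandler {
    // Bounded back-up queue for tasks the pool could not take directly.
    private final BlockingQueue<Runnable> backup = new ArrayBlockingQueue<Runnable>(1000);
    // If even the back-up queue is full, fall back to a standard policy.
    private final RejectedExecutionHandler fallback = new ThreadPoolExecutor.CallerRunsPolicy();

    public void rejectedExecution(Runnable task, ThreadPoolExecutor pool) {
        if (!backup.offer(task)) {
            fallback.rejectedExecution(task, pool);
        }
    }

    // A separate 're-filling' thread (or a modified worker) would drain this
    // queue and call pool.execute(task) again when the pool has capacity.
    public BlockingQueue<Runnable> getBackupQueue() {
        return backup;
    }
}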
 