Java EE 8 High Performance

Ranch Hand
Posts: 36
What is the trickiest part of concurrency in Java 8, and how does this book help?
(Optional question: did anything significant change with the new releases?)
Posts: 13
Hi Thomas,

When it comes to concurrency, the goal is to avoid locks as much as possible. A simple example is caching: if you add a cache to skip some database calls, but every cache access requires a lock shared by the whole application,
then your optimization quickly becomes a degradation, and instead of a performance improvement you will see worse figures.
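To make that concrete, here is a minimal sketch (not from the book; `QueryCache`, `lookup` and `loadFromDatabase` are invented names) of a cache that avoids a whole-application lock by using `ConcurrentHashMap.computeIfAbsent`, which only contends on the individual key:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class QueryCache {
    // A single coarse lock (e.g. synchronizing on the whole map) would
    // serialize every caller; ConcurrentHashMap only blocks competing
    // writers of the same key.
    private final Map<String, String> cache = new ConcurrentHashMap<>();

    String lookup(String key) {
        // Computed at most once per key, then served from the cache
        return cache.computeIfAbsent(key, k -> loadFromDatabase(k));
    }

    String loadFromDatabase(String key) {
        return "row-for-" + key; // placeholder for a real JDBC/JPA call
    }

    public static void main(String[] args) {
        QueryCache c = new QueryCache();
        System.out.println(c.lookup("42")); // hits "the database" once
        System.out.println(c.lookup("42")); // served from the cache
    }
}
```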

What is interesting with Java 8 is that it brings new APIs that handle this more smoothly and naturally.
The nice part is that these APIs were integrated with Java EE (or are easy to integrate) pretty quickly, so you can get the best of both worlds without much effort.

To illustrate, I'll take a simple example: until Java 7/Java EE 7, asynchronism was mainly handled through futures, and there was no natural way to be notified of the completion of an asynchronous task.
The only option was to block until the result arrived (or to test whether it was already there). This is a poor solution: it forces you to "poll" for the result, which in the best case means dedicating another thread (or thread pool) to the waiting.
Since Java 8 you have CompletionStage (better known through CompletableFuture, its default implementation). The nice thing about this Java SE API is that it lets you chain processing onto the result before it is available. No more old polling pattern and no need to block at all: you can be 100% reactive and simply declare what to do with the data once it arrives (convert it, store it, etc.).
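A small standalone sketch of that chaining style (the `supplyAsync` supplier stands in for any remote or database call; the final `join()` is only there so the demo has something to print, real reactive code would keep chaining instead):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionStage;

public class ChainingDemo {
    public static void main(String[] args) {
        // supplyAsync runs on a pool thread; the caller never blocks
        CompletionStage<String> stage = CompletableFuture
                .supplyAsync(() -> "42")         // e.g. a remote call
                .thenApply(Integer::parseInt)    // runs when the value arrives
                .thenApply(n -> n * 2)           // further transformation
                .thenApply(n -> "result=" + n);  // format/store, still non-blocking

        // Demo only: block to show the outcome
        System.out.println(stage.toCompletableFuture().join()); // result=84
    }
}
```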
CDI 2.0 embraced this API with its asynchronous events: when you fire an asynchronous CDI event, you get the result back as a CompletionStage, so you can chain all the processing without waiting for the data to be there.
The servlet layer has had an AsyncContext since Java EE 6, which JAX-RS 2.0 (Java EE 7) builds on for its AsyncResponse. All of that means you can now (in Java EE 8) be asynchronous from the network layer (NIO) up, using a JAX-RS AsyncResponse and completing your HTTP response from a CompletionStage you got from CDI, for instance. Threads are then only used to process data, never to wait for data to become available!

This is only one way to become reactive in Java EE 8; in the book I show how to do the same from a WebSocket layer, for instance, but the spirit is the same.

In terms of raw performance this does have an overhead, but you quickly gain in scalability because you optimize CPU and thread usage (the goal is to use a thread only when there is actual processing to do, not to wait for a response to come).

With this new development pattern, the tricky part is probably exception handling. It is no longer a try/catch: you rely on callbacks for errors, just as for the main processing. That is not hard in itself, but it requires a small shift in mindset,
so it can take some time to get used to that kind of thinking.
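For example, with CompletableFuture the error callback is `exceptionally` (or `handle`): a failure inside the pipeline skips the remaining transformations and lands in the recovery callback instead of in a try/catch. A minimal sketch:

```java
import java.util.concurrent.CompletableFuture;

public class AsyncErrors {
    public static void main(String[] args) {
        String out = CompletableFuture
                .supplyAsync(() -> Integer.parseInt("oops")) // throws inside the pipeline
                .thenApply(n -> "value=" + n)                // skipped on failure
                .exceptionally(ex ->                         // the "catch" callback
                        "fallback: " + ex.getCause().getClass().getSimpleName())
                .join();
        System.out.println(out); // fallback: NumberFormatException
    }
}
```

Note that the throwable passed to `exceptionally` wraps the original exception in a CompletionException, hence the `getCause()` call.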

The other tricky part of EE concurrency is context propagation. When you use the EE concurrency utilities (the Java EE thread pools, to summarize them), your task generally executes on a different thread than the caller's.
The question then is: what is my "EE context" in this new thread? Is my @RequestScoped bean the same as before? And so on. This can lead to surprises: for instance, if a @RequestScoped bean is injected into another bean, and that bean triggers a lambda using the request-scoped field, the lambda will likely see a different instance of the request-scoped bean, because you effectively get one instance per "thread". The book has a part explaining what the container does, which should give you the knowledge to identify this kind of issue quickly.
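The container-managed @RequestScoped case cannot be shown outside a server, but a plain Java SE analogy with ThreadLocal (my comparison, not the book's) shows the same surprise: state that is bound to the caller's thread is simply not there on the pool thread running the lambda.

```java
import java.util.concurrent.CompletableFuture;

public class ContextDemo {
    // Analogy only: a request-scoped proxy behaves much like thread-bound state
    static final ThreadLocal<String> requestId =
            ThreadLocal.withInitial(() -> "none");

    public static void main(String[] args) {
        requestId.set("req-1"); // "context" visible on the caller thread

        String seenByCaller = requestId.get();
        // The lambda runs on a pool thread: it sees the initial value, not req-1
        String seenByPool = CompletableFuture
                .supplyAsync(() -> requestId.get())
                .join();

        System.out.println(seenByCaller + " vs " + seenByPool); // req-1 vs none
    }
}
```

The EE ManagedExecutorService exists precisely to propagate (part of) that context for you, which is why knowing what the container does matters.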

This is the approach I took with this book: give the reader enough knowledge not to be lost when working on an application, even one they did not write themselves.

Thomas Zink
Ranch Hand
Posts: 36
Thanks for the thorough reply.