Is the @ApplicationScoped instance shared among all users?

 
Cedric Bosch
Ranch Hand
I'm confused about how an @ApplicationScoped bean is accessed by different users. As I understand it, @ApplicationScoped means there will be only one instance of the class.

Since there is only one instance, does that mean every user visiting a JSF page that references the bean in an EL expression (something like the sketch below) accesses the instance sequentially? That could make it a bottleneck. Or is a copy handed to each request thread that asks for it?
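For illustration, a hypothetical bean of the kind I mean (the class, name, and property are made-up placeholders, not from a real application); a page would reference it as #{appInfo.welcomeMessage}:

// Hypothetical application-scoped bean -- the names are placeholders.
import javax.enterprise.context.ApplicationScoped;
import javax.inject.Named;

@Named("appInfo")
@ApplicationScoped
public class AppInfoBean {

    private String welcomeMessage = "Hello from the single shared instance";

    // A page would read this with the EL expression #{appInfo.welcomeMessage}
    public String getWelcomeMessage() {
        return welcomeMessage;
    }
}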

Thanks in advance Tim, as it is often you who responds to my questions :p.

 
Tim Holloway
Saloon Keeper
JSF Application Scope is 100% identical to standard J2EE Application Scope. Because that's what it is, with the addition of JSF's automatic instantiation mechanism.

Application scope objects are shared with every webapp user, true. Because webapp requests are processed concurrently, that means that not only can multiple users be accessing an application-scope object at the same time, but also the same user can be making multiple simultaneous accesses. Therefore any accesses to the application-scope object should be made thread-safe if there is a possibility for trouble. For example, if I store a List of SelectItems in an application scope bean for a shared menu, that's probably not going to need full synchronization, since it's read-mostly access, and you can replace the list as an atomic operation. On the other hand, a visitor counter would be best managed as a synchronized property.
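A rough sketch of that distinction, assuming a CDI-managed bean (the class and member names are invented for illustration, and an AtomicLong stands in for the "synchronized property"):

import java.util.List;
import java.util.concurrent.atomic.AtomicLong;
import javax.enterprise.context.ApplicationScoped;
import javax.faces.model.SelectItem;
import javax.inject.Named;

// Illustrative only -- all names are made up.
@Named("sharedData")
@ApplicationScoped
public class SharedDataBean {

    // Read-mostly data: built once, then only ever replaced as a whole, never
    // mutated in place. Swapping the reference is atomic for readers.
    private volatile List<SelectItem> menuItems = List.of();

    // A counter is a read/modify/write operation, so it needs real synchronization.
    private final AtomicLong visitorCount = new AtomicLong();

    public List<SelectItem> getMenuItems() {
        return menuItems;
    }

    public void replaceMenuItems(List<SelectItem> newItems) {
        menuItems = List.copyOf(newItems);   // publish an immutable snapshot
    }

    public long nextVisitorNumber() {
        return visitorCount.incrementAndGet();
    }
}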

If you're sharp, you'll notice that if a single user can access an application-scope object multiple times simultaneously, the same can be said of the user's session-scope objects. In practice, however, a given user's actions aren't as likely to interact in potentially harmful ways, so we don't usually design for concurrency on session-scope objects. Keeps the logic simple and reduces overhead.
 
Cedric Bosch
Ranch Hand

Tim Holloway wrote: that means that not only can multiple users be accessing an application-scope object at the same time, but also the same user can be making multiple simultaneous accesses.



Thanks, that clears up a lot of the grey area I had! So basically, as long as the application-scoped object is read-only (by read-only I mean that no data is ever written to the object's fields), there won't be any problem, which is how I use @ApplicationScoped beans. Still, I have a follow-up: how? Since there is only one instance of the object, how do multiple clients (which I'm guessing are threads) read the data simultaneously? Are there multiple copies of the object? I just fail to see how the single object in memory is read in parallel unless the memory is duplicated. My understanding is that threads cannot read the same place in memory in parallel. Maybe this question is beyond the scope of JSF and is really about multithreading, I don't know.
 
Marshal

Cedric Bosch wrote:My understanding is that threads cannot read the same place in memory in parallel.



Really? I don't see why not, but I must say the idea never occurred to me. My assumption was that threads (i.e. processor cores) could read a memory location simultaneously, but that's just an assumption. Do you have some basis for it or is it just an assumption on your part?
 
Tim Holloway
Saloon Keeper
There's threads and then there are "threads".

On the old single-core CPU systems, multiple threads could be running, but only one would actually be active at any given point in time. So multi-thread access had no issues with multiple threads reading the same physical memory location as only one thread could be physically doing so at a time.

With the advent of multi-processor and multi-core CPU systems, this was no longer true, and even less so when you start talking pipelined architectures where several instructions can be in different stages of execution simultaneously per core. Plus you have the additional issues of multiple cache layers with stuff percolating up and down.

Rest assured, however, that the hardware designers have taken all this into account in order to ensure consistent operation and that yes, indeed, you can safely have multiple threads reading a given memory location concurrently and simultaneously. It's all sorted out in the hardware.

Writing, on the other hand, is a touchier subject. Multi-processing systems typically have special "spin lock" instructions to ensure that multiple accessors are granted read/modify access according to precise pre-determined rules. Spin locks are used when very-low-overhead synchronization is required. Java synchronization is mostly at a higher level, so while spin locks may help control the gates to the Java synchronization mechanism, higher-level services handle the sync control for the extended term.
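As a toy illustration of the spin-lock idea in Java terms (just a sketch built on AtomicBoolean, not how the JVM actually implements synchronized; real code should prefer java.util.concurrent locks):

import java.util.concurrent.atomic.AtomicBoolean;

// Toy spin lock: busy-waits until it can atomically flip false -> true.
public class SpinLock {

    private final AtomicBoolean locked = new AtomicBoolean(false);

    public void lock() {
        while (!locked.compareAndSet(false, true)) {
            Thread.onSpinWait();   // hint to the CPU that we're just spinning (Java 9+)
        }
    }

    public void unlock() {
        locked.set(false);
    }
}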

Bottom line: you can safely read, but if you want to write, you generally need Java synchronization.
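A small sketch of that bottom line with plain Java threads (nothing JSF-specific, and the thread and iteration counts are arbitrary): concurrent reads are fine, but unsynchronized read/modify/write loses updates:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class SharedCounterDemo {

    static int unsafeCount = 0;               // incremented without synchronization
    static int safeCount = 0;                 // incremented under a lock
    static final Object LOCK = new Object();

    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(8);
        for (int t = 0; t < 8; t++) {
            pool.submit(() -> {
                for (int i = 0; i < 100_000; i++) {
                    unsafeCount++;            // lost updates are likely here
                    synchronized (LOCK) {
                        safeCount++;          // reliably ends at 800000
                    }
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        System.out.println("unsafe: " + unsafeCount + ", safe: " + safeCount);
    }
}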

Incidentally, there's also a mechanism known as "memory-mapped I/O", where "RAM" is actually control and data registers for I/O devices. In which case, what you write in may not be what comes back out again. That's an entirely different mechanism, and I don't believe Java supports it, as it would violate "write once/run anywhere". In C, this mechanism is supported by the "volatile" attribute.
 
Cedric Bosch
Ranch Hand
Wonderful answer, thank you.

The relationship between hardware and code is really something I feel I need to understand on a deeper level.
 
Tim Holloway
Saloon Keeper

Cedric Bosch wrote:Wonderful answer, thank you.

The relationship between hardware and code is really something I feel I need to understand on a deeper level.



I still have the first computer I ever owned out in my garage. It's a 6U-high monstrosity with a 30-lb power transformer in it and capacitors the size of soup cans (this was before switching power supplies).

It was fairly easy to understand how CPUs worked back then. They operated in the Turing model: fetch an instruction, execute it, fetch another instruction, repeat. You could precisely predict how long code would take to execute, because every instruction had a fixed cycle time or times (often conditional branches took different amounts of time based on whether the condition caused a branch or not).

This was basically true even on mainframes.

But fairly early in the history of microprocessors, they started doing magical things. Stuff like the "Harvard Architecture". CPU onboard cache (prior to that, the only CPU "cache" was the distinctly addressable Register File). The CPU began to fracture instructions which used to be atomic into sub-steps and bubble them through, even doing predictive operations and discarding results which wouldn't be used. For example, you get all ready to add 2 numbers, pull them into cache, and then before they get to the register-processing stage, some other instruction initiates a branch that bypasses the add.

Which is why I avoid trying to optimize stuff at the machine instruction level these days. It's no longer a strictly deterministic function and more a matter of statistics, and the only really accurate way to estimate timing is to benchmark real-world operation.

In short, the conceptual model of how CPUs work hasn't changed since the 1960s, and as software developers, we do our work based on that model. But what actually goes on "under the hood" (or bonnet, if you prefer) is quite strange and marvelous and makes a major study in its own right these days.

And that's just for ordinary business logic. When you start adding in the support for coordinating multiple cores, it gets even gnarlier.
 