
Multi-threaded access to a primitive array / volatile

 
Dan Hop
Greenhorn
Posts: 7
Hi folks, I was wondering if it is 'safe' to create a primitive array of ints, say:

int[] ints = new int[500];
// initialize all ints to 0

then access and increment the array values without any synchronization.
I do NOT care if multiple threads read the same value and then increment/set; I want multiple threads to be able to access and update the array in parallel (i.e. no blocking across the entire array is a must).

I only care that when a thread increments by 1, it in fact increments by 1 based on the value that thread read.
That is, it's OK for n threads to read the value 5 and all of them increment it to 6. Ideally I would use 'volatile' to avoid this, but that is not possible in 1.6. The task at hand is statistical gathering of live data (it does not need to be 100% accurate).

Or will doing i++ or i += 2 result in a potentially nonsensical value, since i++ is not atomic?

thoughts?
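For reference, a small experiment (class and variable names are mine, not from the thread) that makes the lost-update behaviour visible: several threads hammer an unsynchronized int slot, and the final count usually comes up short of the expected total, though it can never exceed it.

```java
import java.util.ArrayList;
import java.util.List;

public class LostUpdateDemo {
    static int[] counters = new int[1]; // shared, no synchronization

    public static void main(String[] args) throws InterruptedException {
        int threads = 8;
        int perThread = 100_000;
        List<Thread> pool = new ArrayList<>();
        for (int t = 0; t < threads; t++) {
            Thread th = new Thread(() -> {
                for (int i = 0; i < perThread; i++) {
                    counters[0]++; // read-modify-write: not atomic, increments can be lost
                }
            });
            pool.add(th);
            th.start();
        }
        for (Thread th : pool) {
            th.join();
        }
        // The count is at most threads * perThread; racing increments only lose updates,
        // they never add extra ones.
        System.out.println(counters[0] <= threads * perThread);
    }
}
```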

As an alternative, to ensure each value is 'atomic', could I do:

AtomicInteger[] ints = new AtomicInteger[500];
// this would ensure each counter in the array of 500 is atomic... obviously I would prefer to use int vs the heavier AtomicInteger.

Is AtomicInteger[] equivalent to AtomicIntegerArray? (It's unclear whether this class makes atomic updates across the entire collection or on a per-value basis.)

Cheers, and thanks for any advice..


 
Nitesh Kant
Bartender
Posts: 1638
If you do not need the atomic increment and you can tolerate a deviation in the values proportional to your concurrency, then I think a pre-initialized final array is a better bet.
(As you have mentioned, pre-initialization and a constant size, making the reference final, are the keys to thread-safety here.)

However, between AtomicInteger[] and AtomicIntegerArray, the latter is the better option, as discussed here too. From a user's point of view, the difference between the two is that you do not have to initialize all the objects in the array if you use AtomicIntegerArray. Both of these data structures are atomic, though.
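A minimal sketch of that difference (the class and variable names here are mine): with AtomicInteger[] every slot must be filled in by hand before use, while an AtomicIntegerArray starts with every slot at zero.

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicIntegerArray;

public class CounterDemo {
    public static void main(String[] args) {
        // Option 1: AtomicInteger[] -- each element must be created explicitly,
        // otherwise counters[i] is null and incrementAndGet() throws NPE.
        AtomicInteger[] counters = new AtomicInteger[500];
        for (int i = 0; i < counters.length; i++) {
            counters[i] = new AtomicInteger(0);
        }
        counters[0].incrementAndGet();

        // Option 2: AtomicIntegerArray -- all 500 slots start at 0,
        // no per-element object allocation needed.
        AtomicIntegerArray array = new AtomicIntegerArray(500);
        array.incrementAndGet(0);

        System.out.println(counters[0].get()); // 1
        System.out.println(array.get(0));      // 1
    }
}
```

Either way the increment on an individual slot is atomic; AtomicIntegerArray just skips the 500 object allocations.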

Dan Hop wrote: (It's unclear whether this class makes atomic updates across the entire collection or on a per-value basis.)


It does, on a per-value basis. What atomicity across the entire collection are you referring to?
 
Chris Hurst
Ranch Hand
Posts: 443
If you're saying you're going to alter the array values and read them from other threads without synchronisation etc., then all your final initialised array is giving you is that all threads at the start are guaranteed to see the same thing (the initialised array). After that it's a random generator in theory; in practice you'll probably see little difference provided you avoid some edge cases (lots of processors, Terracotta, etc.). But note also that in theory some really bad things can happen; usually these are the ones from your compiler or JVM rather than the OS, and they could happen every time. I strongly advise you to read JSR 133 (google it), and if you're not comfortable with it, go for the simplest synchronised solution if you require your code to be correct. Unless you go atomic, volatile, etc., just put some synchronisation in, which is by far the saner solution.

I would implement a simple solution with synchronisation, profile it, and if this is an issue then optimise it.
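The simple synchronised starting point Chris describes could look something like this (a sketch of my own, not anyone's actual code from the thread):

```java
public class SyncCounters {
    private final int[] counts = new int[500];

    // One monitor guards the whole array: correct, simple, easy to profile.
    public synchronized void increment(int index) {
        counts[index]++;
    }

    public synchronized int get(int index) {
        return counts[index];
    }

    public static void main(String[] args) {
        SyncCounters c = new SyncCounters();
        c.increment(42);
        c.increment(42);
        System.out.println(c.get(42)); // 2
    }
}
```

Note this does block across the entire array, which Dan wanted to avoid, but it gives a correct baseline to measure the atomic variants against.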
 
Seetharaman Venkatasamy
Ranch Hand
Posts: 5575
Chris Hurst wrote: I would implement a simple solution with synchronisation, profile it, and if this is an issue then optimise it.

that is nice
 
Dan Hop
Greenhorn
Posts: 7
Well, originally I was using a ConcurrentHashMap (actually, originally I was using a plain HashMap, which naturally led to issues). The ConcurrentHashMap caused a significant performance drop. I believe the main reason is that the code does lots and lots of puts, and those are blocking calls.

As such, my new strategy is to use the ConcurrentHashMap as an index into the array (reads into the map are all non-blocking), whether the array ends up being int[], AtomicIntegerArray, or AtomicInteger[].
That part I still need to figure out.

As of right now my array will never shrink (an object added to it is never removed, and an index always references the same object), and it does have a max size of 500 (yeah, at the moment). 500 AtomicIntegers don't scare me, even if 400 of them never actually get used. So I think I'm still leaning towards AtomicInteger[] at the moment.
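A sketch of that map-as-index-into-array strategy (the key type, field names, and methods here are my assumptions, not Dan's actual code): the ConcurrentHashMap only assigns a stable slot on first sight of a key, and every subsequent increment is a non-blocking map read plus an atomic array increment.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicIntegerArray;

public class StatsTable {
    private static final int MAX = 500;

    private final ConcurrentHashMap<String, Integer> index = new ConcurrentHashMap<>();
    private final AtomicInteger nextSlot = new AtomicInteger();
    private final AtomicIntegerArray counts = new AtomicIntegerArray(MAX);

    /** Map a key to a stable slot, allocating one the first time the key is seen. */
    private int slotFor(String key) {
        Integer slot = index.get(key);               // non-blocking read on the hot path
        if (slot == null) {
            Integer fresh = nextSlot.getAndIncrement();
            Integer prev = index.putIfAbsent(key, fresh); // only the first writer wins
            slot = (prev != null) ? prev : fresh;
        }
        return slot;
    }

    public void increment(String key) {
        counts.incrementAndGet(slotFor(key));
    }

    public int get(String key) {
        return counts.get(slotFor(key));
    }

    public static void main(String[] args) {
        StatsTable t = new StatsTable();
        t.increment("requests");
        t.increment("requests");
        System.out.println(t.get("requests")); // 2
    }
}
```

A losing racer in slotFor wastes one slot number, which seems acceptable given the fixed cap of 500 and the "never removed" property Dan describes.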

AtomicInteger[]:
will ensure that no two threads read the same value and increment to the same result, assuming I use the increment method and not separate read/set calls.

int[], on the other hand:
would allow two threads to read the same value and both increment to the same value (not good, but not disastrous in my case).
But from what I remember in the early days (JDK 1.2), i++ is NOT thread safe; I think the bits can actually get corrupted, since threads could see partial updates from other threads, based on word boundaries. This is where I'm not sure.

I think regardless I will start out with AtomicSomething and perf test it. If I regain the vast majority of the performance drop, then it's good enough, with the added advantage of being fully atomic.

Adding final to an array declaration doesn't make the elements of the array final (just the reference to the array itself), so that wouldn't help in this case, I wouldn't think.
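Dan's point about final arrays in code form (names are mine): final fixes the reference, not the elements, so the elements stay freely mutable and get none of final's guarantees for writes made after construction.

```java
public class FinalArrayDemo {
    // The reference is final; the 500 int elements are not.
    static final int[] stats = new int[500];

    public static void main(String[] args) {
        stats[0] = 7;             // compiles fine: elements remain mutable
        // stats = new int[500];  // would NOT compile: cannot reassign a final reference
        System.out.println(stats[0]); // 7
    }
}
```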
 
Chris Hurst
Ranch Hand
Posts: 443
i++ is not thread safe, but it shouldn't corrupt the bits ;-) It will just report wrong results from the more expected set, unless I'm missing something (possibly) ;-) ints are 32 bits and OK; use longs, for instance, and you're in a whole new world of hurt: word tearing may hit you and whole new corrupt values escape ;-) e.g. suddenly highly negative or large values appear.

But note that i++ missing the odd increment is not the worst that can happen in theory (note I said theory): the array being updated may never, ever be visible to another thread (each thread can run as if it had its own copy of the array; theoretical, though use, say, Terracotta and it'll happen every time, all the time), as you have no happens-before ordering. And having seen some new examples recently (see the Java memory model discussion group), it's even possible for stores to be reordered and your counters to appear to run backwards :-( Use AtomicIntegerArray; it's why it's there ;-)

PS: ConcurrentHashMap, which you mentioned, also has lots of good stuff for quick, reliable access. In general, use the existing library data structures; they're well debated, and if there are any performance tricks to be had, they'll have 'em. ConcurrentHashMap has multiple locks behind it (lock striping), so multiple threads reading and writing to the map can work in parallel, depending on how lucky you get with your 'buckets'. So I'm surprised you say you observed poor performance.

PPS: final on the array reference does only make the reference final, but it also does something else ;-) It guarantees that all reads through the reference at the time the final field was 'frozen' are up to date from any other thread. But you're right, that's not very helpful to you in this case, as the elements are all initialised to zero, and it's the subsequent writes you want cross-thread visibility of.
 