The problem occurs because your accumulator and combiner functions are not stateless. The reduce() method requires stateless, associative functions: they must depend on nothing but their input arguments and must cause no side effects. Your accumulator calls List.add() and your combiner calls List.addAll(), both of which mutate their input arguments.
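A minimal sketch of what the failing code presumably looks like (the exact shape of your pipeline is an assumption; the element classes come from your example). The identity list is a single shared mutable object, so every thread accumulates into the same list:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.stream.Stream;

public class BrokenReduce {
    // Shared, mutable identity: every thread accumulates into the SAME list.
    static List<Class<?>> reduceShared() {
        return Stream.of(FunctionalInterface.class, Stream.class)
                .parallel()
                .reduce(new ArrayList<>(),
                        (list, element) -> { list.add(element); return list; },  // side effect on the argument
                        (left, right)   -> { left.addAll(right); return left; }); // side effect on the argument
    }

    public static void main(String[] args) {
        // May print the elements duplicated, because the combiner ends up
        // merging the shared identity list with itself.
        System.out.println(reduceShared());
    }
}
```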
This usually doesn't cause problems in a serial reduction, because the combiner is never called and the accumulator is called once per element. In a parallel reduction, however, here's what happens:
1. The elements are split into two halves, which are reduced in parallel.
2. The first half is reduced by adding FunctionalInterface.class to the identity list. Your identity list is now no longer empty.
3. The second half is reduced by adding Stream.class to the same identity list. Because the previous reduction was stateful, the identity list is now [FunctionalInterface.class, Stream.class].
4. The reduction operation expects the results of your accumulator to be independent objects, so it now has two (supposedly independent) reduced halves to combine. Essentially it performs identity.addAll(identity), which results in [FunctionalInterface.class, Stream.class, FunctionalInterface.class, Stream.class].
You can make the accumulator stateless by copying the input list, adding the element to the copy, and returning the copy. Likewise, you can make the combiner stateless by copying the first input list, adding all elements of the second list to the copy, and returning it. Because copying the lists on every step can become very expensive, the Java designers added a mutable reduction operation: Stream.collect().
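The copy-based fix could look like this (a sketch, not the only way to write it): each lambda builds a new list instead of mutating its arguments, so parallel halves stay independent.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.stream.Stream;

public class StatelessReduce {
    static List<Class<?>> collectTypes(Stream<Class<?>> stream) {
        return stream.reduce(
                List.of(),               // immutable empty identity, safe to share
                (list, element) -> {     // copy, then add: no side effect on the input
                    List<Class<?>> copy = new ArrayList<>(list);
                    copy.add(element);
                    return copy;
                },
                (left, right) -> {       // copy, then addAll: no side effect on the inputs
                    List<Class<?>> merged = new ArrayList<>(left);
                    merged.addAll(right);
                    return merged;
                });
    }

    public static void main(String[] args) {
        System.out.println(collectTypes(
                Stream.of(FunctionalInterface.class, Stream.class).parallel()));
        // prints [interface java.lang.FunctionalInterface, interface java.util.stream.Stream]
    }
}
```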
Stream.collect() does the same as Stream.reduce(), except it is designed for accumulators that mutate a container: instead of a shared identity, a supplier creates a fresh container for each thread, and the combiner merges the per-thread containers afterwards.
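With the three-argument form of collect(), your original mutating lambdas become safe, because the supplier hands each thread its own ArrayList:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.stream.Stream;

public class MutableReduction {
    static List<Class<?>> collectTypes(Stream<Class<?>> stream) {
        return stream.collect(
                ArrayList::new,   // supplier: each thread gets its own fresh list
                List::add,        // accumulator: may freely mutate that thread's list
                List::addAll);    // combiner: merges the per-thread lists
    }

    public static void main(String[] args) {
        System.out.println(collectTypes(
                Stream.of(FunctionalInterface.class, Stream.class).parallel()));
        // prints [interface java.lang.FunctionalInterface, interface java.util.stream.Stream]
    }
}
```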
A handy guideline is to use
Stream.reduce() if your result is an immutable type, and
Stream.collect() if your result is a mutable type.
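As a small illustration of that guideline (the numbers here are just placeholder data): an immutable result like an Integer sum fits reduce(), while building a List fits collect().

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class Guideline {
    public static void main(String[] args) {
        // Immutable result (Integer): reduce is the natural fit.
        int sum = Stream.of(1, 2, 3).parallel().reduce(0, Integer::sum);

        // Mutable result (List): collect manages the per-thread containers.
        List<Integer> list = Stream.of(1, 2, 3).parallel()
                .collect(Collectors.toList());

        System.out.println(sum + " " + list); // prints "6 [1, 2, 3]"
    }
}
```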