Campbell Ritchie wrote:So it depends on the quality of the hashCode() implementation.
Bob Sherry wrote:I do not understand how HashMap can have an O(1) (worst case) insertion time even with an Integer key because the insertion routine is going to do a mod operation on that hash value and that could result in a collision.
Is that what they call amortised constant time?
Mike Simmons wrote:. . . The average performance is certainly O(1) though.
. . . and we are back with “average”. Remember the “N” in MS' formulae refers to the size of the map rather than the number of pairs being “put” into it. You can avoid such rearrangements if you predict the requisite size in advance, but beware: new HashMap<>(56) only rounds the capacity up to 64 buckets, and 56 pairs exceed 75% (the default load factor) of 64, so the map is still rebuilt, to 128 buckets, part‑way through filling it. To accommodate 56 pairs without any resize you need to pass at least ⌈56 ÷ 0.75⌉ = 75, which rounds up to a 128‑bucket map.
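A minimal sketch of that arithmetic (the class and method names here are hypothetical, but `tableSizeFor` mirrors the power‑of‑two rounding HashMap performs internally, and Java 19+ offers `HashMap.newHashMap(int)` which does the same calculation for you):

```java
import java.util.HashMap;
import java.util.Map;

public class PresizeDemo {
    // Smallest initial capacity that lets `expected` entries fit without a
    // resize, assuming the given load factor (0.75 by default).
    static int requiredCapacity(int expected, float loadFactor) {
        return (int) Math.ceil(expected / (double) loadFactor);
    }

    // Power-of-two rounding equivalent to HashMap's private tableSizeFor
    // (valid for cap >= 2).
    static int tableSizeFor(int cap) {
        int n = -1 >>> Integer.numberOfLeadingZeros(cap - 1);
        return n + 1;
    }

    public static void main(String[] args) {
        // new HashMap<>(56) rounds 56 up to 64 buckets, but the resize
        // threshold is 64 * 0.75 = 48, so the 49th put still rehashes to 128.
        System.out.println(tableSizeFor(56));            // 64
        System.out.println(requiredCapacity(56, 0.75f)); // 75, which rounds to 128 buckets

        Map<String, Integer> map = new HashMap<>(requiredCapacity(56, 0.75f));
        for (int i = 0; i < 56; i++) {
            map.put("key" + i, i); // fits under the 96-entry threshold: no resize
        }
        System.out.println(map.size()); // 56
    }
}
```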
Mike Simmons wrote:. . . it balances out.
Campbell Ritchie wrote:Is that what they call amortised constant time?
Bob Sherry wrote:I do not know what the worst‑case running time of insertion into a HashMap is. It depends on the algorithm used. It should be a function of how much of the HashMap is already used. The problem is collisions.
Thank you. As you say, collisions are only really a problem if they are frequent because of a poor hash code algorithm. In the “good old days” the buckets incorporated linked lists, so the worst‑case complexity, as Joshua Bloch points out in Effective Java (at least editions 1 and 2), was linear (O(n)).
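A deliberately bad hashCode() makes that worst case easy to reproduce. In this sketch (BadKey is a made‑up key class), every key lands in the same bucket, so the map degenerates into a single chain; the map is still correct, just slow per operation:

```java
import java.util.HashMap;
import java.util.Map;

public class CollisionDemo {
    // Hypothetical key whose hashCode() is deliberately awful: every
    // instance hashes to the same bucket.
    record BadKey(int id) {
        @Override
        public int hashCode() { return 42; } // constant hash: worst case
    }

    public static void main(String[] args) {
        Map<BadKey, Integer> map = new HashMap<>();
        for (int i = 0; i < 1000; i++) {
            map.put(new BadKey(i), i); // every put collides
        }
        // Correctness is unaffected; only the per-operation cost suffers.
        System.out.println(map.size());               // 1000
        System.out.println(map.get(new BadKey(500))); // 500
    }
}
```

(In Java 8 and later the chain is converted to a red‑black tree once it grows past a threshold, which is where the next post picks up.)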
Stephan van Hulst wrote:. . . these days HashMap uses a tree to store colliding elements. . . .
Campbell Ritchie wrote:I would add new elements to the head of the list, so the insertion itself would cost the same regardless of collisions, though put() must still traverse the chain first to check whether an equal key is already present. The bigger performance problems would occur when searching the Map.
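A toy chained map (entirely hypothetical, not the real HashMap code) makes that hidden cost visible: the prepend is O(1), but Map semantics force put() to scan the bucket's chain for an existing equal key first, so a long chain slows insertion as well as lookup:

```java
import java.util.Objects;

// Toy chained hash table: buckets are singly linked lists with head insertion.
public class ToyChainedMap<K, V> {
    private static final class Node<K, V> {
        final K key;
        V value;
        Node<K, V> next;
        Node(K key, V value, Node<K, V> next) {
            this.key = key; this.value = value; this.next = next;
        }
    }

    private final Node<K, V>[] buckets;

    @SuppressWarnings("unchecked")
    public ToyChainedMap(int capacity) {
        buckets = new Node[capacity];
    }

    private int indexFor(K key) {
        return Math.floorMod(Objects.hashCode(key), buckets.length);
    }

    public V put(K key, V value) {
        int i = indexFor(key);
        // The hidden cost: O(chain length) scan for a duplicate key,
        // even though the actual insertion below is O(1).
        for (Node<K, V> n = buckets[i]; n != null; n = n.next) {
            if (Objects.equals(n.key, key)) {
                V old = n.value;
                n.value = value; // replace, per the Map contract
                return old;
            }
        }
        buckets[i] = new Node<>(key, value, buckets[i]); // prepend at the head
        return null;
    }

    public V get(K key) {
        for (Node<K, V> n = buckets[indexFor(key)]; n != null; n = n.next) {
            if (Objects.equals(n.key, key)) return n.value;
        }
        return null;
    }
}
```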