Wondering which would perform better overall at removing these elements?
The size, isEmpty, get, set, iterator, and listIterator operations run in constant time. The add operation runs in amortized constant time, that is, adding n elements requires O(n) time. All of the other operations run in linear time (roughly speaking). The constant factor is low compared to that for the LinkedList implementation.
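For the specific case of removing elements, here is a rough sketch of the practical difference (the class name RemovalSketch and the element counts are just illustrative, and this is not a proper microbenchmark since there is no JIT warm-up): removing through an iterator is constant time per removal for LinkedList, while ArrayList has to shift the tail of its backing array on every removal.

import java.util.ArrayList;
import java.util.Iterator;
import java.util.LinkedList;
import java.util.List;

public class RemovalSketch {

    // Removes every even number via the iterator's remove(), which is
    // O(1) per removal for LinkedList but O(n) per removal for ArrayList,
    // because the tail of the backing array must be shifted each time.
    static long timeRemoval(List<Integer> list) {
        long start = System.nanoTime();
        for (Iterator<Integer> it = list.iterator(); it.hasNext(); ) {
            if (it.next() % 2 == 0) {
                it.remove();
            }
        }
        return System.nanoTime() - start;
    }

    public static void main(String[] args) {
        List<Integer> arrayList = new ArrayList<>();
        List<Integer> linkedList = new LinkedList<>();
        for (int i = 0; i < 50_000; i++) {
            arrayList.add(i);
            linkedList.add(i);
        }
        System.out.println("ArrayList:  " + timeRemoval(arrayList) + " ns");
        System.out.println("LinkedList: " + timeRemoval(linkedList) + " ns");
    }
}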
This class offers constant time performance for the basic operations (add, remove, contains and size), assuming the hash function disperses the elements properly among the buckets. Iterating over this set requires time proportional to the sum of the HashSet instance's size (the number of elements) plus the "capacity" of the backing HashMap instance (the number of buckets). Thus, it's very important not to set the initial capacity too high (or the load factor too low) if iteration performance is important.
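To illustrate the point that Javadoc makes about initial capacity, here is a small sketch (the class name and the numbers are made up for illustration): both sets behave the same for add and contains, but iterating the oversized one also has to walk a very large number of empty buckets.

import java.util.HashSet;
import java.util.Set;

public class HashSetCapacitySketch {
    public static void main(String[] args) {
        // Default HashSet: capacity 16, load factor 0.75.
        Set<String> small = new HashSet<>();
        // Oversized HashSet: about a million buckets for just a few elements.
        Set<String> huge = new HashSet<>(1_000_000);

        small.add("a"); small.add("b");
        huge.add("a");  huge.add("b");

        // Both hold the same two elements, but iterating 'huge' touches
        // far more (mostly empty) buckets, which is why iteration time is
        // proportional to size plus capacity.
        for (String s : small) System.out.println("small: " + s);
        for (String s : huge)  System.out.println("huge: " + s);
    }
}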
Originally posted by Tariq Ahsan:
Thanks all for your responses to my sort of dumb question.
What was "dumb" about it?
Originally posted by Gamini Sirisena:
A small addition..
The HashSet determines a duplicate by calling the equals method of the objects being added. Unless the equals method of the objects being added to the HashSet is properly overridden, the HashSet will accept "duplicates".
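Strictly speaking, HashSet first uses hashCode() to pick a bucket and only then calls equals() to confirm a match, but the practical effect is as described above. Here is a minimal sketch (the Point class is hypothetical) of what happens when neither method is overridden:

import java.util.HashSet;
import java.util.Set;

public class DuplicateSketch {

    // No equals()/hashCode() overrides, so two Points with the same
    // coordinates are still treated as distinct by HashSet.
    static class Point {
        final int x, y;
        Point(int x, int y) { this.x = x; this.y = y; }
    }

    public static void main(String[] args) {
        Set<Point> points = new HashSet<>();
        points.add(new Point(1, 2));
        points.add(new Point(1, 2)); // logically a duplicate, but accepted

        System.out.println(points.size()); // prints 2
    }
}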
Originally posted by Wirianto Djunaidi:
Considering the name of the class is HashSet, I would think it relies more on the hashCode() method. Of course, the common best practice is that when two objects compare equal via equals(), they should return the same value from hashCode(), so you can kind of say that... but there is no guarantee.
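To round that out, here is a sketch of the same hypothetical Point class with equals() and hashCode() overridden consistently, so the second logically-equal element really is rejected as a duplicate:

import java.util.HashSet;
import java.util.Objects;
import java.util.Set;

public class ContractSketch {

    // equals() and hashCode() are overridden consistently: equal points
    // always produce the same hash code, as the contract requires.
    static class Point {
        final int x, y;
        Point(int x, int y) { this.x = x; this.y = y; }

        @Override
        public boolean equals(Object o) {
            if (this == o) return true;
            if (!(o instanceof Point)) return false;
            Point p = (Point) o;
            return x == p.x && y == p.y;
        }

        @Override
        public int hashCode() {
            return Objects.hash(x, y);
        }
    }

    public static void main(String[] args) {
        Set<Point> points = new HashSet<>();
        points.add(new Point(1, 2));
        points.add(new Point(1, 2)); // now recognized as a duplicate

        System.out.println(points.size()); // prints 1
    }
}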