The intent of this library is to implement the unordered containers in the draft standard, so the interface was fixed. But there are still some implementation decisions to make. The priorities are conformance to the standard and portability. The Wikipedia article on hash tables has a good summary of the implementation issues for hash tables in general.

Data Structure

By specifying an interface for accessing the buckets of the container, the standard pretty much requires that the hash table uses chained addressing. It would be conceivable to write a hash table that uses another method. For example, it could use open addressing, and use the lookup chain to act as a bucket, but there are some serious problems with this approach.
So chained addressing is used. For containers with unique keys I store the buckets in a singly linked list. There are other possible data structures (such as a doubly linked list) that would allow some operations to be faster (such as erasing and iteration), but the possible gain seems small compared to the extra memory needed. The most commonly used operations (insertion and lookup) would not be improved at all.
But for containers with equivalent keys a single-linked list can degrade badly
when a large number of elements with equivalent keys are inserted. I think
it's reasonable to assume that users who choose to use the containers with equivalent keys do so because they intend to insert elements with equivalent keys, so this case is worth supporting well. This works by storing a circular linked list for each group of equivalent nodes, in reverse order. This allows quick navigation to the end of a group (since the first element points to the last) and can be quickly updated when elements are inserted or erased. The main disadvantage of this approach is some hairy code for erasing elements.

Number of Buckets

There are two popular methods for choosing the number of buckets in a hash table: one is to use a prime number of buckets, the other a power of 2.

Using a prime number of buckets, and choosing a bucket by taking the modulus of the hash function's result, will usually give a good result. The downside is that the required modulus operation is fairly expensive.

Using a power of 2 allows for much quicker selection of the bucket to use, but at the expense of losing the upper bits of the hash value. For some specially designed hash functions it is possible to do this and still get a good result, but as the containers can take arbitrary hash functions this can't be relied on. To avoid this, a transformation could be applied to the hash value; for an example see Thomas Wang's article on integer hash functions. Unfortunately, a transformation like Wang's requires knowledge of the number of bits in the hash value, so it isn't portable enough. This leaves more expensive methods, such as Knuth's Multiplicative Method (mentioned in Wang's article). These don't tend to work as well as taking the modulus of a prime, and the extra computation required might negate the efficiency advantage of power-of-2 hash tables.

So, this implementation uses a prime number for the hash table size.

Equality operators
Active Issues and Proposals

Removing unused allocator functions
In N2257,
removing unused allocator functions, Matt Austern suggests removing the unused allocator member functions.

Swapping containers with unequal allocators

It isn't clear how to swap containers when their allocators aren't equal. This is Issue 431: Swapping containers with unequal allocators.
Howard Hinnant wrote about this in N1599
and suggested swapping both the allocators and the containers' contents. But
the committee have now decided against this approach.

In N2387, Omnibus Allocator Fix-up Proposals, Pablo Halpern suggests that there are actually two distinct allocator models, "Moves with Value" and "Scoped", which behave differently.
With these models the choice becomes clearer.
The proposal is that the allocators are swapped if the allocator follows the "Moves with Value" model and is swappable; otherwise a slow swap is used. Since containers currently only support the "Moves with Value" model, this is consistent with the committee's current recommendation (although it suggests using a trait to detect whether the allocator is swappable, rather than a concept). Since there is currently neither a swappable trait nor a concept for allocators, this implementation always performs a slow swap.

Are insert and erase stable for unordered_multiset and unordered_multimap?
It is not specified whether insert and erase are stable.

const_local_iterator cbegin, cend missing from TR1
Issue 691 is that the const_local_iterator overloads of cbegin and cend are missing from TR1.