but also still compare the strings.
Actually, the fast hashmap implementation compares only the hashes and not the strings (or whatever objects you are using), on the understanding that a hash is effectively as unique as the entity it was created from. The hash modulo the container size gives the first location to probe; each slot stores the entity's hash alongside the entity itself. Because of that hash % size operation, you will tend to land on an unrelated entry once the container gets too full.
If you are lucky, the first entry you hit is the one you are looking for (which you determine by comparing your hash with the stored hash).
If you are unlucky, you might have to step a few slots left and right until you find it. This assumes the storage algorithm tries to find the closest empty slot to the initial target.
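Here's a minimal sketch of the idea in C++ — hypothetical names throughout, and it probes forward only (plain linear probing) rather than stepping both left and right, but it shows the key point: lookup compares the stored hash, never the string itself.

```cpp
#include <cassert>
#include <cstddef>
#include <functional>
#include <optional>
#include <string>
#include <vector>

// Illustrative open-addressing map. Each slot keeps the full hash next to
// the entry, so find() can compare hashes instead of strings.
struct Slot {
    bool used = false;
    std::size_t hash = 0;
    std::string key;   // stored for completeness; lookup never compares it
    int value = 0;
};

class HashOnlyMap {
public:
    explicit HashOnlyMap(std::size_t size) : slots_(size) {}

    void insert(const std::string& key, int value) {
        std::size_t h = std::hash<std::string>{}(key);
        std::size_t i = h % slots_.size();  // first location to try
        // Probe for the nearest free slot (or an existing entry with this hash).
        while (slots_[i].used && slots_[i].hash != h)
            i = (i + 1) % slots_.size();
        slots_[i] = {true, h, key, value};
    }

    std::optional<int> find(const std::string& key) const {
        std::size_t h = std::hash<std::string>{}(key);
        std::size_t i = h % slots_.size();
        while (slots_[i].used) {
            if (slots_[i].hash == h)   // hash comparison only, no strcmp
                return slots_[i].value;
            i = (i + 1) % slots_.size();
        }
        return std::nullopt;
    }

private:
    std::vector<Slot> slots_;
};
```

A real container would also grow itself when it gets too full, which is where the load-factor discussion below comes in.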
This type of algorithm is discussed in Bjarne Stroustrup's The C++ Programming Language, 3rd edition, as an exercise in container design ;-)
On every system I've tried, this method of hashed lookup outperforms the trie for longish strings when the hash map is generally less than 60% full. This can be accounted for through caching effects alone. Once you get past 60% full, efficiency starts to fall off. You'd generally design the container to expand when it hits a load factor of, say, 75%.
Of course, the fundamental flaw in this is that you can get into trouble if two different keys hash to the same value, since the strings themselves are never compared :-D
In our experiment here, the trie is likely to be faster, since we probably aren't looking up excessively long strings. The place you'd most want to optimise is node allocation. You really don't want to be allocating thousands of small nodes individually for the trie.
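One common way to avoid those thousands of individual allocations is to carve nodes out of large contiguous chunks. A rough sketch, with hypothetical names and a trie restricted to lowercase a-z for brevity:

```cpp
#include <array>
#include <cassert>
#include <cstddef>
#include <memory>
#include <string>
#include <vector>

// Trie node: one child pointer per letter, plus an end-of-word flag.
struct TrieNode {
    std::array<TrieNode*, 26> child{};
    bool terminal = false;
};

// Pool allocator: hands out nodes from 1024-node chunks, so a trie with
// thousands of nodes costs only a handful of heap allocations.
class NodePool {
public:
    TrieNode* allocate() {
        if (next_ == kChunkSize) {
            chunks_.push_back(std::make_unique<TrieNode[]>(kChunkSize));
            next_ = 0;
        }
        return &chunks_.back()[next_++];
    }

private:
    static constexpr std::size_t kChunkSize = 1024;
    std::vector<std::unique_ptr<TrieNode[]>> chunks_;
    std::size_t next_ = kChunkSize;  // forces a chunk on first allocate()
};

class Trie {
public:
    Trie() : root_(pool_.allocate()) {}

    void insert(const std::string& word) {
        TrieNode* n = root_;
        for (char c : word) {
            TrieNode*& next = n->child[c - 'a'];
            if (!next) next = pool_.allocate();  // no per-node new
            n = next;
        }
        n->terminal = true;
    }

    bool contains(const std::string& word) const {
        const TrieNode* n = root_;
        for (char c : word) {
            n = n->child[c - 'a'];
            if (!n) return false;
        }
        return n->terminal;
    }

private:
    NodePool pool_;
    TrieNode* root_;
};
```

As a bonus, pooled nodes end up close together in memory, which helps the same cache behaviour that made the hashmap fast in the first place.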