While this example only handles a small number of words (2,000) due to file size restrictions on the forum, the technique is to split the words across two hash tables based on the alphabetical position of the first character: in this example, letters below "m" go in one table and "m" through "z" in the other.
At the cost of a small overhead for choosing the table, each hash table holds only half as many words, and with very large word or text counts that reduces the time to scan either table.
The code is relatively simple to write: allocate two pairs of arrays, initialise both pairs, and load the words into the two hash tables, as in the sketch below.
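The original source isn't attached here, so the following is only a minimal PBCC-style sketch of the idea, not the actual code: the table size (%TBL = 1009), the hash function, and the names LoTbl, HiTbl, HashWord and AddWord are made up for the example, and collisions are handled with simple linear probing.

[code]
#COMPILE EXE
#DIM ALL

%TBL = 1009                       ' slots per table (illustrative size, not from the post)

GLOBAL LoTbl() AS STRING          ' words whose first letter is below "m"
GLOBAL HiTbl() AS STRING          ' words whose first letter is "m" to "z"

' Simple multiplicative hash over the characters of the word.
FUNCTION HashWord(BYVAL w AS STRING) AS LONG
    LOCAL i AS LONG
    LOCAL h AS LONG
    FOR i = 1 TO LEN(w)
        h = (h * 31 + ASC(w, i)) MOD %TBL
    NEXT
    FUNCTION = h
END FUNCTION

' Pick the table from the first letter, then insert with linear probing.
SUB AddWord(BYVAL w AS STRING)
    LOCAL h AS LONG
    w = LCASE$(w)
    h = HashWord(w)
    IF LEFT$(w, 1) < "m" THEN
        DO WHILE LEN(LoTbl(h)) > 0
            IF LoTbl(h) = w THEN EXIT SUB     ' already stored
            h = (h + 1) MOD %TBL
        LOOP
        LoTbl(h) = w
    ELSE
        DO WHILE LEN(HiTbl(h)) > 0
            IF HiTbl(h) = w THEN EXIT SUB
            h = (h + 1) MOD %TBL
        LOOP
        HiTbl(h) = w
    END IF
END SUB

FUNCTION PBMAIN () AS LONG
    DIM LoTbl(%TBL - 1)           ' allocate and initialise both tables
    DIM HiTbl(%TBL - 1)
    AddWord "apple"               ' lands in LoTbl
    AddWord "zebra"               ' lands in HiTbl
    PRINT "Words loaded into the two tables"
END FUNCTION
[/code]

A lookup routine would pick the table the same way (first letter below "m" or not) before probing, so each search only ever touches one of the two tables.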
It builds successfully either directly with the PBCC compiler or from within the PBCC IDE.