Analysis of Coalesced Hashing
Problem Definition: In this project you are to make a serious study of coalesced hashing. This technique is discussed in the paper “Implementations for Coalesced Hashing” by Jeffrey Scott Vitter, CACM, Dec 1982. This paper (the link is on the web site) divides a hash table into an address region and a cellar. The cellar is used to store records that collide when inserted. The paper indicates that near-optimal performance occurs at B = .86, where B is the ratio of the size of the address region to the size of the entire table. Your project is to write a simulation that supports this statement. Run your simulation for a variety of hash table sizes and B values. Draw graphs to support your work. Write up the project in Word with the graphs (as generated by PIL) embedded. Attach the source to this paper, add a header page, and staple. Turn in on the above date.
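As a concrete illustration of the layout (a sketch under my own naming; M_PRIME, BETA, and home_address are not terms from the paper), a table of m' slots splits into an address region of round(B * m') slots, with the remaining slots forming the cellar:

```python
# A table of M_PRIME slots: the first m slots are the address region
# (hash addresses land only here); the rest form the cellar, which
# holds records that collide on insertion.
M_PRIME = 1000                    # total table size m'
BETA = 0.86                      # address factor B
m = int(round(BETA * M_PRIME))   # address region size
cellar_size = M_PRIME - m        # slots reserved for colliders

def home_address(key):
    # division method: every key's home address lies in [0, m)
    return key % m

print(m, cellar_size)  # 860 140
```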
NOTES: Your main focus in this project is to obtain data that would allow you to draw a graph such as the one on page 925. Here we will restrict the project to successful searching via successful-probe counting only, so don’t worry about unsuccessful searching. Once a table is loaded, it is very easy to determine (by a calculation) the average probe count for that set of data. See Fig. 1(a) for an example. It would suffice to create four curves on the same graph, one for each of the following loadings (.7, .8, .9, and 1.0). The graph on page 925 has a loading of 1.0. You will need to execute multiple runs over a range of address factors going from, say, .4 to 1.0, in whatever step size you choose, as long as a minimum in the curve around .86 would be visible. You only need to implement the basic algorithm, i.e., late insertion. Also make enough runs so that averaging them will make the curves somewhat smooth.
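For concreteness, here is one possible sketch of late-insertion coalesced hashing together with the average-probe-count calculation mentioned above. The function names are my own, this is not Vitter's code, and it assumes the keys loaded are distinct:

```python
def load_table(total_size, beta, keys):
    """Load distinct keys into a coalesced hash table (late insertion).

    The first round(beta * total_size) slots are the address region;
    the rest form the cellar. A collider goes into the highest-numbered
    empty slot and is linked onto the END of its chain (late insertion).
    """
    m = int(round(beta * total_size))
    slot = [None] * total_size    # stored keys
    link = [None] * total_size    # chain pointers
    r = total_size                # empty-slot cursor, scans downward
    for key in keys:
        i = key % m               # home address (division method)
        if slot[i] is None:
            slot[i] = key
            continue
        r -= 1                    # find the highest-numbered empty slot
        while slot[r] is not None:
            r -= 1
        slot[r] = key
        j = i                     # walk to the end of the chain
        while link[j] is not None:
            j = link[j]
        link[j] = r               # append (late insertion)
    return slot, link, m

def average_probes(slot, link, m):
    """Average successful-search probe count for a loaded table."""
    total = count = 0
    for s in slot:
        if s is None:
            continue
        probes, j = 1, s % m      # re-find s starting at its home address
        while slot[j] != s:
            j = link[j]
            probes += 1
        total += probes
        count += 1
    return total / count

# tiny example: keys 0, 5, 10 all collide at home address 0 when m = 5
slot, link, m = load_table(10, 0.5, [0, 5, 10])
print(average_probes(slot, link, m))  # 2.0 (probe counts 1, 2, 3)
```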
As a final comment, please note that you can use any address size you choose. It does not have to be a prime number. We are not hashing real data that may be clustered. We are loading the table with data that is randomly generated, so placement in the table is properly spread out. You can use the usual division method discussed in the overheads, i.e., n mod m, where m is the size of the address region. This makes selection of the address region size for a specific array size easy. Let me say this again: you pick an array size, say 1000, and then use a variety of address region sizes in that array to collect data. That is, m’ is assumed to be constant for data collection. If you change m’, then the data collected for each value should be shown on a different graph.
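Putting the pieces together, one possible run fixes m' and sweeps the address factor, averaging several randomized runs per point. This is a self-contained sketch; the function name, key-generation scheme, and run counts are my own choices, not prescribed by the assignment:

```python
import random

def avg_successful_probes(m_prime, beta, load, rng):
    """One run: fill a coalesced hash table of m_prime slots (address
    region of round(beta * m_prime) slots, division hashing) to the
    given load factor with distinct random keys, using late insertion,
    and return the average successful-search probe count."""
    m = int(round(beta * m_prime))
    n = int(round(load * m_prime))
    slot = [None] * m_prime       # stored keys
    link = [None] * m_prime       # chain pointers
    r = m_prime                   # empty-slot cursor, scans downward
    probes = 0                    # total probes to re-find every key
    for key in rng.sample(range(10 * m_prime * m_prime), n):
        i = key % m               # home address (division method)
        depth = 1
        if slot[i] is None:
            slot[i] = key
        else:
            while link[i] is not None:   # walk to the end of the chain
                i = link[i]
                depth += 1
            r -= 1                       # highest-numbered empty slot
            while slot[r] is not None:
                r -= 1
            slot[r] = key
            link[i] = r                  # late insertion: append
            depth += 1
        probes += depth
    return probes / n

# sweep address factors for one fixed table size, averaging several runs
rng = random.Random(42)
m_prime, load, runs = 1000, 1.0, 10
for beta in [x / 100 for x in range(40, 101, 5)]:
    avg = sum(avg_successful_probes(m_prime, beta, load, rng)
              for _ in range(runs)) / runs
    print(f"beta={beta:.2f}  avg probes={avg:.3f}")
```

The (beta, avg) pairs printed by the sweep are the data points for one curve; repeating the sweep at loadings .7, .8, and .9 yields the other curves for the same graph.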