Analysis of coalesced hashing

Cancelled · Posted Nov 11, 2014 · Paid on delivery

Problem Definition: In this project you are to make a serious study of coalesced hashing. This technique is discussed in the paper “Implementations for Coalesced Hashing” by Jeffrey Scott Vitter, CACM, Dec. 1982 (the link is on the web site). The paper divides a hash table into an address region and a cellar; the cellar is used to store records that collide when inserted. The paper indicates that near-optimal performance occurs at B = .86, where B is the ratio of the size of the address region to the size of the entire table. Your project is to write a simulation that supports this statement. Run your simulation for a variety of hash table sizes and B values, and draw graphs to support your work. Write up the project in Word with the graphs (as generated by PIL) embedded. Attach the source to this paper, add a header page, and staple. Turn in on the above date.
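For concreteness, here is a minimal Python sketch of such a table: slots 0..m-1 form the address region, slots m..m'-1 form the cellar, and collisions are resolved by late insertion, with each colliding record placed in the highest-indexed empty slot and linked onto the end of its chain. The class name, the EMPTY sentinel, and the method names are illustrative choices, not taken from the paper.

```python
class CoalescedHashTable:
    """Coalesced hash table with a cellar, late-insertion discipline.

    Slots 0 .. m-1 are the address region; slots m .. mprime-1 are the
    cellar.  link[i] is the index of the next slot in i's chain, or -1.
    """
    EMPTY = None

    def __init__(self, mprime, beta):
        self.mprime = mprime                    # total table size m'
        self.m = max(1, round(beta * mprime))   # address region size m
        self.keys = [self.EMPTY] * mprime
        self.link = [-1] * mprime
        self.free = mprime - 1                  # empty-slot scan, top down
        self.n = 0

    def _hash(self, key):
        return key % self.m                     # division method

    def insert(self, key):
        """Insert key; return False if the table is full or key is present."""
        if self.n >= self.mprime:
            return False
        i = self._hash(key)
        if self.keys[i] is self.EMPTY:          # home slot is open
            self.keys[i] = key
            self.n += 1
            return True
        while True:                             # walk the (coalesced) chain
            if self.keys[i] == key:
                return False                    # already present
            if self.link[i] == -1:
                break
            i = self.link[i]
        # Late insertion: take the largest-indexed empty slot, so the
        # cellar fills before address-region slots are consumed.
        while self.keys[self.free] is not self.EMPTY:
            self.free -= 1
        self.keys[self.free] = key
        self.link[i] = self.free
        self.n += 1
        return True

    def probes(self, key):
        """Probe count for a successful search (key must be present)."""
        i = self._hash(key)
        count = 1
        while self.keys[i] != key:
            i = self.link[i]
            count += 1
        return count

    def average_probes(self, keys):
        """Average successful-search probe count over the loaded keys."""
        return sum(self.probes(k) for k in keys) / len(keys)
```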

NOTES: Your main focus in this project is to obtain data that would allow you to draw a graph such as the one on page 925. Here we restrict the project to successful searching, via successful-probe counting only, so don’t worry about unsuccessful searching. Once a table is loaded, it is very easy to determine (by a calculation) the average probe count for that set of data; see Fig. 1(a) for an example. It would suffice to create four curves on the same graph at the following load factors (.7, .8, .9, and 1.0); the graph on page 925 has a load factor of 1.0. You will need to execute multiple runs over a range of address factors, from say .4 to 1.0, in whatever step size you choose, as long as the minimum on the curves around .86 is visible. You only need to implement the basic algorithm, i.e., late insertion. Also make enough runs so that averaging them makes the curves somewhat smooth.
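One possible shape for the data collection, as a hedged sketch building on the class above: load a fresh table for each (address factor, load factor) pair, compute the average successful-search probe count by searching for every inserted key, and average over several trials to smooth the curves. The defaults (m' = 1000, 50 trials, step .02 over address factors .40 to 1.00) are illustrative, not required.

```python
import random

def simulate(mprime=1000, alphas=(0.7, 0.8, 0.9, 1.0), trials=50):
    """Return {alpha: [(beta, avg_probes), ...]}, one curve per load factor."""
    betas = [b / 100 for b in range(40, 101, 2)]   # address factors .40 .. 1.00
    results = {a: [] for a in alphas}
    for beta in betas:
        for alpha in alphas:
            n = int(alpha * mprime)                # records to load
            total = 0.0
            for _ in range(trials):
                table = CoalescedHashTable(mprime, beta)
                keys = random.sample(range(10 * mprime), n)  # distinct random keys
                for k in keys:
                    table.insert(k)
                total += table.average_probes(keys)
            results[alpha].append((beta, total / trials))
    return results
```

Plotting average probes against the address factor for each load factor should then show the minimum near .86 that the paper reports.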

As a final comment, please note that you can use any address region size you choose; it does not have to be a prime number. We are not hashing real data that may be clustered; we are loading the table with randomly generated data, so placement in the table is properly spread out. You can use the usual division method discussed in the overheads, i.e., n mod m, where m is the size of the address region. This makes selecting the address region size for a specific array size easy. Let me say this again: you pick an array size, say 1000, and then use a variety of address region sizes within that array to collect data. That is, m' (the total table size) is assumed to be constant during data collection. If you change m', the data collected should be shown on different graphs.
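To restate the fixed-m' convention in code (the specific B values here are arbitrary samples):

```python
mprime = 1000                          # fixed total array size m'
for beta in (0.40, 0.60, 0.86, 1.00):  # vary only the address factor B
    m = round(beta * mprime)           # address region size for this run
    # hash(key) = key % m; slots m .. mprime-1 serve as the cellar
    print(f"B = {beta:.2f}: address region = {m}, cellar = {mprime - m}")
```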

I also expect the project to be written in Python or C++, with the hash table and its associated operations placed in a Python class or C++ class. If you are using Python, you may want to use numpy for your arrays.
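Since the write-up calls for graphs generated by PIL, the following rough sketch draws one curve per load factor with PIL's ImageDraw, using the results dict from the driver above. The margins, colors, and labels are illustrative choices, and real axes and tick marks would still need to be added.

```python
from PIL import Image, ImageDraw

def plot_curves(results, width=800, height=600, fname="probes.png"):
    """Draw one polyline per load factor: address factor B on the x axis,
    average successful-search probes on the y axis.  'results' is the
    {alpha: [(beta, avg_probes), ...]} dict from simulate() above."""
    img = Image.new("RGB", (width, height), "white")
    draw = ImageDraw.Draw(img)
    pts = [p for curve in results.values() for p in curve]
    bmin, bmax = min(b for b, _ in pts), max(b for b, _ in pts)
    ymin, ymax = min(y for _, y in pts), max(y for _, y in pts)
    bspan = (bmax - bmin) or 1.0       # avoid div-by-zero on degenerate data
    yspan = (ymax - ymin) or 1.0

    def to_px(b, y):
        # Map data coordinates into the image, leaving a 50-px margin.
        px = 50 + (b - bmin) / bspan * (width - 100)
        py = (height - 50) - (y - ymin) / yspan * (height - 100)
        return px, py

    # Four colors for the four load factors; zip truncates any extras.
    for color, alpha in zip(("red", "green", "blue", "black"), sorted(results)):
        curve = [to_px(b, y) for b, y in sorted(results[alpha])]
        draw.line(curve, fill=color, width=2)
        draw.text(curve[-1], f"load {alpha}", fill=color)
    img.save(fname)
```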
