The merge/purge problem for large databases
- 22 May 1995
- conference paper (ACM SIGMOD '95)
- Published by Association for Computing Machinery (ACM) in ACM SIGMOD Record
- Vol. 24 (2), 127-138
- https://doi.org/10.1145/568271.223807
Abstract
Many commercial organizations routinely gather large numbers of databases for various marketing and business analysis functions. The task is to correlate information from different databases by identifying distinct individuals that appear in a number of different databases, typically in an inconsistent and often incorrect fashion. The problem we study here is the task of merging data from multiple sources in as efficient a manner as possible, while maximizing the accuracy of the result. We call this the merge/purge problem. In this paper we detail the sorted neighborhood method that is used by some to solve merge/purge and present experimental results that demonstrate this approach may work well in practice but at great expense. An alternative method based upon clustering is also presented with a comparative evaluation to the sorted neighborhood method. We show a means of improving the accuracy of the results based upon a multi-pass approach that succeeds by computing the transitive closure over the results of independent runs considering alternative primary key attributes in each pass.
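
The multi-pass idea summarized in the abstract can be sketched briefly: sort the records on a key, compare only within a small sliding window, repeat with different keys, and merge all matches via transitive closure. The sketch below is an illustration of that general scheme, not the authors' implementation; the function names (`sorted_neighborhood_pass`, `multi_pass_merge`), the `is_match` predicate, the `key_fns` list, and the `window` parameter are assumptions introduced here, and the paper's actual matching rules are considerably more elaborate.

```python
def sorted_neighborhood_pass(records, key_fn, is_match, window=5):
    """One pass: sort on key_fn, then compare each record only against
    the records inside a fixed-size sliding window."""
    order = sorted(range(len(records)), key=lambda i: key_fn(records[i]))
    matches = []
    for pos, i in enumerate(order):
        for j in order[pos + 1 : pos + window]:
            if is_match(records[i], records[j]):
                matches.append((i, j))
    return matches


def multi_pass_merge(records, key_fns, is_match, window=5):
    """Multi-pass scheme: run independent passes with different sort keys,
    then take the transitive closure of all matches using union-find."""
    parent = list(range(len(records)))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    for key_fn in key_fns:
        for i, j in sorted_neighborhood_pass(records, key_fn, is_match, window):
            union(i, j)

    clusters = {}
    for i in range(len(records)):
        clusters.setdefault(find(i), []).append(records[i])
    return list(clusters.values())


# Toy usage with made-up records and a trivially simple match predicate.
if __name__ == "__main__":
    records = [
        {"name": "John A. Smith", "ssn": "123456789"},
        {"name": "Jon Smith", "ssn": "123456789"},
        {"name": "Mary Jones", "ssn": "987654321"},
    ]
    same = lambda a, b: a["ssn"] == b["ssn"]
    keys = [lambda r: r["name"], lambda r: r["ssn"]]
    print(multi_pass_merge(records, keys, same, window=3))
```

Because each pass sorts on a different key, records that a single noisy key would keep far apart can still end up adjacent in some pass, and the final transitive closure merges those partial match sets into one cluster per individual.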