TY - GEN
T1 - Data Deduplication techniques and analysis
AU - Maddodi, Srivatsa
AU - Attigeri, Girija V.
AU - Karunakar, A. K.
PY - 2010
Y1 - 2010
N2 - Data warehouses are repositories of data collected from several data sources and form the backbone of most decision support applications. As the data sources are independent, they may adopt independent and potentially inconsistent conventions. In data warehousing applications, during ETL (Extraction, Transformation and Loading), or even in OLTP (On-Line Transaction Processing) applications, we often encounter duplicate records in tables. Moreover, data entry mistakes at any of these sources introduce further errors. Since high-quality data is essential for gaining the confidence of users of decision support applications, ensuring high data quality is critical to the success of data warehouse implementations. Therefore, significant amounts of time and money are spent on detecting and correcting errors and inconsistencies. The process of cleaning dirty data is often referred to as data cleaning. To make table data consistent and accurate, we need to remove these duplicate records. In this paper we discuss different deduplication strategies along with their pros and cons, as well as some of the methods used to prevent duplication in databases. In addition, we present a performance evaluation with Microsoft SQL Server 2008 on the Food Mart and AdventureDB warehouses.
AB - Data warehouses are repositories of data collected from several data sources and form the backbone of most decision support applications. As the data sources are independent, they may adopt independent and potentially inconsistent conventions. In data warehousing applications, during ETL (Extraction, Transformation and Loading), or even in OLTP (On-Line Transaction Processing) applications, we often encounter duplicate records in tables. Moreover, data entry mistakes at any of these sources introduce further errors. Since high-quality data is essential for gaining the confidence of users of decision support applications, ensuring high data quality is critical to the success of data warehouse implementations. Therefore, significant amounts of time and money are spent on detecting and correcting errors and inconsistencies. The process of cleaning dirty data is often referred to as data cleaning. To make table data consistent and accurate, we need to remove these duplicate records. In this paper we discuss different deduplication strategies along with their pros and cons, as well as some of the methods used to prevent duplication in databases. In addition, we present a performance evaluation with Microsoft SQL Server 2008 on the Food Mart and AdventureDB warehouses.
UR - https://www.scopus.com/pages/publications/79952337104
U2 - 10.1109/ICETET.2010.42
DO - 10.1109/ICETET.2010.42
M3 - Conference contribution
AN - SCOPUS:79952337104
SN - 9780769542461
T3 - Proceedings - 3rd International Conference on Emerging Trends in Engineering and Technology, ICETET 2010
SP - 664
EP - 668
BT - Proceedings - 3rd International Conference on Emerging Trends in Engineering and Technology, ICETET 2010
T2 - 3rd International Conference on Emerging Trends in Engineering and Technology, ICETET 2010
Y2 - 19 November 2010 through 21 November 2010
ER -