In 2004, the U.S. Census Bureau introduced an automated instrument to collect contact history paradata in personal-visit surveys. Survey methodologists analyze these data for purposes that include improving contact strategies, predicting survey nonresponse, and evaluating nonresponse bias. But while the paradata literature is growing, a critical question remains: how accurate are the paradata themselves? We address this question by analyzing contact history data collected by the same instrument across three Federal surveys. We compare indicators of data quality to assess the level of consistency both across and within the surveys. We also assess the degree of agreement between automatically recorded contact history data (e.g., time/date stamps) and information entered directly by the interviewer, such as attempt day and time, notes, and assessments of respondent cooperation.