
Nonsampling Error

All surveys, including SIPP, are subject to nonsampling errors from various sources. SIPP contains nonsampling errors common to most surveys, as well as errors that stem from SIPP's longitudinal design. SIPP experiences some differential undercoverage of demographic subgroups. To compensate for this undercoverage, the Census Bureau adjusts SIPP sample weights to population control totals. Little is known, however, about how effective these adjustments are in reducing biases in surveys.

SIPP also experiences sample attrition. This is a common concern in longitudinal surveys because of the need to follow the same people over time. Attrition reduces the available sample size, and to the extent that those leaving the sample are systematically different from those who remain in the sample, survey estimates could be biased.

Response errors also occur in SIPP and take a number of forms. Recall errors, for example, are thought to be the source of the "seam phenomenon." This effect occurs when a respondent projects current circumstances back onto the survey's reference period (the prior calendar year for the 2014 Panel and after, and the prior 4 months for the 2008 and earlier panels). Any changes in a respondent's circumstances that occurred during the reference period then appear to have happened at the beginning of that period. The result is a disproportionate number of changes that appear to occur between the last month of one wave and the first month of the following wave, the "seam" between the two interview waves.
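
The mechanism can be made concrete with a small simulation. The sketch below assumes a stylized "constant-wave" reporting pattern in which the respondent simply reports their status at interview time for every month of the wave; the statuses, wave lengths, and change timing are hypothetical.

```python
# Illustrative sketch of the "seam" recall error: a respondent whose
# circumstances actually changed mid-reference-period reports their
# *current* status for every month of the wave, so the change appears
# at the wave boundary. Statuses and timing are hypothetical.

def report_wave(true_status_by_month):
    """Simulate constant-wave reporting: the respondent projects the
    status held at interview time back over all months of the wave."""
    current = true_status_by_month[-1]  # status at the interview
    return [current] * len(true_status_by_month)

# True history over two 4-month waves: job loss in month 2 of wave 1,
# re-employment in month 3 of wave 2
wave1_truth = ["employed", "unemployed", "unemployed", "unemployed"]
wave2_truth = ["unemployed", "unemployed", "employed", "employed"]

reported = report_wave(wave1_truth) + report_wave(wave2_truth)
# The only reported transition falls between month 4 and month 5,
# exactly at the seam between the two waves, even though neither
# true change occurred there.
```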

Another potential source of nonsampling error is the time-in-sample effect. This effect refers to the tendency of sample members to "learn the survey" over time. The more times a sample member is interviewed, the better they learn the questionnaire. The concern is that sample members will alter their responses to the survey questions to conceal sensitive information or to minimize the length of the interview. 

Effects of Nonsampling Error on Survey Estimates

A considerable amount of research has been conducted to investigate the various sources of nonsampling error in SIPP. The results of this research are summarized in the SIPP Quality Profile, 3rd edition, available at www.census.gov/library/working-papers/1998/demo/SEHSD-WP1998-11.html. Additional findings about SIPP data quality, especially for more recent panels, can be found in the National Research Council's 2018 report, The 2014 Redesign of the Survey of Income and Program Participation: An Assessment, and in Appendix A of the National Research Council's 2009 report, Reengineering the Survey of Income and Program Participation. Despite the volume of methodological research, it remains difficult to quantify the combined effects of nonsampling errors on SIPP estimates. The problem is complicated by the fact that the effects of different types of nonsampling error vary depending on the estimate under consideration. Nevertheless, there are some findings about nonsampling error that SIPP users should bear in mind when conducting their analyses and examining their results. Those findings include:

  • Some demographic subgroups are underrepresented in SIPP because of undercoverage and nonresponse. They include young black males, metropolitan residents, renters, people who changed addresses during a panel (movers), and people who were divorced, separated, or widowed. The Census Bureau uses weighting adjustments and imputation to correct for this underrepresentation. However, those procedures may not fully correct for all potential biases (SIPP Quality Profile, 3rd Ed., Chapter 8).
  • SIPP estimates of the working population differ from those produced from CPS. The differences may be explained largely by substantial conceptual and operational differences in the collection of labor force data in the two surveys (SIPP Quality Profile, 3rd Ed., Chapter 10).
  • SIPP estimates of the number of births compare favorably with CPS estimates. Both surveys, however, provide estimates that are low relative to records from the National Center for Health Statistics (NCHS). SIPP estimates of the number of marriages are fairly comparable with NCHS counts, but SIPP estimates of the number of divorces are consistently lower than NCHS estimates (SIPP Quality Profile, 3rd Ed., Chapter 10).
  • Across all age groups, particularly children and the elderly, SIPP continues to identify more sources of family income than CPS. SIPP’s greater effectiveness than CPS in capturing income from multiple sources among retired workers demonstrates an important way in which SIPP appears to provide a better tool for policy analysis (Czajka et al., 2008).
  • In 2005, SIPP captured a higher share of aggregate annual benefits than CPS for Food Stamps, AFDC/TANF, OASI, and SSI, but was only marginally better for SSDI. In 1987, SIPP was on par with CPS for AFDC/TANF and SSDI. Whether because of poor recall or because respondents sometimes answer on the basis of their current situation, CPS estimates of persons who ever participated in a program sometimes line up with SIPP estimates of average monthly participants (Czajka, 2009).
  • When compared to the Survey of Consumer Finances (SCF) by the Federal Reserve Board for late 1998 and early 1999, SIPP is much more effective in capturing liabilities than assets. SIPP’s estimate of aggregate assets was 55 percent of the SCF estimate of $34.1 trillion, but its estimate of aggregate liabilities was 90 percent of the SCF estimate of $5.0 trillion (Czajka, 2009).
  • Average monthly estimates of health insurance coverage from SIPP compare closely to estimates of health insurance coverage obtained in the National Health Interview Survey (NHIS) and CPS (Czajka, 2009).
  • When examining the use of housing unit controls versus population controls, a team at the Census Bureau concluded that the within-household undercoverage adjustment applied under population controls (by age, sex, and race) tended to be larger than the coverage adjustment applied under housing unit controls, which target coverage of housing units (including whole households). As a result, when population-control-based weights are applied to characteristics such as household relationship, the estimate of householders (family plus nonfamily) will almost always exceed the corresponding housing-unit-control-based estimate of occupied housing units (Cresce et al., 2013).
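
The last finding above reduces to simple arithmetic: if the same base weights receive a larger adjustment factor under population controls than under housing unit controls, the householder estimate must exceed the occupied-unit estimate. The sketch below illustrates this with invented numbers; the sample size, weights, and adjustment factors are not taken from Cresce et al. (2013).

```python
# Hypothetical illustration of the Cresce et al. (2013) finding: the same
# sample yields a higher householder estimate under population controls
# than the occupied-housing-unit estimate under housing unit controls,
# because the within-household coverage adjustment tends to be larger.
# All numbers are invented for illustration.

sample_householders = 1000   # householder records in the sample
base_weight = 100.0          # design weight per record

pop_control_factor = 1.08    # assumed within-household coverage adjustment
hu_control_factor = 1.03     # assumed housing-unit coverage adjustment

householders_pop = sample_householders * base_weight * pop_control_factor
occupied_units_hu = sample_householders * base_weight * hu_control_factor

# Roughly 108,000 householders vs. 103,000 occupied units: the
# population-control-based estimate exceeds the housing-unit-based one.
assert householders_pop > occupied_units_hu
```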

The Census Bureau has conducted nonresponse bias studies to investigate the effect of declining response rates, but more work is needed to fully quantify that bias (McMillan & Culver, 2013). These studies are available at www.census.gov/programs-surveys/sipp/tech-documentation/nonresponse-reports.html.

Page Last Revised - August 22, 2022