
Article
March 2020

Estimating variances for modeled wage estimates

The Modeled Wage Estimates (MWE) program publishes mean hourly wages by occupation, geographic area, and worker characteristic (for example, full-time workers). The MWE program combines data from two U.S. Bureau of Labor Statistics programs: the Occupational Employment Statistics (OES) and the National Compensation Survey (NCS). For the first few years of the MWE program, there were no estimates of variance. In 2018, variance estimates were published for the first time for the MWE program, for the May 2017 reference month. This article first shows how the OES and NCS microdata are combined to produce a mean wage estimate. It then focuses on the new variance estimation methodology, highlighting how the variability of both the OES and NCS sample designs is simultaneously captured. A small sample of MWE mean wages and variances is provided for the most recent estimates, for the May 2018 reference month.

The Occupational Employment Statistics (OES) and the National Compensation Survey (NCS) programs both estimate mean hourly wages. Their data come from two independent establishment samples spanning the nation. The OES program collects employment and wage data from all occupations in an establishment, whereas the NCS program only collects data from a sample of occupations. The NCS program also collects information on worker characteristics, such as full- or part-time status, union or nonunion status, whether pay is time based only (for example, a wage) or contains incentive-based pay (for example, commissions), and the NCS generic work level.1 The OES program does not collect data on these worker characteristics.

The OES program samples about 1.2 million establishments over a 3-year period. The NCS program samples only about 8,000 establishments. The OES program produces reliable estimates for many estimation domains, such as occupations within geographic areas. The OES program does not have worker-characteristic breakouts. The NCS program, however, can produce worker-characteristic breakouts. But for many small domains, the NCS sample size is too small to produce reliable estimates, either for the entire domain or for some of its worker-characteristic breakouts.

The Modeled Wage Estimates (MWE) program was created to bridge this coverage gap between the OES and NCS programs. The MWE program produces worker-characteristic estimates for many OES estimation domains, including small OES domains in which NCS estimates are unreliable. The MWE program combines microdata from both the OES and NCS programs. The MWE estimation methodology was previously introduced in a 2013 Monthly Labor Review article, “Wage estimates by job characteristic: NCS and OES program data.”2 As will be shown, the mean wage estimator for the MWE program is identical to the mean wage estimator for the OES program, except for the inclusion of a new factor, the characteristic proportion for the worker characteristic. The proportion is an estimate of the fraction of workers (in an OES microdata row) who have a worker characteristic—for example, the fraction that is full time or the fraction that is both part time and in NCS generic work level 9. The MWE program has breakouts for 54 worker characteristics. See appendix A for a list.

To gauge the reliability of any estimator, we estimate its variance. The variance is the mean squared deviation of the sample estimates from the mean of the sample estimates, evaluated over the entire sampling distribution. The variance measures the dispersion of the sampling distribution. High variance indicates high dispersion, and low variance means that the estimates are tightly clustered. The lower the variance, the more likely that a randomly selected sample estimate from this distribution will be close to its mean and, hence, the more reliable the sample design. The variance also can be used to estimate a confidence interval (margin of error).

For any estimator, its variance cannot be calculated directly because we have only one sample, so its variance is estimated. We can estimate this variance in several ways. One method is the Taylor series, which is used to estimate one component of the variance of the OES mean wage estimator.3 Another method is Fay’s method of balanced repeated replication (Fay’s BRR), which is used by the NCS program.4 After weighing the options for the MWE program, we decided to use Fay’s BRR.

To compute the Fay’s BRR variance estimate for the MWE program, we start with the original mean wage estimate for the MWE program, which we call the full-sample estimate. Then we compute R new mean wage estimates, called replicate estimates. Each replicate estimate is computed with the use of the same formula as the full-sample estimate, except that the sampling weights are perturbed. The sampling weight of each noncertainty sampled unit is either increased by 50 percent or decreased by 50 percent. The choice of whether to increase or decrease is not static; rather, it will vary both by sample unit and replicate. The Fay’s BRR variance estimate is 4 times the mean squared deviation of these R replicate estimates from the full-sample estimate. Multiplying the mean squared deviation by four is necessary to properly scale the result.
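The rescaling step can be sketched in a few lines of Python (the function name and inputs are hypothetical; with Fay's perturbation factor of 50 percent, the rescaling constant is 1/(1 − 0.5)² = 4):

```python
def fays_brr_variance(full_estimate, replicate_estimates):
    """Fay's BRR variance estimate: 4 times the mean squared deviation
    of the R replicate estimates from the full-sample estimate."""
    R = len(replicate_estimates)
    mean_sq_dev = sum((rep - full_estimate) ** 2
                      for rep in replicate_estimates) / R
    return 4.0 * mean_sq_dev
```

For example, with a full-sample estimate of 10.0 and replicate estimates of 10.5, 9.5, 10.5, and 9.5, the mean squared deviation is 0.25 and the variance estimate is 1.0.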

The perturbation patterns for the replicates are designed to capture how the actual sample units and weights might vary from sample to sample. To define these perturbation patterns, we first divide the microdata into special subsets called variance strata, which are based on first-stage sampling strata. A sampling stratum is a subset of the sampling frame from which independent samples are selected. Ideally, the variance strata should be these first-stage sampling strata. But in some cases, we collapse the variance strata together. In other cases, we split them up.

Once formed, each variance stratum is then randomly split into two subsets, labeled “variance PSU 1” and “variance PSU 2” (PSU = primary sampling unit). For each replicate and stratum pair, we upweight one variance PSU and downweight the other variance PSU. That is, we first select one of the two variance PSUs. If a noncertainty unit is in this selected variance PSU, we increase its weight by 50 percent; otherwise, we decrease its weight by 50 percent. The pattern of perturbations, across all replicates and strata, is carefully chosen to ensure a good balance.
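The weight perturbation rule for a single unit can be sketched as follows (a hypothetical function; certainty units keep their original weight, as described later for OES establishments):

```python
def replicate_weight(weight, is_certainty, variance_psu, upweighted_psu):
    """Perturb one sampling weight for one replicate: certainty units are
    unchanged; a noncertainty unit in the upweighted variance PSU gets a
    50-percent increase, and one in the other PSU a 50-percent decrease."""
    if is_certainty:
        return weight
    return weight * (1.5 if variance_psu == upweighted_psu else 0.5)
```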

Adapting Fay’s BRR to the MWE program is challenging because the MWE variance estimator must capture the sampling variance of both the NCS and OES sample designs. Hence, we use a mixture of variance strata. Some MWE variance strata are based on the NCS locality sample design, some on the NCS national sample design, and the rest on the OES sample design. Each MWE estimation domain has a unique set of MWE variance strata. This practice of defining new variance strata for each MWE domain deviates from the method used for other NCS products, in which only one set of variance strata is used for all NCS domains.

Motivation for MWE

The OES sample has about 1.2 million establishments spread over six semiannual OES sampling panels. Each establishment is contacted only once. From most establishments, the OES program collects the employment of each “estab-occ-interval,” which is a wage interval within a six-digit Standard Occupational Classification (SOC) code occupation within the establishment. From some establishments, however, the OES program collects the individual wage rate of each worker. A wage interval is a range of mean hourly wages, for example $7.25 to $9.25 per hour. There are 12 OES wage intervals, which can vary by panel, and up to 840 possible SOC codes.5

The OES program computes mean hourly wage estimates for the nation, the states, and OES localities. For May 2018 estimates, there were two types of OES localities: metropolitan statistical areas (MSAs) and balance of state (BOS) areas. A BOS area is a cluster of nonmetropolitan counties. Each MSA can overlap more than one state, yet each BOS area is contained in a single state.

The NCS sample has about 8,000 establishments spread over several annual sampling panels. Each establishment in each sampling panel is recontacted every quarter, and its data are updated, until the sampling panel rotates out. From each establishment, the NCS program selects a sample of job quotes. Each sampled quote is a collection of workers who share the same six-digit SOC code and the same set of worker characteristics, such as full- or part-time status, union or nonunion status, time or incentive status, and NCS work level. From each quote, the NCS program collects the hourly wage rate of each worker. The NCS program also collects information on benefits, such as benefit costs, access, and participation. These microdata support the Employment Cost Index (ECI), the Employer Costs for Employee Compensation (ECEC), the Employee Benefits publications, and other statistics.6 The occupational and geographic scope of the NCS target population is similar to the scope of the OES target population. However, the NCS sampling strata are too coarse to support estimates for most OES localities, and they cannot support any state estimates.

For the May 2018 reference month, mean wage estimates for the MWE program are computed for 521 OES localities. In contrast, the NCS program has only 120 national sampling strata made of 24 NCS sample areas split into 5 aggregate industry groups. These 24 NCS sample areas are the 15 largest NCS localities and 9 broad “rest-of-census-division” areas. To form a rest-of-census-division area, we start with the entire division and remove all territory that overlaps any of the 15 largest NCS localities. Hence, the NCS program can only support the MWE localities that correspond to the 15 largest NCS sample areas. In addition, the NCS program has too few sample units to produce reliable mean wage estimates for the MWE program for most MWE geographic and occupational domains, let alone for worker-characteristic breakouts of these domains. On the other hand, although the OES program has adequate occupational and geographic coverage, the OES program cannot produce any breakouts by worker characteristic.

The MWE program is designed to bridge these coverage gaps. The mean wage estimates of the MWE program are anchored on the broad and deep occupational and geographic coverage of the OES microdata. These OES microdata are then supplemented with information on worker characteristics only found in NCS microdata. The OES microdata are used as a skeleton, to allow reliable estimates for some small OES domains. The NCS microdata are used to estimate mean wages for each worker characteristic for each OES domain. For each OES estab-occ-interval and for each worker characteristic, ideally we would like to know the true characteristic proportion, which is the fraction of workers in the OES estab-occ-interval who have the given characteristic. But the OES does not collect data on worker characteristics, so the true characteristic proportion is unknown and hence must be imputed.

The NCS microdata are used to impute these characteristic proportions. First, we partition the NCS microdata into imputation cells. In order for us to compute reliable estimates of characteristic proportions, each cell should have a sufficient amount of NCS microdata. Cells with insufficient microdata are collapsed together with other cells until each collapsed cell has enough microdata. The resulting collapsed cells are called final imputation cells. Next, using just the NCS microdata in the final imputation cell, we compute an NCS characteristic proportion for each final imputation cell and each of the 54 worker characteristics. Lastly, we map each OES estab-occ-interval to one final imputation cell. The 54 imputed characteristic proportions for this OES estab-occ-interval are set equal to the corresponding 54 characteristic proportions for the NCS final imputation cell to which the OES estab-occ-interval maps.
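The cell-collapsing step can be sketched as a walk up a hierarchy of progressively coarser candidate cells, stopping at the first level with enough NCS quotes (the three-quote threshold matches the collapsing rule described later in this article; the level names are illustrative):

```python
def final_imputation_cell(quote_counts, levels, min_quotes=3):
    """quote_counts maps a candidate cell (level name) to its NCS quote
    count; levels are ordered finest to coarsest. Return the first level
    whose cell meets the minimum, falling back to the coarsest cell."""
    for level in levels:
        if quote_counts.get(level, 0) >= min_quotes:
            return level
    return levels[-1]
```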

Within a single OES domain, however, this mapping will be imperfect. Because of the smaller NCS sample size, many of these final imputation cells may not be subsets of the OES domain. As a result, the imputed characteristic proportions used for an OES domain will often be based in part on NCS microdata outside the OES domain. For example, the OES domain might be restricted to one locality, such as Lexington, Kentucky. However, the associated NCS imputation cell might span all of the East South Central census division (Alabama, Kentucky, Mississippi, and Tennessee). In another example, the initial imputation cell is restricted to a single six-digit SOC code, yet the final cell, after collapsing, spans an entire major occupational group (MOG).

OES mean wage estimates

The mean wage estimator for the MWE program is nearly identical to the mean wage estimator for the OES program, so discussing the OES mean wage estimator first is helpful. An OES estimation domain D is an occupational or industry domain within a geographic domain. The OES mean hourly wage for D is given by

$$\bar{Y}_D = \frac{\sum_{k \in D} W_k E_k Y_k}{\sum_{k \in D} W_k E_k},$$

where $\bar{Y}_D$ = OES estimate of the mean hourly wage for domain $D$; for OES interval data, $k$ = OES estab-occ-interval and $Y_k$ = NCS interval-mean wage that is associated with $k$; for OES point data, $k$ = OES individual wage record in an OES estab-occ and $Y_k$ = OES individual wage rate for $k$; and for the other variables, $D$ = OES estimation domain (which is an occupational or industry domain in a geographic domain), $W_k$ = OES weight for the OES establishment containing $k$, and $E_k$ = OES employment of $k$.
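In code, the OES estimator is a straightforward employment-weighted mean (a minimal sketch with hypothetical inputs):

```python
def oes_mean_wage(rows):
    """rows: (W_k, E_k, Y_k) triples for the microdata rows k in a
    domain D -- establishment weight, employment, and wage value."""
    numerator = sum(W * E * Y for W, E, Y in rows)
    denominator = sum(W * E for W, E, _Y in rows)
    return numerator / denominator
```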

If k is an estab-occ-interval, the NCS interval mean wage Yk is computed as follows. First, we collect 3 years of NCS microdata (using the June quarter). We then create six NCS datasets, one for each OES panel. For the first year, NCS data are duplicated to create panels 5 and 4, the second year creates panels 3 and 2, and the third year creates panels 1 and 0. Next, if a wage is below the federal minimum wage, it is increased to this minimum, and some upper outlier wages are dropped. We then assign each row of NCS data to an OES wage interval, using the interval definitions for that panel. Finally, the NCS data are divided into interval-mean estimation cells, on the basis of panel and interval number. The estab-occ-interval k is mapped to one of these cells Mk, and an initial mean wage is computed for the cell as

$$Y_k = \frac{\sum_{j \in M_k} Z_j X_j}{\sum_{j \in M_k} Z_j},$$

where $j$ = individual wage record in an NCS quote; $M_k$ = interval-mean cell associated with $k$; $Z_j$ = NCS individual weight for $j$; and $X_j$ = NCS mean hourly wage for $j$.

The interval means for the five older OES panels are then aged forward with the use of the ECI. That is, if the ECI went up by 2 percent, the interval means are increased by 2 percent. The aging factors vary by panel and MOG. One final adjustment may be required, since for some OES panels and states, the state minimum wage exceeds the lower bound of interval 1 or even interval 2. In these situations, the interval mean wages are shifted up. In some cases, the value is replaced with the geometric mean of the endpoints of a wage interval.

MWE mean wage estimates

The estimation domains for the MWE program are occupational domains within geographic domains. Most of the occupational domains are six-digit or two-digit SOC codes. The rest are small clusters of six-digit SOC codes, called rollup SOC codes. The geographic domains for May 2018 estimates include 521 OES localities, 51 states (when we include the District of Columbia), and the nation. Currently, we compute mean hourly wages for the MWE program for four dimensions of worker characteristics: union or nonunion, full- or part-time, time or incentive, and work levels (levels 1–15, plus an extra category for nonleveled quotes). These definitions yield 22 worker characteristics. We also break out the full-time and part-time estimates by work level, which adds 32 more worker characteristics. Hence, each MWE estimation domain has 54 NCS worker characteristics. See appendix A for a list.

The mean wage estimator for the MWE program is nearly identical to the mean wage estimator for the OES program, except for the introduction of a new factor, the characteristic proportion $F_{kC}$. For the MWE program, the mean hourly wage for a domain D and worker characteristic C is given by

$$\bar{Y}_{DC} = \frac{\sum_{k \in D} W_k E_k F_{kC} Y_k}{\sum_{k \in D} W_k E_k F_{kC}},$$

where $\bar{Y}_{DC}$ = estimate of the mean hourly wage for the MWE program for domain $D$ and worker characteristic $C$; the symbol $D$ = MWE estimation domain (which is an occupational domain in a geographic domain); $C$ = worker characteristic (full time, union, time-based pay, work level, etc.); the subscript $k$ and the variables $W_k$, $E_k$, and $Y_k$ are the same as in the OES mean wage estimator shown previously; and $F_{kC}$ = MWE characteristic proportion for $C$ that is associated with $k$. Note that the definition of $D$ has changed compared with what is used in the OES program. For the OES program, a domain is an occupational or industry domain in a geographic domain. For the MWE program, however, currently no mean wage estimates exist for industry groups (for any geographic breakout).

The characteristic proportion for k and C is computed from NCS microdata as

$$F_{kC} = \frac{\sum_{i \in B_{kC}} Z_i G_{iC}}{\sum_{i \in B_{kC}} Z_i},$$

where $i$ = individual wage record in an NCS quote; $B_{kC}$ = characteristic-proportion final imputation cell associated with $k$; $Z_i$ = NCS individual weight for $i$; and $G_{iC} = 1$ if the quote containing $i$ has worker characteristic $C$, and $G_{iC} = 0$ otherwise.
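Computationally, the characteristic proportion is just a weighted proportion over the final imputation cell (a hypothetical sketch):

```python
def characteristic_proportion(records):
    """records: (Z_i, G_iC) pairs for the NCS wage records i in the final
    imputation cell B_kC; G_iC is 1 if the quote containing i has the
    worker characteristic C, else 0."""
    numerator = sum(Z * G for Z, G in records)
    denominator = sum(Z for Z, _G in records)
    return numerator / denominator
```

For example, records with weights 2, 2, and 1, of which the first and last have the characteristic, yield a proportion of 3/5 = 0.6.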

The characteristic-proportion cells $B_{kC}$ are initially broken out by OES panel, wage interval, six-digit SOC code, and NCS sample area (the 24 NCS sample areas referred to previously). If a cell contains fewer than three NCS quotes, it is collapsed, and the characteristic proportion is recomputed with the use of the microdata from the collapsed cell. First, we collapse the NCS sample areas into census divisions, then census regions, and then the nation. Next, we collapse the six-digit SOC codes into MOGs. Finally, we collapse the MOGs together. If the final imputation cell has no quotes with the characteristic, we set the characteristic proportion to zero.

MWE variance estimates

The variance of the mean wage estimator for the MWE program is the expected value of the squared deviation of the estimates from their expected value, which is defined as

$$V\big(\hat{\bar{Y}}_{DC}\big) = \sum_{s} \Pr(s)\,\Big(\hat{\bar{Y}}_{DC}(s) - E\big[\hat{\bar{Y}}_{DC}\big]\Big)^2,$$

where $D$ = MWE domain; $C$ = worker characteristic; $s$ = sample; $\Pr(s)$ = probability of selecting $s$; $V(\hat{\bar{Y}}_{DC})$ = variance of the estimator $\hat{\bar{Y}}_{DC}$; $\hat{\bar{Y}}_{DC}$ = mean wage estimator for the MWE program for $D$ and $C$; $\hat{\bar{Y}}_{DC}(s)$ = mean wage estimator for the MWE program for $D$ and $C$, for the given sample $s$; and $E[\hat{\bar{Y}}_{DC}]$ = expected value of the estimator $\hat{\bar{Y}}_{DC}$, where $E[\hat{\bar{Y}}_{DC}] = \sum_{s} \Pr(s)\,\hat{\bar{Y}}_{DC}(s)$.

We cannot compute the variance directly because we only have one sample to work with, so we must estimate the variance. Several different methods can be used for estimating the variance, but they fit into two broad categories: linearization methods and replication methods.

One linearization method is the Taylor series method.7 It approximates the sample deviation as a linear function of the numerator and denominator of the mean wage estimator. We approximate the sampling variability by examining how the units vary within the sample, while also accounting for the sample design.

The variance estimate for the OES mean wage estimator is a sum of two components. The first variance component measures the OES sample design’s contribution to the variance. A Taylor series variance estimator is used. Yet, the interval means are constant, so the Taylor series variance estimator only captures the variability from the OES sample design. The second variance component models the interval means’ contribution to the variance, which can vary by NCS sample. First, NCS microdata are used for creating an artificial population. Next, a regression model is created that relates the interval means of this population to auxiliary data. One output of the modeling process is the contribution of each regression variable to the model variance. The second variance component is the sum of these variance contributions.

One replication method is Fay’s BRR, mentioned earlier.8 First, we create an artificial set of new estimates, called replicate estimates. The distribution of these replicate estimates is used to model the true distribution of sample estimates, from which the variance can be estimated.

Recall that the mean wage estimator for the MWE program is the mean wage estimator for the OES program, with the inclusion of the characteristic proportion. Hence, for MWE variance estimation, we could start by considering the OES variance estimator. Yet, the OES variance estimator does not account for the sampling variability of the characteristic proportions. Accounting for this variability, of course, was not necessary for the OES program since the mean wage estimator for the OES program does not have a characteristic proportion. For the MWE variance estimator, to account for this extra component of variance, one can use a new linear function for the Taylor series. This possibility was investigated but was too complex to estimate.

The Fay’s BRR variance estimator, by comparison, is much simpler, and it can still capture all three sources of sampling variability for the mean wage estimator of the MWE program. The first source of sampling variability is from the OES sample design. Recall that for OES variance estimation, a Taylor series variance estimator is used for estimating the OES sample design’s contribution to the variance. For the MWE variance estimator, however, we use the Fay’s BRR variance estimator to capture the first variance contribution and we vary the OES sampling weights by replicate. The second source of sampling variability for the mean wage estimator for the MWE program is from the interval means, which can vary by NCS sample. Recall that for OES variance estimation, this source of sampling variability was estimated as the sum of regression-model variance components. For the MWE variance estimator, however, we use the Fay’s BRR variance estimator to capture the second variance contribution and we vary the NCS sampling weights in the initial interval mean formula by replicate. The third source of sampling variability for the mean wage estimator of the MWE program is from the characteristic proportions, which can vary by NCS sample. This third source is unique to the MWE program and does not exist in the OES program. For the MWE variance estimator, we use the Fay’s BRR variance estimator to capture the third variance contribution and we vary the NCS sampling weights in the characteristic-proportion formulas by replicate.

The Fay’s BRR estimator of the variance of the mean wage estimator for the MWE program is given by

$$\hat{V}\big(\hat{\bar{Y}}_{DC}\big) = \frac{4}{R} \sum_{r=1}^{R} \Big(\hat{\bar{Y}}_{DC,r} - \hat{\bar{Y}}_{DC}\Big)^2,$$

where $R$ = number of replicates; $r$ = replicate number; $\hat{V}(\hat{\bar{Y}}_{DC})$ = Fay’s BRR estimator of the variance of the estimator $\hat{\bar{Y}}_{DC}$; $\hat{\bar{Y}}_{DC,r}$ = mean wage estimator for the MWE program for domain $D$, characteristic $C$, and replicate $r$; and $\hat{\bar{Y}}_{DC}$ = mean wage estimator for the MWE program for domain $D$ and characteristic $C$. For each $D$ and $C$, the estimates computed from the estimator $\hat{\bar{Y}}_{DC,r}$ are called replicate estimates, and the estimate computed from the estimator $\hat{\bar{Y}}_{DC}$ is called the full-sample estimate.

Note that the mean squared deviation is multiplied by 4. This rescaling is done because the distribution of replicate estimates will be about half as wide as the true sampling distribution.

MWE replicate estimates

To compute a replicate estimate of the mean hourly wage for the MWE program, we use the same formula as that used for the full-sample estimate of the mean hourly wage for the MWE program, except that some, but not all, of the terms in the full-sample estimation formula are replaced by their replicate estimates. OES and NCS weights are replaced with their replicate weights.

The mean wage estimator for the MWE program for domain D, characteristic C, and replicate r is

$$\hat{\bar{Y}}_{DC,r} = \frac{\sum_{k \in D} W_{kr} E_k F_{kCr} Y_{kr}}{\sum_{k \in D} W_{kr} E_k F_{kCr}},$$

where $\hat{\bar{Y}}_{DC,r}$ = mean wage estimator for the MWE program for domain $D$, characteristic $C$, and replicate $r$; for OES interval data, $k$ = OES estab-occ-interval and $Y_{kr}$ = $r$th replicate estimate of the NCS interval-mean wage used for $k$; for OES point data, $k$ = OES individual wage record in an OES estab-occ and $Y_{kr}$ = OES individual wage rate for $k$ (for point data, this value is the same for all replicates). For the other variables, $D$ = MWE estimation domain, which is an occupational domain in a geographic domain; $C$ = worker characteristic; $r$ = replicate number; $F_{kCr}$ = $r$th replicate estimate of the MWE characteristic proportion for $C$ (used for $k$); $W_{kr}$ = OES weight for the OES establishment containing $k$ (adjusted for replicate $r$); and $E_k$ = OES employment of $k$ (this value is the same for all replicates).

The rth replicate estimate of the characteristic proportion is computed from NCS microdata as

$$F_{kCr} = \frac{\sum_{i \in B_{kC}} Z_{ir} G_{iC}}{\sum_{i \in B_{kC}} Z_{ir}},$$

where $i$ = individual wage record in an NCS quote; $B_{kC}$ = characteristic-proportion final imputation cell associated with $k$ and $C$ (the value $B_{kC}$ is the same for all replicates); $Z_{ir}$ = NCS individual weight for $i$, adjusted for replicate $r$; and $G_{iC} = 1$ if the quote containing $i$ has worker characteristic $C$, and $G_{iC} = 0$ otherwise (this value is the same for all replicates).

If k is an estab-occ-interval, the rth replicate estimate Ykr of the NCS interval mean wage is computed in the following way. First, we use the same NCS input dataset as the full-sample estimate and modify the dataset the same way. Then we compute an initial interval mean as

$$Y_{kr} = \frac{\sum_{j \in M_k} Z_{jr} X_j}{\sum_{j \in M_k} Z_{jr}},$$

where $j$ = individual wage record in an NCS quote; $M_k$ = interval-mean cell associated with $k$ (the same for all replicates); $Z_{jr}$ = NCS individual weight for $j$, adjusted for replicate $r$; and $X_j$ = NCS mean hourly wage for $j$ (the same for all replicates).

This initial interval mean for replicate r is then adjusted with the same algorithm that was used for the full-sample estimate, yet with modifications. Some aspects of the algorithm can vary by OES and/or NCS sample. Thus, some of these aspects are allowed to vary by replicate, yet not all, to avoid too much complexity. For example, the ECI aging factors do not vary by replicate, even though they would vary by NCS sample.

For replicate r, if an OES establishment is sampled with certainty (probability 1), no adjustments are made to the OES establishment weight. For an OES noncertainty establishment, the OES establishment weight is either increased by 50 percent or decreased by 50 percent. For each NCS quote hit, the NCS individual weight is increased by 50 percent or decreased by 50 percent.

To define these perturbation patterns, first we partition the microdata into H variance strata (to be described later). Next, we split the sampling units in each variance stratum h into two roughly equal parts, called variance PSU 1 and variance PSU 2 (mentioned earlier). Then, for replicate r and variance stratum h, we upweight one variance PSU by 50 percent and downweight the other by 50 percent. The perturbations are not random but are chosen in a balanced fashion. Suppose we represented the perturbation choices as a matrix with R rows and H columns. For replicate r and stratum h, if PSU 1 is upweighted, let the matrix entry be 1; otherwise, let it be –1. Then the perturbation pattern is balanced whenever this matrix has orthogonal column vectors. That is, the inner product of all pairs of columns is always zero. The number R of replicates is always greater than or equal to the number H of strata, but typically R will be within a few units of H.
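A standard way to obtain such a balanced sign matrix is a Hadamard matrix, whose columns are pairwise orthogonal; Sylvester's construction, shown below, yields one for any power-of-two order (an illustrative sketch, not the production implementation):

```python
def sylvester_hadamard(n):
    """Build an n-by-n matrix of +1/-1 entries with pairwise orthogonal
    columns (n must be a power of 2), starting from [[1]] and doubling."""
    H = [[1]]
    while len(H) < n:
        H = ([row + row for row in H] +
             [row + [-x for x in row] for row in H])
    return H
```

For H variance strata, one can take the first H columns of a Hadamard matrix of order R, the smallest suitable order at least H, which is why R is typically within a few units of H.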

Ideally, the variance strata should be based on the first-stage sampling strata. But in some cases, we collapse the variance strata together; in others, we split them up. For example, we want each variance PSU in each variance stratum to have at least one sample unit. So we collapse the variance strata together until each stratum has at least two units. Sometimes, we collapse strata together simply to reduce the number H of strata. The fewer strata that exist, the fewer replicates we need, which speeds up running times because there are fewer replicate estimates to compute. Finally, in some cases, we do not have enough strata to accurately capture the sampling variability. For example, if only three strata exist, then there will be only four replicates. Yet, suppose the number of possible samples is much larger than four. Then using only four replicate estimates to approximate the true sample distribution of the mean wage estimates could underestimate the variance. Splitting up the variance strata increases the number and hence the diversity of the replicate estimates, which could counteract this bias in the variance estimate since it may be a better approximation of the true sampling distribution.

MWE variance strata

The replicate estimator for mean hourly wage for the MWE program has both NCS and OES terms and also three types of MWE domains: locality, state, and national. The NCS already uses Fay’s BRR, so for the NCS terms, we already have NCS variance strata and PSUs. Some of these NCS variance strata and PSUs are designed for NCS locality estimates, and some are designed for NCS national estimates. Hence, for each MWE domain and each NCS quote hit, we must decide if we should use the locality variance strata and PSUs or use the national variance strata and PSUs. For the OES program, Fay’s BRR was not used. Yet, the OES sampling strata were available. First, we let the OES variance strata for an MWE domain equal the OES sampling strata within the MWE domain. Many of these OES variance strata, however, are then collapsed together or split into pieces. Hence, the final MWE variance strata for a domain are a mixture of these three types of strata: NCS locality variance strata, NCS national variance strata, and the new OES variance strata.

For each NCS quote hit, we must decide whether to use its NCS locality variance stratum and PSU definition or its NCS national variance stratum and PSU definition. The choice we make for a quote hit is based on the scope of the MWE estimation domain, so it can vary by MWE domain. Suppose we are estimating mean wages for the MWE program for a domain that spans the nation. Then, we only use the NCS national variance strata and PSU definitions. On the other hand, suppose the MWE domain is in a single OES locality or a single state. Let A1 be the locality or the state containing the MWE domain, and let A2 be the NCS sample area that contains the given NCS quote hit. If A1 and A2 overlap, then we use the locality variance stratum and PSU definitions for that quote hit. Otherwise, we use the NCS national variance stratum and PSU definitions.
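The decision rule just described can be summarized in a few lines (hypothetical names; `overlaps` stands in for a geographic overlap test):

```python
def ncs_strata_source(domain_is_national, domain_area, quote_area, overlaps):
    """Pick the NCS variance stratum/PSU definitions for one quote hit:
    national strata for national MWE domains; locality strata when the
    domain's locality or state overlaps the quote's NCS sample area."""
    if domain_is_national:
        return "national"
    return "locality" if overlaps(domain_area, quote_area) else "national"
```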

The approach just discussed for locality and state MWE domains was used primarily to solve a problem in which too few NCS sampling strata existed in the final characteristic-proportion imputation cell. Again, suppose A1 is the MWE domain and A2 is the NCS sample area containing the quote. Suppose A1 is a subset of A2, and we proceed to compute the replicate estimates for a characteristic proportion for the given quote. The initial characteristic-proportion imputation cell will be contained within this single NCS area A2. Suppose this initial imputation cell has at least three quotes; then no collapsing occurs. Unfortunately, this single area A2 has only five NCS sampling strata (and hence only five NCS national variance strata), which did not seem enough for reliably capturing the true within-cell variability. So for these cases, we elected instead to use the 44 locality variance strata that exist inside each NCS area. These 44 strata are industry poststrata and are used for computing locality variance estimates for other NCS programs (such as ECI and ECEC). This methodology also handles cases in which the characteristic-proportion cells are collapsed.

To get the OES variance strata for the MWE domain, we start with the OES sampling strata in the MWE domain, which are based on OES locality, state, industry, and panel. We may need to collapse some strata together, since for Fay’s BRR we prefer at least two OES establishments per variance stratum. A within-cell nearest-neighbor method is used: we create a collapse tree whose leaves are the OES sampling strata. If a stratum needs to be collapsed, we try to pair it with a neighboring donor stratum that shares the same parent. If no such donor can be found, we look for a neighboring donor that shares the same grandparent, and so on.
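
The nearest-neighbor donor search can be sketched as follows. This is a toy illustration, assuming strata are identified by their tuple paths from the root of the collapse tree; the function name and representation are hypothetical.

```python
def find_donor(stratum, candidates):
    """Nearest-neighbor donor search for collapsing: prefer a stratum that
    shares the same parent node in the collapse tree, then the same
    grandparent, and so on. Strata are tuple paths from the tree root."""
    best, best_depth = None, -1
    for cand in candidates:
        if cand == stratum:
            continue
        # depth of the deepest shared ancestor = length of the common prefix
        depth = 0
        for a, b in zip(stratum, cand):
            if a != b:
                break
            depth += 1
        if depth > best_depth:
            best, best_depth = cand, depth
    return best
```

For instance, a stratum that cannot find a sibling under its own parent falls back to a cousin under its grandparent, exactly as the text describes.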

If the MWE domain is small, it may not have enough OES variance strata for reliably estimating the variance. So, where possible, the strata are split by establishment size class and then by industry (defined by North American Industry Classification System codes) until there are enough variance strata. Yet the sparseness of the OES microdata and the requirement of at least two units per stratum often make this effort impossible: even when some splitting by size class and industry can occur, the MWE domain often still has too few variance strata after the process terminates. In that case, we reject these final variance strata and try again, this time abandoning the goal of splitting only by size class and industry. Rather, the establishments in the MWE domain are sorted by size and industry, and then the first establishment is paired with the second, the third with the fourth, and so on. This pairing yields the most variance strata, although each stratum is no longer necessarily restricted to a single size class and industry. Even so, the MWE domain still may not have enough variance strata, simply because there are not enough OES establishments. Fortunately, many of these small MWE domains are too small to publish.
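
The sort-and-pair fallback can be sketched as follows. This is an illustrative simplification: establishments are dictionaries with hypothetical size_class and industry keys, and an odd leftover unit is folded into the last stratum (an assumption made for the sketch; the article does not specify how an odd count is handled).

```python
def pair_establishments(establishments):
    """Sort establishments by size class, then industry, and pair consecutive
    units: (1st, 2nd), (3rd, 4th), ... Each pair becomes a variance stratum."""
    units = sorted(establishments,
                   key=lambda e: (e["size_class"], e["industry"]))
    pairs = [units[i:i + 2] for i in range(0, len(units) - 1, 2)]
    if len(units) % 2 == 1 and pairs:
        pairs[-1].append(units[-1])  # fold an odd leftover into the last pair
    return pairs
```

This yields the maximum possible number of strata, at the cost of strata that may mix size classes and industries, as noted in the text.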

On the other hand, for many large MWE domains, far too many OES sampling strata are in the domain. If we used all of them, then the number R of replicates would be too large to allow us to compute all the replicate estimates in any reasonable time, and the storage requirements would be severe. So more collapsing of strata must occur until the total number H of strata (and hence R) is more tractable.
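Once the variance strata are fixed, the Fay’s BRR machinery itself is standard: replicate half-samples are defined by the rows of a Hadamard matrix with at least H columns (which is why R grows with H), and the variance is the scaled average squared deviation of the replicate estimates from the full-sample estimate. Below is a minimal sketch assuming the common Fay coefficient K = 0.5; this is the textbook form of the estimator, not the MWE production code, and the Sylvester construction shown yields only power-of-2 orders.

```python
import numpy as np

def sylvester_hadamard(h_strata):
    """Smallest Sylvester-construction Hadamard matrix whose order (a power
    of 2) is at least h_strata; its rows define the replicate half-samples."""
    H = np.array([[1]])
    while H.shape[0] < h_strata:
        H = np.block([[H, H], [H, -H]])
    return H

def fay_brr_variance(replicate_estimates, full_sample_estimate, k=0.5):
    """Fay's BRR variance estimate: mean squared deviation of the replicate
    estimates from the full-sample estimate, scaled by 1 / (1 - k)**2."""
    reps = np.asarray(replicate_estimates, dtype=float)
    r = reps.size
    return np.sum((reps - full_sample_estimate) ** 2) / (r * (1.0 - k) ** 2)
```

The doubling in `sylvester_hadamard` makes the storage concern concrete: each extra variance stratum can force the replicate count R up to the next power of 2.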

For locality MWE, all the collapsing just described is done first by panel, then by state, and then by industry. For state MWE, we also collapse by locality size class and by metropolitan or nonmetropolitan status. For state MWE, the collapse levels (industry groups, geographic objects, locality size classes, and metropolitan or nonmetropolitan status groups) are interleaved in the collapse tree, which lets us balance the information retained across level types. For example, we might first collapse by one industry level, then by one geographic level, then by one locality size class level, and so forth, repeating this cycle. The national MWE program, however, has three more geographic levels: states, census divisions, and census regions. To compensate, we removed some of the industry levels. Also, the order in which the levels are interleaved for national estimates differs from that for state estimates. The order matters because the higher a node is on the collapse tree, the more likely its information, and hence its variability, will be retained when we collapse from the bottom up.

Note that a new set of OES variance strata is defined for each MWE domain. This practice differs from that typically used in the NCS program, in which the variance strata are fixed for all NCS domains. For the NCS program, the number of sampling strata is small, so a fixed set works. For the OES program, however, about 151,000 OES sampling strata (with OES microdata) exist. For large MWE domains, such as two-digit SOC codes for national estimates, we could obtain a manageable number of strata only by applying a large amount of collapsing. For small domains, such as a six-digit SOC code within a locality, the number of OES sampling strata (with microdata in the domain) is often very small, so we may want more strata rather than fewer. One fixed set of OES variance strata could not serve all MWE domain sizes well, so the OES variance strata were redefined for each MWE domain.

Use of variance estimates

The variance estimate is a measure of mean squared deviation. However, the variance estimate is not directly comparable to the mean wage estimate because the mean wage is measured in dollars, whereas the mean squared deviation is measured in dollars squared. Hence, we often take the square root of the variance, called the standard error, so that we have a value comparable to the mean wage estimate. The standard error often varies greatly across domains because of the size of the mean wage, not because of reliability issues. So the standard error is often represented instead as a percentage of the mean wage, which allows better comparisons across domains. This value is called the percent relative standard error (%RSE). Tables 1–6 in the next section contain mean wages and %RSEs.
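
The relationship between variance, standard error, and %RSE is just two arithmetic steps; a minimal sketch (the function names are hypothetical):

```python
import math

def standard_error(variance):
    """Standard error: the square root of the variance, in dollars."""
    return math.sqrt(variance)

def percent_rse(variance, mean_wage):
    """Percent relative standard error: the standard error expressed as a
    percentage of the mean wage, making domains with different wage levels
    comparable."""
    return 100.0 * standard_error(variance) / mean_wage
```

For example, a domain with a $40.00 mean wage and a variance of 4.00 (dollars squared) has a standard error of $2.00 and a %RSE of 5.0.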

The standard error estimate can also be used to generate an estimated confidence interval as

(mean wage − z × SE, mean wage + z × SE),

where z depends on the desired confidence level. For example, for the 90-percent confidence level, z is about 1.645. To understand the estimated confidence interval, consider the following situation. Suppose the confidence level was 90 percent, and we could select all samples and compute their confidence intervals. Also, suppose these estimates were normally distributed. Then, we expect that 90 percent of these confidence intervals will contain the true population value. In reality, estimates usually are not normally distributed, but for large sample sizes, the normal distribution is a good approximation of the true distribution. The smaller the variance, the smaller the confidence interval and hence the more reliable the estimate.
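
The interval computation is a one-liner; a minimal sketch, with z defaulting to the 90-percent value quoted above (the function name is hypothetical):

```python
def confidence_interval(mean_wage, std_error, z=1.645):
    """Estimated confidence interval: mean wage plus or minus z standard
    errors. z = 1.645 corresponds to a 90-percent confidence level under
    the normal approximation."""
    return (mean_wage - z * std_error, mean_wage + z * std_error)
```

A mean wage of $20.00 with a standard error of $1.00 yields a 90-percent interval of roughly ($18.36, $21.64); a smaller standard error tightens the interval, reflecting a more reliable estimate.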

MWE variance estimates for May 2018

A complete set of mean wage estimates for the MWE program for May 2018 can be found at https://www.bls.gov/mwe/mwe-2018complete.xlsx. This Excel file was used to generate the six custom tables in this article, tables 1–6, shown below. Tables 1–6 show mean hourly wages for the MWE program and their associated %RSEs for a few domains and characteristics. Tables 1 and 2 show national estimates by occupational group (two-digit SOC code) and by worker characteristic. Tables 3 and 4 show state estimates for a single six-digit SOC code, cashiers. Tables 5 and 6 show state estimates for another six-digit SOC code, registered nurses.

 Table 1. Mean hourly wages and percent relative standard errors for United States, by occupational group and worker characteristics, May 2018
Occupational group | Union (Mean, %RSE) | Nonunion (Mean, %RSE) | Time-based pay (Mean, %RSE) | Incentive-based pay (Mean, %RSE) | Full time (Mean, %RSE) | Part time (Mean, %RSE)

Management

58.651.257.471.177.026.459.071.2

Business and financial operations

33.312.536.840.435.630.356.013.137.120.422.308.2

Computer and mathematical

43.904.143.890.243.790.254.159.444.270.2

Architecture and engineering

44.044.941.430.541.450.441.990.333.204.0

Life, physical, and social science

42.213.434.781.035.940.736.760.531.013.6

Community and social service

29.931.521.410.623.550.224.330.619.332.8

Legal

42.254.553.061.752.041.553.151.841.8910.9

Education, training, and library

24.510.5

Arts, design, entertainment, sports, and media

27.151.028.410.831.450.8

Healthcare practitioners and technical

48.284.937.891.338.860.840.500.935.191.8

Healthcare support

19.922.614.930.815.500.616.260.813.921.7

Protective service

30.242.118.071.622.900.424.980.514.722.7

Food preparation and serving related

17.792.211.873.912.243.617.836.214.331.910.935.1

Building and grounds cleaning and maintenance

19.211.413.391.814.351.417.745.715.121.212.232.8

Personal care and service

17.692.313.172.213.342.116.484.914.521.512.553.0

Sales and related

15.232.920.290.916.681.336.411.826.260.711.193.1

Office and administrative support

22.211.018.120.518.510.419.912.419.770.313.212.2

Construction and extraction

33.410.921.660.724.560.228.5710.824.880.318.846.0

Installation, maintenance, and repair

31.741.221.560.423.280.225.872.524.020.314.774.3

Production

23.301.217.990.518.820.418.516.619.350.311.972.6

Transportation and material moving

24.821.716.200.918.120.721.383.019.840.613.682.1

Notes: For definitions of worker characteristics terms, see “Frequently asked questions” at https://www.bls.gov/mwe/faq.htm. Dash indicates data failed to meet publication criteria. %RSE = percent relative standard error.

Source: “2018 modeled wage estimates,” National Compensation Survey (U.S. Bureau of Labor Statistics, August 2019), https://www.bls.gov/mwe/mwe-2018complete.xlsx.

 Table 2. Mean hourly wages and percent relative standard errors for United States, by occupational group and work levels, May 2018
Occupational group | Value (Mean or %RSE) | Work levels 1–13

Management

Mean16.1523.5728.1135.7639.5253.9772.0181.42
%RSE4.105.803.102.003.701.802.804.20

Business and financial operations

Mean20.7821.3023.7828.6834.7544.3252.8166.3590.13
%RSE6.202.902.501.601.202.401.802.7011.30

Computer and mathematical

Mean20.6622.0827.4834.0240.1647.4953.3269.1377.59
%RSE4.501.701.802.401.302.500.901.501.90

Architecture and engineering

Mean18.8422.6823.9029.6634.2237.9945.5151.3066.4981.03
%RSE2.203.301.701.901.601.503.001.602.502.50

Life, physical, and social science

Mean14.3117.2919.5823.7324.9933.9536.3737.1148.6965.17
%RSE2.904.002.403.202.103.501.903.402.403.90

Community and social service

Mean14.9617.5221.3125.4230.7532.7236.68
%RSE2.401.901.502.601.204.804.00

Legal

Mean21.6825.4135.6135.4945.5546.5959.4498.21
%RSE5.102.607.103.2010.104.003.0010.80

Education, training, and library

Mean9.8912.9014.3615.4415.7521.4525.8231.4738.8346.4966.4795.04
%RSE8.303.502.203.002.003.204.100.603.202.004.405.00

Arts, design, entertainment, sports, and media

Mean13.0915.7818.5223.9430.3834.9343.2150.81
%RSE2.002.802.302.704.001.402.503.20

Healthcare practitioners and technical

Mean12.7714.5915.5621.2923.5230.0032.2237.5345.2658.2598.40
%RSE9.703.501.803.602.001.201.900.902.402.506.70

Healthcare support

Mean12.3013.0615.5319.3624.5829.1137.50
%RSE3.301.100.802.202.101.602.60

Protective service

Mean11.9312.6413.5715.6220.7627.2232.1837.0141.5446.77
%RSE3.304.202.201.904.003.301.403.301.805.90

Food preparation and serving related

Mean10.1410.6112.1614.1116.6521.5525.1931.3633.67
%RSE6.305.703.001.803.104.003.004.407.50

Building and grounds cleaning and maintenance

Mean10.8512.3914.8017.0121.6323.1224.09
%RSE4.702.501.402.303.403.703.80

Personal care and service

Mean10.4510.4211.8214.1217.1122.5126.4629.0042.03
%RSE7.906.002.702.003.003.903.7010.502.80

Sales and related

Mean10.1110.8512.1717.6722.4027.8433.7139.0056.4561.6675.44
%RSE5.603.502.402.702.103.602.304.302.605.908.40

Office and administrative support

Mean11.4911.9213.7517.1320.2024.8630.5334.61
%RSE3.202.501.300.700.600.801.201.70

Construction and extraction

Mean13.7415.7116.9819.2824.6329.8532.8036.7350.03
%RSE7.502.201.501.501.101.501.804.003.90

Installation, maintenance, and repair

Mean11.5114.1914.7317.1120.7626.3030.9937.3740.54
%RSE3.503.402.201.901.501.101.103.203.50

Production

Mean11.0412.4015.6018.3820.0525.4431.1936.3039.02
%RSE3.501.501.100.901.001.002.302.303.70

Transportation and material moving

Mean11.5913.8216.6721.7624.4528.0033.7539.7462.39
%RSE3.201.701.301.401.502.703.504.908.30

Notes: Dash indicates data failed to meet publication criteria. %RSE = percent relative standard error.

Source: “2018 modeled wage estimates,” National Compensation Survey (U.S. Bureau of Labor Statistics, August 2019), https://www.bls.gov/mwe/mwe-2018complete.xlsx.

 Table 3. Mean hourly wages and percent relative standard errors for cashiers, by state and worker characteristics, May 2018
State | Union (Mean, %RSE) | Nonunion (Mean, %RSE) | Time-based pay (Mean, %RSE) | Full time (Mean, %RSE) | Part time (Mean, %RSE)

United States

13.192.010.903.311.153.112.162.110.773.6

Alabama

9.917.09.887.210.635.59.398.8

Alaska

16.814.012.501.013.431.413.341.6

Arizona

14.126.811.821.012.000.511.731.0

Arkansas

10.181.610.181.69.901.8

California

12.160.713.180.413.051.0

Colorado

13.894.112.300.912.420.913.561.511.901.0

Connecticut

12.383.512.081.512.150.813.315.811.851.3

Delaware

10.635.410.692.110.671.912.202.310.422.5

District of Columbia

15.732.013.801.714.271.714.141.9

Florida

11.004.410.503.110.503.212.121.89.944.0

Georgia

10.096.49.987.211.123.59.459.4

Hawaii

11.740.912.480.712.381.3

Idaho

11.767.810.614.610.694.611.844.510.265.4

Illinois

13.634.210.862.211.221.912.433.310.792.2

Indiana

11.233.610.007.110.156.511.454.59.767.4

Iowa

9.2314.310.664.210.494.811.444.69.946.0

Kansas

9.0413.510.586.010.376.511.174.59.958.1

Kentucky

9.828.99.799.110.666.89.2311.1

Louisiana

9.519.69.509.69.2410.4

Maine

11.110.911.140.412.417.110.910.8

Maryland

13.535.511.131.511.490.913.3415.011.291.0

Massachusetts

12.883.212.610.712.620.314.123.512.250.8

Michigan

12.851.910.830.811.160.512.021.510.810.9

Minnesota

11.970.411.890.513.111.811.280.3

Mississippi

9.4410.19.4010.210.097.99.0111.9

Missouri

10.773.810.604.211.674.49.985.3

Montana

11.485.110.982.411.012.412.032.910.622.8

Nebraska

10.345.511.160.911.090.810.671.0

Nevada

12.848.411.054.211.194.112.484.210.704.9

New Hampshire

11.654.110.854.110.953.612.463.910.594.1

New Jersey

11.392.711.162.311.231.913.575.810.832.1

New Mexico

10.846.610.594.510.604.511.634.510.235.2

New York

12.982.211.971.212.240.613.453.611.950.9

North Carolina

10.408.99.858.29.878.211.515.89.339.9

North Dakota

10.9613.812.181.712.091.913.202.010.933.1

Ohio

11.673.110.463.910.633.512.392.710.064.3

Oklahoma

10.007.610.037.610.846.09.618.8

Oregon

16.352.011.910.612.500.712.451.2

Pennsylvania

10.436.810.126.910.126.712.106.39.787.8

Rhode Island

12.084.712.071.412.071.113.825.411.611.2

South Carolina

9.868.89.687.89.697.810.916.09.349.0

South Dakota

9.896.510.901.210.791.211.542.610.371.3

Tennessee

10.276.710.236.811.184.99.548.8

Texas

10.534.710.564.710.234.8

Utah

12.429.510.954.211.044.112.284.010.555.1

Vermont

12.071.212.171.013.337.111.761.9

Virginia

12.823.310.366.110.585.611.904.310.296.2

Washington

13.100.514.090.715.081.913.592.3

West Virginia

10.552.810.251.710.261.711.402.69.951.9

Wisconsin

11.343.510.335.810.475.211.932.89.996.4

Wyoming

11.746.411.044.311.094.312.194.410.665.0

Notes: For definitions of worker characteristics terms, see “Frequently asked questions” at https://www.bls.gov/mwe/faq.htm. Dash indicates data failed to meet publication criteria. %RSE = percent relative standard error.

Source: “2018 modeled wage estimates,” National Compensation Survey (U.S. Bureau of Labor Statistics, August 2019), https://www.bls.gov/mwe/mwe-2018complete.xlsx.

 Table 4. Mean hourly wages and percent relative standard errors for cashiers, by state and work levels, May 2018
State | Level 1 (Mean, %RSE) | Level 2 (Mean, %RSE) | Level 3 (Mean, %RSE)

United States

10.225.010.763.711.513.4

Alabama

9.917.69.2410.29.329.2

Alaska

13.013.614.115.4

Arizona

11.531.912.292.1

Arkansas

9.672.39.862.010.361.8

California

12.741.313.423.9

Colorado

10.801.712.026.013.192.4

Connecticut

11.481.411.772.512.983.6

Delaware

9.254.810.712.710.467.4

District of Columbia

12.840.314.122.315.758.3

Florida

9.237.110.443.510.254.5

Georgia

8.6014.29.749.29.598.1

Hawaii

12.063.813.215.5

Idaho

9.2311.110.597.011.356.4

Illinois

10.443.112.194.5

Indiana

9.986.49.698.310.724.9

Iowa

9.857.010.026.511.4511.0

Kansas

9.558.69.837.811.7911.5

Kentucky

9.749.29.1012.59.2311.4

Louisiana

9.0411.39.1511.39.708.5

Maine

10.841.711.674.7

Maryland

10.582.911.252.212.7812.6

Massachusetts

12.231.212.100.812.752.8

Michigan

10.420.410.791.211.6510.0

Minnesota

11.081.810.981.1

Mississippi

9.549.88.9213.09.0011.8

Missouri

9.836.210.025.811.7210.6

Montana

9.757.210.915.111.634.1

Nebraska

10.511.310.722.111.925.2

Nevada

9.3411.211.018.611.956.5

New Hampshire

10.543.910.215.411.852.5

New Jersey

10.624.710.712.211.693.4

New Mexico

9.3110.610.586.711.266.3

New York

11.791.711.680.812.723.6

North Carolina

8.6114.69.768.49.6110.2

North Dakota

10.524.011.115.513.395.4

Ohio

9.964.410.154.510.563.6

Oklahoma

9.2510.09.499.710.216.7

Oregon

12.433.512.844.8

Pennsylvania

9.0211.49.887.910.746.5

Rhode Island

11.682.111.431.012.264.8

South Carolina

8.6513.39.667.99.529.5

South Dakota

10.332.110.472.411.507.3

Tennessee

10.077.99.4010.39.549.7

Texas

10.107.110.156.8

Utah

9.2411.510.938.011.736.4

Vermont

11.390.211.944.812.822.1

Virginia

9.1910.710.615.810.338.3

Washington

13.961.214.752.2

West Virginia

9.502.710.182.110.163.5

Wisconsin

10.045.510.036.810.714.8

Wyoming

9.3211.811.018.311.927.6

Notes: Dash indicates data failed to meet publication criteria. %RSE = percent relative standard error.

Source: “2018 modeled wage estimates,” National Compensation Survey (U.S. Bureau of Labor Statistics, August 2019), https://www.bls.gov/mwe/mwe-2018complete.xlsx.

Table 5. Mean hourly wages and percent relative standard errors, registered nurses, by state and worker characteristics, May 2018
State | Union (Mean, %RSE) | Nonunion (Mean, %RSE) | Time-based pay (Mean, %RSE) | Full time (Mean, %RSE) | Part time (Mean, %RSE)

United States

46.881.333.870.6036.110.135.410.738.241.7

Alabama

28.250.3028.390.328.561.327.656.0

Alaska

44.332.341.272.4042.520.441.352.645.466.4

Arizona

36.120.8036.760.436.171.338.963.0

Arkansas

28.590.3028.600.328.450.629.162.0

California

57.892.043.031.2051.260.348.721.056.321.8

Colorado

41.486.934.821.4035.460.335.591.334.826.9

Connecticut

39.151.738.671.4038.870.439.751.437.052.5

Delaware

40.073.234.803.0035.750.336.051.834.486.7

District of Columbia

39.951.7044.190.841.951.250.281.5

Florida

31.280.5031.560.531.630.631.252.4

Georgia

32.730.3032.950.333.501.631.325.2

Hawaii

49.182.745.833.8047.101.044.764.450.604.2

Idaho

34.517.431.811.0032.030.632.310.930.603.7

Illinois

42.873.434.850.7035.360.435.501.134.855.0

Indiana

38.106.330.251.7031.000.230.591.132.242.5

Iowa

28.823.928.120.5028.190.328.101.128.463.2

Kansas

29.724.628.960.6029.040.228.911.129.413.1

Kentucky

30.030.2030.130.230.121.630.237.3

Louisiana

30.371.1030.491.130.620.930.084.0

Maine

33.644.631.076.8032.310.533.841.630.202.4

Maryland

41.446.235.432.0036.680.436.341.037.862.7

Massachusetts

51.262.241.541.3044.190.343.861.544.661.9

Michigan

36.843.533.701.2034.180.234.470.933.362.8

Minnesota

41.413.637.270.7037.770.234.431.940.471.0

Mississippi

27.720.4027.790.428.171.2

Missouri

33.155.930.840.9031.030.331.200.930.782.2

Montana

33.458.032.000.8032.120.432.080.832.344.0

Nebraska

31.374.130.800.7030.860.430.700.731.301.8

Nevada

50.583.039.122.2040.940.441.011.140.616.2

New Hampshire

37.313.034.201.3034.880.435.002.734.694.1

New Jersey

41.701.738.760.7039.710.339.971.039.032.6

New Mexico

36.947.333.951.0034.190.634.361.133.255.5

New York

44.103.339.741.8041.110.440.630.842.392.0

North Carolina

30.820.2030.830.230.750.631.262.8

North Dakota

31.543.531.350.6031.380.331.050.632.261.8

Ohio

37.396.531.121.9031.880.231.441.132.942.1

Oklahoma

30.000.3030.030.330.010.630.122.1

Oregon

44.422.143.012.3043.630.342.362.246.725.2

Pennsylvania

38.091.432.612.4033.800.333.780.633.412.6

Rhode Island

41.395.136.980.9037.520.736.314.039.154.1

South Carolina

30.620.3030.960.331.000.630.942.5

South Dakota

28.364.727.520.5027.600.327.521.227.823.3

Tennessee

29.060.3029.120.329.221.428.746.5

Texas

34.690.2034.790.235.080.432.772.9

Utah

33.416.631.170.7031.350.331.230.832.095.1

Vermont

33.534.932.485.3033.020.634.352.130.902.1

Virginia

32.450.9033.180.332.910.534.361.8

Washington

44.611.933.761.7039.500.539.021.740.654.1

West Virginia

29.200.3029.250.329.440.628.423.0

Wisconsin

38.985.833.292.3034.120.233.471.135.581.8

Wyoming

34.127.931.491.1031.630.931.571.232.364.5

Notes: For definitions of worker characteristics terms, see “Frequently asked questions” at https://www.bls.gov/mwe/faq.htm. Dash indicates data failed to meet publication criteria. %RSE = percent relative standard error.

Source: “2018 modeled wage estimates,” National Compensation Survey (U.S. Bureau of Labor Statistics, August 2019), https://www.bls.gov/mwe/mwe-2018complete.xlsx.

 Table 6. Mean hourly wages and percent relative standard errors for registered nurses, by state and work levels, May 2018
State | Value (Mean or %RSE) | Work levels 7–11

United States

Mean29.2333.1136.1940.8249.42
%RSE3.302.600.803.403.60

Alabama

Mean27.1027.97
%RSE4.701.10

Alaska

Mean39.5143.97
%RSE5.801.80

Arizona

Mean37.3235.57
%RSE2.304.70

Arkansas

Mean27.2629.2933.88
%RSE6.904.3011.60

California

Mean54.0643.2663.53
%RSE2.106.704.00

Colorado

Mean32.2535.0846.9146.04
%RSE5.302.304.004.20

Connecticut

Mean39.0439.9351.56
%RSE0.805.005.70

Delaware

Mean28.9838.06
%RSE2.101.80

District of Columbia

Mean36.1943.5548.1455.08
%RSE3.201.701.604.40

Florida

Mean31.6431.08
%RSE5.301.90

Georgia

Mean31.4032.9742.69
%RSE6.901.202.90

Hawaii

Mean43.7347.51
%RSE10.303.20

Idaho

Mean30.4832.6643.0943.80
%RSE4.401.703.706.70

Illinois

Mean36.9235.48
%RSE3.801.80

Indiana

Mean33.3031.01
%RSE3.601.70

Iowa

Mean28.4928.4431.52
%RSE5.104.006.30

Kansas

Mean29.2229.4232.58
%RSE5.604.605.20

Kentucky

Mean29.6829.61
%RSE5.601.50

Louisiana

Mean28.2228.1530.6237.02
%RSE6.604.304.2011.40

Maine

Mean33.98
%RSE2.50

Maryland

Mean34.9735.5644.2846.82
%RSE2.501.402.402.40

Massachusetts

Mean45.14
%RSE0.80

Michigan

Mean34.33
%RSE0.70

Minnesota

Mean31.6639.56
%RSE2.902.00

Mississippi

Mean27.0327.93
%RSE4.701.10

Missouri

Mean29.4231.0637.28
%RSE5.405.304.80

Montana

Mean30.1132.3943.4343.04
%RSE4.201.704.308.40

Nebraska

Mean30.4228.9431.1634.54
%RSE4.3015.404.003.70

Nevada

Mean37.0139.6650.8946.45
%RSE4.802.104.504.60

New Hampshire

Mean35.79
%RSE0.90

New Jersey

Mean40.8643.7654.29
%RSE1.707.903.80

New Mexico

Mean31.7134.2644.5343.50
%RSE4.902.203.804.90

New York

Mean41.5941.3652.93
%RSE1.7010.902.60

North Carolina

Mean32.0430.3941.80
%RSE5.001.301.70

North Dakota

Mean31.0029.5531.7934.44
%RSE4.3013.403.503.50

Ohio

Mean31.9245.31
%RSE0.808.30

Oklahoma

Mean28.0330.34
%RSE6.403.70

Oregon

Mean40.4944.23
%RSE6.401.80

Pennsylvania

Mean29.3034.6737.08
%RSE3.001.109.70

Rhode Island

Mean32.9438.17
%RSE8.901.10

South Carolina

Mean32.0730.2044.19
%RSE4.701.202.10

South Dakota

Mean28.1127.8130.86
%RSE5.704.106.40

Tennessee

Mean27.9128.95
%RSE4.501.10

Texas

Mean28.8130.0933.23
%RSE5.604.701.30

Utah

Mean29.9231.8743.0643.25
%RSE3.601.804.506.40

Vermont

Mean33.97
%RSE2.60

Virginia

Mean32.2632.1544.64
%RSE5.200.801.40

Washington

Mean39.14
%RSE4.20

West Virginia

Mean31.2629.0737.7141.06
%RSE5.901.209.301.90

Wisconsin

Mean33.98
%RSE0.50

Wyoming

Mean29.9831.9141.9844.03
%RSE3.901.704.007.80

Notes: Dash indicates data failed to meet publication criteria. %RSE = percent relative standard error.

Source: “2018 modeled wage estimates,” National Compensation Survey (U.S. Bureau of Labor Statistics, August 2019), https://www.bls.gov/mwe/mwe-2018complete.xlsx.

Appendix A: Breakouts of 54 worker characteristics

 Table A-1. Worker-characteristic breakouts, by labels 1 to 54
Label | Characteristic name

1

Union

2

Nonunion

3

Time

4

Incentive

5

Full time

6

Part time

7–21

Full time, levels 1–15

22

Full time, not able to be leveled

23–37

Part time, levels 1–15

38

Part time, not able to be leveled

39–53

Levels 1–15

54

Not able to be leveled

Source: U.S. Bureau of Labor Statistics

Suggested citation:

Christopher J. Guciardo, "Estimating variances for modeled wage estimates," Monthly Labor Review, U.S. Bureau of Labor Statistics, March 2020, https://doi.org/10.21916/mlr.2020.3

Notes


1 Work levels are a ranking of the duties and responsibilities of employees within an occupation and enable comparisons of wages across occupations. Work levels are determined by the number of points given for specific aspects, or factors, of the work. For a complete description of point-factor leveling, see “National Compensation Survey: guide for evaluating your firm’s jobs and pay” (U.S. Bureau of Labor Statistics, May 2013), https://www.bls.gov/ncs/ocs/sp/ncbr0004.pdf.

2 Michael K. Lettau and Dee A. Zamora, “Wage estimates by job characteristic: NCS and OES program data,” Monthly Labor Review, August 2013, https://www.bls.gov/opub/mlr/2013/article/lettau-zamora.htm.

3 For a detailed description of the Taylor series variance estimation method, see Ralph S. Woodruff, “A simple method for approximating the variance of a complicated estimate,” Journal of the American Statistical Association, vol. 66, no. 334, June 1971, pp. 411–414, https://www.jstor.org/stable/2283947?seq=1#metadata_info_tab_contents.

4 For a detailed description of Fay’s BRR variance estimation method, see David R. Judkins, “Fay’s method for variance estimation,” Journal of Official Statistics, vol. 6, no. 3, September 1990, pp. 223–239, https://www.scb.se/contentassets/ca21efb41fee47d293bbee5bf7be7fb3/fay39s-method-for-variance-estimation.pdf.

5 For a detailed description of the Occupational Employment Statistics procedures, see “Survey methods and reliability statement for the May 2018 Occupational Employment Statistics survey” (U.S. Bureau of Labor Statistics, March 2019), https://www.bls.gov/oes/2018/may/methods_statement.pdf.

6 For a detailed description of the National Compensation Survey procedures, see “National Compensation Survey measures: overview,” Handbook of Methods (U.S. Bureau of Labor Statistics, December 2017), https://www.bls.gov/opub/hom/ncs/home.htm.

7 Woodruff, “A simple method for approximating the variance of a complicated estimate.”

8 Judkins, “Fay’s method for variance estimation.”


About the Author

Christopher J. Guciardo
guciardo.christopher@bls.gov

Christopher J. Guciardo is a mathematical statistician in the Office of Compensation and Working Conditions, U.S. Bureau of Labor Statistics.
