United States Department of Labor


## Occupational Requirements Survey: Calculation

The ORS estimates provide information about the physical demands; environmental conditions; education, training, and experience; and mental requirements of jobs in the U.S. economy. Job requirement categories in ORS have estimates conveyed as the percentage of workers, mean (in hours, days, pounds, or percentage of a day), percentiles, and mode estimates for each occupational definition. Most physical demands and environmental conditions have duration associated with their requirements, which are grouped within a duration level, and a percentage-of-workers estimate is calculated for each of these levels (see the “Duration Level” table below).

Some categories have estimates that use the mean to convey duration, such as sitting or standing/walking and various education, training, and experience components. For example, sitting is captured in hours, so mean and percentile estimates (10th, 25th, 50th, 75th, and 90th percentiles) are calculated for both the hours and the percent of the day spent sitting for a specific occupation or occupational group.

| Duration level | Share of the workday |
| --- | --- |
| Not present | Requirement is not present and there is no duration |
| Seldom | Up to 2 percent of the workday |
| Occasionally | More than 2 percent and up to 1/3 of the workday |
| Frequently | More than 1/3 and up to 2/3 of the workday |
| Constantly | 2/3 or more of the workday |
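As a rough illustration, the duration levels can be expressed as a classifier over the percentage of the workday; boundary handling at exactly 2 percent, 1/3, and 2/3 follows the table, and the function name is invented for this sketch.

```python
# Sketch: classify a requirement's share of the workday into ORS duration
# levels. Boundary conventions follow the duration-level table.
def duration_level(pct_of_day):
    if pct_of_day == 0:
        return "Not present"
    if pct_of_day <= 2:
        return "Seldom"
    if pct_of_day < 100 / 3:          # more than 2% and up to 1/3
        return "Occasionally"
    if pct_of_day < 200 / 3:          # 1/3 up to 2/3
        return "Frequently"
    return "Constantly"               # 2/3 or more

print(duration_level(25))  # -> Occasionally
```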

For physical demands and environmental conditions, the mode of the category is identified, that is, which duration level has the largest weighted number of workers. For other categories that do not have duration levels associated with them, the mode is also determined. For example, the minimum education that is the most common for security guards is a high school diploma.

Field economists collect ORS job requirements for over 70 categories; however, as explained above, many estimates can be calculated from one collected job requirement. This results in many more calculated ORS estimates per occupation (or occupational group). For a full list of calculated elements, please see Appendix A at the end of this section.

The formulas used to calculate these estimates are shown below. The type of estimator used depends on the job requirement category. For categorical job requirement estimates, a percentage of workers is calculated, and a mode identified for these percentages. For continuous job requirement estimates (such as duration in hours or days and maximum weight lifted/carried elements), mean and percentile estimates are calculated. Appendix A provides the breakdown of estimate type by job requirement category.

Percentage. The formula for the percentage of workers with a given job requirement out of all workers in the domain (such as an occupation) is

$$\frac{\sum_{i=1}^{I}\sum_{g=1}^{G_i} \mathit{OccFW}_{ig} \times X_{ig} \times Z_{ig}}{\sum_{i=1}^{I}\sum_{g=1}^{G_i} \mathit{OccFW}_{ig} \times X_{ig}} \times 100,$$


where:

$i$ is the establishment,

$g$ is the occupation within establishment $i$,

$I$ is the total number of establishments,

$G_i$ is the total number of quotes in establishment $i$,

$X_{ig}$ is 1 if worker $ig$ meets the domain (denominator) condition and 0 otherwise,

$Z_{ig}$ is 1 if worker $ig$ meets the characteristic condition and 0 otherwise, and

$\mathit{OccFW}_{ig}$ is the final quote weight for occupation $g$ in establishment $i$.
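The percentage estimator can be sketched over toy quote records; the data and function name below are invented for illustration and are not ORS microdata.

```python
# Weighted-percentage sketch of the formula above (illustrative data only).
# Each quote is (final weight OccFW, in-domain flag X, has-characteristic flag Z).
quotes = [
    (10.0, 1, 1),
    (20.0, 1, 0),
    (30.0, 1, 1),
    (40.0, 0, 1),  # outside the domain, so excluded by X
]

def pct_with_requirement(quotes):
    """Percent of in-domain weighted workers meeting the characteristic."""
    num = sum(w * x * z for w, x, z in quotes)
    den = sum(w * x for w, x, z in quotes)
    return num / den * 100

print(pct_with_requirement(quotes))  # (10 + 30) / (10 + 20 + 30) * 100
```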

Average (mean). The formula for the average value of a quantity for a requirement is

$$\frac{\sum_{i=1}^{I}\sum_{g=1}^{G_i} \mathit{OccFW}_{ig} \times X_{ig} \times Z_{ig} \times Q_{ig}}{\sum_{i=1}^{I}\sum_{g=1}^{G_i} \mathit{OccFW}_{ig} \times X_{ig} \times Z_{ig}},$$

where:

$i$ is the establishment,

$g$ is the occupation within establishment $i$,

$I$ is the total number of establishments,

$G_i$ is the total number of quotes in establishment $i$,

$X_{ig}$ is 1 if worker $ig$ meets the domain (denominator) condition and 0 otherwise,

$Z_{ig}$ is 1 if worker $ig$ meets the characteristic condition and 0 otherwise,

$\mathit{OccFW}_{ig}$ is the final quote weight for occupation $g$ in establishment $i$, and

$Q_{ig}$ is the value of the quantity for a specific characteristic for occupation $g$ in establishment $i$.
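Assuming the mean is the weighted average of $Q$ among workers meeting both the domain and characteristic conditions, a toy sketch (invented data) looks like this:

```python
# Weighted-mean sketch (toy data, not ORS microdata).
# Each quote: (OccFW weight, X domain flag, Z characteristic flag, Q quantity).
quotes = [
    (10.0, 1, 1, 4.0),   # e.g. 4 hours of sitting
    (20.0, 1, 1, 6.0),
    (30.0, 1, 0, 8.0),   # lacks the characteristic, so excluded by Z
]

def weighted_mean(quotes):
    """Weighted mean of Q over quotes meeting both conditions."""
    num = sum(w * x * z * q for w, x, z, q in quotes)
    den = sum(w * x * z for w, x, z, q in quotes)
    return num / den

print(weighted_mean(quotes))  # (10*4 + 20*6) / (10 + 20)
```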

Percentiles. Percentiles describe the distribution of a numeric value. The following percentiles $p$ are calculated: 10th, 25th, 50th (median), 75th, and 90th. The $p$th percentile is the value $Q_{ig}$, where the value of a quantity is for a specific category, such that

· the sum of final quote weights ($\mathit{OccFW}_{ig}$) across quotes with a value less than $Q_{ig}$ is less than $p$ percent of all final quote weights, and

· the sum of final quote weights ($\mathit{OccFW}_{ig}$) across quotes with a value more than $Q_{ig}$ is less than $(100 - p)$ percent of all final quote weights.

It is possible that no specific quote $ig$ satisfies both of these properties. This occurs when there is a quote for which the summed $\mathit{OccFW}_{ig}$ of records whose value is less than $Q_{ig}$ equals exactly $p$ percent of the total weighted quote employment. In that situation, the $p$th percentile is the average of $Q_{ig}$ and the value on the quote with the next-lowest value.
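The two properties and the tie rule can be sketched directly; this toy implementation assumes distinct values per quote, and the function name is invented.

```python
def weighted_percentile(values_weights, p):
    """p-th percentile under the weighted definition described above.

    values_weights: list of (value Q, weight OccFW) pairs with distinct values.
    """
    data = sorted(values_weights)
    total = sum(w for _, w in data)
    below = 0.0  # weight of quotes with values strictly below the current one
    for i, (q, w) in enumerate(data):
        if i > 0 and below == p / 100 * total:
            # No quote satisfies both properties: average with next-lowest value.
            return (q + data[i - 1][0]) / 2
        above = total - below - w
        if below < p / 100 * total and above < (100 - p) / 100 * total:
            return q
        below += w
    return data[-1][0]

# Four equal weights: the median falls between 2 and 3.
print(weighted_percentile([(1, 25), (2, 25), (3, 25), (4, 25)], 50))  # 2.5
```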

Mode. The mode is the highest percentage estimate within a job requirement category. Refer to Appendix A at the end of this section for a list of elements that have mode estimates.

### Education, training, and experience

Although most of the estimates for these requirements are based directly on establishment responses about the selected jobs’ tasks, some require an additional level of calculation. One of these is the Specific Vocational Preparation (SVP) level, which is determined by the amount of preparation time a worker requires to develop the skills needed to perform the job. The job requirement categories that make up this preparation are the minimum education level with respect to formal degree types, pre-employment training, previous work experience, and on-the-job training required by a job. These categories’ associated times are then aggregated and used to determine the SVP level needed for the job, as shown in the table below:

| Specific vocational preparation level | Preparation time |
| --- | --- |
| 1 | Short demonstration only (4 hours or less) |
| 2 | Anything beyond short demonstration up to and including 1 month |
| 3 | Over 1 month up to and including 3 months |
| 4 | Over 3 months up to and including 6 months |
| 5 | Over 6 months up to and including 1 year |
| 6 | Over 1 year up to and including 2 years |
| 7 | Over 2 years up to and including 4 years |
| 8 | Over 4 years up to and including 10 years |
| 9 | Over 10 years |
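The aggregation-to-level step can be sketched as a lookup over total preparation time; the month-based cutoffs convert the table's year values, and the function name and `short_demo_only` flag are assumptions for this sketch.

```python
# Sketch: map total preparation time (in months) to an SVP level using the
# "over X up to and including Y" boundaries from the table above.
SVP_CUTOFFS = [   # (upper bound in months, inclusive), SVP level
    (1, 2), (3, 3), (6, 4), (12, 5), (24, 6), (48, 7), (120, 8),
]

def svp_level(prep_months, short_demo_only=False):
    if short_demo_only:        # 4 hours or less of demonstration -> level 1
        return 1
    for upper, level in SVP_CUTOFFS:
        if prep_months <= upper:
            return level
    return 9                   # over 10 years

print(svp_level(5))  # over 3 months up to and including 6 months -> level 4
```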

### Strength

Another job requirement that is based on several categories’ estimates is strength. It is measured in five levels: sedentary, light work, medium work, heavy work, and very heavy work. The levels are determined by how much weight a worker is required to lift or carry, how often, and, in some special cases, by standing/walking. The strength level is determined by satisfying at least one of the lifting/carrying conditions shown in the table below, or as defined by the “strength special cases” table. The highest strength level satisfied represents the sampled job. For example, if a job requires a worker to lift or carry 11–20 pounds occasionally, it would be classified as light work. However, if that same job required lifting or carrying the same weight frequently, it would be classified as medium work.

| Lifting/carrying | Light work | Medium work | Heavy work | Very heavy work |
| --- | --- | --- | --- | --- |
| Seldom | 11–20 pounds | 21–50 pounds | 51–100 pounds | >100 pounds |
| Occasionally | 11–20 pounds | 21–50 pounds | 51–100 pounds | >100 pounds |
| Frequently | ≤10 pounds | 11–25 pounds | 26–50 pounds | >50 pounds |
| Constantly | Negligible weight | ≤10 pounds | 11–20 pounds | >20 pounds |

The following table outlines the special cases for strength. In instances where field economists are unable to determine certain job requirements from the respondent, they record these data as unknown. See the section “Weighting, Nonresponse Adjustment, Imputation, and Benchmarking” for more information.
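The lifting/carrying conditions can be sketched as a highest-level-satisfied lookup. The lower bounds encode the table; treating "negligible" as under 1 pound is an assumption, and the standing/walking special cases are not modeled here.

```python
# Sketch of the lifting/carrying strength table. Each duration maps to the
# minimum pounds qualifying for light, medium, heavy, and very heavy work.
# "Negligible" weight is assumed to mean under 1 pound (an assumption);
# the "strength special cases" are not modeled.
LOWER_BOUNDS = {
    "seldom":       (11, 21, 51, 101),
    "occasionally": (11, 21, 51, 101),
    "frequently":   (0.001, 11, 26, 51),
    "constantly":   (0.001, 1, 11, 21),
}
LEVELS = ("light", "medium", "heavy", "very heavy")

def strength_level(pounds, duration):
    """Highest strength level whose lifting/carrying condition is satisfied."""
    level = "sedentary"
    for name, lo in zip(LEVELS, LOWER_BOUNDS[duration]):
        if pounds >= lo:
            level = name
    return level

# The example from the text: 11-20 pounds occasionally is light work,
# but the same weight frequently is medium work.
print(strength_level(15, "occasionally"))  # light
print(strength_level(15, "frequently"))    # medium
```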

### Reliability of ORS estimates

To assist users in ascertaining the reliability of ORS estimates, standard errors are published along with each estimate. Standard errors provide users with a measure of the precision of an estimate to ensure that it is within an acceptable range for their intended purpose. The standard errors are calculated from collected and imputed data. BLS is researching methods for estimating the variance excluding imputed values. For additional information, see https://www.bls.gov/ors/se.htm.

ORS estimates are derived from sampled jobs within responding establishments. Two types of errors are possible in an estimate based on a sample survey: sampling and nonsampling errors. Sampling errors occur because the sample makes up only a part of the population it represents. The sample used for the survey is one of a number of possible samples that could have been selected under the sample design, each producing its own estimate. A measure of the variation among sample estimates is the standard error. Nonsampling errors are data errors that stem from any source other than sampling error, such as data collection errors and data-processing errors.

Standard errors can be used to measure the precision with which an estimate from a particular sample approximates the expected result of all possible samples. The chances are about 68 out of 100 that an estimate from the survey differs from a complete population figure by less than the standard error. The chances are about 90 out of 100 that this difference is less than 1.6 times the standard error. Statements of comparison appearing in ORS publications are significant at a level of 1.6 standard errors or better. This means that, for differences cited, the estimated difference is more than 1.6 times the standard error of the difference.

The ORS uses balanced repeated replication (BRR) to estimate the standard error. The procedure for BRR entails first partitioning the sample into variance strata composed of a single sampling stratum or clusters of sampling strata, and then splitting the sample units in each variance stratum evenly into two variance primary sampling units (PSUs). Next, half-samples are chosen so that each contains exactly one variance PSU from each variance stratum. Choices are not random but are designed to yield a “balanced” collection of half-samples. By using half-samples, we can compute a “replicate” estimate with the same formula as the regular, or “full-sample,” estimate, except that the final weights are adjusted. If a unit is in the half-sample, its weight is multiplied by (2 – k); if not, its weight is multiplied by k. For all ORS publications, k = 0.5, so the multipliers are 1.5 and 0.5.

The BRR estimate of standard error with R half samples is

$$SE(\hat{Y}) = \sqrt{\frac{1}{R(1-k)^2} \sum_{r=1}^{R} \left(\hat{Y}_r - \hat{Y}\right)^2},$$

where:

the summation is over all replicates of half-samples $r = 1, \ldots, R$,

$\hat{Y}_r$ is the $r$th replicate estimate, and

$\hat{Y}$ is the full-sample estimate.
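The replicate-weight adjustment and the SE formula can be sketched together. The weights, values, and half-sample assignments below are invented for illustration; a production BRR design would draw balanced half-samples (e.g., from a Hadamard matrix).

```python
import math

# BRR sketch: replicate weights use multipliers (2 - k) and k, then the SE
# formula above is applied. Data and half-sample picks are illustrative only.
k = 0.5
weights = [10.0, 20.0, 30.0, 40.0]   # full-sample final quote weights
values = [1.0, 0.0, 1.0, 0.0]        # e.g. 0/1 meets-requirement indicator
half_samples = [{0, 1}, {0, 2}, {0, 3}, {1, 2}]  # unit indices per replicate

def pct_estimate(wts):
    """Weighted percentage estimate with the given weights."""
    return 100 * sum(w * v for w, v in zip(wts, values)) / sum(wts)

full = pct_estimate(weights)
replicates = []
for hs in half_samples:
    rw = [w * (2 - k) if i in hs else w * k for i, w in enumerate(weights)]
    replicates.append(pct_estimate(rw))

R = len(half_samples)
se = math.sqrt(sum((yr - full) ** 2 for yr in replicates) / (R * (1 - k) ** 2))
print(full, se)
```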

Data collection and processing errors are mitigated primarily through quality assurance programs that include the use of data collection reinterviews, observed interviews, computer edits of the data and systematic professional review of the data. The programs also serve as a training device to provide feedback to field economists, or data collectors, on errors and the sources of errors that can be remedied by improved collection instructions or computer-processing edits. Field economists receive extensive training to maintain high standards in data collection.

Once estimates of occupational requirements are produced, the estimates are verified, or validated. The focus of the validation is to compare the estimates with expectations for them. The expectations are based on values of the ORS estimates from prior years as well as similar estimates from other sources of data, such as the Occupational Information Network (O*NET). In addition, ORS estimates between similar occupations are compared.

Estimates that deviate from their expectations are further investigated to ensure that their underlying data are consistent with ORS collection procedures, and their calculation is consistent with the ORS statistical procedures. Estimates that are consistent with these procedures are designated as “fit-for-use” for publication.

Before any estimate is published, it is also reviewed to make sure that it meets specified statistical reliability and confidentiality requirements. The review prevents the publication of an estimate that could reveal information about a specific establishment or that has a large sampling error.

For additional information on data review and estimate validation, see the Data Review and Validation portion of the research section on the ORS website.

### Weighting, nonresponse adjustment, imputation, and benchmarking

Participation in the survey is voluntary; therefore, a company official may refuse to participate in the survey. In addition, some establishments selected from the sampling frame may be out of the scope of the survey or may have gone out of business. To address the problems of nonresponse and missing data, the ORS program adjusts the weights of the remaining establishments and imputes missing values, to ensure that occupational requirement estimates are representative of requirements for civilian workers during the estimation process. This section describes the current weight adjustments, imputation, and benchmarking methods.

Weight adjustments and imputation are made in accordance with the following steps:

1. Unit nonresponse adjustment: An establishment is considered responding if it provided information for at least one usable quote (or sampled job). A quote is classified as usable if the following data are present: occupational attributes (full-time or part-time schedule, union or nonunion status, and time or incentive type of pay), work schedule, and occupational requirements data for any of the job requirement categories.

An establishment is considered nonresponding if it is unable to provide at least one usable quote. Establishment nonresponse is treated with adjustments that redistribute the weights of nonrespondents to similar respondents by characteristics such as the industry, size class, and geographic area of the establishment. For example, if the nonresponding establishment was in the manufacturing industry and had an employment of 350 workers, the ORS program would adjust the weights of responding manufacturing establishments with 250–499 workers by a nonresponse factor during estimation. This nonresponse adjustment factor (NRAF) at the establishment level is calculated using the following formula:


$$\mathit{NRAF} = \frac{\sum A + \sum B}{\sum A},$$

where:

$\sum A$ = weighted employment of all usable establishments in the nonresponse cell, and

$\sum B$ = weighted employment of all viable but not usable establishments in the nonresponse cell.

If there are no responding establishments to reweight within the industry/employment group, then additional responding units from similar geographic areas are considered.
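The NRAF computation is a simple ratio; the weighted-employment totals below are made up for illustration.

```python
# NRAF sketch per the formula above, with invented weighted employment.
usable = [1200.0, 800.0]        # sum A: usable establishments in the cell
viable_not_usable = [500.0]     # sum B: viable but not usable establishments

nraf = (sum(usable) + sum(viable_not_usable)) / sum(usable)
print(nraf)  # (2000 + 500) / 2000 = 1.25
```

Multiplying the weights of usable establishments in the cell by this factor redistributes the nonrespondents' weighted employment to similar respondents.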

Establishments no longer in operation or out of the scope of the survey, and establishments with no workers within the scope of the survey, are excluded from the survey estimates.

2. Other response and nonresponse adjustment factors may be included for any special situations that may have occurred during data collection. For example, an establishment weight adjustment factor is applied when a sample unit is one of two establishments owned by a given company and the respondent provides data for both locations combined instead of data for the sampled unit. In this example, the weight of the sampled unit is adjusted to reflect the employment data collected.

3. Item nonresponse is a situation in which an establishment responds to the survey but is unable or unwilling to provide some of the occupational requirements data or worker attributes for a given sampled occupation. Item nonresponse is addressed through item imputation in certain situations. Item imputation replaces missing values for an item or items with values derived from establishments with similar establishment and worker characteristics that have a value for the item. For ORS estimates, items with missing values are imputed within groups of ORS characteristics that are related. For example, one ORS group refers to categorical variables only and includes such characteristics as hearing, vision, and driving. Within the group, values are imputed using occupational information from similar occupations in similar establishments. Imputation of one group of ORS characteristics does not affect the imputation for any other group.

4. Poststratification, or benchmarking, is the process of adjusting the weight of each establishment in the survey to match the most current distribution of employment by industry. The ORS establishment sample is drawn from the Quarterly Census of Employment and Wages (QCEW). Because the sample of establishments used to collect ORS data is chosen ahead of time, establishment weights reflect employment at the time of sampling, not collection. The benchmark process updates those weights to reflect current employment. Benchmarking ensures that survey estimates reflect the most current industry and government (hereafter, ownership) employment counts in proportions consistent with the private industry, state government, and local government sectors. For example, suppose 40 private industry, 10 local government, and 5 state government units in the service sector were selected from the ORS sampling frame. These units consist of establishments employing 200,000 private workers, 30,000 local government workers, and 10,000 state government workers. If, by the time of survey processing, the private service sector experienced an employment increase of 10,000 workers (or 5 percent) and there is no increase in employment in the service sectors of state and local government, then the sample would underrepresent current employment in the private industry service sector in the absence of benchmarking. In this example, the ORS would adjust the sample weights of the 40 service-sector firms in private industry to ensure that the number of workers in establishments in the sampling frame rises to 210,000. The ownership employment counts for the service sector would then reflect the current proportions of 84 percent for private industry, 12 percent for local government, and 4 percent for state government employment.
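The service-sector example above can be reproduced with a small benchmarking sketch: private weights are scaled by the ratio of current to frame employment, and the ownership shares are recomputed from the adjusted totals.

```python
# Benchmarking sketch for the service-sector example: private employment
# rises from 200,000 to 210,000 while state and local are unchanged.
frame = {"private": 200_000, "local": 30_000, "state": 10_000}
current = {"private": 210_000, "local": 30_000, "state": 10_000}

# Per-ownership benchmark factor applied to sample weights.
factors = {own: current[own] / frame[own] for own in frame}
adjusted = {own: frame[own] * factors[own] for own in frame}
total = sum(adjusted.values())
shares = {own: round(100 * adjusted[own] / total) for own in adjusted}

print(factors["private"], shares)  # 1.05 {'private': 84, 'local': 12, 'state': 4}
```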

Employment information is derived from the Quarterly Census of Employment and Wages (QCEW) Longitudinal Database, a file of railroad employment, and the Current Employment Statistics (CES) survey. The QCEW and the railroad file provide employment data, but since these sources do not have current employment data, the CES is used to adjust the employment to current levels.