Four-Fifths Rule of the Uniform Guidelines on Employee Selection Procedures
(EEOC Four-Fifths Rule)
(April 30, 2012; rev. June 12, 2013)
Note added October 11, 2013 (updated November 19, 2014): Issues discussed on this page are discussed in an amicus curiae brief filed November 17, 2014 in Texas Department of Housing and Community Development, et al. v. The Inclusive Communities Project, Inc., Supreme Court No. 13-1731. They are also discussed in my paper “The Mismeasure of Discrimination,” which was presented in a faculty workshop at the University of Kansas School of Law on September 20, 2013. Recent articles discussing the way that relaxing a standard will tend to increase relative differences in failing to meet the standard while reducing relative differences in meeting the standard include “Race and Mortality Revisited,” Society (July/Aug. 2014) and “The Perverse Enforcement of Fair Lending Laws,” Mortgage Banking (May 2014). Recent articles making the point more succinctly include “The Paradox of Lowering Standards,” Baltimore Sun (Aug. 5, 2013) and “Things government doesn’t know about racial disparities,” The Hill (Jan. 28, 2014). The illustration of the point with test score data in the last two items (there showing that lowering a test cutoff will tend to increase relative differences in failure rates while reducing relative differences in pass rates) may be found in tabular form in Table 1 of the Society article and Table 1 of the Mortgage Banking article. Illustrations of the meaning of various EES figures akin to those in Table 1 below (though not tied to the four-fifths rule) may be found in Table 12 (slide 69) of an October 2014 methods workshop at the Maryland Population Research Center of the University of Maryland, “Rethinking the Measurement of Demographic Differences in Outcome Rates.”
The point of this page, however, concerns the problematic nature of the four-fifths rule as a measure of association. Many regard the four-fifths rule as a useful indicator of effect size (see, e.g., this explanation on the site adverseimpact.org) and, indeed, the rate ratio is commonly regarded as the most useful indicator of the size of an effect. But not only is a rate ratio not a useful indicator of effect size, it is illogical to regard the rate ratio as such. See the Illogical Premises II sub-page of the Scanlan’s Rule page (SR), which explains that it is illogical to regard a rate ratio as reflecting the same measure of association across different baseline rates, given that, if the rate ratios are the same as to one outcome, they must be different as to the opposite outcome. The point is easier to explain with regard to the mistaken perception that a factor will typically cause equal proportionate changes across a range of baseline rates, as discussed in the Subgroup Effects, Illogical Premises, and Inevitability of Interaction sub-pages of SR. See also the February 25, 2013 BMJ comment “Goodbye to the Rate Ratio.”
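The pattern described above can be sketched numerically. The following is a minimal illustration with hypothetical figures (not data from the articles cited on this page), assuming two groups whose underlying score distributions are normal with equal standard deviations and means half a standard deviation apart; it shows that as a cutoff is lowered, the pass rate ratio moves toward 1 while the failure rate ratio grows.

```python
from statistics import NormalDist

# Hypothetical distributions: advantaged group mean 0.5 SD above the
# disadvantaged group, equal unit standard deviations.
adv = NormalDist(mu=0.5, sigma=1.0)
dis = NormalDist(mu=0.0, sigma=1.0)

for cutoff in (1.0, 0.5, 0.0, -0.5):  # progressively lower cutoffs
    adv_pass = 1 - adv.cdf(cutoff)
    dis_pass = 1 - dis.cdf(cutoff)
    pass_ratio = dis_pass / adv_pass              # relative difference in meeting the standard
    fail_ratio = (1 - dis_pass) / (1 - adv_pass)  # relative difference in failing to meet it
    print(f"cutoff={cutoff:+.1f}  pass ratio={pass_ratio:.3f}  fail ratio={fail_ratio:.3f}")
```

Under these assumed distributions, lowering the cutoff from 1.0 to -0.5 raises the pass rate ratio (the relative difference in meeting the standard shrinks) while the failure rate ratio grows, which is the pattern the articles above describe.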
Table 1 below, which is an illustration akin to that in Table 1 of the 2009 Royal Statistical Society presentation, shows the various effect sizes (reflected by the EES, for estimated effect size; see the Solutions sub-page of Measuring Health Disparities) consistent with a situation where the success rate of the disadvantaged group is 80 percent of the success rate of the advantaged group (as reflected in the SRR column). The table thus shows that, at different overall selection rates (benchmarked by the selection rate of the advantaged group (AGSR)), an 80% disadvantaged-to-advantaged group selection rate ratio (SRR) means quite different things as to the strength of the forces causing the rates to differ.
The penultimate column (RRR, for rejection rate ratio) shows how the matter would be viewed in terms of ratios of experiencing the adverse outcome (though the adverse outcome ratio is no more useful a measure of association than the favorable outcome ratio).
In order to provide some perspective on the meaning of each EES figure, the final column (%DG>AGMean) shows the proportion of the disadvantaged group’s risk distribution for the outcome at issue that is above the mean of the distribution of the advantaged group. By way of explanation, the first row reflects the fact that with an EES of 0.1 standard deviations, 46.4% of the disadvantaged group is above the advantaged group mean, which is to say the distributions are fairly similar. The subsequent rows reflect increasingly dissimilar distributions.
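The quantities just described can be derived from a pair of rates. The sketch below is a simplified reconstruction, not the page's actual Table 1: it assumes the EES is computed as the difference between the probit (inverse normal) transforms of the two groups' selection rates, i.e., the gap in means of two unit-variance normal distributions consistent with those rates, and that %DG>AGMean follows from that gap. The AGSR values chosen are illustrative.

```python
from statistics import NormalDist

N = NormalDist()  # standard normal distribution

def table1_row(agsr, srr=0.8):
    """Given the advantaged group selection rate (agsr) and a fixed 80%
    selection rate ratio, derive quantities akin to those in Table 1."""
    dgsr = agsr * srr                      # disadvantaged group selection rate
    rrr = (1 - dgsr) / (1 - agsr)          # rejection (adverse outcome) rate ratio
    # EES: difference, in standard deviations, between the means of two
    # unit-variance normal distributions yielding these two selection rates
    ees = N.inv_cdf(agsr) - N.inv_cdf(dgsr)
    pct_above = N.cdf(-ees)                # share of DG distribution above AG mean
    return dgsr, rrr, ees, pct_above

for agsr in (0.90, 0.70, 0.50, 0.30, 0.10):
    dgsr, rrr, ees, pct = table1_row(agsr)
    print(f"AGSR={agsr:.2f}  DGSR={dgsr:.2f}  RRR={rrr:.2f}  "
          f"EES={ees:.2f}  %DG>AGMean={pct:.1%}")
```

Under these assumptions the same 80% SRR corresponds to an EES of roughly 0.70 standard deviations when the advantaged group's selection rate is 90%, but only about 0.12 standard deviations when that rate is 10%, with the rejection rate ratio shrinking from 2.8 toward 1 across the same range; that is, the fixed four-fifths ratio reflects very different strengths of association at different prevalence levels.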
For a fuller discussion of the implications of reliance on standard measures of differences between outcome rates in the employment context, see pages 24-28 of the Harvard University Measurement Letter.
Table 1. Illustration of Differences in Level of Association Reflected by Situations Where Success Rate of Disadvantaged Group is Four-Fifths of the Success Rate of the Advantaged Group at Different Levels of Prevalence