In that post, I explained that, contrary to the view reflected in the Sentencing Project report (a view that is otherwise almost universal), general reductions in incarceration rates tend to increase, not reduce, relative (percentage) racial differences in such rates.
The Mother Jones article quoted the following statement from an earlier Sentencing Project report as proof that black youths’ being arrested twice as often as white youths was not a result of black youths’ committing more crimes: “Black and white youth are roughly as likely to get into fights, carry weapons, steal property, use and sell illicit substances, and commit status offenses, like skipping school.”
The implausibility of the quoted statement, and the many like statements in the context of discussions of racial differences in school discipline rates, ought to be obvious. Black youths are substantially more likely than white youths to be in groups defined by income, family circumstances, and academic achievement where criminal conduct and noncriminal misconduct rates are much higher than in other groups. For there to be little or no difference in behavior of black and white youths would mean that within many or all of the groups so defined the conduct of black youths is substantially superior to that of white youths.
Exhibiting a bit more circumspection than one usually finds in discussions of these issues, the Mother Jones article went on to discuss data on behavioral differences in the following terms (original links and emphasis):
Still, slight behavioral differences do exist—white high school students are more likely to carry weapons or drink underage, while black students are more likely to smoke weed or get into fights, according to the Centers for Disease Control and Prevention. But they don’t explain the vast differences in arrest rates: In the CDC study, a third of black students had gotten into a fight, compared with a fifth of white students. Yet [ ] Justice Department data shows that black youths are about 300 percent more likely than white youths to be arrested for simple assault.
This is the kind of statement one might find in many appraisals of the extent to which seemingly modest differences in conduct might explain seemingly enormous differences in rates of punishment for conduct. And, as is usually the case with such statements, the reasoning is deeply flawed. But, in this instance, the statement cites data from which there is a good deal to be learned about the extent to which differences in behavior explain differences in outcomes.
The failure rates in the table are 37% for the disadvantaged group (DG) and 20% for the advantaged group (AG). Thus the DG rate is 1.85 times (85% greater than) the AG rate.
The second row of that table then shows that with a lower cutoff, failure rates are 13% for DG and 5% for AG. Hence, with the lower cutoff, the DG failure rate is 2.60 times (160% greater than) the AG failure rate.
But that table also shows that the percentage difference in pass rates is larger in the first row than the second row. Thus, the table shows how lowering a cutoff tends to increase relative differences in rates of test failure (the decreasing outcome) at the same time that it reduces relative differences in rates of test passage (the increasing outcome).
The example in the table is one I have used in many places to demonstrate that general reductions in an outcome – whether involving criminal justice, school discipline, lending or pretty much anything else – will tend to increase relative differences in rates of experiencing the outcome while reducing relative differences in rates of avoiding the outcome (i.e., experiencing the opposite outcome). An abbreviated and slightly modified version of the table is set out as Table 1 below.
Table 1. Advantaged group (AG) and disadvantaged group (DG) rates of passing and failing a test at higher and lower cutoffs, with measures of difference
Cutoff      AG Pass Rate   DG Pass Rate   AG Fail Rate   DG Fail Rate   DG/AG Fail Ratio
1 - High        80%            63%            20%            37%             1.85
2 - Low         95%            87%             5%            13%             2.60
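The mechanism behind Table 1 can be sketched in a few lines of code. Here I assume two normal risk distributions whose means differ by half a standard deviation (the situation described in the post) and set the cutoffs so that AG fails at 20% and 5%; the function name `rates_at_cutoff` is mine.

```python
from statistics import NormalDist

norm = NormalDist()

def rates_at_cutoff(ag_fail_rate, gap_sd=0.5):
    """Given the advantaged group's failure rate, return (AG fail, DG fail)
    when the disadvantaged group's mean lies gap_sd standard deviations
    below the advantaged group's."""
    cutoff = norm.inv_cdf(ag_fail_rate)   # cutoff expressed in AG units
    dg_fail = norm.cdf(cutoff + gap_sd)   # DG distribution shifted down
    return ag_fail_rate, dg_fail

for label, ag_fail in [("1 - High", 0.20), ("2 - Low", 0.05)]:
    ag, dg = rates_at_cutoff(ag_fail)
    print(f"{label}: DG fail {dg:.0%}, "
          f"DG/AG fail ratio {dg / ag:.2f}, "
          f"AG/DG pass ratio {(1 - ag) / (1 - dg):.2f}")
```

Lowering the cutoff raises the DG/AG fail ratio while shrinking the AG/DG pass ratio; the derived DG failure rates round to the 37% and 13% in Table 1, and the ratios land near the table’s 1.85 and 2.60 (which are computed from the rounded rates).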
Later in the February 8 post, I explained that in a situation where the pass and fail rates shown in the table are rates for any favorable and adverse outcomes that result from decisions of police officers or other decision-makers, there is no rational basis for arguing – on the basis of the comparative size of either the relative difference in the favorable outcome (larger in the first row) or the relative difference in the adverse outcome (larger in the second row) – that the situation reflected in one row is more likely to result from decision-maker bias than that reflected in the other row. That is because both rows reflect a situation where the underlying distributions differ by half (.50) a standard deviation (a situation where approximately 31% of DG is above the mean for AG).
From any pair of outcome rates (for either the favorable outcome or the corresponding adverse outcome) one can derive a like figure for the difference between the means of the two groups’ hypothesized underlying risk distributions. (A statistician would call it the difference between two probits.) I commonly term the figure “EES,” for “estimated effect size.” I explain it somewhat more fully and present a number of examples of its use in “Race and Mortality Revisited,” Society (July/Aug. 2014).
It is, to be sure, an imperfect measure, for reasons I have discussed in various places. But even if we deem it as providing no more than a rule of thumb, it involves an essentially logical and coherent method of estimating the strength of the forces causing outcome rates to differ that is theoretically unaffected by the prevalence of an outcome. Thus, it is far superior to the methods commonly employed in appraisals of the size of demographic differences in outcome rates. These include the relative difference for the favorable outcome and the relative difference for the adverse outcome that so often tell opposite stories about whether one disparity is larger than another, as well as other measures that tend to change solely because of a change in the prevalence of an outcome akin to that effected by the lowering of a test cutoff. See discussion of Table 5 in "Race and Mortality Revisited."
The one-third (33.3%) and one-fifth (20.0%) figures for black and white students involved in fights cited in the Mother Jones article would yield an EES of .41, somewhat lower than the .50 standard deviation difference between means underlying the illustration in Table 1. The actual figures in the CDC report the article cited are 32.4% for blacks and 20.1% for whites, which translate into a still lower EES of .38. The actual figures also mean that the black rate is 1.61 times (61% greater than) the white rate, which the Mother Jones article impliedly contrasts with the 300% difference in arrest rates.
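The EES figures above can be reproduced with a few lines of code: the EES is simply the absolute difference between the probits (inverse normal CDF values) of the two groups’ rates. The helper name `ees` is mine; this is a sketch of the calculation described in the post, not code from it.

```python
from statistics import NormalDist

def ees(rate_a, rate_b):
    """Estimated effect size: the difference between the probits
    (inverse normal CDF values) of two groups' outcome rates.
    Using the complementary (favorable) rates gives the same figure."""
    inv = NormalDist().inv_cdf
    return abs(inv(rate_a) - inv(rate_b))

# Mother Jones' one-third vs. one-fifth fight figures:
print(round(ees(1 / 3, 1 / 5), 2))    # 0.41
# The CDC report's actual figures:
print(round(ees(0.324, 0.201), 2))    # 0.38
```

Because the inverse normal CDF satisfies inv(1 − p) = −inv(p), feeding the function the pass-rate pair instead of the fail-rate pair yields the identical EES, which is why the measure is unaffected by which side of the outcome one tabulates.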
Before considering the soundness of that particular comparison, let us consider what sort of differences we might find between black and white rates of suspension for fighting. Typically, when one group is more likely to get into fights than another group, the former group will also be even more likely than the other group to get into the more serious fights that frequently result in suspensions. Thus, there is reason to expect the ratio of the black rate of suspension for fighting to the white rate of suspension for fighting to be rather larger than 1.61 (though there is also reason to expect the relative difference in rates of avoiding suspension for fighting to be rather smaller than the relative difference in rates of avoiding fights).
So, in a situation where the figures for fighting are like those in the first row of Table 1, I would expect that, with respect to the comparative sizes of the two relative differences, figures regarding suspensions for fighting would tend to look like the second row of Table 1 – that is, with a larger relative difference for experiencing the outcome, but a smaller relative difference for avoiding the outcome, than in the first row. And if rates of suspension for fighting were even lower than the failure rates in the second row of Table 1, I would expect the relative difference in the adverse outcome to be even larger, and the relative difference in the corresponding favorable outcome to be even smaller, than in the second row of the table. I would expect these patterns, that is, unless there is discrimination in decisions as to who is suspended and the discrimination is severe enough to alter the patterns that tend to result from differences in the prevalence of the outcomes.
But many seemingly informed people would likely find the larger ratio for suspension than for fighting to be an indicator of racial bias. This is akin to the way people mistakenly find evidence of bias in the increasing relative differences (or concentrations of minorities) at each deepening stage of the criminal justice system, a matter I addressed a few decades ago in “Mired in Numbers,” Legal Times (Oct. 12, 1996). Bias may or may not have a role in such things, but we cannot infer bias on the basis of patterns that typically would exist whether or not bias exists.
I do not have actual data on suspensions for fighting that can be compared with data in the CDC report cited in the Mother Jones article. But the table in the CDC report that provided the black and white rates for involvement in fights also showed that 4.7% of blacks and 1.9% of whites were injured in fights. These figures allow comparison of patterns of racial differences for more serious fights (and thus suspensions) with patterns of racial difference for fights in general.
The two rows of Table 2 below present black and white rates of fighting and injury in fights (the adverse outcomes), along with the rates of not fighting and not being injured in fights (the favorable outcomes) and rate ratios for the two outcomes. To facilitate comparison with Table 1, the columns for favorable outcome rates and adverse outcome rates, as well as the associated ratios, are ordered in the same way as in Table 1. And I have added in the final column the EES figures that can be derived from the pairs of favorable (or adverse) outcome rates of blacks and whites.
Table 2. White and black rates of avoiding (favorable outcome) and experiencing (adverse outcome) fighting and injury in fights, with measures of difference
Outcome               Wh Fav Rate   Bl Fav Rate   Wh Adv Rate   Bl Adv Rate   Wh/Bl Fav Ratio   Bl/Wh Adv Ratio   EES
1 - Fighting             79.9%         67.6%         20.1%         32.4%           1.18              1.61          .38
2 - Injury in fight      98.1%         95.3%          1.9%          4.7%           1.03              2.47          .40
The table shows what one will typically find in the circumstances. The relative difference in the favorable outcome is larger in the first row (where the favorable outcome is less common), while the relative difference in the adverse outcome is larger in the second row (where the adverse outcome is less common). And the EES for the rates of more serious fights is approximately what one would expect based on the EES for fights in general.
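The ratios and EES figures in Table 2 can be re-derived from the two pairs of CDC adverse-outcome rates alone. A sketch, with the `ees` helper again standing in for the probit-difference calculation described earlier:

```python
from statistics import NormalDist

inv = NormalDist().inv_cdf

def ees(a, b):
    # estimated effect size: difference between the probits of two rates
    return abs(inv(a) - inv(b))

# CDC adverse-outcome rates (white, black) cited in the text
outcomes = {"fighting": (0.201, 0.324), "injury in a fight": (0.019, 0.047)}

for name, (wh_adv, bl_adv) in outcomes.items():
    fav_ratio = (1 - wh_adv) / (1 - bl_adv)   # Wh/Bl favorable ratio
    adv_ratio = bl_adv / wh_adv               # Bl/Wh adverse ratio
    print(f"{name}: fav ratio {fav_ratio:.2f}, "
          f"adv ratio {adv_ratio:.2f}, EES {ees(wh_adv, bl_adv):.2f}")
```

The favorable ratio comes out larger for fighting (1.18 vs. 1.03), the adverse ratio larger for injury (2.47 vs. 1.61), while the EES figures (.38 and .40) stay close to each other – the pattern the table and the surrounding discussion describe.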
One should be mindful that the data are based on student reports of events and involve no potentially biased appraisal of events by school administrators. Thus, there is little reason why the EES should differ from row to row (save for ways in which the shapes of the unseen risk distributions might differ materially from the shapes of normal distributions). But in situations where there is a possibility that bias influences outcomes, the EES would provide a means (albeit an imperfect one) to indicate whether bias influences the differences in outcomes, and to do so even when the two rate ratios (and other measures) vary in the ways they typically do in circumstances of differing outcome prevalence.
Now let us turn to the view in the Mother Jones article that the 67% difference in rates of fighting could not explain the 300% difference in rates of arrest for simple assault. To begin with, the reference to 300% is based on a Department of Justice report showing, for 2015, rates of arrest for simple assault of 940.9 per 100,000 (0.9%) for black youths and 304.7 per 100,000 (0.3%) for white youths. Thus, the black rate in fact is 3.09 times (209% greater than) the white rate, not 300% greater than the white rate.
(See my Times Higher webpage regarding the way that, long after Philip Meyer’s Precision Journalism and The New York Times Manual of Style and Usage explained that a ratio of 3 to 1 means that one outcome is “three times as likely as,” not “three times more likely than,” the other, the incorrect usage predominates even in the leading scientific journals (with the notable exception of the New England Journal of Medicine). The “5 Times Likelier” in the title of the Mother Jones article, apparently based on a like misusage in a headline of the Sentencing Project report on which the article relied (“5 X More Likely”), is a common sort of example. The webpage also explains that the incorrect usage has sometimes caused observers to explicitly add an extra 100 percentage points to an actual relative difference, including an instance where the misusage in the Sentencing Project report apparently misled the organization itself with respect to a later discussion of the size of the difference.)
More important, the black and white rates shown for 2015 translate into an EES of .39. Thus, to the extent that the strength of the forces causing black and white youths to be arrested for simple assault at different rates can be measured, it is pretty much what an informed observer might expect given either the rates at which black and white students are involved in fights (EES = .38) or the rates at which black and white students are involved in fights of the kind that result in injury (EES = .40). That is, it is pretty much what an informed observer would expect absent discrimination against blacks that might increase the figure or discrimination against whites that might reduce the figure.
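The arrest-rate arithmetic can be checked the same way, using the per-100,000 figures from the Justice Department report cited above:

```python
from statistics import NormalDist

inv = NormalDist().inv_cdf

black_rate = 940.9 / 100_000   # simple-assault arrests, black youths, 2015
white_rate = 304.7 / 100_000   # simple-assault arrests, white youths, 2015

# The ratio is about 3.09, i.e., 209% greater -- not 300% greater
ratio = black_rate / white_rate
print(f"{ratio:.2f} times the white rate ({ratio - 1:.0%} greater)")

# EES for the arrest rates, for comparison with the fighting figures
ees = abs(inv(black_rate) - inv(white_rate))
print(f"EES = {ees:.2f}")
```

Note that although the arrest rates are tiny fractions of the fighting rates, the probit-based EES (about .39) lands right between the .38 for fights in general and the .40 for injury-producing fights, which is the point of the paragraph above.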
Unfortunately, few people understand these issues. As I have explained here before, the great majority of people analyzing group differences appear to be unaware that it is even possible for the relative difference in a favorable outcome and the relative difference in the corresponding adverse outcome to change in opposite directions as the prevalence of the outcomes changes, much less that this tends to occur systematically (though the National Center for Health Statistics recognized such a pattern more than a decade ago). A great majority of such persons also share the government’s mistaken belief that reducing an adverse outcome will tend to reduce, rather than increase, relative differences in rates of experiencing the outcome. And to my knowledge, no one has ever endeavored to appraise a demographic difference while considering the effects of the prevalence of an outcome on the measure employed. See especially “The Government’s Uncertain Path to Numeracy,” Federalist Society Blog (July 21, 2017), and the November 2, 2017 post mentioned below.
A final observation is warranted about differences in behavior of black and white youths. In the main, one would expect black and white youths of like characteristics to exhibit similar behavior. But there is one reason to expect that black public school students will engage in misconduct more often than white students with the same pertinent characteristics. Increasingly, discussions of school discipline disparities suggest or maintain that the fact that black students are suspended about three times as often as white students results entirely, or almost entirely, from discrimination. In other words, about two out of every three black suspensions would not have occurred but for discrimination.
Many students are bound to be influenced by these appraisals of the way racial bias pervades the administration of school discipline, even though most students probably have a much better understanding of classroom realities than social scientists. And it is hard to imagine that a belief that they are being systematically treated unfairly will fail to have an adverse effect on the conduct of black students, just as it is hard to imagine that such a belief would not have an adverse effect on the conduct of students of any race. The extent of such an effect, of course, is impossible to gauge. But the belief, and its potential effects, ought to be a matter of concern, especially to anyone promoting it without a sound justification.
In my “United States Exports Its Most Profound Ignorance About Racial Disparities to the United Kingdom,” Federalist Society Blog (Nov. 2, 2017), I discussed the ways that, as a result of its misunderstanding of statistics, a report on racial differences in criminal justice outcomes in the UK promoted a belief that racial disparities in such outcomes are larger than they actually are and the mistaken belief that disparities were increasing in circumstances where they should have been decreasing. The report also recommended policies as means of reducing disparities that in fact would tend to increase the measures of disparity it employed. In the post, however, I failed to note that a central theme of the report was the importance of promoting trust in the criminal justice system within racial minority communities, among other reasons, to cause minority defendants to recognize the advantages of pleading guilty where there is solid evidence of guilt. Thus, there is an unfortunate irony in the way that the report’s misunderstandings regarding the interpretation of data are likely to cause it to undermine the trust that it considers to be so crucial.
In the Winter 1991 issue of The Public Interest, I published an article titled “The Perils of Provocative Statistics,” in which I recorded misunderstandings of statistics that have proved to be as pervasive today as they were in 1991. But I failed to make points like those in the paragraphs just above or to suggest that the unwarranted undermining of trust among disadvantaged groups may be the greatest peril of provocative statistics.
The above post should not be read to mean that there is no racial bias involved in racial differences in school discipline, criminal justice, or other outcomes, though any role of such bias in the case of school discipline and criminal justice outcomes is likely to be small. A recent study examined suspension rates in North Carolina according to the race and gender of teachers and students. It showed, for example, that suspension rates for black male students are 15.4% with white male teachers and 12.7% with black male teachers, and that they are 16.1% with white female teachers and 13.7% with black female teachers. Assuming the general validity of the study, the differences in rates according to teacher race are presumably functions of (a) white teachers’ disfavoring black students, (b) black teachers’ favoring black students, (c) better behavior of black students when being taught by black teachers rather than white teachers, or (d) some combination of the three. The EES for those differences is .12 when the teacher is a man and .10 when the teacher is a woman. By contrast, the overall 19.5% and 6.92% figures for black and white male rates of suspension shown in Table 1 of my Discipline Disparities page (which yield a black to white ratio of 2.82) yield an EES of .62. In North Carolina, and probably in most other places, it is EES figures like the .12 and .10 figures derived from suspension rates in the recent study that likely provide the most useful indicator of the degree to which racial bias may be playing a role in differing discipline rates. In any case, such figures are clearly more useful than the EES of .62 that does not account for differences in conduct. And they are certainly more useful than the 2.82 ratio cited above or like ratios presented in the context of supposedly slight differences in behavior and presented without recognition that general reductions in suspension rates will tend to increase the ratios.
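The EES figures in this closing paragraph can likewise be reproduced directly from the quoted suspension rates, again using the probit-difference calculation described earlier (the `ees` helper name is mine):

```python
from statistics import NormalDist

inv = NormalDist().inv_cdf

def ees(a, b):
    # estimated effect size: difference between the probits of two rates
    return abs(inv(a) - inv(b))

# Black male students' suspension rates by teacher race (NC study)
print(round(ees(0.154, 0.127), 2))   # white vs. black male teachers: 0.12
print(round(ees(0.161, 0.137), 2))   # white vs. black female teachers: 0.10
# Overall black vs. white male suspension rates (Discipline Disparities page)
print(round(ees(0.195, 0.0692), 2))  # 0.62
```

The contrast between the within-teacher-race figures (.12 and .10) and the overall figure (.62) is what the paragraph relies on: the former figures hold student conduct roughly constant across teacher race, and so bound the plausible role of teacher bias far more tightly than the raw 2.82 ratio.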