CUNY Institute for State and Local Governance Equality Indicators Project
(March 26, 2020)
This subpage is one of the subpages of the Educational Disparities page of jpscanlan.com discussing efforts to analyze demographic differences in educational outcomes without understanding the ways measures tend to be affected by the prevalence of an outcome, a matter summarized in “Innumeracy at the Department of Education and the Congressional Committees Overseeing It,” Federalist Society Blog (Aug. 24, 2017). The subpage principally focuses on analyses of demographic differences regarding academic proficiency in public schools that quantify the differences in terms of relative differences in rates of meeting proficiency standards or relative differences in rates of failure to meet the standards, without an understanding that general improvements in proficiency will tend to reduce relative differences in meeting proficiency standards but increase relative differences in failure to meet them. The page addresses one example of the remarkable situation where (a) even though test score data make it abundantly clear that lowering test cutoffs (or improving test performance) tends to reduce relative differences in passing the test while increasing relative differences in failing the test (as illustrated, for example, in Table 2 of “Race and Mortality Revisited,” Society (July/Aug. 2014), and Figure 1 (at 22) of Comments for the Commission on Evidence-Based Policymaking (Nov. 14, 2016)), and (b) even though this pattern can be seen to occur in the great majority of cases where substantial overall changes in proficiency rates occur, those analyzing demographic differences regarding proficiency standards, and those encouraging others to do so, remain universally unaware that it is even possible for relative differences in rates of meeting proficiency standards and relative differences in rates of failure to meet proficiency standards to change in opposite directions, much less that this will usually occur.
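The arithmetic behind this pattern can be illustrated with a minimal sketch using hypothetical figures (not data from any report discussed on this page): two normally distributed score distributions whose means sit one standard deviation apart, scored against progressively lower cutoffs.

```python
from statistics import NormalDist

# Hypothetical illustration: two groups with normally distributed scores,
# equal spread, means one standard deviation apart.
advantaged = NormalDist(mu=0.0, sigma=1.0)
disadvantaged = NormalDist(mu=-1.0, sigma=1.0)

for cutoff in (1.0, 0.0, -1.0):  # progressively lower (easier) cutoffs
    pass_adv = 1 - advantaged.cdf(cutoff)
    pass_dis = 1 - disadvantaged.cdf(cutoff)
    pass_ratio = pass_adv / pass_dis              # relative difference in passing
    fail_ratio = (1 - pass_dis) / (1 - pass_adv)  # relative difference in failing
    print(f"cutoff={cutoff:+.1f}  pass ratio={pass_ratio:.2f}  fail ratio={fail_ratio:.2f}")
```

As the cutoff is lowered, the pass-rate ratio falls (from about 6.97 to 1.68) while the fail-rate ratio rises (from about 1.16 to 3.15): the two relative differences move in opposite directions even though the underlying gap between the distributions never changes.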
Around 2015, the Institute for State and Local Governance (ISLG) at the City University of New York created an Equality Indicators project, funded by the Rockefeller Foundation, that purported to evaluate the equity reflected by 96 indicators. A general description of the project is set out in this brochure. The project produced reports on equity in the City of New York for the years 2015, 2016, 2017, and 2018.[i] The principal indicators are favorable or adverse outcome rates of advantaged and disadvantaged groups, and the differences between these rates are measured in terms of the ratio of the outcome rate of the group with the higher rate for the outcome to the outcome rate of the group with the lower rate. The numerator of the ratio is the rate of the advantaged group when the outcome examined is the favorable side of a dichotomy and the rate of the disadvantaged group in the more common situation where the outcome examined is the unfavorable side of the dichotomy. The ratio is then converted to an equity score according to the Ratio-to-Score Conversion Table, which appears at pages 114-115 of the 2018 report. The lower the ratio, and hence the lower the relative difference, the higher the equity score.
As is common or universal for such projects, project materials reflect no understanding of the way the measures employed tend to be affected by the prevalence of an outcome and thus can provide nothing useful, though much that is misleading, about whether the forces causing the outcome rates of advantaged and disadvantaged groups to differ are growing stronger or weaker over time.
Most pertinent to the point of this subpage, in appraising demographic differences regarding proficiency in New York City (Indicators 21 and 24 of the 2018 report), the project measured demographic differences in terms of relative differences in rates of failure to achieve proficiency (cast in terms of the ratio of the nonproficiency rate of the disadvantaged group to the nonproficiency rate of the advantaged group). In the case of the comparison of blacks with Asians respecting math proficiency (where proficiency rates between 2015 and 2018 increased from 19.1% to 25.4% for blacks and from 66.8% to 72.2% for Asians), the report found that the black-Asian nonproficiency ratio had increased from 2.44 (80.9%/33.2%) to 2.68 (74.6%/27.8%). The increase in the ratio caused the equity score for the indicator to decline from 38 to 36. See Indicator 21 at page 48. Had the report measured the subject in terms of relative differences in proficiency rates, it would have found the Asian-black ratio to have declined from 3.50 to 2.84, which would have involved an increase from an initial equity score of 30 in 2015 to an equity score of 35 in 2018. According to the method I described in “Race and Mortality Revisited,” Society (July/Aug. 2014), the disparity would have shown an Estimated Effect Size or EES (more formally known as probit d') of 1.31 in 2015 and 1.25 in 2018.[ii] The pertinent figures for this indicator are set out in Table 1.
Table 1. Mathematics nonproficiency and proficiency rates of New York City black and Asian public school students in grades 3-8 in 2015 and 2018, as shown for Indicator 21 in ISLG 2018 Inequality Indicator Report, with measures of difference
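The ratios and EES figures for this indicator can be reproduced from the published proficiency rates. The sketch below assumes the EES is computed as probit d', i.e., the difference between the inverse standard normal (probit) transforms of the two groups' rates; `ratios_and_ees` is an illustrative helper, not a function from any ISLG tool or the ES_Calculator mentioned in note [ii].

```python
from statistics import NormalDist

inv = NormalDist().inv_cdf  # standard normal quantile (probit) function

def ratios_and_ees(rate_adv, rate_dis):
    """Relative differences and probit d' (EES) for a pair of proficiency rates."""
    prof_ratio = rate_adv / rate_dis                 # ratio of proficiency rates
    nonprof_ratio = (1 - rate_dis) / (1 - rate_adv)  # ratio of nonproficiency rates
    ees = inv(rate_adv) - inv(rate_dis)              # probit d'
    return prof_ratio, nonprof_ratio, ees

# Indicator 21 figures (Asian vs. black math proficiency, 2015 and 2018)
for year, asian, black in ((2015, 0.668, 0.191), (2018, 0.722, 0.254)):
    p, n, d = ratios_and_ees(asian, black)
    # reproduces the 3.50/2.44/1.31 (2015) and 2.84/2.68/1.25 (2018) figures
    print(f"{year}: proficiency ratio {p:.2f}, nonproficiency ratio {n:.2f}, EES {d:.2f}")
```

The same helper applied to the Indicator 24 rates (6.9%/36.8% in 2015, 15.8%/55.2% in 2018) yields the 1.15 and 1.13 EES figures discussed below.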
I do not know whether the EES change is statistically significant. But, in any case, it would be a small change,[iii] as is usually the case when there occurs an overall change in the prevalence of an outcome over a comparatively short period of time (that is, when ordinarily there is insufficient time for meaningful changes in the strength of the forces causing the outcome rates of advantaged and disadvantaged groups to differ).
A similar pattern is shown for Indicator 24 (at page 49), which involves differences between students with disabilities and students without disabilities and where between 2015 and 2018 English proficiency rates increased from 6.9% to 15.8% for students with disabilities and from 36.8% to 55.2% for students without disabilities. That increase was accompanied by an increase in the ratio of the nonproficiency rate of students with disabilities to the nonproficiency rate of students without disabilities from 1.47 (93.1%/63.2%) to 1.88 (84.2%/44.8%). This resulted in a decrease in the equity score from 62 to 45, which is the fifth largest of the negative changes listed at page 117 of the 2019 report.
The ratio of the proficiency rate of students without disabilities to the proficiency rate of students with disabilities, however, declined from 5.33 to 3.49. This would have resulted in an increase from an initial score of 19 in 2015 to a score of 32 in 2018.
In this case, the EES declined from 1.15 to 1.13, a change to which the observations above regarding the EES for the black-Asian math proficiency difference would apply. The pertinent figures are set out in Table 2.
Table 2. Reading nonproficiency and proficiency rates of New York City public school students with disabilities (WD) and students without disabilities (WOD) in grades 3-8 in 2015 and 2018, as shown for Indicator 24 in ISLG 2018 Inequality Indicator Report, with measures of difference
The ISLG reports on New York City may be contrasted with the 2016 report by the New York City government’s Center for Innovation through Data Intelligence (CITDA). That report measured racial/ethnic differences regarding proficiency standards in terms of relative differences in meeting the standards. As commonly happens in such circumstances, the agency found that general increases in overall proficiency rates were associated with decreasing disparities and that overall reductions in proficiency rates resulting from the use of a harder test in 2013 increased the disparities. ISLG would have found the opposite.
For a fuller discussion of the 2016 report and the failure of understanding of the CITDA, see my letter to the organization dated June 6, 2016. See also the New York Proficiency Rate Disparities subpage of the Discipline Disparities page with regard to the statewide impact of the change to a harder test for 2013, which was analyzed in reliance on relative differences in proficiency rates, without recognition of the pattern by which tests with generally high pass rates will tend to show smaller relative differences in pass rates, but larger relative differences in failure rates, than tests with generally low pass rates.
More examples of the consistency of the pattern whereby relative differences in rates of proficiency and relative differences in rates of nonproficiency change in opposite directions as proficiency rates generally change may be found in my December 23, 2014 letter to the Wisconsin Council on Families and Children’s Race to Equity Project, which involves a study that examined racial differences regarding proficiency in terms of relative differences in rates of failure to achieve proficiency. Most of the subpages of the Education Disparities page, while focusing on particular approaches to measurement (including reliance on absolute differences without understanding the ways absolute differences tend to be affected by the prevalence of an outcome), contain tables illustrating situations where in fact general increases in rates of meeting standards were accompanied by reduced relative differences in meeting the standards but increased relative differences in failure to meet the standards, while general decreases in rates of meeting standards were accompanied by an increase in the former relative difference and a decrease in the latter relative difference.
The ISLG Equality Indicators project has provided guidance on adapting its 96 indicators for use in five other cities involved in a project funded by the Rockefeller Foundation. According to material showing baseline data, four (Dallas (at page 28), Pittsburgh (at pages 72-73), St. Louis (at page 68), and Tulsa (at page 25)) measure racial/ethnic differences regarding proficiency in terms of relative differences in meeting standards. They would thus tend to reach opposite conclusions about changes over time from those ISLG would reach. One jurisdiction (Oakland, CA (p. 49)) measures differences involving proficiency in terms of relative differences in failure to meet the standards, thus tending to reach the same conclusions about changes over time that ISLG would reach.
In the two instances where any of these jurisdictions have published results over time, the patterns of changes in the relative differences for the favorable outcome and for the corresponding adverse outcome are in accord with what an informed observer should expect. A Dallas report for 2019 (at page 28) found that between 2018 and 2019, when third grade reading proficiency rates increased from 87.41% to 91.03% for white students and from 51.80% to 62.79% for black students, the white-black proficiency rate ratio decreased from 1.69 to 1.45. This resulted in an increase in the equity score from 53 to 63. ISLG, however, would have examined the relative difference in nonproficiency and found that the ratio of the black nonproficiency rate to the white nonproficiency rate had increased from 3.83 (48.20%/12.59%) to 4.15 (37.21%/8.97%), with a reduction in the equity score from 28 to 26. The EES declined from 1.10 to 1.01. The pertinent figures are set out in Table 3, with columns ordered in accord with the approach of Tables 1 and 2.
Table 3. Reading nonproficiency and proficiency rates of Dallas black and white public school students in grades 3-8 in 2018 and 2019, as shown for Indicator 16 in Dallas Equity Indicators 2019 Report, with measures of difference
Data in the Tulsa 2019 report show an unusual situation where both economically disadvantaged and economically advantaged groups (differentiated by whether or not students are eligible for free or reduced-price lunch) experienced extremely large declines in proficiency rates over a one-year period. Between the 2015-16 and 2016-17 school years, the economically disadvantaged proficiency rate declined from 46% to 20% and the economically advantaged group proficiency rate declined from 79% to 38%. Declines of this magnitude over a one-year period very likely reflect something other than actual declines in the reading skills of each group, including things involving the way rates were calculated. There may also exist some data collection or presentation issue.
Nevertheless, one observes the usual situation where a decline in the favorable outcome was accompanied by an increase in the relative difference in rates of experiencing the outcome, as reflected by an increase in the ratio of the economically advantaged group’s proficiency rate to the disadvantaged group’s proficiency rate from 1.72 to 1.90. This resulted in a decrease in the equity score from 52 to 44. In appraising equality, however, ISLG would have relied on the decrease in the ratio of the nonproficiency rate of the economically disadvantaged group to the nonproficiency rate of the economically advantaged group – from 2.57 (54%/21%) to 1.29 (80%/62%) – and thus have found an increase in the equity score from 37 to 71.
The EES decreased from .907 in 2018 to .536 in 2019. This is an extraordinarily large reduction in the EES over a one-year period, and, like the size of the changes in the proficiency rates themselves, suggests that something other than an actual change in the comparative situations of the two groups had a role.[iv]
The pertinent figures for Tulsa are set out in Table 4, as with Table 3, in accord with the formatting of Tables 1 and 2.
Table 4. Reading nonproficiency and proficiency rates of Tulsa economically disadvantaged (ED) and economically advantaged (EA) elementary school students for 2015-16 and 2016-17 school years, as shown in Tulsa Equality Indicators online data in 2018 and 2019, with measures of difference.
Regardless of the chosen approach to measuring differences regarding proficiency, however, it is unlikely that any of the involved entities is aware that it is even possible for the identified direction of change in equality to be affected by whether one examines the relative difference in the favorable outcome or the relative difference in the adverse outcome, much less that the two relative differences usually will change in opposite directions. And it is very unlikely that anyone involved in evaluating the data will give thought to whether an observed pattern is anything other than the consequence of an overall change in proficiency rates, just as no other entity or person analyzing demographic differences of any kind has yet given any thought to such matters.
It is important to keep in mind, however, that the failure of understanding regarding the measurement of demographic differences at ISLG differs little from the failure at any institution of putative expertise in the analysis of demographic difference. Well above 99 percent of persons purported to be experts in the analysis of demographic difference are unaware that it is even possible for relative differences in a favorable outcome and relative differences in the corresponding adverse outcome to change in opposite directions as the prevalence of an outcome changes, much less that, as discussed in "Race and Mortality Revisited," the National Center for Health Statistics (NCHS) long ago recognized that this tends to occur systematically. And probably above 99 percent of such persons also believe that generally reducing an adverse outcome should reduce relative differences in rates of experiencing the outcome, which is the opposite of what NCHS recognized, and, more important, the opposite of reality. I did not notice that particular failure of understanding in the ISLG Equality Indicator materials. But with respect to things like adverse criminal justice outcomes, ISLG appears to share the mistaken view that generally reducing such outcomes will tend to reduce, rather than increase, relative racial differences in rates of experiencing the outcomes, as reflected in its web page on the MacArthur Foundation’s Safety and Justice Challenge. See, e.g., “Usual, But Wholly Misunderstood, Effects of Policies on Measures of Racial Disparity Now Being Seen in Ferguson and the UK and Soon to Be Seen in Baltimore,” Federalist Society Blog (Dec. 4, 2019), and “United States Exports Its Most Profound Ignorance About Racial Disparities to the United Kingdom,” Federalist Society Blog (Nov. 2, 2017).
In any case, in developing the Equality Indicators project, ISLG was no more culpable than other entities that promote the devotion of resources to analyzing demographic differences while being oblivious to the ways such analyses are undermined by a failure to understand the ways the measures employed tend to be affected by the prevalence of an outcome. Whether the comparative culpability of ISLG has changed after I acquainted leadership of the project with these issues by email of April 19, 2019, and will further change after this page has been brought to the attention of that leadership, is another matter. And ISLG has special obligations to the five cities that are devoting resources to adapting the ISLG model to their circumstances while relying on ISLG guidance.
While ISLG’s failures of understanding regarding the measurement of demographic differences are no more egregious than those of other institutions (including those regarded as the most prestigious), the manner in which they cause the waste of resources on unsound and misleading research may be compared with that of the National Quality Forum, as discussed in letters to that organization of March 15, 2019, and August 29, 2017. That is, reflecting no understanding of how to measure the simplest of differences, the project urges entities to devote resources to measuring a vast number of differences. Moreover, while it eschews a systematic examination of things that might reasonably be deemed to involve important equity issues, like differences related to proficiency (assuming one knew how to measure the differences), the ISLG project’s commitment to a large number of indicators causes it to create many that have no reasonable bearing on equity.
The data presented for Indicator 13 might be deemed an extreme example of the absurdity of such an approach. Minority and women owned businesses received 24.0% of the smaller contracts in 2015 and 46.7% of the smaller contracts in 2018. Such businesses received 13.6% of the larger contracts in 2015 and 20.9% of the larger contracts in 2018. Thus, it should be evident that, to the extent that data on the proportion of contracts received by minority and women owned businesses are an indicator of equity, equity improved greatly between the two years. But on the basis of the fact that the ratio of the proportion of small contracts going to minority or women owned businesses to the proportion of large contracts going to such businesses increased from 1.765 (24.0%/13.6%) in 2015 to 2.234 (46.7%/20.9%) in 2018, the 2019 ISLG report found that the equity score for this indicator declined from 50 to 39.
Thus, contracting officials with an eye on equity scores will have an incentive not to award smaller contracts to minority or women owned businesses. That can be especially problematic for minority and women owned businesses given that, almost certainly, not only do such businesses make up a much larger proportion of the businesses seeking the smaller contracts than of those seeking the larger contracts, but there are far more minority and women owned businesses seeking the smaller contracts than the larger contracts.
Indicator 16 (page 41) is the ratio of the proportion of total New York City sales tax revenue collected in Manhattan to the proportion of such revenue collected in the other boroughs. In 2018, the proportion collected in Manhattan was 63.1% and the proportion collected outside of Manhattan was 36.9%, for a ratio of 1.710 and an equity score of 52. The ratio was down from 1.770 in 2015, when the equity score was 50. It is difficult to see how the proportion of sales tax collected in the mercantile center of the world compared with that collected in less mercantile areas of the same city has anything whatever to do with equity. Thus, even if the report were otherwise measuring meaningful things about equity (as, for example, might a sound measure of academic achievement differences of advantaged and disadvantaged groups), the inclusion of a score for an indicator like 16 would merely detract from the utility of the report.
A number of indicators are unrelated to demographic differences at all, as in the case of Indicator 8 (at page 37), which is based on the percentage of cash assistance recipients who were no longer employed after being placed in a job. That percentage (26.1% in 2015 and 26.2% in 2018) is then translated into an equity score simply by subtracting it from 100% (yielding equity scores of 74 in both years). Thus, it is just a general indicator of the employment situation of disadvantaged groups. Notably, while the equity score for this indicator would tend to decrease during a recession, the ratio of the black unemployment rate to the white unemployment rate (Indicator 5 at page 36) would tend to decrease during a recession, thus increasing the equity score for that indicator. That would also tend to be the case for Indicator 5 (at page 34), which is the ratio of the Asian poverty rate to the white poverty rate.[v]
Readers interested in the project should read all the indicators in the 2018 report with an eye toward how many of them might actually provide useful information as to the degree of equity in New York City, even assuming that the project employed an effective measure of the magnitude of the difference between the circumstances of advantaged and disadvantaged groups. But the reader should be mindful that, for reasons discussed in "Race and Mortality Revisited," as with virtually all efforts to quantify demographic differences involving outcome rates, the project does not provide a useful measure of that magnitude and reflects no understanding that different measures commonly yield opposite conclusions about directions of changes in such differences or that useful efforts to understand the ways policies affect differences in the circumstances of advantaged and disadvantaged groups must take into account the ways measures tend to be affected by the prevalence of an outcome.
Yet entities that fund the various equality indicator projects and the localities that devote resources to collecting and analyzing data take for granted that ISLG is an expert in the analysis of demographic difference. Again, however, ISLG’s failure of understanding differs little from that of all entities of putative expertise regarding such matters.
[i] Links are provided to each of the reports on an ISLG Publications page. But as of the date of creation of this subpage, the only working link was that for the 2016 report. This subpage mainly discusses the 2018 report, which currently is not available on the ISLG site. Because of the size of the 2018 report, I have been unable to upload the entire copy of the report that I downloaded last year. But excerpts that include the cover page and all pages referenced here are available by means of the link in the text above for the 2018 report.
[ii] The value can be easily derived for any pair of favorable or corresponding adverse outcome rates by means of the ES_Calculator (Proportions 1 tab, Probit Method d value) file found at this link: http://mason.gmu.edu/~dwilsonb/ma.html
[iii] In appraising the size of the EES change, one should be mindful of the difference between EES figures above 1 and ratios of rates above 1. A change in a ratio from 1.31 to 1.25 would be a 19.4% decrease in the 31% relative difference that the 1.31 ratio represents. By contrast, a change in an EES from 1.31 to 1.25 is a 4.6% reduction in the EES (which is in units of standard deviations). In making this clarifying point, I do not mean to suggest either that a risk ratio or the relative difference it represents is a sound measure of association or that the change in such a ratio or the relative difference it represents, however such change might be quantified, is a useful indicator of the change in the forces causing the rates of the two groups to differ.
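The footnote's arithmetic can be made explicit with a short calculation (the 1.31 and 1.25 values are the figures in the note, treated first as rate ratios and then as EES values):

```python
# The same 1.31 -> 1.25 change, measured two different ways.
before, after = 1.31, 1.25

# As rate ratios: the relative difference is the ratio minus 1.
rel_diff_before = before - 1  # 0.31, i.e. a 31% relative difference
rel_diff_after = after - 1    # 0.25
drop_in_rel_diff = (rel_diff_before - rel_diff_after) / rel_diff_before

# As EES values (units of standard deviations): the change is taken
# against the full figure, not against the figure minus 1.
drop_in_ees = (before - after) / before

print(f"{drop_in_rel_diff:.1%} decrease in the relative difference")
print(f"{drop_in_ees:.1%} decrease in the EES")
```

The first percentage is roughly 19.4% and the second roughly 4.6%, matching the figures in the note.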
[iv] The final column of Table 4 of the June 6, 2016 letter to the Center for Innovation through Data Intelligence and the final column of Table 1 of the New York Proficiency Rate Disparities subpage show the changes in EES figures in New York City and New York State when the state adopted a more difficult proficiency test in 2013. The final column of Table 1 of the December 23, 2014 letter to the Wisconsin Council on Families and Children’s Race to Equity Project shows the change in EES figures when Wisconsin adopted new proficiency standards in 2012. The modesty of the EES changes in these cases provides further reason to be suspicious of the very large change in the EES in Tulsa and of the rates underlying the EES values. In general, one would not expect a change in the difficulty of a proficiency test to affect the EES any differently from a change in the cutoff, which is to say that any observed changes in EES are likely to be random variation or reflections of irregularities in the distributions. Of course, reasons for any actual meaningful changes are matters worthy of study.
[v] As suggested at the outset of this page, given the ways the patterns discussed here are so evident in test score data, the failure of the educational community to understand that it is even possible for relative differences in meeting standards and relative differences in failure to meet standards to change in opposite directions is something quite remarkable. But, given that the same patterns are evident in income data maintained by the Census Bureau (see Table 2 of "Race and Mortality Revisited" and Table 1 of “Can We Actually Measure Health Disparities?,” Chance (Spring 2006)), the failure of the poverty research community to understand that reductions in poverty tend to increase relative differences in poverty is comparably remarkable.