James P. Scanlan, Attorney at Law


Institutional Correspondence

(Apr. 9, 2010; rev. Mar. 9, 2015)

Prefatory note:  This page serves as a repository of links to formal correspondence to institutions whose missions are compromised by failure to understand patterns by which measures of differences between outcome rates (proportions) tend to be affected by the prevalence of an outcome (as discussed, among other places, on the Measuring Health Disparities, Scanlan’s Rule, Mortality and Survival, Lending Disparities, and Discipline Disparities pages of this site).  As suggested by the fact that this page was made a subpage of the Measuring Health Disparities page, originally the institutions were limited to those involved with health and healthcare disparities research issues.  But beginning in April 2012 it was expanded to include letters to other types of institutions. 

The page serves two purposes.  First, it makes available electronic copies of the items of correspondence with links to materials they reference, thus facilitating recipients’ review of referenced materials.  Second, it creates a record of the correspondence that may ultimately be useful in addressing the willingness and ability of institutions to respond to information indicating that some of the things they do in pursuit of their missions are deeply flawed.  For example, the Department of Education and the Department of Justice have been for some time leading the public to believe that large racial disparities in discipline rates result from stringent discipline policies, the exact opposite of the case.  So this page may one day address how those institutions reacted when confronted with their misperceptions.  See the Duncan/Ali Letter sub-page of the Discipline Disparities page and the Holder/Perez Letter sub-page of the Lending Disparities page.

To date, most institutions have done nothing significant in response to this correspondence (or other correspondence either in hard copy or email form).  One notable exception involves the National Center for Health Statistics (NCHS), which, based on my “Race and Mortality” (Society, Jan/Feb 2000) and “Divining Difference” (Chance, Fall 1994), recognized in four official or unofficial publications between 2004 and 2009 that determinations of whether health and healthcare disparities are increasing or decreasing would commonly turn on whether one examined relative differences in favorable outcomes or relative differences in the corresponding adverse outcome.  But the NCHS response was by no means a useful one, as discussed most recently in “Race and Mortality Revisited,” Society (July/Aug. 2014).

The listing of letters makes evident that I did not send as many letters as the page, when created in April 2010, suggested were contemplated.  A long-planned letter to Harvard University was not sent until the scheduling of the presentation of “The Mismeasure of Group Differences in the Law and the Social and Medical Sciences” at an October 17, 2012 Applied Statistics Workshop of Harvard’s Institute for Quantitative Social Science.

Some recipients of the letters are discussed at pages 26 to 31 of my Federal Committee on Statistical Methodology 2013 Research Conference paper titled “Measuring Health and Healthcare Disparities” and the recent “Race and Mortality Revisited.”  The last few pages of the latter item give particular attention to the responses of Harvard Medical School and Massachusetts General Hospital.


The material that follows this prefatory note is a quite useful summary of the health and healthcare disparities issues to which the correspondence principally pertains.  But commencing on May 29, 2014, in order to make the correspondence itself more accessible, I list the items of correspondence at the end of this prefatory note rather than at the end of the page itself. Many of the more recent items address issues other than health and healthcare disparities issues.  


Comments for the Commission on Evidence-Based Policymaking (Nov. 28, 2016)

Letter to the Pyramid Equity Project (Nov. 28, 2016)

Comments for Commission on Evidence-Based Policymaking (Nov. 14, 2016)

Oklahoma City School District (Sept. 20, 2016)

Antioch (CA) Independent School District (Sept. 12, 2016)

American Statistical Association II (July 25, 2016)

Federal Judicial Center (July 7, 2016)

University of Oregon Institute on Violence and Destructive Behavior and University of Oregon Law School Center for Dispute Resolution (July 5, 2016)

University of Oregon Institute on Violence and Destructive Behavior and University of Oregon Law School Center for Dispute Resolution (July 3, 2016)

New York City Center for Innovation through Data Intelligence (June 6, 2016)

Consortium of Social Science Associations (Apr. 6, 2016)

Population Association of America and Association of Population Centers (Mar. 29, 2016)

Council of Economic Advisers (Mar. 16, 2016)

City of Madison, Wisconsin (Mar. 12, 2016)

Stanford Center on Poverty and Inequality (Mar. 8, 2016)

City of Boulder, Colorado (Mar. 5, 2016)

Houston Independent School District (Jan. 5, 2016)

Boston Lawyers’ Committee for Civil Rights and Economic Justice (Nov. 12, 2015)

House Judiciary Committee (Oct. 19, 2015)

American Statistical Association (Oct. 8, 2015)

Chief Data Scientist of White House OSTP (Sept. 8, 2015)

McKinney, Texas Independent School District (Aug. 31, 2015)

Department of Health and Human Services and Department of Education (Aug. 24, 2015)

Agency for Healthcare Research and Quality (July 1, 2015)

City of Minneapolis, Minnesota (June 8, 2015)

Texas Appleseed (Apr. 7, 2015)

Senate Committee on Health, Education, Labor and Pensions (Mar. 20, 2015)

United States Department of Justice and City of Ferguson, Missouri (Mar. 9, 2015)

Vermont Senate Committee on Education (Feb. 26, 2015)

Portland, Oregon Board of Education (Feb. 25, 2015)

Wisconsin Council on Families and Children’s Race to Equity Project (Dec. 23, 2014)

Financial Markets and Community Investment Program, Government Accountability Office (Sept. 9, 2014)

Education Law Center (Aug. 14, 2014)

IDEA Data Center (Aug. 11, 2014)

Institute of Medicine II (May 28, 2014)

Annie E. Casey Foundation (May 13, 2014)

Education Trust (April 30, 2014)

Investigations and Oversight Subcommittee of House Finance Committee (Dec. 4, 2013)

Mailman School of Public Health of Columbia University (May 24, 2013)

Senate Committee on Health, Education, Labor and Pensions (Apr. 1, 2013)

Federal Reserve Board (March 4, 2013)

Harvard University et al.  (Oct. 26, 2012)

Harvard University  (Oct. 9, 2012)

United States Department of Justice (Apr. 23, 2012)

United States Department of Education (Apr. 18, 2012)

The Commonwealth Fund (June 1, 2010)

Institute of Medicine (June 1, 2010)

National Quality Forum (Oct. 22, 2009)

Robert Wood Johnson Foundation (Apr. 8, 2009)


Many thousands of institutions in the United States and around the world engage in activities that involve appraising differences between the rates at which two groups experience an outcome and evaluating the bearing of such appraisal on a range of issues in the law and the social and medical sciences. Such institutions include governmental entities, universities, research institutes, and a variety of scientific and other scholarly journals. With very minor exception, however, the manner in which these institutions appraise differences between outcome rates is fundamentally flawed as a result of the failure to recognize the way that standard measures of differences between outcome rates tend to be affected by the overall prevalence of an outcome, as discussed in the Measuring Health Disparities (MHD), Scanlan’s Rule (SR), and Mortality and Survival pages, among other pages, on this site and in the references made available by those pages, including “Can We Actually Measure Health Disparities?” (Chance, Spring 2006), “Race and Mortality” (Society, Jan/Feb 2000), “Divining Difference” (Chance, Fall 1994), “The Perils of Provocative Statistics” (Public Interest, Winter 1991), and “The Misinterpretation of Health Inequalities in the United Kingdom” (British Society for Population Studies, 2006) – and now addressed most comprehensively in the Harvard University Measurement Letter listed with the items at the end of the text on this page, “Measuring Health and Healthcare Disparities” (Federal Committee on Statistical Methodology, April 2014), and “Race and Mortality Revisited” (Society, July/Aug. 2014). 

The most notable of the ways standard measures of differences between outcome rates are affected by the overall prevalence of an outcome is the pattern whereby the rarer an outcome the greater tends to be the relative difference in experiencing it and the smaller tends to be the relative difference in avoiding it.  Thus, among many comparable examples:

  • When test cutoffs are lowered (or test performance improves), relative differences in failure rates tend to increase while relative differences in pass rates tend to decrease.

  • When poverty declines, relative differences in poverty rates tend to increase while relative differences in rates of avoiding poverty tend to decrease. 

  • When mortality declines, relative differences in mortality rates tend to increase while relative differences in survival rates tend to decrease.

  • When overall rates of receiving beneficial health procedures or care (e.g., mammography, immunization, prenatal care, adequate hemodialysis) increase, relative differences in rates of receiving such procedures or care tend to decrease while relative differences in rates of failing to receive them tend to increase.

  • Banks with relatively liberal lending policies tend to show larger relative differences in mortgage rejection rates but smaller relative differences in mortgage acceptance rates than banks with less liberal lending policies.

  • Relative differences in adverse outcome rates tend to be large among comparatively advantaged subpopulations (e.g., the college-educated, British civil servants), where such outcomes tend to be rare, while relative differences in the opposite outcomes tend to be small among those subpopulations.

  • More lenient school discipline policies will tend to yield larger relative differences in discipline rates, though smaller relative differences in rates of avoiding discipline, than more stringent policies.
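The bulleted patterns can be illustrated with a simple numerical sketch (my own illustration, not drawn from the referenced materials), assuming each group’s susceptibility to the outcome follows a normal distribution with the same spread but different means.  As the cutoff is lowered and failure grows rarer, the relative difference in failure rates grows while the relative difference in pass rates shrinks:

```python
# Hypothetical illustration: two groups with normally distributed
# "performance"; group A's mean is higher.  Lowering the cutoff makes
# failure rarer, which increases the failure-rate ratio (disadvantaged
# over advantaged group) while decreasing the pass-rate ratio.
from statistics import NormalDist

group_a = NormalDist(mu=0.5, sigma=1.0)   # higher-scoring group (hypothetical)
group_b = NormalDist(mu=0.0, sigma=1.0)   # lower-scoring group (hypothetical)

for cutoff in (1.0, 0.0, -1.0):           # progressively lower cutoffs
    fail_a, fail_b = group_a.cdf(cutoff), group_b.cdf(cutoff)
    pass_a, pass_b = 1 - fail_a, 1 - fail_b
    fail_ratio = fail_b / fail_a          # relative difference in failing
    pass_ratio = pass_a / pass_b          # relative difference in passing
    print(f"cutoff={cutoff:+.1f}  fail ratio={fail_ratio:.2f}  pass ratio={pass_ratio:.2f}")
```

With these assumed distributions the failure-rate ratio rises (roughly 1.2 to 2.4) while the pass-rate ratio falls (roughly 1.9 to 1.1) as the cutoff drops; the same mechanics underlie the poverty, mortality, lending, and discipline examples above.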

 

Absolute differences between rates and differences measured by odds ratios tend also to be affected by the overall prevalence of an outcome, though in a more complicated way than the two relative differences, as described most precisely in the introduction to SR.  Roughly, as uncommon outcomes (those with rates of less than 50% for both groups) become more common, absolute differences between rates tend to increase; as common outcomes (those with rates of more than 50% for both groups) become even more common, absolute differences tend to decrease.  Differences measured by odds ratios tend to change in the opposite direction of absolute differences.[i]   Other common measures that are functions of dichotomies, and hence in some manner affected by overall prevalence, are discussed in various places – e.g., longevity (BSPS 2006), the Gini coefficient (Gini Coefficient sub-page of MHD), the concentration index (Concentration Index sub-page of MHD), the phi coefficient (Section A.13 of SR), Cohen’s Kappa Coefficient (Section A.13a of SR).[ii]  
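The rough pattern for absolute differences and odds ratios can also be sketched numerically (again a hypothetical illustration of my own, using the same assumed normal distributions): as an outcome moves from rare toward universal, the absolute difference between the groups’ rates first grows and then shrinks, while the odds ratio moves in the opposite direction.

```python
# Hypothetical illustration: adverse-outcome rates for two groups as
# the outcome's overall prevalence rises.  The absolute difference
# peaks when rates are near 50%; the odds ratio is smallest there.
from statistics import NormalDist

group_a = NormalDist(mu=0.5, sigma=1.0)
group_b = NormalDist(mu=0.0, sigma=1.0)

for cutoff in (-2.0, -1.0, 0.25, 1.0, 2.0):
    rate_a, rate_b = group_a.cdf(cutoff), group_b.cdf(cutoff)  # adverse rates
    abs_diff = rate_b - rate_a
    odds_ratio = (rate_b / (1 - rate_b)) / (rate_a / (1 - rate_a))
    print(f"rates=({rate_a:.3f}, {rate_b:.3f})  abs diff={abs_diff:.3f}  OR={odds_ratio:.2f}")
```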

One point of clarification is in order. When a study finds, for example, that a factor increases some outcome rate from 1% to 3%, whether one states that the factor increased the outcome by 200% or by 2 percentage points or states that the opposite outcome decreased by 2% (i.e., 99% reduced to 97%) or 2 percentage points, all such characterizations would be correct, and none would implicate the issues described in the prior paragraphs.[iii]  But if one were to attempt to compare the size of the effect in the circumstance where an outcome rate increased from 1% to 3% with one where, say, a factor increases a rate from 2% to 5% – or if one were more abstractly to attempt to characterize the difference between 1% and 3% as a large one or a small one – the referenced issues are implicated. And none of the institutions whose activities involve the appraisal of differences in outcome rates recognizes these issues, much less knows how to address them.
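The equivalent characterizations in that example can be checked with elementary arithmetic (the opposite outcome’s relative decline is 2/99, i.e., just over 2%):

```python
# The 1% -> 3% example from the text, stated three equivalent ways.
before, after = 0.01, 0.03

relative_increase = (after - before) / before             # +200%
point_increase = after - before                           # +2 percentage points
opposite_before, opposite_after = 1 - before, 1 - after   # 99% -> 97%
opposite_relative = (opposite_before - opposite_after) / opposite_before  # ~2.02%

print(f"relative increase: {relative_increase:.0%}")
print(f"percentage-point increase: {point_increase:.0%}")
print(f"relative decrease of opposite outcome: {opposite_relative:.2%}")
```

Each figure is a correct description of the same change; the measurement issues arise only when such figures are compared across settings with different baseline rates.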

Most of the materials made available on this site involve the analysis of health and healthcare disparities, particularly with regard to whether race/ethnic or socioeconomic disparities are increasing or decreasing over time or otherwise are larger in one setting than another.  But as suggested by the bulleted examples set out above, the same issues are involved in any interpretation of the size of differences between outcome rates. 

As reflected in the discussion of the works of Carr-Hill and Chalmers-Dixon, Houweling et al., Eikemo et al., and Day et al. in Section E.7 of MHD, more thought has been given to these issues in Europe than in the United States.  But with respect to the extent to which the overwhelming majority of work implicating these issues is fundamentally flawed, the situation in Europe is indistinguishable from that in the United States and elsewhere around the world.

The extent of the failure to recognize these issues is perhaps best illustrated by the many journal articles that, particularly with regard to disparities in cancer outcomes, discuss relative differences in survival and relative differences in mortality interchangeably without recognizing that the two relative differences tend to change systematically in opposite directions as cancer survival increases (as discussed on the Mortality and Survival page).  The failure is also well illustrated by the way the Departments of Justice and Education have been encouraging or pressuring banks and public schools to relax lending standards and discipline policies, plainly believing that doing so should reduce relative differences in mortgage rejection rates and relative differences in discipline rates, when the exact opposite is the case (as discussed on the Lending Disparities and Discipline Disparities pages and in the recent “‘Disparate Impact’: Regulators Need a Lesson in Statistics” (American Banker, June 5, 2012) and “Racial Differences in School Discipline Rates” (The Recorder, June 22, 2012)).  Indeed, though the Department of Justice has been pressing employers to lower test cutoffs for close to fifty years because lowering cutoffs reduces relative differences in pass rates, it is unclear whether anyone in the Department even knows that lowering test cutoffs increases relative differences in failure rates.

From time to time, I have contacted various researchers or institutions about these issues, in recent years usually by email, suggesting that they reevaluate the ways they or those in some manner affiliated with them (as in the case of editors of scientific journals and the authors who publish in those journals) appraise differences between outcome rates.  But even when the emails have been read carefully enough for the recipient to recognize that there may be a serious problem with current methods, such communications have had limited effect. With the hope that formal letters would have greater effect, in 2009 I began to send such letters to some of the more influential institutions involved in activities implicating the issues described above. When sending hard copy letters, it is my practice to include links to referenced materials and to post electronic versions of the letters on this site in order to facilitate the recipients’ review of referenced materials. Thus, as letters are sent, links to the letters will be made available below. 

Some of the eventual recipients of the letters are already discussed in various pages on this site, including the National Center for Health Statistics (NCHS) and the Agency for Healthcare Research and Quality (including among many other places Section E.4 of MHD, Section A.6 of SR, and the 2007 APHA presentation).  (As noted in the introductory material, earlier contacts to NCHS, while causing their statisticians to recognize problems with existing practices, failed to yield useful results.)  Other institutions may be mentioned only in passing, as in the case of the Health Care Policy Department of Harvard Medical School (see Pay for Performance sub-page of MHD), though the work of such entities may be frequently addressed in the comments collected under Section D of MHD. Those comments, by their critique of so much research in major medical and health policy journals published in the United States and Europe, also inferentially implicate the editorial practices of those journals. The Mortality and Survival page does that more directly with regard to journals that publish articles on disparities in cancer mortality and survival, as discussed above, typically without recognizing, for example, that increasing mortality disparities tend to be associated with decreasing survival disparities.  

Whether a particular institution receives attention on this site or receives one of the letters to be listed below typically will have little to do with the institution’s level of understanding of these issues.  For similar misunderstandings exist at essentially all institutions.  Institutions (and researchers) that indicate a recognition that determinations as to the size of a disparity between outcome rates may turn on the measure chosen may seem to reflect a greater understanding of the matter.[iv]  But unless such entities also recognize the way each measure is systematically affected by the overall prevalence of an outcome or the need to find a measure that is not so affected, their recognition that different measures may yield different results is of limited value.



[i]  See Irreducible Minimums sub-page of MHD and the Truncation Issues sub-page of SR with regard to some variations on these patterns in particular settings. 

[ii]  As discussed in the introduction to the Solutions sub-page of MHD and in the February 23, 2009 update to the Comment on Morita, a probit analysis yields the same results as the more mechanically derived estimated effect size (EES) described on the Solutions sub-page of MHD and thus is theoretically unaffected by the overall prevalence of an outcome.  The points made on the Solutions page regarding the strengths and weaknesses of the EES apply to the probit analysis as well.  See also the Truncation Issues sub-page of SR and the Cohort Considerations sub-page of MHD.
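The underlying idea of such a prevalence-neutral measure can be sketched briefly (my reconstruction for illustration; the EES procedure itself is described on the Solutions sub-page): treat each group’s rate as a tail area of a standard normal distribution and take the difference between the implied z-scores.  Because z(1 − p) = −z(p), the result is identical whether computed from favorable or adverse rates.

```python
# Minimal sketch of a probit-style effect size: the difference between
# the normal quantiles implied by two groups' rates.  By symmetry it is
# the same for an outcome and its opposite.
from statistics import NormalDist

def effect_size(rate_a: float, rate_b: float) -> float:
    """Difference in implied standard-normal z-scores for two rates."""
    z = NormalDist().inv_cdf
    return z(rate_b) - z(rate_a)

# Adverse rates of 6.68% and 15.87% ...
ees_adverse = effect_size(0.0668, 0.1587)
# ... correspond to favorable rates of 93.32% and 84.13%.
ees_favorable = effect_size(1 - 0.1587, 1 - 0.0668)
print(round(ees_adverse, 3), round(ees_favorable, 3))  # both ≈ 0.5
```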

[iii]  A statement that the second figure is two times greater or two times higher than the first figure would also be correct.  As discussed on the Times Higher/Times Greater sub-page of the Vignettes page of this site, however, a statement that the second figure is three times greater or higher than the first (the predominant usage in most scientific journals) would be incorrect.  As discussed on the Percentage Points sub-page of the Vignettes page, a statement that the second figure is 2% greater than the first, whether incorrect or not, should be discouraged.  But these are different issues from those addressed on this page. 

[iv] But see page 9 and Section D of the Harvard University Measurement Letter regarding reasons why those who express an awareness of the way various measures yield different results may betray a fundamental misunderstanding of the purpose of an inquiry into the forces causing the rates of two groups to differ.