DFEH Holds Civil Rights Hearing on Algorithms and Bias

May 6, 2021

For Immediate Release


Will Consider Whether Regulatory Changes Are Needed

SACRAMENTO – The California Fair Employment and Housing Council held a hearing on Friday, April 30, to examine how state law can reduce the risk that algorithmic decision making will perpetuate or cause discrimination and inequality in the areas of employment, housing, lending, and healthcare. The Council is part of the Department of Fair Employment and Housing (DFEH), California’s civil rights agency.

“Given how algorithms are increasingly present in our day-to-day lives and at the same time invisible to most of us, it was essential for the Council to examine how the use of this technology may impact the civil rights of Californians,” said Councilmember Hellen Hong. Among other uses, algorithms screen applicants for jobs or apartments, evaluate work performance and promotions, approve or deny loans and set loan terms, and inform healthcare delivery. Depending on the algorithm and the data used, these technologies risk perpetuating discrimination and inequality on the basis of race, disability, and other protected characteristics in ways that are difficult to detect.

“The hearing demonstrated that algorithms do not necessarily undermine civil rights, but some do. Companies creating or using algorithms should take proactive steps to mitigate these technologies’ harmful effects, and existing and new laws could help ensure that algorithms advance rather than undermine civil rights,” said Councilmember Tim Iglesias.

During the hearing, experts discussed numerous ways that algorithms make employment, housing, lending, and healthcare decisions and can perpetuate existing biases and inequalities. The speakers also presented ideas to address these concerns. Members of the public spoke and submitted comments to the Council. Session one of the hearing addressed employment. Session two addressed housing, lending, and healthcare. The hearing is available on DFEH’s YouTube channel: https://www.youtube.com/watch?v=IQ_6f9lMUfU

Highlights from the hearing included:

  • Aaron Rieke, Managing Director at Upturn, gave an overview of how the use of algorithms impacts employment opportunities. He discussed how algorithms are involved in all aspects of the hiring process, including who will actually see online job postings. Mr. Rieke cautioned that “any system that runs on machine learning without intervention is going to reproduce” bias in how online job advertisements are disseminated. Mr. Rieke suggested that policymakers address bias by regulating all aspects of the hiring process, including recruiting, online assessments and personality tests (which many companies use to screen applicants), and even the software that employers use to track applicants and employees.
  • Pauline Kim, Professor at Washington University in St. Louis School of Law, addressed how the predictions made by algorithms about applicants and employees may have little relationship to whether an applicant is suited for a particular job. “This process is not informed by careful study as to what factors are actually relevant to doing a job. Instead a computer just looks to what data is available and finds patterns visible in that data. As a result, what a model will predict depends heavily on the data it is exposed to,” said Professor Kim. Therefore, if an algorithm uses data that is biased, it will replicate that bias in the predictions it makes. Professor Kim cited the example of an algorithm a company used to identify the best candidates for a software developer position, which relied on data about current employees who were overwhelmingly male. As a result of using this biased information, the algorithm downgraded the resumes of women and favored male applicants.
  • Lydia X. Z. Brown, Policy Counsel with the Center for Democracy and Technology, discussed how the use of algorithms in hiring can have a particularly negative impact on people with disabilities: “Firstly, many algorithm-driven hiring tools are inaccessible to people with disabilities because they use tests in formats that disabled people cannot use. Secondly, many algorithm-driven hiring tools tend to unfairly screen out disabled applicants, either individually or in groups, for reasons unrelated to the job,” Brown explained. As a result, some of the algorithmic tools used by employers may violate existing civil rights laws, including the Americans with Disabilities Act and California’s Fair Employment and Housing Act.
  • Eric Dunn, Litigation Director at the National Housing Law Project, discussed how algorithms are used to screen applicants for housing and how these tools can have a significant discriminatory impact on protected groups. Mr. Dunn explained that “automated screening processes tend to produce profound errors invisible to consumers,” because screening algorithms use criminal and credit history that is often riddled with errors. Even if an applicant receives a copy of the criminal or credit history relied upon to deny them admission, the report usually doesn’t contain enough information for the applicant to dispute its accuracy. Some screening technology goes even further by providing a rating or suggestion to a housing provider as to whether they should accept or reject an applicant. Mr. Dunn further stated that these automated decisions often produce “arbitrary, not evidence based, admissions decisions,…with little predictive value as to the applicant’s suitability for future tenancy.”
  • Maeve Elise Brown, Executive Director at Housing and Economic Rights Advocates, discussed how algorithmic decision making may perpetuate discrimination in lending decisions, including whether someone is given a loan and what the terms of that loan will be. The lending criteria built into algorithmic decision making may actually be “proxies for race and gender that appear facially neutral but may result in targeting of particular lending decisions (denials or higher pricing) based on personal characteristics, directed towards legally protected groups,” Brown said. “Even unintentionally, designers of decision-making algorithms may choose a combination of factors that has a negative disparate impact that violates fair credit and fair housing laws,” she added.
  • Robert Bartlett, Professor at the University of California, Berkeley, School of Law, discussed a study he co-authored that found minority borrowers pay significantly higher interest rates than non-minority borrowers, even when both groups have the same credit scores. The difference “adds up to roughly $450 million more in interest paid per year as a result of what we believe to be impermissible discrimination,” said Professor Bartlett. These differences were seen in both in-person lending decisions and algorithmic lending decisions. Although the algorithmic decisions may be based on permissible characteristics, such as a borrower’s level of education, a person’s level of education may correlate with protected characteristics such as race, leading to what Bartlett calls an “unintended discriminatory effect.”
  • Ziad Obermeyer, a physician and Professor of Public Health at the University of California, Berkeley, discussed how algorithmic decision making, which is used to analyze roughly 150 to 200 million patients per year, can impact health outcomes. In a study co-authored by Obermeyer, the researchers analyzed how algorithms were used to identify the most at-risk patients in order to target them with intensive preventative healthcare. The study found that the algorithm consistently underestimated the healthcare risk of Black patients. “Right now the number of Black patients in the high risk program is 18% . . . but if you address the true differences in health needs this group should actually be half Black,” said Professor Obermeyer. The researchers found that this disparity arose because the algorithm was programmed to predict healthcare risk using data about the cost of healthcare per patient, a variable that does not account for racial disparities in the provision of healthcare and therefore produced biased predictions. Professor Obermeyer warned that “getting that target variable for the algorithm right is very important, yet it is almost decided as an afterthought by data science teams.”

Following the hearing, the Council will consider whether changes to regulations implementing California’s civil rights laws are needed, and DFEH will take additional actions as appropriate.

###

The California Civil Rights Department (CRD) is the state agency charged with enforcing California’s civil rights laws. CRD’s mission is to protect the people of California from unlawful discrimination in employment, housing, public accommodations, and state-funded programs and activities, and from hate violence and human trafficking. For more information, visit calcivilrights.ca.gov.


651 Bannon Street, Suite 200
Sacramento, CA 95811
800-884-1684 (voice), 800-700-2320 (TTY) or
California's Relay Service at 711
contact.center@calcivilrights.ca.gov