Posted: January 29, 2021 02:21 PM
From: Representative Brian Sims
To: All House members
Subject: Reducing Bias in Automated Decision Systems
Algorithms and computer programs used to make decisions for businesses and governments, known as Automated Decision Systems (ADS), have become increasingly common in recent years. While ADS technology aims to increase productivity, the biases of programmers can become entrenched in these systems, and biased decisions can have serious consequences. Facial recognition technology has contributed to wrongful arrests by some police departments, and predictive risk algorithms have produced incorrect and racially biased predictions about the threat posed by people charged with crimes.
This issue was brought to my attention by stakeholders who are concerned that the use of ADS technology is increasing rapidly without oversight, accountability, or concern for privacy and other rights. Most recently, an algorithm utilized by Stanford Medicine to decide who should receive the first COVID-19 vaccine doses ignored almost all of the medical residents who work in close contact with COVID-19 positive patients. Instead, attending physicians who had minimal or no contact with patients were prioritized by the algorithm, putting the health and safety of medical residents at risk.
My legislation seeks to provide transparency and accountability. This proposal would create a task force to identify where and how ADS technology is being used by government entities in Pennsylvania, examine the effects of its use and whether marginalized communities are disproportionately impacted, and recommend methods of oversight. Please join me in cosponsoring this important legislation.
Introduced as HB1338