To combat bias, we need systems that provide transparency in algorithms. These systems would report what an algorithm decided and why it made its decision, covering everything from apparent missteps rooted in cultural bias to biases that are more subtle.
This workshop will go over the first steps you can take with new open source tools from IBM and Google, AI Fairness 360 and the What-If Tool respectively, as well as the other topics listed below.
- Let's talk about machines and automation (30 minutes)
- Algorithmic changes in the workplace (30 minutes)
- Overview: What is algorithmic fairness and why should you care? (30 minutes)
- Organizations working in the field (30 minutes)
- Examination of specific algorithms in production and how they have been unfair (30 minutes)
- Company efforts and tools to mitigate bias (1 hour)
- Hands-On Lab (1.5 hours)
- Stakeholders: who is responsible, and for what? (1 hour)
- Data and accountability + feasibility: who understands the data piped into these algorithms, and why do data analysts + data scientists matter? (30 minutes)
- Algorithmically enhanced machines: when hardware and software combine (30 minutes)
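To give a flavor of what the hands-on lab works toward, here is a minimal pure-Python sketch of disparate impact, one of the group-fairness metrics that toolkits like AI Fairness 360 expose. The hiring data, group labels, and the four-fifths threshold mentioned in the comments are illustrative assumptions, not taken from the workshop materials or the library's API.

```python
# Minimal sketch of a group-fairness check, similar in spirit to the
# metrics AI Fairness 360 computes. Plain Python; data are made up.

def selection_rate(outcomes):
    """Fraction of favorable (1) outcomes in a group."""
    return sum(outcomes) / len(outcomes)

def disparate_impact(unprivileged, privileged):
    """Ratio of the unprivileged group's selection rate to the
    privileged group's. Values far below 1.0 (often below ~0.8,
    the informal 'four-fifths rule') can flag unfair treatment."""
    return selection_rate(unprivileged) / selection_rate(privileged)

# Hypothetical hiring decisions (1 = hired) for two groups.
group_a = [1, 0, 1, 1, 0, 1, 1, 0, 1, 1]  # privileged group: 70% hired
group_b = [1, 0, 0, 1, 0, 0, 1, 0, 0, 0]  # unprivileged group: 30% hired

di = disparate_impact(group_b, group_a)
print(f"Disparate impact: {di:.2f}")  # 0.30 / 0.70 -> 0.43
```

A ratio this far from 1.0 is the kind of signal the lab explores: the metric does not prove bias by itself, but it tells you where to look in the data and the model.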