As awareness of how bias works continues to grow, it is natural to seek out frameworks and tools to address it.
The purpose of this talk is to walk through open source technology that helps testers and managers understand underlying bias in data science and AI systems. By using explainable AI tools and fairness frameworks, we will uncover blind spots in the planning of software products that negatively impact equality and equity. Building on feedback from the last couple of years, this year's talk will place an even larger focus on how to build these tools into prototypes and small teams. Fairness, responsibility, and transparency form an imperative mission for a culturally and bias-informed technologist community as we scale our products for even bigger purposes.