Category: Project
Description: Accountability toolkits have been developed to determine the extent to which new technologies are biased against certain individuals on the basis of sex, race, or other protected characteristics. However, tools and implementation strategies that feel ‘forced’ (i.e., motivation through control) will be met with resistance, opposition, and ultimately ‘cheating’, for example developers and users doing the bare minimum to engage with toolkits or otherwise attempting to undermine their success. In both the short and long term, we can greatly increase the efficacy of accountability toolkits, and thus make systems more accountable and trustworthy, by convincing developers to ‘buy in’ to the importance of the toolkits being developed, so that they are widely and appropriately used. This project aims to produce a concrete communication strategy that drives toolkit buy-in, and therefore investment in accountability and trustworthiness, among students, trainees, developers, and non-technical users. In this experiment, we will trial communications around the values that underlie accountability toolkits (namely transparency when communicating about bias) in the domain of healthcare AI.