Artificial Intelligence (AI) applications are demonstrating undeniable success and widespread adoption, but the problem of algorithmic bias has arisen: AI systems can produce unjust decisions. The problem is compounded by the fact that AI systems are opaque and unfamiliar to most people. Scientists and the public want to unlock the power of fast, intelligent computer systems, but they are understandably hesitant when these systems discriminate unfairly or simply make unexplained decisions.

This research will answer fundamental questions about the public's trust and mistrust of AI: How does the public feel about AI systems? What do they think of the shapers of these systems? And do these attitudes change as people gain experience with AI? The aim of this research is to directly measure public opinion regarding artificial intelligence systems and scientists and to test the hypothesis that exposure to interpretable AI leads to more positive attitudes.

The integration of AI systems into decision-making processes previously the sole domain of human judgment is still a relatively novel phenomenon. Public opinion is likely in a dynamic phase, and measuring how public attitudes toward AI evolve over the next few years is crucially important. This project will therefore compile monthly composite measures of trust in AI systems and scientists by surveying a representative sample of the US population. Additionally, much effort is currently being expended to make AI systems more interpretable. This effort is predicated on the untested assumption that negative attitudes toward AI are due to the complex and opaque nature of its underlying algorithms. The investigators will test the effect of firsthand experience with AI systems while experimentally controlling the level of transparency.
The goal is an explicit test of the theory that increased exposure to interpretable AI will decrease mistrust and other negative attitudes toward AI. With this research, the investigators will measure, and test a method to mitigate, mistrust of AI, and advance the conversation currently taking place across social science and engineering disciplines regarding how humanity should relate to a powerful new tool.