## [Machine Learning and Biases][1]
Machine learning offers great possibilities for research and practice. However, there are also examples where machine learning has “gone wrong”. For instance, Microsoft’s chatbot “Tay” learned to communicate with humans based on Twitter messages. Unfortunately, within one day it produced racist and sexist comments, which it had learned from the influx of data from Twitter. This example highlights a potential pitfall: algorithms may learn human biases from their training data. This module should make students aware of this pitfall and should highlight the role that psychologists can play in the development of novel machine learning algorithms.
[1]:https://osf.io/yf3xh/