## [Machine Learning and Biases][1]

Machine learning offers great possibilities for research and practice. However, there are also examples where machine learning has "gone wrong". For instance, Microsoft's chatbot "Tay" learned to communicate with humans based on Twitter messages. Unfortunately, within one day it produced racist and sexist comments, which it had learned from the influx of Twitter data. This example highlights a potential pitfall: algorithms may learn human biases from their training data. This module should make students aware of this pitfall and highlight the role that psychologists can play in the development of novel machine learning algorithms.

[1]: https://osf.io/yf3xh/
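To make the pitfall concrete, here is a minimal sketch (not from the module itself) of how a trivial model reproduces a skew present in its training data. The toy corpus, the `predict_pronoun` helper, and the occupation–pronoun pairing are all invented for illustration:

```python
from collections import Counter

# Hypothetical toy corpus: the "training data" carries a human bias
# (an occupation is disproportionately paired with one pronoun).
corpus = [
    ("doctor", "he"), ("doctor", "he"), ("doctor", "he"),
    ("doctor", "she"),
    ("nurse", "she"), ("nurse", "she"), ("nurse", "she"),
    ("nurse", "he"),
]

def predict_pronoun(word, data):
    """Naive 'model': predict the pronoun most often paired with `word`."""
    counts = Counter(p for w, p in data if w == word)
    return counts.most_common(1)[0][0]

# The model simply mirrors the skew in its data, not any ground truth.
print(predict_pronoun("doctor", corpus))  # -> 'he'
print(predict_pronoun("nurse", corpus))   # -> 'she'
```

Real systems such as Tay are vastly more complex, but the underlying mechanism is the same: a model optimized to match its data will also match the biases embedded in that data.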