Data and model repository for HateBERT. If you use this model, please cite the accompanying paper:

@inproceedings{caselli-etal-2021-hatebert,
    title = "{H}ate{BERT}: Retraining {BERT} for Abusive Language Detection in {E}nglish",
    author = "Caselli, Tommaso and
      Basile, Valerio and
      Mitrovi{\'c}, Jelena and
      Granitzer, Michael",
    booktitle = "Proceedings of the 5th Workshop on Online Abuse and Harms (WOAH 2021)",
    month = aug,
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.woah-1.3",
    doi = "10.18653/v1/2021.woah-1.3",
    pages = "17--25",
    abstract = "We introduce HateBERT, a re-trained BERT model for abusive language detection in English. The model was trained on RAL-E, a large-scale dataset of Reddit comments in English from communities banned for being offensive, abusive, or hateful that we have curated and made available to the public. We present the results of a detailed comparison between a general pre-trained language model and the retrained version on three English datasets for offensive, abusive language and hate speech detection tasks. In all datasets, HateBERT outperforms the corresponding general BERT model. We also discuss a battery of experiments comparing the portability of the fine-tuned models across the datasets, suggesting that portability is affected by compatibility of the annotated phenomena.",
}
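As a minimal sketch of how a checkpoint like this could be used, the snippet below loads a HateBERT-style model with the Hugging Face transformers library. The hub identifier "GroNLP/hateBERT" and the example input are assumptions, not part of this repository's documentation; a local path to the downloaded weights also works. Note that the retrained checkpoint is a masked language model, so the classification head created here is freshly initialized and would still need fine-tuning on a labeled offensive-language dataset before its predictions are meaningful.

```python
# Minimal sketch, assuming the checkpoint is available either on the
# Hugging Face Hub under "GroNLP/hateBERT" (an assumption) or as a local
# directory containing the downloaded model files.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "GroNLP/hateBERT"  # assumed hub id; replace with a local path if needed

tokenizer = AutoTokenizer.from_pretrained(model_name)
# num_labels=2 creates a fresh binary classification head on top of the
# retrained encoder; it must be fine-tuned before use.
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

inputs = tokenizer("example comment to classify", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.softmax(dim=-1))
```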