Detecting online hate is a complex task, and low-performing models have harmful consequences when used for sensitive applications such as content moderation. Emoji-based hate is an emerging challenge for automated detection. We present HatemojiCheck, a test suite of 3,930 short-form statements that allows us to evaluate performance on hateful language expressed with emoji. Using the test suite, we expose weaknesses in existing hate detection models. To address these weaknesses, we create the HatemojiBuild dataset using a human-and-model-in-the-loop approach. Models built with these 5,912 adversarial examples perform substantially better at detecting emoji-based hate, while retaining strong performance on text-only hate. Both HatemojiCheck and HatemojiBuild are publicly available in our GitHub repository. HatemojiCheck, HatemojiBuild, and the final Hatemoji Model are also available on HuggingFace.
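The human-and-model-in-the-loop approach mentioned above can be sketched in highly simplified form: annotators write candidate examples designed to fool the current model, and the examples the model misclassifies become new adversarial training data. The sketch below is purely illustrative and is not the authors' implementation; the toy keyword classifier and all names are hypothetical.

```python
# Illustrative sketch of one human-and-model-in-the-loop round.
# The classifier and examples are hypothetical stand-ins, not the
# actual Hatemoji model or data.

def toy_model(text: str) -> str:
    """A deliberately weak baseline: flags text only on a keyword match."""
    return "hateful" if "hate" in text.lower() else "not_hateful"

def adversarial_round(candidates, model):
    """Keep the human-written examples that the current model misclassifies."""
    fooled = []
    for text, gold_label in candidates:
        if model(text) != gold_label:
            fooled.append((text, gold_label))
    return fooled

# Annotators try to fool the model, e.g. with emoji-based hate that a
# text-keyword model cannot see.
candidates = [
    ("I hate this group", "hateful"),                       # caught by the keyword model
    ("those people are \U0001F40D\U0001F40D", "hateful"),   # emoji-based, fools it
    ("what a lovely day", "not_hateful"),                   # model gets this right
]

new_training_data = adversarial_round(candidates, toy_model)
```

Retraining on `new_training_data` and repeating the round against the improved model is what makes the collected examples progressively harder.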

How to cite: Hannah Rose Kirk, Bertram Vidgen, Paul Röttger, Tristan Thrush, Scott A. Hale. (2022). Hatemoji: A Test Suite and Adversarially-Generated Dataset for Benchmarking and Detecting Emoji-based Hate. In Proceedings of the 2022 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL 2022).

Published on July 8, 2022