CS 5/662, Winter 2021
Week 5 / Wed, Feb 3
Project proposal presentations!
Raji, D. (2020) “How our data encodes systematic racism.” MIT Technology Review, Dec. 10, 2020
Blodgett, S. L., Barocas, S., Daumé III, H., & Wallach, H. (2020). Language (Technology) is Power: A Critical Survey of “Bias” in NLP. Proc. ACL 2020
Shah, D. S., Schwartz, H. A., & Hovy, D. (2020). Predictive Biases in Natural Language Processing Models: A Conceptual Framework and Overview. Proc. NAACL 2020, pp. 5248–5264.
Kurita, K., Vyas, N., Pareek, A., Black, A. W., & Tsvetkov, Y. (2019). Measuring Bias in Contextualized Word Representations. Proceedings of the First Workshop on Gender Bias in Natural Language Processing, Florence, Italy.
Webster, K., Recasens, M., Axelrod, V., & Baldridge, J. (2018). Mind the GAP: A Balanced Corpus of Gendered Ambiguous Pronouns. Transactions of the Association for Computational Linguistics, 6, 605–617. http://doi.org/10.1162/tacl_a_00240
There has been so much good stuff written on this subject; if you want to go deeper, here are some places that Dr. Agrawal suggests starting:
May, C., Wang, A., Bordia, S., Bowman, S. R., and Rudinger, R. (2019). On measuring social biases in sentence encoders. Proc. NAACL 2019
Nadeem, M., Bethke, A., and Reddy, S. (2020). StereoSet: Measuring stereotypical bias in pretrained language models. arXiv preprint arXiv:2004.09456.
Manzini, T., Lim, Y. C., Tsvetkov, Y., and Black, A. W. (2019). Black is to criminal as Caucasian is to police: Detecting and removing multiclass bias in word embeddings. Proc. NAACL 2019
Lauscher, A. and Glavaš, G. (2019). Are we consistently biased? Multidimensional analysis of biases in distributional word vectors. In Proceedings of the Eighth Joint Conference on Lexical and Computational Semantics
Park, J. H., Shin, J., and Fung, P. (2018). Reducing gender bias in abusive language detection. Proc. EMNLP 2018.
Tan, Y. C. and Celis, L. E. (2019). Assessing social and intersectional biases in contextualized word representations. Proc. NeurIPS 2019
Yang, K., Qinami, K., Fei-Fei, L., Deng, J., and Russakovsky, O. (2020). Towards fairer datasets: Filtering and balancing the distribution of the people subtree in the ImageNet hierarchy. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, pp. 547–558
Zhao, J., Wang, T., Yatskar, M., Ordonez, V., and Chang, K.-W. (2018). Gender bias in coreference resolution: Evaluation and debiasing methods. In Proc. NAACL 2018