TITLE:
Developing a Large-Scale Language Model to Unveil and Alleviate Gender and Age Biases in Australian Job Ads
AUTHORS:
Ruochen Mao, Liming Tan, Rezza Moieni, Nicole Lee
KEYWORDS:
Gender Bias, Age Bias, Natural Language Generation, Large Language Models, Machine Learning, Natural Language Processing
JOURNAL NAME:
Open Journal of Social Sciences, Vol.12 No.6, June 17, 2024
ABSTRACT: This study explores the application of large-scale language models to detecting and reducing gender and age biases in job advertisements. To build gender and age bias detectors, we trained and tested several large-scale language models, including RoBERTa, ALBERT, and GPT-2, and found that RoBERTa performed best at detecting both types of bias. Our analysis based on these models revealed significant male bias in job ads, particularly in the information and communication technology, manufacturing, transportation and logistics, and services industries. Similarly, our analysis of age bias revealed a preference for younger applicants, with limited demand for older candidates. Furthermore, we explored natural language generation with ChatGPT as a way to mitigate gender bias in job advertisements. We generated two versions of each job ad: one adhering to gender-neutral language principles and the other intentionally incorporating feminizing language. Through user research, we evaluated how effectively each version attracted female candidates and reduced gender bias; both versions significantly reduced perceived gender bias and increased the ads' appeal to female candidates. The contributions of this study include an in-depth analysis of gender and age biases in Australian job advertisements, the development of gender and age bias detectors built on large-scale language models, and the exploration of ChatGPT-based natural language generation methods for mitigating gender bias. By addressing these biases, we contribute to a more inclusive and equitable job market.
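The abstract does not include implementation details for the bias detectors. As a minimal sketch, assuming the Hugging Face transformers and datasets libraries, a labeled corpus of job-ad text, and an illustrative three-way label scheme (male-coded / neutral / female-coded), fine-tuning RoBERTa as a bias classifier could look roughly like this:

    # Minimal sketch: fine-tuning RoBERTa as a gender-bias classifier for job-ad text.
    # The label scheme and the toy training examples are illustrative assumptions,
    # not taken from the paper.
    from datasets import Dataset
    from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                              TrainingArguments, Trainer)

    MODEL_NAME = "roberta-base"
    LABELS = ["male_coded", "neutral", "female_coded"]  # assumed label scheme

    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    model = AutoModelForSequenceClassification.from_pretrained(
        MODEL_NAME, num_labels=len(LABELS))

    # Toy examples standing in for an annotated job-ad corpus.
    train_data = Dataset.from_dict({
        "text": ["We need a dominant, competitive self-starter.",
                 "We welcome applicants from all backgrounds.",
                 "Looking for a nurturing, supportive team player."],
        "label": [0, 1, 2],
    })

    def tokenize(batch):
        # Pad/truncate ad text to a fixed length for batching.
        return tokenizer(batch["text"], truncation=True,
                         padding="max_length", max_length=128)

    train_data = train_data.map(tokenize, batched=True)

    args = TrainingArguments(output_dir="bias-detector", num_train_epochs=3,
                             per_device_train_batch_size=8)
    Trainer(model=model, args=args, train_dataset=train_data).train()

The same fine-tuning loop can be repeated with ALBERT or GPT-2 checkpoints to compare models, which mirrors the comparison reported in the abstract.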
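For the generation step, the abstract states only that ChatGPT produced a gender-neutral and a feminized version of each ad. The following is a hedged sketch using the OpenAI Python client; the prompts, model name, and example ad are assumptions for illustration, not the exact ones used in the study.

    # Minimal sketch of the ChatGPT-based rewriting step: producing a gender-neutral
    # and a deliberately feminized version of a job ad.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    PROMPTS = {
        "neutral": ("Rewrite the following job ad using gender-neutral language, "
                    "removing masculine-coded words while keeping the requirements intact."),
        "feminized": ("Rewrite the following job ad using feminine-coded, inclusive "
                      "language (e.g. collaborative, supportive) while keeping the "
                      "requirements intact."),
    }

    def rewrite_ad(ad_text: str, style: str) -> str:
        # Ask the chat model to rewrite the ad in the requested style.
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",  # assumed model; the paper says only "ChatGPT"
            messages=[{"role": "system", "content": PROMPTS[style]},
                      {"role": "user", "content": ad_text}],
        )
        return response.choices[0].message.content

    original_ad = "We are hunting for a dominant, competitive sales ninja..."
    neutral_ad = rewrite_ad(original_ad, "neutral")
    feminized_ad = rewrite_ad(original_ad, "feminized")

The two generated versions would then be shown to participants in the user study described above to measure perceived bias and appeal to female candidates.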