Artificial Intelligence (AI) has become an integral part of our daily lives, with AI tools shaping the way we interact with technology. However, a recent report led by researchers from UCL has shed light on a troubling aspect of AI: gender bias. The study, commissioned by UNESCO, examined Large Language Models (LLMs) and uncovered significant bias against women, as well as against people of different cultures and sexualities, in the content these models generate.
The findings of the report revealed a disturbing trend of gender stereotyping in content generated by popular AI models such as OpenAI's GPT-3.5 and GPT-2, and Meta's Llama 2. Female names were consistently associated with words like "family," "children," and "husband," reinforcing traditional gender roles. In contrast, male names were linked to words such as "career," "executives," and "business," highlighting a clear bias towards men in AI-generated content.
Beyond gender bias, the study also uncovered evidence of stereotyped notions based on culture and sexuality in AI-generated text. The platforms tended to assign high-status jobs like “engineer” and “doctor” to men, while women were often relegated to roles like “domestic servant,” “cook,” and “prostitute.” Stories about men were associated with adventurous themes like “treasure,” “woods,” and “sea,” whereas stories about women focused on domestic settings like “garden,” “love,” and “husband.”
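The report's exact methodology is not described here, but as a rough illustration, the sketch below shows one way such word associations could be probed: prompting an openly available model (GPT-2, via the Hugging Face transformers library) with otherwise identical story prompts about a woman and a man, then counting occurrences of keywords from two illustrative theme lists in the generated text. The prompts, names, and keyword buckets are assumptions made for this example, not the study's actual protocol.

```python
# Minimal sketch of probing gendered word associations in an LLM.
# Illustration only: it does not reproduce the UNESCO report's methodology,
# and the prompts, names, and keyword lists below are assumptions.
from collections import Counter
from transformers import pipeline

# GPT-2 is one of the openly available models examined in the report.
generator = pipeline("text-generation", model="gpt2")

PROMPTS = {
    "female": "Write a short story about a woman named Maria. Maria",
    "male": "Write a short story about a man named John. John",
}

# Hypothetical keyword buckets for a rough thematic comparison.
DOMESTIC = {"family", "children", "husband", "home", "garden", "love"}
CAREER_ADVENTURE = {"career", "business", "executive", "engineer",
                    "doctor", "treasure", "woods", "sea"}

for label, prompt in PROMPTS.items():
    # Sample several continuations so keyword counts are less noisy.
    outputs = generator(prompt, max_new_tokens=60,
                        num_return_sequences=20, do_sample=True)
    words = Counter()
    for out in outputs:
        for token in out["generated_text"].lower().split():
            words[token.strip(".,!?\"'")] += 1
    domestic_hits = sum(words[w] for w in DOMESTIC)
    career_hits = sum(words[w] for w in CAREER_ADVENTURE)
    print(f"{label}: domestic-themed words={domestic_hits}, "
          f"career/adventure-themed words={career_hits}")
```

A systematic study would use many more prompts, names, and samples and apply statistical tests, but even a toy comparison like this can surface the kind of skew the report describes.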
Dr. Maria Perez Ortiz, one of the authors of the report, emphasized the need for an ethical overhaul in AI development to address the deeply ingrained gender biases within LLMs. As a woman in tech, Dr. Perez Ortiz advocated for AI systems that reflect the diversity of the human population and uplift gender equality. The team behind the UNESCO Chair in AI at UCL, working with UNESCO, aims to raise awareness of the issue and to develop solutions through joint workshops and events with relevant stakeholders.
Professor John Shawe-Taylor, the lead author of the report, underscored the importance of a global effort to address AI-induced gender biases. The study not only exposes existing inequalities but also sets the stage for international collaboration in creating AI technologies that uphold human rights and gender equity. By steering AI development towards a more inclusive and ethical direction, UNESCO is committed to ensuring a fair and equal representation of all genders in the field of artificial intelligence.
The report, presented at the UNESCO Digital Transformation Dialogue Meeting and at United Nations headquarters, highlights the urgent need to address gender bias in AI. As we strive for technological advancement and innovation, it is imperative to prioritize gender equity and diversity in AI development. Just because women have been underrepresented in certain fields in the past does not mean they are any less capable. It is time to challenge the stereotypes embedded in AI systems and create a more inclusive and equitable future for all.