A Mirror of Our Society: Unveiling Gender Stereotypes in AI-Generated Academic Imagery

    By Adam (STEM for ALL team)

    In the rapidly evolving world of AI-generated art, tools like DALL-E are breaking new ground. However, as with any technology, they're not free from biases, a fact we recently encountered in a simple experiment. Our goal was to explore how DALL-E interprets gender roles in educational settings. The results were both intriguing and a bit concerning.

    We used two prompts: 'Please generate a male teacher teaching a class' and 'Please generate a female teacher teaching a class.' The differences in the outputs were subtle yet telling. The images of female teachers often showed them in front of blackboards filled with humanities-related content: the boards were more colorful, adorned with pictures, and seemed to convey a playful, creative learning environment. In contrast, the male-teacher images typically featured STEM subjects such as mathematics or chemistry, with blackboards that were less colorful and leaned towards a more 'rational', 'analytical' appearance.

    This observation raises questions about the inherent biases in AI algorithms. It seems DALL-E, perhaps reflecting the biases in its training data, associates women with humanities and a more nurturing, creative approach to teaching, while men are linked with STEM fields and a more analytical style. This is a classic example of gender stereotyping, inadvertently perpetuated by AI.

    The implications are significant. As AI becomes more integrated into our lives, the biases it carries can reinforce outdated stereotypes, affecting how we perceive and interact with each other. It's crucial for developers and users alike to be aware of these biases and work towards more balanced, equitable representations in AI-generated content.


    Following our initial exploration into gender biases in AI-generated images of teachers, we decided to delve deeper. This time, we used a more neutral prompt: 'generate a professor presenting in a lecture hall.' The results were eye-opening and further highlighted the gender bias issue in AI imagery.

    We generated 30 images in total, opening a new chat for each one. Upon reviewing the images generated by DALL-E, a clear pattern emerged: the majority of 'professor' images depicted men. Of the 30 pictures, 20 showed male professors and only 10 showed female professors! This was intriguing, as the prompt was gender-neutral, yet the AI disproportionately favored male representations. This outcome suggests that the AI associates the term 'professor' more strongly with men, revealing an underlying bias in its training data or model.

    When we tried a similar experiment within a single chat, generating pictures over and over again, the results were more balanced: out of 22 pictures generated, 14 depicted male professors and 8 depicted female professors. This may be a consequence of repeating the prompt in the same chat; as DALL-E recognizes that it is being asked the same thing again and again, it tries to come up with new ideas and settings, and more frequently also changes the gender of the professor. Even then, the first pictures were still most likely to show male professors.
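    As a rough sanity check (not part of the original experiment), one can ask how likely these male-heavy splits would be if DALL-E chose the professor's gender like a fair coin. Under that assumed 50/50 baseline, the exact binomial tail probabilities can be computed with Python's standard library alone:

```python
from math import comb

def upper_tail(successes: int, trials: int, p: float = 0.5) -> float:
    """Probability of observing at least `successes` out of `trials`
    under a binomial model with per-trial success probability `p`."""
    return sum(comb(trials, k) * p**k * (1 - p)**(trials - k)
               for k in range(successes, trials + 1))

# Fresh-chat experiment: 20 of 30 images showed a male professor.
p_fresh = upper_tail(20, 30)   # ~0.049

# Single-chat experiment: 14 of 22 images showed a male professor.
p_single = upper_tail(14, 22)  # ~0.143

print(f"P(>=20 male of 30 | fair): {p_fresh:.3f}")
print(f"P(>=14 male of 22 | fair): {p_single:.3f}")
```

    With samples this small the numbers are only suggestive: the fresh-chat split (about a 5% one-sided tail) is borderline on its own, while the single-chat split could easily arise by chance. The consistent direction of the skew, rather than any single count, is what points towards bias.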

    The implications of this finding are significant, especially in the context of academia, where women have historically been underrepresented, particularly in higher-ranking positions. When AI, which is increasingly used in educational and professional settings, perpetuates such stereotypes, it risks reinforcing outdated notions about who can be an expert or a leader in academic fields.