How to Use AI Ethically and Inclusively for Trans Representation
15 May 2023
As brand directors, advertising, PR and digital marketing agency professionals, we have a responsibility to consider the ethical implications of our work, particularly when it comes to the representation of marginalised communities. One area where this is particularly important is in the use of artificial intelligence (AI) in our marketing and advertising efforts.
The use of AI in marketing and in brands' relationships with customers is becoming increasingly prevalent, with many companies using it to target and personalise advertising and to analyse consumer data. However, there are concerns that this technology can perpetuate harmful stereotypes and biases. This is a particular issue when it comes to the representation of trans people.
One major issue is that AI algorithms are only as unbiased as the data they are trained on. If the data used to train an algorithm is biased, the algorithm will likely produce biased results. This is a particular concern when it comes to trans people, as there is a long history of marginalisation and discrimination within society, which can be reflected in the data used to train AI algorithms.
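To make this concrete, here is a deliberately tiny, hypothetical sketch (the data, features and labels are all invented for illustration) of how a skew in training data becomes a skew in a model's behaviour:

```python
# A minimal, hypothetical sketch: a toy "classifier" that predicts whichever
# gender label was most common for a given feature in its training data.
from collections import Counter, defaultdict

# Invented training set of (feature, label) pairs. Trans women are present
# but heavily outnumbered, so the majority label for this feature is wrong for them.
training_data = (
    [("feature_x", "man")] * 90      # cis men who share this feature
    + [("feature_x", "woman")] * 10  # trans women who share the same feature
)

def train_majority_classifier(records):
    """Learn the most frequent label per feature -- the crudest possible model."""
    by_feature = defaultdict(Counter)
    for feature, label in records:
        by_feature[feature][label] += 1
    return {feature: counts.most_common(1)[0][0] for feature, counts in by_feature.items()}

model = train_majority_classifier(training_data)
print(model["feature_x"])  # -> "man": everyone in the minority group gets misgendered
```

Real systems are far more sophisticated than this, but the underlying dynamic is the same: whatever imbalance sits in the data tends to resurface in the predictions.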
This was highlighted in a 2018 study by researchers at MIT and Stanford University, which found that three commercially available facial analysis systems had much higher error rates for trans people than for cisgender people. The study found that the algorithms were more likely to misgender trans people, and that the error rate was particularly high for trans women.
The researchers noted that “this problem is not just the result of technical limitations, but also reflects societal biases that are built into the data used to train the algorithms.”
Another issue is that AI algorithms can perpetuate potentially harmful gender stereotypes and biases in the way that they represent trans people in advertising and marketing. For example, a 2018 study by the University of California, Berkeley, found that AI algorithms used in online advertising were more likely to show job ads for “masculine” roles to trans men and “feminine” roles to trans women.
The study’s authors note that this is “an example of how artificial intelligence can perpetuate and even amplify existing societal biases, leading to further marginalisation of marginalised groups.”
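Agencies running their own campaigns can check for this kind of skew directly. The sketch below is a hypothetical audit of ad-impression logs; the field names and role categories are assumptions, and a real export from an ad platform would look different, but the principle of comparing the mix of roles served to each audience segment holds:

```python
# A hypothetical audit of ad-impression logs: are different role categories being
# served to different audience segments? Field names and categories are invented.
from collections import Counter, defaultdict

impressions = [
    {"audience": "trans men",   "role_category": "engineering"},
    {"audience": "trans men",   "role_category": "engineering"},
    {"audience": "trans women", "role_category": "care work"},
    {"audience": "trans women", "role_category": "engineering"},
    # ... in practice, thousands of rows exported from the ad platform
]

def role_mix_by_audience(rows):
    """Proportion of each role category served to each audience segment."""
    counts = defaultdict(Counter)
    for row in rows:
        counts[row["audience"]][row["role_category"]] += 1
    return {
        audience: {role: n / sum(c.values()) for role, n in c.items()}
        for audience, c in counts.items()
    }

for audience, mix in role_mix_by_audience(impressions).items():
    print(audience, mix)  # large, systematic gaps between segments are a red flag
```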
It is our responsibility as marketing, advertising and PR professionals to consider these ethical implications and take steps to ensure that our use of AI does not perpetuate discriminatory or sexist stereotypes and biases.
One way to do this is to ensure that the data used to train AI algorithms is diverse and inclusive, and that it includes a wide range of perspectives and experiences. This can be achieved by working with trans-led organisations and advocates to gather data and ensure that it is representative of the trans community.
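In practice, a first step can be as simple as counting who is actually in the training data. The following sketch assumes a CSV of labelled records with a self-described gender identity column (the file name and column name are hypothetical); the aim is to make under-representation visible before a model is ever trained:

```python
# A minimal representation audit. It assumes a CSV of labelled training records
# with a self-described "gender_identity" column; the file name and column name
# are hypothetical. The goal is simply to make under-representation visible.
import csv
from collections import Counter

def representation_report(path, column="gender_identity"):
    """Return the share of records for each value of the given column."""
    with open(path, newline="", encoding="utf-8") as f:
        counts = Counter(row[column] for row in csv.DictReader(f))
    total = sum(counts.values())
    return {identity: count / total for identity, count in counts.items()}

# Example (hypothetical output):
# print(representation_report("training_data.csv"))
# {'cis woman': 0.47, 'cis man': 0.46, 'trans woman': 0.03, 'trans man': 0.02, ...}
```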
Another important step is to be transparent about the data and methods used to train AI algorithms, and to regularly evaluate and test the algorithms to ensure that they are not producing biased results.
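One practical form this evaluation can take is disaggregated testing: reporting error rates for each group separately rather than a single overall accuracy, so that a model which fails mainly for trans people cannot hide behind a good average. A minimal sketch, using placeholder data in place of a real model's predictions, might look like this:

```python
# A hedged sketch of disaggregated evaluation: report the error rate for each
# group separately instead of one overall accuracy figure. The example triples
# below are placeholders for a real model's predictions on a labelled test set.
from collections import defaultdict

def error_rates_by_group(examples):
    """examples: iterable of (group, true_label, predicted_label) triples."""
    totals, errors = defaultdict(int), defaultdict(int)
    for group, truth, prediction in examples:
        totals[group] += 1
        if prediction != truth:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}

results = [
    ("cis woman",   "woman", "woman"),
    ("trans woman", "woman", "man"),    # a misgendering error
    ("trans woman", "woman", "woman"),
    ("cis man",     "man",   "man"),
]
print(error_rates_by_group(results))
# A persistent gap between groups is exactly the kind of disparity the Gender
# Shades study flagged, and it should be investigated before anything ships.
```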
Additionally, brands and agencies can actively work to challenge and disrupt harmful stereotypes and biases in all advertising and marketing efforts. This can include creating campaigns that accurately and positively represent trans people, and working with trans-specific organisations to ensure that these campaigns are inclusive and sensitive to the needs of the trans community.
It is also important to remember that trans people are not a monolithic group, and that each person’s experiences and needs are unique. Therefore, it is essential to work with a diverse range of trans people in our marketing and advertising efforts to ensure that we are representing the community in an accurate and respectful manner.
Another important consideration is diversity among the people who programme and develop AI algorithms. A lack of diversity across the profession makes biased output more likely, which is why it is so important to have more diverse representation among the coding professionals tasked with building the software. As Stephanie Hare argues in her book “Technology Is Not Neutral”, we need to consider ethics in the digital age and how technology affects all of us who do not fit the mainstream “average”.
Diverse human oversight of AI, and of all forms of marketing content, is essential if brands, advertising, PR and marketing professionals are to avoid accusations of tokenism, or embarrassing and potentially brand-damaging gaffes.
Ultimately, the use of AI in digital marketing has the potential to revolutionise the way that we connect with consumers and target our advertising efforts. However, it is important that we take a responsible and ethical approach to this technology, and that we consider the implications for marginalised communities, such as trans people. By working to ensure that our data is diverse and inclusive, being transparent about our methods, including input from the minority communities we seek to represent and connect with, and actively working to challenge harmful stereotypes, we can ensure this emerging technology is used ethically and inclusively.
References:
“Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification” — MIT Media Lab and Stanford University
“Mitigating Bias in Artificial Intelligence” — Center for Equity, Gender & Leadership, Berkeley Haas, University of California, Berkeley
“What happens if your agency does so much amazing hard work to create the most inclusive advert, but then a prop artist puts a hairbrush that’s unusable on a Black woman’s hair in your Black Actress’s bedroom and no one on set notices…until the advert goes live.” https://www.linkedin.com/feed/update/activity:7008365448933056512/
Need help ensuring your communications strategies are inclusive of a gender-diverse audience? Get in touch.