Kore Global recently participated in MERL Tech’s Gender, MERL & Artificial Intelligence (AI) working group launch, part of their Natural Language Processing (NLP) Community of Practice. The discussions sparked reflection on how AI intersects with gender-lens investing and its evaluation. This feels particularly timely given our recent and ongoing evaluation work with British International Investment (BII), Proparco, and FinDev Canada examining their gender-lens investment portfolios. While AI hasn’t yet emerged as a significant tool in these evaluations, it is likely to become increasingly relevant as more investment flows toward AI companies and as businesses across sectors integrate AI capabilities. Drawing on insights from our evaluation experience and recent work examining AI in research contexts (including our blog on handling AI-generated survey responses), we wanted to share some reflections on how AI might affect gender-lens investing and its evaluation going forward.
As artificial intelligence reshapes the investment landscape, it offers both opportunities and challenges for gender-smart business practices. While AI systems could help address a crucial gap in gender-lens investing by enhancing the collection and analysis of gender-disaggregated data – from pay gaps to promotion rates – their adoption requires careful consideration of embedded risks and biases.
AI in Gender-Smart Business Practices: Opportunities and Embedded Risks
AI in Hiring, Promotion, and Decision-making
Increasing evidence shows that companies with gender-diverse leadership tend to adopt more gender-smart business practices. However, as businesses increasingly integrate AI into their operations, we must carefully examine both the opportunities and risks this creates for gender and social equity.
Consider an AI system being trained to identify promotion-ready candidates using historical promotion data. If a company’s past twenty years of promotion data shows that 80% of senior leadership promotions went to white, able-bodied men, the AI system may learn to associate “promotion potential” with patterns more common in white male candidates’ profiles. For instance, it might give higher weight to the following:
- Uninterrupted career progression (e.g. disadvantaging those who took parental leave)
- Traditional full-time work patterns (e.g. overlooking those with flexible arrangements)
- Specific leadership styles historically associated with white male executives (e.g. assertiveness over collaboration)
- Educational backgrounds from institutions or fields historically dominated by white men
- Career paths that reflect greater access to informal mentorship networks (e.g. disadvantaging racial minorities, immigrants, and others without pre-established access to such professional networks)
Without explicit correction, the AI system treats these historical patterns as “successful” examples rather than recognising them as potential indicators of systemic bias. The system then replicates these biases when evaluating current candidates, effectively creating a self-reinforcing cycle where the AI identifies candidates similar to those historically promoted, who then receive promotions, and their data feeds back into the system as new “successful” examples, further entrenching the biased pattern. This is particularly problematic because AI systems can give these biased decisions an appearance of objectivity through complex algorithms and data analysis, making the discrimination harder to identify and challenge than when it comes from human decision-makers.
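The self-reinforcing cycle described above can be made concrete with a toy simulation. The sketch below is entirely hypothetical: it assumes a single proxy feature (a career gap, standing in for parental leave) and a naive model that simply learns historical promotion rates. The point is that when the model's own decisions are fed back in as new training data, the historical penalty against gap-takers persists across retraining rounds even though ability was identical by construction:

```python
import random

random.seed(0)

# Toy historical data: each record is (has_career_gap, promoted).
# By construction, ability is identical across groups; only the
# historical promotion decisions are biased.
history = []
for _ in range(1000):
    gap = random.random() < 0.4  # 40% of candidates took a career break
    # Biased historical labelling: gap-takers promoted far less often.
    promoted = random.random() < (0.10 if gap else 0.30)
    history.append((gap, promoted))

def promotion_rate(records, gap_value):
    group = [p for g, p in records if g == gap_value]
    return sum(group) / len(group)

# A naive model just learns the historical conditional rates and
# treats them as "promotion potential" scores.
model = {True: promotion_rate(history, True),
         False: promotion_rate(history, False)}
print(f"learned score, no career gap: {model[False]:.2f}")
print(f"learned score, career gap:    {model[True]:.2f}")

# Feedback loop: the model's scores drive the next promotion rounds,
# and those outcomes are appended as new "successful" examples.
for _round in range(3):
    new_cohort = []
    for _ in range(1000):
        gap = random.random() < 0.4
        promoted = random.random() < model[gap]  # model drives decisions
        new_cohort.append((gap, promoted))
    history.extend(new_cohort)
    model = {True: promotion_rate(history, True),
             False: promotion_rate(history, False)}

# The penalty against gap-takers survives every retraining round.
print(f"persistent gap penalty: {model[False] - model[True]:.2f}")
```

Nothing in the data distinguishes "took parental leave" from "lower potential", so the model has no way to unlearn the bias on its own; it would need an explicit correction of the kind the questions below probe for.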
To address these challenges while leveraging AI’s potential benefits, companies need to consider several critical questions:
- How do automated HR systems impact gender-diverse recruitment and retention?
- Are AI-powered performance evaluation tools designed with gender considerations and diversity, equity and inclusion in mind?
- Do workplace automation decisions consider differential impacts on women employees, employees of colour, employees with disabilities, as well as those at the intersections of multiple marginalised identities?
- What safeguards are in place to identify and correct bias in AI systems?
- How can AI be used to support rather than undermine gender-smart and socially equitable business practices?
Transparency and Disclosure
As companies increasingly integrate AI into their operations and reporting, questions arise about disclosure requirements. Should companies be encouraged or required to report which AI tools they use for metrics and analysis? This transparency could help impact investors make ethical investment decisions.
For instance, if a company uses AI services from providers known for problematic practices – such as military contracts or non-consensual data scraping – this could conflict with investors’ ethical investment principles or goals. The challenge is compounded by the complex web of AI service providers and their varying ethical standards. A company might be using multiple AI services across different operations, from HR analytics to customer service, without fully disclosing these relationships to investors. This lack of transparency means impact investors could unknowingly be channelling capital to companies that rely heavily on AI providers whose practices conflict with investors’ ethical principles.
Environmental Considerations
The environmental impact of AI usage presents another crucial consideration for gender-lens and climate-smart investors. Large language models and AI systems require significant computational power, contributing to growing energy consumption and carbon emissions. This raises important questions about the trade-offs between improved impact data collection and environmental sustainability – a consideration that disproportionately affects women, especially the most vulnerable women in the Majority World, who often bear the brunt of climate change impacts.
Ethical AI Use in Practice
Companies might be tempted to rely heavily on AI for gender and other impact metrics due to its efficiency and scalability. However, this approach requires careful ethical frameworks. Key considerations include:
- Ensuring AI systems respect privacy when collecting gender-related data and data on other social characteristics
- Using AI analysis to complement rather than replace human oversight in gender equity and social equity assessment
- Establishing rigorous processes for validating AI-generated insights about gender, racial, disability, and other patterns and trends
- Establishing ethical guidelines for AI use in gender-related and equity, diversity, and inclusion decision-making, as well as throughout all company operations
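One concrete form the validation processes above could take is a routine disparate-impact check on model outputs before anyone acts on them. The sketch below is a hypothetical illustration, not a prescribed method: it assumes simple binary selection data per group, and the 0.8 threshold follows the common "four-fifths" rule of thumb rather than any standard named in this post:

```python
def selection_rate(outcomes):
    """Fraction of candidates a model selected (1) vs rejected (0)."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower group's selection rate to the higher one.

    Values below roughly 0.8 are a common red flag (the
    'four-fifths' rule of thumb used in HR analytics).
    """
    ra, rb = selection_rate(group_a), selection_rate(group_b)
    return min(ra, rb) / max(ra, rb)

# Hypothetical model outputs: 1 = flagged as promotion-ready.
men = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1]    # 80% selected
women = [1, 0, 0, 1, 0, 0, 1, 0, 0, 0]  # 30% selected

ratio = disparate_impact_ratio(men, women)
print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("flag for human review before acting on model output")
```

A check like this complements rather than replaces human oversight: it cannot say *why* a disparity exists, only that the output warrants scrutiny before it feeds into decisions.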
Likewise, gender- and climate-smart investors may need to develop new criteria for evaluating companies’ AI practices, considering not just the presence of gender-disaggregated data but also how that data is collected and analysed. This could include assessing the transparency of AI systems used, the environmental impact of AI operations, the ethical track record of AI service providers, and the company’s frameworks for preventing AI bias. Where companies do not yet have these transparency measures, investor due diligence could be as simple as asking the company directly about its AI usage and providers.
AI in Investment Decision-Making
As investment firms increasingly adopt AI-powered tools for deal sourcing and due diligence, we need to examine how these tools might impact gender-lens investing. Could AI help identify promising women-led businesses that traditional methods might overlook? Or might AI systems, trained on historical investment data that favours companies led by white, able-bodied men, inadvertently perpetuate existing funding gaps? The key lies in deliberately incorporating gender-smart criteria, such as the 2X Criteria, into AI systems’ evaluation frameworks – alongside racial equity, indigenous, disability, LGBTQI+, child-lens, and other equity and inclusion considerations – and in ensuring human oversight of AI system recommendations.
Recommendations
To ensure AI supports rather than hinders gender-lens and social impact investing objectives, we recommend:
- Developing frameworks for assessing AI systems’ impact on gender and social equity within investment processes.
- Ensuring AI tools used in impact measurement incorporate gender-specific and other social equity metrics and considerations.
- Supporting women’s leadership in AI development to ensure diverse perspectives shape these technologies.
- Creating accountability mechanisms for AI-powered investment tools to prevent gender, racial, and other biases.
The intersection of AI and gender-lens investing presents both opportunities and challenges. By proactively considering these aspects in our evaluation frameworks, we can work to ensure that technological advances support rather than undermine the goals of gender-smart and social impact investing.