Ethics and Gender Inequality involving Artificial Intelligence in Job Recruitment

Abstract

Artificial intelligence (AI) reflects society's knowledge and beliefs, carrying both the strengths and the flaws of humanity into its work. In the workforce, AI is valued for completing tasks efficiently using documented information from human studies. Its computational accuracy and insightful evaluations have earned it a strong reputation that tempts companies to implement AI in their own systems. Underneath the glorification of its capabilities, however, AI holds human biases that favor some genders and disfavor others. These discriminatory biases influence the results it produces and strip capable people of opportunities. This paper addresses the causes and impacts of the biases involved in AI, discusses different attempts to resolve the unethical issues surrounding workplace bias, and introduces an all-encompassing solution.

Introduction

Finite resources and growing labor demand have compelled numerous businesses to take the controversial shortcut of using AI to screen job applicants. In 2022, 65 percent of recruiters used AI in their recruitment process[1]. For centuries, there has been rampant discrimination in the work environment (post-hiring) and the labor market (pre-hiring)[2]. Data footprints from these unequal opportunities and treatments have trained AI to reflect social biases against women and their abilities. Biased AI can make it harder for women to achieve the same opportunities as men.

Globally, women looking to work have an unemployment rate of 6.2 percent, while men have a 5.5 percent unemployment rate[3]. Applying for a competitive position places women at a significant disadvantage if AI is involved in reading curricula vitae (CVs) and screening applicants.

The debate between selecting job candidates through machine learning or through human recruiters comes down to financial price versus ethical cost. Time and funding for human recruiters are essential to ensure that applicants are properly reviewed and sorted[4]. Combining machine learning with CV scanning, on the other hand, cuts costs but risks perpetuating bias against certain groups of people. Large and successful corporations have numerous workers and high sales, allowing them to afford to continue hand-selecting applicants. For big companies leaning towards using AI in their selection process, the effort required to remove these biases can cost more than simply ignoring them. Cottier and his co-authors have likewise documented the high cost of training frontier AI models, estimating that the expense of training runs grows by roughly 2.4 times each year. As AI technologies continue to emerge, businesses will find it increasingly challenging to invest in training models and updating information.

In 2014, Amazon brought together machine learning specialists to create computer programs that could review job applicants' resumes[5]. The screening tool used AI to rank job applicants on a scale from one to five, with five indicating an applicant whose skills would benefit the company. By 2015, Amazon noticed a consistent pattern of scores between genders, a possible sign of gender inequality. Men consistently received higher scores, while comparably qualified women did not, despite their strong applications. This discrimination was rooted in the data Amazon used to train its program: employee information from a previous ten-year period when men were more prominent in the technology industry. The grading difference placed a negative bias against women and reduced the opportunities offered to them. Amazon ultimately scrapped the whole project, abandoning the substantial effort and funding it had invested in the program.

Massive corporations tend to have a more flexible choice between AI and human work, but the situation is different for small and local businesses. For small shops, using machine learning in the hiring process would be a great help, especially since small restaurants and mom-and-pop shops are viewed as entry-level or less stable jobs with few long-term workers. According to the U.S. Bureau of Labor Statistics, 20.4 percent of businesses fail in their first year after opening and 49.4 percent fail within their first five years[6]. Recalling Amazon's decision to scrap its computer recruitment system, small businesses have far fewer resources than Amazon to alter discriminatory data and create an unbiased model.

Filtering data requires experienced data scientists, extensive hours, and the money to pay for their work. With ongoing business failures and unpredictable economic crises, the burden of removing bias-producing data is massive for small businesses. Any solution could require tremendous amounts of work and community support, so how do we fix this issue? This paper provides background on how historical discrimination has shaped biases in AI. It also briefly explains current gender inequality and the relationship between data, machine learning, and AI. The literature review section summarizes and analyzes current approaches to resolving gender bias in AI recruitment systems. After the background and solution evaluations, the paper introduces a proposal and its benefits to companies and applicants.

Background

Types of Biases

Gender inequality is largely rooted in representative and algorithmic biases. Historically, women made up a disproportionately small share of the workforce because of social standards and the suppression of women's rights[7]. Since women were limited in employment and education opportunities, data on women's work performance contains few or no representatives. As a result, AI can develop representative biases that limit the chances of qualified women to pursue work or research opportunities. Data skewed by historical discrimination also leads AI to treat gender disproportion as a pattern to reproduce when selecting applicants, a phenomenon known as algorithmic bias. This assumption arises from the structure of machine learning and its ability to find trends through algorithms. Today, women are more involved in the workforce and gender stereotypes have weakened[8]. Ethiopia, for example, reached greater political gender equality through a drastic growth in the number of parliamentary seats held by women: in 1995, women held 2 percent of seats in the Ethiopian parliament, and by 2010 that share had grown to 22 percent. Despite significant diversity improvements around the world, digital gender biases in AI have the potential to bring back restrictions of the past.

Historical Gender Inequality

Around 1760, the Industrial Revolution began the transition from societies powered by agriculture and handicrafts to economies based on mechanized manufacturing and large-scale industry[9]. Families and companies flourished from the technological, socioeconomic, and cultural boom of this era. From 1750 to 1913, gross national product (GNP) per capita increased by 102.13 percent in third-world countries and 363.74 percent in developed countries. During the Industrial Revolution, women's roles were to care for children and "transform the home into a haven for the men who faced daily pressures and dangers in the workplace"[10]. Working in sweatshops for the textile industries was also considered a way for women to support their families. Beyond the grueling and unhygienic work environment, their long hours earned little pay and their efforts were dismissed as minuscule. Women were constantly told that they could easily be replaced if they stopped working or revolted[11]. To achieve gender equality, women need equal representation and compensation for their work.

The Role of Machine Learning in AI Biases

Reflecting the discriminatory history of women's roles and standards in the household, AI is subject to bias through machine learning. Machine learning fuels AI's knowledge by finding recurring patterns in data; these patterns reveal connections and support conclusions about something or someone, which allows AI to make predictions. Machine learning is therefore only as reliable as the data it is trained on. As population growth and job demand increase, extensive filtering is required to determine who is or is not qualified for a position. Many recruiters turn to AI to select high-performing applicants for interviews, which can speed up CV scanning and job searching for unemployed people. According to LinkedIn, 87 percent of human recruiters use AI for talent sourcing, the identification of an applicant's valuable skills and strengths[12]. The time savings and efficiency that AI offers recruiters have increased its prominence in the job market. An MMR report valued the AI recruitment market at 662 million US dollars in 2023 and projected it to reach an estimated 1.12 billion US dollars by 2030[13], exhibiting the importance of this type of recruitment for the wallets of businesses.
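To make the pattern-finding mechanism concrete, the brief sketch below trains a toy classifier on fabricated historical hiring records in which equally experienced women were rejected more often than men. The dataset, the two features, and the choice of scikit-learn's logistic regression are illustrative assumptions only, not a description of any real recruitment system.

    # Toy illustration of how biased historical labels propagate into predictions.
    # All data below is fabricated for demonstration purposes.
    from sklearn.linear_model import LogisticRegression

    # Features: [years_of_experience, is_female]; label: 1 = hired in the past.
    X = [
        [5, 0], [6, 0], [4, 0], [7, 0],   # male applicants, all hired
        [5, 1], [6, 1], [4, 1], [7, 1],   # equally experienced women, mostly rejected
    ]
    y = [1, 1, 1, 1, 0, 1, 0, 0]

    model = LogisticRegression().fit(X, y)

    # Two applicants with identical experience, differing only in gender:
    print(model.predict_proba([[6, 0]])[0][1])  # predicted hire probability for a man
    print(model.predict_proba([[6, 1]])[0][1])  # lower predicted probability for a woman

The point of the sketch is not the particular model but the data dependence: because the historical labels encode discrimination, the learned pattern reproduces it.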

Workplace Discrimination

Underneath the layers of feminism, justice, and company work, there are hidden stigmas and discrimination against women. Almost half of employed women report having experienced sex discrimination in the workforce, according to an analysis of Pew Research Center survey data[14]. This can include the job recruitment process and interview, salaries, workplace treatment, workload, and company position. "We need to keep working to create a more equitable world for women, and we need to keep inspiring the next generation of female leaders," states Claudia Goldin, a professor of economics focused on gender equality and winner of the Nobel Memorial Prize in Economic Sciences. The stigma around women's roles in society and their supposedly "limited" capabilities compared to men has hurt equality in climbing the corporate ladder. Because women were historically confined to cleaning, cooking, and caring for children, parts of society and the internet remain divided in how they value work done by women[15]. In 2023, only 40 percent of the 3.5 billion employed workers worldwide were women[16], and their earnings were 83.6 percent of men's[17]. The lower pay and hiring rates indicate that women are underrepresented and less valued for their work, especially given that women have been achieving higher levels of education.

Until 1977-1978, enrollment rates for college associate degrees were not equal between genders. Since then, female enrollment has risen steadily relative to male enrollment. Women earned 61.8 percent of associate degrees, 57.9 percent of bachelor's degrees, 61.9 percent of master's degrees, and 53.6 percent of doctoral degrees conferred in 2021-22[18]. Despite these ratios, women make up only 23 percent of high-potential pools at top-performing organizations and 14 percent at under-performing organizations[19]. Based on these statistics, either women and their work are undervalued or another factor is at play.

Work Disadvantage from Maternity Leave

Maternity leave is a key factor when companies decide who will work long-term. A report shows that a third of managers would rather employ a man in his twenties or thirties than a woman of the same age when considering potential maternity leave and its financial cost to the company[20]. Because of their biology, pregnant women physically require time off to carry and deliver their children. That time off creates a gap in work experience, setting them apart from male applicants and weakening the competitiveness of their CVs. AI trained on historical data perceives the time gap as a mark against the applicant, especially if the applicant has more than one child. This gender discrimination places a huge disadvantage on women applicants and can discourage pregnancy among women who seek to work.

Literature Review

Analysis of Existing Solutions

Solution One

In previous attempts, people have tried multiple ways to eliminate bias against certain backgrounds. One proposed solution suggests that company recruiters "actively build and test hiring applications to ensure they meet the required standards" and calls for extensive human resources research on leveraging AI[21]. The authors are suggesting work that involves a great deal of time and financial support for research, as mentioned before. Their solution is not realistic for small companies with few financial, logistical, and data resources. Moreover, the lengthy research period the solution requires means that current algorithmic biases would continue to affect applicants in the meantime.

Solution Two

Learning Collider, a group of social and data scientists who conduct research to benefit society, urges businesses to implement inclusive machine learning in their job recruitment processes[22]. Developed by their team, inclusive machine learning uses an algorithm that stays flexible towards unknown factors in applicants' CVs. By remaining open to skills, backgrounds, or characteristics for which there is not enough data on how they predict performance, the system is less dismissive of certain traits.

Inclusive machine learning requires a cycle of worker performance updates to retrain the model and learn the success rates of the new factors. Learning Collider states that this data feedback loop results in a more diverse and higher-quality applicant pool for interviewing. Initially, the applicant pool will be varied, yet as inclusive machine learning retrains it can create new biases – the nature of machine learning – that advantage or disadvantage applicants based on their backgrounds.
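Learning Collider does not publish its algorithm in the sources cited here, so the sketch below only illustrates the general idea of staying open to under-observed backgrounds: candidates whose attributes have little historical data receive a small exploration bonus that shrinks as evidence accumulates. The group counts, base scores, and bonus formula are all hypothetical.

    import math

    # Illustrative exploration-style adjustment: backgrounds with little historical
    # data receive a bonus so the system keeps gathering evidence about them.
    historical_counts = {"group_a": 900, "group_b": 25}   # observations per background

    def adjusted_score(group: str, base_score: float, c: float = 0.5) -> float:
        """Add an uncertainty bonus that shrinks as data on the group accumulates."""
        return base_score + c / math.sqrt(historical_counts[group])

    # Two hypothetical candidates with similar base scores:
    print(round(adjusted_score("group_b", 0.62), 3))  # sparse data: 0.62 + 0.10 = 0.72
    print(round(adjusted_score("group_a", 0.65), 3))  # abundant data: 0.65 + 0.017 = 0.667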

Compared to large corporations like Amazon, small businesses have fewer resources and generally favor the cheaper option to keep costs low. Businesses would, for example, have to invest time to master machine learning and regularly update data on worker performance. Deep learning remains a complex technology to the public, with new features that are challenging to understand and keep up with. In the United States, a developed country with open access to technology, a 2022 survey shows that 44 percent of people think they do not regularly interact with AI[23]. In developing countries, there is less open access to resources that educate business owners on machine learning and data science. Standard machine learning is also more affordable on average than human recruiters or inclusive machine learning. Assuming each interview costs 250 US dollars, human recruiters would cost the company 1.1 million dollars, inclusive machine learning 337 thousand dollars, and standard machine learning between 138 and 464 thousand dollars (an average of 301 thousand dollars).
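As a back-of-the-envelope check on these figures, the short sketch below derives the implied number of interviews under each approach and the quoted average for standard machine learning, using the assumed 250-dollar cost per interview; the totals are the ones stated above and the derived interview counts are purely illustrative.

    # Back-of-the-envelope comparison using the assumed 250 USD cost per interview.
    COST_PER_INTERVIEW = 250

    totals = {
        "human recruiters": 1_100_000,
        "inclusive machine learning": 337_000,
        "standard machine learning (low)": 138_000,
        "standard machine learning (high)": 464_000,
    }

    for approach, total_cost in totals.items():
        implied_interviews = total_cost / COST_PER_INTERVIEW
        print(f"{approach}: ${total_cost:,} total -> ~{implied_interviews:,.0f} interviews")

    # Average of the quoted range for standard machine learning: (138k + 464k) / 2
    average = (totals["standard machine learning (low)"]
               + totals["standard machine learning (high)"]) / 2
    print(f"standard machine learning (average): ${average:,.0f}")  # 301,000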

Although large businesses may be able to afford these sums for fair and diverse screening outcomes, small businesses have less financial freedom, especially when they are short-staffed and need more workers. In 2017, the median market-value gap between smaller and larger public companies was 8.8 billion dollars[24]. Additionally, inclusive machine learning can create new biases from its data feedback loop on worker performance and put certain applicants at a disadvantage. For example, if employees A and B, both women, performed poorly at a business, then a later female applicant C could be discriminated against for actions done by others.

Solution Three

AI biases correspond directly to the data used to train the system. Data that lack representation or come from historical periods or demographics marked by gender inequality will produce outcomes that are just as discriminatory. As the common phrase "garbage in, garbage out" describes, terrible inputs create terrible results.

To address concerns about skewed data, Matt Fisher, an expert contributor to HRmorning, states that if "AI models are trained on a heterogeneous dataset, you can eliminate any concerns about AI hiring bias"[25]. He proposes that recruitment systems use diverse datasets from a wide range of geographic areas to resolve AI bias and discrimination. At Bullhorn, the company Fisher works for, algorithms are trained on data from millions of candidates across more than ten thousand companies. For a cloud-based platform with more than ten thousand clients globally, providing non-discriminatory data is one of the key components of company success[26].

Filtering and selecting specific information to form a heterogeneous dataset can produce greater representation of applicants. The main concern with this method is its practicality for small businesses, especially if their services are not aimed at the job recruitment industry. Bullhorn has an estimated annual revenue of 750 million dollars as of September 2024 and around 1,400 employees across several countries. It is unsustainable for small businesses to pour similar resources into training AI with data up to Fisher's standard. Additionally, algorithmic biases against certain backgrounds will remain as long as those backgrounds are considered factors in the employment selection process. For example, if data about Group X shows that its members are generally slow at making pizzas, then an applicant from Group X will be subjected to the presumption of poor pizza-making skills. Fisher further states the need for frequent disparate impact testing on AI algorithms to avoid and remove biases that could develop. Frequent evaluation is essential to achieving great results in any industry, but not every company has the funds and time to do so.
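To illustrate what such disparate impact testing could look like in practice, the sketch below applies the commonly used four-fifths (80 percent) rule to hypothetical screening outcomes. The counts, group labels, and threshold are assumptions for demonstration and are not drawn from Bullhorn's system.

    # Minimal disparate impact check using the four-fifths (80 percent) rule.
    # Hypothetical counts of applicants advanced by an automated screen.
    screening_results = {
        "men":   {"advanced": 120, "total": 300},
        "women": {"advanced": 70,  "total": 250},
    }

    rates = {name: g["advanced"] / g["total"] for name, g in screening_results.items()}
    reference_rate = max(rates.values())  # highest selection rate among the groups

    for name, rate in rates.items():
        impact_ratio = rate / reference_rate  # adverse impact ratio
        verdict = "OK" if impact_ratio >= 0.8 else "potential disparate impact"
        print(f"{name}: selection rate {rate:.0%}, ratio {impact_ratio:.2f} -> {verdict}")

With these hypothetical numbers, the women's selection rate is 70 percent of the men's rate, below the four-fifths threshold, so the screen would be flagged for review.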

Solution Four

Instead of reviewing previous achievements and experience, some companies lean towards examining applicants' current skills. Unilever, a global consumer goods company, recruits applicants through assessment match scores and capabilities[27]. According to an online preparation guide to Unilever's online assessment, its employment selection process consists of four steps: (i) submit the applicant's CV and cover letter through Unilever's careers website, (ii) complete tests that show whether the applicant is qualified, (iii) go through an online interview with Unilever HR, and (iv) take a day-long series of tasks that measure skills and performance.

Combining digital scores, in-person assessments, and human recruiter judgment reduces the impact of human biases and supports a fair filtering process. Because Unilever's recruitment process does not rely on AI, it avoids algorithmic and representative bias. The emphasis on skills and task completion reduces the weight of work experience, minimizing gender discrimination against mothers who take maternity leave.

While this method reduces the effect of historical discrimination on potentially affected applicants, it is a long and tedious process for both the company and the applicants. Other large businesses with numerous workers would have to repeat the same recruitment cycle for every round of hiring, and small businesses or shops may lack the time, or the need, to filter applicants this heavily if they are short-staffed. For companies with abundant resources, Unilever's recruitment method would be the ideal solution.

Solution Five

To address public issues and debates, people commonly assemble for discussion and action. Cynthia Dwork, a Gordon McKay Professor of Computer Science at Harvard University, connected experts in algorithmic fairness, privacy, AI, law, critical race theory, organizational behavior, economics, and social networking to create the Hire Aspirations Institute[28]. As a group, they plan to "investigate pathways to minimize the transfer of persistent patterns of hiring bias and discrimination onto electronic platforms, from data and algorithms to corrective transformations and law."

As of June 2023, the Hire Aspirations Institute had suggested removing poorly worded job descriptions and was continuing to work on new algorithms, hiring platforms, and modeling techniques. The collaboration of researchers from top corporations and prestigious universities encourages credible work and a wide range of ideas, allowing their solutions to have a quick and efficient impact on employment processes that involve AI. According to its website, the organization's latest news was featured by the BBC and Harvard SEAS News on January 10, 2024. These collaborations appear essential for creating change, yet the Hire Aspirations Institute is urged to update its progress regularly toward its mission of reducing algorithmic hiring bias[29].

Proposed Solution: Differentiating the Weight Distribution of Resume Factors

Job recruitment inequality involving machine learning, viewed from different perspectives, shows that change is needed to maintain ethics and equality. However, switching back to human recruitment carries its own share of inequality and stereotypes, intentional or not.

AI biases that favor or discriminate against certain groups remain present even in the solutions developed by accomplished researchers[30]. Undoubtedly, it is difficult to find a universal resolution that can accommodate all parties.

The paper proposes that businesses recruit applicants through a combination of weighted scores based on their qualities. The weight percentages are chosen to reduce the influence of factors negatively impacted by maternity leave and to concentrate importance on factors that demonstrate an applicant's competitiveness and specialty. AI, trained with data on applicant traits through machine learning, would grade the applicants' awards and achievements, skills, education, and work experience and assign each a value. Interview scores from human recruiters would be added after a first filtering of applicants based on the other qualities.

Since only the selected applicants are interviewed, the interview stage is shortened, and the second filtering round can combine the CV score given by AI with the interview score given by human recruiters. Combining both perspectives can also reduce the influence of human biases on applicants' recruitment chances.

After collecting the score data, AI would weigh the values at fixed percentages that vary with the importance of certain traits and company preferences. For example, the ideal ranking and weights of applicant qualities would be: (i) awards and achievements at 30 percent, (ii) skills at 25 percent, (iii) interview score at 18 percent, (iv) education at 15 percent, (v) work experience at 12 percent, and (vi) personal information at 0 percent. Awards and achievements demonstrate an applicant's distinction, the value of their work, and their passions. Skills, such as experience in coding, art styles, or data collection, show whether the applicant is qualified for the work involved in the job. The interview score uncovers the mood, attitude, and communication skills of an applicant, which are essential to maintaining a stable work environment. While education diplomas can overlap with achievements, they showcase the foundation and quality (depending on the college or university attended) of an applicant's knowledge. Work experience can reflect an applicant's familiarity with an industry and their company loyalty; its downside is that it is prone to gender discrimination, since maternity leave gaps appear in women's CVs.
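A minimal sketch of how such a weighted composite could be computed is shown below. The weights are the illustrative percentages proposed above; the category names and the example applicant's scores are hypothetical, and a company could adjust the weights to its own industry.

    # Sketch of the proposed weighted CV scoring, using the example weights above.
    # Per-category scores are on a 0-100 scale and are hypothetical.
    WEIGHTS = {
        "awards_achievements":  0.30,
        "skills":               0.25,
        "interview":            0.18,
        "education":            0.15,
        "work_experience":      0.12,
        "personal_information": 0.00,  # excluded from scoring to avoid bias
    }

    def weighted_score(category_scores: dict) -> float:
        """Combine per-category scores into a single composite score."""
        return sum(WEIGHTS[c] * category_scores.get(c, 0) for c in WEIGHTS)

    # Hypothetical applicant: strong awards and skills, a work-experience gap.
    applicant = {
        "awards_achievements": 90,
        "skills": 85,
        "interview": 80,
        "education": 75,
        "work_experience": 50,   # maternity-leave gap has limited influence
        "personal_information": 0,
    }
    print(f"Composite score: {weighted_score(applicant):.1f}")  # 79.9 out of 100

Because personal information is weighted at zero and work experience at only 12 percent, a gap from maternity leave moves the composite score far less than a deficit in awards or skills would.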

The lower weight on work experience allows women to receive a fairer judgment and score from AI. Personal information should carry a weight of zero percent because scoring an applicant on gender or other background characteristics would build bias into AI recruitment. In November 2018, a published calibration exercise indicated that closing the gender gap in work could increase gross domestic product (GDP) by an average of 35 percent[31].

There are limitations to the proposed solution. While work experience should remain de-emphasized because of gender inequalities that disfavor women, the other percentage weights will understandably vary between companies in different industries. Younger generations who have found job recruitment difficult may also gain an advantage, since their work experience is relatively lower than that of older generations; conversely, this disadvantages older employees whose CVs are built on job experience and business loyalty.

Implementing the solution and the proposed percentages is realistic both for large businesses with many employees and for small businesses with limited resources. With AI handling the scoring and the importance assigned to specific aspects of an applicant's CV, the evaluation of applicants will be quick, effective, inexpensive, and fair.

Conclusions

The utilization of AI in the workforce is expanding because of its efficiency and accuracy in completing tasks. Job recruitment has become heavily dependent on AI's ability to rank qualities based on how society perceives them. However, society's beliefs contain discrimination and stereotypes against certain groups: out of 60,000 respondents from 46 countries, 38 percent believe that "men make better business executives than women do"[32]. Machine learning trained on such data will pick up trends of gender inequality, reflecting past and present discrimination.

The gender biases rooted in data patterns – including work experience gaps for mothers who take maternity leave – are unethical and an injustice to women. To give men and women equal opportunities in the workforce, solutions must be implemented to remove algorithmic and representative biases. This paper's solution is realistic for all parties in terms of financial cost, time management, and simplicity. Its adoption would promote gender equality and the recruitment of qualified applicants.

Acknowledgments

Parker Howell, Graduate Student – PhD at University of Michigan

References

  1. Stefanowicz, B. (2024, September 3). AI recruitment statistics: What is the future of hiring? Tidio. https://www.tidio.com/blog/ai-recruitment/. Accessed September 25, 2024.
  2. Channar, Z., et al. (2011). Gender discrimination in workforce and its impact on the employees. EconStor. https://www.econstor.eu/bitstream/10419/188023/1/pjcss053.pdf. Accessed April 6, 2025.
  3. Tobin, S., & Yoon, S. (2017). World employment and social outlook: Trends for women 2017. International Labour Organization. https://www.ilo.org/research-and-publications/world-employment-and-social-outlook/worldemployment-and-social-outlook-trends-women-2017. Accessed September 25, 2024.
  4. Cottier, B., et al. (2024). The rising costs of training frontier AI models. arXiv. https://arxiv.org/html/2405.21015v1. Accessed April 4, 2025.
  5. Dastin, J. (2018, October 11). Insight – Amazon scraps secret AI recruiting tool that showed bias against women. Reuters. https://www.reuters.com/article/world/insight-amazon-scraps-secret-ai-recruitingtool-that-showed-bias-against-women-idUSKCN1MK0AG/. Accessed September 15, 2024.
  6. Bureau of Labor Statistics. (2024, January 12). 34.7 percent of business establishments born in 2013 were still operating in 2023. https://www.bls.gov/opub/ted/2024/34-7-percent-of-business-establishments-bornin-2013-were-still-operating-in-2023.htm. Accessed September 25, 2024.
  7. Kalev, A., & Deutsch, G. (2018). Gender inequality and workplace organizations: Understanding reproduction and change. Handbook of the Sociology of Gender, 257–269. https://doi.org/10.1007/978-3-319-76333-0_19. Accessed April 1, 2025.
  8. Kassa, S. (2015). Challenges and opportunities of women political participation in Ethiopia. Journal of Global Economics, 3(4). https://doi.org/10.4172/2375-4389.1000162. Accessed March 29, 2025.
  9. Britannica. (n.d.). Industrial Revolution: Definition, history, dates, summary, and facts. https://www.britannica.com/event/IndustrialRevolution. Accessed September 22, 2024.
  10. Tsongas Industrial History Center. (n.d.). The role of women in the industrial revolution. UMass Lowell. https://www.uml.edu/tsongas/barilla-taylor/women-industrial-revolution.aspx. Accessed September 22, 2024.
  11. Kabeer, N. (2004). Globalization, labor standards, and women’s rights: Dilemmas of collective (in)action in an interdependent world. Feminist Economics, 10(1), 3–35. https://doi.org/10.1080/1354570042000198227. Accessed April 2, 2025.
  12. DigitalRecruiterTM App. (2023, December 18). The AI revolution: Transforming recruiting and HR with cutting-edge technology. LinkedIn. https://www.linkedin.com/pulse/ai-revolution-transformingrecruiting-hr-cutting-edge-yexmc. Accessed September 25, 2024.
  13. Maximize Market Research. (2023). AI recruitment market: Industry analysis and forecast (2024-2030). https://www.maximizemarketresearch.com/market-report/global-ai-recruitment-market/63261. Accessed September 25, 2024.
  14. Parker, K., & Funk, C. (2017, December 14). 42 percent of US working women have faced gender discrimination on the job. Pew Research Center. https://www.pewresearch.org/short-reads/2017/12/14/gender-discriminationcomes-in-many-forms-for-todays-working-women/. Accessed September 2, 2024.
  15. Lam, C. B., et al. (2012). The division of household labor: Longitudinal changes and within-couple variation. Journal of Marriage and Family, 74(5), 944–952. https://doi.org/10.1111/j.1741-3737.2012.01007.x. Accessed April 6, 2025.
  16. Dyvik, E. H. (2024, July 4). Number of employees worldwide 2023. Statista. https://www.statista.com/statistics/1258668/global-employment-figures-by-gender/. Accessed September 2, 2024.
  17. Bureau of Labor Statistics. (2024, March 12). Women’s earnings were 83.6 percent of men’s in 2023. https://www.bls.gov/opub/ted/2024/womens-earnings-were-836-percent-of-mens-in-2023.htm. Accessed September 2, 2024.
  18. National Center for Education Statistics. (n.d.). Degrees conferred by degree-granting institutions, by level of degree and sex of student: Selected years, 1869-70 through 2021-22. https://nces.ed.gov/programs/digest/d12/tables/dt12_310.asp. Accessed September 2, 2024.
  19. Byham, T. M., et al. (2024, March 6). Women in leadership statistics: Insights for inclusion. DDI. https://www.ddiworld.com/blog/women-leadership-statistics. Accessed September 2, 2024.
  20. Slater and Gordon Lawyers. (2014, August 12). We highlight maternity discrimination. Slater and Gordon. https://www.slatergordon.co.uk/newsroom/slatergordon-highlights-maternity-discrimination/. Accessed September 2, 2024.
  21. Albaroudi, E., et al. (2024). A comprehensive review of AI techniques for addressing algorithmic bias in job hiring. MDPI. https://www.mdpi.com/2673-2688/5/1/19. Accessed September 24, 2024.
  22. Learning Collider. (2022). White papers — Learning Collider. https://www.learningcollider.org/white-papers. Accessed September 29, 2024.
  23. Kennedy, B., et al. (2023, February 15). Public awareness of artificial intelligence in everyday activities. Pew Research Center. https://www.pewresearch.org/science/2023/02/15/public-awareness-of-artificialintelligence-in-everyday-activities/. Accessed April 6, 2025.
  24. Govindarajan, V., et al. (2019, August 16). The gap between large and small companies is growing. Why? Harvard Business Review. https://hbr.org/2019/08/the-gap-between-large-andsmall-companies-is-growing-why. Accessed September 29, 2024.
  25. Fischer, M. (2024, April 18). Stop AI hiring bias: 3 keys to keeping it under control. HRmorning. https://www.hrmorning.com/articles/ai-hiring-bias. Accessed October 6, 2024.
  26. LeadIQ. (n.d.). Bullhorn company overview, contact details and competitors. https://leadiq.com/c/bullhorn/5a1d89df2400002400633dd2. Accessed October 6, 2024.
  27. Unilever. (2023). Unilever global company website. https://www.unilever.com. Accessed April 6, 2025.
  28. Harvard John A. Paulson School of Engineering and Applied Sciences. (2023, June 12). How can bias be removed from artificial intelligence-powered hiring platforms? https://seas.harvard.edu/news/2023/06/how-can-bias-be-removed-artificial-intelligencepowered-hiring-platforms. Accessed September 24, 2024.
  29. Hire Aspirations Institute. (n.d.). Hire Aspirations Institute. Harvard University. https://hireaspirations.seas.harvard.edu. Accessed October 20, 2024.
  30. Chen, Z. (2023). Ethics and discrimination in artificial intelligence-enabled recruitment practices. ResearchGate. https://www.researchgate.net/publication/373948488_Ethics_and_discrimination_in_artificial_intelligence-enabled_recruitment_practices. Accessed September 29, 2024.
  31. Ostry, J. D., et al. (2018). Economic gains from gender inclusion: New mechanisms, new evidence. International Monetary Fund. https://www.imf.org/en/Publications/WP/Issues/2018/03/12/Economic-Gains-From-GenderInclusion-New-Mechanisms-New-Evidence-45722. Accessed September 25, 2024.
  32. World Values Survey. (2023). WVS Database. https://www.worldvaluessurvey.org/WVSDocumentationWV7.jsp. Accessed September 29, 2024.
