Algorithms can exhibit and amplify gender bias, according to new research published in the journal Proceedings of the National Academy of Sciences (PNAS). The study is among the latest to uncover how artificial intelligence (AI) can alter our perceptions and actions.
"There is increasing concern that algorithms used by modern AI systems produce discriminatory outputs, presumably because they are trained on data in which societal biases are embedded," said Madalina Vlasceanu, a postdoctoral fellow in New York University's Department of Psychology and the paper's lead author.
"As a consequence, their use by humans may result in the propagation, rather than reduction, of existing disparities," Madalina added.
"These findings call for a model of ethical AI that combines human psychology with computational and sociological approaches to illuminate the formation, operation, and mitigation of algorithmic bias," author David Amodio, a professor in NYU's Department of Psychology and the University of Amsterdam shared.
Technology experts have expressed concern that algorithms used by modern AI systems produce discriminatory outputs, presumably because they are trained on data in which societal biases are ingrained.
"Certain 1950s ideas about gender are actually still embedded in our database systems," Meredith Broussard, author of Artificial Unintelligence: How Computers Misunderstand the World and a professor at NYU's Arthur L. Carter Journalism Institute, told the Markup earlier this year.
In two analyses, one covering 37 countries and another covering 52, the researchers performed Google image searches for the gender-neutral term "person" in each country's dominant language. In countries with higher gender inequality, the first 100 search results contained more photos of male-presenting individuals relative to female-presenting individuals. In a further experiment, participants in the United States who were shown Google image search results for unfamiliar job titles featuring more men than women were more likely to say they would hire a man for the job. According to the authors, this suggests that gender inequality is mirrored in internet search algorithms and can lead to biased decision-making.
"These results suggest a cycle of bias propagation between society, AI, and users," Vlasceanu and Amodio wrote, adding that the "findings demonstrate that societal levels of inequality are evident in internet search algorithms and that exposure to this algorithmic output can lead human users to think and potentially act in ways that reinforce the societal inequality."
Notably, the research was conducted using a VPN-based internet search methodology. This approach allowed the researchers to probe international differences in search algorithm behaviour at low financial and administrative cost. However, VPN access varies by country and may be banned, obstructed, or otherwise unavailable in many locations, so the method may not provide an exhaustive international assessment. Nevertheless, it gave the researchers access to 58 nations spanning six continents, with sufficient variability in national gender inequality to permit a valid test of their hypothesis.