
AI May Be Able To Develop And Fortify Prejudice: Report

Researchers at Cardiff University and MIT have found that AI programs may develop prejudice entirely on their own through interaction, even without being trained on data sets that contain human prejudice. According to data from a large number of AI simulations of a give-and-take game, the researchers learned that AI agents can form prejudices simply by copying whichever agent is obtaining the most desirable outcome, meaning the highest short-term profit. Prejudice thus develops organically and goes on to create insulated communities.
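As a rough illustration of that copy-the-top-earner dynamic, consider the Python sketch below. The Agent class, its attributes, and the copy_top_earner rule are illustrative assumptions rather than code from the study; the point is only how quickly an entire population can collapse onto a single strategy by imitating whoever is currently earning the most.

```python
import random
from dataclasses import dataclass

@dataclass
class Agent:
    prejudice: float   # probability of refusing to donate to out-group agents
    payoff: float = 0.0

def copy_top_earner(agents: list[Agent]) -> None:
    """Every agent adopts the strategy of the current highest earner,
    mirroring the short-term imitation rule described above."""
    best = max(agents, key=lambda a: a.payoff)
    for agent in agents:
        agent.prejudice = best.prejudice

# A population with randomly mixed attitudes...
population = [Agent(prejudice=random.random()) for _ in range(100)]
# ...collapses onto one strategy after a single imitation step.
copy_top_earner(population)
assert len({a.prejudice for a in population}) == 1
```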

The game itself is simple. AI agents decide whether to donate to somebody inside or outside their group depending on the donation strategies and reputation points displayed by others. Run over a great many simulations, the game reveals that bots tend to cling to and imitate other bots who share their logic and outcomes, while the group as a whole gravitates toward whoever is seeing the largest return. Even when prejudice does not develop at first, groups and communities in the simulation can form around other criteria, and larger or more specialized groups can then develop prejudices of their own. The end result is that the machine always arrives at some deeply held prejudice or assumption, which manifests as a pull toward peers and away from the target of the prejudice.
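A minimal sketch of one round of that donation game follows, again in Python. The in-group check, the reputation threshold, and the cost and benefit values are assumptions chosen for illustration, not the paper's actual parameters.

```python
import random
from dataclasses import dataclass

@dataclass
class Agent:
    group: int         # which subpopulation the agent belongs to
    prejudice: float   # reputation bar an out-group partner must clear
    reputation: float = 0.5
    payoff: float = 0.0

def play_round(agents: list[Agent], cost: float = 0.1, benefit: float = 0.3) -> None:
    """Each agent meets a random partner and donates to in-group members
    unconditionally, but to out-group members only if their displayed
    reputation exceeds the agent's prejudice threshold."""
    for agent in agents:
        partner = random.choice([a for a in agents if a is not agent])
        in_group = agent.group == partner.group
        if in_group or partner.reputation > agent.prejudice:
            agent.payoff -= cost         # donating costs the giver...
            partner.payoff += benefit    # ...but benefits the receiver more
            agent.reputation = min(1.0, agent.reputation + 0.05)  # generosity is visible
        else:
            agent.reputation = max(0.0, agent.reputation - 0.05)  # refusals are too

# Two subpopulations of 50 agents each, with random initial prejudice levels.
agents = [Agent(group=g, prejudice=random.random()) for g in (0, 1) for _ in range(50)]
for _ in range(200):
    play_round(agents)
```

Repeated over many rounds, generous agents build up reputation that out-group partners can act on, while agents with high prejudice thresholds end up donating almost exclusively within their own group.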

According to the scientists behind the study, its real-world implications may eventually warrant thorough in-field analysis. For now, the biggest open question is whether AI can recognize existing prejudices in society, their origins and targets, and the actions and consequences associated with them. Under the dynamic described above, AI could end up mimicking humans in a sense, since the people who are better off or score higher on other metrics would be the ones most closely emulated. Such a system would likely cling to whoever is getting the best outcomes and avoid those seeing smaller returns, essentially chasing advantage while ignoring disadvantaged individuals.