KEY EVENTS

On April 23, 2021, Paige Chu presented Technology and Racism: An Environment for Violence? at the Generation Z Congress. The presentation was followed by a moderated question and answer period and further discussion in a moderated break-out room. Key points of discussion included: how technology has allowed public issues to be absorbed into private spaces, the differential experiences with the use of technology, how technology has been used to perpetuate racism, and the difficulty of holding tech companies accountable.

NATURE OF DISCUSSION

Presentation

Paige Chu provided several case studies to emphasize the role technology plays in perpetuating racism, including prenatal testing in Charleston, Norplant birth control, and the COMPAS recidivism algorithm. The discussion then turned to the difficulty of detecting racism driven by technology and the laws in place designed to protect tech companies from repercussions.

Question Period

During the question period, the discussion focused primarily on potential solutions to the problem of technology and racism, the speaker’s opinion on facial recognition technology, and the speaker’s personal concerns about the internet.

TECHNOLOGY AND RACISM: AN ENVIRONMENT FOR VIOLENCE?

Date: April 30, 2021

Disclaimer: This briefing note contains the encapsulation of views presented by the speaker and does not exclusively represent the views of the Canadian Association for Security and Intelligence Studies.

Paige Chu
The Journal of Intelligence, Conflict, and Warfare

BACKGROUND

Presentation

Although ‘technology’ is often thought of as electronic devices, such as phones and computers, it also encompasses a much broader range of innovations, including vaccines and health-related diagnostic testing. Despite being designed with genuine intentions, many technologies can be used to perpetuate racism and discriminate against ethnic minorities, as can be seen in several case studies.
In Charleston, South Carolina, in 1989, a public policy was implemented allowing for the arrest of pregnant women whose prenatal tests showed a history of drug use, a practice that disproportionately discriminated against low-income racial minorities. Public policy failures such as this should be addressed in the public sphere; however, technology has made it possible for public issues, like unfit parenting, to be absorbed into private, corporate spaces through the use of private diagnostic testing facilities.

Similarly, Norplant, a birth control device designed by the Population Council and Pfizer in the 1960s, was a small metal rod inserted into a woman’s arm. The device was initially used in developing countries as a means of population control; however, it eventually made its way back to the US, where it was targeted at poor communities, people with disabilities, and black communities as a form of selective reproduction. Coercive methods were used to get particular women to receive the implant, often by failing to explain the terms of Norplant’s use and denying requests for removal, despite significant health concerns being reported by its users. In both of these cases, the stated aim of the reproductive technologies was to protect against dangerous motherhood; however, they were deployed in a discriminatory way and only served to bolster racist practices.

Alternatively, a statistics-based algorithm created by Northpointe Inc. was designed to objectively determine a defendant’s risk of recidivism upon release from prison, free from human bias. Though the intent was to reduce racial bias in decision making, prejudicial assumptions were built into the algorithm’s underlying framework.
A lengthy survey was provided to defendants, with questions that became increasingly personal and irrelevant, such as “how many of your friends have ever been arrested?” These questions were primed with underlying racial bias and resulted in more black defendants being incorrectly judged to be high-risk and more white defendants being incorrectly judged to be low-risk.

Despite the clear flaws of the above technologies, many private tech companies and corporations are protected by intellectual property laws, meaning they are not legally required to share the specifics of their algorithms or underlying technology. This makes it nearly impossible to detect and fight back against technology-driven racism.

Despite centuries of ethnic discrimination and racial oppression, science has shown that race has no actual biological or genetic significance for humans. The diversity of humankind does not meet the threshold required to separate us into different races. There are no differences in strength, intelligence, or ability among various races, leading to the conclusion that race is purely a social system with a political function.

Question Period

With regard to potential solutions to the problem of technology and racism, improving legislation might help ensure that laws keep up with technology and do not allow prejudicial practices to fall through the cracks. It is essential to hold technology accountable through a democratic process, in order to account for differential experiences of technology. Policies should be designed to improve identity verification and user safety, as well as to hold tech companies responsible for the data they collect and how they use it.

With regard to facial recognition technology, it is largely unnecessary, as so much of our personal information is already being collected.
There are already many advanced forms of technology, including social media platforms, that are effective at “picking us out of a crowd” in a manner similar to facial recognition. There are also concerns with how facial recognition technology would be used and how its data would be collected.

In terms of concerns with the internet, there are significant difficulties with misinformation, the speed with which misinformation can travel, and the fact that users are not guarded against certain content online. There are times when a user sees things they are not emotionally prepared for, and there are no safeguards in place to protect vulnerable individuals. It is also difficult to gauge a person’s intent online. It is much easier to gauge safety in a physical space by observing another person’s body language and verbal cues; it is much more difficult to judge safety in a virtual space because there are so few cues to rely on.

Sharing personal information on the internet can become a significant concern for many. Thoughts posted online often do not feel authentic when you must analyze the words being used and how you might be perceived by the public. There is also a permanence to what is posted online that you do not see in the real world. A fleeting thought posted online today may be drastically different from how you think a year from now; however, it can never be deleted.

KEY POINTS OF DISCUSSION

Presentation

• Technology has opened the door for public issues to be absorbed into private spaces, such as large corporations and tech companies.
• Reproductive technologies, such as prenatal testing and Norplant birth control, have historically been used to discriminate against racial minorities, people with disabilities, and those in low-income neighborhoods.
• Prejudiced assumptions and racism can be built straight into technology, as seen in the case of COMPAS, which used an algorithm based on prejudiced assumptions to predict recidivism rates of defendants.
• There is no mechanism to hold private tech companies accountable, making it possible for technology to be used as a tool of racism with no repercussions.
• Race is a social system with a political function; there is no biological or genetic significance between races. When technology serves as the line of separation without proper accountability, it emboldens the myth of racial superiority and cultivates a space for these forms of violence to occur.

Question Period

• To address the problem of technology and racism, legislation must be improved to ensure that laws keep up with technology and that large corporations and tech companies are held accountable.
• Facial recognition technology is largely unnecessary, as there are already advanced forms of technology adept at picking us out of a crowd through other means.
• There are significant concerns with misinformation on the internet, the speed with which misinformation travels, and the lack of protection for vulnerable individuals accessing information online. It is also much more difficult to judge another person’s intent online due to the lack of available social cues.
• Sharing personal information online can become a concern for many people, due to the lack of authenticity in online interactions and the permanence of online posts.

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

© (PAIGE CHU, 2021)

Published by the Journal of Intelligence, Conflict, and Warfare and Simon Fraser University

Available from: https://jicw.org/