---
tags: ADS-S22
robots: noindex, nofollow
---

# Race against Technology: Introduction

Discuss at least one question below, putting your name in brackets in front of your answer. Each question below needs to have at least one discussion, but you can discuss more than one question. During class I will ask each of you to share out one of your discussion points.

#### How are names racially coded? What technologies do our names interact with? What considerations did Benjamin make when choosing a name for her son? What coding is embedded in your own name? How does your name determine how racially visible or invisible you are?

* [Faith] Names reflect individual identity. However, when used as a marker by technologies, names are stripped of individuality. Technologies flag certain names based on assumptions about race. When Benjamin named her son, she had to take these technologies into consideration, along with the profiling that would occur from others due to the implications certain names hold in our culture. Not coincidentally, when I first read Benjamin's questions regarding names, I thought there was nothing particularly special about my name. My name isn't affiliated with my lineage; I think my parents' primary consideration was personal preference. As Benjamin discusses, that's just it: my parents' and my lack of consideration reflects the invisible coding of my name.

#### What is the New Jim Code? How do racial codes facilitate social control? Should technological advances be developed without accounting for human bias? Why or why not? How is the production of data connected to a long history of exclusion and discrimination?

* [Matt Solone] The "New Jim Code," as the book states, is "the employment of new technologies that reflect and reproduce existing inequities but that are promoted and perceived as more objective or progressive than the discriminatory systems of a previous era."
The racial codes given to those less fortunate or from minority families box them into the category of "bad people" based solely on zip codes and names that someone somewhere has declared unpleasant. I think accounting for human bias can be necessary in some technological advances, though not all. Generally we think that no bias is good, and I would agree, but as the book states, sometimes bias sneaks through the back door without anyone noticing. So if someone is developing a technological advancement that may or may not be shaped by bias, we should take that into account in the early phases so that we do not get blindsided by racial bias later in development. The production of data is connected to our long history of inequity because history follows us everywhere, and if we do not acknowledge that our past has an effect on our future, there will never be any true progress for anyone or anything.

#### How can we draw attention to coded inequity? Why is it important to identify how tools and data represent reality? How can we resist the creation of a digital caste system? How are racist results connected to the encoding process?

* [Derek Borders] Drawing attention today is paradoxically easier and harder than ever. Awareness can be raised on any number of platforms, but those platforms are inundated with worthy causes, then buried under a mountain of clickbait and fake news. I suspect the most effective approach may be to directly target practicing and aspiring data professionals. The nature of the message is technical enough to be of questionable value as a mass-awareness goal. Spreading messages and drawing attention are not really in my personal wheelhouse, so I don't have specific suggestions. Including this sort of thing in otherwise technical courses and bootcamps seems like a good start. Chiming in on relevant things we see in social media is an individual step we can take.
  It is important to identify how tools and data represent reality so that we can better anticipate their shortcomings and the ways they might inherit and amplify existing issues. "Digital caste system" is one of a number of sensationalist terms and phrases Benjamin employs that feel a bit over the top to me. (I suppose this is mostly a literary style I've come to regard with suspicion in recent works from people doing TED talks and hitting the media loops.) It seems to me that the vast majority of this is simply a fancy new tool being used for an ancient purpose. If there's a digital caste system, it's mostly springing out of an analogue one.

  As a baseline, I suspect racist results are connected to the encoding process in that they are built on incomplete models of a racist (and honestly probably as classist as racist) reality. What I find most difficult in this line of inquiry is figuring out where to draw the lines between idealistic aspirations, moral obligation, professional responsibility, and legal duty. Sure, given infinite resources and infinite mental energy, it would be great if everybody were always conscious of, and vigilant against, the ways their models and tools might create, amplify, or even just sustain inequity. I'm not sure that's a useful target for effecting real progress, though. In a world of iterative development, minimum viable products, and often ruthless capitalism, what tangible, achievable steps can we take in the near to medium term to make progress on this issue?

  I think the best way to resist the creation of a digital caste system is to answer these harder, more immediate questions, as well as to continue the eternal effort to resist existing, entrenched, analogue caste systems. We need specific goals at each level. As individual developers, we can strive to always do our best to think about how our models might affect inequity. Within the limits of our positions, we can attempt to minimize it.
  As an industry, we should try to come up with tools and best practices to standardize this effort. As a society, we need to work on the underlying issues being perpetuated by data, and possibly come up with regulations to combat automated inequity (though I don't know that I trust Congress to do anything helpful there).

#### How do choices made by private tech industries influence and operate as public policy decisions? How do decisions made by social media companies uphold and impact political values? What are some of the harms that tech companies have caused?

* [Skip] I try to avoid social media, so I don't want to make any claims. But I know conservatives feel social media sites actively censor their views and do not hold the views of liberals to the same standard. If this is the case, then I suppose it is a problem; but it is hard for me to argue against a private company's ability to censor individuals on its platform. It seems that if newspapers can pick and choose what can and can't be stated in their pages while claiming to be independent, then social media sites should be able to as well.

#### What are some of the potential harms of government data sharing? Can you think of a time that your personal data has been used against your individual interests? Is it important to intervene in the tech industry's attempts to self-regulate? Why or why not?

* [Rica] The data the government collects from us goes toward things such as law enforcement and public health. One potential harm is reduced security: gathering too much data leads to higher maintenance and storage costs, and keeping track of it all becomes unmanageable, which weakens security. I can't think of a time my personal data has been used against my interests, but the book describes a woman being denied a bank loan, despite having a high income, because the government had data showing she had a tumor.
  I think it is important to intervene because we need protection against harmful data breaches, protection of consumers' rights, and safeguards for health and safety.

#### What are some examples of how cosmetic diversity is utilized in technology? How does marketing cosmetic diversity shift attention away from creating and amplifying substantive change? How does utilizing cosmetic diversity boost engagement and profits? Where have you seen this happening in your life?

* [Ethan] An example Benjamin brings up of how cosmetic diversity is utilized in technology is Netflix's recommendation algorithm, which shows a Black actor on a movie's poster when in reality that actor has only a minor role in the film. Marketing cosmetic diversity shifts attention away from creating and amplifying substantive change because it acts as a substitute for discussion about systemic disadvantages and gives the false impression that little to no work needs to be done. Utilizing cosmetic diversity boosts engagement and profits because it allows companies to capitalize on the audience that would like to see and support diversity.

#### Is technology a neutral tool? Why or why not? Why is it critical that we assess technology by its outcomes rather than by the intentions of its creators?

* [Brandon] Asking whether technology is neutral strikes me as an inherently philosophical debate between Kantian and utilitarian ethics. These break down to moral judgments being placed on motivations versus consequences. From a motivations perspective, regardless of the technology we choose, tools themselves lack inherent agency and so must be neutral by definition. From a consequences perspective, however, it really depends on the technology in question and on some large amount of moral calculus to sum up all its effects.
  This is a view many people take when analyzing technology, and it is useful for making more pragmatic decisions about future outcomes. But I personally feel more Kantian in this discussion, which lays ethical responsibility upon the human agent's intent.

> Need more discussion here. Less academic language and more tangible examples.

#### Why is it impossible for designers and developers to be colorblind? How has the white aesthetic influenced artificial intelligence? What examples of this dominant aesthetic have you witnessed in your own use of technology? When have you seen tech companies use Black celebrities to advertise their products?

* [Joseph] It is impossible for anyone to be colorblind because, as humans, we inherently have some kind of hidden bias. One may not think of themselves as racist, but put them in a certain situation (walking home alone) and their biases may come to the forefront (a Black man comes the opposite way, and they clutch their purse). AI designers are also subject to the biases present within the data they use. They may have a plan to reduce bias as much as possible in their model, but if the data misrepresents a certain group of people, then the model is no longer colorblind. An example from the book that highlights the white aesthetic influencing AI is the software that associated white-sounding names with "pleasant" words and Black-sounding names with "unpleasant" words. The most prominent place I see the white aesthetic dominating technology is in audio and voice recognition technologies, specifically Siri: all the English voices, whether male or female, sound like they could belong to a white person. Though I try to avoid advertising as much as possible, I have noticed that video game and video game technology companies show the most representation of Black people.

#### How can centering intentions make the social costs of technology invisible? Why is it important to acknowledge that technology is developed within political and social contexts? How have you witnessed access to technology positioned as a solution to racial inequality? Have these examples downplayed structural barriers to access? Have they included or ignored the participation and innovation from people of color?

* [Joseph] Centering intentions involves being mindful at every step in development: mindful of potential harm, or of unfair advantages taken from or given to people. Technology is developed in the scope of social contexts because of the people who develop it. Technologies can effectively bypass some politically protected freedoms to insert biases that may yield more profit. Take the Diversity advertising company, for example: it is effectively doing legalized discrimination in the interest of profit. To the consumer's eyes, it seems like technology is catering to them. This may be true, but in reality the main objective of these tech companies is profit. On the question of technology as a solution to racial inequality, my mind goes to the Obama Phone program, which aimed to give smartphones to disadvantaged (qualifying) families. I believe this program did wonders to break down structural barriers and give internet access to many people of color. I think the next step is some kind of cheap or free internet access for these communities, to further connect them with the world and open doors to opportunities.

* [Josh] With centering intentions, I think there has to be some sincerity and understanding of what the demographic represents; with sincerity and understanding, tech companies would come off as more genuine. If a company were to promote a product using social impacts as a means to advertise, it wouldn't seem very sincere to me, and the company wouldn't come off as understanding.
  I don't have a background in business and I'm not a business major, so I don't know much about marketing and advertising; maybe I don't understand how it works, but it just seems wrong to me. Someone mentioned: what if the proceeds of the product were used as donations to social justice foundations? I don't have a problem with that; it seems a bit more sincere, since the money is going to good use, I hope. It's important to acknowledge that tech is developed within political and social contexts because there is a bias that appeals to certain demographics; again, these technologies use political and social contexts as a way to advertise. I haven't witnessed a piece of tech positioned as a solution to racial inequality, but I have seen platforms such as YouTube, Twitch, Google, etc. promote content creators of color. I don't think this downplays barriers to access; it's good to see content creators who aren't white represented on these tech platforms. But it sometimes seems like these companies are saying, "Look, we have people of color on our platform, come use it to see more." Maybe I'm wrong about this; it is good to see diversity on tech platforms and to hear from other voices. As for participation and innovation from people of color, I'm not sure, to be honest. I have seen more people of color streaming on these platforms, but I wouldn't know whether there has been a significant increase in participation.