---
tags: ADS-S22
robots: noindex, nofollow
---
# Race against Technology: Engineered Inequity
## What was the Beauty AI initiative? What were the outcomes of this contest? Is it possible to assess health based on a photo? Why or why not? How are algorithms trained to see and assess beauty?
[Matt Solone] - The Beauty AI initiative was a contest developed by an organization called Youth Laboratories where users would submit selfies and be judged by a 'jury' of robots who would deem them king or queen. As for the outcomes, of the 44 winners, all but six were white, and according to the book "only one finalist had visibly dark skin." Youth Laboratories seems to claim that you can find valuable health information through pictures alone, saying their goal is to "find effective ways to slow down aging and help people look healthy and beautiful." Now, I think this may be helpful for the obvious health problems (e.g., acne, allergic reactions, visible cuts), but I do not think major health recommendations should be based on photos alone or left up to a robot. Not only could this scare the user, but the machine could be drawing conclusions based on the race and ethnicity of the user; as stated in the book, "Given the overwhelming Whiteness of the winners and the conflation of socially biased notions of beauty and health, darker people are implicitly coded as unhealthy and unfit." Per the book, the algorithms were trained using a supervised model, with pre-labeled photos in the training set. I think they are also trained on the developers' biases, whether the developers know it or not, but "it is not just the human programmers’ preference for Whiteness that is encoded, but the combined preferences of all the humans whose data are studied by machines as they learn to judge beauty..."
[Rica Rebusit] - The Beauty AI initiative was the first-ever beauty contest judged by robots. The outcome was that the robots disfavored people with dark skin: "all 44 winners across the various age groups except six were White," and only one finalist had visibly dark skin. Like Matt said, a photo can help with superficial health problems, but it is not possible to assess health from a photo alone, because someone may look perfectly fine while having underlying health issues that a photo cannot capture. If AI says you're fine when you're really not, that causes problems. The book says the algorithms were trained to assess beauty using pre-labeled images, so if the robots were judging based on those images, they inherited the programmers' biases.
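Since both answers mention supervised training on pre-labeled photos, here is a minimal sketch of how rater bias in the labels gets absorbed by a model. Everything below is hypothetical (the synthetic features, the biased labels, the use of scikit-learn's `LogisticRegression`); it is not Youth Laboratories' actual pipeline, only an illustration of the mechanism Benjamin describes.

```python
# Sketch: supervised training on pre-labeled photos. The "beauty" labels
# come from human raters, so any bias in those ratings is learned by the
# model along with everything else in the data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-ins for image feature vectors; one feature is a proxy for skin
# tone purely to make the effect visible in the learned weights.
n = 1000
skin_tone = rng.uniform(0, 1, n)          # 0 = lighter, 1 = darker
other_features = rng.normal(size=(n, 4))  # everything else about the photo

# Hypothetical biased labels: raters score lighter skin as "more
# beautiful" regardless of any other attribute of the photo.
labels = (0.8 * (1 - skin_tone) + 0.2 * rng.uniform(0, 1, n)) > 0.5

X = np.column_stack([skin_tone, other_features])
model = LogisticRegression().fit(X, labels)

# The learned weight on the skin-tone feature comes out strongly
# negative: the model has encoded the raters' preference, not any
# objective notion of health or beauty.
print("weight on skin-tone feature:", model.coef_[0][0])
```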
## What is a robot? How have robots been included in discourse about dehumanization? How are robots included in discourse about racialization? How does a focus on robots obscure the realities of a deeply unequal tech labor force?
[Joseph Shifman] - A robot is any machine that can perform a task, simple or complex, directed by humans or programmed to operate automatically. Since the Middle Ages, humans have been fascinated with automating jobs, and this culminates in dehumanizing tasks. Factory jobs get replaced by robots that do the same job repeatedly without getting tired or needing to be paid. Talk of robot "slaves" in the 1950s highlights the whiteness of the tech industry, as the imagined owners of these robots would most likely be affluent and (at the time, most likely) white. Plus, the dissociation between white tech CEOs and the manual laborers collecting a check from them is referred to as "the new digital different caste". Racism is built into the tech industry and reinforced by language such as "master" and "slave" disks.
[Brandon Trahms] - A robot is, by definition, a mechanical entity that takes in inputs and outputs whatever its creator has designed it to. Robots are often thought to have physical form, which goes back to the industrial era, when machines marked the first steps toward automation. At that time the adversarial framing of humans versus machines took mass hold, but the idea of dehumanized machines has been around since the classical era, in philosophical discussions of automatons. As realistic robots have become a closer reality, the conversation about a once-inhuman object becoming more human has increased our understanding of how we dehumanize objects and people alike. Robots have also entered the conversation about racialization because they are used to extend the racist will of people. And since the robots themselves get the focus for their impact, they can obscure where these machines come from in the tech labor force, which is shaped by the inequality of the tech industry.
All of these interpretations of power dynamics come through the racialized lens of the modern era; slavery is not inherently racial in objective terms.
Also, if something just happens to work in your favor, is that really an exercise of your individual power if you did nothing to initiate it?
[Derek Borders] - Specifically with regard to inequality of the tech labor force, what are we considering 'tech' here? Does Geek Squad count? I think we also need to ask 'why?'. What is the pipeline like? How do the demographics of the old guard compare to the crowd with less than ten years of experience? Is it getting better? How fast? What is the goal? What are the levels of interest? If everybody could magically do exactly what they wanted, would we end up with equal gender and race representation in every industry? I'm skeptical. I suspect even in this magical utopia, we'd still see more men than women and an overrepresentation of some races and an underrepresentation of others.
Also, I object to the suggestion of a zero-sum power/benefit scenario.
Better for some does not always mean worse for others.
## How does the example of the automated soap dispenser illustrate racist design? Why does Benjamin suggest that we should avoid thinking about technology within a binary of good versus bad or trivial versus consequential?
[Ethan Nguyen] - The example of the automated soap dispenser illustrates racist design because the dispenser worked far better for users with white skin than for users with black skin. The explanation was that the technology "requires light to bounce back from the user and activate the sensor, so skin with more melanin [...] does not trigger the sensor," but regardless of the explanation, the design enforces racism. Benjamin suggests we should avoid thinking about technology within a binary of good versus bad, or trivial versus consequential, because technology can simultaneously make Black people "invisible" (in the case of the soap dispenser) and "hypervisible" (in the case of facial recognition for police surveillance). In other words, technologies exist that are capable of fixing problems of racist design, but they are being overlooked. Benjamin also draws attention to the fact that people are unlikely to view technology through the same lens as "unjust laws of a previous era," even when a similar form of discrimination is being carried out.
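To make the sensor explanation concrete, here is a minimal sketch of the failure mode, assuming the dispenser uses a simple fixed-threshold reflectance check. The threshold and the readings are invented for illustration; the real dispenser's firmware is not documented in the book.

```python
# Sketch: an IR proximity sensor fires when enough emitted light bounces
# back to it, and skin with more melanin reflects less near-infrared
# light, so a single fixed threshold can encode whose hands "count".
def hand_detected(reflected_intensity: float, threshold: float = 0.5) -> bool:
    """Return True when the sensor reading crosses the fixed threshold."""
    return reflected_intensity >= threshold

# Hypothetical readings (fraction of emitted light reflected back):
lighter_skin_reading = 0.7
darker_skin_reading = 0.3

print(hand_detected(lighter_skin_reading))  # True  -> soap dispenses
print(hand_detected(darker_skin_reading))   # False -> nothing happens

# If the threshold is only ever calibrated against lighter-skinned
# testers, this failure stays invisible until a darker-skinned user
# tries the dispenser.
```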
[Derek Borders] - I found this example curious. Aren't pretty much everybody's palms lacking in melanin? How did this soap dispenser work?
Was the soap dispenser actually less functional for darker-skinned folks, or was this something people noticed that wasn't part of the standard usage? (Found the video. The guy does have some dark palms, dark enough that maybe that soap dispenser does work on many or most dark-skinned people. I'd be curious how effective the thing is across a random sample of Black folks.)
The concept has merit despite my questions about the chosen example. I think Ethan pretty much nailed the main answer to the question.
For this section in general, I feel like Benjamin spends too much time talking about racism flowing from developer to technology and not enough about racism flowing from society as an input into technology. It seems to me that most of the things we're talking about are much more often failures of developers to adequately compensate for flaws in society, testing, and training data than what I would consider malice or even negligence on the developers' part.
## What did you learn about China’s social credit system? How does this system shape and investigate its users’ behaviors? What apps do you currently engage with that utilize ratings and ranking systems? Does knowing that you will be rated or ranked impact your behavior and choices? Why or why not?
[Josh Vong] - I've learned that China's social credit system may dock points if a person plays too many video games, which is crazy to me since China has a large video game market. This system shapes and investigates a user's behavior by pushing them in a direction that China's government views as "good." It develops society into a social hierarchy where you cannot get a certain job or purchase property just because your social credit score isn't high enough. Some apps I use that utilize ratings are Yelp, YouTube, and Twitch. I would say these apps revolve around ratings and rankings driven by popularity: seeing which restaurant in the area is the best or which streamer everybody is watching. I would say yes, if I were going to be rated or ranked, it would definitely change the way I act and do things, because it would feel like eyes were watching my every move; one single flaw or mistake and I would get bombarded with dislikes or negative comments. The way I do certain things would change if it led to a higher rank. It would be like wearing a metaphorical mask and putting on an act every day. I wouldn't really be myself anymore, just a shell of my former self trying to get a higher rating.
[Faith Fatchen] China's social credit system docks points from a person's score not just for financially related behaviors, but also for leisure choices. The fact that this scoring is tied to credit raises the stakes. Like Josh said, this system allows China's government to better control people's lives. The book mentions that Facebook has a scoring system. As long as a ranking is not tied to government institutions, and the companies are transparent about it, I would not be bothered. Given the current landscape, it does not change my behavior knowing that I might be ranked by Instagram or Facebook. This may be naive of me, but I do not think this rating/ranking affects my perception of the world. I would re-evaluate this opinion, though, if I were given new information about how the ratings/rankings are used.
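Since both answers describe points being docked for specific behaviors and a single score gating access to jobs or property, here is a minimal sketch of that mechanism. The rule names, point values, and cutoff below are all invented for illustration; the real system's internals are not public in this level of detail.

```python
# Sketch: rule-based score docking plus a threshold gate. The behavior
# labels and penalties are hypothetical stand-ins.
DOCK_RULES = {
    "excessive_gaming": -50,   # the leisure-choice penalty described above
    "late_bill_payment": -100,
}

def updated_score(score: int, behaviors: list[str]) -> int:
    """Apply every matching penalty to the current score."""
    return score + sum(DOCK_RULES.get(b, 0) for b in behaviors)

def can_buy_property(score: int, cutoff: int = 900) -> bool:
    """A single number gates access to jobs, travel, or property."""
    return score >= cutoff

score = updated_score(1000, ["excessive_gaming", "late_bill_payment"])
print(score, can_buy_property(score))  # 850 False
```

The point of the sketch is Josh's: once a cutoff like this exists, every behavior that feeds the score becomes something to perform, whether or not anyone is actually watching.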