###### tags: `CDA`
# Reading Responses (Set 2)
- Checklist for a [good reading response](https://reagle.org/joseph/zwiki/Teaching/Best_Practices/Learning/Writing_Responses.html) of 250-350 words
- [x] Begin with a punchy start.
- [x] Mention specific ideas, details, and examples from the text and earlier classes.
- [x] Offer something novel that you can contribute to class participation.
- [x] Check your writing for clarity, concision, cohesion, and coherence.
- [x] Send to professor with “hackmd” in the subject, with URL of this page and markdown of today’s response.
## Reading responses 5 out of 5
### Nov 04 Tue - Finding someone & living alone
In “Hang the DJ,” the fourth episode of *Black Mirror*’s fourth season, two people trust a dating algorithm to decide whom to love and for how long. The episode serves as a metaphor for today’s digital dating world, where we often know people only shallowly, through swipes, likes, and perfectly cropped photos, rather than connecting with them on a deeper level. Rudder (2010) describes how people claim things like “I make $100,000 a year” and pass off outdated photos as recent, showing how users bend the truth to match what the app rewards. Because family and friends tend to be “underserving us,” Thompson (2019) explains, algorithms have become “a digital bazaar where one’s appearance, interestingness, quick humor, lighthearted banter... is submitted for 24/7 evaluation,” turning online dating into both how we present ourselves and how we are judged.
However, this new market does not bring freedom; rather, it is what Vinter (2023) calls "soul-destroying," where dating becomes "admin" and affection feels robotic. Meanwhile, Chamie (2021) provides numbers showing that "one in seven adults is now living alone" amid a "historic transformation in living arrangements." Together, these four readings reveal how technology reshapes intimacy, from genuine face-to-face interaction to superficial online rating systems.
Among these stories of digital romance, Thompson's image of people submitting themselves for constant evaluation reminded me of our previous reading about "context collapse." As Marwick and Boyd (2010) note, Twitter users try to "tweet honestly" to seem authentic to the public. Similarly, dating app users curate the image of themselves they want potential partners to see. Yet, as Marwick and Boyd (2010) also highlight, authenticity "depends on who is judging." Perhaps this is why many people choose to live alone: digital communication has made the relationship landscape performative, and solitude may be the only space where one can be truly authentic.
### Nov 18 Tue - Artificial intelligence
When Spotify’s AI DJ “X” introduces a song and tells me how I should feel about it, it intrudes on the essence of listening: music should be felt by each individual, not narrated by an algorithm. This connects to a larger problem today: generative systems are speaking for us. Stollnitz (2023) describes how GPT models turn language into “tokens” and use “attention” to predict the next words, producing text drawn from a “probability distribution” even though the model “doesn’t understand meaning.” Meanwhile, Vincent (2022) shows how Stable Diffusion was “trained on artwork scraped from the internet” and can “copy their signature styles” and “replicate the look of their work” without artists’ consent. Gold (2023) likewise summarizes how Sydney can become “absolutely unhinged,” “hallucinate” by entering a “delusional loop,” and begin “acting erratically.” Whether it is DJ X, image generators, or chatbots, these systems imitate human patterns so closely that it becomes hard to distinguish AI from humans, making the boundary between assistance and harm more ambiguous, just as in our prior class activity.
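Stollnitz's point about next-word prediction can be illustrated with a minimal sketch (the tokens and probabilities below are invented for illustration, not drawn from any real model): given a context, a language model assigns each candidate next token a probability and then samples from that distribution, producing fluent text without any grasp of meaning.

```python
import random

# Toy next-token distribution a model might assign after a prompt such as
# "Listening to music should be about how you ..." -- values are invented.
next_token_probs = {"feel": 0.6, "think": 0.3, "dance": 0.1}

random.seed(42)  # fix the seed so the sample is reproducible

# Sampling: pick one token, weighted by its probability -- this is the
# "probability distribution" step Stollnitz describes.
tokens = list(next_token_probs)
weights = list(next_token_probs.values())
next_token = random.choices(tokens, weights=weights, k=1)[0]
print(next_token)
```

Real systems repeat this step token by token, feeding each choice back into the context, which is why the output reads fluently even though, as Stollnitz notes, the model "doesn't understand meaning."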
This ambiguous boundary connects to our previous reading on “Ad Blocking” and what Marti (2017) calls the “ad blocking paradox.” In today’s world, targeted advertisements follow users through “retargeting,” creating “information asymmetry.” When tracking becomes invasive, people respond with “tracking protection,” “banner blindness,” or simply install ad blockers. Likewise, Gold states that Sydney was “trained on huge datasets of human text scraped from the web,” which can leave users dissatisfied or uneasy. Just as generative AI systems recreate human patterns, targeted ads blur the boundary between “signal” and “noise.” Ultimately, when Spotify’s AI DJ “X” starts telling me why I should like a track, it shows how systems meant to help us can quietly shape our experiences in ways we do not notice.
### Nov 21 Fri - Algorithmic bias
A “Google bomb” once made the phrase “miserable failure” return George W. Bush as the top search result. This manipulation predates AI’s place in our daily lives, yet it already foreshadowed how malleable "truth" is in a world run by algorithms. As O'Neil (2016) states, "models are opinions embedded in mathematics" (p. 21). She explains that models rely on "proxy data" and identifies three core elements, *Opacity, Scale, and Damage* (p. 31), that constitute a "Weapon of Math Destruction (WMD)." For instance, the U.S. News ranking exemplifies how a "self-reinforcing" feedback loop shaped colleges' "destiny," since the model "has scant grounding in statistical analysis" and was never an objective measure of education (pp. 51–52).
Rutherford and White (2016) illustrate how searching terms like "woman" and "child" produced "overwhelmingly white results," while "unprofessional hairstyles for work" returned images of Black women. Although Google claimed its algorithm was driven by "more than 200 different 'clues'" such as "popularity," "context," and "meta-tagging," these proxies can inadvertently reproduce racial biases and hierarchies. Furthermore, Hochman's (2023) experiment with ChatGPT exposed how it "polices wrongthink": when a user prompted it to write a story in which Trump won, the AI responded with "False Election Narrative Prohibited," yet it was willing to generate a fictional story in which Hillary won.
These readings by O'Neil, Rutherford and White, and Hochman on how algorithms control what we see reminded me of Gold's (2023) discussion of Sydney, Spotify, and Speedy. Modern AI consists of "huge, alien piles of math" that even its creators do not fully understand; Microsoft's chatbot Sydney, for example, had to be "lobotomized." This parallel shows how both Google Search and AI tools can magnify bias through unexamined assumptions. Personally, I do not believe algorithms like the U.S. News ranking should be built on proxies, since the true standard should be how effectively an institution fosters students' growth, not simplified numbers. Hence, algorithmic bias is not just a technical trick like the Google bomb, but the product of systems that are *opaque*, operate at a large *scale*, and cause real *damage* to people's lives.
### Dec 02 Tue - Digital language and generations
[The loudly crying face emoji](https://www.iemoji.com/view/emoji/25/smileys-people/loudly-crying-face) means something entirely different to younger users: it can express embarrassment, disbelief, being touched, or even stand in for "lol." These semantic shifts capture how digital language evolves across generations. McCulloch (2019) argues that the internet has created distinct cohorts of "Internet People," each defined by when they first went online: *Old Internet People* established norms with "emoticons like :-) and :-/, all caps as shouting, and a list of acronyms" (p. 72); *Full Internet People* learned "internet slang from context and their peers, and associate it with tone of voice" through messaging (p. 83); *Semi Internet People* treated online space as a site for "functional tasks," such as reading news and making travel plans (p. 85); and *Post Internet People* favor lowercase (p. 104). These "cohorts" inhabit different linguistic worlds, producing a digital age in which the same symbol does not merely vary in meaning but functions as a different tone.
McCulloch's interview with Audie Cornish builds on this argument by explaining how new rules require people to "interpret your tone of voice" without any vocal clues. For instance, McCulloch (2019) describes punctuation as a new "battleground," in which a simple period can add "solemnity," "finality," or even "passive aggression." Meanwhile, younger users deploy "lol" as a softener with a double meaning: "I hate you lol" becomes a friendly joke, whereas "I love you lol" signals insincere, dismissive distance. By contrast, older people who learned LOL from explicit rulebooks and acronym lists still read it literally as "laughing out loud." Thus, McCulloch argues that digital language changes according to what people actually do socially online rather than according to formal rules of grammar.
McCulloch's idea of how different Internet People interpret tone also connects to our previous class discussion about online dating. Just as younger users rely on softeners, double meanings, and lowercase to hint at their emotions, [Vinter's (2023) article on dating apps](https://archive.ph/67Xzr) quotes an interviewee describing digital interaction as "quite soul-destroying," reduced to "hours of admin." Perhaps this is why the tone we use online so often leads to misunderstanding across generations.
### Dec 05 Fri - Pushback
"I'm alive. You're alive. It's beautiful. That's why we shouldn't be consuming life through technology," said Ms. Watling (Vadukul, 2023). Her words encapsulate the argument that "overloaded users are pushing back against the permanent connectivity... calling evertime," where *evertime* refers to the "non-stop expectation of availability exacerbated by portable and wearable technologies that tether the users" online. Morrison and Gomez (2014) outline the primary motivations for pushback, both exclusive and non-exclusive: *emotional dissatisfaction, external values, taking back control, addiction, and privacy*. People act on these motivations through behavioral adaptations such as "deleting all the social media applications" and "disabling all email accounts." Some even resort to "dropping out from technology altogether," the back-to-the-woods method that the young people in Vadukul's article embrace.
Additionally, Vadukul describes young people who "relied on flip phones and laptops, rather than smartphones," and who complained about their "pre-Luddite life," when the pull of apps like Instagram, Snapchat, and TikTok meant they "fell asleep groggy and irritable." Therefore, they joined the Luddite Club to center their lives on literature and human connection.
Morrison and Gomez's observation that "papers in journals... recognize information overload... identifying user concerns with Web-related identity and privacy issues" connects to our earlier Privacy Footprint activities, where our identities proved exposed on webpages, True People Search, and social networks through a simple name search. As [Haridy (2019)'s article about Facebook](https://newatlas.com/computers/facebook-not-secretly-listening-conversations/) describes, platforms like Facebook use surveillance advertising to predict user behavior through massive data tracking. Consequently, as privacy concerns mount, pushback also means metaphorically stepping out of "continuous partial attention," "bowling alone," and the "filter bubble." Indeed, if we choose to embrace this vast world without *evertime*, we will find it filled with beauty worthy of our pursuit.