# Reading Response Set 2

## March 24th - Manipulated

Online reviews can be manipulated in multiple ways that you might not be aware of. Corporations can buy and sell fake reviews, and merchants can offer to pay customers to leave positive reviews, completely undermining “our collective power as consumers” (Fowler, 2023). Facebook, Twitter, and other social media platforms make it easy for companies to recruit and hire fake review writers. Beyond paid manipulation, reviews can be attributed to people who don’t exist, usually generated by artificial intelligence or written by real people who have no experience with the product or company.

There are different types of fake reviewers who can manipulate online reviews for or against a company. Fakers will “deceptively praise their own works or pillory others”, makers will do it for financial compensation, and takers take advantage of these services (Reagle, 2019). All of these manipulated reviews make it more difficult for customers to discern which reviews are genuine, because they actively distort the online platforms that customers rely on for information. Knowing what to trust is difficult in a manipulated environment; as E.Z. stated, there has been a “‘loss of innocence’ with online discussions, comments, and reviews” (Reagle, 2019).

Review sites like Google, Yelp, and Amazon have ultimate control over what is published and what is taken down, but they also profit from reviews, which creates a conflict of interest (Fowler, 2023). For example, Amazon claims to have blocked “more than 200 million suspected fake reviews in 2022”; however, these numbers are self-reported and difficult to verify independently (Fowler, 2023). According to Reagle (2019), people are more suspicious of “a numerical rating than a textual review.” The most reliable signals of an authentic review are verified purchases and detailed textual reviews, rather than numerical ratings alone.
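The trade-off described above, weighting verified purchases and detailed text over bare star ratings, can be sketched as a toy heuristic. The field names, weights, and scoring scheme here are illustrative assumptions for this response, not any platform’s actual fake-review detection method:

```python
# A minimal sketch, assuming a review is a dict with hypothetical fields
# "verified_purchase", "text", and "rating". Weights are arbitrary choices
# meant only to rank the signals in the order the readings suggest.

def trust_score(review: dict) -> float:
    """Score a review's likely authenticity from 0.0 to 1.0."""
    score = 0.0
    # A verified purchase is the strongest signal that the reviewer
    # actually bought and used the product.
    if review.get("verified_purchase"):
        score += 0.5
    # Detailed text carries more information than a bare star rating;
    # reward longer write-ups, capped at 100 words.
    words = len(review.get("text", "").split())
    score += min(words / 100, 1.0) * 0.4
    # A numerical rating alone contributes very little.
    if review.get("rating") is not None:
        score += 0.1
    return round(score, 2)

# A bare five-star rating with no text scores near the bottom...
print(trust_score({"rating": 5, "text": "", "verified_purchase": False}))
# ...while a verified purchase with a detailed write-up scores much higher.
print(trust_score({
    "rating": 4,
    "verified_purchase": True,
    "text": "Used this blender daily for a month; the motor stays "
            "quiet and the jar seals well, though the lid is stiff.",
}))
```

The point of the sketch is simply that an unverified numerical rating contributes almost nothing on its own, mirroring Reagle’s (2019) observation that readers are rightly more suspicious of numbers than of text.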
## April 7th - Algorithmic Bias

Algorithms exhibit biases because they reflect and amplify the biases that exist in society, mostly through the way they are programmed to respond to user behavior. Algorithms aren’t inherently biased; they prioritize content based on what has been uploaded, tagged, and clicked on by users. The signals users generate are mirrored by the algorithm. For example, Google’s search engine uses “more than 200 different clues to work out what people might be looking for”, and “the problem was with biases that exist within the media and on the internet” (Rutherford and White, 2016). Another example is the bias displayed when searching for “professional vs unprofessional hairstyles for work”: if the images most associated with “professional” are photos of white women’s straight hair while those associated with “unprofessional” are of Black women’s curly hair, the algorithm learns these associations. By prioritizing engagement, the algorithm creates a loop where existing stereotypes are reinforced and presented as objective truth.

A major reason these biases persist is that, because these systems are automated, there is a misconception that the algorithm is neutral. However, algorithms are not independent of the humans who developed them or the society that feeds them information. When an algorithm is programmed to prioritize certain signals over others, and is influenced by the biased information it receives, it ceases to be a neutral tool and instead becomes a mechanism that can perpetuate systemic inequality and false information. For example, when asked to create a story about how Trump beats Joe Biden in the 2020 election, ChatGPT responded with “an Orwellian False Election Narrative Prohibited banner writing: I’m sorry but that scenario did not occur in the real 2020 United States presidential election”.
However, when asked to write a story about Clinton defeating Trump, “it readily generated a false narrative” and declared “Clinton’s election as the first female president in US history” (Hochman, 2023). Essentially, algorithms display bias by mirroring the data patterns of a biased society and by offering tools that allow for the intentional exclusion or misinformation of specific groups.

## Digital language and generations

Different generations inhabit the digital age differently depending on when and why they first went online. This timing shapes how individuals perceive the internet and how they evolve within it. McCulloch (2019) identifies distinct groups based on their internet experience. The “Old Internet People” are the founding population because they “remember the old internet” (p. 68). They inhabited the early, technical internet, and their online dialogue overlaps with “programmer jargon” because “knowing how to program was the only way to get online” (p. 71). The “Full Internet People” used the internet “as a medium for their social lives”; they “tend to be younger, still in school and susceptible to new trends” (p. 77). They integrate their online and offline identities, viewing the internet as a space in which to create themselves. The “Semi Internet People” used the internet as a tool for work; they tend “to be older, in the workplace, and with an established social life” (p. 77). They maintain boundaries between their real lives and their online lives, and they are cautious “toward getting to know people primarily online” (p. 85). The “Pre Internet People” adapted as the internet became unavoidable in society. They assumed “they could get by … without it” (p. 93), yet “50 percent by 2012” (p. 93) of Americans over age 65 were using the internet. The “Post Internet People” grew up with the internet; they are “socially influenced by the internet” (p. 100).
It may seem they are addicted, when it should instead be seen as “life stage related”, since “sociability is highest among teenagers and young adults, and declines as people get older” (p. 102). Online language evolves to solve the problem of missing physical cues, because we cannot see body language. Internet language functions as a category between formal writing and informal talk, and it has evolved digital gestures that create new meanings and tones. Internet language depends on the user and how well versed they are in current trends. For example, a “Full Internet” person could use punctuation to convey a specific tone online, as if they were talking, while a “Pre Internet” person uses punctuation with no intended tone, as if they were writing a formal paper.

## Pushback

Future generations are fighting back against the exhausting expectation of constant availability and choosing to pull back from the digital space. Pushback is the phenomenon of users resisting the relentless connectivity of technology to regain control over their lives. Researchers identified five primary motivations for this resistance: “emotional dissatisfaction, external values, taking control, addiction, and privacy” (Morrison and Gomez, 2014). These motivations manifest through behaviors ranging from simple behavioral changes to the extreme “Back to the Woods” response of going “completely offline, or at least [adopting] severely limited Internet usage” (Morrison and Gomez, 2014). This resistance is driven by the awareness that the overwhelming expectation of always being connected through communication technology leads to “feelings of saturation, overload and disenfranchisement” (Morrison and Gomez, 2014). An example of young adults who are pushing back is “The Luddite Club”, formed by a group of Brooklyn teenagers, which provides insight into the long-term viability of pushing back as its members transition to adulthood.
In this example, the teenagers traded their smartphones for flip phones, and they continue to maintain their low-tech habits as they move into the high-pressure environment of college. College has changed their relationship with technology, and some members now use minimalist devices to navigate their academics, but their core philosophy remains the same. They still view the current digital experience as “polluted”, with one member explaining that a smartphone screen is filled with “just garbage, like the canal” (McCulloch and Cornish, 2019). This metaphor captures how the sensory and mental pollution of digital overstimulation parallels the polluted waterway in Brooklyn. It reinforces their stance against the abuse of technology, and against the manipulation and consequences of hyper-targeted advertising and constant connectivity.