###### tags: `CDA`

# Reading Responses (Set 2)

Checklist for a [good reading response](https://reagle.org/joseph/zwiki/Teaching/Best_Practices/Learning/Writing_Responses.html) of 250-350 words:

- [ ] Begin with a punchy start.
- [ ] Mention specific ideas, details, and examples from the text and earlier classes.
- [ ] Offer something novel toward class participation.
- [ ] Check writing for clarity, concision, cohesion, and coherence.
- [ ] Send to professor with "hackmd" in the subject, including the URL of this page and the markdown of today's response.

## Reading responses 5 out of 5

### Mar 1 Fri - Collapsed Context

"There is no such thing as universal authenticity; rather, the authentic is a localized, temporally situated social construct that varies widely based on community" (p. 124). Through this quote, Alice E. Marwick and Danah Boyd introduce the concept of perceived authenticity, analyzing identity performances on social media sites. By incorporating self-presentation theory, symbolic interactionism, and coded communication into their analysis, Marwick and Boyd demonstrate publicity culture's effect on people's inability to present a true, "authentic" version of themselves. Focusing on Twitter, their study revealed a broad range in the types of audience Twitter users believe they are posting for. Some claim they are posting for themselves as a "self-conscious, public rejection of audience" (p. 118), while others claim they are posting for their friends, a broad label that can range from close, in-person friends to mutual online strangers. Marwick and Boyd illustrate that despite this range, there is always an inescapable audience on social platforms that users are performing for, whether consciously or not. The app BeReal launched in 2020, aiming to disrupt the habit of performance that is prevalent on social media sites.
Brooke Erin Duffy and Ysabel Gerrard argue that despite its intention of deconstructing the performative standard of social media, BeReal is having the inverse effect and is perpetuating a pressure Gen Z is all too familiar with: always being camera-ready. Instead of allowing users to feel comfortable doing "boring" things, BeReal has pressured users to always do something "interesting" for fear of being "caught" in the wrong moment. Four years after its release, BeReal has changed many of its initial functions. Today, you can still post a BeReal even if you missed the initial timer, and many people, sometimes including myself, wait until the most exciting moment of their day to participate. Similarly, when users post within the two-minute window, they are "awarded" three more BeReals for the day, which they can take at any (usually the most interesting) moment. I recently saw that the newest BeReal update incorporated brand and celebrity accounts for users to follow. Upon seeing these accounts for the first time, my initial thought was that through these "business" accounts, BeReal could discreetly advertise its business partners. After these readings, I now see that these updates go against the app's initial "no ads, no bullshit" motto. As Duffy and Gerrard say, even though BeReal aimed to be the "antidote to social media fakery," the app has lost its original mission in the effort to stay relevant and compete with other social media platforms.

No matter how much we try to "be real," as members of society, we constantly perform while simultaneously complaining about the lack of authenticity in the media. The overall feeling is a "sense of disillusionment," which got me thinking about how we crave more realness from influencers and content creators each day. One of the most prominent examples of this is the quick rise to fame of Alix Earle on TikTok.
Portraying herself as authentic, the creator rapidly amassed a following by showing her "messy girl" vibe, dirty room, and UMiami college party aesthetic. Her first followers thanked her for showing her "unfiltered self" and not performing for the cameras. However, the more famous Alix Earle gets, and the more she establishes a "brand" for herself through podcasts, vlogs, and TikToks, the clearer it becomes that she is performing, whether consciously or not, for fans who want to see the specific "partier" side of her that first made her famous.

### Mar 19 Tue - Ads & social graph

Background: I have been immersed in the business of advertising for as long as I can remember. My mom and dad work as creative directors in advertising agencies, and for years, I've listened to them explain how the Internet has changed and continues to change their work. Having graduated in the '90s, they learned the tools they use in their jobs today through practice rather than theory. However, as much as they complain about the dangers the Internet poses to advertising (e.g., how an influencer's thirty-second video can now be more influential than a commercial itself), they also talk about how the Internet has created precision in their jobs. The readings for today put their words into perspective and helped me understand exactly how that "precision" is created.

Online advertising is a branch of promotion and marketing that specializes in placing adverts on the web and the Internet (Stokes, p. 294). Like other types of advertising, it seeks to bring awareness to a brand, generate sales, and raise a "share of voice in the marketplace," but unlike other types of advertising, it is trackable (p. 294). This is its main advantage: through tracking, online ads yield key insights, feedback, and information that advertisers use to create better and more precise ads. Digital adverts engage in tracking through the use of cookies (Vox, 2020, 0:11).
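The cross-site tracking that cookies enable can be made concrete with a toy simulation. Everything below (the `AdServer` class, the domain names, the cookie jar dictionary) is invented for illustration; real browsers and ad servers do this through HTTP `Set-Cookie` and `Cookie` headers and are far more complex.

```python
import uuid

class AdServer:
    """Stands in for a third-party ad server embedded on many sites."""
    def __init__(self):
        self.profiles = {}  # cookie ID -> list of sites visited

    def serve_ad(self, site, cookie_id=None):
        # First visit: the server issues a fresh ID (like a Set-Cookie header).
        if cookie_id is None:
            cookie_id = str(uuid.uuid4())
            self.profiles[cookie_id] = []
        # Later visits: the browser sends the ID back, letting the server
        # link visits across otherwise unrelated sites.
        self.profiles[cookie_id].append(site)
        return cookie_id

tracker = AdServer()
browser_jar = {}  # the browser's cookie jar: domain -> cookie ID

for site in ["news.example", "shoes.example", "news.example"]:
    # The ad embedded on each page talks to the same tracker domain,
    # so the same cookie ID is sent regardless of which site we are on.
    browser_jar["tracker.example"] = tracker.serve_ad(
        site, browser_jar.get("tracker.example"))

# The tracker now holds a cross-site browsing history tied to one ID.
history = tracker.profiles[browser_jar["tracker.example"]]
print(history)  # ['news.example', 'shoes.example', 'news.example']
```

The key point of the sketch is that the cookie is keyed to the tracker's domain, not the site being visited, which is exactly what makes third-party cookies useful for building ad profiles.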
As we discussed in class, a cookie is a "file" of your information and ID that a server asks your browser to remember and send back the next time you visit. Third-party cookies, however, are those same bits of information coming from sites you might never visit directly; they work through content embedded in the pages you do visit. In the video, Vox's Cleo Abram explains how third-party cookies follow you across websites and gives examples of embedding and shared information, such as Facebook owning Instagram and Google being the company that places most of the ads on other sites. Advertisers can know how we react to and engage with their ads through tracking. This helps with targeting and optimizing, giving specific information like the number of times an ad has been seen, the number of times it has sent people to the advertised website, and how successful it is in creating sales and revenue (Stokes, p. 311). Additionally, it can reveal the browser, internet service provider, and time of day at which the engagement took place. Tracking also makes ads more efficient by providing tools for frequency capping, geo-targeting, sequencing, and exclusivity (pp. 309-310).

As a social media intern for a student-run snack company, the analytics behind ad performance help me daily in creating ad efficiency. Through tools like Instagram for Business, I can see what time of day people are most likely to see my posts, how many of them actually went to the website through them, and even in which states my ads are gaining the most traction. It is nice to now understand the logistics of what goes on behind the screen and how the information presented to me is gathered.

### Apr 2 Tue - Artificial Intelligence

The arrival of AI “has driven some to declare the end of high school English, and even homework itself,” says Rebecca Heilweil regarding the recent popularization of and debates surrounding the generative technology.
Over the past few years, the use of artificially intelligent programs like ChatGPT and Stable Diffusion has become increasingly common, fueling debates about an AI takeover and its “possibility of making some jobs obsolete” (p. 4). The programs take users’ requests and instructions and use them to create images and text that are “pretty close” to a human’s work for little to no cost (p. 3). However, the key words here are “pretty close”: AI cannot replace humans because it lacks traits that only we have, like empathy and the ability to innovate. After multiple debates and peer discussions, I have come to this conclusion, which helps me appreciate the ways AI can aid humans and understand how it might change our professional landscape without completely replacing us.

When I say that AI cannot “innovate,” I refer to how the tool works through “machine learning.” This process feeds existing data to the system, which then imitates it to “produce something new based on its previous experience” (p. 5). Additionally, because the system is not human, it can never display emotions like empathy, which are necessary to understand and solve our everyday problems. To me, this means that in fields that rely heavily on innovation and problem-solving, like business, architecture, and medicine, AI will not replace human jobs or create a unique business venture, building structure, or medical treatment. While the creators of ChatGPT say the tool is “not yet ready to be relied on for anything important,” I don’t think the tool should ever be something we rely on but rather something we use as a supplement in our jobs.

Even though artificial intelligence can aid us in many ways, like most developing technologies, it can produce “complicated trade-offs” and side effects. Some recent AI controversies dealt with the legalities of who deserves credit and profit from works generated by the tools (p. 12).
A notable example happened last November, when Bad Bunny denounced a viral song written, recorded, and released on TikTok by a fan who used AI to replicate his voice. The artist expressed distaste for his art being copied and used by others without his consent. To combat similar issues and protect artists’ ownership and rights, the AI image generator Stable Diffusion updated its platform to make it harder for the tool to create images copying the styles of artists who never approved the reproductions (Vincent, p. 1). In the update, Stable Diffusion also reduced the program’s ability to generate nude and pornographic images by removing similar content from its training data (p. 3). This is just one of the many steps companies are taking to ensure that emerging AI programs follow principles that allow them to contribute to the greater good and be “socially beneficial” (Gold, p. 20).

### Apr 5 Fri - Algorithmic bias

"Models are opinions embedded in mathematics," says O'Neil when explaining the harmful correlation between bias and patterns (p. 8). Models are representations and recordings of processes that help people predict possible outcomes based on past occurrences (p. 5). Healthy models are fair because they are constantly updated and transparent to everyone, but toxic models, which O'Neil calls weapons of math destruction (WMDs), also exist. WMDs use proxies, or stand-in data, to fill the gaps of missing information in their models (p. 4). Models can also be deeply biased: when creators decide what data is included and excluded, they create blind spots that "reflect the judgments and priorities of its creators" (p. 8). The resulting correlations can be "discriminatory… and some illegal" (p. 4). With technology, recidivism models that work through algorithms have emerged, but they are embedded with assumptions (p. 12). The LSI-R questionnaire, a test that categorizes prisoners as high, medium, or low risk, is an example of a biased WMD.
While the test does not ask about race, it asks questions about lifestyle, upbringing, and family and friends, which put non-white people at a disadvantage due to society's assumptions (p. 13). The LSI-R was used as a score judges could refer to when sentencing, and it helped enforce and maintain a toxic cycle (p. 14). While algorithms are heavily mathematical, they are still strongly shaped by the ideas of a community.

Biases similar to those created by the LSI-R only got worse with the rise of the Internet, resulting in racial bias in search engine results like Google's (Rutherford & White, 2016). Google searches like "beauty," "woman," and even "beautiful dreadlocks" show predominantly white results, while the results for "unprofessional hair for work" and "three black teenagers" are discriminatory toward Black people (pp. 5-6). Society's biases create algorithmic biases because what is most popular and interacted with in the media and on the Internet is "what the engine's algorithm ends up reflecting" (p. 3). This article reminded me of a peer discussion I had about racial bias in medical websites and search engines. When you look up pictures of any skin disease, from acne to skin cancer, the search results are overwhelmingly white. This is an issue because it makes it harder for non-white people to detect possible skin diseases and seek the medical help they need before it's too late.

As AI develops and gains popularity, commentators like Hochman (2023) are stumbling upon built-in ideological biases. ChatGPT, an AI text generator, tends to "suppress or silence viewpoints" and take a clear stance on political matters. Its political stance is reflected in its answers to requests to create stories involving themes like Trump and Biden, vaccination, voter fraud, and drag queens (Hochman, 2023). The algorithm of ChatGPT generates results that represent popular ideas but, in the process, creates a double standard and "polices wrongthink" (Hochman, 2023).
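Claims like these could, in principle, be audited quantitatively: collect results for a neutral prompt, have raters label them, and measure the skew. Below is a minimal sketch of such an audit; the prompts, labels, and counts are entirely invented stand-ins for rated search or generator output, not real data.

```python
from collections import Counter

def representation_share(results, group):
    """Fraction of labeled results belonging to `group`."""
    counts = Counter(results)
    return counts[group] / len(results)

# Hypothetical labels a human rater might assign to 10 generated images
# per prompt (invented for illustration only).
fake_results = {
    "beauty":  ["white"] * 8 + ["black"] * 1 + ["asian"] * 1,
    "friends": ["white"] * 6 + ["black"] * 2 + ["asian"] * 2,
}

for prompt, labels in fake_results.items():
    share = representation_share(labels, "white")
    print(f"{prompt}: {share:.0%} of results coded as white")
    # beauty: 80% of results coded as white
    # friends: 60% of results coded as white
```

Comparing these shares against a population baseline is one simple way to turn an impression of bias into a measurable claim.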
I wonder, and would like to test, whether AI image generators like [mage.space](https://www.mage.space) would also display racial bias when asked to generate images for prompts like "beauty," "woman," or "friends" with no racial context given.

### Apr 16 Tue - Pushback

With so many technology tools and the widespread connectivity society experiences daily, a new outlook known as “pushback” has emerged. Pushback refers to “expressions of resistance to and saturation with communication technologies and information overload” (Morrison & Gomez, 2014). In other words, it is a growing sentiment and desire for detachment within part of the online community, which believes life is better when people take control of their technology use. According to a study by Morrison and Gomez (2014), some of the motivations behind pushback include emotional dissatisfaction, addiction, privacy, and the need to take control. The movement’s followers usually engage in behavioral adaptation and tech control, such as routinely locking their phones away or deactivating social media accounts. Contrary to past beliefs, Morrison and Gomez (2014) found that pushback is felt across all age groups, from “digital natives,” those who grew up with the Internet, to “digital immigrants,” those who adapted to it later in their lives.

Pushback is a direct consequence of “evertime,” the condition in which “technology users can and often are continuously connected to the Internet and its communication services” (Morrison & Gomez, 2014). Being connected at all hours of the day by communication technologies changes people’s relationships, often putting more stress and pressure on users. For example, parents have become dependent on evertime to track their children, which pressures both parties; as one parent quoted by Vadukul (2022) put it, “You follow your kids now… we’re the helicopter parent generation.”
For kids who want to quit technology altogether, like 17-year-old Logan Lane, founder of the Luddite Club, this parental tracking is what holds them back from full “pushback.” Her high school organization, the Luddite Club, “promotes a lifestyle of self-liberation from social media and technology” (Vadukul, 2022).

The Morrison and Gomez (2014) study examines the presence of pushback in places like popular news media and blogs. In popular news media, there are “personal accounts of disenchantment with technology” (Morrison & Gomez, 2014). These accounts are covered by the press, where people report their “virtual suicide” (Morrison & Gomez, 2014). This reminds me of the many times Selena Gomez has left Instagram. Whenever she is involved in an online controversy, she takes to Instagram to release a statement saying she needs time offline to work on herself. However, just a few days later, she returns to Instagram and starts posting again as if nothing happened. The accounts of pushback in blogs “address the audience as peers, discussing experiences in a reflective way… to those who they presume might share the same concerns” (Morrison & Gomez, 2014). This form of pushback expression reminds me of a virtual Luddite Club, a community meeting every week to discuss disconnection. When reading about the Luddite Club, specifically the parents’ argument for tracking their children, I wondered: if the world is so heavily reliant on technology and phones, could complete pushback ever be possible?