# Reading Responses (Set 2)

Checklist for a [good reading response](https://reagle.org/joseph/zwiki/Teaching/Best_Practices/Learning/Writing_Responses.html) of 250-350 words

- [x] Begin with a punchy start.
- [x] Mention specific ideas, details, and examples from the text and earlier classes.
- [x] Offer something novel toward class participation.
- [x] Check writing for clarity, concision, cohesion, and coherence.
- [x] Send to professor with “hackmd” in the subject, with the URL of this page and the markdown of today’s response.

## Reading responses 5 out of 5

### Mar 19 Tue - Ads & social graph background

Just keep swimming, just keep swimming, just keep swimming swimming swimming. Sound familiar? Before cookies were created, the experience of the web was like having a conversation with Dory, the fish from “Finding Nemo” who suffers from short-term memory loss. We have Lou Montulli to thank for the invention of cookies, so we no longer have to deal with Dory. Although his invention makes our lives easier, there are rising concerns about whether our online activity is still safe or private. Montulli voices his concerns in his interview with Vox, saying that "we will continue to fight a technological tit-for-tat war that will never end unless legislative policy is put into place."

In both the Vox video and the chapter from Rob Stokes's "Online Advertising," there are sections dedicated to the concept of "tracking." The Vox video takes a fear-instilling approach to the entire conversation. Cleo Abram describes tracking by explaining how websites eventually became full of elements hosted by third parties. Each of those elements can save its own cookies in one's browser. Those cookies are created by the third party's domain, which can then access data not only from the site you're on but from every site you visit that uses the same third-party elements.
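The cross-site mechanism Abram describes can be sketched as a toy simulation. Everything here is my own illustration (the class name `ThirdPartyAdServer` and the `.example` sites are invented); in a real browser the cookie travels in HTTP `Set-Cookie` and `Cookie` headers, not as a Python value:

```python
# Toy model of third-party cookie tracking: one ad domain embedded on
# many first-party sites can stitch a visitor's browsing into a profile.
import uuid


class ThirdPartyAdServer:
    """Stands in for an ad domain whose elements many sites embed."""

    def __init__(self):
        # cookie id -> list of first-party sites where it was seen
        self.profiles = {}

    def serve_element(self, cookie, first_party_site):
        # On first contact, mint a cookie; afterwards, recognize the visitor.
        if cookie is None:
            cookie = str(uuid.uuid4())
            self.profiles[cookie] = []
        self.profiles[cookie].append(first_party_site)
        return cookie  # the browser stores this under the ad server's domain


tracker = ThirdPartyAdServer()
cookie = None  # the "browser" has no cookie for the ad domain yet
for site in ["news.example", "shop.example", "blog.example"]:
    cookie = tracker.serve_element(cookie, site)

# The ad domain now knows the whole browsing path, even though the user
# never typed its address into the browser.
print(tracker.profiles[cookie])
```

The point of the sketch is that the first-party sites never share anything with each other; the shared third-party element alone is enough to link the visits.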
Stokes, on the other hand, explains the many different types of ads one might see: banner adverts, interstitial banners, popups and pop-unders, floating adverts, wallpaper adverts, and map adverts. When he discusses tracking, he adds to the fear element: "Many third-party ad servers will set a cookie on impression of an advert, not only on clickthrough, so it is possible to track conversions that happen indirectly (called view-through conversions)." Basically, no one is safe! After reading and watching this, I cannot wait to download the ad blocker on my computer for our next assignment.

Some questions that came to mind as I was watching and reading the material were:

- Should everyone just use a private browser from now on? Or are those not as private as we think they are?
- Is Facebook Pixel even legal?
- When it comes to tracking, when one is not on the web but just using their phone, isn't the device maker, say Apple, tracking their keyboard swipes among other things?

### Mar 22 Fri - Manipulated

In a digital age where every click carries weight and every review holds sway, the battle for online reputation has become a high-stakes game of survival! Both Geoffrey Fowler's "Fake reviews are illegal and subject to big fines under new FTC rules" and Joseph Reagle's "Manipulated: Which ice cube is the best?" paint a vivid picture of the complexities surrounding user-generated content, revealing the cat-and-mouse game between authenticity and manipulation in online interactions.

Fowler plunges the reader into the depths of the digital realm, where the utopian vision of early internet pioneers collides with the harsh realities of profit-driven platforms. The prevalence of fake reviews, estimated at "30% to 40%," undermines consumer trust. The Federal Trade Commission has proposed harsh rules to combat fake online reviews, with fines of up to $50,000 for each instance of deception.
The proposed rules clarify what constitutes deceptive practices, including misrepresentation of experiences, fake reviewers, and undisclosed conflicts of interest. Although they do not hold major review platforms directly accountable, the rules aim to deter fraudulent practices and empower the FTC to take legal action against offenders.

Meanwhile, Reagle delves into the world of online reviews and reputation management, exposing the tangled web of incentives and ethical dilemmas faced by businesses, users, and platforms. From the murky waters of "pay to play" extortion to the insidious influence of sponsored content and social graphs, the chapter highlights the pervasive impact of financial motivation on online discourse. E.Z.'s grief over the "loss of innocence" in online discussions movingly encapsulates the erosion of authenticity in the face of profit-driven incentives. As we move through this tricky space online, we as consumers have to stay alert to money-driven influences creeping into our digital lives, or we risk losing the realness and honesty that once made the internet special.

Some questions that came to mind as I was watching and reading the material were:

- How do online review platforms like Yelp balance the need for authentic user feedback with the potential for manipulation and extortion?
- To what extent do extrinsic motivations, such as receiving free products through Amazon's Vine program or TikTok's brand deals, affect the authenticity and integrity of user reviews?

### Mar 26 Tue - Bemused

Have you ever gotten sucked into the online rabbit hole, laughed at or learned something from a comment section, had an account get hacked, or fallen asleep listening to ASMR? Because I have! And if you ask anyone with an online presence around my age, I can guarantee they can relate to at least one of the interactions above.
As I read chapter 7, "Bemused: WTF!," of Joseph Reagle's book *Reading the Comments*, I found myself relating to each of its sections. Online comments often reflect collective confusion and amusement, demonstrated through reviews and reactions to absurd products on platforms like Amazon. This conclusion reminded me of our in-class discussion about authenticity, where we talked about why humans are so obsessed with rating everything, which led to our discussion of Amazon Vine and other reviews found online. The chapter highlights examples of humorous and bizarre product reviews, showcasing how online commenting can lead to unexpected and entertaining content. Funny Amazon reviews mentioned in the reading included "sex lube" and “naturally occurring radioactive materials.” My favorite product reviews are when people review mirrors; the pictures they attach always make me giggle. Although some reviews are hilarious and spark a good laugh, others actually expose absurdities in products, social contexts, and recommendations. So, in my opinion, reviews are always worth reading.

The section "Excuses: 'I Was Hacked!'" of chapter 7 takes a more serious tone, addressing a too-often-used escape route of the internet. Instead of taking responsibility for their actions, people often resort to excuses like claiming they were "hacked" to mitigate the consequences. The section explores instances of individuals, like Amy and Samy Bouzaglo, using the "hacked" excuse to deflect responsibility for controversial or embarrassing online behavior, illustrating the challenges of authenticity and accountability in digital communication. The internet also facilitates the emergence of new phenomena, such as the autonomous sensory meridian response (ASMR), through online communities and discussions.
Online platforms, and I would argue influencers as well, help foster niche interests and communities where people can come together and communicate with one another. Alongside TikTok and the emergence of ASMR, there are now also what are called "mukbangs." Mukbangs are videos in which people record themselves, and the sounds they make, as they eat: another form of ASMR. My go-to community is the fashion side of TikTok, where I see not only what people are posting but also what they are saying about fashion in the comments. For a really good laugh, I go to Trisha Paytas's TikTok and read her comments. My favorite TikTok comment section so far was on a video that asked women to comment one piece of advice they would give their younger selves. I stayed in that comment section for hours.

Some questions that came to mind as I was watching and reading the material were:

- How do online platforms and social media companies address and mitigate the spread of misinformation and harmful content in comment sections? A flagging and deleting system?
- In what ways do advancements in technology, such as AI and natural language processing, shape the future of online commentary and interaction?
- Can online communities and comment sections serve as effective spaces for activism, advocacy, and social change, or do they primarily reinforce existing power structures and inequalities?

### Apr 05 Fri - Algorithmic bias

In a world where the battle for success is waged on multiple fronts, the journey from college admissions to online algorithms and AI reveals the battlegrounds where competition and bias collide! From the cutthroat arena of college admissions, where students guard themselves against "weapons of math destruction," to the digital realm, where algorithms dictate our online experiences and AI chatbots like ChatGPT emerge as both allies and adversaries, the stakes have never been higher.
In the first chapter of O'Neil's "Weapons of Math Destruction," the idea of the "WMD" exposes the flaws of a system where standardized testing, extracurricular achievements, and personal stories compete for attention. With intense competition comes a high-stress environment in which families invest significant sums to secure their child's future, often at the expense of mental well-being and educational equity.

The third chapter discusses another battlefield: the U.S. News college rankings. The limitations of ranking systems come to light, with emphasis on metrics that prioritize factors like SAT scores and graduation rates over affordability and quality of education. The consequences ripple through the higher education landscape, shaping college behavior and influencing student and family decisions. This chapter in particular reminded me of a discussion from my one-credit freshman course last fall. My professor was going over how to plan our future years at Northeastern and showed a presentation that he said he was "forced" to show. It encouraged all of us to plan our time at Northeastern so that we graduate in four years. He shared that Northeastern is upset its ranking has fallen so much because students graduate in five years instead of four, which is why he had to show us the slides.

Transitioning from the college admissions realm to the digital sphere, the concerns raised in the second reading about Google's "racist" search results and image-labeling algorithms bring attention to another battleground: algorithmic bias. Despite Google's assertion that search results reflect societal patterns, evidence of racial bias in image search results raises questions about the neutrality of these algorithms. The World White Web project's efforts to diversify search results highlight the urgency of addressing these biases and their impact on online representation.
Before reading this article, I had never thought about how Google's search engine could technically be racist. It shocked me and opened my mind to a new perspective on how I view what I search for, especially images.

Venturing further into the digital world, the rise of AI-powered chatbots like ChatGPT offers a glimpse into the future of human-computer interaction. However, the National Review article uncovers the potential downsides of these advancements, revealing instances where ChatGPT shows ideological biases in its responses. The refusal to engage with certain topics or questions, and the selective treatment of political figures' corruption allegations, showcase the complexities of AI ethics and the implications for free speech. This article reminded me of our conversation last class about AI. We talked about how AI deals with, or can have, empathy, and how it will not tell you how to dispose of a dead body; but if you ask in a different way, say by telling the chatbot you are writing a fiction story, there are ways around the set guardrails. This article also made me recall that Dr. Reagle told us that if you tell the AI it is something like a space engineer, it will solve math problems more accurately. How does that loophole not count as unethical?

The common theme throughout these readings is the intersection of competition and bias in various domains. However, after reading these chapters and articles, I was left wondering:

- How can we ensure transparency and accountability in the design and implementation of algorithms and AI systems to mitigate biases and promote fairness?
- Are there alternative approaches to college rankings and admissions processes that could better prioritize factors like affordability, diversity, and educational quality? (Is the "test-optional" option when applying to college a step in this direction?)
### Apr 16 Tue - Pushback

As our collective timeline becomes increasingly dominated by digital connectivity, with every part of our lives intertwined with technology, rebellion is bound to occur. But this rebellion is not led by stereotypical revolutionaries with physical weapons; it is led by individuals and groups who have grown disappointed with the promises of technology. They desire something more authentic; some might say simpler.

Stacey Morrison explores the phenomenon of pushback in her article "Pushback: Expressions of resistance to the 'evertime' of constant online connectivity." Her article exposes our tech-savvy society by revealing a surprising truth: it is emotional dissatisfaction, rather than practical concerns, that fuels resistance to technology. The article further examines the various behaviors individuals adopt in response to that dissatisfaction. These include adaptation, where users modify their technology use to better fit their needs, and social agreements, where groups collectively agree to restrict technology use in certain contexts. Other strategies include tech solutions, such as downgrading to simpler devices or implementing screen-time or parental controls, and radical solutions like complete disconnection from the internet.

The Luddite Club, a group of teenagers in Brooklyn, is a perfect example of pushing back through social agreements and downgrading. Led by Logan Lane, these teens reject the confinements of modern technology in favor of analog experiences and human connection. Their weekly gathering in Prospect Park is evidence of the power of simplicity, where drawing, reading classic literature, and engaging in meaningful conversations take precedence over likes, notifications, and shares. Their philosophy challenges societal norms and raises critical questions about privilege, mental health, and genuine happiness.
Reflecting on these readings reminded me of a few summers ago when I had a longing to power down my iPhone and use only a flip phone and my camera. I never actually did it, even though I talked about it all school year; I do still want to try it sometime soon. Although I didn't follow through, I did set restrictions on my phone, and I stayed on top of limiting myself by obeying them when they appeared. I noticed a significant improvement in my mood when I was on social media less. However, I no longer have any restrictions on my phone... I should really enable them again.

Some questions that came to mind as I was reading the material were:

- Can the principles and values embraced by groups like the Luddite Club be scaled up to inform broader societal discussions about the role of technology in our lives?
- Will I ever actually trade my iPhone in for a flip phone for some amount of time?