# Reading Responses Set #2
### Finding Someone & Living Alone - RR #1
“Today’s daters are looking for nothing less than a human Swiss Army knife of self-actualization,” writes Derek Thompson for *The Atlantic*. In theory, this doesn’t seem too demanding in today’s world. With dozens of dating apps and millions of users, you’re bound to find The One at some point. Right?
Well, not really. Tinder’s user numbers dropped by 5% in 2021, according to an article from *The Guardian*, which names frustration and burnout among the feelings users are experiencing. Describing a strand of nineteenth-century Danish philosophy, Thompson writes, “anybody who feels obligated to select the ingredients of a perfect life from an infinite menu of options may feel lost in the infinitude.” Combine this anxiety with stories of ableism and harassment, a growing individualist culture, and the fact that most people inflate their age and income (according to OkCupid’s research), and it’s easy to see why people are leaving the apps.
Personally, I don’t like how dating apps make you present an extremely curated version of yourself. They do a lot of the work you’d already do on a first date, like asking about family, religion, occupation, and so on. Anyone can lie about these things in real life, too, but they can’t lie about their appearance: inflated height and beauty don’t hold up in person. While I agree that shifting away from the apps is a good idea for many people, minority groups tend to thrive on dating apps, where they may be able to find their communities more easily. Or, as one woman testifies in *The Guardian*, maybe we should just take a break from performative dating culture altogether. Isn’t performance still performance, whether it’s online or not?
### Manipulated - RR #2
Chances are, the last time you bought something online, you read the reviews. Few Internet users these days are naive enough not to worry about being ripped off. But oftentimes, the reviews, likes, and comments we look to for quality assurance are the very things ripping us off. In *Reading the Comments*, Joseph Reagle identifies the three main players in these schemes: fakers, who praise their own work or attack others’; makers, who do so for a fee; and takers, who pay for the work of makers. On sites from Yelp to Amazon to Facebook, individuals and businesses can manipulate appearances to increase sales, boost public favor, and even promote a specific agenda or ideology. If you’re really serious, you can hire a reputation management firm that sneaks clauses into terms of service banning customers from leaving negative reviews in the first place.
Because many sites’ algorithms favor positive engagement, those sites have tried to remove as many suspect likes, views, comments, and reviews as possible. Despite this progress, Reagle finds some sites’ newer solution, mining users’ social graphs, even more manipulative. With Facebook’s Sponsored Stories, brands would pay to promote content by having Facebook show users how their friends had interacted with it. From a business perspective, it was an incredible success, but concerns about privacy and manipulation overpowered that success when users started seeing their own pictures in Facebook ads.
So, how can we protect consumers from manipulation and keep the online world honest and trustworthy? I don’t think that we can. As Reagle writes, “this behavior is driven by the high value of comment today, an obsessive desire to rate and rank everything, the dynamic of competition, and the sense that everyone else is already doing it.” Until we no longer care what other people think, not just about products and other people but also about ourselves, we will always be trapped by opinion.
### Artificial Intelligence - RR #3
In his article for *Gold’s Guide*, Tyler Gold describes the large language models behind artificial intelligence as “small, thin veneers of smiling faces covering up distorted monsters based in a twisted image of all the information we have ever created on the internet.” It sounds scary, but how dangerous is AI really?
Generative AI is developed through machine learning, in which a model is fed huge amounts of data and trained to perform specific tasks. ChatGPT learned from text on the internet and scripts of dialogue, while Stable Diffusion, an image generator, learned from images and their captions. Even when you use AI, you’re helping it develop by providing it with more training data.
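To make “learning from text” concrete, here is a toy sketch of my own (not from any of the readings, and nothing like ChatGPT’s actual scale or architecture): a few lines of Python that “train” on a tiny corpus by counting which word tends to follow which, then generate new text from those counts. The principle, learning patterns from data rather than following hand-written rules, is the same.

```python
import random
from collections import defaultdict

# Toy "training data" standing in for the internet-scale text
# a real model learns from.
corpus = "the cat sat on the mat and the cat ate the fish".split()

# "Training": count which words tend to follow which.
follows = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word].append(next_word)

# "Generation": start from a word and repeatedly pick a plausible next word.
word = "the"
output = [word]
for _ in range(8):
    word = random.choice(follows.get(word, corpus))  # fall back if no successor
    output.append(word)

print(" ".join(output))  # e.g. "the cat ate the mat and the cat sat"
```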
Logically, artificial intelligence should only be as dangerous or beneficial as we make it. In *Vox*, Rebecca Heilweil writes, “Even if this tech doesn’t take over your entire job, it might very well change it.” She predicts that ChatGPT’s basic coding proficiencies could help lower the overall costs of software development and that DALL-E could generate advertisements and graphics for less. But what about those with malicious intentions? Concern over cheating with AI has created debate in the education world about whether teachers should be working **with** or **against** AI. Image-generating AI can be used to create non-consensual pornography and images of child abuse, notes James Vincent in *The Verge*.
Even without user malintent, AI can produce morally gray material. Vincent relays the concerns of artists whose work is used without consent to train AI that can then copy, and profit from, their style. As this practice comes to be seen as more immoral, AI companies are being held accountable and changing their policies. AI is a reflection of the data it’s fed, and when given humanity’s whole history on the internet, it replicates our biases and prejudices. A Lensa AI feature has “the concerning tendency to depict women without any clothing.” ChatGPT wrote an airline passenger screening system that suggested higher risk scores for natives of and visitors to Syria and Afghanistan. AI developers have the opportunity now to create a *pure, objective, and unbiased* tool. As Heilweil says, these decisions will have ripple effects. I think SAG-AFTRA’s and the WGA’s recent deals, which include protections from exploitation by AI, are a good sign for the future. I hope other unions and the U.S. government can follow suit.
### Authenticity - RR #4
“Authenticity is now a make-or-break quality,” Rachel Lerman writes for the *Washington Post*. In fact, she says, companies chasing that authenticity spent $5.2 billion on influencer marketing in 2019 on Instagram *alone*. During the pandemic, brands used down-to-earth influencers to market their products. But it can take years to make it to the big time peddling Voss water bottles from your living room. In an article for *The Atlantic*, Taylor Lorenz examined the trend among influencers of making fake brand deals. According to Lorenz’s findings, sponsored content is influencer street cred. Before getting their first real deal, influencers build their credibility with fake ads, hoping to attract potential partners. So, just as it pays to be authentic, it pays to be inauthentic. But, as shown by the anonymity of many of the influencers in the piece, it’s still best to keep this inauthenticity under wraps.
Hatebloggers and anti-fandom communities have garnered attention from academics, but that attention has been overwhelmingly directed toward male-dominated communities. In *Policing “Fake” Femininity: Authenticity, Accountability, and Influencer Antifandom*, Brooke Duffy, Kate Miltner, and Amanda Wahlstedt discuss female-dominated anti-fandom communities, often dismissed as “frivolous gossip” forums. GOMIBLOG, for example, has been described as “a hub for ‘mean girls’ and ‘the cruel site for female snark.’” Duffy et al. observed that “members of the GOMI community presume they are participants in call-out culture.” In reality, they engage in horizontal violence: aggression directed laterally within oppressed groups rather than at larger structural problems. While trying to be feminist, GOMI finds itself a part of gendered symbolic violence.
So, how do we make conversations about influencer authenticity productive? Duffy et al. make the point that belittling female-dominated anti-fandoms only fans the flames of horizontal violence. Maybe, they theorize, if these communities feel like they’re being heard, the discourse will become gentler and influencers might feel less attacked. Duffy et al. also warn that we must leave room for nuance, because “polarized perspectives only reinforce the tired history of gendered, in-group antagonism.” I very much agree with this sentiment and believe that cognitive distortions like black-and-white thinking play a real part in the unproductivity of anti-fandoms. But is there room in the influencer world for such metacognition?
### Pushback - RR #5
Overload, disillusionment, and dependency. These are just some of the feelings experienced by the many people living in the “evertime.” Evertime, as defined by Morrison and Gomez, is the phenomenon of being continuously connected to the Internet, and the non-stop expectation of availability that comes with it. In their journal article, *Pushback: Expressions of resistance to the “evertime” of constant online connectivity*, Morrison and Gomez identified, through a literature review, five primary motivations and five primary behaviors behind pushback against evertime.
Based on the beginning of this response, it’s easy to see how emotional dissatisfaction was one of the most frequent motivations. Another frequent motivation was taking control of one’s time and energy. As someone who has lost hours scrolling on TikTok, I think lots of young people relate to this. The least frequent motivation found was privacy; accepting terms and conditions has become a force of habit. Unlike the authors, I wasn’t surprised by this, because I feel many today have come to expect a lesser standard of privacy both online and off. Other motivations included addiction to technology and external values like religion or politics.
Among pushback behaviors, the most frequent was behavior adaptation, which can involve managing one’s time and applications, like allowing yourself only “ten more minutes” or “one more episode.” The least frequent was “back to the woods” behavior: dropping out from technology altogether. Morrison and Gomez describe this as an “extreme reaction,” which is understandable given how difficult it is to pull off today, especially post-pandemic. Another behavior is the social agreement not to use technology at, say, a restaurant or a wedding reception so everyone can “live in the moment.” Such abstention has become less and less of a default norm, usually requiring an explicit verbal agreement. The tech solution, while ironic, is a behavior I think should be used more. Common practices include parental controls and downgrading to flip phones; I see it as working with technology instead of against it. In his article about Luddite anti-technology teens for *The New York Times*, Alex Vadukul spoke to concerned parents who use technology to monitor their children’s safety and location. For the newest generation, there has never been a world without smartphones. Is pushback possible for them if they can’t long for a simpler time?