###### tags: `CDA`

# Reading Responses (Set 2)

Checklist for a [good reading response](https://reagle.org/joseph/zwiki/Teaching/Best_Practices/Learning/Writing_Responses.html) of 250-350 words:

- [ ] Begin with a punchy start.
- [ ] Mention specific ideas, details, and examples from the text and earlier classes.
- [ ] Offer something novel that you can contribute to class participation.
- [ ] Check writing for clarity, concision, cohesion, and coherence.
- [ ] Send to professor with “hackmd” in the subject, with the URL of this page and the markdown of today’s response.

## Reading responses 5 out of 5

### March 21 Tuesday - Manipulated

Forsey (2019) describes the oldest tactic used by content creators and influencers to gain popularity as “Can you go like it (my Instagram post) so I don't look lame?” This description gives us insight into the most elemental intention of the average social media influencer: gaining an audience. Forsey focuses on Instagram, a platform based around posting pictures and short videos. Instagram used to present users’ feeds in chronological order but has since moved to a popularity-based algorithm. The algorithm ranks posts by the engagement they receive, such as likes and comments, and then surfaces only high-engagement content to other users. For brands and new influencers, this presents an unfair challenge, so people have come up with shady tactics to desperately overcome the algorithm. One of these tactics is the Instagram pod: a mutually beneficial group of similar content creators. Within these groups, influencers and companies engage with each other’s content in a timely fashion to artificially boost its viewership. That viewership can then be capitalized on to push products and agendas onto viewers.
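The shift Forsey describes can be sketched in a few lines of Python. This is a toy illustration, not Instagram's actual ranking; the post fields and the engagement score (a plain sum of likes and comments) are made up for the example:

```python
# Toy sketch: chronological feed vs. engagement-ranked feed.
# Post fields ("id", "timestamp", "likes", "comments") are hypothetical.

posts = [
    {"id": "a", "timestamp": 1, "likes": 5, "comments": 1},
    {"id": "b", "timestamp": 2, "likes": 120, "comments": 30},
    {"id": "c", "timestamp": 3, "likes": 12, "comments": 2},
]

# Old behavior: newest post first.
chronological = sorted(posts, key=lambda p: p["timestamp"], reverse=True)

# New behavior: highest engagement first. A pod inflates likes and
# comments precisely to game a score like this one.
def engagement(post):
    return post["likes"] + post["comments"]

ranked = sorted(posts, key=engagement, reverse=True)

print([p["id"] for p in chronological])  # ['c', 'b', 'a']
print([p["id"] for p in ranked])         # ['b', 'c', 'a']
```

Note how post "b" jumps from the middle of the chronological feed to the top of the ranked one purely on engagement numbers, which is the opening a pod exploits.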
The primary way in which companies and influencers affect their viewers is through the marketplace problem of information asymmetry. This asymmetry allows content creators to shape and manipulate their content to persuade audiences more effectively. Reagle demonstrates the deceitful ways in which people employ information asymmetry for financial and political gain, from writing fake reviews to popularizing secretly sponsored content. He states that the effectiveness of these tactics lies in the human “obsessive desire to rate and rank everything.” Like Reagle, I cannot help but question why people cannot help but manipulate. Are we at a point where all influencers are simply sock puppets for corporations that force secretly sponsored content onto us? Are we already at a point where we cannot differentiate between an advertisement and authentic content? Or have we already lost our authenticity to the capitalist machine?

### March 28 Tuesday - Artificial Intelligence

Heilweil (2023) describes the newest groundbreaking technological development that we cannot seem to avoid any longer; like it or not, “Artificial intelligence is suddenly everywhere.” ChatGPT and DALL-E are OpenAI’s latest products that implement artificial intelligence, and they have revolutionized the technological landscape. ChatGPT is a generative text AI chatbot that can write essays, answer complex questions, and even create detailed travel itineraries. DALL-E is a generative AI that can make illustrations and variations of uploaded images. Both are developed using machine learning: they are trained on large sets of data (movie scripts and blog posts for ChatGPT, existing illustrations by artists for DALL-E), which they learn to mimic in order to produce novel output.
Most recently, users have been able to use this technology to do their homework, gain stock trading advice, and even pass standardized tests like the SAT, indicating that AI technology may help advance the human race and solve many of today’s problems. However, people’s opinions on generative AI become more uncertain when human emotions, politics, and mediums that express emotions, such as art, become involved. That uncomfortable feeling we all have when something that isn’t human starts to mimic human behavior a little too well is not new; it happens with every cycle of technological advancement. However, this does not mean we should become complacent toward it. Similar to how I felt upon learning about Lil Miquela, the creators behind this new technology could have their own biases and agendas, and the data that trains these AIs could have an improper distribution of particular views. Analogous to the fetishization of the robot-entertainers Lil Miquela and Poppy, users managed to use the AI image generator Stable Diffusion to create pornographic images. Vincent (2022) describes how users were left disgruntled when Stability AI removed the generator’s ability to create NSFW content. While I agree with Stability AI’s decision, since anyone would otherwise be able to generate fake NSFW content of people without their consent, this perversion seems to carry on from one technology to the next. CoconutKitty143, a social media influencer, used a popular AI TikTok filter to make her face child-like while posting NSFW content, essentially catfishing her audience. This connection of real human emotions to AI seems to disturb us, and rightfully so. As the lines between the real and the fake or AI-generated begin to blur, the question of human authenticity needs to be raised. Should we create boundaries to keep emotions and AI separate?
If Bing’s chatbot ‘Sydney’ and its conversation about love with Kevin Roose is any indication, we certainly should.

### April 11 Tuesday - Collapsed Context

“Oh wait! I can’t post about this. What if my ____ looks at it?” is a battle we all deal with today as our social media platforms collapse our ability to have nuanced, contextualized interactions. boyd explores a prime example of this through Twitter, a text-based social media platform. Twitter users navigate multiple social spheres and imagined audiences, balancing the desire to maintain a positive impression on everyone with the need to seem authentic. This is where the concepts of authenticity and context collapse interact. Grazian’s study of blues clubs in Chicago explores how authenticity is highly constructed and constantly changing because it depends on the evaluator’s definition of authenticity, making the shift between different forms of authenticity seem fake. This is especially intriguing because performances of authenticity and inauthenticity are equally constructed by discourse and context. Goffman’s theory of the presentation of self provides a useful way to understand the performance of identity online. Users feel the need to balance the tension between revealing (frontstage: widely acceptable) and concealing (backstage: controversial) parts of themselves. Context collapse puts these imagined audiences in the user’s mind, and they are usually the most sensitive ones: parents, partners, or even professional superiors. This highly critical imagined audience may end up stifling personal, authentic discourse, since the lowest-common-denominator philosophy limits users to topics safe for all readers. Users manage these tensions through self-censorship or balance.
Most users simply steer clear of controversial topics, while others strategically offer certain bits of personal information and even use polysemy to appease multiple audience spheres. Brands have been able to capitalize on this persistent human need to chase authenticity and “realness” by using it as a marketing ploy to entice users, especially young users who have been socialized in the art of strategic self-presentation early on. The latest iteration of this cycle is BeReal, an authenticity-projection app that pushes spontaneity and informality and claims to rise above the fakeness of Instagram’s staged and curated posts. This tech-solutionist approach to a real human problem can never truly work: just as AI could never really understand emotions, relying on an app to tell us how to be authentic can never work. In the end, I cannot help but wonder whether chasing authenticity only results in manipulated outcomes. Just as some influencers use Instagram pods to get more followers, does this constant chase for realness end with manipulated results?

### April 14 Friday - Authenticity, work, & influence

“Fake it till you make it” is not just a statement anymore; it has become a livelihood for many in this saturated influencer climate. According to Lorenz, as social media continues to gain momentum, even after the rapid uptick during the pandemic, influencers have been integral to the marketing and advertising strategies implemented by companies. The ability of influencers to sway the consumer behavior of their followers and promote products simply through social media is powerful. This shift has led to a plethora of individuals trying to become influencers themselves. However, the transition from average social media user to professional influencer is usually not easy.
Lorenz reveals that this difficulty has led some individuals to fake sponsorships solely to look more credible and appealing to other brands. Unfortunately, these ways of gaining social media status are not new at all. Influencers also use Instagram pods to artificially influence the recommendation algorithm and increase viewership. Such strategies are becoming increasingly common in social media marketing because influencers believe they increase the chances of securing an actual paid partnership. Given the fierce competition within the field, influencers feel the need to take any measure possible to make themselves look more appealing for these partnerships. As a result, sponsorships have become badges of credibility. Similar to what Elon Musk did with the verified check mark on Twitter, it goes to show that individuals in this space are willing to do anything, even pay for temporary fake credibility. Of course, this trend within the industry is not without consequences. The consistent wave of sponsored and sponsored-looking content confuses the audience as to the true intention behind a post. Here is where the question of authenticity comes into play. Influencers have also become targets of hate and criticism: a parallel wave of hateblogs emerged where users dissect the activities of prominent influencers and raise questions about authenticity, the nature of work, and influence in the digital age. As social media platforms become more integrated into our lives (we already spend at least an hour every day on them), it is essential to understand and monitor how influencers shape our perceptions of reality and authenticity, gender and power. I would be very curious to know how social media has already changed our perception of reality and authenticity. How different were these perceptions just a couple of years ago?
### April 18 Tuesday - Pushback

“So, I put my phone in a box,” said Logan Lane, who felt too burned out from scrolling past another perfect Instagram selfie. I believe everyone from my generation has experienced this type of burnout at least once. We currently live in a hyper-connected world where social media takes many forms. It started with trying to have as many connections on Facebook as possible, but now it is about followers, likes, and shares on TikTok. Daily, we feel addicted to our phones: checking email, doom-scrolling TikTok, or stalking someone on Instagram. What I have realized from the different topics in this class is that the technology we use daily is not built solely to help us. It is built to draw us in with its allure of convenience but trap us in the blink of an eye. The social media we use is curated by algorithms that contain their own biases, built with endless scrolling to give us an uninterrupted dosage of dopamine. The influencers we watch play characters created by sponsoring companies and present themselves in a hyper-edited manner. Watching this sort of content for prolonged periods distorts our perception of reality; when we finally get off our phones, all we see is how imperfect our bodies are and how terribly imperfect reality is, which deteriorates our mental health. While we watch these attention-span-shortening eight-second videos, data collection agencies monitor all our behavior and then sell that data to ad agencies, effectively commodifying us. In short, I am not surprised there has been a significant pushback movement to regain control, establish boundaries, resist information overload, and achieve a better personal life balance. While I may think the Luddites that Alex has written about come across as pretentious, new-wave, angsty, rebellious teens, I do agree with their sentiment.
There are several reasons why we would want to push back and not submit to the “evertime,” as Gomez and Morrison say. The reasons range from emotional dissatisfaction (the No. 1 cause) and addiction (No. 2) to wanting privacy. Morrison and Gomez also explore pushback techniques; behavior adaptation is the most popular choice, indicating the firm grip technology has on our daily lives. Furthermore, the ‘back to the woods’ approach, the most extreme form of disconnection, was the least used technique. This signifies just how reliant we have become as a species on the constant use of technology. With all the negative effects these technology platforms have on each of us, it is nice to disconnect sometimes and remind yourself that you are just a tiny speck floating around on a space rock. It makes me think: would being a Luddite be so bad? Or is finding my own balance what I need first?