# Reading Responses Set #2
## Is the Future of Dating Virtual? Not Necessarily.
Dating may be one of the things the internet cannot completely change. Three of this week's sources examine online dating and its impact on broader dating and social patterns: algorithms that encourage users to lie, and the simultaneous death and rebirth of more traditional dating practices.
Starting with a blog post from the popular dating app OkCupid, it appears that constructing a dating profile inherently invites intentional deception. According to data provided in the article, male users are likely to exaggerate their heights by up to two inches, taller women are less likely to get matches than their shorter counterparts, and older men are more likely to lie about their income (OkCupid, 2010). These findings suggest that users are competing against one another to be the most dateable versions of themselves. Lies about height and income perpetuate the idea that if one can construct the ideal version of oneself online, then one will be worthy of connection.
These stressors are not confined to a small subset of users; they affect the large portion of digital citizens who have turned to online dating to seek out relationships. In his 2019 article for The Atlantic, Derek Thompson outlines the history of more traditional “mediated” relationships that arose through matchmaking by “intimates” and contrasts those practices with the much more comprehensive world of online dating. In Thompson’s view, online dating is the natural progression from preceding forms of dating.
However, Robyn Vinter’s work for The Guardian gives voice to those who feel disenchanted with the monotonous world of swiping right. Through careful interviews with former users of online dating platforms, Vinter found that many of her subjects are shunning digital means of connection in protest of its harmful or unhelpful effects. Despite the different opinions and perspectives collected in Vinter’s (2023) article, one sentiment, offered by a woman named Clare, prevails: “…People are so much more magic in real life.”
While it is hard for me to see where the future of dating is headed, I can see both sides of the issue. As a young person I have dated through both methods. Throughout high school and college I have met people through friends and shared environments, but I also found great success on dating apps, where I met my current boyfriend. I think what both Thompson’s and Vinter’s arguments lacked is the admission that these two styles can coexist despite the apparent dominance of online dating. I firmly believe in the concept of love languages, and I believe this theory, when applied to an individual's dating life, can affect the quality of either format. It is the spread of this type of information that I believe can best alleviate the stress of concerned individuals and connect hopefuls with the dating format that works best for them. Regardless, love is a complex beast to analyze, and I think it will continue to be no matter how we choose to pursue it in the future.
## Banner Ads and Cookies
The main type of ad available online is the banner ad (also known as a display ad), frequently shown on social media platforms, news websites, blogs, and other platforms. These ads are targeted to users by tracking consumers' preferences and activity through data provided by cookies.
As described by the creator of cookies, Lou Montulli, in a video for Vox, cookies were originally intended to be a way to “add memory to the web” (Vox, 2020, 0:54). Before cookies, if a user switched tabs or refreshed a page, whatever they were doing would be lost and they would have to start over.
Montulli created cookies to solve this problem by giving websites a way to access your history on their site. This type of cookie is known as a first-party cookie. Because the data is provided directly by the user’s device, the website can access your history on that site without gaining access to anything unnecessary. First-party cookies are a tool, and many websites need them to function.
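The “memory” a first-party cookie adds can be sketched with Python's standard `http.cookies` module. This is a minimal sketch, not a real server; the cookie name, value, and domain are hypothetical:

```python
from http.cookies import SimpleCookie

# First visit: the site has no cookie for this browser yet, so the
# server's response sets one (hypothetical name and value).
response = SimpleCookie()
response["session_id"] = "abc123"
response["session_id"]["domain"] = "shop.example"  # first-party: the same site
set_cookie_header = response.output(header="Set-Cookie:")

# Later visit: the browser automatically sends the cookie back, so the
# site can "remember" this visitor without asking for anything extra.
request = SimpleCookie()
request.load("session_id=abc123")
returning_visitor = request["session_id"].value
```

Only the site that set the cookie ever receives it back, which is what makes it first-party.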
Once this technology was introduced, web developers realized they could use cookies to share data and expertly market products to consumers. This gave rise to third-party cookies, a type of cookie that shares your history across many different websites with one company. When third-party cookies are enabled, if you are looking at a cute pair of jeans on Old Navy’s website, then within a certain time frame Facebook will know your favorite style of jeans the second you log in to the platform.
This sharing of data across multiple platforms and among many different companies creates the phenomenon many have described as ‘the phone is always listening’. Cellphones and other devices are not being tapped by companies or governments, as many people believe. What is really happening is that data is shared with a slew of companies every time the internet is accessed, and that information is used to remind consumers of products they passed over or to suggest new ones that companies believe will pique their interest.
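The cross-site tracking described above can be sketched the same way. The tracker domain and tracking ID below are hypothetical; the point is that the cookie is scoped to the *embedded third party's* domain, so visits to two unrelated sites get linked:

```python
from http.cookies import SimpleCookie

# Hypothetical ad/tracker domain embedded (e.g., via an ad or pixel)
# on both a clothing site and a social platform.
tracker = SimpleCookie()
tracker["uid"] = "user-42"                        # hypothetical tracking ID
tracker["uid"]["domain"] = "ads.tracker.example"  # third party's domain

# Visit 1: browsing jeans on the clothing site -- the embedded ad request
# goes to ads.tracker.example, which sets this cookie.
# Visit 2: logging in to the social platform -- the browser resends the
# same cookie to the same tracker domain, linking both visits.
resent = SimpleCookie()
resent.load(tracker.output(header="").strip())  # simulate the resent header
linked_profile = resent["uid"].value            # the tracker sees one profile
```

Because the cookie belongs to the tracker's domain rather than either site the user actually visited, the tracker can assemble one profile from many sites.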
Through cookies, the banner ads that so obnoxiously populate our feeds every day are force-fed to us in scarily accurate ways.
## Bemusement: Why is the Internet so Confusing?
The internet is full of confusion, much of which is caused by miscommunication and mishaps. This is to say that the way we choose to communicate over the internet matters. Often our unreadable tones or dry sarcasm can send us into a world of trouble or simply dilute systems that are intended to be helpful. All these contrasting modes of communication contribute to feelings of confusion and bemusement for users.
Some practices, like parody reviews, are fun, celebrated aspects of online culture. They have become accepted parts of society and have even been recognized by platforms like Amazon that have become homes for this brand of comedy. Other practices, like dark sarcastic humor, have been seized on by online activists who argue that threatening jokes aren’t jokes at all.
The case of Jack Carter was especially concerning to me; I could not believe that making an admittedly distasteful joke online landed a teenager in jail for five months (Reagle, 2019). I would like to believe there was probable cause for his jailing, but given the nature of cancel culture (especially in its earlier days, when this case occurred), the lack of thought that seemingly went into jailing Carter is concerning.
In conclusion, this source highlighted the humor and sorrow that can be found online, proving that there is still quite a bit of duality in online spaces. While the internet is filled to the brim with opportunities for humor and memes, it is also a hotbed for malice and cancellation.
## Artificial Intelligence
The consequences of artificial intelligence (AI) systems and their many abilities will vary depending on the fields affected. As Heilweil (2023) mentioned in her article for Vox, many academics are questioning whether they should crack down on the use of AI or embrace it and make it part of the curriculum.
This decision will drastically affect education, as it will determine what kinds of work are graded and how new technology is incorporated into the classroom. What strikes me as sad in this debate is that students who have never turned to AI to write a paper or complete an assignment will be forced to adapt because of their peers' cheating.
Personally, I have never written a paper using AI and have always taken great pride in my writing. It would sadden me if the tides of academia shifted so that writing was forced to incorporate outside programming to dispel cheating, when many of my peers and I never would have cheated in the first place.
In terms of AI image-generation technology and its regulation, I believe we will begin to see more instances like the altering of Stability AI’s Stable Diffusion software (Vincent, 2022).
I remember the large controversy last year surrounding the creation and spread of pornographic images of Taylor Swift, which depicted the pop star completely nude in various sexually explicit poses and situations.
Eventually the images stopped spreading and the discourse surrounding them calmed down. Because the wave of deepfake celebrity porn had finally reached someone as famous as Taylor Swift, I believe this represented a breaking point for the practice and prompted a crackdown by AI companies to ensure something like this does not happen again.
Overall, I think we are going to see far more adaptation to AI than banning of these systems. This technology is becoming so deeply integral to the professional, academic, and cultural worlds that it will be seen as easier to absorb these tools into everyday life than to expel them altogether.
Whether or not these practices will lead to the regulation of artificial intelligence remains to be seen, but I think that no matter what, AI will continue to exist in some shape or form for the foreseeable future.
## Algorithmic Bias in Search Engines and Artificial Intelligence
Algorithmic bias can take many different forms. In some instances, as seen with Google, algorithms are influenced by the societies they are meant to serve. As Swedish graphic designer Johana Burai said (as cited in Rutherford & White, 2016), “The people in society are creating Google, in a way.” Given the instances recorded in BuzzFeed’s article, however, one question must be asked: what society is Google reflecting?
Google is an American company, and the racially biased scenarios reported on by BuzzFeed all seem to come from the western world, where Caucasian people dominate. This could be the primary explanation for why Google generates results overwhelmingly representing white people and perspectives: more white people exist in these societies and therefore make up the largest share of users by race.
However, when the use of artificial intelligence (AI) enters the discussion, more concerns arise. As we discussed earlier this week on Tuesday, AI needs to be taught how to respond to stimuli by feeding it data that tells it what certain things are. In Google's case, its new Photos app, which used an AI assistant to sort photos into categories, mislabeled pictures of two black friends as gorillas (Rutherford & White, 2016). Despite Google’s public apology and promise to rectify the matter, concerns lingered about what type of data had been fed to this AI that would cause it to perceive these friends as gorillas instead of human beings.
This debate surrounding what companies feed to their AI has been further exacerbated by conservative media’s claim that the popular AI chatbot ChatGPT demonstrates a clear liberal bias. This bias has presented itself primarily in which stories the bot will or will not tell based on the suggested subject matter. For example, if a user asks the bot to write a story in which Hillary Clinton won the 2016 presidential election instead of President Donald Trump, the bot has no trouble complying. However, if a user asks the bot to write a story detailing what would have happened if Donald Trump had won reelection against President Joe Biden, the bot refuses on the basis of not spreading a “False Election Narrative” (Hochman, 2023).
Both issues demonstrate how bias can be both intentional and a reflection of society itself. In the case of Google’s search engine, bias is simply a reflection of the most popular results, which unfortunately reflect the white majority of Google users. In the case of Google’s and OpenAI’s programming of their AI assistants, it is clear that bias played a part in how these systems were built. While Google’s transgressions may not be as glaring as the liberal bias enforced on ChatGPT, errors were certainly made that contributed to Google Photos’ mislabeling of black users.