# MATHIAS' VERY COOL PERSONAL SITE

<iframe src="https://giphy.com/embed/zrlYZhjKtoalO" width="480" height="360" frameBorder="0" class="giphy-embed" allowFullScreen></iframe><p><a href="https://giphy.com/gifs/sharingan-zrlYZhjKtoalO">via GIPHY</a></p>

HELLO AND WEEEEELCOME to the super sweet HackMD page belonging to me, Mathias Zykowski Tvilling. Here you will find all the work necessary to qualify me for the exams.

## BRAIN DUMPING, 10/11 2020

For me, the introduction of the digital common provided a framework for discussing some online communities in a more focused way. It also raised a lot of questions, I think. The relationship with FLOSS culture seemed pretty straightforward to me in the beginning; however, during this block it has actually made me think a lot more about what FLOSS has to be in order to be considered a common. In the same way, the example of FLOSS also helps me grasp the meaning of the common, as it is a concrete case.

One of the more interesting questions raised during this block was how something like FLOSS (and commons in general, I think) can co-exist with a capitalist society. To be completely honest, I still do not really think this has been answered, at least not in a way that I find satisfactory. Digging deeper and looking at how FLOSS sometimes pretends to be more FLOSS than it is, or how some FLOSS projects are tied to classic commercial companies, makes this an interesting discussion to have. Perhaps what is meant is that capitalist ideals such as constant progression and "more" are present in the mindset of FLOSS: constantly developing packages, improving the software etc. This just doesn't seem to be the whole story to me.

---------

## BLOCK ASSIGNMENTS

Here you will find all the Block Assignments I have worked on with my group. Happy reading.
-------

### BLOCK ASSIGNMENT 1

<iframe width="560" height="315" src="https://www.youtube.com/embed/iuABouMOO-w" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>

*Video demonstration of how our prototype works*

#### Introduction

In the process of creating a computational prototype, our focus has been on data visualisation, including both its hidden and visible parts. We encounter data visualisations in many aspects of our daily lives; indeed, they have become such a valuable tool that we rarely stop to reflect upon how these visualisations are made, or which data they represent. A data visualisation often substitutes for an important argument, its convincing rhetoric built upon numbers. We set out to investigate what role data plays in the process of making a data visualisation: how choices about data sources, data cleaning, transformation and many other factors are part of visualising data.

#### Method

For many years, even millennia, there has been a cultural practice of collecting, storing and analysing data. The volume, variety and use of data have grown, and the ways in which data can be used have grown enormously. Kitchin and Lauriault argue in “Towards critical data studies: Charting and unpacking data assemblages and their work” (2014) that data was formerly understood as pre-analytical and pre-factual, existing prior to interpretation and argument. Following this understanding, data was a raw material on which knowledge was built, a matter that was somewhat objective and representative of the knowledge it represented; in this view it is not the data itself, but the uses of data, that are political. With the rise of Critical Data Studies, this view has been challenged: “[...] data are constitutive of the ideas, techniques, technologies, people, systems and contexts that conceive, produce, process, manage and analyse them.” (Kitchin & Lauriault 2014, 5).
From this understanding it follows that data is never a raw, objective material: it is processed and transformed before collection, during data cleaning, and in data visualisation. Drawing upon the critical data studies approach, we set out to explore the different layers of data visualisation.

#### Prototype

The initial idea was to explore data visualisation by creating our own tool to visualise irregularities in datasets. Drawing inspiration from Herman Chernoff's approach of visualising multivariate data as human faces (Chernoff, 1973), we wanted to create a prototype with which we could show inequality in different European countries through visualisations of faces.

![](https://i.imgur.com/yMSJ1Fw.png)

With this as our main goal, we had a framework in which we could investigate the different layers of data visualisation, as follows:

1. **Data:** Finding and investigating already existing datasets on inequality → analysing potential biases and choices of visualisation in the chosen datasets → cleaning the chosen data, and thereby transforming it into our own dataset.
   **Outcome:** Reflecting upon the infrastructure of the already existing datasets and their biases, but also our own choices of how to transform the data into something usable for the project.
2. **Visualisation/Building:** Choosing materials → conforming with the given constraints → choosing how to build a face that was transformable.
   **Outcome:** Reflection upon which parts of the tool should be visible and which should be hidden, which prompted reflection about how these choices are also part of digital visualisations.
3. **Technical layer/Arduino:** Transforming the dataset into variables that could be processed in code written in C → mapping these variables to degrees on servo motors that controlled the position of the face.
   **Outcome:** The process of connecting the visual output of a face with mechanical motors controlled by a chosen dataset proved difficult.
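The mapping in the technical layer was done in C on the Arduino, using its `map()` function to translate data values into servo angles. As a rough illustration of that logic, here sketched in JavaScript with a made-up inequality score and made-up ranges (not our actual dataset):

```javascript
// Linear mapping, equivalent to Arduino's map() function:
// translate a value from one range into another.
function mapRange(value, inMin, inMax, outMin, outMax) {
  return ((value - inMin) * (outMax - outMin)) / (inMax - inMin) + outMin;
}

// Hypothetical example: an inequality score between 20 and 60
// mapped onto a servo position between 0 and 180 degrees,
// which could control e.g. the curve of the face's mouth.
const inequalityScore = 35; // illustrative value only
const servoAngle = mapRange(inequalityScore, 20, 60, 0, 180);
console.log(servoAngle); // 67.5
```

The point of making the mapping explicit like this is that every choice of input and output range is itself a visualisation choice: the same score looks dramatic or subtle depending on the ranges chosen.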
A goal that seemed simple at first had many constraints that we needed to work with or around. This resulted in an end product that was not as smooth as expected, creating disturbances for the observer of the tool. However, this made us engage in reflection about how the vulnerability of a data visualisation can work as an argument, uncovering the hidden biases in the work and visualising not only an end product but also the process of a data visualisation. Ratto argues in “The Critical Makers Reader” that, with his approach to critical making, the objects that are created are not the main outcome; instead, “[...] the intended results of these experiences are personal, sometimes idiosyncratic transformations in a participants understanding and investment regarding critical/conceptual issues.” (Ratto & Hertz 2019, 24). Starting with the initial idea of making a tool that can visualise irregularities in datasets, we ended up with a broader conceptual understanding of data visualisations through the process of critical making.

#### Main points of discussion:

* What were the most important things we learned through the process?
* What is the purpose of our tool - how can we compare it to the god trick, and is it doing the work of feminist data visualisation?
* How does the experience of this visceral type of data visualisation differ from a more common style?

#### Bibliography:

Chernoff, H. (1973). The Use of Faces to Represent Points in k-Dimensional Space Graphically. Journal of the American Statistical Association, 68(342), 361-368.

Haraway, Donna (1988). Situated knowledges: The science question in feminism and the privilege of partial perspective. Feminist Studies, 14(3), 575-599.

Kitchin, Rob, and Tracey P. Lauriault (2014). “Towards critical data studies: Charting and unpacking data assemblages and their work”.
Ratto, Matt & Hertz, Garnet (2019). “Critical Making and Interdisciplinary Learning: Making as a Bridge between Art, Science, Engineering and Social Interventions”. In Bogers, Loes, and Letizia Chiappini, eds. The Critical Makers Reader: (Un)Learning Technology. Institute of Network Cultures, Amsterdam, pp. 17-28.

#### Process:

[Link for process site](https://hackmd.io/KauNqt24SX62ZDpMPYU12w)

[Brainstorm/ideas for project](https://hackmd.io/a4yVt49iQzGJgvNZaW8hNQ)

-------

### BLOCK ASSIGNMENT 2

Link to our program: https://estermarieaa.github.io/DC/Google_search/

#### Introduction

For this project, we engaged in a critical making process exploring how Google makes search suggestions for users while they are typing in the search bar. When the user types certain words or sentences, Google tries to predict the search by suggesting ways to finish it. However, certain words do not receive any such suggestions. These are typically words of a profane nature, often connected to content of a sexual nature or pornographic material. They are still searchable words, but their placement on the profanity list means that Google will not provide suggestions for the user.

This becomes interesting when considering that certain words that may be connected in some way to the porn industry can be caught by this algorithm as “collateral damage”. Specifically, such words could have to do with sexuality (gay, lesbian, transsexual etc.). These are words that people might Google in the endeavour to learn something about themselves, and in such situations, search suggestions could play an important role, acting as social proof that others might have the same questions to ask. Search suggestions, then, could be argued to in some way validate a question sparked by insecurity, and make the user feel that others face the same challenges. However, when there are no such suggestions, perhaps the user could feel as though they are abnormal or alone.
Therefore, the algorithm that places these words on the profanity list, thus hiding suggestions, is contributing to, or creating, a culture of taboo in the real world. These algorithmic structures thus have an influence on our perceived culture, and this thought has led us to the following problem statement:

*How can we, through exploring how parts of the internet are hidden, highlight the way in which algorithmic structures shape digital culture?*

#### Project description

Our artwork is presented as a Google search page. In the search bar, a variety of words appear rapidly, along with their suggested search options if any are available. These words are a combination of two different lists. The first is a list of words we found on GitHub which Google has blacklisted for suggestions - we call this the profanity list. The other is a list of the most frequently searched words on American Google - we call this the popularity list. Fifty words from the profanity list and fifty words from the popularity list are combined in an array, from which a randomizer chooses the word to display. Each time a word pops up, a sinister robotic voice reads the word aloud, be it a profane word or a popular word. The project does not aim to push its agenda in an outright way; rather, it aims at letting the user discover the hidden suggestions on their own by showing the contrast rapidly. The robotic voice is meant to highlight the absurdity of the profane words when read aloud alongside regular, popular words.

#### What type of internet art is our project?

Internet art can be defined in many different ways, but Galloway describes it as such: “Internet art, more specifically, refers to any type of artistic practice within the global Internet, be it the World Wide Web, email, telnet, or any other such protocological technology.” (Galloway, 2004, p.
211) In this sense, our work can be described as a piece of internet art, since it takes place within the internet, a protocological technology, and in a sense also addresses this protocological nature by addressing Google's query-based search engine. But could our piece be described as Net Art in the sense Galloway describes it? He describes it as such: “Net art addresses its own medium; it deals with the specific conditions the Internet offers.” (Galloway, 2004, p. 216) So does our project deal with the conditions of the internet? Our project addresses the choice-based nature of an algorithm, but this could be done offline just as well as online. Despite this, the algorithm is a native element of the internet, so one could argue that algorithms are a condition of the internet. In that sense, our project could resemble Net Art.

#### Reflections

As we began our process, we wanted to address how certain dimensions of the internet are being filtered out. Having read that 30% of internet usage is for pornography, we found it quite interesting that pornography is being filtered out, even though such a massive part of the internet consists of it and its use is spread across demographics. The way it is filtered out can be seen in the Google search bar: words associated with pornography or profanity don't have any suggestions. When investigating the words closer, we found some interesting inconsistencies. In Danish, words meaning lesbian, bisexual and transsexual don't have any suggestions, just like words connoting profanity and pornography. As we switched from an American Google to a Danish one, and to a mix between Danish and English, the results changed: some words were blocked out while others were not. This gave us insights into the complicated algorithm deciding which words are blocked out and which are not. The process of creating our design can be described as critical making.
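The core mechanism of our program, combining the two lists, picking a random word, and reading it aloud, can be sketched roughly like this. The word lists here are short placeholders, not the actual fifty-word lists, and the speech part uses the browser's Web Speech API:

```javascript
// Placeholder lists; the real program used fifty words from each.
const profanityList = ["blockedWordA", "blockedWordB"]; // from the GitHub blacklist
const popularityList = ["weather", "news"]; // most-searched words on American Google

// Combine both lists into one array, as described above.
const allWords = profanityList.concat(popularityList);

// The randomizer: pick one word to display in the search bar.
function randomWord(words) {
  return words[Math.floor(Math.random() * words.length)];
}

// Read the word aloud with the browser's speech synthesis API
// (guarded so the sketch also runs outside a browser).
function speak(word) {
  if (typeof speechSynthesis !== "undefined") {
    speechSynthesis.speak(new SpeechSynthesisUtterance(word));
  }
}

const word = randomWord(allWords);
speak(word);
console.log(word);
```

Because both lists sit in one array, the random draw makes no distinction between blocked and popular words; the contrast only appears in whether suggestions show up.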
In the process, we started questioning aspects while developing the design. The process raised many questions and answered some too. This is the power of critical making: it can raise questions such as “What are the relations between particular social agendas and technical objects and systems?” (Ratto & Hertz, 2019, p. 23). This is what we in particular were interested in exploring.

#### Bibliography:

Galloway, Alexander R. (2004). “Internet Art” in Protocol: How Control Exists after Decentralization. Leonardo. Cambridge, Mass: MIT Press, pp. 208-238.

Ratto, Matt & Hertz, Garnet (2019). “Critical Making and Interdisciplinary Learning: Making as a Bridge between Art, Science, Engineering and Social Interventions”. In Bogers, Loes, and Letizia Chiappini, eds. The Critical Makers Reader: (Un)Learning Technology. Institute of Network Cultures, Amsterdam, pp. 17-28.

--------

### BLOCK ASSIGNMENT 3

<iframe width="100%" height="300" scrolling="no" frameborder="no" allow="autoplay" src="https://w.soundcloud.com/player/?url=https%3A//api.soundcloud.com/tracks/923740186&color=%238bc99d&auto_play=false&hide_related=false&show_comments=true&show_user=true&show_reposts=false&show_teaser=true&visual=true"></iframe><div style="font-size: 10px; color: #cccccc;line-break: anywhere;word-break: normal;overflow: hidden;white-space: nowrap;text-overflow: ellipsis; font-family: Interstate,Lucida Grande,Lucida Sans Unicode,Lucida Sans,Garuda,Verdana,Tahoma,sans-serif;font-weight: 100;"><a href="https://soundcloud.com/mathias-tvilling-70944421" title="Mathias Tvilling" target="_blank" style="color: #cccccc; text-decoration: none;">Mathias Tvilling</a> · <a href="https://soundcloud.com/mathias-tvilling-70944421/johannes-sizzling-hot-ones-open-software-on-the-table" title="Johannes&#x27; Sizzling Hot Ones: Open Software On The Table!"
target="_blank" style="color: #cccccc; text-decoration: none;">Johannes&#x27; Sizzling Hot Ones: Open Software On The Table!</a></div>

**Introduction:**

This project explores the FLOSS software Atom. Atom is free and open source and is released under the MIT license, one of the most permissive licenses: it allows copying, merging and republishing, with the only requirement being that the copyright notice is included in all copies made. We therefore chose to explore Atom further in our podcast, to raise questions about both the advantages and disadvantages of FLOSS software, with Atom as our case study.

The podcast is part of an imagined series of podcasts on FLOSS and open source software, where experts come on the show to answer questions from listeners. This episode is centered around Atom, where the imagined founder of Atom, Mr. Atom, guests on the show as the weekly expert. Furthermore, two listeners call in to express their opinions about Atom. The first is “Ester” from Denmark, a developer at a small company that uses the Atom text editor as part of its working practice. Ester is very enthusiastic about the commoning practice of the software, and the way in which one can use other people's extensions and contribute one's own. Furthermore, she is interested in why Atom is free, and not commercialised. To answer this question, Mr. Atom emphasizes the commoning practice of open source, and how this practice is the philosophy the Atom community is built upon. The next listener, “Mike”, questions whether Atom really is non-commercial, or whether it has commercial gains from being connected to GitHub, which is in turn owned by Microsoft. This leads to a discussion between Mike and Mr. Atom about the “free” aspect of FLOSS, and of Atom in particular. The two viewpoints were chosen in order to elucidate the complexity of FLOSS, and how free can have many different meanings.
Lastly, the host of the podcast concludes that there are many aspects to FLOSS beyond being free and open source, and emphasizes the importance of raising the discussion of these different aspects.

**To which degree is Atom FLOSS?**

First of all, we will try to establish a definition of FLOSS. FLOSS can be understood as an umbrella term: a movement in which many different ways of addressing free/libre and open source software co-exist. This can be seen in the many different types of licenses, with different structures, which all coexist under this term. These licenses express the idea of free and open source software: “Free software can be freely copied, modified, modified and copied, sold, taken apart and put back together.” (Mansoux, 2017, p. 92). Even though this might seem like a pretty sharp definition, the licenses differ from each other. They have different purposes, addressing different elements and degrees of freedom, political structures and overall ideologies. The common goal they seek through this is creating environments where one can share and develop software freely, making it accessible, thereby creating cultural liberty and equality (Mansoux, 2017, p. 82).

Atom is a free and open source text editor for coding, first launched in 2014. The platform is developed by GitHub and is licensed as FLOSS under the MIT License, widely considered one of the most open FLOSS licenses available (Wheeler, 2015). This license allows users to modify, distribute and use the software both privately and professionally at no cost. Atom is renowned for its flexibility in use, the result of a big community of contributors who create packages and extensions for Atom that make it suitable for a variety of different tasks. Atom's status as FLOSS software, combined with the huge community maintaining, developing and using it, arguably allows us to view Atom as a digital common in Sollfrank's definition (Sollfrank, 2017).
The interesting question, which is also raised in the podcast by caller “Mike”, is Atom's affiliation with GitHub and, by extension, Microsoft. In 2018, GitHub, which made and owns Atom, was bought by Microsoft, and thus Atom is now part of a Microsoft-owned company. Microsoft as a company cannot be said to be part of FLOSS - certainly not to the extent that GitHub and Atom were/are - so what does this mean for our view of Atom as a free and open source piece of software? This is an interesting question, as Atom by all accounts is still run as a FLOSS project; however, as argued by the character “Mike” on the podcast, in some ways Atom could be seen as an asset for GitHub. The community of developers maintaining and improving Atom could be seen to add value not only to themselves and the platform, but also to GitHub, by making its assets more valuable.

**Conclusion**

To conclude on the different aspects of FLOSS, with Atom as our case study, one can argue that the development and maintenance of Atom is a commoning practice, where the software itself and the different packages that can be added are the resources that the community works with, in the commoning practice of programming, sharing and learning from each other. On the other hand, the free aspect of Atom can be challenged by the fact that Atom has such close connections to GitHub, and further down the line Microsoft, which suggests that Atom, or FLOSS in general, has hidden aspects that might not be as visible to the specific communities, such as, in this case, the commercial gains or added status GitHub/Microsoft can obtain from the Atom community.

**Bibliography:**

Mansoux, Aymeric (2017). “In Search of Pluralism” in Sandbox Culture: A Study of the Application of Free and Open Source Software Licensing Ideas to Art and Cultural Production, pp. 76-112.

Wheeler, David A. (2015). “Why Open Source Software / Free Software (OSS/FS, FLOSS, or FOSS)?
Look at the Numbers!” (https://dwheeler.com/oss_fs_why.html)

Sollfrank, Cornelia (2017). “Art & Speculative Commons”. (https://vimeo.com/216611321)

-------

### BLOCK ASSIGNMENT 4

**Introduction**

The researcher Safiya Noble points out that the inherent biases in algorithms are a result of the fallible nature of humans. She describes it as such: "The math-powered applications powering the data economy were based on choices made by fallible human beings. Some of these choices were no doubt made with the best intentions." (Noble 2018, p. 40) If the faults in these choices are the result of the fallible nature of humans, how can we then, as fallible humans, try to avoid embedding our biases in our computational work?

Research question: This project will study how one might approach creating an algorithm that is as unbiased as possible, by exploring how others have approached this challenge, trying it out for ourselves, and in the end evaluating in which ways our algorithm has succeeded and failed.

**Overview of potential process**

The methodology that will be used to tackle this challenge is critical making. To inform the starting point of our critical making process, research will be done on how others have approached this issue. The insights gained from this research will then be used to inform how we approach creating our own machine learning algorithm through critical making. The goal of this process will not be to create an unbiased algorithm, since that seems, according to the literature, to be an impossible task. Instead, it will be to test the approaches found in our research and find where their flaws or challenges might be.

**Evaluation of research question**

In The Craft of Research, a research question is split into topic, conceptual question, conceptual significance and potential practical implication. The topic of our research question is bias in machine learning/algorithms. An issue here could be that this topic is quite broad.
To address this in general, one could ask what the conflict is, what this problem contributes, and which consequences emerge from this practice (Booth, Colomb & Williams, 2008, p. 39). In this case, we could have chosen a specific type of bias, such as racial bias or gender bias, or a specific type of algorithm or application, such as the difference between supervised learning, unsupervised learning and so forth. The conceptual question of our research question would be what approaches exist that attempt to avoid embedded bias when creating algorithms, and what strengths or weaknesses they have. Once again, this question is quite broad, and in a sense it would be more fitting for a research synthesis paper than for a critical making paper. The conceptual significance of our research question is quite clear, in that the negative effects of inherited biases in algorithms are commonly recognized in our field. The potential practical application of our research question is also quite clear, since an overview of the strengths and weaknesses of different approaches to creating algorithms that are as unbiased as possible would make this task much more approachable for designers. This task can seem very daunting, so having a clear terminology around different methods would make approaching it easier.

**Bibliography**

Booth, W., Colomb, G., & Williams, J. (2008). The Craft of Research (3rd ed.). The University of Chicago Press.

Noble, S. (2018). Algorithms of Oppression: How Search Engines Reinforce Racism. New York University Press.

----

## MX ASSIGNMENTS

Here, you will find an overview of my MX assignments. I have tagged all the MXs done in collaboration with my group with "GROUP".

---

### MX001

Upon reflection on this week's assigned readings, I am reminded to not always take things at face value. Peters' outline in the introduction of why keywords matter, alongside his argument in the "Digital" essay that "[...]
big data surely means big data brokers", reminds me that even seemingly uninteresting, mundane things such as digital keywords in a Google search bar may hold more importance than you would think at first glance. Peters' argument that digital keywords could be seen as obligatory passage points (OPPs) made me think of them in a new light. If we adopt the argument that digital keywords, such as search words in Google, can be seen as OPPs, that would make them a necessary step in retrieving information online. Simultaneously, though, I was wondering if these same digital keywords could also be regarded as an indexing action - these keywords point to certain information, which may in turn reflect something from the real world. If we accept digital keywords as indexing, and adopt the argument that they also act as OPPs, I think that makes for an interesting debate on making information accessible only to those who hold the correct key(word).

My main takeaway from Hayles' prologue was her definition of materiality as a junction between physical reality and human intention - it is the human intention element of this definition that caught my interest. I recall reading the essay "Algorithm" by Andrew Goffey a few years ago, which argued that the algorithm becomes material through the effects its actions have on the real world. Clearly, that effect would count as the physical reality part of Hayles' definition - but what about the human intention part?

---

### MX002

#### What am I interested in?

Currently, in terms of data, I am really intrigued by data visualization. The high volume and velocity of big data makes me wonder how we could ever visualize the data we produce and use in a more effective way than numbers, charts and graphs. Data is governing many things in our lives at the moment, but despite this it is very difficult to understand what data is and how it works.
I personally think one way to help our understanding is to provide better visualizations, which perhaps are not as sterile and static as numbers.

#### Making a map

Reading the paper by Kitchin and Lauriault, I found Hacking's Looping Effect interesting, because in some ways it did not align with my own understanding of how data works. The looping process was, to my eyes, too linear, in the sense that one point led to the next consistently in a loop. In my mind, the process of data seemed a lot more messy and complex. Therefore, for the map exercise, I will attempt to describe the process of data through some steps that I think data goes through. From these steps, I shall imagine how data would travel between them. I will draw arrows between steps, one at a time, and eventually I will have a "map" of how data is constructed and how data works, and a reflection of what data looks like to me. Here are the steps or points I've identified:

##### 1. Generation

At this step, data is produced in one way or another. It could be a person using the internet, or it could be a data sub-set that is generated from a larger data-set. I consider this category broad, because I think this is the key to understanding the nature of how data mutates and constitutes itself.

##### 2. Collection

In terms of the engines of discoverability, I would consider this step to house the counting engine. Data that has been generated is "collected", that is, counted and made measurable.

##### 3. Classification

Borrowed from the looping effect. The collected data is classified and sorted. This is the point where I think the argument against data minimizing is relevant - this point suggests that the data had been collected before it was classified, meaning that the use of the data was not necessarily known at the collection stage.

##### 4. Creating insights

Data is used to create insights - could be in business or in governing, could be consumer behaviour or predictive policing.
This is similar to the knowledge point in the looping effect.

##### 5. Employment

Data is employed to do a job which has consequences in the real world. This is reminiscent of the "institutions" point in the looping effect; however, here I consider the actual work more than those who put the data to work.

##### 6. Product

The impact that the work of the data has on the real world in the end. This can be broad: it can be an autonomous car crashing, a change in the perception of an object (objects of focus in the looping effect), behavioural patterns and so on. I consider this the materiality of data.

![](https://i.imgur.com/WTsLLp2.jpg)![](https://i.imgur.com/hQUEFFy.jpg)
![](https://i.imgur.com/mF35AIS.jpg)

Here is a series of images of how the map developed and eventually looked. As you can see, the map got increasingly messy. To my own surprise, it was quite straightforward the first time around. The complexity grew into it as soon as all points had at least one arrow connected to them. Once a full circle had been completed, I started imagining how each point could branch out. The collecting step becomes a product in its own right in practices such as self-tracking, just as the classification step can become a product in the sense that the classification of human beings can have a very real effect in the world.

#### Why is it important?

I think it is important to understand data because it is becoming increasingly embedded in our society in almost every way imaginable. Thus, in order to understand its ramifications, we have to understand the fabric of data, if you will. Naturally, an improved understanding of data would likely help un-blackbox a lot of topics and make it easier to make our own decisions on what data we produce and share, and whom to share it with.

---

### MX003 - GROUP

![](https://i.imgur.com/2C7hvlI.jpg)

In the workshop about feminist data visualisation, we worked with the concept of "The God Trick".
In groups, we discussed how to work "against" the god trick, or how one could create a visualisation which would not be seen as the god trick. Taking our starting point in Covid-19 data, we created a visualisation which shows the data in a way that both draws on the competitive charts of corona cases between countries, and tries to challenge this view by incorporating the connections between countries, and how different countries might have been at fault for affecting other countries, or bringing the virus to different parts of the world.

---

### MX004

#### [Slide Presentation](https://hackmd.io/@TYO2/SJfo2GC4D#/)

*Here are the slides from our presentation*

**My personal bit:**

The cookie I brought was a cookie from Bet365. Interestingly, I don't bet, and have never visited their site willingly; however, I have visited the site through pop-up ads on particularly shady streaming sites, and apparently, because of that, I had Bet365 cookies in my browser. This is interesting in relation to the assigned reading for today, as Carmi argues that cookies have been categorized differently from spam. Cookies have been portrayed as something that would enhance the user experience of a website and as a tool for e-commerce, while spam is this evil practice of tricking people with bulk amounts of unsolicited communication. The reason why I find the Bet365 cookie in my browser interesting is that it is there as a result of a spammy practice (pop-up ads) which could be seen as unsolicited communication. So even if we buy the argument that cookies are not spam, in this case, the cookie is the product of spam.

However, this made me think: how can I ever be sure, as a basic, faulty human being with rubbish memory, that I've never visited the actual site before? Because these cookies are saved for so long (I found cookies from websites I used for an exam two years ago), I started to question my own knowledge of my internet activities, and started trusting the cookies more.
This is something to be very critical about - in this way, one could argue that cookies contribute to creating our sense of our online presence.

---

### MX005 - GROUP
When working with the cookie script, we quickly had an idea of what we wanted to do. Online advertisers such as YouTube value visitors from economically strong countries higher than visitors from countries with lower economic power. So we simply wanted the cookie to save which continent you are visiting from and then assign a value accordingly. Adding a continent input was not an issue, but having the cookie assign a value according to the continent was too challenging for us in the time available. Below you can find our script and try it out for yourself, even though it does not have the desired feature.

```
<!DOCTYPE html>
<!-- ref: https://www.youtube.com/watch?v=5ttpghXjG0g
example of a cookie: _user=siusoon;expires=Thu, 10 Sep 2020 09:01:51 GMT; -->
<html>
<head>
<script>
let myCookies = {};

function saveCookies() {
  // retrieve data from the form elements
  myCookies["_user"] = document.getElementById("user").value;
  myCookies["_uage"] = document.getElementById("age").value;
  myCookies["_ucon"] = document.getElementById("continent").value;
  /* our unfinished attempt at assigning a value per continent - note the bugs:
     "=" assigns rather than compares, and it compares the string "_ucon"
     instead of the value stored under that key:
  if("_ucon" = "Afrika"){
    myCookies["_uval"] = document.getElementById("continent").value;
  }*/

  // get rid of the existing cookie
  document.cookie = "";

  // set expiry time, 30 seconds from now
  let date = new Date();
  date.setTime(date.getTime() + (30 * 1000));
  let expiry = "expires=" + date.toUTCString();

  // store each cookie
  let cookieString = "";
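  /* A minimal sketch of the feature we never finished: mapping a continent
     to a value. The continent names and numbers below are assumptions for
     illustration only, not real advertising rates. */
  function continentValue(continent) {
    const values = {
      "Europe": 3, "North America": 3, "Oceania": 2,
      "Asia": 2, "South America": 1, "Africa": 1
    };
    return values[continent] || 0; // unknown input gets a default of 0
  }
  // guard so this sketch also runs outside the page context
  if (typeof myCookies !== "undefined") {
    myCookies["_uval"] = continentValue(myCookies["_ucon"]);
  }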
  // loop over each entry in myCookies (e.g. user, age...) and join with ";"
  for (let key in myCookies) {
    cookieString = key + "=" + myCookies[key] + ";" + expiry + ";";
    document.cookie = cookieString; // save each cookie
    console.log(cookieString);
  }
  // load the output with the latest values
  document.getElementById("out").innerHTML = document.cookie;
}

function loadCookies() {
  console.log(document.cookie);
  myCookies = {};
  let kv = document.cookie.split(";"); // split into the individual cookies
  for (let id in kv) {
    let cookie = kv[id].split("="); // split each cookie into key and value
    // trim whitespace from the key and assign the second half, i.e. the value
    myCookies[cookie[0].trim()] = cookie[1];
  }
  document.getElementById("user").value = myCookies["_user"];
  document.getElementById("age").value = myCookies["_uage"];
}
</script>
</head>
<body>
User: <input type="text" id="user">
<p>
Age: <input type="text" id="age">
Continent: <input type="text" id="continent">
<p>
<button onclick="saveCookies()">Save to Cookies</button>
<button onclick="loadCookies()">Load From Cookies</button>
<p id="out"></p>
</body>
</html>
```

---

### MX006
*"If the network itself is political from the start, then any artistic practice within that network must engage politics or feign ignorance."* (Galloway, 2004, p. 214)

The above quote is one that really helped me understand what tactical media does. Throughout the assigned readings of this week, "tactical media" was name-dropped quite a lot with little explanation as to what actually constitutes tactical media. In a chapter preceding the assigned one, Galloway defines tactical media as "*[...] media of crisis, criticism and opposition.*" (Galloway, 2004, p. 175). This understanding of tactical media as a practice of opposition makes it political, as the headline quote explains, since going against a political network is politically motivated. To me, one of the most political artworks presented in the assigned readings was the *Toywars* work by the ETOY group.
Creating a game (as it was described) based on an abstract version of the market economics they were trying to affect in real life seems at once critical and oppositional to me - a sort of fight-fire-with-artistic-fire approach. It is an artwork which is oppositional and critical in nature, and as such politically motivated, making it (in my understanding at least) a piece of tactical media.

I find it difficult to decide whether or not *Toywars* can be said to be the product of critical making practices. I think there are some overlaps between tactical media and critical making, but upon reflection I would class tactical media as the sort of product that could emerge from a critical making practice. In critical making, the product is often not the main goal - the achieved reflection is - whereas, in the case of *Toywars*, the reflection is a product of using it, not making it. I hope this distinction makes sense. Perhaps it could also be argued, as *Toywars* is dynamic, that it is created by its usage, and as such should almost be considered critical making in itself... I don't know.

My question for further discussion in class centres on the era the assigned readings are from - they are quite old.

**Q**: As the media landscape and technical possibilities have developed a lot since these books were written, what might tactical media look like today compared to then?

---

### MX007 - GROUP
#### Expanding on Fuller's text and how he describes internet art as "not just art":
It can also be a functioning tool that does a task. In the case of the Demetricator, it is an actual add-on with an actual effect on the user interface. Web art is not-just-art in Fuller's understanding; it can only come into occurrence by being not just itself, and has to be used. Net art is not something static, but something that evolves through, for instance, the user's interaction with it.

#### What is the relationship between art and culture?
As you can see in the drawing, we interpret the relationship between art and culture as a movement from having one understanding, and then encountering art, which can create a movement that gives us the opportunity to reflect upon culture in a new way. First you see the circle that is flat; art is not added. We don't go in other directions in relation to our reflections upon life, society and culture. When adding art we create a reflection that makes another movement, which then makes us reflect upon culture in a new way.

![](https://i.imgur.com/uwhkWJG.jpg)
![](https://i.imgur.com/x3tVfMl.jpg)

#### How can art enable us to see things, especially technological objects in the context of digital culture, differently?
- As described earlier, it forces us to reflect upon culture in another way. When we encounter art, whether we understand it or not, we have to reflect on and question what we already know. Technological objects are accessible to more people, and in the setting of the digital this enables us to reflect upon digital culture.

#### Can you give an artwork example to illustrate that?
https://kunsten.nu/journal/dagens-netkunstner-ben-grosser/

The Demetricator is an example of an artwork that changes the digital culture we are already familiar with. By changing the layout of the known, it makes us reflect upon what we already know, and whether there are more layers to the understanding we have of the culture we already exist in. The Demetricator is an artwork in the sense Fuller presents because it is something that can be used, something that is a tool. If no one interacts with it, you might say that it has no existence, but when people use it, it can make us reflect - this relation and reflection is the artwork.

-----

### MX008
For this week's assigned readings, I reflected on the resources of the internet.
I think a defining characteristic of how many people use the internet today, which I think is different from how it was used in the 1.0 and 2.0 days, is the creation of new authorities online. By this, I think of YouTube and YouTubers in particular. On YouTube today, many different personal channels exist, with individuals showcasing or teaching some kind of skill. One such skill could be cooking - these cooking channels show users how to cook delicious food at home, sort of in a DIY-style format. I think this is interesting, because somehow these YouTubers become authorities on the topic they are invested in: people start regarding them as wise men/women whose opinion they trust on particular matters. As such, we as users can start to pick and choose our authorities, rather than being told by a cooking show on the television whose word to trust in cooking. If more people share the same opinion on any one channel, the number of subscribers and views act as guidelines for newcomers, and as such we build these authorities up as a community. To me, that is an interesting effect of the way we use the internet today.

----

### MX009 - GROUP
How the website looked:
![](https://i.imgur.com/Dcb9Nfe.jpg)

Our changes:
![](https://i.imgur.com/RZYIeYl.jpg)
![](https://i.imgur.com/lz2xcx2.png)

We have chosen to change the rhetoric of the Danish asylum application site. We have changed the text from the already existing site into a more critical perspective on what will happen when you apply, with inspiration from Danish asylum cases. Furthermore, we have changed the layout by changing the background picture into a picture of the reality some refugees come from. We haven't changed the colours of the layout, since we wanted it to look official, to keep the focus on the content of the page, such as text and images.
We chose this site because we believe that it gives false hopes about what will happen when you apply for asylum; in this way we create awareness about an already existing problem.

#### Is this a form of critical making?
We learn something about the material while we are working; in this way we begin to reflect upon the functionality of the different elements, and how easily they can be changed. The choices of the developers thereby become very clear.

#### Is this internet art?
If we had done it differently, and instead of changing the content had changed the layout into, for instance, something less understandable, it might have been a form of internet art in the way that it explores or works critically with the material of the internet. Our approach instead takes a critical stand towards politics. The content is therefore more critical, and therefore not necessarily internet art.

-----

### MX010
**Can you locate the articles' problem statements?**
The two articles both have a focus on archiving, but with different focus points - while Sluis provides an overview of the changing context of an archive (from stale to dynamic, from content to metadata, etc.), Rossenova is more concerned with the practical question of how to archive something based on ever-evolving technology (a piece of internet art built to run in a browser that gets frequent updates and changes).

**How do the authors contextualize the issues?**
What I enjoyed about these two articles was that they were concerned with the problem at hand, but also presented a possible solution or suggestion for how the challenge could be overcome. This is something that I've often felt was missing from the literature. In the case of Sluis, including the example of early-days Snapchat as a suggestion for how we could break with the excessive aggregation of data and metadata makes for a good suggestion of how we should/could work to overcome the problem.
Rossenova also includes the methods which Rhizome has used to preserve some internet art - here, I think specifically of the Webrecorder and the Webenact feature. This approach helps contextualize the problem for me, as it creates an understanding of the scope of the problem by defining it through solutions (I hope this makes sense).

**Beyond internet art, what other forms of archives are challenging but interesting to you?**
Perhaps also tied to internet art, and really all things we post online I suppose, I often think about how to preserve a feeling, vibe, mood or the like which is connected to an experience. This is perhaps not exclusive to online archiving, but I often think that certain sensations of visiting some place new cannot be archived in a simple photo or a video. How can we preserve/archive the feeling of experiencing a place for the first time? This is something of an impossible question to answer, but I find it really thought-provoking.

**Why do we need archives in culture, and what's the role of archival practice in wider culture?**
Here, I think about Norman's idea of knowledge in the head and knowledge in the world - I think it is, above all, insanely useful to archive stuff - it means we can liberate cognitive capacity and focus on tasks other than remembering. If we think about culture, I think archiving is something that helps pass down certain rules or norms through a society.

---

### MX011 - GROUP
Together as a group: what are the issues and cultural phenomena that the archival techniques are addressing? What is an archive? What are they archiving? What are the potentials and limitations of these techniques? How do these techniques allow you to think about internet culture differently?

In the case of the Webrecorder, we found it interesting that it highlights how important interaction is on the internet. Just archiving a static image of a page would not be representative of the experience of visiting the page.
The Wayback Machine also keeps some of the interactivity, but lacks the ability to navigate the page through links, while a screenshot completely removes all interactivity. This lack of interactivity is really a symptom of the fact that most of the underlying code is not being saved. This made us think of how complex it would be to try to keep all interactivity, which means saving a lot of data, while at the same time minimizing the weight of the data.

-----

### MX012
**What is/are commons? Are you familiar with this concept/practice? Have you come across it before this session? If so, in what circumstances?**
I have come across the concept of commons a few times before, however always in relation to "The Tragedy of the Commons", which of course is not a particularly nice spin on it. This concept was brought up in our second semester, though I do not remember in which particular course. It was also mentioned to me in a Human Geography class this past semester as an important concept in terms of environmental change and thinking about how we can treat our planet better.

**Find an image that best visualises how you understand or imagine commons. Save that image on your computer and bring it to the session.**
Probably as a result of my position, I find it hard not to think about certain collaborative websites when asked this question. Specifically, I think of Reddit and YouTube.

![](https://i.imgur.com/fbKIh3V.jpg)

If I were to think of something not digital, I find myself automatically thinking of cultural production - amateur theatre, for example. I was part of a small amateur theatre in my home town for a short while - we had a house at our disposal, and it was entirely up to us what to do with that house. In that sense, it became a commons for that specific community.
![](https://i.imgur.com/mPjp98o.jpg)

**Think of 3 - 5 words (nouns, verbs, adjectives) which in some way describe the values you associate with commons.**
I think for me, "Collective" is a constituting word when thinking of the commons. In the same way, I think "Community" is also important, as the commons are produced through collaborating as a community. Because of my previous experiences in talking about the tragedy of the commons, I also associate the word "Responsibility" a lot with commons - we cannot exploit the commons for personal gain, so being part of the commons involves a certain responsibility towards the other members of the community.

----

### MX013 - GROUP
**What kind of free and open source software do you like? Why?**
We discussed the different kinds of open source software that we use. For instance:
- Wordpress
- Processing
- Modelling software, for instance for 3D printing

Processing, because it is an open platform that works towards programming literacy. Depending on your level of programming experience, you can either contribute to the "core" of the programming environment, fix bugs, use the libraries for your own projects, or share your "beginner" code. It becomes a kind of commons, with a community.

**If you have to choose one aspect of FLOSS to discuss, how would you approach this?**
We find the commercial aspect of FLOSS both interesting and confusing. It is mentioned by Mansoux that FLOSS is made in a capitalist society, and that it therefore is not incompatible with such a society. However, this is a little confusing to us, since a core value of FLOSS is the freedom to edit and distribute software freely - so how does this play into capitalist thinking? If we were to investigate this aspect of FLOSS, we would make a comparative analysis of different FLOSS software: how they differ in the ways the software is copied, altered and reshared, and which commercial practices they implement.
------

### MX014 - GROUP
This is taken from the class discussion page, where it was originally written. A glossary:

**Data**
A defined entity that can consist of, for instance, a single number, but a data point can also consist of, for instance, an image in which more information is stored in binary form. Data can therefore take many forms and serve many purposes, both as input and output.

**Machine learning**
Machine learning is the practice of helping software perform a task without explicit programming or rules.

**Dataset**
Gathering a dataset is a curating practice, either setting the entry rules for what can be included or manually curating which data to include. Malevé argues that "With more data come more variations", given that the data is chosen in accordance with specific "rules". Depending on the reason for creating the dataset, it can consist of different entities, such as numbers, text, etc., and is similarly used to test, train and evaluate algorithms. "A dataset in computer vision is a curated set of digital photographs that developers use to test, train and evaluate the performance of their algorithms." (Malevé, 1)

-----

### MX015 - GROUP
DISCLAIMER: This discussion was held in a group in class, but we did not write anything down. Thus, this is me personally writing down some of the things we spoke about.

We were particularly interested in the limitations of the ML program. In particular, working with it made us think about the imagined affordances that we project onto the software - we implicitly expected it to know the difference between the right and the left hand, but of course, this is not the case. This reflection also led us to think about how/what the algorithm actually picks up on: did it register movement (i.e. it doesn't matter whether you move your right hand, your left hand, or your head, in the same spot and with the same motion), or was it able to distinguish certain features?
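To make the glossary's definition of machine learning a bit more concrete - performing a task "without explicit programming or rules" - here is a toy nearest-centroid classifier. All the numbers and labels are made up for illustration; a real gesture model like the one from class is far more complex, but the principle is the same: the program never contains a rule for "right" or "left", it only averages whatever examples it is given.

```javascript
// A toy nearest-centroid classifier: no explicit rules, just labelled examples.
function train(examples) {
  // average the feature vectors per label
  const sums = {};
  for (const { features, label } of examples) {
    if (!sums[label]) sums[label] = { total: features.map(() => 0), count: 0 };
    features.forEach((v, i) => { sums[label].total[i] += v; });
    sums[label].count += 1;
  }
  const centroids = {};
  for (const label in sums) {
    centroids[label] = sums[label].total.map(v => v / sums[label].count);
  }
  return centroids;
}

function classify(centroids, features) {
  // pick the label whose centroid is closest (Euclidean distance)
  let best = null, bestDist = Infinity;
  for (const label in centroids) {
    const dist = Math.sqrt(centroids[label]
      .reduce((acc, v, i) => acc + (v - features[i]) ** 2, 0));
    if (dist < bestDist) { bestDist = dist; best = label; }
  }
  return best;
}

// made-up "hand position" data: a single x-coordinate per observation
const model = train([
  { features: [0.9], label: "right" },
  { features: [0.8], label: "right" },
  { features: [0.1], label: "left" },
  { features: [0.2], label: "left" },
]);
console.log(classify(model, [0.85])); // prints "right"
```

This is also why the model cannot "know" right from left in any deeper sense: if the examples happened to capture motion rather than position, motion is all it would pick up on.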
-----------

### MX016
I found this week's readings to be methodologically intriguing.

In the Noble chapter, what really caught my interest was the use of intersectional analysis and Black feminism as a framework for exploring search engines. Using Black feminism as a lens of enquiry is interesting to me in this context, as it just seems a good fit for the type of exploration the author was doing, working as it does from the standpoint of an under-represented minority in the world of Googling. As I had never heard of it before, I was also intrigued by her discussion of the male gaze, and her use of it in combination with the Black feminism framework to discuss the kinds of search results she was getting.

In the Ridgway experiment, I was interested in her use of auto-ethnography, as I had never heard of this before - it seems like a good fit for a critical making process. I was also interested in her introduction (to me at least) of the term filter bubble. This made me think quite a lot about the way I use Twitter, and particularly the feeling online during the 2016 presidential election. Back then, I was convinced that Donald Trump stood no chance, because none of the American accounts I followed on Twitter had any time for him. Obviously, he won, and it made the filter bubble I had created for myself very apparent.

My question for this week is: *Is it possible to act online without entering a filter bubble? Ridgway's experiment did not achieve this, so how could it be done?*

-------

### MX017 - GROUP
**Feedback for group 4**
We do understand why you compare commons and open source software, though we believe that your assignment would benefit from a clear distinction between the two. The two concepts flow into each other as one unit in the way you use them, both in the podcast and in the written part of your assignment.
Overall, it would have been nice if you had defined the terms you use - both the important theoretical ones, but also concepts such as ownership, because in this context ownership can mean quite different things. You seem to have a nice research question: "It is then not only relevant to look at defining whether or not Wikipedia can be viewed as a commons, but also what ethical questions can be raised in relation to how the commons is operating." But it seems that you don't really follow up on it as much as we as listeners or readers would like. In the podcast, most parts seem mostly explanatory/analytic. It would have been nice if you had included reflections on a higher taxonomic level or followed up on the research question - such as how the users decide how and with what to contribute, and how this might conflict or align with what the Wikipedia foundation wants.

-------

### MX018 - DRAFT SYNOPSIS
Hi Winnie (and everyone else)

Here is my draft synopsis. I have spent a lot of time trying to figure out what my topic should be. I am very interested in data visualization, and have worked with this before. I think data visceralization is super interesting, so I wanted to work with that, but I had a hard time figuring out how to apply it to a meaningful topic. I ended up with metaphors in computation, as I think this topic has something about it. I have included my structure for the synopsis here, so you can get a sense of how my arguments are intended to work, and where I am going with this.

------

* **Introduction**
    * Introduce metaphors
    * Use Lakoff and Johnson
        * "This is what it means for a metaphorical concept, namely, ARGUMENT IS WAR, to structure (at least in part) what we do and how we understand what we are doing when we argue." (p. 5)
    * Introduce abstract metaphors in computing
    * Use Fuller: metaphors are used to ensure easy use from the beginning.
    * However, they also conceal - use Fuller: how metaphors hide actual workings.
    * Introduce data visceralization briefly
        * "*activating emotion, leveraging embodiment, and creating novel presentation forms help people grasp and learn more from data-driven arguments, as well as remember them more fully.*" (p. 88)
* **Problem statement**
    * I am studying data visceralization because I want to find out how this approach can be used to open up the black box of abstract computational metaphors such as "cookies" and "cloud", in order to help my reader understand the learning potential of designing for emotional affect.
* **Contextualization of the problem space: metaphors in computation**
    * Why can metaphors be a problem?
    * Use Carmi - cookies vs. spam - why is one okay?
    * Use "Cloud" by Durham Peters
    * I might need more literature here
* **Methodology**
    * Overarching methodology: data feminism (D'Ignazio & Klein)
        * False binaries: emotion vs. reason
    * Method used from here is data visceralization
        * Use emotion and embodiment to understand black-boxed metaphors
    * Critical making or critical technical practice?
        * It is probably more on the side of critical technical practice.

------

**VISCERALIZING METAPHORS: UNPACKING COMPUTATIONAL METAPHORS BY EMOTION AND EMBODIMENT**
(working title – does a synopsis even need a title?)

"My life is like a rollercoaster at the moment, full of ups and downs". "My heart is broken". "I am drowning in assignments at work at the moment". Metaphorical concepts such as those described above are prevalent throughout our everyday language. They are linguistic tools embedded in human language, helping us to understand something in terms of something else (Lakoff & Johnson, 1980, p. 5). The use of metaphorical concepts becomes particularly interesting when their structures become so embedded in everyday language that they are used almost without consciousness of their presence.
When this happens, Lakoff and Johnson argue that the embedded metaphorical concepts may contribute to shaping how we understand and perceive the world around us, as they "[…] structure (at least in part) what we do and how we understand what we are doing […]" (Ibid, p. 5). If we accept the argument that metaphorical concepts help shape the way we understand our actions, then what happens when the metaphors we use do not accurately represent the things they describe? One way of investigating this is by looking at the use of metaphorical concepts in computing. Familiar metaphors such as "desktop" or "wastebasket" have helped conceptualize the purpose and use of personal computers in computer-interface design since the introduction of the Graphical User Interface, and have been prevalent in computing ever since. According to Matthew Fuller, the use of such metaphorical concepts was intended to make using something complex easy by confining it within something we already know (Fuller, 2003, p. 55). However, the ease of use comes at the cost of black boxing. Fuller argues that the actual operations of the wastebasket on our computer interface do not resemble those of the actual wastebasket it mirrors (Ibid, p. 55). Returning to Lakoff and Johnson: when a metaphorical concept such as the wastebasket becomes so embedded in the user's way of conceptualizing the computer that they no longer think consciously of it, the mismatch between the comfort of the metaphor and the actual computational workings can come to obscure their understanding of how a computer operates. This is not only true for the metaphor of the wastebasket. On a larger scale in computing, metaphors such as "cookies" and "cloud" are examples of complex computational processes wrapped in the comfort of a misleading metaphor. So, how might we unpack these metaphors to reveal the operations they represent? In data feminism, one solution could be the use of data visceralization.
Arguing that the false binary between reason and emotion has meant a failure to utilize emotion in data visualization design, D'Ignazio and Klein advocate for the inclusion of emotion in data visualization (D'Ignazio & Klein, 2020, p. 77). One way of doing this is through data visceralization. Unlike visualizing, which is concerned primarily with the eyes of the user, visceralizing works in a more embodied way, as it employs more sensory modalities with the aim of evoking emotion in the receiver. D'Ignazio and Klein argue that the use of data visceralization is effective as it can "[…] help people grasp and learn more from data-driven arguments, as well as remember them more fully" (Ibid, p. 88, emphasis added). While abstract, misleading metaphors in computation are not (necessarily) concerned directly with data, the concept of visceralization may be a useful method for unpacking these metaphorical concepts, seeing as the authors argue specifically that this approach may help people learn and remember. Therefore, this project aims to explore how the method of data visceralization may help unpack metaphorical black boxes such as "cloud" and "cookie" in computing.

CONTEXTUALIZATION (working title for section)

METHODOLOGY (working title for section)

#### Feedback for Martin
* First of all, cool topic (much like my own)
* Good choice of literature
* One thing that stands out to me is your use of the readymade through critical making. To me, critical making is as much about the process as it is about the final product - will the making of the readymade help you reflect on the topic of discussion?
* Also, in line with this: is a readymade a piece of art (internet art?), or is it a visualization? The picture you have inserted seems more like the latter to me.
* I think it is a good idea to use Google Drive as a "case study" for this exploration - it grounds your work in something specific.
#### Feedback for Nicoline
* Original idea, working with compression and Zip files - very cool.
* I think you have to be careful and "stick to your lane" a little bit more. We do not have a lot of written space to elaborate our thoughts in, and neither do we have a lot of time in the oral exam. In your draft synopsis, you mention several different things (archiving, metaphors, compression, the materiality of data), and I think you stray a little from your problem statement, which specifically addresses the materiality of data through Zip files.
* Along the same lines, in terms of method, I think you should consider not doing too much. In your draft, you talk about doing both 1-bit computing through critical making and data visualization, which are two different methods that are both very time-consuming. Remember the scope of what you are working with.