# Mikael's repository

## Block Assignments:

### Block assignment 1

[Block 1](https://hackmd.io/m5Pa7-ItTWSTAwr1bAo2xg)

<iframe width="560" height="315" src="https://www.youtube.com/embed/iuABouMOO-w" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>

*Video demonstration of how our prototype works*

#### Introduction

In the process of creating a computational prototype, our focus has been on data visualisation, including both its hidden and its visible parts. We encounter data visualisations in many aspects of our daily lives; they have become such a valuable tool that we rarely stop to reflect upon how they are made, or which data they represent. A data visualisation often stands in for an important argument, its convincing rhetoric building upon numbers. We set out to investigate what role data plays in the process of making a data visualisation: how choices about data sources, data cleaning, transformation and many other factors are part of visualising data.

#### Method

For many years, even millennia, there has been a cultural practice of collecting, storing and analysing data. The volume, variety and use of data have grown enormously. Kitchin and Lauriault argue in "Towards critical data studies: Charting and unpacking data assemblages and their work" (2014) that data was formerly understood as pre-analytical and pre-factual, existing prior to interpretation and argument. Following this understanding, data was a raw material on which knowledge was built, somewhat objective and representative of what it described; in this view it is not the data itself, but the uses of data, that are political. With the rise of Critical Data Studies, this view has been challenged: "[...] data are constitutive of the ideas, techniques, technologies, people, systems and contexts that conceive, produce, process, manage and analyse them." (Kitchin & Lauriault 2014, 5). From this understanding it follows that data is never a raw, objective material; it is processed and transformed before collection, during data cleaning, and again in data visualisation. Drawing upon the critical data studies approach, we set out to explore the different layers of data visualisation.

#### Prototype

The initial idea was to explore data visualisation by creating our own tool to visualise irregularities in datasets. Drawing inspiration from Herman Chernoff's approach of visualising multivariate data in human faces (Chernoff, 1973), we wanted to create a prototype through which we could show inequality in different European countries through visualisations of faces.

![](https://i.imgur.com/yMSJ1Fw.png)

With this as our main goal, we had a framework in which we could investigate the different layers of data visualisation, as follows:

1. **Data:** Finding and investigating already existing datasets on inequality → analysing potential biases and visualisation choices of the chosen datasets → cleaning the chosen data, thereby transforming it into our own dataset.
**Outcome:** Reflecting upon the infrastructure of the already existing datasets and their biases, but also upon our own choices in transforming the data into something usable for the project.
2. **Visualisation/Building:** Choosing materials → conforming with the given constraints → choosing how to build a face that was transformable.
**Outcome:** Reflection upon which parts of the tool should be visible and which should be hidden, which prompted reflection about how these choices are also part of digital visualisations.
3. **Technical layer/Arduino:** Transforming the dataset into variables that could be processed by code written in C → mapping these variables to degrees on the servo motors that controlled the position of the face (a sketch of this mapping step follows the list below).
**Outcome:** The process of connecting the visual output of a face with mechanical motors controlled by a chosen dataset proved very difficult. A goal that seemed simple at first had many constraints that we needed to work with or around. The end result was not as smooth as expected, creating disturbances for the observer of the tool. However, this made us reflect on how the vulnerability of a data visualisation can work as an argument to uncover the hidden biases in the work, and to visualise not only an end product but also the process of a data visualisation.
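The mapping from data values to servo angles was the conceptual core of the technical layer. Our actual code ran on the Arduino in C; as a hedged illustration, the sketch below shows the same linear mapping in JavaScript, with an invented inequality measure and value range standing in for our cleaned dataset.

```
// Illustrative sketch of the mapping step (the real code ran in C on the Arduino).
// A hypothetical inequality score (assumed range 20-50) is mapped linearly
// onto a servo angle between 0 and 180 degrees.
function mapToServoAngle(value, inMin, inMax, outMin = 0, outMax = 180) {
  const clamped = Math.min(Math.max(value, inMin), inMax); // keep value in range
  return Math.round(outMin + ((clamped - inMin) * (outMax - outMin)) / (inMax - inMin));
}

// Invented data points standing in for our cleaned dataset
const inequalityScores = { Denmark: 27.5, Germany: 31.1, Spain: 34.7 };

for (const [country, score] of Object.entries(inequalityScores)) {
  console.log(country + ": servo position " + mapToServoAngle(score, 20, 50) + " degrees");
}
```

On the Arduino itself, the built-in `map()` function performs the equivalent integer mapping.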
Ratto argues in *The Critical Makers Reader* that, in his approach to critical making, the objects that are created are not the main outcome; instead, "[...] the intended results of these experiences are personal, sometimes idiosyncratic transformations in a participant's understanding and investment regarding critical/conceptual issues." (Ratto & Hertz 2019, 24). Starting with the initial idea of making a tool that can visualise irregularities in datasets, we ended up, through the process of critical making, with a broader conceptual understanding of data visualisations.

#### Main points of discussion:
* What were the most important things we learned through the process?
* What is the purpose of our tool - how can we compare it to the god trick, is it doing the work of feminist data visualisation?
* How does the experience of this visceral type of data visualization differ from a more common style?

#### Bibliography:
Chernoff, H. (1973). "The Use of Faces to Represent Points in k-Dimensional Space Graphically". Journal of the American Statistical Association, 68(342), 361-368.
Haraway, Donna (1988). "Situated Knowledges: The Science Question in Feminism and the Privilege of Partial Perspective". Feminist Studies, 14(3), 575-599.
Kitchin, Rob, and Tracey P. Lauriault (2014). "Towards critical data studies: Charting and unpacking data assemblages and their work".
Ratto, Matt & Hertz, Garnet (2019). "Critical Making and Interdisciplinary Learning: Making as a Bridge between Art, Science, Engineering and Social Interventions". In Bogers, Loes, and Letizia Chiappini, eds. The Critical Makers Reader: (Un)Learning Technology. Institute of Network Cultures, Amsterdam, pp. 17-28.

#### Process:
[Link for process site](https://hackmd.io/KauNqt24SX62ZDpMPYU12w)
[Brainstorm/ideas for project](https://hackmd.io/a4yVt49iQzGJgvNZaW8hNQ)

### Block assignment 2

[Block 2](https://hackmd.io/prv-FQ01Qzq2Bx8s_jumRg)

Link to our program: https://estermarieaa.github.io/DC/Google_search/

#### Introduction

For this project, we engaged in a critical making process exploring how Google makes search suggestions for users while they are typing in the search bar. When certain words or sentences are typed, Google tries to predict the search by suggesting ways to finish it. However, certain words do not receive any such suggestions. These are typically words of a profane nature, often connected to sexual content or pornographic material. They are still searchable words, but their placement on the profanity list means that Google will not provide suggestions for them. This becomes interesting when considering that words which may be connected in some way to the porn industry can be caught by this algorithm as "collateral damage". Such words often have to do with sexuality (gay, lesbian, transsexual, etc.). These are words that people might Google in an endeavour to learn something about themselves, and in such situations search suggestions could play an important role as social proof that others have the same questions to ask. Search suggestions could thus be argued to, in some way, validate a question sparked by insecurity, and make users feel that others face the same challenges. When there are no such suggestions, however, users could perhaps feel that they are abnormal or alone. The algorithm that places these words on the profanity list, thus hiding suggestions, therefore contributes to, or creates, a culture of taboo in the real world. These algorithmic structures have an influence on our perceived culture, and this thought has led us to the following problem statement:

*How can we, through exploring how parts of the internet are hidden, highlight the way in which algorithmic structures shape digital culture?*

#### Project description

Our artwork is presented as a Google search page. In the search bar, a variety of words appear rapidly, along with their suggested search options if any are available. These words are a combination of two different lists. The first is a list of words we found on GitHub which Google has blacklisted for suggestions; we call this the profanity list. The other is a list of the most frequently searched words on American Google; we call this the popularity list. Fifty words from the profanity list and fifty words from the popularity list are combined in an array, from which a randomizer chooses the word to display (see the sketch below). Each time a word pops up, a sinister robotic voice reads it aloud, be it a profanity word or a popular word. The project does not aim to push its agenda in an outright way; rather, it lets users discover the hidden suggestions on their own by showing the contrast rapidly. The robotic voice is meant to highlight the absurdity of the profane words when read aloud alongside regular, popular words.
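As a minimal sketch of how the display loop could work (the search bar id, the timing, and the placeholder word lists are assumptions, and fetching Google's actual suggestions is omitted; the full program is at the link above):

```
// Minimal sketch of the display loop. The real lists hold fifty words each;
// placeholders are used here, and the search bar id is an assumption.
const profanityList = ["..."];  // words blacklisted from suggestions (elided)
const popularityList = ["..."]; // frequently searched words (elided)
const words = profanityList.concat(popularityList);

function showRandomWord() {
  // The randomizer picks from the combined array, so profane and popular
  // words appear with equal probability.
  const word = words[Math.floor(Math.random() * words.length)];
  document.getElementById("searchbar").value = word;

  // The browser's built-in speech synthesis reads the word aloud.
  const utterance = new SpeechSynthesisUtterance(word);
  utterance.pitch = 0.3; // a low pitch for the robotic, sinister effect
  window.speechSynthesis.speak(utterance);
}

setInterval(showRandomWord, 2000); // show a new word every two seconds
```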
#### What type of internet art is our project?

Internet art can be defined in many different ways, but Galloway describes it as such: "Internet art, more specifically, refers to any type of artistic practice within the global Internet, be it the World Wide Web, email, telnet, or any other such protocological technology." (Galloway, 2004, p. 211) In this sense our work can be described as a piece of internet art, since it takes place within the internet, a protocological technology, and in a sense also addresses this protocological nature by addressing Google's query-based search engine. But could our piece be described as Net Art in the sense Galloway describes? He writes: "Net art addresses its own medium; it deals with the specific conditions the Internet offers." (Galloway, 2004, p. 216) So does our project deal with the conditions of the internet? Our project addresses the choice-based nature of an algorithm, but this could be done offline just as well as online. That said, the algorithm is a native element of the internet, so one could argue that algorithms are a condition of the internet. In that sense our project could resemble Net Art.

#### Reflections

As we began our process, we wanted to address how certain dimensions of the internet are being filtered out. Reading that 30% of internet usage is for pornography, we found it interesting that pornography is being filtered out even though such a massive part of the internet consists of it, and its use is spread across demographics. This filtering can be seen in the Google search bar: words associated with pornography or profanity do not get any suggestions. When investigating the words more closely, we found some interesting inconsistencies. In Danish, words meaning lesbian, bisexual and transsexual do not get any suggestions, just like words connoting profanity and pornography. As we switched from American Google to Danish, and to a mix of Danish and English, the results changed: some words were blocked while others were not. This gave us insight into the complicated algorithm deciding which words are blocked and which are not.

Our design process can be described as critical making. While developing the design we started questioning aspects of it; the process raised many questions and answered some too. This is the power of critical making: it can raise questions such as "What are the relations between particular social agendas and technical objects and systems?" (Ratto & Hertz, 2019, p. 23). This was what we were particularly interested in exploring.

#### Bibliography:
Galloway, Alexander R. (2004). "Internet Art" in Protocol: How Control Exists after Decentralization. Leonardo. Cambridge, Mass: MIT Press, pp. 208-238.
Ratto, Matt & Hertz, Garnet (2019). "Critical Making and Interdisciplinary Learning: Making as a Bridge between Art, Science, Engineering and Social Interventions". In Bogers, Loes, and Letizia Chiappini, eds. The Critical Makers Reader: (Un)Learning Technology. Institute of Network Cultures, Amsterdam, pp. 17-28.

### Block 3 assignment

[Block 3](https://hackmd.io/h_G7ROR5Ts-R0jEFWDW57w)

<iframe width="100%" height="300" scrolling="no" frameborder="no" allow="autoplay" src="https://w.soundcloud.com/player/?url=https%3A//api.soundcloud.com/tracks/923740186&color=%238bc99d&auto_play=false&hide_related=false&show_comments=true&show_user=true&show_reposts=false&show_teaser=true&visual=true"></iframe><div style="font-size: 10px; color: #cccccc;line-break: anywhere;word-break: normal;overflow: hidden;white-space: nowrap;text-overflow: ellipsis; font-family: Interstate,Lucida Grande,Lucida Sans Unicode,Lucida Sans,Garuda,Verdana,Tahoma,sans-serif;font-weight: 100;"><a href="https://soundcloud.com/mathias-tvilling-70944421" title="Mathias Tvilling" target="_blank" style="color: #cccccc; text-decoration: none;">Mathias Tvilling</a> · <a href="https://soundcloud.com/mathias-tvilling-70944421/johannes-sizzling-hot-ones-open-software-on-the-table" title="Johannes&#x27; Sizzling Hot Ones: Open Software On The Table!" target="_blank" style="color: #cccccc; text-decoration: none;">Johannes&#x27; Sizzling Hot Ones: Open Software On The Table!</a></div>

**Introduction:**

This project explores the FLOSS software Atom.
Atom is free and open source, released under the MIT license, one of the most permissive licenses: it allows copying, merging and republishing, with the only requirement being that the copyright notice is included in all copies made. We therefore chose to explore Atom further in our podcast, to raise questions about both advantages and disadvantages of FLOSS software, with Atom as our case study.

The podcast is part of an imagined series of podcasts on FLOSS and open source software, where experts come on the show to answer questions from listeners. This episode is centered on Atom, where the imagined founder of Atom, Mr. Atom, guests the show as the weekly expert. Furthermore, two listeners call in to express their opinions about Atom. The first is "Ester" from Denmark, a developer at a small company that uses the Atom text editor as part of their working practice. Ester is very enthusiastic about the commoning practice of the software and the way in which one can use other people's extensions and contribute one's own. Furthermore, she is interested in why Atom is free and not commercialised. To answer this question, Mr. Atom emphasizes the commoning practice of open source, and how this practice is the philosophy the Atom community is built upon. The next listener, "Mike", questions whether Atom really is non-commercial, or whether it has commercial gains from being connected to GitHub, which is in turn owned by Microsoft. This leads to a discussion between Mike and Mr. Atom of the "free" aspect of FLOSS, and of Atom in particular. The two viewpoints were chosen in order to elucidate the complexity of FLOSS, and how "free" can have many different meanings. Lastly, the host of the podcast concludes that there are many aspects of FLOSS beyond being free and open source, and emphasizes the importance of raising the discussion of these different aspects.

**To which degree is Atom FLOSS?**

First of all, we will try to arrive at a definition of FLOSS. FLOSS can be understood as an umbrella term: a movement in which many different ways of addressing free/libre and open source software co-exist. This can be seen in the many different types of licenses, with different structures, that all coincide within this term. These licenses express the idea of free and open source software: "Free software can be freely copied, modified, modified and copied, sold, taken apart and put back together." (Mansoux, 2017, p. 92). Even though this might seem like a pretty sharp definition, the licenses differ from each other. They have different purposes, addressing different elements and degrees of freedom, political structures and overall ideologies. The common goal they seek is to create environments where one can share and develop software freely, making it accessible, thereby creating cultural liberty and equality (Mansoux, 2017, p. 82).

Atom is a free and open source text editor used for coding, first launched in 2014. The platform is developed by GitHub and is licensed as FLOSS under the MIT License, widely considered one of the most open free and open source licenses available (Wheeler, 2015). This license allows users to modify, distribute and use the software both privately and professionally at no cost. Atom is renowned for its flexibility of use, a result of a big community of contributors who create packages and extensions for Atom, making it compatible with a variety of different tasks.
Atom's status as FLOSS software, combined with the huge community maintaining, developing and using it, arguably allows us to view Atom as a digital common in Sollfrank's definition (Sollfrank, 2017). The interesting question, also raised in the podcast by the caller "Mike", is Atom's affiliation with GitHub and, by extension, Microsoft. In 2018 GitHub, which made and owns Atom, was bought by Microsoft, and thus Atom is now part of a Microsoft-owned company. Microsoft as a company cannot be said to be part of FLOSS, certainly not to the extent that GitHub and Atom was/is, so what does this mean for our view of Atom as a free and open source piece of software? This is an interesting question, as Atom by all accounts is still run as a FLOSS project; however, as argued by the character "Mike" on the podcast, in some ways Atom could be seen as an asset for GitHub. The community of developers maintaining and improving Atom could be seen to add value not only to themselves and the platform, but also to GitHub, by making its assets more valuable.

**Conclusion**

To conclude on the different aspects of FLOSS, with Atom as our case study, one can argue that the development and maintenance of Atom is a commoning practice, where the software itself and the different packages that can be added are the resources the community works with in the commoning practice of programming, sharing and learning from each other. On the other hand, the free aspect of Atom can be challenged by the fact that Atom has such close connections to GitHub, and further down the line Microsoft, which suggests that Atom, and FLOSS in general, has hidden aspects that might not be visible to the specific communities, such as the commercial gains or the added status GitHub/Microsoft can obtain from the Atom community.

**Bibliography:**
Mansoux, Aymeric (2017). "In Search of Pluralism" in Sandbox Culture: A Study of the Application of Free and Open Source Software Licensing Ideas to Art and Cultural Production, pp. 76-112.
Wheeler, David A. (2015). "Why Open Source Software / Free Software (OSS/FS, FLOSS, or FOSS)? Look at the Numbers!" (https://dwheeler.com/oss_fs_why.html)
Sollfrank, Cornelia (2017). "Art & Speculative Commons". (https://vimeo.com/216611321)

### Block assignment 4

[Block 4](https://hackmd.io/RhGqgttBQWKI_MurAEjQBg)

**Introduction**

The researcher Safiya Noble points out that the inherent biases in algorithms are a result of the fallible nature of humans. She describes it as such: "The math-powered applications powering the data economy were based on choices made by fallible human beings. Some of these choices were no doubt made with the best intentions." (Noble 2018, p. 40) If the faults in these choices are the result of the fallible nature of humans, how can we then, as fallible humans, try to avoid embedding our biases in our computational work?

Research question: This project will study how one might approach creating an algorithm that is as unbiased as possible, by exploring how others have approached this challenge, trying it out for ourselves, and in the end evaluating in what ways our algorithm has succeeded and failed.

**Overview of potential process**

The methodology that will be used to tackle this challenge is critical making. To inform the starting point of our critical making process, research will be done on how others have approached this issue.
The insights gained from this research will then be used to inform how we approach creating our own machine learning algorithm through critical making. The goal of this process will not be to create an unbiased algorithm, since that, according to the literature, seems to be an impossible task. Instead it will be to test the approaches found in our research and find where their flaws or challenges might be.

**Evaluation of research question**

In The Craft of Research, a research question is split into topic, conceptual question, conceptual significance and potential practical implication. The topic of our research question is bias in machine learning/algorithms. An issue here could be that this topic is quite broad. To address this, one could ask what the conflict is, what this problem contributes, and which consequences emerge from this practice (Booth, Colomb & Williams, 2008, p. 39). In this case, we could have chosen a specific type of bias, such as racial or gender bias, or a specific type of algorithm or application, e.g. the difference between supervised learning, unsupervised learning and so forth. The conceptual question of our research question would be what approaches exist that attempt to avoid embedding bias when creating algorithms, and what strengths or weaknesses they have. Once again, this question is quite broad, and in a sense it would be more fitting for a research synthesis paper than for a critical making paper. The conceptual significance of our research question is quite clear, in that the negative effects of inherited biases in algorithms are commonly recognized in our field. The potential practical application of our research question is also quite clear, since having an overview of the strengths and weaknesses of different approaches to creating algorithms that are as unbiased as possible would make this task much more approachable for designers. The task can seem very daunting, so having a clear terminology around different methods would make it easier to approach.

**Bibliography**
Booth, W., Colomb, G., & Williams, J. (2008). The Craft of Research (3rd ed.). The University of Chicago Press.
Noble, S. (2018). Algorithms of Oppression: How Search Engines Reinforce Racism. New York University Press.

## MX assignments and reflections

If something is lacking a reference, refer to the image below:

![](https://i.imgur.com/Mb1GGxM.jpg)

Mini-exercises

### MX001

A common theme in this week's readings is the description of some kind of transition from an analogue culture to a digital culture, or simply the emergence of a digital culture. While these texts explore this from a linguistic or literature-focused perspective, it might be interesting to apply a more artefact- or behaviour-based perspective. To explore this, one might take a digital artefact and examine what cultural effects it has had on its surrounding culture. This might not be considered pure digital culture, since it is rather a mix of analogue and digital culture, but one might argue that this is always the case, since "Every digital device is really an analogical device.", as Benjamin Peters describes it. For this week's assignment we had to choose a digital artefact that interests us and which might have some influence on digital culture. I chose my ordinary Bluetooth headset, since I find it interesting how digital devices have made listening to music much more individualistic and pervasive in our everyday lives.
If we tried to explore how different types of portable music devices and their accompanying headsets have changed our culture, this might give us insights of a different nature than a linguistic or literature-based analysis might offer. We might find that our ability to listen to music anywhere and at any time distances us from our surroundings, or maybe it simply adds another layer to our interaction with them. Perhaps we would find remnants of a less digital music culture, for example lovers of vinyl, who believe that bit-based music is inferior to a more "analogue" kind. An interesting point that Benjamin Peters makes is that the emergence of the digital often creates both the digital version of something and its analogue counterpart. So exploring the emergence of a digital artefact might uncover both the digital culture it creates and its analogue counterpart, and where these meet. This might not be entirely correct, since the analogue counterpart often already existed and is simply given the descriptor "analogue". The analogue counterpart might also change in some way in relation to its new digital counterpart, but this is not necessarily the case. In the example of music, the "analogue" counterpart, here music pressed on vinyl, already existed, so it has simply become the counterpart to digital music. But this has also changed it from simply being a way to listen to music into almost a statement against its digital counterpart, gaining a new cultural significance. It might be interesting to use these insights to look into other technologies that are currently emerging, and to try to identify how this emergence might affect their analogue counterparts.

### MX002

1. What interests you: When reading this week's texts, it first became apparent to me how broad a topic data is, and how challenging it is to explore it in any kind of extensive way. The academic field that tries to do so is obviously also very young, so it should not be surprising that the field's attempts to describe the structures surrounding data are very complex or in some ways messy. I find the distinction between "cooked" and "uncooked" data very interesting, because this notion of exploring biases in datasets is something I have spent quite a bit of time doing in a more commercial setting. In that setting, the aim of the practice is to discover biases or inaccuracies in datasets that might make them dangerous, in an economic sense, to navigate by. There we would, for example, explore who has registered the data and how their practices affect the dataset. I find it very interesting that the field of Critical Data Studies does something methodically very similar, but with a non-commercial goal. Maybe the two approaches could learn from each other.

2. Make a taxonomy/map of 'data', or the concept of data (on paper or other software, then post as an image): The messy nature of data is also why, in my "map" of data, I tried to keep it very simple and only really visualize some of the "looping effect" described by Kitchin. It simply highlights how input is exposed to a lot of different structures that might bias or corrupt the data in the black-boxed process of data creation.

![](https://i.imgur.com/E0a4Elk.jpg)

3. Why do we need to conceptualize and understand data? I think an obvious argument here is that data is becoming an increasingly integrated element in our everyday lives, and in an increasingly hidden way.
This highlights why it is important to understand data, but not really why we should approach it through conceptualization. An argument on that point might be that the actual nature or structure of data is such a complex entity that we need to conceptualize it so that a broader section of the population can relate to it. This of course sets very high standards for the conceptualization, since it, in the same way as data itself, is very susceptible to becoming biased, political or simply imprecise.

### MX003

When working with the COVID-19 data, it became apparent that trying to gain an absolute overview of which data were being shown and which were being hidden would be an almost impossible task. In the same way, it is challenging to identify all the editorial choices that have been made in the visualization of data. It therefore seemed appropriate to choose one simple aspect and work on that. We chose to work on the chart of countries that was presented in both of the visualizations we were shown in class. This visualization reminds one of a scoreboard, as usually seen in competitions. One might argue that it pits the countries "against" one another in the pandemic, even though the word "pandemic" itself describes that we are all affected. So we wanted to create a visualization that communicated this instead. You can see it below.

![](https://i.imgur.com/4zjvO3R.jpg)

The idea is simply a three-dimensional space in which each country is represented by a ball. The size of the ball is determined by the country's number of infected citizens. The balls are then connected by lines, and the thickness of these lines indicates how many infected people have traveled between the two countries. But in the end this became an example of how challenging it is to visualize data, because our version became even more of a competition and even included an element of blaming other countries. This made me think that maybe this is why many use the simple visualizations: it is simply so easy to fall into these "traps", so you might as well just follow "best practice".

### MX004

For my cookie-related "piece" I have chosen a new type of cookie consent form that is very popular at the moment. You can see an example of it below.

![](https://i.imgur.com/cZqTM73.png)

The interesting thing about this type of consent form is that it has become popular in response to changes in the law regarding cookies. The law simply states that cookies are not allowed to be set prior to consent. I find this cookie consent form interesting because it is such an obvious attempt at obtaining consent while following the rules, by framing the question and categories in specific ways, much in the same way that the text describes how cookies and spam received different "framings". The form allows the site to categorize its cookies and even describe some of them as "necessary", and those are then actually set before consent. In this way a site is able to seem compliant while setting as many cookies as before. Likewise, some very classic UI tricks are being used: the "Allow all cookies" button is placed in the most clickable spot and has a colour that is very common for "call-to-action" buttons.

For my creative project I have found this obfuscation plugin: https://www.springwise.com/obfuscation-plugin-clicks-every-online-ad/

This plugin obfuscates your data by clicking every single ad that is displayed. This distorts the data that is collected about you and makes you a less attractive candidate for something like retargeting.
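The plugin's own source is not reproduced here, but as a hedged sketch, click-based obfuscation of this kind boils down to logic like the following (the ad selectors are invented examples):

```
// Hedged sketch of click-based obfuscation (not the plugin's actual source).
// A browser extension's content script could locate likely ad elements and
// fire synthetic clicks, polluting the click data that profiling relies on.
const AD_SELECTORS = ["iframe[id^='google_ads']", "a[href*='doubleclick']"]; // invented patterns

function clickAllAds() {
  for (const selector of AD_SELECTORS) {
    document.querySelectorAll(selector).forEach((ad) => {
      // Dispatch a click event without the user ever seeing or touching the ad
      ad.dispatchEvent(new MouseEvent("click", { bubbles: true }));
    });
  }
}

// Re-run whenever new ads are injected into the page
new MutationObserver(clickAllAds).observe(document.body, { childList: true, subtree: true });
```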
### MX005

When working with the cookie script, we quickly had an idea of what we wanted to do. Online advertisers such as YouTube value visitors from economically strong countries more highly than visitors from countries with lower economic power. So we simply wanted the cookie to save which continent you are visiting from, and then assign a value accordingly. Adding a continent input was not an issue, but having the cookie assign a value according to the continent was too challenging for us in the time available. Below you can find our script and try it out for yourself, even though it does not have the desired feature.

```
<!DOCTYPE html>
<!--
ref: https://www.youtube.com/watch?v=5ttpghXjG0g
example of cookie: _user=siusoon;expires=Thu, 10 Sep 2020 09:01:51 GMT;
-->
<html>
<head>
<script>
let myCookies = {};

function saveCookies() {
  // retrieve data from the form elements
  myCookies["_user"] = document.getElementById("user").value;
  myCookies["_uage"] = document.getElementById("age").value;
  myCookies["_ucon"] = document.getElementById("continent").value;
  /* our unfinished attempt at assigning a value by continent:
  if("_ucon" = "Afrika"){
    myCookies["_uval"] = document.getElementById("continent").value;
  }*/

  // get rid of existing cookie
  document.cookie = "";

  // set expiry time, 30 seconds from now
  let date = new Date();
  date.setTime(date.getTime() + ((30) * 1000));
  let expiry = "expires=" + date.toUTCString();

  // store each cookie: loop over each entry in myCookies (e.g. user, age, ...)
  // joining key, value and expiry with ';'
  let cookieString = "";
  for (let key in myCookies) {
    cookieString = key + "=" + myCookies[key] + ";" + expiry + ";";
    document.cookie = cookieString; // save each cookie
    console.log(cookieString);
  }
  // load the output with the latest cookies
  document.getElementById("out").innerHTML = document.cookie;
}

function loadCookies() {
  console.log(document.cookie);
  myCookies = {};
  let kv = document.cookie.split(";"); // split into key=value pairs
  for (let id in kv) {
    let cookie = kv[id].split("="); // split into key and actual value
    myCookies[cookie[0].trim()] = cookie[1]; // trim whitespace and assign the value
  }
  document.getElementById("user").value = myCookies["_user"];
  document.getElementById("age").value = myCookies["_uage"];
}
</script>
</head>
<body>
User: <input type="text" id="user">
<p>
Age: <input type="text" id="age">
Continent: <input type="text" id="continent">
<p>
<button onclick="saveCookies()">Save to Cookies</button>
<button onclick="loadCookies()">Load From Cookies</button>
<p id="out"></p>
</body>
</html>
```
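For reference, the feature we could not finish might have looked something like this (the continent weights are invented for illustration). Called from `saveCookies()` before the cookie loop, it would add the missing `_uval` cookie:

```
// A sketch of the unfinished feature: assigning a value cookie based on the
// visitor's continent. The weights below are invented for illustration.
const continentValues = { "Europe": 1.0, "North America": 1.2, "Africa": 0.4 };

function assignValueCookie() {
  const continent = document.getElementById("continent").value;
  // Fall back to a neutral value for continents not in the table
  myCookies["_uval"] = continentValues[continent] ?? 1.0;
}
```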
### MX006

The tactical qualities of Internet Art are defined like this by Galloway: "... I suggest here that computer crashes, technical glitches, corrupted code, and otherwise degraded aesthetics are the key to this disengagement. They are the 'tactical' qualities of Internet Art's deep-seated desire to become its own medium, for they are the moments when the medium itself shines through and becomes important." (Galloway, p. 224)

So how is this seen in the artworks mentioned in his text? Lisa Jevbratt's "Non-site Gallery" is a good example, in that it plays with the idea of 404 pages, which definitely resembles the tactical qualities, since it has qualities similar to a computer crash or a technical glitch.

> Is the artist using the critical making approach?

Trying to decode what kind of process an artist has gone through can be challenging, especially with absolutely no documentation. Even on Lisa's own site, the project is only described in about four lines of text. But since this is a computational project, and few programmers write an entire project without continuously trying to run and adjust it, we can assume that her process has had many iterations with room for reflection. At the same time, a key element of tactical media is to challenge something in mainstream culture, and this resembles the critical tradition of challenging the commonly accepted. Tactical media and critical making seem similar in several aspects. They both aim to challenge something. They are both centered around actually doing something, even though in critical making there is not as much focus on pushing the created artifact into the world. One might say that tactical media could be categorized as a subcategory of critical making, but I am not sure how precise this is.

Questions for class:
- How does something like tactical media fit into an academic context?
- Can tactical media be seen as a subcategory of critical making?

### MX007 - GROUP

Expanding on Fuller's text, where he describes internet art as "not just art": it can also be a functioning tool that does a task. In the case of the Demetricator, it is an actual add-on with an actual effect on the user interface. Web art is not-just-art in Fuller's understanding; it can only come into existence by being not just itself, and has to be used. Net art is not something static, but something that evolves through, for instance, the users' interaction with it.

What is the relationship between art and culture? As you can see in the drawing, we interpret the relationship between art and culture as a movement from having one understanding to encountering art, which can create a movement that gives us the opportunity to reflect upon culture in a new way. First you see the circle that is flat; art is not added. We don't go in other directions in relation to our reflections upon life, society and culture. When adding art we create a reflection that makes another movement, which then makes us reflect upon culture in a new way.

![](https://i.imgur.com/uwhkWJG.jpg)
![](https://i.imgur.com/x3tVfMl.jpg)

How can art enable us to see things, especially technological objects in the context of digital culture, differently? As described earlier, it forces us to reflect upon culture in another way. When we encounter art, whether we understand it or not, we have to reflect on and question what we already know. Technological objects are accessible to more people, and in the setting of the digital this enables us to reflect upon digital culture.

Can you give an artwork example to illustrate that? https://kunsten.nu/journal/dagens-netkunstner-ben-grosser/

The Demetricator is an example of an artwork that changes the digital culture we are already familiar with. By changing the layout of the known, it makes us reflect upon what we already know, and whether there are more layers to our understanding of the culture we already exist in. The Demetricator is an artwork in the sense Fuller presents, because it is something that can be used, something that is a tool. If no one interacts with it, one might say it doesn't have any existence, but when people use it, it can make us reflect; this relation and reflection is the artwork.

### MX008

What is the web now? Trying to describe the web of today is challenging, since it has become such an integrated element in our lives.
In addition, one now has to differentiate between the web as seen in a browser, through an application, or the middle ground that is a web app; this difference is becoming less and less distinct. An example is the development of Facebook's layout: recently an update was released that made the layout of Facebook in a browser very similar to how it looks in the mobile application. This move towards accessing the web through applications also means a move towards a web in which the user has almost no ability to customize what they are accessing, in opposition to the early web, where the user created the layout. Accessing the web almost exclusively through applications also makes the traditional homepage even more irrelevant, since there are applications made for that specific purpose as well. One could argue that this move towards application-based web access is motivated by the rise of mobile devices. But while this has made the "web" increasingly accessible, it has made working with or manipulating the "web" much more inaccessible.

### MX009

How the website looked:
![](https://i.imgur.com/Dcb9Nfe.jpg)

Our changes:
![](https://i.imgur.com/RZYIeYl.jpg)
![](https://i.imgur.com/lz2xcx2.png)

We have chosen to change the rhetoric of the Danish asylum application site. We changed the text of the existing site into a more critical perspective on what will happen when you apply, drawing inspiration from Danish asylum cases. Furthermore, we changed the layout by replacing the background picture with a picture of the reality some refugees come from. We haven't changed the colours of the layout, since we wanted it to look official and keep the focus on the content of the page: the text and images. We chose this site because we believe it gives false hopes about what will happen when you apply for asylum; in this way we create awareness about an already existing problem.

Is this a form of critical making? We learn something about the material while we are working; in this way we begin to reflect upon the functionality of the different elements, and how easily they can be changed. The choices of the developers thereby become very clear.

Is this internet art? If we had done it differently and, instead of changing the content, changed the layout into, for instance, something less understandable, it might have been a form of internet art in the way that it explores or works critically with the material of the internet. Our approach instead takes a critical stand toward politics; the content is more critical, and therefore it is not necessarily internet art.

### MX010

Both of the texts have a strong focus on metadata and how it affects the accessibility of whatever it refers to or describes. In the text "Accumulate, Aggregate, Destroy", the importance of metadata is described as a result of the challenges of machine vision and algorithmic analysis, while metadata in the case of ArtBase is more a question of how to make a database that is both compatible with other databases and at the same time has the flexibility that the content of the archive requires. I find the discrepancy between the common phrase "nothing disappears on the internet" and the need for digital archives interesting. It seems that when we don't want something to survive on the web, it thrives, and when we want something to survive, suddenly the links to it are dead and updated browsers make it inaccessible. This also points to why we need digital archives.
Traditionally, archives enable us to explore cultures and history, and with the speed at which digital culture develops, archiving becomes very important if we want to be able to explore or explain the early days of the internet, which some might argue we are still in.

### MX011

Together as a group: what are the issues and cultural phenomena that the archival techniques are addressing? What is an archive? What are they archiving? What are the potentials and limitations of these techniques? How do these techniques allow you to think about internet culture differently?

In the case of the Webrecorder, we found it interesting that it highlights how important interaction is on the internet. Just archiving a static image of a page would not be representative of the experience of visiting it. The Wayback Machine also keeps some of the interactivity, but lacks the ability to navigate the page through links, while a screenshot completely removes all interactivity. This lack of interactivity is really a symptom of the fact that most of the underlying code is not being saved. This made us think of how complex it would be to try to keep all interactivity, and therefore save a lot of data, while at the same time minimizing the size of the data.

### MX012

What is/are commons? Are you familiar with this concept/practice? Have you come across it before this session? If so, in what circumstances?

Sollfrank describes commons as "alternative modes of ownership and collective ways of dealing with ownership". In this way it becomes a third mode of ownership, apart from state or private ownership. In my experience, the closest thing I have encountered to this are communal gardens. Here each person has their own bit of soil to farm, but there is a strong sense of community.

Find an image that best visualises how you understand or imagine commons. Save that image on your computer and bring it to the session.

As my image I have simply chosen an image of a communal garden in New York. A place such as this is obviously gated in a sense, but for the community it serves, it functions as a kind of commons.

![](https://i.imgur.com/PTmRJcV.jpg)

Think of 3-5 words (nouns, verbs, adjectives) which in some way describe the values you associate with commons.

I think one of the most connected words would simply be "us". "Community" would be another. "Access" is also closely related to the topic.

> Brain-dumping

On the topic of FLOSS, the aspect of commerciality caught my attention the most. It was interesting how something that on the surface seemed incompatible with a capitalist system could actually be commercialized in many ways. Aymeric mentioned that this is one of the reasons why we should probably move back towards the older types of FLOSS. This made me wonder if this is just the way critical projects often go: they are created with non-commercial values, but then they are quickly made commercial through some trick or opaque technique. When this becomes obvious, the critics object, try to think of a new approach, and then the whole thing repeats. I think the methods of commercialization present in FLOSS have a strength in that they are often angled towards corporations. Free individual users might produce packages that the paying corporations can enjoy, but they are granted access to something freely.

### MX013 - GROUP

**What kind of free and open source software do you like? Why?**

We discussed the different kinds of open source software that we use.
For instance:
- WordPress
- Processing
- Modelling software, for instance for 3D printing

Processing, because it is an open platform that works towards programming literacy. Depending on your level of programming experience, you can contribute to the "core" of the programming environment, fix bugs, use the libraries for your own projects, or share your "beginner" code. It becomes a kind of common, with a community.

**If you had to choose one aspect of FLOSS to discuss, how would you approach it?**

We find the commercial aspect of FLOSS both interesting and confusing. Mansoux mentions that FLOSS is made in a capitalist society and is therefore not incompatible with such a society. However, this is a little confusing to us, since a core value of FLOSS is the freedom to edit and distribute software freely; so how does this play into capitalist thinking? If we were to investigate this aspect of FLOSS, we would make a comparative analysis of different FLOSS software: how they differ in the ways the software is copied, altered and reshared, and which commercial practices they implement.

------

### MX014 - GROUP

This is taken from the class discussion page, where it was originally written. A glossary:

**Data**
A defined entity that can consist of, for instance, a single number, but a data point can also consist of, for instance, an image in which more information is stored in binary form. Data can therefore take many forms and serve many purposes, both as input and output.

**Machine learning**
Machine learning is the practice of helping software perform a task without explicit programming or rules.

**Dataset**
Gathering a dataset is a curating practice: either setting the entry rules for what can be included, or manually curating which data to include. Malevé argues that "With more data come more variations", taking into account that the data is chosen in accordance with specific "rules". Depending on the reason for creating the dataset, it can consist of different entities, such as numbers, text, etc., and it is used to test, train and evaluate algorithms. "A dataset in computer vision is a curated set of digital photographs that developers use to test, train and evaluate the performance of their algorithms." (Malevé, 1)

-----

### MX015 - GROUP

DISCLAIMER: This discussion was held in a group in class, but we did not write anything down. Thus, this is me personally writing down some of the things we spoke about.

We were particularly interested in the limitations of the ML program. In particular, working with it made us think about the imagined affordances that we project onto the software: we implicitly expected it to know the difference between the right and the left hand, but of course this is not the case. This reflection also led us to think about how and what the algorithm actually picks up on: did it register movement (i.e. it doesn't matter whether you move your right or left hand, or your head, in the same spot and with the same motion), or was it able to distinguish certain features?

-----------

### MX016

**Short reflection and questions for inquiry**

The effects of "personalization" that Ridgway points out have been a point of interest for me for a while. This interest has been more in relation to news and the "filter bubbles" that she also mentions, and less in how search engines affect this. I cannot help but think that anyone who reads up on this subject would see the result of these algorithmic processes as an issue, so why is nothing being done about it?
Could it be that the exponential strength of algorithms is simply valued so highly that we cannot really challenge or limit it? If we had to imagine a digital culture in which algorithms did not create these issues, how would that culture be different? How could we create some kind of supervision when it comes to algorithms in digital culture?

### MX017 - GROUP

**Feedback for group 4**

We do understand why you compare commons and open source software, though we believe that your assignment would benefit from a clear distinction between the two. The two concepts flow into each other as one unit in the way you use them, both in the podcast and in the written part of your assignment. Overall, it would have been nice if you had defined the terms you use, both the important theoretical ones but also concepts such as ownership, because in this context ownership can mean quite different things. You seem to have a nice research question: "It is then not only relevant to look at defining whether or not Wikipedia can be viewed as a commons, but also what ethical questions can be raised in relation to how the commons is operating." But it seems that you don't really follow up on it as much as we as listeners or readers would like. Most parts of the podcast seem mainly explanatory/analytic. It would have been nice if you had included reflections on a higher taxonomic level, or followed up on the research question: for instance, how the users decide how and with what to contribute, and how this might interfere or align with what the Wikipedia foundation wants.

-------

### MX018 - Synopsis draft

**Exploring filter bubbles**

Before the days of the internet, everyone who read the same newspaper or saw the same billboards would also see the same content. Today every one of us, at least if we participate in almost any kind of digital culture, gets our own curation of content. This leads to us being shown only the content that makes us consume more content. Every individual online in a sense experiences digital culture in their own filter bubble (Gillespie 2014, 188), as the phenomenon has been termed, in which "… we find only the news we expect and the political perspectives we already hold dear" (Ibid.). But how did this shift in content curation happen, and how does it affect digital culture? This paper will explore this in order to investigate another question: how can we gain a better understanding of the effects of filter bubbles on digital culture by trying to "enter" each other's filter bubbles?

**Methodology**

In this paper I will explore the causes behind this shift, how it has affected digital culture, and how we might approach avoiding the negative effects of filter bubbles. The answers to these questions will be sought through both literature and a critical making process. The practical critical making process will aim to produce a product that can be distributed publicly and whose goal is to make its users reflect on filter bubbles and how they affect them and their lives. This seems a valuable goal, since it has been stated that "… the filter bubble doesn't just affect how we process news. It can also affect how we think" (Pariser 2011, 76). With this goal of creating reflection in the users of the product, the methodology of this project could be described as Reflective Design, which has exactly this aim (Ratto & Hertz 2019, 24).
While this is the goal of the end product, there is also the aim of gaining a better understanding of the technical structure of the personalization of the web and the values and design choices embedded in it. This aim is very similar to the methodology of Critical Technical Practice, in which the aim is to uncover the values embedded in technical disciplines (Ibid.).

**The rise of the filter bubble**

So how did the phenomenon of the filter bubble come to be so prevalent in our digital culture? While filter bubbles are about much more than the curation of news-related content, approaching the subject from this angle shows the causes of the rise of the filter bubble efficiently. Before the internet, what news-related content the public was exposed to was determined by editors at the few large media corporations or newspapers (Pariser 2011, 49). These large corporations were able to keep this power, since producing and distributing content was expensive and only "… those who could buy ink by the barrel could reach an audience of millions…" (Ibid., 51). This changed with the rise of the internet, when anyone with access to it gained similar capabilities. The cost of producing content plummeted, resulting in larger and larger amounts of content. With more content to consume, it became even more important to choose what to consume, which meant that curation became essential. The editors at the large media corporations were very expensive, and therefore software-based curators became prevalent (Ibid.). While these algorithmically based curators are much more efficient, they lack the embedded ethics and responsibility towards the public that, some argue, the editors at the large media corporations had (Ibid., 59).

Alongside news-related content, advertising had a similar journey. Advertisers went from having to rely on newspapers or TV channels to distribute their content, choosing these based on the segments of the public they claimed to be the gatekeepers to. This changed with the internet, where advertisers were suddenly able to reach their target audience across the web. The focus was no longer on a specific newspaper or TV channel, but instead on the specific user (Pariser 2011, 50). To summarize a long, gradual development: the internet caused a massive increase in content production that could not be matched by human editors, and therefore popularized algorithmic editors, which were not just able to handle the new amount of content but could also personalize its curation instead of curating towards a large segment. This personalization of content curation in turn caused the same development in advertising, where the target was no longer large, opaque audiences but specific users.

**The cookie**

But how do these algorithmically based editors personalize content? The first step towards this was introduced in 1994 and has received the name "cookies" (Carmi 2017, 5). The purpose of these cookies was to make "… shopping online easier." (Ibid.). Before this, each visit to a website would be treated as a new visit. This was troublesome for the business of web shopping, since it meant that a user's shopping cart would be empty when they returned.

Meta-text: After this I will continue to describe how the introduction of the cookie and its later developments have enabled personalization.
I will also include Shoshana Zuboff's The Age of Surveillance Capitalism to describe the issues of filter bubbles and personalization. Then I will continue by exploring how others have tried to escape the effects of filter bubbles.

**Attempts to escape the filter bubble**

This paper is not the first to attempt to find some solution to the filter bubble. An example is Renée Ridgway, who explored the effects of personalization on Google search results by subjecting one computer to personalization while attempting to keep another free of it (Ridgway 2017, 378). While her project is very interesting, and her approach of avoiding personalization by using the Tor browser has its merits, she points out that while she escapes the constant filtering of Google, using Tor also affects what you see and what you do not see, and in a sense becomes just another filter bubble (Ibid., 393). Aiming for anonymity therefore seems to produce yet another filter bubble, so this will not be the approach of my critical making practice. Another approach could be obfuscation (Lacking source), in which, instead of trying to become anonymous, you make a mess of the data collection that is being done around you, so that the personalization becomes imprecise. An example of this is the obfuscation plugin featured on Springwise, which simply clicks all ads presented to you. This approach seems able to challenge the construction of the filter bubble around the user, but in my critical practice I have chosen another approach. Instead of trying to avoid or obfuscate filter bubbles, I will try to see how we can experience the filter bubbles of others. How can I try to experience the internet in the way my mother does? Or in the way a 10-year-old does?

**Critical Making**

At this time, the idea that I will try to construct is a "Cookie portfolio" where you can click a persona, for example a 60-year-old woman, and have her cookies saved in your browser. In that way the user can experience the filter bubble through which this woman experiences digital culture.
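As a first, hedged sketch of how the Cookie portfolio could work (all persona data and cookie names are invented):

```
// First sketch of the "Cookie portfolio" idea; all persona data is invented.
// Clicking a persona writes that persona's cookies into the visitor's browser,
// so subsequent browsing is personalized as if they were that person.
const personas = {
  "60-year-old woman": { _interests: "gardening,news", _agegroup: "60-69" },
  "10-year-old child": { _interests: "gaming,cartoons", _agegroup: "10-14" },
};

function adoptPersona(name) {
  const expiry = new Date(Date.now() + 30 * 24 * 60 * 60 * 1000).toUTCString(); // 30 days
  for (const [key, value] of Object.entries(personas[name])) {
    document.cookie = key + "=" + value + ";expires=" + expiry + ";path=/";
  }
}

adoptPersona("60-year-old woman");
```

In practice, the tracking cookies that actually build a filter bubble are set by third parties on their own domains and cannot simply be copied like this; surfacing exactly that kind of constraint is part of what the critical making process is meant to do.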