---
tags: Old Version
---
:::info
Ch.4: Data capture (https://gitlab.com/aesthetic-programming/book/-/tree/master/source/4-DataCapture)
:::
# Ch.4: 資料擷取(Data Capture)

## setup()
設置()
本章重點介紹程式如何採集和處理輸入數據。我們已經通過變數 mouseX 和 mouseY(參見第 2 章「可變幾何」)介紹了與物理設備的交互,以及通過 mouseIsPressed() 和 windowResized() 監聽事件的想法(參見第 3 章「無限循環」)。在本章中,我們將擴展這些想法並介紹不同類型的數據採集,包括鼠標移動、鍵盤按下、音效音量和使用網絡攝像頭進行影像/面部跟踪。
This chapter focuses on how a program captures and processes input data. We have already introduced interactivity with physical devices via the variables mouseX and mouseY (see Chapter 2, “Variable geometry”), as well as the idea of listening for events via mouseIsPressed() and windowResized() (see Chapter 3, “Infinite loops”). In this chapter we expand on these ideas and present different types of data capture, including mouse movement, keyboard presses, audio volume, and video/face tracking with a web camera.
將本章置於「數據採集」之下,讓我們能夠從直接交互轉向質疑正在採集哪些類型的數據以及如何處理這些數據,1 以及這種更廣泛的文化趨勢(通常稱為「數據化」)的後果。2 這個術語——數據(data)和商品化(commodification)的合成詞——指的是我們生活的方方面面似乎都被轉化為數據的方式,數據隨後被轉化為信息,然後貨幣化(如 Kenneth Cukier 和 Viktor Mayer-Schönberger 在他們的文章〈The Rise of Big Data〉中所述)。3 我們的數據,「人類行為的模式」,是在 Shoshana Zuboff 所謂的「監視資本主義」的邏輯中被提取和流通的,4 這表明需要為計算目的收集大量各種數據,例如預測分析(例如,您喜歡這本書,所以我們認為您可能也喜歡這些書)。
Framing this chapter under “Data capture” allows us to move from immediate interactions to questioning which kinds of data are being captured and how they are being processed,1 as well as the consequences of this broader cultural tendency often called “datafication.”2 This term, a contraction of data and commodification, refers to the ways in which all aspects of our lives seem to be turned into data, which is subsequently transformed into information and then monetized (as described by Kenneth Cukier and Viktor Mayer-Schönberger in their article “The Rise of Big Data”).3 Our data, “the patterns of human behaviors,” is extracted and circulated within the logic of what Shoshana Zuboff calls “surveillance capitalism,”4 demonstrating the need for large quantities of all manner of data to be harvested for computational purposes, such as predictive analytics (e.g. you like this book so we think you might like these books too).
我們將在第 10 章「機器學習」中回到其中的一些問題,但就目前而言,在大數據時代,似乎所有數據都需要被採集,即使來自最平凡的動作,例如按下按鈕。本章從一個相對簡單的操作開始,比如打開或關閉設備——燈、廚房用具等。此外,按鈕是「誘人的」,5 具有即時反饋和即時滿足感。它迫使你按下它。類似地,在 Facebook 等軟體和在線平台中,按鈕需要交互,邀請用戶點擊,並以二元狀態與它交互:喜歡或不喜歡,接受或取消。功能很簡單——打開或關閉——並給人以有意義的交互的印象,儘管提供的選擇非常有限(就像大多數交互系統一樣)。事實上,這種二元選擇可能被認為比互動更「被動」,比如接受 Facebook 等社交媒體平台的條款和條件而不費心閱讀細節,或者「喜歡」某物做為一種註冊你的參與的方式,無論多麼膚淺或轉瞬即逝。數據採集的許可由此被授予,因此我們的友誼、想法和經歷都被「數據化」了。在使用表情符號時,甚至我們的情緒狀態也會受到監控(在第 2 章「可變幾何」中討論)。
We will return to some of these issues in Chapter 10, “Machine unlearning,” but suffice to say, for now, that in the era of big data there appears to be a need to capture data on everything, even the most mundane actions like button pressing. This chapter begins with a relatively simple action like switching a device on or off: a light, a kitchen appliance, and so on. Moreover, a button is “seductive,”5 with its immediate feedback and instantaneous gratification. It compels you to press it. Similarly, in software and online platforms like Facebook, a button calls for interaction, inviting the user to click and interact with it in binary states: like or not-like, accept or cancel. The functionality is simple, on or off, and gives the impression of meaningful interaction despite the very limited choices on offer (as with most interactive systems). Indeed, this binary option might be considered more “interpassive” than interactive, like accepting the terms and conditions of a social media platform like Facebook without bothering to read the details, or “liking” something as a way of registering your engagement, however superficial or fleeting. Permission for data capture is thereby granted, and as such our friendships, thoughts, and experiences all become “datafied.” Even our emotional states are monitored when it comes to the use of emoticons (discussed in Chapter 2, “Variable geometry”).
考慮到這些想法,下一節將介紹可自定義的「讚」按鈕的示例程式碼,以展示簡單交互(例如按下按鈕)的潛力。我們將考慮按鈕的特殊性和可供性,以及點讚按鈕如何成為「社交按鈕」,從而在 Carolin Gerlitz 和 Anne Helmond 所說的「點讚經濟」中創造經濟價值。6 與前幾章一樣,我們將使用按鈕做為起點來處理各種類型的採集。隨後,我們將反思更廣泛的影響。
With these ideas in mind, the next section will introduce the sample code for a customizable “Like” button in order to demonstrate the potential of simple interactions such as pressing a button. We consider the specificities and affordances of buttons, as well as how the Like button becomes a “social button,” thus creating economic value in what Carolin Gerlitz and Anne Helmond call “the Like economy.”6 As in previous chapters, we will work through the various types of capture using buttons as a starting point. Subsequently, we will reflect on the wider implications.
## start()
開始()

*圖 4.1:示例程式碼的 Web 界面和交互
Figure 4.1: The web interface and interaction of the sample code*
RunMe, https://aesthetic-programming.gitlab.io/book/p5_SampleCode/ch4_DataCapture/
從這個示例程式碼開始,草圖為可定制的「Like」按鈕合併了四個數據輸入:
Starting with this sample code, the sketch incorporates four data inputs for a customizable “like” button:
1. 可以使用鼠標單擊按鈕,按鈕的顏色隨之改變。
2. 當鼠標離開按鈕區域時,按鈕的顏色會恢復。
3. 按下鍵盤的空格鍵時,按鈕將旋轉 180 度。
4. 按鈕將根據音效/麥克風輸入的音量改變其大小。
5. 按鈕會隨著面部識別軟體的輸入而移動,跟隨它認為是嘴巴的動作。
1. The button can be clicked with the mouse, which changes the button’s color.
2. The button’s color reverts when the mouse moves away from the button area.
3. The button rotates 180 degrees when you press the spacebar.
4. The button changes its size according to the volume of the audio/mic input.
5. The button moves in line with input from the facial recognition software, following the movement of what it considers to be the mouth.
該按鈕已使用級聯樣式表 (CSS) 進行自定義,該樣式表以由選擇器和宣告塊組成的格式描述對象的樣式和視覺元素。7 這些標識您要自定義哪些元素以及究竟如何自定義。CSS 與 HTML 一起工作,我們可以使用 p5.js 庫創建 HTML 的 DOM 對象,如按鈕(將在下一節中進一步詳細解釋)。
The button has been customized using Cascading Style Sheets (CSS), which describe the style and visual elements of an object in a format that consists of a selector and a declaration block.7 These identify which elements you want to customize and how to do it precisely. CSS works with HTML and we can create HTML’s DOM objects like a button with the p5.js library (which will be explained in further detail in the following section).
## Exercise in class (Decode) 課堂練習(解碼)
通過仔細查看 RunMe 中的「Like」按鈕,您能否列出示例程式碼中引入的樣式自定義列表?
By looking closely at the Like button in the RunMe, can you come up with a list of the stylistic customizations that have been introduced in the sample code?
然後查看下一節(第 23-49 行)中的原始碼,並用您自己的話描述按鈕的一些樣式。
Then look at the source code in the next section (Lines 23-49) and describe some of the button’s styling in your own words.
## Source code 原始碼
```javascript
/*Interacting with captured data: Mouse, Keyboard, Audio, Web Camera
check:
1. sound input via microphone: https://p5js.org/examples/sound-mic-input.html
2. dom objects like button
3. p5.sound library:
https://github.com/processing/p5.js-sound/blob/master/lib/p5.sound.js
4. Face tracking library: https://github.com/auduno/clmtrackr
5. p5js + clmtracker.js: https://gist.github.com/lmccart/2273a047874939ad8ad1
*/
let button;
let mic;
let ctracker;
let capture;
function setup() {
  createCanvas(640, 480);
  //web cam capture
  capture = createCapture(VIDEO);
  capture.size(640, 480);
  capture.hide();
  // Audio capture
  mic = new p5.AudioIn();
  mic.start();
  //setup face tracker
  ctracker = new clm.tracker();
  ctracker.init(pModel);
  ctracker.start(capture.elt);
  //styling the like button with CSS
  button = createButton('like');
  button.style("display", "inline-block");
  button.style("color", "#fff");
  button.style("padding", "5px 8px");
  button.style("text-decoration", "none");
  button.style("font-size", "0.9em");
  button.style("font-weight", "normal");
  button.style("border-radius", "3px");
  button.style("border", "none");
  button.style("text-shadow", "0 -1px 0 rgba(0, 0, 0, .2)");
  button.style("background", "#4c69ba");
  button.style(
    "background", "-moz-linear-gradient(top, #4c69ba 0%, #3b55a0 100%)");
  button.style(
    "background", "-webkit-gradient(linear, left top, left bottom, \
    color-stop(0%, #3b55a0))");
  button.style(
    "background", "-webkit-linear-gradient(top, #4c69ba 0%, #3b55a0 100%)");
  button.style(
    "background", "-o-linear-gradient(top, #4c69ba 0%, #3b55a0 100%)");
  button.style(
    "background", "-ms-linear-gradient(top, #4c69ba 0%, #3b55a0 100%)");
  button.style(
    "background", "linear-gradient(to bottom, #4c69ba 0%, #3b55a0 100%)");
  button.style(
    "filter", "progid:DXImageTransform.Microsoft.gradient \
    ( startColorstr='#4c69ba', endColorstr='#3b55a0', GradientType=0 )");
  //mouse capture
  button.mouseOut(revertStyle);
  button.mousePressed(change);
}
function draw() {
  //getting the audio data, i.e. the overall volume (between 0 and 1.0)
  let vol = mic.getLevel();
  /*map the mic vol to the size of the button,
  check map function: https://p5js.org/reference/#/p5/map */
  button.size(floor(map(vol, 0, 1, 40, 450)));
  //draw the captured video on a screen with the image filter
  image(capture, 0, 0, 640, 480);
  filter(INVERT);
  let positions = ctracker.getCurrentPosition();
  //check the availability of web cam tracking
  if (positions.length) {
    //point 60 is the mouth area
    button.position(positions[60][0]-20, positions[60][1]);
    /*loop through all major points of a face
    (see: https://www.auduno.com/clmtrackr/docs/reference.html)*/
    for (let i = 0; i < positions.length; i++) {
      noStroke();
      //color with alpha value
      fill(map(positions[i][0], 0, width, 100, 255), 0, 0, 120);
      //draw ellipse at each position point
      ellipse(positions[i][0], positions[i][1], 5, 5);
    }
  }
}
function change() {
  button.style("background", "#2d3f74");
  userStartAudio();
}
function revertStyle() {
  button.style("background", "#4c69ba");
}
//keyboard capture
function keyPressed() {
  //spacebar - check here: http://keycode.info/
  if (keyCode === 32) {
    button.style("transform", "rotate(180deg)");
  } else { //for other keycode
    button.style("transform", "rotate(0deg)");
  }
}
```
## DOM elements: creating and styling a button DOM 元素:創建和樣式化按鈕
「DOM」代表文件對象模型(Document Object Model),一種類似 HTML 的文檔,具有樹狀結構,允許程式動態訪問和更新內容、結構和樣式。我們不會關注各種樹狀結構,而是關注屬於 DOM 一部分的表單元素。這些表單元素包括按鈕、單選按鈕、複選框、文本輸入等,這些都是在線填寫表單時經常遇到的。創建表單元素的基本結構相對簡單。DOM 下的 p5.js 參考指南 8 列出了表單創建語法的各種示例,例如 createCheckbox()、createSlider()、createRadio()、createSelect()、createFileInput() 等等。我們用來創建按鈕的函式叫做 createButton()。
“DOM” stands for Document Object Model, a document like HTML with a tree structure that allows programs to dynamically access and update content, structure, and style. Rather than focusing on the various tree structures, we will focus on elements from forms that are part of the DOM. These form elements include buttons, radio buttons, checkboxes, text input, etc., and these are usually encountered when filling in forms online. The basic structure for creating form elements is relatively simple. The p5.js reference guide, under the DOM,8 lists various examples of form creation syntax, e.g. createCheckbox(), createSlider(), createRadio(), createSelect(), createFileInput(), and so on. The one that we need to create a button is called createButton().
首先,您需要為按鈕指定一個對象名稱,如果您使用多個按鈕,則需要想出多個不同的名稱,以便為每個按鈕設置屬性9。
First you need to assign an object name to the button, and if you use multiple buttons, you will need to come up with multiple different names so you can set the properties9 for each one.
- let button;:第一步是通過分配名稱來宣告對象。
- let button;: First step is to declare the object by assigning a name.
- button = createButton('like');:創建一個按鈕並指定要顯示的文本。
- button = createButton('like');: Create a button and specify the text to be displayed.
- button.style("xxx","xxxx");:這是 CSS 標準,其中第一個參數是選擇器(selector),第二個是宣告塊/屬性。例如,如果要設置字體顏色,則可以分別輸入 "color" 和 "#fff"。10 對於此示例程式碼,所有樣式都是通過查看 2015 年 Facebook 界面的 CSS 原始碼直接複製而來。樣式包括 display、color、padding、text-decoration、font-size、font-weight、border-radius、border、text-shadow、background 和 filter,並添加了 transform。
- button.style("xxx","xxxx");: This is the CSS standard, where the first parameter is a selector and the second is a declaration block/attribute. For example, if you want to set the font color, then you can put in “color” and “#fff” respectively.10 For this sample code, all the styling was copied directly from the 2015 Facebook interface by looking at its CSS source code. Styling includes display, color, padding, text-decoration, font-size, font-weight, border-radius, border, text-shadow, background and filter, with the addition of transform.
- button.size();:設置按鈕的寬度和高度。
- button.size();: This sets the button’s width and height.
- button.position();:設置按鈕的位置。
- button.position();: This sets the button’s position.
- button.mousePressed(change);:這會改變按鈕的顏色,並讓用戶在按下鼠標時使用自定義函式 change() 控制啟動音效(更多內容請參見「音效採集」部分)。
- button.mousePressed(change);: This changes the button’s color, and gives users control over starting audio with the customized function change() when the mouse is pressed (more to follow in the “Audio capture” section).
- button.mouseOut(revertStyle);:當鼠標移離按鈕元素時,這將使用自定義函式 revertStyle() 恢復原始按鈕的顏色。
- button.mouseOut(revertStyle);: This reverts to the button’s original color with the customized function revertStyle() when the mouse moves off the button element.
## Mouse capture 鼠標捕捉
在上一章中,程式監聽鼠標移動,並使用內建語法 mouseX 和 mouseY 採集相應的 x 和 y 坐標。此示例程式碼包含特定的鼠標監聽事件,例如 mouseOut() 和 mousePressed() 函式,分別在鼠標移出元素和按下鼠標按鈕時被調用。請參閱以下程式碼摘錄:
In the previous chapter, the program listened for mouse movement and captured the corresponding x and y coordinates using the built-in syntaxes mouseX and mouseY. This sample code incorporates specific mouse listening events, such as the mouseOut() and mousePressed() functions, which are called when the mouse moves off an element and when a mouse button is pressed, respectively. See the excerpt from the code below:
```javascript
//mouse capture
button.mouseOut(revertStyle);
button.mousePressed(change);
function change() {
  button.style("background", "#2d3f74");
  userStartAudio();
}
function revertStyle() {
  button.style("background", "#4c69ba");
}
```
函式 mousePressed() 和 mouseOut() 鏈接到要觸發操作的按鈕。 還有其他與鼠標相關的mouseEvents,11 如mouseClicked()、mouseReleased()、doubleClicked()、mouseMoved() 等。
The functions mousePressed() and mouseOut() are linked to the button you want to trigger actions. There are other mouse-related mouseEvents,11 such as mouseClicked(), mouseReleased(), doubleClicked(), mouseMoved(), and so on.
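Each listener above registers a callback that p5.js then invokes on our behalf; what the two callbacks do here reduces to choosing a background color based on the last mouse event received. That logic can be factored into a pure function and inspected on its own (backgroundFor and the event strings below are our own hypothetical names, not p5.js API):

```javascript
// Hypothetical reduction of the two callbacks to a pure function:
// the button's background depends on the last mouse event received.
const PRESSED_BG = "#2d3f74"; // color set by change()
const DEFAULT_BG = "#4c69ba"; // color restored by revertStyle()

function backgroundFor(lastEvent) {
  return lastEvent === "mousePressed" ? PRESSED_BG : DEFAULT_BG;
}

console.log(backgroundFor("mousePressed")); // "#2d3f74"
console.log(backgroundFor("mouseOut"));     // "#4c69ba"
```

Separating the decision (which color) from the side effect (button.style) is a useful habit: the decision can then be tested without a browser or a mouse.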
## Keyboard capture 鍵盤捕捉
```javascript
function keyPressed() {
  //spacebar - check here: http://keycode.info/
  if (keyCode === 32) {
    button.style("transform", "rotate(180deg)");
  } else { //for other keycode
    button.style("transform", "rotate(0deg)");
  }
}
```
`keyPressed()` 函式用於監聽任何鍵盤按下事件。如果要指定某個 `keyCode`(即鍵盤上的實際鍵),示例程式碼顯示了如何在 `keyPressed()` 函式中實現條件語句。
The keyPressed() function listens for any keyboard press event. If you want to specify a particular keyCode (that is, an actual key on the keyboard), the sample code shows how a conditional statement can be implemented within the keyPressed() function.
這個「條件結構」與你在前一章中學到的類似,但增加了「if-else」語句。其邏輯是:如果按下鍵盤上的空格鍵,按鈕旋轉 180 度;如果按下鍵盤上的任何其他鍵,按鈕恢復到原來的 0 度狀態。因此,「if-else」結構允許您為監聽事件設置進一步的條件:如果檢測到空格鍵以外的 `keyCode`,程式將執行其他操作。
This “conditional structure” is similar to what you learned in the previous chapter, but with the addition of the “if-else” statement. It reads as follows: if the spacebar is pressed, then the button rotates 180 degrees; if any other key is pressed, then the button reverts to its original state of 0 degrees. The “if-else” structure therefore allows you to set up a further condition for the listening event: if a keyCode other than the spacebar is detected, the program will do something else.
`keyCode` 接受數字或特殊鍵,如 BACKSPACE、DELETE、ENTER、RETURN、TAB、ESCAPE、SHIFT、CONTROL、OPTION、ALT、UP_ARROW、DOWN_ARROW、LEFT_ARROW、RIGHT_ARROW。在上面的例子中,空格鍵的 `keyCode` 是 32(見第 3 行)。
keyCode takes in numbers or special keys like BACKSPACE, DELETE, ENTER, RETURN, TAB, ESCAPE, SHIFT, CONTROL, OPTION, ALT, UP_ARROW, DOWN_ARROW, LEFT_ARROW, RIGHT_ARROW. In the above example, the keyCode for a spacebar is 32 (see Line 3).
大寫和小寫字母之間的 `keyCode` 沒有區別,即“A”和“a”都是 65。
There is no difference in keyCode between capital and lower case letters, i.e. “A” and “a” are both 65.
與 `mouseEvents` 類似,還有許多其他的鍵盤事件,12 如 keyReleased()、keyTyped()、keyIsDown()。
Similar to mouseEvents, there are also many other keyboardEvents,12 such as keyReleased(), keyTyped(), keyIsDown().
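The branch on keyCode can likewise be written as a small pure function, which also makes the case-insensitivity noted above easy to check (rotationFor is a hypothetical helper name, not part of the sample code):

```javascript
// Hypothetical helper: derive the CSS transform value from a keyCode
function rotationFor(keyCode) {
  const SPACEBAR = 32;
  return keyCode === SPACEBAR ? "rotate(180deg)" : "rotate(0deg)";
}

console.log(rotationFor(32)); // "rotate(180deg)", the spacebar
console.log(rotationFor(65)); // "rotate(0deg)", and 65 covers both "A" and "a"
```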
## Audio capture 音效捕捉
```javascript
let mic;
function setup() {
  button.mousePressed(change);
  // Audio capture
  mic = new p5.AudioIn();
  mic.start();
}
function draw() {
  //getting the audio data, i.e. the overall volume (between 0 and 1.0)
  let vol = mic.getLevel();
  /*map the mic vol to the size of the button,
  check map function: https://p5js.org/reference/#/p5/map */
  button.size(floor(map(vol, 0, 1, 40, 450)));
}
function change() {
  userStartAudio();
}
```
示例程式碼中使用了基本的網絡音效 p5.sound 庫。 它包括音效輸入、聲音文件播放、音效分析和合成等功能。 13
The basic web audio p5.sound library is used in the sample code. It includes features like audio input, sound file playback, audio analysis, and synthesis.13
該庫應包含在 HTML 文件中(如第 1 章“入門”所示),以便我們可以使用相應的函式,例如 p5.AudioIn() 和 getLevel()。
像按鈕一樣,您首先宣告對象,例如 let mic;(參見第 1 行),然後設置輸入源(通常是計算機麥克風)並開始監聽音效輸入(參見 setup() 中的第 6-7 行)。執行整個示例程式碼時,瀏覽器會彈出一個螢幕,要求獲得訪問音效來源的權限。此音效採集僅在授予訪問權限時才有效。
The library should be included in the HTML file (as demonstrated in Chapter 1, “Getting started”) so we can use the corresponding functions such as p5.AudioIn() and getLevel().
Like a button, you first declare the object, e.g. let mic; (see Line 1), and then set up the input source (usually a computer microphone) and start to listen to the audio input (see Lines 6-7 within setup()). When the entire sample code is executed, a popup screen from the browser will ask for permission to access the audio source. This audio capture only works if access is granted.

*圖 4.2:音效使用權限
Figure 4.2: Permission for audio access*
*圖 4.3:相機使用權限
Figure 4.3: Permission for camera access*
示例程式碼引用了 p5.sound 庫中 p5.AudioIn() 下的方法:getLevel() 方法讀取輸入源的振幅(音量級別),返回 0.0 到 1.0 之間的值。
The sample code refers to methods under p5.AudioIn() in the p5.sound library: the getLevel() method reads the amplitude (volume level) of the input source, returning values between 0.0 and 1.0.
這裡引入了一個新函式 map()(在第 15 行),用於將一個數字從一個範圍映射到另一個範圍。由於返回的音量值在 0.0 到 1.0 的範圍內,直接使用不會對按鈕的大小產生顯著差異。因此,音效輸入的範圍被動態映射到按鈕的大小範圍。
A new function, map() (in Line 15), is introduced here to re-map a number from one range to another. Since the volume values returned fall within the range 0.0 to 1.0, using them directly would make no significant difference to the size of the button. The range of the audio input is therefore mapped dynamically onto the button’s size range.
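What map(vol, 0, 1, 40, 450) computes is plain linear interpolation between the two ranges. A standalone sketch of that arithmetic (remap is our own stand-in name for the p5.js function):

```javascript
// Linear re-mapping, the arithmetic behind p5.js map(value, start1, stop1, start2, stop2)
function remap(value, start1, stop1, start2, stop2) {
  return start2 + (stop2 - start2) * ((value - start1) / (stop1 - start1));
}

// mic.getLevel() returns a volume between 0.0 and 1.0;
// the sketch maps it onto a button size between 40 and 450 pixels:
console.log(remap(0.0, 0, 1, 40, 450)); // 40, silence gives the smallest button
console.log(remap(0.5, 0, 1, 40, 450)); // 245
console.log(remap(1.0, 0, 1, 40, 450)); // 450, full volume gives the largest
```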
函式 userStartAudio()(參見第 19 行)使程式能夠在用戶交互事件(在本例中是 mousePressed() 事件)中開始採集麥克風輸入。這是許多 Web 瀏覽器(包括 Chrome)強制執行的做法:它讓用戶知道背景中發生的音效事件,並防止網頁瀏覽器的自動播放或自動採集行為。
The function userStartAudio() (see Line 19) enables the program to capture the mic input on a user interaction event, in this case the mousePressed() event. This is a practice enforced by many web browsers, including Chrome: it ensures users are aware of audio events happening in the background, and prevents autoplay or automatic capture by the web browser.
## Video/Face capture 影像/面部捕捉
```javascript
let ctracker;
let capture;
function setup() {
  createCanvas(640, 480);
  //web cam capture
  capture = createCapture(VIDEO);
  capture.size(640, 480);
  capture.hide();
  //setup face tracker
  ctracker = new clm.tracker();
  ctracker.init(pModel);
  ctracker.start(capture.elt);
}
function draw() {
  //draw the captured video on a screen with the image filter
  image(capture, 0, 0, 640, 480);
  filter(INVERT);
  let positions = ctracker.getCurrentPosition();
  //check the availability of web cam tracking
  if (positions.length) {
    //point 60 is the mouth area
    button.position(positions[60][0]-20, positions[60][1]);
    /*loop through all major points of a face
    (see: https://www.auduno.com/clmtrackr/docs/reference.html)*/
    for (let i = 0; i < positions.length; i++) {
      noStroke();
      //color with alpha value
      fill(map(positions[i][0], 0, width, 100, 255), 0, 0, 120);
      //draw ellipse at each position point
      ellipse(positions[i][0], positions[i][1], 5, 5);
    }
  }
}
```
對於特定的影像/人臉捕捉,示例程式碼使用 clmtrackr,這是數據科學家 Audun M. Øygard 於 2014 年開發的 JavaScript 庫,用於將人臉模型與圖像或影像中的人臉對齊。14 基於 Jason Saragih 和 Simon Lucey 設計的人臉算法,15 該庫基於預先訓練的面部圖像機器視覺模型對人臉進行實時分析,將其標記為 71 個點以進行分類。(見圖 4.5)由於它是一個 JavaScript 庫,您需要將該庫放在工作目錄中,並在 HTML 文件中鏈接該庫和人臉模型。(見圖 4.4)
For the specific video/face capture, the sample code uses clmtrackr, a JavaScript library developed by data scientist Audun M. Øygard in 2014 for aligning a facial model with faces in images or video.14 Based on facial algorithms designed by Jason Saragih and Simon Lucey,15 the library analyzes a face in real time, marking it with 71 points based on a machine vision model pre-trained on facial images for classification (see Figure 4.5). Since it is a JavaScript library, you need to put the library in the working directory, and link both the library and the face model in the HTML file (see Figure 4.4).

圖 4.4:導入新庫和模型的 HTML 文件結構
Figure 4.4: The HTML file structure to import the new library and models

圖 4.5:跟踪器點在臉上。感謝 clmtrackr 的創建者 Audun M. Øygard
Figure 4.5: The tracker points on a face. Courtesy of the clmtrackr’s creator, Audun M. Øygard
該程式通過影像捕捉使用網絡攝像頭進行面部識別,細節如下:
The program uses the webcam, via video capture, to perform facial recognition, with the details as follows:
1. `let ctracker;` 和 `let capture;`:初始化用於人臉跟踪和影像捕捉的兩個變數。
let ctracker; & let capture;: Initialize the two variables that are used for face tracking and video capture.
2. 第 7 行中的 `createCapture(VIDEO)`:這是一個 HTML5 `<video>` 元素(DOM 的一部分),用於採集來自網絡攝像頭的訊號。關於此功能,您可以定義螢幕截取的大小(取決於網絡攝像頭的分辨率)和在螢幕上的位置,例如 `capture.size(640, 480);`。我們還使用 `capture.hide();` 隱藏影像來源,以便按鈕和彩色跟踪點不會與影像物件衝突。
createCapture(VIDEO) in Line 7: This is an HTML5 <video> element (part of the DOM) that captures the feed from a webcam. In relation to this function you can define the size of the screen capture (which depends on the resolution of the webcam) and its position on screen, e.g. capture.size(640, 480);. We also use capture.hide(); to hide the video feed so that the button and the colored tracker points do not clash with the video object.
3. 第 11-13 行與 ctracker 相關:`ctracker = new clm.tracker()`、`ctracker.init(pModel);` 和 `ctracker.start(capture.elt);`:與音效和攝像頭的使用類似,首先需要初始化 ctracker 庫,選擇分類模型(將在第 10 章「機器學習」中討論),並從影像來源開始跟踪。
Lines 11-13 are related to ctracker: ctracker = new clm.tracker(), ctracker.init(pModel); and ctracker.start(capture.elt);: Similar to audio and camera use, you first need to initialize the ctracker library, select the classifier model (to be discussed in Chapter 10, “Machine unlearning”), and start tracking from the video source.
4. 為了在 INVERT 模式下顯示捕捉的影像,程式使用 `image(capture, 0, 0, 640, 480);` 以圖像格式繪製影像來源,並相應地應用過濾器:`filter(INVERT);`(見第 18-19 行)。
In order to display the captured video in INVERT mode, the program uses image(capture, 0, 0, 640, 480); to draw the video feed in an image format, and applies the filter accordingly: filter(INVERT); (see Lines 18-19).
5. 第 21 行中的 `ctracker.getCurrentPosition()`:在我們將跟踪點存入陣列 `positions` 之後,使用 for 循環(第 30-36 行)遍歷所有 71 個跟踪點(從 0 開始,以 70 結束),並以 `positions[][]` 形式的二維陣列返回每個點的 x 和 y 坐標。positions 陣列的第一個維度([])表示從 0 到 70 的跟踪點。第二個維度([][])檢索該跟踪點的 x 和 y 坐標。
ctracker.getCurrentPosition() in Line 21: Once the tracker points have been stored in the array positions, a for-loop (in Lines 30-36) is used to loop through all 71 tracker points (as it starts with 0 and ends with 70) and return each position in terms of x and y coordinates as a two-dimensional array of the form positions[][]. The first dimension ([]) of the positions array indicates the tracker point, from 0 to 70. The second dimension ([][]) retrieves the x and y coordinates of that tracker point.
6. 獲取跟踪點上的所有數據後,便可繪製覆蓋面部的橢圓。讚按鈕的位置跟隨嘴巴的位置,即第 60 點(但由於按鈕需要定位在嘴巴的中點,因此需要將按鈕向左移動約 20 個像素),程式會從陣列返回該位置(參見第 26 行):positions[60][0]-20 和 positions[60][1]。第二個維度的 [0] 和 [1] 分別指 x 和 y 坐標。
Getting all the data on the tracker points allows ellipses to be drawn to cover the face. The position of the Like button follows that of the mouth, which is located at point 60 (but since the button should sit at the midpoint of the mouth, it is shifted about 20 pixels to the left), so the program returns the position from the array (see Line 26): positions[60][0]-20 and positions[60][1]. The indices [0] and [1] of the second dimension refer to the x and y coordinates respectively.
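The two-dimensional indexing can be tried out away from the camera with a toy stand-in for what ctracker.getCurrentPosition() returns (the coordinate values below are made up):

```javascript
// A toy stand-in for ctracker.getCurrentPosition(): 71 [x, y] pairs
const positions = [];
for (let i = 0; i < 71; i++) {
  positions.push([i * 9, 240]); // hypothetical coordinates
}

const mouthX = positions[60][0]; // x coordinate of point 60, the mouth area
const mouthY = positions[60][1]; // y coordinate of point 60
console.log(mouthX - 20, mouthY); // 520 240, where the button would be placed
```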
## Exercise in class 課堂練習
要熟悉各種採集模式,請嘗試以下操作:
To familiarize yourself with the various modes of capture, try the following:
1. 通過修改各種參數(例如 keyCode 以及其他鍵盤和鼠標事件)來探索各種採集模式。
Explore the various capture modes by tinkering with various parameters such as keyCode, as well as other keyboard and mouse events.
2. 研究跟踪點並嘗試更改點贊按鈕的位置。
Study the tracker points and try to change the position of the like button.
3. 嘗試測試面部識別的邊界(使用燈光、面部表情和麵部構圖)。一張臉在多大程度上可以被識別,這在多大程度上是不可能的?
Try to test the boundaries of facial recognition (using lighting, facial expressions, and facial composition). To what extent can a face be recognized as such, and to what extent is this impossible?
4. 你知道人臉是如何建模的嗎?面部識別技術如何在整個社會中得到應用,由此產生的一些問題是什麼?
Do you know how the face is being modeled? How has facial recognition technology been applied in society at large, and what are some of the issues that arise from this?
值得回顧一下第 2 章「變數幾何」,以提醒人們面部識別如何根據幾何形狀(例如人眼之間的距離或嘴巴的大小)識別人臉,以建立面部特徵可以與標準化數據庫進行比較。主要問題之一是,這些數據庫因數據的準備方式、選擇、收集、分類、分類和清理(在第 10 章“機器學習”中進一步討論)而存在偏差。你的臉在多大程度上符合標準?
It would be worth checking back to Chapter 2, “Variable geometry,” for a reminder of how facial recognition identifies a person’s face from its geometry — such as the distance between a person’s eyes or size of their mouth — to establish a facial signature that can be compared to a standardized database. One of the main problems is that these databases are skewed by how data was prepared, its selection, collection, categorization, classification, and cleaning (further discussed in Chapter 10, “Machine unlearning”). To what extent does your face meet the standard?
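The idea of a geometric facial signature can be sketched with nothing more than the distance formula; the landmark coordinates and the choice of measurements below are entirely hypothetical, not taken from any real system:

```javascript
// Euclidean distance between two [x, y] points
function dist2d(a, b) {
  return Math.hypot(a[0] - b[0], a[1] - b[1]);
}

// Hypothetical landmark positions for one face
const leftEye = [250, 200];
const rightEye = [310, 200];
const mouth = [280, 280];

// A crude "signature": two measurements a matcher might compare
// against entries in a database
const eyeDistance = dist2d(leftEye, rightEye);
const eyeMidpoint = [(leftEye[0] + rightEye[0]) / 2, (leftEye[1] + rightEye[1]) / 2];
const eyeToMouth = dist2d(eyeMidpoint, mouth);
console.log(eyeDistance, eyeToMouth); // 60 80
```

Real systems use many more measurements (and, increasingly, learned embeddings rather than hand-picked distances), but the principle of reducing a face to comparable numbers is the same.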
## The concept of capture 採集的概念
下一節討論用於數據採集的不同輸入的各種示例。目的是展示其應用的其他一些可能性,更重要的是展示這與數據化、商品化、監視和個性化的關係。換句話說,這是一個更廣泛地討論數據政治的機會:質疑我們的個人數據如何被採集、量化、存檔和使用,以及用於什麼目的?有什麼影響,誰有權訪問採集的數據並從中獲利?很少有人確切知道哪些數據被採集或如何使用。16 然而,儘管使用了「採集」一詞,但我們還應該指出,這並不是完全監禁,還有逃生路線。稍後會詳細介紹。
This next section discusses various examples of different inputs for data capture. The intention is to showcase some other possibilities of its application and, more importantly, how this relates to datafication, commodification, surveillance, and personalization. In other words, this is an opportunity to discuss data politics more broadly: to question how our personal data is captured, quantified, archived, and used, and for what purpose. What are the implications, and who has the right to access the captured data and derive profit from it? Few people know exactly which data is captured or how it is used.16 However, despite the use of the term “capture,” we should also point out that this is not total incarceration, and there are escape routes. More on this later.
## Web analytics and heatmap 網絡分析和熱圖
目前,使用最廣泛的網絡分析服務是由谷歌提供的,它包含大量關於網站流量和瀏覽行為的數據,包括獨立訪問次數、網站平均停留時間、瀏覽器和操作系統信息、流量來源和 用戶的地理位置等。 然後可以進一步利用這些數據來分析客戶的個人資料和用戶行為。
At the moment, the most widely used web analytics service is provided by Google and contains tremendous amounts of data on website traffic and browsing behavior, including the number of unique visits, average time spent on sites, browser and operating system information, traffic sources and users’ geographic locations, and so on. This data can then be further utilized to analyze customers’ profiles and user behaviors.

*圖 4.6:谷歌分析截圖
Figure 4.6: Google Analytics screenshot*
熱圖是一種可視化工具,提供數據的圖形表示來可視化用戶行為,通常用於業界的數據分析。例如,很容易跟踪光標的位置並計算其在網頁不同區域停留的時長,從而指示哪些內容比其他內容「更熱」。這對於營銷目的非常有用,尤其是了解哪些內容對用戶更有或更沒有吸引力,以及讓公司或政黨分析在哪裡最好地放置他們的廣告和其他宣傳。Facebook–Cambridge Analytica 數據醜聞是一個相關的案例研究:2018 年初,有消息稱,數百萬人的 Facebook 個人資料在未經他們同意的情況下被收集,用於政治廣告目的。17 Facebook 等大公司 18 不斷探索新的數據採集方法以優化螢幕呈現。
A heatmap is a visualization tool that provides a graphical representation of data in order to visualize user behavior. It is commonly used in industry for data analytics. For example, it is easy to track the cursor’s position and compute the duration of its stay in different areas of a web page, providing an indication as to which content is “hotter” than the rest. This is useful for marketing purposes, not least to understand which content is more or less attractive to users, and for companies or political parties to analyze where best to place their ads and other propaganda. The Facebook–Cambridge Analytica data scandal makes a pertinent case study: in early 2018, it was revealed that the personal data of millions of people’s Facebook profiles had been harvested without their consent and used for political advertising purposes.17 Major corporations such as Facebook18 constantly explore new data capture methods to optimize screen presentation.

*圖 4.7:用於分析網頁的熱圖示例
Figure 4.7: An example of a heatmap for analyzing a web page*
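Tracking the cursor and accumulating dwell time is straightforward to sketch: divide the page into a grid and count, frame by frame, which cell the cursor is in. Everything below (grid size, cell size, function names) is a hypothetical illustration, not taken from any analytics product:

```javascript
// A hypothetical sketch of heatmap accumulation: divide a 400x300 page
// into 100px cells and count how many frames the cursor dwells in each.
const COLS = 4, ROWS = 3, CELL = 100;
const heat = Array.from({ length: ROWS }, () => new Array(COLS).fill(0));

// Called once per frame with the current cursor position
function recordCursor(x, y) {
  const col = Math.min(Math.floor(x / CELL), COLS - 1);
  const row = Math.min(Math.floor(y / CELL), ROWS - 1);
  heat[row][col] += 1; // one frame of dwell time in this cell
}

// Simulate a cursor hovering over the top-left cell for 10 frames,
// then moving briefly to the bottom-right cell
for (let i = 0; i < 10; i++) recordCursor(50, 50);
recordCursor(350, 250);
console.log(heat[0][0], heat[2][3]); // 10 1, top-left is the "hottest" cell
```

Rendered as colors (red for high counts, blue for low), such a grid produces exactly the kind of image shown in Figure 4.7.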
## Form elements 表單元素
正如我們在交互方面所論證的,選擇是有限的,但每個表單元素(如下拉菜單或按鈕)都表示不同的可供性。19 研究員 Rena Bivens 對 Facebook 的註冊頁面進行了與可用性別選項相關的徹底分析。20 Facebook 在 2004 年首次推出時沒有性別字段,但在 2008 年引入了一個僅包含男性或女性選項的下拉列表,情況發生了變化;後來進一步改為使用單選按鈕來強調二元選擇。2014 年出現了突破,Facebook 允許用戶自定義性別字段,您現在可以從 50 多個性別選項列表中進行選擇。根據 Facebook 的說法,他們希望通過「真實身份」來增強「個性化體驗」,21 然而,這種個性化(無論是在 Facebook 還是在整個社會中)是否出於善意仍有爭議,因為它也服務於市場細分的目的(將用戶劃分為更多的子群體)。
As we argued with regard to interaction, the choices are limited, and yet each form element, like a drop-down menu or a button, indicates different affordances.19 Researcher Rena Bivens has made a thorough analysis of Facebook’s registration page in relation to the gender options available.20 When Facebook was first launched in 2004 there was no gender field, but things changed in 2008 when a drop-down list was introduced that consisted solely of the options Male or Female; this was further changed to use radio buttons to emphasize the binary choice. A breakthrough occurred in 2014 when Facebook allowed users to customize the gender field, and you can now select from a list of more than 50 gender options. According to Facebook, they wanted to enhance “personalized experiences” with “authentic identity”;21 however, it remains arguable whether this personalization (both on Facebook and in society in general) is well-intentioned, as it also serves the purpose of market segmentation (dividing users into ever more sub-groups).

*圖 4.8:截至 2020 年 2 月 Facebook 的自定義性別字段
Figure 4.8: Facebook’s custom gender field as of February 2020*
## Metrics of likes 喜歡的指標
單個讚按鈕的使用提供了一個很好的例子,說明我們的感受是如何被採集的。名字貼切的公司「Happy or Not」生產按鈕技術和分析軟體(例如超市中常見的那種,帶有快樂或悲傷的表情),也為工作場所提供反饋技術,正如他們的標語所示:「為全球每一家企業創造快樂。」22 Facebook 於 2016 年推出的六個表情符號,包括「喜歡」、「愛」、「哈哈」、「哇」、「悲傷」和「憤怒」,更精確地標記了我們標準化的工作和娛樂體驗。所有點擊都被「分類」為情感指標,在網絡上公開顯示,並用於算法計算以決定向用戶優先提供哪些提要。很明顯,點擊首先服務於平台所有者的利益;彷彿是為了證明這一點,Facebook 和 Instagram 已經測試了隱藏帖子指標的想法,以便將注意力轉移到他們更願意稱之為「將人們聯繫起來」23 的事情上,彷彿他們的利益是無私的。
The use of a single Like button provides a good example of how our feelings are captured. The aptly named company “Happy or Not,” which produces push-button technology and analytics software (the kind found in supermarkets, for instance, with happy or sad faces), also provides feedback technologies for the workplace, as indicated by its strapline: “Creating happiness in every business, worldwide.”22 The six emoticons Facebook launched in 2016, including “Like,” “Love,” “Haha,” “Wow,” “Sad,” and “Angry,” mark our standardized experience of work and play more precisely. All clicks are “categorized” into emotional metrics, displayed publicly on the web, and used in algorithmic calculations to prioritize feeds to users. It is fairly clear that the clicks serve the interests of platform owners foremost; as if to prove the point, Facebook and Instagram have tested the idea of hiding the metrics on posts in order to shift attention to what they prefer to call “connecting people,”23 as if their interests were altruistic.
這種量化的做法是藝術家本傑明·格羅瑟 (Benjamin Grosser) 在其 2012 年首次出版的 Demetricator 系列 24 中模仿的東西,這使得與元數據相關的所有數字都消失了。與通知、回复、收藏夾和提要相關聯的數字的相關“值”都已作廢。或者更確切地說,很明顯,點擊產生了價值,而這一點的證據因其缺失而引人注目。
This practice of quantification is something the artist Benjamin Grosser has parodied in his Demetricator series,24 first published in 2012, which makes all the numbers associated with the metadata disappear. The “value” associated with the numbers of notifications, replies, favorites, and feeds has been nullified. Or rather, it becomes clear that the clicking produces value and the proof of this is conspicuous by its absence.

圖 4.9:Benjamin Grosser 的 Facebook Demetricator,去計量喜歡、分享、評論和時間戳。原始(頂部),Demetricated(底部)。由藝術家提供
Figure 4.9: Benjamin Grosser’s Facebook Demetricator, demetricating Likes, Shares, Comments, and Timestamps. Original (top), Demetricated (bottom). Courtesy of the artist
跟踪顯然是一項大生意,並且帶有自己的隱形斗篷。2013 年,Facebook 進行了一項關於最後一刻自我審查的研究項目,25 揭示了他們甚至能夠跟踪未發布的狀態更新/帖子/評論的能力,包括刪除的文本或圖像。這種“剩餘數據”可能被認為是“廢料”或“數字廢氣”,但這些數據具有豐富的預測價值。26 這意味著 Facebook 不僅對採集您已發布的內容感興趣,還會從剩餘數據中採集您的思維過程。認為數據採集擴展到想像的領域是發人深省的。
Tracking is clearly big business and comes with its own invisibility cloak. In 2013, Facebook conducted a research project about last-minute self-censorship,25 revealing their capability of being able to track even unposted status updates/posts/comments, including erased texts, or images. This “residual data” might be considered “waste material,” “digital exhaust,” or “data exhaust,” and yet it is rich in predictive value.26 The implication is that Facebook is not only interested in capturing what you have posted, but also capturing your thought processes from residual data. It is sobering to think that data capture extends to the realm of imagination.
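Retaining “residual data” from typing is technically simple to demonstrate: a key listener sees erased characters as well as posted ones. The sketch below simulates this in plain JavaScript; the event stream and the function name `captureTyping` are invented for illustration and do not reproduce Facebook’s actual research method.

```javascript
// Simulated key events: ordinary characters plus "Backspace".
// A capture script can keep the erased characters even though the
// final, visible text no longer contains them.
function captureTyping(events) {
  const visible = [];
  const erased = [];
  for (const key of events) {
    if (key === "Backspace") {
      const removed = visible.pop();
      if (removed !== undefined) erased.push(removed);
    } else {
      visible.push(key);
    }
  }
  return { posted: visible.join(""), residual: erased.join("") };
}

const result = captureTyping(["n", "o", "Backspace", "Backspace", "o", "k"]);
console.log(result.posted);   // "ok"   — what everyone sees
console.log(result.residual); // "on"   — what was erased, in reverse order
```

The `residual` string never reaches the screen, yet it sits in memory ready to be sent along with the post — which is precisely what makes erased text valuable as predictive data.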
## Voice and audio data 語音和音效數據
我們的電腦、手機和其他小工具等智能設備通常配備語音識別功能,例如 Siri、Google Assistant 或 Alexa,可將音效輸入轉換為軟體命令,並通過更個性化的體驗進行反饋,以協助執行日常任務。現在你可以在幾乎所有東西中找到這些語音助手,包括微波爐等日常用品,隨著機器學習的發展,它們變得越來越健談和“聰明”,有人可能會說是“智能”。眾所周知,這些“語音助手”可以很好地執行簡單的任務,並且變得更加聰明,同時通常會為機器學習應用程式採集語音。將這些有形的語音助手放置在我們的家中,可以在不面對螢幕時捕捉您的選擇和品味。在物聯網中,設備為你服務,你也為設備服務。事實上,我們成為了為他人創造價值的“設備”。27
Smart devices like our computers, phones, and other gadgets are commonly equipped with voice recognition — such as Siri, Google Assistant, or Alexa — which turns audio input into commands for software, and feeds back more personalized experiences to assist in the execution of everyday tasks. You can find these voice assistants in just about everything now, including everyday objects like microwaves, and they become more and more conversational and “smart,” one might say “intelligent,” as machine learning develops. These “voice assistants,” as they are known, carry out simple tasks very well, and become smarter, and at the same time capture voices for machine learning applications in general. Placing these tangible voice assistants in our homes allows the capturing of your choices and tastes when not facing a screen. In the internet of things, the device serves you, and you serve the device. Indeed we become “devices” that generate value for others.27
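In p5.js, audio volume is typically read once per frame from the microphone (p5.AudioIn’s `getLevel()` returns a value between 0.0 and 1.0) and smoothed before it drives any visuals. The smoothing itself is plain arithmetic, so it is sketched below outside of p5.js so that it runs anywhere; the function name `smoothLevel` and the factor 0.2 are our own choices, not part of the p5.js API.

```javascript
// Exponential moving average: each new reading nudges the stored level
// toward itself by a fixed fraction, so sudden spikes are damped.
function smoothLevel(previous, reading, factor = 0.2) {
  return previous + (reading - previous) * factor;
}

// Simulated microphone readings (p5's getLevel() returns 0.0–1.0).
const readings = [0.0, 0.8, 0.8, 0.1, 0.1];
let level = 0;
for (const r of readings) {
  level = smoothLevel(level, r);
}
console.log(level.toFixed(3)); // "0.220" — the spike to 0.8 is damped
```

In a p5.js sketch the same line would sit inside `draw()`, with `mic.getLevel()` as the reading, before mapping `level` to, say, the size of an ellipse.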

圖 4.10:語音和音效活動的螢幕截圖
Figure 4.10: Screenshot of Voice & Audio activity
在圖 4.10 中,文本如下:
In Figure 4.10, the text reads:
>“語音和音效錄音可保存您在 Google 服務上的網絡和應用活動以及使用或連接到 Google 語音服務的網站、應用和設備中的語音和其他音效輸入的錄音。[…] 這些數據可幫助 Google 為您提供更個性化的 Google 服務體驗,例如當您說‘Hey Google’與 Google 助理交談時改進的語音識別,無論是在 Google 服務內還是之外。這些數據可能會在您登錄的任何 Google 服務中保存和使用,以便為您提供更加個性化的體驗。”
>“Voice and audio recordings save a recording of your voice and other audio inputs in your Web & app activity on Google services and from sites, apps and devices that use or connect to Google speech services. […] This data helps Google give you more personalised experiences across Google services, like improved speech recognition when you say ‘Hey Google’ to speak to your Assistant, both on and off Google. This data may be saved and used in any Google service where you are signed in to give you more personalised experiences.”
## Health tracker 健康追踪器

*圖 4.11:睡眠追踪器截圖
Figure 4.11: Screenshot of sleep tracker*
健身和幸福也變得數據化,並且隨著個人目標的設定,也被“遊戲化”。 隨著福利國家的解體,個人福祉變得越來越個性化,“自我追踪”應用程式提供虛假自主感的趨勢越來越大。 可以使用 Fitbit 或 Apple Watch 等可穿戴設備跟踪和分析運動、步數、心率甚至睡眠模式。 這些“量化自我”的實踐,有時被稱為“身體黑客”或“自我監視”,與將採集和獲取融入日常生活各個方面的其他趨勢重疊。
Fitness and well-being become datafied too, and with the setting of personal targets, are also “gamified.” As the welfare state is dismantled, personal well-being becomes more and more individualized and there is a growing trend for “self-tracking” apps to provide a spurious sense of autonomy. Movement, steps, heart rate, and even sleep patterns can be tracked and analyzed using wearable devices such as the Fitbit or the Apple Watch. These practices of the “quantified self,” sometimes referred to as “body hacking” or “self-surveillance,” overlap with other trends that incorporate capture and acquisition into all aspects of daily life.
## While() 當()
在晚期資本主義下,時間本身似乎已被採集,“24/7 的非時間無情地侵入社會或個人生活的方方面面。例如,現在幾乎沒有任何情況不能做為數字圖像或信息被記錄或存檔。”28 我們引用了喬納森·克拉里(Jonathan Crary)的書《24/7:晚期資本主義和睡眠的終結》,該書描述了白天與黑夜之間區別的崩潰,這意味著我們注定要隨時產生數據。如果睡眠曾經被認為是資本主義無法從中提取任何價值的最後避難所,29 那麼現在似乎不再如此。
Under late capitalism, temporality itself seems to have been captured, and “there is a relentless incursion of the non-time of 24/7 into every aspect of social or personal life. There are, for example, almost no circumstances now that cannot be recorded or archived as digital imagery or information.”28 We quote from Jonathan Crary’s book 24/7: Late Capitalism and the Ends of Sleep which describes the collapse of the distinction between day and night, meaning we are destined to produce data at all times. If sleep was once thought to be the last refuge from capitalism where no value could be extracted,29 then this no longer seems to be the case.
甚至睡眠也被數據化了,這似乎表明我們的主觀性也在多大程度上被捕捉。我們有意或無意地生產、共享、收集、使用和濫用大量數據,但其採集對我們有什麼影響?數據商品與其人類主體之間的主體間關係是什麼?正如本章所討論的,我們的個人和職業生活似乎完全沉浸在各種“數據化”過程中,但這是否意味著我們被困在數據的監獄中,不知不覺地為他人創造價值?在最後一節中,我們嘗試進一步解開這些想法,特別是數據流(我們稱之為大數據)脈絡中的價值概念,並檢視我們在這些數據化結構中的位置——我們並非完全沒有能動性。
That even sleep has become datafied seems to point to the extent to which our subjectivities have also been captured. We produce, share, collect, use, and misuse, knowingly or not, massive amounts of data, but what does its capture do to us? What are the inter-subjective relations between data-commodity and its human subjects? As discussed in this chapter, our personal and professional lives seem to be fully enmeshed in various processes of “datafication,” but does this mean that we are trapped in a prison-house of data, unwittingly producing value for others? In this last section we try to unpack these ideas a little more, and in particular the idea of value in the context of the data flow (that we call big data), and examine our position within these datafied structures which is not entirely without agency.
2015 年,柏林一年一度的藝術和數字文化節 transmediale 發布了一份公開徵集,討論了 Capture All 的普遍邏輯以及生活、工作和娛樂的量化。徵集中包含了一些值得在此重複的問題:“是否仍然存在抵抗數字資本主義 CAPTURE ALL 命令的存在模式,還是別無選擇,只能一起玩?如果有,是否有不玩這種數字量化遊戲的藝術策略和思辨方法?可以利用無情的量化和遊戲化之間的 [……] 縫隙來開闢新的生活方式嗎?”30 希望本章的實際任務和示例能夠在某種程度上指出一些替代方案。
In 2015, transmediale, an annual art and digital culture festival in Berlin, posted an open call addressing the pervasive logic of Capture All and the quantification of life, work and play. The call included some questions worth repeating here: “Are there still modes of being that resist the imperative of digital capitalism to CAPTURE ALL or is there no option but to play along? If so, are there artistic strategies and speculative approaches that do not play this game of quantification by the numbers? What are the […] gaps of relentless quantification and gamification that can be exploited in order to carve out new ways of living?”30 Hopefully the practical tasks and examples of this chapter go some way to pointing out some alternatives.
馬克思主義理論可以幫助我們在更概念化的層面上理解這一點。我們所描述的各種技術可以理解為生產資料,即馬克思所稱的“固定資本”,然後將其轉化為“交換價值”,即貨幣價值。然而,正如 Tiziana Terranova 所說,將這一過程視為簡單地採集用戶的勞動價值並竊取相關價值,是沒有抓住重點。31 我們或許可以補充:與其說個人用戶需要為他們自願提供的數據獲得補償,不如說更重要的是更大的社會層面,尤其是在大數據的脈絡中。她解釋說:“與馬克思主義的某些變體相反,這些變體傾向於將技術完全等同於‘死勞動’、‘固定資本’或‘工具理性’,因此等同於控制和採集,但重要的是要記住,對於馬克思來說,機器的演進也標誌著生產力的發展水平,這些生產力被資本主義經濟釋放但從未完全被其所控制。”32
Marxist theory can help us make sense of this on a more conceptual level. The various techniques we have described can be understood as means of production, what Marx would refer to as “fixed capital,” which is then turned into “exchange value,” or in other words monetary value. Yet to see this process as one in which the labor-value of users is simply captured and the associated value stolen misses the point, as Tiziana Terranova states.31 Rather than individual users needing compensation for their willing supply of data, it is the bigger social aspect that is more significant, particularly in the context of big data, we might add. She explains: “Contrary to some variants of Marxism which tend to identify technology completely with “dead labor,” “fixed capital” or “instrumental rationality,” and hence with control and capture, it seems important to remember how, for Marx, the evolution of machinery also indexes a level of development of productive powers that are unleashed but never totally contained by the capitalist economy.”32
我們可以在自由和開源運動的社會能量中找到一些證據,例如,補償在社會交換的層面上運作。這種說法有助於將注意力從個人的努力轉移到社會關係上。如果我們要發展一種不同於“採集一切”邏輯的立場,並尋求更積極、更有希望的解釋,那麼這一點的政治意義就尤為重要。談到按下按鈕,Terranova 將社會關係描述為兩個極點之間的不對稱關係——一個是主動的,另一個是接受的。對她來說,“喜歡和被喜歡、寫作和閱讀、看和被看、標記和被標記”等行為是從個人形式到集體形式轉變的例子。她考慮了“這些動作如何成為離散的技術對象(如按鈕、評論框、標籤等),然後鏈接到底層數據結構”,以及這些動作如何反過來表達了對“個體化”和“跨個體化”過程進行實驗的可能性,即社會轉型本身的可能性。
We can find some evidence of this in the social energies of the free and open source movement, for instance, where compensation operates at the level of social exchange. This claim then serves to shift attention from the efforts of the individual to social relations. The politics of this is especially important if we are to develop a position different from the logic of “capture all” and look to more positive, and hopeful interpretations. Referring to button pressing, Terranova describes social relations as an asymmetrical relation between two poles — one active, the other receptive. To her, actions such as “liking and being liked, writing and reading, looking and being looked at, tagging and being tagged,” are examples of the transition from individual to collective forms. She considers how “these actions become discrete technical objects (like buttons, comment boxes, tags, etc.) which are then linked to underlying data structures,” and, in turn, how these actions express the possibility of being able to experiment with processes of “individuation” and “transindividuation,” i.e. the possibility of social transformation itself.
這一論點參考了吉爾伯特·西蒙東(Gilbert Simondon)的哲學,即個體化——一個人或事物如何與其他人或事物區分開來——與其他個體化相互糾纏的轉化過程。本出版物中沒有篇幅(我們認為也沒有必要)詳細討論這一點,但目前只需說明,跨個體化描述了個體的“我”與集體的“我們”之間的轉變,以及它們如何通過彼此而轉化。33 我們希望這個書籍項目也能發生類似的事情——它在設計上已是集體性的,但也為新版本的生產和社會關係在其重製中的再加工開闢了進一步的可能性。當然,這涉及修補與數據採集相關的底層程式碼和價值,以及我們重塑後者主要目的的能力。這是一個公開的邀請,不僅可以採集數據,還可以釋放其其他潛力。
This line of argument makes reference to the philosophy of Gilbert Simondon, to the transformational process by which individuation — how a person or thing is identified as distinguished from other persons or things — is caught up with other individuations. There is no space (or need, we think) to go into this in detail in this publication, but for now it suffices to say that transindividuation describes the shift between the individual “I” and the collective “We” and how they are transformed through one another.33 We hope something of this happens to this book project, which is already collective by design, but also opens up further possibilities for the production of new versions and social relations in its reworking. Of course this involves tinkering with the underlying codes and values associated with data capture, and our ability to reinvent the latter’s main purpose. This is an open invitation to not only capture data, but to also unleash its other potentials.
## MiniX: Capture All 迷你練習:採集所有
**目標:
Objective:**
- 試驗各種數據採集輸入,包括音效、鼠標、鍵盤、網絡攝像頭等。
To experiment with various data capture inputs, including audio, mouse, keyboard, webcam, and more.
- 批判性地反思數據採集和數據化的過程。
To critically reflect upon the process of data capture and datafication.
**更多靈感:
For additional inspiration:**
- LAUREN,勞倫·麥卡錫(Lauren McCarthy)(2017),http://lauren-mccarthy.com/LAUREN。
LAUREN by Lauren McCarthy (2017), http://lauren-mccarthy.com/LAUREN.
- nonsense,Winnie Soon(2015),http://siusoon.net/nonsense/。(閱讀原始碼中的註釋以了解該項目的意圖。)
nonsense by Winnie Soon (2015), http://siusoon.net/nonsense/. (Read the comment in the source code for this project’s intentions.)
- Facebook Demetricator by Benjamin Grosser (2012-present), https://bengrosser.com/projects/facebook-demetricator/, and subsequent Instagram Demetricator, https://bengrosser.com/projects/instagram-demetricator/ or Twitter Demetricator, https://bengrosser.com/projects/twitter-demetricator/.
**任務(RunMe):
Tasks (RunMe):**
1. 嘗試各種數據採集輸入和交互設備,例如音效、鼠標、鍵盤、網絡攝影機/錄像等。
Experiment with various data capture input and interactive devices, such as audio, mouse, keyboard, webcam/video, etc.
2. 制定一個草圖,對 transmediale 公開徵集“採集全部”(Capture All)做出鬆散響應,https://transmediale.de/content/call-for-works-2015。(想像一下,您想將草圖/藝術品/批判性或思辨性設計作品做為展覽的一部分提交給 transmediale。)
Develop a sketch that responds loosely to the transmediale open call “Capture All,” https://transmediale.de/content/call-for-works-2015. (Imagine you want to submit a sketch/artwork/critical or speculative design work to transmediale as part of an exhibition.)
**要考慮的問題(自述文件):
Questions to think about (ReadMe):**
- 為您的作品提供標題和簡短描述(1000 個字符或更少),就像您要提交給藝術節一樣。
Provide a title for and a short description of your work (1000 characters or less) as if you were going to submit it to the festival.
- 描述您的程式以及您使用和學習的內容。
Describe your program and what you have used and learnt.
- 闡明您的計劃和思維如何解決“採集所有”的主題。
Articulate how your program and thinking address the theme of “capture all.”
- 數據採集的文化含義是什麼?
What are the cultural implications of data capture?
## Required reading 必讀
- Shoshana Zuboff,“Shoshana Zuboff 談監視資本主義 | VPRO 紀錄片,”https://youtu.be/hIXhnWUmMvw。
- “p5.js 示例 - 交互 1”,https://p5js.org/examples/hello-p5-interactivity-1.html。
- “p5.js 示例 - 交互 2”,https://p5js.org/examples/hello-p5-interactivity-2.html。
- “p5 DOM 參考”,https://p5js.org/reference/#group-DOM。
- Ulises A. Mejias 和 Nick Couldry,“Datafication”,互聯網政策評論 8.4(2019 年),https://policyreview.info/concepts/datafication。
## Further reading 進一步閱讀
Søren Pold,“按鈕”,Fuller 編輯,軟體研究。
Søren Pold, “Button,” in Fuller, ed., Software Studies.
Carolin Gerlitz 和 Anne Helmond,“按讚經濟:社交按鈕和數據密集型網絡”,《新媒體與社會》第 15 卷第 8 期(2013 年 12 月 1 日):1348–65。
Carolin Gerlitz and Anne Helmond, “The Like Economy: Social Buttons and the Data-Intensive Web,” New Media & Society 15, no. 8, December 1 (2013): 1348–65.
Christian Ulrik Andersen 和 Geoff Cox 編,《關於數據化研究的同行評審期刊》第 4 卷第 1 期(2015 年),https://aprja.net//issue/view/8402。
Christian Ulrik Andersen and Geoff Cox, eds., A Peer-Reviewed Journal About Datafied Research 4, no. 1 (2015), https://aprja.net//issue/view/8402.
Audun M. Øygard,“clmtrackr - 面部跟踪 JavaScript 庫”,https://github.com/auduno/clmtrackr。
Audun M. Øygard, “clmtrackr - Face tracking JavaScript library,” https://github.com/auduno/clmtrackr.
Daniel Shiffman,HTML / CSS/DOM - p5.js 教程(2017),https://www.youtube.com/playlist?list=PLRqwX-V7Uu6bI1SlcCRfLH79HZrFAtBvX。
Daniel Shiffman, HTML / CSS/DOM - p5.js Tutorial (2017), https://www.youtube.com/playlist?list=PLRqwX-V7Uu6bI1SlcCRfLH79HZrFAtBvX.
Tiziana Terranova,“紅色堆棧攻擊! 算法、資本和公共自動化”,EuroNomade(2014 年),http://www.euronomade.info/?p=2268。
Tiziana Terranova, “Red Stack Attack! Algorithms, Capital and the Automation of the Common,” EuroNomade (2014), http://www.euronomade.info/?p=2268.
## Notes 筆記
1
這與數據可視化領域產生共鳴,愛德華・塔夫特 (Edward Tufte) 認為應該允許數據“不言自明”,而不是迷失在可視化的裝飾中。這導致錯誤地認為數據是原始的和未經中介的。數據開始時相對原始和未經解釋,但在實踐中已經被選擇、定位、預處理和清理、挖掘等,尤其是為了使其可讀。總會有一些關於其組成的額外信息,通常來自最初收集它的方式。參見 Edward R. Tufte, The Visual Display of Quantitative Information [1983] (Cheshire, CT: Graphics Press, 2001)。 ↩
This resonates with the field of data visualization, and Edward Tufte’s belief that data should be allowed to “speak for itself” rather than be lost in the ornamentation of visualization. This makes the mistake of thinking that data is raw and unmediated. Data begins relatively raw and uninterpreted, but in practice is already selected, targeted, preprocessed and cleaned, mined, and so on, not least to make it human readable. There is always some additional information about its composition, usually derived from the means by which it was gathered in the first place. See Edward R. Tufte, The Visual Display of Quantitative Information [1983] (Cheshire, CT: Graphics Press, 2001). ↩
2
Christian Ulrik Andersen 和 Geoff Cox,編輯,關於數據化研究的同行評審期刊,APRJA 4,第 1 期(2015 年)。 ↩
Christian Ulrik Andersen and Geoff Cox, eds., A Peer-Reviewed Journal About Datafied Research, APRJA 4, no.1 (2015). ↩
3
Kenneth Cukier 和 Victor Mayer-Schöenberger,“大數據的興起”,外交事務(2013 年 5 月/6 月):28-40。 ↩
Kenneth Cukier and Victor Mayer-Schöenberger, “The Rise of Big Data,” Foreign Affairs (May/June 2013): 28–40. ↩
4
Shoshana Zuboff,“Shoshana Zuboff 談監視資本主義 | VPRO 紀錄片,”vpro 紀錄片。 2020 年 4 月 26 日訪問。 https://youtu.be/hIXhnWUmMvw。參見她的書《監視資本主義時代:在權力的新前沿為人類未來而戰》(紐約:PublicAffairs,2019 年)。 ↩
Shoshana Zuboff, “Shoshana Zuboff on Surveillance Capitalism | VPRO Documentary,” vpro documentary. Accessed April 26, 2020. https://youtu.be/hIXhnWUmMvw. See her book, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power (New York: PublicAffairs, 2019). ↩
5
Søren Pold,“按鈕”,Matthew Fuller 編輯,軟體研究(馬薩諸塞州劍橋:麻省理工學院出版社,2008 年),34。用戶尤其會被按鈕的措辭所吸引,Pold 建議按鈕是用獨特的功能和意義(同上,31)。 ↩
Søren Pold, “Button,” in Matthew Fuller ed., Software Studies (Cambridge, Mass.: MIT Press, 2008), 34. Users are seduced by the wording of the button not least, and Pold suggests that a button is developed with distinct functionality and signification (Ibid., 31). ↩
6
Carolin Gerlitz 和 Anne Helmond,“按讚經濟:社交按鈕和數據密集型網絡”,《新媒體與社會》第 15 卷第 8 期(2013 年 12 月 1 日):1348–65。 ↩
Carolin Gerlitz and Anne Helmond, “The Like Economy: Social Buttons and the Data-Intensive Web,” New Media & Society 15, no.8, December 1 (2013): 1348–65. ↩
7
按鈕的樣式和 Facebook 2015 年的點贊按鈕樣式完全一樣。↩
The styling of the button is exactly the same as Facebook’s like button styling in 2015. ↩
8
https://p5js.org/reference/#group-DOM. ↩
9
請參閱此處的 p5.Element 方法列表,https://p5js.org/reference/#/p5.Element。 ↩
See the p5.Element method list here, https://p5js.org/reference/#/p5.Element. ↩
10
樣式化按鈕遵循 CSS 的語法,它控制著像按鈕這樣的 DOM 元素應該如何顯示。提供的示例顯示了如何使用語法 button.style('xxx:xxxx'); 將 CSS 合併到 JavaScript 文件中。另一種方法是遵循以 CSS 文件列出 .class 選擇器的約定。這樣,你需要在 JavaScript 文件中用語法標記類名:button.class('class_name');,然後在 CSS 文件中列出 CSS 元素和類屬性。更多示例可以在這裡找到:https://www.w3schools.com/css/css3_buttons.asp,並查看 Daniel Shiffman 關於 CSS 基礎的影片,https://www.youtube.com/watch?v=zGL8q8iQSQw。 ↩
Styling a button follows the syntax of CSS, which controls how a DOM element like a button should be displayed. The provided example shows how CSS is incorporated into the JavaScript file by using the syntax button.style('xxx:xxxx');. Another way of doing this is to follow the convention of having a CSS file that lists the .class selector. In this way, you need to have the syntax in the JavaScript file to mark the class name: button.class('class_name');, and then list out the CSS elements and class attributes in the CSS file. More examples can be found here: https://www.w3schools.com/css/css3_buttons.asp, and see Daniel Shiffman’s video on the basics of CSS, https://www.youtube.com/watch?v=zGL8q8iQSQw. ↩
11
參考頁面中的相關函式位於 Events > Mouse 之下,參見 https://p5js.org/reference/。 ↩
The related functions are listed in the reference page under Events > Mouse, see https://p5js.org/reference/. ↩
12
參考頁面中的相關函式位於 Events > Keyboard 之下,參見 https://p5js.org/reference/。 ↩
The related functions are listed in the reference page under Events > Keyboard, see https://p5js.org/reference/. ↩
13
查看聲音庫的各種功能:https://p5js.org/reference/#/libraries/p5.sound。 ↩
See the sound library’s various features: https://p5js.org/reference/#/libraries/p5.sound. ↩
14
參見 https://www.auduno.com/2014/01/05/fitting-faces/。 ↩
See https://www.auduno.com/2014/01/05/fitting-faces/. ↩
15
Jason M. Saragih、Simon Lucey 和 Jeffrey F. Cohn,“Face Alignment Through Subspace Constrained Mean-shifts”,2009 年 IEEE 第 12 屆計算機視覺國際會議,京都(2009 年):1034-1041。 doi:10.1109/ICCV.2009.5459377。 ↩
Jason M. Saragih, Simon Lucey and Jeffrey F. Cohn, “Face Alignment Through Subspace Constrained Mean-shifts,” 2009 IEEE 12th International Conference on Computer Vision, Kyoto (2009): 1034-1041. doi: 10.1109/ICCV.2009.5459377. ↩
16
GDPR(通用數據保護條例)等立法的出台是對這種缺乏透明度的回應。 GDPR 是歐盟法律 (2016) 中關於數據保護和隱私的法規,適用於歐盟和歐洲經濟區的所有公民。它還解決了歐盟和歐洲經濟區以外的個人數據傳輸問題。請參閱 https://gdpr-info.eu/。 ↩
The introduction of legislation such as the GDPR (General Data Protection Regulation) is a response to this lack of transparency. GDPR is a regulation in EU law (2016) on data protection and privacy that applies to all the citizens of the European Union and the European Economic Area. It also addresses the transfer of personal data outside the EU and EEA areas. See https://gdpr-info.eu/. ↩
17
衛報對此的報導,“劍橋分析文件”,可以在 https://www.theguardian.com/news/series/cambridge-analytica-files 找到。Facebook 最終被迫支付巨額罰款,參見 Alex Hern,“Facebook 同意就劍橋分析醜聞支付罰款”,《衛報》,10 月 30 日(2019 年),https://www.theguardian.com/technology/2019/oct/30/facebook-agrees-to-pay-fine-over-cambridge-analytica-scandal ↩
The Guardian’s coverage of this, “The Cambridge Analytica Files,” can be found at https://www.theguardian.com/news/series/cambridge-analytica-files. Facebook was ultimately forced to pay a hefty fine, see Alex Hern, “Facebook agrees to pay fine over Cambridge Analytica scandal,” The Guardian, October 30 (2019), https://www.theguardian.com/technology/2019/oct/30/facebook-agrees-to-pay-fine-over-cambridge-analytica-scandal ↩
18
Will Conley,“Facebook 調查跟踪用戶的光標和螢幕行為”,Slashgear,10 月 30 日(2013 年)。可在:https://www.slashgear.com/facebook-investigates-tracking-users-cursors-and-screen-behavior-30303663/。 ↩
Will Conley, “Facebook investigates tracking users’ cursors and screen behavior,” Slashgear, October 30 (2013). Available at: https://www.slashgear.com/facebook-investigates-tracking-users-cursors-and-screen-behavior-30303663/. ↩
19
可供性提供提示,暗示用戶可以如何與某物進行交互。參見 James J. Gibson,“The Theory of Affordances”,收錄於 Robert Shaw 和 John Bransford 編,Perceiving, Acting, and Knowing(新澤西州希爾斯代爾:Lawrence Erlbaum Associates,1977),127–143。 ↩
Affordance provides cues which give a hint as to how users may interact with something. See James J. Gibson, “The Theory of Affordances,” in Robert Shaw and John Bransford, eds., Perceiving, Acting, and Knowing (Hillsdale, NJ: Lawrence Erlbaum Associates, 1977), 127–143. ↩
20
Rena Bivens,“性別二元不會被取消程式設計:Facebook 上的性別編碼十年”,《新媒體與社會》第 19 期,第 6 期,(2017 年):880-898。 doi.org/10.1177/1461444815621527。 ↩
Rena Bivens, “The Gender Binary will not be Deprogrammed: Ten Years of Coding Gender on Facebook,” New Media & Society 19, no.6, (2017): 880–898. doi.org/10.1177/1461444815621527. ↩
21
Facebook,S-1 表格註冊聲明(2012 年)。可在:https://infodocket.files.wordpress.com/2012/02/facebook_s1-copy.pdf。 ↩
Facebook, Form S-1 registration statement (2012). Available at: https://infodocket.files.wordpress.com/2012/02/facebook_s1-copy.pdf. ↩
22
Esther Leslie,“另一種氛圍:對抗人力資源、表情符號和設備”,《視覺文化雜誌》第 18 期第 1 期,4 月(2019 年)。 ↩
Esther Leslie, “The Other Atmosphere: Against Human Resources, Emoji, and Devices,” Journal of Visual Culture 18 no.1, April (2019). ↩
23
Laurie Clarke,“為什麼隱藏喜歡不會讓 Instagram 成為一個更快樂的地方”,《連線》,7 月 19 日(2019 年),https://www.wired.co.uk/article/instagram-hides-likes。 ↩
Laurie Clarke, “Why hiding likes won’t make Instagram a happier place to be,” Wired, July 19 (2019), https://www.wired.co.uk/article/instagram-hides-likes. ↩
24
參見 Ben Grosser 的 Demetricator 系列作品:Facebook Demetricator,https://bengrosser.com/projects/facebook-demetricator/; Instagram Demetricator,https://bengrosser.com/projects/instagram-demetricator/; Twitter Demetricator,https://bengrosser.com/projects/twitter-demetricator/。 ↩
See Ben Grosser’s Demetricator series of artworks: Facebook Demetricator, https://bengrosser.com/projects/facebook-demetricator/; Instagram Demetricator, https://bengrosser.com/projects/instagram-demetricator/; Twitter Demetricator, https://bengrosser.com/projects/twitter-demetricator/. ↩
25
Sauvik Das 和 Adam D. I. Kramer,“Facebook 上的自我審查”,AAAI 網誌與社交媒體會議(ICWSM),7 月 2 日(2013 年),https://research.fb.com/publications/self-censorship-on-facebook/。 ↩
Sauvik Das and Adam D. I. Kramer, “Self-censorship on Facebook,” AAAI Conference on Weblogs and Social Media (ICWSM), July 2 (2013), https://research.fb.com/publications/self-censorship-on-facebook/. ↩
26
Zuboff, Shoshana Zuboff 談監視資本主義 | VPRO 紀錄片。 ↩
Zuboff, “Shoshana Zuboff on Surveillance Capitalism | VPRO Documentary.” ↩
27
轉述 Leslie 文章“另一種氛圍:反對人力資源、表情符號和設備”的最後幾行:“工人成為他們自己的設備。它們成為傳播資本主義的工具[……]。” ↩
Paraphrasing the final lines of Leslie’s essay “The Other Atmosphere: Against Human Resources, Emoji, and Devices”: “The workers become their own devices. They become devices of communicative capitalism […].” ↩
28
Jonathan Crary,24/7:晚期資本主義和睡眠的終結(倫敦:Verso,2013),30-31。 ↩
Jonathan Crary, 24/7: Late Capitalism and the Ends of Sleep (London: Verso, 2013), 30–31. ↩
29
Crary, 24/7, 10-11. ↩
30
transmediale, Capture All, https://transmediale.de/content/call-for-works-2015. ↩
31
Tiziana Terranova,“紅色堆棧攻擊!算法、資本和公共自動化”,EuroNomade(2014 年)。可在 http://www.euronomade.info/?p=2268 ↩
Tiziana Terranova, “Red Stack Attack! Algorithms, Capital and the Automation of the Common,” EuroNomade (2014). Available at http://www.euronomade.info/?p=2268 ↩
32
Terranova,“紅色堆棧攻擊!” ↩
Terranova, “Red Stack Attack!” ↩
33
對伯納德·斯蒂格勒(Bernard Stiegler)而言,伊里特·羅格夫(Irit Rogoff)解釋說,“‘跨個體化’的概念並不停留在個體化的‘我’或間個體化的‘我們’”,而是“在前個體化環境中共同個體化的過程,其中‘我’和‘我們’都通過彼此轉化。”參見 Bernard Stiegler 和 Irit Rogoff,“Transindividuation”,e-flux 14,三月(2010 年),https://www.e-flux.com/journal/14/61314/transindividuation/。 ↩
To Bernard Stiegler, explains Irit Rogoff, “The concept of ‘transindividuation’ is one that does not rest with the individuated ‘I’ or with the interindividuated ‘We’,” but “is the process of co-individuation within a preindividuated milieu and in which both the ‘I’ and the ‘We’ are transformed through one another.” See Bernard Stiegler and Irit Rogoff, “Transindividuation,” e-flux 14, March (2010), https://www.e-flux.com/journal/14/61314/transindividuation/. ↩
34
Terranova,“紅色堆棧攻擊!” ↩
Terranova, “Red Stack Attack!” ↩