# BigQuery - 基礎語法篇

## Select用法

- `*`
- `expression`
- `expression.*`

### SELECT * REPLACE

`SELECT *` 之後,以 REPLACE 中指定的 expression 取代同名的 column。

> 不能 replace 未定義的 column

```sql
WITH orders AS
  (SELECT 5 as order_id, "sprocket" as item_name, 200 as quantity)
SELECT * REPLACE ("widget" AS item_name)
FROM orders;

/*----------+-----------+----------*
 | order_id | item_name | quantity |
 +----------+-----------+----------+
 | 5        | widget    | 200      |
 *----------+-----------+----------*/

WITH orders AS
  (SELECT 5 as order_id, "sprocket" as item_name, 200 as quantity)
SELECT * REPLACE (quantity/2 AS quantity)
FROM orders;

/*----------+-----------+----------*
 | order_id | item_name | quantity |
 +----------+-----------+----------+
 | 5        | sprocket  | 100      |
 *----------+-----------+----------*/
```

### SELECT DISTINCT

以下資料型態不允許做 distinct:

- struct
- array

### SELECT ALL

SELECT 的預設行為,會回傳所有資料列(包含重複的資料列),與 SELECT DISTINCT 相對。

### SELECT AS STRUCT

- Struct 延伸解釋

**結構化物件的概念。SELECT AS STRUCT 允許我們將多個欄位的值組合成一個單一的結構,這個結構可以包含不同資料類型的欄位。**

舉例:

| date | product_type | sales_amount |
| --- | --- | --- |
| 2023-01-01 | A | 100 |
| 2023-01-01 | B | 150 |
| 2023-01-02 | A | 120 |
| 2023-01-02 | B | 200 |

**使用 STRUCT,我們可以將這些欄位的值組合成一個結構,例如:**

```sql
-- 使用 STRUCT 建立結構化物件
SELECT STRUCT(date AS sales_date, product_type AS type, sales_amount AS amount) AS sales_data
FROM sales;
```

| sales_data |
| --- |
| {sales_date: '2023-01-01', type: 'A', amount: 100} |
| {sales_date: '2023-01-01', type: 'B', amount: 150} |
| {sales_date: '2023-01-02', type: 'A', amount: 120} |
| {sales_date: '2023-01-02', type: 'B', amount: 200} |

> Struct 是一種由 key 與 value 組成、帶有屬性的資料類型,類似於 Python 的 dictionary,可以用來處理非常複雜的資料結構。

select struct data type 資料:

```sql
WITH table AS (
  SELECT STRUCT('Jed' AS name, '25' AS age) AS personal_info
)
SELECT personal_info.name FROM table

/*-----+------*
 | row | name |
 +-----+------+
 | 1   | Jed  |
 *-----+------*/
```

STRUCT to JSON:

```sql
SELECT TO_JSON(STRUCT(1 AS id, [10,20] AS coordinates)) AS pt;

/*--------------------------------+
 | pt                             |
 +--------------------------------+
 | {"coordinates":[10,20],"id":1} |
 +--------------------------------*/
```

### SELECT AS VALUE

把 STRUCT 的值當作 value table 輸出:

```sql
SELECT AS VALUE STRUCT(1 AS a, 2 AS b) xyz
```

![upload_dbe062e6c69147728b621c78f86a79fb](https://hackmd.io/_uploads/r1ltBKcw1l.png)

### 補充:SELECT STRUCT attribute value array

可以透過 `.屬性` 的方式 select 出 array 的全部 element。

建立一個表:

![upload_8189d2f67d59ae8457993b658406c85a](https://hackmd.io/_uploads/H1EFrF9PJg.png)

```sql
WITH table AS (
  SELECT STRUCT("Alice" AS name, 30 AS age, ["reading", "hiking", "cooking"] AS interests) AS Person
)
SELECT Person.interests FROM table
```

![upload_7a793efa6dcfc436978aa6c856d5dff2](https://hackmd.io/_uploads/HkE9SY9PJe.png)

### UNNEST

要把 array 轉換成 row 可以使用此方法,且可以用 AS 來定義 select 出來的 column 名稱。

> 注意 UNNEST 不保證排序,如果要恢復原本排序需要使用 WITH OFFSET 並 ORDER BY offset

```sql
SELECT *
FROM UNNEST(['foo', 'bar', 'baz', 'qux', 'corge', 'garply', 'waldo', 'fred'])
  AS element
WITH OFFSET AS offset
ORDER BY offset;

/*----------+--------*
 | element  | offset |
 +----------+--------+
 | foo      | 0      |
 | bar      | 1      |
 | baz      | 2      |
 | qux      | 3      |
 | corge    | 4      |
 | garply   | 5      |
 | waldo    | 6      |
 | fred     | 7      |
 *----------+--------*/
```

省略 UNNEST 也能達到一樣的效果。先看有使用 UNNEST 的寫法:

```sql
WITH Coordinates AS (SELECT [1,2] AS position)
SELECT results FROM Coordinates, UNNEST(Coordinates.position) AS results;
```

省略 UNNEST,與上面查詢結果一致:

```sql
WITH Coordinates AS (SELECT [1,2] AS position)
SELECT results FROM Coordinates, Coordinates.position AS results;
```
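補充一個小示意,把前面 STRUCT 與 UNNEST 的用法串起來(People 表與其中的欄位都是假設的範例資料,不是官方範例):先建立 `ARRAY<STRUCT>` 欄位,再用 UNNEST 展開成 row,最後以 `.` 取出 STRUCT 的屬性。

```sql
-- 假設性的範例資料:每個 row 有一個由 STRUCT 組成的 array
WITH People AS (
  SELECT 'Alice' AS name,
         [STRUCT('reading' AS interest, 5 AS years),
          STRUCT('hiking' AS interest, 2 AS years)] AS hobbies
  UNION ALL
  SELECT 'Bob',
         [STRUCT('cooking' AS interest, 1 AS years)]
)
-- 用 UNNEST 把 ARRAY<STRUCT> 展開成多個 row,再以 . 取出 STRUCT 欄位
SELECT
  name,
  h.interest,
  h.years
FROM People, UNNEST(hobbies) AS h
ORDER BY name, h.interest;
```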
![upload_04920969ff8aa0d023c278561a2f71bc](https://hackmd.io/_uploads/BJvirt5wye.png)

**UNNEST NULL & element NULL**

- UNNEST 並不會顯示 NULL 或 empty array 的 row
- 但如果 NULL 是包含在 array(或其他資料型態)裡的 element,則會顯示

例如:建立一個 table,裡面有 3 個 row:

```sql
WITH example_table AS (
  SELECT 1 AS id, [1, 2, 3] AS values
  UNION ALL
  SELECT 2 AS id, NULL AS values
  UNION ALL
  SELECT 3 AS id, [] AS values
)
SELECT * FROM example_table
```

![upload_7fd86e3bc06e3cd29eb6c9fca0f9124f](https://hackmd.io/_uploads/BJOQ8Fqw1x.png)

對 array 使用 UNNEST:

```sql
SELECT id, value
FROM example_table, UNNEST(values) AS value;
```

可以發現 row 2、3 並不會顯示出來:

![upload_1b13b8315f91d7c6d20ec1616486d95e](https://hackmd.io/_uploads/BJBS8Fqwyx.png)

接著建立 table,裡面 1 個 row,定義 array 裡面有 NULL element:

```sql
WITH example_table AS (
  SELECT 1 AS id, [1, 2, NULL, 3] AS values
)
SELECT id, value
FROM example_table, UNNEST(values) AS value;
```

query 可以看到 NULL 會被列出來:

![upload_082d78759f1e2dea1e4040b7248ee966](https://hackmd.io/_uploads/ByyLUY9Dyx.png)

## FROM

bigquery 可以 FROM 多個 table 做處理:

```sql
SELECT commodity, price, sales_volume
FROM
  [Jed-store-jp],  # FROM jp、us、tw table
  [Jed-store-us],
  [Jed-store-tw]
ORDER BY commodity DESC
LIMIT 1000
```

將多個 with table 合成一張 table:

```sql
WITH
Players AS (
  SELECT 'gorbie' AS username, 29 AS level, 'red' AS team UNION ALL
  SELECT 'junelyn', 2, 'blue' UNION ALL
  SELECT 'corba', 43, 'green'),
NPCs AS (
  SELECT 'niles' AS username, 'red' AS team UNION ALL
  SELECT 'jujul', 'red' UNION ALL
  SELECT 'effren', 'blue'),
Mascots AS (
  SELECT 'cardinal' AS mascot, 'red' AS team UNION ALL
  SELECT 'parrot', 'green' UNION ALL
  SELECT 'finch', 'blue' UNION ALL
  SELECT 'sparrow', 'yellow')
SELECT * FROM (
  SELECT username, team FROM Players
  UNION ALL
  SELECT username, team FROM NPCs);
```

![upload_7c14a49b75095cbfa32f9dbdacb4eff4](https://hackmd.io/_uploads/rJaU8Y5w1e.png)

> **UNION DISTINCT 會自動去除重複的資料列,而 UNION ALL 會保留重複的資料列**

### **Table subqueries**

bigquery 也一樣能用 **subqueries**:

```sql
SELECT results.username
FROM (SELECT * FROM Players) AS results;

/*-----------*
 | username  |
 +-----------+
 | gorbie    |
 | junelyn   |
 | corba     |
 *-----------*/
```

加入條件子句:

```sql
SELECT username
FROM (
  WITH red_team AS (SELECT * FROM NPCs WHERE team = 'red')
  SELECT * FROM red_team
);

/*-----------*
 | username  |
 +-----------+
 | niles     |
 | jujul     |
 *-----------*/
```

### nested data structure

在 FROM 子句中,可以透過 field_path 直接 FROM 巢狀的資料結構:

```sql
SELECT * FROM T1 t1, t1.array_column;

SELECT * FROM T1 t1, t1.struct_column.array_field;

SELECT (SELECT ARRAY_AGG(c) FROM t1.array_column c) FROM T1 t1;

SELECT a.struct_field1 FROM T1 t1, t1.array_of_structs a;

SELECT (SELECT STRING_AGG(a.struct_field1) FROM t1.array_of_structs a) FROM T1 t1;
```

### nested data structure 練習

建立 table schema:

- id [str]
- dob [str]
- last_name [str]
- first_name [str]
- addresses [record]
    - status
    - address
    - city
    - state
    - zip
    - numberOfYears

![upload_6a6a9c5c3847ef381073155e85ccc192](https://hackmd.io/_uploads/rJluIFqPyx.png)

> **Record 資料結構的 Nested 最大深度上限為 15 層**
>
> **注意!在建立 column 名稱時千萬要小心不要含空白,因為系統不會擋**

- 資料結構

```json
[
  {
    "id": "1",
    "first_name": "John",
    "last_name": "Doe",
    "dob": "1968-01-22",
    "addresses": [
      {
        "status": "current",
        "address": "123 First Avenue",
        "city": "Seattle",
        "state": "WA",
        "zip": "11111",
        "numberOfYears": "1"
      },
      {
        "status": "previous",
        "address": "456 Main Street",
        "city": "Portland",
        "state": "OR",
        "zip": "22222",
        "numberOfYears": "5"
      }
    ]
  },
  {
    "id": "2",
    "first_name": "Jane",
    "last_name": "Doe",
    "dob": "1980-10-16",
    "addresses": [
      {
        "status": "current",
        "address": "789 Any Avenue",
        "city": "New York",
        "state": "NY",
        "zip": "33333",
        "numberOfYears": "2"
      },
      {
        "status": "previous",
        "address": "321 Main Street",
        "city": "Hoboken",
        "state": "NJ",
        "zip": "44444",
        "numberOfYears": "3"
      }
    ]
  }
]
```

- 建立數據 insert data

```sql
INSERT INTO test.qq (id, first_name, last_name, dob, addresses)
VALUES ("1", "John", "Doe", "1968-01-22",
  [
    ("current", "123 First Avenue", "Seattle", "WA", "11111", "1"),
    ("previous", "456 Main Street", "Portland", "OR", "22222", "5")
  ]);

INSERT INTO test.qq (id, first_name, last_name, dob, addresses)
VALUES ("2", "John", "Doe", "1980-10-16",
  [
    ("current", "789 Any Avenue", "New York", "NY", "33333", "2"),
    ("previous", "321 Main Street", "Hoboken", "NJ", "44444", "3")
  ]);
```

![upload_417c406daf76e4286f94c2c1a0539b25](https://hackmd.io/_uploads/SJiFItqPJe.png)

select nested array element:列出每個 row 的 addresses 中 index 0 的 address:

```sql
SELECT first_name, last_name, addresses[offset(0)].address
FROM test.qq;
```

![upload_48257acb7a6c15d3f9a67f8f27b712c9](https://hackmd.io/_uploads/SkG0LK9w1g.png)

接著使用 CROSS JOIN UNNEST(addresses) 展開 addresses 裡面所有 element,並篩選掉 state 為 'NY' 的資料:

```sql
SELECT first_name, last_name, a.address, a.state
FROM test.qq
CROSS JOIN UNNEST(addresses) AS a
WHERE a.state != 'NY';
```

![image](https://hackmd.io/_uploads/HyqxwtcD1l.png)

![image](https://hackmd.io/_uploads/r1DbvK5v1g.png)

### PIVOT

旋轉資料的操作,可以將資料列 (row) 轉換為欄位 (column),接在 FROM 後面。

建立高麗菜 (Kale) 與蘋果 (Apple) 每一季度銷售的表:

```sql
WITH Produce AS (
  SELECT 'Kale' as product, 51 as sales, 'Q1' as quarter UNION ALL
  SELECT 'Kale', 123, 'Q2' UNION ALL
  SELECT 'Kale', 45, 'Q3' UNION ALL
  SELECT 'Kale', 3, 'Q4' UNION ALL
  SELECT 'Apple', 77, 'Q1' UNION ALL
  SELECT 'Apple', 0, 'Q2' UNION ALL
  SELECT 'Apple', 25, 'Q3' UNION ALL
  SELECT 'Apple', 15, 'Q3' UNION ALL
  SELECT 'Apple', 2, 'Q4')
SELECT * FROM Produce
```

![image](https://hackmd.io/_uploads/HkezwK9PJx.png)

接著用 PIVOT 將季度轉成欄位,計算 Kale 與 Apple 每一季度的銷售額總和:

```sql
WITH Produce AS (
  SELECT 'Kale' as product, 51 as sales, 'Q1' as quarter UNION ALL
  SELECT 'Kale', 123, 'Q2' UNION ALL
  SELECT 'Kale', 45, 'Q3' UNION ALL
  SELECT 'Kale', 3, 'Q4' UNION ALL
  SELECT 'Apple', 77, 'Q1' UNION ALL
  SELECT 'Apple', 0, 'Q2' UNION ALL
  SELECT 'Apple', 25, 'Q3' UNION ALL
  SELECT 'Apple', 15, 'Q3' UNION ALL
  SELECT 'Apple', 2, 'Q4')
SELECT * FROM
  (SELECT product, sales, quarter FROM Produce)
  PIVOT(SUM(sales) FOR quarter IN ('Q1', 'Q2', 'Q3', 'Q4'))
```

![image](https://hackmd.io/_uploads/rkjGwYcDke.png)

顯示每年每季度的資料:

```sql
WITH Produce AS (
  SELECT 'Kale' as product, 51 as sales, 'Q1' as quarter, 2020 as year UNION ALL
  SELECT 'Kale', 23, 'Q2', 2020 UNION ALL
  SELECT 'Kale', 45, 'Q3', 2020 UNION ALL
  SELECT 'Kale', 3, 'Q4', 2020 UNION ALL
  SELECT 'Kale', 70, 'Q1', 2021 UNION ALL
  SELECT 'Kale', 85, 'Q2', 2021 UNION ALL
  SELECT 'Apple', 77, 'Q1', 2020 UNION ALL
  SELECT 'Apple', 0, 'Q2', 2020 UNION ALL
  SELECT 'Apple', 1, 'Q1', 2021)
SELECT * FROM Produce
```

![image](https://hackmd.io/_uploads/Hy4QPt9vkx.png)

依年度計算 PIVOT 結果,如果某季度沒有資料,聚合結果會顯示 NULL:

![image](https://hackmd.io/_uploads/HJ9XwK5Pkg.png)

note:

- 不允許在 PIVOT 前面使用 WITH OFFSET 語法
- 不允許與 FOR SYSTEM_TIME AS OF 一起使用
- 子查詢不能是 **STRUCT**
- 例子

```sql
WITH sales AS (
  SELECT DATE '2023-01-01' AS date, 'A' AS product_type, 100 AS sales_amount UNION ALL
  SELECT DATE '2023-01-01' AS date, 'B' AS product_type, 150 AS sales_amount UNION ALL
  SELECT DATE '2023-01-02' AS date, 'A' AS product_type, 120 AS sales_amount UNION ALL
  SELECT DATE '2023-01-02' AS date, 'B' AS product_type, 200 AS sales_amount
)
SELECT *
FROM (
  SELECT STRUCT(date, product_type, sales_amount) AS sales_struct
  FROM sales
)
PIVOT (
  SUM(sales_amount)
  FOR product_type IN ('A' AS A, 'B' AS B)
);
-- sales_amount 為 STRUCT 的屬性,但 PIVOT 是不允許這種操作的
```
- PIVOT 在呼叫 function 時必須使用 **aggregate_function(ex: SUM、AVG、MAX、MIN 等)**
- 使用聚合函數只接受一個參數,不允許兩個參數,ex: SUM(Q3, Q4)
- PIVOT 會導致資料 row 的排列方式改變

### UNPIVOT

旋轉資料的操作,可以將欄位 (column) 轉換為資料列 (row),接在 FROM 後面。

建立 table:

```sql
WITH sales AS (
  SELECT DATE '2023-01-01' AS date, 100 AS product_A, 150 AS product_B
  UNION ALL
  SELECT DATE '2023-01-02' AS date, 120 AS product_A, 200 AS product_B
)
SELECT * FROM sales
```

![image](https://hackmd.io/_uploads/HkGEwFqvJl.png)

旋轉:

```sql
WITH sales AS (
  SELECT DATE '2023-01-01' AS date, 100 AS product_A, 150 AS product_B
  UNION ALL
  SELECT DATE '2023-01-02' AS date, 120 AS product_A, 200 AS product_B
)
SELECT date, product_type, sales_amount
FROM sales
UNPIVOT (
  sales_amount FOR product_type IN (product_A, product_B)
) AS unpivoted_sales;
```

![image](https://hackmd.io/_uploads/BJcEDtqvJe.png)

**single_column_unpivot**

```sql
WITH sales AS (
  SELECT DATE '2023-01-01' AS date, 100 AS product_A, NULL AS product_B
  UNION ALL
  SELECT DATE '2023-01-02' AS date, 120 AS product_A, 200 AS product_B
)
SELECT * FROM sales
```

![image](https://hackmd.io/_uploads/BylBPF9DJl.png)

```sql
WITH sales AS (
  SELECT DATE '2023-01-01' AS date, 100 AS product_A, NULL AS product_B
  UNION ALL
  SELECT DATE '2023-01-02' AS date, 120 AS product_A, 200 AS product_B
)
SELECT *
FROM sales
UNPIVOT INCLUDE NULLS (
  value FOR product_type IN (product_A, product_B)
);
```

![image](https://hackmd.io/_uploads/SycBwKqD1e.png)

預設為 **`EXCLUDE NULLS`**(去除 NULL 的資料),可改用 **`INCLUDE NULLS`** 保留 NULL。

**multi_column_unpivot**

建立表:

```sql
WITH Produce AS (
  SELECT 'Kale' as product, 51 as Q1, 23 as Q2, 45 as Q3, 3 as Q4
  UNION ALL
  SELECT 'Apple', 77, 0, 25, 2)
SELECT * FROM Produce
```

![image](https://hackmd.io/_uploads/rky8wFqwJl.png)

把四個季度的欄位都轉換成資料列:

```sql
WITH Produce AS (
  SELECT 'Kale' as product, 51 as Q1, 23 as Q2, 45 as Q3, 3 as Q4
  UNION ALL
  SELECT 'Apple', 77, 0, 25, 2)
SELECT * FROM Produce
UNPIVOT(sales FOR quarter IN (Q1, Q2, Q3, Q4))
```

![image](https://hackmd.io/_uploads/rkNUvY5vye.png)

把 Q1 與 Q2 合為一個 row(前半年),Q3 與 Q4 合為一個 row(後半年):

> 不能在同一個 column 集合中使用相同的名稱,例如 `(Q1, Q1)` 會導致錯誤。
>
> 在不同的 column 集合中使用相同的名稱是可以的,例如 `(Q1, Q2), (Q1, Q3)`,這個我試過,但不行,不確定是不是我理解錯誤了 [REF](https://cloud.google.com/bigquery/docs/reference/standard-sql/query-syntax#unpivot_operator)

```sql
WITH Produce AS (
  SELECT 'Kale' as product, 51 as Q1, 23 as Q2, 45 as Q3, 3 as Q4
  UNION ALL
  SELECT 'Apple', 77, 0, 25, 2)
SELECT * FROM Produce
UNPIVOT(
  (first_half_sales, second_half_sales)
  FOR semesters IN ((Q1, Q2) AS 'semester_1', (Q3, Q4) AS 'semester_2'))
```

![image](https://hackmd.io/_uploads/SyiUPK9wye.png)

### TABLESAMPLE

隨機抽取資料樣本。**與 LIMIT 子句不同,TABLESAMPLE 從表中隨機傳回資料子集。**

> 當不需要精確答案、而資料表很大時,這個運算子很有用:取樣可以回傳各式各樣的記錄,同時避免掃描和處理整個表格的成本。每次執行查詢都可能回傳不同的結果,因為每次執行都會獨立計算樣本。

**從表格 your_table 中取得約 5% 的隨機樣本:**

```sql
SELECT * FROM your_table TABLESAMPLE SYSTEM (5 PERCENT);
```

## JOIN

基本上與 SQL join 差不多。

LEFT JOIN

```sql
FROM A LEFT OUTER JOIN B ON A.w = B.y

/*
Table A       Table B       Result
+-------+     +-------+     +---------------------------+
| w | x |  *  | y | z |  =  | w | x | y    | z    |
+-------+     +-------+     +---------------------------+
| 1 | a |     | 2 | k |     | 1 | a | NULL | NULL |
| 2 | b |     | 3 | m |     | 2 | b | 2    | k    |
| 3 | c |     | 3 | n |     | 3 | c | 3    | m    |
| 3 | d |     | 4 | p |     | 3 | c | 3    | n    |
+-------+     +-------+     | 3 | d | 3    | m    |
                            | 3 | d | 3    | n    |
                            +---------------------------+
*/
```

CROSS JOIN

```sql
FROM A CROSS JOIN B

/*
Table A       Table B       Result
+-------+     +-------+     +---------------+
| w | x |  *  | y | z |  =  | w | x | y | z |
+-------+     +-------+     +---------------+
| 1 | a |     | 2 | c |     | 1 | a | 2 | c |
| 2 | b |     | 3 | d |     | 1 | a | 3 | d |
+-------+     +-------+     | 2 | b | 2 | c |
                            | 2 | b | 3 | d |
                            +---------------+
*/
```
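上面的 LEFT JOIN 與 CROSS JOIN 都是示意圖,下面補一個可以直接執行的小例子(A、B 的資料是依照圖解假設的示意資料),實際跑一次 LEFT JOIN:

```sql
-- 依照上面圖解假設的示意資料
WITH A AS (
  SELECT 1 AS w, 'a' AS x UNION ALL
  SELECT 2, 'b' UNION ALL
  SELECT 3, 'c' UNION ALL
  SELECT 3, 'd'
),
B AS (
  SELECT 2 AS y, 'k' AS z UNION ALL
  SELECT 3, 'm' UNION ALL
  SELECT 3, 'n' UNION ALL
  SELECT 4, 'p'
)
-- LEFT OUTER JOIN:A 的每一列都會保留,沒對到的 B 欄位補 NULL
SELECT A.w, A.x, B.y, B.z
FROM A
LEFT OUTER JOIN B ON A.w = B.y
ORDER BY A.w, A.x, B.z;
```

同一組資料若改成 CROSS JOIN(或下面的逗號寫法),不需要 ON 條件,會得到 4 × 4 = 16 列的笛卡兒積。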
CROSS JOIN 也能用逗號來表示:

```sql
FROM A, B

/*
Table A       Table B       Result
+-------+     +-------+     +---------------+
| w | x |  *  | y | z |  =  | w | x | y | z |
+-------+     +-------+     +---------------+
| 1 | a |     | 2 | c |     | 1 | a | 2 | c |
| 2 | b |     | 3 | d |     | 1 | a | 3 | d |
+-------+     +-------+     | 2 | b | 2 | c |
                            | 2 | b | 3 | d |
                            +---------------+
*/
```

> CROSS JOIN 的逗號寫法不能在括號裡面操作:`FROM (A, B)  -- INVALID`

ON 與 USING Operator

- ON:**用來指定連接兩個表格的條件,這個條件可以是一個或多個欄位之間的邏輯關係**
- USING:**用於連接兩個表格時指定共同欄位的名稱。這個關鍵字假定兩個表格中的欄位具有相同的名稱,並且只連接這些共同的欄位。**

但兩者可以返回一樣的結果:

```sql
FROM A JOIN B ON A.x = B.x
FROM A JOIN B USING (x)

/*
Table A   Table B   Result ON     Result USING
+---+     +---+     +-------+     +---+
| x |  *  | x |  =  | x | x |     | x |
+---+     +---+     +-------+     +---+
| 1 |     | 2 |     | 2 | 2 |     | 2 |
| 2 |     | 3 |     | 3 | 3 |     | 3 |
| 3 |     | 4 |     +-------+     +---+
+---+     +---+
*/
```

括號優先順序

正常順序是由左至右:

```sql
FROM A JOIN B USING (x) JOIN C USING (x)

-- A JOIN B USING (x)        = result_1
-- result_1 JOIN C USING (x) = result_2
-- result_2                  = return value
```

也可搭配括號來做順序上的處理:

```sql
FROM ( (A JOIN B USING (x)) JOIN C USING (x) )

-- A JOIN B USING (x)        = result_1
-- result_1 JOIN C USING (x) = result_2
-- result_2                  = return value
```

```sql
FROM ( A JOIN (B JOIN C USING (x)) USING (x) )

-- B JOIN C USING (x) = result_1
-- A JOIN result_1    = result_2
-- result_2           = return value
```

如果有使用到 CROSS JOIN 逗號表示子句,則後面的 JOIN 一定要加上括號:

```sql
FROM A, B JOIN C JOIN D ON C.x = D.y ON B.z = C.x    -- INVALID
FROM A, B JOIN (C JOIN D ON C.x = D.y) ON B.z = C.x  -- VALID
```

另外 CROSS JOIN 逗號表示法不能直接接 RIGHT JOIN 與 FULL JOIN,如果要使用必須加上括號:

```sql
FROM A, B RIGHT JOIN C ON TRUE    -- INVALID
FROM A, B FULL JOIN C ON TRUE     -- INVALID

FROM A, (B RIGHT JOIN C ON TRUE)  -- VALID
FROM A, (B FULL JOIN C ON TRUE)   -- VALID
```

**Correlated join operation** 未看完

### WHERE

其實很單純,就是符合條件為 True 的 row 會顯示,False 則不顯示:

```sql
SELECT * FROM Roster
WHERE SchoolID = 52;
```

```sql
SELECT * FROM Roster
WHERE STARTS_WITH(LastName, "Mc") OR STARTS_WITH(LastName, "Mac");
```

### GROUP BY

分組及排序選項:

- `NULLS FIRST | NULLS LAST`
    - `NULLS FIRST`:將空值排序在非空值之前。
    - `NULLS LAST`:將空值排序在非空值之後。
- `ASC | DESC`:依升序或降序對結果進行排序,`ASC` 是預設值。如果未指定 `NULLS FIRST` 或 `NULLS LAST`:
    - 排序順序為升序時,預設套用 `NULLS FIRST`。
    - 排序順序為降序時,預設套用 `NULLS LAST`。

可以對 aliases 做 GROUP BY:

```sql
WITH PlayerStats AS (
  SELECT 'Adams' as LastName, 'Noam' as FirstName, 3 as PointsScored UNION ALL
  SELECT 'Buchanan', 'Jie', 0 UNION ALL
  SELECT 'Coolidge', 'Kiran', 1 UNION ALL
  SELECT 'Adams', 'Noam', 4 UNION ALL
  SELECT 'Buchanan', 'Jie', 13)
SELECT SUM(PointsScored) AS total_points, LastName AS last_name
FROM PlayerStats
GROUP BY last_name;

/*--------------+-----------+
 | total_points | last_name |
 +--------------+-----------+
 | 7            | Adams     |
 | 13           | Buchanan  |
 | 1            | Coolidge  |
 +--------------+-----------*/
```

### HAVING

在以下範例中,資料表 employee 包含了員工的資訊,包括所在部門 (department) 和工資 (salary)。此查詢按部門分組,計算每個部門的平均工資 (AVG(salary)),然後使用 HAVING 子句篩選出平均工資大於 50000 的部門。

整體來說,HAVING 子句對分組後的聚合結果進行條件過濾,而 WHERE 子句對原始資料列進行條件過濾。

```sql
SELECT
  department,
  AVG(salary) AS avg_salary
FROM employee
GROUP BY department
HAVING AVG(salary) > 50000;
```
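為了對照 WHERE 與 HAVING 的差異,下面是一個可以直接執行的小示意(employee 的資料為假設值,欄位沿用上面的 department 與 salary):

```sql
-- 假設性的員工資料
WITH employee AS (
  SELECT 'HR'  AS department, 48000 AS salary UNION ALL
  SELECT 'HR',  52000 UNION ALL
  SELECT 'ENG', 70000 UNION ALL
  SELECT 'ENG', 90000 UNION ALL
  SELECT 'OPS', 40000
)
SELECT
  department,
  AVG(salary) AS avg_salary
FROM employee
WHERE salary > 30000           -- WHERE:先過濾個別資料列,再分組
GROUP BY department
HAVING AVG(salary) > 50000;    -- HAVING:過濾分組後的聚合結果
```

此例中 HR 的平均薪資為 50000、OPS 為 40000,都沒有大於 50000,只有 ENG(平均 80000)會出現在結果中。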
REF

- [https://cloud.google.com/bigquery/docs/nested-repeated#console](https://cloud.google.com/bigquery/docs/nested-repeated#console)
- [https://peilee-98185.medium.com/bigquery-的巢狀結構-array-struct-2f1c9bc1999](https://peilee-98185.medium.com/bigquery-%E7%9A%84%E5%B7%A2%E7%8B%80%E7%B5%90%E6%A7%8B-array-struct-2f1c9bc1999)
- [https://support.google.com/analytics/answer/4419694?hl=zh-Hant#zippy=%2C本文內容](https://support.google.com/analytics/answer/4419694?hl=zh-Hant#zippy=%2C%E6%9C%AC%E6%96%87%E5%85%A7%E5%AE%B9)
- [https://stackoverflow.com/questions/26272514/how-to-pivot-table-in-bigquery](https://stackoverflow.com/questions/26272514/how-to-pivot-table-in-bigquery)
- [https://github.com/uncleguanghui/pyflink_learn/blob/master/examples/README.md](https://github.com/uncleguanghui/pyflink_learn/blob/master/examples/README.md)
- [https://medium.com/@dataproducts/introduction-to-stateful-stream-processing-49447863931a](https://medium.com/@dataproducts/introduction-to-stateful-stream-processing-49447863931a)
- [https://nightlies.apache.org/flink/flink-docs-master/zh/](https://nightlies.apache.org/flink/flink-docs-master/zh/)
- [https://hackmd.io/bXWIJVEFR1mhpj_0qavYEA?view](https://hackmd.io/bXWIJVEFR1mhpj_0qavYEA?view)
- [https://speakerdeck.com/sucitw/building-data-pipelines-on-apache-nifi-with-python?slide=2](https://speakerdeck.com/sucitw/building-data-pipelines-on-apache-nifi-with-python?slide=2)