# Effective Python 2E
### Chapter 7: Concurrency and Parallelism
**`subprocess` and `threading`** (Items 52--57)
[David Ye](https://dwye.dev/) @ Houzz
<aside class="notes">
First half: Items 52 to 57, subprocess, threading<br />
Second half: Items 59 to 64, concurrent.futures, coroutines<br />
All the new and exciting stuff is in the second half, so if you don't want to hear about the old things you can leave right after subprocess (kidding)
</aside>
---
## Outline
- [When Concurrency is Necessary (Item 56)](#/2)
- [`subprocess` (Item 52)](#/3)
- [`threading` (Items 53--55, 57)](#/4)
---
## When Concurrency is Necessary
**Concurrency** (並行):
- enable many distinct paths of execution.
- do many different things *seemingly* at the same time.
**Parallelism** (平行):
- *actually* doing many different things at the same time.
<aside class="notes">
Concurrency is about program structure: splitting a program into tasks that can run independently (e.g., device drivers can each run on their own), without requiring them to run in parallel. Parallelism is about execution: multiple things actually running at the same time.
<br />
https://hackmd.io/@sysprog/concurrency/https%3A%2F%2Fhackmd.io%2F%40sysprog%2FS1AMIFt0D
</aside>
----
### Item 56: Know How to Recognize When Concurrency Is Necessary
- when each unit of work blocks on I/O, do that I/O concurrently
```python=
def game_logic(state, neighbors):
...
# Do some blocking input/output in here:
data = my_socket.recv(100)
...
```
- **Fan-out**: spawn new concurrent units of work (see the sketch below)
- **Fan-in**: wait for those concurrent units of work to finish
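A minimal sketch of the two moves with one `Thread` per unit of work; `game_logic_thread` and the list of work items here are hypothetical stand-ins for the blocking call above:
```python=
from threading import Thread

def game_logic_thread(item):
    ...  # hypothetical wrapper that does the blocking I/O for one item

work_items = [object() for _ in range(10)]  # hypothetical units of work

# Fan-out: start one concurrent unit of work per item
threads = [Thread(target=game_logic_thread, args=(item,)) for item in work_items]
for thread in threads:
    thread.start()

# Fan-in: wait for every concurrent unit of work to finish
for thread in threads:
    thread.join()
```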
<aside class="notes">
The point is any work that waits on input, or on system calls and the like, anything that would leave Python sitting idle; that is the work you fan out.
</aside>
---
## `subprocess`
### Item 52: Use `subprocess` to Manage Child Processes
[PEP-324](https://peps.python.org/pep-0324/#replacing-os-popen)
- provides a higher-level API replacing `os.system`, `os.popen`, and friends
- manages the child's input/output streams
<aside class="notes">
Proposed as a replacement for the os helpers; used to run system commands.
</aside>
----
`subprocess.run`: run and wait
```python=
res = subprocess.run(
['echo', 'Hello from the child process'],
capture_output=True,
)
# raises subprocess.CalledProcessError when the return code is non-zero
res.check_returncode()
print(res.stdout)
# b'Hello from the child process\n'
```
```python=
subprocess.run(["test", "-d", "someFolder"], check=True)
# CalledProcessError: Command '['test', '-d', 'someFolder']' returned non-zero exit status 1.
```
<aside class="notes">
You can use check=True to catch errors; for example, test -d someFolder raises when the folder does not exist.
</aside>
----
- `subprocess.Popen` does not wait for the child
- `Popen.poll()`
  - child has terminated: returns its return code
  - otherwise: returns `None`
```python=
import time

proc = subprocess.Popen(['sleep', '1'])  # returns a Popen object without waiting
while proc.poll() is None:
    print('Working...')
    time.sleep(0.4)  # do some other work instead of spinning in a tight loop
print('Exit status', proc.poll())
# Working...
# Working...
# Working...
# Exit status 0
```
----
`Popen.communicate`: wait and get the output
```python=
proc = subprocess.Popen(
    ..., stdout=subprocess.PIPE,  # open a pipe to the child's stdout
)
try:
outs, errs = proc.communicate(timeout=15)
except subprocess.TimeoutExpired:
proc.kill()
outs, errs = proc.communicate()
```
<aside class="notes">
Lifted from the official docs.
</aside>
----
manage streams
```python=
def run_encrypt(data):
    env = os.environ.copy()
    env['password'] = 'zf7ShyBhZOraQDdE/FiZpm/m/8f9X+M1'
    proc = subprocess.Popen(
        ['openssl', 'enc', '-des3', '-pass', 'env:password'],
        env=env,
        stdin=subprocess.PIPE,   # open a pipe to the child's stdin
        stdout=subprocess.PIPE,  # open a pipe to the child's stdout
    )
    proc.stdin.write(data)  # write the data to the child's stdin
    proc.stdin.flush()      # ensure the child actually receives the input
    return proc
```
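One way to exercise `run_encrypt` (a sketch, assuming `openssl` is available on the PATH): feed in random bytes, then use `communicate` to close stdin, wait, and read the encrypted output.
```python=
import os

procs = []
for _ in range(3):
    data = os.urandom(10)  # some random bytes to encrypt
    procs.append(run_encrypt(data))

for proc in procs:
    out, _ = proc.communicate()  # close stdin, wait for exit, read stdout
    print(out[-10:])             # last few bytes of the ciphertext
```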
<aside class="notes">
The flush method of io.BufferedWriter pushes the data in the write buffer into the stream, so the child's stdin is guaranteed to receive what write put there.
</aside>
----
### Chaining Parallel Processes
pipe one stdout to another stdin
```python=
def run_hash(input_stdin):
return subprocess.Popen(
['openssl', 'dgst', '-whirlpool', '-binary'],
stdin=input_stdin,
stdout=subprocess.PIPE
)
```
```python=
encrypt_procs = []
hash_procs = []
for _ in range(3):
data = os.urandom(100)
encrypt_proc = run_encrypt(data)
encrypt_procs.append(encrypt_proc)
    # pipe: the downstream process reads directly from the upstream's stdout
    hash_proc = run_hash(encrypt_proc.stdout)
    hash_procs.append(hash_proc)
    # Allow encrypt_proc to receive a SIGPIPE if hash_proc exits first,
    # and stop communicate() below from reading a pipe that now belongs downstream
    encrypt_proc.stdout.close()
    encrypt_proc.stdout = None
```
```python=
# fan in
for proc in encrypt_procs:
proc.communicate()
for proc in hash_procs:
out, _ = proc.communicate()
print(out[-10:])
```
<aside class="notes">
Does anyone understand the bottom two lines?<br />
Ensure that the child consumes the input stream and the communicate() method doesn't inadvertently steal input from the child. Also lets SIGPIPE propagate to the upstream process if the downstream process dies.<br />
https://stackoverflow.com/questions/23074705/ <br />
1. Closing our handle tells the downstream when the upstream is finished, so the data written in doesn't have to wait around to be fully read<br />
2. Setting stdout to None keeps communicate() from trying to read a pipe that has been handed to the downstream process
</aside>
----
### Timeout Subprocess
```python=
proc = subprocess.Popen(['sleep', '10'])
try:
    proc.communicate(timeout=0.1)  # raises TimeoutExpired after 0.1 s
except subprocess.TimeoutExpired:
    proc.terminate()  # send SIGTERM to the child process
    proc.wait()       # wait for it to exit and collect the return code
print('Exit status', proc.poll())
>>>
Exit status -15  # negative return code: killed by signal 15 (SIGTERM)
```
<aside class="notes">
It was supposed to run for 10 seconds, but we time out after 0.1 s and kill it with proc.terminate().
</aside>
---
## `threading`
> In CPython, the global interpreter lock, or GIL, is a mutex that protects access to Python objects, preventing multiple threads from executing Python bytecodes at once.
> One thread runs Python, while N others sleep or await I/O
<aside class="notes">
The GIL makes sure only one thread runs Python bytecode at a time, which is what keeps the interpreter thread-safe.
</aside>
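A quick way to see this effect (a sketch; the `cpu_bound` helper and the timings are illustrative): a CPU-bound function run in two threads takes roughly as long as running it twice serially, because only one thread executes bytecode at a time.
```python=
import time
from threading import Thread

def cpu_bound(n=10_000_000):
    # pure-Python arithmetic: holds the GIL for the whole loop
    while n > 0:
        n -= 1

start = time.time()
cpu_bound()
cpu_bound()
print('serial :', time.time() - start)

start = time.time()
threads = [Thread(target=cpu_bound) for _ in range(2)]
for thread in threads:
    thread.start()
for thread in threads:
    thread.join()
print('threads:', time.time() - start)  # about the same, or worse, under the GIL
```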
----
### Item 53: Use Threads for Blocking I/O, Avoid for Parallelism
```python=
from threading import Thread
threads = [] # fan out 5 threads to wait for I/O
for _ in range(5):
thread = Thread(target=slow_systemcall)
thread.start()
threads.append(thread)
for thread in threads: # fan in
thread.join()
```
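`slow_systemcall` here is assumed to be any I/O-bound function; a fake along the lines of the book's example just blocks in a `select` system call:
```python=
import select
import socket

def slow_systemcall():
    # block in a select() system call for 0.1 s, simulating slow I/O
    select.select([socket.socket()], [], [], 0.1)
```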
<aside class="notes">
The idea is simply to hand a function to the Thread to run; slow_systemcall is an I/O-blocked function.
</aside>
----
### Item 54: Use `Lock` to Prevent Data Races in Threads
The GIL makes individual bytecode operations thread-safe, not your Python data structures.
`+=` is **not atomic**
```python=
counter.count += 1
```
```python=
value = getattr(counter, 'count')
result = value + 1
setattr(counter, 'count', result)
```
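A small demonstration of the race this expansion allows (a sketch; the unlocked `Counter` class and the 5 × 100,000 increments are illustrative, and how many updates get lost varies from run to run):
```python=
from threading import Thread

class Counter:
    def __init__(self):
        self.count = 0

    def increment(self, offset):
        self.count += offset  # read-modify-write: threads can interleave here

def worker(counter, how_many):
    for _ in range(how_many):
        counter.increment(1)

counter = Counter()
how_many = 10**5
threads = [Thread(target=worker, args=(counter, how_many)) for _ in range(5)]
for thread in threads:
    thread.start()
for thread in threads:
    thread.join()
print(f'Expected {5 * how_many}, got {counter.count}')  # may come out lower
```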
<aside class="notes">
When an operation isn't atomic, the thread can be interrupted partway through and another thread can modify the shared data, giving unexpected results. For example, if result = value + 1 runs in two threads before either setattr does, the final value is still 1 instead of 2.
</aside>
----
```python=
from threading import Lock

class LockedCounter:
    def __init__(self):
        self.lock = Lock()  # one Lock instance per counter instance
        self.count = 0

    def increment(self, offset):
        with self.lock:  # mutex lock; makes the whole block atomic
            self.count += offset
```
```python=
how_many = 10**5  # readings per sensor (illustrative count)
counter = LockedCounter()

def worker(sensor_index, how_many, counter):
    for _ in range(how_many):
        ...  # read from the sensor
        counter.increment(1)

threads = []  # fan out one thread per sensor to wait for I/O
for i in range(5):
    thread = Thread(target=worker, args=(i, how_many, counter))
    threads.append(thread)
    thread.start()
for thread in threads:  # fan in
    thread.join()
```
----
### Item 55: Use `Queue` to Coordinate Work Between Threads
An I/O-blocked pipeline, e.g. `download` -> `resize` -> `upload`
**A `Queue` gives you:**
- blocking `get`/`put` operations
- a bounded buffer size
- joining: tracking when every item has been processed
<aside class="notes">
It's basically a structure that waits for the next piece of data to come in.
</aside>
----
```python=
from queue import Queue
queue = Queue(1) # buffer size is 1
def consumer():
print('consumer waiting')
queue.get() # block, wait until some data are put
print('consumer get 1')
queue.get()
print('consumer get 2')
thread = Thread(target=consumer)
thread.start()
print('producer putting')
queue.put(object())
print('producer put 1')
queue.put(object())
print('producer put 2')
thread.join()
```
```
consumer waiting
producer putting
producer put 1
consumer get 1
producer put 2
consumer get 2
```
<aside class="notes">
A simple example: get blocks until data is available, put adds data.
</aside>
----
- `Queue.task_done()` marks the item from the last `get` as processed
- `Queue.join()` blocks until every item that was `put` has been marked done
```python=
task_count = 2
def consumer():
print('consumer waiting')
for i in range(task_count):
queue.get()
print('consumer working', i + 1)
print('consumer done', i + 1)
queue.task_done() # mark this task done
thread = Thread(target=consumer)
thread.start()
print('producer putting')
for i in range(task_count):
queue.put(object())
print('producer put', i + 1)
queue.join()  # block until every task has been marked done
print('producer done')
thread.join()
```
```
consumer waiting
producer putting
producer put 1
consumer working 1
consumer done 1
consumer working 2
consumer done 2
producer put 2
producer done
```
<aside class="notes">
task_done marks the item fetched by the previous get as fully processed.
</aside>
----
### Task Pipeline
#### An iterable queue
```python=
class ClosableQueue(Queue):
SENTINEL = object() # arbitrary mark
def close(self):
self.put(self.SENTINEL)
def __iter__(self):
while True:
item = self.get()
try:
if item is self.SENTINEL:
return # end iter
yield item
finally:
self.task_done() # always mark task done for every iteration
```
#### A worker `get` from this queue and `put` to next queue
```python=
class StoppableWorker(Thread):
def __init__(self, func, in_queue, out_queue):
super().__init__()
self.func = func
self.in_queue = in_queue
self.out_queue = out_queue
def run(self):
for item in self.in_queue: # get
result = self.func(item)
self.out_queue.put(result)
```
<aside class="notes">
This shows wrapping Queue and Thread together to make them easier to use.
</aside>
----
#### Start work!
```python=
download_queue = ClosableQueue()
resize_queue = ClosableQueue()
upload_queue = ClosableQueue()
done_queue = ClosableQueue() # get results here
threads = [
# download, resize, upload are functions
StoppableWorker(download, download_queue, resize_queue),
StoppableWorker(resize, resize_queue, upload_queue),
StoppableWorker(upload, upload_queue, done_queue),
]
for thread in threads:
thread.start()
```
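The `download`, `resize`, and `upload` stage functions are assumed to exist; for a dry run, hypothetical no-op stubs like these are enough:
```python=
def download(item):
    ...  # fetch the item over the network
    return item

def resize(item):
    ...  # resize the downloaded image
    return item

def upload(item):
    ...  # upload the resized image
    return item
```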
```python=
for _ in range(n_jobs):
download_queue.put(object())
download_queue.close()  # put the SENTINEL so the worker stops iterating
download_queue.join()   # block until every item in this queue is marked done
resize_queue.close()
resize_queue.join()
upload_queue.close()
upload_queue.join()
for thread in threads:
thread.join()
```
<aside class="notes">
Each stage gets one thread doing its I/O-blocked operation, so the stages run concurrently; the queues hand data safely from stage to stage, and everything ends up in done_queue.
</aside>
----
#### Use multiple workers for every stage
```python=
def start_threads(count, *args):
threads = [StoppableWorker(*args) for _ in range(count)]
for thread in threads:
thread.start()
return threads
def stop_threads(closable_queue, threads):
    for _ in threads:
        closable_queue.close()  # one SENTINEL per worker, so every worker can stop
    closable_queue.join()
    for thread in threads:
        thread.join()
```
use the function:
```python=
download_threads = start_threads(n_download_worker, download, download_queue, resize_queue)
resize_threads = start_threads(n_resize_worker, resize, resize_queue, upload_queue)
upload_threads = start_threads(n_upload_worker, upload, upload_queue, done_queue)
for _ in range(n_jobs):
download_queue.put(object())
stop_threads(download_queue, download_threads)
stop_threads(resize_queue, resize_threads)
stop_threads(upload_queue, upload_threads)
```
<aside class="notes">
You can start multiple workers for each stage, but not too many (the next slide explains why).
</aside>
----
### Item 57: Avoid Creating New `Thread` Instances for On-demand Fan-out
Costs of creating a new `Thread` instance per unit of work:
- extra memory, roughly 8 MB per thread
- heavy startup overhead
- code changes
  - exceptions in threads must be caught manually (by default they are NOT re-raised to the caller)
  - locks must be added
#### Solutions
- `Queue` (Item 58)
  - cons: code changes, fixed worker count
- **Coroutines** (newer, next part!)
<aside class="notes">
Don't spawn threads on demand; with too many threads things may actually end up slower (even slower than just blocking on I/O).<br />
Reading Item 58 is a waste of time XD
</aside>
---
# The End
Thanks for listening!
- [back to outline](#/1)
{"metaMigratedAt":"2023-06-16T23:19:14.908Z","metaMigratedFrom":"YAML","title":"Effective Python Chp 7-1: Concurrency and Parallelism","breaks":true,"slideOptions":"{\"height\":1000,\"width\":1500,\"theme\":\"white\"}","contributors":"[{\"id\":\"915f29e1-3f9c-4908-bbd4-a58795589e48\",\"add\":23473,\"del\":11243}]"}