# Strategy for Concurrency in Networks

## 1. Data parallelism

### PyTorch:
* single-machine, multi-machine: DDP, i.e. `DistributedDataParallel` (decentralized, synchronous); see the sketch at the end of this note
* parameter server (PS): built on the RPC framework
* asynchronous updates: built on the RPC framework

### TF:
* single-machine: `MirroredStrategy` (decentralized, synchronous), `CentralStorageStrategy` (PS, synchronous); see the sketch at the end of this note
* multi-machine: `MultiWorkerMirroredStrategy` (decentralized, synchronous), `ParameterServerStrategy` (PS, asynchronous)

## 2. Model parallelism

### PyTorch:
* RPC framework; see the sketch at the end of this note

### TF:
* no built-in strategy

## 3. Layer pipelining - pipeline parallelism

### PyTorch:
* RPC framework; see the micro-batching sketch at the end of this note

### TF:
* no built-in strategy

# Concurrency in Training

Factors that influence whether to centralize: network topology, bandwidth, communication latency, parameter update frequency, and desired fault tolerance.
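
Below are minimal sketches of the strategies listed above. First, data parallelism with PyTorch DDP: each process holds a full model replica, and gradients are all-reduced during `backward()`. A hedged sketch, assuming a launch via `torchrun` (which sets `RANK`, `WORLD_SIZE`, `MASTER_ADDR`, and `MASTER_PORT`); the linear model, random data, and hyperparameters are placeholders.

```python
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # torchrun sets RANK/WORLD_SIZE/MASTER_ADDR/MASTER_PORT in the environment.
    dist.init_process_group(backend="gloo")  # "nccl" on GPU clusters
    model = nn.Linear(10, 1)                 # placeholder model
    ddp_model = DDP(model)                   # wraps the replica for gradient all-reduce
    loss_fn = nn.MSELoss()
    optimizer = torch.optim.SGD(ddp_model.parameters(), lr=0.01)
    for _ in range(3):
        x, y = torch.randn(32, 10), torch.randn(32, 1)  # each rank sees its own shard
        optimizer.zero_grad()
        loss = loss_fn(ddp_model(x), y)
        loss.backward()                      # gradients synchronized across ranks here
        optimizer.step()                     # every replica applies the same update
    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

Run with, e.g., `torchrun --nproc_per_node=2 ddp_example.py` (the filename is hypothetical).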
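
Second, the TF single-machine counterpart: `tf.distribute.MirroredStrategy` replicates the model on all local devices and all-reduces gradients synchronously. A minimal sketch; the toy model and random data are placeholders.

```python
import tensorflow as tf

# Synchronous, decentralized data parallelism over all local devices.
strategy = tf.distribute.MirroredStrategy()

# Variables created inside the scope are mirrored on every replica.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(10,)),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="sgd", loss="mse")

# Each global batch is split across replicas; gradients are all-reduced.
x = tf.random.normal((256, 10))
y = tf.random.normal((256, 1))
model.fit(x, y, batch_size=32, epochs=1)
```

`MultiWorkerMirroredStrategy` follows the same `scope()` pattern across machines, with the cluster described via the `TF_CONFIG` environment variable.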
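
Third, model parallelism over the PyTorch RPC framework (section 2): one process drives the forward pass and calls into a layer that lives on another worker. A simplified sketch assuming two local processes; the worker names, port, and layer are placeholders, and a real training loop would add `torch.distributed.autograd` plus `DistributedOptimizer` to backpropagate through the RPC boundary.

```python
import os
import torch
import torch.distributed.rpc as rpc
import torch.multiprocessing as mp
import torch.nn as nn

# Each process builds its own copy; only worker1's copy is actually used.
remote_layer = nn.Linear(10, 10)

def apply_remote_layer(x):
    # Executes on worker1 against worker1's layer.
    return remote_layer(x)

def run(rank, world_size):
    os.environ["MASTER_ADDR"] = "localhost"
    os.environ["MASTER_PORT"] = "29500"  # placeholder port
    rpc.init_rpc(f"worker{rank}", rank=rank, world_size=world_size)
    if rank == 0:
        x = torch.randn(4, 10)
        # Ship the activation to worker1, run its layer, get the result back.
        h = rpc.rpc_sync("worker1", apply_remote_layer, args=(x,))
        print(h.shape)  # torch.Size([4, 10])
    rpc.shutdown()  # acts as a barrier across workers

if __name__ == "__main__":
    mp.spawn(run, args=(2,), nprocs=2, join=True)
```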
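
Finally, pipeline parallelism (section 3). PyTorch builds this on RPC as well; the sketch below deliberately skips the distribution and only shows the micro-batching idea that pipelining rests on: split each batch into micro-batches so that, on real hardware, stage 2 can process one micro-batch while stage 1 works on the next. The stages, sizes, and chunk count are placeholders.

```python
import torch
import torch.nn as nn

# Two pipeline stages; in a real setup each would sit on its own device/worker.
stage1 = nn.Linear(10, 10)
stage2 = nn.Linear(10, 1)

batch = torch.randn(32, 10)
micro_batches = batch.chunk(4)  # 4 micro-batches of 8 samples each

outputs = []
for mb in micro_batches:
    h = stage1(mb)              # with two devices, stage1 would already take
    outputs.append(stage2(h))   # the next micro-batch while stage2 runs this one

result = torch.cat(outputs)
print(result.shape)  # torch.Size([32, 1])
```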