# Training and Deployment with SageMaker

The contents of this document include:

- Training with the `Estimator` class in the SageMaker Python SDK.
- Deployment with the `PyTorchModel` class.

You can also refer to [SageMaker Minimum Example](https://cinnamon.qiita.com/shared/items/b51ed86652c387ed8ccc) for training with the `TensorFlow`, `PyTorch` and `MXNet` classes.

Note: Using `TensorFlow`, `PyTorch` or `MXNet` is easier than `Estimator`. Try `TensorFlow`, `PyTorch` or `MXNet` first, and fall back to `Estimator` if you run into dependency errors.

# Training

Below is the schematic for a training job.

<img src="https://i.imgur.com/UVGuf7k.png" width="80%">

When a training job starts, a Docker container based on the training image in ECR runs and reads the training data from S3. At the same time, a GPU instance starts operating. After training finishes, the output model is transferred to S3 and the GPU instance is stopped automatically.

In a nutshell, training in SageMaker comes down to the following three steps:

- Put the training data into S3 in advance
- Build a Docker image for training and push it to ECR
- Create a training job with the SageMaker SDK

Let's look at these steps in detail.

## Put Training Data into S3

This is easy. Just do it via the AWS console or the [AWS CLI](https://docs.aws.amazon.com/cli/latest/reference/s3/index.html), e.g. `aws s3 cp {file_path} s3://{bucket_path}/{path}`.

## Create Docker Image

Please look at the directory structure below.

<img src="https://i.imgur.com/EWONcz4.png" width="90%">

- `/opt/program` ... Directory that you prepare in the Dockerfile.
- `/opt/ml` ... Directory that is prepared by SageMaker.

When the training job is called, `/opt/ml` is merged with `/opt/program` and **`docker run --rm <image_name> train`** runs internally. (You can override the entry point with a script other than `train` by setting `ENTRYPOINT` in the `Dockerfile`.) So `train` must be the trigger script that starts training.
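A minimal sketch of such a `train` script, assuming the `/opt/ml` convention above (the model logic and output file name are hypothetical placeholders, not part of SageMaker):

```python
#!/usr/bin/env python
# Minimal sketch of a `train` trigger script. Only the /opt/ml paths
# follow the SageMaker convention; everything else is a placeholder.
import json
import os

PREFIX = '/opt/ml'


def load_hyperparameters(config_dir):
    """Read hyperparameters.json if SageMaker provided one."""
    path = os.path.join(config_dir, 'hyperparameters.json')
    if not os.path.exists(path):
        return {}
    with open(path) as f:
        return json.load(f)


def run_training(data_dir, model_dir, params):
    # The training data you pushed to S3 is available under data_dir.
    files = sorted(os.listdir(data_dir))
    # ... build and fit your model here ...
    # Anything written to model_dir is uploaded to S3 automatically.
    with open(os.path.join(model_dir, 'model.txt'), 'w') as f:
        f.write('trained on {} files'.format(len(files)))


if __name__ == '__main__':
    params = load_hyperparameters(os.path.join(PREFIX, 'input', 'config'))
    run_training(os.path.join(PREFIX, 'input', 'data', 'training'),
                 os.path.join(PREFIX, 'model'),
                 params)
```

The key point is simply that the script reads from and writes to the fixed directories; the internals are entirely up to you.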
The Docker container is removed automatically after `train` finishes.

Here is how each directory functions.

---

- `/opt/ml/input/config/` Where the configs for training are stored. More details in [Set Hyperparameter as an Argument](#set-hyperparameter-as-an-argument).
- `/opt/ml/input/data/` Where the training data you pushed to S3 is stored. The default `<channel name>` is `training`.
- `/opt/ml/model/` Where the trained model should be stored after training finishes. The model files here are transferred to S3 automatically.
- `/opt/ml/output/` Where error outputs produced during training are stored.

---

In the `program` files, be careful to specify paths correctly so that the output model is stored in `/opt/ml/model/` and the training data is read from `/opt/ml/input/data/<channel_name>/`.

Here is my Dockerfile example.

<details><summary>Dockerfile</summary><div>

```Dockerfile
FROM tensorflow/tensorflow:1.10.0-gpu-py3

RUN apt-get update
RUN apt-get install -y software-properties-common
RUN apt-get install -y \
    libsm6 \
    libxext6 \
    libxrender-dev \
    libxrender1 \
    libglib2.0-0 \
    git \
    cmake

ENV LANG C.UTF-8
ENV LC_ALL C.UTF-8
ENV PATH="/opt/ml:${PATH}"

RUN mkdir -p /opt/ml/program
COPY program /opt/ml/program/
RUN pip install --upgrade pip
RUN pip install -r /opt/ml/program/requirements.txt

COPY train /opt/ml/
RUN chmod +x /opt/ml/train

WORKDIR /opt/ml/
```

</div></details>

## Push Image to ECR

This is an example shell script to build and push the image to ECR.

<details><summary>example script</summary><div>

```shell:build_and_push.sh
#!/usr/bin/env bash

# This script shows how to build the Docker image and push it to ECR to be ready for use
# by SageMaker.

# The argument to this script is the image name. This will be used as the image on the local
# machine and combined with the account and region to form the repository name for ECR.
image=$1

if [ "$image" == "" ]
then
    echo "Usage: $0 <image-name>"
    exit 1
fi

chmod +x decision_trees/train
chmod +x decision_trees/serve

# Get the account number associated with the current IAM credentials
account=$(aws sts get-caller-identity --query Account --output text)

if [ $? -ne 0 ]
then
    exit 255
fi

# Get the region defined in the current configuration (default to us-west-2 if none defined)
region=$(aws configure get region)
region=${region:-us-west-2}

fullname="${account}.dkr.ecr.${region}.amazonaws.com/${image}:latest"

# If the repository doesn't exist in ECR, create it.
aws ecr describe-repositories --repository-names "${image}" > /dev/null 2>&1

if [ $? -ne 0 ]
then
    aws ecr create-repository --repository-name "${image}" > /dev/null
fi

# Get the login command from ECR and execute it directly
$(aws ecr get-login --region ${region} --no-include-email)

# Build the docker image locally with the image name and then push it to ECR
# with the full name.
docker build -t ${image} .
docker tag ${image} ${fullname}
docker push ${fullname}
```

</div></details>
<br />

How to use: `bash build_and_push.sh <image_name>`

tip: Before pushing the image to ECR, check that the Docker image builds successfully with `docker build -t <image_name> .` — the pushing process takes time.

## Create Training Job

We can start training with the SageMaker Python SDK using the Docker image in ECR. Here is example code.
<details><summary>example code</summary><div>

```python:train_script.py
from sagemaker import get_execution_role
from sagemaker.estimator import Estimator

estimator = Estimator(role=get_execution_role(),
                      train_instance_count=1,
                      train_instance_type='ml.p3.8xlarge',
                      output_path='s3://sagemaker-sdk-tuning/shumpei/model',
                      image_name=f"{your_account}.dkr.ecr.{your_region}.amazonaws.com/{your_image}:latest",
                      train_max_run=5*24*60*60,
                      )
estimator.fit(inputs='s3://sagemaker-sdk-tuning/shumpei/training_data/SDK_Lines_data.tar.gz',
              job_name=None)
```

</div></details>
<br />

tip: After `train_max_run` elapses, SageMaker terminates the training job regardless of its current status. The default is 24 * 60 * 60 [sec] = 1 [day] and the default limit is 3 * 24 * 60 * 60 [sec] = 3 [days]. If you expect the whole training to take more than 3 days, ask @Tetsuya Saito or @matoba to raise the limit.

You can set the `<channel name>` of the training job by passing a dict:

```diff:train_script.py
- estimator.fit(inputs='s3://sagemaker-sdk-tuning/shumpei/training_data/SDK_Lines_data.tar.gz', job_name=None)
+ estimator.fit(inputs={'ocr': 's3://sagemaker-sdk-tuning/shumpei/training_data/SDK_Lines_data.tar.gz'}, job_name=None)
```

Please refer to the [documentation](https://sagemaker.readthedocs.io/en/latest/estimators.html) for details about each argument of `Estimator`.
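Before launching a paid instance, you can sanity-check that your `train` script respects the directory contract by recreating SageMaker's `/opt/ml` layout in a local directory. This helper is my own sketch (the function name and the `training` channel default are assumptions, not part of the SDK):

```python
import os
import shutil
import tempfile


def mimic_sagemaker_layout(data_file, channel='training', base=None):
    """Recreate the /opt/ml directory layout in a local directory.

    The data file ends up under <base>/input/data/<channel>/,
    mirroring where SageMaker places data pulled from S3.
    Returns the base directory.
    """
    base = base or tempfile.mkdtemp()
    channel_dir = os.path.join(base, 'input', 'data', channel)
    os.makedirs(channel_dir, exist_ok=True)
    # Empty dirs for the model output and the hyperparameter config.
    os.makedirs(os.path.join(base, 'model'), exist_ok=True)
    os.makedirs(os.path.join(base, 'input', 'config'), exist_ok=True)
    shutil.copy(data_file, channel_dir)
    return base
```

While testing, point your `train` script at the returned base directory instead of `/opt/ml`.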
### Set Hyperparameter as an Argument

You can set hyperparameters explicitly by defining a `config` as below and passing it as `hyperparameters` to `Estimator`.

```diff:train_script.py
+ config = {'n_epochs': 100,
+           'batch_size': 16,
+           'input_data_cache': True
+           }
```

```diff:train_script.py
  estimator = Estimator(role=get_execution_role(),
                        train_instance_count=1,
                        train_instance_type='ml.p3.8xlarge',
                        output_path='s3://sagemaker-sdk-tuning/shumpei/model',
                        image_name=f"{your_account}.dkr.ecr.{your_region}.amazonaws.com/{your_image}:latest",
                        train_max_run=5*24*60*60,
+                       hyperparameters=config
                        )
```

This config is stored in `/opt/ml/input/config/hyperparameters.json` when the training job is created. Be careful: when the config is stored as a JSON file, its contents are converted to strings. You need to cast the values back to the appropriate types when using them.

If you don't specify `hyperparameters`, you have to push an image to ECR every time you change the configs. That is why setting `hyperparameters` is effective.

### Use the Local Mode

In local mode, your local CPU is used instead of borrowing a GPU instance. It is effective **when debugging your script quickly** because you can skip the process of launching a GPU instance. You can use local mode by setting `train_instance_type` (or `instance_type`) to `'local'`.

tip: Only in local mode can you point the training data at a path stored locally. This minimizes the running time by skipping the step of pulling data from S3. To specify local data, use the form `file:///xxx/yyy/`. You'd better test your script with small data locally before using all the data.

The [official documentation](https://sagemaker.readthedocs.io/en/stable/overview.html#local-mode) includes a few important notes.

### Check the Process of Training

You can check the progress of a training job, such as CPU utilization and GPU utilization, from `SageMaker > Training > Training jobs` in the AWS console.
![](https://i.imgur.com/Y25dekq.png)

For instance, if the GPU utilization is low, you can increase the batch size to speed up training.

# Deployment

I take `PyTorchModel` as an example for deployment. You can use other classes such as `TensorFlowModel` and `MXNetModel` in the same way.

Here I assume you already have a trained model and want to deploy it to an endpoint. You don't need to create a web server or an application server for deployment because `PyTorchModel` already incorporates them.

## Create Predictor

First, create the predictor from `PyTorchModel`.

<details><summary>example code</summary><div>

```python
from sagemaker import get_execution_role
from sagemaker.pytorch.model import PyTorchModel

# role = 'arn:aws:iam::533155507761:role/sagemaker-example-admin'
role = get_execution_role()
model_data = 's3://sagemaker-sdk-tuning/shumpei/model/uocr.tar.gz'
dependencies = ['/home/ec2-user/SageMaker/sagemaker/lib_ocr-universal-1.0']

model = PyTorchModel(model_data=model_data,
                     role=role,
                     dependencies=dependencies,
                     entry_point='deploy.py',
                     framework_version='1.0.0',
                     env={'PYTHONPATH': '/opt/ml/code/lib_ocr-universal-1.0:${PYTHONPATH}'}
                     )
predictor = model.deploy(instance_type='ml.t2.large', initial_instance_count=1)
```

</div></details>
<br />

`dependencies` is a list of directories that contain the necessary libraries. For example, you can specify a directory inside your virtual environment. Directories set as `dependencies` are copied to `/opt/ml/code` in the container. Be careful not to forget to set `PYTHONPATH` so the libraries can be used. The libraries installed by default in the `PyTorchModel` container are listed [here](https://github.com/aws/sagemaker-python-sdk/tree/master/src/sagemaker/pytorch).

In the script specified as `entry_point`, you write four methods, `model_fn`, `input_fn`, `predict_fn` and `output_fn`, to define the data flow from input to output. (You have to follow this format.)
[Details here](https://sagemaker.readthedocs.io/en/stable/using_pytorch.html#model-loading).

<details><summary>endpoint script</summary><div>

```python:deploy.py
#!/usr/bin/python
from __future__ import absolute_import

import logging
logger = logging.getLogger()

import os

from sagemaker_containers.beta.framework import (encoders, worker)

from uocr.uocr_api import UOCR2Model


def model_fn(model_dir):
    os.system('pip install /opt/ml/code/lib_ocr-universal-1.0')
    model = UOCR2Model()
    model.loadModel(model_path=os.path.join(model_dir, 'md_best_m002_crnn5_003_so_trf_syn.pth.tar'),
                    char_path=os.path.join(model_dir, 'all_jp_charset_combine_v1.txt'),
                    use_memory=False)
    return model


def input_fn(data, content_type):
    return encoders.decode(data, content_type)


def predict_fn(data, model):
    res = model.predict_img(data)
    return res


def output_fn(prediction, accept):  # application/x-npy
    return worker.Response(response=encoders.encode(prediction, accept), mimetype=accept)
```

</div></details>
<br />

This script is imported internally by SageMaker repeatedly, so it's not recommended to put statements in the global scope.

If `deploy()` runs successfully, you can check the endpoint from `SageMaker > Inference > Endpoints` in the AWS console. If not, check the logs in `CloudWatch > Logs`.

The inference result is returned by sending input data to the endpoint with `predict()`.

<img src="https://cinnamon.qiita.com/files/8d70af30-52f7-ae1b-c80c-ae6c53f897b3.png" width="100%">

# For Additional Information

Here is information that will be helpful when using SageMaker:

- [Official example code on GitHub](https://github.com/awslabs/amazon-sagemaker-examples)
- [Official Documentation](https://sagemaker.readthedocs.io/en/latest/index.html)
- [Source code of sagemaker-python-sdk](https://github.com/aws/sagemaker-python-sdk)

Imitating the official example code that matches your situation is the easiest and fastest way to get started.
When the official documentation is not enough to grasp how something works, the source code of sagemaker-python-sdk is helpful.

Furthermore, feel free to ask @shumpei, @khai or @matoba on Slack if anything about SageMaker is unclear.