# Rootless Docker on NIG HPC

This note documents a verification of Rootless Docker on the NIG supercomputer (遺伝研スパコン).

## Motivation

- Start containers such as Elasticsearch or Nginx with ordinary user privileges
- Easily run workflows (e.g., CWL) that are conventionally written against Docker
- With Singularity, which is currently in use, Docker images used for analysis had to be rebuilt for Singularity first, which was cumbersome

## Background knowledge and links

The most intuitive overview figure:

![Overview](https://i2.wp.com/cdn-images-1.medium.com/max/2000/1*SfAokC2YQ-f04Wc2WhSRCw.png)

ref. from https://www.docker.com/blog/experimenting-with-rootless-docker/

In short:

- dockerd itself is started inside User Namespaces (a Linux kernel feature)
- A user-privileged daemon process and a per-user Docker socket are created
- UIDs/GIDs inside the container are remapped using subordinate UIDs/GIDs (sub UID/GID)
- Files created under various UIDs/GIDs inside the container can therefore be handled with the privileges of the user who ran the container

For details, see the links below:

- [Docker Docs - Rootless mode](https://matsuand.github.io/docs.docker.jp.onthefly/engine/security/rootless/)
- [RootlessモードでDockerをより安全にする [DockerCon発表レポート]](https://medium.com/nttlabs/rootless-docker-12decb900fb9)
  - Commentary by the developer of Rootless Docker (Mr. Suda of NTT)
- [Docker Blog - Experimenting with Rootless Docker](https://www.docker.com/blog/experimenting-with-rootless-docker/)
- [Docker rootlessで研鯖運用](https://drgripa1.hatenablog.com/entry/2021/05/08/195822)
  - The clearest example of running it on a shared machine
- [Installing and securing Docker rootless for production use](https://medium.com/@flavienb/installing-and-securing-docker-rootless-for-production-use-8e358d1c0956)

## Verification

Verified on a compute node (`at038`).

### Installation

```bash=
[suecharo@at038 ~]$ uname -a
Linux at038 3.10.0-862.14.4.el7.x86_64 #1 SMP Wed Sep 26 15:12:11 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
[suecharo@at038 ~]$ cat /etc/os-release
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"
CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"
[suecharo@at038 ~]$ id -a
uid=4589(suecharo) gid=10652(oo-nig) groups=10652(oo-nig),3500(ddbj)
```

On CentOS 7, some setup with root privileges is required, and the following limitations apply:

- The `overlay2` storage driver cannot be used
- Daemon management via `systemd` (`systemctl --user`) is not available
  - As a consequence, `cgroup` cannot be used either

```bash=
# for newuidmap and newgidmap
$ curl -o /etc/yum.repos.d/vbatts-shadow-utils-newxidmap-epel-7.repo https://copr.fedorainfracloud.org/coprs/vbatts/shadow-utils-newxidmap/repo/epel-7/vbatts-shadow-utils-newxidmap-epel-7.repo
$ yum install -y shadow-utils46-newxidmap

# for network (recommended)
$ yum install slirp4netns

# for subordinate UID/GID
# USER:START:COUNT
$ echo "$(id -un):100000:65536" >> /etc/subuid
$ echo "$(id -un):100000:65536" >> /etc/subgid

$ echo "user.max_user_namespaces=28633" >> /etc/sysctl.conf
$ sysctl -p
```

After that, install rootless docker with user privileges. All the script does is check the environment and download the binaries into `~/bin`, so there is no need to disable the system docker daemon.

```bash=
$ curl -fsSL https://get.docker.com/rootless | sh

# if an error occurs because the system docker daemon is running
$ curl -fsSL https://get.docker.com/rootless | FORCE_ROOTLESS_INSTALL=1 sh
```

As a result:

```bash=
[suecharo@at038 ~]$ ls ~/bin
containerd               ctr             docker-init   dockerd-rootless-setuptool.sh  rootlesskit-docker-proxy
containerd-shim          docker          docker-proxy  dockerd-rootless.sh            runc
containerd-shim-runc-v2  docker-compose  dockerd       rootlesskit                    vpnkit
```

Binaries such as the docker command and dockerd are provided for rootless use.

Set up PATH and the environment variables:

```bash=
[suecharo@at038 ~]$ echo "PATH=/home/suecharo/bin:$PATH" >> ~/.bashrc
[suecharo@at038 ~]$ echo "DOCKER_HOST=unix:///run/user/$(id -u)/docker.sock" >> ~/.bashrc
[suecharo@at038 bin]$ source ~/.bashrc
```

Also install `docker-compose`:

```bash=
[suecharo@at038 ~]$ curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o ~/bin/docker-compose
[suecharo@at038 ~]$ chmod +x ~/bin/docker-compose
```

Note that the default data-root of rootless docker is `~/.local/share/docker`. If this location is on NFS or similar, errors occur (they also occurred on Lustre). Therefore, specify a different location as the data-root in `~/.config/docker/daemon.json`:

```bash=
$ mkdir -p ~/.config/docker
$ mkdir -p /data1/suecharo/rootless_docker
$ echo '{"data-root": "/data1/suecharo/rootless_docker"}' > ~/.config/docker/daemon.json
```

Startup:

```bash=
[suecharo@at038 ~]$ dockerd-rootless.sh --storage-driver vfs
+ case "$1" in
+ '[' -w /run/user/4589 ']'
+ '[' -w /home/suecharo ']'
+ rootlesskit=
+ for f in docker-rootlesskit rootlesskit
+ command -v docker-rootlesskit
+ for f in docker-rootlesskit rootlesskit
+ command -v rootlesskit
+ rootlesskit=rootlesskit
+ break
+ '[' -z rootlesskit ']'
+ : ''
+ : ''
+ : builtin
+ : auto
+ : auto
+ net=
+ mtu=
+ '[' -z ']'
+ command -v slirp4netns
+ '[' -z ']'
+ command -v vpnkit
+ net=vpnkit
+ '[' -z ']'
+ mtu=1500
+ '[' -z ']'
+ _DOCKERD_ROOTLESS_CHILD=1
+ export _DOCKERD_ROOTLESS_CHILD
++ id -u
+ '[' 4589 = 0 ']'
+ command -v selinuxenabled
+ selinuxenabled
+ exec rootlesskit --net=vpnkit --mtu=1500 --slirp4netns-sandbox=auto --slirp4netns-seccomp=auto --disable-host-loopback --port-driver=builtin --copy-up=/etc --copy-up=/run --propagation=rslave /home/suecharo/bin/dockerd-rootless.sh --storage-driver vfs
+ case "$1" in
+ '[' -w /run/user/4589 ']'
+ '[' -w /home/suecharo ']'
+ rootlesskit=
+ for f in docker-rootlesskit rootlesskit
+ command -v docker-rootlesskit
+ for f in docker-rootlesskit rootlesskit
+ command -v rootlesskit
+ rootlesskit=rootlesskit
+ break
+ '[' -z rootlesskit ']'
+ : ''
+ : ''
+ : builtin
+ : auto
+ : auto
+ net=
+ mtu=
+ '[' -z ']'
+ command -v slirp4netns
+ '[' -z ']'
+ command -v vpnkit
+ net=vpnkit
+ '[' -z ']'
+ mtu=1500
+ '[' -z 1 ']'
+ '[' 1 = 1 ']'
+ rm -f /run/docker /run/containerd /run/xtables.lock
+ '[' -n '' ']'
++ stat -c %T -f /etc
+ '[' tmpfs = tmpfs ']'
+ '[' -L /etc/ssl ']'
++ realpath /etc/ssl
+ realpath_etc_ssl=/etc/.ro130063142/ssl
+ rm -f /etc/ssl
+ mkdir /etc/ssl
+ mount --rbind /etc/.ro130063142/ssl /etc/ssl
+ exec dockerd --storage-driver vfs
INFO[2021-08-26T23:18:36.825210480+09:00] Starting up
WARN[2021-08-26T23:18:36.825355483+09:00] Running in rootless mode. This mode has feature limitations.
INFO[2021-08-26T23:18:36.825369739+09:00] Running with RootlessKit integration
INFO[2021-08-26T23:18:36.828251772+09:00] libcontainerd: started new containerd process pid=53273
INFO[2021-08-26T23:18:36.828346890+09:00] parsed scheme: "unix" module=grpc
INFO[2021-08-26T23:18:36.828368140+09:00] scheme "unix" not registered, fallback to default scheme module=grpc
INFO[2021-08-26T23:18:36.828401533+09:00] ccResolverWrapper: sending update to cc: {[{unix:///run/user/4589/docker/containerd/containerd.sock <nil> 0 <nil>}] <nil> <nil>} module=grpc
INFO[2021-08-26T23:18:36.828430237+09:00] ClientConn switching balancer to "pick_first" module=grpc
INFO[2021-08-26T23:18:36.850043245+09:00] starting containerd revision=e25210fe30a0a703442421b0f60afac609f950a3 version=v1.4.9
INFO[2021-08-26T23:18:36.894281361+09:00] loading plugin "io.containerd.content.v1.content"... type=io.containerd.content.v1
INFO[2021-08-26T23:18:36.894486887+09:00] loading plugin "io.containerd.snapshotter.v1.aufs"... type=io.containerd.snapshotter.v1
INFO[2021-08-26T23:18:36.900042841+09:00] skip loading plugin "io.containerd.snapshotter.v1.aufs"... error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found.\\n\"): skip plugin" type=io.containerd.snapshotter.v1
INFO[2021-08-26T23:18:36.900102693+09:00] loading plugin "io.containerd.snapshotter.v1.btrfs"... type=io.containerd.snapshotter.v1
INFO[2021-08-26T23:18:36.900567987+09:00] skip loading plugin "io.containerd.snapshotter.v1.btrfs"... error="path /data1/suecharo/rootless/containerd/daemon/io.containerd.snapshotter.v1.btrfs (xfs) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
INFO[2021-08-26T23:18:36.900605738+09:00] loading plugin "io.containerd.snapshotter.v1.devmapper"... type=io.containerd.snapshotter.v1
WARN[2021-08-26T23:18:36.900651664+09:00] failed to load plugin io.containerd.snapshotter.v1.devmapper error="devmapper not configured"
INFO[2021-08-26T23:18:36.900675459+09:00] loading plugin "io.containerd.snapshotter.v1.native"... type=io.containerd.snapshotter.v1
INFO[2021-08-26T23:18:36.900712979+09:00] loading plugin "io.containerd.snapshotter.v1.overlayfs"... type=io.containerd.snapshotter.v1
INFO[2021-08-26T23:18:36.901628208+09:00] loading plugin "io.containerd.snapshotter.v1.zfs"... type=io.containerd.snapshotter.v1
INFO[2021-08-26T23:18:36.901948159+09:00] skip loading plugin "io.containerd.snapshotter.v1.zfs"... error="path /data1/suecharo/rootless/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
INFO[2021-08-26T23:18:36.901980900+09:00] loading plugin "io.containerd.metadata.v1.bolt"... type=io.containerd.metadata.v1
WARN[2021-08-26T23:18:36.902040953+09:00] could not use snapshotter devmapper in metadata plugin error="devmapper not configured"
INFO[2021-08-26T23:18:36.902058686+09:00] metadata content store policy set policy=shared
INFO[2021-08-26T23:18:36.902181467+09:00] loading plugin "io.containerd.differ.v1.walking"... type=io.containerd.differ.v1
INFO[2021-08-26T23:18:36.902220169+09:00] loading plugin "io.containerd.gc.v1.scheduler"... type=io.containerd.gc.v1
INFO[2021-08-26T23:18:36.902285472+09:00] loading plugin "io.containerd.service.v1.introspection-service"... type=io.containerd.service.v1
INFO[2021-08-26T23:18:36.902382354+09:00] loading plugin "io.containerd.service.v1.containers-service"... type=io.containerd.service.v1
INFO[2021-08-26T23:18:36.902410357+09:00] loading plugin "io.containerd.service.v1.content-service"... type=io.containerd.service.v1
INFO[2021-08-26T23:18:36.902460771+09:00] loading plugin "io.containerd.service.v1.diff-service"... type=io.containerd.service.v1
INFO[2021-08-26T23:18:36.902487181+09:00] loading plugin "io.containerd.service.v1.images-service"... type=io.containerd.service.v1
INFO[2021-08-26T23:18:36.902509513+09:00] loading plugin "io.containerd.service.v1.leases-service"... type=io.containerd.service.v1
INFO[2021-08-26T23:18:36.902533698+09:00] loading plugin "io.containerd.service.v1.namespaces-service"... type=io.containerd.service.v1
INFO[2021-08-26T23:18:36.902555499+09:00] loading plugin "io.containerd.service.v1.snapshots-service"... type=io.containerd.service.v1
INFO[2021-08-26T23:18:36.902581688+09:00] loading plugin "io.containerd.runtime.v1.linux"... type=io.containerd.runtime.v1
INFO[2021-08-26T23:18:36.902707996+09:00] loading plugin "io.containerd.runtime.v2.task"... type=io.containerd.runtime.v2
INFO[2021-08-26T23:18:36.902797734+09:00] loading plugin "io.containerd.monitor.v1.cgroups"... type=io.containerd.monitor.v1
INFO[2021-08-26T23:18:36.903528787+09:00] loading plugin "io.containerd.service.v1.tasks-service"... type=io.containerd.service.v1
INFO[2021-08-26T23:18:36.903611132+09:00] loading plugin "io.containerd.internal.v1.restart"... type=io.containerd.internal.v1
INFO[2021-08-26T23:18:36.903718183+09:00] loading plugin "io.containerd.grpc.v1.containers"... type=io.containerd.grpc.v1
INFO[2021-08-26T23:18:36.903753028+09:00] loading plugin "io.containerd.grpc.v1.content"... type=io.containerd.grpc.v1
INFO[2021-08-26T23:18:36.903780710+09:00] loading plugin "io.containerd.grpc.v1.diff"... type=io.containerd.grpc.v1
INFO[2021-08-26T23:18:36.903808522+09:00] loading plugin "io.containerd.grpc.v1.events"... type=io.containerd.grpc.v1
INFO[2021-08-26T23:18:36.903831786+09:00] loading plugin "io.containerd.grpc.v1.healthcheck"... type=io.containerd.grpc.v1
INFO[2021-08-26T23:18:36.903870879+09:00] loading plugin "io.containerd.grpc.v1.images"... type=io.containerd.grpc.v1
INFO[2021-08-26T23:18:36.903894474+09:00] loading plugin "io.containerd.grpc.v1.leases"... type=io.containerd.grpc.v1
INFO[2021-08-26T23:18:36.903936763+09:00] loading plugin "io.containerd.grpc.v1.namespaces"... type=io.containerd.grpc.v1
INFO[2021-08-26T23:18:36.903970837+09:00] loading plugin "io.containerd.internal.v1.opt"... type=io.containerd.internal.v1
INFO[2021-08-26T23:18:36.904034066+09:00] loading plugin "io.containerd.grpc.v1.snapshots"... type=io.containerd.grpc.v1
INFO[2021-08-26T23:18:36.904063231+09:00] loading plugin "io.containerd.grpc.v1.tasks"... type=io.containerd.grpc.v1
INFO[2021-08-26T23:18:36.904083709+09:00] loading plugin "io.containerd.grpc.v1.version"... type=io.containerd.grpc.v1
INFO[2021-08-26T23:18:36.904101793+09:00] loading plugin "io.containerd.grpc.v1.introspection"... type=io.containerd.grpc.v1
INFO[2021-08-26T23:18:36.904378182+09:00] serving... address=/run/user/4589/docker/containerd/containerd-debug.sock
INFO[2021-08-26T23:18:36.904445839+09:00] serving... address=/run/user/4589/docker/containerd/containerd.sock.ttrpc
INFO[2021-08-26T23:18:36.904496845+09:00] serving... address=/run/user/4589/docker/containerd/containerd.sock
INFO[2021-08-26T23:18:36.904532302+09:00] containerd successfully booted in 0.056418s
WARN[2021-08-26T23:18:36.913954255+09:00] Could not set may_detach_mounts kernel parameter error="error opening may_detach_mounts kernel config file: open /proc/sys/fs/may_detach_mounts: permission denied"
INFO[2021-08-26T23:18:36.915074920+09:00] parsed scheme: "unix" module=grpc
INFO[2021-08-26T23:18:36.915155191+09:00] scheme "unix" not registered, fallback to default scheme module=grpc
INFO[2021-08-26T23:18:36.915197560+09:00] ccResolverWrapper: sending update to cc: {[{unix:///run/user/4589/docker/containerd/containerd.sock <nil> 0 <nil>}] <nil> <nil>} module=grpc
INFO[2021-08-26T23:18:36.915260999+09:00] ClientConn switching balancer to "pick_first" module=grpc
INFO[2021-08-26T23:18:36.916365383+09:00] parsed scheme: "unix" module=grpc
INFO[2021-08-26T23:18:36.916488855+09:00] scheme "unix" not registered, fallback to default scheme module=grpc
INFO[2021-08-26T23:18:36.916550501+09:00] ccResolverWrapper: sending update to cc: {[{unix:///run/user/4589/docker/containerd/containerd.sock <nil> 0 <nil>}] <nil> <nil>} module=grpc
INFO[2021-08-26T23:18:36.916593141+09:00] ClientConn switching balancer to "pick_first" module=grpc
INFO[2021-08-26T23:18:37.360275848+09:00] Loading containers: start.
INFO[2021-08-26T23:18:37.739306102+09:00] Removing stale sandbox 97f10a72965a37d0ab4051b729cc320e3aee72e69c7f5e46be9228e28396efc5 (fd9fcc5fbfc6ecad1c1abc43f050c14563ded98b261125018cc1594cf4f7ef89)
WARN[2021-08-26T23:18:37.771087275+09:00] Error (Unable to complete atomic operation, key modified) deleting object [endpoint 59b8d25312240c914bc3e60366dd84df05b5c2e30146e88b9e001d7d6e59c5f9 13e40763b84e341bf8bd589479adaa8974380d21039d3b6579fdb8a462810a65], retrying....
INFO[2021-08-26T23:18:37.831578835+09:00] Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address
time="2021-08-26T23:18:37.987569866+09:00" level=info msg="starting signal loop" namespace=moby path=/run/.ro991887437/user/4589/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/fd9fcc5fbfc6ecad1c1abc43f050c14563ded98b261125018cc1594cf4f7ef89 pid=53602
INFO[2021-08-26T23:18:38.624539844+09:00] Loading containers: done.
INFO[2021-08-26T23:18:38.647941520+09:00] Docker daemon commit=75249d8 graphdriver(s)=vfs version=20.10.8
INFO[2021-08-26T23:18:38.648137529+09:00] Daemon has completed initialization
INFO[2021-08-26T23:18:38.680298866+09:00] API listen on /run/user/4589/docker.sock
```

### Nginx

Privileged ports (by default, ports below 1024) cannot be bound.

`docker-compose.nginx.yml`

```yaml=
version: "3"
services:
  app:
    image: nginx:latest
    restart: on-failure
    ports:
      - "6190:80"
```

```bash=
[suecharo@at038 rootless]$ docker-compose -f docker-compose.nginx.yml up -d
Creating network "rootless_default" with the default driver
Creating rootless_app_1 ... done
[suecharo@at038 rootless]$ curl -fsSL localhost:6190/
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...
```

### Elasticsearch

Even though cgroup is unavailable, `ulimit` and `cpulimit` can still be used.

`docker-compose.es.yml`

```yaml=
version: "3"
services:
  db:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.14.0
    environment:
      discovery.type: "single-node"
      bootstrap.memory_lock: "true"
      ES_JAVA_OPTS: "-Xms512m -Xmx512m"
      TAKE_FILE_OWNERSHIP: "true"
    ports:
      - 6190:9200
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    volumes:
      - /data1/suecharo/es-data:/usr/share/elasticsearch/data:rw
    restart: on-failure
```

Here, Elasticsearch starts as `1000:0` inside the container (Elasticsearch 7 refuses at the program level to run as root). Therefore, the `TAKE_FILE_OWNERSHIP` environment variable is set so that the entrypoint changes the ownership of `/usr/share/elasticsearch/{data,logs}` to `1000:0` (a volume-mounted directory appears as `0:0` inside the container).

Also, when a Lustre area was volume-mounted, file I/O itself was fine, but changing ownership failed with an error, so the `/data1` area was volume-mounted instead.

```bash=
[suecharo@at038 rootless]$ docker-compose -f docker-compose.es.yml up -d
Creating network "rootless_default" with the default driver
Creating rootless_db_1 ... done
[suecharo@at038 rootless]$ curl localhost:6190/
{
  "name" : "74f2671d128d",
  "cluster_name" : "docker-cluster",
  "cluster_uuid" : "ylzD1YjoRcuxh5XYJPK5mw",
  "version" : {
    "number" : "7.14.0",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "dd5a0a2acaa2045ff9624f3729fc8a6f40835aa1",
    "build_date" : "2021-07-29T20:49:32.864135063Z",
    "build_snapshot" : false,
    "lucene_version" : "8.9.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
```

Checking the persisted data area from the host side:

```bash=
[suecharo@at038 rootless]$ ls -l /data1/suecharo/es-data
total 0
drwxrwxr-x 3 100999 oo-nig 23 Aug 26 23:48 nodes
[suecharo@at038 rootless]$ rm -rf /data1/suecharo/es-data/nodes/
rm: cannot remove '/data1/suecharo/es-data/nodes/': Permission denied
```

The owner is `100999:oo-nig`. This is because `suecharo:100000:65536` was set in `/etc/subuid`: container UID 0 maps to the user's own UID, so container UID 1000 maps to `100000 + 1000 - 1 = 100999`.

To remove these directories, run something like:

```bash=
[suecharo@at038 es-data]$ docker run --rm -v /data1/suecharo/es-data:/mount -w /mount alpine:latest rm -rf nodes
```

### cwltool

Also try the Docker sibling pattern (containers launched via the host's Docker socket).

[GitHub - suecharo/sapporo_test_workflows](https://github.com/suecharo/sapporo_test_workflows):

- CWL
- WDL
- Nextflow
- Snakemake

```bash=
[suecharo@at038 rootless]$ git clone git@github.com:suecharo/sapporo_test_workflows.git
Cloning into 'sapporo_test_workflows'...
Warning: Permanently added 'github.com,140.82.114.3' (RSA) to the list of known hosts.
remote: Enumerating objects: 80, done.
remote: Counting objects: 100% (80/80), done.
remote: Compressing objects: 100% (58/58), done.
remote: Total 80 (delta 32), reused 60 (delta 15), pack-reused 0
Receiving objects: 100% (80/80), 3.64 MiB | 2.97 MiB/s, done.
Resolving deltas: 100% (32/32), done.
Updating files: 100% (18/18), done.
[suecharo@at038 rootless]$ cd sapporo_test_workflows
[suecharo@at038 sapporo_test_workflows]$ docker run -i --rm \
> -v /run/user/4589/docker.sock:/var/run/docker.sock \
> -v ~/bin/docker:/usr/bin/docker \
> -v /tmp:/tmp \
> -v $PWD:$PWD \
> -w=$PWD \
> suecharo/cwltool:1.0.20191225192155 \
> --outdir ./results/qc_and_trimming/cwl \
> ./qc_and_trimming/cwl/trimming_and_qc.cwl \
> --fastq_1 ./qc_and_trimming/ERR034597_1.small.fq.gz \
> --fastq_2 ./qc_and_trimming/ERR034597_2.small.fq.gz \
> --nthreads 2
Unable to find image 'suecharo/cwltool:1.0.20191225192155' locally
1.0.20191225192155: Pulling from suecharo/cwltool
89d9c30c1d48: Pull complete
910c49c00810: Pull complete
356e2b6f7f7c: Pull complete
e2700c1bbf1f: Pull complete
b644b1801e4a: Pull complete
1d6b5ae58845: Pull complete
da3d895d61d1: Pull complete
98fc897dedbe: Pull complete
ee864c691a1d: Pull complete
Digest: sha256:3119afc0693ce5231165708b8e99c16101f82a8853d57698049bf23ed2fa2e05
Status: Downloaded newer image for suecharo/cwltool:1.0.20191225192155
INFO /usr/local/bin/cwltool 1.0.20191225192155
INFO Resolved './qc_and_trimming/cwl/trimming_and_qc.cwl' to 'file:///home/suecharo/rootless/sapporo_test_workflows/qc_and_trimming/cwl/trimming_and_qc.cwl'
INFO [workflow ] start
INFO [workflow ] starting step qc_2
INFO [step qc_2] start
INFO ['docker', 'pull', 'quay.io/biocontainers/fastqc:0.11.9--0']
0.11.9--0: Pulling from biocontainers/fastqc
a3ed95caeb02: Pulling fs layer
77c6c00e8b61: Pulling fs layer
3aaade50789a: Pulling fs layer
00cf8b9f3d2a: Pulling fs layer
7ff999a2256f: Pulling fs layer
d2ba336f2e44: Pulling fs layer
dfda3e01f2b6: Pulling fs layer
7ff999a2256f: Waiting
10c3bb32200b: Pulling fs layer
6d92b3a49ebf: Pulling fs layer
dfda3e01f2b6: Waiting
10c3bb32200b: Waiting
00cf8b9f3d2a: Waiting
3aaade50789a: Download complete
a3ed95caeb02: Download complete
a3ed95caeb02: Pull complete
77c6c00e8b61: Verifying Checksum
77c6c00e8b61: Download complete
77c6c00e8b61: Pull complete
3aaade50789a: Pull complete
00cf8b9f3d2a: Verifying Checksum
7ff999a2256f: Verifying Checksum
7ff999a2256f: Download complete
00cf8b9f3d2a: Pull complete
d2ba336f2e44: Verifying Checksum
d2ba336f2e44: Download complete
7ff999a2256f: Pull complete
d2ba336f2e44: Pull complete
dfda3e01f2b6: Verifying Checksum
dfda3e01f2b6: Download complete
10c3bb32200b: Download complete
dfda3e01f2b6: Pull complete
10c3bb32200b: Pull complete
6d92b3a49ebf: Verifying Checksum
6d92b3a49ebf: Download complete
6d92b3a49ebf: Pull complete
Digest: sha256:319b8d4eca0fc0367d192941f221f7fcd29a6b96996c63cbf8931dbb66e53348
Status: Downloaded newer image for quay.io/biocontainers/fastqc:0.11.9--0
quay.io/biocontainers/fastqc:0.11.9--0
INFO [job qc_2] /tmp/qxe6s0__$ docker \
    run \
    -i \
    --volume=/tmp/qxe6s0__:/BPsmJP:rw \
    --volume=/tmp/wg2ra16f:/tmp:rw \
    --volume=/home/suecharo/rootless/sapporo_test_workflows/qc_and_trimming/ERR034597_2.small.fq.gz:/var/lib/cwl/stg4ee7d3cd-6ce4-4002-8591-4c1a2bbae40a/ERR034597_2.small.fq.gz:ro \
    --workdir=/BPsmJP \
    --read-only=true \
    --log-driver=none \
    --user=0:0 \
    --rm \
    --env=TMPDIR=/tmp \
    --env=HOME=/BPsmJP \
    --cidfile=/tmp/zr19i5yk/20210825031342-842423.cid \
    quay.io/biocontainers/fastqc:0.11.9--0 \
    fastqc \
    -o \
    . \
    --threads \
    2 \
    /var/lib/cwl/stg4ee7d3cd-6ce4-4002-8591-4c1a2bbae40a/ERR034597_2.small.fq.gz > /tmp/qxe6s0__/fastqc-stdout.log 2> /tmp/qxe6s0__/fastqc-stderr.log
INFO [job qc_2] Max memory used: 0MiB
INFO [job qc_2] completed success
INFO [step qc_2] completed success
INFO [workflow ] starting step trimming
INFO [step trimming] start
INFO ['docker', 'pull', 'quay.io/biocontainers/trimmomatic:0.38--1']
0.38--1: Pulling from biocontainers/trimmomatic
a3ed95caeb02: Already exists
77c6c00e8b61: Already exists
3aaade50789a: Already exists
00cf8b9f3d2a: Already exists
7ff999a2256f: Already exists
d2ba336f2e44: Already exists
dfda3e01f2b6: Already exists
a3ed95caeb02: Already exists
10c3bb32200b: Already exists
216868b000fb: Pulling fs layer
216868b000fb: Verifying Checksum
216868b000fb: Download complete
216868b000fb: Pull complete
Digest: sha256:e5a9dc8750d9413c09693cf9157f98f5ef0f1fc71cddbe501bc33db53c09d2cf
Status: Downloaded newer image for quay.io/biocontainers/trimmomatic:0.38--1
quay.io/biocontainers/trimmomatic:0.38--1
INFO [job trimming] /tmp/56lwiofc$ docker \
    run \
    -i \
    --volume=/tmp/56lwiofc:/BPsmJP:rw \
    --volume=/tmp/blxij0r8:/tmp:rw \
    --volume=/home/suecharo/rootless/sapporo_test_workflows/qc_and_trimming/ERR034597_1.small.fq.gz:/var/lib/cwl/stg88663388-daef-4671-8b0e-c46091526923/ERR034597_1.small.fq.gz:ro \
    --volume=/home/suecharo/rootless/sapporo_test_workflows/qc_and_trimming/ERR034597_2.small.fq.gz:/var/lib/cwl/stg3a2eb541-dabc-4bd5-b401-3132fab8d85c/ERR034597_2.small.fq.gz:ro \
    --workdir=/BPsmJP \
    --read-only=true \
    --log-driver=none \
    --user=0:0 \
    --rm \
    --env=TMPDIR=/tmp \
    --env=HOME=/BPsmJP \
    --cidfile=/tmp/ouwwie28/20210825031411-123457.cid \
    quay.io/biocontainers/trimmomatic:0.38--1 \
    trimmomatic \
    PE \
    -threads \
    2 \
    /var/lib/cwl/stg88663388-daef-4671-8b0e-c46091526923/ERR034597_1.small.fq.gz \
    /var/lib/cwl/stg3a2eb541-dabc-4bd5-b401-3132fab8d85c/ERR034597_2.small.fq.gz \
    ERR034597_1.small.fq.trimmed.1P.fq \
    ERR034597_1.small.fq.trimmed.1U.fq \
    ERR034597_1.small.fq.trimmed.2P.fq \
    ERR034597_1.small.fq.trimmed.2U.fq \
    ILLUMINACLIP:/usr/local/share/trimmomatic/adapters/TruSeq2-PE.fa:2:40:15 \
    LEADING:20 \
    TRAILING:20 \
    SLIDINGWINDOW:4:15 \
    MINLEN:36 > /tmp/56lwiofc/trimmomatic-pe-stdout.log 2> /tmp/56lwiofc/trimmomatic-pe-stderr.log
INFO [job trimming] Max memory used: 0MiB
INFO [job trimming] completed success
INFO [step trimming] completed success
INFO [workflow ] starting step qc_1
INFO [step qc_1] start
INFO [job qc_1] /tmp/4omuda8t$ docker \
    run \
    -i \
    --volume=/tmp/4omuda8t:/BPsmJP:rw \
    --volume=/tmp/tij_301w:/tmp:rw \
    --volume=/home/suecharo/rootless/sapporo_test_workflows/qc_and_trimming/ERR034597_1.small.fq.gz:/var/lib/cwl/stg53329028-6cf6-4744-b714-d6e10a598e1f/ERR034597_1.small.fq.gz:ro \
    --workdir=/BPsmJP \
    --read-only=true \
    --log-driver=none \
    --user=0:0 \
    --rm \
    --env=TMPDIR=/tmp \
    --env=HOME=/BPsmJP \
    --cidfile=/tmp/mc_jajjn/20210825031414-783175.cid \
    quay.io/biocontainers/fastqc:0.11.9--0 \
    fastqc \
    -o \
    . \
    --threads \
    2 \
    /var/lib/cwl/stg53329028-6cf6-4744-b714-d6e10a598e1f/ERR034597_1.small.fq.gz > /tmp/4omuda8t/fastqc-stdout.log 2> /tmp/4omuda8t/fastqc-stderr.log
INFO [job qc_1] Max memory used: 0MiB
INFO [job qc_1] completed success
INFO [step qc_1] completed success
INFO [workflow ] completed success
INFO Final process status is success
{
  "qc_result_1": {
    "location": "file:///home/suecharo/rootless/sapporo_test_workflows/results/qc_and_trimming/cwl/ERR034597_1.small_fastqc.html",
    "basename": "ERR034597_1.small_fastqc.html",
    "class": "File",
    "checksum": "sha1$0c15876a3b9b30b024b855cd65376212c6ab6c74",
    "size": 592394,
    "path": "/home/suecharo/rootless/sapporo_test_workflows/results/qc_and_trimming/cwl/ERR034597_1.small_fastqc.html"
  },
  "qc_result_2": {
    "location": "file:///home/suecharo/rootless/sapporo_test_workflows/results/qc_and_trimming/cwl/ERR034597_2.small_fastqc.html",
    "basename": "ERR034597_2.small_fastqc.html",
    "class": "File",
    "checksum": "sha1$2781a231bfc548831f88edf8259b799937212c78",
    "size": 592566,
    "path": "/home/suecharo/rootless/sapporo_test_workflows/results/qc_and_trimming/cwl/ERR034597_2.small_fastqc.html"
  },
  "trimmed_fastq1P": {
    "location": "file:///home/suecharo/rootless/sapporo_test_workflows/results/qc_and_trimming/cwl/ERR034597_1.small.fq.trimmed.1P.fq",
    "basename": "ERR034597_1.small.fq.trimmed.1P.fq",
    "class": "File",
    "checksum": "sha1$5188cdb59cab8010eb673e8bc113b7fd7a686660",
    "size": 5566650,
    "path": "/home/suecharo/rootless/sapporo_test_workflows/results/qc_and_trimming/cwl/ERR034597_1.small.fq.trimmed.1P.fq"
  },
  "trimmed_fastq1U": {
    "location": "file:///home/suecharo/rootless/sapporo_test_workflows/results/qc_and_trimming/cwl/ERR034597_1.small.fq.trimmed.1U.fq",
    "basename": "ERR034597_1.small.fq.trimmed.1U.fq",
    "class": "File",
    "checksum": "sha1$be4d3444537d2596d122db1ca8e5955954094f6b",
    "size": 187289,
    "path": "/home/suecharo/rootless/sapporo_test_workflows/results/qc_and_trimming/cwl/ERR034597_1.small.fq.trimmed.1U.fq"
  },
  "trimmed_fastq2P": {
    "location": "file:///home/suecharo/rootless/sapporo_test_workflows/results/qc_and_trimming/cwl/ERR034597_1.small.fq.trimmed.2P.fq",
    "basename": "ERR034597_1.small.fq.trimmed.2P.fq",
    "class": "File",
    "checksum": "sha1$cd1451093c91a9b619337a7bfbe592ad76554745",
    "size": 5560582,
    "path": "/home/suecharo/rootless/sapporo_test_workflows/results/qc_and_trimming/cwl/ERR034597_1.small.fq.trimmed.2P.fq"
  },
  "trimmed_fastq2U": {
    "location": "file:///home/suecharo/rootless/sapporo_test_workflows/results/qc_and_trimming/cwl/ERR034597_1.small.fq.trimmed.2U.fq",
    "basename": "ERR034597_1.small.fq.trimmed.2U.fq",
    "class": "File",
    "checksum": "sha1$dd7edf1b2d599eb3c8fa49426be55d8a0360561c",
    "size": 80131,
    "path": "/home/suecharo/rootless/sapporo_test_workflows/results/qc_and_trimming/cwl/ERR034597_1.small.fq.trimmed.2U.fq"
  }
}
```

No problems in particular.

```bash=
[suecharo@at038 sapporo_test_workflows]$ ls -l results
total 4
drwxr-xr-x 3 suecharo oo-nig 4096 Aug 25 12:14 qc_and_trimming
```

## Conclusion and Future Work

- Conclusion
  - Initial setup of rootless docker on CentOS 7 required additional packages and kernel parameter changes, but on relatively recent distributions these are configured by default
  - Once these are in place, each user can download the binaries themselves and run rootless docker
  - Network binding, volume mounts, Docker siblings, etc. were verified without notable problems
  - Sapporo also works normally
- Future Work
  - rootless dockerd has not been daemonized, so containers stop on reboots and the like
    - Either provide some kind of wrapper script, or accept this as a limitation
  - For use under UGE, rootless docker would likely be started as part of each job's init processing
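
Such a wrapper script might be sketched as follows. This is a hypothetical sketch, not part of the verification: the `dockerd-rootless.sh` path, `--storage-driver vfs` flag, and socket location follow the sessions above, while the function names, log path, and timeout are assumptions.

```shell
#!/usr/bin/env bash
# Hypothetical wrapper for starting rootless dockerd when it is not running.
set -u

# Pure helper: default rootless docker socket path for a given UID.
sock_path_for_uid() {
  printf '/run/user/%s/docker.sock' "$1"
}

start_dockerd() {
  local sock
  sock="$(sock_path_for_uid "$(id -u)")"
  # Already running? Probe the daemon through the rootless socket.
  if DOCKER_HOST="unix://${sock}" docker info >/dev/null 2>&1; then
    echo "rootless dockerd already running"
    return 0
  fi
  # Detach from the login shell so the daemon survives logout.
  nohup "${HOME}/bin/dockerd-rootless.sh" --storage-driver vfs \
    >"${HOME}/rootless-dockerd.log" 2>&1 &
  # Wait for the socket to appear (up to 30 s), then report.
  for _ in $(seq 30); do
    [ -S "${sock}" ] && { echo "rootless dockerd started"; return 0; }
    sleep 1
  done
  echo "rootless dockerd failed to start" >&2
  return 1
}
```

A UGE job could call `start_dockerd` in its init phase; since the script checks the socket first, calling it repeatedly from multiple jobs of the same user is harmless.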