Detailed View
===
## Table of Contents
[TOC]
# Introduction
Now let's take a detailed look at some specific aspects of the architecture.
## CDN
Everything works fine here =)
## Heads up
The only `important note`: `Cloudflare` enforces a `100-second` limit per request. If our `backend` is not able to `process the workload` within that `100-second` timeframe, the client receives an `error code` (returned by `Cloudflare`, not by us).
Normally we should never hit this limit, but it can happen when requesting huge chunks of data (e.g. reports). Even then the proper fix is to avoid such long-running requests altogether; that work is in the `backlog` (the `dev scope` of it).
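The failure mode can be observed from the client side; the URL path below is just a placeholder for any long-running report endpoint:

```shell
# Placeholder path - substitute any long-running report endpoint.
time curl -s -o /dev/null -w '%{http_code}\n' \
  'https://www.idcreator.com/reports/heavy-report'
# If the backend needs more than ~100 s, Cloudflare answers with
# status 524 ("a timeout occurred") instead of the real response.
```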
## Optimizations
We have numerous `optimizations` enabled in `Cloudflare`.
Just keep in mind that most of them transform the `static content` (compression, minification, etc.), so `hard etags` will `not work` while those optimizations are enabled.
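A quick way to see the effect is to inspect the `ETag` header that Cloudflare actually serves; the asset path below is a placeholder:

```shell
# Placeholder asset path. When Cloudflare compresses or minifies the body,
# the origin's strong ETag is dropped or weakened, so If-None-Match
# revalidation against the origin's hard etags stops matching.
curl -sI 'https://www.idcreator.com/pub/static/styles.css' | grep -i '^etag'
```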
## Google Cloud Platform
### Load balancers
We use a single `LB` for the `k8s cluster`.
### MySQL
We have backups enabled (every 24 hours).
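Assuming the instance lives in Cloud SQL, the backup schedule can be verified with `gcloud` (the instance name below is a placeholder):

```shell
# Placeholder instance name.
gcloud sql backups list --instance my-mysql-instance
# Show the daily backup window start time:
gcloud sql instances describe my-mysql-instance \
  --format 'value(settings.backupConfiguration.startTime)'
```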
## Kubernetes
### Engine
So, we use `GKE` as our engine, but it has some issues.
High CPU load appears to be caused by the `backend agents of GKE`, which run `routines for collecting metrics` from each `pod`; this load is `not reflected` in the `GKE dashboards`.
One observation: the `more pods per node` we have (even at 0% resource consumption or reservation), the `more CPU load` we get on that node.
`Resources available` per `node` in the `k8s cluster`:
* `machine type`: 2 CPU, 7.5 GB RAM
* `available for allocation`: 2 CPU, 5.5 GB RAM
* `k8s agents consume`: ~1 CPU
* `real capacity` per node: `1 CPU`, `5.5 GB RAM`
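These numbers can be verified on a live cluster (node name is a placeholder):

```shell
# Allocatable = machine capacity minus what GKE reserves for its agents.
kubectl describe node <node-name> | grep -A 6 'Allocatable'
# Actual per-node usage (requires metrics-server / heapster):
kubectl top nodes
```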
We can try switching to `another machine type` to get `less overhead` per node
(assuming `1 core and 2 GB RAM` are required to run the `GKE agents`):
1) 2 CPU / 7.5 GB RAM: 50% CPU, ~27% RAM overhead
2) 4 CPU / 15 GB RAM: 25% CPU, ~13% RAM overhead
3) 8 CPU / 30 GB RAM: ~13% CPU, ~7% RAM overhead
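The overhead figures above can be recomputed with a small shell sketch (it assumes the 1 CPU / 2 GB GKE-agent reservation stated above):

```shell
# overhead <agents_use> <node_total> -> percentage with one decimal place
overhead() {
  awk -v a="$1" -v t="$2" 'BEGIN { printf "%.1f", 100 * a / t }'
}

# Candidate machine types: "<cores> <ram_gb>"
for spec in "2 7.5" "4 15" "8 30"; do
  set -- $spec
  echo "$1 CPU / $2 GB RAM: $(overhead 1 "$1")% CPU, $(overhead 2 "$2")% RAM overhead"
done
```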
Other important points to take into consideration:
* the fewer nodes we have in GKE, the less we pay for GKE;
* preemptible instances cost only ~20% of the normal price, so an 8 CPU / 30 GB RAM preemptible node comes at just ~20% of the regular cost.
So the `final proposal` is to move to:
`4 CPU / 15 GB RAM` `preemptible` or `8 CPU / 30 GB RAM` `preemptible`
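A sketch of creating such a pool with `gcloud` (pool, cluster, and zone names are placeholders; `n1-standard-4` is 4 vCPU / 15 GB, `n1-standard-8` is 8 vCPU / 30 GB):

```shell
# Placeholder names throughout - adjust to the real cluster.
gcloud container node-pools create preemptible-pool \
  --cluster my-cluster \
  --zone us-central1-a \
  --machine-type n1-standard-4 \
  --preemptible \
  --num-nodes 3
```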
### Ingress
#### YAML
```yaml=
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/backend-protocol: https
    nginx.ingress.kubernetes.io/client-max-body-size: 256m
    nginx.ingress.kubernetes.io/cors-allow-credentials: "true"
    nginx.ingress.kubernetes.io/cors-allow-methods: PUT,GET,POST,OPTIONS
    nginx.ingress.kubernetes.io/cors-allow-origin: https://www.idcreator.com
    nginx.ingress.kubernetes.io/enable-cors: "true"
    nginx.ingress.kubernetes.io/proxy-body-size: 256m
  name: idc
  namespace: prod
spec:
  rules:
  - host: www.idcreator.com
    http:
      paths:
      - backend:
          serviceName: ingress-e19732be851ff854e45de7f8743c31a5
          servicePort: 443
  - host: media.idcreator.com
    http:
      paths:
      - backend:
          serviceName: ingress-ade62b97717513741d1536be1df17eb8
          servicePort: 443
  tls:
  - hosts:
    - www.idcreator.com
    - media.idcreator.com
```
### Pods
#### IDC
##### YAML
```yaml=
apiVersion: apps/v1
kind: Deployment
metadata:
  name: idc
  namespace: prod
spec:
  progressDeadlineSeconds: 600
  replicas: 8
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      workload.user.cattle.io/workloadselector: deployment-prod-idc
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: idc
        workload.user.cattle.io/workloadselector: deployment-prod-idc
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: instance_type
                operator: In
                values:
                - persistent
                - preemtible
      containers:
      - image: dawnbreather/php-fpm:idc
        imagePullPolicy: Always
        name: idc
        ports:
        - containerPort: 443
          name: https
          protocol: TCP
        resources: {}
        securityContext:
          allowPrivilegeEscalation: false
          capabilities: {}
          privileged: false
          readOnlyRootFilesystem: false
          runAsNonRoot: false
        stdin: true
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        tty: true
        volumeMounts:
        - mountPath: /home/idc/www
          name: static-content
        - mountPath: /home/idc/www/pub/media/storage
          name: media-prod
        - mountPath: /home/idc/www/pub/static/_cache
          name: cache-prod
      - image: dawnbreather/nginx:idc
        imagePullPolicy: Always
        livenessProbe:
          failureThreshold: 3
          initialDelaySeconds: 10
          periodSeconds: 2
          successThreshold: 1
          tcpSocket:
            port: 443
          timeoutSeconds: 2
        name: nginx
        ports:
        - containerPort: 443
          name: https
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          initialDelaySeconds: 10
          periodSeconds: 2
          successThreshold: 2
          tcpSocket:
            port: 443
          timeoutSeconds: 2
        resources: {}
        securityContext:
          allowPrivilegeEscalation: false
          privileged: false
          readOnlyRootFilesystem: false
          runAsNonRoot: false
        stdin: true
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        tty: true
        volumeMounts:
        - mountPath: /home/idc/www
          name: static-content
        - mountPath: /home/idc/www/pub/media/storage
          name: media-prod
        - mountPath: /home/idc/www/pub/static/_cache
          name: cache-prod
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - hostPath:
          path: /tmp/static-content-prod/www
          type: Directory
        name: static-content
      - name: media-prod
        persistentVolumeClaim:
          claimName: media-prod
      - name: cache-prod
        persistentVolumeClaim:
          claimName: cache-prod
```
##### Images
* `dawnbreather/php-fpm:idc`
* `dawnbreather/nginx:idc`
##### Description
We use the `idc` pod for the major processing in our backend.
It `serves static content` through an `nginx sidecar` and performs PHP processing via `php-fpm`.
##### Dockerfile
###### PHP-FPM
```dockerfile=
FROM ubuntu:18.04 as zint-builder
RUN apt update && apt-get install -y cmake make g++ qtbase5-dev wget libpng-dev
RUN wget https://sourceforge.net/projects/zint/files/zint/2.6.3/zint-2.6.3_final.tar.gz
RUN tar xvf zint-2.6.3_final.tar.gz
RUN cd zint* \
&& cmake . \
&& make \
&& make install
FROM ubuntu:18.04
RUN apt update \
&& apt install -y software-properties-common \
&& add-apt-repository ppa:ondrej/php \
&& apt update \
&& apt install -y php7.1 php7.1-fpm php7.1-common php7.1-gmp php7.1-curl php7.1-soap php7.1-bcmath php7.1-intl php7.1-mbstring php7.1-xmlrpc php7.1-mcrypt php7.1-mysql php7.1-gd php7.1-xml php7.1-cli php7.1-zip \
&& apt install -y freetype* \
&& apt install -y build-essential libtool libxml2-dev libcurl4-gnutls-dev libwebp-dev libjpeg-dev libpng-dev libxpm-dev libfreetype6-dev libbz2-dev pkg-config libssl-dev \
&& apt install -y curl wget \
&& apt install -y screen \
&& apt install -y sudo openssh-client rsync vim git \
&& apt install -y openssh-server \
&& rm -rf /var/lib/apt/lists/*
COPY --from=zint-builder /usr/local/share/apps/cmake/modules/FindZint.cmake /usr/local/share/apps/cmake/modules/FindZint.cmake
COPY --from=zint-builder /usr/local/lib/libzint.so.2.6.3 /usr/local/lib/libzint.so.2.6.3
COPY --from=zint-builder /usr/local/lib/libzint.so.2.6 /usr/local/lib/libzint.so.2.6
COPY --from=zint-builder /usr/local/lib/libzint.so /usr/local/lib/libzint.so
COPY --from=zint-builder /usr/local/include/zint.h /usr/local/include/zint.h
COPY --from=zint-builder /usr/local/bin/zint /usr/local/bin/zint
COPY php-fpm.d/php-fpm.conf-ubuntu /etc/php/7.1/fpm/php-fpm.conf
COPY php.ini /etc/php/7.1/fpm/php.ini
#COPY --from=gcr.io/cloudsql-docker/gce-proxy:1.11 /cloud_sql_proxy /bin
# Composer
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer \
&& mkdir /root/.composer || :
COPY composer-auth.json /root/.composer/auth.json
# GCloud PROXY SQL
COPY --from=gcr.io/cloudsql-docker/gce-proxy:latest /cloud_sql_proxy /bin
# OpenSSH SERVER service enable
RUN update-rc.d ssh defaults
RUN usermod --shell /bin/bash www-data \
&& usermod -d /home/idc www-data
RUN mkdir -p /home/idc/.ssh \
&& mkdir -p /home/idc/www \
&& chown -R www-data:www-data /home/idc \
&& mkdir -p /root/.ssh
COPY system/ssh/* /home/idc/.ssh/
COPY system/ssh/authorized_keys /root/.ssh/authorized_keys
RUN chown -R www-data:www-data /home/idc/.ssh \
&& chmod 600 -R /home/idc/.ssh/* \
&& chmod 600 -R /root/.ssh
RUN usermod -aG sudo www-data
RUN echo "www-data ALL=(ALL) NOPASSWD:ALL" >> /etc/sudoers
WORKDIR /home/idc/www
CMD ["/usr/sbin/php-fpm7.1", "--nodaemonize", "--fpm-config", "/etc/php/7.1/fpm/php-fpm.conf"]
```
###### Nginx
```dockerfile=
FROM nginx
RUN mkdir -p /etc/nginx/snippets \
&& mkdir -p /etc/nginx/conf.d \
&& mkdir -p /etc/nginx/ssl
COPY config/general/fastcgi_params.conf /etc/nginx/
COPY config/general/mime.conf /etc/nginx/
COPY config/general/nginx.conf /etc/nginx/
COPY config/snippets/301-map.conf /etc/nginx/snippets/
COPY config/snippets/301-rewrites.conf /etc/nginx/snippets/
COPY config/site/idcreator.conf /etc/nginx/conf.d/
COPY config/general/ssl.conf /etc/nginx
COPY config/general/ssl/wild.idcreator.com-2029-11-26-152548.cer /etc/nginx/ssl/cert.crt
COPY config/general/ssl/wild.idcreator.com-2029-11-26-152548.pkey /etc/nginx/ssl/cert.key
# copy sev binary
COPY --from=dawnbreather/sev:alpine /bin/sev /bin/sev
# sev env vars storage
ENV VAR_NAMES_STORAGE_PATH=""
ENV VAR_NAMES_STORAGE="PHP_FPM_SOCKET, SERVER_NAMES"
# idcreator env vars
ENV PHP_FPM_SOCKET="localhost:9000"
ENV SERVER_NAMES="*.idcreator.com"
EXPOSE 443
CMD ["/bin/bash", "-c", "sev /etc/nginx/conf.d/idcreator.conf && nginx -g \"daemon off;\""]
```
#### Cloud Sql Proxy
##### YAML
```yaml=
```
##### Image
gcr.io/cloudsql-docker/gce-proxy:1.12
##### Description
This is a `Google utility` that establishes connectivity to `DB engines` hosted on the `Cloud SQL service`, since those do not expose `direct endpoints`. `Cloud SQL Proxy` opens `secure tunnels` to the `SQL instances` and exposes the corresponding `TCP ports` locally.
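A typical invocation looks like this (the connection name is a placeholder; ours differs):

```shell
# The -instances value has the form project:region:instance.
/cloud_sql_proxy -instances=my-project:us-central1:my-db=tcp:3306
# Applications then connect to 127.0.0.1:3306 as if MySQL were local.
```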
#### Jenkins worker
##### YAML
```yaml=
apiVersion: apps/v1
kind: Deployment
metadata:
name: master-agent-jnlp
namespace: jenkins
spec:
progressDeadlineSeconds: 600
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
workload.user.cattle.io/workloadselector: deployment-jenkins-master-agent-jnlp
strategy:
type: Recreate
template:
metadata:
annotations:
cattle.io/timestamp: "2019-12-05T10:31:17Z"
creationTimestamp: null
labels:
workload.user.cattle.io/workloadselector: deployment-jenkins-master-agent-jnlp
spec:
affinity: {}
containers:
- env:
- name: JENKINS_AGENT_NAME
value: master-agent
- name: JENKINS_AGENT_SECRET
value: d908773a2e826862faecb0daf2e6144882234f2391979729d591051e70a97c8e
envFrom:
- secretRef:
name: env-prod
optional: false
image: dawnbreather/jenkins-agent:idc
imagePullPolicy: Always
name: master-agent-jnlp
resources:
limits:
cpu: 1500m
memory: 3000Mi
requests:
cpu: 50m
memory: 128Mi
securityContext:
allowPrivilegeEscalation: true
privileged: true
readOnlyRootFilesystem: false
runAsNonRoot: false
stdin: true
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
tty: true
dnsConfig: {}
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
```
##### Image
dawnbreather/jenkins-agent:idc
##### Description
We use this pod in the `CI/CD` workflow; it processes all the `CI/CD`-related operations.
##### Dockerfile
```dockerfile=
## https://hub.docker.com/r/jenkins/slave/dockerfile ##
#######################################################
FROM openjdk:8-jdk as jenkins-agent
ARG VERSION=3.35
ARG user=jenkins
ARG group=jenkins
ARG uid=1000
ARG gid=1000
RUN groupadd -g ${gid} ${group}
RUN useradd -c "Jenkins user" -d /home/${user} -u ${uid} -g ${gid} -m ${user}
LABEL Description="This is a base image, which provides the Jenkins agent executable (agent.jar)" Vendor="Jenkins project" Version="${VERSION}"
ARG AGENT_WORKDIR=/home/${user}/agent
RUN echo 'deb http://deb.debian.org/debian stretch-backports main' > /etc/apt/sources.list.d/stretch-backports.list
RUN apt-get update && apt-get install -y -t stretch-backports git-lfs
RUN curl --create-dirs -fsSLo /usr/share/jenkins/agent.jar https://repo.jenkins-ci.org/public/org/jenkins-ci/main/remoting/${VERSION}/remoting-${VERSION}.jar \
&& chmod 755 /usr/share/jenkins \
&& chmod 644 /usr/share/jenkins/agent.jar \
&& ln -sf /usr/share/jenkins/agent.jar /usr/share/jenkins/slave.jar
USER ${user}
ENV AGENT_WORKDIR=${AGENT_WORKDIR}
RUN mkdir /home/${user}/.jenkins && mkdir -p ${AGENT_WORKDIR}
VOLUME /home/${user}/.jenkins
VOLUME ${AGENT_WORKDIR}
WORKDIR /home/${user}
## https://hub.docker.com/r/jenkinsci/jnlp-slave/dockerfile ##
##############################################################
ARG user=jenkins
USER root
## https://github.com/tehranian/dind-jenkins-slave/blob/master/Dockerfile ##
############################################################################
# Adapted from: https://registry.hub.docker.com/u/jpetazzo/dind/dockerfile/
RUN apt-get update -qq && apt-get install -qqy \
apt-transport-https \
ca-certificates \
curl \
software-properties-common && \
rm -rf /var/lib/apt/lists/*
RUN curl -fsSL https://download.docker.com/linux/debian/gpg | apt-key add -
RUN add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/debian \
$(lsb_release -cs) \
stable"
# Install Docker from Docker Inc. repositories.
RUN apt-get update && apt-get install -y docker-ce && rm -rf /var/lib/apt/lists/*
ADD wrapdocker /usr/local/bin/wrapdocker
RUN chmod +x /usr/local/bin/wrapdocker
VOLUME /var/lib/docker
RUN usermod -a -G docker ${user}
## Dockerfile.ubuntu-2 ##
#########################
RUN apt update \
&& apt install -y software-properties-common sudo \
&& apt install -y ca-certificates apt-transport-https \
#&& add-apt-repository ppa:ondrej/php \
&& wget -q https://packages.sury.org/php/apt.gpg -O- | apt-key add - \
&& echo "deb https://packages.sury.org/php/ stretch main" | tee /etc/apt/sources.list.d/php.list \
&& apt update \
&& apt install -y php7.1 php7.1-common php7.1-gmp php7.1-curl php7.1-soap php7.1-bcmath php7.1-intl php7.1-mbstring php7.1-xmlrpc php7.1-mcrypt php7.1-mysql php7.1-gd php7.1-xml php7.1-cli php7.1-zip \
&& apt install -y freetype* \
&& apt install -y build-essential libtool libxml2-dev libcurl4-gnutls-dev libwebp-dev libjpeg-dev libpng-dev libxpm-dev libfreetype6-dev libbz2-dev pkg-config libssl-dev \
&& apt install -y curl wget \
&& apt install -y screen \
&& apt install -y openssh-client rsync vim git \
&& apt install -y python python-pip \
#&& apt install -y openssh-server \
&& rm -rf /var/lib/apt/lists/*
# Composer
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer \
&& mkdir /root/.composer || :
COPY composer-auth.json /root/.composer/auth.json
RUN usermod --shell /bin/bash www-data \
&& usermod -d /home/idc www-data
RUN mkdir -p /home/idc/.ssh \
&& mkdir -p /home/idc/www \
&& chown -R www-data:www-data /home/idc
COPY system/ssh/* /home/idc/.ssh/
RUN chown -R www-data:www-data /home/idc/.ssh \
&& chmod 600 -R /home/idc/.ssh/*
# Node.js
ENV NVM_DIR /usr/local/nvm
ENV NVM_BIN_DIR /opt/nvm
ENV NODE_VERSION 4.2.6
RUN cp /root/.bashrc /home/idc/ \
&& echo export NVM_DIR=/usr/local/nvm >> /home/idc/.bashrc \
&& echo export NODE_VERSION=/opt/nvm >> /home/idc/.bashrc \
&& echo export NVM_BIN_DIR=/opt/nvm >> /home/idc/.bashrc \
&& echo source $NVM_BIN_DIR/nvm.sh >> /home/idc/.bashrc \
&& chown www-data:www-data /home/idc/.bashrc
RUN git clone https://github.com/creationix/nvm.git ${NVM_BIN_DIR}
RUN mkdir -p ${NVM_DIR}
SHELL ["/bin/bash", "-c"]
RUN source ${NVM_BIN_DIR}/nvm.sh \
&& nvm install $NODE_VERSION \
&& nvm alias default $NODE_VERSION \
&& nvm use default
ENV NODE_PATH ${NVM_DIR}/v${NODE_VERSION}/lib/node_modules
ENV PATH ${NVM_DIR}/versions/node/v${NODE_VERSION}/bin:${PATH}
# Ansible
RUN pip install ansible
# Bower
RUN npm install -g bower gulp
# Jenkins-agent
COPY jenkins-agent /usr/local/bin/jenkins-agent
RUN chmod +x /usr/local/bin/jenkins-agent &&\
ln -s /usr/local/bin/jenkins-agent /usr/local/bin/jenkins-slave
#USER ${user}
ENV JNLP_PROTOCOL_OPTS -Dorg.jenkinsci.remoting.engine.JnlpProtocol3.disabled=true
ENTRYPOINT ["/bin/bash", "-c", "/usr/local/bin/wrapdocker & jenkins-agent"]
# -url http://jenkins.jenkins:8080/ -workDir /home/jenkins/agent -headless -tunnel jenkins-agent.jenkins:50000 <secret> <agent-node-name>
```
#### Static Storage Sync
##### Image
dawnbreather/static-storage-sync:idc
##### Description
We use the `static-storage-sync` pod to deploy updated static content to the `nodes`.
##### Dockerfile
```dockerfile=
FROM debian:stretch as static-storage-sync
ENV NEW_HOME=/home/www-data
RUN apt-get update \
&& apt-get install -y git openssh-client sudo \
&& rm -rf /var/lib/apt/lists/*
RUN usermod --shell /bin/bash www-data \
&& usermod -d /home/www-data www-data
RUN mkdir -p ${NEW_HOME}/.ssh
COPY system/ssh/* ${NEW_HOME}/.ssh/
RUN chmod 600 -R ${NEW_HOME}/.ssh/* \
&& chown -R 33:33 ${NEW_HOME}/.ssh
RUN echo "StrictHostKeyChecking no" >> /etc/ssh/ssh_config
ENV SYNC_GIT_REPO_URL=""
ENV SYNC_STATIC_STORAGE_PATH="/home/idc/www"
ENV SYNC_GIT_BRANCH="staging"
CMD ["/bin/bash", "-c", \
"env | sort | grep SYNC_ \
&& mkdir -p ${SYNC_STATIC_STORAGE_PATH} \
&& sudo chown -R 33:33 ${SYNC_STATIC_STORAGE_PATH} \
&& rm -rf ${SYNC_STATIC_STORAGE_PATH}/.git/index.lock \
&& su -c \"git clone ${SYNC_GIT_REPO_URL} ${SYNC_STATIC_STORAGE_PATH} || echo Static storage already existing && echo Static storage cloned \" www-data \
&& cd ${SYNC_STATIC_STORAGE_PATH} \
&& su -c \"git checkout ${SYNC_GIT_BRANCH} --force\" www-data \
&& su -c \"git add *\" www-data \
&& su -c \"git stash\" www-data \
&& su -c \"git pull\" www-data \
&& cat" ]
```
#### Redis
We use `redis clusters` deployed over `helm charts`:
* `redis-session`
* `redis-general`
* `redis-frontend`
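Each cluster can be (re)deployed with Helm; a sketch with placeholder chart values (our actual values differ):

```shell
# Helm 2 style; the release name matches a cluster name from the list above.
helm install --name redis-session stable/redis \
  --namespace prod \
  --set usePassword=false
```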
#### Registries
* `dawnbreather`
This is access to a private space on `Docker Hub` where we store our `container images` (a temporary arrangement).
## CI/CD
### Jenkins
Our Jenkins is deployed via a `helm chart` into the `k8s cluster` and has `k8s` integration out of the box. So we don't keep any `agents` running permanently; they are `ephemeral`, created `on demand` as `pods` in the `cluster`.
#### Jenkinsfile
We handle only `master`, `staging` and `develop` branches.
```groovy=
def String determineRepoName() {
return scm.getUserRemoteConfigs()[0].getUrl().tokenize('/').last().split("\\.")[0]
}
def stageSwitcher = [
printEnvironmentVariables : true,
compileStaticContent : true,
copyStaticContentToGitStorage : true,
cleanMagentoCache : true,
updateStaticContent : true,
]
/* Declarative pipeline must be enclosed within a pipeline block */
pipeline {
environment {
// IMG_REG_URL="harbor.idcreator.com"
// IMG_REG_CREDS=credentials("harbor-credentials")
STATIC_STORAGE_GIT_URL="git@gitea.scopicsoftware.com:idc/static-content-storage.git"
STATIC_STORAGE_LOCAL_REPO_PATH="/home/jenkins/git-static-storage"
RANCHER_BASE_URL="https://rancher.idcreator.com/"
RANCHER_TOKEN=credentials("rancher-token")
CLUSTER_NAME="prod"
}
//agent {
// kubernetes {
// label "ci-${determineRepoName()}"
// yamlFile 'k8s-ci-pod.yml'
// defaultContainer 'jnlp'
// }
//}
agent { label "${BRANCH_NAME}-agent" }
stages {
stage('Print environment variables') {
when { expression { "${stageSwitcher.printEnvironmentVariables}" == "true" } }
steps {
script {
echo sh(script: 'env|sort', returnStdout: true);
}
}
}
stage('Compile static content'){
when { expression { "${stageSwitcher.compileStaticContent}" == "true" } }
steps {
script {
//container('node-builder'){
sh '''
cd badge-maker-source
npm install
bower install --allow-root
# wget https://storage.googleapis.com/idc-kendo/kendo.all.min.js -O bower_components/kendo-ui-core/js/kendo.all.min.js
gulp build --env production
'''
//}
//container('php-fpm'){
sh '''
composer install
php bin/magento setup:upgrade
php bin/magento setup:di:compile
php bin/magento setup:static-content:deploy
'''
//}
}
}
}
stage('Copy static content to Git storage'){
when { expression { "${stageSwitcher.copyStaticContentToGitStorage}" == "true" } }
steps {
sh '''
export git_static_storage_path=${STATIC_STORAGE_LOCAL_REPO_PATH}
export current_working_dir=`pwd`
export gssp=$git_static_storage_path
export cwd=$current_working_dir
git clone ${STATIC_STORAGE_GIT_URL} $gssp || echo 0
cd $gssp
git checkout ${BRANCH_NAME}
git pull --force
rsync -a --info=progress2 --delete $cwd/badge-maker/ $gssp/badge-maker
rsync -a --info=progress2 --delete $cwd/app/ $gssp/app
rsync -a --info=progress2 --delete $cwd/bin/ $gssp/bin
rsync -a --info=progress2 --delete $cwd/lib/ $gssp/lib
rsync -a --info=progress2 --delete $cwd/generated/ $gssp/generated
rsync -a --info=progress2 --delete $cwd/pub/ $gssp/pub
rsync -a --info=progress2 --delete $cwd/tcpdf-fonts/ $gssp/tcpdf-fonts
rsync -a --info=progress2 --delete $cwd/var/ $gssp/var
rsync -a --info=progress2 --delete $cwd/vendor/ $gssp/vendor
rsync -a --info=progress2 --delete $cwd/dev/ $gssp/dev
rsync -a --info=progress2 --delete $cwd/setup/ $gssp/setup
rsync -a --info=progress2 --delete $cwd/update/ $gssp/update
rsync -a --info=progress2 --delete $cwd/index.php $gssp
rsync -a --info=progress2 --delete $cwd/info.php $gssp
rsync -a --info=progress2 --delete $cwd/auth.json $gssp
rsync -a --info=progress2 --delete $cwd/composer.json $gssp
rsync -a --info=progress2 --delete $cwd/composer.lock $gssp
find $gssp/vendor -type d -name ".git*" -print0 | xargs -0 -I {} /bin/rm -rf "{}"
git add .
git commit -m "${GIT_COMMIT}"
git push origin "${BRANCH_NAME}"
'''
}
}
stage('Clean magento cache'){
when { expression { "${stageSwitcher.cleanMagentoCache}" == "true" } }
steps {
sh '''
php bin/magento cache:flush
'''
}
}
stage('Update static content'){
when { expression { "${stageSwitcher.updateStaticContent}" == "true" } }
steps {
sh '''
export PROJECT_NAME=production
export NAMESPACE_NAME=prod
export WORKLOAD_NAME=static-content-each-node
export WORKLOAD_LABELS="updated=${BRANCH_NAME}-${BUILD_NUMBER}"
rwu upgrade-workload
'''
}
}
}
}
```