Mongo YCSB
===
###### tags: `kubernetes`
YCSB ships with six core workloads:
### Workload A: Update heavy workload
This workload has a mix of 50/50 reads and writes. An application example is a session store recording recent actions.
### Workload B: Read mostly workload
This workload has a 95/5 read/write mix. Application example: photo tagging; adding a tag is an update, but most operations read tags.
### Workload C: Read only
This workload is 100% read. Application example: user profile cache, where profiles are constructed elsewhere (e.g., Hadoop).
### Workload D: Read latest workload
In this workload, new records are inserted, and the most recently inserted records are the most popular. Application example: user status updates; people want to read the latest.
### Workload E: Short ranges
In this workload, short ranges of records are queried, instead of individual records. Application example: threaded conversations, where each scan is for the posts in a given thread (assumed to be clustered by thread id).
### Workload F: Read-modify-write
In this workload, the client will read a record, modify it, and write back the changes. Application example: user database, where user records are read and modified by the user or to record user activity.
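Each core workload is defined by a small properties file under `workloads/` in the YCSB distribution. As a reference point, the stock `workloads/workloada` (the update-heavy mix used in the runs below) looks roughly like this; the proportions are the defining part, other values are tuned per run:

```properties
# workloada: update-heavy core workload, 50/50 read/update
workload=com.yahoo.ycsb.workloads.CoreWorkload
readallfields=true
readproportion=0.5
updateproportion=0.5
scanproportion=0
insertproportion=0
requestdistribution=zipfian
```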
### Related issue: https://github.com/kubernetes/kubernetes/issues/19825
testing 1st: load worka (m1.1)
start 17/5/2019 16:36, end 17/5/2019 17:00
threads: 100
records: 7,000,000
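This load phase corresponds to a YCSB invocation along these lines (the MongoDB URL is a placeholder, not taken from the notes; point it at the service's external IP):

```shell
# Load 7,000,000 records into MongoDB with 100 client threads.
# -s prints status every 10s; swap "load" for "run" in the run phase.
./bin/ycsb load mongodb -s \
    -P workloads/workloada \
    -p mongodb.url="mongodb://<EXTERNAL-IP>:27017/ycsb" \
    -p recordcount=7000000 \
    -threads 100
```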
```json=
"fsUsedSize" : 8985096192.0,
"fsTotalSize" : 10434699264.0,
```
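The `db.stats()` figures above show the data volume already about 86% full after the load, a quick sanity check:

```shell
# Ratio of fsUsedSize to fsTotalSize from the stats above.
awk 'BEGIN { printf "%.1f%%\n", 8985096192 / 10434699264 * 100 }'
# prints 86.1%
```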
Pods CPU Usage

Pods Memory Usage

Pods Network IO

-----------------------
testing 1st: run worka (m1.1)
start 17/5/2019 17:06, end 17/5/2019 17:40
Pods CPU Usage

Pods Memory Usage

-----------------------
testing 2nd: load worka (m1.2)
start 17/5/2019 17:53, end 17/5/2019 17:00
Pods CPU Usage

Pods Memory Usage

-----------------------
testing 2nd: run worka (m1.2)
start 20/5/2019 09:17, end 20/5/2019 09:50
Pods CPU Usage

Pods Memory Usage

-----------------------
testing 3rd: load workb (m1.1)
start 20/5/2019 10:01, end 20/5/2019 10:26
Pods CPU Usage

Pods Memory Usage

-----------------------
testing 3rd: run workb (m1.1)
start 20/5/2019 10:38, end 20/5/2019 11:16
Pods CPU Usage

Pods Memory Usage

-----------------------
testing 4th: load workb (m1.2)
start 20/5/2019 14:32, end 20/5/2019 14:57
Pods CPU Usage

Pods Memory Usage

-----------------------
testing 4th: run workb (m1.2)
start 20/5/2019 15:00, end 20/5/2019 15:37
Pods CPU Usage

Pods Memory Usage

-----------------------
testing 5th: load worka (m2.2)
start 20/5/2019 16:38, end 20/5/2019 16:55
Pods CPU Usage

Pods Memory Usage

-----------------------
testing 5th: run worka (m2.2)
start 20/5/2019 16:55, end 20/5/2019 17:12
Pods CPU Usage

Pods Memory Usage

-----------------------
testing 6th: load workb (m2.2)
start 20/5/2019 17:29, end 20/5/2019 17:46
Pods CPU Usage

Pods Memory Usage

-----------------------
testing 6th: run workb (m2.2)
start 20/5/2019 17:46, end 20/5/2019 18:08
Pods CPU Usage

Pods Memory Usage

```
LAST DEPLOYED: Mon May 27 03:04:35 2019
NAMESPACE: mongo-test
STATUS: DEPLOYED
RESOURCES:
==> v1/PersistentVolumeClaim
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
mongodb-mongo Bound pvc-e841e555-8024-11e9-8580-d2cfa35ed84c 3Gi RWO default 54m
==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
mongodb-mongo-84b76cd55b-bgtnf 0/1 Running 0 2m34s
==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
mongodb-mongo LoadBalancer 10.0.213.7 65.52.187.145 27017:30090/TCP 54m
==> v1beta1/Deployment
NAME READY UP-TO-DATE AVAILABLE AGE
mongodb-mongo 0/1 1 0 2m34s
```
```
LAST DEPLOYED: Mon May 27 03:04:35 2019
NAMESPACE: mongo-test
STATUS: DEPLOYED
RESOURCES:
==> v1/PersistentVolumeClaim
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
mongodb-mongo Bound pvc-e841e555-8024-11e9-8580-d2cfa35ed84c 3Gi RWO default 54m
==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
mongodb-mongo-84b76cd55b-bgtnf 1/1 Running 0 3m2s
==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
mongodb-mongo LoadBalancer 10.0.213.7 65.52.187.145 27017:30090/TCP 54m
==> v1beta1/Deployment
NAME READY UP-TO-DATE AVAILABLE AGE
mongodb-mongo 1/1 1 1 3m2s
```
```
LAST DEPLOYED: Mon May 27 02:56:48 2019
NAMESPACE: mongo-test
STATUS: DEPLOYED
RESOURCES:
==> v1/PersistentVolumeClaim
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
mongodb-mongo Bound pvc-e841e555-8024-11e9-8580-d2cfa35ed84c 2Gi RWO default 44m
==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
mongodb-mongo-84b76cd55b-dbvss 0/1 ContainerCreating 0 17s
==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
mongodb-mongo LoadBalancer 10.0.213.7 65.52.187.145 27017:30090/TCP 44m
==> v1beta1/Deployment
NAME READY UP-TO-DATE AVAILABLE AGE
mongodb-mongo 0/1 1 0 18s
```
```
apiVersion: v1
kind: Node
metadata:
  annotations:
    node.alpha.kubernetes.io/ttl: "0"
    volumes.kubernetes.io/controller-managed-attach-detach: "true"
  creationTimestamp: "2019-05-15T09:01:00Z"
  labels:
    agentpool: nodepool1
    beta.kubernetes.io/arch: amd64
    beta.kubernetes.io/instance-type: Standard_D4s_v3
    beta.kubernetes.io/os: linux
    failure-domain.beta.kubernetes.io/region: eastasia
    failure-domain.beta.kubernetes.io/zone: "0"
    kubernetes.azure.com/cluster: MC_AKS-MongoTest-Group_AKS-MongoTest_eastasia
    kubernetes.io/hostname: aks-nodepool1-29460110-0
    kubernetes.io/role: agent
    node-role.kubernetes.io/agent: ""
    storageprofile: managed
    storagetier: Premium_LRS
  name: aks-nodepool1-29460110-0
  resourceVersion: "100289"
  selfLink: /api/v1/nodes/aks-nodepool1-29460110-0
  uid: f5e7946d-76ef-11e9-8580-d2cfa35ed84c
spec:
  podCIDR: 10.244.0.0/24
@@@
```
```
ubuntu@bimo-dev:~/helm/charts/stable/mongodb$ helm status mongodb-mongo
LAST DEPLOYED: Thu May 16 06:18:44 2019
NAMESPACE: mongo-test
STATUS: DEPLOYED
RESOURCES:
==> v1/PersistentVolumeClaim
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
mongodb-mongo Bound pvc-74efb236-77a2-11e9-8580-d2cfa35ed84c 2Gi RWO default 5m52s
==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
mongodb-mongo-84c45468c6-cxfpb 1/1 Running 0 5m52s
==> v1/Secret
NAME TYPE DATA AGE
mongodb-mongo Opaque 1 5m52s
==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
mongodb-mongo LoadBalancer 10.0.2.18 168.63.220.218 27017:31105/TCP 5m52s
==> v1beta1/Deployment
NAME READY UP-TO-DATE AVAILABLE AGE
mongodb-mongo 1/1 1 1 5m52s
```
```
ubuntu@bimo-dev:~/helm/charts/stable/mongodb$ kubectl -n mongo-test get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
mongodb-mongo Bound pvc-74efb236-77a2-11e9-8580-d2cfa35ed84c 2Gi RWO default 4m29s
ubuntu@bimo-dev:~/helm/charts/stable/mongodb$ kubectl -n mongo-test edit pvc mongodb-mongo
persistentvolumeclaim/mongodb-mongo edited
ubuntu@bimo-dev:~/helm/charts/stable/mongodb$ kubectl
```
```
ubuntu@bimo-dev:~/helm/charts/stable/mongodb$ helm list
NAME REVISION UPDATED STATUS CHART APP VERSION NAMESPACE
agile-squirrel 1 Thu May 16 03:49:42 2019 DEPLOYED prometheus-8.11.2 2.9.2 monitoring
mongodb-mongo 1 Thu May 16 06:18:44 2019 DEPLOYED mongodb-5.17.0 4.0.9 mongo-test
ubuntu@bimo-dev:~/helm/charts/stable/mongodb$ helm status mongodb-mongo
LAST DEPLOYED: Thu May 16 06:18:44 2019
NAMESPACE: mongo-test
STATUS: DEPLOYED
RESOURCES:
==> v1/PersistentVolumeClaim
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
mongodb-mongo Bound pvc-74efb236-77a2-11e9-8580-d2cfa35ed84c 2Gi RWO default 10m
==> v1/Secret
NAME TYPE DATA AGE
mongodb-mongo Opaque 1 10m
==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
mongodb-mongo LoadBalancer 10.0.2.18 168.63.220.218 27017:31105/TCP 10m
==> MISSING
KIND NAME
extensions/v1beta1, Resource=deployments mongodb-mongo
```
```
## Resize storage in values.yaml
accessModes:
  - ReadWriteOnce
size: 2Gi  # increase this to the new size; it must match the size requested on the PVC
annotations: {}
```
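Instead of editing the PVC by hand, the same expansion can be requested with a patch (namespace and claim name are the ones from this test; the storage class must allow expansion):

```shell
# Request a larger size on the existing claim; Kubernetes expands the
# underlying Azure disk in place without recreating the PV.
kubectl -n mongo-test patch pvc mongodb-mongo \
  -p '{"spec":{"resources":{"requests":{"storage":"4Gi"}}}}'
```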
After deleting the deployment, `helm status` reports it as MISSING:
```
LAST DEPLOYED: Mon May 27 02:12:40 2019
NAMESPACE: mongo-test
STATUS: DEPLOYED
RESOURCES:
==> v1/PersistentVolumeClaim
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
mongodb-mongo Bound pvc-e841e555-8024-11e9-8580-d2cfa35ed84c 2Gi RWO default 30m
==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
mongodb-mongo LoadBalancer 10.0.213.7 65.52.187.145 27017:30090/TCP 30m
==> MISSING
KIND NAME
extensions/v1beta1, Resource=deployments mongodb-mongo
```
```
ubuntu@bimo-dev:~/helm/charts/stable/mongodb$ helm status mongodb-mongo
LAST DEPLOYED: Thu May 16 06:31:37 2019
NAMESPACE: mongo-test
STATUS: DEPLOYED
RESOURCES:
==> v1/PersistentVolumeClaim
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
mongodb-mongo Bound pvc-74efb236-77a2-11e9-8580-d2cfa35ed84c 2Gi RWO default 13m
==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
mongodb-mongo-84c45468c6-bzr76 0/1 ContainerCreating 0 13s
==> v1/Secret
NAME TYPE DATA AGE
mongodb-mongo Opaque 1 13m
==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
mongodb-mongo LoadBalancer 10.0.2.18 168.63.220.218 27017:31105/TCP 13m
==> v1beta1/Deployment
NAME READY UP-TO-DATE AVAILABLE AGE
mongodb-mongo 0/1 1 0 13s
```
```
ubuntu@bimo-dev:~/helm/charts/stable/mongodb$ helm status mongodb-mongo
LAST DEPLOYED: Thu May 16 06:31:37 2019
NAMESPACE: mongo-test
STATUS: DEPLOYED
RESOURCES:
==> v1/PersistentVolumeClaim
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
mongodb-mongo Bound pvc-74efb236-77a2-11e9-8580-d2cfa35ed84c 4Gi RWO default 13m
==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
mongodb-mongo-84c45468c6-bzr76 0/1 Running 0 30s
==> v1/Secret
NAME TYPE DATA AGE
mongodb-mongo Opaque 1 13m
==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
mongodb-mongo LoadBalancer 10.0.2.18 168.63.220.218 27017:31105/TCP 13m
==> v1beta1/Deployment
NAME READY UP-TO-DATE AVAILABLE AGE
mongodb-mongo 0/1 1 0 30s
```
The external IP did not change and the data is still there.
```
UPGRADE FAILED
ROLLING BACK
Error: PersistentVolumeClaim "mongodb-mongo" is invalid: spec.resources.requests.storage: Forbidden: field can not be less than previous value
Error: UPGRADE FAILED: PersistentVolumeClaim "mongodb-mongo" is invalid: spec.resources.requests.storage: Forbidden: field can not be less than previous value
ubuntu@bimo-dev:~/helm/charts/stable/mongodb$ helm list
NAME REVISION UPDATED STATUS CHART APP VERSION NAMESPACE
mongodb-mongo 2 Mon May 27 02:12:40 2019 FAILED mongodb-5.17.0 4.0.9 default
```
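This failure is expected: a PVC's requested storage may only grow, never shrink. Whether growing works at all depends on the storage class; a quick check against the `default` class used in these tests:

```shell
# Prints "true" if the storage class supports volume expansion.
kubectl get storageclass default \
  -o jsonpath='{.allowVolumeExpansion}'
```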
## MongoDB crash
```
Welcome to the Bitnami mongodb container
Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-mongodb
Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-mongodb/issues
nami INFO Initializing mongodb
mongodb INFO ==> Deploying MongoDB with persisted data...
mongodb INFO ==> No injected configuration files found. Creating default config files...
mongodb INFO
mongodb INFO ########################################################################
mongodb INFO Installation parameters for mongodb:
mongodb INFO Persisted data and properties have been restored.
mongodb INFO Any input specified will not take effect.
mongodb INFO This installation requires no credentials.
mongodb INFO ########################################################################
mongodb INFO
nami INFO mongodb successfully initialized
INFO ==> Starting mongodb...
INFO ==> Starting mongod...
2019-05-27T02:25:20.670+0000 I CONTROL [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
2019-05-27T02:25:20.673+0000 I CONTROL [initandlisten] MongoDB starting : pid=29 port=27017 dbpath=/opt/bitnami/mongodb/data/db 64-bit host=mongodb-mongo-84b76cd55b-tnjt2
2019-05-27T02:25:20.673+0000 I CONTROL [initandlisten] db version v4.0.9
2019-05-27T02:25:20.673+0000 I CONTROL [initandlisten] git version: fc525e2d9b0e4bceff5c2201457e564362909765
2019-05-27T02:25:20.673+0000 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.1.0j 20 Nov 2018
2019-05-27T02:25:20.673+0000 I CONTROL [initandlisten] allocator: tcmalloc
2019-05-27T02:25:20.673+0000 I CONTROL [initandlisten] modules: none
2019-05-27T02:25:20.673+0000 I CONTROL [initandlisten] build environment:
2019-05-27T02:25:20.673+0000 I CONTROL [initandlisten] distmod: debian92
2019-05-27T02:25:20.673+0000 I CONTROL [initandlisten] distarch: x86_64
2019-05-27T02:25:20.673+0000 I CONTROL [initandlisten] target_arch: x86_64
2019-05-27T02:25:20.673+0000 I CONTROL [initandlisten] 1024 MB of memory available to the process out of 16041 MB total system memory
2019-05-27T02:25:20.673+0000 I CONTROL [initandlisten] options: { config: "/opt/bitnami/mongodb/conf/mongodb.conf", net: { bindIpAll: true, ipv6: true, port: 27017, unixDomainSocket: { enabled: true, pathPrefix: "/opt/bitnami/mongodb/tmp" } }, processManagement: { fork: false, pidFilePath: "/opt/bitnami/mongodb/tmp/mongodb.pid" }, security: { authorization: "disabled" }, setParameter: { enableLocalhostAuthBypass: "true" }, storage: { dbPath: "/opt/bitnami/mongodb/data/db", directoryPerDB: false, journal: { enabled: true } }, systemLog: { destination: "file", logAppend: true, logRotate: "reopen", path: true, quiet: false, verbosity: 0 } }
2019-05-27T02:25:20.673+0000 W STORAGE [initandlisten] Detected unclean shutdown - /opt/bitnami/mongodb/data/db/mongod.lock is not empty.
2019-05-27T02:25:20.673+0000 I STORAGE [initandlisten] Detected data files in /opt/bitnami/mongodb/data/db created by the 'wiredTiger' storage engine, so setting the active storage engine to 'wiredTiger'.
2019-05-27T02:25:20.674+0000 W STORAGE [initandlisten] Recovering data from the last clean checkpoint.
2019-05-27T02:25:20.674+0000 I STORAGE [initandlisten]
2019-05-27T02:25:20.674+0000 I STORAGE [initandlisten] ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine
2019-05-27T02:25:20.674+0000 I STORAGE [initandlisten] ** See http://dochub.mongodb.org/core/prodnotes-filesystem
2019-05-27T02:25:20.674+0000 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=256M,session_max=20000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),statistics_log=(wait=0),verbose=(recovery_progress),
2019-05-27T02:25:23.160+0000 I STORAGE [initandlisten] WiredTiger message [1558923923:160090][29:0x7f97c5e07080], txn-recover: Main recovery loop: starting at 22/256 to 23/256
2019-05-27T02:25:23.160+0000 I STORAGE [initandlisten] WiredTiger message [1558923923:160542][29:0x7f97c5e07080], txn-recover: Recovering log 22 through 23
2019-05-27T02:25:23.248+0000 I STORAGE [initandlisten] WiredTiger message [1558923923:248752][29:0x7f97c5e07080], file:index-3-6563628209115643515.wt, txn-recover: Recovering log 23 through 23
2019-05-27T02:25:23.319+0000 I STORAGE [initandlisten] WiredTiger message [1558923923:319003][29:0x7f97c5e07080], file:index-3-6563628209115643515.wt, txn-recover: Set global recovery timestamp: 0
2019-05-27T02:25:23.390+0000 I RECOVERY [initandlisten] WiredTiger recoveryTimestamp. Ts: Timestamp(0, 0)
2019-05-27T02:25:23.413+0000 I CONTROL [initandlisten]
2019-05-27T02:25:23.413+0000 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2019-05-27T02:25:23.413+0000 I CONTROL [initandlisten] ** We suggest setting it to 'never'
2019-05-27T02:25:23.413+0000 I CONTROL [initandlisten]
2019-05-27T02:25:23.426+0000 I FTDC [initandlisten] Initializing full-time diagnostic data capture with directory '/opt/bitnami/mongodb/data/db/diagnostic.data'
2019-05-27T02:25:23.429+0000 I NETWORK [initandlisten] waiting for connections on port 27017
2019-05-27T02:25:24.025+0000 I FTDC [ftdc] Unclean full-time diagnostic data capture shutdown detected, found interim file, some metrics may have been lost. OK
2019-05-27T02:25:24.031+0000 W FTDC [ftdc] Uncaught exception in 'FileStreamFailed: Failed to write to interim file buffer for full-time diagnostic data capture: /opt/bitnami/mongodb/data/db/diagnostic.data/metrics.interim.temp' in full-time diagnostic data capture subsystem. Shutting down the full-time diagnostic data capture subsystem.
2019-05-27T02:25:24.896+0000 E STORAGE [thread1] WiredTiger error (28) [1558923924:896231][29:0x7f97bde8c700], log-server: __posix_file_write, 579: /opt/bitnami/mongodb/data/db/journal/WiredTigerTmplog.0000000003: handle-write: pwrite: failed to write 128 bytes at offset 0: No space left on device Raw: [1558923924:896231][29:0x7f97bde8c700], log-server: __posix_file_write, 579: /opt/bitnami/mongodb/data/db/journal/WiredTigerTmplog.0000000003: handle-write: pwrite: failed to write 128 bytes at offset 0: No space left on device
2019-05-27T02:25:24.896+0000 E STORAGE [thread1] WiredTiger error (28) [1558923924:896311][29:0x7f97bde8c700], log-server: __log_fs_write, 229: journal/WiredTigerTmplog.0000000003: fatal log failure: No space left on device Raw: [1558923924:896311][29:0x7f97bde8c700], log-server: __log_fs_write, 229: journal/WiredTigerTmplog.0000000003: fatal log failure: No space left on device
2019-05-27T02:25:24.896+0000 E STORAGE [thread1] WiredTiger error (-31804) [1558923924:896339][29:0x7f97bde8c700], log-server: __wt_panic, 520: the process must exit and restart: WT_PANIC: WiredTiger library panic Raw: [1558923924:896339][29:0x7f97bde8c700], log-server: __wt_panic, 520: the process must exit and restart: WT_PANIC: WiredTiger library panic
2019-05-27T02:25:24.896+0000 F - [thread1] Fatal Assertion 50853 at src/mongo/db/storage/wiredtiger/wiredtiger_util.cpp 409
2019-05-27T02:25:24.896+0000 F - [thread1]
***aborting after fassert() failure
2019-05-27T02:25:24.906+0000 F - [thread1] Got signal: 6 (Aborted).
0x5580423d91d1 0x5580423d83e9 0x5580423d88cd 0x7f97c45290e0 0x7f97c41abfff 0x7f97c41ad42a 0x5580409d96a9 0x558040ad7ea6 0x558040b4a629 0x558040960af8 0x558040960f12 0x558040be61b3 0x558040be678e 0x558040be6c50 0x558040bb58ba 0x7f97c451f4a4 0x7f97c4261d0f
----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"55803FFBD000","o":"241C1D1","s":"_ZN5mongo15printStackTraceERSo"},{"b":"55803FFBD000","o":"241B3E9"},{"b":"55803FFBD000","o":"241B8CD"},{"b":"7F97C4518000","o":"110E0"},{"b":"7F97C4179000","o":"32FFF","s":"gsignal"},{"b":"7F97C4179000","o":"3442A","s":"abort"},{"b":"55803FFBD000","o":"A1C6A9","s":"_ZN5mongo32fassertFailedNoTraceWithLocationEiPKcj"},{"b":"55803FFBD000","o":"B1AEA6"},{"b":"55803FFBD000","o":"B8D629"},{"b":"55803FFBD000","o":"9A3AF8","s":"__wt_err_func"},{"b":"55803FFBD000","o":"9A3F12","s":"__wt_panic"},{"b":"55803FFBD000","o":"C291B3"},{"b":"55803FFBD000","o":"C2978E","s":"__wt_log_fill"},{"b":"55803FFBD000","o":"C29C50","s":"__wt_log_allocfile"},{"b":"55803FFBD000","o":"BF88BA"},{"b":"7F97C4518000","o":"74A4"},{"b":"7F97C4179000","o":"E8D0F","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.0.9", "gitVersion" : "fc525e2d9b0e4bceff5c2201457e564362909765", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "4.15.0-1040-azure", "version" : "#44-Ubuntu SMP Thu Feb 21 14:24:01 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "55803FFBD000", "elfType" : 3, "buildId" : "CAFFB91244620B4FC8BEBB82BA92281961A94322" }, { "b" : "7FFE8079B000", "path" : "linux-vdso.so.1", "elfType" : 3, "buildId" : "966CBD1E5C6A8A018B257A3260CD26B0F54BC069" }, { "b" : "7F97C5978000", "path" : "/usr/lib/x86_64-linux-gnu/libcurl.so.4", "elfType" : 3, "buildId" : "816839E99AF235E30CC31450E2ABFABDAA257D24" }, { "b" : "7F97C5761000", "path" : "/lib/x86_64-linux-gnu/libresolv.so.2", "elfType" : 3, "buildId" : "EAD5FD817712E63C1212D1EE7D7EE1B9C29F93A7" }, { "b" : "7F97C52C8000", "path" : "/usr/lib/x86_64-linux-gnu/libcrypto.so.1.1", "elfType" : 3, "buildId" : "A214BCD55713CB1E0B9AA61C07319C9A83A2268C" }, { "b" : "7F97C505C000", "path" : "/usr/lib/x86_64-linux-gnu/libssl.so.1.1", "elfType" : 3, "buildId" : "2BEF491D3EF8E727DF943799D1309AA357BA7D4C" }, { "b" : "7F97C4E58000", "path" : "/lib/x86_64-linux-gnu/libdl.so.2", "elfType" : 3, 
"buildId" : "DB2CAEEEC37482A98AB1416D0A9AFE2944930DE9" }, { "b" : "7F97C4C50000", "path" : "/lib/x86_64-linux-gnu/librt.so.1", "elfType" : 3, "buildId" : "86B35D63FACD97D22973E99EE9863F7714C4F53A" }, { "b" : "7F97C494C000", "path" : "/lib/x86_64-linux-gnu/libm.so.6", "elfType" : 3, "buildId" : "4E49714C557CE0472C798F39365CA10F9C0E1933" }, { "b" : "7F97C4735000", "path" : "/lib/x86_64-linux-gnu/libgcc_s.so.1", "elfType" : 3, "buildId" : "51AD5FD294CD6C813BED40717347A53434B80B7A" }, { "b" : "7F97C4518000", "path" : "/lib/x86_64-linux-gnu/libpthread.so.0", "elfType" : 3, "buildId" : "16D609487BCC4ACBAC29A4EAA2DDA0D2F56211EC" }, { "b" : "7F97C4179000", "path" : "/lib/x86_64-linux-gnu/libc.so.6", "elfType" : 3, "buildId" : "775143E680FF0CD4CD51CCE1CE8CA216E635A1D6" }, { "b" : "7F97C5BF8000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "606DF9C355103E82140D513BC7A25A635591C153" }, { "b" : "7F97C3F53000", "path" : "/usr/lib/x86_64-linux-gnu/libnghttp2.so.14", "elfType" : 3, "buildId" : "57FE530E3C6E81FD243F02556CDC09142D176A2E" }, { "b" : "7F97C3D31000", "path" : "/usr/lib/x86_64-linux-gnu/libidn2.so.0", "elfType" : 3, "buildId" : "52F90A61AFD6B0605DAC537C5D1B8713E8E93889" }, { "b" : "7F97C3B14000", "path" : "/usr/lib/x86_64-linux-gnu/librtmp.so.1", "elfType" : 3, "buildId" : "82864DDD2632F14010AD7740D09B7270901D418D" }, { "b" : "7F97C38E7000", "path" : "/usr/lib/x86_64-linux-gnu/libssh2.so.1", "elfType" : 3, "buildId" : "E12F1273FAC9E2BE7526C7C60D64CF80F846385D" }, { "b" : "7F97C36D9000", "path" : "/usr/lib/x86_64-linux-gnu/libpsl.so.5", "elfType" : 3, "buildId" : "1667EE4ED5224694326899E760722B7B366CEB41" }, { "b" : "7F97C3470000", "path" : "/usr/lib/x86_64-linux-gnu/libssl.so.1.0.2", "elfType" : 3, "buildId" : "F365E3485410A0833832DC04313E2318637E6A37" }, { "b" : "7F97C300A000", "path" : "/usr/lib/x86_64-linux-gnu/libcrypto.so.1.0.2", "elfType" : 3, "buildId" : "D153794665C673EF207DC199FC6A36C3BB59A8C3" }, { "b" : "7F97C2DBF000", "path" : 
"/usr/lib/x86_64-linux-gnu/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "4986F4E8DB61C236489DDC53213B04DB65A2EAA0" }, { "b" : "7F97C2AE5000", "path" : "/usr/lib/x86_64-linux-gnu/libkrb5.so.3", "elfType" : 3, "buildId" : "811575446A67638D151C4829E7040205D92F9C9B" }, { "b" : "7F97C28B2000", "path" : "/usr/lib/x86_64-linux-gnu/libk5crypto.so.3", "elfType" : 3, "buildId" : "19CE7A9BC33E0910065BDFE299DCACFF638BF06E" }, { "b" : "7F97C26AE000", "path" : "/lib/x86_64-linux-gnu/libcom_err.so.2", "elfType" : 3, "buildId" : "2EB9256EE03E4D411C25715BB6EC484BF9B09E66" }, { "b" : "7F97C249F000", "path" : "/usr/lib/x86_64-linux-gnu/liblber-2.4.so.2", "elfType" : 3, "buildId" : "EDE2EA44C0B018BBDB20D71A1C8AC99F0CC3F99F" }, { "b" : "7F97C224E000", "path" : "/usr/lib/x86_64-linux-gnu/libldap_r-2.4.so.2", "elfType" : 3, "buildId" : "EB45F0CC6A96D38B78D97C87D5D4A3E0706B2079" }, { "b" : "7F97C2034000", "path" : "/lib/x86_64-linux-gnu/libz.so.1", "elfType" : 3, "buildId" : "908B5A955D0A73FB8D31E0F927D0CDBA810CB300" }, { "b" : "7F97C1D1D000", "path" : "/usr/lib/x86_64-linux-gnu/libunistring.so.0", "elfType" : 3, "buildId" : "2E457FF72C4E6A267C0B10E06C3FB8C4F32487EE" }, { "b" : "7F97C1984000", "path" : "/usr/lib/x86_64-linux-gnu/libgnutls.so.30", "elfType" : 3, "buildId" : "1C1BC93C559CFE2EBD1B5676FA4B355118EDF38E" }, { "b" : "7F97C174F000", "path" : "/usr/lib/x86_64-linux-gnu/libhogweed.so.4", "elfType" : 3, "buildId" : "1D3666D2FA45541887E96DED01529116996812AD" }, { "b" : "7F97C1518000", "path" : "/usr/lib/x86_64-linux-gnu/libnettle.so.6", "elfType" : 3, "buildId" : "43D18C6AB6EDE083BE2C5FAA857E379389819ACB" }, { "b" : "7F97C1295000", "path" : "/usr/lib/x86_64-linux-gnu/libgmp.so.10", "elfType" : 3, "buildId" : "45ACF9508A033A2AE2672156491BC524A3BF20CD" }, { "b" : "7F97C0F85000", "path" : "/lib/x86_64-linux-gnu/libgcrypt.so.20", "elfType" : 3, "buildId" : "917AB7D78C8C49FE3095ABFF95FAB28575D704BB" }, { "b" : "7F97C0D79000", "path" : "/usr/lib/x86_64-linux-gnu/libkrb5support.so.0", 
"elfType" : 3, "buildId" : "932297A42269A54BCDB88198BA06BD63B13E1996" }, { "b" : "7F97C0B75000", "path" : "/lib/x86_64-linux-gnu/libkeyutils.so.1", "elfType" : 3, "buildId" : "3CFF3CE519A16305A617D8885EA5D3AE3D965461" }, { "b" : "7F97C095A000", "path" : "/usr/lib/x86_64-linux-gnu/libsasl2.so.2", "elfType" : 3, "buildId" : "A54D193AB95897B4BFE387E6578064711115AB75" }, { "b" : "7F97C06F5000", "path" : "/usr/lib/x86_64-linux-gnu/libp11-kit.so.0", "elfType" : 3, "buildId" : "86F00B032B270ED5297EB393B30EDEF76B890573" }, { "b" : "7F97C04C1000", "path" : "/lib/x86_64-linux-gnu/libidn.so.11", "elfType" : 3, "buildId" : "CCC0C44563E10F70FCF98D0C7AFABC9801F7159B" }, { "b" : "7F97C02AE000", "path" : "/usr/lib/x86_64-linux-gnu/libtasn1.so.6", "elfType" : 3, "buildId" : "D03612373D33091A4678A032C5D7341FB56FE7DC" }, { "b" : "7F97C009A000", "path" : "/lib/x86_64-linux-gnu/libgpg-error.so.0", "elfType" : 3, "buildId" : "8B9D1F17D242A08FEA23AF32055037569A714209" }, { "b" : "7F97BFE91000", "path" : "/usr/lib/x86_64-linux-gnu/libffi.so.6", "elfType" : 3, "buildId" : "AA1401F42D517693444B96C5774A62D4E8C84A35" } ] }}
mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x5580423d91d1]
mongod(+0x241B3E9) [0x5580423d83e9]
mongod(+0x241B8CD) [0x5580423d88cd]
libpthread.so.0(+0x110E0) [0x7f97c45290e0]
libc.so.6(gsignal+0xCF) [0x7f97c41abfff]
libc.so.6(abort+0x16A) [0x7f97c41ad42a]
mongod(_ZN5mongo32fassertFailedNoTraceWithLocationEiPKcj+0x0) [0x5580409d96a9]
mongod(+0xB1AEA6) [0x558040ad7ea6]
mongod(+0xB8D629) [0x558040b4a629]
mongod(__wt_err_func+0x90) [0x558040960af8]
mongod(__wt_panic+0x39) [0x558040960f12]
mongod(+0xC291B3) [0x558040be61b3]
mongod(__wt_log_fill+0x3E) [0x558040be678e]
mongod(__wt_log_allocfile+0x430) [0x558040be6c50]
mongod(+0xBF88BA) [0x558040bb58ba]
libpthread.so.0(+0x74A4) [0x7f97c451f4a4]
libc.so.6(clone+0x3F) [0x7f97c4261d0f]
----- END BACKTRACE -----
```
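The fatal `No space left on device` on the WiredTiger journal is the root cause of the crash loop. A minimal watch that would have flagged it early, the mount point here is an assumption (inside the Bitnami container the data lives under /bitnami/mongodb):

```shell
#!/bin/sh
# Warn when the given filesystem crosses 80% usage.
MOUNT="${1:-/}"   # pass /bitnami/mongodb when run inside the pod
usage=$(df -P "$MOUNT" | awk 'NR==2 { sub(/%/, "", $5); print $5 }')
if [ "$usage" -ge 80 ]; then
  echo "WARNING: $MOUNT is ${usage}% full; expand the PVC before mongod panics"
fi
```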
```
ubuntu@bimo-dev:~/helm/charts/stable/mongodb$ kc -n mongo-test describe pod mongodb-mongo-84b76cd55b-tnjt2
Name: mongodb-mongo-84b76cd55b-tnjt2
Namespace: mongo-test
Priority: 0
PriorityClassName: <none>
Node: aks-nodepool1-29460110-0/10.240.0.4
Start Time: Mon, 27 May 2019 02:12:50 +0000
Labels: app=mongodb
        chart=mongodb-5.17.0
        pod-template-hash=84b76cd55b
        release=mongodb-mongo
Annotations: <none>
Status: Running
IP: 10.244.0.66
Controlled By: ReplicaSet/mongodb-mongo-84b76cd55b
Containers:
  mongodb-mongo:
    Container ID: docker://7f3348fd164fb2f8cb7a13f2192d42387bc99e4e722248e79fe34126136c0e3a
    Image: docker.io/bitnami/mongodb:4.0.9
    Image ID: docker-pullable://bitnami/mongodb@sha256:c65e80c5f461c19b89492fa90a9007b1b7d71d1e18f7421893b63157579946bf
    Port: 27017/TCP
    Host Port: 0/TCP
    State: Waiting
      Reason: CrashLoopBackOff
    Last State: Terminated
      Reason: Error
      Exit Code: 134
      Started: Mon, 27 May 2019 02:25:17 +0000
      Finished: Mon, 27 May 2019 02:25:25 +0000
    Ready: False
    Restart Count: 6
    Limits:
      cpu: 1
      memory: 1Gi
    Requests:
      cpu: 1
      memory: 1Gi
    Readiness: exec [mongo --eval db.adminCommand('ping')] delay=20s timeout=5s period=10s #success=1 #failure=6
    Environment:
      MONGODB_SYSTEM_LOG_VERBOSITY: 0
      MONGODB_DISABLE_SYSTEM_LOG: no
      MONGODB_ENABLE_IPV6: yes
      MONGODB_ENABLE_DIRECTORY_PER_DB: no
      KUBERNETES_PORT_443_TCP_ADDR: aks-mongot-aks-mongotest-gr-e26b16-9588c2bd.hcp.eastasia.azmk8s.io
      KUBERNETES_PORT: tcp://aks-mongot-aks-mongotest-gr-e26b16-9588c2bd.hcp.eastasia.azmk8s.io:443
      KUBERNETES_PORT_443_TCP: tcp://aks-mongot-aks-mongotest-gr-e26b16-9588c2bd.hcp.eastasia.azmk8s.io:443
      KUBERNETES_SERVICE_HOST: aks-mongot-aks-mongotest-gr-e26b16-9588c2bd.hcp.eastasia.azmk8s.io
    Mounts:
      /bitnami/mongodb from data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-724zn (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  data:
    Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName: mongodb-mongo
    ReadOnly: false
  default-token-724zn:
    Type: Secret (a volume populated by a Secret)
    SecretName: default-token-724zn
    Optional: false
QoS Class: Guaranteed
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
             node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason                  Age                    From                               Message
  ----     ------                  ----                   ----                               -------
  Warning  FailedScheduling        15m (x8 over 16m)      default-scheduler                  pod has unbound immediate PersistentVolumeClaims
  Normal   Scheduled               15m                    default-scheduler                  Successfully assigned mongo-test/mongodb-mongo-84b76cd55b-tnjt2 to aks-nodepool1-29460110-0
  Normal   SuccessfulAttachVolume  15m                    attachdetach-controller            AttachVolume.Attach succeeded for volume "pvc-e841e555-8024-11e9-8580-d2cfa35ed84c"
  Normal   Pulling                 7m55s (x5 over 15m)    kubelet, aks-nodepool1-29460110-0  pulling image "docker.io/bitnami/mongodb:4.0.9"
  Normal   Pulled                  7m53s (x5 over 15m)    kubelet, aks-nodepool1-29460110-0  Successfully pulled image "docker.io/bitnami/mongodb:4.0.9"
  Normal   Created                 7m52s (x5 over 15m)    kubelet, aks-nodepool1-29460110-0  Created container
  Normal   Started                 7m52s (x5 over 15m)    kubelet, aks-nodepool1-29460110-0  Started container
  Warning  BackOff                 5m7s (x18 over 9m44s)  kubelet, aks-nodepool1-29460110-0  Back-off restarting failed container
```
```
LAST DEPLOYED: Mon May 27 02:12:40 2019
NAMESPACE: mongo-test
STATUS: DEPLOYED
RESOURCES:
==> v1/PersistentVolumeClaim
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
mongodb-mongo Bound pvc-e841e555-8024-11e9-8580-d2cfa35ed84c 2Gi RWO default 20m
==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
mongodb-mongo-84b76cd55b-tnjt2 0/1 CrashLoopBackOff 7 20m
==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
mongodb-mongo LoadBalancer 10.0.213.7 65.52.187.145 27017:30090/TCP 20m
==> v1beta1/Deployment
NAME READY UP-TO-DATE AVAILABLE AGE
mongodb-mongo 0/1 1 0 20m
```
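With the container in CrashLoopBackOff, the crash log can still be pulled from the previous run of the pod shown in the status above:

```shell
# --previous fetches logs from the last terminated container instance.
kubectl -n mongo-test logs mongodb-mongo-84b76cd55b-tnjt2 --previous > crash_log
```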
## MongoDB crash loop (exit code 14)
```
ubuntu@bimo-dev:~$ kubectl -n mongo-test describe pod mongodb-mongo-84c45468c6-smlr2
Name: mongodb-mongo-84c45468c6-smlr2
Namespace: mongo-test
Priority: 0
PriorityClassName: <none>
Node: aks-nodepool1-29460110-0/10.240.0.4
Start Time: Thu, 16 May 2019 06:49:56 +0000
Labels: app=mongodb
        chart=mongodb-5.17.0
        pod-template-hash=84c45468c6
        release=mongodb-mongo
Annotations: <none>
Status: Running
IP: 10.244.0.49
Controlled By: ReplicaSet/mongodb-mongo-84c45468c6
Containers:
  mongodb-mongo:
    Container ID: docker://af158060cfbb303609440437d1f7b1d71a134a396ca2040102f87e8f24562885
    Image: docker.io/bitnami/mongodb:4.0.9
    Image ID: docker-pullable://bitnami/mongodb@sha256:2afb5289b8ed2268ee27a405f6329df42cb4fb882d459f08e65296a460e4c696
    Port: 27017/TCP
    Host Port: 0/TCP
    State: Waiting
      Reason: CrashLoopBackOff
    Last State: Terminated
      Reason: Error
      Exit Code: 14
      Started: Fri, 17 May 2019 01:19:28 +0000
      Finished: Fri, 17 May 2019 01:19:36 +0000
    Ready: False
    Restart Count: 159
    Limits:
      cpu: 1
      memory: 1Gi
    Requests:
      cpu: 1
      memory: 1Gi
    Liveness: exec [mongo --eval db.adminCommand('ping')] delay=30s timeout=5s period=10s #success=1 #failure=6
    Readiness: exec [mongo --eval db.adminCommand('ping')] delay=20s timeout=5s period=10s #success=1 #failure=6
    Environment:
      MONGODB_ROOT_PASSWORD: <set to the key 'mongodb-root-password' in secret 'mongodb-mongo'> Optional: false
      MONGODB_SYSTEM_LOG_VERBOSITY: 0
      MONGODB_DISABLE_SYSTEM_LOG: no
      MONGODB_ENABLE_IPV6: yes
      MONGODB_ENABLE_DIRECTORY_PER_DB: no
      KUBERNETES_PORT_443_TCP_ADDR: aks-mongot-aks-mongotest-gr-e26b16-9588c2bd.hcp.eastasia.azmk8s.io
      KUBERNETES_PORT: tcp://aks-mongot-aks-mongotest-gr-e26b16-9588c2bd.hcp.eastasia.azmk8s.io:443
      KUBERNETES_PORT_443_TCP: tcp://aks-mongot-aks-mongotest-gr-e26b16-9588c2bd.hcp.eastasia.azmk8s.io:443
      KUBERNETES_SERVICE_HOST: aks-mongot-aks-mongotest-gr-e26b16-9588c2bd.hcp.eastasia.azmk8s.io
    Mounts:
      /bitnami/mongodb from data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-724zn (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  data:
    Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName: mongodb-mongo
    ReadOnly: false
  default-token-724zn:
    Type: Secret (a volume populated by a Secret)
    SecretName: default-token-724zn
    Optional: false
QoS Class: Guaranteed
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
             node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason   Age                    From                               Message
  ----     ------   ----                   ----                               -------
  Normal   Pulled   6m48s (x159 over 18h)  kubelet, aks-nodepool1-29460110-0  Successfully pulled image "docker.io/bitnami/mongodb:4.0.9"
  Warning  BackOff  103s (x3722 over 13h)  kubelet, aks-nodepool1-29460110-0  Back-off restarting failed container
```
```
ubuntu@bimo-dev:~$ cat crash_log
Welcome to the Bitnami mongodb container
Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-mongodb
Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-mongodb/issues
nami INFO Initializing mongodb
mongodb INFO ==> Deploying MongoDB with persisted data...
mongodb INFO ==> No injected configuration files found. Creating default config files...
mongodb INFO ==> Enabling authentication...
mongodb INFO
mongodb INFO ########################################################################
mongodb INFO Installation parameters for mongodb:
mongodb INFO Persisted data and properties have been restored.
mongodb INFO Any input specified will not take effect.
mongodb INFO This installation requires no credentials.
mongodb INFO ########################################################################
mongodb INFO
nami INFO mongodb successfully initialized
INFO ==> Starting mongodb...
INFO ==> Starting mongod...
2019-05-17T01:08:51.967+0000 I CONTROL [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
2019-05-17T01:08:51.970+0000 I CONTROL [initandlisten] MongoDB starting : pid=29 port=27017 dbpath=/opt/bitnami/mongodb/data/db 64-bit host=mongodb-mongo-84c45468c6-smlr2
2019-05-17T01:08:51.970+0000 I CONTROL [initandlisten] db version v4.0.9
2019-05-17T01:08:51.970+0000 I CONTROL [initandlisten] git version: fc525e2d9b0e4bceff5c2201457e564362909765
2019-05-17T01:08:51.970+0000 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.1.0j 20 Nov 2018
2019-05-17T01:08:51.970+0000 I CONTROL [initandlisten] allocator: tcmalloc
2019-05-17T01:08:51.970+0000 I CONTROL [initandlisten] modules: none
2019-05-17T01:08:51.970+0000 I CONTROL [initandlisten] build environment:
2019-05-17T01:08:51.970+0000 I CONTROL [initandlisten] distmod: debian92
2019-05-17T01:08:51.970+0000 I CONTROL [initandlisten] distarch: x86_64
2019-05-17T01:08:51.970+0000 I CONTROL [initandlisten] target_arch: x86_64
2019-05-17T01:08:51.970+0000 I CONTROL [initandlisten] 1024 MB of memory available to the process out of 16041 MB total system memory
2019-05-17T01:08:51.970+0000 I CONTROL [initandlisten] options: { config: "/opt/bitnami/mongodb/conf/mongodb.conf", net: { bindIpAll: true, ipv6: true, port: 27017, unixDomainSocket: { enabled: true, pathPrefix: "/opt/bitnami/mongodb/tmp" } }, processManagement: { fork: false, pidFilePath: "/opt/bitnami/mongodb/tmp/mongodb.pid" }, security: { authorization: "enabled" }, setParameter: { enableLocalhostAuthBypass: "false" }, storage: { dbPath: "/opt/bitnami/mongodb/data/db", directoryPerDB: false, journal: { enabled: true } }, systemLog: { destination: "file", logAppend: true, logRotate: "reopen", path: true, quiet: false, verbosity: 0 } }
2019-05-17T01:08:51.970+0000 W STORAGE [initandlisten] Detected unclean shutdown - /opt/bitnami/mongodb/data/db/mongod.lock is not empty.
2019-05-17T01:08:51.970+0000 I STORAGE [initandlisten] Detected data files in /opt/bitnami/mongodb/data/db created by the 'wiredTiger' storage engine, so setting the active storage engine to 'wiredTiger'.
2019-05-17T01:08:51.970+0000 W STORAGE [initandlisten] Recovering data from the last clean checkpoint.
2019-05-17T01:08:51.970+0000 I STORAGE [initandlisten]
2019-05-17T01:08:51.970+0000 I STORAGE [initandlisten] ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine
2019-05-17T01:08:51.970+0000 I STORAGE [initandlisten] ** See http://dochub.mongodb.org/core/prodnotes-filesystem
2019-05-17T01:08:51.970+0000 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=256M,session_max=20000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),statistics_log=(wait=0),verbose=(recovery_progress),
2019-05-17T01:08:54.118+0000 E STORAGE [initandlisten] WiredTiger error (28) [1558055334:118385][29:0x7f6342afa080], connection: __posix_fs_rename, 253: /opt/bitnami/mongodb/data/db/journal/WiredTigerTmplog.0000000001 to /opt/bitnami/mongodb/data/db/journal/WiredTigerLog.0000000261: file-rename: rename: No space left on device Raw: [1558055334:118385][29:0x7f6342afa080], connection: __posix_fs_rename, 253: /opt/bitnami/mongodb/data/db/journal/WiredTigerTmplog.0000000001 to /opt/bitnami/mongodb/data/db/journal/WiredTigerLog.0000000261: file-rename: rename: No space left on device
2019-05-17T01:08:55.662+0000 E STORAGE [initandlisten] WiredTiger error (28) [1558055335:662308][29:0x7f6342afa080], connection: __posix_fs_rename, 253: /opt/bitnami/mongodb/data/db/journal/WiredTigerTmplog.0000000001 to /opt/bitnami/mongodb/data/db/journal/WiredTigerLog.0000000261: file-rename: rename: No space left on device Raw: [1558055335:662308][29:0x7f6342afa080], connection: __posix_fs_rename, 253: /opt/bitnami/mongodb/data/db/journal/WiredTigerTmplog.0000000001 to /opt/bitnami/mongodb/data/db/journal/WiredTigerLog.0000000261: file-rename: rename: No space left on device
2019-05-17T01:08:57.222+0000 E STORAGE [initandlisten] WiredTiger error (28) [1558055337:222892][29:0x7f6342afa080], connection: __posix_fs_rename, 253: /opt/bitnami/mongodb/data/db/journal/WiredTigerTmplog.0000000001 to /opt/bitnami/mongodb/data/db/journal/WiredTigerLog.0000000261: file-rename: rename: No space left on device Raw: [1558055337:222892][29:0x7f6342afa080], connection: __posix_fs_rename, 253: /opt/bitnami/mongodb/data/db/journal/WiredTigerTmplog.0000000001 to /opt/bitnami/mongodb/data/db/journal/WiredTigerLog.0000000261: file-rename: rename: No space left on device
2019-05-17T01:08:57.229+0000 W STORAGE [initandlisten] Failed to start up WiredTiger under any compatibility version.
2019-05-17T01:08:57.229+0000 F STORAGE [initandlisten] Reason: 28: No space left on device
2019-05-17T01:08:57.229+0000 F - [initandlisten] Fatal Assertion 28595 at src/mongo/db/storage/wiredtiger/wiredtiger_kv_engine.cpp 704
2019-05-17T01:08:57.229+0000 F - [initandlisten]
***aborting after fassert() failure
```
"message":"40316: Insufficient permission to disable user
,"message":"40315: Insufficient permission to update user
GET https://portal-management.wise-paas.io/mp/v3/users/Fransiscus.Bimo@advantech.com.tw/summary?guid=4eb45acd-418c-4abe-8cba-d3240a99d4af&check_valid=true 504 (Gateway Timeout)
POST
https://dashboard-adv-training-default-space.wise-paas.io/api/admin/users
https://dashboard-adv-training-default-space.wise-paas.io/admin/users/create
{"name":"noname","email":"noname@noname.com","login":"noname"}
POST
https://portal-scada-adv-training-default-space.wise-paas.io/api/Users
{"userName":"noname@noname.com","userDesc":"noname","scope":["edit_config","edit_value","get_value","manage_account","manage_event","manage_alarm","system_setting"]}
1. First, traverse cfScope. If a cfScope entry contains an orgId matching the current org, check whether its role is tenant; if it is tenant, verification ends there and the user has the highest dashboard authority, i.e., admin.
2. If the role is not tenant, check whether the user is a developer. If so, check whether the developer's space matches the current space; if it matches, first grant the user a minimum permission, using the developer role as a marker inside the dashboard.
3. Continuing the cfScope traversal: if the role is tenant, verification already finished in step 1; otherwise, take the srpId registered by the current app and read the user's permission for that srpId app from the scope. (Note: for a developer, if no relevant permission is found in step 3, step 2 has already set a default viewer permission.)
4. Finally, grant the resolved permission to the user performing this operation.

In general, these verifications are done in a filter middleware, and every authenticated interface needs to be verified.
```json=
{"username":"john.washburn@advantech.com","firstName":"John","lastName":"Washburn","country":"TW","cfScopes":[{"name":"Adv-Training","spaces":[{"name":"Level-I"},{"name":"Level-II"}],"sso_role":"developer"}]
```
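The four verification steps above can be sketched roughly as follows. This is a minimal sketch, not the production filter: the role names `tenant`/`developer` and the `cfScopes` shape come from the notes and the JSON sample, while the org matching on `name`, the `apps` field for srpId permissions, and the `viewer` default are assumptions.

```python
def resolve_permission(cf_scopes, current_org, current_space, srp_id):
    """Sketch of the four cfScope verification steps described above."""
    permission = None
    for scope in cf_scopes:
        # Step 1: only scopes for the current org matter; a tenant role
        # grants the highest dashboard authority, i.e., admin.
        if scope.get("name") != current_org:
            continue
        if scope.get("sso_role") == "tenant":
            return "admin"
        # Step 2: a developer whose space matches the current space
        # first gets a minimum (viewer) permission as a default.
        if scope.get("sso_role") == "developer":
            spaces = [s.get("name") for s in scope.get("spaces", [])]
            if current_space in spaces:
                permission = "viewer"
        # Step 3: otherwise look up the permission registered for the
        # current app's srpId ("apps" is an assumed field name).
        app_permission = scope.get("apps", {}).get(srp_id)
        if app_permission:
            permission = app_permission
    # Step 4: the resolved permission is granted to the requesting user.
    return permission
```

With the sample user above, `resolve_permission(user["cfScopes"], "Adv-Training", "Level-I", srp_id)` falls through to the developer branch and returns the default `viewer` permission.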
To embrace new technology adoption, we need to raise awareness and understanding of the term itself by creating stories from different pieces of technology, whether structure-centric (ERP, road pricing, parking) or citizen-centric (smart healthcare), to build hype and make it tangible enough for people to understand.
Don't just rely on somebody else's solutions; they may have failed.
40% of the projects involve end-users and stakeholders, which means the old approach is still in use.
To make a project more successful, more stakeholders need to be involved during design and deployment.
Merely building the solution will not make it a success, because users may hate its usability; pay attention to how they want it.
The technology may not be as accurate as we foresaw.
Success depends on the readiness and innovation of the technology.
Goals for 2020
1. Sustainability
2. Digital transformation
3. Understand people

One thing could be modified a bit: after we have integrated SSO, authority management should be handed over to SSO.
I see that your individual interface currently does not verify the token when it is called. You can refer to the mail that was sent.
For token verification like this, consider using filters or other middleware: before the interface is called, first call the SSO interface to verify that the token is valid.
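The filter-style verification suggested above can be sketched as a decorator that rejects a request before the handler runs. This is a hedged sketch: `sso_verify` is a hypothetical stand-in for the real SSO verification endpoint, and the handler and token values are illustrative only.

```python
from functools import wraps

def sso_verify(token):
    # Hypothetical stand-in for the SSO verification call; in
    # production this would be an HTTP request to the SSO service.
    return token == "valid-token"

def require_token(handler):
    """Filter-style middleware: before the interface is called,
    first verify that the token is valid via the SSO service."""
    @wraps(handler)
    def wrapper(token, *args, **kwargs):
        if not sso_verify(token):
            return {"status": 401, "message": "invalid token"}
        return handler(*args, **kwargs)
    return wrapper

@require_token
def list_users():
    # Illustrative protected interface.
    return {"status": 200, "users": []}
```

Applying the decorator to every authenticated interface keeps the verification in one place instead of repeating it in each handler.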
https://docs.google.com/spreadsheets/d/1q-9HjJSt3UIkq5OBtX7opd7S5Jk83WK5JYaKR_vRi8Q/edit#gid=0
username: dvGy4WhDBwb5TbQXVn68JLFx
password: TmUE4Nz9Xk486yXEkxkhS7dR
mongos1: 114.55.124.177
mongos2: 47.111.72.198
fr.bimo@gmail.com
password: 1*HngMDM1VgQW$c
user: dvGy4WhDBwb5TbQXVn68JLFx
pass: TmUE4Nz9Xk486yXEkxkhS7dR
47.111.72.198:30000
| Advantech | Atlassian |
| -------- | -------- |
| Multi-tenancy is a software architecture technique designed to allow the same system or program component to be used in a multi-user environment while ensuring data isolation between users. | Any tenant can be served by any compute node, which gives you the ability to spin up new compute nodes with zero-downtime upgrades. |
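The Advantech definition in the table (one shared component, per-tenant data isolation) can be illustrated with a minimal sketch; the `TenantStore` class and its method names are invented for illustration, not part of either product.

```python
class TenantStore:
    """Minimal sketch of multi-tenant data isolation: a single shared
    component in which each tenant's data lives in its own partition."""

    def __init__(self):
        self._data = {}  # tenant_id -> {key: value}

    def put(self, tenant_id, key, value):
        # Writes always land in the caller's own partition.
        self._data.setdefault(tenant_id, {})[key] = value

    def get(self, tenant_id, key):
        # A tenant can only ever read its own partition.
        return self._data.get(tenant_id, {}).get(key)
```

The same object serves every tenant, but one tenant's reads can never observe another tenant's writes, which is the isolation property the table describes.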