# Setup reloaded
This document describes the best-practice setup of the environment, end-to-end.
TODO short-term:
- Azure DevOps integration for HCL
- deployment repo and activity tracker
- file repository
- central KB, including HCL

- integration with SNOW (tickets -> dev issues) (link tickets to lines in code)

TODO long-term:
- CI/CD pipeline with interface to UAT (incl. testing framework)

- business user, platform, etc. testing reporting

All of these are quick wins except the CI/CD pipeline.
----
## Dependencies
## 0. init
- **Common sense**:
- set the timezone: `ln -sf /usr/share/zoneinfo/Region/City /etc/localtime`
- locale: uncomment `en_US.UTF-8 UTF-8` and any other needed locales in `/etc/locale.gen`, then generate them with `locale-gen`
- `hostnamectl set-hostname xys` and update `/etc/hosts` correspondingly
- set the root password with `passwd`
- use SSH keys, not passwords:
```bash=
# generate a strong private/public key pair
ssh-keygen -o -a 100 -t ed25519 -f ~/.ssh/<some name> -C "your email"
# copy it to the server for key-based login
ssh-copy-id -i ~/.ssh/<some name> username@server
# edit your hosts / ssh config, also for the virtual net
ssh you@devmachine # vs ssh blafoo@123.34234.234.234 ;)
# harden the ssh login on the remote
sudo vim /etc/ssh/sshd_config
|  PermitRootLogin no
|  Port 12345
# etc., then reload: sudo systemctl reload sshd
```
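The `ssh you@devmachine` shorthand above comes from a client-side `~/.ssh/config`; a minimal sketch (host alias, key file name, and port are illustrative):
```none
Host devmachine
    HostName 10.204.192.10
    User you
    IdentityFile ~/.ssh/id_ed25519_dev
    Port 12345
```
With this in place, `ssh devmachine` picks up the right user, key, and port automatically.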
- **user management**
Create new users with adequate privileges: _n_ for dev, one for admin, etc.
([source](http://0pointer.net/blog/ip-accounting-and-access-lists-with-systemd.html))
**systemd can do per-service IP traffic accounting, as well as access control for IP address ranges.**
Three new unit file settings have been added in this context:
- IPAccounting= is a boolean setting. If enabled for a unit, all IP traffic sent and received by processes associated with it is counted both in terms of bytes and of packets.
- IPAddressDeny= takes an IP address prefix (that means: an IP address with a network mask). All traffic from and to this address will be prohibited for processes of the service.
- IPAddressAllow= is the matching positive counterpart to IPAddressDeny=. All traffic matching this IP address/network mask combination will be allowed, even if otherwise listed in IPAddressDeny=.
This is how one solves the port issues! A simple unit configuration file can cover each of the STI services.
To have a proper setup, please put your heads together and agree on a modus operandi for your dev process, taking into account the abilities, availability, and number of developers.
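As a sketch of the three settings above, a drop-in for a hypothetical `sti-backend.service` (service name and address range are illustrative) could look like:
```none
# /etc/systemd/system/sti-backend.service.d/10-ip.conf
[Service]
IPAccounting=yes
IPAddressDeny=any
IPAddressAllow=localhost
IPAddressAllow=10.204.192.0/24
```
After `systemctl daemon-reload` and a restart of the unit, the traffic counters can be read with `systemctl show sti-backend -p IPIngressBytes -p IPEgressBytes`.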
## 1. base
It is important that the dependencies are correctly installed.
First, let us add the EPEL repository to get access to newer packages:
```bash
cd /tmp
wget https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
sudo yum install epel-release-latest-7.noarch.rpm
```
then update the repo cache and upgrade:
```bash=
sudo yum update -y
```
and install the base toolchain (note: a comment after a trailing `\` breaks the line continuation, so the "learn these" notes live outside the command):
```bash=
sudo yum group install "Development Tools"
sudo yum install python python-devel python3 \
    make \
    gcc-c++ \
    vim \
    git \
    htop \
    nmap \
    netcat
# git, htop, nmap, netcat: learn these!
```
plus other RHEL7-specific deps I might have missed here. Summarizing, we are currently running:
```shell=
[gh0st@rh]$ cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.6 (Maipo)
[gh0st@rh]$ gcc --version
gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-36)
[gh0st@rh]$ uname -r
3.10.0-957.21.3.el7.x86_64
[gh0st@rh]$ openssl version
OpenSSL 1.0.2k-fips 26 Jan 2017
[gh0st@rh]$ bash --version
GNU bash, version 4.2.46(2)-release (x86_64-redhat-linux-gnu)
[gh0st@rh]$ tar --version
tar (GNU tar) 1.26
[gh0st@rh]$ vim --version
VIM - Vi IMproved 7.4 (2013 Aug 10, compiled Jun 22 2018 06:34:52)
[gh0st@rh]$ git --version
git version 1.8.3.1
[gh0st@rh]$ ip -V
ip utility, iproute2-ss170501
```
## 2. nodejs
We use the node version manager: https://github.com/nvm-sh/nvm
Read carefully what it does - we use it for building the `React`-based frontend UI, and it is the runtime of the backend.
It is a simple bash script to manage multiple active node.js versions.
```sh
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.34.0/install.sh | bash
```
or Wget:
```sh
wget -qO- https://raw.githubusercontent.com/nvm-sh/nvm/v0.34.0/install.sh | bash
```
<sub>The script clones the nvm repository to `~/.nvm` and adds the source line to your profile (`~/.bash_profile`, `~/.zshrc`, `~/.profile`, or `~/.bashrc`).</sub>
<sub>**Note:** If the environment variable `$XDG_CONFIG_HOME` is present, it will place the `nvm` files there.</sub>
```sh
export NVM_DIR="$([ -z "${XDG_CONFIG_HOME-}" ] && printf %s "${HOME}/.nvm" || printf %s "${XDG_CONFIG_HOME}/nvm")"
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh" # This loads nvm
```
**Note:** You can add `--no-use` to the end of the above script (...`nvm.sh --no-use`) to postpone using `nvm` until you manually use it.
You can customize the install source, directory, profile, and version using the `NVM_SOURCE`, `NVM_DIR`, `PROFILE`, and `NODE_VERSION` variables.
Eg: `curl ... | NVM_DIR="path/to/nvm" bash`. Ensure that the `NVM_DIR` does not contain a trailing slash.
<sub>*NB. The installer can use `git`, `curl`, or `wget` to download `nvm`, whatever is available.*</sub>
**Note:** After running the install script, if you get `nvm: command not found` or see no feedback from your terminal after you type:
```sh
command -v nvm
```
simply close your current terminal, open a new terminal, and try verifying again.
- your system may not have a `~/.bash_profile` file where the command is set up. Simply create one with `touch ~/.bash_profile` and run the install script again
- you might need to restart your terminal instance. Try opening a new tab/window in your terminal and retry.
If the above doesn't fix the problem, open your `.bash_profile` and add the following line of code:
`source ~/.bashrc`
## 3. Using nvm
To download, compile, and install the *latest release of node*, do this:
```bash
nvm install node
```
or to get a list of available versions:
```bash
nvm ls-remote
```
to change the version back to the **latest**
```bash
nvm use node
```
Anyway, we are fine with just the first command; it will install `node` to
`~/.nvm/versions/node/v12.6.0/bin/node`.
For good measure we will update `npm`, the *node package manager*, with npm itself:
```bash
npm i -g npm
```
Short for `npm install --global npm` - this installs packages into `~/.nvm/versions/node/v12.6.0/node_modules`. We will have multiple `node_modules` folders; e.g. `/sti/backend` will hold the dependencies of the sti backend app.
The earlier `nvm install` outputs something like:
```shell
~ nvm install 12.6
Downloading and installing node v12.6.0...
Downloading https://nodejs.org/dist/v12.6.0/node-v12.6.0-linux-x64.tar.xz...
########################################################### 100.0%
Computing checksum with sha256sum
Checksums matched!
Now using node v12.6.0 (npm v6.9.0)
Creating default alias: default -> 12.6 (-> v12.6.0)
```
We need globally (for this user still in `~/.nvm`) the packages:
```bash=
npm install -g npm eslint prettier typescript pm2
```
- `npm`: just upgrade
- `eslint`: linter, also included in the `sti/frontend/node_modules`
- `prettier`: code formatter, also included in the `sti/frontend/node_modules`
- `typescript`: the TypeScript compiler (`tsc index.ts`)
- `pm2`: prod testing (process manager, cluster manager)
## 4. docker
Following the official docs (read them in full):
First remove everything causing collisions:
```bash=
sudo yum remove docker \
docker-client \
docker-client-latest \
docker-common \
docker-latest \
docker-latest-logrotate \
docker-logrotate \
docker-selinux \
docker-engine-selinux \
docker-engine
```
then install docker et al:
Install the latest patch release, or go to the next step to install a specific version:
```bash=
sudo yum -y install docker-ee docker-ee-cli containerd.io
```
If prompted to accept the GPG key, verify that the fingerprint matches
`77FE DA13 1A83 1D29 A418 D3E8 99E5 FF2E 7668 2BC9`, and if so, accept it.
### Manage Docker as a non-root user
Create the docker group. `sudo groupadd docker`.
Add your user to the docker group. `sudo usermod -aG docker $USER`.
Log out and log back in so that your group membership is re-evaluated. ...
Verify that you can run docker commands without `sudo`, e.g. `docker run hello-world`.
## 5. docker-compose
Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application’s services. Then, with a single command, you create and start all the services from your configuration. To learn more about all the features of Compose, see the [list of features](https://docs.docker.com/compose/#features).
To install, run this command to download the current stable release of Docker Compose, then apply executable permissions to the binary:
```bash
sudo curl -L "https://github.com/docker/compose/releases/download/1.24.1/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
```
Note: If the `docker-compose` command fails after installation, check your path. You can also create a symbolic link to `/usr/bin` or any other directory in your path, e.g. `ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose`.
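The PATH check mentioned in the note can be scripted; a small pure-shell sketch (the helper name `path_contains` is ours, not a standard tool):

```bash
# return success if a directory is already on the given PATH string
path_contains() {
  # usage: path_contains DIR PATH_STRING
  case ":$2:" in
    *":$1:"*) return 0 ;;
    *)        return 1 ;;
  esac
}

if path_contains /usr/local/bin "$PATH"; then
  echo "docker-compose in /usr/local/bin will be found"
else
  echo "add /usr/local/bin to PATH, or symlink into /usr/bin"
fi
```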
Using Compose is basically a three-step process:
- Define your app’s environment with a *Dockerfile* so it can be reproduced anywhere.
- Define the services that make up your app in docker-compose.yml so they can be run together in an isolated environment.
- Run `docker-compose up` and compose starts and runs your entire app.
A **docker-compose.yml** looks like this:
```yml=
version: '3.7'
services:
  vscode:
    container_name: vscode
    build:
      dockerfile: ./Dockerfile
    restart: always
    networks:
      - sti
    ports:
      - "80:8080"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
```
which `docker-compose` (merely a Python script) translates into the equivalent `docker run` command:
```shell
docker run -it \
-p 127.0.0.1:80:8080 \
-v /var/run/docker.sock:/var/run/docker.sock \
codercom/code-server:v2
```
On a current Linux OS (in a non-minimal installation), bash completion should be available (`yum install bash-completion`).
Place the completion script in `/etc/bash_completion.d/`:
```bash
sudo curl -L https://raw.githubusercontent.com/docker/compose/1.24.1/contrib/completion/bash/docker-compose -o /etc/bash_completion.d/docker-compose
```
#### low-level userspace (systemd)
```bash=
sudo systemctl start docker
```
will start the daemon, whereas `enable` will make it start on boot.
Systemd is a very powerful tool (PID 1), which should be handled proficiently:
Control the systemd system and service manager.
- List failed units:
`systemctl --failed`
- Start/Stop/Restart/Reload a service:
`systemctl start/stop/restart/reload {{unit}}`
- Show the status of a unit:
`systemctl status {{unit}}`
- Enable/Disable a unit to be started on bootup:
`systemctl enable/disable {{unit}}`
- Mask/Unmask a unit to prevent it from being started:
`systemctl mask/unmask {{unit}}`
- Reload systemd, scanning for new or changed units:
`systemctl daemon-reload`
Generally, **journalctl** and **dmesg** are your friends. Use them.
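A few invocations that cover most debugging sessions (a reference sketch using standard flags of the two tools):
```none
journalctl -u docker.service -f   # follow the log of one unit
journalctl -b -p err              # everything at error priority since boot
journalctl --disk-usage           # how much space the journal takes
dmesg --level=err,warn            # kernel ring buffer, filtered
```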
## 6. Manage Docker as a non-root user ([source](https://docs.docker.com/install/linux/linux-postinstall/))
The Docker daemon binds to a Unix socket instead of a TCP port. By default that Unix socket is owned by the user root and other users can only access it using sudo. The Docker daemon always runs as the root user.
If you don’t want to preface the docker command with sudo, create a Unix group called docker and add users to it. When the Docker daemon starts, it creates a Unix socket accessible by members of the docker group.
To create the docker group and add your user:
```
sudo groupadd docker
```
Add your user to the docker group.
```
sudo usermod -aG docker $USER
```
Then activate the changes with `newgrp docker`.
---
A common problem we encountered has the following fix. On
"WARNING: Error loading config file: /home/user/.docker/config.json -
stat /home/user/.docker/config.json: permission denied",
either remove the `~/.docker/` directory (it is recreated automatically,
but any custom settings are lost), or change its ownership and permissions
using the following commands:
```sh
sudo chown "$USER":"$USER" /home/"$USER"/.docker -R
sudo chmod g+rwx "$HOME/.docker" -R
```
## 7. Configure Docker to start on boot
```bash
sudo systemctl enable docker
```
The following steps make the daemon additionally listen on a TCP port (ports may differ):
1. Use the command `sudo systemctl edit docker.service` to open an override file for `docker.service` in a text editor.
2. Add or modify the following lines, substituting your own values.
```none
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H fd:// -H tcp://127.0.0.1:2375
```
3. Save the file.
4. Reload the `systemctl` configuration.
```bash
$ sudo systemctl daemon-reload
```
5. Restart Docker.
```bash
$ sudo systemctl restart docker.service
```
6. Check to see whether the change was honored by reviewing the output of `netstat` to confirm `dockerd` is listening on the configured port.
```bash
$ sudo netstat -lntp | grep dockerd
tcp 0 0 127.0.0.1:2375 0.0.0.0:* LISTEN 3758/dockerd
```
## 8. Configuring remote access with `daemon.json`
**This is important if we want to control the docker daemon from inside of the containers ( portainer, vscode).**
1. Set the `hosts` array in `/etc/docker/daemon.json` to connect to the UNIX socket and an IP address, as follows (note: do not combine this with `-H` flags in the systemd `ExecStart` - the daemon refuses to start when hosts are configured in both places):
```json
{
"hosts": ["unix:///var/run/docker.sock", "tcp://127.0.0.1:2375"]
}
```
2. Restart Docker.
3. Check to see whether the change was honored by reviewing the output of `netstat` to confirm `dockerd` is listening on the configured port.
```bash
$ sudo netstat -lntp | grep dockerd
tcp 0 0 127.0.0.1:2375 0.0.0.0:* LISTEN 3758/dockerd
```
For further troubleshooting consult https://docs.docker.com/install/linux/linux-postinstall/
Generally helpful for debugging is querying the docker API via the socket, like so:
```
curl -X GET --unix-socket /var/run/docker.sock localhost/images/json
[
{
"Containers": -1,
"Created": 1571299457,
"Id": "sha256:4701d93680c34d3dcd551e7c3dfb12685a251b7f4344484bb89b2bc5834ec650",
"Labels": {
"maintainer": "NGINX Docker Maintainers <docker-maint@nginx.com>"
},
"ParentId": "sha256:3064f3e80dbcbd87b901691a29a9f31dca4c58c2e220e960fa7eef1a381aedc4",
"RepoDigests": null,
"RepoTags": [
"sti_frontend_node:latest"
],
"SharedSize": -1,
"Size": 133293396,
"VirtualSize": 133293396
},
....
]
# or with netcat
echo -e "GET /images/json HTTP/1.0\r\n" | nc -U /var/run/docker.sock
HTTP/1.0 200 OK
Content-Type: application/json
Server: Docker/1.9.1 (linux)
Date: Mon, 07 Dec 2015 21:44:20 GMT
Content-Length: 277
# or
curl --unix-socket /var/run/docker.sock localhost/events
```
## 9. How does this relate to the sti?
The Dockerfiles for our services are started with `docker-compose` or individually using `docker run` - the two are equivalent. The environment is managed via the **Portainer** application (included by mmg), once bootstrapped.
```sh
sti! ❯ find . -iname Dockerfile
./frontend/Dockerfile # nginx
./Dockerfile # vscode
./backend/Dockerfile # nodejs
./db/Dockerfile # postgres
```
```sh
sti ❯ ll --sort type
Permissions Size User Date Modified Name
drwxr-xr-x - xkef 2 Nov 18:04 .git
drwxr-xr-x - xkef 17 Oct 9:22 .vscode
drwxr-xr-x - xkef 2 Nov 12:52 backend
drwxr-xr-x - xkef 21 Oct 14:57 db
drwxr-xr-x - xkef 31 Oct 13:52 docs
drwxr-xr-x - xkef 2 Nov 12:53 frontend
drwxr-xr-x - xkef 2 Nov 12:34 scripts
.rw-r--r-- 335 xkef 17 Oct 13:08 .gitignore
.rw-r--r-- 983 xkef 17 Oct 13:08 Dockerfile
.rw-r--r-- 2.8k xkef 3 Oct 14:20 GETTING_STARTED.md
.rw-r--r-- 82k xkef 3 Oct 14:16 README.md
.rw-r--r-- 1.9k xkef 2 Nov 18:04 docker-compose.yml <---
```
## 10. Overview of the Dev Env
The main services in the production build:
| SERVICE | PORT | URI | config |
|----------|------|--------------------------|----------------------------------------------------------------------------------|
| backend | 8000 | http://backend_node | `./backend/.env` `./backend/src/utils/config.ts` `./backend/ecosystem.yml` |
| frontend | 80 | http://frontend_node | `./frontend/src/index.js` `./frontend/nginx/nginx.conf` |
| database | 5432 | tcp://database_node:5432 | `./db/**` |
This defines the ports (**.env**), the routing (**nginx**) and the PM2 cluster (**ecosystem.yml**).
PM2 is a daemon process manager that will help you manage and keep your application online. Getting started with PM2 is straightforward; it is offered as a simple and intuitive CLI, installable via npm. The daemon merely acts as a controller of the node processes running the transpiled `.ts` code.
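The real cluster definition lives in `./backend/ecosystem.yml`; as an illustrative sketch (the app name, paths, and values here are hypothetical), such a PM2 process file looks like:
```yml=
apps:
  - name: sti-backend        # illustrative name
    script: ./dist/index.js  # transpiled entry point
    instances: 4             # or "max" to use all cores
    exec_mode: cluster
    env:
      NODE_ENV: production
      PORT: 8000
```
It is started with `pm2 start ecosystem.yml` and inspected with `pm2 status`.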
| Containers | Containers by image | Hosts | Created | IPs | Image name | Image tag | Restart # | State | Uptime | CPU | Memory |
|----------------------|---------------------|--------|----------------|------------|---------------------|-----------|-----------|---------------|------------|-------|---------|
| stip_portainer_1 | portainer/portainer | ubuntu | 22 minutes ago | 172.18.0.2 | portainer/portainer | latest | 0 | Up 10 minutes | 10 minutes | 0.06% | 19.1MB |
| stip_backend_node_1 | stip_backend_node | ubuntu | an hour ago | 172.18.0.6 | stip_backend_node | latest | 0 | Up 9 minutes | 9 minutes | 0.05% | 333.6MB |
| stip_vscode_1 | stip_vscode | ubuntu | 22 minutes ago | 172.18.0.4 | stip_vscode | latest | 0 | Up 10 minutes | 10 minutes | 0.00% | 227.3MB |
| stip_frontend_node_1 | stip_frontend_node | ubuntu | 22 minutes ago | 172.18.0.7 | stip_frontend_node | latest | 0 | Up 9 minutes | 9 minutes | 0.00% | 226.3MB |
| stip_database_node_1 | stip_database_node | ubuntu | an hour ago | 172.18.0.5 | stip_database_node | latest | 0 | Up 10 minutes | 10 minutes | 0.00% | 8.9MB |
| stip_pgadmin_1 | dpage/pgadmin4 | ubuntu | 22 minutes ago | 172.18.0.3 | dpage/pgadmin4 | latest | 0 | Up 10 minutes | 10 minutes | 0.00% | 96.5MB |
**Remember to set the hosts file!**
This can all be managed from the portainer web ui, itself bundled in the docker cluster config.

The persistent volumes are mapped in the following way:

---
## 11. Install the sti application
To set up the env, I took inspiration from multiple sources. A good thing here is Docker's ability to do multi-stage builds. Check out [Dockerize PostgreSQL](https://docs.docker.com/engine/examples/postgresql_service/).
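A multi-stage build separates the build toolchain from the runtime image; an illustrative sketch for a frontend like ours (base images and paths are assumptions, not our actual Dockerfile):
```dockerfile
# stage 1: build the React bundle with the full node toolchain
FROM node:12 AS build
WORKDIR /app
COPY package.json yarn.lock ./
RUN yarn install --frozen-lockfile
COPY . .
RUN yarn build

# stage 2: ship only the static payload on a slim nginx image
FROM nginx:alpine
COPY --from=build /app/build /usr/share/nginx/html
```
The final image contains only nginx and the static files, not node or the npm dependency tree.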
### clone repo
Navigate in the browser to Azure DevOps to generate credentials to `git clone` the repo on the development machine (10.204.192.10).



To add a new persistent volume, use portainer:

then attach the volume to the running vscode container, like the existing `dev1_data`/`dev2_data` volumes.
## 12. Building
There is a build script included in the root of this repository, just run
```bash
chmod +x BUILD.SH && ./BUILD.SH
```
For individual builds you can check the **npm scripts**;
```bash
yarn && yarn build
```
will give you a `./dist` for the backend with *ES6* code, and a `./build` for the frontend with *ES5* code to run in the browser.
For the backend we additionally have the capability of building a binary that includes most dependencies for deployment.
The `static` payload of a frontend build can be served by any web server as-is.
## 13. Documentation
We have taken great effort to document this application well. There are currently a few approaches for different areas of the app:
- **GraphQL API Documentation** in the `GraphQL-Playground`; when enabled, it is found, while the backend server is running, at
`0.0.0.0:8000/backend` in the browser.

- **Deployment and Infrastructure Documentation**: `./docs` contains very extensive documentation, with examples and a step-by-step setup guide for anything related to operating systems as well as API and network debugging.

- **Code Documentation** is done with `jsdoc` or `typedoc`. It can be generated with
`yarn docs` in `./frontend` or `./backend`.
For convenience one can serve the *HTML* based documents via a static file server.
```bash
npm i -g serve
```
then
```bash
serve -s ./docs
```
## 14. Testing
As we were moving fast developing this application, we decided against introducing unit tests as of the first release.
As an alternative, we have introduced complete dictionaries of the GraphQL requests that can be made.
These can be found in `./docs/Insomnia_xxxxx.json`, to be loaded with the **Insomnia** app (download at insomnia.rest). Furthermore, the Playground, with its documentation and autocompletion IDE, is of course a nice asset.

For logging, one has of course `stdout`, but for convenience we have bundled **Portainer** to inspect the individual components in a
single tool. **Portainer** can be found at `0.0.0.0:9000`.
## 15. Local Hot Reloading
Both frontend and backend can be developed in **hot-reloading** mode; just disable the conflicting container. Check the npm scripts.
## 16. Deployment
A readme will be supplied to Prashant:
```text
1. frontend
-----------
The xz-archive 'frontend_build_2019-18-10.tar.xz' contains:
| './build/*'
which is moved to:
| '/sti/public/'
for frontend1 and frontend2, replacing '/sti/public' s.t.
after the operation there is '/sti/public/index.html' of this
payload.
Make sure, permissions are _not_ escalated:
| ./etc :-|
| ./public : |-> drwx--x---
| ./logs : |-> www-data:www-data
| ./cache :-|
|
| ./**/*.* -rwxr-x--- (files by www-data)
no access to anything outside '/sti'.
NEVER use 777. It might be one nifty number, but
even in testing it’s a sign of having no clue what
you’re doing. Look at the permissions in the whole
path and think through what’s going on.
2. backend
----------
The xz-archive 'backend_build_2019-18-10.tar.xz' contains:
| './backend'
which is moved to:
| '/sti/backend'
for backend1 and backend2.
Before moving:
| i `pm2 kill`
| ii `pm2 delete all`
After replacing original './backend', start with
| iii `cd /sti/backend`
| iv `npm run prod`
| v `pm2 startup`
Ensure all services are running, by testing
- network
- system io
- pid 1 logs
- sti service logs
----------------------------------------------------------------
Archive: sti_deployment_package_2019-18-10.zip
Length Date Time Name
--------- ---------- ----- ----
15030044 10-18-2019 04:04 backend_build_2019-18-10.tar.xz
1948464 10-18-2019 03:11 frontend_build_2019-18-10.tar.xz
--------- -------
16978508 2 files
----------------------------------------------------------------
```
## Cheatsheet
### Docker
- compose with scaling
```
docker-compose up --build --scale backend=4
```
- Stop all docker containers
```
docker stop $(docker ps -a -q)
```
- force re-creation
```
docker-compose up --build --force-recreate [...service]
```
- prune all containers and images (not volumes)
```
sudo docker system prune -a
```
- prune volumes
```
sudo docker volume prune
WARNING! This will remove all local volumes not used by at least one container.
Are you sure you want to continue? [y/N] y
Deleted Volumes:
stip_pgadmin
stip_dev2_data
stip_portainer_data
d39e4e62e7afe50b7333eb147a6892a481159461255cef19cc1776403340522d
stip_dev1_data
stip_frontend_data
Total reclaimed space: 295.1kB
```
### Performance
Check the `wrk` config in `./backend/asses/apiStressTest`.
```
iostat 2 10 -t -m
10/26/19 02:25:42
avg-cpu: %user %nice %system %iowait %steal %idle
0.50 0.00 0.25 3.27 0.00 95.97
Device: tps MB_read/s MB_wrtn/s MB_read MB_wrtn
scd0 0.00 0.00 0.00 0 0
sdb 0.00 0.00 0.00 0 0
sda 31.50 0.00 0.20 0 0
dm-0 0.00 0.00 0.00 0 0
dm-1 0.00 0.00 0.00 0 0
dm-2 0.00 0.00 0.00 0 0
dm-3 0.00 0.00 0.00 0 0
dm-4 35.50 0.00 0.20 0 0
dm-5 0.00 0.00 0.00 0 0
```
```
vmstat 2 20 -t -w
procs -----------------------memory---------------------- ---swap-- -----io---- -system-- --------cpu-------- -----timestamp-----
r b swpd free buff cache si so bi bo in cs us sy id wa st UTC
1 0 1288 629752 302240 5472292 0 0 1 58 9 7 1 0 96 3 0 2019-10-26 02:26:18
0 0 1288 629720 302240 5472256 0 0 0 244 118 502 1 0 95 3 0 2019-10-26 02:26:20
0 1 1288 629348 302240 5472288 0 0 0 302 236 845 1 1 93 6 0 2019-10-26 02:26:22
0 0 1288 629348 302240 5472288 0 0 0 218 99 460 1 0 95 4 0 2019-10-26 02:26:24
0 0 1288 629612 302240 5472300 0 0 0 170 130 515 1 1 95 4 0 2019-10-26 02:26:26
0 0 1288 629736 302240 5472300 0 0 0 212 95 430 1 0 96 3 0 2019-10-26 02:26:28
0 0 1288 629844 302240 5472308 0 0 0 316 169 635 1 1 94 4 0 2019-10-26 02:26:30
0 0 1288 629984 302240 5472320 0 0 0 228 117 484 0 0 96 4 0 2019-10-26 02:26:32
```
```
# Monitor open connections for specific port including listen, count and sort it per IP
watch "netstat -plan | grep :443 | awk {'print \$5'} | cut -d: -f 1 | sort | uniq -c | sort -nk 1"
```
- low-level tcp debugging
```
sudo tcpdump -i en0 host 192.168.0.227
sudo nmap -O --osscan-guess 192.168.0.227
sudo p0f -i en0 -p -o /tmp/p0f.log
sudo nmap -n -PN -sS -sV 192.168.0.94
```
- show docker ips
```json
gh0st@ubuntu:~$ sudo docker network inspect stip_default -f '{{json .Containers}}' | jq '.[] | {cont: .Name, ip: .IPv4Address}'
{
"cont": "stip_frontend_node_1",
"ip": "172.18.0.7/16"
}
{
"cont": "stip_backend_node_1",
"ip": "172.18.0.6/16"
}
{
"cont": "stip_vscode_1",
"ip": "172.18.0.4/16"
}
{
"cont": "stip_database_node_1",
"ip": "172.18.0.5/16"
}
{
"cont": "stip_pgadmin_1",
"ip": "172.18.0.3/16"
}
{
"cont": "stip_portainer_1",
"ip": "172.18.0.2/16"
}
```