**The Hive Project**
===
[TOC]
___
# The Hive
## Introduction
TheHive is a scalable **4-in-1 open source and free security incident response platform** designed to make life easier for **SOCs, CSIRTs, CERTs and any information security practitioner** dealing with security incidents that need to be investigated and acted upon swiftly. Thanks to **Cortex**, a powerful free and open source **analysis engine**, you can analyze (and triage) observables at scale using more than 100 analyzers.
Last but not least, TheHive is highly integrated with **MISP (Malware Information Sharing Platform)**, the *de facto* standard for threat sharing: it can pull events from several MISP instances and export investigation cases back to one or several of them. It also has additional features such as MISP extended events and health checking.
___
# Cortex
## Introduction
This guide assumes the following setup:
* Cortex running inside Docker on one server
* ElasticSearch running on a separate, remote server
* Both servers running CentOS 7
## Installation
### Docker
**SET UP THE REPOSITORY**
```bash=
$ sudo yum install -y yum-utils
$ sudo yum-config-manager \
--add-repo \
https://download.docker.com/linux/centos/docker-ce.repo
```
**INSTALL DOCKER ENGINE**
```bash=
$ sudo yum install docker-ce docker-ce-cli containerd.io
```
Start Docker.
```bash=
$ sudo systemctl start docker
```
Verify that Docker Engine is installed correctly by running the **hello-world** image.
```bash=
$ sudo docker run hello-world
```
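Optionally, enable the Docker service so it starts automatically on boot:
```bash=
$ sudo systemctl enable docker
```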
### Docker Compose
Run this command to download the current stable release of Docker Compose:
```bash=
$ sudo curl -L "https://github.com/docker/compose/releases/download/1.27.4/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
```
Apply executable permissions to the binary:
```bash=
$ sudo chmod +x /usr/local/bin/docker-compose
```
:::info
**Note:** If the command **docker-compose** fails after installation, check your path. You can also create a symbolic link to **/usr/bin** or any other directory in your path.
:::
For example:
```bash=
$ sudo ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose
```
Test the installation.
```bash=
$ docker-compose --version
docker-compose version 1.27.4, build 1110ad01
```
### Python
**Python pip Installation**
The python-pip package is provided by the EPEL repository:
```bash=
yum -y install epel-release
yum -y install python-pip
```
**Python3 Installation**
Update the environment.
```bash=
yum update -y
```
Install Python 3.
```bash=
yum install -y python3 python3-devel
```
To verify that Python 3 is installed and usable, drop into a Python 3 shell:
```bash=
$ python3
Python 3.6.8 (default, Apr 2 2020, 13:34:55)
[GCC 4.8.5 20150623 (Red Hat 4.8.5-39)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>>
```
### Analyzers and Responders
In Cortex, analyzers and responders can run either locally or in Docker (using the pre-built Docker images published by TheHive Project).
**Local Installation**
:::info
**Note:** If you do not intend to run the analyzers/responders locally, skip this step.
:::
Currently, all the analyzers and responders supported by TheHive Project are written in Python 2 or 3. They don't require any build phase but their dependencies have to be installed. Before proceeding, you'll need to install the system package dependencies that are required by some of them:
```bash=
yum install -y epel-release
yum install -y python-pip python-devel python3 python3-devel python3-pip perl-Image-ExifTool gcc gcc-c++ kernel-devel git openssl-devel
yum groupinstall 'development tools'
pip install --upgrade pip
pip install ssdeep
yum install ssdeep-devel
```
You may need to install Python's setuptools and update pip/pip3:
```bash=
pip install -U pip setuptools && pip3 install -U pip setuptools
# or
pip2 install -U pip setuptools && pip3 install -U pip setuptools
```
Once finished, clone the Cortex-Analyzers repository into the directory of your choice, for example:
```bash=
cd /opt
git clone https://github.com/TheHive-Project/Cortex-Analyzers
```
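Each analyzer and responder ships a requirements.txt listing its Python dependencies. For a local installation you can install them with a loop such as the one below (the same loop is used later inside the Docker container):
```bash=
cd /opt/Cortex-Analyzers
for I in $(find . -name 'requirements.txt'); do pip install -r $I; done && \
for I in $(find . -name 'requirements.txt'); do pip3 install -r $I || true; done
```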
### ElasticSearch 7.x
:::info
**Note:** ElasticSearch runs on a separate server in this setup. If you install it on the same server as Cortex, use the localhost IP (127.0.0.1) later in the configuration.
:::
The server you’re working on should be updated before you install ElasticSearch 7.x on CentOS 7. Just run the commands below to update it.
```bash=
yum -y update
reboot
```
ElasticSearch requires Java to run. Although the default Java package on CentOS 7 is Java 8, here we install OpenJDK 11:
```bash=
yum install java-11-openjdk java-11-openjdk-devel
```
Set Java home.
```bash=
$ cat > /etc/profile.d/java11.sh <<EOF
export JAVA_HOME=\$(dirname \$(dirname \$(readlink \$(readlink \$(which javac)))))
export PATH=\$PATH:\$JAVA_HOME/bin
export CLASSPATH=.:\$JAVA_HOME/jre/lib:\$JAVA_HOME/lib:\$JAVA_HOME/lib/tools.jar
EOF
```
Source the created file to update your environment.
```bash=
source /etc/profile.d/java11.sh
```
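A quick check that Java 11 and JAVA_HOME are now picked up correctly:
```bash=
java -version
echo $JAVA_HOME
```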
Install and import the Elasticsearch PGP Key:
```bash=
rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
```
Add the repository for downloading ElasticSearch 7 packages to your CentOS 7 system.
```bash=
$ cat <<EOF | sudo tee /etc/yum.repos.d/elasticsearch.repo
[elasticsearch]
name=Elasticsearch repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=0
autorefresh=1
type=rpm-md
EOF
```
Once the repository is added, clear and update your YUM package index.
```bash=
yum clean all
yum makecache
```
Finally install ElasticSearch 7.x on your CentOS 7 machine.
```bash=
yum install --enablerepo=elasticsearch elasticsearch
```
Confirm ElasticSearch 7 installation on CentOS 7:
```bash=
$ rpm -qi elasticsearch
Name : elasticsearch
Epoch : 0
Version : 7.10.1
Release : 1
Architecture: x86_64
Install Date: Sáb 10 Out 2020 16:08:43 WEST
Group : Application/Internet
Size : 419357156
License : ASL 2.0
Signature : RSA/SHA512, Qua 23 Set 2020 04:37:33 WEST, Key ID d27d666cd88e42b4
Source RPM : elasticsearch-oss-7.9.2-1-src.rpm
Build Date : Qua 23 Set 2020 01:55:04 WEST
Build Host : packer-virtualbox-iso-1600176624
Relocations : /usr
Packager : Elasticsearch
Vendor : Elasticsearch
URL : https://www.elastic.co/
Summary : Distributed RESTful search engine built for the cloud
Description :
Reference documentation can be found at
https://www.elastic.co/guide/en/elasticsearch/reference/current/index.html
and the 'Elasticsearch: The Definitive Guide' book can be found at
https://www.elastic.co/guide/en/elasticsearch/guide/current/index.html
```
**Optional**
You can set JVM options like memory limits by editing the file: /etc/elasticsearch/jvm.options
Example below sets initial/maximum size of total heap space
```bash=
$ vi /etc/elasticsearch/jvm.options
.....
-Xms1g
-Xmx1g
```
If your system has limited memory, you can configure a smaller heap, for example:
```bash=
-Xms256m
-Xmx512m
```
**End Optional**
Start the ElasticSearch service and enable it to start on boot:
```bash=
systemctl enable --now elasticsearch
```
Confirm that the service is running.
```bash=
$ systemctl status elasticsearch
● elasticsearch.service - Elasticsearch
Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; enabled; vendor preset: disabled)
Active: active (running) since Sáb 2020-10-10 16:18:02 WEST; 1h 45min ago
Docs: https://www.elastic.co
Main PID: 39335 (java)
CGroup: /system.slice/elasticsearch.service
└─39335 /usr/share/elasticsearch/jdk/bin/java -Xshare:auto -Des.networkaddress.cach...
Out 10 16:17:48 elasticserach systemd[1]: Starting Elasticsearch...
Out 10 16:18:02 elasticserach systemd[1]: Started Elasticsearch.
```
Check if you can connect to ElasticSearch Service.
```bash=
$ curl http://127.0.0.1:9200
{
"name" : "elasticserach",
"cluster_name" : "hive",
"cluster_uuid" : "2_ebg7EuS_WI58zDDNTI1w",
"version" : {
"number" : "7.9.2",
"build_flavor" : "oss",
"build_type" : "rpm",
"build_hash" : "d34da0ea4a966c4e49417f2da2f244e3e97b4e6e",
"build_date" : "2020-09-23T00:45:33.626720Z",
"build_snapshot" : false,
"lucene_version" : "8.6.2",
"minimum_wire_compatibility_version" : "6.8.0",
"minimum_index_compatibility_version" : "6.0.0-beta1"
},
"tagline" : "You Know, for Search"
}
```
You should be able to create an index with curl
```bash=
$ curl -X PUT "http://127.0.0.1:9200/test_index"
{"acknowledged":true,"shards_acknowledged":true,"index":"test_index"}
```
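If you want to clean up afterwards, the test index can be deleted the same way:
```bash=
$ curl -X DELETE "http://127.0.0.1:9200/test_index"
```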
## Configuration
### ElasticSearch Conf
Add the following lines to elasticsearch.yml.
**path:** /etc/elasticsearch/elasticsearch.yml
```bash=
network.host: X.X.X.X
discovery.type: single-node
cluster.name: hive
script.allowed_types: inline
thread_pool.search.queue_size: 100000
thread_pool.write.queue_size: 10000
gateway.recover_after_nodes: 1
bootstrap.memory_lock: true
```
> ---
> network.host: -> set 0.0.0.0 to listen on all available IPs, or set the specific IP you want this service to listen on.
>
> cluster.name: -> define a name for your cluster.
>
> ---
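The `bootstrap.memory_lock: true` setting above only takes effect if the service is allowed to lock memory; on systemd this typically requires a LimitMEMLOCK override. A minimal sketch, assuming the stock elasticsearch.service unit:
```bash=
mkdir -p /etc/systemd/system/elasticsearch.service.d
cat > /etc/systemd/system/elasticsearch.service.d/override.conf <<EOF
[Service]
LimitMEMLOCK=infinity
EOF
systemctl daemon-reload
```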
Restart the ElasticSearch service.
```bash=
systemctl restart elasticsearch
```
### ElasticSearch SSL
You need the default (non-OSS) ElasticSearch distribution for this, because X-Pack is not included in the OSS build.
First enable the **X-Pack security** feature by adding the following line to the end of the elasticsearch.yml file:
```
xpack.security.enabled: true
```
Now generate a private key and X.509 certificate.
First, create a certificate authority for your cluster by running the following command inside the folder */usr/share/elasticsearch*:
```
bin/elasticsearch-certutil ca
```
Create a folder to contain certificates in the configuration directory of your Elasticsearch node. For example, create a certs folder in the config directory.
```
mkdir /etc/elasticsearch/certs
```
Generate certificates and private keys for the first node in your cluster:
```
bin/elasticsearch-certutil cert \
--ca elastic-stack-ca.p12 \
--dns localhost \
--ip 127.0.0.1,::1 \
--out /etc/elasticsearch/certs/node-1.p12
```
You are prompted to enter the password for your CA. You are also prompted to create a password for the certificate.
The output file is a PKCS#12 keystore that includes a node certificate, node key, and CA certificate.
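The generated keystore must be readable by the elasticsearch user; a minimal sketch, assuming the file was written to /etc/elasticsearch/certs/node-1.p12 as above:
```
chown root:elasticsearch /etc/elasticsearch/certs/node-1.p12
chmod 640 /etc/elasticsearch/certs/node-1.p12
```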
Edit the file elasticsearch.yml and add the following lines:
```
#SSL
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: full
xpack.security.transport.ssl.keystore.path: certs/node-1.p12
xpack.security.transport.ssl.truststore.path: certs/node-1.p12
# HTTPS
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.keystore.path: certs/node-1.p12
```
:::info
**Note:** If you used the --dns or --ip options with the elasticsearch-certutil cert command and you want to enable strict hostname checking, set the verification mode to full. For a description of these values, see [Transport TLS/SSL settings](https://www.elastic.co/guide/en/elasticsearch/reference/7.9/security-settings.html#transport-tls-ssl-settings).
:::
If you secured the keystore or the private key with a password, add that password to a secure setting in Elasticsearch:
```
# SSL
bin/elasticsearch-keystore add xpack.security.transport.ssl.keystore.secure_password
bin/elasticsearch-keystore add xpack.security.transport.ssl.truststore.secure_password
# HTTPS
bin/elasticsearch-keystore add xpack.security.http.ssl.keystore.secure_password
```
Finally, restart the elasticsearch service.
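With X-Pack security enabled, the built-in users (including the elastic account used later in the Cortex configuration) need passwords. One way to set them, once the service is back up, is the bundled tool (run from /usr/share/elasticsearch):
```
bin/elasticsearch-setup-passwords interactive
```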
### Cortex Conf
First create a folder for the Cortex Docker setup and move into it.
```bash=
mkdir -p /opt/containers/cortex
cd /opt/containers/cortex
```
Inside the folder create your docker-compose file.
```bash=
vi docker-compose.yml
```
Edit the file, and add the following configurations:
```bash=
version: "3.8"
services:
cortex:
image: 'thehiveproject/cortex:3.1.0-1'
container_name: cortex
restart: unless-stopped
volumes:
- ./application.conf:/etc/cortex/application.conf
- /var/run/docker.sock:/var/run/docker.sock
- ./tmp:/tmp
- /opt/Cortex-Analyzers/analyzers/:/opt/Cortex-Analyzers/analyzers/
- /opt/Cortex-Analyzers/responders/:/opt/Cortex-Analyzers/responders/
ports:
- '0.0.0.0:9001:9001'
```
:::info
**Hint:** You can change the volumes folders as you wish.
**E.g.:**
Instead of:
```
./application.conf:/etc/cortex/application.conf
```
You can choose another folder, for example the one where your files actually are (if it is a different one).
Something like:
```
/my_folder/application.conf:/etc/cortex/application.conf
```
**Note:** If you will run the analyzers/responders in Docker, remove the following two lines from the file:
```
...
- /opt/Cortex-Analyzers/analyzers/:/opt/Cortex-Analyzers/analyzers/
- /opt/Cortex-Analyzers/responders/:/opt/Cortex-Analyzers/responders/
...
```
:::
:::warning
**Note:** If during the installation you didn't clone the Cortex-Analyzers repository into the **/opt** folder, you need to change the Cortex-Analyzers path in the Cortex volumes (inside docker-compose.yml).
:::
After the compose file, create the application.conf file inside the folder you specified in the volumes section.
```bash=
vi application.conf
```
Edit the file, and add the following configurations:
:::spoiler
```bash=
# play.http.secret.key="generate_the_secret_key"
## ElasticSearch
search {
# Name of the index
index = cortex
# ElasticSearch instance address.
# For cluster, join address:port with ',': "http://ip1:9200,ip2:9200,ip3:9200"
# Change it for your elasticsearch server ip address and port (default 9200).
uri = "http://127.0.0.1:9200"
## Advanced configuration
# Scroll keepalive.
#keepalive = 1m
# Scroll page size.
#pagesize = 50
# Number of shards
#nbshards = 5
# Number of replicas
#nbreplicas = 1
# Arbitrary settings
#settings {
# # Maximum number of nested fields
# mapping.nested_fields.limit = 100
#}
## Authentication configuration
#user = ""
#password = ""
## SSL configuration
#keyStore {
# path = "/path/to/keystore"
# type = "JKS" # or PKCS12
# password = "keystore-password"
#}
#trustStore {
# path = "/path/to/trustStore"
# type = "JKS" # or PKCS12
# password = "trustStore-password"
#}
}
## Cache
cache.job = 10 minutes
## Authentication
auth {
method.basic = true
# "provider" parameter contains the authentication provider(s). It can be multi valued, which is useful
# for migration.
# The available auth types are:
# - services.LocalAuthSrv : passwords are stored in the user entity within ElasticSearch). No
# configuration are required.
# - ad : use ActiveDirectory to authenticate users. The associated configuration shall be done in
# the "ad" section below.
# - ldap : use LDAP to authenticate users. The associated configuration shall be done in the
# "ldap" section below.
# - oauth2 : use OAuth/OIDC to authenticate users. Configuration is under "auth.oauth2" and "auth.sso" keys
provider = [local]
ad {
# The Windows domain name in DNS format. This parameter is required if you do not use
# 'serverNames' below.
#domainFQDN = "mydomain.local"
# Optionally you can specify the host names of the domain controllers instead of using 'domainFQDN
# above. If this parameter is not set, TheHive uses 'domainFQDN'.
#serverNames = [ad1.mydomain.local, ad2.mydomain.local]
# The Windows domain name using short format. This parameter is required.
#domainName = "MYDOMAIN"
# If 'true', use SSL to connect to the domain controller.
#useSSL = true
}
ldap {
# The LDAP server name or address. The port can be specified using the 'host:port'
# syntax. This parameter is required if you don't use 'serverNames' below.
#serverName = "ldap.mydomain.local:389"
# If you have multiple LDAP servers, use the multi-valued setting 'serverNames' instead.
#serverNames = [ldap1.mydomain.local, ldap2.mydomain.local]
# Account to use to bind to the LDAP server. This parameter is required.
#bindDN = "cn=thehive,ou=services,dc=mydomain,dc=local"
# Password of the binding account. This parameter is required.
#bindPW = "***secret*password***"
# Base DN to search users. This parameter is required.
#baseDN = "ou=users,dc=mydomain,dc=local"
# Filter to search user in the directory server. Please note that {0} is replaced
# by the actual user name. This parameter is required.
#filter = "(cn={0})"
# If 'true', use SSL to connect to the LDAP directory server.
#useSSL = true
}
oauth2 {
# URL of the authorization server
#clientId = "client-id"
#clientSecret = "client-secret"
#redirectUri = "https://my-thehive-instance.example/index.html#!/login"
#responseType = "code"
#grantType = "authorization_code"
# URL from where to get the access token
#authorizationUrl = "https://auth-site.com/OAuth/Authorize"
#tokenUrl = "https://auth-site.com/OAuth/Token"
# The endpoint from which to obtain user details using the OAuth token, after successful login
#userUrl = "https://auth-site.com/api/User"
#scope = "openid profile"
# Type of authorization header
#authorizationHeader = "Bearer" # or token
}
# Single-Sign On
sso {
# Autocreate user in database?
#autocreate = false
# Autoupdate its profile and roles?
#autoupdate = false
# Autologin user using SSO?
#autologin = false
# Attributes mappings
#attributes {
# login = "login"
# name = "name"
# groups = "groups"
# roles = "roles" # list of roles, separated with comma
# organisation = "org"
#}
# Name of mapping class from user resource to backend user ('simple' or 'group')
#mapper = group
# Default roles for users with no groups mapped ("read", "analyze", "orgadmin")
#defaultRoles = []
# Default organization
#defaultOrganization = "MyOrga"
#groups {
# # URL to retreive groups (leave empty if you are using OIDC)
# #url = "https://auth-site.com/api/Groups"
# # Group mappings, you can have multiple roles for each group: they are merged
# mappings {
# admin-profile-name = ["admin"]
# editor-profile-name = ["write"]
# reader-profile-name = ["read"]
# }
#}
}
}
job {
runner = [local]
}
## ANALYZERS
#
analyzer {
# analyzer location
# url can be point to:
# - directory where analyzers are installed
# - json file containing the list of analyzer descriptions
urls = [
"/opt/Cortex-Analyzers/analyzers"
]
# Already specified with the image; edit this if you have your analyzers inside a different folder
# Sane defaults. Do not change unless you know what you are doing.
fork-join-executor {
# Min number of threads available for analysis.
parallelism-min = 2
# Parallelism (threads) ... ceil(available processors * factor).
parallelism-factor = 2.0
# Max number of threads available for analysis.
parallelism-max = 4
}
}
# RESPONDERS
#
responder {
# responder location (same format as analyzer.urls)
urls = [
"/opt/Cortex-Analyzers/responders"
]
# Already specified with the image; edit this if you have your responders inside a different folder
# Sane defaults. Do not change unless you know what you are doing.
fork-join-executor {
# Min number of threads available for analysis.
parallelism-min = 2
# Parallelism (threads) ... ceil(available processors * factor).
parallelism-factor = 2.0
# Max number of threads available for analysis.
parallelism-max = 4
}
}
```
:::
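If you want to set your own `play.http.secret.key` (commented at the top of application.conf), a simple way to generate a random value, assuming openssl is available:
```bash=
openssl rand -hex 32
```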
:::info
**Note:** The example above is for local analyzers/responders. If you intend to use the Docker method, change the runner to "docker" and the URLs of the analyzers/responders.
**E.g.:**
Job runner
```
job {
runner = [docker]
}
```
Analyzers
```
urls = [
"https://dl.bintray.com/thehive-project/cortexneurons/analyzers.json"
]
```
Responders
```
urls = [
"https://dl.bintray.com/thehive-project/cortexneurons/responders.json"
]
```
Note: you can't use custom analyzers/responders this way unless you build Docker images for them.
:::
**Hints**
Inside the docker-compose file the version is set to 3.8. To run this version you need at least Docker Engine 19.03.0 (check with docker --version) and at least Docker Compose 1.25.5 (check with docker-compose --version).
| Compose file format | Docker Engine release |
| -------- | -------- |
| 3.8 | 19.03.0+ |
| 3.7 | 18.06.0+ |
| 3.6 | 18.02.0+ |
| 3.5 | 17.12.0+ |
| 3.4 | 17.09.0+ |
:::info
**Hint:** If for some reason you have an older version of Docker Engine or Docker Compose and can't upgrade, you can use **3.7** or **3.6** in docker-compose.yml.
:::
**Volumes**
```
└── /opt
└── /containers
├── /cortex
│ ├── application.conf
│ ├── docker-compose.yml
│ └── /tmp
└── /certs
└── http.p12
```
Give the Docker socket the permissions needed to start containers from inside the Cortex container, add a cortex user and make it the owner of the Cortex-Analyzers folder, and prepare the tmp folder:
```
chmod 666 /var/run/docker.sock
adduser cortex
chown -R cortex:cortex /opt/Cortex-Analyzers
mkdir -p /opt/containers/cortex/tmp
chmod 1777 /opt/containers/cortex/tmp
```
Finally, run the Cortex container from inside the folder where the docker-compose.yml file is located.
```
docker-compose up
```
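Once everything works, you can start it detached instead, so it keeps running in the background:
```
docker-compose up -d
```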
Each time the container is created (or recreated), run the following command inside it:
```
apt update -y && apt install python-pip -y && apt install python3-pip -y && pip install cortexutils && pip3 install cortexutils
```
:::info
**How to get inside the docker container?**
```
docker exec -ti <container name> /bin/bash
```
e.g.:
```
docker exec -ti cortex /bin/bash
```
**Note:**
If you are using the local analyzers/responders, you also need to run the following commands inside the container, from the **/opt** folder:
```
for I in $(find Cortex-Analyzers -name 'requirements.txt'); do pip install -r $I; done && \
for I in $(find Cortex-Analyzers -name 'requirements.txt'); do pip3 install -r $I || true; done
```
:::
:::warning
**Note:**
If you change the application.conf file, restart the container (or stop and start it) for the change to take effect.
```
docker-compose start
docker-compose stop
docker-compose restart
```
:::
### Cortex AD Conf
Edit the application.conf file, uncomment the following lines, and fill in the fields.
```
auth {
method.basic = true
provider = [local, ad]
ad {
# The Windows domain name in DNS format. This parameter is required if you do not use
# 'serverNames' below.
domainFQDN = "mydomain.local or ip address"
# Optionally you can specify the host names of the domain controllers instead of using 'domainFQDN
# above. If this parameter is not set, TheHive uses 'domainFQDN'.
#serverNames = [ad1.mydomain.local, ad2.mydomain.local]
# The Windows domain name using short format. This parameter is required.
domainName = "MYDOMAIN"
# If 'true', use SSL to connect to the domain controller.
useSSL = false
}
}
```
Locally add the AD users you want to grant access to, e.g.:
- **Login:** dwuser
- **Name:** DW
- **Role:** read, write
:::danger
**Note:**
Use the AD username, but don't define any password.
:::
### Test Cortex API
```
curl -H 'Authorization: Bearer [API]' 'http://X.X.X.X:9001/api/analyzer?range=all'
```
Here "X.X.X.X" is your Cortex IP address or URL, and [API] is your organization user's API key (with at least read and analyze permissions).
![](https://i.imgur.com/H8NgVkw.png)
### Cortex with Elasticsearch SSL and user
Create a folder for the HTTP SSL certificate from ElasticSearch (follow the Elastic documentation on how to generate it), and add the following line to your **docker-compose.yml**:
```
- /opt/containers/certs/http.p12:/http.p12
```
**Example:**
```bash=
version: "3.8"
services:
cortex:
image: 'thehiveproject/cortex:3.1.0-0.2RC1'
container_name: cortex
restart: unless-stopped
volumes:
- /containers/cortex01/application.conf:/etc/cortex/application.conf
- /var/run/docker.sock:/var/run/docker.sock
- /tmp:/tmp
- /opt/Cortex-Analyzers/analyzers/:/opt/Cortex-Analyzers/analyzers/
- /opt/Cortex-Analyzers/responders/:/opt/Cortex-Analyzers/responders/
- /opt/containers/certs/http.p12:/http.p12
ports:
- '0.0.0.0:9001:9001'
```
Edit the application.conf file and change the search section to:
```
search {
# Name of the index
index = cortex
# ElasticSearch instance address.
# For cluster, join address:port with ',': "http://ip1:9200,ip2:9200,ip3:9200"
uri = "https://127.0.0.1:9200"
## Advanced configuration
# Scroll keepalive.
#keepalive = 1m
# Scroll page size.
#pagesize = 50
# Number of shards
#nbshards = 5
# Number of replicas
#nbreplicas = 1
# Arbitrary settings
#settings {
# # Maximum number of nested fields
# mapping.nested_fields.limit = 100
#}
## Authentication configuration
user = "elastic"
password = "XXXXX"
## SSL configuration
keyStore {
path = "/http.p12"
type = "PKCS12"
password = "XXXXX"
}
trustStore {
path = "/http.p12"
type = "PKCS12"
password = "XXXXXX"
}
}
```
### Cortex SSL NGINX
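A minimal reverse-proxy sketch for exposing Cortex over HTTPS, assuming NGINX is installed in front of Cortex and you already have a certificate/key pair (the server name and certificate paths below are assumptions to adapt):
```bash=
cat > /etc/nginx/conf.d/cortex.conf <<'EOF'
server {
    listen 443 ssl;
    server_name cortex.example.local;              # assumption: your hostname

    ssl_certificate     /etc/nginx/ssl/cortex.crt; # assumption: your certificate
    ssl_certificate_key /etc/nginx/ssl/cortex.key; # assumption: your private key

    location / {
        proxy_pass http://127.0.0.1:9001/;         # Cortex on its default port
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
EOF
nginx -t && systemctl reload nginx
```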
## Firewalld
You will need to allow port 9001 on the Cortex server and port 9200 on the ElasticSearch server, if you are using the default ports.
Check these rules [here](#Firewalld) ([cortex](#Cortex1) or [elasticsearch](#ElasticSearch)).
___
# Firewalld
## Cortex
First create an appropriate zone name (in our case, we have used cortex-access to allow access to the Cortex server).
```bash=
firewall-cmd --new-zone=cortex-access --permanent
```
Next, reload the firewalld settings to apply the new change. If you skip this step, you may get an error when you try to use the new zone name. This time around, the new zone should appear in the list of zones.
```bash=
firewall-cmd --reload
firewall-cmd --get-zones
```
Next, add the source IP address (**192.168.30.143/24**) and the port (**9001**) you wish to open on the local server as shown. Then reload the firewalld settings to apply the new changes.
:::info
**Note:** You can specify an interface using **--add-interface=ens**, and for the source you can allow traffic from the entire network by using the network IP (CIDR notation).
:::
```bash=
firewall-cmd --zone=cortex-access --add-source=192.168.30.143/24 --permanent
firewall-cmd --zone=cortex-access --add-port=9001/tcp --permanent
firewall-cmd --reload
```
To confirm that the new zone has the required settings as added above, check its details with the following command.
```bash=
firewall-cmd --zone=cortex-access --list-all
```
## ElasticSearch
First create an appropriate zone name (in our case, we have used elasticsearch to allow access to the ElasticSearch server).
:::info
**Note:** We didn't use elasticsearch-access because zone names are limited to 17 characters.
:::
```bash=
firewall-cmd --new-zone=elasticsearch --permanent
```
Next, reload the firewalld settings to apply the new change. If you skip this step, you may get an error when you try to use the new zone name. This time around, the new zone should appear in the list of zones.
```bash=
firewall-cmd --reload
firewall-cmd --get-zones
```
Next, add the source IP address (**192.168.30.144/24**) and the port (**9200**) you wish to open on the local server as shown. Then reload the firewalld settings to apply the new changes.
:::info
**Note:** You can specify an interface using **--add-interface=ens**, and for the source you can allow traffic from the entire network by using the network IP (CIDR notation).
:::
```bash=
firewall-cmd --zone=elasticsearch --add-source=192.168.30.144/24 --permanent
firewall-cmd --zone=elasticsearch --add-port=9200/tcp --permanent
firewall-cmd --reload
```
To confirm that the new zone has the required settings as added above, check its details with the following command.
```bash=
firewall-cmd --zone=elasticsearch --list-all
```