:::success
# Research Project 1 - Exploring log analysis and traffic monitoring tools
:::
## Annotation
In this project I looked at three tools whose main functions are traffic monitoring and log analysis: the ELK stack, AlienVault SIEM (OSSIM), and Security Onion. Tools that inspect logs matter because they significantly reduce the time a developer spends finding errors, help detect suspicious activity in time, determine whether an incident constitutes a system security breach, and much more. Beyond monitoring logs of actions within a system, network auditing is also important: it is traffic monitoring that allows many types of cyber attacks to be prevented, above all those related to information leaks, phishing, and many other scenarios. Each of these tools is a large complex of components and related services, so my tasks amount to getting acquainted with them:
1. Build demo networks with each of the tools.
2. Initiate various activities on the client using Metasploit to see the tools in action.
3. Analyze the convenience and effectiveness of each tool and, where possible, pick the most effective one.
## Installation
All demo objects (configurations, images, etc.) were assembled in Docker using Docker Compose and in VirtualBox. I decided to run the ELK stack in Docker, because building from a docker-compose.yml file is fast and convenient. OSSIM, on the other hand, is not supported in Docker at all: without direct access to the hardware its network monitoring components malfunction. Security Onion can only be partially assembled in the form of containers.
## ELK Stack
Since ELK is a combination of three (or even four) different tools, I will assemble it piece by piece. Each service is configured individually and runs in a separate container, but they all start at the same time.
* **Elasticsearch** (data storage and retrieval).
* **Logstash** (pipeline for processing, filtering and normalizing logs).
* **Kibana** (interface for easy search and administration).
* **Filebeat** (ships log file data to Logstash).
We start deploying ELK by preparing a config for each service. First, as is classic, comes Elasticsearch.
**configs/elasticsearch/config.yml**
```yaml
cluster.name: "elk"
network.host: 0.0.0.0
xpack.security.enabled: true
xpack.license.self_generated.type: basic
```
Next up is Kibana and Logstash.
**configs/kibana/config.yml**
```yaml
server.name: kibana
server.host: 0.0.0.0
server.publicBaseUrl: "http://localhost:5601"
monitoring.ui.container.elasticsearch.enabled: true
elasticsearch.hosts: [ "http://elasticsearch:9200" ]
elasticsearch.username: admin
elasticsearch.password: PassWd123
```
**configs/logstash/config.yml**
```yaml
http.host: "0.0.0.0"
```
And now the most interesting part: bringing up all the services at once with Docker Compose. Separately, we account for the host that shares a folder with Filebeat, needed for collecting logs.
**docker-compose.yaml**
```yaml
version: '3.7'
services:
  elasticsearch:
    image: elasticsearch:7.16.1
    volumes:
      - ./configs/elasticsearch/config.yml:/usr/share/elasticsearch/config/elasticsearch.yml:ro
      - elasticsearch:/usr/share/elasticsearch/data
    environment:
      ES_JAVA_OPTS: "-Xmx512m -Xms512m"
      ELASTIC_USERNAME: "admin"
      ELASTIC_PASSWORD: "PassWd123"
      discovery.type: single-node
    networks:
      - elk
    ports:
      - "9200:9200"
      - "9300:9300"
  logstash:
    image: logstash:7.16.2
    volumes:
      - ./configs/logstash/config.yml:/usr/share/logstash/config/logstash.yml:ro
      - ./configs/logstash/pipelines.yml:/usr/share/logstash/config/pipelines.yml:ro
      - ./configs/logstash/pipelines:/usr/share/logstash/config/pipelines:ro
    environment:
      LS_JAVA_OPTS: "-Xmx512m -Xms512m"
    ports:
      - "5044:5044"
      - "5000:5000"
      - "9600:9600"
    networks:
      - elk
    depends_on:
      - elasticsearch
  kibana:
    image: kibana:7.16.1
    depends_on:
      - elasticsearch
    volumes:
      - ./configs/kibana/config.yml:/usr/share/kibana/config/kibana.yml:ro
    networks:
      - elk
    ports:
      - "5601:5601"
  beats:
    image: elastic/filebeat:7.16.2
    volumes:
      - ./configs/filebeat/config.yml:/usr/share/filebeat/filebeat.yml:ro
      - ./host_metrics_app:/host_metrics_app/:ro
    networks:
      - elk
    depends_on:
      - elasticsearch
  vulnerable_node:
    image: tleemcjr/metasploitable2
    volumes:
      - ./host_metrics_app:/var/log/apache2
    container_name: vulnerable
    command: bash -c "./bin/services.sh && while true; do sleep 2; done"
    networks:
      - elk
volumes:
  elasticsearch:
networks:
  elk:
    driver: bridge
```
I found out about this folder in the process: Metasploitable actually writes its logs to /var/log, which is why /var/log/apache2 is bind-mounted to the shared host_metrics_app folder.

```
#deploy everything
docker-compose up
```
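Once the stack is up, we can check that the shared folder really works from both sides; a quick sketch, assuming the `container_name: vulnerable` and the bind mount from the compose file above:
```
# inside the Metasploitable container: Apache writes its logs here
docker exec vulnerable ls /var/log/apache2
# on the host: the same files appear in the shared folder read by Filebeat
ls ./host_metrics_app/
```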

And we make sure that Kibana's server has started.
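A quick sanity check from the host (assuming the ports and credentials from the configs above):
```
# Elasticsearch should answer with cluster info
curl -u admin:PassWd123 http://localhost:9200
# Kibana reports readiness on its status API
curl http://localhost:5601/api/status
```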

Next, we declare a Logstash pipeline that will process the JSON-formatted logs.
**configs/logstash/pipelines.yml**
```yaml
- pipeline.id: service_stamped_json_logs
  pipeline.workers: 1
  pipeline.batch.size: 1
  path.config: "/usr/share/logstash/config/pipelines/service_stamped_json_logs.conf"
```
And now add the pipeline configuration that parses the incoming log lines as JSON.
**configs/logstash/pipelines/service_stamped_json_logs.conf**
```
input {
  beats {
    port => 5044
  }
}
filter {
  # keep only events from the services we ship
  # ("apache2" matches the fields.service value set in the Filebeat config)
  if [fields][service] not in ["host_metrics_app", "apache2"] {
    drop {}
  }
  json {
    source => "message"
  }
  date {
    match => ["asctime", "yyyy-MM-dd HH:mm:ss.SSS"]
    timezone => "UTC"
    target => "@timestamp"
    remove_field => ["asctime"]
  }
}
output {
  elasticsearch {
    hosts => "elasticsearch:9200"
    index => "logs_%{[fields][service]}-%{+YYYY.MM.dd}"
    # must match the credentials set in docker-compose.yaml
    user => "admin"
    password => "PassWd123"
  }
}
```
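Once events start flowing, the daily indices created by the output section can be listed straight from Elasticsearch (same credential assumption as above):
```
# list the log indices created by the pipeline
curl -u admin:PassWd123 'http://localhost:9200/_cat/indices/logs_*?v'
```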
This is the config for Filebeat, which is usually deployed on the client, but in my case everything lives in Docker. It lists the folders whose log files are forwarded.
**configs/filebeat/config.yml**
```yaml
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /host_metrics_app/access.log
      - /host_metrics_app/error.log
    fields:
      service: apache2
output.logstash:
  hosts: ["logstash:5044"]
```
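To give Filebeat something to ship, we can generate some Apache activity on the vulnerable container; a minimal sketch, assuming the container name and shared folder from the compose file (and that wget is available inside the Metasploitable image):
```
# hit Apache inside the Metasploitable container to produce access-log entries
docker exec vulnerable wget -qO- http://localhost/
# watch the entries appear in the shared folder
tail -f ./host_metrics_app/access.log
```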
Restart the containers and you should now be able to work in the Kibana GUI. Everything here is quite simple, as long as it does not involve direct intervention in the services' configuration. My presentation shows nicely configured dashboards (put there whatever information you would like to see infographics on). As for importing logs, we need to go to Stack Management and add our log indices as index patterns. After analysis and indexing, we will see which fields Elasticsearch found in them.

This completes a superficial acquaintance with Elasticsearch, although the tool itself offers many more useful functions, and exploring them properly deserves an entire project of its own. I see Elasticsearch as a fashionable, popular service that keeps developing within its niche.
## AlienVault OSSIM - security information and event management system
For an installation of AlienVault OSSIM, the minimum system requirements are as follows:
* 2 CPU cores
* 4-8 GB RAM
* 50 GB HDD
* E1000 compatible network cards
To install AlienVault OSSIM:
1. In VirtualBox, create a new VM using the ISO as the installation source.
2. Once you have booted the new Debian 8.x 64-bit instance, select Install AlienVault OSSIM (64 Bit) and press Enter.


*The IP in the screenshot is not correct; I changed it to a local address because I needed to access the web UI from the browser on my machine.
Next, we get to the OSSIM web interface, which offers help in setting up hosts for monitoring. To do this, we need to choose how we want to use OSSIM:
1. Monitor Network - setting up the network monitored by the OSSIM server
2. Assets Discovery - automatic discovery of network devices in the organization
3. Log Collection - collecting logs and monitoring network nodes
I chose to start by collecting logs and monitoring nodes on the network interface that faces into my virtual network (the vulnerable client (Metasploitable2) and the attacker (a machine with Metasploit) are already deployed there). They are on the same subnet (internal network) as the interface dedicated to monitoring; a sketch for generating test activity from the attacker follows the address list below.
External net for WebUI (Manager) - 10.0.0.0/24 (IP 10.0.0.99/24).
Internal net for Monitor & Hosts - 192.168.1.0/24
* Vulnerable client - 192.168.1.4/24
* Attacker - 192.168.1.5/24
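With the addresses above, test activity can be generated from the attacker machine; a hedged sketch (exact Metasploit module options may differ between versions):
```
# from the attacker (192.168.1.5): scan the vulnerable client to generate events
nmap -sS -sV 192.168.1.4
# a classic Metasploitable2 exploit as a noisier test
msfconsole -q -x "use exploit/unix/ftp/vsftpd_234_backdoor; set RHOSTS 192.168.1.4; run"
```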
After network setup and device discovery, the next step is to deploy a HIDS on Windows/Linux devices for file integrity monitoring, rootkit detection, and event logging. On Linux devices, this can be done over SSH.

After that, the system reports a successful deployment and lets me manage the discovered host. However, that was as far as it went: after trying to connect to the host and begin installing the monitoring agent, my laptop ran into a critical hardware error.

No matter how much I tried to reduce the resources consumed by the virtual environment, the pressure on the system exceeded its capabilities. My laptop's specifications:
* 6 CPU cores
* 8 GB RAM
* 100 GB SSD (for project)
Resources allocated to the Vulnerable Client VM:
* 1 CPU core
* 512 MB RAM
* 5 GB HDD
Resources allocated to the Attacker VM:
* 1 CPU core
* 2 GB RAM
* 30 GB HDD
Thus, due to lack of resources, I couldn't even see the activity dashboard. Despite this, I fully studied the mechanism of the subsequent network configuration. Hosts can be imported in several ways:
* Using the Getting Started Wizard
* Scanning for new assets
* Importing a CSV file
* Using SIEM events
* Adding assets manually

After importing the hosts, we will additionally need to install a local client (a HIDS agent) on the monitored network node:
```
#download installer
wget https://github.com/ossec/ossec-hids/archive/3.0.0.tar.gz -P /tmp/
cd /tmp/
tar xzf 3.0.0.tar.gz
#install
cd ossec-hids-3.0.0/
./install.sh
```
After installation, we import the agent key from our server. It can be found in the dashboard under Environment > Detection > HIDS > Agents; on the host, import it using the `/var/ossec/bin/manage_agents` tool.
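A minimal sketch of the import on the agent side, assuming a standard OSSEC installation path:
```
# run the management tool and choose (I) to import the key copied from the dashboard
sudo /var/ossec/bin/manage_agents
# restart the agent so it connects to the OSSIM server
sudo /var/ossec/bin/ossec-control restart
```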
Unfortunately, I can't show the rule management process in this environment. But since I still chose this tool for consideration, it is worth noting that the basic rules are already activated in the system and track standard vulnerabilities (those written into the AT&T Cybersecurity correlation rules). They are not subject to any change. You can, however, create and configure orchestration rules to add specific policies for a specific event or alarm. OSSIM (USM) supports the following types of rules:
* Suppression rules (rules for suppressing events or alarms that make noise)
* Filtering rules (rules for the sensor to discard future events that match the rule)
* Alert rules (rules for identifying existing and emerging threats)
* Notification rules (rules for creating your own rules and receiving notifications)
* Response Rules (rules to respond to an event or alarm by running an AlienApp).
## Security Onion
:::danger
Minimum hard disk space - 200GB
:::
That dreaded message was enough to make me realize it was time to upgrade my laptop. One way or another, I count Security Onion among the tools that can compete with ELK, because it simply took and implemented the entire ELK stack (a joke and the truth at the same time). My presentation shows the architectures that can be built with Security Onion, and the table describes all the points of this solution in detail, so I will not add much about it in this report.

It really is an onion: at almost every layer (although, as far as I could tell, the ELK stack components remain indispensable) you can choose from several tools or connect all of them at once. All of these tools are free and available, and Security Onion has chosen what I think is a good tactic of making them visible through the GUI of its console. However, given that each tool needs to be customized, using this system requires real skill in working with open-source projects. The question of security is also worth mentioning: it is debatable when such a large number of free tools is involved. The developers likewise encourage users to be aware of this and do not take responsibility for changes to, or vulnerabilities discovered in, the free applications.
## Result table
The table linked below contains the criteria I identified based on my experience with these tools. I was mostly concerned with ease of installation, the friendliness of these services toward the user and toward other environments and tools, and the quality of information display.
Link: [Result Table](https://docs.google.com/spreadsheets/d/1UanTlXOumjLbJ4YOByH6YSObiFRuR_3FfnVGUXtNkxE/edit?usp=sharing)
## Conclusion
In conclusion, I can say the following: together, these tools create complete systems that require a high level of professionalism to configure, analyze, and work with, largely because it is important to understand how their components interact with each other. I see the point in starting log-management automation with Elasticsearch. It is popular right now, it containerizes wonderfully in Docker, the documentation is pleasantly complete, and the free version is perfect for small companies and startups. However, Elasticsearch customization is limited in the free version: you will not be able to set up notifications, and not all stack features will be available to you, including machine learning, but you are given complete freedom to study the documentation and write rules manually.
OSSIM is seriously outdated; the developers are focusing on the new cloud USM solution, and even the OSSIM documentation is supposedly temporarily absent. In OSSIM you cannot manage logs, only look at them, but the network monitors work well. Most of the setup guides will soon be older than me, and I can understand the company's refusal to host heavyweight sensor software locally. However, as I found out, precisely because OSSIM is so closed (like a tin can), it suits government organizations and companies that do not need novelties: they need a strict classic where they can be at least 90% sure of the security of their network. At the moment, though, the USM cloud solutions are all integrations with Azure and AWS, so I assume this offering is mainly of interest to Western countries.
Finally, a little about Security Onion. I think it is a cool tool that touches every area somehow related to log processing and network monitoring. Onion can offer two, three, or five tools to solve one problem: you can choose, you can install everything, or you can refuse them and bring in your own. But I will emphasize again that this requires being a professional in working with open-source tools and understanding the risks of doing so. To work with it you need to be an enthusiast with your own view on solving log problems, which is why I saw a lot of information that Onion is preferred by computer security laboratories. Nothing here works straight out of the box.
## References:
1. [OSSIM ISO](https://dlcdn.alienvault.com/AlienVault_OSSIM_64bits.iso)
2. [Alienvault Monitoring (experiment) - CompTIA Cybersecurity Analyst](https://cybersecurityhoy.files.wordpress.com/2020/08/14-alienvault-monitoring-siem-and-netflow.pdf)
3. [Journey into the AlienVault OSSIM/USM - pentest.blog](https://pentest.blog/unexpected-journey-into-the-alienvault-ossimusm-during-engagement/)
4. [Pentesting elasticsearch](https://book.hacktricks.xyz/network-services-pentesting/9200-pentesting-elasticsearch)