---
title: SOC Overview
---

# 1. What Are Logs?
## Definition
Logs are records of events within a system. These records provide a detailed account of what a system has been doing, capturing a wide range of events such as user logins, file accesses, system errors, network connections, and changes to data or system configurations.
## The power of logs
Logs help answer the key questions of an investigation:
* What happened?
* When did it happen?
* Where did it happen?
* Who is responsible?
* Were their actions successful?
* What was the result of their action?
## Log types
* **Application Logs**: Messages about specific applications, including status, errors, warnings, etc.
* **Audit Logs**: Activities related to operational procedures crucial for regulatory compliance.
* **Security Logs**: Security events such as logins, permissions changes, firewall activity, etc.
* **Server Logs**: Various logs a server generates, including system, event, error, and access logs.
* **System Logs**: Kernel activities, system errors, boot sequences, and hardware status.
* **Network Logs**: Network traffic, connections, and other network-related events.
* **Database Logs**: Activities within a database system, such as queries and updates.
* **Web Server Logs**: Requests processed by a web server, including URLs, response codes, etc.

## Log formats
* Semi-structured logs
    * Syslog message format: A widely adopted logging protocol for system and network logs.

        >TIMESTAMP HOSTNAME TAG[PID]: MESSAGE

        > May 31 12:34:56 WEBSRV-02 CRON[2342593]: (root) CMD ([ -x /etc/init.d/anacron ] && if [ ! -d /run/systemd/system ]; then /usr/sbin/invoke-rc.d anacron start >/dev/null; fi)

    * Windows Event Log (EVTX) format: Proprietary Microsoft log format for Windows systems.

        | TimeCreated         | Id    | LevelDisplayName | Message                                                         |
        | ------------------- | ----- | ---------------- | --------------------------------------------------------------- |
        | 31/05/2023 17:18:24 | 16384 | Information      | Successfully scheduled Software Protection service for re-start |
        | 31/05/2023 17:17:53 | 16394 | Information     | Offline downlevel migration succeeded.                              |
* Structured Logs
    * Field Delimited Formats: Comma-Separated Values (CSV) and Tab-Separated Values (TSV) are formats often used for tabular data.
   > "time","user","action","status","ip","uri"
   > "2023-05-31T12:34:56Z","adversary","GET",200,"34.253.159.159","http://gitlab.swiftspend.finance:80/"
   
   * JavaScript Object Notation (JSON): Known for its readability and compatibility with modern programming languages.
   >{"time": "2023-05-31T12:34:56Z", "user": "adversary", "action": "GET", "status": 200, "ip": "34.253.159.159", "uri": "http://gitlab.swiftspend.finance:80/"}
   
   * W3C Extended Log Format (ELF): Defined by the World Wide Web Consortium (W3C), customizable for web server logging. It is typically used by Microsoft Internet Information Services (IIS) Web Server.

    > #Version: 1.0
    > #Fields: date time c-ip c-username s-ip s-port cs-method cs-uri-stem sc-status
    > 31-May-2023 13:55:36 34.253.159.159 adversary 34.253.127.157 

    * eXtensible Markup Language (XML): Flexible and customizable for creating standardized logging formats.

    > `<log><time>2023-05-31T12:34:56Z</time><user>adversary</user><action>GET</action><status>200</status><ip>34.253.159.159</ip><url>https://gitlab.swiftspend.finance/</url></log>`

* Unstructured Logs
    * NCSA Common Log Format (CLF): A standardized web server log format for client requests. It is typically used by the Apache HTTP Server by default.

        >34.253.159.159 - adversary [31/May/2023:13:55:36 +0000] "GET /explore HTTP/1.1" 200 4886

    * NCSA Combined Log Format (Combined): An extension of CLF, adding fields like referrer and user agent. It is typically used by Nginx HTTP Server by default.
        >34.253.159.159 - adversary [31/May/2023:13:55:36 +0000] "GET /explore HTTP/1.1" 200 4886 "http://gitlab.swiftspend.finance/" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/115.0"
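
    Even unstructured formats like CLF have fixed field positions, so standard tools can pull them apart. As a quick sketch, this awk one-liner splits the CLF sample above into labelled fields:

    ```shell
    # Split the sample CLF entry into labelled fields with awk.
    # CLF layout: ip identity user [timestamp] "request" status bytes
    echo '34.253.159.159 - adversary [31/May/2023:13:55:36 +0000] "GET /explore HTTP/1.1" 200 4886' |
    awk '{
        gsub(/"/, "", $6)   # drop the opening quote from the method
        printf "ip=%s user=%s method=%s path=%s status=%s\n", $1, $3, $6, $7, $9
    }'
    # → ip=34.253.159.159 user=adversary method=GET path=/explore status=200
    ```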

## Log Standards
Log standards are guidelines that define how logs should be created, transmitted, and stored. They cover event selection, secure transmission, and retention periods. Examples include:

* **CEE (Common Event Expression)**: Unified structure for log data.

* **OWASP Logging Cheat Sheet**: Security-focused logging practices for developers.

* **Syslog Protocol**: Standard for separating log generation, storage, and analysis.

* **NIST SP 800-92**: Guidance on computer security log management.

* **Cloud-specific standards**: Azure Monitor Logs, Google Cloud Logging, Oracle Cloud Logging.

* **Virginia Tech IT Logging Standard**: Compliance and review guidelines.

## Log collection
* Log collection is the first step in log analysis, requiring aggregation from multiple sources.
* Maintaining accurate system time with NTP ensures a reliable event timeline.
* The process includes identifying sources, selecting a log collector, configuring parameters (time sync and event priorities), and testing the setup.
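
A minimal sketch of aggregation (file names and log lines are hypothetical): merging per-source files with `sort` only yields a correct timeline because the ISO 8601 timestamps sort lexicographically, which is exactly why synchronised clocks matter.

```shell
# Minimal aggregation sketch: two per-source files (hypothetical names),
# merged into one timeline. ISO 8601 timestamps sort lexicographically,
# so a plain sort orders events correctly -- provided the clocks agree (NTP).
printf '%s\n' '2023-05-31T12:34:56Z websrv-02 sshd: Accepted password for adversary' > /tmp/websrv-02.log
printf '%s\n' '2023-05-31T12:30:01Z gitlab-01 nginx: GET /explore 200' > /tmp/gitlab-01.log
sort /tmp/websrv-02.log /tmp/gitlab-01.log    # merged, oldest event first
```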

## Log Management
* Storage: Decide on a secure storage solution.
* Organisation: Classify logs based on their source, type, or other criteria for easier access later.
* Backup: Regularly back up your logs to prevent data loss.
* Review: Periodically review logs to ensure they are correctly stored and categorised.

## Log Centralisation
* Choose a Centralised System: Opt for a system that consolidates logs from all sources, such as the Elastic Stack or Splunk.
* Integrate Sources: Connect all your log sources to this centralised system.
* Set Up Monitoring: Utilise tools that provide real-time monitoring and alerts for specified events.
* Integration with Incident Management: Ensure that your centralised system can integrate seamlessly with any incident management tools or protocols you have in place.

## Log Deletion
Log deletion must be performed carefully to avoid removing logs that could still be of value. The backup of log files, especially crucial ones, is necessary before deletion.

## Log Analysis Process
* Data sources
* Parsing
* Normalisation
* Sorting
* Classification
* Enrichment
* Correlation
* Visualisation
* Reporting
# 2. Practical Activity
## a. Log Collection with rsyslog
Normally, all logs (sshd, cron, kernel, etc.) are stored together in `/var/log/syslog` or `/var/log/auth.log`.

This configuration redirects all logs related to sshd into a dedicated file:

`/var/log/websrv-02/rsyslog_sshd.log`

✅ Benefits:

* Easier monitoring and analysis of SSH events (login success, failure, brute force, key authentication).

* Saves time for administrators by avoiding manual filtering in general log files.

* Focused logs simplify incident investigation, e.g., brute force attacks.

1. Ensure rsyslog is installed. Check: `sudo systemctl status rsyslog`
2. Create a Configuration File: `nano /etc/rsyslog.d/98-websrv-02-sshd.conf`
3. Add the Configuration: Direct sshd messages to the dedicated log file: 

    > $FileCreateMode 0644

    > :programname, isequal, "sshd" /var/log/websrv-02/rsyslog_sshd.log
4. Restart rsyslog: `sudo systemctl restart rsyslog`
![image](https://hackmd.io/_uploads/B1wEeIhYlg.png)
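
The `:programname, isequal, "sshd"` line is a property-based filter on the syslog tag. Conceptually it behaves like this awk filter over raw syslog lines (the sample lines are illustrative):

```shell
# Conceptual equivalent of the rsyslog filter: keep only lines whose syslog
# tag (the program name, with optional [PID]) is exactly "sshd".
cat <<'EOF' > /tmp/sample_syslog
May 31 12:34:56 WEBSRV-02 CRON[2342593]: (root) CMD (run-parts /etc/cron.hourly)
May 31 12:35:10 WEBSRV-02 sshd[1459]: Failed password for invalid user admin from 34.253.159.159 port 4444 ssh2
EOF
awk '$5 ~ /^sshd(\[[0-9]+\])?:$/' /tmp/sample_syslog    # prints only the sshd line
```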

## b. Log Management with logrotate
`logrotate` is a tool that automates log file rotation, compression, and management, ensuring that log files are handled systematically. 

```
/var/log/myapp/*.log {
    daily
    rotate 7
    compress
    missingok
    notifempty
    create 0640 root adm
    postrotate
        systemctl restart myapp
    endscript
}
```

* `/var/log/myapp/*.log` → Targets all log files ending with .log inside `/var/log/myapp/`.
* `daily` → Rotate the log files every day.

* `rotate 7` → Keep up to 7 old log files. Older ones will be deleted.

* `compress` → Compress old log files to save space.

* `missingok` → If a log file is missing, don’t generate an error.

* `notifempty` → Do not rotate if the log file is empty.

* `create 0640 root adm` → After rotation, create a new log file with permissions 0640, owned by user root and group adm.

* `postrotate ... endscript` → After rotating, run the command inside.

* `systemctl restart myapp` → Restart the `myapp` service so it starts writing logs into the new file.
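
For intuition, one rotation cycle boils down to roughly the following (a simplified sketch with a hypothetical `/tmp/myapp` path; real logrotate also renumbers older archives, prunes past `rotate 7`, and sets ownership):

```shell
# Simplified sketch of one logrotate cycle on a hypothetical app log.
mkdir -p /tmp/myapp
echo 'some log line' > /tmp/myapp/app.log

mv /tmp/myapp/app.log /tmp/myapp/app.log.1    # rotate the current file away
gzip -f /tmp/myapp/app.log.1                  # "compress" -> app.log.1.gz
: > /tmp/myapp/app.log                        # "create": fresh empty log file
chmod 0640 /tmp/myapp/app.log                 # mode from "create 0640" (root/adm ownership needs root)
ls /tmp/myapp                                 # now holds app.log and app.log.1.gz
```

To check a real config without touching any files, `logrotate -d /etc/logrotate.d/myapp` runs it in debug (dry-run) mode.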

## Log Storage

Logs can be stored in various locations, such as the local system that generates them, a centralised repository, or cloud-based storage.

The choice of storage location typically depends on multiple factors:
	
* Security Requirements: Ensuring that logs are stored in compliance with organisational or regulatory security protocols.
* Accessibility Needs: How quickly and by whom the logs need to be accessed can influence the choice of storage.
* Storage Capacity: The volume of logs generated may require significant storage space, influencing the choice of storage solution.
* Cost Considerations: The budget for log storage may dictate the choice between cloud-based or local solutions.
* Compliance Regulations: Specific industry regulations governing log storage can affect the choice of storage.
* Retention Policies: The required retention time and ease of retrieval can affect the decision-making process.
* Disaster Recovery Plans: Ensuring the availability of logs even in system failure may require specific storage solutions.
## Log Retention

It is vital to recognise that log storage is not infinite. Therefore, a reasonable balance between retaining logs for potential future needs and the storage cost is necessary. Understanding the concepts of Hot, Warm, and Cold storage can aid in this decision-making:

* Hot Storage: Logs from the past 3-6 months that are the most accessible. Query speed should be near real-time, depending on the complexity of the query.
* Warm Storage: Logs from 6 months to 2 years, acting as a data lake, easily accessible but not as immediate as hot storage.
* Cold Storage: Archived or compressed logs from 2-5 years. These logs are not easily accessible and are usually used for retroactive analysis or scoping purposes.
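
A rough sketch of demoting logs across tiers (the paths and the 180-day threshold are illustrative): `find -mtime` selects files past the hot/warm window, which are then compressed into an archive directory.

```shell
# Sketch: demote logs older than the hot/warm window (~180 days here) to a
# compressed "cold" archive. Paths and the threshold are illustrative.
mkdir -p /tmp/logs /tmp/cold
touch /tmp/logs/recent.log
touch -d '200 days ago' /tmp/logs/old.log     # simulate an old log (GNU touch)

find /tmp/logs -name '*.log' -mtime +180 -exec mv {} /tmp/cold/ \;
gzip -f /tmp/cold/*.log                       # compress everything in cold storage
ls /tmp/cold                                  # → old.log.gz
```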


## Log Deletion

It is essential to have a well-defined deletion policy to ensure compliance with data protection laws and regulations. Log deletion helps to:

* Maintain a manageable size of logs for analysis.
* Comply with privacy regulations, such as GDPR, which require unnecessary data to be deleted.
* Keep storage costs in balance.

## Log Analysis Process

Log analysis involves Parsing, Normalisation, Sorting, Classification, Enrichment, Correlation, Visualisation, and Reporting. It can be done with various tools and techniques, ranging from dedicated platforms like Splunk and the ELK stack to ad-hoc methods built on default command-line tools and open-source utilities.
	
* Data Sources: Data Sources are the systems or applications configured to log system events or user activities. These are the origin of logs.


* Parsing: Parsing is breaking down the log data into more manageable and understandable components. Since logs come in various formats depending on the source, it's essential to parse these logs to extract valuable information.


* Normalisation: Normalisation is standardising parsed data. It involves bringing the various log data into a standard format, making comparing and analysing data from different sources easier. It is imperative in environments with multiple systems and applications, where each might generate logs in another format.


* Sorting: Sorting is a vital aspect of log analysis, as it allows for efficient data retrieval and identification of patterns. Logs can be sorted by time, source, event type, severity, and any other parameter present in the data. Proper sorting is critical in identifying trends and anomalies that signal operational issues or security incidents.

* Classification: Classification involves assigning categories to the logs based on their characteristics. By classifying log files, you can quickly filter and focus on those logs that matter most to your analysis. For instance, classification can be based on the severity level, event type, or source. Automated classification using machine learning can significantly enhance this process, helping to identify potential issues or threats that could be overlooked.


* Enrichment: Log enrichment adds context to logs to make them more meaningful and easier to analyse. It could involve adding information like geographical data, user details, threat intelligence, or even data from other sources that can provide a complete picture of the event. Enrichment makes logs more valuable, enabling analysts to make better decisions and more accurately respond to incidents. Like classification, log enrichment can be automated using machine learning, reducing the time and effort required for log analysis.


* Correlation: Correlation involves linking related records and identifying connections between log entries. This process helps detect patterns and trends, making understanding complex relationships between various log events easier. Correlation is critical in determining security threats or system performance issues that might remain unnoticed.


* Visualisation: Visualisation represents log data in graphical formats like charts, graphs, or heat maps. Visually presenting data makes recognising patterns, trends, and anomalies easier. Visualisation tools provide an intuitive way to interpret large volumes of log data, making complex information more accessible and understandable.


* Reporting: Reporting summarises log data into structured formats to provide insights, support decision-making, or meet compliance requirements. Effective reporting includes creating clear and concise log data summaries catering to stakeholders' needs, such as management, security teams, or auditors. Regular reports can be vital in monitoring system health, security posture, and operational efficiency.
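
A minimal sketch of the middle of this pipeline, using illustrative auth events that have already been parsed into a common format: sorting rebuilds the timeline, the awk rules classify each event, and the per-IP state correlates failures with a later success.

```shell
# Illustrative, already-normalised auth events (timestamp outcome ... ip).
cat <<'EOF' > /tmp/auth_events
2023-05-31T12:01:05Z Failed password from 34.253.159.159
2023-05-31T12:01:02Z Failed password from 34.253.159.159
2023-05-31T12:02:11Z Accepted password from 10.10.1.5
2023-05-31T12:01:09Z Accepted password from 34.253.159.159
EOF
sort /tmp/auth_events |                      # sorting: rebuild the timeline
awk '
    $2 == "Failed"   { fails[$NF]++ }        # classification: count failures per IP
    $2 == "Accepted" && fails[$NF] > 0 {     # correlation: success after failures
        printf "possible brute force: %s succeeded after %d failures\n", $NF, fails[$NF]
    }'
# → possible brute force: 34.253.159.159 succeeded after 2 failures
```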

## Log Analysis Techniques

Log analysis techniques are methods or practices used to interpret and derive insights from log data. These techniques can range from simple to complex and are vital for identifying patterns, anomalies, and critical insights. Here are some common techniques:

* **Pattern Recognition**: This involves identifying recurring sequences or trends in log data. It can detect regular system behaviour or identify unusual activities that may indicate a security threat.

* **Anomaly Detection**: Anomaly detection focuses on identifying data points that deviate from the expected pattern. It is crucial to spot potential issues or malicious activities early on.

* **Correlation Analysis**: Correlating different log entries helps understand the relationship between various events. It can reveal causation and dependencies between system components and is vital in root cause analysis.

* **Timeline Analysis**: Analysing logs over time helps understand trends, seasonalities, and periodic behaviours. It can be essential for performance monitoring and forecasting system loads.

* **Machine Learning and AI**: Leveraging machine learning models can automate and enhance various log analysis techniques, such as classification and enrichment. AI can provide predictive insights and help in automating responses to specific events.

* **Visualisation**: Representing log data through graphs and charts allows for intuitive understanding and quick insights. Visualisation can make complex data more accessible and assist in identifying key patterns and relationships.

* **Statistical Analysis**: Using statistical methods to analyse log data can provide quantitative insights and help make data-driven decisions. Regression analysis and hypothesis testing can infer relationships and validate assumptions.
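
Several of these techniques can start as simple one-liners. For example, a frequency count over failed SSH logins (the sample sshd lines are illustrative) is basic pattern recognition, and the outlier at the top of the ranking is an anomaly worth investigating:

```shell
# Pattern recognition with standard tools: rank source IPs by failed-login count.
cat <<'EOF' > /tmp/sshd_sample.log
May 31 13:55:01 websrv-02 sshd[101]: Failed password for root from 34.253.159.159
May 31 13:55:03 websrv-02 sshd[102]: Failed password for root from 34.253.159.159
May 31 13:55:05 websrv-02 sshd[103]: Failed password for admin from 34.253.159.159
May 31 14:02:10 websrv-02 sshd[201]: Failed password for bob from 10.10.1.5
May 31 14:05:00 websrv-02 sshd[202]: Accepted password for bob from 10.10.1.5
EOF
grep 'Failed password' /tmp/sshd_sample.log |
    awk '{print $NF}' |                  # extract the source IP (last field)
    sort | uniq -c | sort -rn            # count per IP, highest first
# top of the output: 3 failures from 34.253.159.159
```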