# DevSecOps Lab 7
Ilya Kolomin i.kolomin@innopolis.university
Kirill Ivanov k.ivanov@innopolis.university
Anatoliy Baskakov a.baskakov@innopolis.university
## Task 1
1. Define what Defect Management is and give a brief explanation of its processes.
* *Defect Management* is the process of converting discovered vulnerabilities into a human-readable report in a uniform format. A DevSecOps pipeline can find many vulnerabilities and store them in many different formats; the goal of defect management tools is to gather these results, combine them, and generate a single report containing all the relevant information.
2. List and explain the different stages of the Vulnerability management lifecycle.
* **Discover**: We need to identify all components of the system and perform risk assessment.
* **Analyze**: All assets are categorized into groups and evaluated. A risk profile is determined based on vulnerability threats and how critical the assets are.
* **Report**: Results of the analysis are documented along with a security plan.
* **Manage/Remediate**: Fix vulnerabilities and mitigate threats according to their priorities.
* **Audit**: Check that the threats have actually been dealt with and eliminated.
* There exist alternative pipelines, e.g. ones where the reporting step is performed last, documenting all the discovered *and* fixed vulnerabilities. Moreover, the **analysis** step can be further split into separate steps, for example "categorization" and "resolution".
3. List and explain the key aspects of a complete software defect management process.
* **Describe**: It is not enough to simply find vulnerabilities; they have to be thoroughly analyzed, both their causes and their consequences.
* **Audit**: Defect management is not limited to simple fixes: there should be tests confirming that the problem was actually fixed, as well as measures put in place to avoid the vulnerability in the future.
4. List and explain what metrics are used to measure the severity of vulnerabilities.
* There are several vulnerability severity scores, with different calculation formulas that depend on a number of factors.
* These factors include:
* How *difficult* it is to exploit the vulnerability. Does it require a complicated sequence of steps? Is social engineering required? Is physical access required?
* The *access level* gained by exploiting the vulnerability. Is the access limited, or does it amount to compromised root access?
* How many *components* of the system are affected.
* Whether it affects *availability*, *integrity* or *confidentiality*. Usually, integrity tends to be more critical than availability (which does not mean that availability is unimportant; this is a matter of priorities).
* An example of such a metric is CVSS, the Common Vulnerability Scoring System; an illustrative vector is shown below.
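To make the factors above concrete, here is an illustrative (hypothetical, not project-specific) pair of CVSS v3.1 vectors for a flaw that is exploitable over the network, needs no user interaction, and fully compromises confidentiality, integrity and availability:

```
CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H   ->  base score 9.8 (Critical)
CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H   ->  base score 8.8 (High)
```

Requiring even low privileges (`PR:L` instead of `PR:N`) already lowers the score, which matches the intuition above that harder-to-exploit issues are less severe.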
## Task 2 - Defect Management (Orchestration and correlation)
1. Deploy a vulnerability management tool of your choice (recommended: DefectDojo, Faraday, ArcherySec)
* We have deployed defectdojo on a separate VM that resides in the same private network as the runner instance; the compose commands used to bring it up are sketched after the container listing below.
```bash
user@kuber:~/lab7/django-DefectDojo$ sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
af805f38121e defectdojo/defectdojo-nginx:latest "/entrypoint-nginx.sh" 3 days ago Up 11 seconds 0.0.0.0:8080->8080/tcp, :::8080->8080/tcp, 80/tcp, 0.0.0.0:8443->8443/tcp, :::8443->8443/tcp django-defectdojo_nginx_1
81981441adfb defectdojo/defectdojo-django:latest "/wait-for-it.sh mys…" 3 days ago Up 12 seconds django-defectdojo_celeryworker_1
a9da25c7f846 defectdojo/defectdojo-django:latest "/wait-for-it.sh mys…" 3 days ago Up 12 seconds django-defectdojo_celerybeat_1
4f06c81e840c defectdojo/defectdojo-django:latest "/wait-for-it.sh mys…" 3 days ago Up 12 seconds django-defectdojo_uwsgi_1
ffcd1998f688 defectdojo/defectdojo-django:latest "/wait-for-it.sh mys…" 3 days ago Up 12 seconds django-defectdojo_initializer_1
3c574b50e83e mysql:5.7.40 "docker-entrypoint.s…" 3 days ago Up 14 seconds 3306/tcp, 33060/tcp django-defectdojo_mysql_1
4dd7e5702799 rabbitmq:3.11.4-alpine "docker-entrypoint.s…" 3 days ago Up 14 seconds 4369/tcp, 5671-5672/tcp, 15691-15692/tcp, 25672/tcp django-defectdojo_rabbitmq_1
```
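For reference, the instance above was brought up with the Docker Compose setup shipped in the `django-DefectDojo` repository. A minimal sketch of the commands (assuming the repository's stock `docker-compose.yml`; the containers above correspond to the MySQL + RabbitMQ variant):

```bash
# Minimal sketch, assuming the stock Docker Compose setup of django-DefectDojo.
git clone https://github.com/DefectDojo/django-DefectDojo.git
cd django-DefectDojo
sudo docker-compose up -d
# The generated admin password can be read from the initializer logs on first start:
sudo docker-compose logs initializer | grep "Admin password:"
```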
2. Write scripts to upload the results of SAST/SCA/DAST scans (use the tools of your choice to perform the security scan)
* We have a GitLab pipeline in place that performs the scans for our application; a local equivalent of one of these jobs is sketched after the pipeline code:
```yaml
zap-api:
stage: test
allow_failure: true
image:
name: ictu/zap2docker-weekly
entrypoint: [""]
artifacts:
name: zap-api-scan-report
paths:
- "${ZAP_REPORT_FOLDER}"
when: always
reports:
dotenv: zapapi.env
script:
- mkdir ${ZAP_REPORT_FOLDER}
- export ZAP_SCAN_TIMESTAMP=$(date -Iseconds)
- echo "ZAP_SCAN_TIMESTAMP=${ZAP_SCAN_TIMESTAMP}" >> zapapi.env
- >
/zap/zap-api-scan.py -j -a -m 20
-t http://${WEBGOAT_ADDRESS}/WebGoat/v3/api-docs
-f openapi
--hook=/zap/auth_hook.py
-r ${ZAP_REPORT_FOLDER_ARG}/webgoat-zap-report-${ZAP_SCAN_TIMESTAMP}-api.html
-J ${ZAP_REPORT_FOLDER_ARG}/webgoat-zap-report-${ZAP_SCAN_TIMESTAMP}-api.json
-x ${ZAP_REPORT_FOLDER_ARG}/webgoat-zap-report-${ZAP_SCAN_TIMESTAMP}-api.xml
-z "auth.password="${ZAP_AUTH_PASSWORD}"
auth.username="${ZAP_AUTH_USERNAME}"
auth.password_field=\"exampleInputPassword1\"
auth.username_field=\"exampleInputEmail1\"
auth.submit_field=\"/html/body/section/section/section/form/button\"
auth.loginurl=http://${WEBGOAT_ADDRESS}/WebGoat/login"
zap-full:
stage: test
allow_failure: true
image:
name: ictu/zap2docker-weekly
entrypoint: [""]
artifacts:
name: zap-full-scan-report
paths:
- "${ZAP_REPORT_FOLDER}"
when: always
script:
- mkdir ${ZAP_REPORT_FOLDER}
- >
/zap/zap-full-scan.py -j -a -m 20
-t http://${WEBGOAT_ADDRESS}/WebGoat
--hook=/zap/auth_hook.py
-r ${ZAP_REPORT_FOLDER_ARG}/webgoat-zap-report-$(date -Iseconds)-full.html
-J ${ZAP_REPORT_FOLDER_ARG}/webgoat-zap-report-$(date -Iseconds)-full.json
-z "auth.password="${ZAP_AUTH_PASSWORD}"
auth.username="${ZAP_AUTH_USERNAME}"
auth.password_field=\"exampleInputPassword1\"
auth.username_field=\"exampleInputEmail1\"
auth.submit_field=\"/html/body/section/section/section/form/button\"
auth.loginurl=http://${WEBGOAT_ADDRESS}/WebGoat/login"
semgrep:
# A Docker image with Semgrep installed.
image: returntocorp/semgrep
stage: test
rules:
# Scan changed files in MRs (diff-aware scanning):
- if: $CI_MERGE_REQUEST_IID
# Scan all files on the default branch and report any findings:
- if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
variables:
# Add the rules that Semgrep uses by setting the SEMGREP_RULES environment variable.
SEMGREP_RULES: p/owasp-top-ten # See more rules at semgrep.dev/explore.
# Uncomment SEMGREP_TIMEOUT to set this job's timeout (in seconds):
# Default timeout is 1800 seconds (30 minutes).
# Set to 0 to disable the timeout.
# SEMGREP_TIMEOUT: 300
# Upload findings to GitLab SAST Dashboard
# SEMGREP_GITLAB_JSON: "1"
script: semgrep ci --json > ${SEMGREP_REPORT_FILE_PATH} || true
artifacts:
name: semgrep-scan-report
paths:
- "${SEMGREP_REPORT_FILE_PATH}"
dependency-check:
# OWASP Dependency check SCA
image:
name: owasp/dependency-check:${DEPCHECK_VERSION}
entrypoint: [""]
stage: test
allow_failure: true
cache:
- key: DEPCHECK-${DEPCHECK_VERSION}
paths:
- /usr/share/dependency-check/data
script:
- mkdir ${DEPCHECK_REPORT_FOLDER}
- >
/usr/share/dependency-check/bin/dependency-check.sh
--scan "$CI_PROJECT_DIR"
--format "XML"
--project "dependency-check scan: $(pwd)"
--out ${DEPCHECK_REPORT_FILE_PATH}
artifacts:
name: dependency-check-scan-report
paths:
- "${DEPCHECK_REPORT_FILE_PATH}"
```
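The same jobs can be reproduced outside of CI when debugging. For instance, a local run of the SCA step might look like this (a sketch: the image tag, mount point and output folder are assumptions, not taken from the pipeline above):

```bash
# Hedged local equivalent of the dependency-check job: mount the project checkout
# into the container and write the XML report back into it.
docker run --rm \
  -v "$(pwd)":/src \
  owasp/dependency-check:latest \
  --scan /src \
  --format XML \
  --project "local dependency-check scan" \
  --out /src/depcheck-report
```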
3. Integrate the scripts into your CI pipeline to forward the results of your scan in your vulnerability management solution.
* We integrated defectdojo into the CI pipeline; a quick API sanity check of the uploaded results is sketched after the code below. The resulting pipeline looks as follows:

Code:
```yaml
# Creates engagements to collect multiple reports in
defectdojo-initialize:
stage: .pre
image: alpine
allow_failure: true
variables:
DEFECTDOJO_ENGAGEMENT_LONGETIVITY_DAYS: 7
DEFECTDOJO_ENGAGEMENT_STATUS: "Not Started"
DEFECTDOJO_ENGAGEMENT_BUILD_SERVER: "null"
DEFECTDOJO_ENGAGEMENT_SOURCE_CODE_MANAGEMENT_SERVER: "null"
DEFECTDOJO_ENGAGEMENT_ORCHESTRATION_ENGINE: "null"
DEFECTDOJO_ENGAGEMENT_DEDUPLICATION_ON_ENGAGEMENT: "false"
DEFECTDOJO_ENGAGEMENT_THREAT_MODEL: "true"
DEFECTDOJO_ENGAGEMENT_API_TEST: "true"
DEFECTDOJO_ENGAGEMENT_PEN_TEST: "true"
DEFECTDOJO_ENGAGEMENT_CHECK_LIST: "true"
before_script:
- apk add curl jq coreutils
- START_DAY=`date +%Y-%m-%d`
- END_DAY=$(date -d "+${DEFECTDOJO_ENGAGEMENT_PERIOD} days" +%Y-%m-%d)
script:
- |
export DATA_RAW="{
\"tags\": [\"GITLAB-CI\"],
\"name\": \"${CI_PIPELINE_ID}\",
\"description\": \"${CI_COMMIT_DESCRIPTION}\",
\"version\": "null",
\"first_contacted\": \"${START_DAY}\",
\"target_start\": \"${START_DAY}\",
\"target_end\": \"${END_DAY}\",
\"reason\": \"CI Pipeline\",
\"tracker\": "null",
\"test_strategy\": "null",
\"threat_model\": \"${DEFECTDOJO_ENGAGEMENT_THREAT_MODEL}\",
\"api_test\": \"${DEFECTDOJO_ENGAGEMENT_API_TEST}\",
\"pen_test\": \"${DEFECTDOJO_ENGAGEMENT_PEN_TEST}\",
\"check_list\": \"${DEFECTDOJO_ENGAGEMENT_CHECK_LIST}\",
\"status\": \"${DEFECTDOJO_ENGAGEMENT_STATUS}\",
\"engagement_type\": \"CI/CD\",
\"build_id\": \"${CI_PIPELINE_ID}\",
\"commit_hash\": \"${CI_COMMIT_SHORT_SHA}\",
\"branch_tag\": \"${CI_COMMIT_REF_NAME}\",
\"deduplication_on_engagement\": \"${DEFECTDOJO_ENGAGEMENT_DEDUPLICATION_ON_ENGAGEMENT}\",
\"product\": \"${DEFECTDOJO_PRODUCT_ID}\",
\"source_code_management_uri\": \"${CI_PROJECT_URL}\",
\"lead\": "null",
\"requester\": "null",
\"preset\": "null",
\"report_type\": "null",
\"build_server\": ${DEFECTDOJO_ENGAGEMENT_BUILD_SERVER},
\"source_code_management_server\": ${DEFECTDOJO_ENGAGEMENT_SOURCE_CODE_MANAGEMENT_SERVER},
\"orchestration_engine\": ${DEFECTDOJO_ENGAGEMENT_ORCHESTRATION_ENGINE}
}"
- echo ${DATA_RAW}
- |
export ENGAGEMENTID=`curl --location --request POST "${DEFECTDOJO_URL}/api/v2/engagements/" \
--header "Authorization: Token ${DEFECTDOJO_API_TOKEN}" \
--header 'Content-Type: application/json' \
--data-raw "${DATA_RAW}" | jq -r '.id'`
- echo "DEFECTDOJO_ENGAGEMENTID=${ENGAGEMENTID}"
- echo "DEFECTDOJO_ENGAGEMENTID=${ENGAGEMENTID}" >> defectdojo.env
artifacts:
reports:
dotenv: defectdojo.env
defectdojo-publish:
stage: .post
needs: ["defectdojo-initialize", "zap-api", "semgrep", "dependency-check"]
image: alpine
allow_failure: true
variables:
DEFECTDOJO_SCAN_MINIMUM_SEVERITY: "Info"
DEFECTDOJO_SCAN_ACTIVE: "true"
DEFECTDOJO_SCAN_VERIFIED: "true"
DEFECTDOJO_SCAN_CLOSE_OLD_FINDINGS: "true"
DEFECTDOJO_SCAN_PUSH_TO_JIRA: "false"
DEFECTDOJO_SCAN_ENVIRONMENT: "Default"
before_script:
- apk add curl coreutils
- TODAY=`date +%Y-%m-%d`
script:
- echo "${DEFECTDOJO_ENGAGEMENTID}"
- |
curl --location --request POST "${DEFECTDOJO_URL}/api/v2/import-scan/" \
--header "Authorization: Token ${DEFECTDOJO_API_TOKEN}" \
--form "scan_date=\"${TODAY}\"" \
--form "minimum_severity=\"${DEFECTDOJO_SCAN_MINIMUM_SEVERITY}\"" \
--form "active=\"${DEFECTDOJO_SCAN_ACTIVE}\"" \
--form "verified=\"${DEFECTDOJO_SCAN_VERIFIED}\"" \
--form "engagement=${DEFECTDOJO_ENGAGEMENTID}" \
--form "file=@${FILE_UPLOAD_PATH}" \
--form "close_old_findings=\"${DEFECTDOJO_SCAN_CLOSE_OLD_FINDINGS}\"" \
--form "push_to_jira=\"${DEFECTDOJO_SCAN_PUSH_TO_JIRA}\"" \
--form "scan_type=\"${DEFECTDOJO_SCAN_SCAN_TYPE}\"" \
--form "environment=\"${DEFECTDOJO_SCAN_ENVIRONMENT}\""
parallel:
matrix:
- DEFECTDOJO_SCAN_SCAN_TYPE: "ZAP Scan"
FILE_UPLOAD_PATH: "${ZAP_REPORT_FOLDER}/webgoat-zap-report-${ZAP_SCAN_TIMESTAMP}-api.xml"
- DEFECTDOJO_SCAN_SCAN_TYPE: "Semgrep JSON Report"
FILE_UPLOAD_PATH: "${SEMGREP_REPORT_FILE_PATH}"
- DEFECTDOJO_SCAN_SCAN_TYPE: "Dependency Check Scan"
FILE_UPLOAD_PATH: "${DEPCHECK_REPORT_FILE_PATH}"
```
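After the publish job has run, the upload can be sanity-checked through the same REST API. A minimal sketch (the `test__engagement` filter name is an assumption about the findings endpoint; the variables are the same ones used by the jobs above):

```bash
# Check that the engagement exists and count the findings imported into it.
curl -s --header "Authorization: Token ${DEFECTDOJO_API_TOKEN}" \
  "${DEFECTDOJO_URL}/api/v2/engagements/${DEFECTDOJO_ENGAGEMENTID}/" | jq '.name, .status'
curl -s --header "Authorization: Token ${DEFECTDOJO_API_TOKEN}" \
  "${DEFECTDOJO_URL}/api/v2/findings/?test__engagement=${DEFECTDOJO_ENGAGEMENTID}" | jq '.count'
```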
4. Provide an example flow of your tool and your results in which you detail how to organize engagements, tests and projects.
* First, a developer finishes coding some feature and creates a Merge Request.
* The Merge Request triggers a pipeline run: all SAST/DAST tools are executed, dependencies are checked, and reports are generated.

* These reports are then uploaded to the running instance of defectdojo.


5. How does your tool:
* Organize deduplication of findings
* Deduplication can be enabled in defectdojo. When it is on, findings are compared against each other and duplicates are automatically marked, so only the unique findings need to be triaged (a sketch of toggling this per engagement through the API is shown after this list).
* Moreover, defectdojo has two options: it can either deduplicate vulnerabilities within the same build or track unique vulnerabilities across builds.
* Merge similar findings
* Defectdojo automatically marks identical vulnerabilities found in the same files as related, for example when one of them is found in the source code and another in a .jar package, and tags them as "related".

* Also, the defectdojo GUI allows the user to manually select findings and merge them.

* Track constantly changing code
* Defect management is a part of our CI pipeline: whenever code is updated, all scans are performed and their results are uploaded to the running defectdojo instance.
* Distinguish vulnerabilities in adjacent lines.
* The idea is that if a chunk of code contains multiple vulnerabilities, that code has to be examined especially thoroughly. If two adjacent lines both contain some kind of vulnerability, this should raise suspicion.
* The SAST tool we used locates every finding by file and line number, so two vulnerabilities of the same kind in the same file can still be distinguished by their line numbers (see the sketch after this list). For example:


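Referring back to the deduplication point above: per-engagement deduplication can also be toggled after the fact through the REST API. A hedged sketch, reusing the same URL/token variables as the CI jobs and the engagement id produced by the initialize job:

```bash
# Switch an existing engagement to per-engagement deduplication.
curl --request PATCH "${DEFECTDOJO_URL}/api/v2/engagements/${DEFECTDOJO_ENGAGEMENTID}/" \
  --header "Authorization: Token ${DEFECTDOJO_API_TOKEN}" \
  --header 'Content-Type: application/json' \
  --data-raw '{"deduplication_on_engagement": true}'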
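As for distinguishing findings in adjacent lines: the Semgrep JSON report records the file and start line of every finding, so identical rules firing on neighbouring lines remain separate findings. A quick way to inspect this locally (a sketch; assumes the report produced by the `semgrep` job above):

```bash
# List each Semgrep finding as "<rule> <file>:<line>"; two hits of the same rule on
# adjacent lines show up as two distinct entries.
jq -r '.results[] | "\(.check_id) \(.path):\(.start.line)"' "${SEMGREP_REPORT_FILE_PATH}"
```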
## **Bonus**
1. Configure an integration with the tool of your choice (Jira, Slack, etc…) for defect submission and tracking.