# Consumption Framework Settings
## Requirements
- Elastic production cluster, version 8.0+
- Python 3
- Docker
- Code: https://github.com/krol3/consumption
## Install Python on CentOS
Install the build dependencies:
```bash
sudo yum update -y
sudo yum install -y gcc openssl-devel bzip2-devel libffi-devel
```
- Download, build, and install Python
```bash
cd /usr/src
sudo wget https://www.python.org/ftp/python/3.10.0/Python-3.10.0.tgz
sudo tar xzf Python-3.10.0.tgz
# Build and install; altinstall avoids overwriting the system python binary
cd Python-3.10.0
sudo ./configure --enable-optimizations
sudo make altinstall
```
## Setting the config.yml with the API keys
- Organization ID / user IDs: https://cloud.elastic.co/account/members
- Billing API key: https://cloud.elastic.co/account/keys
- Monitoring API key: used to read from the monitoring cluster; it requires read privileges on the `.monitoring-es-8*` index names.

**API key - consumption_destination**

The following call provisions an API key on the destination cluster with the privileges needed to index consumption data and to manage the ingest pipelines, ILM policies, and index templates the framework relies on:
```
POST /_security/api_key
{
  "name": "consumption_framework_destination",
  "role_descriptors": {
    "consumption_framework": {
      "indices": [
        {
          "names": ["consumption*"],
          "privileges": ["read", "view_index_metadata", "index", "auto_configure"]
        }
      ],
      "cluster": ["manage_ingest_pipelines", "manage_ilm", "manage_index_templates"]
    }
  }
}
```
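The call returns the key in several forms; save the response somewhere safe. The `encoded` value is the base64 encoding of `id:api_key`, which is the form most clients accept. A representative response (values illustrative, taken from the shape Elasticsearch returns):

```json
{
  "id": "VuaCfGcBCdbkQm-e5aOx",
  "name": "consumption_framework_destination",
  "api_key": "ui2lp2axTNmsyakw9tvNnw",
  "encoded": "VnVhQ2ZHY0JDZGJrUW0tZTVhT3g6dWkybHAyYXhUTm1zeWFrdzl0dk5udw=="
}
```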
**API key - monitoring_source**

On the monitoring_source cluster, the script reads the `.monitoring-es-8*` indices, so the user (or API key) used to connect needs the corresponding read permission. The API call below provisions the required API key on the monitoring_source cluster:
```
POST /_security/api_key
{
  "name": "consumption_framework_source",
  "role_descriptors": {
    "consumption_framework": {
      "indices": [
        {
          "names": [".monitoring-es-8*"],
          "privileges": ["read"]
        }
      ]
    }
  }
}
```
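Putting the pieces together, config.yml ends up holding the organization ID, the billing API key, and the two cluster API keys. The exact schema is defined in the repository; the sketch below is purely illustrative and every key name in it is an assumption:

```yaml
# Illustrative only - consult the repository's example config for the real schema
organization_id: "1234567"             # from https://cloud.elastic.co/account/members
billing_api_key: "<billing key>"       # from https://cloud.elastic.co/account/keys
monitoring_source:
  hosts: "https://monitoring.example.com:9243"
  api_key: "<encoded key from consumption_framework_source>"
consumption_destination:
  hosts: "https://destination.example.com:9243"
  api_key: "<encoded key from consumption_framework_destination>"
```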
## Import Kibana Dashboard
The kibana_exports folder of the ZIP archive contains .ndjson files of Kibana saved objects intended for import.
In Kibana, go to Stack Management -> Saved Objects and click Import (upper right, as of 8.12.1). Select Import again, choose the .ndjson file from your local filesystem (e.g. 8.11.2.ndjson), and click Done to close the sidebar.
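If you prefer the API over the UI, Kibana's saved objects import endpoint can load the same file. A sketch assuming Kibana at localhost:5601 and basic auth; the host, credentials, and filename are placeholders to adjust:

```
# Import the saved objects through the Kibana API
# (the endpoint, overwrite parameter, and required kbn-xsrf header are standard Kibana)
curl -X POST "http://localhost:5601/api/saved_objects/_import?overwrite=true" \
  -H "kbn-xsrf: true" \
  -u elastic:changeme \
  --form file=@kibana_exports/8.11.2.ndjson
```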
## Using Python
Create and activate a virtual environment, then install the dependencies:
```bash
python3 -m venv venv
source venv/bin/activate
pip3 install -r requirements.txt
```
> To delete the environment, run `deactivate` and then `rm -r ./venv`
**Next, import the initial Elasticsearch Service / Elastic Cloud billing data:** `get-billing-data` retrieves the billing data from ESS and indexes it into the target cluster. This populates the organization overview dashboard.
```bash
python3 main.py get-billing-data --config-file config.yml --lookbehind=24 --force --debug
```
Finally, pull initial cluster usage data from the Monitoring Cluster:
```bash
python3 main.py consume-monitoring --config-file config.yml --lookbehind=24 --force --debug
```
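Both commands are one-shot imports; to keep the data current you would typically rerun them on a schedule. A hypothetical crontab sketch, in which the install path, cadence, and the dropped `--force`/`--debug` flags are all assumptions:

```
# Illustrative crontab - adjust paths and cadence to your environment
0 * * * *  cd /opt/consumption && ./venv/bin/python3 main.py get-billing-data --config-file config.yml
15 * * * * cd /opt/consumption && ./venv/bin/python3 main.py consume-monitoring --config-file config.yml
```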
## Using Docker
```bash
docker build -t elastic_consumption_framework:local .
docker run --rm elastic_consumption_framework:local --help
docker run --rm -v $(pwd)/config.yml:/app/config.yml elastic_consumption_framework:local get-billing-data --config-file /app/config.yml --lookbehind 24 --force --debug
docker run --rm -v $(pwd)/config.yml:/app/config.yml elastic_consumption_framework:local init --config-file /app/config.yml --lookbehind 24 --force --debug
```
Review the config.yml inside the container (overriding the image entrypoint so that `cat` runs instead of the framework):
```bash
docker run --rm --entrypoint cat -v $(pwd)/config.yml:/app/config.yml elastic_consumption_framework:local /app/config.yml
```