---
###### tags: `V3 Docs`
---
# Docs Download from Sync with Keating - July 29, 2022
[Vimeo](https://vimeo.com/734828523/bc5fab1e85)
## Philosophy of Infrastructure
To keep things from becoming centralized, make sure people are able to run the same infrastructure.
Allow others to host their own version of our infrastructure, features, and apps.
**Main Goal**: If someone wanted to fork DAOhaus and change something, they could still recreate the entire DAOhaus experience, including the centralized infrastructure.
If DAOhaus went away, someone else could run the infrastructure.
## Helm Charts
We do this with Helm charts. Helm charts are a packaging tool for Kubernetes, and Kubernetes is an orchestration tool for containers. Together they allow the entire infrastructure setup to be defined in configuration files.
Our Helm charts contain our infrastructure definitions and allow specific configurations to replicate our infrastructure settings.
To see the most recent infrastructure configuration, view [`Chart.yaml`](https://github.com/HausDAO/daohaus-monorepo/blob/develop/libs/infra-chart/daohaus/Chart.yaml) in the `daohaus-monorepo`.
## Ceramic Node
Ceramic has a document-style node that requires IPFS.
The Ceramic charts live in the [`daohaus-monorepo`](https://github.com/HausDAO/daohaus-monorepo/tree/develop/libs/infra-chart/ceramic).
To learn more about Ceramic nodes, check out [Running Ceramic in production](https://developers.ceramic.network/run/nodes/nodes/).
We use Ceramic to store the data we compute and process with Jobs.
Ceramic allows for decentralized data storage.
### Different levels of forking DAOhaus
1. Recreate DAOhaus entirely (set up the infrastructure with Helm charts)
2. Copy the DAOhaus front-end and use calculated data from our infrastructure (use public data from Ceramic)
### How to point to our Ceramic definitions
- Coming once consumer jobs are done
This will allow people creating their own apps to use that information.
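For reference, reading public data from a Ceramic node with the official HTTP client looks roughly like this. This is a generic sketch, not the DAOhaus setup: the node URL and stream ID below are placeholders until our definitions are published.

```typescript
import { CeramicClient } from '@ceramicnetwork/http-client';
import { TileDocument } from '@ceramicnetwork/stream-tile';

async function readPublicData(): Promise<void> {
  // Placeholder URL: point at whichever Ceramic node hosts the data.
  const ceramic = new CeramicClient('https://ceramic-node.example.com');

  // Placeholder stream ID: the DAOhaus definitions will provide real ones.
  const streamId = 'kjzl6cwe1jw14...';

  // Reads are public, so no DID authentication is needed to load a stream.
  const doc = await TileDocument.load(ceramic, streamId);
  console.log(doc.content);
}

readPublicData().catch(console.error);
```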
## Jobs
Jobs are how we take on-chain or subgraph data and apply some processing or transformation to turn it into a form that is easier for us to consume.
**Example**: For Hub we care about vault balances for different DAOs. Calculating vault balances is complicated enough that doing it in the front-end would make the app slow, so we do it in a back-end process on the server instead.
The Jobs create a data pipeline: a producer is a task that takes data from a public dataset and pushes it onto a queue, and a consumer is another process that takes items off the queue and does work on them.
**Example**: Calculate vault totals and push them to Ceramic
**Example**: Aggregate DAO data across different networks on our subgraph into a database that can be queried more easily
In a data processing pipeline, the producer sits on one end and the consumer on the other. They can be chained together to accomplish complex data processing flows.
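As a rough sketch of that producer/consumer shape, using the vault-totals example: BullMQ and Redis are assumptions here for illustration, and `fetchTokenBalances` / `writeVaultTotal` are hypothetical helpers, not the monorepo's actual implementation.

```typescript
import { Queue, Worker } from 'bullmq';

// Assumed Redis connection; BullMQ queues are backed by Redis.
const connection = { host: 'localhost', port: 6379 };

// Hypothetical helper: in reality this would query an RPC node or subgraph.
async function fetchTokenBalances(dao: string): Promise<{ usdValue: number }[]> {
  return [{ usdValue: 0 }];
}

// Hypothetical helper: in reality this would write a document to Ceramic.
async function writeVaultTotal(dao: string, total: number): Promise<void> {
  console.log(`DAO ${dao} vault total: ${total}`);
}

// Producer: reads DAO addresses from a public dataset and queues one job each.
export async function produceVaultJobs(daoAddresses: string[]): Promise<void> {
  const queue = new Queue('vault-balances', { connection });
  for (const dao of daoAddresses) {
    await queue.add('calculate', { dao });
  }
  await queue.close();
}

// Consumer: pulls jobs off the queue, does the expensive math, stores the result.
export const vaultWorker = new Worker<{ dao: string }>(
  'vault-balances',
  async (job) => {
    const balances = await fetchTokenBalances(job.data.dao);
    const total = balances.reduce((sum, b) => sum + b.usdValue, 0);
    await writeVaultTotal(job.data.dao, total);
  },
  { connection },
);
```

Chaining pipelines is then just a matter of having one consumer push its results onto another queue.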
This only matters if teams are running their own infrastructure.
Consumers calculate the aggregate data.
From a developer's perspective, understanding the flow is key. At the code level it is essentially the same as running a server without endpoints. The code should be readable and relatively self-contained.
Link to subgraphs from the Jobs section.
## Nice to Have
Video tutorial of creating the infrastructure from scratch. To set up the infrastructure, you need to be able to run the `deployStaging` Helm install command in [`project.json`](https://github.com/HausDAO/daohaus-monorepo/blob/develop/libs/infra-chart/daohaus/project.json)
```bash
helm upgrade -f ./daohaus/values.yaml test ./daohaus
```
## Keating's Interest
Decentralized infrastructure is one of the toughest problems in the application space. Once you introduce centralized points, you lose transparency and interoperability, and centralizing increases security requirements. A decentralized solution as the basis for a trusted network of data would be huge. Blockchain storage is expensive, which presents the opportunity for blockchains to provide the settlement layer for valid data stored elsewhere.
[cats](https://github.com/BlockScience/cats) enforces trust in data transformations.
IPFS is releasing its own virtual machine.
Ceramic is moving from document-based databases to subgraph-based databases.