# Band Feeder Architecture

## System Overview

![](https://i.imgur.com/vnJNJJf.png)

###### (out-of-date diagram, will be updated later)

Because BandChain is a decentralized oracle, many decentralized ecosystems (e.g., the Ethereum chain or Binance Smart Chain) want to connect to it. Band Feeder is the service that serves this purpose. The whole service works like a producer-consumer queue, but with more complexity because it combines many microservices.

The service has a Task Creator for ordering the data that partners want, either on a schedule or when it finds a data deviation over a threshold. The Requester is the service that takes orders from the Task Creator and makes data requests on BandChain. The Relayer service then sends the data to the partner ecosystem, depending on its task.

The description above is a high-level overview of the system. The following sections introduce each microservice in more detail.

## Price Cacher

Price Cacher is the service for caching price data from third-party data providers' APIs. Because each third party imposes API limits, the cacher service works around that limitation. Many microservices use the Price Cacher service to query price data, e.g., the Real World Prices and Off-Chain Prices services.

There are two components in this microservice.

- **Price Crawler**: The component that requests price data from a third-party API on a schedule and saves it to a database. It runs as a background service.
- **Pricer Gateway**: The API service for querying the database. It currently runs on Cloud Run.

### System Architecture

#### Price Crawler

Every third-party API endpoint we use has its own price crawler. We have decided to cache price data from 16 data sources, so we have 16 price crawler services. Each one queries price data on the schedule defined in its config file. We currently deploy them as Docker containers inside a VM instance.
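The crawler described above can be sketched as a small scheduled loop. This is a minimal illustration only: the `fetch_price` callable, the symbol list, and the `store` object stand in for the real third-party API client, the crawler's config file, and the database layer.

```python
import time
from datetime import datetime, timezone

def crawl_once(fetch_price, symbols, store):
    # Fetch each symbol's price from one data source and persist it.
    # `fetch_price`, `symbols`, and `store` are hypothetical stand-ins
    # for the real API client, config, and database.
    for symbol in symbols:
        price = fetch_price(symbol)
        store.append({
            "symbol": symbol,
            "price": price,
            "fetched_at": datetime.now(timezone.utc).isoformat(),
        })

def run_crawler(fetch_price, symbols, store, interval_seconds, iterations):
    # The real background service loops forever; `iterations` keeps
    # this sketch finite.
    for _ in range(iterations):
        crawl_once(fetch_price, symbols, store)
        time.sleep(interval_seconds)
```

Keeping the fetcher pluggable is what lets one codebase serve all 16 data sources: each deployed container would differ only in the client and schedule it is configured with.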
#### Real World Price Crawler

In addition to the third-party API crawlers, we also have internal price crawlers that cache prices from our Real World Prices service. They work the same way as a typical crawler, except that they must be allowed to make internal connections.

#### Pricer Gateway

Pricer Gateway is the REST API for querying the cached price data from the database. We deploy it as a Cloud Run service for the sake of scalability.

### Related Services/Connections

#### Ingress

- **Off-Chain Prices Service**: Internal HTTP call
- **Real World Prices Service**: Internal HTTP call
- **BandChain Oracle Validator**: External HTTP call

#### Egress

- **Third-Party APIs**: External HTTP call
- **Real World Prices Service**: Internal HTTP call

## Off-Chain Prices

Because price data changes constantly, we need a helper service to detect those changes. Since requesting price data on-chain costs a fee, we save costs by simulating BandChain's price data calculation in the Off-Chain Prices service.

### System Architecture

Off-Chain Prices is an API service that returns simulated BandChain price data. It currently runs on Cloud Run.

### Related Services/Connections

#### Ingress

- **Task Creator Service**: Internal HTTP call

#### Egress

- **Price Cacher Service**: Internal HTTP call

## Task Creator

Each partner ecosystem has its own price data feed conditions. The Task Creator service serves this goal by creating price data request tasks and sending them to a related service, the Requester service.

There are two ways to trigger the task creation process.

- **Time schedule**: New tasks are emitted periodically on the schedule given in the configuration.
- **Data deviation detection**: New tasks are emitted when the Task Creator service detects a data deviation over its configured threshold.

### System Architecture

All Task Creator microservice containers are currently deployed in a VM instance.
They share the exact same code but run with different configurations depending on the environment, partner, and price data type. The service uses Pub/Sub to send task messages to the Requester service and to recheck the results of the price data requests.

### Related Services/Connections

#### Ingress

- **Requester Service's Pub/Sub Subscription**: Pub/Sub; subscribes to the Requester service's topic to recheck the results.
- **Off-Chain Prices Service**: Internal HTTP call; the Task Creator service calls the Off-Chain Prices service every minute to detect price data deviation.

#### Egress

- **Task Creator Service's Pub/Sub Topics**: Pub/Sub; all Task Creator services publish messages to the same topic depending on their environment.

## Requester

The Requester service requests decentralization-guaranteed data from BandChain. It makes on-chain requests following the task messages from the Task Creator service. Afterwards, the Requester service sends the task messages to the Relayer service for the subsequent relaying process.

### System Architecture

We have deployed the Requester service as a Docker container in a VM instance, one per environment. The Requester service communicates via Pub/Sub, both for receiving task messages from the Task Creator service and for sending request results to the Relayer service.

### Related Services/Connections

#### Ingress

- **Task Creator Service's Pub/Sub Subscription**: Pub/Sub; for receiving request tasks from the Task Creator service.

#### Egress

- **Requester Service's Pub/Sub Topics**: Pub/Sub; all request results from the service are published here.
- **BandChain**: External RPC call; we can treat the network as an external service.

## Relayer

The Relayer service acts as the last gate in the communication between BandChain and our partner ecosystems.
When the service receives a price data message from the Requester service, it builds a message specific to the target ecosystem from that price data and sends it to the ecosystem.

### System Architecture

Every ecosystem has its own Relayer service. For now, we have 40 partner ecosystems across 2 environments, so we have the same number of Relayer service containers. They share the exact same code but run with different entry points and configurations. We have deployed them as Docker containers on the same VM instance, grouped by environment. The service communicates with the Requester service via Pub/Sub.

### Related Services/Connections

#### Ingress

- **Requester Service's Pub/Sub Subscription**: Pub/Sub; for receiving price data from the Requester service. Each Relayer service filters its messages from the topic using an attribute tagged when the task was created, depending on its target ecosystem.

#### Egress

- **Signer Service**: Internal HTTP call; for building a verified message before sending it to the target ecosystem.
- **Target Ecosystems**: External RPC/HTTP call, depending on the ecosystem.

## Signer

In the Web3 ecosystem, transactions must carry a signature for verification. The Signer service builds the signed message for the Relayer service, depending on the ecosystem. Moreover, the Signer service has logic to prevent feeding potentially wrong price data: if the service finds any suspicious price data, it refuses to sign the message.

### System Architecture

Each Relayer service has its own Signer service, so the number of Signer sub-services matches the number of Relayer services. For now, we have deployed all the Signer services on Cloud Run, accepting only internal connections for security reasons.
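The Signer's "check before sign" behavior can be sketched as below. The deviation threshold, the field names, and the HMAC scheme are assumptions for illustration; the production signing format depends on the target ecosystem and is not specified here.

```python
import hashlib
import hmac
import json

def check_and_sign(message, reference_price, secret_key, max_deviation=0.05):
    # Refuse to sign if the incoming price deviates too far from a
    # trusted reference (the Signer database in this document).
    # Otherwise return an HMAC-SHA256 signature over the message.
    # All names and the 5% threshold are illustrative assumptions.
    price = message["price"]
    deviation = abs(price - reference_price) / reference_price
    if deviation > max_deviation:
        raise ValueError(
            f"suspicious price: deviation {deviation:.2%} over threshold"
        )
    payload = json.dumps(message, sort_keys=True).encode()
    return hmac.new(secret_key, payload, hashlib.sha256).hexdigest()
```

The key point is that signing and sanity-checking live in one place: a Relayer cannot obtain a signature for a price the Signer considers suspicious.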
### Related Services/Connections

#### Ingress

- **Relayer Service**: Internal HTTP call

## Updater

Because the Signer service needs to examine price data before signing a message, it must have reference price data from another source. That source is a database that is continually updated by the Updater service.

### System Architecture

The Updater is deployed as a Cloud Function and is triggered periodically on a given schedule by Cloud Scheduler to update the Signer database. The price data used in the update process comes from the Real World Prices service.

### Related Services/Connections

#### Ingress

- **Cloud Scheduler**: Internal HTTP call

#### Egress

- **Real World Prices Service**: Internal HTTP call

## Monitoring

The Monitoring service is a must-have to ensure that our Band Feeder services are working well. It monitors the correctness of the price data that has been relayed to the partner ecosystems. It uses the Real World Prices service as an independent price data source for correctness checking.

### System Architecture

Every ecosystem has its own Monitoring service, so the number of Monitoring service containers is the same as the number of Relayer service containers. They share the exact same code but run with different entry points and configurations. We have deployed them as Docker containers on the same VM instance, grouped by environment.

### Related Services/Connections

#### Egress

- **Real World Prices Service**: Internal HTTP call
- **Target Ecosystems**: External RPC/HTTP call, depending on the ecosystem.

## Real World Prices

Some of our services need a price data source other than BandChain to compare and check price data. The Real World Prices service is built for this purpose.

### System Architecture

Real World Prices is an API service that returns simulated real-world price data. It currently runs on Cloud Run.
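A reference price built from several cached sources can be sketched as follows. Taking the median is one robust aggregation choice (an assumption here, not necessarily the production method): a single outlier source cannot move the result.

```python
from statistics import median

def real_world_price(symbol, source_prices):
    # Combine quotes for one symbol from several cached data sources
    # into a single reference price. `source_prices` is a hypothetical
    # list of {symbol: price} dicts, one per cached source.
    quotes = [prices[symbol] for prices in source_prices if symbol in prices]
    if not quotes:
        raise KeyError(f"no cached quotes for {symbol}")
    return median(quotes)
```

Because the quotes come from the Price Cacher rather than live API calls, this kind of aggregation stays within the third-party rate limits no matter how many services query it.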
### Related Services/Connections

#### Ingress

- **Updater Service**: Internal HTTP call
- **Monitoring Service**: Internal HTTP call

#### Egress

- **Price Cacher Service**: Internal HTTP call

## Deployment Environment Detail

For now, we have two deployment environments.

- **production**: The system requests price data from the mainnet BandChain and relays it to the partner mainnet ecosystems. The price cacher services are deployed only in this environment due to third-party API limitations.
- **testnet-production**: The system requests price data from the mainnet and testnet BandChain and relays it to the partner mainnet ecosystems. The system reuses some microservices of the **production** environment, such as the price cacher and the requester. The testnet relayer can subscribe to both the mainnet requester and the testnet requester simultaneously. This environment is treated as a staging environment.
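The environment split above could be expressed as configuration along these lines. This is purely a sketch: the dictionary shape, field names, and topic naming are assumptions, not the real deployment config.

```python
# Hypothetical configuration sketch of the two environments described
# above; all names and fields are illustrative.
ENVIRONMENTS = {
    "production": {
        "bandchain_networks": ["mainnet"],
        # Price cachers exist only here, due to third-party API limits.
        "runs_price_cacher": True,
    },
    "testnet-production": {
        "bandchain_networks": ["mainnet", "testnet"],
        # Reuses production's price cacher and requester.
        "runs_price_cacher": False,
    },
}

def requester_topics(env):
    # A testnet relayer can subscribe to both the mainnet and testnet
    # requesters, so it gets one topic per BandChain network.
    # The "requester-<network>" naming is an assumption.
    return [
        f"requester-{net}"
        for net in ENVIRONMENTS[env]["bandchain_networks"]
    ]
```

Deriving the subscription list from the network list keeps the staging behavior (subscribing to both requesters at once) a matter of configuration rather than separate code.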