# GaiaNet x RaidGuild: Boardroom API integration
## Executive Summary
GaiaNet is a decentralized computing infrastructure that enables everyone to create, deploy, scale, and monetize their own AI agents reflecting their styles, values, knowledge, and expertise. The GaiaNet network is also a decentralized marketplace for AI agent services.
The tool is intended as a proof of concept to...
## Resources
- [Statement of Work](https://docs.google.com/document/d/147cdfexRIupp-1ZZwO-wF4rxbK-n1qzSB_KjaiaDZOk/edit)
- [GaiaNet on Huggingface](https://huggingface.co/gaianet)
## Key Features
- GaiaNet is an open-source developer tool that enables Boardroom to shorten the development and sales cycle of custom GPTs
# GaiaNet + Boardroom - Overview
## Boardroom Governance API
The Boardroom Governance API helps developers fetch and display governance data across DAOs and networks. API documentation can be found [here](https://docs.boardroom.io/docs/api/cd5e0c8aa2bc1-overview).
### Endpoints by DAO (a.k.a. Protocol)
- **DAO/Protocol level**
- Get a DAO summary info
- /v1/protocols/{cname}
- Get all DAO proposals
- /v1/protocols/{cname}/proposals
- **Proposal level**
- Get a DAO proposal
- /v1/proposals/{refId}
- Get DAO proposal pending votes
- /v1/proposals/{refId}/pendingVotes
- Get votes on a proposal
- /v1/proposals/{refId}/votes
- Get DAO Discourse Posts
- /v1/discourseTopicPosts
- /v1/discourseTopics
- /v1/discourseCategories
- Get DAO members
- /v1/protocols/{cname}/voters
- **Member level**
- Pending votes by address
- /v1/voters/{address}/pendingVotes
- Get member details
- /v1/voters/{address}
- Get member votes
- Get member voting power across all protocols
- User info endpoints
- Delegate info endpoints
- Delegation info endpoints
- Delegation pitch info endpoint
- Transaction status
- Get most recent treasury Txs
- SIWE
- Not relevant
- Schemas
- Informational. Not an endpoint
- Adapters or governance frameworks -> [source](https://docs.boardroom.io/docs/api/tnl6gykv1jz9v-adapters)
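As a quick illustration of the endpoint structure above, a minimal sketch of URL helpers. The base URL is taken from Boardroom's public docs and should be verified against the API reference; the helper names are ours:

```python
# Hedged sketch: tiny helpers for composing Boardroom Governance API URLs
# from the endpoint paths listed above. BASE_URL is an assumption; verify
# it against the Boardroom API docs before use.
BASE_URL = "https://api.boardroom.info/v1"

def protocol_url(cname: str) -> str:
    """DAO summary endpoint: /v1/protocols/{cname}."""
    return f"{BASE_URL}/protocols/{cname}"

def proposals_url(cname: str) -> str:
    """All proposals for a DAO: /v1/protocols/{cname}/proposals."""
    return f"{BASE_URL}/protocols/{cname}/proposals"

def proposal_votes_url(ref_id: str) -> str:
    """Votes on a single proposal: /v1/proposals/{refId}/votes."""
    return f"{BASE_URL}/proposals/{ref_id}/votes"
```

An API key (obtained from Boardroom) would still need to be attached to each request per their auth scheme.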
## GaiaNet Node: Setup Workflow
A GaiaNet node consists of:
- a high-performance and cross-platform application runtime --> **gaianet CLI tool (bash script)** + vector + supervise
- a finetuned LLM --> **pretrained model with (optional) custom knowledge base**
- a knowledge embedding model --> for different context material, you might need a different embedding model to achieve optimal performance. **e.g. nomic-embed-text-v1.5.f1**.
- a vector database --> [Qdrant](https://qdrant.tech/) high performance vector search at scale
- a prompt manager
- an open API server --> [RAG API Server](https://github.com/LlamaEdge/rag-api-server) written in Rust following OpenAI specs
- a plugin system for calling external tools and functions using LLM outputs
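Because the node's API server follows OpenAI's specs, clients can talk to it with a standard chat-completions request. A minimal sketch of building such a request body (the node URL and model name below are placeholders, not values from this document):

```python
import json

# Hedged sketch: the node's RAG API Server is OpenAI-compatible, so a
# standard chat-completions request body should work against it. The
# node URL and default model name are placeholders.
NODE_URL = "https://<your-node-subdomain>.gaianet.network/v1/chat/completions"

def chat_payload(system_prompt: str, user_query: str, model: str = "default") -> str:
    """Serialize an OpenAI-style chat request body as JSON."""
    return json.dumps({
        "model": model,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_query},
        ],
    })
```

The serialized payload would be POSTed to `NODE_URL` with a `Content-Type: application/json` header.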
### gaianet-node [repo](https://github.com/GaiaNet-AI/gaianet-node): in-depth review
- **Node install**: instructions can be found [here](https://docs.gaianet.ai/node-guide/install_uninstall). It uses a bash script to install different components required to run a custom LLM model.
- **install.sh** ([source](https://github.com/GaiaNet-AI/gaianet-node/blob/main/install.sh)):
1. download `gaianet` CLI script from repo
2. download default [config.json](https://github.com/GaiaNet-AI/gaianet-node/blob/main/config.json) from repo
3. download empty [nodeid.json](https://github.com/GaiaNet-AI/gaianet-node/blob/main/nodeid.json) file from repo
4. (optional) install [vector](https://vector.dev/) for log aggregation (observability pipelines) if `--enable-vector` arg is included during setup
5. install [WasmEdge](https://github.com/WasmEdge/WasmEdge/): a lightweight, high-performance, and extensible WebAssembly runtime for LLM execution. `ggml` plugin optional if CUDA is enabled
6. install [Qdrant](https://qdrant.tech/) vector database and init db directories
7. download [RAG API Server](https://github.com/LlamaEdge/rag-api-server): it provides a group of OpenAI-compatible web APIs for the Retrieval-Augmented Generation (RAG) applications
8. download GaiaNet [chatbot-ui](https://github.com/GaiaNet-AI/chatbot-ui) (a.k.a dashboard)
9. download [registry.wasm](https://github.com/GaiaNet-AI/gaianet-node/tree/main/utils/registry) script from repo
10. execute the `registry.wasm` script: it creates the node address and node key during the init process, and updates the GaiaNet registry contract with the node status using the node's private key
11. install [gaianet-domain](https://github.com/GaiaNet-AI/gaianet-domain) reverse proxy to expose node to the outside world
12. download [frpc.toml](https://github.com/GaiaNet-AI/gaianet-node/blob/main/frpc.toml) proxy config from repo
13. generate a device ID and display the subdomain + other info for node registration
- **Node initialization + customization**: instructions can be found [here](https://docs.gaianet.ai/node-guide/customize). The `gaianet config` command allows you to:
- Use [GaiaNet node preset config files](https://github.com/GaiaNet-AI/node-configs) and run `gaianet init --config <preset_config.json>`
- LLM Selection: select an LLM model of your preference
- Knowledge base selection: use existing or create your own
- Customize prompts: `system-prompt` vs `rag-prompt` in natural language to provide some context to the LLM
- **gaianet CLI** ([source](https://github.com/GaiaNet-AI/gaianet-node/blob/main/gaianet)):
- **Start a node**: by running `gaianet start`. `--local-only` can be specified to omit node registration.
- **Stop a node**: by running `gaianet stop`
- **Creating a custom knowledge base for RAG-based apps**:
- General concepts can be found [here](https://docs.gaianet.ai/creator-guide/knowledge/concepts).
- Workflow for creating embeddings from an external knowledge base

- GaiaNet currently offers [tools](https://github.com/GaiaNet-AI/embedding-tools) for encoding and transforming chunks from a knowledge base in text and markdown formats. There's also a [web-based UI](https://tools.gaianet.xyz/) that assists in the process.
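As a rough illustration of the chunking step in that workflow, here is a simplified paragraph-based chunker. GaiaNet's actual embedding tools may chunk differently; this is an assumption-laden sketch, not their implementation:

```python
def chunk_text(text: str, max_chars: int = 500) -> list[str]:
    """Split text into paragraph-based chunks of at most max_chars
    (oversized single paragraphs pass through unsplit), a simplified
    version of the chunking RAG embedding tools typically perform."""
    chunks, current = [], ""
    for para in text.split("\n\n"):
        if current and len(current) + len(para) + 2 > max_chars:
            chunks.append(current.strip())
            current = para
        else:
            current = (current + "\n\n" + para) if current else para
    if current.strip():
        chunks.append(current.strip())
    return chunks
```

Each chunk would then be passed to the embedding model (e.g. nomic-embed-text) and the vectors stored in Qdrant.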
## Scope
### Project Requirements
- Establishing a secure and reliable connection between Boardroom Governance API and GaiaNet APIs
- Each of the 350 DAOs has its own node
- Developing mechanisms for data transfer from Boardroom Governance API to designated locations within GaiaNet nodes
- Data formatting adjustments (if necessary) to ensure compatibility with GaiaNet's data structures
### Spec + Architecture
- Implement a middleware API to fetch data from Boardroom Governance API.
- Provide an API docs endpoint to allow the creation of `LLM-generated` interfaces (e.g. using the LangChain APIChain module)
- Fork the [gaianet-node repo](https://github.com/GaiaNet-AI/gaianet-node) to:
- Update `config.json` to include DAO `cname` and Boardroom API Key as config parameters.
- Update `install.sh` script to include module installation instructions
- Update `gaianet init` instructions to deploy the module API when all config parameters are specified in `config.json`
- Implement an entry point in (or alongside) the `gaianet` CLI to start/stop the API server for data fetching and (potentially) embeddings generation.
- Update the [docs](https://github.com/GaiaNet-AI/docs) repo to include documentation on how to use the Boardroom module API to connect with the LLM model and/or download knowledge base embeddings.
- Demo App:
- Implement a demo chat app that connects to the module API for inference on DAO information.
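For illustration, the extra `config.json` parameters proposed above might look like this. The key names `boardroom_cname` and `boardroom_api_key` are hypothetical (shown in isolation from the node's other config fields):

```json
{
  "boardroom_cname": "aave",
  "boardroom_api_key": "<YOUR_BOARDROOM_API_KEY>"
}
```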
---
### Architecture

- Our approach aligns with GaiaNet's goal of creating a decentralized marketplace of agent services. There is potential to market DAO-specific Boardroom agents that provide a real-time intelligence layer, including a ready-to-use knowledge base and an API server with custom data workflows for RAG apps.
- The diagram above showcases a fully-fledged RAG pipeline architecture that leverages GaiaNet DePIN nodes, LLMs frameworks/tooling and Boardroom Governance API to create AI agents for DAOs.
- The data pipeline will use [Pathway](https://pathway.com/), a data processing framework that offers a scalable Rust engine and an easy-to-use Python API to integrate seamlessly with existing LLM tooling and frameworks and create AI pipelines over data streams that can be easily deployed with Docker.
- The data pipeline will periodically extract relevant documents from the Boardroom API (i.e. proposal + discourse data) to maintain a DAO knowledge base with contextual in-memory document retrieval based on user queries, neural feature embeddings (using the embedding model running in the Gaia node) and a similarity search algorithm such as KNN.
- Every request to a `Query/Response API` will use the knowledge base vector store index to retrieve relevant document snippets and then use `Gaianet Node's RAG API Server` for real-time prompting, adaptive learning and response generation in natural language using the LLM model of choice.
- The `Indexer API` will also be available to fetch documents from the Document Vector Store and use them, for example, to fine-tune an LLM model.
- Our solution will also integrate with [LangChain framework](https://python.langchain.com/v0.2/docs/introduction/) and its interoperable components in two ways:
- They will chain together with Pathway's vector store to access up-to-date documents and use them as context for the given user question in a prompt sent to the RAG API Server.
- Use LangChain `APIChain` module to create a `LLM-generated interface` with the Boardroom API. It will format user inputs into API requests that will be automatically executed to receive responses from the external API, providing real-time information from Boardroom.
- The APIChain will require two prompts: one for selecting the right API endpoint and another to create a concise reply to the user query based on that endpoint.
- We'll need to build/define the external API's documentation outlining the API's endpoints, methods, parameters, and expected responses. This will allow the LLM to formulate API requests and parse responses.
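That documentation-string requirement can be sketched in Python: define the API surface as a dictionary, then flatten it to the string an LLM-generated interface consumes. The structure below is illustrative (paraphrased from the Boardroom endpoint list earlier in this document), not LangChain's required schema:

```python
# Hedged sketch: a dictionary describing a few Boardroom endpoints,
# flattened into a plain-text docs string for an LLM-generated
# interface such as LangChain's APIChain. Names are ours.
BOARDROOM_API_DOCS = {
    "base_url": "https://api.boardroom.info/v1",
    "endpoints": [
        {"method": "GET", "path": "/protocols/{cname}",
         "description": "Summary info for a single DAO."},
        {"method": "GET", "path": "/protocols/{cname}/proposals",
         "description": "All proposals for a DAO."},
        {"method": "GET", "path": "/voters/{address}",
         "description": "Details for a single member address."},
    ],
}

def render_api_docs(spec: dict) -> str:
    """Flatten the endpoint dictionary into the docs string the LLM
    uses to select endpoints and formulate requests."""
    lines = [f"Base URL: {spec['base_url']}"]
    for ep in spec["endpoints"]:
        lines.append(f"{ep['method']} {ep['path']} - {ep['description']}")
    return "\n".join(lines)
```

Keeping the spec as a dictionary makes it easy to extend with more endpoints before converting to the string the chain needs.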
---
### Links
- [Step-by-Step Tutorial on Integrating Retrieval-Augmented Generation (RAG) with Large Language Models](https://medium.com/@marketing_novita.ai/step-by-step-tutorial-on-integrating-retrieval-augmented-generation-rag-with-large-language-7c509cddf4ac)
- [Langchain: Interacting with APIs](https://python.langchain.com/v0.1/docs/use_cases/apis/)
- [Integrating an External API with a Chatbot Application using LangChain and Chainlit](https://towardsdatascience.com/integrating-an-external-api-with-a-chatbot-application-using-langchain-and-chainlit-b687bb1efe58)
- [OpenAI: function calling](https://platform.openai.com/docs/guides/function-calling)
- https://platform.openai.com/docs/assistants/tools/function-calling/quickstart
- [Langchain APIChain](https://api.python.langchain.com/en/latest/chains/langchain.chains.api.base.APIChain.html)
- [Langserve: deploy LangChain runnables and chains as a REST API](https://python.langchain.com/v0.2/docs/langserve/#examples)
- Courses
- https://www.deeplearning.ai/short-courses/building-multimodal-search-and-rag/
- https://www.deeplearning.ai/short-courses/building-agentic-rag-with-llamaindex/
- https://www.deeplearning.ai/short-courses/preprocessing-unstructured-data-for-llm-applications/
- [RAG vs Finetuning: Which Is the Best Tool to Boost Your LLM Application?](https://www.kdnuggets.com/rag-vs-finetuning-which-is-the-best-tool-to-boost-your-llm-application)
- [Building data pipelines](https://www.kdnuggets.com/building-data-pipelines-to-create-apps-with-large-language-models)
- Pathway
- https://pathway.com/developers/user-guide/introduction/welcome
- https://pathway.com/developers/templates/langchain-integration
- https://pathway.com/developers/user-guide/llm-xpack/llm-examples
- https://pathway.com/developers/templates
- https://github.com/pathwaycom/llm-app/tree/main/examples/pipelines/demo-document-indexing
- https://github.com/pathwaycom/llm-app/blob/main/examples/pipelines/demo-question-answering/README.md
- https://docs.litellm.ai/docs/providers/custom_openai_proxy
- OpenAPI + API spec
- https://openapi.tools/
- https://ratemyopenapi.com/
- https://medium.com/@angela.tt/the-easiest-and-quickest-way-to-generate-an-openapi-spec-for-an-existing-website-12b5ad6e36db
- https://blog.postman.com/creating-an-openapi-definition-from-a-collection-with-the-postman-api/
- https://stackoverflow.com/questions/57006723/postman-how-to-export-download-api-documentation-from-postman
- [How to Connect LLM to External Sources Using RAG?](https://markovate.com/blog/connect-llm-using-rag/)
- [Mastering RAG Databases for LLMs: A Step-by-Step Guide](https://myscale.com/blog/mastering-rag-databases-for-llms-step-by-step-guide/)
- [The ultimate intro to data mapping](https://flatfile.com/blog/ultimate-introduction-data-mapping/#commonDataMappingTechniques)
- [API data mapping](https://www.adeptia.com/blog/api-data-mapping)
#### Internal Notes about integrating LLMs with external APIs
- Pathway is now available in LangChain, a framework for developing applications powered by large language models (LLMs). You can query Pathway and access up-to-date documents for your RAG applications from LangChain using the Pathway vector store.
- Once you have a VectorStoreServer running, you can access it from a LangChain pipeline by using PathwayVectorClient.
- The following example implements a simple RAG flow that, given a question, retrieves documents from the Pathway vector store; these are then used as context for the question in a prompt sent to the OpenAI chat model.
- Explore key features like real-time document indexing, adaptive learning from updated documentation, and managing user sessions.
```python
prompt = f"Given the following documents : \n {docs_str} \nanswer this query: {query}"
```
- The bot is reactive to changes to the corpus of documents: once new snippets are provided, it reindexes them and starts to use the new knowledge to answer subsequent queries.
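The prompt template above can be wrapped in a small helper; a minimal sketch (the helper name is ours):

```python
def build_rag_prompt(docs: list[str], query: str) -> str:
    """Join retrieved document snippets and append the user query,
    mirroring the prompt template shown earlier."""
    docs_str = "\n".join(docs)
    return f"Given the following documents : \n {docs_str} \nanswer this query: {query}"
```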
- About communicating with external APIs:
- There are two primary ways to interface LLMs with external APIs:
- **Functions**: OpenAI functions is one popular means of doing this.
- https://platform.openai.com/docs/guides/function-calling
- **LLM-generated interface**: Use an LLM with access to API documentation to create an interface.
- https://www.klarna.com/us/shopping/public/openai/v0/api-docs/
- LangChain APIChain module
- designed to format user inputs into API requests. This will enable our chatbot to send requests to and receive responses from an external API, broadening its functionality.
- For the APIChain class, we need the external API’s documentation in string format to access endpoint details. This documentation should outline the API’s endpoints, methods, parameters, and expected responses; it aids the LLM in formulating API requests and parsing the responses. It’s helpful to define this information as a dictionary and then convert it into a string for later usage.
- The APIChain requires two prompts: one for selecting the right API endpoint and another to create a concise reply to the user query based on that endpoint.
- The APIChain can be configured to handle different HTTP methods (GET, POST, PUT, DELETE, etc.), set request headers, and manage the body of the request. It also supports JSON payloads, which are commonly used in RESTful API communications.
---
### Questions
- Is there any chance to get an API Key without providing our personal/billing/credit card info?
- Is the general aim to provide individual access to all available data endpoints in the Boardroom API, or to provide endpoints that return aggregated data at different levels (e.g. protocol, proposal, member)?
### Proposal
| Phases | Description | ETA |
| -------- | -------- | -------- |
### Estimated Time
## Deliverables
- Nodes running Boardroom’s Governance API
- Documentation detailing the prompting of Governance functionality, including:
- Steps on how to search and import data
- Supported data types and search criteria
## Team
- E2T
- Santiago