# Use GaiaNet's Llama 3 nodes

[GaiaNet](https://gaianet.ai/) provides both Llama 3 8B (fast) and Llama 3 70B (capable) nodes that you can use as drop-in replacements for the OpenAI APIs. Unlike OpenAI, they are completely free and require no API key. Enjoy!

## Llama 3 8B

Replace the OpenAI configuration in your app with the following.

| Config option | Value |
|-----|--------|
| Model Name (for LLM) | Meta-Llama-3-8B-Instruct.Q5_K_M |
| Model Name (for text embedding) | all-MiniLM-L6-v2-ggml-model-f16 |
| API endpoint URL | https://llama3.gaianet.network/v1 |
| API key | Empty or any value |

Here is an example of how to configure a model provider in Dify.

![Configure a GaiaNet Llama3 8b model in Dify](https://hackmd.io/_uploads/BJG4CH7z0.png)

![Configure a GaiaNet embedding model in Dify](https://hackmd.io/_uploads/HJYrTHsfC.png)

Once you select that model, you can chat with it in Dify.

![Chat with the GaiaNet Llama3 8b model in Dify](https://hackmd.io/_uploads/H1hDCH7zC.png)

More details on how to use the web UI and API to access the node: https://llama3.gaianet.network/

## Llama 3 70B

Replace the OpenAI configuration in your app with the following.

| Config option | Value |
|-----|--------|
| Model Name (for LLM) | Meta-Llama-3-70B-Instruct-Q5_K_M |
| Model Name (for text embedding) | all-MiniLM-L6-v2-ggml-model-f16 |
| API endpoint URL | https://0xf8bf989ce672acd284309bbbbf4debe95975ea77.gaianet.network/v1 |
| API key | Empty or any value |

Here is an example of how to configure a model provider in Dify.

![Configure a GaiaNet Llama3 70b model in Dify](https://hackmd.io/_uploads/BJ24k8QMA.png)

![Configure a GaiaNet embedding model in Dify](https://hackmd.io/_uploads/HJjuTBiGR.png)

Once you select that model, you can chat with it in Dify.
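Since the node exposes an OpenAI-compatible API, any OpenAI client library or plain HTTP request works against it, not just Dify. Below is a minimal sketch, using only Python's standard library, of building a chat request against the 8B node's endpoint and model name from the table above. The `Authorization` value is arbitrary, since GaiaNet nodes do not check the API key.

```python
import json
from urllib import request

# Endpoint and model name taken from the Llama 3 8B table above.
GAIANET_BASE = "https://llama3.gaianet.network/v1"
MODEL = "Meta-Llama-3-8B-Instruct.Q5_K_M"

def chat_request(prompt: str) -> request.Request:
    """Build a ready-to-send POST to the node's OpenAI-style chat route."""
    body = json.dumps({
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return request.Request(
        f"{GAIANET_BASE}/chat/completions",
        data=body,
        headers={
            "Content-Type": "application/json",
            # Any value is accepted; the header just needs to be well-formed.
            "Authorization": "Bearer any-value",
        },
        method="POST",
    )

req = chat_request("What is the capital of France?")
# To actually send it (requires network access):
# with request.urlopen(req) as resp:
#     print(json.loads(resp.read())["choices"][0]["message"]["content"])
```

The same sketch works for the 70B node by swapping in its endpoint URL and model name.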
![Chat with the GaiaNet Llama3 70b model in Dify](https://hackmd.io/_uploads/rJsuJLmf0.png)

More details on how to use the web UI and API to access the node: https://0xf8bf989ce672acd284309bbbbf4debe95975ea77.gaianet.network/

## Run your own GaiaNet node

With [GaiaNet](https://gaianet.ai/), you can deploy any of the 10,000+ open source LLMs on Huggingface on your own laptop and make them available as API backends for your own app. You can also add your own knowledge base so that the LLM performs a context search before answering questions -- similar to how the OpenAI Assistants API works.

To get started, follow this 5-minute quick guide: https://github.com/GaiaNet-AI/gaianet-node?tab=readme-ov-file#run-your-own-gaianet-node

Once you are done, you can turn off the node.

```
gaianet stop
```

Change the chat model to Llama 3 8B. Execute the following command in the terminal shell.

```
gaianet config \
  --chat-url https://huggingface.co/second-state/Llama-3-8B-Instruct-GGUF/resolve/main/Meta-Llama-3-8B-Instruct-Q5_K_M.gguf \
  --chat-ctx-size 8192 \
  --prompt-template llama-3-chat \
  --snapshot https://huggingface.co/datasets/gaianet/none/resolve/main/none.snapshot.tar.gz
```

Initialize the node again. It will download the LLM file in this step. The LLM file is 5+ GB, so this could take a while.

```
gaianet init
```

Start the node.

```
gaianet start
```

Upon a successful start, you should see the URL of the node's dashboard. On the dashboard, you can find the API endpoint URL and model names for the chat API and embedding API.
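Instead of reading the model names off the dashboard, you can also discover them programmatically from the node's OpenAI-style model list route (assumed here to be `<endpoint>/v1/models`, as on the hosted nodes). The sketch below parses a sample response; the model IDs shown are illustrative placeholders matching the configuration above, and a live node returns whatever models it was initialized with.

```python
import json

# Illustrative response shaped like the OpenAI "list models" payload;
# a live node at <endpoint>/v1/models returns its own configured models.
sample = json.dumps({
    "object": "list",
    "data": [
        {"id": "Meta-Llama-3-8B-Instruct-Q5_K_M", "object": "model"},
        {"id": "all-MiniLM-L6-v2-ggml-model-f16", "object": "model"},
    ],
})

def model_ids(models_json: str) -> list[str]:
    """Extract model names from an OpenAI-style model list response."""
    return [m["id"] for m in json.loads(models_json)["data"]]

print(model_ids(sample))
# → ['Meta-Llama-3-8B-Instruct-Q5_K_M', 'all-MiniLM-L6-v2-ggml-model-f16']
```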
## More models to try with your own GaiaNet node

Just run

```
gaianet config \
  --chat-url ${chat-url} \
  --chat-ctx-size ${chat-ctx-size} \
  --prompt-template ${prompt-template}
```

and then

```
gaianet init
gaianet start
```

| Description | chat-url | chat-ctx-size | prompt-template |
| ----------- | -------- | -------- | -------- |
| Chinese language Llama 3 8B | https://huggingface.co/second-state/Llama3-8B-Chinese-Chat-GGUF/resolve/main/Llama3-8B-Chinese-Chat-Q5_K_M.gguf | 8192 | llama-3-chat |
| Llama 3 8B with 1M context size | https://huggingface.co/second-state/Llama-3-8B-Instruct-Gradient-1048k-GGUF/resolve/main/Llama-3-8B-Instruct-Gradient-1048k-Q5_K_M.gguf | 1000000 | llama-3-chat |
| Llama 3 70B | https://huggingface.co/second-state/Meta-Llama-3-70B-Instruct-GGUF/resolve/main/Meta-Llama-3-70B-Instruct-Q5_K_M.gguf | 8192 | llama-3-chat |
| Phi-3 3.8B | https://huggingface.co/second-state/Phi-3-mini-4k-instruct-GGUF/resolve/main/Phi-3-mini-4k-instruct-Q5_K_M.gguf | 4096 | phi-3-chat |
| Mixtral 8x7B | https://huggingface.co/second-state/Mixtral-8x7B-Instruct-v0.1-GGUF/resolve/main/Mixtral-8x7B-Instruct-v0.1-Q5_K_M.gguf | 4096 | mistral-instruct |
| Qwen 72B uncensored | https://huggingface.co/second-state/Liberated-Qwen1.5-72B-GGUF/resolve/main/Liberated-Qwen1.5-72B-Q4_K_M.gguf | 8192 | chatml |
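If you switch models often, the table rows above can be turned into ready-to-run commands. Below is a hypothetical helper (not part of the gaianet CLI) that formats one row into the corresponding `gaianet config` invocation, using the Phi-3 entry as an example; the dictionary values are copied from the table.

```python
# Table rows from above, keyed by description. Only one entry is shown;
# the other rows can be added the same way.
MODELS = {
    "Phi-3 3.8B": {
        "chat_url": "https://huggingface.co/second-state/Phi-3-mini-4k-instruct-GGUF/resolve/main/Phi-3-mini-4k-instruct-Q5_K_M.gguf",
        "chat_ctx_size": 4096,
        "prompt_template": "phi-3-chat",
    },
}

def config_command(name: str) -> str:
    """Format a table row as a `gaianet config` command string."""
    m = MODELS[name]
    return (
        "gaianet config "
        f"--chat-url {m['chat_url']} "
        f"--chat-ctx-size {m['chat_ctx_size']} "
        f"--prompt-template {m['prompt_template']}"
    )

print(config_command("Phi-3 3.8B"))
```

Remember that after any `gaianet config` change you still need `gaianet init` and `gaianet start`, as shown above.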