In this guide I’ll explain how to train a LoRA of your VRM avatar so you can generate images of your 3D avatar with any Stable Diffusion model you can find. In this tutorial we are going to train the LoRA for the tokenized identity Nature, and I’m going to provide the file so you can play around with it in your own Stable Diffusion installation.
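If you just want to try the finished LoRA, here is a minimal sketch of loading it into a Stable Diffusion pipeline with the Hugging Face diffusers library. The file name nature_lora.safetensors and the base checkpoint are assumptions for illustration; point them at the provided LoRA file and whatever base model you prefer.

```python
# Minimal sketch (not the guide's exact workflow): attach a trained LoRA to a
# Stable Diffusion pipeline and prompt with its trigger token "Nature".
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed base checkpoint; any SD 1.5 model should work
    torch_dtype=torch.float16,
).to("cuda")

# Load the LoRA weights; directory and file name are placeholders for the provided file.
pipe.load_lora_weights(".", weight_name="nature_lora.safetensors")

# Include the tokenized identity in the prompt so the LoRA is actually expressed.
image = pipe(
    "a portrait of Nature standing in a forest, 3d render, vrm avatar",
    num_inference_steps=30,
    guidance_scale=7.5,
).images[0]
image.save("nature_lora_test.png")
```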
9/21/2023
Over the past few weeks
8/18/2023
I joined the Museum of Crypto Art (MOCA) as their CTO in early 2021, when Colborn Bell and Shivani Mitra were splitting with the other co-founder, Pablo Rodriguez-Fraile. We had the plan to truly decentralize the museum and stick to the cryptoart ideals. Back then MOCA had several builds in Somnium Space
8/17/2023
Introduction

I've been researching and tinkering a lot with locally hosted LLMs recently. There are several great tutorials out there which explain how to run LLaMA or Alpaca locally. After digging through a couple of them I decided to write a step-by-step guide on how to run Alpaca 13B 4-bit via KoboldAI and have chat conversations with different characters through TavernAI - entirely on your local machine. The performance of the quantized model loaded on the GPU is incredible and shows the potential for on-prem LLM systems. I'm aware of CPU-based solutions like alpaca.cpp and have played around with them. However, in this guide I'll dig into how to install the Alpaca setup that I personally like most on gaming hardware. It runs incredibly well on my RTX 4080 - see the video below.

What makes this stack special?

The ability to run this setup locally on gaming hardware is pretty neat. It amazed me for the same reasons Stable Diffusion amazes me. The modularity is another reason. You can configure the language model interface in KoboldAI and plug that API into other frontends: instead of TavernAI you could embed it into Hyperfy, Webaverse or other web3xr platforms. We already saw ChatGPT integrations in Hyperfy, and the Webaverse Character Studio already showed very powerful AI integrations. I hope this guide helps you understand the modularity aspect. I'm currently exploring the Langchain framework, which is going to allow the creation of more sophisticated LLM systems that are open and can be hosted on premise.
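To make the modularity point concrete, here is a small sketch of a frontend-agnostic call against a locally running KoboldAI instance over HTTP. The endpoint path, default port and payload fields reflect a typical KoboldAI (United) API setup as I understand it; treat them as assumptions and adjust to your own install.

```python
# Sketch: talking to a locally hosted model through KoboldAI's HTTP API.
# Any frontend (TavernAI, a Hyperfy/Webaverse integration, a custom bot)
# could sit on top of the same call.
import requests

KOBOLD_API = "http://localhost:5000/api/v1/generate"  # assumed local KoboldAI endpoint

payload = {
    "prompt": "You are a friendly guide in a virtual gallery.\nVisitor: Hi, what is this place?\nGuide:",
    "max_length": 120,    # tokens to generate
    "temperature": 0.7,
    "top_p": 0.9,
}

response = requests.post(KOBOLD_API, json=payload, timeout=120)
response.raise_for_status()

# The API returns a list of generations; print the first completion.
print(response.json()["results"][0]["text"])
```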
4/14/2023