In this post I'm going to explain how you can set up an extremely sophisticated VTubing rig that costs you almost nothing but brings incredible results to the table.

Prepare your Windows PC

You don't need much to make this happen. The software is available for free, but I highly recommend supporting the developers, especially the indie devs @butz_yung and @ojousa_ma_yo, who built the building blocks this tutorial focuses on.

- Get yourself a regular webcam
- Download and install OBS Studio
- Download XR Animator from GitHub and extract it to your computer
- Download VROOM from Booth and extract it to your drive
3/30/2023

Introduction

I've been researching and tinkering a lot with locally hosted LLMs recently. There are several great tutorials out there that explain how to run LLaMA or Alpaca locally. After digging through a couple of them, I decided to write a step-by-step guide on how to run Alpaca 13B 4-bit via KoboldAI and have chat conversations with different characters through TavernAI, entirely on your local machine. The performance of the quantized model loaded on the GPU is incredible and shows the potential of on-prem LLM systems. I'm aware of CPU-based solutions like alpaca.cpp and have played around with them. In this guide, however, I'll dig into how to install the Alpaca setup I personally like most on gaming hardware. It runs incredibly well on my RTX 4080; see the video below.

What makes this stack special?

The ability to run this setup locally on gaming hardware is pretty neat. It amazed me for the same reasons Stable Diffusion amazes me. The modularity is another reason: you can configure the language model interface in KoboldAI and plug that API into other frontends. Instead of TavernAI you could embed it into Hyperfy, Webaverse, or other web3/XR platforms. We've already seen ChatGPT integrations in Hyperfy, and the Webaverse Character Studio has already shown very powerful AI integrations. I hope this guide helps you understand the modularity aspect. I'm currently exploring the LangChain framework, which is going to allow the creation of more sophisticated LLM systems that are open and can be hosted on premises.
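To make the modularity point concrete, here is a minimal Python sketch of what "plugging the KoboldAI API into another frontend" looks like. This is my own illustration, not code from the tutorial: it assumes a KoboldAI instance serving its API on `localhost:5000` (a common default) and a `/api/v1/generate` endpoint in the KoboldAI United style; field names and the port may differ on your install, so check the API docs your instance exposes.

```python
# Hypothetical sketch: talking to a locally hosted KoboldAI API.
# Any frontend (TavernAI, a Hyperfy world, a custom bot) does essentially this.
import json
import urllib.request

# Assumed default address of the local KoboldAI API -- adjust to your setup.
KOBOLD_URL = "http://localhost:5000/api/v1/generate"


def build_payload(prompt: str, max_length: int = 80) -> dict:
    """Build a generation request body for the KoboldAI API."""
    return {
        "prompt": prompt,
        "max_length": max_length,  # number of tokens to generate
        "temperature": 0.7,        # sampling temperature
    }


def generate(prompt: str) -> str:
    """POST the prompt to the local instance and return the generated text."""
    data = json.dumps(build_payload(prompt)).encode("utf-8")
    req = urllib.request.Request(
        KOBOLD_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # KoboldAI-style responses look like {"results": [{"text": "..."}]}
    return body["results"][0]["text"]
```

Because the model sits behind a plain HTTP interface, swapping TavernAI for another frontend only means pointing that frontend at the same URL.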
3/28/2023