The following covers all of the AI systems that Webaverse runs and requires in order to work properly.
Used for semantic search and approximate nearest-neighbor (ANN) matching on anything, especially searching Wikipedia or finding the closest match to a query in a corpus
STATUS: DEPLOYED
REPO: https://github.com/webaverse/weaviate-server
Weaviate - https://weaviate.io/
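Conceptually, Weaviate's ANN search approximates what a brute-force nearest-neighbor lookup over embedding vectors does. The sketch below shows that underlying idea in plain Python (the corpus, labels, and vectors are made-up illustrations, not Webaverse data; a real deployment would query the Weaviate server instead):

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two equal-length embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def closest_match(query_vec, corpus):
    # corpus: list of (label, embedding) pairs.
    # Brute-force scan; ANN indexes (like Weaviate's) approximate this
    # much faster over large corpora.
    return max(corpus, key=lambda item: cosine_similarity(query_vec, item[1]))[0]

# Toy corpus with hypothetical 3-dimensional embeddings.
corpus = [
    ("sword",  [0.9, 0.1, 0.0]),
    ("shield", [0.1, 0.9, 0.0]),
    ("potion", [0.0, 0.1, 0.9]),
]
print(closest_match([0.8, 0.2, 0.1], corpus))  # → sword
```

In production the embeddings come from a text or image encoder and the index is served by Weaviate; this sketch only illustrates the matching step.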
STATUS: DEPLOYED
GPT-3
OPT-175b
Unified Language Model
Used for rapidly classifying sentiment, emotion, and hate speech in text
STATUS: NOT DEPLOYED
XtremeDistil trained on the GoEmotions dataset
DistilBERT toxicity detection
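Models like these typically return per-label scores, which the game then thresholds into decisions. The sketch below shows that post-processing step with hypothetical scores and label names (the function, thresholds, and values are illustrative assumptions, not the actual Webaverse pipeline):

```python
# Hypothetical per-label scores, shaped like the output of a multi-label
# emotion model (GoEmotions-style) plus a toxicity classifier.
def flag_message(scores, toxicity_threshold=0.5, emotion_threshold=0.3):
    # Returns (is_toxic, dominant_emotions) from a dict of label -> score.
    is_toxic = scores.get("toxicity", 0.0) >= toxicity_threshold
    emotions = sorted(
        (label for label, s in scores.items()
         if label != "toxicity" and s >= emotion_threshold),
        key=lambda label: -scores[label],
    )
    return is_toxic, emotions

scores = {"joy": 0.72, "anger": 0.05, "surprise": 0.41, "toxicity": 0.02}
print(flag_message(scores))  # → (False, ['joy', 'surprise'])
```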
Used for all character voice generation. Needs to be very fast and human-sounding
STATUS: DEPLOYED
REPO: https://github.com/webaverse/tiktalknet
TikTalkNet - https://github.com/webaverse/tiktalknet
Used for generating character portraits, backgrounds, objects, textures and in-game artwork
STATUS: DEPLOYED
REPO: https://github.com/webaverse/stable-diffusion-webui
DEPRECATED: https://github.com/webaverse/stable-diffusion
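The upstream stable-diffusion-webui project exposes an HTTP API when started with its API flag; assuming the Webaverse fork keeps the upstream `/sdapi/v1/txt2img` endpoint, a client call might look like the sketch below. The field names follow the upstream API and may differ in the fork; the endpoint URL, prompt, and parameter values are illustrative:

```python
import json
import urllib.request

def build_txt2img_payload(prompt, width=512, height=512, steps=20):
    # Minimal request body for a txt2img call; many more fields exist
    # upstream (sampler, cfg_scale, seed, ...).
    return {
        "prompt": prompt,
        "negative_prompt": "blurry, low quality",
        "width": width,
        "height": height,
        "steps": steps,
    }

def request_image(base_url, payload):
    # POSTs the payload to a running webui instance and returns the
    # decoded JSON response (which contains base64-encoded images upstream).
    req = urllib.request.Request(
        base_url + "/sdapi/v1/txt2img",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

payload = build_txt2img_payload("portrait of a sci-fi ranger, game art")
print(json.dumps(payload, indent=2))
```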
Used for ambient sounds in the world, as well as sound effects attached to objects and mobs
STATUS: DEPLOYED
REPO: https://github.com/webaverse/diffsound
DiffSound
Audio Diffusion – similar to DiffSound, but its samples are much better, probably due to the training datasets
Used for all music generated in Webaverse
STATUS: NOT DEPLOYED
This version of Audio Diffusion features models fine-tuned on specific pieces
Used for ambient audio generation in Webaverse. This approach may generate and process consistent long pieces much faster than audio-only methods.
STATUS: NOT DEPLOYED
Used for generation of all 3D objects and features in the world, based on descriptions, images or general class types
STATUS: DEPLOYED
REPO: https://github.com/webaverse/stable-dreamfusion
Stable Dreamfusion - https://github.com/ashawkey/stable-dreamfusion
GET3D - https://github.com/nv-tlabs/GET3D
https://nv-tlabs.github.io/LION/ - not released yet
Used for generation of humanoid animations
STATUS: DEPLOYED
REPO: https://github.com/webaverse/motion-diffusion-model
Motion Diffusion
https://github.com/mingyuan-zhang/MotionDiffuse - Seems very similar
Used for describing images so that the game can incorporate user images into the story, analyze screenshots, generate labels for training data, or produce prompts for inverted generation
STATUS: NOT DEPLOYED
Used for captioning or describing audio or sounds
STATUS: NOT DEPLOYED
https://github.com/TheoCoombes/ClipCap - uses CLAP from LAION to do many things, including captioning audio and audio2img
Used for generating animation from 2D images, especially synced with audio or text for characters and portraits
STATUS: NOT DEPLOYED
https://github.com/yoyo-nb/Thin-Plate-Spline-Motion-Model
Used for adding bones to objects, characters, mobs and pets that don't have a rig
STATUS: NOT DEPLOYED