Going into 2025, MOCA has dedicated itself to moving on from building web3-native SaaS products into the world of open source software development. From a technical point of view that means our new museum codebase can be deployed by any museum or enthusiast, while MOCA becomes a "customer" of its own codebase. At the same time we guide the development of functionality we think is useful for digital art museums.
But let's go back in time for a bit. We've built cool stuff over the last few years. From the Multipass to multiplayer curation stacks like MOCA Show and the metaverse exhibition product MOCA ROOMs, we have always tried to build tooling that lifts up artists and collectors by giving them tools to fuel their crypto-art-driven storytelling.
Me back in 2022 talking about the stuff we built at MOCA
Our team has learned a lot building all of this out, and we'll make sure that our open source museum tech inherits its most powerful facets. At the same time we're gonna lean heavily into agentic systems and other AI tech stacks, which can be fueled by both cloud and locally hosted models thanks to LiteLLM-routed AI inference.
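To make the cloud-vs-local point a bit more concrete: a LiteLLM router exposes an OpenAI-compatible endpoint, so for the museum frontend, switching between a hosted model and a locally served one is just a change of the model alias. Here is a minimal sketch under that assumption; the base URL, model aliases and API key are placeholders rather than part of our actual deployment.

```typescript
// Minimal sketch: querying a LiteLLM proxy through its OpenAI-compatible API.
// The base URL, model aliases and key below are illustrative placeholders.
const LITELLM_BASE = process.env.LITELLM_BASE ?? "http://localhost:4000";

async function askCurator(question: string, model: string): Promise<string> {
  const res = await fetch(`${LITELLM_BASE}/v1/chat/completions`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.LITELLM_API_KEY ?? "sk-placeholder"}`,
    },
    body: JSON.stringify({
      model, // e.g. a cloud alias ("venice-llama-3.3-70b") or a local one ("ollama-llama3")
      messages: [
        { role: "system", content: "You are a curator for a digital art museum." },
        { role: "user", content: question },
      ],
    }),
  });
  if (!res.ok) throw new Error(`LLM request failed: ${res.status}`);
  const data = await res.json();
  return data.choices[0].message.content;
}
```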
Some of you might remember that back in 22/23 we teamed up with Karan4D, one of the Nous Research founders (before that chapter started for him), to build a virtual curator for MOCA. Back then the tech simply wasn't there yet (and definitely not scalable), but now it finally is. So let me take you by the hand and let's explore the state of the MOCA 2.0 tech.
We're gonna release the very first iteration of this stack very soon. Right now we have two different types of collection views, the second of which will be introduced as we reveal the Matt Kane Collection later this month. The screenshot shows the default view, which is used to present our Genesis, Permanent and Fundraiser Collections as well as the Daïm al-Yad Collection that Daïm donated during the last bear market.
The new viewer supports images, videos and 3D assets. It's designed to be very clean and snappy, letting the visitor focus on the art. Keep in mind that this is the MVP of our open source tech stack and that much more functionality will be added as we build this out in public. We can't wait to finally release this early repo into the wild.
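For the curious, here's a rough idea of what "supports images, videos and 3D assets" looks like under the hood: the viewer simply branches on the asset's media type. The `Artwork` shape below is a hypothetical sketch, not our actual schema.

```typescript
// Hypothetical sketch of a media-type branch in a collection viewer.
type Artwork = {
  title: string;
  artist: string;
  mediaType: "image" | "video" | "model3d"; // assumed discriminator, not our real schema
  uri: string;
};

function rendererFor(artwork: Artwork): string {
  switch (artwork.mediaType) {
    case "image":
      return `<img src="${artwork.uri}" alt="${artwork.title}" />`;
    case "video":
      return `<video src="${artwork.uri}" controls muted></video>`;
    case "model3d":
      // the real viewer might use a <model-viewer> web component or a three.js canvas here
      return `<model-viewer src="${artwork.uri}"></model-viewer>`;
  }
}
```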
The MOCA Library contains an ever-growing pile of writing about cryptoart and broader web3 culture, scraped from websites and imported in Markdown format. For MOCA's own deployment we want to massively increase our agents' knowledge of cryptoart and web3 culture. We're already using these tools as we build the stack.
Our current prototyping environment includes over 600 Markdown files, and another few hundred articles are currently being scraped and cleaned. Very soon we'll deploy this library for testing so that it can be accessed via chat, which will be integrated into the collection views. As we start to tinker, we'll inject actual artwork data into the context window to tune the agent for meaningful dialogue.
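"Injecting actual artwork data into the context window" basically means assembling the agent's system prompt from the record of the artwork the visitor is currently looking at. A minimal sketch, with hypothetical field names and prompt wording:

```typescript
// Sketch: build the system prompt from the artwork the visitor is currently viewing.
// Field names and prompt wording are illustrative, not the production prompt.
type ArtworkContext = {
  title: string;
  artist: string;
  year?: string;
  description: string;
};

function curatorSystemPrompt(artwork: ArtworkContext): string {
  return [
    "You are MOCA's library agent. Ground your answers in the artwork below.",
    `Title: ${artwork.title}`,
    `Artist: ${artwork.artist}`,
    artwork.year ? `Year: ${artwork.year}` : "",
    `Description: ${artwork.description}`,
  ]
    .filter(Boolean)
    .join("\n");
}
// The resulting string is sent as the system message of the chat request,
// followed by the visitor's question as the user message.
```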
These two screenshots show raw database tables: extracted entities in the first image and extracted relationships in the second. You can see how much additional data is generated from the documents, with the goal of enabling deeply enriched knowledge during human-to-agent and agent-to-agent inference.
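If you can't zoom into the screenshots: the extraction step essentially turns each document into rows of entities and relationships. The record shapes below are an illustrative guess at what such tables hold, not our actual schema.

```typescript
// Hypothetical shapes for the extracted knowledge-graph rows shown in the screenshots.
type ExtractedEntity = {
  id: string;
  name: string;           // e.g. "Matt Kane"
  category: string;       // e.g. "artist", "collection", "platform"
  sourceDocument: string; // Markdown file the entity was extracted from
};

type ExtractedRelationship = {
  subjectId: string;   // entity id
  predicate: string;   // e.g. "created", "is_part_of", "donated_to"
  objectId: string;    // entity id
  sourceDocument: string;
};
```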
Ingesting and extracting 637 documents into our knowledge graph took 18 hours. We used Llama-3.3:70b from Venice.ai through our LiteLLM router, since we can utilize the VCU balance we receive daily for our staked $vvv tokens. This "free compute" also enables MOCA to fuel our library RAG agent as we plug the library into the core backend.
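That works out to roughly 1.7 minutes per document, most of it spent on LLM-based extraction. For a feel of the batch process, here is a hedged sketch of pushing a folder of Markdown files into the library backend; the base URL and `/documents` route are stand-ins, not a documented API.

```typescript
import { readdir, readFile } from "node:fs/promises";
import { join } from "node:path";

// Sketch: push a folder of Markdown files into a knowledge-graph backend.
// LIBRARY_BASE and the /documents route are placeholders, not a real API.
const LIBRARY_BASE = process.env.LIBRARY_BASE ?? "http://localhost:7272";

async function ingestLibrary(dir: string): Promise<void> {
  const files = (await readdir(dir)).filter((f) => f.endsWith(".md"));
  for (const file of files) {
    const markdown = await readFile(join(dir, file), "utf8");
    const res = await fetch(`${LIBRARY_BASE}/documents`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ filename: file, content: markdown }),
    });
    if (!res.ok) console.error(`Failed to ingest ${file}: ${res.status}`);
  }
}
```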
Wanna dig deeper? Read the official announcement of the MOCA Library.
The MOCA deployment features immutable onchain curation. The idea is to build a plugin for the regular museum stack that enhances the standard ROOMs functionality with the web3 capabilities you already know from our legacy stack. Exactly how that will be implemented will be explored while we bring the ROOMs feature into the open source stack.
We're targeting late summer 2025 to push the general ROOMs functionality into the new open source tech stack in a way that is powered by the library. Imagine the room details page from the legacy ROOMs app letting visitors explore any ROOM configured for the museum, fueled by agentic chat. There will also be an iteration of the current artwork-related UX, e.g. the dynamic camera flights, that ties into a unified chat experience.
In Q1 we tinkered a lot with knowledge injection, which is how we realized we needed something more capable: the agent very often didn't get questions about the injected information right. That research led us to R2R and to the birth of the library within our tech stack. We played a lot with our prototype via DMs and inside our internal chat.
We've already deployed ElizaOS v2 (version 1.0 beta) but are waiting for new plugins that re-enable Venice support, which is what fuels the agent you see in the screenshot above. We expect that to be released within the next few weeks and can't wait to fork the final repo, as we're gonna build an R2R plugin that allows your DeCC0s to integrate with the MOCA Library, enhancing their knowledge with vast data around web3 arts and culture.
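The core of that integration is small: before a DeCC0 answers, it queries the library for relevant passages and folds them into its context. The sketch below assumes a generic search endpoint and leaves out the ElizaOS plugin wiring, since the v2 interfaces are still moving; the base URL, route and response shape are placeholders rather than R2R's actual API.

```typescript
// Sketch: fetch supporting passages from the MOCA Library before the agent replies.
// The base URL, /search route and response shape are placeholders, not R2R's documented API.
const LIBRARY_BASE = process.env.LIBRARY_BASE ?? "http://localhost:7272";

type LibraryPassage = { text: string; source: string };

async function libraryLookup(query: string, limit = 5): Promise<LibraryPassage[]> {
  const res = await fetch(`${LIBRARY_BASE}/search`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query, limit }),
  });
  if (!res.ok) return []; // degrade gracefully: the agent still answers, just without library context
  return (await res.json()).results as LibraryPassage[];
}
```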
With all of that being said, let me try to paint a picture of what lies behind everything I explained above. We see Art DeCC0s as interoperable end-user touchpoints. Visitors will not only talk to them in the web interfaces of your museum. We imagine them walking at your side, represented as VRM avatars, answering your questions as you explore art exhibits in interactive Hyperfy worlds. Imagine "The Curator" in Ready Player One.
untitled, xyz and jivinci tinkered with early 3D DeCC0s in Jan/Feb 2025
As Hyperfy v2 becomes more and more capable, MOCA is aiming for a plugin that enables Hyperfy world builders to spawn interactive art exhibitions via ROOMs, alongside their favorite DeCC0 agents, into whatever experience they build. We strongly believe this tech stack is gonna massively help museums around the globe reach new audiences.
By enabling anyone to deploy the museum stack described in this post and make their own art collections and related cultural information accessible, we believe MOCA can help manifest arts and culture in an agent-driven immersive web.
Our journey is not just about technology; it's about creating a cultural movement that comes from the heart. We believe that crypto art has the power to inspire, educate and unite people across the globe. Our platform is designed to foster a sense of community, creativity and wonder, providing a space where artists can express themselves, collectors can discover new talent and enthusiasts can immerse themselves in the beauty of art.
Join our Discord to be with us as soon as the first iteration of our codebase is released. Creative experimentation around Art DeCC0s is already happening.