---
title: 33c3 Retrip
description:
robots: index, follow
lang: en
dir: ltr
breaks: true
---

# 33c3: Retrip

{%hackmd theme-dark %}

###### tags: `metaverse lab`

A project log for [Metaverse Lab](https://hackaday.io/project/5077) by [alusion](https://hackaday.io/hacker/46747-alusion)

*Experiments with Decentralized VR/AR Infrastructure, Neural Networks, and 3D Internet.*

Originally published 01/19/2017 at 03:36
https://hackaday.io/project/5077-metaverse-lab/log/52109-33c3-retrip

---

<p><img class="lazy" src="https://cdn.hackaday.io/images/4266831484869541335.jpg"></p> <p><strong>32c3 Writeup: <a href="https://hackaday.io/project/5077/log/36232-the-wired">https://hackaday.io/project/5077/log/36232-the-wired</a></strong> <br></p> <p><a href="https://en.wikipedia.org/wiki/Chaos_Communication_Congress"><strong>Chaos Communication Congress</strong></a> is Europe's largest and longest-running annual hacker conference, covering topics such as art, science, computer security, cryptography, hardware, artificial intelligence, mixed reality, transhumanism, surveillance, and ethics. Hackers from all around the world can bring anything they'd like and transform the large halls with eyefuls of art/tech projects, robots, and blinking gizmos that make the journey of grabbing another Club-Mate seem like a gallery walk. Read more about CCC here:<br></p> <p><a href="http://hackaday.com/2016/12/26/33c3-starts-tomorrow-we-wont-be-sleeping-for-four-days/">http://hackaday.com/2016/12/26/33c3-starts-tomorrow-we-wont-be-sleeping-for-four-days/</a><br><a href="http://hackaday.com/2016/12/30/33c3-works-for-me/">http://hackaday.com/2016/12/30/33c3-works-for-me/</a></p> <p>Blessed with a window of opportunity, I equipped myself with the new <a href="https://developers.google.com/tango/">Project Tango</a> phone and made the pilgrimage to Hamburg to create another mixed reality art gallery. After more than a year of honing new techniques, I was prepared to make it 10x better.
</p> <p><img class="lazy" src="http://imgur.com/6WUPVlo.jpg">It's been months since I last updated, so I think it's time to share some details on how I am building this year's CCC VR gallery.<br></p> <p>Photography of any kind at the congress is very difficult, as you must be sure to ask <em>everybody</em> in the picture if they agree to be photographed. For this reason, I first scraped the public web for digital assets I could use, then limited meatspace asset gathering to the early morning hours between 4 and 7am, when traffic is lowest. In order to have better control and directional aim, I <a href="https://i.imgur.com/E6LcTlA.jpg">covered one half of the camera with my sleeve</a>, and in post-processing enhanced the contrast to create digital droplets of imagery inside a black equirectangular canvas, which I then made transparent. This photography technique made it easier to avoid faces and create the droplets in space.</p> <p><img class="lazy" src="http://i.imgur.com/5cl2fuC.jpg">This is what each photograph looks like before it is wrapped around an object. I used the <a href="https://hackaday.io/project/5077-metaverse-lab/log/47651-decentralized-avatars">ipfs-imgur translator</a> script and modified it slightly to use a photosphere template instead of a plane. I now had a palette of these blots that I could drag and drop into my world to play with. <br></p> <p><img class="lazy" src="https://cdn.hackaday.io/images/1329381484893249960.png"></p> <p>I then began to spin some ideas around for the CCC art gallery's visual aesthetic:<br> </p> <p><iframe src="//player.vimeo.com/video/199260483" allowfullscreen="" width="500" height="281" frameborder="0"></iframe>I started recording a ghost while creating a FireBoxRoom so that I could easily replay and load the assets into other rooms to set the table more quickly. This video is sped up 4x.
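</p>

The post-processing pass described above (boost the contrast, then make the black canvas transparent so only the droplets of imagery remain) can be sketched in plain Python. This is only an illustration of the idea, operating on tiny nested lists of RGB tuples rather than a real equirectangular image, and the function names are my own, not from any actual script:

```python
# Sketch of the blot post-processing: push the contrast, then give
# near-black pixels zero alpha so the black canvas disappears and
# only the "droplets" of imagery remain.

def enhance_contrast(pixel, factor=2.0, midpoint=128):
    # Scale each channel away from the midpoint, clamped to 0..255.
    return tuple(max(0, min(255, int(midpoint + (c - midpoint) * factor)))
                 for c in pixel)

def to_rgba_droplets(image, black_cutoff=16):
    # image is a list of rows of (r, g, b) tuples; returns RGBA rows.
    out = []
    for row in image:
        new_row = []
        for px in row:
            r, g, b = enhance_contrast(px)
            alpha = 0 if max(r, g, b) <= black_cutoff else 255
            new_row.append((r, g, b, alpha))
        out.append(new_row)
    return out
```

The same idea scales up to the real captures: everything the sleeve blocked stays black and turns transparent, while the exposed droplet keeps its boosted colors.

<p>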
After dropping the blots into the space, I added some rotation to all the objects, and the results became a trippy swirl of memories.</p> <iframe src="//player.vimeo.com/video/198641489" allowfullscreen="" width="500" height="281" frameborder="0"></iframe> <p>I had a surprise guest drop in while I was building the world out; he didn't know what to make of it.<br></p> <p><img class="lazy" src="https://cdn.hackaday.io/images/original/3110921484979266455.gif"></p> <p>Take a look into the crystal ball and you will see many very interesting things.<br></p> <p><iframe src="//player.vimeo.com/video/198964415" allowfullscreen="" width="500" height="281" frameborder="0"></iframe></p> <p> Here's a return to the equirectangular view of one of the worlds created with this method of stirring 360 fragments. After building a world of swirling media, I recorded 360 clips to use for the sky. Check out some of my screenshots here: <a href="http://imgur.com/a/VtDoS">http://imgur.com/a/VtDoS</a><br></p> <p><img class="lazy" src="https://cdn.hackaday.io/images/9068631484795191190.jpg"></p> <p>In November 2016, the first <a href="https://get.google.com/tango/">Project Tango consumer device</a> was released. After a year of practice with the dev kit and a month of practice before the congress, I was ready to scan anything. The device did not come with a 3D scanning application by default, but that might soon change after I publish this log. I used the <a href="https://matterport.com/matterport-scenes/">Matterport Scenes</a> app for Project Tango to capture point clouds that averaged 2 million vertices, with a maximum file size of about 44 MB per ply file.
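</p>

Because these scans are heavy, it helps to check a scan's size before doing anything expensive: a .ply file begins with a small plain-text header that states the vertex count. A minimal, stdlib-only sketch (the function name is mine; it takes the header lines as strings):

```python
def ply_vertex_count(header_lines):
    # A .ply header is plain text, e.g. "element vertex 2000000",
    # terminated by "end_header"; the (possibly binary) body never
    # needs to be loaded just to check the scan's size.
    count = None
    for line in header_lines:
        line = line.strip()
        if line.startswith("element vertex"):
            count = int(line.split()[-1])
        elif line == "end_header":
            break
    return count
```

<p>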
</p> <p><img class="lazy" src="https://cdn.hackaday.io/images/original/8880671484893197661.gif"></p> <p><strong>Update:</strong> The latest version of JanusVR and JanusWeb (2/6/17) now supports ply files, meaning you can download the files straight into your WebVR scenes!</p> <p><img class="lazy" src="https://cdn.hackaday.io/images/5235701486514821552.jpg"></p> <p>Here are the steps to convert verts (ply) to faces (obj). I used the free software <a href="http://www.meshlab.net/">meshlab</a> for Poisson surface reconstruction and <a href="https://www.blender.org/">Blender</a> for optimizing (special thanks to /u/FireFoxG for organizing). <br></p> <ol><li>Open meshlab and import an ASCII file (such as the ply).</li><li>Open the Layer view (next to the little img symbol).</li><li><strong>SUBSAMPLING</strong>: Filters &gt; Sampling &gt; Poisson-disk Sampling: enter the Number of Samples as the desired resulting vertex count. It's good to start with about the same number as your original vertex count to maintain resolution (10k to 1 million).</li><li><strong>COMPUTE NORMALS</strong>: Filters &gt; Normals, Curvatures and Orientation &gt; Compute Normals for Point Sets [neighbours = 20]</li><li><strong>TRIANGULATION</strong>: Filters &gt; Point Set &gt; Surface Reconstruction: Poisson<ol><li>Set the octree depth to 9.</li></ol></li><li>Export the obj mesh (I usually name it out.obj), then import it into Blender.<ol><li>The areas that were <em>open</em> become bigger triangles, while the parts to keep are all small triangles of roughly equal size.</li><li>Select a face slightly larger than the average, then Select &gt; Select Similar.
On the left, use the greater-than function to select all the areas that should be holes (<strong>make sure you are in face select mode</strong>).<img class="lazy" src="https://i.gyazo.com/f02dc153d000304abf772ca687b3ef3b.gif">Delete the larger triangles and keep all those that are the same size. Some manual cleanup may be needed.<img class="lazy" src="https://i.gyazo.com/46bd06ac6384545491d150d994e704ad.png"></li></ol></li><li>UV unwrap in Blender (hit U, then Smart UV Project), save a 4096x4096 image texture, then export this obj file back into meshlab alongside the original point cloud file.</li><li><strong>Vertex Attributes</strong> to Texture (between 2 meshes) can be found under Filters &gt; Texture (set the texture size to 4096; the source is the original point cloud, the target is the UV-unwrapped mesh from Blender).</li></ol> <p>That's it. The resulting obj files may still be large and require decimation to be optimized for the web. This is one of the most labor-intensive steps, but once you have a flow it takes about 10 minutes to process each scan. There have been discussions about adding ply support to Janus through the particle system; such a system would drastically streamline the process, taking a scan to a VR site in less than a minute! I made about three times as many scans during 33c3 and organized them in a way that lets me identify and prototype with them more efficiently. <br></p> <p><img class="lazy" src="http://i.imgur.com/MBxyToG.jpg"></p> <p>I made it easy to use any of these scans by creating a pastebin of snippets to include between the &lt;Assets&gt; tags of the <a href="http://janusvr.com/guide/markuplanguage/index.html">FireBoxRoom</a>. 
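</p>

For reference, a snippet along these lines pairs an asset declaration with an object placement in the room. This is only an illustrative sketch: the id, file names, and position are placeholders, not the actual pastebin contents:

```html
<FireBoxRoom>
  <Assets>
    <!-- Placeholder id and paths; the real snippets point at the hosted scans -->
    <AssetObject id="congress_scan" src="congress_scan.obj" mtl="congress_scan.mtl" />
  </Assets>
  <Room>
    <!-- Place the scan a few meters in front of the entrance -->
    <Object id="congress_scan" pos="0 0 -4" scale="1 1 1" />
  </Room>
</FireBoxRoom>
```

<p>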
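</p>

The hole-deletion step in the list above (keep the small, even triangles; drop the big ones that Poisson stretched across open areas) can also be done programmatically. A rough sketch of the idea, using plain tuples instead of Blender data, with function names of my own and a threshold factor that would need tuning per scan:

```python
import math

def triangle_area(a, b, c):
    # Half the magnitude of the cross product of two edge vectors.
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    cross = [u[1] * v[2] - u[2] * v[1],
             u[2] * v[0] - u[0] * v[2],
             u[0] * v[1] - u[1] * v[0]]
    return 0.5 * math.sqrt(sum(x * x for x in cross))

def drop_oversized_faces(vertices, faces, factor=4.0):
    # Keep faces whose area is within factor * median area; the
    # oversized outliers are the triangles stretched over holes.
    areas = [triangle_area(*(vertices[i] for i in f)) for f in faces]
    median = sorted(areas)[len(areas) // 2]
    return [f for f, area in zip(faces, areas) if area <= factor * median]
```

<p>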
This gallery was starting to come together after I combined the models with the skies made earlier.</p> <p><img style="width: 843px; height: 418px;" class="lazy" width="843" height="418" src="https://cdn.hackaday.io/images/original/6193501485047708391.gif"></p> <p>This is a preview of one of the crystal balls I created for the CCC VR gallery; it currently works on Cardboard, Gear VR, Oculus Rift, and Vive, with Daydream support coming very soon. Here's a screenshot from the official Chrome build on Android:</p> <p><img class="lazy" src="https://cdn.hackaday.io/images/1216341485082021484.jpg"></p> <p>Here's the WebVR polyfill mode when you hit the Enter VR button, ready to slide into a cardboard headset!<br></p> <p><img class="lazy" src="http://i.imgur.com/gkTFkVB.jpg"></p> <p>Enjoy some pictures and screenshots showing the building of the galleries between physical and virtual.</p> <p><img class="lazy" src="https://cdn.hackaday.io/images/3709201486512977493.jpg"></p> <p><img class="lazy" src="https://cdn.hackaday.io/images/3408181486512483087.png"><br></p> <p><img class="lazy" src="https://cdn.hackaday.io/images/6248491486512755513.png"></p> <p><img class="lazy" src="https://cdn.hackaday.io/images/166481486512505856.jpg"></p> <p><img class="lazy" src="https://i.imgur.com/yL9DXM6.jpg"></p> <p>Here's a video preview of a technique where I scrape Instagram photos of the event and process them through a glitchy algorithm that outputs seamless tiled textures, which I can then use to generate crossfading textures. 
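</p>

The crossfade itself is just a per-pixel linear blend between two equal-sized tiles. A stdlib-only sketch of that stage (function names are mine; the real pipeline drives this with gmic and ffmpeg rather than Python):

```python
def crossfade(tex_a, tex_b, t):
    # Per-pixel linear blend of two equal-sized textures; t runs from
    # 0.0 (all tex_a) to 1.0 (all tex_b). Pixels are (r, g, b) tuples.
    return [[tuple(round(a + (b - a) * t) for a, b in zip(pa, pb))
             for pa, pb in zip(row_a, row_b)]
            for row_a, row_b in zip(tex_a, tex_b)]

def crossfade_frames(tex_a, tex_b, steps):
    # Yield the in-between frames that would be written out as a clip.
    for i in range(steps + 1):
        yield crossfade(tex_a, tex_b, i / steps)
```

Because the tiles are seamless, every in-between frame tiles seamlessly too, which is what lets the result wrap around the sky without visible borders.

<p>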
The entire process is a combination of gmic and ffmpeg; it creates a surreal, cyberdelic sky and can be useful for fractaling in digital memories.<br></p> <p><iframe src="//player.vimeo.com/video/199259868" allowfullscreen="" width="500" height="281" frameborder="0"><br></iframe></p> <h1>Old and New</h1> <p><img class="lazy" src="https://cdn.hackaday.io/images/original/5097691502066284447.gif"><br></p> <p>Much of my work had become inaccessible through stale or forgotten IPFS hashes while WebVR was still in its infancy. Now WebVR is becoming more widely adopted, with a community growing into the thousands and major browser support that one can keep track of at <a href="https://webvr.rocks/">webvr.rocks</a>. I've since been optimizing my projects, including the 2015 art gallery for 32c3, and created a variety of worlds from 33c3, easily navigable from an image gallery I converted into a world select screen.</p> <p>Here's a preview of what it looks like on <a href="https://hackaday.io/project/11279-avalon">AVALON</a>:</p> <p><img class="lazy" src="https://cdn.hackaday.io/images/691141486538735895.png"></p> <p>Clicking one of the portals will turn the browser into a magic window for an explorable, social, 3D world with a touch-screen joystick for easy mobility.</p> <p><img class="lazy" src="https://i.imgur.com/7frJjr1.jpg"></p> <p>Another great feature to be aware of in JanusWeb is that pressing F1 will open an in-browser editor and F6 will show the Janus markup code. <br></p> <p><img class="lazy" src="https://cdn.hackaday.io/images/5218061486869580460.png"></p> <p>I'm in the process of creating a custom avatar for every portal and a player count for the 2D frontend. Thanks for looking, enjoy the art. </p> <hr>