# Terraforms + WebXR
Main GitHub: https://github.com/madjin/terraform-visualizer
Experiments with Terraforms by Mathcastles
https://tokens.mathcastles.xyz/terraforms/token-html/4
max supply 9911
- https://opensea.io/collection/terraforms
- https://opensea.io/assets/ethereum/0x4e1f41613c9084fdb9e34e11fae9412427480e56/2181
- https://enterdream.xyz/index.html?id=2181
- https://jaydenkur.com/terra/
Contract: https://etherscan.io/address/0x4e1f41613c9084fdb9e34e11fae9412427480e56
https://terraformexplorer.xyz/
What if terraforms had a link to an MP4 in the metadata?
But that would require a render server snapshotting the piece, and it would betray the composition some.
We have a GIF server for Discord that does that, btw, but decided not to deploy it because we're not sure of the best way to do it efficiently. If anyone here has a strong backend design background and knows a cost-efficient way to compute and serve 11k 15 MB GIFs daily, holler.
Haven't gotten around to looking into that yet, but maybe someone here knows the best pattern for keeping compute/bandwidth costs way down for something like that.
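One cheap pattern, sketched below with Node/Express: render at most once per day per token, cache to disk (object storage + CDN in production), and serve statically instead of rendering per request. `renderGif` is a hypothetical wrapper around the recorder, not something that exists in the repo.

```js
import { existsSync, statSync } from 'node:fs';
import express from 'express';

// Hypothetical renderer: record the token page and write a GIF to `file`.
const renderGif = async (id, file) => { /* puppeteer + ffmpeg go here */ };

const app = express();
const DAY_MS = 24 * 60 * 60 * 1000;

app.get('/gif/:id', async (req, res) => {
  const id = parseInt(req.params.id, 10);
  if (!(id >= 0 && id <= 9911)) return res.sendStatus(404); // max supply 9911
  const file = `cache/${id}.gif`;
  // Re-render only when the cached copy is missing or older than a day.
  if (!existsSync(file) || Date.now() - statSync(file).mtimeMs > DAY_MS) {
    await renderGif(id, file);
  }
  res.sendFile(file, { root: process.cwd() });
});

app.listen(3000);
```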
All Terrain-mode parcels are technically dynamic.
I'd have to run some tests to see what the threshold for change is, but because the whole castle is floating up and down, it samples the noise space at different positions over time.
It's very subtle though, and perceiving the change visually is only possible over long time spans,
so the sample rate could be low.
---
## hax
**https://tokens.mathcastles.xyz/terraforms/token-html/2181** :star:
npm install puppeteer-screen-recorder
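A minimal capture sketch with it; the `PuppeteerScreenRecorder` start/stop calls are the library's documented API, while the token id, viewport, and 10 s duration are just example values:

```js
import puppeteer from 'puppeteer';
import { PuppeteerScreenRecorder } from 'puppeteer-screen-recorder';

const browser = await puppeteer.launch();
const page = await browser.newPage();
await page.setViewport({ width: 600, height: 600 });
await page.goto('https://tokens.mathcastles.xyz/terraforms/token-html/2181');

// Record ~10 seconds of the live piece to an mp4.
const recorder = new PuppeteerScreenRecorder(page);
await recorder.start('./terraform-2181.mp4');
await new Promise((r) => setTimeout(r, 10_000));
await recorder.stop();
await browser.close();
```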
### Perfect loop
https://github.com/bbc/video-loop-finder/blob/master/video_loop_finder.py
requires numpy / anaconda
https://github.com/onlyhavecans/perfect-gif
```bash
ffmpeg -i video/simple.mp4 -filter_complex "
[0:v]trim=duration=6,setpts=PTS-STARTPTS,scale=416:600,fade=out:st=5:d=1:alpha=1[fadeout];
[0:v]trim=duration=6:start=1,setpts=PTS-STARTPTS[scaled_input];
[scaled_input][fadeout]xfade=transition=fade:duration=2:offset=4" output.mp4
```
https://tokens.mathcastles.xyz/terraforms/token-html/9911
will this work?
```bash
#!/bin/bash
# Crossfade a video into itself for a seamless loop (adapted from the
# Stack Overflow answer linked below).
[[ -z $3 ]] && { echo "Usage: crossfadevideo <input.mp4> <fade in seconds> <output.mp4> [looptimes]"; exit 1; }
input="$1"
fade="$2"
duration="$(ffprobe -v error -select_streams v:0 -show_entries stream=duration -of default=noprint_wrappers=1:nokey=1 "$input")"
# Offset for the fade-out = clip duration minus fade, truncated to whole seconds.
duration=$(echo "$duration-($fade)" | bc | sed 's/\..*//g')
[[ -z $duration ]] && duration=0
output="$3"
[[ -n $4 ]] && loop=$4 && output="${output}.mkv"
set -x
ffmpeg -y -i "$input" -filter_complex "
[0:v]trim=duration=$fade,fade=d=$fade:alpha=1,setpts=PTS+($duration/TB)[fadeout];
[0:v]trim=$fade,setpts=PTS-STARTPTS[fadein];
[fadeout][fadein]overlay" "$output"
# we use mkv for looping since ffmpeg loops mp4 badly
[[ -n $loop ]] && ffmpeg -y -stream_loop "$loop" -i "$output" -c copy "${output/.mkv/}"
```
https://stackoverflow.com/questions/60043174/cross-fade-video-to-itself-with-ffmpeg-for-seamless-looping
https://ffmpeg.org/ffmpeg-filters.html#xfade
https://github.com/tungs/timecut
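timecut is the interesting one for perfect loops: it virtualizes page time and captures frames deterministically instead of screen-recording in realtime. A sketch of its Node usage, with options as I understand them from its README:

```js
const timecut = require('timecut');

timecut({
  url: 'https://tokens.mathcastles.xyz/terraforms/token-html/2181',
  viewport: { width: 600, height: 600 },
  fps: 30,      // deterministic: one frame per 1/30 s of virtual time
  duration: 10, // seconds of virtual time to capture
  output: 'terraform-2181.mp4',
}).then(() => console.log('done'));
```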
---
## Notes
Check this out in AVP (Apple Vision Pro) next time:
https://maize-veil-butter.glitch.me/docu.html
Video doesn't play in AVP Safari.
Safari only does immersive VR, not WebAR.
Check this out: https://modelviewer.dev/examples/scenegraph/#animatedTexturesExample
animated glTF + mp4
Import maps are now supported in Safari 16.4: https://twitter.com/robpalmer2/status/1640425021939040262
demo world
https://beta.anata.dev/CharacterStudio/test.html
https://twitter.com/dmarcos/status/1724480999583862793
NVIDIA: worth getting hardware-accelerated ffmpeg?
https://docs.nvidia.com/video-technologies/video-codec-sdk/12.0/ffmpeg-with-nvidia-gpu/index.html
https://trac.ffmpeg.org/wiki/HWAccelIntro
The real bottleneck is this:
`await new Promise((resolve) => setTimeout(resolve, 8000)); // Simulate frame exporting`
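If the exporter can set a flag when it finishes, an event-based wait beats the fixed 8 s sleep. A sketch assuming a Puppeteer `page` and a hypothetical `window.__frameExportDone` flag; neither name comes from the actual code:

```js
// Instead of: await new Promise((resolve) => setTimeout(resolve, 8000));
await page.waitForFunction('window.__frameExportDone === true', {
  polling: 100,    // re-check every 100 ms
  timeout: 30_000, // fail instead of hanging forever
});
```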
Canvas as texture?
https://wangftp.wustl.edu/~dli/test/simple-cors-http-server.py

## NFT drop
Snapshotting holders
> idea:
> get info about NFTs held from each holder
> generate assets from it?
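A snapshot sketch using ethers v6 to replay `Transfer` events on the Terraforms contract (address from the Etherscan link above). The RPC URL is a placeholder, and a real run would batch `queryFilter` over block ranges to stay under provider limits:

```js
import { JsonRpcProvider, Contract } from 'ethers';

const provider = new JsonRpcProvider('https://YOUR_RPC_URL'); // placeholder
const terraforms = new Contract(
  '0x4e1f41613c9084fdb9e34e11fae9412427480e56',
  ['event Transfer(address indexed from, address indexed to, uint256 indexed tokenId)'],
  provider
);

// Replay every transfer; the last `to` per tokenId is the current holder.
const owners = new Map();
for (const ev of await terraforms.queryFilter('Transfer')) {
  owners.set(ev.args.tokenId.toString(), ev.args.to);
}
console.log(`${new Set(owners.values()).size} unique holders`);
```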
4.6 AR to host 4 GB of files on arweave
Get ASCII from a terraform, generate particles from it (sketch below)? Create an atlas + particles, or use a framework?
petals falling, like a sakura tree
janusweb /tree/ (rip)
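A three.js sketch of the ASCII-to-particles idea: spawn one point per non-space character and drift them downward like petals. It assumes `grid` is a 32×32 array of characters already extracted from the token (fetching it is out of scope here):

```js
import * as THREE from 'three';

// One particle per non-space character, centered on the origin.
function particlesFromGrid(grid) {
  const positions = [];
  grid.forEach((row, y) => {
    row.forEach((ch, x) => {
      if (ch !== ' ') positions.push(x - row.length / 2, grid.length / 2 - y, 0);
    });
  });
  const geometry = new THREE.BufferGeometry();
  geometry.setAttribute('position', new THREE.Float32BufferAttribute(positions, 3));
  return new THREE.Points(geometry, new THREE.PointsMaterial({ size: 0.2, color: 0xffc0cb }));
}

// Call each frame: drift points down and wrap them, sakura-style.
function fall(points, dt) {
  const pos = points.geometry.attributes.position;
  for (let i = 0; i < pos.count; i++) {
    pos.setY(i, pos.getY(i) - dt * (0.5 + 0.3 * Math.sin(i)));
    if (pos.getY(i) < -16) pos.setY(i, 16);
  }
  pos.needsUpdate = true;
}
```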
decaying world? shield? room? playcanvas site?
yard-sale/dist/config.json, for example
Can we just search and replace? It's only referenced there.

WORKS!!!
`{"name":"hyperfy_dream2.mp4","type":"audio","file":{"filename":"chad-world.mp4","size":1400502,"hash":"f678419b0b8a4bb43561f8de5498d475","url":"files/assets/155157583/1/chad-world.mp4"}}`
Just changed a couple of places.
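The same swap, scripted. A sketch that assumes the exported PlayCanvas `dist/config.json` keeps its assets in a map shaped like the blob above; the filenames and path are just the ones from this example:

```js
import { readFileSync, writeFileSync } from 'node:fs';

const path = 'yard-sale/dist/config.json';
const config = JSON.parse(readFileSync(path, 'utf8'));

// Repoint the placeholder asset at the replacement file.
for (const asset of Object.values(config.assets ?? {})) {
  if (asset.name === 'hyperfy_dream2.mp4' && asset.file) {
    asset.file.filename = 'chad-world.mp4';
    asset.file.url = 'files/assets/155157583/1/chad-world.mp4';
  }
}

writeFileSync(path, JSON.stringify(config));
```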
Allocate a number of M3 NFTs into the mix, randomize them into the 3D scenes (see the sketch after this list), then airdrop to holders after the reveal:
- boomboxheads
- chainbreakers
- M3TV video
- chuddies
- retrodoge
- genesis pass
- cryptoavatars
- vipe heros
- voxel wearables
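A trivial sketch of the randomization step: Fisher-Yates shuffle, then deal round-robin into 12 scene buckets (the slugs just mirror the list above):

```js
const collections = [
  'boomboxheads', 'chainbreakers', 'm3tv-video', 'chuddies', 'retrodoge',
  'genesis-pass', 'cryptoavatars', 'vipe-heroes', 'voxel-wearables',
];

function shuffle(arr) {
  for (let i = arr.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [arr[i], arr[j]] = [arr[j], arr[i]];
  }
  return arr;
}

// Deal the shuffled items round-robin into 12 scene buckets.
const scenes = Array.from({ length: 12 }, () => []);
shuffle([...collections]).forEach((c, i) => scenes[i % 12].push(c));
```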
name idea: Mementos
**Fixed camera only, no flying around**
Similar to: https://zora.co/collect/zora:0x7f0f1f3b1f42f0b27788dc8919ab418a0f113ce6/1

https://playcanvas.com/project/1178994/overview/lofi
Create 12 different scenes for asset swapping, with support for dynamic objects that can be loaded from remote URLs, including some whitelisted NFT collections that don't have CORS issues.

https://zora.co/collect/zora:0x7f0f1f3b1f42f0b27788dc8919ab418a0f113ce6/1
https://sketchfab.com/m3org/models
- lofi hacker, terraform screensaver
- street scene, terraform glowing at night
- internet archive world
- desert + castle in sky in distance
- gaussian splat scene
- pure particles
- modelviewer templates
- hackerspaces (burning midnight oil)
Launch the collection from PartyDAO, so royalties go back to whoever contributed NFTs?
### Scenes
- [Webaverse homespace](https://github.com/madjin/homespace)
- [Cryptovoxels makers district](https://sketchfab.com/3d-models/makers-district-12-13-19-91b16c41a56a432382a9e6912e3c3f19)
- [Chuddies town](https://hyperfy.io/town)
- [Janusweb: Metacade](https://hyperfy.io/metacade)
- Anarchy Arcade: default map
- Hyperfy DAOtown
- https://sketchfab.com/3d-models/daotown-minimal-14f98f0a62594e559ab581a67ced8b67
- https://sketchfab.com/3d-models/daotown-building-kit-250e2caa0ce7454db1c7ffa48dc4093a
- Hackerspace scans
- https://github.com/madjin/nsl-vr
- https://github.com/madjin/noisebridge-vr
- https://github.com/madjin/hacklab-vr
- Internet Archive
- https://github.com/madjin/internet-archive-vr
- https://sketchfab.com/3d-models/internet-archive-building-2d76cc9e714f42bb9d090af73074296a
- https://sketchfab.com/3d-models/internet-archive-b895255e32fd408eab9cf6d4d092c1b9
- M3TV Soundstage
- https://sketchfab.com/3d-models/m3tv-soundstage-6206325ff5c643c9b1dd2ce5aadbec19
---
### Castle in the Sky
Internet Archive model
Terraform + particles
threejs / webxr app?
A-Frame has a weird issue: it sometimes won't play video
modelviewer + animated texture (video supported)
janusweb + ambient lighting?
airdrop pre-reveal-state NFTs, then do the reveal
NOTE: post-processing = no VR/AR support
model-viewer + video + post-processing? Selective bloom: https://modelviewer.dev/examples/postprocessing/#selective-effects
bonkler as inspiration
mashup of influences
ffmpeg -hwaccel cuda -hwaccel_output_format cuda -i "input_file.mp4" -c:a copy -vf "scale_cuda=-2:480" -c:v h264_nvenc "output_file.mp4"