Main github: https://github.com/madjin/terraform-visualizer
Experiments with Terraforms by Mathcastles
https://tokens.mathcastles.xyz/terraforms/token-html/4
max supply 9911
Contract: https://etherscan.io/address/0x4e1f41613c9084fdb9e34e11fae9412427480e56
https://terraformexplorer.xyz/
What if terraforms had a link to an MP4 in the metadata?
But it would require a render server snapshotting the piece, and that would betray the composition somewhat
We have a gif server for discord that does that btw, but decided not to deploy it because we're not sure of the best way to do it efficiently, so if anyone here has a strong backend design background and knows a cost-efficient way to compute and serve 11k 15 MB gifs daily, holler
Haven't gotten around to looking into that yet, but maybe someone here knows the best pattern for keeping compute/bandwidth costs way down for something like that
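One possible pattern (my assumption, not anything confirmed here): render each GIF lazily on first request, cache it on disk or object storage, and put a CDN in front so repeat views never hit the render box; re-render only when the cached copy is older than the refresh window. Rough sketch, with renderGif() as a hypothetical placeholder for the headless-browser capture + encode step:

const express = require('express');
const fs = require('fs/promises');

const ONE_DAY_MS = 24 * 60 * 60 * 1000;
const app = express();

app.get('/gif/:id', async (req, res) => {
  const path = `/data/gifs/${req.params.id}.gif`;
  try {
    const stat = await fs.stat(path);
    // Fresh enough: serve the cached file (a CDN in front absorbs most of these hits)
    if (Date.now() - stat.mtimeMs < ONE_DAY_MS) return res.sendFile(path);
  } catch (_) {
    // not cached yet, fall through and render
  }
  await renderGif(req.params.id, path); // hypothetical: puppeteer capture + ffmpeg encode
  res.sendFile(path);
});

app.listen(3000);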
Terrain mode parcels are all technically dynamic
I'd have to do some tests to observe what the threshold for change is, but because the whole castle is floating up and down, it samples the noise space at different positions over time
it's very subtle though, and perceiving the change visually is only possible over long time spans
so the sample rate could be low
https://opensea.io/assets/ethereum/0x4e1f41613c9084fdb9e34e11fae9412427480e56/2181
https://enterdream.xyz/index.html?id=2181
https://tokens.mathcastles.xyz/terraforms/token-html/2181
npm install puppeteer-screen-recorder
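Minimal capture sketch with puppeteer-screen-recorder (token URL, viewport, and the 15s record window are just placeholders; assumes ./out exists):

const puppeteer = require('puppeteer');
const { PuppeteerScreenRecorder } = require('puppeteer-screen-recorder');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.setViewport({ width: 800, height: 800 });

  const recorder = new PuppeteerScreenRecorder(page, { fps: 30 });
  await page.goto('https://tokens.mathcastles.xyz/terraforms/token-html/2181');
  await recorder.start('./out/2181.mp4');
  await new Promise((resolve) => setTimeout(resolve, 15000)); // let the piece animate for ~15s
  await recorder.stop();
  await browser.close();
})();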
https://github.com/bbc/video-loop-finder/blob/master/video_loop_finder.py
requires numpy / anaconda
https://github.com/onlyhavecans/perfect-gif
https://tokens.mathcastles.xyz/terraforms/token-html/9911
will this work?
https://stackoverflow.com/questions/60043174/cross-fade-video-to-itself-with-ffmpeg-for-seamless-looping
https://ffmpeg.org/ffmpeg-filters.html#xfade
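One reading of the xfade approach for a self-loop (untested here; numbers assume a ~15s capture with a 1s crossfade): drop the first second, crossfade the tail into an untrimmed copy of the same clip, and cut the output at duration-minus-fade so the last frame lands right where the loop restarts.

ffmpeg -i clip.mp4 -i clip.mp4 -filter_complex \
  "[0:v]trim=start=1,setpts=PTS-STARTPTS[head];[head][1:v]xfade=transition=fade:duration=1:offset=13[v]" \
  -map "[v]" -t 14 loop.mp4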
https://github.com/tungs/timecut
check this out in AVP next time
https://maize-veil-butter.glitch.me/docu.html
Video doesn't play in AVP Safari
Safari only does immersive VR, not WebAR
Check this out: https://modelviewer.dev/examples/scenegraph/#animatedTexturesExample
animated glTF + mp4
import maps are now supported in Safari 16.4 https://twitter.com/robpalmer2/status/1640425021939040262
demo world
https://beta.anata.dev/CharacterStudio/test.html
https://twitter.com/dmarcos/status/1724480999583862793
nvidia
worth getting hardware-accelerated ffmpeg?
https://docs.nvidia.com/video-technologies/video-codec-sdk/12.0/ffmpeg-with-nvidia-gpu/index.html
https://trac.ffmpeg.org/wiki/HWAccelIntro
the real bottleneck is this:
await new Promise((resolve) => setTimeout(resolve, 8000)); // Simulate frame exporting
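If the page exposes any observable "done" state (an assumption; it may not), the fixed sleep could become an event-driven wait, e.g. with puppeteer's waitForFunction:

// window.__framesExported is hypothetical; whatever flag the page actually sets would go here
await page.waitForFunction(() => window.__framesExported === true, { timeout: 60000 });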
Canvas as texture?
https://wangftp.wustl.edu/~dli/test/simple-cors-http-server.py
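For the canvas-as-texture idea, a minimal three.js sketch (assuming three.js, which these notes lean on elsewhere); anything remote drawn into the canvas still needs CORS headers, hence a simple CORS-enabled server like the one above:

import * as THREE from 'three';

const canvas = document.createElement('canvas');
canvas.width = canvas.height = 512;
const ctx = canvas.getContext('2d');
ctx.fillStyle = '#0f0';
ctx.fillText('terraform', 10, 20);

const texture = new THREE.CanvasTexture(canvas);
const plane = new THREE.Mesh(
  new THREE.PlaneGeometry(1, 1),
  new THREE.MeshBasicMaterial({ map: texture })
);
// scene.add(plane); then redraw the canvas each frame and set texture.needsUpdate = true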
snapshotting holders
idea:
get info about NFTs held from each holder
generate assets from it?
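Rough snapshot sketch (assumes ethers v6 and a standard RPC endpoint; the URL is a placeholder): walk token IDs 1..9911 and record ownerOf for each; what else each holder owns would come from a separate indexer.

import { ethers } from 'ethers';

const provider = new ethers.JsonRpcProvider('https://eth.example-rpc.com'); // placeholder RPC URL
const terraforms = new ethers.Contract(
  '0x4e1f41613c9084fdb9e34e11fae9412427480e56',
  ['function ownerOf(uint256 tokenId) view returns (address)'],
  provider
);

const holders = {};
for (let id = 1; id <= 9911; id++) {
  try {
    const owner = await terraforms.ownerOf(id);
    (holders[owner] ||= []).push(id);
  } catch (_) {
    // skip burned / nonexistent ids
  }
}
// sequential to be gentle on the RPC; batch or multicall in practice
console.log(Object.keys(holders).length, 'holders');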
4.6 AR to host 4 GB of files on arweave
get ascii from terraform, generate particles from it? create atlas + particles or use framework?
petals falling, like a sakura tree
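Sketch of the ascii-to-particles idea in three.js (the grid here is a stand-in; the real characters would have to be parsed out of the token HTML):

import * as THREE from 'three';

const SIZE = 32;
const asciiGrid = Array.from({ length: SIZE }, () => '#'.repeat(SIZE)); // stand-in for the parcel's character grid

const positions = [];
for (let y = 0; y < SIZE; y++) {
  for (let x = 0; x < SIZE; x++) {
    if (asciiGrid[y][x] !== ' ') positions.push(x - SIZE / 2, SIZE / 2 - y, 0);
  }
}

const geometry = new THREE.BufferGeometry();
geometry.setAttribute('position', new THREE.Float32BufferAttribute(positions, 3));
const particles = new THREE.Points(
  geometry,
  new THREE.PointsMaterial({ size: 0.4, color: 0xffc0cb }) // pink, for the falling-petals look
);
// each frame: nudge selected y values down in the position attribute, set needsUpdate = true,
// and wrap points back to the top for the sakura fall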
janusweb /tree/ (rip)
decaying world? shield? room? playcanvas site?
yard-sale/dist/config.json for example
can search and replace? it's only referenced there
WORKS!!!
{"name":"hyperfy_dream2.mp4","type":"audio","file":{"filename":"chad-world.mp4","size":1400502,"hash":"f678419b0b8a4bb43561f8de5498d475","url":"files/assets/155157583/1/chad-world.mp4"}}
just changed a couple places
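The swap could be scripted as a plain search-and-replace over the exported config (filenames are the ones from the note above; which one replaces which is just illustrative):

const fs = require('fs');

const path = 'yard-sale/dist/config.json';
const original = fs.readFileSync(path, 'utf8');
// the file is only referenced here, so a blind string swap is enough
const patched = original.split('chad-world.mp4').join('hyperfy_dream2.mp4');
fs.writeFileSync(path, patched);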
allocate a number of M3 NFTs into the mix, randomize into the 3D scenes, then airdrop to holders after the reveal
name idea: Mementos
Fixed camera only, no flying around
Similar to: https://zora.co/collect/zora:0x7f0f1f3b1f42f0b27788dc8919ab418a0f113ce6/1
https://playcanvas.com/project/1178994/overview/lofi
Create 12 different scenes for asset swapping, with support for dynamic objects that can be loaded from remote URLs, including some whitelisted NFT collections that don't have CORS issues
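Sketch of the remote-object loading with a host whitelist (assumes three.js GLTFLoader and an import map or bundler resolving three/addons; the whitelisted hosts are placeholders):

import { GLTFLoader } from 'three/addons/loaders/GLTFLoader.js';

const WHITELISTED_HOSTS = ['sketchfab.com', 'tokens.mathcastles.xyz']; // placeholder list
const loader = new GLTFLoader();

function loadRemoteObject(scene, url) {
  const host = new URL(url).hostname;
  if (!WHITELISTED_HOSTS.some((h) => host === h || host.endsWith('.' + h))) {
    console.warn('blocked non-whitelisted host:', host);
    return;
  }
  loader.load(url, (gltf) => scene.add(gltf.scene));
}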
https://sketchfab.com/m3org/models
launch collection from party dao? so royalties go back to whoever contributed nfts?
Internet Archive model
Terraform + particles
threejs / webxr app?
aframe has a weird issue where it won't play video sometimes
modelviewer + animated texture (video supported)
janusweb + ambient lighting?
airdrop pre-reveal state nfts, then do reveal
NOTE: post processing = no VR/AR support
modelviewer + video + post processing? https://modelviewer.dev/examples/postprocessing/#selective-effects bloom
bonkler as inspiration
mashup of influences
ffmpeg -hwaccel cuda -hwaccel_output_format cuda -i "input_file.mp4" \
  -c:a copy -vf "scale_cuda=-2:480" -c:v h264_nvenc "output_file.mp4"
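Worth checking that the local build actually ships the encoder before wiring anything up:

ffmpeg -hide_banner -encoders | grep nvenc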