# Anata Optimization Notes
###### tags: `anata`
[toc]
## Traits Stats
- https://docs.google.com/spreadsheets/d/1ll3CTx8kslsrlRDSRUiyRfERSlEbwtz55bIChZVJbUs/edit?usp=sharing

## Optimization Guidelines
### VRchat
- https://docs.vrchat.com/docs/avatar-performance-ranking-system#pc-limits
- https://docs.vrchat.com/docs/avatar-performance-ranking-system#quest-limits

### Hyperfy
- https://docs.hyperfy.io/avatars#rank-system

### W3C Ecommerce Group Recommendations
- https://github.com/KhronosGroup/3DC-Asset-Creation/blob/main/asset-creation-guidelines/RealtimeAssetCreationGuidelines.md
## Material optimization
Files with many images need to be texture-atlased; we could use the Simplygon reduction pipeline with material casting. There's a set of male assets with a problematic number of textures per file that needs closer inspection; I'll work on a script to output a text file / blend for each.
## Reduction only
If an asset has only 1 texture / 1 draw call but a high polygon count, apply only decimation / reduction via Simplygon to bring the triangle count down. Ideally we want avatars at 70k triangles MAX for PC avatar use.
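The two passes above can be sketched as a triage rule over the stats columns collected below (`Images`, `Draw calls`, `Triangles`). This is a hypothetical sketch: the function name and the atlas threshold are assumptions, not project constants; only the 70k triangle budget comes from the notes.

```python
# Hypothetical triage of one stats row (thresholds are assumptions, tune per platform):
# many textures/draw calls -> texture atlas; single material but high-poly -> reduce.
MAX_DRAW_CALLS = 2       # assumed atlas threshold
MAX_TRIANGLES = 70_000   # PC avatar triangle budget noted above

def triage(images: int, draw_calls: int, triangles: int) -> str:
    """Return which optimization pass an asset needs: 'atlas', 'reduce', or 'ok'."""
    if images > MAX_DRAW_CALLS or draw_calls > MAX_DRAW_CALLS:
        return "atlas"   # texture atlas / material casting
    if triangles > MAX_TRIANGLES:
        return "reduce"  # decimation / reduction only
    return "ok"

print(triage(images=4, draw_calls=4, triangles=10_237))   # atlas
print(triage(images=1, draw_calls=1, triangles=120_000))  # reduce
```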
---
## Setting up Dev Environment
- Start in the anata repo
- Run `scripts/get_stats.sh` to get metadata about glb files
  - File size
  - Draw calls
  - Triangles
- Run `scripts/visualize_stats.py` to preview and filter for problematic assets
- Move problematic assets for male / female to a new folder using `to_optimize.sh`
### get_stats.sh
Get statistics about all the glb files in the working directories using gltf-pipeline. This outputs `metadata/stats/"$filename".csv`, where `$filename` is `male`, `female`, or `shared`:
`for body in "male" "female" "shared"; do ./scripts/get_stats.sh files/"$body" KB; done`
> Note: You can also change the output directory with a third argument:
> Example: `./scripts/get_stats.sh files/male KB output/folder`
Example output:
```
Name,Size (KB),Images,Draw calls,Triangles,File Path
Abstract_Vision_Brace_plus_Cursed_Brace.glb,1058.57,2,2,28,files/male/BRACE/Abstract_Vision_Brace_plus_Cursed_Brace/Abstract_Vision_Brace_plus_Cursed_Brace.glb
Abstract_Visions_of_Flame_Brace.glb,3140.28,4,4,10237,files/male/BRACE/Abstract_Visions_of_Flame_Brace/Abstract_Visions_of_Flame_Brace.glb
Admiral_Hat.glb,585.67,2,2,3768,files/male/HATS/Admiral_Hat/Admiral_Hat.glb
```
> Note: It might be cleaner to just use the File Path as the Name, and use `basename` in other scripts to derive the Name
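The `basename` idea from the note can be checked against the example output above; the CSV literal here is just the sample rows, and the assertion shows the Name column is redundant with File Path:

```python
import csv
import io
import os

# Sample row in the format get_stats.sh emits (see example output above)
stats_csv = """Name,Size (KB),Images,Draw calls,Triangles,File Path
Admiral_Hat.glb,585.67,2,2,3768,files/male/HATS/Admiral_Hat/Admiral_Hat.glb
"""

# Derive Name from File Path with basename instead of storing it separately
for row in csv.DictReader(io.StringIO(stats_csv)):
    name = os.path.basename(row["File Path"])
    assert name == row["Name"]  # the two columns carry the same information
    print(name, row["Triangles"])
```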
### visualize_stats.py
`visualize_stats.py` creates a chart of the draw call + triangle results from `get_stats.sh`, and also outputs a csv, json, and txt file filtered by the thresholds we set. It will help us see how much we have optimized once we can compare side by side.
`for body in "male" "female" "shared"; do python3 scripts/visualize_stats.py metadata/stats/"$body".csv -draws 3 -tris 5000 -o all; done`
> The `male/female/shared_viz.json` file is the main input for the next script
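A minimal sketch of the threshold filter that `-draws 3 -tris 5000` implies, assuming flag-at-or-above semantics (the real script's comparison may differ); the sample rows come from the example CSV above:

```python
# Flag rows whose draw calls or triangles meet the thresholds,
# mimicking `-draws 3 -tris 5000` (assumed >= semantics).
rows = [
    {"Name": "Admiral_Hat.glb", "Draw calls": 2, "Triangles": 3768},
    {"Name": "Abstract_Visions_of_Flame_Brace.glb", "Draw calls": 4, "Triangles": 10237},
]

def flagged(row, max_draws=3, max_tris=5000):
    return row["Draw calls"] >= max_draws or row["Triangles"] >= max_tris

# Names that would end up in the *_viz output for to_optimize.sh
to_optimize = [r["Name"] for r in rows if flagged(r)]
print(to_optimize)
```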
### to_optimize.sh
After running `visualize_stats.py` we'll have information about the files we're going to optimize in a convenient json format. In `to_optimize.sh` we gather the full paths to the glbs we want to optimize and copy them to a new folder, preserving the same directory structure as our working dir.
```bash!
#!/bin/bash
## Copies files that need to be optimized into a new folder
## Make sure there are no lingering files in the various folders
## Usage: bash scripts/to_optimize.sh metadata/stats/male_viz.csv
## All: for body in "male" "female" "shared"; do bash scripts/to_optimize.sh metadata/stats/"$body"_viz.csv; done

# Check if the input file is provided
if [ -z "$1" ]; then
    echo "Error: Input file not provided."
    exit 1
fi

# Check if the input file exists
if [ ! -f "$1" ]; then
    echo "Error: Input file '$1' not found."
    exit 1
fi

# Derive the body (male/female/shared) from the input CSV filename,
# so it is also available after the loop for the JSON copy below
body=$(basename "$1" _viz.csv)

# Read the Name column (skipping the header) into a variable
files=$(awk -F',' 'NR>1 { print $1 }' "$1")

# Iterate over each file and find its path
while IFS= read -r file; do
    # Use find with -iname to search for the file by name
    path=$(find files/male files/female files/shared -iname "$file")

    # Check if the file path is found
    if [ -z "$path" ]; then
        echo "Warning: File '$file' not found."
        continue
    fi

    # Extract the folder and category using string manipulation
    filename=$(basename "$path")
    directory=$(dirname "$path")
    folder=$(basename "$directory")
    category=$(basename "$(dirname "$directory")")

    # Recreate the working-dir structure under files/optimize
    if ! mkdir -p "files/optimize/files/$body/$category/$folder"; then
        echo "Error: Failed to create folder 'files/optimize/files/$body/$category/$folder'."
        break
    fi
    if ! cp -v "$path" "files/optimize/files/$body/$category/$folder/"; then
        echo "Error: Failed to copy file '$path' to 'files/optimize/files/$body/$category/$folder'."
    fi
done <<< "$files"

# Copy the matching viz JSON alongside the optimized files
if ! cp "metadata/stats/${body}_viz.json" "files/optimize/glb_files.json"; then
    echo "Error: Failed to copy JSON file to optimize folder"
fi
```
---
## Optimization Flow
- Optimize assets
- Refresh stats
  - `bash scripts/get_stats.sh files/optimize/files/male KB files/optimize`
  - `python3 scripts/visualize_stats.py files/optimize/male.csv -o json`
- Tweak optimization scripts
- Repeat
### Blender scripts
- todo
### model-splitter
- https://github.com/playkostudios/model-splitter
Example: Split a model named `model.glb` into the folder `output` with 4 LOD levels (100%, 75%, 50%, and 25% of the mesh kept) and texture sizes of 100%, 75%, 50%, and 25% respectively:
`model-splitter model.glb output 1 0.75:75% 0.5:50% 0.25:25%`
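To make those ratios concrete, here's the triangle / texture budget the four LOD levels imply for a 10,237-triangle asset with 2048px textures (pure arithmetic on the ratios above, not actual model-splitter output; the starting numbers are taken from the example CSV earlier):

```python
# LOD ratios from the model-splitter example: (mesh kept, texture scale)
lods = [(1.0, 1.0), (0.75, 0.75), (0.5, 0.5), (0.25, 0.25)]
triangles, texture = 10_237, 2048

# Rough per-LOD budgets (decimators won't hit these counts exactly)
for i, (mesh_ratio, tex_ratio) in enumerate(lods):
    print(f"LOD{i}: ~{int(triangles * mesh_ratio)} tris, "
          f"{int(texture * tex_ratio)}px textures")
```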
#### model_split.sh
```bash!
#!/bin/bash
# Script: model_split.sh
# Description: Process glb files using model-splitter based on the input JSON file
# Usage: ./model_split.sh
# - Ensure that the 'model-splitter' command is installed and accessible on the system
# - Modify the 'input_file' variable to point to the appropriate JSON file
# - The processed files will be saved in the same directory as their respective glb files

input_file="files/optimize/glb_files.json"

# Loop over each "File Path" entry in the JSON file
# (jq already emits raw path strings, one per line)
while IFS= read -r file_path
do
    # Check if the file path is empty
    if [ -z "$file_path" ]; then
        echo "Error: Empty file path in $input_file"
        continue
    fi

    # Use the directory of the glb file as the output directory
    output_dir=$(dirname "$file_path")

    # Run model-splitter and check that it succeeded
    if ! model-splitter "$file_path" "$output_dir" 1 0.75:75% 0.5:50% 0.25:25%; then
        echo "Error: model-splitter command failed for file: $file_path"
        continue
    fi

    echo "Processed file: $file_path"
done < <(jq -r 'map(.["File Path"])[]' "$input_file")
```
### simplygon
- https://github.com/microsoft/Simplygon-API-Examples/tree/release/10.1/Src/Cs
- https://github.com/microsoft/Simplygon-API-Examples/tree/release/10.1/Src/Python
### Refresh stats
Run both of these scripts again to get fresh metadata about the triangles and draw calls of ALL the output 3D models generated by the optimization scripts:
`bash scripts/get_stats.sh files/optimize/files/male KB files/optimize`
`python3 scripts/visualize_stats.py files/optimize/male.csv -o json`
---
```mermaid
flowchart TB
    Start(Start in anata repo)
    GetStats(Run scripts/get_stats.sh)
    Preview(Run scripts/visualize_stats.py)
    Move(Move problematic assets for male/female)
    Step2(Step 2)
    Refresh(Refresh stats)
    GetStats2(bash scripts/get_stats.sh files/optimize/files/male KB files/optimize)
    Preview2(python3 scripts/visualize_stats.py files/optimize/male.csv -o json)
    Tweak(Tweak optimization scripts)
    Start --> GetStats
    GetStats -->|File size, Draw Calls, Triangles| Preview
    Preview -->|Filter problematic assets| Move
    Move --> Step2
    Step2 --> Refresh
    Refresh --> GetStats2
    GetStats2 --> Preview2
    Preview2 --> Tweak
    Tweak -->|Repeat| Step2
```
---
## Measuring Traits
**Setup**
```bash!
## Get stats for everything, refresh when needed
for body in "male" "female" "shared"; do bash scripts/get_stats.sh files/optimize/files/"$body" KB files/optimize; done
for body in "male" "female" "shared"; do python3 scripts/visualize_stats.py files/optimize/"$body".csv -o json; done
```
plotly bar charts: https://plotly.com/python/bar-charts/
**Stats on male female shared**
**Summary for male.csv**

| | Sum | Average | Median | Minimum | Maximum |
| --- | --- | --- | --- | --- | --- |
| Size (KB) | 2156959.06 | 3283.042709 | 1553.97 | 4.5 | 37800.71 |
| Images | 992.00 | 1.509893 | 1.00 | 0.0 | 13.00 |
| Draw calls | 985.00 | 1.499239 | 1.00 | 1.0 | 6.00 |
| Triangles | 4417217.00 | 6723.313546 | 3340.00 | 4.0 | 182168.00 |

**Summary for female.csv**

| | Sum | Average | Median | Minimum | Maximum |
| --- | --- | --- | --- | --- | --- |
| Size (KB) | 1238358.48 | 1837.327122 | 1309.77 | 34.14 | 82723.01 |
| Images | 797.00 | 1.182493 | 1.00 | 1.00 | 4.00 |
| Draw calls | 1251.00 | 1.856083 | 2.00 | 1.00 | 17.00 |
| Triangles | 11571477.00 | 17168.363501 | 12230.00 | 2.00 | 264156.00 |

**Summary for shared.csv**

| | Sum | Average | Median | Minimum | Maximum |
| --- | --- | --- | --- | --- | --- |
| Size (KB) | 160634.31 | 1147.387929 | 714.385 | 13.93 | 6995.21 |
| Images | 146.00 | 1.042857 | 1.000 | 0.00 | 3.00 |
| Draw calls | 159.00 | 1.135714 | 1.000 | 1.00 | 3.00 |
| Triangles | 997454.00 | 7124.671429 | 3560.000 | 4.00 | 88884.00 |
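The Sum / Average / Median / Minimum / Maximum rows above can be reproduced with a small stdlib helper over any stats column (a sketch; CSV loading is omitted and the sample values come from the example output earlier):

```python
import statistics

def summarize(values):
    """Sum / Average / Median / Minimum / Maximum, matching the summary rows above."""
    return {
        "Sum": sum(values),
        "Average": statistics.mean(values),
        "Median": statistics.median(values),
        "Minimum": min(values),
        "Maximum": max(values),
    }

# e.g. the Triangles column of a few sample rows
print(summarize([28, 10237, 3768]))
```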
Stats per category are next.
**Unpack Textures**
```bash!
find ./ -iname "*.glb" -exec sh -c 'gltf-pipeline -i "$1" -s -o "$(dirname "$1")/$(basename "$1" .glb).gltf"' sh {} \;
```
---
## Textures
Original: 1.2 MB
- imagecompressor (web): 370 KB
- optipng: 580 KB
- imagemagick: 663 KB
- pngquant: 372 KB

Converted to jpg: 212 KB
- 80% quality (imagemagick): 141 KB
- 80% quality (jpegoptim): 114 KB
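For comparison, the savings each PNG tool achieves relative to the 1.2 MB original (simple arithmetic on the figures listed above, assuming 1 MB = 1024 KB):

```python
# Percent saved vs. the 1.2 MB PNG original listed above
original_png_kb = 1.2 * 1024  # 1228.8 KB
results_kb = {"imagecompressor": 370, "optipng": 580,
              "imagemagick": 663, "pngquant": 372}

for tool, kb in results_kb.items():
    saved = 100 * (1 - kb / original_png_kb)
    print(f"{tool}: {saved:.0f}% smaller")
```

By this measure pngquant and the imagecompressor web tool both cut roughly 70% of the original size, which is why they lead the list.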


First convert all the glb to glTF recursively, separating the textures
`find ./files -iname "*.glb" -exec sh -c 'gltf-pipeline -i "$1" -s -o "$(dirname "$1")/$(basename "$1" .glb).gltf"' _ {} \;`
Make a CSV of all the PNG + JPG paths
`find ./files -type f -iname "*.png" -exec sh -c 'identify -format "%d/%f,%wx%h,%b\n" "$0" | awk -F, "{gsub(/[^0-9]/, \"\", \$3); printf \"%s,%s,%.2f,%s\\n\", \$1, \$2, \$3/1024, \$3}"' {} \; > output_png.csv`
`find ./files -type f -iname "*.jpg" -exec sh -c 'identify -format "%d/%f,%wx%h,%b\n" "$0" | awk -F, "{gsub(/[^0-9]/, \"\", \$3); printf \"%s,%s,%.2f,%s\\n\", \$1, \$2, \$3/1024, \$3}"' {} \; > output_jpg.csv`
> Example lines:
> `path,resolution,kilobytes,bytes`
> `./files/shared/HEAD/Nature_Spirit_Bare/Horns 4.png,2048x2048,153.18,156856`

1. First have all the final assets exported
2. Resize non-po2 images to power of 2
3. Optimize textures (pngquant, jpegoptim)
4. Rebundle into glTF binary files
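Step 2 above boils down to snapping each texture dimension to a power of two. A minimal sketch (rounding to the *nearest* po2 is an assumption; a real pipeline might always round down to save memory):

```python
def is_pow2(n: int) -> bool:
    """True when n is a power of two (single bit set)."""
    return n > 0 and n & (n - 1) == 0

def nearest_pow2(n: int) -> int:
    """Snap a texture dimension to the nearest power of two."""
    lower = 1 << (n.bit_length() - 1)  # largest po2 <= n
    upper = lower << 1
    return lower if n - lower <= upper - n else upper

print(nearest_pow2(1000), is_pow2(2048))  # 1024 True
```

Run over the `resolution` column of `output_png.csv` / `output_jpg.csv`, this flags which textures actually need resizing before optimization.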

First, tweak `analyze/category_stats_md.py` to add new values later for before / after comparison.
It relies on json like `male_viz.json`:
```json
[
  {
    "Category": "BRACE",
    "Name": "Abstract_Vision_Brace_plus_Cursed_Brace",
    "Size": 1058.57,
    "Images": 2,
    "Draw calls": 2,
    "Triangles": 28,
    "File Path": "files/optimize/files/male/BRACE/Abstract_Vision_Brace_plus_Cursed_Brace/Abstract_Vision_Brace_plus_Cursed_Brace.glb"
  },
  {
    "Category": "BRACE",
    "Name": "Abstract_Visions_of_Flame_Brace",
    "Size": 3140.28,
    "Images": 4,
    "Draw calls": 4,
    "Triangles": 10237,
    "File Path": "files/optimize/files/male/BRACE/Abstract_Visions_of_Flame_Brace/Abstract_Visions_of_Flame_Brace.glb"
  }
]
```
Stats gathered from `files/` live in `metadata/stats/` and `files/optimize`.

Change the default commands to the `files/` version to compare ALL assets:
`for body in "male" "female" "shared"; do ./scripts/get_stats.sh files/"$body" KB; done`
`for body in "male" "female" "shared"; do python3 scripts/visualize_stats.py metadata/stats/"$body".csv -draws 3 -tris 5000 -o all; done`
- `get_stats.sh`
- `visualize_stats.py`
  - outputs `metadata/stats/*_viz.csv`
- `to_optimize.sh`
- `analyze/category_stats_md.py`
  - high-level detail in MD file