m3
omi
This blog post is also a war room: https://hyperfy.io/msf (WIP)
1. Community calls all sort of look and feel the same
2. Information silos increase coordination cost
Atm I'm interested in 3D pipelines that can convert text / APIs into dynamic objects, to bring conversation threads and other live data into these spaces.
You know how, when working on a project with other people, it can be hard to keep track of everything going on? To bring any big or creative idea to life you need a team with a unified, goal-oriented vision of it. Art rooms (aka war rooms) can help.
War rooms are shared work spaces where people can come together and communicate with each other in a really visual way. They’re filled with all sorts of visual aids, like sketches, storyboards, sticky notes, and other tools that can make brainstorming and decision-making easier.
SPATIAL MEMORY > SHORT-TERM MEMORY
Notes plastered everywhere help extend our memory, since you eventually learn where information lives. If you're new, you should be able to get a sense of where the project's at, and where you might contribute, just by being in the room.
For remote teams it is better to use virtual tools. Metaverse organizations should eat their own dog food and use spatial products. We can't honestly build an open metaverse without getting our hands dirty and practicing what we preach.
- Ideas come from other ideas.
Breakthroughs come from discovering new connections between seemingly disparate pieces of information. Thus the first step for each new project should be to assemble as many inputs in one space as possible. A thinking tool should make it easy to bring together inputs of all kinds and remix them freely.
- Take inspiration from physical workspaces.
Creative people tend to nest: a writer in their home office, a woodworker in their shop, an artist in their studio — full of work surfaces like desks and drafting tables, drawers and pegboards full of tools, pinboards, scraps of paper, and more. They are messy, informal, mixed together, freeform, and personal. Compare this to the neatly-sorted lists and grids of files, notes, and photos on our computer. A thinking tool should try to capture some of the informal, mixed-up, and personal nature of our physical workspaces.
Source: https://museapp.com/principles/
Gardens are imperfect by design. They don't hide their rough edges or claim to be a permanent source of truth.
Source: https://maggieappleton.com/garden-history
Because garden notes are densely linked, a garden explorer can enter at any location and follow any trail they like through the content, rather than being dumped into a "most recent" feed.
People click hundreds of links each day, and every URL and tab is siloed from the others. It's better if we can see a higher-level picture in one place.
Combine images into posters
Use infinite canvas tools
Make them dynamic
Ctrl-P on a GitHub discussion page, save as PDF
Here is how to convert a URL to a PDF using headless Chromium, accomplishing the equivalent of Ctrl+P on a website without a GUI:
chromium --headless --disable-gpu --print-to-pdf=153_test.pdf https://github.com/omigroup/gltf-extensions/discussions/153
With a little scripting you can create a combined PDF of all the discussions in a GitHub repo, making it much faster to browse everything the community has talked about.
Converting a PDF to images using ImageMagick
convert -density 300 -background white -alpha remove file.pdf -quality 90 output_%d.jpg
You can then drop the images into PureRef and arrange them into a poster like this, to see the big picture for that particular repo.
Btw, GitHub Actions runners have Chromium installed by default: https://github.com/actions/runner-images/blob/main/images/linux/Ubuntu2204-Readme.md
Benefit: you can flip through all the pages of an org's documentation for a quick rundown, something you can't do link by link, tab by tab. Browser navigation has too many speed bumps and silos.
We can't fetch repo discussions through the usual REST API yet, which is why we use the GraphQL API instead.
To pass schema validation, all GraphQL API calls must meet these standards:
- Clients must supply a first or last argument on any connection.
- Values of first and last must be within 1-100.
- Individual calls cannot request more than 500,000 total nodes.
source: https://docs.github.com/en/graphql/overview/resource-limitations
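Because of that 100-item cap on first, repos with more discussions need cursor pagination: request pageInfo { endCursor hasNextPage } on the connection and feed the cursor back in as after on the next call. A minimal sketch of pulling the cursor out with jq (the response below is a made-up sample, not real API output):

```shell
# Sample response with the standard connection pageInfo fields (invented for illustration)
cat > page1.json <<'EOF'
{"data":{"repository":{"discussions":{
  "pageInfo":{"endCursor":"Y3Vyc29yOjU=","hasNextPage":true},
  "nodes":[{"number":153,"title":"sample"}]}}}}
EOF
cursor=$(jq -r '.data.repository.discussions.pageInfo.endCursor' page1.json)
echo "$cursor"   # pass as discussions(first: 100, after: "<cursor>") in the next request
```

Loop until hasNextPage comes back false and you've walked the whole connection.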
Here's the query; you can replace the owner and name with any public organization and repo:
query {
  repository(owner: "omigroup", name: "gltf-extensions") {
    discussions(first: 5) {
      totalCount
      nodes {
        id
        category { name }
        upvoteCount
        updatedAt
        createdAt
        number
        title
        body
        author { login }
        comments(first: 30) {
          nodes {
            id
            author { login }
            body
          }
        }
        labels(first: 30) {
          nodes { id name color description }
        }
      }
    }
  }
}
Make sure you install the GitHub CLI (gh) first, instructions here. Then save this as discussions.graphql and run this one-liner to spit out JSON:
gh api graphql -F owner='omigroup' -F repo='gltf-extensions' -F query=@discussions.graphql | tee output.json | python3 -m json.tool > out.json
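One caveat worth flagging: -F owner=… and -F repo=… only do something if the query declares matching GraphQL variables; the query above hard-codes the owner and name, so those flags are silently ignored. A parameterized version would look like this (field selection trimmed down for brevity):

```shell
# gh maps each -F flag onto the GraphQL variable of the same name
cat > discussions.graphql <<'EOF'
query($owner: String!, $repo: String!) {
  repository(owner: $owner, name: $repo) {
    discussions(first: 5) {
      totalCount
      nodes { number title }
    }
  }
}
EOF
# then (needs gh installed and authenticated):
# gh api graphql -F owner='omigroup' -F repo='gltf-extensions' -F query=@discussions.graphql
```

With this shape, swapping repos is just a matter of changing the flags.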
Btw, the tools used here (including the GitHub CLI) come preinstalled on GitHub's Ubuntu Actions runners, meaning we can automate everything: https://github.com/actions/runner-images/blob/main/images/linux/Ubuntu2204-Readme.md#cli-tools
Notes
Documentation → Knowledge bases → LLMs → 2D interface (ChatGPT) → Avatars
We can summon autonomous AI agents that can pierce through the mountains of data we are constantly generating, helping us curate and map new relationships.
Source: https://twitter.com/graycrawford/status/1600921304453959680
https://twitter.com/elonmusk/status/1599259281584402432
Idea: import content into Blender as a 3D object, then move it X meters based on time
janusweb
https://www.tldraw.com/r/1670016548432 tldraw notes
Prerequisites
https://github.com/cli/cli/blob/trunk/docs/install_linux.md#official-sources
Note: GH cli comes with Github ubuntu runner: https://github.com/actions/runner-images/blob/main/images/linux/Ubuntu2204-Readme.md#cli-tools
Notes
type -p curl >/dev/null || (sudo apt update && sudo apt install curl -y)
curl -fsSL https://cli.github.com/packages/githubcli-archive-keyring.gpg | sudo dd of=/usr/share/keyrings/githubcli-archive-keyring.gpg \
&& sudo chmod go+r /usr/share/keyrings/githubcli-archive-keyring.gpg \
&& echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/githubcli-archive-keyring.gpg] https://cli.github.com/packages stable main" | sudo tee /etc/apt/sources.list.d/github-cli.list > /dev/null \
&& sudo apt update \
&& sudo apt install gh -y
Save this query as numbers.graphql; it grabs just the metadata of the latest 100 discussions:
query {
  repository(owner: "omigroup", name: "gltf-extensions") {
    discussions(first: 100) {
      nodes {
        createdAt
        number
        title
      }
    }
  }
}
## Get a list of the past 100 URLs
gh api graphql -F owner='omigroup' -F repo='gltf-extensions' -F query=@numbers.graphql | tee output.json | python3 -m json.tool > numbers_100.json
## Sort numbers in order and save as numbers.txt
`jq '.data.repository.discussions.nodes[].number' numbers_100.json | sort -n > numbers.txt`
Capture the website with:
capture-website "https://github.com/omigroup/gltf-extensions/discussions/153" --type=jpeg --quality=1 --full-page --element=".discussion" --scale-factor=2 --output=body_153.jpg --overwrite
Grab the title of the discussion (you can use a variable if looping over a list) and save it as $title:
title=$(curl -s "https://github.com/omigroup/gltf-extensions/discussions/153" | pup 'span.js-issue-title.markdown-title text{}' | tr -s '[:space:]' ' ')
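If pup isn't handy, a rougher fallback is to pull the page's <title> tag with sed. A sketch over a made-up sample page (the HTML and title below are invented for illustration):

```shell
# Stand-in for the fetched discussion page (invented sample HTML)
cat > page.html <<'EOF'
<html><head><title>Audio extension review · omigroup/gltf-extensions · Discussion #153</title></head></html>
EOF
# Extract whatever sits between <title> and </title>
title=$(sed -n 's:.*<title>\(.*\)</title>.*:\1:p' page.html)
echo "$title"
```

The pup selector is more precise, since the <title> tag carries the repo name and discussion number as suffixes.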
Convert title to image
convert -size 1360x100 xc:white -gravity Center -pointsize 64 -fill black -annotate 0 "$title" title.png
You can also save it with the discussion in the name, e.g. "title_$id".png
Combine github title image with body image (replace image names)
convert title.png body.jpg -append combined_153.jpg
#!/bin/bash
# Downloads github discussions as JSON + screenshot
# Run as: bash script.sh owner repo
# Example: bash script.sh omigroup gltf-extensions

# check if gh is installed
if ! command -v gh &> /dev/null; then
  echo "Error: gh is not installed. Please install it from https://cli.github.com/manual/installation" >&2
  exit 1
fi

# check if capture-website-cli is installed
if ! command -v capture-website &> /dev/null; then
  echo "capture-website-cli is not installed. Installing now..."
  if ! npm install --global capture-website-cli; then
    echo "Error: Failed to install capture-website-cli" >&2
    exit 1
  fi
fi

# check if jq is installed
if ! command -v jq &> /dev/null; then
  echo "jq is not installed. Installing now..."
  if ! sudo apt-get install jq -y; then
    echo "Error: Failed to install jq" >&2
    exit 1
  fi
fi

# check if pup is installed
# note: pup isn't packaged in the Ubuntu repos, so install it via Go
# (assumes Go is available, as it is on GitHub's runners)
if ! command -v pup &> /dev/null; then
  echo "pup is not installed. Installing now..."
  if ! go install github.com/ericchiang/pup@latest; then
    echo "Error: Failed to install pup. See https://github.com/ericchiang/pup" >&2
    exit 1
  fi
fi

# assign command-line arguments to variables
if [ $# -ne 2 ]; then
  echo "Usage: $0 <owner> <repo>" >&2
  exit 1
fi
owner="$1"
repo="$2"

# Define the GraphQL query
QUERY_ALL=$(cat <<EOF
query {
  repository(owner: "$owner", name: "$repo") {
    discussions(first: 4) {
      totalCount
      nodes {
        id
        category { name }
        upvoteCount
        updatedAt
        createdAt
        number
        title
        body
        author { login }
        comments(first: 30) {
          nodes {
            id
            author { login }
            body
          }
        }
        labels(first: 30) {
          nodes { id name color description }
        }
      }
    }
  }
}
EOF
)

# Write the GraphQL query to a file
echo "writing the GraphQL query to a file"
if ! echo "$QUERY_ALL" > numbers.graphql; then
  echo "Error: Failed to write GraphQL query to file" >&2
  exit 1
fi

## Get a list of the past X discussion numbers
echo "getting a list of URLs from past discussions"
if ! gh api graphql -F owner="$owner" -F repo="$repo" -F query=@numbers.graphql | tee "$owner"-"$repo".json | jq -r '.data.repository.discussions.nodes[].number' | sort -n > numbers.txt; then
  echo "Error: Failed to get list of discussion numbers" >&2
  exit 1
fi

# loop through each line in the numbers.txt file
while read -r number; do
  # capture the website as a JPEG image
  echo "capturing website"
  if ! capture-website "https://github.com/$owner/$repo/discussions/$number" --type=jpeg --quality=0.5 --full-page --element=".discussion" --scale-factor=1 --output=body_"$number".jpg --overwrite; then
    echo "Error: Failed to capture website for discussion $number" >&2
    continue
  fi

  # get the width of the body image using ImageMagick
  if ! width=$(identify -format "%w" body_"$number".jpg 2>/dev/null); then
    echo "Failed to get width of image body_$number.jpg" >&2
    continue
  fi

  # get the title of the discussion using curl and pup
  if ! title=$(curl -s "https://github.com/$owner/$repo/discussions/$number" | pup 'span.js-issue-title.markdown-title text{}' | tr -s '[:space:]' ' ' 2>/dev/null); then
    echo "Failed to get title for discussion $number" >&2
    continue
  fi

  # create a title image using ImageMagick with the same width as the body image
  echo "grabbing title"
  if ! convert -size "${width}"x100 xc:white -gravity Center -pointsize 42 -fill black -annotate 0 "$title" title_"$number".png 2>/dev/null; then
    echo "Failed to create title image for discussion $number" >&2
    continue
  fi

  # combine the title and body images into a single image
  echo "joining title and body"
  if ! convert title_"$number".png body_"$number".jpg -append "$repo"_"$number".jpg 2>/dev/null; then
    echo "Failed to combine title and body images for discussion $number" >&2
    continue
  fi
done < numbers.txt

# Clean up intermediate files (only files this script created)
echo "cleaning up files"
if ! rm title_*.png body_*.jpg numbers.txt numbers.graphql 2>/dev/null; then
  echo "Failed to clean up temporary files" >&2
  exit 1
fi
echo "finished processing"
This also outputs the JSON, and the whole thing can run as a weekly cron job.
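Since GitHub's runners already ship with these tools, the schedule can live next to the repo itself. A hypothetical weekly GitHub Actions workflow (the file path, script name, and artifact name are all illustrative; GH_TOKEN is needed so gh api can authenticate):

```yaml
# .github/workflows/war-room.yml (illustrative)
name: weekly-war-room
on:
  schedule:
    - cron: "0 6 * * 1"   # Mondays at 06:00 UTC
jobs:
  snapshot:
    runs-on: ubuntu-22.04
    env:
      GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
    steps:
      - uses: actions/checkout@v3
      - run: bash script.sh omigroup gltf-extensions
      - uses: actions/upload-artifact@v3
        with:
          name: discussion-posters
          path: "*.jpg"
```

Each week the combined discussion images land in the run's artifacts, ready to pull into a canvas tool.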
What if GitHub discussions could be transmogrified into a virtual world, or roleplayed as NPCs?
pandoc-graphql
https://github.com/dennybritz/pandoc-graphql Turn your local documents into a GraphQL API using pandoc
Why would you ever want to do that?
- Write content in any format, such as Markdown or Jupyter, query it, and transform it to output formats like HTML, PDF and RSS using an API.
- Cleanly separate your content from the consuming application(s). For example, your blog posts or notes may live in a content-only Github repository and are then served to multiple websites or static site generators.
- Built-in support for academic citations via pandoc-citeproc
Here's a snippet from craig chat piped into Whisper (ty ifire)
cool concept: draw your conversations
https://github.com/OpenGenus/vidsum - generate a summary from a video + subtitles
gh api graphql -F owner='omigroup' -F repo='omigroup' -F query=@discussions.graphql | tee output.json | python -m json.tool | tee test.json
curl https://github.com/omigroup/omigroup/discussions/322
capture-website https://github.com/omigroup/omigroup/discussions/322 --element="discussion_bucket" --type=jpeg --quality=0.5 --output=screenshot3.jpg
Prompt: what are some ways I can visualize a JSON data dump of my organization's meeting notes from past GitHub discussions into something that would help members remember past details and coordinate on projects better?
One way to visualize a JSON data dump of your organization's meeting notes from past GitHub discussions is to use a data visualization tool or library to create charts, graphs, or other visual representations of the data. This can help members of your organization understand and remember the details of past meetings, and coordinate on projects more effectively.
Here are some specific examples of how you could visualize your organization's meeting notes data using different tools and techniques:
- Use a charting library to create bar charts or line graphs that show the number of meetings held over time, or the number of discussions and comments in each meeting. This could help members see how active the organization has been over time, and identify trends or patterns in the data.
- Use a word cloud generator to create a visual representation of the most common words or phrases used in the meeting notes. This could help members quickly see which topics were discussed most frequently, and identify key themes or issues in the meetings.
- Use a network analysis tool to create a network diagram that shows the connections between different members of the organization based on who participated in the meetings and discussions. This could help members see who has been involved in the meetings, and identify potential collaboration opportunities.
- Use a geospatial mapping tool to create a map that shows the location of the members who participated in the meetings. This could help members see where the meetings were held and who attended from different parts of the world.
Overall, there are many different ways you could visualize your organization's meeting notes data to make it more useful and accessible for members. The specific approach you choose will depend on the specific goals and needs of your organization, as well as the tools and techniques you have available.
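As a zero-dependency starting point for the word-cloud idea, coreutils alone can produce the frequency counts behind one. A sketch over a stand-in notes file (the sample text is invented; real input would be the discussion bodies pulled out of the JSON dump with jq):

```shell
# Stand-in for the meeting-notes dump (invented sample text)
cat > notes.txt <<'EOF'
glTF extensions discussion: audio extension review, audio spec feedback,
portals proposal, audio emitter naming, portals follow-up
EOF
# Split on non-letters, lowercase, then count and rank the words
tr -cs '[:alpha:]' '\n' < notes.txt \
  | tr '[:upper:]' '[:lower:]' \
  | sort | uniq -c | sort -rn | head -5
```

The ranked counts can then be fed straight into any word cloud generator; in practice you'd also want to strip stop words first.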
convert -density 150 test.pdf out_%d.png
Combining PDFs
Do this in the directory where you saved numbers.txt from above.
#!/bin/bash
# Read numbers.txt line by line and print each discussion to PDF from the browser
while read -r number; do
  chromium --headless --disable-gpu --print-to-pdf="$number.pdf" "https://github.com/omigroup/gltf-extensions/discussions/$number"
done < numbers.txt
## Combine into 1 PDF (requires pdftk to be installed)
pdftk $(ls -v *.pdf | tac) cat output combined.pdf
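Here ls -v sorts the filenames numerically and tac reverses the list, so the highest-numbered (newest) discussion lands first in the combined PDF. A quick way to sanity-check that ordering with dummy files before merging:

```shell
# Demonstrate the page ordering on empty placeholder PDFs
mkdir -p pdfdemo && cd pdfdemo
touch 1.pdf 2.pdf 10.pdf
ls -v *.pdf | tac   # numeric sort, reversed: highest discussion number first
cd ..
```

Drop the tac if you'd rather read the discussions oldest-first.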
1/ Built many virtual worlds over the last 6 months that bring together orgs and working groups that share the goal of building an open metaverse
The goal is to acknowledge each other's existence via common spaces, and then to accelerate information sharing and collaboration
2/ Atm it still feels like each working group is too silo'd from the others, working in a vacuum. It also takes too many clicks to access meeting notes
Coordination costs are high atm; I feel like the bumblebee in this xkcd comic, hopping around a zillion places to share relevant info
3/ When I join a discord / zoom call for anything, this is how it feels: like walking into a generic empty office. They all start to blur and look the same
This is generally bad for onboarding; as a new person there's little context as to what's going on
4/ The metaverse is an extremely cross-disciplinary field; our work can get bottlenecked by the info systems we're using to collaborate within and across domains
Linear inputs achieve linear outputs; knowledge transfer runs at dial-up speeds when relying on powerpoints