# Tips for Knowledge Spaces

###### tags: `m3` `omi`

![](https://hackmd.io/_uploads/S1X6iwGIn.jpg)

This blog post is also a war room: https://hyperfy.io/msf (WIP)

https://www.fastcompany.com/3028471/google-ventures-your-design-team-needs-a-war-room-heres-how-to-set-one-up

---

## Coordination Challenges

- Working group meetings are at a weekly or bi-weekly cadence
- We often only pay attention to the previous week's notes
- Discord / Zoom calls all sorta look and feel similar
- We are bombarded with so many URLs daily that they are easily forgotten
- What if we could see a high-level overview of the past 5 meetups?
- Can this improve onboarding?
- Can we expand organizations to have a longer memory?
- How do we do it without a huge lift in learning new things?
- How can different but goal-aligned organizations share ideas / notes and collaborate more easily?

**1. Community calls all sort of look and feel the same**

![](https://hackmd.io/_uploads/rJ3n-_fIh.png)
![](https://i.imgur.com/zx7LmAA.png)

**2. Information Silos Increase Coordination Cost**

![](https://i.imgur.com/enijg8Y.png)
![](https://hackmd.io/_uploads/HJiNXkgJa.png)

---

## Art / War Rooms

> Atm I'm interested in 3D pipelines that can convert text / APIs into dynamic objects, to bring conversation threads and other live data into these spaces

You know how, when working on a project with other people, it can be hard to keep track of everything going on? To bring any big or creative idea to life you need a team with a unified, goal-oriented vision of it. Art rooms (aka war rooms) can help.

![](https://i.imgur.com/JAPnwB3.jpg)

War rooms are shared work spaces where people can come together and communicate with each other in a really visual way. They're filled with all sorts of visual aids, like sketches, storyboards, sticky notes, and other tools that make brainstorming and decision-making easier.

**SPATIAL MEMORY > SHORT-TERM MEMORY**

Notes plastered everywhere help extend our memory, since you eventually learn where information lives. If you're new, you should be able to get a sense of where the project's at and where you might be able to contribute just by being in the room.

### Virtual War Rooms

For remote teams it is better to use virtual tools. Metaverse organizations should be eating their own dog food and using spatial products more. We can't honestly build an open metaverse without getting our hands dirty and practicing what we preach.

![](https://i.imgur.com/s0zrDom.jpg)

> 2. Ideas come from other ideas.
> Breakthroughs come from discovering new connections between seemingly disparate pieces of information. Thus the first step for each new project should be to assemble as many inputs in one space as possible. A thinking tool should make it easy to bring together inputs of all kinds and remix them freely.
>
> 9. Take inspiration from physical workspaces.
> Creative people tend to nest: a writer in their home office, a woodworker in their shop, an artist in their studio — full of work surfaces like desks and drafting tables, drawers and pegboards full of tools, pinboards, scraps of paper, and more. They are messy, informal, mixed together, freeform, and personal. Compare this to the neatly-sorted lists and grids of files, notes, and photos on our computer. A thinking tool should try to capture some of the informal, mixed-up, and personal nature of our physical workspaces.
> Source: https://museapp.com/principles/

![](https://i.imgur.com/78klmwH.jpg)

---

## Tips from Digital Gardeners

> ![](https://hackmd.io/_uploads/B1F9eaG8h.png)
>
> Gardens are imperfect by design. They don't hide their rough edges or claim to be a permanent source of truth.
>
> Source: https://maggieappleton.com/garden-history

> ![](https://hackmd.io/_uploads/Hk6AlTz82.png)
>
> Because garden notes are densely linked, a garden explorer can enter at any location and follow any trail they link through the content, rather than being dumped into a "most recent" feed.

### Recycle Digital Assets

- https://github.com/openmetaverse/omf-archive
- https://github.com/omigroup/omi-archive

People click hundreds of links each day, and every URL / tab is separate from the others. It's better if we can see a higher-level picture in one place.

---

### Make Art Panels

Combine images into panels:

![](https://i.imgur.com/p8BMtMP.jpg)
![](https://i.imgur.com/R8r6OgS.jpg)

**Use infinite canvas tools**

- https://infinitecanvas.tools/
- https://www.pureref.com/
- https://www.tldraw.com/

![](https://i.imgur.com/Vb6y27C.png)

Make them dynamic:

- https://hyperfy.io/omigroup

![](https://i.imgur.com/SOE69Nt.png)

---

### Tip: Recycle PDFs into Posters

![](https://i.imgur.com/0PUeqRe.gif)

**Ctrl-P on a GitHub discussion page, save as PDF**

![](https://i.imgur.com/vxROMl6.png)

Here is how you can convert a URL to a PDF using headless Chromium, accomplishing the equivalent of Ctrl-P on a website without a GUI:

`chromium --headless --disable-gpu --print-to-pdf=153_test.pdf https://github.com/omigroup/gltf-extensions/discussions/153`

With a little scripting knowledge you can create a combined PDF of all the discussions in a GitHub repo, so you can browse the whole community's discussions more quickly (see the "Combining PDFs" snippet near the end of this post).

**Converting PDF to Images using ImageMagick**

`convert -density 300 -background white -alpha remove file.pdf -quality 90 output_%d.jpg`

You can then arrange the page images into a poster in PureRef, like the one below, to see the big picture at a glance:

![](https://i.imgur.com/38Rn5v1.png)
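To tie the two commands above together, here's a minimal sketch of the whole pipeline. It assumes Chromium and ImageMagick (with Ghostscript for PDF support) are installed, and that you've written the discussion numbers you care about into a hypothetical `numbers.txt` (one number per line); the repo URL is just an example:

```bash
#!/bin/bash
# Sketch: print each discussion to PDF, rasterize the pages, then tile them into one poster
repo_url="https://github.com/omigroup/gltf-extensions/discussions"

while read -r number; do
  # headless "Ctrl-P" for each discussion page
  chromium --headless --disable-gpu --print-to-pdf="$number.pdf" "$repo_url/$number"
  # rasterize every PDF page to JPG (one image per page)
  convert -density 150 -background white -alpha remove "$number.pdf" -quality 90 "page_${number}_%d.jpg"
done < numbers.txt

# tile all page images into a single poster (6 columns; tweak to taste)
montage page_*.jpg -tile 6x -geometry +5+5 -background white poster.jpg
```

PureRef gives you more control over the layout, but `montage` is handy when you want the poster step to run unattended.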
Btw GitHub Actions runners have Chromium installed by default: https://github.com/actions/runner-images/blob/main/images/linux/Ubuntu2204-Readme.md

Benefit: you can flip through all the pages of an org's documentation for a quick rundown, something you can't do link by link, tab by tab. Browser navigation has so many speed bumps and silos.

---

### Tip: Export GitHub Discussions via GraphQL

We can't use our usual methods with the [GitHub API](https://api.github.com/repos/omigroup/gltf-extensions) to fetch repo discussions yet, which is why we use the GraphQL method instead.

![](https://i.imgur.com/c875VHK.png)

- https://github.com/skevy/graphiql-app/releases Electron app used

> To pass schema validation, all GraphQL API calls must meet these standards:
>
> - Clients must supply a first or last argument on any connection.
> - Values of first and last must be within 1-100.
> - Individual calls cannot request more than 500,000 total nodes.
>
> Source: https://docs.github.com/en/graphql/overview/resource-limitations

Here's the query; you can replace the owner and name with any public organization and project repo:

```
query {
  repository(owner: "omigroup", name: "gltf-extensions") {
    discussions(first: 5) {
      totalCount
      nodes {
        id
        category {
          name
        }
        upvoteCount
        updatedAt
        createdAt
        number
        title
        body
        author {
          login
        }
        comments(first: 30) {
          nodes {
            id
            author {
              login
            }
            body
          }
        }
        labels(first: 30) {
          nodes {
            id
            name
            color
            description
          }
        }
      }
    }
  }
}
```

Make sure you install the GitHub CLI (`gh`) first, [instructions here](https://github.com/cli/cli/blob/trunk/docs/install_linux.md#official-sources). Then save the query as `discussions.graphql` and run this CLI one-liner to spit out JSON:

`gh api graphql -F owner='omigroup' -F repo='gltf-extensions' -F query=@discussions.graphql | tee output.json | python3 -m json.tool | tee test.json > out.json`

Btw the tools used here are installed by default on the Linux images for GitHub Actions runners, meaning we can automate everything!

> The GitHub CLI comes with the GitHub Ubuntu runner: https://github.com/actions/runner-images/blob/main/images/linux/Ubuntu2204-Readme.md#cli-tools

**Notes**

- https://docs.github.com/en/graphql
- https://docs.github.com/en/graphql/guides/using-the-graphql-api-for-discussions
- https://docs.github.com/en/graphql/overview/explorer live web version
- https://github.com/intel/dffml/blob/4dae1a3e6b6d37b81f71659599d1ddef800ac176/scripts/dump_discussion.py sauce
- https://gist.github.com/pdxjohnny/9f3dc18f0a42d3107aaa2363331d8faa useful resource from Intel
- https://rbrahul.github.io/Awesome-JSON-Viewer/# useful JSON viewer
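Once you have `output.json` from the one-liner above, `jq` — which the batch script later in this post also relies on — can flatten it into something skimmable. A minimal sketch; the field paths match the query above, and the tab-separated output format is just an example:

```bash
# List each discussion as "number <tab> title <tab> url"
jq -r '.data.repository.discussions.nodes[]
       | "\(.number)\t\(.title)\thttps://github.com/omigroup/gltf-extensions/discussions/\(.number)"' output.json
```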
---

## Future Directions

Documentation → Knowledge bases → LLMs → 2D interfaces (ChatGPT) → Avatars

We can summon autonomous AI agents that pierce through the mountains of data we are constantly generating, helping us curate it and map new relationships.

![](https://i.imgur.com/kT7nTW5.jpg)

Source: https://twitter.com/graycrawford/status/1600921304453959680

### Trees of Thought

![](https://i.imgur.com/jeFROaH.jpg)

https://twitter.com/elonmusk/status/1599259281584402432

> Use a word cloud generator to create a visual representation of the most common words or phrases used in the meeting notes. This could help members quickly see which topics were discussed most frequently, and identify key themes or issues in the meetings.

![](https://hackmd.io/_uploads/rJOBhl-Pn.png)

Idea: import content into Blender as 3D objects, then move each object X meters along an axis based on time.

![](https://i.imgur.com/mpKpjK0.png)

---

# FIN / NOTES (EXTRA)

### GitHub Discussions to Posters

janusweb

![](https://i.imgur.com/3LT0HpY.jpg)

https://www.tldraw.com/r/1670016548432 tldraw notes

## GraphQL method

We can't use regular methods with the GitHub API to fetch repo discussions yet, but we can with this GraphQL method: https://api.github.com/repos/omigroup/gltf-extensions

- https://docs.github.com/en/graphql
- https://docs.github.com/en/graphql/guides/using-the-graphql-api-for-discussions
- https://github.com/skevy/graphiql-app/releases Electron app used
- https://docs.github.com/en/graphql/overview/explorer live web version

![](https://i.imgur.com/c875VHK.png)

> To pass schema validation, all GraphQL API calls must meet these standards:
>
> - Clients must supply a first or last argument on any connection.
> - Values of first and last must be within 1-100.
> - Individual calls cannot request more than 500,000 total nodes.
>
> Source: https://docs.github.com/en/graphql/overview/resource-limitations

Here's the query:

```
query {
  repository(owner: "omigroup", name: "omigroup") {
    discussions(first: 5) {
      totalCount
      nodes {
        id
        category {
          name
        }
        upvoteCount
        updatedAt
        createdAt
        number
        title
        body
        author {
          login
        }
        comments(first: 30) {
          nodes {
            id
            author {
              login
            }
            body
          }
        }
        labels(first: 30) {
          nodes {
            id
            name
            color
            description
          }
        }
      }
    }
  }
}
```

Save this as `discussions.graphql`, then run this CLI one-liner to get JSON out:

`gh api graphql -F owner='omigroup' -F repo='omigroup' -F query=@discussions.graphql | tee output.json | python3 -m json.tool | tee test.json > out.json`

**Prerequisites**

https://github.com/cli/cli/blob/trunk/docs/install_linux.md#official-sources

> Note: the GitHub CLI comes with the GitHub Ubuntu runner: https://github.com/actions/runner-images/blob/main/images/linux/Ubuntu2204-Readme.md#cli-tools

**Notes**

- https://github.com/intel/dffml/blob/4dae1a3e6b6d37b81f71659599d1ddef800ac176/scripts/dump_discussion.py sauce
- https://gist.github.com/pdxjohnny/9f3dc18f0a42d3107aaa2363331d8faa useful resource from Intel
- https://rbrahul.github.io/Awesome-JSON-Viewer/# useful JSON viewer

---

### GitHub Discussions to Posters

1. You need `gh` installed; installation steps for Linux are here: https://cli.github.com/manual/installation

```bash!
type -p curl >/dev/null || (sudo apt update && sudo apt install curl -y)
curl -fsSL https://cli.github.com/packages/githubcli-archive-keyring.gpg | sudo dd of=/usr/share/keyrings/githubcli-archive-keyring.gpg \
&& sudo chmod go+r /usr/share/keyrings/githubcli-archive-keyring.gpg \
&& echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/githubcli-archive-keyring.gpg] https://cli.github.com/packages stable main" | sudo tee /etc/apt/sources.list.d/github-cli.list > /dev/null \
&& sudo apt update \
&& sudo apt install gh -y
```

2. Save this as **numbers.graphql** (you can use `last` in place of `first`; 100 seems to be the max):

```
query {
  repository(owner: "omigroup", name: "gltf-extensions") {
    discussions(first: 100) {
      nodes {
        createdAt
        number
        title
      }
    }
  }
}
```

3. Get a list of the GitHub discussion numbers (used to construct the URLs) and save it as **numbers.txt**:

```bash!
## Get a list of the past 100 URLs
gh api graphql -F owner='omigroup' -F repo='gltf-extensions' -F query=@numbers.graphql | tee output.json | python3 -m json.tool | tee test.json > numbers_100.json

## Sort the numbers in order and save as numbers.txt
jq '.data.repository.discussions.nodes[].number' numbers_100.json | sort -n > numbers.txt
```

### capture-website-cli method

Capture the website with:

`capture-website "https://github.com/omigroup/gltf-extensions/discussions/153" --type=jpeg --quality=1 --full-page --element=".discussion" --scale-factor=2 --output=body_153.jpg --overwrite`

Grab the title of the discussion (you can use a variable if you're working from a list) and save it as `$title`:

`title=$(curl -s "https://github.com/omigroup/gltf-extensions/discussions/153" | pup 'span.js-issue-title.markdown-title text{}' | tr -s '[:space:]' ' ')`

Convert the title to an image:

`convert -size 1360x100 xc:white -gravity Center -pointsize 64 -fill black -annotate 0 "$title" title.png`

> You can also save it as `"title_$id".png`

Combine the GitHub title image with the body image (replace the image names as needed):

`convert title.png body.jpg -append combined_153.jpg`

**[Example image output](https://i.imgur.com/o4cM4oX.jpg)**
### Batch GH Discussions to Images

```bash!
#!/bin/bash
# Downloads github discussions as JSON + screenshots
# Run as: bash script.sh owner repo
# Example: bash script.sh omigroup gltf-extensions

# check if gh is installed
if ! command -v gh &> /dev/null
then
    echo "Error: gh is not installed. Please install it from https://cli.github.com/manual/installation" >&2
    exit 1
fi

# check if capture-website-cli is installed
if ! command -v capture-website &> /dev/null
then
    echo "capture-website-cli is not installed. Installing now..."
    if ! npm install --global capture-website-cli; then
        echo "Error: Failed to install capture-website-cli" >&2
        exit 1
    fi
fi

# check if jq is installed
if ! command -v jq &> /dev/null
then
    echo "jq is not installed. Installing now..."
    if ! sudo apt-get install jq -y; then
        echo "Error: Failed to install jq" >&2
        exit 1
    fi
fi

# check if pup is installed
if ! command -v pup &> /dev/null
then
    echo "pup is not installed. Installing now..."
    if ! sudo apt-get install pup -y; then
        echo "Error: Failed to install pup" >&2
        exit 1
    fi
fi

# assign command-line arguments to variables
if [ $# -ne 2 ]; then
    echo "Usage: $0 <owner> <repo>" >&2
    exit 1
fi
owner="$1"
repo="$2"

# Define the GraphQL query
QUERY_ALL=$(cat <<EOF
query {
  repository(owner: "$owner", name: "$repo") {
    discussions(first: 4) {
      totalCount
      nodes {
        id
        category {
          name
        }
        upvoteCount
        updatedAt
        createdAt
        number
        title
        body
        author {
          login
        }
        comments(first: 30) {
          nodes {
            id
            author {
              login
            }
            body
          }
        }
        labels(first: 30) {
          nodes {
            id
            name
            color
            description
          }
        }
      }
    }
  }
}
EOF
)

# Write the GraphQL query to a file
echo "writing the graphQL query to a file"
if ! echo "$QUERY_ALL" > numbers.graphql; then
    echo "Error: Failed to write GraphQL query to file" >&2
    exit 1
fi

## Get a list of the past X discussion numbers
echo "getting a list of URLs from past discussions"
if ! gh api graphql -F owner="$owner" -F repo="$repo" -F query=@numbers.graphql | tee "$owner"-"$repo".json | jq -r '.data.repository.discussions.nodes[].number' | sort -n > numbers.txt; then
    echo "Error: Failed to get list of discussion numbers" >&2
    exit 1
fi

# loop through each line in the numbers.txt file
while read -r number
do
    # capture the website as a JPEG image
    echo "capturing website"
    if ! capture-website "https://github.com/$owner/$repo/discussions/$number" --type=jpeg --quality=0.5 --full-page --element=".discussion" --scale-factor=1 --output=body_"$number".jpg --overwrite; then
        echo "Error: Failed to capture website for discussion $number" >&2
        continue
    fi

    # get the width of the body image using ImageMagick
    if ! width=$(identify -format "%w" body_"$number".jpg 2>/dev/null); then
        echo "Failed to get width of image body_$number.jpg" >&2
        continue
    fi

    # get the title of the discussion using curl and pup
    if ! title=$(curl -s "https://github.com/$owner/$repo/discussions/$number" | pup 'span.js-issue-title.markdown-title text{}' | tr -s '[:space:]' ' ' 2>/dev/null); then
        echo "Failed to get title for discussion $number" >&2
        continue
    fi

    # create a title image using ImageMagick with the same width as the body image
    echo "grabbing title"
    if ! convert -size "${width}"x100 xc:white -gravity Center -pointsize 42 -fill black -annotate 0 "$title" title_"$number".png 2>/dev/null; then
        echo "Failed to create title image for discussion $number" >&2
        continue
    fi

    # combine the title and body images into a single image
    echo "joining title and body"
    if ! convert title_"$number".png body_"$number".jpg -append "$repo"_"$number".jpg 2>/dev/null; then
        echo "Failed to combine title and body images for discussion $number" >&2
        continue
    fi
done < numbers.txt

# Clean up intermediate files (-f so missing files don't abort the cleanup)
echo "cleaning up files"
if ! rm -f title_*.png body_*.jpg numbers.txt engine.bin numbers.graphql 2>/dev/null; then
    echo "Failed to clean up temporary files" >&2
    exit 1
fi

echo "finished processing"
```

This also outputs the JSON, and you can run it as a weekly cron job.
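For example, a crontab entry along these lines would rebuild the images every Monday morning (the path and schedule are placeholders), or you could adapt it into a scheduled GitHub Actions workflow:

```
# m h dom mon dow  command — run every Monday at 06:00
0 6 * * 1  cd /path/to/war-room && bash script.sh omigroup gltf-extensions >> poster.log 2>&1
```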
---

### Webaverse

- https://docs.webaverse.com/engineering/app/scene/scene-file-quickstart docs

What if GitHub discussions could be transmogrified into a virtual world, or roleplayed by NPCs?

![](https://i.imgur.com/xASv22c.png)

---

**pandoc-graphql**

https://github.com/dennybritz/pandoc-graphql

Turn your local documents into a GraphQL API using pandoc.

> **Why would you ever want to do that?**
> - Write content in any format, such as Markdown or Jupyter, query it, and transform it to output formats like HTML, PDF and RSS using an API.
> - Cleanly separate your content from the consuming application(s). For example, your blog posts or notes may live in a content-only Github repository and are then served to multiple websites or static site generators.
> - Built-in support for academic citations via pandoc-citeproc

### Screenshots

- https://github.com/sindresorhus/capture-website-cli Capture screenshots of websites from the command-line

---

## Hyperfy Experiment

- https://hyperfy.io/qenwd1a23j (WARNING) open on desktop only
- https://github.com/CraigChat/craig bot used
- https://top.gg/bot/272937604339466240 bot used

![](https://i.imgur.com/3T6G29u.png)
![](https://i.imgur.com/QCb4ZDO.png)
![](https://i.imgur.com/oJ2n7rY.png)
![](https://i.imgur.com/f3hHC9T.png)
![](https://i.imgur.com/1zb6Ege.png)

---

## OpenAI

### Summarization

![](https://i.imgur.com/F7HC1uX.png)

- https://github.com/miso-belica/sumy Module for automatic summarization of text documents and HTML pages
- https://github.com/juan-csv/GPT3-text-summarization Summarization and topic generation using GPT-3

![](https://github.com/juan-csv/gpt3-text-summarization/raw/master/Media/app.gif)

### Whisper

- https://github.com/openai/whisper
- https://replicate.com/openai/whisper
- https://github.com/dsymbol/decipher
- https://github.com/openai/whisper/discussions/98

Here's a snippet from a Craig chat recording piped into Whisper (ty ifire):

![](https://i.imgur.com/5serGgs.png)
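To reproduce that locally, here's a minimal sketch; it assumes `pip install -U openai-whisper` plus `ffmpeg`, and the filename is just a stand-in for whatever Craig exported:

```bash
# Transcribe a Craig recording and write subtitles next to it
whisper craig-recording.flac --model base --output_dir transcripts --output_format srt
```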
> Cool concept: [draw your conversations](https://github.com/calebolson123/website/blob/0f69262648946e83529337ef33bc6662afbe4385/_posts/2022-10-14-draw-your-conversations.md)

![](https://i.imgur.com/RnyTSw9.png)
![](https://i.imgur.com/1neNccX.png)

https://github.com/OpenGenus/vidsum - generate a summary from a video + subtitles

---

## Notes

- https://api.github.com/orgs/omigroup
- https://docs.github.com/en/rest/overview/endpoints-available-for-github-apps
- https://otter.ai/

`gh api graphql -F owner='omigroup' -F repo='omigroup' -F query=@discussions.graphql | tee output.json | python -m json.tool | tee test.json`

`curl https://github.com/omigroup/omigroup/discussions/322`

`capture-website https://github.com/omigroup/omigroup/discussions/322 --element="discussion_bucket" --type=jpeg --quality=0.5 --output=screenshot3.jpg`

---

## ChatGPT

> what are some ways I can visualize a json data dump of my organization's meeting notes from past github discussions into something that would be useful for members to remember past details and coordinate on projects better?

One way to visualize a JSON data dump of your organization's meeting notes from past GitHub discussions is to use a data visualization tool or library to create charts, graphs, or other visual representations of the data. This can help members of your organization understand and remember the details of past meetings, and coordinate on projects more effectively.

Here are some specific examples of how you could visualize your organization's meeting notes data using different tools and techniques:

Use a charting library to create bar charts or line graphs that show the number of meetings held over time, or the number of discussions and comments in each meeting. This could help members see how active the organization has been over time, and identify trends or patterns in the data.

Use a word cloud generator to create a visual representation of the most common words or phrases used in the meeting notes. This could help members quickly see which topics were discussed most frequently, and identify key themes or issues in the meetings.

Use a network analysis tool to create a network diagram that shows the connections between different members of the organization based on who participated in the meetings and discussions. This could help members see who has been involved in the meetings, and identify potential collaboration opportunities.

Use a geospatial mapping tool to create a map that shows the location of the members who participated in the meetings. This could help members see where the meetings were held and who attended from different parts of the world.

Overall, there are many different ways you could visualize your organization's meeting notes data to make it more useful and accessible for members. The specific approach you choose will depend on the specific goals and needs of your organization, as well as the tools and techniques you have available.
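As a quick experiment with the word-cloud suggestion, a rough command-line sketch: it assumes `pip install wordcloud` (which provides the `wordcloud_cli` tool) and the `output.json` produced by the GraphQL export earlier; the file names are just examples:

```bash
# Dump all discussion bodies into one text file, then render a word cloud image from it
jq -r '.data.repository.discussions.nodes[].body' output.json > bodies.txt
wordcloud_cli --text bodies.txt --imagefile wordcloud.png
```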
---

`convert -density 150 test.pdf out`

![](https://i.imgur.com/a4Yhxre.png)
![](https://i.imgur.com/rC560wc.jpg)

**Combining PDFs**

Do this in the repo you saved `numbers.txt` in from above:

```bash!
#!/bin/bash
# Read numbers.txt line by line and output a PDF from the browser for each discussion
while read -r number; do
    chromium --headless --disable-gpu --print-to-pdf="$number.pdf" "https://github.com/omigroup/gltf-extensions/discussions/$number"
done < numbers.txt

## Combine into 1 PDF (requires pdftk to be installed)
pdftk $(ls -v *.pdf | tac) cat output combined.pdf
```

![](https://hackmd.io/_uploads/ryco2b-D3.png)

---

## Thread

1/ Built many virtual worlds over the last 6 months that bring together orgs and working groups who share the goal of building an open metaverse.

The goal is to acknowledge each other's existence via common spaces, and then to accelerate information sharing / collaboration.

![](https://hackmd.io/_uploads/r1A0Edh_3.png)

2/ Atm it still feels like each working group is too silo'd from the others, working in a vacuum.

It also takes too many clicks to access meeting notes.

Coordination costs are high atm; I feel like the bumblebee in this xkcd comic, hopping around a zillion places to share relevant info.

![](https://hackmd.io/_uploads/H17erdnOn.png)

3/ When I join a Discord / Zoom call for anything, this is how it feels: like walking into a generic, empty office. They all start to blur and look the same.

This is generally bad for onboarding; as a new person there's little context as to what's going on.

![](https://hackmd.io/_uploads/r1TgSd2O3.png)

4/ The metaverse is an extremely cross-disciplinary field, and our work can get bottlenecked by the info systems we use to collaborate within and across domains.

Linear inputs achieve linear outputs; knowledge transfer happens at dial-up speeds when we rely on powerpoints.

![](https://hackmd.io/_uploads/r1wZSunOh.png)