# OMI 10-12-23
[toc]
## AI meeting summary
The meeting covers fixing issues in the project: updating the README, pushing commits, and finding someone with Blender expertise to create plugins. The conversation also touches on Blender's audio capabilities and changes to the audio specification, along with formatting fixes and potential improvements to the JSON structure. Overall, the team is working through technical issues and improving various aspects of the project.
The participants discuss the usage of emitters in a scene. They debate whether the emitters property should be singular or plural and whether emitters belong at the scene level or the node level. There is agreement that emitters should be controllable with animation and that scene-level context is useful. The conversation also covers the difference between global and positional emitters, their properties, and their compatibility with stereo audio and HRTF. Some inconsistencies in the documentation are identified, but overall progress is made in clarifying the specification for audio emitters in glTF.
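For concreteness, here is a minimal sketch of the two emitter placements under discussion, using the property names from the KHR_audio draft as it stood around this meeting (audio, sources, emitters, type); these names may change as the spec evolves, and the file names are placeholders:

```json
{
  "extensions": {
    "KHR_audio": {
      "audio": [{ "uri": "music.mp3" }, { "uri": "click.mp3" }],
      "sources": [
        { "audio": 0, "autoPlay": true, "gain": 0.8 },
        { "audio": 1 }
      ],
      "emitters": [
        { "type": "global", "sources": [0] },
        { "type": "positional", "sources": [1], "positional": { "refDistance": 1.0 } }
      ]
    }
  },
  "scenes": [{ "extensions": { "KHR_audio": { "emitters": [0] } } }],
  "nodes": [{ "extensions": { "KHR_audio": { "emitter": 1 } } }]
}
```

Emitter 0 is global and attached at the scene level (plural `emitters`), so it plays without spatialization; emitter 1 is positional and attached to a node (singular `emitter`), so it is spatialized from that node's transform.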
In this part of the transcript, the discussion revolves around audio properties and behaviors in a 3D environment. The participants discuss different approaches to implementing audio, such as zero-dimensional ("zero-D") audio or positional audio with limited range. They also cover the challenges of using non-Ambisonic music files and how to ensure consistent playback. A hypothetical animation spec that could animate scene properties like audio is mentioned, but no clear consensus is reached on its use case.
The conversation then shifts to the concept of global audio tied to specific nodes or objects in a scene. The participants explore ideas such as colliding objects triggering audio sources and toggling scene-level audio based on collision areas, and note the need for an animation spec for glTF that can address these issues effectively. There is also a brief discussion about an autoplay property versus a playing property for controlling when an audio source starts or stops. The participants recall past debates about these properties and suggest options such as animating volume instead of relying solely on autoplay.
This part of the discussion ends with proposed changes to the KHR Audio specification, including support for seeking to positions in tracks and pairing animations with behavior definitions through an animation pointer extension.
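A sketch of how the animation-pointer idea could drive an emitter property such as gain, assuming the KHR_animation_pointer extension together with the draft KHR_audio layout shown earlier; the pointer path is illustrative:

```json
{
  "animations": [
    {
      "samplers": [{ "input": 0, "output": 1, "interpolation": "LINEAR" }],
      "channels": [
        {
          "sampler": 0,
          "target": {
            "path": "pointer",
            "extensions": {
              "KHR_animation_pointer": {
                "pointer": "/extensions/KHR_audio/emitters/0/gain"
              }
            }
          }
        }
      ]
    }
  ]
}
```

Animating gain down to zero and back up approximates start/stop control without a dedicated playing property, which is one of the options recalled in the discussion.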
Overall, this part of the transcript covers spatialization, playback control, associating audio with nodes and objects, collision-triggered actions, and potential improvements to the existing specifications.
The final part covers changes and formatting related to KHR lights, JSON formatting and schema titles, merging the project, voting on the merge with emojis, access permissions for certain individuals, and plans for future PR updates. The conversation concludes with farewells.
## Action items
- Update the README and fix any inconsistencies.
- Have someone with a detailed eye review everything to ensure consistency.
- Push up a branch and check permissions.
- Look for someone experienced in Blender to create Blender plugins.
- Contact Humble Tim for assistance with a Blender implementation.
- Create a Blender plugin that uses sound strips to take in an arbitrary sound file and export it with a glTF object (see the Python sketch after this list).
- Match the latest KHR audio spec update.
- Bundle audio with the exported glTF in Blender.
- Clarify the difference between global and positional emitters in the documentation.
- Document the use cases and differences between 2D, 3D, and zero-D audio.
- Add a vote to merge the changes made in the transcript.
- Ensure that everyone who should have access to the necessary resources has it.
- Make a PR to update the KHR audio spec in the glTF repo.
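As referenced in the Blender plugin item above, a minimal sketch of the collection step using Blender's Python API (bpy); it only gathers the sound files referenced by the scene's sound strips, and how they get bundled into the exported glTF is left to the exporter:

```python
import bpy

def collect_sound_strips(scene: bpy.types.Scene) -> list[tuple[str, str]]:
    """Gather (name, absolute path) for each sound strip in the sequencer."""
    sounds = []
    seq_editor = scene.sequence_editor
    if seq_editor is None:
        return sounds  # scene has no sequencer data
    for strip in seq_editor.sequences_all:
        if strip.type == 'SOUND' and strip.sound is not None:
            # Resolve //-relative paths against the .blend file location.
            path = bpy.path.abspath(strip.sound.filepath)
            sounds.append((strip.sound.name, path))
    return sounds

# Example: list what would be bundled with the exported glTF.
for name, path in collect_sound_strips(bpy.context.scene):
    print(f"would export sound {name!r} from {path}")
```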
## Outline
The conversation focuses on audio specifications, and the timestamps indicate when each point was raised rather than strict chapter boundaries. A general outline of the key points:
- Chapter 1: Introduction to Audio Specifications
- 1:34:17 - Discussion about a data block menu
- 1:35:17 - Mention of a data block menu related to audio
- Chapter 2: Creating a Blender Plugin for Audio Export
- 1:36:21 - Action step to create a Blender plugin for exporting audio with a glTF object
- 1:36:56 - Mention of Humble Tim starting work on bundling audio with the exported glTF in Blender
- Chapter 3: Using GPT for Post-production
- 1:36:44 - Suggestion to use GPT for post-production
- 1:38:14 - Question about the release date of ChatGPT
- 1:39:13 - Discussion about using GPT to write a README for creating content
- Chapter 4: Clarifying Audio Emitter Types
- 1:52:05 - Discussion on global and positional audio emitters
- 1:53:25 - Clarification that global emitters don't define positional data
- 1:54:17 - Explanation of syntax for adding global audio emitters to scenes
- Chapter 5: Defining Audio Emitters in Scenes and Nodes
- 2:01:50 - Discussion about allowing audio emitters on nodes
- 2:02:42 - Question about the difference between global and positional emitters
- 2:09:50 - Clarification on the use of global audio as a child of a button in a scene
- Chapter 6: Documenting Audio Specifications
- 2:12:37 - Discussion on making title changes and clarifications in the documentation
- 2:21:12 - Reviewing past conversations and documentation on audio specifications
- Chapter 7: Miscellaneous Discussions
- 2:25:24 - Mention of volume and current time in audio
- 2:27:31 - Explanation of animating the current timestamp on the audio source
- 2:29:01 - Formatting changes in the documentation
- 2:34:18 - Addressing a question from Aaron about using emojis and taking a photograph
- 2:39:22 - Discussion on updating a specific item
## Notes
- Someone suggests having someone with a detailed eye look over everything.
- They discuss packing sound into the .blend file.
- They mention using Fireflies recording for describing something.
- The idea of creating a Blender plugin that uses sound strips to export sound with a glTF object is brought up.
- They ask if anyone has anything to add and mention using GPT for something in post.
- Humble Tim starts working on bundling audio with the exported glTF in Blender.
- They talk about having GPT write a README, which they think would be helpful for creating content.
- They mention the need to double-check if anything was missed.
- They discuss a change related to negative C and ask for comments.
- They notice a stray bracket and address it.
- They discuss a password and screen reading.
- They talk about pushing a commit with formatting fixes, including missing or extra commas.
- They refer to a big example and discuss a global example.
- They mention audio emitters of type global and positional and how animations come into play.
- They discuss the difference between global and positional emitters and suggest simplifying the documentation.
- They talk about scene-level emitters and whether the property should be plural, and about editing and publishing the document.
- They discuss a trailing comma and fixing it.
- They talk about global and positional emitters and clarify the emitter section description.
- They question the difference between global and positional emitters in behavior.
- They discuss listening to audio and the relevance of nodes.
- They notice a discrepancy in the document regarding global audio emitters.
- They discuss the global property and its relevance to the audio spec.
- They talk about defining how audio is emitted.
- They discuss defining five sources and overlapping global emitters.
- They discuss putting global audio as a child of a button in a scene.
- They ask about a specific term and mention a previous conversation.
- They discuss replay versus autoplay and making changes.
- They suggest adding a play property (see the source sketch after this list) and go over the changes made.
- They discuss formatting changes, changing bullets to dashes, and JSON formatting.
- They mention Aaron and merging the changes.
- They talk about using emojis and voting to merge.
- They mention the Extensions room and sharing a screenshot.
- They ask if Aaron is able to do something.
- They confirm agreement and wrap up the discussion.
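For reference, a sketch of the audio source object that the replay/autoplay discussion revolves around, using the draft KHR_audio property names; the play property suggested above is a proposal and is not shown:

```json
{
  "sources": [
    { "audio": 0, "autoPlay": true, "loop": true, "gain": 0.6 }
  ]
}
```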