# ai-lab-workshop-20251008-initial-planning
Perfect — tomorrow’s session (Oct 8) is **Media Production with AI: Prep (Session 1 of 3)**, so this is the one focused on **pre-production**: planning, research, storyboarding, scheduling, and concept visualization.
Below is a **long brainstorm list** of things you could do, organized by type so you can mix and match depending on the group’s energy and what tech setup you have ready in the Lab.
---
## 🧭 Framing and Setup
* **Quick framing talk (5 min):** how pre-production determines the success of any media project, and how AI lets us compress the “thinking → making” loop.
* **Prompt scaffolding mini-demo:** show how structured prompts can replace traditional shot lists, treatments, or mood boards.
* **The “producer’s triangle”:** fast ↔ cheap ↔ good (pick any two), then ask how AI might change that constraint.
---
## 🧠 Concept Development
* **Idea generation sprint:** give everyone a shared theme (“migration,” “time,” “thresholds”) and 10 min to co-develop a concept outline with ChatGPT or Claude.
* **Genre pivot exercise:** have AI rewrite a concept across genres (documentary → horror → rom-com) to illustrate tonal flexibility.
* **AI as creative partner:** prompt comparison—same idea pitched to 3 models, discuss differences in narrative logic and aesthetic taste.
---
## 🎬 Storyboarding & Visualization
* **Text-to-image storyboards:**
* Use DALL·E 3 or Midjourney to generate visual sequences from scripts.
* Compare coherence across models.
* Discuss prompt tokens like *“wide-angle cinematic lighting”* vs *“flat storyboard sketch.”*
* **Camera grammar exercise:** have Claude or Gemini output a JSON shot list (`scene`, `shot type`, `motion`, `duration`), then visualize it (see the sketch after this list).
* **AI animatic:** feed the storyboard frames to Runway or Pika to get quick motion tests.
* **Prompt translation:** show how a screenplay paragraph → image prompt → generated frame → refined prompt chain can evolve.
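As a minimal sketch of what the camera-grammar exercise's JSON shot list might look like, here is a Python version that can be sanity-checked before handing it to a visualization or scheduling step; the field names mirror the bullet above and the scene content is purely illustrative.

```python
import json

# Illustrative shot list following the fields named above; values are placeholders.
shot_list = [
    {"scene": "01", "shot_type": "wide establishing", "motion": "static", "duration": 4.0},
    {"scene": "01", "shot_type": "medium close-up", "motion": "slow push-in", "duration": 3.5},
    {"scene": "02", "shot_type": "over-the-shoulder", "motion": "handheld", "duration": 2.5},
]

# Quick sanity check before passing the list to a visualization step.
required = {"scene", "shot_type", "motion", "duration"}
for shot in shot_list:
    missing = required - shot.keys()
    if missing:
        raise ValueError(f"Shot is missing fields: {missing}")

print(json.dumps(shot_list, indent=2))
```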
---
## 📅 Planning, Scheduling, & Logistics
* **AI producer assistant:** demonstrate prompting ChatGPT Edu to generate call sheets, budgets, and production timelines.
* **Constraint play:** “We only have one camera, 2 hours, and a single hallway.” Let AI propose creative shooting plans.
* **Cast/crew matrix:** generate hypothetical credits and role descriptions to see how AI handles interpersonal logistics.
* **Notion/Sheets automation:** build an AI-generated production plan imported into Google Sheets or Airtable.
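As a minimal sketch of that last bullet: once a model drafts a production plan, it can be written to a CSV that Google Sheets or Airtable will import directly. The tasks, dates, and column names below are assumptions for illustration only.

```python
import csv

# Hypothetical AI-drafted production plan; rows and column names are illustrative.
plan = [
    {"phase": "Prep",  "task": "Lock concept & storyboard", "owner": "Producer", "due": "2025-10-10"},
    {"phase": "Prep",  "task": "Location scout (hallway)",  "owner": "Director", "due": "2025-10-12"},
    {"phase": "Shoot", "task": "Half-day studio block",     "owner": "Crew",     "due": "2025-10-15"},
]

# Plain CSV that Sheets or Airtable can import without further cleanup.
with open("production_plan.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["phase", "task", "owner", "due"])
    writer.writeheader()
    writer.writerows(plan)
```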
---
## 🎭 Script, Voice, and Tone
* **Script style translation:** same scene written by AI in three tonal registers (academic, comedic, noir).
* **Table-read with voice models:** use ElevenLabs or ChatGPT Voice to audition AI voice actors.
* **Dialogue polishing:** paste human-written dialogue, have the model suggest pacing/beat adjustments, then critique its taste.
---
## 🎨 Visual Moodboarding
* **Collective moodboard wall:** project a stream of generated images; participants upvote or sketch alternatives.
* **Prompt dissection:** show how adding art-historical or cinematic references changes style.
* **Cross-model remix:** take one prompt and pass it through DALL·E → Stable Diffusion → Ideogram to illustrate model bias and “house look.”
* **Style library exercise:** build a shared “Harvard AI Visual Lexicon” board—participants tag prompts and results for future projects.
---
## 🧩 Workflow Design
* **From chat to pipeline:** show how a text conversation can become a structured workflow (e.g., generate shot list → export → Storyboarder → Premiere).
* **API preview:** brief glimpse of Realtime API or Whisper for logging interviews in later sessions.
* **Version control for ideas:** demonstrate naming conventions, file versioning, or Git for media projects.
---
## 🧍 Hands-On Team Activities
* **3-person mini-crews:** writer + designer + producer each prompt different stages; recombine outputs.
* **“AI Pitch Meeting”:** teams develop a one-minute AI-generated pitch deck (3 slides, 1 image, 1 tagline, 1 budget line).
* **Lightning showcase:** end with quick 60-second presentations of each project’s AI-assisted pre-production materials.
---
## 🧩 Reflection & Critical Layer
* **Bias and taste discussion:** how model training data affects visual language (e.g., gendered camera gaze).
* **Ethical reflection:** synthetic casting, likeness rights, and authorship.
* **Meta-prompting:** have participants ask the AI to critique its own suggestions (“Why did you choose this shot?”).
* **Archivability:** how to store and cite AI-generated pre-production material in academic or creative workflows.
---
## 🧰 Possible Tools to Feature
| Category | Tools |
| ----------------- | ----------------------------------------------------------------------- |
| Chat interfaces | ChatGPT Edu (4o mini), Claude 3.5 Sonnet, Gemini 1.5 Pro |
| Visual generation | DALL·E 3 in ChatGPT, Midjourney v6, Ideogram, Pika Labs video |
| Storyboarding | Storyboarder (WonderUnit), Notion AI whiteboards, Figma AI plugin |
| Planning | ChatGPT Advanced Data Analysis, Google Sheets + GPT add-on, Airtable AI |
| Scriptwriting | Sudowrite, ChatGPT Edu Custom GPT “Script Doctor” |
| Voice/Audio | ElevenLabs, ChatGPT Voice, Whisper Live |
| Scheduling | Notion Calendar, AI Google Calendar Assistant |
---
Would you like me to turn this brainstorm into a **structured 90-minute run-of-show** (with time blocks, goals, and materials) for tomorrow’s session?
Perfect — those new directions push this session into a **richer hybrid of analog + digital media prototyping**, which aligns beautifully with the Learning Lab’s ethos. Here’s an expanded brainstorm structured around your three threads — **Nano-Banana & ingredients**, **line fidelity & AI visual precision**, and **pitch deck / lookbook assembly** — plus a few linking ideas that can make the whole workshop feel like one coherent experiment in *AI-supported creative pre-production*.
---
## 🍌 **Nano-Banana + Ingredients: AI Thinking with Tangible Media**
*(for context: Nano-Banana = an LLM “sketching” playground that can integrate “ingredients” like text snippets, prompts, and small files)*
### 1. Ingredient-Based Prompt Crafting
* **Physical Inputs Table:** lay out a “mise en scène” of analog items — postcards, markers, fragments of text, paper textures, screenshots, printed photos.
Participants choose 3–5 and feed their short descriptions into Nano-Banana as “ingredients.”
→ Output: a storyboard seed, a concept treatment, or a lookbook moodline.
→ Meta-point: showing how *prompting as collage* parallels analog montage.
### 2. Analog–Digital Roundtrip
* Participants **sketch an idea on paper**, photograph it, and upload it as an “ingredient.”
Nano-Banana then expands the sketch into a concept or storyboard panel.
→ Demonstrates the creative feedback loop between analog thinking and LLM remixing.
### 3. Ingredient Archetypes
* Introduce ingredient “types”:
* 📄 *Textual (script excerpts, poetic fragments)*
* 🎨 *Visual (color palette, texture, sketch)*
* 🔉 *Aural (vibe descriptor, song lyric)*
* 🧭 *Conceptual (theme, constraint, or emotion)*
Participants label their ingredients, and Nano-Banana combines them algorithmically.
→ This can evolve into a structured creative taxonomy for future workshops.
### 4. Ingredient Remix Game
* Teams exchange one ingredient with another group (physical or digital) and re-run Nano-Banana with the altered set.
→ The exercise shows how even small prompt perturbations shift the aesthetic dramatically.
---
## ✏️ **Line Fidelity & AI Visual Precision**
### 5. Line-Driven Visual Prompts
* Show how to *condition AI image generation* on specific line drawings:
* **DALL·E 3 “style reference” uploads** — use a scanned sketch or pencil drawing to constrain composition.
* **ControlNet / Scribble mode** — demonstrate how edge maps guide Stable Diffusion output (see the sketch after this list).
* **Ideogram “match drawing style” prompts** — test prompt weighting for line vs. fill fidelity.
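For the ControlNet / Scribble bullet, a minimal sketch using the Hugging Face `diffusers` library; the model IDs and file name are assumptions, and a scanned pencil sketch may need a pre-processing pass (inversion or edge extraction) to match what the scribble ControlNet expects.

```python
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# Load a scribble-conditioned ControlNet and pair it with a base Stable Diffusion model.
# Model IDs are illustrative; swap in whatever checkpoints the Lab has access to.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-scribble", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

# A scanned storyboard panel, ideally high-contrast line art.
scribble = Image.open("storyboard_panel_01.png")

# The prompt carries style; the scribble constrains composition and line placement.
result = pipe(
    "wide-angle cinematic lighting, dusk, 35mm film look",
    image=scribble,
    num_inference_steps=30,
).images[0]
result.save("panel_01_rendered.png")
```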
### 6. Progressive Constraint Pipeline
1. Start with a hand-drawn storyboard panel.
2. Generate a high-fidelity AI image that respects contours.
3. Regenerate with style modifiers (engraving, lithograph, blueprint).
4. Export variations to build a visual language library for the project.
→ You could display these on the projector as a “from hand to machine” timeline.
### 7. Physical–Digital Hybrid Output
* Use tracing paper, light tables, or overhead projectors:
* Trace AI outputs to reassert human gesture.
* Rescan the traced version → feed back into AI → show how human touch reconditions the model.
→ Creates an embodied understanding of “co-authorship” between artist and AI.
### 8. Error Aesthetics
* Deliberately push models into misalignment:
* Over-contrast sketches.
* Ask for “edge hallucinations” or “dream in ink.”
→ Opens discussion about productive failure and ambiguity as design tools.
---
## 📘 **Pitch Decks, Lookbooks, and AI-Generated Presentations**
### 9. Auto-Deck Builder
* Use ChatGPT’s **Canvas or File Upload** to turn a concept document + 3 images into a **slide deck outline** (a scripted assembly sketch follows below).
* Then refine slide-by-slide:
* *Slide 1:* Title & tagline
* *Slide 2:* Concept summary
* *Slide 3:* Moodboard / visual references
* *Slide 4:* Technical approach
* *Slide 5:* Schedule / resources
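If you want to show the automation angle alongside Canvas, here is a minimal sketch using `python-pptx` that turns the five-slide outline above into a starter .pptx; all slide text is placeholder content.

```python
from pptx import Presentation

# Outline mirroring the five slides above; text is placeholder content.
outline = [
    ("Threshold Stories", "Working tagline: every door is a scene."),
    ("Concept summary", "A three-minute hybrid doc about campus thresholds."),
    ("Moodboard / visual references", "AI frames from the storyboarding exercise."),
    ("Technical approach", "One camera, projected backdrops, AI pre-vis."),
    ("Schedule / resources", "Prep this week, half-day shoot, one edit day."),
]

prs = Presentation()
title_and_content = prs.slide_layouts[1]  # built-in "Title and Content" layout

for title, body in outline:
    slide = prs.slides.add_slide(title_and_content)
    slide.shapes.title.text = title
    slide.placeholders[1].text = body

prs.save("pitch_deck_draft.pptx")
```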
### 10. Dynamic Lookbooks
* Use **Notion**, **Figma**, or **Pitch.com** to live-assemble lookbooks with:
* AI-generated visuals from earlier exercises
* color palettes extracted by Claude or ChatGPT’s Vision tools
* text snippets (loglines, tone statements, keywords)
* Bonus: generate “tone comparison spreads” (e.g., *how it looks if Wes Anderson directed it vs. Wong Kar-Wai*).
### 11. StoryWorld Decks
* Show participants how to create a “World Bible” deck — maps, mood shots, key props, and style cues.
* ChatGPT or Claude can write world summaries in production-binder format.
* Combine text + visuals in Google Slides or Figma live.
* Encourage participants to include a “feeling palette”: emotions or temporal rhythms.
### 12. AI Deck Persona
* Let Nano-Banana or ChatGPT adopt the persona of a **creative producer or studio exec**, and have participants “pitch” their deck to it.
* The model gives structured feedback (“logline clarity,” “visual coherence,” “market pitch strength”).
* Turns AI into a rehearsal partner for real-world pitches.
---
## ⚗️ **Hybrid and Meta Experiments**
### 13. “Reverse Ingredient” Challenge
* Feed Nano-Banana a finished AI image or deck and ask it to infer what ingredients might have produced it.
→ Highlights the interpretive nature of AI “reading” media.
### 14. Concept → Script → Board Chain
* Chain models:
* ChatGPT: concept synopsis → Claude: script draft → DALL·E: storyboard panels → Nano-Banana: lookbook assembly.
* End with discussion on interoperability and “pipeline literacy.”
### 15. Tangible Pitch Wall
* Print AI-generated visuals on cardstock, pin them up, and let participants arrange decks physically.
* Photograph final arrangements as analog lookbooks.
* Optionally, use a webcam feed + CLIP model to auto-tag themes (“urban decay,” “ritual,” “thresholds”).
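For the optional auto-tagging idea, a minimal sketch using the open CLIP weights via Hugging Face `transformers`: score a photo of the pitch wall against candidate theme labels. The image file name and label list are assumptions.

```python
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# A still from the webcam or a photo of the physical pitch wall.
image = Image.open("pitch_wall.jpg")

# Candidate themes; extend with whatever vocabulary the group develops.
labels = ["urban decay", "ritual", "thresholds", "pastoral calm", "bureaucratic dread"]

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
probs = model(**inputs).logits_per_image.softmax(dim=1)[0]

for label, p in sorted(zip(labels, probs.tolist()), key=lambda x: -x[1]):
    print(f"{label}: {p:.2f}")
```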
---
## 🧩 **Linkages to Broader Themes**
* **Human–AI Co-authorship:** gesture, materiality, and iteration.
* **AI as Translation Layer:** moving between modalities (sketch → prompt → deck).
* **Studio Pedagogy:** how embodied and social practices can be preserved in AI workflows.
* **Context Engineering:** designing metadata, tone cues, and “ingredient ontologies” for media projects.
---
Would you like me to draft a **90-minute plan** that integrates these (e.g., Nano-Banana ingredients → sketch-to-AI → deck assembly → reflection), with suggested timings, prompts, and materials list (markers, cameras, scanning stations, etc.)?
That’s *exactly* the right instinct — to use the studio’s equipment **symbolically and atmospherically** in Session 1 (“Prep”) without crossing into actual **production workflows** that you’ll reserve for “On Set.”
The key move is to make the physical studio act as a **thinking instrument**, not a recording environment yet.
Below are layered ways to bring in cameras, lights, screens, and projectors for *ideation, framing, and visualization* — rather than shooting.
---
## 🎥 **Cameras as Idea Machines (Not Capture Devices)**
### 1. Camera-as-Viewfinder
* Mount a camera or phone on a tripod but keep it **unplugged from recording**.
Let participants look through the lens to *frame compositions* for AI storyboards.
→ Then describe or photograph the framing for prompt input (“medium close-up from low angle, tungsten key, shallow DOF”).
* This turns the camera into a *prompt-building reference tool*.
### 2. Shot Grammar Exercise
* Place 2–3 cameras around the room aimed at blank backdrops.
Participants walk between them to *feel* what “coverage” means.
→ They then script or prompt the same micro-scene from each camera’s POV.
→ Connect to later AI shot-listing in Session 2.
### 3. Lens-as-Metaphor
* Offer lenses (or simulated focal lengths on phones) as *creative constraints*:
“What’s the 24 mm version of your concept? What’s the 85 mm version?”
→ Participants translate those physical optics into AI prompt parameters (wide, telephoto, portrait compression).
---
## 🟩 **Green Screen as Concept Canvas**
### 4. Projection Surface for Imagination
* Instead of compositing, use the green screen as a **live projection wall** for evolving AI imagery (storyboards, color palettes).
* Run a looping slide deck of participants’ generated frames.
* Lights low, slow fade transitions → transforms the studio into a “thinking cinema.”
### 5. “Standing in the Scene”
* Invite participants to stand in front of the projected AI scene, *not for filming*, but to **embody scale, light, and composition**.
* Have them describe how being “in” their AI image changes their prompt choices.
* Take stills (optionally) for reflection, not production assets.
### 6. Green-Screen Palette Wall
* Tape printed color swatches or keywords to the green surface.
* These serve as analog “tags” for AI generation (“tone: elegiac,” “light: sodium vapor”).
* Photograph the wall → upload to Nano-Banana as an “ingredient board.”
---
## 🖥️ **Screens and Projectors as Collaborative Instruments**
### 7. Multi-Screen Moodboard Loop
* Set up the Lab’s projectors or screens to cycle through:
* Participant prompts
* Generated images
* Live updates from Nano-Banana
* Treat it like a *media aquarium*: ideas swimming across screens while teams work.
* Great ambient energy; reinforces AI as part of the studio ecology.
### 8. Split-Screen Comparisons
* Side-by-side projection: hand-drawn sketches ↔ AI renderings ↔ modified re-prompts.
* Invite micro-critiques (“where does the machine misread the hand?”).
* Builds visual literacy before Session 2’s technical compositing.
### 9. “Projection Mapping Lite”
* Use one projector to cast simple geometric frames or grids onto paper or props on a table.
* Participants sketch within those light contours — a pre-visualization of how light and frame interact.
* Photograph the sketch under projection and feed it to the model for iteration.
---
## 💡 **Studio as a Pre-Production Playground**
### 10. Lighting as Metaphor
* Set up 3-point lighting rigs but use them theatrically:
* Toggle between warm/cool scenes as participants discuss mood.
* Let the “lighting state” correspond to AI tone cues (“noir,” “pastoral,” “clinical”).
* No recording — just embodied understanding of light language.
### 11. Spatial Storyboarding
* Use masking tape on the floor to block out a scene.
* Participants mark camera, subject, prop positions.
* Label with sticky notes: “Close-up,” “Cutaway,” “Insert.”
* Photograph layout and upload to AI for automatic diagram interpretation.
→ Serves as bridge to Session 2’s real blocking exercises.
---
## 🧩 **Integrative Nano-Banana + Studio Idea**
### 12. Ingredient Stations in Physical Space
* Place QR codes or NFC tags around the room:
* “Camera,” “Lighting,” “Set,” “Sound,” “Mood.”
* Scanning one sends an “ingredient” to Nano-Banana.
* Participants physically *collect* ingredients by moving through the studio.
→ Turns spatial exploration into prompt-building.
---
## 🧠 **Save for Session 2 (“On Set”)**
To maintain a clear pedagogical progression:
* **Avoid:** live recording, chroma key compositing, real-time transcription, or AI camera control.
* **Do:** concept visualization, framing practice, projection feedback, embodied composition.
This way, Session 2 can feel like a natural escalation from *imagination space → capture space.*
---
Would you like me to sketch a **hybrid floor plan + time flow** showing how these activities could unfold across the Lab’s physical zones (tables, projection wall, green screen, lighting corner)? It’d give you a visual map of how the pre-production “studio ecology” could operate tomorrow.
---
That’s a great and very strategic question. Editors, technical directors, and post-production people are actually *perfect* participants for a pre-production workshop—if you design it to foreground the *structural thinking*, metadata, and pipeline logic that make a production succeed later.
Below is a brainstorm of **pre-production workflows that resonate deeply with editors, post supervisors, and technical crew**, grouped by mindset.
All of them stay comfortably in the “Prep” phase—no cameras rolling yet—but tap into the habits and logics editors already live by.
---
## 🧩 1. **Thinking Like a Post Person in Pre-Production**
Editors and tech leads are naturally *reverse-engineers*. You can frame pre-production as “designing the edit before you shoot.”
### a. Edit-First Storyboarding
* Build a **“paper timeline”** using stills or AI frames: sequence them left-to-right on a wall or in Figma/Storyboarder.
* Editors annotate: *What will this cut feel like? Where would you want coverage?*
* Then feed that markup into ChatGPT or Claude to generate a **shot list** automatically from editorial intention.
### b. Data Management & Naming Logic
* Simulate file-naming schema early:
`PROJ_SC01_SH03_V01.mov` → show how LLMs can enforce or generate consistent naming conventions and metadata templates (a validator sketch follows after this list).
* Editors immediately recognize this as sanity insurance for future workflows.
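A minimal sketch of that "sanity insurance" point: a validator an LLM could draft (or editors could hand-write) that checks file names against the schema above; the allowed extensions are an assumption.

```python
import re

# Pattern matching the schema above: PROJ_SC01_SH03_V01.mov
NAMING_PATTERN = re.compile(
    r"^(?P<project>[A-Z0-9]+)_SC(?P<scene>\d{2})_SH(?P<shot>\d{2})_V(?P<version>\d{2})\.(mov|mxf|wav)$"
)

def check_name(filename: str):
    """Return the parsed fields if the name matches the schema, else None."""
    match = NAMING_PATTERN.match(filename)
    return match.groupdict() if match else None

# Illustrative file names; the second one violates the schema.
for name in ["PROJ_SC01_SH03_V01.mov", "scene1 final FINAL v2.mov"]:
    fields = check_name(name)
    print(name, "->", fields if fields else "RENAME: does not match schema")
```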
### c. Proxy Thinking
* Generate **AI proxy assets** (temporary dailies): low-res, text-to-image stand-ins for scenes.
→ Editors can cut with these in mind to test pacing before footage exists.
### d. Conform & Metadata Design
* Have participants define **metadata schemas** for assets *before production*:
`scene, location, take_quality, emotion, sound_notes`.
→ Show how these schemas can later feed into automated transcription, tagging, or RAG workflows (a minimal schema sketch follows).
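A minimal sketch of what such a schema could look like as a dataclass, so ingest scripts or tagging tools can reuse it later; the fields follow the bullet above and the example values are invented.

```python
from dataclasses import dataclass, asdict

@dataclass
class AssetMetadata:
    # Fields follow the schema sketched above; extend per project.
    scene: str
    location: str
    take_quality: str   # e.g. "circle take", "usable", "NG"
    emotion: str        # editorial intent, not a technical field
    sound_notes: str

# An invented example record, e.g. for a simulated ingest form.
record = AssetMetadata(
    scene="SC03",
    location="Hallway B",
    take_quality="usable",
    emotion="hesitant",
    sound_notes="HVAC hum under dialogue",
)
print(asdict(record))
```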
---
## 🧮 2. **Technical Planning & Pipeline Design**
### a. Folder Architecture Exercise
* Groups design a **directory tree** that anticipates post workflows.
→ Then prompt an LLM to generate automation scripts that would create that tree on a drive.
* Example: “Create a bash script that builds our media project folder structure.”
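The exercise asks the model for a bash script; as a hedged Python equivalent, here is one sketch of what the generated structure might look like. The folder layout itself is an assumption, and arguing over that list is the point of the exercise.

```python
from pathlib import Path

# An assumed post-friendly layout; adjust to whatever the group agrees on.
FOLDERS = [
    "01_footage/dailies",
    "01_footage/proxies",
    "02_audio/field",
    "02_audio/temp_music",
    "03_project_files",
    "04_exports/review",
    "04_exports/final",
    "05_docs/call_sheets",
]

root = Path("MEDIA_PROJECT")
for folder in FOLDERS:
    (root / folder).mkdir(parents=True, exist_ok=True)
    print("created", root / folder)
```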
### b. File Round-Trip Simulation
* “Fake ingest”: drop a few images/audio clips into a shared folder.
* Ask AI to generate a *pre-flight report* (“missing metadata,” “duplicate names”).
→ Editors love this: it’s pre-production QA (a rule-based sketch of such a check follows).
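A minimal rule-based sketch of that pre-flight report, checking an ingest folder for duplicate names and missing sidecar metadata; the folder name and the sidecar convention (`<clip>.json`) are assumptions.

```python
from collections import Counter
from pathlib import Path

INGEST = Path("fake_ingest")
MEDIA_EXTS = {".mov", ".mp4", ".wav", ".jpg", ".png"}

media = [p for p in INGEST.rglob("*") if p.suffix.lower() in MEDIA_EXTS]

# Duplicate basenames are a classic conform headache.
dupes = [name for name, n in Counter(p.name for p in media).items() if n > 1]

# Assumed convention: each clip travels with a <clip>.json metadata sidecar.
missing_meta = [p.name for p in media if not p.with_suffix(".json").exists()]

print("PRE-FLIGHT REPORT")
print("  duplicate names:", dupes or "none")
print("  missing metadata sidecars:", missing_meta or "none")
```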
### c. LUT and Color Style Planning
* Instead of grading, generate **AI lookbook stills** that express color space intention: “Teal-orange grade,” “bleach bypass,” “Fuji Eterna.”
→ Editors can imagine how the final look will cut together; colorists appreciate being consulted early.
### d. Audio Tone Previz
* Use text-to-sound models or libraries to create **mood beds**—placeholder ambiences or emotional arcs.
→ Editors recognize this as temp music or reference tone that informs pacing.
---
## 📖 3. **Script-to-Edit Continuity Planning**
### a. Script Breakdown Automation
* Run scripts through Claude or ChatGPT to extract:
* Locations
* Characters
* Props
* Shot tags
→ Feeds directly into scheduling and editing prep (what will need continuity checks); a minimal API sketch follows.
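A minimal sketch of the breakdown step using the OpenAI Python client; the model name, file name, and JSON field list are assumptions, and Claude's API would work the same way with its own client.

```python
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

script_excerpt = open("scene_03.txt").read()  # a short scene, not the whole script

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; use whatever model the Lab has access to
    response_format={"type": "json_object"},
    messages=[{
        "role": "user",
        "content": (
            "Break down this scene for pre-production. Return JSON with keys "
            "'locations', 'characters', 'props', and 'shot_tags' (each a list of strings).\n\n"
            + script_excerpt
        ),
    }],
)

breakdown = json.loads(response.choices[0].message.content)
print(breakdown["locations"], breakdown["props"])
```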
### b. Continuity & Coverage Maps
* Generate heatmaps or diagrams of scene transitions, coverage density, or expected cutting rhythm.
* For example: “Where do we expect jump cuts vs. match cuts?”
→ Editors enjoy visualizing editorial energy before footage exists.
### c. Pre-Labeling for Transcripts
* Build **tag taxonomies** editors will later apply to transcripts (topics, tone, characters).
→ Use AI to auto-suggest tags, then refine as a team (a starter taxonomy sketch follows).
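A minimal sketch of a starter taxonomy plus a naive keyword-based auto-suggest pass that model-suggested tags could later replace; both the tags and the trigger keywords are invented examples.

```python
# Invented starter taxonomy: tag -> trigger keywords. The AI pass would refine this.
TAXONOMY = {
    "origin story":  ["grew up", "childhood", "first time"],
    "institutional": ["policy", "department", "funding"],
    "turning point": ["decided", "realized", "changed"],
}

def suggest_tags(line: str) -> list[str]:
    """Naive keyword match; a stand-in for model-suggested tags."""
    lowered = line.lower()
    return [tag for tag, keywords in TAXONOMY.items() if any(k in lowered for k in keywords)]

print(suggest_tags("That was when I realized the department would never fund it."))
# -> ['institutional', 'turning point']
```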
---
## 🎞️ 4. **Editorial Aesthetics in Pre-Vis**
### a. “Edit Intent” Annotation
* Take AI storyboards or Nano-Banana frames and have editors label:
* “Cut on motion”
* “Hold for 3 sec”
* “J-cut / L-cut opportunity.”
→ Those notes can become structured prompt inputs (“Generate a storyboard anticipating J-cuts between dialogue lines.”).
### b. Montage Exercises
* Give everyone a shared text corpus (say, student interviews or public-domain footage descriptions).
* Have AI assemble *written montages*—juxtapositions, repetitions, visual rhymes.
→ Editors will instinctively recognize Eisensteinian principles encoded in language.
### c. Pre-Visualized Rhythms
* Use **pacing simulations**: LLM generates timestamps or beats per minute for scenes, visualized as waveform-like timelines.
→ Editors discuss how this anticipates rhythm and emotion (a toy pacing sketch follows).
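A toy pacing sketch: given per-scene beat durations (which an LLM could propose from the script), print cumulative timestamps and a rough bar "timeline" for the projector; all numbers are invented.

```python
# Invented per-scene beat durations in seconds; an LLM could propose these from the script.
scenes = [("Cold open", 12), ("Title beat", 4), ("Interview A", 38), ("Montage", 20), ("Button", 6)]

elapsed = 0
for name, dur in scenes:
    start = elapsed
    elapsed += dur
    bar = "#" * max(1, dur // 2)  # crude waveform-like bar, 1 char is roughly 2 seconds
    print(f"{start:>4d}s-{elapsed:>4d}s  {name:<12} {bar}")
```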
---
## 🖥️ 5. **Tools & Cross-Department Handshakes**
### a. AI-Assisted Paper Edit
* Give them raw transcript text (from a hypothetical doc shoot).
* Let ChatGPT condense it into a *paper edit outline* (soundbite order).
* Editors evaluate and critique the AI’s editorial decisions.
### b. Pre-Production as Schema Design
* Frame pre-production as **schema thinking**—what structured data will make post easy?
* Shot logs
* Audio notes
* Scene metadata
* AI-assisted ingest logs
### c. ChatGPT + Airtable + Resolve API demo
* Show how you could, in theory, export an Airtable shot list → CSV → Resolve metadata panel.
* They’ll instantly see the utility for post workflows.
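A minimal sketch of that handoff: remap an Airtable CSV export into a metadata CSV for Resolve's Media Pool import. The Airtable column names here are assumptions, and the target column names should be checked against what Resolve's "Import Metadata" actually expects for your version.

```python
import csv

# Assumed Airtable export columns -> assumed target metadata columns.
# Verify the right-hand names against Resolve's metadata import before the demo.
COLUMN_MAP = {
    "Shot ID": "Clip Name",
    "Scene": "Scene",
    "Shot": "Shot",
    "Description": "Description",
    "Keywords": "Keywords",
}

with open("airtable_shot_list.csv", newline="") as src, \
     open("resolve_metadata.csv", "w", newline="") as dst:
    reader = csv.DictReader(src)
    writer = csv.DictWriter(dst, fieldnames=list(COLUMN_MAP.values()))
    writer.writeheader()
    for row in reader:
        writer.writerow({new: row.get(old, "") for old, new in COLUMN_MAP.items()})
```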
---
## ⚙️ 6. **Integrating the Studio Hardware Lightly**
Editors and techs will appreciate the **signal flow** aspects:
* Use the green screen stage as a **metadata playground**: tag imaginary shots (“wide green setup,” “close interview setup”) and simulate ingest forms.
* Project AI-generated look references on the screens—then label them with LUT intentions or exposure notes.
* Run a camera tethered to a monitor purely to simulate *ingest path*, not capture.
→ “What metadata could travel with this file?” becomes a creative question.
---
## 💬 7. **Meta-Conversation Prompts**
* *“If you could redesign pre-production to eliminate one post headache, what would it be?”*
* *“How could AI help you get dailies that cut together better?”*
* *“What metadata do you wish directors understood?”*
These prompts let technical participants surface process wisdom while still playing creatively.
---
If you want, I can sketch a **version of tomorrow’s run-of-show oriented toward editors/tech participants**, weaving in a few Nano-Banana “ingredient” stations and light studio use.
Would you like that?