# midjourney-tips-for-videos
---
# Midjourney Video Guide (2025 Edition)
### Quick Intro
In June 2025, Midjourney launched its first video workflow (the “Image-to-Video” model). ([Midjourney][1]) You first generate an image (or upload one) and then animate it into a video clip. Videos cost roughly 8× the GPU time of an image job and come with their own parameters and constraints. ([Midjourney][2])
---
## 1. Workflow overview
1. Generate an image (via `/imagine` or similar) or upload an image you want as the **starting frame**. ([Midjourney][3])
2. Click the **Animate** button (High motion / Low motion) or add `--video` (see below) to animate. ([Midjourney][3])
3. The default clip is ~5 seconds. You can extend it in ~4-second increments up to ~21 seconds total via “Extend Auto” or “Extend Manual”. ([Midjourney][2])
4. Choose motion intensity (low or high) and optionally set an ending frame (for loops or transitions). ([Midjourney][3])
5. Adjust other video-specific parameters (batch size, resolution, etc.). ([Midjourney][3])
---
## 2. Video-specific parameters
Here are the key parameters specific to video generation in Midjourney:
| Parameter | Description | Notes |
| ------------------- | -------------------------------------------------------------------------------------------- | --------------------------------------------------------------- |
| `--video` | Tell Midjourney to generate a video (Image→Video) | Used when you upload image URL + prompt. ([Midjourney][3]) |
| `--motion low` | Use subtle movement / slower camera & subject motion (default) | Good for ambient scenes. ([WAGMI.TIPS][4]) |
| `--motion high` | Use bigger camera/character movement, more dynamic motion | May introduce glitches. ([Medium][5]) |
| `--raw` | Remove extra “creative flair” / style-interpretation, giving more direct influence to prompt | Useful for more control. ([WAGMI.TIPS][4]) |
| `--loop` | Use Starting Frame as ending frame to create a seamless loop | ([Midjourney][3]) |
| `--end <image_URL>` | Use a different ending frame image (via URL) to define end state | ([Midjourney][3]) |
| `--bs #` | Batch size: how many videos to generate per prompt (1, 2, or 4) | Cost scales with batch size (at SD: ~2/4/8 GPU-minutes for 1/2/4 videos) ([Midjourney][3]) |
Other constraints/settings:
* Resolution: Default SD (~480p) for most. HD (~720p) available on higher plans and based on starting image aspect ratio. ([Midjourney][3])
* Duration: Initial ~5 s, extendable up to ~21 s total (with ~4 s increments). ([Midjourney][2])
* Aspect ratio / dimension depend on starting image’s ratio. ([Midjourney][3])
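Assembled into a single prompt string, the video parameters above look like the sketch below. This is an illustrative helper only: Midjourney has no public API, and the function name and structure are my own; the result is simply the string you would paste after `/imagine`.

```python
def build_video_prompt(description, image_url=None, motion="low",
                       loop=False, end_url=None, batch_size=1, raw=False):
    """Assemble a Midjourney image-to-video prompt string.

    Illustrative helper only -- Midjourney has no public API; this
    just builds the text you would paste after /imagine.
    """
    if batch_size not in (1, 2, 4):
        raise ValueError("--bs accepts 1, 2, or 4")
    parts = []
    if image_url:                      # starting-frame image URL goes first
        parts.append(image_url)
    parts.append(description)
    parts.append("--video")
    parts.append(f"--motion {motion}")
    if raw:
        parts.append("--raw")
    if loop:
        parts.append("--loop")
    if end_url:
        parts.append(f"--end {end_url}")
    if batch_size != 1:
        parts.append(f"--bs {batch_size}")
    return " ".join(parts)

print(build_video_prompt("misty forest at dawn", motion="low", loop=True))
# → misty forest at dawn --video --motion low --loop
```

Centralizing the flags like this makes it easy to keep a consistent parameter set across a batch of experiments.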
---
## 3. General parameters that apply (images + videos)
In addition to the video-specific ones, many of the classic Midjourney parameters still apply. These can help you with scene composition, style, consistency. Examples:
* `--ar <ratio>` — aspect ratio (e.g., `16:9`, `4:3`). ([Run The Prompts][6])
* `--seed <number>` — fixes noise pattern for reproducibility. ([Run The Prompts][6])
* `--stylize` or `--s <value>` — controls how strongly the style is applied. ([Run The Prompts][6])
* `--chaos <value>` — controls variation/randomness. ([Run The Prompts][6])
* `--quality` or `--q <value>` — trade-off between speed and detail. ([Run The Prompts][6])
* Many others listed in the official parameter list. ([Midjourney][7])
When making videos, combining the video-specific parameters with these image parameters gives you finer control over composition, style, and motion.
---
## 4. Best Practices and Prompting Tips for Video
Here are tips to get better results with video (especially for the new video workflow):
* Use *action verbs* and *temporal language* in your prompt, e.g., "camera pans", "dawn breaks", "then it emerges", "while the light flickers", to convey a sense of progression. ([midjourneysref.com][8])
* Decide on motion style early: for a calm scene, use `--motion low`; for action, use `--motion high`. High motion may require more careful prompting and referencing to avoid odd glitches. ([Midjourney][2])
* If you want a seamless loop (e.g., ambient background), use `--loop` with same start & end frame. ([Midjourney][3])
* For *scene progression*, consider uploading a start image + end image (via `--end`) so you control how things change over time. ([Midjourney][3])
* Be mindful of resolution & aspect ratio: if you plan to export or use outside Midjourney, set a suitable ratio early (e.g., 16:9 for video output). The starting image ratio sets the video dimensions. ([Midjourney][3])
* Use `--raw` if you want your prompt to dominate the style rather than Midjourney’s default creative flair. Especially useful in video where you want consistent motion rather than unpredictable stylistic shifts. ([WAGMI.TIPS][4])
* Keep an eye on cost/time: Video generation takes ~8× GPU time compared to images, so frequent iterations cost more. ([Midjourney][2])
* When extending videos manually, tweak your prompt slightly to introduce new elements rather than blindly extending—this yields more varied/interesting motion. ([Midjourney][2])
---
## 5. Limitations / Things to know
* At the moment (2025), the workflow is **image-to-video**, not full text-to-video: you must start from an image, whether uploaded or generated first. ([midjourneysref.com][8])
* Resolution is still modest (SD default; HD optional) and video length is limited (~21 seconds max). ([WAGMI.TIPS][4])
* High motion may produce glitches/artifacts—camera movement and object motion are still evolving. ([Midjourney][2])
* Because each video uses more GPU time, cost (and wait time) will be higher. ([Midjourney][1])
---
## 6. Example Prompts
Here are a few example prompts combining concepts + parameters to inspire your own work:
* `/imagine a lonely spaceship drifting through a neon‐lit asteroid field, camera slowly orbiting the ship at dusk --motion low --ar 16:9 --loop --raw`
* `/imagine an ancient forest at twilight, mist swirling around giant roots, camera slowly dollying forward then rising above the canopy --motion high --ar 4:3 --end https://…/endingframe.jpg --q 2`
* `/imagine a futuristic city skyline at dawn, hover-cars streaking past glass towers, camera pans upward and zooms in on a flying vehicle --motion high --ar 16:9 --bs 2 --seed 12345`
---
## 7. Parameter Reference (Video + Generic)
**Video parameters:**
```
--video
--motion low | high
--raw
--loop
--end <image_url>
--bs 1|2|4
```
**General parameters (apply to video too):**
```
--ar <ratio>
--seed <number>
--stylize / --s <value>
--chaos <value>
--quality / --q <value>
--stop <value>
… (see full list) …
```
Refer to the official parameter list for the full set. ([Midjourney][7])
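When mixing the two parameter sets, a generic flag appender keeps prompts tidy. The sketch below is illustrative only (no validation against the official parameter list, and again not an API call, just string assembly):

```python
def add_params(prompt: str, **params) -> str:
    """Append Midjourney-style --key value flags to a prompt string.

    The keyword name becomes the flag name, e.g. ar="16:9" -> "--ar 16:9";
    a True value emits a bare flag like --loop. Illustrative only --
    values are not checked against the official parameter list.
    """
    parts = [prompt]
    for key, value in params.items():
        if value is True:
            parts.append(f"--{key}")
        elif value is not None and value is not False:
            parts.append(f"--{key} {value}")
    return " ".join(parts)

print(add_params("city at dawn", ar="16:9", motion="high", seed=12345, loop=True))
# → city at dawn --ar 16:9 --motion high --seed 12345 --loop
```

Passing `None` or `False` for a value simply omits that flag, so one function signature can cover several experiment variants.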
---
## 8. Recommended Workflow for a Learning Lab
For teaching and experimentation (e.g., in a learning-lab setting), here is a suggested workflow for students and faculty:
1. **Image generation first** — ask participants to generate a strong still image with clear composition and narrative.
2. **Define motion brief** — e.g., “camera rotates around subject”, “mist rises”, “character walks forward”.
3. **Animate with Low motion** — for initial experiments, use `--motion low`, keep it simple, `--loop` for ambient short clip.
4. **Discuss iteration** — examine how changes in prompt vs parameters affect motion & coherence.
5. **Step up to High motion** — explore dynamic camera moves and ask: what glitches appear? how to adjust prompt/parameters?
6. **Extend video** — show how to use “Extend” to lengthen from ~5 s to ~21 s, and let participants tweak prompt before each extension.
7. **Export & reflect** — review the output, discuss cost/time tradeoffs, aesthetic outcomes, and prompt-engineering strategies.
---
[1]: https://updates.midjourney.com/introducing-our-v1-video-model/ "Introducing Our V1 Video Model - Midjourney"
[2]: https://docs.midjourney.com/hc/en-us/articles/37460773864589-Video "Video - Midjourney"
[3]: https://docs.midjourney.com/docs/en/video "Midjourney Video Parameter"
[4]: https://wagmi.tips/future-tech/midjourney-video/ "The Complete Guide To Midjourney Video (2025) - Wagmi.tips"
[5]: https://medium.com/generative-ai/midjourney-video-is-here-5-tips-for-great-prompts-for-video-generation-9f968bcaa21f "Midjourney Video Is Here: 5 Tips for Great Video Generation Prompts"
[6]: https://runtheprompts.com/resources/midjourney-info/midjourney-parameter-cheat-sheet-v7/ "Midjourney Parameter Cheat Sheet V7: The Easy Guide"
[7]: https://docs.midjourney.com/hc/en-us/articles/32859204029709-Parameter-List "Parameter List - Midjourney"
[8]: https://midjourneysref.com/guide/Mastering-Midjourney-Video "Midjourney Video Tutorial & Analysis: Why It May Transform Future ..."