## MIRI Task 01042026

Prepared by Maryam Mazraei ([Website](https://mmaz.co/), [X](https://x.com/mmazco), [LinkedIn](https://www.linkedin.com/in/maryam-mazraei/), [Github](https://github.com/mmazco))

Framed for Media Outreach Director or Pipeline Builder roles in the Comms team.

## Outline

1. Overview and a quick note on MIRI's strategic positioning within the AI safety ecosystem
2. Identify candidates for outreach proposals and show how this fits into the overall strategy to spread MIRI's narrative and position in the market: understanding AI and the implications of ASI for humanity, labour/talent, and policy
3. Experiments: MIRI's 2026 plans emphasize 'trying a range of experiments'

### Overview and thoughts:

'MIRI is a nonprofit with a goal of helping humanity make smart and sober decisions on the topic of smarter-than-human AI'. From my research, it seems the 2024 pivot to comms came after a series of experiments failed to drive alignment approaches ([source](https://www.lesswrong.com/posts/q3bJYTB3dGRf5fbD9/miri-2024-mission-and-strategy-update)), pushing MIRI's strategy towards a bolder, more decisive communications style, rejecting diplomatic compromise and advocating for policy to enforce a slowdown of AI progression in its current form. This echoes Bernie Sanders's recent call for a moratorium on data centre development, and it leads me to believe that the rise in popularity of, and discussion around, Yudkowsky's 2023 [TIME op-ed](https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/) calling for a full shutdown (arguing the six-month pause was not enough) was one factor behind this comms pivot, alongside others: the stated aim of "shattering" the Overton window, and the book's NYT bestseller status, which clearly shows engagement with this narrative.

Whatever one makes of that approach, if the strategy is to encourage dialogue and a practical response to the potential challenges of ASI, then MIRI should also track the public discourse around its ideas. As an example, I ran the TIME op-ed through one of the tools I have been developing, Media Reaction Finder ([link](https://mrf.up.railway.app/?q=https%3A%2F%2Ftime.com%2F6266923%2Fai-eliezer-yudkowsky-open-letter-not-enough%2F)), to gauge public reaction to and discussion of the piece. In its broader campaign to push its current narrative, MIRI must stay aware of how the public is reacting to its approach. Even though the 2024 Communications Strategy [report](https://intelligence.org/2024/05/29/miri-2024-communications-strategy/) states, *"Several people have told me we should be more diplomatic and less bold... we're not following their advice,"* part of this year's comms strategy should be to engage in dialogue and help MIRI's existing and new audiences understand the implications for ASI outcomes, while keeping the decisive, bold approach. The responses to this article on the web and on social media are a good example of the kind of exercise I would urge the MIRI comms team to run, given that its AI safety stance directly affects the general public.

{%preview https://screen.studio/share/Uijv3Jg9 %}

The AI Zeitgeyser repeatedly asks: "Where's our The Day After for AI?" Current cultural representations (Terminator, M3GAN) either miss the actual risk mechanism or treat it as entertainment. The 2025 Korean film No Other Choice offers a more useful reference point. It depicts labor unrest and peer competition among factory workers being replaced by robotics in dark factories.
While focused on automation rather than superintelligence, it demonstrates how cultural artifacts can make technological displacement viscerally real for general audiences. This is the kind of emotional resonance MIRI's message currently lacks.

### Reaching different audiences requires different frames

**Research shows existential risk messaging performs poorly** with general audiences; it was the "lowest-performing theme across all demographics" according to the recent study ["From Catastrophic to Concrete"](https://arxiv.org/html/2511.06525). The public responds better to job displacement, impacts on children, and mental health concerns. Although MIRI explicitly rejects softening its message despite this evidence, I would still push this as a consideration: effective governance of artificial intelligence (AI) requires public engagement.

**For right-leaning audiences**, effective angles include national security, competition with China, and the AI arms race. **For left-leaning audiences**, labor displacement, inequality, and corporate accountability resonate.

**Common public confusions MIRI tries to correct:** the "Terminator" physical-robots frame (AI's power comes from information/internet capabilities), the "distant future" assumption (timelines have shortened dramatically), the "evil AI" misconception (risk comes from misaligned goals without malice), and misplaced faith in corporate self-regulation. In summary, one approach for the comms team is matching the messenger to the audience, i.e. Yudkowsky for true believers and attention generation, Soares for accessible explanation, Bourgon for institutional contexts, and so on.

**Gaps in MIRI's approach:** The [AI Zeitgeyser](https://drive.google.com/file/d/1LDKptfPHrZp9MykZkUzitPaP8XQ-A-d0/view) document flagged that in many mainstream appearances, Nate's explanations of the doom scenario often feel "over-polished and too compressed" for general audiences. In my opinion, watching the [ABC interview](https://www.youtube.com/watch?v=1CjM_JQXyhc&t=1888s), I was left deflated and disappointed that the potential doom scenario was hardly given a face or verbalised; if this alarmist approach to calling out ASI doesn't come with an explanation of what human extinction would actually look like, it weakens the argument. In short, the actual mechanism by which superintelligence would lead to extinction gets lost. This is a real communications gap that MIRI needs to address, as recognised in the document.

### Strategic outreach approach and proposal drafts

I propose organizing outreach into three tiers, from short to medium to long term, based on achievability, growth in attention, and audience relevance. Each tier has a lead target, some inspired by the AI Zeitgeyser document and some based on my own view of the comms roadmap that fits MIRI.

**Tier 1: achievable now (social/mainstream audience/talent)**

Creators are already discussing AI with large audiences who could be nudged toward x-risk framing. These individuals often have charisma and reach but lack exposure to MIRI's specific arguments.

**Key angle:** "Talent collapse" challenges: the accelerating displacement of creative and intellectual work that makes AI risk feel personally relevant, not abstractly existential.

**Target:** Jeremy Carrasco
**Platform:** TikTok (300K followers), YouTube (growing)
**Content:** "How to Spot AI Videos" series (650K views on the viral piece)
**Why him:** Mitch Howe explicitly flagged him as someone MIRI wants in the x-risk conversation.
His series ends with the warning that "this tech will only improve", so he's primed for the next logical step, and he has "unusually strong teacher-flavored attention-holding charisma."

**Outreach Proposal:**

Hi Jeremy, I'm reaching out from MIRI (Machine Intelligence Research Institute); we're the team behind the recent NYT bestseller "If Anyone Builds It, Everyone Dies." Your "How to Spot AI Videos" series does something rare: it gives people practical tools while being honest that detection is a losing game. Your line about how "good fakes go unnoticed" is exactly the kind of clear-eyed thinking we need more of.

We'd love to have you in conversation with Nate Soares, our Executive Director, for a piece exploring the question your work naturally raises: what happens when the fakes consistently outpace the spotters? Not doom-mongering, but a practical look at the trajectory of these capabilities and what it means for how we navigate information. The format is flexible and could work as a video collaboration, podcast conversation, or written Q&A. Happy to provide exclusive access to our technical assessments on video generation capabilities as source material. Would you be open to a quick call to explore?

**Tier 2: longer-term cultivation (political/news/opinion)**

The goal is inserting MIRI into the AI policy and governance conversation, both directly with decision-makers and as the trusted expert source journalists call when covering AI governance, risk, and safety. This includes political figures, policy journalists, and think tank voices. The approach varies by target: direct debate proposals for high-profile figures, source relationships for journalists.

**Key angle:** National security and governance failure: regulators cannot keep pace with capabilities development, and voluntary industry commitments consistently fail.

**Target:** David Sacks / AI policy sphere
**Platform:** News publication, opinion piece
**Context:** White House AI and Crypto Czar, with significant AI company investments. Recent coverage: NYT piece (Nov 2025) on conflicts between his policy position and investment portfolio.
**Why relevant:** MIRI's moratorium stance is in direct tension with his "accelerate and profit" position, but that tension is newsworthy. Rather than pitch Sacks directly (likely hostile), position MIRI as an expert commentary source for journalists covering AI policy conflicts.

**Outreach Proposal (to a journalist covering AI policy):**

Hi [Journalist], I'm following up on your recent coverage of David Sacks's role as AI Czar and the conflicts between his policy position and investment portfolio. MIRI (Machine Intelligence Research Institute) has been advocating for international AI governance since before the current administration. Our technical governance team has published draft international agreements and testified to the Canadian Parliament and UN advisory boards on AI verification challenges.

We'd be valuable sources for your continued coverage: we can speak to why voluntary industry commitments consistently fail, what meaningful oversight would actually require technically, and why the "race to the bottom" dynamic between labs makes self-regulation structurally impossible. Nate Soares (Executive Director) or Malo Bourgon (CEO) are available for on-the-record commentary. Our technical governance research is at techgov.intelligence.org. Happy to connect for background if useful.

*Alternative: a direct dialogue pitch to Sacks's team.* The debate over the pace of AI development is happening whether the principals engage or not.
MIRI proposes a recorded public conversation between Nate Soares and David Sacks on the question: "Can market incentives produce safe AI, or do we need binding international constraints?" We disagree fundamentally, and that's precisely why this would be valuable. The public deserves to hear both cases made directly.

**Tier 3: Personal Network (Culture-Intellectual)**

Culture-forward thinkers and editors who shape how their audiences think about technology. These are the tastemakers I identified in my original application as an underexplored channel for MIRI.

**Key angle:** MIRI's ideas are fundamentally humanistic, about preserving human agency and flourishing, but they are currently packaged in technical/academic language that alienates this audience.

**Target:** Joshua Citarella, Doomscroll Podcast
**Platform:** YouTube podcast (Doomscroll), culture/internet commentary
**Content:** Internet culture, politics, technology's impact on society, youth radicalization
**Why him:** His audience engages with technology through a cultural and philosophical lens, exactly the tastemaker demographic MIRI hasn't reached. Doomscroll's format allows for the kind of longer, nuanced conversation that gets lost in mainstream media hits. Joshua is one to two degrees from my existing network, making a warm introduction possible.

**Outreach Proposal:**

Hi Joshua, I'm reaching out about a potential Doomscroll episode with someone from MIRI (Machine Intelligence Research Institute), the team behind the NYT bestseller "If Anyone Builds It, Everyone Dies." Their argument isn't the sci-fi version of AI risk. It's about what happens when we build systems we can't understand or control, and how the incentive structures driving AI development make that outcome likely. At its core, it's a humanist argument about preserving the conditions for human agency and flourishing. I think this maps onto a lot of what Doomscroll explores: how technology shapes culture, how systems outpace our ability to govern them, what futures are actually available to us. Would you be open to a conversation with Nate Soares (Executive Director) or Eliezer Yudkowsky for an episode?

### Proposed Experiments

MIRI's 2026 plans emphasize "trying a range of experiments." The following experiments leverage tools I've built, which could be used directly or serve as inspiration.

**Media Reaction Finder for Sentiment Tracking**

I've built a tool called Media Reaction Finder that aggregates reactions to specific topics across publications. Applied to MIRI's work, this could:

* Track who's covering AI safety and how they're framing it
* Identify journalists who might be receptive to MIRI's framing
* Monitor responses to MIRI media appearances in real time

I ran the TIME op-ed through this tool as an example; it's useful for understanding engagement and sentiment, and I'd urge the comms team to consider it for tracking campaign effectiveness. A minimal sketch of the kind of aggregation involved follows below.
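To make the idea concrete, here is a minimal, hypothetical sketch of the aggregation step: given a handful of reactions to a MIRI appearance, score each one with a tiny illustrative lexicon and average sentiment per outlet. The sample data, lexicon, and scoring are placeholders for illustration only, not the actual Media Reaction Finder internals.

```python
# Hypothetical sketch: aggregate reactions to a MIRI media appearance and
# surface per-outlet sentiment. Lexicon and sample data are illustrative only.
from collections import defaultdict
from dataclasses import dataclass

# Tiny illustrative lexicon; a real pipeline would use a proper sentiment model.
POSITIVE = {"compelling", "important", "persuasive", "clear", "urgent"}
NEGATIVE = {"alarmist", "overblown", "doom", "implausible", "hype"}


@dataclass
class Reaction:
    outlet: str   # publication or platform name
    author: str   # journalist / creator to potentially follow up with
    text: str     # headline, excerpt, or post body


def score(text: str) -> int:
    """Crude word-count sentiment: positive hits minus negative hits."""
    words = {w.strip(".,!?\"'").lower() for w in text.split()}
    return len(words & POSITIVE) - len(words & NEGATIVE)


def summarize(reactions: list[Reaction]) -> dict[str, float]:
    """Average sentiment per outlet, so the comms team can see where the
    framing is landing well and where it is being dismissed."""
    per_outlet: dict[str, list[int]] = defaultdict(list)
    for r in reactions:
        per_outlet[r.outlet].append(score(r.text))
    return {outlet: sum(scores) / len(scores) for outlet, scores in per_outlet.items()}


if __name__ == "__main__":
    sample = [
        Reaction("TIME", "A. Writer", "A clear and urgent case for taking AI risk seriously"),
        Reaction("Example Blog", "B. Poster", "Overblown doom hype from the usual suspects"),
    ]
    for outlet, avg in summarize(sample).items():
        print(f"{outlet}: {avg:+.1f}")
```

The same per-outlet scores could feed a simple dashboard or weekly digest for the comms team, alongside the list of authors worth following up with.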
Applied to "If Anyone Builds It, Everyone Dies," this could create: * An interactive interface for exploring the book's arguments with AI-assisted context can be similar in approach to [The Last Economy](https://ii.inc/web/the-last-economy) by Emad Mostaque (actually another great lead to speak to) * Digestible snippets for social media distribution * A research tool connecting the book's claims to MIRI's technical governance work and online supplement **Prediction market campaign** One challenge the Zeitgeyser identifies: Nate's media appearances often don't fully convey the concrete mechanism of doom. The argument feels abstract. Proposed experiment: A prediction market campaign that makes abstract risk concrete through public stake-taking. * Partner with existing platforms (Metaculus, Polymarket) or build a focused microsite * Create shareable questions: "When will AI systems demonstrate [specific capability]?" * Aggregate sentiment data MIRI can use: "80% of participants believe X by 2030" * Creates engagement (people commit to positions) rather than passive content consumption MIRI Communications Trial Task | Maryam Mazraei January 2026