# Intro

AI is a versatile catalyst because it can adapt to many roles depending on context:

* **Thinking partner**: helps people deepen reflection, challenge assumptions, and explore scenarios.
* **Matchmaker**: connects people, projects, and resources based on shared values and goals.
* **Scanner and radar**: continuously surfaces funding opportunities, policy shifts, and emerging trends.
* **Data and evidence engine**: makes it easier to collect, analyze, and share insights responsibly.
* **Amplifier of impact and storytelling**: turns complex knowledge into accessible narratives.
* **Automator**: takes over repetitive tasks, freeing human capacity for creativity and collaboration.

Its versatility comes from being context-aware, adaptive, and augmenting rather than replacing human judgment: whether in personal reflection, community building, or systemic change, AI can flexibly shift roles to meet diverse needs.

# A fundamental choice: open vs. closed AI

At Changemappers, AI can be implemented as either open-source or closed-source, each with trade-offs relevant to our mission of connecting changemakers and supporting impact-driven projects:

**Open-source AI**

* **Pros**: Transparent code, community-driven innovation, faster experimentation, and easier auditing for bias and alignment with our values. Flexible for tailoring to our specific archetypes and workflows.
* **Cons**: May require more in-house expertise to maintain and scale effectively.

**Closed-source AI**

* **Pros**: Polished, stable infrastructure; easier integration with existing tools and dashboards. Reliable performance with minimal technical maintenance.
* **Cons**: Opaque decision-making (“black box”), potential vendor lock-in, slower adaptation to our evolving needs, and less customizability for our unique changemaker workflows. Privacy and ethical concerns.

# 1. AI Thinking Partner (Cognitive Companion)

**What it is:** a conversational, context-aware assistant that helps changemakers think better, not one that replaces their judgment. It supports structured reflection, systems mapping, scenario drilling, causal loop exploration, and hypothesis testing.

User stories

* As a **Local Practitioner**, I want an AI to run a 20-minute guided reflection on a stuck local project, so that I surface root causes and 3 concrete next steps.
* As an **Innovation Catalyst**, I want the AI to run counterfactual scenarios for a prototype, so that I can see plausible failure modes and mitigation options.
* As a **Network Weaver**, I want the AI to synthesize discussion notes into a one-page systems map, so that I can quickly explain leverage points to partners.
* As a **Strategic Advisor**, I want the AI to emulate structured Socratic questioning and challenge assumptions (with sources), so that decision-makers’ blind spots are exposed.

Acceptance criteria (MVP)

* Interactive session history + exportable “thinking artefact” (map, hypothesis list).
* AI returns alternative hypotheses (≥3) with confidence estimates and source provenance.
* User can mark/annotate which suggestions are useful; the system learns preferences.

Implementation notes / risks

* Start with templates (problem framing, 5 whys, hypothesis table) → iterate with real users.
* Log provenance; keep “assistant suggestions” explicitly labeled to avoid over-trust.
* Add a “challenge me” toggle for aggressive counterfactuals vs. supportive prompts.
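The MVP criteria above call for an exportable “thinking artefact” carrying ≥3 hypotheses, confidence estimates, and provenance. A minimal sketch of what that artefact could look like as a data model; all class and field names are illustrative assumptions, not a prescribed schema:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Hypothesis:
    statement: str                   # the alternative explanation offered to the user
    confidence: float                # 0.0-1.0, surfaced explicitly to avoid over-trust
    sources: list[str]               # provenance: URLs or document IDs backing the claim
    user_rating: int | None = None   # set when the user marks a suggestion useful

@dataclass
class ThinkingArtefact:
    """Exportable output of a guided session (map, hypothesis list)."""
    session_id: str
    problem_framing: str
    hypotheses: list[Hypothesis] = field(default_factory=list)
    created_at: datetime = field(default_factory=datetime.utcnow)

    def is_mvp_complete(self) -> bool:
        # MVP criterion: at least 3 alternative hypotheses, each with provenance.
        return len(self.hypotheses) >= 3 and all(h.sources for h in self.hypotheses)
```

Keeping suggestions as structured records (rather than free text) is what makes the “mark/annotate” feedback loop and preference learning tractable later.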
---

# 2. Deepen Thinking Patterns (Cognitive Apprenticeship)

**What it is:** tools and micro-learning nudges that train changemakers in mental models (systems thinking, Spiral Dynamics, Theory of Change (ToC), complexity heuristics).

User stories

* As a **Local Practitioner**, I want short micro-lessons tied to my project (e.g., “how to spot systemically risky assumptions”), so I can apply them immediately.
* As an **Institutional Changemaker**, I want a report that maps my org’s decision patterns to cognitive biases, so I can coach leadership.
* As a **Coach / Mentor**, I want the AI to propose provocations and exercises tailored to a coachee’s profile, so sessions deepen structural thinking.

Acceptance criteria

* 5 canonical mental model modules available; each has a short tutorial, 2 practice prompts, and a checklist.
* Progress tracking + suggested next readings/resources (curated).

Implementation notes

* Use short interactive exercises rather than long readings; pair with live facilitation templates.
* Guard against ideological drift by offering multiple models and contrasting points of view.

---

# 3. Matchmaking & Network Orchestration (Value-based matching)

**What it is:** an explainable matching engine that pairs projects ↔ volunteers, funders ↔ initiatives, mentors ↔ mentees, and coalitions ↔ tasks, prioritized by values, skills, availability, and growth goals. Match reasons must be auditable.

User stories

* As a **Resource Mobilizer**, I want a ranked list of 10 candidate communities/projects aligned with my funder’s priorities, with the matching rationale, so I can shortlist fast.
* As a **Volunteer**, I want the platform to suggest micro-tasks that match my skills and learning goals, so I grow while helping.
* As a **Network Weaver**, I want suggested triads (A+B+C) that would produce synergy, so I can broker coalition pilots.

Acceptance criteria

* Matches show explicit reasons (e.g., shared values score 8/10; complementary skills yes/no).
* Feedback loop: users accept/decline matches and the AI updates weights.
* Privacy controls: users control which profile fields are visible to which archetypes.

Implementation notes / risks

* Start with simple explainable scoring (values overlap, skills match, availability) and iterate toward more nuanced graph-based algorithms; a minimal scoring sketch follows this section.
* Avoid opaque “black box” ranking: always show why.
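As referenced in the implementation notes, a minimal sketch of the simple explainable scoring this section proposes, assuming hypothetical profile fields (`values`, `skills`, `needs`, `hours_per_week` as plain dict keys) and hand-set starting weights:

```python
def match_score(a: dict, b: dict) -> dict:
    """Score a candidate match and return the rationale alongside the number.

    Profiles are plain dicts with hypothetical keys: 'values' (set of tags),
    'skills' (set), 'needs' (set), 'hours_per_week' (int). Weights are
    illustrative starting points, to be tuned from accept/decline feedback.
    """
    values_overlap = len(a["values"] & b["values"]) / max(len(a["values"] | b["values"]), 1)
    skills_fit = len(a["skills"] & b["needs"]) / max(len(b["needs"]), 1)
    availability = min(a["hours_per_week"], b["hours_per_week"]) / max(b["hours_per_week"], 1)

    weights = {"values": 0.5, "skills": 0.3, "availability": 0.2}
    score = (weights["values"] * values_overlap
             + weights["skills"] * skills_fit
             + weights["availability"] * availability)

    # Every match ships with its reasons: no opaque ranking.
    return {
        "score": round(score, 2),
        "reasons": {
            "shared values": f"{values_overlap:.0%} overlap",
            "complementary skills": f"{skills_fit:.0%} of needs covered",
            "availability": f"{availability:.0%} of requested hours",
        },
    }
```

The design point is that the `reasons` dict travels with every score, satisfying the “always show why” rule; the weights would later be updated from the accept/decline feedback loop named in the acceptance criteria.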
---

# 4. Opportunity & Funding Scanner (Dealflow & Alerts)

**What it is:** continuous scanning and ranking of grants, tenders, awards, philanthropy streams, and local funding opportunities, surfacing only those that match the project’s stage, values, and risk tolerance.

User stories

* As a **Local Practitioner**, I want alerts for funding calls in Hungary (and the EU) that fit my small pilot, so I don’t miss deadlines.
* As a **Resource Mobilizer**, I want a dashboard of donor trends and rising themes, so I can craft pitches.
* As an **Innovation Catalyst**, I want the AI to pre-fill draft grant narratives using my Theory of Change and impact evidence, so I reduce admin time.

Acceptance criteria

* Feed ingests X public sources (start with EU funds, major foundations, local gov tenders).
* Relevance score + estimated fit (based on budget size and eligibility) and an auto-generated application outline.
* Exportable calendar with deadlines and suggested team owners.

Implementation notes / risks

* Prioritize feed sources with stable APIs or structured notices; maintain human verification for the shortlist.
* Monitor ethics: don’t surface predatory funders or those with harmful conditions.

---

# 5. Trusted Data Collection & Participatory Sensing

**What it is:** community-owned data collection that preserves consent, provenance, and local context: structured templates, mobile reporting, and privacy-preserving aggregation (differential privacy / synthetic data when needed).

User stories

* As a **Local Practitioner**, I want to run a short community wellbeing poll and have the AI summarize sentiment and themes, so I can present results to funders.
* As a **Network Weaver**, I want harmonized datasets from multiple communities (anonymized), so I can identify replicable interventions.
* As a **Researcher partner**, I want metadata provenance and consent records attached to every dataset.

Acceptance criteria

* Consent workflow built into each collection template.
* Data export includes a provenance ledger and anonymization flags.
* Basic QA: duplicate detection, outlier detection, and summary metrics.

Implementation notes / risks

* Use participatory methods: communities own the raw data; the platform provides aggregation tools (a minimal privacy-preserving aggregation sketch appears after section 7).
* For sensitive topics, default to local-first storage plus strong encryption and minimum viable sharing.

---

# 6. Policy Change Radar & Advocacy Assistant

**What it is:** monitoring of legal/regulatory changes, public consultations, and policy debates, mapped to opportunities for strategic advocacy (who to influence, on what timeline, with which arguments that move the needle).

User stories

* As a **System Disruptor**, I want early warnings on policy proposals that affect my campaign, so I can mobilize rapid responses.
* As an **Institutional Changemaker**, I want summarized consultation texts and a recommended policy brief (with citations) to brief leadership.
* As a **Global Amplifier**, I want sentiment and narrative shifts tracked, so I know when to run counter-narratives.

Acceptance criteria

* Continuous feed of policy changes filtered by region/theme.
* Auto-drafted policy briefs with a citation list and recommended asks.
* Advocacy playbook generator: message + channels + suggested partners + timeline.

Implementation notes / risks

* Combine structured sources (parliament APIs) with news/social listening.
* Risk: avoid surveillance framing; present only public information and consented inputs.

---

# 7. Impact Evaluation & Adaptive Learning

**What it is:** lightweight, causal-aware M&E tools: ToC authoring, indicator suggestions, simple quasi-experimental estimators, cyclic learning prompts, and dashboards that turn evidence into adaptation recommendations.

User stories

* As a **Resource Mobilizer**, I want a one-click impact brief that synthesizes outcome indicators and suggests next experiments to improve ROI.
* As an **Innovation Catalyst**, I want the AI to propose an A/B pilot design to test two intervention variants and estimate the sample size.
* As a **Local Practitioner**, I want automated monthly learning prompts, based on submitted data, that suggest 2 small pivots.

Acceptance criteria

* ToC editor + auto-suggested indicators based on intervention type.
* Simple causal inference recommendations (e.g., matched comparison, stepped-wedge) with clear assumptions.
* Learning loop: evidence → 3 prioritized adaptations.

Implementation notes / risks

* Build templates for common project types (training, livelihoods, policy campaigns).
* Flag uncertainty explicitly; don’t overclaim causality.
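The Innovation Catalyst story above asks the AI to estimate sample size for an A/B pilot. A stdlib-only sketch of the standard two-proportion calculation such an assistant might run; the function name and defaults are assumptions, not the platform’s API:

```python
import math
from statistics import NormalDist

def ab_pilot_sample_size(p1: float, p2: float,
                         alpha: float = 0.05, power: float = 0.80) -> int:
    """Per-arm sample size for comparing two proportions (normal approximation).

    p1, p2: expected outcome rates under the two intervention variants.
    Returns participants needed per arm for a two-sided test. The usual
    assumptions (independent observations, stable rates) should be flagged
    to the user, per the "don't overclaim causality" note above.
    """
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # e.g. 1.96 for alpha = 0.05
    z_power = NormalDist().inv_cdf(power)           # e.g. 0.84 for 80% power
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_power * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p1 - p2) ** 2)

# Example: detecting an uplift from a 30% to a 45% outcome rate
# requires roughly 163 participants per variant.
print(ab_pilot_sample_size(0.30, 0.45))
```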
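Section 5 names differential privacy as one privacy-preserving aggregation option. As a concrete illustration of the idea, a minimal Laplace-mechanism sketch for a simple count query; this is a teaching-scale example, not the platform’s aggregation API:

```python
import math
import random

def dp_count(true_count: int, epsilon: float = 1.0) -> float:
    """Differentially private count via the Laplace mechanism.

    A count query has sensitivity 1 (adding or removing one respondent
    changes it by at most 1), so Laplace(0, 1/epsilon) noise suffices.
    Smaller epsilon means stronger privacy and noisier answers. A real
    deployment would also track cumulative privacy budget across queries.
    """
    scale = 1.0 / epsilon
    u = random.random() - 0.5                # uniform on [-0.5, 0.5)
    # Inverse-CDF sampling, since the stdlib has no Laplace distribution.
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise
```

Noisy counts like this let communities publish aggregate poll results while keeping any single respondent’s answer deniable, which is the property the consent-first design in section 5 relies on.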
---

# 8. Narrative & Storytelling Engine (Amplification)

**What it is:** tools that help identify and craft authentic narratives for diverse audiences, produce ready-to-publish assets (short social copy, one-pagers, multimedia scripts), and recommend Global Amplifiers likely to care.

User stories

* As a **Global Amplifier**, I want a pack of 3 human stories from the field (short, validated) plus visuals, so I can responsibly amplify.
* As a **Local Practitioner**, I want a privacy-safe, consented storytelling workflow that captures voices ethically.
* As a **Network Weaver**, I want the platform to suggest which media outlets and influencers to approach.

Acceptance criteria

* Story packs include consent metadata, suggested channels, and localization variants.
* Ethical checklist for amplification (consent, harm assessment).
* One-click export to social and email templates.

Implementation notes / risks

* Story harvesting must include explicit consent and a benefit-sharing mechanism.
* Avoid creating “hero” narratives that strip agency from communities.

---

# 9. Governance, Ethics & Safety Assistant

**What it is:** tools that support polycentric governance: consent management, algorithmic transparency, audit logs, bias checks, dispute resolution facilitation, and policy templates for community governance.

User stories

* As a **Network Weaver**, I want an AI to run a bias and equity audit of a matching algorithm, so I can report to my community.
* As an **Institutional Changemaker**, I want templates for community governance (sociocracy + transparency rules) adapted to our culture.
* As a **Local Practitioner**, I want a safe reporting channel for abusive behaviour with clear remediation steps.

Acceptance criteria

* Explainability UI for any automated decision (who was matched and why).
* Governance templates exportable and editable.
* Incident logging workflow with triage suggestions.

Implementation notes / risks

* Governance must be community-driven: the AI suggests, but communities decide.
* Build escalation and human-in-the-loop safeguards.

---

# 10. Automation & Productivity (Practical Ops)

**What it is:** automation of routine admin: scheduling, volunteer coordination, simple legal forms, content tagging, and meeting-note synthesis.

User stories

* As a **Local Practitioner**, I want automatic volunteer scheduling for recurring tasks, so I don’t lose time on coordination.
* As an **Innovation Catalyst**, I want meeting notes turned into action items with assigned owners.
* As a **Resource Mobilizer**, I want templated MOUs and invoices prefilled.

Acceptance criteria

* Meeting notes → action items accuracy >70% (user-validated).
* Volunteer schedule conflict detection.
* Templates library accessible and localizable.

Implementation notes

* Keep automation reversible and human-validated.
* Prioritize time-saving features with low risk.

---

# 11. Knowledge Hub, Synthesis & Localisation

**What it is:** a curated, searchable repository of cases, playbooks, and research, automatically summarized, translated, and localized to community contexts.

User stories

* As a **Local Practitioner**, I want a localized “how-we-did-it” playbook from a similar village, translated and adapted to local norms.
* As an **Innovation Catalyst**, I want to pull the 5 best empirical studies on my intervention in 10 minutes.
* As a **Coach**, I want templates that fuse Liberating Structures and Spiral Dynamics frames.

Acceptance criteria

* Fast search with relevance ranking and provenance.
* Localization suggestions (language, cultural notes).
* Human curation layer for top resources.

Implementation notes

* Use community tagging and reputation to surface the best resources.
* Allow export into workshop packs.

---

# 12. Futures, Scenario & Resilience Modeling

**What it is:** lightweight scenario simulators for climate, funding volatility, policy shocks, and social resilience that help plan contingency actions.

User stories

* As a **System Disruptor**, I want to model how a policy shock could affect my campaign trajectory.
* As a **Local Practitioner**, I want to test 3 plausible climate scenarios and see priority resilience actions.
* As a **Resource Mobilizer**, I want funding stress tests for my grantee portfolio.

Acceptance criteria

* Scenario builder with levers, outputs, and sensitivity insights.
* Exportable contingency plan.

Implementation notes

* Offer simple models with clear assumptions, not black-box forecasts.
* Use scenario narratives paired with quantitative outputs.
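The Resource Mobilizer story above asks for funding stress tests. A minimal sketch of what a transparent simulator could look like: a Monte Carlo run whose levers (`cut_prob`, `cut_size`) are exactly the assumptions a scenario builder would expose to the user. All names are illustrative:

```python
import random

def funding_stress_test(grants: list[dict], cut_prob: float = 0.3,
                        cut_size: float = 0.5, runs: int = 10_000) -> dict:
    """Monte Carlo funding stress test for a grantee portfolio (illustrative).

    grants: [{'name': ..., 'amount': ...}]. In each run, every grant is cut
    by `cut_size` with probability `cut_prob`. The levers are simple and
    inspectable on purpose: clear assumptions, not a black-box forecast.
    """
    baseline = sum(g["amount"] for g in grants)
    totals = []
    for _ in range(runs):
        total = sum(g["amount"] * (1 - cut_size if random.random() < cut_prob else 1)
                    for g in grants)
        totals.append(total)
    totals.sort()
    return {
        "baseline": baseline,
        "median": totals[runs // 2],
        "worst_5pct": totals[int(runs * 0.05)],  # 5th-percentile funding level
    }
```

Pairing numbers like `worst_5pct` with a short scenario narrative (per the implementation notes) keeps the output usable for contingency planning rather than false precision.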
---

# Prioritization (Practical: what to build first)

1. **MVP (0–3 months)**: Matchmaking (explainable), Opportunity Scanner (basic feeds), AI Thinking Partner (templated guided sessions), meeting notes → actions.
2. **Phase 2 (3–9 months)**: Trusted Data Collection with consent flows, Impact Evaluation templates, Knowledge Hub.
3. **Phase 3 (9–18 months)**: Policy Radar, Governance & Ethics suite, Scenario Modeling, advanced causal inference tools.

Why these priorities: matchmaking and opportunity scanning deliver immediate user value (they connect people and money); the thinking partner and admin automation reduce cognitive load and burnout (core MTP); data, evaluation, and policy tools follow so that evidence and influence scale safely.

---

# Minimum viable data & privacy checklist (must-have)

* Explicit consent statements for every data capture.
* Provenance metadata attached to every dataset and story.
* Role-based access controls and visibility filters.
* Audit log for automated decisions (who/what/why).
* Local-first data ownership model (communities can request full deletion).

---

# Key success metrics (what to measure first)

* Time saved per project lead per month (admin reduction).
* Match acceptance rate and downstream activation (volunteers placed, grants applied).
* Number of validated local projects receiving funding due to the platform.
* Community satisfaction / trust index (quarterly).
* Evidence → adaptation loop speed (how quickly projects iterate after learning prompts).

---

# Short, brutal risks & mitigations

* **Over-trust in AI advice** → Display provenance + confidence + required human checks.
* **Privacy harms** → Default to minimal sharing; strong consent and anonymization defaults.
* **Power concentration** (amplifiers/gatekeepers) → Transparent partner selection + rotation + audited decisions.
* **Algorithmic bias** → Regular bias audits + participatory governance.
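To make the privacy checklist above concrete, a minimal sketch of the two records it implies: a consent/provenance record attached to every dataset and story, and an audit-log entry for automated decisions. Field names are assumptions, not a finalized schema:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass(frozen=True)
class ProvenanceRecord:
    """Attached to every dataset and story (checklist: consent + provenance)."""
    subject_id: str            # pseudonymous ID of the contributor
    consent_statement: str     # exact text shown at capture time
    consented_at: datetime
    collected_by: str          # community/project that owns the raw data
    anonymized: bool = False
    deletable: bool = True     # local-first ownership: deletion on request

@dataclass(frozen=True)
class DecisionAuditEntry:
    """Audit log for automated decisions: who/what/why (checklist item 4)."""
    decision_id: str
    subject: str               # who the decision affected
    action: str                # e.g. 'match_suggested'
    rationale: dict            # the explicit reasons shown in the explainability UI
    timestamp: datetime = field(default_factory=datetime.utcnow)
```

Making both records immutable (`frozen=True`) mirrors the ledger-like guarantees the checklist asks for: records are appended, never silently edited.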