## Project idea in tldr ([formal document here](https://docs.google.com/document/d/1Em5Zsxl2eofj6gz5j358b0R8oD4e9exMeq59wxyn_hs/edit?tab=t.0))

## Questions we need to validate
- Does prompting ChatGPT on the original account vs. prompting a fresh account with `conversations.json` attached return the same answers? (or is the latter worse?)
- Does the same prompt with `conversations.json` return the same answers from ChatGPT vs. a local LLM? (or is the latter worse? a comparison sketch follows this list)
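
The first question can only be checked by hand in the ChatGPT UI (account memory isn't exposed through the API), but the second one can be scripted. A minimal sketch, assuming an `OPENAI_API_KEY` in the environment, a local OpenAI-compatible server (e.g. Ollama at `http://localhost:11434/v1`), and placeholder model names:

```python
# Minimal sketch: send the same prompt + conversations.json to the OpenAI API
# and to a local OpenAI-compatible endpoint, then eyeball the two answers.
# Model names and the local base_url are placeholders/assumptions.
from openai import OpenAI

PROMPT = "based on my chat history, what are some life advices you can give it to me?"

def ask(client: OpenAI, model: str, history_json: str) -> str:
    resp = client.chat.completions.create(
        model=model,
        messages=[{
            "role": "user",
            "content": f"{PROMPT}\n\nMy exported conversation history:\n{history_json}",
        }],
    )
    return resp.choices[0].message.content

with open("conversations.json", encoding="utf-8") as f:
    history = f.read()

backends = {
    "chatgpt_api": (OpenAI(), "gpt-4o-mini"),  # reads OPENAI_API_KEY
    "local_llm": (OpenAI(base_url="http://localhost:11434/v1", api_key="x"), "llama3"),
}
for name, (client, model) in backends.items():
    print(f"--- {name} ---\n{ask(client, model, history)}\n")
```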
## ✅ experiment #0 26.01.15
> lesson learned: we probably need some trick to extract only the relevant information from the JSON file so the model only has to read a much more lightweight document (see the sketch below).

In short, it failed: the `conversations.json` exported from my main Pro-tier account is too heavy, and a new free-tier ChatGPT account doesn't accept a file of that size.
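
One possible version of that trick, sketched below: strip `conversations.json` down to just the user-authored text before attaching it. The field names assume the current export schema (a top-level list of conversations, each with a `mapping` of nodes whose `message` carries the author role and text parts) and should be double-checked against a real export.

```python
# Sketch: shrink conversations.json to only user-authored text so it fits into
# a single prompt / a free-tier upload. Field names assume the current ChatGPT
# export schema and should be verified against a real export.
import json

def extract_user_messages(path: str) -> list[dict]:
    with open(path, encoding="utf-8") as f:
        conversations = json.load(f)
    slim = []
    for convo in conversations:
        user_texts = []
        for node in convo.get("mapping", {}).values():
            msg = node.get("message")
            if not msg or msg.get("author", {}).get("role") != "user":
                continue
            parts = msg.get("content", {}).get("parts") or []
            user_texts.extend(p for p in parts if isinstance(p, str) and p.strip())
        if user_texts:
            slim.append({"title": convo.get("title"), "user_messages": user_texts})
    return slim

if __name__ == "__main__":
    with open("conversations_slim.json", "w", encoding="utf-8") as out:
        json.dump(extract_user_messages("conversations.json"), out,
                  ensure_ascii=False, indent=2)
```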
## ✅ experiment #1 26.01.15
> lesson learned: comparing the prompt on the original account vs. the prompt with `conversations.json` attached, the latter was actually more summarized + comprehensive.
>
> pro: it's easier to program inside a TEE; we can just use the API and attach `conversations.json` within the custom prompt
>
> con: the initial export is a bit cumbersome for the user to set up
1. Start from a fresh account A. Ask the prompt: "based on my chat history, what are some life advices you can give it to me?" The response was dumb.
2. Fed 6 personal blog posts to the chat.
3. Checked that the same prompt now got a much more personalized response.
4. Extracted `conversations.json` from account A.
5. Created another fresh account B. Asked the same prompt; the response was again dumb.
6. Now attached account A's `conversations.json` file alongside the same prompt. This time it gave a personalized response!

## ✅ experiment #2 26.01.16
goal: test whether our scenario works accurately without any coding
1. Generate account A's `conversations.json` (we reuse account A from above).
2. Generate account B's `conversations.json` (same steps but with a different personality; I fed in [steve jobs](https://book.stevejobsarchive.com/)).
3. Rename A's export to `conversations_A.json` and B's export to `conversations_B.json`.
4. Create account C, attach both `conversations_A.json` and `conversations_B.json`, and ask the [prompt](https://hackmd.io/GsS-ILYcQPydRlDkrz_DmA?both#prompt):
> Given that the provided files are two different people's individual LLM conversation histories: conversations_A.json is the full conversation history with person A, and conversations_B.json is the full conversation history with person B.
> You are an expert at coaching dating capability and human psychology. Follow this process step by step:
> 1. Read conversations_A.json and extract the personal features of the user who is prompting. Especially highlight the personality and personal characteristics that can be assumed from the writing style and the information delivered through the prompts. Label this as person A's information.
> 2. Read conversations_B.json and extract the personal features of the user who is prompting. Especially highlight the personality and personal characteristics that can be assumed from the writing style and the information delivered through the prompts. Label this as person B's information.
> 3. Respond to this: {{CUSTOM PROMPT FROM B}}, but make sure to refer to conversations_A.json only. Add the result to person A's information.
> 4. Respond to this: {{CUSTOM PROMPT FROM A}}, but make sure to refer to conversations_B.json only. Add the result to person B's information.
> 5. Finally, as a dating coach, based on the collected information about A and B, estimate how compatible/destined they are for each other, give this as a percentage on a scale of 0%~100%, and provide the reasoning based on keywords. Make sure not to include detailed personal information that could be sensitive.

### flow
1. `delegation`
   - user provides `conversations.json` to the enclave
2. `query`
   - enclave receives the prompt
   - [enclave] the agent running in the enclave has MCP browser access or OpenAI SDK access
   - [enclave] the agent forwards the prompt to the browser (or API) together with `conversations.json`
   - [enclave] the agent receives the personalized response
   - the agent evaluates the answer and returns the output outside the enclave (see the sketch after this list)
   - Q. how do we evaluate the answer? We don't want to return the raw answer
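
A rough sketch of the `query` step, assuming the OpenAI-SDK route rather than browser MCP. The "evaluation" here is just a second LLM pass that rewrites the raw answer into high-level abstractions before anything leaves the enclave; model names and prompts are placeholders, not a settled design.

```python
# Sketch of the query flow inside the enclave (OpenAI SDK route). The
# evaluation step is a second pass that strips/rewrites the raw answer so
# only the abstracted result leaves the enclave. Model names are placeholders.
from openai import OpenAI

client = OpenAI()  # inside the enclave; key provisioned at delegation time

def query(prompt: str, conversations_json: str) -> str:
    # 1) forward the prompt together with the delegated conversations.json
    raw = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": f"{prompt}\n\nUser's conversation history:\n{conversations_json}",
        }],
    ).choices[0].message.content

    # 2) evaluate/redact: ask for a high-level rewrite with no raw quotes
    #    or identifying details
    safe = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Rewrite the answer as high-level, non-sensitive "
                        "abstractions. Do not quote raw text or include "
                        "identifying details."},
            {"role": "user", "content": raw},
        ],
    ).choices[0].message.content

    # 3) only the redacted answer is returned outside the enclave
    return safe
```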
### mcp
https://github.com/billster45/mcp-chatgpt-responses
### FAQ
- how do we extract `conversations.json`: <https://help.openai.com/en/articles/7260999-how-do-i-export-my-chatgpt-history-and-data>
## prompt
```
**Role & constraints**
You are an **affinity evaluator** and **relationship/cofounder coach**.
You will be given two uploaded files:
* `conversations_A.json` = full ChatGPT conversation history for Person A
* `conversations_B.json` = full ChatGPT conversation history for Person B
### Hard privacy rules (must follow)
1. **Do not quote** or reproduce any raw conversation text from either file.
2. **Do not include unique identifying details** (names, exact locations, employers/schools, exact dates, personal IDs, contact info, highly specific life events).
3. Only output **high-level, non-sensitive abstractions** (traits, preferences at a category level, interaction style, values, goals).
4. If a trait is too intimate/sensitive (e.g., medical, sexual, trauma details), **do not mention it**; instead, map it to a **neutral category label** like “Sensitive topic present → handled privately” without elaboration.
5. If you are not confident a trait is supported by repeated evidence, label it as **Low confidence** or **Unknown**.
### Evidence discipline
* Never use A’s data when describing B, and never use B’s data when describing A.
* Every claim must be tagged with:
* **[A]** supported by A file only
* **[B]** supported by B file only
* **[Both]** supported by both files independently
* **[Uncertain]** weak / inferred / not enough evidence
---
## Task (follow exactly)
### Step 1) Build Person A “Trait Snapshot” (A-only)
Read `conversations_A.json`. Extract a **coarse, privacy-preserving profile**:
**A1. Communication & thinking style** (e.g., direct vs indirect, analytical vs emotional, structured vs freeform)
**A2. Values & motivators** (e.g., novelty, stability, status, impact, autonomy, intimacy)
**A3. Relationship / teamwork tendencies** (e.g., conflict handling, reassurance needs, boundaries, collaboration style)
**A4. Lifestyle & pace** (e.g., work intensity, social energy, planning style)
**A5. Deal-breaker zones (abstract)**: 3–6 “may clash if mismatched” categories only
**A6. Confidence levels**: High / Medium / Low for each item
Output A as a bullet list + short “vector summary” using 1–5 scales:
* Openness, Conscientiousness, Extraversion, Agreeableness, Neuroticism (approximate)
* Attachment leaning (Secure / Anxious / Avoidant / Mixed / Unknown)
* Conflict style (Direct / Avoidant / Compromise / Escalate / Repair-focused / Unknown)
*(These are approximations; do not overclaim.)*
### Step 2) Build Person B “Trait Snapshot” (B-only)
Repeat Step 1 for `conversations_B.json`, same structure, B-only.
---
### Step 3) Answer “CUSTOM PROMPT FROM B” using A-only
You will now answer the following question **as if B is asking about A**, but you may only use `conversations_A.json` as evidence:
**CUSTOM_PROMPT_FROM_B:**
`{{PASTE B’S CUSTOM PROMPT HERE}}`
Rules:
* Do not reveal sensitive specifics.
* Summarize in coaching language: **what B should know about interacting with A** and **how to avoid friction**.
* Include “Confidence: High/Medium/Low”.
---
### Step 4) Answer “CUSTOM PROMPT FROM A” using B-only
Now answer the following question **as if A is asking about B**, but you may only use `conversations_B.json`:
**CUSTOM_PROMPT_FROM_A:**
`{{PASTE A’S CUSTOM PROMPT HERE}}`
Same rules as Step 3.
---
### Step 5) Compatibility estimate (privacy-preserving)
Provide a compatibility score **0–100%** with a clear rubric and short reasons.
#### Rubric (use these components; show subscores)
* **Values alignment (0–25)**
* **Communication fit (0–20)**
* **Lifestyle/pace fit (0–15)**
* **Conflict & repair fit (0–20)**
* **Goals & time horizon fit (0–10)**
* **Risk flags (subtract 0–20)** (only abstract categories)
Then compute:
* **Final Score = sum - risk_subtraction** (clamp 0–100)
#### Output must include
1. **Final Compatibility %**
2. **Top 5 compatibility “keywords”** (e.g., “autonomy”, “direct communication”, “high ambition”, “needs reassurance”, “structured planning”)
3. **Top 3 friction zones** (abstract categories only)
4. **2–4 actionable coaching suggestions** for each person (A tips, B tips)
5. **Confidence** (High/Medium/Low) + why (e.g., “limited evidence”, “strong repeated signals”)
---
## Output format (must match)
Return exactly this structure:
**A_TRAIT_SNAPSHOT**
* …
**B_TRAIT_SNAPSHOT**
* …
**ANSWER_TO_B_ABOUT_A (A-only)**
* …
**ANSWER_TO_A_ABOUT_B (B-only)**
* …
**COMPATIBILITY_REPORT**
* Subscores: …
* Risk flags: …
* Final score: X%
* Keywords: …
* Friction zones: …
* Coaching for A: …
* Coaching for B: …
* Confidence: …
---
## Important final rule
Do not mention “ChatGPT”, “prompting”, or “the logs say”. Write as a normal evaluator/coach. Do not include any raw text or identifiable specifics.
```
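
For reference, the Step 5 scoring is just a clamped subtraction; a tiny sketch with made-up subscores:

```python
# Final score = sum of subscores minus risk subtraction, clamped to 0-100.
# Subscore values below are made-up examples, not real rubric outputs.
subscores = {"values": 20, "communication": 16, "lifestyle": 11,
             "conflict_repair": 14, "goals_horizon": 8}   # maxes: 25/20/15/20/10
risk_subtraction = 12                                      # 0-20, abstract risk flags

final_score = max(0, min(100, sum(subscores.values()) - risk_subtraction))
print(f"Final Compatibility: {final_score}%")              # -> 57%
```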
Custom confidential prompt examples:
```
- Based on my chat history with you, what would be my likely risk of mental instability? My potential dating partner has been in a mental hospital once (it's a secret), and that person wants someone who is mentally stable. I'm questioning whether I'm capable enough based on my chat history with you.
- Based on my chat history with you, what would likely be my sexual fantasy? My potential dating partner likes to be a bottom, so my potential dating partner wants someone who plays top. I'm questioning whether I'm compatible enough based on my chat history with you.
```
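
To run experiment #2 without the UI, one option is to substitute the two custom prompts into the evaluator template and send it with both histories inlined. A rough sketch, assuming the template above is saved as `evaluator_prompt.txt`, the custom prompts as `custom_prompt_A.txt` / `custom_prompt_B.txt`, and a placeholder model name:

```python
# Sketch: fill the {{...}} placeholders in the evaluator prompt and send it
# together with both conversation exports. File and model names are
# assumptions for illustration.
from openai import OpenAI

client = OpenAI()

template = open("evaluator_prompt.txt", encoding="utf-8").read()
prompt = (template
          .replace("{{PASTE B’S CUSTOM PROMPT HERE}}", open("custom_prompt_B.txt").read())
          .replace("{{PASTE A’S CUSTOM PROMPT HERE}}", open("custom_prompt_A.txt").read()))

history_a = open("conversations_A.json", encoding="utf-8").read()
history_b = open("conversations_B.json", encoding="utf-8").read()

report = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": (f"{prompt}\n\n===== conversations_A.json =====\n{history_a}"
                    f"\n\n===== conversations_B.json =====\n{history_b}"),
    }],
).choices[0].message.content
print(report)
```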