# Thoughts on the "ATS email experiment"

- The current TT email appears to be performing just fine, so I'm concerned about the premise
  - We still get confused when we talk about success
    - Different success metrics come up in different conversations: sometimes click-through rates, sometimes actual successfully published transfers
  - Please can we clarify and align on these?
  - The current email is evolving and, as mentioned, performing well. Shouldn't that be part of the scope? And how?
  - **Is this *specific* problem even real?**
- The approach, and the aims, lack some structure that they previously had. Not sure why that's happened
- noreply is being stated as a bad experience. I'm worried this assumption is gaining traction
- The notion that "authors don't care about where they publish" is also gaining traction. I struggle to accept this
- We only included publishers at the 'final-proof' stage.
  - They said lots of interesting things
  - There's a gap in our knowledge because we haven't looked at the customised emails editors/publishers write
  - This suggests that there's still a lot of editing to go
  - But we seem to be saying we're nearly there; I don't feel that
- The tiny scope of this pilot will give us a small amount of data that will likely not perform measurably differently, and will be hard to have confidence in (see the sample-size sketch at the end of this note)

## Given the above

Can we reframe the experiment/project into a bigger one, about emails in general?

We could potentially make a big difference by spending a lot of time on improving emails, as a dedicated project. Write better, clearer, nicer content. For all emails.

Better emails all round are clearly needed, and we could approach this more scientifically, by experimenting with different versions (empathy, length, tone) of different letters (rejection, transfer, revision). This would give us more opportunities and more data.

We could work with a content designer/writer who could spend time dissecting content into components and structuring emails technically.
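To put a rough number on the "small amount of data" worry, here is a minimal sample-size sketch. The figures are placeholders (an assumed 20% baseline click-through rate and a hoped-for lift to 23%), not measurements from the pilot; if we align on published transfers as the success metric instead, the same calculation applies with different inputs.

```python
# Rough power calculation: how many emails per variant we'd need to
# detect a modest lift in a click-through-style metric.
# All rates below are illustrative assumptions, not pilot data.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

baseline_ctr = 0.20  # assumed current click-through rate
target_ctr = 0.23    # assumed lift we would want to detect

# Cohen's h effect size for the two proportions
effect = proportion_effectsize(target_ctr, baseline_ctr)

# Solve for the sample size per arm at conventional thresholds
n_per_arm = NormalIndPower().solve_power(
    effect_size=effect,
    alpha=0.05,             # 5% false-positive rate
    power=0.80,             # 80% chance of detecting a real lift
    alternative="two-sided",
)
print(f"~{n_per_arm:,.0f} emails per variant")  # roughly 1,500 with these numbers
```

Even a modest lift needs on the order of 1,500 sends per variant under these assumptions, which a single-email pilot won't reach quickly; a programme covering several letter types and versions would get us to meaningful volumes much faster.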