# Transferscript Architecture
## Server
Runs on `uvicorn` + `fastapi`, and provides (in order of priority; a minimal endpoint sketch follows this list):
* information to the client
  * In the first instance: microscope name, names of ongoing visits, visit duration
  * Total amount of data acquired/transferred in visit
  * Data processing information
* an interface to PyPI
  * to allow bootstrapping the client and installation of packages on microscope machines, which are on an isolated network and can't access PyPI directly
* an interface to ISPyB
  * to get the above information
  * to create DataCollections
* an interface to Zocalo
  * to trigger data analysis on collection
* an interface to Graylog
  * for auditing/monitoring purposes
* a back-channel from data analysis to data acquisition
  * Bisect software uses data analysis outcomes to make data acquisition choices. This works by transferring a file to the microscope machine.
* a web interface to the user
  * showing information that is relevant throughout acquisition
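
As an illustration of the first item, a minimal `fastapi` sketch of what the visit-information endpoints could look like. The route names and the `Visit` model are assumptions, not a settled API, and the ISPyB lookup is stubbed out:

```python
from datetime import datetime
from typing import List

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()


class Visit(BaseModel):
    # Placeholder model; real fields would come from ISPyB
    name: str
    start: datetime
    end: datetime


@app.get("/microscope")
def microscope():
    # In practice this would come from server configuration
    return {"name": "m02"}


@app.get("/visits", response_model=List[Visit])
def visits():
    # In practice this would query ISPyB for ongoing visits
    return []
```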
## Client
Runs on `textual`; we will probably want `requests` or a similar helper library to easily interface with the server (a minimal sketch of that interaction follows this list).
* Connects to the server to get basic visit information
* Asks the user for the relevant visit (as there will normally be two commissioning visits plus the actual visit)
* Monitors a local directory containing either files from the microscope or detector images
* Triggers and monitors an rsync process to push these files onto the Diamond file system
* Sends notifications to the server about seen files and transferred files
* Shows transfer status and summary information
* Checks with server whether the client version is appropriate, and can self-update when needed
* Receives updates from the server to show ongoing information such as amount of data seen/transferred (therefore both clients, microscope and detector, can show the same information)
* Can allow some interactive control, eg. start a new data collection
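
A minimal sketch of the client side of this interaction, assuming `requests` and the hypothetical `/visits` and `/version` routes from the server sketch above:

```python
from typing import List

import requests

SERVER = "http://transferscript-server.example:8000"  # hypothetical address


def get_visits() -> List[dict]:
    # Fetch ongoing visits so the user can pick the relevant one
    response = requests.get(f"{SERVER}/visits", timeout=10)
    response.raise_for_status()
    return response.json()


def version_ok(client_version: str) -> bool:
    # Ask the server whether this client version is still acceptable;
    # a "no" would trigger the self-update path
    response = requests.get(
        f"{SERVER}/version", params={"client": client_version}, timeout=10
    )
    response.raise_for_status()
    return response.json().get("ok", False)
```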
The rsync process will need to run in a separate thread/process; the user interface, file monitoring, and server communication likewise.
Communication between these can most likely be done via `Queue`s, as in the sketch below.
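
A sketch of how the rsync worker and the `Queue` hand-off could fit together, using `subprocess` directly (the planning notes below mention `procrunner` as an alternative); paths and rsync flags are illustrative only:

```python
import queue
import subprocess
import threading
from typing import Optional

# Each entry is one transferred file name; None signals completion.
transfers: "queue.Queue[Optional[str]]" = queue.Queue()


def run_rsync(source: str, destination: str) -> None:
    # --itemize-changes emits one parseable line per transferred file
    process = subprocess.Popen(
        ["rsync", "--archive", "--itemize-changes", source, destination],
        stdout=subprocess.PIPE,
        text=True,
    )
    for line in process.stdout:
        transfers.put(line.strip())
    process.wait()
    transfers.put(None)  # sentinel: transfer finished


worker = threading.Thread(
    target=run_rsync, args=("/local/data/", "diamond:/dls/m02/data/"), daemon=True
)
worker.start()

# The UI / server-communication threads would consume from `transfers`
while (item := transfers.get()) is not None:
    print("transferred:", item)
```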
We will need some logic to deal with interruptions, e.g. a client crash during a running rsync.
The server will have to remember some recent state to facilitate this. Leftover rsync processes are another concern.
## Deployment
When we reach the milestone of having a self-updating client, we can install it on Krios-1 (m02) and test it on in-house visits.
## Planning notes (2021-11-18)
* Server:
  * Coordinate clients
  * Trigger processing
  * Data collection setup
  * Maybe REST? Maybe stream? (FastAPI)
  * Logging, feed into some dashboard
  * Interfaces with ISPyB & Zocalo
  * Bisect support (back-channel)
  * Sanity checks (gain)
* Client:
  * Run rsync, monitor process, parse output (separate thread; procrunner, sketch below)
  * Show status, including whether the other machine is connected
  * Automatic updates
  * Some user interface (textual)
  * Take data collection parameters, control processing
* Browser:
  * No image viewing
  * Transfer progress/volume/rates
  * Data collections overview
https://diamondlightsource.slack.com/archives/G01NHU4FBR9/p1637248607045000
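
For the `procrunner` option above, a hedged sketch: `procrunner.run` takes per-line output callbacks, which suits parsing rsync output as it arrives. The command, destination, and parsing are illustrative, and the callback's line type may differ between procrunner versions:

```python
import procrunner


def parse_rsync_line(line):
    # Called for every stdout line as rsync runs; decode defensively
    text = line.decode() if isinstance(line, bytes) else line
    print("rsync:", text.rstrip())


result = procrunner.run(
    ["rsync", "--archive", "--itemize-changes", "/local/data/", "remote:/dls/m02/"],
    callback_stdout=parse_rsync_line,
)
print("rsync exited with code", result.returncode)
```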