# Hand-Off Information

I'm writing down information that either was never written down anywhere, or that I had to learn by experience. Anything not covered here is probably documented elsewhere, so I won't repeat it in detail.

## Getting Started

Make sure you have a [Fritz](https://fritz.science) account. This is necessary for almost everything you will be doing. Most of my scripts use my own Fritz token, so once you create an account, go to your [profile](https://fritz.science/profile) and scroll to the bottom, where you can generate a unique API token. **This is extremely important, and needs to be kept secure. Don't share it with anyone.** (You'll pass this token in the `Authorization` header of your API requests; the examples later in this document show how.)

You will also need the TNS API credentials; contact [me](mailto:mrchu@ucsd.edu) to get them.

You should also be a member of the MMO and ZTF Slack channels. If you are not a member, let me know and I can invite you.

## Daily Tasks

### TNS Reporting & Misc. Fritz Functions

The most important daily task is reporting newly classified transients to TNS. Installation and usage instructions are on [Github](https://github.com/mrchu39/fritz-classification). Once the script is installed, make sure the proper dependencies are installed as well. It may take some time to figure out how to correctly install SNID and its dependencies, so in the meantime make sure you are at least running the reporting function of the script (Function 5) daily.

### Automated Scripts

I have several automated scripts running on the Caltech computer in separate Linux screens. You can see them by logging into my account and entering `screen -ls`, and you can reattach to an individual process with `screen -r [process]`. These generally run 24/7, so if one of them crashes you should notice.

#### RCF and RCF Deep Saved Summaries

These send a summary of what was saved to RCF and RCF Deep, as well as whether the sources were assigned SEDM follow-up. They should post to the `#rcf-saved-sources` and `#rcf-deep-saved-sources` channels on the ZTF Slack at midnight UTC (4/5pm PT depending on DST). If you don't see a report at that time, the code probably crashed (usually due to some Fritz issue). Check the error output; usually you can just restart it. If you want to improve the script, you can add more catches for the multitude of Fritz issues.

#### HST Image Analysis

This downloads images captured by HST and runs some basic PSF image analysis to determine what is and isn't a real transient. It should also be running in the background, and you should get a report in the `#hst-uv` channel at the same time as the previous two. Most days, nothing gets reported. If you don't see a report, again, reattach to the screen and rerun it.

#### Gayatri Page

This is a [page](gayatri.caltech.edu:88) where any user can run the individual functions of the reporting script. The application is built with Flask and should be running constantly. The page is unlikely to go down, but if it does, you'll get a 500 error when you try to access it. If this happens, rerun it with `python -m flask run` in that screen. I would highly recommend learning Flask so you can debug issues with the code, and add to it if necessary.
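If you've never touched Flask, the skeleton is small. This is only a minimal sketch so the route-to-function structure looks familiar when you debug; it is not the actual gayatri application, and the port is an assumption based on the URL above.

```python
# app.py -- minimal Flask skeleton, NOT the actual gayatri application.
# Shown only so the route -> function structure is familiar when debugging.
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    # The real app serves pages for the reporting script's functions here.
    return "gayatri is up"

if __name__ == "__main__":
    # Port 88 is an assumption based on the gayatri URL above.
    app.run(host="0.0.0.0", port=88)
```

Note that `python -m flask run` finds the app through the `FLASK_APP` environment variable (or an `app.py` in the working directory), so check how the screen session is configured before restarting.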
### RCF Scanning

Since no one else seems to want to do it, you will likely also need to scan for RCF. Generally, this is done in week-long shifts, but since no one is signing up, you will likely need to do it fairly often. The instructions for RCF scanning are [here](https://docs.google.com/document/d/1rMxHNcrzGTUL9IhdppHB7AOK_ZWUHpWWiJWQpNG8sJg/edit?usp=sharing).

### CLU Scanning

You may also be assigned to CLU scanning once in a while. CLU scanning is similar to RCF scanning, but the criteria and assignments are different. The instructions are [here](https://hackmd.io/@atzanida/SyR5ctRs_). You can view and sign up for CLU scanning on a separate spreadsheet (ask for a link; it is editable, so I don't want to publish it here).

## Observing

### Software

You will also likely be assigned to observe with DBSP every month or two. This will be a long night, so be prepared to stay up until 5am or later.

In order to connect, you need to [install and request VPN access](https://www.imss.caltech.edu/services/wired-wireless-remote-access/Virtual-Private-Network-VPN) from Caltech. Once installed, sign in with your Caltech credentials to connect. Next, you will need to use a VNC client to remotely operate the observing software. If you are using a Mac, one is already built in. You will need to enter the appropriate address for whichever desktop window you need access to. The addresses and password are in a document about remote access that I can give you on request. If both of these are working correctly, you should be able to access the P200 desktop GUI.

### The Day Before

**Check the [DBSP observing manual](https://drive.google.com/file/d/11Yasnz_6vhxKTxfyMf1_1sK2NwZ0c0vL/view?usp=sharing) for more detailed information about observing in general.**

The day before observing, email the [Palomar staff](mailto:palomar-setups@lists.astro.caltech.edu) to let them know you will be observing. Next, prepare the observing log (request access to this document from me). You can see information about your particular observing run on [Fritz](https://fritz.science/runs). You can download this as a CSV file, which looks like this:

![raw](https://i.imgur.com/3n0upxI.png)

Here you can see the targets in the order they were assigned, but since certain targets are only visible at certain points in the night, you will need to reorder them based on their rise and set times (**note that the times on the spreadsheet are in UTC; subtract 7/8 hours depending on DST to get PT**). A sketch of one way to automate this ordering is at the end of this subsection. You should also weight the high-priority targets (a higher number means higher priority) so that they all get observed if possible (i.e., you can skip lower-priority ones if they get squeezed out by weather or longer observation times). You will also need to determine the exposure time based on the most recent magnitude of the transient (which you can read off the photometry plots on the transient's Fritz page). Information about how long to observe can be found in Section 9 of the observing manual. You can then translate all of this into a tentative observing plan, as follows:

![tentative_plan](https://i.imgur.com/Dria7QR.png)

The last thing you should do in preparation is download the starlist from the observing run page on Fritz. If you scroll to the bottom of the page, you'll see some plaintext information about each science object and its offset stars. Select P200 as the facility, and copy the entire text into a text file. (Note: in the past, there was a parsing issue where colons were forbidden characters to the DBSP software. I'm not sure if this is still the case; ask the Palomar staff. If it is, just replace the colons in the coordinates with spaces.)
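For the reordering step mentioned above, astroplan can compute rise times at Palomar directly from the CSV. This is a minimal sketch, assuming column names (`name`, `ra`, `dec`, with coordinates in degrees) and a 30 deg altitude cutoff; adapt both to the actual file and your own visibility criteria.

```python
# order_targets.py -- one way to automate ordering targets by rise time at
# Palomar. ASSUMPTIONS: the CSV has "name", "ra", "dec" columns with
# coordinates in degrees, and 30 deg altitude is a reasonable visibility
# cutoff -- adjust these to your actual file and criteria.
import pandas as pd
import astropy.units as u
from astropy.coordinates import SkyCoord
from astropy.time import Time
from astroplan import FixedTarget, Observer

palomar = Observer.at_site("Palomar")
tonight = Time("2023-09-01 03:00:00")  # any time during the night, in UTC

df = pd.read_csv("observing_run.csv")
targets = [
    FixedTarget(SkyCoord(row["ra"] * u.deg, row["dec"] * u.deg), name=row["name"])
    for _, row in df.iterrows()
]

# Time each target next rises above the altitude cutoff. Targets that never
# cross it (or never set) come back masked -- handle those by hand.
rise_times = [
    palomar.target_rise_time(tonight, t, which="nearest", horizon=30 * u.deg)
    for t in targets
]

for target, rise in sorted(zip(targets, rise_times), key=lambda pair: pair[1]):
    print(f"{target.name}: rises above 30 deg at {rise.iso} UTC")
```

This only gives you a starting order; you still need to fold in priorities and exposure times by hand.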
### The Day Of

All of this should be done before you begin actually observing, so that once you are ready to observe you don't waste time on preparation.

The Palomar staff should have responded and asked when you want to start calibrations. Generally, this is done in the afternoon, several hours before sunset. Let them know when you want to start calibrations, then join the Zoom call with the assigned staff member (the code is in the remote access document). Calibrations and all remote operation are done on Palomar Window 10; full instructions are in the observing manual. While doing calibrations (and later, observing), keep track of everything on the observing log spreadsheet (specifically the time and file number).

In the meantime, copy the starlist file over to the Palomar computer at `user1@observer1.palomar.caltech.edu:/observer/observer/targets/[username]/` (e.g. with `scp`; you might need to create a folder for your user). Ask me or the Palomar staff for the password. On Window 12, you can import the starlist and parse it. If successful, you should see the standard stars, as well as the science objects and each of their offset stars.

### Observing

After calibrations, you can start by taking standards. These must be taken after 18-degree (astronomical) twilight. The full details are in the observing manual, but if you have a bunch of early targets, you might want to wait until later in the night to take them; as long as a standard is visible, you can essentially take it whenever.

You can then get started on science objects. Take whatever is next on your tentative plan and find it on the Fritz observing run page. To the right of the object is an icon that looks like a rectangle with four dots; clicking it generates a finding chart in a new tab. You should see the object along with three offset stars. Find the brightest one (lowest magnitude number), select the corresponding star in the list on Window 12, then click "load to telescope." Let the operator know it is loaded, and they'll move the telescope there.

While the telescope is moving, use the time to edit the file names and exposure times in Window 10. (By the way, the blue side can expose up to 20 minutes in a single file, but the red side only up to 15 minutes. So if you have, for example, a 20-minute exposure, you'll need to split it up such that it's 1x1200s on the blue side and 2x600s on the red. See past observing runs for more examples, and the short helper sketch further down for the arithmetic.) You should also enter the time observed into the observing log.

Once the telescope is at the offset star, the operator will ask for the offsets. The offsets are the arcsecond numbers ("); you only need to read them out to the tenths place. After the telescope moves to the target, the operator will usually make sure something is actually in the slit. If it is, they'll say you're ready to start exposing. Check that everything is correct on Window 10, and then you can expose on both sides simultaneously.

While the telescope is exposing, load the next object into the telescope on Window 12, so it's ready to go after the current exposure finishes. Also check the previous object's image (not the one currently exposing) on Window 11. The file should be saved as `red[number].fits` or `blue[number].fits` in `user1@observer1.palomar.caltech.edu:/remote/instrument7/DBSP/[date]/` (note that the date here is the morning's date, not the evening's). Open the file with ds9 and make sure a trace is visible in the image. When the telescope is done exposing, you should hear a Windows warning sound several times.
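Here is the helper mentioned above for the exposure-splitting arithmetic. It's a toy sketch; the per-file limits come from the note above, so double-check them against the current DBSP manual.

```python
# split_exposure.py -- toy helper for the red/blue exposure-splitting
# arithmetic. The per-file limits (20 min blue, 15 min red) come from the
# note above; double-check them against the current DBSP manual.
import math

BLUE_MAX_S = 20 * 60  # longest single blue-side file, in seconds
RED_MAX_S = 15 * 60   # longest single red-side file, in seconds

def split(total_s, max_s):
    """Split total_s into the fewest equal sub-exposures no longer than max_s."""
    n = math.ceil(total_s / max_s)
    return n, total_s / n

for total in (900, 1200, 1800):
    n_blue, t_blue = split(total, BLUE_MAX_S)
    n_red, t_red = split(total, RED_MAX_S)
    print(f"{total}s total -> blue {n_blue}x{t_blue:.0f}s, red {n_red}x{t_red:.0f}s")
```

For a 1200s exposure this reproduces the example above: 1x1200s on the blue side and 2x600s on the red.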
Repeat the process for all the objects in the plan. If the weather goes bad partway through the night, the operator may need to close up over humidity concerns. Also keep track of the seeing by asking the operator, and be ready to switch between the 1-2" slits on Window 10 depending on the conditions.

At the end of the night, keep track of what was and wasn't observed, and mark it as such on the observing run page on Fritz. Then send an email to [TDA](mailto:tda@lists.astro.caltech.edu) with a summary of the conditions and the objects observed. Again, all the detailed information about observing can be found in the manual.

## Reduction

Generally you don't reduce data right after you observe; usually you'll be assigned to reduce someone else's data. Andy wrote a very detailed guide [here](https://hackmd.io/@atzanida/Syv2Xiq-Y). One note: I'd recommend running this in a Linux screen (one should already exist on my account). That way, if your computer goes to sleep, the process keeps running on the detached screen and you don't need to start over. At the end, you will need to send another email to the TDA group with a summary of the reductions. Uploading the spectra to Fritz is detailed at the bottom of Andy's tutorial.

## Future Plans

There were a few things that I was planning to work on but ran out of time for.

### Simplifying the TNS Reporting

If you open the TNS reporting code itself, you might notice how bulky it is. Much of the code is redundant, partly because of how it was written when I inherited it, and partly because of my own programming limitations. Once you understand how it works, and if you have some time, you can probably simplify much of it to make it more streamlined and easier to understand.

Also, I noticed on the [Fritz API page](https://docs.fritz.science/api.html) (bookmark this page, by the way) that they incorporated functionality to upload classifications to TNS directly through the API. I'm not really sure how it works at the current stage; you might want to play around with it and get in contact with the Fritz/SkyPortal dev team to learn more. If it does essentially the same thing as the reporting script, it would greatly simplify what we need to do on our end.

You could also easily simplify a few other things in the TNS reporting. When the script uploads images to Zooniverse, it already stores the rlap score as metadata. You could also store the SN type as metadata, so that theoretically you could completely automate the Zooniverse + SNID + Superfit matching. And you could write a script that automatically downloads the ASCII file of the downloaded and saved RCF sources from gayatri, so you don't have to sit around and wait 20-30 minutes each day for the file to download. The rest of the script requires some manual input, and it would be a lot of work to automate, but the downloading itself could easily be offloaded to the Caltech computers to spare you some time.

### Keeping Up to Date With Fritz

Fritz is constantly being updated, which means that occasionally the API changes in ways that break the various scripts that rely on it. Aside from causing headaches, the consequence is that you will need to update the code to keep up. The most recent change was that they limited the number of sources you can download in a single request, to ease the strain on their system. This change should already be incorporated.
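For reference, here is roughly what paging through sources looks like with the requests module; it's also the building block for the automated RCF ASCII download idea above. This is a sketch based on my reading of the SkyPortal/Fritz API; the parameter names and response shape may have changed, so verify them against the API page.

```python
# fetch_sources.py -- hedged sketch of paging through saved sources on Fritz.
# ASSUMPTIONS: parameter names (numPerPage, pageNumber, group_ids) and the
# response shape follow my reading of the SkyPortal/Fritz API docs -- verify
# against https://docs.fritz.science/api.html before relying on this.
import requests

TOKEN = "your-fritz-api-token"  # generated on your Fritz profile page
HEADERS = {"Authorization": f"token {TOKEN}"}

def fetch_all_sources(group_ids, num_per_page=100):
    """Download every source in the given groups, one page at a time."""
    sources, page = [], 1
    while True:
        r = requests.get(
            "https://fritz.science/api/sources",
            headers=HEADERS,
            params={
                "group_ids": group_ids,  # e.g. the RCF group's numeric ID
                "numPerPage": num_per_page,
                "pageNumber": page,
            },
        )
        r.raise_for_status()
        batch = r.json()["data"]["sources"]
        sources.extend(batch)
        if len(batch) < num_per_page:  # a short page means we've hit the end
            return sources
        page += 1
```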
Occasionally, they just rename their endpoints (which is always fun), which means you won't notice anything is wrong until you test each one to see what no longer works. Finally, sometimes functions move from one endpoint to another, like photometry getting its own endpoint rather than being included in a more general one. If none of this makes sense, don't worry; I didn't really understand any of it until several months in. Get very familiar with the Python requests module, and read up as much as you can on the Fritz docs. Also, don't be afraid to email the Fritz devs (their contact info is [here](https://fritz.science/about)). Any question you might think is dumb has already been asked by me.

### Adding Instruments to the Reporting Script

Occasionally, someone will upload spectra from instruments not currently in the reporting code. For the most part, it should not be too hard to add these. You'll need to change three functions: `get_tns_instrument_ID`, `write_ascii_file`, and `class_submission`.

`get_tns_instrument_ID` just returns the numerical designation of the instrument used in the classification report sent to TNS. The designations can be found [here](https://www.wis-tns.org/api/values). Make sure the string key used matches the one from the [Fritz API](https://fritz.science/api/instrument).

`write_ascii_file` translates spectra from Fritz into an ASCII file. Generally, a spectrum is written out as a string in the JSON data returned from an API request to the spectrum endpoint. This will typically (but not always) be a header with leading '#' characters, followed by the spectrum as a list of wavelengths and fluxes, separated into columns by spaces or tabs.

`class_submission` generates the TNS report. Most of the information about the instrument, spectrum, and observation is encoded in the headers of the spectra. If information is missing, it's usually defined somewhere else. If you're unsure, just leave small things (like exposure time) out of the report.

### ATLAS Forced Photometry

One thing I actually got working, before Fritz inevitably broke it, was a daily ATLAS forced photometry request script. Essentially, for newly saved sources, it pulls forced photometry from ATLAS and uploads it to Fritz. They actually incorporated this into a follow-up function you can request through the API, which worked relatively well until recently. How it works: Fritz asks ATLAS's database for forced photometry, the request gets queued into their system, and once it's compiled, it gets returned to Fritz, which then uploads it to its own database. This process does not report progress on its own, so to check its status at any point, I had to send an API request to Fritz. The issue was that if ATLAS has problems (e.g. a long queue, or requests taking especially long), the status-check requests start piling up and badgering the Fritz system, which was causing bottlenecks. Because of this, I had to shut it down. If there were a better way to track the status of these ATLAS requests than sending repeated API requests to Fritz, the script could run without causing issues for Fritz itself (one simple mitigation is sketched at the end of this section).

Part of this, in addition to the daily script, was trying to get forced photometry for all objects in the RCF group (which is now around 10,000). I got to around 85% complete before the same issue broke it. This script is confined to a Jupyter notebook, and some manual inputs are necessary. I wouldn't worry about this, but you should ask Christoffer if it's something he's still interested in. If he is, let me know and I can give you the code and tell you how to use it.
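If you do pick this back up, one simple mitigation is to space out the status checks with exponential backoff, so a slow ATLAS queue produces fewer Fritz calls rather than more. This is only a sketch: the `followup_request` endpoint and the status strings are my guesses, so verify both against the Fritz API docs.

```python
# poll_followup.py -- hedged sketch: poll a follow-up request's status with
# exponential backoff so slow ATLAS queues don't flood Fritz with API calls.
# ASSUMPTIONS: the /api/followup_request/{id} endpoint and the status strings
# below are guesses; verify both against the Fritz API docs.
import time
import requests

TOKEN = "your-fritz-api-token"  # from your Fritz profile page
HEADERS = {"Authorization": f"token {TOKEN}"}

def wait_for_followup(request_id, base_delay=60, max_delay=3600):
    """Poll until the request reaches a terminal state, backing off each time."""
    delay = base_delay
    while True:
        r = requests.get(
            f"https://fritz.science/api/followup_request/{request_id}",
            headers=HEADERS,
        )
        r.raise_for_status()
        status = r.json()["data"]["status"]
        if status not in ("submitted", "pending"):  # assumed non-terminal states
            return status
        time.sleep(delay)
        delay = min(delay * 2, max_delay)  # double the wait, capped at an hour
```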
### AT Reports

This isn't really a future plan, but another thing you might see in my list of Linux screens is something called "AT Reports." This is essentially a backup for uploading new sources to Fritz if P48 goes offline for whatever reason. It's unlikely you'll need to use it, but if you do, just run `schedule_tns_check.py`, which will import objects from AT reports into Fritz with the appropriate names.

## Some Random Notes

To conclude, here are some random things I was either told or had to figure out myself.

**Take good advantage of gayatri.** Caltech's computers have more processing power, and they can keep running without you keeping a console open. Especially if you have code that takes hours to run, run it there in a Linux screen: create one with `screen -S [name]`, run the script, then press Ctrl+A, D to detach. The process will continue to run even if you close the window. You can reattach to the screen with `screen -r [name]`.

**Use conda environments.** We talked about this on the Zoom call, but it's highly likely that while you're at Caltech you'll be doing different projects that require different packages with their own dependencies. Create and manage separate environments (`conda create -n [name]`, then `conda activate [name]`) so that you don't have to worry about conflicts among packages. See [here](https://docs.conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html) for more info.

**Report regular 9-5 hours in Kronos.** When you get here, you'll report your working hours through a portal called Kronos. Even though you probably won't be working normal hours (if you're observing, for example), report 9-5 anyway and try to even it out to 40 hours a week. Initially I reported the weird hours I was actually working, and HR got irritated because California has labor laws about daily hours and break times (which are hard to apply to astro research). Also, make sure to report at least a 30-minute lunch break, or they'll get upset with you too. Ideally postbacs would get paid stipends like grad students, but for some reason you get treated like a regular employee.