# How to use the DBSP_DRP Pipeline
If you have any comments or questions about this tutorial, please reach out to [Milan Roberson](mailto:mroberso@caltech.edu) or [Andy Tzanidakis](mailto:atzanida@caltech.edu).
___
In this short tutorial, I will show you how to reduce spectroscopic data taken with the P200 DBSP instrument, using the `DBSP_DRP` pipeline on **gayatri.caltech.edu**.
If you would like to run the pipeline on your local machine, please refer to the [documentation page of DBSP_DRP](https://dbsp-drp.readthedocs.io/en/latest/index.html).
### Important Notes:
1. Make sure your machine doesn't go into sleep mode. If it does, the pipeline will crash and you will have to start over.
2. Make sure you have a stable internet connection. I found that if you lose connection for more than ~5 minutes, the pipeline will crash.
3. Make sure you have enough space on your gayatri account. On average, a DBSP reduction takes about 1 GB. I usually work in the `../../scr2` directory, where I have plenty of space. If you would like an scr2/ directory under your name, please contact [Astro IT](mailto:help@astro.caltech.edu).
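Before you begin, it's worth checking free disk space (note 3) and guarding against dropped connections (notes 1-2) by running the pipeline inside a terminal multiplexer. A minimal sketch, assuming `tmux` is available on gayatri (note that the interactive GUI steps still need a live X connection):

```shell
# Check free space on the partition you plan to work in
df -h .
# Estimate the size of an existing night's directory (if one exists)
du -sh 20210807/ 2>/dev/null || true

# Run the pipeline inside a named tmux session so a dropped SSH
# connection does not kill it:
#   tmux new -s dbsp          # start the session, run dbsp_reduce inside it
#   tmux attach -t dbsp       # reattach after detaching with Ctrl-b d
```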
# 1. Getting Started with `DBSP_DRP`
### 1.1 - Checking the Observing Log Before Reducing the Data
Before reducing the night's data, it's always a good idea to check the observing log for any issues during the night that the reducer should know about.
The observing log can be found [here](https://docs.google.com/spreadsheets/d/14UHFalFfJRl2g7ZRWUMxyUnz1ZBJ0ZRk3--ZstKL-sQ/edit?usp=sharing).
Each night has a unique Google-sheet tab where you can review the observing log, found at the bottom left of the Google-sheet page. Each tab is named by the date of the observing run. In this tutorial I will be using data from the **DBSP_20210803** observing log.
You're looking for any comments from the observers about issues with the data. For example, in the image below, I had an issue with the red side of a file while observing, so in the comment column of the observing log I added a comment like: `don't use this file`

In this case, if I were reducing the data, I would know not to use this file (note this snapshot is from another night).
Once you have inspected all comments in the observing log and everything looks okay, you can proceed to the next step.
### 1.2 - Copy-Paste Data from the Palomar Machine to Gayatri
At the top of each observing log you will see the row **Data at**, indicating the path where the data is stored. In most cases, we leave the data on the Palomar machines.

You will need to transfer the night’s data to your gayatri account. This is the general workflow on how to do this:
1. Log in to your personal gayatri account: `ssh -Y usr@gayatri.caltech.edu`
2. Go to the directory where you will be doing the reductions (e.g. Desktop)
3. Using the example photo from above, copy the data from the Palomar machine to gayatri:
`scp -r user1@observer1.palomar.caltech.edu:/./remote/instrument7/DBSP/20210807/ .`
This will download the entire `20210807/` directory into the gayatri directory you're in. The password is the standard password we use to access the Palomar machines; please contact Andy or the Palomar staff if you need it.
4. Go into the new date directory: `cd 20210807/`
5. Make a new directory called `raw/`: `mkdir raw`
6. Move all `.fits` files into the `raw/` directory: `mv *.fits raw/`. Make sure that all the night's files are now located in `raw/`.
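The transfer-and-organize steps above can be sketched as one shell session (the hostname, path, and date are the examples from this tutorial; substitute your own):

```shell
# 1. From your working directory on gayatri, pull the night's data over
#    (requires the shared Palomar password):
#   scp -r user1@observer1.palomar.caltech.edu:/./remote/instrument7/DBSP/20210807/ .

# 2. Organize the download: all FITS files go into a raw/ subdirectory
cd 20210807/
mkdir raw
mv *.fits raw/

# 3. Sanity check: count the FITS files that made it into raw/
ls raw/*.fits | wc -l
```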
You're now ready to activate the `DBSP_DRP` pipeline.
### 1.3 - Activate the `DBSP_DRP` Developer Mode Pipeline
To activate the `dbsp_drp_dev` mode run the following commands:
1. `bash` (activate bash)
2. `source /home/mroberso/.bashrc` (source Milan's `.bashrc`)
3. `conda activate dbsp_drp_dev` (activate DBSP_drp developer mode)
If you followed all steps correctly you should see the following on your terminal window:

(*...you should see that you're in the `dbsp_drp_dev` environment and that you're using bash.*) Also make sure that you're one directory above the `raw/` directory where all the data is stored.
4. To run the pipeline on the data in `raw/`:
`dbsp_reduce -r raw/ -d . -j 14 -t -m` This uses 14 cores and **does not correct for tellurics**. It is the fastest way to reduce the data, taking ~2 hours.
`dbsp_reduce -r raw/ -d . -j 14 -m` This uses 14 cores and **will correct for tellurics**. Since we're correcting for tellurics, this typically takes longer, roughly 3-4 hours.
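The `-j` flag controls how many cores the pipeline uses; before picking a value, you can check what's available on the machine (a sketch, assuming a standard Linux `nproc`):

```shell
# See how many CPU cores are available before choosing a value for -j
CORES=$(nproc)
echo "Available cores: $CORES"
# For example, leave a couple of cores free for other gayatri users:
#   dbsp_reduce -r raw/ -d . -j $((CORES - 2)) -t -m
```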
Once you run the command, the pipeline will take a few seconds to begin processing the data. A GUI window should open after ~30 s.
In Section 2 I will cover the basic commands for using the interactive features of the pipeline.
# 2. Using the Interactive Steps
### 2.1 - Editing Tables
If the pipeline initialized successfully, the first thing that should appear is a GUI table showing the headers of the `*red.fits` data (see image below).

In this step, you should scroll through all sources to make sure there are no missing entries from the data. **If there's a row with missing data, it will usually be highlighted in red.**
If you see a missing or wrong entry, you can fix it by double-clicking the cell and editing it. You can also right-click on a row to delete the entire row or set the coordinates to zenith.
Once you have reviewed all rows and everything looks okay, you can close the window.
The pipeline will then do the same for all the `*blue.fits` data headers. Once you're done reviewing the blue headers, close that GUI as well.
If no issues occurred, the pipeline will continue with processing the data:

This step should take approximately 1 hour before the user is prompted to proceed with the trace identification.
### 2.2 - Automatic & Manual Tracing
At this stage the pipeline should display the following message:

Press **Enter** to proceed.
This interactive step will show you all the spectra you have collected on the **red side** and then the **blue side**.
Your task is to inspect all spectra and decide whether the pipeline identified the correct trace, whether it missed the trace, **or** whether you want to add a manual trace.
To step through the spectra, click on the spectrum and press the **left or right arrow key**.
Here is what each line represents:

The **green** and **red** lines represent the boundaries of the spectrograph. The **orange** line(s) are the traces the pipeline found automatically. The pipeline is not guaranteed to identify every trace in the spectrum.
### How can I Tell Which Trace is my Science Target?
It is useful to know at which column you would expect your science target. The fastest way is to click through the spectra and find your standard star (the target name is shown at the top of the GUI).
In all cases, the pipeline should have identified the standard star and placed an automatic trace on it. You would expect to see most of your science targets near this approximate column number (shown at the top right; see image below).

Please keep in mind that in some circumstances there will be multiple traces in that column region. In those cases, if the pipeline hasn't already identified all the traces, please make sure to add the missing ones manually (see below for how to do this).
### How to Add a Manual Trace?
There are a few cases where you want to add manual traces to the spectrum:
A) The pipeline did not automatically identify your trace. *In this case you will see no orange lines, which means you need to add a manual trace; otherwise the pipeline will likely crash.*
B) Your transient signal is buried in the host trace, and you want to add a custom trace to adjust the boundaries of the signal the pipeline reduces.
To add a manual trace, hover your mouse over the column region where you want the trace and press **M** on your keyboard. A blue line should appear (see image below) at the column you selected.
You can add as many manual traces as you want. **Please keep in mind that if you add N manual trace(s) on the red side of your target, you should do the same on the blue side.**
If you are not happy with a trace, you can delete it by hovering your mouse over the blue line and pressing **D**. This will remove the manual trace you just added.

Once you're done marking and checking all spectra, you can close the GUI.
### Adjusting the Limits of your Manual Traces
If you added manual traces you will need to specify the location, width, and background of your aperture for each side (i.e **blue** and **red**).
Depending on how many manual traces you added to your spectrum, the same number of blue lines should appear in the 1D spectrum (see image below).
Each bump in this 1D-spectrum represents the signal (in counts) of your spectrum. It is common for a few objects to fall in your slit during the observing run.
You can adjust the position of your manual traces by:
- Clicking one of the manual traces and, while holding the click, dragging it to the desired position. You usually want to center it on each signal you want to extract.
Once you're happy with the location of your manual trace, you can adjust the width of the aperture:
- Hover your mouse on top of the trace you want to adjust.
- Right-click and hold while slowly dragging slightly to the right or left. You will see the width of the aperture changing. Usually we try to capture most of the signal without including too much background.

Finally, you will need to mark the background region of your manual traces. To do that:
- Press **B** on your keyboard.
- With your mouse, click and drag over the regions you want to use as background. Two background regions are usually good enough.
- If you're not happy with your background selection, hover over the background region and press **D** to delete it.
Remember to drag your mouse slowly. The background should not include part of your signals; also, whenever possible, try to stay at least ~10-20 pixels away from the boundaries of the spectrograph.

Once you're done adjusting the traces, you can close the GUI.
You will have to repeat the last two steps for the blue side.
### How do I know if the Pipeline is Done with Reductions?
If all went well, after a few hours the pipeline should return the following message with the elapsed time it took to reduce your data (in this case I was reducing only ~15 targets, without telluric corrections):

# 3. Final Processing: Adjusting the Splicing
### Where is the Final Reduced Data?
The final reduced data will be in the `spliced/` directory. In some cases you will see multiple spectra under the same target name (with unique `_a`, `_b`, etc. suffixes). I recommend inspecting all of them to decide whether you need to reduce the data again and which spectrum is your science target.
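To quickly see which targets produced multiple spliced spectra, you can tally filenames by target. A hypothetical one-liner, assuming the `_a`/`_b` suffix convention described above:

```shell
# Count spliced outputs per target by stripping the trailing _a/_b/... suffix
ls spliced/*.fits | sed -E 's/_[a-z]+\.fits$//' | sort | uniq -c | sort -rn
```

Targets with a count greater than 1 have multiple extracted spectra worth inspecting.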
### Adjusting the Splicing
To adjust the splicing of your science spectrum (see image example below):
- `cd spliced/`
- `dbsp_adjust_splicing TARGET_NAME_HERE.fits`


- Click **interpolate gaps**
- Adjust the **Red multiplier** and hit **Enter** so that the flux/continuum levels of the blue and red sides match.
Note that the flux scaling of the blue and red sides should match. See the following two images, with Red multiplier values of 3.5 versus 0.5:


In this case the 3.5 **Red multiplier** is better, since the blue and red side fluxes are scaled consistently.
Once you're happy with your scaled spectrum, click **Save** and close the window. Repeat this step for every final spectrum you want to upload to Fritz.
# 4. Uploading Reduced Data to Fritz
Please refer to [Dr. Igor Andreoni's GitHub page](https://github.com/igorandreoni/snippets#upload-spectra-to-fritz) for how to upload spectra to Fritz.