# UC San Diego School of Global Policy and Strategy Skills Course
# 3 weeks - Introduction to Data Management 2/20/2019 - 3/05/2019
* GPS Room 3201: 6:00 - 9:00pm
* TritonEd
* Schedule: Posted in TritonEd
## Data "Messy Survey"
#### Use the following link:
https://tritoned.ucsd.edu/webapps/blackboard/content/listContentEditable.jsp?content_id=_1491411_1&course_id=_18990_1
---
## Instructors:
* Arden Tran (Research IT)
* Mary Linn Bergstrom (Library)
* Gui Castelao (SIO)
* Reid Otsuji (Library)
* TA: Caio Mansini (GPS)
* Helpers: Rick Mccosh
## Need excel or other Office products?
- Go to office.com and sign in with your AD credentials. UCSD provides Microsoft Office 365 free to all students, faculty, and staff with UCSD AD credentials (the same login you use for WiFi).
- For more info, check out:
https://acms.ucsd.edu/services/software/available-software/microsoft-individual.html
## Exercise 1
Paste your workbooks IF YOU DARE here:
https://drive.google.com/drive/u/1/folders/1JxKX3CkSzC1nYqeQkd84BvGoweE0UGtB
# Collaborative note taking!
### Need Help?
- #### If you need help, put a pink sticky on your laptop so an instructor can come assist
- Also, your instructors/helpers are on HackMD here, so feel free to throw a question or ask for help via the notes here as we go along
# Week 1 - Please Sign in here
#### Name - A###########
names removed at end of quarter.
## Data Management - Day 1
Lecture slides will be posted in TritonEd.
Data management best practices
Link to video:
https://www.youtube.com/watch?v=66oNv_DJuPc
Library Digital Collections
https://library.ucsd.edu/dc
Some collections' research datasets are available.
# Data organization in Spreadsheets
### How many people have accidentally done something that made you sad?
- the solver function <- I second that
- i deleted it
- deleted rows or cell content
- version control sucks in spreadsheets. i created too many versions and lost track
### What kinds of operations do you do in spreadsheets?
- All arithmetic operations, IF-THEN functions, graphs, data visualization results
- Equation solver, data organization
- pivot tables
### Which ones do you think spreadsheets are good for?
- Graphs, data visualization, pivot tables
- Excel is good for all operations
Excel best practices:
- Separate sheet tabs for different information
- Create change logs
### Changelogs are extremely useful whenever you're doing collaborative projects! Use one tab just to track changes on the project!
When using spreadsheet programs like Excel, each column should represent a single variable. Don't "collapse" several variables into one column; that will cause you trouble later.
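A quick sketch of why this matters, using Python's standard library (the "site_date" column and its values are made up for illustration): a collapsed column has to be split apart again before you can work with either variable.

```python
import csv
import io

# Hypothetical messy sheet: "site_date" collapses two variables into one column.
messy = io.StringIO("site_date,reading\nDR-1_1927-02-08,0.13\nDR-3_1930-01-07,0.09\n")

tidy = []
for row in csv.DictReader(messy):
    site, date = row["site_date"].split("_", 1)  # undo the collapsed column
    tidy.append({"site": site, "date": date, "reading": float(row["reading"])})

print(tidy[0])  # {'site': 'DR-1', 'date': '1927-02-08', 'reading': 0.13}
```

Keeping one variable per column from the start avoids this clean-up step entirely.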
# Exercise
## How to best organize the spreadsheet? (Input your suggestions below)
### Problems
Weight Column
- What is the measure? Pounds? Kilograms? Grams?
- 2014 Tab - Why are these two values highlighted? Are they important?
### Solutions
- Tips:
- Keep your original Data -> Create a new sheet as a "backup"
- Dates: keep track of day, month and year (and seasons, if possible)
- Save time: use "smart autofill" (select & drag)
- Text to Columns can be found here: "Data" ribbon -> "Text to Columns"
### Common Issues
#### What do you think would be common issues in spreadsheets?
- Dates Formatting
- Dates are a huge headache (especially in Stata, you'll see Muahaha)
- Observation coding
    - Keep in mind how observations are coded and keep the pattern consistent
- Watch out for spaces, capital letters, and symbols
- Not filling in zeros -> There is a big difference between "." and "0" <- keep that in mind
- "#VALUE"
- Pay attention to the format you save your "workbook" in
    - Some functions will not work if you save your workbook as a .CSV file
* Windows Users: Windows will return a message saying that some functions cannot be used in the format you're saving your file
* If you use 'Macros' in your workbook, make sure you saved your file using the correct specification (Macro Allowed Workbook)
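The "not filling in zeros" point above can be sketched in Python (the numbers are invented): treating missing cells as zeros silently changes your statistics.

```python
# None marks a genuinely missing reading; 0 would mean "we measured zero".
readings = [0.4, None, 0.2, None]

# Wrong: pretending missing values are zeros drags the average down.
as_zero = sum(r if r is not None else 0 for r in readings) / len(readings)

# Right: skip the missing values entirely.
present = [r for r in readings if r is not None]
skipped = sum(present) / len(present)

print(round(as_zero, 2), round(skipped, 2))  # 0.15 0.3
```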
# Paste your Excel workbooks IF YOU DARE here:
https://drive.google.com/drive/u/1/folders/1JxKX3CkSzC1nYqeQkd84BvGoweE0UGtB
* Let's see what you came up with
- Keep your original data
- Create a new sheet
- Break out columns
## Data validation
Cool feature built into Excel that lets you allow only specific values in cells
How to find it:
- Data ribbon
- Data Validation
Keep in mind that "$" creates absolute references
You can specify what kind of message the workbook returns to the user if an unsupported value is entered
## Sorting
Highlight the range you want to sort, and choose from the options: ascending, descending, or custom.
Custom sorting allows you to sort your data using custom specifications (different variables, or variables you created rather than alphabetically)
Use the "Table" feature to better organize tables (it not only keeps the current formatting, but also does not change previous specifications)
# Week 2 - Please Sign in here
#### Name - A###########
names removed at end of quarter.
## SQLite3 and data download links
SQLite3 download - https://www.sqlite.org/download.html
Survey database download - http://swcarpentry.github.io/sql-novice-survey/files/survey.db
Your terminal/shell not outputting with the columns spaced neatly?
- try: `.mode column`
Can't see the headers or column names?
- try: `.headers on`
### To exit SQLite3
```
.quit
```
The easiest way to load survey.db in SQLite3:
copy the database file into the same folder as the SQLite3 app.
To open the database in SQLite3:
```
.open survey.db
```
Once you finish the command, use a semicolon to close it:
```
;
```
Look at the schema of the tables:
```
.schema
```
The basic SELECT statement format includes SELECT and FROM:
```
SELECT * FROM Survey;
```
```
SELECT quant FROM Survey;
```
* The SELECT DISTINCT command:
```
SELECT DISTINCT quant FROM Survey;
```
* **Database design is the most important step when creating a relational database!**
```
SELECT DISTINCT taken, quant FROM Survey;
```
```
SELECT * FROM person;
```
```
SELECT * FROM Person ORDER BY id;
```
* When writing SELECT statements, explicitly ask for an ordering if you need it:
```
SELECT * FROM Person ORDER BY id;
```
* For reverse or ascending order, add DESC or ASC:
```
SELECT * FROM Person ORDER BY id DESC;
```
```
SELECT * FROM Survey;
```
```
SELECT * FROM Survey ORDER BY taken, person;
```
```
SELECT * FROM Survey ORDER BY taken, person DESC;
```
```
SELECT taken, person, quant FROM Survey ORDER BY taken ASC, person DESC;
```
```
SELECT DISTINCT person, quant FROM Survey ORDER BY person DESC;
```
```
.schema
```
```
SELECT DISTINCT dated FROM Visited;
```
```
SELECT * FROM Site WHERE name = 'DR-1';
```
```
SELECT * FROM Visited WHERE site = 'DR-1';
```
```
SELECT * FROM Visited WHERE site = 'DR-1' ORDER BY dated DESC;
```
```
SELECT * FROM Visited WHERE site = 'DR-1' ORDER BY dated DESC LIMIT 2;
```
```
SELECT * FROM Visited WHERE site = 'DR-1' AND dated < '1930-03-22';
```
```
SELECT * FROM survey;
```
```
SELECT * FROM survey WHERE person='lake';
```
```
SELECT * FROM survey WHERE person='lake' OR person ='roe';
```
```
SELECT * FROM Survey WHERE person IN ('lake', 'roe');
```
```
SELECT * FROM Survey WHERE quant = 'sal' AND person = 'lake' OR person = 'roe';
```
**Fix the query:**
Suppose we want to select all sites that lie more than 42 degrees from the poles. Our first query is:
```
SELECT * FROM Site WHERE (lat > -48) OR (lat < 48);
```
Explain why this is wrong, and rewrite the query so that it is correct.
**Solution:**
Because we used `OR`, a site on the South Pole, for example, will still meet the second criterion and thus be included. Instead, we want to restrict this to sites that meet both criteria:
```
SELECT * FROM Site WHERE (lat > -48) AND (lat <= 48);
```
* What's up with this `...>`?
```
SELECT *
   ...> FROM Person
   ...> WHERE id = 'dyer
   ...> ;
   ...> ';
```
If you see the `...>` continuation prompt and your query doesn't run, the statement is most likely incomplete. Here the opening quote in `'dyer` was never closed; typing `';` on the next line completes the syntax and the query runs.
```
SELECT reading - 32 FROM Survey WHERE quant = 'temp';
```
* You can use simple arithmetic in queries (tip: pay attention to PEMDAS when performing mathematical operations):
```
SELECT round(5 * (reading - 32) / 9, 2) FROM Survey WHERE quant = 'temp';
```
* Tip: if you want to view what's in a table, use:
```
SELECT * FROM Person LIMIT 5;
```
```
SELECT round(5 * (reading - 32) / 9, 2) AS Celsius FROM Survey WHERE quant = 'temp';
```
#### Concatenation operator `||` -> the output shows the full name of the person being queried
```
SELECT personal || ' ' || family AS full_name FROM Person;
```
### Exercise
After further reading, we realize that Valentina Roerich was reporting salinity as percentages. Write a query that returns all of her salinity measurements from the Survey table with the values divided by 100.
### Solution:
```
SELECT reading/100 FROM survey WHERE quant = 'sal' AND person = 'roe';
```
```
SELECT * FROM Person WHERE id = 'dyer' UNION SELECT * FROM Person WHERE id = 'roe';
```
### How to work with NULL Values:
* `NULL` values are not '0'
```
SELECT * FROM Visited WHERE dated < '1930-01-01';
```
Notice that the row with a NULL date doesn't show up. Comparing against the string 'NULL' doesn't find it either; this returns nothing:
```
SELECT * FROM Visited WHERE dated = 'NULL';
```
This is the correct way to query for NULL values:
```
SELECT * FROM Visited WHERE dated IS NULL;
```
And to select only the rows where a value is present:
```
SELECT * FROM Visited WHERE dated IS NOT NULL;
```
#### IN Operator
Used to match a column against a list of values:
```
SELECT * FROM Visited WHERE dated IN ('1927-02-08');
```
NULL values are special: even though the cell of the table looks empty, it holds NULL, and ordinary comparisons against it never match. Note that adding NULL to an IN list still does not return the NULL rows; use IS NULL for that.
```
SELECT * FROM Visited WHERE dated IN ('1927-02-08', NULL);
```
#### Statistical Operators
Min
```
SELECT min(dated) FROM Visited;
```
Max
```
SELECT max(dated) FROM Visited;
```
Average
```
SELECT avg(reading) FROM Survey WHERE quant ='sal';
```
Count
```
SELECT reading FROM Survey WHERE quant = 'sal';
SELECT count(reading) FROM Survey WHERE quant = 'sal';
```
Conditional Operators
```
SELECT * FROM Survey WHERE quant = 'sal' AND reading <=1.0;
```
Multiple Aggregates
```
SELECT min(reading), max(reading) FROM Survey WHERE quant='sal' AND reading <=1.0;
```
#### Question: what's the purpose of this query?
```
SELECT person, max(reading) FROM Survey WHERE quant = 'sal' AND reading <= 1.0;
```
Look at the structure to understand how combinations of simple queries can create very complex (and precise) queries
```
SELECT * FROM Survey WHERE reading IN (SELECT max(reading) FROM Survey WHERE quant = 'sal' AND reading <= 1.0);
```
Can you tell if this is a NULL value or an empty cell?
```
SELECT person, sum(reading) FROM Survey WHERE quant = 'missing';
```
It's not necessary to add "IS NOT NULL": aggregate functions skip NULL cells by default.
```
SELECT min(dated) FROM Visited WHERE dated IS NOT NULL;
```
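A quick way to check that default is Python's built-in sqlite3 module; the toy Visited table below stands in for the lesson database (values are illustrative).

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Visited (id INTEGER, dated TEXT)")
con.executemany("INSERT INTO Visited VALUES (?, ?)",
                [(619, '1927-02-08'), (734, None), (837, '1930-01-07')])

# Aggregates silently skip the NULL cell...
min_dated = con.execute("SELECT min(dated) FROM Visited").fetchone()[0]
# ...and count(column) counts non-NULL cells, while count(*) counts rows.
n_dated = con.execute("SELECT count(dated) FROM Visited").fetchone()[0]
n_rows = con.execute("SELECT count(*) FROM Visited").fetchone()[0]

print(min_dated, n_dated, n_rows)  # 1927-02-08 2 3
```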
#### How to obtain the mean for each type of measurement?
The following query collapses the different types of "quants" to their averages:
```
SELECT quant, avg(reading) FROM Survey GROUP BY quant;
```
The GROUP BY clause creates groups over which aggregate operations are performed, avoiding repetitive code.
The following query returns the average reading by person and quant:
```
SELECT person, quant, avg(reading) FROM Survey GROUP BY quant, person ORDER BY quant, person;
```
* Possible example of exporting query output to CSV (the path below comes from the linked tutorial; adjust it for your machine):
```
.mode csv
.once /Users/quackit/sqlite/dumps/artists.csv
SELECT * FROM Artists;
```
`.once` sends the output of the next query to the specified file.
https://www.quackit.com/sqlite/tutorial/export_data_to_csv_file.cfm
## Relational Database
Goal: To connect tables
A relational database uses a value from one table to link to a corresponding value in a different table.
Ex. the value in TABLE A's "id" column matches a value in TABLE B's "number" column.
### Joining Tables
Join two tables on a specific criterion:
```
SELECT * FROM Site JOIN Visited ON (Site.name = Visited.site);
```
Primary key: a column that uniquely identifies each row in its own table
Foreign key: a column that references the primary key of another table; it's what you join ("merge") on
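A minimal sketch of the primary/foreign key relationship, using Python's built-in sqlite3 module with tiny stand-ins for the lesson's Site and Visited tables (values are illustrative).

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Site (name TEXT PRIMARY KEY, lat REAL, long REAL);
CREATE TABLE Visited (id INTEGER PRIMARY KEY,
                      site TEXT REFERENCES Site(name),  -- foreign key into Site
                      dated TEXT);
INSERT INTO Site VALUES ('DR-1', -49.85, -128.57);
INSERT INTO Visited VALUES (619, 'DR-1', '1927-02-08');
""")

# The join matches Visited.site (foreign key) to Site.name (primary key).
rows = con.execute(
    "SELECT * FROM Site JOIN Visited ON Site.name = Visited.site").fetchall()
print(rows)  # [('DR-1', -49.85, -128.57, 619, 'DR-1', '1927-02-08')]
```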
PostgreSQL - object relational database system - free to use.
PostGIS - adds spatial queries to PostgreSQL
* Keep an eye out in PostGIS if you're taking ArcGIS next quarter!!!
SQLAlchemy - Python SQL Toolkit
---
# Introduction to Unix Shell
## Instructors:
* Arden Tran (Research IT) - Data Spreadsheets
* Mary Linn Bergstrom (Library) - Data Management
* Gui Castelao (SIO) - SQL
* Chris Olsen (SIO) - SQL
* Reid Otsuji (Library) - Unix Shell
* Andre Paloczy (SIO)
* TA: Caio Mansini (GPS)
* Helpers: Rick Mccosh (Medicine)
### Download data
https://github.com/LibraryCarpentry/lc-shell/raw/gh-pages/data/shell-lesson.zip
* save the zip file to your desktop
* unzip the file on your desktop. After the file is unzipped, there should be a `shell-lesson` folder on your desktop
### Shell Reference
https://explainshell.com/
# Week 3 - Please Sign in here
#### Name - A###########
names removed at end of quarter
---
### NOTES:
print working directory
```
pwd
```
see files in current folder
```
ls
```
list files in long format
```
ls -l
```
* commands are case sensitive
another way to list files, with human-readable sizes
```
ls -lh
```
change directory to desktop
```
cd Desktop
```
move up to the parent directory
```
cd ..
```
move back to home folder
```
cd
```
go back to the previous directory you were in:
```
cd -
```
move up one level, or two levels at once:
```
cd ..
cd ../..
```
look up a command's manual page (for Mac/Linux users):
```
man (the command you want to use)
```
for windows users, https://explainshell.com/ provides command instructions
move to the folder
```
cd (folder name)
```
List the directories in your current path:
```
ls -F
```
You can also use additional parameters like this:
```
ls -a
ls -h
ls -lhap
```
Creates a directory with the name "firstdir":
```
mkdir firstdir
```
print the contents of a text file:
```
cat (file name)
```
quickly view the first lines of a file:
```
head -n (number of lines) (file name)
```
view the end of a file:
```
tail (file name)
```
If the output opens in a pager such as `less`, press `q` to quit.
Rename a file.
* Hint: pay careful attention to this command; if the second argument is a path to a directory, it will move the file there instead of renaming it.
```
mv (oldname) (newname)
```
create new empty file
```
touch (filename)
```
delete a file (it will not go to the recycle bin; it disappears forever when deleted from the terminal):
```
rm (file name)
```
If you want to delete a directory, use the following command:
```
rm -rf (folder name)
```
But be careful! This deletes recursively: it removes the folder and everything inside it, permanently and without asking.
check the contents of a folder without moving into it:
```
ls (folder name)
```
Performing loops on a shell:
```
for filename in *.doc
do
  echo $filename
  cp $filename backup_$filename
done
```
Indentation isn't required by the shell, but it makes loops much easier to read.
`echo` has the same function as "print" in other programming languages. Try it out:
```
echo Test
```
Sorting numerically:
```
sort -n lengths.txt
```
Using pipes. A pipe passes the output of one command to the next. In the example below, we count lines with word count (`wc -l`) and then sort the result numerically:
```
wc -l *.tsv | sort -n
```
More elaborate example:
```
wc -l *.tsv | sort -n | head -n 1
```
What does the command above do?
Regular expressions: famous for text-mining purposes. You can also use regular expressions in Python and R.
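As a quick sketch, the same kind of matching can be done with Python's `re` module (the sample lines are made up):

```python
import re

lines = [
    "The revolution of 1999 began quietly.",
    "Revolutionary ideas spread fast.",
    "Nothing happened in 2001.",
]

# Case-insensitive match, like `grep -i revolution`:
hits = [l for l in lines if re.search(r"revolution", l, re.IGNORECASE)]

# Whole-word match, like `grep -iw revolution` ("Revolutionary" no longer counts):
word_hits = [l for l in lines if re.search(r"\brevolution\b", l, re.IGNORECASE)]

print(len(hits), len(word_hits))  # 2 1
```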
First, let's set up the environment:
```
mkdir results
```
Look for the year 1999 in all the tsv files:
```
grep 1999 *.tsv
```
Add flags for more precise mining. Count how many times 1999 shows up in each file:
```
grep -c 1999 *.tsv
```
Let's look for a keyword now:
```
grep -c revolution *.tsv
```
Now, let's ignore the difference between lower case and upper case letters in "revolution"
```
grep -ci revolution *.tsv
```
What about saving the output in a file in a different folder? No problem!
```
grep -i revolution *.tsv > /path/name_the_file.tsv
```
Select only those lines containing matches that form whole words. The test is that the matching substring must either be at the beginning of the line, or preceded by a non-word constituent character. Similarly, it must be either at the end of the line or followed by a non-word constituent character. Word-constituent characters are letters, digits, and the underscore. This option has no effect if -x is also specified. Source: grep manual (http://man7.org/linux/man-pages/man1/grep.1.html)
```
grep -iw revolution *.tsv > path/file
```
sed stands for "Stream Editor"; it's used to conduct basic text transformations:
```
sed '9352,9714d' guliver.txt > guliver-nofoot.txt
less -N guliver-nofoot.txt
sed '1,37d' guliver-nofoot.txt > guliver-nohead.txt
less -N guliver-nohead.txt
tr -d '[:punct:]' < guliver-nohead.txt > guliver-noheadfootpunct.txt
```
Converting upper-case letters to lower case:
```
tr '[:upper:]' '[:lower:]' < guliver-noheadfootpunct.txt > guliver-clean.txt
```
Splitting the text into one word per line, then counting word frequencies:
```
tr ' ' '\n' < guliver-clean.txt | sort | uniq -c | sort -r > guliver.final.txt
```