# Kaggle - Pandas
https://www.kaggle.com/code/residentmario/creating-reading-and-writing/tutorial
## Creating, Reading and Writing
### Getting Started
To begin, import the pandas package using the line below.
```python
import pandas as pd
```
### Creating Data
Two core objects in pandas: **DataFrame** and **Series**
#### DataFrame
A **DataFrame** is a table. It contains an array of individual entries, each with a certain value. Each entry corresponds to a row (or record) and a column.
```python
### Input:
pd.DataFrame({'Yes': [50, 21], 'No': [131, 2]})
### Output:
   Yes   No
0   50  131
1   21    2
```
**DataFrames** are not limited to integers; you can use other datatypes, such as strings.
`pd.DataFrame()` is the constructor used to generate **DataFrame** objects. The syntax to declare a new one is a dictionary where:
* Keys = Column names
* Values = List of entries
By default, we only name the columns when creating a new **DataFrame**. The names of the rows are their index values, ascending from 0. If we want to assign actual names for our rows, we can do so within the constructor.
```python
### Input:
pd.DataFrame({'Bob': ['I liked it.', 'It was awful.'],
'Sue': ['Pretty good.', 'Bland.']},
index=['Product A', 'Product B'])
### Output:
                     Bob           Sue
Product A    I liked it.  Pretty good.
Product B  It was awful.        Bland.
```
#### Series
A **Series** is a sequence of data values. It is a list and is, in essence, a single column of a **DataFrame**. You can assign row labels to it using the `index` parameter, as shown above. However, a **Series** does not have column names; it has only a single overall `name`.
```python
### Creating a Series
pd.Series([1, 2, 3, 4, 5])
### Input:
pd.Series([30, 35, 40], index=['2015 Sales', '2016 Sales', '2017 Sales'], name='Product A')
### Output:
2015 Sales 30
2016 Sales 35
2017 Sales 40
Name: Product A, dtype: int64
```
A **Series** and a **DataFrame** are intimately related. A **DataFrame** is just a bunch of **Series** glued together.
### Reading Data Files
A Comma-Separated Values (CSV) file is a table of values separated by commas. Here is what a CSV file looks like:
```
Product A,Product B,Product C,
30,21,9,
35,34,1,
41,11,11
```
To read a CSV, we use `pd.read_csv()`. Here is an example using a table of wine reviews. We can also check to see how large the table is using `shape`.
```python
wine_reviews = pd.read_csv("../input/wine-reviews/winemag-data-130k-v2.csv")
wine_reviews.shape #(129971, 14)
```
Thus we can see that the wine_reviews file has almost 130,000 records across 14 columns. To view the contents of the first five rows, we use `head()`.
```python
### Input:
wine_reviews.head()
### Output:
Unnamed: 0 country description designation points price province region_1 region_2 taster_name taster_twitter_handle title variety winery
0 0 Italy Aromas include tropical fruit, broom, brimston... Vulkà Bianco 87 NaN Sicily & Sardinia Etna NaN Kerin O’Keefe @kerinokeefe Nicosia 2013 Vulkà Bianco (Etna) White Blend Nicosia
1 1 Portugal This is ripe and fruity, a wine that is smooth... Avidagos 87 15.0 Douro NaN NaN Roger Voss @vossroger Quinta dos Avidagos 2011 Avidagos Red (Douro) Portuguese Red Quinta dos Avidagos
2 2 US Tart and snappy, the flavors of lime flesh and... NaN 87 14.0 Oregon Willamette Valley Willamette Valley Paul Gregutt @paulgwine Rainstorm 2013 Pinot Gris (Willamette Valley) Pinot Gris Rainstorm
3 3 US Pineapple rind, lemon pith and orange blossom ... Reserve Late Harvest 87 13.0 Michigan Lake Michigan Shore NaN Alexander Peartree NaN St. Julian 2013 Reserve Late Harvest Riesling ... Riesling St. Julian
4 4 US Much like the regular bottling from 2012, this... Vintner's Reserve Wild Child Block 87 65.0 Oregon Willamette Valley Willamette Valley Paul Gregutt @paulgwine Sweet Cheeks 2012 Vintner's Reserve Wild Child... Pinot Noir Sweet Cheeks
```
A problem of note above is that Pandas did not automatically detect that the CSV came equipped with a built-in index. `pd.read_csv()` comes with over 30 optional parameters you can specify. So for our built-in index column, we can specify it to Pandas using `index_col`.
```python
wine_reviews = pd.read_csv("../input/wine-reviews/winemag-data-130k-v2.csv", index_col=0)
```
To save a **DataFrame** to a CSV that has an index, use the following:
```python
df.to_csv('my_dataframe.csv', index=True)
# index=True writes the index column; use index=False to omit it
```
## Indexing, Selecting & Assigning
### Native Accessors
Considering `wine_reviews` above, we have two methods of accessing the **columns/Series** within:
1. `wine_reviews.country`
2. `wine_reviews['country']`
Note: The first method does not work for column names that contain spaces or other reserved characters (e.g. a column named `country providence` would require `wine_reviews['country providence']`).
Notice that **DataFrames** and **Series** are just fancy dictionaries, so to get down to a single entry value we can use `wine_reviews['country'][index]`.
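For example, assuming the `wine_reviews` DataFrame loaded earlier, both accessors below return the same `country` Series, and chaining an index drills down to a single value:
```python
wine_reviews.country         # attribute-style access to the country Series
wine_reviews['country']      # dictionary-style access to the same Series

# Drill down to a single entry by chaining an index
wine_reviews['country'][0]   # 'Italy' in the wine reviews data
```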
### Indexing in Pandas
#### Index-Based Selection (iloc)
Selecting data based on its numerical position in the data:
- `wine_reviews.iloc[index]`
- `wine_reviews.iloc[0]` returns the first row of data.
*General Syntax*: `.iloc[row_values, column_values]`, where row_values and column_values are integers (or lists/slices of integers).
Notice that `.iloc[]` is row-first, column-second, unlike native Python, which is column-first, row-second. To retrieve a column with `.iloc[]`:
- `wine_reviews.iloc[:, 0]` returns the country column/Series
- `wine_reviews.iloc[:3, 0]` selects only the 1st, 2nd, and 3rd rows of country
- `wine_reviews.iloc[1:3, 0]` selects only the 2nd and 3rd rows of country
- `wine_reviews.iloc[[0, 1, 2], 0]` selects the rows at the indexes in the supplied list
- `wine_reviews.iloc[-5:]` selects the last five elements of the dataset
#### Label-Based Selection (loc)
Selecting data based on its label and data index value:
- `wine_reviews.loc[index_label, 'column_name']`
- `wine_reviews.loc[0, 'country']` would return 'Italy'
In general, `.iloc[]` is conceptually simpler, but `.loc[]` is more useful, assuming the indices have meaningful names attached to them. For example:
- `wine_reviews.loc[:, ['taster_name', 'taster_twitter_handle', 'points']]` returns a **DataFrame** containing all rows (`:`) and the **Series** 'taster_name', 'taster_twitter_handle' and 'points'.
#### Choosing Between `loc` and `iloc`
- `iloc` accepts only integer positions (or lists/slices of them) for both row_values and column_values
- `loc` works with labels: row_values take index labels (or boolean masks from conditional filters) and column_values take column names
- `iloc` is exclusive when slicing (`[0:99]` returns 99 rows, positions 0 through 98)
- `loc` is inclusive when slicing (`[0:99]` returns 100 rows, labels 0 through 99); see the sketch below
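As a minimal sketch of the slicing difference, assuming the default integer index (so positions and labels coincide):
```python
# iloc slices like standard Python: the end point is excluded
wine_reviews.iloc[0:3]   # rows at positions 0, 1, 2  -> 3 rows

# loc slices by label and includes the end point
wine_reviews.loc[0:3]    # rows labelled 0, 1, 2, 3   -> 4 rows
```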
### Manipulating the Index
We can set a new index using `wine_reviews.set_index('column_name')`. We don't have to use the supplied integer-based index and can use any of the column names provided!
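For example, using the `title` column from the wine reviews data (note that `set_index()` returns a new DataFrame; the original is unchanged unless you reassign it):
```python
# Use the wine title as the index instead of the default integers
wine_reviews.set_index('title')
```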
### Conditional Selection
Example: Find better-than-average wines produced in Italy. (&)
- `wine_reviews.country == 'Italy'` returns a Series of True/False based on the country
- `wine_reviews.loc[wine_reviews.country == 'Italy']` returns a DataFrame with all wines from Italy
- `wine_reviews.loc[(wine_reviews.country == 'Italy') & (wine_reviews.points >= 90)]` returns a DataFrame with all wines from Italy graded at 90 and above.
Example: Find better-than-average wines OR wines produced in Italy. (|)
- `wine_reviews.loc[(wine_reviews.country == 'Italy') | (wine_reviews.points >= 90)]`
Pandas has a few built-in conditional selectors. Here are a few examples.
`isin` allows you to select data whose value "is in" a list of values.
- `wine_reviews.loc[wine_reviews.country.isin(['Italy', 'France'])]`
`isnull` and `notnull` allow you to highlight values which are (or are not) empty (`NaN`).
- `wine_reviews.loc[wine_reviews.price.notnull()]`
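These built-in selectors combine with the comparison operators above. As a sketch, the filter below keeps Italian or French wines that actually have a price listed:
```python
# isin() picks the countries, notnull() drops rows with a missing price;
# & requires both conditions to hold
wine_reviews.loc[
    wine_reviews.country.isin(['Italy', 'France'])
    & wine_reviews.price.notnull()
]
```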
### Assigning Data
To assign data to a **DataFrame** we can either assign a constant value:
- `wine_reviews['critic'] = 'everyone'`
Or we can assign with an iterable of values:
- `wine_reviews['index_backwards'] = range(len(wine_reviews), 0, -1)`
## Summary Functions and Maps
Note: from here on, `wine_reviews` is shortened to `reviews`.
This section covers summarizing data and reformatting it into the shape we want.
### Summary Functions
| Function | Output |
| -------- | -------- |
| `reviews.points.describe()` | Generates a high-level summary of the attributes of the given column. For integers, this produces counts, mean, median, max, min, etc. |
| `reviews.taster_name.describe()` | Generates a high-level summary of the attributes of the given column. For strings, this produces counts, unique, top, and frequency. |
| `reviews.points.mean()` | Provides the mean of the supplied column/Series within a DataFrame. |
| `reviews.taster_name.unique()` | Provides a list of the unique values. |
| `reviews.taster_name.value_counts()` | Provides a list of the unique values and how often they occur. |
| `(reviews.points / reviews.price).idxmax()` | Generates the index (or label) of the first occurrence of the maximum value along a specified axis in a Series or DataFrame (see the sketch below). |
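As a sketch of how `idxmax()` pairs with `loc`, the snippet below finds the title of the wine with the highest points-to-price ratio (column names taken from the wine reviews data above):
```python
# Label of the row with the best points-per-dollar ratio
bargain_idx = (reviews.points / reviews.price).idxmax()

# Look up that row's title by label
bargain_wine = reviews.loc[bargain_idx, 'title']
```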
### Maps
A map is a function that takes one set of values and "maps" them to another set of values. Mapping can mean creating a new representation from existing data, or transforming data from its current format into the format we want.
There are two mapping methods used often: `map()` and `apply()`.
`map()` is the first, and slightly simpler, one.
**`map()` Example**
Suppose we want to remean the scores the wines received to 0.
```python
review_points_mean = reviews.points.mean()
reviews.points.map(lambda p: p - review_points_mean)
```
The function passed to `map()` expects a single value from the Series and returns a transformed version of that value. `map()` returns a new Series where all the values have been transformed by your function. The second method, `apply()`, is the equivalent for transforming a whole **DataFrame** by calling a custom function on each row:
```python
def remean_points(row):
    # Subtract the overall mean so the remeaned scores center on 0
    row.points = row.points - review_points_mean
    return row

reviews.apply(remean_points, axis='columns')
```

Calling `reviews.apply()` with `axis='index'` would instead pass each column to the function to transform, rather than each row.
NOTE" `mpa()` and `apply()` return new, transformed Series and DataFrames, respectively. They do not modify the original.
## Grouping and Sorting
Maps allow us to transform data in a DataFrame or Series, one value at a time for a column. However, we often want to group data and then do something specific to the group the data is in.
### Groupwise Analysis
#### value_counts() / groupby()
An alternative to `value_counts()`: `reviews.groupby('points').points.count()`
`groupby()` creates groups of reviews that allotted the same point value to the given wines.
Then, for each group, we grab the `points` column and count how many times it appears.
Therefore, `value_counts()` is a shortcut to this `groupby()` operation.
#### groupby() minimum
`reviews.groupby('points').price.min()`
Think of each group we generate as a slice of our DataFrame containing only data with values that match. Each slice is accessible via the `apply()` method, and we can then manipulate the data in any way we see fit.
Here's one way of selecting the name of the first wine review from each winery in the dataset: `reviews.groupby('winery').apply(lambda df: df.title.iloc[0])`
For even more control, we can group by more than one column. Here's how we would pick out the best wine by country and province: `reviews.groupby(['country', 'province']).apply(lambda df: df.loc[df.points.idxmax()])`
#### groupby() agg
`agg` allows you to run a bunch of different functions on the DataFrame, simultaneously. For example, to generate a simple statistical summary: `reviews.groupby(['country']).price.agg([len, min, max])`
### Multi-Indexes
`groupby()`, depending on the operation, can result in a multi-index.
```python
countries_reviewed = reviews.groupby(['country', 'province']).description.agg([len])
countries_reviewed
mi = countries_reviewed.index
type(mi) #pandas.core.indexes.multi.MultiIndex
```

Pandas Doc on Multi-Indexes: https://pandas.pydata.org/pandas-docs/stable/user_guide/advanced.html
#### reset_index()
The most commonly used multi-index method is converting back to a regular index:
```python
countries_reviewed.reset_index()
```

### Sorting
`groupby()` returns data in index order, not value order. That is to say, the order of the rows depends on the values in the index, not on the data itself. To sort by the data instead:
```python
countries_reviewed = countries_reviewed.reset_index()
countries_reviewed.sort_values(by='len')
```

`sort_values()` defaults to ascending, where the lowest value is first. To get descending:
```python
countries_reviewed.sort_values(by='len', ascending=False)
```
To sort by index values:
```python
countries_reviewed.sort_index()
```
To sort by more than one column:
```python
countries_reviewed.sort_values(by=['country', 'len'])
```
### 💡 Summary of This Section
| Operation | Description | Example |
|------------|--------------|----------|
| **value_counts()** | Quickly counts occurrences of each unique value in a Series. | `reviews.points.value_counts()` |
| **groupby()** | Splits the DataFrame into sub-DataFrames based on unique column values. | `reviews.groupby('points')` |
| **groupby().count()** | Counts rows in each group (same as `value_counts()` but flexible for any column). | `reviews.groupby('points').points.count()` |
| **groupby().min() / .max() / .mean()** | Applies summary functions to each group. | `reviews.groupby('points').price.min()` |
| **groupby().apply()** | Runs a custom function on each group (full control of each sub-DataFrame). | `reviews.groupby('winery').apply(lambda df: df.title.iloc[0])` |
| **groupby().agg([len, min, max])** | Runs multiple aggregation functions at once for concise summaries. | `reviews.groupby('country').price.agg([len, min, max])` |
---
💡 *Tip:* Use `groupby()` when you need to analyze or summarize data **per category** instead of across the entire DataFrame.