---
# System prepended metadata

title: Pandas 2

---

# Pandas 2

---
title: Agenda
description:
duration: 300
card_type: cue_card
---

### Content

- Working with both rows & columns
- Handling duplicate records
- Pandas built-in operations
  - Aggregate functions
  - Sorting values
- Concatenating DataFrames
- Merging DataFrames

---
title: Working with rows & columns together
description:
duration: 1500
card_type: cue_card
---

### Working with rows & columns together

**Dataset:**
<https://drive.google.com/file/d/1E3bwvYGf1ig32RmcYiWc0IXPN-mD_bI_/view?usp=sharing>

Code:
```python=
!wget "https://drive.google.com/uc?export=download&id=1E3bwvYGf1ig32RmcYiWc0IXPN-mD_bI_" -O mckinsey.csv
```

> **Output**

```
--2024-02-20 05:57:50--  https://drive.google.com/uc?export=download&id=1E3bwvYGf1ig32RmcYiWc0IXPN-mD_bI_
Resolving drive.google.com (drive.google.com)... 74.125.128.102, 74.125.128.138, 74.125.128.139, ...
Connecting to drive.google.com (drive.google.com)|74.125.128.102|:443... connected.
HTTP request sent, awaiting response... 303 See Other
Location: https://drive.usercontent.google.com/download?id=1E3bwvYGf1ig32RmcYiWc0IXPN-mD_bI_&export=download [following]
--2024-02-20 05:57:50--  https://drive.usercontent.google.com/download?id=1E3bwvYGf1ig32RmcYiWc0IXPN-mD_bI_&export=download
Resolving drive.usercontent.google.com (drive.usercontent.google.com)... 142.250.145.132, 2a00:1450:4013:c14::84
Connecting to drive.usercontent.google.com (drive.usercontent.google.com)|142.250.145.132|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 83785 (82K) [application/octet-stream]
Saving to: ‘mckinsey.csv’

mckinsey.csv        100%[===================>]  81.82K  --.-KB/s    in 0.001s  

2024-02-20 05:57:51 (67.9 MB/s) - ‘mckinsey.csv’ saved [83785/83785]
```

Code:
```python=
import pandas as pd
import numpy as np

df = pd.read_csv('mckinsey.csv')
df
```

> **Output**

<img src="https://d2beiqkhq929f0.cloudfront.net/public_assets/assets/000/065/715/original/i1.png?1708410323">

\
**How can we add a row to our dataframe?**

There are multiple ways to do this.
- `concat()`
- `loc/iloc`

**How can we add a row using the `concat()` method?**

Code:
```python=
new_row = {'country': 'India', 'year': 2000,'population':13500000, 'continent': "Asia", 'life_exp':37.08, 'gdp_cap':900.23}

df = pd.concat([df, pd.DataFrame([new_row])], ignore_index=True)
df
```

> **Output**

<img src="https://d2beiqkhq929f0.cloudfront.net/public_assets/assets/000/065/716/original/i2.png?1708410381">

\
**Why are we using `ignore_index=True`?**

- This parameter tells Pandas to ignore the existing index and create a new one based on the length of the resulting DataFrame.
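
To see the difference, here is a tiny toy example (the frames `a` and `b` are made up for illustration, not part of the mckinsey data):

```python=
import pandas as pd

a = pd.DataFrame({'x': [1, 2]})
b = pd.DataFrame({'x': [3]})

# Without ignore_index, each frame keeps its own index labels
print(pd.concat([a, b]).index.tolist())                     # [0, 1, 0]

# With ignore_index=True, a fresh 0..n-1 index is created
print(pd.concat([a, b], ignore_index=True).index.tolist())  # [0, 1, 2]
```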

Perfect! Our row is now added at the bottom of the dataframe.

**Note:**
- `concat()` doesn't mutate the dataframe.
- It returns a new DataFrame with the appended row, leaving the original unchanged.

Another method would be by using `loc`.

We will need to provide the position at which we want to add the new row.

**What do you think this positional value would be?**

- `len(df.index)` since we will add the new row at the end.

For this method, we just need to provide the column values in the right order.

Code:
```python=
new_row = {'country': 'India', 'year': 2000,'population':13500000, 'continent': "Asia", 'life_exp':37.08, 'gdp_cap':900.23}
new_row_values = list(new_row.values())
new_row_values
```

> **Output**

```
['India', 2000, 13500000, 'Asia', 37.08, 900.23]
```

Code:
```python=
df.loc[len(df.index)] = new_row_values
df
```

> **Output**

<img src="https://d2beiqkhq929f0.cloudfront.net/public_assets/assets/000/065/717/original/i3.png?1708410777">

\
The new row was added but the data has been duplicated.

**What can you infer from the last two duplicate rows?**

- A DataFrame allows us to store duplicate rows in the data.

**Now, can we also use `iloc`?**

Adding a row at a specific index position will replace the existing row at that position.

Code:
```python=
df.iloc[len(df.index)-1] = ['Japan', 1000, 1350000, 'Asia', 37.08, 100.23]
df
```

> **Output**

<img src="https://d2beiqkhq929f0.cloudfront.net/public_assets/assets/000/065/718/original/i4.png?1708410844">

\
**What if we try to add the row with a new index?**

Code:
```python=
df.iloc[len(df.index)] = ['India', 2000, 13500000, 'Asia', 37.08, 900.23]
```

> **Output**

```
    ---------------------------------------------------------------------------
    IndexError                                Traceback (most recent call last)
    <ipython-input-72-551519eb141e> in <cell line: 1>()
    ----> 1 df.iloc[len(df.index)] = ['India', 2000, 13500000, 'Asia', 37.08, 900.23]

    /usr/local/lib/python3.10/dist-packages/pandas/core/indexing.py in __setitem__(self, key, value)
        813             key = com.apply_if_callable(key, self.obj)
        814         indexer = self._get_setitem_indexer(key)
    --> 815         self._has_valid_setitem_indexer(key)
        816 
        817         iloc = self if self.name == "iloc" else self.obj.iloc

    /usr/local/lib/python3.10/dist-packages/pandas/core/indexing.py in _has_valid_setitem_indexer(self, indexer)
       1516             elif is_integer(i):
       1517                 if i >= len(ax):
    -> 1518                     raise IndexError("iloc cannot enlarge its target object")
       1519             elif isinstance(i, dict):
       1520                 raise IndexError("iloc cannot enlarge its target object")

    IndexError: iloc cannot enlarge its target object
```

**Why are we getting an error?**

- To set a row with `iloc`, the dataframe must already have a row at that position.
- If no row exists there, you'll see this `IndexError`.

**Note:** When using the `loc[]` attribute, it’s not mandatory that a row already exists with a specific label.
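
A minimal sketch of this (using a made-up two-row frame `demo`, not the mckinsey data):

```python=
import pandas as pd

demo = pd.DataFrame({'a': [1, 2]})   # existing labels: 0, 1
demo.loc[10] = [99]                  # label 10 doesn't exist yet; loc creates it
print(demo.index.tolist())           # [0, 1, 10]
```

Trying `demo.iloc[10] = [99]` instead would raise the same `IndexError` shown above.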

**What if we want to delete a row?**

- use `df.drop()`

If you remember, we specified `axis=1` for columns.

We can modify this: `axis=0` is for rows.

**Does the `drop()` method use positional indices or labels?**

- We had to specify column title.
- So **`drop()` uses labels**, NOT positional indices.

\
Let's drop the row with label 3.

Code:
```python=
df
```

> **Output**

<img src="https://d2beiqkhq929f0.cloudfront.net/public_assets/assets/000/065/719/original/i5.png?1708410945">


Code:
```python=
df = df.drop(3, axis=0)
df
```

> **Output**

<img src="https://d2beiqkhq929f0.cloudfront.net/public_assets/assets/000/065/720/original/i6.png?1708410982">

\
We can see that the **row with label 3 is deleted**.

We now have **rows with labels 0, 1, 2, 4, 5, ...**

`df.loc[4]` and `df.iloc[4]` will give different results.

Code:
```python=
df.loc[4] # The 4th row is printed
```

> **Output**

```
country       Afghanistan
year                 1972
population       13079460
continent            Asia
life_exp           36.088
gdp_cap        739.981106
Name: 4, dtype: object
```

Code:
```python=
df.iloc[4] # The 5th row is printed
```

> **Output**

```
country       Afghanistan
year                 1977
population       14880372
continent            Asia
life_exp           38.438
gdp_cap         786.11336
Name: 5, dtype: object
```

**Why did this happen?**

It is because the `loc` function selects rows using row labels (0, 1, 2, 4, ...) whereas the `iloc` function selects rows using their integer positions (starting from 0 and incrementing by 1 for each row).

So for `iloc`, the row at position 4, i.e. the 5th row, was printed.
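
The same contrast in a minimal, self-contained sketch (toy frame `s`, not the mckinsey data):

```python=
import pandas as pd

s = pd.DataFrame({'x': [10, 20, 30]})
s = s.drop(0, axis=0)        # remaining labels: 1, 2

print(s.loc[1, 'x'])         # 20 -> row with LABEL 1
print(s.iloc[1]['x'])        # 30 -> row at POSITION 1 (which is labelled 2)
```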

**How can we drop multiple rows?**

Code:
```python=
df.drop([1, 2, 4], axis=0) # drops rows with labels 1, 2, 4
```

> **Output**

<img src="https://d2beiqkhq929f0.cloudfront.net/public_assets/assets/000/065/721/original/i7.png?1708411045">

\
Let's reset our indices now.

Code:
```python=
df.reset_index(drop=True,inplace=True) # since we removed a row earlier, we reset our indices
df
```

> **Output**

<img src="https://d2beiqkhq929f0.cloudfront.net/public_assets/assets/000/065/722/original/i8.png?1708411075">

---
title: Handling duplicate records
description:
duration: 1200
card_type: cue_card
---

### Handling duplicate records

If you remember, the last two rows were duplicates.

**How can we deal with these duplicate rows?**

Let's create some more duplicate rows to understand this.

Code:
```python=
df.loc[len(df.index)] = ['India', 2000, 13500000, 'Asia', 37.08, 900.23]
df.loc[len(df.index)] = ['Sri Lanka',2022 ,130000000, 'Asia', 80.00,500.00]
df.loc[len(df.index)] = ['Sri Lanka',2022 ,130000000, 'Asia', 80.00,500.00]
df.loc[len(df.index)] = ['India',2000 ,13500000, 'Asia', 80.00,900.23]
df
```

> **Output**

<img src="https://d2beiqkhq929f0.cloudfront.net/public_assets/assets/000/065/723/original/i9.png?1708411112">

\
**How to check for duplicate rows?**

-  We use `duplicated()` method on the DataFrame.

Code:
```python=
df.duplicated()
```

> **Output**

```
0       False
1       False
2       False
3       False
4       False
        ...  
1704    False
1705     True
1706    False
1707     True
1708    False
Length: 1709, dtype: bool
```

It gives `True` for any row that is identical to an earlier row.

However, it is not practical to scan a long list of `True` and `False` values.

We can use the `loc` data selector to extract just the duplicate rows.

Code:
```python=
# Extracting duplicate rows

df.loc[df.duplicated()]
```

> **Output**

<img src="https://d2beiqkhq929f0.cloudfront.net/public_assets/assets/000/065/724/original/i10.png?1708411159">

\
**How do we get rid of these duplicate rows?**

- We can use the `drop_duplicates()` function.

Code:
```python=
df.drop_duplicates()
```

> **Output**

<img src="https://d2beiqkhq929f0.cloudfront.net/public_assets/assets/000/065/725/original/i11.png?1708411195">

\
**But how do we decide among all duplicate rows which ones to keep?**

Here we can use the `keep` argument.

It has only three distinct values -
- `first`
- `last`
- `False`

The default is 'first'.

If `first`, the first occurrence is treated as unique and the remaining identical rows as duplicates.

Code:
```python=
df.drop_duplicates(keep='first')
```

> **Output**

<img src="https://d2beiqkhq929f0.cloudfront.net/public_assets/assets/000/065/726/original/i12.png?1708411237">

\
If `last`, the last occurrence is treated as unique and the remaining identical rows as duplicates.

Code:
```python=
df.drop_duplicates(keep='last')
```

> **Output**

<img src="https://d2beiqkhq929f0.cloudfront.net/public_assets/assets/000/065/727/original/i13.png?1708411277">

\
If `False`, all identical rows are treated as duplicates.

Code:
```python=
df.drop_duplicates(keep=False)
```

> **Output**

<img src="https://d2beiqkhq929f0.cloudfront.net/public_assets/assets/000/065/728/original/i14.png?1708411307">

\
**What if you want to check for duplicates based on only a few columns?**

We can use the `subset` argument to specify the list of columns to consider.

Code:
```python=
df.drop_duplicates(subset=['country'],keep='first')
```

> **Output**

<img src="https://d2beiqkhq929f0.cloudfront.net/public_assets/assets/000/065/729/original/i15.png?1708411335">

---
title: Quiz-1
description: 
duration: 60
card_type: quiz_card
---

# Question

What will be the output of the following piece of code?

Code:
```python=
import pandas as pd

data = {'A': ['foo', 'bar', 'foo', 'bar', 'foo', 'bar', 'foo', 'foo'],
        'B': ['one', 'one', 'two', 'three', 'two', 'two', 'one', 'three'],
        'C': ['small', 'large', 'large', 'small', 'small', 'large', 'large', 'small'],
        'D': [1, 2, 2, 3, 3, 4, 5, 6]}

df = pd.DataFrame(data)

print(sum(df.duplicated(subset=['A', 'B'])))
```

# Choices

- [ ] 4
- [x] 2
- [ ] 0

---
title: Slicing the DataFrame
description:
duration: 1200
card_type: cue_card
---

### Slicing the DataFrame

**How can we slice the dataframe into, say first 4 rows and first 3 columns?**

- We can use `iloc`

Code:
```python=
df.iloc[0:4, 0:3]
```

> **Output**

<img src="https://d2beiqkhq929f0.cloudfront.net/public_assets/assets/000/065/730/original/i16.png?1708411380" width=300 height=200>

\
Pass in 2 different ranges for slicing - **one for rows** and **one for columns**, just like NumPy.

Recall, `iloc` doesn't include the end index while slicing.
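
A quick sketch of the end-index behaviour (made-up frame `t` with a default 0..5 index):

```python=
import pandas as pd

t = pd.DataFrame({'a': range(6)})   # labels 0..5

print(len(t.iloc[0:4]))   # 4 -> iloc EXCLUDES the end position
print(len(t.loc[0:4]))    # 5 -> loc INCLUDES the end label
```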

**Can we do the same thing with `loc`?**

Code:
```python=
df.loc[1:5, 1:4]
```

> **Output**

```
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-89-494208dc7680> in <cell line: 1>()
----> 1 df.loc[1:5, 1:4]

/usr/local/lib/python3.10/dist-packages/pandas/core/indexing.py in __getitem__(self, key)
   1065             if self._is_scalar_access(key):
   1066                 return self.obj._get_value(*key, takeable=self._takeable)
-> 1067             return self._getitem_tuple(key)
   1068         else:
   1069             # we by definition only have the 0th axis

/usr/local/lib/python3.10/dist-packages/pandas/core/indexing.py in _getitem_tuple(self, tup)
   1254             return self._multi_take(tup)
   1255 
-> 1256         return self._getitem_tuple_same_dim(tup)
   1257 
   1258     def _get_label(self, label, axis: int):

/usr/local/lib/python3.10/dist-packages/pandas/core/indexing.py in _getitem_tuple_same_dim(self, tup)
    922                 continue
    923 
--> 924             retval = getattr(retval, self.name)._getitem_axis(key, axis=i)
    925             # We should never have retval.ndim < self.ndim, as that should
    926             #  be handled by the _getitem_lowerdim call above.

/usr/local/lib/python3.10/dist-packages/pandas/core/indexing.py in _getitem_axis(self, key, axis)
   1288         if isinstance(key, slice):
   1289             self._validate_key(key, axis)
-> 1290             return self._get_slice_axis(key, axis=axis)
   1291         elif com.is_bool_indexer(key):
   1292             return self._getbool_axis(key, axis=axis)

/usr/local/lib/python3.10/dist-packages/pandas/core/indexing.py in _get_slice_axis(self, slice_obj, axis)
   1322 
   1323         labels = obj._get_axis(axis)
-> 1324         indexer = labels.slice_indexer(slice_obj.start, slice_obj.stop, slice_obj.step)
   1325 
   1326         if isinstance(indexer, slice):

/usr/local/lib/python3.10/dist-packages/pandas/core/indexes/base.py in slice_indexer(self, start, end, step, kind)
   6557         self._deprecated_arg(kind, "kind", "slice_indexer")
   6558 
-> 6559         start_slice, end_slice = self.slice_locs(start, end, step=step)
   6560 
   6561         # return a slice

/usr/local/lib/python3.10/dist-packages/pandas/core/indexes/base.py in slice_locs(self, start, end, step, kind)
   6765         start_slice = None
   6766         if start is not None:
-> 6767             start_slice = self.get_slice_bound(start, "left")
   6768         if start_slice is None:
   6769             start_slice = 0

/usr/local/lib/python3.10/dist-packages/pandas/core/indexes/base.py in get_slice_bound(self, label, side, kind)
   6674         # For datetime indices label may be a string that has to be converted
   6675         # to datetime boundary according to its resolution.
-> 6676         label = self._maybe_cast_slice_bound(label, side)
   6677 
   6678         # we need to look up the label

/usr/local/lib/python3.10/dist-packages/pandas/core/indexes/base.py in _maybe_cast_slice_bound(self, label, side, kind)
   6621         # reject them, if index does not contain label
   6622         if (is_float(label) or is_integer(label)) and label not in self:
-> 6623             raise self._invalid_indexer("slice", label)
   6624 
   6625         return label

TypeError: cannot do slice indexing on Index with these indexers [1] of type int
```

**Why doesn't slicing with integer indices work with `loc`?**

Recall, we need to work with explicit labels while using `loc`.

Code:
```python=
df.loc[1:5, ['country','life_exp']]
```
> **Output**

<img src="https://d2beiqkhq929f0.cloudfront.net/public_assets/assets/000/065/731/original/i17.png?1708411415" width=250 height=255>

\
In `loc`, we can mention ranges using column labels as well.

Code:
```python=
df.loc[1:5, 'year':'life_exp']
```

> **Output**

<img src="https://d2beiqkhq929f0.cloudfront.net/public_assets/assets/000/065/732/original/i18.png?1708411454" width=400 height=230>

\
**How can we get specific rows and columns?**

Code:
```python=
df.iloc[[0,10,100], [0,2,3]]
```

> **Output**

<img src="https://d2beiqkhq929f0.cloudfront.net/public_assets/assets/000/065/733/original/i19.png?1708411490" width=400 height=155>

\
We pass the **specific indices packed in lists `[]`**.

**Can we do step slicing?** Yes!

Code:
```python=
df.iloc[1:10:2]
```

> **Output**

<img src="https://d2beiqkhq929f0.cloudfront.net/public_assets/assets/000/065/734/original/i20.png?1708411534" width=600 height=225>

\
**Does step slicing work for `loc` too?** Yes!

Code:
```python=
df.loc[1:10:2]
```

> **Output**

<img src="https://d2beiqkhq929f0.cloudfront.net/public_assets/assets/000/065/735/original/i21.png?1708411566" width=600 height=225>

---
title: Break & Doubt Resolution
description:
duration: 600
card_type: cue_card
---

### Break & Doubt Resolution

`Instructor Note:`
* Take this time (up to 5-10 mins) to give a short break to the learners.
* Meanwhile, you can ask them to share their doubts (if any) regarding the topics covered so far.

---
title: Pandas built-in operations
description:
duration: 900
card_type: cue_card
---

## Pandas built-in operations

### Aggregate functions

Let's select the feature `'life_exp'` -

Code:
```python=
le = df['life_exp']
le
```

> **Output**

```
0       28.801
1       30.332
2       31.997
3       36.088
4       38.438
         ...  
1704    37.080
1705    37.080
1706    80.000
1707    80.000
1708    80.000
Name: life_exp, Length: 1709, dtype: float64
```

**How can we find the mean of the column `life_exp`?**

Code:
```python=
le.mean()
```

> **Output**

```
59.486053060269164
```

What other operations can we do?

- `sum()`
- `count()`
- `min()`
- `max()`

... and so on

**Note:** We can see more methods by **pressing "tab" after `le.`**
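
For instance, on a small made-up sample of life-expectancy values (not the real column):

```python=
import pandas as pd

sample = pd.Series([28.8, 30.3, 32.0, 36.1])  # made-up values for illustration

print(sample.min())     # 28.8
print(sample.max())     # 36.1
print(sample.count())   # 4
```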

Code:
```python=
le.sum()
```

> **Output**

```
101661.66468
```

Code:
```python=
le.count()
```

> **Output**

```
1709
```

What will we get if we divide `sum()` by `count()`?

Code:
```python=
le.sum() / le.count()
```

> **Output**

```
59.486053060269164
```

It gives us the **mean/average** of life expectancy.

---
title: Sorting Values
description:
duration: 1200
card_type: cue_card
---

### Sorting Values

If you notice, the `life_exp` column is not sorted.

**How can we perform sorting in Pandas?**

Code:
```python=
df.sort_values(['life_exp'])
```

> **Output**

<img src="https://d2beiqkhq929f0.cloudfront.net/public_assets/assets/000/065/736/original/i1.png?1708411848" width=625 height=420>

\
Rows get sorted **based on values in `life_exp` column**.

**By default**, values are sorted in **ascending order**.

**How can we sort the rows in descending order?**

Code:
```python=
df.sort_values(['life_exp'], ascending=False)
```

> **Output**

<img src="https://d2beiqkhq929f0.cloudfront.net/public_assets/assets/000/065/737/original/i2.png?1708411884" width=625 height=420>

\
**Can we perform sorting on multiple columns?** Yes!

Code:
```python=
df.sort_values(['year', 'life_exp'])
```

> **Output**

<img src="https://d2beiqkhq929f0.cloudfront.net/public_assets/assets/000/065/738/original/i3.png?1708411917" width=625 height=420>

\
**What exactly happened here?**

- Rows were **first sorted** based on **`'year'`**
- Then, **rows with the same value of `'year'`** were sorted based on **`'life_exp'`**

<img src="https://d2beiqkhq929f0.cloudfront.net/public_assets/assets/000/065/707/original/download.png?1708408923" width=625 height=300>

\
This way, we can do multi-level sorting of our data.

**How can we have different sorting orders for different columns in multi-level sorting?**

Code:
```python=
df.sort_values(['year', 'life_exp'], ascending=[False, True])
```

> **Output**

<img src="https://d2beiqkhq929f0.cloudfront.net/public_assets/assets/000/065/739/original/i4.png?1708411951" width=625 height=420>

\
**Just pack `True` and `False` for respective columns in a list `[]`**

---
title: Quiz-2
description: 
duration: 60
card_type: quiz_card
---

# Question

How to sort a Pandas dataframe in place based on the values of columns `country` and `population` in descending order?

# Choices

- [ ] df.sort_values(['country','population'])
- [ ] df.sort_values(['country','population'], inplace=True)
- [x] df.sort_values(['country','population'], inplace=True, ascending=False)
- [ ] df.sort_values(['country','population'], inplace=True, ascending=True)

---
title: Concatenating DataFrames
description:
duration: 1500
card_type: cue_card
---

### Concatenating DataFrames

Oftentimes our data is split across multiple tables, and we need to work with them together.

Let's see a mini use-case of `users` and `messages`.

`users` $\rightarrow$ **Stores the user details** - **IDs** and **Names of users**

Code:
```python=
users = pd.DataFrame({"userid":[1, 2, 3], "name":["sharadh", "shahid", "khusalli"]})
users
```

> **Output**

<img src="https://d2beiqkhq929f0.cloudfront.net/public_assets/assets/000/065/740/original/i5.png?1708412098" width=220 height=175>

\
`msgs` $\rightarrow$ **Stores the messages** users have sent - **User IDs** and **Messages**

Code:
```python=
msgs = pd.DataFrame({"userid":[1, 1, 2, 4], "msg":['hmm', "acha", "theek hai", "nice"]})
msgs
```

> **Output**

<img src="https://d2beiqkhq929f0.cloudfront.net/public_assets/assets/000/065/741/original/i6.png?1708412124" width=220 height=200>

\
**Can we combine these 2 DataFrames to form a single DataFrame?**

Code:
```python=
pd.concat([users, msgs])
```

> **Output**

<img src="https://d2beiqkhq929f0.cloudfront.net/public_assets/assets/000/065/742/original/i7.png?1708412157" width=300>
<br/>


**How exactly did `concat()` work?**

- **By default**, `axis=0` (row-wise) for concatenation.
- **`userid`**, being common to both DataFrames, was **combined into a single column**.
  - First, rows of the `users` dataframe were placed, with values of the `msg` column as NaN.
  - Then, rows of the `msgs` dataframe were placed, with values of the `name` column as NaN.
- The original indices of the rows were preserved.

**How can we make the indices unique for each row?**

Code:
```python=
pd.concat([users, msgs], ignore_index=True)
```

> **Output**

<img src="https://d2beiqkhq929f0.cloudfront.net/public_assets/assets/000/065/743/original/i8.png?1708412192" width=285 height=300>

\
**How can we concatenate them horizontally?**

Code:
```python=
pd.concat([users, msgs], axis=1)
```

> **Output**

<img src="https://d2beiqkhq929f0.cloudfront.net/public_assets/assets/000/065/744/original/i9.png?1708412224" width=375 height=200>

\
As you can see here,

- Both the dataframes are combined horizontally (column-wise).
- It gives 2 columns with **different positional (implicit) index**, but **same label**.
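
Because of the duplicated label, selecting `'userid'` by name now returns *both* columns. A quick sketch (re-creating the two small frames):

```python=
import pandas as pd

users = pd.DataFrame({"userid": [1, 2, 3], "name": ["sharadh", "shahid", "khusalli"]})
msgs = pd.DataFrame({"userid": [1, 1, 2, 4], "msg": ["hmm", "acha", "theek hai", "nice"]})

wide = pd.concat([users, msgs], axis=1)
print(wide['userid'].shape)   # (4, 2) -> BOTH columns labelled 'userid' come back
```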

---
title: Merging DataFrames
description:
duration: 1500
card_type: cue_card
---

### Merging DataFrames

So far we have only concatenated but not merged data.

**But what is the difference between `concat` and `merge`?**

`concat`
- simply stacks multiple dataframes together along an axis.

<img src="https://d2beiqkhq929f0.cloudfront.net/public_assets/assets/000/065/708/original/d1.png?1708409121" width=700 height=300/>

\
`merge`
- combines dataframes in a **smart** way based on values in shared column(s).

<img src="https://d2beiqkhq929f0.cloudfront.net/public_assets/assets/000/065/709/original/d2.png?1708409138" height=200/>

\
**How can we know the name of the person who sent a particular message?**

We need information from **both the dataframes**.

So can we use `pd.concat()` for combining the dataframes? No.

Code:
```python=
pd.concat([users, msgs], axis=1)
```

> **Output**

<img src="https://d2beiqkhq929f0.cloudfront.net/public_assets/assets/000/065/745/original/i10.png?1708412258" width=350 height=200>

\
**What are the problems here?**

- `concat` simply **stacked** the dataframes **horizontally**.
- If you notice, `userid 3` of the **users** dataframe is stacked against `userid 2` of the **msgs** dataframe.
- This way of stacking doesn't help us gain any insights.

We need to **merge** the data.

**How can we join the dataframes?**

Code:
```python=
users.merge(msgs, on="userid")
```

> **Output**

<img src="https://d2beiqkhq929f0.cloudfront.net/public_assets/assets/000/065/746/original/i11.png?1708412295" width=300 height=185>

\
Notice that `users` has a userid=3 but `msgs` does not.

- When we **merge** these dataframes, the **userid=3 is not included**.
- Similarly, **userid=4 is not present** in `users`, and thus **not included**.
- Only the userid **common in both dataframes** is shown.

\
**What type of join is this?** Inner Join

Remember joins from SQL?

<img src="https://d2beiqkhq929f0.cloudfront.net/public_assets/assets/000/065/710/original/joins.webp?1708409218" width=600 height=150>

\
The `on` parameter specifies the `key` column, similar to a join key in SQL.

\
**Which join do we use to get info of all the users and all the messages?**

Code:
```python=
users.merge(msgs, on="userid", how="outer")
```

> **Output**

<img src="https://d2beiqkhq929f0.cloudfront.net/public_assets/assets/000/065/747/original/i12.png?1708412331" width=300 height=250>

\
**Note:** All missing values are replaced with `NaN`.

**What if we want the info of all the users in the dataframe?**

Code:
```python=
users.merge(msgs, on="userid", how="left")
```

> **Output**

<img src="https://d2beiqkhq929f0.cloudfront.net/public_assets/assets/000/065/748/original/i13.png?1708412364" width=300 height=220>

\
**Similarly, what if we want all the messages and info only for the users who sent a message?**

Code:
```python=
users.merge(msgs, on="userid", how="right")
```

> **Output**

<img src="https://d2beiqkhq929f0.cloudfront.net/public_assets/assets/000/065/749/original/i14.png?1708412401" width=300 height=222>

\
`NaN` in **name** can be thought of as an anonymous message.

But sometimes, the column names might be different even if they contain the same data.

Let's rename our users column `userid` to `id`.

Code:
```python=
users.rename(columns = {"userid": "id"}, inplace=True)
users
```

> **Output**

<img src="https://d2beiqkhq929f0.cloudfront.net/public_assets/assets/000/065/750/original/i15.png?1708412440" width=180 height=180>

\
**Now, how can we merge the 2 dataframes when the `key` column has a different name in each?**

Code:
```python=
users.merge(msgs, left_on="id", right_on="userid")
```

> **Output**

<img src="https://d2beiqkhq929f0.cloudfront.net/public_assets/assets/000/065/751/original/i16.png?1708412473" width=350 height=170>

\
Here,
- `left_on`: Specifies the **key of the 1st dataframe** (users).
- `right_on`: Specifies the **key of the 2nd dataframe** (msgs).
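
One detail worth noting (sketched below with the two frames re-created): with `left_on`/`right_on`, *both* key columns survive the merge, so the redundant one is usually dropped.

```python=
import pandas as pd

users = pd.DataFrame({"id": [1, 2, 3], "name": ["sharadh", "shahid", "khusalli"]})
msgs = pd.DataFrame({"userid": [1, 1, 2, 4], "msg": ["hmm", "acha", "theek hai", "nice"]})

merged = users.merge(msgs, left_on="id", right_on="userid")
print(merged.columns.tolist())           # ['id', 'name', 'userid', 'msg']

merged = merged.drop(columns="userid")   # keep just one copy of the key
print(merged.columns.tolist())           # ['id', 'name', 'msg']
```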

---
title: Quiz-3
description: 
duration: 60
card_type: quiz_card
---

# Question

Given two dataframes -

```python=
import pandas as pd
df1 = pd.DataFrame({'A':[10,30], 'B':[20,40], 'C':[30, 60]})
df2 = pd.DataFrame({'A':[10,30], 'C':[30, 60]})
df2.merge(df1, on = 'A', how = 'outer')
```

What would be the shape of the output dataframe?

# Choices

- [ ] (3, 4)
- [x] (2, 4)
- [ ] (3, 2)
- [ ] (4, 2)

---
title: Launch a feedback poll
description: To gather valuable feedback regarding pace adjustment
duration: 30
card_type: poll_card
---

# Description
Which of the following best depicts your current level of confidence about the pace and difficulty of the material covered in the last 3 lectures?

# Choices
- Super confident: Feeling super confident and comfortable with pace, ready to conquer the next lesson.
- Somewhat confident: Grasping most concepts and comfortable with content & pace, but a few concepts need brushing up.
- Not so confident: While I understand some concepts, I'm finding the pace a bit too fast at times.
- Feeling a bit lost: I'm finding it difficult to keep up with the pace or grasp certain topics.
- Completely lost: I'm struggling significantly with the pace and difficulty of the material.


---
title: Unlock Assignment & ask learner to solve in live class
description:
duration: 1800
card_type: cue_card
---
* <span style="color:skyblue">Unlock the assignment for learners</span> by clicking the **"question mark"** button on the top bar.
<img src="https://d2beiqkhq929f0.cloudfront.net/public_assets/assets/000/078/685/original/Screenshot_2024-06-19_at_7.17.12_PM.png?1718804854" width=200 />
* If you face any difficulties using this feature, please refer to this video on how to unlock assignments.
* <span style="color:red">**Note:** The following video is strictly for instructor reference only. [VIDEO LINK](https://www.loom.com/share/15672134598f4b4c93475beda227fb3d?sid=4fb31191-ae8c-4b18-bf81-468d2ffd9bd4)</span>
### Conducting a Live Assignment Solution Session:
1. Once you unlock the assignments, ask if anyone in the class would like to solve a question live by sharing their screen.
2. Select a learner and grant permission by navigating to <span style="color:skyblue">**Settings > Admin > Unmuted Audience Can Share**, then select **Audio, Video, and Screen**.</span>
<img src="https://d2beiqkhq929f0.cloudfront.net/public_assets/assets/000/111/113/original/image.png?1740484517" width=400 />
3. Allow the selected learner to share their screen and guide them through solving the question live.
4. Engage with both the learner sharing the screen and other students in the class to foster an interactive learning experience.



### Practice Coding Question(s)

You can pick the following question and solve it during the lecture itself.

This will help the learners to get familiar with the problem solving process and motivate them to solve the assignments.

<span style="background-color: pink;">Make sure to start the doubt session before you solve this question.</span>

Q. https://www.scaler.com/hire/test/problem/23265/ - Population greater than 10 mn
