## ASSESSMENT SUMMARY
| Grade | SKILL |
|:------------------------------------ |:------------------------------------:|
|  | Data Visualization and Communication |
|  | Machine Learning |
|  | Scripting and Command Line |
## ASSESSMENT DETAILS
### Data Visualization and Communication
**Summary**: Visualisation and communication were up to the mark. The objective was clearly stated and later validated by the analysis. A well-organised presentation was included as well.
1. The objective was clearly stated, which made it clear what to expect from the analysis.
2. Recognised that the given dataset was quite large for direct visual analysis.
3. Appropriately resampled the large dataset to make it suitable for visual analysis.
4. A distinctive approach was taken at the start of the EDA.
5. Identified 'delay_time' as a crucial feature and explored it further by plotting it against other important features, which gave crucial insights about the data (a rough sketch of this kind of sampled plot follows this list).

6. Satisfactory reasoning was given for not including every feature in the EDA, based on an assessment of their importance (noted against the blank plot in the accompanying image).
> cancelled flights trimmed out for visualisation purpose
7. The necessary type conversions and encodings were pinpointed.
8. Identified that the feature 'departure' does not contribute much to the analysis.
> 'departure' was omitted as there is only one departure("HKG" aka Hong Kong)
9. The distribution of the features was examined with the help of plots (one such plot was shown), and the following was concluded:
> Claim vs no claim distributions for each feature look similar, but some entries have higher occurrences of claims.
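
As a hedged illustration of points 3 and 5 above, here is a minimal sketch of what a sampled 'delay_time' plot could look like; the file name, sample size, and the choice of 'std_hour' as the comparison feature are assumptions, not the candidate's actual code.
```python=
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical input file; the real dataset path is not given in the submission.
df = pd.read_csv('flight_delays.csv')

# Down-sample the large dataset so the plot stays readable (size is illustrative).
sample = df.sample(n=10_000, random_state=42)

# Cancelled flights were trimmed out for visualisation in the submission,
# so coerce 'delay_time' to numeric and drop the non-numeric entries.
sample['delay_time'] = pd.to_numeric(sample['delay_time'], errors='coerce')
sample = sample.dropna(subset=['delay_time'])

# Explore 'delay_time' against another feature, e.g. the scheduled hour.
sample.plot.scatter(x='std_hour', y='delay_time', alpha=0.3)
plt.title('delay_time vs std_hour (sampled)')
plt.show()
```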

### Machine Learning
**Summary**: Feature engineering was done with care. What was done, and why, is clearly explained in the presentation.
#### Feature Engineering
1. A good start to the pre-processing was made by stating the constraints within which it has to be done:
> is_claim is either 0 or 800
> std_hour is always 0-24
> week is between 1-52
2. Identified 'Airline' as the sole feature containing nulls and dealt with it by filling the missing values (see the snippet below).
3. Extra time features that help prediction were engineered. The reasoning was included in the presentation in the form of visualisations.
> 'weekday' is added as a supplement
4. Categorical features were appropriately handled, either by encoding them directly or via one-hot encoding (a rough sketch of this encoding step follows the snippet below).
The following snippet shows how the null values were dealt with. Similar formatting and commenting was used for the other feature engineering tasks, which made the code easy to follow.
```python=
import pandas as pd

def preprocess(df):
    """
    Preprocess data. Returns a new object
    """
    df = df.copy(deep=True)
    df['flight_date'] = pd.to_datetime(df['flight_date'])
    # Convert is_claim to class labels
    df.loc[df['is_claim'] == 800, 'is_claim'] = 1
    # Add Weekday column
    df['Weekday'] = df['flight_date'].dt.weekday
    # Fill null values. Only Airline contains null values.
    df['Airline'].fillna('Unknown', inplace=True)
    # Drop delay_time < 0 but keep cancelled flights
    df = df.loc[~df['delay_time'].str.startswith('-')]
    return df
```
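
The exact encoding code is not quoted in the submission; as a rough sketch of point 4 above (assuming 'Airline' is one of the categorical columns, as the submission suggests), one-hot encoding with pandas could look like this:
```python=
import pandas as pd

def encode_categoricals(df, columns=('Airline',)):
    """
    One-hot encode the given categorical columns.
    'Airline' is taken from the submission; extend the list as needed.
    """
    return pd.get_dummies(df, columns=list(columns))

# Typical usage after the preprocess() step shown above:
# features = encode_categoricals(preprocess(raw_df))
```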
#### Machine Learning Model
1. Initially, different ways to frame the problem were identified. This helps lay out the possible models and choose the best among them:
> 1.) Regression to predict delay_time
> 2.) classification to predict is_claim
2. Finally, option 2, i.e. classification, was chosen, with the following reason given:
> Objective is to obtain an expected claim amount
> `E(claim)=800*P(is_claim=1 | features)`
3. Decided to compare the models using mean absolute and mean squared errors. Identified two cases for calculating both, namely dynamic and binary pricing, and chose the better of the two (a rough sketch of this comparison follows this list):
> '(MAE/MSE(prob)) using P(is_claim)*800' is dynamic as it includes probability
> '(MAE/MSE) using predicted class*800' is static
4. While calculating the baseline accuracy, identified that 'is_claim' is a rare event, making accuracy
> insensitive to the training data
5. Hence, used precision and recall for evaluation instead of accuracy, as in the baseline evaluation below.
```python=
import numpy as np
from sklearn.metrics import (accuracy_score, f1_score, mean_absolute_error,
                             mean_squared_error, recall_score)

def evaluate_baseline(y_true):
    """
    Baseline accuracy. Simply predicts all zeros
    """
    y_pred = np.zeros(len(y_true))
    mae = mean_absolute_error(y_true * 800, y_pred)
    mse = mean_squared_error(y_true * 800, y_pred)
    f1 = f1_score(y_true, y_pred)
    accuracy = accuracy_score(y_true, y_pred)
    recall = recall_score(y_true, y_pred)
    return {'F1 Score': f1, 'accuracy': accuracy, 'recall': recall, 'MAE': mae, 'MSE': mse}
```
6. Started the modelling with a Naive Bayes model and obtained 5-fold cross-validation results for it. This helped in thoroughly assessing the usefulness of the model.
7. Similar modelling was done using logistic regression, and it was concluded that
> compared to naive bayes, logistic regression has lower recall but also lower MAE and MSE.
8. Identified that the primary focus was to obtain low MAE and MSE, and hence opted for the logistic regression model.
9. Considerations and trade-offs were mentioned, which helped clarify why dynamic pricing is better than binary pricing.
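
To make points 3 and 6-9 concrete, the following is a minimal sketch (not the candidate's code) of how the dynamic vs binary pricing comparison could be computed with a 5-fold cross-validated logistic regression; the model settings and variable names are assumptions. The dynamic variant follows directly from the quoted `E(claim)=800*P(is_claim=1 | features)`.
```python=
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import mean_absolute_error, mean_squared_error

def compare_pricing(X, y):
    """
    Compare dynamic vs binary pricing under 5-fold cross-validation.
    y holds the 0/1 labels produced by preprocess(); X the encoded features.
    """
    model = LogisticRegression(max_iter=1000)

    # Dynamic pricing: E(claim) = 800 * P(is_claim = 1 | features)
    proba = cross_val_predict(model, X, y, cv=5, method='predict_proba')[:, 1]
    dynamic_pred = proba * 800

    # Binary pricing: predicted class * 800
    labels = cross_val_predict(model, X, y, cv=5)
    binary_pred = labels * 800

    actual = y * 800
    return {
        'MAE (dynamic)': mean_absolute_error(actual, dynamic_pred),
        'MSE (dynamic)': mean_squared_error(actual, dynamic_pred),
        'MAE (binary)': mean_absolute_error(actual, binary_pred),
        'MSE (binary)': mean_squared_error(actual, binary_pred),
    }
```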
### Scripting and Command Line
**Summary**: The project was done in Python and was containerized using Docker. A README file and a presentation were included.
1. A README was included, describing the project structure and how to set everything up.
2. A requirements file was included, clearly listing the packages to install along with their versions.
3. Proper comments were used inside the code.
4. Code is readable with uniform spacing and consistent styling.
5. With the help of Docker, the whole model with all its dependencies was containerized. This makes the code easy to run and keeps the model platform-independent.
6. The presentation covered almost everything, from data exploration and pre-processing to training.
7. Possible improvements were identified that could further enhance the predictions:
> 1.) more data can be helpful
> 2.) can test more models for prediction that might perform better
> 3.) Testing can be done.