## ASSESSMENT SUMMARY
| Grade | SKILL |
|:------------------------------------ |:------------------------------------:|
|  | Data Visualization and Communication |
|  | Machine Learning |
|  | Scripting and Command Line |
## ASSESSMENT DETAILS
### Data Visualization and Communication
**Summary**: Visualisation and communication are up to scratch. Hypotheses were clearly stated and later validated. A presentation was also included.
1. Explored the null values.
2. Traditional data analysis was done.
3. Visualized the total number of flights for each day, each week, and for different flights (a rough sketch of how such counts could be produced follows below).
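As an illustration only, counts like these could be produced as in the sketch below; this is not the author's notebook code, and the file name plus the `flight_date` and `Airline` columns are assumed from snippets quoted later in this report.
```python=
# Hypothetical sketch: flight counts per day, per week, and per airline.
# The file name and column names are assumptions, not the project's exact ones.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv('flight.csv', parse_dates=['flight_date'])

fig, axes = plt.subplots(3, 1, figsize=(10, 12))
df.groupby('flight_date').size().plot(ax=axes[0], title='Flights per day')
df.set_index('flight_date').resample('W').size().plot(ax=axes[1], title='Flights per week')
df['Airline'].value_counts().plot(kind='bar', ax=axes[2], title='Flights per airline')
fig.tight_layout()
plt.show()
```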


### Machine Learning
**Summary**: Feature engineering was done with the utmost care. What was done and why is clearly explained in the README and the presentation. All steps and the final observations were included in the presentation.
#### Feature Engineering
1. Identified airline as the only feature containing nulls.
2. Machine learning was used at an early stage to decide whether to drop or fix (predict) the columns containing missing values.
3. To predict the missing data, the accuracy of three models, namely RandomForest, AdaBoost, and CatBoost, was compared; in the end it was decided to drop the airline feature (a sketch of such a comparison follows the snippet below).
> With an accuracy of 39% over 1000 test samples, the prediction could mislead the model.
```python=
# Drop the rows with missing airline values and persist the cleaned dataset
df = df.dropna()
df.to_csv(os.path.join(DATA_DIR, 'flight_fixed.csv'), index=None)
```
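For context, a minimal sketch of what such a comparison could look like is given below. This is not the project's code; it assumes the non-null rows are used for training, that the features are already numerically encoded, and that the 1000-row test split matches the figure quoted above.
```python=
# Hypothetical sketch of comparing imputation models for the missing airline values.
# Feature encoding and the test split size are assumptions based on the report above.
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from catboost import CatBoostClassifier

known = df[df['Airline'].notna()]
X = known.drop(columns=['Airline'])  # assumed, already-encoded feature set
y = known['Airline']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=1000, random_state=42)

models = {
    'RandomForest': RandomForestClassifier(),
    'AdaBoost': AdaBoostClassifier(),
    'CatBoost': CatBoostClassifier(verbose=0),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    print(name, accuracy_score(y_test, model.predict(X_test)))
```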
4. Identified the importance of weather data in predicting the delay. Weather data was gathered from online resources and saved properly as a separate dataset.
> In addition to the training dataset, a weather dataset is added to improve the prediction results. Being one of the most common factors in airline delays, weather data is a great resource to boost the overall performance.
5. Extra features engineered and added to the original dataset.
```python=
from datetime import datetime

# Day of the week (0 = Monday) and a combined scheduled-departure datetime feature
def get_day_of_the_week(date_string):
    return datetime.strptime(date_string, "%Y-%m-%d").weekday()

df['day'] = df['flight_date'].apply(get_day_of_the_week)
df['flight_datetime'] = pd.to_datetime(df['flight_date'].astype(str) + ' ' + df['std_hour'].astype(str) + ':00', format='%Y-%m-%d %H:%M')
```
6. The saved weather data was properly re-formatted and saved again for use during prediction (a plausible sketch follows).
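The exact formatting steps are not quoted in the notebook excerpt; a plausible sketch is given below, assuming the weather file has a raw timestamp column that must line up with the `flight_datetime` key used in the join of the next step. The file and column names are guesses.
```python=
# Hypothetical sketch of re-formatting the saved weather data (names are assumed).
import os
import pandas as pd

weather_df = pd.read_csv(os.path.join(DATA_DIR, 'weather.csv'))

# Parse the raw timestamp so it matches the 'flight_datetime' join key built earlier
weather_df['flight_datetime'] = pd.to_datetime(weather_df['datetime'], format='%Y-%m-%d %H:%M')
weather_df = weather_df.drop(columns=['datetime']).set_index('flight_datetime')

weather_df.to_csv(os.path.join(DATA_DIR, 'weather_fixed.csv'))
```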
7. The weather data was combined with the main dataset after some pre-processing.
```python=
# Join the hourly weather data onto each flight by its scheduled datetime
combined_df = df.join(weather_df, on='flight_datetime')
combined_df.to_csv(os.path.join(DATA_DIR, 'flight_weather.csv'), index=None)
```
8. The labels of the original dataset were split into two datasets, namely delayed and cancelled, and delay_time was then dropped from the original (claimed) dataset.
```python=
# Delayed dataset: keep only flights that were not cancelled
df_delayed = combined_df.copy()
df_delayed = df_delayed[df_delayed['delay_time'] != 'Cancelled']

# Cancellation dataset: turn delay_time into a binary is_cancelled label
df_cancelled = combined_df.copy()
df_cancelled.loc[df_cancelled.delay_time != "Cancelled", "delay_time"] = 0
df_cancelled.loc[df_cancelled.delay_time == "Cancelled", "delay_time"] = 1
df_cancelled = df_cancelled.rename(columns={"delay_time": "is_cancelled"})

# Claim dataset: drop the delay_time label entirely
df_claimed = combined_df.copy()
df_claimed = df_claimed.drop(labels=['delay_time'], axis=1)
```
9. Outliers were removed from the newly created delayed dataset to prevent undefined behaviour during prediction (a typical filter is sketched below).
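The notebook's exact outlier rule is not quoted; a common IQR-based filter on `delay_time` would look roughly like this, where the 1.5 × IQR factor is an assumption rather than the author's choice.
```python=
# Hypothetical IQR-based outlier filter on the delayed dataset (threshold is assumed)
delay = pd.to_numeric(df_delayed['delay_time'])
q1, q3 = delay.quantile(0.25), delay.quantile(0.75)
iqr = q3 - q1
df_delayed = df_delayed[(delay >= q1 - 1.5 * iqr) & (delay <= q3 + 1.5 * iqr)]
```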
10. The best features were identified. A graph was plotted to check popular airlines and destinations; 10000 flights was kept as the baseline for an airline or destination to count as popular.
```python=
# Airlines and destinations with more than 10000 flights are considered popular
total_flights = 10000
popular_airlines = [k for k, v in dict(df.groupby(['Airline']).size() > total_flights).items() if v]
popular_destination = [k for k, v in dict(df.groupby(['Arrival']).size() > total_flights).items() if v]
```
11. The following plots show the counts of popular airlines and destinations. These were used to conduct fruitful feature engineering.

12. The cancellation date was compared with the average delay and the following was plotted.

13. The importance of the weather data was identified by comparing it with the delay (a sketch of such a comparison follows).
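A plausible sketch of such a comparison on the joined dataset is shown below, assuming a numeric weather column such as `precipitation` exists; the column name is a guess.
```python=
# Hypothetical sketch: compare average delay with and without precipitation.
# 'precipitation' is an assumed weather column; 'Cancelled' rows become NaN via coerce.
numeric_delay = pd.to_numeric(combined_df['delay_time'], errors='coerce')
print('Mean delay (rain):', numeric_delay[combined_df['precipitation'] > 0].mean())
print('Mean delay (dry): ', numeric_delay[combined_df['precipitation'] == 0].mean())
```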

#### Machine Learning Model
1. A unique approach was used to get the best model.
2. Both classification and regression techniques were used. The reason for doing so was:
> - One is to create a regressor to predict `delay_time` based on the features given in the training dataset. The amount of cancellations and delays > 3 hours is very low in the dataset, causing a huge data imbalance. This data imbalance reduces the performance of a classifier, thus the idea of having a regressor.
> - Another objective makes use of a classifier to directly predict whether customers claim the money or not. Despite the regressor's high performance, adding another source of prediction will increase the overall performance. Again, due to data imbalance, the metrics used to assess the performance of the classifier are precision and recall. Using accuracy to evaluate the performance is of little to no use, as it will always result in near-perfect prediction.
3. The following was concluded (a sketch of this combination logic follows the quote):
> The classifier and the regressor work together side by side. The regressor predicts the amount of delay the plane will have. With an MSE of 0.42 and an MAE of 0.25, the regressor is highly capable of determining the overall prediction. To increase the performance, the classifier acts as another determining factor. With a precision of 0.51 and a recall of 0.6, the classifier should perform alright. If both agree (or both disagree) that the customer can claim the money, then the prediction is easily made. However, if the two predict differently, a resolution has to be made. How far off the regressor is from "3 hours" determines the prediction. The range of how "far off" it should be is determined using the MAE. Using the MSE to penalise would be too much, considering the regressor performs really well compared to the classifier.
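A minimal sketch of how that combination logic could be implemented, assuming a 3-hour claim threshold and the regressor's reported MAE as the disagreement band; the function and variable names are illustrative, not the author's.
```python=
# Hypothetical sketch of combining the regressor and classifier (names are illustrative).
CLAIM_THRESHOLD = 3      # hours of delay after which a claim is possible (assumed)
REGRESSOR_MAE = 0.25     # the "far off" band, taken from the figures quoted above

def predict_claim(predicted_delay, classifier_says_claim):
    regressor_says_claim = predicted_delay >= CLAIM_THRESHOLD
    if regressor_says_claim == classifier_says_claim:
        # Both models agree, so the prediction is straightforward
        return regressor_says_claim
    # They disagree: trust the regressor only when its prediction is clearly
    # away from the threshold, using the MAE as the margin
    if abs(predicted_delay - CLAIM_THRESHOLD) > REGRESSOR_MAE:
        return regressor_says_claim
    return classifier_says_claim
```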
4. Basically, there were 4 models:
> **Claim classification** is the most direct approach to solve the problem. Given a set of features, predict whether is_claim is 0 or 800.
> **Delay prediction** predicts the amount of delay given the set of features. This prediction disregards the "cancellation" case, but performs really well for its task.
> **Outlier detection** is very well suited to the data. Outlier detection is normally used to classify highly unbalanced data. However, the model failed to converge and was not added to the overall model.
> **Cancellation classification** was dropped due to its low performance.
5. Finally, all the models were saved as pickle files (sketched below).
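A short sketch of how the trained models might be persisted with the standard pickle module; the model variables, file names, and MODEL_DIR path are assumptions based on the four models listed above, not the project's exact artefacts.
```python=
# Hypothetical sketch of saving the trained models as pickle files.
# The model objects, file names, and MODEL_DIR are assumptions.
import os
import pickle

models = {
    'claim_classifier.pkl': claim_classifier,
    'delay_regressor.pkl': delay_regressor,
    'cancellation_classifier.pkl': cancellation_classifier,
}
for filename, model in models.items():
    with open(os.path.join(MODEL_DIR, filename), 'wb') as f:
        pickle.dump(model, f)
```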
6. A final analysis was done, and it was found that the model performs better with the additional weather dataset. Other conclusions made were:
> - Decision tree outperforms the other classifier in classifying claims
> - Extra trees model performs the worst when weather data is removed; having a negative R2 value means that it is worse compared to just having the mean of the whole data
> - Cancellation classification has a lower f1-score compared to claim classification despite having similar features and almost similar labels
7. The final prediction was made by comparing the outputs of both the classifier and the regressor. Tie-breaking algorithms were included in case of a tie.
8. Out of the 4 algorithms used for tie breaking, algorithms 2 and 4 outperformed algorithms 3 and 1.

9. A way to improve algorithms 1 and 3 was found, namely setting a threshold of 2.5 on the MAE.
### Scripting and Command Line
**Summary**: The analysis was done in Jupyter notebooks. A presentation and README were included for a productive understanding of the project.
1. A README.txt was included. It was divided into the following sections, which made it easier to understand the various aspects of the project:
> Problem Description
> Solution Description
> Setup
> Environment
> Evaluation
> System Architecture
> Trade-offs
> Further Discussion
2. The packages required for prediction, along with their versions, were listed in the Requirements.txt file.
3. Code is readable with uniform spacing and consistent styling.
4. Code and text styling is consistent throughout the notebook.
5. 5 separate notebooks were included, each covering a different phase of the project.
6. The presentation contained 7 brief sections with good styling. It not only covered the various stages of the project, but also included visuals to make it more presentable.
7. Clear explanations were given of what has been done and of possible improvements that could be made to the current approach.