# EV4 Presentation Overview
## Design Retrospective
### Frontend
### Backend
Previously, bulk import was unwieldy because it relied on the GoogleV3 geocoder, which does not support bulk geocoding, so every row required its own call to the external API. During this evolution, database caching of geocoding results was introduced as an optimization to avoid repeat calls.
The `address` field becoming optional also introduced a surprising bug during this evolution, because nothing guarded against geocoding queries that were null or empty strings. The wrapper introduced for the caching change provided a clean place to handle this.
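A minimal sketch of what such a guarded, cached wrapper could look like, assuming geopy's `GoogleV3` geocoder and a hypothetical `GeocodeCache` model (names and fields are illustrative, not the project's actual code):

```python
from geopy.geocoders import GoogleV3

from .models import GeocodeCache  # hypothetical cache model (query, latitude, longitude)

geolocator = GoogleV3(api_key="...")  # placeholder key


def geocode_address(address):
    """Guard against empty queries, then consult the cache before calling the external API."""
    if not address:  # covers None and empty strings now that `address` is optional
        return None
    cached = GeocodeCache.objects.filter(query=address).first()
    if cached:
        return cached.latitude, cached.longitude
    location = geolocator.geocode(address)
    if location is None:
        return None
    GeocodeCache.objects.create(
        query=address, latitude=location.latitude, longitude=location.longitude
    )
    return location.latitude, location.longitude
```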
## Design Evaluation
### Frontend
#### Strengths
#### Weaknesses
### Backend
- Students who can log in as users are handled by maintaining a foreign key to `User`. This also conveniently gives us single-directional cascade deletes.
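A sketch of the shape of that relationship, with illustrative field names (the real model may differ):

```python
from django.contrib.auth.models import User
from django.db import models


class Student(models.Model):
    name = models.CharField(max_length=255)  # illustrative field
    # Not every Student has a login account, hence null=True.
    # Deleting the User cascades to the Student; deleting the Student leaves the User alone.
    user = models.ForeignKey(User, null=True, blank=True, on_delete=models.CASCADE)
```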
#### Strengths
- Synchronizing a Student and their User login account is handled primarily through the create and update methods of a single serializer, which makes it easy to maintain and leaves room for further synchronization if additional fields are added. This is in line with the ethos of **thin views and thick serializers**. Although this may be a matter of preference, `backend/api.py` has become very long, so this balance was welcome.
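A hedged sketch of the pattern, assuming the `Student` model sketched above and a hypothetical synced `email` field (the real serializer's fields and sync rules may differ):

```python
from django.contrib.auth.models import User
from rest_framework import serializers

from .models import Student  # the Student model sketched earlier


class StudentSerializer(serializers.ModelSerializer):
    # Illustrative write-only field used to drive the User sync.
    email = serializers.EmailField(write_only=True, required=False)

    class Meta:
        model = Student
        fields = ["id", "name", "email"]

    def create(self, validated_data):
        email = validated_data.pop("email", None)
        student = Student.objects.create(**validated_data)
        if email:
            # Create the linked login account alongside the Student.
            student.user = User.objects.create_user(username=email, email=email)
            student.save()
        return student

    def update(self, instance, validated_data):
        email = validated_data.pop("email", None)
        if email and instance.user:
            # Keep the login account in sync with the Student record.
            instance.user.username = email
            instance.user.email = email
            instance.user.save()
        return super().update(instance, validated_data)
```

Any new field that needs to stay in sync would be handled in these same two methods, keeping the views thin.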
#### Weaknesses
- User objects can be unwieldy at times because of the additional fields added to each (`managed_schools`, `linked_student`, etc.), which a user in a given group may never use. However, splitting these out into subclasses would likely complicate the serializers.
- The backend bulk import endpoint is unwieldy to update: it is essentially one massive method that does all of the error checking, which makes it a prime candidate for refactoring.
### From Dana
#### Weaknesses
- The backend for managing bus logs got complicated quickly:
  - Lots of columns in the table, with many foreign keys, so changing one thing could potentially break another.
  - Many conditions determine whether or not something is valid:
    - Is the driver on a run? Does the route have a run? Is the bus on a run?
- Timeouts are handled in a lazy way (see the sketch after this list):
  - Every time a bus run is accessed, we check whether it may have timed out.
  - The worry here was: what if I missed a place where the backend interacts with the `BusRun` object? I don't think I did, and I was hyper-vigilant about it, but it could happen.
- RETROSPECTIVE:
  - nginx limits the request payload size (`client_max_body_size`), and we didn't test for it well enough, so large bulk-import files failed on our EV3 system due to file size (we did find the timeout case, though).
  - In this evolution, we increased the max client body size to account for larger files, and added a check for the case where a user still goes beyond it (the FE will tell them to split the file up).
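A minimal sketch of the lazy timeout check described above, with illustrative field names and threshold (not the project's exact code):

```python
from datetime import timedelta

from django.db import models
from django.utils import timezone

RUN_TIMEOUT = timedelta(minutes=30)  # illustrative threshold


class BusRun(models.Model):
    is_active = models.BooleanField(default=True)   # illustrative fields
    last_updated = models.DateTimeField()           # set whenever the run reports in

    def expire_if_timed_out(self):
        """Called lazily wherever a run is accessed, instead of by a background scheduler."""
        if self.is_active and timezone.now() - self.last_updated > RUN_TIMEOUT:
            self.is_active = False
            self.save(update_fields=["is_active"])
```

Every backend path that touches a run would call `expire_if_timed_out()` before using it, which is exactly why missing a call site was the worry noted above.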
#### Strengths
- How we interact with tranzit_traq (sketched after this list):
  - Threading:
    - Means one call doesn't determine the success or failure of another.
    - Also means that the website's usability doesn't change at all.
  - Bad responses:
    - The FE never interacts with tranzit_traq, just a BE table.
    - If tranzit_traq gives a bad response, our BE table won't be updated, and the FE only has one error case to check (bus location not found).
- Timeout handling only touches runs as they're needed, so in the worst case all active runs are checked and in the best case only one run is checked at a time, which means less spinning of wheels in the backend.
- Calculating the ETA was incredibly easy because we had set ourselves up for it well.
- RETROSPECTIVE:
  - It was easy to change how previous objects (`route`, `stop`) work to include new data like `bus`, `driver`, `run`, and `eta`, respectively.
  - yay django!!
  - No need to quarantine this feature, because we fixed things.
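A hedged sketch of the threading pattern described above: the external tranzit_traq call happens off the request path and only updates our own table, which is all the FE ever reads (endpoint, response keys, and the `BusLocation` model are illustrative, not the project's actual code):

```python
import threading

import requests

from .models import BusLocation  # hypothetical table the FE reads from

TRANZIT_TRAQ_URL = "https://example.invalid/tranzit_traq"  # placeholder endpoint


def refresh_bus_location(run_id, bus_id):
    """Poll tranzit_traq and update our own table; the FE never calls the external service."""
    try:
        resp = requests.get(f"{TRANZIT_TRAQ_URL}/buses/{bus_id}", timeout=5)
        resp.raise_for_status()
        data = resp.json()
    except requests.RequestException:
        # Bad response: leave the table untouched, so the FE has exactly one
        # error case to check (bus location not found).
        return
    BusLocation.objects.update_or_create(
        run_id=run_id,
        defaults={"latitude": data["lat"], "longitude": data["lng"]},  # illustrative keys
    )


def kick_off_refresh(run):
    """Fire-and-forget: one call's failure never affects another, and the
    request/response cycle (site usability) is unchanged."""
    threading.Thread(
        target=refresh_bus_location, args=(run.id, run.bus_id), daemon=True
    ).start()
```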
#### Evaluation
- Overall, our code for bus logs and tracking is maintainable and clean enough to work with if future changes need to be made.
- It would have been nice to get an actual thread scheduler working with Django, but we have something that works, and we chose the option that would work over the one that would be cleaner.
- Something broke our migrations, so automated deployment is broken. We didn't fix it because this is the last evolution, but the fix would be to work like real software developers and push our migrations instead of generating them from scratch at deployment.