# Banter - Project Report

This document highlights our project in its final state. It gives a brief overview of all the work completed. We recommend reading this doc **[here](https://hackmd.io/@aneeshmelkot/ryp8vYydo)** rather than as a PDF.
## Contents
[TOC]
## Context
**Banter** is a social network for students. It is a full-stack application that students can use to interact with other students on their campus and trade items with them.
Our application is a fully decomposed, loosely coupled, and highly cohesive microservice architecture where every service is independent, isolated, and individually upgradable, maintainable, and deployable.
### Vision
To build a system that can be retrofitted into a school's existing digital infrastructure. This system/application would enhance the experience of students and also provide a new revenue stream to the school. This application can be provided as a **SaaS**.
### Features
Essentially, our app has these main features:
- **Billboard**:
A newsfeed for students to post content. Content can have short/long descriptions along with an image.
- **Clubs**:
A clubs page where students can create or join clubs and engage in specialized discourse.
- **Events**:
An events page to check out or create upcoming events on campus.
- **Marketplace**:
A shop/marketplace, our USP, that lets students trade items such as furniture, textbooks, or electronics.
- **Order Management**: Dashboards for both seller and buyer to track and manage orders.
- **Admin Dashboard**: A full-fledged admin panel for moderators to manage content on the site.
- **Advertisements**: Ads will be featured on all pages. This can be used as a revenue stream for either the school or the developers.
### Purpose
#### Improve student experience
The app is built to improve the student experience by creating a campus-wide social network that can be moderated. The shop is there to promote exchange and barter within the student community.
#### Revenue stream for school
- **Advertisement Space**: Ads dashboard is provided for school moderators to put up advertisement campaigns. This can boost the school's revenue and cut out other platforms like *Facebook*.
#### Revenue stream for developers
- **Transaction Fees**: A ***2%*** transaction fee is levied on all transactions, up to a maximum of ***$5*** (see the sketch below).
- **Subscription model**: The product will be available 24/7 in the cloud for schools. It will be offered as SaaS with a monthly/yearly subscription model.
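As a quick illustration of the fee rule above, here is a minimal sketch of how the cap works (our own illustrative function; the actual billing logic is not part of this report):

```python
# Illustrative sketch of the 2% transaction fee capped at $5 (not the production billing code).
def transaction_fee(amount_usd: float) -> float:
    """Return the fee for a transaction: 2% of the amount, capped at $5."""
    return round(min(0.02 * amount_usd, 5.00), 2)

# A $40 textbook incurs a $0.80 fee; a $400 couch hits the $5 cap.
assert transaction_fee(40) == 0.80
assert transaction_fee(400) == 5.00
```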
## Specifications
We have used a wide range of open source frameworks and tools to decompose the application into microservices. These are some of the components that we have used.
### Tools and Libraries
| Name | Version | Purpose
| -------- | ------- | --------
| [VS Code](https://code.visualstudio.com/) | *1.72* | Code editor.
| [Git](https://git-scm.com/doc) | *2.38* | Version Control system.
| [GitHub](https://github.com/) | *N/A* | Git projects hosting service.
| [Python](https://docs.python.org/3/) | *3.10.7* | Primary language for our backend.
| [HTML](https://developer.mozilla.org/en-US/docs/Web/HTML) | *5* | Front end markup language.
| [CSS](https://developer.mozilla.org/en-US/docs/Web/CSS) | *3* | Front end style sheet language.
| [JavaScript](https://developer.mozilla.org/en-US/docs/Web/JavaScript) | *ES6* | Primary frontend language.
| [jQuery](https://api.jquery.com/) | *3.6.1* | Front end JS library.
| [React](https://reactjs.org/docs/getting-started.html) | *18.2* | Primary frontend UI framework.
| [SQL MariaDB](https://mariadb.org/documentation/) | *10.11* | Primary Database for all microservices. Flavor of MySQL.
| [Bootstrap](https://getbootstrap.com/docs/5.2) | *5.2* | Front end library for UI styling and components.
| [RabbitMQ](https://www.rabbitmq.com/documentation.html) | *3.11* | Message queue for the microservices event bus.
| [Kafka](https://kafka.apache.org/documentation/) | *3.3* | Message queue for the microservices event bus.
| [Docker](https://docs.docker.com/) | *20.10.14* | Container runtime for our services.
| [Kubernetes](https://kubernetes.io/docs/home/) | *1.23* | Container orchestrator for our microservices.
### Cloud Providers
We have used [Linode](https://www.linode.com/docs/) as our cloud provider.

> *Linode* is a relatively new **`IaaS`** platform that is much cheaper than GCP. [color=#2E8D5D]
### Kubernetes Specs
Provider | CPU | RAM | Storage | Price/Month
--| -- | -- | -- | --
Linode | 1 | 2GB | 50GB | $10

As you can see, the prices are very nominal for a *Linode Kubernetes Cluster* ([`LKE`](https://www.linode.com/products/kubernetes/)), so we went ahead with this provider for our cloud computational needs. The mentioned specs were more than sufficient to build and test our prototype. We ideally wanted to use [Terraform](https://www.terraform.io/docs) for easy provisioning/tracking of the services.
> **`Terraform`** has a provider for `Linode`.
> The Kubernetes container runtime is `Docker`. [color=#3069DE]
### Event Bus

We went with [`Apache Kafka`](https://kafka.apache.org/documentation/) as our message broker because we prioritized throughput over low latency.
### CDN

We used [`Cloudinary`](https://cloudinary.com/documentation) as our CDN service. This is a free CDN service that provides a REST API for our static assets like *Images* and *Videos* and abstracts away the storage and management of assets. Internally, Cloudinary uses S3 buckets for object storage.
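For reference, this is roughly how the CDN service wraps the upload flow. A minimal sketch assuming the official `cloudinary` Python SDK with placeholder credentials; the actual service code differs:

```python
# Minimal sketch of an image/video upload through the Cloudinary Python SDK.
# The credentials below are placeholders; in our services they come from environment variables.
import cloudinary
import cloudinary.uploader

cloudinary.config(
    cloud_name="YOUR_CLOUD_NAME",
    api_key="YOUR_API_KEY",
    api_secret="YOUR_API_SECRET",
)

def upload_asset(file_path: str) -> str:
    """Upload an asset and return the hosted static URL that other services store."""
    result = cloudinary.uploader.upload(file_path)
    return result["secure_url"]
```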
## Architecture
The following section outlines our architectural choices and the different components of our project.
### Entity Relations
This is our ERD outlining the major actors in our application.

### Monolith Database Schema

The above diagram shows the database schema for **Banter**. It features extensive relationships between all the entities.
### App Architecture

### Marketplace Architecture *(Extension of the previous)*

### Services
As you can see above, the implemented architecture is a decomposition of the monolith into 10 different microservices. Each microservice encapsulates a single piece of business logic and is decomposed as such. The overall goal was to convert the application into several services that each have a single responsibility. We did this by adopting an event-driven architecture.
The services are -
- **Web App Service**: This is the front end React application that the users interact with. We have used *Tabler UI* components for UI elements.
- **User Service**: This service is responsible for registering and signing in users and provides an interface for other services within the application to fetch user data. This service features functions such as -
    - Register `User`
    - Sign in user
    - Fetch user details
- **Billboard/Newsfeed Service**: This service is responsible for the news feed in the application and features functions such as -
    - Fetch posts
        - For `All`, `User`, `School`
    - Create/Delete post
- **Events Service**: This service provides endpoints for creating and managing Events on a campus. This has functions such as -
    - Fetch Events
        - For `All`, `User`, `School`
    - Insert/Create Events
- **Clubs Service**: The clubs service is responsible for letting users create/join clubs and interact with other users. This service features endpoints for -
    - Create/delete `Club`
    - Join/Leave club memberships for `User`
    - Fetch clubs
        - For `All`, `School`
    - Fetch club details/info
        - For `Club`
    - Fetch club members
        - For `Club`
- **CDN Service**: This is a service that we have built for other services to work with; it handles static data like `Images` & `Video` content generated by the users. This service uses the [Cloudinary](https://cloudinary.com/) API internally and provides a layer of abstraction for our other internal services. These are the functions it supports -
    - Upload Image/Video
        - *Input*: `Object` (Image/Video)
        - *Response*: Static URL for the hosted object
    - Fetch Resource Object (Image/Video)
:::danger
:wastebasket: We currently do not support deletion of uploaded assets on our CDN provider.
:::
- **Ads Service**: This service aggregates the ads created by business owners. The ads are then requested by other services to be displayed along with their data. We have provided an API for other services to -
    - Fetch Ads and dynamically inject them into other content like `Newsfeed` and `Store`.
        - For `School`
    - Create/Delete ads.
> ***Example***: The `Newsfeed` service requests `Ads` and then injects them into the Newsfeed posts for the users to see. [color=#FD7E14]
- **Product Service**: This service provides an API and all necessary endpoints to operate the `Storefront` on the app. Every school has one storefront/marketplace. This service also internally uses the `CDN` service to store and retrieve product images. Its functions are -
    - Fetch products
        - For `All`, `School`, `User`
    - Create/Delete Products
- **Orders Service**: This service is one of the primary services in the application; it lets students buy and sell products with other students. It is loosely coupled with the `Product` service to create an e-commerce ecosystem. This service has functions such as -
    - Create Order
        - `Order` created with `Product` & `User`.
    - Fetch Order
        - For `All`, `User`
    - Delete Order
- **Message Queue**: This service forms the backbone of the entire application architecture. We have used the `Publish`/`Subscribe` pattern to store and queue messages for our overarching event-driven architecture. We use `Apache Kafka` as the message broker (with `RabbitMQ` as an alternative). All other services subscribe to specific topics and also publish messages for others to consume.
:::info
Every service and/or sub-service listed here emits **message**(s). These messages propagate through the message queue and are broadcast to other services that are **listening** or **subscribed** to the message topic. :incoming_envelope:
:::
## UI Design
This section highlights our design philosophy.
### Color
This is the color palette we have used throughout the app to provide an aesthetic, seamless experience.

### Typography
This is the font that we used in the application. It is called **`Inter`**.

### UI Framework
We used [**`Tabler`**](https://tabler.io/) as our UI framework. This came with reusable UI components that we have used throughout the application.

### Responsiveness
We have used a mobile-friendly design, and the application comfortably conforms to any mobile screen. This was made possible by Flexbox. Below you can see some mobile screenshots of our app.

## Backend Design
As mentioned in the previous documents, our app is composed of various services, all talking to the message queue in an event-driven architecture.
### Project Structure
Here is the project structure of our application. All services are built using Python Flask and follow the structure shown below.
The `infra` directory contains the k8s deployments and services defined for all the microservices in our application.
```
.
└── banter_micro/
    ├── advertisement/
    │   ├── tests/
    │   ├── .dockerignore
    │   ├── Dockerfile
    │   ├── app.py
    │   └── models.py
    ├── billboard/
    │   ├── tests/
    │   ├── .dockerignore
    │   ├── Dockerfile
    │   ├── app.py
    │   └── models.py
    ├── client/
    │   ├── node_modules/
    │   ├── public/
    │   ├── src/
    │   ├── .gitignore
    │   ├── package.json
    │   └── README.md
    ├── clubs/
    │   ├── tests/
    │   ├── .dockerignore
    │   ├── Dockerfile
    │   ├── app.py
    │   └── models.py
    ├── event_bus/
    │   ├── tests/
    │   ├── .dockerignore
    │   ├── Dockerfile
    │   ├── app.py
    │   └── models.py
    ├── events/
    │   ├── tests/
    │   ├── .dockerignore
    │   ├── Dockerfile
    │   ├── app.py
    │   └── models.py
    ├── infra/
    │   └── k8s/
    │       ├── billboard.yaml
    │       ├── advertisements.yaml
    │       ├── clubs.yaml
    │       ├── event_bus.yaml
    │       ├── shop.yaml
    │       ├── user.yaml
    │       └── events.yaml
    ├── shop/
    │   ├── tests/
    │   ├── .dockerignore
    │   ├── Dockerfile
    │   ├── app.py
    │   └── models.py
    └── user/
        ├── tests/
        ├── .dockerignore
        ├── Dockerfile
        ├── app.py
        └── models.py
```
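To give a feel for what each service's `app.py` contains, here is a heavily simplified sketch (illustrative only; the model fields and the `/health` and `/posts` routes are assumptions, not the actual service code) using Flask and Flask-SQLAlchemy and reading the same `DATABASE_PATH` environment variable that the Kubernetes deployments inject:

```python
# Simplified sketch of a service's app.py (illustrative, not the production code).
import os

from flask import Flask, jsonify, request
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
# Fall back to a local SQLite file when DATABASE_PATH is not set (e.g. during unit tests).
app.config["SQLALCHEMY_DATABASE_URI"] = os.environ.get("DATABASE_PATH", "sqlite:///local.db")
db = SQLAlchemy(app)


class Post(db.Model):
    """Example billboard post; the real models live in models.py."""
    id = db.Column(db.Integer, primary_key=True)
    user_id = db.Column(db.Integer, nullable=False)
    description = db.Column(db.String(500))
    image_url = db.Column(db.String(255))


@app.route("/health")
def health():
    return jsonify({"status": "ok"})


@app.route("/posts", methods=["GET", "POST"])
def posts():
    if request.method == "POST":
        post = Post(**request.get_json())
        db.session.add(post)
        db.session.commit()
        return jsonify({"id": post.id}), 201
    return jsonify([{"id": p.id, "description": p.description} for p in Post.query.all()])


if __name__ == "__main__":
    with app.app_context():
        db.create_all()
    app.run(host="0.0.0.0", port=5000)
```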
### Docker
This is the `Dockerfile` used to build each service's container. All services have a slightly modified version of this Dockerfile.
`Dockerfile`
```dockerfile!
FROM python:3.10.7-slim-buster
WORKDIR /app
COPY requirements.txt requirements.txt
RUN pip3 install -r requirements.txt
COPY . .
CMD [ "python3", "-m" , "flask", "run", "--host=0.0.0.0"]
```
For local multi-container testing, we have put in a `docker-compose.yaml` like this for every service, with the respective service names -
`docker-compose.yaml`
```yaml!
name: banter-billboard
services:
  web:
    build: .
    ports:
      - 5050:5000
  db:
    image: mariadb:10.9.2
    restart: always
    ports:
      - 3306:3306
    environment:
      - MARIADB_ROOT_PASSWORD=${MARIADB_ROOT_PASSWORD}
      - MARIADB_DATABASE=${MARIADB_DATABASE}
    volumes:
      - banter-db-data:/var/lib/mysql
  adminer:
    image: adminer
    restart: always
    ports:
      - 8085:8080
volumes:
  banter-db-data:
    driver: local
```
The above compose file will containerize the service along with the DB and a container called Adminer for DB management. Similar compose files are in place for the other services.
### Kubernetes
We are using the following k8s types.
- Deployment
- Service
- PersistentVolume
- Secret
- Ingress
> The following YAMLs that you see are for **1 service** in our application. [color=#4895ef]
This is the `yaml` for the `Deployment` and `Service` of one of our microservices. All other services' yamls look similar, but with the respective service names instead. It is all composed in one yaml file (`<service_name>.yaml`) with `---` as the delimiter.
```yaml!
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-deployment
  labels:
    app: banter-billboard
spec:
  replicas: 3
  selector:
    matchLabels:
      app: banter-billboard
  template:
    metadata:
      labels:
        app: banter-billboard
    spec:
      containers:
        - name: banter-billboard
          image: docker.io/library/banter-billboard
          imagePullPolicy: Never
          ports:
            - containerPort: 5000
          env:
            - name: DATABASE_PATH
              value: "mysql+pymysql://root:admin@banter-billboard-db-svc/banter"
            - name: SECRET_KEY
              value: "secret"
---
apiVersion: v1
kind: Service
metadata:
  name: banter-billboard-svc
spec:
  ports:
    - port: 5000
      protocol: TCP
      targetPort: 5000
  selector:
    app: banter-billboard # must match the Deployment's pod labels
  type: LoadBalancer
```
The following is the yaml file for the service's DB deployment. All services have a similar deployment file for the DB, as every service has a DB attached.
```yaml!
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: banter-billboard-db-deployment
  labels:
    app: banter-billboard-db
spec:
  replicas: 1
  selector:
    matchLabels:
      app: banter-billboard-db
  template:
    metadata:
      labels:
        app: banter-billboard-db
    spec:
      containers:
        - name: maria-db
          image: mariadb
          imagePullPolicy: Always
          env:
            - name: MARIADB_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mariadb-credentials
                  key: MARIADB_ROOT_PASSWORD
            - name: MARIADB_DATABASE
              value: banter
          ports:
            - containerPort: 3306
              name: db-container
          volumeMounts:
            - name: maria-persistent-storage
              mountPath: /var/lib/mysql
        - name: adminer
          image: adminer
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
      volumes:
        - name: maria-persistent-storage
          persistentVolumeClaim:
            claimName: maria-pv-claim
---
apiVersion: v1
kind: Service
metadata:
  name: banter-billboard-db-svc
  labels:
    app: banter-billboard-db
spec:
  ports:
    - port: 3306
      protocol: TCP
      name: maria-db
  selector:
    app: banter-billboard-db # must match the DB Deployment's pod labels
  type: NodePort
```
All deployments are ephemeral. Hence, all databases need a `PersistentVolume` to retain their data. Here we have defined a persistent volume and its claim in K8s -
```yaml!
apiVersion: v1
kind: PersistentVolume
metadata:
  name: maria-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: maria-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
```
Lastly, all secrets in k8s are stored in a `Secret` as follows. The value itself is Base64 encoded. This is the Secret file for one of the services' DB root password.
```yaml!
apiVersion: v1
kind: Secret
metadata:
  name: mariadb-credentials
type: Opaque
data:
  MARIADB_ROOT_PASSWORD: YWRtaW4=
```
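For clarity, the encoded value is just the plaintext run through standard Base64 (encoding, not encryption). It can be produced and verified in Python like this:

```python
# Base64-encode a value for use in a Kubernetes Secret manifest.
import base64

encoded = base64.b64encode(b"admin").decode()
print(encoded)                                # -> YWRtaW4=
assert base64.b64decode(encoded) == b"admin"  # decoding recovers the original value
```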
### Microservice
Essentially, every microservice has the following design. We have paired every service with its own highly available database, and every service stores its data only in its coupled DB. Every service is typically 2-3 containers, structured as follows -
- **1** Service
- **1** Database (`Maria DB` or `Mongo DB`)
- **1** Database Manager (`Adminer` or `Mongo Express`)
> *Database Manager* was optional, and we have typically attached other containers as well for monitoring and logging on a need basis. (**`Sidecar Pattern`**) [color=#FD7E14]
:::warning
:pushpin: **Services fetch/receive data from other services only through network requests or by subscribing to messages from the message queue.**
:::
:::danger
:warning: **A service never reads from or queries another service's database directly.**
:::

### Kubernetes Pod
Every microservice was packaged as a Pod that looks like the image shown below. Services and their datastores are in separate containers; together these containers form a `Pod`.

### Message Queue / Event Bus
The most important part of this entire architecture is the message queue.
This is implemented using `Apache Kafka`. It forms the backbone of the entire app and facilitates the event-driven architecture. Every service in the application emits events onto the message queue when different events are triggered. Services listen to all the events and consume the events relevant to them.

> ***Example*** - When an *order* is created in the `Order` service, it will emit a message onto the queue. This message will have a type/topic of `order_created`. Another service, such as the `Product` service, will be subscribed to this topic; it will consume the `order_created` message, get the details from it, and update its relevant internal schema accordingly. [color=#FD7E14]
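As an illustration of the publish side, here is a minimal sketch of how a service could emit such an event. It assumes the `kafka-python` client and a broker reachable at `kafka-svc:9092` (both assumptions; the client library and addresses in our actual deployment may differ):

```python
# Hypothetical sketch of publishing an order_created event to Kafka via kafka-python.
import json

from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="kafka-svc:9092",                        # assumed broker address
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),  # our messages are JSON objects
)

def publish_order_created(order: dict) -> None:
    """Emit an order_created message for other services (e.g. Product) to consume."""
    producer.send("order_created", {"topic": "order_created", "payload": order})
    producer.flush()
```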
### Message Formats
The message format plays a crucial role in our application. A message in our system is fundamentally a `JSON` object with the following properties -
- **Message type/topic**: This contains an identifier for the type/topic of event the message corresponds to.
- **Payload**: The data contained within the message.
    - User details if the message is of type `user_created`.
    - Order details if the message is of type `order_created`, etc.

Similarly, different messages are defined for the different events that can be emitted by the services.
#### User Created
```javascript!
{
    "topic": "user_created",
    "payload": {
        "user_id": "XXXX",
        "name": "Aneesh",
        "school_id": 1235,
        "phone": "12334566",
        "major": "CS",
        "dp": "http://dp.uri.com/XXXX"
    }
}
```
#### User Logged In
```javascript!
{
    "topic": "user_logged_in",
    "payload": {
        "user_id": "XXXX",
        "name": "Aneesh",
        "school_id": 1235,
        "phone": "12334566",
        "major": "CS",
        "dp": "http://dp.uri.com/XXXX"
    }
}
```
#### School Created
```javascript!
{
    "topic": "school_created",
    "payload": {
        "school_id": "XXXX",
        "name": "UTA",
        "desc": "Awesome",
        "state": "TX",
        "city": "Arlington",
        "country": "USA"
    }
}
```
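To show the consuming side, here is a matching consumer sketch, again assuming the `kafka-python` client and the same assumed broker address; the handler bodies are placeholders for each service's real logic:

```python
# Hypothetical sketch of a service consuming messages from the event bus via kafka-python.
import json

from kafka import KafkaConsumer

def handle_user_created(payload: dict) -> None:
    # Placeholder: e.g. the Billboard service caches the new user's name and display picture.
    print("caching user", payload["user_id"])

def handle_order_created(payload: dict) -> None:
    # Placeholder: e.g. the Product service marks the ordered product as sold.
    print("updating product for order", payload)

HANDLERS = {
    "user_created": handle_user_created,
    "order_created": handle_order_created,
}

consumer = KafkaConsumer(
    *HANDLERS.keys(),                                   # subscribe only to the relevant topics
    bootstrap_servers="kafka-svc:9092",                 # assumed broker address
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

for message in consumer:
    body = message.value
    HANDLERS[body["topic"]](body["payload"])            # dispatch on the message topic
```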
## Management
We have used [**`Agile Kanban`**](https://www.atlassian.com/agile/kanban) to manage the project.
We set up a task board as follows -
Backlog | TODO | In Progress | Completed | Blocked | Bugs | Ready
-|-|-|-|-|-|-|

A requirement/feature started out in the backlog and progressed through the different stages shown above. The final stage is `Ready`, where the task is fully tested and ready to be deployed and used.
Tasks were tagged with priority **`1-5`**, **`1`** being the highest and **`5`** the lowest. Tasks were also tagged with the developer's name.
We also had an internal priority list for the services and implemented them in the descending order of priority.
We typically synced up twice a week to check on project progress. We also used `Git` to track all branches and all the team's commits.
## Testing
### Unit Testing
- Tests have been executed on each and every container to check the functionality of the code.
- We have used `PyTest` to run the predefined test cases per functionality and put in place a simple `CI pipeline` to run tests on code push.

In its current state, the services run on our machines in a local Kubernetes cluster using Minikube. The completed services work in isolation, and some rudimentary unit tests have been written to ensure quality.
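As an example, a minimal PyTest case for such a service might look like the following sketch. It assumes the simplified `app.py` shown in the Project Structure section, with its illustrative `/health` and `/posts` routes; the real test suites cover more endpoints:

```python
# tests/test_app.py - minimal PyTest sketch using Flask's built-in test client.
# Assumes the simplified app.py sketch shown earlier (with /health and /posts routes).
import pytest

from app import app, db

@pytest.fixture
def client():
    app.config["TESTING"] = True
    with app.app_context():
        db.create_all()          # build the schema against the SQLite fallback database
    with app.test_client() as test_client:
        yield test_client

def test_health(client):
    response = client.get("/health")
    assert response.status_code == 200
    assert response.get_json() == {"status": "ok"}

def test_create_post(client):
    response = client.post("/posts", json={"user_id": 1, "description": "hello"})
    assert response.status_code == 201
```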
### Integration Tests
We have also put in some rudimentary integration tests; we didn't have sufficient time in our busy schedules for more thorough coverage.
## Deployment
As mentioned in the specifications, we have used [`Linode`](https://www.linode.com/) as our cloud provider. We first provisioned several compute instances for bootstrapping our project and for testing purposes.
Next, we provisioned a single-node `Kubernetes` cluster ($10/month) and deployed our entire stack of services on this cluster.
Finally, we configured a LoadBalancer and an Ingress service and pointed our domain at it.
You can find the app running on - [**https://banter.dev.aneeshmelkot.com/**](https://banter.dev.aneeshmelkot.com/)
## Screenshots
Here are some screenshots of the application.
#### Login/Register


#### Home Page

#### Billboard/ Newsfeed Page

#### Events Page

#### Profile Page

#### Clubs Pages
##### Clubs Home

##### Club Detail

#### Marketplace Pages
##### Shop Home Page

##### Cart Page

##### Purchases Page

##### My Products Page

#### Admin Pages
##### School Admin

##### Super Admin

## Demo
You can find a video of us showing the app's features here on YouTube - [https://youtu.be/j5_TiLGjRBU](https://youtu.be/j5_TiLGjRBU)
## Evaluation
These are some of our key evaluation criteria.
### Goals
We were able to achieve all our goals set forth at the start of the semester -
- Use cloud technologies to solve a problem.
- Build an application using microservices.
- Leverage containerization, Kubernetes, and DevOps best practices.
- Adopt Test Driven Development and ensure good test coverage of all the main features.
- Build a pleasant and good user experience throughout on the app.
### Performance
- The team was focused and transitioned through multiple stages of learning.
- The team had experience developing monoliths and SOA applications and could build on top of this.
- The app performs reliably, and we have performed beta testing by onboarding many of our friends.
### Competition
- Currently, our competitors are
    - Shopify
    - Facebook Marketplace
    - Instagram Marketplace
- Our aim of retrofitting our app into schools would cut out the middleman and enable schools and developers to improve revenue.
### Challenges
- We faced a **Time Crunch**, as we were busy looking for full-time opportunities since we are graduating this semester.
- **Developeralysis**: Initially it was challenging to figure out what tech stack to use and what providers or frameworks to utilize.
- **Expertise**: Each team member specialized in one part of the stack, and it was challenging to close the gap between our combined expertise.
- **Remote Sync**: It was challenging to sync up remotely to brainstorm and work on the application.
- Since the team members had varying **Schedules**, it was challenging to find a common time to catch up and collaborate.
- Since **Microservices** was a fairly new concept for the team, it was challenging to ramp up quickly and implement the application.
### Retrospect
In retrospect, we were successful in delivering a complete, well-polished solution that is ready to be shipped to end users. Even though we faced several challenges and time was short, we were able to efficiently develop and deliver the proposed application, and we await feedback.
## Conclusion
This project was primarily done to improve the student experience, as mentioned at the start. However, along the way this project led us to explore a wide variety of cloud services and technologies and enabled us to leverage this powerful tech to build a distributed solution. We have delivered everything that we initially proposed, using the technology stack that we initially proposed.
## Author
Aneesh Melkot |
:-----:|
