# SEE 22.06.2022 Notes - Server Performance
## Overview / Current context
Our server application is both our `API gateway` (for some logic) and our biggest `workhorse`.
We have split out some services to reduce the load on the main server container (`notifications`, `wallet-manager`), but at this stage there are minimal possibilities for horizontally scaling subsystems of the server. Essentially, we are taking little advantage of the `gateway` possibilities of GraphQL - a language specifically designed for `querying` + `retrieving` data and not tied to any backend.
[N+1 problem](https://medium.com/doctolib/understanding-and-fixing-n-1-query-30623109fe89) ***The N+1 query problem happens when your code executes N additional query statements to fetch the same data that could have been retrieved when executing the primary query.***
That problem is common to our approach of querying data. Let's say we have the following query:
```graphql
query Users {
  users(limit: 50, shuffle: false) {
    displayName
    profile {
      tagsets {
        name
        tags
      }
      references {
        description
      }
      location {
        country
        city
      }
    }
  }
}
```
What this actually does to our server is the following:
1. Hits the `users` query.
2. Authorizes the request (all requests go through a guard).
3. Makes a query to the database retrieving the users' properties.
4. Hits the user `profile` `field resolver`, once per user.
5. Authorizes the request.
6. Makes a query to the database to get the profile properties.
7. Hits the profile `tagsets`, `references`, `location` field resolvers, once per profile.
8. Authorizes the request for each field resolver.
9. Makes a query to the database to get the properties of each field resolver.
So, for 50 users, we have 1 query to get the users, 50 queries to get the profiles, 50 queries to get the locations, 50 × (number of references) queries, and 50 × (number of tagsets) queries. On top of that, we make multiple DB hits per guard that we hit (multiply that by the sum of all of the above). Realistically, we end up with 1000+ queries to the database for a single GraphQL request.
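
To make the per-field cost concrete, here is a minimal sketch of the naive field-resolver pattern described above. All names (`findProfileByUserId`, the `Profile` shape) are hypothetical stand-ins, not our actual resolver code; the point is only that the resolver is invoked once per parent object.

```typescript
// Minimal sketch with hypothetical names -- not our actual resolver code.
// A naive field resolver hits the database once per parent object:
// for users(limit: 50), the profile resolver runs 50 times,
// producing 50 separate SELECTs on top of the 1 query that fetched the users.

interface User {
  id: string;
  displayName: string;
}

interface Profile {
  id: string;
  userId: string;
}

// Stand-in for a repository / service method backed by the database,
// e.g. SELECT * FROM profile WHERE userId = :userId
async function findProfileByUserId(userId: string): Promise<Profile> {
  return { id: `profile-${userId}`, userId };
}

export const userFieldResolvers = {
  User: {
    // Invoked once per User in the parent result set -> N extra queries.
    profile: (parent: User): Promise<Profile> => findProfileByUserId(parent.id),
  },
};
```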

The way we handle requests in GraphQL, Apollo and Express is called the `naive` approach in the literature: we are not batching requests, we are not re-using metadata, we hit the database on each authorization guard, and then we go through the Ory Passport strategy and create a new AgentInfo each time.
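
To illustrate what reusing AgentInfo across guards could look like (this is the `caching` option listed under Possible solutions below), here is a minimal, hypothetical sketch: `buildAgentInfo`, the `AgentInfo` shape, the cache key and the TTL are all assumptions, not our actual AuthenticationService or Ory integration.

```typescript
// Minimal sketch with hypothetical names -- illustrates reusing AgentInfo
// instead of rebuilding it (with multiple DB queries) in every guard.

interface AgentInfo {
  email: string;
  credentials: string[];
}

// Stand-in for the expensive path: session lookup + user/credential queries.
async function buildAgentInfo(authorizationHeader: string): Promise<AgentInfo> {
  return { email: 'user@example.org', credentials: [] };
}

// Simple in-memory cache with a TTL (value chosen arbitrarily here); a real
// implementation would likely use a shared cache and invalidate on credential changes.
const agentInfoCache = new Map<string, { value: AgentInfo; expiresAt: number }>();
const TTL_MS = 30_000;

export async function getAgentInfo(authorizationHeader: string): Promise<AgentInfo> {
  const cached = agentInfoCache.get(authorizationHeader);
  if (cached && cached.expiresAt > Date.now()) {
    return cached.value; // guards 2..N reuse the result instead of hitting the DB
  }
  const value = await buildAgentInfo(authorizationHeader);
  agentInfoCache.set(authorizationHeader, { value, expiresAt: Date.now() + TTL_MS });
  return value;
}
```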
Not covered / prioritized / discussed until now:
- client performance - out of scope
- wallet-manager - out of scope
- notifications - out of scope
## Goals of the meeting
- highlight the gaps in our approach and potential solutions to those problems
- provide samples of how some of the gaps can be filled
- agree on next steps and approach to minimizing server load and decreasing server `query` times
## Possible solutions
- dataloader - solves the N+1 problem by batching the requests made in field resolvers (see the sketch after this list). Does **NOT** solve the problem of the multiple user queries we make to build the AgentInfo in the AuthenticationService
- caching - caching the AgentInfo retrieval on each guard will save us a TON of time (see the sketch above). We are making multiple calls to retrieve the same data within the same context, which makes no sense, slows the application down, consumes unnecessary resources and costs extra money.
- optimizing our queries so they don't overfetch data from the db
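
As referenced in the dataloader bullet, here is a minimal sketch of batching the profile lookups from the example query with the `dataloader` npm package. The `findProfilesByUserIds` function and the `Profile` shape are hypothetical; the real loaders would be created per request so data does not leak between authorization scopes.

```typescript
import DataLoader from 'dataloader';

interface Profile {
  id: string;
  userId: string;
}

// Stand-in for a single batched query,
// e.g. SELECT * FROM profile WHERE userId IN (:...userIds)
async function findProfilesByUserIds(userIds: readonly string[]): Promise<Profile[]> {
  return userIds.map(userId => ({ id: `profile-${userId}`, userId }));
}

// Create one loader per request.
export function createProfileLoader(): DataLoader<string, Profile | undefined> {
  return new DataLoader<string, Profile | undefined>(async userIds => {
    const profiles = await findProfilesByUserIds(userIds);
    const byUserId = new Map<string, Profile>();
    for (const profile of profiles) {
      byUserId.set(profile.userId, profile);
    }
    // DataLoader requires the results in the same order as the requested keys.
    return userIds.map(id => byUserId.get(id));
  });
}

// In the User.profile field resolver, instead of querying per user:
//   return context.loaders.profile.load(parent.id);
// All load() calls made while resolving the same query are coalesced
// into a single findProfilesByUserIds call.
```

With this in place, the 50 per-user profile lookups from the example query collapse into one batched `IN` query; the same pattern applies to `tagsets`, `references` and `location`.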
## Actions
- Release AgentInfo caching + User / Org Profile field resolver dataloaders
- Refactor the dataloaders approach --> code cleanup & refactoring
- Prioritize next set of dataloaders to be added
- Fit-for-purpose queries on the server - no overfetching
- Next set of cache(s) to be identified
## Agreements
- Adding a new field resolver --> fetch the data with dataloader
###### tags: `SEE`