# Fauna -> Postgres
## Agenda
### 👍 & 👎
#### supabase
- 👍 Comes with [PostgREST] (see the client sketch after this list).
- 🤔 Question: does this have the same limitations on list-access and require a document-only approach like firebase? (the-simian)
- A: No, that's just marketing. It's postgres + conveniences, like auth, storage and a management console
- 👎 Single instance (or so it seems)
- 👍 Well loved by many devs, thus far
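
A minimal sketch of what talking to supabase's bundled PostgREST interface looks like from JS, assuming a hypothetical `content` table and the usual project URL / anon key from the dashboard:

```ts
// Hypothetical table + credentials; supabase-js wraps the PostgREST endpoint.
import { createClient } from '@supabase/supabase-js'

const supabase = createClient(
  'https://example-project.supabase.co', // placeholder project URL
  process.env.SUPABASE_ANON_KEY!         // anon key from the dashboard
)

// Roughly equivalent to GET /rest/v1/content?select=cid,inserted_at&limit=10
const { data, error } = await supabase
  .from('content')                       // hypothetical table name
  .select('cid, inserted_at')
  .limit(10)

if (error) throw error
console.log(data)
```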
#### heroku
- 👎 Does not come with an HTTP interface.
- 👍 [Offers plans with standby replicas](https://devcenter.heroku.com/articles/heroku-postgres-ha)
- 👍 [Super easy read-only replicas](https://devcenter.heroku.com/articles/heroku-postgres-follower-databases)
- 👍 [Forking a database](https://devcenter.heroku.com/articles/heroku-postgres-fork)
- 👍 [Dataclips](https://devcenter.heroku.com/articles/dataclips)
- 👍 @the-simian has an overall positive experience
  - backup and restore with heroku works well
- 👍 "Battle-tested", has been around longer.
#### digital ocean
_TL;DR similar offering to heroku, plus we could store user uploads on DO too._
- 👎 Does not come with an HTTP interface.
- 👍 [Offers plans with standby replicas (HA)](https://docs.digitalocean.com/products/databases/postgresql/)
- 👍 [Easy read-only replicas](https://docs.digitalocean.com/products/databases/postgresql/how-to/add-read-only-nodes/)
- 👍 [Forking a database](https://docs.digitalocean.com/products/databases/postgresql/how-to/fork-clusters/)
- ➕ DO also does "Spaces", Ceph-backed S3-compatible storage (we are looking at using S3 for storing user uploads): https://docs.digitalocean.com/products/spaces/ (see the sketch below)
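
Since Spaces is S3-compatible, uploads would go through the regular AWS SDK pointed at a DO endpoint. A minimal sketch; bucket name, object key and region endpoint are placeholders:

```ts
// DO Spaces speaks the S3 API, so the AWS SDK works with a custom endpoint.
import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3'

const spaces = new S3Client({
  region: 'us-east-1',                             // required by the SDK, effectively ignored by Spaces
  endpoint: 'https://nyc3.digitaloceanspaces.com', // placeholder region endpoint
  credentials: {
    accessKeyId: process.env.SPACES_KEY!,
    secretAccessKey: process.env.SPACES_SECRET!,
  },
})

await spaces.send(
  new PutObjectCommand({
    Bucket: 'user-uploads',       // placeholder Space name
    Key: 'uploads/example.car',   // placeholder object key
    Body: Buffer.from('hello'),
  })
)
```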
#### Amazon RDS
_We should at least consider it, but riba mentioned he has witnessed reliability issues._
- ➕ Amazon also has this "S3" product that we need for backing up user uploads until they hit a Filecoin deal.
### Things to consider
- Trigram already wired everything with GraphQL (Would require more work to rewrite for REST)
- Fit for CI automation
- Do we need more than one instance, and when?
- Will we want more access control?
- It will require middleware
- A: We already have middleware in nft.storage and we don't need complex auth.
### What do we need to know to make an informed decision?
- How much do we care about [PostgREST][]?
- How much do we care about a GraphQL interface?
  - Can we use stored procedures from the GraphQL interface?
- How do we manage different instances from PostgREST?
  - Is the REST interface load balanced?
  - Where will PostgREST send read / write requests?
  - The addon does not seem maintained.
  - Is subzero cloud worth considering?
- Does hasura load balance between multiple instances?
- How would we add another component if we needed something not provided by supabase?
  - Heroku seems convenient for certain tools at the data-persistence/access tier of the stack (Redis, Elasticsearch (ELK)). Supabase, like firebase, has different productization-centric conveniences like auth, storage, (eventually) lambda.
- How do you do feature development?
  - Forking the db seems like a great option?
  - How would you do that with supabase?
  - How would it work on CI?
- What are our budget constraints?
### What are the Action Items?
- [ ] @hugomrdias - Find out how viable it is to access Postgres from CF Workers without the REST API.
- [x] @gozala - Find out if hasura can load balance between multiple instances
- Short version is YES
> Hasura Cloud lets you scale your applications automatically without having to think about the number of instances, cores, memory, thresholds etc. You can keep increasing your number of concurrent users and the number of API calls and Hasura Cloud will figure out the optimizations auto-magically. Hasura Cloud can load balance queries and subscriptions across read replicas while sending all mutations and metadata API calls to the master. Learn more about Horizontal scaling with Hasura
For more details see https://hasura.io/docs/latest/graphql/core/guides/faq/index.html#how-does-hasura-scale-vertically-and-horizontally and https://hasura.io/learn/graphql/hasura-advanced/performance/2-horizontal-scaling/
  - 👍 They also seem to translate GraphQL queries / mutations into SQL instead of having resolvers etc. (see the GraphQL sketch after these action items)
  - 👍 They provide a [toolchain to do migrations and CI stuff](https://hasura.io/docs/latest/graphql/core/migrations/index.html#migrations)
  - 👍 They also allow using a [REST API instead of GraphQL](https://hasura.io/docs/latest/graphql/core/guides/faq/index.html#can-i-use-rest-instead-of-graphql-apis)
- [ ] @hugomrdias & @mikeal - Talk to supabase about their scaling strategy
- [ ] @the-simian - Port niftysave schema to Postgres
- Coordinate with @hugomrdias on shared tables
- [ ] @the-simian, @gozala - migrate data from fauna to postgres (might be useful: https://www.graphile.org/; see the migration sketch after these action items)
- [ ] @the-simian, @gozala - update niftysave CRON jobs.
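
Following up on the hasura findings above, a minimal sketch of a query against the auto-generated GraphQL API it exposes; the endpoint, secret and `content` table/columns are placeholders (the root field only exists if a table of that name is tracked):

```ts
// hasura exposes one root field per tracked table and compiles this to a single SQL query.
const res = await fetch('https://example.hasura.app/v1/graphql', { // placeholder endpoint
  method: 'POST',
  headers: {
    'content-type': 'application/json',
    'x-hasura-admin-secret': process.env.HASURA_SECRET!,           // placeholder auth
  },
  body: JSON.stringify({
    query: /* GraphQL */ `
      query RecentContent($limit: Int!) {
        content(limit: $limit, order_by: { inserted_at: desc }) {  # hypothetical table/columns
          cid
          inserted_at
        }
      }
    `,
    variables: { limit: 10 },
  }),
})

const { data, errors } = await res.json()
console.log(errors ?? data)
```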
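For the fauna -> postgres migration item, one possible shape of the copy job, assuming a hypothetical `Token` collection and `token` table (pagination via fauna's cursor, inserts via `pg`):

```ts
// Page through a fauna collection and copy rows into postgres.
// Collection, table and column names are hypothetical placeholders.
import faunadb, { query as q } from 'faunadb'
import pg from 'pg'

const fauna = new faunadb.Client({ secret: process.env.FAUNA_KEY! })
const db = new pg.Client({ connectionString: process.env.DATABASE_URL })
await db.connect()

let cursor: unknown
do {
  // Fetch one page of documents from the (hypothetical) Token collection.
  const page: any = await fauna.query(
    q.Map(
      q.Paginate(q.Documents(q.Collection('Token')), {
        size: 1000,
        ...(cursor ? { after: cursor } : {}),
      }),
      q.Lambda('ref', q.Get(q.Var('ref')))
    )
  )

  for (const doc of page.data) {
    await db.query(
      'INSERT INTO token (id, data) VALUES ($1, $2) ON CONFLICT (id) DO NOTHING',
      [doc.ref.id, doc.data] // doc.data lands in a jsonb column here
    )
  }
  cursor = page.after
} while (cursor)

await db.end()
```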
## Notes
- web3.storage schema ported to psql https://github.com/andrew/web3-schema/
- and hosted on heroku + postgREST.
- dagcargo psql schema https://raw.githubusercontent.com/nftstorage/dagcargo/master/maint/pg_schema_diagram.svg
- nft.storage schema - https://github.com/ipfs-shipyard/nft.storage/pull/263
## PostgreSQL schemas - Sep 1st
- [niftysave](https://github.com/ipfs-shipyard/nft.storage/pull/358)
- [nft.storage]() TODO: @hugomrdias
- [web3.storage]() TODO: @alanshaw
#### Sharing tables
Discussion about whether we should share tables across niftysave, nft.storage and web3.storage.
➡️ Reached the conclusion that it might have benefits long term, but any unification conversation should happen after the initial migration.
#### CID v0 / v1
Discussion around whether we should normalize content CIDs in V0 to V1 in the `content` table.
- @ribasushi - already does that normalization but preserves the source cid as well, to be able to do translation.
- @gozala - niftysave also normalizes to CIDv1 and preserves original source in the `Resource` of origin.
- This way searching by CID could find all matches regardless of input & upload CID versions.
The conclusion was to normalize to CIDv1 and store it in binary representation, as Postgres can do some optimizations.
web3.storage can punt on this until the initial migration is complete. If nft.storage does share table(s) with niftysave it would need to do it as part of the migration.
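
A minimal sketch of that normalization with the `multiformats` library (the literal CID is just a well-known example value; column choices are ours to make):

```ts
// Normalize a CIDv0 ("Qm…") string to CIDv1 and get bytes for a binary (bytea) column.
import { CID } from 'multiformats/cid'

const source = CID.parse('QmYwAPJzv5CZsnA625s3Xf2nemtYgPpHdWEz79ojWnPbdG') // example CIDv0

const v1 = source.toV1()    // same multihash, re-encoded as CIDv1
console.log(v1.toString())  // canonical base32 text form, handy for lookups
console.log(v1.bytes)       // Uint8Array -> store as bytea, per the conclusion above
```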
#### Singularize table names
- It is a best practice
- ORM tooling often adds `s` and you might end up with `contentss`
#### Should niftysave / nft.storage share tables?
- 👍 Less coordination required between @hugomrdias and @the-simian
- 👎 Extra work is needed to make the API that checks CID status work (sketched after this list)
- Needs to hit multiple endpoints
- Need to merge results
- 👎 Pin states need to be updated in multiple places
- 👎 GraphQL may not be able to link to uploads associated with specific content.
- E.g. hasura translates GraphQL to SQL query which will not work.
- Custom resolver might work though
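
If the tables stay separate, the status check would look roughly like this; both URLs are purely hypothetical placeholders, just to illustrate the "hit multiple endpoints and merge results" cost:

```ts
// Hypothetical endpoints; illustrates the extra fan-out + merge a split schema would force.
async function checkStatus(cid: string) {
  const [nft, nifty] = await Promise.all([
    fetch(`https://nft-storage.example.com/check/${cid}`).then(r => (r.ok ? r.json() : null)),
    fetch(`https://niftysave.example.com/status/${cid}`).then(r => (r.ok ? r.json() : null)),
  ])
  // Merge: keep whichever service knows about the CID; callers combine pin info if both do.
  return { cid, nft, nifty, known: Boolean(nft || nifty) }
}
```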
### Action Items
- [Find out if we can query jsonb from graphql](https://github.com/ipfs-shipyard/nft.storage/issues/370)
- [Figure out if we need pin locations](https://github.com/ipfs-shipyard/nft.storage/issues/371)
- [Singularize nft.storage table names](https://github.com/ipfs-shipyard/nft.storage/issues/372)
- [Singularize web3.storage table names](https://github.com/web3-storage/web3.storage/issues/436)
- [Verify that "content" table schema aligns between nft.storage & niftysave](https://github.com/ipfs-shipyard/nft.storage/issues/373)
- [Rename "accounts" table to "user"](https://github.com/ipfs-shipyard/nft.storage/issues/374)
- [Rename "auth_tokens" -> "auth_key"](https://github.com/web3-storage/web3.storage/issues/437)
- [Rename "account_keys" -> "auth_key"](https://github.com/ipfs-shipyard/nft.storage/issues/375)
- [Normalize CIDs to V1 binary representation in nft.storage](https://github.com/ipfs-shipyard/nft.storage/issues/376)
- [Normalize CIDs to V1 binary representation in web3.storage](https://github.com/web3-storage/web3.storage/issues/438)
- [Define schema for nft.storage](https://github.com/ipfs-shipyard/nft.storage/issues/377)
- [Define schema for web3.storage](https://github.com/web3-storage/web3.storage/issues/439)
- [Define schema for niftysave](https://github.com/ipfs-shipyard/nft.storage/issues/378)
- [Setup schema management system](https://github.com/ipfs-shipyard/nft.storage/issues/379)
- [Evaluate hasura which may impact how we do schema migrations](https://github.com/ipfs-shipyard/nft.storage/issues/379)
[PostgREST]:https://postgrest.org/en/v8.0/