# DEV Docs
# Shenanigan Overview
Shenanigan is an open-source application dedicated to bringing athletes onto the blockchain.
## Shenanigan Stack
### Backend Stack
- JavaScript
- TypeScript
- Node.js
- Mongoose
- Koa
- GraphQL
- Jest
### App Stack
- React Native
- Relay Modern
- Jest
- Storybook
- Fastlane for deploys
### Frontend Stack
* JavaScript
* TypeScript
* React
* Styled Components + Styled System + Rebass
* Tests: Jest + @testing-library/react + relay-test-utils
### CI/CD
- CircleCI
- Fastlane
### Smart Contract Stack
- Solidity
- ERC-2535 Diamonds Standard
- ERC-1155 NFTs
- The Graph
- Hardhat for deploys
- Chai for testing
## Shenanigan Repositories
Let's review the main Shenanigan repositories.
### shenanigan-monorepo
Shenanigan's monorepo contains the entire Shenanigan app stack.
**package structure:**
- app: React Native mobile frontend
- server: Node.js/Koa server that handles GraphQL traffic and reads from and writes to the DB
- hardhat: smart contract code plus deployment and testing logic
- contracts: a minimal structure for managing smart contract ABIs
- util: business-logic helpers provided to the other packages
### ShenaniganDapp.github.io
Shenanigan's website, built with React and Styled Components.
### bot
A Discord bot written in Node.js that supplies our Discord server with custom services.
**structure**
- starboard: Sets the minimum number of :star: reactions a post must receive to stay in a channel
- handlers:
- `addaddress`: adds Verified users to our address book so they can receive Cred
- `token`: returns relevant information about the PRTCLE token
- `score`: returns Cred scores for users (broken)
### scoreboard
Shenanigan's SourceCred instance. Contains the address book and the logic for weekly distributions, such as liquidity-reward calculations and user activation.
## Running Commands
Commands to run our repos. Please ensure you install dependencies with `yarn install` before running these.
shenanigan-monorepo:
- App
`yarn app:start`
`yarn ios` or `yarn android`
- Server
`yarn server:build`
`yarn server:start`
- Hardhat
`yarn deploy`
ShenaniganDapp.github.io:
`yarn start`
bot:
`yarn dev`
scoreboard:
`yarn start`
To remember:
- For every change to Relay queries, fragments, etc. on the frontend, remember to run:
`yarn relay`
- For every change to GraphQL queries, mutations, etc. on the backend, remember to run:
`yarn update-schema`
# Setup
## How to run Shenanigan /app
### App
Shenanigan's app is a React Native frontend that relies on a connection to the Shenanigan server, whether local or through AWS, and to the MongoDB instance.
**Step by Step**:
In the root of the shenanigan-monorepo
1. install dependencies
`yarn install`
2. Copy the `.env.sample` into a new `.env`
`cp .env.sample .env`
3. Enter your own personal `INFURA_ID`
4. Run `yarn app:start`
5. Run `yarn ios` or `yarn android`
*Note*: If `.env` uses localhost, you will have to run the server locally as well.
## How to run Shenanigan /server
### Server
Shenanigan's server is a Node.js/Koa implementation that uses a MongoDB Atlas instance as its database. We ask that you please respect the staging database.
From the shenanigan-monorepo root:
1. install dependencies
`yarn install`
2. copy the `.env.sample` into a new `.env`
`cp .env.sample .env`
3. Run
`yarn server:build`
`yarn server:serve`
4. Open http://localhost/graphiql to access the GraphQL playground
## How to run Shenanigan /hardhat
### Hardhat
Shenanigan's hardhat package hosts the Shenanigan smart contracts. Hardhat is a compilation and testing environment for our Solidity code.
From the shenanigan-monorepo root:
1. install dependencies
`yarn install`
2. Run
`yarn deploy`
## How to run /bot
### Bot
Shenanigan's bot package hosts the Discord bot code. Before you can test this bot, you will want to register your own application and bot in the [Discord Developer Portal](https://discord.com/developers/applications).
You will need two OAuth Tokens:
- A GitHub OAuth token with the ability to read/write gists
- A Discord bot token with the permissions integer `511040`
In the bot repository:
1. install dependencies
`yarn install`
2. Create a `.env` file
```
DISCORD_API_TOKEN= {Get Bot API Token}
GITHUB_API_TOKEN= {Get Github API Token}
GIST_ID= {Enter Gist Id}
GITHUB_ADDRESS_FILE_PATH=ShenaniganDApp/scoreboard/contents/data/addressbook.json
WHITELISTED_CHANNELS=*
WHITELISTED_ROLES=*
REACTION_LIMIT=5
REFUND=false
```
3. Create a dev server on Discord and invite the bot
4. Run
`yarn dev`
*Note*: This code is in rapid development. Some channel IDs and gist URLs are hardcoded. If you are having issues testing commands, you may need to hardcode new values for your dev server.
## How to run /scoreboard
### Scoreboard
Shenanigan's scoreboard package hosts the SourceCred instance for Shenanigan.
You will need to create two OAuth Tokens:
- Github
- Discord
Instructions for doing that can be found in the [SourceCred template instance repo](https://github.com/sourcecred/template-instance#supported-plugins)
In the scoreboard repository:
1. install dependencies
`yarn install`
2. Create a `.env` file
```
SOURCECRED_DISCORD_TOKEN= {Get Discord Token}
SOURCECRED_GITHUB_TOKEN= {Get Github Token}
```
3. Run
`yarn start`
Tips:
- Running `yarn start` takes a long time, especially if your internet is slow!
- After loading the graph for the first time, you can run `yarn sourcecred serve` to load just the website
- The graph has a cache, so future runs are much faster. Make sure your `.env` is set up correctly so the command doesn't fail; when the `yarn start` command fails, the cache is reset
# Git
## Code Philosophy
### Semantically atomic commits
> for each desired change, make the change easy (warning: this may be
> hard), then make the easy change
>
> [—Kent Beck][kbeck-tweet]
[kbeck-tweet]: https://twitter.com/KentBeck/status/250733358307500032
Please factor your work into semantically atomic commits. Each commit
should represent a single semantic change, and the code included in the
commit should be the minimal amount of code required to implement, test,
and document that change.
For instance, perhaps you want to change the behavior of a component,
and along the way you find that it is useful to refactor a helper
function. In that case, you can create two commits: one to effect the
refactoring, and one to implement the change that has been made easy by
the refactoring.
This doesn’t mean that you have to physically write the code in this
order! The Git commit graph is malleable: you can write the code all at
once and commit it piecewise with `git add -p`; you can split and join
commits with interactive rebases; etc. This is made even easier inside VSCode with the git GUI integrations. Just highlight the lines to commit and press `⌘K ⌘⌥S`. What matters is the final
sequence of commits, not how you got there.
At the end of the day, you may find that you have a somewhat long
sequence of somewhat short changes. This is great. The goal is for a
reviewer to be able to say, “yep, this commit is obviously correct” as
many times in a row as are necessary for a full feature to be developed.
<details>
<summary>Why create small commits?</summary>
Writing small commits can help improve the design of your code. It is
common to realize an elegant way to split apart some functionality out
of a desire to split a commit into smaller, more localized pieces.
It is easier to review a commit that does one thing than a commit that
does many things. Not only will changes to the code be more localized,
but it will be easier for the reviewer to keep the whole context in
their mind.
Investigating and fixing bugs is much easier when commits are small.
There are more commits to look through, but an 8-fold increase in the
number of commits only entails 3 additional steps of bisection, which is
not a big deal. On the other hand, once the offending commit is
identified, the cause is more apparent if the commit is tiny than if it
is large.
</details>
## When writing commit messages
### Summary of changes
Include a brief yet descriptive **summary** as the first line of the
message. The summary should be at most 50 characters, should be written
in the imperative mood, and should not include trailing punctuation. The
summary should either be in sentence case (i.e., the first letter of the
first word capitalized), or of the form “area: change description”. For
instance, all of the following are examples of good summaries:
- Improve error messages when GitHub query fails
- Make deploy script wait for valid response
- Upgrade Typescript to 4.4.2
- new-webpack: replace old scripts in `package.json`
- fetchGithubRepo: remove vestigial data field
If you find that you can’t concisely explain your change in 50
characters, move non-essential information into the body of the commit
message. If it’s still difficult, you may be trying to change too much
at once!
<details>
<summary>Why include a summary?</summary>
The 50-character summary is critical because this is what Git
expects. Git often assumes that the first line of a commit contains a
concise description, and so workflows like interactive rebases surface
this information. The particular style of the summary is chosen to be
consistent with those commits emitted by Git itself: commands like
`git-revert` and `git-merge` are of this form, so it’s a good standard
to pick.
</details>
### Description
After the initial line, include a **description** of the change. Why is
the change important? Did you consider and reject alternate formulations
of the same idea? Are there relevant issues or discussions elsewhere? If
any of these questions provides valuable information, answer it.
Otherwise, feel free to leave it out—some changes really are
self-documenting, and there’s no need to add a vacuous description.
<details>
<summary>Why include a description?</summary>
A commit describes a _change_ from one state of the codebase to the
next. If your patch is good, the final state of the code will be clear
to anyone reading it. But this isn’t always sufficient to explain why
the change was necessary. Documenting the motivation, alternate
formulations, etc. is helpful both in the present (for reviewers) and in
the future (for people using `git-blame` to try to understand how a
piece of code came to be).
</details>
### Test plan
After the description, include a **test plan**. Describe what someone
should do to verify that your changes are correct. This can include
automated tests, manual tests, or tests of the form “verify that when
you change the code in this way, you see this effect.” Feel free to
include shell commands and expected outputs if helpful.
Sometimes, the test plan may appear trivial. It may be the case that you
only ran the standard unit tests, or that you didn’t feel that any
testing at all was necessary. In these cases, you should still include
the test plan: this signals to observers that the trivial steps are
indeed sufficient.
<details>
<summary>Why include a test plan?</summary>
The value of a test plan is many-fold. Simply writing the test plan can
force you to consider cases that you hadn’t before, in turn helping you
discover bugs or think of alternate implementations. Even if the test
plan is as simple as “standard unit tests suffice”, this indicates to
observers that no additional testing is required. The test plan is
useful for reviewers, and for anyone bisecting through the history or
trying to learn more about the development or intention of a commit.
</details>
### Wrapping
Wrap all parts of the commit message so that no line has more than **72
characters**.
<details>
<summary>Why wrap at 72 characters?</summary>
This leaves room for four spaces of padding on either side while still
fitting in an 80-character terminal. Programs like `git-log` expect that
this amount of padding exists.
(Yes, people really still use 80-character terminals. When each of your
terminals has bounded width, you can display more of them on a screen!)
</details>
# Smart Contract Best Practices
This page demonstrates a number of patterns which should generally be followed when writing smart contracts.
## Protocol specific recommendations
The following recommendations apply to the development of any contract system on Ethereum.
### External Calls
#### Use caution when making external calls
Calls to untrusted contracts can introduce several unexpected risks or errors. External calls may execute malicious code in that contract _or_ any other contract that it depends upon. As such, every external call should be treated as a potential security risk. When it is not possible, or undesirable to remove external calls, use the recommendations in the rest of this section to minimize the danger.
--------
#### Mark untrusted contracts
When interacting with external contracts, name your variables, methods, and contract interfaces in a way that makes it clear that interacting with them is potentially unsafe. This applies to your own functions that call external contracts.
```sol
// bad
Bank.withdraw(100); // Unclear whether trusted or untrusted
function makeWithdrawal(uint amount) { // Isn't clear that this function is potentially unsafe
    Bank.withdraw(amount);
}
// good
UntrustedBank.withdraw(100); // untrusted external call
TrustedBank.withdraw(100); // external but trusted bank contract maintained by XYZ Corp
function makeUntrustedWithdrawal(uint amount) {
    UntrustedBank.withdraw(amount);
}
```
--------
#### Avoid state changes after external calls
Whether using *raw calls* (of the form `someAddress.call()`) or *contract calls* (of the form `ExternalContract.someMethod()`), assume that malicious code might execute. Even if `ExternalContract` is not malicious, malicious code can be executed by any contracts *it* calls.
One particular danger is malicious code may hijack the control flow, leading to vulnerabilities due to reentrancy. (See [Reentrancy](./known_attacks#reentrancy) for a fuller discussion of this problem).
If you are making a call to an untrusted external contract, *avoid state changes after the call*. This pattern is also sometimes known as the [checks-effects-interactions pattern](http://solidity.readthedocs.io/en/develop/security-considerations.html?highlight=check%20effects#use-the-checks-effects-interactions-pattern).
See [SWC-107](https://swcregistry.io/docs/SWC-107)
--------
#### Don't use `transfer()` or `send()`.
`.transfer()` and `.send()` forward exactly 2,300 gas to the recipient. The goal of this hardcoded gas stipend was to prevent [reentrancy vulnerabilities](./known_attacks#reentrancy), but this only makes sense under the assumption that gas costs are constant. Recently [EIP 1884](https://eips.ethereum.org/EIPS/eip-1884) was included in the Istanbul hard fork. One of the changes included in EIP 1884 is an increase to the gas cost of the `SLOAD` operation, causing a contract's fallback function to cost more than 2300 gas.
It's recommended to stop using `.transfer()` and `.send()` and instead use `.call()`.
```sol
// bad
contract Vulnerable {
    function withdraw(uint256 amount) external {
        // This forwards 2300 gas, which may not be enough if the recipient
        // is a contract and gas costs change.
        msg.sender.transfer(amount);
    }
}

// good
contract Fixed {
    function withdraw(uint256 amount) external {
        // This forwards all available gas. Be sure to check the return value!
        (bool success, ) = msg.sender.call.value(amount)("");
        require(success, "Transfer failed.");
    }
}
```
Note that `.call()` does nothing to mitigate reentrancy attacks, so other precautions must be taken. To prevent reentrancy attacks, it is recommended that you use the [checks-effects-interactions pattern](https://solidity.readthedocs.io/en/develop/security-considerations.html?highlight=check%20effects#use-the-checks-effects-interactions-pattern).
--------
#### Handle errors in external calls
Solidity offers low-level call methods that work on raw addresses: `address.call()`, `address.callcode()`, `address.delegatecall()`, and `address.send()`. These low-level methods never throw an exception, but will return `false` if the call encounters an exception. On the other hand, *contract calls* (e.g., `ExternalContract.doSomething()`) will automatically propagate a throw (for example, `ExternalContract.doSomething()` will also `throw` if `doSomething()` throws).
If you choose to use the low-level call methods, make sure to handle the possibility that the call will fail, by checking the return value.
```sol
// bad
someAddress.send(55);
someAddress.call.value(55)(""); // this is doubly dangerous, as it will forward all remaining gas and doesn't check for result
someAddress.call.value(100)(bytes4(sha3("deposit()"))); // if deposit throws an exception, the raw call() will only return false and transaction will NOT be reverted
// good
(bool success, ) = someAddress.call.value(55)("");
if(!success) {
    // handle failure code
}
ExternalContract(someAddress).deposit.value(100)();
```
See [SWC-104](https://swcregistry.io/docs/SWC-104)
--------
#### Favor *pull* over *push* for external calls
External calls can fail accidentally or deliberately. To minimize the damage caused by such failures, it is often better to isolate each external call into its own transaction that can be initiated by the recipient of the call. This is especially relevant for payments, where it is better to let users withdraw funds rather than push funds to them automatically. (This also reduces the chance of [problems with the gas limit](./known_attacks#dos-with-block-gas-limit).) Avoid combining multiple ether transfers in a single transaction.
```sol
// bad
contract auction {
    address highestBidder;
    uint highestBid;

    function bid() payable {
        require(msg.value >= highestBid);

        if (highestBidder != address(0)) {
            (bool success, ) = highestBidder.call.value(highestBid)("");
            require(success); // if this call consistently fails, no one else can bid
        }

        highestBidder = msg.sender;
        highestBid = msg.value;
    }
}

// good
contract auction {
    address highestBidder;
    uint highestBid;
    mapping(address => uint) refunds;

    function bid() payable external {
        require(msg.value >= highestBid);

        if (highestBidder != address(0)) {
            refunds[highestBidder] += highestBid; // record the refund that this user can claim
        }

        highestBidder = msg.sender;
        highestBid = msg.value;
    }

    function withdrawRefund() external {
        uint refund = refunds[msg.sender];
        refunds[msg.sender] = 0;
        (bool success, ) = msg.sender.call.value(refund)("");
        require(success);
    }
}
```
See [SWC-128](https://swcregistry.io/docs/SWC-128)
--------
#### Don't delegatecall to untrusted code
The `delegatecall` function is used to call functions from other contracts as if they belong to the caller contract. Thus the callee may change the state of the calling address. This may be insecure. An example below shows how using `delegatecall` can lead to the destruction of the contract and loss of its balance.
```sol
contract Destructor
{
    function doWork() external
    {
        selfdestruct(0);
    }
}

contract Worker
{
    function doWork(address _internalWorker) public
    {
        // unsafe
        _internalWorker.delegatecall(bytes4(keccak256("doWork()")));
    }
}
```
If `Worker.doWork()` is called with the address of the deployed `Destructor` contract as an argument, the `Worker` contract will self-destruct. Delegate execution only to trusted contracts, and **never to a user supplied address**.
> **Warning**: Don't assume contracts are created with zero balance.
> An attacker can send ether to the address of a contract before it is created. Contracts should not assume that their initial state contains a zero balance. See [issue 61](https://github.com/ConsenSys/smart-contract-best-practices/issues/61) for more details.
See [SWC-112](https://swcregistry.io/docs/SWC-112)
------------
### Remember that Ether can be forcibly sent to an account
Beware of coding an invariant that strictly checks the balance of a contract.
An attacker can forcibly send ether to any account and this cannot be prevented (not even with a fallback function that does a `revert()`).
The attacker can do this by creating a contract, funding it with 1 wei, and invoking
`selfdestruct(victimAddress)`. No code is invoked in `victimAddress`, so it
cannot be prevented. This is also true for the block reward, which is sent to the address of the miner, which can be any arbitrary address.
Also, since contract addresses can be precomputed, ether can be sent to an address before the contract is deployed.
See [SWC-132](https://swcregistry.io/docs/SWC-132)
--------
### Remember that on-chain data is public
Many applications require submitted data to be private up until some point in time in order to work. Games (eg. on-chain rock-paper-scissors) and auction mechanisms (eg. sealed-bid [Vickrey auctions](https://en.wikipedia.org/wiki/Vickrey_auction)) are two major categories of examples. If you are building an application where privacy is an issue, make sure you avoid requiring users to publish information too early. The best strategy is to use [commitment schemes](https://en.wikipedia.org/wiki/Commitment_scheme) with separate phases: first commit using the hash of the values and in a later phase revealing the values.
Examples:
* In rock paper scissors, require both players to submit a hash of their intended move first, then require both players to submit their move; if the submitted move does not match the hash, throw it out.
* In an auction, require players to submit a hash of their bid value in an initial phase (along with a deposit greater than their bid value), and then submit their auction bid value in the second phase.
* When developing an application that depends on a random number generator, the order should always be *(1)* players submit moves, *(2)* random number generated, *(3)* players paid out. The method by which random numbers are generated is itself an area of active research; current best-in-class solutions include Bitcoin block headers (verified through http://btcrelay.org), hash-commit-reveal schemes (i.e., one party generates a number, publishes its hash to "commit" to the value, and then reveals the value later) and [RANDAO](http://github.com/randao/randao). As Ethereum is a deterministic protocol, no variable within the protocol can be used as an unpredictable random number. Also be aware that miners are to some extent in control of the `block.blockhash()` value<sup><a href='https://ethereum.stackexchange.com/questions/419/when-can-blockhash-be-safely-used-for-a-random-number-when-would-it-be-unsafe'>\*</a></sup>. A commit-reveal sketch follows this list.
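For illustration, here is a minimal client-side commitment sketch in TypeScript using ethers.js. The `(move, salt)` encoding is an assumption; a real contract must recompute exactly the same hash during the reveal phase:
```typescript
import { ethers } from "ethers";

// commit phase: publish only the hash of (move, salt)
function makeCommitment(move: string, salt: string): string {
  return ethers.utils.solidityKeccak256(["string", "bytes32"], [move, salt]);
}

// keep the salt secret until the reveal phase
const salt = ethers.utils.hexlify(ethers.utils.randomBytes(32));
const commitment = makeCommitment("rock", salt);
// 1) submit `commitment` on-chain; 2) later reveal ("rock", salt) so the
//    contract can verify the keccak256 hash matches the earlier commitment
```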
--------
### Beware of the possibility that some participants may "drop offline" and not return
Do not make refund or claim processes dependent on a specific party performing a particular action with no other way of getting the funds out.
# Git (continued)
### Example
Here is an example of a helpful commit message. [The commit in
question][example-commit] doesn’t change very many lines of code, but
the commit message explains the context behind the commit, links to
relevant issues, thanks people who contributed to the commit, and
describes a manual test plan. Someone reading this commit for the first
time will have a much better understanding of the change by reading this
commit message:
@ToDo: add example commit
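For now, a hypothetical message in this format (reusing a sample summary from above; the details are illustrative, not from a real commit) might read:
```
Improve error messages when GitHub query fails

Previously a failed query only surfaced the raw HTTP status, which
made it hard to tell an expired token from a missing repository.
Include the GraphQL error message in the output instead.

Test plan:
Run the query with an invalid token and verify that the printed
error names the failing query and suggests checking the token.
```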
## When submitting a pull request
Please create pull requests against `master` by default.
If your pull request includes multiple commits, please include a
high-level summary and test plan in the pull request body. Otherwise,
the text of your pull request can simply be the body of the unique
commit message.
Please be aware that, as a rule, we do not create merge commits. If you
have a stack of commits, we can either rebase the stack onto master
(preserving the history) or squash the stack into a single commit.
### Stacking PRs
A common problem appears when your current work relies on a past pull request that is still under review. To work with those changes and avoid merge commits, check out your new branch off the previous branch that includes the changes. Make your new changes on the checked-out branch like normal. After submitting the PR, point the base branch to the previous PR branch that you checked out from earlier.
In your PR's description, make note of the PR's reliance on the previously checked out branch by writing
`relies on #372`
The end result should look like a stack of PRs.
```mermaid
graph LR
id0[PR #1 - original change]
id1[PR #2 - PR #1 changes]
id2[PR #3 - PR #1 and PR #2 changes]
id3[PR #4 - PR #1, PR #2, and PR #3 changes]
id0 --> id1 --> id2 --> id3
```
### Reviewing and Merging Stacked PRs
When reviewing stacked PRs, start at the beginning and work your way up to the newest changes. Do not merge right away; instead, finish your review and move on to the next one. Once the newest changes have been reviewed and approved, start to merge them in reverse order (newest changes first).
Merge them in reverse order:
```mermaid
graph LR
id0[PR #1 - all changes]
id1[PR #2 - #2, #3, #4 changes]
id2[PR #3 - #3 and #4 changes]
id3[PR #4 - new changes]
id3 --> id2 --> id1 --> id0 --> master
```
## Basics
### Checking out from master
Git uses branches to enable teams of developers to work on a multitude of features simultaneously. Make sure you have the most recent changes from `master` when starting a new branch. I recommend checking out the `HEAD` commit of master:
`git checkout origin/master`
### Checkout a new branch
From the `origin/master` branch, check out a new branch:
`git checkout -b your-new-branch`
### Stage and Commit your changes
Select which of the changes should be added:
**Add all changes**
`git add .`
**Add files or folders**
```
git add <fileA>
git add <folderA>
```
**Add specific lines piecewise**
`git add -p`
**Preferred method**
Piecewise add from within VSCode GUI.
Just highlight the lines to commit and press
`⌘K ⌘⌥S`
### Commit your staged changes
`git commit -m "A 50 char explanation of the change"`
### Push your changes to github
`git push origin <branchName>`
## Best Practices
### Avoiding merge commits
A clean commit history takes some work to keep up. To keep a clean history and avoid merge commits, make sure you always rebase with master before pushing changes.
`git pull origin master --rebase`
If you are working off of changes from a pre-existing branch, pull from that branch as well:
`git pull origin <branch-name> --rebase`
### Branch naming
Try to stick to the format `[kind-of-change]/[what-is-being-changed]` whenever possible.
Example: `fix/user-name-during-login`, `improvement/login-screen`, `feature/charge-customer-screen`.
### Useful Git Commands
**Remove all merged branches from local repo**
`git branch --merged | grep -v "\*" | grep -v master | xargs -n 1 git branch -d`
**Or if you want to just remove the references to remote when they are deleted**
`git config --global fetch.prune true`
**Remove all tags that are not on remote**
`git tag -l | xargs git tag -d`
`git fetch --tags`
**Autocorrect git**
`git config --global help.autocorrect 1`
**Remove multiple files after deleting them from disk**
`git ls-files --deleted -z | xargs -0 git rm`
**Git aliases**
```
[alias]
lg1 = log --graph --abbrev-commit --decorate --date=relative --format=format:'%C(bold blue)%h%C(reset) - %C(bold green)(%ar)%C(reset) %C(white)%s%C(reset) %C(dim white)- %an%C(reset)%C(bold yellow)%d%C(reset)' --all
lg2 = log --graph --abbrev-commit --decorate --format=format:'%C(bold blue)%h%C(reset) - %C(bold cyan)%aD%C(reset) %C(bold green)(%ar)%C(reset)%C(bold yellow)%d%C(reset)%n'' %C(white)%s%C(reset) %C(dim white)- %an%C(reset)' --all
lg = log --color --graph --pretty=format:'%Cred%h%Creset -%C(yellow)%d%Creset %s %Cgreen(%cr) %C(bold blue)<%an>%Creset' --abbrev-commit
co = checkout
ec = config --global -e
cob = checkout -b
cm = !git add -A && git commit -m
st = status
```
To add these aliases to your git command line, copy and paste them into `~/.gitconfig`.
## How to Handle Conflicts
### How to avoid simple conflicts using Git Stash
If you try to pull some code from master to your branch, and you receive an error message, try to stash your changes. Follow this sequence of commands:
`git stash` (this will put your changes on the stash stack)
`git pull origin master` (this will bring changes from master)
`git stash pop` (this will bring your changes back to your working dir)
### Cherry-picking
Sometimes, merge conflicts will arise from a rebase. Normally, just solve the merge conflicts and push. If the history has gotten too messy to fix, check out a new branch from the parent branch, and cherry-pick the commits individually onto the new branch
`git log --oneline`
in a new terminal window:
`git checkout -b <new-branch>`
`git cherry-pick <commit hash>`
# Guides
## Code Review
### Code Review
Reviewing code is as important to a developer as writing the code itself. Many times, you will learn more reading code than you will writing it. For this reason, reviewing the code of all members on all codebases is encouraged. Ask questions, write comments, and ask for changes.
Each review will award Cred to the reviewer, so review, review, review!
**Helpful channels to stay up to date on PRs**
- turn on all notifications for the `#github` Discord channel to make sure you see all GitHub activity related to Shenanigan
- discuss pull requests on the PR itself, but also in the Discord `#dev` channel
## Embrace Copy and Paste
### Copy and Paste
Embrace copy-and-paste coding:
- Existing code is more likely to work well
- Existing code follows the same patterns as the rest of our codebase
- Copy, paste, and modify
# GraphQL
## GraphQL Spread fields
### GraphQL Spread fields
@shenanigan/server brings a lot of helper tools to define GraphQL with less boilerplate.
One of the concepts it brings is predefined `fields`,
like error and success fields:
```javascript
export const errorField = {
  error: {
    type: GraphQLString,
    resolve: ({ error }) => error,
  },
};

export const successField = {
  success: {
    type: GraphQLString,
    resolve: ({ success }) => success,
  },
};
```
All mutations should return an error and a success field, so to keep them uniform we spread these fields when defining a mutation:
```javascript
type Args = {};

const mutation = mutationWithClientMutationId({
  name: 'Awesome',
  inputFields: {},
  mutateAndGetPayload: async (args: Args, context: LoggedGraphQLContext) => {
    const company = await Company.findOne({
      _id: context.company._id,
    });
    if (!company) {
      return {
        error: context.t('Company not found'),
      };
    }
    return {
      error: null,
      success: context.t('Awesome ran'),
    };
  },
  outputFields: {
    ...errorField,
    ...successField,
  },
});
```
This makes our code more uniform. We use the same concept for Mongoose fields and also for GraphQL fields.
## GraphQL Node Interface
### Node Interface
Most GraphQL types, when using Relay, implement the `NodeInterface`.
This is required to use what Relay calls `global ids`: an ID made by concatenating the type name with the record's internal id (which in our case is usually a MongoDB `ObjectId`), then base64-encoding the result.
Every type that implements that interface has to have an `id` field, which resolves to the global id mentioned above.
See https://facebook.github.io/relay/graphql/objectidentification.htm
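For illustration, the `toGlobalId`/`fromGlobalId` helpers from `graphql-relay` make the encoding concrete:
```typescript
import { toGlobalId, fromGlobalId } from "graphql-relay";

// "User" concatenated with the internal ObjectId, then base64-encoded:
const globalId = toGlobalId("User", "5939ba5df7ad9c0010723763");

// decoding recovers the type name and the internal id:
const { type, id } = fromGlobalId(globalId);
// type === "User", id === "5939ba5df7ad9c0010723763"
```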
### Why is that important?
This allows Relay to cache all records in a single Store, since they should not, in theory, collide with each other, even if their types are different.
This also allows us to query for any id and return its data on GraphQL, by using the `node(id: ID!)` field.
Example:
```javascript
query NodeExample {
node(id: "Vmlld2VyOjU5MzliYTVkZjdhZDljMDAxMDcyMzc2Mw") {
id
... on User {
name
}
}
}
```
Since the return type of the `node` field is the `NodeInterface` itself, we need to use a fragment to specialize to the type we want.
### Setup on Server
For this to work, we must define the mapping between each type and its respective internal object, and also tell GraphQL how to load the type from a given global id.
This can be done by adding it to the NodeInterface resolver, which looks like this:
```typescript
import { GraphQLObjectType } from "graphql";
import { fromGlobalId, nodeDefinitions } from "graphql-relay";
import { GraphQLContext } from "../../graphql/types";

type Load = (context: GraphQLContext, id: string) => any;

type TypeLoaders = {
  [key: string]: {
    type: GraphQLObjectType;
    load: Load;
  };
};

const getTypeRegister = () => {
  const typesLoaders: TypeLoaders = {};
  const getTypesLoaders = () => typesLoaders;
  const registerTypeLoader = (type: GraphQLObjectType, load: Load) => {
    typesLoaders[type.name] = {
      type,
      load
    };
    return type;
  };
  const { nodeField, nodesField, nodeInterface } = nodeDefinitions(
    (globalId, context: GraphQLContext) => {
      const { type, id } = fromGlobalId(globalId);
      const { load } = typesLoaders[type] || { load: null };
      return (load && load(context, id)) || null;
    },
    obj => {
      const { type } = typesLoaders[obj.constructor.name] || { type: null };
      return type;
    }
  );
  return {
    registerTypeLoader,
    getTypesLoaders,
    nodeField,
    nodesField,
    nodeInterface
  };
};

const {
  registerTypeLoader,
  nodeInterface,
  nodeField,
  nodesField
} = getTypeRegister();

export { registerTypeLoader, nodeInterface, nodeField, nodesField };
```
See: https://github.com/ShenaniganDApp/shenanigan-monorepo/blob/master/packages/server/src/graphql/modules/node/typeRegister.ts
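As an illustration, registering a type with this helper might look like the sketch below. The `User` type and `UserModel` are hypothetical; `registerTypeLoader` and `nodeInterface` come from the module above, and `globalIdField` is a real `graphql-relay` export:
```typescript
import { GraphQLObjectType, GraphQLString } from "graphql";
import { globalIdField } from "graphql-relay";
import { registerTypeLoader, nodeInterface } from "./typeRegister";
import { UserModel } from "../user/UserModel"; // hypothetical Mongoose model

const UserType = new GraphQLObjectType({
  name: "User",
  interfaces: () => [nodeInterface],
  fields: () => ({
    // resolves to the base64("User:" + internal id) global id
    id: globalIdField("User"),
    username: { type: GraphQLString },
  }),
});

// teach the node resolver how to load a User from its internal id
registerTypeLoader(UserType, (context, id) => UserModel.findById(id));
```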
## GraphQL Mutations - Best Practices
### General Guidelines
Small mutations, with small scopes, always.
Do not create mutations that edit all fields in a gigantic model.
### Unauthenticated users
99% of the cases, we don't want unauthenticated users to execute mutations, that is why we should always start mutations as
```typescript
const { user } = context;
if (!user) {
  throw new Error('Unauthenticated user');
}
```
### References to other models
When referencing ids of other documents, always validate that those ids are valid first. For example, if you have the model User and the model Group, and you are creating a mutation to add a user to a group:
```javascript
const userResult = await UserModel.findOne({ _id: fromGlobalId(user).id });
if (!userResult) {
  return {
    error: 'User not found',
  };
}
// same for group
```
### Errors
Try to always return a nullable string `error` field, and return errors on it:
```javascript
if (validationCondition) {
  return {
    error: 'Meaningful error message',
  };
}
```
## GraphQL Filtering
### How it works currently
Currently, filtering on some types (most of the time `connections`) is done by having multiple arguments for each criterion we want to filter the dataset on.
This has some drawbacks:
- If the `type` has many fields, it makes the argument list bigger, which can cause some cluttering.
- If you need to specify the same filters somewhere else, you need to duplicate them.
- You need to manually filter each argument in the loader.
- It is not easily possible to `OR` two criteria together.
**Practical example:**
Let's say we have a `User` type, with some fields:
```javascript
type User {
  id: ID!
  parent: User
  name: String
  status: String
  role: String
}
```
And the following `users` field (in this case a Relay connection), which would allow us to search for users with a specific `parent` and `status`:
```javascript
//...
users(
  after: String
  first: Int
  before: String
  last: Int
  parent: ID
  status: String
)
//...
```
Querying it:
```javascript
//...
users(first: 1000, parent: "123456789", status: "ACTIVE") {
//...
}
//...
```
To also allow searching for users that have any of the given statuses, we would be required to change the type of `status` to a list:
```
status: [String],
```
Since GraphQL has [input coercion](http://facebook.github.io/graphql/#sec-Lists) for lists, single scalar results would still work as expected, so no changes would be required besides this one.
But what if we want to filter users not having some statuses? We would be required to include another argument:
```
statusNotIn: [String],
```
Now, if for some reason we are required to search for users having a specific status `OR` having a specific parent, we would be required to break this into multiple connections, which would create issues if you want to display all the data in a single place, since you would have to join both results together later.
### Using input objects
One way to easily handle that is to start using input objects to filter on fields defined in the specific type. Using the `User` above as an example, it would look like this:
```javascript
input UserFilter {
  AND: [UserFilter]
  OR: [UserFilter]
  status: String
  status_in: [String]
  parent: ID
  parent_in: [String]
}
// later...
users(
  after: String
  first: Int
  before: String
  last: Int
  filter: UserFilter
)
// ...
```
Querying:
```javascript
// variables
{
  "filter": {
    "OR": [
      { "parent": "123456789" },
      { "status_in": ["INACTIVE", "DISABLED"] }
    ]
  }
}
//...
users(first: 1000, filter: $filter) {
//...
}
//...
```
The format of each field in the `TypeFilter` is basically `fieldName_${operator}`, where each `operator` translates to a specific MongoDB operator by prepending it with a `$`. A bare `fieldName` means an `$eq` comparison.
For example
```javascript
{
  "parent": "random-id",
  "status_in": ["INACTIVE", "DISABLED"]
}
```
would translate to
```javascript
{
  "parent": "random-id",
  "status": {
    "$in": ["INACTIVE", "DISABLED"]
  }
}
```
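A minimal sketch of that translation in TypeScript; the function name and the omission of nested `AND`/`OR` handling are assumptions, not the actual server code:
```typescript
type Filter = Record<string, unknown>;

function toMongoQuery(filter: Filter): Filter {
  const query: Filter = {};
  for (const [key, value] of Object.entries(filter)) {
    const i = key.lastIndexOf("_");
    if (i === -1) {
      // a bare field name means an $eq comparison
      query[key] = value;
    } else {
      // "status_in" -> { status: { $in: ... } }
      const field = key.slice(0, i);
      const operator = `$${key.slice(i + 1)}`;
      query[field] = { ...(query[field] as Filter), [operator]: value };
    }
  }
  return query;
}

// toMongoQuery({ parent: "random-id", status_in: ["INACTIVE", "DISABLED"] })
// => { parent: "random-id", status: { $in: ["INACTIVE", "DISABLED"] } }
```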
References
http://facebook.github.io/graphql/#sec-Input-Objects
https://www.graph.cool/docs/tutorials/designing-powerful-apis-with-graphql-query-parameters-aing7uech3/
## GraphQL Field Naming
### GraphQL Fields Naming Convention
### General Guidelines
Types should be named using upper camel case, while fields should use lower camel case. Fields should be separated by newlines only; commas should not be used. Example:
```javascript
type User {
  firstName: String
  lastName: String
}
```
In case there are acronyms in the field/type name, always capitalize only the first letter of the acronym:
```
type User {
  # ...
  mainCep: String
}
```
### Booleans
Boolean fields should always be prefixed with is, has, or should, picking the one that best fits the chosen name:
```javascript
type User {
  # ...
  isActive: Boolean
  hasPendingMessages: Boolean
  shouldUseDefaultSettings: Boolean
}
```
# React, Relay, and Component Architecture
## React Best Practices
Shenanigan utilizes components, hooks, and contexts to the full extent to provide smooth state management for client-side data.
### Components
When writing a component for Shenanigan frontends, use React's functional components. If you have the choice between functional or class components, always use functional, as it allows for cleaner, more readable code.
```typescript
function Welcome(): ReactElement {
  return <h1>Hello World</h1>;
}
```
**Naming Components**
Component names are written in PascalCase.
When naming components, think about the function of the component. Each component should represent an element inside the UI.
e.g. `CommentList.tsx`
**Memoizing**
Oftentimes our components rely on state from their parent components via `props`. When this happens, if the parent component's state changes, the child component will re-render with the parent. This can cause performance problems.
To avoid this, **memoize**.
Memoization will only re-render the component when its dependencies change, instead of whenever the parent re-renders.
You can memoize a whole component by wrapping it in `React.memo`, memoize expensive values with the `useMemo` hook, or memoize functions that are passed down by wrapping them in `useCallback`.
Further reading: https://reactjs.org/docs/hooks-reference.html#usecallback
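For illustration, a minimal sketch (the component and prop names are hypothetical):
```typescript
import React, { memo, useCallback, useState } from 'react';

type Comment = { id: string; text: string };

// memo() re-renders CommentList only when its props actually change
const CommentList = memo(function CommentList(props: {
  comments: Comment[];
  onSelect: (id: string) => void;
}) {
  return (
    <ul>
      {props.comments.map((c) => (
        <li key={c.id} onClick={() => props.onSelect(c.id)}>{c.text}</li>
      ))}
    </ul>
  );
});

function Thread({ comments }: { comments: Comment[] }) {
  const [selected, setSelected] = useState<string | null>(null);
  // useCallback keeps the same function identity between renders,
  // so the memoized CommentList is not re-rendered needlessly
  const onSelect = useCallback((id: string) => setSelected(id), []);
  return <CommentList comments={comments} onSelect={onSelect} />;
}
```
Without `useCallback`, a new `onSelect` function would be created on every render of `Thread`, defeating the `memo` wrapper.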
### Hooks
Hooks allow us to use React state management features without writing class components. We can use hooks to add modular state to our components. If two components require the same state logic, a shared hook accomplishes this.
Keep in mind the state and return values of the hook. Make sure each hook solves a simple task.
For example, here is a hook to return the ETH balance of an address:
```typescript
import { useCallback, useContext, useEffect, useState } from 'react'
import { BigNumber, ethers } from 'ethers'
import { Web3Context } from '../contexts/Web3Context' // assumed location of the context

function useAddressBalance(address: string) {
  const { provider } = useContext(Web3Context)
  const [balance, setBalance] = useState<BigNumber | null>(BigNumber.from(0))
  const updateBalance = useCallback(() => {
    if (ethers.utils.isAddress(address)) {
      let stale = false
      provider
        .getBalance(address)
        .then((value: BigNumber) => {
          if (!stale) {
            setBalance(value)
          }
        })
        .catch(() => {
          if (!stale) {
            setBalance(null)
          }
        })
      // cleanup: ignore results that arrive after the address has changed
      return () => {
        stale = true
        setBalance(BigNumber.from(0))
      }
    }
  }, [address, provider])
  useEffect(() => updateBalance(), [updateBalance])
  return balance
}
```
**Note**: Hook readability can benefit a lot from helpers. If you find your hook getting long, try extracting. We could simplify our previous example by extracting the `ethers.utils.getAddress(address)` check and the `provider.getBalance(address)` call into helpers.
```typescript
import { ethers } from 'ethers'

export function isAddress(value: string) {
  try {
    ethers.utils.getAddress(value)
    return true
  } catch {
    return false
  }
}
```
```typescript
// get the ether balance of an address
export async function getEtherBalance(address: string, provider: ethers.providers.Provider) {
  if (!isAddress(address)) {
    throw Error(`Invalid 'address' parameter '${address}'`)
  }
  return provider.getBalance(address)
}
```
### Contexts
Context provides a way to pass data through the component tree without having to pass props down manually at every level.
Use contexts when you want to share state between multiple components. A good example is an `ethers` provider. Generally, we only need one instance of an Ethereum provider running at a time. We need this provider in many components throughout our application, so we can hoist it to the app root level.
Just wrap our app in a `Provider`
```typescript
<Web3ContextProvider>
  <App />
</Web3ContextProvider>
```
and consume it in any component using the `useContext` hook.
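A minimal consumption sketch; the context module, its shape, and the `BlockNumber` component are assumptions:
```typescript
import React, { useContext, useEffect, useState } from 'react';
import { Text } from 'react-native';
import { Web3Context } from './Web3Context'; // assumed module

function BlockNumber(): React.ReactElement {
  // any component below the provider can read the shared instance
  const { provider } = useContext(Web3Context);
  const [block, setBlock] = useState<number | null>(null);
  useEffect(() => {
    provider.getBlockNumber().then(setBlock);
  }, [provider]);
  return <Text>Current block: {block ?? 'loading…'}</Text>;
}
```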
## Thinking in Relay
This section was inspired by this article: https://relay.dev/docs/principles-and-architecture/thinking-in-relay/
### What is Relay
Relay is a framework built to manage server state. However, it comes with many added benefits when making React applications, including:
- declarative data-fetching
- component encapsulation
- built-in pagination and caching
### Fragments
Relay batches together [GraphQL fragments](https://graphql.org/learn/queries/#fragments) to make a single network request for each view.
> Functional components use one or more GraphQL fragments to describe their data requirements. These fragments are then nested within other fragments, and ultimately within queries. And when such a query is fetched, Relay will make a single network request for it and all of its nested fragments. In other words, the Relay runtime is then able to make a single network request for all of the data required by a view!
Example of a GraphQL fragment:
```typescript
// AuthorDetails.react.js
const authorDetailsFragment = graphql`
  fragment AuthorDetails_author on Author {
    name
    photo {
      url
    }
  }
`;
```
`useFragment` will retrieve the data from the store
```typescript
// AuthorDetails.react.js
export default function AuthorDetails(props: Props) {
  const data = useFragment(authorDetailsFragment, props.author);
  // ...
}
```
### Queries
To fetch the data, place a query at the root of the view.
Queries will batch the data from all of its child fragments.
```typescript
// Story.react.js
const storyQuery = graphql`
  query StoryQuery($storyID: ID!) {
    story(id: $storyID) {
      title
      author {
        ...AuthorDetails_author
      }
    }
  }
`;
```
`useLazyLoadQuery` will fetch the query. You can then pass the fragment reference `author` (see `...AuthorDetails_author`) down to its component through props:
```typescript
// Story.react.js
function Story(props) {
  const data = useLazyLoadQuery(storyQuery, { storyID: props.storyId });
  return (
    <>
      <Heading>{data?.story.title}</Heading>
      {data?.story?.author && <AuthorDetails author={data.story.author} />}
    </>
  );
}
```
### Declarative data-fetching
Relay's most powerful feature (in my opinion).
In order to avoid *implicit data-fetching* silently losing data, Relay requires that each component declares what server state it requires. Data is masked between components, so a component only has access to the data it has declaratively specified.
This gives two primary benefits:
**Encapsulation**
You can see (and mutate) the data required by the component from within the component.
- This results in a much better frontend developer experience. When it is much more apparent what data is going where, frontend developers are much more in sync with one another.
**Data dependent renders**
Components are only re-rendered when the data they declared changes
## Writing Relay Powered Components
Once you understand the power of Relay, writing a Relay component is mostly a matter of learning the Relay compiler's syntax.
**Write React Before Relay**
Write a React functional component first
- Mock your required data
- Write only one component (don't encapsulate yet)
**Write Relay Fragment**
Start by mapping out your GraphQL fragment. The structural requirements will vary depending on whether you are dealing with a `Node` or a `Connection`.
Example:
A component that relies on the `me` query's `username` and `addresses` fields.
```typescript
type Props = {
  me: MyUserComponent_me$key;
};
const MyUserComponent = (props: Props): ReactElement => {
  const me = useFragment<MyUserComponent_me$key>(
    graphql`
      fragment MyUserComponent_me on User {
        username
        addresses
      }
    `,
    props.me
  );
  return (
    <View>
      <Text> Username: {me.username}</Text>
      <Text> Address 0: {me.addresses[0]} </Text>
    </View>
  );
};
```
**Notes:**
- Notice that the fragment requires the `me` fragment reference to be passed down through `props`
**Working with connections**
Now, let's say we want to access all donations the `User` has made in a child component of `MyUserComponent`. This is a `Connection`, GraphQL's definition of a list.
Let's say we set the `createdDonations` field to hold the `id` of each donation.
We can retrieve a paginated list with the first 20 items. We can also add the `@refetchable` directive to make the list refetchable; pass the directive a `queryName` for the Relay store hash.
**donationListFragmentSpec**
```typescript
const donationListFragmentSpec = graphql`
  fragment UserDonationList_me on User
  @argumentDefinitions(
    count: { type: "Int", defaultValue: 20 }
    cursor: { type: "String" }
  )
  @refetchable(queryName: "UserDonationListRefetchQuery") {
    createdDonations(first: $count, after: $cursor)
      @connection(key: "UserDonationList_createdDonations") {
      pageInfo {
        hasNextPage
        endCursor
      }
      edges {
        node {
          id
          amount
          comment {
            id
            content
          }
        }
      }
    }
  }
`
```
**DonationComponent**
```typescript
type Props = {
  me: UserDonationList_me$key;
};
export const UserDonationList = (props: Props): React.ReactElement => {
  const {
    data,
    isLoadingNext,
    loadNext,
    hasNext,
    refetch
  } = usePaginationFragment(donationListFragmentSpec, props.me);
  const [isFetchingTop, setIsFetchingTop] = useState(false);
  const refetchList = () => {
    if (isLoadingNext) {
      return;
    }
    setIsFetchingTop(true);
    refetch({ first: 10 }, { onComplete: () => setIsFetchingTop(false) });
  };
  // ...render a list from data.createdDonations
};
```
Now we must tell the `MyUserComponent` fragment about the child component's fragment.
**MyUserComponent**
```typescript
type Props = {
  me: MyUserComponent_me$key;
};
const MyUserComponent = (props: Props): ReactElement => {
  const me = useFragment<MyUserComponent_me$key>(
    graphql`
      fragment MyUserComponent_me on User {
        username
        addresses
        # spread the child component's fragment into the parent fragment
        ...UserDonationList_me
      }
    `,
    props.me
  );
  return (
    <View>
      <Text> Username: {me.username}</Text>
      <Text> Address 0: {me.addresses[0]} </Text>
      {/* pass the fragment reference down through props */}
      <UserDonationList me={me} />
    </View>
  );
};
```
**Relay Mutations**
Mutations write new data to the backend. The primary mutation flow looks like:
1. Get data from the user
2. Use the `useMutation` hook with the desired GraphQL mutation definition
3. Update the data on the frontend when the mutation returns successfully
The Relay store is made powerful by its use of `global ids`, which allow for static lookup of any data. Mutations use the global `id` field to update the mutated data.
```typescript
graphql`
  mutation PostCreateMutation($input: PostCreateInput!) {
    PostCreate(input: $input) {
      success
      error
      postEdge {
        node {
          id
          content
          author {
            id
            name
          }
          meHasLiked
          likesCount
        }
      }
    }
  }
`
```
In this example, Relay will use the `id` field to find the data in the Relay store and update all the returned `node` fields.
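For illustration, committing that mutation from a component might look like the sketch below. `PostCreateMutation` is assumed to be the tagged template above assigned to a variable; the `useMutation` hook itself is real `react-relay` API:
```typescript
import { useMutation } from 'react-relay';

function useCreatePost() {
  const [commit, isInFlight] = useMutation(PostCreateMutation);
  const createPost = (content: string) =>
    commit({
      variables: { input: { content } },
      onCompleted: () => console.log('post created'),
      onError: (error) => console.error(error),
    });
  return { createPost, isInFlight };
}
```
The `updater` and optimistic-update options described next plug into this same `commit` call.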
**Updater Functions and Optimistic Updates**
Sometimes, the automatic update with `id` won't be enough, and we will have to change the store manually.
To do that, use an `updater` function
```typescript
export function connectionUpdater({ store, parentId, connectionName, edge, before = false }: ConnectionUpdater) {
  if (edge) {
    if (!parentId) {
      // eslint-disable-next-line
      console.log('maybe you forgot to pass a parentId: ');
      return;
    }
    const parentProxy = store.get(parentId);
    const conn = ConnectionHandler.getConnection(parentProxy, connectionName);
    if (!conn) {
      // eslint-disable-next-line
      console.log('maybe this connection is not in relay store: ', connectionName);
      return;
    }
    if (before) {
      ConnectionHandler.insertEdgeBefore(conn, edge);
    } else {
      ConnectionHandler.insertEdgeAfter(conn, edge);
    }
  }
}

export const updater: SelectorStoreUpdater = store => {
  const newEdge = store.getRootField('PostCreate').getLinkedRecord('postEdge');
  connectionUpdater({
    store,
    parentId: ROOT_ID,
    connectionName: 'Feed_posts',
    edge: newEdge,
    before: true,
  });
};
```
Take a look at further documentation about the [store](https://relay.dev/docs/api-reference/store/)
**@directives**
Directives can be used to inject added functionality into your Relay fragments. Examples include `@argumentDefinitions`, `@refetchable`, and `@relay`.
Reading the docs can be useful: https://relay.dev/docs/api-reference/graphql-and-directives/
## Folder Structure and Component Hierarchy
It is recommended you understand how to "Think in Relay" before reading this section. Relay is built for React, and thus focuses entirely on the component architecture.
Now, I ask that you throw out everything you know about previous application hierarchies. Dumb and smart components, dispatchers, and actions are useless to us.
### Components come first
Relay is designed to encapsulate components. No data from the Relay store is shared between components, and only data that is declared is provided.
Every React `Component` is its own component file.
No need for views, smart and dumb components, HOC components, etc.
Naturally, the `component` directory holds the components.
When we are structuring our files, we can organize them in the layout of our app.
**Component Tree Structure**
Each Component should be modular.
For example, say we are creating a user profile page.
```mermaid
graph LR
id0[Profile]
id1[Header]
id2[Details]
id3[Links]
id4[ProfilePicture]
id5[Username]
id6[CoverPhoto]
id7[Biography]
id8[Stream]
id0 --> id1
id0 --> id2
id0 --> id3
id1 --> id4
id1 --> id5
id1 --> id6
id2 --> id7
id2 --> id8
id3 --> Link
```
This is a basic example. It is important to have the leaves of our tree be the most basic and simple components possible.
Subfolders in the `component` folder should group related child components.
Example: Say a `Stream` component has `Chat` and `Donate` as children. The `chat` and `donate` folders are subfolders of the parent `stream` folder.
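For instance, the layout described above might look like this (file names are illustrative):
```
components/
├── stream/
│   ├── Stream.tsx
│   ├── chat/
│   │   └── Chat.tsx
│   └── donate/
│       └── Donate.tsx
└── profile/
    └── Profile.tsx
```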
### Utils, helpers, and hooks
# Subgraph
# Ceramic and IDX