# Testing
There are three types of testing that we use in our project:
- [Unit testing](https://gitlab.hlag.altemista.cloud/fis3/commons-ui/commons-ui-frontend/-/blob/master/docs/testing/unit-testing.md)
- [Integration testing](https://gitlab.hlag.altemista.cloud/fis3/commons-ui/commons-ui-frontend/-/blob/master/docs/testing/integration-testing.md)
- [Browser testing](https://gitlab.hlag.altemista.cloud/fis3/commons-ui/commons-ui-frontend/-/blob/master/docs/testing/browser-testing.md)
Test types are listed in order of priority. Consider this when choosing the type of test that covers your code best.
> There is also another type of testing, `performance` testing. Due to its specifics, we do not cover it here. More information about performance testing can be found [here](https://gitlab.hlag.altemista.cloud/fis3/commons-ui/commons-ui-frontend/-/blob/master/performance_tests/README.md).
## Folder Structure
We store all tests in the `__tests__` folder at component level (e.g. `packages/core/components/Button/__tests__`).
## Naming Convention
We name test files depending on their type:
```
*.TEST_TYPE.ts(x)
```
Where `TEST_TYPE` is one of three test types: `unit`, `integration` or `browser`. For example: `Button.integration.tsx`.
## Running Tests
> :warning: Running browser tests requires Docker to be installed and running, with a specific image pulled. Please refer to the [Docker configuration documentation](https://gitlab.hlag.altemista.cloud/fis3/commons-ui/commons-ui-frontend/-/blob/master/docs/docker-configuration.md) for detailed instructions.
To run all tests in the project, use:
```bash
yarn test
```
To run all tests of a specific type, use the command:
```
yarn test:TEST_TYPE
```
Where `TEST_TYPE` is either `unit` (for both unit and integration tests) or `browser`.
To run a single specific test file:
```bash
yarn test:TEST_TYPE:single TEST_FILE_PATH
```
Where `TEST_FILE_PATH` is the path to the test file (e.g. `packages/core/components/Button/__tests__/Button.unit.tsx`).
> In the above example the file path contains the forward slash (`/`) character. If you use VSCode on a Windows machine, copied file paths will by default use the backslash (`\`) character instead. To change this:
>
> - In the VSCode Command Palette, open "User Settings" and change the "Copy Relative Path Separator" setting to `/`.
> - _Or_ use extensions (e.g. [Copy Relative Path Posix](https://marketplace.visualstudio.com/items?itemName=rssowl.copy-relative-path-posix)).
The test results will be displayed in the terminal.
For other available test commands refer to the [package.json](https://gitlab.hlag.altemista.cloud/fis3/commons-ui/commons-ui-frontend/-/blob/master/package.json) file.
# Unit Testing
## Description
The goal of unit testing is to determine whether [React components](https://reactjs.org/docs/components-and-props.html) and other functions work as expected. Unit tests verify, at the component level, that a piece of code works correctly: an isolated component or method can be tested to ensure that it meets its requirements and provides the desired functionality.
For unit tests we use [react-testing-library](https://testing-library.com/docs/react-testing-library/intro/) in combination with [Jest](https://jestjs.io/) testing framework. In order to test [sagas](https://redux-saga.js.org/), we use [redux-saga-test-plan](https://github.com/jfairbank/redux-saga-test-plan) library.
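As a minimal sketch of what such a test can look like (the `Button` component, its props, and the `../Button` import path are illustrative assumptions, and the jest-dom matchers are assumed to be set up):

```typescript
// Button.unit.tsx — hypothetical unit test for a Button component.
// The component and its props are assumptions for illustration only.
import React from 'react';
import { render, screen, fireEvent } from '@testing-library/react';
import { Button } from '../Button';

describe('Button', () => {
    it('renders its label', () => {
        render(<Button label="Save" onClick={jest.fn()} />);
        // toBeInTheDocument() comes from the jest-dom matchers
        expect(screen.getByText('Save')).toBeInTheDocument();
    });

    it('calls onClick when clicked', () => {
        const onClick = jest.fn();
        render(<Button label="Save" onClick={onClick} />);
        fireEvent.click(screen.getByRole('button'));
        expect(onClick).toHaveBeenCalledTimes(1);
    });
});
```

Note that the tests query elements the way a user would find them (by text and by role) rather than by implementation details, which is the approach react-testing-library encourages.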
## Naming Convention
Unit test files follow this general naming rule:
```
*.unit.ts(x)
```
## Running
Since unit and [integration](https://gitlab.hlag.altemista.cloud/fis3/commons-ui/commons-ui-frontend/-/blob/master/docs/testing/integration-testing.md) tests share the same run commands, the following command runs _all_ unit and integration tests in the project:
```bash
yarn test:unit
```
To run only _one_ specific unit test file, use this command:
```bash
yarn test:unit:single TEST_FILE_PATH
```
Where `TEST_FILE_PATH` is the path to the unit test file. For example:
```bash
yarn test:unit:single packages/core/components/Button/__tests__/Button.unit.tsx
```
The test results will be displayed in the terminal.
## Debugging
The following calls can be used to print a readable representation of the DOM tree of a node:
```typescript
// Debug document
screen.debug();
// Debug single element
screen.debug(screen.getByText('test'));
// Debug multiple elements
screen.debug(screen.getAllByText('multi-test'));
```
You can also inspect the rendered elements in [testing-playground](https://testing-playground.com/) using the following line of code:
```typescript
screen.logTestingPlaygroundURL();
```
You can read more about the debugging process [here](https://testing-library.com/docs/queries/about/#screendebug).
---
# Integration Testing
## Description
Integration tests are intended to check different parts of the application as a combined entity. With this type of test we check whether separate widgets work correctly together, and we expose defects that may arise in the interaction between components. Integration tests make it possible to cover more complex scenarios: typically, we build an entire application and test some part of it (e.g. communication with the backend or interaction with [Redux](https://redux.js.org/)).
Similar to [unit](https://gitlab.hlag.altemista.cloud/fis3/commons-ui/commons-ui-frontend/-/blob/master/docs/testing/unit-testing.md) tests, integration tests are based on [react-testing-library](https://testing-library.com/docs/react-testing-library/intro/) with the [Jest](https://jestjs.io/) testing framework and run in the [jsdom](https://github.com/jsdom/jsdom) testing environment.
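As a hedged sketch of such a test (the `Counter` component, the `counterReducer`, and the store shape are hypothetical names for illustration), an integration test that exercises a component together with a real Redux store could look like this:

```typescript
// Counter.integration.tsx — hypothetical integration test that renders a
// component wired up to a real Redux store. All names are illustrative.
import React from 'react';
import { createStore } from 'redux';
import { Provider } from 'react-redux';
import { render, screen, fireEvent } from '@testing-library/react';
import { Counter, counterReducer } from '../Counter';

describe('Counter', () => {
    it('updates the store when the increment button is clicked', () => {
        const store = createStore(counterReducer);
        render(
            <Provider store={store}>
                <Counter />
            </Provider>
        );
        // Interact through the rendered UI and assert on the resulting state,
        // so the component and the store are tested as a combined entity.
        fireEvent.click(screen.getByRole('button', { name: 'Increment' }));
        expect(store.getState().count).toBe(1);
    });
});
```

Unlike a unit test, which would mock the store, this test lets the real reducer run, so defects in the component-to-store wiring surface here.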
## Naming Convention
Integration test files follow this general naming rule:
```
*.integration.ts(x)
```
## Running
Since [unit](https://gitlab.hlag.altemista.cloud/fis3/commons-ui/commons-ui-frontend/-/blob/master/docs/testing/unit-testing.md) and integration tests share the same run commands, the following command runs _all_ unit and integration tests in the project:
```bash
yarn test:unit
```
To run only _one_ specific integration test file, use this command:
```bash
yarn test:unit:single TEST_FILE_PATH
```
Where `TEST_FILE_PATH` is the path to the integration test file. For example:
```bash
yarn test:unit:single packages/core/components/Button/__tests__/Button_html.integration.tsx
```
The test results will be displayed in the terminal.
## Debugging
The following calls can be used to print a readable representation of the DOM tree of a node:
```typescript
// Debug document
screen.debug();
// Debug single element
screen.debug(screen.getByText('test'));
// Debug multiple elements
screen.debug(screen.getAllByText('multi-test'));
```
You can also inspect the rendered elements in [testing-playground](https://testing-playground.com/) using the following line of code:
```typescript
screen.logTestingPlaygroundURL();
```
You can read more about the debugging process [here](https://testing-library.com/docs/queries/about/#screendebug).
---
# Browser Testing
## Description
Browser tests, as their name suggests, are run in the browser.
We use the [jest-puppeteer](https://github.com/smooth-code/jest-puppeteer) library, which combines [Jest](https://jestjs.io/) as the testing framework with [Puppeteer](https://pptr.dev/), which provides a programmable [headless browser](https://developer.chrome.com/blog/headless-chrome/) and a higher-level API to control it. On top of that, our homemade [jest-puppeteer-react](https://github.com/hapag-lloyd/jest-puppeteer-react) package combines [jest-puppeteer](https://github.com/smooth-code/jest-puppeteer) with [Webpack](https://webpack.js.org/) and [webpack-dev-server](https://www.npmjs.com/package/webpack-dev-server) to render your [React](https://reactjs.org/) components, so you don't have to set up a server and navigate to it.
> :warning: We use [Docker](https://www.docker.com/) when running browser tests locally, so make sure it is installed and running and that you have pulled the required image. Please refer to the [Docker configuration documentation](https://gitlab.hlag.altemista.cloud/fis3/commons-ui/commons-ui-frontend/-/blob/master/docs/docker-configuration.md) for detailed instructions.
Browser testing is typically used in two cases:
1. For **image snapshot** testing
Image snapshot (or screenshot) testing is very useful for visual regression testing and sometimes is the best and easiest way to test an application or a standalone widget.
The [jest-puppeteer-react](https://github.com/hapag-lloyd/jest-puppeteer-react) package mentioned above includes the [jest-image-snapshot](https://github.com/americanexpress/jest-image-snapshot) library, which makes it possible to take image snapshots of a rendered page and compare them using the `toMatchImageSnapshot()` matcher. On the first run, this matcher creates an `__image_snapshots__` folder in the directory the test is in (as a rule, the `__tests__` folder at component level) and stores the baseline snapshot image there.
On subsequent test runs, the `toMatchImageSnapshot()` matcher compares the produced snapshots with the baselines. If there is a difference, it creates a `__diff_output__` folder where the diff image is stored. The diff image is a combination of three images: the baseline snapshot, the newly produced one, and an overlay of the two that makes the difference visible. You can also open the diff image via a link printed in the terminal after the browser test run has finished.
You can also *update* the baseline snapshots once you are happy with the changes by adding the `-u` flag when running a browser test:
```bash
yarn test:browser:single TEST_FILE_PATH -u
```
Where `TEST_FILE_PATH` is a file path where tests are stored (e.g. `packages/core/components/Button/__tests__/Button.browser.tsx`).
2. When an **access to specific browser APIs** is needed (e.g. [navigator](https://developer.mozilla.org/en-US/docs/Web/API/Navigator), [clipboard](https://developer.mozilla.org/en-US/docs/Web/API/Clipboard), [canvas](https://developer.mozilla.org/en-US/docs/Web/API/Canvas_API) or [resize observer](https://developer.mozilla.org/en-US/docs/Web/API/Resize_Observer_API) API)
In [unit](https://gitlab.hlag.altemista.cloud/fis3/commons-ui/commons-ui-frontend/-/blob/master/docs/testing/unit-testing.md) and [integration](https://gitlab.hlag.altemista.cloud/fis3/commons-ui/commons-ui-frontend/-/blob/master/docs/testing/integration-testing.md) tests we use the [jsdom](https://github.com/jsdom/jsdom) environment to emulate a web browser, but [jsdom](https://github.com/jsdom/jsdom) cannot render visual content: [it does not do any layout or rendering](https://github.com/jsdom/jsdom#pretending-to-be-a-visual-browser) and has many [missing web APIs](https://github.com/jsdom/jsdom#unimplemented-parts-of-the-web-platform). In such cases, browser testing is the only way to cover your needs.
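To illustrate the image snapshot case, a browser test might look like the following sketch (the `Button` component and the exact `render` options accepted by [jest-puppeteer-react](https://github.com/hapag-lloyd/jest-puppeteer-react) are assumptions; check the package README for the current API):

```typescript
// Button.browser.tsx — hypothetical image snapshot test.
// Component name, import path and viewport values are illustrative only.
import React from 'react';
import { render } from 'jest-puppeteer-react';
import { Button } from '../Button';

describe('Button', () => {
    it('matches the baseline screenshot', async () => {
        // A small viewport keeps the screenshot, and thus the comparison
        // time, small (see the guidelines below on screenshot size).
        await render(<Button label="Save" />, {
            viewport: { width: 200, height: 80 },
        });
        const screenshot = await page.screenshot();
        expect(screenshot).toMatchImageSnapshot();
    });
});
```

On the first run this stores a baseline image in `__image_snapshots__`; later runs compare against it and write a diff image to `__diff_output__` on mismatch.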
## Guidelines
- **Browser tests should be stable**. Tests should reliably yield the same result without any flakiness, independent of the environment they're run in. Screenshots should look the same in every test run. There should not be any randomness involved in the test failing or not. [Here](http://mir.cs.illinois.edu/marinov/publications/LuoETAL14FlakyTestsAnalysis.pdf) is an interesting read about the possible sources of unstable (flaky) tests.
- **Each browser test should have a purpose and should be written in a way that it completes as fast as possible**. A full run of our whole browser test suite takes a big chunk of our current CI pipeline duration. Since the project is still growing and evolving, the completion time must be taken into account.
- **No unnecessary browser tests.** Many things can be tested in unit or integration tests, which take a lot less time than browser tests. Only use a browser test if the thing you want to test cannot be tested with reasonable effort in a unit or integration test.
- **Combine simple and non-interactive tests.** If there are many tests which don't have any interaction logic but only test different initial component states, combining those into a single screenshot could improve test performance.
- **Avoid duplicate tests.** Don't create unnecessary screenshots. Example: there are plenty of tests which create an initial screenshot to assert the initial state, then interact with the component somehow and then take a final screenshot. If the initial screenshot is only there to assert correct initial value rendering, then this could be done once in a separate test, so that all consecutive tests don't have to assert the initial state again.
- **Keep screenshot size small.** Taking and comparing screenshots takes time. The bigger the screenshot, the more time it takes, even if they are mostly empty. Carefully craft your testing layout to take up as little screen real estate as possible in order to keep the screenshot size low. Disable or avoid using elements irrelevant for the test, e.g. headers, footers, labels or wrapper elements like tiles or boxes.
- **Assert element existence.** Always make sure the elements you're screenshotting or interacting with do actually exist and are in the correct state, especially if they are dynamic elements like tooltips or dropdowns, or if they dynamically load resources. Use `page.waitForSelector()` or similar functions to intelligently wait the minimum required amount of time.
- **Avoid using fixed timeouts.** There are a few fixed timeout utilities available, e.g. `await waitForDebounce()`, `await waitForRedux()` or `await page.waitForTimeout()`. Those do not tell you anything about the thing you're waiting for actually happening or not. Whenever possible, use an intelligent timeout like `page.waitForSelector()` instead.
- **Limit wait time.** If a change makes many tests fail because of failing `page.waitForSelector()` calls, the pipeline takes ages to complete. This is caused by (a) retries and (b) the default timeout being 30 seconds. In most cases it is sufficient to wait for a second or two at max, so the timeout should be limited via the options which can be passed into `page.waitForSelector()` and similar functions.
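The last three guidelines can be sketched as a small helper (the `PageLike` interface and the `clickWhenReady` name are illustrative; in a real test you would pass Puppeteer's `Page` object directly):

```typescript
// Minimal structural type for the parts of the Puppeteer Page API we use here;
// a real test would use the Page type from 'puppeteer' instead.
interface PageLike {
    waitForSelector(
        selector: string,
        options?: { visible?: boolean; timeout?: number }
    ): Promise<unknown>;
    click(selector: string): Promise<void>;
}

// Hypothetical helper: wait for a selector with a short, explicit timeout
// instead of a fixed delay (and instead of the 30-second default), then
// interact with the element only once it is known to exist and be visible.
async function clickWhenReady(page: PageLike, selector: string): Promise<void> {
    await page.waitForSelector(selector, { visible: true, timeout: 2000 });
    await page.click(selector);
}
```

If the element never appears, the test fails after two seconds rather than blocking the pipeline for the full default timeout on every retry.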
## Naming Convention
Browser test files follow this general naming rule:
```
*.browser.tsx
```
## Running
To run *all* browser tests in the project, use the following command:
```bash
yarn test:browser
```
To run only *one* specific browser test file, use this command:
```bash
yarn test:browser:single TEST_FILE_PATH
```
Where `TEST_FILE_PATH` is the path to the browser test file. For example:
```bash
yarn test:browser:single packages/core/components/Button/__tests__/Button.browser.tsx
```
The test results will be displayed in the terminal.
## Debugging
To debug browser tests more conveniently, you can run them in your own browser. To debug *all* browser tests, run the following command in your terminal:
```bash
yarn start:testBrowser
```
To debug *one* specific test, you can use the `jestPuppeteer.debug()` method provided by [jest-puppeteer](https://github.com/smooth-code/jest-puppeteer) library. Just insert the following code inside your test case, and it will suspend test execution at the line where you put it:
```typescript
await jestPuppeteer.debug();
```
In both cases, a local server will be started; it can be accessed by visiting https://localhost:1111 in your browser. This gives you the opportunity to see what's going on in [a real, headful browser](https://www.linkedin.com/pulse/real-vs-headless-browsers-comparison-automation-testing-noor) environment.