# Intro
## Flow
1) The test plan config file is hand-crafted manually (a rough sketch of what such a config could look like is shown after this list)
2) Using the plan config file & source repo containing test cases:
* Test executions are constructed
* A test execution is a filter of test cases (i.e. a list of test cases) to be executed, plus the infrastructure & RHTAP configuration to run these tests against
* If a test case is not yet implemented (i.e. it is marked as `test.todo()`), a corresponding JIRA is created for manual execution.
3) Each execution is triggered
* Meaning it starts a job which
* Creates ephemeral cluster (in desired version/flavor)
* Installs RHTAP (in desired configuration)
* Executes E2E tests (filtered for this execution)
* Uploads test artifacts
* Tears down the cluster
4) Results are gathered in a single place
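To make steps 1) and 2) more concrete, here is a rough sketch of the kind of data a hand-crafted test plan config could carry, written as TypeScript types purely for illustration; every field name here is made up, this is not an existing format.

```ts
// Illustrative only: a hypothetical shape for the hand-crafted test plan config.
interface TestExecution {
  name: string;            // e.g. "smoke-on-rosa-hcp"
  clusterFlavor: string;   // desired ephemeral cluster version/flavor
  rhtapConfig: string;     // RHTAP configuration profile to install
  testFilter: string[];    // tags selecting the subset of test cases to run
}

interface TestPlan {
  release: string;
  executions: TestExecution[];
}

// Example plan: a single smoke execution against one cluster flavor.
const examplePlan: TestPlan = {
  release: '1.x-bugfix',
  executions: [
    {
      name: 'smoke-on-rosa-hcp',
      clusterFlavor: 'rosa-hcp-4.15',
      rhtapConfig: 'default',
      testFilter: ['@smoke'],
    },
  ],
};
```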
## Open questions
* We need to have a filtering mechanism for test cases.
* Probably by having tags in the test case name, e.g.:
```
it(`verifies if ${gptTemplate} gpt exists in the catalog @smoke @gpt`, async () => {
```
* This will allow us to execute just a subset of tests
* Need to investigate if we can use negative filters (such as "don't execute TPA test cases" etc.); a sketch of one possible approach follows this list
* Pretty ugly solution IMO :-(
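A minimal sketch of how tag-based filtering (including the negative case) could work, assuming tags are embedded in test titles as `@tag` tokens; `parseTags` and `isSelected` are hypothetical helpers, not part of any existing framework:

```ts
type TagFilter = { include: string[]; exclude: string[] };

// Extract "@tag" tokens from a test title, e.g. "... @smoke @gpt" -> ["smoke", "gpt"].
function parseTags(testTitle: string): string[] {
  return (testTitle.match(/@\w+/g) ?? []).map((t) => t.slice(1));
}

// A test case is selected when it carries at least one included tag
// and none of the excluded ones (the "negative" filter mentioned above).
function isSelected(testTitle: string, filter: TagFilter): boolean {
  const tags = parseTags(testTitle);
  const included = filter.include.length === 0 || filter.include.some((t) => tags.includes(t));
  const excluded = filter.exclude.some((t) => tags.includes(t));
  return included && !excluded;
}

// Example: run smoke tests but skip anything TPA-related.
isSelected('verifies if nodejs gpt exists in the catalog @smoke @gpt', { include: ['smoke'], exclude: ['tpa'] }); // -> true
```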
## Tasks
### CI
* Create automation for spinning up/configuring the cluster (gitops?)
* Create tasks for archiving artifacts
* Might be copied from/inspired by how OpenShift Pipelines QE is doing this
* Figure out how we will be spinning up ephemeral clusters
* Do we need own Hive instance we would manage?
* Can we just use Rosa HCP?
* Can we also somehow leverage the internal Hive instance? Would it be worth the hassle?
* Pricing/budget concerns?
### Testing framework (can we call this section that? :-D)
* [Spike] Logic for filtering tests according to target env/config (a rough sketch is at the end of this section).
* Figure out how to best capture which test cases should be executed in which env/config combinations. This will be very different for different scenarios (for example pre-GA testing of a major version vs smoke testing a bugfix release vs nightly tests, ...)
* Some test cases might be completely invalid in certain envs/configs
* In some cases we might also introduce some randomization to increase coverage (for example in nightly tests we could "randomly" choose configurations from the support matrix that we do not usually execute in our release/testing cycle)
* This also needs to incorporate the manual test cases for which JIRAs will need to be created
* [Spike] Have a consolidated report at the end of the testing cycle
* I think it would be nice to have a single place where one could quickly see the results of the whole test plan execution
* I also think we would like to avoid reporting to Polarion, but if we don't find any better solution (Jira X-Ray is not ready yet) and we would have to create something ourselves, we could reconsider Polarion as a solution (yuck)
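As a starting point for the filtering spike above, here is a rough sketch of picking env/config combinations from a support matrix, including the nightly randomization idea; the matrix entries and field names are invented for illustration:

```ts
// Hypothetical support matrix; real entries would come from the test plan / support docs.
interface EnvConfig {
  ocpVersion: string;
  rhtapConfig: string;
}

const supportMatrix: EnvConfig[] = [
  { ocpVersion: '4.14', rhtapConfig: 'default' },
  { ocpVersion: '4.15', rhtapConfig: 'default' },
  { ocpVersion: '4.15', rhtapConfig: 'byo-gitops' },
];

// Nightly runs: always cover the baseline configs, plus one random "extra" from the
// rest of the matrix, so rarely exercised combinations get swept over time.
function pickNightlyConfigs(baseline: EnvConfig[]): EnvConfig[] {
  const remaining = supportMatrix.filter((c) => !baseline.includes(c));
  const extra = remaining[Math.floor(Math.random() * remaining.length)];
  return extra ? [...baseline, extra] : baseline;
}
```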