# Testing project
The goal of this project is to automate various kinds of testing for GCC Rust. This includes running the official rustc testsuite, but also building projects that we use as goal test cases.
## Work Items
1. Maintain the testsuites
2. Automation
3. Create summary outputs
4. Dashboard
### Maintain the testsuites
We have various unique test-suites within this testing project:
- `gccrs` parsing only.
- `rustc` on dejagnu testsuite
- `gccrs` on valid test cases only
- `gccrs` on valid `#[no_std]` test cases only
- `gccrs` on valid `#[no_core]` test cases only
- libcore 1.49.0
- Blake3
### Automation
Nightly runs of the latest GCC Rust master build, using our Dockerfile
- Each testsuite should be its own job
- See https://github.com/marketplace/actions/deploy-nightly for an example cron line
- Each run should archive its test-results
- See https://github.com/Rust-GCC/gccrs/blob/e9d41c4ef9da8ba71570ecf83691c813c12d9149/.github/workflows/ccpp.yml#L71-L76
### Summary outputs
We need the testsuite to generate a JSON file of the results. Using JUnit XML formats is overkill here, so all we need is a JSON object such as:
```javascript
{
  "test-suite-name": "gccrs-fsyntax-only",
  "testing-commit": "<commit sha of testing-project>",
  "date": "<date of result>",
  "gccrs-commit": "<commit sha of gccrs>", // maybe we can add a commit-sha file into the Dockerfile that we can pull out
  "passes": 1234,
  "failures": 44567
}
```
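On the consuming side, a minimal serde sketch of that object could look like the following; the struct and field names are assumptions chosen to mirror the JSON above, not an agreed format:

```rust
use serde::{Deserialize, Serialize};

/// One nightly result summary, mirroring the JSON object above.
/// Names are illustrative only; adjust once the real format is settled.
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct TestResult {
    #[serde(rename = "test-suite-name")]
    pub test_suite_name: String,
    #[serde(rename = "testing-commit")]
    pub testing_commit: String,
    pub date: String,
    #[serde(rename = "gccrs-commit")]
    pub gccrs_commit: String,
    pub passes: u64,
    pub failures: u64,
}
```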
### Dashboard
Since we are writing a Rust compiler it would be nice to reuse Rust. My personal preference would be to just hack up some Python, but let's give Rust a chance.
Web frameworks in Rust:
- rocket
- actix
- List others!
Front-end frameworks in Rust:
- [Yew](https://github.com/yewstack/yew)
- [Seed](https://github.com/seed-rs/seed)
#### Design
To avoid requiring the web app to have its own persistence layer, let's use GitHub as the persistence layer.
The web app will have a cron job that nightly polls for new test runs and stores the results in memory, using the GitHub API to fetch all test results.
It will provide a REST API to look up the testsuite results over time with their associated commits. The web UI could use simple templates for now, with vis.js for data visualisations.
##### Phase 1 - startup fetch results
On startup, the Rust app uses a GITHUB_TOKEN to access the archived results of all nightly runs on github.com/Rust-GCC/testing.
We have a data structure along the lines of:
```map ['test-suite-name'] = [{'result'}, {'result'}, ...]```
The list does not have to be ordered; we can fix this in the UI or by adding queries to our REST API.
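A rough Rust equivalent of that map; `ResultStore` is an assumed name, and `serde_json::Value` stands in for the per-run summary struct sketched under Summary outputs:

```rust
use std::collections::HashMap;

// Stand-in for the per-run summary sketched under "Summary outputs" above.
type TestResult = serde_json::Value;

/// Testsuite name -> all known nightly results for that suite.
/// The Vec is left unordered; ordering is handled in the UI or in query handlers.
pub type ResultStore = HashMap<String, Vec<TestResult>>;
```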
The octorust crate looks like a nice API to use:
- https://docs.rs/octorust/latest/octorust/actions/struct.Actions.html#method.list_artifacts_for_repo
- https://docs.rs/octorust/latest/octorust/types/struct.Artifact.html
##### Phase 2 - create rest end points
- GET /api/testsuites returns a JSON list of all keys in the data structure from Phase 1
- GET /api/testsuites/{testsuite-name} returns the results of that testsuite in any order; the path parameter maps to a key in the map of results
Upon failure, we should return a 404 Not Found when the testsuite does not exist (see the Rocket sketch below).
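A minimal sketch of those two endpoints using Rocket 0.5 (one of the frameworks listed above, with its json feature enabled); the handler and type names are assumptions, and `rocket::serde::json::Json` answers the `rocket::Json` question in the notes further down. Returning `Option<Json<...>>` lets Rocket answer with 404 Not Found when the key is missing.

```rust
use std::collections::HashMap;
use rocket::serde::json::Json;
use rocket::State;

// Stand-in for the per-run summary described under "Summary outputs".
type TestResult = serde_json::Value;
// Testsuite name -> all known nightly results (the Phase 1 store).
type ResultStore = HashMap<String, Vec<TestResult>>;

#[rocket::get("/testsuites")]
fn list_testsuites(store: &State<ResultStore>) -> Json<Vec<String>> {
    Json(store.keys().cloned().collect())
}

#[rocket::get("/testsuites/<name>")]
fn testsuite_results(name: &str, store: &State<ResultStore>) -> Option<Json<Vec<TestResult>>> {
    // `None` becomes a 404 Not Found response.
    store.get(name).cloned().map(Json)
}

#[rocket::launch]
fn rocket() -> _ {
    rocket::build()
        .manage(ResultStore::new())
        .mount("/api", rocket::routes![list_testsuites, testsuite_results])
}
```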
##### Phase 3 - Begin UI Dev
Choose a UI web framework; probably just use Bootstrap and vis.js, since we only need a simple single-page app with really basic REST calls.
##### Phase 4 - Scheduled task
The web app on startup loads all test runs by requesting them from GitHub and stores them in memory. It then needs a scheduled task to refetch all of the data each day.
Depending on how long this takes, we should support a mechanism to fetch asynchronously and, upon successful completion, overwrite the existing data in memory (a sketch follows after the link below).
https://crates.io/crates/tokio-cron-scheduler
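A rough sketch of that refetch-and-swap, using a plain `tokio` interval task as a stand-in for tokio-cron-scheduler (linked above, which could replace the interval with a real cron expression); `fetch_all_results` is a hypothetical helper standing in for the Phase 1 GitHub fetch:

```rust
use std::collections::HashMap;
use std::sync::Arc;
use std::time::Duration;
use tokio::sync::RwLock;

type ResultStore = HashMap<String, Vec<serde_json::Value>>;

/// Hypothetical stand-in for the Phase 1 fetch of all archived nightly results.
async fn fetch_all_results() -> Result<ResultStore, Box<dyn std::error::Error + Send + Sync>> {
    Ok(ResultStore::new())
}

/// Refetch everything once a day; only overwrite the shared store when the fetch succeeds.
fn spawn_daily_refresh(store: Arc<RwLock<ResultStore>>) {
    tokio::spawn(async move {
        let mut ticker = tokio::time::interval(Duration::from_secs(24 * 60 * 60));
        loop {
            ticker.tick().await;
            match fetch_all_results().await {
                Ok(fresh) => *store.write().await = fresh,
                Err(e) => eprintln!("nightly refresh failed, keeping old data: {e}"),
            }
        }
    });
}
```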
### Update idea and flowgraph
```mermaid
flowchart LR;
Dashboard --> API;
curl --> API;
API --> check_cache[Check cache];
check_cache --> is_outdated[Is it too old?];
is_outdated --Yes-->update[Update cache];
update-->cached;
is_outdated --No-->cached[Return cached json];
cached --> API;
```
- [ ] How do we store files appropriately?
- [ ] Can we just keep a hashmap as database and return this? No writing to disk?
- [ ] Yes? Since we only care about the last 90 days anyway?
- [ ] Do we have to purge the cache every once in a while to only keep 90 days?
- [ ] How do we return them to the user?
- [ ] What return type? Is there a `rocket::Json`?
- [x] What does it mean to "check if a file is too old" when we're keeping track of each file's run date?
- [ ] Is that even what we want to do? We just care about the `.last_date` and keeping the hashmap updated
- [ ] Just entering the cache directory and looking for the most recent file?
- [x] We can keep a field `.last_date` in our `Cache` structure for an easy check!
- [ ] We still need to check for the most recent file if we already have a cache directory? Later? Later!
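As a sketch of the cache idea from the flow graph and notes above, assuming a chrono timestamp for `.last_date`; the freshness window is an assumption, and the 90-day purge and the actual refresh call are left out:

```rust
use std::collections::HashMap;
use chrono::{DateTime, Duration, Utc};

/// In-memory cache of results, plus when it was last refreshed.
/// The `last_date` field follows the notes above; everything else is illustrative.
pub struct Cache {
    pub results: HashMap<String, Vec<serde_json::Value>>,
    pub last_date: DateTime<Utc>,
}

impl Cache {
    /// The "Is it too old?" check from the flow graph; the 24h window is an assumption.
    pub fn is_outdated(&self) -> bool {
        Utc::now() - self.last_date > Duration::hours(24)
    }
}
```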