### Phase 1: Planning
#### Definition of Ready (DoR):
- [ ] **Scope Defined:** The features to be implemented are clearly defined and scoped out.
- [ ] **Requirements Gathered:** All necessary requirements for the feature are gathered and documented.
- [ ] **Stakeholder Input:** Ensure relevant stakeholders have provided input on the document scope and content.
#### Definition of Done (DoD):
- [ ] **Document Complete:** All sections of the document are filled out with relevant information.
- [ ] **Reviewed by Stakeholders:** The document has been reviewed and approved by stakeholders.
- [ ] **Ready for Development:** The document is in a state where developers can use it to begin implementation.
### Phase 2: Development
#### Definition of Ready (DoR):
- [ ] **Task Breakdown:** The development team has broken down tasks based on the document.
- [ ] **Communication Plan:** A plan is in place for communication between developers and writers if clarification is needed during implementation.
- [ ] **Developer Understanding:** Developers have a clear understanding of the document content.
#### Definition of Done (DoD):
- [ ] **Code Implementation:** The feature is implemented according to the document specifications.
- [ ] **Developer Testing:**
- Unit tests and basic integration tests are completed.
- The developer has also completed self-testing for the feature (add this as a comment in the ticket, with the tested OS and as much detail as possible, to reduce overlapping effort).
- (AC -> Code Changes -> Impacted scenarios)
- [ ] **Communication with Writers:** Developers have communicated any changes or challenges to the writers, and the necessary adjustments are made in the document. (This can be a note in the feature PR for writers to pick up, or a separate PR with your docs change for writers to review.)
### Phase 3: QA for feature
#### Definition of Ready (DoR):
- [ ] **Test Note Defined:** The test note is prepared, outlining the items to test.
- [ ] **Environment Ready:** PR merged into the nightly build; nightly build notes updated (automatically from the pipeline after merge).
- [ ] **Status:** Ticket moved to the Testing column and assigned to QA/writers for review.
- [ ] **Test Data Prepared:** Relevant test data is prepared for testing the scenarios.
#### Definition of Done (DoD):
- [ ] **Test Executed:** All identified test items are executed on the different OSes, along with exploratory testing.
- [ ] **Defects Logged:** Any defects found during testing are resolved or appropriately logged (and approved for a future fix).
- [ ] **Test Sign-Off:** QA team provides sign-off indicating the completion of testing.
### Phase 4: Release
#### Definition of Ready (DoR):
- [ ] **Pre-release wait time:** Code changes to the pre-release version should be frozen for at least X (hrs/days) for regression testing purposes.
- Pre-release cut-off on Thursday morning for the team to run regression tests.
- Release to production (Stable) during working hours on Monday morning (if there are no blockers) or Tuesday morning.
- During the release cut-off, the nightly build will be paused to leave room for the pre-release build. The build version used for regression testing will be announced.
- [ ] **Pre-release testing:** A review of the implemented feature has been conducted, along with a regression test (checklist) by the team.
- Release checklist cloned from the template for each OS (with a HackMD link).
- New key test items from the new feature are added to the checklist.
- The 3 OSes are split across different team members for testing.
- [ ] **Document Updated:** The document is updated based on the review and feedback, covering any discrepancies or modifications needed for this release.
- [ ] **Reviewed by Stakeholders:** The new feature and the updated document are reviewed and approved by stakeholders. The document is in its final version, reflecting the implemented feature accurately.
### Notes (WIP)
- [ ] **Unit test:** Dev team + TDD + AI (demo from Louis)
- [ ] **Cortex.cpp e2e test:** Common models x hardware x important action
- [ ] **API collection run:** to run along with nightly build daily, for critical API validation
- [ ] **Automation run:** for regression testing purposes, to reduce manual testing effort on the same items each release across multiple OSes.
- [ ] The entire automation test collection
- [ ] Parallel execution per OS
- [ ] Daily run before the nightly build
- [ ] **Manual testing:**
#### Current:
Feature branch
↓
Nightly Build (Main)
↓
Nightly Build (Main)
↓
Stable (Main)

#### Git-flow:
Feature branch
↓
Nightly Build (Dev branch)
↓
Pre-release (Release Candidate Branch)
↓
Stable (Main)

#### Tester:
Dev self-test
↓
QA/PM daily review new features
↓
Regression test on 2 OS (Team)
↓
Prod verification on the remaining OS (Team)
# Plan & Strategy for QA
## 1. Unit Test: Dev Team
- **Objective:** Ensure individual units of code function correctly, detect bugs early, and maintain code quality.
- **Strategy:**
- **Unit test for functions:** @Louis, @Mark
- **Unit test for UI components:** @Faisal
- **Implementation:**
- Follow up on the demo session with Louis to showcase AI-driven test generation and optimization.
- Set up a CI/CD pipeline to run unit tests automatically on any PR changes (a minimal test sketch follows this section).
- **Metrics to Track:**
- Line coverage percentage.
- Number of bugs found in unit tests vs. later stages.
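
As a rough illustration of the function-level unit tests above, here is a minimal pytest sketch. It is only a sketch: `parse_model_id` is a hypothetical stand-in for whatever small, pure functions the dev team covers, not actual cortex/Jan code.

```python
# test_parse_model_id.py -- minimal pytest sketch; parse_model_id is a hypothetical
# stand-in for the kind of small helper functions covered by unit tests.
import pytest


def parse_model_id(model_id: str) -> tuple[str, str]:
    """Split a 'repo:tag' id; default the tag to 'latest' (illustrative behaviour only)."""
    if not model_id:
        raise ValueError("empty model id")
    repo, _, tag = model_id.partition(":")
    return repo, tag or "latest"


def test_parse_model_id_with_tag():
    assert parse_model_id("llama3:8b") == ("llama3", "8b")


def test_parse_model_id_without_tag_defaults_to_latest():
    assert parse_model_id("llama3") == ("llama3", "latest")


def test_parse_model_id_rejects_empty_string():
    with pytest.raises(ValueError):
        parse_model_id("")
```

Line coverage can then be collected in the same CI job with pytest-cov (`pytest --cov`), which feeds the coverage metric above.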
## 2. Cortex.cpp e2e Test: Common Models x Hardware x Important Action
- **Objective:** Validate cortex.cpp across different models, hardware, and key functionalities.
- **Strategy:**
- [ ] Continue developing Python end-to-end test cases that cover critical workflows across all supported hardware configurations.
- [x] Focus on testing common models and their actions to ensure cross-hardware compatibility.
- [x] Set up dedicated test environments to simulate different hardware conditions.
- **Implementation:**
- [x] Define a matrix of models, hardware, and critical actions (a sketch follows this section).
- [ ] Automate e2e test execution and report results regularly (Report portal).
- [ ] Run these tests in parallel to reduce execution time.
- **Metrics to Track:**
- Pass/fail rate of e2e tests; invest effort to resolve failures.
- Review and address user issues related to cortex.cpp; add more tests if needed.
- Time to execute tests.
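
To make the models x hardware x actions matrix concrete, here is a minimal sketch using pytest parametrization. The model names, actions, and the `cortex` command invocation are illustrative assumptions, not the real test plan.

```python
# e2e_matrix_test.py -- sketch of the models x actions matrix, run once per hardware/OS runner.
import itertools
import platform
import subprocess

import pytest

MODELS = ["tinyllama-1.1b", "llama3-8b"]      # placeholder "common models"
ACTIONS = ["pull", "run", "chat", "stop"]     # placeholder "important actions"
HARDWARE = platform.machine()                 # label of the runner executing this job


def run_cortex_action(model: str, action: str) -> subprocess.CompletedProcess:
    # Hypothetical wrapper; the actual CLI subcommands/flags depend on the cortex build under test.
    return subprocess.run(["cortex", action, model], capture_output=True, text=True)


@pytest.mark.parametrize("model,action", list(itertools.product(MODELS, ACTIONS)))
def test_model_action(model, action):
    result = run_cortex_action(model, action)
    assert result.returncode == 0, f"{action} failed for {model} on {HARDWARE}"
```

Running the same suite on each hardware-specific CI runner, and in parallel with pytest-xdist (`pytest -n auto`), would address both the matrix coverage and the execution-time metric.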
## 3. API Collection Run:
- **Strategy:**
- Daily run: Maintain a complete set of API tests, updated with new features.
- Run on PR: A smoke test set that covers critical APIs and targets key endpoints.
- **Implementation:**
- [ ] Use monitoring tools to track errors, performance, and response times.
- [ ] Automate the collection run (to be discussed) and integrate API tests into the CI/CD pipeline (a smoke-level sketch follows this section).
- **Metrics to Track:**
- API response time and success rate.
- Number of failures per build.
- Coverage of automation / total API.
- If an API test fails:
- Cortex: the dev team resolves the issue before merge.
- Cortex.cpp: cover more cases in cortex.cpp.
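
As a sketch of the PR-level smoke run, the snippet below checks a couple of critical endpoints for status and response time. The base URL and endpoint paths are assumptions for illustration, not the documented cortex API.

```python
# api_smoke_test.py -- minimal smoke check for critical endpoints (paths and port are assumptions).
import time

import requests

BASE_URL = "http://127.0.0.1:39281"    # assumed local API server address
CRITICAL_ENDPOINTS = [
    ("GET", "/healthz"),               # hypothetical health endpoint
    ("GET", "/v1/models"),             # hypothetical model-listing endpoint
]
MAX_RESPONSE_SECONDS = 2.0


def test_critical_endpoints_respond_quickly():
    for method, path in CRITICAL_ENDPOINTS:
        start = time.monotonic()
        resp = requests.request(method, BASE_URL + path, timeout=10)
        elapsed = time.monotonic() - start
        # Success rate and response time are the two metrics tracked above.
        assert resp.status_code == 200, f"{method} {path} returned {resp.status_code}"
        assert elapsed < MAX_RESPONSE_SECONDS, f"{method} {path} took {elapsed:.2f}s"
```

The full daily collection can stay in whatever runner the team agrees on (still to be discussed); this only illustrates the critical-endpoint gate and the response-time/success-rate metrics.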
## 4. Automation Run:
- **Strategy:**
- Daily run: Maintain a complete set of automated tests, updated with new features and bug fixes.
- Run on PR: A stable smoke test set covering critical functions.
- **Implementation:**
- [x] Parallel execution configured per OS (Windows, Linux, macOS, etc.).
- [ ] Focus on console log errors in the app; most UI failures are caused by API failures that show errors in the console log. Additionally, the dev team can review the source code for any try/catch errors that are not handled properly.
- [ ] Auto-update: implement a mechanism to check that the auto-update URL is still valid on each build.
- [ ] Responsive testing: run the same tests at different window sizes to make sure no icons overlap.
- [ ] E2e tests: for new features, we can start developing them once the UI PR is created; TDD does not help much with UI.
- [ ] Extensions migration + test data treasury: make sure the app does not cause issues after an update.
- [ ] Playwright trace viewer + screenshots at each step (failures only): easier to troubleshoot issues (a sketch follows this section).
- [ ] Monkey testing for the Jan app.
- **Metrics to Track:**
- Execution time of automation runs.
- Pass/fail rate of tests per OS.
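
Below is a minimal Playwright (Python) sketch of the console-error, responsive, and trace items in the checklist above. `APP_URL` and the viewport list are placeholders, and a real run against the desktop app would target the Electron entry point rather than a URL.

```python
# console_and_responsive_check.py -- sketch combining the console-error and responsive checks.
# Assumes the app UI is reachable at APP_URL (a placeholder); the Electron entry point would differ.
from playwright.sync_api import sync_playwright

APP_URL = "http://localhost:3000"                      # placeholder
VIEWPORTS = [(1920, 1080), (1366, 768), (1024, 768)]   # window sizes to exercise


def check_viewport(width: int, height: int) -> list[str]:
    errors: list[str] = []
    with sync_playwright() as p:
        browser = p.chromium.launch()
        context = browser.new_context(viewport={"width": width, "height": height})
        # Trace with screenshots so failures are easy to inspect in the trace viewer.
        context.tracing.start(screenshots=True, snapshots=True)
        page = context.new_page()
        # Collect console errors: most UI failures show up here when an API call fails.
        page.on("console", lambda msg: errors.append(msg.text) if msg.type == "error" else None)
        page.goto(APP_URL)
        page.wait_for_load_state("networkidle")
        context.tracing.stop(path=f"trace-{width}x{height}.zip")
        browser.close()
    return errors


if __name__ == "__main__":
    for w, h in VIEWPORTS:
        found = check_viewport(w, h)
        assert not found, f"Console errors at {w}x{h}: {found}"
```

Keeping traces only for failed runs, per the checklist, can be handled by deleting the trace file whenever no console errors were collected.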
## 5. Manual Testing
- **Objective:** Identify issues that automated tests may miss, such as usability and UI/UX flaws.
- **Strategy:**
- Define critical scenarios that require manual testing, especially those involving complex interactions.
- Schedule manual test cycles with regression test checklist, especially after major changes or new feature introductions.
- Focus on exploratory testing to uncover unexpected issues.
- **Implementation:**
- Devs self-test the developed features.
- Reviewers double-check the app via code review.
- Use checklists and test cases for consistency, but allow flexibility for exploratory testing.
- Document findings and ensure rapid feedback to the development team.
- **Metrics to Track:**
- Number of bugs found manually vs. through automation.
- Time spent on manual testing.
## Execution Schedule
- **On PR:**
- Automation run for a quick set of stable tests (see the smoke-marker sketch after this schedule).
- Unit tests for components/functions.
- API collection run and critical function validation.
- **Every 2 days:**
- API collection run and critical function validation.
- Automation run for the entire e2e set.
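
One way to realize the PR vs. scheduled split, assuming the automation suite is pytest-based: register a marker for the stable PR-gating tests. The `smoke` marker name and the sample test are conventions for illustration, not existing config.

```python
# Sketch only: the marker registration would live in conftest.py and the test in its own module.
import pytest


def pytest_configure(config):  # conftest.py
    config.addinivalue_line("markers", "smoke: stable tests that gate every PR")


@pytest.mark.smoke  # e.g. test_smoke_launch.py
def test_app_starts():
    # Placeholder check; the real test would launch the app and assert the main window appears.
    assert True
```

The PR pipeline would then run `pytest -m smoke`, while the scheduled job runs plain `pytest` for the entire collection.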
## Notes
- Review of unit test, API, and e2e test coverage.
- Manual testing for critical updates or post-major releases.
- Review and update the automation suite for new scenarios.