# Use Case 10 Approve a Review Catalog for use by the CFDE
# Use Case Testing form
**Reviewer**: Jose Sanchez
**OS (including version)**: Windows 10 (1909)
**Browser (including version)**: Google Chrome
**Use case**:
**Review type**: Selenium
**Role groups**: NIH CFDE Metabolomics Approvers
# Use Case Test
<details><summary>Instructions</summary>
<p>
I would like each use case checked by at least two people, preferably with a mix of browsers and OSes, so that we have a better chance of spotting potential bugs.
1. Choose a use case that you will validate
2. Copy the text from the next comment into a new document
3. Follow the use case, filling out the document as you go
4. If you encounter one of the Quick Tests, check that it is right and check it off. If you don't encounter it as part of your use case, leave it blank. If it doesn't work, add some text explaining the problem
5. When you are done with your use case, post your filled form as a comment in this thread
I recommend starting by looking at the Quick Tests section and seeing which ones will be part of your use case so you can check them as you go instead of backtracking at the end
</p>
</details>
## Use Case Description
**1. Evaluate the description.**
- Does this description make sense?
- The description makes sense.
- Ann receives notifications to review new catalog submissions
- Ann checks the submission by looking at dynamically generated plots.
- She can approve or reject submissions
- Does it sound like a useful thing to do?
    - Yes. Whenever a new data package is submitted to the CFDE Portal, it is good to have an assigned person verify that the data is sound and reflects the work of the respective DCC before it is accepted into the portal, where other researchers could rely on the metadata.
- Are there any corrections that should be made (spelling, grammar, etc)?
**2. Try to complete the steps as they are described for the persona in the use case.**
<details><summary>Instructions</summary>
<p>
For each step record:
- the specific action you took, e.g., I clicked on 'leg' in the 'anatomy' filter at [this web address]()
- Whether that action was possible/worked
- Whether the *results* of that action are as described
- and if they are not as described, how they differ
- Any other comments you have, or things you were surprised about. Be specific!
Copy the lines below as many times as needed for your use case
</p>
</details>
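The repeated Action checklist blocks used throughout this form can also be generated with a small helper instead of copy-pasted; a minimal sketch (the `action_block` function and its defaults are hypothetical, not part of the form):

```python
# Hypothetical helper for generating this form's repeated Action blocks,
# so testers don't have to copy-paste the template by hand.

def action_block(action: str = "", worked: bool = False,
                 as_expected: bool = False) -> str:
    """Render one Action block in the form's Markdown checklist style."""
    def box(checked: bool) -> str:
        return "[x]" if checked else "[ ]"
    return "\n".join([
        f"Action: {action}",
        f"- {box(worked)} Worked",
        f"- {box(as_expected)} Results as expected",
        "    - If not, why not:",
        "- Other Comments",
    ])

# Example: an empty template block, ready to fill in.
print(action_block())
```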
Action: Logged in to the CFDE Portal as a NIH Metabolomics Data Administrator
- [x] Worked
- [x] Results as expected
- If not, why not:
- Other Comments
Action: Clicked on the Data Review Tab at the Navigation Bar
- [x] Worked
- [x] Results as expected
- If not, why not:
- Other Comments
Clicking on the Data Review tab takes me to the Submitted Data Package page.
![](https://i.imgur.com/lAFU3kv.png)
Action: Clicked on `Browse Data`
- [x] Worked
- [x] Results as expected
- If not, why not:
-
- Other Comments
![](https://i.imgur.com/PW0Nwyg.jpg)
Action: Clicked on `Summary Charts`
- [x] Worked
- [ ] Results as expected
- If not, why not:
- Other Comments
An additional feature that may be helpful: let the Metabolomics Data Snapshot bar graph change based on whether you are asking for total subjects in a dataset, total biosamples, total files, etc. It is still good to at least know the total file count for the new submission. I also think it would be helpful to have a side-by-side comparison of the total data currently in the portal and the total data files in the most recent submission.
Seeing these differences would let the data reviewer judge at a glance whether the changes are what they expect, by comparing current and new file counts against the expected contents of the submission.
Action: Clicked on the `raw data` link, which takes me to a Globus authentication page
- [ ] Worked
- [ ] Results as expected
- If not, why not:
- Other Comments
    - Every time I try to log in with jsanchez1815@gmail.com I am returned to this page, because I don't have the right permissions.
Action: Clicked on the `Edit` icon to access the page that allows the DCC Admin to approve or reject the current data package submission: [Approve or Reject page](https://app-staging.nih-cfde.org/chaise/recordedit/#registry/CFDE:datapackage/RID=550?pcid=record&ppid=283422452dmg2piz2f182nsx&invalidate=8088185983908531)
- [ ] Worked
- [ ] Results as expected
- If not, why not:
- Other Comments
![](https://i.imgur.com/IAlSiIh.png)
Action:
- [ ] Worked
- [ ] Results as expected
- If not, why not:
- Other Comments
Action:
- [ ] Worked
- [ ] Results as expected
- If not, why not:
- Other Comments
Action:
- [ ] Worked
- [ ] Results as expected
- If not, why not:
- Other Comments
Action:
- [ ] Worked
- [ ] Results as expected
- If not, why not:
- Other Comments
## Tasks for this use case:
1. Based on the description you walked through, does this list of tasks make sense? If not, why not? Are there missing tasks? Unused tasks? Task descriptions that don't quite match the workflow? Be specific both about which tasks and their specific problems.
2. **OPTIONAL (if not already addressed above):**
Check whether each general task works, regardless of whether the specific instance described in the description works.
<details><summary>Instructions</summary>
<p>
For each task record:
- the specific action you took, e.g., I clicked on 'leg' in the 'anatomy' filter at [this web address]()
- note that tasks are generally broader than the description, so you likely will need to do more than one action to test it
- Whether that action was possible/worked, i.e. was it technically possible to do?
- Whether the *results* of that action are what you expect, i.e. did it 'work' in the way a user would want
- and if they are not as described, how they differ
Copy the lines below as many times as needed for your use case
</p>
</details>
Action:
- [ ] Worked
- [ ] Results as expected
- If not, why not:
- Other Comments
Action:
- [ ] Worked
- [ ] Results as expected
- If not, why not:
- Other Comments
## Requirements for this use case:
1. Based on the description you walked through and its tasks, does this list of requirements make sense? If not, why not? Are there things you needed but are not listed as requirements? Unused requirements? Requirement descriptions that don't quite match the workflow? Be specific both about which requirements and their specific problems.
2. **OPTIONAL (if not already addressed above):**
Check whether each requirement works, if possible, regardless of whether the specific instance described in the description works.
<details><summary>Instructions</summary>
<p>
For each requirement record:
- the specific action you took, e.g., I clicked on 'leg' in the 'anatomy' filter at [this web address]()
- note that requirements are very broad, so you may need to do more than one action to test it
- if you can't find a way to test the requirement, record that and why
- Whether that action was possible/worked, i.e. was it technically possible to do?
- Whether the *results* of that action are what you expect, i.e. did it 'work' in the way a user would want
- and if they are not as described, how they differ
</p>
</details>
Action:
- [ ] Was not testable
- [ ] Worked
- [ ] Results as expected
- If not, why not:
- Other Comments
Action:
- [ ] Was not testable
- [ ] Worked
- [ ] Results as expected
- If not, why not:
- Other Comments
# Overall
What difficulties did you encounter while completing your use case?
Did you see any spelling, grammar or similar mistakes on any resource you visited in completing your use case?
What other comments or questions do you have about your use case?
What other comments or questions do you have about any of the resources you visited?
What feedback do you have about this form/testing process?
# Quick Tests
Complete each test if it is encountered as part of your use case.
- If the test works/work is complete, check the box.
- If you don't encounter the test during your use case, leave it blank.
- If the test does not work/work is not complete, note the issue. Be specific!
[Link to QA screens for reference](https://drive.google.com/file/d/11-SVyGzTsKy5Ke8o6s_lFE6LCF_BrJ2a/view)
Home page
- [ ] Download button style now matches wireframe
- [ ] chart in upper right corner reflects data
- [ ] Color Palette is updated
Dashboard
- [ ] "Select Data view" box present to show which dashboard graphs are available
- [ ] Download button style now matches wireframe
- [ ] Timestamp for data missing
DCC Review
- [ ] Numbers have links
- [ ] Scroll bar in Data Review table
Registry
- [ ] [Spelling correct](https://github.com/nih-cfde/cfde-deriva/issues/131)
Navbar
- [ ] Bolded option in navbar when page is selected
- [ ] Log out button styles
- [ ] Locks next to Dashboard and Data Review links missing
- [ ] Color Palette
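When collating filled forms from multiple testers, the `- [x]` / `- [ ]` checkboxes above can be tallied mechanically; a minimal stdlib sketch (the `tally_quick_tests` function is a hypothetical helper, assuming the Markdown task-list convention used throughout this form):

```python
import re

def tally_quick_tests(markdown: str) -> dict:
    """Count checked vs. unchecked `- [x]` / `- [ ]` items in a filled form."""
    # Match task-list items at the start of a line; capture the box contents.
    boxes = re.findall(r"^- \[( |x)\]", markdown, flags=re.MULTILINE)
    checked = sum(1 for b in boxes if b == "x")
    return {"checked": checked, "unchecked": len(boxes) - checked}

sample = """\
- [x] Numbers have links
- [ ] Scroll bar in Data Review table
"""
print(tally_quick_tests(sample))
```

This could be run over each tester's posted comment to see at a glance which quick tests still lack coverage.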