Testing and Specification for MoFaCTS
===
---
# Testing
## Error Reporting Procedure (using the in-app report button)
* Check the current known errors first: https://github.com/memphis-iis/mofacts-ies/milestone/15
* In the report, include:
    * what you were doing (and the step number if using a procedure below)
    * what happened
    * what you expected
* Resume testing after reporting the error if it is minor enough
* If it is severe/important, reproduce it and submit a second error report
* Open a thread about it here in Slack: https://optimallearninglab.slack.com/archives/C024PQ9ABGD
## Student side testing (primarily learning sessions and student reports)
### Part A
1. Log in at the test link: https://staging.optimallearning.org/signInSouthwest?showTestLogins=true
2. Use a login with your last name and an identifier for the run # and/or the date, such as Pavlik3on530
3. Log in to the ppavlik@sw.tn.edu account, choose TESTCOURSE, and then choose Chapter 9 or 10
4. Proceed to available chapters
5. Choose the first one and select all items
6. Practice until you start seeing repetitions; make sure you get some right and some wrong. The best test is to practice like a real student making a genuine effort
7. Check the progress report to confirm that the readout matches what you completed. Is everything reported comprehensible?
8. Navigate to home
9. Close the browser
### Part B
1. Log in at the test link the same way as before: https://staging.optimallearning.org/signInSouthwest?showTestLogins=true
2. Use the same login
3. Proceed to available chapters
4. Choose the same chapter
5. Practice for another few trials
6. Navigate to home
7. Check the progress report
8. Navigate to home
9. Proceed to available chapters
10. Choose SR (speech recognition) and/or TTS (text-to-speech) for input and output on the home screen
11. Choose the same chapter
12. Practice for 5 trials
13. Navigate to home
14. Proceed to available chapters
15. Choose refutation and/or dialogue feedback on the home screen
16. Choose the same chapter again
17. Practice for 5 trials
### Part C
1. Log in at the test link the same way as before: https://staging.optimallearning.org/signInSouthwest?showTestLogins=true
2. Select the survey link
3. Complete the survey
## Experiment side testing (primarily new experiments)
1. Upload the TDF and stim file
2. Go through all the trials at the experiment link being tested
3. Check the data download to confirm the trials were recorded
## Teacher side testing (assignments and teacher progress reports)
1. Log in as a teacher with a teacher-level access account: https://staging.optimallearning.org/signIn
2. Create a class section
3. Assign the chapters to the section
4. Log in as a student tester at your teacher link: https://staging.optimallearning.org/signInSouthwest?showTestLogins=true
5. Select your ID, then select the class section and then the chapter you created
6. Do a few trials, check the progress report, and then go to home
7. Do a few trials
8. Repeat for the other chapter with edits
9. Log in as a teacher with a teacher-level access account: https://staging.optimallearning.org/signIn
10. Go to teacher reports and inspect the progress report for the class you created. Confirm that the practice is reported
11. Select the student and drill down to their individual report; look for obvious glitches and confirm the totals are the same as in the main group report
12. Confirm that if you set a time filter for before the time you practiced, the practice is NOT reported
13. Create records for a new student on the next day and again check the progress report with a time filter that shows the old student but not the new one
## Admin side testing (data download)
1. Download the data from all the tests above
2. Confirm the downloads: trials are in chronological sequence, and each trial shows the student response, student correctness, latencies of actions, problem and answer, hints, time, tdf used, and student login
---
# Specification
## Class Management
1. Class creation <!-- Notice all numbered lists are "1.", markdown auto numbers for us -->
As a [**teacher**](#teacherDef) I want to be able to create classes to group students and give them assignments <!-- first time term is encountered, link to a definition -->
1. <a id="932a6db8-dd1a-4120-aa75-a7f523f627ad"></a> Given a teacher, ***myteacher*** is on the classEdit page <!-- each leaf requirement has a unique guid --><!-- myteacher is a local variable, useful for referencing the same concept repeatedly in a req -->
And an existing class is not selected
And a class name, ***myclassname*** has been entered in the class name box
And 0 or more student usernames have been entered into the student login ids box
Then a class named ***myclassname*** is created; the owner is set to ***myteacher***, and the listed student usernames are associated with it.
---
Stipulations:
[Stip1.1](#Stip1.1) <!-- stipulations which pertain to more than 1 req are put at the end of that section and linked as so -->
[Stip1.2](#Stip1.2)
1. Class editing
As a **teacher** I want to be able to edit classes to change which students are associated with them
1. <a id="47f2ab4e-e1ad-4b4c-88e8-b490fa371d80"></a> Given a teacher, ***myteacher*** is on the classEdit page
And an existing class, ***myclassname*** is selected
Then the student usernames associated with ***myclassname*** are populated into the student login ids box
---
Stipulations:
[Stip1.1](#Stip1.1)
[Stip1.2](#Stip1.2)
<!-- if a task requires multiple things that need to happen/can be broken up, break it up and list in order with the absolute requirements for an action listed in acceptance criteria and the side effects https://en.wikipedia.org/wiki/Side_effect_(computer_science) put as final effects in the then clause of earlier reqs -->
1. <a id="5748fbfc-78e6-4b22-a7ea-410eb3bb6e6b"></a> Given a teacher, ***myteacher*** is on the classEdit page
And an existing class, ***myclassname*** is selected
And ***myteacher*** clicks save class
Then the student usernames associated with ***myclassname*** will be updated to reflect those in the student login ids box
---
Stipulations:
[Stip1.1](#Stip1.1)
[Stip1.2](#Stip1.2)
1. Class tdf assignment
As a **teacher** I want to be able to edit class assignments to change which tdfs are assigned to a group of students
1. <a id="f3e02563-fb22-4391-bd06-03c32fc08d14"></a> Given a teacher, ***myteacher*** is on the tdfAssignmentEdit page
And an existing class, ***myclassname*** is selected
Then the tdfs associated with ***myclassname*** are populated into the Selected Chapters box and the chapters available to be assigned are populated into the Available Chapters Box
---
Stipulations:
1. Chapters available to be assigned are those which are owned by the owner of the class, ***myteacher***, and were created in the current semester
<!-- stipulations that only apply to one requirement should be put inline with that requirement for ease of readability. If they get used by another, pull out and ref as already seen -->
1. <a id="9ca34559-cf4b-4226-befa-3069a5227b88"></a> Given a teacher, ***myteacher*** is on the tdfAssignmentEdit page
And an existing class, ***myclassname*** is selected
And ***myteacher*** clicks the save assignment button
Then ***myclassname*** will have its associated tdfs set to those in the Selected Chapters box
---
Stipulations:
<a id="Stip1.1">Stip1.1</a> Student usernames are not the same as student ids; we just label them that way to make it easier for users to understand
<a id="Stip1.2">Stip1.2</a> Student usernames are one per row, i.e. delimited by newlines in the student login ids box
---
## Content Module Setup
1. \<setspec> parameters
As a [**teacher**](#teacherDef) I want to be able to create a module in a [**tdf**](#tdfDef) that has certain features (qualitative and quantitative) for 1 or more subsequent units
1. Given a **teacher** designates a tdf file
* And \<setspec> is designated
| Fields | Default | Explanation |
|--------|---------|------------|
| \<name> |n/a |Short name
| \<lessonname> | n/a |Full name, punctuated as needed
| \<userselect> | False | True indicates the tdf should be displayed on the main profile page
| \<stimulusfile> | na | Filename for corresponding stimulus list for tdf
| \<lfparameter> | 1 | Set from 0 to 1; indicates the fraction of the response characters for string responses that must be correct. For example, if the response is feemure and the answer is femur, the edit distance is 2 and the max length of the 2 words is 7, so the score is 5/7 ≈ .714 correct, and the item will be marked wrong if the lfparameter is greater than .714. The logic is that the lfparameter is the % of the word you must get correct to be marked close enough.
| \<simTimeout> | integer >0 | How many milliseconds simulation takes per simulated test
| \<simCorrectProb> | range from 0 to 1 | Chance that each simulated trial is correctly responded to
| \<speechAPIKey> | n/a | Google SR API key
| \<audioInputEnabled> | False | Whether SR is available
| \<audioInputSensitivity> | | Setting for microphone gain
| \<speechIgnoreOutOfGrammarResponses> | True | Boolean, whether to ignore the response and force users to try again if we transcribe a response not within the answer set of a tdf while using speech to text. autostopTranscriptionAttemptLimit in the unit's delivery params controls how many times this is attempted.
| \<speechOutOfGrammarFeedback> | "Not a possible response." | Message to display if an answer not in the answer set is transcribed from speech recognition when speechIgnoreOutOfGrammarResponses is set to true
| \<enableAudioPromptAndFeedback> |False| Boolean to enable/disable text to speech
| \<audioPromptSpeakingRate> | 1 |Value from 0.1 to 2 for the speed of text to speech, relative to 1 (normal speed)
| \<textToSpeechAPIKey> |n/a | Google TTS API key
| \<shuffleclusters> |n/a | Allows shuffling within groups of clusters, specified as x-y ranges, each of which is shuffled as a unit and replaced in the sequence. Ranges may overlap, since the ranges are processed one by one
| \<experimentTarget> |n/a | Location of the no-login link for the tdf, used directly for experiments. Format is mofacts.optimallearning.org/experiment/experimentTarget
| \<swapclusters> | n/a | Allows shuffling of groups of clusters: the n groups, specified as non-overlapping ranges, are shuffled as whole units, and the shuffling of the n groups occurs simultaneously. E.g. 0-3 4-6 7-9 indicates 3 groups and can result in 6 possible orders: groups 1,2,3; 1,3,2; 2,1,3; 2,3,1; 3,1,2; 3,2,1
| \<randomizedDelivery> |n/a | The count of how many retention interval conditions the tdf contains. It requires a unit of the tdf to have a matching number of retention interval marks, given in a unit as follows: <br>\<deliveryparams><br> \<lockoutminutes># of minutes here\</lockoutminutes><br>\</deliveryparams>
| \<prestimulusDisplay> |n/a | String for the intertrial prompt before each trial (duration specified in delivery params)
Then the \<units> will be provided with these parameters
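For orientation, here is a minimal \<setspec> sketch assembled from the table above; the tag values are hypothetical placeholders, not taken from a real tdf, and any new tdf should be checked against an existing working example:
```xml
<setspec>
    <!-- Hypothetical illustration only; values are placeholders -->
    <name>Ch9Demo</name>
    <lessonname>Chapter 9: The Skeletal System</lessonname>
    <userselect>true</userselect>
    <stimulusfile>chapter9demo.json</stimulusfile>
    <!-- 0.7 means roughly 70% of the answer's characters must match (edit-distance based) -->
    <lfparameter>0.7</lfparameter>
    <!-- shuffle within the first 7 clusters and within the next 6, each range as its own unit -->
    <shuffleclusters>0-6 7-12</shuffleclusters>
</setspec>
```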
1. \<unit> parameters
As a [**teacher**](#teacherDef) I want to be able to create a unit in a [**tdf**](#tdfDef) of content where the unit has certain features (qualitative and quantitative).
1. Given a **teacher** designates a unit \<unit> of content in a tdf file
* And the \<unit> is designated with values
| Fields | Default | Explanation |
|--------|---------|------------|
| \<unitname> | n/a | For tracking of data
| \<unitinstructions> |n/a | Displayed with continue button
| \<buttonorder> | fixed | Fixed or random order of the buttons for all trials for this unit (avoids hardcoding in stimuli)
| \<deliveryparams> | n/a | Set of values described below related to content delivery
| \<buttontrial> | False | Whether the trials are displayed on the button interface if it is a learning session
| \<assessmentsession> | n/a | Set of values describing the unit if it is a designed pattern of trials (not optimization); this is one of the two current fundamental types of units.
| \<learningsession> |n/a | Set of values describing control by a selection algorithm, the other fundamental type of unit.
| \<buttonOptions> |n/a | If \<buttontrial> is true, this is a comma-delimited list of possible button options
| \<instructionminseconds> | 0 | This is the minimum time the student must view the instructions (to better ensure reading)
| \<instructionmaxseconds> | 0 (which implies no maximum) | This is the maximum time the student may view the instructions (to standardize instruction)
| \<turkemailsubject> | n/a | Subject heading for the Amazon Turk message that reminds students to come back to practice the subsequent unit later.
| \<turkemail> |n/a | Contents of email
| \<turkbonus> |n/a | Amount of the Amazon Turk bonus triggered if unit is reached
| \<picture> |n/a | image presented with the unit instructions
Then the **trials** will be provided with these parameters
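A corresponding \<unit> sketch, again with hypothetical placeholder values, showing how the unit-level tags above wrap the \<deliveryparams> and the session type described in the following requirements:
```xml
<unit>
    <!-- Hypothetical illustration only; values are placeholders -->
    <unitname>Chapter 9 Drill</unitname>
    <unitinstructions>Type the answer for each prompt, then press ENTER.</unitinstructions>
    <deliveryparams>
        <!-- trial-level settings; see the trial delivery parameters table below -->
        <drill>30000</drill>
        <correctprompt>750</correctprompt>
    </deliveryparams>
    <learningsession>
        <!-- one of the two fundamental unit types; see Learning unit creation below -->
        <clusterlist>0-6</clusterlist>
    </learningsession>
</unit>
```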
1. Trial delivery \<unit> parameters
As a [**teacher**](#teacherDef) I want to be able to create a unit in a [**tdf**](#tdfDef) of content where each trial has certain features (qualitative and quantitative).
1. Given a **teacher** designates a unit \<unit> of content in a tdf file with either a \<learningsession> or an \<assessmentsession>
* And \<deliveryparams> is designated with values
| Fields | Default | Explanation |
|--------|---------|------------|
| \<showhistory> | false| enables scrolling history during practice
| \<forceCorrection> | false| forces the student to type the correct response after feedback
| \<scoringEnabled> | isLearningSession| enables or disables scoring in \<learningsession>
| \<purestudy> | 0| time in ms the system presents the [**item**](#itemDef) when it is a study only trial
| \<initialview> | 0| see TwoPartStim.json and TwoPartOptim.xml, allows 2 stimuli parts for an [**item**](#itemDef), the first of which is shown for initialview ms
| \<drill> | 0| time in ms the system waits before timeout, resets for each keypress to prevent timeouts during responding
| \<reviewstudy> | 0| time in ms the system presents the [**item**](#itemDef) after failure in a drill
| \<correctprompt> | 0| time in ms the system presents the "you got it right" message for the correct response, this is the delay after a correct response before the next trial begins
| \<skipstudy> | false| if true study trials can be skipped by pressing the spacebar
| \<lockoutminutes> | 0| the number of minutes that must elapse before the system allows the student to proceed, at which point the \<turkemail> is triggered if present; may occur multiple times as triggered by the \<randomizedDelivery> option in \<setspec>
| \<fontsize> | 3| CSS font size (second part of a tag that is one of h1-h6)
| \<numButtonListImageColumns> | 2| if using buttonimages, this is how many columns to display
| \<correctscore> | 1| amount score increases for correct response
| \<incorrectscore> | 0| amount score decreases for incorrect response
| \<practiceseconds> | 0| the duration of practice for a \<learningsession>, the time after instructions during the unit
| \<autostopTimeoutThreshold> | 0| number of sequential timed-out trials that triggers a return to the module select screen
| \<autostopTranscriptionAttemptLimit> | 3| try to transcribe a response this many times before giving up and forcing a default answer (first button in button trial or FORCEDINCORRECT for text input)
| \<timeuntilaudio> | 0| pause before audio plays before study, drill, or test trials
| \<timeuntilaudiofeedback> | 0| pause before feedback (review study) audio plays
| \<prestimulusdisplaytime> | 0| duration of the \<prestimulusDisplay> that is defined in the \<setspec> in ms
| \<forcecorrectprompt> | ''| if \<forceCorrection>==true then this is the prompt given to the student
| \<forcecorrecttimeout> | 0| If \<forceCorrection>==true then this is the duration before timeout (works like drill timer)
| \<studyFirst> | false| if in \<learningsession> give a study trial instead of drill the first time for each [**item**](#itemDef)
| \<checkOtherAnswers> | false| when true, the system also checks whether the response exactly matches another in-set response (needed when responses in the set are similar to each other) and marks it incorrect even if it is a close match to the original (within the lfparameter threshold)
| \<feedbackType> | simple | Simple (feedback that presents just the correct response the student was expected to type) is available for all practice; refutational (feedback that disambiguates the student's incorrect response from the correct response, pregenerated with Andrew Olney's plugin) and dialogue (a short chat about the confusion shown by the student's incorrect response, also from Olney subsystems) are possible for cloze.
| \<allowFeedbackTypeSelect> | false| This allows users to set the feedbackType above by selecting an option on the profile page (note: they could still choose one, but it would be ignored if this is not set to true)
| \<falseAnswerLimit> | 9999999 | The number of incorrect responses provided for each button trial from the incorrectResponses array in [**item**](#itemDef)s or buttonOptions in the unit declaration. This can be used to select a subset of the button options.
Then the **trials** will be provided with these parameters
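A \<deliveryparams> sketch pulling together several of the timing fields above; all values are hypothetical and times are in ms as described:
```xml
<deliveryparams>
    <!-- Hypothetical illustration only; values are placeholders -->
    <drill>30000</drill>                <!-- response timeout, reset on each keypress -->
    <purestudy>8000</purestudy>         <!-- study-only trials shown for 8 s -->
    <reviewstudy>6000</reviewstudy>     <!-- review shown for 6 s after a failed drill -->
    <correctprompt>750</correctprompt>  <!-- "you got it right" message shown for 0.75 s -->
    <forceCorrection>true</forceCorrection>
    <forcecorrectprompt>Please type the correct answer to continue.</forcecorrectprompt>
    <fontsize>3</fontsize>
</deliveryparams>
```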
1. Learning unit creation
As a [**teacher**](#teacherDef) I want to be able to create a unit in a [**tdf**](#tdfDef) of the type \<learningsession>
1. <a id="7d2f91c4-3b6a-4e8d-a5c0-1f9e6b2d8a37"></a> Given a teacher designates a unit \<unit> of content in a tdf file with the \<learningsession> tag
* And the following required tags are specified for the \<unit>:
* \<deliveryparams>
* And the following required tags are specified for the \<learningsession>:
* \<clusterlist> this is a consecutive list of x-y pairs indicating the sequential chunks of clusters (all stimuli in each cluster are always used in learning sessions), e.g. 0-6 12-17 would indicate the first 7 items, followed by items 13 through 18. The example 12-17 0-6 is invalid, since it is nonsequential
* \<unitMode> this is one of several possible stimulus selection algorithms that specifies a method to select the next item to display for the learning session. threshold ceiling, distance, highest, and unspecified (default) are current options
* \<calculateProbability> this is a javascript code block that may define functions, but must ultimately return a value, p (typically a probability), that may be used in the selection algorithms. The code block will have access to a variety of state and history information for the user. The rather voluminous list of variables available is listed in the unitEngine.js file in the calculateSingleProb function.
* And the following optional tags are specified for \<unit>:
* \<buttonorder>
* \<buttonOptions>
* And the following optional tags are specified for the \<learningsession>:
* \<displayminseconds> 0 is default (which implies no minimum). This is the minimum time the student must use the learning session before they may skip to the next unit by pressing continue (to ensure some practice)
* \<displaymaxseconds> 0 is default (which implies no maximum). This is the maximum time the student may use the practice (to standardize practice amount for the unit)
Then the **tdf** \<learningsession> unit will be produced if a **student** uses the **tdf** (presumably made available by a **teacher**) long enough for the \<learningsession> unit to occur in the ordered sequence of units for the tdf.
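A \<learningsession> sketch covering the required tags above; the values are hypothetical, and the \<calculateProbability> body is a placeholder only (the real variables available to it are listed in unitEngine.js, calculateSingleProb):
```xml
<learningsession>
    <!-- Hypothetical illustration only -->
    <clusterlist>0-6 12-17</clusterlist>  <!-- ranges must remain in ascending order -->
    <unitMode>distance</unitMode>
    <calculateProbability>
        // JavaScript block; must ultimately return a value p for the selection algorithm.
        // This body is a placeholder, not a recommended model.
        var p = 0.5;
        return p;
    </calculateProbability>
</learningsession>
```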
1. Assessment, factorials designs, and survey unit creation
As a [**teacher**](#teacherDef) I want to be able to create a unit in a [**tdf**](#tdfDef) of the type \<assessmentsession>
1. Given a teacher designates a unit \<unit> of content in a tdf file with the \<assessmentsession> tag
* And the following required tags are specified for \<unit>:
* \<deliveryparams>
* And the following required tags are specified for \<assessmentsession>:
* \<conditiontemplatesbygroup> see below
* \<initialpositions> a list of the positions of the stimuli repetitions after the templating is applied. The key information here is the start location of each of the template repetitions. The template itself allows inference of the other positions, but the software requires them (the list serves as a logic checksum for the schedule)
* \<randomizegroups> not used in recent memory (could be removed)
* \<clusterlist> this is a consecutive list of x-y pairs indicating the sequential chunks of clusters, e.g. 0-6 12-17 would indicate the first 7 items, followed by items 13 through 18. The example 12-17 0-6 is invalid, since it is nonsequential
* \<permutefinalresult> this has an identical structure to \<clusterlist>, but it means that as a last step in creating the sequence (which is saved to the unitstate for that tdf for that user) each of these regions is randomly ordered individually, then the regions (chunks) are pasted back in order. Again, the chunk sequence can't be reordered; only the sequence within chunks is randomized to complete the schedule for the unit
* \<assignrandomclusters> this causes the assessment session, as a first step in making the schedule, to rerandomize the clusters (typically they are first randomized globally in the setspec, which makes them random during the initial unit). This is needed to do random assignment for subsequent units because subsequent units often need to be randomized relative to initial units. For example, in a pretest, learning, posttest design, you might need 3 randomizations of the order
* And the following required tags are specified for \<conditiontemplatesbygroup>:
* \<groupnames> letters to indicate the names of conditions of practice applying to sequences of item or cluster practices
* \<clustersrepeated> this is how many times each cluster is repeated; it is the template length for each of the groupnames
* \<templatesrepeated> this is how many templates there are for each groupname (each containing )
* \<groups> this is a list for each group of a,b,c,d (4 csv values) for each trial within each repetition of each groupname. The first value is the stimulus item within the cluster (0 indexed), with r indicating a random index. The second value is the display mode, e.g. f is a standard text response, b is a button trial. The third value is t, d, or s for a test without feedback, a test with feedback, or feedback only. The 4th value is the 0-indexed location of this trial within the "template". For example, 2,f,d,0 2,f,d,18 2,f,d,36 indicates 3 fill-in trials for the 3rd stimulus in the cluster, with the second trial 18 trials after the first and the third 18 trials after the second
* And the following optional tags are specified for \<unit>:
* \<buttonorder>
* \<buttonOptions>
* And the following optional tags are specified for \<assessmentsession>:
* \<randomchoices> the random index mentioned in \<groups> above
Then the **tdf** \<assessmentsession> unit will be produced if a **student** uses the **tdf** (presumably made available by a **teacher**) long enough for the \<assessmentsession> unit to occur in the ordered sequence of units for the tdf.
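A structural sketch of an \<assessmentsession>, inferred from the tag descriptions above; all values are hypothetical, \<randomizegroups>, \<permutefinalresult>, and \<assignrandomclusters> are omitted for brevity, and the exact nesting and value formats should be verified against an existing working tdf:
```xml
<assessmentsession>
    <!-- Hypothetical illustration only; verify against a working tdf -->
    <clusterlist>0-6</clusterlist>
    <conditiontemplatesbygroup>
        <groupnames>A</groupnames>
        <clustersrepeated>3</clustersrepeated>
        <templatesrepeated>1</templatesrepeated>
        <!-- each entry is stimulusIndex,displayMode,trialType,templatePosition -->
        <groups>2,f,d,0 2,f,d,18 2,f,d,36</groups>
    </conditiontemplatesbygroup>
    <!-- placeholder start positions of the template repetitions (the schedule checksum) -->
    <initialpositions>0 18 36</initialpositions>
</assessmentsession>
```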
---
## Data Output Module Setup
1. Output of student data in DataShop format
As a [**teacher**](#teacherDef) I want to be able to output data for a tdf used by students.
1. Given a **teacher** navigates to the data download page and clicks on the tdf name
* And there exists prior data for that tdf
Then the tab-delimited txt data file will be provided with these headers and defaults. * indicates fields that are output for instruction screen trials and instruction units, while assessment session and learning session trials (after the instructions) produce the whole list of values.
"CF" with a parenthetical name is the standardized method for adding new fields not already in the DataShop format, but existing fields should be used in preference; consult with Pavlik on a case-by-case basis. DataShop format is specified here: https://datashop.memphis.edu/help?page=importFormatTd. In the Default column, d(x, y) appears to denote logging the value x with y as the fallback default when x is missing.
| Column Header | Default | Example Value | Explanation |
|---------------|---------|---------------|-------------|
|* **Anon Student Id**|na|imrryr@gmail.com |Student login id
|* **Session ID**|integer|4|Integer counting which use of the tdf for the student (4 is the 4th time a student uses the tdf)
|* **Condition Namea**|na|xyz.json |Filename for tdf
|* **Condition Typea**|'tdf file' |'tdf file' |DataShop required
|* **Condition Nameb**|0|2 |Integer indicating long-term retention condition for this student
|* **Condition Typeb**|'xcondition'| |DataShop required
|**Condition Namec**|d(schedCondition, '')| |Assessment session provides data on the \<assessmentsession> \<unit> condition, i.e. group and template number
|**Condition Typec**|'schedule condition'| |DataShop required
|**Condition Named**|d(lasta.guiSource, '')| |Mode of input for the trial (button, keypress, timeout)
|**Condition Typed**|'how answered'| |DataShop required
|* **Level (Unit)**|unitNum| |Integer sequence value for the unit
|* **Level (Unitname)**|d(unitName, '')| |Proper name for the unit
|**Problem Name**| | |Text of [**item**](#itemDef) or filename of [**item**](#itemDef)
|**Step Name**|stepName| |Problem Name prepended with the count for the [**item**](#itemDef) for the student
|* **Time**|d(lastq.clientSideTimeStamp, 0)| |Time that a trial starts as measured on the client
|**Selection**|''| |DataShop required
|**Action**|''| |DataShop required
|**Input**|d(lasta.answer, '')| |What the student input, for matching with CF (Correct Answer) below to check correctness
|**Outcome**|d(outcome, null), //answerCorrect recoded as CORRECT or INCORRECT| |Used by DataShop for scoring; HINT is also allowable, but not implemented
|**Student Response Type**|isStudy ? "HINT_REQUEST" : "ATTEMPT", // where is ttype set?| |Used by DataShop for scoring
|**Student Response Subtype**|d(lasta.qtype, '')| |DataShop required
|**Tutor Response Type**|isStudy ? "HINT_MSG" : "RESULT", // where is ttype set?| |Used by DataShop for scoring
|**Tutor Response Subtype**|''| |DataShop required
|**KC (Default)**| | |The [**item**](#itemDef) KC, corresponds to verbatim repetitions
|**KC Category(Default)**|''| |DataShop required
|**KC (Cluster)**|kcCluster| |The grouping KC, used by models to indicate related [**item**](#itemDef)s
|**KC Category(Cluster)**|''| |DataShop required
|**CF (GUI Source)** | d(lasta.guiSource,'')| |Seems redundant with Condition Named???
|**CF (Audio Input Enabled)** | lasta.audioInputEnabled| |Was the student in SR mode
|**CF (Audio Output Enabled)** | lasta.audioOutputEnabled| |Was the student in TTS mode
|**CF (Display Order)**|d(lastq.questionIndex, -1)| |Order of the trials within a unit
|**CF (Stim File Index)**|d(lastq.clusterIndex, -1)| |The integer value of the cluster for the [**item**](#itemDef)
|**CF (Set Shuffled Index)**|d(lastq.shufIndex, d(lastq.clusterIndex, -1)), //why?| |Can't figure out why this is needed
|**CF (Alternate Display Index)**|d(lastq.alternateDisplayIndex, -1)| |Index of the alternate display selected from within the same stim for presentation (used for work with paraphrases where the content is equivalent)
|**CF (Stimulus Version)**|whichStim| |Which [**item**](#itemDef) of a cluster is displayed
|**CF (Correct Answer)**|correctAnswer| |If a drill or test, this is the correct answer to the [**item**](#itemDef)
|**CF (Correct Answer Syllables)**|currentAnswerSyllablesArray| |For text responses, this is the response segmented into syllables, comma delimited
|**CF (Correct Answer Syllables Count)**|currentAnswerSyllableCount| |For text responses, this is the count of syllables in the response
|**CF (Display Syllable Indices)**|currentAnswerSyllableIndices| |For text responses, these are the indexes of syllables given as hints
|**CF (Overlearning)**|d(lastq.showOverlearningText, false)| |For thresholdCieling and distance \<learningsession>s this indicates the student is practicing with all [**item**](#itemDef)s above the criterion for selection
|**CF (Response Time)**|d(lasta.clientSideTimeStamp, 0)| |The time corresponding to when CF (End Latency) is recorded
|**CF (Start Latency)**| history.stimulusduration| |How long it takes from the start of the trial until the student begins typing a response
|**CF (Response Latency)**|history.responseduration| |How long it takes from when a student begins typing a response to when they finish or hit ENTER
|* **CF (Review Latency)**|history.feedbackduration| |How long they spent on the review opportunity, study trial, study screen, or study unit
|**CF (Review Entry)**|d(lasta.forceCorrectFeedback, '')| |Feedback provided from \<forceCorrection> when turned on
|**CF (Button Order)**|d(lasta.buttonOrder, '')| |Order of buttons displayed to student for button interfaces
|**CF (Note)**|d(note, '')| |Error logging notes
|* **Feedback Text**|| |The text or filename (e.g. image or sound location) of the feedback
---
## Appendix A - Term definitions
<a id="teacherDef">Teacher - A user who has been assigned the teacher role. Includes experimenters.</a>
<a id="studentDef">Student - A user who has been assigned the student role.</a>
---
## Appendix C - Notes
<!-- AT: html tags are interpreted so you have to escape them to display them literally with \ prepended as below; you may want to open the atom markdown preview side-by-side to make sure it displays as you desire -->
<!-- AT: The first req seems good. The one below needs a little work. It would seem from the userStory that it * \<s about the unitMode parameter but you end up specifying how to make a learningsession unit. Assuming you meant the former (which is a big assumption on my part and I make just to be illustrative now), clusterlist and calculateProbability aren't, strictly speaking, relevant to unitMode I believe. Also the end result would be the effect of the unitMode tag, i.e. changing the way the probability is calculated to select the next [**item**](#itemDef) for a trial. I would start by defining the necessary parts of a learningsession, then specify any optional parts the are often used and their effects. For the optional parts you would then refer back to the necessary tags as a prerequisite (using the GUID, which for now you can just make up random strings and we'll make it real GUIDs later) and any optional tags required for that tag specifically. -->
<!-- lots of things are not yet done. try to tell me about any explicit errors (not omissions as much, except perhaps to just list what you want added next). How high priority are the definitions? Many many things could be defined-->
<!-- how should I specify xml fields, should they be defined? where? -->
<!-- AT: For now just make a list of all required and all optional tags/values for tags and I'll pick out the important ones to delve into to save you specifying all of them. In general for this cycle try to aim for breadth before depth and I'll try to help guide where reqs are most useful to optimize your time usage. -->
<!-- seems guid can be added later?-->
<!-- AT: correct -->
<!-- are the stipulation numeric refs sections going to update correctly? Not sure how that should work since main sections are not numeric now. See below. Also what needs to be stipulated here?-->
<!-- AT: the stipulation refs won't update automatically, they have to be copied. The automatic numbering of the 1. 's would change every time we move a stipulation so we have to manually specify them. Strictly speaking they just need unique identifiers, not sequential numbers, but numbers seemed easier to talk about. -->
<!-- AT: I can't think of any stipulations needed yet as I don't think we're far enough into the specifics to need them. -->
GUID generator: https://richardkundl.github.io/shortguid/ (Generate GUID, not short GUID)
4 spaces per indent/tab
bolditalics for variable refs ***local variable/ref***
Special term link [**Term**](#anchorId1)
Special term definition (should be in an appendix) <a id="anchorId1">Term - Definition</a>
Terms linked/defined at first ref only, bolded after first ref in a section
Tentative Requirement Format
---
User story - story format of what should happen, more abstract/like reqs of past; As a ___ I want ____ to happen (optional: when _____ )
Acceptance Criteria (GIVEN...AND...THEN)
Stipulations of acceptance criteria - cross cutting definitions or caveats (may seem like technical details level of abstraction)
---
User flows? List of req guids with general flow described?