# Network Dynamics in Directed-Forgetting experiments
## Experiment Structure
The experiment consisted of 8 _study-test_ blocks and one _localizer_ block. In each _study-test_ block, participants viewed a central fixation cross for 3 seconds, followed by a 3-second delay. They then studied a 16-word list (list A), followed by an on-screen memory cue instructing them to either forget or remember the list A items. Participants then studied a second 16-word list (list B). Finally, an on-screen instruction told participants to recall either list A or list B, and they were given 1 minute to recall the words. Participants were told that a forget instruction guaranteed they would be asked to recall list B on that block. Despite this, their memory for list A was tested on the final forget block. The last block was always a forget block, and there were 4 forget and 4 remember blocks in total.
### Experiment Parameters
- Each TR corresponds to a 2-s interval (according to the paper, the repetition time is _2000 ms_).
- Number of (valid) subjects: 23
- Shape of regressor matrix: (14, 1653)
- Shape of each `Brain_Data` object extracted from the subjects' MRI data: (1653, 238955)
- Each `Brain_Data` object has the following dimensions after applying the ROI mask: (1653, 1000)
### Timing for different events:
- Participants viewed a central fixation cross for 3 seconds, followed by a 3-second delay, in each block.
- The remember/forget cue appeared right after the end of list A.
- Each list word appeared on screen for 3 seconds, and the words were separated by 3 seconds.
- In the 3-second delay between words, participants viewed three randomly chosen images of outdoor scenes (each image lasted one second on the screen). **QUESTION: Are there scenes presented after the end of the last word?**
- The cue is presented 3 seconds after the end of the last word.
- List B's words were not separated by scenes. Each word was displayed for 6 seconds.
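Since each TR spans 2 seconds, the event timings above can be mapped onto TR indices by integer division. A minimal sketch (the onset values below are illustrative, not taken from the real timing files):

```python
# Map event onsets in seconds to TR indices, assuming the 2000 ms
# repetition time stated above. Onsets here are illustrative only.
TR_SECONDS = 2.0

def onset_to_tr(onset_s: float) -> int:
    """Return the index of the TR containing the given onset time."""
    return int(onset_s // TR_SECONDS)

# e.g. fixation cross at 0 s, then events at 6 s, 9 s, and 12 s
onsets = [0.0, 6.0, 9.0, 12.0]
tr_indices = [onset_to_tr(t) for t in onsets]
```

Note that an onset falling mid-TR (e.g. 9 s) maps to the TR it falls inside, which matters when aligning word and cue events to the regressor timeline.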
## Requirements Spec
The Network Dynamics project is a notebook that leverages behavioral & fMRI data collected by __Manning et al. 2016__ to study the dynamics of global connectivity in the brain during directed-forgetting experiments.
### Major Functionality
1. Manage downloading and extracting files (mainly the fMRI data).
2. Pre-process the fMRI data:
- Distortion Correction
- Motion Correction
- Slice Timing
    - Coregistration
- Spatial Normalization
- Smoothing
3. Load behavioral and regressor data, excluding participants with incompatible data based on Manning et al., 2016:
- Exclude participant `072413_DFFR_0` due to improperly captured sync pulses causing timing errors.
- Exclude participant `112313_DFFR_0` due to ceiling-level performance eliminating measurable forgetting effects.
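The exclusion step can be sketched as a simple filter over the subject list. The two excluded IDs come from the spec above; the third ID below is a hypothetical placeholder for a valid subject:

```python
# Subjects excluded per Manning et al., 2016 (see reasons above).
EXCLUDED = {"072413_DFFR_0", "112313_DFFR_0"}

# "010114_DFFR_0" is a placeholder ID standing in for a valid subject.
all_subjects = ["072413_DFFR_0", "112313_DFFR_0", "010114_DFFR_0"]
valid_subjects = [s for s in all_subjects if s not in EXCLUDED]
```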
4. Display the regressors for any desired subject from the study.
5. Display the serial position curves with appropriate distinction between lists recalled after a forget cue and lists recalled after a remember cue.
6. Use the `Schaefer2018, 17 Networks, 1000 Parcels` parcellation as the atlas for our experiment.
7. Build an index table that organizes voxels in the following format: `Parcel_ID x y z study hemisphere network code`, where:
- `Parcel_ID` is the ID of the parcel that this voxel belongs to. There are 1000 distinct parcel ID's.
- `x y z` coordinates of the voxel.
    - `study` the name of the study this parcellation is from (we will use `17Networks` as the name for our index table).
- `hemisphere`, is the hemisphere to which this voxel belongs. Takes one of two values `L` or `R`.
- `network`, the name of the network this voxel belongs to.
    - `code`, the code of the network. There are 17 distinct values (from 1 to 17), one for each distinct network.
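One way to populate these columns is to parse the Schaefer-style parcel labels, which follow a `<study>_<hemisphere>_<network>_...` naming convention. A hedged sketch (the example label, coordinates, and the network-to-code assignment order are illustrative):

```python
# Parse a Schaefer-style label into the index-table columns described
# above. The label and x/y/z values are illustrative; `network_codes`
# assigns codes 1..17 in order of first appearance, an assumption.
def parse_label(parcel_id, label, x, y, z, network_codes):
    study, hemi, network = label.split("_")[:3]
    hemi = "L" if hemi == "LH" else "R"
    code = network_codes.setdefault(network, len(network_codes) + 1)
    return {"Parcel_ID": parcel_id, "x": x, "y": y, "z": z,
            "study": study, "hemisphere": hemi,
            "network": network, "code": code}

codes = {}
row = parse_label(1, "17Networks_LH_VisCent_ExStr_1", -24, -53, -9, codes)
```

Running this over all 1000 parcel labels would yield one row per parcel; joining against the voxel-to-parcel assignment then gives the per-voxel table described above.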
8. Visualize the brain networks, using a distinct color for each network.
9. Extract binary mask from the parcellation study, dividing the brain into 1000 parcels.
10. Calculate average intensity across each ROI (parcel), and return a new array of size `(1653, 1000)` that tracks the average intensity of each parcel across all the time repetitions in the experiment.
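Step 10 can be sketched with plain NumPy, assuming a `(n_TRs, n_voxels)` data array and a per-voxel parcel-label vector. Shapes below are scaled-down stand-ins for the real `(1653, 238955)` data and 1000 parcels:

```python
import numpy as np

# Average voxel time series within each parcel -> (n_TRs, n_parcels).
# Parcel IDs are assumed to run 1..n_parcels, matching the atlas.
def parcel_means(data: np.ndarray, labels: np.ndarray, n_parcels: int) -> np.ndarray:
    out = np.zeros((data.shape[0], n_parcels))
    for p in range(1, n_parcels + 1):
        out[:, p - 1] = data[:, labels == p].mean(axis=1)
    return out

rng = np.random.default_rng(0)
data = rng.random((10, 50))               # stand-in for (1653, 238955)
labels = np.repeat(np.arange(1, 6), 10)   # 5 parcels of 10 voxels each
roi = parcel_means(data, labels, 5)       # -> (10, 5)
```

The loop over parcels is deliberately simple; a vectorized alternative (e.g. a label-indicator matrix multiply) would be faster at full scale.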
11. Extract cue times for each subject, and split them into cue times for `forget_cue` and `remember_cue` (stored in a hashtable).
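Given the binary regressor matrix described in the assumptions below, cue times fall out of the nonzero entries of the cue regressor rows. Which row indexes the forget vs. remember cue regressor is an assumption here (rows 0 and 1 are placeholders):

```python
import numpy as np

# Pull cue onsets (TR indices) out of binary regressor rows.
# Row indices 0 and 1 are hypothetical placeholders for the real
# forget-cue and remember-cue regressor rows.
def cue_times(regs: np.ndarray, forget_row: int = 0, remember_row: int = 1) -> dict:
    return {
        "forget_cue": np.flatnonzero(regs[forget_row]).tolist(),
        "remember_cue": np.flatnonzero(regs[remember_row]).tolist(),
    }

regs = np.zeros((14, 20), dtype=int)  # toy stand-in for (14, 1653)
regs[0, [3, 12]] = 1                  # forget cues at TRs 3 and 12
regs[1, [7]] = 1                      # remember cue at TR 7
cues = cue_times(regs)
```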
12. For each subject, build a dictionary of the following format: `{cue_type: averaged_brain_activity}`, where:
- `cue_type` is either `forget` or `remember`
    - `averaged_brain_activity` is calculated as follows: for each event time belonging to `cue_type`, take an interval of fixed size around that event, then average the brain activity at each TR across all the intervals. The resulting array has shape `(time_interval_size, #of_parcels)`.
- Eventually, we end up with a dictionary for each subject that records the average brain activity for each `cue_type` for that subject.
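The per-cue-type averaging in step 12 can be sketched as follows, assuming an ROI array of shape `(n_TRs, n_parcels)` and windows anchored at each cue onset (the window size and toy shapes are illustrative):

```python
import numpy as np

# For each cue type, average parcel activity over fixed-size windows
# starting at each cue onset. Windows running past the scan end are
# dropped, an assumption about edge handling.
def averaged_brain_activity(roi: np.ndarray, cue_dict: dict, window: int) -> dict:
    out = {}
    for cue_type, onsets in cue_dict.items():
        segs = [roi[t:t + window] for t in onsets if t + window <= roi.shape[0]]
        out[cue_type] = np.mean(segs, axis=0)  # (window, n_parcels)
    return out

rng = np.random.default_rng(1)
roi = rng.random((30, 4))  # toy stand-in for (1653, 1000)
avg = averaged_brain_activity(roi, {"forget": [2, 10], "remember": [5]}, window=6)
```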
13. Compute dynamic correlations for forget and remember networks across subjects.
14. Visualize the correlation matrix at timepoints sampled uniformly.
15. Calculate brain network dynamics. **(This is vague and needs more clarification.)**
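One simple way to approximate the dynamic correlations of step 13 (and the matrices step 14 would visualize) is a sliding-window Pearson correlation over the parcel time series. This is only a baseline sketch; the estimator used in the actual analysis may differ:

```python
import numpy as np

# Sliding-window correlation: one (n_parcels, n_parcels) correlation
# matrix per window position. Window width is an illustrative choice.
def sliding_corr(roi: np.ndarray, width: int) -> np.ndarray:
    n_t, _ = roi.shape
    mats = [np.corrcoef(roi[t:t + width].T) for t in range(n_t - width + 1)]
    return np.stack(mats)  # (n_windows, n_parcels, n_parcels)

rng = np.random.default_rng(2)
roi = rng.random((20, 3))       # toy stand-in for (1653, 1000)
dyn = sliding_corr(roi, width=5)
```

Sampling a few window positions uniformly from `dyn` and plotting each matrix (e.g. with `matplotlib`'s `imshow`) would cover step 14.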
#### Assumptions
- The regressor matrices' filenames match the glob pattern `*_regs_results.mat`
- The regressor matrices are 14 (regressors) by 1653 (time-points). Each row is a binary array of 1653 timepoints, where a 0 indicates the regressor was inactive at that timepoint and a 1 indicates it was active.
- The behavioral dictionary provides the following relevant data for each subject: `['L1_objects', 'L1_recalls', 'L1_rest', 'L1_scenes', 'L1_scrambled_scenes', 'L2_objects', 'L2_recalls', 'L2_rest', 'L2_scenes', 'L2_scrambled_scenes', 'correct_recalls', 'cue_objects', 'cue_rest', 'cue_scenes', 'cue_scrambled_scenes', 'cuetype', 'last_image_objects', 'last_image_rest', 'last_image_scenes', 'last_image_scrambled_scenes', 'pfr', 'rec_objects', 'rec_rest', 'rec_scenes', 'rec_scrambled_scenes', 'reclist', 'recmats', 'spc', 'start_objects', 'start_rest', 'start_scenes', 'start_scrambled_scenes', 'tempfact'])`
## Design Specs
Link to design specs can be found [here](https://hackmd.io/@AhmedAlSunbati/BJq2bibree/edit)