# fMRI Lab for 17th March
###### With me (Will), Marisa, Balazs, & Doran
---
### Aim: To get acquainted with the tools needed to generate fMRI results
---
# Plan
#### 1. Tools
#### 2. Introduce experiment
#### 3. View the raw data
#### 4. Explore the design file
#### 5. Preprocessing
#### 6. Modelling task events
#### 7. View the results
#### 8. Multi-subject analysis and automation
---
### Tools for neuroimaging

###### FSL (FMRIB Software Library) is a free suite of applications from Oxford's Functional Magnetic Resonance Imaging of the Brain (FMRIB) laboratory.
###### http://mriquestions.com/best-fmri-software.html
----

###### FEAT (FLAME), BET, FSLeyes
###### https://fsl.fmrib.ox.ac.uk/fsl/fslwiki
---
# The Experiment
Before looking at the data we will describe the experiment.
### Aim: To find the neural correlates of word-generation
###### Data and more info: https://www.fmrib.ox.ac.uk/primers/intro_primer/ExBox11/IntroBox11.html
----
This dataset is from an event-related language experiment and has three different types of events:
----
###### - Word-generation events (WG): The subject is presented with a noun, for example "car", and their task is to come up with a pertinent verb (for example "drive") and then "think that word in their head". Subjects were explicitly instructed never to say or even mouth a word, to prevent movement artefacts.
###### - Word-shadowing events (WS): The subject is presented with a verb and is instructed to simply "think that word in their head".
###### - Null-events (N): These are events where nothing happens, i.e. the cross-hair remains on the screen and no word is presented. The purpose of these "events" is to supply a baseline against which the other two event types can be compared.
----
##### Within one session, the events were presented at a constant ISI (Inter Stimulus Interval) of 6 seconds. For example, the first 72 seconds (twelve events) in this session may have looked like:
### N-WS-N-WS-N-WS-N-WG-N-WS-WG-N
---
## The Data
#### - `/home/fsluser/fsl_course_data/ExBox11`
- Navigate to this directory, using the `Terminal` (or using the `files` GUI)
- Use `cd your_path` to change the working directory
- Use `ls` to list files in the working directory
- Keep this terminal window open for later
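The two commands above can be tried anywhere. Here is a throwaway demonstration using a scratch directory with empty stand-in files (on the course machines you would `cd` into the real data directory instead):

```shell
# Scratch demo of cd and ls -- the files here are empty stand-ins
mkdir -p /tmp/ExBox11_demo          # make a scratch directory
cd /tmp/ExBox11_demo                # change working directory
touch fmri.nii.gz design.fsf        # create empty placeholder files
ls                                  # lists design.fsf and fmri.nii.gz
```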
---
## The raw neuroimaging data

- Let's view the raw 4D data for the subject
----
#### 1. Open `FSLeyes`
- Type `fsleyes` in Terminal/Command Line and press enter
- Or, open it from the FSL GUI
#### 2. Add functional data
- `File` -> `Add from file` -> Find: `/home/fsluser/fsl_course_data/ExBox11/fmri.nii.gz`
#### 3. View in Movie Mode
---
## The Analysis
- To get from this raw data to the results, an analysis was conducted using `FEAT`.
- We will look at the details of this analysis now.
----
## Analysis with FEAT
#### 1. Open FEAT GUI
- Type `fsl` in terminal & `Enter` -> Click FEAT button
#### 2. Load the design file (`.fsf`)
- `Load` -> `ExBox11/fmri.feat/design.fsf`
----
##### You can think of the design file as a recipe that turns your ingredients (raw NIfTI data) into a full meal (results). Having this recipe saved is useful because it means you can reproduce the meal exactly.
###### Because this dataset has already been analyzed with FEAT, a design file already exists - but for your experiments, you will need to create one yourself using this software.
##### So, let's explore the design file tab by tab.
----
## `Data`

----
## `Misc`

- Use `Balloon help` if you want to find info on parameters
----
## `Pre-stats`

- Many preprocessing options available - e.g. highpass filtering
----
## High-pass filtering

- Removes low-frequency signal changes (slow drifts with periods longer than ~100 s)
- Important due to _scanner drift_
- Let's look at the effect of HPF
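FEAT applies this filter for you, but it helps to know how the cutoff in seconds maps to the sigma (in volumes) that FSL tools expect: sigma = cutoff / (2 × TR). The TR of 3 s below is a made-up value for illustration; check your own sequence parameters.

```shell
# Convert a high-pass cutoff in seconds to an FSL sigma in volumes.
# TR = 3 s is hypothetical -- use your own scan's TR.
TR=3
CUTOFF=100
SIGMA=$(awk -v c="$CUTOFF" -v tr="$TR" 'BEGIN { printf "%.2f", c / (2 * tr) }')
echo "$SIGMA"                        # prints 16.67
# On a machine with FSL installed, the equivalent manual filter would be:
#   fslmaths fmri -bptf "$SIGMA" -1 fmri_hpf   # -1 disables the low-pass part
```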
----
#### 1. Open `FSLEyes`
#### 2. Add `fmri.feat/filtered_func_data.nii.gz`
#### 3. View the two time series
- `View` -> `Timeseries`
- Click `Plotting mode` drop-down -> `Demeaned`
#### 4. Find example of high-pass filtering
- e.g., voxel coordinate `24,33,15` starts low and drifts upwards in unfiltered image
----
## `Registration`

----
## `Stats`

1. Go to `Full model setup`
----
## Model Set up
### `Explanatory Variables (EVs)`

###### - Each EV models a different effect
----

- View the contents of these 3-column `.txt` files
1. Open previous terminal window
2. Type `ls` and Enter to view files in working directory
3. Type `cat word_generation.txt` and `Enter` (`cat` will print the contents of the txt file)
----
- 1st column = start time of stimulus (s)
- 2nd column = duration (s)
- 3rd column = intensity (1 = on)
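To see the format in action, here is a toy 3-column file; the onsets and durations below are invented for illustration, not the real experiment's timings:

```shell
# Create and inspect a toy 3-column timing file (values are made up)
printf '6.0 4.5 1\n42.0 4.5 1\n60.0 4.5 1\n' > wg_demo.txt
cat wg_demo.txt
# awk can pull out individual columns, e.g. onset and duration:
awk '{ print "onset:", $1, "s  duration:", $2, "s" }' wg_demo.txt
```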
----

- Do the same for `word_shadowing.txt`
##### These 3-column txt files contain all the information about the timing of events in the scanner
----
### Convolving

----
### `Contrasts & F-tests`

----
### Contrasts
##### Contrasts are the way in which we express questions (alternative hypotheses).
- For example, the question of whether a positive effect exists (that's associated with an EV called β1) can be expressed as β1 > 0.
- Or, the question of whether the effect size associated with one regressor is larger than another's can be expressed as β1 > β2.
----
### Contrasts & F-tests

- Question: What do you think each of the 5 contrasts is testing?
- Tip: EV1 = word generation & EV2 = word shadowing
----
#### Contrast Meanings
##### - `Generation`: this tests for when there was greater activation during word generation compared to baseline.
##### - `Shadowing`: this tests for when there was greater activation during word shadowing compared to baseline.
##### - `Mean`: this tests for when the average activation during generation and shadowing was greater than baseline.
##### - `Shad > Gen`: this tests for when there was greater activation during word shadowing compared to word generation.
##### - `Gen > Shad`: this tests for when there was greater activation during word generation compared to word shadowing.
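With EV1 = word generation and EV2 = word shadowing, the five contrasts above can be sketched as weight vectors over [β1, β2]. These weights are illustrative; check the `Contrasts & F-tests` tab for the exact values used in this design:

```
Generation:  [ 1  0 ]   tests β1 > 0
Shadowing:   [ 0  1 ]   tests β2 > 0
Mean:        [ 1  1 ]   tests β1 + β2 > 0
Shad > Gen:  [-1  1 ]   tests β2 > β1
Gen > Shad:  [ 1 -1 ]   tests β1 > β2
```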
----
### `View design`

###### (If an error occurs on the class laptops, refer to the picture above)
----

###### - After all that set-up, FEAT is able to create a model of predicted activity for each explanatory variable (i.e. word generation / shadowing) across time.
###### - The GLM compares this model to the actual BOLD activity and reveals those voxels whose activity is well explained by the model, i.e. voxels involved in word generation and/or shadowing.
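In equation form, at each voxel the GLM fit can be written as:

```latex
y = X\beta + \varepsilon, \qquad \hat{\beta} = (X^{\top}X)^{-1}X^{\top}y
```

where `y` is that voxel's BOLD time series, `X` is the design matrix (one column per convolved EV), `β` holds the effect sizes the contrasts test, and `ε` is residual noise. (A sketch of the ordinary least-squares estimate; FEAT's actual estimation also handles temporal autocorrelation.)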
----
## `Post-stats (thresholding)`

- Important for controlling false positives
---
### Running the analysis
#### 2 Options:
- Simply press `Go`
- Or, type `feat <design.fsf>` into the command line.
- ...or view ready-made results!
---
## Viewing the results

----
## Viewing the results
1. View results directory
- `Applications` -> `Files` -> `fsl_course_data/ExBox11/fmri.feat`
2. View results report
- Double click to open `report.html`
3. See the report on registration, pre-stats and stats - these are good sanity checks
----
## Post-stats
- `Post-stats` is where the real results are
1. What contrasts show a significant effect?
- Each contrast has an associated results image
2. View cluster based statistics
- Click on any contrast image
---
### Let's look at the last two results (contrasts Shad>Gen; Gen>Shad) in 3D using FSLeyes
----
### Viewing the results in 3D
1. Open an anatomical image on which to overlay functional results
###### - Add `fmri.feat/example_func.nii.gz`
2. Overlay statistical map of contrast 4 (`fmri.feat/thresh_zstat4; Shad > Gen`) and contrast 5 (`fmri.feat/thresh_zstat5; Gen > Shad`)
3. Apply different colour maps to each
###### - To change a map's colour, select it in the overlay list and then change the colour map at the top
----
4. Load unthresholded versions
- Add `fmri.feat/stats/zstat5`
- Not all voxels survive thresholding, but the unthresholded map is still potentially useful
---
## Automating this process
- This was a single-subject analysis - imagine doing it for 50+ participants
- The command line can do more than file navigation - we can use it to automate this process
- Our design file - `design.fsf` - is simply a text file - lots of potential for scripting
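One common pattern is to save a template design file with the subject ID replaced by a placeholder, then stamp each subject's ID into a fresh copy and run `feat` on it. A minimal sketch, where the `SUBJID` placeholder, the file names, and the subject list are all hypothetical:

```shell
#!/bin/bash
# Sketch: batch FEAT by templating the design file per subject.
# Create a one-line toy template (a real one comes from the FEAT GUI's Save button).
printf 'set fmri(outputdir) "/data/SUBJID.feat"\n' > design_template.fsf
for sub in sub01 sub02 sub03; do
    # Substitute the placeholder with this subject's ID
    sed "s/SUBJID/${sub}/g" design_template.fsf > "design_${sub}.fsf"
    # feat "design_${sub}.fsf"       # uncomment on a machine with FSL installed
done
ls design_sub*.fsf
```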
---
# Summary
- Hopefully you are now acquainted with:
1. Bash commands in Terminal
2. fMRI analysis software FEAT
3. fMRI data/results visualization with FSLeyes
- Code is Key!
----
# fMRI Analysis Resources
----
## Jeanette Mumford's FSL FEAT YouTube tutorials
https://www.youtube.com/watch?v=lCwewJJPd5U&list=PLB2iAtgpI4YHlH4sno3i3CUjCofI38a-3&ab_channel=mumfordbrainstats
- This is a great series which takes you from start to finish of a task-based fMRI analysis using FSL's FEAT software
----
## Andrew Jahn's FSL videos
https://www.youtube.com/watch?v=9ionYVXUQn8&list=PLOPaMln1VugP5sSKaa_KMmrrVXMJTMrLi&ab_channel=AndrewJahn
- More examples of scripting + FSL
----
## MIT General Linear Model for fMRI
- A 3-part lecture series that explains the general linear model nicely. These were the videos that made the GLM and task design click.
- Theoretical understanding of the principles of the method
----
## FSL's FEAT tutorial
https://www.fmrib.ox.ac.uk/primers/intro_primer/ExBox11/IntroBox11.html
- Example data here
https://fsl.fmrib.ox.ac.uk/fslcourse/lectures/practicals/feat1/index.html
- More explanation here
----
https://fsl.fmrib.ox.ac.uk/fsl/fslwiki/FEAT/UserGuide
- The master guide to FEAT
----
## GLM Handbook
https://www.fmrib.ox.ac.uk/primers/appendices/glm.pdf