# Grand averaging and difference waves

The per-subject evoked responses for the 'words' and 'pseudo' conditions of the lexical-decision task are loaded, grand-averaged across subjects, and contrasted with a difference wave.

```python
%matplotlib qt
```

```python
import mne
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
```

Each `-ave.fif` file holds one `Evoked` object per condition. Calling `mne.read_evokeds` without specifying a condition returns all of them, here the word and pseudoword averages for a single subject.

```python
evoked1, evoked2 = mne.read_evokeds('001-ave.fif')
print(evoked1)
print(evoked2)
```

    Reading 001-ave.fif ...
    Isotrak not found
    Found the data of interest:
    t = -100.00 ... 700.00 ms (words)
    0 CTF compensation matrices available
    nave = 144 - aspect type = 100
    No projector specified for this dataset. Please consider the method self.add_proj.
    No baseline correction applied
    Isotrak not found
    Found the data of interest:
    t = -100.00 ... 700.00 ms (pseudo)
    0 CTF compensation matrices available
    nave = 143 - aspect type = 100
    No projector specified for this dataset. Please consider the method self.add_proj.
    No baseline correction applied
    <Evoked | 'words' (mean, N=144), [-0.1, 0.7] sec, 157 ch, ~1.3 MB>
    <Evoked | 'pseudo' (mean, N=143), [-0.1, 0.7] sec, 157 ch, ~1.3 MB>

```python
evoked1, evoked2 = mne.read_evokeds('002-ave.fif')
print(evoked1)
print(evoked2)
```

    Reading 002-ave.fif ...
    Isotrak not found
    Found the data of interest:
    t = -100.00 ... 700.00 ms (words)
    0 CTF compensation matrices available
    nave = 24 - aspect type = 100
    No projector specified for this dataset. Please consider the method self.add_proj.
    No baseline correction applied
    Isotrak not found
    Found the data of interest:
    t = -100.00 ... 700.00 ms (pseudo)
    0 CTF compensation matrices available
    nave = 28 - aspect type = 100
    No projector specified for this dataset. Please consider the method self.add_proj.
    No baseline correction applied
    <Evoked | 'words' (mean, N=24), [-0.1, 0.7] sec, 157 ch, ~1.3 MB>
    <Evoked | 'pseudo' (mean, N=28), [-0.1, 0.7] sec, 157 ch, ~1.3 MB>

```python
evoked1, evoked2 = mne.read_evokeds('002-ica_ave.fif')
print(evoked1)
print(evoked2)
```

    Reading 002-ica_ave.fif ...
    Isotrak not found
    Found the data of interest:
    t = -100.00 ... 700.00 ms (words)
    0 CTF compensation matrices available
    nave = 69 - aspect type = 100
    No projector specified for this dataset. Please consider the method self.add_proj.
    No baseline correction applied
    Isotrak not found
    Found the data of interest:
    t = -100.00 ... 700.00 ms (pseudo)
    0 CTF compensation matrices available
    nave = 70 - aspect type = 100
    No projector specified for this dataset. Please consider the method self.add_proj.
    No baseline correction applied
    <Evoked | 'words' (mean, N=69), [-0.1, 0.7] sec, 157 ch, ~1.3 MB>
    <Evoked | 'pseudo' (mean, N=70), [-0.1, 0.7] sec, 157 ch, ~1.3 MB>

Note that the ICA-cleaned file for subject 002 retains many more trials per condition (69 and 70) than the uncleaned one (24 and 28); the ICA-cleaned version is the one that goes into the grand average below.

```python
ave_names = ['001-ave.fif', '002-ica_ave.fif', '003-ave.fif',
             '004-ave.fif', '005-ave.fif', '015-ave.fif']
ave_names
```

    ['001-ave.fif', '002-ica_ave.fif', '003-ave.fif', '004-ave.fif', '005-ave.fif', '015-ave.fif']
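Typing the six filenames by hand is fine here, but with more subjects it is easier to build the list programmatically. A minimal sketch with `glob`, assuming the averaged files sit in the current working directory and nothing else matches the pattern:

```python
from glob import glob

# Gather every averaged-evoked file in the working directory (assumed flat layout).
ave_names = sorted(glob('*ave.fif'))

# Subject 002 has both an uncleaned and an ICA-cleaned average on disk;
# keep only the ICA-cleaned one for the grand average.
if '002-ica_ave.fif' in ave_names:
    ave_names = [name for name in ave_names if name != '002-ave.fif']

print(ave_names)
```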
With the filename list in place, the word and pseudoword averages can be read for every subject with a list comprehension:

```python
all_words = [mne.read_evokeds(ave_name, 'words') for ave_name in ave_names]
all_pseudo = [mne.read_evokeds(ave_name, 'pseudo') for ave_name in ave_names]
```

    Reading 001-ave.fif ...
    Isotrak not found
    Found the data of interest:
    t = -100.00 ... 700.00 ms (words)
    0 CTF compensation matrices available
    nave = 144 - aspect type = 100
    No projector specified for this dataset. Please consider the method self.add_proj.
    No baseline correction applied
    Reading 002-ica_ave.fif ...
    Isotrak not found
    Found the data of interest:
    t = -100.00 ... 700.00 ms (words)
    0 CTF compensation matrices available
    nave = 69 - aspect type = 100
    No projector specified for this dataset. Please consider the method self.add_proj.
    No baseline correction applied
    Reading 003-ave.fif ...
    Isotrak not found
    Found the data of interest:
    t = -100.00 ... 700.00 ms (words)
    0 CTF compensation matrices available
    nave = 142 - aspect type = 100
    No projector specified for this dataset. Please consider the method self.add_proj.
    No baseline correction applied
    Reading 004-ave.fif ...
    Isotrak not found
    Found the data of interest:
    t = -100.00 ... 700.00 ms (words)
    0 CTF compensation matrices available
    nave = 40 - aspect type = 100
    No projector specified for this dataset. Please consider the method self.add_proj.
    No baseline correction applied
    Reading 005-ave.fif ...
    Isotrak not found
    Found the data of interest:
    t = -100.00 ... 700.00 ms (words)
    0 CTF compensation matrices available
    nave = 29 - aspect type = 100
    No projector specified for this dataset. Please consider the method self.add_proj.
    No baseline correction applied
    Reading 015-ave.fif ...
    Isotrak not found
    Found the data of interest:
    t = -100.00 ... 700.00 ms (words)
    0 CTF compensation matrices available
    nave = 140 - aspect type = 100
    No projector specified for this dataset. Please consider the method self.add_proj.
    No baseline correction applied
    Reading 001-ave.fif ...
    Isotrak not found
    Found the data of interest:
    t = -100.00 ... 700.00 ms (pseudo)
    0 CTF compensation matrices available
    nave = 143 - aspect type = 100
    No projector specified for this dataset. Please consider the method self.add_proj.
    No baseline correction applied
    Reading 002-ica_ave.fif ...
    Isotrak not found
    Found the data of interest:
    t = -100.00 ... 700.00 ms (pseudo)
    0 CTF compensation matrices available
    nave = 70 - aspect type = 100
    No projector specified for this dataset. Please consider the method self.add_proj.
    No baseline correction applied
    Reading 003-ave.fif ...
    Isotrak not found
    Found the data of interest:
    t = -100.00 ... 700.00 ms (pseudo)
    0 CTF compensation matrices available
    nave = 142 - aspect type = 100
    No projector specified for this dataset. Please consider the method self.add_proj.
    No baseline correction applied
    Reading 004-ave.fif ...
    Isotrak not found
    Found the data of interest:
    t = -100.00 ... 700.00 ms (pseudo)
    0 CTF compensation matrices available
    nave = 43 - aspect type = 100
    No projector specified for this dataset. Please consider the method self.add_proj.
    No baseline correction applied
    Reading 005-ave.fif ...
    Isotrak not found
    Found the data of interest:
    t = -100.00 ... 700.00 ms (pseudo)
    0 CTF compensation matrices available
    nave = 24 - aspect type = 100
    No projector specified for this dataset. Please consider the method self.add_proj.
    No baseline correction applied
    Reading 015-ave.fif ...
    Isotrak not found
    Found the data of interest:
    t = -100.00 ... 700.00 ms (pseudo)
    0 CTF compensation matrices available
    nave = 144 - aspect type = 100
    No projector specified for this dataset. Please consider the method self.add_proj.
    No baseline correction applied
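Reading twelve files this way floods the notebook with log messages. As an optional tweak when re-running, MNE's logging can be turned down either globally with `mne.set_log_level` or per call through the `verbose` argument (both are standard MNE options; the original run did not use them):

```python
# Quiet MNE globally: only warnings and errors from here on.
mne.set_log_level('WARNING')

# Or silence just these calls via the verbose argument.
all_words = [mne.read_evokeds(name, 'words', verbose='error') for name in ave_names]
all_pseudo = [mne.read_evokeds(name, 'pseudo', verbose='error') for name in ave_names]
```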
```python
all_words
```

    [<Evoked | 'words' (mean, N=144), [-0.1, 0.7] sec, 157 ch, ~1.3 MB>,
     <Evoked | 'words' (mean, N=69), [-0.1, 0.7] sec, 157 ch, ~1.3 MB>,
     <Evoked | 'words' (mean, N=142), [-0.1, 0.7] sec, 157 ch, ~1.3 MB>,
     <Evoked | 'words' (mean, N=40), [-0.1, 0.7] sec, 157 ch, ~1.3 MB>,
     <Evoked | 'words' (mean, N=29), [-0.1, 0.7] sec, 157 ch, ~1.3 MB>,
     <Evoked | 'words' (mean, N=140), [-0.1, 0.7] sec, 157 ch, ~1.3 MB>]

```python
all_pseudo
```

    [<Evoked | 'pseudo' (mean, N=143), [-0.1, 0.7] sec, 157 ch, ~1.3 MB>,
     <Evoked | 'pseudo' (mean, N=70), [-0.1, 0.7] sec, 157 ch, ~1.3 MB>,
     <Evoked | 'pseudo' (mean, N=142), [-0.1, 0.7] sec, 157 ch, ~1.3 MB>,
     <Evoked | 'pseudo' (mean, N=43), [-0.1, 0.7] sec, 157 ch, ~1.3 MB>,
     <Evoked | 'pseudo' (mean, N=24), [-0.1, 0.7] sec, 157 ch, ~1.3 MB>,
     <Evoked | 'pseudo' (mean, N=144), [-0.1, 0.7] sec, 157 ch, ~1.3 MB>]

`mne.grand_average` averages the per-subject evoked responses with equal weight per subject, so the N in the result is the number of subjects (6), not the number of trials. The comment is renamed so each condition stays identifiable in plots.

```python
words_grouping = mne.grand_average(all_words)
print(words_grouping)
words_grouping.comment = 'words (n=6)'
print(words_grouping)
```

    Identifying common channels ...
    all channels are corresponding, nothing to do.
    <Evoked | 'Grand average (n = 6)' (mean, N=6), [-0.1, 0.7] sec, 157 ch, ~1.3 MB>
    <Evoked | 'words (n=6)' (mean, N=6), [-0.1, 0.7] sec, 157 ch, ~1.3 MB>

```python
pseudo_grouping = mne.grand_average(all_pseudo)
print(pseudo_grouping)
pseudo_grouping.comment = 'pseudo (n=6)'
print(pseudo_grouping)
```

    Identifying common channels ...
    all channels are corresponding, nothing to do.
    <Evoked | 'Grand average (n = 6)' (mean, N=6), [-0.1, 0.7] sec, 157 ch, ~1.3 MB>
    <Evoked | 'pseudo (n=6)' (mean, N=6), [-0.1, 0.7] sec, 157 ch, ~1.3 MB>

```python
times = np.arange(0.05, 0.5, 0.05)
words_grouping.plot_topomap(times=times, ch_type='mag', time_unit='s')
```

![](https://i.imgur.com/cfZR3D3.png)

```python
ts_args = dict(gfp=True, time_unit='s')
topomap_args = dict(sensors=False, time_unit='s')
words_grouping.plot_joint(title='Real Words', times=[0.13, 0.20, 0.31],
                          ts_args=ts_args, topomap_args=topomap_args)
```

![](https://i.imgur.com/H9AbRK3.png)

```python
tmp = [words_grouping, pseudo_grouping]
colors = 'green', 'red'
mne.viz.plot_evoked_topo(tmp, color=colors, title='LDT / lexicality')
```

![](https://i.imgur.com/tm94jdO.png)

A difference wave contrasting pseudowords against words is built by negating the word grand average and combining the two with `mne.combine_evoked`:

```python
evoked_diff = mne.combine_evoked([pseudo_grouping, -words_grouping],
                                 weights='equal')  # calculate difference wave

ts_args = dict(gfp=True, time_unit='s')
topomap_args = dict(sensors=False, time_unit='s')
evoked_diff.plot_joint(title='Lexicality', times=[0.32, 0.5],
                       ts_args=ts_args, topomap_args=topomap_args)
```

![](https://i.imgur.com/RyqOrWs.png)
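The grand averages and the difference wave only exist in memory at this point. If they are needed later, for example for statistics or figures in another notebook, they can be written to disk just like the per-subject averages; a sketch using a hypothetical output filename:

```python
# 'grand-ave.fif' is a made-up name; MNE expects evoked files to end in
# -ave.fif, so this keeps the usual naming convention.
mne.write_evokeds('grand-ave.fif', [words_grouping, pseudo_grouping, evoked_diff])

# Later, individual conditions can be read back by their comment strings.
words_check = mne.read_evokeds('grand-ave.fif', condition='words (n=6)')
print(words_check)
```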