# A Visual Cue That Increased Communication Costs: A Psychology Study That Failed but Bravely Admitted Its Mistake

- Background: Strand, J. (2020, April 15). Scientists make mistakes. I made a big one. Medium. https://elemental.medium.com/when-science-needs-self-correcting-a130eacb4235
- Published papers:
    - Strand, J. F., Brown, V. A., & Barbour, D. L. (2019, retracted). Talking points: A modulating circle reduces listening effort without improving speech recognition. Psychonomic Bulletin & Review, 26(1), 291–297. https://doi.org/10.3758/s13423-018-1489-7
    - Strand, J. F., Brown, V. A., & Barbour, D. L. (2020). Talking points: A modulating circle increases listening effort without improving speech recognition in young adults. Psychonomic Bulletin & Review, 27(3), 536–543. https://doi.org/10.3758/s13423-020-01713-y

#### Report
- preprint: N/A
- final: https://doi.org/10.3758/s13423-020-01713-y

#### Preregistration
- proposal: https://osf.io/nykh5
- power analysis: see the item "How many observations will be collected ..." in the proposal

#### Materials
- protocol: https://osf.io/8bjnz/ (see the folders "Experiment 1 stimuli" and "Experiment 2 stimuli")

#### Data & analytical code
- storage: https://osf.io/8bjnz/ (see the folder "Data and code")
- codebook: see the R scripts in the folder "Data and code"

#### Preregistration plan
Brown, V. A. (2018, January 4). Abstract visual stimulus and listening effort. https://doi.org/10.17605/OSF.IO/NYKH5

**What's the main question being asked or hypothesis being tested in this study?**

An unpublished study in our lab suggests that an abstract visual stimulus - a dot that appears at the onset of the speech stream, disappears at the offset, and grows and shrinks with the amplitude of the speech - does not facilitate recognition of spoken sentences in two-talker babble. The current study aims to first replicate this finding with words rather than sentences, and then address whether an abstract visual stimulus reduces the effort necessary to recognize speech, even in the absence of an improvement in intelligibility.
It is hypothesized that an abstract visual stimulus will not affect recognition accuracy, but will reduce listening effort.

**Describe the key dependent variable(s) specifying how they will be measured.**

First, we will conduct a replication of an unpublished finding in our lab using words rather than sentences. Participants will listen to a stream of words presented in two-talker babble at an SNR of -4 dB, and repeat aloud what they heard after each word. The dependent variable of interest for this portion of the study is accuracy in recognizing audiovisual (with the visual portion being the abstract moving dot) and audio-only words (with correct responses scored as 1 and incorrect responses scored as 0). In the listening effort experiment, participants will perform the same speech recognition task while simultaneously performing a semantic dual task (Picou & Ricketts, 2014), in which they press a button as quickly and accurately as possible whenever the word they perceived is a noun. The dependent variable of interest is the reaction time on trials in which participants respond by pressing the noun button. Slower reaction times suggest that listeners expended more effort to understand the spoken words.

**How many and which conditions will participants be assigned to?**

_Condition will be manipulated within subjects_; each participant will complete each of the four conditions (audiovisual single task, audio-only single task, audiovisual dual task, audio-only dual task), and 100 words will appear in each condition (with word lists counterbalanced across conditions). Participants will first complete the two single-task conditions, followed by the two dual-task conditions. However, within each task type (single and dual), the two conditions will be counterbalanced such that half of the participants will complete the audiovisual condition first, and half will complete the audio-only condition first.
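The counterbalancing scheme just described can be sketched as follows. This is a hypothetical illustration, not the study's actual assignment code: the list names, condition labels, and the Latin-square rotation are placeholders chosen to satisfy the constraints above (word lists rotate across conditions; single-task conditions precede dual-task conditions; the audiovisual/audio-only order alternates across participants within each task type).

```python
# Hypothetical counterbalancing sketch (not the authors' implementation).
CONDITIONS = ["AV_single", "AO_single", "AV_dual", "AO_dual"]
WORD_LISTS = ["list1", "list2", "list3", "list4"]

def assign(participant_id):
    """Return the (condition, word list) sequence for one participant."""
    rot = participant_id % 4                     # Latin-square row
    lists = WORD_LISTS[rot:] + WORD_LISTS[:rot]  # list-to-condition mapping
    list_for = dict(zip(CONDITIONS, lists))
    if participant_id % 2 == 0:                  # AV first for half of them
        order = ["AV_single", "AO_single", "AV_dual", "AO_dual"]
    else:                                        # AO first for the other half
        order = ["AO_single", "AV_single", "AO_dual", "AV_dual"]
    return [(cond, list_for[cond]) for cond in order]
```

Across any block of four consecutive participants, each word list is paired with each condition exactly once, which is one reason a sample size divisible by four (as chosen below) keeps the design balanced.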
**Specify exactly which analyses you will conduct to examine the main question/hypothesis.**

Our primary goal is to determine whether an abstract visual stimulus reduces listening effort. We hypothesize that reaction times will be slower in the audio-only condition than in the audiovisual condition, suggesting that participants expended more listening effort in the audio-only condition. To conduct this analysis, we will build two mixed-effects models predicting reaction time in the semantic dual task: a full model with condition (audiovisual vs. audio-only) as a fixed effect and participants and items as random effects, and a reduced model without condition (i.e., an intercept-only model). We will then compare the models using a likelihood ratio test to determine which model is preferred, and therefore whether the variable "condition" is significant.

**Any secondary analyses?**

As stated above, the present experiment includes two sub-experiments. First, we aim to replicate an unpublished finding in our lab - that an abstract visual stimulus does not facilitate speech recognition in noise - using words rather than sentences. Therefore, before performing the primary listening effort analyses, we will determine whether an abstract visual stimulus improves recognition of words. Given our previous findings with sentences, we hypothesize that condition (audiovisual vs. audio-only) will have no effect on recognition accuracy. For this set of analyses, we will build two mixed-effects models predicting recognition accuracy: a full model with condition (audiovisual vs. audio-only) as a fixed effect and participants and items entered as random effects, and a reduced model without condition (i.e., an intercept-only model). We will compare the two models using a likelihood ratio test to determine which model is preferred, and therefore whether the variable "condition" is significant.

**How many observations will be collected or what will determine sample size?
No need to justify decision, but be precise about exactly how the number will be determined.**

We will analyze data from 96 individuals from the Carleton College community (ages 18-30). This sample size was determined using a power analysis via a web application (jakewestfall.org/two_factor_power/). We entered the following parameters: effect size = 0.3, residual variance partitioning coefficient (VPC) = 0.3, participant intercept VPC = 0.2, target intercept VPC = 0.2, participant-by-target VPC = 0.1, participant slope VPC = 0.1, target slope VPC = 0.1, total number of targets = 200, power = 0.96. The analysis indicated a minimum of 91.4 participants. We will include 96 participants, slightly more than the minimum required, because we want the number of participants to be divisible by four (we have four word lists and four counterbalanced orders, so this choice simplifies the experimental design), and because we want to increase the power of the experiment. With the same parameters described above, including 96 participants increases the power to 0.965. After running 96 participants, we will determine how many participants and trials must be excluded from analysis (see below for exclusion criteria), and if necessary, we will run more participants until we reach the desired number of observations.

**Anything else you would like to pre-register? (e.g., data exclusions, variables collected for exploratory purposes, unusual analyses planned?)**

Participants will be excluded from all analyses if their accuracy on the speech recognition task is worse than three standard deviations below the mean accuracy for any condition (note that we will analyze accuracy data in the semantic dual task conditions for exclusion purposes only). Participants will also be excluded if their mean reaction time in a given semantic dual task condition (either audiovisual or audio-only) is more than three standard deviations above or below the grand mean reaction time for that condition.
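A minimal sketch of the exclusion rules, assuming a simple per-participant summary layout (the study's actual data structures live in the R scripts on OSF, so the function names and input shapes here are hypothetical). The participant-level rules use the 3-SD thresholds above; the trial-level rule keeps reaction times within 3 median absolute deviations of the participant's median, as specified next.

```python
import numpy as np

def participants_to_exclude(acc_by_cond, rt_by_cond):
    """Participant-level exclusions (hypothetical data layout).

    acc_by_cond / rt_by_cond map a condition name to a 1-D array holding
    each participant's mean accuracy / mean dual-task reaction time.
    Returns the set of participant indices flagged for exclusion.
    """
    flagged = set()
    # Rule 1: accuracy worse than 3 SDs below the condition mean.
    for acc in acc_by_cond.values():
        acc = np.asarray(acc, dtype=float)
        flagged |= set(np.flatnonzero(acc < acc.mean() - 3 * acc.std(ddof=1)))
    # Rule 2: mean RT more than 3 SDs above/below the condition grand mean.
    for rt in rt_by_cond.values():
        rt = np.asarray(rt, dtype=float)
        lo = rt.mean() - 3 * rt.std(ddof=1)
        hi = rt.mean() + 3 * rt.std(ddof=1)
        flagged |= set(np.flatnonzero((rt < lo) | (rt > hi)))
    return flagged

def trials_to_keep(rts, k=3.0):
    """Trial-level exclusion for one participant's noun-button RTs.

    Keeps trials within k median absolute deviations (MADs) of the
    participant's median RT; the MAD is used instead of the SD because
    raw reaction times tend to be skewed.
    """
    rts = np.asarray(rts, dtype=float)
    med = np.median(rts)
    mad = np.median(np.abs(rts - med))
    return np.abs(rts - med) <= k * mad
```

Note that an extreme participant inflates the SD used to flag them, so with small samples a 3-SD rule is conservative; the MAD-based trial rule does not have this problem because the median is robust to outliers.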
For the semantic dual task conditions, we will only analyze trials in which participants respond by pressing the noun button. Individual trials will be excluded if the reaction time on the noun classification task is more than three median absolute deviations above or below that participant's median reaction time. This exclusion criterion relies on the median absolute deviation rather than the standard deviation because raw reaction times tend to be skewed.

###### tags: `EXPPSY_Book`
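The preregistered analyses are implemented in R (see the "Data and code" folder on OSF). Purely as an illustration, the full-versus-reduced mixed-model comparison from the analysis plan can be sketched in Python with `statsmodels`: items enter as a crossed variance component alongside the participant grouping factor, and both models are fit with maximum likelihood (`reml=False`) so the likelihood ratio test over a fixed effect is valid. Every number in the simulation (effect size, variances, counts) is invented for the demo.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import chi2

rng = np.random.default_rng(1)

# Simulated dual-task data (all numbers are made up): 24 participants x
# 20 items, each item heard once per condition.
n_part, n_item = 24, 20
p_int = rng.normal(0, 50, n_part)  # random participant intercepts (ms)
i_int = rng.normal(0, 30, n_item)  # random item intercepts (ms)
rows = []
for p in range(n_part):
    for i in range(n_item):
        for cond, shift in [("audiovisual", 0.0), ("audio_only", 40.0)]:
            rows.append({"participant": p, "item": i, "condition": cond,
                         "rt": 800 + shift + p_int[p] + i_int[i]
                               + rng.normal(0, 60)})
df = pd.DataFrame(rows)

# Full model: condition as a fixed effect, participants as the grouping
# factor, items as a crossed variance component; the reduced model drops
# the fixed effect.
vc = {"item": "0 + C(item)"}
full = smf.mixedlm("rt ~ condition", df, groups="participant",
                   vc_formula=vc).fit(reml=False)
reduced = smf.mixedlm("rt ~ 1", df, groups="participant",
                      vc_formula=vc).fit(reml=False)

lr_stat = 2 * (full.llf - reduced.llf)
p_value = chi2.sf(lr_stat, df=1)  # the models differ by one parameter
print(f"LRT = {lr_stat:.2f}, p = {p_value:.3g}")
```

A significant likelihood ratio test means adding `condition` improves model fit, i.e., that reaction times differ reliably between the audiovisual and audio-only conditions.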