The final auditory stimuli used in the experiment are stored in *synth_stim_368ms.mat*.
The auditory stimuli were generated on a PC with *makestim.m*.
*checkstim.m* checks the power peaks of the stimuli and of their envelopes.
---
Stimuli taken from Ding et al., 2016, Nat Neuro
Some edits were made and the edited files stored in 'Edited stimuli from Ding 2016':
"Drunk dudes sang hums" changed to "Drunk dudes sang hymns"
**Algorithm of stimulus generation**
All words were synthesised with [VOICEBOX][1]
For each word:
1. Synthesise the word at every rate available in sapisynth (VOICEBOX) and pick the rate whose output duration is as close as possible to 90% of the desired word duration without exceeding it
2. Speech does not onset or offset immediately within the file generated by VOICEBOX, so onsets and offsets are identified by crossings of vol_thresh and the surrounding silence is trimmed.
3. Use that optimum rate to synthesise the word for the experimental stimuli. Each word therefore has a slightly different speech rate, but maximally fills its word slot, which makes short words clearer and avoids trimming longer words.
4. Fade out the last 25 ms (as in Ding et al.) because, after steps 1-3, not all words can finish in time (i.e. even at the fastest rate, some are still too long).
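The per-word procedure above can be sketched as follows. This is a minimal Python sketch, not the actual *makestim.m* MATLAB code: `synthesise` stands in for the sapisynth call, the rate list and `vol_thresh` value are placeholders, and "fastest rate" is assumed to mean the largest rate value.

```python
def trim_to_threshold(x, vol_thresh=0.01):
    """Trim leading/trailing samples whose magnitude is below vol_thresh (step 2)."""
    above = [i for i, s in enumerate(x) if abs(s) > vol_thresh]
    if not above:
        return []
    return x[above[0]:above[-1] + 1]

def pick_rate(synthesise, rates, word_dur, fs):
    """Return the rate whose trimmed output is closest to 90% of word_dur
    without exceeding it (step 1); fall back to the fastest rate if none fit."""
    limit = 0.9 * word_dur
    best_rate, best_dur = None, -1.0
    for r in rates:
        dur = len(trim_to_threshold(synthesise(r))) / fs
        if dur <= limit and dur > best_dur:
            best_rate, best_dur = r, dur
    return best_rate if best_rate is not None else max(rates)

def fade_out(x, fs, fade_ms=25):
    """Linearly fade the last fade_ms milliseconds to zero (step 4)."""
    n = int(round(fade_ms / 1000 * fs))
    y = list(x)
    for k in range(n):
        y[len(y) - n + k] *= 1.0 - (k + 1) / n
    return y
```

In this sketch `pick_rate` would be run once per word (step 3), and `fade_out` applied only to words that still overrun their slot.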
A syllable duration of 368 ms was chosen because each four-syllable 'sentence' then spans an integer number of EEG samples when sampled at 500 Hz (or any multiple thereof):
0.368 s [syllable duration] * 4 [num of syllables] * 500 Hz [EEG sample rate] = 736 EEG samples per 'sentence'.
NOTE: EEG MUST BE SAMPLED AT 500 HZ
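The sample-count arithmetic can be checked directly; this small Python sketch (the function name is illustrative) also shows that a nearby rate such as 512 Hz would not give a whole number of samples:

```python
def sentence_samples(fs_eeg, syllable_dur=0.368, n_syllables=4):
    """EEG samples spanned by one n_syllables-long 'sentence' at fs_eeg Hz."""
    return syllable_dur * n_syllables * fs_eeg

# sentence_samples(500) comes to 736 samples exactly (up to float rounding),
# whereas sentence_samples(512) falls between samples (753.664).
```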
[1]: http://www.ee.ic.ac.uk/hp/staff/dmb/voicebox/voicebox.html