PT - JOURNAL ARTICLE
AU - Christian Brodbeck
AU - Proloy Das
AU - Marlies Gillis
AU - Joshua P. Kulasingham
AU - Shohini Bhattasali
AU - Phoebe Gaston
AU - Philip Resnik
AU - Jonathan Z. Simon
TI - Eelbrain: A Python toolkit for time-continuous analysis with temporal response functions
AID - 10.1101/2021.08.01.454687
DP - 2022 Jan 01
TA - bioRxiv
PG - 2021.08.01.454687
4099 - http://biorxiv.org/content/early/2022/11/17/2021.08.01.454687.short
4100 - http://biorxiv.org/content/early/2022/11/17/2021.08.01.454687.full
AB - Even though human experience unfolds continuously in time, it is not strictly linear; instead, it entails cascading processes building hierarchical cognitive structures. For instance, during speech perception, humans transform a continuously varying acoustic signal into phonemes, words, and meaning, and these levels all have distinct but interdependent temporal structures. Time-lagged regression using temporal response functions (TRFs) has recently emerged as a promising tool for disentangling electrophysiological brain responses related to such complex models of perception. Here we introduce the Eelbrain Python toolkit, which makes this kind of analysis easy and accessible. We demonstrate its use, using continuous speech as a sample paradigm, with a freely available EEG dataset of audiobook listening. A companion GitHub repository provides the complete source code for the analysis, from raw data to group level statistics. More generally, we advocate a hypothesis-driven approach in which the experimenter specifies a hierarchy of time-continuous representations that are hypothesized to have contributed to brain responses, and uses those as predictor variables for the electrophysiological signal. This is analogous to a multiple regression problem, but with the addition of a time dimension. TRF analysis decomposes the brain signal into distinct responses associated with the different predictor variables by estimating a multivariate TRF (mTRF), quantifying the influence of each predictor on brain responses as a function of time(-lags). This allows asking two questions about the predictor variables: 1) Is there a significant neural representation corresponding to this predictor variable? And if so, 2) what are the temporal characteristics of the neural response associated with it? Thus, different predictor variables can be systematically combined and evaluated to jointly model neural processing at multiple hierarchical levels. We discuss applications of this approach, including the potential for linking algorithmic/representational theories at different cognitive levels to brain responses through computational models with appropriate linking hypotheses.
Competing Interest Statement: The authors have declared no competing interest.
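
The abstract describes mTRF analysis as time-lagged multiple regression, in which each predictor's influence on the brain signal is estimated as a function of time lag. Below is a minimal sketch of that idea using Eelbrain's boosting estimator; the variable names and the synthetic data are hypothetical placeholders for illustration and are not taken from the paper or its companion repository.

    # Minimal, hypothetical mTRF sketch with Eelbrain; synthetic data only.
    import numpy as np
    import eelbrain

    # Shared time axis: 60 s sampled at 100 Hz
    time = eelbrain.UTS(0, 0.01, 6000)

    # Two time-continuous predictors (e.g., acoustic envelope and a
    # word-onset impulse train) -- random placeholders here
    envelope = eelbrain.NDVar(np.random.rand(6000), (time,), name='envelope')
    word_onsets = eelbrain.NDVar(
        (np.random.rand(6000) > 0.97).astype(float), (time,), name='word_onsets')

    # A single-channel brain response (placeholder for real EEG data)
    eeg = eelbrain.NDVar(np.random.randn(6000), (time,), name='eeg')

    # Jointly estimate TRFs for both predictors over lags from 0 to 500 ms,
    # so that their contributions to the response are disentangled
    result = eelbrain.boosting(eeg, [envelope, word_onsets], 0, 0.500, partitions=4)

    result.h  # estimated TRFs, one per predictor (response as a function of lag)
    result.r  # cross-validated correlation between predicted and measured signal

In this kind of analysis, result.h addresses the second question raised in the abstract (the temporal characteristics of the response to each predictor), while comparing result.r between models with and without a given predictor addresses the first (whether that predictor has a significant neural representation).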