Abstract
Interventions aiming to enhance cognitive function (e.g., computerized cognitive training and non-invasive brain stimulation) are increasingly widespread for the treatment and prevention of cognitive decline. Drawing on the allure of neuroplasticity, such programs comprise a multi-billion-dollar industry catering to researchers, clinicians, and individual consumers. Nevertheless, cognitive enhancement interventions remain highly controversial due to uncertainty regarding their mechanisms of action. A major limitation in cognitive enhancement research and practice is the failure to account for expectations of outcomes, which can influence how much participants improve over an intervention (i.e., the placebo effect). Here, we sought to evaluate the psychometric properties of the Expectation Assessment Scale (EAS), a questionnaire we created to measure the perceived effectiveness of cognitive enhancement interventions. We delivered a web-based version of the EAS probing expectations of either computerized cognitive training or non-invasive brain stimulation. We assessed the unidimensionality of the EAS using principal component analysis and examined item properties with a graded item response model. Responses on the EAS suggest good validity based on internal structure, across all subscales and for both computerized cognitive training and non-invasive brain stimulation. The EAS can serve as a reliable, valid, and easily incorporated tool for evaluating cognitive enhancement interventions while accounting for expectations of intervention outcomes. Assessing expectations before, during, and after cognitive enhancement interventions will likely prove useful in future studies.
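The unidimensionality check described above can be illustrated with a minimal sketch: run a principal component analysis on the item correlation matrix and inspect whether the first eigenvalue dominates. The data below are simulated Likert-type responses (the item count, response range, and parameter values are hypothetical, not taken from the EAS study).

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate Likert-type responses (1-7) for a hypothetical unidimensional
# scale: a single latent trait drives all items.
n_respondents, n_items = 500, 8
theta = rng.normal(size=n_respondents)            # latent expectation level
loadings = rng.uniform(0.6, 0.9, size=n_items)    # per-item strength of the trait
signal = np.outer(theta, loadings)
noise = rng.normal(scale=0.5, size=(n_respondents, n_items))
responses = np.clip(np.round(4 + 1.5 * (signal + noise)), 1, 7)

# PCA on the item correlation matrix: a dominant first eigenvalue
# (relative to the second) is consistent with unidimensionality.
corr = np.corrcoef(responses, rowvar=False)
eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]
ratio = eigvals[0] / eigvals[1]
print(f"first/second eigenvalue ratio: {ratio:.1f}")
```

A common heuristic treats a first-to-second eigenvalue ratio well above 3 as supporting a single dominant component; the graded item response model would then be fit to the items in a dedicated IRT package (e.g., IRTPRO, as cited by the authors).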
Acknowledgements
We thank the Natural Sciences and Engineering Research Council of Canada for their support of this work.
Author information
Contributions
S.R. and P.S.R.D. developed the instrument and collected the data. S.R. and E.K. analyzed the data. S.R. and P.S.R.D. drafted the document, and finalized it based on edits from E.K. All authors approved the final version of the manuscript for submission.
Cite this article
Rabipour, S., Davidson, P.S.R. & Kristjansson, E. Measuring Expectations of Cognitive Enhancement: Item Response Analysis of the Expectation Assessment Scale. J Cogn Enhanc 2, 311–317 (2018). https://doi.org/10.1007/s41465-018-0073-4