RT Journal Article
SR Electronic
T1 Generalization asymmetry in multivariate cross-classification: When representation A generalizes better to representation B than B to A
JF bioRxiv
FD Cold Spring Harbor Laboratory
SP 592410
DO 10.1101/592410
A1 Job van den Hurk
A1 Hans P. Op de Beeck
YR 2019
UL http://biorxiv.org/content/early/2019/04/12/592410.abstract
AB In recent years, the use of multivariate cross-classification (MVCC) has grown in popularity as a way to test for the consistency of information in neural patterns of activation across cognitive states. In this approach, a classification algorithm is trained on dataset A and then tested on a different dataset B, in order to test for commonalities in how information is represented in the two datasets. Interestingly, several papers report an asymmetry in the generalization direction: training on A and testing on B returns significantly better decoding results than training on B and testing on A. Although several neurocognitive hypotheses have been put forward to explain this phenomenon, none of them has been demonstrated directly. Through simple simulations, we show that asymmetry can arise as soon as two datasets with identical ground truths have a different signal-to-noise ratio (SNR): generalization is best from the lower-SNR to the higher-SNR dataset. The extent of the asymmetry is further modulated by the overlap in informative voxels and by whether the two datasets have an equal number of informative voxels. These findings demonstrate that the observation of decoding-direction asymmetry in MVCC can be explained by simple SNR differences and does not necessarily imply complex neurocognitive mechanisms.
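The SNR-driven asymmetry described in the abstract can be reproduced with a minimal simulation: two datasets are generated from the same class-specific voxel patterns but with different noise levels, and a linear classifier is trained on one dataset and tested on the other in both directions. The sketch below is illustrative only and is not the authors' code; the dataset sizes, noise levels, and the choice of a logistic-regression classifier are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

n_trials, n_voxels = 200, 50
labels = np.repeat([0, 1], n_trials // 2)

# Identical ground truth: both datasets are built from the same two
# class-specific voxel patterns; only the additive noise level differs.
patterns = rng.normal(size=(2, n_voxels))

def simulate(noise_sd):
    """Return a trial-by-voxel dataset: shared signal plus Gaussian noise."""
    return patterns[labels] + rng.normal(scale=noise_sd, size=(n_trials, n_voxels))

A = simulate(noise_sd=3.0)   # lower-SNR dataset
B = simulate(noise_sd=1.0)   # higher-SNR dataset

def cross_classify(train_X, test_X):
    """Train on one dataset, test on the other (same trial labels in both)."""
    clf = LogisticRegression(max_iter=1000).fit(train_X, labels)
    return clf.score(test_X, labels)

print("train A (low SNR)  -> test B (high SNR):", cross_classify(A, B))
print("train B (high SNR) -> test A (low SNR): ", cross_classify(B, A))
```

With settings like these, the direction that trains on the noisier (lower-SNR) dataset and tests on the cleaner one typically yields the higher cross-classification accuracy, consistent with the asymmetry the abstract reports.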