CoNNECT: Convolutional Neural Network for Estimating synaptic Connectivity from spike Trains

Daisuke Endo (1), Ryota Kobayashi (2,3,4), Ramon Bartolo (5), Bruno B. Averbeck (5), Yasuko Sugase-Miyamoto (6), Kazuko Hayashi (6,7), Kenji Kawano (6), Barry J. Richmond (5), Shigeru Shinomoto (8,9)
doi: https://doi.org/10.1101/2020.05.05.078089
1 Graduate School of Informatics, Kyoto University, Kyoto 606-8501, Japan
2 Mathematics and Informatics Center, The University of Tokyo, Tokyo 113-8656, Japan
3 Department of Complexity Science and Engineering, The University of Tokyo, Chiba 277-8561, Japan
4 JST, PRESTO, Saitama 332-0012, Japan
5 Laboratory of Neuropsychology, NIMH/NIH/DHHS, Bethesda, MD 20814, USA
6 Human Informatics and Interaction Research Institute, National Institute of Advanced Industrial Science and Technology, Tsukuba 305-8568, Japan
7 Japan Society for the Promotion of Science, Tokyo 102-0083, Japan
8 Department of Physics, Kyoto University, Kyoto 606-8502, Japan
9 Brain Information Communication Research Laboratory Group, ATR Institute International, Kyoto 619-0288, Japan

For correspondence: shinomoto.shigeru.6e@kyoto-u.ac.jp

Abstract

The recent increase in reliable, simultaneous high channel count extracellular recordings is exciting for physiologists and theoreticians because it offers the possibility of reconstructing the underlying neuronal circuits. We recently presented a method of inferring this circuit connectivity from neuronal spike trains by applying a generalized linear model to cross-correlograms, GLMCC. Although the GLMCC algorithm can do a good job of circuit reconstruction, its parameters need to be carefully tuned for each individual dataset. Here we present another algorithm that uses a convolutional neural network for estimating synaptic connectivity from spike trains, CoNNECT. After training on a very large amount of simulated data, this algorithm robustly captures the specific signature of a monosynaptic impact in a noisy cross-correlogram. There are no user-adjustable parameters. With this new algorithm, we have constructed diagrams of neuronal circuits recorded in several cortical areas of monkeys.

I. INTRODUCTION

More than half a century ago, Perkel, Gerstein, and Moore [1] pointed out that by measuring the influence of one neuron on another through a cross-correlogram, physiologists could infer the strength of the connection between the neurons. If this were done for many pairs of neurons, a map of the neuronal circuitry could be built. In practice, this turned out to be difficult. Now, with the advent of high-quality simultaneous recording from large arrays of neurons, it may have become possible to map the structures of neuronal circuits.

The original cross-correlation method can give plausible inferences about connections. In many cases, however, cross-correlograms also tend to suggest the presence of connections that are spurious, i.e., false positives (FPs). There are many possible sources of this lack of reliability and specificity, such as large fluctuations produced by external signals or higher-order interactions among neurons. Over the years, there have been many attempts to minimize the presence of such spurious connections, by shuffling spike trains [2], by jittering spike times [3–5], or by taking fluctuating inputs into account [6–10]. These, in general, helped eliminate FPs, but they tended to be conservative, giving rise to false negatives (FNs), i.e., missing existing connections.

Recently, we developed an estimation method that works well in balancing the conflicting demands of reducing FPs and reducing FNs [11]. The estimation method we call GLMCC (GLM cross-correlation) nonetheless has a shortcoming: the estimation results are sensitive to the model parameters, and therefore the parameters need to be tuned for the spiking data. Here, we develop another algorithm: Convolutional Neural Network for Estimating synaptic Connectivity from spike Trains (CoNNECT). The two premises are that a convolutional neural network is good at capturing the features important for distinguishing among images (in this case, cross-correlograms) and that the cross-correlogram image will contain sufficient information from which to infer the presence of monosynaptic connectivity. This new algorithm is easy to use, and it works robustly with data arising from different cortical regions in non-human primates. The convolutional neural network algorithm has tens of thousands of internal parameters [12–15]. The parameters are adjusted using hundreds of thousands of pairs of spike trains generated with a large-scale simulation of the circuitry of realistic model neurons. To reproduce large fluctuations in real spike trains, we added external fluctuations to the model neurons in the simulation.

CoNNECT provides reasonable inferences. It does not, however, give a rationale for how a result was derived, whereas our previous algorithm GLMCC does, because it fits an interaction kernel to the cross-correlogram. These methods, therefore, have different strengths and weaknesses and can be used in combination in a complementary manner: the inference given by CoNNECT can be used to guide GLMCC in searching for suitable parameters, and GLMCC can provide interpretation.

We evaluated the accuracy of estimation by comparing the inferred connections with the true connections, using synthetic data generated by simulating circuits of model neurons, and compared the performance of CoNNECT with that of GLMCC, the classical cross-correlogram method [16, 17], and the Jittering method [4, 18]. After confirming the performance of the model, we applied CoNNECT to parallel spike signals recorded from three cortical areas of monkeys and obtained estimates of the local neuronal circuitry among many neurons. We found that the connections among recorded units are sparse; fewer than 1% of unit pairs are connected in all three datasets.

II. RESULTS

A. Training and validating with synthetic data

CoNNECT infers the presence or absence of monosynaptic connections between a pair of neurons and estimates the amplitude of the postsynaptic potential (PSP) that one neuron would drive in another. The estimation is performed by applying a convolutional neural network [12–15] to a cross-correlogram obtained for every pair of spike trains (Fig. 1a). The algorithm was trained with spike trains generated by a numerical simulation of a network of multiple-timescale adaptive threshold (MAT) model neurons [19, 20] interacting through fixed synapses. In this large-scale simulation, we applied fluctuating inputs to a subset of neurons to reproduce the large fluctuations seen in real spike trains in vivo (Fig. 1b). Figures 1c, 1d, and 1e show sample spike trains, histograms of the firing rates of excitatory and inhibitory neurons, and the firing irregularity measured in terms of the local variation of the interspike intervals, Lv [21, 22]. The details of the learning procedure are summarized in METHODS.
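As a concrete illustration of the input to this pipeline, the sketch below builds the ±50 ms, 1-ms-bin cross-correlogram that the network receives (100 bins; see METHODS). The function name and the Poisson example data are ours and are not taken from the CoNNECT code.

```python
import numpy as np

def cross_correlogram(ref_spikes, tgt_spikes, window=50.0, bin_size=1.0):
    """Histogram of spike-time differences (target minus reference), in ms.

    Returns 2 * window / bin_size integer counts, i.e. 100 bins covering
    [-50, 50] ms for the default arguments.
    """
    edges = np.arange(-window, window + bin_size, bin_size)
    counts = np.zeros(len(edges) - 1, dtype=int)
    tgt_spikes = np.asarray(tgt_spikes)
    for t_ref in ref_spikes:
        # Count target spikes falling within +/- window of this reference spike.
        lo = np.searchsorted(tgt_spikes, t_ref - window)
        hi = np.searchsorted(tgt_spikes, t_ref + window)
        counts += np.histogram(tgt_spikes[lo:hi] - t_ref, bins=edges)[0]
    return counts

# Example with two independent Poisson spike trains (times in ms, sorted).
rng = np.random.default_rng(0)
ref = np.sort(rng.uniform(0, 600_000, 3000))   # ~5 Hz over 10 minutes
tgt = np.sort(rng.uniform(0, 600_000, 3000))
cc = cross_correlogram(ref, tgt)
print(cc.shape)                                # (100,)
```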

FIG. 1.

The architecture of the algorithm CoNNECT. (a) The algorithm infers the presence or absence of monosynaptic connectivity and the value of postsynaptic potential (PSP) from the cross-correlogram obtained from a pair of spike trains. (b) The algorithm is trained with spike trains generated by a numerical simulation of neurons interacting through fixed synapses. Slow fluctuations were added to a subset of neurons to reproduce large fluctuations in real spike trains in vivo. (c) Sample spike trains (cyan: inhibitory neurons; magenta: excitatory neurons). (d) Firing rates of excitatory and inhibitory neurons. (e) Firing irregularity measured in terms of the local variation of the interspike intervals Lv.

We validated the estimation performance of the algorithm using novel spike trains generated by another neuronal circuit with different connections. Figure 2a depicts an estimated connection matrix of 50 neurons, referenced to the true connection matrix. Here, the estimation was done with spike trains recorded for 120 min. Of the 50 spike trains, 40 and 10 were sampled from 800 excitatory and 200 inhibitory neurons, respectively. Figure 2b compares the estimated PSPs against the true values. An estimated PSP is plotted as 0 if the connection was not detected. Points lying on the nonzero x-axis are existing connections that were not detected, i.e., FNs. Points lying on the nonzero y-axis are spurious connections assigned to unconnected pairs, i.e., FPs. Figure 2c depicts how the numbers of FNs and FPs for the excitatory and inhibitory categories changed with the recording duration, or the length of the spike trains (10, 30, and 120 min). While the number of FPs, or spurious connections, does not depend strongly on the recording duration, the number of FNs, or missed connections, decreased with recording duration, implying that more synaptic connections of weaker strength are revealed by increasing the recording time.

FIG. 2.

Synaptic connections estimated using CoNNECT. (a) An estimated connection matrix, referenced to the true connection matrix. Of 50 neurons, 40 and 10 are, respectively, excitatory and inhibitory neurons sampled from 1,000 model neurons simulated for 120 min. Excitatory and inhibitory connections are represented, respectively, by magenta and cyan squares of sizes proportional to the postsynaptic potential (PSP). (b) Estimated PSPs plotted against the true values. Points on the nonzero y-axis represent false positives (FPs) for unconnected pairs. Points on the nonzero x-axis represent false negatives (FNs). (c) The numbers of FPs and FNs for the excitatory and inhibitory categories counted for different recording durations.

Comparison with other estimation methods

We compared CoNNECT with the conventional cross-correlation method (CC) [16], the Jittering method [4], and GLMCC [11] with respect to their ability to estimate connectivity using synthetic data. Figure 3 shows the connection matrices determined by the four methods, referenced to the true connection matrices. The numbers of FNs and FPs for the excitatory and inhibitory categories are depicted below the connection matrices; smaller numbers of FPs and FNs are better. Overall performance, which requires small numbers of both FPs and FNs, was measured in terms of the Matthews correlation coefficient (MCC) (see METHODS). The MCCs for these estimation methods are shown in the rightmost panel; a larger MCC is better. To evaluate performance, we used spiking data generated by a network of MAT models and a network of Hodgkin-Huxley (HH) type models (METHODS). In computing the numbers of FPs and FNs, we ignored small excitatory connections (< 0.1 mV for the MAT simulation and < 1 mV for the HH simulation), which are inherently difficult to discern with this observation duration.

FIG. 3.

Comparison of estimation methods using two kinds of synthetic data. (a) The multiple-timescale adaptive threshold (MAT) model simulation. (b) The Hodgkin-Huxley (HH) type model simulation. Connections estimated using the conventional cross-correlation method (CC), the Jittering method, GLMCC, and CoNNECT are depicted, referenced to the true connectivity of the synthetic data. Estimated connections are depicted in equal size for the first two methods because they do not estimate the amplitude of the PSP. The numbers of FPs and FNs for the excitatory and inhibitory categories are depicted below the matrices; the smaller, the better. In the rightmost panel, the overall performances of the estimation methods are compared in terms of the Matthews correlation coefficient (MCC); the larger, the better.

The conventional cross-correlation analysis produced many FPs, revealing its vulnerability to fluctuations in cross-correlograms. The Jittering method succeeded in avoiding FPs but missed many existing connections, thus generating many FNs. Compared with these methods, GLMCC and CoNNECT perform better, producing small numbers of FPs and FNs and larger MCC values. Here we have modified GLMCC so that it achieves higher performance than the original algorithm [11], by using the likelihood ratio test to determine statistical significance (METHODS). Comparing these two algorithms, GLMCC was slightly conservative, producing more FNs, while CoNNECT tended to suggest more connections, producing more FPs. The modified GLMCC was better than CoNNECT for the HH model data (Fig. 3b), but the converse was true for the MAT model data (Fig. 3a). This might be because CoNNECT was trained using MAT model data of a similar kind, whereas GLMCC was constructed with the HH model simulation in mind. Although the testing data were generated by separate simulations, there is more similarity within the same model type than across different types.

Cross-correlograms

To observe the situations in which the different estimation methods succeeded or failed in detecting the presence or absence of synaptic connectivity, we examined sample cross-correlograms of neuron pairs from a network of MAT model neurons (Fig. 4). Some cross-correlograms from this simulation exhibited large fluctuations that resemble what is seen in real biological data. These were produced by the external fluctuations added to a subset of neurons, which made connectivity inference difficult. The inference results obtained by the four estimation methods are distinguished with colors; magenta, cyan, and gray indicate that the estimated connection was excitatory, inhibitory, or absent, respectively. We also superimposed the GLM function fitted to each cross-correlogram.

FIG. 4.

Sample cross-correlograms obtained from the MAT model simulation. (a), (b), and (c) Pairs of neurons with excitatory connections, with inhibitory connections, and with no connection, respectively. Four estimation methods, cross-correlation (CC), Jittering (Jit), GLMCC (GLM), and CoNNECT (CoNN), were applied to the cross-correlograms. Their estimation results (excitatory, inhibitory, and unconnected) are distinguished with colors (magenta, cyan, and gray, respectively). The lines plotted on the cross-correlograms are the GLM functions fitted by GLMCC. The causal impact from a pre-neuron to a post-neuron appears on the right half of each cross-correlogram.

Figures 4a and 4b depict sample cross-correlograms of neuron pairs that are connected with excitatory and inhibitory synapses, respectively. For the first three cross-correlograms from the top, all four estimation methods succeeded in detecting the excitatory or inhibitory connections, thus making true positive (TP) estimations. For the fourth case, the Jittering method failed to detect the connection. This reflects the fact that the Jittering method is rather conservative about producing FPs and, as a result, produces many FNs. In this case, the cross-correlation method (CC) mistook the excitatory synapse for an inhibitory one because of the large wavy fluctuation in the cross-correlogram. For the last cases, all four estimation methods failed to detect the connection, resulting in FNs. This is presumably because the original connections were not strong enough to produce significant impacts on the cross-correlograms.

Figure 4c depicts sample cross-correlograms of unconnected pairs. For the first two cross-correlograms, all four estimation methods correctly judged the absence of a connection (that is, the null hypothesis of no connection was not rejected), resulting in true negatives (TNs). For the third pair, the CC suggested the presence of a connection, resulting in an FP. This demonstrates that the conventional cross-correlation method is fragile in the presence of large fluctuations. For the fourth and last cases, the CC, GLMCC, and CoNNECT suggested monosynaptic connections. The sharp peaks appearing in these cross-correlograms were presumably caused by indirect interactions via other neurons. In such cases, however, it is difficult to discern the absence of a monosynaptic connection solely from the cross-correlogram. It is interesting that the Jittering method refrained from suggesting connections here, owing to its strongly conservative stance.

B. Analyzing experimental data

We examined spike trains recorded from the prefrontal (PF), inferior temporal (IT), and primary visual (V1) cortices of monkeys using Utah arrays. The experimental conditions of the individual datasets are summarized in METHODS. Because neurons with low firing rates do not provide enough evidence for connectivity, we excluded low-firing units and examined only those that fired at more than 1 Hz.

Figure 5 depicts the estimated connections for the three datasets. The units in the connection matrices are arranged in the order provided by the sorting algorithm, and accordingly, units with neighboring indices in the matrices tended to be spatially close. All three connection matrices had more entries near the diagonal, implying that neurons at nearby locations are more likely to be connected.

FIG. 5.

Connection matrices and diagrams estimated for spike trains recorded from the prefrontal (PF), inferior temporal (IT), and primary visual (V1) cortices of monkeys. In the connection diagrams, excitatory-dominant and inhibitory-dominant units are depicted as triangles and circles, respectively, and units that have no outgoing connections, or that innervate equal numbers of excitatory and inhibitory connections, are depicted as squares.

Table I summarizes the statistics of the three datasets. Each neuron is classified as putative excitatory, putative inhibitory, or undetermined, according to whether the excitatory–inhibitory (E–I) dominance index is positive, negative, or undetermined (or zero), respectively. Here, the E–I dominance index is defined as dei = (ne − ni)/(ne + ni), in which ne and ni represent the numbers of identified excitatory and inhibitory connections projecting from each unit, respectively [11]. Though we obtained many connections, the total number of possible pairs is enormous, scaling with the square of the number of units, and accordingly, the connectivity is sparse (fewer than 1% of (directed) pairs of neurons are connected). Because of this sparse connectivity, many units were left unconnected within the recorded population, and their E–I dominance index dei remained undetermined.

TABLE I.

Results of analyzing experimental datasets.
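The E–I dominance index defined above can be computed directly from an estimated connection matrix. The sketch below assumes a matrix J whose entry J[i, j] holds the signed PSP (in mV) estimated for the connection from unit i to unit j, with 0 for undetected connections; this data layout is our assumption.

```python
import numpy as np

def ei_dominance(J):
    """E-I dominance index d_ei = (n_e - n_i) / (n_e + n_i) for each unit.

    J[i, j] is the signed PSP (mV) estimated for the connection i -> j,
    with 0 meaning no detected connection.  Units with no detected
    outgoing connections get NaN (index undetermined).
    """
    n_e = (J > 0).sum(axis=1).astype(float)   # outgoing excitatory connections
    n_i = (J < 0).sum(axis=1).astype(float)   # outgoing inhibitory connections
    total = n_e + n_i
    with np.errstate(invalid="ignore"):
        d = (n_e - n_i) / total               # NaN where total == 0
    return d
```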

In contrast to synthetic data, the currently available experimental data do not contain information about the true connectivity. To examine the reliability of the estimation to some extent, we split the recordings in half and compared the connections estimated from each half. If the real connectivity is stable, we may expect the estimated connections to overlap between the first and second halves. Figure 6a shows the connection matrices obtained from the first and second halves of the spike trains recorded from PF, IT, and V1. Figure 6b compares the PSPs estimated in the two periods. Many estimated connections appear in only one of the two. This might simply be due to statistical fluctuation, or it might reflect real changes in synaptic connectivity. Nevertheless, it is noteworthy that connections of large amplitude were detected consistently in the first and second halves. Namely, they appear along the diagonal in the first and third quadrants, implying that they have the same sign and similar amplitudes.

FIG. 6.

Stability of connection estimation. (a) Connection matrices estimated for the first and second halves of the spike trains recorded from PF, IT, and V1. (b) Comparison of the PSPs estimated from the first and second halves.

Cross-correlograms

Some of the experimentally obtained cross-correlograms exhibit a sharp drop near the origin for a few ms due to the shadowing effect, in which near-synchronous spikes cannot be detected [23]. This effect disrupts the estimation of synaptic impacts, which should appear near the origin of the cross-correlogram. The data obtained with the sorting algorithm specifically used for the Utah array exhibit rather broad shadowing effects of more than 1 ms (up to 1.75 ms). Here, we analyzed the experimental data by removing an interval of 0 ± 2 ms from the cross-correlogram and applying the estimation algorithm to a cross-correlogram obtained by concatenating the remaining left and right parts (Fig. 7a).
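The preprocessing described here can be sketched as follows for a 1-ms-bin histogram covering [−50, 50] ms: the bins within ±2 ms of the origin are dropped and the remaining halves are concatenated. The bin indexing is our assumption.

```python
import numpy as np

def remove_shadowed_bins(cc, window=50, bin_size=1, cut=2):
    """Drop the bins within +/- `cut` ms of the origin and concatenate the rest.

    `cc` is a cross-correlogram with `bin_size`-ms bins covering
    [-window, window] ms (100 bins for the defaults); the result has
    2 * (window - cut) / bin_size bins (96 for the defaults).
    """
    n_half = int(window / bin_size)            # index of the bin starting at 0 ms
    n_cut = int(cut / bin_size)
    left = cc[: n_half - n_cut]                # bins covering [-50, -2) ms
    right = cc[n_half + n_cut:]                # bins covering [+2, +50) ms
    return np.concatenate([left, right])
```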

FIG. 7.

Cross-correlograms of real spike trains recorded from PF, IT, and V1 using the Utah arrays. (a) An interval of 0 ± 2 ms in the original cross-correlogram was removed to mitigate the shadowing effect, in which near-synchronous spikes were not detected. (b) Processing real cross-correlograms. (c) The cross-correlograms for which CoNNECT and GLMCC gave the same inference. The fitted GLM functions are superimposed on the histograms. The causal impact from a pre-neuron to a post-neuron appears on the right half in the cross-correlogram.

Figure 7b shows the cross-correlograms of sample neuron pairs for which both CoNNECT and GLMCC estimated the connections as excitatory, inhibitory, or absent (unconnected). The real cross-correlograms are accompanied by large fluctuations. Nevertheless, CoNNECT and GLMCC appeared able to detect the likely presence or absence of synaptic interaction while ignoring these severe fluctuations.

III. DISCUSSION

Here we have devised a new algorithm for estimating synaptic connections based on a convolutional neural network. While this algorithm does not require adjusting parameters for individual datasets, it robustly provides a reasonable estimate of synaptic connections by tolerating large fluctuations in the data. This robustness was obtained by training the convolutional neural network on a large amount of training data generated by simulating a network of model spiking neurons subject to fluctuating currents.

We compared CoNNECT with the conventional cross-correlation method, the Jittering method, and GLMCC in their ability to estimate connectivity, using synthetic data obtained by simulating neuronal circuitries of fixed synaptic connections. Both CoNNECT and GLMCC exhibited high performance in predicting individual synaptic connections, much superior to conventional methods.

We then applied CoNNECT to spike trains recorded simultaneously from monkeys using Utah arrays. We found that the connections among recorded units are sparse; fewer than 1% of unit pairs are connected in all three datasets. To test the reliability of the estimation, we divided the entire recording interval in half and estimated connections for each half separately. We found that the strong (excitatory or inhibitory) connections overlap between the two periods. This result implies that the estimation is reliable for strong connections and that this connectivity lasts for at least hours.

The cross-correlograms of real biological data (Fig. 7) turned out to be even more complicated than those of synthetic data (Fig. 4), which were generated by adding large fluctuations to individual neurons (Fig. 1). The complicated features in real cross-correlograms were not only due to fluctuations in real circuitry but also due to the sorting algorithm. The most serious bottleneck in estimating connectivity may have been the shadowing effect of a few ms, in which near-synchronous spikes were not detected (Fig. 7a); this effect might hide the first part of a monosynaptic impact, which is expected to show up in a few ms in a cross-correlogram. If the sorting algorithm is improved such that the shadowing duration is shortened, the estimation might be more reliable.

So far, we have little knowledge about neuronal circuitry in the brain. By collecting more data from high channel count recordings and applying these reliable analysis methods to them, we should be able to obtain information about neuronal circuitry in different brain regions and learn about the network characteristics and information flow in each area. Ultimately, we expect to characterize how the networks of different brain regions process various kinds of information.

IV. METHODS

A. Configuration of the learning algorithm for estimating synaptic connectivity

Here we describe the details of the four-layered convolutional neural network [12–15] that is applied to the cross-correlogram obtained for every pair of spike trains to estimate the presence or absence of a connection and its postsynaptic potential (PSP) (Fig. 1). The algorithm learns to find a bump or dent in the cross-correlogram that would be caused by a monosynaptic connection.

In particular, the input consists of 100 integer values, the spike counts of a cross-correlation histogram over the interval [−50, 50] ms with a 1 ms bin size. The network is composed of a one-dimensional convolution layer, an average-pooling layer, and a fully connected internal layer of 100 nodes. The output layer consists of two units: one represents the confidence in the presence of connectivity as a real value z ∈ [0, 1], and the other represents the PSP amplitude in units of mV.
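The sketch below (in PyTorch) lays out one network of this shape: a one-dimensional convolution, average pooling, a 100-node fully connected layer, and a two-unit output giving the connection confidence z and the PSP in mV. The filter count, kernel size, pooling width, and activation functions are not stated in this paragraph and are placeholders here; the settings actually used are listed in Tables II and III.

```python
import torch
import torch.nn as nn

class ConnectCNN(nn.Module):
    """Sketch of a CoNNECT-style network for a 100-bin cross-correlogram."""

    def __init__(self, n_bins=100, n_filters=16, kernel=5):
        super().__init__()
        self.conv = nn.Conv1d(1, n_filters, kernel_size=kernel, padding=kernel // 2)
        self.pool = nn.AvgPool1d(kernel_size=2)
        self.fc = nn.Linear(n_filters * (n_bins // 2), 100)
        self.out = nn.Linear(100, 2)

    def forward(self, cc):
        # cc: (batch, 100) tensor of cross-correlogram counts
        x = torch.relu(self.conv(cc.unsqueeze(1)))   # (batch, n_filters, 100)
        x = self.pool(x)                             # (batch, n_filters, 50)
        x = torch.relu(self.fc(x.flatten(1)))        # (batch, 100)
        z_logit, psp = self.out(x).unbind(dim=1)
        return torch.sigmoid(z_logit), psp           # confidence z in [0, 1], PSP in mV

model = ConnectCNN()
z, psp = model(torch.randn(8, 100))                  # a batch of 8 dummy correlograms
```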

Training the convolutional neural network

We ran a numerical simulation of a network of 1,000 neurons interacting through fixed synapses under various conditions and trained the algorithm with spike trains from 400 units selected from the entire network. From these, we constructed cross-correlograms of about 80,000 pairs, each of which was assigned teaching signals consisting of the true presence or absence of connectivity (represented as z = 1 or 0, respectively) and the PSP value in each direction. The training was performed using Adam [24], an algorithm for first-order gradient-based optimization of stochastic objective functions based on adaptive estimates of lower-order moments. The parameters adopted in the learning are summarized in Table II.
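A single training step might look like the sketch below, reusing the ConnectCNN sketch above and the Adam optimizer named in the text. The loss functions are not specified here; binary cross-entropy for the connection label plus mean-squared error for the PSP of truly connected pairs is our assumption, not the paper's recipe.

```python
import torch
import torch.nn as nn

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)   # learning rate assumed
bce, mse = nn.BCELoss(), nn.MSELoss()

def training_step(cc_batch, z_true, psp_true):
    """One gradient update on a batch of correlograms and teaching signals."""
    z_hat, psp_hat = model(cc_batch)
    loss = bce(z_hat, z_true)                    # presence/absence of a connection
    connected = z_true > 0.5
    if connected.any():                          # regress the PSP only for connected pairs
        loss = loss + mse(psp_hat[connected], psp_true[connected])
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```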

TABLE II.

Hyperparameters of the convolutional neural network.

TABLE III.

Architecture of the convolutional neural network.

Data Augmentation

To make the estimation method applicable to a wider range of data, we performed data augmentation [25–31]. Namely, we augmented the data by rescaling the cross-correlograms by factors of 2 and 4 and used all of these data, together with the original data, in the learning.
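A minimal sketch of this augmentation, assuming that "rescaling by 2 and 4 times" means multiplying the histogram counts by those factors while keeping the teaching signals unchanged (the text does not spell out the operation):

```python
import numpy as np

def augment(correlograms, labels, factors=(2, 4)):
    """Return the original cross-correlograms plus copies rescaled by each factor.

    The teaching signals (labels) of the rescaled copies are kept identical
    to those of the originals.
    """
    cc_aug = list(correlograms)
    lab_aug = list(labels)
    for f in factors:
        cc_aug += [f * np.asarray(cc) for cc in correlograms]
        lab_aug += list(labels)
    return cc_aug, lab_aug
```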

Web-application program

A ready-to-use version of the web application, the source code, and example data sets are available at our website, http://www.ton.scphys.kyoto-u.ac.jp/%7Eshino/CONNECT

B. Improvement of GLMCC

Original framework of GLMCC

In the previous study [11], we developed a method of estimating connectivity by fitting a generalized linear model to the cross-correlogram, GLMCC. We designed the GLM function as

c(t) = exp(a(t) + J12 f(t) + J21 f(−t)),

where t is the time from the spikes of the reference neuron. a(t) represents large-scale fluctuations in the cross-correlogram in a window [−W, W] (W = 50 ms). By discretizing time in units of Δ (= 1 ms), a(t) is represented as a vector a = (a1, a2, …, aN) with N = 2W/Δ. J12 (J21) represents a possible synaptic connection from the reference (target) neuron to the target (reference) neuron. The temporal profile of the synaptic interaction is modeled as

f(t) = exp(−(t − d)/τ)

for t > d and f(t) = 0 otherwise, where τ is the typical timescale of the synaptic impact and d is the transmission delay. Here we have chosen τ = 4 ms and let the synaptic delay d be selected from 1, 2, 3, and 4 ms for each pair.
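The sketch below evaluates the GLM intensity c(t) of the form given above on the 1-ms bin grid; the parameter values in the example are placeholders.

```python
import numpy as np

W, DT, TAU = 50.0, 1.0, 4.0                       # window, bin size, synaptic timescale (ms)
t = np.arange(-W + DT / 2, W, DT)                 # 100 bin centres in [-50, 50] ms

def synaptic_kernel(t, d):
    """f(t) = exp(-(t - d)/tau) for t > d, and 0 otherwise."""
    return np.where(t > d, np.exp(-(t - d) / TAU), 0.0)

def glm_rate(a, j12, j21, d=1.0):
    """GLM intensity c(t) = exp(a(t) + J12 f(t) + J21 f(-t)) on the bin grid."""
    return np.exp(a + j12 * synaptic_kernel(t, d) + j21 * synaptic_kernel(-t, d))

# Flat background of 10 counts per bin with a weak excitatory connection J12 = 0.5.
c = glm_rate(a=np.full(t.shape, np.log(10.0)), j12=0.5, j21=0.0, d=2.0)
```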

The parameters θ = {a(t), J12, J21} are determined by the maximum a posteriori (MAP) estimate, that is, by maximizing the posterior distribution or, equivalently, its logarithm, log p(θ | {ti}) = log p({ti} | θ) + log p(θ) + const., where {ti} are the relative spike times. The log-likelihood is obtained as

log p({ti} | θ) = Σi log c(ti) − npre ∫ c(t) dt,

where the integral runs over the window [−W, W] and npre is the number of spikes of the presynaptic neuron. For the prior, we penalize a large gradient of a(t) and adopt a uniform prior for {J12, J21}; the hyperparameter γ, representing the degree of flatness of a(t), was chosen as γ = 2 × 10−4 [ms−1].

Likelihood ratio test

The likely presence of a connection can be determined by disproving the null hypothesis that the connection is absent. In the original model, this was performed by thresholding the estimated parameter with a value determined by zα, T, λpre, and λpost, where zα is a threshold for the normal distribution, T is the recording time, and λpre and λpost are the firing rates of the pre- and postsynaptic neurons. We realized, however, that this thresholding method might induce a large asymmetry in detectability between excitatory and inhibitory connections.

Instead of simple thresholding, here we introduce the likelihood ratio test, a general method for testing hypotheses (Chapter 11 of [32]; see also [33]). We compute the likelihood ratio between the presence of connectivity (Jij equal to its MAP estimate) and the absence of connectivity (Jij = 0), or its logarithm,

D = log L*(Jij = Ĵij) − log L*(Jij = 0),

where L*(Jij = c) in each case is the likelihood obtained by optimizing all the other parameters under the constraint Jij = c. It has been proven that 2D obeys the χ2 distribution in the large-sample limit (Wilks' theorem) [34]. Accordingly, we may reject the null hypothesis if 2D > zα, where zα is the threshold of the χ2 distribution at a significance level α. Here we have adopted α = 10−4.
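The decision rule 2D > zα can be evaluated with the χ2 quantile function; the sketch below assumes one degree of freedom (a single constrained parameter, Jij), which the text does not state explicitly.

```python
from scipy.stats import chi2

ALPHA = 1e-4
Z_ALPHA = chi2.ppf(1.0 - ALPHA, df=1)   # ~15.1 for df = 1 (assumed)

def connection_significant(loglik_full, loglik_null):
    """Likelihood ratio test: reject 'no connection' if 2D exceeds z_alpha.

    loglik_full: maximized log-likelihood with J_ij free (J_ij = J_hat),
    loglik_null: maximized log-likelihood with J_ij constrained to 0.
    """
    two_d = 2.0 * (loglik_full - loglik_null)
    return two_d > Z_ALPHA
```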

C. Model validation

The performance of CoNNECT was evaluated using synthetic data generated by independent simulations. The presence or absence of connectivity in each direction is decided by whether or not the output value z ∈ [0, 1] exceeds a threshold θ. It is possible to reduce the number of FPs by shifting the threshold θ to a high level, but this may produce many FNs, causing many existing connections to be missed. To balance the false positives and false negatives, we maximized the Matthews correlation coefficient (MCC) [35], as in our previous study [11]. The MCC is defined as

MCC = (NTP NTN − NFP NFN) / sqrt((NTP + NFP)(NTP + NFN)(NTN + NFP)(NTN + NFN)),

where NTP, NTN, NFP, and NFN represent the numbers of true positive, true negative, false positive, and false negative connections, respectively.

We have obtained two coefficients for excitatory and inhibitory categories and taken the macro-average MCC that gives equal importance to these categories (Macro-average) [36], MCC = (MCCE + MCCI)/2 as we have done in the previous study [11]. In computing the coefficient for the excitatory category MCCE, we classify connections as excitatory or other (unconnected and inhibitory); for the inhibitory category MCCI, we classify connections as inhibitory or other (unconnected and excitatory). Here we evaluate MCCE by considering only excitatory connections of reasonable strength (EPSP > 0.1 mV for the MAT simulation and > 1 mV for the HH simulation).
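A sketch of the macro-average MCC over the excitatory and inhibitory categories, computed from signed connection labels; the array encoding (+1 excitatory, −1 inhibitory, 0 unconnected) is our choice.

```python
import numpy as np

def mcc(n_tp, n_tn, n_fp, n_fn):
    """Matthews correlation coefficient from the four confusion counts."""
    num = n_tp * n_tn - n_fp * n_fn
    den = np.sqrt(float((n_tp + n_fp) * (n_tp + n_fn) * (n_tn + n_fp) * (n_tn + n_fn)))
    return num / den if den > 0 else 0.0

def macro_mcc(true_sign, est_sign):
    """Macro-average MCC: mean of the excitatory and inhibitory coefficients.

    true_sign and est_sign are arrays over directed pairs with values
    +1 (excitatory), -1 (inhibitory), or 0 (unconnected / not detected).
    """
    coeffs = []
    for s in (+1, -1):                            # excitatory, then inhibitory
        t, e = (true_sign == s), (est_sign == s)  # "this class" versus "other"
        coeffs.append(mcc((t & e).sum(), (~t & ~e).sum(),
                          (~t & e).sum(), (t & ~e).sum()))
    return 0.5 * (coeffs[0] + coeffs[1])
```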

We have confirmed that the Matthews correlation coefficient exhibits a wide peak at about θ ~ 0.5 (Fig. 8), and accordingly, we adopted θ = 0.5 as the threshold.

FIG. 8.

The Matthews correlation coefficient (MCC) plotted against the threshold θ for determining the presence and absence of the connection.

D. A large-scale simulation of a network of MAT neurons

To obtain a large number of spike trains shaped by synaptic connections between neurons, we ran a numerical simulation of a network of 1,000 model neurons interacting through fixed synapses. Of these, 800 excitatory neurons innervate 12.5% of the other neurons with EPSPs that are log-normally distributed [11, 37–39], whereas 200 inhibitory neurons innervate 25% of the other neurons at random with IPSPs that are normally distributed.

Neuron model

As the spiking neuron model, we adopted the MAT model, which is superior to the Hodgkin-Huxley model in reproducing and predicting the spike times of real biological neurons in response to fluctuating inputs [19, 20]. In addition, its numerical simulation is stable and fast. The membrane potential of each neuron obeys a simple relaxation equation following the input signal,

τm dv/dt = −v + R I(t),

where the input R I(t) comprises the excitatory conductance ge, the inhibitory conductance gi, and the background noise R Ibg. The conductances evolve as

dg/dt = −g/τs + Σj Gj Σk δ(t − tjk − dj),

where τs is the decay constant, tjk is the kth spike time of the jth neuron, dj is the synaptic delay, and Gj is the synaptic weight from the jth neuron. δ(t) is the Dirac delta function.

Next, the adaptive threshold of each neuron, θ(t), obeys

θ(t) = ω + Σj H(t − tj),
H(t) = Σk αk exp(−t/τk),

where tj is the jth spike time of the neuron, ω is the resting value of the threshold, τk is the kth time constant, and αk is the weight of the kth component. The parameter values are summarized in Table IV.

TABLE IV.

Parameters for Neuron Models
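The sketch below simulates a single MAT neuron in the spirit of the equations above: leaky integration of an input current with a threshold that jumps after each spike and decays with multiple time constants. All numerical values are placeholders rather than the parameters of Table IV, and a 2 ms refractory period is assumed.

```python
import numpy as np

def simulate_mat(I, dt=0.1, tau_m=5.0, R=50.0, omega=15.0,
                 alphas=(30.0, 2.0), taus=(10.0, 200.0), t_ref=2.0):
    """Single MAT neuron: returns spike times (ms) for an input current I[t].

    The membrane potential relaxes toward R*I with time constant tau_m and is
    never reset; the threshold is omega plus a sum of components that jump by
    alphas after each spike and decay with time constants taus.
    """
    v, h = 0.0, np.zeros(len(alphas))
    decay = np.exp(-dt / np.asarray(taus))
    last_spike, spikes = -np.inf, []
    for i, inp in enumerate(I):
        t = i * dt
        v += dt * (-v + R * inp) / tau_m          # membrane relaxation
        h *= decay                                # threshold components decay
        if v >= omega + h.sum() and t - last_spike > t_ref:
            spikes.append(t)                      # spike without resetting v
            h += np.asarray(alphas)               # threshold jumps
            last_spike = t
    return np.array(spikes)

# Example: 1 s of constant input current.
spike_times = simulate_mat(np.full(10_000, 0.5))
```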

Synaptic connections

We ran a simulation of a network consisting of 800 pyramidal neurons and 200 interneurons interconnected with fixed strengths. Each neuron receives 100 excitatory inputs randomly selected from the 800 pyramidal neurons and 50 inhibitory inputs selected from the 200 interneurons. The excitatory and inhibitory synaptic connections were sampled from their respective distributions so that the resulting EPSPs and IPSPs are similar to the distributions adopted in our previous study [11]. In particular, the excitatory conductances were sampled independently from the log-normal distribution [37, 38]

p(g) = 1/(sqrt(2π) σ g) exp(−(ln g − μ)2/(2σ2)),

where μ = −5.543 and σ = 1.30 are the mean and SD of the natural logarithm of the conductances.

The inhibitory conductances were sampled from the normal distribution

p(g) = 1/(sqrt(2π) σ) exp(−(g − μ)2/(2σ2)),

where μ = 0.0217 mS cm−2 and σ = 0.00171 mS cm−2 are the mean and SD of the conductances. If a sampled value was less than zero, the conductance was resampled from the same distribution. The delays of the synaptic connections from excitatory neurons were drawn from a uniform distribution between 3 and 5 ms. The delays of the synaptic connections from inhibitory neurons were drawn from a uniform distribution between 2 and 4 ms.
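A sketch of how the synaptic parameters described in this subsection could be drawn: log-normal excitatory conductances (μ = −5.543, σ = 1.30 of the natural logarithm), normal inhibitory conductances (μ = 0.0217, σ = 0.00171 mS cm−2, resampled if negative), and uniformly distributed delays.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_excitatory(n, mu=-5.543, sigma=1.30):
    """Log-normal excitatory conductances; mu and sigma refer to ln(g)."""
    return rng.lognormal(mean=mu, sigma=sigma, size=n)

def sample_inhibitory(n, mu=0.0217, sigma=0.00171):
    """Normal inhibitory conductances (mS cm^-2); negative draws are resampled."""
    g = rng.normal(mu, sigma, size=n)
    while np.any(g < 0):
        neg = g < 0
        g[neg] = rng.normal(mu, sigma, size=neg.sum())
    return g

g_exc = sample_excitatory(100)              # 100 excitatory inputs per neuron
g_inh = sample_inhibitory(50)               # 50 inhibitory inputs per neuron
d_exc = rng.uniform(3.0, 5.0, size=100)     # excitatory delays (ms)
d_inh = rng.uniform(2.0, 4.0, size=50)      # inhibitory delays (ms)
```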

Background noise

Because our model network is smaller than real mammalian cortical networks, we added a background current Ibg to each neuron to represent the inputs from many unobserved neurons, as previously done by Destexhe et al. [9, 40].

The summed conductance R Ibg represents random bombardments from a number of excitatory and inhibitory neurons. The dynamics of the excitatory or inhibitory conductance can be approximated as a stationary fluctuating process represented by an Ornstein–Uhlenbeck process [41],

dgX/dt = −(gX − gX0)/τX + sqrt(DX) ξ(t),

where gX stands for ge or gi, gX0 and τX are the mean conductance and its relaxation time constant, DX is the noise intensity, and ξ(t) is white Gaussian noise satisfying ⟨ξ(t)⟩ = 0 and ⟨ξ(t)ξ(s)⟩ = δ(t − s).

Real biological data exhibit a wide variety of fluctuations, including non-trivial large variations with characteristic timescales. For instance, hippocampal neurons are subject to the theta oscillation in the frequency range of 3–10 Hz [42]. To reproduce such oscillations, which are also observed in cross-correlograms, we introduced slow oscillations into the background noise of individual neurons, combining a sinusoidal modulation with two independent white Gaussian noise sources ξ1(t) and ξ2(t) satisfying ⟨ξi(t)⟩ = 0 and ⟨ξi(t)ξj(s)⟩ = δij δ(t − s).

Among the N = 1,000 neurons, we added such oscillating background signals to three subgroups of 100 neurons each (80 excitatory and 20 inhibitory neurons), with frequencies of 7, 10, and 20 Hz, respectively. The phases of the oscillation, δ, were chosen randomly from a uniform distribution. The amplitudes of the oscillations were chosen randomly from a uniform distribution over a fixed interval. The parameters for the background inputs are summarized in Table V.
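A sketch of one such background conductance, generated by an Euler–Maruyama discretization of an Ornstein–Uhlenbeck process with a slow sinusoid added on top. The numerical values are placeholders, not the parameters of Table V, and the way the oscillation enters is our assumption.

```python
import numpy as np

def background_conductance(T=10_000.0, dt=0.1, g0=0.01, tau=5.0, sigma=0.003,
                           amp=0.005, freq=10.0, phase=0.0, seed=2):
    """Fluctuating background: OU process around g0 plus a slow sinusoid.

    Times are in ms and freq in Hz; the stationary SD of the OU part is sigma.
    """
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    g = np.empty(n)
    g[0] = g0
    for i in range(1, n):
        noise = sigma * np.sqrt(2.0 * dt / tau) * rng.standard_normal()
        g[i] = g[i - 1] - dt * (g[i - 1] - g0) / tau + noise   # OU relaxation toward g0
    t = np.arange(n) * dt
    return g + amp * np.sin(2.0 * np.pi * freq * t / 1000.0 + phase)
```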

TABLE V.

Parameters for synaptic currents and background inputs.

Numerical simulation

The simulation code was written in C++ and parallelized with the OpenMP framework. The time step was 0.1 ms. The neural activity was simulated for up to 7,200 s.

E. Experimental data

Spike trains were recorded from the PF, IT, and V1 cortices of monkeys in three experimental laboratories using the Utah arrays. Individual experimental settings are summarized as follows.

Prefrontal cortex (PF)

The experimental subject was a male Macaca mulatta (6.7 kg, age 4.5 y). All experimental procedures were performed in accordance with the ILAR Guide for the Care and Use of Laboratory Animals and were approved by the Animal Care and Use Committee of the National Institute of Mental Health (U.S.A.). Procedures adhered to applicable United States federal and local laws, including the Animal Welfare Act (1990 revision) and applicable Regulations (PL89544; USDA 1985) and Public Health Service Policy (PHS2002). Eight 96-electrode arrays (Utah arrays, 10 × 10 arrangement, 400 μm pitch, 1.5 mm depth, Blackrock Microsystems, Salt Lake City, U.S.A.) were implanted on the prefrontal cortex following previously described surgical procedures [43]. Briefly, a single bone flap was temporarily removed from the skull to expose the PFC, and then the dura mater was cut open to insert the electrode arrays into the cortical parenchyma. Next, the dura mater was closed, and the bone flap was placed back and attached with absorbable suture, thus protecting the brain and the implanted arrays. In parallel, a custom-designed connector holder, 3D-printed using biocompatible material, was implanted onto the posterior portion of the skull. Recordings were made using the Grapevine System (Ripple, Salt Lake City, USA). Two Neural Interface Processors (NIPs; 384 channels each) made up the recording system; each NIP was connected to the four multielectrode arrays of one hemisphere. Synchronizing behavioral codes from MonkeyLogic and eye-tracking signals were split and sent to each NIP. The raw extracellular signal was high-pass filtered (1 kHz cutoff) and digitized (30 kHz) to acquire single-unit activity. Spikes were detected online, and the waveforms (snippets) were stored using the Trellis package (Grapevine). Single units were manually sorted offline using custom Matlab scripts to define time-amplitude windows in combination with clustering methods based on PCA feature extraction. Further details about the experiment can be found elsewhere [44]. Briefly, the recordings were carried out while the animals were comfortably seated in front of a computer screen, performing left or right saccadic eye movements. Each trial started with the presentation of a fixation dot at the center of the screen, and the monkeys were required to fixate. After a variable time (400–800 ms) had elapsed, the fixation dot was toggled off and a cue (white square, 2° × 2°) was presented either to the left or right of the fixation dot. The monkeys had to make a saccade towards the cue and hold fixation for 500 ms. 70% of the correctly performed trials were rewarded stochastically with a drop of juice (daily total 175–225 mL). Typically, monkeys performed > 1,000 correct trials in a given recording session, for a recording time of 120–150 minutes.

Inferior temporal cortex (IT)

All experimental procedures were approved by the Animal Care and Use Committee of the National Institute of Advanced Industrial Science and Technology (Japan) and were implemented in accordance with the "Guide for the Care and Use of Laboratory Animals" (eighth ed., National Research Council of the National Academies). The subject was one male Japanese monkey (Macaca fuscata, 11 kg, age 13 y). Four 96-microelectrode arrays (Utah arrays, 10 × 10 layout, 400 μm pitch, 1.5 mm depth, Blackrock Microsystems, Salt Lake City, USA) were surgically implanted on the IT cortex of the left hemisphere. Three arrays were located in area TE and the remaining one in area TEO. Surgical procedures were roughly the same as those described previously [43], except that the bone flap temporarily removed from the skull was located over the IT cortex, and a CILUX chamber was implanted onto the anterior part of the skull to protect the connectors of the arrays. Recordings of neural data and eye positions were made in a single session using the Cerebus™ system (Blackrock Microsystems). The extracellular signal was band-pass filtered (250 Hz–7.5 kHz) and digitized (30 kHz). Units were sorted online before the recording session for the extracellular signal of each electrode using a threshold and time-amplitude windows. Both the spike times and the waveforms (10 and 38 samples before and after a threshold crossing, respectively) of the units were stored using Cerebus Central Suite (Blackrock Microsystems). Single units were refined offline by hand using the PCA projection of the spike waveforms in Offline Sorter™ (Plexon Inc., Dallas, USA). The monkey was seated in a primate chair with its head restrained by a head-holding device so that the eyes were positioned 57 cm in front of a color monitor (GDM-F520, SONY, Japan). The display subtended a visual angle of 40° × 30° with a resolution of 800 × 600 pixels. A television series on animals (NHK's Darwin's Amazing Animals, Asahi Shimbun Publications Inc., Japan) was shown on the display throughout the online spike sorting and the recording session. The monkey's eye position was monitored using an infrared pupil-position monitoring system [45] and was not restricted.

The primary visual cortex (V1)

The dataset was obtained from Collaborative Research in Computational Neuroscience (CRCNS), pvc-11 [46], by courtesy of the authors of [47]. In this experiment, spontaneous activity was measured from the primary visual cortex while a monkey viewed a CRT monitor (1024 × 768 pixels, 100 Hz refresh) displaying a uniform gray screen (luminance of roughly 40 cd/m2). Briefly, the animal was premedicated with atropine sulfate (0.05 mg/kg) and diazepam (Valium, 1.5 mg/kg) 30 min before inducing anesthesia with ketamine HCl (10.0 mg/kg). Anesthesia was maintained throughout the experiment by a continuous intravenous infusion of sufentanil citrate. To minimize eye movements, the animal was paralyzed with a continuous intravenous infusion of vecuronium bromide (0.1 mg/kg/h). Vital signs (EEG, ECG, blood pressure, end-tidal PCO2, temperature, and lung pressure) were monitored continuously. The pupils were dilated with topical atropine and the corneas protected with gas-permeable hard contact lenses. Supplementary lenses were used to bring the retinal image into focus by direct ophthalmoscopy, and the refraction was later adjusted further to optimize the responses of the recorded units. Experiments typically lasted 4–5 days. All experimental procedures complied with guidelines approved by the Albert Einstein College of Medicine of Yeshiva University and New York University Animal Welfare Committees.

Spike sorting and analysis criteria: Waveform segments were sorted offline with an automated sorting algorithm, which clustered similarly shaped waveforms using a competitive mixture decomposition method [48]. The output of this algorithm was then refined by hand with custom time-amplitude window discrimination software (written in MATLAB; MathWorks) for each electrode, taking into account the waveform shape and the interspike interval distribution. To quantify the quality of the recording, the signal-to-noise ratio (SNR) of each candidate unit was computed as the ratio of the average waveform amplitude to the SD of the waveform noise [49–51]. Candidates that fell below an SNR of 2.75 were discarded as multiunit recordings.

ACKNOWLEDGMENTS

We thank Adam Kohn for permitting us to analyze the V1 experimental data and for providing detailed information on the experimental conditions; Richard Saunders and Mark Eldridge for performing surgery on the animal for the IT cortex data; Yuji Nagai and Takafumi Minamimoto for assisting with the surgery; and Rossella Falcone and Narihisa Matsumoto for helpful discussions in preparing the IT cortex data. We also thank Masahiro Naito for his technical assistance in developing the web-application program and Kai Shinomoto for drawing the illustration of a monkey in Figure 1.

R.K. is supported by JSPS KAKENHI Grant Numbers JP17H03279, JP18K11560, and JP19H01133, JST ACT-I Grant Number JPMJPR16UC, and JST PRESTO Grant Number JP-MJPR1925, Japan. B.B.A. is supported by NIMH DIRP ZIA MH002928. Y.S.M is supported by JSPS KAKENHI Grant Number JP18H05020 and New Energy and Industrial Technology Development Organization (NEDO). K.H. is supported by Japan Society for the Promotion of Science (JSPS) and JSPS KAKENHI Grant Number JP19J40302. K.K. is supported by JSPS KAKENHI Grant Number JP19K07804. B.J.R. is supported by NIMH DIRP ZIA MH002032. S.S. is supported by JSPS KAKENHI Grant number 26280007, JST CREST Grant Number JPMJCR1304, and the New Energy and Industrial Technology Development Organization (NEDO).

Footnotes

  • http://www.ton.scphys.kyoto-u.ac.jp/~shino/CONNECT/

References

[1] D. H. Perkel, G. L. Gerstein, and G. P. Moore, Biophys. J. 7, 419 (1967).
[2] K. Toyama, M. Kimura, and K. Tanaka, J. Neurophysiol. 46, 202 (1981).
[3] S. Grun, J. Neurophysiol. 101, 1126 (2009).
[4] A. Amarasingham, M. T. Harrison, N. G. Hatsopoulos, and S. Geman, J. Neurophysiol. 107, 517 (2012).
[5] C. D. Schwindel, K. Ali, B. L. McNaughton, and M. Tatsuno, J. Neurosci. 34, 5454 (2014).
[6] M. Okatan, M. A. Wilson, and E. N. Brown, Neural Comput. 17, 1927 (2005).
[7] J. W. Pillow, J. Shlens, L. Paninski, A. Sher, A. M. Litke, E. Chichilnisky, and E. P. Simoncelli, Nature 454, 995 (2008).
[8] Z. Chen, D. F. Putrino, S. Ghosh, R. Barbieri, and E. N. Brown, IEEE Trans. Neural Syst. Rehabil. Eng. 19, 121 (2011).
[9] R. Kobayashi and K. Kitano, J. Comput. Neurosci. 35, 109 (2013).
[10] Y. V. Zaytsev, A. Morrison, and M. Deger, J. Comput. Neurosci. 39, 77 (2015).
[11] R. Kobayashi, S. Kurita, A. Kurth, K. Kitano, K. Mizuseki, M. Diesmann, B. J. Richmond, and S. Shinomoto, Nat. Commun. 10, 1 (2019).
[12] K. Fukushima, Neural Networks 1, 119 (1988).
[13] Y. LeCun, Y. Bengio, et al., The Handbook of Brain Theory and Neural Networks 3361, 1995 (1995).
[14] A. Krizhevsky, I. Sutskever, and G. E. Hinton, in Advances in Neural Information Processing Systems (2012), pp. 1097–1105.
[15] Y. LeCun, Y. Bengio, and G. Hinton, Nature 521, 436 (2015).
[16] A. M. Aertsen and G. L. Gerstein, Brain Res. 340, 341 (1985).
[17] R. C. Reid and J.-M. Alonso, Nature 378, 281 (1995).
[18] S. Fujisawa, A. Amarasingham, M. T. Harrison, and G. Buzsáki, Nat. Neurosci. 11, 823 (2008).
[19] R. Kobayashi, Y. Tsubo, and S. Shinomoto, Front. Comput. Neurosci. 3, 9 (2009).
[20] Y. Omura, M. M. Carvalho, K. Inokuchi, and T. Fukai, J. Neurosci. 35, 14585 (2015).
[21] S. Shinomoto, K. Shima, and J. Tanji, Neural Comput. 15, 2823 (2003).
[22] Y. Mochizuki, T. Onaga, H. Shimazaki, T. Shimokawa, Y. Tsubo, R. Kimura, A. Saiki, Y. Sakai, Y. Isomura, S. Fujisawa, et al., J. Neurosci. 36, 5736 (2016).
[23] J. W. Pillow, J. Shlens, E. Chichilnisky, and E. P. Simoncelli, PLoS ONE 8, e62123 (2013).
[24] D. P. Kingma and J. Ba, arXiv preprint arXiv:1412.6980 (2014).
[25] L. S. Yaeger, R. F. Lyon, and B. J. Webb, in Advances in Neural Information Processing Systems (1997), pp. 807–816.
[26] D. C. Ciresan, U. Meier, L. M. Gambardella, and J. Schmidhuber, in 2011 International Conference on Document Analysis and Recognition (IEEE, 2011), pp. 1135–1139.
[27] S. C. Wong, A. Gatt, V. Stamatescu, and M. D. McDonnell, in 2016 International Conference on Digital Image Computing: Techniques and Applications (DICTA) (IEEE, 2016), pp. 1–6.
[28] Y. Xu, R. Jia, L. Mou, G. Li, Y. Chen, Y. Lu, and Z. Jin, arXiv preprint arXiv:1601.03651 (2016).
[29] C. N. Vasconcelos and B. N. Vasconcelos, CoRR abs/1702.07025 (2017).
[30] J. Wang and L. Perez, Convolutional Neural Networks Vis. Recognit. 11 (2017).
[31] L. Taylor and G. Nitschke, arXiv preprint arXiv:1708.06020 (2017).
[32] R. E. Kass, U. T. Eden, and E. N. Brown, Analysis of Neural Data, Vol. 491 (Springer, 2014).
[33] M. Volgushev, V. Ilin, and I. H. Stevenson, PLoS Comput. Biol. 11 (2015).
[34] S. S. Wilks, Ann. Math. Statist. 9, 60 (1938).
[35] B. W. Matthews, Biochim. Biophys. Acta 405, 442 (1975).
[36] A. Sun and E.-P. Lim, in Proceedings of ICDM 2001 (IEEE, 2001), pp. 521–528.
[37] S. Song, P. J. Sjöström, M. Reigl, S. Nelson, and D. B. Chklovskii, PLoS Biol. 3, e68 (2005).
[38] J.-N. Teramae, Y. Tsubo, and T. Fukai, Sci. Rep. 2, 485 (2012).
[39] G. Buzsáki and K. Mizuseki, Nat. Rev. Neurosci. 15, 264 (2014).
[40] A. Destexhe, M. Rudolph, J.-M. Fellous, and T. J. Sejnowski, Neuroscience 107, 13 (2001).
[41] H. C. Tuckwell, Introduction to Theoretical Neurobiology: Volume 2, Nonlinear and Stochastic Theories (Cambridge University Press, Cambridge, 1988).
[42] R. Goutagny, J. Jackson, and S. Williams, Nat. Neurosci. 12, 1491 (2009).
[43] A. R. Mitz, R. Bartolo, R. C. Saunders, P. G. Browning, T. Talbot, and B. B. Averbeck, J. Neurosci. Methods 289, 39 (2017).
[44] R. Bartolo, R. C. Saunders, A. R. Mitz, and B. B. Averbeck, J. Neurosci. 40, 1668 (2020).
[45] K. Matsuda, T. Nagami, Y. Sugase, A. Takemura, and K. Kawano, in International Conference on Human-Computer Interaction (Springer, 2017), pp. 593–608.
[46] A. Kohn and M. A. Smith, "Utah array extracellular recordings of spontaneous and visually evoked activity from anesthetized macaque primary visual cortex (V1)," CRCNS.org, http://dx.doi.org/10.6080/K0NC5Z4X (2016).
[47] M. A. Smith and A. Kohn, J. Neurosci. 28, 12591 (2008).
[48] S. Shoham, M. R. Fellows, and R. A. Normann, J. Neurosci. Methods 127, 111 (2003).
[49] C. T. Nordhausen, E. M. Maynard, and R. A. Normann, Brain Res. 726, 129 (1996).
[50] S. Suner, M. R. Fellows, C. Vargas-Irwin, G. K. Nakata, and J. P. Donoghue, IEEE Trans. Neural Syst. Rehabil. Eng. 13, 524 (2005).
[51] R. C. Kelly, M. A. Smith, J. M. Samonds, A. Kohn, A. Bonds, J. A. Movshon, and T. S. Lee, J. Neurosci. 27, 261 (2007).
Posted May 05, 2020.