Theory of correlations in stochastic neural networks

Abstract
One of the main experimental tools for probing the interactions between neurons has been the measurement of the correlations in their activity. In general, however, the interpretation of the observed correlations is difficult, since the correlation between a pair of neurons is influenced not only by the direct interaction between them but also by the dynamic state of the entire network to which they belong. Thus a comparison between the observed correlations and the predictions of specific model networks is needed. In this paper we develop a theory of neuronal correlation functions in large networks comprising several highly connected subpopulations and obeying stochastic dynamic rules. When the networks are in asynchronous states, the cross correlations are relatively weak, i.e., their amplitude relative to that of the autocorrelations is of order 1/N, where N is the size of the interacting populations. Exploiting the weakness of the cross correlations, we present general equations that express the matrix of cross correlations in terms of the mean neuronal activities and the effective interaction matrix. The effective interactions are the synaptic efficacies multiplied by the gain of the postsynaptic neurons. The time-delayed cross-correlation matrix can be expressed as a sum of exponentially decaying modes that correspond to the (nonorthogonal) eigenvectors of the effective interaction matrix. The theory is extended to networks with random connectivity, such as randomly diluted networks. This allows for a comparison between the contributions of common input from within the network and of the direct interactions to the correlations of monosynaptically coupled pairs.
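
To make the abstract's statements about the effective interactions and the exponentially decaying modes concrete, the following is a minimal illustrative sketch in a linear-response (small-fluctuation) form; the single-neuron time constant $\tau_0$, the normalization of the eigenvectors, and the precise relaxation equation are assumptions for illustration and are not taken from the text above.

```latex
% Illustrative sketch only (assumed notation): effective interactions are the
% synaptic efficacies J_{ij} multiplied by the gain g_i of the postsynaptic neuron,
\[
  \tilde J_{ij} \;=\; g_i \, J_{ij}.
\]
% Assuming a linear relaxation of the fluctuations about the asynchronous state
% with single-neuron time constant \tau_0, the delayed cross-correlation matrix
% C(\tau) obeys, for \tau > 0,
\[
  \tau_0 \,\frac{dC(\tau)}{d\tau} \;=\; -\bigl(\mathbb{1} - \tilde J\bigr)\, C(\tau),
\]
% so that, writing \lambda_n, \mathbf{v}_n, \mathbf{u}_n for the eigenvalues and the
% right/left eigenvectors of \tilde J (normalized so that u_n^T v_m = \delta_{nm}),
% C(\tau) decomposes into a sum of exponentially decaying modes,
\[
  C(\tau) \;=\; \sum_n e^{-(1-\lambda_n)\,\tau/\tau_0}\;
  \mathbf{v}_n \mathbf{u}_n^{\mathsf T}\, C(0),
  \qquad \tau > 0 ,
\]
% each mode decaying at rate (1-\lambda_n)/\tau_0. Because \tilde J is in general
% not symmetric, the eigenvectors \mathbf{v}_n are nonorthogonal.
```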