The most basic temporal relationship in the active nervous system is firing synchrony, that is, the coincidence of spikes fired by two or more neurons. Firing synchrony has been widely reported in the nervous system and is of much interest to neuroscientists both because of its potential role in encoding, transmitting and decoding of information in the brain, and because it may reveal something about the underlying synaptic (or other) interactions that generate it (for reviews, see [1, 2]). Firing synchrony increases the reliability of firing in downstream neurons that receive the coincident synaptic inputs [3–5] and may therefore be an efficient way to transmit information through multiple layers of feed-forward excitatory networks [6–8]. Alternatively or in addition, synchrony may be used as a ‘binding’ signal [9] (but see [10]); may be the neural correlate of attention or expectation [11–13]; or may encode information about the stimulus, beyond that available from the firing rate alone [14, 15]. Synchrony can arise when two or more neurons respond to a sudden onset or rapid modulation of a sensory stimulus, such as an auditory click [16], a visual saccade [17] or a high-velocity whisker deflection [18–20]. The focus here, however, will be on stimulus-independent synchrony that is observed (in some systems) after correcting for stimulus effects, or in the absence of a stimulus [5, 21, 22]. Whereas stimulus-dependent synchrony may encode information about the stimulus that drives it, stimulus-independent synchrony may teach the experimenter something about the connectivity of the underlying network. Two network motifs have most often been invoked to account for stimulus-independent synchrony: shared excitatory inputs from diverging presynaptic axons [23, 24] and electrical coupling by gap junctions [25–27]. 
In a recent publication, we demonstrated that a third motif, reciprocal inhibitory chemical synapses, can drive stimulus-independent synchrony with sub-millisecond precision, in the absence of shared inputs or electrical coupling [28].

Since spike coincidence could also be a purely chance occurrence, an important first step in studying synchrony is to determine if the observed coincidence level is ‘real’, meaning statistically significant - in other words, to calculate its *p*-value or *Z*-score. We refer to this as detecting synchrony. However, if one wants to decipher the synaptic connectivity driving the synchrony, one needs to go beyond a *p*-value and to quantify the synchrony by a well-normalized ‘synchrony index’, which can be examined for correlation with synaptic parameters such as the strength of the electrical coupling or the rate of shared inputs. Various methods for achieving the twin goals of detecting and quantifying synchrony, some more practical than others, have been proposed and used in the past, and some of them are described and examined in detail in a recently published compendium [29]. Unfortunately, many previously proposed methods have serious limitations when used for detection and quantification of synchrony in real spike trains, as we will see below. Analysis techniques that fall under the general class of ‘jitter methods’ [30] seem to overcome many of the drawbacks of other methods when it comes to detecting synchrony; however, jitter methods have not been used previously to quantify synchrony. Here, I take jitter methods one step further and use them to compute a novel synchrony measure, the Jitter-Based Synchrony Index (JBSI).

In the remainder of this background section, I first show how synchrony can be detected when the two spike trains follow stationary Poisson statistics. I then demonstrate numerically how this detection method fails when firing rates are not stationary and explain how jitter methods overcome this hurdle. Finally, I define and describe two of the synchrony indices commonly employed by experimental neurophysiologists to quantify synchrony. In the Results, I first propose a set of five requirements that an ideal index of synchrony should fulfill, pointing out where existing measures fail to meet these requirements. I then explain how the JBSI is computed and follow with a comparison between the JBSI and previously used indices. Lastly, I demonstrate how the JBSI can be used to estimate the precision of spike timing in the system. Computational details are provided in three appendices.

### Detecting synchrony: the Poisson case

Assume that we have recorded simultaneous spike trains from two neurons. I will refer to the slower-firing neuron as Neuron 1 or the reference neuron, and to the faster-firing neuron as Neuron 2 or the target neuron. Let us denote the number of spikes in the reference and target trains by *n*_{1} and *n*_{2} and the average firing rates by *r*_{1} and *r*_{2}, respectively (so *n*_{1} ≤ *n*_{2} and *r*_{1} ≤ *r*_{2}).

On the face of it, detecting and measuring synchrony seems quite simple. First, decide on a definition of ‘synchrony’: how far apart two spikes can occur and still be considered synchronous. Throughout this paper, I will use τ_{S} to represent this synchrony span. Second, count how many of the spikes fired by the reference neuron occurred within the predefined synchrony span from any spike in the target train. I will denote this observed coincidence count as *N*_{C}. Third, normalize the coincidence count so it can be used to compare paired spike trains of different lengths. A simple way to do this is to divide it by the total number of reference spikes, transforming the coincidence count to a coincidence rate *R*_{C} ≡ *N*_{C}/*n*_{1} (note that rate is used here in the sense of per spike rather than per unit time). The advantage of dividing by *n*_{1} rather than, say, *n*_{2} or the average of *n*_{1} and *n*_{2}, is that *R*_{C} can take values over the full range from 0 (no coincidences) to 1 (all spikes coincident).
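As a concrete illustration, the counting step above can be sketched in Python. This is a minimal sketch; the function name and the example spike times are illustrative, not taken from the original method:

```python
import numpy as np

def coincidence_count(ref, target, tau_s):
    """Count reference spikes that have at least one target spike
    within +/-tau_s (the synchrony span)."""
    target = np.sort(np.asarray(target, dtype=float))
    ref = np.asarray(ref, dtype=float)
    lo = np.searchsorted(target, ref - tau_s)                 # first target >= ref - tau_s
    hi = np.searchsorted(target, ref + tau_s, side="right")   # first target > ref + tau_s
    return int(np.count_nonzero(hi > lo))                     # at least one target in the span

# illustrative spike times (in ms), not from the paper
ref = [10.0, 20.0, 30.0, 40.0]
target = [10.3, 25.0, 29.9, 41.0]
n_c = coincidence_count(ref, target, tau_s=0.5)  # 2 coincidences (near 10 and 30 ms)
r_c = n_c / len(ref)                             # coincidence rate per reference spike: 0.5
```

The `searchsorted` bracketing avoids an explicit double loop over both trains, which matters for long recordings.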

On their own, however, *N*_{C} and *R*_{C} are not very informative, because for any two overlapping series of events there is some non-zero probability that occasionally they will coincide in time, purely by chance. In the case of two spike trains, we want to know whether the two neurons fired independently and the observed coincidences were therefore chance occurrences; this is the null hypothesis. The alternative is that the neurons were coupled, that is, interacted in some way with each other or with a third neuron, causing them to synchronize more (or perhaps less) than expected by chance. Our first task is therefore to determine if the observed coincidence was statistically significant. To do so, we need to know something about the distribution of all chance coincidence counts; if this distribution was known, we could determine the fractional area beyond *N*_{C} under the tails of the distribution, in other words, the *p*-value, which would indicate how likely *N*_{C} was to belong to this set. At the very least, we would like to know the mean of this distribution - that is, the expected coincidence count ‹*N*_{C}› - and its variance, which would then allow us to calculate a *Z*-score (distance of the observed value from the mean, in units of standard deviation). If the distribution is reasonably close to normal, the *Z*-score can be directly converted to a *p*-value (for example, *p* = 0.05 corresponds to *Z* ≈ 2 and *p* = 0.001 to *Z* = 3.3).
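The Z-to-p conversion under the normal approximation is a one-liner via the complementary error function; a sketch (the function name is mine):

```python
import math

def z_to_p(z):
    """Two-tailed p-value of a Z-score, assuming a normal distribution:
    p = erfc(|z| / sqrt(2)) = 2 * (1 - Phi(|z|))."""
    return math.erfc(abs(z) / math.sqrt(2.0))

# z of about 2 gives p of about 0.05; z = 3.3 gives p of about 0.001
```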

One way to estimate the distribution of chance coincidences is to assume a specific statistical firing model, and very often the assumption made is that the spike trains are stationary Poisson processes. Under this assumption, the expected number of spikes occurring within an interval ∆*t* is *r*·∆*t*, where *r* is the time-independent (stationary) average firing rate. Going back to our paired spike trains, we observe that, for a sufficiently small synchrony span τ_{S}, the probability that a reference spike will occur within ±τ_{S} from any given target spike is equal to the average number of reference spikes in a window of width 2τ_{S}, or 2τ_{S}·*r*_{1}. Because the target neuron fired *n*_{2} spikes during the recording epoch, we should expect a total of $<{N}_{C}>=2{\tau}_{S}\cdot {r}_{1}\cdot {n}_{2}$ spike coincidences. For a recording epoch of length *T*, we have *r*_{1} = *n*_{1}/*T* and *r*_{2} = *n*_{2}/*T*, so we can rewrite the expected number of coincidences either as:

$<{N}_{C}>=2{\tau}_{S}\cdot {r}_{1}\cdot {r}_{2}\cdot T$

(1)

or as:

$<{N}_{C}>=2{\tau}_{S}\cdot {n}_{1}\cdot {n}_{2}/T$

(2)

$<{N}_{C}>=2{\tau}_{S}\cdot {r}_{1}\cdot {r}_{2}\cdot T$

(1)

or as:

$<{N}_{C}>=2{\tau}_{S}\cdot {n}_{1}\cdot {n}_{2}/T$

(2)

In a Poisson process, the variance of the observed counts is equal to the expected counts: we can therefore test the null hypothesis of independent firing by calculating the likelihood that *N*_{C}, the observed number of coincidences, belongs to a distribution with a mean and variance of ‹*N*_{C}›.
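Combining equation 2 with the mean-equals-variance property gives a simple Z-score for the observed count. A minimal sketch (the function name is mine; the numbers anticipate the worked example in the next section):

```python
import math

def poisson_z_score(n_c, n1, n2, tau_s, T):
    """Z-score of the observed coincidence count N_C under the stationary
    Poisson null: <N_C> = 2*tau_s*n1*n2/T (equation 2), and in a Poisson
    process the variance equals the mean, so sd = sqrt(<N_C>)."""
    expected = 2.0 * tau_s * n1 * n2 / T
    return (n_c - expected) / math.sqrt(expected)

# 10,000 spikes in each train over T = 250 s, tau_s = 0.5 ms, 500 coincidences observed
z = poisson_z_score(n_c=500, n1=10_000, n2=10_000, tau_s=0.0005, T=250.0)
# <N_C> = 400, sd = 20, so z = 5
```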

### Detecting synchrony: when Poisson cannot be assumed

The pitfall in the procedure described above is that, in actuality, we are testing a dual null hypothesis: that (a) the neurons fired independently and that (b) each spike train is a Poisson process with a stationary average firing rate. If (b) happens to be false, we run the risk of rejecting the null hypothesis even when (a) is true. We can illustrate this with a simple numerical example.

Assume that both neurons fired 10,000 spikes each during a 250 s recording epoch, so *r*_{1} = *r*_{2} = 40 Hz. If τ_{S} = 0.5 ms, then the number of coincidences expected by chance is ‹*N*_{C}› = 0.001 × 40 × 40 × 250 = 400, with a standard deviation of 20. Now say that we actually observed 500 coincidences. Because this is five standard deviations above the expected mean, we conclude that this degree of precise synchrony was highly unlikely to be a random deviation (less than one in a million probability) and that the two neurons must have been coupled. Unbeknownst to us, however, both neurons received bursts of inhibitory inputs from a common subset of inhibitory neurons during the recording epoch. These bursts occurred randomly, but were powerful enough to silence both neurons (concurrently) for 20% of the recording epoch. The actual firing epoch (that is, the epoch during which the neurons were ‘allowed’ to fire) was therefore only 200 s, so the firing rate of each neuron was actually 10,000/200 = 50 Hz, and the true expected number of spike coincidences was ‹*N*_{C}› = 0.001 × 50 × 50 × 200 = 500. In other words, the observed synchrony was precisely at chance level. While one could argue that the two neurons were, in a sense, coupled by the common inhibitory input, and thus violated both assumptions (a) and (b), it would clearly be erroneous to conclude that this common inhibitory input caused the neurons to synchronize with a precision of ±0.5 ms.
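The arithmetic of this example can be checked directly. A sketch (the variable names are mine) contrasting the naive estimate with the one based on the true 200 s firing epoch:

```python
tau_s = 0.0005            # synchrony span in seconds (0.5 ms)
n1 = n2 = 10_000          # spikes per train

# Naive estimate: rates averaged over the full 250 s recording epoch (40 Hz each)
T_full = 250.0
naive = 2 * tau_s * (n1 / T_full) * (n2 / T_full) * T_full          # 400 coincidences

# Corrected estimate: the neurons could only fire during 200 s of the epoch,
# so their actual rates were 50 Hz, not 40 Hz
T_fire = 200.0
true_expected = 2 * tau_s * (n1 / T_fire) * (n2 / T_fire) * T_fire  # 500 coincidences
```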

The example above underscores the fact that it is the ‘local’ firing rate, rather than the global average firing rate, that needs to be taken into account when determining the expected coincidences, since global averaging ignores modulations in firing rates. If these bursts of inhibitory inputs were all identical or nearly so, if they repeated at regular intervals and if we knew their times of occurrence, we could estimate how many coincidences were introduced because of these co-modulations by dividing the full record into segments aligned on the beginning (or end) of each inhibitory burst and exchanging the spike trains of one neuron between segments. The spike coincidences remaining after this procedure could be taken as a good estimate of ‹*N*_{C}› and could then be subtracted from the observed coincidence count *N*_{C} to yield an estimate of the ‘excess coincidence’, that is, the coincidence above chance level. This is the same ‘shuffling’ procedure commonly used by electrophysiologists to account for spurious coincidences introduced by a sensory stimulus [31]. However, unlike co-modulations resulting from an experimentally administered stimulus, co-modulations resulting from uncontrolled environmental or systemic influences are rarely reproducible, do not occur at regular intervals and are not locked to the stimulus (if there is a stimulus). Indeed, they may not even be recognized by the experimenter. Therefore, these co-modulations cannot be corrected for by shuffling.

Another potential solution is to divide the firing epoch into smaller segments and calculate the local firing rates, and from them the expected coincidences, separately for each segment. This would allow us to correct for any modulations in firing rates that happen on a time scale slower (longer) than the width of each segment. Further scrutiny, however, reveals multiple problems with this approach. First, where should the boundaries between segments be placed? Even a small segment may happen to straddle two epochs with very different firing rates, and may need to be further subdivided. Second, how local is ‘local’ - how small should each segment be? To account for as many co-modulations as possible, some of which may have fast time courses, we would need to make the segments as small as possible; but with very small segments, firing rate estimation would become very imprecise. Ideally, we would like to know the instantaneous firing rate in the immediate vicinity of each spike, but this is impossible to determine from a non-repeating spike train.

The jitter method offers a solution to this dilemma. It allows us to estimate the coincidence count expected from the local firing rate, without explicitly determining the rate itself. To do so, we replace one of the two spike trains with a virtual one (a ‘surrogate’), in which each spike is slightly shifted (or ‘jittered’) from its original time of occurrence by a random amount, within a predetermined ‘jitter window’ of ±τ_{J}. This procedure destroys any spike coincidences that may have resulted from interactions on a time scale faster than 2τ_{J}, but fully preserves local firing rates (and therefore any spike coincidences attributable to co-modulations in these rates) at all slower time scales. Again, we can take the number of jitter-resistant coincidences as an estimator of ‹*N*_{C}›, the expected number of coincidences, and use it to estimate the excess synchrony. Moreover, if we can determine the distribution of all possible coincidence counts likely to be observed after a jitter, we can determine if the experimentally observed count *N*_{C} is an exceptional value (that is, falls in the tails of this distribution) and therefore reflects true, statistically significant synchrony, or if it is likely to be a chance occurrence. Note that the jitter procedure relies on the assumption that co-modulations in firing rates occur on a relatively slow time scale compared with the time scale of the spike coincidences we are interested in. Thus, we need to set τ_{J} judiciously: if we set τ_{J} too large, we risk destroying some of the chance coincidences caused by co-modulations, and we will then underestimate ‹*N*_{C}› and overestimate the true degree of synchrony, just as in the numerical example above. Conversely, it would be meaningless to make τ_{J} smaller than the synchrony span τ_{S}, because such a small jitter would likely preserve all coincidences, both chance and real, and not help us in distinguishing between the two.
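The Monte Carlo flavor of this procedure (the variant used in previous implementations, as opposed to the analytic computation introduced later) can be sketched as follows. The function names, the uniform jitter distribution, and the example spike times are illustrative assumptions:

```python
import numpy as np

def count_coincidences(ref, target, tau_s):
    """Reference spikes with at least one target spike within +/-tau_s."""
    target = np.sort(target)
    lo = np.searchsorted(target, ref - tau_s)
    hi = np.searchsorted(target, ref + tau_s, side="right")
    return int(np.count_nonzero(hi > lo))

def jitter_null_counts(ref, target, tau_s, tau_j, n_surrogates=1000, seed=0):
    """Monte Carlo jitter: displace each target spike by a random amount
    (here drawn uniformly within +/-tau_j) and recount coincidences.
    The surrogate counts estimate the chance distribution while preserving
    rate co-modulations slower than about 2*tau_j."""
    rng = np.random.default_rng(seed)
    ref, target = np.asarray(ref, float), np.asarray(target, float)
    counts = np.empty(n_surrogates, dtype=int)
    for i in range(n_surrogates):
        jittered = target + rng.uniform(-tau_j, tau_j, size=target.size)
        counts[i] = count_coincidences(ref, jittered, tau_s)
    return counts

# illustrative spike times, in seconds
ref = [0.10, 0.50, 0.90]
target = [0.101, 0.52, 0.95]
counts = jitter_null_counts(ref, target, tau_s=0.005, tau_j=0.05)
# the observed count can then be compared with the surrogate distribution:
# excess ~ N_C - counts.mean();  p ~ fraction of surrogate counts >= N_C
```

As the text notes, generating hundreds or thousands of such surrogates is computationally intensive, which is one motivation for the analytic approach taken in this paper.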

The rather intuitive idea of introducing virtual spike jitter was rigorously formulated and explored theoretically and in computer simulations by Geman and colleagues [30, 32–34], and applied to experimental data by them [32, 35] and others [36–40], in some cases critically [41]. The approach taken here differs in two important ways from these previous implementations. First, it differs in how it determines the distribution of all possible coincidence counts. Previous implementations used the Monte Carlo approach: generating many (hundreds or thousands of) realizations of jittered spike trains and counting the number of spike coincidences for each one, which is computationally intensive. The current approach is based on an analytical computation of the exact probability distribution, thus considerably reducing the computational effort. Even more importantly, previous implementations used the jitter method only to detect synchrony, not to quantify it. Here, I use the jitter method to define a novel synchrony index, the JBSI, which can be used to quantify synchrony and (as we will see) is robust under a wide variety of realistic conditions under which other synchrony indices fail.

### Quantifying synchrony: how to compare different cell pairs

Detecting synchrony is only the first task facing us. Our second task is to quantify the synchrony to allow valid comparisons of synchrony strength between different cell pairs, even when recorded in different experiments or even by different investigators. At first glance, it may seem that the very measures of statistical significance could also be used to quantify the strength of the synchrony. However, measures of statistical significance can be made arbitrarily high (*Z*-scores) or arbitrarily close to zero (*p*-values) by increasing the sample size, that is, by using longer spike trains, even if the rate of synchrony (per spike and per time) remains unchanged. Measures of significance cannot, therefore, be used directly to quantify synchrony - to do so we need a synchrony index.

Many different synchrony indices have been proposed in the literature, but relatively few of these have gained popularity with the experimental neurophysiology community, and there seems to be no generally agreed-upon ‘best’ index [5, 12–14, 39]. Neurophysiologists often depict the outcome of paired recordings graphically, by a cross-correlation histogram (cross-correlogram, CCG) [22, 31], and most previously employed indices were calculated from the CCG. If the bin width of the raw CCG is chosen to be 2τ_{S}, then the height of the central bin will be equal to *N*_{C}. To generate a synchrony index, the coincidence count is first corrected for chance coincidences by subtracting the expected from the observed count; this yields an estimate of excess coincidences. If the spike trains are locked to a repeating stimulus and if repeated spike trains are assumed to be stationary and reproducible, then this correction can be done by subtracting a shuffled cross-correlogram [31]. Otherwise, it is done by subtracting the average number of counts in a region of the CCG equal in width to, but away from, the central peak. For simplicity, we again assume that the central peak of the CCG is one bin wide, so that the average count in a far-from-center bin is very close to the overall average count per bin, *b*·*n*_{1}·*n*_{2}/*T*, where *b* is the bin width. With a bin width of 2τ_{S}, the average count is therefore equal to our previously defined expected count (see equation 2):

$<{N}_{C}>=2{\tau}_{S}\cdot {n}_{1}\cdot {n}_{2}/T$

Thus, the excess coincidence count is equal to *N*_{C} − ‹*N*_{C}›.

Since the excess coincidence count depends on the number of recorded spikes, it is typically normalized in some manner to yield a synchrony index that does not depend on spike number. For comparison with the JBSI, I selected two of the more commonly used indices in the experimental literature. In the first index, which is usually denoted in the literature by E but will be called here the Excess Coincidence Index or ECI, the excess coincidence count is simply normalized by the number of spikes in the reference train [42–44]:

$\text{ECI}\equiv \left({N}_{C}-<{N}_{C}>\right)/{n}_{1}$

(3)

In the second index, referred to in the literature as the cross-correlation coefficient (CCC) [4, 45] or, somewhat loosely, as the correlation coefficient [19, 46, 47], the excess count is divided by a more complicated expression:

$\mathit{\text{CCC}}\equiv \frac{{N}_{C}-<{N}_{C}>}{\sqrt{{n}_{1}\cdot {n}_{2}\left(1-\frac{{n}_{1}\cdot b}{T}\right)\left(1-\frac{{n}_{2}\cdot b}{T}\right)}}$

(4)

This normalization factor is justified mathematically in [48]; for completeness, I derive the CCC from first principles in Appendix C.
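Equations 3 and 4 translate directly into code. A sketch (the function names are mine), using a bin width *b* = 2τ_{S} as in the text:

```python
import math

def eci(n_c, n1, n2, tau_s, T):
    """Excess Coincidence Index (equation 3): excess coincidences per
    reference spike, with <N_C> = 2*tau_s*n1*n2/T (equation 2)."""
    expected = 2.0 * tau_s * n1 * n2 / T
    return (n_c - expected) / n1

def ccc(n_c, n1, n2, tau_s, T):
    """Cross-correlation coefficient (equation 4), taking the CCG bin
    width b = 2*tau_s so that <N_C> = b*n1*n2/T."""
    b = 2.0 * tau_s
    expected = b * n1 * n2 / T
    denom = math.sqrt(n1 * n2 * (1.0 - n1 * b / T) * (1.0 - n2 * b / T))
    return (n_c - expected) / denom

# with the numbers from the earlier example (500 observed vs 400 expected):
# eci(...) = 100 / 10000 = 0.01;  ccc(...) = 100 / 9600, roughly 0.0104
```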

Given the availability of these and other synchrony indices in the neurophysiological literature, the reader may wonder: how should an experimenter decide which index is best to use, and why did the author bother to devise yet another one? To answer these two questions, I first lay out a set of requirements that, I propose, should be met by an ideal synchrony index. I then show how both the ECI and the CCC fail to meet some of these requirements, whereas a novel index, the JBSI, fulfills them all.