How to measure connectivity

So far, we have used some measures of connectivity (e.g. density) which rely on fiber counting. But fiber counting is very unstable, so we should think about new ways to “measure the connectivity” between cortical regions.

This is a key issue: how to derive valid connection matrices (connectomes) from the fiber tracts. I agree that the current density measure could be improved. Also, when doing analysis on the networks, the size (= number of nodes) of the network is a key parameter on which many measures sensitively depend.
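As a concrete illustration (not our actual pipeline code), here is a minimal sketch of a fiber-count connectome with a simple area normalization. The `endpoint_labels` and `roi_area` inputs are hypothetical names for data that the tractography and parcellation steps would provide:

```python
import numpy as np

def fiber_count_connectome(endpoint_labels, n_rois, roi_area=None):
    """Build a connection matrix by counting streamlines between ROI pairs.

    endpoint_labels : (n_fibers, 2) int array, ROI label (1..n_rois) of the
                      two endpoints of each reconstructed fiber; 0 = unassigned.
    roi_area        : optional (n_rois,) array of ROI surface areas, used to
                      turn raw counts into a density-like measure.
    """
    counts = np.zeros((n_rois, n_rois))
    for a, b in endpoint_labels:
        if a == 0 or b == 0:          # skip fibers that miss the parcellation
            continue
        i, j = sorted((a - 1, b - 1))
        counts[i, j] += 1
        counts[j, i] = counts[i, j]

    if roi_area is None:
        return counts
    # One simple normalization: divide the count for (i, j) by the combined
    # surface area of the two ROIs, so large regions do not dominate merely
    # because more endpoints land on them.
    area_sum = roi_area[:, None] + roi_area[None, :]
    return counts / area_sum
```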

The question cannot be separated from how to define the nodes, since defining the edges depends on it. There are some very fundamental issues here, e.g. whether the concept of a distinct brain region even makes sense for the higher associative parts of cortex. The approach taken by our group is to start with an anatomical parcellation and subdivide it randomly without further constraints. This in a way circumvents the problem of finding the “right” parcellation and might unravel some meaningful patterns. What I find important is that our pipeline has the option to define different parcellation schemes and connectivity measures for computing the connectomes (connection matrices), which then allows comparison.
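A minimal sketch of such an unconstrained subdivision is below. It clusters vertex coordinates per anatomical region with k-means, which only roughly approximates a proper geodesic subdivision of the cortical surface; the `vertices` and `labels` arrays (e.g. taken from a FreeSurfer annotation) are assumed inputs:

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def subdivide_parcellation(vertices, labels, n_sub):
    """Split each anatomical region into up to n_sub smaller patches by
    k-means clustering of its vertex coordinates.

    vertices : (n_vertices, 3) float array of surface coordinates.
    labels   : (n_vertices,) int array of anatomical labels (0 = unlabeled).
    Returns a new label array with consecutive sub-region labels starting at 1.
    """
    new_labels = np.zeros_like(labels)
    next_id = 1
    for region in np.unique(labels[labels > 0]):
        idx = np.where(labels == region)[0]
        k = min(n_sub, len(idx))
        _, assignment = kmeans2(vertices[idx].astype(float), k, minit='points')
        new_labels[idx] = next_id + assignment
        next_id += k
    return new_labels
```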

Ideas to define the nodes in the connectome:

  • Map a well-defined cortical parcellation from the macaque cortical mesh to fsaverage and define the nodes by these regions (a minimal label-transfer sketch follows this list).
  • Apply a battery of functional tests to define the nodes for an individual, e.g. by mapping responses in V1, sensory areas, etc.
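The cross-species surface registration itself is not shown here; assuming both meshes have already been brought into a common (e.g. spherically registered) space, the final label-transfer step could look roughly like this, with hypothetical input names:

```python
import numpy as np
from scipy.spatial import cKDTree

def transfer_labels(source_vertices, source_labels, target_vertices):
    """Assign each target-mesh vertex the label of its nearest source-mesh
    vertex. Assumes the two meshes are already in a common registered space.
    """
    tree = cKDTree(source_vertices)
    _, nearest = tree.query(target_vertices)
    return source_labels[nearest]
```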

Ideas to define the edges in the connectome:

  • Remove the short fibers, which are probably noise, and add further heuristics to remove fibers that are highly improbable given known anatomy (see the length-filtering sketch after this list). Apply fiber clustering to assign individual fibers to bundles; based on the start and end points of these clusters, define the ROIs on the cortex and recompute the connectome (maybe also a hierarchical representation of the networks).
  • For the corpus callosum, for example, one knows approximately how many fibers there are (on the order of 1 million). We can compare that to how many fibers were reconstructed for the corpus callosum, and thus estimate “how much” an individual reconstructed fiber represents. This might help us assign more meaningful density values (see the calibration sketch after this list).
  • Do not store a single average value for a connection (e.g. ADC, FA, density), but a probability distribution over the fibers; with large homogeneous samples, these distributions can be continuously improved (see the per-edge distribution sketch after this list).
  • An estimate of the curvature of the fibers connecting two regions; this is hypothesized to be relevant in brain development (and related to the gyrification of the brain). A rough curvature sketch follows this list as well.
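Length filtering is the simplest of the heuristics above; a minimal sketch, assuming streamlines are given as (n_points, 3) coordinate arrays in millimeters and a hypothetical 20 mm threshold:

```python
import numpy as np

def streamline_length(points):
    """Arc length of a streamline given as an (n_points, 3) array."""
    return np.sum(np.linalg.norm(np.diff(points, axis=0), axis=1))

def prune_short_fibers(streamlines, min_length_mm=20.0):
    """Drop streamlines shorter than min_length_mm; very short fibers are
    often reconstruction noise rather than genuine connections."""
    return [s for s in streamlines if streamline_length(s) >= min_length_mm]
```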
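The corpus callosum calibration amounts to a simple scale factor; the numbers below are assumptions for illustration only (an anatomical estimate of 10^6 fibers and a hypothetical reconstructed count):

```python
# Rough calibration sketch: if anatomy suggests the corpus callosum carries on
# the order of 10^6 fibers (assumed reference value) and tractography
# reconstructed, say, 25,000 streamlines through it, then each reconstructed
# streamline stands in for roughly 40 real fibers.
KNOWN_CC_FIBERS = 1000000               # anatomical estimate (assumption)
reconstructed_cc_streamlines = 25000    # hypothetical tractography count

fibers_per_streamline = KNOWN_CC_FIBERS / float(reconstructed_cc_streamlines)

def calibrated_count(raw_streamline_count):
    """Scale a raw streamline count into an approximate fiber count."""
    return raw_streamline_count * fibers_per_streamline

print(fibers_per_streamline)  # -> 40.0
```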
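Keeping per-edge distributions rather than averages could be sketched as follows, with `endpoint_labels` and `fiber_values` again being hypothetical names for per-fiber data (e.g. the mean FA sampled along each fiber):

```python
import numpy as np
from collections import defaultdict

def edge_value_distributions(endpoint_labels, fiber_values):
    """Collect, for every connected ROI pair, the full set of per-fiber values
    instead of a single mean.

    endpoint_labels : (n_fibers, 2) int array of ROI labels per fiber (0 = none).
    fiber_values    : (n_fibers,) array of the scalar measured on each fiber.
    Returns a dict mapping (i, j) -> 1-D array of values, from which histograms
    or other distribution estimates can be built and later refined.
    """
    samples = defaultdict(list)
    for (a, b), v in zip(endpoint_labels, fiber_values):
        if a == 0 or b == 0:
            continue
        samples[tuple(sorted((a, b)))].append(v)
    return {edge: np.asarray(vals) for edge, vals in samples.items()}
```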
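Finally, one crude way to quantify the curvature of a fiber is its average turning angle per unit length; this sketch assumes a streamline with at least three distinct points:

```python
import numpy as np

def mean_fiber_curvature(points):
    """Average turning angle per unit length over consecutive segments of an
    (n_points, 3) polyline; a rough proxy for how strongly the fiber bends."""
    seg = np.diff(points, axis=0)
    lengths = np.linalg.norm(seg, axis=1)
    unit = seg / lengths[:, None]
    # angle between successive unit tangent vectors
    cos_ang = np.clip(np.sum(unit[:-1] * unit[1:], axis=1), -1.0, 1.0)
    angles = np.arccos(cos_ang)
    # divide each turning angle by the path length between segment midpoints
    step = 0.5 * (lengths[:-1] + lengths[1:])
    return float(np.mean(angles / step))
```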