Imaging neural connections: New study raises the bar for precision, speed & thoroughness

Yaron Sigal, Colenso Speer, Hazen Babcock and Xiaowei Zhuang (Photograph courtesy of Kendal Kelly)

Team from Zhuang lab makes first super-resolution microscopy map of all the inhibitory synapses in a neuron’s dendritic tree

by Parizad M. Bilimoria

Geographical maps have long changed the way we live, from the ancient hand-drawn ones that relied on explorers’ voyages to those on today’s satellite-driven, real-time apps like Google Maps. The same transformative power holds true for maps in neuroscience. From the meticulous illustrations of neurons in different brain regions made by the neuroanatomist Ramón y Cajal in the late 1800s to the imaging of neural circuits through more modern computerized technologies, the maps we have of our nervous systems constantly shape the questions we think to ask, and are able to answer, about how our brains and bodies work, and what to do when diseases or injuries arise.

A recent advance in the map-making world of brain scientists comes from the lab of Xiaowei Zhuang, a faculty member in the Conte Center at Harvard who is the David B. Arnold Professor of Science and a Howard Hughes Medical Institute Investigator. In a study published in Cell, led by postdoctoral fellows Yaron Sigal and Colenso Speer, her team showcases a powerful new approach for visualizing and analyzing the input fields of neurons, using neurons in the eye called retinal ganglion cells as a test case. Neuroscientists have long recognized that mapping the sea of inputs a single neuron receives and integrates is critical for understanding the functions of different kinds of neurons and their roles in circuits, and they have accomplished it in varying capacities. Until now, however, there has usually been a trade-off between obtaining very high-resolution images and having precise molecular information. The Zhuang team’s new strategy allows researchers to analyze neuron input fields automatically, in a way that minimizes this compromise and overlays complex molecular information on three-dimensional reconstructions of neurons situated in relatively thick sections of brain tissue.

Resolution and specificity are factors that matter to map-makers of all kinds. To give a Google Maps analogy, it’s like wanting the power to zoom in deep on any location of interest (say, to look for a parking lot near an event venue) and, at the same time, the power to selectively tag points of interest within a dense field (for instance, to highlight all the coffee shops along a route). In neuroscience terms, this might mean wanting the ability to zoom in closely on the input and output structures of individual neurons in the brain, called dendrites and axons, and at the same time the ability to selectively tag all the synapses, the points of communication between neurons, of a particular variety that dot these dendrites and axons. (Synapses come in two main varieties, excitatory and inhibitory. Distinguishing between the two is critical to understanding neural computations, as excitatory inputs amplify electrical activity in the receiving neuron and inhibitory inputs dampen it.)

Electron microscopy grants neuroscientists their first wish, extremely high-resolution views of brain tissue, but doesn’t easily permit the labeling of specific molecules or structures within cells. Conventional fluorescence light microscopy is the opposite, providing the ability to label specific molecules of interest with fluorescent tags, but often without the fine resolution needed to make out the architecture of small subcellular structures such as synapses. Zhuang’s lab is a pioneer in super-resolution fluorescence microscopy, a game-changer that entered the research scene in 2006 and in many ways offers biologists the best of both worlds.

In the new study, Sigal and Speer, working alongside research associate Hazen Babcock, present the first super-resolution microscopy map of all the inhibitory synapses in a neuron’s dendritic tree, plus methods for automated analysis of such maps. Their work is a remarkable advance in the way it combines precision, speed and thoroughness in identifying synapses in three dimensions, and it dovetails beautifully with recent advances in electron microscopy mapping of synapses from another team in the Conte Center. Their research has also answered a fundamental biological question about the composition of inhibitory synapses on the ON/OFF direction-selective retinal ganglion cells they focused on. These neurons, which are critical in computing the direction of motion in an organism’s visual field, are very well studied, but until this latest map it was not known that the vast majority of inhibitory synapses on their dendrites are specialized for the neurotransmitter GABA, while almost none use glycine, another common inhibitory neurotransmitter in the brain. This finding demonstrates the ability of the super-resolution approach to reveal vital new information about neural circuits that have long been mapped by other means.

Bringing STORM to Neural Circuits

Sigal and Speer are friends who joined Zhuang’s lab about five years ago, right on each other’s heels. Both were enticed by the neuroscience potential of STORM, a super-resolution microscopy method developed in the Zhuang lab in 2006.

The trick of STORM, which stands for stochastic optical reconstruction microscopy, is using ‘photoswitchable’ fluorescent tags to get past the traditional resolution limits of light microscopy. Normally, when fluorescent molecules are congregated in one place at high density (say, neurotransmitter receptors at a synapse), the fluorescence emitted by one molecule overlaps with that of another and produces a blurry image. But when a fluorophore is ‘photoswitchable,’ its fluorescence can be turned on or off using light. This means one can adjust things so that only a small, random subset of all the fluorescent molecules of a population emit light at the same time (the ‘stochastic’ part of STORM), and as a result, better resolve the center position of each fluorescent molecule. By surveying the same visual field again and again, simply with different subsets of the fluorescent molecule population activated each time, one can pinpoint the precise locations of otherwise spatially overlapping molecules and ultimately overlay information from different rounds of the imaging cycle to reconstruct a full biological picture.
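The localization principle behind this can be sketched in a few lines of code. The following is an illustrative toy model, not the lab’s actual software: two simulated emitters sit 50 nm apart, well below the roughly 250 nm diffraction limit, but because each one is imaged in its own frame, the centroid of its photons pinpoints it with nanometer-scale precision.

```python
import random

random.seed(0)
DIFFRACTION_SIGMA = 250.0      # nm: width of a single molecule's blurred image
TRUE_POSITIONS = [0.0, 50.0]   # nm: two emitters closer than the ~250 nm limit

def localize(emitter_x, n_photons=5000):
    """Estimate one emitter's position as the centroid of its detected photons."""
    photons = [random.gauss(emitter_x, DIFFRACTION_SIGMA) for _ in range(n_photons)]
    return sum(photons) / len(photons)

# As in STORM, only one of the two emitters is "switched on" per imaging
# frame, so its photons are never mixed with its neighbor's and its center
# can be pinpointed individually. The localization error shrinks roughly
# as sigma / sqrt(n_photons): here ~250 / sqrt(5000), or about 3.5 nm.
estimates = [localize(x) for x in TRUE_POSITIONS]
for true_x, est_x in zip(TRUE_POSITIONS, estimates):
    print(f"true position {true_x:6.1f} nm -> localized at {est_x:6.1f} nm")
```

Imaged simultaneously, the two 250-nm-wide blurs would merge into one spot; imaged in separate switching cycles, the two positions come back cleanly separated.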

Complex as all this may sound, the power of the technology is obvious to anyone, scientist or not, when a conventional fluorescence image of a sub-cellular structure or molecular complex is shown side-by-side with a STORM version. It’s breathtaking—often revealing things that before were invisible.

Speer felt the call to image synapses with STORM when he realized just how much information neuroscientists might be missing with other methods. “Oftentimes people would be doing really great work, and then they’d try to convince you of what they’re looking at by either using electron microscopy or by doing some combination of conventional imaging with molecular labeling,” he explains. “And neither of those tools, the conventional imaging or the electron microscopy gave you a full picture of the structural and molecular architecture of a synapse.”

When he read the STORM papers coming out of the Zhuang lab—at the time containing gorgeous images of microtubules and the vesicle-coating protein clathrin—he felt the tug to join. “I thought, well, that is going to be really exciting for neuroscience,” he recalls.

Top: STORM-reconstruction of an ON/OFF direction-selective retinal ganglion cell with synaptic proteins labeled in magenta and green. Bottom: Comparison of STORM image (left) to conventional fluorescence image (right), zoomed in on a piece of retinal ganglion cell dendrite with the synaptic proteins in the field again labeled in magenta and green. (White spots indicate overlap of magenta and green.) Courtesy of Zhuang lab.

Sigal, who felt a similar tug, had already been in the lab for some months when Speer joined. He was learning that, however powerful a tool STORM was, it would be no small feat to refine the existing methods enough to build a platform for high-throughput mapping of synaptic landscapes in 3D.

“For me the main motivating question was to image synapse organization in brain tissue,” Sigal says. “The main problem was that the available resolution wasn’t good enough with conventional light microscopy, and the techniques that had been developed for STORM didn’t have the ability to image as thick of a volume as we needed in order to look at full neural circuits, which are organized in three dimensions. So we had to try to develop a technique to image a large volume in three dimensions in high enough resolution to be able to identify synapses on neurons with good confidence.”

Lest one think this was merely a matter of the lab becoming more familiar with synapses or the imaging of brain sections, it should be noted that in 2010, when the present study began, the lab was already in the midst of publishing its first STORM study of synapses, completed in collaboration with the lab of Catherine Dulac, Professor of Cellular and Molecular Biology, who is also a Howard Hughes Medical Institute Investigator and part of the Conte Center at Harvard. In that project, the team used STORM to image about 10 different proteins within synapses in brain tissue, to better understand the molecular architecture of synapses and where each component sat in relation to the others.

To build on this, they were interested in zooming out a bit and using STORM to study synapses within the context of neural circuits. “We had already developed interest to go to the circuit level, to see how neurons are connected to each other,” Zhuang explained, noting that her lab is one of many within the Center for Brain Science at Harvard focused on this goal, and through the Conte Center is in particularly close contact with the lab of Jeff Lichtman, Professor of Molecular and Cellular Biology—which has recently published a saturated electron microscopy reconstruction of a volume of neocortex with a large synapse database.

“We are making stepwise progress by first aiming to look at an important set of neurons, looking at their whole synaptic fields—where every synapse is and what their molecular properties are,” she notes. “A single neuron could be a computational unit, and knowing where the input synapses are and what their properties are, we could get a better glimpse as to how neurons compute.”

Challenges, Triumphs, and Turning Points

Zhuang was most passionate when conveying how hard Sigal and Speer worked. “This was a very challenging project,” she said—emphasizing that it took multiple years despite being the primary focus for both “extremely talented” postdoctoral fellows. But to a naïve observer, the challenges the researchers faced might not be obvious. If the lab already had expertise in imaging synapses with STORM, why was it so difficult to obtain the synaptic fields of retinal ganglion cells showcased in the new study?

As the expression goes, the devil is in the details. The challenges are too numerous to list here, but to summarize, they fall into two big buckets: (1) preparing the brain sections for STORM in a way that makes it possible to obtain high resolution 3D maps of entire neurons, with thousands of synapses on each dendritic tree, and (2) developing automated methods for identifying and analyzing these synapses.

Just to give a flavor of the challenges in the first category: in order to faithfully identify all or nearly all the inputs onto a single neuron, one has to make sure the density of synapse labeling with fluorescent tags is high enough. This labeling issue was particularly difficult in the context of volumetric 3D analyses because the sections of brain tissue needed to be thicker than those previously used for STORM imaging. To cope, the team chose to cut the tissue into many thin sections that they could more easily image and then stitch back together afterwards. For this to work, the sectioning had to be lossless, so that the images of the individual sections could be reassembled into a single volumetric image, which required embedding the tissue in plastic resin. And for STORM imaging, the embedding needed to preserve the fluorescence and photoswitching properties of the dye labels. So the team had to find the right embedding material and the right sequence of labeling and tissue-processing steps to satisfy all these requirements.

The second category, the image segmentation part, was hard mainly because of the enormous volume of data generated and because the signal and background in these data looked quite different from anything researchers quantifying synapses had tackled before with electron microscopy or conventional fluorescence. The micrographs are qualitatively very different from previous maps of the synaptic landscapes on dendritic trees, and analyzing them required writing much new code. Fortunately, the team eventually developed a way to automate the process of synapse identification, which dramatically sped up data analysis. “Now the analysis is no longer the scary step. If you do larger volumes, you don’t have to think about proportionally more graduate students being there doing the tracing,” Zhuang observes with a smile. “You could just feed it into a computer.”
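The kind of work such automation replaces can be illustrated with a deliberately simplified sketch, not the published pipeline: synapse detection reduced to thresholding a tiny 2D “image” and grouping bright, touching pixels into puncta with a flood fill, the sort of counting that would otherwise fall to a person tracing by hand.

```python
def find_puncta(image, threshold):
    """Return one list of (row, col) pixels per bright connected blob."""
    rows, cols = len(image), len(image[0])
    seen, puncta = set(), []
    for r in range(rows):
        for c in range(cols):
            if (r, c) in seen or image[r][c] < threshold:
                continue
            # Flood-fill one connected component of above-threshold pixels.
            stack, blob = [(r, c)], []
            while stack:
                y, x = stack.pop()
                if (y, x) in seen or not (0 <= y < rows and 0 <= x < cols):
                    continue
                if image[y][x] < threshold:
                    continue
                seen.add((y, x))
                blob.append((y, x))
                stack.extend([(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)])
            puncta.append(blob)
    return puncta

# Two bright spots on a dim background are found and counted automatically.
frame = [
    [0, 0, 0, 0, 0, 0],
    [0, 9, 8, 0, 0, 0],
    [0, 7, 9, 0, 0, 7],
    [0, 0, 0, 0, 8, 9],
    [0, 0, 0, 0, 0, 0],
]
blobs = find_puncta(frame, threshold=5)
print(len(blobs))  # 2 puncta detected
```

The real analysis works on far larger 3D volumes with molecule-specific channels, but the appeal is the same: once the rules are encoded, a bigger volume means more computer time rather than more people tracing.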

Beyond combating technical challenges, one of the key decisions in moving the present study forward was settling on a neural circuit to study first. The team was drawn to the mouse retina because, based on its developmental origin, the retina is considered part of the brain, and because much was already known about retinal cell types, their organization and their functional properties, information that could be used to validate Sigal and Speer’s new approach. At the same time, many questions about retinal ganglion cell biology remained open, allowing the STORM data to provide new insights to the field, including the question addressed in the current study about GABAergic versus glycinergic synapses. Zhuang calls this blend of known and unknown “a great combination… really great for the first generation demonstration of this new volumetric super-resolution imaging platform.”

One of the major turning points in the study came when the team obtained their first 3D reconstruction of a substantial piece of a retinal ganglion cell with multiple proteins labeled. “For a long time we were pulling together pieces one at a time, one marker here, one color there. It was slow and incremental. The exciting moment for me was when all those components came together and we got our first large volume reconstruction,” Speer recounts. “It was our first big piece of a cell in four colors.”

After that came the first whole cell in four colors. “It was a wow moment for the whole lab,” Zhuang recalls.

But, Sigal shares, there were small eureka moments all along the way. One of the earliest came when, after a long period in which no fluorescence signal would survive in the embedding resin, he and Babcock finally found the right set of embedding conditions to overcome the problem.

Nets, Networks and Brain Disorders

So what does the ability to faithfully map synapses across entire dendritic trees in 3D mean for neuroscientists?

“The first thing we want to do next—we want to see how neurons communicate with each other by extending this analysis to a pair of neurons,” Zhuang says. “Labeling two neurons in a volume and seeing their connections and so on.”

“Even with a single neuronal type, the reconstruction of the entire synaptic field really gives us a new knowledge about how the neurons integrate their signal. We could apply this to many different types in the retina, in the central nervous system, the peripheral nervous system… and ultimately it is potentially possible that we could do a dense reconstruction of all neurons in a particular volume,” she adds.

“I’m really excited,” Speer says. “I’m really excited for people to use this, I’m really excited for the technique to continue to be developed, in terms of the size of reconstructions, the density of neurons that can be labeled and the type of synapses that can be imaged. And really, to start applying it to biological questions to get a better understanding of how circuits work, not only in the retina but also in the rest of the central nervous system.”

Sigal shares that he is already working with the lab of Conte Center director Takao Hensch, Professor of Molecular and Cellular Biology and Neurology at Harvard and Boston Children’s Hospital, to start to adapt the new platform for studies of inhibitory neurons in the cortex. A type of inhibitory neuron called the parvalbumin-positive GABAergic interneuron (PV-cell) is of particular interest to the Conte Center, which is focused on unraveling the developmental origins of mental conditions such as schizophrenia and autism, as these cells are thought to be key sites of pathology in these disorders.

The cells are ensheathed in a proteoglycan-rich extracellular matrix structure called the perineuronal net, which may also be aberrant in mental conditions, and Sigal and colleagues, including Luke Bogart, a recent PhD graduate of the Hensch lab, have been collaborating to image the perineuronal net in visual cortex samples, both from typically developing mice and mice modeling a neurodevelopmental disorder with autistic features.

“The development of this new STORM platform for volumetric analyses of full neurons in relatively thick sections of brain tissue is an extremely promising technological advance for our center and for neurobiologists in general,” Hensch says. “We are very excited to be applying the method to analyses of cells and extracellular structures thought to be critical in the development of serious mental illness and optimistic about extending our analyses from animal tissue to human pathology samples that could shed some much-needed new light on these disorders.”