Abstract
Recent advances in optical-sectioning microscopy, along with novel fluorescent proteins and probes, give us the tools to image molecules and their interactions in space and time. Investigators using these tools routinely collect multichannel three-dimensional (3D) images and time series, but analyzing such complex datasets requires sophisticated visualization techniques. Here we provide an overview of the principles and practice of 3D visualization of multichannel microscopy data. We also describe ImageSurfer, a new software package for volume visualization and data analysis. ImageSurfer is freely available (www.imagesurfer.org) and provides powerful interactive tools to explore and analyze complex multichannel 3D datasets. Although ImageSurfer was designed with fluorescence microscopy in mind, it is also effective for other types of data, including 3D datasets acquired by functional magnetic resonance imaging and EM tomography.
The recent flowering of imaging techniques is providing new insights into neural structure and function at scales ranging from single molecules to the entire brain. Fluorescence microscopy is now a central tool in neuroscience. Optical-sectioning microscopy has become widely used, with rapid improvements in spectral and temporal resolution, and novel approaches are now challenging the classical diffraction limit of spatial resolution (Willig et al., 2006). In parallel, the increased availability of fluorescent proteins and probes is enhancing our understanding of gene expression, protein–protein interactions, and cellular processes (Giepmans et al., 2006). We now have tools to image specific molecules and their interactions in space and time with unprecedented precision.
Too much data, not enough information
Investigators using these tools routinely collect multichannel three-dimensional (3D) images and time series, generating large and complex datasets. At present, these datasets are limited to a single field of view, but the development of automatic mosaic acquisition and processing systems will soon enable investigators to acquire wide-field views without sacrificing resolution, raising challenges of sheer dataset size, which can reach hundreds of gigabytes (Chow et al., 2006). Aside from size, the multidimensionality of these datasets makes them too complex to be fully understood simply by displaying series of two-dimensional images; they require 3D visualization methods. The goal of these methods is not simply to enable 3D viewing, but to integrate the data into a form that is easy to understand, helping investigators to extract and display relevant information. By enabling real-time interaction, 3D visualization tools should provide a deeper understanding of the data.
Hardware for 3D
Because 3D visualization techniques are computationally intensive, they were historically restricted to professional workstations, preventing widespread use. However, recent advances in processing power and 3D graphics cards, along with inexpensive computer memory and hard drives, make 3D visualization of reasonably sized datasets feasible and affordable even for laboratories that face budget constraints. Although one can still usefully spend tens of thousands of dollars on a dedicated imaging workstation, systems costing less than $2000 are now adequate for most imaging tasks encountered in routine microscopy.
Although consumer-grade hardware can now generate complex 3D representations, interacting with these representations is still hampered by the 2D nature of interaction and display hardware. 3D images are usually viewed on flat screens; the sense of 3D is then conveyed by a variety of cues, including occlusion, perspective, and shading based on directional illumination. However, this does not produce the same impression achieved when we look around a room; that sense of depth also requires binocular disparity. Since the stereoscope of the Victorian era, a number of "true" 3D display methods have been developed. Many 3D visualization software packages support conventional stereoscopic 3D display technologies such as two-color, polarized, or electronically shuttered glasses, but these can strain the eyes after prolonged use. Newly introduced autostereoscopic displays dispense with the glasses but require users to position themselves precisely in front of the display. Professional-level display systems can produce reasonably convincing full-color images, although their high cost severely limits their use. For exceptionally large and complex 3D models, immersive virtual environments such as "the Cave" (Zhang et al., 2001) enable users to move within the virtual environment and observe the visualization in the same way they would view a physical object in the real world.
Mice and keyboards remain the main means of interacting with computers, but using a 2D input device for 3D interaction is inherently problematic. To manipulate 3D objects, one needs at least six degrees of freedom (6DOF; horizontal, vertical, depth, pitch, yaw, and roll), and the satisfactory 6DOF input devices now available are expensive. Pushed by the computer game industry, consumer-level 6DOF controllers are being developed, so 6DOF input devices are likely to become a standard component of interactive 3D computer graphics applications.
Visualization techniques
Beyond hardware, effective display of 3D data requires an underlying computational strategy. This problem has been the focus of considerable research in computer science. Most 3D visualization techniques fall into two classes: surface-based binary rendering and semitransparent volume-based continuous rendering (Kniss et al., 2001; Levoy, 1988). For both techniques, the key issue is how to perform the rendering so that relevant information is portrayed. In surface rendering, this is done through “segmentation,” a preprocessing step in which contours that define the surface of the structure to be visualized are extracted. Contours can be defined manually in 2D or 3D space or with any of a number of automated and semiautomated algorithms (Pham et al., 2000). Because these techniques remove the vast majority of the information contained in the volume, surface rendering is generally fast and provides readily understandable representations. However, segmentation can be difficult and potentially arbitrary when structures with similar densities lie near the structure of interest.
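To make the segmentation step concrete, consider the following minimal sketch, which is not part of ImageSurfer; it uses Python with scikit-image, a synthetic stand-in volume, and a global Otsu threshold, all of which are illustrative assumptions rather than a recommended workflow:

```python
# Minimal sketch: threshold-based segmentation followed by contour
# extraction for surface rendering. The random volume is a stand-in
# for a confocal stack, and Otsu thresholding is only one of many
# possible segmentation strategies.
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import marching_cubes

volume = np.random.rand(40, 128, 128)  # stand-in for a (z, y, x) stack

# Segmentation: a single global threshold separates "structure" from
# background; real data often require more careful methods.
level = threshold_otsu(volume)

# Contour extraction: marching cubes converts the segmented boundary
# into a triangle mesh that a surface renderer can draw quickly.
verts, faces, normals, values = marching_cubes(volume, level=level)
print(f"{len(verts)} vertices, {len(faces)} triangles")
```

Once the mesh exists, the original volume is no longer needed for display, which is why surface rendering is fast but discards most of the data.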
Volume rendering provides direct visualization without the need for prior processing; every voxel in the dataset is assigned visual properties (e.g., color and opacity). The assignment rule, or “transfer function,” can be a linear (or nonlinear) function or an arbitrary table. By altering the transfer function, different representations of the same dataset can be rendered, each representation conveying different information. Because this technique is computationally intensive, real-time interaction can be difficult to achieve with large datasets. However, volume rendering is truer to the data than surface rendering (although often more difficult to interpret) and can be used with datasets containing ill-defined structures that cannot be visualized with a surface-based rendering technique.
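The following sketch illustrates the idea of a transfer function and compositing. It assumes a simple piecewise-linear mapping and casts rays only along the z axis; real volume renderers cast rays from arbitrary viewpoints and are considerably more elaborate:

```python
# Sketch of direct volume rendering: a transfer function assigns each
# voxel a color and opacity, then each ray is composited front to back.
# The breakpoints below are arbitrary illustrative choices.
import numpy as np

def transfer_function(v):
    """Map normalized intensity to (r, g, b, alpha) per voxel."""
    alpha = np.interp(v, [0.0, 0.3, 1.0], [0.0, 0.0, 0.6])  # dim voxels stay transparent
    r = np.interp(v, [0.0, 1.0], [0.0, 1.0])
    g = np.interp(v, [0.0, 1.0], [0.2, 0.8])
    b = 1.0 - r
    return np.stack([r, g, b, alpha], axis=-1)

volume = np.random.rand(64, 128, 128)        # stand-in for a (z, y, x) stack
rgba = transfer_function(volume)

# Front-to-back alpha compositing along the z axis.
image = np.zeros(volume.shape[1:] + (3,))
acc_alpha = np.zeros(volume.shape[1:])
for z in range(volume.shape[0]):
    a = rgba[z, ..., 3] * (1.0 - acc_alpha)  # remaining transparency
    image += a[..., None] * rgba[z, ..., :3]
    acc_alpha += a
```

Changing the breakpoints of `transfer_function` changes which structures dominate the rendered image, which is exactly why different transfer functions convey different information from the same dataset.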
Software implementations
Hardware limitations are rapidly fading, and visualization and analysis techniques are becoming more powerful, but 3D visualization is only slowly entering the laboratory. The main limiting factor is the software. A number of programs for 3D visualization (most designed for medical imaging) are available in the commercial market and in the public domain, but few go beyond basic 3D viewing of single-channel datasets (Table 1), and therefore they are of limited value for analysis of the kinds of multichannel (e.g., multiply labeled) 3D and 4D images produced by confocal and two-photon microscopes.
Image processing software for multichannel 3D visualization
Commercial products fall into two categories. "Generalist" packages, such as Amira by Mercury (Carlsbad, CA), provide a large range of advanced visualization tools to solve problems ranging from biology to geosciences to engineering. These programs are powerful but inherently complex, with steep learning curves; as such, they are better suited to laboratories in which 3D visualization is a major focus (e.g., structural biology) than to the typical neuroscientist. Specialized packages, such as Volocity by Improvision (Lexington, MA), aim to provide simple all-in-one solutions for light microscopy (LM); feature sets often include acquisition, visualization, deconvolution, quantification, and image organization. These packages are used by many laboratories in which 3D visualization is desirable but not the principal focus. However, 3D visualization for LM is a specialized niche with high performance demands, making it challenging to sustain a profitable business model. Both software development and marketing are costly, and these expenses must be passed on to the users. The cost to the user includes not only the price of the base program, but also the price of additional modules, multiuser licenses, and annual software updates. Furthermore, development costs make it difficult for vendors to keep up with the rapid progress of visualization research. Perhaps for these reasons, commercial visualization solutions tend to be unimaginative and similar from product to product.
These commercial constraints are not easily overcome. Indeed, highly technical software designed for a limited customer base may be better implemented through an open-source approach. ImageJ is perhaps the most popular open-source imaging software in neuroscience (http://rsb.info.nih.gov/ij/). Although its main focus is analysis and measurement of 2D images, ImageJ can also provide basic visualization of 3D datasets through plugins such as Volume Viewer (http://rsb.info.nih.gov/ij/plugins/volume-viewer.html) and VolumeJ (http://webscreen.ophth.uiowa.edu/bij/vr.htm). For multichannel 3D images, two freeware packages, Voxx (Clendenon et al., 2002) and VisBio (Rueden et al., 2004), are available. Both are essentially limited to volume rendering, although VisBio also includes 3D slicing at arbitrary orientations and 3D measurements.
Development of open-source imaging software is greatly facilitated by open-source libraries designed for visualization of multidimensional data. The most popular set of tools is the Visualization Toolkit (VTK; www.vtk.org), a software library widely used for scientific and medical applications. The recent addition of the Insight Toolkit (www.itk.org), funded mainly by the National Library of Medicine as part of the Visible Human Project, contributes numerous additional segmentation, registration, and image processing tools. These libraries include contributions from a large community of developers and users, enabling rapid development, thorough validation of code, and a growing repertoire of novel algorithms.
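As an illustration of what these toolkits provide, a handful of lines of VTK's Python bindings suffice to build an interactive isosurface viewer; the synthetic data source and isovalue below are placeholders, and the same classes are also available from C++ and Java:

```python
# Minimal VTK pipeline: synthetic volume -> isosurface -> interactive
# 3D window. vtkRTAnalyticSource is a built-in test volume standing in
# for a real image stack; the isovalue of 150 is an arbitrary choice.
import vtk

source = vtk.vtkRTAnalyticSource()
source.SetWholeExtent(-32, 32, -32, 32, -32, 32)

contour = vtk.vtkContourFilter()             # isosurface extraction
contour.SetInputConnection(source.GetOutputPort())
contour.SetValue(0, 150.0)

mapper = vtk.vtkPolyDataMapper()
mapper.SetInputConnection(contour.GetOutputPort())
mapper.ScalarVisibilityOff()

actor = vtk.vtkActor()
actor.SetMapper(mapper)

renderer = vtk.vtkRenderer()
renderer.AddActor(actor)

window = vtk.vtkRenderWindow()
window.AddRenderer(renderer)

interactor = vtk.vtkRenderWindowInteractor()
interactor.SetRenderWindow(window)
window.Render()
interactor.Start()                           # rotate and zoom with the mouse
```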
However, even with the help of powerful toolkits, creating a sophisticated and usable visualization application is beyond the expertise of most neurobiologists. In practice, effective and innovative software solutions are likely to come from biologists and computer scientists working together as a team. Over the past three years, we have pursued this team approach in developing ImageSurfer, a free 3D visualization tool.
ImageSurfer
ImageSurfer illustrates how neurobiologists collaborating with computer scientists on a real-world problem can produce a tool of broad usefulness. Our original goal was to study the distribution of calcium pumps and channels in neuronal membranes using confocal microscopy. The lack of well-adapted software, combined with the high price of potentially suitable commercial packages, motivated us to develop tools to facilitate this work (Dennis et al., 2004) (Fig. 1; see also supplemental material S2, available at www.jneurosci.org).
Distribution of a plasma membrane calcium pump in dendritic spines of CA1 pyramidal neurons. We performed double labeling for a calcium pump (PMCA2) and the lipophilic membrane dye DiO (for details, refer to supplemental methods, available at www.jneurosci.org as supplemental material). A z-stack of 40 serial optical sections was acquired with a Leica SP2 confocal microscope using a 63× PlanApo objective (numerical aperture 1.4). A, B, Extended-focus views of the 40-section z series, showing a spiny dendrite revealed by DiO (A) and the corresponding punctate labeling for PMCA2 (B). C, D, 3D representation of DiO (green) and PMCA2 (red) confocal stacks, using volume rendering (C) and isosurface (D) methods. E, Representation using the colored isosurface rendering technique: surface rendering of the DiO dataset reveals the structure of the dendrites, colored by PMCA2 concentration. This technique directly reveals PMCA2 concentration (yellow is high, and green is low) across the dendritic surface. High magnification of spines 1 and 2 shows that PMCA2 concentrates at the spine neck. F, The white plane interactively positioned through spine 2 represents a "slice extractor." G, Height field, using color and height to display the values sampled from the slice of volume data specified by the position of the slice extractor in F. Mountains represent high concentrations of PMCA2, and valleys represent low concentrations. The color map defines the outline of the spine (DiO staining). H, I, Calcium pump concentration plotted (I) from data collected along the user-defined spline curve from the height field (H).
ImageSurfer was designed to analyze and quantify relationships among the channels of multichannel confocal images. It is written in Java and uses VTK for 3D functionality (for details, see supplemental material S1, available at www.jneurosci.org). The current version offers several popular visualization tools, including volume- (Fig. 1C) and surface- (Fig. 1D) rendering techniques. The software also has an innovative colored isosurface mode that provides a qualitative display of the correspondence between two channels (Fig. 1E,F). The rendering options for each channel are independent, enabling a user to mix visualization modes.
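The colored isosurface idea can be sketched as follows. This is not ImageSurfer's own code, but a minimal stand-in in Python, with synthetic stacks in place of the DiO and PMCA2 channels and an assumed isovalue:

```python
# Sketch of a colored isosurface: build a mesh from the structural
# channel (e.g., DiO), then color each vertex by the intensity of the
# second channel (e.g., PMCA2) sampled at the same position.
import numpy as np
from scipy.ndimage import map_coordinates
from skimage.measure import marching_cubes

rng = np.random.default_rng(0)
dio = rng.random((40, 128, 128))    # stand-in structural channel (z, y, x)
pmca2 = rng.random((40, 128, 128))  # stand-in signal channel

# Surface from the structural channel; the isovalue is an assumption.
verts, faces, _, _ = marching_cubes(dio, level=0.5)

# Trilinear interpolation of the second channel at each mesh vertex;
# these scalars can then drive a green-to-yellow color map on the mesh.
vertex_color = map_coordinates(pmca2, verts.T, order=1)
```

Because the second channel is sampled exactly on the rendered surface, its distribution can be read directly from the surface colors (as in Fig. 1E).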
A “slice extractor” enables the user to examine a cross section of the volume taken at any orientation. The captured slice can be visualized as a traditional 2D image or as a 3D-relief representation in which the image intensity is coded as height. To further probe spatial overlap between structures, the investigator can use data from another channel to colorize the 3D relief representation (Fig. 1G). This technique provides a far more accurate assessment of colocalization than the traditional red-green-yellow convention. Signal intensity values can then be sampled within captured slices along a user-defined spline (Fig. 1H).
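A simplified stand-in for the slice extractor and spline sampling might look like the following; the plane geometry and the sampling curve are purely illustrative, and the volume is again synthetic:

```python
# Resample the volume on an arbitrarily oriented plane, then read
# intensities along a parametric curve within that slice.
import numpy as np
from scipy.ndimage import map_coordinates

volume = np.random.rand(40, 128, 128)  # stand-in for one channel (z, y, x)

# Plane defined by an origin and two in-plane unit vectors; the oblique
# v axis shows that any orientation is possible.
origin = np.array([20.0, 64.0, 64.0])
u = np.array([0.0, 1.0, 0.0])
v = np.array([0.6, 0.0, 0.8])

s, t = np.meshgrid(np.arange(-30, 30), np.arange(-30, 30), indexing="ij")
coords = origin[:, None, None] + s * u[:, None, None] + t * v[:, None, None]
slice_img = map_coordinates(volume, coords, order=1)

# The slice can be displayed flat, or as a height field in which
# intensity becomes elevation. A curve drawn in the slice yields a 1D
# intensity profile, here along an illustrative circle.
theta = np.linspace(0.0, 2.0 * np.pi, 200)
curve = np.stack([30 + 10 * np.cos(theta), 30 + 10 * np.sin(theta)])
profile = map_coordinates(slice_img, curve, order=1)
```

The same sampled slice can be rendered as the height field of Figure 1G, and the 1D profile corresponds to the spline-based measurement of Figure 1, H and I.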
Particular attention has been paid to the task of sharing data, a formidable problem in 3D imaging. In addition to taking snapshots of 3D representations and generating movies, 3D models can also be exported as OBJ or VRML (virtual reality modeling language) files; VRML files can be viewed within web browsers using a free plug-in. Both file types can be imported into popular 3D rendering programs for further processing, and into Acrobat 3D to produce documents that can be viewed interactively with Adobe Reader 8 (supplemental material S3, available at www.jneurosci.org).
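In VTK-based applications, such exports are available through the toolkit's scene exporters. The sketch below uses a trivial placeholder scene to show the general pattern; a real application would export the window holding the rendered model:

```python
# Export a rendered VTK scene to VRML and OBJ, the two formats
# described above. The sphere is only a placeholder scene.
import vtk

sphere = vtk.vtkSphereSource()
mapper = vtk.vtkPolyDataMapper()
mapper.SetInputConnection(sphere.GetOutputPort())
actor = vtk.vtkActor()
actor.SetMapper(mapper)
renderer = vtk.vtkRenderer()
renderer.AddActor(actor)
window = vtk.vtkRenderWindow()
window.AddRenderer(renderer)
window.OffScreenRenderingOn()  # no visible window needed for export
window.Render()

vrml = vtk.vtkVRMLExporter()   # VRML for viewing in web browsers
vrml.SetRenderWindow(window)
vrml.SetFileName("model.wrl")
vrml.Write()

obj = vtk.vtkOBJExporter()     # OBJ for 3D modeling programs
obj.SetRenderWindow(window)
obj.SetFilePrefix("model")     # writes model.obj and model.mtl
obj.Write()
```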
Although it was developed for our specific purpose, ImageSurfer is now being downloaded hundreds of times each month and has proven useful for visualization and analysis of a variety of data types, including 3D data acquired by fMRI and EM tomography. For example, ImageSurfer has been used to study the Golgi apparatus of cortical pyramidal neurons (Horton et al., 2005) and is now being used to study human breast cancer cells, myosin-based molecular motors, and nano-fabricated devices.
Perspectives for the future
Progress in 3D neuroimagery has not been limited to light microscopy; it extends from the macroscopic level (e.g., magnetic resonance imaging and positron emission tomography) to the nanoscale (e.g., electron tomography and x-ray crystallography). This rapid increase in information content raises important problems of data sharing. Many datasets now being generated contain far more information than could ever be analyzed by a single researcher and may also address issues unrelated to the original purpose for which they were acquired. Efforts to facilitate the sharing of primary data in the neuroscience community are now under development (Martone et al., 2004). For example, the Cell Centered Database (Martone et al., 2003) (http://ccdb.ucsd.edu) at the National Center for Microscopy and Imaging Research aims to make unique and valuable datasets (2D, 3D, and 4D data from light and electron microscopy) available to the scientific community for visualization, reuse, and reanalysis. A similar effort by the European Bioinformatics Institute, the EMDep database, addresses the problem of storing and sharing 3D EM data. Such databases can also supplement and augment 3D data reported in journal articles. Both the space limitations of traditional journals and the difficulty of representing 3D data in a 2D format make it difficult to report 3D data in the literature. Databases provide a convenient means of storing data referenced in the literature, as well as access to related datasets and ancillary information.
As technology evolves, 3D neuroimagery techniques will provide datasets that are even more information-rich. Analysis of such data is far from routine and currently represents a major bottleneck; developing effective new software for data mining and sharing will be essential to extract the information embedded within large datasets. ImageSurfer represents a step in that direction.
Footnotes
Editor's Note: Toolboxes are intended to briefly highlight a new method or a resource of general use in neuroscience or to critically analyze existing approaches or methods. For more information, see http://www.jneurosci.org/misc/itoa.shtml.
This work was supported by National Institutes of Health Grants 5-P41-RR02170-21 and P41 EB002025 (R. Superfine, R.M.T.) and P01 NS44306 and NS39444 (R.J.W.). We thank S. Grand for histological support, A. G. Filoteo, E. Strehler, and J. T. Penniston for preparation of PMCA antibody; P. Parente, J. Robbins, and C. Weigle for initial programming support; and M. Chua and G. Gordon for critically reading this manuscript. D.J. developed early implementations of the software. D.F. and D.M. constructed the current version of the software. R.M.T. provided funding and supervised D.F., D.M., and D.J. R.J.W. provided funding for D.J. and A.B. A.B. initiated the project, provided direction and feedback, wrote the manuscript, and is responsible for the website. All authors edited the manuscript.
Correspondence should be addressed to Alain Burette, Department of Cell and Developmental Biology, University of North Carolina, CB #7090, Chapel Hill, NC 27599. alain_burette@med.unc.edu