Putting the “More” in Morphology: Part 1

Segmentation, the ability to delineate the unique features in a region of interest, has always been the greatest challenge in image analysis and quantitation. A new suite of tools, available from PerkinElmer Life Science (Hopkinton, MA), blasts through that barrier, using novel blends of supervised machine learning; interactive morphometry, intensity, and texture measurements; and spectral unmixing to open intriguing new vistas, not only for segmentation but also for classification, quantitation, and analysis. For researchers in biology, investigators in drug discovery, and medical practitioners involved in diagnosis and therapy, these new multiplexed approaches convert “data” into more substantive answers.

Multiplexing: The “suite” solution

Historically, morphometric approaches relied on visual segmentation, using colored stains such as hematoxylin and eosin (H&E) to tag structures of interest. In the last 30 years, immunohistochemical (IHC) reactions have emerged as part of a rapidly growing tissue staining arsenal. Immunohistochemistry exploits the chemical specificity of antibody–antigen interactions to tag abnormal cells, diagnose disease, and track cellular processes, such as death, through the expression of particular proteins.

Traditionally, morphological studies produced distributions of cell counts according to size, shape, aspect ratio, and similar measures. Few of these image analysis options could quantify and display the distribution of fluorophore or chromogen stains in cells or tissue. That type of analysis fell to multiparametric approaches, such as flow cytometry or gene and protein microarrays, that focused on multianalyte biomarker identification, quantitation, and cell sorting in homogenized samples. However, these approaches lacked morphometry’s ability to elucidate structure or context.

New approaches leverage the power of these two disciplines by multiplexing morphometry with multiparametric techniques, answering not only “what” is there, but also “where” and “how much.”

For instrument manufacturers, the explosion in multiplexing applications is a challenge. The life sciences now span molecular analysis of proteins, DNA, and RNA (including mRNA); cellular analysis; and studies of tissues, small animals, and humans. In response to this challenge, companies such as PerkinElmer have developed suites of technologies and reagents. This paper focuses on the impact of one of these suites: PhenoLOGIC® for cell identification and sorting; inForm® Analysis Software for cell segmentation and classification in tissue sections; and the multispectral analyses enabled by devices such as Vectra®, Nuance®, and TRIO™.

PhenoLOGIC: Multiparametric analysis meets supervised machine learning

PhenoLOGIC’s name is an amalgam of “phenotype,” a manifested trait, and “LOGIC,” referring to analysis. It is a plug-in that relies on three key imaging cues: morphology (structure), intensity (measured as optical density), and texture (accumulation of intensities into recognizable features such as spots, ridges, and edges). Its goal is to rapidly develop an analytical protocol that can be used to characterize and quantify large numbers of cells.
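To make those three cues concrete, the following minimal Python sketch (using NumPy and scikit-image, not PhenoLOGIC’s actual internals) computes one illustrative feature of each type for a single cell, given a grayscale image scaled to [0, 1] and a boolean mask for that cell. The function name and the specific feature choices are assumptions for illustration only.

```python
import numpy as np
from skimage import filters, measure

def cell_cues(gray, mask):
    """Illustrative morphology, intensity, and texture cues for one cell.

    gray: 2-D float image scaled to [0, 1]; mask: boolean cell mask.
    """
    props = measure.regionprops(mask.astype(int))[0]

    # Morphology: area and roundness (1.0 = perfect circle).
    area = props.area
    roundness = 4 * np.pi * props.area / (props.perimeter ** 2)

    # Intensity: mean optical density, OD = -log10(transmittance).
    od = -np.log10(np.clip(gray, 1e-6, 1.0))
    mean_od = od[mask].mean()

    # Texture: spot- and edge-detector responses inside the cell.
    spots = filters.laplace(gray)[mask].std()   # blob-like detail
    edges = filters.sobel(gray)[mask].mean()    # edge strength

    return np.array([area, roundness, mean_od, spots, edges])
```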

As demonstrated in Figure 1, the process is straightforward. In step 1 (Figure 1a), the user circles representative cells to define the classes that will be used for segmentation. In this example, the smaller red circles identify dead cells; the larger green circles, live cells. Observe the two overlapping live cells in the middle of the image. By circling each one individually, the user trains the system to define more precisely what constitutes a single live cell.

Figure 1 – Using PhenoLOGIC to determine live:dead ratios. a) Step 1: Train. b) Step 2: Generate a linear, multiparametric analysis. c) Step 3: Segment and classify as live (green)/dead (red). d) Comparison of PhenoLOGIC results to live:dead biomarker.

In step 2 (Figure 1b), PhenoLOGIC identifies a combination of parameters that distinguishes living from dead cells. Note that while the user might have used just size or shape, PhenoLOGIC automatically determined a more effective live:dead discriminator by combining two texture features describing mitochondrial intensity with two morphological features: cell area and nuclear roundness. Step 3 (Figure 1c) illustrates the resulting highly effective classification.
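PhenoLOGIC’s own training algorithm is proprietary, but the idea of a linear multiparametric classifier can be sketched with scikit-learn. In the stand-in below, a logistic-regression model is fit to per-cell feature rows such as those returned by the cell_cues() sketch above; the numbers are synthetic placeholders, not measured data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic stand-ins for per-cell feature rows
# [area, roundness, mean_od, spot_texture, edge_texture].
live_center = np.array([900.0, 0.80, 0.50, 0.10, 0.20])
dead_center = np.array([300.0, 0.95, 0.90, 0.25, 0.35])
X = np.vstack([rng.normal(live_center, 0.1 * live_center, (20, 5)),
               rng.normal(dead_center, 0.1 * dead_center, (20, 5))])
y = np.array([1] * 20 + [0] * 20)            # 1 = live, 0 = dead

# Standardize, then fit a linear decision boundary, as in step 2.
clf = make_pipeline(StandardScaler(), LogisticRegression())
clf.fit(X, y)

# Step 3: classify every cell and report a live:dead ratio.
pred = clf.predict(X)
print(f"live:dead = {pred.sum()}:{len(pred) - pred.sum()}")
```

Because the classifier is linear, its fitted weights indicate how strongly each cue drives the live:dead decision, paralleling the combination of texture and morphology features chosen in Figure 1b.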

Traditionally, to determine which cells were living and which were dead, the sample was stained with a special biomarker. However, as demonstrated in Figure 1d, PhenoLOGIC’s multiparametric linear classifier was equally effective, eliminating the extra staining step and saving time and money. Its simple “learn-by-example” interface requires no custom programming, trains quickly, and develops reliable classifiers tailored to the experiment under study. Embedding PhenoLOGIC into automated handling stations such as Operetta® and Opera® (both from PerkinElmer) enables the entire process to scale from the microscope level to high-throughput screening of thousands of samples.

From cells to tissue

With the rise of proteomics, living cells became the laboratory of choice for studying proteins. The reason was simple: protein behavior depends on conformation. Loosen or uncoil that conformation and the protein loses its functionality. Today, we understand that cells in culture were just the first step. While they remain an important focus of study, we know that cells behave differently when part of the “community” known as tissue. This new understanding is driving an expansion in tissue analysis that ranges from localized tumor studies to whole-slide imaging (WSI) and tissue microarrays (TMAs).

To meet the expanded needs unique to tissue analysis, PerkinElmer built the train-by-example machine-learning concept into inForm, software designed for tissue region segmentation, cellular and subcellular segmentation, and biomarker intensity quantitation.

As Figure 2 demonstrates, the steps are similar to those described above for PhenoLOGIC. An image is loaded (Figure 2a); then the user trains inForm by drawing regions of interest for each tissue type to be classified (Figure 2b). As with PhenoLOGIC, inForm uses multiparametric analyses to segment first by tissue type (red/tumor areas versus green/stromal areas in Figure 2c) and then by cell structure (cytoplasm, nucleus, and membrane, shown in finer detail in Figure 2c). A minimal sketch of this train-by-example pixel classification follows the figure caption below.

Figure 2 – Breast tissue stained with DAB (progesterone receptor, PR) and hematoxylin showing inForm steps. a) Load the image. b) Draw regions of interest to train the system. c) Segment first by tissue region (green = stroma; red = tumor), then by subcellular components. d) Score as positive vs negative (red:blue) or e) as 1+/2+/3+ and H-score by “binning” nuclei according to optical density.
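The tissue-level step (Figure 2c) can be sketched in the same hedged spirit: a minimal per-pixel classifier, trained on the pixels under the user’s drawn regions and then applied to every pixel in the image. This is an illustrative stand-in, not inForm’s actual algorithm; the image and label masks below are hypothetical placeholders.

```python
import numpy as np
from skimage import filters
from sklearn.ensemble import RandomForestClassifier

def pixel_features(rgb):
    """Per-pixel color plus simple local-texture features."""
    gray = rgb.mean(axis=-1)
    edges = filters.sobel(gray)                   # edge strength
    smooth = filters.gaussian(gray, sigma=2)      # local context
    return np.dstack([rgb, edges[..., None], smooth[..., None]])

# Hypothetical inputs: an RGB field and the user's drawn training
# regions (0 = unlabeled, 1 = tumor, 2 = stroma), as in Figure 2b.
rgb = np.random.rand(128, 128, 3)
labels = np.zeros((128, 128), dtype=int)
labels[10:30, 10:30] = 1
labels[80:110, 80:110] = 2

feats = pixel_features(rgb)
train = labels > 0
clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(feats[train], labels[train])

# Classify every pixel to produce a tissue map (tumor vs stroma).
tissue_map = clf.predict(feats.reshape(-1, feats.shape[-1]))
tissue_map = tissue_map.reshape(labels.shape)
```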

Figure 2d is the result of several steps. Behind the scenes, spectral unmixing separates the brown nuclear stain DAB (3,3′-diaminobenzidine) from the blue hematoxylin, and inForm converts the image to quantitative optical density (OD) values. The user then sets an OD threshold to define the boundary between positive and negative cells. Figure 2d shows the ratio of PR-positive nuclei (50.25%, tagged in red) to negative (49.75%, tagged in blue). Alternatively, the data can be interpreted as a 1+/2+/3+ and H-score (Figure 2e) by setting a series of scoring “gates” (Image Analysis—Quantitative Scoring Methods; www.ihcworld.com/ihc_scoring.htm). In this case, the gates were set to 0–0.072 (blue), 0.073–0.150 (yellow), 0.151–0.306 (red), and 0.307 or greater (brown). The software then “bins” the nuclei, color-codes each nucleus by bin, and calculates the percentage of nuclei falling within each range. inForm’s workflow replicates the process used by pathologists and other experts, scoring only those cells that lie within the biology of interest; in this instance, the pathologist would score only the epithelial cells in the tumor and ignore the stromal cells.
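For readers who want the arithmetic spelled out, here is a minimal sketch of the scoring logic just described. It assumes the standard H&E/DAB color-deconvolution helper in scikit-image and hypothetical per-nucleus masks; the gates mirror those quoted above, and the H-score follows the conventional formula H = 1·(%1+) + 2·(%2+) + 3·(%3+), with a range of 0–300.

```python
import numpy as np
from skimage.color import rgb2hed

def score_nuclei(rgb, nucleus_masks, gates=(0.073, 0.151, 0.307)):
    """Positive:negative ratio and H-score from per-nucleus DAB density."""
    # Unmix the stains; channel 2 approximates the DAB optical density.
    dab = rgb2hed(rgb)[..., 2]
    mean_od = np.array([dab[m].mean() for m in nucleus_masks])

    # Figure 2d: positive vs negative at a single OD threshold
    # (here, the first gate stands in for the user's threshold).
    pct_positive = 100 * (mean_od >= gates[0]).mean()

    # Figure 2e: bin into 0 / 1+ / 2+ / 3+ using the scoring gates.
    bins = np.digitize(mean_od, gates)              # 0, 1, 2, or 3
    pct = [100 * (bins == k).mean() for k in range(4)]

    # H-score = 1*(%1+) + 2*(%2+) + 3*(%3+), range 0-300.
    h_score = 1 * pct[1] + 2 * pct[2] + 3 * pct[3]
    return pct_positive, pct, h_score
```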

Percent area studies

inForm can also be used to conduct percent area studies on images of tissue sections, such as the islets of Langerhans in the pancreas (responsible for insulin control) or granulomas in the liver. Currently, these assessments are done only visually, a tedious process. Experience with inForm suggests that if an expert can visually distinguish two morphologies, then inForm can likely be trained to segment them. Figure 3 shows the segmented and classified image of an H&E-stained section of breast tissue, demonstrating the percent area of five different structures; the computation itself is sketched after the figure caption below. The resulting data can be displayed in myriad formats, including histograms, scattergrams, and matrices comparing possible combinations of positive and negative stain response.

Figure 3 – Breast involution (H&E) showing tissue segmentation, classification, and percent area for multiple structures. (Image courtesy of Peter Gann, University of Illinois, Chicago.)
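Given a classified label map like the one behind Figure 3 (for example, the hypothetical tissue_map from the earlier sketch), the percent-area readout is a simple tally per class. The class codes and names below are illustrative assumptions, not the labels used in the actual study.

```python
import numpy as np

def percent_area(class_map, class_names):
    """Percent of image area occupied by each named class."""
    total = class_map.size
    return {name: 100 * np.count_nonzero(class_map == code) / total
            for code, name in enumerate(class_names, start=1)}

# Illustrative usage with hypothetical class codes 1-5:
demo = np.random.randint(1, 6, size=(64, 64))
print(percent_area(demo, ["duct", "lobule", "stroma", "fat", "vessel"]))
```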

Unlike microarray, PCR, ELISA, and related separation techniques that require tissue to be ground up, inForm’s multiplexed approach retains cell morphology and tissue context. Information can be gathered from statistically significant numbers of cells without being diluted by cells that are not of interest, as happens in “grind and find” methods, and each cell can be analyzed individually rather than lost in a homogenized sample.

Next steps

Part 1 of this article discussed the impact of multiplexing morphological methods, multiparametric techniques, and supervised machine learning to answer not only “what” is there, but also “where” and “how much.” Part 2 of this article, https://www.americanlaboratory.com/913-Technical-Articles/132335-Putting-the-More-in-Morphology-Part-2/, adds another powerful tool to the arsenal: multispectral imaging (MSI). Using spectral signatures to image multiple fluorophores or chromogens separately, MSI also removes pervasive autofluorescence, dramatically improving signal-to-noise ratio and enhancing contrast. Finally, the second installment pulls together the full arsenal, demonstrating the real-world impact of the integrated suite.

Barbara Foster is President and Chief Strategic Consultant, The Microscopy & Imaging Place, Inc., 7101 Royal Glen, Ste. A, McKinney, TX 75070, U.S.A.; tel.: 972-924-5310; e-mail: [email protected]; www.MicroscopyMarket.com and www.MicroscopyEducation.com. The author wishes to thank James Mansfield and the PerkinElmer staff for their assistance in providing applications and explanations for this article. Author’s note: Different fields use different definitions of multispectral, hyperspectral, and spectral imaging, which makes their use confusing. Multispectral will be used throughout here.