Scientific Imaging: To Sharpen or Obscure?

In the world of scientific imaging, the question of whether to sharpen an image can pose an ethical dilemma: Does sharpening change the essential nature of the image? In tandem with this question comes a hesitance to sharpen because no objective method exists for setting limits on the extent of sharpening. Furthermore, many journals' authors' guidelines either forbid sharpening or caution against it (a curiosity, since sharpening almost always occurs at the printing press art department when "trapping" an image, a postprocessing method intended to prevent ink from spreading through paper via capillary action). Yet, depending upon the image and the camera system used to acquire it, sharpening may be required either to reveal salient details or to return the image to its original appearance when viewed through a microscope. Looked at another way, a refusal to sharpen can be a means through which a scientist intentionally or unintentionally obscures potentially critical image information. Thus, sharpening should be seen as a postprocessing method to be applied when details are obscured and, to a lesser extent, when focus is "lost" through the camera system.

Some background information before continuing, starting with semantics: When a camera system is used, clarity (the perception of increasing levels of detail and sharp edges) is achieved through the act of focusing; in postprocessing, it is achieved through sharpening. The perception of sharp edges comes with an absence of gradients between the edge of one significant feature and a neighboring significant feature. A significant feature is something in the image to which the eye is drawn. Generally, a significant feature contains the greatest difference between a dark area and a bright area, that is, an area of highest contrast, but it can also be a feature of greatest interest. Portrait photographers capitalize on human perception by focusing on the iris of the eye, an area of both high interest and intrinsic contrast. If the iris appears "sharp" and the rest of the image is blurred, viewers still perceive the image as focused.

But no matter how critically a user focuses, other issues with the camera system can confound those efforts. Camera systems, by themselves, can contribute to a loss of focus. If the camera detector is a mass-produced CCD (charge-coupled device) or CMOS (complementary metal oxide semiconductor) chip, the detector likely lies behind an anti-aliasing filter. This filter reduces the stair-step appearance that strong horizontal, vertical, or off-axis lines would otherwise take on in the image. The net effect of an anti-aliasing filter is a loss of focus. Furthermore, poor optics projecting the image onto the camera, along with vibration, can produce an image that looks sharp to the eye through the eyepiece but loses focus when captured by the camera.

On the other hand, the camera system can be too good. High-resolution optical systems produce images in which submicron distinctions can be discriminated, but these images inevitably look slightly out of focus to the human eye. Furthermore, the details in the image tend to blend into surrounding areas, so that the very details that provide the experimental results are the ones overlooked. This phenomenon, in which high-resolution imaging effectively lowers contrast and obscures image details, results in the all-too-common complaint that the image does not have enough "resolution." In common parlance, "resolution" is often shorthand for "the details I want others to see are plainly visible." A lack of resolution, conversely, means "the details I've seen through the microscope don't appear in the image." It's all about seeing details.

As it turns out, the most effective way of revealing details is sharpening. While sharpening is usually understood as providing the appearance of sharp edges and focus (arguably legitimate when the camera system itself slightly blurs images), its more useful function is revealing obscured details. That's because commonly used sharpening methods (unsharp mask in particular) also increase contrast between features and their surrounding areas.
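
To make the mechanics concrete, the following is a minimal sketch of the classic unsharp mask operation in Python with NumPy and SciPy. Mapping the radius to the Gaussian sigma and the amount to a simple multiplier is an assumption made here for illustration; commercial implementations such as Photoshop's are not published and may differ.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def unsharp_mask(image, radius=2.0, amount=1.0):
        """Classic unsharp mask: add back a scaled difference between the image
        and a Gaussian-blurred copy of it. Here `radius` is used as the Gaussian
        sigma and `amount` as the strength (1.0 corresponding to "100%")."""
        img = np.asarray(image, dtype=np.float64)
        blurred = gaussian_filter(img, sigma=radius)
        # The (img - blurred) term is large only near edges, so adding it back
        # darkens one side of an edge and brightens the other, raising local contrast.
        sharpened = img + amount * (img - blurred)
        # Clip to the 8-bit range so highlights do not wrap around.
        return np.clip(sharpened, 0, 255).astype(np.uint8)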

Having said that, oversharpening can also create false details, especially with brightfield, color images. Areas of increased contrast can create what might look like anatomical features, much as a sample illuminated without Koehler alignment can. With fluorescence images, oversharpening can also create artificial brightness when staining is dim, or suggest experimental evidence where there wasn't any. In both cases, and in any case when it comes to postprocessing, steps must be reported to journals, and all original (raw) images must be archived so that reviewers can compare sharpened images with the originals.

When sharpening is not overdone, and when it is applied out of a legitimate desire to reveal details that would otherwise be obscured, it is not only appropriate but also a safeguard against future accusations of scientific misconduct for obscuring visual information.

So how can researchers avoid oversharpening? Two preventive steps follow:

  1. Zoom in on the image to approximately 400%, or to the point at which individual cells or cellular features are easy to visualize. At high zoom, halos and other sharpening artifacts become visible at the pixel level, so oversharpening is less likely to go unnoticed.
  2. Examine the sharpened image by eye to determine whether false detail has been introduced (brightfield) or whether the brightest significant areas of the image have oversaturated (brightened to the point at which detail is lost, such as when nucleoli in fluorescently labeled cells no longer appear as discrete structures). A numerical companion to this check is sketched after this list.
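
As a rough, objective companion to the by-eye check in step 2, one can simply count pixels that sit at the top of the tonal scale before and after sharpening; a large jump suggests that bright detail has been clipped. This is a minimal sketch assuming 8-bit grayscale arrays, not a prescribed workflow:

    import numpy as np

    def saturation_report(original, sharpened, max_value=255):
        """Count pixels at the top of the tonal scale before and after
        sharpening; a large increase suggests bright detail has clipped."""
        before = int(np.count_nonzero(original >= max_value))
        after = int(np.count_nonzero(sharpened >= max_value))
        print(f"Saturated pixels: {before} before, {after} after sharpening")
        return after - before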

A better understanding of sharpening methods also helps in choosing the right method for the type of image. Two methods will be discussed: unsharp masking, and a convolution method that uses a high pass filter, called here the "high pass sharpening method." Both methods increase the tonal difference at feature edges. That is done via the creation of a darker "moat" surrounding individual, brighter features (with the width determined by the radius setting in the case of the unsharp mask filter). Both methods also increase brightness and, to a lesser extent, darkness. The methods differ, however, in the way tonal values are changed: The high pass method retains broad tonal relationships, whereas unsharp masking does not; instead, subtler differences within features are amplified into new tonal ranges (see Figure 1).

Figure 1 – The graph at top shows the varying tones underneath the yellow line shown in the "Original Image" picture at bottom right. The tones underneath the line are displayed as intensities on a 0 to 255 tonal scale (y-axis) along a line length of 50 pixels (x-axis). The line profile was created in Fiji (version 3.7). The green profile in the graph indicates the tonal variation along the length of the line for the Original Image. The blue profile (slightly offset to show differences) shows the variation after an unsharp mask filter was applied in Photoshop (version CS3) at a radius of 2 and an amount of 100 (unsharp mask picture). The reddish profile shows the variation when the high pass method was used (high pass = 4). Note that unsharp mask filtering creates profiles that "dip" into a darker tone at the bottoms of the profiles, whereas the high pass method mimics the profile of the Original Image to a greater degree. Both the high pass method and unsharp masking increase brightness, thus requiring a setting that does not oversaturate the brightest whites and, in so doing, obscure details.
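
For readers who prefer to inspect such profiles outside of Fiji, the following is a minimal sketch of the same idea in Python with Matplotlib; the row, starting column, and 50-pixel length are placeholders, and grayscale 2-D NumPy arrays are assumed.

    import matplotlib.pyplot as plt

    def plot_line_profiles(original, sharpened, row, col_start, length=50):
        """Plot intensities along the same horizontal line in an original
        image and its sharpened version, in the spirit of Figure 1."""
        cols = slice(col_start, col_start + length)
        plt.plot(original[row, cols], color="green", label="Original")
        # Offset the sharpened profile slightly so the two curves stay distinguishable.
        plt.plot(sharpened[row, cols].astype(float) + 5, color="blue",
                 label="Sharpened (offset)")
        plt.xlabel("Position along line (pixels)")
        plt.ylabel("Intensity (0-255 scale)")
        plt.legend()
        plt.show()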

Within many scientific programs, unsharp mask can be selected and sliders used to set the strength (or amount) and the radius (generally set under 2.5). The high pass method is done through a series of steps (sketched in code after the list):

  1. The image is duplicated, and then a high pass filter is applied to the image.
  2. If the image is in color, desaturate the high pass filtered image (Photoshop), or change to gray scale.
  3. The two images then undergo image math, in which a blending formula is applied. In Photoshop, the blend mode is generally set to what is termed "Hard Light," which multiplies the original image where the high pass layer falls below middle gray (128 on an 8-bit scale) and screens it where the layer falls above, darkening one side of each edge and brightening the other.
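
The following is a minimal sketch of those three steps in Python with NumPy and SciPy, under the assumption that the High Pass filter can be approximated as the image minus a Gaussian-blurred copy, re-centered on middle gray; it illustrates the idea rather than reproducing Adobe's exact implementation.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def high_pass_sharpen(image, radius=4.0):
        """Approximate the "high pass sharpening method": build a desaturated
        high pass layer, then blend it over the original with Hard Light."""
        base = np.asarray(image, dtype=np.float64) / 255.0
        gray = base.mean(axis=2) if base.ndim == 3 else base   # step 2: desaturate
        # Step 1: high pass layer, re-centered on middle gray (0.5).
        high_pass = np.clip(gray - gaussian_filter(gray, sigma=radius) + 0.5, 0.0, 1.0)
        if base.ndim == 3:
            high_pass = high_pass[..., np.newaxis]             # broadcast over RGB
        # Step 3: Hard Light blend -- multiply below middle gray, screen above.
        blended = np.where(high_pass <= 0.5,
                           2.0 * base * high_pass,
                           1.0 - 2.0 * (1.0 - base) * (1.0 - high_pass))
        return np.clip(blended * 255.0, 0, 255).astype(np.uint8)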

The high pass method generally works best with brightfield images because, in general, brightfield images don't contain significant tones that can saturate. The high pass method also makes oversharpening less likely, because the multistep procedure would have to be repeated deliberately to push the effect too far.

Figure 2 – The image on the left shows staining of a Purkinje cell without postprocessing. On the right, the same image was postprocessed with an unsharp mask filter (Radius = 2; Amount = 200). The image on the right clearly shows punctate staining, a key feature of the experimental result and a compelling reason to sharpen. On the left, the puncta are obscured, leading the viewer to believe that the experiment produced overall labeling.

Unsharp masking is better suited to fluorescently labeled images, but caution must be taken to avoid oversaturation. Unsharp masking reveals details in the areas where they are most often lost: the midrange of tones. Features such as spines along a neurite, ganglia of smaller diameter, and other fine structures become far easier to visualize after unsharp masking. Structures can also regain their anatomically correct appearance after sharpening (see Figure 2).
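
As a usage example in that spirit, the sketch below applies an unsharp mask to a hypothetical 16-bit fluorescence channel and then checks whether the brightest structures have clipped; the array, sigma, and amount values are placeholders, not recommended settings.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    # Hypothetical 16-bit fluorescence channel; in practice this would be read
    # from a file rather than generated.
    channel = np.random.randint(0, 4096, size=(512, 512)).astype(np.uint16)

    blurred = gaussian_filter(channel.astype(np.float64), sigma=2.0)
    sharpened = np.clip(channel + 2.0 * (channel - blurred), 0, 65535).astype(np.uint16)

    # The caution above: confirm that the brightest structures have not clipped.
    print("Pixels at full scale after sharpening:",
          int(np.count_nonzero(sharpened == 65535)))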

In the end, when considering the sharpening of images, not only must the negative consequences be examined, but also the consequence of revealing obscured details. Certainly, instances exist in which researchers oversharpened images, or simply sharpened them, in ways that amounted to "doctoring." But the instances in which the revelation of detail is necessary should not be overlooked, even granting that visual images are data. They are, but images are also the means through which we communicate on a perceptual level. Numbers can be deceiving when important details are hidden.

Jerry Sedgewick is a Scientific Imaging & Image Analysis Consultant and Technical Writer/Regulatory Consultant, 965 Cromwell Ave., Saint Paul, MN 55114, U.S.A.; tel.: 651-788-2261; e-mail: [email protected].