Corrections in digital imaging

Introduction

Digital photography offers exciting possibilities to mitigate lens faults, and other causes of image defects, by means of digital signal processing. Available tools range from proprietary in-camera algorithms and raw converters to third-party software suites and math-based programming environments. While exploring internet fora, one often gets the impression that every image imperfection can be corrected. Is that true? Much depends on the meaning of "corrected" and on the degree of image degradation. "Just as good" has a subjective side to it, and understanding the nature of a fault sheds light on the restoration possibilities, so each fault merits its own discussion. That discussion uses the following definitions:

Improvement: processing renders the fault less distracting and brings the image closer to the desired result. Most of the faults treated below can be improved in this sense.

Correction: the processed image is just as good as a direct capture made in the absence of the defect. Only a few faults can be corrected in this sense.


Deconvolution

Slightly blurred images can be improved by various routine sharpening algorithms which increase edge contrast. These methods enhance the sharpness impression, but do not increase resolution. A text blurred well beyond readability does not become readable with unsharp masking or high-pass sharpening. Deconvolution, however, can achieve this. Anything that causes unintentional blur can only be corrected by deconvolution. Deconvolution can be seen as the inverse process of the blur generation. For each point in object space, deconvolution algorithms attempt to harvest the light from the corresponding blur patch in image space, and bring it back to a single point. Deconvolution requires implicit or explicit knowledge of the point spread function (PSF), and can be an ill-posed problem that consumes a lot of processing power. Boundary effects, ringing, and noise amplification are likely side effects of deconvolution. Highlights exceeding the dynamic range of the sensor may clip many pixels in their blur region. Such areas are beyond rescue even with the best algorithms. Another challenge is discrimination between unintentionally and intentionally blurred parts of the image, when those coexist.

Despite the treacherous terrain, deconvolution offers exciting possibilities for image restoration in the digital domain, where a high bit depth, low noise, and gamma correction help to keep numerical errors within bounds.
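To make the principle concrete, the following MATLAB sketch is a toy example rather than a practical restoration: it blurs a test image with a disk-shaped PSF and then undoes the blur by naive division in the Fourier domain. The image cameraman.tif ships with the Image Processing Toolbox; the constant eps0 is an ad-hoc regularization to avoid division by near-zero OTF values.

```matlab
% Blur as a convolution, and its naive inversion in the Fourier domain.
I   = im2double(imread('cameraman.tif'));  % ground truth
PSF = fspecial('disk', 5);                 % defocus-like blur patch
B   = imfilter(I, PSF, 'circular');        % blurred observation

OTF  = psf2otf(PSF, size(B));              % optical transfer function
eps0 = 1e-3;                               % ad-hoc regularization
R    = real(ifft2(fft2(B) ./ (OTF + eps0)));

% With noise added to B, the division amplifies it dramatically, which
% is why practical algorithms (Wiener, Lucy-Richardson, ...) constrain
% the inversion.
imshowpair(B, R, 'montage');
```

On noise-free synthetic blur this crude inversion works almost perfectly; with real captures, the complications listed above take over.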


This page is under permanent construction. Examples will be added or updated with time.

Case descriptions



White balance

Differences in spectral transmittance result in some lenses being "colder" or "warmer" than others. By adjusting the white balance of an image, it is possible to obtain a consistent color balance for a series of images shot with different lenses. Correction is possible when the required adjustment is small, which is usually the case. The situation is different when one attempts to make different light sources look alike. Street lights or candlelight, for instance, emit light in a relatively small part of the visible spectrum, and images taken under such light cannot be processed for a daylight look. Similarly, it is not possible to process a person illuminated by a colored spotlight for natural skin colors, simply because the required information is missing. Correction would require amplification of color channels carrying little energy, resulting in noise amplification.
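In a raw workflow such an adjustment boils down to per-channel gains. The MATLAB sketch below neutralizes a patch that should be gray; the file name and patch coordinates are hypothetical, and linear (not gamma-encoded) RGB data is assumed.

```matlab
% White balance by per-channel gains, derived from a neutral gray patch.
I    = im2double(imread('scene.tif'));     % hypothetical linear RGB image
rows = 100:120;  cols = 200:220;           % hypothetical gray patch
m    = squeeze(mean(mean(I(rows, cols, :), 1), 2));  % mean R, G, B
gain = m(2) ./ m;                          % normalize to the green channel
Iwb  = min(I .* reshape(gain, 1, 1, 3), 1);% apply gains, clip to [0,1]
```

Note that a large gain on a channel that carries little energy amplifies its noise, which is precisely the limitation noted above.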



Flare

Reflections by lens elements, filters, the filter stack on top of the sensor, or lens barrels, or scattering by impurities in the lens glass, may lead to false light in the image. Typical manifestations are flare patches of varying colors and shapes, or a more global haze known as veiling glare, which affects contrast and color saturation. Localized flare patches in areas of uniform color and brightness (e.g. a blue sky) can be corrected by copying parts of neighboring areas over the affected area, but the situation is much more complicated when the flare affects areas with lots of detail and tonal variations. Correction is generally not possible without knowing beforehand what the affected areas should look like in the absence of flare.

An exception may occur in the case of veiling glare, when the haze is uniform over the frame. Figures 1A and 1B show a backlit signpost photographed with lenses with good flare control and poor flare control, respectively. Focal length, aperture, exposure, and processing are the same. The veil in Fig. 1B ruins global contrast and color saturation, and results in a dull image.

Figure 1A. A backlit sign photographed with a flare-resistant lens.

Figure 1B. A backlit sign photographed with a flare-sensitive lens.

Post-processing is performed in MATLAB [1], simply by subtracting a vertical-gradient gray value and readjusting the levels. Subtraction of a constant gray value already yields a significant improvement, but the present example uses a gradient to achieve an even better result. The gradient reflects the fact that the flare is not completely uniform, but decreases somewhat toward the bottom border. The corrected version in Fig. 1C features much improved global contrast and vivid colors. Nothing beats a good original, though, as shadow detail is inevitably lost.
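The operation fits in a few lines. The sketch below mimics the described treatment; the file name and the gradient endpoints g_top and g_bot are hypothetical, and would have to be tuned to the haze at hand.

```matlab
% Subtract a vertical-gradient gray value and readjust the levels.
I = im2double(imread('signpost.tif'));      % hypothetical flare-affected image
[h, w, ~] = size(I);
g_top = 0.18;  g_bot = 0.10;                % hypothetical veil estimates
veil  = repmat(linspace(g_top, g_bot, h)', 1, w);
J = max(I - repmat(veil, 1, 1, 3), 0);      % remove the veil per channel
J = (J - min(J(:))) / (max(J(:)) - min(J(:)));  % restretch the levels
```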

Figure 1C. The image of Fig. 1B after post processing.



Vignetting

A reduction in image illumination away from the center may result in dark corners. The image is beyond rescue in the case of mechanical vignetting, when the corners receive no light at all. In the case of optical vignetting and natural illumination fall-off, it is straightforward to get rid of the dark corners, simply by applying a brightness compensation with a suitable dependence on the radial distance. However, this compensation amplifies noise as well. After the brightness compensation, the noise will be more prominent in the corners than in the center. This is often not noticeable, but sometimes it is, depending, among other things, on the amount of vignetting and the ISO setting. It is impossible to correct for the drop in signal-to-noise ratio that is inherent to vignetting.
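A basic radial brightness compensation is sketched below. The gain profile is a smooth polynomial in the normalized radius; the coefficient k is a hypothetical strength that would in practice follow from a lens profile or a flat-field exposure, as is the file name.

```matlab
% Radial brightness compensation for vignetting.
I = im2double(imread('corners.tif'));       % hypothetical image
[h, w, ~] = size(I);
[x, y] = meshgrid(1:w, 1:h);
r = hypot(x - (w+1)/2, y - (h+1)/2);
r = r / max(r(:));                          % normalized radial distance
k = 0.8;                                    % hypothetical falloff strength
gain = 1 + k*r.^2 + k^2*r.^4;               % brightens toward the corners
J = min(I .* repmat(gain, 1, 1, 3), 1);     % corner noise is amplified too
```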

The treatment of vignetting is illustrated below. The picture in Fig. 2A was taken under poor light conditions at a high ISO setting. Application of a radial brightness adjustment yields a more even illumination: Fig. 2B. Unfortunately, the adjustment also reveals that the bottom corners are rather noisy and lack valid detail. In this case, the result of the treatment cannot be called a full correction, because a lens with less vignetting would have rendered corners with more detail.

Figure 2A. Image without vignetting treatment.

Figure 2B. Image with reduced vignetting.


Distortion

Distortion is readily noticed when the subject contains straight lines, which are rendered as curves in the image. Barrel, pincushion and moustache distortion are familiar terms to describe different forms of distortion. Distortion is the only lens aberration that does not blur the image, and a high level of correction is possible by means of a two-dimensional resampling (alternatively called rescaling or interpolation) operation, using a resampling factor that depends on the radial distance. The loss in resolution and contrast due to the resampling is moderate to small, depending on the amount of distortion and the quality of the algorithm. Losses occur at the borders, where small regions of the original capture fall outside the rectangular frame of the final image.
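A sketch of such a resampling operation is given below, for a simple one-parameter radial model r' = r(1 + k1*r^2). The coefficient k1 and the file name are hypothetical; real profiles use more elaborate models, but the interpolation principle is the same.

```matlab
% Distortion correction by radial resampling.
I = im2double(imread('building.tif'));      % hypothetical capture
[h, w, ~] = size(I);
[x, y] = meshgrid(1:w, 1:h);
cx = (w+1)/2;  cy = (h+1)/2;
xn = (x - cx)/(w/2);  yn = (y - cy)/(w/2);  % normalized coordinates
r2 = xn.^2 + yn.^2;
k1 = 0.05;                                  % hypothetical coefficient
xs = cx + xn .* (1 + k1*r2) * (w/2);        % source coordinates in capture
ys = cy + yn .* (1 + k1*r2) * (w/2);
J  = zeros(size(I));
for c = 1:size(I, 3)                        % resample each channel
    J(:,:,c) = interp2(x, y, I(:,:,c), xs, ys, 'cubic', 0);
end
```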

The correction of distortion is illustrated below. The raw capture in Fig. 3A was taken with a lens that renders the red edge at the top of the building in a characteristic moustache fashion. A treatment in Lightroom [2], using the Adobe profile for this lens, removes the artifact by neatly straightening the top of the building: Fig. 3B. Incidentally, the black area at the bottom is also a straight object, but one that is much closer to the lens and therefore blurred. Unlike the rooftop, this object suffers from strong barrel distortion, which is possible because distortion, like all lens aberrations, depends on the subject distance. The dark object is of course not corrected by a profile meant for infinity focus. (To be sure, this is an academic example. Under practical shooting conditions, a suitable profile usually corrects the entire image.)

Figure 3A. Image without distortion treatment.

Figure 3B. Image with distortion correction.

In recent years some manufacturers have relaxed the distortion requirements of their lenses. The reason is that other aberrations can be much reduced by allowing more distortion in the design. Correction is automatically performed by firmware in the camera, and the user never gets to see the raw lens performance.

Image rotation, for example to level the horizon, falls in the same category as distortion correction. The loss in resolution and contrast is moderate to small, depending on the quality of the interpolation algorithm.


Chromatic aberration

In tackling chromatic aberration (CA), one should distinguish between lateral and longitudinal CA. Lateral CA, or lateral color, causes the image magnification to be a function of the wavelength. The image is sharp at each wavelength, but the image of a single point in object space ends up at different positions for the constituent wavelengths. The result is a drop in resolution and microcontrast, as well as visible color fringing. When the fringes are minor, say up to one pixel wide, correction is possible by a two-dimensional resampling (rescaling) of the individual color channels. However, since each color channel spans a wavelength interval, severe cases of lateral color blur the image within each color channel. In that case, resampling is only a partial solution.
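For such mild cases, the per-channel resampling can be sketched as follows. The red and blue channels are radially rescaled with respect to green; the magnification factors and the file name are hypothetical, with the factors in practice coming from a lens profile or from a fit that minimizes the fringe widths.

```matlab
% Lateral color: radially rescale R and B with respect to G.
I = im2double(imread('cross.tif'));         % hypothetical capture
[h, w, ~] = size(I);
[x, y] = meshgrid(1:w, 1:h);
cx = (w+1)/2;  cy = (h+1)/2;
mag = [1.0008, 1, 0.9994];                  % hypothetical R, G, B magnifications
J = I;
for c = [1 3]                               % green is the reference
    xs = cx + (x - cx)*mag(c);
    ys = cy + (y - cy)*mag(c);
    J(:,:,c) = interp2(x, y, I(:,:,c), xs, ys, 'cubic', 0);
end
```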

Axial color, or longitudinal CA, causes the image position to be at different distances from the lens for different wavelengths. In the plane sampled by the sensor, the wavelengths have blur disks of different sizes. Resolution and microcontrast are affected, and color fringing occurs. CA is only defined for the plane of best focus, but its cause (dispersion) may also affect the foreground or background blur of images with selective focus. Defocus color fringing can be as annoying as axial color.

Axial color and bad cases of lateral color require deconvolution, solving the inverse problem separately for each color channel while discriminating between the plane of focus and a possibly blurred background. To see whether a CA tool does more than mere fringe desaturation, it is often helpful to compare B&W versions of the untreated and treated images. Attempts with Lightroom (V5.4) were not very successful. The tool for lateral color just seems to desaturate the affected areas, and mixed results were obtained with the tool for axial color. In both cases the distracting colors may disappear from the fringes, which greatly improves the overall image appreciation, but the drop in resolution and microcontrast (which is not restricted to the fringe areas) remains.

Figure 4. Top row: A cross affected by lateral color. Bottom row: Improvements obtained in post-processing.

Lightroom's treatment of lateral color is illustrated by Fig. 4. The top row shows a cross, placed in the top-left corner of the frame, photographed with three retrofocus wide-angle lenses at F/11. The first two lenses are rather poor, with significant lateral CA, while the third lens is a state-of-the-art design that is very well corrected for CA. The situation after treatment is shown in the bottom row. The color fringing is reduced, but has not disappeared completely for the first two lenses. Edge definition has not improved at all, and the 'corrected' images of the lesser lenses are still worse than the untreated image of the good lens.


Other aberrations

Spherical aberration, coma, astigmatism and field curvature blur the image in various ways. A single point in object space becomes a blur patch in the image, whose size and shape is generally a function of the position in the field. Chromatic variations of these aberrations further aggravate the situation. The cover glass on digital sensors also introduces aberrations, unless the lens design takes its presence into account. Slightly blurred image regions can be improved by ordinary sharpening techniques, but correction requires deconvolution, which is an exceedingly difficult task. The algorithm would need to know, or figure out, the PSF of the lens as it varies over the field, and how it varies with the object distance. The situation is easier in astronomy, as the images of stars in dark surroundings allow direct measurement of the PSF.

There is speculation that manufacturers apply deconvolution in their proprietary (in-camera) algorithms, for instance to deal with the astigmatism of the sensor filter stack. If true, it remains to be seen whether it works well under all conditions. The chances of success with third-party software suites are small.



Defocus and motion blur

A focus error, or motion of the camera or the subject during exposure, yields a blurred subject. As with all types of blur, correction requires deconvolution. The restoration task may be less hopeless than with lens aberrations, as defocus blur and some types of motion blur may be fairly uniform over the field. The PSF may be reconstructed approximately, either by guessing or by measurement, for instance if the subject features bright object points in otherwise dark areas. There are also blind algorithms, which try to figure out the PSF from scratch. A reasonable degree of correction may be possible, but it is difficult to avoid artifacts and noise in the restoration process. The following examples were processed in MATLAB [1] and BiaQIm [3].

Note that the examples in this section concern photographs taken with an actually defocused lens. Restoration of these images is much more challenging than undoing synthetic blur. Figure 5A shows a crop of a photograph of a book shelf, with the focus placed on the book spines. This image serves as the ground truth for the restoration processes that follow. The image in Fig. 5B results from defocusing the camera lens. The PSF has a diameter of about 10 pixels, rendering the small print unreadable. An attempt to undo the blur is shown in Fig. 5C, in this case with the Lucy-Richardson algorithm from the image processing toolbox in MATLAB. Clearly the attempt is an improvement, but there is also some ringing and noise.

Figure 5A. Subject in focus.

Figure 5B. Moderately blurred subject.

Figure 5C. Deconvolution of the image of Fig. 5B with the Lucy-Richardson algorithm in MATLAB.
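The processing behind Fig. 5C is essentially a one-liner once the PSF is modeled. A sketch, with a uniform disk of roughly the estimated 10-pixel diameter; the file name and iteration count are hypothetical.

```matlab
% Lucy-Richardson deconvolution of the defocused capture (cf. Fig. 5C).
B   = im2double(imread('shelf_blur10.tif'));  % hypothetical file
PSF = fspecial('disk', 5);      % uniform disk, ~10 px diameter
R   = deconvlucy(B, PSF, 30);   % more iterations sharpen, but also ring
```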

The image in Fig. 5D is obtained by further defocusing the lens. The diameter of the blur disk has grown to some 30 pixels, which blurs all text beyond legibility. Figures 5E through 5H give the restoration results for a few different deconvolution algorithms. The results are not as good as the result of Fig. 5C, but that was not to be expected. The Lucy-Richardson and Landweber solutions look better than the simple Fourier and Wiener filter approaches, but this comes at a price: see the processing times mentioned in the captions. These times are for the shown crop, not the entire image, and were measured on the same workstation. (The shown crop is a bit smaller than the crop used in the processing, to allow for boundary effects.) Neither the results nor the processing times should be seen as definitive characteristics of the tested algorithms. All methods have one or more input parameters, which may affect the performance and the run time. Figures 5E-5H are merely meant to give an idea of what typical deconvolution results can look like for a case with lots of blur. Perfect reconstruction is not feasible, but the readability of most text can be restored.

Figure 5D. Generously blurred subject.

Figure 5E. Deconvolution of the image of Fig. 5D with a simple Fourier method programmed in MATLAB. (Processing time 0.25 s.)

Figure 5F. Deconvolution of the image of Fig. 5D with the simple Wiener filter in BiaQIm. (Processing time 8 s.)

Figure 5G. Deconvolution of the image of Fig. 5D with the Lucy-Richardson algorithm in MATLAB. (Processing time 33 s.)

Figure 5H. Deconvolution of the image of Fig. 5D with the Landweber method in BiaQIm. (Processing time 1200 s.)
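For comparison, a Wiener-type restoration is equally compact in MATLAB. The sketch below uses deconvwnr from the Image Processing Toolbox, not the BiaQIm filter behind Fig. 5F; the file name is hypothetical, and the noise-to-signal ratio nsr is a hypothetical tuning parameter that trades noise amplification against sharpness.

```matlab
% Wiener deconvolution of the heavily defocused capture (cf. Fig. 5D).
B   = im2double(imread('shelf_blur30.tif'));  % hypothetical file
PSF = fspecial('disk', 15);     % uniform disk, ~30 px diameter
nsr = 0.01;                     % hypothetical noise-to-signal power ratio
R   = deconvwnr(B, PSF, nsr);
```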

To be sure, the amount of defocus blur makes the image of Fig. 5D a real challenge, but otherwise it is a relatively simple case. The books are approximately in the same plane, perpendicular to the picture-taking direction, and a well-corrected lens was used at an aperture in the middle of its range. These two factors ensure that the blur patch is fairly uniform over the field, and fairly well approximated by a disk of uniform intensity. The following complications may arise under more general shooting conditions:

  1. Object points at different distances from the lens yield blur disks with different sizes.
  2. In the case of a non-circular aperture, the orientation of the blur polygon is mirrored between foreground and background blur.
  3. Optical vignetting and some aberrations cause the shape of the blur patch to vary over the field.
  4. Aberrations affect the light distribution over the blur patch. Aspherical elements and diffraction may do the same via the onion-ring effect. And let us not forget dust particles.
  5. When chromatic aberration and chromatic variation of other aberrations leave their fingerprint on the out-of-focus areas, the inverse problem has to be solved separately for the individual color channels.
  6. The field of view changes when a lens is defocused.
  7. The perspective changes when a lens is defocused.

The last two points are of no consequence to the quality of the deconvolution result, but imply that correction in a strict sense is not possible even with perfect deconvolution. The defocused image is a blurred version of a different image than the in-focus image. In many cases the difference is small and hardly objectionable; the first five points are more troublesome.


Diffraction

Diffraction causes blur, and correction thus requires deconvolution. There is certainly hope, since diffraction blur is well understood, reasonably predictable for a known f-number, and uniform over the field. Impressive examples of diffraction treatment by (deconvolution) sharpening can be found on the web, and Sony is rumored to have implemented it in the A7(r). The bad news is that the correction is only partial. The wavelength dependence of the blur complicates matters, and accurate deconvolution also requires knowledge of the precise shape and orientation of the aperture. Worst of all, the deconvolution has to deal with an exceedingly large PSF, because diffraction affects the contrast at all spatial frequencies (Fig. 6). One only needs to think of the diffraction stars emanating from street lights in night photography, and realize that all image points (except where the subject is black) radiate out in the same way. In post-processing one can hope to increase the resolution, but the reduction in global contrast cannot be undone. The former requires deconvolution of the relatively small central region of the diffraction pattern, whereas the latter requires an algorithm that brings the entire diffraction star back to a single point. That is not going to happen anytime soon.


Figure 6. Diffraction-limited MTF for a round aperture and green light.
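The curve of Fig. 6 follows from the standard diffraction-limited MTF of a circular aperture, MTF(v) = (2/pi)*(acos(v/vc) - (v/vc)*sqrt(1 - (v/vc)^2)), with cutoff frequency vc = 1/(lambda*N). A MATLAB sketch for green light and a hypothetical f-number:

```matlab
% Diffraction-limited MTF of a circular aperture (cf. Fig. 6).
lambda = 0.55e-3;               % green light, wavelength in mm
N      = 8;                     % hypothetical f-number
vc     = 1/(lambda*N);          % cutoff frequency in cycles/mm
v      = linspace(0, vc, 500);
s      = v/vc;                  % normalized spatial frequency
mtf    = (2/pi)*(acos(s) - s.*sqrt(1 - s.^2));
plot(v, mtf), grid on
xlabel('Spatial frequency (cycles/mm)'), ylabel('MTF')
```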


Aliasing

When the lens casts an image containing energy at spatial frequencies beyond the Nyquist frequency of the sensor, and the sensor lacks an anti-alias filter (AAF), aliasing occurs. The energy is redistributed over the frequency regime below Nyquist, on top of the valid energy at those frequencies. Aliasing affects areas with sharp edges or fine detail, i.e. wherever the subject has appreciable high-frequency content. Artifacts are plentiful and include jagged slanted edges (the staircase effect) and crunchy-looking vegetation. Aliasing of a regular microscopic pattern may lead to a macroscopic pattern, known as moiré. Sensors with a Bayer color filter array (CFA) have more aliasing in the blue and red color channels than in the green channel, leading to colored moiré patterns. Color moiré is not unique to Bayer filters, however, and also depends on the demosaicing algorithm.
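The mechanism is readily demonstrated in one dimension, where a frequency above Nyquist masquerades as a lower one. In the MATLAB sketch below, a 350 Hz cosine sampled at 500 Hz (Nyquist 250 Hz) produces exactly the same samples as a 150 Hz cosine; no processing of the samples can tell the two apart.

```matlab
% Aliasing in one dimension: 350 Hz sampled at 500 Hz aliases to 150 Hz.
fs = 500;                       % sampling rate (Nyquist 250 Hz)
t  = 0:1/fs:0.1;
x  = cos(2*pi*350*t);           % true signal, above Nyquist
xa = cos(2*pi*150*t);           % alias: |350 - 500| = 150 Hz
max(abs(x - xa))                % ~1e-13: the samples are identical
```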

The staircase effect is illustrated by Fig. 7. The left crop shows a window with blinds, captured on a sensor with an AAF. The middle crop shows the same scene, photographed with the same lens, on a sensor with the same resolution, but without an AAF. The latter sensor also has a different CFA, X-Trans instead of Bayer, which is not equally well handled by all raw converters. For comparison, the third crop shows the same X-Trans capture in a different raw converter. The capture without AAF looks crisper, but the crispness comes at the expense of jagged blinds. The X-Trans CFA is an uncertain factor, but the artifacts have all the characteristics of aliasing. Fuji's claim that X-Trans does away with the need for an AAF is nonsense, anyway.

Figure 7. Image crops of X-A1 in Lightroom (left), X-E2 in Lightroom (middle), and X-E2 in Photo Ninja [4].

Aliasing is a completely different beast from lens faults, as it arises because the lens outresolves the sensor. Correction of lens aberrations requires knowledge of the point spread function of the lens, whereas correction of aliasing requires knowledge of the subject: are these stripes moiré, or are they the actual motif of the curtains? Automated algorithms have no way of telling aliasing artifacts apart from valid detail, not even in theory. Available tools are limited to a symptomatic treatment of moiré. Algorithms differ between software suites, but desaturation is often included. The improvement can be real, depending on the conditions, but valid detail inevitably suffers in the process. Downsizing an image does not eliminate the problem, as it does with blur, because moiré is usually present at the spatial frequencies of relevance to the smaller image.
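A common symptomatic treatment smooths or desaturates the chroma while leaving the luminance alone, which suppresses color moiré but not the underlying brightness pattern. The MATLAB sketch below illustrates the idea; it is not Lightroom's (undisclosed) algorithm, and the file name and filter size are hypothetical.

```matlab
% Symptomatic color-moire reduction: smooth the chroma channels only.
I = im2double(imread('halftone.tif'));  % hypothetical moire-affected image
Y = rgb2ycbcr(I);                       % separate luma and chroma
Y(:,:,2) = medfilt2(Y(:,:,2), [7 7]);   % median-filter Cb
Y(:,:,3) = medfilt2(Y(:,:,3), [7 7]);   % median-filter Cr
J = ycbcr2rgb(Y);                       % brightness pattern remains
```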

Figures 8A–8C show an example of color moiré and two restoration attempts. The subject is a newspaper photograph, whose microscopic halftone raster gives rise to aliasing in the form of a pattern of green and purple stripes. Use of the moiré brush in Lightroom desaturates the stripes, but there is significant collateral damage as the algorithm also attacks valid colors. Moreover, the stripes remain visible as a pattern of alternating brightness. Neat Image [5] does a much better job at removing the patterning, in this case at the expense of a hazy softness. There is less desaturation compared with Lightroom, but in the absence of a ground truth it is not possible to tell whether the restored colors are accurate.

Figure 8A. Crop of a newspaper reproduction with an Otus 1.4/55 on an A7r. (Image courtesy of 3d-kraft.)

Figure 8B. Treatment of the image of Figure 8A with Lightroom's moiré brush at 50%.

Figure 8C. Treatment of the image of Figure 8A with Neat Image Pro V7.6. (Restoration work by John Michael Leslie.)

Verdict

Signal processing can be used to deal with imperfections in digital imaging. In most of the cases discussed, the image can at least be improved. One should not hesitate to use the tools at one's disposal, and the end result is good if the user is happy. An image free of artifacts is necessarily a bit soft at the pixel level, but a crisp look may be preferred to a faithful one. At the same time, it is clear that genuine correction is often not possible, or only partly so. Or only in theory. Or not even that. A software tool is not a substitute for a good lens and good technique, and cannot replace an anti-alias filter.


© Paul van Walree 2014–2016



References


[1]   MATLAB, http://www.mathworks.com
[2]   Adobe Lightroom, https://lightroom.adobe.com/
[3]   P. J. Tadrous, BiaQIm image processing software, http://www.bialith.com (version 2.9 alpha, 2011).
[4]   Photo Ninja, http://www.picturecode.com/
[5]   Neat Image, http://www.neatimage.com/