
TuLUMIS - a tunable LED-based underwater multispectral imaging system


Abstract

Multispectral imaging (MSI) is widely used in terrestrial applications to help increase the discriminability between objects of interest. While MSI has shown potential for underwater geological and biological surveys, it is thus far rarely applied underwater. This is primarily due to the fact that light propagation in water is subject to wavelength-dependent attenuation, and to the harsh working conditions in the deep ocean. In this paper, a novel underwater MSI system based on a tunable light source is presented, which synchronizes a monochrome still image camera with flashing, pressure neutral color LEDs. Laboratory experiments and field tests were performed. Results from the lab experiments show an improvement of 76.66% in discriminating colors on a checkerboard when using the proposed imaging system instead of an RGB camera. The field tests provided in situ MSI observations of pelagic fauna and showed the first evidence that the system is capable of acquiring useful imagery under real marine conditions.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Multispectral and hyperspectral imaging are important detection methods, which acquire a 3D spatial-spectral data cube containing both spatial and spectral information of a scene. They are widely used in remote sensing for mineral recognition [1], in water quality monitoring [2], in agriculture [3], and in the food industry [4]. Spectral imaging is also a promising method for deep sea optical surveying, where a spectral imager attached to an underwater camera platform (towed structure, remotely operated vehicle - ROV, autonomous underwater vehicle - AUV) captures detailed inherent reflectance spectra of benthic targets for efficient discrimination and classification [5–11].

Optical spectrum acquisition techniques have developed rapidly. Spectrometers measure the spectrum of a single point with high spectral resolution. Full-sampling (scanning) spectral imaging systems provide both spectral and spatial resolution by using a 2D image sensor to sample the spectral-spatial data cube over the temporal domain. Undersampling (snapshot) systems based on compressive sensing theory can modulate, capture, and reconstruct the spectral-spatial data cube from fewer snapshots [12,13] or even from a single shot [14].

For terrestrial applications, the light intensity loss and spectral distortion over a short distance of air (or vacuum) are negligible. In remote sensing (long distances), the atmosphere affects light propagation mainly through aerosol scattering and water vapor absorption. In dry and clear conditions, atmospheric effects can be well compensated by statistical and physics-based algorithms [15].

In contrast, when light passes through water, the absorption is more significant than in air and depends on the wavelength. In addition to the water itself, light attenuation in natural water is also affected by phytoplankton containing different pigments (chlorophylls, carotenoids, and phycobilipigments), suspended particles (living organic particles, detrital organic matter, and inorganic particles), colored dissolved organic matter (CDOM), and bubbles. The variable composition and concentration of oceanic constituents result in temporal and spatial variations of the optical properties of natural water [16].

Moreover, in the deep ocean, durable optical systems with a compact structure and an optically efficient design are preferred, considering harsh working conditions such as unstable platforms, restricted payload and space, absence of sunlight, limited electrical power, and limited computational resources.

In underwater applications, RGB imaging and point-based spectroscopy are widely used, while underwater multispectral and hyperspectral imaging techniques are rarely applied. A modified RGB imaging system [17] has been used in an underwater optical survey to recover real color information in natural water. Reflectance or fluorescence spectra are collected by a wide field-of-view (FOV) spectrometer, which has been proven valuable for studies of coral genera [18–20] and coral health condition [21]. A combination of cameras and spectrometers on an AUV [11] has been proposed recently to map the seafloor with mosaicking averaged spectra in the FOV of the spectrometer along the dive track.

Existing underwater multispectral and hyperspectral imaging systems mainly capture the spatial-spectral data cube through the “push broom” spatial-scanning method and the filter based spectral-scanning method. The “push-broom” method scans one spatial dimension with a narrow slit FOV and spans the spectral dimension using a dispersive optical device (prism or diffraction grating). Images taken by the “push-broom” method feature high spectral resolution, and do not need to be registered along wavelengths, but need to be merged spatially. Underwater hyperspectral imaging (UHI) systems based on the “push-broom” method have been deployed with a diver [22] or on AUVs and ROVs for effective seafloor mapping with both spatial and spectral resolution [5–10]. Pettersen et al. (2014) studied the inverse relationship between absorption spectra and reflected spectra of pigments, and effectively used a UHI system in bio-optical taxonomic identification [23]. Letnes et al. (2017) used a UHI system to monitor and classify cold water corals in different living conditions [24].

The spectral scanning method stacks up a sequence of monochrome images of the scene at different wavelengths specified by narrow-band filters in front of the sensor. In order to scan more wavelengths in less time, motorized filter wheels [25] and solid-state tunable filters (e.g., LCTF and AOTF [26]) have been introduced. Gleason et al. studied coral reefs with a six-band filter-wheel underwater multispectral camera, and concluded that using narrow spectral band ratios combined with image texture measures improves automated coral classification [27]. The alignment of images taken at different wavelengths is necessary, and the number of channels is limited by the switching time and the size of the filter wheel (e.g., more filters require a longer acquisition time and a larger filter wheel). As an alternative to the filter-based approach, a tunable light source (e.g., a color LED array [28,29]) can be used to select the wavelength, as shown, for example, in medical research [30,31]. By using a tunable light source and removing the (optical) filters, the camera system is further simplified and the optical efficiency is improved. However, this technique has not been implemented in underwater applications yet.

In this paper, we present a tunable LED-based underwater multispectral imaging system (TuLUMIS) which synchronizes a monochrome camera with flashing, pressure neutral color LEDs. As a scanning method, it provides the 2D spatial light intensity distribution of the scene in a sequence of eight spectral channels within a short period (RGB cameras provide only three channels, i.e., red, green, and blue). It acquires finer-resolved spectral reflectance than the RGB method, and features a durable and simpler structure as well as lower cost compared to existing underwater multispectral imaging techniques.

The main contribution of this paper is the description of the TuLUMIS system. Additionally, we present a quantitative method to compare the usefulness of different spectral imaging methods. It is based on a normalization criterion and can compare data sets of different dimensionality. It is hence able to discriminate data of different spectral resolution and is thus more generally applicable than methods that rank spectral imaging methods of the same dimensionality [20,32]. We use this discrimination method to compare results obtained by TuLUMIS to RGB imaging and point-based spectroscopic measurements.

The paper describes the principles of spectral imaging and the proposed discrimination criteria with data analysis algorithms in Section 2. The composition and specification of TuLUMIS are presented in Section 3, and the lab experiments are described in Section 4, followed by the presentation of results in Section 5. The field test conducted during a cruise to the Atlantic Ocean is also briefly presented in Section 5. Dimensionality reduction methods for the data analysis and potential improvements of TuLUMIS are discussed in Section 6, with conclusions drawn in Section 7.

2. Principles of spectral imaging

2.1. Acquisition of spectral signatures

According to the underwater imaging model and the Beer-Lambert law [32], the intensity detected by a gray value camera at each pixel can be represented by

I = \int_{\lambda_c} I_s(\lambda) \, e^{-\alpha(\lambda) d_1} \, R(\lambda) \, e^{-\alpha(\lambda) d_2} \, C(\lambda) \, \mathrm{d}\lambda,
where:
  • I = intensity detected by the camera
  • Is(λ) = spectral radiance of the light source
  • R(λ) = spectral reflectance at the surface of the target to be studied
  • C(λ) = spectral response of the camera
  • α(λ) = wavelength-dependent attenuation coefficient of the water medium
  • d1, d2 = the distance from the light source to the target and from the target to the camera, respectively
  • λc = spectral range of the camera
  • dλ = differential of the wavelength λ

An effective spectrum analysis is based on an accurate estimate of the surface spectral reflectance. Therefore it is necessary to recover the reflectance at the surface of the target from the detected radiance at the sensor to remove the effects of illumination and the medium before subsequent analysis.

As a reference, a white board with uniform spectral reflectance across all visible wavelengths is used to correct the spectrally distorted images. For the white board area in the images, the reflectance R(λ) is constant in Eq. (1). The relative reflectance of other pixels can be obtained by dividing their intensities by the intensity of the white pixels in each channel. Subsequently, each spatial pixel is associated with a spectral vector s = [s1, s2, . . ., sn]T of n relative reflectance values from the different channels; the vector s ∈ ℝn is known as the spectral signature at that pixel.
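As an illustration, a minimal Python/NumPy sketch of this white-reference normalization is given below; it is not the authors' implementation, and the array names are hypothetical.

```python
import numpy as np

def spectral_signatures(channel_stack, white_mask):
    """Convert a stack of raw channel images into relative-reflectance signatures.

    channel_stack : float array of shape (n, H, W), one raw image per spectral channel
    white_mask    : boolean array of shape (H, W), True where the white reference board is seen
    Returns an array of shape (H, W, n) holding the spectral signature s of every pixel.
    """
    n, H, W = channel_stack.shape
    # Mean intensity of the white reference in each channel (R(lambda) is constant there)
    white_level = np.array([channel_stack[k][white_mask].mean() for k in range(n)])
    # Divide every pixel by the white level of its channel -> relative reflectance
    reflectance = channel_stack / white_level[:, None, None]
    # Rearrange so that each spatial pixel carries an n-dimensional signature vector
    return np.transpose(reflectance, (1, 2, 0))
```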

2.2. Discrimination measures of spectral signatures

For the analysis of multispectral and hyperspectral signatures, the spectral angle mapper (SAM) and the spectral information divergence (SID) are two discrimination measures that are widely and effectively used [20,33]. In this study, the similarity of the spectral signatures extracted from two different samples is estimated by both SAM and SID.

SAM calculates the spectral angle θ between two signature vectors in the n-dimensional space as

\mathrm{SAM}(\mathbf{s}, \mathbf{r}) = \theta = \arccos\left( \frac{\mathbf{s}^{\mathrm{T}} \mathbf{r}}{\|\mathbf{s}\| \, \|\mathbf{r}\|} \right), \qquad \mathbf{s}, \mathbf{r} \in \mathbb{R}^{n},
where s = [s1, s2, . . ., sn]T and r = [r1, r2, . . ., rn]T are the spectral signatures (ℝn×1 vectors) of the two samples.
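For reference, a minimal sketch of the SAM computation (assuming NumPy; not taken from the authors' code):

```python
import numpy as np

def sam(s, r):
    """Spectral angle mapper (Eq. (2)): angle between two signature vectors, in radians."""
    s = np.asarray(s, dtype=float)
    r = np.asarray(r, dtype=float)
    cos_theta = np.dot(s, r) / (np.linalg.norm(s) * np.linalg.norm(r))
    # Clip to avoid NaN from rounding errors slightly outside [-1, 1]
    return np.arccos(np.clip(cos_theta, -1.0, 1.0))
```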

SID is a measure based on information theory. It treats each spectral signature as an information source described by a probability vector. Given a spectral pixel vector s = [s1, s2, . . ., sn]T, the probability vector p = [p1, p2, . . ., pn]T is defined as

p_i = \frac{|s_i|}{\sum_{j=1}^{n} |s_j|},

so that p describes the spectral variability of the pixel vector s. SID is then derived from the Kullback-Leibler information measure (or cross entropy) as

\mathrm{SID}(\mathbf{s}, \mathbf{r}) = D(\mathbf{s} \,\|\, \mathbf{r}) + D(\mathbf{r} \,\|\, \mathbf{s})
D(\mathbf{s} \,\|\, \mathbf{r}) = \sum_{i=1}^{n} p_i \log(p_i / q_i)
D(\mathbf{r} \,\|\, \mathbf{s}) = \sum_{i=1}^{n} q_i \log(q_i / p_i)

where p = [p1, p2, . . ., pn]T and q = [q1, q2, . . ., qn]T are the probability vectors of the spectral signatures s and r, respectively.
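Analogously, a minimal SID sketch under the same assumptions; the small epsilon used here to guard against zero-valued channels is an implementation detail not specified by Eqs. (3)-(6).

```python
import numpy as np

def sid(s, r, eps=1e-12):
    """Spectral information divergence (Eqs. (3)-(6)) between two signature vectors."""
    p = np.abs(s) / np.sum(np.abs(s))   # probability vector of s, Eq. (3)
    q = np.abs(r) / np.sum(np.abs(r))   # probability vector of r
    p = np.clip(p, eps, None)           # avoid log(0) for zero-valued channels
    q = np.clip(q, eps, None)
    d_sr = np.sum(p * np.log(p / q))    # D(s||r)
    d_rs = np.sum(q * np.log(q / p))    # D(r||s)
    return d_sr + d_rs
```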

2.3. Comparison of spectral dissimilarity among different dimensions

To quantify the ability of TuLUMIS to discriminate spectrally different objects, a comparison with respect to SAM and SID is made among three methods: images taken by a general RGB color camera illuminated by a white light source (RGB), images taken by the TuLUMIS multispectral camera (MS), and spectral reflectance measured by a spectrometer with 709 channels covering the visible spectrum from 400 nm to 700 nm (SP).

Since the direct comparison of SAM and SID among feature spaces of different dimensions (s ∈ ℝ3 for the RGB method, s ∈ ℝ8 for the MS method, and s ∈ ℝ709 for the SP method) is not appropriate, principal component analysis (PCA) [34] is used to convert the spectral signatures from ℝ8 and ℝ709 into ℝ3 before applying SAM or SID. The dimensionality reduction is discussed in detail in Section 6.1.
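A minimal sketch of this projection step, assuming scikit-learn and the per-sample signatures stacked as matrix rows (function and variable names are illustrative):

```python
import numpy as np
from sklearn.decomposition import PCA

def reduce_to_3d(signatures):
    """Project spectral signatures (rows of shape (N, n), with n = 8 or 709)
    onto their first three principal components, so that SAM/SID values can be
    compared across methods of different spectral dimensionality."""
    pca = PCA(n_components=3)
    return pca.fit_transform(np.asarray(signatures, dtype=float))
```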

Fig. 1 exemplarily shows the comparison between two objects. Suppose m1 samples [C11, C12, . . ., C1m1] are taken from object C1 and m2 samples [C21, C22, . . ., C2m2] are taken from object C2. For each of these m1 + m2 samples, all three spectral imaging methods are applied, and the per-sample spectral signatures, denoted c11, c12, . . ., c1m1 and c21, c22, . . ., c2m2, are obtained by averaging the signatures of all pixels forming the sample. A matrix as in Fig. 1 can then be built to show the similarity between each pair of samples using the SAM and SID measures introduced in Eq. (2) and Eq. (4), respectively.

Fig. 1 Demonstration of the spectral dissimilarity criterion. C11 through C13 are different samples on object C1, C21 through C23 are different samples on object C2. The spectral signature of each sample is the averaged signatures of all pixels forming the sample. The 9-by-9 matrix on the right can be divided into sub-matrices, where sub-matrix C1 and sub-matrix C2 are within-class similarity measures of object C1 and C2, and the elements in sub-matrix M are between-class similarity measure of samples in object C1 and C2. Elements on the main diagonal represent similarity measures of samples with themselves. The dissimilarity of the two objects C1 and C2 depends on both within-class and between-class measures.

Ideally, each object is characterized by a unique deterministic spectrum, i.e., all samples on the same object have the same signature. However, in practice this assumption of an exact spectral signature does not always hold due to mixed-pixel interference and inherent spectral variability (e.g., inhomogeneous surface composition, varying medium conditions, and sensor noise) [35]. In Fig. 1, the 9-by-9 matrix can be divided into sub-matrices:

C1_{ij}^{(\Gamma)} = \Gamma(c_{1i}, c_{1j}), \quad i, j = 1, 2, \ldots, m_1
C2_{ij}^{(\Gamma)} = \Gamma(c_{2i}, c_{2j}), \quad i, j = 1, 2, \ldots, m_2
M_{ij}^{(\Gamma)} = \Gamma(c_{1i}, c_{2j}), \quad i = 1, 2, \ldots, m_1; \; j = 1, 2, \ldots, m_2
where Γ denotes the SAM or SID measure, the matrices C1 and C2 are within-class measures of similarity, and the matrix M is the between-class measure of similarity.

In order to find the dissimilarity between spectral signatures of two different objects, each measure of a pair of between-class samples (i.e. Mij) is divided by the average of all corresponding within-class measures (i.e. C1ij and C2ij):

\mu^{(\Gamma)}(C1, C2) = \frac{\sum_{i=1}^{m_1} \sum_{j=1, j \neq i}^{m_1} C1_{ij} + \sum_{i=1}^{m_2} \sum_{j=1, j \neq i}^{m_2} C2_{ij}}{m_1 (m_1 - 1) + m_2 (m_2 - 1)}
\Delta_{ij}^{(\Gamma)} = \frac{M_{ij}}{\mu^{(\Gamma)}(C1, C2)}

Hence Δij(SAM) and Δij(SID) denote the normalized between-class SAM and SID dissimilarities, respectively. The resulting Δij(Γ) can then be compared among the three spectral imaging methods RGB, MS, and SP, which allows their ability to discriminate between two objects to be evaluated; a smaller Δ(Γ) indicates higher similarity.
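Putting Eqs. (7)-(11) together, a minimal sketch of the normalized between-class dissimilarity is given below (it reuses the sam/sid helpers sketched above and is not the authors' implementation):

```python
import numpy as np

def normalized_dissimilarity(sig1, sig2, gamma):
    """Normalized between-class dissimilarity Delta (Eqs. (7)-(11)).

    sig1, sig2 : lists of per-sample signature vectors of objects C1 and C2
    gamma      : measure function, e.g. sam or sid from the sketches above
    Returns the m1-by-m2 matrix Delta_ij = M_ij / mu.
    """
    # Within-class measures C1_ij and C2_ij, excluding the main diagonal (i == j)
    within = [gamma(a, b)
              for sig in (sig1, sig2)
              for i, a in enumerate(sig)
              for j, b in enumerate(sig) if i != j]
    mu = np.mean(within)                                   # Eq. (10)
    # Between-class measures M_ij, normalized by mu          Eq. (11)
    return np.array([[gamma(a, b) / mu for b in sig2] for a in sig1])
```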

3. Setup of TuLUMIS

3.1. Hardware description

As shown in Fig. 2, TuLUMIS consists of a 12-bit industrial monochrome camera (acA2040-25gmNIR, Basler, Germany) with an Apo-Xenoplan 2.0/24 lens (Schneider, Germany), custom color LEDs (LUXEON Z, Lumileds, Netherlands) with drivers (CAM-V2, PCB Components, Germany), and peripheral components for power supply (TEN 60-2412 DC/DC Converters, Traco Power, Switzerland), synchronization (Arduino Micro, Italy), control (NUC6i5SYB, Intel, USA), data storage (SSD 850 EVO 1TB, Samsung, Korea), and power/signal transmission through underwater cables (SubConn Micro Circular, MacArtney, Denmark).

Fig. 2 Setup of TuLUMIS. (a) The scheme of the system comprising LEDs, a camera, and components for power supply, synchronization, control, data storage, and power/signal transmission. The Arduino micro board is programmed to synchronize the flash of LEDs and the acquisition time of the camera. (b) The components of TuLUMIS, with close-ups of the internal structure of the water tight housing for the camera and control circuits, and a cast LED.

In order to deploy TuLUMIS in the deep ocean, a titanium housing (Develogic, Germany) is used to protect the camera and the control system from water and high pressure. On one end of the housing, a flat sapphire glass port is assembled as the window for the camera.

3.2. Synchronization strategy and software

An Arduino Micro board is programmed to send external trigger signals that synchronize the flashing of the LEDs with the acquisition of the camera. Several integrated circuit components (hex Schmitt-trigger inverters SN74HCT14 and 3-line to 8-line decoders SN74HCT138, Texas Instruments, USA) are used to achieve fast switching among the LEDs. A C++ program based on the camera's SDK (Software Development Kit) runs on the camera control computer (NUC, Intel, USA) under Ubuntu 16.04 and is used to adjust the image acquisition parameters and to store the data. All images are stored as 16-bit uncompressed TIFF files. For preprocessing and displaying the multispectral images, a separate program was written in C++ using the OpenCV library (version 3.3.0) [36] and Qt (version 5.9).

3.3. Pressure-neutral LEDs

TuLUMIS has sixteen colored LEDs that illuminate a target at eight respective wavelengths covering the visible spectrum from 400 nm to 700 nm. As shown in Fig. 2, the LEDs are mounted on a metal printed circuit board (PCB) and are cast in highly transparent polyurethane [37]. The polyurethane forms thin walls that transfer heat efficiently to the surrounding water. A reflector is also fitted to each LED. The LED itself, as a solid semiconductor, is exposed to the water-depth dependent pressure. The pressure-neutral cast LEDs, combined with likewise pressure-neutral cast drivers, form low-cost, lightweight, and corrosion-resistant light source units rated to 6000 m depth.

The nominal central wavelengths of the LEDs are 405 nm, 450 nm, 500 nm, 530 nm, 565 nm, 590 nm, 615 nm, and 660 nm, as tested at 500 mA and 25°C by the manufacturer [38]. The relative radiance of each LED driven at 700 mA was measured using a spectrometer (FLAME-S, Ocean Optics, USA) with a cosine corrector (CC-3-UV, Ocean Optics, USA). The relative radiances of the LEDs and the spectral response curve of the camera (provided by the manufacturer) are plotted in Fig. 3. The relative energy distribution of the light source was calculated by integrating each LED's spectral radiance with the camera's response. The spectral coverage of each LED is indicated by the full width at half maximum (FWHM) of the corresponding LED's spectral radiance.
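As an illustration of how the squares in Fig. 3(b) are obtained, a minimal NumPy sketch of this integration and FWHM estimate (variable names are hypothetical):

```python
import numpy as np

def effective_band(wavelengths, led_radiance, camera_response):
    """Relative energy of one LED channel as seen by the camera, plus its FWHM.

    wavelengths     : array of sample wavelengths in nm (e.g., from the spectrometer)
    led_radiance    : relative spectral radiance of the LED on that grid
    camera_response : relative spectral response of the camera on the same grid
    """
    # Height of the square in Fig. 3(b): integral of LED radiance times camera response
    energy = np.trapz(led_radiance * camera_response, wavelengths)
    # Width of the square in Fig. 3(b): full width at half maximum of the LED spectrum
    half_max = led_radiance.max() / 2.0
    above = wavelengths[led_radiance >= half_max]
    fwhm = above.max() - above.min()
    return energy, fwhm
```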

Fig. 3 The relative radiances of the eight LEDs at 700 mA are measured by a spectrometer, and the spectra are plotted as colored curves in (a), while the nominal central wavelengths of the LEDs tested by the manufacturer at 500 mA and 25°C are listed in the legend. The thin black curve in (a) is the spectral response of the camera. The height of each square in (b) is the integration of each LED’s spectral radiance and the camera’s response, and the width of each square shows the full width at half maximum (FWHM) of the corresponding LED’s spectral radiance.

4. Lab experiments

Intuitively, detecting an object with more and narrower spectral channels yields finer spectral information and consequently improves the discrimination ability. In order to derive a quantitative comparison between the RGB method (three channels), the MS method (eight channels), and the SP method (709 channels) with respect to the normalized between-class SAM and SID dissimilarities defined in Section 2.3, lab experiments were conducted under controlled environmental conditions. In the experiments, a custom checkerboard with Macbeth colors, illustrated in Fig. 5, was used as the common target for the three methods.

Fig. 4 Setup of the lab experiment.

For the RGB method, an RGB camera (ILCE-7SM2, Sony, Japan) was used with a white LED light source (BXRA-56C5300-H, Bridgelux, USA). The white LED is manufactured by covering a blue LED with a conversion layer; its spectrum is therefore not as even as sunlight. Such LEDs are widely used underwater as artificial light sources because of their high energy efficiency and compactness. A water tank filled with tap water was used to conduct the experiments. As shown in Fig. 4, the distance from the light source to the target and from the target to the camera is 1 m. It is necessary to transform raw camera RGB colors to a device-independent space because the color appearance underwater varies with different devices [39]. The DCRaw software written by David Coffin was used to convert "ARW" format raw data files in Bayer mosaic pattern to "tiff" format three-channel color images for subsequent processing [40]. For this conversion, the settings "-v -w -o 0 -4 -T" were used in the command to output 16-bit "tiff" files with the original camera white balance and no other modification (i.e., no color space designation, no gamma correction, and no automatic brightening).

Fig. 5 Preprocessing of the raw image of the checker board taken by the RGB camera. (a) shows the original image with white and black frames marking selected color and white areas respectively. (b) shows the non-uniform illuminance background calculated by third order polynomial fitting of the selected white area. Contours are augmented to highlight the intensity change of the illuminance from bright (the center) to dim (the edge). (c) shows the image corrected by dividing (a) by (b) in all channels. The indices are assigned according to the hue of the colors. (d) shows the selected color samples with black and white frames marking selected area. (e) shows the colors extracted from the corrected image by taking the average of all pixels in the marked area of each color block. All colors are transformed according to SonyA7SM2-Generic’s ICC profile for visualization.

For the MS method, TuLUMIS has been used in the same water tank. For the SP method, a spectrometer (FLAME-S, Ocean Optics, USA) with a Y-shaped fiber-optic probe has been used in the tank, with the distance between the probe and the board surface fixed at 3 mm.

The spectral signatures for the RGB method and the MS method are extracted through the preprocessing shown in Fig. 5. First, the raw images are cropped to the area of interest which is the color checkerboard area as shown in Fig. 5(a).

Then the cropped images are segmented to create a mask of all color blocks and a mask of all white blocks as shown in Fig. 5(a). Each selected square is cropped to 80% of the corresponding color block width (64% of the color block area).

After that, the white background (i.e., the spatial illuminance distribution) of the image in each individual channel is estimated by third order polynomial fitting of the pixels in the white blocks, which are selected from the image using the aforementioned white mask.

The color blocks are then divided by the estimated white background illumination shown in Fig. 5(b). This alleviates the effect of a non-uniform illuminance distribution and allows the relative reflectance shown in Fig. 5(c) to be calculated.

In each selected color block, ten samples are collected on the diagonal shown in Fig. 5(d). The spectral reflectance of each sample is the averaged spectral reflectance of all pixels in the sampled area. Each color is assigned an index (arranged according to their hue). In Fig. 5(e), a color bar is created as a reference, where each color is calculated by averaging all pixels in the corresponding ten collected sample areas. All colors are transformed according to the ICC profile of SonyA7SM2-Generic only for visualization purposes.
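A minimal sketch of the background-fitting and flat-fielding steps of Fig. 5(a)-(c), using NumPy and a third-order 2D polynomial; the function name is illustrative and this is not the authors' implementation.

```python
import numpy as np

def flatten_illumination(channel, white_mask):
    """Estimate the illuminance background of one channel by a 3rd-order 2D polynomial
    fitted to the white-block pixels, and divide the image by it (cf. Fig. 5(a)-(c))."""
    H, W = channel.shape
    yy, xx = np.mgrid[0:H, 0:W]
    x, y = xx / W, yy / H                                    # normalized coordinates
    # Design matrix with all monomials x^i * y^j of total order <= 3
    terms = [x**i * y**j for i in range(4) for j in range(4) if i + j <= 3]
    A = np.stack([t[white_mask] for t in terms], axis=1)
    coeffs, *_ = np.linalg.lstsq(A, channel[white_mask], rcond=None)
    background = sum(c * t for c, t in zip(coeffs, terms))   # estimated illuminance, Fig. 5(b)
    return channel / background                              # relative reflectance, Fig. 5(c)
```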

5. Experimental results

5.1. Results of lab experiments

The discrimination abilities of the RGB method, the MS method, and the SP method are evaluated based on the spectral data collected in the experiments described above. The spectral acquisition capabilities of the three techniques are illustrated by overlaying their measured spectral reflectance curves of all 33 color panels on the checkerboard. As plotted in Fig. 6, the RGB method uses three values to roughly estimate the reflectance spectrum, while the SP method measures the detailed spectrum in 709 spectral channels. The MS method with eight channels acquires a finer-resolved spectrum than the RGB method and provides spectral signatures comparable to those of the SP method.

Fig. 6 Relative spectral reflectance of the 33 color panels on the checkerboard measured by the RGB method (dashed lines), the MS method (solid lines), and the SP method (dotted lines). The wavelength range of each sub-figure covers the visible spectrum from 400 nm to 700 nm, and relative reflectances range from 0 to 1, with axis ticks shown in the legend in the lower left corner. Order of the sub-figures and color of the curves are consistent with Fig. 5(e). Compared to the RGB method, the MS method with eight channels acquires finer-resolved spectral information. The spectral resolution of MS is lower than that of the SP method, but more than four million spectral measurements can be conducted in parallel (for all the pixels vs. one point measurement).

In total, 330 samples (33 color blocks, 10 samples per block) were collected from the color checkerboard. A 330-by-330 symmetric matrix was built to show the dissimilarity between each pair of samples (shown in Fig. 7), where 330 elements lie on the main diagonal, 3,300 elements (including the diagonal) represent within-class measures, and the remaining 105,600 (= 330 × 330 − 3,300) elements are between-class measures. Because of symmetry, only the 52,800 elements in the upper triangular part are considered.

Fig. 7 Results of the similarity measures SAM (first row) and SID (second row) for the RGB method (first column), the MS method (second column), and the SP method (third column). Each matrix is a 330-by-330 matrix (10 samples per color) and each element represents the similarity measure between the corresponding two samples. The color bars are in logarithmic scale. Detailed comparisons are shown in Fig. 8 and Fig. 9.

The first row in Fig. 7 shows the results of SAM for the RGB method, the MS method and the SP method calculated by Eq. (2). In general, the results of the SP method feature the smallest within-class dissimilarity and the largest between-class dissimilarity, and the results of the RGB method feature the smallest between-class dissimilarity. The results of the MS method fall between the two other methods. For color blocks No. 25 to No. 33, which are different shades of gray, the samples can be distinguished more easily by using the MS method over the use of the RGB method; the SP method has the best discriminative ability among the three methods.

The second row in Fig. 7 shows the results of SID of the three methods calculated by Eq. (4). They are consistent with the results of SAM; the results of the SP method feature the smallest within-class dissimilarity and the largest between-class dissimilarity, but the results of the RGB method and the MS method are more complex. For color blocks No. 25 to No. 33, the same conclusion can be drawn as for the SAM results.

From the SAM results, the matrix of the normalized between-class SAM, or Δ(SAM), can be calculated using Eq. (11). The matrix in Fig. 8(a) shows the difference of Δ(SAM) of the RGB method subtracted from Δ(SAM) of the MS method. It can be seen that for most pairs of samples, Δ(SAM) is increased. Figure 8(b) shows that for blue color blocks, the spectral discrimination ability using TuLUMIS is not always better than that of the RGB camera. The matrix in Fig. 8(c) shows the difference of Δ(SAM) of the RGB method subtracted from Δ(SAM) of the SP method. It can be seen that for almost all pairs of samples, Δ(SAM) is increased. The color composition of the histogram shown in Fig. 8(d) indicates that, by using the SP method, the discrimination abilities for all color pairs on the checkerboard are comparable. Histograms of the corresponding matrices are shown in Fig. 8(b) and Fig. 8(d). 40,479 out of 52,800 between-class elements (76.66%) are increased by using the MS method compared to the RGB method, and 52,794 out of 52,800 between-class elements (99.99%) are increased by using the SP method compared to the RGB method.

Fig. 8 Difference matrices of the spectral dissimilarity (or normalized between-class SAM) Δ(SAM) (a) between the MS method and the RGB method, and (c) between the SP method and the RGB method. The histograms (b) and (d) show the number of elements counted in the corresponding difference matrix (a) and (c), respectively. 76.66% of the between-class elements are increased by using the MS method over the use of the RGB method, and 99.99% of the between-class elements are increased by using the SP method over the use of the RGB method. The bars in the histograms are stacked by the colors of the related samples on the checkerboard.

From the SID results, the matrix of the normalized between-class SID, or Δ(SID), can also be calculated using Eq. (11). The matrix in Fig. 9(a) shows the difference of Δ(SID) of the RGB method subtracted from Δ(SID) of the MS method. For most pairs of samples, Δ(SID) is increased. Decreased elements appear in the measures between most of the color blocks and the different shades of gray blocks, between red blocks and green blocks, and between blue blocks and green blocks. Figure 9(b) also shows that for these color pairs, the differences Δ(SID,MS) − Δ(SID,RGB) fall in the bins close to zero. The matrix in Fig. 9(c) shows the difference of Δ(SID) of the RGB method subtracted from Δ(SID) of the SP method. It can be seen that for almost all pairs of samples, Δ(SID) is increased, which can also be seen in Fig. 9(d). Histograms of the corresponding matrices are shown in Fig. 9(b) and Fig. 9(d). 36,030 out of 52,800 between-class elements (68.24%) are increased by using the MS method compared to the RGB method, and 52,674 out of 52,800 between-class elements (99.76%) are increased by using the SP method compared to the RGB method.

Fig. 9 Difference matrices of the spectral dissimilarity (or normalized between-class SID) Δ(SID) (a) between the MS method and the RGB method, and (c) between the SP method and the RGB method. The histograms (b) and (d) show the number of elements counted in the corresponding difference matrix (a) and (c), respectively. 68.24% of the between-class elements are increased by using the MS method over the use of the RGB method, and 99.76% of the between-class elements are increased by using the SP method over the use of the RGB method. The bars in the histograms are stacked by the colors of the related samples on the checkerboard.

5.2. Field test of the first generation prototype

A field test of the first generation prototype of TuLUMIS was conducted during an oceanographic cruise (MSM61) on the German research vessel Maria S. Merian in waters around the Republic of Cape Verde [41]. TuLUMIS was mounted on the frame of the towed pelagic in situ observation system PELAGIOS (Hoving et al., in prep.), as shown in Fig. 10(a), for in situ observations of pelagic fauna.

Fig. 10 Setups of TuLUMIS during cruise MSM 61. (a) The PELAGIOS frame on which TuLUMIS was carried. (b) A schematic illustration of the LED array layout and camera focal length setting. A white board was used as a reference to balance the radiance of each LED and to calibrate the effect of wavelength-dependent light attenuation.

Figure 11 shows a sergestid shrimp in spectral stacks and their fusion into a pseudo-color image, after an affine transform based on manually selected feature points was used to align the images. It demonstrates that the proposed TuLUMIS can technically be deployed in the deep sea. Nine TuLUMIS deployments (30 minutes each) were conducted between 75 and 100 meters depth at night (no sunlight) with different multispectral camera settings, LED array layouts, and towing depths. The experience gained from the field test is discussed in detail from a technical perspective in Section 6.2.
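A minimal sketch of such an alignment and fusion step, assuming OpenCV and three manually picked point pairs per channel; the point selection, channel weights, and function names are illustrative assumptions, not the processing chain used on the cruise data.

```python
import cv2
import numpy as np

def align_and_fuse(channels, ref_points, chan_points, rgb_weights):
    """Align spectral channels to a reference channel with an affine transform estimated
    from manually selected feature points, then fuse them into a pseudo-color image.

    channels    : list of n monochrome images (float32, same size)
    ref_points  : (3, 2) float32 array of feature points in the reference channel
    chan_points : list of n (3, 2) float32 arrays of the same features in each channel
    rgb_weights : (n, 3) array mapping the n spectral channels to pseudo R, G, B
    """
    h, w = channels[0].shape
    aligned = []
    for img, pts in zip(channels, chan_points):
        M = cv2.getAffineTransform(np.float32(pts), np.float32(ref_points))
        aligned.append(cv2.warpAffine(img, M, (w, h)))
    stack = np.stack(aligned, axis=-1)                   # (h, w, n) registered cube
    pseudo = stack @ np.asarray(rgb_weights, float)      # (h, w, 3) pseudo-color image
    return cv2.normalize(pseudo, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
```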

Fig. 11 A sergestid shrimp observed during the cruise MSM61 by the TuLUMIS. The monochrome images in eight spectral channels are shown on the left, with a fused pseudo-color image on the right.

6. Discussion

6.1. Dimensionality reduction

In the evaluation of spectral similarity, SAM and SID described in Section 2.2 are widely used. A larger SAM means a larger dissimilarity between two spectral vectors. However, SAM can be systematically impacted by changing the dimension of the vector space. Similarly, a larger SID indicates a larger dissimilarity between two spectral information sources but the increase of dimensionality of the signal changes the entire probability distribution. Therefore it is not reasonable to directly compare SAM or SID dissimilarities that are calculated based on spectral signatures with different dimensions.

In the field of pattern recognition, dimensionality is usually reduced by using principal component analysis (PCA) or linear discriminant analysis (LDA) based on Fisher's discriminant [34]. PCA finds the components with the largest variance, while LDA optimizes the ratio of between-class to within-class variance. PCA is unsupervised, whereas LDA is supervised and requires prior knowledge of the classes of the targets. However, the classes of underwater targets are usually unknown, so a training process before dimensionality reduction is not feasible in practice.

In case information on the targets is available, the discrimination ability of the MS method can be further improved. This is evident from Fig. 12, where LDA was used for dimensionality reduction instead of PCA. Compared with Fig. 8 and Fig. 9, the fraction of increased between-class elements in the difference matrix of the MS method rises from 76.66% (with PCA) to 95.82% (with LDA) for Δ(SAM), and from 68.24% (with PCA) to 78.06% (with LDA) for Δ(SID).
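A minimal sketch of this supervised alternative to the PCA step of Section 2.3, assuming scikit-learn; it is applicable only when class labels such as the color-block indices are known.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def reduce_with_lda(signatures, labels, n_components=3):
    """Project spectral signatures (rows of shape (N, n)) onto the n_components
    directions that maximize the ratio of between-class to within-class variance.
    Requires the class label (here: color-block index) of every sample."""
    X = np.asarray(signatures, dtype=float)
    lda = LinearDiscriminantAnalysis(n_components=n_components)
    return lda.fit_transform(X, np.asarray(labels))
```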

Fig. 12 Difference matrices of (a) normalized between-class SAM (i.e. dissimilarity Δ(SAM)), and (c) normalized between-class SID (i.e. dissimilarity Δ(SID)) between the MS method and the RGB method after dimensionality reduction using LDA. The histograms (b) and (d) show the number of elements counted in the corresponding difference matrix (a) and (c), respectively. In the case of using LDA, 95.82% of the between-class elements are increased by using the MS method with SAM over the use of the RGB method, and 78.06% of the between-class elements are increased by using the MS method with SID over the use of the RGB method. The bars in the histograms are stacked by the colors of the related samples on the checkerboard.

6.2. Potential improvements

During the lab experiments and field tests, the first generation prototype of TuLUMIS operated effectively and the results have verified the expected merits:

  • The pressure neutral LED light source with neither pressure housing nor extra mechanical parts has the benefit of reduced bulk and complexity of the system
  • Light reflected from the targets is detected by the camera directly, without passing through any filter, which increases the optical efficiency
  • The intensity of each LED can be adjusted separately to compensate for the water attenuation in different water conditions
  • The combination of pressure-neutral LEDs and an off-the-shelf gray camera is cost-effective compared to specialized multispectral cameras
As a novel underwater multispectral imaging system, TuLUMIS can be further improved by considering the following aspects.

According to the results presented in Section 5, the spectral discrimination ability of TuLUMIS is not better than that of the RGB camera for blue and green colors. The reason for this could be the significant overlap of the LED light spectra covering 480 nm – 600 nm, as shown in Fig. 3. This results from the so-called “green-yellow gap” in LED technology. Blue, cyan, and green LEDs typically use InGaN as semiconductor, while yellow and red LEDs are AlInGaP based. Generally, red and blue LEDs are radiatively efficient (with high wall-plug efficiency), but green and yellow LEDs are not. The gap in efficiency can be filled with a green phosphor-converted LED (pumped by a blue LED and converted to green), such as the one used in TuLUMIS with a central wavelength of 565 nm. As shown in Fig. 3, the efficiency of the converted green LED is welcome, but its large FWHM is counterproductive in this case. A more even combination of LED spectra and reduced spectral overlap could further improve the spectral discrimination ability of TuLUMIS. In addition, optical filters based on a Fabry-Pérot interferometer design could provide sharp separation between spectrally adjacent LEDs, but at the cost of energy loss.

For real-world studies, compensation of the effects of the light source and the water attenuation needs to be considered. In this study, experiments were only conducted in a water tank filled with clear tap water; the distance between the light source and the target, and the distance between the target and the camera, were only 1 m. Under such conditions, spectral distortion can be corrected by dividing by a white reference, without taking advantage of the compensation feature of the tunable light source. At greater distances, tuning the LEDs to counteract the wavelength-dependent attenuation will be an asset of TuLUMIS. In addition, the temporal variability of water conditions should be taken into account in practical scenarios, which could also be addressed with TuLUMIS.

Correction of the heterogeneity of the light source is worth further study. In the lab experiment, only one LED was turned on at a time, thus the uneven spatial distribution of the illuminance can be captured by low order polynomial fitting. However, in practice, where an array of LEDs flashes at the same time, the optical field could be complex and thus could have a more severe effect on the construction of the spectral signatures.

In general, TuLUMIS with its spectral-scanning approach performs better when imaging static scenes rather than moving objects. As shown in Fig. 11, the images taken during the field test lack brightness, and the alignment of images taken at different wavelengths was difficult due to the changing target (distance, movement of the fauna). Different aperture sizes and acquisition times were evaluated to balance depth of field, brightness, and sharpness. The PELAGIOS was towed at a speed of 0.5 knots (approximately 0.25 m/s), and the acquisition time of the camera was 35 ms with a pixel size of 5.5 μm and a focal length of 24.5 mm. By enhancing the light intensity, the aperture can be reduced to achieve a larger depth of field, and the acquisition time of the camera can be shortened to reduce the blur caused by the relative movement between the camera and the targets. Deploying TuLUMIS for midwater imaging requires more tuning of the imaging parameters in the future. We will further improve the system by in-situ and ex-situ imaging of sessile benthic fauna.
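As a rough plausibility check of these numbers, the motion blur per exposure can be estimated with the back-of-the-envelope sketch below; the 1 m object distance is an assumed value for illustration only and is not stated in the field-test description.

```python
# Back-of-the-envelope motion blur estimate for the field-test settings
# (towing speed, exposure, pixel size and focal length as given in Section 6.2;
#  the 1 m object distance is an assumed value for illustration only).
speed = 0.25            # towing speed in m/s (~0.5 knots)
exposure = 0.035        # camera acquisition time in s
focal_length = 24.5e-3  # m
pixel_size = 5.5e-6     # m
object_distance = 1.0   # m, assumed

object_shift = speed * exposure                                # scene movement during one exposure (~8.8 mm)
image_shift = object_shift * focal_length / object_distance    # displacement on the sensor
blur_pixels = image_shift / pixel_size
print(f"motion blur ~ {blur_pixels:.0f} px per exposure")      # roughly 39 px at the assumed 1 m distance
```

Under the assumed 1 m working distance this amounts to several tens of pixels of blur per exposure, which is consistent with the alignment difficulties reported above.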

7. Conclusion

In this paper, an underwater multispectral imaging system based on a tunable light source using pressure-neutral color LEDs is presented. The tunable LEDs bring flexibility to the spectral energy distribution of the light source. The combination of pressure-neutral LEDs and an off-the-shelf gray camera reduces the complexity and cost of the system. Spectral dissimilarities based on both the SAM and SID measures are used to quantify and compare the spectral discrimination abilities of traditional RGB imaging, the proposed MS imaging, and point-based hyperspectral (SP) measurements. Results of the lab experiments show that, for different color blocks, the MS method with eight channels can distinguish 76.66% of the color pairs more easily than the common RGB method, while almost all color pairs can be distinguished more effectively by using the SP method.

In future studies we will apply TuLUMIS to spectral imaging of fauna in aquaria under controllable conditions. The LEDs will be tuned individually to compensate for the wavelength-dependent attenuation of the light in natural water conditions at different distances, thus constructing more accurate spectral signatures. We will investigate how different minerals, corals and sediments can be better discriminated in-situ with our multispectral approach.

Funding

China Scholarship Council (201606320111); German Research Foundation (DFG) Cluster of Excellence FUTURE OCEAN (CP1626); National High-tech R&D Program of China (863 Program) (2014AA093400).

Acknowledgments

It is publication No. 38 of the DeepSea Monitoring group of GEOMAR. The authors express their sincere gratitude to the staff of the GEOMAR Technology and Logistics Centre for their generous technical support, especially to Thorsten Schott, Matthias Wieck, Bjoern Schäfer, and Sidney Michalak. We would also like to thank Yilu Guo (Zhejiang University) and all the colleagues at the DeepSea Monitoring group of GEOMAR, especially to Jochen Mohrmann and Yifan Song, for joining discussions and providing comments, and all members on the cruise MSM61 for their generous help. Additionally, Hongbo Liu would like to thank Yufei Jin for her appreciation, care and company.

References and links

1. T. A. Carrino, A. P. Crósta, C. L. B. Toledo, and A. M. Silva, “Hyperspectral remote sensing applied to mineral exploration in Southern Peru: A multiple data integration approach in the chapi chiara gold prospect,” Int. J. Appl. Earth Obs. 64, 287–300 (2018). [CrossRef]  

2. H. Pu, D. Liu, J.-H. Qu, and D.-W. Sun, “Applications of imaging spectrometry in inland water quality monitoring-a review of recent developments,” Water, Air, & Soil Pollution 228, 131 (2017). [CrossRef]  

3. V. Leemans, G. Marlier, M.-F. Destain, B. Dumont, and B. Mercatoris, “Estimation of leaf nitrogen concentration on winter wheat by multispectral imaging,” Proc. SPIE 10213, 102130I (2017). [CrossRef]  

4. A. I. Ropodi, E. Z. Panagou, and G.-J. E. Nychas, “Multispectral imaging (MSI): A promising method for the detection of minced beef adulteration with horsemeat,” Food Control 73, 57–63 (2017). [CrossRef]  

5. G. Johnsen, Z. Volent, E. Sakshaug, F. Sigernes, and L. H. Pettersson, Remote sensing in the Barents Sea (Tapir Academic, 2009), Chap. 6.

6. G. Johnsen, Z. Volent, H. Dierssen, R. Pettersen, M. Van Ardelan, F. Søreide, P. Fearns, M. Ludvigsen, and M. Moline, “Underwater hyperspectral imagery to create biogeochemical maps of seafloor properties,” in Subsea Optics and Imaging, J. Watson and O. Zielinski, eds. (Woodhead, 2013). [CrossRef]  

7. J. Tegdan, S. Ekehaug, I. M. Hansen, L. M. S. Aas, K. J. Steen, R. Pettersen, F. Beuchel, and L. Camus, “Underwater hyperspectral imaging for environmental mapping and monitoring of seabed habitats,” in Proceedings of IEEE/MTS OCEANS’15 (IEEE, 2015), pp. 1–6.

8. G. Johnsen, M. Ludvigsen, A. Sørensen, and L. M. S. Aas, “The use of underwater hyperspectral imaging deployed on remotely operated vehicles - methods and applications,” IFAC-PapersOnLine 49, 476–481 (2016). [CrossRef]  

9. A. A. Mogstad and G. Johnsen, “Spectral characteristics of coralline algae: a multi-instrumental approach, with emphasis on underwater hyperspectral imaging,” Appl. Opt. 56, 9957–9975 (2017). [CrossRef]  

10. Ø. Sture, M. Ludvigsen, and L. M. S. Aas, “Autonomous underwater vehicles as a platform for underwater hyperspectral imaging,” in Proceedings of IEEE/MTS OCEANS’17 (IEEE, 2017), pp. 1–8.

11. D. L. Bongiorno, M. Bryson, T. C. Bridge, D. G. Dansereau, and S. B. Williams, “Coregistered hyperspectral and stereo image seafloor mapping from an autonomous underwater vehicle,” J. Field Robot. (2017). [CrossRef]  

12. L. Bian, J. Suo, G. Situ, Z. Li, J. Fan, F. Chen, and Q. Dai, “Multispectral imaging using a single bucket detector,” Sci. Rep. -UK 6, 24752 (2016). [CrossRef]  

13. S. Jin, W. Hui, Y. Wang, K. Huang, Q. Shi, C. Ying, D. Liu, Q. Ye, W. Zhou, and J. Tian, “Hyperspectral imaging using the single-pixel fourier transform technique,” Sci. Rep. -UK 7, 45209 (2017). [CrossRef]  

14. X. Cao, T. Yue, X. Lin, S. Lin, X. Yuan, Q. Dai, L. Carin, and D. J. Brady, “Computational snapshot multispectral cameras: Toward dynamic capture of the spectral world,” IEEE Signal Proc. Mag. 33, 95–108 (2016). [CrossRef]  

15. M. K. Griffin and H.-h. K. Burke, “Compensation of hyperspectral data for atmospheric effects,” Lincoln Laboratory Journal 14, 29–54 (2003).

16. C. Mobley, E. Boss, and C. Roesler, Ocean Optics Web Book, http://www.oceanopticsbook.info/.

17. I. Vasilescu, C. Detweiler, and D. Rus, “Color-accurate underwater imaging using perceptual adaptive illumination,” Auton. Robot. 31, 285 (2011). [CrossRef]  

18. I. Leiper, S. Phinn, and A. G. Dekker, “Spectral reflectance of coral reef Benthos and substrate assemblages on Heron Reef, Australia,” Int. J. Remote Sens. 33, 3946–3965 (2012). [CrossRef]  

19. T. Treibitz, B. P. Neal, D. I. Kline, O. Beijbom, P. L. Roberts, B. G. Mitchell, and D. Kriegman, “Wide field-of-view fluorescence imaging of coral reefs,” Sci. Rep. -UK 5, 7694 (2015). [CrossRef]  

20. D. G. Zawada and C. H. Mazel, “Fluorescence-based classification of caribbean coral reef organisms and substrates,” PloS one 9, e84570 (2014). [CrossRef]   [PubMed]  

21. H. Holden and E. LeDrew, “Hyperspectral discrimination of healthy versus stressed corals using in situ reflectance,” J. Coastal Res. 850–858 (2001).

22. A. Chennu, P. Färber, G. De’ath, D. de Beer, and K. E. Fabricius, “A diver-operated hyperspectral imaging and topographic surveying system for automated mapping of benthic habitats,” Sci. Rep. -UK 7, 7122 (2017). [CrossRef]  

23. R. Pettersen, G. Johnsen, P. Bruheim, and T. Andreassen, “Development of hyperspectral imaging as a bio-optical taxonomic tool for pigmented marine organisms,” Org. Divers. Evol. 14, 237–246 (2014). [CrossRef]  

24. P. A. Letnes, I. M. Hansen, L. M. Aas, I. Eide, R. Pettersen, L. Tassara, J. Receveur, S. le Floch, J. Guyomarch, L. Camus, and J. Bytingsvik, “Underwater hyperspectral classification of deep sea corals exposed to a toxic compound,” bioRxiv (2017).

25. Y. Guo, H. Song, H. Liu, H. Wei, P. Yang, S. Zhan, H. Wang, H. Huang, N. Liao, Q. Mu, J. Leng, and W. Yang, “Model-based restoration of underwater spectral images captured with narrowband filters,” Opt. Express 24, 13101–13120 (2016). [CrossRef]   [PubMed]  

26. H. R. Morris, C. C. Hoyt, and P. J. Treado, “Imaging spectrometers for fluorescence and raman microscopy: acousto-optic and liquid crystal tunable filters,” Appl. Spectrosc. 48, 857–866 (1994). [CrossRef]  

27. A. Gleason, R. Reid, and K. Voss, “Automated classification of underwater multispectral imagery for coral reef monitoring,” in Proceedings of IEEE/MTS OCEANS’07 (IEEE, 2007), pp. 1–8.

28. J.-I. Park, M.-H. Lee, M. D. Grossberg, and S. K. Nayar, “Multispectral imaging using multiplexed illumination,” in Proceedings of IEEE Conference on Computer Vision (IEEE, 2007), pp. 1–8.

29. H. Blasinski and J. Farrell, “Computational multispectral flash,” in Proceedings of IEEE Conference on Computational Photography (IEEE, 2017), pp. 1–10.

30. M. B. Bouchard, B. R. Chen, S. A. Burgess, and E. M. Hillman, “Ultra-fast multispectral optical imaging of cortical oxygenation, blood flow, and intracellular calcium dynamics,” Opt. Express 17, 15670–15678 (2009). [CrossRef]   [PubMed]  

31. X. Delpueyo, M. Vilaseca, S. Royo, M. Ares, L. Rey-Barroso, F. Sanabria, S. Puig, J. Malvehy, G. Pellacani, F. Noguero, G. Solomita, and T. Bosch, “Multispectral imaging system based on light-emitting diodes for the detection of melanomas and basal cell carcinomas: a pilot study,” J. Biomed. Opt. 22, 065006 (2017). [CrossRef]  

32. D. Swinehart, “The beer-lambert law,” J. Chem. Educ 39, 333 (1962). [CrossRef]  

33. Y. Du, C.-I. Chang, H. Ren, C.-C. Chang, J. O. Jensen, and F. M. D’Amico, “New hyperspectral discrimination measure for spectral characterization,” Opt. Eng. 43, 1777–1786 (2004). [CrossRef]  

34. C. M. Bishop, Pattern Recognition and Machine Learning (Springer, 2006).

35. D. Manolakis, D. Marden, and G. A. Shaw, “Hyperspectral image processing for automatic target detection applications,” Lincoln Laboratory Journal 14, 79–116 (2003).

36. Itseez, “Open source computer vision library,” https://github.com/opencv/opencv (2017).

37. J. Sticklus and T. Kwasnitschka, “Verfahren und vorrichtung zur herstellung von in vergussmasse vergossenen leuchten,” (2015). DE Patent 102,014,118,672.

38. Lumileds Holding B.V., “DS105 LUXEON Z color line product datasheet,” https://www.lumileds.com/uploads/415/DS105-pdf (2017).

39. D. Akkaynak, E. Chan, J. J. Allen, and R. T. Hanlon, “Using spectrometry and photography to study color underwater,” in Proceedings of IEEE/MTS OCEANS’11 (IEEE, 2011), pp. 1–8.

40. D. Coffin, “DCRaw Version 9.27,” https://www.cybercom.net/~dcoffin/dcraw/ (2016).

41. B. Fiedler, “Short cruise report RV Maria S. Merian MSM61,” https://www.ldf.uni-hamburg.de/merian/wochenberichte/wochenberichte-merian/msm58-2-msm61/msm61-scr.pdf (2017).
