<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.1 20151215//EN" "http://jats.nlm.nih.gov/publishing/1.1/JATS-journalpublishing1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" article-type="research-article" dtd-version="1.1">
<front>
<journal-meta>
<journal-id journal-id-type="pmc">CMC</journal-id>
<journal-id journal-id-type="nlm-ta">CMC</journal-id>
<journal-id journal-id-type="publisher-id">CMC</journal-id>
<journal-title-group>
<journal-title>Computers, Materials &#x0026; Continua</journal-title>
</journal-title-group>
<issn pub-type="epub">1546-2226</issn>
<issn pub-type="ppub">1546-2218</issn>
<publisher>
<publisher-name>Tech Science Press</publisher-name>
<publisher-loc>USA</publisher-loc>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="publisher-id">23339</article-id>
<article-id pub-id-type="doi">10.32604/cmc.2022.023339</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Article</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>Image Dehazing Based on Pixel Guided CNN with PAM via Graph Cut</article-title>
<alt-title alt-title-type="left-running-head">Image Dehazing Based on Pixel Guided CNN with PAM via Graph Cut</alt-title>
<alt-title alt-title-type="right-running-head">Image Dehazing Based on Pixel Guided CNN with PAM via Graph Cut</alt-title>
</title-group>
<contrib-group content-type="authors">
<contrib id="author-1" contrib-type="author" corresp="yes">
<name name-style="western"><surname>Alenezi</surname><given-names>Fayadh</given-names></name><email>fshenezi@ju.edu.sa</email>
</contrib>
<aff id="aff-1"><institution>Department of Electrical Engineering, College of Engineering, Jouf University</institution>, <addr-line>Sakaka</addr-line>, <country>Saudi Arabia</country></aff>
</contrib-group>
<author-notes>
<corresp id="cor1"><label>&#x002A;</label>Corresponding Author: Fayadh Alenezi. Email: <email>fshenezi@ju.edu.sa</email></corresp>
</author-notes>
<pub-date pub-type="epub" date-type="pub" iso-8601-date="2021-11-29"><day>29</day>
<month>11</month>
<year>2021</year></pub-date>
<volume>71</volume>
<issue>2</issue>
<fpage>3425</fpage>
<lpage>3443</lpage>
<history>
<date date-type="received"><day>04</day><month>9</month><year>2021</year></date>
<date date-type="accepted"><day>20</day><month>10</month><year>2021</year></date>
</history>
<permissions>
<copyright-statement>&#x00A9; 2022 Alenezi</copyright-statement>
<copyright-year>2022</copyright-year>
<copyright-holder>Alenezi</copyright-holder>
<license xlink:href="https://creativecommons.org/licenses/by/4.0/">
<license-p>This work is licensed under a <ext-link ext-link-type="uri" xlink:type="simple" xlink:href="https://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</ext-link>, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.</license-p>
</license>
</permissions>
<self-uri content-type="pdf" xlink:href="TSP_CMC_23339.pdf"></self-uri>
<abstract>
<p>Image dehazing is still an open research topic that has been undergoing a lot of development, especially with the renewed interest in machine learning-based methods. A major challenge of the existing dehazing methods is the estimation of transmittance, which is the key element of haze-affected imaging models. Conventional methods are based on a set of assumptions that reduce the solution search space. However, the multiplication of these assumptions tends to restrict the solutions to particular cases that cannot account for the reality of the observed image. In this paper we reduce the number of simplified hypotheses in order to attain a more plausible and realistic solution by exploiting a priori knowledge of the ground truth in the proposed method. The proposed method relies on pixel information between the ground truth and haze image to reduce these assumptions. This is achieved by using ground truth and haze image to find the geometric-pixel information through a guided Convolution Neural Networks (CNNs) with a Parallax Attention Mechanism (PAM). It uses the differential pixel-based variance in order to estimate transmittance. The pixel variance uses local and global patches between the assumed ground truth and haze image to refine the transmission map. The transmission map is also improved based on improved Markov random field (MRF) energy functions. We used different images to test the proposed algorithm. The entropy value of the proposed method was 7.43 and 7.39, a percent increase of <inline-formula id="ieqn-1"><mml:math id="mml-ieqn-1"><mml:mrow><mml:mo>&#x2243;</mml:mo></mml:mrow><mml:mn>4.35</mml:mn><mml:mi mathvariant="normal">&#x0025;</mml:mi></mml:math></inline-formula> and <inline-formula id="ieqn-2"><mml:math id="mml-ieqn-2"><mml:mrow><mml:mo>&#x2243;</mml:mo></mml:mrow><mml:mn>5.42</mml:mn><mml:mi mathvariant="normal">&#x0025;</mml:mi></mml:math></inline-formula>, respectively, compared to the best existing results. 
The increase is similar in other performance quality metrics, which validates the method&#x0027;s superiority over other existing methods in terms of key image quality evaluation metrics. The proposed approach&#x0027;s drawback, an over-reliance on real ground truth images, is also investigated. The proposed method preserves more detail and hence yields better images than the existing state-of-the-art methods.</p>
</abstract>
<kwd-group kwd-group-type="author">
<kwd>Pixel information</kwd>
<kwd>human visual perception</kwd>
<kwd>convolution neural network</kwd>
<kwd>graph cut</kwd>
<kwd>parallax attention mechanism</kwd>
</kwd-group>
</article-meta>
</front>
<body>
<sec id="s1"><label>1</label><title>Introduction</title>
<p>Images acquired in an outdoor environment are sometimes degraded by atmospheric conditions such as fog, rain, snow, or wind-blown sand. Haze is a type of degradation that affects image quality more or less homogeneously and persistently, making fine details difficult to see. This inevitably reduces the performance of high-level tasks such as interpreting the content of the observed scene [<xref ref-type="bibr" rid="ref-1">1</xref>].</p>
<p>The haze phenomenon is due to the presence of water droplets suspended in the air. These droplets cause light scattering, whose distribution and photometric appearance depend on the size of the scattering water particles and the wavelength of the light. Weather conditions can cause fluctuations in the particles, which in turn produce haze in the atmosphere [<xref ref-type="bibr" rid="ref-2">2</xref>]. The collective effect of these particles alters the illumination recorded in the image at any given pixel. These effects can be dynamic (snow or rain) or steady (haze, mist, and fog) [<xref ref-type="bibr" rid="ref-2">2</xref>].</p>
<p>Dehazing aims at removing the light-scattering effect from the image, making it more exploitable in various image processing and analysis tasks. However, dehazing methods generally try to reduce or eliminate this phenomenon in a global way without taking local aspects into account, and in particular they typically fail to account for spatial structures and inter-pixel interactions [<xref ref-type="bibr" rid="ref-3">3</xref>]. Thus, the present proposal takes local aspects into account to yield better results.</p>
<p>In order to restore the salient and essential feature regions of an image, existing dehazing algorithms tend to use specific points in the image to approximate the atmospheric light [<xref ref-type="bibr" rid="ref-4">4</xref>]. The majority of the proposed dehazing algorithms based on atmospheric scattering models aim at deriving a haze-free image from the observed image [<xref ref-type="bibr" rid="ref-5">5</xref>] by estimating the transmission map. Some dehazing methods estimate the atmospheric light and the transmission map through physical priors such as color attenuation or non-local priors, or through the observation of haze-free outdoor images as in the dark-channel prior approach [<xref ref-type="bibr" rid="ref-6">6</xref>]. Despite the great successes of these methods, they do not work well in certain cases. For example, the method of Fattal et al. [<xref ref-type="bibr" rid="ref-5">5</xref>] fails to estimate transmission in the presence of white objects in the background. Similarly, non-local prior-based methods such as that of Berman et al. [<xref ref-type="bibr" rid="ref-6">6</xref>] fail in heavily hazed regions, where the designed transmission becomes irrelevant. Cai et al. [<xref ref-type="bibr" rid="ref-7">7</xref>] have also suggested that the color-attenuation prior underestimates the transmission of distant regions.</p>
<p>Traditional dehazing methods have recently been combined with CNNs [<xref ref-type="bibr" rid="ref-8">8</xref>]. This has been facilitated by the success of CNNs in the majority of image processing tasks. CNNs have been combined with other filters to estimate transmission maps, while conventional methods such as Retinex theory have been used to estimate atmospheric light [<xref ref-type="bibr" rid="ref-8">8</xref>]. However, the existing dehazing methods still lack accuracy in the estimation of transmission maps. For instance, Alenezi et al. [<xref ref-type="bibr" rid="ref-9">9</xref>] disregard the physical model of the imaging principle while improving the image quality. Other models such as saliency extraction [<xref ref-type="bibr" rid="ref-10">10</xref>], histogram equalization [<xref ref-type="bibr" rid="ref-11">11</xref>] and Retinex theory [<xref ref-type="bibr" rid="ref-12">12</xref>] have yielded images with color distortion due to incomplete recovery effects [<xref ref-type="bibr" rid="ref-13">13</xref>]. Even promising state-of-the-art methods like that developed by Salazar-Colores et al. [<xref ref-type="bibr" rid="ref-14">14</xref>] yield inaccurate results, since their procedures rest on many assumptions.</p>
<p>Image dehazing methods based on supplementary haze removal have various shortcomings. For instance, Wang et al. [<xref ref-type="bibr" rid="ref-1">1</xref>] proposed a method whose final images have a washed-out effect in darker regions due to failures in atmospheric light estimation. The results of Middleton [<xref ref-type="bibr" rid="ref-15">15</xref>] have exaggerated contrast in the final images. The dehazing technique proposed by Vazquez-Corral [<xref ref-type="bibr" rid="ref-16">16</xref>] yields final images with poor information content. Feng et al. [<xref ref-type="bibr" rid="ref-17">17</xref>] proposed using sky and non-sky regions as the basis for improving hazy images. The method&#x0027;s strength lies in bright sky regions, where the generated results have superior edges and good robustness. However, the results in the other regions are darker and retain a hazed background. These results are similar to those of Wang et al. [<xref ref-type="bibr" rid="ref-1">1</xref>].</p>
<p>The dark channel prior contribution of Fattal et al. [<xref ref-type="bibr" rid="ref-5">5</xref>,<xref ref-type="bibr" rid="ref-18">18</xref>] to image dehazing has found numerous uses. The soft matting employed in the algorithm of Zhou et al. [<xref ref-type="bibr" rid="ref-18">18</xref>] makes its computation expensive. Using a guided filter in place of soft matting in the first step reduces calculation- and application-related costs. However, the technique of He et al. has produced outcomes with degraded edges and uneven dehazing, which are only sound for non-sky images [<xref ref-type="bibr" rid="ref-5">5</xref>,<xref ref-type="bibr" rid="ref-8">8</xref>]. The method proposed by He et al. [<xref ref-type="bibr" rid="ref-19">19</xref>] introduces the wavelet transform, assuming that haze solely affects the low-frequency components of the image. Yet this technique [<xref ref-type="bibr" rid="ref-19">19</xref>] does not account for the difference between the scene light and the atmospheric light, which makes the results darker.</p>
<p>Some methods combine traditional dehazing methods with Artificial Neural Networks (ANNs) to yield promising results. For instance, the multilayer perceptron (MLP), which has found use in numerous image processing applications such as skin segmentation and image denoising [<xref ref-type="bibr" rid="ref-20">20</xref>], has been used by Guo et al. [<xref ref-type="bibr" rid="ref-20">20</xref>]. The method suggested by Guo et al. [<xref ref-type="bibr" rid="ref-20">20</xref>] is based on the MLP, which derives the transmission map of the haze image directly from the dark channel. The results show extended contrast and an intensified dynamic range in the dehazed image. However, visual inspection shows that the outcomes of Guo et al. [<xref ref-type="bibr" rid="ref-20">20</xref>] retain haze towards the horizon, yielding imperfect edges. Other existing hybrid methods combining CNNs with traditional methods have also produced imperfect images. For instance, Alenezi et al. [<xref ref-type="bibr" rid="ref-9">9</xref>] estimate a transmission map via DehazeNet. Their method has produced superior results against existing state-of-the-art methods, but the CNN functions were limited in predicting the transmission map.</p>
<p>O&#x0027;Shea et al. [<xref ref-type="bibr" rid="ref-21">21</xref>] proposed a method in which an attention block captures informative spatial and channel-wise features. A visual analysis of the dehazed image results reveals haze towards the horizon in both simulated and natural images. Unlike the existing methods, a more recent method by Zhu et al. [<xref ref-type="bibr" rid="ref-8">8</xref>] considers the existence of differential pixel values. This method [<xref ref-type="bibr" rid="ref-8">8</xref>] combines graph-cut with single-pass CNN algorithms, estimating transmission maps via global and local patches. However, the method yielded images whose over-bright areas tended to lose some final image features. A more recent study by Zhao et al. [<xref ref-type="bibr" rid="ref-22">22</xref>] merged the merits of prior-based and learning-based approaches. The method [<xref ref-type="bibr" rid="ref-22">22</xref>] combines visibility restoration and realness improvement sub-tasks using a two-stage weakly supervised dehazing network. Its results retained slight washed-out effects despite performing better than existing state-of-the-art methods.</p>
<p>In summary, the existing image dehazing techniques have varied drawbacks, which necessitates further research into the topic. This paper uses global and local Markov random fields and graph cuts [<xref ref-type="bibr" rid="ref-8">8</xref>] to improve the transmission map, exploiting the geometric-variance pixel-based guided local and global relationships between the &#x2018;assumed&#x2019; ground truth and the hazed image. This helps to estimate the transmittance medium and to extract a dehazed image accurately. Thus, the proposed method uses the local and global pixel variance within the local and global image neighborhoods to estimate the transmittance medium. This is achieved by comparing the corresponding local and global pixels between the haze image and its assumed ground truth. The energy variations in the global and local Markov fields serve as a proposed extension based on the corresponding high-low pixel gradients and the variance-based boundary between the two images, and they help smooth and constrain the connection between local and global pixel neighborhoods. These proposed geometric-based methods improve the dehazed image features. The rest of the paper is organized as follows: Section 2 outlines the contribution of the paper. Section 3 describes the proposed method, and Section 4 presents the experiments. Finally, Section 5 offers the conclusion.</p>
</sec>
<sec id="s2"><label>2</label><title>Contribution</title>
<p>This paper makes three significant contributions: it presents a novel combination of CNNs with a parallax attention mechanism and graph-cut algorithms, which yields an improved dehazed image; a transmittance medium dependent on the pixel variance of corresponding local- and global-based neighborhoods between the ground truth and haze image, which serves to strengthen local and global image features; and a pixel-based energy function, built on the pixel-variance constraints of corresponding neighborhoods between the ground truth and haze image, that enhances the transmission map and thereby the finer details of the dehazed image. The later stages (the global and local Markov random fields and the graph cut) are an extension of existing work [<xref ref-type="bibr" rid="ref-8">8</xref>].</p>
</sec>
<sec id="s3"><label>3</label><title>Proposed Method</title>
<sec id="s3_1"><label>3.1</label><title>Atmospheric Scattering Model</title>
<p><?A3B2 "fig1",5,"anchor"?><xref ref-type="fig" rid="fig-1">Fig. 1</xref> shows a hazy condition with numerous particles suspended in the environment, resulting in a scattering effect on the light [<xref ref-type="bibr" rid="ref-8">8</xref>,<xref ref-type="bibr" rid="ref-13">13</xref>,<xref ref-type="bibr" rid="ref-17">17</xref>]. During hazy weather conditions, the scattering particles attenuate the light reflected from object surfaces. The attenuated light deteriorates the image&#x0027;s brightness and decreases its resolution, as a substantial forward-scattering effect persists between the particles and surfaces [<xref ref-type="bibr" rid="ref-13">13</xref>]. The resulting hazed image differs from the ground truth image locally and globally in its pixel information. The back-scattering of atmospheric particles in ordinary light yields images with reduced contrast, hue deviation, and reduced saturation compared with the ground truth image [<xref ref-type="bibr" rid="ref-23">23</xref>]. These irregular scattering effects of sensor light and natural light in hazy images are broadly modeled as follows [<xref ref-type="bibr" rid="ref-8">8</xref>,<xref ref-type="bibr" rid="ref-13">13</xref>,<xref ref-type="bibr" rid="ref-17">17</xref>]:
<disp-formula id="eqn-1"><label>(1)</label><mml:math id="mml-eqn-1" display="block"><mml:mi>&#x03A5;</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:mi>&#x03B3;</mml:mi><mml:mo stretchy="false">)</mml:mo><mml:mo>=</mml:mo><mml:mi>&#x03A9;</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:mi>&#x03B3;</mml:mi><mml:mo stretchy="false">)</mml:mo><mml:mi>&#x03C9;</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:mi>&#x03B3;</mml:mi><mml:mo stretchy="false">)</mml:mo><mml:mo>+</mml:mo><mml:mi>&#x03A8;</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>1</mml:mn><mml:mo>&#x2212;</mml:mo><mml:mi>&#x03C9;</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:mi>&#x03B3;</mml:mi><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo stretchy="false">)</mml:mo><mml:mo>.</mml:mo></mml:math></disp-formula></p>
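The model in (1) can be sketched directly in code. The snippet below is an illustrative NumPy helper (the function and variable names are hypothetical, not part of the proposed method) that synthesizes a hazy observation from a scene radiance, a transmission map, and an atmospheric light value:

```python
import numpy as np

def apply_haze(radiance, transmission, airlight):
    """Synthesize the hazy observation of Eq. (1):
    Y(g) = W(g) * w(g) + A * (1 - w(g)).

    radiance:     H x W x 3 scene radiance (haze-free image), values in [0, 1]
    transmission: H x W transmission map w(g), values in [0, 1]
    airlight:     scalar or length-3 atmospheric light A
    """
    t = transmission[..., None]          # broadcast over the color channels
    return radiance * t + airlight * (1.0 - t)

# A 2x2 toy scene: full transmission leaves the pixel untouched,
# zero transmission replaces it entirely with the atmospheric light.
scene = np.full((2, 2, 3), 0.2)
t = np.array([[1.0, 0.0], [0.5, 0.5]])
hazy = apply_haze(scene, t, 0.8)
```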
<fig id="fig-1"><label>Figure 1</label><caption><title>Diagram summarizing the formation of the scattering effect, showing the environment radiance <inline-formula id="ieqn-78"><mml:math id="mml-ieqn-78"><mml:mi>&#x03A9;</mml:mi></mml:math></inline-formula>, the atmospheric light <inline-formula id="ieqn-79"><mml:math id="mml-ieqn-79"><mml:mi>&#x03D5;</mml:mi></mml:math></inline-formula>, the attenuation or transmission <inline-formula id="ieqn-80"><mml:math id="mml-ieqn-80"><mml:mi>&#x03C9;</mml:mi></mml:math></inline-formula>, and the observed intensity <inline-formula id="ieqn-81"><mml:math id="mml-ieqn-81"><mml:mi>&#x03A5;</mml:mi></mml:math></inline-formula></title></caption><graphic mimetype="image" mime-subtype="png" xlink:href="CMC_23339-fig-1.png"/>
</fig>
<p>In <xref ref-type="disp-formula" rid="eqn-1">(1)</xref>, <inline-formula id="ieqn-3"><mml:math id="mml-ieqn-3"><mml:mi>&#x03A5;</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:mi>&#x03B3;</mml:mi><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula> is the observed image or brightness of the hazy image as established by the observer at pixel <inline-formula id="ieqn-4"><mml:math id="mml-ieqn-4"><mml:mi>&#x03B3;</mml:mi></mml:math></inline-formula>; <inline-formula id="ieqn-5"><mml:math id="mml-ieqn-5"><mml:mi>&#x03A9;</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:mi>&#x03B3;</mml:mi><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula> is the scene or environment radiance of the haze-free image; <inline-formula id="ieqn-6"><mml:math id="mml-ieqn-6"><mml:mi>&#x03A8;</mml:mi></mml:math></inline-formula> is the atmospheric light; and <inline-formula id="ieqn-7"><mml:math id="mml-ieqn-7"><mml:mi>&#x03C9;</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:mi>&#x03B3;</mml:mi><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula> is the attenuation or transmittance medium, which ranges between 0 and 1. Thus, the transmittance can be redefined as
<disp-formula id="eqn-2"><label>(2)</label><mml:math id="mml-eqn-2" display="block"><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>&#x03C9;</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:mi>&#x03B3;</mml:mi><mml:mo stretchy="false">)</mml:mo><mml:mo>=</mml:mo><mml:mrow><mml:msup><mml:mi>e</mml:mi><mml:mrow><mml:mo>&#x2212;</mml:mo><mml:mi>&#x03B7;</mml:mi><mml:mi>&#x03BE;</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:mi>&#x03B3;</mml:mi><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:msup></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mo stretchy="false">|</mml:mo></mml:mrow><mml:mrow><mml:mi>&#x03C9;</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:mi>&#x03B3;</mml:mi><mml:mo stretchy="false">)</mml:mo><mml:mo>&#x2208;</mml:mo><mml:mo stretchy="false">[</mml:mo><mml:mrow><mml:mn>0</mml:mn><mml:mo>,</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mo stretchy="false">]</mml:mo></mml:mrow></mml:msub></mml:mrow></mml:mrow><mml:mo stretchy="false">)</mml:mo><mml:mo>,</mml:mo></mml:math></disp-formula>where <inline-formula id="ieqn-8"><mml:math id="mml-ieqn-8"><mml:mi>&#x03B7;</mml:mi></mml:math></inline-formula> is the scattering coefficient of the atmosphere, and <inline-formula id="ieqn-9"><mml:math id="mml-ieqn-9"><mml:mi>&#x03BE;</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:mi>&#x03B3;</mml:mi><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula> is the depth of the scene. <xref ref-type="disp-formula" rid="eqn-2">Eq. (2)</xref> is related to homogeneity in the atmosphere; otherwise, <inline-formula id="ieqn-10"><mml:math id="mml-ieqn-10"><mml:mi>&#x03C9;</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:mi>&#x03B3;</mml:mi><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula> is given by <xref ref-type="disp-formula" rid="eqn-3">(3)</xref>.</p>
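For a homogeneous atmosphere, the exponential relation in (2) maps a scene depth map to a transmission map. A minimal sketch, with a hypothetical scattering coefficient and depth values chosen only for illustration:

```python
import numpy as np

def transmission_from_depth(depth, eta):
    # Eq. (2): w(g) = exp(-eta * xi(g)), where eta is the scattering
    # coefficient of the atmosphere and xi(g) is the scene depth.
    return np.exp(-eta * depth)

# Hypothetical depths in metres; eta = 0.1 is an illustrative value.
depth = np.array([[0.0, 10.0], [20.0, 40.0]])
t = transmission_from_depth(depth, eta=0.1)
```

Zero depth gives full transmission (w = 1), and the transmission decays exponentially with distance, staying inside [0, 1] as required by (2).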
<p>The observed image brightness, <inline-formula id="ieqn-11"><mml:math id="mml-ieqn-11"><mml:mi>&#x03A5;</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:mi>&#x03B3;</mml:mi><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula>, can be processed by eliminating the atmospheric light, <inline-formula id="ieqn-12"><mml:math id="mml-ieqn-12"><mml:mi>&#x03A8;</mml:mi></mml:math></inline-formula>, while compensating for the attenuation of the light, <inline-formula id="ieqn-13"><mml:math id="mml-ieqn-13"><mml:mi>&#x03C9;</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:mi>&#x03B3;</mml:mi><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula>, to restore the haze-free scene, <inline-formula id="ieqn-14"><mml:math id="mml-ieqn-14"><mml:mi>&#x03A9;</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:mi>&#x03B3;</mml:mi><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula>. The RGB color space vectors <inline-formula id="ieqn-15"><mml:math id="mml-ieqn-15"><mml:mi>&#x03A9;</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:mi>&#x03B3;</mml:mi><mml:mo stretchy="false">)</mml:mo><mml:mo>,</mml:mo><mml:mi>&#x03A8;</mml:mi></mml:math></inline-formula>, and <inline-formula id="ieqn-16"><mml:math id="mml-ieqn-16"><mml:mi>&#x03A5;</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:mi>&#x03B3;</mml:mi><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula> in <xref ref-type="disp-formula" rid="eqn-2">(2)</xref> are coplanar from a geometric point of view. 
The terminal points of vectors <inline-formula id="ieqn-17"><mml:math id="mml-ieqn-17"><mml:mi>&#x03A9;</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:mi>&#x03B3;</mml:mi><mml:mo stretchy="false">)</mml:mo><mml:mo>,</mml:mo><mml:mi>&#x03A8;</mml:mi></mml:math></inline-formula>, and <inline-formula id="ieqn-18"><mml:math id="mml-ieqn-18"><mml:mi>&#x03A5;</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:mi>&#x03B3;</mml:mi><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula> are collinear and the transmission, <inline-formula id="ieqn-19"><mml:math id="mml-ieqn-19"><mml:mi>&#x03C9;</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:mi>&#x03B3;</mml:mi><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula>, is relative to the length of the two lines, as defined in <xref ref-type="disp-formula" rid="eqn-3">(3)</xref>
<disp-formula id="eqn-3"><label>(3)</label><mml:math id="mml-eqn-3" display="block"><mml:mi>&#x03C9;</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:mi>&#x03B3;</mml:mi><mml:mo stretchy="false">)</mml:mo><mml:mo>=</mml:mo><mml:mrow><mml:mo symmetric="true">&#x2016;</mml:mo><mml:mfrac><mml:mrow><mml:mi>&#x03A8;</mml:mi><mml:mo>&#x2212;</mml:mo><mml:mi>&#x03A5;</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:mi>&#x03B3;</mml:mi><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mrow><mml:mi>&#x03A8;</mml:mi><mml:mo>&#x2212;</mml:mo><mml:mi>&#x03A9;</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:mi>&#x03B3;</mml:mi><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mfrac><mml:mo symmetric="true">&#x2016;</mml:mo></mml:mrow><mml:mo>,</mml:mo></mml:math></disp-formula>
</p>
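The ratio of vector lengths in (3) can be evaluated per pixel over the RGB channels. The sketch below (a hypothetical helper, with a small guard against division by zero) recovers the transmission from a hazy image, its radiance, and the atmospheric light; on data synthesized with the model in (1), it returns the true transmission map:

```python
import numpy as np

def transmission_from_vectors(hazy, radiance, airlight):
    # Eq. (3): w(g) = || A - Y(g) || / || A - W(g) ||, per pixel over RGB.
    num = np.linalg.norm(airlight - hazy, axis=-1)
    den = np.linalg.norm(airlight - radiance, axis=-1)
    return num / np.maximum(den, 1e-8)   # guard against division by zero

# Synthesize a hazy image with the model of Eq. (1), then recover w(g).
A = np.array([0.8, 0.8, 0.8])
scene = np.full((2, 2, 3), 0.2)
t_true = np.array([[1.0, 0.25], [0.5, 0.75]])
hazy = scene * t_true[..., None] + A * (1.0 - t_true[..., None])
t_est = transmission_from_vectors(hazy, scene, A)
```

Because A - Y(g) = (A - W(g)) * w(g) under (1), the norm ratio collapses exactly to w(g), which is what makes (3) a consistent rearrangement of the imaging model.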
<p><xref ref-type="disp-formula" rid="eqn-4">Eq. (4)</xref> emanates from <xref ref-type="disp-formula" rid="eqn-2">(2)</xref> and shows that haze removal is based on accurate retrieval of <inline-formula id="ieqn-20"><mml:math id="mml-ieqn-20"><mml:mi>&#x03A9;</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:mi>&#x03B3;</mml:mi><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula>, <inline-formula id="ieqn-21"><mml:math id="mml-ieqn-21"><mml:mi>&#x03A8;</mml:mi></mml:math></inline-formula>, and <inline-formula id="ieqn-22"><mml:math id="mml-ieqn-22"><mml:mi>&#x03C9;</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:mi>&#x03B3;</mml:mi><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula> from <inline-formula id="ieqn-23"><mml:math id="mml-ieqn-23"><mml:mi>&#x03A5;</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:mi>&#x03B3;</mml:mi><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula>. <inline-formula id="ieqn-24"><mml:math id="mml-ieqn-24"><mml:mi>&#x03A9;</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:mi>&#x03B3;</mml:mi><mml:mo stretchy="false">)</mml:mo><mml:mi>&#x03C9;</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:mi>&#x03B3;</mml:mi><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula>, which shows emissivity decay of the natural environment in the medium, represents direct attenuation. <inline-formula id="ieqn-25"><mml:math id="mml-ieqn-25"><mml:mi>&#x03A8;</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>1</mml:mn><mml:mo>&#x2212;</mml:mo><mml:mi>&#x03C9;</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:mi>&#x03B3;</mml:mi><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula> is air-light based on previously scattered light, leading to alteration of natural environment color. 
Therefore, the greater the distance between the sensor and the object the larger the attenuation (<inline-formula id="ieqn-26"><mml:math id="mml-ieqn-26"><mml:mi>&#x03A9;</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:mi>&#x03B3;</mml:mi><mml:mo stretchy="false">)</mml:mo><mml:mi>&#x03C9;</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:mi>&#x03B3;</mml:mi><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula>) and scattering effect (<inline-formula id="ieqn-27"><mml:math id="mml-ieqn-27"><mml:mi>&#x03A8;</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>1</mml:mn><mml:mo>&#x2212;</mml:mo><mml:mi>&#x03C9;</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:mi>&#x03B3;</mml:mi><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula>), suggesting the exponential transmission that is shown in <xref ref-type="disp-formula" rid="eqn-2">(2)</xref>.
<disp-formula id="eqn-4"><label>(4)</label><mml:math id="mml-eqn-4" display="block"><mml:mi>&#x03A9;</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:mi>&#x03B3;</mml:mi><mml:mo stretchy="false">)</mml:mo><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mi>&#x03A5;</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:mi>&#x03B3;</mml:mi><mml:mo stretchy="false">)</mml:mo><mml:mo>&#x2212;</mml:mo><mml:mi>&#x03A8;</mml:mi></mml:mrow><mml:mrow><mml:mi>&#x03C9;</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:mi>&#x03B3;</mml:mi><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mfrac><mml:mo>+</mml:mo><mml:mi>&#x03A8;</mml:mi><mml:mo>.</mml:mo></mml:math></disp-formula>
</p>
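Given estimates of the transmission and the atmospheric light, (4) inverts the imaging model. A minimal sketch, with a common lower bound on the transmission (the 0.1 value is a hypothetical choice, not specified in this section) to keep the division stable in dense-haze regions:

```python
import numpy as np

def recover_radiance(hazy, transmission, airlight, t_min=0.1):
    # Eq. (4): W(g) = (Y(g) - A) / w(g) + A. The lower bound t_min keeps
    # the division stable where the transmission approaches zero.
    t = np.maximum(transmission, t_min)[..., None]
    return (hazy - airlight) / t + airlight

# Round trip: haze a flat scene with Eq. (1), then recover it with Eq. (4).
t_map = np.array([[1.0, 0.5], [0.25, 0.8]])
scene = np.full((2, 2, 3), 0.3)
hazy = scene * t_map[..., None] + 0.9 * (1.0 - t_map[..., None])
restored = recover_radiance(hazy, t_map, 0.9)
```

Since every transmission value here exceeds the lower bound, the recovery is an exact inverse of the synthesis step.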
</sec>
<sec id="s3_2"><label>3.2</label><title>Convolution Neural Network</title>
<p>CNNs are similar to ordinary neural networks: they are composed of learnable weights and biases [<xref ref-type="bibr" rid="ref-24">24</xref>]. In a CNN, each neuron receives an input, such as an image patch, performs a dot product, and may apply a non-linear function. A CNN expresses a single differentiable score function, relating the input image pixels to one another, and has a loss function on the last layer of the network [<xref ref-type="bibr" rid="ref-24">24</xref>]. ConvNets explicitly assume image inputs, making it possible to encode image properties such as texture and information content into the architecture. This makes the forward function of the ConvNet architecture more efficient to implement and reduces the number of parameters in the network [<xref ref-type="bibr" rid="ref-25">25</xref>]. The structure and architecture of Convolutional Neural Networks (CNNs/ConvNets) are presented extensively in the literature [<xref ref-type="bibr" rid="ref-26">26</xref>].</p>
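As an illustrative sketch (not the network used in this paper), the core operation of a convolution layer, a dot product between a learnable kernel and each image patch followed by a non-linearity, can be written in plain NumPy; the kernel and input below are hypothetical:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2-D convolution (really cross-correlation, as in CNN
    frameworks): each output pixel is the dot product between the kernel
    and the image patch beneath it."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    # Elementwise non-linearity applied after the dot product.
    return np.maximum(x, 0.0)

img = np.arange(16.0).reshape(4, 4)   # toy 4x4 "image"
edge = np.array([[1.0, -1.0]])        # 1x2 horizontal-gradient kernel
feat = relu(conv2d(img, edge))        # one feature map of shape (4, 3)
```

In a real ConvNet, many such kernels are learned jointly, and stacking these layers is what lets the network encode texture and spatial structure with far fewer parameters than a fully connected network.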
</sec>
</sec>
<sec id="s4"><label>4</label><title>Image Dehazing Based on Pixel Guided CNN with PAM via Graph Cut</title>
<sec id="s4_1"><label>4.1</label><title>Transmission Map</title>
<p>We define the pixel-value fluctuations over the smallest regions of the haze image as <inline-formula id="ieqn-28"><mml:math id="mml-ieqn-28"><mml:mi>&#x0394;</mml:mi><mml:mrow><mml:msub><mml:mi>p</mml:mi><mml:mi>H</mml:mi></mml:msub></mml:mrow></mml:math></inline-formula> and those of the ground truth as <inline-formula id="ieqn-29"><mml:math id="mml-ieqn-29"><mml:mi>&#x0394;</mml:mi><mml:mrow><mml:msub><mml:mi>p</mml:mi><mml:mi>G</mml:mi></mml:msub></mml:mrow></mml:math></inline-formula>. The pixel fluctuations also reflect changes in image features. If we denote the variance of these deviations by <inline-formula id="ieqn-30"><mml:math id="mml-ieqn-30"><mml:mi>&#x03B6;</mml:mi><mml:mo>=</mml:mo><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mi>&#x0394;</mml:mi><mml:mrow><mml:msub><mml:mi>p</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow><mml:msup><mml:mo stretchy="false">)</mml:mo><mml:mn>2</mml:mn></mml:msup></mml:mrow></mml:math></inline-formula>, where <inline-formula id="ieqn-31"><mml:math id="mml-ieqn-31"><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mi>H</mml:mi><mml:mo>,</mml:mo><mml:mi>G</mml:mi></mml:math></inline-formula> for H (haze) and G (ground truth), then when <inline-formula id="ieqn-32"><mml:math id="mml-ieqn-32"><mml:mi>&#x03B6;</mml:mi><mml:mo stretchy="false">&#x2192;</mml:mo><mml:mn>0</mml:mn></mml:math></inline-formula> the variation is invisible. 
Since pixel values lie between <inline-formula id="ieqn-33"><mml:math id="mml-ieqn-33"><mml:mn>0</mml:mn></mml:math></inline-formula> and <inline-formula id="ieqn-34"><mml:math id="mml-ieqn-34"><mml:mn>1</mml:mn></mml:math></inline-formula>, the variance between neighboring pixels <inline-formula id="ieqn-35"><mml:math id="mml-ieqn-35"><mml:mrow><mml:msub><mml:mi>q</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow></mml:math></inline-formula> is given by <inline-formula id="ieqn-36"><mml:math id="mml-ieqn-36"><mml:mo stretchy="false">&#x2225;</mml:mo><mml:mi>&#x0394;</mml:mi><mml:mrow><mml:msub><mml:mi>p</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow><mml:mo>&#x2212;</mml:mo><mml:mi>&#x0394;</mml:mi><mml:mrow><mml:msub><mml:mi>q</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow><mml:mrow><mml:msup><mml:mo>&#x2225;</mml:mo><mml:mn>2</mml:mn></mml:msup></mml:mrow></mml:math></inline-formula>, where <inline-formula id="ieqn-37"><mml:math id="mml-ieqn-37"><mml:mo stretchy="false">&#x2225;</mml:mo><mml:mo>.</mml:mo><mml:mo stretchy="false">&#x2225;</mml:mo></mml:math></inline-formula> is the magnitude of the pixels. We define the thresholding process for the input haze image <inline-formula id="ieqn-38"><mml:math id="mml-ieqn-38"><mml:mrow><mml:msub><mml:mi>I</mml:mi><mml:mi>d</mml:mi></mml:msub></mml:mrow></mml:math></inline-formula> and the ground truth <inline-formula id="ieqn-39"><mml:math id="mml-ieqn-39"><mml:mrow><mml:msub><mml:mi>I</mml:mi><mml:mi>G</mml:mi></mml:msub></mml:mrow></mml:math></inline-formula> as
<disp-formula id="eqn-5"><label>(5)</label><mml:math id="mml-eqn-5" display="block"><mml:mrow><mml:msub><mml:mi>I</mml:mi><mml:mrow><mml:mi>t</mml:mi><mml:mi>h</mml:mi><mml:mi>r</mml:mi><mml:mi>e</mml:mi><mml:mi>s</mml:mi><mml:mi>h</mml:mi><mml:mi>o</mml:mi><mml:mi>l</mml:mi><mml:mi>d</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mo stretchy="false">&#x2225;</mml:mo><mml:mi>&#x0394;</mml:mi><mml:mrow><mml:msub><mml:mi>p</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow><mml:mo>&#x2212;</mml:mo><mml:mi>&#x0394;</mml:mi><mml:mrow><mml:msub><mml:mi>q</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow><mml:mrow><mml:msup><mml:mo>&#x2225;</mml:mo><mml:mn>2</mml:mn></mml:msup></mml:mrow></mml:mrow><mml:mrow><mml:mrow><mml:msub><mml:mi>p</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow><mml:mo>&#x2212;</mml:mo><mml:mrow><mml:msub><mml:mi>q</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow></mml:mrow></mml:mfrac><mml:mo>.</mml:mo></mml:math></disp-formula>
</p>
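A minimal NumPy sketch of the thresholding in Eq. (5); treating the fluctuations as forward differences along a row of pixel values is our assumption for illustration, as is the small guard term against division by zero:

```python
import numpy as np

def threshold_map(p, q, eps=1e-8):
    """Sketch of Eq. (5): the squared magnitude of the difference between
    neighbouring fluctuations, normalised by the raw pixel gap p_i - q_i.
    Modelling the fluctuations as forward differences is an assumption."""
    dp = np.diff(p)                   # stands in for the fluctuations of p_i
    dq = np.diff(q)                   # stands in for the fluctuations of q_i
    num = np.abs(dp - dq) ** 2        # squared magnitude of the difference
    den = (p[:-1] - q[:-1]) + eps     # p_i - q_i, guarded against zero
    return num / den

p = np.array([0.2, 0.4, 0.8])         # pixel values in [0, 1]
q = np.array([0.1, 0.2, 0.4])         # neighbouring pixel values
i_threshold = threshold_map(p, q)
```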
<p><xref ref-type="disp-formula" rid="eqn-5">Eq. (5)</xref> is analogous to <xref ref-type="disp-formula" rid="eqn-3">(3)</xref>, that is,
<disp-formula id="eqn-6"><label>(6)</label><mml:math id="mml-eqn-6" display="block"><mml:mrow><mml:msub><mml:mi>I</mml:mi><mml:mrow><mml:mi>t</mml:mi><mml:mi>h</mml:mi><mml:mi>r</mml:mi><mml:mi>e</mml:mi><mml:mi>s</mml:mi><mml:mi>h</mml:mi><mml:mi>o</mml:mi><mml:mi>l</mml:mi><mml:mi>d</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mo fence="false" stretchy="false">&#x2016;</mml:mo><mml:mi>&#x0394;</mml:mi><mml:mrow><mml:msub><mml:mi>p</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow><mml:mo>&#x2212;</mml:mo><mml:mi>&#x0394;</mml:mi><mml:mrow><mml:msub><mml:mi>q</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow><mml:mrow><mml:msup><mml:mo fence="false" stretchy="false">&#x2016;</mml:mo><mml:mn>2</mml:mn></mml:msup></mml:mrow></mml:mrow><mml:mrow><mml:mrow><mml:msub><mml:mi>p</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow><mml:mo>&#x2212;</mml:mo><mml:mrow><mml:msub><mml:mi>q</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow></mml:mrow></mml:mfrac><mml:mo>&#x2243;</mml:mo><mml:mrow><mml:mo symmetric="true">&#x2016;</mml:mo><mml:mfrac><mml:mrow><mml:mrow><mml:msub><mml:mi>&#x03A8;</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow><mml:mo>&#x2212;</mml:mo><mml:mi>&#x03A5;</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mrow><mml:msub><mml:mi>&#x03B3;</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mrow><mml:mrow><mml:msub><mml:mi>&#x03A8;</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow><mml:mo>&#x2212;</mml:mo><mml:mi>&#x03A9;</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mrow><mml:msub><mml:mi>&#x03B3;</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mfrac><mml:mo symmetric="true">&#x2016;</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mi>&#x03C9;</mml:mi><mml:mo 
stretchy="false">(</mml:mo><mml:mrow><mml:mrow><mml:msub><mml:mi>&#x03B3;</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow></mml:mrow><mml:mo stretchy="false">)</mml:mo><mml:mo>.</mml:mo></mml:math></disp-formula>
</p>
<p>Thus, the transmission medium defined by <xref ref-type="disp-formula" rid="eqn-2">(2)</xref> becomes
<disp-formula id="eqn-7"><label>(7)</label><mml:math id="mml-eqn-7" display="block"><mml:mfrac><mml:mrow><mml:mo fence="false" stretchy="false">&#x2016;</mml:mo><mml:mi>&#x0394;</mml:mi><mml:mrow><mml:msub><mml:mi>p</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow><mml:mo>&#x2212;</mml:mo><mml:mi>&#x0394;</mml:mi><mml:mrow><mml:msub><mml:mi>q</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow><mml:mrow><mml:msup><mml:mo fence="false" stretchy="false">&#x2016;</mml:mo><mml:mn>2</mml:mn></mml:msup></mml:mrow></mml:mrow><mml:mrow><mml:mrow><mml:msub><mml:mi>p</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow><mml:mo>&#x2212;</mml:mo><mml:mrow><mml:msub><mml:mi>q</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow></mml:mrow></mml:mfrac><mml:mo>&#x2243;</mml:mo><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>&#x03C9;</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:mi>&#x03B3;</mml:mi><mml:mo stretchy="false">)</mml:mo><mml:mo>=</mml:mo><mml:mrow><mml:msup><mml:mi>e</mml:mi><mml:mrow><mml:mo>&#x2212;</mml:mo><mml:mi>&#x03B7;</mml:mi><mml:mi>&#x03BE;</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mrow><mml:msub><mml:mi>&#x03B3;</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:msup></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mo stretchy="false">|</mml:mo></mml:mrow><mml:mrow><mml:mi>&#x03C9;</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mrow><mml:msub><mml:mi>&#x03B3;</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow></mml:mrow><mml:mo stretchy="false">)</mml:mo><mml:mo>&#x2208;</mml:mo><mml:mo stretchy="false">[</mml:mo><mml:mrow><mml:mn>0</mml:mn><mml:mo>,</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mo stretchy="false">]</mml:mo></mml:mrow></mml:msub></mml:mrow></mml:mrow><mml:mo stretchy="false">)</mml:mo><mml:mo>.</mml:mo></mml:math></disp-formula>
Substituting <xref ref-type="disp-formula" rid="eqn-7">(7)</xref> into <xref ref-type="disp-formula" rid="eqn-1">(1)</xref> leads to
<disp-formula id="eqn-8"><label>(8)</label><mml:math id="mml-eqn-8" display="block"><mml:mi mathvariant="normal">&#x03A5;</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mrow><mml:msub><mml:mi>&#x03B3;</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow></mml:mrow><mml:mo stretchy="false">)</mml:mo><mml:mo>=</mml:mo><mml:mi>&#x03A9;</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mrow><mml:msub><mml:mi>&#x03B3;</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow></mml:mrow><mml:mo stretchy="false">)</mml:mo><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mfrac><mml:mrow><mml:mo stretchy="false">&#x2225;</mml:mo><mml:mi>&#x0394;</mml:mi><mml:mrow><mml:msub><mml:mi>p</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow><mml:mo>&#x2212;</mml:mo><mml:mi>&#x0394;</mml:mi><mml:mrow><mml:msub><mml:mi>q</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow><mml:mrow><mml:msup><mml:mo>&#x2225;</mml:mo><mml:mn>2</mml:mn></mml:msup></mml:mrow></mml:mrow><mml:mrow><mml:mrow><mml:msub><mml:mi>p</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow><mml:mo>&#x2212;</mml:mo><mml:mrow><mml:msub><mml:mi>q</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow></mml:mrow></mml:mfrac></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mo>+</mml:mo><mml:mrow><mml:msub><mml:mi>&#x03A8;</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mn>1</mml:mn><mml:mo>&#x2212;</mml:mo><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mfrac><mml:mrow><mml:mo 
stretchy="false">&#x2225;</mml:mo><mml:mi>&#x0394;</mml:mi><mml:mrow><mml:msub><mml:mi>p</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow><mml:mo>&#x2212;</mml:mo><mml:mi>&#x0394;</mml:mi><mml:mrow><mml:msub><mml:mi>q</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow><mml:mrow><mml:msup><mml:mo>&#x2225;</mml:mo><mml:mn>2</mml:mn></mml:msup></mml:mrow></mml:mrow><mml:mrow><mml:mrow><mml:msub><mml:mi>p</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow><mml:mo>&#x2212;</mml:mo><mml:mrow><mml:msub><mml:mi>q</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow></mml:mrow></mml:mfrac></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mo>.</mml:mo></mml:math></disp-formula>
</p>
<p><xref ref-type="disp-formula" rid="eqn-8">Eq. (8)</xref> shows that a major challenge of image dehazing is resolved: whereas the original formulation contained three unknowns, <xref ref-type="disp-formula" rid="eqn-8">(8)</xref> leaves only two, <inline-formula id="ieqn-40"><mml:math id="mml-ieqn-40"><mml:mrow><mml:msub><mml:mi>&#x03A8;</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow></mml:math></inline-formula> and <inline-formula id="ieqn-41"><mml:math id="mml-ieqn-41"><mml:mi>&#x03A5;</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mrow><mml:msub><mml:mi>&#x03B3;</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula>. However, <inline-formula id="ieqn-42"><mml:math id="mml-ieqn-42"><mml:mrow><mml:msub><mml:mi>&#x03A8;</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow></mml:math></inline-formula> can be estimated based on Retinex theory [<xref ref-type="bibr" rid="ref-12">12</xref>], which takes the atmospheric light from the brightest pixels as <inline-formula id="ieqn-43"><mml:math id="mml-ieqn-43"><mml:mrow><mml:msub><mml:mi>&#x03A8;</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow><mml:mo>=</mml:mo><mml:mrow><mml:mo stretchy="false">[</mml:mo><mml:mi>m</mml:mi><mml:mi>a</mml:mi><mml:mi>x</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:mi>R</mml:mi><mml:mo stretchy="false">)</mml:mo><mml:mo>,</mml:mo><mml:mi>m</mml:mi><mml:mi>a</mml:mi><mml:mi>x</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:mi>G</mml:mi><mml:mo stretchy="false">)</mml:mo><mml:mo>,</mml:mo><mml:mi>m</mml:mi><mml:mi>a</mml:mi><mml:mi>x</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:mi>B</mml:mi><mml:mo stretchy="false">)</mml:mo><mml:msup><mml:mo stretchy="false">]</mml:mo><mml:mi>t</mml:mi></mml:msup></mml:mrow></mml:math></inline-formula>, where R, G, B are the three color channels of the image and <italic>t</italic> regulates the weights of the colors.</p>
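The per-channel atmospheric-light estimate and the resulting inversion of the haze model can be sketched as follows; `invert_haze_model` assumes the standard formation model in which the hazy observation equals scene radiance times transmission plus atmospheric light times one minus transmission, and the clamping threshold `t_min` is an illustrative choice, not a value from the paper:

```python
import numpy as np

def atmospheric_light(hazy):
    """Psi = [max(R), max(G), max(B)]: the brightest value per colour channel."""
    return hazy.reshape(-1, 3).max(axis=0)

def invert_haze_model(hazy, transmission, psi, t_min=0.1):
    """Recover scene radiance from hazy = radiance*t + psi*(1 - t);
    the clamp t_min (illustrative) avoids amplifying noise where t is tiny."""
    t = np.clip(transmission, t_min, 1.0)[..., None]
    return np.clip((hazy - psi * (1.0 - t)) / t, 0.0, 1.0)

hazy = np.full((4, 4, 3), 0.6)
hazy[0, 0] = [0.9, 0.8, 0.7]                      # brightest pixel
psi = atmospheric_light(hazy)
clear = invert_haze_model(hazy, np.full((4, 4), 0.5), psi)
```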
<sec id="s4_2"><label>4.2</label><title>Global and Local Markov Random Fields</title>
<p>Scene depth changes gradually and entails variation in local and global neighborhood pixels. Accurate depth variation estimation therefore depends on features including color, texture, location, and shape, as well as on both the local and global neighborhood pixels between the haze and ground-truth images. This paper proposes that these are attainable via a novel energy function in the depth estimation network. The energy function is based on a novel global-local Markov chain already discussed in detail in [<xref ref-type="bibr" rid="ref-8">8</xref>]. The resultant energy function is optimized by the graph cut, as discussed in Zhu et al. [<xref ref-type="bibr" rid="ref-8">8</xref>]. However, in this model we use the color channel features as representative of both global and local color moments, as proposed by [<xref ref-type="bibr" rid="ref-27">27</xref>]. This contrasts with the super-pixel treatment of global and local neighborhoods presented in [<xref ref-type="bibr" rid="ref-1">1</xref>]. Thus, the ambient light used captures the connection between global and local pixels and super-pixels. The approach extends global and local consistency, which helps protect the proposed convolutional neural network from over-smoothing far-apart pixels. It also helps avoid color over-saturation and produces sharper boundaries. The relationship between the global and local neighborhood pixels and super-pixels is modeled via long- and short-range interactions, achieved by considering the global relationship between neighboring local pixels, as proposed by Song et al. [<xref ref-type="bibr" rid="ref-27">27</xref>]. The results are extended to the global and local pixels to map the relationship between the haze and ground truth images. The constructed Markov Random Fields have edge costs representing the consistency of neighboring pixels in overlapping regions, based on high-gradient boundaries.</p>
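To make the energy-minimization step concrete, the following toy example solves a tiny binary MRF on a 1-D row of pixels exactly with an s-t minimum cut (a pure-Python Edmonds-Karp max-flow). It illustrates the data-plus-smoothness energy that a graph cut optimizes, not the paper's actual global-local construction:

```python
from collections import deque

def _min_cut_source_side(cap, s, t):
    """Edmonds-Karp max-flow; returns the source side of the min s-t cut."""
    n = len(cap)
    flow = [[0.0] * n for _ in range(n)]
    while True:
        parent = [-1] * n
        parent[s] = s
        queue = deque([s])
        while queue:                          # BFS for an augmenting path
            u = queue.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] - flow[u][v] > 1e-12:
                    parent[v] = u
                    queue.append(v)
        if parent[t] == -1:
            break                             # no augmenting path: done
        path, v = [], t
        while v != s:
            path.append((parent[v], v))
            v = parent[v]
        push = min(cap[u][v] - flow[u][v] for u, v in path)
        for u, v in path:
            flow[u][v] += push
            flow[v][u] -= push
    side, queue = {s}, deque([s])             # residual reachability from s
    while queue:
        u = queue.popleft()
        for v in range(n):
            if v not in side and cap[u][v] - flow[u][v] > 1e-12:
                side.add(v)
                queue.append(v)
    return side

def binary_mrf_cut(data_cost, smooth=1.0):
    """Exactly minimise a toy binary MRF energy on a 1-D row of pixels:
    data_cost[i] = (cost of label 0, cost of label 1), and neighbouring
    pixels with different labels pay `smooth`."""
    n = len(data_cost)
    s, t = n, n + 1
    cap = [[0.0] * (n + 2) for _ in range(n + 2)]
    for i, (c0, c1) in enumerate(data_cost):
        cap[s][i] = c0        # paid when pixel i ends up with label 0
        cap[i][t] = c1        # paid when pixel i ends up with label 1
    for i in range(n - 1):
        cap[i][i + 1] = cap[i + 1][i] = smooth
    side = _min_cut_source_side(cap, s, t)
    return [1 if i in side else 0 for i in range(n)]

labels = binary_mrf_cut([(0.0, 2.0), (0.0, 2.0), (2.0, 0.0)], smooth=1.0)
```

Here the first two pixels prefer label 0 and the last prefers label 1; the cut pays one unit of smoothness at the single label boundary rather than overriding any data term.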
<p>The graph cut and parallax attention mechanism (PAM), which have already been proposed by Zhu et al. [<xref ref-type="bibr" rid="ref-8">8</xref>], help optimize the MRF. Furthermore, they protect against color over-saturation and sharpen boundaries. PAM helps estimate the correspondences between haze and ground truth pixel values [<xref ref-type="bibr" rid="ref-28">28</xref>]. It also helps compute occlusion maps and warp ground truth image features into the final dehazed image. PAM takes inputs from feature maps <inline-formula id="ieqn-44"><mml:math id="mml-ieqn-44"><mml:mi>F</mml:mi><mml:mrow><mml:msub><mml:mi>M</mml:mi><mml:mi>G</mml:mi></mml:msub></mml:mrow></mml:math></inline-formula> and <inline-formula id="ieqn-45"><mml:math id="mml-ieqn-45"><mml:mi>F</mml:mi><mml:mrow><mml:msub><mml:mi>M</mml:mi><mml:mi>L</mml:mi></mml:msub></mml:mrow></mml:math></inline-formula>, denoting global and local features, respectively (see <?A3B2 "fig4",5,"anchor"?><xref ref-type="fig" rid="fig-4">Fig. 4</xref>). <inline-formula id="ieqn-46"><mml:math id="mml-ieqn-46"><mml:mi>F</mml:mi><mml:mrow><mml:msub><mml:mi>M</mml:mi><mml:mi>G</mml:mi></mml:msub></mml:mrow><mml:mo>,</mml:mo><mml:mi>F</mml:mi><mml:mrow><mml:msub><mml:mi>M</mml:mi><mml:mi>L</mml:mi></mml:msub></mml:mrow><mml:mo>&#x2208;</mml:mo><mml:mrow><mml:msup><mml:mi>R</mml:mi><mml:mrow><mml:mi>R</mml:mi><mml:mo>,</mml:mo><mml:mi>G</mml:mi><mml:mo>,</mml:mo><mml:mi>B</mml:mi></mml:mrow></mml:msup></mml:mrow></mml:math></inline-formula> represent the color channels from the feature extraction based on pixel information. 
The PAM begins with two residual blocks with shared weights that adapt the input features for transmission estimation and generate the feature maps <inline-formula id="ieqn-47"><mml:math id="mml-ieqn-47"><mml:mi>F</mml:mi><mml:mrow><mml:msub><mml:mi>M</mml:mi><mml:mrow><mml:mi>G</mml:mi><mml:mn>0</mml:mn></mml:mrow></mml:msub></mml:mrow></mml:math></inline-formula> and <inline-formula id="ieqn-48"><mml:math id="mml-ieqn-48"><mml:mi>F</mml:mi><mml:mrow><mml:msub><mml:mi>M</mml:mi><mml:mrow><mml:mi>L</mml:mi><mml:mn>0</mml:mn></mml:mrow></mml:msub></mml:mrow></mml:math></inline-formula>. This helps optimize the training process and avoid training conflicts [<xref ref-type="bibr" rid="ref-29">29</xref>]. A <inline-formula id="ieqn-49"><mml:math id="mml-ieqn-49"><mml:mn>1</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>1</mml:mn></mml:math></inline-formula> convolution layer converts <inline-formula id="ieqn-50"><mml:math id="mml-ieqn-50"><mml:mi>F</mml:mi><mml:mrow><mml:msub><mml:mi>M</mml:mi><mml:mrow><mml:mi>G</mml:mi><mml:mn>0</mml:mn></mml:mrow></mml:msub></mml:mrow></mml:math></inline-formula> into a query feature map <inline-formula id="ieqn-51"><mml:math id="mml-ieqn-51"><mml:mi>Q</mml:mi><mml:mi>F</mml:mi><mml:mi>M</mml:mi><mml:mo>&#x2208;</mml:mo><mml:mrow><mml:msup><mml:mrow><mml:mrow><mml:mrow><mml:mi mathvariant="fraktur">R</mml:mi></mml:mrow></mml:mrow></mml:mrow><mml:mrow><mml:mi>R</mml:mi><mml:mo>,</mml:mo><mml:mi>G</mml:mi><mml:mo>,</mml:mo><mml:mi>B</mml:mi></mml:mrow></mml:msup></mml:mrow></mml:math></inline-formula>, and another <inline-formula id="ieqn-52"><mml:math id="mml-ieqn-52"><mml:mn>1</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>1</mml:mn></mml:math></inline-formula> layer converts <inline-formula id="ieqn-53"><mml:math id="mml-ieqn-53"><mml:mi>F</mml:mi><mml:mrow><mml:msub><mml:mi>M</mml:mi><mml:mrow><mml:mi>L</mml:mi><mml:mn>0</mml:mn></mml:mrow></mml:msub></mml:mrow></mml:math></inline-formula> into a feature map <inline-formula 
id="ieqn-54"><mml:math id="mml-ieqn-54"><mml:mi>F</mml:mi><mml:mi>M</mml:mi><mml:mo>&#x2208;</mml:mo><mml:mrow><mml:msup><mml:mrow><mml:mrow><mml:mrow><mml:mi mathvariant="fraktur">R</mml:mi></mml:mrow></mml:mrow></mml:mrow><mml:mrow><mml:mi>R</mml:mi><mml:mo>,</mml:mo><mml:mi>G</mml:mi><mml:mo>,</mml:mo><mml:mi>B</mml:mi></mml:mrow></mml:msup></mml:mrow></mml:math></inline-formula> which is reshaped to <inline-formula id="ieqn-55"><mml:math id="mml-ieqn-55"><mml:mrow><mml:msup><mml:mrow><mml:mrow><mml:mrow><mml:mi mathvariant="fraktur">R</mml:mi></mml:mrow></mml:mrow></mml:mrow><mml:mrow><mml:mi>R</mml:mi><mml:mo>,</mml:mo><mml:mi>G</mml:mi><mml:mo>,</mml:mo><mml:mi>B</mml:mi></mml:mrow></mml:msup></mml:mrow></mml:math></inline-formula>, a feature map depending on the shared global and local features of the haze and ground truth image. <inline-formula id="ieqn-56"><mml:math id="mml-ieqn-56"><mml:mi>Q</mml:mi><mml:mi>F</mml:mi><mml:mi>M</mml:mi></mml:math></inline-formula> and <inline-formula id="ieqn-57"><mml:math id="mml-ieqn-57"><mml:mi>F</mml:mi><mml:mi>M</mml:mi></mml:math></inline-formula> are multiplied and graph cut with softmax (see <?A3B2 "fig6",5,"anchor"?><xref ref-type="fig" rid="fig-6">Fig. 6</xref>). The results are then applied to obtain a parallax attention map <inline-formula id="ieqn-58"><mml:math id="mml-ieqn-58"><mml:mrow><mml:msub><mml:mi>M</mml:mi><mml:mrow><mml:mi>F</mml:mi><mml:mrow><mml:msub><mml:mi>M</mml:mi><mml:mi>G</mml:mi></mml:msub></mml:mrow><mml:mo stretchy="false">&#x2192;</mml:mo><mml:mi>F</mml:mi><mml:mrow><mml:msub><mml:mi>M</mml:mi><mml:mi>L</mml:mi></mml:msub></mml:mrow></mml:mrow></mml:msub></mml:mrow><mml:mo>&#x2208;</mml:mo><mml:mrow><mml:msup><mml:mrow><mml:mrow><mml:mrow><mml:mi mathvariant="fraktur">R</mml:mi></mml:mrow></mml:mrow></mml:mrow><mml:mrow><mml:mi>R</mml:mi><mml:mo>,</mml:mo><mml:mi>G</mml:mi><mml:mo>,</mml:mo><mml:mi>B</mml:mi></mml:mrow></mml:msup></mml:mrow></mml:math></inline-formula>. 
<inline-formula id="ieqn-59"><mml:math id="mml-ieqn-59"><mml:mrow><mml:msub><mml:mi>M</mml:mi><mml:mrow><mml:mi>F</mml:mi><mml:mrow><mml:msub><mml:mi>M</mml:mi><mml:mi>L</mml:mi></mml:msub></mml:mrow><mml:mo stretchy="false">&#x2192;</mml:mo><mml:mi>F</mml:mi><mml:mrow><mml:msub><mml:mi>M</mml:mi><mml:mi>G</mml:mi></mml:msub></mml:mrow></mml:mrow></mml:msub></mml:mrow></mml:math></inline-formula> is seen as a cost matrix encoding the correspondence along with pixel correlations between the haze and ground truth images. The proceeding step sees <inline-formula id="ieqn-60"><mml:math id="mml-ieqn-60"><mml:mi>F</mml:mi><mml:mrow><mml:msub><mml:mi>M</mml:mi><mml:mi>L</mml:mi></mml:msub></mml:mrow></mml:math></inline-formula> processed by <inline-formula id="ieqn-61"><mml:math id="mml-ieqn-61"><mml:mn>1</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>1</mml:mn></mml:math></inline-formula> convolution layer to obtain <inline-formula id="ieqn-62"><mml:math id="mml-ieqn-62"><mml:mi>R</mml:mi><mml:mo>&#x2208;</mml:mo><mml:mrow><mml:msup><mml:mrow><mml:mrow><mml:mrow><mml:mi mathvariant="fraktur">R</mml:mi></mml:mrow></mml:mrow></mml:mrow><mml:mrow><mml:mi>R</mml:mi><mml:mo>,</mml:mo><mml:mi>G</mml:mi><mml:mo>,</mml:mo><mml:mi>B</mml:mi></mml:mrow></mml:msup></mml:mrow></mml:math></inline-formula>, which is multiplied by <inline-formula id="ieqn-63"><mml:math id="mml-ieqn-63"><mml:mrow><mml:msub><mml:mi>M</mml:mi><mml:mrow><mml:mi>F</mml:mi><mml:mrow><mml:msub><mml:mi>M</mml:mi><mml:mi>G</mml:mi></mml:msub></mml:mrow><mml:mo stretchy="false">&#x2192;</mml:mo><mml:mi>F</mml:mi><mml:mrow><mml:msub><mml:mi>M</mml:mi><mml:mi>L</mml:mi></mml:msub></mml:mrow></mml:mrow></mml:msub></mml:mrow></mml:math></inline-formula> to generate <inline-formula id="ieqn-64"><mml:math id="mml-ieqn-64"><mml:mi>O</mml:mi><mml:mo>&#x2208;</mml:mo><mml:mrow><mml:msup><mml:mrow><mml:mrow><mml:mrow><mml:mi 
mathvariant="fraktur">R</mml:mi></mml:mrow></mml:mrow></mml:mrow><mml:mrow><mml:mi>R</mml:mi><mml:mo>,</mml:mo><mml:mi>G</mml:mi><mml:mo>,</mml:mo><mml:mi>B</mml:mi></mml:mrow></mml:msup></mml:mrow></mml:math></inline-formula> (the warping of <inline-formula id="ieqn-65"><mml:math id="mml-ieqn-65"><mml:mi>F</mml:mi><mml:mrow><mml:msub><mml:mi>M</mml:mi><mml:mi>L</mml:mi></mml:msub></mml:mrow></mml:math></inline-formula> into <inline-formula id="ieqn-66"><mml:math id="mml-ieqn-66"><mml:mi>F</mml:mi><mml:mrow><mml:msub><mml:mi>M</mml:mi><mml:mi>G</mml:mi></mml:msub></mml:mrow></mml:math></inline-formula>). PAM also helps estimate occlusion maps, <inline-formula id="ieqn-67"><mml:math id="mml-ieqn-67"><mml:mrow><mml:msub><mml:mi>V</mml:mi><mml:mrow><mml:mi>F</mml:mi><mml:mrow><mml:msub><mml:mi>M</mml:mi><mml:mi>L</mml:mi></mml:msub></mml:mrow><mml:mo stretchy="false">&#x2192;</mml:mo><mml:mi>F</mml:mi><mml:mrow><mml:msub><mml:mi>M</mml:mi><mml:mi>G</mml:mi></mml:msub></mml:mrow></mml:mrow></mml:msub></mml:mrow></mml:math></inline-formula>, which refine the transmission medium between the ground truth and haze images. During estimation of the occlusion map, a second attention map <inline-formula id="ieqn-68"><mml:math id="mml-ieqn-68"><mml:mrow><mml:msub><mml:mi>M</mml:mi><mml:mrow><mml:mi>F</mml:mi><mml:mrow><mml:msub><mml:mi>M</mml:mi><mml:mi>L</mml:mi></mml:msub></mml:mrow><mml:mo stretchy="false">&#x2192;</mml:mo><mml:mi>F</mml:mi><mml:mrow><mml:msub><mml:mi>M</mml:mi><mml:mi>G</mml:mi></mml:msub></mml:mrow></mml:mrow></mml:msub></mml:mrow></mml:math></inline-formula> is estimated by exchanging <inline-formula id="ieqn-69"><mml:math id="mml-ieqn-69"><mml:mi>F</mml:mi><mml:mrow><mml:msub><mml:mi>M</mml:mi><mml:mi>G</mml:mi></mml:msub></mml:mrow></mml:math></inline-formula> and <inline-formula id="ieqn-70"><mml:math id="mml-ieqn-70"><mml:mi>F</mml:mi><mml:mrow><mml:msub><mml:mi>M</mml:mi><mml:mi>L</mml:mi></mml:msub></mml:mrow></mml:math></inline-formula>. 
Further details on the occlusion maps, and on the functionality of PAM and its applicability in image processing, are presented in [<xref ref-type="bibr" rid="ref-30">30</xref>]. The graph cut is illustrated by Zhu et al. [<xref ref-type="bibr" rid="ref-8">8</xref>] and discussed extensively in [<xref ref-type="bibr" rid="ref-9">9</xref>,<xref ref-type="bibr" rid="ref-31">31</xref>]. Its two main components are a data term and a regularization term [<xref ref-type="bibr" rid="ref-32">32</xref>]. The data term assesses compliance with the image data, such as image features, while the regularization term refines the boundaries between areas of differing conformity.</p>
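A NumPy sketch of the attention computation inside PAM: since a 1 × 1 convolution acts as a per-pixel linear map over channels, it is modelled here by matrix multiplication with random weights (an illustrative assumption; in the actual network these maps are learned):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def parallax_attention(fm_g, fm_l, seed=0):
    """Sketch of the PAM attention step for global features fm_g (n_g
    pixels x c channels) and local features fm_l (n_l x c).  The random
    matrices stand in for learned 1x1 convolutions."""
    c = fm_g.shape[1]
    rng = np.random.default_rng(seed)
    w_q = rng.standard_normal((c, c))   # stands in for the 1x1 conv on the global map
    w_k = rng.standard_normal((c, c))   # stands in for the 1x1 conv on the local map
    w_r = rng.standard_normal((c, c))   # stands in for the 1x1 conv producing R
    q = fm_g @ w_q                      # query feature map
    k = fm_l @ w_k                      # key feature map
    attn = softmax(q @ k.T, axis=1)     # attention map: each row sums to 1
    return attn, attn @ (fm_l @ w_r)    # warped local features

attn, warped = parallax_attention(np.ones((4, 3)), np.ones((5, 3)))
```

Each row of `attn` is a distribution over local pixels, which is how the mechanism encodes correspondence costs between the haze and ground truth features.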
</sec>
</sec>
<sec id="s5"><label>5</label><title>Experiments</title>
<sec id="s5_1"><label>5.1</label><title>Data and Implementation</title>
<p>The proposed technique (summarized in <?A3B2 "fig3",5,"anchor"?><xref ref-type="fig" rid="fig-3">Fig. 3</xref> and detailed in <xref ref-type="fig" rid="fig-2">Figs. 2</xref> and <xref ref-type="fig" rid="fig-6">6</xref>) was applied to various images (presented in <xref ref-type="fig" rid="fig-4">Figs. 4</xref>, <?A3B2 "fig5",5,"anchor"?><xref ref-type="fig" rid="fig-5">5</xref> and <?A3B2 "fig7",5,"anchor"?><?A3B2 "fig8",5,"anchor"?><?A3B2 "fig9",5,"anchor"?><?A3B2 "fig10",5,"anchor"?><?A3B2 "fig11",5,"anchor"?><?A3B2 "fig12",5,"anchor"?><xref ref-type="fig" rid="fig-7 fig-8 fig-9 fig-10 fig-11 fig-12">7&#x2013;12</xref>) obtained from different databases. These images were resized to reduce computational complexity. The images presented in <xref ref-type="fig" rid="fig-4">Figs. 4</xref>, <xref ref-type="fig" rid="fig-5">5</xref> and <xref ref-type="fig" rid="fig-7 fig-8 fig-9 fig-10 fig-11 fig-12">7&#x2013;12</xref> are examples from the dataset of 56 samples used in the experiment. The performance metrics presented in <?A3B2 "tbl2",5,"anchor"?><xref ref-type="table" rid="table-2">Tab. 2</xref> are constructed from the results, whose parameter values are given in <?A3B2 "tbl1",5,"anchor"?><xref ref-type="table" rid="table-1">Tab. 1</xref>. We used a total of <inline-formula id="ieqn-71"><mml:math id="mml-ieqn-71"><mml:mn>24640</mml:mn></mml:math></inline-formula> images to train the network, using 440 partitions from the 56 image samples. We validated the network results with <inline-formula id="ieqn-72"><mml:math id="mml-ieqn-72"><mml:mn>11000</mml:mn></mml:math></inline-formula> images, generated as simulated clear images from the images presented in <xref ref-type="fig" rid="fig-4">Figs. 4</xref>, <xref ref-type="fig" rid="fig-5">5</xref> and <xref ref-type="fig" rid="fig-7 fig-8 fig-9 fig-10 fig-11 fig-12">7&#x2013;12</xref>. We extracted images (see <xref ref-type="fig" rid="fig-9">Fig. 9</xref> (validation images)) from regions with rich textures; the quality could therefore be compromised for this set of results due to the absence of ground truth to validate the images. We constructed the final image outputs from the 440 partitioned images to yield the results presented in <xref ref-type="fig" rid="fig-4">Figs. 4</xref>, <xref ref-type="fig" rid="fig-5">5</xref> and <xref ref-type="fig" rid="fig-7 fig-8 fig-9 fig-10 fig-11 fig-12">7&#x2013;12</xref>. The partitioning helped organize images into patches of similar local and global neighborhoods for the corresponding haze and ground truth images. A BIZON X5000 G2 PC with 16 GB of RAM was used to train the proposed dehazing network.</p>
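The patch partitioning used to pair haze and ground-truth neighborhoods can be sketched as follows; the non-overlapping tiling and the border handling are our illustrative assumptions, not the paper's exact partitioning scheme:

```python
import numpy as np

def partition_patches(image, patch):
    """Split an image into non-overlapping patch x patch tiles so haze
    and ground-truth images can be paired patch-by-patch; any ragged
    border that does not fill a full tile is dropped."""
    h, w = image.shape[:2]
    h, w = h - h % patch, w - w % patch
    tiles = [image[i:i + patch, j:j + patch]
             for i in range(0, h, patch)
             for j in range(0, w, patch)]
    return np.stack(tiles)

patches = partition_patches(np.zeros((10, 12)), 4)  # 2 x 3 full tiles
```

Applying the same function with identical parameters to a hazy image and its ground truth yields aligned patch pairs for training.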
</sec>
<sec id="s5_2"><label>5.2</label><title>Evaluation Metrics</title>
<p>The proposed method&#x0027;s performance was evaluated using five image quality criteria: (i) entropy [<xref ref-type="bibr" rid="ref-33">33</xref>]; (ii) e (visible edges) [<xref ref-type="bibr" rid="ref-11">11</xref>]; (iii) r (edge preservation performance) [<xref ref-type="bibr" rid="ref-11">11</xref>]; (iv) contrast; and (v) homogeneity [<xref ref-type="bibr" rid="ref-28">28</xref>]. These criteria were chosen based on the proposed method&#x0027;s objectives: improving information content, measuring human visual quality and textural features, and comparing the similarity between the dehazed image and the ground truth.</p>
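Three of these criteria can be sketched directly in NumPy: entropy from the grey-level histogram, and contrast/homogeneity from a grey-level co-occurrence matrix (restricted here to horizontal neighbours for brevity; libraries such as scikit-image compute the full multi-offset versions, and the e and r metrics of the cited work are not reproduced here):

```python
import numpy as np

def entropy(gray, levels=256):
    """Shannon entropy of the grey-level histogram (information content)."""
    hist, _ = np.histogram(gray, bins=levels, range=(0, levels))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def glcm_contrast_homogeneity(gray01, levels=8):
    """Contrast and homogeneity from a grey-level co-occurrence matrix
    over horizontal neighbours; gray01 holds values in [0, 1]."""
    g = np.clip((gray01 * (levels - 1)).round().astype(int), 0, levels - 1)
    glcm = np.zeros((levels, levels))
    for a, b in zip(g[:, :-1].ravel(), g[:, 1:].ravel()):
        glcm[a, b] += 1.0                      # count co-occurring pairs
    glcm /= glcm.sum()                         # normalise to probabilities
    i, j = np.indices(glcm.shape)
    contrast = float((glcm * (i - j) ** 2).sum())
    homogeneity = float((glcm / (1.0 + np.abs(i - j))).sum())
    return contrast, homogeneity
```

A perfectly uniform image has zero entropy, zero contrast, and homogeneity of one, which gives a quick sanity check for the implementations.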
<table-wrap id="table-1"><label>Table 1</label><caption><title>Values obtained and used during the experiment for the proposed dehazing algorithm</title></caption>
<table frame="hsides">
<colgroup>
<col align="left"/>
<col align="left"/>
</colgroup>
<thead>
<tr>
<th align="left">Item</th>
<th align="left">Experimental value range</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left">Accuracy</td>
<td align="left">96.32%&#x02013;99.05%</td>
</tr>
<tr>
<td align="left">Training time</td>
<td align="left"><inline-formula id="ieqn-99"><mml:math id="mml-ieqn-99"><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>32</mml:mn><mml:mrow><mml:mtext> min</mml:mtext></mml:mrow><mml:mspace width="thickmathspace" /><mml:mn>50</mml:mn><mml:mspace width="thickmathspace" /><mml:mi>s</mml:mi><mml:mi>e</mml:mi><mml:mi>c</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo><mml:mo>&#x2212;</mml:mo><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>45</mml:mn><mml:mrow><mml:mtext> min</mml:mtext></mml:mrow><mml:mspace width="thickmathspace" /><mml:mn>27</mml:mn><mml:mspace width="thickmathspace" /><mml:mi>s</mml:mi><mml:mi>e</mml:mi><mml:mi>c</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula></td>
</tr>
<tr>
<td align="left">Learning rate</td>
<td align="left">0.095&#x02013;0.015</td>
</tr>
<tr>
<td align="left">Validation frequency</td>
<td align="left">230&#x02013;590</td>
</tr>
<tr>
<td align="left">Iterations</td>
<td align="left">10500&#x02013;16720</td>
</tr>
<tr>
<td align="left">Epoch</td>
<td align="left">560&#x02013;640</td>
</tr>
<tr>
<td align="left">Estimated momentum</td>
<td align="left">0.85&#x02013;0.92 [<xref ref-type="bibr" rid="ref-34">34</xref>]</td>
</tr>
<tr>
<td align="left">Estimated <inline-formula id="ieqn-105"><mml:math id="mml-ieqn-105"><mml:mrow><mml:msub><mml:mi>&#x03BB;</mml:mi><mml:mi>g</mml:mi></mml:msub></mml:mrow></mml:math></inline-formula></td>
<td align="left">98.5 [<xref ref-type="bibr" rid="ref-34">34</xref>]</td>
</tr>
</tbody>
</table>
</table-wrap>
<p><xref ref-type="fig" rid="fig-6">Fig. 6a</xref> comprises the input, encoder, and decoder. The encoder consists of convolutional neural networks that extract global and local features from the hazy images and compare them with the corresponding features of the ground truth images. The decoder functions like the encoder except for its residual functions, which contain PAM with graph cut (see <xref ref-type="fig" rid="fig-6">Figs. 6b</xref> and <xref ref-type="fig" rid="fig-6">6c</xref>). The residual decoder function permits full connection with other neurons, thus enhancing the learning rate and merging the training models. <xref ref-type="fig" rid="fig-6">Fig. 6c</xref> shows the graph built to minimize the energy function. The graph consists of nodes corresponding to image pixels and pixel labels, with pixels weighted according to their labels. The cut yields the pixel-label configuration, based on the haze and ground truth images, for which the energy is minimal.</p>
<fig id="fig-2">
<label>Figure 2</label><caption><title>The schematic detail shows the proposed architecture with seven neurons in the second hidden layer, eight neurons in the third hidden layer, and a single output. The series contains alternating global and local feature extraction before full connection and PAM via graph cut to obtain the final dehazed image</title></caption><graphic mimetype="image" mime-subtype="png" xlink:href="CMC_23339-fig-2.png"/></fig>
<fig id="fig-3">
<label>Figure 3</label><caption><title>A detail of the proposed image dehazing using ground truth-based geometric-pixel guided CNN with PAM via graph cut</title></caption><graphic mimetype="image" mime-subtype="png" xlink:href="CMC_23339-fig-3.png"/></fig>
</sec>
</sec>
<sec id="s5_3"><label>5.3</label><title>Results Analysis and Comparison</title>
<fig id="fig-4"><label>Figure 4</label><caption><title>Comparison of the effect of the proposed energy function on image features: (a) Tan [<xref ref-type="bibr" rid="ref-33">33</xref>], (b) Zhu et al. [<xref ref-type="bibr" rid="ref-8">8</xref>], and the proposed method in the last column. The proposed method visibly extracts extra features in the dehazed image compared with the existing dehaze-cut methods&#x0027; results ([<xref ref-type="bibr" rid="ref-35">35</xref>] and [<xref ref-type="bibr" rid="ref-8">8</xref>])</title></caption><graphic mimetype="image" mime-subtype="png" xlink:href="CMC_23339-fig-4.png"/>
</fig>
<fig id="fig-5"><label>Figure 5</label><caption><title>Comparison showing dehaze-cut results (the first image) and results from [<xref ref-type="bibr" rid="ref-8">8</xref>] (the second image) as well as the proposed method&#x0027;s result (the last image). The red and green patches show the effectiveness of the proposed method in terms of detailed information in comparison with the existing dehaze-cut methods [<xref ref-type="bibr" rid="ref-35">35</xref>] and [<xref ref-type="bibr" rid="ref-8">8</xref>]</title></caption><graphic mimetype="image" mime-subtype="png" xlink:href="CMC_23339-fig-5.png"/>
</fig>
<fig id="fig-6"><label>Figure 6</label><caption><title>(a) Detailed CNN for the proposed dehazing method, with the encoder and decoder, which are similar except in the residual phase. The residual function ensures that each hidden neuron is fully connected, enhances the learning rate, and merges the training data set models. (b) The dense-residual phase is composed of softmax, which feeds information to (c) the PAM via graph-cut algorithm, which conforms image features and smooths the boundaries of varying conformity areas between the corresponding haze and ground truth images</title></caption><graphic mimetype="image" mime-subtype="png" xlink:href="CMC_23339-fig-6.png"/>
</fig>
</sec>
<sec id="s5_4"><label>5.4</label><title>Quantitative Comparison</title>
<fig id="fig-7">
<label>Figure 7</label><caption><title>Summary of the test comparison with extracted synthetic images from O&#x0027;Shea et al. [<xref ref-type="bibr" rid="ref-21">21</xref>] showing the original haze image in the first column, and dehazing results from <inline-formula id="ieqn-82"><mml:math id="mml-ieqn-82"><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mtext>a</mml:mtext></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula> Fattal et al. [<xref ref-type="bibr" rid="ref-5">5</xref>], <inline-formula id="ieqn-83"><mml:math id="mml-ieqn-83"><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mtext>b</mml:mtext></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula> Barman et al. [<xref ref-type="bibr" rid="ref-6">6</xref>], <inline-formula id="ieqn-84"><mml:math id="mml-ieqn-84"><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mtext>c</mml:mtext></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula> Zhu et al. [<xref ref-type="bibr" rid="ref-36">36</xref>], <inline-formula id="ieqn-85"><mml:math id="mml-ieqn-85"><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mtext>d</mml:mtext></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula> Sener et al. [<xref ref-type="bibr" rid="ref-37">37</xref>], <inline-formula id="ieqn-86"><mml:math id="mml-ieqn-86"><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mtext>e</mml:mtext></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula> Ancuti et al. [<xref ref-type="bibr" rid="ref-4">4</xref>], <inline-formula id="ieqn-87"><mml:math id="mml-ieqn-87"><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mtext>f</mml:mtext></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula> Meng et al. [<xref ref-type="bibr" rid="ref-31">31</xref>], <inline-formula id="ieqn-88"><mml:math id="mml-ieqn-88"><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mtext>g</mml:mtext></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula> O&#x0027;Shea et al. [<xref ref-type="bibr" rid="ref-21">21</xref>], and <inline-formula id="ieqn-89"><mml:math id="mml-ieqn-89"><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mtext>h</mml:mtext></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula> results from the proposed algorithm, along with the ground truth T in the last column</title></caption><graphic mimetype="image" mime-subtype="png" xlink:href="CMC_23339-fig-7.png"/>
</fig>
<fig id="fig-8"><label>Figure 8</label><caption><title>Summary of the test comparison showing the original haze image in the first column followed by the results from multilayer perceptron [<xref ref-type="bibr" rid="ref-36">36</xref>], residual-based dehazing method [<xref ref-type="bibr" rid="ref-37">37</xref>], the results from the proposed algorithm in the second-to-last column, and the ground truth in the last column</title></caption><graphic mimetype="image" mime-subtype="png" xlink:href="CMC_23339-fig-8.png"/></fig>
<fig id="fig-9"><label>Figure 9</label><caption><title>Summary of the test comparison with natural images showing the original haze image in the first column, and dehazing results from <inline-formula id="ieqn-90"><mml:math id="mml-ieqn-90"><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mtext>a</mml:mtext></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula> Fattal et al. [<xref ref-type="bibr" rid="ref-5">5</xref>], <inline-formula id="ieqn-91"><mml:math id="mml-ieqn-91"><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mtext>b</mml:mtext></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula> Barman et al. [<xref ref-type="bibr" rid="ref-6">6</xref>], <inline-formula id="ieqn-92"><mml:math id="mml-ieqn-92"><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mtext>c</mml:mtext></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula> Meng et al. [<xref ref-type="bibr" rid="ref-31">31</xref>], <inline-formula id="ieqn-93"><mml:math id="mml-ieqn-93"><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mtext>d</mml:mtext></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula> Zhu et al. [<xref ref-type="bibr" rid="ref-38">38</xref>], <inline-formula id="ieqn-94"><mml:math id="mml-ieqn-94"><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mtext>e</mml:mtext></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula> He et al. [<xref ref-type="bibr" rid="ref-19">19</xref>], <inline-formula id="ieqn-95"><mml:math id="mml-ieqn-95"><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mtext>f</mml:mtext></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula> Li et al. [<xref ref-type="bibr" rid="ref-39">39</xref>], <inline-formula id="ieqn-96"><mml:math id="mml-ieqn-96"><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mtext>g</mml:mtext></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula> O&#x0027;Shea et al. [<xref ref-type="bibr" rid="ref-21">21</xref>], and <inline-formula id="ieqn-97"><mml:math id="mml-ieqn-97"><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mtext>h</mml:mtext></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula> results from the proposed algorithm in the last column. The patches marked red are the regions of the assumed ground truth used for the purposes of training the proposed method</title></caption><graphic mimetype="image" mime-subtype="png" xlink:href="CMC_23339-fig-9.png"/></fig>
</sec>
<sec id="s5_5"><label>5.5</label><title>Comparison Analysis</title>
<p>In all the cases (see <xref ref-type="table" rid="table-2">Tab. 2</xref>), the images produced by the proposed algorithm demonstrated, on average, higher entropy, e, r, contrast, and homogeneity. This suggests that the proposed method yields a dehazed image with improved information content, better visibility, and better texture than existing methods (7&#x2013;12). The textural properties of the proposed method are compared with those of the state-of-the-art methods in <xref ref-type="fig" rid="fig-11">Figs. 11</xref> and <xref ref-type="fig" rid="fig-12">12</xref>. The difference in the textures in <xref ref-type="fig" rid="fig-11">Figs. 11</xref> and <xref ref-type="fig" rid="fig-12">12</xref> shows that combining PAM via graph cut and a CNN with a modified energy function and pixel-guided transmission based on the &#x2018;assumed ground truth&#x2019; ultimately yields a better dehazed image. The true ground truth and &#x2018;assumed ground truth&#x2019; inform the pixel reconstruction, yielding an image with markedly better color correction and a visible blue sky (see the proposed results in (h), the last column of <xref ref-type="fig" rid="fig-10">Fig. 10</xref>). A further visual inspection of patched sections of the proposed results in <xref ref-type="fig" rid="fig-11">Figs. 11</xref> and <xref ref-type="fig" rid="fig-12">12</xref> compared to the existing methods reveals its strengths and weaknesses.</p>
<p>The proposed method&#x0027;s major strength lies in its capacity to extract more details in the dehazed images (see the blue patches in <xref ref-type="fig" rid="fig-12">Fig. 12</xref>). The areas marked blue tend to have more detail than those in Zhu et al. [<xref ref-type="bibr" rid="ref-34">34</xref>]. The extra information can be credited to the proposed pixel differential-based transmittance medium, which emphasizes the pixel differences of the global and local patches. This explains the addition of some tree leaves in the patched sections. The approximation of the transmittance medium via local and global pixels within the image neighborhood distinguishes regions, resulting in more information extraction.</p>
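The local/global pixel-difference estimate of the transmittance medium described above can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the patch size, the equal-weight blend of the local and global terms, and the function name `estimate_transmission` are all assumptions made for the sketch.

```python
import numpy as np

def estimate_transmission(haze, truth, patch=15, eps=1e-6):
    """Sketch of a pixel-difference-based transmittance estimate.

    Blends a local patch-wise mean difference with the global mean
    difference between the haze image and its (assumed) ground truth.
    Inputs are 2-D grayscale arrays with values in [0, 1].
    """
    haze = haze.astype(np.float64)
    truth = truth.astype(np.float64)

    # Global pixel difference between the haze and ground-truth images.
    global_diff = abs(haze.mean() - truth.mean())

    # Local pixel difference averaged over each patch neighborhood.
    h, w = haze.shape
    local = np.zeros_like(haze)
    r = patch // 2
    for i in range(h):
        for j in range(w):
            hs = haze[max(i - r, 0):i + r + 1, max(j - r, 0):j + r + 1]
            ts = truth[max(i - r, 0):i + r + 1, max(j - r, 0):j + r + 1]
            local[i, j] = abs(hs.mean() - ts.mean())

    # Blend local and global differences into a transmission map in (0, 1]:
    # large haze/truth discrepancies imply low transmission.
    t = 1.0 - 0.5 * (local + global_diff)
    return np.clip(t, eps, 1.0)
```

The resulting map t would then drive a standard scene-radiance recovery of the form J = (I − A)/t + A for atmospheric light A.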
<fig id="fig-10"><label>Figure 10</label><caption><title>Summary of hazy images used in the paper: (a) input image, (b) Zhu et al. [<xref ref-type="bibr" rid="ref-34">34</xref>], and (c) proposed results. The red patches (d), (e), and (f) represent the regions assumed as ground truth and used for training in the proposed method. The blue patches show the visible differences between the proposed method (i), the corresponding input region (g), and the existing state-of-the-art method (h) of Zhu et al. [<xref ref-type="bibr" rid="ref-34">34</xref>]</title></caption><graphic mimetype="image" mime-subtype="png" xlink:href="CMC_23339-fig-10.png"/>
</fig>
<fig id="fig-11"><label>Figure 11</label><caption><title>Summary of the proposed method&#x0027;s strength compared to the existing state-of-the-art method of Sener et al. [<xref ref-type="bibr" rid="ref-35">35</xref>]. The proposed method preserves light and gives results almost identical to the ground truth. The green patches show that the Sener et al. [<xref ref-type="bibr" rid="ref-35">35</xref>] method tends to exaggerate light, indicating retention of most of the haze particles. The proposed method in the middle column, although it appears darker, has better visibility than Sener et al. [<xref ref-type="bibr" rid="ref-35">35</xref>]</title></caption><graphic mimetype="image" mime-subtype="png" xlink:href="CMC_23339-fig-11.png"/></fig>
<fig id="fig-12"><label>Figure 12</label><caption><title>Summary of hazy images used in the paper: (a) input image, (b) Yousaf et al. [<xref ref-type="bibr" rid="ref-23">23</xref>], and (c) proposed results. The red patches (d), (e), and (f) represent the regions assumed as ground truth and used for training in the proposed method. The blue patches show the visible differences between the proposed method (i), the corresponding input region (g), and the existing state-of-the-art method (h) of Yousaf et al. [<xref ref-type="bibr" rid="ref-23">23</xref>]</title></caption><graphic mimetype="image" mime-subtype="png" xlink:href="CMC_23339-fig-12.png"/></fig>
<p>The visual inspection of patched sections of the proposed results in <xref ref-type="fig" rid="fig-10">Fig. 10</xref> compared to the existing methods reveals its weakness. While the proposed method focuses on extracting finer details of the dehazed images (see blue patches), the regions with excess light still retain some light and hence carry less information (see also <xref ref-type="fig" rid="fig-5">Fig. 5</xref>). The areas marked red and black, for instance, tend to blur over the entire region compared to those in Zhu et al. [<xref ref-type="bibr" rid="ref-34">34</xref>] and the input image. This is attributed to the proposed pixel differential-based transmittance medium&#x0027;s reliance on the assumed ground truth, which is not exact. The contrary holds in areas where real ground truth exists, such as in the simulated image results presented in <xref ref-type="fig" rid="fig-7">Figs. 7</xref> and <xref ref-type="fig" rid="fig-8">8</xref>. The pixel difference of the global and local patches between the haze and ground truth images works, but it treats features with similar pixels within a region as noise, causing blur under the &#x2018;assumed ground truth&#x2019; (see <xref ref-type="fig" rid="fig-10">Figs. 10</xref> and <xref ref-type="fig" rid="fig-12">12</xref>) while correctly extracting details under real ground truth (see <xref ref-type="fig" rid="fig-11">Fig. 11</xref>). The estimation of the transmittance medium via local and global pixels within the haze and ground-truth image neighborhoods distinguishes regions with similar traits, leading to the better results presented in <xref ref-type="table" rid="table-2">Tab. 2</xref>. This also explains the clear visibility of the sky and clouds in (h) of <xref ref-type="fig" rid="fig-9">Fig. 9</xref>, in which a highly textured region is used as &#x2018;assumed ground truth&#x2019;, as well as the conservation of color and light in the green patches in <xref ref-type="fig" rid="fig-11">Fig. 11</xref>.</p>
<p>In all the examples, the extra features in the proposed results, which arise from the proposed novel estimation of the transmittance medium, are clearly visible in comparison with existing results. The standard deviation values in all cases, as presented in <xref ref-type="table" rid="table-2">Tab. 2</xref>, are lower than those of the corresponding benchmark algorithms. <xref ref-type="table" rid="table-2">Tab. 2</xref> also shows that our proposed algorithm has a higher entropy of <inline-formula id="ieqn-73"><mml:math id="mml-ieqn-73"><mml:mn>7.43</mml:mn></mml:math></inline-formula> than the Zhu et al. [<xref ref-type="bibr" rid="ref-34">34</xref>] algorithm, with an entropy of <inline-formula id="ieqn-74"><mml:math id="mml-ieqn-74"><mml:mn>7.12</mml:mn></mml:math></inline-formula>, and the Sener et al. [<xref ref-type="bibr" rid="ref-35">35</xref>] algorithm, with an entropy of <inline-formula id="ieqn-75"><mml:math id="mml-ieqn-75"><mml:mn>6.89</mml:mn></mml:math></inline-formula>. Our proposed algorithm also shows better consistency than the others, as tabulated in <xref ref-type="table" rid="table-2">Tab. 2</xref> for <xref ref-type="fig" rid="fig-9">Fig. 9</xref>. These results show that the proposed method gives more consistent and predictable results than existing algorithms. However, the proposed method faces a challenge: some regions of the dehazed image tend to blur in instances where the ground truth is assumed, since the method relies on actual ground truth. This is a common problem even in the existing state-of-the-art methods used for comparison.</p>
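Entropy, the headline metric in Tab. 2, can be reproduced with a standard Shannon-entropy computation over the gray-level histogram of the dehazed image. This is a generic sketch of the metric, not the authors' evaluation code; the function name and the 256-level assumption for 8-bit images are illustrative.

```python
import numpy as np

def image_entropy(img, levels=256):
    """Shannon entropy (in bits) of a grayscale image's intensity histogram."""
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    p = hist / hist.sum()   # normalize counts to a probability distribution
    p = p[p > 0]            # drop empty bins, since 0 * log 0 is taken as 0
    return float(-(p * np.log2(p)).sum())
```

A constant image scores 0 bits, while an 8-bit image with a perfectly uniform histogram approaches the maximum of 8 bits; higher values, as in Tab. 2, indicate richer information content.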
<table-wrap id="table-2"><label>Table 2</label><caption><title>Comparison of the mean and standard deviation of performance evaluation metrics for the proposed and existing state-of-the-art algorithms, for the examples presented in <xref ref-type="fig" rid="fig-7">Figs. 7</xref> and <xref ref-type="fig" rid="fig-8">8</xref>. <italic>e</italic> and <italic>r</italic> are blind assessment indicators: <italic>e</italic> assesses the rate of increase of visible edges, while <italic>r</italic> assesses edge-preservation performance. Higher values of <inline-formula id="ieqn-106"><mml:math id="mml-ieqn-106"><mml:mi>&#x03BC;</mml:mi></mml:math></inline-formula> indicate a better method, while lower values of <inline-formula id="ieqn-107"><mml:math id="mml-ieqn-107"><mml:mi>&#x03C3;</mml:mi></mml:math></inline-formula> indicate more consistent results</title></caption>
<table frame="hsides">
<colgroup>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
</colgroup>
<thead>
<tr>
<th align="left">Example</th>
<th align="left"><inline-formula id="ieqn-108"><mml:math id="mml-ieqn-108"><mml:mi>&#x03BC;</mml:mi></mml:math></inline-formula> and <inline-formula id="ieqn-109"><mml:math id="mml-ieqn-109"><mml:mi>&#x03C3;</mml:mi></mml:math></inline-formula></th>
<th align="left">Entropy</th>
<th align="left"><inline-formula id="ieqn-110"><mml:math id="mml-ieqn-110"><mml:mi>e</mml:mi></mml:math></inline-formula></th>
<th align="left"><inline-formula id="ieqn-111"><mml:math id="mml-ieqn-111"><mml:mi>r</mml:mi></mml:math></inline-formula></th>
<th align="left">Contrast</th>
<th align="left">Homogeneity</th>
<th align="left">Algorithm</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" rowspan="6"><xref ref-type="fig" rid="fig-8">Fig. 8</xref></td>
<td align="left"><inline-formula id="ieqn-112"><mml:math id="mml-ieqn-112"><mml:mi>&#x03BC;</mml:mi></mml:math></inline-formula></td>
<td align="left">7.12</td>
<td align="left">20.89</td>
<td align="left">1.45</td>
<td align="left">0.29</td>
<td align="left">0.85</td>
<td align="left" rowspan="2">[<xref ref-type="bibr" rid="ref-36">36</xref>]</td>
</tr>
<tr>
<td align="left"><inline-formula id="ieqn-113"><mml:math id="mml-ieqn-113"><mml:mi>&#x03C3;</mml:mi></mml:math></inline-formula></td>
<td align="left"><inline-formula id="ieqn-114"><mml:math id="mml-ieqn-114"><mml:mo>&#x00B1;</mml:mo><mml:mn>0.71</mml:mn></mml:math></inline-formula></td>
<td align="left"><inline-formula id="ieqn-115"><mml:math id="mml-ieqn-115"><mml:mo>&#x00B1;</mml:mo><mml:mn>2.41</mml:mn></mml:math></inline-formula></td>
<td align="left"><inline-formula id="ieqn-116"><mml:math id="mml-ieqn-116"><mml:mo>&#x00B1;</mml:mo><mml:mn>0.39</mml:mn></mml:math></inline-formula></td>
<td align="left"><inline-formula id="ieqn-117"><mml:math id="mml-ieqn-117"><mml:mo>&#x00B1;</mml:mo><mml:mn>0.19</mml:mn></mml:math></inline-formula></td>
<td align="left"><inline-formula id="ieqn-118"><mml:math id="mml-ieqn-118"><mml:mo>&#x00B1;</mml:mo><mml:mn>0.21</mml:mn></mml:math></inline-formula></td>
</tr>
<tr>
<td align="left"><inline-formula id="ieqn-119"><mml:math id="mml-ieqn-119"><mml:mi>&#x03BC;</mml:mi></mml:math></inline-formula></td>
<td align="left">6.89</td>
<td align="left">20.37</td>
<td align="left">3.71</td>
<td align="left">0.67</td>
<td align="left">0.65</td>
<td align="left" rowspan="2">[<xref ref-type="bibr" rid="ref-37">37</xref>]</td>
</tr>
<tr>
<td align="left"><inline-formula id="ieqn-120"><mml:math id="mml-ieqn-120"><mml:mi>&#x03C3;</mml:mi></mml:math></inline-formula></td>
<td align="left"><inline-formula id="ieqn-121"><mml:math id="mml-ieqn-121"><mml:mo>&#x00B1;</mml:mo><mml:mn>0.15</mml:mn></mml:math></inline-formula></td>
<td align="left"><inline-formula id="ieqn-122"><mml:math id="mml-ieqn-122"><mml:mo>&#x00B1;</mml:mo><mml:mn>2.21</mml:mn></mml:math></inline-formula></td>
<td align="left"><inline-formula id="ieqn-123"><mml:math id="mml-ieqn-123"><mml:mo>&#x00B1;</mml:mo><mml:mn>1.51</mml:mn></mml:math></inline-formula></td>
<td align="left"><inline-formula id="ieqn-124"><mml:math id="mml-ieqn-124"><mml:mo>&#x00B1;</mml:mo><mml:mn>1.09</mml:mn></mml:math></inline-formula></td>
<td align="left"><inline-formula id="ieqn-125"><mml:math id="mml-ieqn-125"><mml:mo>&#x00B1;</mml:mo><mml:mn>0.17</mml:mn></mml:math></inline-formula></td>
</tr>
<tr>
<td align="left"><inline-formula id="ieqn-126"><mml:math id="mml-ieqn-126"><mml:mi>&#x03BC;</mml:mi></mml:math></inline-formula></td>
<td align="left">7.43</td>
<td align="left">22.40</td>
<td align="left">3.97</td>
<td align="left">1.61</td>
<td align="left">0.89</td>
<td align="left" rowspan="2">Proposed</td>
</tr>
<tr>
<td align="left"><inline-formula id="ieqn-127"><mml:math id="mml-ieqn-127"><mml:mi>&#x03C3;</mml:mi></mml:math></inline-formula></td>
<td align="left"><inline-formula id="ieqn-128"><mml:math id="mml-ieqn-128"><mml:mo>&#x00B1;</mml:mo><mml:mn>0.10</mml:mn></mml:math></inline-formula></td>
<td align="left"><inline-formula id="ieqn-129"><mml:math id="mml-ieqn-129"><mml:mo>&#x00B1;</mml:mo><mml:mn>1.09</mml:mn></mml:math></inline-formula></td>
<td align="left"><inline-formula id="ieqn-130"><mml:math id="mml-ieqn-130"><mml:mo>&#x00B1;</mml:mo><mml:mn>0.21</mml:mn></mml:math></inline-formula></td>
<td align="left"><inline-formula id="ieqn-131"><mml:math id="mml-ieqn-131"><mml:mo>&#x00B1;</mml:mo><mml:mn>0.08</mml:mn></mml:math></inline-formula></td>
<td align="left"><inline-formula id="ieqn-132"><mml:math id="mml-ieqn-132"><mml:mo>&#x00B1;</mml:mo><mml:mn>0.05</mml:mn></mml:math></inline-formula></td>
</tr>
<tr>
<td align="left" rowspan="16"><xref ref-type="fig" rid="fig-7">Fig. 7</xref></td>
<td align="left"><inline-formula id="ieqn-133"><mml:math id="mml-ieqn-133"><mml:mi>&#x03BC;</mml:mi></mml:math></inline-formula></td>
<td align="left">6.88</td>
<td align="left">14.57</td>
<td align="left">1.57</td>
<td align="left">0.29</td>
<td align="left">0.31</td>
<td align="left" rowspan="2">[<xref ref-type="bibr" rid="ref-5">5</xref>]</td>
</tr>
<tr>
<td align="left"><inline-formula id="ieqn-134"><mml:math id="mml-ieqn-134"><mml:mi>&#x03C3;</mml:mi></mml:math></inline-formula></td>
<td align="left"><inline-formula id="ieqn-135"><mml:math id="mml-ieqn-135"><mml:mo>&#x00B1;</mml:mo><mml:mn>0.39</mml:mn></mml:math></inline-formula></td>
<td align="left"><inline-formula id="ieqn-136"><mml:math id="mml-ieqn-136"><mml:mo>&#x00B1;</mml:mo><mml:mn>1.12</mml:mn></mml:math></inline-formula></td>
<td align="left"><inline-formula id="ieqn-137"><mml:math id="mml-ieqn-137"><mml:mo>&#x00B1;</mml:mo><mml:mn>0.45</mml:mn></mml:math></inline-formula></td>
<td align="left"><inline-formula id="ieqn-138"><mml:math id="mml-ieqn-138"><mml:mo>&#x00B1;</mml:mo><mml:mn>0.87</mml:mn></mml:math></inline-formula></td>
<td align="left"><inline-formula id="ieqn-139"><mml:math id="mml-ieqn-139"><mml:mo>&#x00B1;</mml:mo><mml:mn>0.25</mml:mn></mml:math></inline-formula></td>
</tr>
<tr>
<td align="left"><inline-formula id="ieqn-140"><mml:math id="mml-ieqn-140"><mml:mi>&#x03BC;</mml:mi></mml:math></inline-formula></td>
<td align="left">6.67</td>
<td align="left">22.80</td>
<td align="left">2.31</td>
<td align="left">1.71</td>
<td align="left">0.83</td>
<td align="left" rowspan="2">[<xref ref-type="bibr" rid="ref-6">6</xref>]</td>
</tr>
<tr>
<td align="left"><inline-formula id="ieqn-141"><mml:math id="mml-ieqn-141"><mml:mi>&#x03C3;</mml:mi></mml:math></inline-formula></td>
<td align="left"><inline-formula id="ieqn-142"><mml:math id="mml-ieqn-142"><mml:mo>&#x00B1;</mml:mo><mml:mn>0.8</mml:mn></mml:math></inline-formula></td>
<td align="left"><inline-formula id="ieqn-143"><mml:math id="mml-ieqn-143"><mml:mo>&#x00B1;</mml:mo><mml:mn>4.31</mml:mn></mml:math></inline-formula></td>
<td align="left"><inline-formula id="ieqn-144"><mml:math id="mml-ieqn-144"><mml:mo>&#x00B1;</mml:mo><mml:mn>0.23</mml:mn></mml:math></inline-formula></td>
<td align="left"><inline-formula id="ieqn-145"><mml:math id="mml-ieqn-145"><mml:mo>&#x00B1;</mml:mo><mml:mn>0.79</mml:mn></mml:math></inline-formula></td>
<td align="left"><inline-formula id="ieqn-146"><mml:math id="mml-ieqn-146"><mml:mo>&#x00B1;</mml:mo><mml:mn>0.58</mml:mn></mml:math></inline-formula></td>
</tr>
<tr>
<td align="left"><inline-formula id="ieqn-147"><mml:math id="mml-ieqn-147"><mml:mi>&#x03BC;</mml:mi></mml:math></inline-formula></td>
<td align="left">6.95</td>
<td align="left">21.56</td>
<td align="left">2.12</td>
<td align="left">1.50</td>
<td align="left">0.31</td>
<td align="left" rowspan="2">[<xref ref-type="bibr" rid="ref-31">31</xref>]</td>
</tr>
<tr>
<td align="left"><inline-formula id="ieqn-148"><mml:math id="mml-ieqn-148"><mml:mi>&#x03C3;</mml:mi></mml:math></inline-formula></td>
<td align="left"><inline-formula id="ieqn-149"><mml:math id="mml-ieqn-149"><mml:mo>&#x00B1;</mml:mo><mml:mn>0.49</mml:mn></mml:math></inline-formula></td>
<td align="left"><inline-formula id="ieqn-150"><mml:math id="mml-ieqn-150"><mml:mo>&#x00B1;</mml:mo><mml:mn>4.83</mml:mn></mml:math></inline-formula></td>
<td align="left"><inline-formula id="ieqn-151"><mml:math id="mml-ieqn-151"><mml:mo>&#x00B1;</mml:mo><mml:mn>0.39</mml:mn></mml:math></inline-formula></td>
<td align="left"><inline-formula id="ieqn-152"><mml:math id="mml-ieqn-152"><mml:mo>&#x00B1;</mml:mo><mml:mn>0.85</mml:mn></mml:math></inline-formula></td>
<td align="left"><inline-formula id="ieqn-153"><mml:math id="mml-ieqn-153"><mml:mo>&#x00B1;</mml:mo><mml:mn>0.71</mml:mn></mml:math></inline-formula></td>
</tr>
<tr>
<td align="left"><inline-formula id="ieqn-154"><mml:math id="mml-ieqn-154"><mml:mi>&#x03BC;</mml:mi></mml:math></inline-formula></td>
<td align="left">6.71</td>
<td align="left">19.01</td>
<td align="left">3.75</td>
<td align="left">1.97</td>
<td align="left">0.79</td>
<td align="left" rowspan="2">[<xref ref-type="bibr" rid="ref-38">38</xref>]</td>
</tr>
<tr>
<td align="left"><inline-formula id="ieqn-155"><mml:math id="mml-ieqn-155"><mml:mi>&#x03C3;</mml:mi></mml:math></inline-formula></td>
<td align="left"><inline-formula id="ieqn-156"><mml:math id="mml-ieqn-156"><mml:mo>&#x00B1;</mml:mo><mml:mn>0.51</mml:mn></mml:math></inline-formula></td>
<td align="left"><inline-formula id="ieqn-157"><mml:math id="mml-ieqn-157"><mml:mo>&#x00B1;</mml:mo><mml:mn>2.80</mml:mn></mml:math></inline-formula></td>
<td align="left"><inline-formula id="ieqn-158"><mml:math id="mml-ieqn-158"><mml:mo>&#x00B1;</mml:mo><mml:mn>0.33</mml:mn></mml:math></inline-formula></td>
<td align="left"><inline-formula id="ieqn-159"><mml:math id="mml-ieqn-159"><mml:mo>&#x00B1;</mml:mo><mml:mn>0.61</mml:mn></mml:math></inline-formula></td>
<td align="left"><inline-formula id="ieqn-160"><mml:math id="mml-ieqn-160"><mml:mo>&#x00B1;</mml:mo><mml:mn>0.19</mml:mn></mml:math></inline-formula></td>
</tr>
<tr>
<td align="left"><inline-formula id="ieqn-161"><mml:math id="mml-ieqn-161"><mml:mi>&#x03BC;</mml:mi></mml:math></inline-formula></td>
<td align="left">6.79</td>
<td align="left">18.99</td>
<td align="left">1.18</td>
<td align="left">1.45</td>
<td align="left">0.55</td>
<td align="left" rowspan="2">[<xref ref-type="bibr" rid="ref-9">9</xref>]</td>
</tr>
<tr>
<td align="left"><inline-formula id="ieqn-162"><mml:math id="mml-ieqn-162"><mml:mi>&#x03C3;</mml:mi></mml:math></inline-formula></td>
<td align="left"><inline-formula id="ieqn-163"><mml:math id="mml-ieqn-163"><mml:mo>&#x00B1;</mml:mo><mml:mn>0.21</mml:mn></mml:math></inline-formula></td>
<td align="left"><inline-formula id="ieqn-164"><mml:math id="mml-ieqn-164"><mml:mo>&#x00B1;</mml:mo><mml:mn>3.15</mml:mn></mml:math></inline-formula></td>
<td align="left"><inline-formula id="ieqn-165"><mml:math id="mml-ieqn-165"><mml:mo>&#x00B1;</mml:mo><mml:mn>0.25</mml:mn></mml:math></inline-formula></td>
<td align="left"><inline-formula id="ieqn-166"><mml:math id="mml-ieqn-166"><mml:mo>&#x00B1;</mml:mo><mml:mn>0.39</mml:mn></mml:math></inline-formula></td>
<td align="left"><inline-formula id="ieqn-167"><mml:math id="mml-ieqn-167"><mml:mo>&#x00B1;</mml:mo><mml:mn>0.29</mml:mn></mml:math></inline-formula></td>
</tr>
<tr>
<td align="left"><inline-formula id="ieqn-168"><mml:math id="mml-ieqn-168"><mml:mi>&#x03BC;</mml:mi></mml:math></inline-formula></td>
<td align="left">6.93</td>
<td align="left">19.25</td>
<td align="left">2.37</td>
<td align="left">1.89</td>
<td align="left">0.65</td>
<td align="left" rowspan="2">[<xref ref-type="bibr" rid="ref-37">37</xref>]</td>
</tr>
<tr>
<td align="left"><inline-formula id="ieqn-169"><mml:math id="mml-ieqn-169"><mml:mi>&#x03C3;</mml:mi></mml:math></inline-formula></td>
<td align="left"><inline-formula id="ieqn-170"><mml:math id="mml-ieqn-170"><mml:mo>&#x00B1;</mml:mo><mml:mn>0.36</mml:mn></mml:math></inline-formula></td>
<td align="left"><inline-formula id="ieqn-171"><mml:math id="mml-ieqn-171"><mml:mo>&#x00B1;</mml:mo><mml:mn>3.47</mml:mn></mml:math></inline-formula></td>
<td align="left"><inline-formula id="ieqn-172"><mml:math id="mml-ieqn-172"><mml:mo>&#x00B1;</mml:mo><mml:mn>0.49</mml:mn></mml:math></inline-formula></td>
<td align="left"><inline-formula id="ieqn-173"><mml:math id="mml-ieqn-173"><mml:mo>&#x00B1;</mml:mo><mml:mn>0.53</mml:mn></mml:math></inline-formula></td>
<td align="left"><inline-formula id="ieqn-174"><mml:math id="mml-ieqn-174"><mml:mo>&#x00B1;</mml:mo><mml:mn>0.17</mml:mn></mml:math></inline-formula></td>
</tr>
<tr>
<td align="left"><inline-formula id="ieqn-175"><mml:math id="mml-ieqn-175"><mml:mi>&#x03BC;</mml:mi></mml:math></inline-formula></td>
<td align="left">7.01</td>
<td align="left">21.07</td>
<td align="left">2.93</td>
<td align="left">1.69</td>
<td align="left">0.95</td>
<td align="left" rowspan="2">[<xref ref-type="bibr" rid="ref-21">21</xref>]</td>
</tr>
<tr>
<td align="left"><inline-formula id="ieqn-176"><mml:math id="mml-ieqn-176"><mml:mi>&#x03C3;</mml:mi></mml:math></inline-formula></td>
<td align="left"><inline-formula id="ieqn-177"><mml:math id="mml-ieqn-177"><mml:mo>&#x00B1;</mml:mo><mml:mn>0.45</mml:mn></mml:math></inline-formula></td>
<td align="left"><inline-formula id="ieqn-178"><mml:math id="mml-ieqn-178"><mml:mo>&#x00B1;</mml:mo><mml:mn>3.57</mml:mn></mml:math></inline-formula></td>
<td align="left"><inline-formula id="ieqn-179"><mml:math id="mml-ieqn-179"><mml:mo>&#x00B1;</mml:mo><mml:mn>0.49</mml:mn></mml:math></inline-formula></td>
<td align="left"><inline-formula id="ieqn-180"><mml:math id="mml-ieqn-180"><mml:mo>&#x00B1;</mml:mo><mml:mn>0.97</mml:mn></mml:math></inline-formula></td>
<td align="left"><inline-formula id="ieqn-181"><mml:math id="mml-ieqn-181"><mml:mo>&#x00B1;</mml:mo><mml:mn>0.31</mml:mn></mml:math></inline-formula></td>
</tr>
<tr>
<td align="left"><inline-formula id="ieqn-182"><mml:math id="mml-ieqn-182"><mml:mi>&#x03BC;</mml:mi></mml:math></inline-formula></td>
<td align="left">7.39</td>
<td align="left">23.01</td>
<td align="left">4.13</td>
<td align="left">2.91</td>
<td align="left">0.97</td>
<td align="left" rowspan="2">Proposed</td>
</tr>
<tr>
<td align="left"><inline-formula id="ieqn-183"><mml:math id="mml-ieqn-183"><mml:mi>&#x03C3;</mml:mi></mml:math></inline-formula></td>
<td align="left"><inline-formula id="ieqn-184"><mml:math id="mml-ieqn-184"><mml:mo>&#x00B1;</mml:mo><mml:mn>0.15</mml:mn></mml:math></inline-formula></td>
<td align="left"><inline-formula id="ieqn-185"><mml:math id="mml-ieqn-185"><mml:mo>&#x00B1;</mml:mo><mml:mn>0.95</mml:mn></mml:math></inline-formula></td>
<td align="left"><inline-formula id="ieqn-186"><mml:math id="mml-ieqn-186"><mml:mo>&#x00B1;</mml:mo><mml:mn>0.19</mml:mn></mml:math></inline-formula></td>
<td align="left"><inline-formula id="ieqn-187"><mml:math id="mml-ieqn-187"><mml:mo>&#x00B1;</mml:mo><mml:mn>0.29</mml:mn></mml:math></inline-formula></td>
<td align="left"><inline-formula id="ieqn-188"><mml:math id="mml-ieqn-188"><mml:mo>&#x00B1;</mml:mo><mml:mn>0.15</mml:mn></mml:math></inline-formula></td>
</tr>
</tbody>
</table>
</table-wrap>
</sec>
</sec>
<sec id="s6"><label>6</label><title>Conclusion</title>
<p>This paper presents a novel method for image dehazing. We propose to solve the dehazing problem using a combination of a CNN with PAM via graph-cut algorithms. The method considers a transmittance based on differential pixel-based variance, and uses local and global patches between the ground truth and the haze image as well as energy functions to improve the transmission map. Through the outcomes presented and demonstrated in the given examples, the paper shows that the proposed algorithm yields a better dehazed image than those of the existing state-of-the-art methods, as shown in <xref ref-type="fig" rid="fig-8">Figs. 8</xref>, <xref ref-type="fig" rid="fig-10">10</xref> and <xref ref-type="fig" rid="fig-11">11</xref>. Comparison of the entropy values in <xref ref-type="fig" rid="fig-7">Figs. 7</xref> and <xref ref-type="fig" rid="fig-8">8</xref> suggests the proposed method improved the information content of the dehazed image by <inline-formula id="ieqn-77"><mml:math id="mml-ieqn-77"><mml:mrow><mml:mo>&#x2243;</mml:mo></mml:mrow><mml:mn>5.42</mml:mn><mml:mi mathvariant="normal">&#x0025;</mml:mi></mml:math></inline-formula> and <inline-formula id="ieqn-76"><mml:math id="mml-ieqn-76"><mml:mrow><mml:mo>&#x2243;</mml:mo></mml:mrow><mml:mn>4.35</mml:mn><mml:mi mathvariant="normal">&#x0025;</mml:mi></mml:math></inline-formula>, respectively, compared to the best existing values. In all the comparison metrics, the proposed method gives more consistent results than the existing methods. These results show that our proposed method gives images with better visibility, greater clarity of features, and more features. In general, our results show more detail than existing benchmark enhancement methods. These improved results can be attributed to the strengthening of local and global image features by a transmittance medium dependent on image pixel variance. 
However, the proposed method faces a challenge: because it relies on the actual ground truth, some regions of the dehazed image tend to blur in instances where the ground truth is only assumed. Future research could consider combining our method with existing algorithms such as the dark channel prior, which exploits the observation that at least one color channel of an RGB image has some pixels of very low intensity. This can be achieved by sub-tasking the CNN framework according to the problems to be solved, enhancing the algorithm while reducing its operational cost. Future research could also explore combining conditions for atmospheric homogeneity with the ratio between the ground truth and hazy image segments during estimation of the transmittance medium. This can be achieved by developing a framework for finding the variation in atmospheric light and the best blend that gives optimal results.</p>
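<p>As background for the dark channel prior mentioned above, the following is a minimal illustrative sketch (not part of the proposed method): the dark channel of a haze-free RGB image, computed as the per-pixel minimum over the three channels followed by a local patch minimum, tends toward zero, and deviations from zero indicate haze. The function name and the pure-NumPy patch loop are our own assumptions for illustration.</p>

```python
import numpy as np

def dark_channel(img, patch=15):
    """Illustrative dark channel (per He et al.'s prior): min over RGB
    channels at each pixel, then a local minimum over a patch x patch window.
    img: H x W x 3 float array with values in [0, 1]."""
    min_rgb = img.min(axis=2)            # darkest of the three channels per pixel
    pad = patch // 2
    padded = np.pad(min_rgb, pad, mode="edge")  # replicate borders
    H, W = min_rgb.shape
    out = np.empty_like(min_rgb)
    for i in range(H):                   # naive sliding-window minimum
        for j in range(W):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out
```

<p>For a haze-free image in which any channel is near zero at every pixel, the resulting map is near zero everywhere; a hazy image lifts the dark channel toward the airlight value, which is what prior-based methods exploit when estimating the transmission map.</p>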
</sec>
</body>
<back>
<fn-group>
<fn fn-type="other"><p><bold>Funding Statement:</bold> This work was funded by the Deanship of Scientific Research at Jouf University under grant No DSR-2021-02-0398.</p></fn>
<fn fn-type="conflict"><p><bold>Conflicts of Interest:</bold> The authors declare that they have no conflicts of interest to report regarding the present study.</p></fn>
</fn-group>
<ref-list content-type="authoryear">
<title>References</title>
<ref id="ref-1"><label>[1]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>L.</given-names> <surname>Wang</surname></string-name>, <string-name><given-names>Y.</given-names> <surname>Guo</surname></string-name>, <string-name><given-names>Y.</given-names> <surname>Wang</surname></string-name>, <string-name><given-names>Z.</given-names> <surname>Liang</surname></string-name>, <string-name><given-names>Z. J.</given-names> <surname>Lin</surname></string-name> <etal>et al.</etal></person-group>, &#x201C;<article-title>Parallax attention for unsupervised stereo correspondence learning</article-title>,&#x201D; <source>IEEE Transactions on Pattern Analysis and Machine Intelligence</source>, vol. <volume>1</volume>, pp. <fpage>1</fpage>&#x2013;<lpage>18</lpage>, <year>2020</year>.</mixed-citation></ref>
<ref id="ref-2"><label>[2]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>G.</given-names> <surname>Obulesu</surname></string-name>, <string-name><given-names>M. S. P.</given-names> <surname>Kumar</surname></string-name>, <string-name><given-names>P.</given-names> <surname>Sowmya</surname></string-name> and <string-name><given-names>V. T.</given-names> <surname>Reddy</surname></string-name></person-group>, &#x201C;<article-title>Single image dehazing using multilayer perceptron and DCP</article-title>,&#x201D; <source>International Journal of Engineering and Research</source>, vol. <volume>8</volume>, no. <issue>3</issue>, pp. <fpage>205</fpage>&#x2013;<lpage>209</lpage>, <year>2019</year>.</mixed-citation></ref>
<ref id="ref-3"><label>[3]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>J.</given-names> <surname>Li</surname></string-name>, <string-name><given-names>G.</given-names> <surname>Li</surname></string-name> and <string-name><given-names>H.</given-names> <surname>Fan</surname></string-name></person-group>, &#x201C;<article-title>Image dehazing using residual-based deep CNN</article-title>,&#x201D; <source>IEEE Access</source>, vol. <volume>6</volume>, pp. <fpage>26831</fpage>&#x2013;<lpage>26842</lpage>, <year>2018</year>.</mixed-citation></ref>
<ref id="ref-4"><label>[4]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>C. O.</given-names> <surname>Ancuti</surname></string-name>, <string-name><given-names>C.</given-names> <surname>Ancuti</surname></string-name> and <string-name><given-names>C.</given-names> <surname>De Vleeschouwer</surname></string-name></person-group>, &#x201C;<article-title>Effective local airlight estimation for image dehazing</article-title>,&#x201D; in <conf-name>2018 25th IEEE Int. Conf. on Image Processing (ICIP)</conf-name>, <conf-loc>Athens, Greece</conf-loc>, <publisher-name>IEEE</publisher-name>, pp. <fpage>2850</fpage>&#x2013;<lpage>2854</lpage>, <year>2018</year>.</mixed-citation></ref>
<ref id="ref-5"><label>[5]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>R.</given-names> <surname>Fattal</surname></string-name></person-group>, &#x201C;<article-title>Single image dehazing</article-title>,&#x201D; <source>ACM Transactions on Graphics (TOG)</source>, vol. <volume>27</volume>, no. <issue>3</issue>, pp. <fpage>1</fpage>&#x2013;<lpage>9</lpage>, <year>2008</year>.</mixed-citation></ref>
<ref id="ref-6"><label>[6]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>D.</given-names> <surname>Berman</surname></string-name> and <string-name><given-names>S.</given-names> <surname>Avidan</surname></string-name></person-group>, &#x201C;<article-title>Non-local image dehazing</article-title>,&#x201D; in <conf-name>Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition</conf-name>, <conf-loc>Las Vegas, Nevada, USA</conf-loc>, pp. <fpage>1674</fpage>&#x2013;<lpage>1682</lpage>, <year>2016</year>.</mixed-citation></ref>
<ref id="ref-7"><label>[7]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>B.</given-names> <surname>Cai</surname></string-name>, <string-name><given-names>X.</given-names> <surname>Xu</surname></string-name>, <string-name><given-names>K.</given-names> <surname>Jia</surname></string-name>, <string-name><given-names>C.</given-names> <surname>Qing</surname></string-name> and <string-name><given-names>D.</given-names> <surname>Tao</surname></string-name></person-group>, &#x201C;<article-title>DehazeNet: An end-to-end system for single image haze removal</article-title>,&#x201D; <source>IEEE Transactions on Image Processing</source>, vol. <volume>25</volume>, no. <issue>11</issue>, pp. <fpage>5187</fpage>&#x2013;<lpage>5198</lpage>, <year>2016</year>.</mixed-citation></ref>
<ref id="ref-8"><label>[8]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>M.</given-names> <surname>Zhu</surname></string-name> and <string-name><given-names>B.</given-names> <surname>He</surname></string-name></person-group>, &#x201C;<article-title>Dehazing via graph cut</article-title>,&#x201D; <source>Optical Engineering</source>, vol. <volume>56</volume>, no. <issue>11</issue>, pp. <fpage>113105</fpage>, <year>2017</year>.</mixed-citation></ref>
<ref id="ref-9"><label>[9]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>F. S.</given-names> <surname>Alenezi</surname></string-name> and <string-name><given-names>S.</given-names> <surname>Ganesan</surname></string-name></person-group>, &#x201C;<article-title>Geometric-pixel guided single-pass convolution neural network with graph cut for image dehazing</article-title>,&#x201D; <source>IEEE Access</source>, vol. <volume>9</volume>, pp. <fpage>29380</fpage>&#x2013;<lpage>29391</lpage>, <year>2021</year>.</mixed-citation></ref>
<ref id="ref-10"><label>[10]</label><mixed-citation publication-type="other"><person-group person-group-type="author"><string-name><given-names>S.</given-names> <surname>Chetlur</surname></string-name>, <string-name><given-names>C.</given-names> <surname>Woolley</surname></string-name>, <string-name><given-names>P.</given-names> <surname>Vandermersch</surname></string-name>, <string-name><given-names>J.</given-names> <surname>Cohen</surname></string-name>, <string-name><given-names>J.</given-names> <surname>Tran</surname></string-name> <etal>et al.</etal></person-group>, &#x201C;<article-title>Cudnn: Efficient primitives for deep learning</article-title>,&#x201D; arXiv preprint arXiv:1410.0759, <year>2014</year>.</mixed-citation></ref>
<ref id="ref-11"><label>[11]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>D.</given-names> <surname>Zhen</surname></string-name>, <string-name><given-names>H. A.</given-names> <surname>Jalab</surname></string-name> and <string-name><given-names>L.</given-names> <surname>Shirui</surname></string-name></person-group>, &#x201C;<article-title>Haze removal algorithm using improved restoration model based on dark channel prior</article-title>,&#x201D; in <conf-name>Int. Visual Informatics Conf.</conf-name>, <conf-loc>Bangi, Malaysia</conf-loc>, <publisher-name>Springer</publisher-name>, pp. <fpage>157</fpage>&#x2013;<lpage>169</lpage>, <year>2019</year>.</mixed-citation></ref>
<ref id="ref-12"><label>[12]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>H.</given-names> <surname>Zhang</surname></string-name> and <string-name><given-names>V. M.</given-names> <surname>Patel</surname></string-name></person-group>, &#x201C;<article-title>Densely connected pyramid dehazing network</article-title>,&#x201D; in <conf-name>Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition</conf-name>, <conf-loc>Salt Lake City, Utah, USA</conf-loc>, pp. <fpage>3194</fpage>&#x2013;<lpage>3203</lpage>, <year>2018</year>.</mixed-citation></ref>
<ref id="ref-13"><label>[13]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>B.</given-names> <surname>Li</surname></string-name>, <string-name><given-names>X.</given-names> <surname>Peng</surname></string-name>, <string-name><given-names>Z.</given-names> <surname>Wang</surname></string-name>, <string-name><given-names>J.</given-names> <surname>Xu</surname></string-name> and <string-name><given-names>D.</given-names> <surname>Feng</surname></string-name></person-group>, &#x201C;<article-title>AOD-Net: All-in-one dehazing network</article-title>,&#x201D; in <conf-name>Proc. of the IEEE Int. Conf. on Computer Vision</conf-name>, <conf-loc>Venice, Italy</conf-loc>, pp. <fpage>4770</fpage>&#x2013;<lpage>4778</lpage>, <year>2017</year>.</mixed-citation></ref>
<ref id="ref-14"><label>[14]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>S.</given-names> <surname>Salazar-Colores</surname></string-name>, <string-name><given-names>I.</given-names> <surname>Cruz-Aceves</surname></string-name> and <string-name><given-names>J. -M.</given-names> <surname>Ramos-Arreguin</surname></string-name></person-group>, &#x201C;<article-title>Single image dehazing using a multilayer perceptron</article-title>,&#x201D; <source>Journal of Electronic Imaging</source>, vol. <volume>27</volume>, no. <issue>4</issue>, pp. <fpage>043022</fpage>, <year>2018</year>.</mixed-citation></ref>
<ref id="ref-15"><label>[15]</label><mixed-citation publication-type="book"><person-group person-group-type="author"><string-name><given-names>W. E. K.</given-names> <surname>Middleton</surname></string-name></person-group>, <italic>In</italic> <source>Vision Through the Atmosphere</source>, <publisher-loc>Toronto, Canada</publisher-loc>: <publisher-name>University of Toronto Press</publisher-name>, <year>2019</year>.</mixed-citation></ref>
<ref id="ref-16"><label>[16]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>J.</given-names> <surname>Vazquez-Corral</surname></string-name>, <string-name><given-names>A.</given-names> <surname>Galdran</surname></string-name>, <string-name><given-names>P.</given-names> <surname>Cyriac</surname></string-name> and <string-name><given-names>M.</given-names> <surname>Bertalmo</surname></string-name></person-group>, &#x201C;<article-title>A fast image dehazing method that does not introduce color artifacts</article-title>,&#x201D; <source>Journal of Real-Time Image Processing</source>, vol. <volume>17</volume>, no. <issue>3</issue>, pp. <fpage>607</fpage>&#x2013;<lpage>622</lpage>, <year>2020</year>.</mixed-citation></ref>
<ref id="ref-17"><label>[17]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>Z. -H.</given-names> <surname>Feng</surname></string-name>, <string-name><given-names>J.</given-names> <surname>Kittler</surname></string-name>, <string-name><given-names>M.</given-names> <surname>Awais</surname></string-name>, <string-name><given-names>P.</given-names> <surname>Huber</surname></string-name> and <string-name><given-names>X. -J.</given-names> <surname>Wu</surname></string-name></person-group>, &#x201C;<article-title>Wing loss for robust facial landmark localisation with convolutional neural networks</article-title>,&#x201D; in <conf-name>Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition</conf-name>, <conf-loc>Salt Lake City, Utah, USA</conf-loc>, pp. <fpage>2235</fpage>&#x2013;<lpage>2245</lpage>, <year>2018</year>.</mixed-citation></ref>
<ref id="ref-18"><label>[18]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>Y.</given-names> <surname>Zhou</surname></string-name>, <string-name><given-names>C.</given-names> <surname>Shi</surname></string-name>, <string-name><given-names>B.</given-names> <surname>Lai</surname></string-name> and <string-name><given-names>G.</given-names> <surname>Jimenez</surname></string-name></person-group>, &#x201C;<article-title>Contrast enhancement of medical images using a new version of the world cup optimization algorithm</article-title>,&#x201D; <source>Quantitative Imaging in Medicine and Surgery</source>, vol. <volume>9</volume>, no. <issue>9</issue>, pp. <fpage>1528</fpage>, <year>2019</year>.</mixed-citation></ref>
<ref id="ref-19"><label>[19]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>K.</given-names> <surname>He</surname></string-name>, <string-name><given-names>J.</given-names> <surname>Sun</surname></string-name> and <string-name><given-names>X.</given-names> <surname>Tang</surname></string-name></person-group>, &#x201C;<article-title>Single image haze removal using dark channel prior</article-title>,&#x201D; <source>IEEE Transactions on Pattern Analysis and Machine Intelligence</source>, vol. <volume>33</volume>, no. <issue>12</issue>, pp. <fpage>2341</fpage>&#x2013;<lpage>2353</lpage>, <year>2010</year>.</mixed-citation></ref>
<ref id="ref-20"><label>[20]</label><mixed-citation publication-type="book"><person-group person-group-type="author"><string-name><given-names>Y.</given-names> <surname>Guo</surname></string-name> and <string-name><given-names>A. S.</given-names> <surname>Ashour</surname></string-name></person-group>, <italic>In</italic> <source>Neutrosophic Set in Medical Image Analysis</source>, <publisher-loc>Cambridge, Massachusetts, USA</publisher-loc>: <publisher-name>Academic Press</publisher-name>, <year>2019</year>.</mixed-citation></ref>
<ref id="ref-21"><label>[21]</label><mixed-citation publication-type="other"><person-group person-group-type="author"><string-name><given-names>K.</given-names> <surname>O&#x0027;Shea</surname></string-name> and <string-name><given-names>R.</given-names> <surname>Nash</surname></string-name></person-group>, &#x201C;<article-title>An introduction to convolutional neural networks</article-title>,&#x201D; arXiv preprint arXiv:1511.08458, <year>2015</year>.</mixed-citation></ref>
<ref id="ref-22"><label>[22]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>S.</given-names> <surname>Zhao</surname></string-name>, <string-name><given-names>L.</given-names> <surname>Zhang</surname></string-name>, <string-name><given-names>Y.</given-names> <surname>Shen</surname></string-name> and <string-name><given-names>Y.</given-names> <surname>Zhou</surname></string-name></person-group>, &#x201C;<article-title>RefineDNet: A weakly supervised refinement framework for single image dehazing</article-title>,&#x201D; <source>IEEE Transactions on Image Processing</source>, vol. <volume>30</volume>, pp. <fpage>3391</fpage>&#x2013;<lpage>3404</lpage>, <year>2021</year>.</mixed-citation></ref>
<ref id="ref-23"><label>[23]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>R. M.</given-names> <surname>Yousaf</surname></string-name>, <string-name><given-names>H. A.</given-names> <surname>Habib</surname></string-name>, <string-name><given-names>Z.</given-names> <surname>Mehmood</surname></string-name>, <string-name><given-names>A.</given-names> <surname>Banjar</surname></string-name>, <string-name><given-names>R.</given-names> <surname>Alharbey</surname></string-name> <etal>et al.</etal></person-group>, &#x201C;<article-title>Single image dehazing and edge preservation based on the dark channel probability-weighted moments</article-title>,&#x201D; <source>Mathematical Problems in Engineering</source>, vol. <volume>2019</volume>, pp. <fpage>1</fpage>&#x2013;<lpage>11</lpage>, <year>2019</year>.</mixed-citation></ref>
<ref id="ref-24"><label>[24]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>E. H.</given-names> <surname>Land</surname></string-name></person-group>, &#x201C;<article-title>The retinex theory of color vision</article-title>,&#x201D; <source>Scientific American</source>, vol. <volume>237</volume>, no. <issue>6</issue>, pp. <fpage>108</fpage>&#x2013;<lpage>129</lpage>, <year>1977</year>.</mixed-citation></ref>
<ref id="ref-25"><label>[25]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>Z.</given-names> <surname>Fu</surname></string-name>, <string-name><given-names>Y.</given-names> <surname>Yang</surname></string-name>, <string-name><given-names>C.</given-names> <surname>Shu</surname></string-name>, <string-name><given-names>Y.</given-names> <surname>Li</surname></string-name>, <string-name><given-names>H.</given-names> <surname>Wu</surname></string-name> <etal>et al.</etal></person-group>, &#x201C;<article-title>Improved single image dehazing using dark channel prior</article-title>,&#x201D; <source>Journal of Systems Engineering and Electronics</source>, vol. <volume>26</volume>, no. <issue>5</issue>, pp. <fpage>1070</fpage>&#x2013;<lpage>1079</lpage>, <year>2015</year>.</mixed-citation></ref>
<ref id="ref-26"><label>[26]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>R.</given-names> <surname>Fattal</surname></string-name></person-group>, &#x201C;<article-title>Single image dehazing</article-title>,&#x201D; <source>ACM Transactions on Graphics (TOG)</source>, vol. <volume>27</volume>, no. <issue>3</issue>, pp. <fpage>1</fpage>&#x2013;<lpage>9</lpage>, <year>2008</year>.</mixed-citation></ref>
<ref id="ref-27"><label>[27]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>Y.</given-names> <surname>Song</surname></string-name>, <string-name><given-names>J.</given-names> <surname>Li</surname></string-name>, <string-name><given-names>X.</given-names> <surname>Wang</surname></string-name> and <string-name><given-names>X.</given-names> <surname>Chen</surname></string-name></person-group>, &#x201C;<article-title>Single image dehazing using ranking convolutional neural network</article-title>,&#x201D; <source>IEEE Transactions on Multimedia</source>, vol. <volume>20</volume>, no. <issue>6</issue>, pp. <fpage>1548</fpage>&#x2013;<lpage>1560</lpage>, <year>2017</year>.</mixed-citation></ref>
<ref id="ref-28"><label>[28]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>J.</given-names> <surname>Zhao</surname></string-name>, <string-name><given-names>Y.</given-names> <surname>Chen</surname></string-name>, <string-name><given-names>H.</given-names> <surname>Feng</surname></string-name>, <string-name><given-names>Z.</given-names> <surname>Xu</surname></string-name> and <string-name><given-names>Q.</given-names> <surname>Li</surname></string-name></person-group>, &#x201C;<article-title>Infrared image enhancement through saliency feature analysis based on multi-scale decomposition</article-title>,&#x201D; <source>Infrared Physics &#x0026; Technology</source>, vol. <volume>62</volume>, pp. <fpage>86</fpage>&#x2013;<lpage>93</lpage>, <year>2014</year>.</mixed-citation></ref>
<ref id="ref-29"><label>[29]</label><mixed-citation publication-type="book"><person-group person-group-type="author"><string-name><given-names>V.</given-names> <surname>Sharma</surname></string-name> and <string-name><given-names>E.</given-names> <surname>Elamaran</surname></string-name></person-group>, &#x201C;<chapter-title>Role of filter sizes in effective image classification using convolutional neural network</chapter-title>,&#x201D; <italic>In</italic> <source>Cognitive Informatics and Soft Computing</source>, <publisher-loc>Berlin, Heidelberg, Germany</publisher-loc>: <publisher-name>Springer</publisher-name>, pp. <fpage>625</fpage>&#x2013;<lpage>637</lpage>, <year>2019</year>.</mixed-citation></ref>
<ref id="ref-30"><label>[30]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>L.</given-names> <surname>Peng</surname></string-name> and <string-name><given-names>B.</given-names> <surname>Li</surname></string-name></person-group>, &#x201C;<article-title>Single image dehazing based on improved dark channel prior and unsharp masking algorithm</article-title>,&#x201D; in <conf-name>Int. Conf. on Intelligent Computing</conf-name>, <conf-loc>Wuhan, China</conf-loc>, <publisher-name>Springer</publisher-name>, pp. <fpage>347</fpage>&#x2013;<lpage>358</lpage>, <year>2018</year>.</mixed-citation></ref>
<ref id="ref-31"><label>[31]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>G.</given-names> <surname>Meng</surname></string-name>, <string-name><given-names>Y.</given-names> <surname>Wang</surname></string-name>, <string-name><given-names>J.</given-names> <surname>Duan</surname></string-name>, <string-name><given-names>S.</given-names> <surname>Xiang</surname></string-name> and <string-name><given-names>C.</given-names> <surname>Pan</surname></string-name></person-group>, &#x201C;<article-title>Efficient image dehazing with boundary constraint and contextual regularization</article-title>,&#x201D; in <conf-name>Proc. of the IEEE Int. Conf. on Computer Vision</conf-name>, <conf-loc>Sydney, Australia</conf-loc>, pp. <fpage>617</fpage>&#x2013;<lpage>624</lpage>, <year>2013</year>.</mixed-citation></ref>
<ref id="ref-32"><label>[32]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>S.</given-names> <surname>Yin</surname></string-name>, <string-name><given-names>Y.</given-names> <surname>Wang</surname></string-name> and <string-name><given-names>Y. -H.</given-names> <surname>Yang</surname></string-name></person-group>, &#x201C;<article-title>A novel image-dehazing network with a parallel attention block</article-title>,&#x201D; <source>Pattern Recognition</source>, vol. <volume>102</volume>, pp. <fpage>107255</fpage>, <year>2020</year>.</mixed-citation></ref>
<ref id="ref-33"><label>[33]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>K.</given-names> <surname>He</surname></string-name>, <string-name><given-names>J.</given-names> <surname>Sun</surname></string-name> and <string-name><given-names>X.</given-names> <surname>Tang</surname></string-name></person-group>, &#x201C;<article-title>Guided image filtering</article-title>,&#x201D; <source>IEEE Transactions on Pattern Analysis and Machine Intelligence</source>, vol. <volume>35</volume>, no. <issue>6</issue>, pp. <fpage>1397</fpage>&#x2013;<lpage>1409</lpage>, <year>2012</year>.</mixed-citation></ref>
<ref id="ref-34"><label>[34]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>M.</given-names> <surname>Zhu</surname></string-name> and <string-name><given-names>B.</given-names> <surname>He</surname></string-name></person-group>, &#x201C;<article-title>Dehazing via graph cut</article-title>,&#x201D; <source>Optical Engineering</source>, vol. <volume>56</volume>, no. <issue>11</issue>, pp. <fpage>113105</fpage>, <year>2017</year>.</mixed-citation></ref>
<ref id="ref-35"><label>[35]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>R. T.</given-names> <surname>Tan</surname></string-name></person-group>, &#x201C;<article-title>Visibility in bad weather from a single image</article-title>,&#x201D; in <conf-name>2008 IEEE Conf. on Computer Vision and Pattern Recognition</conf-name>, <publisher-name>IEEE</publisher-name>, pp. <fpage>1</fpage>&#x2013;<lpage>8</lpage>, <year>2008</year>.</mixed-citation></ref>
<ref id="ref-36"><label>[36]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>Q.</given-names> <surname>Zhu</surname></string-name>, <string-name><given-names>J.</given-names> <surname>Mai</surname></string-name> and <string-name><given-names>L.</given-names> <surname>Shao</surname></string-name></person-group>, &#x201C;<article-title>A fast single image haze removal algorithm using color attenuation prior</article-title>,&#x201D; <source>IEEE Transactions on Image Processing</source>, vol. <volume>24</volume>, no. <issue>11</issue>, pp. <fpage>3522</fpage>&#x2013;<lpage>3533</lpage>, <year>2015</year>.</mixed-citation></ref>
<ref id="ref-37"><label>[37]</label><mixed-citation publication-type="other"><person-group person-group-type="author"><string-name><given-names>O.</given-names> <surname>Sener</surname></string-name> and <string-name><given-names>V.</given-names> <surname>Koltun</surname></string-name></person-group>, &#x201C;<article-title>Multi-task learning as multi-objective optimization</article-title>,&#x201D; arXiv preprint arXiv:1810.04650, <year>2018</year>.</mixed-citation></ref>
<ref id="ref-38"><label>[38]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>Q.</given-names> <surname>Zhu</surname></string-name>, <string-name><given-names>J.</given-names> <surname>Mai</surname></string-name> and <string-name><given-names>L.</given-names> <surname>Shao</surname></string-name></person-group>, &#x201C;<article-title>A fast single image haze removal algorithm using color attenuation prior</article-title>,&#x201D; <source>IEEE Transactions on Image Processing</source>, vol. <volume>24</volume>, no. <issue>11</issue>, pp. <fpage>3522</fpage>&#x2013;<lpage>3533</lpage>, <year>2015</year>.</mixed-citation></ref>
<ref id="ref-39"><label>[39]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>B.</given-names> <surname>Li</surname></string-name>, <string-name><given-names>W.</given-names> <surname>Ren</surname></string-name>, <string-name><given-names>D.</given-names> <surname>Fu</surname></string-name>, <string-name><given-names>D.</given-names> <surname>Tao</surname></string-name>, <string-name><given-names>D.</given-names> <surname>Feng</surname></string-name> <etal>et al.</etal></person-group>, &#x201C;<article-title>Benchmarking single-image dehazing and beyond</article-title>,&#x201D; <source>IEEE Transactions on Image Processing</source>, vol. <volume>28</volume>, no. <issue>1</issue>, pp. <fpage>492</fpage>&#x2013;<lpage>505</lpage>, <year>2018</year>.</mixed-citation></ref>
</ref-list>
</back>
</article>