<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.1 20151215//EN" "http://jats.nlm.nih.gov/publishing/1.1/JATS-journalpublishing1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:mml="http://www.w3.org/1998/Math/MathML" xml:lang="en" article-type="research-article" dtd-version="1.1">
<front>
<journal-meta>
<journal-id journal-id-type="pmc">CMES</journal-id>
<journal-id journal-id-type="nlm-ta">CMES</journal-id>
<journal-id journal-id-type="publisher-id">CMES</journal-id>
<journal-title-group>
<journal-title>Computer Modeling in Engineering &#x0026; Sciences</journal-title>
</journal-title-group>
<issn pub-type="epub">1526-1506</issn>
<issn pub-type="ppub">1526-1492</issn>
<publisher>
<publisher-name>Tech Science Press</publisher-name>
<publisher-loc>USA</publisher-loc>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="publisher-id">67530</article-id>
<article-id pub-id-type="doi">10.32604/cmes.2025.067530</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Article</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>Optimizing Haze Removal: A Variable Scattering Approach to Transmission Mapping</article-title>
<alt-title alt-title-type="left-running-head">Optimizing Haze Removal: A Variable Scattering Approach to Transmission Mapping</alt-title>
<alt-title alt-title-type="right-running-head">Optimizing Haze Removal: A Variable Scattering Approach to Transmission Mapping</alt-title>
</title-group>
<contrib-group>
<contrib id="author-1" contrib-type="author">
<name name-style="western"><surname>Saxena</surname><given-names>Gaurav</given-names></name><xref ref-type="aff" rid="aff-1">1</xref></contrib>
<contrib id="author-2" contrib-type="author">
<name name-style="western"><surname>Napte</surname><given-names>Kiran</given-names></name><xref ref-type="aff" rid="aff-2">2</xref></contrib>
<contrib id="author-3" contrib-type="author">
<name name-style="western"><surname>Shukla</surname><given-names>Neeraj Kumar</given-names></name><xref ref-type="aff" rid="aff-3">3</xref><xref ref-type="aff" rid="aff-4">4</xref></contrib>
<contrib id="author-4" contrib-type="author" corresp="yes">
<name name-style="western"><surname>Parihar</surname><given-names>Sushma</given-names></name><xref ref-type="aff" rid="aff-5">5</xref><email>sushmap@sitpune.edu.in</email></contrib>
<aff id="aff-1"><label>1</label><institution>Department of Computer Science &#x0026; Engineering, Jaypee University of Engineering and Technology</institution>, <addr-line>AB Road, Raghogarh, Guna, 473226, Madhya Pradesh</addr-line>, <country>India</country></aff>
<aff id="aff-2"><label>2</label><institution>Department of Electronics &#x0026; Telecommunication Engineering, Pimpri Chinchwad College of Engineering and Research</institution>, <addr-line>Ravet, Haveli, Pune, 412101, Maharashtra</addr-line>, <country>India</country></aff>
<aff id="aff-3"><label>3</label><institution>Department of Electrical Engineering, College of Engineering, King Khalid University</institution>, <addr-line>P.O. Box 394, Abha, 61421</addr-line>, <country>Saudi Arabia</country></aff>
<aff id="aff-4"><label>4</label><institution>Center for Engineering and Technology Innovations, King Khalid University</institution>, <addr-line>Abha, 61421</addr-line>, <country>Saudi Arabia</country></aff>
<aff id="aff-5"><label>5</label><institution>Symbiosis Institute of Technology</institution>, <addr-line>Pune Campus</addr-line>, <institution>Symbiosis International (Deemed University) (SIU)</institution>, <addr-line>Pune, 412115, Maharashtra</addr-line>, <country>India</country></aff>
</contrib-group>
<author-notes>
<corresp id="cor1"><label>&#x002A;</label>Corresponding Author: Sushma Parihar. Email: <email>sushmap@sitpune.edu.in</email></corresp>
</author-notes>
<pub-date date-type="collection" publication-format="electronic">
<year>2025</year>
</pub-date>
<pub-date date-type="pub" publication-format="electronic">
<day>31</day><month>08</month><year>2025</year>
</pub-date>
<volume>144</volume>
<issue>2</issue>
<fpage>2307</fpage>
<lpage>2323</lpage>
<history>
<date date-type="received">
<day>06</day>
<month>5</month>
<year>2025</year>
</date>
<date date-type="accepted">
<day>14</day>
<month>7</month>
<year>2025</year>
</date>
</history>
<permissions>
<copyright-statement>&#x00A9; 2025 The Authors.</copyright-statement>
<copyright-year>2025</copyright-year>
<copyright-holder>Published by Tech Science Press.</copyright-holder>
<license xlink:href="https://creativecommons.org/licenses/by/4.0/">
<license-p>This work is licensed under a <ext-link ext-link-type="uri" xlink:type="simple" xlink:href="https://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</ext-link>, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.</license-p>
</license>
</permissions>
<self-uri content-type="pdf" xlink:href="TSP_CMES_67530.pdf"></self-uri>
<abstract>
<p>The ill-posed character of haze or fog makes it difficult to remove from a single image. While most existing methods rely on a transmission map refined through depth estimation and assume a constant scattering coefficient, this assumption limits their effectiveness. In this paper, we propose an enhanced transmission map that incorporates spatially varying scattering information inherent in hazy images. To improve linearity, the model utilizes the ratio of the difference between intensity and saturation to their sum. Our approach also addresses critical issues such as edge preservation and color fidelity. In terms of qualitative as well as quantitative analysis, experimental outcomes show that the suggested framework is more effective than the currently used haze removal techniques.</p>
</abstract>
<kwd-group kwd-group-type="author">
<kwd>Dehazing</kwd>
<kwd>ambient light</kwd>
<kwd>transmissivity</kwd>
<kwd>color diminution and depth refurbishment</kwd>
</kwd-group>
<funding-group>
<award-group id="awg1">
<funding-source>Deanship of Research and Graduate Studies at King Khalid University</funding-source>
<award-id>RGP2/274/46</award-id>
</award-group>
</funding-group>
</article-meta>
</front>
<body>
<sec id="s1">
<label>1</label>
<title>Introduction</title>
<p>Image restoration is a significant problem in fields such as road traffic monitoring [<xref ref-type="bibr" rid="ref-1">1</xref>], airborne photography [<xref ref-type="bibr" rid="ref-2">2</xref>], military applications [<xref ref-type="bibr" rid="ref-3">3</xref>], and related areas [<xref ref-type="bibr" rid="ref-4">4</xref>]. Imaging in adverse conditions severely affects the restoration process and has become a challenge for various applications [<xref ref-type="bibr" rid="ref-5">5</xref>]. Unfavorable atmospheric conditions create an environment in which particles of varying density, such as haze and smog, are present [<xref ref-type="bibr" rid="ref-6">6</xref>]. These suspended particles scatter the ambient light unevenly, which distorts the images captured by the camera at that moment [<xref ref-type="bibr" rid="ref-7">7</xref>]. Such photographs suffer from low brightness and poor contrast, which strongly affects downstream image processing tasks such as image segmentation [<xref ref-type="bibr" rid="ref-8">8</xref>], target identification [<xref ref-type="bibr" rid="ref-9">9</xref>], and object tracking [<xref ref-type="bibr" rid="ref-10">10</xref>,<xref ref-type="bibr" rid="ref-11">11</xref>]. Furthermore, since these foreign particles create a spatially varying scattering environment within a single image, degraded photographs exhibit different haze densities [<xref ref-type="bibr" rid="ref-12">12</xref>]. Recovering a clean image from a murky one is therefore a challenging task. Defogging distorted images for computer vision applications requires an effective haze removal algorithm that accounts for this variation in scattering.</p>
<p>The atmospheric scattering model described in the literature serves as the foundation for existing single-image dehazing methods. This method produces better restoration results by estimating depth, transmission, and ambient light. The formation of an unclear image is explained by the atmospheric scattering model in the following way:
<disp-formula id="eqn-1"><label>(1)</label><mml:math id="mml-eqn-1" display="block"><mml:msub><mml:mi>I</mml:mi><mml:mrow><mml:mi>f</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:msub><mml:mi>R</mml:mi><mml:mrow><mml:mi>d</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mspace width=".5em" /><mml:mspace width="thinmathspace" /><mml:msub><mml:mi>t</mml:mi><mml:mrow><mml:mi>r</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>+</mml:mo><mml:mi>A</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mn>1</mml:mn><mml:mo>&#x2212;</mml:mo><mml:msub><mml:mi>t</mml:mi><mml:mrow><mml:mi>r</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:math></disp-formula>where <italic>I</italic><sub><italic>f</italic></sub>(<italic>x</italic>) is the intensity of input at position <italic>x</italic>, <italic>R</italic><sub><italic>d</italic></sub>(<italic>x</italic>) is the glow of the reinstated image, <italic>t</italic><sub><italic>r</italic></sub>(<italic>x</italic>) is the transmission map, and <italic>A</italic> is atmospheric light. Additionally, transmissivity is determined as:
<disp-formula id="eqn-2"><label>(2)</label><mml:math id="mml-eqn-2" display="block"><mml:mrow><mml:msub><mml:mi>t</mml:mi><mml:mi>r</mml:mi></mml:msub></mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mrow><mml:msup><mml:mi>e</mml:mi><mml:mrow><mml:mo>&#x2212;</mml:mo><mml:mi>&#x03B2;</mml:mi><mml:mrow><mml:mspace width="1pt" /></mml:mrow><mml:mi>d</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:mrow></mml:msup></mml:mrow></mml:math></disp-formula>where <italic>&#x03B2;</italic> and <italic>d</italic>(<italic>x</italic>) are the scattering coefficient and scene depth, respectively. The transmission map is therefore computed from only these two parameters, <italic>&#x03B2;</italic> and <italic>d</italic>(<italic>x</italic>) [<xref ref-type="bibr" rid="ref-5">5</xref>,<xref ref-type="bibr" rid="ref-9">9</xref>]. Over the previous two decades, numerous researchers [<xref ref-type="bibr" rid="ref-13">13</xref>&#x2013;<xref ref-type="bibr" rid="ref-17">17</xref>] have presented work on the calculation of depth <italic>d</italic>(<italic>x</italic>), which describes how haze varies across the image. In a 2-D projection of a 3-D scene, nearby objects appear toward the bottom of the image and distant objects toward the top [<xref ref-type="bibr" rid="ref-18">18</xref>&#x2013;<xref ref-type="bibr" rid="ref-21">21</xref>]. Likewise, the top of a hazy image will always carry more fog than the bottom, since fog density is higher for distant objects. Because the amount of fog grows approximately linearly with depth, a rectilinear depth model can support precise transmission map calculations [<xref ref-type="bibr" rid="ref-22">22</xref>]. Bearing these factors in mind, numerous scholars have developed linear models over the past decade, as explained in the following paragraph. Nayar et al. 
[<xref ref-type="bibr" rid="ref-13">13</xref>] proposed a method that determines scene depth by comparing two images taken under different atmospheric conditions. To acquire the scene depth required by the transmission function, Oakley et al. [<xref ref-type="bibr" rid="ref-23">23</xref>] described a technique for minimizing this degradation in conditions where the scene geometry is known. Afterward, Narasimhan et al. [<xref ref-type="bibr" rid="ref-24">24</xref>] offered a technique in which the sky region of an image is located manually to determine the scene depth. In actual practice, however, the aforementioned techniques are frequently ineffective. For dehazing an image, He et al. [<xref ref-type="bibr" rid="ref-25">25</xref>] developed the dark channel prior method, which attracted a great deal of attention due to its simplicity and efficacy compared to earlier methods. Further, to recover densely hazy images, Zhu et al. [<xref ref-type="bibr" rid="ref-15">15</xref>] proposed a technique based on the color attenuation prior, which characterizes changes in fog through the difference between intensity and saturation values. Real-time defogging for surveillance applications becomes possible with this color prior, since it enables rapid computation of the transmission map for each pixel in the image. Raikwar et al. [<xref ref-type="bibr" rid="ref-16">16</xref>] introduced an enhanced linear depth model that integrates a hue factor into the depth formula, based on the difference between saturation and the sum of hue and intensity. Notably, the depth maps developed to date compute the transmissivity with constant values of the scattering coefficient, under the assumption of homogeneous scattering in the local area. 
For calculating the transmission map, the scattering coefficient plays a role nearly as important as that of the depth map, since variations in the scattering medium lead to changes in haze thickness. It is therefore not appropriate to take the value of <italic>&#x03B2;</italic> as constant over the entire image. Selecting different values of <italic>&#x03B2;</italic>, such as <italic>&#x03B2;</italic> &#x003E; 1, <italic>&#x03B2;</italic> &#x003D; 1, and <italic>&#x03B2;</italic> &#x003C; 1, shows how <italic>&#x03B2;</italic> affects the restored image, and this effect can be observed in <xref ref-type="fig" rid="fig-1">Fig. 1</xref>. <xref ref-type="fig" rid="fig-1">Fig. 1a</xref> shows two hazy input images of different locations, while the images restored with <italic>&#x03B2;</italic> &#x003E; 1 are shown in <xref ref-type="fig" rid="fig-1">Fig. 1b</xref>. From <xref ref-type="fig" rid="fig-1">Fig. 1b</xref>, it is clear that the fog in the background has been removed, causing the foreground to appear darker or disturbed (as indicated by the red line in the figure). Conversely, for <italic>&#x03B2;</italic> &#x003C; 1, defogging has minimal effect in the background, and fog is primarily removed from the foreground, as highlighted by the red line in <xref ref-type="fig" rid="fig-1">Fig. 1d</xref>. Finally, as can be seen in <xref ref-type="fig" rid="fig-1">Fig. 1c</xref>, <italic>&#x03B2;</italic> &#x003D; 1 produces uniform defogging in both background and foreground. To the best of our knowledge, the variable scattering concept has not yet been taken into account by any defogging algorithm. To significantly improve defogging techniques, it is necessary to develop a scattering model that incorporates variable values of <italic>&#x03B2;</italic>. 
With this information in mind, this article presents a model created using the linear regression technique, which accounts for variations in <italic>&#x03B2;</italic> based on the fog density in the image. An improved transmission map that serves the intended restoration objective is obtained using the proposed approach. Meanwhile, the depth map is calculated using the color attenuation prior methods from [<xref ref-type="bibr" rid="ref-15">15</xref>,<xref ref-type="bibr" rid="ref-16">16</xref>], the atmospheric light is approximated employing the technique from [<xref ref-type="bibr" rid="ref-25">25</xref>], and <italic>R</italic><sub><italic>d</italic></sub>(<italic>x</italic>) is computed using <xref ref-type="disp-formula" rid="eqn-1">(1)</xref>. The rest of the article is structured as follows: <xref ref-type="sec" rid="s2">Section 2</xref> discusses the color attenuation prior-based restoration approach. <xref ref-type="sec" rid="s3">Section 3</xref> describes the suggested technique for calculating <italic>&#x03B2;</italic>. The experimental evaluation is discussed in <xref ref-type="sec" rid="s4">Section 4</xref>, and lastly, <xref ref-type="sec" rid="s5">Section 5</xref> provides the concluding remarks.</p>
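The forward model in (1) and (2) can be sketched numerically. The following NumPy snippet uses toy values (the uniform radiance, depth ramp, airlight, and &#x03B2; are illustrative assumptions, not values from this paper) to show how haze pushes pixel intensities toward the airlight A as depth grows.

```python
import numpy as np

def transmission(depth, beta):
    # Eq. (2): t_r(x) = exp(-beta * d(x))
    return np.exp(-beta * depth)

def hazy(radiance, depth, airlight, beta):
    # Eq. (1): I_f(x) = R_d(x) * t_r(x) + A * (1 - t_r(x))
    t = transmission(depth, beta)
    return radiance * t + airlight * (1.0 - t)

# Toy scene: uniform radiance 0.8, depth increasing across the image
R = np.full((4, 4), 0.8)
d = np.linspace(0.0, 3.0, 16).reshape(4, 4)
I = hazy(R, d, airlight=1.0, beta=1.0)
```

With the airlight brighter than the scene, intensities rise monotonically with depth, mirroring how distant regions appear more washed out in a hazy photograph.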
<fig id="fig-1">
<label>Figure 1</label>
<caption>
<title>Analysis of the impact of variation in <italic>&#x03B2;</italic>; (<bold>a</bold>) foggy input pictures, (<bold>b</bold>&#x2013;<bold>d</bold>) recovered pictures with <italic>&#x03B2;</italic> &#x003E; 1, <italic>&#x03B2;</italic> &#x003D; 1, and <italic>&#x03B2;</italic> &#x003C; 1, respectively</title>
</caption>
<graphic mimetype="image" mime-subtype="tif" xlink:href="CMES_67530-fig-1a.tif"/>
<graphic mimetype="image" mime-subtype="tif" xlink:href="CMES_67530-fig-1b.tif"/>
</fig>
</sec>
<sec id="s2">
<label>2</label>
<title>Color Attenuation-Based Recovery Technique</title>
<p>The algorithm for the restoration of a foggy image is created using the atmospheric scattering model given in <xref ref-type="disp-formula" rid="eqn-1">Eqs. (1)</xref> and <xref ref-type="disp-formula" rid="eqn-2">(2)</xref>. According to <xref ref-type="disp-formula" rid="eqn-1">(1)</xref>, the image restoration process involves four key factors: scene depth, transmissivity, atmospheric light, and image radiance retrieval. These four elements are briefly discussed in the following subsections.</p>
<sec id="s2_1">
<label>2.1</label>
<title>Estimation of Scene Depth</title>
<p>Removing fog from a single image remains a significant challenge in computer vision due to the limited structural information available about the scene. A notable advancement in this area is the Color Attenuation Prior (CAP) proposed by Zhu et al. [<xref ref-type="bibr" rid="ref-15">15</xref>], which leverages the Hue, Saturation, and Value (HSV) color space for depth estimation. This method is based on the observation that haze concentration generally increases with scene depth, leading to the assumption that scene depth <italic>d</italic>(<italic>x</italic>) is positively correlated with haze concentration <italic>c</italic>(<italic>x</italic>), which, in turn, is related to the difference between pixel value and saturation: <italic>d(x) &#x221D; c(x) &#x221D; v(x) &#x2212; s(x)</italic>.</p>
<p>Building on this assumption, Zhu et al. [<xref ref-type="bibr" rid="ref-15">15</xref>] proposed a linear depth estimation model defined as: <inline-formula id="ieqn-1"><mml:math id="mml-ieqn-1"><mml:mi>d</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:msub><mml:mi>&#x03B8;</mml:mi><mml:mrow><mml:mn>0</mml:mn></mml:mrow></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>&#x03B8;</mml:mi><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mi>v</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:mi>x</mml:mi><mml:mo stretchy="false">)</mml:mo><mml:mo>+</mml:mo><mml:msub><mml:mi>&#x03B8;</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:mi>s</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:math></inline-formula> where <italic>d</italic>(<italic>x</italic>) represents scene depth, <italic>v</italic>(<italic>x</italic>) is brightness, <italic>s</italic>(<italic>x</italic>) is saturation, and <inline-formula id="ieqn-2"><mml:math id="mml-ieqn-2"><mml:mrow><mml:mo>(</mml:mo><mml:msub><mml:mi>&#x03B8;</mml:mi><mml:mrow><mml:mn>0</mml:mn></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mi>&#x03B8;</mml:mi><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mi>&#x03B8;</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:mo>)</mml:mo></mml:mrow></mml:math></inline-formula> are linear coefficients learned from a training dataset comprising hundreds of diverse images across various scenes and locations.</p>
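The CAP linear model above can be sketched as follows; the &#x03B8; values below are placeholders chosen only for illustration, not the coefficients actually learned in [15].

```python
import numpy as np

# Placeholder coefficients; the real (theta0, theta1, theta2) in [15]
# are learned from a training set of hazy images.
THETA0, THETA1, THETA2 = 0.12, 0.96, -0.78

def cap_depth(v, s):
    # d(x) = theta0 + theta1 * v(x) + theta2 * s(x)
    return THETA0 + THETA1 * v + THETA2 * s

v = np.array([0.9, 0.5])   # brightness (value) plane samples
s = np.array([0.1, 0.6])   # saturation plane samples
d = cap_depth(v, s)
```

A bright, weakly saturated pixel (hazier) maps to a larger depth than a dim, saturated one, which is exactly the correlation the prior assumes.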
<p>To improve the linearity and accuracy of this model, Raikwar and Shashikala [<xref ref-type="bibr" rid="ref-16">16</xref>] extended it by incorporating the hue component. Their results showed that as scene depth or fog concentration increases, the difference between saturation and the sum of brightness and hue also increases, leading to a more robust linear depth model.</p>
<p>In this paper, we adopt both depth estimation approaches [<xref ref-type="bibr" rid="ref-15">15</xref>,<xref ref-type="bibr" rid="ref-16">16</xref>] to compute the scene depth, which is then used to generate the transmission map (transmissivity) essential for haze removal.</p>
</sec>
<sec id="s2_2">
<label>2.2</label>
<title>Calculation of Transmissivity</title>
<p>The scattering coefficient <italic>&#x03B2;</italic> and scene depth are needed for the calculation of the transmission map using <xref ref-type="disp-formula" rid="eqn-2">(2)</xref>. In this process, the scene depth <italic>d</italic>(<italic>x</italic>) is initially established by utilizing either [<xref ref-type="bibr" rid="ref-15">15</xref>] or [<xref ref-type="bibr" rid="ref-16">16</xref>]. The depth map is then refined using a median filter to preserve existing edges. As a result, the raw depth map can be described as:
<disp-formula id="eqn-3"><label>(3)</label><mml:math id="mml-eqn-3" display="block"><mml:msub><mml:mi>d</mml:mi><mml:mrow><mml:mi>r</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:munder><mml:mrow><mml:mi>m</mml:mi><mml:mi>e</mml:mi><mml:mi>d</mml:mi></mml:mrow><mml:mrow><mml:mi>y</mml:mi><mml:mo>&#x2208;</mml:mo><mml:msub><mml:mi mathvariant="normal">&#x03A9;</mml:mi><mml:mrow><mml:mi>r</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:mrow></mml:munder><mml:mrow><mml:mo>[</mml:mo><mml:mi>d</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>y</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>]</mml:mo></mml:mrow></mml:math></disp-formula>where <italic>d</italic><sub><italic>r</italic></sub>(<italic>x</italic>) is the raw depth map for window size <italic>r</italic>, and &#x03A9;<sub><italic>r</italic></sub>(<italic>x</italic>) is an <italic>r</italic> &#x00D7; <italic>r</italic> neighborhood centered on <italic>x</italic>. However, certain blocking artifacts may still be present as a result of the mask-based processing; these can be eliminated with an image-guided filter, yielding <italic>d</italic><sub>0</sub>(<italic>x</italic>). Furthermore, the magnitude of <italic>&#x03B2;</italic> is crucial for scene approximation. The approaches of [<xref ref-type="bibr" rid="ref-15">15</xref>,<xref ref-type="bibr" rid="ref-16">16</xref>] assume that atmospheric scattering is homogeneous throughout the scene, resulting in a constant value of <italic>&#x03B2;</italic>. Consequently, the transmissivity can be redefined as follows:
<disp-formula id="eqn-4"><label>(4)</label><mml:math id="mml-eqn-4" display="block"><mml:msub><mml:mi>t</mml:mi><mml:mrow><mml:mi>r</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mi>exp</mml:mi><mml:mo>&#x2061;</mml:mo><mml:mrow><mml:mo>(</mml:mo><mml:mo>&#x2212;</mml:mo><mml:mi>&#x03B2;</mml:mi><mml:mo>&#x00D7;</mml:mo><mml:msub><mml:mi>d</mml:mi><mml:mrow><mml:mn>0</mml:mn></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:math></disp-formula></p>
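A minimal sketch of Eqs. (3) and (4): a median over an r &#x00D7; r window smooths the raw depth, and the transmissivity follows by exponentiation. The guided-filter refinement step is omitted here, and the window size, &#x03B2;, and depth values are illustrative only.

```python
import numpy as np

def median_depth(d, r=3):
    # Eq. (3): median of d over the r x r window centred on each pixel
    pad = r // 2
    dp = np.pad(d, pad, mode="edge")
    out = np.empty_like(d)
    for i in range(d.shape[0]):
        for j in range(d.shape[1]):
            out[i, j] = np.median(dp[i:i + r, j:j + r])
    return out

def transmissivity(d0, beta=1.0):
    # Eq. (4): t_r(x) = exp(-beta * d0(x))
    return np.exp(-beta * d0)

d = np.array([[0.2, 0.2, 0.9],
              [0.2, 0.3, 0.3],
              [0.4, 0.3, 0.3]])
d_med = median_depth(d)
t = transmissivity(d_med)
```

The isolated 0.9 spike in the raw depth is pulled down to the neighborhood median, so the derived transmission map is free of such outliers while edges in the smooth regions survive.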
</sec>
<sec id="s2_3">
<label>2.3</label>
<title>Approximation of Ambient Light &#x0026; Image Brightness Restoration</title>
<p>The color attenuation-based image defogging method sorts the estimated depth map in decreasing order to determine the atmospheric light <italic>A</italic>. The upper 0.1% of pixels, which typically correspond to the most distant and brightest regions of the map, are selected as candidate ambient areas. The brightness of these pixels is then compared in the foggy image, and the pixel with the highest brightness is chosen as the ambient light <italic>A</italic>. This method gives a rapid and precise estimation of ambient light. After calculating the transmission map <italic>t</italic><sub><italic>r</italic></sub>(<italic>x</italic>) and the atmospheric light <italic>A</italic>, the restored image <italic>R</italic><sub><italic>d</italic></sub>(<italic>x</italic>) can be obtained using the following relationship:
<disp-formula id="eqn-5"><label>(5)</label><mml:math id="mml-eqn-5" display="block"><mml:msub><mml:mi>R</mml:mi><mml:mrow><mml:mi>d</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:msub><mml:mi>I</mml:mi><mml:mrow><mml:mi>f</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>&#x2212;</mml:mo><mml:mi>A</mml:mi></mml:mrow><mml:mrow><mml:mo movablelimits="true" form="prefix">max</mml:mo><mml:mrow><mml:mo>(</mml:mo><mml:msub><mml:mi>t</mml:mi><mml:mrow><mml:mi>r</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>,</mml:mo><mml:mn>0.1</mml:mn><mml:mo>)</mml:mo></mml:mrow></mml:mrow></mml:mfrac><mml:mo>+</mml:mo><mml:mi>A</mml:mi></mml:math></disp-formula>where 0.1 serves as a lower bound that keeps the denominator from becoming too small.</p>
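The airlight selection and the recovery step of Eq. (5) can be sketched as below; the toy image and depth map are illustrative, and a grayscale image is used for brevity (the method itself operates per color channel).

```python
import numpy as np

def estimate_airlight(img, depth, frac=0.001):
    # Keep the top 0.1% most distant pixels of the depth map, then
    # take the brightest of those pixels in the hazy image as A.
    k = max(1, int(frac * depth.size))
    farthest = np.argsort(depth.ravel())[-k:]
    return img.ravel()[farthest].max()

def recover(img, t, A):
    # Eq. (5); the 0.1 floor keeps the denominator away from zero
    return (img - A) / np.maximum(t, 0.1) + A

I = np.array([[0.9, 0.8],
              [0.5, 0.4]])
d = np.array([[3.0, 2.5],
              [1.0, 0.5]])
A = estimate_airlight(I, d)
R = recover(I, np.exp(-d), A)
```

Here the most distant pixel is also the brightest, so it is selected as A; pixels with larger depth then receive a stronger correction in the recovery step.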
</sec>
<sec id="s2_4">
<label>2.4</label>
<title>Issues</title>
<p>In general, the transmission map plays an important role in determining the efficacy of the fog removal algorithm. According to <xref ref-type="disp-formula" rid="eqn-2">Eq. (2)</xref>, the transmission map is calculated using two factors: (i) scene depth (<italic>d</italic>(<italic>x</italic>)) and (ii) the scattering coefficient (<italic>&#x03B2;</italic>). According to past research, most researchers attempted only to improve the scene depth used to calculate the transmission map, while treating the scattering coefficient as unity under the presumption that atmospheric disturbances are uniform throughout the whole area of the foggy image. The following conclusions are drawn from this study:
<list list-type="simple">
<list-item><label>(i)</label><p>Since atmospheric particles vary in size and orientation, light passing through them is scattered in all directions. As a consequence, a captured image exhibits non-homogeneous disruption. This raises the question of how the scattering coefficient, previously assumed to be uniform, can be treated as constant when the disturbance itself is not homogeneous.</p></list-item>
<list-item><label>(ii)</label><p>The fog is not distributed equally across the entire picture. If the scattering coefficient were treated as a constant in this situation, it might differently affect the recovery of colors in the foreground and background, as illustrated in <xref ref-type="fig" rid="fig-1">Fig. 1</xref>.</p></list-item>
</list></p>
<p>The observations cited above demonstrate that the transmission map&#x2019;s functionality and the defogging algorithm&#x2019;s effectiveness may both suffer from a constant value of scattering coefficient. The variable scattering coefficient may be able to aid with the aforementioned issues as well as improve the overall effectiveness of the defogging method. This motivates the development of a variable scattering coefficient design for fog removal methods. This article&#x2019;s main contribution can be summed up as follows:
<list list-type="simple">
<list-item><label>1.</label>
<p>When computing scene depth using methods from [<xref ref-type="bibr" rid="ref-15">15</xref>,<xref ref-type="bibr" rid="ref-16">16</xref>], the HSV color space components are adjusted to establish a relationship in which brightness and hue increase, while saturation decreases with perceived depth. Furthermore, analysis of the HSV matrices shows that the ratio of the difference to the sum of saturation and intensity values varies linearly across the image. This observation is used to develop an algorithm that dynamically adjusts the scattering coefficient for image defogging.</p></list-item>
<list-item><label>2.</label>
<p>The issue of color fidelity is addressed by incorporating a spatially variable scattering coefficient, <italic>&#x03B2;</italic>(<italic>x</italic>), into existing defogging techniques. However, preserving image edges remains a challenge. To mitigate edge degradation, we evaluated several filters reported in the literature and found that median filtering is most suitable, as it enhances visibility while effectively preserving edge details.</p></list-item>
</list></p>
</sec>
</sec>
<sec id="s3">
<label>3</label>
<title>Suggested Approach to Estimating Atmospheric Scattering and Updated Transmissivity</title>
<p>As discussed in previous sections, the transmission map can be enhanced by developing it with a variable scattering coefficient <italic>&#x03B2;</italic>(<italic>x</italic>) for better recovery of the hazy pictures. Consequently, this section introduces a pattern-based linear model for calculating the scattering coefficients of the input foggy image. In any given hazy image, the fog density generally increases from the bottom to the top. Fog is primarily caused by foreign particles, such as moisture, dust, smog, and murk, suspended in the air. The scattering of these particles is affected by their properties, such as dimension, dispersion, or positioning, which are never entirely uniform. The varying particle sizes result in a variable-density scattering medium in the atmosphere, complicating the prediction of a variable scattering coefficient model. Moreover, several linear models have been developed in the literature [<xref ref-type="bibr" rid="ref-15">15</xref>,<xref ref-type="bibr" rid="ref-16">16</xref>,<xref ref-type="bibr" rid="ref-25">25</xref>] for determining depth, considering that fog density varies linearly with depth. In a similar way, and taking the same factors into account, the scattering map can also be produced.</p>
<p>In general, color images are modeled with three separate information planes, such as HSV or RGB. The scattering map is created using information from the HSV model of the foggy image. Several experiments have been conducted on various foggy images to explore a possible relationship between scattering and the values of the HSV planes. In essence, these tests are based on randomized analyses of the HSV values of selected hazy images. The experimental statistics reveal that the ratio of the difference between brightness and saturation values to their sum is directly linked to the distribution of the scattering medium.
<disp-formula id="eqn-6"><label>(6)</label><mml:math id="mml-eqn-6" display="block"><mml:mi>&#x03B2;</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>&#x221D;</mml:mo><mml:mfrac><mml:mrow><mml:mi>v</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>&#x2212;</mml:mo><mml:mi>s</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mi>v</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>+</mml:mo><mml:mi>s</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:mrow></mml:mfrac></mml:math></disp-formula></p>
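The cue in (6) can be computed directly from the HSV planes. The snippet below is a sketch with made-up pixel values; the small eps guard is our addition to avoid division by zero in dark, unsaturated pixels and is not part of (6).

```python
import numpy as np

def scatter_cue(v, s, eps=1e-6):
    # Eq. (6): beta(x) is taken proportional to (v - s) / (v + s)
    return (v - s) / (v + s + eps)

v = np.array([0.95, 0.60])   # brightness: a hazy pixel, a clearer pixel
s = np.array([0.05, 0.50])   # saturation of the same two pixels
cue = scatter_cue(v, s)
```

The washed-out (hazy) pixel yields a cue close to 1, while the clearer pixel yields a much smaller value, matching the proportionality that (6) assumes between the ratio and the scattering concentration.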
<p>Based on an approximation of <xref ref-type="disp-formula" rid="eqn-6">(6)</xref>, which indicates the concentration of scattering particles, a linear model for the variable scattering coefficient is proposed, as presented in <xref ref-type="disp-formula" rid="eqn-7">(7)</xref>.
<disp-formula id="eqn-7"><label>(7)</label><mml:math id="mml-eqn-7" display="block"><mml:mi>&#x03B2;</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:msub><mml:mi>c</mml:mi><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>c</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:mi>&#x03B7;</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>+</mml:mo><mml:msub><mml:mi>c</mml:mi><mml:mrow><mml:mn>3</mml:mn></mml:mrow></mml:msub><mml:mi>&#x03C7;</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>+</mml:mo><mml:mi>&#x03B5;</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:math></disp-formula>where <inline-formula id="ieqn-3"><mml:math id="mml-ieqn-3"><mml:mi>&#x03B7;</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mi>v</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mi>v</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>+</mml:mo><mml:mi>s</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:mrow></mml:mfrac></mml:math></inline-formula> and <inline-formula id="ieqn-4"><mml:math id="mml-ieqn-4"><mml:mi>&#x03C7;</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mi>s</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mi>v</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>+</mml:mo><mml:mi>s</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:mrow></mml:mfrac></mml:math></inline-formula> are the normalization factors for the 
brightness and saturation of a hazy image, respectively. Here, <italic>c</italic><sub>1</sub>, <italic>c</italic><sub>2</sub>, and <italic>c</italic><sub>3</sub> are the unknown linear coefficients, and <italic>&#x03B5;</italic>(<italic>x</italic>) represents a random error in the scattering model, characterized as a random image with zero mean and variance <italic>&#x03C3;</italic><sup>2</sup> (i.e., <italic>&#x03B5;</italic>(<italic>x</italic>)&#x223C;<italic>N</italic>(0, <italic>&#x03C3;</italic><sup>2</sup>)).</p>
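As a concrete illustration, the linear model of (7) can be evaluated directly on the saturation and brightness planes of an image. The sketch below is a minimal NumPy version (function name hypothetical, channels assumed normalized to [0, 1]); it uses the coefficient values reported later in this section and drops the error term:

```python
import numpy as np

def scattering_map(hsv, c1=0.8612, c2=0.8059, c3=-0.65525):
    """Sketch of beta(x) = c1 + c2*eta(x) + c3*chi(x) from Eq. (7).

    hsv: float array of shape (H, W, 3) with channels in [0, 1].
    Default coefficients are the values reported later in the section;
    the random error term epsilon(x) is omitted.
    """
    s, v = hsv[..., 1], hsv[..., 2]
    denom = np.maximum(v + s, 1e-6)   # guard against v + s = 0
    eta = v / denom                   # normalized brightness
    chi = s / denom                   # normalized saturation
    return c1 + c2 * eta + c3 * chi
```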
<p>To minimize the squared error, the optimal values of the linear coefficients are computed as follows:
<disp-formula id="eqn-8"><label>(8)</label><mml:math id="mml-eqn-8" display="block"><mml:msup><mml:mrow><mml:mo>(</mml:mo><mml:mi>&#x03B5;</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup><mml:mo>=</mml:mo><mml:msup><mml:mrow><mml:mo>(</mml:mo><mml:mi>&#x03B2;</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>&#x2212;</mml:mo><mml:msub><mml:mi>c</mml:mi><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>&#x2212;</mml:mo><mml:msub><mml:mi>c</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:mi>&#x03B7;</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>&#x2212;</mml:mo><mml:msub><mml:mi>c</mml:mi><mml:mrow><mml:mn>3</mml:mn></mml:mrow></mml:msub><mml:mi>&#x03C7;</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup><mml:mo>&#x2245;</mml:mo><mml:mn>0</mml:mn></mml:math></disp-formula></p>
<p>Regression analysis, the mathematical method for calculating the linear coefficients, is based on the idea that the sum of the squared errors over the <italic>n</italic> samples must be minimized. For the regression analysis, the linear model <xref ref-type="disp-formula" rid="eqn-7">(7)</xref> can be rewritten in its generalized form as:
<disp-formula id="eqn-9"><label>(9)</label><mml:math id="mml-eqn-9" display="block"><mml:msub><mml:mi>&#x03B2;</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:msub><mml:mi>c</mml:mi><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>c</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:msub><mml:mi>&#x03B7;</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>+</mml:mo><mml:msub><mml:mi>c</mml:mi><mml:mrow><mml:mn>3</mml:mn></mml:mrow></mml:msub><mml:msub><mml:mi>&#x03C7;</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>+</mml:mo><mml:msub><mml:mi>&#x03B5;</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:math></disp-formula></p>
<p>Likewise, the stochastic errors can be generalized as follows:
<disp-formula id="eqn-10"><label>(10)</label><mml:math id="mml-eqn-10" display="block"><mml:msub><mml:mi>&#x03B5;</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:msub><mml:mi>&#x03B2;</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>&#x2212;</mml:mo><mml:msub><mml:mi>c</mml:mi><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>&#x2212;</mml:mo><mml:msub><mml:mi>c</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:msub><mml:mi>&#x03B7;</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>&#x2212;</mml:mo><mml:msub><mml:mi>c</mml:mi><mml:mrow><mml:mn>3</mml:mn></mml:mrow></mml:msub><mml:msub><mml:mi>&#x03C7;</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:math></disp-formula></p>
<p>Afterward, the squares of the errors are calculated and summed, which gives:
<disp-formula id="eqn-11"><label>(11)</label><mml:math id="mml-eqn-11" display="block"><mml:mi>E</mml:mi><mml:mo>=</mml:mo><mml:msup><mml:mrow><mml:msub><mml:mi>&#x03B5;</mml:mi><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>+</mml:mo><mml:msup><mml:mrow><mml:msub><mml:mi>&#x03B5;</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>+</mml:mo><mml:msup><mml:mrow><mml:msub><mml:mi>&#x03B5;</mml:mi><mml:mrow><mml:mn>3</mml:mn></mml:mrow></mml:msub></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>+</mml:mo><mml:mo>&#x2026;</mml:mo><mml:mo>+</mml:mo><mml:msup><mml:mrow><mml:msub><mml:mi>&#x03B5;</mml:mi><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:math></disp-formula>or
<disp-formula id="eqn-12"><label>(12)</label><mml:math id="mml-eqn-12" display="block"><mml:mi>E</mml:mi><mml:mo>=</mml:mo><mml:munderover><mml:mo>&#x2211;</mml:mo><mml:mrow><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:munderover><mml:msup><mml:mrow><mml:mo>(</mml:mo><mml:msub><mml:mi>&#x03B5;</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup><mml:mo>=</mml:mo><mml:munderover><mml:mo>&#x2211;</mml:mo><mml:mrow><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:munderover><mml:msup><mml:mrow><mml:mo>[</mml:mo><mml:mspace width="thinmathspace" /><mml:msub><mml:mi>&#x03B2;</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>&#x2212;</mml:mo><mml:mrow><mml:mo>(</mml:mo><mml:msub><mml:mi>c</mml:mi><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>c</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:msub><mml:mi>&#x03B7;</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>+</mml:mo><mml:msub><mml:mi>c</mml:mi><mml:mrow><mml:mn>3</mml:mn></mml:mrow></mml:msub><mml:msub><mml:mi>&#x03C7;</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mo>]</mml:mo></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup></mml:math></disp-formula></p>
<p>To minimize the error <italic>E</italic>, <xref ref-type="disp-formula" rid="eqn-12">(12)</xref> is differentiated partially with respect to <italic>c</italic><sub>1</sub>, <italic>c</italic><sub>2</sub>, and <italic>c</italic><sub>3</sub>, and each derivative is equated to zero.
<disp-formula id="eqn-13"><label>(13)</label><mml:math id="mml-eqn-13" display="block"><mml:mtable columnalign="right left right left right left right left right left right left" rowspacing="3pt" columnspacing="0em 2em 0em 2em 0em 2em 0em 2em 0em 2em 0em" displaystyle="true"><mml:mtr><mml:mtd /><mml:mtd><mml:mfrac><mml:mrow><mml:mi mathvariant="normal">&#x2202;</mml:mi><mml:mi>E</mml:mi></mml:mrow><mml:mrow><mml:mi mathvariant="normal">&#x2202;</mml:mi><mml:msub><mml:mi>c</mml:mi><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub></mml:mrow></mml:mfrac><mml:mo>=</mml:mo><mml:mn>0</mml:mn><mml:mo>=</mml:mo><mml:mo>&#x2212;</mml:mo><mml:mn>2</mml:mn><mml:mrow><mml:mo>[</mml:mo><mml:msub><mml:mi>&#x03B2;</mml:mi><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>&#x2212;</mml:mo><mml:mrow><mml:mo>(</mml:mo><mml:msub><mml:mi>c</mml:mi><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>c</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:msub><mml:mi>&#x03B7;</mml:mi><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>+</mml:mo><mml:msub><mml:mi>c</mml:mi><mml:mrow><mml:mn>3</mml:mn></mml:mrow></mml:msub><mml:msub><mml:mi>&#x03C7;</mml:mi><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mo>]</mml:mo></mml:mrow><mml:mo>&#x2212;</mml:mo><mml:mn>2</mml:mn><mml:mrow><mml:mo>[</mml:mo><mml:msub><mml:mi>&#x03B2;</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>&#x2212;</mml:mo><mml:mrow><mml:mo>(</mml:mo><mml:msub><mml:mi>c</mml:mi><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>c</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msu
b><mml:msub><mml:mi>&#x03B7;</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>+</mml:mo><mml:msub><mml:mi>c</mml:mi><mml:mrow><mml:mn>3</mml:mn></mml:mrow></mml:msub><mml:msub><mml:mi>&#x03C7;</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mo>]</mml:mo></mml:mrow></mml:mtd></mml:mtr><mml:mtr><mml:mtd /><mml:mtd><mml:mi></mml:mi><mml:mo>&#x2212;</mml:mo><mml:mo>&#x2026;</mml:mo><mml:mo>&#x2212;</mml:mo><mml:mn>2</mml:mn><mml:mrow><mml:mo>[</mml:mo><mml:msub><mml:mi>&#x03B2;</mml:mi><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>&#x2212;</mml:mo><mml:mrow><mml:mo>(</mml:mo><mml:msub><mml:mi>c</mml:mi><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>c</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:msub><mml:mi>&#x03B7;</mml:mi><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>+</mml:mo><mml:msub><mml:mi>c</mml:mi><mml:mrow><mml:mn>3</mml:mn></mml:mrow></mml:msub><mml:msub><mml:mi>&#x03C7;</mml:mi><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mo>]</mml:mo></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<disp-formula id="eqn-14"><label>(14)</label><mml:math id="mml-eqn-14" display="block"><mml:mtable columnalign="right left right left right left right left right left right left" rowspacing="3pt" columnspacing="0em 2em 0em 2em 0em 2em 0em 2em 0em 2em 0em" displaystyle="true"><mml:mtr><mml:mtd /><mml:mtd><mml:mfrac><mml:mrow><mml:mi mathvariant="normal">&#x2202;</mml:mi><mml:mi>E</mml:mi></mml:mrow><mml:mrow><mml:mi mathvariant="normal">&#x2202;</mml:mi><mml:msub><mml:mi>c</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub></mml:mrow></mml:mfrac><mml:mo>=</mml:mo><mml:mn>0</mml:mn><mml:mo>=</mml:mo><mml:mo>&#x2212;</mml:mo><mml:mn>2</mml:mn><mml:msub><mml:mi>&#x03B7;</mml:mi><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mrow><mml:mo>[</mml:mo><mml:msub><mml:mi>&#x03B2;</mml:mi><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>&#x2212;</mml:mo><mml:mrow><mml:mo>(</mml:mo><mml:msub><mml:mi>c</mml:mi><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>c</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:msub><mml:mi>&#x03B7;</mml:mi><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>+</mml:mo><mml:msub><mml:mi>c</mml:mi><mml:mrow><mml:mn>3</mml:mn></mml:mrow></mml:msub><mml:msub><mml:mi>&#x03C7;</mml:mi><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mo>]</mml:mo></mml:mrow><mml:mo>&#x2212;</mml:mo><mml:mn>2</mml:mn><mml:msub><mml:mi>&#x03B7;</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mrow><mml:mo>[</mml:mo><mml:msub><mml:mi>&#x03B2;</mml:mi><mml:mrow><mml:mn>2</mml:mn></
mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>&#x2212;</mml:mo><mml:mrow><mml:mo>(</mml:mo><mml:msub><mml:mi>c</mml:mi><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>c</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:msub><mml:mi>&#x03B7;</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>+</mml:mo><mml:msub><mml:mi>c</mml:mi><mml:mrow><mml:mn>3</mml:mn></mml:mrow></mml:msub><mml:msub><mml:mi>&#x03C7;</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mo>]</mml:mo></mml:mrow><mml:mspace width="1em" /><mml:mspace width="1em" /></mml:mtd></mml:mtr><mml:mtr><mml:mtd /><mml:mtd><mml:mi></mml:mi><mml:mo>&#x2212;</mml:mo><mml:mo>&#x2026;</mml:mo><mml:mo>&#x2212;</mml:mo><mml:mn>2</mml:mn><mml:msub><mml:mi>&#x03B7;</mml:mi><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mrow><mml:mo>[</mml:mo><mml:msub><mml:mi>&#x03B2;</mml:mi><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>&#x2212;</mml:mo><mml:mrow><mml:mo>(</mml:mo><mml:msub><mml:mi>c</mml:mi><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>c</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:msub><mml:mi>&#x03B7;</mml:mi><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>+</mml:mo><mml:msub><mml:mi>c</mml:mi><mml:mrow><mml:mn>3</mml:mn></mml:mrow></mml:msub><mml:msub><mml:mi>&#x03C7;</mml:mi><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow><m
ml:mo>)</mml:mo></mml:mrow><mml:mo>]</mml:mo></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<disp-formula id="eqn-15"><label>(15)</label><mml:math id="mml-eqn-15" display="block"><mml:mtable columnalign="right left right left right left right left right left right left" rowspacing="3pt" columnspacing="0em 2em 0em 2em 0em 2em 0em 2em 0em 2em 0em" displaystyle="true"><mml:mtr><mml:mtd /><mml:mtd><mml:mfrac><mml:mrow><mml:mi mathvariant="normal">&#x2202;</mml:mi><mml:mi>E</mml:mi></mml:mrow><mml:mrow><mml:mi mathvariant="normal">&#x2202;</mml:mi><mml:msub><mml:mi>c</mml:mi><mml:mrow><mml:mn>3</mml:mn></mml:mrow></mml:msub></mml:mrow></mml:mfrac><mml:mo>=</mml:mo><mml:mn>0</mml:mn><mml:mo>=</mml:mo><mml:mo>&#x2212;</mml:mo><mml:mn>2</mml:mn><mml:msub><mml:mi>&#x03C7;</mml:mi><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mrow><mml:mo>[</mml:mo><mml:msub><mml:mi>&#x03B2;</mml:mi><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>&#x2212;</mml:mo><mml:mrow><mml:mo>(</mml:mo><mml:msub><mml:mi>c</mml:mi><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>c</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:msub><mml:mi>&#x03B7;</mml:mi><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>+</mml:mo><mml:msub><mml:mi>c</mml:mi><mml:mrow><mml:mn>3</mml:mn></mml:mrow></mml:msub><mml:msub><mml:mi>&#x03C7;</mml:mi><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mo>]</mml:mo></mml:mrow><mml:mo>&#x2212;</mml:mo><mml:mn>2</mml:mn><mml:msub><mml:mi>&#x03C7;</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mrow><mml:mo>[</mml:mo><mml:msub><mml:mi>&#x03B2;</mml:mi><mml:mrow><mml:mn>2</mml:mn></
mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>&#x2212;</mml:mo><mml:mrow><mml:mo>(</mml:mo><mml:msub><mml:mi>c</mml:mi><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>c</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:msub><mml:mi>&#x03B7;</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>+</mml:mo><mml:msub><mml:mi>c</mml:mi><mml:mrow><mml:mn>3</mml:mn></mml:mrow></mml:msub><mml:msub><mml:mi>&#x03C7;</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mo>]</mml:mo></mml:mrow></mml:mtd></mml:mtr><mml:mtr><mml:mtd /><mml:mtd><mml:mi></mml:mi><mml:mo>&#x2212;</mml:mo><mml:mo>&#x2026;</mml:mo><mml:mo>&#x2212;</mml:mo><mml:mn>2</mml:mn><mml:msub><mml:mi>&#x03C7;</mml:mi><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mrow><mml:mo>[</mml:mo><mml:msub><mml:mi>&#x03B2;</mml:mi><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>&#x2212;</mml:mo><mml:mrow><mml:mo>(</mml:mo><mml:msub><mml:mi>c</mml:mi><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>c</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:msub><mml:mi>&#x03B7;</mml:mi><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>+</mml:mo><mml:msub><mml:mi>c</mml:mi><mml:mrow><mml:mn>3</mml:mn></mml:mrow></mml:msub><mml:msub><mml:mi>&#x03C7;</mml:mi><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mo>]</mml:mo></mml:m
row></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula></p>
<p><xref ref-type="disp-formula" rid="eqn-13">Eqs. (13)</xref>&#x2013;<xref ref-type="disp-formula" rid="eqn-15">(15)</xref> are simplified as:
<disp-formula id="eqn-16"><label>(16)</label><mml:math id="mml-eqn-16" display="block"><mml:munderover><mml:mo>&#x2211;</mml:mo><mml:mrow><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:munderover><mml:msub><mml:mi>&#x03B2;</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mi>n</mml:mi><mml:msub><mml:mi>c</mml:mi><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>c</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:munderover><mml:mo>&#x2211;</mml:mo><mml:mrow><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:munderover><mml:msub><mml:mi>&#x03B7;</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>+</mml:mo><mml:msub><mml:mi>c</mml:mi><mml:mrow><mml:mn>3</mml:mn></mml:mrow></mml:msub><mml:munderover><mml:mo>&#x2211;</mml:mo><mml:mrow><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:munderover><mml:msub><mml:mi>&#x03C7;</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:math></disp-formula>
<disp-formula id="eqn-17"><label>(17)</label><mml:math id="mml-eqn-17" display="block"><mml:munderover><mml:mo>&#x2211;</mml:mo><mml:mrow><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:munderover><mml:msub><mml:mi>&#x03B2;</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>&#x00D7;</mml:mo><mml:msub><mml:mi>&#x03B7;</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:msub><mml:mi>c</mml:mi><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:munderover><mml:mo>&#x2211;</mml:mo><mml:mrow><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:munderover><mml:msub><mml:mi>&#x03B7;</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>+</mml:mo><mml:msub><mml:mi>c</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:munderover><mml:mo>&#x2211;</mml:mo><mml:mrow><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:munderover><mml:msup><mml:mrow><mml:msub><mml:mi>&#x03B7;</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>+</mml:mo><mml:msub><mml:mi>c</mml:mi><mml:mrow><mml:mn>3</mml:mn></mml:mrow></mml:msub><mml:munderover><mml:mo>&#x2211;</mml:mo><mml:mrow><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:munderover><mml:msub><mml:mi>&#x03C7;</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>&#x00D7;</mml:mo><mml:msub><mml:mi>&#x03B7;</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:math></disp-formula>
<disp-formula id="eqn-18"><label>(18)</label><mml:math id="mml-eqn-18" display="block"><mml:munderover><mml:mo>&#x2211;</mml:mo><mml:mrow><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:munderover><mml:msub><mml:mi>&#x03B2;</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>&#x00D7;</mml:mo><mml:msub><mml:mi>&#x03C7;</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:msub><mml:mi>c</mml:mi><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:munderover><mml:mo>&#x2211;</mml:mo><mml:mrow><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:munderover><mml:msub><mml:mi>&#x03C7;</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>+</mml:mo><mml:msub><mml:mi>c</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:munderover><mml:mo>&#x2211;</mml:mo><mml:mrow><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:munderover><mml:msub><mml:mi>&#x03B7;</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>&#x00D7;</mml:mo><mml:msub><mml:mi>&#x03C7;</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>+</mml:mo><mml:msub><mml:mi>c</mml:mi><mml:mrow><mml:mn>3</mml:mn></mml:mrow></mml:msub><mml:munderover><mml:mo>&#x2211;</mml:mo><mml:mrow><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:munderover><mml:msup><mml:mrow><mml:msub><mml:mi>&#x03C7;</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:math></disp-formula></p>
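Solving the normal equations above is equivalent to an ordinary least-squares fit over the flattened sample vectors. A minimal NumPy sketch (function name hypothetical) is:

```python
import numpy as np

def fit_coefficients(beta, eta, chi):
    """Least-squares estimate of (c1, c2, c3), equivalent to solving the
    normal equations obtained by setting the partial derivatives of E to zero."""
    beta, eta, chi = (np.ravel(a) for a in (beta, eta, chi))
    A = np.column_stack([np.ones_like(eta), eta, chi])  # design matrix [1, eta, chi]
    coeffs, *_ = np.linalg.lstsq(A, beta, rcond=None)
    return coeffs
```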
<p>To determine the optimal values of the linear coefficients <italic>c</italic><sub>1</sub>, <italic>c</italic><sub>2</sub>, and <italic>c</italic><sub>3</sub>, the above equations can be used. However, solving these equations requires the values of the sums <inline-formula id="ieqn-5"><mml:math id="mml-ieqn-5"><mml:munderover><mml:mo>&#x2211;</mml:mo><mml:mrow><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:munderover><mml:msub><mml:mi>&#x03B2;</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>,</mml:mo><mml:mspace width="thinmathspace" /><mml:munderover><mml:mo>&#x2211;</mml:mo><mml:mrow><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:munderover><mml:msub><mml:mi>&#x03B7;</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mspace width="thinmathspace" /><mml:mrow><mml:mtext>and</mml:mtext></mml:mrow><mml:mspace width="thinmathspace" /><mml:munderover><mml:mo>&#x2211;</mml:mo><mml:mrow><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:munderover><mml:msub><mml:mi>&#x03C7;</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:math></inline-formula>. Hence, these sums can be derived from a dataset that contains hazy images along with their ground-truth counterparts. Such a dataset can be generated from 500 clear images taken from the Google Images dataset.
Hazy images are then synthesized using the model given in <xref ref-type="disp-formula" rid="eqn-1">(1)</xref>, which produces the <italic>i</italic>-th hazy image <italic>I</italic><sub><italic>f</italic></sub>(<italic>x</italic>) for each clear image <italic>R</italic><sub><italic>d</italic></sub>(<italic>x</italic>). Once <italic>I</italic><sub><italic>f</italic></sub>(<italic>x</italic>) is obtained for each image, its hue, saturation, and brightness components are extracted, and the matrices required for the three variables are computed. These three matrices form the sample space for fitting the curve of <xref ref-type="disp-formula" rid="eqn-12">(12)</xref> using the least-squares regression approach, as explained in <xref ref-type="disp-formula" rid="eqn-16">(16)</xref>&#x2013;<xref ref-type="disp-formula" rid="eqn-18">(18)</xref>, which yields the values of the linear coefficients. Algorithm 1 computes the required values of <italic>c</italic><sub>1</sub>, <italic>c</italic><sub>2</sub>, and <italic>c</italic><sub>3</sub> as delineated below:</p>
<fig id="fig-10">
<graphic mimetype="image" mime-subtype="tif" xlink:href="CMES_67530-fig-10.tif"/>
</fig>
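The dataset-generation step described above can be sketched with a toy NumPy routine. It assumes the hazy-image model (1) takes the standard atmospheric scattering form <italic>I</italic><sub><italic>f</italic></sub>(<italic>x</italic>) = <italic>R</italic><sub><italic>d</italic></sub>(<italic>x</italic>)<italic>t</italic>(<italic>x</italic>) + <italic>A</italic>(1 &#x2212; <italic>t</italic>(<italic>x</italic>)); the function name, constant scattering coefficient, and constant airlight are illustrative only:

```python
import numpy as np

def synthesize_hazy(clear_rgb, depth, beta=1.0, airlight=0.9):
    """Toy dataset-generation step: blend a clear image toward a constant
    airlight according to the transmission t = exp(-beta * depth)."""
    t = np.exp(-beta * depth)[..., np.newaxis]   # per-pixel transmission, shape (H, W, 1)
    return clear_rgb * t + airlight * (1.0 - t)  # hazy observation I_f(x)
```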
<p>The linear coefficient values obtained using Algorithm 1 are <italic>c</italic><sub>1</sub> &#x003D; 0.8612, <italic>c</italic><sub>2</sub> &#x003D; 0.8059, and <italic>c</italic><sub>3</sub> &#x003D; &#x2212;0.65525. To generate the transmission map, the scattering map is modeled using the estimated linear coefficients and <xref ref-type="disp-formula" rid="eqn-7">(7)</xref>. As a result, the redeveloped transmission map can be expressed as follows:
<disp-formula id="eqn-19"><label>(19)</label><mml:math id="mml-eqn-19" display="block"><mml:msub><mml:mi>t</mml:mi><mml:mrow><mml:mi>r</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mi>exp</mml:mi><mml:mo>&#x2061;</mml:mo><mml:mrow><mml:mo>(</mml:mo><mml:mo>&#x2212;</mml:mo><mml:mi>&#x03B2;</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>&#x00D7;</mml:mo><mml:msub><mml:mi>d</mml:mi><mml:mrow><mml:mn>0</mml:mn></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:math></disp-formula>
<disp-formula id="eqn-20"><label>(20)</label><mml:math id="mml-eqn-20" display="block"><mml:msub><mml:mi>t</mml:mi><mml:mrow><mml:mi>r</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mi>exp</mml:mi><mml:mo>&#x2061;</mml:mo><mml:mrow><mml:mo>(</mml:mo><mml:mo>&#x2212;</mml:mo><mml:mrow><mml:mo>(</mml:mo><mml:msub><mml:mi>c</mml:mi><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>&#x00D7;</mml:mo><mml:msub><mml:mi>d</mml:mi><mml:mrow><mml:mn>0</mml:mn></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>+</mml:mo><mml:msub><mml:mi>c</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:mi>&#x03B7;</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>&#x00D7;</mml:mo><mml:msub><mml:mi>d</mml:mi><mml:mrow><mml:mn>0</mml:mn></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>+</mml:mo><mml:msub><mml:mi>c</mml:mi><mml:mrow><mml:mn>3</mml:mn></mml:mrow></mml:msub><mml:mi>&#x03C7;</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>&#x00D7;</mml:mo><mml:msub><mml:mi>d</mml:mi><mml:mrow><mml:mn>0</mml:mn></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>+</mml:mo><mml:mi>&#x03B5;</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>&#x00D7;</mml:mo><mml:msub><mml:mi>d</mml:mi><mml:mrow><mml:mn>0</mml:mn></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:math></disp-formula></p>
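Eq. (20) can be evaluated per pixel as in the sketch below. This assumes that the depth map <monospace>d0</monospace> and the feature maps <monospace>eta</monospace>, <monospace>chi</monospace>, and <monospace>eps</monospace> have already been computed as NumPy arrays; the function name and keyword defaults are illustrative, with the defaults taken from the coefficients reported above:

```python
import numpy as np

def transmission_map(d0, eta, chi, eps, c1=0.8612, c2=0.8059, c3=-0.65525):
    """Variable-scattering transmission map of Eq. (20):
    t_r(x) = exp(-(c1 + c2*eta(x) + c3*chi(x) + eps(x)) * d0(x))."""
    beta = c1 + c2 * eta + c3 * chi + eps  # spatially varying beta(x)
    return np.exp(-beta * d0)

# With eta = chi = eps = 0 this reduces to a constant-beta map as in Eq. (19).
t = transmission_map(np.array([0.0, 1.0]), 0.0, 0.0, 0.0)
```

As expected for a transmission map, the output lies in (0, 1] and decreases with scene depth.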
<p>Additionally, the outcomes of current approaches have improved with the help of <xref ref-type="disp-formula" rid="eqn-20">(20)</xref>, and the efficiency of the suggested model is evaluated in the following section.</p>
</sec>
<sec id="s4">
<label>4</label>
<title>Experimental Results and Analysis</title>
<p>To compare the performance of the suggested technique with existing methods, both qualitative and quantitative analyses were conducted in MATLAB 19 on a 64-bit Core i7 Processor with 6 GB RAM [<xref ref-type="bibr" rid="ref-4">4</xref>,<xref ref-type="bibr" rid="ref-15">15</xref>,<xref ref-type="bibr" rid="ref-16">16</xref>,<xref ref-type="bibr" rid="ref-25">25</xref>]. The test images were selected from available datasets, including Frida2 [<xref ref-type="bibr" rid="ref-26">26</xref>] and the Waterloo IVC Dehazed Image Database [<xref ref-type="bibr" rid="ref-27">27</xref>]. The dataset included real-world foggy photos captured at various times of day: morning, afternoon, and night.</p>
<sec id="s4_1">
<label>4.1</label>
<title>Qualitative Analysis</title>
<p>The quality evaluation was conducted by simulating two existing methods [<xref ref-type="bibr" rid="ref-15">15</xref>,<xref ref-type="bibr" rid="ref-16">16</xref>] using the proposed transmission model and a specified value of <italic>&#x03B2;</italic> on several sets of images, as illustrated in <xref ref-type="fig" rid="fig-2">Fig. 2</xref>. The figure consists of 7 rows and 5 columns of various images. For qualitative evaluation purposes, the arrangement of the figures is as follows: column 1 contains the original foggy images; column 2 shows the results of method [<xref ref-type="bibr" rid="ref-15">15</xref>] with a constant <italic>&#x03B2;</italic>; column 3 shows the results of method [<xref ref-type="bibr" rid="ref-15">15</xref>] with the proposed <italic>&#x03B2;</italic>(<italic>x</italic>); column 4 shows the results of method [<xref ref-type="bibr" rid="ref-16">16</xref>] with a constant <italic>&#x03B2;</italic>; and column 5 shows the results of method [<xref ref-type="bibr" rid="ref-16">16</xref>] with the proposed <italic>&#x03B2;</italic>(<italic>x</italic>).</p>
<fig id="fig-2">
<label>Figure 2</label>
<caption>
<title>Evaluation of the suggested and existing methods on real-world pictures. <bold>(a)</bold> Foggy input pictures; <bold>(b)</bold> outcomes of [<xref ref-type="bibr" rid="ref-15">15</xref>] and <bold>(d)</bold> outcomes of [<xref ref-type="bibr" rid="ref-16">16</xref>] with constant <bold><italic>&#x03B2;</italic></bold>; <bold>(c)</bold> outcomes of [<xref ref-type="bibr" rid="ref-15">15</xref>] and <bold>(e)</bold> outcomes of [<xref ref-type="bibr" rid="ref-16">16</xref>] using the suggested scattering model with variable <bold><italic>&#x03B2;</italic></bold></title>
</caption>
<graphic mimetype="image" mime-subtype="tif" xlink:href="CMES_67530-fig-2.tif"/>
</fig>
<p>Additionally, to illustrate the performance of the proposed method more clearly, row 5 presents a zoomed-in version of row 4. All of the visuals in <xref ref-type="fig" rid="fig-2">Fig. 2</xref> demonstrate that the proposed scattering model provides superior visibility compared to the result obtained from the existing techniques. In other words, the performance of the existing methods improves when the proposed transmission map is used in place of their original one.</p>
</sec>
<sec id="s4_2">
<label>4.2</label>
<title>Quantitative Analysis</title>
<p>This analysis includes several performance metrics, described in [<xref ref-type="bibr" rid="ref-27">27</xref>,<xref ref-type="bibr" rid="ref-28">28</xref>], that are calculated to assess the defogged image&#x2019;s quality. This assessment offers a solid foundation for measuring how well algorithms restore deteriorated edges and improve contrast, structure preservation, and visibility. Further, simulations have been conducted utilizing the proposed and existing single image defogging techniques [<xref ref-type="bibr" rid="ref-4">4</xref>,<xref ref-type="bibr" rid="ref-15">15</xref>,<xref ref-type="bibr" rid="ref-16">16</xref>,<xref ref-type="bibr" rid="ref-25">25</xref>] to perform the quantitative analysis, and <xref ref-type="fig" rid="fig-3">Figs. 3</xref>&#x2013;<xref ref-type="fig" rid="fig-8">8</xref> illustrate the acquired outcomes. On the images in <xref ref-type="fig" rid="fig-3">Figs. 3</xref>&#x2013;<xref ref-type="fig" rid="fig-8">8</xref>, a quantitative evaluation has been performed using the following performance measures, and the acquired outcomes are shown in <xref ref-type="table" rid="table-1">Table 1</xref>.</p>
<fig id="fig-3">
<label>Figure 3</label>
<caption>
<title>Zoomed foggy picture with a shallow background. (<bold>a</bold>) Foggy input picture; (<bold>b</bold>&#x2013;<bold>e</bold>) recovered pictures [<xref ref-type="bibr" rid="ref-4">4</xref>,<xref ref-type="bibr" rid="ref-15">15</xref>,<xref ref-type="bibr" rid="ref-16">16</xref>,<xref ref-type="bibr" rid="ref-25">25</xref>]; and (<bold>f</bold>) proposed method</title>
</caption>
<graphic mimetype="image" mime-subtype="tif" xlink:href="CMES_67530-fig-3.tif"/>
</fig><fig id="fig-4">
<label>Figure 4</label>
<caption>
<title>Visual comparison of a foggy morning picture of a policeman on duty. (<bold>a</bold>) Foggy input picture; (<bold>b</bold>&#x2013;<bold>e</bold>) recovered pictures [<xref ref-type="bibr" rid="ref-4">4</xref>,<xref ref-type="bibr" rid="ref-15">15</xref>,<xref ref-type="bibr" rid="ref-16">16</xref>,<xref ref-type="bibr" rid="ref-25">25</xref>]; and (<bold>f</bold>) proposed method</title>
</caption>
<graphic mimetype="image" mime-subtype="tif" xlink:href="CMES_67530-fig-4.tif"/>
</fig><fig id="fig-5">
<label>Figure 5</label>
<caption>
<title>Visual comparison of a challenging foggy picture of railway tracks. (<bold>a</bold>) Foggy input picture; (<bold>b</bold>&#x2013;<bold>e</bold>) recovered pictures [<xref ref-type="bibr" rid="ref-4">4</xref>,<xref ref-type="bibr" rid="ref-15">15</xref>,<xref ref-type="bibr" rid="ref-16">16</xref>,<xref ref-type="bibr" rid="ref-25">25</xref>]; and (<bold>f</bold>) proposed method</title>
</caption>
<graphic mimetype="image" mime-subtype="tif" xlink:href="CMES_67530-fig-5.tif"/>
</fig><fig id="fig-6">
<label>Figure 6</label>
<caption>
<title>Foggy image of an airport with a deep background. (<bold>a</bold>) Foggy input picture; (<bold>b</bold>&#x2013;<bold>e</bold>) recovered pictures [<xref ref-type="bibr" rid="ref-4">4</xref>,<xref ref-type="bibr" rid="ref-15">15</xref>,<xref ref-type="bibr" rid="ref-16">16</xref>,<xref ref-type="bibr" rid="ref-25">25</xref>]; and (<bold>f</bold>) proposed method</title>
</caption>
<graphic mimetype="image" mime-subtype="tif" xlink:href="CMES_67530-fig-6.tif"/>
</fig><fig id="fig-7">
<label>Figure 7</label>
<caption>
<title>Synthetic objective testing picture. (<bold>a</bold>) Foggy input picture; (<bold>b</bold>&#x2013;<bold>e</bold>) recovered pictures [<xref ref-type="bibr" rid="ref-4">4</xref>,<xref ref-type="bibr" rid="ref-15">15</xref>,<xref ref-type="bibr" rid="ref-16">16</xref>,<xref ref-type="bibr" rid="ref-25">25</xref>]; and (<bold>f</bold>) proposed method</title>
</caption>
<graphic mimetype="image" mime-subtype="tif" xlink:href="CMES_67530-fig-7.tif"/>
</fig><fig id="fig-8">
<label>Figure 8</label>
<caption>
<title>Synthetic testing images with multiple colors. (<bold>a</bold>) Foggy input picture; (<bold>b</bold>&#x2013;<bold>e</bold>) recovered pictures [<xref ref-type="bibr" rid="ref-4">4</xref>,<xref ref-type="bibr" rid="ref-15">15</xref>,<xref ref-type="bibr" rid="ref-16">16</xref>,<xref ref-type="bibr" rid="ref-25">25</xref>]; and (<bold>f</bold>) proposed method</title>
</caption>
<graphic mimetype="image" mime-subtype="tif" xlink:href="CMES_67530-fig-8.tif"/>
</fig><table-wrap id="table-1">
<label>Table 1</label>
<caption>
<title>Performance measures of the suggested and existing methods</title>
</caption>
<table>
<colgroup>
<col/>
<col/>
<col/>
<col/>
<col/>
<col/>
<col/>
<col/>
</colgroup>
<thead>
<tr>
<th rowspan="2">Parameters</th>
<th rowspan="2">Methods</th>
<th colspan="6">Figures</th>
</tr>
<tr>
<th>3</th>
<th>4</th>
<th>5</th>
<th>6</th>
<th>7</th>
<th>8</th>
</tr>
</thead>
<tbody>
<tr>
<td><bold>e</bold></td>
<td>[<xref ref-type="bibr" rid="ref-25">25</xref>]</td>
<td>7.059</td>
<td>16.246</td>
<td>14.348</td>
<td>16.466</td>
<td>15.860</td>
<td>12.402</td>
</tr>
<tr>
<td/>
<td>[<xref ref-type="bibr" rid="ref-4">4</xref>]</td>
<td>14.600</td>
<td>31.386</td>
<td>26.532</td>
<td>25.366</td>
<td>17.037</td>
<td>21.600</td>
</tr>
<tr>
<td/>
<td>[<xref ref-type="bibr" rid="ref-15">15</xref>]</td>
<td>4.737</td>
<td>21.064</td>
<td>6.917</td>
<td>12.654</td>
<td>9.216</td>
<td>17.596</td>
</tr>
<tr>
<td/>
<td>[<xref ref-type="bibr" rid="ref-16">16</xref>]</td>
<td>2.41</td>
<td>16.751</td>
<td>9.0001</td>
<td>12.770</td>
<td>11.183</td>
<td>12.718</td>
</tr>
<tr>
<td/>
<td><bold>Suggested</bold></td>
<td>34.8771</td>
<td>40.023</td>
<td>25.207</td>
<td>28.075</td>
<td>34.679</td>
<td>31.679</td>
</tr>
<tr>
<td><inline-formula id="ieqn-9"><mml:math id="mml-ieqn-9"><mml:mover><mml:mi mathvariant="bold-italic">r</mml:mi><mml:mo accent="false">&#x00AF;</mml:mo></mml:mover></mml:math></inline-formula></td>
<td>[<xref ref-type="bibr" rid="ref-25">25</xref>]</td>
<td>1.084</td>
<td>0.948</td>
<td>2.582</td>
<td>2.006</td>
<td>2.834</td>
<td>0.954</td>
</tr>
<tr>
<td/>
<td>[<xref ref-type="bibr" rid="ref-4">4</xref>]</td>
<td>1.576</td>
<td>3.934</td>
<td>2.418</td>
<td>1.813</td>
<td>3.459</td>
<td>2.902</td>
</tr>
<tr>
<td/>
<td>[<xref ref-type="bibr" rid="ref-15">15</xref>]</td>
<td>1.031</td>
<td>0.995</td>
<td>1.680</td>
<td>1.742</td>
<td>2.066</td>
<td>1.161</td>
</tr>
<tr>
<td/>
<td>[<xref ref-type="bibr" rid="ref-16">16</xref>]</td>
<td>1.078</td>
<td>0.917</td>
<td>2.161</td>
<td>1.998</td>
<td>2.540</td>
<td>1.210</td>
</tr>
<tr>
<td/>
<td><bold>Suggested</bold></td>
<td>1.801</td>
<td>4.653</td>
<td>3.029</td>
<td>2.224</td>
<td>6.429</td>
<td>4.029</td>
</tr>
<tr>
<td><bold><italic>IVM</italic></bold></td>
<td>[<xref ref-type="bibr" rid="ref-25">25</xref>]</td>
<td>4.387</td>
<td>7.000</td>
<td>3.911</td>
<td>5.244</td>
<td>4.685</td>
<td>5.114</td>
</tr>
<tr>
<td/>
<td>[<xref ref-type="bibr" rid="ref-4">4</xref>]</td>
<td>5.950</td>
<td>7.397</td>
<td>9.191</td>
<td>8.391</td>
<td>5.228</td>
<td>6.728</td>
</tr>
<tr>
<td/>
<td>[<xref ref-type="bibr" rid="ref-15">15</xref>]</td>
<td>3.875</td>
<td>8.924</td>
<td>2.252</td>
<td>4.286</td>
<td>3.334</td>
<td>6.385</td>
</tr>
<tr>
<td/>
<td>[<xref ref-type="bibr" rid="ref-16">16</xref>]</td>
<td>3.3630</td>
<td>9.312</td>
<td>2.888</td>
<td>4.678</td>
<td>3.897</td>
<td>5.653</td>
</tr>
<tr>
<td/>
<td><bold>Suggested</bold></td>
<td>9.1486</td>
<td>9.0212</td>
<td>9.699</td>
<td>8.968</td>
<td>8.798</td>
<td>9.557</td>
</tr>
<tr>
<td><bold><italic>CG</italic></bold></td>
<td>[<xref ref-type="bibr" rid="ref-25">25</xref>]</td>
<td>0.095</td>
<td>0.292</td>
<td>0.188</td>
<td>0.239</td>
<td>0.194</td>
<td>0.074</td>
</tr>
<tr>
<td/>
<td>[<xref ref-type="bibr" rid="ref-4">4</xref>]</td>
<td>0.190</td>
<td>0.280</td>
<td>0.472</td>
<td>0.217</td>
<td>0.224</td>
<td>0.347</td>
</tr>
<tr>
<td/>
<td>[<xref ref-type="bibr" rid="ref-15">15</xref>]</td>
<td>0.068</td>
<td>0.729</td>
<td>0.082</td>
<td>0.172</td>
<td>0.113</td>
<td>0.112</td>
</tr>
<tr>
<td/>
<td>[<xref ref-type="bibr" rid="ref-16">16</xref>]</td>
<td>0.047</td>
<td>0.103</td>
<td>0.195</td>
<td>0.308</td>
<td>0.222</td>
<td>0.141</td>
</tr>
<tr>
<td/>
<td><bold>Suggested</bold></td>
<td>0.307</td>
<td>0.443</td>
<td>0.795</td>
<td>0.497</td>
<td>0.612</td>
<td>0.688</td>
</tr>
<tr>
<td><bold><italic>VCM</italic></bold></td>
<td>[<xref ref-type="bibr" rid="ref-25">25</xref>]</td>
<td>32.456</td>
<td>21.200</td>
<td>35.768</td>
<td>61.000</td>
<td>54.462</td>
<td>23.846</td>
</tr>
<tr>
<td/>
<td>[<xref ref-type="bibr" rid="ref-4">4</xref>]</td>
<td>37.017</td>
<td>40.384</td>
<td>59.333</td>
<td>30.961</td>
<td>57.665</td>
<td>71.800</td>
</tr>
<tr>
<td/>
<td>[<xref ref-type="bibr" rid="ref-15">15</xref>]</td>
<td>28.245</td>
<td>18.500</td>
<td>51.730</td>
<td>55.800</td>
<td>36.842</td>
<td>22.500</td>
</tr>
<tr>
<td/>
<td>[<xref ref-type="bibr" rid="ref-16">16</xref>]</td>
<td>27.017</td>
<td>54.166</td>
<td>59.807</td>
<td>56.801</td>
<td>42.791</td>
<td>24.615</td>
</tr>
<tr>
<td/>
<td><bold>Suggested</bold></td>
<td>34.736</td>
<td>47.307</td>
<td>76.333</td>
<td>31.923</td>
<td>76.659</td>
<td>86.401</td>
</tr>
</tbody>
</table>
</table-wrap>
<p><italic>Blind Assessment</italic>: To evaluate an algorithm&#x2019;s capacity to retain and enhance edges, two parameters, <bold>e</bold> and <inline-formula id="ieqn-8"><mml:math id="mml-ieqn-8"><mml:mover><mml:mi mathvariant="bold-italic">r</mml:mi><mml:mo accent="false">&#x00AF;</mml:mo></mml:mover></mml:math></inline-formula>, are used.</p>
<p>Greater values of these parameters indicate that the suggested method can improve the level of visibility and preserve edges. <xref ref-type="table" rid="table-1">Table 1</xref> makes it abundantly clear that the suggested model has improved the values of these two descriptors.</p>
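One common formulation of these two descriptors takes <monospace>e</monospace> as the rate of newly visible edges after restoration and the second descriptor as the geometric mean of gradient-magnitude ratios. The sketch below follows that formulation; the gradient operator and visibility threshold are illustrative choices, not necessarily those of the evaluation protocol cited in [27,28]:

```python
import numpy as np

def blind_edge_metrics(foggy, restored, thresh=0.1):
    """Blind restoration descriptors, in one common formulation:
    e    : rate of edges newly visible after restoration,
    rbar : geometric mean of per-pixel gradient-magnitude ratios."""
    def grad_mag(img):
        gy, gx = np.gradient(img.astype(float))
        return np.hypot(gx, gy)

    g_f, g_r = grad_mag(foggy), grad_mag(restored)
    n_f = np.count_nonzero(g_f > thresh)   # visible edges before restoration
    n_r = np.count_nonzero(g_r > thresh)   # visible edges after restoration
    e = (n_r - n_f) / max(n_f, 1)
    ratio = g_r / np.maximum(g_f, 1e-6)    # guard against division by zero
    rbar = float(np.exp(np.mean(np.log(np.maximum(ratio, 1e-6)))))
    return e, rbar

# Toy check: a restored ramp 8x steeper than the foggy one gives rbar = 8.
foggy = np.outer(np.linspace(0.0, 1.0, 32), np.ones(32))
e, rbar = blind_edge_metrics(foggy, 8.0 * foggy)
```

Larger values of both descriptors indicate stronger edge restoration, consistent with the interpretation above.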

<p><italic>Image Visibility Measure (IVM)</italic>: This visible-edge-segmentation-based assessment parameter was suggested by Yu et al. [<xref ref-type="bibr" rid="ref-29">29</xref>]. According to Yu et al., a defogged image with a larger value of this attribute has better visibility. <xref ref-type="table" rid="table-1">Table 1</xref> confirms the proposed model&#x2019;s ability to improve the <italic>IVM</italic>.</p>

<p><italic>Contrast Gain (CG) and Visual Contrast Measure (VCM)</italic>: The performance of the various defogging methods is also evaluated using these two attributes. Tripathi et al. [<xref ref-type="bibr" rid="ref-3">3</xref>] and Jobson et al. [<xref ref-type="bibr" rid="ref-30">30</xref>] suggested CG and VCM, respectively, to quantify the level of visibility in recovered images. Both parameters should be larger for clear pictures than for unclear ones. <xref ref-type="table" rid="table-1">Table 1</xref> shows that, with the suggested model, the derived values of CG and VCM are significantly greater for each of the six test images. While the preceding metrics support non-reference-based analysis, reference-based analysis also allows quantitative assessment of the recovered image. Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index Measure (SSIM) are reference-based parameters measured on test images (with the original scene available) from the RESIDE [<xref ref-type="bibr" rid="ref-31">31</xref>] dataset, which is widely used as a test set in current dehazing approaches.</p>
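Contrast gain can be sketched as the difference in mean local contrast between the restored and foggy images. The block-based version below is a simplified illustration: the window size and the std/mean local-contrast definition are assumptions for this sketch, not necessarily the exact formulation of the cited authors:

```python
import numpy as np

def contrast_gain(foggy, restored, win=3):
    """Simplified contrast gain: mean local contrast (std / mean over
    non-overlapping win x win blocks) of restored minus that of foggy."""
    def mean_local_contrast(img):
        img = img.astype(float)
        h, w = img.shape
        vals = []
        for y in range(0, h - win + 1, win):
            for x in range(0, w - win + 1, win):
                block = img[y:y + win, x:x + win]
                m = block.mean()
                vals.append(block.std() / m if m > 0 else 0.0)
        return float(np.mean(vals))

    return mean_local_contrast(restored) - mean_local_contrast(foggy)

# Toy check: a flat (zero-contrast) image vs. a high-contrast checkerboard.
foggy = np.full((16, 16), 100.0)
restored = (np.indices((16, 16)).sum(axis=0) % 2) * 150.0 + 50.0
gain = contrast_gain(foggy, restored)
```

A positive gain indicates that the restoration increased local contrast, matching the interpretation of CG in the text.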

<p>Simulations have been performed on both the proposed and existing algorithms [<xref ref-type="bibr" rid="ref-4">4</xref>,<xref ref-type="bibr" rid="ref-15">15</xref>,<xref ref-type="bibr" rid="ref-16">16</xref>,<xref ref-type="bibr" rid="ref-25">25</xref>]. For the results in <xref ref-type="table" rid="table-2">Table 2</xref>, we used the SOTS (Synthetic Objective Testing Set) subset of the RESIDE dataset for quantitative evaluation, as it provides clean ground-truth images suitable for benchmarking dehazing algorithms. Due to computational and resource limitations, we focused on RESIDE-SOTS in this article.</p>
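PSNR follows directly from the mean squared error against the ground truth; a minimal NumPy sketch is given below (SSIM is considerably more involved and in practice is typically computed with a library routine such as scikit-image&#x2019;s <monospace>structural_similarity</monospace>):

```python
import numpy as np

def psnr(ref, test, data_range=255.0):
    """Peak Signal-to-Noise Ratio (dB) between a ground-truth image `ref`
    and a dehazed result `test`; higher is better."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(data_range ** 2 / mse)

# Toy check: a constant error of 16 gray levels on an 8-bit image.
p = psnr(np.zeros((8, 8)), np.full((8, 8), 16.0))  # about 24.05 dB
```

The per-image values are then averaged over the test set, as reported in Table 2.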
<table-wrap id="table-2">
<label>Table 2</label>
<caption>
<title>Reference-based quantitative assessment for picture datasets [<xref ref-type="bibr" rid="ref-31">31</xref>]</title>
</caption>
<table>
<colgroup>
<col/>
<col/>
<col/>
<col/>
<col/>
<col/>
</colgroup>
<thead>
<tr>
<th rowspan="2">Parameters</th>
<th colspan="4">Existing methods</th>
<th rowspan="2">Suggested</th>
</tr>
<tr>
<th>[<xref ref-type="bibr" rid="ref-25">25</xref>]</th>
<th>[<xref ref-type="bibr" rid="ref-4">4</xref>]</th>
<th>[<xref ref-type="bibr" rid="ref-15">15</xref>]</th>
<th>[<xref ref-type="bibr" rid="ref-16">16</xref>]</th>
</tr>
</thead>
<tbody>
<tr>
<td><bold><italic>PSNR</italic></bold></td>
<td><bold>8.58</bold></td>
<td><bold>9.08</bold></td>
<td><bold>12.71</bold></td>
<td><bold>13.98</bold></td>
<td><bold>19.79</bold></td>
</tr>
<tr>
<td><bold><italic>SSIM</italic></bold></td>
<td><bold>0.65</bold></td>
<td><bold>0.78</bold></td>
<td><bold>0.81</bold></td>
<td><bold>0.69</bold></td>
<td><bold>0.88</bold></td>
</tr>
</tbody>
</table>
</table-wrap>
<p><xref ref-type="table" rid="table-2">Table 2</xref> reveals that the average values of PSNR and SSIM for the dataset [<xref ref-type="bibr" rid="ref-31">31</xref>] are appreciably high for the proposed model. Finally, average values of each non-reference-based parameter are calculated and presented as a bar chart in <xref ref-type="fig" rid="fig-9">Fig. 9</xref>. It is evident from <xref ref-type="fig" rid="fig-9">Fig. 9</xref> that the proposed haze removal approach outperforms the other current methods [<xref ref-type="bibr" rid="ref-4">4</xref>,<xref ref-type="bibr" rid="ref-15">15</xref>,<xref ref-type="bibr" rid="ref-16">16</xref>,<xref ref-type="bibr" rid="ref-25">25</xref>] in all aspects.</p>
<fig id="fig-9">
<label>Figure 9</label>
<caption>
<title>Assessment of all the performance parameters for the existing [<xref ref-type="bibr" rid="ref-4">4</xref>,<xref ref-type="bibr" rid="ref-15">15</xref>,<xref ref-type="bibr" rid="ref-16">16</xref>,<xref ref-type="bibr" rid="ref-25">25</xref>] and proposed methods</title>
</caption>
<graphic mimetype="image" mime-subtype="tif" xlink:href="CMES_67530-fig-9.tif"/>
</fig>
</sec>
</sec>
<sec id="s5">
<label>5</label>
<title>Conclusion</title>
<p>Removing fog from a single image presents a significant challenge for most consumer and computer vision applications. The effectiveness of the fog removal process depends on the accuracy of estimating the transmission map, which in turn relies on precise scene depth assessment and a constant scattering coefficient, <italic>&#x03B2;</italic>. Furthermore, current techniques aim only to improve scene-depth estimation while maintaining a constant <italic>&#x03B2;</italic>. However, the haze removal process can perform more effectively with a transmission map based on variable scattering. As a result, an updated transmission map model that accounts for fluctuations in scattering information in foggy images is proposed. For effective image restoration, this updated transmission map is generated using the proposed scattering model combined with scene depth from existing approaches. Experimental findings show that the suggested model outperforms current fog removal methods in both qualitative and quantitative analyses. Moreover, the results suggest that the revised transmission map addresses established issues such as edge preservation and chromatic constancy.</p>
<p>A significant limitation of the proposed work is the increased computational complexity required to compute <italic>&#x03B2;</italic>(<italic>x</italic>), which scales with <inline-formula id="ieqn-10"><mml:math id="mml-ieqn-10"><mml:mi>n</mml:mi><mml:mo>&#x00D7;</mml:mo><mml:mi>m</mml:mi></mml:math></inline-formula> compared to using a constant <italic>&#x03B2;</italic>. Additionally, it raises hardware complexity when implemented in hardware. However, these limitations are acceptable given the substantial improvement in image restoration quality.</p>
</sec>
</body>
<back>
<ack>
<p>The authors extend their appreciation to the Deanship of Research and Graduate Studies at King Khalid University for funding this work through Large Research Project under grant number RGP2/274/46.</p>
</ack>
<sec>
<title>Funding Statement</title>
<p>The Deanship of Research and Graduate Studies at King Khalid University funded this work through a Large Research Project under grant number RGP2/274/46.</p>
</sec>
<sec>
<title>Author Contributions</title>
<p>Gaurav Saxena has developed the methodology, Kiran Napte &#x0026; Neeraj Kumar Shukla conceived the experiments, and Sushma Parihar analyzed the results. All authors reviewed the results and approved the final version of the manuscript.</p>
</sec>
<sec sec-type="data-availability">
<title>Availability of Data and Materials</title>
<p>The datasets analyzed during the current study are available at: Dataset A: <ext-link ext-link-type="uri" xlink:href="https://ivc.uwaterloo.ca/database/dehaze.html">https://ivc.uwaterloo.ca/database/dehaze.html</ext-link> (accessed on 01 January 2025), Dataset B: <ext-link ext-link-type="uri" xlink:href="http://perso.lcpc.fr/tarel.jean-philippe/bdd/frida.html">http://perso.lcpc.fr/tarel.jean-philippe/bdd/frida.html</ext-link> (accessed on 01 January 2025).</p>
</sec>
<sec>
<title>Ethics Approval</title>
<p>Not applicable.</p>
</sec>
<sec sec-type="COI-statement">
<title>Conflicts of Interest</title>
<p>The authors declare no conflicts of interest to report regarding the present study.</p>
</sec>
<ref-list content-type="authoryear">
<title>References</title>
<ref id="ref-1"><label>[1]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><surname>Hautiere</surname> <given-names>N</given-names></string-name>, <string-name><surname>Tarel</surname> <given-names>JP</given-names></string-name>, <string-name><surname>Aubert</surname> <given-names>D</given-names></string-name></person-group>. <article-title>Towards fog-free in-vehicle vision systems through contrast restoration</article-title>. In: <conf-name>2007 IEEE Conference on Computer Vision and Pattern Recognition; 2007 Jun 17&#x2013;22; Minneapolis, MN, USA.</conf-name> p. <fpage>1</fpage>&#x2013;<lpage>8</lpage>. doi:<pub-id pub-id-type="doi">10.1109/CVPR.2007.383259</pub-id>.</mixed-citation></ref>
<ref id="ref-2"><label>[2]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><surname>Woodell</surname> <given-names>G</given-names></string-name>, <string-name><surname>Jobson</surname> <given-names>DJ</given-names></string-name>, <string-name><surname>Rahman</surname> <given-names>ZU</given-names></string-name>, <string-name><surname>Hines</surname> <given-names>G</given-names></string-name></person-group>. <article-title>Advanced image processing of aerial imagery</article-title>. In: <conf-name>Visual information processing XV. Orlando, FL, USA: SPIE</conf-name>; <year>2006</year>. <fpage>62460E</fpage> p. doi:<pub-id pub-id-type="doi">10.1117/12.666767</pub-id>.</mixed-citation></ref>
<ref id="ref-3"><label>[3]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Tripathi</surname> <given-names>A</given-names></string-name>, <string-name><surname>Mukhopadhyay</surname> <given-names>S</given-names></string-name></person-group>. <article-title>Removal of fog from images: a review</article-title>. <source>IETE Tech Rev</source>. <year>2012</year>;<volume>29</volume>(<issue>2</issue>):<fpage>148</fpage>. doi:<pub-id pub-id-type="doi">10.4103/0256-4602.95386</pub-id>.</mixed-citation></ref>
<ref id="ref-4"><label>[4]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Raikwar</surname> <given-names>SC</given-names></string-name>, <string-name><surname>Tapaswi</surname> <given-names>S</given-names></string-name></person-group>. <article-title>Adaptive dehazing control factor based fast single image dehazing</article-title>. <source>Multimed Tools Appl</source>. <year>2020</year>;<volume>79</volume>(<issue>1</issue>):<fpage>891</fpage>&#x2013;<lpage>918</lpage>. doi:<pub-id pub-id-type="doi">10.1007/s11042-019-08120-z</pub-id>.</mixed-citation></ref>
<ref id="ref-5"><label>[5]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Ngo</surname> <given-names>D</given-names></string-name>, <string-name><surname>Lee</surname> <given-names>GD</given-names></string-name>, <string-name><surname>Kang</surname> <given-names>B</given-names></string-name></person-group>. <article-title>Improved color attenuation prior for single-image haze removal</article-title>. <source>Appl Sci</source>. <year>2019</year>;<volume>9</volume>(<issue>19</issue>):<fpage>4011</fpage>. doi:<pub-id pub-id-type="doi">10.3390/app9194011</pub-id>.</mixed-citation></ref>
<ref id="ref-6"><label>[6]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><surname>Saxena</surname> <given-names>G</given-names></string-name>, <string-name><surname>Bhadauria</surname> <given-names>SS</given-names></string-name>, <string-name><surname>Singhal</surname> <given-names>SK</given-names></string-name></person-group>. <article-title>Performance analysis of single image fog expulsion techniques</article-title>. In: <conf-name>2021 10th IEEE International Conference on Communication Systems and Network Technologies (CSNT); 2021 Jun 18&#x2013;19; Bhopal, India</conf-name>. p. <fpage>182</fpage>&#x2013;<lpage>7</lpage>. doi:<pub-id pub-id-type="doi">10.1109/csnt51715.2021.9509733</pub-id>.</mixed-citation></ref>
<ref id="ref-7"><label>[7]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><surname>Bhamidipati</surname> <given-names>RG</given-names></string-name>, <string-name><surname>Jatoth</surname> <given-names>RK</given-names></string-name>, <string-name><surname>Naresh</surname> <given-names>M</given-names></string-name>, <string-name><surname>Surepally</surname> <given-names>SP</given-names></string-name></person-group>. <article-title>Real-time classification of haze and non-haze images on Arduino Nano BLE using Edge Impulse</article-title>. In: <conf-name>2023 4th International Conference on Computing and Communication Systems (I3CS); 2023 Mar 16&#x2013;18; Shillong, India</conf-name>. p. <fpage>1</fpage>&#x2013;<lpage>7</lpage>. doi:<pub-id pub-id-type="doi">10.1109/I3CS58314.2023.10127312</pub-id>.</mixed-citation></ref>
<ref id="ref-8"><label>[8]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Tang</surname> <given-names>Z</given-names></string-name>, <string-name><surname>Zhang</surname> <given-names>X</given-names></string-name>, <string-name><surname>Li</surname> <given-names>X</given-names></string-name>, <string-name><surname>Zhang</surname> <given-names>S</given-names></string-name></person-group>. <article-title>Robust image hashing with ring partition and invariant vector distance</article-title>. <source>IEEE Trans Inf Forensics Secur</source>. <year>2016</year>;<volume>11</volume>(<issue>1</issue>):<fpage>200</fpage>&#x2013;<lpage>14</lpage>.</mixed-citation></ref>
<ref id="ref-9"><label>[9]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Choi</surname> <given-names>LK</given-names></string-name>, <string-name><surname>You</surname> <given-names>J</given-names></string-name>, <string-name><surname>Bovik</surname> <given-names>AC</given-names></string-name></person-group>. <article-title>Referenceless prediction of perceptual fog density and perceptual image defogging</article-title>. <source>IEEE Trans Image Process</source>. <year>2015</year>;<volume>24</volume>(<issue>11</issue>):<fpage>3888</fpage>&#x2013;<lpage>901</lpage>. doi:<pub-id pub-id-type="doi">10.1109/TIP.2015.2456502</pub-id>; <pub-id pub-id-type="pmid">26186784</pub-id></mixed-citation></ref>
<ref id="ref-10"><label>[10]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Sankaraiah</surname> <given-names>YR</given-names></string-name>, <string-name><surname>Guru</surname> <given-names>M</given-names></string-name>, <string-name><surname>Reddy</surname> <given-names>P</given-names></string-name>, <string-name><surname>Venkata</surname> <given-names>O</given-names></string-name>, <string-name><surname>Reddy</surname> <given-names>P</given-names></string-name>, <string-name><surname>Muralikrishna</surname> <given-names>K</given-names></string-name>, <etal>et al</etal></person-group>. <article-title>Deep learning model for haze removal from remote sensing images</article-title>. <source>Turk J Comput Math Educ</source>. <year>2023</year>;<volume>14</volume>(<issue>2</issue>):<fpage>375</fpage>&#x2013;<lpage>84</lpage>. doi:<pub-id pub-id-type="doi">10.17762/turcomat.v14i2.13662</pub-id>.</mixed-citation></ref>
<ref id="ref-11"><label>[11]</label><mixed-citation publication-type="book"><person-group person-group-type="author"><string-name><surname>Khade</surname> <given-names>PI</given-names></string-name>, <string-name><surname>Rajput</surname> <given-names>AS</given-names></string-name></person-group>. <chapter-title>Chapter 8&#x2014;efficient single image haze removal using CLAHE and Dark Channel Prior for Internet of multimedia things</chapter-title>. In: <person-group person-group-type="editor"><string-name><surname>Shukla</surname> <given-names>S</given-names></string-name>, <string-name><surname>Singh</surname> <given-names>AK</given-names></string-name>, <string-name><surname>Srivastava</surname> <given-names>G</given-names></string-name>, <string-name><surname>Xhafa</surname> <given-names>F</given-names></string-name></person-group>, editors. <source>Internet of multimedia things (IoMT)</source>. <publisher-loc>Cambridge, MA, USA</publisher-loc>: <publisher-name>Academic Press</publisher-name>; <year>2022</year>. p. <fpage>189</fpage>&#x2013;<lpage>202</lpage>. doi:<pub-id pub-id-type="doi">10.1016/b978-0-32-385845-8.00013-7</pub-id>.</mixed-citation></ref>
<ref id="ref-12"><label>[12]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Babu</surname> <given-names>GH</given-names></string-name>, <string-name><surname>Venkatram</surname> <given-names>N</given-names></string-name></person-group>. <article-title>ABF de-hazing algorithm based on deep learning CNN for single I-Haze detection</article-title>. <source>Adv Eng Softw</source>. <year>2023</year>;<volume>175</volume>(<issue>9</issue>):<fpage>103341</fpage>. doi:<pub-id pub-id-type="doi">10.1016/j.advengsoft.2022.103341</pub-id>.</mixed-citation></ref>
<ref id="ref-13"><label>[13]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Nayar</surname> <given-names>SK</given-names></string-name>, <string-name><surname>Narasimhan</surname> <given-names>SG</given-names></string-name></person-group>. <article-title>Vision in bad weather</article-title>. <source>IEEE Int Conf Comput Vis</source>. <year>1999</year>;<volume>2</volume>:<fpage>820</fpage>&#x2013;<lpage>7</lpage>. doi:<pub-id pub-id-type="doi">10.1109/iccv.1999.790306</pub-id>.</mixed-citation></ref>
<ref id="ref-14"><label>[14]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Tan</surname> <given-names>KK</given-names></string-name>, <string-name><surname>Oakley</surname> <given-names>JP</given-names></string-name></person-group>. <article-title>Physics-based approach to color image enhancement in poor visibility conditions</article-title>. <source>J Opt Soc Am A</source>. <year>2001</year>;<volume>18</volume>(<issue>10</issue>):<fpage>2460</fpage>&#x2013;<lpage>7</lpage>. doi:<pub-id pub-id-type="doi">10.1364/josaa.18.002460</pub-id>; <pub-id pub-id-type="pmid">11583262</pub-id></mixed-citation></ref>
<ref id="ref-15"><label>[15]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Zhu</surname> <given-names>Q</given-names></string-name>, <string-name><surname>Mai</surname> <given-names>J</given-names></string-name>, <string-name><surname>Shao</surname> <given-names>L</given-names></string-name></person-group>. <article-title>A fast single image haze removal algorithm using color attenuation prior</article-title>. <source>IEEE Trans Image Process</source>. <year>2015</year>;<volume>24</volume>(<issue>11</issue>):<fpage>3522</fpage>&#x2013;<lpage>33</lpage>. doi:<pub-id pub-id-type="doi">10.1109/TIP.2015.2446191</pub-id>; <pub-id pub-id-type="pmid">26099141</pub-id></mixed-citation></ref>
<ref id="ref-16"><label>[16]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Raikwar</surname> <given-names>SC</given-names></string-name>, <string-name><surname>Tapaswi</surname> <given-names>S</given-names></string-name></person-group>. <article-title>An improved linear depth model for single image fog removal</article-title>. <source>Multimed Tools Appl</source>. <year>2018</year>;<volume>77</volume>(<issue>15</issue>):<fpage>19719</fpage>&#x2013;<lpage>44</lpage>. doi:<pub-id pub-id-type="doi">10.1007/s11042-017-5398-y</pub-id>.</mixed-citation></ref>
<ref id="ref-17"><label>[17]</label><mixed-citation publication-type="book"><person-group person-group-type="author"><string-name><surname>Saxena</surname> <given-names>G</given-names></string-name>, <string-name><surname>Bhadauria</surname> <given-names>SS</given-names></string-name></person-group>. <chapter-title>Haze identification and classification model for haze removal techniques</chapter-title>. In: <source>Advances in intelligent computing and communication</source>. <publisher-loc>Berlin/Heidelberg, Germany</publisher-loc>: <publisher-name>Springer</publisher-name>; <year>2021</year>. p. <fpage>123</fpage>&#x2013;<lpage>32</lpage>. doi:<pub-id pub-id-type="doi">10.1007/978-981-16-0695-3_13</pub-id>.</mixed-citation></ref>
<ref id="ref-18"><label>[18]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Saxena</surname> <given-names>G</given-names></string-name>, <string-name><surname>Bhadauria</surname> <given-names>SS</given-names></string-name></person-group>. <article-title>An efficient deep learning based fog removal model for multimedia applications</article-title>. <source>Turk J Electr Eng Comput Sci</source>. <year>2021</year>;<volume>29</volume>(<issue>3</issue>):<fpage>1445</fpage>&#x2013;<lpage>63</lpage>. doi:<pub-id pub-id-type="doi">10.3906/elk-2005-78</pub-id>.</mixed-citation></ref>
<ref id="ref-19"><label>[19]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Cui</surname> <given-names>Y</given-names></string-name>, <string-name><surname>Wang</surname> <given-names>Q</given-names></string-name>, <string-name><surname>Li</surname> <given-names>C</given-names></string-name>, <string-name><surname>Ren</surname> <given-names>W</given-names></string-name>, <string-name><surname>Knoll</surname> <given-names>A</given-names></string-name></person-group>. <article-title>EENet: an effective and efficient network for single image dehazing</article-title>. <source>Pattern Recognit</source>. <year>2025</year>;<volume>158</volume>(<issue>12</issue>):<fpage>111074</fpage>. doi:<pub-id pub-id-type="doi">10.1016/j.patcog.2024.111074</pub-id>.</mixed-citation></ref>
<ref id="ref-20"><label>[20]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Kumar</surname> <given-names>A</given-names></string-name>, <string-name><surname>Jha</surname> <given-names>RK</given-names></string-name>, <string-name><surname>Nishchal</surname> <given-names>NK</given-names></string-name></person-group>. <article-title>An improved Gamma correction model for image dehazing in a multi-exposure fusion framework</article-title>. <source>J Vis Commun Image Represent</source>. <year>2021</year>;<volume>78</volume>(<issue>1</issue>):<fpage>103122</fpage>. doi:<pub-id pub-id-type="doi">10.1016/j.jvcir.2021.103122</pub-id>.</mixed-citation></ref>
<ref id="ref-21"><label>[21]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Liu</surname> <given-names>Y</given-names></string-name>, <string-name><surname>Wang</surname> <given-names>X</given-names></string-name>, <string-name><surname>Hu</surname> <given-names>E</given-names></string-name>, <string-name><surname>Wang</surname> <given-names>A</given-names></string-name>, <string-name><surname>Shiri</surname> <given-names>B</given-names></string-name>, <string-name><surname>Lin</surname> <given-names>W</given-names></string-name></person-group>. <article-title>VNDHR: variational single nighttime image dehazing for enhancing visibility in intelligent transportation systems via hybrid regularization</article-title>. <source>IEEE Trans Intell Transp Syst</source>. <year>2025</year>;<volume>26</volume>(<issue>7</issue>):<fpage>10189</fpage>&#x2013;<lpage>203</lpage>. doi:<pub-id pub-id-type="doi">10.1109/TITS.2025.3550267</pub-id>.</mixed-citation></ref>
<ref id="ref-22"><label>[22]</label><mixed-citation publication-type="book"><person-group person-group-type="author"><string-name><surname>McCartney</surname> <given-names>EJ</given-names></string-name></person-group>. <source>Optics of the atmosphere: scattering by molecules and particles</source>. <publisher-loc>New York, NY, USA</publisher-loc>: <publisher-name>John Wiley &#x0026; Sons, Inc.</publisher-name>; <year>1976</year>.</mixed-citation></ref>
<ref id="ref-23"><label>[23]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Oakley</surname> <given-names>JP</given-names></string-name>, <string-name><surname>Satherley</surname> <given-names>BL</given-names></string-name></person-group>. <article-title>Improving image quality in poor visibility conditions using a physical model for contrast degradation</article-title>. <source>IEEE Trans Image Process</source>. <year>1998</year>;<volume>7</volume>(<issue>2</issue>):<fpage>167</fpage>&#x2013;<lpage>79</lpage>. doi:<pub-id pub-id-type="doi">10.1109/83.660994</pub-id>; <pub-id pub-id-type="pmid">18267391</pub-id></mixed-citation></ref>
<ref id="ref-24"><label>[24]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><surname>Narasimhan</surname> <given-names>SG</given-names></string-name>, <string-name><surname>Nayar</surname> <given-names>SK</given-names></string-name></person-group>. <article-title>Interactive (de)weathering of an image using physical models</article-title>. In: <conf-name>IEEE Workshop on Color and Photometric Methods in Computer Vision; 2003 Oct 13&#x2013;16; Nice, France</conf-name>. p. <fpage>1</fpage>&#x2013;<lpage>8</lpage>.</mixed-citation></ref>
<ref id="ref-25"><label>[25]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>He</surname> <given-names>K</given-names></string-name>, <string-name><surname>Sun</surname> <given-names>J</given-names></string-name>, <string-name><surname>Tang</surname> <given-names>X</given-names></string-name></person-group>. <article-title>Single image haze removal using dark channel prior</article-title>. <source>IEEE Trans Pattern Anal Mach Intell</source>. <year>2011</year>;<volume>33</volume>(<issue>12</issue>):<fpage>2341</fpage>&#x2013;<lpage>53</lpage>. doi:<pub-id pub-id-type="doi">10.1109/tpami.2010.168</pub-id>; <pub-id pub-id-type="pmid">20820075</pub-id></mixed-citation></ref>
<ref id="ref-26"><label>[26]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Tarel</surname> <given-names>JP</given-names></string-name>, <string-name><surname>Hautiere</surname> <given-names>N</given-names></string-name>, <string-name><surname>Caraffa</surname> <given-names>L</given-names></string-name>, <string-name><surname>Cord</surname> <given-names>A</given-names></string-name>, <string-name><surname>Halmaoui</surname> <given-names>H</given-names></string-name>, <string-name><surname>Gruyer</surname> <given-names>D</given-names></string-name></person-group>. <article-title>Vision enhancement in homogeneous and heterogeneous fog</article-title>. <source>IEEE Intell Transp Syst Mag</source>. <year>2012</year>;<volume>4</volume>(<issue>2</issue>):<fpage>6</fpage>&#x2013;<lpage>20</lpage>. doi:<pub-id pub-id-type="doi">10.1109/MITS.2012.2189969</pub-id>.</mixed-citation></ref>
<ref id="ref-27"><label>[27]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><surname>Ma</surname> <given-names>K</given-names></string-name>, <string-name><surname>Liu</surname> <given-names>W</given-names></string-name>, <string-name><surname>Wang</surname> <given-names>Z</given-names></string-name></person-group>. <article-title>Perceptual evaluation of single image dehazing algorithms</article-title>. In: <conf-name>IEEE International Conference on Image Processing (ICIP); 2015 Sep 27&#x2013;30; Quebec City, QC, Canada</conf-name>. p. <fpage>3600</fpage>&#x2013;<lpage>4</lpage>.</mixed-citation></ref>
<ref id="ref-28"><label>[28]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Hauti&#x00E8;re</surname> <given-names>N</given-names></string-name>, <string-name><surname>Tarel</surname> <given-names>JP</given-names></string-name>, <string-name><surname>Aubert</surname> <given-names>D</given-names></string-name>, <string-name><surname>Dumont</surname> <given-names>&#x00C9;</given-names></string-name></person-group>. <article-title>Blind contrast enhancement assessment by gradient ratioing at visible edges</article-title>. <source>Image Anal Stereol</source>. <year>2008</year>;<volume>27</volume>(<issue>2</issue>):<fpage>87</fpage>. doi:<pub-id pub-id-type="doi">10.5566/ias.v27.p87-95</pub-id>.</mixed-citation></ref>
<ref id="ref-29"><label>[29]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><surname>Yu</surname> <given-names>X</given-names></string-name>, <string-name><surname>Xiao</surname> <given-names>C</given-names></string-name>, <string-name><surname>Deng</surname> <given-names>M</given-names></string-name>, <string-name><surname>Peng</surname> <given-names>L</given-names></string-name></person-group>. <article-title>A classification algorithm to distinguish image as haze or non-haze</article-title>. In: <conf-name>2011 Sixth International Conference on Image and Graphics; 2011 Aug 12&#x2013;15; Hefei, China</conf-name>. p. <fpage>286</fpage>&#x2013;<lpage>9</lpage>. doi:<pub-id pub-id-type="doi">10.1109/ICIG.2011.22</pub-id>.</mixed-citation></ref>
<ref id="ref-30"><label>[30]</label><mixed-citation publication-type="book"><person-group person-group-type="author"><string-name><surname>Jobson</surname> <given-names>DJ</given-names></string-name>, <string-name><surname>Rahman</surname> <given-names>ZU</given-names></string-name>, <string-name><surname>Woodell</surname> <given-names>GA</given-names></string-name>, <string-name><surname>Hines</surname> <given-names>GD</given-names></string-name></person-group>. <chapter-title>A comparison of visual statistics for the image enhancement of FORESITE aerial images with those of major image classes</chapter-title>. In: <person-group person-group-type="editor"><string-name><surname>Rahman</surname> <given-names>Z</given-names></string-name>, <string-name><surname>Reichenbach</surname> <given-names>SE</given-names></string-name>, <string-name><surname>Neifeld</surname> <given-names>MA</given-names></string-name></person-group>, editors. <source>Visual information processing XV</source>. <publisher-loc>Orlando, FL, USA</publisher-loc>: <publisher-name>SPIE</publisher-name>; <year>2006</year>. <fpage>624601</fpage> p. doi:<pub-id pub-id-type="doi">10.1117/12.664591</pub-id>.</mixed-citation></ref>
<ref id="ref-31"><label>[31]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Li</surname> <given-names>B</given-names></string-name>, <string-name><surname>Ren</surname> <given-names>W</given-names></string-name>, <string-name><surname>Fu</surname> <given-names>D</given-names></string-name>, <string-name><surname>Tao</surname> <given-names>D</given-names></string-name>, <string-name><surname>Feng</surname> <given-names>D</given-names></string-name>, <string-name><surname>Zeng</surname> <given-names>W</given-names></string-name>, <etal>et al</etal></person-group>. <article-title>Benchmarking single-image dehazing and beyond</article-title>. <source>IEEE Trans Image Process</source>. <year>2019</year>;<volume>28</volume>(<issue>1</issue>):<fpage>492</fpage>&#x2013;<lpage>505</lpage>. doi:<pub-id pub-id-type="doi">10.1109/tip.2018.2867951</pub-id>; <pub-id pub-id-type="pmid">30176593</pub-id></mixed-citation></ref>
</ref-list>
</back></article>