<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.1 20151215//EN" "http://jats.nlm.nih.gov/publishing/1.1/JATS-journalpublishing1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:mml="http://www.w3.org/1998/Math/MathML" xml:lang="en" article-type="research-article" dtd-version="1.1">
<front>
<journal-meta>
<journal-id journal-id-type="pmc">CMES</journal-id>
<journal-id journal-id-type="nlm-ta">CMES</journal-id>
<journal-id journal-id-type="publisher-id">CMES</journal-id>
<journal-title-group>
<journal-title>Computer Modeling in Engineering &#x0026; Sciences</journal-title>
</journal-title-group>
<issn pub-type="epub">1526-1506</issn>
<issn pub-type="ppub">1526-1492</issn>
<publisher>
<publisher-name>Tech Science Press</publisher-name>
<publisher-loc>USA</publisher-loc>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="publisher-id">52585</article-id>
<article-id pub-id-type="doi">10.32604/cmes.2024.052585</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Article</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>Incorporating Lasso Regression to Physics-Informed Neural Network for Inverse PDE Problem</article-title>
<alt-title alt-title-type="left-running-head">Incorporating Lasso Regression to Physics-Informed Neural Network for Inverse PDE Problem</alt-title>
<alt-title alt-title-type="right-running-head">Incorporating Lasso Regression to Physics-Informed Neural Network for Inverse PDE Problem</alt-title>
</title-group>
<contrib-group>
<contrib id="author-1" contrib-type="author" corresp="yes">
<name name-style="western"><surname>Ma</surname><given-names>Meng</given-names></name><xref ref-type="aff" rid="aff-1">1</xref><xref ref-type="aff" rid="aff-2">2</xref><email>meng_ma@xjtu.edu.cn</email></contrib>
<contrib id="author-2" contrib-type="author">
<name name-style="western"><surname>Fu</surname><given-names>Liu</given-names></name><xref ref-type="aff" rid="aff-1">1</xref><xref ref-type="aff" rid="aff-2">2</xref></contrib>
<contrib id="author-3" contrib-type="author">
<name name-style="western"><surname>Guo</surname><given-names>Xu</given-names></name><xref ref-type="aff" rid="aff-3">3</xref></contrib>
<contrib id="author-4" contrib-type="author">
<name name-style="western"><surname>Zhai</surname><given-names>Zhi</given-names></name><xref ref-type="aff" rid="aff-1">1</xref><xref ref-type="aff" rid="aff-2">2</xref></contrib>
<aff id="aff-1"><label>1</label><institution>National Key Lab of Aerospace Power System and Plasma Technology, Xi&#x2019;an Jiaotong University</institution>, <addr-line>Xi&#x2019;an, 710049</addr-line>, <country>China</country></aff>
<aff id="aff-2"><label>2</label><institution>School of Mechanical Engineering, Xi&#x2019;an Jiaotong University</institution>, <addr-line>Xi&#x2019;an, 710049</addr-line>, <country>China</country></aff>
<aff id="aff-3"><label>3</label><institution>Department of Electrical and Computer Engineering, University of Massachusetts Lowell</institution>, <addr-line>Lowell, MA 01854</addr-line>, <country>USA</country></aff>
</contrib-group>
<author-notes>
<corresp id="cor1"><label>&#x002A;</label>Corresponding Author: Meng Ma. Email: <email>meng_ma@xjtu.edu.cn</email></corresp>
</author-notes>
<pub-date date-type="collection" publication-format="electronic">
<year>2024</year></pub-date>
<pub-date date-type="pub" publication-format="electronic"><day>20</day><month>8</month><year>2024</year></pub-date>
<volume>141</volume>
<issue>1</issue>
<fpage>385</fpage>
<lpage>399</lpage>
<history>
<date date-type="received">
<day>04</day>
<month>4</month>
<year>2024</year>
</date>
<date date-type="accepted">
<day>31</day>
<month>5</month>
<year>2024</year>
</date>
</history>
<permissions>
<copyright-statement>&#x00A9; 2024 The Authors.</copyright-statement>
<copyright-year>2024</copyright-year>
<copyright-holder>Published by Tech Science Press.</copyright-holder>
<license xlink:href="https://creativecommons.org/licenses/by/4.0/">
<license-p>This work is licensed under a <ext-link ext-link-type="uri" xlink:type="simple" xlink:href="https://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</ext-link>, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.</license-p>
</license>
</permissions>
<self-uri content-type="pdf" xlink:href="TSP_CMES_52585.pdf"></self-uri>
<abstract>
<p>Partial Differential Equations (PDEs) are among the most fundamental tools employed to model dynamic systems. Existing PDE modeling methods are typically derived from established knowledge and known phenomena, which is time-consuming and labor-intensive. Recently, discovering governing PDEs from collected real-world data via Physics Informed Neural Networks (PINNs) has provided a more efficient way to analyze new dynamic systems and establish PDE models. This study proposes Sequentially Threshold Least Squares-Lasso (STLasso), a module constructed by incorporating Lasso regression into the Sequentially Threshold Least Squares (STLS) algorithm, which can complete sparse regression of PDE coefficients under the constraint of the <italic>l</italic><sub>0</sub> norm. It further introduces PINN-STLasso, a physics informed neural network combined with Lasso sparse regression, able to find underlying PDEs from data with reduced data requirements and better interpretability. In addition, this research conducts experiments on canonical inverse PDE problems and compares the results to several recent methods. The results demonstrate that the proposed PINN-STLasso outperforms the other methods, achieving lower error rates even with less data.</p>
</abstract>
<kwd-group kwd-group-type="author">
<kwd>Physics-informed neural network</kwd>
<kwd>inverse partial differential equation</kwd>
<kwd>Lasso regression</kwd>
<kwd>scientific machine learning</kwd>
</kwd-group>
</article-meta>
</front>
<body>
<sec id="s1">
<label>1</label>
<title>Introduction</title>
<p>Dynamic systems are ubiquitous, and Partial Differential Equations (PDEs) are the primary tools utilized to describe them. Examples include the Navier-Stokes equation for the motion of fluids, the Burgers&#x2019; equation for the propagation of waves, and the Schr&#x00F6;dinger equation for the motion of microscopic particles [<xref ref-type="bibr" rid="ref-1">1</xref>&#x2013;<xref ref-type="bibr" rid="ref-4">4</xref>]. Common PDE problems can be categorized into forward and inverse problems [<xref ref-type="bibr" rid="ref-5">5</xref>]. The forward problem refers to solving the PDE to obtain analytical or numerical solutions, allowing for a comprehensive exploration of the system. Traditionally, these problems are solved by numerical methods such as the finite difference and finite element methods, which often entail heavy computational burdens [<xref ref-type="bibr" rid="ref-6">6</xref>]. In contrast, the inverse problem aims to extract the governing PDEs that accurately describe a complex dynamic system from abundant real experimental or operational data. Existing methods typically accomplish this by deriving the equations from known physical principles such as conservation laws or minimum energy principles, which often requires substantial human effort and expertise [<xref ref-type="bibr" rid="ref-7">7</xref>].</p>
<p>Due to the rapid development of computer science and deep learning, Physics Informed Neural Networks (PINNs) have become widely applied in solving forward and inverse PDE problems [<xref ref-type="bibr" rid="ref-8">8</xref>,<xref ref-type="bibr" rid="ref-9">9</xref>]. Deep Neural Networks (DNNs) are recognized for their powerful universal approximation capabilities and high expressivity, making them popular for solving PDE-related problems [<xref ref-type="bibr" rid="ref-10">10</xref>,<xref ref-type="bibr" rid="ref-11">11</xref>]. However, when dealing with high-dimensional complex systems, they cannot escape the curse of dimensionality [<xref ref-type="bibr" rid="ref-7">7</xref>]. PINN addresses this issue by integrating mathematical models, such as known PDEs or boundary conditions, directly into the network architecture. This is achieved by reinforcing the loss function with a residual term derived from the governing equation, which acts as a penalty, restricting the space of acceptable solutions and avoiding fitting the DNN solely to the available data [<xref ref-type="bibr" rid="ref-12">12</xref>,<xref ref-type="bibr" rid="ref-13">13</xref>].</p>
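The residual-penalty idea described above can be illustrated in a standalone sketch. A PINN obtains the derivatives by automatic differentiation of the network; in the snippet below, central finite differences on a grid stand in for autograd, and the toy field, grid sizes, and viscosity value are illustrative assumptions rather than values from this paper.

```python
import numpy as np

def burgers_residual(u, dx, dt, nu=0.1):
    """Residual u_t + u*u_x - nu*u_xx on a (space x time) grid.

    A PINN would compute these derivatives by automatic differentiation;
    central finite differences stand in here purely for illustration.
    """
    u_t = np.gradient(u, dt, axis=1)
    u_x = np.gradient(u, dx, axis=0)
    u_xx = np.gradient(u_x, dx, axis=0)
    return u_t + u * u_x - nu * u_xx

# Synthetic field on a small grid; the residual's mean square is the
# physics penalty term that restricts the space of acceptable solutions.
x = np.linspace(-1.0, 1.0, 64)[:, None]
t = np.linspace(0.0, 1.0, 32)[None, :]
u = np.exp(-x**2) * (1 + 0 * t)          # toy field, not a true solution
penalty = np.mean(burgers_residual(u, x[1, 0] - x[0, 0], t[0, 1] - t[0, 0]) ** 2)
```

A nonzero penalty signals that the candidate field violates the assumed governing law, which is exactly what the residual term contributes to the loss.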
<sec id="s1_1">
<label>1.1</label>
<title>Related Work</title>
<p>The inverse PDE problems addressed in this work mainly focus on scenarios where abundant measurement data are available for a specific dynamic system governed by some PDEs [<xref ref-type="bibr" rid="ref-14">14</xref>]. Without loss of generality, the PDE description can be formulated as follows:
<disp-formula id="eqn-1"><label>(1)</label><mml:math id="mml-eqn-1" display="block"><mml:msub><mml:mi>u</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mi>F</mml:mi><mml:mrow><mml:mo>[</mml:mo><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mi>u</mml:mi><mml:mo>,</mml:mo><mml:msup><mml:mi>u</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup><mml:mo>,</mml:mo><mml:mo>&#x22EF;</mml:mo><mml:mo>,</mml:mo><mml:mi mathvariant="normal">&#x2207;</mml:mi><mml:mi>u</mml:mi><mml:mo>,</mml:mo><mml:msup><mml:mi mathvariant="normal">&#x2207;</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup><mml:mi>u</mml:mi><mml:mo>,</mml:mo><mml:mi>u</mml:mi><mml:mo>&#x22C5;</mml:mo><mml:mi mathvariant="normal">&#x2207;</mml:mi><mml:mi>u</mml:mi><mml:mo>&#x22EF;</mml:mo><mml:mo>;</mml:mo><mml:mi>&#x03BB;</mml:mi><mml:mo>]</mml:mo></mml:mrow></mml:math></disp-formula>where <inline-formula id="ieqn-1"><mml:math id="mml-ieqn-1"><mml:mi>u</mml:mi><mml:mo>=</mml:mo><mml:mi>u</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>,</mml:mo><mml:mi>t</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:math></inline-formula> is the latent solution of the PDE; <inline-formula id="ieqn-2"><mml:math id="mml-ieqn-2"><mml:msub><mml:mi>u</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> is the time derivative term; <inline-formula id="ieqn-3"><mml:math id="mml-ieqn-3"><mml:mi mathvariant="normal">&#x2207;</mml:mi></mml:math></inline-formula> stands for the gradient operator; <inline-formula id="ieqn-4"><mml:math id="mml-ieqn-4"><mml:mi>F</mml:mi><mml:mrow><mml:mo>[</mml:mo><mml:mo>&#x22C5;</mml:mo><mml:mo>]</mml:mo></mml:mrow></mml:math></inline-formula> is a complex nonlinear function composed of <inline-formula id="ieqn-5"><mml:math id="mml-ieqn-5"><mml:mi>u</mml:mi></mml:math></inline-formula> and its derivative terms; and <inline-formula id="ieqn-6"><mml:math id="mml-ieqn-6"><mml:mi>&#x03BB;</mml:mi></mml:math></inline-formula> denotes all known parameters.</p>
<p>The primary objective of inverse PDE problems is to find the <inline-formula id="ieqn-7"><mml:math id="mml-ieqn-7"><mml:mi>F</mml:mi><mml:mrow><mml:mo>[</mml:mo><mml:mo>&#x22C5;</mml:mo><mml:mo>]</mml:mo></mml:mrow></mml:math></inline-formula> that optimally fits both the collected data and the physical laws [<xref ref-type="bibr" rid="ref-8">8</xref>,<xref ref-type="bibr" rid="ref-9">9</xref>]. An early approach to data-driven inverse PDE problems compared numerical derivatives of the experimental data with analytic derivatives of candidate functions, applying symbolic regression and evolutionary algorithms to determine the nonlinear dynamical system [<xref ref-type="bibr" rid="ref-15">15</xref>,<xref ref-type="bibr" rid="ref-16">16</xref>]. Recently, Rudy et al. [<xref ref-type="bibr" rid="ref-17">17</xref>&#x2013;<xref ref-type="bibr" rid="ref-19">19</xref>] shifted the focus towards sparse regression techniques. They applied sparsity-promoting techniques, constructing an overcomplete dictionary of simple candidate functions and their derivatives that constitute the components of the governing PDEs. On this basis, they proposed sequential threshold ridge regression (STRidge) to select the candidates that most accurately represent the data.</p>
<p>However, these approaches still involve manually designed differentiation operators and treat the whole process as conventional convex or nonconvex optimization, leading to representation errors. Raissi et al. [<xref ref-type="bibr" rid="ref-8">8</xref>] pioneered the concept of PINN, which embeds known physical laws into deep neural networks to express PDEs. PINN can be seen as a universal nonlinear function approximator: by utilizing physical laws to constrain the training and convergence of the neural network, it excels at searching for nonlinear functions that satisfy the constraint conditions in the PDE modeling process.</p>
<p>However, PINN still lags in accuracy and speed and struggles with the curse of dimensionality. To overcome these challenges, numerous PINN-based works have been undertaken [<xref ref-type="bibr" rid="ref-5">5</xref>,<xref ref-type="bibr" rid="ref-20">20</xref>&#x2013;<xref ref-type="bibr" rid="ref-23">23</xref>]. Chen et al. [<xref ref-type="bibr" rid="ref-24">24</xref>] proposed a method for physics-informed learning of governing equations from limited data. It leverages a pre-trained DNN with an Alternating Direction Optimization (ADO) algorithm, in which the DNN works as a surrogate model while STRidge acts as the sparse selection algorithm. Long et al. [<xref ref-type="bibr" rid="ref-25">25</xref>,<xref ref-type="bibr" rid="ref-26">26</xref>] combined numerical approximation of differential operators by convolutions with a symbolic multi-layer neural network for model recovery, learning both the differential operators and the nonlinear response function of the underlying PDE model. They also proved the feasibility of using convolutional neural networks as alternative models [<xref ref-type="bibr" rid="ref-25">25</xref>&#x2013;<xref ref-type="bibr" rid="ref-28">28</xref>]. Rao et al. [<xref ref-type="bibr" rid="ref-29">29</xref>] proposed the Physics encoded Recurrent Convolutional Neural Network (PeRCNN), which performs convolutional operations on slices of collected data; they introduced the Pi-block for cascading convolution and recursive operations, employed STRidge to implement PDE modeling, and extended the application to various tasks. Huang et al. [<xref ref-type="bibr" rid="ref-30">30</xref>,<xref ref-type="bibr" rid="ref-31">31</xref>] decomposed the data into robust principal components and outliers, optimizing the distribution of each part through convex optimization and using STRidge as the sparse dictionary matching algorithm. Numerous methods nowadays continue to adopt the paradigm of combining PINN with sparse regression, particularly STRidge [<xref ref-type="bibr" rid="ref-32">32</xref>&#x2013;<xref ref-type="bibr" rid="ref-36">36</xref>].</p>
</sec>
<sec id="s1_2">
<label>1.2</label>
<title>Our Method</title>
<p>STRidge plays a vital role in existing methods based on sparse dictionary regression. As proposed by Rudy et al. [<xref ref-type="bibr" rid="ref-17">17</xref>], STRidge sequentially combines Sequentially Threshold Least Squares (STLS) and Ridge regression. By substituting ridge regression for least squares in STLS, STRidge can deal with correlation in the data to some extent. However, Ridge regression tends to retain the influence of all features rather than selecting a subset of them, especially when confronted with multiple correlated features. Hence, it still exhibits shortcomings in dealing with high-dimensional multi-collinearity issues.</p>
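For reference, the STRidge procedure summarized above can be sketched in a few lines of NumPy; the ridge penalty, threshold, and iteration count below are illustrative hyperparameters, not values from the cited work.

```python
import numpy as np

def stridge(Theta, ut, lam=1e-5, tol=0.1, iters=10):
    """Sequential threshold ridge regression (STRidge) sketch.

    Solves a ridge-regularized least-squares problem, zeroes coefficients
    whose magnitude falls below `tol`, and refits on the surviving
    candidates; repeats until the support stabilizes.
    """
    n = Theta.shape[1]
    w = np.linalg.solve(Theta.T @ Theta + lam * np.eye(n), Theta.T @ ut)
    for _ in range(iters):
        small = np.abs(w) < tol
        w[small] = 0.0
        big = ~small
        if not big.any():
            break
        w[big] = np.linalg.solve(
            Theta[:, big].T @ Theta[:, big] + lam * np.eye(big.sum()),
            Theta[:, big].T @ ut,
        )
    return w
```

On a toy dictionary with columns [1, x, x&#x00B2;] and target u<sub>t</sub> = 2x, the procedure zeroes the spurious columns and keeps only the x term.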
<p>This study proposes Physics Informed Neural Network-Sequentially Threshold Least Squares-Lasso (PINN-STLasso), in which Lasso regression is incorporated into STLS. The resulting sparse regression module, STLasso, is integrated into the PINN framework to handle highly correlated data effectively. By using STLasso and PINN separately for the sparse coefficients and the DNN parameters, the governing equation of a dynamic system can be accurately determined from only a few observations. The main contributions of this study are as follows:</p>
<p>(1) A novel module, STLasso, is proposed for sparse regression. The sparse coefficients, crucial for representing the discovered PDE, are computed through STLasso.</p>
<p>(2) A framework of PINN-STLasso is constructed. The sparse regression and DNN parameters are trained separately within the framework, improving the interpretability of the overall process.</p>
<p>(3) Experiments on canonical inverse PDE problems are conducted. The obtained results demonstrate that the proposed PINN-STLasso is superior to several recent relevant methods in terms of accuracy and efficiency.</p>
<p>The rest of this paper is organized as follows. <xref ref-type="sec" rid="s2">Section 2</xref> introduces the proposed STLasso and PINN-STLasso. Experiments on canonical inverse PDE problems are presented in <xref ref-type="sec" rid="s3">Section 3</xref> to demonstrate their performance. Finally, conclusions are drawn in <xref ref-type="sec" rid="s4">Section 4</xref>.</p>
</sec>
</sec>
<sec id="s2">
<label>2</label>
<title>The Proposed PINN-STLasso</title>
<sec id="s2_1">
<label>2.1</label>
<title>STLasso</title>
<p>As described in <xref ref-type="sec" rid="s1">Section 1</xref>, an effective solution to the inverse PDE problem often involves utilizing a well-constructed sparse dictionary <inline-formula id="ieqn-8"><mml:math id="mml-ieqn-8"><mml:mi mathvariant="normal">&#x0398;</mml:mi><mml:mo>=</mml:mo><mml:mrow><mml:mo>[</mml:mo><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mi>u</mml:mi><mml:mo>,</mml:mo><mml:msup><mml:mi>u</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup><mml:mo>,</mml:mo><mml:mo>&#x22EF;</mml:mo><mml:mo>,</mml:mo><mml:mi mathvariant="normal">&#x2207;</mml:mi><mml:mi>u</mml:mi><mml:mo>,</mml:mo><mml:msup><mml:mi mathvariant="normal">&#x2207;</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup><mml:mi>u</mml:mi><mml:mo>,</mml:mo><mml:mi>u</mml:mi><mml:mo>&#x22C5;</mml:mo><mml:mi mathvariant="normal">&#x2207;</mml:mi><mml:mi>u</mml:mi><mml:mo>&#x22EF;</mml:mo><mml:mo>]</mml:mo></mml:mrow></mml:math></inline-formula>. This study attempts to find a sparse coefficient <inline-formula id="ieqn-9"><mml:math id="mml-ieqn-9"><mml:mrow><mml:mi mathvariant="normal">&#x039B;</mml:mi></mml:mrow></mml:math></inline-formula>, such that
<disp-formula id="eqn-2"><label>(2)</label><mml:math id="mml-eqn-2" display="block"><mml:mrow><mml:mtext>Residual</mml:mtext></mml:mrow><mml:mo>&#x003A;</mml:mo><mml:msub><mml:mi>u</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>&#x2212;</mml:mo><mml:mrow><mml:mi mathvariant="normal">&#x0398;</mml:mi></mml:mrow><mml:mrow><mml:mi mathvariant="normal">&#x039B;</mml:mi></mml:mrow><mml:mo stretchy="false">&#x2192;</mml:mo><mml:mn>0</mml:mn></mml:math></disp-formula></p>
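A dictionary of this shape can be assembled column by column. The sketch below is a minimal illustration: the particular candidate set is an arbitrary subset, and grid finite differences replace the network-based differentiation that the PINN framework would actually use.

```python
import numpy as np

def build_dictionary(u, dx):
    """Candidate dictionary Theta = [1, u, u^2, u_x, u_xx, u*u_x].

    Derivatives are computed by finite differences on a 1-D grid for
    illustration; in PINN-based methods they would instead come from
    differentiating the surrogate network.
    """
    u_x = np.gradient(u, dx)
    u_xx = np.gradient(u_x, dx)
    return np.column_stack([np.ones_like(u), u, u**2, u_x, u_xx, u * u_x])

x = np.linspace(0.0, 1.0, 50)
Theta = build_dictionary(x**2, x[1] - x[0])   # toy field u = x^2
```

Each row of Theta corresponds to one sample point and each column to one candidate term, so the residual in Eq. (2) becomes a standard linear regression in the unknown coefficient vector.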
<p>With the help of the Universal Approximation theorem, a DNN can be employed to represent the latent solution <inline-formula id="ieqn-10"><mml:math id="mml-ieqn-10"><mml:mi>u</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>&#x03B8;</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:math></inline-formula> of PDEs, whose derivative is <inline-formula id="ieqn-11"><mml:math id="mml-ieqn-11"><mml:msub><mml:mi>u</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo stretchy="false">(</mml:mo><mml:mi>&#x03B8;</mml:mi><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula>, where <inline-formula id="ieqn-12"><mml:math id="mml-ieqn-12"><mml:mi>&#x03B8;</mml:mi></mml:math></inline-formula> denotes the adjustable parameters of the network. The constructed <inline-formula id="ieqn-13"><mml:math id="mml-ieqn-13"><mml:mi>u</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>&#x03B8;</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:math></inline-formula> and <inline-formula id="ieqn-14"><mml:math id="mml-ieqn-14"><mml:msub><mml:mi>u</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>&#x03B8;</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:math></inline-formula> are required to simultaneously fit the collected measurements and adhere to the known sparse representation, following the idea of PINN. Thus, the loss function of the whole network is
<disp-formula id="eqn-3"><label>(3)</label><mml:math id="mml-eqn-3" display="block"><mml:msub><mml:mi>L</mml:mi><mml:mrow><mml:mi>l</mml:mi><mml:mi>o</mml:mi><mml:mi>s</mml:mi><mml:mi>s</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:msub><mml:mi>L</mml:mi><mml:mrow><mml:mi>m</mml:mi><mml:mi>e</mml:mi><mml:mi>a</mml:mi><mml:mi>s</mml:mi><mml:mi>u</mml:mi><mml:mi>r</mml:mi><mml:mi>e</mml:mi><mml:mi>m</mml:mi><mml:mi>e</mml:mi><mml:mi>n</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>+</mml:mo><mml:mi>&#x03B1;</mml:mi><mml:msub><mml:mi>L</mml:mi><mml:mrow><mml:mi>s</mml:mi><mml:mi>p</mml:mi><mml:mi>a</mml:mi><mml:mi>r</mml:mi><mml:mi>s</mml:mi><mml:mi>e</mml:mi></mml:mrow></mml:msub><mml:mo>+</mml:mo><mml:mi>&#x03B2;</mml:mi><mml:msub><mml:mrow><mml:mo symmetric="true">&#x2016;</mml:mo><mml:mi mathvariant="normal">&#x039B;</mml:mi><mml:mo symmetric="true">&#x2016;</mml:mo></mml:mrow><mml:mrow><mml:mn>0</mml:mn></mml:mrow></mml:msub></mml:math></disp-formula>where commonly
<disp-formula id="eqn-4"><label>(4)</label><mml:math id="mml-eqn-4" display="block"><mml:mtable columnalign="right left right left right left right left right left right left" rowspacing="3pt" columnspacing="0em 2em 0em 2em 0em 2em 0em 2em 0em 2em 0em" displaystyle="true"><mml:mtr><mml:mtd /><mml:mtd><mml:msub><mml:mi>L</mml:mi><mml:mrow><mml:mi>m</mml:mi><mml:mi>e</mml:mi><mml:mi>a</mml:mi><mml:mi>s</mml:mi><mml:mi>u</mml:mi><mml:mi>r</mml:mi><mml:mi>e</mml:mi><mml:mi>m</mml:mi><mml:mi>e</mml:mi><mml:mi>n</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mfrac><mml:mn>1</mml:mn><mml:msub><mml:mi>N</mml:mi><mml:mrow><mml:mi>m</mml:mi></mml:mrow></mml:msub></mml:mfrac><mml:msubsup><mml:mrow><mml:mo symmetric="true">&#x2016;</mml:mo><mml:mi>u</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>&#x03B8;</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>&#x2212;</mml:mo><mml:msub><mml:mi>u</mml:mi><mml:mrow><mml:mi>m</mml:mi></mml:mrow></mml:msub><mml:mo symmetric="true">&#x2016;</mml:mo></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msubsup></mml:mtd></mml:mtr><mml:mtr><mml:mtd /><mml:mtd><mml:msub><mml:mi>L</mml:mi><mml:mrow><mml:mi>s</mml:mi><mml:mi>p</mml:mi><mml:mi>a</mml:mi><mml:mi>r</mml:mi><mml:mi>s</mml:mi><mml:mi>e</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mfrac><mml:mn>1</mml:mn><mml:msub><mml:mi>N</mml:mi><mml:mrow><mml:mi>s</mml:mi></mml:mrow></mml:msub></mml:mfrac><mml:msubsup><mml:mrow><mml:mo symmetric="true">&#x2016;</mml:mo><mml:msub><mml:mi>u</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>&#x03B8;</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>&#x2212;</mml:mo><mml:mi mathvariant="normal">&#x0398;</mml:mi><mml:mi mathvariant="normal">&#x039B;</mml:mi><mml:mo 
symmetric="true">&#x2016;</mml:mo></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msubsup></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula></p>
<p><inline-formula id="ieqn-15"><mml:math id="mml-ieqn-15"><mml:mi>&#x03B1;</mml:mi><mml:mo>,</mml:mo><mml:mi>&#x03B2;</mml:mi></mml:math></inline-formula> are the weights of the different loss terms, and <inline-formula id="ieqn-16"><mml:math id="mml-ieqn-16"><mml:msub><mml:mrow><mml:mo symmetric="true">&#x2016;</mml:mo><mml:mo>&#x22C5;</mml:mo><mml:mo symmetric="true">&#x2016;</mml:mo></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mn>0</mml:mn><mml:mo>,</mml:mo><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mn>2</mml:mn><mml:mo>)</mml:mo></mml:mrow></mml:math></inline-formula> refers to the <inline-formula id="ieqn-17"><mml:math id="mml-ieqn-17"><mml:msub><mml:mi>l</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> norm. <inline-formula id="ieqn-18"><mml:math id="mml-ieqn-18"><mml:msub><mml:mi>N</mml:mi><mml:mrow><mml:mi>m</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> and <inline-formula id="ieqn-19"><mml:math id="mml-ieqn-19"><mml:msub><mml:mi>N</mml:mi><mml:mrow><mml:mi>s</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> are the numbers of data points used in each process.</p>
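Evaluating the loss of Eqs. (3) and (4) is direct once the network outputs and the dictionary are available. The sketch below uses illustrative weights for &#x03B1; and &#x03B2; and evaluates the l<sub>0</sub> term as a plain count of nonzero coefficients; how that term is actually optimized is a separate question.

```python
import numpy as np

def total_loss(u_pred, u_meas, ut_pred, Theta, Lam, alpha=1.0, beta=1e-3):
    """Loss of Eqs. (3)-(4): data mismatch + sparse residual + l0 penalty.

    alpha and beta are illustrative weights; the l0 term is evaluated
    here as the number of nonzero coefficients in Lam.
    """
    L_meas = np.mean((u_pred - u_meas) ** 2)
    L_sparse = np.mean((ut_pred - Theta @ Lam) ** 2)
    return L_meas + alpha * L_sparse + beta * np.count_nonzero(Lam)
```

With a perfect data fit and a perfect residual fit, only the sparsity penalty remains, which is what drives the selection toward few active terms.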
<p>Two kinds of parameters need to be adjusted in this process: the DNN parameters <inline-formula id="ieqn-20"><mml:math id="mml-ieqn-20"><mml:mi>&#x03B8;</mml:mi></mml:math></inline-formula> and the sparse coefficients <inline-formula id="ieqn-21"><mml:math id="mml-ieqn-21"><mml:mi mathvariant="normal">&#x039B;</mml:mi></mml:math></inline-formula>. Generally, the optimization is carried out individually, fixing one while training the other. Common methods, such as ADO, recurrently adjust <inline-formula id="ieqn-22"><mml:math id="mml-ieqn-22"><mml:mi>&#x03B8;</mml:mi></mml:math></inline-formula> and <inline-formula id="ieqn-23"><mml:math id="mml-ieqn-23"><mml:mi mathvariant="normal">&#x039B;</mml:mi></mml:math></inline-formula>, which slows down training. On the other hand, the <inline-formula id="ieqn-25"><mml:math id="mml-ieqn-25"><mml:msub><mml:mi>l</mml:mi><mml:mrow><mml:mn>0</mml:mn></mml:mrow></mml:msub></mml:math></inline-formula> norm is applied to ensure the sparsity of <inline-formula id="ieqn-24"><mml:math id="mml-ieqn-24"><mml:mi mathvariant="normal">&#x039B;</mml:mi></mml:math></inline-formula>, but directly solving the <inline-formula id="ieqn-26"><mml:math id="mml-ieqn-26"><mml:msub><mml:mi>l</mml:mi><mml:mrow><mml:mn>0</mml:mn></mml:mrow></mml:msub></mml:math></inline-formula> norm minimization is Non-deterministic Polynomial-hard (NP-hard) and intractable. Generally, convex relaxation-based techniques are used to transform the <inline-formula id="ieqn-27"><mml:math id="mml-ieqn-27"><mml:msub><mml:mi>l</mml:mi><mml:mrow><mml:mn>0</mml:mn></mml:mrow></mml:msub></mml:math></inline-formula> norm minimization problem into an <inline-formula id="ieqn-28"><mml:math id="mml-ieqn-28"><mml:msub><mml:mi>l</mml:mi><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub></mml:math></inline-formula> norm minimization problem, such as <inline-formula id="ieqn-29"><mml:math id="mml-ieqn-29"><mml:msub><mml:mi>l</mml:mi><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub></mml:math></inline-formula> regularization. The loss function of the sparse regression part then becomes
<disp-formula id="eqn-5"><label>(5)</label><mml:math id="mml-eqn-5" display="block"><mml:munder><mml:mo movablelimits="true" form="prefix">min</mml:mo><mml:mrow><mml:mi mathvariant="normal">&#x039B;</mml:mi></mml:mrow></mml:munder><mml:msubsup><mml:mrow><mml:mo symmetric="true">&#x2016;</mml:mo><mml:msub><mml:mi>u</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>&#x2212;</mml:mo><mml:mi mathvariant="normal">&#x0398;</mml:mi><mml:mi mathvariant="normal">&#x039B;</mml:mi><mml:mo symmetric="true">&#x2016;</mml:mo></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msubsup><mml:mo>+</mml:mo><mml:msub><mml:mrow><mml:mo symmetric="true">&#x2016;</mml:mo><mml:mi mathvariant="normal">&#x039B;</mml:mi><mml:mo symmetric="true">&#x2016;</mml:mo></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub></mml:math></disp-formula></p>
<p>This fits the traditional paradigm of Least Absolute Shrinkage and Selection Operator (Lasso) regression. Accordingly, this study proposes the STLasso module. By integrating Lasso regression into STLS, the resulting STLasso module is incorporated into the DNN training process, performing sparse regression without the need for recurrent optimization. The detailed process of STLasso is depicted in Algorithms 1 and 2.</p>
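Under this description, an STLasso sketch replaces STRidge's ridge solve with a Lasso solve followed by the same thresholding-and-refit loop. The version below solves the Lasso subproblem by plain iterative soft-thresholding (ISTA) and uses illustrative hyperparameters; it is one reading of Algorithms 1 and 2, not the authors' exact implementation.

```python
import numpy as np

def lasso_ista(Theta, ut, lam=1e-3, iters=1000):
    """Lasso via iterative soft-thresholding (ISTA)."""
    n = len(ut)
    Lc = np.linalg.norm(Theta, 2) ** 2 / n       # Lipschitz constant of the gradient
    w = np.zeros(Theta.shape[1])
    for _ in range(iters):
        grad = Theta.T @ (Theta @ w - ut) / n    # gradient of the smooth part
        w = w - grad / Lc
        w = np.sign(w) * np.maximum(np.abs(w) - lam / Lc, 0.0)  # soft-threshold
    return w

def stlasso(Theta, ut, lam=1e-3, tol=0.1, rounds=5):
    """Sequentially thresholded Lasso: solve, zero small terms, refit."""
    w = lasso_ista(Theta, ut, lam)
    for _ in range(rounds):
        keep = np.abs(w) >= tol
        w[~keep] = 0.0
        if not keep.any():
            break
        # unbiased refit of the retained candidates by least squares
        w[keep] = np.linalg.lstsq(Theta[:, keep], ut, rcond=None)[0]
    return w
```

Compared to the ridge solve in STRidge, the l<sub>1</sub> shrinkage drives correlated spurious candidates to exactly zero before the threshold is even applied, which is the motivation stated above.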
<fig id="fig-7">
<graphic mimetype="image" mime-subtype="tif" xlink:href="CMES_52585-fig-7.tif"/>
</fig>
<fig id="fig-8">
<graphic mimetype="image" mime-subtype="tif" xlink:href="CMES_52585-fig-8.tif"/>
</fig>
</sec>
<sec id="s2_2">
<label>2.2</label>
<title>PINN-STLasso</title>
<p>Inspired by the literature [<xref ref-type="bibr" rid="ref-24">24</xref>], this study proposes PINN-STLasso by integrating the previously described STLasso into PINN. <xref ref-type="fig" rid="fig-1">Fig. 1</xref> depicts the schematic architecture of the entire network for solving an inverse PDE problem. In addition, unlike common methods that recurrently optimize <inline-formula id="ieqn-46"><mml:math id="mml-ieqn-46"><mml:mi>&#x03B8;</mml:mi></mml:math></inline-formula> and <inline-formula id="ieqn-47"><mml:math id="mml-ieqn-47"><mml:mrow><mml:mi mathvariant="normal">&#x039B;</mml:mi></mml:mrow></mml:math></inline-formula>, the proposed method trains the two sets of parameters separately. Specifically, our method directly minimizes the loss in Eq. <xref ref-type="disp-formula" rid="eqn-6">(6)</xref>.</p>
<p><disp-formula id="eqn-6"><label>(6)</label><mml:math id="mml-eqn-6" display="block"><mml:mi>L</mml:mi><mml:mo>=</mml:mo><mml:mfrac><mml:mn>1</mml:mn><mml:msub><mml:mi>N</mml:mi><mml:mrow><mml:mi>m</mml:mi></mml:mrow></mml:msub></mml:mfrac><mml:msubsup><mml:mrow><mml:mo symmetric="true">&#x2016;</mml:mo><mml:mi>u</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>&#x03B8;</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>&#x2212;</mml:mo><mml:msub><mml:mi>u</mml:mi><mml:mrow><mml:mi>m</mml:mi></mml:mrow></mml:msub><mml:mo symmetric="true">&#x2016;</mml:mo></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msubsup><mml:mo>+</mml:mo><mml:mfrac><mml:mn>1</mml:mn><mml:msub><mml:mi>N</mml:mi><mml:mrow><mml:mi>s</mml:mi></mml:mrow></mml:msub></mml:mfrac><mml:mrow><mml:mo>(</mml:mo><mml:msubsup><mml:mrow><mml:mo symmetric="true">&#x2016;</mml:mo><mml:msub><mml:mi>u</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>&#x03B8;</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>&#x2212;</mml:mo><mml:mi mathvariant="normal">&#x0398;</mml:mi><mml:mi mathvariant="normal">&#x039B;</mml:mi><mml:mo symmetric="true">&#x2016;</mml:mo></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msubsup><mml:mo>+</mml:mo><mml:msub><mml:mrow><mml:mo symmetric="true">&#x2016;</mml:mo><mml:mi mathvariant="normal">&#x039B;</mml:mi><mml:mo symmetric="true">&#x2016;</mml:mo></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>)</mml:mo></mml:mrow></mml:math></disp-formula></p>

<fig id="fig-1">
<label>Figure 1</label>
<caption>
<title>Schematic architecture of the network</title>
</caption>
<graphic mimetype="image" mime-subtype="tif" xlink:href="CMES_52585-fig-1.tif"/>
</fig>
<p><xref ref-type="fig" rid="fig-2">Fig. 2</xref> illustrates the training process of the proposed PINN-STLasso. Treating STLasso as an independent module allows it to be inserted into networks more flexibly. An additional Lasso regression operation <inline-formula id="ieqn-48"><mml:math id="mml-ieqn-48"><mml:mrow><mml:mover><mml:mi mathvariant="normal">&#x039B;</mml:mi><mml:mo stretchy="false">&#x005E;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mo>&#x003A;</mml:mo></mml:mrow><mml:mspace width="negativethinmathspace" /><mml:mspace width="negativethinmathspace" /><mml:mspace width="negativethinmathspace" /><mml:mo>=</mml:mo><mml:mi>l</mml:mi><mml:mi>a</mml:mi><mml:mi>s</mml:mi><mml:mi>s</mml:mi><mml:mi>o</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi mathvariant="normal">&#x0398;</mml:mi><mml:mo>,</mml:mo><mml:msub><mml:mi>u</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>;</mml:mo><mml:mrow><mml:mover><mml:mi mathvariant="normal">&#x039B;</mml:mi><mml:mo stretchy="false">&#x005E;</mml:mo></mml:mover></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:math></inline-formula> is introduced between pretraining and formal training, which can accelerate the convergence of the DNN during formal training. Different colored blocks denote the respective roles: the brown blocks operate on the sparse coefficients <inline-formula id="ieqn-49"><mml:math id="mml-ieqn-49"><mml:mi mathvariant="normal">&#x039B;</mml:mi></mml:math></inline-formula>, and the yellow ones correspond to adjustments of the surrogate DNN model&#x2019;s parameters <inline-formula id="ieqn-50"><mml:math id="mml-ieqn-50"><mml:mi>&#x03B8;</mml:mi></mml:math></inline-formula>. This network architecture provides a more precise visualization of the overall decision-making process, thereby improving the interpretability of the whole procedure.</p>
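The staged procedure just described reduces to a simple control flow. In the sketch below, `fit_theta` and `lasso_solve` are hypothetical caller-supplied placeholders for the DNN update and the Lasso/STLasso solve, not the authors' API.

```python
import numpy as np

def train_pinn_stlasso(fit_theta, lasso_solve, formal_steps=3):
    """Staged training sketch: pretrain, Lasso warm start, formal training.

    Stage 1: pretrain the surrogate on measurements alone (no sparse term).
    Stage 2: warm-start the sparse coefficients with one Lasso solve.
    Stage 3: formal training, updating theta and Lambda separately.
    """
    theta = fit_theta(None)            # pretraining: data loss only
    Lam = lasso_solve(theta)           # Lasso warm start between the stages
    for _ in range(formal_steps):
        theta = fit_theta(Lam)         # adjust DNN parameters, Lam fixed
        Lam = lasso_solve(theta)       # adjust sparse coefficients, theta fixed
    return theta, Lam
```

Any concrete surrogate trainer and sparse solver can be plugged into the two slots, which is what makes the module insertion flexible.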
<fig id="fig-2">
<label>Figure 2</label>
<caption>
<title>Illustration of training process of PINN-STLasso</title>
</caption>
<graphic mimetype="image" mime-subtype="tif" xlink:href="CMES_52585-fig-2.tif"/>
</fig>
</sec>
</sec>
<sec id="s3">
<label>3</label>
<title>Experiments</title>
<p>This study conducted experiments on the inverse problems of two canonical PDEs, the Burgers equation and the Navier-Stokes equation, using scarce data. In addition, the proposed method was applied under actual experimental conditions to illustrate its effectiveness.</p>
<sec id="s3_1">
<label>3.1</label>
<title>Burgers Equation</title>
<p>The Burgers equation is a nonlinear partial differential equation that models the propagation and reflection of shock waves, commonly arising in simplified models of fluid mechanics, nonlinear acoustics, and gas dynamics. Its general form can be expressed as <xref ref-type="disp-formula" rid="eqn-7">(7)</xref>:</p>
<p><disp-formula id="eqn-7"><label>(7)</label><mml:math id="mml-eqn-7" display="block"><mml:msub><mml:mi>u</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mo>&#x2212;</mml:mo><mml:mi>u</mml:mi><mml:msub><mml:mi>u</mml:mi><mml:mrow><mml:mi>x</mml:mi></mml:mrow></mml:msub><mml:mo>+</mml:mo><mml:mi>&#x03BD;</mml:mi><mml:msub><mml:mi>u</mml:mi><mml:mrow><mml:mi>x</mml:mi><mml:mi>x</mml:mi></mml:mrow></mml:msub></mml:math></disp-formula>where <inline-formula id="ieqn-51"><mml:math id="mml-ieqn-51"><mml:mi>u</mml:mi></mml:math></inline-formula> is the flow velocity; <inline-formula id="ieqn-52"><mml:math id="mml-ieqn-52"><mml:mi>x</mml:mi></mml:math></inline-formula> and <inline-formula id="ieqn-53"><mml:math id="mml-ieqn-53"><mml:mi>t</mml:mi></mml:math></inline-formula> correspond to the spatial and temporal dimensions; <inline-formula id="ieqn-54"><mml:math id="mml-ieqn-54"><mml:mi>&#x03BD;</mml:mi></mml:math></inline-formula> is the diffusion coefficient, set to 0.1 in this experiment. The data are extracted from the open dataset of the literature [<xref ref-type="bibr" rid="ref-17">17</xref>], in which <inline-formula id="ieqn-55"><mml:math id="mml-ieqn-55"><mml:mi>u</mml:mi></mml:math></inline-formula> is discretized into 256 spatial grid points over 101 time steps with a Gaussian initial condition, forming a data size of <inline-formula id="ieqn-56"><mml:math id="mml-ieqn-56"><mml:msup><mml:mrow><mml:mrow><mml:mi mathvariant="double-struck">R</mml:mi></mml:mrow></mml:mrow><mml:mrow><mml:mn>256</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>101</mml:mn></mml:mrow></mml:msup></mml:math></inline-formula>. Specifically, 10 points are randomly selected to simulate ten randomly placed sensors and are used for DNN training, and 50,000 collocation points are generated by Latin hypercube sampling for sparse regression, which is only 3.19% of the data used in the literature [<xref ref-type="bibr" rid="ref-17">17</xref>]. 
Based on the aforementioned experimental requirements, 16 candidate functions <inline-formula id="ieqn-57"><mml:math id="mml-ieqn-57"><mml:mi mathvariant="normal">&#x0398;</mml:mi><mml:mo>&#x2208;</mml:mo><mml:msup><mml:mrow><mml:mrow><mml:mi mathvariant="double-struck">R</mml:mi></mml:mrow></mml:mrow><mml:mrow><mml:mn>16</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msup></mml:math></inline-formula> are utilized to reconstruct the PDE, consisting of polynomial terms, derivatives, and their products. The constructed sparse dictionary is <inline-formula id="ieqn-58"><mml:math id="mml-ieqn-58"><mml:mi mathvariant="normal">&#x0398;</mml:mi><mml:mo>=</mml:mo><mml:mrow><mml:mo>{</mml:mo><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mi>u</mml:mi><mml:mo>,</mml:mo><mml:msup><mml:mi>u</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup><mml:mo>,</mml:mo><mml:msup><mml:mi>u</mml:mi><mml:mrow><mml:mn>3</mml:mn></mml:mrow></mml:msup><mml:mo>,</mml:mo><mml:msub><mml:mi>u</mml:mi><mml:mrow><mml:mi>x</mml:mi></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:mi>u</mml:mi><mml:msub><mml:mi>u</mml:mi><mml:mrow><mml:mi>x</mml:mi></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msup><mml:mi>u</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup><mml:msub><mml:mi>u</mml:mi><mml:mrow><mml:mi>x</mml:mi></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msup><mml:mi>u</mml:mi><mml:mrow><mml:mn>3</mml:mn></mml:mrow></mml:msup><mml:msub><mml:mi>u</mml:mi><mml:mrow><mml:mi>x</mml:mi></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:mo>&#x22EF;</mml:mo><mml:mo>,</mml:mo><mml:msup><mml:mi>u</mml:mi><mml:mrow><mml:mn>3</mml:mn></mml:mrow></mml:msup><mml:msub><mml:mi>u</mml:mi><mml:mrow><mml:mi>x</mml:mi><mml:mi>x</mml:mi><mml:mi>x</mml:mi></mml:mrow></mml:msub><mml:mo>}</mml:mo></mml:mrow></mml:math></inline-formula>. The composition of the sparse dictionary is flexible and depends, to some extent, on prior knowledge and experience. 
10% random noise is added to the data to simulate actual measurement errors. During pretraining, the network underwent 10,000 iterations of Limited-memory Broyden&#x2013;Fletcher&#x2013;Goldfarb&#x2013;Shanno (L-BFGS-B) optimization. In formal training, each epoch of the ADO process takes ten iterations of sparse regression and 1000 Adam optimization steps.</p>
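The 16-term candidate library for the Burgers equation amounts to the outer product of the polynomial terms {1, u, u&#x00B2;, u&#x00B3;} and the derivative terms {1, u_x, u_xx, u_xxx}. The sketch below assembles such a library with numpy; it is illustrative only, with the function name and grid spacing assumed, and finite differences standing in for the automatic differentiation a PINN would actually use.

```python
import numpy as np

def build_library(u, dx):
    """Assemble the 16-column candidate library Theta for the Burgers equation:
    {1, u, u^2, u^3} x {1, u_x, u_xx, u_xxx}, one column per candidate term.
    u: 2-D array of shape (n_x, n_t); derivatives taken along the x axis."""
    ux = np.gradient(u, dx, axis=0)          # finite-difference u_x
    uxx = np.gradient(ux, dx, axis=0)        # u_xx
    uxxx = np.gradient(uxx, dx, axis=0)      # u_xxx
    polys = [np.ones_like(u), u, u**2, u**3]
    derivs = [np.ones_like(u), ux, uxx, uxxx]
    cols = [(p * d).ravel() for p in polys for d in derivs]
    return np.stack(cols, axis=1)            # shape (n_x * n_t, 16)
```

The sparse regression then seeks a coefficient vector over these 16 columns that reproduces the measured u_t; for the Burgers data only the uu_x and u_xx columns should survive thresholding.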
<p><xref ref-type="table" rid="table-1">Table 1</xref> compares the proposed PINN-STLasso with several recent methods for inverse PDE problems in terms of reconstruction error, including Physics-Informed Neural Network-Sparse Regression (PINN-SR) [<xref ref-type="bibr" rid="ref-24">24</xref>] and Partial Differential Equation-Find (PDE-Find) [<xref ref-type="bibr" rid="ref-17">17</xref>]. Because the proposed method can handle sparse data, whereas the other methods may fail under such conditions, their error values are taken directly from the corresponding studies. Even so, the proposed PINN-STLasso outperforms the other methods while using less data. All errors are computed by <xref ref-type="disp-formula" rid="eqn-8">Eq. (8)</xref>.</p>
<p><disp-formula id="eqn-8"><label>(8)</label><mml:math id="mml-eqn-8" display="block"><mml:mi>&#x03B7;</mml:mi><mml:mo>=</mml:mo><mml:mfrac><mml:msub><mml:mrow><mml:mo symmetric="true">&#x2016;</mml:mo><mml:mrow><mml:mover><mml:mi mathvariant="normal">&#x039B;</mml:mi><mml:mo stretchy="false">&#x005E;</mml:mo></mml:mover></mml:mrow><mml:mo>&#x2212;</mml:mo><mml:mi mathvariant="normal">&#x039B;</mml:mi><mml:mo symmetric="true">&#x2016;</mml:mo></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:msub><mml:mrow><mml:mo symmetric="true">&#x2016;</mml:mo><mml:mi mathvariant="normal">&#x039B;</mml:mi><mml:mo symmetric="true">&#x2016;</mml:mo></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub></mml:mfrac></mml:math></disp-formula>where <inline-formula id="ieqn-59"><mml:math id="mml-ieqn-59"><mml:mrow><mml:mover><mml:mi mathvariant="normal">&#x039B;</mml:mi><mml:mo stretchy="false">&#x005E;</mml:mo></mml:mover></mml:mrow></mml:math></inline-formula> and <inline-formula id="ieqn-60"><mml:math id="mml-ieqn-60"><mml:mi mathvariant="normal">&#x039B;</mml:mi></mml:math></inline-formula> stand for the identified value and the ground truth, respectively. <xref ref-type="fig" rid="fig-3">Fig. 3</xref> compares the reconstructed Burgers equation with the ground truth.</p>
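Eq. (8) is simply the relative L2 norm of the coefficient error; a direct numpy sketch (the function name is illustrative):

```python
import numpy as np

def coeff_error(lam_hat, lam):
    """Relative L2 coefficient error of Eq. (8): ||lam_hat - lam||_2 / ||lam||_2."""
    lam_hat = np.asarray(lam_hat, float)
    lam = np.asarray(lam, float)
    return np.linalg.norm(lam_hat - lam) / np.linalg.norm(lam)
```

For instance, the PINN-STLasso row of Table 1 gives coeff_error([-0.999, 0.099], [-1, 0.1]) of about 0.14%, consistent with the reported 0.15% &#x00B1; 0.06%.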
<table-wrap id="table-1">
<label>Table 1</label>
<caption>
<title>Comparison of the proposed method with several recent methods in finding the Burgers equation. <inline-formula id="ieqn-61"><mml:math id="mml-ieqn-61"><mml:msub><mml:mi>u</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mo>&#x2212;</mml:mo><mml:mi>u</mml:mi><mml:msub><mml:mi>u</mml:mi><mml:mrow><mml:mi>x</mml:mi></mml:mrow></mml:msub><mml:mo>+</mml:mo><mml:mn>0.1</mml:mn><mml:msub><mml:mi>u</mml:mi><mml:mrow><mml:mi>x</mml:mi><mml:mi>x</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> is the ground truth</title>
</caption>
<table frame="hsides">
<colgroup>
<col align="left"/>
<col align="left"/>
<col align="left"/>
</colgroup>
<thead>
<tr>
<th>Method</th>
<th>Error</th>
<th>Found PDE</th>
</tr>
</thead>
<tbody>
<tr>
<td>PINN-STLasso</td>
<td><bold>0.15% &#x00B1; 0.06%</bold></td>
<td><inline-formula id="ieqn-62"><mml:math id="mml-ieqn-62"><mml:msub><mml:mi>u</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mo>&#x2212;</mml:mo><mml:mn>0.999</mml:mn><mml:mi>u</mml:mi><mml:msub><mml:mi>u</mml:mi><mml:mrow><mml:mi>x</mml:mi></mml:mrow></mml:msub><mml:mo>+</mml:mo><mml:mn>0.099</mml:mn><mml:msub><mml:mi>u</mml:mi><mml:mrow><mml:mi>x</mml:mi><mml:mi>x</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula></td>
</tr>
<tr>
<td>PINN-SR [<xref ref-type="bibr" rid="ref-24">24</xref>]</td>
<td>0.88% &#x00B1; 0.03%</td>
<td><inline-formula id="ieqn-63"><mml:math id="mml-ieqn-63"><mml:msub><mml:mi>u</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mo>&#x2212;</mml:mo><mml:mn>1.009</mml:mn><mml:mi>u</mml:mi><mml:msub><mml:mi>u</mml:mi><mml:mrow><mml:mi>x</mml:mi></mml:mrow></mml:msub><mml:mo>+</mml:mo><mml:mn>0.099</mml:mn><mml:msub><mml:mi>u</mml:mi><mml:mrow><mml:mi>x</mml:mi><mml:mi>x</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula></td>
</tr>
<tr>
<td>PDE-Find [<xref ref-type="bibr" rid="ref-17">17</xref>]</td>
<td>0.8% &#x00B1; 0.6%</td>
<td><inline-formula id="ieqn-64"><mml:math id="mml-ieqn-64"><mml:msub><mml:mi>u</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mo>&#x2212;</mml:mo><mml:mn>1.010</mml:mn><mml:mi>u</mml:mi><mml:msub><mml:mi>u</mml:mi><mml:mrow><mml:mi>x</mml:mi></mml:mrow></mml:msub><mml:mo>+</mml:mo><mml:mn>0.103</mml:mn><mml:msub><mml:mi>u</mml:mi><mml:mrow><mml:mi>x</mml:mi><mml:mi>x</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula></td>
</tr>
<tr>
<td>DLrSR [<xref ref-type="bibr" rid="ref-31">31</xref>]</td>
<td>1.271% &#x00B1; 0.960%</td>
<td><inline-formula id="ieqn-65"><mml:math id="mml-ieqn-65"><mml:msub><mml:mi>u</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mo>&#x2212;</mml:mo><mml:mn>0.956</mml:mn><mml:mi>u</mml:mi><mml:msub><mml:mi>u</mml:mi><mml:mrow><mml:mi>x</mml:mi></mml:mrow></mml:msub><mml:mo>+</mml:mo><mml:mn>0.101</mml:mn><mml:msub><mml:mi>u</mml:mi><mml:mrow><mml:mi>x</mml:mi><mml:mi>x</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula></td>
</tr>
</tbody>
</table>
</table-wrap>
<fig id="fig-3">
<label>Figure 3</label>
<caption>
<title>Comparison of the predicted Burgers equation with the ground truth</title>
</caption>
<graphic mimetype="image" mime-subtype="tif" xlink:href="CMES_52585-fig-3.tif"/>
</fig>
</sec>
<sec id="s3_2">
<label>3.2</label>
<title>Navier-Stokes Equation</title>
<p>The Navier-Stokes (NS) equations are commonly employed to describe momentum conservation in viscous incompressible fluids; given suitable initial and boundary conditions, they determine the fluid flow. In this study, the NS equation is utilized to model a 2D fluid flow passing a circular cylinder with local rotation dynamics, as shown in <xref ref-type="fig" rid="fig-4">Fig. 4</xref>, where the partial data used in the modeling process are highlighted by a red box. Its general (vorticity) form is<disp-formula id="eqn-9"><label>(9)</label><mml:math id="mml-eqn-9" display="block"><mml:msub><mml:mi>w</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mo>&#x2212;</mml:mo><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mtext mathvariant="bold">u</mml:mtext></mml:mrow><mml:mo>&#x22C5;</mml:mo><mml:mi mathvariant="normal">&#x2207;</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mi>w</mml:mi><mml:mo>+</mml:mo><mml:mi>&#x03BD;</mml:mi><mml:msup><mml:mi mathvariant="normal">&#x2207;</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup><mml:mi>w</mml:mi></mml:math></disp-formula>where <inline-formula id="ieqn-66"><mml:math id="mml-ieqn-66"><mml:mi>w</mml:mi></mml:math></inline-formula> is the spatiotemporally varying vorticity; <inline-formula id="ieqn-67"><mml:math id="mml-ieqn-67"><mml:mrow><mml:mtext mathvariant="bold">u</mml:mtext></mml:mrow><mml:mo>=</mml:mo><mml:mrow><mml:mo>{</mml:mo><mml:msub><mml:mi>u</mml:mi><mml:mrow><mml:mi>x</mml:mi></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mi>u</mml:mi><mml:mrow><mml:mi>y</mml:mi></mml:mrow></mml:msub><mml:mo>}</mml:mo></mml:mrow></mml:math></inline-formula> is the fluid velocity in two dimensions; <inline-formula id="ieqn-68"><mml:math id="mml-ieqn-68"><mml:mi>&#x03BD;</mml:mi></mml:math></inline-formula> is the kinematic viscosity, set to 0.01 in this experiment. 
Similar to the Burgers equation, 500 points, consisting of <inline-formula id="ieqn-69"><mml:math id="mml-ieqn-69"><mml:mrow><mml:mo>{</mml:mo><mml:msub><mml:mi>u</mml:mi><mml:mrow><mml:mi>x</mml:mi></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mi>u</mml:mi><mml:mrow><mml:mi>y</mml:mi></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:mi>w</mml:mi><mml:mo>}</mml:mo></mml:mrow></mml:math></inline-formula> records over 60 time steps, are randomly selected to simulate randomly placed sensors. They are used for DNN training, with 60,000 collocation points generated by Latin hypercube sampling for sparse regression, which is only 10% of the data used in [<xref ref-type="bibr" rid="ref-17">17</xref>]. The size of the potential sparse library is <inline-formula id="ieqn-70"><mml:math id="mml-ieqn-70"><mml:mi mathvariant="normal">&#x0398;</mml:mi><mml:mo>&#x2208;</mml:mo><mml:msup><mml:mrow><mml:mrow><mml:mi mathvariant="double-struck">R</mml:mi></mml:mrow></mml:mrow><mml:mrow><mml:mn>60</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msup></mml:math></inline-formula>, in the form of <inline-formula id="ieqn-71"><mml:math id="mml-ieqn-71"><mml:mi mathvariant="normal">&#x0398;</mml:mi><mml:mo>=</mml:mo><mml:mrow><mml:mo>{</mml:mo><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:msub><mml:mi>&#x03C9;</mml:mi><mml:mrow><mml:mi>x</mml:mi></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mi>&#x03C9;</mml:mi><mml:mrow><mml:mi>y</mml:mi></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mi>&#x03C9;</mml:mi><mml:mrow><mml:mi>x</mml:mi><mml:mi>x</mml:mi></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mi>&#x03C9;</mml:mi><mml:mrow><mml:mi>y</mml:mi><mml:mi>y</mml:mi></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:mi>u</mml:mi><mml:msub><mml:mi>&#x03C9;</mml:mi><mml:mrow><mml:mi>x</mml:mi></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:mi>v</mml:mi><mml:msub><mml:mi>&#x03C9;</mml:mi><mml:mrow><mml:mi>x</mml:mi></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:mi>&#x03C9;</mml:mi><mml:msub><mml:mi>&#x03C9;</mml:mi><mml:mrow><mml:mi>x</mml:mi></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:mo>&#x22EF;</mml:mo><mml:mo>,</mml:mo><mml:msup><mml:mi>u</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup><mml:msub><mml:mi>&#x03C9;</mml:mi><mml:mrow><mml:mi>x</mml:mi></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msup><mml:mi>v</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup><mml:msub><mml:mi>&#x03C9;</mml:mi><mml:mrow><mml:mi>x</mml:mi></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msup><mml:mi>&#x03C9;</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup><mml:msub><mml:mi>&#x03C9;</mml:mi><mml:mrow><mml:mi>x</mml:mi></mml:mrow></mml:msub><mml:mo>}</mml:mo></mml:mrow></mml:math></inline-formula>, consisting of polynomial terms, derivatives, and their products. 10% random noise is also added. During pretraining, the network underwent 5000 iterations with the Adam optimizer and 10,000 iterations of L-BFGS-B. In formal training, each epoch of the ADO process takes ten iterations of sparse regression and 1000 Adam optimization steps, and 20,000 Adam optimization steps were used in the post-training phase.</p>
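Latin hypercube sampling, used above to generate collocation points, stratifies each input dimension into n equal intervals and places exactly one sample per interval. A self-contained numpy sketch follows; the function name and the domain bounds are illustrative (libraries such as scipy.stats.qmc provide equivalent samplers).

```python
import numpy as np

def latin_hypercube(n, bounds, rng=None):
    """Latin hypercube sample of n points over a box domain.
    bounds: list of (low, high) pairs, one per dimension."""
    rng = np.random.default_rng(rng)
    d = len(bounds)
    # One stratum index per interval in each dimension, independently shuffled,
    # plus a uniform offset inside each stratum; then rescale to [0, 1).
    strata = rng.permuted(np.tile(np.arange(n), (d, 1)), axis=1).T
    unit = (strata + rng.random((n, d))) / n
    lows = np.array([b[0] for b in bounds])
    highs = np.array([b[1] for b in bounds])
    return lows + unit * (highs - lows)
```

Compared with plain uniform sampling, every marginal interval is guaranteed to be covered, which is why it is a common choice for spreading collocation points over a space-time domain.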
<fig id="fig-4">
<label>Figure 4</label>
<caption>
<title>Dynamic model used in this work</title>
</caption>
<graphic mimetype="image" mime-subtype="tif" xlink:href="CMES_52585-fig-4.tif"/>
</fig>
<p><xref ref-type="table" rid="table-2">Table 2</xref> compares several methods on the NS equation reconstruction problem. PDE-Find fails to produce a result under the 10% noise condition; therefore, its result is reported for the 1% noise condition. Despite using less data, the proposed PINN-STLasso still outperforms the other methods on this more complex problem. <xref ref-type="fig" rid="fig-5">Fig. 5</xref> compares the predicted Navier-Stokes equation with the ground truth at a specific time.</p>
<table-wrap id="table-2">
<label>Table 2</label>
<caption>
<title>Comparison of the proposed method with several recent methods in finding the NS equation. <inline-formula id="ieqn-72"><mml:math id="mml-ieqn-72"><mml:msub><mml:mi>w</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mo>&#x2212;</mml:mo><mml:msub><mml:mi>u</mml:mi><mml:mrow><mml:mi>x</mml:mi></mml:mrow></mml:msub><mml:msub><mml:mi>w</mml:mi><mml:mrow><mml:mi>x</mml:mi></mml:mrow></mml:msub><mml:mo>&#x2212;</mml:mo><mml:msub><mml:mi>u</mml:mi><mml:mrow><mml:mi>y</mml:mi></mml:mrow></mml:msub><mml:msub><mml:mi>w</mml:mi><mml:mrow><mml:mi>x</mml:mi></mml:mrow></mml:msub><mml:mo>+</mml:mo><mml:mn>0.01</mml:mn><mml:msub><mml:mi>w</mml:mi><mml:mrow><mml:mi>x</mml:mi><mml:mi>x</mml:mi></mml:mrow></mml:msub><mml:mo>+</mml:mo><mml:mn>0.01</mml:mn><mml:msub><mml:mi>w</mml:mi><mml:mrow><mml:mi>y</mml:mi><mml:mi>y</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> is the ground truth</title>
</caption>
<table frame="hsides">
<colgroup>
<col align="left"/>
<col align="left"/>
<col align="left"/>
</colgroup>
<thead>
<tr>
<th>Method</th>
<th>Error</th>
<th>Found PDE</th>
</tr>
</thead>
<tbody>
<tr>
<td>PINN-STLasso</td>
<td><bold>0.80% &#x00B1; 0.75%</bold></td>
<td><inline-formula id="ieqn-73"><mml:math id="mml-ieqn-73"><mml:msub><mml:mi>w</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mo>&#x2212;</mml:mo><mml:mn>0.999</mml:mn><mml:msub><mml:mi>u</mml:mi><mml:mrow><mml:mi>x</mml:mi></mml:mrow></mml:msub><mml:msub><mml:mi>w</mml:mi><mml:mrow><mml:mi>x</mml:mi></mml:mrow></mml:msub><mml:mo>&#x2212;</mml:mo><mml:mn>0.992</mml:mn><mml:msub><mml:mi>u</mml:mi><mml:mrow><mml:mi>y</mml:mi></mml:mrow></mml:msub><mml:msub><mml:mi>w</mml:mi><mml:mrow><mml:mi>x</mml:mi></mml:mrow></mml:msub><mml:mo>+</mml:mo><mml:mn>0.0099</mml:mn><mml:msub><mml:mi>w</mml:mi><mml:mrow><mml:mi>x</mml:mi><mml:mi>x</mml:mi></mml:mrow></mml:msub><mml:mo>+</mml:mo><mml:mn>0.0099</mml:mn><mml:msub><mml:mi>w</mml:mi><mml:mrow><mml:mi>y</mml:mi><mml:mi>y</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula></td>
</tr>
<tr>
<td>PINN-SR [<xref ref-type="bibr" rid="ref-24">24</xref>]</td>
<td>1.22% &#x00B1; 0.69%</td>
<td><inline-formula id="ieqn-74"><mml:math id="mml-ieqn-74"><mml:msub><mml:mi>w</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mo>&#x2212;</mml:mo><mml:mn>0.996</mml:mn><mml:msub><mml:mi>u</mml:mi><mml:mrow><mml:mi>x</mml:mi></mml:mrow></mml:msub><mml:msub><mml:mi>w</mml:mi><mml:mrow><mml:mi>x</mml:mi></mml:mrow></mml:msub><mml:mo>&#x2212;</mml:mo><mml:mn>0.991</mml:mn><mml:msub><mml:mi>u</mml:mi><mml:mrow><mml:mi>y</mml:mi></mml:mrow></mml:msub><mml:msub><mml:mi>w</mml:mi><mml:mrow><mml:mi>x</mml:mi></mml:mrow></mml:msub><mml:mo>+</mml:mo><mml:mn>0.010</mml:mn><mml:msub><mml:mi>w</mml:mi><mml:mrow><mml:mi>x</mml:mi><mml:mi>x</mml:mi></mml:mrow></mml:msub><mml:mo>+</mml:mo><mml:mn>0.010</mml:mn><mml:msub><mml:mi>w</mml:mi><mml:mrow><mml:mi>y</mml:mi><mml:mi>y</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula></td>
</tr>
<tr>
<td>PDE-Find [<xref ref-type="bibr" rid="ref-17">17</xref>]</td>
<td>7.00% &#x00B1; 6.00%</td>
<td><inline-formula id="ieqn-75"><mml:math id="mml-ieqn-75"><mml:msub><mml:mi>w</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mo>&#x2212;</mml:mo><mml:mn>0.988</mml:mn><mml:msub><mml:mi>u</mml:mi><mml:mrow><mml:mi>x</mml:mi></mml:mrow></mml:msub><mml:msub><mml:mi>w</mml:mi><mml:mrow><mml:mi>x</mml:mi></mml:mrow></mml:msub><mml:mo>&#x2212;</mml:mo><mml:mn>0.983</mml:mn><mml:msub><mml:mi>u</mml:mi><mml:mrow><mml:mi>y</mml:mi></mml:mrow></mml:msub><mml:msub><mml:mi>w</mml:mi><mml:mrow><mml:mi>y</mml:mi></mml:mrow></mml:msub><mml:mo>+</mml:mo><mml:mn>0.0107</mml:mn><mml:msub><mml:mi>w</mml:mi><mml:mrow><mml:mi>x</mml:mi><mml:mi>x</mml:mi></mml:mrow></mml:msub><mml:mo>+</mml:mo><mml:mn>0.0083</mml:mn><mml:msub><mml:mi>w</mml:mi><mml:mrow><mml:mi>y</mml:mi><mml:mi>y</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula></td>
</tr>
</tbody>
</table>
</table-wrap><fig id="fig-5">
<label>Figure 5</label>
<caption>
<title>Comparison of predicted Navier-Stokes equation with ground truth in a specific time: (a) Ground truth of Navier-Stokes equation (b) Model prediction result of Navier-Stokes equation
</title>
</caption>
<graphic mimetype="image" mime-subtype="tif" xlink:href="CMES_52585-fig-5.tif"/>
</fig>
</sec>
<sec id="s3_3">
<label>3.3</label>
<title>Experimental Reaction-Diffusion Equation</title>
<p>This study conducts an experiment to discover the governing equation of a cell migration and proliferation process, demonstrating the performance of the proposed STLasso on complicated systems under actual experimental conditions. The data are collected from in vitro cell migration (scratch) assays and are sparse and noisy. With cells distributed in the wells as uniformly as possible, results for initial cell densities of 14,000, 16,000, 18,000, and 20,000 cells per well are collected. After seeding, cells are grown overnight for attachment and some growth. To quantify the cell density profile, at each recording time, images captured by high-precision cameras are uniformly divided into strips with a width of 50 <inline-formula id="ieqn-76"><mml:math id="mml-ieqn-76"><mml:mi>&#x03BC;</mml:mi><mml:mi>m</mml:mi></mml:math></inline-formula>, from which manual cell counting is employed to estimate the cell density at positions <inline-formula id="ieqn-77"><mml:math id="mml-ieqn-77"><mml:mi>x</mml:mi><mml:mo>=</mml:mo><mml:mn>25</mml:mn><mml:mo>,</mml:mo><mml:mn>75</mml:mn><mml:mo>,</mml:mo><mml:mn>125</mml:mn><mml:mo>,</mml:mo><mml:mo>&#x22EF;</mml:mo><mml:mspace width="negativethinmathspace" /><mml:mo>,</mml:mo><mml:mn>1925</mml:mn><mml:mtext>&#x00A0;</mml:mtext><mml:mrow><mml:mi mathvariant="normal">&#x0B5;</mml:mi></mml:mrow><mml:mrow><mml:mtext>m</mml:mtext></mml:mrow><mml:mo>&#x2208;</mml:mo><mml:msup><mml:mrow><mml:mrow><mml:mi mathvariant="double-struck">R</mml:mi></mml:mrow></mml:mrow><mml:mrow><mml:mn>1</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>38</mml:mn></mml:mrow></mml:msup></mml:math></inline-formula>. Images of the collective cell spreading are recorded every two hours for 48 h. This research uses the cell density distributions at different time instants, specifically 12, 24, 36, and 48 h after seeding. 
Each experiment is replicated three times per cell density under identical preparation, and the mean is taken as the final result. A detailed description of the experimental data can be found in the literature [<xref ref-type="bibr" rid="ref-37">37</xref>].</p>
<p>This study aims to discover PDEs that describe the changes in cell concentration under different cell densities. Based on known prior knowledge, the process of cell migration and proliferation can be viewed as a typical Reaction-Diffusion process, with migration acting as diffusion and proliferation as reaction. Thus, we assume that the PDEs describing this process hold the general form of <xref ref-type="disp-formula" rid="eqn-10">Eq. (10)</xref>.</p>
<p><disp-formula id="eqn-10"><label>(10)</label><mml:math id="mml-eqn-10" display="block"><mml:msub><mml:mi>&#x03C1;</mml:mi><mml:mrow><mml:mrow><mml:mtext>t</mml:mtext></mml:mrow></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mi>&#x03B3;</mml:mi><mml:msub><mml:mi>&#x03C1;</mml:mi><mml:mrow><mml:mi>x</mml:mi><mml:mi>x</mml:mi></mml:mrow></mml:msub><mml:mo>+</mml:mo><mml:mi>F</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>&#x03C1;</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:math></disp-formula>where <inline-formula id="ieqn-78"><mml:math id="mml-ieqn-78"><mml:mi>&#x03C1;</mml:mi></mml:math></inline-formula> is the cell density; <inline-formula id="ieqn-79"><mml:math id="mml-ieqn-79"><mml:mi>&#x03B3;</mml:mi></mml:math></inline-formula> is the diffusion coefficient to be identified; <inline-formula id="ieqn-80"><mml:math id="mml-ieqn-80"><mml:mi>F</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>&#x03C1;</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:math></inline-formula> is the underlying nonlinear reaction function.</p>
<p>Similar to the previous experimental design, a sparse dictionary with nine candidate function terms <inline-formula id="ieqn-81"><mml:math id="mml-ieqn-81"><mml:mrow><mml:mo>{</mml:mo><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mi>&#x03C1;</mml:mi><mml:mo>,</mml:mo><mml:msup><mml:mi>&#x03C1;</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup><mml:mo>,</mml:mo><mml:msup><mml:mi>&#x03C1;</mml:mi><mml:mrow><mml:mn>3</mml:mn></mml:mrow></mml:msup><mml:mo>,</mml:mo><mml:msub><mml:mi>&#x03C1;</mml:mi><mml:mrow><mml:mi>x</mml:mi></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mi>&#x03C1;</mml:mi><mml:mrow><mml:mi>x</mml:mi><mml:mi>x</mml:mi></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:mi>&#x03C1;</mml:mi><mml:msub><mml:mi>&#x03C1;</mml:mi><mml:mrow><mml:mi>x</mml:mi></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msup><mml:mi>&#x03C1;</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup><mml:msub><mml:mi>&#x03C1;</mml:mi><mml:mrow><mml:mi>x</mml:mi></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msup><mml:mi>&#x03C1;</mml:mi><mml:mrow><mml:mn>3</mml:mn></mml:mrow></mml:msup><mml:msub><mml:mi>&#x03C1;</mml:mi><mml:mrow><mml:mi>x</mml:mi><mml:mi>x</mml:mi></mml:mrow></mml:msub><mml:mo>}</mml:mo></mml:mrow></mml:math></inline-formula> is established to perform the coefficient regression. <xref ref-type="table" rid="table-3">Table 3</xref> lists the discovered results for all cell densities. 
These results follow the Reaction-Diffusion (RD) equation form <inline-formula id="ieqn-82"><mml:math id="mml-ieqn-82"><mml:msub><mml:mi>&#x03C1;</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mi>&#x03B3;</mml:mi><mml:msub><mml:mi>&#x03C1;</mml:mi><mml:mrow><mml:mi>x</mml:mi><mml:mi>x</mml:mi></mml:mrow></mml:msub><mml:mo>+</mml:mo><mml:mi>&#x03B1;</mml:mi><mml:mi>&#x03C1;</mml:mi><mml:mo>+</mml:mo><mml:mi>&#x03B2;</mml:mi><mml:msup><mml:mi>&#x03C1;</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup></mml:math></inline-formula>, consistent with the preset Fisher-Kolmogorov model [<xref ref-type="bibr" rid="ref-37">37</xref>].</p>
<table-wrap id="table-3">
<label>Table 3</label>
<caption>
<title>Found PDEs under various cell densities. The full-field error is the mean error between measurements and predictions over all recording time nodes</title>
</caption>
<table frame="hsides">
<colgroup>
<col align="left"/>
<col align="left"/>
<col align="left"/>
</colgroup>
<thead>
<tr>
<th>Cell densities (<inline-formula id="ieqn-83"><mml:math id="mml-ieqn-83"><mml:mrow><mml:mtext>cell</mml:mtext></mml:mrow><mml:mrow><mml:mo>/</mml:mo></mml:mrow><mml:mrow><mml:mi mathvariant="normal">&#x0B5;</mml:mi></mml:mrow><mml:msup><mml:mrow><mml:mtext>m</mml:mtext></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup></mml:math></inline-formula>)</th>
<th>Full field error</th>
<th>Found PDE</th>
</tr>
</thead>
<tbody>
<tr>
<td>14,000</td>
<td>0.0902</td>
<td><inline-formula id="ieqn-84"><mml:math id="mml-ieqn-84"><mml:msub><mml:mi>&#x03C1;</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mn>965.094</mml:mn><mml:msub><mml:mi>&#x03C1;</mml:mi><mml:mrow><mml:mi>x</mml:mi><mml:mi>x</mml:mi></mml:mrow></mml:msub><mml:mo>+</mml:mo><mml:mn>0.0784</mml:mn><mml:mi>&#x03C1;</mml:mi><mml:mo>&#x2212;</mml:mo><mml:mn>48.201</mml:mn><mml:msup><mml:mi>&#x03C1;</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup></mml:math></inline-formula></td>
</tr>
<tr>
<td>16,000</td>
<td>0.0855</td>
<td><inline-formula id="ieqn-85"><mml:math id="mml-ieqn-85"><mml:msub><mml:mi>&#x03C1;</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mn>668.841</mml:mn><mml:msub><mml:mi>&#x03C1;</mml:mi><mml:mrow><mml:mi>x</mml:mi><mml:mi>x</mml:mi></mml:mrow></mml:msub><mml:mo>+</mml:mo><mml:mn>0.0708</mml:mn><mml:mi>&#x03C1;</mml:mi><mml:mo>&#x2212;</mml:mo><mml:mn>46.106</mml:mn><mml:msup><mml:mi>&#x03C1;</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup></mml:math></inline-formula></td>
</tr>
<tr>
<td>18,000</td>
<td>0.0885</td>
<td><inline-formula id="ieqn-86"><mml:math id="mml-ieqn-86"><mml:msub><mml:mi>&#x03C1;</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mn>511.801</mml:mn><mml:msub><mml:mi>&#x03C1;</mml:mi><mml:mrow><mml:mi>x</mml:mi><mml:mi>x</mml:mi></mml:mrow></mml:msub><mml:mo>+</mml:mo><mml:mn>0.0641</mml:mn><mml:mi>&#x03C1;</mml:mi><mml:mo>&#x2212;</mml:mo><mml:mn>42.588</mml:mn><mml:msup><mml:mi>&#x03C1;</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup></mml:math></inline-formula></td>
</tr>
<tr>
<td>20,000</td>
<td>0.0954</td>
<td><inline-formula id="ieqn-87"><mml:math id="mml-ieqn-87"><mml:msub><mml:mi>&#x03C1;</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mn>509.233</mml:mn><mml:msub><mml:mi>&#x03C1;</mml:mi><mml:mrow><mml:mi>x</mml:mi><mml:mi>x</mml:mi></mml:mrow></mml:msub><mml:mo>+</mml:mo><mml:mn>0.0655</mml:mn><mml:mi>&#x03C1;</mml:mi><mml:mo>&#x2212;</mml:mo><mml:mn>46.478</mml:mn><mml:msup><mml:mi>&#x03C1;</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup></mml:math></inline-formula></td>
</tr>
</tbody>
</table>
</table-wrap>
<p>The measurements and predictions at different time nodes (12, 24, 36, and 48 h) are depicted in <xref ref-type="fig" rid="fig-6">Fig. 6a</xref>&#x2013;<xref ref-type="fig" rid="fig-6">d</xref> to demonstrate more intuitively the relationship between the discovered equations and the measured values. The prediction for each found PDE is derived by taking the measurement at 0 h as the initial condition and <inline-formula id="ieqn-88"><mml:math id="mml-ieqn-88"><mml:msub><mml:mi>&#x03C1;</mml:mi><mml:mrow><mml:mi>x</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>=</mml:mo><mml:mn>0</mml:mn><mml:mo>,</mml:mo><mml:mi>t</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:msub><mml:mi>&#x03C1;</mml:mi><mml:mrow><mml:mi>x</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>=</mml:mo><mml:mn>1900</mml:mn><mml:mo>,</mml:mo><mml:mi>t</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mn>0</mml:mn></mml:math></inline-formula> as the boundary condition. Each individual experiment used only 38 measurement points, a data volume so scarce that most other methods would fail. The experimental results show that the proposed PINN-STLasso can successfully discover the equations with relatively high accuracy.</p>
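A discovered RD equation can be integrated forward from the 0 h profile under the zero-flux boundaries stated above. The sketch below is a hedged forward-Euler illustration using the 14,000 cells-per-well coefficients from Table 3; the solver choice, time step, and the 50-&#x03BC;m grid spacing are assumptions for illustration, not the authors' actual scheme.

```python
import numpy as np

def simulate_rd(rho0, gamma, alpha, beta, dx=50.0, dt=0.1, t_end=48.0):
    """Forward-Euler integration of rho_t = gamma*rho_xx + alpha*rho - beta*rho^2
    with zero-flux (Neumann) boundaries at both ends of the 1-D domain."""
    rho = np.asarray(rho0, float).copy()
    steps = int(round(t_end / dt))
    for _ in range(steps):
        padded = np.pad(rho, 1, mode="edge")           # rho_x = 0 at the boundaries
        lap = (padded[2:] - 2.0 * rho + padded[:-2]) / dx**2
        rho = rho + dt * (gamma * lap + alpha * rho - beta * rho**2)
    return rho
```

With a spatially uniform initial profile the diffusion term vanishes and the solution follows pure logistic growth toward the carrying capacity alpha/beta, a quick sanity check on the scheme (note the explicit step requires gamma*dt/dx&#x00B2; well below 0.5 for stability).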
<fig id="fig-6">
<label>Figure 6</label>
<caption>
<title>Discovery results for cell migration and proliferation under different cell densities, (a) 14,000 cells per well; (b) 16,000 cells per well; (c) 18,000 cells per well; (d) 20,000 cells per well. In all figures, dots represent measurement, and lines depict prediction</title>
</caption>
<graphic mimetype="image" mime-subtype="tif" xlink:href="CMES_52585-fig-6a.tif"/><graphic mimetype="image" mime-subtype="tif" xlink:href="CMES_52585-fig-6b.tif"/><graphic mimetype="image" mime-subtype="tif" xlink:href="CMES_52585-fig-6c.tif"/>
</fig>
</sec>
<sec id="s3_4">
<label>3.4</label>
<title>Discussion</title>
<p>This study conducts experiments on several canonical inverse PDE problems and real experimental conditions. Results and comparison indicated that the proposed PINN-STLasso outperforms existing methods in the following aspects:</p>
<p>1) PINN-STLasso calculates the equation coefficients through the sparse regression method STLasso, which introduces known physical prior knowledge into the computational process. Specifically, the training of the surrogate model and the construction of the sparse dictionary are guided by physical priors, which significantly reduces the demand for training data and enhances the method&#x2019;s robustness to data noise.</p>
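A minimal sketch of the sequential-thresholding idea behind STLasso: alternate a Lasso fit with hard thresholding of small coefficients, refitting on the surviving dictionary terms. The dictionary Theta, the regularization weight, the threshold, and the iteration caps below are illustrative assumptions, not the paper's actual settings.

```python
import numpy as np

def lasso_cd(X, y, alpha, n_sweeps=200):
    """Coordinate-descent Lasso: min (1/2n)||y - Xw||^2 + alpha*||w||_1."""
    n, p = X.shape
    w = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_sweeps):
        for j in range(p):
            resid = y - X @ w + X[:, j] * w[j]   # residual excluding term j
            rho_j = X[:, j] @ resid
            w[j] = np.sign(rho_j) * max(abs(rho_j) - alpha * n, 0.0) / col_sq[j]
    return w

def stlasso(Theta, ut, alpha=1e-3, threshold=0.05, n_iter=10):
    """Recover a sparse coefficient vector xi with Theta @ xi ~ ut."""
    p = Theta.shape[1]
    active = np.ones(p, dtype=bool)      # mask of surviving candidate terms
    xi = np.zeros(p)
    for _ in range(n_iter):
        w = lasso_cd(Theta[:, active], ut, alpha)
        keep = np.abs(w) >= threshold    # hard-threshold small coefficients
        xi[:] = 0.0
        idx = np.flatnonzero(active)
        xi[idx[keep]] = w[keep]
        new_active = np.zeros(p, dtype=bool)
        new_active[idx[keep]] = True
        if not new_active.any() or np.array_equal(new_active, active):
            break                        # support stabilized (or emptied)
        active = new_active
    return xi

# Synthetic demo: recover a sparse coefficient vector from a random dictionary.
rng = np.random.default_rng(0)
Theta = rng.normal(size=(200, 5))        # 200 samples, 5 candidate terms
xi_true = np.array([0.0, 2.0, 0.0, -1.5, 0.0])
xi_hat = stlasso(Theta, Theta @ xi_true)
```

In the full method, Theta would hold candidate terms (e.g., derivatives and their products) evaluated through the surrogate DNN, and ut the corresponding time derivatives.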
<p>2) The sparse regression coefficients and the DNN parameters are trained in separate parts of the network, improving the interpretability of the overall process. Unlike most common methods based solely on deep networks, PINN-STLasso uses a DNN, specifically a multi-layer perceptron, only as a surrogate model. By the universal approximation theorem, the surrogate DNN can be interpreted as a nonlinear approximation of the original solution, while the Lasso sparse regression is an explicit iterative procedure with a completely transparent calculation principle. Accordingly, the proposed PINN-STLasso is superior in terms of interpretability.</p>
</sec>
</sec>
<sec id="s4">
<label>4</label>
<title>Conclusion</title>
<p>This research proposes PINN-STLasso, a method that incorporates Lasso regression into physics-informed neural networks for solving inverse PDE problems. A sparse regression module, STLasso, is established to optimize the sparse parameters by combining Lasso and STLS. STLasso is then inserted into the PINN to identify PDE expressions from observations. Experiments conducted on canonical PDE systems demonstrate that the proposed PINN-STLasso outperforms several recent methods in prediction accuracy.</p>
<p>However, several challenges remain to be explored: Lasso regression cannot address problems in complex domains, such as the Schr&#x00F6;dinger equation, which requires the development of specialized methods; and for highly complicated problems, such as reaction-diffusion processes, DNN-based methods still perform poorly, necessitating further exploration of the related issues.</p>
</sec>
</body>
<back>
<ack><p>The authors acknowledge the support from the National Key Lab of Aerospace Power System and Plasma Technology of Xi&#x2019;an Jiaotong University, China. We are also thankful for the insightful comments from anonymous reviewers, which have greatly improved this manuscript.</p>
</ack>
<sec><title>Funding Statement</title>
<p>The authors received no specific funding for this study.</p>
</sec>
<sec><title>Author Contributions</title>
<p>Study conception and design: Meng Ma, Liu Fu; data collection: Meng Ma, Liu Fu; analysis and interpretation of results: Meng Ma, Liu Fu, Xu Guo, Zhi Zhai; draft manuscript preparation: Meng Ma, Liu Fu, Xu Guo. All authors reviewed the results and approved the final version of the manuscript.</p>
</sec>
<sec sec-type="data-availability"><title>Availability of Data and Materials</title>
<p>Data will be made available on request.</p>
</sec>
<sec><title>Ethics Approval</title>
<p>Not applicable.</p>
</sec>
<sec sec-type="COI-statement"><title>Conflicts of Interest</title>
<p>The authors declare that they have no conflicts of interest to report regarding the present study.</p>
</sec>
<ref-list content-type="authoryear">
<title>References</title>
<ref id="ref-1"><label>1.</label><mixed-citation publication-type="book"><person-group person-group-type="author"><string-name><surname>Bleecker</surname> <given-names>D</given-names></string-name></person-group>. <source>Basic partial differential equations</source>. <publisher-loc>Boca Raton, FL, USA</publisher-loc>: <publisher-name>Chapman and Hall/CRC</publisher-name>; <year>2018</year>.</mixed-citation></ref>
<ref id="ref-2"><label>2.</label><mixed-citation publication-type="book"><person-group person-group-type="author"><string-name><surname>Renardy</surname> <given-names>M</given-names></string-name>, <string-name><surname>Rogers</surname> <given-names>RC</given-names></string-name></person-group>. <source>An introduction to partial differential equations</source>. <publisher-loc>Midtown Manhattan, New York City, USA</publisher-loc>: <publisher-name>Springer Science &#x0026; Business Media</publisher-name>; <year>2006</year>.</mixed-citation></ref>
<ref id="ref-3"><label>3.</label><mixed-citation publication-type="book"><person-group person-group-type="author"><string-name><surname>Evans</surname> <given-names>LC</given-names></string-name></person-group>. <source>Partial differential equations</source>. <publisher-loc>Providence, Rhode Island, USA</publisher-loc>: <publisher-name>American Mathematical Society</publisher-name>; <year>2022</year>.</mixed-citation></ref>
<ref id="ref-4"><label>4.</label><mixed-citation publication-type="other"><person-group person-group-type="author"><string-name><surname>Wandel</surname> <given-names>N</given-names></string-name>, <string-name><surname>Weinmann</surname> <given-names>M</given-names></string-name>, <string-name><surname>Klein</surname> <given-names>R</given-names></string-name></person-group>. <article-title>Learning incompressible fluid dynamics from scratch&#x2014;towards fast, differentiable fluid models that generalize</article-title>. <comment>arXiv preprint arXiv:2006.08762</comment>. 2020.</mixed-citation></ref>
<ref id="ref-5"><label>5.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Yuan</surname> <given-names>L</given-names></string-name>, <string-name><surname>Ni</surname> <given-names>Y</given-names></string-name>, <string-name><surname>Deng</surname> <given-names>X</given-names></string-name>, <string-name><surname>Hao</surname> <given-names>S</given-names></string-name></person-group>. <article-title>A-PINN: auxiliary physics informed neural networks for forward and inverse problems of nonlinear integro-differential equations</article-title>. <source>J Comput Phys</source>. <year>2022</year>;<volume>462</volume>:<fpage>111260</fpage>. doi:<pub-id pub-id-type="doi">10.1016/j.jcp.2022.111260</pub-id>.</mixed-citation></ref>
<ref id="ref-6"><label>6.</label><mixed-citation publication-type="book"><person-group person-group-type="author"><string-name><surname>Ames</surname> <given-names>WF</given-names></string-name></person-group>. <source>Numerical methods for partial differential equations</source>. <publisher-loc>Cambridge, MA, USA</publisher-loc>: <publisher-name>Academic press</publisher-name>; <year>2014</year>.</mixed-citation></ref>
<ref id="ref-7"><label>7.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Cuomo</surname> <given-names>S</given-names></string-name>, <string-name><surname>Di Cola</surname> <given-names>VS</given-names></string-name>, <string-name><surname>Giampaolo</surname> <given-names>F</given-names></string-name>, <string-name><surname>Rozza</surname> <given-names>G</given-names></string-name>, <string-name><surname>Raissi</surname> <given-names>M</given-names></string-name>, <string-name><surname>Piccialli</surname> <given-names>F</given-names></string-name></person-group>. <article-title>Scientific machine learning through physics&#x2013;informed neural networks: where we are and what&#x2019;s next</article-title>. <source>J Sci Comput</source>. <year>2022</year>;<volume>92</volume>(<issue>3</issue>):<fpage>88</fpage>. doi:<pub-id pub-id-type="doi">10.1007/s10915-022-01939-z</pub-id>.</mixed-citation></ref>
<ref id="ref-8"><label>8.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Raissi</surname> <given-names>M</given-names></string-name>, <string-name><surname>Perdikaris</surname> <given-names>P</given-names></string-name>, <string-name><surname>Karniadakis</surname> <given-names>GE</given-names></string-name></person-group>. <article-title>Physics-informed neural networks: a deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations</article-title>. <source>J Comput Phys</source>. <year>2019</year>;<volume>378</volume>:<fpage>686</fpage>&#x2013;<lpage>707</lpage>. doi:<pub-id pub-id-type="doi">10.1016/j.jcp.2018.10.045</pub-id>.</mixed-citation></ref>
<ref id="ref-9"><label>9.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Cai</surname> <given-names>S</given-names></string-name>, <string-name><surname>Mao</surname> <given-names>Z</given-names></string-name>, <string-name><surname>Wang</surname> <given-names>Z</given-names></string-name>, <string-name><surname>Yin</surname> <given-names>M</given-names></string-name>, <string-name><surname>Karniadakis</surname> <given-names>GE</given-names></string-name></person-group>. <article-title>Physics-informed neural networks (PINNs) for fluid mechanics: a review</article-title>. <source>Acta Mech Sin</source>. <year>2021</year>;<volume>37</volume>(<issue>12</issue>):<fpage>1727</fpage>&#x2013;<lpage>38</lpage>. doi:<pub-id pub-id-type="doi">10.1007/s10409-021-01148-1</pub-id>.</mixed-citation></ref>
<ref id="ref-10"><label>10.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Jiang</surname> <given-names>X</given-names></string-name>, <string-name><surname>Wang</surname> <given-names>X</given-names></string-name>, <string-name><surname>Wen</surname> <given-names>Z</given-names></string-name>, <string-name><surname>Li</surname> <given-names>E</given-names></string-name>, <string-name><surname>Wang</surname> <given-names>H</given-names></string-name></person-group>. <article-title>Practical uncertainty quantification for space-dependent inverse heat conduction problem via ensemble physics-informed neural networks</article-title>. <source>Int Commun Heat Mass Transf</source>. <year>2023</year>;<volume>147</volume>(<issue>2</issue>):<fpage>106940</fpage>. doi:<pub-id pub-id-type="doi">10.1016/j.icheatmasstransfer.2023.106940</pub-id>.</mixed-citation></ref>
<ref id="ref-11"><label>11.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Wen</surname> <given-names>Z</given-names></string-name>, <string-name><surname>Li</surname> <given-names>Y</given-names></string-name>, <string-name><surname>Wang</surname> <given-names>H</given-names></string-name>, <string-name><surname>Peng</surname> <given-names>Y</given-names></string-name></person-group>. <article-title>Data-driven spatiotemporal modeling for structural dynamics on irregular domains by stochastic dependency neural estimation</article-title>. <source>Comput Methods Appl Mech Eng</source>. <year>2023</year>;<volume>404</volume>:<fpage>115831</fpage>. doi:<pub-id pub-id-type="doi">10.1016/j.cma.2022.115831</pub-id>.</mixed-citation></ref>
<ref id="ref-12"><label>12.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Zhang</surname> <given-names>Z</given-names></string-name>, <string-name><surname>Cai</surname> <given-names>S</given-names></string-name>, <string-name><surname>Zhang</surname> <given-names>H</given-names></string-name></person-group>. <article-title>A symmetry group based supervised learning method for solving partial differential equations</article-title>. <source>Comput Methods Appl Mech Eng</source>. <year>2023</year>;<volume>414</volume>(<issue>5</issue>):<fpage>116181</fpage>. doi:<pub-id pub-id-type="doi">10.1016/j.cma.2023.116181</pub-id>.</mixed-citation></ref>
<ref id="ref-13"><label>13.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Zhang</surname> <given-names>Z</given-names></string-name>, <string-name><surname>Zhang</surname> <given-names>H</given-names></string-name>, <string-name><surname>Zhang</surname> <given-names>L</given-names></string-name>, <string-name><surname>Guo</surname> <given-names>L</given-names></string-name></person-group>. <article-title>Enforcing continuous symmetries in physics-informed neural network for solving forward and inverse problems of partial differential equations</article-title>. <source>J Comput Phys</source>. <year>2023</year>;<volume>492</volume>(<issue>153</issue>):<fpage>112415</fpage>. doi:<pub-id pub-id-type="doi">10.1016/j.jcp.2023.112415</pub-id>.</mixed-citation></ref>
<ref id="ref-14"><label>14.</label><mixed-citation publication-type="book"><person-group person-group-type="author"><string-name><surname>Isakov</surname> <given-names>V</given-names></string-name></person-group>. <source>Inverse problems for partial differential equations</source>. <publisher-loc>Midtown Manhattan, New York, NY, USA</publisher-loc>: <publisher-name>Springer</publisher-name>; <year>2006</year>.</mixed-citation></ref>
<ref id="ref-15"><label>15.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Bongard</surname> <given-names>J</given-names></string-name>, <string-name><surname>Lipson</surname> <given-names>H</given-names></string-name></person-group>. <article-title>Automated reverse engineering of nonlinear dynamical systems</article-title>. <source>Proc Nat Acad Sci</source>. <year>2007</year>;<volume>104</volume>(<issue>24</issue>):<fpage>9943</fpage>&#x2013;<lpage>48</lpage>. doi:<pub-id pub-id-type="doi">10.1073/pnas.0609476104</pub-id>; <pub-id pub-id-type="pmid">17553966</pub-id></mixed-citation></ref>
<ref id="ref-16"><label>16.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Schmidt</surname> <given-names>M</given-names></string-name>, <string-name><surname>Lipson</surname> <given-names>H</given-names></string-name></person-group>. <article-title>Distilling free-form natural laws from experimental data</article-title>. <source>Science</source>. <year>2009</year>;<volume>324</volume>(<issue>5923</issue>):<fpage>81</fpage>&#x2013;<lpage>5</lpage>. doi:<pub-id pub-id-type="doi">10.1126/science.1165893</pub-id>; <pub-id pub-id-type="pmid">19342586</pub-id></mixed-citation></ref>
<ref id="ref-17"><label>17.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Rudy</surname> <given-names>SH</given-names></string-name>, <string-name><surname>Brunton</surname> <given-names>SL</given-names></string-name>, <string-name><surname>Proctor</surname> <given-names>JL</given-names></string-name>, <string-name><surname>Kutz</surname> <given-names>JN</given-names></string-name></person-group>. <article-title>Data-driven discovery of partial differential equations</article-title>. <source>Sci Adv</source>. <year>2017</year>;<volume>3</volume>(<issue>4</issue>):<fpage>e1602614</fpage>. doi:<pub-id pub-id-type="doi">10.1126/sciadv.1602614</pub-id>; <pub-id pub-id-type="pmid">28508044</pub-id></mixed-citation></ref>
<ref id="ref-18"><label>18.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Schaeffer</surname> <given-names>H</given-names></string-name></person-group>. <article-title>Learning partial differential equations via data discovery and sparse optimization</article-title>. <source>Proc Royal Soc A: Math, Phy Eng Sci</source>. <year>2017</year>;<volume>473</volume>(<issue>2197</issue>):<fpage>20160446</fpage>. doi:<pub-id pub-id-type="doi">10.1098/rspa.2016.0446</pub-id>; <pub-id pub-id-type="pmid">28265183</pub-id></mixed-citation></ref>
<ref id="ref-19"><label>19.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Brunton</surname> <given-names>SL</given-names></string-name>, <string-name><surname>Proctor</surname> <given-names>JL</given-names></string-name>, <string-name><surname>Kutz</surname> <given-names>JN</given-names></string-name></person-group>. <article-title>Discovering governing equations from data by sparse identification of nonlinear dynamical systems</article-title>. <source>Proc Nat Acad Sci</source>. <year>2016</year>;<volume>113</volume>(<issue>15</issue>):<fpage>3932</fpage>&#x2013;<lpage>37</lpage>. doi:<pub-id pub-id-type="doi">10.1073/pnas.1517384113</pub-id>; <pub-id pub-id-type="pmid">27035946</pub-id></mixed-citation></ref>
<ref id="ref-20"><label>20.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Zheng</surname> <given-names>J</given-names></string-name>, <string-name><surname>Yang</surname> <given-names>Y</given-names></string-name></person-group>. <article-title>M-WDRNNs: mixed-weighted deep residual neural networks for forward and inverse PDE problems</article-title>. <source>Axioms</source>. <year>2023</year>;<volume>12</volume>(<issue>8</issue>):<fpage>750</fpage>. doi:<pub-id pub-id-type="doi">10.3390/axioms12080750</pub-id>.</mixed-citation></ref>
<ref id="ref-21"><label>21.</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><surname>Lemhadri</surname> <given-names>I</given-names></string-name>, <string-name><surname>Ruan</surname> <given-names>F</given-names></string-name>, <string-name><surname>Tibshirani</surname> <given-names>R</given-names></string-name></person-group>. <article-title>LassoNet: neural networks with feature sparsity</article-title>. In: <conf-name>Proceedings of The 24th International Conference on Artificial Intelligence and Statistics</conf-name>, <year>2021</year>; PMLR; vol. <volume>130</volume>, p. <fpage>10</fpage>&#x2013;<lpage>8</lpage>.</mixed-citation></ref>
<ref id="ref-22"><label>22.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Lu</surname> <given-names>L</given-names></string-name>, <string-name><surname>Meng</surname> <given-names>X</given-names></string-name>, <string-name><surname>Mao</surname> <given-names>Z</given-names></string-name>, <string-name><surname>Karniadakis</surname> <given-names>GE</given-names></string-name></person-group>. <article-title>DeepXDE: a deep learning library for solving differential equations</article-title>. <source>Siam Rev Soc Ind Appl Math</source>. <year>2021</year>;<volume>63</volume>(<issue>1</issue>):<fpage>208</fpage>&#x2013;<lpage>28</lpage>. doi:<pub-id pub-id-type="doi">10.1137/19M1274067</pub-id> <comment>2021/01/01</comment>.</mixed-citation></ref>
<ref id="ref-23"><label>23.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Yang</surname> <given-names>M</given-names></string-name>, <string-name><surname>Foster</surname> <given-names>JT</given-names></string-name></person-group>. <article-title>Multi-output physics-informed neural networks for forward and inverse PDE problems with uncertainties</article-title>. <source>Comput Methods Appl Mech Eng</source>. <year>2022</year>;<volume>402</volume>(<issue>1</issue>):<fpage>115041</fpage>. doi:<pub-id pub-id-type="doi">10.1016/j.cma.2022.115041</pub-id>.</mixed-citation></ref>
<ref id="ref-24"><label>24.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Chen</surname> <given-names>Z</given-names></string-name>, <string-name><surname>Liu</surname> <given-names>Y</given-names></string-name>, <string-name><surname>Sun</surname> <given-names>H</given-names></string-name></person-group>. <article-title>Physics-informed learning of governing equations from scarce data</article-title>. <source>Nat Commun</source>. <year>2021</year>;<volume>12</volume>(<issue>1</issue>):<fpage>6136</fpage>. doi:<pub-id pub-id-type="doi">10.1038/s41467-021-26434-1</pub-id>; <pub-id pub-id-type="pmid">34675223</pub-id></mixed-citation></ref>
<ref id="ref-25"><label>25.</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><surname>Long</surname> <given-names>Z</given-names></string-name>, <string-name><surname>Lu</surname> <given-names>Y</given-names></string-name>, <string-name><surname>Ma</surname> <given-names>X</given-names></string-name>, <string-name><surname>Dong</surname> <given-names>B</given-names></string-name></person-group>. <article-title>PDE-Net: learning PDEs from data</article-title>. In: <source>Proceedings of the 35th International Conference on Machine Learning</source>, <year>2018</year>; PMLR; vol. <volume>80</volume>, p. <fpage>3208</fpage>&#x2013;<lpage>16</lpage>.</mixed-citation></ref>
<ref id="ref-26"><label>26.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Long</surname> <given-names>Z</given-names></string-name>, <string-name><surname>Lu</surname> <given-names>Y</given-names></string-name>, <string-name><surname>Dong</surname> <given-names>B</given-names></string-name></person-group>. <article-title>PDE-Net 2.0: learning pdes from data with a numeric-symbolic hybrid deep network</article-title>. <source>J Comput Phys</source>. <year>2019</year>;<volume>399</volume>(<issue>24</issue>):<fpage>108925</fpage>. doi:<pub-id pub-id-type="doi">10.1016/j.jcp.2019.108925</pub-id>.</mixed-citation></ref>
<ref id="ref-27"><label>27.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Dong</surname> <given-names>B</given-names></string-name>, <string-name><surname>Jiang</surname> <given-names>Q</given-names></string-name>, <string-name><surname>Shen</surname> <given-names>Z</given-names></string-name></person-group>. <article-title>Image restoration: wavelet frame shrinkage, nonlinear evolution PDEs, and beyond</article-title>. <source>Multiscale Model Simul</source>. <year>2017</year>;<volume>15</volume>(<issue>1</issue>):<fpage>606</fpage>&#x2013;<lpage>60</lpage>. doi:<pub-id pub-id-type="doi">10.1137/15M1037457</pub-id>.</mixed-citation></ref>
<ref id="ref-28"><label>28.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Cai</surname> <given-names>J</given-names></string-name>, <string-name><surname>Dong</surname> <given-names>B</given-names></string-name>, <string-name><surname>Osher</surname> <given-names>S</given-names></string-name>, <string-name><surname>Shen</surname> <given-names>Z</given-names></string-name></person-group>. <article-title>Image restoration: total variation, wavelet frames, and beyond</article-title>. <source>J Am Math Soc</source>. <year>2012</year>;<volume>25</volume>(<issue>4</issue>):<fpage>1033</fpage>&#x2013;<lpage>89</lpage>. doi:<pub-id pub-id-type="doi">10.1090/S0894-0347-2012-00740-1</pub-id>.</mixed-citation></ref>
<ref id="ref-29"><label>29.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Rao</surname> <given-names>C</given-names></string-name>, <string-name><surname>Ren</surname> <given-names>P</given-names></string-name>, <string-name><surname>Wang</surname> <given-names>Q</given-names></string-name>, <string-name><surname>Buyukozturk</surname> <given-names>O</given-names></string-name>, <string-name><surname>Sun</surname> <given-names>H</given-names></string-name>, <string-name><surname>Liu</surname> <given-names>Y</given-names></string-name></person-group>. <article-title>Encoding physics to learn reaction-diffusion processes</article-title>. <source>Nat Mach Intell</source>. <year>2023</year>;<volume>5</volume>(<issue>7</issue>):<fpage>765</fpage>&#x2013;<lpage>79</lpage>. doi:<pub-id pub-id-type="doi">10.1038/s42256-023-00685-7</pub-id>.</mixed-citation></ref>
<ref id="ref-30"><label>30.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Huang</surname> <given-names>K</given-names></string-name>, <string-name><surname>Tao</surname> <given-names>S</given-names></string-name>, <string-name><surname>Wu</surname> <given-names>D</given-names></string-name>, <string-name><surname>Yang</surname> <given-names>C</given-names></string-name>, <string-name><surname>Gui</surname> <given-names>W</given-names></string-name></person-group>. <article-title>Physical informed sparse learning for robust modeling of distributed parameter system and its industrial applications</article-title>. <source>IEEE Trans Autom Sci Eng</source>. <year>2023</year>. doi:<pub-id pub-id-type="doi">10.1109/TASE.2023.3298806</pub-id>.</mixed-citation></ref>
<ref id="ref-31"><label>31.</label><mixed-citation publication-type="other"><person-group person-group-type="author"><string-name><surname>Li</surname> <given-names>J</given-names></string-name>, <string-name><surname>Sun</surname> <given-names>G</given-names></string-name>, <string-name><surname>Zhao</surname> <given-names>G</given-names></string-name>, <string-name><surname>Lehman</surname> <given-names>LWH</given-names></string-name></person-group>, editors. <article-title>Robust low-rank discovery of data-driven partial differential equations</article-title>. <source>Proc AAAI Conf Artif Intell.</source> <year>2020</year>;<volume>34</volume>(<issue>1</issue>):<fpage>767</fpage>&#x2013;<lpage>74</lpage>. doi:<pub-id pub-id-type="doi">10.1609/aaai.v34i01.5420</pub-id>.</mixed-citation></ref>
<ref id="ref-32"><label>32.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Yu</surname> <given-names>J</given-names></string-name>, <string-name><surname>Lu</surname> <given-names>L</given-names></string-name>, <string-name><surname>Meng</surname> <given-names>X</given-names></string-name>, <string-name><surname>Karniadakis</surname> <given-names>GE</given-names></string-name></person-group>. <article-title>Gradient-enhanced physics-informed neural networks for forward and inverse PDE problems</article-title>. <source>Comput Methods Appl Mech Eng</source>. <year>2022</year>;<volume>393</volume>(<issue>6</issue>):<fpage>114823</fpage>. doi:<pub-id pub-id-type="doi">10.1016/j.cma.2022.114823</pub-id>.</mixed-citation></ref>
<ref id="ref-33"><label>33.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Yang</surname> <given-names>L</given-names></string-name>, <string-name><surname>Meng</surname> <given-names>X</given-names></string-name>, <string-name><surname>Karniadakis</surname> <given-names>GE</given-names></string-name></person-group>. <article-title>B-PINNs: bayesian physics-informed neural networks for forward and inverse PDE problems with noisy data</article-title>. <source>J Comput Phys</source>. <year>2021</year>;<volume>425</volume>:<fpage>109913</fpage>. doi:<pub-id pub-id-type="doi">10.1016/j.jcp.2020.109913</pub-id>.</mixed-citation></ref>
<ref id="ref-34"><label>34.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Pakravan</surname> <given-names>S</given-names></string-name>, <string-name><surname>Mistani</surname> <given-names>PA</given-names></string-name>, <string-name><surname>Aragon-Calvo</surname> <given-names>MA</given-names></string-name>, <string-name><surname>Gibou</surname> <given-names>F</given-names></string-name></person-group>. <article-title>Solving inverse-PDE problems with physics-aware neural networks</article-title>. <source>J Comput Phys</source>. <year>2021</year>;<volume>440</volume>(<issue>4</issue>):<fpage>110414</fpage>. doi:<pub-id pub-id-type="doi">10.1016/j.jcp.2021.110414</pub-id>.</mixed-citation></ref>
<ref id="ref-35"><label>35.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Smets</surname> <given-names>BMN</given-names></string-name>, <string-name><surname>Portegies</surname> <given-names>J</given-names></string-name>, <string-name><surname>Bekkers</surname> <given-names>EJ</given-names></string-name>, <string-name><surname>Duits</surname> <given-names>R</given-names></string-name></person-group>. <article-title>PDE-based group equivariant convolutional neural networks</article-title>. <source>J Math Imaging Vis</source>. <year>2023</year>;<volume>65</volume>(<issue>1</issue>):<fpage>209</fpage>&#x2013;<lpage>39</lpage>. doi:<pub-id pub-id-type="doi">10.1007/s10851-022-01114-x</pub-id>.</mixed-citation></ref>
<ref id="ref-36"><label>36.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Hess</surname> <given-names>P</given-names></string-name>, <string-name><surname>Dr&#x00FC;ke</surname> <given-names>M</given-names></string-name>, <string-name><surname>Petri</surname> <given-names>S</given-names></string-name>, <string-name><surname>Strnad</surname> <given-names>FM</given-names></string-name>, <string-name><surname>Boers</surname> <given-names>N</given-names></string-name></person-group>. <article-title>Physically constrained generative adversarial networks for improving precipitation fields from earth system models</article-title>. <source>Nat Mach Intell</source>. <year>2022</year>;<volume>4</volume>(<issue>10</issue>):<fpage>828</fpage>&#x2013;<lpage>39</lpage>. doi:<pub-id pub-id-type="doi">10.1038/s42256-022-00540-1</pub-id>.</mixed-citation></ref>
<ref id="ref-37"><label>37.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Jin</surname> <given-names>W</given-names></string-name>, <string-name><surname>Shah</surname> <given-names>ET</given-names></string-name>, <string-name><surname>Penington</surname> <given-names>CJ</given-names></string-name>, <string-name><surname>McCue</surname> <given-names>SW</given-names></string-name>, <string-name><surname>Chopin</surname> <given-names>LK</given-names></string-name>, <string-name><surname>Simpson</surname> <given-names>MJ</given-names></string-name></person-group>. <article-title>Reproducibility of scratch assays is affected by the initial degree of confluence: experiments, modelling and model selection</article-title>. <source>J Theor Biol</source>. <year>2016</year>;<volume>390</volume>:<fpage>136</fpage>&#x2013;<lpage>45</lpage>. doi:<pub-id pub-id-type="doi">10.1016/j.jtbi.2015.10.040</pub-id>; <pub-id pub-id-type="pmid">26646767</pub-id></mixed-citation></ref>
</ref-list>
</back></article>