<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.1 20151215//EN" "http://jats.nlm.nih.gov/publishing/1.1/JATS-journalpublishing1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:mml="http://www.w3.org/1998/Math/MathML" xml:lang="en" article-type="research-article" dtd-version="1.1">
<front>
<journal-meta>
<journal-id journal-id-type="pmc">CMES</journal-id>
<journal-id journal-id-type="nlm-ta">CMES</journal-id>
<journal-id journal-id-type="publisher-id">CMES</journal-id>
<journal-title-group>
<journal-title>Computer Modeling in Engineering &#x0026; Sciences</journal-title>
</journal-title-group>
<issn pub-type="epub">1526-1506</issn>
<issn pub-type="ppub">1526-1492</issn>
<publisher>
<publisher-name>Tech Science Press</publisher-name>
<publisher-loc>USA</publisher-loc>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="publisher-id">56753</article-id>
<article-id pub-id-type="doi">10.32604/cmes.2024.056753</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Article</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>From Imperfection to Perfection: Advanced 3D Facial Reconstruction Using MICA Models and Self-Supervision Learning</article-title>
<alt-title alt-title-type="left-running-head">From Imperfection to Perfection: Advanced 3D Facial Reconstruction Using MICA Models and Self-Supervision Learning</alt-title>
<alt-title alt-title-type="right-running-head">From Imperfection to Perfection: Advanced 3D Facial Reconstruction Using MICA Models and Self-Supervision Learning</alt-title>
</title-group>
<contrib-group>
<contrib id="author-1" contrib-type="author">
<name name-style="western"><surname>Le</surname><given-names>Thinh D.</given-names></name></contrib>
<contrib id="author-2" contrib-type="author">
<name name-style="western"><surname>Nguyen</surname><given-names>Duong Q.</given-names></name></contrib>
<contrib id="author-3" contrib-type="author">
<name name-style="western"><surname>Nguyen</surname><given-names>Phuong D.</given-names></name></contrib>
<contrib id="author-4" contrib-type="author" corresp="yes">
<name name-style="western"><surname>Nguyen-Xuan</surname><given-names>H.</given-names></name><email>ngx.hung@hutech.edu.vn</email></contrib>
<aff><institution>CIRTECH Institute, HUTECH University</institution>, <addr-line>Ho Chi Minh City, 72308</addr-line>, <country>Viet Nam</country></aff>
</contrib-group>
<author-notes>
<corresp id="cor1"><label>&#x002A;</label>Corresponding Author: H. Nguyen-Xuan. Email: <email>ngx.hung@hutech.edu.vn</email></corresp>
</author-notes>
<pub-date date-type="collection" publication-format="electronic">
<year>2025</year>
</pub-date>
<pub-date date-type="pub" publication-format="electronic">
<day>27</day><month>1</month><year>2025</year>
</pub-date>
<volume>142</volume>
<issue>2</issue>
<fpage>1459</fpage>
<lpage>1479</lpage>
<history>
<date date-type="received">
<day>30</day>
<month>7</month>
<year>2024</year>
</date>
<date date-type="accepted">
<day>03</day>
<month>9</month>
<year>2024</year>
</date>
</history>
<permissions>
<copyright-statement>&#x00A9; 2025 The Authors.</copyright-statement>
<copyright-year>2025</copyright-year>
<copyright-holder>Published by Tech Science Press.</copyright-holder>
<license xlink:href="https://creativecommons.org/licenses/by/4.0/">
<license-p>This work is licensed under a <ext-link ext-link-type="uri" xlink:type="simple" xlink:href="https://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</ext-link>, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.</license-p>
</license>
</permissions>
<self-uri content-type="pdf" xlink:href="TSP_CMES_56753.pdf"></self-uri>
<abstract>
<p>Research on reconstructing imperfect faces is a challenging task. In this study, we explore a data-driven approach using a pre-trained MICA (MetrIC fAce) model combined with 3D printing to address this challenge. We propose a training strategy that utilizes the pre-trained MICA model and self-supervised learning techniques to improve accuracy and reduce the time needed for 3D facial structure reconstruction. Our results demonstrate high accuracy, evaluated by the geometric loss function and various statistical measures. To showcase the effectiveness of the approach, we used 3D printing to create a model that covers facial wounds. The findings indicate that our method produces a model that fits well and achieves comprehensive 3D facial reconstruction. This technique has the potential to aid doctors in treating patients with facial injuries.</p>
</abstract>
<kwd-group kwd-group-type="author">
<kwd>3D face reconstruction</kwd>
<kwd>self-supervised learning</kwd>
<kwd>face defect</kwd>
<kwd>3D printed prototypes</kwd>
</kwd-group>
</article-meta>
</front>
<body>
<sec id="s1">
<label>1</label>
<title>Introduction</title>
<p>People injured in traffic accidents or occupational accidents, or affected by birth defects and diseases, may lose a part of their body. Many parts of the human body can be injured and lose their natural function, as shown in <xref ref-type="fig" rid="fig-1">Fig. 1</xref>. Among them, the head area, which is soft and most vulnerable to external impacts, has the highest injury rate and probability of death. Reconstructing the wound and creating a product to cover it is crucial for patients to reduce the risk of re-injury, improve aesthetics, and increase confidence [<xref ref-type="bibr" rid="ref-1">1</xref>,<xref ref-type="bibr" rid="ref-2">2</xref>]. This research introduces an alternative way to reduce the risk of re-injury and to avoid impacts during movement, while also improving aesthetics and confidence for people with facial injuries. In addition, with advancements in 3D printing and scanning technologies for rapid prototyping, the reconstruction of organs has become faster, more accurate, and safer [<xref ref-type="bibr" rid="ref-3">3</xref>,<xref ref-type="bibr" rid="ref-4">4</xref>]. However, recreating the functions and true shapes of facial features damaged by injury remains highly challenging and requires multiple trials during treatment.</p>
<fig id="fig-1">
<label>Figure 1</label>
<caption>
<title>Replacing some parts of the human body with 3D printing technology [<xref ref-type="bibr" rid="ref-6">6</xref>,<xref ref-type="bibr" rid="ref-10">10</xref>,<xref ref-type="bibr" rid="ref-12">12</xref>,<xref ref-type="bibr" rid="ref-13">13</xref>,<xref ref-type="bibr" rid="ref-20">20</xref>,<xref ref-type="bibr" rid="ref-22">22</xref>]</title>
</caption>
<graphic mimetype="image" mime-subtype="tif" xlink:href="CMES_56753-fig-1.tif"/>
</fig>
<p>The reconstruction of human body parts is divided into three main research directions [<xref ref-type="bibr" rid="ref-5">5</xref>]: the reconstruction of the body [<xref ref-type="bibr" rid="ref-6">6</xref>,<xref ref-type="bibr" rid="ref-7">7</xref>], the reconstruction of the extremities [<xref ref-type="bibr" rid="ref-8">8</xref>,<xref ref-type="bibr" rid="ref-9">9</xref>], and the reconstruction of the head, as shown in <xref ref-type="fig" rid="fig-1">Fig. 1</xref>. The reconstruction of extremities has garnered significant research interest compared to other areas of the human body, for example the reconstruction of bones in the hands [<xref ref-type="bibr" rid="ref-10">10</xref>&#x2013;<xref ref-type="bibr" rid="ref-12">12</xref>] and the legs [<xref ref-type="bibr" rid="ref-13">13</xref>]. The reconstruction of the body is often divided into two parts: the reconstruction of the spine [<xref ref-type="bibr" rid="ref-14">14</xref>&#x2013;<xref ref-type="bibr" rid="ref-16">16</xref>] and the reconstruction of the ribs [<xref ref-type="bibr" rid="ref-17">17</xref>&#x2013;<xref ref-type="bibr" rid="ref-19">19</xref>]. Previous studies have focused on the body and the extremities, as shown in <xref ref-type="fig" rid="fig-2">Fig. 2</xref>, mainly because these areas are less susceptible to injury than other areas of the human body, and reconstructions there are relatively easy to implement, easy to assemble, and less risky for patients during treatment. The upper body can be divided into two main groups, namely the reconstruction of the face [<xref ref-type="bibr" rid="ref-20">20</xref>,<xref ref-type="bibr" rid="ref-21">21</xref>] and the reconstruction of the skull [<xref ref-type="bibr" rid="ref-22">22</xref>]. Due to the high fatality rate associated with injuries sustained in the head region, it is considered the most perilous area for humans. Consequently, research on this region is relatively scarce compared to other areas.</p>
<fig id="fig-2">
<label>Figure 2</label>
<caption>
<title>Some studies on reconstruction of parts of the human body [<xref ref-type="bibr" rid="ref-10">10</xref>&#x2013;<xref ref-type="bibr" rid="ref-20">20</xref>,<xref ref-type="bibr" rid="ref-22">22</xref>&#x2013;<xref ref-type="bibr" rid="ref-28">28</xref>]</title>
</caption>
<graphic mimetype="image" mime-subtype="tif" xlink:href="CMES_56753-fig-2.tif"/>
</fig>
<p>Having analyzed the unique risks associated with the patient&#x2019;s head area, we recognize the importance of thorough studies to enhance patient safety and optimize outcomes in this critical region. Farber et al. [<xref ref-type="bibr" rid="ref-23">23</xref>&#x2013;<xref ref-type="bibr" rid="ref-25">25</xref>] described a study of patients with head trauma treated at the Walter Reed National Military Medical Center, in which microsurgical techniques were applied for the first time to treat severe head wounds sustained in a war zone. The study explored various methods of healing these injuries to achieve the best possible outcome for patients. In addition, Zhou et al. [<xref ref-type="bibr" rid="ref-26">26</xref>,<xref ref-type="bibr" rid="ref-27">27</xref>] proposed an expanded forehead flap method to reconstruct deformities of the nose or the central region of the face. The method was performed on 22 patients (13 men and 9 women), of whom 17 had burns and 5 had other injuries. The expanded forehead flap was shown to be capable of reconstructing deformities of the nose and mouth with only one extensive treatment, and the flap can be flexibly adjusted to fit the size of a deformity in the middle of the face. Wang et al. [<xref ref-type="bibr" rid="ref-28">28</xref>] combined virtual surgical planning and 3D printing to reconstruct facial deformities of the jawbone. Study data were collected from the jaw structures of patients treated between 2013 and 2020. The patients were divided into two groups: group 1 (20 people) was treated with the above technique, and group 2 (14 people) underwent conventional manual surgery. The results indicate that the combined approach offers numerous advantages to patients. 
The positive outcomes of these studies have established a new trend in caring for those with injuries in hazardous areas and show great potential for regenerating and rejuvenating various body parts. However, it is important to note that these studies are limited to experimental settings and have not yet been implemented in routine practice.</p>
<p>In the previous study [<xref ref-type="bibr" rid="ref-29">29</xref>], we obtained promising results in the 3D reconstruction of facial defects. We also identified the limitations of published methods for regenerating new body parts to replace damaged ones and proposed an effective solution. Specifically, we implemented a process using 3D printing technology and deep learning to extract a filling part that masks the wound for patients with facial defects. A general framework of the wound-covering model is depicted in <xref ref-type="fig" rid="fig-3">Fig. 3</xref>. This approach can help patients with facial trauma regain their confidence in daily life while also protecting the wound from external elements and keeping it clean. Additionally, we introduced a dataset that covers multiple facial trauma cases. However, the deep learning model used in the previous work required significant computational resources for training due to its complexity, resulting in training times of several days and presenting a challenge to research in this field.</p>
<p>In the present study, our contributions are as follows:
<list list-type="bullet">
<list-item>
<p>Using the MICA model [<xref ref-type="bibr" rid="ref-30">30</xref>] for faster reconstruction of imperfect 3D facial data;</p></list-item>
<list-item>
<p>Proposing a new training strategy by integrating the pre-trained MICA model with self-supervised learning to improve 3D facial reconstruction;</p></list-item>
<list-item>
<p>Assessing the effectiveness of the proposed methods and discussing the challenges of filling in missing parts for patients with facial trauma.</p></list-item>
</list></p>
<fig id="fig-3">
<label>Figure 3</label>
<caption>
<title>The proposed strategy and the results obtained from this study</title>
</caption>
<graphic mimetype="image" mime-subtype="tif" xlink:href="CMES_56753-fig-3.tif"/>
</fig>
</sec>
<sec id="s2">
<label>2</label>
<title>Application of the MICA Model for Imperfect 3D Facial Reconstruction</title>
<p>In this section, we present the MICA model [<xref ref-type="bibr" rid="ref-30">30</xref>] for imperfect 3D facial reconstruction. Additionally, the geometric loss function is presented to evaluate the error of the reconstruction model. The imperfect 3D facial reconstruction process using the MICA model is detailed in <xref ref-type="fig" rid="fig-4">Fig. 4</xref>.</p>
<fig id="fig-4">
<label>Figure 4</label>
<caption>
<title>The MICA model&#x2019;s imperfect 3D facial reconstruction process</title>
</caption>
<graphic mimetype="image" mime-subtype="tif" xlink:href="CMES_56753-fig-4.tif"/>
</fig>
<p>Among 3D face reconstruction models, the MICA model stands out for its ability to analyze facial features and reconstruct finer details than previous methods, and it performs reliably on evaluation datasets. The MICA model takes a 2D image as input, which is highly suitable for our setting because 2D capture requires fewer computing resources and is more cost-effective than 3D input formats. The model comprises two main components, an Identity Encoder (IE) and a Geometry Decoder (GD), as shown in <xref ref-type="fig" rid="fig-5">Fig. 5</xref>. We modify the inputs and outputs for the process of reconstructing the patient&#x2019;s injured head area, as shown in <xref ref-type="fig" rid="fig-6">Fig. 6</xref>: the input is a 2D image of the patient&#x2019;s actual face with the wound, and the output is a 3D mesh restored to its pre-injury state.</p>
<fig id="fig-5">
<label>Figure 5</label>
<caption>
<title>A framework of the MICA model</title>
</caption>
<graphic mimetype="image" mime-subtype="tif" xlink:href="CMES_56753-fig-5.tif"/>
</fig><fig id="fig-6">
<label>Figure 6</label>
<caption>
<title>The proposed approach</title>
</caption>
<graphic mimetype="image" mime-subtype="tif" xlink:href="CMES_56753-fig-6.tif"/>
</fig>
<sec id="s2_1">
<label>2.1</label>
<title>Identity Encoder</title>
<p>The IE is formed by two main components: the ArcFace model and the Linear Mapping model. Its goal is to extract identity-specific information from an input 2D image. First, the ArcFace model determines the degree of similarity between the input image and a reference image of the same identity and generates a feature vector that captures identity-specific information. Second, the Linear Mapping model maps this feature vector to a higher-dimensional space to improve the discriminative power of the feature representation.
<list list-type="bullet">
<list-item>
<p>ArcFace: The ArcFace architecture (developed on top of ResNet-100) [<xref ref-type="bibr" rid="ref-31">31</xref>&#x2013;<xref ref-type="bibr" rid="ref-33">33</xref>] uses shortcut connections that map the input of earlier layers to later layers, thereby avoiding the phenomenon of vanishing gradients [<xref ref-type="bibr" rid="ref-34">34</xref>]. This is important for updating the model weights and enabling the model to learn good features. The approach is widely recognized for its effectiveness, as demonstrated by its application in several studies [<xref ref-type="bibr" rid="ref-33">33</xref>,<xref ref-type="bibr" rid="ref-34">34</xref>], which highlight the superiority of the additive angular margin loss in learning more discriminative features than traditional methods. Additional research [<xref ref-type="bibr" rid="ref-35">35</xref>,<xref ref-type="bibr" rid="ref-36">36</xref>] has further validated its advantages, particularly in improving recognition accuracy under challenging conditions. This is especially important when input images are monochrome, which makes it extremely difficult to distinguish the identities of the people in the photos. The model is trained on the Glint360K dataset [<xref ref-type="bibr" rid="ref-37">37</xref>], consisting of about 17 million images of 360 thousand people from four different continents. Our study uses the knowledge learned by this model to enrich the results.</p></list-item>
<list-item>
<p>Linear mapping (<inline-formula id="ieqn-1"><mml:math id="mml-ieqn-1"><mml:mrow><mml:mi>&#x02133;</mml:mi></mml:mrow></mml:math></inline-formula>): At the end of the model, a block of three fully connected layers with ReLU activations processes the information and adjusts its size to fit the Decoder component that receives it. This pre-trained mapping is also fine-tuned further on the data in our study.</p></list-item>
</list></p>
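As a rough illustration, the three-layer mapping network described above can be sketched as follows. This is a minimal NumPy sketch, not MICA's implementation: the layer widths (a 512-dimensional ArcFace feature mapped to a 300-dimensional code) and the random weights are illustrative assumptions.

```python
import numpy as np

def relu(x):
    """Element-wise rectified linear activation."""
    return np.maximum(x, 0.0)

def linear_mapping(feat, weights, biases):
    """Pass a feature vector through three fully connected layers with
    ReLU activations between them (no activation on the final layer)."""
    h = feat
    for i, (W, b) in enumerate(zip(weights, biases)):
        h = W @ h + b
        if i < len(weights) - 1:
            h = relu(h)
    return h

rng = np.random.default_rng(0)
dims = [512, 300, 300, 300]  # assumed sizes: ArcFace feature -> latent code z
weights = [rng.standard_normal((dims[i + 1], dims[i])) * 0.01 for i in range(3)]
biases = [np.zeros(dims[i + 1]) for i in range(3)]

feat = rng.standard_normal(512)       # stand-in for ArcFace(I)
z = linear_mapping(feat, weights, biases)
print(z.shape)                        # (300,)
```

The fixed 300-dimensional output matches the latent code dimension used by the Geometry Decoder in the next subsection.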
</sec>
<sec id="s2_2">
<label>2.2</label>
<title>Geometry Decoder</title>
<p>In the GD, the MICA model utilizes FLAME [<xref ref-type="bibr" rid="ref-38">38</xref>&#x2013;<xref ref-type="bibr" rid="ref-40">40</xref>], a well-known model for transferring feature information from 2D data vectors to the anthropometric morphology of faces. FLAME has been trained on 33,000 precisely aligned 3D faces, and in MICA it is further trained on several other datasets, including LYHM [<xref ref-type="bibr" rid="ref-41">41</xref>] and Stirling [<xref ref-type="bibr" rid="ref-42">42</xref>]. This extensive training allows the Decoder to reconstruct important features such as nose shape, size, and face thickness. By incorporating incomplete facial data into the transfer learning process, we can capitalize on the robust, well-optimized pre-trained MICA model. This approach is not only suitable for our study but also harnesses the full potential of the pre-existing MICA model, thus enhancing the overall performance of the 3D face reconstruction process.</p>
<p>In [<xref ref-type="bibr" rid="ref-30">30</xref>], ArcFace architecture is expanded by a small mapping network <inline-formula id="ieqn-2"><mml:math id="mml-ieqn-2"><mml:mrow><mml:mi>&#x02133;</mml:mi></mml:mrow></mml:math></inline-formula> that maps the ArcFace features to their latent space, which can then be interpreted by GD:
<disp-formula id="eqn-1"><label>(1)</label><mml:math id="mml-eqn-1" display="block"><mml:mtable columnalign="right left right left right left right left right left right left" rowspacing="3pt" columnspacing="0em 2em 0em 2em 0em 2em 0em 2em 0em 2em 0em" displaystyle="true"><mml:mtr><mml:mtd><mml:mrow><mml:mtext mathvariant="bold">z</mml:mtext></mml:mrow><mml:mo>=</mml:mo><mml:mrow><mml:mi>&#x02133;</mml:mi></mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mi>A</mml:mi><mml:mi>r</mml:mi><mml:mi>c</mml:mi><mml:mi>F</mml:mi><mml:mi>a</mml:mi><mml:mi>c</mml:mi><mml:mi>e</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:mi>I</mml:mi><mml:mo stretchy="false">)</mml:mo><mml:mo stretchy="false">)</mml:mo><mml:mo>,</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>where <inline-formula id="ieqn-3"><mml:math id="mml-ieqn-3"><mml:mrow><mml:mtext mathvariant="bold">z</mml:mtext></mml:mrow><mml:mo>&#x2208;</mml:mo><mml:msup><mml:mrow><mml:mi mathvariant="double-struck">R</mml:mi></mml:mrow><mml:mrow><mml:mn>300</mml:mn></mml:mrow></mml:msup></mml:math></inline-formula> and <italic>I</italic> is the RGB input image. Research by Zielonka et al. [<xref ref-type="bibr" rid="ref-30">30</xref>] used FLAME as a GD, which consists of a single linear layer:
<disp-formula id="eqn-2"><label>(2)</label><mml:math id="mml-eqn-2" display="block"><mml:mtable columnalign="right left right left right left right left right left right left" rowspacing="3pt" columnspacing="0em 2em 0em 2em 0em 2em 0em 2em 0em 2em 0em" displaystyle="true"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mi>&#x1D4A2;</mml:mi></mml:mrow><mml:mrow><mml:mn>3</mml:mn><mml:mi>D</mml:mi><mml:mi>M</mml:mi><mml:mi>M</mml:mi></mml:mrow></mml:msub><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mtext mathvariant="bold">z</mml:mtext></mml:mrow><mml:mo stretchy="false">)</mml:mo><mml:mo>=</mml:mo><mml:mrow><mml:mtext mathvariant="bold">B</mml:mtext></mml:mrow><mml:mo>&#x00D7;</mml:mo><mml:mrow><mml:mtext mathvariant="bold">z</mml:mtext></mml:mrow><mml:mo>+</mml:mo><mml:mrow><mml:mtext mathvariant="bold">A</mml:mtext></mml:mrow><mml:mo>,</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>where <inline-formula id="ieqn-4"><mml:math id="mml-ieqn-4"><mml:mrow><mml:mtext mathvariant="bold">A</mml:mtext></mml:mrow><mml:mo>&#x2208;</mml:mo><mml:msup><mml:mrow><mml:mi mathvariant="double-struck">R</mml:mi></mml:mrow><mml:mrow><mml:mn>3</mml:mn><mml:mi>N</mml:mi></mml:mrow></mml:msup></mml:math></inline-formula> is the geometry of the average human face and <inline-formula id="ieqn-5"><mml:math id="mml-ieqn-5"><mml:mrow><mml:mtext mathvariant="bold">B</mml:mtext></mml:mrow><mml:mo>&#x2208;</mml:mo><mml:msup><mml:mrow><mml:mi mathvariant="double-struck">R</mml:mi></mml:mrow><mml:mrow><mml:mn>3</mml:mn><mml:mi>N</mml:mi><mml:mo>&#x00D7;</mml:mo><mml:mn>300</mml:mn></mml:mrow></mml:msup></mml:math></inline-formula> contains the principal components of the 3DMM and <italic>N</italic> is the number of vertices of the output. Then, the loss function of the MICA model is defined as follows:
<disp-formula id="eqn-3"><label>(3)</label><mml:math id="mml-eqn-3" display="block"><mml:mtable columnalign="right left right left right left right left right left right left" rowspacing="3pt" columnspacing="0em 2em 0em 2em 0em 2em 0em 2em 0em 2em 0em" displaystyle="true"><mml:mtr><mml:mtd><mml:mrow><mml:mi>&#x02112;</mml:mi></mml:mrow><mml:mo>=</mml:mo><mml:munder><mml:mo>&#x2211;</mml:mo><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mi>I</mml:mi><mml:mo>,</mml:mo><mml:mrow><mml:mi>&#x1D4A2;</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo><mml:mo>&#x2208;</mml:mo><mml:mrow><mml:mi>&#x1D49F;</mml:mi></mml:mrow></mml:mrow></mml:munder><mml:mrow><mml:mo stretchy="false">|</mml:mo></mml:mrow><mml:msub><mml:mi>k</mml:mi><mml:mrow><mml:mi>m</mml:mi><mml:mi>a</mml:mi><mml:mi>s</mml:mi><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mo stretchy="false">(</mml:mo><mml:msub><mml:mrow><mml:mi>&#x1D4A2;</mml:mi></mml:mrow><mml:mrow><mml:mn>3</mml:mn><mml:mi>D</mml:mi><mml:mi>M</mml:mi><mml:mi>M</mml:mi></mml:mrow></mml:msub><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mrow><mml:mi mathvariant="bold-italic">M</mml:mi></mml:mrow></mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mi>A</mml:mi><mml:mi>r</mml:mi><mml:mi>c</mml:mi><mml:mi>F</mml:mi><mml:mi>a</mml:mi><mml:mi>c</mml:mi><mml:mi>e</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:mi>I</mml:mi><mml:mo stretchy="false">)</mml:mo><mml:mo stretchy="false">)</mml:mo><mml:mo stretchy="false">)</mml:mo><mml:mo>&#x2212;</mml:mo><mml:mrow><mml:mi>&#x1D4A2;</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo><mml:mrow><mml:mo stretchy="false">|</mml:mo></mml:mrow><mml:mo>,</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>where <inline-formula id="ieqn-6"><mml:math id="mml-ieqn-6"><mml:mrow><mml:mi>&#x1D4A2;</mml:mi></mml:mrow></mml:math></inline-formula> is the ground truth mesh, <italic>D</italic> is the data set and <inline-formula id="ieqn-7"><mml:math 
id="mml-ieqn-7"><mml:msub><mml:mi>k</mml:mi><mml:mrow><mml:mi>m</mml:mi><mml:mi>a</mml:mi><mml:mi>s</mml:mi><mml:mi>k</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> is a region-dependent weight.</p>
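The linear decoder of Eq. (2) and the masked error of Eq. (3) can be sketched in a few lines. This is an illustrative NumPy sketch under stated assumptions: the vertex count N = 5023 of the FLAME mesh is taken from the FLAME model, while the matrices A and B below are random stand-ins, not the trained parameters.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 5023                                    # vertex count of the FLAME mesh
A = rng.standard_normal(3 * N) * 0.01       # toy stand-in for the mean face
B = rng.standard_normal((3 * N, 300)) * 0.01  # toy principal components

def decode(z):
    """Linear 3DMM decoder of Eq. (2): G(z) = B z + A, as an (N, 3) array."""
    return (B @ z + A).reshape(N, 3)

def masked_l1(pred, gt, k_mask):
    """Region-weighted L1 error of Eq. (3) for a single training sample."""
    return float(np.sum(k_mask[:, None] * np.abs(pred - gt)))

z = rng.standard_normal(300)                # latent code from the encoder
mesh = decode(z)
k_mask = np.ones(N)                         # uniform region weights here
loss = masked_l1(mesh, np.zeros((N, 3)), k_mask)
print(mesh.shape)                           # (5023, 3)
```

In training, Eq. (3) sums this per-sample error over the whole dataset, with larger `k_mask` weights typically placed on perceptually important facial regions.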
</sec>
<sec id="s2_3">
<label>2.3</label>
<title>Geometric Loss Function</title>
<p>The geometric loss function is a critical metric in 3D modeling and reconstruction tasks, as it measures the discrepancy between predicted and ground truth models in three-dimensional space. It is commonly defined using the Euclidean distance, which provides a straightforward and intuitive metric for evaluating differences between corresponding points in 3D space [<xref ref-type="bibr" rid="ref-43">43</xref>]. For two points <inline-formula id="ieqn-8"><mml:math id="mml-ieqn-8"><mml:mi>p</mml:mi></mml:math></inline-formula> and <inline-formula id="ieqn-9"><mml:math id="mml-ieqn-9"><mml:mi>q</mml:mi></mml:math></inline-formula> in 3D space, the Euclidean distance is given by:
<disp-formula id="eqn-4"><label>(4)</label><mml:math id="mml-eqn-4" display="block"><mml:mtable columnalign="right left right left right left right left right left right left" rowspacing="3pt" columnspacing="0em 2em 0em 2em 0em 2em 0em 2em 0em 2em 0em" displaystyle="true"><mml:mtr><mml:mtd><mml:mi>d</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:mi>p</mml:mi><mml:mo>,</mml:mo><mml:mi>q</mml:mi><mml:mo stretchy="false">)</mml:mo><mml:mo>=</mml:mo><mml:msqrt><mml:mo stretchy="false">(</mml:mo><mml:msub><mml:mi>p</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>&#x2212;</mml:mo><mml:msub><mml:mi>q</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:msup><mml:mo stretchy="false">)</mml:mo><mml:mn>2</mml:mn></mml:msup><mml:mo>+</mml:mo><mml:mo stretchy="false">(</mml:mo><mml:msub><mml:mi>p</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo>&#x2212;</mml:mo><mml:msub><mml:mi>q</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:msup><mml:mo stretchy="false">)</mml:mo><mml:mn>2</mml:mn></mml:msup><mml:mo>+</mml:mo><mml:mo stretchy="false">(</mml:mo><mml:msub><mml:mi>p</mml:mi><mml:mn>3</mml:mn></mml:msub><mml:mo>&#x2212;</mml:mo><mml:msub><mml:mi>q</mml:mi><mml:mn>3</mml:mn></mml:msub><mml:msup><mml:mo stretchy="false">)</mml:mo><mml:mn>2</mml:mn></mml:msup></mml:msqrt><mml:mo>.</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula></p>
<p>In this study, we use the mean of all distances of vertices to compute the geometric loss. Let <italic>P</italic> and <italic>Q</italic> be the sets of vertices for two 3D models with <inline-formula id="ieqn-10"><mml:math id="mml-ieqn-10"><mml:mi>n</mml:mi></mml:math></inline-formula> vertices each. The geometric loss function is then computed as:
<disp-formula id="eqn-5"><label>(5)</label><mml:math id="mml-eqn-5" display="block"><mml:mtable columnalign="right left right left right left right left right left right left" rowspacing="3pt" columnspacing="0em 2em 0em 2em 0em 2em 0em 2em 0em 2em 0em" displaystyle="true"><mml:mtr><mml:mtd><mml:mi>L</mml:mi><mml:mo>=</mml:mo><mml:mfrac><mml:mn>1</mml:mn><mml:mi>n</mml:mi></mml:mfrac><mml:munderover><mml:mo movablelimits="false">&#x2211;</mml:mo><mml:mrow><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:munderover><mml:mi>d</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:msub><mml:mi>p</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mi>q</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo stretchy="false">)</mml:mo><mml:mo>,</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>where <inline-formula id="ieqn-11"><mml:math id="mml-ieqn-11"><mml:msub><mml:mi>p</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:math></inline-formula> and <inline-formula id="ieqn-12"><mml:math id="mml-ieqn-12"><mml:msub><mml:mi>q</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:math></inline-formula> are corresponding vertices in the sets <italic>P</italic> and <italic>Q</italic>, respectively. This approach allows us to quantify the overall difference between the two 3D models by averaging the distances between corresponding vertices.</p>
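A minimal implementation of Eqs. (4) and (5) illustrates the computation; the two toy vertex sets below are hypothetical inputs, not data from the study.

```python
import numpy as np

def geometric_loss(P, Q):
    """Mean Euclidean distance between corresponding vertices, Eq. (5).
    P and Q are (n, 3) arrays of vertex coordinates for two 3D models."""
    P, Q = np.asarray(P, dtype=float), np.asarray(Q, dtype=float)
    assert P.shape == Q.shape, "models must have the same number of vertices"
    # np.linalg.norm(..., axis=1) gives the per-vertex distance of Eq. (4)
    return float(np.mean(np.linalg.norm(P - Q, axis=1)))

P = [[0.0, 0.0, 0.0], [1.0, 1.0, 1.0]]
Q = [[3.0, 4.0, 0.0], [1.0, 1.0, 1.0]]
print(geometric_loss(P, Q))  # (5.0 + 0.0) / 2 = 2.5
```

Note that this metric assumes a one-to-one vertex correspondence between the two meshes, which holds here because all reconstructions share the same FLAME topology.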
</sec>
</sec>
<sec id="s3">
<label>3</label>
<title>Recommendations from the Study</title>
<p>In this section, we propose incorporating self-supervised learning techniques into a pre-trained MICA model to enhance the results of imperfect 3D face reconstruction.</p>
<sec id="s3_1">
<label>3.1</label>
<title>Self-Supervised Learning</title>
<p>A self-supervised learning algorithm was adopted to enhance the learning ability of the pre-trained Encoder component in our model. The motivation is that fine-tuning a model trained on a vast dataset with only a small amount of new data can significantly alter the model weights and degrade performance. The self-supervised learning algorithm we use, Swapping Assignments Between Views (SwAV), was developed by Caron et al. [<xref ref-type="bibr" rid="ref-44">44</xref>] and is derived from contrastive instance learning [<xref ref-type="bibr" rid="ref-45">45</xref>]. Related methods are based on traditional clustering techniques [<xref ref-type="bibr" rid="ref-45">45</xref>,<xref ref-type="bibr" rid="ref-46">46</xref>]: they typically operate offline, alternating between a step that assigns cluster labels to the whole dataset and a training step. In such approaches, image features from the entire dataset are first clustered, and the assigned clusters are used as targets for predicting different views of the images during training. However, clustering-based methods are unsuitable for online learning because of the extensive computation required to obtain the features of the entire dataset for clustering. SwAV can instead be conceptualized as comparing different views of the same image through the clusters to which their features are assigned: it computes a code from one augmented view of an image and predicts this code from other augmented views of the same image. 
Given two image features <inline-formula id="ieqn-13"><mml:math id="mml-ieqn-13"><mml:msub><mml:mrow><mml:mtext mathvariant="bold">z</mml:mtext></mml:mrow><mml:mi>t</mml:mi></mml:msub></mml:math></inline-formula> and <inline-formula id="ieqn-14"><mml:math id="mml-ieqn-14"><mml:msub><mml:mrow><mml:mtext mathvariant="bold">z</mml:mtext></mml:mrow><mml:mi>s</mml:mi></mml:msub></mml:math></inline-formula> from two distinct augmented versions of the same image, their codes <inline-formula id="ieqn-15"><mml:math id="mml-ieqn-15"><mml:msub><mml:mrow><mml:mtext mathvariant="bold">q</mml:mtext></mml:mrow><mml:mi>t</mml:mi></mml:msub></mml:math></inline-formula> and <inline-formula id="ieqn-16"><mml:math id="mml-ieqn-16"><mml:msub><mml:mrow><mml:mtext mathvariant="bold">q</mml:mtext></mml:mrow><mml:mi>s</mml:mi></mml:msub></mml:math></inline-formula> are computed by matching these features against a set of <italic>K</italic> prototypes <inline-formula id="ieqn-17"><mml:math id="mml-ieqn-17"><mml:mo fence="false" stretchy="false">{</mml:mo><mml:msub><mml:mrow><mml:mtext mathvariant="bold">c</mml:mtext></mml:mrow><mml:mn>1</mml:mn></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mtext mathvariant="bold">c</mml:mtext></mml:mrow><mml:mn>2</mml:mn></mml:msub><mml:mo>,</mml:mo><mml:mo>&#x2026;</mml:mo><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mtext mathvariant="bold">c</mml:mtext></mml:mrow><mml:mi>K</mml:mi></mml:msub><mml:mo fence="false" stretchy="false">}</mml:mo></mml:math></inline-formula>. The <italic>swapped</italic> prediction problem is then constructed using the following loss function:
<disp-formula id="eqn-6"><label>(6)</label><mml:math id="mml-eqn-6" display="block"><mml:mtable columnalign="right left right left right left right left right left right left" rowspacing="3pt" columnspacing="0em 2em 0em 2em 0em 2em 0em 2em 0em 2em 0em" displaystyle="true"><mml:mtr><mml:mtd><mml:mi>L</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:msub><mml:mrow><mml:mtext mathvariant="bold">z</mml:mtext></mml:mrow><mml:mi>t</mml:mi></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mtext mathvariant="bold">z</mml:mtext></mml:mrow><mml:mi>s</mml:mi></mml:msub><mml:mo stretchy="false">)</mml:mo><mml:mo>=</mml:mo><mml:mi>&#x2113;</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:msub><mml:mrow><mml:mtext mathvariant="bold">z</mml:mtext></mml:mrow><mml:mi>t</mml:mi></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mtext mathvariant="bold">q</mml:mtext></mml:mrow><mml:mi>s</mml:mi></mml:msub><mml:mo stretchy="false">)</mml:mo><mml:mo>+</mml:mo><mml:mi>&#x2113;</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:msub><mml:mrow><mml:mtext mathvariant="bold">z</mml:mtext></mml:mrow><mml:mi>s</mml:mi></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mtext mathvariant="bold">q</mml:mtext></mml:mrow><mml:mi>t</mml:mi></mml:msub><mml:mo stretchy="false">)</mml:mo><mml:mo>,</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>where the function <inline-formula id="ieqn-18"><mml:math id="mml-ieqn-18"><mml:mi>L</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:msub><mml:mrow><mml:mtext mathvariant="bold">z</mml:mtext></mml:mrow><mml:mi>t</mml:mi></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mtext mathvariant="bold">z</mml:mtext></mml:mrow><mml:mi>s</mml:mi></mml:msub><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula> measures the fit between the attribute <inline-formula id="ieqn-19"><mml:math id="mml-ieqn-19"><mml:msub><mml:mrow><mml:mtext mathvariant="bold">z</mml:mtext></mml:mrow><mml:mi>t</mml:mi></mml:msub></mml:math></inline-formula> 
and the code <inline-formula id="ieqn-20"><mml:math id="mml-ieqn-20"><mml:msub><mml:mrow><mml:mtext mathvariant="bold">q</mml:mtext></mml:mrow><mml:mi>s</mml:mi></mml:msub></mml:math></inline-formula>.</p>
<sec id="s3_1_1">
<label>3.1.1</label>
<title>Online Clustering</title>
<p>Online clustering is a technique used in self-supervised learning of visual representations to group similar image patches or features into clusters. This approach is based on the idea that similar patches or features will have analogous representations in the learned visual space. In online clustering, the visual features of numerous unlabeled images are extracted and clustered incrementally as new images become available. The process begins with an empty set of clusters. As new features are extracted from an image, they are either assigned to one of the existing clusters or used to create a new cluster. Each image <inline-formula id="ieqn-21"><mml:math id="mml-ieqn-21"><mml:msub><mml:mrow><mml:mtext mathvariant="bold">x</mml:mtext></mml:mrow><mml:mi>n</mml:mi></mml:msub></mml:math></inline-formula> is transformed into an augmented view <inline-formula id="ieqn-22"><mml:math id="mml-ieqn-22"><mml:msub><mml:mrow><mml:mtext mathvariant="bold">x</mml:mtext></mml:mrow><mml:mrow><mml:mi>n</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> by applying a transformation <inline-formula id="ieqn-23"><mml:math id="mml-ieqn-23"><mml:mi>t</mml:mi></mml:math></inline-formula> from the set <inline-formula id="ieqn-24"><mml:math id="mml-ieqn-24"><mml:mrow><mml:mi>&#x1D4AF;</mml:mi></mml:mrow></mml:math></inline-formula> of transformations. The augmented view is then mapped to a feature vector by the non-linear mapping function <inline-formula id="ieqn-25"><mml:math id="mml-ieqn-25"><mml:msub><mml:mi>f</mml:mi><mml:mi>&#x03B8;</mml:mi></mml:msub></mml:math></inline-formula>, resulting in <inline-formula id="ieqn-26"><mml:math id="mml-ieqn-26"><mml:msub><mml:mi>f</mml:mi><mml:mi>&#x03B8;</mml:mi></mml:msub><mml:mo stretchy="false">(</mml:mo><mml:msub><mml:mrow><mml:mtext mathvariant="bold">x</mml:mtext></mml:mrow><mml:mrow><mml:mi>n</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula>. 
This feature is then projected onto the unit sphere, i.e., <inline-formula id="ieqn-27"><mml:math id="mml-ieqn-27"><mml:msub><mml:mrow><mml:mtext mathvariant="bold">z</mml:mtext></mml:mrow><mml:mrow><mml:mi>n</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:msub><mml:mi>f</mml:mi><mml:mi>&#x03B8;</mml:mi></mml:msub><mml:mo stretchy="false">(</mml:mo><mml:msub><mml:mrow><mml:mtext mathvariant="bold">x</mml:mtext></mml:mrow><mml:mrow><mml:mi>n</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo stretchy="false">)</mml:mo><mml:mrow><mml:mo>/</mml:mo></mml:mrow><mml:mo fence="false" stretchy="false">&#x2016;</mml:mo><mml:msub><mml:mi>f</mml:mi><mml:mi>&#x03B8;</mml:mi></mml:msub><mml:mo stretchy="false">(</mml:mo><mml:msub><mml:mrow><mml:mtext mathvariant="bold">x</mml:mtext></mml:mrow><mml:mrow><mml:mi>n</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo stretchy="false">)</mml:mo><mml:msub><mml:mo fence="false" stretchy="false">&#x2016;</mml:mo><mml:mn>2</mml:mn></mml:msub></mml:math></inline-formula>. 
Subsequently, the code <inline-formula id="ieqn-28"><mml:math id="mml-ieqn-28"><mml:msub><mml:mrow><mml:mtext mathvariant="bold">q</mml:mtext></mml:mrow><mml:mrow><mml:mi>n</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> is computed from this feature by mapping <inline-formula id="ieqn-29"><mml:math id="mml-ieqn-29"><mml:msub><mml:mrow><mml:mtext mathvariant="bold">z</mml:mtext></mml:mrow><mml:mrow><mml:mi>n</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> to a set of <italic>K</italic> trainable prototype vectors <inline-formula id="ieqn-30"><mml:math id="mml-ieqn-30"><mml:mo fence="false" stretchy="false">{</mml:mo><mml:msub><mml:mrow><mml:mtext mathvariant="bold">c</mml:mtext></mml:mrow><mml:mn>1</mml:mn></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mtext mathvariant="bold">c</mml:mtext></mml:mrow><mml:mn>2</mml:mn></mml:msub><mml:mo>,</mml:mo><mml:mo>&#x2026;</mml:mo><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mtext mathvariant="bold">c</mml:mtext></mml:mrow><mml:mi>K</mml:mi></mml:msub><mml:mo fence="false" stretchy="false">}</mml:mo></mml:math></inline-formula>. The study represents these prototypes by a matrix <inline-formula id="ieqn-31"><mml:math id="mml-ieqn-31"><mml:mrow><mml:mtext mathvariant="bold">C</mml:mtext></mml:mrow></mml:math></inline-formula> with columns <inline-formula id="ieqn-32"><mml:math id="mml-ieqn-32"><mml:mo fence="false" stretchy="false">{</mml:mo><mml:msub><mml:mrow><mml:mtext mathvariant="bold">c</mml:mtext></mml:mrow><mml:mn>1</mml:mn></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mtext mathvariant="bold">c</mml:mtext></mml:mrow><mml:mn>2</mml:mn></mml:msub><mml:mo>,</mml:mo><mml:mo>&#x2026;</mml:mo><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mtext mathvariant="bold">c</mml:mtext></mml:mrow><mml:mi>K</mml:mi></mml:msub><mml:mo fence="false" stretchy="false">}</mml:mo></mml:math></inline-formula>.</p>
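<p>The feature-extraction and projection steps above can be sketched as follows. This is a minimal NumPy illustration, assuming a toy random linear map as a stand-in for the deep encoder and illustrative array shapes and prototype count:</p>

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the non-linear encoder f_theta; the real model is a
# deep network. It maps a flattened augmented view to a 128-d feature.
W = rng.standard_normal((128, 3 * 8 * 8))

def f_theta(x):
    return W @ x.ravel()

def project_to_sphere(feat):
    # z_nt = f_theta(x_nt) / ||f_theta(x_nt)||_2 : projection onto the unit sphere
    return feat / np.linalg.norm(feat)

x_nt = rng.standard_normal((3, 8, 8))      # one augmented view of an image x_n
z_nt = project_to_sphere(f_theta(x_nt))    # unit-norm feature vector

# K trainable prototype vectors, stored as the columns of the matrix C.
K = 10
C = rng.standard_normal((z_nt.size, K))
scores = C.T @ z_nt                        # dot products z_nt^T c_k
```

The `scores` vector is what the softmax in the next subsection turns into prediction probabilities.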
</sec>
<sec id="s3_1_2">
<label>3.1.2</label>
<title>Swapped Prediction Problem</title>
<p>The loss function in <xref ref-type="disp-formula" rid="eqn-6">Eq. (6)</xref> has two terms that establish a &#x201C;swapped&#x201D; prediction problem by predicting the code <inline-formula id="ieqn-33"><mml:math id="mml-ieqn-33"><mml:msub><mml:mrow><mml:mtext mathvariant="bold">q</mml:mtext></mml:mrow><mml:mi>t</mml:mi></mml:msub></mml:math></inline-formula> from the feature <inline-formula id="ieqn-34"><mml:math id="mml-ieqn-34"><mml:msub><mml:mrow><mml:mtext mathvariant="bold">z</mml:mtext></mml:mrow><mml:mi>s</mml:mi></mml:msub></mml:math></inline-formula> and <inline-formula id="ieqn-35"><mml:math id="mml-ieqn-35"><mml:msub><mml:mrow><mml:mtext mathvariant="bold">q</mml:mtext></mml:mrow><mml:mi>s</mml:mi></mml:msub></mml:math></inline-formula> from <inline-formula id="ieqn-36"><mml:math id="mml-ieqn-36"><mml:msub><mml:mrow><mml:mtext mathvariant="bold">z</mml:mtext></mml:mrow><mml:mi>t</mml:mi></mml:msub></mml:math></inline-formula>. Each term is the cross-entropy loss between the code and the probability obtained by taking a softmax of the dot products between the corresponding feature and all the prototypes in <bold>C</bold>, i.e.,
<disp-formula id="eqn-7"><label>(7)</label><mml:math id="mml-eqn-7" display="block"><mml:mtable columnalign="right left right left right left right left right left right left" rowspacing="3pt" columnspacing="0em 2em 0em 2em 0em 2em 0em 2em 0em 2em 0em" displaystyle="true"><mml:mtr><mml:mtd><mml:mi>&#x2113;</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:msub><mml:mrow><mml:mtext mathvariant="bold">z</mml:mtext></mml:mrow><mml:mi>t</mml:mi></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mtext mathvariant="bold">q</mml:mtext></mml:mrow><mml:mi>s</mml:mi></mml:msub><mml:mo>)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mo>&#x2212;</mml:mo><mml:munder><mml:mo movablelimits="false">&#x2211;</mml:mo><mml:mi>k</mml:mi></mml:munder><mml:msubsup><mml:mrow><mml:mtext mathvariant="bold">q</mml:mtext></mml:mrow><mml:mi>s</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mi>k</mml:mi><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:msubsup><mml:mi>log</mml:mi><mml:mo>&#x2061;</mml:mo><mml:msubsup><mml:mrow><mml:mtext mathvariant="bold">p</mml:mtext></mml:mrow><mml:mi>t</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mi>k</mml:mi><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:msubsup><mml:mo>,</mml:mo><mml:mspace width="1em" /><mml:mtext>&#xA0;where&#xA0;</mml:mtext><mml:mspace width="1em" /><mml:msubsup><mml:mrow><mml:mtext mathvariant="bold">p</mml:mtext></mml:mrow><mml:mi>t</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mi>k</mml:mi><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:msubsup><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mi>exp</mml:mi><mml:mo>&#x2061;</mml:mo><mml:mrow><mml:mo>(</mml:mo><mml:mfrac><mml:mn>1</mml:mn><mml:mi>&#x03C4;</mml:mi></mml:mfrac><mml:msubsup><mml:mrow><mml:mtext mathvariant="bold">z</mml:mtext></mml:mrow><mml:mi>t</mml:mi><mml:mrow><mml:mi mathvariant="normal">&#x22A4;</mml:mi></mml:mrow></mml:msubsup><mml:msub><mml:mrow><mml:mtext 
mathvariant="bold">c</mml:mtext></mml:mrow><mml:mi>k</mml:mi></mml:msub><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:msub><mml:mo movablelimits="false">&#x2211;</mml:mo><mml:mrow><mml:msup><mml:mi>k</mml:mi><mml:mrow><mml:mi mathvariant="normal">&#x2032;</mml:mi></mml:mrow></mml:msup></mml:mrow></mml:msub><mml:mi>exp</mml:mi><mml:mo>&#x2061;</mml:mo><mml:mrow><mml:mo>(</mml:mo><mml:mfrac><mml:mn>1</mml:mn><mml:mi>&#x03C4;</mml:mi></mml:mfrac><mml:msubsup><mml:mrow><mml:mtext mathvariant="bold">z</mml:mtext></mml:mrow><mml:mi>t</mml:mi><mml:mrow><mml:mi mathvariant="normal">&#x22A4;</mml:mi></mml:mrow></mml:msubsup><mml:msub><mml:mrow><mml:mtext mathvariant="bold">c</mml:mtext></mml:mrow><mml:mrow><mml:msup><mml:mi>k</mml:mi><mml:mrow><mml:mi mathvariant="normal">&#x2032;</mml:mi></mml:mrow></mml:msup></mml:mrow></mml:msub><mml:mo>)</mml:mo></mml:mrow></mml:mrow></mml:mfrac><mml:mo>,</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>where <inline-formula id="ieqn-37"><mml:math id="mml-ieqn-37"><mml:mi>&#x03C4;</mml:mi></mml:math></inline-formula> is the temperature parameter. Using this loss function for all images and pairs of augmentations yields a general loss function for the swapped prediction problem:
<disp-formula id="eqn-8"><label>(8)</label><mml:math id="mml-eqn-8" display="block"><mml:mtable columnalign="right left right left right left right left right left right left" rowspacing="3pt" columnspacing="0em 2em 0em 2em 0em 2em 0em 2em 0em 2em 0em" displaystyle="true"><mml:mtr><mml:mtd><mml:mo>&#x2212;</mml:mo><mml:mfrac><mml:mn>1</mml:mn><mml:mi>N</mml:mi></mml:mfrac><mml:munderover><mml:mo movablelimits="false">&#x2211;</mml:mo><mml:mrow><mml:mi>n</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mi>N</mml:mi></mml:munderover><mml:munder><mml:mo movablelimits="false">&#x2211;</mml:mo><mml:mrow><mml:mi>s</mml:mi><mml:mo>,</mml:mo><mml:mi>t</mml:mi><mml:mo>&#x223C;</mml:mo><mml:mrow><mml:mi>&#x1D4AF;</mml:mi></mml:mrow></mml:mrow></mml:munder><mml:mrow><mml:mo>[</mml:mo><mml:mfrac><mml:mn>1</mml:mn><mml:mi>&#x03C4;</mml:mi></mml:mfrac><mml:msubsup><mml:mrow><mml:mtext mathvariant="bold">z</mml:mtext></mml:mrow><mml:mrow><mml:mi>n</mml:mi><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mi mathvariant="normal">&#x22A4;</mml:mi></mml:mrow></mml:msubsup><mml:mrow><mml:mtext mathvariant="bold">C</mml:mtext></mml:mrow><mml:msub><mml:mrow><mml:mtext mathvariant="bold">q</mml:mtext></mml:mrow><mml:mrow><mml:mi>n</mml:mi><mml:mi>s</mml:mi></mml:mrow></mml:msub><mml:mo>+</mml:mo><mml:mfrac><mml:mn>1</mml:mn><mml:mi>&#x03C4;</mml:mi></mml:mfrac><mml:msubsup><mml:mrow><mml:mtext mathvariant="bold">z</mml:mtext></mml:mrow><mml:mrow><mml:mi>n</mml:mi><mml:mi>s</mml:mi></mml:mrow><mml:mrow><mml:mi mathvariant="normal">&#x22A4;</mml:mi></mml:mrow></mml:msubsup><mml:mrow><mml:mtext mathvariant="bold">C</mml:mtext></mml:mrow><mml:msub><mml:mrow><mml:mtext mathvariant="bold">q</mml:mtext></mml:mrow><mml:mrow><mml:mi>n</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>&#x2212;</mml:mo><mml:mi>log</mml:mi><mml:mo>&#x2061;</mml:mo><mml:munderover><mml:mo 
movablelimits="false">&#x2211;</mml:mo><mml:mrow><mml:mi>k</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mi>K</mml:mi></mml:munderover><mml:mi>exp</mml:mi><mml:mo>&#x2061;</mml:mo><mml:mrow><mml:mo>(</mml:mo><mml:mfrac><mml:mrow><mml:msubsup><mml:mrow><mml:mtext mathvariant="bold">z</mml:mtext></mml:mrow><mml:mrow><mml:mi>n</mml:mi><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mi mathvariant="normal">&#x22A4;</mml:mi></mml:mrow></mml:msubsup><mml:msub><mml:mrow><mml:mtext mathvariant="bold">c</mml:mtext></mml:mrow><mml:mi>k</mml:mi></mml:msub></mml:mrow><mml:mi>&#x03C4;</mml:mi></mml:mfrac><mml:mo>)</mml:mo></mml:mrow><mml:mo>&#x2212;</mml:mo><mml:mi>log</mml:mi><mml:mo>&#x2061;</mml:mo><mml:munderover><mml:mo movablelimits="false">&#x2211;</mml:mo><mml:mrow><mml:mi>k</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mi>K</mml:mi></mml:munderover><mml:mi>exp</mml:mi><mml:mo>&#x2061;</mml:mo><mml:mrow><mml:mo>(</mml:mo><mml:mfrac><mml:mrow><mml:msubsup><mml:mrow><mml:mtext mathvariant="bold">z</mml:mtext></mml:mrow><mml:mrow><mml:mi>n</mml:mi><mml:mi>s</mml:mi></mml:mrow><mml:mrow><mml:mi mathvariant="normal">&#x22A4;</mml:mi></mml:mrow></mml:msubsup><mml:msub><mml:mrow><mml:mtext mathvariant="bold">c</mml:mtext></mml:mrow><mml:mi>k</mml:mi></mml:msub></mml:mrow><mml:mi>&#x03C4;</mml:mi></mml:mfrac><mml:mo>)</mml:mo></mml:mrow><mml:mo>]</mml:mo></mml:mrow><mml:mo>.</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula></p>
<p>This loss function is minimized jointly with respect to the prototypes <inline-formula id="ieqn-38"><mml:math id="mml-ieqn-38"><mml:mrow><mml:mtext mathvariant="bold">C</mml:mtext></mml:mrow></mml:math></inline-formula> and the parameters of the image Encoder <inline-formula id="ieqn-39"><mml:math id="mml-ieqn-39"><mml:msub><mml:mi>f</mml:mi><mml:mi>&#x03B8;</mml:mi></mml:msub></mml:math></inline-formula>, which is used to produce the features <inline-formula id="ieqn-40"><mml:math id="mml-ieqn-40"><mml:mo stretchy="false">(</mml:mo><mml:msub><mml:mrow><mml:mtext mathvariant="bold">z</mml:mtext></mml:mrow><mml:mrow><mml:mi>n</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:msub><mml:mo stretchy="false">)</mml:mo><mml:mrow><mml:mi>n</mml:mi><mml:mo>,</mml:mo><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula>.</p>
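<p>The swapped prediction loss of Eqs. (6) and (7) can be sketched in NumPy as follows. The dimensions, the temperature value, and the way the stand-in codes are produced here are illustrative assumptions; in SwAV the codes come from the online procedure of the next subsection:</p>

```python
import numpy as np

def softmax_probs(z, C, tau=0.1):
    # p^(k) = exp(z^T c_k / tau) / sum_k' exp(z^T c_k' / tau)   (Eq. 7)
    logits = (C.T @ z) / tau
    logits -= logits.max()                 # numerical stability
    e = np.exp(logits)
    return e / e.sum()

def cross_entropy(q, p):
    # l(z, q) = -sum_k q^(k) log p^(k)
    return -np.sum(q * np.log(p + 1e-12))

def swapped_loss(z_t, z_s, q_t, q_s, C, tau=0.1):
    # L(z_t, z_s) = l(z_t, q_s) + l(z_s, q_t)                   (Eq. 6)
    return (cross_entropy(q_s, softmax_probs(z_t, C, tau))
            + cross_entropy(q_t, softmax_probs(z_s, C, tau)))

rng = np.random.default_rng(1)
D, K = 16, 8
C = rng.standard_normal((D, K))
C /= np.linalg.norm(C, axis=0)             # unit-norm prototypes
z_t = rng.standard_normal(D); z_t /= np.linalg.norm(z_t)
z_s = rng.standard_normal(D); z_s /= np.linalg.norm(z_s)
# Stand-in codes for this sketch only; SwAV obtains them via Sinkhorn.
q_t, q_s = softmax_probs(z_t, C), softmax_probs(z_s, C)
loss = swapped_loss(z_t, z_s, q_t, q_s, C)
```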
</sec>
</sec>
<sec id="s3_2">
<label>3.2</label>
<title>Computing Codes Online</title>
<p>To enable the method to function online as well as offline, the codes (<inline-formula id="ieqn-41"><mml:math id="mml-ieqn-41"><mml:msub><mml:mrow><mml:mtext mathvariant="bold">q</mml:mtext></mml:mrow><mml:mi>t</mml:mi></mml:msub></mml:math></inline-formula>, <inline-formula id="ieqn-42"><mml:math id="mml-ieqn-42"><mml:msub><mml:mrow><mml:mtext mathvariant="bold">q</mml:mtext></mml:mrow><mml:mi>s</mml:mi></mml:msub></mml:math></inline-formula>) are computed using only the image features within the current batch, while the prototypes <inline-formula id="ieqn-43"><mml:math id="mml-ieqn-43"><mml:mrow><mml:mtext mathvariant="bold">C</mml:mtext></mml:mrow></mml:math></inline-formula> are shared across batches. SwAV groups multiple augmented views of an image by assigning their features to the prototype vectors. Codes are calculated from the <inline-formula id="ieqn-44"><mml:math id="mml-ieqn-44"><mml:mrow><mml:mtext mathvariant="bold">C</mml:mtext></mml:mrow></mml:math></inline-formula> prototypes in such a way that all samples in a batch are distributed evenly among the prototypes. This equipartition constraint keeps the codes of different images in the batch distinct, avoiding the trivial solution in which every image receives the same code. 
Given a batch of <inline-formula id="ieqn-45"><mml:math id="mml-ieqn-45"><mml:mi>B</mml:mi></mml:math></inline-formula> feature vectors <inline-formula id="ieqn-46"><mml:math id="mml-ieqn-46"><mml:mrow><mml:mtext mathvariant="bold">Z</mml:mtext></mml:mrow><mml:mo>=</mml:mo><mml:msub><mml:mrow><mml:mtext mathvariant="bold">z</mml:mtext></mml:mrow><mml:mn>1</mml:mn></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mtext mathvariant="bold">z</mml:mtext></mml:mrow><mml:mn>2</mml:mn></mml:msub><mml:mo>,</mml:mo><mml:mo>&#x2026;</mml:mo><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mtext mathvariant="bold">z</mml:mtext></mml:mrow><mml:mi>B</mml:mi></mml:msub></mml:math></inline-formula>, this study aims to map them to the prototypes <inline-formula id="ieqn-47"><mml:math id="mml-ieqn-47"><mml:mrow><mml:mtext mathvariant="bold">C</mml:mtext></mml:mrow><mml:mo>=</mml:mo><mml:msub><mml:mrow><mml:mtext mathvariant="bold">c</mml:mtext></mml:mrow><mml:mn>1</mml:mn></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mtext mathvariant="bold">c</mml:mtext></mml:mrow><mml:mn>2</mml:mn></mml:msub><mml:mo>,</mml:mo><mml:mo>&#x2026;</mml:mo><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mtext mathvariant="bold">c</mml:mtext></mml:mrow><mml:mi>K</mml:mi></mml:msub></mml:math></inline-formula>. 
The mappings (codes) are represented as <inline-formula id="ieqn-48"><mml:math id="mml-ieqn-48"><mml:mrow><mml:mtext mathvariant="bold">Q</mml:mtext></mml:mrow><mml:mo>=</mml:mo><mml:msub><mml:mrow><mml:mtext mathvariant="bold">q</mml:mtext></mml:mrow><mml:mn>1</mml:mn></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mtext mathvariant="bold">q</mml:mtext></mml:mrow><mml:mn>2</mml:mn></mml:msub><mml:mo>,</mml:mo><mml:mo>&#x2026;</mml:mo><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mtext mathvariant="bold">q</mml:mtext></mml:mrow><mml:mi>B</mml:mi></mml:msub></mml:math></inline-formula>, and <inline-formula id="ieqn-49"><mml:math id="mml-ieqn-49"><mml:mrow><mml:mtext mathvariant="bold">Q</mml:mtext></mml:mrow></mml:math></inline-formula> is optimized to maximize the similarity between the features and the prototypes:</p>
<p><disp-formula id="eqn-9"><label>(9)</label><mml:math id="mml-eqn-9" display="block"><mml:mtable columnalign="right left right left right left right left right left right left" rowspacing="3pt" columnspacing="0em 2em 0em 2em 0em 2em 0em 2em 0em 2em 0em" displaystyle="true"><mml:mtr><mml:mtd><mml:munder><mml:mo movablelimits="true" form="prefix">max</mml:mo><mml:mrow><mml:mrow><mml:mtext mathvariant="bold">Q</mml:mtext></mml:mrow><mml:mo>&#x2208;</mml:mo><mml:mrow><mml:mi>&#x1D4AC;</mml:mi></mml:mrow></mml:mrow></mml:munder><mml:mi>Tr</mml:mi><mml:mo>&#x2061;</mml:mo><mml:mrow><mml:mo>(</mml:mo><mml:msup><mml:mrow><mml:mtext mathvariant="bold">Q</mml:mtext></mml:mrow><mml:mrow><mml:mi mathvariant="normal">&#x22A4;</mml:mi></mml:mrow></mml:msup><mml:msup><mml:mrow><mml:mtext mathvariant="bold">C</mml:mtext></mml:mrow><mml:mrow><mml:mi mathvariant="normal">&#x22A4;</mml:mi></mml:mrow></mml:msup><mml:mrow><mml:mtext mathvariant="bold">Z</mml:mtext></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mo>+</mml:mo><mml:mi>&#x03B5;</mml:mi><mml:mi>H</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mtext mathvariant="bold">Q</mml:mtext></mml:mrow><mml:mo stretchy="false">)</mml:mo><mml:mo>,</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>where <italic>H</italic> is the entropy function, <inline-formula id="ieqn-50"><mml:math id="mml-ieqn-50"><mml:mi>H</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mtext mathvariant="bold">Q</mml:mtext></mml:mrow><mml:mo stretchy="false">)</mml:mo><mml:mo>=</mml:mo><mml:mo>&#x2212;</mml:mo><mml:msub><mml:mo movablelimits="false">&#x2211;</mml:mo><mml:mrow><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:msub><mml:mrow><mml:mtext mathvariant="bold">Q</mml:mtext></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mi>log</mml:mi><mml:mo>&#x2061;</mml:mo><mml:msub><mml:mrow><mml:mtext 
mathvariant="bold">Q</mml:mtext></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula>, and <inline-formula id="ieqn-51"><mml:math id="mml-ieqn-51"><mml:mi>&#x03B5;</mml:mi></mml:math></inline-formula> is the parameter that controls the smoothness of the mapping. Strong entropy regularization (i.e., a high <inline-formula id="ieqn-52"><mml:math id="mml-ieqn-52"><mml:mi>&#x03B5;</mml:mi></mml:math></inline-formula>) often leads to a trivial solution in which all samples collapse into a single representation and are assigned uniformly to all prototypes. Therefore, <inline-formula id="ieqn-53"><mml:math id="mml-ieqn-53"><mml:mi>&#x03B5;</mml:mi></mml:math></inline-formula> should be fixed at a small value. To ensure an equal partition, Asano et al. [<xref ref-type="bibr" rid="ref-46">46</xref>] constrained the matrix <bold>Q</bold> to belong to the transportation polytope. We adapt the solution proposed in [<xref ref-type="bibr" rid="ref-44">44</xref>], restricting the transportation polytope to the current minibatch when working with mini-batches:</p>
<p><disp-formula id="eqn-10"><label>(10)</label><mml:math id="mml-eqn-10" display="block"><mml:mtable columnalign="right left right left right left right left right left right left" rowspacing="3pt" columnspacing="0em 2em 0em 2em 0em 2em 0em 2em 0em 2em 0em" displaystyle="true"><mml:mtr><mml:mtd><mml:mrow><mml:mi>&#x1D4AC;</mml:mi></mml:mrow><mml:mo>=</mml:mo><mml:mrow><mml:mo>{</mml:mo><mml:mrow><mml:mtext mathvariant="bold">Q</mml:mtext></mml:mrow><mml:mo>&#x2208;</mml:mo><mml:msubsup><mml:mrow><mml:mi mathvariant="double-struck">R</mml:mi></mml:mrow><mml:mrow><mml:mo>+</mml:mo></mml:mrow><mml:mrow><mml:mi>K</mml:mi><mml:mo>&#x00D7;</mml:mo><mml:mi>B</mml:mi></mml:mrow></mml:msubsup><mml:mo>&#x2223;</mml:mo><mml:mrow><mml:mtext mathvariant="bold">Q</mml:mtext></mml:mrow><mml:msub><mml:mrow><mml:mtext mathvariant="bold">1</mml:mtext></mml:mrow><mml:mi>B</mml:mi></mml:msub><mml:mo>=</mml:mo><mml:mfrac><mml:mn>1</mml:mn><mml:mi>K</mml:mi></mml:mfrac><mml:msub><mml:mrow><mml:mtext mathvariant="bold">1</mml:mtext></mml:mrow><mml:mi>K</mml:mi></mml:msub><mml:mo>,</mml:mo><mml:msup><mml:mrow><mml:mtext mathvariant="bold">Q</mml:mtext></mml:mrow><mml:mrow><mml:mi mathvariant="normal">&#x22A4;</mml:mi></mml:mrow></mml:msup><mml:msub><mml:mrow><mml:mtext mathvariant="bold">1</mml:mtext></mml:mrow><mml:mi>K</mml:mi></mml:msub><mml:mo>=</mml:mo><mml:mfrac><mml:mn>1</mml:mn><mml:mi>B</mml:mi></mml:mfrac><mml:msub><mml:mrow><mml:mtext mathvariant="bold">1</mml:mtext></mml:mrow><mml:mi>B</mml:mi></mml:msub><mml:mo>}</mml:mo></mml:mrow><mml:mo>,</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>where <inline-formula id="ieqn-54"><mml:math id="mml-ieqn-54"><mml:msub><mml:mrow><mml:mtext mathvariant="bold">1</mml:mtext></mml:mrow><mml:mi>K</mml:mi></mml:msub></mml:math></inline-formula> is a vector of ones with <italic>K</italic> dimensions. 
These constraints ensure that, on average, each prototype is selected at least <italic>B</italic>/<italic>K</italic> times in a batch. Once a continuous solution <inline-formula id="ieqn-55"><mml:math id="mml-ieqn-55"><mml:msup><mml:mrow><mml:mtext mathvariant="bold">Q</mml:mtext></mml:mrow><mml:mo>&#x2217;</mml:mo></mml:msup></mml:math></inline-formula> over the set defined in <xref ref-type="disp-formula" rid="eqn-10">Eq. (10)</xref> is found, a discrete code can be obtained by rounding. In the online setting, however, where only mini-batches are available, discrete codes performed worse than continuous codes, so the soft codes are used directly. These soft codes <inline-formula id="ieqn-56"><mml:math id="mml-ieqn-56"><mml:msup><mml:mrow><mml:mtext mathvariant="bold">Q</mml:mtext></mml:mrow><mml:mo>&#x2217;</mml:mo></mml:msup></mml:math></inline-formula> are the solution of <xref ref-type="disp-formula" rid="eqn-9">Eq. (9)</xref> over the set <inline-formula id="ieqn-57"><mml:math id="mml-ieqn-57"><mml:mrow><mml:mi>&#x1D4AC;</mml:mi></mml:mrow></mml:math></inline-formula> and can be written as:</p>
<p><disp-formula id="eqn-11"><label>(11)</label><mml:math id="mml-eqn-11" display="block"><mml:mtable columnalign="right left right left right left right left right left right left" rowspacing="3pt" columnspacing="0em 2em 0em 2em 0em 2em 0em 2em 0em 2em 0em" displaystyle="true"><mml:mtr><mml:mtd><mml:msup><mml:mrow><mml:mtext mathvariant="bold">Q</mml:mtext></mml:mrow><mml:mo>&#x2217;</mml:mo></mml:msup><mml:mo>=</mml:mo><mml:mi>Diag</mml:mi><mml:mo>&#x2061;</mml:mo><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mtext mathvariant="bold">u</mml:mtext></mml:mrow><mml:mo stretchy="false">)</mml:mo><mml:mi>exp</mml:mi><mml:mo>&#x2061;</mml:mo><mml:mrow><mml:mo>(</mml:mo><mml:mfrac><mml:mrow><mml:msup><mml:mrow><mml:mtext mathvariant="bold">C</mml:mtext></mml:mrow><mml:mrow><mml:mi mathvariant="normal">&#x22A4;</mml:mi></mml:mrow></mml:msup><mml:mrow><mml:mtext mathvariant="bold">Z</mml:mtext></mml:mrow></mml:mrow><mml:mi>&#x03B5;</mml:mi></mml:mfrac><mml:mo>)</mml:mo></mml:mrow><mml:mi>Diag</mml:mi><mml:mo>&#x2061;</mml:mo><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mtext mathvariant="bold">v</mml:mtext></mml:mrow><mml:mo stretchy="false">)</mml:mo><mml:mo>,</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>where <inline-formula id="ieqn-58"><mml:math id="mml-ieqn-58"><mml:mrow><mml:mtext mathvariant="bold">u</mml:mtext></mml:mrow></mml:math></inline-formula> and <inline-formula id="ieqn-59"><mml:math id="mml-ieqn-59"><mml:mrow><mml:mtext mathvariant="bold">v</mml:mtext></mml:mrow></mml:math></inline-formula> are the renormalization vectors in <inline-formula id="ieqn-60"><mml:math id="mml-ieqn-60"><mml:msup><mml:mrow><mml:mi mathvariant="double-struck">R</mml:mi></mml:mrow><mml:mi>K</mml:mi></mml:msup></mml:math></inline-formula> and <inline-formula id="ieqn-61"><mml:math id="mml-ieqn-61"><mml:msup><mml:mrow><mml:mi mathvariant="double-struck">R</mml:mi></mml:mrow><mml:mi>B</mml:mi></mml:msup></mml:math></inline-formula>, respectively. 
The renormalization vectors are computed with a small number of matrix multiplications using the iterative Sinkhorn-Knopp algorithm [<xref ref-type="bibr" rid="ref-47">47</xref>].</p>
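<p>The online code computation of Eqs. (9)&#x2013;(11) can be sketched as follows. This is a minimal NumPy illustration; the number of Sinkhorn iterations, the value of &#x03B5;, and the unit-norm random inputs are illustrative assumptions, and the alternating row/column rescalings play the role of the vectors u and v:</p>

```python
import numpy as np

def sinkhorn_codes(C, Z, eps=0.05, n_iters=3):
    # Soft codes Q* = Diag(u) exp(C^T Z / eps) Diag(v)          (Eq. 11)
    # Sinkhorn-Knopp iterations drive Q into the transportation
    # polytope of Eq. (10): rows sum to 1/K, columns to 1/B.
    K, B = C.shape[1], Z.shape[1]
    M = (C.T @ Z) / eps
    Q = np.exp(M - M.max())                # subtract max for stability
    Q /= Q.sum()
    for _ in range(n_iters):
        Q /= Q.sum(axis=1, keepdims=True)  # each row sums to 1 ...
        Q /= K                             # ... then to 1/K
        Q /= Q.sum(axis=0, keepdims=True)  # each column sums to 1 ...
        Q /= B                             # ... then to 1/B
    return Q

rng = np.random.default_rng(0)
Z = rng.standard_normal((16, 32)); Z /= np.linalg.norm(Z, axis=0)  # B = 32 features
P = rng.standard_normal((16, 8)); P /= np.linalg.norm(P, axis=0)   # K = 8 prototypes
Q = sinkhorn_codes(P, Z)
```

After the final column normalization, each column of `Q` sums to exactly 1/<italic>B</italic>, so each sample's soft code is a valid (scaled) assignment over the prototypes.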
</sec>
<sec id="s3_3">
<label>3.3</label>
<title>Post-Processing: Filling Extraction</title>
<p>To support the wound-healing process and effectively conceal facial imperfections, we employ the filling-extraction technique initially developed and discussed in one of our prior publications [<xref ref-type="bibr" rid="ref-29">29</xref>]. This approach is specifically tailored to analyze the wound-affected area of the face, isolate it, and extract it separately for subsequent use in 3D printing applications.</p>
<p>By employing this technique, our final printable product is an amalgamation of the original, imperfect face and the separately extracted wound portion. Notably, the extracted part is displayed in a distinct color to differentiate it from the rest of the facial structure, as demonstrated in <xref ref-type="fig" rid="fig-7">Fig. 7</xref>. The combination of these elements allows for a more comprehensive understanding of the wound area and its impact on the overall facial reconstruction process.</p>
<fig id="fig-7">
<label>Figure 7</label>
<caption>
<title>A model for supervised learning</title>
</caption>
<graphic mimetype="image" mime-subtype="tif" xlink:href="CMES_56753-fig-7.tif"/>
</fig>
</sec>
</sec>
<sec id="s4">
<label>4</label>
<title>Experiments</title>
<sec id="s4_1">
<label>4.1</label>
<title>Building Dataset</title>
<p>To effectively implement the transfer learning and self-supervised learning processes within the MICA model, we have undertaken a comprehensive data preprocessing approach, which consists of two primary steps. Each step has been carefully designed to ensure the optimization of our model&#x2019;s performance and applicability in the context of reconstructing incomplete facial structures.</p>
<p>The first step focuses on preparing the input data in a 2D format. This decision is grounded in two main considerations. Firstly, utilizing a 2D input format is consistent with the MICA model&#x2019;s architectural requirements, thus allowing for seamless integration of the data. Secondly, the adoption of 2D input data presents an alternative method for facial reconstruction, which significantly simplifies the process for medical professionals. As a result, physicians are no longer required to provide 3D MRI (Magnetic Resonance Imaging) scans; they can rely on more readily available 2D images. This adaptation seeks to create a more conducive environment that enables healthcare practitioners to efficiently engage with our model, ultimately delivering enhanced value and outcomes for patients. Additionally, patients can preview their facial appearance after treatment, providing them with a clear understanding of the expected results.</p>
<p>The second step of the data pre-processing procedure involves employing augmentation techniques within the mathematical space to standardize the model&#x2019;s output. This standardization serves as the Ground Truth for the MICA model to calculate the loss value, ensuring that the reconstructed facial structures are accurate and reliable. By integrating these augmentation techniques, we aim to improve the model&#x2019;s overall performance and facilitate a more precise reconstruction of facial defects. The specific details of the input and output parameters are described in greater detail below:
<list list-type="bullet">
<list-item>
<p>The 2D input format is obtained by capturing images from all 3D data with varying lighting directions. The 2D images have a resolution of <inline-formula id="ieqn-62"><mml:math id="mml-ieqn-62"><mml:mn>1024</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>1024</mml:mn></mml:math></inline-formula> pixels and are captured from 3687 3D samples within our dataset, ensuring that the angle of capture reveals any scarring present.</p></list-item>
<list-item>
<p>The MICA model&#x2019;s output, in 3D format, is modified to match the Ground Truth. By default, MICA generates eyeballs for the output mesh, but the Ground Truth meshes of the Cir3D-FaIR dataset have no eyeballs, so eyeball-removal procedures were applied to the entire set of MICA outputs. In addition, 3D printing requires a watertight mesh for a reliable processing step, but the MICA output mesh contains holes that make it non-watertight, so hole-fixing procedures are also needed. The eyeball-removal and hole-fixing techniques, introduced in our previous research [<xref ref-type="bibr" rid="ref-29">29</xref>], have been encapsulated in the Cirmesh Python library developed by our group and can be installed with the command: <italic>pip install Cirmesh</italic>. After these steps, the MICA model&#x2019;s output and the ground-truth mesh can be compared directly.</p></list-item>
</list></p>
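<p>As a minimal illustration of the watertightness issue above (independent of the Cirmesh internals, whose API is not detailed here), a hole in a triangle mesh can be detected by counting boundary edges: in a closed mesh every edge is shared by exactly two faces, so any edge counted only once lies on the rim of a hole. A sketch in Python:</p>

```python
from collections import Counter

def boundary_edges(faces):
    """Return edges that belong to exactly one triangle.

    In a watertight (closed) triangle mesh every edge is shared by
    exactly two faces, so any edge counted once marks the rim of a
    hole that must be filled before 3D printing.
    """
    counts = Counter()
    for a, b, c in faces:
        for u, v in ((a, b), (b, c), (c, a)):
            counts[tuple(sorted((u, v)))] += 1
    return [edge for edge, n in counts.items() if n == 1]

def is_watertight(faces):
    return not boundary_edges(faces)

# A closed tetrahedron is watertight; a lone triangle is all boundary.
tetrahedron = [(0, 1, 2), (0, 3, 1), (1, 3, 2), (2, 3, 0)]
single_triangle = [(0, 1, 2)]
```

<p>Hole fixing itself (triangulating each boundary loop) is handled by the library; this check only identifies whether it is needed.</p>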
<p>To prepare for the training process, the data is divided in a 6:2:2 ratio into training, validation, and test sets, respectively. During the data splitting procedure, the study ensures that all images of a given individual are allocated to a single subset, preventing facial information from leaking among the three sets.</p>
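<p>The leakage-free 6:2:2 split can be sketched as follows; the <italic>(person_id, image)</italic> record format is assumed for illustration, and the actual dataset bookkeeping may differ:</p>

```python
import random

def split_by_identity(samples, seed=0):
    """Split (person_id, image) records 6:2:2 into train/val/test so
    that every image of a given person lands in exactly one subset,
    preventing identity leakage between the sets.
    """
    ids = sorted({pid for pid, _ in samples})
    random.Random(seed).shuffle(ids)
    n_train = int(0.6 * len(ids))
    n_val = int(0.2 * len(ids))
    subsets = {
        "train": set(ids[:n_train]),
        "val": set(ids[n_train:n_train + n_val]),
        "test": set(ids[n_train + n_val:]),
    }
    return {name: [s for s in samples if s[0] in members]
            for name, members in subsets.items()}
```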
</sec>
<sec id="s4_2">
<label>4.2</label>
<title>Determining the Set of Parameters</title>
<p>Runtime: The experiments were conducted on Google Colab Pro. The first epoch takes about 30 min for the training and validation process, while subsequent epochs take only about 90 s each because the pre-processed data is already stored.</p>
<p>Hyperparameters: In all experiments, this research used the same fixed set of hyperparameters. The optimizer is Adam [<xref ref-type="bibr" rid="ref-47">47</xref>] with an initial learning rate of <inline-formula id="ieqn-63"><mml:math id="mml-ieqn-63"><mml:mn>1</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:msup><mml:mn>10</mml:mn><mml:mrow><mml:mo>&#x2212;</mml:mo><mml:mn>3</mml:mn></mml:mrow></mml:msup></mml:math></inline-formula>, which is kept constant throughout training. A batch size of 50 images is used for each epoch, and the experimental process is run for 50 epochs in each case, as shown in <xref ref-type="fig" rid="fig-8">Fig. 8</xref>.</p>
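<p>For reference, the Adam update rule with these settings can be sketched in NumPy on a toy one-dimensional objective; this is an illustration of the optimizer with the stated learning rate and the standard momentum defaults from [47], not the actual training code:</p>

```python
import numpy as np

def adam_minimize(grad_fn, x0, lr=1e-3, beta1=0.9, beta2=0.999,
                  eps=1e-8, steps=10_000):
    """Minimize a scalar function via the Adam update rule.

    lr matches the paper's initial learning rate (1e-3);
    beta1/beta2/eps are Adam's standard defaults.
    """
    x = float(x0)
    m = v = 0.0
    for t in range(1, steps + 1):
        g = grad_fn(x)
        m = beta1 * m + (1 - beta1) * g       # first-moment estimate
        v = beta2 * v + (1 - beta2) * g * g   # second-moment estimate
        m_hat = m / (1 - beta1 ** t)          # bias correction
        v_hat = v / (1 - beta2 ** t)
        x -= lr * m_hat / (np.sqrt(v_hat) + eps)
    return x

# Toy objective f(x) = (x - 3)^2 with gradient 2(x - 3).
x_min = adam_minimize(lambda x: 2 * (x - 3.0), x0=0.0)
```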
<fig id="fig-8">
<label>Figure 8</label>
<caption>
<title>Loss function value corresponding to each epoch</title>
</caption>
<graphic mimetype="image" mime-subtype="tif" xlink:href="CMES_56753-fig-8.tif"/>
</fig>
<p><xref ref-type="fig" rid="fig-8">Fig. 8</xref> shows the change in the loss function value over the epochs in two cases: 20% of the data and 100% of the data. Both cases share a similar pattern of loss change for the first 30 epochs, after which the case with 100% of the data converges faster over the remaining 20 epochs. This suggests that the present model is highly stable.</p>
</sec>
<sec id="s4_3">
<label>4.3</label>
<title>Experiment Settings</title>
<p>We used a self-supervised algorithm for model training and conducted a total of 5 experiments. In the first experiment, the study used the pre-trained model of MICA [<xref ref-type="bibr" rid="ref-30">30</xref>] with its enriched data. In the remaining cases, we utilized the self-supervised learning technique described in <xref ref-type="sec" rid="s3">Section 3</xref> to train the Encoder component before proceeding to the supervised learning process. We ran 50 epochs of self-supervised learning in two main cases: as discussed earlier, the study first used a sample of 20% of the training data and then the full training data to evaluate the loss of the proposed models. <xref ref-type="fig" rid="fig-9">Fig. 9</xref> shows the self-supervised training loss when sampling 20% and 100% of the data. Approaching the 50th epoch, the loss of the run with 20% of the data is lower than that of the run with 100% of the data. This can be explained by the fact that, when dealing with larger amounts of data, self-supervised learning may have some difficulty adapting to the new data. The strategy of training the MICA model combined with self-supervised learning is described in Algorithm 1.</p>
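<p>In miniature, the two-phase strategy of Algorithm 1 can be sketched as follows. This is a heavily simplified toy with a linear encoder and NumPy gradient descent, standing in for the MICA encoder and the real objectives: phase 1 trains the encoder without labels on a denoising (self-supervised) task, and phase 2 trains a supervised head on top of it:</p>

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: inputs X, supervised targets Y = X @ W_true.T.
d = 8
W_true = rng.normal(size=(d, d))
X = rng.normal(size=(200, d))
Y = X @ W_true.T

lr = 1e-2

# Phase 1: self-supervised pretraining -- the encoder learns to
# reconstruct the clean input from a noise-corrupted copy; no labels.
W_enc = 0.1 * rng.normal(size=(d, d))
for _ in range(400):
    noisy = X + 0.1 * rng.normal(size=X.shape)
    recon = noisy @ W_enc.T
    grad = 2 * (recon - X).T @ noisy / len(X)
    W_enc -= lr * grad

# Phase 2: supervised training -- a head on top of the pretrained
# encoder is fitted against the ground-truth targets.
W_head = 0.1 * rng.normal(size=(d, d))
for _ in range(400):
    z = X @ W_enc.T                # features from the pretrained encoder
    pred = z @ W_head.T
    grad = 2 * (pred - Y).T @ z / len(X)
    W_head -= lr * grad

mse = float(((X @ W_enc.T @ W_head.T - Y) ** 2).mean())
```

<p>The point of the sketch is only the ordering: the encoder is shaped by unlabeled data first, and the supervised stage then needs comparatively little labeled data, mirroring the 20% result above.</p>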
<fig id="fig-9">
<label>Figure 9</label>
<caption>
<title>Loss of training set</title>
</caption>
<graphic mimetype="image" mime-subtype="tif" xlink:href="CMES_56753-fig-9.tif"/>
</fig>
<fig id="fig-13">
<graphic mimetype="image" mime-subtype="tif" xlink:href="CMES_56753-fig-13.tif"/>
</fig>
<p>During the training process, <xref ref-type="fig" rid="fig-10">Fig. 10</xref> shows that the loss curves of the training set in all three cases are stable, with all values dropping below 0.5 by the 10th epoch. After that, the loss decreases slightly and approaches zero at the 50th epoch. In all cases, the loss curves are not significantly different from each other. For the validation set, the loss curves are volatile around the 10th epoch and then become stable again as they move toward the 50th epoch.</p>
<fig id="fig-10">
<label>Figure 10</label>
<caption>
<title>Loss of validation set</title>
</caption>
<graphic mimetype="image" mime-subtype="tif" xlink:href="CMES_56753-fig-10.tif"/>
</fig>
<p>In our recent research [<xref ref-type="bibr" rid="ref-29">29</xref>], the outcomes of the developed model were assessed using the geometric loss function. Therefore, in this study, we explored the performance of a self-supervised learning model on a unique dataset, comparing it to the well-known transfer learning approach, specifically the MICA model. Using the geometric loss function for assessment, the results demonstrate the effectiveness of our approach in enhancing model performance. As shown in <xref ref-type="table" rid="table-1">Table 1</xref>, the transfer learning method yielded a loss of 0.36, indicating a reasonable level of accuracy in reconstructing facial injuries. However, when applying our self-supervised learning model to just 20% of the training data, we observed a reduction in loss to 0.34, an improvement over the transfer learning method. Upon expanding to 100% of the available data, our self-supervised learning model improved further, reducing the loss marginally to 0.32. This consistent improvement, even with increased data, showcases the potential of our model in handling larger datasets efficiently.</p>
<table-wrap id="table-1">
<label>Table 1</label>
<caption>
<title>Geometric distance function values of the outcomes</title>
</caption>
<table frame="hsides">
<colgroup>
<col align="left"/>
<col align="left"/>
</colgroup>
<thead>
<tr>
<th>Method</th>
<th>Mean distance</th>
</tr>
</thead>
<tbody>
<tr>
<td>Transfer learning</td>
<td>0.3642</td>
</tr>
<tr>
<td>Self-supervised learning with 20% data</td>
<td>0.3353</td>
</tr>
<tr>
<td>Self-supervised learning with 100% data</td>
<td>0.3247</td>
</tr>
</tbody>
</table>
</table-wrap>
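<p>The geometric distance reported in Table 1 can be illustrated as follows. Assuming the predicted and ground-truth meshes share the same vertex topology after the pre-processing of Section 4.1 (so vertices correspond one-to-one), a minimal sketch of the mean distance computation is:</p>

```python
import numpy as np

def mean_vertex_distance(pred_vertices, gt_vertices):
    """Mean Euclidean distance between corresponding vertices of two
    registered meshes, expressed in the dataset's coordinate units.
    Assumes identical topology: row i of both arrays is the same
    anatomical vertex.
    """
    pred = np.asarray(pred_vertices, dtype=float)
    gt = np.asarray(gt_vertices, dtype=float)
    if pred.shape != gt.shape:
        raise ValueError("meshes must share the same topology")
    return float(np.linalg.norm(pred - gt, axis=1).mean())

# Example: shifting every vertex by (0.3, 0.4, 0) yields a mean
# distance of 0.5 coordinate units.
gt = np.zeros((4, 3))
pred = gt + np.array([0.3, 0.4, 0.0])
```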
<p>These findings suggest that employing a limited amount of data in the pre-trained model using the self-supervised learning technique is a feasible approach that offers superior efficiency compared to the transfer learning method. Notably, our datasets comprise only 3687 images, accounting for less than one-tenth of the data in the purely pre-trained MICA model. As a result, our approach effectively leverages the capabilities of the trained MICA model while remaining applicable to the current research context.</p>
<p><xref ref-type="fig" rid="fig-11">Fig. 11</xref> shows examples illustrating the model&#x2019;s ability to reconstruct the structure of human facial wounds and the deviation between the output and reality, with scar types divided into four main groups: top of the head, forehead, chin, and nose. The Deviation column describes the difference between the Output column and the Ground Truth column, measured in coordinate units; brighter regions signify larger errors and can be identified as wounds. As shown in <xref ref-type="fig" rid="fig-11">Fig. 11</xref>, wounds at the forehead position are the most elusive and produce the highest errors, while wounds around the nose area give extremely good results. For a more detailed analysis of the results, we computed statistics over a total of 291 output object files: 53 for the top of the head, 71 for the forehead, 65 for the chin, and 102 for the nose, as shown in <xref ref-type="table" rid="table-2">Table 2</xref>. Specifically, the deviations in coordinate units are converted to percentages and averaged over the number of files at each location. The smallest mean error is in the nose region at 1.2% and the largest is in the forehead region at 7.8%; the other two regions are broadly similar. This result satisfied the requirements of the hospital cooperating with us in the field of plastic surgery: it supports the doctor in most stages of facial regeneration, and the doctor&#x2019;s intervention is needed only to bring completeness.</p>
<fig id="fig-11">
<label>Figure 11</label>
<caption>
<title>Deviation between obtained results and referred ones</title>
</caption>
<graphic mimetype="image" mime-subtype="tif" xlink:href="CMES_56753-fig-11.tif"/>
</fig><table-wrap id="table-2">
<label>Table 2</label>
<caption>
<title>Geometric distance error for each scar location</title>
</caption>
<table frame="hsides">
<colgroup>
<col align="left"/>
<col align="left"/>
<col align="left"/>
</colgroup>
<thead>
<tr>
<th>Scar location</th>
<th>Quantity</th>
<th>Mean distance error %</th>
</tr>
</thead>
<tbody>
<tr>
<td>The top of the head</td>
<td>53</td>
<td>6.5</td>
</tr>
<tr>
<td>Forehead</td>
<td>71</td>
<td>7.8</td>
</tr>
<tr>
<td>Chin</td>
<td>65</td>
<td>4.7</td>
</tr>
<tr>
<td>Nose</td>
<td>102</td>
<td>1.2</td>
</tr>
</tbody>
</table>
</table-wrap>
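<p>The per-location averages in Table 2 follow from straightforward bookkeeping; a sketch, assuming the per-file errors have already been converted to percentages (the record format is our own illustration, not the exact evaluation script):</p>

```python
def mean_error_by_region(records):
    """Average per-file deviation (in %) grouped by scar location.

    `records` is a list of (region, error_percent) pairs.
    """
    totals, counts = {}, {}
    for region, err in records:
        totals[region] = totals.get(region, 0.0) + err
        counts[region] = counts.get(region, 0) + 1
    return {region: totals[region] / counts[region] for region in totals}
```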
<p>Although some errors will inevitably occur while reconstructing injured areas, the proposed model successfully reconstructed the majority of the face, leaving virtually no region unaddressed. These findings suggest that the model is proficient in addressing the challenges of reconstructing defects stemming from wounds, even when relying solely on 2D input. The results of this study have been applied to the creation of experimental products using 3D printing technology, as shown in <xref ref-type="fig" rid="fig-12">Fig. 12</xref>, to be utilized in laboratory experiments. The plastic material used is PLA (Polylactic Acid), a versatile biopolymer. PLA is easily synthesized from abundant renewable resources and is biodegradable. It has demonstrated significant potential as a biomaterial in various healthcare applications, including tissue engineering and regenerative medicine [<xref ref-type="bibr" rid="ref-48">48</xref>]. However, it is important to note that PLA is not suitable for implantation into human skin. Our 3D printing results are intended to demonstrate the method&#x2019;s efficiency and to provide an example of its potential real-world applications. Due to the high costs associated with bioprinting technologies that can create implantable materials for humans, we hope to conduct experiments in the near future using materials that are compatible with human skin grafting. The 3D printer we use features an enclosed chamber to maintain a stable temperature during the printing process. The printing procedure always begins with cleaning the hot end. We print at a reduced speed of about 60 mm/s, and the PLA printing temperature is set at a relatively high 205&#x00B0;C. The flow rate is increased by 5% above normal. The 3D printer is placed on a stable surface, and all components are checked to ensure they do not cause vibrations during printing. Additionally, we use an ADXL345 3-axis accelerometer to compensate for any vibrations and reduce issues with poor interlayer adhesion and rough surfaces. We monitor print quality by visually inspecting every 3D printed sample; if significant interlayer-adhesion defects or surface roughness appear, the sample is reprinted. The results of this study reveal significant potential for practical applications, especially in creating models that mask flaws in the human face.</p>
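<p>For reproducibility, the print settings above can be collected into a simple profile. The values are those reported in the text, while the field names and validation ranges are our own illustrative bookkeeping, not a specific slicer&#x2019;s configuration format:</p>

```python
# Print profile for the PLA prototypes; values as reported in the text.
PLA_PROFILE = {
    "material": "PLA",
    "print_speed_mm_s": 60,     # reduced speed for quality
    "nozzle_temp_c": 205,       # upper end of typical PLA range
    "flow_rate_pct": 105,       # +5% over the slicer default
    "enclosed_chamber": True,   # keeps the print temperature stable
    "vibration_compensation": "ADXL345 3-axis accelerometer",
}

def validate_profile(profile):
    """Sanity-check a profile against commonly quoted PLA limits
    (assumed ranges, not manufacturer specifications)."""
    if not 180 <= profile["nozzle_temp_c"] <= 230:
        raise ValueError("nozzle temperature outside typical PLA range")
    if profile["print_speed_mm_s"] <= 0:
        raise ValueError("print speed must be positive")
    return True
```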
<fig id="fig-12">
<label>Figure 12</label>
<caption>
<title>3D printed prototypes</title>
</caption>
<graphic mimetype="image" mime-subtype="tif" xlink:href="CMES_56753-fig-12.tif"/>
</fig>
</sec>
<sec id="s4_4">
<label>4.4</label>
<title>Limitation</title>
<p>The approach in this study provides relatively high performance in the task of imperfect 3D facial reconstruction. However, the current results may not fully capture the complexities of real-world scenarios due to the significant differences in individual faces. Creating a real-world dataset for this purpose is both challenging and costly. Therefore, in this study, we have adopted an open-solution approach. Our method has been tested on synthetic data, with the expectation that it can be applied to real-world data once available, thereby enhancing the reliability and generalizability of our conclusions.</p>
</sec>
</sec>
<sec id="s5">
<label>5</label>
<title>Conclusions</title>
<p>This paper has demonstrated the advantages of integrating a pre-trained model relying on a self-supervised learning method with defective 3D face reconstruction. The proposed approach significantly benefits the underlying problem by capitalizing on the capabilities of the existing models. The noteworthy findings can be pointed out as follows:
<list list-type="bullet">
<list-item>
<p>In this research, we re-introduce the Cir3D-FaIR dataset, meticulously designed to address a distinct patient-specific issue: imperfect faces. This dataset is unique in its composition and application, marking its inaugural use in the training of AI models. Given the novelty of the Cir3D-FaIR dataset and the specificity of the problem it aims to solve, identifying an existing neural network model for direct comparison posed a significant challenge. As a consequence, conventional benchmarks for performance comparison were not readily applicable.</p></list-item>
<list-item>
<p>For this reason, we adopted a self-supervised learning approach tailored to the unique characteristics of the Cir3D-FaIR dataset. The absence of pre-existing models directly comparable to our scenario underscores the innovative aspect of our methodology. This study, therefore, not only introduces a new dataset but also pioneers a self-supervised learning approach tailored to leverage the full potential of this novel dataset.</p></list-item>
<list-item>
<p>By strategically integrating pre-trained models and self-supervised learning, it is possible to achieve an efficient system where new data can be incorporated without requiring a complete retraining process. This approach yields reliable results without demanding an extensive amount of training data. Additionally, it enhances the system&#x2019;s performance without adding complexity to the model. These results demonstrate the feasibility of reconstructing incomplete faces using only 2D images, providing surgeons and orthopedic trauma specialists with additional data and improving procedures to better support their patients.</p></list-item>
<list-item>
<p>The training time is significantly reduced, from a duration of several days to a matter of mere hours, facilitating the extraction of facial features from incomplete faces. Moreover, this reduction allows the training process to be run on platforms like Google Colab without necessitating a high-performance computer.</p></list-item>
<list-item>
<p>Although the obtained results exhibit a relatively wide range of errors, from 0.3 to 6 as shown in <xref ref-type="fig" rid="fig-11">Fig. 11</xref>, which is larger than the error margins observed in our previous research, the average error remains at a manageable level of 0.32, indicating that the overall effect is not significantly compromised. This can be easily understood, as the method attempts to extract information from a 2D format, which inherently lacks depth information compared to the 3D format.</p>
</list-item>
</list></p>
</sec>
</body>
<back>
<ack>
<p>The authors would like to thank Hoang Nguyen and Associate Professor Nguyen Thanh Binh for their recommendations and assistance in this study. We would also like to thank the Vietnam Institute for Advanced Study in Mathematics (VIASM) for its hospitality during our visit in 2023, when we started to work on this paper.</p>
</ack>
<sec><title>Funding Statement</title>
<p>The authors received no specific funding for this study.</p>
</sec>
<sec><title>Author Contributions</title>
<p>The authors confirm contribution to the paper as follows: study conception and design: Thinh D. Le, Duong Q. Nguyen, H. Nguyen-Xuan, Phuong D. Nguyen; data collection: Phuong D. Nguyen; analysis and interpretation of results: Thinh D. Le, Duong Q. Nguyen, Phuong D. Nguyen; draft manuscript preparation: Thinh D. Le, Duong Q. Nguyen, H. Nguyen-Xuan. All authors reviewed the results and approved the final version of the manuscript.</p>
</sec>
<sec sec-type="data-availability"><title>Availability of Data and Materials</title>
<p>Our source code and data can be accessed at <ext-link ext-link-type="uri" xlink:href="https://github.com/SIMOGroup/ImperfectionFacialReconstruction">https://github.com/SIMOGroup/ImperfectionFacialReconstruction</ext-link> (accessed on 01 May 2024).</p>
</sec>
<sec><title>Ethics Approval</title>
<p>Not applicable.</p>
</sec>
<sec sec-type="COI-statement"><title>Conflicts of Interest</title>
<p>The authors declare no conflicts of interest to report regarding the present study.</p>
</sec>
<ref-list content-type="authoryear">
<title>References</title>
<ref id="ref-1"><label>1.</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><surname>Meisel</surname> <given-names>EM</given-names></string-name>, <string-name><surname>Daw</surname> <given-names>PA</given-names></string-name>, <string-name><surname>Xu</surname> <given-names>X</given-names></string-name>, <string-name><surname>Patel</surname> <given-names>R</given-names></string-name></person-group>. <article-title>Digital technologies for splints manufacturing</article-title>. In: <conf-name>Proceedings of the 38th International MATADOR Conference</conf-name>, <year>2022</year>; p. <fpage>475</fpage>&#x2013;<lpage>88</lpage>.</mixed-citation></ref>
<ref id="ref-2"><label>2.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Asanovic</surname> <given-names>I</given-names></string-name>, <string-name><surname>Millward</surname> <given-names>H</given-names></string-name>, <string-name><surname>Lewis</surname> <given-names>A</given-names></string-name></person-group>. <article-title>Development of a 3D scan posture-correction procedure to facilitate the direct-digital splinting approach</article-title>. <source>Virt Phys Prototyp</source>. <year>2019</year>;<volume>14</volume>(<issue>1</issue>):<fpage>92</fpage>&#x2013;<lpage>103</lpage>. doi:<pub-id pub-id-type="doi">10.1080/17452759.2018.1500862</pub-id>.</mixed-citation></ref>
<ref id="ref-3"><label>3.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Cheng</surname> <given-names>K</given-names></string-name>, <string-name><surname>Liu</surname> <given-names>Y</given-names></string-name>, <string-name><surname>Wang</surname> <given-names>R</given-names></string-name>, <string-name><surname>Zhang</surname> <given-names>J</given-names></string-name>, <string-name><surname>Jiang</surname> <given-names>X</given-names></string-name>, <string-name><surname>Dong</surname> <given-names>X</given-names></string-name>, <etal>et al</etal></person-group>. <article-title>Topological optimization of 3D printed bone analog with PEKK for surgical mandibular reconstruction</article-title>. <source>J Mech Behav Biomed Mater</source>. <year>2020 Jul</year>;<volume>107</volume>:<fpage>103758</fpage>; <pub-id pub-id-type="pmid">32279058</pub-id></mixed-citation></ref>
<ref id="ref-4"><label>4.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Popescu</surname> <given-names>D</given-names></string-name>, <string-name><surname>Zapciu</surname> <given-names>A</given-names></string-name>, <string-name><surname>Tarba</surname> <given-names>C</given-names></string-name>, <string-name><surname>Laptoiu</surname> <given-names>D</given-names></string-name></person-group>. <article-title>Fast production of customized three-dimensional-printed hand splints</article-title>. <source>Rapid Prototyp J</source>. <year>2020</year>;<volume>26</volume>(<issue>1</issue>):<fpage>134</fpage>&#x2013;<lpage>44</lpage>. doi:<pub-id pub-id-type="doi">10.1108/RPJ-01-2019-0009</pub-id>.</mixed-citation></ref>
<ref id="ref-5"><label>5.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Yin</surname> <given-names>C</given-names></string-name>, <string-name><surname>Zhang</surname> <given-names>T</given-names></string-name>, <string-name><surname>Wei</surname> <given-names>Q</given-names></string-name>, <string-name><surname>Cai</surname> <given-names>H</given-names></string-name>, <string-name><surname>Cheng</surname> <given-names>Y</given-names></string-name>, <string-name><surname>Tian</surname> <given-names>Y</given-names></string-name>, <etal>et al</etal></person-group>. <article-title>Tailored surface treatment of 3D printed porous Ti6Al4V by microarc oxidation for enhanced Osseointegration via optimized bone in-growth patterns and interlocked bone/implant Interface</article-title>. <source>Bioact Mater</source>. <year>2022 Jan</year>;<volume>7</volume>:<fpage>26</fpage>&#x2013;<lpage>38</lpage>. doi:<pub-id pub-id-type="doi">10.1021/acsami.6b05893</pub-id>; <pub-id pub-id-type="pmid">27341499</pub-id></mixed-citation></ref>
<ref id="ref-6"><label>6.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Goldsmith</surname> <given-names>I</given-names>
<suffix>MBBS, MD FRCSE, RCPS FRCS, CTh FRCS</suffix></string-name></person-group>. <article-title>Chest wall reconstruction with 3D printing: anatomical and functional considerations</article-title>. <source>Innovat: Technol Techniq Cardiothor Vascul Surg</source>. <year>2022 Jun</year>;<volume>17</volume>(<issue>3</issue>):<fpage>191</fpage>&#x2013;<lpage>200</lpage>. doi:<pub-id pub-id-type="doi">10.1177/15569845221102138</pub-id>; <pub-id pub-id-type="pmid">35699725</pub-id></mixed-citation></ref>
<ref id="ref-7"><label>7.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Alvarez</surname> <given-names>AG</given-names></string-name>, <string-name><surname>Evans</surname> <given-names>PL</given-names></string-name>, <string-name><surname>Dovgalski</surname> <given-names>L</given-names></string-name>, <string-name><surname>Goldsmith</surname> <given-names>I</given-names></string-name></person-group>. <article-title>Design, additive manufacture and clinical application of a patient-specific titanium implant to anatomically reconstruct a large chest wall defect</article-title>. <source>Rapid Prototyp J</source>. <year>2021 Jan</year>;<volume>27</volume>(<issue>2</issue>):<fpage>304</fpage>&#x2013;<lpage>10</lpage>. doi:<pub-id pub-id-type="doi">10.1108/RPJ-08-2019-0208</pub-id>.</mixed-citation></ref>
<ref id="ref-8"><label>8.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Poier</surname> <given-names>PH</given-names></string-name>, <string-name><surname>Weigert</surname> <given-names>MC</given-names></string-name>, <string-name><surname>Rosenmann</surname> <given-names>GC</given-names></string-name>, <string-name><surname>Carvalho</surname> <given-names>MGR</given-names></string-name>, <string-name><surname>Ulbricht</surname> <given-names>L</given-names></string-name>, <string-name><surname>Foggiatto</surname> <given-names>JA</given-names></string-name></person-group>. <article-title>The development of low-cost wrist, hand, and finger orthosis for children with cerebral palsy using additive manufacturing</article-title>. <source>Jos&#x00E9; Aguiomar Foggiatto</source>. <year>2021</year>;<volume>73</volume>(<issue>3</issue>):<fpage>445</fpage>&#x2013;<lpage>53</lpage>. doi:<pub-id pub-id-type="doi">10.1007/s42600-021-00157-0</pub-id>.</mixed-citation></ref>
<ref id="ref-9"><label>9.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Li</surname> <given-names>J</given-names></string-name>, <string-name><surname>Tanaka</surname> <given-names>H</given-names></string-name></person-group>. <article-title>Feasibility study applying a parametric model as the design generator for 3D-printed orthosis for fracture immobilization</article-title>. <source>3D Print Med</source>. <year>2018</year>;<volume>4</volume>(<issue>1</issue>):<fpage>1</fpage>&#x2013;<lpage>15</lpage>. doi:<pub-id pub-id-type="doi">10.1186/s41205-017-0024-1</pub-id>; <pub-id pub-id-type="pmid">29782615</pub-id></mixed-citation></ref>
<ref id="ref-10"><label>10.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Sheha</surname> <given-names>ED</given-names></string-name>, <string-name><surname>Gandhi</surname> <given-names>SD</given-names></string-name>, <string-name><surname>Colman</surname> <given-names>MW</given-names></string-name></person-group>. <article-title>3D printing in spine surgery</article-title>. <source>Ann Transl Med</source>. <year>2019</year>;<volume>7</volume>(<issue>5</issue>):<fpage>164</fpage>. doi:<pub-id pub-id-type="doi">10.21037/atm.2019.08.88</pub-id>; <pub-id pub-id-type="pmid">31624730</pub-id></mixed-citation></ref>
<ref id="ref-11"><label>11.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Modi</surname> <given-names>YK</given-names></string-name>, <string-name><surname>Khare</surname> <given-names>N</given-names></string-name></person-group>. <article-title>Patient-specific polyamide wrist splint using reverse engineering and selective laser sintering</article-title>. <source>Mat Technol</source>. <year>2022</year>;<volume>37</volume>(<issue>2</issue>):<fpage>71</fpage>&#x2013;<lpage>8</lpage>. doi:<pub-id pub-id-type="doi">10.1080/10667857.2020.1810926</pub-id>.</mixed-citation></ref>
<ref id="ref-12"><label>12.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Chih-Hsing Chu</surname> <given-names>IJW</given-names></string-name>, <string-name><surname>Sun</surname> <given-names>JR</given-names></string-name>, <string-name><surname>Liu</surname> <given-names>CH</given-names></string-name></person-group>. <article-title>Customized designs of short thumb orthoses using 3D hand parametric models</article-title>. <source>Assist Technol</source>. <year>2022</year>;<volume>34</volume>(<issue>1</issue>):<fpage>104</fpage>&#x2013;<lpage>11</lpage>. doi:<pub-id pub-id-type="doi">10.1080/10400435.2019.1709917</pub-id>; <pub-id pub-id-type="pmid">31891329</pub-id></mixed-citation></ref>
<ref id="ref-13"><label>13.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Sharma</surname> <given-names>S</given-names></string-name>, <string-name><surname>Singh</surname> <given-names>J</given-names></string-name>, <string-name><surname>Kumar</surname> <given-names>H</given-names></string-name>, <string-name><surname>Sharma</surname> <given-names>A</given-names></string-name>, <string-name><surname>Aggarwal</surname> <given-names>V</given-names></string-name>, <string-name><surname>Amoljit Singh Gill</surname> <given-names>NJ</given-names></string-name>, <etal>et al</etal></person-group>. <article-title>Utilization of rapid prototyping technology for the fabrication of an orthopedic shoe inserts for foot pain reprieve using thermo-softening viscoelastic polymers: a novel experimental approach</article-title>. <source>Meas Control</source>. <year>2020</year>;<volume>53</volume>(<issue>3&#x2013;4</issue>):<fpage>519</fpage>&#x2013;<lpage>30</lpage>. doi:<pub-id pub-id-type="doi">10.1177/0020294019887194</pub-id>.</mixed-citation></ref>
<ref id="ref-14"><label>14.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Girolami</surname> <given-names>M</given-names></string-name>, <string-name><surname>Boriani</surname> <given-names>S</given-names></string-name>, <string-name><surname>Bandiera</surname> <given-names>S</given-names></string-name>, <string-name><surname>Barbanti-Br&#x00F3;dano</surname> <given-names>G</given-names></string-name>, <string-name><surname>Ghermandi</surname> <given-names>R</given-names></string-name>, <string-name><surname>Terzi</surname> <given-names>S</given-names></string-name>, <etal>et al</etal></person-group>. <article-title>Biomimetic 3D-printed custom-made prosthesis for anterior column reconstruction in the thoracolumbar spine: a tailored option following en bloc resection for spinal tumors</article-title>. <source>Europ Spine J</source>. <year>2018</year>;<volume>27</volume>:<fpage>3073</fpage>&#x2013;<lpage>83</lpage>. doi:<pub-id pub-id-type="doi">10.1007/s00586-018-5708-8</pub-id>; <pub-id pub-id-type="pmid">30039254</pub-id></mixed-citation></ref>
<ref id="ref-15"><label>15.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Parr</surname> <given-names>WCH</given-names></string-name>, <string-name><surname>Burnard</surname> <given-names>JL</given-names></string-name>, <string-name><surname>Wilson</surname> <given-names>PJ</given-names></string-name>, <string-name><surname>Mobbs</surname> <given-names>RJ</given-names></string-name></person-group>. <article-title>3D printed anatomical (bio)models in spine surgery: clinical benefits and value to health care providers</article-title>. <source>J Spine Surg</source>. <year>2019</year>;<volume>5</volume>(<issue>4</issue>):<fpage>549</fpage>&#x2013;<lpage>60</lpage>. doi:<pub-id pub-id-type="doi">10.21037/jss.2019.12.07</pub-id>; <pub-id pub-id-type="pmid">32043006</pub-id></mixed-citation></ref>
<ref id="ref-16"><label>16.</label><mixed-citation publication-type="book"><person-group person-group-type="author"><string-name><surname>Cai</surname> <given-names>H</given-names></string-name>, <string-name><surname>Liu</surname> <given-names>Z</given-names></string-name>, <string-name><surname>Wei</surname> <given-names>F</given-names></string-name>, <string-name><surname>Yu</surname> <given-names>M</given-names></string-name>, <string-name><surname>Xu</surname> <given-names>N</given-names></string-name>, <string-name><surname>Li</surname> <given-names>Z</given-names></string-name></person-group>. <chapter-title>3D printing in spine surgery</chapter-title>. In: <source>Advances in experimental medicine and biology</source>. <publisher-loc>Singapore</publisher-loc>: <publisher-name>Springer</publisher-name>; <year>2018</year>. p. <fpage>1</fpage>&#x2013;<lpage>27</lpage>. doi:<pub-id pub-id-type="doi">10.1007/978-981-13-1396-7_27</pub-id>.</mixed-citation></ref>
<ref id="ref-17"><label>17.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Goldsmith</surname> <given-names>I</given-names></string-name>, <string-name><surname>Evans</surname> <given-names>PL</given-names></string-name>, <string-name><surname>Goodrum</surname> <given-names>H</given-names></string-name>, <string-name><surname>Warbrick-Smith</surname> <given-names>J</given-names></string-name>, <string-name><surname>Bragg</surname> <given-names>T</given-names></string-name></person-group>. <article-title>Chest wall reconstruction with an anatomically designed 3D printed titanium ribs and hemi-sternum implant</article-title>. <source>3D Print Med</source>. <year>2020</year>;<volume>6</volume>(<issue>1</issue>):<fpage>1</fpage>&#x2013;<lpage>26</lpage>. doi:<pub-id pub-id-type="doi">10.1186/s41205-020-00079-0</pub-id>; <pub-id pub-id-type="pmid">32975713</pub-id></mixed-citation></ref>
<ref id="ref-18"><label>18.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Wu</surname> <given-names>Y</given-names></string-name>, <string-name><surname>Chen</surname> <given-names>N</given-names></string-name>, <string-name><surname>Xu</surname> <given-names>Z</given-names></string-name>, <string-name><surname>Zhang</surname> <given-names>X</given-names></string-name>, <string-name><surname>Liu</surname> <given-names>L</given-names></string-name>, <string-name><surname>Wu</surname> <given-names>C</given-names></string-name>, <etal>et al</etal></person-group>. <article-title>Application of 3D printing technology to thoracic wall tumor resection and thoracic wall reconstruction</article-title>. <source>J Thorac Dis</source>. <year>2018</year>;<volume>10</volume>(<issue>12</issue>):<fpage>6880</fpage>&#x2013;<lpage>90</lpage>. doi:<pub-id pub-id-type="doi">10.21037/jtd.2018.11.109</pub-id>; <pub-id pub-id-type="pmid">30746234</pub-id></mixed-citation></ref>
<ref id="ref-19"><label>19.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Zhou</surname> <given-names>XT</given-names></string-name>, <string-name><surname>Zhang</surname> <given-names>DS</given-names></string-name>, <string-name><surname>Yang</surname> <given-names>Y</given-names></string-name>, <string-name><surname>Zhang</surname> <given-names>GI</given-names></string-name>, <string-name><surname>Xie</surname> <given-names>ZX</given-names></string-name>, <string-name><surname>Chen</surname> <given-names>MH</given-names></string-name>, <etal>et al</etal></person-group>. <article-title>Analysis of the advantages of 3D printing in the surgical treatment of multiple rib fractures: 5 cases report</article-title>. <source>J Cardiothorac Surg</source>. <year>2019</year>;<volume>14</volume>(<issue>105</issue>):<fpage>1</fpage>&#x2013;<lpage>7</lpage>. doi:<pub-id pub-id-type="doi">10.1186/s13019-019-0930-y</pub-id>; <pub-id pub-id-type="pmid">31186011</pub-id></mixed-citation></ref>
<ref id="ref-20"><label>20.</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><surname>Miti&#x0107;</surname> <given-names>J</given-names></string-name>, <string-name><surname>Vitkovi&#x0107;</surname> <given-names>N</given-names></string-name>, <string-name><surname>Mani&#x0107;</surname> <given-names>M</given-names></string-name>, <string-name><surname>Trajanovi&#x0107;</surname> <given-names>M</given-names></string-name></person-group>. <article-title>Reconstruction of the missing part in the human mandible</article-title>. In: <conf-name>European Conference on Computer Vision</conf-name>, <year>2020</year>; p. <fpage>202</fpage>&#x2013;<lpage>5</lpage>.</mixed-citation></ref>
<ref id="ref-21"><label>21.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Cai</surname> <given-names>M</given-names></string-name>, <string-name><surname>Zhang</surname> <given-names>S</given-names></string-name>, <string-name><surname>Xiao</surname> <given-names>G</given-names></string-name>, <string-name><surname>Fan</surname> <given-names>S</given-names></string-name></person-group>. <article-title>3D face reconstruction and dense alignment with a new generated dataset</article-title>. <source>Displays</source>. <year>2021</year>;<volume>70</volume>:<fpage>102094</fpage>. doi:<pub-id pub-id-type="doi">10.1016/j.displa.2021.102094</pub-id>.</mixed-citation></ref>
<ref id="ref-22"><label>22.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Zhang</surname> <given-names>B</given-names></string-name>, <string-name><surname>Sun</surname> <given-names>H</given-names></string-name>, <string-name><surname>Wu</surname> <given-names>L</given-names></string-name>, <string-name><surname>Ma</surname> <given-names>L</given-names></string-name>, <string-name><surname>Xing</surname> <given-names>F</given-names></string-name>, <string-name><surname>Kong</surname> <given-names>Q</given-names></string-name>, <etal>et al</etal></person-group>. <article-title>3D printing of calcium phosphate bioceramic with tailored biodegradation rate for skull bone tissue reconstruction</article-title>. <source>Bio-Des Manuf</source>. <year>2019</year>;<volume>2</volume>:<fpage>161</fpage>&#x2013;<lpage>71</lpage>. doi:<pub-id pub-id-type="doi">10.1007/s42242-019-00046-7</pub-id>.</mixed-citation></ref>
<ref id="ref-23"><label>23.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Farber</surname> <given-names>SJ</given-names></string-name>, <string-name><surname>Latham</surname> <given-names>KP</given-names></string-name>, <string-name><surname>Kantar</surname> <given-names>RS</given-names></string-name>, <string-name><surname>Perkins</surname> <given-names>JN</given-names></string-name>, <string-name><surname>Rodriguez</surname> <given-names>ED</given-names></string-name></person-group>. <article-title>Reconstructing the face of war</article-title>. <source>Mil Med</source>. <year>2019 Jul</year>;<volume>184</volume>(<issue>7&#x2013;8</issue>):<fpage>236</fpage>&#x2013;<lpage>46</lpage>. doi:<pub-id pub-id-type="doi">10.1093/milmed/usz103</pub-id>; <pub-id pub-id-type="pmid">31287139</pub-id></mixed-citation></ref>
<ref id="ref-24"><label>24.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Farber</surname> <given-names>SJ</given-names></string-name>, <string-name><surname>Kantar</surname> <given-names>RS</given-names></string-name>, <string-name><surname>Diaz-Siso</surname> <given-names>JR</given-names></string-name>, <string-name><surname>Rodriguez</surname> <given-names>ED</given-names></string-name></person-group>. <article-title>Face transplantation: an update for the United States trauma system</article-title>. <source>J Craniofac Surg</source>. <year>2018 Jun</year>;<volume>29</volume>(<issue>4</issue>):<fpage>832</fpage>&#x2013;<lpage>8</lpage>. doi:<pub-id pub-id-type="doi">10.1097/SCS.0000000000004615</pub-id>; <pub-id pub-id-type="pmid">29771838</pub-id></mixed-citation></ref>
<ref id="ref-25"><label>25.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Kantar</surname> <given-names>RS</given-names></string-name>, <string-name><surname>Rifkin</surname> <given-names>WJ</given-names></string-name>, <string-name><surname>Cammarata</surname> <given-names>MJ</given-names></string-name>, <string-name><surname>Maliha</surname> <given-names>SG</given-names></string-name>, <string-name><surname>Diaz-Siso</surname> <given-names>JR</given-names></string-name>, <string-name><surname>Farber</surname> <given-names>SJ</given-names></string-name>, <etal>et al</etal></person-group>. <article-title>Single-stage primary cleft lip and palate repair: a review of the literature</article-title>. <source>Ann Plast Surg</source>. <year>2018 Nov</year>;<volume>81</volume>(<issue>5</issue>):<fpage>619</fpage>&#x2013;<lpage>23</lpage>. doi:<pub-id pub-id-type="doi">10.1097/SAP.0000000000001543</pub-id>; <pub-id pub-id-type="pmid">29944528</pub-id></mixed-citation></ref>
<ref id="ref-26"><label>26.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Zhou</surname> <given-names>SB</given-names></string-name>, <string-name><surname>Gao</surname> <given-names>BW</given-names></string-name>, <string-name><surname>Tan</surname> <given-names>PC</given-names></string-name>, <string-name><surname>Zhang</surname> <given-names>HZ</given-names></string-name>, <string-name><surname>Li</surname> <given-names>QF</given-names></string-name>, <string-name><surname>Xie</surname> <given-names>F</given-names></string-name></person-group>. <article-title>A strategy for integrative reconstruction of midface defects using an extended forehead flap</article-title>. <source>Facial Plast Surg Aesth Med</source>. <year>2021 Nov</year>;<volume>23</volume>(<issue>6</issue>):<fpage>430</fpage>&#x2013;<lpage>6</lpage>. doi:<pub-id pub-id-type="doi">10.1089/fpsam.2020.0484</pub-id>; <pub-id pub-id-type="pmid">33877902</pub-id></mixed-citation></ref>
<ref id="ref-27"><label>27.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Chan</surname> <given-names>TJ</given-names></string-name>, <string-name><surname>Long</surname> <given-names>C</given-names></string-name>, <string-name><surname>Wang</surname> <given-names>E</given-names></string-name>, <string-name><surname>Prisman</surname> <given-names>E</given-names></string-name></person-group>. <article-title>The state of virtual surgical planning in maxillary reconstruction: a systematic review</article-title>. <source>Oral Oncol</source>. <year>2022 Oct</year>;<volume>133</volume>:<fpage>106058</fpage>. doi:<pub-id pub-id-type="doi">10.1016/j.oraloncology.2022.106058</pub-id>; <pub-id pub-id-type="pmid">35952582</pub-id></mixed-citation></ref>
<ref id="ref-28"><label>28.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Wang</surname> <given-names>Y</given-names></string-name>, <string-name><surname>Qu</surname> <given-names>X</given-names></string-name>, <string-name><surname>Jiang</surname> <given-names>J</given-names></string-name>, <string-name><surname>Sun</surname> <given-names>J</given-names></string-name>, <string-name><surname>Zhang</surname> <given-names>C</given-names></string-name>, <string-name><surname>He</surname> <given-names>Y</given-names></string-name></person-group>. <article-title>Aesthetical and accuracy outcomes of reconstruction of maxillary defect by 3D virtual surgical planning</article-title>. <source>Front Oncol</source>. <year>2021 Oct</year>;<volume>11</volume>:<fpage>718946</fpage>. doi:<pub-id pub-id-type="doi">10.3389/fonc.2021.718946</pub-id>; <pub-id pub-id-type="pmid">34737946</pub-id></mixed-citation></ref>
<ref id="ref-29"><label>29.</label><mixed-citation publication-type="other"><person-group person-group-type="author"><string-name><surname>Nguyen</surname> <given-names>PD</given-names></string-name>, <string-name><surname>Le</surname> <given-names>TD</given-names></string-name>, <string-name><surname>Nguyen</surname> <given-names>DQ</given-names></string-name>, <string-name><surname>Nguyen</surname> <given-names>TQ</given-names></string-name>, <string-name><surname>Chou</surname> <given-names>LW</given-names></string-name>, <string-name><surname>Nguyen-Xuan</surname> <given-names>H</given-names></string-name></person-group>. <article-title>3D facial imperfection regeneration: deep learning approach and 3D printing prototypes</article-title>. <comment>arXiv:2303.14381</comment>. <year>2023</year>.</mixed-citation></ref>
<ref id="ref-30"><label>30.</label><mixed-citation publication-type="book"><person-group person-group-type="author"><string-name><surname>Zielonka</surname> <given-names>W</given-names></string-name>, <string-name><surname>Bolkart</surname> <given-names>T</given-names></string-name>, <string-name><surname>Thies</surname> <given-names>J</given-names></string-name></person-group>. <chapter-title>Towards metrical reconstruction of human faces</chapter-title>. In: <person-group person-group-type="editor"><string-name><surname>Avidan</surname> <given-names>S</given-names></string-name>, <string-name><surname>Brostow</surname> <given-names>G</given-names></string-name>, <string-name><surname>Ciss&#x00E9;</surname> <given-names>M</given-names></string-name>, <string-name><surname>Farinella</surname> <given-names>GM</given-names></string-name>, <string-name><surname>Hassner</surname> <given-names>T</given-names></string-name></person-group>, editors. <source>Computer Vision&#x2013;ECCV 2022</source>. <publisher-loc>Cham</publisher-loc>: <publisher-name>Springer Nature Switzerland</publisher-name>; <year>2022</year>. p. <fpage>250</fpage>&#x2013;<lpage>69</lpage>. doi:<pub-id pub-id-type="doi">10.1007/978-3-031-19778-9_15</pub-id>.</mixed-citation></ref>
<ref id="ref-31"><label>31.</label><mixed-citation publication-type="book"><person-group person-group-type="author"><string-name><surname>Deng</surname> <given-names>J</given-names></string-name>, <string-name><surname>Guo</surname> <given-names>J</given-names></string-name>, <string-name><surname>Liu</surname> <given-names>T</given-names></string-name>, <string-name><surname>Gong</surname> <given-names>M</given-names></string-name>, <string-name><surname>Zafeiriou</surname> <given-names>S</given-names></string-name></person-group>. <chapter-title>Sub-center ArcFace: boosting face recognition by large-scale noisy web faces</chapter-title>. In: <source>Computer Vision&#x2013;ECCV 2020</source>. <publisher-loc>Cham</publisher-loc>: <publisher-name>Springer</publisher-name>; <year>2020</year>. p. <fpage>741</fpage>&#x2013;<lpage>57</lpage>. doi:<pub-id pub-id-type="doi">10.1007/978-3-030-58621-8_43</pub-id>.</mixed-citation></ref>
<ref id="ref-32"><label>32.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Zhao</surname> <given-names>J</given-names></string-name>, <string-name><surname>Yan</surname> <given-names>S</given-names></string-name>, <string-name><surname>Feng</surname> <given-names>J</given-names></string-name></person-group>. <article-title>Towards age-invariant face recognition</article-title>. <source>IEEE Trans Pattern Anal Mach Intell</source>. <year>2020 Jul</year>;<volume>44</volume>(<issue>1</issue>):<fpage>474</fpage>&#x2013;<lpage>87</lpage>. doi:<pub-id pub-id-type="doi">10.1109/TPAMI.2020.3011426</pub-id>; <pub-id pub-id-type="pmid">32750831</pub-id></mixed-citation></ref>
<ref id="ref-33"><label>33.</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><surname>Deng</surname> <given-names>J</given-names></string-name>, <string-name><surname>Guo</surname> <given-names>J</given-names></string-name>, <string-name><surname>Xue</surname> <given-names>N</given-names></string-name>, <string-name><surname>Zafeiriou</surname> <given-names>S</given-names></string-name></person-group>. <article-title>ArcFace: additive angular margin loss for deep face recognition</article-title>. In: <conf-name>2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)</conf-name>, <year>2019</year>; <publisher-loc>Long Beach, CA, USA</publisher-loc>. doi:<pub-id pub-id-type="doi">10.1109/CVPR.2019.00482</pub-id>.</mixed-citation></ref>
<ref id="ref-34"><label>34.</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><surname>He</surname> <given-names>K</given-names></string-name>, <string-name><surname>Zhang</surname> <given-names>X</given-names></string-name>, <string-name><surname>Ren</surname> <given-names>S</given-names></string-name>, <string-name><surname>Sun</surname> <given-names>J</given-names></string-name></person-group>. <article-title>Deep residual learning for image recognition</article-title>. In: <conf-name>Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)</conf-name>, <year>2016</year>; <publisher-loc>Las Vegas, NV, USA</publisher-loc>. doi:<pub-id pub-id-type="doi">10.1109/CVPR.2016.90</pub-id>.</mixed-citation></ref>
<ref id="ref-35"><label>35.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Kajla</surname> <given-names>NI</given-names></string-name>, <string-name><surname>Missen</surname> <given-names>MMS</given-names></string-name>, <string-name><surname>Luqman</surname> <given-names>MM</given-names></string-name>, <string-name><surname>Coustaty</surname> <given-names>M</given-names></string-name>, <string-name><surname>Mehmood</surname> <given-names>A</given-names></string-name></person-group>. <article-title>Additive angular margin loss in deep graph neural network classifier for learning graph edit distance</article-title>. <source>IEEE Access</source>. <year>2020 Nov</year>;<volume>9</volume>:<fpage>201752</fpage>&#x2013;<lpage>61</lpage>. doi:<pub-id pub-id-type="doi">10.1109/ACCESS.2020.3035886</pub-id>.</mixed-citation></ref>
<ref id="ref-36"><label>36.</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><surname>Wang</surname> <given-names>X</given-names></string-name>, <string-name><surname>Wang</surname> <given-names>S</given-names></string-name>, <string-name><surname>Chi</surname> <given-names>C</given-names></string-name>, <string-name><surname>Zhang</surname> <given-names>S</given-names></string-name>, <string-name><surname>Mei</surname> <given-names>T</given-names></string-name></person-group>. <article-title>Loss function search for face recognition</article-title>. In: <conf-name>Proceedings of the 37th International Conference on Machine Learning</conf-name>, <year>2020</year>; <publisher-name>PMLR</publisher-name>.</mixed-citation></ref>
<ref id="ref-37"><label>37.</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><surname>An</surname> <given-names>X</given-names></string-name>, <string-name><surname>Zhu</surname> <given-names>X</given-names></string-name>, <string-name><surname>Gao</surname> <given-names>Y</given-names></string-name>, <string-name><surname>Xiao</surname> <given-names>Y</given-names></string-name>, <string-name><surname>Zhao</surname> <given-names>Y</given-names></string-name>, <string-name><surname>Feng</surname> <given-names>Z</given-names></string-name>, <etal>et al</etal></person-group>. <article-title>Partial FC: training 10 million identities on a single machine</article-title>. In: <conf-name>Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops</conf-name>, <year>2021</year>; <publisher-loc>Montreal, QC, Canada</publisher-loc>.</mixed-citation></ref>
<ref id="ref-38"><label>38.</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><surname>Sanyal</surname> <given-names>S</given-names></string-name>, <string-name><surname>Bolkart</surname> <given-names>T</given-names></string-name>, <string-name><surname>Feng</surname> <given-names>H</given-names></string-name>, <string-name><surname>Black</surname> <given-names>MJ</given-names></string-name></person-group>. <article-title>Learning to regress 3D face shape and expression from an image without 3D supervision</article-title>. In: <conf-name>Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)</conf-name>, <year>2019</year>; <publisher-loc>Long Beach, CA, USA</publisher-loc>.</mixed-citation></ref>
<ref id="ref-39"><label>39.</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><surname>Cudeiro</surname> <given-names>D</given-names></string-name>, <string-name><surname>Bolkart</surname> <given-names>T</given-names></string-name>, <string-name><surname>Laidlaw</surname> <given-names>C</given-names></string-name>, <string-name><surname>Ranjan</surname> <given-names>A</given-names></string-name>, <string-name><surname>Black</surname> <given-names>MJ</given-names></string-name></person-group>. <article-title>Capture, learning, and synthesis of 3D speaking styles</article-title>. In: <conf-name>Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)</conf-name>, <year>2019</year>; <publisher-loc>Long Beach, CA, USA</publisher-loc>.</mixed-citation></ref>
<ref id="ref-40"><label>40.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Li</surname> <given-names>T</given-names></string-name>, <string-name><surname>Bolkart</surname> <given-names>T</given-names></string-name>, <string-name><surname>Black</surname> <given-names>MJ</given-names></string-name>, <string-name><surname>Li</surname> <given-names>H</given-names></string-name>, <string-name><surname>Romero</surname> <given-names>J</given-names></string-name></person-group>. <article-title>Learning a model of facial shape and expression from 4D scans</article-title>. <source>ACM Trans Graph</source>. <year>2017</year>;<volume>36</volume>(<issue>6</issue>):<fpage>194</fpage>&#x2013;<lpage>201</lpage>. doi:<pub-id pub-id-type="doi">10.1145/3130800.3130813</pub-id>.</mixed-citation></ref>
<ref id="ref-41"><label>41.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Dai</surname> <given-names>H</given-names></string-name>, <string-name><surname>Pears</surname> <given-names>N</given-names></string-name>, <string-name><surname>Smith</surname> <given-names>W</given-names></string-name>, <string-name><surname>Duncan</surname> <given-names>C</given-names></string-name></person-group>. <article-title>Statistical modeling of craniofacial shape and texture</article-title>. <source>Int J Comput Vis</source>. <year>2020</year>;<volume>128</volume>(<issue>2</issue>):<fpage>547</fpage>&#x2013;<lpage>71</lpage>. doi:<pub-id pub-id-type="doi">10.1007/s11263-019-01260-7</pub-id>.</mixed-citation></ref>
<ref id="ref-42"><label>42.</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><surname>Feng</surname> <given-names>ZH</given-names></string-name>, <string-name><surname>Huber</surname> <given-names>P</given-names></string-name>, <string-name><surname>Kittler</surname> <given-names>J</given-names></string-name>, <string-name><surname>Hancock</surname> <given-names>P</given-names></string-name>, <string-name><surname>Wu</surname> <given-names>XJ</given-names></string-name>, <string-name><surname>Zhao</surname> <given-names>Q</given-names></string-name>, <etal>et al</etal></person-group>. <article-title>Evaluation of dense 3D reconstruction from 2D face images in the wild</article-title>. In: <conf-name>13th IEEE International Conference on Automatic Face &#x0026; Gesture Recognition (FG 2018)</conf-name>, <year>2018</year>; <publisher-loc>Xi&#x2019;an, China</publisher-loc>.</mixed-citation></ref>
<ref id="ref-43"><label>43.</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><surname>Zhou</surname> <given-names>Y</given-names></string-name>, <string-name><surname>Wu</surname> <given-names>C</given-names></string-name>, <string-name><surname>Li</surname> <given-names>Z</given-names></string-name>, <string-name><surname>Cao</surname> <given-names>C</given-names></string-name>, <string-name><surname>Ye</surname> <given-names>Y</given-names></string-name>, <string-name><surname>Saragih</surname> <given-names>J</given-names></string-name>, <etal>et al</etal></person-group>. <article-title>Fully convolutional mesh autoencoder using efficient spatially varying kernels</article-title>. In: <conf-name>Proceedings of the 34th International Conference on Neural Information Processing Systems</conf-name>, <year>2020</year>; <publisher-loc>Red Hook, NY, USA</publisher-loc>: <publisher-name>Curran Associates Inc.</publisher-name>; p. <fpage>9251</fpage>&#x2013;<lpage>62</lpage>.</mixed-citation></ref>
<ref id="ref-44"><label>44.</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><surname>Caron</surname> <given-names>M</given-names></string-name>, <string-name><surname>Misra</surname> <given-names>I</given-names></string-name>, <string-name><surname>Mairal</surname> <given-names>J</given-names></string-name>, <string-name><surname>Goyal</surname> <given-names>P</given-names></string-name>, <string-name><surname>Bojanowski</surname> <given-names>P</given-names></string-name>, <string-name><surname>Joulin</surname> <given-names>A</given-names></string-name></person-group>. <article-title>Unsupervised learning of visual features by contrasting cluster assignments</article-title>. In: <conf-name>34th Conference on Neural Information Processing Systems (NeurIPS 2020)</conf-name>, <year>2020</year>; <publisher-loc>Vancouver, BC, Canada</publisher-loc>.</mixed-citation></ref>
<ref id="ref-45"><label>45.</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><surname>Wu</surname> <given-names>Z</given-names></string-name>, <string-name><surname>Xiong</surname> <given-names>Y</given-names></string-name>, <string-name><surname>Yu</surname> <given-names>SX</given-names></string-name>, <string-name><surname>Lin</surname> <given-names>D</given-names></string-name></person-group>. <article-title>Unsupervised feature learning via non-parametric instance discrimination</article-title>. In: <conf-name>Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition</conf-name>, <year>2018</year>; <publisher-loc>Munich, Germany</publisher-loc>.</mixed-citation></ref>
<ref id="ref-46"><label>46.</label><mixed-citation publication-type="other"><person-group person-group-type="author"><string-name><surname>Asano</surname> <given-names>YM</given-names></string-name>, <string-name><surname>Rupprecht</surname> <given-names>C</given-names></string-name>, <string-name><surname>Vedaldi</surname> <given-names>A</given-names></string-name></person-group>. <article-title>Self-labelling via simultaneous clustering and representation learning</article-title>. <comment>arXiv:1911.05371</comment>. <year>2019</year>.</mixed-citation></ref>
<ref id="ref-47"><label>47.</label><mixed-citation publication-type="other"><person-group person-group-type="author"><string-name><surname>Loshchilov</surname> <given-names>I</given-names></string-name>, <string-name><surname>Hutter</surname> <given-names>F</given-names></string-name></person-group>. <article-title>Decoupled weight decay regularization</article-title>. <comment>arXiv:1711.05101</comment>. <year>2019</year>.</mixed-citation></ref>
<ref id="ref-48"><label>48.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>DeStefano</surname> <given-names>V</given-names></string-name>, <string-name><surname>Khan</surname> <given-names>S</given-names></string-name>, <string-name><surname>Tabada</surname> <given-names>A</given-names></string-name></person-group>. <article-title>Applications of PLA in modern medicine</article-title>. <source>Eng Regen</source>. <year>2020 Jan</year>;<volume>1</volume>(<issue>5</issue>):<fpage>76</fpage>&#x2013;<lpage>87</lpage>. doi:<pub-id pub-id-type="doi">10.1016/j.engreg.2020.08.002</pub-id>; <pub-id pub-id-type="pmid">38620328</pub-id></mixed-citation></ref>
</ref-list>
</back></article>