<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.1 20151215//EN" "http://jats.nlm.nih.gov/publishing/1.1/JATS-journalpublishing1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" article-type="research-article" dtd-version="1.1">
<front>
<journal-meta>
<journal-id journal-id-type="pmc">CMC</journal-id>
<journal-id journal-id-type="nlm-ta">CMC</journal-id>
<journal-id journal-id-type="publisher-id">CMC</journal-id>
<journal-title-group>
<journal-title>Computers, Materials &#x0026; Continua</journal-title>
</journal-title-group>
<issn pub-type="epub">1546-2226</issn>
<issn pub-type="ppub">1546-2218</issn>
<publisher>
<publisher-name>Tech Science Press</publisher-name>
<publisher-loc>USA</publisher-loc>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="publisher-id">29618</article-id>
<article-id pub-id-type="doi">10.32604/cmc.2023.029618</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Article</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>Building 3-D Human Data Based on Handed Measurement and CNN</article-title>
<alt-title alt-title-type="left-running-head">Building 3-D Human Data Based on Handed Measurement and CNN</alt-title>
<alt-title alt-title-type="right-running-head">Building 3-D Human Data Based on Handed Measurement and CNN</alt-title>
</title-group>
<contrib-group content-type="authors">
<contrib id="author-1" contrib-type="author">
<name name-style="western"><surname>Nguyen</surname><given-names>Bich</given-names></name><xref ref-type="aff" rid="aff-1">1</xref></contrib>
<contrib id="author-2" contrib-type="author">
<name name-style="western"><surname>Nguyen</surname><given-names>Binh</given-names></name><xref ref-type="aff" rid="aff-1">1</xref></contrib>
<contrib id="author-3" contrib-type="author">
<name name-style="western"><surname>Tran</surname><given-names>Hai</given-names></name><xref ref-type="aff" rid="aff-2">2</xref></contrib>
<contrib id="author-4" contrib-type="author">
<name name-style="western"><surname>Pham</surname><given-names>Vuong</given-names></name><xref ref-type="aff" rid="aff-1">1</xref></contrib>
<contrib id="author-5" contrib-type="author">
<name name-style="western"><surname>Lam Thuy</surname><given-names>Le Nhi</given-names></name><xref ref-type="aff" rid="aff-1">1</xref></contrib>
<contrib id="author-6" contrib-type="author" corresp="yes">
<name name-style="western"><surname>Bao</surname><given-names>Pham The</given-names></name><xref ref-type="aff" rid="aff-1">1</xref><email>ptbao@sgu.edu.vn</email></contrib>
<aff id="aff-1"><label>1</label><institution>Faculty of Information Science, Sai Gon University</institution>, <addr-line>Ho Chi Minh, 70000</addr-line>, <country>Vietnam</country></aff>
<aff id="aff-2"><label>2</label><institution>Faculty of Information Technology, University of Education</institution>, <addr-line>Ho Chi Minh, 70000</addr-line>, <country>Vietnam</country></aff>
</contrib-group>
<author-notes>
<corresp id="cor1"><label>&#x002A;</label>Corresponding Author: Pham The Bao. Email: <email>ptbao@sgu.edu.vn</email></corresp>
</author-notes>
<pub-date pub-type="epub" date-type="pub" iso-8601-date="2022-10-28"><day>28</day>
<month>10</month>
<year>2022</year></pub-date>
<volume>74</volume>
<issue>2</issue>
<fpage>2431</fpage>
<lpage>2441</lpage>
<history>
<date date-type="received"><day>08</day><month>3</month><year>2022</year></date>
<date date-type="accepted"><day>01</day><month>6</month><year>2022</year></date>
</history>
<permissions>
<copyright-statement>&#x00A9; 2023 Nguyen et al.</copyright-statement>
<copyright-year>2023</copyright-year>
<copyright-holder>Nguyen et al.</copyright-holder>
<license xlink:href="https://creativecommons.org/licenses/by/4.0/">
<license-p>This work is licensed under a <ext-link ext-link-type="uri" xlink:type="simple" xlink:href="https://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</ext-link>, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.</license-p>
</license>
</permissions>
<self-uri content-type="pdf" xlink:href="TSP_CMC_29618.pdf"></self-uri>
<abstract>
<p>Three-dimensional (3-D) printing technology is growing strongly, with many applications, one of which is the garment industry. Applying human body models in the garment industry is necessary to respond to the increasing demand for personalization while still guaranteeing aesthetics. This paper proposes a method to construct 3-D human models by applying deep learning. Using pre-existing information, we calculate the locations of the main slices of the human body, including the neck, chest, belly, buttocks, and the rings of the extremities. Then, on the positioning frame, we find the key points (fixed and unaltered) of these main slices and update these points to match the current measurements. To add points to a main slice, we use a deep learning model to mimic the form of the human body at that slice position. Based on the main slices, we use interpolation to produce the sub-slices of the different body sections, so that each body part is morphologically complete. Finally, we combine all slices to construct a full 3-D representation of the human body.</p>
</abstract>
<kwd-group kwd-group-type="author">
<kwd>3-D human model</kwd>
<kwd>deep learning</kwd>
<kwd>interpolation</kwd>
</kwd-group>
</article-meta>
</front>
<body>
<sec id="s1"><label>1</label><title>Introduction</title>
<p>Recently, applying 3-D printing [<xref ref-type="bibr" rid="ref-1">1</xref>,<xref ref-type="bibr" rid="ref-2">2</xref>] in fashion has attracted attention, but it has been limited to printing parts of the body. This limitation results in inadequate product sizing, and the printing cost is higher than that of traditionally made clothes. In particular, sewing factories can only sew to international standard measurements, so clothes fit poorly when a person's shape differs from the standard measures.</p>
<p>Anthropometric surveys toward applications in the garment industry were conducted early by nations worldwide but were primarily performed by traditional hand measurement. Japan was the first country to undertake a large-scale countrywide survey using 3-D body scanners from 1992 to 1994. Other nations, in turn, performed anthropometric surveys and built their own body measuring systems utilizing 3-D body scanning technologies.</p>
<p>Many research groups have studied building 3-D human models, from modeling a part of the human body (ears, organs) to constructing the entire body, to serve different needs such as health care and fashion. Thanks to the rapid growth of technology, many methods of building 3-D human models have been improved, and they are being applied in modern 3-D printing and scanning equipment to meet the demand for complete human models. Kieu&#x00A0;et&#x00A0;al.&#x00A0;[<xref ref-type="bibr" rid="ref-3">3</xref>] applied regression and interpolation to build a 3-D human model: they divided the model into parts, then used Hermite interpolation on each part. Nguyen&#x00A0;et&#x00A0;al.&#x00A0;[<xref ref-type="bibr" rid="ref-4">4</xref>] used a Convolutional Neural Network to create the point set of a 3-D human model; this method used the diameter of a circle to create the key slices of the model and interpolation to create the remaining slices. Zeng&#x00A0;et&#x00A0;al.&#x00A0;[<xref ref-type="bibr" rid="ref-5">5</xref>] followed the anthropometric modeling approach, proposing a local mapping technique based on a new feature selection that allows automatic anthropometric parameter modeling for each body aspect.</p>
<p>Point cloud learning has gained attention recently due to its effectiveness in many applications, such as computer vision, autonomous vehicles, and robotics. Deep learning has dramatically advanced the solution of visual problems [<xref ref-type="bibr" rid="ref-6">6</xref>]. 3-D data can be represented in various formats, including depth images, point clouds, meshes, and volumetric grids. As a commonly used format, the point cloud representation preserves the original geometric information in 3-D space without any change.</p>
<p>In this study, we propose a new method based on deep learning and point cloud learning to train 3-D models. First, we convert the 3-D model into a point cloud: we sample points uniformly over the surface, choosing faces with probability proportional to their area, using the built-in function of the PyTorch3D library. This step converts the input models into sets of point clouds that form complete meshes. Then, we move the input point cloud so that all of its points come close to the hand-measurement data. Finally, we construct a 3-D model of the person corresponding to the data. We repeat these steps until the point cloud has the lowest error relative to the hand-measured data.</p>
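<p>The area-weighted sampling step can be sketched in plain NumPy. This is an illustrative re-implementation of what the PyTorch3D routine <italic>sample_points_from_meshes</italic> does internally; the function name here is our own.</p>

```python
import numpy as np

def sample_points_on_mesh(vertices, faces, n_points, rng=None):
    """Sample points uniformly over a triangle mesh surface.

    Each face is chosen with probability proportional to its area,
    then a point is drawn uniformly inside the chosen triangle via
    barycentric coordinates.
    """
    rng = np.random.default_rng() if rng is None else rng
    v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
    # Face areas from the cross product of two edge vectors.
    areas = 0.5 * np.linalg.norm(np.cross(v1 - v0, v2 - v0), axis=1)
    idx = rng.choice(len(faces), size=n_points, p=areas / areas.sum())
    # Uniform barycentric sampling inside each selected triangle.
    u, v = rng.random(n_points), rng.random(n_points)
    flip = u + v > 1.0
    u[flip], v[flip] = 1.0 - u[flip], 1.0 - v[flip]
    w = 1.0 - u - v
    return u[:, None] * v0[idx] + v[:, None] * v1[idx] + w[:, None] * v2[idx]
```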
<p>We train our method on the 3-D model set and the model surface. Contrary to previous approaches, which need a lot of data and detailed interpolation calculations, our point cloud learning method has a lower error and better control over the input data during training. Our approach also improves how the model is trained on the whole 3-D model.</p>
</sec>
<sec id="s2"><label>2</label><title>Related Work</title>
<sec id="s2_1"><label>2.1</label><title>3-D Models</title>
<p>3-D models are applied in many industries. The health industry uses detailed organ models, built from many slices of two-dimensional images from Magnetic Resonance Imaging or Computed Tomography scans, to create vivid and realistic scenes. Such models are used in the video game business as assets for computer and video games. In recent decades, the Earth Science community has begun to use 3-D geological models as common practice. Physical devices such as 3-D printers or Computer Numerical Control machines can also operate based on 3-D models [<xref ref-type="bibr" rid="ref-7">7</xref>].</p>
<p>Files with the extension <italic>.obj</italic> are commonly used to store 3-D models. These files contain 3-D coordinates, texture maps, polygon faces, and other object information. The first token of each line in an <italic>.obj</italic> file is an attribute type, followed by its arguments. The common attributes in 3-D models are <italic>v, vn, f, vp,</italic> and <italic>vt.</italic> In this study, we use three attributes: <italic>v</italic>, <italic>f</italic>, and <italic>vn</italic>.</p>
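<p>A minimal reader for the three attributes used here can be sketched as follows (illustrative only; real <italic>.obj</italic> files carry additional attribute types, and face entries may include texture and normal indices after slashes):</p>

```python
def parse_obj(lines):
    """Minimal parser for the three .obj attributes used in this study:
    v (vertex position), vn (vertex normal), f (polygon face).
    Face indices in .obj are 1-based and may have /vt/vn parts."""
    vertices, normals, faces = [], [], []
    for line in lines:
        parts = line.split()
        if not parts:
            continue
        if parts[0] == "v":
            vertices.append(tuple(float(x) for x in parts[1:4]))
        elif parts[0] == "vn":
            normals.append(tuple(float(x) for x in parts[1:4]))
        elif parts[0] == "f":
            # "f 1//1 2//2 3//3" -> keep only the vertex index, 0-based.
            faces.append(tuple(int(p.split("/")[0]) - 1 for p in parts[1:]))
    return vertices, normals, faces
```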
</sec>
<sec id="s2_2"><label>2.2</label><title>Deep Learning</title>
<p>Deep learning is a form of machine learning that uses a collection of algorithms to model high-level abstractions in data using many processing layers with complex structures, or by composing many nonlinear transformations. Deep learning comprises several methods, each of which has its own set of applications [<xref ref-type="bibr" rid="ref-8">8</xref>,<xref ref-type="bibr" rid="ref-9">9</xref>]:
<list list-type="simple">
<list-item><p>&#x2212; Deep Boltzmann Machines (DBM)</p></list-item>
<list-item><p>&#x2212; Deep Belief Networks (DBN)</p></list-item>
<list-item><p>&#x2212; Convolutional Neural Networks (CNN)</p></list-item>
<list-item><p>&#x2212; Stacked Auto&#x2013;Encoders</p></list-item>
</list></p>
<p>Although they may also incorporate propositional formulas or latent variables arranged by layer, most current deep learning models are based on artificial neural networks, especially CNNs. DBNs and DBMs are examples of deep generative models [<xref ref-type="bibr" rid="ref-10">10</xref>].</p>
<p>In deep learning, each level learns to transform the input data into a slightly more abstract and composite representation. In image recognition, the input is a matrix of pixels. The first layer of a CNN model may encode edges; the second layer may compose and encode arrangements of edges; the third layer may encode a nose and eyes; and the fourth layer may recognize that the image contains a face. Deep learning learns which features to place at which level optimally [<xref ref-type="bibr" rid="ref-6">6</xref>,<xref ref-type="bibr" rid="ref-11">11</xref>&#x2013;<xref ref-type="bibr" rid="ref-13">13</xref>].</p>
<p>An artificial neural network consists of a group of artificial neurons (nodes) that connect and process information by passing values along connections and computing new values at the nodes (the connectionist approach to computation). In many cases, an artificial neural network is an adaptive system that changes its structure based on external or internal information flowing through the network during the learning process.</p>
</sec>
<sec id="s2_3"><label>2.3</label><title>Error</title>
<p>To calculate the errors of a 3-D model, we use the method in [<xref ref-type="bibr" rid="ref-3">3</xref>] to calculate the perimeter of the <italic>i<sup>th</sup></italic> slice of the model, as shown in <xref ref-type="disp-formula" rid="eqn-1">Eq. (1)</xref>.
<disp-formula id="eqn-1"><label>(1)</label><mml:math id="mml-eqn-1" display="block"><mml:msub><mml:mi>P</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:msubsup><mml:mrow><mml:mo>&#x2211;</mml:mo></mml:mrow><mml:mrow><mml:mi>j</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>n</mml:mi><mml:mo>&#x2212;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msubsup><mml:msqrt><mml:msup><mml:mrow><mml:mo>(</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo>&#x2212;</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi><mml:mo>+</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>)</mml:mo></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup><mml:mo>+</mml:mo><mml:msup><mml:mrow><mml:mo>(</mml:mo><mml:msub><mml:mi>y</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo>&#x2212;</mml:mo><mml:msub><mml:mi>y</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi><mml:mo>+</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>)</mml:mo></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup><mml:mo>+</mml:mo><mml:msup><mml:mrow><mml:mo>(</mml:mo><mml:msub><mml:mi>z</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo>&#x2212;</mml:mo><mml:msub><mml:mi>z</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi><mml:mo>+</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>)</mml:mo></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup></mml:msqrt><mml:mo>+</mml:mo><mml:msqrt><mml:msup><mml:mrow><mml:mo>(</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>n</mml:mi></mml:mrow></mml:msub><mml:mo>&#x2212;</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>)</mml:mo></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup><mml:mo>+</mml:mo><mml:msup><mml:mrow><mml:mo>(</mml:mo><mml:msub><mml:mi>y</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>n</mml:mi></mml:mrow></mml:msub><mml:mo>&#x2212;</mml:mo><mml:msub><mml:mi>y</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>)</mml:mo></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup><mml:mo>+</mml:mo><mml:msup><mml:mrow><mml:mo>(</mml:mo><mml:msub><mml:mi>z</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>n</mml:mi></mml:mrow></mml:msub><mml:mo>&#x2212;</mml:mo><mml:msub><mml:mi>z</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>)</mml:mo></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup></mml:msqrt></mml:math></disp-formula>where</p>
<p><italic>n</italic> is the number of points in the <italic>i<sup>th</sup></italic> slice.</p>
<p><inline-formula id="ieqn-1"><mml:math id="mml-ieqn-1"><mml:mo stretchy="false">(</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mi>y</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mi>z</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula> are the coordinates of the <italic>j<sup>th</sup></italic> point in the <italic>i<sup>th</sup></italic> slice.</p>
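<p>Eq. (1) is the length of the closed polyline through the slice's points; it can be sketched directly in Python (the function name is ours, chosen for illustration):</p>

```python
import math

def slice_perimeter(points):
    """Perimeter of one closed slice, Eq. (1): sum the Euclidean
    distances between consecutive points, plus the closing segment
    from the last point back to the first (index wrap-around)."""
    n = len(points)
    return sum(math.dist(points[j], points[(j + 1) % n]) for j in range(n))
```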
<p>The error of a slice is calculated using <xref ref-type="disp-formula" rid="eqn-2">Eq. (2)</xref>.
<disp-formula id="eqn-2"><label>(2)</label><mml:math id="mml-eqn-2" display="block"><mml:msub><mml:mrow><mml:mtext>e</mml:mtext></mml:mrow><mml:mi>s</mml:mi></mml:msub><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:msubsup><mml:mrow><mml:mo>&#x2211;</mml:mo></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:msubsup><mml:msup><mml:mrow><mml:mo>(</mml:mo><mml:msub><mml:mi>p</mml:mi><mml:mrow><mml:mi>g</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>&#x2212;</mml:mo><mml:msub><mml:mi>p</mml:mi><mml:mrow><mml:mi>p</mml:mi></mml:mrow></mml:msub><mml:mo>)</mml:mo></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup></mml:mrow><mml:mi>n</mml:mi></mml:mfrac></mml:math></disp-formula>where</p>
<p><inline-formula id="ieqn-2"><mml:math id="mml-ieqn-2"><mml:msub><mml:mi>p</mml:mi><mml:mrow><mml:mi>g</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> is the perimeter of the real data scanned by the 3-D scanner.</p>
<p><inline-formula id="ieqn-3"><mml:math id="mml-ieqn-3"><mml:msub><mml:mi>p</mml:mi><mml:mrow><mml:mi>p</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> is the perimeter of the 3-D model slice that we construct from the hand measurements.</p>
<p>The error between the 3-D model after training and the 3-D output model is calculated using <xref ref-type="disp-formula" rid="eqn-3">Eq. (3)</xref>.
<disp-formula id="eqn-3"><label>(3)</label><mml:math id="mml-eqn-3" display="block"><mml:mi>l</mml:mi><mml:mi>o</mml:mi><mml:mi>s</mml:mi><mml:mi>s</mml:mi><mml:mo>=</mml:mo><mml:mi>l</mml:mi><mml:mi>o</mml:mi><mml:mi>s</mml:mi><mml:msub><mml:mi>s</mml:mi><mml:mrow><mml:mrow><mml:mtext mathvariant="italic">chamfer</mml:mtext></mml:mrow></mml:mrow></mml:msub><mml:mo>&#x00D7;</mml:mo><mml:msub><mml:mi>w</mml:mi><mml:mrow><mml:mrow><mml:mtext mathvariant="italic">chamfer</mml:mtext></mml:mrow></mml:mrow></mml:msub><mml:mo>+</mml:mo><mml:mi>l</mml:mi><mml:mi>o</mml:mi><mml:mi>s</mml:mi><mml:msub><mml:mi>s</mml:mi><mml:mrow><mml:mi>e</mml:mi><mml:mi>d</mml:mi><mml:mi>g</mml:mi><mml:mi>e</mml:mi></mml:mrow></mml:msub><mml:mo>&#x00D7;</mml:mo><mml:msub><mml:mi>w</mml:mi><mml:mrow><mml:mi>e</mml:mi><mml:mi>d</mml:mi><mml:mi>g</mml:mi><mml:mi>e</mml:mi></mml:mrow></mml:msub><mml:mo>+</mml:mo><mml:mi>l</mml:mi><mml:mi>o</mml:mi><mml:mi>s</mml:mi><mml:msub><mml:mi>s</mml:mi><mml:mrow><mml:mrow><mml:mtext mathvariant="italic">normal</mml:mtext></mml:mrow></mml:mrow></mml:msub><mml:mo>&#x00D7;</mml:mo><mml:msub><mml:mi>w</mml:mi><mml:mrow><mml:mrow><mml:mtext mathvariant="italic">normal</mml:mtext></mml:mrow></mml:mrow></mml:msub><mml:mo>+</mml:mo><mml:mi>l</mml:mi><mml:mi>o</mml:mi><mml:mi>s</mml:mi><mml:msub><mml:mi>s</mml:mi><mml:mrow><mml:mrow><mml:mtext mathvariant="italic">laplacian</mml:mtext></mml:mrow></mml:mrow></mml:msub><mml:mo>&#x00D7;</mml:mo><mml:msub><mml:mi>w</mml:mi><mml:mrow><mml:mrow><mml:mtext mathvariant="italic">laplacian</mml:mtext></mml:mrow></mml:mrow></mml:msub></mml:math></disp-formula>where</p>
<p><inline-formula id="ieqn-4"><mml:math id="mml-ieqn-4"><mml:mi>l</mml:mi><mml:mi>o</mml:mi><mml:mi>s</mml:mi><mml:msub><mml:mi>s</mml:mi><mml:mrow><mml:mrow><mml:mtext mathvariant="italic">chamfer</mml:mtext></mml:mrow></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:mi>l</mml:mi><mml:mi>o</mml:mi><mml:mi>s</mml:mi><mml:msub><mml:mi>s</mml:mi><mml:mrow><mml:mi>e</mml:mi><mml:mi>d</mml:mi><mml:mi>g</mml:mi><mml:mi>e</mml:mi></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:mi>l</mml:mi><mml:mi>o</mml:mi><mml:mi>s</mml:mi><mml:msub><mml:mi>s</mml:mi><mml:mrow><mml:mrow><mml:mtext mathvariant="italic">normal</mml:mtext></mml:mrow></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:mi>l</mml:mi><mml:mi>o</mml:mi><mml:mi>s</mml:mi><mml:msub><mml:mi>s</mml:mi><mml:mrow><mml:mrow><mml:mtext mathvariant="italic">laplacian</mml:mtext></mml:mrow></mml:mrow></mml:msub></mml:math></inline-formula> are the Chamfer, edge, normal, and Laplacian losses, respectively.</p>
<p><inline-formula id="ieqn-5"><mml:math id="mml-ieqn-5"><mml:msub><mml:mi>w</mml:mi><mml:mrow><mml:mrow><mml:mtext mathvariant="italic">chamfer</mml:mtext></mml:mrow></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mi>w</mml:mi><mml:mrow><mml:mi>e</mml:mi><mml:mi>d</mml:mi><mml:mi>g</mml:mi><mml:mi>e</mml:mi></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mi>w</mml:mi><mml:mrow><mml:mrow><mml:mtext mathvariant="italic">normal</mml:mtext></mml:mrow></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mi>w</mml:mi><mml:mrow><mml:mrow><mml:mtext mathvariant="italic">laplacian</mml:mtext></mml:mrow></mml:mrow></mml:msub></mml:math></inline-formula> are the Chamfer, edge, normal, and Laplacian weights, respectively.</p>
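<p>The four terms of Eq. (3) match the mesh losses provided by the PyTorch3D library (<italic>chamfer_distance</italic>, <italic>mesh_edge_loss</italic>, <italic>mesh_normal_consistency</italic>, and <italic>mesh_laplacian_smoothing</italic>). As a minimal NumPy sketch (our own illustration, not the library code), the Chamfer term and the weighted combination can be written as:</p>

```python
import numpy as np

def chamfer(a, b):
    """Symmetric Chamfer distance between point sets a (n,3) and b (m,3):
    the mean nearest-neighbour squared distance in both directions."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)  # pairwise sq. dists
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

def total_loss(losses, weights):
    """Weighted sum of Eq. (3): chamfer, edge, normal, laplacian terms."""
    return sum(losses[k] * weights[k] for k in weights)
```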
</sec>
</sec>
<sec id="s3"><label>3</label><title>The Proposed Model for Creating a 3-D Representation of the Human Body</title>
<p>To build a 3-D human model, we deform a 3-D input model into a target model according to hand measurements. To turn the input model into the target (as the problem requires), in each iteration we move each point of the input model's point set so that it comes closer to the corresponding point of the output model. After the iterations finish, the distance between the points of the input model and the corresponding points of the output model is minimized.</p>
<p>We set up Algorithm 1 to build the mesh structure and Algorithm 2 to train the model. Based on Algorithms 1 and 2, we create a 3-D model of the human body with Algorithm 3. In this article, we use deep learning to transform the original model. We apply the proposed method to two input 3-D models: the 3-D sphere model (Section 3.1) and the 3-D standard human model (Section 3.2).
</p>
<fig id="fig-5"><graphic mimetype="image" mime-subtype="png" xlink:href="CMC_29618-fig-5.png"/></fig>
<fig id="fig-6"><graphic mimetype="image" mime-subtype="png" xlink:href="CMC_29618-fig-6.png"/></fig>
<fig id="fig-7"><graphic mimetype="image" mime-subtype="png" xlink:href="CMC_29618-fig-7.png"/></fig>
<sec id="s3_1"><label>3.1</label><title>Building the Human Model from Sphere Model</title>
<p>The input model is first selected as a sphere model made up of 12 fixed points (<xref ref-type="fig" rid="fig-1">Fig. 1</xref>), after which further points and connecting edges are added to complete the sphere model. We then deform the sphere's point cloud to fit the point cloud of the target model according to the hand measurements.</p>
<fig id="fig-1"><label>Figure 1</label><caption><title>The input model shape (1a) and point set (1b)</title></caption><graphic mimetype="image" mime-subtype="png" xlink:href="CMC_29618-fig-1.png"/></fig>
<p>After applying Algorithms 1 and 2, we obtain the shape of the 3-D human model shown in <xref ref-type="fig" rid="fig-2">Fig. 2</xref>.</p>
<fig id="fig-2"><label>Figure 2</label><caption><title>3-D human model after training from sphere model</title></caption><graphic mimetype="image" mime-subtype="png" xlink:href="CMC_29618-fig-2.png"/></fig>
<fig id="fig-8"><graphic mimetype="image" mime-subtype="png" xlink:href="CMC_29618-fig-8.png"/></fig>
<p>We can see that the arms and legs stick together, creating &#x2018;membranes&#x2019; between them. Removing these &#x2018;membranes&#x2019; is possible, but it takes a lot of time and gives a poor result, because the shape becomes distorted where they are removed.</p>
</sec>
<sec id="s3_2"><label>3.2</label><title>Building the Human Model from 3-D Human Model</title>
<p>Because the method of Section 3.1 produces a poor result, we change the input model: instead of a sphere, we choose a 3-D standard human model as input. We select this so-called standard model from a given set of 3-D human body models. We define the standard model with <xref ref-type="disp-formula" rid="eqn-4">Eq. (4)</xref>, as shown in Algorithm 4.
<disp-formula id="eqn-4"><label>(4)</label><mml:math id="mml-eqn-4" display="block"><mml:mrow><mml:mtext mathvariant="italic">model</mml:mtext></mml:mrow><mml:mo>=</mml:mo><mml:mrow><mml:mtext mathvariant="italic">Maxmodels</mml:mtext></mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mi>M</mml:mi><mml:mi>a</mml:mi><mml:mi>x</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:msub><mml:mi>h</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo stretchy="false">)</mml:mo><mml:mo>&#x2212;</mml:mo><mml:mi>M</mml:mi><mml:mi>i</mml:mi><mml:mi>n</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:msub><mml:mi>h</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo stretchy="false">)</mml:mo><mml:mo stretchy="false">)</mml:mo><mml:mspace width="thinmathspace" /></mml:math></disp-formula>
where <italic>h<sub>i</sub></italic> is the height of the <italic>i</italic><sup>th</sup> model.</p>
<p>The process is identical to changing from a sphere to a person, except that the input mesh is a 3-D human model instead of a sphere. <xref ref-type="fig" rid="fig-3">Fig. 3a</xref> shows the result of this test. However, because these models have a rough surface, we add the <italic>vn</italic> property to the model's .obj file to smooth the surface (Algorithm 5); the result is shown in <xref ref-type="fig" rid="fig-3">Fig. 3b</xref>.</p>
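<p>Computing the <italic>vn</italic> values for this smoothing step amounts to averaging face normals at each vertex; a NumPy sketch (our own illustration, with an assumed function name) is:</p>

```python
import numpy as np

def vertex_normals(vertices, faces):
    """Per-vertex normals for smoothing: accumulate each face's
    (area-weighted) normal onto its three vertices, then normalize.
    These are the values written out as 'vn' lines in an .obj file."""
    normals = np.zeros_like(vertices, dtype=float)
    v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
    face_n = np.cross(v1 - v0, v2 - v0)  # length proportional to face area
    for i in range(3):
        np.add.at(normals, faces[:, i], face_n)
    lengths = np.linalg.norm(normals, axis=1, keepdims=True)
    return normals / np.maximum(lengths, 1e-12)
```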
<fig id="fig-3"><label>Figure 3</label><caption><title>3-D human model after training from 3-D standard model (3a) and 3-D human model after adding <italic>vn</italic> property (3b)</title></caption><graphic mimetype="image" mime-subtype="png" xlink:href="CMC_29618-fig-3.png"/></fig>
<fig id="fig-9"><graphic mimetype="image" mime-subtype="png" xlink:href="CMC_29618-fig-9.png"/></fig>
</sec>
</sec>
<sec id="s4"><label>4</label><title>Experimental Results and Discussion</title>
<p>We trained the model in Python on the Google Colab platform with graphics processing unit assistance to accelerate training. We executed the trained model on a personal computer with an Intel Core i5 7th Gen processor and 8 GB of random access memory.</p>
<p>The initial data was gathered from two sources, emphasizing young and middle-aged persons of both sexes: over 1000 human models (378 male and more than 600 female) provided by the garment research group at Hanoi University of Technology [<xref ref-type="bibr" rid="ref-14">14</xref>], and more than 3000 foreign human models (more than 1500 male and 1500 female) gathered by Yang&#x00A0;et&#x00A0;al.&#x00A0;[<xref ref-type="bibr" rid="ref-15">15</xref>].</p>
<p>The errors obtained from the experiments on the equipment mentioned above, with 12500 and 5000 data points, are shown in <xref ref-type="table" rid="table-1">Tab. 1</xref>.</p>
<table-wrap id="table-1"><label>Table 1</label><caption><title>Error of 3-D human models</title></caption>
<table frame="hsides">
<colgroup>
<col align="left"/>
<col align="left"/>
<col align="left"/>
</colgroup>
<thead>
<tr>
<th align="left">Error</th>
<th align="left">12500 points</th>
<th align="left">5000 points</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left">Average</td>
<td align="left">0.00026</td>
<td align="left">0.000615</td>
</tr>
<tr>
<td align="left">Max</td>
<td align="left">0.000322</td>
<td align="left">0.000675</td>
</tr>
<tr>
<td align="left">Min</td>
<td align="left">0.000232</td>
<td align="left">0.000572</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>From <xref ref-type="table" rid="table-1">Tab. 1</xref>, we can see that the error depends on the number of data points: the more points, the lower the error.</p>

<p>After training the 3-D model with 10000 iterations, we found that the error decreased significantly in the first 1000 iterations and then declined slowly; from iteration 6000 onwards the difference between loops is minimal. Based on <xref ref-type="fig" rid="fig-4">Fig. 4</xref>, we trained the 3-D model with 7000 iterations to save training time. The errors of the slices of the 3-D models were calculated with the formula in [<xref ref-type="bibr" rid="ref-3">3</xref>]; the results are given in <xref ref-type="table" rid="table-2">Tabs. 2</xref> and <xref ref-type="table" rid="table-3">3</xref>.</p>
<fig id="fig-4"><label>Figure 4</label><caption><title>3-D model error on each loop</title></caption><graphic mimetype="image" mime-subtype="png" xlink:href="CMC_29618-fig-4.png"/></fig><table-wrap id="table-2"><label>Table 2</label><caption><title>Error of principal slices of 3-D human models (cm)</title></caption>
<table frame="hsides">
<colgroup>
<col align="left"/>
<col align="left"/>
<col align="left"/>
</colgroup>
<thead>
<tr>
<th align="left">Slices</th>
<th align="left">12500 points</th>
<th align="left">5000 points</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left">Neck</td>
<td align="left">0.4711</td>
<td align="left">1.50532</td>
</tr>
<tr>
<td align="left">Chest</td>
<td align="left">0.4966</td>
<td align="left">2.2675</td>
</tr>
<tr>
<td align="left">Stomach</td>
<td align="left">0.4689</td>
<td align="left">1.493659</td>
</tr>
<tr>
<td align="left">Buttocks</td>
<td align="left">0.5543</td>
<td align="left">1.213283</td>
</tr>
<tr>
<td align="left">Upper arm (left)</td>
<td align="left">0.7402</td>
<td align="left">1.45723</td>
</tr>
<tr>
<td align="left">Upper arm (right)</td>
<td align="left">0.8505</td>
<td align="left">1.086267</td>
</tr>
<tr>
<td align="left">Wrist (left)</td>
<td align="left">0.7366</td>
<td align="left">1.404546</td>
</tr>
<tr>
<td align="left">Wrist (right)</td>
<td align="left">0.7608</td>
<td align="left">1.16059</td>
</tr>
<tr>
<td align="left">Thigh (left)</td>
<td align="left">0.3234</td>
<td align="left">0.9478933</td>
</tr>
<tr>
<td align="left">Thigh (right)</td>
<td align="left">0.4167</td>
<td align="left">0.9461028</td>
</tr>
<tr>
<td align="left">Calf (left)</td>
<td align="left">0.556</td>
<td align="left">1.496387</td>
</tr>
<tr>
<td align="left">Calf (right)</td>
<td align="left">0.6215</td>
<td align="left">1.442509</td>
</tr>
<tr>
<td align="left">Ankle (left)</td>
<td align="left">0.3653</td>
<td align="left">1.486258</td>
</tr>
<tr>
<td align="left">Ankle (right)</td>
<td align="left">0.354</td>
<td align="left">1.167027</td>
</tr>
</tbody>
</table>
</table-wrap><table-wrap id="table-3"><label>Table 3</label><caption><title>Error of principal slices of 3-D male models and 3-D female models (cm)</title></caption>
<table frame="hsides">
<colgroup>
<col align="left"/>
<col align="left"/>
<col align="left"/>
</colgroup>
<thead>
<tr>
<th align="left">Slices</th>
<th align="left">Male</th>
<th align="left">Female</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left">Neck</td>
<td align="left">0.4711</td>
<td align="left">0.39228</td>
</tr>
<tr>
<td align="left">Chest</td>
<td align="left">0.4966</td>
<td align="left">1.89187</td>
</tr>
<tr>
<td align="left">Stomach</td>
<td align="left">0.4689</td>
<td align="left">0.46008</td>
</tr>
<tr>
<td align="left">Buttocks</td>
<td align="left">0.5543</td>
<td align="left">0.73364</td>
</tr>
<tr>
<td align="left">Upper arm (left)</td>
<td align="left">0.7402</td>
<td align="left">0.5525</td>
</tr>
<tr>
<td align="left">Upper arm (right)</td>
<td align="left">0.8505</td>
<td align="left">0.61004</td>
</tr>
<tr>
<td align="left">Wrist (left)</td>
<td align="left">0.7366</td>
<td align="left">0.71498</td>
</tr>
<tr>
<td align="left">Wrist (right)</td>
<td align="left">0.7608</td>
<td align="left">0.7865</td>
</tr>
<tr>
<td align="left">Thigh (left)</td>
<td align="left">0.3234</td>
<td align="left">0.53612</td>
</tr>
<tr>
<td align="left">Thigh (right)</td>
<td align="left">0.4167</td>
<td align="left">0.40858</td>
</tr>
<tr>
<td align="left">Calf (left)</td>
<td align="left">0.556</td>
<td align="left">0.43845</td>
</tr>
<tr>
<td align="left">Calf (right)</td>
<td align="left">0.6215</td>
<td align="left">0.4083</td>
</tr>
<tr>
<td align="left">Ankle (left)</td>
<td align="left">0.3653</td>
<td align="left">0.48125</td>
</tr>
<tr>
<td align="left">Ankle (right)</td>
<td align="left">0.354</td>
<td align="left">0.37997</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>Moreover, as shown in <xref ref-type="table" rid="table-3">Tab. 3</xref>, the 3-D male and 3-D female models also differ in error. Specifically, the chest error of the 3-D female models was higher than that of the 3-D male models because of differences in body structure.</p>

<p>The error decreases as the number of data points increases, implying that the error rate is inversely related to the number of points in the model. More data points yield a more comprehensive and precise surface, increasing accuracy and lowering error.</p>
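<p>As an illustration, the per-point errors between a generated point cloud and a ground-truth cloud can be summarized as average, maximum, and minimum distances. The sketch below assumes a one-to-one correspondence between points, which is a simplification; the function and variable names are illustrative and not the paper's implementation.</p>

```python
import math

def point_cloud_errors(predicted, ground_truth):
    """Average, max, and min Euclidean error between two matched 3-D point clouds.

    Assumes predicted[i] corresponds to ground_truth[i]; this one-to-one
    matching is an illustrative simplification, not the paper's exact metric.
    """
    if len(predicted) != len(ground_truth):
        raise ValueError("point clouds must have the same number of points")
    # Euclidean distance between each pair of corresponding 3-D points.
    dists = [math.dist(p, q) for p, q in zip(predicted, ground_truth)]
    return {
        "average": sum(dists) / len(dists),
        "max": max(dists),
        "min": min(dists),
    }

# Toy example: a generated cloud offset from ground truth by 0.001 cm along x.
ground_truth = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
predicted = [(x + 0.001, y, z) for (x, y, z) in ground_truth]
errors = point_cloud_errors(predicted, ground_truth)
```

With more points covering the surface, these summary statistics become more stable, which is consistent with the lower errors reported for the 125000-point models.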
<p>To verify the effectiveness of the proposed method, <xref ref-type="table" rid="table-4">Tab. 4</xref> compares its errors with those of other methods. Our method achieves lower errors than both the interpolation [<xref ref-type="bibr" rid="ref-3">3</xref>] and CNN [<xref ref-type="bibr" rid="ref-4">4</xref>] approaches.</p>
<table-wrap id="table-4"><label>Table 4</label><caption><title>Error comparison between the proposed method and other methods</title></caption>
<table frame="hsides">
<colgroup>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
</colgroup>
<thead>
<tr>
<th align="left" rowspan="3">Error</th>
<th align="left" rowspan="3">Interpolation [<xref ref-type="bibr" rid="ref-3">3</xref>]</th>
<th align="left" colspan="2">CNN [<xref ref-type="bibr" rid="ref-4">4</xref>]</th>
<th align="left" colspan="4">Proposed method</th>
</tr>
<tr>
<th align="left" rowspan="2">Male</th>
<th align="left" rowspan="2">Female</th>
<th align="left" colspan="2">Male</th>
<th align="left" colspan="2">Female</th>
</tr>
<tr>
<th align="left">5000 points</th>
<th align="left">125000 points</th>
<th align="left">5000 points</th>
<th align="left">125000 points</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left">Average</td>
<td align="left">11.02621</td>
<td align="left">0.09860</td>
<td align="left">0.13160</td>
<td align="left">0.00062</td>
<td align="left">0.00027</td>
<td align="left">0.00059</td>
<td align="left">0.00026</td>
</tr>
<tr>
<td align="left">Max</td>
<td align="left">125.55837</td>
<td align="left">&#x2014;</td>
<td align="left">&#x2014;</td>
<td align="left">0.00068</td>
<td align="left">0.00031</td>
<td align="left">0.00069</td>
<td align="left">0.00032</td>
</tr>
<tr>
<td align="left">Min</td>
<td align="left">1.38504</td>
<td align="left">&#x2014;</td>
<td align="left">&#x2014;</td>
<td align="left">0.00057</td>
<td align="left">0.00023</td>
<td align="left">0.00059</td>
<td align="left">0.00023</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>The proposed method reduces the number of computational steps because it trains on the surface (known as the skin) and the entire model simultaneously, without dividing the body into main slices and separate parts (arms, legs, torso, etc.) as the interpolation [<xref ref-type="bibr" rid="ref-3">3</xref>] and CNN [<xref ref-type="bibr" rid="ref-4">4</xref>] approaches do. The interpolation approach [<xref ref-type="bibr" rid="ref-3">3</xref>] is computed from the 14 main slices of the garment standard, and joining these slices introduces differential errors at the joined parts. The CNN approach [<xref ref-type="bibr" rid="ref-4">4</xref>] interpolates from circles to body slices. Although this approach is practical and reduces the number of computations per slice, it increases the number of iterations because each fixed distance is computed separately, and errors also arise when the rings are concatenated [<xref ref-type="bibr" rid="ref-4">4</xref>].</p>
</sec>
<sec id="s5"><label>5</label><title>Conclusion</title>
<p>In practical applications, particularly in clothing, it is essential to have a reasonably realistic human model in terms of shape and size without specifying every part in minute detail. In terms of both time and accuracy, building a model with 5000 data points is therefore a viable alternative.</p>
<p>We used machine learning, especially deep learning, to develop a method for creating a 3-D model of the human body. This technique executes faster and produces fewer errors than prior approaches. However, it has several drawbacks: the resulting model has a rough surface and requires an additional smoothing step, and it exhibits shape errors, especially at the wrist. These are drawbacks we could not address within the research period. In future work, we will develop automatic smoothing algorithms applied while training 3-D models and improve the 3-D modeling algorithms with more data points to reduce the shape error at the wrist.</p>
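<p>The smoothing step mentioned above could, for instance, take the form of classical Laplacian smoothing, in which each vertex is pulled toward the centroid of its neighbors. The sketch below is a minimal illustration of that idea, not the algorithm we plan to develop; the data structures and parameter names are assumptions.</p>

```python
def laplacian_smooth(vertices, neighbors, lam=0.5, iterations=1):
    """Laplacian smoothing of a 3-D vertex list.

    vertices  : list of (x, y, z) tuples
    neighbors : neighbors[i] is a list of vertex indices adjacent to vertex i
    lam       : blend factor in [0, 1]; 0 leaves vertices unchanged,
                1 moves each vertex all the way to its neighbors' centroid
    """
    verts = [list(v) for v in vertices]
    for _ in range(iterations):
        new = []
        for i, v in enumerate(verts):
            if not neighbors[i]:  # isolated vertex: keep as-is
                new.append(v[:])
                continue
            n = len(neighbors[i])
            # Centroid of the neighboring vertices (from the previous pass).
            cx = sum(verts[j][0] for j in neighbors[i]) / n
            cy = sum(verts[j][1] for j in neighbors[i]) / n
            cz = sum(verts[j][2] for j in neighbors[i]) / n
            # Move each coordinate a fraction lam toward the centroid.
            new.append([v[0] + lam * (cx - v[0]),
                        v[1] + lam * (cy - v[1]),
                        v[2] + lam * (cz - v[2])])
        verts = new
    return [tuple(v) for v in verts]

# Example: flatten a "spike" vertex toward the plane of its four neighbors.
spike = [(0.0, 0.0, 1.0), (1.0, 0.0, 0.0), (-1.0, 0.0, 0.0),
         (0.0, 1.0, 0.0), (0.0, -1.0, 0.0)]
adjacency = [[1, 2, 3, 4], [0], [0], [0], [0]]
smoothed = laplacian_smooth(spike, adjacency, lam=0.5, iterations=1)
```

A single pass with lam=0.5 halves the spike's height, illustrating how surface roughness could be reduced after model generation.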
</sec>
</body>
<back>
<fn-group>
<fn fn-type="other"><p><bold>Funding Statement:</bold> The authors received funding for this study from Sai Gon University (Grant No. CSA2021&#x2013;08).</p></fn>
<fn fn-type="conflict"><p><bold>Conflicts of Interest:</bold> The authors declare that they have no conflicts of interest to report regarding the present study.</p></fn>
</fn-group>
<ref-list content-type="authoryear">
<title>References</title>
<ref id="ref-1"><label>[1]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>N.</given-names> <surname>Shahrubudin</surname></string-name>, <string-name><given-names>T. C.</given-names> <surname>Lee</surname></string-name> and <string-name><given-names>R.</given-names> <surname>Ramlan</surname></string-name></person-group>, &#x201C;<article-title>An overview on 3D printing technology: Technological, materials, and applications</article-title>,&#x201D; <source>Procedia Manufacturing</source>, vol. <volume>35</volume>, pp. <fpage>1286</fpage>&#x2013;<lpage>1296</lpage>, <year>2019</year>.</mixed-citation></ref>
<ref id="ref-2"><label>[2]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>J. M.</given-names> <surname>Ai</surname></string-name> and <string-name><given-names>P.</given-names> <surname>Du</surname></string-name></person-group>, &#x201C;<article-title>Discussion on 3D print model and technology</article-title>,&#x201D; <source>Applied Mechanics and Materials</source>, vol. <volume>543&#x2013;547</volume>, pp. <fpage>130</fpage>&#x2013;<lpage>133</lpage>, <year>2014</year>.</mixed-citation></ref>
<ref id="ref-3"><label>[3]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>T. T. M.</given-names> <surname>Kieu</surname></string-name>, <string-name><given-names>N. T.</given-names> <surname>Mau</surname></string-name>, <string-name><given-names>L.</given-names> <surname>Van</surname></string-name> and <string-name><given-names>P. T.</given-names> <surname>Bao</surname></string-name></person-group>, &#x201C;<article-title>Combining 3D interpolation, regression, and body features to build 3D human data for garment: An application to building 3D Vietnamese female data model</article-title>,&#x201D; <source>International Journal of Advanced Computer Science and Applications (IJACSA)</source>, vol. <volume>11</volume>, no. <issue>1</issue>, pp. <fpage>1</fpage>&#x2013;<lpage>9</lpage>, <year>2020</year>.</mixed-citation></ref>
<ref id="ref-4"><label>[4]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>M. T.</given-names> <surname>Nguyen</surname></string-name>, <string-name><given-names>T. V.</given-names> <surname>Dang</surname></string-name>, <string-name><given-names>M. K. T.</given-names> <surname>Thi</surname></string-name> and <string-name><given-names>P. T.</given-names> <surname>Bao</surname></string-name></person-group>, &#x201C;<article-title>Generating point cloud from measurements and shapes based on convolutional neural network: An application for building 3D human model</article-title>,&#x201D; <source>Computational Intelligence and Neuroscience</source>, vol. <volume>2019</volume>, pp. <fpage>1</fpage>&#x2013;<lpage>15</lpage>, <year>2019</year>.</mixed-citation></ref>
<ref id="ref-5"><label>[5]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>Y.</given-names> <surname>Zeng</surname></string-name>, <string-name><given-names>J.</given-names> <surname>Fu</surname></string-name> and <string-name><given-names>H.</given-names> <surname>Chao</surname></string-name></person-group>, &#x201C;<article-title>3D human body reshaping with anthropometric modeling</article-title>,&#x201D; in <conf-name>Int. Conf. on Internet Multimedia Computing and Service</conf-name>, <conf-loc>Springer, Singapore</conf-loc>, pp. <fpage>96</fpage>&#x2013;<lpage>107</lpage>, <year>2017</year>.</mixed-citation></ref>
<ref id="ref-6"><label>[6]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>Y.</given-names> <surname>Bengio</surname></string-name>, <string-name><given-names>A.</given-names> <surname>Courville</surname></string-name> and <string-name><given-names>P.</given-names> <surname>Vincent</surname></string-name></person-group>, &#x201C;<article-title>Representation learning: A review and new perspectives</article-title>,&#x201D; <source>Institute of Electrical and Electronics Engineers Transactions on Pattern Analysis and Machine Intelligence</source>, vol. <volume>35</volume>, no. <issue>8</issue>, pp. <fpage>1798</fpage>&#x2013;<lpage>1828</lpage>, <year>2013</year>.</mixed-citation></ref>
<ref id="ref-7"><label>[7]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>R.</given-names> <surname>Scopigno</surname></string-name>, <string-name><given-names>P.</given-names> <surname>Cignoni</surname></string-name>, <string-name><given-names>N.</given-names> <surname>Pietroni</surname></string-name>, <string-name><given-names>M.</given-names> <surname>Callieri</surname></string-name> and <string-name><given-names>M.</given-names> <surname>Dellepiane</surname></string-name></person-group>, &#x201C;<article-title>Digital fabrication techniques for cultural heritage: A survey</article-title>,&#x201D; <source>Computer Graphics Forum</source>, vol. <volume>36</volume>, no. <issue>1</issue>, pp. <fpage>6</fpage>&#x2013;<lpage>21</lpage>, <year>2017</year>.</mixed-citation></ref>
<ref id="ref-8"><label>[8]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>L.</given-names> <surname>Deng</surname></string-name> and <string-name><given-names>D.</given-names> <surname>Yu</surname></string-name></person-group>, &#x201C;<article-title>Deep learning: Methods and applications</article-title>,&#x201D; <source>Foundations and Trends in Signal Processing</source>, vol. <volume>7</volume>, no. <issue>3&#x2013;4</issue>, pp. <fpage>197</fpage>&#x2013;<lpage>387</lpage>, <year>2014</year>.</mixed-citation></ref>
<ref id="ref-9"><label>[9]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>J.</given-names> <surname>Jeong</surname></string-name>, <string-name><given-names>H.</given-names> <surname>Park</surname></string-name>, <string-name><given-names>Y.</given-names> <surname>Lee</surname></string-name>, <string-name><given-names>J.</given-names> <surname>Kang</surname></string-name> and <string-name><given-names>J.</given-names> <surname>Chun</surname></string-name></person-group>, &#x201C;<article-title>Developing parametric design fashion products using 3D printing technology</article-title>,&#x201D; <source>Fashion and Textiles</source>, vol. <volume>8</volume>, no. <issue>1</issue>, pp. <fpage>1</fpage>&#x2013;<lpage>25</lpage>, <year>2021</year>.</mixed-citation></ref>
<ref id="ref-10"><label>[10]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>B.</given-names> <surname>Yoshua</surname></string-name></person-group>, &#x201C;<article-title>Learning deep architectures for AI</article-title>,&#x201D; <source>Foundations and Trends Machine Learning</source>, vol. <volume>2</volume>, no. <issue>1</issue>, pp. <fpage>1</fpage>&#x2013;<lpage>127</lpage>, <year>2009</year>.</mixed-citation></ref>
<ref id="ref-11"><label>[11]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>Y.</given-names> <surname>LeCun</surname></string-name>, <string-name><given-names>Y.</given-names> <surname>Bengio</surname></string-name> and <string-name><given-names>G.</given-names> <surname>Hinton</surname></string-name></person-group>, &#x201C;<article-title>Deep learning</article-title>,&#x201D; <source>Nature</source>, vol. <volume>521</volume>, no. <issue>7553</issue>, pp. <fpage>436</fpage>&#x2013;<lpage>444</lpage>, <year>2015</year>.</mixed-citation></ref>
<ref id="ref-12"><label>[12]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>J.</given-names> <surname>Schmidhuber</surname></string-name></person-group>, &#x201C;<article-title>Deep learning in neural networks: An overview</article-title>,&#x201D; <source>Neural Networks</source>, vol. <volume>61</volume>, pp. <fpage>85</fpage>&#x2013;<lpage>117</lpage>, <year>2015</year>.</mixed-citation></ref>
<ref id="ref-13"><label>[13]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>Y.</given-names> <surname>Guo</surname></string-name>, <string-name><given-names>H.</given-names> <surname>Wang</surname></string-name>, <string-name><given-names>Q.</given-names> <surname>Hu</surname></string-name>, <string-name><given-names>H.</given-names> <surname>Liu</surname></string-name>, <string-name><given-names>L.</given-names> <surname>Liu</surname></string-name> <etal>et al.,</etal></person-group> &#x201C;<article-title>Deep learning for 3D point clouds: A survey</article-title>,&#x201D; <source>Institute of Electrical and Electronics Engineers Transactions on Pattern Analysis and Machine Intelligence</source>, vol. <volume>43</volume>, no. <issue>12</issue>, pp. <fpage>4338</fpage>&#x2013;<lpage>4364</lpage>, <year>2021</year>.</mixed-citation></ref>
<ref id="ref-14"><label>[14]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>T. T. M.</given-names> <surname>Kieu</surname></string-name> and <string-name><given-names>S.</given-names> <surname>Park</surname></string-name></person-group>, &#x201C;<article-title>Development &#x2018;Aodai&#x2019; pattern for Vietnamese women using 3D scan data</article-title>,&#x201D; in <conf-name>Korean Society for Clothing Industry (KSCI) Int. Conf.</conf-name>, <conf-loc>Taiwan</conf-loc>, pp. <fpage>390</fpage>&#x2013;<lpage>391</lpage>, <year>2011</year>.</mixed-citation></ref>
<ref id="ref-15"><label>[15]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>Y.</given-names> <surname>Yang</surname></string-name>, <string-name><given-names>Y.</given-names> <surname>Yu</surname></string-name>, <string-name><given-names>Y.</given-names> <surname>Zhou</surname></string-name>, <string-name><given-names>S.</given-names> <surname>Du</surname></string-name>, <string-name><given-names>J.</given-names> <surname>Davis</surname></string-name> <etal>et al.,</etal></person-group> &#x201C;<article-title>Semantic parametric reshaping of human body models</article-title>,&#x201D; in <conf-name>2014 2nd Int. Conf. on 3D Vision</conf-name>, <conf-loc>Tokyo, Japan</conf-loc>, vol. <volume>2</volume>, pp. <fpage>41</fpage>&#x2013;<lpage>48</lpage>, <year>2014</year>.</mixed-citation></ref>
</ref-list>
</back>
</article>