<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.1 20151215//EN" "http://jats.nlm.nih.gov/publishing/1.1/JATS-journalpublishing1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:mml="http://www.w3.org/1998/Math/MathML" xml:lang="en" article-type="research-article" dtd-version="1.1">
<front>
<journal-meta>
<journal-id journal-id-type="pmc">CMC</journal-id>
<journal-id journal-id-type="nlm-ta">CMC</journal-id>
<journal-id journal-id-type="publisher-id">CMC</journal-id>
<journal-title-group>
<journal-title>Computers, Materials &#x0026; Continua</journal-title>
</journal-title-group>
<issn pub-type="epub">1546-2226</issn>
<issn pub-type="ppub">1546-2218</issn>
<publisher>
<publisher-name>Tech Science Press</publisher-name>
<publisher-loc>USA</publisher-loc>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="publisher-id">46872</article-id>
<article-id pub-id-type="doi">10.32604/cmc.2023.046872</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Article</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>An Assisted Diagnosis of Alzheimer&#x2019;s Disease Incorporating Attention Mechanisms Med-3D Transfer Modeling</article-title>
<alt-title alt-title-type="left-running-head">An Assisted Diagnosis of Alzheimer&#x2019;s Disease Incorporating Attention Mechanisms Med-3D Transfer Modeling</alt-title>
<alt-title alt-title-type="right-running-head">An Assisted Diagnosis of Alzheimer&#x2019;s Disease Incorporating Attention Mechanisms Med-3D Transfer Modeling</alt-title>
</title-group>
<contrib-group>
<contrib id="author-1" contrib-type="author" corresp="yes">
<name name-style="western"><surname>Li</surname><given-names>Yanmei</given-names></name><xref ref-type="aff" rid="aff-1">1</xref><email>liym@cqut.edu.cn</email></contrib>
<contrib id="author-2" contrib-type="author">
<name name-style="western"><surname>Tang</surname><given-names>Jinghong</given-names></name><xref ref-type="aff" rid="aff-1">1</xref></contrib>
<contrib id="author-3" contrib-type="author">
<name name-style="western"><surname>Ding</surname><given-names>Weiwu</given-names></name><xref ref-type="aff" rid="aff-1">1</xref></contrib>
<contrib id="author-4" contrib-type="author">
<name name-style="western"><surname>Luo</surname><given-names>Jian</given-names></name><xref ref-type="aff" rid="aff-2">2</xref></contrib>
<contrib id="author-5" contrib-type="author">
<name name-style="western"><surname>Ahmad</surname><given-names>Naveed</given-names></name><xref ref-type="aff" rid="aff-3">3</xref></contrib>
<contrib id="author-6" contrib-type="author">
<name name-style="western"><surname>Kumar</surname><given-names>Rajesh</given-names></name><xref ref-type="aff" rid="aff-4">4</xref></contrib>
<aff id="aff-1"><label>1</label><institution>School of Artificial Intelligence, Chongqing University of Technology</institution>, <addr-line>Chongqing, 400054</addr-line>, <country>China</country></aff>
<aff id="aff-2"><label>2</label><institution>Computer School, China West Normal University</institution>, <addr-line>Nanchong, 637009</addr-line>, <country>China</country></aff>
<aff id="aff-3"><label>3</label><institution>Department of Computer Science, Prince Sultan University</institution>, <addr-line>Riyadh, 11586</addr-line>, <country>Saudi Arabia</country></aff>
<aff id="aff-4"><label>4</label><institution>Yangtze Delta Region Institute (Huzhou), University of Electronic Science and Technology of China</institution>, <addr-line>Huzhou, 313001</addr-line>, <country>China</country></aff>
</contrib-group>
<author-notes>
<corresp id="cor1"><label>&#x002A;</label>Corresponding Author: Yanmei Li. Email: <email>liym@cqut.edu.cn</email></corresp>
</author-notes>
<pub-date date-type="collection" publication-format="electronic">
<year>2024</year></pub-date>
<pub-date date-type="pub" publication-format="electronic"><day>30</day>
<month>1</month>
<year>2024</year></pub-date>
<volume>78</volume>
<issue>1</issue>
<fpage>713</fpage>
<lpage>733</lpage>
<history>
<date date-type="received"><day>17</day><month>10</month><year>2023</year></date>
<date date-type="accepted"><day>21</day><month>11</month><year>2023</year></date>
</history>
<permissions>
<copyright-statement>&#x00A9; 2024 Li et al.</copyright-statement>
<copyright-year>2024</copyright-year>
<copyright-holder>Li et al.</copyright-holder>
<license xlink:href="https://creativecommons.org/licenses/by/4.0/">
<license-p>This work is licensed under a <ext-link ext-link-type="uri" xlink:type="simple" xlink:href="https://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</ext-link>, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.</license-p>
</license>
</permissions>
<self-uri content-type="pdf" xlink:href="TSP_CMC_46872.pdf"></self-uri>
<abstract>
<p>Alzheimer&#x2019;s disease (AD) is a complex, progressive neurodegenerative disorder. The subtle and insidious onset of its pathogenesis makes early detection a formidable challenge in both contemporary neuroscience and clinical practice. In this study, we introduce an advanced diagnostic methodology rooted in the Med-3D transfer model and enhanced with an attention mechanism. We aim to improve the precision of AD diagnosis and facilitate its early identification. Initially, we employ a spatial normalization technique to address challenges like clarity degradation and unsaturation, which are commonly observed in imaging datasets. Subsequently, an attention mechanism is incorporated to selectively focus on the salient features within the imaging data. Building upon this foundation, we present the novel Med-3D transfer model, designed to further elucidate and amplify the intricate features associated with AD pathogenesis. Our proposed model has demonstrated promising results, achieving a classification accuracy of 92&#x0025;. To emphasize the robustness and practicality of our approach, we introduce an adaptive &#x2018;hot-updating&#x2019; auxiliary diagnostic system. This system not only enables continuous model training and optimization but also provides a dynamic platform to meet the real-time diagnostic and therapeutic demands of AD.</p>
</abstract>
<kwd-group kwd-group-type="author">
<kwd>Alzheimer&#x2019;s disease</kwd>
<kwd>channel attention</kwd>
<kwd>Med-3D</kwd>
<kwd>hot update</kwd>
</kwd-group>
<funding-group>
<award-group id="awg1">
<funding-source>National Natural Science Foundation of China</funding-source>
<award-id>62076044</award-id>
</award-group>
<award-group id="awg2">
<funding-source>Scientific Research Foundation of Chongqing University of Technology</funding-source>
<award-id>2020ZDZ015</award-id>
</award-group>
</funding-group>
</article-meta>
</front>
<body>
<sec id="s1"><label>1</label><title>Introduction</title>
<p>Alzheimer&#x2019;s disease is a progressive neurodegenerative disease that results in impaired memory and cognitive function [<xref ref-type="bibr" rid="ref-1">1</xref>]. In its early stages, Alzheimer&#x2019;s disease is challenging for physicians to diagnose, often becoming apparent only in the middle stages when symptoms are evident. Moreover, a definitive diagnosis is occasionally confirmed only post-mortem. Adding to the challenge, early AD is hard to differentiate from Mild Cognitive Impairment (MCI), with an annual conversion rate from MCI to AD of approximately 10&#x0025;&#x2013;12&#x0025; [<xref ref-type="bibr" rid="ref-2">2</xref>]. Statistics reveal that over 55 million people worldwide suffer from Alzheimer&#x2019;s disease. China alone has over 10 million AD patients, the highest number globally. Alarmingly, this figure is anticipated to grow over time, with global AD cases expected to reach 78 million by 2030. Given the disease&#x2019;s unique characteristics and the absence of effective treatments, early detection of AD is crucial. Consequently, there is an urgent need for research into early diagnostic techniques and methods for AD.</p>
<p>Age is a major risk factor for the onset of Alzheimer&#x2019;s disease, and the diagnosis of AD is usually based on history, clinical presentation, and observation of daily behavior. As the world&#x2019;s population ages, many middle-income countries are struggling to meet the actual needs of their healthcare services, and owing to the lack of data and the bias of physicians&#x2019; diagnostic experience, it is difficult for patients to receive objective and repeatable judgments. Making full use of limited resources to improve the accuracy of AD diagnosis is therefore a serious challenge.</p>
<p>In recent years, the development and application of artificial intelligence have been robust, with its implementation in the medical field being notably prominent [<xref ref-type="bibr" rid="ref-3">3</xref>]. Deep learning can autonomously identify features from images that might be imperceptible or challenging for human observation. With the advancement of Convolutional Neural Networks [<xref ref-type="bibr" rid="ref-4">4</xref>] (CNNs), an increasing number of professionals are leaning towards deep models for network development, bringing the accurate diagnosis of AD within reach.</p>
<p>Transfer learning [<xref ref-type="bibr" rid="ref-5">5</xref>] is a prevalent technique in deep learning that leverages pre-trained models, refining them further to better suit new tasks or domains. Typically, only a limited amount of training data for the target task is needed to enhance performance. Given these benefits, transfer learning is frequently employed for diverse classification challenges, consistently yielding commendable outcomes. In medical image diagnosis, datasets can be challenging to amass; in such scenarios, the concept of transfer learning becomes pivotal, allowing for precise diagnostics even when models are trained on limited samples.</p>
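<p>The fine-tuning idea described above can be made concrete with a short, illustrative sketch (not the networks used in this paper): a pretrained feature extractor is kept frozen, and only a small task-specific head is trained on the limited target data. All weights and names below are toy assumptions.</p>

```python
import numpy as np

rng = np.random.default_rng(0)

# "Pretrained" feature extractor: weights assumed to come from a large
# source task; they are kept frozen during transfer (random here for brevity).
W_backbone = rng.normal(size=(16, 8))

def extract_features(x):
    # Frozen backbone: one linear layer + ReLU, purely for illustration.
    return np.maximum(x @ W_backbone, 0.0)

# New task head, trained from scratch on the (small) target dataset.
W_head = np.zeros((8, 2))

def train_head(X, y, lr=0.5, epochs=500):
    global W_head
    feats = extract_features(X)          # backbone weights are never updated
    onehot = np.eye(2)[y]
    for _ in range(epochs):
        logits = feats @ W_head
        p = np.exp(logits - logits.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        grad = feats.T @ (p - onehot) / len(y)
        W_head -= lr * grad              # only the head's weights move

# Toy target data: labels that are linearly separable in feature space.
X = rng.normal(size=(64, 16))
y = (extract_features(X) @ rng.normal(size=8) > 0).astype(int)
train_head(X, y)
pred = (extract_features(X) @ W_head).argmax(axis=1)
print("training accuracy:", (pred == y).mean())
```

<p>Only the 16 parameters of the head are optimized here; the same pattern applies when the frozen backbone is a deep pretrained network.</p>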
<p>Magnetic Resonance Imaging (MRI) [<xref ref-type="bibr" rid="ref-6">6</xref>] offers a potent technique for visualizing the anatomical and functional neural changes linked to AD. However, due to the subdued intensity of brain cells [<xref ref-type="bibr" rid="ref-7">7</xref>,<xref ref-type="bibr" rid="ref-8">8</xref>], MRI scans often struggle to discern the subtle signal variations in the brain. Thus, this work aims to analyze these faint signal changes in brain MRI scans related to Alzheimer&#x2019;s disease, endeavoring to detect these shifts as early in the progression of AD as possible. The main contributions are as follows:</p>
<p>(1) During MRI scanning, it is common for the subject&#x2019;s head to bob, which may reduce image clarity, cause unsaturation, or even render the image unusable. To reduce such errors, a spatial normalization method is employed for image preprocessing.</p>
<p>(2) Med-3D is mainly used as a network for segmentation and classification tasks and covers a rich set of scan regions, target organs, and pathological features. Based on this, an improved Med-3D network transfer method is proposed for the Alzheimer&#x2019;s disease classification task.</p>
<p>(3) To verify the validity of the model, a hot-updating Alzheimer&#x2019;s disease auxiliary diagnosis system is proposed, which facilitates model training and optimization and better serves the diagnosis and treatment of Alzheimer&#x2019;s disease.</p>
<p>In summary, the structure of this paper is as follows: <xref ref-type="sec" rid="s1">Section 1</xref> presents the background of Alzheimer&#x2019;s disease research and the purpose of the study; <xref ref-type="sec" rid="s2">Section 2</xref> reviews the current status of deep learning techniques in Alzheimer&#x2019;s disease diagnosis; <xref ref-type="sec" rid="s3">Section 3</xref> delves into our proposed improved Med-3D network transfer method for the Alzheimer&#x2019;s disease classification task; <xref ref-type="sec" rid="s4">Section 4</xref> introduces the experimental dataset, analyzes the findings, and presents the practical application of our proposed model; <xref ref-type="sec" rid="s5">Section 5</xref> concludes the paper.</p>
</sec>
<sec id="s2"><label>2</label><title>Related Work</title>
<p>Deep learning is a technique that employs multi-layer artificial neural networks capable of learning and extracting features from complex data. This ability enables efficient classification and recognition due to the deeper layers of the networks. In recent years, with the rise in computational power and the decrease in storage costs, building deep neural network models with vast parameters has become feasible. The convolutional neural network is a standard model in deep learning, demonstrating outstanding performance in various recognition and classification tasks. The fundamental structure of a CNN comprises a convolutional layer, a pooling layer, a fully connected layer, a ReLU activation function layer, and an output layer. Collectively, these components constitute the distinctive structure of a CNN, making it particularly effective for processing image data. With the ongoing development and refinement of CNNs, deep learning theory has evolved. Deep learning has extensive applications, encompassing target detection, image segmentation, and image super-resolution reconstruction, among others. Moreover, in the medical arena, deep learning has yielded notable results in classifying diseases, such as Alzheimer&#x2019;s disease.</p>
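<p>The basic CNN pipeline described above can be sketched in a few lines: a convolution extracts local features, ReLU introduces nonlinearity, max pooling downsamples, and a fully connected layer produces class scores. All shapes and weights below are toy illustrations, not the models discussed in this survey.</p>

```python
import numpy as np

def conv2d(img, kernel):
    # Valid convolution: slide the kernel over the image (no padding).
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i+kh, j:j+kw] * kernel)
    return out

def relu(x):
    # ReLU activation: zero out negative responses.
    return np.maximum(x, 0.0)

def maxpool2d(x, k=2):
    # Non-overlapping k x k max pooling.
    h, w = x.shape[0] // k, x.shape[1] // k
    return x[:h*k, :w*k].reshape(h, k, w, k).max(axis=(1, 3))

rng = np.random.default_rng(0)
img = rng.normal(size=(8, 8))
feat = maxpool2d(relu(conv2d(img, rng.normal(size=(3, 3)))))  # conv -> ReLU -> pool
logits = feat.ravel() @ rng.normal(size=(feat.size, 2))       # fully connected layer
print(feat.shape, logits.shape)
```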
<p>Convolutional networks have made many advances in AD diagnosis in recent years. Basaia et al. [<xref ref-type="bibr" rid="ref-9">9</xref>] applied CNN to the ADNI database and tested the classification of AD, cMCI, and sMCI. The model&#x2019;s experimental results were promising, proving that CNN is effective for the classification of AD. Islam et al. [<xref ref-type="bibr" rid="ref-10">10</xref>] proposed a deep CNN for pairwise classification of AD stages, using the OASIS dataset for numerous experiments. The results indicated that the classification accuracy between non-dementia and mild dementia was superior to other models. Lu et al. [<xref ref-type="bibr" rid="ref-11">11</xref>] utilized fluorodeoxyglucose positron emission tomography (FDG-PET) to capture brain metabolic activities and designed a multi-scale deep neural network based on FDG-PET to recognize patients with AD at the MCI stage. The results showed that the model, using only FDG-PET metabolic data, achieved an accuracy of 82.51&#x0025;, outperforming other FDG-PET models of the same period. In literature [<xref ref-type="bibr" rid="ref-12">12</xref>], an efficient 3D convolutional neural network (3D ConvNet) was introduced. The 3D ConvNet, comprising 8 layers, utilized the first 5 layers for feature extraction, while the last 3 layers were fully connected layers for AD/NC classification, enabling swift AD classification on larger datasets. Li et al. [<xref ref-type="bibr" rid="ref-13">13</xref>] proposed multiclustered dense convolutional neural networks for AD classification. Initially, the brain image was segmented into different regions. Subsequently, a K-means clustering method grouped the patches of different regions; the learned features of each region were assembled for classification, and the classification results from different regions were integrated to enhance the accuracy of the outcome. The results suggested promising development prospects for this method. Wang et al. 
[<xref ref-type="bibr" rid="ref-14">14</xref>] prioritized early AD diagnosis and employed a recurrent neural network to construct the AD early detection model (AD-EDM). The results demonstrated that the AD-EDM&#x2019;s accuracy was markedly superior to traditional models. Jain et al. [<xref ref-type="bibr" rid="ref-15">15</xref>] adopted a transfer learning approach to expedite CNN network training, leveraging VGG-16 trained on the ImageNet dataset as a feature extractor. The 3-way classification accuracy achieved was 95.73&#x0025;. Kazemi et al. [<xref ref-type="bibr" rid="ref-16">16</xref>] undertook thorough preprocessing of the fMRI data before deploying the classic AlexNet model in the CNN architecture. Processes included tissue removal, spatial smoothing, high-pass filtering, and spatial normalization, enhancing the AlexNet model&#x2019;s accuracy for AD detection. Recognizing the importance of identifying MCI patients at high risk of progressing to AD, Lin et al. [<xref ref-type="bibr" rid="ref-17">17</xref>] introduced a CNN-based method to forecast those at high conversion risk from MCI to AD. The MRI data underwent corrections and brain structural feature extraction using FreeSurfer to boost AD classification accuracy. Results indicated CNN&#x2019;s potential in predicting MCI conversion.</p>
<p>Oh et al. [<xref ref-type="bibr" rid="ref-18">18</xref>] developed an end-to-end CNN model and employed a gradient-based visualization technique to enhance diagnostic accuracy. The findings underscored this method&#x2019;s superiority and identified the temporal and parietal lobes as vital diagnostic biomarkers for AD. In literature [<xref ref-type="bibr" rid="ref-19">19</xref>], a DL algorithm was formulated to predict AD&#x2019;s final diagnosis. A CNN with InceptionV3 architecture was trained on 90&#x0025; of the ADNI dataset, using the remaining 10&#x0025; as a test set. The results highlighted the algorithm&#x2019;s 100&#x0025; sensitivity and 82&#x0025; specificity. Literature [<xref ref-type="bibr" rid="ref-20">20</xref>] offered a DL model to assist in AD diagnosis, employing two independent CNN models to cultivate a multimodal deep learning network. The multimodal model&#x2019;s diagnostic outcomes were integrated with clinical neuropsychological findings to minimize early AD misdiagnosis. Trials on the ADNI database confirmed the method&#x2019;s efficacy in AD diagnosis. Finally, Lee et al. [<xref ref-type="bibr" rid="ref-21">21</xref>] crafted a multimodal recurrent neural network to predict MCI-to-AD conversion, amalgamating various biomarkers. The integrated architecture&#x2019;s results suggested its potential utility in clinical trials. Li et al. [<xref ref-type="bibr" rid="ref-22">22</xref>] introduced Multihead Central Self Attention (MCSA) to capture highly localized relations, and the introduction of sliding windows helps to capture spatial structure. Meanwhile, the introduced Simple Visual Transformer (SimViT) extracts multi-scale hierarchical features from different layers for dense prediction tasks. Experiments demonstrate that the proposed generalized backbone network exhibits good performance. Zhang et al. 
[<xref ref-type="bibr" rid="ref-23">23</xref>] proposed a densely connected convolutional neural network with an attention mechanism for learning multilevel features of MR images of the brain and extended its convolutional operation to 3D. Experimental results showed that the algorithmic performance of this method ranked among the best and improved the discrimination of MCI subjects who are highly likely to progress to AD. Yan et al. [<xref ref-type="bibr" rid="ref-24">24</xref>] explored the effects of different image-filtering methods and pyramid-squeezing attention mechanisms on image classification of Alzheimer&#x2019;s disease, and the experimental results showed that different image-filtering methods and attention mechanisms provide effective help in the diagnosis and classification of Alzheimer&#x2019;s disease. Li et al. [<xref ref-type="bibr" rid="ref-25">25</xref>] conducted a comprehensive literature survey and analysis on the application of deep learning attention methods in medical image analysis. The remaining challenges, potential solutions, and future research directions were discussed. Illakiya et al. [<xref ref-type="bibr" rid="ref-26">26</xref>] proposed an adaptive hybrid attention network with two attention modules, which extracts spatial and contextual information on a global scale while capturing important long-range dependencies. Experiments show that the proposed network exhibits good performance. Mohi ud din dar et al. [<xref ref-type="bibr" rid="ref-27">27</xref>] used transfer learning as a determinant to gain the advantage of pre-trained health data classification models. The study developed a novel framework for recognizing different AD stages. The main advantage of this new approach is the creation of lightweight neural networks. Liu et al. [<xref ref-type="bibr" rid="ref-28">28</xref>] proposed the multimodal hybrid transformer, a disease classification transformer. 
The model uses a novel cascading modal converter architecture to integrate multimodal information through cross-focusing to make more informed predictions. A novel modal culling mechanism is also proposed to ensure modal independence and robustness in handling missing data. Experiments demonstrate that the proposed multifunctional network exhibits excellent performance. de Mendon&#x00E7;a et al. [<xref ref-type="bibr" rid="ref-29">29</xref>] used a graph kernel constructed from texture features extracted from sMR images to classify MR images for AD diagnosis. By using a graph kernel to represent the texture features of different regions of the brain image, this method facilitates SVM-based Alzheimer&#x2019;s disease image classification and allows different texture attributes to be used for diagnosis.</p>
<p>Based on the problems analyzed above, this paper proposes an improved Med-3D network transfer method for the Alzheimer&#x2019;s disease classification task, and the resulting classification model achieves high accuracy. The constructed hot-update SDK-based Alzheimer&#x2019;s disease auxiliary diagnosis system allows users to seamlessly use the latest AD diagnostic model, achieving the best available accuracy, and spares hospitals the disruption of upgrading versions or replacing models.</p>
</sec>
<sec id="s3"><label>3</label><title>Improved Classification Tasks for Alzheimer&#x2019;s Disease Med-3D Network Transfer Approach</title>
<p>Based on the problems analyzed above, this paper proposes an improved Med-3D network transfer method for the Alzheimer&#x2019;s disease classification task, and the classification model achieves high accuracy. The constructed Alzheimer&#x2019;s disease diagnosis system, based on a hot-updating [<xref ref-type="bibr" rid="ref-30">30</xref>] SDK, allows users to seamlessly use the latest AD diagnosis model and achieve the highest available accuracy, and the system spares hospitals the trouble of upgrading versions or replacing models. The block diagram of the proposed algorithm is shown in <xref ref-type="fig" rid="fig-1">Fig. 1</xref>.</p>
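<p>The hot-updating mechanism can be pictured as a small registry that atomically swaps in a newer model, so that every caller transparently uses the latest version without restarting the system. The class below is an illustrative sketch, not the actual SDK:</p>

```python
import threading

class HotModelRegistry:
    """Holds the current diagnostic model and swaps it in place,
    so callers never need to restart or redeploy (illustrative sketch)."""

    def __init__(self, model, version):
        self._lock = threading.Lock()
        self._model = model
        self._version = version

    def predict(self, x):
        with self._lock:      # callers transparently use the newest model
            return self._model(x)

    def hot_update(self, new_model, new_version):
        with self._lock:      # atomic swap: no in-flight call sees a half-update
            if new_version > self._version:
                self._model, self._version = new_model, new_version

    @property
    def version(self):
        return self._version

# Usage: version 1 is replaced by version 2 without any change at the call site.
registry = HotModelRegistry(lambda x: "v1:" + x, version=1)
print(registry.predict("scan"))                    # -> v1:scan
registry.hot_update(lambda x: "v2:" + x, new_version=2)
print(registry.predict("scan"))                    # -> v2:scan
```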
<fig id="fig-1"><label>Figure 1</label><caption><title>The flowchart of the proposed algorithm</title></caption><graphic mimetype="image" mime-subtype="tif" xlink:href="CMC_46872-fig-1.tif"/></fig>
<sec id="s3_1"><label>3.1</label><title>3D MRI Image Sharpness Processing</title>
<p>Subject movement, such as head bobbing, is common during MRI scanning and can lead to reduced sharpness or unsaturation of the image, or even produce unusable images. To address these issues, we implemented a specific image preprocessing scheme.</p>
<p>Due to the varied image acquisition protocols and equipment used, the spacing between pixels differs from image to image. For instance, one pixel in an image might represent 1&#x2005;mm of actual distance, while in another image, one pixel might signify 1&#x2005;cm. This discrepancy is significant, and the CNN network cannot inherently learn and adjust for it; such variations introduce substantial uncertainty into the analysis. To diminish their effects, we adopted the following spatial normalization method [<xref ref-type="bibr" rid="ref-31">31</xref>], as illustrated in <xref ref-type="disp-formula" rid="eqn-1">Eqs. (1)</xref> and <xref ref-type="disp-formula" rid="eqn-2">(2)</xref> below:
<disp-formula id="eqn-1"><label>(1)</label><mml:math id="mml-eqn-1" display="block"><mml:msubsup><mml:mrow><mml:mrow><mml:mtext>sp</mml:mtext></mml:mrow></mml:mrow><mml:mrow><mml:mrow><mml:mtext>med</mml:mtext></mml:mrow><mml:mo>,</mml:mo><mml:mrow><mml:mtext>j</mml:mtext></mml:mrow></mml:mrow><mml:mrow><mml:mrow><mml:mtext>al</mml:mtext></mml:mrow></mml:mrow></mml:msubsup><mml:mo>=</mml:mo><mml:msub><mml:mrow><mml:mtext>f</mml:mtext></mml:mrow><mml:mrow><mml:mrow><mml:mtext>med</mml:mtext></mml:mrow></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:msubsup><mml:mrow><mml:mrow><mml:mtext>sp</mml:mtext></mml:mrow></mml:mrow><mml:mrow><mml:mn>0</mml:mn><mml:mo>,</mml:mo><mml:mrow><mml:mtext>n</mml:mtext></mml:mrow></mml:mrow><mml:mrow><mml:mrow><mml:mtext>al</mml:mtext></mml:mrow></mml:mrow></mml:msubsup><mml:mo>,</mml:mo><mml:msubsup><mml:mrow><mml:mrow><mml:mtext>sp</mml:mtext></mml:mrow></mml:mrow><mml:mrow><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mrow><mml:mtext>n</mml:mtext></mml:mrow></mml:mrow><mml:mrow><mml:mrow><mml:mtext>al</mml:mtext></mml:mrow></mml:mrow></mml:msubsup><mml:mo>,</mml:mo><mml:mo>&#x2026;</mml:mo><mml:mo>,</mml:mo><mml:msubsup><mml:mrow><mml:mrow><mml:mtext>sp</mml:mtext></mml:mrow></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mtext>N</mml:mtext></mml:mrow><mml:mrow><mml:mrow><mml:mtext>j</mml:mtext></mml:mrow></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:mrow><mml:mtext>n</mml:mtext></mml:mrow></mml:mrow><mml:mrow><mml:mrow><mml:mtext>al</mml:mtext></mml:mrow></mml:mrow></mml:msubsup><mml:mo>)</mml:mo></mml:mrow></mml:math></disp-formula>
<disp-formula id="eqn-2"><label>(2)</label><mml:math id="mml-eqn-2" display="block"><mml:msubsup><mml:mrow><mml:mrow><mml:mi mathvariant="normal">s</mml:mi></mml:mrow></mml:mrow><mml:mrow><mml:mrow><mml:mtext>al</mml:mtext></mml:mrow><mml:mo>,</mml:mo><mml:mrow><mml:mtext>i</mml:mtext></mml:mrow><mml:mo>,</mml:mo><mml:mrow><mml:mtext>j</mml:mtext></mml:mrow></mml:mrow><mml:mrow><mml:mrow><mml:mo>&#x2032;</mml:mo></mml:mrow></mml:mrow></mml:msubsup><mml:mo>=</mml:mo><mml:mfrac><mml:msub><mml:mrow><mml:mtext>s</mml:mtext></mml:mrow><mml:mrow><mml:mrow><mml:mtext>al</mml:mtext></mml:mrow><mml:mo>,</mml:mo><mml:mrow><mml:mtext>i</mml:mtext></mml:mrow><mml:mo>,</mml:mo><mml:mrow><mml:mtext>j</mml:mtext></mml:mrow></mml:mrow></mml:msub><mml:msubsup><mml:mrow><mml:mrow><mml:mtext>sp</mml:mtext></mml:mrow></mml:mrow><mml:mrow><mml:mrow><mml:mtext>i</mml:mtext></mml:mrow><mml:mo>,</mml:mo><mml:mrow><mml:mtext>j</mml:mtext></mml:mrow></mml:mrow><mml:mrow><mml:mrow><mml:mtext>al</mml:mtext></mml:mrow></mml:mrow></mml:msubsup></mml:mfrac><mml:mo>&#x00D7;</mml:mo><mml:msubsup><mml:mrow><mml:mrow><mml:mtext>sp</mml:mtext></mml:mrow></mml:mrow><mml:mrow><mml:mrow><mml:mtext>med</mml:mtext></mml:mrow><mml:mo>,</mml:mo><mml:mrow><mml:mtext>j</mml:mtext></mml:mrow></mml:mrow><mml:mrow><mml:mrow><mml:mtext>al</mml:mtext></mml:mrow></mml:mrow></mml:msubsup></mml:math></disp-formula></p>
<p>Transforming an image to a consistent resolution helps counteract the variances in pixel spacing. To prevent excessive discrepancies, each image&#x2019;s differences are aligned with its median spacing, ensuring that the target pixel spacing remains consistent within its domain.<inline-formula id="ieqn-1"><mml:math id="mml-ieqn-1"><mml:msubsup><mml:mrow><mml:mrow><mml:mi mathvariant="normal">s</mml:mi><mml:mi mathvariant="normal">p</mml:mi></mml:mrow></mml:mrow><mml:mrow><mml:mrow><mml:mtext>i</mml:mtext></mml:mrow><mml:mo>,</mml:mo><mml:mrow><mml:mtext>j</mml:mtext></mml:mrow></mml:mrow><mml:mrow><mml:mrow><mml:mtext>al</mml:mtext></mml:mrow></mml:mrow></mml:msubsup></mml:math></inline-formula> represents the spacing on each of the x, y, and z axes of the ith image in domain j, where al pertains to x, y, or z. <inline-formula id="ieqn-2"><mml:math id="mml-ieqn-2"><mml:msubsup><mml:mrow><mml:mrow><mml:mtext>sp</mml:mtext></mml:mrow></mml:mrow><mml:mrow><mml:mrow><mml:mtext>med</mml:mtext></mml:mrow><mml:mo>,</mml:mo><mml:mrow><mml:mtext>j</mml:mtext></mml:mrow></mml:mrow><mml:mrow><mml:mrow><mml:mtext>al</mml:mtext></mml:mrow></mml:mrow></mml:msubsup></mml:math></inline-formula> signifies the median spacing within the jth domain. <inline-formula id="ieqn-3"><mml:math id="mml-ieqn-3"><mml:msub><mml:mrow><mml:mtext>f</mml:mtext></mml:mrow><mml:mrow><mml:mrow><mml:mtext>med</mml:mtext></mml:mrow></mml:mrow></mml:msub></mml:math></inline-formula> denotes the median-taking operation, and <inline-formula id="ieqn-4"><mml:math id="mml-ieqn-4"><mml:msub><mml:mrow><mml:mtext>N</mml:mtext></mml:mrow><mml:mrow><mml:mrow><mml:mtext>j</mml:mtext></mml:mrow></mml:mrow></mml:msub></mml:math></inline-formula> represents data extraction from the jth domain. 
<inline-formula id="ieqn-5"><mml:math id="mml-ieqn-5"><mml:msubsup><mml:mrow><mml:mrow><mml:mtext>s</mml:mtext></mml:mrow></mml:mrow><mml:mrow><mml:mrow><mml:mtext>al</mml:mtext></mml:mrow><mml:mo>,</mml:mo><mml:mrow><mml:mtext>i</mml:mtext></mml:mrow><mml:mo>,</mml:mo><mml:mrow><mml:mtext>j</mml:mtext></mml:mrow></mml:mrow><mml:mrow><mml:mrow><mml:mo>&#x2032;</mml:mo></mml:mrow></mml:mrow></mml:msubsup></mml:math></inline-formula> is the new size calculated from the original image.</p>
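<p>A minimal sketch of this resampling step (spacings and sizes below are illustrative): the per-axis median spacing over a domain serves as the common target, and each image&#x2019;s new grid size follows the standard relation new size = old size &#x00D7; old spacing / target spacing, which preserves the image&#x2019;s physical extent.</p>

```python
import numpy as np

def median_spacing(spacings):
    # Per-axis median voxel spacing over all images in a domain (cf. Eq. (1)).
    return np.median(np.asarray(spacings, dtype=float), axis=0)

def resample_target_size(size, spacing, target_spacing):
    # New grid size that preserves each image's physical extent at the
    # common target spacing: size * spacing / target_spacing.
    size = np.asarray(size, dtype=float)
    spacing = np.asarray(spacing, dtype=float)
    return np.rint(size * spacing / target_spacing).astype(int)

# Three images with different (x, y, z) spacings in mm (illustrative).
spacings = [(1.0, 1.0, 2.0), (0.8, 0.8, 3.0), (1.2, 1.2, 2.5)]
sp_med = median_spacing(spacings)                        # [1.0, 1.0, 2.5]
new_size = resample_target_size((256, 256, 120), spacings[0], sp_med)
print(sp_med, new_size)                                  # [256 256 96]
```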
<p>Given that the source domain images are multimodal, encompassing both CT and MRI, there are inherent grayscale discrepancies between them. Hence, image normalization is essential. To sidestep the effects of outliers, gray values ranging between 0&#x0025;&#x2013;0.5&#x0025; and 99.5&#x0025;&#x2013;100&#x0025; are excluded. The mean value is then subtracted and divided by the standard deviation. Subsequently, the processed image is normalized to a specific grayscale [<xref ref-type="bibr" rid="ref-31">31</xref>], as outlined in <xref ref-type="disp-formula" rid="eqn-3">Eq. (3)</xref>:
<disp-formula id="eqn-3"><label>(3)</label><mml:math id="mml-eqn-3" display="block"><mml:msubsup><mml:mrow><mml:mrow><mml:mtext>v</mml:mtext></mml:mrow></mml:mrow><mml:mrow><mml:mrow><mml:mtext>i</mml:mtext></mml:mrow></mml:mrow><mml:mrow><mml:mrow><mml:mo>&#x2032;</mml:mo></mml:mrow></mml:mrow></mml:msubsup><mml:mo>=</mml:mo><mml:mo stretchy="false">(</mml:mo><mml:msub><mml:mrow><mml:mtext>v</mml:mtext></mml:mrow><mml:mrow><mml:mrow><mml:mtext>i</mml:mtext></mml:mrow></mml:mrow></mml:msub><mml:mo>&#x2212;</mml:mo><mml:msub><mml:mrow><mml:mtext>v</mml:mtext></mml:mrow><mml:mrow><mml:mrow><mml:mtext>m</mml:mtext></mml:mrow></mml:mrow></mml:msub><mml:mo stretchy="false">)</mml:mo><mml:mrow><mml:mo>/</mml:mo></mml:mrow><mml:msub><mml:mrow><mml:mtext>v</mml:mtext></mml:mrow><mml:mrow><mml:mrow><mml:mtext>sd</mml:mtext></mml:mrow></mml:mrow></mml:msub></mml:math></disp-formula></p>
<p>Because intensity ranges differ across thresholds, the per-volume mean <inline-formula id="ieqn-6"><mml:math id="mml-ieqn-6"><mml:msub><mml:mrow><mml:mtext>v</mml:mtext></mml:mrow><mml:mrow><mml:mrow><mml:mtext>m</mml:mtext></mml:mrow></mml:mrow></mml:msub></mml:math></inline-formula> and standard deviation <inline-formula id="ieqn-7"><mml:math id="mml-ieqn-7"><mml:msub><mml:mrow><mml:mtext>v</mml:mtext></mml:mrow><mml:mrow><mml:mrow><mml:mtext>sd</mml:mtext></mml:mrow></mml:mrow></mml:msub></mml:math></inline-formula> are used to convert <inline-formula id="ieqn-8"><mml:math id="mml-ieqn-8"><mml:msub><mml:mrow><mml:mtext>v</mml:mtext></mml:mrow><mml:mrow><mml:mrow><mml:mtext>i</mml:mtext></mml:mrow></mml:mrow></mml:msub></mml:math></inline-formula> to <inline-formula id="ieqn-9"><mml:math id="mml-ieqn-9"><mml:msubsup><mml:mrow><mml:mrow><mml:mtext>v</mml:mtext></mml:mrow></mml:mrow><mml:mrow><mml:mrow><mml:mtext>i</mml:mtext></mml:mrow></mml:mrow><mml:mrow><mml:mrow><mml:mo>&#x2032;</mml:mo></mml:mrow></mml:mrow></mml:msubsup></mml:math></inline-formula>. After the steps described, the input image is transformed into the model&#x2019;s input data, with grayscale normalization completed.</p>
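<p>A minimal sketch of this grayscale normalization, using percentile clipping as one common way to exclude the 0&#x0025;&#x2013;0.5&#x0025; and 99.5&#x0025;&#x2013;100&#x0025; tails before the z-score step of <xref ref-type="disp-formula" rid="eqn-3">Eq. (3)</xref>:</p>

```python
import numpy as np

def normalize_intensity(volume, lower_pct=0.5, upper_pct=99.5):
    """Clip gray values outside the 0.5%-99.5% percentile range, then
    z-score normalize: v' = (v - v_m) / v_sd (cf. Eq. (3))."""
    v = np.asarray(volume, dtype=float)
    lo, hi = np.percentile(v, [lower_pct, upper_pct])
    v = np.clip(v, lo, hi)              # suppress the extreme 0.5% tails
    return (v - v.mean()) / v.std()     # subtract mean, divide by std

# Synthetic scan volume (illustrative intensities).
rng = np.random.default_rng(0)
scan = rng.normal(loc=400.0, scale=50.0, size=(8, 8, 8))
out = normalize_intensity(scan)
print(out.mean(), out.std())            # approximately 0.0 and 1.0
```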
</sec>
<sec id="s3_2"><label>3.2</label><title>Proposed Med-3D-Based Classification Network</title>
<p>Med-3D primarily addresses segmentation tasks, supporting 3D segmentation datasets that contain both MRI and CT scans with a rich set of scan regions, target organs, and pathology features. Its source domain encompasses features of the brain, heart, hippocampus, pancreas, prostate, blood vessels, liver, and spleen [<xref ref-type="bibr" rid="ref-31">31</xref>]. These extensive source domain attributes expedite the network&#x2019;s convergence and amplify diagnostic precision after transfer. In our work, an improved Med-3D network transfer method is proposed for the classification task of Alzheimer&#x2019;s disease. <xref ref-type="fig" rid="fig-2">Fig. 2</xref> shows the proposed improved Med-3D network transfer method.</p>
<fig id="fig-2"><label>Figure 2</label><caption><title>Improved transfer method for Med-3D networks</title></caption><graphic mimetype="image" mime-subtype="tif" xlink:href="CMC_46872-fig-2.tif"/></fig>
<sec id="s3_2_1"><label>3.2.1</label><title>Backbone Model Structure</title>
<p>Med-3D predominantly employs 3D-ResNet as its backbone network, which comprises four modules: layer1, layer2, layer3, and layer4. Before reaching these layers, the input passes through conv1, followed by bn1, relu, and maxpool. Conv1 is a single 3D convolution with kernel_size&#x2009;&#x003D;&#x2009;7, stride&#x2009;&#x003D;&#x2009;2&#x2009;&#x002A;&#x2009;2&#x2009;&#x002A;&#x2009;2, padding&#x2009;&#x003D;&#x2009;3, and no bias term. The bn1 layer is a 3D batch normalization operation, which is articulated in <xref ref-type="disp-formula" rid="eqn-4">Eq. (4)</xref>:
<disp-formula id="eqn-4"><label>(4)</label><mml:math id="mml-eqn-4" display="block"><mml:mrow><mml:mtext>y</mml:mtext></mml:mrow><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mrow><mml:mtext>x</mml:mtext></mml:mrow><mml:mo>&#x2212;</mml:mo><mml:mrow><mml:mtext>E</mml:mtext></mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mtext>x</mml:mtext></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:msqrt><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mtext>Var</mml:mtext></mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mtext>x</mml:mtext></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mo>+</mml:mo><mml:mrow><mml:mi mathvariant="normal">&#x03F5;</mml:mi></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:msqrt></mml:mfrac><mml:mrow><mml:mi mathvariant="normal">&#x03B3;</mml:mi></mml:mrow><mml:mo>+</mml:mo><mml:mrow><mml:mi mathvariant="normal">&#x03B2;</mml:mi></mml:mrow></mml:math></disp-formula></p>
<p><inline-formula id="ieqn-10"><mml:math id="mml-ieqn-10"><mml:mrow><mml:mtext>x</mml:mtext></mml:mrow></mml:math></inline-formula> denotes the input value, <inline-formula id="ieqn-11"><mml:math id="mml-ieqn-11"><mml:mrow><mml:mtext>E</mml:mtext></mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mtext>x</mml:mtext></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:math></inline-formula> is the mean, <inline-formula id="ieqn-12"><mml:math id="mml-ieqn-12"><mml:mrow><mml:mtext>Var</mml:mtext></mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mtext>x</mml:mtext></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:math></inline-formula> is the variance, <inline-formula id="ieqn-13"><mml:math id="mml-ieqn-13"><mml:mrow><mml:mi mathvariant="normal">&#x03B3;</mml:mi></mml:mrow></mml:math></inline-formula> and <inline-formula id="ieqn-14"><mml:math id="mml-ieqn-14"><mml:mrow><mml:mi mathvariant="normal">&#x03B2;</mml:mi></mml:mrow></mml:math></inline-formula> are learnable parameters, and <inline-formula id="ieqn-15"><mml:math id="mml-ieqn-15"><mml:mrow><mml:mtext>y</mml:mtext></mml:mrow></mml:math></inline-formula> is the output. Normalization effectively avoids gradient vanishing and gradient explosion and expedites the model&#x2019;s convergence. This is followed by the ReLU operation, represented by <xref ref-type="disp-formula" rid="eqn-5">Eq. (5)</xref>:
<disp-formula id="eqn-5"><label>(5)</label><mml:math id="mml-eqn-5" display="block"><mml:mrow><mml:mtext>f</mml:mtext></mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mtext>x</mml:mtext></mml:mrow><mml:mo stretchy="false">)</mml:mo><mml:mo>=</mml:mo><mml:mrow><mml:mo>{</mml:mo><mml:mtable columnalign="left left" rowspacing="4pt" columnspacing="1em"><mml:mtr><mml:mtd><mml:mo movablelimits="true" form="prefix">max</mml:mo><mml:mrow><mml:mo>(</mml:mo><mml:mn>0</mml:mn><mml:mo>,</mml:mo><mml:mrow><mml:mtext>x</mml:mtext></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mo>,</mml:mo></mml:mtd><mml:mtd><mml:mrow><mml:mtext>x</mml:mtext></mml:mrow><mml:mo>&#x2265;</mml:mo><mml:mn>0</mml:mn></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mn>0</mml:mn><mml:mo>,</mml:mo></mml:mtd><mml:mtd><mml:mrow><mml:mtext>x</mml:mtext></mml:mrow><mml:mo>&#x003C;</mml:mo><mml:mn>0</mml:mn></mml:mtd></mml:mtr></mml:mtable><mml:mo fence="true" stretchy="true" symmetric="true"></mml:mo></mml:mrow></mml:math></disp-formula></p>
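<p>Eqs. (4) and (5) can be sketched together in NumPy. This is a simplified illustration that normalizes over the whole array rather than per channel, as a real 3D batch normalization layer would during training:</p>

```python
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    # Eq. (4): y = (x - E(x)) / sqrt(Var(x) + eps) * gamma + beta.
    # Simplified: statistics are computed over the whole array,
    # not per channel over a mini-batch as in a trained BN layer.
    return (x - x.mean()) / np.sqrt(x.var() + eps) * gamma + beta

def relu(x):
    # Eq. (5): f(x) = max(0, x) for x >= 0, else 0.
    return np.maximum(0.0, x)
```

<p>With default <monospace>gamma</monospace> and <monospace>beta</monospace>, the normalized output has approximately zero mean and unit variance before the ReLU clips negative values.</p>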
<p>The subsequent operation is Maxpool, with kernel_size&#x2009;&#x003D;&#x2009;7, stride&#x2009;&#x003D;&#x2009;2, and padding&#x2009;&#x003D;&#x2009;1. Following this, the sequence progresses to the backbone network layers, which adopt a 3D-ResNet composition with ResNet as the primary structure. These layers are not fixed-depth CNN models but are generated dynamically; the options include ResNet-10, ResNet-18, ResNet-34, ResNet-50, ResNet-101, ResNet-152, and ResNet-200. The ResNet configurations with 10, 18, and 34 layers are generated by the model1 module, while the ResNets with 50, 101, 152, and 200 layers are produced by the model2 module. The specific Backbone module in our proposed model is shown in <xref ref-type="fig" rid="fig-3">Fig. 3</xref>.</p>
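<p>The dynamically generated depths correspond to the per-stage residual-block counts of the standard ResNet family. The sketch below (block counts follow Table 1 and the original ResNet design; <monospace>BLOCK_COUNTS</monospace> and <monospace>total_weighted_layers</monospace> are illustrative helpers, not code from the paper) shows how each named depth arises:</p>

```python
# Residual-block counts per stage (conv2_x .. conv5_x) for each 3D-ResNet depth.
# Depths 10/18/34 use two-convolution "basic" blocks (built by model1);
# depths 50/101/152/200 use three-convolution "bottleneck" blocks (model2).
BLOCK_COUNTS = {
    10:  ("basic",      (1, 1, 1, 1)),
    18:  ("basic",      (2, 2, 2, 2)),
    34:  ("basic",      (3, 4, 6, 3)),
    50:  ("bottleneck", (3, 4, 6, 3)),
    101: ("bottleneck", (3, 4, 23, 3)),
    152: ("bottleneck", (3, 8, 36, 3)),
    200: ("bottleneck", (3, 24, 36, 3)),
}

def total_weighted_layers(depth):
    """conv1 + all residual-block convolutions + the final FC layer."""
    kind, counts = BLOCK_COUNTS[depth]
    convs_per_block = 2 if kind == "basic" else 3
    return 1 + convs_per_block * sum(counts) + 1
```

<p>For example, ResNet-18 has 1&#x2009;&#x002B;&#x2009;2&#x2009;&#x00D7;&#x2009;8&#x2009;&#x002B;&#x2009;1&#x2009;&#x003D;&#x2009;18 weighted layers, which is where the name comes from.</p>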
<fig id="fig-3"><label>Figure 3</label><caption><title>Backbone module</title></caption><graphic mimetype="image" mime-subtype="tif" xlink:href="CMC_46872-fig-3.tif"/></fig>
</sec>
<sec id="s3_2_2"><label>3.2.2</label><title>Proposed Model Structure</title>
<p>Our model uses the Med-3D network as its backbone and replaces the original segmentation head with a fully connected layer to perform classification; the transfer diagram is shown in <xref ref-type="fig" rid="fig-4">Fig. 4</xref>. To further improve the model, we add a squeeze-and-excitation (SE) attention module, and the improved scheme is also illustrated in <xref ref-type="fig" rid="fig-4">Fig. 4</xref>.</p>
<fig id="fig-4"><label>Figure 4</label><caption><title>The proposed channel attention-based transfer approach</title></caption><graphic mimetype="image" mime-subtype="tif" xlink:href="CMC_46872-fig-4.tif"/></fig>
<p>In this paper, a network model based on Med-3D network transfer is proposed, in which an SE attention module is added after the output of layer4. The SE module, plugged into the Med-3D network, contains a pooling layer, fully connected (FC) layers, and a ReLU activation; the resulting channel weights pass through a Sigmoid, and the feature map is rescaled to its original size. Although this increases the number of model parameters, it substantially improves performance. The details of the attention module are displayed in <xref ref-type="fig" rid="fig-5">Fig. 5</xref>.</p>
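<p>The pooling&#x2013;FC&#x2013;ReLU&#x2013;FC&#x2013;Sigmoid&#x2013;rescale sequence just described can be sketched in NumPy as follows; the weight matrices <monospace>w1</monospace> and <monospace>w2</monospace> (and the implied reduction ratio) are illustrative assumptions, since in the actual model they are learned during training:</p>

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def se_block(x, w1, w2):
    """Squeeze-and-excitation over a 3D feature map x of shape (C, D, H, W).

    w1: (C // r, C) and w2: (C, C // r) are the two FC weight matrices
    for reduction ratio r (illustrative stand-ins for learned weights)."""
    c = x.shape[0]
    z = x.reshape(c, -1).mean(axis=1)           # squeeze: global average pooling -> (C,)
    s = sigmoid(w2 @ np.maximum(0.0, w1 @ z))   # excitation: FC -> ReLU -> FC -> Sigmoid
    return x * s.reshape(c, 1, 1, 1)            # rescale each channel to original size
```

<p>The output has the same shape as the input: the SE block only reweights channels, so it can be inserted after layer4 without altering the rest of the network.</p>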
<fig id="fig-5"><label>Figure 5</label><caption><title>Attention module design</title></caption><graphic mimetype="image" mime-subtype="tif" xlink:href="CMC_46872-fig-5.tif"/></fig>
<p>The improved method adds a standard SE module after layer4 and then connects a fully connected layer for the AD classification task. Detailed layer configurations of the transferred networks are shown in <xref ref-type="table" rid="table-1">Table 1</xref>.</p>
<table-wrap id="table-1"><label>Table 1</label><caption><title>Layer structures of 3D-ResNet at different depths</title></caption>
<table frame="hsides">
<colgroup>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
</colgroup>
<thead>
<tr>
<th align="left">Name</th>
<th align="left">conv1</th>
<th align="left">conv2_x</th>
<th align="left">conv3_x</th>
<th align="left">conv4_x</th>
<th align="left">conv5_x</th>
<th align="left"/>
</tr>
</thead>
<tbody>
<tr>
<td align="left">3D-ResNet10</td>
<td/>
<td align="left"><inline-formula id="ieqn-16"><mml:math id="mml-ieqn-16"><mml:mrow><mml:mo>[</mml:mo><mml:mtable columnalign="left" rowspacing="4pt" columnspacing="1em"><mml:mtr><mml:mtd><mml:mn>3</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>3</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>3</mml:mn><mml:mo>,</mml:mo><mml:mn>64</mml:mn></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mn>3</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>3</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>3</mml:mn><mml:mo>,</mml:mo><mml:mn>64</mml:mn></mml:mtd></mml:mtr></mml:mtable><mml:mo>]</mml:mo></mml:mrow><mml:mo>&#x00D7;</mml:mo><mml:mn>1</mml:mn></mml:math></inline-formula></td>
<td align="left"><inline-formula id="ieqn-17"><mml:math id="mml-ieqn-17"><mml:mrow><mml:mo>[</mml:mo><mml:mtable columnalign="left" rowspacing="4pt" columnspacing="1em"><mml:mtr><mml:mtd><mml:mn>3</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>3</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>3</mml:mn><mml:mo>,</mml:mo><mml:mn>128</mml:mn></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mn>3</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>3</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>3</mml:mn><mml:mo>,</mml:mo><mml:mn>128</mml:mn></mml:mtd></mml:mtr></mml:mtable><mml:mo>]</mml:mo></mml:mrow><mml:mo>&#x00D7;</mml:mo><mml:mn>1</mml:mn></mml:math></inline-formula></td>
<td align="left"><inline-formula id="ieqn-18"><mml:math id="mml-ieqn-18"><mml:mrow><mml:mo>[</mml:mo><mml:mtable columnalign="left" rowspacing="4pt" columnspacing="1em"><mml:mtr><mml:mtd><mml:mn>3</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>3</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>3</mml:mn><mml:mo>,</mml:mo><mml:mn>256</mml:mn></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mn>3</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>3</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>3</mml:mn><mml:mo>,</mml:mo><mml:mn>256</mml:mn></mml:mtd></mml:mtr></mml:mtable><mml:mo>]</mml:mo></mml:mrow><mml:mo>&#x00D7;</mml:mo><mml:mn>1</mml:mn></mml:math></inline-formula></td>
<td align="left"><inline-formula id="ieqn-19"><mml:math id="mml-ieqn-19"><mml:mrow><mml:mo>[</mml:mo><mml:mtable columnalign="left" rowspacing="4pt" columnspacing="1em"><mml:mtr><mml:mtd><mml:mn>3</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>3</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>3</mml:mn><mml:mo>,</mml:mo><mml:mn>512</mml:mn></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mn>3</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>3</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>3</mml:mn><mml:mo>,</mml:mo><mml:mn>512</mml:mn></mml:mtd></mml:mtr></mml:mtable><mml:mo>]</mml:mo></mml:mrow><mml:mo>&#x00D7;</mml:mo><mml:mn>1</mml:mn></mml:math></inline-formula></td>
<td/>
</tr>
<tr>
<td align="left">3D-ResNet18</td>
<td/>
<td align="left"><inline-formula id="ieqn-20"><mml:math id="mml-ieqn-20"><mml:mrow><mml:mo>[</mml:mo><mml:mtable columnalign="left" rowspacing="4pt" columnspacing="1em"><mml:mtr><mml:mtd><mml:mn>3</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>3</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>3</mml:mn><mml:mo>,</mml:mo><mml:mn>64</mml:mn></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mn>3</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>3</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>3</mml:mn><mml:mo>,</mml:mo><mml:mn>64</mml:mn></mml:mtd></mml:mtr></mml:mtable><mml:mo>]</mml:mo></mml:mrow><mml:mo>&#x00D7;</mml:mo><mml:mn>2</mml:mn></mml:math></inline-formula></td>
<td align="left"><inline-formula id="ieqn-21"><mml:math id="mml-ieqn-21"><mml:mrow><mml:mo>[</mml:mo><mml:mtable columnalign="left" rowspacing="4pt" columnspacing="1em"><mml:mtr><mml:mtd><mml:mn>3</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>3</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>3</mml:mn><mml:mo>,</mml:mo><mml:mn>128</mml:mn></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mn>3</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>3</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>3</mml:mn><mml:mo>,</mml:mo><mml:mn>128</mml:mn></mml:mtd></mml:mtr></mml:mtable><mml:mo>]</mml:mo></mml:mrow><mml:mo>&#x00D7;</mml:mo><mml:mn>2</mml:mn></mml:math></inline-formula></td>
<td align="left"><inline-formula id="ieqn-22"><mml:math id="mml-ieqn-22"><mml:mrow><mml:mo>[</mml:mo><mml:mtable columnalign="left" rowspacing="4pt" columnspacing="1em"><mml:mtr><mml:mtd><mml:mn>3</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>3</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>3</mml:mn><mml:mo>,</mml:mo><mml:mn>256</mml:mn></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mn>3</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>3</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>3</mml:mn><mml:mo>,</mml:mo><mml:mn>256</mml:mn></mml:mtd></mml:mtr></mml:mtable><mml:mo>]</mml:mo></mml:mrow><mml:mo>&#x00D7;</mml:mo><mml:mn>2</mml:mn></mml:math></inline-formula></td>
<td align="left"><inline-formula id="ieqn-23"><mml:math id="mml-ieqn-23"><mml:mrow><mml:mo>[</mml:mo><mml:mtable columnalign="left" rowspacing="4pt" columnspacing="1em"><mml:mtr><mml:mtd><mml:mn>3</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>3</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>3</mml:mn><mml:mo>,</mml:mo><mml:mn>512</mml:mn></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mn>3</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>3</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>3</mml:mn><mml:mo>,</mml:mo><mml:mn>512</mml:mn></mml:mtd></mml:mtr></mml:mtable><mml:mo>]</mml:mo></mml:mrow><mml:mo>&#x00D7;</mml:mo><mml:mn>2</mml:mn></mml:math></inline-formula></td>
<td/>
</tr>
<tr>
<td align="left">3D-ResNet34</td>
<td/>
<td align="left"><inline-formula id="ieqn-24"><mml:math id="mml-ieqn-24"><mml:mrow><mml:mo>[</mml:mo><mml:mtable columnalign="left" rowspacing="4pt" columnspacing="1em"><mml:mtr><mml:mtd><mml:mn>3</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>3</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>3</mml:mn><mml:mo>,</mml:mo><mml:mn>64</mml:mn></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mn>3</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>3</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>3</mml:mn><mml:mo>,</mml:mo><mml:mn>64</mml:mn></mml:mtd></mml:mtr></mml:mtable><mml:mo>]</mml:mo></mml:mrow><mml:mo>&#x00D7;</mml:mo><mml:mn>3</mml:mn></mml:math></inline-formula></td>
<td align="left"><inline-formula id="ieqn-25"><mml:math id="mml-ieqn-25"><mml:mrow><mml:mo>[</mml:mo><mml:mtable columnalign="left" rowspacing="4pt" columnspacing="1em"><mml:mtr><mml:mtd><mml:mn>3</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>3</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>3</mml:mn><mml:mo>,</mml:mo><mml:mn>128</mml:mn></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mn>3</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>3</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>3</mml:mn><mml:mo>,</mml:mo><mml:mn>128</mml:mn></mml:mtd></mml:mtr></mml:mtable><mml:mo>]</mml:mo></mml:mrow><mml:mo>&#x00D7;</mml:mo><mml:mn>4</mml:mn></mml:math></inline-formula></td>
<td align="left"><inline-formula id="ieqn-26"><mml:math id="mml-ieqn-26"><mml:mrow><mml:mo>[</mml:mo><mml:mtable columnalign="left" rowspacing="4pt" columnspacing="1em"><mml:mtr><mml:mtd><mml:mn>3</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>3</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>3</mml:mn><mml:mo>,</mml:mo><mml:mn>256</mml:mn></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mn>3</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>3</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>3</mml:mn><mml:mo>,</mml:mo><mml:mn>256</mml:mn></mml:mtd></mml:mtr></mml:mtable><mml:mo>]</mml:mo></mml:mrow><mml:mo>&#x00D7;</mml:mo><mml:mn>6</mml:mn></mml:math></inline-formula></td>
<td align="left"><inline-formula id="ieqn-27"><mml:math id="mml-ieqn-27"><mml:mrow><mml:mo>[</mml:mo><mml:mtable columnalign="left" rowspacing="4pt" columnspacing="1em"><mml:mtr><mml:mtd><mml:mn>3</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>3</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>3</mml:mn><mml:mo>,</mml:mo><mml:mn>512</mml:mn></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mn>3</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>3</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>3</mml:mn><mml:mo>,</mml:mo><mml:mn>512</mml:mn></mml:mtd></mml:mtr></mml:mtable><mml:mo>]</mml:mo></mml:mrow><mml:mo>&#x00D7;</mml:mo><mml:mn>3</mml:mn></mml:math></inline-formula></td>
<td/>
</tr>
<tr>
<td align="left">3D-ResNet50</td>
<td align="center">3&#x2009;&#x002A;&#x2009;7&#x2009;&#x002A;&#x2009;7,<break/>64,<break/>stride2</td>
<td align="left"><inline-formula id="ieqn-28"><mml:math id="mml-ieqn-28"><mml:mrow><mml:mo>[</mml:mo><mml:mtable columnalign="left" rowspacing="4pt" columnspacing="1em"><mml:mtr><mml:mtd><mml:mn>1</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>1</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mn>64</mml:mn></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mn>3</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>3</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>3</mml:mn><mml:mo>,</mml:mo><mml:mn>64</mml:mn></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mn>1</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>1</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mn>256</mml:mn></mml:mtd></mml:mtr></mml:mtable><mml:mo>]</mml:mo></mml:mrow><mml:mo>&#x00D7;</mml:mo><mml:mn>3</mml:mn></mml:math></inline-formula></td>
<td align="left"><inline-formula id="ieqn-29"><mml:math id="mml-ieqn-29"><mml:mrow><mml:mo>[</mml:mo><mml:mtable columnalign="left" rowspacing="4pt" columnspacing="1em"><mml:mtr><mml:mtd><mml:mn>1</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>1</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mn>128</mml:mn></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mn>3</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>3</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>3</mml:mn><mml:mo>,</mml:mo><mml:mn>128</mml:mn></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mn>1</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>1</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mn>512</mml:mn></mml:mtd></mml:mtr></mml:mtable><mml:mo>]</mml:mo></mml:mrow><mml:mo>&#x00D7;</mml:mo><mml:mn>4</mml:mn></mml:math></inline-formula></td>
<td align="left"><inline-formula id="ieqn-30"><mml:math id="mml-ieqn-30"><mml:mrow><mml:mo>[</mml:mo><mml:mtable columnalign="left" rowspacing="4pt" columnspacing="1em"><mml:mtr><mml:mtd><mml:mn>1</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>1</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mn>256</mml:mn></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mn>3</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>3</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>3</mml:mn><mml:mo>,</mml:mo><mml:mn>256</mml:mn></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mn>1</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>1</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mn>1024</mml:mn></mml:mtd></mml:mtr></mml:mtable><mml:mo>]</mml:mo></mml:mrow><mml:mo>&#x00D7;</mml:mo><mml:mn>6</mml:mn></mml:math></inline-formula></td>
<td align="left"><inline-formula id="ieqn-31"><mml:math id="mml-ieqn-31"><mml:mrow><mml:mo>[</mml:mo><mml:mtable columnalign="left" rowspacing="4pt" columnspacing="1em"><mml:mtr><mml:mtd><mml:mn>1</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>1</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mn>512</mml:mn></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mn>3</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>3</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>3</mml:mn><mml:mo>,</mml:mo><mml:mn>512</mml:mn></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mn>1</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>1</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mn>2048</mml:mn></mml:mtd></mml:mtr></mml:mtable><mml:mo>]</mml:mo></mml:mrow><mml:mo>&#x00D7;</mml:mo><mml:mn>3</mml:mn></mml:math></inline-formula></td>
<td align="center">Average pool, softmax</td>
</tr>
<tr>
<td align="left">3D-ResNet101</td>
<td/>
<td align="left"><inline-formula id="ieqn-32"><mml:math id="mml-ieqn-32"><mml:mrow><mml:mo>[</mml:mo><mml:mtable columnalign="left" rowspacing="4pt" columnspacing="1em"><mml:mtr><mml:mtd><mml:mn>1</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>1</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mn>64</mml:mn></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mn>3</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>3</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>3</mml:mn><mml:mo>,</mml:mo><mml:mn>64</mml:mn></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mn>1</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>1</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mn>256</mml:mn></mml:mtd></mml:mtr></mml:mtable><mml:mo>]</mml:mo></mml:mrow><mml:mo>&#x00D7;</mml:mo><mml:mn>3</mml:mn></mml:math></inline-formula></td>
<td align="left"><inline-formula id="ieqn-33"><mml:math id="mml-ieqn-33"><mml:mrow><mml:mo>[</mml:mo><mml:mtable columnalign="left" rowspacing="4pt" columnspacing="1em"><mml:mtr><mml:mtd><mml:mn>1</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>1</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mn>128</mml:mn></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mn>3</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>3</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>3</mml:mn><mml:mo>,</mml:mo><mml:mn>128</mml:mn></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mn>1</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>1</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mn>512</mml:mn></mml:mtd></mml:mtr></mml:mtable><mml:mo>]</mml:mo></mml:mrow><mml:mo>&#x00D7;</mml:mo><mml:mn>4</mml:mn></mml:math></inline-formula></td>
<td align="left"><inline-formula id="ieqn-34"><mml:math id="mml-ieqn-34"><mml:mrow><mml:mo>[</mml:mo><mml:mtable columnalign="left" rowspacing="4pt" columnspacing="1em"><mml:mtr><mml:mtd><mml:mn>1</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>1</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mn>256</mml:mn></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mn>3</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>3</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>3</mml:mn><mml:mo>,</mml:mo><mml:mn>256</mml:mn></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mn>1</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>1</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mn>1024</mml:mn></mml:mtd></mml:mtr></mml:mtable><mml:mo>]</mml:mo></mml:mrow><mml:mo>&#x00D7;</mml:mo><mml:mn>23</mml:mn></mml:math></inline-formula></td>
<td align="left"><inline-formula id="ieqn-35"><mml:math id="mml-ieqn-35"><mml:mrow><mml:mo>[</mml:mo><mml:mtable columnalign="left" rowspacing="4pt" columnspacing="1em"><mml:mtr><mml:mtd><mml:mn>1</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>1</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mn>512</mml:mn></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mn>3</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>3</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>3</mml:mn><mml:mo>,</mml:mo><mml:mn>512</mml:mn></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mn>1</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>1</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mn>2048</mml:mn></mml:mtd></mml:mtr></mml:mtable><mml:mo>]</mml:mo></mml:mrow><mml:mo>&#x00D7;</mml:mo><mml:mn>3</mml:mn></mml:math></inline-formula></td>
<td/>
</tr>
<tr>
<td align="left">3D-ResNet152</td>
<td/>
<td align="left"><inline-formula id="ieqn-36"><mml:math id="mml-ieqn-36"><mml:mrow><mml:mo>[</mml:mo><mml:mtable columnalign="left" rowspacing="4pt" columnspacing="1em"><mml:mtr><mml:mtd><mml:mn>1</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>1</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mn>64</mml:mn></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mn>3</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>3</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>3</mml:mn><mml:mo>,</mml:mo><mml:mn>64</mml:mn></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mn>1</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>1</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mn>256</mml:mn></mml:mtd></mml:mtr></mml:mtable><mml:mo>]</mml:mo></mml:mrow><mml:mo>&#x00D7;</mml:mo><mml:mn>3</mml:mn></mml:math></inline-formula></td>
<td align="left"><inline-formula id="ieqn-37"><mml:math id="mml-ieqn-37"><mml:mrow><mml:mo>[</mml:mo><mml:mtable columnalign="left" rowspacing="4pt" columnspacing="1em"><mml:mtr><mml:mtd><mml:mn>1</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>1</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mn>128</mml:mn></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mn>3</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>3</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>3</mml:mn><mml:mo>,</mml:mo><mml:mn>128</mml:mn></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mn>1</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>1</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mn>512</mml:mn></mml:mtd></mml:mtr></mml:mtable><mml:mo>]</mml:mo></mml:mrow><mml:mo>&#x00D7;</mml:mo><mml:mn>8</mml:mn></mml:math></inline-formula></td>
<td align="left"><inline-formula id="ieqn-38"><mml:math id="mml-ieqn-38"><mml:mrow><mml:mo>[</mml:mo><mml:mtable columnalign="left" rowspacing="4pt" columnspacing="1em"><mml:mtr><mml:mtd><mml:mn>1</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>1</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mn>256</mml:mn></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mn>3</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>3</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>3</mml:mn><mml:mo>,</mml:mo><mml:mn>256</mml:mn></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mn>1</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>1</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mn>1024</mml:mn></mml:mtd></mml:mtr></mml:mtable><mml:mo>]</mml:mo></mml:mrow><mml:mo>&#x00D7;</mml:mo><mml:mn>36</mml:mn></mml:math></inline-formula></td>
<td align="left"><inline-formula id="ieqn-39"><mml:math id="mml-ieqn-39"><mml:mrow><mml:mo>[</mml:mo><mml:mtable columnalign="left" rowspacing="4pt" columnspacing="1em"><mml:mtr><mml:mtd><mml:mn>1</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>1</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mn>512</mml:mn></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mn>3</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>3</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>3</mml:mn><mml:mo>,</mml:mo><mml:mn>512</mml:mn></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mn>1</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>1</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mn>2048</mml:mn></mml:mtd></mml:mtr></mml:mtable><mml:mo>]</mml:mo></mml:mrow><mml:mo>&#x00D7;</mml:mo><mml:mn>3</mml:mn></mml:math></inline-formula></td>
<td/>
</tr>
<tr>
<td align="left">3D-ResNet200</td>
<td/>
<td align="left"><inline-formula id="ieqn-40"><mml:math id="mml-ieqn-40"><mml:mrow><mml:mo>[</mml:mo><mml:mtable columnalign="left" rowspacing="4pt" columnspacing="1em"><mml:mtr><mml:mtd><mml:mn>1</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>1</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mn>64</mml:mn></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mn>3</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>3</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>3</mml:mn><mml:mo>,</mml:mo><mml:mn>64</mml:mn></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mn>1</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>1</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mn>256</mml:mn></mml:mtd></mml:mtr></mml:mtable><mml:mo>]</mml:mo></mml:mrow><mml:mo>&#x00D7;</mml:mo><mml:mn>3</mml:mn></mml:math></inline-formula></td>
<td align="left"><inline-formula id="ieqn-41"><mml:math id="mml-ieqn-41"><mml:mrow><mml:mo>[</mml:mo><mml:mtable columnalign="left" rowspacing="4pt" columnspacing="1em"><mml:mtr><mml:mtd><mml:mn>1</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>1</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mn>128</mml:mn></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mn>3</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>3</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>3</mml:mn><mml:mo>,</mml:mo><mml:mn>128</mml:mn></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mn>1</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>1</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mn>512</mml:mn></mml:mtd></mml:mtr></mml:mtable><mml:mo>]</mml:mo></mml:mrow><mml:mo>&#x00D7;</mml:mo><mml:mn>24</mml:mn></mml:math></inline-formula></td>
<td align="left"><inline-formula id="ieqn-42"><mml:math id="mml-ieqn-42"><mml:mrow><mml:mo>[</mml:mo><mml:mtable columnalign="left" rowspacing="4pt" columnspacing="1em"><mml:mtr><mml:mtd><mml:mn>1</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>1</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mn>256</mml:mn></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mn>3</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>3</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>3</mml:mn><mml:mo>,</mml:mo><mml:mn>256</mml:mn></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mn>1</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>1</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mn>1024</mml:mn></mml:mtd></mml:mtr></mml:mtable><mml:mo>]</mml:mo></mml:mrow><mml:mo>&#x00D7;</mml:mo><mml:mn>36</mml:mn></mml:math></inline-formula></td>
<td align="left"><inline-formula id="ieqn-43"><mml:math id="mml-ieqn-43"><mml:mrow><mml:mo>[</mml:mo><mml:mtable columnalign="left" rowspacing="4pt" columnspacing="1em"><mml:mtr><mml:mtd><mml:mn>1</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>1</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mn>512</mml:mn></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mn>3</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>3</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>3</mml:mn><mml:mo>,</mml:mo><mml:mn>512</mml:mn></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mn>1</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>1</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mn>2048</mml:mn></mml:mtd></mml:mtr></mml:mtable><mml:mo>]</mml:mo></mml:mrow><mml:mo>&#x00D7;</mml:mo><mml:mn>3</mml:mn></mml:math></inline-formula></td>
<td/>
</tr>
</tbody>
</table>
</table-wrap>
</sec>
<sec id="s3_2_3"><label>3.2.3</label><title>Loss Function</title>
<p>The loss function plays an important role in assessing the difference between a model&#x2019;s predicted and actual categories, and thus the model&#x2019;s ability. In real-world tasks, for both segmentation and classification, a smaller loss value indicates a more capable model. In the experiments, a faster decrease in the loss value indicates a stronger learning ability, which means a smaller dataset suffices and better results can be achieved with the same amount of training data.</p>
<p>Since this paper deals with a multi-class classification task, the cross-entropy loss function commonly used in multi-class models is adopted. This loss function has achieved good results in such tasks, and its formula is shown in <xref ref-type="disp-formula" rid="eqn-6">Eq. (6)</xref>:
<disp-formula id="eqn-6"><label>(6)</label><mml:math id="mml-eqn-6" display="block"><mml:msub><mml:mrow><mml:mtext>Loss</mml:mtext></mml:mrow><mml:mrow><mml:mrow><mml:mtext>CE</mml:mtext></mml:mrow></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mfrac><mml:mn>1</mml:mn><mml:mrow><mml:mtext>n</mml:mtext></mml:mrow></mml:mfrac><mml:msubsup><mml:mrow><mml:mo>&#x2211;</mml:mo></mml:mrow><mml:mrow><mml:mrow><mml:mtext>i</mml:mtext></mml:mrow></mml:mrow><mml:mrow><mml:mrow><mml:mtext>n</mml:mtext></mml:mrow></mml:mrow></mml:msubsup><mml:msub><mml:mrow><mml:mtext>L</mml:mtext></mml:mrow><mml:mrow><mml:mrow><mml:mtext>i</mml:mtext></mml:mrow></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mo>&#x2212;</mml:mo><mml:mfrac><mml:mn>1</mml:mn><mml:mrow><mml:mtext>n</mml:mtext></mml:mrow></mml:mfrac><mml:msubsup><mml:mrow><mml:mo>&#x2211;</mml:mo></mml:mrow><mml:mrow><mml:mrow><mml:mtext>i</mml:mtext></mml:mrow></mml:mrow><mml:mrow><mml:mrow><mml:mtext>n</mml:mtext></mml:mrow></mml:mrow></mml:msubsup><mml:msubsup><mml:mrow><mml:mo>&#x2211;</mml:mo></mml:mrow><mml:mrow><mml:mrow><mml:mtext>c</mml:mtext></mml:mrow><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mrow><mml:mtext>m</mml:mtext></mml:mrow></mml:mrow></mml:msubsup><mml:msub><mml:mrow><mml:mtext>y</mml:mtext></mml:mrow><mml:mrow><mml:mrow><mml:mtext>ic</mml:mtext></mml:mrow></mml:mrow></mml:msub><mml:mi>log</mml:mi><mml:mo>&#x2061;</mml:mo><mml:mrow><mml:mo>(</mml:mo><mml:msub><mml:mrow><mml:mtext>p</mml:mtext></mml:mrow><mml:mrow><mml:mrow><mml:mtext>ic</mml:mtext></mml:mrow></mml:mrow></mml:msub><mml:mo>)</mml:mo></mml:mrow></mml:math></disp-formula>where n represents the number of samples, m denotes the number of categories, i denotes the index where the current category is located, <inline-formula id="ieqn-44"><mml:math id="mml-ieqn-44"><mml:msub><mml:mrow><mml:mtext>y</mml:mtext></mml:mrow><mml:mrow><mml:mrow><mml:mtext>ic</mml:mtext></mml:mrow></mml:mrow></mml:msub></mml:math></inline-formula> and 
<inline-formula id="ieqn-45"><mml:math id="mml-ieqn-45"><mml:msub><mml:mrow><mml:mtext>p</mml:mtext></mml:mrow><mml:mrow><mml:mrow><mml:mtext>ic</mml:mtext></mml:mrow></mml:mrow></mml:msub></mml:math></inline-formula> denote, respectively, the true label indicating whether sample i belongs to category c and the predicted probability output by the model. The final predicted category is obtained with the Softmax function, which is commonly used in multi-class classification tasks; its formula is shown in <xref ref-type="disp-formula" rid="eqn-7">Eq. (7)</xref>:
<disp-formula id="eqn-7"><label>(7)</label><mml:math id="mml-eqn-7" display="block"><mml:msub><mml:mrow><mml:mtext>p</mml:mtext></mml:mrow><mml:mrow><mml:mrow><mml:mtext>ic</mml:mtext></mml:mrow></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mrow><mml:mtext>softmax</mml:mtext></mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:msup><mml:mrow><mml:mtext>z</mml:mtext></mml:mrow><mml:mrow><mml:mrow><mml:mtext>c</mml:mtext></mml:mrow></mml:mrow></mml:msup><mml:mo>)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mi>exp</mml:mi><mml:mo>&#x2061;</mml:mo><mml:mrow><mml:mo>(</mml:mo><mml:msup><mml:mrow><mml:mtext>z</mml:mtext></mml:mrow><mml:mrow><mml:mrow><mml:mtext>c</mml:mtext></mml:mrow></mml:mrow></mml:msup><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:msubsup><mml:mrow><mml:mo>&#x2211;</mml:mo></mml:mrow><mml:mrow><mml:mrow><mml:mtext>i</mml:mtext></mml:mrow></mml:mrow><mml:mrow><mml:mrow><mml:mtext>m</mml:mtext></mml:mrow></mml:mrow></mml:msubsup><mml:mi>exp</mml:mi><mml:mo>&#x2061;</mml:mo><mml:mrow><mml:mo>(</mml:mo><mml:msup><mml:mrow><mml:mtext>z</mml:mtext></mml:mrow><mml:mrow><mml:mrow><mml:mtext>i</mml:mtext></mml:mrow></mml:mrow></mml:msup><mml:mo>)</mml:mo></mml:mrow></mml:mrow></mml:mfrac></mml:math></disp-formula></p>
<p>In the above equation, <inline-formula id="ieqn-46"><mml:math id="mml-ieqn-46"><mml:msup><mml:mrow><mml:mtext>z</mml:mtext></mml:mrow><mml:mrow><mml:mrow><mml:mtext>c</mml:mtext></mml:mrow></mml:mrow></mml:msup></mml:math></inline-formula> represents the output of the fully connected layer, which the Softmax function then converts into the probability <inline-formula id="ieqn-47"><mml:math id="mml-ieqn-47"><mml:msub><mml:mrow><mml:mtext>p</mml:mtext></mml:mrow><mml:mrow><mml:mrow><mml:mtext>ic</mml:mtext></mml:mrow></mml:mrow></mml:msub></mml:math></inline-formula>. The model adopts the stochastic gradient descent (SGD) algorithm as its optimization strategy. Unlike batch gradient descent, SGD does not wait until all samples have been processed before updating the model parameters; instead, it updates the network weights in mini-batches. This not only shortens training time but also reduces the risk of overfitting.</p>
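<p>The loss and prediction pipeline described above can be sketched as follows. This is a minimal illustrative implementation of Eqs. (6) and (7) in plain Python; the logits, batch size, and three-class setup (AD/MCI/NC) are invented for demonstration and do not come from the paper&#x2019;s model, and the mini-batch SGD update itself is omitted.</p>

```python
import math

def softmax(z):
    # Eq. (7): p_c = exp(z_c) / sum_i exp(z_i); the max-shift keeps exp() stable
    m = max(z)
    exps = [math.exp(v - m) for v in z]
    s = sum(exps)
    return [e / s for e in exps]

def cross_entropy(logits_batch, labels):
    # Eq. (6): Loss_CE = -(1/n) * sum_i sum_c y_ic * log(p_ic);
    # with one-hot labels, only the true-class term of the inner sum survives.
    n = len(labels)
    total = 0.0
    for z, y in zip(logits_batch, labels):
        p = softmax(z)
        total += -math.log(p[y])
    return total / n

# Hypothetical FC-layer outputs for a mini-batch of two samples,
# three classes: 0 = AD, 1 = MCI, 2 = NC.
logits = [[2.0, 0.5, 0.1], [0.2, 1.5, 0.3]]
labels = [0, 1]

loss = cross_entropy(logits, labels)
# Final predicted category = argmax of the softmax probabilities.
preds = [max(range(3), key=lambda c: softmax(z)[c]) for z in logits]
```

In a PyTorch training loop these two steps are typically fused into a single `nn.CrossEntropyLoss`, which applies log-softmax to the raw logits internally.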
</sec>
</sec>
</sec>
<sec id="s4"><label>4</label><title>Experimental Results and Analysis</title>
<sec id="s4_1"><label>4.1</label><title>ADNI MRI Dataset</title>
<p>The data used in this paper was obtained from The Alzheimer&#x2019;s Disease Neuroimaging Initiative (ADNI) [<xref ref-type="bibr" rid="ref-32">32</xref>] database (<ext-link ext-link-type="uri" xlink:href="https://ida.loni.usc.edu/login.jsp?project=ADNI">https://ida.loni.usc.edu/login.jsp?project=ADNI</ext-link>). ADNI was founded by a consortium of medical centers and universities in the United States and Canada. It aims to provide publicly available datasets for the identification and precise diagnosis of the biomedical hallmarks of AD and the ongoing follow-up of trial participants. With its expansion, ADNI has become the leading source for studying AD through longitudinal, multi-site MRI images.</p>
<p>The main purpose of ADNI-1 was to study biomedical markers for use in clinical trials. The project spanned 5 years, beginning in 2004. It collected data from 200 NC, 400 MCI, and 200 AD subjects. MRI and PET [<xref ref-type="bibr" rid="ref-33">33</xref>] were the primary imaging modalities, accumulating a vast amount of brain imaging data. Additionally, genetic profiles [<xref ref-type="bibr" rid="ref-34">34</xref>] were gathered, and concurrent research led to the discovery of blood and cerebrospinal fluid biomedical markers.</p>
<p>ADNI-GO commenced in 2009 and ran for two years. It added 200 subjects diagnosed with a new classification called Early Mild Cognitive Impairment (EMCI) [<xref ref-type="bibr" rid="ref-35">35</xref>] to the ADNI-1 cohort. MR protocols were adapted with the intent of detecting early-stage AD.</p>
<p>ADNI-2 started in 2011 and built upon the prior studies. It consisted of 150 NC, 100 EMCI, and 150 Late Mild Cognitive Impairment (LMCI) [<xref ref-type="bibr" rid="ref-36">36</xref>] subjects from both ADNI-1 and ADNI-GO. An additional 107 Significant Memory Concern (SMC) subjects were included to bridge the gap between NC and MCI. ADNI-2&#x2019;s notable contribution was the incorporation of Florbetapir amyloid PET scans for all participants in ADNI-GO and ADNI-2. This was pivotal in the pursuit of identifying biomedical markers indicative of cognitive decline.</p>
<p>The ADNI-3 program was launched in 2016, primarily aiming to elucidate the interrelationships and characterizations of various biomarkers within the Alzheimer&#x2019;s disease spectrum, encompassing clinical, cognitive, imaging, genetic, and biochemical facets. ADNI-3 introduced broader brain scanning techniques to identify tau protein tangles (tau PET), a vital indicator of pathology. Concurrently, this phase persistently sought to pinpoint, enhance, standardize, and validate the methods and biomarkers used in AD clinical trials. Building on the extensive prior data, ADNI-3 added 133 elderly controls, 151 MCI, and 87 AD subjects to the study.</p>
<p>The timeline of ADNI-1, ADNI-GO, ADNI-2, and ADNI-3 development is shown in <xref ref-type="fig" rid="fig-6">Fig. 6</xref>. After years of accumulation, the ADNI dataset has amassed a large amount of information that researchers can utilize, providing strong research support. Because it contains longitudinal MRI data for individual patients, it also deepens understanding of the full progression from MCI to AD, supports more accurate diagnosis of AD, and offers ideas for slowing the progression of MCI.</p>
<fig id="fig-6"><label>Figure 6</label><caption><title>ADNI development stages</title></caption><graphic mimetype="image" mime-subtype="tif" xlink:href="CMC_46872-fig-6.tif"/></fig>
<p>For this study, MRI images from 787 subjects were downloaded. Of these, data from 511 subjects were designated as the training set, while data from 276 subjects comprised the test set. In the training set, there were 154 subjects with Alzheimer&#x2019;s disease with an average age of 76.03; 90 were male and 64 were female. Mild Cognitive Impairment was diagnosed in 187 subjects with an average age of 74.78; 96 were male and 91 were female. The normal control (NC) group had 170 subjects with an average age of 75.28; 90 were male and 80 were female. In the test set, there were 83 subjects with AD, averaging 76.23 years of age; 40 were male and 43 were female. The MCI group had 101 subjects with an average age of 74.38; 54 were male and 47 were female. The NC group included 92 subjects with an average age of 76.38; 42 were male and 50 were female. <xref ref-type="table" rid="table-2">Table 2</xref> presents detailed information about the demographic characteristics of the ADNI training set and test set in the experiment.</p>
<table-wrap id="table-2"><label>Table 2</label><caption><title>ADNI training set/test set</title></caption>
<table frame="hsides">
<colgroup>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
</colgroup>
<thead>
<tr>
<th align="left">Class</th>
<th align="left">Subjects</th>
<th align="left">Age (Average)</th>
<th align="left">Gender (Female (Male))</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left">AD</td>
<td align="left">154/83</td>
<td align="left">76.13</td>
<td align="left">64 (90)/43 (40)</td>
</tr>
<tr>
<td align="left">MCI</td>
<td align="left">187/101</td>
<td align="left">74.58</td>
<td align="left">91 (96)/47 (54)</td>
</tr>
<tr>
<td align="left">NC</td>
<td align="left">170/92</td>
<td align="left">75.93</td>
<td align="left">80 (90)/50 (42)</td>
</tr>
</tbody>
</table>
</table-wrap>
</sec>
<sec id="s4_2"><label>4.2</label><title>Experimentation and Analysis</title>
<p>To evaluate the model&#x2019;s performance, metrics such as Accuracy, Precision, and Recall were computed for comparison. The experiments indicate that the ResNet50 transfer is the most effective, both in the direct transfer configuration and when the SE module is incorporated. Each experiment was run for 60 epochs, and the loss at each iteration was recorded, as depicted in <xref ref-type="fig" rid="fig-7">Fig. 7</xref>. The experimental findings are presented in <xref ref-type="table" rid="table-3">Tables 3</xref> and <xref ref-type="table" rid="table-4">4</xref> below.</p>
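<p>For reference, the three evaluation metrics can be computed from predictions as sketched below. The paper does not state how precision and recall are averaged over the three classes, so macro-averaging is assumed here; the toy labels are invented purely for illustration.</p>

```python
def metrics(y_true, y_pred, classes=(0, 1, 2)):
    # Accuracy plus macro-averaged precision and recall for a
    # multi-class task (here 0 = AD, 1 = MCI, 2 = NC).
    acc = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    precs, recs = [], []
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        precs.append(tp / (tp + fp) if tp + fp else 0.0)
        recs.append(tp / (tp + fn) if tp + fn else 0.0)
    # Macro-average: each class contributes equally regardless of size.
    return acc, sum(precs) / len(precs), sum(recs) / len(recs)

# Toy ground truth and predictions, six test subjects.
y_true = [0, 0, 1, 1, 2, 2]
y_pred = [0, 1, 1, 1, 2, 0]
acc, prec, rec = metrics(y_true, y_pred)
```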
<fig id="fig-7"><label>Figure 7</label><caption><title>Loss descent chart</title></caption><graphic mimetype="image" mime-subtype="tif" xlink:href="CMC_46872-fig-7.tif"/></fig><table-wrap id="table-3"><label>Table 3</label><caption><title>Comparison of accuracy, precision, and recall for different depth ResNet transfer models</title></caption>
<table frame="hsides">
<colgroup>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
</colgroup>
<thead>
<tr>
<th align="left">Method</th>
<th align="left">Task</th>
<th align="left">Accuracy</th>
<th align="left">Precision</th>
<th align="left">Recall</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left">ResNet10&#x2009;&#x002B;&#x2009;FC</td>
<td align="left">AD/MCI/NC</td>
<td align="left">0.86</td>
<td align="left">0.84</td>
<td align="left">0.73</td>
</tr>
<tr>
<td align="left">ResNet18&#x2009;&#x002B;&#x2009;FC</td>
<td align="left">AD/MCI/NC</td>
<td align="left">0.85</td>
<td align="left">0.83</td>
<td align="left">0.77</td>
</tr>
<tr>
<td align="left">ResNet34&#x2009;&#x002B;&#x2009;FC</td>
<td align="left">AD/MCI/NC</td>
<td align="left">0.82</td>
<td align="left">0.71</td>
<td align="left">0.82</td>
</tr>
<tr>
<td align="left"><bold>ResNet50</bold>&#x2009;<bold>&#x002B;</bold>&#x2009;<bold>FC</bold></td>
<td align="left"><bold>AD/MCI/NC</bold></td>
<td align="left"><bold>0.89</bold></td>
<td align="left"><bold>0.96</bold></td>
<td align="left"><bold>0.89</bold></td>
</tr>
<tr>
<td align="left">ResNet101&#x2009;&#x002B;&#x2009;FC</td>
<td align="left">AD/MCI/NC</td>
<td align="left">0.79</td>
<td align="left">0.73</td>
<td align="left">0.84</td>
</tr>
<tr>
<td align="left">ResNet152&#x2009;&#x002B;&#x2009;FC</td>
<td align="left">AD/MCI/NC</td>
<td align="left">0.83</td>
<td align="left">0.81</td>
<td align="left">0.86</td>
</tr>
<tr>
<td align="left">ResNet200&#x2009;&#x002B;&#x2009;FC</td>
<td align="left">AD/MCI/NC</td>
<td align="left">0.87</td>
<td align="left">0.82</td>
<td align="left">0.89</td>
</tr>
</tbody>
</table>
</table-wrap><table-wrap id="table-4"><label>Table 4</label><caption><title>Comparison of accuracy, precision, and recall of ResNet-improved SE models at different depths</title></caption>
<table frame="hsides">
<colgroup>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
</colgroup>
<thead>
<tr>
<th align="left">Method</th>
<th align="left">Task</th>
<th align="left">Accuracy</th>
<th align="left">Precision</th>
<th align="left">Recall</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left">ResNet10&#x2009;&#x002B;&#x2009;SE&#x2009;&#x002B;&#x2009;FC</td>
<td align="left">AD/MCI/NC</td>
<td align="left">0.91</td>
<td align="left">0.86</td>
<td align="left">0.74</td>
</tr>
<tr>
<td align="left">ResNet18&#x2009;&#x002B;&#x2009;SE&#x2009;&#x002B;&#x2009;FC</td>
<td align="left">AD/MCI/NC</td>
<td align="left">0.88</td>
<td align="left">0.73</td>
<td align="left">0.87</td>
</tr>
<tr>
<td align="left">ResNet34&#x2009;&#x002B;&#x2009;SE&#x2009;&#x002B;&#x2009;FC</td>
<td align="left">AD/MCI/NC</td>
<td align="left">0.86</td>
<td align="left">0.81</td>
<td align="left">0.72</td>
</tr>
<tr>
<td align="left"><bold>ResNet50</bold>&#x2009;<bold>&#x002B;</bold>&#x2009;<bold>SE</bold>&#x2009;<bold>&#x002B;</bold>&#x2009;<bold>FC</bold></td>
<td align="left"><bold>AD/MCI/NC</bold></td>
<td align="left"><bold>0.92</bold></td>
<td align="left"><bold>0.93</bold></td>
<td align="left"><bold>0.89</bold></td>
</tr>
<tr>
<td align="left">ResNet101&#x2009;&#x002B;&#x2009;SE&#x2009;&#x002B;&#x2009;FC</td>
<td align="left">AD/MCI/NC</td>
<td align="left">0.84</td>
<td align="left">0.69</td>
<td align="left">0.82</td>
</tr>
<tr>
<td align="left">ResNet152&#x2009;&#x002B;&#x2009;SE&#x2009;&#x002B;&#x2009;FC</td>
<td align="left">AD/MCI/NC</td>
<td align="left">0.86</td>
<td align="left">0.88</td>
<td align="left">0.83</td>
</tr>
<tr>
<td align="left">ResNet200&#x2009;&#x002B;&#x2009;SE&#x2009;&#x002B;&#x2009;FC</td>
<td align="left">AD/MCI/NC</td>
<td align="left">0.89</td>
<td align="left">0.84</td>
<td align="left">0.87</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>The loss of every experimental model converged after 60 epochs of training.</p>
<p>In <xref ref-type="table" rid="table-3">Table 3</xref>&#x2019;s direct transfer category, ResNet50&#x2009;&#x002B;&#x2009;FC emerged as the top performer, achieving 89&#x0025; classification accuracy, 96&#x0025; precision, and 89&#x0025; recall. In <xref ref-type="table" rid="table-4">Table 4</xref>&#x2019;s SE channel attention category, ResNet50&#x2009;&#x002B;&#x2009;SE&#x2009;&#x002B;&#x2009;FC remained the best, registering 92&#x0025; classification accuracy, 93&#x0025; precision, and 89&#x0025; recall. The experimental outcomes reveal that increased model depth during Med-3D network-based transfer does not necessarily yield superior results for AD classification. A plausible explanation is that a deeper model requires a more extensive dataset to converge during initial training. This makes convergence challenging; the unsaturated state of the transferred network, combined with frozen parameters that are not updated, may diminish model performance. Observations from the loss curves suggest that deeper networks exhibit greater fluctuations in loss during transfer. This underscores why ResNet50, with a balanced number of parameters, delivered the best classification results.</p>

</sec>
<sec id="s4_3"><label>4.3</label><title>Ablation Experiment</title>
<p>The purpose of the ablation experiment was to explore the specific roles of the transfer module and the SE channel attention module in the model. To discern the impact of each module, four experimental configurations were employed in the ablation studies: transfer&#x2009;&#x002B;&#x2009;SE&#x2009;&#x002B;&#x2009;FC, transfer&#x2009;&#x002B;&#x2009;FC, no transfer&#x2009;&#x002B;&#x2009;FC, and no transfer&#x2009;&#x002B;&#x2009;SE&#x2009;&#x002B;&#x2009;FC. Subsequent experiments were conducted across various depths of the ResNet model, utilizing consistent training and test sets to nullify any potential influence from the data.</p>
<p>In terms of loss performance, the model incorporating the transfer scheme exhibits a lower initial loss, whereas the model initialized with random parameters displays a higher initial loss. Furthermore, the transfer model converges more rapidly. As the model&#x2019;s depth increases, its convergence rate diminishes, leading to decreased accuracy in deeper configurations. This decline can be attributed to the fact that greater model depth necessitates a larger parameter set, which in turn requires more data for convergence. Given that the dataset size remains constant in this study, there is a noticeable trend: as the model&#x2019;s depth increases, its performance does not necessarily improve, and might even degrade. Nevertheless, with a larger dataset, a model with greater depth could potentially yield better outcomes. Excluding ResNet50, models equipped with the SE module outperformed their counterparts without the SE module, underscoring the SE module&#x2019;s capacity to enhance AD classification. The experimental metrics, namely Accuracy, Precision, and Recall, are shown in <xref ref-type="table" rid="table-5">Table 5</xref>.</p>
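<p>To make the SE channel attention module&#x2019;s role in these ablations concrete, the following is a minimal sketch of its squeeze-excitation-scale computation on a single sample, written in plain Python with hand-picked toy weights. In the actual model the two FC weight matrices are learned and the channels are 3D feature maps; the values below are illustrative only.</p>

```python
import math

def se_recalibrate(x, w1, w2):
    """Squeeze-and-Excitation channel recalibration for one sample.

    x  : list of C channels, each a flat list of activations
    w1 : (C/r x C) weight matrix of the squeeze FC layer
    w2 : (C x C/r) weight matrix of the excitation FC layer
    """
    # Squeeze: global average pooling per channel -> channel descriptor z
    z = [sum(ch) / len(ch) for ch in x]
    # Excitation: FC -> ReLU -> FC -> sigmoid yields channel weights in (0, 1)
    h = [max(0.0, sum(w * zi for w, zi in zip(row, z))) for row in w1]
    s = [1 / (1 + math.exp(-sum(w * hi for w, hi in zip(row, h)))) for row in w2]
    # Scale: reweight each channel by its learned importance
    return [[s_c * v for v in ch] for s_c, ch in zip(s, x)]

# Toy example: 4 channels of 2 activations each, reduction ratio r = 2.
x = [[1.0, 2.0], [0.5, 0.5], [3.0, 1.0], [0.0, 2.0]]
w1 = [[0.5, 0.0, 0.5, 0.0], [0.0, 0.5, 0.0, 0.5]]   # 2 x 4 squeeze weights
w2 = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]]  # 4 x 2 excitation weights
y = se_recalibrate(x, w1, w2)
```

Because the sigmoid gates lie strictly between 0 and 1, the module can only attenuate uninformative channels relative to informative ones, which is how it steers the network&#x2019;s attention without changing the feature map&#x2019;s shape.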
<table-wrap id="table-5"><label>Table 5</label><caption><title>Comparison of accuracy, precision, and recall for four ResNet schemes of different depths</title></caption>
<table frame="hsides">
<colgroup>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
</colgroup>
<thead>
<tr>
<th align="left">Method</th>
<th align="left">Accuracy</th>
<th align="left">Precision</th>
<th align="left">Recall</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left">ResNet10&#x2009;&#x002B;&#x2009;SE&#x2009;&#x002B;&#x2009;FC</td>
<td align="left">0.91</td>
<td align="left">0.86</td>
<td align="left">0.74</td>
</tr>
<tr>
<td align="left">ResNet10&#x2009;&#x002B;&#x2009;FC</td>
<td align="left">0.86</td>
<td align="left">0.84</td>
<td align="left">0.73</td>
</tr>
<tr>
<td align="left">NewTrain10&#x2009;&#x002B;&#x2009;FC</td>
<td align="left">0.73</td>
<td align="left">0.78</td>
<td align="left">0.72</td>
</tr>
<tr>
<td align="left">NewTrain10&#x2009;&#x002B;&#x2009;SE&#x2009;&#x002B;&#x2009;FC</td>
<td align="left">0.76</td>
<td align="left">0.63</td>
<td align="left">0.76</td>
</tr>
<tr>
<td align="left">ResNet18&#x2009;&#x002B;&#x2009;SE&#x2009;&#x002B;&#x2009;FC</td>
<td align="left">0.88</td>
<td align="left">0.73</td>
<td align="left">0.87</td>
</tr>
<tr>
<td align="left">ResNet18&#x2009;&#x002B;&#x2009;FC</td>
<td align="left">0.85</td>
<td align="left">0.83</td>
<td align="left">0.77</td>
</tr>
<tr>
<td align="left">NewTrain18&#x2009;&#x002B;&#x2009;FC</td>
<td align="left">0.65</td>
<td align="left">0.63</td>
<td align="left">0.70</td>
</tr>
<tr>
<td align="left">NewTrain18&#x2009;&#x002B;&#x2009;SE&#x2009;&#x002B;&#x2009;FC</td>
<td align="left">0.76</td>
<td align="left">0.62</td>
<td align="left">0.71</td>
</tr>
<tr>
<td align="left">ResNet34&#x2009;&#x002B;&#x2009;SE&#x2009;&#x002B;&#x2009;FC</td>
<td align="left">0.86</td>
<td align="left">0.81</td>
<td align="left">0.72</td>
</tr>
<tr>
<td align="left">ResNet34&#x2009;&#x002B;&#x2009;FC</td>
<td align="left">0.82</td>
<td align="left">0.71</td>
<td align="left">0.82</td>
</tr>
<tr>
<td align="left">NewTrain34&#x2009;&#x002B;&#x2009;FC</td>
<td align="left">0.63</td>
<td align="left">0.72</td>
<td align="left">0.65</td>
</tr>
<tr>
<td align="left">NewTrain34&#x2009;&#x002B;&#x2009;SE&#x2009;&#x002B;&#x2009;FC</td>
<td align="left">0.76</td>
<td align="left">0.82</td>
<td align="left">0.69</td>
</tr>
<tr>
<td align="left"><bold>ResNet50</bold>&#x2009;<bold>&#x002B;</bold>&#x2009;<bold>SE</bold>&#x2009;<bold>&#x002B;</bold>&#x2009;<bold>FC</bold></td>
<td align="left"><bold>0.92</bold></td>
<td align="left"><bold>0.93</bold></td>
<td align="left"><bold>0.89</bold></td>
</tr>
<tr>
<td align="left">ResNet50&#x2009;&#x002B;&#x2009;FC</td>
<td align="left">0.89</td>
<td align="left">0.96</td>
<td align="left">0.89</td>
</tr>
<tr>
<td align="left">NewTrain50&#x2009;&#x002B;&#x2009;FC</td>
<td align="left">0.54</td>
<td align="left">0.64</td>
<td align="left">0.57</td>
</tr>
<tr>
<td align="left">NewTrain50&#x2009;&#x002B;&#x2009;SE&#x2009;&#x002B;&#x2009;FC</td>
<td align="left">0.53</td>
<td align="left">0.65</td>
<td align="left">0.67</td>
</tr>
<tr>
<td align="left">ResNet101&#x2009;&#x002B;&#x2009;SE&#x2009;&#x002B;&#x2009;FC</td>
<td align="left">0.84</td>
<td align="left">0.69</td>
<td align="left">0.82</td>
</tr>
<tr>
<td align="left">ResNet101&#x2009;&#x002B;&#x2009;FC</td>
<td align="left">0.79</td>
<td align="left">0.73</td>
<td align="left">0.84</td>
</tr>
<tr>
<td align="left">NewTrain101&#x2009;&#x002B;&#x2009;FC</td>
<td align="left">0.67</td>
<td align="left">0.76</td>
<td align="left">0.67</td>
</tr>
<tr>
<td align="left">NewTrain101&#x2009;&#x002B;&#x2009;SE&#x2009;&#x002B;&#x2009;FC</td>
<td align="left">0.72</td>
<td align="left">0.85</td>
<td align="left">0.78</td>
</tr>
<tr>
<td align="left">ResNet152&#x2009;&#x002B;&#x2009;SE&#x2009;&#x002B;&#x2009;FC</td>
<td align="left">0.86</td>
<td align="left">0.88</td>
<td align="left">0.83</td>
</tr>
<tr>
<td align="left">ResNet152&#x2009;&#x002B;&#x2009;FC</td>
<td align="left">0.83</td>
<td align="left">0.81</td>
<td align="left">0.86</td>
</tr>
<tr>
<td align="left">NewTrain152&#x2009;&#x002B;&#x2009;FC</td>
<td align="left">0.75</td>
<td align="left">0.73</td>
<td align="left">0.86</td>
</tr>
<tr>
<td align="left">NewTrain152&#x2009;&#x002B;&#x2009;SE&#x2009;&#x002B;&#x2009;FC</td>
<td align="left">0.77</td>
<td align="left">0.86</td>
<td align="left">0.88</td>
</tr>
<tr>
<td align="left">ResNet200&#x2009;&#x002B;&#x2009;SE&#x2009;&#x002B;&#x2009;FC</td>
<td align="left">0.89</td>
<td align="left">0.84</td>
<td align="left">0.87</td>
</tr>
<tr>
<td align="left">ResNet200&#x2009;&#x002B;&#x2009;FC</td>
<td align="left">0.87</td>
<td align="left">0.82</td>
<td align="left">0.89</td>
</tr>
<tr>
<td align="left">NewTrain200&#x2009;&#x002B;&#x2009;FC</td>
<td align="left">0.68</td>
<td align="left">0.77</td>
<td align="left">0.86</td>
</tr>
<tr>
<td align="left">NewTrain200&#x2009;&#x002B;&#x2009;SE&#x2009;&#x002B;&#x2009;FC</td>
<td align="left">0.72</td>
<td align="left">0.85</td>
<td align="left">0.78</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>To facilitate the comparison of the performance of each method, a visual representation is provided in <xref ref-type="fig" rid="fig-8">Fig. 8</xref>. From this figure, it is evident that ResNet50 is the most effective, with the ResNet50&#x2009;&#x002B;&#x2009;SE&#x2009;&#x002B;&#x2009;FC tri-classification achieving an accuracy of 92&#x0025;. Without transfer, ResNet152&#x2009;&#x002B;&#x2009;SE&#x2009;&#x002B;&#x2009;FC emerges as the top performer, attaining a 77&#x0025; accuracy rate. Regarding precision, ResNet50&#x2009;&#x002B;&#x2009;SE&#x2009;&#x002B;&#x2009;FC leads with 93&#x0025;, and in terms of recall, ResNet50 again proves superior with 89&#x0025;. These findings affirm the efficacy of both the transfer strategy and the attention mechanism [<xref ref-type="bibr" rid="ref-37">37</xref>].</p>
<fig id="fig-8"><label>Figure 8</label><caption><title>Comparison of four programs precision, recall, accuracy</title></caption><graphic mimetype="image" mime-subtype="tif" xlink:href="CMC_46872-fig-8.tif"/></fig>
<p>In summary, using the same training and test sets and training for 60 epochs, the transfer model outperformed the directly trained group, boosting accuracy by approximately 20&#x0025;. Incorporating the SE module further increased accuracy by around 4&#x0025;, fully validating the effectiveness of transfer based on the Med-3D network and demonstrating the SE module&#x2019;s capacity to enhance AD classification. To offer a clear perspective on model performance, this subsection contrasts our approach with other models tested on the ADNI dataset. The findings are detailed in <xref ref-type="table" rid="table-6">Table 6</xref>.</p>
<table-wrap id="table-6"><label>Table 6</label><caption><title>Comparison of other models</title></caption>
<table frame="hsides">
<colgroup>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
</colgroup>
<thead>
<tr>
<th align="left">Method</th>
<th align="left">DataSet</th>
<th align="left">Accuracy</th>
<th align="left">Precision</th>
<th align="left">Recall</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left">SVM [<xref ref-type="bibr" rid="ref-38">38</xref>]</td>
<td align="left">ADNI</td>
<td align="left">0.89</td>
<td align="left">&#x2013;</td>
<td align="left">&#x2013;</td>
</tr>
<tr>
<td align="left">MDNN [<xref ref-type="bibr" rid="ref-11">11</xref>]</td>
<td align="left">ADNI</td>
<td align="left">0.81</td>
<td align="left">0.83</td>
<td align="left">0.73</td>
</tr>
<tr>
<td align="left">DenseNet [<xref ref-type="bibr" rid="ref-13">13</xref>]</td>
<td align="left">ADNI</td>
<td align="left">0.89</td>
<td align="left">0.90</td>
<td align="left">0.87</td>
</tr>
<tr>
<td align="left">RevGAN&#x2009;&#x002B;&#x2009;3DCNN [<xref ref-type="bibr" rid="ref-39">39</xref>]</td>
<td align="left">ADNI</td>
<td align="left">0.89</td>
<td align="left">0.96</td>
<td align="left">0.90</td>
</tr>
<tr>
<td align="left">U-net [<xref ref-type="bibr" rid="ref-40">40</xref>]</td>
<td align="left">ADNI</td>
<td align="left">0.86</td>
<td align="left">&#x2013;</td>
<td align="left">&#x2013;</td>
</tr>
<tr>
<td align="left">CNN&#x2009;&#x002B;&#x2009;EL [<xref ref-type="bibr" rid="ref-41">41</xref>]</td>
<td align="left">ADNI</td>
<td align="left">0.84</td>
<td align="left">&#x2013;</td>
<td align="left">&#x2013;</td>
</tr>
<tr>
<td align="left"><bold>Med-3D</bold>&#x2009;<bold>&#x002B;</bold>&#x2009;<bold>SE (Ours)</bold></td>
<td align="left">ADNI</td>
<td align="left"><bold>0.92</bold></td>
<td align="left"><bold>0.93</bold></td>
<td align="left"><bold>0.89</bold></td>
</tr>
</tbody>
</table>
</table-wrap>
<p>The table above compares recent AD classification models on accuracy, precision, and recall. When contrasted with traditional SVM methods and various other models, our proposed transfer and attention mechanism approach consistently leads in accuracy, improving on the best result in the literature [<xref ref-type="bibr" rid="ref-39">39</xref>] by 3 percentage points. At the same time, both precision and recall remain commendably high. These outcomes further substantiate the efficacy of Alzheimer&#x2019;s disease classification methods grounded on Med-3D network transfer.</p>
</sec>
<sec id="s4_4"><label>4.4</label><title>AD-Assisted Diagnosis System Based on Hot Updating</title>
<p>High-performance, high-precision models continue to emerge. Ensuring that the proposed model is efficient in diagnosis, returning results as soon as a user&#x2019;s medical image is submitted, is the goal pursued by all current diagnostic systems [<xref ref-type="bibr" rid="ref-42">42</xref>].</p>
<p>Our survey found that most hospitals still rely on earlier machine learning models for assisted diagnosis, and the accuracy of current clinical AD diagnosis remains too low, increasing doctors&#x2019; workload. This paper proposes an assisted-diagnosis solution based on a hot-update SDK to address the low accuracy of current clinical assisted-diagnosis systems, enabling users to transparently run the latest AD diagnosis model and thus always obtain the highest available accuracy.</p>
<sec id="s4_4_1"><label>4.4.1</label><title>Hot Updating Methods</title>
<p>The overall system adheres to the traditional B/S (browser/server) architecture, with one distinct difference: the hot update module is decoupled from the browser, which remains the main body interacting with the user. The architecture consists of a data layer, a hot update layer, a system service layer, a communication layer, and an interaction layer. Except for the interaction layer, each layer provides services to the layer above it.</p>
<p>Users can access the AD diagnostic service at the latest diagnostic accuracy transparently, eliminating the need to manually update the software or its version. This greatly reduces the burden on users and makes the system user-friendly. The principle of hot updating is shown in <xref ref-type="fig" rid="fig-9">Fig. 9</xref>:</p>
<fig id="fig-9"><label>Figure 9</label><caption><title>Hot update schematic</title></caption><graphic mimetype="image" mime-subtype="tif" xlink:href="CMC_46872-fig-9.tif"/></fig>
<p>As the hot update schematic in <xref ref-type="fig" rid="fig-9">Fig. 9</xref> shows, achieving the hot update effect requires roughly eight steps. First, users access our provided access SDK [<xref ref-type="bibr" rid="ref-43">43</xref>] through the AD diagnostic service system (any Web diagnostic system can use this SDK). At this stage, the access SDK lacks diagnostic capabilities. It then queries the server&#x2019;s database for the most recent diagnostic SDK version number. Once obtained, the access SDK can load the diagnostic SDK with its full diagnostic functionality, downloading the resources to the Web terminal and utilizing the AD diagnostic service that the diagnostic SDK offers.</p>
<p>The red line illustrates the procedure for retrieving the diagnostic SDK&#x2019;s version number from the database. Once the training of the new deep learning model reaches high accuracy, the SDK manager is informed, prompting the integration of this newly trained model to unveil the latest version. The server then updates the database with this latest version for future access SDK inquiries.</p>
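<p>The publish-and-resolve handshake described above can be sketched as follows. All names here are hypothetical stand-ins, not the paper&#x2019;s actual implementation: the registry dict plays the role of the server&#x2019;s MySQL version table, and the two functions mirror the SDK manager&#x2019;s release step and the access SDK&#x2019;s version query:</p>

```python
# Stand-in for the server-side MySQL table holding SDK version records.
SDK_REGISTRY = {}

def publish_sdk(version, bundle_url):
    # Performed by the SDK manager after a newly trained model is packaged:
    # register the fresh diagnostic SDK as the latest version (the "red line").
    SDK_REGISTRY["latest"] = {"version": version, "bundle": bundle_url}

def resolve_latest():
    # Performed by the access SDK on page load: query the database for the
    # newest diagnostic SDK version, then download and load that bundle.
    entry = SDK_REGISTRY.get("latest")
    if entry is None:
        raise RuntimeError("no diagnostic SDK has been published yet")
    return entry["version"], entry["bundle"]

# Hypothetical release followed by a client lookup.
publish_sdk("1.4.0", "https://example.com/diag-sdk-1.4.0.js")
version, bundle = resolve_latest()
```

Because the client resolves the version at load time rather than bundling it, a new model release becomes visible to every hospital on its next page load, with no manual upgrade step.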
</sec>
<sec id="s4_4_2"><label>4.4.2</label><title>System Configuration and Development Environment</title>
<p>The system is developed on the macOS platform, with an NXP Mifare1 chip and 16 GB of RAM. On the browser side, HTML, CSS, and JavaScript are the development languages, with React (provided by Meta, formerly Facebook) as the framework. The server side primarily uses Python 3.8, a choice that facilitates the use of the deep learning interface provided by PyTorch; the deep learning model itself is built with PyTorch. The system runs on a Linux server supplied by Tencent Cloud with a 1-core, 2 GB specification. For data storage, the server employs a MySQL database. The front end and back end communicate over HTTP (Hypertext Transfer Protocol).</p>
</sec>
<sec id="s4_4_3"><label>4.4.3</label><title>System Realization Demonstration</title>
<p>The SDK hot update stands as the central focus of the AD-assisted diagnosis system discussed in this paper. The activity diagram aptly captures the interplay between various roles during the system&#x2019;s operation. This hot update module encompasses SDK release, doctor access, doctor account management, patient diagnosis, and condition analysis.
<list list-type="simple">
<list-item><label>1)</label><p>Workbench</p></list-item>
</list></p>
<p>The workbench page partially visualizes and queries embedded data, including the total number of online accesses, the latest SDK version, daily platform activity, and daily hospital counts. It displays the fluctuation in access numbers over the past 12 months with a curve graph and the distribution of accessed SDKs by category with a pie chart. When the system administrator posts an announcement, it is appended to a sequential announcement feed. The workbench interface is shown in <xref ref-type="fig" rid="fig-10">Fig. 10</xref>.</p>
<fig id="fig-10"><label>Figure 10</label><caption><title>Workbench page</title></caption><graphic mimetype="image" mime-subtype="tif" xlink:href="CMC_46872-fig-10.tif"/></fig>
<list list-type="simple">
<list-item><label>2)</label><p>AD Diagnosis</p>
</list-item>
</list>
<p>The diagnostic AD page allows users to upload MRI images to the server and showcases diagnostic results from the server using charts. It also offers image debugging capabilities. Given that the diagnostic SDK employed is for a three-category task, there are three bar charts displayed. The longest bar indicates the patient&#x2019;s diagnostic result. For instance, as depicted in <xref ref-type="fig" rid="fig-11">Fig. 11</xref> below, if the longest bar represents an AD diagnosis, then the patient is diagnosed with AD.</p>
<fig id="fig-11"><label>Figure 11</label><caption><title>AD diagnostics page</title></caption><graphic mimetype="image" mime-subtype="tif" xlink:href="CMC_46872-fig-11.tif"/></fig>
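The bar lengths on the diagnosis page correspond to class probabilities. A minimal sketch of this step, assuming the server applies a standard softmax to the model's raw scores (the logit values below are made up for illustration):

```python
import math

# Illustrative sketch: the diagnostic SDK is a three-category classifier, so
# the page draws three bars. Raw model scores (logits) are converted into
# softmax probabilities that set the bar lengths; the longest bar is the
# diagnosis. Labels are assumed, not taken from the paper.
LABELS = ["CN", "MCI", "AD"]

def softmax(logits):
    m = max(logits)                       # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

logits = [0.4, 1.1, 2.7]
probs = softmax(logits)
diagnosis = LABELS[probs.index(max(probs))]  # class with the longest bar
print(diagnosis)
```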
<p>Additionally, the integration of Cornerstone facilitates grayscale value [<xref ref-type="bibr" rid="ref-44">44</xref>] adjustment, aiding doctors in making more informed decisions. An example of grayscale adjustment is presented in <xref ref-type="fig" rid="fig-12">Fig. 12</xref>.</p>
<fig id="fig-12"><label>Figure 12</label><caption><title>Grayscale adjustment comparison chart</title></caption><graphic mimetype="image" mime-subtype="tif" xlink:href="CMC_46872-fig-12.tif"/></fig>
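The grayscale adjustment Cornerstone performs in the browser follows the standard window/level mapping: raw intensities are linearly mapped through a window center and width into the displayable 0&#x2013;255 range. The sketch below uses generic DICOM windowing terms as an assumption; it is not Cornerstone's actual implementation.

```python
# Illustrative window/level (grayscale) mapping: values below the window are
# clamped to black, values above it to white, and values inside it are
# rescaled linearly to 0-255.

def apply_window(pixel, center, width):
    """Map a raw intensity to 0-255 via a linear window/level transform."""
    low = center - width / 2.0
    high = center + width / 2.0
    if pixel <= low:
        return 0
    if pixel >= high:
        return 255
    return round((pixel - low) / (high - low) * 255)
```

Widening the window flattens contrast, while narrowing it emphasizes a tissue range, which is the adjustment shown in the comparison chart.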
</sec>
</sec>
</sec>
<sec id="s5"><label>5</label><title>Conclusions</title>
<p>We constructed a new Alzheimer&#x2019;s disease-assisted diagnosis system, based on a hot update SDK, that diagnoses AD from input MRI images, in contrast to traditional medical diagnostic methods. The system supports the following assertions: (1) The Med-3D transfer model can process input 3D MRI images, and its diagnoses achieve high accuracy. (2) The system can visualize the model&#x2019;s diagnostic results: after a brain MRI image is input, the browser displays the Cornerstone-rendered medical images together with the diagnostic results. (3) Based on the hot update SDK, the assisted diagnosis program lets users transparently use the latest AD diagnostic model, achieving the latest and highest accuracy while sparing hospitals the disruption of version upgrades or model replacement. At the same time, it supports visual data display, which helps model trainers make further adjustments to the deep learning model. Although our experiments show that the proposed Med-3D transfer model performs well on AD diagnostic tasks, it does not explore finer-grained categorization stages such as EMCI and LMCI. In subsequent studies, we will investigate this aspect more deeply and apply the model to more practical scenarios.</p>
</sec>
</body>
<back>
<ack>
<p>The authors would like to thank the anonymous reviewers and the editor for their help.</p>
</ack>
<sec><title>Funding Statement</title>
<p>This research was funded by the National Natural Science Foundation of China (No. 62076044), Scientific Research Foundation of Chongqing University of Technology (No. 2020ZDZ015).</p></sec>
<sec><title>Author Contributions</title>
<p>The authors confirm contribution to the paper as follows: study conception and design: Yanmei Li, Jinghong Tang, Jian Luo, Naveed Ahmad, Rajesh Kumar; data collection: Jinghong Tang, Weiwu Ding; analysis and interpretation of results: Jinghong Tang, Weiwu Ding; draft manuscript preparation: Yanmei Li, Jinghong Tang. All authors reviewed the results and approved the final version of the manuscript.</p></sec>
<sec sec-type="data-availability"><title>Availability of Data and Materials</title>
<p>The data used in this paper can be found via Google Scholar; Section 4 uses the dataset available at <ext-link ext-link-type="uri" xlink:href="https://adni.loni.usc.edu/">https://adni.loni.usc.edu/</ext-link>.</p></sec>
<sec sec-type="COI-statement"><title>Conflicts of Interest</title>
<p>The authors declare that they have no conflicts of interest to report regarding the present study.</p></sec>
<ref-list content-type="authoryear">
<title>References</title>
<ref id="ref-1"><label>[1]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>E. D.</given-names> <surname>Roberson</surname></string-name> and <string-name><given-names>L.</given-names> <surname>Mucke</surname></string-name></person-group>, &#x201C;<article-title>100 years and counting: Prospects for defeating Alzheimer&#x2019;s disease</article-title>,&#x201D; <source>Science</source>, vol. <volume>314</volume>, no. <issue>5800</issue>, pp. <fpage>781</fpage>&#x2013;<lpage>784</lpage>, <year>2006</year>; <pub-id pub-id-type="pmid">17082448</pub-id></mixed-citation></ref>
<ref id="ref-2"><label>[2]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>S.</given-names> <surname>Gauthier</surname></string-name>, <string-name><given-names>B.</given-names> <surname>Reisberg</surname></string-name>, <string-name><given-names>M.</given-names> <surname>Zaudig</surname></string-name>, <string-name><given-names>R. C.</given-names> <surname>Petersen</surname></string-name>, <string-name><given-names>K.</given-names> <surname>Ritchie</surname></string-name> <etal>et al.,</etal></person-group> &#x201C;<article-title>Mild cognitive impairment</article-title>,&#x201D; <source>The Lancet</source>, vol. <volume>367</volume>, no. <issue>9518</issue>, pp. <fpage>1262</fpage>&#x2013;<lpage>1270</lpage>, <year>2006</year>.</mixed-citation></ref>
<ref id="ref-3"><label>[3]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>P.</given-names> <surname>Hamet</surname></string-name> and <string-name><given-names>J.</given-names> <surname>Tremblay</surname></string-name></person-group>, &#x201C;<article-title>Artificial intelligence in medicine</article-title>,&#x201D; <source>Metabolism</source>, vol. <volume>69</volume>, pp. <fpage>S36</fpage>&#x2013;<lpage>S40</lpage>, <year>2017</year>.</mixed-citation></ref>
<ref id="ref-4"><label>[4]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>C. Y.</given-names> <surname>Chen</surname></string-name>, <string-name><given-names>K. J. D.</given-names> <surname>Hong</surname></string-name>, <string-name><given-names>W. C.</given-names> <surname>Wu</surname></string-name> and <string-name><given-names>M.</given-names> <surname>Mupparapu</surname></string-name></person-group>, &#x201C;<article-title>The use of deep convolutional neural networks in biomedical imaging: A review</article-title>,&#x201D; <source>Journal of Orofacial Sciences</source>, vol. <volume>11</volume>, no. <issue>1</issue>, pp. <fpage>3</fpage>&#x2013;<lpage>10</lpage>, <year>2019</year>.</mixed-citation></ref>
<ref id="ref-5"><label>[5]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>M.</given-names> <surname>Hon</surname></string-name> and <string-name><given-names>M. N.</given-names> <surname>Khan</surname></string-name></person-group>, &#x201C;<article-title>Towards Alzheimer&#x2019;s disease classification through transfer learning</article-title>,&#x201D; in <conf-name>Proc. of IEEE Int. Conf. on Bioinformatics and Biomedicine</conf-name>, <conf-loc>Kansas City, MO, USA</conf-loc>, pp. <fpage>1166</fpage>&#x2013;<lpage>1169</lpage>, <year>2017</year>.</mixed-citation></ref>
<ref id="ref-6"><label>[6]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>S.</given-names> <surname>Geethanath</surname></string-name> and <string-name><given-names>J. T.</given-names> <suffix>Jr.</suffix> <surname>Vaughan</surname></string-name></person-group>, &#x201C;<article-title>Accessible magnetic resonance imaging: A review</article-title>,&#x201D; <source>Journal of Magnetic Resonance Imaging</source>, vol. <volume>49</volume>, no. <issue>7</issue>, pp. <fpage>e65</fpage>&#x2013;<lpage>e77</lpage>, <year>2019</year>; <pub-id pub-id-type="pmid">30637891</pub-id></mixed-citation></ref>
<ref id="ref-7"><label>[7]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>R.</given-names> <surname>Cuingnet</surname></string-name>, <string-name><given-names>E.</given-names> <surname>Gerardin</surname></string-name>, <string-name><given-names>J.</given-names> <surname>Tessieras</surname></string-name>, <string-name><given-names>G.</given-names> <surname>Auzias</surname></string-name>, <string-name><given-names>S.</given-names> <surname>Leh&#x00E9;ricy</surname></string-name> <etal>et al.,</etal></person-group> &#x201C;<article-title>Automatic classification of patients with Alzheimer&#x2019;s disease from structural MRI: A comparison of ten methods using the ADNI database</article-title>,&#x201D; <source>Neuroimage</source>, vol. <volume>56</volume>, no. <issue>2</issue>, pp. <fpage>766</fpage>&#x2013;<lpage>781</lpage>, <year>2011</year>; <pub-id pub-id-type="pmid">20542124</pub-id></mixed-citation></ref>
<ref id="ref-8"><label>[8]</label><mixed-citation publication-type="thesis"><person-group person-group-type="author"><string-name><given-names>A. M.</given-names> <surname>Warsi</surname></string-name></person-group>, &#x201C;<article-title>The fractal nature and functional connectivity of brain function as measured by BOLD MRI in Alzheimer&#x2019;s disease</article-title>.&#x201D; <comment>Ph.D. dissertation</comment>, <publisher-name>McMaster University</publisher-name>, <publisher-loc>Canada</publisher-loc>, <year>2012</year>.</mixed-citation></ref>
<ref id="ref-9"><label>[9]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>S.</given-names> <surname>Basaia</surname></string-name>, <string-name><given-names>F.</given-names> <surname>Agosta</surname></string-name>, <string-name><given-names>L.</given-names> <surname>Wagner</surname></string-name>, <string-name><given-names>E.</given-names> <surname>Canu</surname></string-name>, <string-name><given-names>G.</given-names> <surname>Magnani</surname></string-name> <etal>et al.,</etal></person-group> &#x201C;<article-title>Automated classification of Alzheimer&#x2019;s disease and mild cognitive impairment using a single MRI and deep neural networks</article-title>,&#x201D; <source>NeuroImage: Clinical</source>, vol. <volume>21</volume>, pp. <fpage>101645</fpage>, <year>2019</year>; <pub-id pub-id-type="pmid">30584016</pub-id></mixed-citation></ref>
<ref id="ref-10"><label>[10]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>J.</given-names> <surname>Islam</surname></string-name> and <string-name><given-names>Y.</given-names> <surname>Zhang</surname></string-name></person-group>, &#x201C;<article-title>Early diagnosis of Alzheimer&#x2019;s disease: A neuroimaging study with deep learning architectures</article-title>,&#x201D; in <conf-name>Proc. of IEEE Conf. on Computer Vision and Pattern Recognition Workshops</conf-name>, <conf-loc>Salt Lake City, UT, USA</conf-loc>, pp. <fpage>1881</fpage>&#x2013;<lpage>1883</lpage>, <year>2018</year>.</mixed-citation></ref>
<ref id="ref-11"><label>[11]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>D.</given-names> <surname>Lu</surname></string-name>, <string-name><given-names>K.</given-names> <surname>Popuri</surname></string-name>, <string-name><given-names>G. W.</given-names> <surname>Ding</surname></string-name>, <string-name><given-names>R.</given-names> <surname>Balachandar</surname></string-name> and <string-name><given-names>M.</given-names> <surname>Faisal Beg</surname></string-name></person-group>, &#x201C;<article-title>Multiscale deep neural network based analysis of FDG-PET images for the early diagnosis of Alzheimer&#x2019;s disease</article-title>,&#x201D; <source>Medical Image Analysis</source>, vol. <volume>46</volume>, pp. <fpage>26</fpage>&#x2013;<lpage>34</lpage>, <year>2018</year>; <pub-id pub-id-type="pmid">29502031</pub-id></mixed-citation></ref>
<ref id="ref-12"><label>[12]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>K.</given-names> <surname>B&#x00E4;ckstr&#x00F6;m</surname></string-name>, <string-name><given-names>M.</given-names> <surname>Nazari</surname></string-name>, <string-name><given-names>I. Y.</given-names> <surname>Gu</surname></string-name> and <string-name><given-names>A. S.</given-names> <surname>Jakola</surname></string-name></person-group>, &#x201C;<article-title>An efficient 3D deep convolutional network for Alzheimer&#x2019;s disease diagnosis using MR images</article-title>,&#x201D; in <conf-name>Proc. of 2018 IEEE 15th Int. Symp. on Biomedical Imaging</conf-name>, <conf-loc>Washington DC, USA</conf-loc>, pp. <fpage>149</fpage>&#x2013;<lpage>153</lpage>, <year>2018</year>.</mixed-citation></ref>
<ref id="ref-13"><label>[13]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>F.</given-names> <surname>Li</surname></string-name> and <string-name><given-names>M.</given-names> <surname>Liu</surname></string-name></person-group>, &#x201C;<article-title>Alzheimer&#x2019;s disease diagnosis based on multiple cluster dense convolutional networks</article-title>,&#x201D; <source>Computerized Medical Imaging and Graphics</source>, vol. <volume>70</volume>, pp. <fpage>101</fpage>&#x2013;<lpage>110</lpage>, <year>2018</year>; <pub-id pub-id-type="pmid">30340094</pub-id></mixed-citation></ref>
<ref id="ref-14"><label>[14]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>T.</given-names> <surname>Wang</surname></string-name>, <string-name><given-names>L. J.</given-names> <surname>Qiu</surname></string-name>, <string-name><given-names>G. R.</given-names> <surname>Qiu</surname></string-name> and <string-name><given-names>M.</given-names> <surname>Yu</surname></string-name></person-group>, &#x201C;<article-title>Early detection models for persons with probable Alzheimer&#x2019;s disease with deep learning</article-title>,&#x201D; in <conf-name>Proc. of 2018 2nd IEEE Advanced Information Management, Communicates, Electronic and Automation Control Conf.</conf-name>, <conf-loc>Xi&#x2019;an, China</conf-loc>, pp. <fpage>2089</fpage>&#x2013;<lpage>2092</lpage>, <year>2018</year>.</mixed-citation></ref>
<ref id="ref-15"><label>[15]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>R.</given-names> <surname>Jain</surname></string-name>, <string-name><given-names>N.</given-names> <surname>Jain</surname></string-name>, <string-name><given-names>A.</given-names> <surname>Aggarwal</surname></string-name> and <string-name><given-names>D. J.</given-names> <surname>Hemanth</surname></string-name></person-group>, &#x201C;<article-title>Convolutional neural network based Alzheimer&#x2019;s disease classification from magnetic resonance brain images</article-title>,&#x201D; <source>Cognitive Systems Research</source>, vol. <volume>57</volume>, pp. <fpage>147</fpage>&#x2013;<lpage>159</lpage>, <year>2019</year>.</mixed-citation></ref>
<ref id="ref-16"><label>[16]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>Y.</given-names> <surname>Kazemi</surname></string-name> and <string-name><given-names>S.</given-names> <surname>Houghten</surname></string-name></person-group>, &#x201C;<article-title>A deep learning pipeline to classify different stages of Alzheimer&#x2019;s disease from fMRI data</article-title>,&#x201D; in <conf-name>Proc. of 2018 IEEE Conf. on Computational Intelligence in Bioinformatics and Computational Biology</conf-name>, <conf-loc>Saint Louis, MO, USA</conf-loc>, pp. <fpage>1</fpage>&#x2013;<lpage>8</lpage>, <year>2018</year>.</mixed-citation></ref>
<ref id="ref-17"><label>[17]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>W.</given-names> <surname>Lin</surname></string-name>, <string-name><given-names>T.</given-names> <surname>Tong</surname></string-name>, <string-name><given-names>Q.</given-names> <surname>Gao</surname></string-name>, <string-name><given-names>D.</given-names> <surname>Guo</surname></string-name>, <string-name><given-names>X.</given-names> <surname>Du</surname></string-name> <etal>et al.,</etal></person-group> &#x201C;<article-title>Convolutional neural networks-based MRI image analysis for the Alzheimer&#x2019;s disease prediction from mild cognitive impairment</article-title>,&#x201D; <source>Frontiers in Neuroscience</source>, vol. <volume>12</volume>, pp. <fpage>777</fpage>, <year>2018</year>; <pub-id pub-id-type="pmid">30455622</pub-id></mixed-citation></ref>
<ref id="ref-18"><label>[18]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>K.</given-names> <surname>Oh</surname></string-name>, <string-name><given-names>C. Y.</given-names> <surname>Chung</surname></string-name>, <string-name><given-names>K. W.</given-names> <surname>Kim</surname></string-name>, <string-name><given-names>W. S.</given-names> <surname>Kim</surname></string-name> and <string-name><given-names>I. S.</given-names> <surname>Oh</surname></string-name></person-group>, &#x201C;<article-title>Classification and visualization of Alzheimer&#x2019;s disease using volumetric convolutional neural network and transfer learning</article-title>,&#x201D; <source>Scientific Reports</source>, vol. <volume>9</volume>, no. <issue>1</issue>, pp. <fpage>1</fpage>&#x2013;<lpage>16</lpage>, <year>2019</year>.</mixed-citation></ref>
<ref id="ref-19"><label>[19]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>Y.</given-names> <surname>Ding</surname></string-name>, <string-name><given-names>H. J.</given-names> <surname>Sohn</surname></string-name>, <string-name><given-names>G. M.</given-names> <surname>Kawczynski</surname></string-name>, <string-name><given-names>H.</given-names> <surname>Trivedi</surname></string-name>, <string-name><given-names>R.</given-names> <surname>Harnish</surname></string-name> <etal>et al.,</etal></person-group> &#x201C;<article-title>A deep learning model to predict a diagnosis of Alzheimer disease by using 18F-FDG PET of the brain</article-title>,&#x201D; <source>Radiology</source>, vol. <volume>290</volume>, no. <issue>2</issue>, pp. <fpage>456</fpage>&#x2013;<lpage>464</lpage>, <year>2019</year>; <pub-id pub-id-type="pmid">30398430</pub-id></mixed-citation></ref>
<ref id="ref-20"><label>[20]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>F.</given-names> <surname>Zhang</surname></string-name>, <string-name><given-names>Z.</given-names> <surname>Li</surname></string-name>, <string-name><given-names>B.</given-names> <surname>Zhang</surname></string-name>, <string-name><given-names>H.</given-names> <surname>Du</surname></string-name>, <string-name><given-names>B.</given-names> <surname>Wang</surname></string-name> <etal>et al.,</etal></person-group> &#x201C;<article-title>Multi-modal deep learning model for auxiliary diagnosis of Alzheimer&#x2019;s disease</article-title>,&#x201D; <source>Neurocomputing</source>, vol. <volume>361</volume>, pp. <fpage>185</fpage>&#x2013;<lpage>195</lpage>, <year>2019</year>.</mixed-citation></ref>
<ref id="ref-21"><label>[21]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>G.</given-names> <surname>Lee</surname></string-name>, <string-name><given-names>K.</given-names> <surname>Nho</surname></string-name>, <string-name><given-names>B.</given-names> <surname>Kang</surname></string-name>, <string-name><given-names>K.</given-names> <surname>Sohn</surname></string-name> and <string-name><given-names>D.</given-names> <surname>Kim</surname></string-name></person-group>, &#x201C;<article-title>Predicting Alzheimer&#x2019;s disease progression using multi-modal deep learning approach</article-title>,&#x201D; <source>Scientific Reports</source>, vol. <volume>9</volume>, no. <issue>1</issue>, pp. <fpage>1</fpage>&#x2013;<lpage>12</lpage>, <year>2019</year>.</mixed-citation></ref>
<ref id="ref-22"><label>[22]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>G.</given-names> <surname>Li</surname></string-name>, <string-name><given-names>D.</given-names> <surname>Xu</surname></string-name>, <string-name><given-names>X.</given-names> <surname>Cheng</surname></string-name>, <string-name><given-names>L.</given-names> <surname>Si</surname></string-name> and <string-name><given-names>C.</given-names> <surname>Zheng</surname></string-name></person-group>, &#x201C;<article-title>SimVit: Exploring a simple vision transformer with sliding windows</article-title>,&#x201D; in <conf-name>Proc. of 2022 IEEE Int. Conf. on Multimedia and Expo</conf-name>, <conf-loc>Taipei, Taiwan</conf-loc>, pp. <fpage>1</fpage>&#x2013;<lpage>6</lpage>, <year>2022</year>.</mixed-citation></ref>
<ref id="ref-23"><label>[23]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>J.</given-names> <surname>Zhang</surname></string-name>, <string-name><given-names>B.</given-names> <surname>Zheng</surname></string-name>, <string-name><given-names>A.</given-names> <surname>Gao</surname></string-name>, <string-name><given-names>X.</given-names> <surname>Feng</surname></string-name>, <string-name><given-names>D.</given-names> <surname>Liang</surname></string-name> <etal>et al.,</etal></person-group> &#x201C;<article-title>A 3D densely connected convolution neural network with connection-wise attention mechanism for Alzheimer&#x2019;s disease classification</article-title>,&#x201D; <source>Magnetic Resonance Imaging</source>, vol. <volume>78</volume>, pp. <fpage>119</fpage>&#x2013;<lpage>126</lpage>, <year>2021</year>; <pub-id pub-id-type="pmid">33588019</pub-id></mixed-citation></ref>
<ref id="ref-24"><label>[24]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>B.</given-names> <surname>Yan</surname></string-name>, <string-name><given-names>Y.</given-names> <surname>Li</surname></string-name>, <string-name><given-names>L.</given-names> <surname>Li</surname></string-name>, <string-name><given-names>X.</given-names> <surname>Yang</surname></string-name>, <string-name><given-names>T.</given-names> <surname>Li</surname></string-name> <etal>et al.,</etal></person-group> &#x201C;<article-title>Quantifying the impact of pyramid squeeze attention mechanism and filtering approaches on Alzheimer&#x2019;s disease classification</article-title>,&#x201D; <source>Computers in Biology and Medicine</source>, vol. <volume>148</volume>, pp. <fpage>105944</fpage>, <year>2022</year>; <pub-id pub-id-type="pmid">35969934</pub-id></mixed-citation></ref>
<ref id="ref-25"><label>[25]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>X.</given-names> <surname>Li</surname></string-name>, <string-name><given-names>M.</given-names> <surname>Li</surname></string-name>, <string-name><given-names>P.</given-names> <surname>Yan</surname></string-name>, <string-name><given-names>G.</given-names> <surname>Li</surname></string-name>, <string-name><given-names>Y.</given-names> <surname>Jiang</surname></string-name> <etal>et al.,</etal></person-group> &#x201C;<article-title>Deep learning attention mechanism in medical image analysis: Basics and beyonds</article-title>,&#x201D; <source>International Journal of Network Dynamics and Intelligence</source>, pp. <fpage>93</fpage>&#x2013;<lpage>116</lpage>, <year>2023</year>.</mixed-citation></ref>
<ref id="ref-26"><label>[26]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>T.</given-names> <surname>Illakiya</surname></string-name>, <string-name><given-names>K.</given-names> <surname>Ramamurthy</surname></string-name>, <string-name><given-names>V. M.</given-names> <surname>Siddharth</surname></string-name>, <string-name><given-names>R.</given-names> <surname>Mishra</surname></string-name> and <string-name><given-names>A.</given-names> <surname>Udainiya</surname></string-name></person-group>, &#x201C;<article-title>AHANet: Adaptive hybrid attention network for Alzheimer&#x2019;s disease classification using brain magnetic resonance imaging</article-title>,&#x201D; <source>Bioengineering</source>, vol. <volume>10</volume>, no. <issue>6</issue>, pp. <fpage>714</fpage>, <year>2023</year>; <pub-id pub-id-type="pmid">37370645</pub-id></mixed-citation></ref>
<ref id="ref-27"><label>[27]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>G.</given-names> <surname>Mohi ud din dar</surname></string-name>, <string-name><given-names>A.</given-names> <surname>Bhagat</surname></string-name>, <string-name><given-names>S. I.</given-names> <surname>Ansarullah</surname></string-name>, <string-name><given-names>M. T. B.</given-names> <surname>Othman</surname></string-name>, <string-name><given-names>Y.</given-names> <surname>Hamid</surname></string-name> <etal>et al.,</etal></person-group> &#x201C;<article-title>A novel framework for classification of different Alzheimer&#x2019;s disease stages using CNN model</article-title>,&#x201D; <source>Electronics</source>, vol. <volume>12</volume>, no. <issue>2</issue>, pp. <fpage>469</fpage>, <year>2023</year>.</mixed-citation></ref>
<ref id="ref-28"><label>[28]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>L.</given-names> <surname>Liu</surname></string-name>, <string-name><given-names>S.</given-names> <surname>Liu</surname></string-name>, <string-name><given-names>L.</given-names> <surname>Zhang</surname></string-name>, <string-name><given-names>X. V.</given-names> <surname>To</surname></string-name>, <string-name><given-names>F.</given-names> <surname>Nasrallah</surname></string-name> <etal>et al.,</etal></person-group> &#x201C;<article-title>Cascaded multi-modal mixing transformers for Alzheimer&#x2019;s disease classification with incomplete data</article-title>,&#x201D; <source>NeuroImage</source>, vol. <volume>277</volume>, pp. <fpage>120267</fpage>, <year>2023</year>; <pub-id pub-id-type="pmid">37422279</pub-id></mixed-citation></ref>
<ref id="ref-29"><label>[29]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>L. J. C.</given-names> <surname>de Mendonga</surname></string-name> and <string-name><given-names>J. R.</given-names> <surname>Ferrari</surname></string-name></person-group>, &#x201C;<article-title>Alzheimer&#x2019;s disease classification based on graph kernel SVMs constructed with 3D texture features extracted from MR images</article-title>,&#x201D; <source>Expert Systems with Applications</source>, vol. <volume>211</volume>, pp. <fpage>118633</fpage>, <year>2023</year>.</mixed-citation></ref>
<ref id="ref-30"><label>[30]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>C. A.</given-names> <surname>Noubissi</surname></string-name>, <string-name><given-names>J.</given-names> <surname>Iguchi-Cartigny</surname></string-name> and <string-name><given-names>L. J.</given-names> <surname>Lanet</surname></string-name></person-group>, &#x201C;<article-title>Hot updates for Java based smart cards</article-title>,&#x201D; in <conf-name>Proc. of IEEE 27th Int. Conf. on Data Engineering Workshops</conf-name>, <conf-loc>Hannover, Germany</conf-loc>, pp. <fpage>168</fpage>&#x2013;<lpage>173</lpage>, <year>2011</year>.</mixed-citation></ref>
<ref id="ref-31"><label>[31]</label><mixed-citation publication-type="other"><person-group person-group-type="author"><string-name><given-names>S.</given-names> <surname>Chen</surname></string-name>, <string-name><given-names>K.</given-names> <surname>Ma</surname></string-name> and <string-name><given-names>Y.</given-names> <surname>Zheng</surname></string-name></person-group>, &#x201C;<article-title>Med3D: Transfer learning for 3D medical image analysis</article-title>,&#x201D; arXiv preprint arXiv:1904.00625, <year>2019</year>.</mixed-citation></ref>
<ref id="ref-32"><label>[32]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>R. C.</given-names> <suffix>Jr.</suffix> <surname>Jack</surname></string-name>, <string-name><given-names>A. M.</given-names> <surname>Bernstein</surname></string-name>, <string-name><given-names>C. N.</given-names> <surname>Fox</surname></string-name>, <string-name><given-names>P.</given-names> <surname>Thompson</surname></string-name>, <string-name><given-names>G.</given-names> <surname>Alexander</surname></string-name> <etal>et al.,</etal></person-group> &#x201C;<article-title>The Alzheimer&#x2019;s disease neuroimaging initiative (ADNI): MRI methods</article-title>,&#x201D; <source>Journal of Magnetic Resonance Imaging</source>, vol. <volume>27</volume>, no. <issue>4</issue>, pp. <fpage>685</fpage>&#x2013;<lpage>691</lpage>, <year>2008</year>; <pub-id pub-id-type="pmid">18302232</pub-id></mixed-citation></ref>
<ref id="ref-33"><label>[33]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>G.</given-names> <surname>Ch&#x00E9;telat</surname></string-name>, <string-name><given-names>J.</given-names> <surname>Arbizu</surname></string-name>, <string-name><given-names>H.</given-names> <surname>Barthe</surname></string-name>, <string-name><given-names>V.</given-names> <surname>Garibotto</surname></string-name>, <string-name><given-names>I.</given-names> <surname>Law</surname></string-name> <etal>et al.,</etal></person-group> &#x201C;<article-title>Amyloid-PET and 18F-FDG-PET in the diagnostic investigation of Alzheimer&#x2019;s disease and other dementias</article-title>,&#x201D; <source>The Lancet Neurology</source>, vol. <volume>19</volume>, no. <issue>11</issue>, pp. <fpage>951</fpage>&#x2013;<lpage>962</lpage>, <year>2020</year>.</mixed-citation></ref>
<ref id="ref-34"><label>[34]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>J. S.</given-names> <surname>Andrews</surname></string-name>, <string-name><given-names>B.</given-names> <surname>Fulton-Howard</surname></string-name> and <string-name><given-names>A.</given-names> <surname>Goate</surname></string-name></person-group>, &#x201C;<article-title>Interpretation of risk loci from genome-wide association studies of Alzheimer&#x2019;s disease</article-title>,&#x201D; <source>The Lancet Neurology</source>, vol. <volume>19</volume>, no. <issue>4</issue>, pp. <fpage>326</fpage>&#x2013;<lpage>335</lpage>, <year>2020</year>; <pub-id pub-id-type="pmid">31986256</pub-id></mixed-citation></ref>
<ref id="ref-35"><label>[35]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>J.</given-names> <surname>Liu</surname></string-name>, <string-name><given-names>G.</given-names> <surname>Tan</surname></string-name>, <string-name><given-names>W.</given-names> <surname>Lan</surname></string-name> and <string-name><given-names>J.</given-names> <surname>Wang</surname></string-name></person-group>, &#x201C;<article-title>Identification of early mild cognitive impairment using multi-modal data and graph convolutional networks</article-title>,&#x201D; <source>BMC Bioinformatics</source>, vol. <volume>21</volume>, no. <issue>6</issue>, pp. <fpage>1</fpage>&#x2013;<lpage>12</lpage>, <year>2020</year>.</mixed-citation></ref>
<ref id="ref-36"><label>[36]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>F.</given-names> <surname>Mohammadian</surname></string-name>, <string-name><given-names>A. Z.</given-names> <surname>Sadeghi</surname></string-name>, <string-name><given-names>M.</given-names> <surname>Noroozian</surname></string-name>, <string-name><given-names>V.</given-names> <surname>Malekian</surname></string-name>, <string-name><given-names>M. A.</given-names> <surname>Sisara</surname></string-name> <etal>et al.,</etal></person-group> &#x201C;<article-title>Quantitative assessment of resting-state functional connectivity MRI to differentiate amnestic mild cognitive impairment, late-onset Alzheimer&#x2019;s disease from normal subjects</article-title>,&#x201D; <source>Journal of Magnetic Resonance Imaging</source>, vol. <volume>57</volume>, no. <issue>6</issue>, pp. <fpage>1702</fpage>&#x2013;<lpage>1712</lpage>, <year>2023</year>; <pub-id pub-id-type="pmid">36226735</pub-id></mixed-citation></ref>
<ref id="ref-37"><label>[37]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>Z.</given-names> <surname>Niu</surname></string-name>, <string-name><given-names>G.</given-names> <surname>Zhong</surname></string-name> and <string-name><given-names>H.</given-names> <surname>Yu</surname></string-name></person-group>, &#x201C;<article-title>A review on the attention mechanism of deep learning</article-title>,&#x201D; <source>Neurocomputing</source>, vol. <volume>452</volume>, pp. <fpage>48</fpage>&#x2013;<lpage>62</lpage>, <year>2021</year>.</mixed-citation></ref>
<ref id="ref-38"><label>[38]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>T. Y.</given-names> <surname>Zhang</surname></string-name> and <string-name><given-names>Q. S.</given-names> <surname>Liu</surname></string-name></person-group>, &#x201C;<article-title>Individual identification using multi-metric of DTI in Alzheimer&#x2019;s disease and mild cognitive impairment</article-title>,&#x201D; <source>Chinese Physics B</source>, vol. <volume>27</volume>, no. <issue>8</issue>, pp. <fpage>088702</fpage>, <year>2018</year>.</mixed-citation></ref>
<ref id="ref-39"><label>[39]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>A.</given-names> <surname>Sharma</surname></string-name> and <string-name><given-names>P.</given-names> <surname>Dey</surname></string-name></person-group>, &#x201C;<article-title>A machine learning approach to unmask novel gene signatures and prediction of Alzheimer&#x2019;s disease within different brain regions</article-title>,&#x201D; <source>Genomics</source>, vol. <volume>113</volume>, no. <issue>4</issue>, pp. <fpage>1778</fpage>&#x2013;<lpage>1789</lpage>, <year>2021</year>. <pub-id pub-id-type="pmid">33878365</pub-id>.</mixed-citation></ref>
<ref id="ref-40"><label>[40]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>Z.</given-names> <surname>Fan</surname></string-name>, <string-name><given-names>J.</given-names> <surname>Li</surname></string-name>, <string-name><given-names>L.</given-names> <surname>Zhang</surname></string-name>, <string-name><given-names>G.</given-names> <surname>Zhu</surname></string-name>, <string-name><given-names>P.</given-names> <surname>Li</surname></string-name> <etal>et al.,</etal></person-group> &#x201C;<article-title>U-net based analysis of MRI for Alzheimer&#x2019;s disease diagnosis</article-title>,&#x201D; <source>Neural Computing and Applications</source>, vol. <volume>33</volume>, pp. <fpage>13587</fpage>&#x2013;<lpage>13599</lpage>, <year>2021</year>.</mixed-citation></ref>
<ref id="ref-41"><label>[41]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>D.</given-names> <surname>Pan</surname></string-name>, <string-name><given-names>A.</given-names> <surname>Zeng</surname></string-name>, <string-name><given-names>L.</given-names> <surname>Jia</surname></string-name>, <string-name><given-names>Y.</given-names> <surname>Huang</surname></string-name>, <string-name><given-names>T.</given-names> <surname>Frizzell</surname></string-name> <etal>et al.,</etal></person-group> &#x201C;<article-title>Early detection of Alzheimer&#x2019;s disease using magnetic resonance imaging: A novel approach combining convolutional neural networks and ensemble learning</article-title>,&#x201D; <source>Frontiers in Neuroscience</source>, vol. <volume>14</volume>, pp. <fpage>259</fpage>, <year>2020</year>. <pub-id pub-id-type="pmid">32477040</pub-id>.</mixed-citation></ref>
<ref id="ref-42"><label>[42]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>M. Z.</given-names> <surname>Rahman</surname></string-name>, <string-name><given-names>M. A.</given-names> <surname>Akbar</surname></string-name>, <string-name><given-names>V.</given-names> <surname>Leiva</surname></string-name>, <string-name><given-names>A.</given-names> <surname>Tahir</surname></string-name>, <string-name><given-names>M. T.</given-names> <surname>Riaz</surname></string-name> <etal>et al.,</etal></person-group> &#x201C;<article-title>An intelligent health monitoring and diagnosis system based on the internet of things and fuzzy logic for cardiac arrhythmia COVID-19 patients</article-title>,&#x201D; <source>Computers in Biology and Medicine</source>, vol. <volume>154</volume>, pp. <fpage>106583</fpage>, <year>2023</year>. <pub-id pub-id-type="pmid">36716687</pub-id>.</mixed-citation></ref>
<ref id="ref-43"><label>[43]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>S. J.</given-names> <surname>Marshall</surname></string-name> and <string-name><given-names>A. M.</given-names> <surname>Thomson</surname></string-name></person-group>, &#x201C;<article-title>The Pandora software development kit for pattern recognition</article-title>,&#x201D; <source>The European Physical Journal C</source>, vol. <volume>75</volume>, pp. <fpage>1</fpage>&#x2013;<lpage>16</lpage>, <year>2015</year>.</mixed-citation></ref>
<ref id="ref-44"><label>[44]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>M. F.</given-names> <surname>Reyes</surname></string-name>, <string-name><given-names>S.</given-names> <surname>Auer</surname></string-name>, <string-name><given-names>N.</given-names> <surname>Merkle</surname></string-name>, <string-name><given-names>C.</given-names> <surname>Henry</surname></string-name> and <string-name><given-names>M.</given-names> <surname>Schmitt</surname></string-name></person-group>, &#x201C;<article-title>SAR-to-optical image translation based on conditional generative adversarial networks&#x2013;optimization, opportunities and limits</article-title>,&#x201D; <source>Remote Sensing</source>, vol. <volume>11</volume>, no. <issue>17</issue>, pp. <fpage>2067</fpage>, <year>2019</year>.</mixed-citation></ref>
</ref-list>
</back></article>