<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.1 20151215//EN" "http://jats.nlm.nih.gov/publishing/1.1/JATS-journalpublishing1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:mml="http://www.w3.org/1998/Math/MathML" xml:lang="en" article-type="research-article" dtd-version="1.1">
<front>
<journal-meta>
<journal-id journal-id-type="pmc">CMC</journal-id>
<journal-id journal-id-type="nlm-ta">CMC</journal-id>
<journal-id journal-id-type="publisher-id">CMC</journal-id>
<journal-title-group>
<journal-title>Computers, Materials &#x0026; Continua</journal-title>
</journal-title-group>
<issn pub-type="epub">1546-2226</issn>
<issn pub-type="ppub">1546-2218</issn>
<publisher>
<publisher-name>Tech Science Press</publisher-name>
<publisher-loc>USA</publisher-loc>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="publisher-id">47167</article-id>
<article-id pub-id-type="doi">10.32604/cmc.2024.047167</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Article</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>Enhancing Multicriteria-Based Recommendations by Alleviating Scalability and Sparsity Issues Using Collaborative Denoising Autoencoder</article-title>
<alt-title alt-title-type="left-running-head">Enhancing Multicriteria-Based Recommendations by Alleviating Scalability and Sparsity Issues Using Collaborative Denoising Autoencoder</alt-title>
<alt-title alt-title-type="right-running-head">Enhancing Multicriteria-Based Recommendations by Alleviating Scalability and Sparsity Issues Using Collaborative Denoising Autoencoder</alt-title>
</title-group>
<contrib-group>
<contrib id="author-1" contrib-type="author" corresp="yes">
<name name-style="western"><surname>Abinaya</surname><given-names>S.</given-names></name><email>s.abinaya@vit.ac.in</email></contrib>
<contrib id="author-2" contrib-type="author">
<name name-style="western"><surname>Kumar</surname><given-names>K. Uttej</given-names></name></contrib>
<aff><institution>School of Computer Science &#x0026; Engineering, Vellore Institute of Technology</institution>, <addr-line>Chennai, Tamil Nadu 600 127</addr-line>, <country>India</country></aff>
</contrib-group>
<author-notes>
<corresp id="cor1"><label>&#x002A;</label>Corresponding Author: S. Abinaya. Email: <email>s.abinaya@vit.ac.in</email></corresp>
</author-notes>
<pub-date date-type="collection" publication-format="electronic"><year>2024</year></pub-date>
<pub-date date-type="pub" publication-format="electronic"><day>27</day><month>2</month><year>2024</year></pub-date>
<volume>78</volume>
<issue>2</issue>
<fpage>2269</fpage>
<lpage>2286</lpage>
<history>
<date date-type="received"><day>27</day><month>10</month><year>2023</year>
</date>
<date date-type="accepted"><day>05</day><month>12</month><year>2023</year>
</date>
</history>
<permissions>
<copyright-statement>&#x00A9; 2024 Abinaya and Kumar</copyright-statement>
<copyright-year>2024</copyright-year>
<copyright-holder>Abinaya and Kumar</copyright-holder>
<license xlink:href="https://creativecommons.org/licenses/by/4.0/">
<license-p>This work is licensed under a <ext-link ext-link-type="uri" xlink:type="simple" xlink:href="https://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</ext-link>, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.</license-p>
</license>
</permissions>
<self-uri content-type="pdf" xlink:href="TSP_CMC_47167.pdf"></self-uri>
<abstract>
<p>A Recommender System (RS) is a crucial part of many firms, particularly those involved in e-commerce. In a conventional RS, a user may offer only a single rating for an item, which is insufficient to perceive consumer preferences. Nowadays, businesses in industries such as e-learning and tourism enable customers to rate a product on a variety of criteria to better comprehend customers&#x2019; preferences. Collaborative filtering (CF) utilizing an AutoEncoder (AE) has proven effective in identifying items of interest to users; however, the cost of these computations increases nonlinearly as the number of items and users grows. To overcome these issues, a novel expanded stacked autoencoder (ESAE) with Kernel Fuzzy C-Means Clustering (KFCM) is proposed with two phases. In the offline phase, the sparse multicriteria rating matrix is smoothened into a complete matrix by predicting the users&#x2019; intact ratings with the ESAE approach, and users are clustered using the KFCM approach. In the online phase, the top-N recommendation prediction is made by the ESAE approach involving only the most similar users from multiple clusters. The ESAE_KFCM model thereby achieves a prediction accuracy of 98.2% in top-N recommendation with minimized recommendation generation time. Experiments on the Yahoo! Movies (YM) movie dataset and the TripAdvisor (TA) travel dataset confirm that the ESAE_KFCM model consistently outperforms conventional RS algorithms on a variety of assessment measures.</p>
</abstract>
<kwd-group kwd-group-type="author">
<kwd>Recommender systems</kwd>
<kwd>multicriteria rating</kwd>
<kwd>collaborative filtering</kwd>
<kwd>sparsity issue</kwd>
<kwd>scalability issue</kwd>
<kwd>stacked-autoencoder</kwd>
<kwd>Kernel Fuzzy C-Means Clustering</kwd>
</kwd-group>
</article-meta>
</front>
<body>
<sec id="s1">
<label>1</label>
<title>Introduction</title>
<p>Over recent decades, Recommender Systems (RS) have grown into a pivotal solution for tackling information overload, particularly within the dynamic landscape of e-commerce. These systems provide personalized suggestions, encompassing a wide array of items and services, to individual users. This is achieved by analyzing users&#x2019; diverse information sources, including their explicit ratings, detailed reviews, historical purchasing patterns, and even implicit behavioral cues. Recommendation systems often employ content-based and collaborative filtering approaches. Content-based filtering revolves around the preferences and attributes of the present item, along with the historical choices of the active user [<xref ref-type="bibr" rid="ref-1">1</xref>]. This method tailors recommendations based on the intrinsic qualities of items and the user&#x2019;s demonstrated inclinations, creating a personalized tapestry of suggestions. Collaborative filtering (CF), on the other hand, has emerged as a potent force within the e-commerce industry, amassing notable achievements [<xref ref-type="bibr" rid="ref-2">2</xref>]. This method hinges on the power of collective wisdom, orchestrating recommendations by extrapolating from users&#x2019; historical rating interactions. By identifying patterns in user behavior, CF techniques adeptly uncover latent connections among users with analogous preferences, enabling them to recommend items that remain unexplored within a user&#x2019;s history but are anticipated to resonate with their tastes. CF algorithms fall into two crucial categories: memory-based CF and model-based CF. Memory-based CF operates by recognizing users akin to the current user (or items comparable to a designated target item); these neighbors&#x2019; inclinations then serve as valuable inputs in shaping the recommendations provided [<xref ref-type="bibr" rid="ref-3">3</xref>]. In contrast, model-based CF methodologies, celebrated for their enhanced precision, delve deeper by comprehensively learning users&#x2019; and items&#x2019; inherent attributes. This capability is cultivated during the model development phase through sophisticated machine-learning techniques, including matrix factorization [<xref ref-type="bibr" rid="ref-4">4</xref>], factorization machines [<xref ref-type="bibr" rid="ref-5">5</xref>], and deep neural networks [<xref ref-type="bibr" rid="ref-6">6</xref>&#x2013;<xref ref-type="bibr" rid="ref-9">9</xref>], all working synergistically to capture users&#x2019; nuanced preferences.</p>
<p>The previously mentioned methodologies have conventionally been applied to single-rating schemes. However, a single rating proves inadequate for capturing the diverse spectrum of user feedback, particularly in multifaceted service sectors such as restaurants, hotels, and movies. In such contexts, users&#x2019; experiences are multidimensional and cannot be fully encapsulated in one rating. Multi-criteria recommender systems adopt a more comprehensive approach: they enable users to provide feedback across various criteria, offering a nuanced evaluation of an item. For instance, within the realm of restaurants, a user might furnish ratings for distinct attributes such as taste, hygiene, ambiance, hospitality, and price, supplementing an overall rating. This richer feedback affords multi-criteria recommender systems [<xref ref-type="bibr" rid="ref-10">10</xref>] a larger pool of information to draw upon. Consequently, these systems excel at suggesting items that align more accurately with users&#x2019; multifaceted preferences than their single-rating counterparts.</p>
<p>Several researchers have ventured into incorporating multi-criteria rating information within their recommendation frameworks [<xref ref-type="bibr" rid="ref-11">11</xref>,<xref ref-type="bibr" rid="ref-12">12</xref>]. However, the potential of applying diverse machine learning techniques to multi-criteria recommender systems remains largely untapped. This study introduces an innovative utilization of a deep learning method known as Stacked Autoencoder (SAE) [<xref ref-type="bibr" rid="ref-13">13</xref>]. In response to the intricacies of multi-criteria rating systems, we present an extended version of SAE that is tailored to address their unique demands. To adapt this technique effectively, modifications are implemented to both the interface component of the traditional network and the objective function. These adjustments are strategically introduced to simplify the comprehension of complex associations between multi-criteria evaluations and their respective aggregate ratings.</p>
<p>As RS are designed to aid users in navigating an enormous number of items, one of their main objectives is to scale to real-world datasets [<xref ref-type="bibr" rid="ref-14">14</xref>]. Conventional collaborative filtering techniques may encounter significant scalability challenges as the volume of items and users grows, leading to computational demands that surpass practical or acceptable thresholds [<xref ref-type="bibr" rid="ref-15">15</xref>]. Clustering users by the affinities in their profiles is a popular technique for making recommender systems more scalable and reducing their time complexity: when recommendations are made by cluster representatives for the remaining cluster members, their complexity depends only on the size of the cluster [<xref ref-type="bibr" rid="ref-16">16</xref>,<xref ref-type="bibr" rid="ref-17">17</xref>]. Several model-based strategies [<xref ref-type="bibr" rid="ref-18">18</xref>] have previously been presented to address scalability and sparsity challenges in recommender systems. To realize the full potential of recommender system research, it is necessary to comprehend the most common methodologies applied to directly construct recommender algorithms or to pre-process recommendation datasets, as well as their advantages and disadvantages. As a result, our proposed methodology surpasses the performance benchmarks set by both single-rating recommendation platforms and cutting-edge scalable recommendation systems utilizing diverse criteria.</p>
<p>In brief, this research offers the following major contributions:</p>
<list list-type="simple">
<list-item><label>_</label><p>The novel ESAE_KFCM has two phases. In the offline stage, the sparse user-item multicriteria rating matrix undergoes a smoothing process that removes sparsity and completes the matrix by estimating the intact rating of each item using an expanded stacked autoencoder, which is not possible with a conventional autoencoder.</p></list-item>
<list-item><label>_</label><p>To address the problem of scalability and to generate the best recommendations, Kernel Fuzzy C-Means Clustering (KFCM) is utilized to cluster highly correlated users. Thereby, only highly relevant users from multiple clusters are utilized in the online-phase prediction of intact ratings using the ESAE_KFCM approach, which minimizes the time taken for recommendation generation and increases the prediction accuracy.</p></list-item>
<list-item><label>_</label><p>Extensive experiments on real-world multicriteria data sets from the Yahoo! Movies (YM) movie dataset and TripAdvisor (TA) travel dataset confirm that ESAE_KFCM outperforms the traditional recommender systems.</p></list-item>
</list>
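The KFCM step named in the second contribution can be sketched as follows. This is a minimal illustration using a Gaussian (RBF) kernel; the function name `kfcm` and all parameter values are assumptions made for the example, not the paper's implementation:

```python
import numpy as np

def kfcm(X, n_clusters, fuzzifier=2.0, sigma=2.0, n_iter=50, seed=0):
    """Kernel fuzzy C-means with a Gaussian (RBF) kernel.

    X: (n_users, n_features) array of smoothed user rating profiles.
    Returns U: (n_users, n_clusters) fuzzy memberships (rows sum to 1).
    """
    rng = np.random.default_rng(seed)
    eps = 1e-9
    # Initialize cluster centers from randomly chosen user profiles.
    centers = X[rng.choice(len(X), n_clusters, replace=False)]
    for _ in range(n_iter):
        # Gaussian kernel between every user and every center.
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        K = np.exp(-d2 / (2.0 * sigma ** 2))
        # Membership update: u_ik proportional to (1 - K_ik)^(-1/(m-1)).
        inv = np.clip(1.0 - K, eps, None) ** (-1.0 / (fuzzifier - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)
        # Center update: kernel-weighted mean of the user profiles.
        w = (U ** fuzzifier) * K
        centers = (w.T @ X) / np.clip(w.sum(axis=0)[:, None], eps, None)
    return U
```

Because the memberships are fuzzy, a user can belong to several clusters at once, which is what allows the online phase to draw the most relevant users from multiple clusters rather than from a single hard partition.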
<p>The structure of the work is as follows: The related work on multicriteria-based recommender systems is reviewed in <xref ref-type="sec" rid="s2">Section 2</xref>. In <xref ref-type="sec" rid="s3">Section 3</xref>, the proposed ESAE_KFCM approach is presented. The specifics of the experimental evaluation and results in terms of prediction accuracy and computation time are discussed in <xref ref-type="sec" rid="s4">Section 4</xref>. Finally, the conclusion and future work of the paper are presented in <xref ref-type="sec" rid="s5">Section 5</xref>.</p>
</sec>
<sec id="s2">
<label>2</label>
<title>Related Work</title>
<p>Among the pioneering endeavors in the domain of recommendation systems employing multiple aspects is the work by Adomavicius et al. [<xref ref-type="bibr" rid="ref-10">10</xref>]. In their approach, they harnessed statistical methodologies to formulate an aggregation function that bridges the gap between diverse evaluative factors and the comprehensive overall assessment. This entails predicting multi-criteria ratings through conventional techniques [<xref ref-type="bibr" rid="ref-19">19</xref>,<xref ref-type="bibr" rid="ref-20">20</xref>] and subsequently utilizing the aggregation function to derive the overall rating, a process integral to the recommendation process. However, it is worth noting that this approach assumes a fixed set of criteria weights shared consistently among all individuals. Lu et al. [<xref ref-type="bibr" rid="ref-21">21</xref>] presented a hybrid approach that amalgamates content-driven filtering and item-centric collaborative recommendation methods. This hybrid approach was devised to address the challenges posed by the cold-start problem and sparsity within the context of recommender systems employing multiple criteria. Nilashi et al. have made significant contributions to the realm of multi-dimensional recommendation systems through a series of works, as evidenced in [<xref ref-type="bibr" rid="ref-22">22</xref>] and [<xref ref-type="bibr" rid="ref-23">23</xref>]. In [<xref ref-type="bibr" rid="ref-22">22</xref>], Nilashi et al. introduced algorithms rooted in fuzzy logic to enhance the precision of multi-dimensional recommendation systems. Continuing their exploration in [<xref ref-type="bibr" rid="ref-23">23</xref>], Nilashi et al. introduced a blended approach that utilizes the Ant K-means clustering technique. In this structure, essential elements are derived from each group of users through Principal Component Analysis. This information is then employed to learn the mapping between global assessments and the extracted elements using Support Vector Regression (SVR). Zheng [<xref ref-type="bibr" rid="ref-12">12</xref>] introduced a set of three distinct approaches known as the &#x2018;criteria chain.&#x2019; In the first approach, the prediction process commences with the initial criteria rating, which then serves as contextual information while forecasting the subsequent criteria rating, and so forth; ultimately, SVR is employed to anticipate the complete assessment based on the sequence of criteria evaluations, along with a preference learning methodology [<xref ref-type="bibr" rid="ref-24">24</xref>]. The second method involves employing Context-Aware Matrix Factorization (CAMF) [<xref ref-type="bibr" rid="ref-25">25</xref>] for predicting the eventual rating by incorporating the ratings of criteria as contextual hints. In the third, sequential approach, each evaluation factor is projected independently, and these anticipated assessments subsequently contribute as background information when predicting the overall assessment. However, relying on sequentially predicted criteria ratings could potentially lead to a cumulative loss in predictive accuracy when estimating the final overall rating [<xref ref-type="bibr" rid="ref-26">26</xref>&#x2013;<xref ref-type="bibr" rid="ref-31">31</xref>]. <xref ref-type="table" rid="table-1">Table 1</xref> summarizes additional related works, offering insights into various algorithms and methodologies.</p>
<table-wrap id="table-1">
<label>Table 1</label>
<caption>
<title>Summary of additional related works and algorithms</title>
</caption>
<table frame="hsides">
<colgroup>
<col align="left"/>
<col align="left"/>
<col align="left"/>
</colgroup>
<thead>
<tr>
<th>Work</th>
<th>Algorithms used</th>
<th>Inference</th>
</tr>
</thead>
<tbody>
<tr>
<td>Zhang et al. [<xref ref-type="bibr" rid="ref-19">19</xref>]</td>
<td>Pair of Probabilistic Latent Semantic Analysis (PLSA) models</td>
<td>PLSA models tend to converge towards local minima, limiting optimization efficiency.</td>
</tr>
<tr>
<td>Liu et al. [<xref ref-type="bibr" rid="ref-20">20</xref>]</td>
<td>T-test on users&#x2019; ratings data to identify significant criteria</td>
<td>Assumes a Gaussian distribution, impractical for sparse rating datasets.</td>
</tr>
<tr>
<td>Jannach et al. [<xref ref-type="bibr" rid="ref-11">11</xref>]</td>
<td>Support Vector Regression (SVR)</td>
<td>Comprehends the complex interaction between global assessments and individual evaluation scores, aligning with the framework introduced by Adomavicius and Kwon.</td>
</tr>
<tr>
<td>Shu et al. [<xref ref-type="bibr" rid="ref-32">32</xref>]</td>
<td>Specific Class Center Guided Deep Hashing (SCCGDH)</td>
<td>Proposes SCCGDH to learn specific class centers from neural networks and guide hashing learning for multimedia data. Uses networks to reduce intraclass variation and achieve inter-modal invariance, outperforming other hashing approaches.</td>
</tr>
<tr>
<td>Shu et al. [<xref ref-type="bibr" rid="ref-33">33</xref>&#x2013;<xref ref-type="bibr" rid="ref-35">35</xref>]</td>
<td>RSMFH (Matrix Factorization), ROHLSE (Online Hashing with Label Semantic Enhancement)</td>
<td>Introduces RSMFH for multimodal data, maintaining shared and specific properties via matrix factorization, enhancing discriminative ability. Proposes ROHLSE to address label noise in online hashing, utilizing low-rank and sparse constraints, achieving superior results in cross-modal retrieval tasks.</td>
</tr>
<tr>
<td>Ramesh et al. [<xref ref-type="bibr" rid="ref-34">34</xref>]</td>
<td>Skill Level Navigation Patterns-Mining, CF, Content-Based Filtering</td>
<td>Addresses information overload in programming online judges (POJs) and proposes RS based on learners&#x2019; ability level navigation patterns. Outperforms other approaches in practice problem recommender systems.</td>
</tr>
<tr>
<td>Sreepada et al. [<xref ref-type="bibr" rid="ref-24">24</xref>]</td>
<td>Preference learning methodology</td>
<td>Addressed individual preferences and common criteria associated with each element. Incorporates user-centered and product-focused collaborative filtration (CF).</td>
</tr>
<tr>
<td>Sinha et al. [<xref ref-type="bibr" rid="ref-26">26</xref>]</td>
<td>Social spider optimization, matrix factorization, neural networks</td>
<td>Integrates various techniques for recommendation systems but faces challenges in prediction accuracy and scalability.</td>
</tr>
<tr>
<td>Shu et al. [<xref ref-type="bibr" rid="ref-36">36</xref>]</td>
<td>Discrete Asymmetric Zero-Shot Hashing (DAZSH)</td>
<td>Integrates pairwise similarity, class attributes, and semantic labels for zero-shot hashing learning. Efficient discrete optimization strategy. Achieves promising results in cross-modal retrieval tasks.</td>
</tr>
<tr>
<td>Kannimuthu et al. [<xref ref-type="bibr" rid="ref-37">37</xref>,<xref ref-type="bibr" rid="ref-38">38</xref>]</td>
<td>Stacked CNN, BiLSTM, Conditional GAN</td>
<td>Both studies leverage advanced techniques &#x2013; one for social media author profiling and the other for ASD prediction using neuroimaging, showcasing the authors&#x2019; expertise in diverse applications of deep learning.</td>
</tr>
<tr>
<td>Wu et al. [<xref ref-type="bibr" rid="ref-39">39</xref>] &#x0026; Gao et al. [<xref ref-type="bibr" rid="ref-40">40</xref>]</td>
<td>Graph Neural Networks (GNN)</td>
<td>A comprehensive survey on GNNs in recommender systems, categorizing models, addressing challenges, and providing insights for future development.</td>
</tr>
<tr>
<td>Gao et al. [<xref ref-type="bibr" rid="ref-41">41</xref>]</td>
<td>Large Language Models (LLMs)</td>
<td>Proposes Chat-Rec, an innovative approach to conversational recommender systems, leveraging LLMs for improved interactivity, explainability, and cross-domain recommendations.</td>
</tr>
<tr>
<td>Yang et al. [<xref ref-type="bibr" rid="ref-42">42</xref>]</td>
<td>Sequence-to-sequence paradigm, Prompt-based learning strategies</td>
<td>Proposes UniMIND, a unified framework for MG-CRS that addresses Goal Planning, Topic Prediction, Item Recommendation, and Response Generation.</td>
</tr>
</tbody>
</table>
</table-wrap>
</sec>
<sec id="s3">
<label>3</label>
<title>Proposed Method</title>
<p>The architectural view of the proposed ESAE_KFCM approach is depicted in <xref ref-type="fig" rid="fig-1">Fig. 1</xref>. As specified in <xref ref-type="fig" rid="fig-1">Fig. 1</xref>, the smoothening approach in the Phase-I offline task utilizes a novel expanded stacked autoencoder that combines the multicriteria ratings given by users for a specific item to predict the overall, or intact, rating value. Thus, the sparse matrix of multicriteria ratings is transformed into a filled matrix of intact ratings, which yields an efficient way to predict the most relevant items and to identify highly correlated users. For user modeling, the Kernel Fuzzy C-Means Clustering (KFCM) approach is utilized to cluster the correlated users. In the online phase, the active user is recommended the top-N most relevant items by the ESAE-based rating prediction approach. The overall process of the proposed ESAE_KFCM approach is given in Algorithm 1.</p>
<fig id="fig-1">
<label>Figure 1</label>
<caption>
<title>Proposed approach using ESAE_KFCM</title>
</caption>
<graphic mimetype="image" mime-subtype="tif" xlink:href="CMC_47167-fig-1.tif"/>
</fig>
<fig id="fig-6">
<graphic mimetype="image" mime-subtype="tif" xlink:href="CMC_47167-fig-6.tif"/>
</fig>
<p>The following data is provided in a transaction database:</p>
<p>1. A collection of <italic>I</italic> users <inline-formula id="ieqn-8"><mml:math id="mml-ieqn-8"><mml:mrow><mml:mo>{</mml:mo><mml:mrow><mml:mo fence="true" stretchy="true" symmetric="true"></mml:mo><mml:msub><mml:mi>U</mml:mi><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:msub><mml:mo>|</mml:mo></mml:mrow><mml:mi>n</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mn>2</mml:mn><mml:mo>,</mml:mo><mml:mo>.</mml:mo><mml:mo>.</mml:mo><mml:mo>.</mml:mo><mml:mo>,</mml:mo><mml:mi>I</mml:mi><mml:mo>}</mml:mo></mml:mrow></mml:math></inline-formula>.</p>
<p>2. A collection of <italic>K</italic> diverse items <inline-formula id="ieqn-9"><mml:math id="mml-ieqn-9"><mml:mrow><mml:mo>{</mml:mo><mml:mrow><mml:mo fence="true" stretchy="true" symmetric="true"></mml:mo><mml:msub><mml:mi>I</mml:mi><mml:mrow><mml:mi>m</mml:mi></mml:mrow></mml:msub><mml:mo>|</mml:mo></mml:mrow><mml:mi>m</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mn>2</mml:mn><mml:mo>,</mml:mo><mml:mo>.</mml:mo><mml:mo>.</mml:mo><mml:mo>.</mml:mo><mml:mo>,</mml:mo><mml:mi>K</mml:mi><mml:mo>}</mml:mo></mml:mrow></mml:math></inline-formula>.</p>
<p>3. A table of rating <inline-formula id="ieqn-10"><mml:math id="mml-ieqn-10"><mml:mi>R</mml:mi><mml:mi>&#x03B9;</mml:mi></mml:math></inline-formula> of size <inline-formula id="ieqn-11"><mml:math id="mml-ieqn-11"><mml:mrow><mml:mo>[</mml:mo><mml:mi>I</mml:mi><mml:mo>&#x00D7;</mml:mo><mml:mi>K</mml:mi><mml:mo>]</mml:mo></mml:mrow></mml:math></inline-formula> that contains data on the users&#x2019; history of multicriteria ratings of items <inline-formula id="ieqn-12"><mml:math id="mml-ieqn-12"><mml:mi>o</mml:mi><mml:mo>=</mml:mo><mml:mrow><mml:mo>{</mml:mo><mml:mi>n</mml:mi><mml:mo>,</mml:mo><mml:mi>m</mml:mi><mml:mo>,</mml:mo><mml:msubsup><mml:mi>r</mml:mi><mml:mrow><mml:mi>n</mml:mi><mml:mi>m</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msubsup><mml:mo>,</mml:mo><mml:msubsup><mml:mi>r</mml:mi><mml:mrow><mml:mi>n</mml:mi><mml:mi>m</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msubsup><mml:mo>,</mml:mo><mml:msubsup><mml:mi>r</mml:mi><mml:mrow><mml:mi>n</mml:mi><mml:mi>m</mml:mi></mml:mrow><mml:mrow><mml:mn>3</mml:mn></mml:mrow></mml:msubsup><mml:mo>,</mml:mo><mml:mo>.</mml:mo><mml:mo>.</mml:mo><mml:mo>.</mml:mo><mml:mo>,</mml:mo><mml:msubsup><mml:mi>r</mml:mi><mml:mrow><mml:mi>n</mml:mi><mml:mi>m</mml:mi></mml:mrow><mml:mrow><mml:mi>q</mml:mi></mml:mrow></mml:msubsup><mml:mo>}</mml:mo></mml:mrow></mml:math></inline-formula> where <inline-formula id="ieqn-13"><mml:math id="mml-ieqn-13"><mml:msubsup><mml:mi>r</mml:mi><mml:mrow><mml:mi>n</mml:mi><mml:mi>m</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msubsup></mml:math></inline-formula> implies the rating value for item <italic>m</italic> by user <italic>n</italic> under criteria <italic>1</italic> and <italic>q</italic> represents the total number of criteria. A zero is used to indicate the rating value of unavailable items.</p>
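As a concrete toy illustration of this data layout (all numbers invented, not drawn from the YM or TA datasets), the multicriteria rating table can be stored as a three-dimensional array with zeros marking unavailable user-item pairs:

```python
import numpy as np

I, K, q = 3, 4, 3          # users, items, criteria (toy sizes)
R = np.zeros((I, K, q))    # R[n, m, :] = criteria ratings of item m by user n

# User 0 rated item 1 on all q criteria; user 2 rated item 3.
R[0, 1] = [4, 3, 5]
R[2, 3] = [2, 4, 3]

# An all-zero criteria vector marks an unavailable (unrated) pair.
rated_mask = R.any(axis=2)          # (I, K) boolean matrix
sparsity = 1.0 - rated_mask.mean()  # fraction of missing user-item pairs
```

With only 2 of the 12 user-item pairs rated, the matrix is over 80% sparse, which is the situation the Phase-I smoothening process is designed to repair.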
<sec id="s3_1">
<label>3.1</label>
<title>Off-Line Task of the ESAE_KFCM</title>
<p>The offline task is carried out in two phases: Phase I performs smoothening, and Phase II performs user modeling.</p>
<sec id="s3_1_1">
<label>3.1.1</label>
<title>Smoothening: Expanded Stacked Denoising Autoencoder (ESAE) (Phase-I)</title>
<p>Here, the stacked denoising autoencoder is expanded to include in the network the multicriteria ratings given by a user for an item, and the loss function is improved accordingly [<xref ref-type="bibr" rid="ref-27">27</xref>]. Using the ESAE, the user&#x2013;item explicit preferences in the sparse multicriteria rating matrix are transformed into a dense matrix of overall ratings.</p>
<p>In a typical autoencoder, the number of neurons in the first layer equals the number of input parameters, and the value of each attribute is supplied to the corresponding neuron in the input layer. In a multi-criteria context, however, each item carries multiple criteria ratings that must be fed into the network. Consequently, the autoencoder is expanded with an add-on layer that accommodates the multi-criteria ratings. In particular, a denoising AE aims to prevent the latent layer from merely learning the identity function and compels it to discover more robust features. Therefore, the expanded add-on layer serves as the first input layer, corrupted with additive Gaussian noise, and is subsequently connected to the intermediary layer containing item nodes. This layer is in turn connected to the successive <italic>T</italic> encoding layers to uncover the items&#x2019; hidden representation.</p>
<p>The final encoding layer is connected to the <italic>T</italic> decoding layers, which are employed to decode the latent features learned by the corresponding encoders. The items&#x2019; intact ratings are predicted as the output of the final decoding layer.</p>
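The corrupt-encode-decode flow described above can be sketched as follows. This is a minimal NumPy illustration under stated assumptions (T = 2 layers, additive Gaussian noise on the add-on input layer, sigmoid activations); the class name `ESAE`, the layer sizes, and the random initialization are invented for the example, and training is omitted:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class ESAE:
    """Toy expanded stacked denoising autoencoder (T = 2) in NumPy."""

    def __init__(self, n_items, hidden_dims=(32, 16), noise_std=0.1, seed=0):
        rng = np.random.default_rng(seed)
        dims = [n_items, *hidden_dims]
        # Symmetric encoder/decoder weight matrices with zero biases.
        self.enc = [(rng.normal(0, 0.1, (a, b)), np.zeros(b))
                    for a, b in zip(dims[:-1], dims[1:])]
        self.dec = [(rng.normal(0, 0.1, (b, a)), np.zeros(a))
                    for a, b in zip(dims[:-1], dims[1:])][::-1]
        self.noise_std = noise_std

    def forward(self, r, train=True):
        # Add-on input layer: corrupt the aggregated ratings with Gaussian
        # noise during training, so the network cannot learn the identity.
        if train:
            r = r + np.random.default_rng().normal(0, self.noise_std, r.shape)
        h = r
        for W, b in self.enc:   # T encoding layers
            h = sigmoid(h @ W + b)
        for W, b in self.dec:   # T decoding layers
            h = sigmoid(h @ W + b)
        return h                # reconstructed ratings, bounded in (0, 1)
```

Since the sigmoid bounds the reconstruction to (0, 1), ratings would typically be rescaled to that range before training.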
<p>Let <inline-formula id="ieqn-14"><mml:math id="mml-ieqn-14"><mml:msubsup><mml:mi>r</mml:mi><mml:mrow><mml:mi>n</mml:mi><mml:mi>m</mml:mi></mml:mrow><mml:mrow><mml:mi>o</mml:mi></mml:mrow></mml:msubsup></mml:math></inline-formula> be the optimum rating for an item <italic>m</italic> given by user <italic>n</italic>, which is attained from <inline-formula id="ieqn-15"><mml:math id="mml-ieqn-15"><mml:msubsup><mml:mi>r</mml:mi><mml:mrow><mml:mi>n</mml:mi><mml:mi>m</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msubsup><mml:mo>,</mml:mo><mml:msubsup><mml:mi>r</mml:mi><mml:mrow><mml:mi>n</mml:mi><mml:mi>m</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msubsup><mml:mo>,</mml:mo><mml:msubsup><mml:mi>r</mml:mi><mml:mrow><mml:mi>n</mml:mi><mml:mi>m</mml:mi></mml:mrow><mml:mrow><mml:mn>3</mml:mn></mml:mrow></mml:msubsup><mml:mo>,</mml:mo><mml:mo>.</mml:mo><mml:mo>.</mml:mo><mml:mo>.</mml:mo><mml:mo>,</mml:mo><mml:msubsup><mml:mi>r</mml:mi><mml:mrow><mml:mi>n</mml:mi><mml:mi>m</mml:mi></mml:mrow><mml:mrow><mml:mi>q</mml:mi></mml:mrow></mml:msubsup></mml:math></inline-formula> as mentioned in <xref ref-type="disp-formula" rid="eqn-1">Eq. (1)</xref>.
<disp-formula id="eqn-1"><label>(1)</label><mml:math id="mml-eqn-1" display="block"><mml:msubsup><mml:mi>r</mml:mi><mml:mrow><mml:mi>n</mml:mi><mml:mi>m</mml:mi></mml:mrow><mml:mrow><mml:mi>o</mml:mi></mml:mrow></mml:msubsup><mml:mo>=</mml:mo><mml:msubsup><mml:mi>r</mml:mi><mml:mrow><mml:mi>n</mml:mi><mml:mi>m</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msubsup><mml:mo>&#x00D7;</mml:mo><mml:msubsup><mml:mi>w</mml:mi><mml:mrow><mml:mi>m</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msubsup><mml:mo>+</mml:mo><mml:msubsup><mml:mi>r</mml:mi><mml:mrow><mml:mi>n</mml:mi><mml:mi>m</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msubsup><mml:mo>&#x00D7;</mml:mo><mml:msubsup><mml:mi>w</mml:mi><mml:mrow><mml:mi>m</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msubsup><mml:mo>+</mml:mo><mml:msubsup><mml:mi>r</mml:mi><mml:mrow><mml:mi>n</mml:mi><mml:mi>m</mml:mi></mml:mrow><mml:mrow><mml:mn>3</mml:mn></mml:mrow></mml:msubsup><mml:mo>&#x00D7;</mml:mo><mml:msubsup><mml:mi>w</mml:mi><mml:mrow><mml:mi>m</mml:mi></mml:mrow><mml:mrow><mml:mn>3</mml:mn></mml:mrow></mml:msubsup><mml:mo>+</mml:mo><mml:mo>.</mml:mo><mml:mo>.</mml:mo><mml:mo>.</mml:mo><mml:mo>.</mml:mo><mml:mo>.</mml:mo><mml:mo>.</mml:mo><mml:mo>+</mml:mo><mml:msubsup><mml:mi>r</mml:mi><mml:mrow><mml:mi>n</mml:mi><mml:mi>m</mml:mi></mml:mrow><mml:mrow><mml:mi>q</mml:mi></mml:mrow></mml:msubsup><mml:mo>&#x00D7;</mml:mo><mml:msubsup><mml:mi>w</mml:mi><mml:mrow><mml:mi>m</mml:mi></mml:mrow><mml:mrow><mml:mi>q</mml:mi></mml:mrow></mml:msubsup></mml:math></disp-formula></p>
<p>where <inline-formula id="ieqn-16"><mml:math id="mml-ieqn-16"><mml:msubsup><mml:mi>w</mml:mi><mml:mrow><mml:mi>m</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msubsup><mml:mo>,</mml:mo><mml:msubsup><mml:mi>w</mml:mi><mml:mrow><mml:mi>m</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msubsup><mml:mo>,</mml:mo><mml:msubsup><mml:mi>w</mml:mi><mml:mrow><mml:mi>m</mml:mi></mml:mrow><mml:mrow><mml:mn>3</mml:mn></mml:mrow></mml:msubsup><mml:mo>,</mml:mo><mml:mo>.</mml:mo><mml:mo>.</mml:mo><mml:mo>.</mml:mo><mml:mo>,</mml:mo><mml:msubsup><mml:mi>w</mml:mi><mml:mrow><mml:mi>m</mml:mi></mml:mrow><mml:mrow><mml:mi>q</mml:mi></mml:mrow></mml:msubsup></mml:math></inline-formula> are the multicriteria weights of the <italic>m</italic><sup><italic>th</italic></sup> item. These unified optimum values of rating are given to the subsequent layers of the ESAE model. Finally, a justifiable and intact preference value <inline-formula id="ieqn-17"><mml:math id="mml-ieqn-17"><mml:mrow><mml:mover><mml:mi>R</mml:mi><mml:mo stretchy="false">&#x005E;</mml:mo></mml:mover></mml:mrow><mml:msub><mml:mi>&#x03B9;</mml:mi><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> is predicted as in <xref ref-type="disp-formula" rid="eqn-2">Eq. (2)</xref> by a completely connected network as depicted in <xref ref-type="fig" rid="fig-2">Fig. 2</xref>.</p>
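Eq. (1) is a per-item weighted sum over the q criteria ratings and can be computed in vectorized form; the numeric weight values below are placeholders for illustration, not weights learned by the model:

```python
import numpy as np

# Criteria ratings r_nm^1..r_nm^q for one user-item pair, and the
# item's criteria weights w_m^1..w_m^q (illustrative values only).
r_nm = np.array([4.0, 3.0, 5.0])   # q = 3 criteria ratings
w_m = np.array([0.5, 0.2, 0.3])    # multicriteria weights of item m

r_o = float(r_nm @ w_m)            # intact (optimum) rating per Eq. (1)

# Vectorized over the whole table: R is (I, K, q), W is (K, q),
# giving the (I, K) matrix of intact ratings in one step.
I, K, q = 2, 2, 3
R = np.tile(r_nm, (I, K, 1))
W = np.tile(w_m, (K, 1))
R_o = np.einsum('ikq,kq->ik', R, W)
```

In this toy case r_o evaluates to 4 &#x00D7; 0.5 + 3 &#x00D7; 0.2 + 5 &#x00D7; 0.3 = 4.1.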
<p><disp-formula id="eqn-2"><label>(2)</label><mml:math id="mml-eqn-2" display="block"><mml:mrow><mml:mover><mml:mi>R</mml:mi><mml:mo stretchy="false">&#x005E;</mml:mo></mml:mover></mml:mrow><mml:msub><mml:mi>&#x03B9;</mml:mi><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mi>&#x03C1;</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:msup><mml:mi>G</mml:mi><mml:mrow><mml:mo>&#x2032;</mml:mo></mml:mrow></mml:msup><mml:mrow><mml:mo>(</mml:mo><mml:mi>&#x03C1;</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>G</mml:mi><mml:mrow><mml:mo>{</mml:mo><mml:msubsup><mml:mi>r</mml:mi><mml:mrow><mml:mi>m</mml:mi></mml:mrow><mml:mrow><mml:mi>o</mml:mi></mml:mrow></mml:msubsup><mml:mo>}</mml:mo></mml:mrow><mml:mo>+</mml:mo><mml:mi>s</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mo>+</mml:mo><mml:msup><mml:mi>s</mml:mi><mml:mrow><mml:mo>&#x2032;</mml:mo></mml:mrow></mml:msup><mml:mo>)</mml:mo></mml:mrow></mml:math></disp-formula></p>
<fig id="fig-2">
<label>Figure 2</label>
<caption>
<title>Structure of expanded stacked autoencoder (ESAE)</title>
</caption>
<graphic mimetype="image" mime-subtype="tif" xlink:href="CMC_47167-fig-2.tif"/>
</fig>
<p>where <inline-formula id="ieqn-18"><mml:math id="mml-ieqn-18"><mml:mrow><mml:mover><mml:mi>R</mml:mi><mml:mo stretchy="false">&#x005E;</mml:mo></mml:mover></mml:mrow><mml:msub><mml:mi>&#x03B9;</mml:mi><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:msub><mml:mo>&#x2208;</mml:mo><mml:msup><mml:mi>&#x211D;</mml:mi><mml:mrow><mml:mi mathvariant="normal">K</mml:mi></mml:mrow></mml:msup></mml:math></inline-formula> is the final predicted layer of intact rating value, <inline-formula id="ieqn-19"><mml:math id="mml-ieqn-19"><mml:mi>G</mml:mi><mml:mo>&#x2208;</mml:mo><mml:msup><mml:mi>&#x211D;</mml:mi><mml:mrow><mml:mi mathvariant="normal">H</mml:mi><mml:mi>&#x00D7;</mml:mi><mml:mi mathvariant="normal">K</mml:mi></mml:mrow></mml:msup></mml:math></inline-formula> and <inline-formula id="ieqn-20"><mml:math id="mml-ieqn-20"><mml:msup><mml:mi>G</mml:mi><mml:mrow><mml:mo>&#x2032;</mml:mo></mml:mrow></mml:msup><mml:mo>&#x2208;</mml:mo><mml:msup><mml:mi>&#x211D;</mml:mi><mml:mrow><mml:mi mathvariant="normal">I</mml:mi><mml:mi>&#x00D7;</mml:mi><mml:mi mathvariant="normal">H</mml:mi></mml:mrow></mml:msup></mml:math></inline-formula> are the weight matrices, <inline-formula id="ieqn-21"><mml:math id="mml-ieqn-21"><mml:msubsup><mml:mi>r</mml:mi><mml:mrow><mml:mi>m</mml:mi></mml:mrow><mml:mrow><mml:mi>o</mml:mi></mml:mrow></mml:msubsup><mml:mo>&#x2208;</mml:mo><mml:msup><mml:mi>&#x211D;</mml:mi><mml:mrow><mml:mi mathvariant="normal">K</mml:mi></mml:mrow></mml:msup></mml:math></inline-formula> is the intact rating value of item <italic>m</italic> in the sparse <inline-formula id="ieqn-22"><mml:math id="mml-ieqn-22"><mml:mrow><mml:mover><mml:mi>R</mml:mi><mml:mo>&#x02D8;</mml:mo></mml:mover></mml:mrow><mml:msub><mml:mi>&#x03B9;</mml:mi><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula>, <inline-formula id="ieqn-23"><mml:math id="mml-ieqn-23"><mml:mi>s</mml:mi><mml:mo>&#x2208;</mml:mo><mml:msup><mml:mi>&#x211D;</mml:mi><mml:mrow><mml:mi 
mathvariant="normal">H</mml:mi></mml:mrow></mml:msup></mml:math></inline-formula> and <inline-formula id="ieqn-24"><mml:math id="mml-ieqn-24"><mml:msup><mml:mi>s</mml:mi><mml:mrow><mml:mo>&#x2032;</mml:mo></mml:mrow></mml:msup><mml:mo>&#x2208;</mml:mo><mml:msup><mml:mi>&#x211D;</mml:mi><mml:mrow><mml:mi mathvariant="normal">K</mml:mi></mml:mrow></mml:msup></mml:math></inline-formula> are the bias vectors, and the function <italic>&#x03C1;</italic> is the hyperbolic tangent.</p>
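<p>The computation of Eqs. (1) and (2) can be sketched as follows. This is a minimal NumPy illustration with hypothetical dimensions and randomly initialized weights, not the trained ESAE model.</p>

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes (illustrative only): K items, q criteria, H hidden units.
K, q, H = 6, 4, 3

# Multicriteria ratings r^c_m of one user and per-item criteria weights w^c_m.
r = rng.uniform(1, 5, size=(K, q))      # r[m, c]: rating of item m on criterion c
w = rng.dirichlet(np.ones(q), size=K)   # w[m, c]: criteria weights, rows sum to 1

# Eq. (1): unified rating r^o_m = sum_c r^c_m * w^c_m.
r_o = (r * w).sum(axis=1)

# Eq. (2): R_hat = rho(G'(rho(G{r^o} + s)) + s') with rho = tanh.
G, s = rng.normal(scale=0.1, size=(H, K)), np.zeros(H)
G_p, s_p = rng.normal(scale=0.1, size=(K, H)), np.zeros(K)

R_hat = np.tanh(G_p @ np.tanh(G @ r_o + s) + s_p)
```

<p>A trained model would learn <italic>G</italic>, <italic>G&#x2032;</italic>, <italic>s</italic>, <italic>s&#x2032;</italic>, and the criteria weights; here they only demonstrate the shape of the computation.</p>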
<p>The ESAE model is trained using the updated loss function below in <xref ref-type="disp-formula" rid="eqn-3">Eq. (3)</xref> to gain a concise representation, since predicting the loss for the zero values in the rating vector <inline-formula id="ieqn-25"><mml:math id="mml-ieqn-25"><mml:mi>R</mml:mi><mml:msub><mml:mi>&#x03B9;</mml:mi><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> would be redundant. To prevent overfitting of the proposed ESAE model, the objective function incorporates regularization terms and is given as in <xref ref-type="disp-formula" rid="eqn-3">Eq. (3)</xref>.
<disp-formula id="eqn-3"><label>(3)</label><mml:math id="mml-eqn-3" display="block"><mml:mi>L</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mover><mml:mi>R</mml:mi><mml:mo stretchy="false">&#x005E;</mml:mo></mml:mover></mml:mrow><mml:mi>&#x03B9;</mml:mi><mml:mo>,</mml:mo><mml:mrow><mml:mover><mml:mi>R</mml:mi><mml:mo>&#x02D8;</mml:mo></mml:mover></mml:mrow><mml:mi>&#x03B9;</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:msub><mml:mo movablelimits="false">&#x2211;</mml:mo><mml:mrow><mml:mi>j</mml:mi><mml:mo>&#x2208;</mml:mo><mml:mi>R</mml:mi><mml:msup><mml:mi>&#x03B9;</mml:mi><mml:mrow><mml:mo>&#x2032;</mml:mo></mml:mrow></mml:msup></mml:mrow></mml:msub><mml:msup><mml:mrow><mml:mo>(</mml:mo><mml:mi>E</mml:mi><mml:mi>S</mml:mi><mml:mi>A</mml:mi><mml:mi>E</mml:mi><mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mover><mml:mi>R</mml:mi><mml:mo stretchy="false">&#x005E;</mml:mo></mml:mover></mml:mrow><mml:mi>&#x03B9;</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mrow><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo>&#x2212;</mml:mo><mml:mrow><mml:mover><mml:mi>R</mml:mi><mml:mo>&#x02D8;</mml:mo></mml:mover></mml:mrow><mml:msub><mml:mi>&#x03B9;</mml:mi><mml:mrow><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo>)</mml:mo></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup><mml:mo>+</mml:mo><mml:mi>&#x03B2;</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:msubsup><mml:mrow><mml:mo symmetric="true">&#x2016;</mml:mo><mml:mi>G</mml:mi><mml:mo symmetric="true">&#x2016;</mml:mo></mml:mrow><mml:mrow><mml:mi>F</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msubsup><mml:mo>+</mml:mo><mml:msubsup><mml:mrow><mml:mo symmetric="true">&#x2016;</mml:mo><mml:msup><mml:mi>G</mml:mi><mml:mrow><mml:mo>&#x2032;</mml:mo></mml:mrow></mml:msup><mml:mo symmetric="true">&#x2016;</mml:mo></mml:mrow><mml:mrow><mml:mi>F</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msubsup><mml:mo>+</mml:mo><mml:msubsup><mml:mrow><mml:mo 
symmetric="true">&#x2016;</mml:mo><mml:mi>w</mml:mi><mml:mo symmetric="true">&#x2016;</mml:mo></mml:mrow><mml:mrow><mml:mi>F</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msubsup><mml:mo>+</mml:mo><mml:msubsup><mml:mrow><mml:mo symmetric="true">&#x2016;</mml:mo><mml:mi>s</mml:mi><mml:mo symmetric="true">&#x2016;</mml:mo></mml:mrow><mml:mrow><mml:mi>F</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msubsup><mml:mo>+</mml:mo><mml:msubsup><mml:mrow><mml:mo symmetric="true">&#x2016;</mml:mo><mml:msup><mml:mi>s</mml:mi><mml:mrow><mml:mo>&#x2032;</mml:mo></mml:mrow></mml:msup><mml:mo symmetric="true">&#x2016;</mml:mo></mml:mrow><mml:mrow><mml:mi>F</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msubsup><mml:mo>)</mml:mo></mml:mrow></mml:math></disp-formula></p>
<p>where <inline-formula id="ieqn-26"><mml:math id="mml-ieqn-26"><mml:mrow><mml:mover><mml:mi>R</mml:mi><mml:mo stretchy="false">&#x005E;</mml:mo></mml:mover></mml:mrow><mml:mi>&#x03B9;</mml:mi></mml:math></inline-formula> indicates the absolute array of intact ratings, where <inline-formula id="ieqn-27"><mml:math id="mml-ieqn-27"><mml:mrow><mml:mover><mml:mi>R</mml:mi><mml:mo stretchy="false">&#x005E;</mml:mo></mml:mover></mml:mrow><mml:mi>&#x03B9;</mml:mi><mml:mo>=</mml:mo><mml:mi>R</mml:mi><mml:msup><mml:mi>&#x03B9;</mml:mi><mml:mrow><mml:mo>&#x2033;</mml:mo></mml:mrow></mml:msup><mml:mo>&#x222A;</mml:mo><mml:mi>R</mml:mi><mml:msup><mml:mi>&#x03B9;</mml:mi><mml:mrow><mml:mo>&#x2034;</mml:mo></mml:mrow></mml:msup></mml:math></inline-formula>, <inline-formula id="ieqn-28"><mml:math id="mml-ieqn-28"><mml:mi>R</mml:mi><mml:msup><mml:mi>&#x03B9;</mml:mi><mml:mrow><mml:mo>&#x2033;</mml:mo></mml:mrow></mml:msup></mml:math></inline-formula> comprises the known intact rating instances, <inline-formula id="ieqn-29"><mml:math id="mml-ieqn-29"><mml:mi>R</mml:mi><mml:msup><mml:mi>&#x03B9;</mml:mi><mml:mrow><mml:mo>&#x2034;</mml:mo></mml:mrow></mml:msup></mml:math></inline-formula> comprises the unknown instances, <inline-formula id="ieqn-30"><mml:math id="mml-ieqn-30"><mml:mrow><mml:mo symmetric="true">&#x2016;</mml:mo><mml:mi>G</mml:mi><mml:mo symmetric="true">&#x2016;</mml:mo></mml:mrow></mml:math></inline-formula> and <inline-formula id="ieqn-31"><mml:math id="mml-ieqn-31"><mml:mrow><mml:mo symmetric="true">&#x2016;</mml:mo><mml:msup><mml:mi>G</mml:mi><mml:mrow><mml:mo>&#x2032;</mml:mo></mml:mrow></mml:msup><mml:mo symmetric="true">&#x2016;</mml:mo></mml:mrow></mml:math></inline-formula> are the weights of the ESAE model, <inline-formula id="ieqn-32"><mml:math id="mml-ieqn-32"><mml:mrow><mml:mo symmetric="true">&#x2016;</mml:mo><mml:mi>w</mml:mi><mml:mo symmetric="true">&#x2016;</mml:mo></mml:mrow></mml:math></inline-formula> indicates the multicriteria rating weights of the respective item, <inline-formula id="ieqn-33"><mml:math id="mml-ieqn-33"><mml:mrow><mml:mo symmetric="true">&#x2016;</mml:mo><mml:mi>s</mml:mi><mml:mo symmetric="true">&#x2016;</mml:mo></mml:mrow><mml:mo>,</mml:mo><mml:mrow><mml:mo symmetric="true">&#x2016;</mml:mo><mml:msup><mml:mi>s</mml:mi><mml:mrow><mml:mo>&#x2032;</mml:mo></mml:mrow></mml:msup><mml:mo symmetric="true">&#x2016;</mml:mo></mml:mrow></mml:math></inline-formula> are the bias weights, and <inline-formula id="ieqn-34"><mml:math id="mml-ieqn-34"><mml:mi>&#x03B2;</mml:mi></mml:math></inline-formula> is the regularization hyperparameter that controls how strongly regularization affects the model&#x2019;s generalizability, so determining the right value is important; <inline-formula id="ieqn-35"><mml:math id="mml-ieqn-35"><mml:mi>&#x03B2;</mml:mi></mml:math></inline-formula> is set based on experimental outcomes. The parameters are updated using the ADAM optimizer, a variant of stochastic gradient descent. To prevent the network from learning the identity mapping, regularizing dropouts with probability <italic>p</italic> are additionally introduced at all layers. The forecasted intact rating of instance <italic>j</italic> is <inline-formula id="ieqn-36"><mml:math id="mml-ieqn-36"><mml:mi>E</mml:mi><mml:mi>S</mml:mi><mml:mi>A</mml:mi><mml:mi>E</mml:mi><mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>R</mml:mi><mml:msup><mml:mi>&#x03B9;</mml:mi><mml:mrow><mml:mo>&#x2032;</mml:mo></mml:mrow></mml:msup><mml:mo>)</mml:mo></mml:mrow><mml:mrow><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> and <inline-formula id="ieqn-37"><mml:math id="mml-ieqn-37"><mml:mrow><mml:mover><mml:mi>R</mml:mi><mml:mo>&#x02D8;</mml:mo></mml:mover></mml:mrow><mml:msub><mml:mi>&#x03B9;</mml:mi><mml:mrow><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> is the real overall rating of instance <italic>j</italic>. From <xref ref-type="disp-formula" rid="eqn-3">Eq. (3)</xref>, it can be seen that the loss is computed using only the available rating instance set (<inline-formula id="ieqn-38"><mml:math id="mml-ieqn-38"><mml:mi>R</mml:mi><mml:msup><mml:mi>&#x03B9;</mml:mi><mml:mrow><mml:mo>&#x2032;</mml:mo></mml:mrow></mml:msup></mml:math></inline-formula>).</p>
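<p>The structure of the objective in Eq. (3) can be sketched as below: the squared error is accumulated only over observed ratings, and a Frobenius-norm penalty weighted by &#x03B2; is added over all parameters. The values are toy numbers for illustration, not from the paper.</p>

```python
import numpy as np

def esae_loss(pred, target, mask, params, beta=0.001):
    """Squared error over observed entries (mask = 1) plus L2 regularization,
    mirroring the structure of Eq. (3)."""
    err = np.sum(((pred - target) * mask) ** 2)   # unknown ratings contribute nothing
    reg = beta * sum(np.sum(p ** 2) for p in params)
    return err + reg

# Toy example: ratings for three items, the second one unobserved.
pred   = np.array([4.2, 3.1, 2.0])
target = np.array([4.0, 0.0, 2.5])
mask   = np.array([1.0, 0.0, 1.0])
loss = esae_loss(pred, target, mask, params=[np.array([0.5]), np.array([0.2])])
print(round(loss, 5))  # 0.29029
```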
<p>Now the rating table <inline-formula id="ieqn-39"><mml:math id="mml-ieqn-39"><mml:mrow><mml:mover><mml:mi>R</mml:mi><mml:mo stretchy="false">&#x005E;</mml:mo></mml:mover></mml:mrow><mml:mi>&#x03B9;</mml:mi></mml:math></inline-formula> is a completely smoothened matrix of the users&#x2019; intact rating values for all items, predicted from the multicriteria ratings each user gave for an item. This ESAE approach improves the accuracy of intact rating prediction, which enhances the Top-N recommendation.</p>
</sec>
<sec id="s3_1_2">
<label>3.1.2</label>
<title>User Modeling-Kernel Fuzzy C-Means Clustering (KFCM) (Phase-II)</title>
<p>Towards dealing with the scalability issue, the matrix <inline-formula id="ieqn-40"><mml:math id="mml-ieqn-40"><mml:mrow><mml:mover><mml:mi>R</mml:mi><mml:mo stretchy="false">&#x005E;</mml:mo></mml:mover></mml:mrow><mml:mi>&#x03B9;</mml:mi></mml:math></inline-formula> smoothened using the ESAE approach as explained in <xref ref-type="sec" rid="s3_1_1">Section 3.1.1</xref> is clustered before the recommendation task. This task is tailored for ESAE_KFCM, since the total number of ratings given by the active user during the recommendation phase is typically very low. Prevalent clustering techniques like expectation maximization, similarity-based clustering, and k-means project the data over a rating space whose scale may be significantly larger. These clustering methods have the limitation that their prototypes are located in a high-dimensional rating space; consequently, their descriptions are inadequately informative (sparse user-item matrix). As a result, Zhang et al. [<xref ref-type="bibr" rid="ref-28">28</xref>] introduced a kernel fuzzy approach for clustering that is utilized for user modeling in ESAE_KFCM. Their investigation demonstrates that fuzzy clustering is more resistant to outliers and noise than other approaches and can accommodate unequally sized clusters. In the online phase, only the most relevant users&#x2019; multicriteria rating vectors are retrieved from the multiple clusters, and the unknown ratings are predicted accurately using the ESAE approach with minimal recommendation time. The clustering procedure is elucidated in Algorithm 2.</p>
<fig id="fig-7">
<graphic mimetype="image" mime-subtype="tif" xlink:href="CMC_47167-fig-7.tif"/>
</fig>
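<p>A compact sketch of kernel fuzzy c-means with a Gaussian kernel is given below; it follows the standard KFCM membership and prototype updates rather than reproducing Algorithm 2 verbatim, and all parameter values and data are illustrative.</p>

```python
import numpy as np

def kfcm(X, f=2, m=2.0, sigma=1.0, iters=30):
    """Kernel fuzzy c-means sketch: Gaussian kernel K(x, v) = exp(-||x-v||^2 / sigma^2),
    memberships u_ij proportional to (1 - K(x_i, v_j))^(-1/(m-1))."""
    n = X.shape[0]
    V = X[np.linspace(0, n - 1, f).astype(int)].copy()  # simple deterministic init
    for _ in range(iters):
        d2 = ((X[:, None, :] - V[None, :, :]) ** 2).sum(-1)
        Kxv = np.exp(-d2 / sigma ** 2)
        dist = np.maximum(1.0 - Kxv, 1e-12)             # kernel-induced distance
        U = dist ** (-1.0 / (m - 1.0))
        U /= U.sum(axis=1, keepdims=True)               # fuzzy memberships, rows sum to 1
        w = (U ** m) * Kxv
        V = (w.T @ X) / np.maximum(w.sum(axis=0)[:, None], 1e-12)  # prototype update
    return U, V

# Two well-separated toy "user" groups in rating space.
X = np.vstack([np.zeros((5, 2)), np.full((5, 2), 3.0)])
U, V = kfcm(X, f=2)
labels = U.argmax(axis=1)
```

<p>The soft memberships in <italic>U</italic> allow a user to belong to several clusters at once, which is what makes the retrieval of correlated users across multiple clusters possible in the online phase.</p>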
</sec>
</sec>
<sec id="s3_2">
<label>3.2</label>
<title>Recommendation Process by ESAE_KFCM in Online Phase</title>
<p>During the online recommendation stage, recommendations for the active user are generated depending on the most correlated users obtained from the multiple clusters produced as described in <xref ref-type="sec" rid="s3_1_2">Section 3.1.2</xref>. This online recommendation process is explained in Algorithm 3.</p>
<fig id="fig-8">
<graphic mimetype="image" mime-subtype="tif" xlink:href="CMC_47167-fig-8.tif"/>
</fig>
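<p>The retrieval of correlated users described above can be sketched as follows: given the active user&#x2019;s rating vector, the most similar users within a cluster are fetched by cosine similarity. The function and data are illustrative assumptions, not the exact procedure of Algorithm 3.</p>

```python
import numpy as np

def top_correlated(active, cluster_ratings, n=3):
    """Return indices and similarities of the n users in a cluster whose rating
    vectors are most cosine-similar to the active user's vector."""
    A = np.asarray(cluster_ratings, dtype=float)
    a = np.asarray(active, dtype=float)
    sims = (A @ a) / (np.linalg.norm(A, axis=1) * np.linalg.norm(a) + 1e-12)
    order = np.argsort(sims)[::-1][:n]   # highest similarity first
    return order, sims[order]

# Toy cluster: users 0 and 2 rate similarly to the active user.
cluster = [[5, 4, 3], [1, 1, 1], [5, 4, 2.9]]
idx, sims = top_correlated([5, 4, 3], cluster, n=2)
```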
</sec>
</sec>
<sec id="s4">
<label>4</label>
<title>Experimental Evaluation</title>
<sec id="s4_1">
<label>4.1</label>
<title>Dataset Description and Evaluation Metrics</title>
<p>The proposed ESAE_KFCM technique is tested using two real-world multicriteria datasets: the Yahoo! Movies (YM) dataset [<xref ref-type="bibr" rid="ref-11">11</xref>] from the movie sector and the TripAdvisor (TA) travel dataset [<xref ref-type="bibr" rid="ref-29">29</xref>]. The Yahoo! Movies dataset contains 62,156 ratings, on a scale of 1 to 13, from 6,078 users on 976 movies. Each user assessed a movie on 4 different criteria: Visuals, Direction, Acting, and Story. For the experiments, this 13-level scoring is converted to the standard 5-point format. Through web scraping of the TripAdvisor website with Beautiful Soup, a total of 60,216 records were gathered from 2500 hotels located in 93 different cities. Each hotel may be assessed on multiple criteria such as service, value, cleanliness, rooms, staff, and quality of sleep. We retain users who rated at least five hotels and hotels with at least five user ratings to extract a workable subset of TA. The extracted subset TA 5-5 has 99.83% sparse data, with 3550 hotels, 3160 users, and 9374 rating instances. Similarly, YM 20-20, YM 5-5, and YM 10-10 data subsets are extracted.</p>
<p>To assess the efficacy of the ESAE_KFCM approach, we employed prominent performance measures such as F1 score, Mean Absolute Error (MAE) [<xref ref-type="bibr" rid="ref-30">30</xref>], Good Predicted Items MAE (GPIMAE), Good Items MAE (GIMAE) [<xref ref-type="bibr" rid="ref-31">31</xref>], and computation time. Prediction accuracy metrics such as MAE, specified in <xref ref-type="disp-formula" rid="eqn-6">Eq. (6)</xref>, compare the actual ratings with the expected ratings.
<disp-formula id="eqn-6"><label>(6)</label><mml:math id="mml-eqn-6" display="block"><mml:mi>M</mml:mi><mml:mi>A</mml:mi><mml:mi>E</mml:mi><mml:mo>=</mml:mo><mml:msubsup><mml:mo movablelimits="false">&#x2211;</mml:mo><mml:mrow><mml:mi>n</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>I</mml:mi></mml:mrow></mml:msubsup><mml:msubsup><mml:mo movablelimits="false">&#x2211;</mml:mo><mml:mrow><mml:mi>m</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>K</mml:mi></mml:mrow></mml:msubsup><mml:mfrac><mml:mrow><mml:mo>|</mml:mo><mml:msub><mml:mi>A</mml:mi><mml:mrow><mml:mi>n</mml:mi><mml:mo>,</mml:mo><mml:mi>m</mml:mi></mml:mrow></mml:msub><mml:mo>&#x2212;</mml:mo><mml:msub><mml:mi>E</mml:mi><mml:mrow><mml:mi>n</mml:mi><mml:mo>,</mml:mo><mml:mi>m</mml:mi></mml:mrow></mml:msub><mml:mo>|</mml:mo></mml:mrow><mml:mrow><mml:mi>I</mml:mi><mml:mo>&#x2217;</mml:mo><mml:mi>K</mml:mi></mml:mrow></mml:mfrac></mml:math></disp-formula></p>
<p>where <inline-formula id="ieqn-51"><mml:math id="mml-ieqn-51"><mml:mi>I</mml:mi><mml:mo>,</mml:mo><mml:mi>K</mml:mi><mml:mo>,</mml:mo><mml:msub><mml:mi>A</mml:mi><mml:mrow><mml:mi>n</mml:mi><mml:mo>,</mml:mo><mml:mi>m</mml:mi></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mi>E</mml:mi><mml:mrow><mml:mi>n</mml:mi><mml:mo>,</mml:mo><mml:mi>m</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> denote the total number of users, the total number of items, and the actual and expected ratings of user <italic>n</italic> for item <italic>m</italic>, respectively. High recall and precision levels are ideal for a model. Recall typically decreases as precision increases, and vice versa, due to their inverse correlation. To take both recall (<inline-formula id="ieqn-52"><mml:math id="mml-ieqn-52"><mml:mi>&#xA7A6;</mml:mi></mml:math></inline-formula>) and precision (<inline-formula id="ieqn-53"><mml:math id="mml-ieqn-53"><mml:mi>&#x20BD;</mml:mi></mml:math></inline-formula>) into account, the metric <italic>F</italic><sub><italic>1</italic></sub> is defined in <xref ref-type="disp-formula" rid="eqn-7">Eq. (7)</xref>:
<disp-formula id="eqn-7"><label>(7)</label><mml:math id="mml-eqn-7" display="block"><mml:mrow><mml:mi mathvariant="normal">F</mml:mi><mml:mn>1</mml:mn></mml:mrow><mml:mo>=</mml:mo><mml:mn>2</mml:mn><mml:mo>&#x20BD;</mml:mo><mml:mi>&#xA7A6;</mml:mi><mml:mo>/</mml:mo><mml:mrow><mml:mo>(</mml:mo><mml:mo>&#x20BD;</mml:mo><mml:mo>&#x002B;</mml:mo><mml:mi>&#xA7A6;</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:math></disp-formula></p>
<p>GPIMAE and GIMAE compute the MAE over the items the system predicts to be good and over the items that are actually good, respectively. As a result, they concentrate only on pertinent items rather than considering every item in the assessment subset. The primary benefit of these measures is therefore that they assess the algorithm solely on predictions that are pertinent to the user, i.e., on items that either ought to or will be placed on the recommendation list.</p>
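<p>Eqs. (6) and (7) amount to the following short computations (a sketch with toy numbers; the precision and recall values are assumed inputs):</p>

```python
import numpy as np

def mae(actual, expected):
    """Eq. (6): mean absolute error over all rating pairs."""
    actual, expected = np.asarray(actual, float), np.asarray(expected, float)
    return np.abs(actual - expected).mean()

def f1(precision, recall):
    """Eq. (7): harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

print(mae([4, 3, 5], [3.5, 3, 4]))   # 0.5
print(round(f1(0.8, 0.6), 3))        # 0.686
```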
</sec>
<sec id="s4_2">
<label>4.2</label>
<title>Training Settings</title>
<p>The datasets TA 5-5, YM 20-20, YM 5-5, and YM 10-10 are divided into training data (60%), cross-validation data (20%), and test data (20%) for the experiment. The proposed ESAE_KFCM paradigm is trained using the Keras API in Python from the TensorFlow package. Utilizing scikit-learn&#x2019;s GridSearchCV feature, we investigate the potential effects of several distinct parameter values on the accuracy of the ESAE_KFCM model. Specifically, the noise variance is varied over [0.1, 0.2, 0.3, 0.4, 0.5], the number of hidden units over [500, 450, 350, 300, 600], the optimization technique over [&#x2018;Nadam&#x2019;, &#x2018;Adamax&#x2019;, &#x2018;Adam&#x2019;, &#x2018;Adadelta&#x2019;, &#x2018;Adagrad&#x2019;, &#x2018;RMSprop&#x2019;, &#x2018;SGD&#x2019;], the epoch count over [100, 200, 240, 280, 300], and the learning rate is assessed as well. When the epoch value is 280, the ESAE model converges with a corruption ratio of 0.4 and 450 hidden units. Learning rates of 0.002, 0.001, 0.01, 0.02, and 0.03 were tested, revealing that the learning rate <inline-formula id="ieqn-54"><mml:math id="mml-ieqn-54"><mml:mi>&#x03B7;</mml:mi><mml:mo>=</mml:mo><mml:mn>0.02</mml:mn></mml:math></inline-formula> gives better accuracy. Similarly, a minibatch of size 50 is used to optimize the network using a Stochastic Gradient Descent optimizer. Also, weight decay <italic>l</italic><sub><italic>2</italic></sub>(0.001) is introduced for regularization, and the hyperbolic tangent is chosen as the transfer function.</p>
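<p>The hyperparameter search above can be sketched as a plain grid scan. The <italic>evaluate</italic> function below is a synthetic stand-in that does not train a model; in the actual experiments this role is played by GridSearchCV over the Keras ESAE model.</p>

```python
from itertools import product

# Grid mirroring some of the ranges explored in the text.
grid = {
    "noise": [0.1, 0.2, 0.3, 0.4, 0.5],
    "hidden": [300, 350, 450, 500, 600],
    "lr": [0.001, 0.002, 0.01, 0.02, 0.03],
}

def evaluate(noise, hidden, lr):
    # Synthetic validation-error surface, used only to make the sketch runnable;
    # a real run would train the ESAE model and return its validation MAE.
    return abs(noise - 0.4) + abs(hidden - 450) / 1000 + abs(lr - 0.02)

best = None
for n, h, l in product(grid["noise"], grid["hidden"], grid["lr"]):
    score = evaluate(n, h, l)
    if best is None or score < best[0]:
        best = (score, {"noise": n, "hidden": h, "lr": l})
```

<p>With the synthetic surface above, the scan recovers the configuration at the surface&#x2019;s minimum; with a real training loop, the same structure selects the configuration with the lowest validation error.</p>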
<p>In this ESAE_KFCM technique, the optimum number of clusters (<italic>f</italic>) is found using the Silhouette Coefficient Method [<xref ref-type="bibr" rid="ref-43">43</xref>]. The maximum value of the silhouette coefficient (&#x03B2;), which indicates that the number of clusters formed is ideal, is 1. The first step of the experiment to find the optimum value of <italic>f</italic> is choosing the range of cluster counts to be assessed, here from 5 to 25. Based on the smallest feasible clustering, the minimum is <italic>f</italic> &#x003D; 5, while the maximum is <italic>f</italic> &#x003D; 25. The value of &#x03B2; reaches its second-highest optimum at <italic>f</italic> &#x003D; 22 and then starts to decline at <italic>f</italic> &#x003D; 25. Since a higher <italic>f</italic> requires greater computing time, larger <italic>f</italic> values were not evaluated. The findings of the clustering assessment via the Silhouette Coefficient approach are displayed in <xref ref-type="fig" rid="fig-3">Fig. 3</xref>. The ideal <italic>f</italic> value is indicated in <xref ref-type="fig" rid="fig-3">Fig. 3</xref>, where <italic>f</italic> &#x003D; 16 yields the greatest value of the Silhouette Coefficient. As a result, the clustering process starts with the number of clusters <italic>f</italic> equal to 16.</p>
<fig id="fig-3">
<label>Figure 3</label>
<caption>
<title>Results of silhouette coefficient (&#x03B2;)</title>
</caption>
<graphic mimetype="image" mime-subtype="tif" xlink:href="CMC_47167-fig-3.tif"/>
</fig>
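<p>The cluster-count selection can be reproduced in outline with scikit-learn, scoring each candidate <italic>f</italic> by its silhouette coefficient and keeping the maximum; k-means is used here as a stand-in for KFCM, and the data are synthetic (three well-separated groups), so the scanned range and result are illustrative only.</p>

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
# Toy stand-in for the smoothened rating matrix: three synthetic user groups.
X = np.vstack([rng.normal(c, 0.3, size=(30, 4)) for c in (0.0, 2.0, 4.0)])

# Scan candidate cluster counts (the paper scans f = 5..25) and keep the best beta.
scores = {}
for f in range(2, 8):
    labels = KMeans(n_clusters=f, n_init=10, random_state=0).fit_predict(X)
    scores[f] = silhouette_score(X, labels)
best_f = max(scores, key=scores.get)
```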
</sec>
<sec id="s4_3">
<label>4.3</label>
<title>Results and Discussion</title>
<p>The performance of ESAE_KFCM is assessed and compared with single and multicriteria rating-based models as baseline methods of recommendation as listed below:
<list list-type="order">
<list-item>
<p>CDAE [<xref ref-type="bibr" rid="ref-8">8</xref>]: Collaborative DAE that uses a single rating system that incorporates latent factors for recommendation.</p></list-item>
<list-item>
<p>Hybrid_AE [<xref ref-type="bibr" rid="ref-9">9</xref>]: CF-based neural network that uses a single rating system that combines user and item side information.</p></list-item>
<list-item>
<p>Agg_CCA [<xref ref-type="bibr" rid="ref-12">12</xref>]: Aggregation approach that utilizes a multicriteria rating system to construct hybrid item and user-specific model.</p></list-item>
<list-item>
<p>Context_CCC [<xref ref-type="bibr" rid="ref-12">12</xref>]: Estimates overall preference values based on contextual situations cast by multiple criteria rating system.</p></list-item>
<list-item>
<p>Ind_CIC [<xref ref-type="bibr" rid="ref-12">12</xref>]: The BiasedMF algorithm was utilized to estimate the individual rating criteria as contexts by avoiding dependencies between multiple criteria ratings.</p></list-item>
<list-item>
<p>ESAE_CF: This approach integrates the proposed ESAE approach with CF technique-based prediction for top-N item recommendation.</p></list-item>
</list></p>
<sec id="s4_3_1">
<label>4.3.1</label>
<title>Performance Analysis of ESAE-Based Smoothening Process (Offline Phase)</title>
<p><xref ref-type="fig" rid="fig-4">Fig. 4</xref> shows the MAE and smoothening time comparison of the ESAE with conventional models on all the working datasets. The efficiency of the ESAE model&#x2019;s smoothening process in the offline phase is measured in terms of MAE and computational time.</p>
<fig id="fig-4">
<label>Figure 4</label>
<caption>
<title>MAE comparison and smoothening time of the ESAE</title>
</caption>
<graphic mimetype="image" mime-subtype="tif" xlink:href="CMC_47167-fig-4.tif"/>
</fig>
<p><list list-type="bullet">
<list-item>
<p>CDAE and Hybrid_AE work on a single rating system in which the smoothening process is done by an autoencoder; they show average MAE performance, but their smoothening time is comparatively high.</p></list-item>
<list-item>
<p>Hybrid_AE outperforms the CDAE-based smoothening technique since it incorporates the gain from side information to mitigate the sparsity problem.</p></list-item>
<list-item>
<p>Agg_CCA and Context_CCC approaches utilize the CAMF_C method, which adopts criteria chains to anticipate the various criteria ratings one at a time and has better prediction accuracy than single rating systems such as CDAE and Hybrid_AE.</p></list-item>
<list-item>
<p>Ind_CIC outperforms Agg_CCA and Context_CCC in terms of prediction accuracy, but its smoothening time is comparatively high.</p></list-item>
<list-item>
<p>As multicriteria ratings are considered in the smoothening process by utilizing an expanded stacked autoencoder, the ESAE_KFCM approach&#x2019;s modeling time is comparatively elevated, whereas its MAE prediction accuracy outperforms that of all the other models.</p></list-item>
<list-item>
<p>The smoothening approach is identical for ESAE_CF and ESAE_KFCM; consequently, both have the same performance.</p></list-item>
</list></p>
</sec>
<sec id="s4_3_2">
<label>4.3.2</label>
<title>Performance Analysis of ESAE_KFCM Process (Online Phase)</title>
<p>After completing the smoothening process, the dataset is clustered by the proposed Kernel Fuzzy C-Means (KFCM) approach. In the online phase, only the multicriteria rating vectors of highly correlated users are considered for rating prediction using the ESAE approach. Thereby, the proposed ESAE_KFCM approach delivers the most relevant Top-N items as recommendations in minimal time and with greater prediction accuracy. The time taken to recommend Top-N items for the active user under each model is depicted in <xref ref-type="fig" rid="fig-5">Fig. 5</xref>. <xref ref-type="table" rid="table-2">Tables 2</xref>&#x2013;<xref ref-type="table" rid="table-5">5</xref> depict the results of the accuracy measures in terms of MAE, F1 score, GPIMAE, and GIMAE, and the observations are listed as follows:</p>
<fig id="fig-5">
<label>Figure 5</label>
<caption>
<title>Recommendation time evaluation on the dataset</title>
</caption>
<graphic mimetype="image" mime-subtype="tif" xlink:href="CMC_47167-fig-5.tif"/>
</fig><table-wrap id="table-2">
<label>Table 2</label>
<caption>
<title>The performance comparison on the YM 5-5 dataset in the Online phase</title>
</caption>
<table frame="hsides">
<colgroup>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
</colgroup>
<thead>
<tr>
<th>Model</th>
<th>F1 score</th>
<th>GPIMAE</th>
<th>GIMAE</th>
<th>MAE</th>
</tr>
</thead>
<tbody>
<tr>
<td>CDAE</td>
<td>0.6491</td>
<td>0.7913</td>
<td>0.5672</td>
<td>0.6306</td>
</tr>
<tr>
<td>Hybrid_AE</td>
<td>0.6789</td>
<td>0.8406</td>
<td>0.6022</td>
<td>0.6531</td>
</tr>
<tr>
<td>Agg_CCA</td>
<td>0.4497</td>
<td>0.5901</td>
<td>0.5878</td>
<td>0.6737</td>
</tr>
<tr>
<td>Context_CCC</td>
<td>0.4826</td>
<td>0.6095</td>
<td>0.6124</td>
<td>0.6914</td>
</tr>
<tr>
<td>Indep_CIC</td>
<td>0.4636</td>
<td>0.6814</td>
<td>0.6536</td>
<td>0.7129</td>
</tr>
<tr>
<td>ESAE_CF</td>
<td>0.7458</td>
<td>0.5379</td>
<td>0.5210</td>
<td>0.5674</td>
</tr>
<tr>
<td><bold><italic>ESAE_KFCM</italic></bold></td>
<td><bold><italic>0.7816</italic></bold></td>
<td><bold><italic>0.5111</italic></bold></td>
<td><bold><italic>0.4993</italic></bold></td>
<td><bold><italic>0.5258</italic></bold></td>
</tr>
</tbody>
</table>
</table-wrap><table-wrap id="table-3">
<label>Table 3</label>
<caption>
<title>The performance comparison on the TA 5-5 dataset in the Online phase</title>
</caption>
<table frame="hsides">
<colgroup>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
</colgroup>
<thead>
<tr>
<th>Model</th>
<th>F1 score</th>
<th>GPIMAE</th>
<th>GIMAE</th>
<th>MAE</th>
</tr>
</thead>
<tbody>
<tr>
<td>CDAE</td>
<td>0.6545</td>
<td>0.8693</td>
<td>0.6976</td>
<td>0.7806</td>
</tr>
<tr>
<td>Hybrid_AE</td>
<td>0.6798</td>
<td>0.8244</td>
<td>0.6314</td>
<td>0.7691</td>
</tr>
<tr>
<td>Agg_CCA</td>
<td>0.5640</td>
<td>0.5972</td>
<td>0.5417</td>
<td>0.6891</td>
</tr>
<tr>
<td>Context_CCC</td>
<td>0.5380</td>
<td>0.7577</td>
<td>0.6242</td>
<td>0.6888</td>
</tr>
<tr>
<td>Indep_CIC</td>
<td>0.5370</td>
<td>0.7439</td>
<td>0.6420</td>
<td>0.7012</td>
</tr>
<tr>
<td>ESAE_CF</td>
<td>0.7109</td>
<td>0.5592</td>
<td>0.4636</td>
<td>0.6080</td>
</tr>
<tr>
<td><bold><italic>ESAE_KFCM</italic></bold></td>
<td><bold><italic>0.7916</italic></bold></td>
<td><bold><italic>0.5289</italic></bold></td>
<td><bold><italic>0.4407</italic></bold></td>
<td><bold><italic>0.5758</italic></bold></td>
</tr>
</tbody>
</table>
</table-wrap><table-wrap id="table-4">
<label>Table 4</label>
<caption>
<title>The performance comparison on the YM 10-10 dataset in the Online phase</title>
</caption>
<table frame="hsides">
<colgroup>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
</colgroup>
<thead>
<tr>
<th>Model</th>
<th>F1 score</th>
<th>GPIMAE</th>
<th>GIMAE</th>
<th>MAE</th>
</tr>
</thead>
<tbody>
<tr>
<td>CDAE</td>
<td>0.6196</td>
<td>0.8453</td>
<td>0.6792</td>
<td>0.8284</td>
</tr>
<tr>
<td>Hybrid_AE</td>
<td>0.7042</td>
<td>0.8269</td>
<td>0.6595</td>
<td>0.7811</td>
</tr>
<tr>
<td>Agg_CCA</td>
<td>0.5343</td>
<td>0.7990</td>
<td>0.6015</td>
<td>0.6618</td>
</tr>
<tr>
<td>Context_CCC</td>
<td>0.5361</td>
<td>0.7857</td>
<td>0.6240</td>
<td>0.6374</td>
</tr>
<tr>
<td>Indep_CIC</td>
<td>0.5327</td>
<td>0.7743</td>
<td>0.6542</td>
<td>0.6719</td>
</tr>
<tr>
<td>ESAE_CF</td>
<td>0.7313</td>
<td>0.5592</td>
<td>0.4870</td>
<td>0.5783</td>
</tr>
<tr>
<td><bold><italic>ESAE_KFCM</italic></bold></td>
<td><bold><italic>0.8165</italic></bold></td>
<td><bold><italic>0.5289</italic></bold></td>
<td><bold><italic>0.4512</italic></bold></td>
<td><bold><italic>0.5403</italic></bold></td>
</tr>
</tbody>
</table>
</table-wrap><table-wrap id="table-5">
<label>Table 5</label>
<caption>
<title>The performance comparison on the YM 20-20 dataset in the Online phase</title>
</caption>
<table frame="hsides">
<colgroup>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
</colgroup>
<thead>
<tr>
<th>Model</th>
<th>F1 score</th>
<th>GPIMAE</th>
<th>GIMAE</th>
<th>MAE</th>
</tr>
</thead>
<tbody>
<tr>
<td>CDAE</td>
<td>0.7200</td>
<td>0.8164</td>
<td>0.6237</td>
<td>0.7541</td>
</tr>
<tr>
<td>Hybrid_AE</td>
<td>0.7578</td>
<td>0.7830</td>
<td>0.6008</td>
<td>0.7205</td>
</tr>
<tr>
<td>Agg_CCA</td>
<td>0.5641</td>
<td>0.6971</td>
<td>0.6042</td>
<td>0.6691</td>
</tr>
<tr>
<td>Context_CCC</td>
<td>0.5585</td>
<td>0.7159</td>
<td>0.6095</td>
<td>0.6798</td>
</tr>
<tr>
<td>Indep_CIC</td>
<td>0.5677</td>
<td>0.7064</td>
<td>0.6218</td>
<td>0.7029</td>
</tr>
<tr>
<td>ESAE_CF</td>
<td>0.8070</td>
<td>0.6523</td>
<td>0.4870</td>
<td>0.5906</td>
</tr>
<tr>
<td><bold><italic>ESAE_KFCM</italic></bold></td>
<td><bold><italic>0.8632</italic></bold></td>
<td><bold><italic>0.6179</italic></bold></td>
<td><bold><italic>0.4324</italic></bold></td>
<td><bold><italic>0.5345</italic></bold></td>
</tr>
</tbody>
</table>
</table-wrap>
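The metrics reported in the tables above can be computed as follows. This is a minimal sketch: MAE and F1 follow their standard definitions, while GPIMAE and GIMAE restrict the MAE to the predicted-good and the actually-good items, respectively, in the spirit of Cacheda et al. [31]; the relevance threshold of 3.5 and the function names are illustrative assumptions.

```python
import numpy as np

def mae(actual, predicted):
    # standard mean absolute error over the supplied ratings
    return float(np.mean(np.abs(np.asarray(actual, float) - np.asarray(predicted, float))))

def f1_at_threshold(actual, predicted, threshold=3.5):
    # F1 over items: relevant if truly rated >= threshold, recommended if predicted >= threshold
    actual = np.asarray(actual, float); predicted = np.asarray(predicted, float)
    rel = actual >= threshold
    rec = predicted >= threshold
    tp = np.sum(rel & rec)
    if tp == 0:
        return 0.0
    precision = tp / rec.sum()
    recall = tp / rel.sum()
    return float(2 * precision * recall / (precision + recall))

def gpimae(actual, predicted, threshold=3.5):
    # MAE restricted to items *predicted* as good (Good Predicted Items MAE)
    mask = np.asarray(predicted, float) >= threshold
    return mae(np.asarray(actual, float)[mask], np.asarray(predicted, float)[mask])

def gimae(actual, predicted, threshold=3.5):
    # MAE restricted to items that are *actually* good (Good Items MAE)
    mask = np.asarray(actual, float) >= threshold
    return mae(np.asarray(actual, float)[mask], np.asarray(predicted, float)[mask])
```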
<p><list list-type="bullet">
<list-item>
<p>As depicted in <xref ref-type="fig" rid="fig-5">Fig. 5</xref>, ESAE_KFCM consistently outperforms all the conventional models in terms of recommendation time. Predicting an intact rating for Top-N item recommendation takes only minimal time because, in the online phase, only the highly correlated multicriteria rating vectors fetched from multiple clusters are fed into the ESAE model. Ind_CIC, Agg_CCA, and Context_CCC outperform CDAE and Hybrid_AE in terms of recommendation time since they operate on the principle of dimensionality reduction. ESAE_CF has the highest recommendation generation time of all the models.</p></list-item>
<list-item>
<p>Ind_CIC, Agg_CCA, and Context_CCC show roughly 17% higher errors than the ESAE_KFCM approach in terms of MAE, F1 score, GPIMAE, and GIMAE. Hybrid_AE outperforms CDAE in prediction accuracy but has roughly 21% higher GPIMAE and GIMAE. ESAE_CF outperforms the multicriteria rating-based recommendation systems Ind_CIC, Agg_CCA, and Context_CCC in terms of F1 score, GPIMAE, and GIMAE, both for existing users with a modified multicriteria rating vector and for new users with no rating vector.</p></list-item>
<list-item>
<p>The KFCM clustering process with the number of clusters (<italic>f</italic>) &#x003D; 16, as mentioned in <xref ref-type="fig" rid="fig-4">Fig. 4</xref>, improved the prediction accuracy to 98.2% for Top-N recommendation across all the datasets, with an average online recommendation generation time of 0.00428346 s per item.</p></list-item>
<list-item>
<p>ESAE_KFCM predicts intact ratings accurately with lower error in terms of GPIMAE, GIMAE, and MAE and a 22% higher F1 score compared to ESAE_CF, Ind_CIC, Agg_CCA, and Context_CCC, in addition to reduced recommendation time and better decision support.</p></list-item>
</list></p>
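The KFCM clustering step discussed above can be sketched as follows, using the Gaussian-kernel membership and center updates of Zhang et al. [28]. The farthest-point initialization, parameter values, and function names are illustrative assumptions, not the exact configuration of ESAE_KFCM.

```python
import numpy as np

def kfcm(X, f=2, m=2.0, sigma=1.0, iters=50):
    """Kernel Fuzzy C-Means with a Gaussian kernel.
    Returns the membership matrix U (n x f) and the cluster centers V (f x d)."""
    n, d = X.shape
    # farthest-point initialization of the f cluster centers (illustrative choice)
    V = np.empty((f, d))
    V[0] = X[0]
    for i in range(1, f):
        dist2 = ((X[:, None, :] - V[None, :i, :]) ** 2).sum(-1).min(axis=1)
        V[i] = X[dist2.argmax()]
    for _ in range(iters):
        # Gaussian kernel between every point and every center, shape (n, f)
        K = np.exp(-((X[:, None, :] - V[None, :, :]) ** 2).sum(-1) / sigma ** 2)
        # membership update: u_ik proportional to (1 - K(x_k, v_i))^(-1/(m-1))
        dk = np.clip(1.0 - K, 1e-12, None)
        inv = dk ** (-1.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)   # rows sum to 1
        # center update: kernel-weighted fuzzy mean of the points
        w = (U ** m) * K                           # shape (n, f)
        V = (w.T @ X) / w.T.sum(axis=1, keepdims=True)
    return U, V
```

In the online phase, a user's cluster memberships can then be used to fetch the most highly correlated neighbours from several clusters at once rather than from a single hard partition.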
</sec>
</sec>
</sec>
<sec id="s5">
<label>5</label>
<title>Conclusion and Future Enhancement</title>
<p>This work developed and evaluated a scalable novel ESAE_KFCM model to enhance Top-N recommendation using Kernel Fuzzy C-Means Clustering. The KFCM clustering technique efficiently retrieves the most highly correlated users from multiple clusters in the recommendation phase. In the offline phase, ESAE-based smoothing is performed on the multicriteria rating vectors, thereby overcoming sparsity and enhancing prediction accuracy. Compared with the existing models, the proposed ESAE_KFCM model can make online recommendations using only the most similar users&#x2019; multicriteria rating vectors retrieved from multiple clusters, yielding high-quality Top-N item recommendations with minimal computation time. This model can be enhanced further by incorporating additional criteria, such as reviews and user and item features, to improve the accuracy of the recommendation process. Besides, variational or sparse autoencoders can be investigated to further enhance Top-N recommendation.</p>
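The autoencoder-based smoothing idea can be illustrated with a minimal denoising autoencoder that corrupts each user's rating vector and learns to reconstruct the observed entries. This is a generic sketch, not the authors' exact ESAE architecture; the hyperparameters, corruption scheme, and function names are illustrative assumptions.

```python
import numpy as np

def train_dae(R, hidden=8, noise=0.2, lr=0.1, epochs=2000, seed=0):
    """Tiny denoising autoencoder for rating smoothing.
    R: (users x items) matrix scaled to [0, 1], with 0 marking missing ratings.
    Returns a predict(X) closure that reconstructs full rating rows."""
    rng = np.random.default_rng(seed)
    n, n_items = R.shape
    W1 = rng.normal(0, 0.1, (n_items, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.1, (hidden, n_items)); b2 = np.zeros(n_items)
    mask = (R > 0).astype(float)                   # loss only on observed entries
    for _ in range(epochs):
        X = R * (rng.random(R.shape) > noise)      # masking corruption of inputs
        H = np.tanh(X @ W1 + b1)                   # encoder
        Y = H @ W2 + b2                            # linear decoder
        E = (Y - R) * mask                         # masked reconstruction error
        gW2 = H.T @ E; gb2 = E.sum(0)              # backprop through decoder
        dH = (E @ W2.T) * (1 - H ** 2)             # backprop through tanh
        gW1 = X.T @ dH; gb1 = dH.sum(0)
        for p, g in ((W1, gW1), (b1, gb1), (W2, gW2), (b2, gb2)):
            p -= lr * g / n                        # averaged gradient step
    def predict(X):
        return np.tanh(X @ W1 + b1) @ W2 + b2
    return predict
```

At recommendation time the smoothed (reconstructed) rows can fill in unrated entries, which is the role the ESAE-based smoothing plays in the offline phase described above.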
</sec>
</body>
<back>
<ack><p>We thank Vellore Institute of Technology, Chennai, for supporting us with the article processing charge (APC).</p>
</ack>
<sec><title>Funding Statement</title>
<p>The authors received no specific funding for this study.</p>
</sec>
<sec><title>Author Contributions</title>
<p>The authors confirm contribution to the paper as follows: study conception and design: Abinaya S; data collection: Uttej Kumar K; analysis and interpretation of results: Abinaya S, Uttej Kumar K; draft manuscript preparation: Abinaya S, Uttej Kumar K. All authors reviewed the results and approved the final version of the manuscript.</p>
</sec>
<sec sec-type="data-availability"><title>Availability of Data and Materials</title>
<p>The TripAdvisor datasets analyzed during the current study are available at <ext-link ext-link-type="uri" xlink:href="https://www.kaggle.com/datasets/andrewmvd/trip-advisor-hotel-reviews">https://www.kaggle.com/datasets/andrewmvd/trip-advisor-hotel-reviews</ext-link>, and Yahoo movies dataset, <ext-link ext-link-type="uri" xlink:href="https://webscope.sandbox.yahoo.com/">https://webscope.sandbox.yahoo.com/</ext-link> accessed on 17 May 2023.</p>
</sec>
<sec sec-type="COI-statement"><title>Conflicts of Interest</title>
<p>The authors declare that they have no conflicts of interest to report regarding the present study.</p>
</sec>
<ref-list content-type="authoryear">
<title>References</title>
<ref id="ref-1"><label>[1]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>M.</given-names> <surname>Pazzani</surname></string-name> and <string-name><given-names>D.</given-names> <surname>Billsus</surname></string-name></person-group>, &#x201C;<article-title>Learning and revising user profiles: The identification of interesting web sites</article-title>,&#x201D; <source>Mach. Learn</source>., vol. <volume>27</volume>, pp. <fpage>313</fpage>&#x2013;<lpage>331</lpage>, <year>1997</year>. doi: <pub-id pub-id-type="doi">10.1023/A:1007369909943</pub-id>.</mixed-citation></ref>
<ref id="ref-2"><label>[2]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>G.</given-names> <surname>Linden</surname></string-name>, <string-name><given-names>B.</given-names> <surname>Smith</surname></string-name>, and <string-name><given-names>J.</given-names> <surname>York</surname></string-name></person-group>, &#x201C;<article-title>Amazon. com recommendations: Item-to-item collaborative filtering</article-title>,&#x201D; <source>IEEE Internet. Comput</source>., vol. <volume>7</volume>, no. <issue>1</issue>, pp. <fpage>76</fpage>&#x2013;<lpage>80</lpage>, <year>2003</year>. doi: <pub-id pub-id-type="doi">10.1109/MIC.2003.1167344</pub-id>.</mixed-citation></ref>
<ref id="ref-3"><label>[3]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>Z.</given-names> <surname>Sun</surname></string-name>, <string-name><given-names>X.</given-names> <surname>Zhang</surname></string-name>, <string-name><given-names>H.</given-names> <surname>Li</surname></string-name>, <string-name><given-names>Y.</given-names> <surname>Xiao</surname></string-name> and <string-name><given-names>H.</given-names> <surname>Guo</surname></string-name></person-group>, &#x201C;<article-title>Recommender systems based on tensor decomposition</article-title>,&#x201D; <source>Comput. Mater. Contin</source>., vol. <volume>66</volume>, pp. <fpage>621</fpage>&#x2013;<lpage>630</lpage>, <year>2020</year>. doi: <pub-id pub-id-type="doi">10.32604/cmc.2020.012593</pub-id>; <pub-id pub-id-type="pmid">37303558</pub-id></mixed-citation></ref>
<ref id="ref-4"><label>[4]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>Y.</given-names> <surname>Koren</surname></string-name></person-group>, &#x201C;<article-title>Factorization meets the neighborhood: A multifaceted collaborative filtering model</article-title>,&#x201D; in <conf-name>Proc. 14th ACM SIGKDD Int. Conf. Knowl. Discov. Data. Min.</conf-name>, <publisher-loc>Las Vegas, Nevada, USA</publisher-loc>, <year>2008</year>, pp. <fpage>426</fpage>&#x2013;<lpage>434</lpage>.</mixed-citation></ref>
<ref id="ref-5"><label>[5]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>S.</given-names> <surname>Rendle</surname></string-name></person-group>, &#x201C;<article-title>Factorization machines</article-title>,&#x201D; in <conf-name>2010 IEEE Int. Conf. Data. Min.</conf-name>, <publisher-loc>Sydney, NSW, Australia</publisher-loc>: <publisher-name>IEEE</publisher-name>, <year>2010</year>, pp. <fpage>995</fpage>&#x2013;<lpage>1000</lpage>.</mixed-citation></ref>
<ref id="ref-6"><label>[6]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>S.</given-names> <surname>Abinaya</surname></string-name> and <string-name><given-names>M. K.</given-names> <surname>Kavitha Devi</surname></string-name></person-group>, &#x201C;<article-title>Trust-based context-aware collaborative filtering using denoising autoencoder</article-title>,&#x201D; in <conf-name>Pervasive. Comp. Soc Netw: Proc. ICPCSN 2021</conf-name>, <publisher-loc>Singapore</publisher-loc>, <publisher-name>Springer</publisher-name>, <year>2022</year>, pp. <fpage>35</fpage>&#x2013;<lpage>49</lpage>.</mixed-citation></ref>
<ref id="ref-7"><label>[7]</label><mixed-citation publication-type="book"><person-group person-group-type="author"><string-name><given-names>S.</given-names> <surname>Abinaya</surname></string-name>, <string-name><given-names>A. S.</given-names> <surname>Alphonse</surname></string-name>, <string-name><given-names>S.</given-names> <surname>Abirami</surname></string-name>, and <string-name><given-names>M. K.</given-names> <surname>Kavithadevi</surname></string-name></person-group>, &#x201C;<chapter-title>Enhancing context-aware recommendation using trust-based contextual attentive autoencoder</chapter-title>,&#x201D; in <source>Neural. Proc. Letters</source>., <year>2023</year>, pp. <fpage>1</fpage>&#x2013;<lpage>22</lpage>. doi: <pub-id pub-id-type="doi">10.1007/s11063-023-11163-x</pub-id>.</mixed-citation></ref>
<ref id="ref-8"><label>[8]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>Y.</given-names> <surname>Wu</surname></string-name>, <string-name><given-names>C.</given-names> <surname>DuBois</surname></string-name>, <string-name><given-names>A. X.</given-names> <surname>Zheng</surname></string-name>, and <string-name><given-names>M.</given-names> <surname>Ester</surname></string-name></person-group>, &#x201C;<article-title>Collaborative denoising auto-encoders for top-N recommender systems</article-title>,&#x201D; in <conf-name>Proc. 9th ACM Int. Conf. Web Search Data Min.</conf-name>, <year>2016</year>, pp. <fpage>153</fpage>&#x2013;<lpage>162</lpage>.</mixed-citation></ref>
<ref id="ref-9"><label>[9]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>F.</given-names> <surname>Strub</surname></string-name>, <string-name><given-names>R.</given-names> <surname>Gaudel</surname></string-name>, and <string-name><given-names>J.</given-names> <surname>Mary</surname></string-name></person-group>, &#x201C;<article-title>Hybrid recommender system based on autoencoders</article-title>,&#x201D; in <conf-name>Proc. 1st Workshop Deep Learn Recommender Syst.</conf-name>, <year>2016</year>, pp. <fpage>11</fpage>&#x2013;<lpage>16</lpage>. doi: <pub-id pub-id-type="doi">10.1145/2988450.2988456</pub-id>.</mixed-citation></ref>
<ref id="ref-10"><label>[10]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>G.</given-names> <surname>Adomavicius</surname></string-name> and <string-name><given-names>Y.</given-names> <surname>Kwon</surname></string-name></person-group>, &#x201C;<article-title>New recommendation techniques for multicriteria rating systems</article-title>,&#x201D; <source>IEEE Intell. Syst</source>., vol. <volume>22</volume>, no. <issue>3</issue>, pp. <fpage>48</fpage>&#x2013;<lpage>55</lpage>, <year>2007</year>. doi: <pub-id pub-id-type="doi">10.1109/MIS.2007.58</pub-id>.</mixed-citation></ref>
<ref id="ref-11"><label>[11]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>D.</given-names> <surname>Jannach</surname></string-name>, <string-name><given-names>Z.</given-names> <surname>Karakaya</surname></string-name>, and <string-name><given-names>F.</given-names> <surname>Gedikli</surname></string-name></person-group>, &#x201C;<article-title>Accuracy improvements for multi-criteria recommender systems</article-title>,&#x201D; in <conf-name>Proc. 13th ACM Conf. Electron. Commer.</conf-name>, <year>2012</year>, pp. <fpage>674</fpage>&#x2013;<lpage>689</lpage>.</mixed-citation></ref>
<ref id="ref-12"><label>[12]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>Y.</given-names> <surname>Zheng</surname></string-name></person-group>, &#x201C;<article-title>Criteria chains: A novel multi-criteria recommendation approach</article-title>,&#x201D; in <conf-name>Proc. 22nd Int. Conf. Intell. User Interfaces</conf-name>, <year>2017</year>, pp. <fpage>29</fpage>&#x2013;<lpage>33</lpage>.</mixed-citation></ref>
<ref id="ref-13"><label>[13]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>S.</given-names> <surname>Abinaya</surname></string-name> and <string-name><given-names>M. K.</given-names> <surname>Devi</surname></string-name></person-group>, &#x201C;<article-title>Enhancing top-N recommendation using stacked autoencoder in context-aware recommender system</article-title>,&#x201D; <source>Neural. Process. Lett</source>., vol. <volume>53</volume>, pp. <fpage>1865</fpage>&#x2013;<lpage>1888</lpage>, <year>2021</year>. doi: <pub-id pub-id-type="doi">10.1007/s11063-021-10475-0</pub-id>.</mixed-citation></ref>
<ref id="ref-14"><label>[14]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>A.</given-names> <surname>Kumar</surname></string-name> and <string-name><given-names>A.</given-names> <surname>Sharma</surname></string-name></person-group>, &#x201C;<article-title>Alleviating sparsity and scalability issues in collaborative filtering based recommender systems</article-title>,&#x201D; in <conf-name>Proc. Int Conf Front Intell Comput: Theory Appl. (FICTA)</conf-name>, <publisher-loc>Berlin Heidelberg</publisher-loc>: <publisher-name>Springer</publisher-name>, <year>2013</year>, pp. <fpage>103</fpage>&#x2013;<lpage>112</lpage>.</mixed-citation></ref>
<ref id="ref-15"><label>[15]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>M.</given-names> <surname>Singh</surname></string-name></person-group>, &#x201C;<article-title>Scalability and sparsity issues in recommender datasets: A survey</article-title>,&#x201D; <source>Knowl. Inf. Syst</source>., vol. <volume>62</volume>, pp. <fpage>1</fpage>&#x2013;<lpage>43</lpage>, <year>2020</year>. doi: <pub-id pub-id-type="doi">10.1007/s10115-018-1254-2</pub-id>.</mixed-citation></ref>
<ref id="ref-16"><label>[16]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>S.</given-names> <surname>Abinaya</surname></string-name>, <string-name><given-names>K.</given-names> <surname>Indira</surname></string-name>, <string-name><given-names>S.</given-names> <surname>Karthiga</surname></string-name>, and <string-name><given-names>T.</given-names> <surname>Rajasenbagam</surname></string-name></person-group>, &#x201C;<article-title>Time cluster personalized ranking recommender system in multi-cloud</article-title>,&#x201D; <source>Math</source>., vol. <volume>11</volume>, no. <issue>6</issue>, pp. <fpage>1300</fpage>, <year>2023</year>. doi: <pub-id pub-id-type="doi">10.3390/math11061300</pub-id>.</mixed-citation></ref>
<ref id="ref-17"><label>[17]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>M. K.</given-names> <surname>Devi</surname></string-name> and <string-name><given-names>P.</given-names> <surname>Venkatesh</surname></string-name></person-group>, &#x201C;<article-title>Smoothing approach to alleviate the meager rating problem in collaborative recommender systems</article-title>,&#x201D; <source>Future. Gener. Comput. Syst</source>., vol. <volume>29</volume>, no. <issue>1</issue>, pp. <fpage>262</fpage>&#x2013;<lpage>270</lpage>, <year>2013</year>. doi: <pub-id pub-id-type="doi">10.1016/j.future.2011.05.011</pub-id>.</mixed-citation></ref>
<ref id="ref-18"><label>[18]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>S.</given-names> <surname>Bakshi</surname></string-name>, <string-name><given-names>A. K.</given-names> <surname>Jagadev</surname></string-name>, <string-name><given-names>S.</given-names> <surname>Dehuri</surname></string-name>, and <string-name><given-names>G. N.</given-names> <surname>Wang</surname></string-name></person-group>, &#x201C;<article-title>Enhancing scalability and accuracy of recommendation systems using unsupervised learning and particle swarm optimization</article-title>,&#x201D; <source>Appl. Soft. Comput</source>., vol. <volume>15</volume>, pp. <fpage>21</fpage>&#x2013;<lpage>29</lpage>, <year>2014</year>. doi: <pub-id pub-id-type="doi">10.1016/j.asoc.2013.10.018</pub-id>.</mixed-citation></ref>
<ref id="ref-19"><label>[19]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>Y.</given-names> <surname>Zhang</surname></string-name>, <string-name><given-names>Y.</given-names> <surname>Zhuang</surname></string-name>, <string-name><given-names>J.</given-names> <surname>Wu</surname></string-name>, and <string-name><given-names>L.</given-names> <surname>Zhang</surname></string-name></person-group>, &#x201C;<article-title>Applying probabilistic latent semantic analysis to multi-criteria recommender system</article-title>,&#x201D; <source>Ai. Commun</source>., vol. <volume>22</volume>, no. <issue>2</issue>, pp. <fpage>97</fpage>&#x2013;<lpage>107</lpage>, <year>2009</year>. doi: <pub-id pub-id-type="doi">10.3233/AIC-2009-0446</pub-id>.</mixed-citation></ref>
<ref id="ref-20"><label>[20]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>L.</given-names> <surname>Liu</surname></string-name>, <string-name><given-names>N.</given-names> <surname>Mehandjiev</surname></string-name>, and <string-name><given-names>D. L.</given-names> <surname>Xu</surname></string-name></person-group>, &#x201C;<article-title>Multi-criteria service recommendation based on user criteria preferences</article-title>,&#x201D; in <conf-name>Proc. Fifth ACM Conf. Recomm. Syst.</conf-name>, <year>2011</year>, pp. <fpage>77</fpage>&#x2013;<lpage>84</lpage>. doi: <pub-id pub-id-type="doi">10.1145/2043932.2043950</pub-id>.</mixed-citation></ref>
<ref id="ref-21"><label>[21]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>Q.</given-names> <surname>Shambour</surname></string-name> and <string-name><given-names>J.</given-names> <surname>Lu</surname></string-name></person-group>, &#x201C;<article-title>A hybrid multi-criteria semantic-enhanced collaborative filtering approach for personalized recommendations</article-title>,&#x201D; in <conf-name>In 2011 IEEE/WIC/ACM Int. Conf. Web. Intell. Intell. Agent. Technol.</conf-name>, <comment>IEEE,</comment> vol. <volume>1</volume>, <year>2011</year>, pp. <fpage>71</fpage>&#x2013;<lpage>78</lpage>.</mixed-citation></ref>
<ref id="ref-22"><label>[22]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>M.</given-names> <surname>Nilashi</surname></string-name>, <string-name><given-names>D.</given-names> <surname>Jannach</surname></string-name>, <string-name><given-names>O.</given-names> <surname>bin Ibrahim</surname></string-name>, and <string-name><given-names>N.</given-names> <surname>Ithnin</surname></string-name></person-group>, &#x201C;<article-title>Clustering-and regression-based multi-criteria collaborative filtering with incremental updates</article-title>,&#x201D; <source>Inf. Sci</source>., vol. <volume>293</volume>, pp. <fpage>235</fpage>&#x2013;<lpage>250</lpage>, <year>2015</year>. doi: <pub-id pub-id-type="doi">10.1016/j.ins.2014.09.012</pub-id>.</mixed-citation></ref>
<ref id="ref-23"><label>[23]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>M.</given-names> <surname>Nilashi</surname></string-name>, <string-name><given-names>O.</given-names> <surname>bin Ibrahim</surname></string-name> and <string-name><given-names>N.</given-names> <surname>Ithnin</surname></string-name></person-group>, &#x201C;<article-title>Hybrid recommendation approaches for multi-criteria collaborative filtering</article-title>,&#x201D; <source>Expert. Syst. Appl</source>., vol. <volume>41</volume>, no. <issue>8</issue>, pp. <fpage>3879</fpage>&#x2013;<lpage>3900</lpage>, <year>2014</year>. doi: <pub-id pub-id-type="doi">10.1016/j.eswa.2013.12.023</pub-id>.</mixed-citation></ref>
<ref id="ref-24"><label>[24]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>R. S.</given-names> <surname>Sreepada</surname></string-name>, <string-name><given-names>B. K.</given-names> <surname>Patra</surname></string-name>, and <string-name><given-names>A.</given-names> <surname>Hernando</surname></string-name></person-group>, &#x201C;<article-title>Multi-criteria recommendations through preference learning</article-title>,&#x201D; in <conf-name>Proc. 4th ACM IKDD Conf. Data. Sci.</conf-name>, <year>2017</year>, pp. <fpage>1</fpage>&#x2013;<lpage>11</lpage>.</mixed-citation></ref>
<ref id="ref-25"><label>[25]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>M.</given-names> <surname>Braunhofer</surname></string-name>, <string-name><given-names>V.</given-names> <surname>Codina</surname></string-name>, and <string-name><given-names>F.</given-names> <surname>Ricci</surname></string-name></person-group>, &#x201C;<article-title>Switching hybrid for cold-starting context-aware recommender systems</article-title>,&#x201D; in <conf-name>Proc. 8th ACM Conf. Recomm. Syst.</conf-name>, <year>2014</year>, pp. <fpage>349</fpage>&#x2013;<lpage>352</lpage>.</mixed-citation></ref>
<ref id="ref-26"><label>[26]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>B. B.</given-names> <surname>Sinha</surname></string-name> and <string-name><given-names>R.</given-names> <surname>Dhanalakshmi</surname></string-name></person-group>, &#x201C;<article-title>DNN-MF: Deep neural network matrix factorization approach for filtering information in multi-criteria recommender systems</article-title>,&#x201D; <source>Neural. Comput. Appl</source>., vol. <volume>34</volume>, no. <issue>13</issue>, pp. <fpage>10807</fpage>&#x2013;<lpage>10821</lpage>, <year>2022</year>. doi: <pub-id pub-id-type="doi">10.1007/s00521-022-07012-y</pub-id>.</mixed-citation></ref>
<ref id="ref-27"><label>[27]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>D.</given-names> <surname>Tallapally</surname></string-name>, <string-name><given-names>R. S.</given-names> <surname>Sreepada</surname></string-name>, <string-name><given-names>B. K.</given-names> <surname>Patra</surname></string-name>, and <string-name><given-names>K. S.</given-names> <surname>Babu</surname></string-name></person-group>, &#x201C;<article-title>User preference learning in multi-criteria recommendations using stacked auto encoders</article-title>,&#x201D; in <conf-name>Proc. 12th ACM Conf. Recomm Syst.</conf-name>, <year>2018</year>, pp. <fpage>475</fpage>&#x2013;<lpage>479</lpage>.</mixed-citation></ref>
<ref id="ref-28"><label>[28]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>D. Q.</given-names> <surname>Zhang</surname></string-name> and <string-name><given-names>S. C.</given-names> <surname>Chen</surname></string-name></person-group>, &#x201C;<article-title>Clustering incomplete data using kernel-based fuzzy c-means algorithm</article-title>,&#x201D; <source>Neural. Process. Lett</source>., vol. <volume>18</volume>, pp. <fpage>155</fpage>&#x2013;<lpage>162</lpage>, <year>2003</year>. doi: <pub-id pub-id-type="doi">10.1023/B:NEPL.0000011135.19145.1b</pub-id>.</mixed-citation></ref>
<ref id="ref-29"><label>[29]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>C. V. M.</given-names> <surname>Krishna</surname></string-name>, <string-name><given-names>G. A.</given-names> <surname>Rao</surname></string-name>, and <string-name><given-names>S.</given-names> <surname>Anuradha</surname></string-name></person-group>, &#x201C;<article-title>Analysing the impact of contextual segments on the overall rating in multi-criteria recommender systems</article-title>,&#x201D; <source>J. Big Data</source>, vol. <volume>10</volume>, no. <issue>1</issue>, pp. <fpage>16</fpage>, <year>2023</year>. doi: <pub-id pub-id-type="doi">10.1186/s40537-023-00690-y</pub-id>; <pub-id pub-id-type="pmid">36777096</pub-id></mixed-citation></ref>
<ref id="ref-30"><label>[30]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>J. L.</given-names> <surname>Herlocker</surname></string-name>, <string-name><given-names>J. A.</given-names> <surname>Konstan</surname></string-name>, <string-name><given-names>L. G.</given-names> <surname>Terveen</surname></string-name>, and <string-name><given-names>J. T.</given-names> <surname>Riedl</surname></string-name></person-group>, &#x201C;<article-title>Evaluating collaborative filtering recommender systems</article-title>,&#x201D; <source>ACM Trans. Inf. Syst. (TOIS)</source>, vol. <volume>22</volume>, no. <issue>1</issue>, pp. <fpage>5</fpage>&#x2013;<lpage>53</lpage>, <year>2004</year>. doi: <pub-id pub-id-type="doi">10.1145/963770.963772</pub-id>.</mixed-citation></ref>
<ref id="ref-31"><label>[31]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>F.</given-names> <surname>Cacheda</surname></string-name>, <string-name><given-names>V.</given-names> <surname>Carneiro</surname></string-name>, <string-name><given-names>D.</given-names> <surname>Fern&#x00E1;ndez</surname></string-name>, and <string-name><given-names>V.</given-names> <surname>Formoso</surname></string-name></person-group>, &#x201C;<article-title>Comparison of collaborative filtering algorithms: Limitations of current techniques and proposals for scalable, high-performance recommender systems</article-title>,&#x201D; <source>ACM Trans. Web. (TWEB)</source>, vol. <volume>5</volume>, no. <issue>1</issue>, pp. <fpage>1</fpage>&#x2013;<lpage>33</lpage>, <year>2011</year>. doi: <pub-id pub-id-type="doi">10.1145/1921591.1921593</pub-id>.</mixed-citation></ref>
<ref id="ref-32"><label>[32]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>Z.</given-names> <surname>Shu</surname></string-name>, <string-name><given-names>Y.</given-names> <surname>Bai</surname></string-name>, <string-name><given-names>D.</given-names> <surname>Zhang</surname></string-name>, <string-name><given-names>J.</given-names> <surname>Yu</surname></string-name>, <string-name><given-names>Z.</given-names> <surname>Yu</surname></string-name>, and <string-name><given-names>X.</given-names> <surname>Wu</surname></string-name></person-group>, &#x201C;<article-title>Specific class center guided deep hashing for cross-modal retrieval</article-title>,&#x201D; <source>Inf. Sci</source>., vol. <volume>609</volume>, pp. <fpage>304</fpage>&#x2013;<lpage>318</lpage>, <year>2022</year>. doi: <pub-id pub-id-type="doi">10.1016/j.ins.2022.07.095</pub-id>.</mixed-citation></ref>
<ref id="ref-33"><label>[33]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>Z.</given-names> <surname>Shu</surname></string-name>, <string-name><given-names>K.</given-names> <surname>Yong</surname></string-name>, <string-name><given-names>D.</given-names> <surname>Zhang</surname></string-name>, <string-name><given-names>J.</given-names> <surname>Yu</surname></string-name>, <string-name><given-names>Z.</given-names> <surname>Yu</surname></string-name>, and <string-name><given-names>X. J.</given-names> <surname>Wu</surname></string-name></person-group>, &#x201C;<article-title>Robust supervised matrix factorization hashing with application to cross-modal retrieval</article-title>,&#x201D; <source>Neural. Comput. Appl</source>., vol. <volume>35</volume>, no. <issue>9</issue>, pp. <fpage>6665</fpage>&#x2013;<lpage>6684</lpage>, <year>2023</year>. doi: <pub-id pub-id-type="doi">10.1007/s00521-022-08006-6</pub-id>.</mixed-citation></ref>
<ref id="ref-34"><label>[34]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>P. N.</given-names> <surname>Ramesh</surname></string-name> and <string-name><given-names>S.</given-names> <surname>Kannimuthu</surname></string-name></person-group>, &#x201C;<article-title>Context-aware practice problem recommendation using learners&#x2019; skill level navigation patterns</article-title>,&#x201D; <source>Intell. Autom. Soft. Comput</source>., vol. <volume>35</volume>, no. <issue>3</issue>, pp. <fpage>3845</fpage>&#x2013;<lpage>3860</lpage>, <year>2023</year>. doi: <pub-id pub-id-type="doi">10.32604/iasc.2023.031329</pub-id>; <pub-id pub-id-type="pmid">37303558</pub-id></mixed-citation></ref>
<ref id="ref-35"><label>[35]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>L.</given-names> <surname>Li</surname></string-name>, <string-name><given-names>Z.</given-names> <surname>Shu</surname></string-name>, <string-name><given-names>Z.</given-names> <surname>Yu</surname></string-name>, and <string-name><given-names>X. J.</given-names> <surname>Wu</surname></string-name></person-group>, &#x201C;<article-title>Robust online hashing with label semantic enhancement for cross-modal retrieval</article-title>,&#x201D; <source>Pattern. Recognit</source>., vol. <volume>145</volume>, pp. <fpage>109972</fpage>, <year>2024</year>. doi: <pub-id pub-id-type="doi">10.1016/j.patcog.2023.109972</pub-id>.</mixed-citation></ref>
<ref id="ref-36"><label>[36]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>Z.</given-names> <surname>Shu</surname></string-name>, <string-name><given-names>K.</given-names> <surname>Yong</surname></string-name>, <string-name><given-names>J.</given-names> <surname>Yu</surname></string-name>, <string-name><given-names>S.</given-names> <surname>Gao</surname></string-name>, <string-name><given-names>C.</given-names> <surname>Mao</surname></string-name> and <string-name><given-names>Z.</given-names> <surname>Yu</surname></string-name></person-group>, &#x201C;<article-title>Discrete asymmetric zero-shot hashing with application to cross-modal retrieval</article-title>,&#x201D; <source>Neurocomputing</source>, vol. <volume>511</volume>, pp. <fpage>366</fpage>&#x2013;<lpage>379</lpage>, <year>2022</year>. doi: <pub-id pub-id-type="doi">10.1016/j.neucom.2022.09.037</pub-id>.</mixed-citation></ref>
<ref id="ref-37"><label>[37]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>V. S.</given-names> <surname>Devi</surname></string-name> and <string-name><given-names>S.</given-names> <surname>Kannimuthu</surname></string-name></person-group>, &#x201C;<article-title>Author profiling in code-mixed WhatsApp messages using stacked convolution networks and contextualized embedding-based text augmentation</article-title>,&#x201D; <source>Neural. Process. Lett</source>., vol. <volume>55</volume>, no. <issue>1</issue>, pp. <fpage>589</fpage>&#x2013;<lpage>614</lpage>, <year>2023</year>. doi: <pub-id pub-id-type="doi">10.1007/s11063-022-10898-3</pub-id>.</mixed-citation></ref>
<ref id="ref-38"><label>[38]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>K. C.</given-names> <surname>Raja</surname></string-name> and <string-name><given-names>S.</given-names> <surname>Kannimuthu</surname></string-name></person-group>, &#x201C;<article-title>Conditional generative adversarial network approach for autism prediction</article-title>,&#x201D; <source>Comput. Syst. Sci. Eng</source>., vol. <volume>44</volume>, no. <issue>1</issue>, <year>2023</year>. doi: <pub-id pub-id-type="doi">10.32604/csse.2023.025331</pub-id>.</mixed-citation></ref>
<ref id="ref-39"><label>[39]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>S.</given-names> <surname>Wu</surname></string-name>, <string-name><given-names>F.</given-names> <surname>Sun</surname></string-name>, <string-name><given-names>W.</given-names> <surname>Zhang</surname></string-name>, <string-name><given-names>X.</given-names> <surname>Xie</surname></string-name>, and <string-name><given-names>B.</given-names> <surname>Cui</surname></string-name></person-group>, &#x201C;<article-title>Graph neural networks in recommender systems: A survey</article-title>,&#x201D; <source>ACM Comput. Surv</source>., vol. <volume>55</volume>, no. <issue>5</issue>, pp. <fpage>1</fpage>&#x2013;<lpage>37</lpage>, <year>2022</year>. doi: <pub-id pub-id-type="doi">10.1145/3535101</pub-id>.</mixed-citation></ref>
<ref id="ref-40"><label>[40]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>C.</given-names> <surname>Gao</surname></string-name> <etal>et al.</etal></person-group>, &#x201C;<article-title>A survey of graph neural networks for recommender systems: Challenges, methods, and directions</article-title>,&#x201D; <source>ACM Trans. Recomm. Syst</source>., vol. <volume>1</volume>, no. <issue>1</issue>, pp. <fpage>1</fpage>&#x2013;<lpage>51</lpage>, <year>2023</year>. doi: <pub-id pub-id-type="doi">10.1145/3568022</pub-id>.</mixed-citation></ref>
<ref id="ref-41"><label>[41]</label><mixed-citation publication-type="other"><person-group person-group-type="author"><string-name><given-names>Y.</given-names> <surname>Gao</surname></string-name> <etal>et al.</etal></person-group>, &#x201C;<article-title>Chat-REC: Towards interactive and explainable LLMs-augmented recommender system</article-title>,&#x201D; <comment>arXiv preprint arXiv:2303.14524</comment>, <year>2023</year>.</mixed-citation></ref>
<ref id="ref-42"><label>[42]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>Y.</given-names> <surname>Deng</surname></string-name>, <string-name><given-names>W.</given-names> <surname>Zhang</surname></string-name>, <string-name><given-names>W.</given-names> <surname>Xu</surname></string-name>, <string-name><given-names>W.</given-names> <surname>Lei</surname></string-name>, <string-name><given-names>T. S.</given-names> <surname>Chua</surname></string-name>, and <string-name><given-names>W.</given-names> <surname>Lam</surname></string-name></person-group>, &#x201C;<article-title>A unified multi-task learning framework for multi-goal conversational recommender systems</article-title>,&#x201D; <source>ACM Trans. Inf. Syst</source>., vol. <volume>41</volume>, no. <issue>3</issue>, pp. <fpage>1</fpage>&#x2013;<lpage>25</lpage>, <year>2023</year>. doi: <pub-id pub-id-type="doi">10.1145/3570640</pub-id>.</mixed-citation></ref>
<ref id="ref-43"><label>[43]</label><mixed-citation publication-type="book"><person-group person-group-type="author"><string-name><given-names>J.</given-names> <surname>Han</surname></string-name>, <string-name><given-names>M.</given-names> <surname>Kamber</surname></string-name>, and <string-name><given-names>J.</given-names> <surname>Pei</surname></string-name></person-group>, &#x201C;<chapter-title>Data mining concepts and techniques preface and introduction</chapter-title>,&#x201D; in <person-group person-group-type="editor"><string-name><given-names>S.</given-names> <surname>Kusumadewi</surname></string-name></person-group> (Ed.), <source>Klasifikasi Status Gizi Menggunakan</source>, <year>2012</year>, vol. <volume>3</volume>, pp. <fpage>6</fpage>&#x2013;<lpage>11</lpage>.</mixed-citation></ref>
</ref-list>
</back></article>