<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.1 20151215//EN" "http://jats.nlm.nih.gov/publishing/1.1/JATS-journalpublishing1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:mml="http://www.w3.org/1998/Math/MathML" xml:lang="en" article-type="research-article" dtd-version="1.1">
<front>
<journal-meta>
<journal-id journal-id-type="pmc">CMC</journal-id>
<journal-id journal-id-type="nlm-ta">CMC</journal-id>
<journal-id journal-id-type="publisher-id">CMC</journal-id>
<journal-title-group>
<journal-title>Computers, Materials &#x0026; Continua</journal-title>
</journal-title-group>
<issn pub-type="epub">1546-2226</issn>
<issn pub-type="ppub">1546-2218</issn>
<publisher>
<publisher-name>Tech Science Press</publisher-name>
<publisher-loc>USA</publisher-loc>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="publisher-id">37483</article-id>
<article-id pub-id-type="doi">10.32604/cmc.2023.037483</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Article</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>A New Partial Task Offloading Method in a Cooperation Mode under Multi-Constraints for Multi-UE</article-title>
<alt-title alt-title-type="left-running-head">A New Partial Task Offloading Method in a Cooperation Mode under Multi-Constraints for Multi-UE</alt-title>
<alt-title alt-title-type="right-running-head">A New Partial Task Offloading Method in a Cooperation Mode under Multi-Constraints for Multi-UE</alt-title>
</title-group>
<contrib-group>
<contrib id="author-1" contrib-type="author">
<name name-style="western"><surname>Sun</surname><given-names>Shengyao</given-names></name><xref ref-type="aff" rid="aff-1">1</xref><xref ref-type="aff" rid="aff-2">2</xref></contrib>
<contrib id="author-2" contrib-type="author">
<name name-style="western"><surname>Du</surname><given-names>Ying</given-names></name><xref ref-type="aff" rid="aff-3">3</xref></contrib>
<contrib id="author-3" contrib-type="author">
<name name-style="western"><surname>Chen</surname><given-names>Jiajun</given-names></name><xref ref-type="aff" rid="aff-4">4</xref></contrib>
<contrib id="author-4" contrib-type="author">
<name name-style="western"><surname>Zhang</surname><given-names>Xuan</given-names></name><xref ref-type="aff" rid="aff-5">5</xref></contrib>
<contrib id="author-5" contrib-type="author" corresp="yes">
<name name-style="western"><surname>Zhang</surname><given-names>Jiwei</given-names></name><xref ref-type="aff" rid="aff-6">6</xref><email>jwzhang666@bupt.edu.cn</email></contrib>
<contrib id="author-6" contrib-type="author">
<name name-style="western"><surname>Xu</surname><given-names>Yiyi</given-names></name><xref ref-type="aff" rid="aff-7">7</xref></contrib>
<aff id="aff-1"><label>1</label><institution>School of Information Science and Technology, Zhengzhou Normal University</institution>, <addr-line>Zhengzhou, 450044</addr-line>, <country>China</country></aff>
<aff id="aff-2"><label>2</label><institution>Henan Key Laboratory of Big Data Analysis and Processing, Henan University</institution>, <addr-line>Kaifeng, 475004</addr-line>, <country>China</country></aff>
<aff id="aff-3"><label>3</label><institution>School of Geography and Tourism, Zhengzhou Normal University</institution>, <addr-line>Zhengzhou, 450044</addr-line>, <country>China</country></aff>
<aff id="aff-4"><label>4</label><institution>Science and Engineering College, South China University of Technology</institution>, <addr-line>Guangzhou, 510641</addr-line>, <country>China</country></aff>
<aff id="aff-5"><label>5</label><institution>Department of Electrical and Electronic Engineering, Luohe Vocational Technology College</institution>, <addr-line>Luohe, 462002</addr-line>, <country>China</country></aff>
<aff id="aff-6"><label>6</label><institution>School of Computer Science, Beijing University of Posts and Telecommunications</institution>, <addr-line>Beijing, 100876</addr-line>, <country>China</country></aff>
<aff id="aff-7"><label>7</label><institution>Cardiff School of Engineering, Cardiff University</institution>, <addr-line>Cardiff, CF10 3XQ</addr-line>, <country>UK</country></aff>
</contrib-group>
<author-notes>
<corresp id="cor1"><label>&#x002A;</label>Corresponding Author: Jiwei Zhang. Email: <email>jwzhang666@bupt.edu.cn</email></corresp>
</author-notes>
<pub-date date-type="collection" publication-format="electronic"><year>2023</year></pub-date>
<pub-date date-type="pub" publication-format="electronic"><day>08</day><month>10</month><year>2023</year></pub-date>
<volume>76</volume>
<issue>3</issue>
<fpage>2879</fpage>
<lpage>2900</lpage>
<history>
<date date-type="received"><day>05</day><month>11</month><year>2022</year></date>
<date date-type="accepted"><day>07</day><month>4</month><year>2023</year></date>
</history>
<permissions>
<copyright-statement>&#x00A9; 2023 Sun et al.</copyright-statement>
<copyright-year>2023</copyright-year>
<copyright-holder>Sun et al.</copyright-holder>
<license xlink:href="https://creativecommons.org/licenses/by/4.0/">
<license-p>This work is licensed under a <ext-link ext-link-type="uri" xlink:type="simple" xlink:href="https://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</ext-link>, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.</license-p>
</license>
</permissions>
<self-uri content-type="pdf" xlink:href="TSP_CMC_37483.pdf"></self-uri>
<abstract>
<p>In Multi-access Edge Computing (MEC), to deal with the task offloading problem of multiple user equipment (UE) whose task modules have parallel dependencies under multiple constraints, this paper proposes a cooperation partial task offloading method (named CPMM), which aims to reduce UE energy and computation consumption while meeting the task completion delay as much as possible. CPMM first studies single-UE task offloading and then builds multi-UE task offloading on that basis. CPMM uses the critical path algorithm to divide the modules into key and non-key modules. Subject to the UE&#x2019;s own constraints when offloading tasks, it gives priority to non-key modules for offloading and uses an evaluation decision method to select appropriate key modules for offloading. Fully considering the competition among multiple UEs for communication resources and MEC service resources, CPMM uses a weighted queuing method to alleviate the competition for communication resources and a branch decision algorithm by which the Base Station (BS) determines each module&#x2019;s offloading location according to the MEC servers&#x2019; resources. It achieves its goal by selecting reasonable modules to offload and by using the cooperation of UE, MEC, and Cloud Center to determine the execution location of the modules. Extensive experiments demonstrate that CPMM reduces task computation consumption by around 6% on average and task completion delay by around 5% on average, and achieves a better task execution success rate than other similar methods.</p>
</abstract>
<kwd-group kwd-group-type="author">
<kwd>MEC</kwd>
<kwd>partial task offloading</kwd>
<kwd>parallel dependencies</kwd>
<kwd>completion delay</kwd>
</kwd-group>
</article-meta>
</front>
<body>
<sec id="s1">
<label>1</label>
<title>Introduction</title>
<p>In recent years, mobile user equipment (UE), represented by smartphones and tablets, has gradually become an essential part of people&#x2019;s daily lives [<xref ref-type="bibr" rid="ref-1">1</xref>&#x2013;<xref ref-type="bibr" rid="ref-4">4</xref>]. With its built-in camera, microphone, speakers, and a wide variety of sensors, UE provides users with social, business, infotainment, gaming, and other services, and plays an increasingly vital role in people&#x2019;s learning, entertainment, social life, and travel [<xref ref-type="bibr" rid="ref-1">1</xref>&#x2013;<xref ref-type="bibr" rid="ref-4">4</xref>]. The rapid growth in the number of UEs has also made it possible to expand data-based applications, and the number of UE applications keeps increasing. According to the latest data released by Sensor Tower, global App Store downloads reached 8.6 billion in the first quarter of 2022, and total downloads across the App Store and Google Play Store reached 36.9 billion [<xref ref-type="bibr" rid="ref-5">5</xref>].</p>
<p>Although these applications have brought various conveniences to people&#x2019;s lives, the natural limitations of UE, such as its computing power, battery capacity, and storage, mean it cannot handle computation-intensive, energy-intensive applications that demand highly capable equipment (such as virtual reality, augmented reality, and face recognition), which seriously affects the user experience [<xref ref-type="bibr" rid="ref-1">1</xref>&#x2013;<xref ref-type="bibr" rid="ref-4">4</xref>]. MEC can effectively address this problem. Under MEC, computing is no longer confined to the cloud (cloud side) and the client (device side), but can occur on any device along the data transmission path, closer to the data [<xref ref-type="bibr" rid="ref-1">1</xref>&#x2013;<xref ref-type="bibr" rid="ref-4">4</xref>]. When a UE&#x2019;s task runs, a pre-set task offloading method moves part of the work to another execution location to reduce the UE&#x2019;s computational load and energy consumption.</p>
<p>Nowadays, numerous task-offloading strategies have been proposed [<xref ref-type="bibr" rid="ref-6">6</xref>&#x2013;<xref ref-type="bibr" rid="ref-25">25</xref>]. Much research shows that the two key decisions in task offloading are the offloading time and the location of task processing. The offloading location refers to selecting the appropriate terminal to execute the task. The offloading time refers to a pre-set condition that, when triggered, makes the UE start offloading all (or part) of its tasks to the chosen location for processing. Among these strategies, partial offloading assumes the task can be split into several modules; according to the dependency relationships among the modules and each module&#x2019;s consumption of UE resources, some modules are selected and offloaded to an appropriate MEC server for processing [<xref ref-type="bibr" rid="ref-1">1</xref>,<xref ref-type="bibr" rid="ref-3">3</xref>,<xref ref-type="bibr" rid="ref-4">4</xref>].</p>
<p>Because a serial relationship is the simplest way to describe the modules that make up a task, most current partial task offloading methods assume serial dependencies among modules [<xref ref-type="bibr" rid="ref-10">10</xref>&#x2013;<xref ref-type="bibr" rid="ref-16">16</xref>]. However, in practical applications, modules can present both serial and parallel dependencies, as in face-detection tasks [<xref ref-type="bibr" rid="ref-3">3</xref>], and it is not reasonable to treat parallel modules serially. Meanwhile, traditional methods typically offload tasks only to MEC servers, with little involvement of other devices in the network, such as the servers of a Mobile Cloud Center (MCC). Compared to MCC servers, MEC servers have far fewer resources [<xref ref-type="bibr" rid="ref-1">1</xref>,<xref ref-type="bibr" rid="ref-2">2</xref>,<xref ref-type="bibr" rid="ref-4">4</xref>,<xref ref-type="bibr" rid="ref-11">11</xref>]. Having the MCC take part in offloading can effectively reduce the service pressure on MEC and balance the load of MEC servers; however, because the MCC is far from the UE, most methods rarely let it participate in task offloading. In addition, most methods assume there are sufficient resources to perform offloaded tasks and give little consideration to the impact of resource constraints. As a result, although many tasks are offloaded, their processing latency may increase, and task offloading may even fail. For example, during a unit of time, the communication channels of the Base station (BS) are fixed. When multiple UEs offload tasks, the demand for communication channels may exceed, even greatly exceed, the traffic the BS can handle. Some modules will then wait for a channel, which increases the tasks&#x2019; processing delay. If a task never obtains a channel, its offloading fails.</p>
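<p>The contrast between serial and parallel module dependencies can be illustrated with a small dependency-graph sketch. The module names, the face-detection-style example, and the grouping function below are illustrative assumptions, not part of CPMM:</p>

```python
# Sketch: a task as a module-dependency DAG (illustrative values only).
# In a serial chain every module waits for its predecessor; in a parallel
# graph, modules with no path between them may be offloaded concurrently.
from collections import defaultdict

def topological_levels(deps):
    """Group modules into levels; modules in the same level are
    independent and can execute (or be offloaded) in parallel.
    deps: module -> set of prerequisite modules."""
    indeg = defaultdict(int)
    succ = defaultdict(list)
    nodes = set(deps)
    for m, preds in deps.items():
        for p in preds:
            succ[p].append(m)
            indeg[m] += 1
            nodes.add(p)
    level = [n for n in nodes if indeg[n] == 0]
    levels = []
    while level:
        levels.append(sorted(level))
        nxt = []
        for n in level:
            for s in succ[n]:
                indeg[s] -= 1
                if indeg[s] == 0:
                    nxt.append(s)
        level = nxt
    return levels

# A face-detection-like task: split the frame, detect in two regions
# concurrently, then merge the results.
deps = {"split": set(), "detect_a": {"split"}, "detect_b": {"split"},
        "merge": {"detect_a", "detect_b"}}
print(topological_levels(deps))
# -> [['split'], ['detect_a', 'detect_b'], ['merge']]
```

<p>Here <monospace>detect_a</monospace> and <monospace>detect_b</monospace> share a level, so treating them serially would needlessly double their combined latency.</p>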
<p>To deal with the task offloading problem caused by the insufficient resources of multiple UEs when the modules that make up a task have parallel dependencies, this paper proposes a new Cooperation Partial task offloading method under Multi-constraints for Multi-UE (named CPMM). CPMM offloads tasks in a cooperation mode: rather than simply offloading tasks to a specific server, multiple servers cooperate, offloading some tasks from one server to others to improve the system&#x2019;s resource utilization. It considers many constraints and aims to reduce the computation and energy consumption of multiple UEs while meeting the completion delay as much as possible.</p>
<p>The main contributions of this paper are summarized as follows:
<list list-type="bullet">
<list-item>
<p>This paper studies the parallel modules offloading of multiple UEs under multiple constraints and proposes a partial offloading method in a cooperative mode.</p></list-item>
<list-item>
<p>To better study multi-UE task offloading under multiple constraints, CPMM first studies single-UE task offloading. It then studies multi-UE task offloading under multiple constraints based on the single-UE method.</p></list-item>
<list-item>
<p>CPMM uses the critical path algorithm to divide the modules into key and non-key modules. It proposes the non-key module offloading method and key module offloading method according to the multi-constraints. It uses different offloading methods to offload the modules according to their type.</p></list-item>
<list-item>
<p>To ensure the task&#x2019;s completion time, CPMM prioritizes non-key modules for offloading according to multi-constraints and prioritizes key modules to obtain offloading resources when competing for offloading resources.</p></list-item>
</list></p>
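<p>The critical-path classification used above can be sketched as follows. This is a minimal illustration of the standard longest-path idea on a module DAG, with hypothetical module names and costs; it is not the paper&#x2019;s exact algorithm:</p>

```python
# Sketch of the critical-path idea used to split modules into key (on the
# longest path) and non-key modules. Costs and names are illustrative.
def critical_modules(cost, deps):
    """Return (key_set, makespan) for a module DAG.
    cost: module -> execution time; deps: module -> set of predecessors."""
    order, seen = [], set()
    def visit(n):                      # depth-first topological sort
        if n in seen:
            return
        seen.add(n)
        for p in deps.get(n, ()):
            visit(p)
        order.append(n)
    for n in cost:
        visit(n)
    earliest = {}                      # earliest finish time per module
    for n in order:
        earliest[n] = cost[n] + max((earliest[p] for p in deps.get(n, ())),
                                    default=0)
    makespan = max(earliest.values())
    # Walk back from the last-finishing module, following the predecessor
    # whose finish time dominates at each step.
    key, n = set(), max(earliest, key=earliest.get)
    while True:
        key.add(n)
        preds = deps.get(n, ())
        if not preds:
            break
        n = max(preds, key=lambda p: earliest[p])
    return key, makespan

cost = {"in": 1, "a": 5, "b": 2, "out": 1}
deps = {"a": {"in"}, "b": {"in"}, "out": {"a", "b"}}
key, span = critical_modules(cost, deps)
print(key, span)   # modules off this path ("b" here) are non-key candidates
```

<p>Offloading a non-key module such as <monospace>b</monospace> does not lengthen the makespan, which is why CPMM prefers non-key modules as offloading candidates.</p>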
<p>The rest of the paper is structured as follows: <xref ref-type="sec" rid="s2">Section 2</xref> presents the related work. <xref ref-type="sec" rid="s3">Section 3</xref> gives an overview of CPMM. <xref ref-type="sec" rid="s4">Section 4</xref> presents single-UE task offloading with theoretical analysis. <xref ref-type="sec" rid="s5">Section 5</xref> discusses multi-UE task offloading with theoretical analysis under multiple constraints. In <xref ref-type="sec" rid="s6">Section 6</xref>, extensive experiments show that CPMM outperforms similar approaches on a variety of metrics, and analyses of various factors are conducted.</p>
</sec>
<sec id="s2">
<label>2</label>
<title>Related Work</title>
<sec id="s2_1">
<label>2.1</label>
<title>The Task Offloading under MEC</title>
<p>Task offloading has been successfully applied in many fields, such as MEC, vehicle networks, the Internet of Things (IoT), Artificial Intelligence (AI), Geographic Information Systems (GIS), etc. [<xref ref-type="bibr" rid="ref-1">1</xref>&#x2013;<xref ref-type="bibr" rid="ref-4">4</xref>]. Many task-offloading methods for mobile edge computing have been proposed [<xref ref-type="bibr" rid="ref-6">6</xref>&#x2013;<xref ref-type="bibr" rid="ref-25">25</xref>]. These methods can be classified from the following three aspects.</p>
<p>Based on task offloading scale: According to the scale of task offloading, task-offloading methods can be divided into local execution, complete offloading, and partial offloading. Liu et al. proposed a complete task offloading method based on a one-dimensional search algorithm [<xref ref-type="bibr" rid="ref-6">6</xref>]. This method finds the optimal offloading scheme according to the queue state of the application buffer, the available computing power at the UE and MEC servers, and the channel characteristics between the UE and MEC servers. LODCO is a dynamic complete task offloading method aiming to optimize application execution delay [<xref ref-type="bibr" rid="ref-7">7</xref>]. It assumes that the UE uses energy harvesting technology to minimize energy consumption during local execution and uses a battery power control method to optimize the energy consumed by data transmission. Paper [<xref ref-type="bibr" rid="ref-10">10</xref>] uses partial data to audit the integrity of edge server cache data. It analyzes the threat model and the audit objectives, then proposes a lightweight sampling-based probabilistic approach, namely EDI-V, to help app vendors audit the integrity of their data cached on a large scale of edge servers. Paper [<xref ref-type="bibr" rid="ref-11">11</xref>] proposed a MEC service pricing scheme to coordinate with service caching decisions and control wireless devices&#x0027; task offloading behavior in a cellular network. It models and analyzes the two-stage interaction between the BS and multiple associated wireless devices as a two-stage dynamic game of incomplete information.</p>
<p>Based on the dependency relationship of modules: During task offloading, if a task can be split into many modules, the relationships among the modules that make up the task can be serial or parallel [<xref ref-type="bibr" rid="ref-1">1</xref>,<xref ref-type="bibr" rid="ref-3">3</xref>]. In a serial dependency relationship, the execution of a task (or module) must wait for the result of its predecessor. In a parallel relationship, the tasks (or modules) offloaded to remote terminals can be transmitted and processed concurrently. The relationships between tasks (or modules) can be expressed in a task (or module) dependency graph [<xref ref-type="bibr" rid="ref-1">1</xref>]. The dependencies between tasks (or modules) need to be analyzed according to the specific characteristics of the application or program. Dependency relationships between tasks are not absolute or fixed and can change according to different criteria [<xref ref-type="bibr" rid="ref-3">3</xref>].</p>
<p>Since using a serial relationship to describe the dependencies between tasks (or modules) is relatively simple, many partial task offloading methods treat the relationships between modules as serial. Paper [<xref ref-type="bibr" rid="ref-16">16</xref>] considered the cooperation of cloud computing and MEC in IoT. It starts with the single-user computation offloading problem, then formulates the multiuser computation offloading problem as a mixed integer linear programming problem that accounts for resource competition among mobile users, and designs an iterative heuristic MEC resource allocation algorithm to make the offloading decision dynamically. Paper [<xref ref-type="bibr" rid="ref-17">17</xref>] considered a practical application consisting of a set of tasks and modeled it as a generic graph topology; the energy-efficient task offloading problem is then mathematically formulated as a constrained 0&#x2013;1 programming problem.</p>
<p>Based on the manner of offloading execution: Currently, most offloading strictly follows the definition of MEC. Offloaded tasks are only allowed to run in the MEC server cluster attached to the BS, and other service resources, such as remote MCC servers or the MEC servers of adjacent BSs, are rarely considered for task offloading [<xref ref-type="bibr" rid="ref-1">1</xref>&#x2013;<xref ref-type="bibr" rid="ref-4">4</xref>]. This manner effectively keeps the data transmission delay of offloaded tasks small. However, when there is heavy offloading demand within the communication range of a BS, it can easily overload the local MEC service and unbalance the load among MEC servers.</p>
<p>Cooperative methods usually assume that the resources available to execute offloaded tasks are limited and that multiple constraints restrict task offloading. The offloading location is no longer limited to the local MEC servers; tasks can also be offloaded to MCC servers or to MEC servers attached to other BSs.</p>
<p>Multi-terminal cooperation reduces the service pressure on MEC servers. Paper [<xref ref-type="bibr" rid="ref-14">14</xref>] extended the task offloading scenario to multiple cloud servers, aiming to obtain the optimal computation distribution among cloud servers in closed form for both the energy consumption minimization and application execution latency minimization problems. Paper [<xref ref-type="bibr" rid="ref-16">16</xref>] considered multiple constraints when offloading, designed an iterative heuristic MEC resource allocation algorithm to make the offloading decision dynamically, and accomplished task offloading through the cooperation of MEC and MCC resources.</p>
</sec>
<sec id="s2_2">
<label>2.2</label>
<title>MEC Architecture</title>
<p>The European Telecommunications Standards Institute (ETSI) has proposed a generic reference architecture [<xref ref-type="bibr" rid="ref-1">1</xref>,<xref ref-type="bibr" rid="ref-20">20</xref>,<xref ref-type="bibr" rid="ref-21">21</xref>]. The framework can be divided into the MEC system, MEC server management, and network layers. Although ETSI has proposed a reference architecture, no concrete standardized architectural framework exists for MEC. Therefore, researchers usually first need to define the system model of task offloading when they study it [<xref ref-type="bibr" rid="ref-4">4</xref>,<xref ref-type="bibr" rid="ref-10">10</xref>&#x2013;<xref ref-type="bibr" rid="ref-16">16</xref>]. Different researchers define the MEC system model differently, and the definitions can be summarized into the following three structures: (1) Single MEC server with a single Base station (BS): this model includes only one MEC server and one BS. The MEC server is attached to the BS and provides services to UEs within the BS&#x2019;s communication range. (2) Multiple MEC servers with a single BS: this model has multiple MEC servers, all attached to one BS. (3) Distributed MEC servers with multiple BSs: this model includes multiple BSs, each attached to one or more MEC servers. A UE offloads its task to the nearest BS, which can distribute the work to its attached MEC servers or route it to MEC servers attached to other BSs.</p>
</sec>
</sec>
<sec id="s3">
<label>3</label>
<title>The Overview of CPMM</title>
<p>CPMM is a partial task offloading method for multi-UE MEC under multiple constraints. It focuses on a system model with multiple MEC servers and a single BS, and on tasks whose modules have parallel dependencies. It aims to reduce energy and computation consumption while meeting the task completion delay as much as possible. It is divided into offloading for a single UE and offloading for multiple UEs: the first part describes how a single UE offloads tasks under multiple constraints, and the second describes how multiple UEs offload tasks under those constraints, building on the single-UE case. When the UE starts to execute a task, CPMM selects some of its modules to offload according to the constraints of concern. <xref ref-type="fig" rid="fig-1">Fig. 1</xref> shows its process and constraint conditions.</p>
<fig id="fig-1">
<label>Figure 1</label>
<caption>
<title>The process and the constraint conditions of CPMM</title>
</caption>
<graphic mimetype="image" mime-subtype="tif" xlink:href="CMC_37483-fig-1.tif"/>
</fig>
<p>For single-UE offloading, CPMM uses the critical path algorithm to divide the modules into key and non-key modules. The candidate offloading modules are then preferentially selected from the non-key module set. After obtaining this set, CPMM evaluates the UE&#x2019;s battery power to determine whether it can support the normal execution of the task. If the battery power is sufficient, it evaluates the current amount of computation to determine whether modules must be offloaded due to resource constraints. Offloading forced by insufficient resources is called forced offloading; offloading performed while resources are sufficient is called active offloading. When the multiple constraints considered by CPMM can all be satisfied for normal task execution, it uses active offloading to offload some non-key modules. If they cannot be satisfied (e.g., even after some non-key modules have been offloaded, the UE&#x2019;s computing resources still cannot support normal execution of the task), CPMM considers whether some key modules must be offloaded as well, and selects a candidate set from the key modules in a successive verification manner, to wait for offloading.</p>
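<p>The single-UE decision flow described above, checking battery power first and then computation, can be condensed into a sketch. The thresholds and scalar quantities are illustrative assumptions, not CPMM&#x2019;s actual model:</p>

```python
# Sketch (hypothetical thresholds) of the single-UE decision flow:
# check battery first, then decide between forced and active offloading.
def offload_decision(battery, battery_needed, cpu_capacity, cpu_needed):
    """Classify the offloading situation for one UE in one interval.
    All quantities are illustrative scalars, not the paper's exact model."""
    if battery < battery_needed:
        return "forced"      # battery cannot support normal execution
    if cpu_needed > cpu_capacity:
        return "forced"      # computation exceeds UE capacity per interval
    return "active"          # resources suffice; offload only to save cost

print(offload_decision(80, 50, 10, 12))  # CPU shortfall -> forced
print(offload_decision(80, 50, 10, 6))   # resources suffice -> active
```

<p>In the forced case CPMM must offload enough modules (possibly including key modules) to make the task feasible; in the active case it offloads only non-key modules that lower energy and computation cost.</p>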
<p>Since multi-UE task offloading can be regarded as multiple single-UE offloadings plus external resource competition, CPMM first proposes the offloading methods for key and non-key modules under the assumption of sufficient communication and MEC server resources. Building on these single-UE methods, CPMM then addresses multi-UE task offloading with external resource competition. It adopts a weighted queuing method to alleviate the competition of multiple UEs for the BS&#x2019;s communication resources and uses a branching method to divide multi-UE task offloading into different cases according to the constraints of the BS&#x2019;s communication channels and MEC servers. Finally, it determines where each candidate module is offloaded and executed according to the resource situation in each time interval.</p>
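<p>One plausible reading of the weighted queuing step is an ordinary priority queue over a fixed channel budget per interval, with key modules given larger weights so they win contention first. The weights, tuples, and function name below are illustrative assumptions:</p>

```python
# Sketch of weighted queuing for BS channel contention: candidate modules
# from several UEs queue for a fixed number of channels per interval.
import heapq

def allocate_channels(requests, channels):
    """requests: list of (weight, ue, module); higher weight = higher
    priority. Returns (granted, waiting): modules given a channel this
    interval and those deferred to the next interval."""
    heap = [(-w, ue, mod) for w, ue, mod in requests]  # max-heap via negation
    heapq.heapify(heap)
    granted, waiting = [], []
    for _ in range(min(channels, len(heap))):
        _, ue, mod = heapq.heappop(heap)
        granted.append((ue, mod))
    while heap:
        _, ue, mod = heapq.heappop(heap)
        waiting.append((ue, mod))
    return granted, waiting

# Key modules carry weight 2, non-key weight 1; only two channels are free.
reqs = [(2, "UE1", "key_mod"), (1, "UE2", "nonkey"), (2, "UE3", "key_mod")]
granted, waiting = allocate_channels(reqs, channels=2)
print(granted, waiting)  # the two key modules win the two channels
```

<p>This matches the stated policy that key modules obtain offloading resources first when competition occurs, while deferred non-key modules simply wait for a later interval.</p>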
</sec>
<sec id="s4">
<label>4</label>
<title>The Task Offloading of Single-UE under Multi-Constraints</title>
<sec id="s4_1">
<label>4.1</label>
<title>The Constraints of Task Offloading for Single-UE</title>
<p>Task offloading is a complex process that is affected and constrained by many factors, such as user preferences, network link quality, mobile device performance, BS performance, etc. [<xref ref-type="bibr" rid="ref-1">1</xref>]. CPMM mainly focuses on the following aspects:
<list list-type="bullet">
<list-item>
<p>Constraint 1 (UE&#x2019;s battery power constraint): The battery power of UE must be able to support the standard task processing.</p></list-item>
<list-item>
<p>Constraint 2 (Task execution constraint): All modules must be executed to ensure the normal execution of the task.</p></list-item>
<list-item>
<p>Constraint 3 (Computation constraint during a unit of time): The CPU computing capacity of the UE must meet the computation required for task execution during a unit of time.</p></list-item>
<list-item>
<p>Constraint 4 (Communication channel constraint): When multiple UEs offload candidate modules concurrently, they are constrained by the number of available communication channels per unit of time.</p></list-item>
<list-item>
<p>Constraint 5 (MEC server resource constraint): When multiple UEs concurrently select MEC servers to execute their modules, they are constrained by the number of service requests the MEC servers can accept.</p></list-item>
</list></p>
<p>Constraint 1 is a prerequisite for task execution. Constraint 2 ensures that user-submitted tasks execute smoothly and produce correct results. Constraints 1 and 3 determine why tasks are offloaded, i.e., whether the UE actively offloads modules to reduce energy and computational consumption or is forced to offload due to insufficient UE resources. Constraints 4 and 5 determine that communication competition and MEC resource competition must be taken into account during offloading.</p>
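<p>The system-level constraints (4 and 5) amount to a per-interval feasibility check on an offloading plan. The data layout, server names, and numbers below are illustrative assumptions:</p>

```python
# Sketch checking Constraints 4 and 5 for one time interval: concurrent
# offloads must not exceed the BS's channels, and each MEC server's
# accepted services must not exceed its capacity. Numbers are illustrative.
def interval_feasible(offloads, channels, mec_capacity):
    """offloads: list of (ue, module, server); mec_capacity: server ->
    max services per interval. True when Constraints 4 and 5 both hold."""
    if len(offloads) > channels:          # Constraint 4: channel limit
        return False
    load = {}
    for _, _, server in offloads:
        load[server] = load.get(server, 0) + 1
    # Constraint 5: no server exceeds its acceptable service count
    return all(load[s] <= mec_capacity.get(s, 0) for s in load)

plan = [("UE1", "m1", "mec0"), ("UE2", "m2", "mec0"), ("UE3", "m3", "mec1")]
print(interval_feasible(plan, channels=4, mec_capacity={"mec0": 2, "mec1": 1}))
# -> True; with only 2 channels the same plan would violate Constraint 4
```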
</sec>
<sec id="s4_2">
<label>4.2</label>
<title>The Method of Task Module Classification</title>
<p>This section mainly describes which modules in the parallel relationship have a lower impact on the task completion time. We use <inline-formula id="ieqn-1"><mml:math id="mml-ieqn-1"><mml:mi>U</mml:mi><mml:msub><mml:mi>E</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mo fence="false" stretchy="false">{</mml:mo><mml:mi>c</mml:mi><mml:mi>a</mml:mi><mml:msub><mml:mi>p</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>m</mml:mi></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:mi>p</mml:mi><mml:mi>o</mml:mi><mml:msub><mml:mi>w</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:mi>P</mml:mi><mml:msub><mml:mi>C</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>c</mml:mi><mml:mi>o</mml:mi><mml:mi>m</mml:mi></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:mi>P</mml:mi><mml:msub><mml:mi>I</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>s</mml:mi><mml:mi>e</mml:mi><mml:mi>n</mml:mi><mml:mi>d</mml:mi></mml:mrow></mml:msub><mml:mo fence="false" stretchy="false">}</mml:mo></mml:math></inline-formula> to represent a UE, where <inline-formula id="ieqn-2"><mml:math id="mml-ieqn-2"><mml:mi>c</mml:mi><mml:mi>a</mml:mi><mml:msub><mml:mi>p</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>m</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> indicates the CPU computation frequency during a unit of time (e.g., 1 s), <inline-formula id="ieqn-3"><mml:math id="mml-ieqn-3"><mml:mi>p</mml:mi><mml:mi>o</mml:mi><mml:msub><mml:mi>w</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> is the current battery power, <inline-formula id="ieqn-4"><mml:math id="mml-ieqn-4"><mml:mi>P</mml:mi><mml:msub><mml:mi>C</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>c</mml:mi><mml:mi>o</mml:mi><mml:mi>m</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> is the CPU computation power, 
and <inline-formula id="ieqn-5"><mml:math id="mml-ieqn-5"><mml:mi>P</mml:mi><mml:msub><mml:mi>I</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>s</mml:mi><mml:mi>e</mml:mi><mml:mi>n</mml:mi><mml:mi>d</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> is the transmit power of the network card. We use <inline-formula id="ieqn-6"><mml:math id="mml-ieqn-6"><mml:mrow><mml:mi>t</mml:mi><mml:mi>a</mml:mi><mml:mi>s</mml:mi><mml:msubsup><mml:mi>k</mml:mi><mml:mi>i</mml:mi><mml:mo>&#x0027;</mml:mo></mml:msubsup></mml:mrow></mml:math></inline-formula> to denote a task that <inline-formula id="ieqn-7"><mml:math id="mml-ieqn-7"><mml:mi>U</mml:mi><mml:msub><mml:mi>E</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> is about to perform, assuming it can be split into <italic>m &#x002B; 2</italic> modules with parallel dependencies. The initial module is <inline-formula id="ieqn-8"><mml:math id="mml-ieqn-8"><mml:mi>T</mml:mi><mml:msub><mml:mi>a</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mn>0</mml:mn></mml:mrow></mml:msub></mml:math></inline-formula>, which is initiated by the user locally. The last output module is <inline-formula id="ieqn-9"><mml:math id="mml-ieqn-9"><mml:mi>T</mml:mi><mml:msub><mml:mi>a</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>m</mml:mi><mml:mo>+</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub></mml:math></inline-formula>, which assembles the final task execution results locally after the modules have run at their various execution terminals. <inline-formula id="ieqn-10"><mml:math id="mml-ieqn-10"><mml:mi>T</mml:mi><mml:msub><mml:mi>a</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> represents the <italic>k</italic>th module; under the system model CPMM focuses on, its execution position is the local UE, an MEC server, or an MCC server. 
Its execution position can be expressed as:</p>
<p><disp-formula id="eqn-1"><label>(1)</label><mml:math id="mml-eqn-1" display="block"><mml:mi>&#x03B1;</mml:mi><mml:mo>=</mml:mo><mml:mrow><mml:mo>{</mml:mo><mml:mtable columnalign="left" rowspacing="4pt" columnspacing="1em"><mml:mtr><mml:mtd><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mrow><mml:mi>l</mml:mi><mml:mi>o</mml:mi><mml:mi>c</mml:mi><mml:mi>a</mml:mi><mml:mi>l</mml:mi></mml:mrow></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mn>0</mml:mn><mml:mo>,</mml:mo><mml:mrow><mml:mi>o</mml:mi><mml:mi>t</mml:mi><mml:mi>h</mml:mi><mml:mi>e</mml:mi><mml:mi>r</mml:mi></mml:mrow></mml:mtd></mml:mtr></mml:mtable><mml:mo fence="true" stretchy="true" symmetric="true"></mml:mo></mml:mrow></mml:math></disp-formula></p>
<p>The task processing process can be divided into several time points, denoted <inline-formula id="ieqn-11"><mml:math id="mml-ieqn-11"><mml:mo fence="false" stretchy="false">{</mml:mo><mml:msub><mml:mi>T</mml:mi><mml:mrow><mml:mn>0</mml:mn></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mi>T</mml:mi><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mi>T</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:mo>&#x2026;</mml:mo><mml:mo>,</mml:mo><mml:msub><mml:mi>T</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>&#x2212;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mi>T</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:mo>&#x2026;</mml:mo><mml:mo fence="false" stretchy="false">}</mml:mo></mml:math></inline-formula>, which form a sequential relationship. The interval between any two adjacent time points is expressed as <inline-formula id="ieqn-12"><mml:math id="mml-ieqn-12"><mml:mi mathvariant="normal">&#x0394;</mml:mi><mml:msub><mml:mi>T</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo stretchy="false">(</mml:mo><mml:mi mathvariant="normal">&#x0394;</mml:mi><mml:msub><mml:mi>T</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:msub><mml:mi>T</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>&#x2212;</mml:mo><mml:msub><mml:mi>T</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>&#x2212;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula>, and all intervals are equal. So, the model of <inline-formula id="ieqn-13"><mml:math id="mml-ieqn-13"><mml:mrow><mml:mi>t</mml:mi><mml:mi>a</mml:mi><mml:mi>s</mml:mi><mml:msubsup><mml:mi>k</mml:mi><mml:mi>i</mml:mi><mml:mo>&#x2032;</mml:mo></mml:msubsup></mml:mrow></mml:math></inline-formula> can be shown in <xref ref-type="fig" rid="fig-2">Fig. 2</xref> according to the possible processing positions and the dependencies among the modules.</p>
<fig id="fig-2">
<label>Figure 2</label>
<caption>
<title>The time series model of single-UE task offloading</title>
</caption>
<graphic mimetype="image" mime-subtype="tif" xlink:href="CMC_37483-fig-2.tif"/>
</fig>
<p>Based on the relevant theory of the critical path in graph theory [<xref ref-type="bibr" rid="ref-26">26</xref>], the critical path determines the latest completion time of a project, which can be obtained by the critical path algorithm. CPMM also uses a directed weighted graph to describe the relationships between modules, so it can use the critical path algorithm to get the latest completion time of the task. In a directed weighted graph, if the delay of a module on the critical path increases, the task completion time increases as well; that is, the completion delay of modules on the critical path has a greater impact on the task completion delay. In other words, CPMM can use the critical path algorithm to determine which modules have less impact on delay.</p>
<p>Setting <inline-formula id="ieqn-14"><mml:math id="mml-ieqn-14"><mml:mi>T</mml:mi><mml:msub><mml:mi>a</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mrow><mml:mo>{</mml:mo><mml:mi>D</mml:mi><mml:mi>a</mml:mi><mml:mi>t</mml:mi><mml:msub><mml:mi>a</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:mi>B</mml:mi><mml:msub><mml:mi>I</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mi>I</mml:mi><mml:mi>s</mml:mi><mml:mi>K</mml:mi><mml:mi>e</mml:mi><mml:mi>y</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mo>}</mml:mo></mml:mrow></mml:math></inline-formula>. <inline-formula id="ieqn-15"><mml:math id="mml-ieqn-15"><mml:mi>D</mml:mi><mml:mi>a</mml:mi><mml:mi>t</mml:mi><mml:msub><mml:mi>a</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> is the size of the data to be processed. <inline-formula id="ieqn-16"><mml:math id="mml-ieqn-16"><mml:mi>B</mml:mi><mml:msub><mml:mi>I</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> is the module attribute. If <inline-formula id="ieqn-17"><mml:math id="mml-ieqn-17"><mml:mi>B</mml:mi><mml:msub><mml:mi>I</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:math></inline-formula>, it means that <inline-formula id="ieqn-18"><mml:math id="mml-ieqn-18"><mml:mi>T</mml:mi><mml:msub><mml:mi>a</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> must be processed locally. 
<inline-formula id="ieqn-19"><mml:math id="mml-ieqn-19"><mml:msub><mml:mrow><mml:mi>I</mml:mi><mml:mi>s</mml:mi><mml:mi>K</mml:mi><mml:mi>e</mml:mi><mml:mi>y</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> indicates whether a module is a key module. If a module is on a critical path, CPMM calls it a key module and sets <inline-formula id="ieqn-20"><mml:math id="mml-ieqn-20"><mml:msub><mml:mrow><mml:mi>I</mml:mi><mml:mi>s</mml:mi><mml:mi>K</mml:mi><mml:mi>e</mml:mi><mml:mi>y</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:math></inline-formula>. Otherwise, it is a non-key module and <inline-formula id="ieqn-21"><mml:math id="mml-ieqn-21"><mml:msub><mml:mrow><mml:mi>I</mml:mi><mml:mi>s</mml:mi><mml:mi>K</mml:mi><mml:mi>e</mml:mi><mml:mi>y</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mn>0</mml:mn></mml:math></inline-formula>. <inline-formula id="ieqn-22"><mml:math id="mml-ieqn-22"><mml:mi>C</mml:mi><mml:mi>T</mml:mi><mml:msub><mml:mi>S</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> denotes the key module set and <inline-formula id="ieqn-23"><mml:math id="mml-ieqn-23"><mml:mi>N</mml:mi><mml:mi>C</mml:mi><mml:msub><mml:mi>T</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> denotes the non-key module set. 
According to the definition of critical path, CPMM can get <inline-formula id="ieqn-24"><mml:math id="mml-ieqn-24"><mml:mi>T</mml:mi><mml:msub><mml:mi>a</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula>&#x2019;s earliest start time (<inline-formula id="ieqn-25"><mml:math id="mml-ieqn-25"><mml:mi>E</mml:mi><mml:msub><mml:mi>T</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula>), latest start time (<inline-formula id="ieqn-26"><mml:math id="mml-ieqn-26"><mml:mi>L</mml:mi><mml:msub><mml:mi>T</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula>), and maneuvering time <inline-formula id="ieqn-27"><mml:math id="mml-ieqn-27"><mml:mi>M</mml:mi><mml:msub><mml:mi>T</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula>.</p>
<p><disp-formula id="eqn-2"><label>(2)</label><mml:math id="mml-eqn-2" display="block"><mml:msub><mml:mrow><mml:mi mathvariant="normal">M</mml:mi><mml:mi mathvariant="normal">T</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mi>L</mml:mi><mml:msub><mml:mi>T</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mo>&#x2212;</mml:mo><mml:mi>E</mml:mi><mml:msub><mml:mi>T</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:mspace width="1em" /><mml:mi>M</mml:mi><mml:msub><mml:mi>T</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mo>&#x2265;</mml:mo><mml:mn>0</mml:mn></mml:math></disp-formula></p>
<p>Maneuvering time is the difference between the latest time a module must start (i.e., the latest start time) and the earliest time it can start (i.e., the earliest start time). Based on the definition of the critical path, the maneuvering time of a non-key module must meet <inline-formula id="ieqn-28"><mml:math id="mml-ieqn-28"><mml:mi>M</mml:mi><mml:msub><mml:mi>T</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mo>&#x2265;</mml:mo><mml:mn>0</mml:mn></mml:math></inline-formula>. This means that even if a non-key module starts up to <inline-formula id="ieqn-29"><mml:math id="mml-ieqn-29"><mml:mi>M</mml:mi><mml:msub><mml:mi>T</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> late, it will not influence the overall task processing delay. In other words, selecting modules to offload from <inline-formula id="ieqn-30"><mml:math id="mml-ieqn-30"><mml:mi>N</mml:mi><mml:mi>C</mml:mi><mml:msub><mml:mi>T</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> has a lower impact on the processing delay of <inline-formula id="ieqn-31"><mml:math id="mml-ieqn-31"><mml:mrow><mml:mi>t</mml:mi><mml:mi>a</mml:mi><mml:mi>s</mml:mi><mml:msubsup><mml:mi>k</mml:mi><mml:mi>i</mml:mi><mml:mo>&#x2032;</mml:mo></mml:msubsup></mml:mrow></mml:math></inline-formula>. Consequently, whether offloading is active or forced, the non-key module set can be chosen as the source of offloading modules to reduce the UE&#x2019;s energy and computational consumption.</p>
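<p>The earliest start time, latest start time, and maneuvering time above follow the standard critical-path computation on the module dependency graph. The following is a minimal sketch with hypothetical module delays and successor lists, not the authors' implementation:</p>

```python
from collections import deque

def module_slack(delay, succ):
    """Earliest/latest start times and maneuvering time (slack) per module.

    delay[k] is module k's processing delay; succ[k] lists its successors
    in the module dependency DAG. All values here are hypothetical.
    """
    n = len(delay)
    indeg = [0] * n
    for vs in succ:
        for v in vs:
            indeg[v] += 1
    # Kahn's algorithm: a topological order of the modules.
    order, q = [], deque(i for i in range(n) if indeg[i] == 0)
    while q:
        u = q.popleft()
        order.append(u)
        for v in succ[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                q.append(v)
    # Forward pass: earliest start time ET.
    ET = [0] * n
    for u in order:
        for v in succ[u]:
            ET[v] = max(ET[v], ET[u] + delay[u])
    finish = max(ET[u] + delay[u] for u in range(n))
    # Backward pass: latest start time LT.
    LT = [finish - delay[u] for u in range(n)]
    for u in reversed(order):
        for v in succ[u]:
            LT[u] = min(LT[u], LT[v] - delay[u])
    # Maneuvering time, Eq. (2): modules with MT == 0 are key modules.
    MT = [LT[u] - ET[u] for u in range(n)]
    return ET, LT, MT
```

<p>Modules with zero maneuvering time form the key module set; the remaining modules form the non-key set from which offloading candidates are drawn.</p>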
</sec>
<sec id="s4_3">
<label>4.3</label>
<title>The Selection Method of Candidate Offloading Set</title>
<p>This section mainly describes how a UE selects candidate offloading modules from the non-key modules. The steps are as follows:
<list list-type="bullet">
<list-item>
<p>Step 1: Obtain the initial candidate offloading set according to whether the module needs to be processed locally.</p></list-item>
</list></p>
<p>Since <inline-formula id="ieqn-32"><mml:math id="mml-ieqn-32"><mml:mi>B</mml:mi><mml:msub><mml:mi>I</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:math></inline-formula> means the module must be processed locally, CPMM selects the modules with <inline-formula id="ieqn-33"><mml:math id="mml-ieqn-33"><mml:mi>B</mml:mi><mml:msub><mml:mi>I</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mn>0</mml:mn></mml:math></inline-formula>, forming the initial candidate offloading set, denoted by <inline-formula id="ieqn-34"><mml:math id="mml-ieqn-34"><mml:mi>N</mml:mi><mml:mi>C</mml:mi><mml:msubsup><mml:mi>T</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mi>C</mml:mi><mml:mn>1</mml:mn></mml:mrow></mml:msubsup></mml:math></inline-formula>.
<list list-type="bullet">
<list-item>
<p>Step 2: Obtain the secondary candidate offloading set from <inline-formula id="ieqn-35"><mml:math id="mml-ieqn-35"><mml:mi>N</mml:mi><mml:mi>C</mml:mi><mml:msubsup><mml:mi>T</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mi>C</mml:mi><mml:mn>1</mml:mn></mml:mrow></mml:msubsup></mml:math></inline-formula> according to the amount of computation required by each module.</p></list-item>
</list></p>
<p>Since computationally intensive modules consume more of the UE&#x2019;s computational resources, CPMM selects the modules with high computation amounts from <inline-formula id="ieqn-36"><mml:math id="mml-ieqn-36"><mml:mi>N</mml:mi><mml:mi>C</mml:mi><mml:msubsup><mml:mi>T</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mi>C</mml:mi><mml:mn>1</mml:mn></mml:mrow></mml:msubsup></mml:math></inline-formula> to form a new candidate set.</p>
<p>To identify which modules are computationally intensive, CPMM defines a computation threshold, denoted <inline-formula id="ieqn-37"><mml:math id="mml-ieqn-37"><mml:mrow><mml:mi mathvariant="normal">&#x03B4;</mml:mi></mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mi mathvariant="normal">&#x03B4;</mml:mi></mml:mrow><mml:mo>&#x2265;</mml:mo><mml:mn>2</mml:mn><mml:mo>)</mml:mo></mml:mrow></mml:math></inline-formula>. When the computation amount of <inline-formula id="ieqn-38"><mml:math id="mml-ieqn-38"><mml:mi>T</mml:mi><mml:msub><mml:mi>a</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> satisfies inequality <xref ref-type="disp-formula" rid="eqn-3">(3)</xref> during a unit of time, <inline-formula id="ieqn-39"><mml:math id="mml-ieqn-39"><mml:mi>T</mml:mi><mml:msub><mml:mi>a</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> is considered computationally intensive.</p>
<p><disp-formula id="eqn-3"><label>(3)</label><mml:math id="mml-eqn-3" display="block"><mml:mi>U</mml:mi><mml:msub><mml:mi>C</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mo>&#x2265;</mml:mo><mml:mi>&#x03B4;</mml:mi><mml:mo>&#x00D7;</mml:mo><mml:mi>A</mml:mi><mml:mi>V</mml:mi><mml:mi>c</mml:mi><mml:msub><mml:mi>a</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:math></disp-formula></p>
<p>In which, <inline-formula id="ieqn-40"><mml:math id="mml-ieqn-40"><mml:mi>A</mml:mi><mml:mi>V</mml:mi><mml:mi>c</mml:mi><mml:msub><mml:mi>a</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> represents the average amount of computation during a unit of time for all tasks performed by <inline-formula id="ieqn-41"><mml:math id="mml-ieqn-41"><mml:mi>U</mml:mi><mml:msub><mml:mi>E</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula>; <inline-formula id="ieqn-42"><mml:math id="mml-ieqn-42"><mml:mi>A</mml:mi><mml:mi>V</mml:mi><mml:mi>c</mml:mi><mml:msub><mml:mi>a</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> can be obtained from <xref ref-type="disp-formula" rid="eqn-4">formula (4)</xref>.</p>
<p><disp-formula id="eqn-4"><label>(4)</label><mml:math id="mml-eqn-4" display="block"><mml:mi>A</mml:mi><mml:mi>V</mml:mi><mml:mi>c</mml:mi><mml:msub><mml:mi>a</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:munderover><mml:mo>&#x2211;</mml:mo><mml:mrow><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>c</mml:mi><mml:mi>o</mml:mi><mml:mrow><mml:mi mathvariant="normal">u</mml:mi><mml:mi mathvariant="normal">n</mml:mi></mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:munderover><mml:mrow><mml:mo>(</mml:mo><mml:mfrac><mml:mrow><mml:mi>T</mml:mi><mml:mi>c</mml:mi><mml:msub><mml:mi>a</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>a</mml:mi><mml:mi>l</mml:mi><mml:mi>l</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mi>m</mml:mi><mml:mo>+</mml:mo><mml:mn>2</mml:mn><mml:mo>)</mml:mo></mml:mrow></mml:mfrac><mml:mo>)</mml:mo></mml:mrow><mml:mspace width="negativethinmathspace" /><mml:mstyle scriptlevel="0"><mml:mrow><mml:mo fence="true" stretchy="true" symmetric="true" maxsize="2.047em" minsize="2.047em">/</mml:mo></mml:mrow></mml:mstyle><mml:mspace width="negativethinmathspace" 
/><mml:mrow><mml:mi>c</mml:mi><mml:mi>o</mml:mi><mml:mi>u</mml:mi><mml:mi>n</mml:mi><mml:mi>t</mml:mi></mml:mrow><mml:mo>=</mml:mo><mml:munderover><mml:mo>&#x2211;</mml:mo><mml:mrow><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mrow><mml:mi>c</mml:mi><mml:mi>o</mml:mi><mml:mi>u</mml:mi><mml:mi>n</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:mrow></mml:munderover><mml:mrow><mml:mo>(</mml:mo><mml:munderover><mml:mo>&#x2211;</mml:mo><mml:mrow><mml:mi>j</mml:mi><mml:mo>=</mml:mo><mml:mn>0</mml:mn></mml:mrow><mml:mrow><mml:mi>m</mml:mi><mml:mo>+</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:munderover><mml:mfrac><mml:msub><mml:mi>C</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mi>D</mml:mi><mml:mi>a</mml:mi><mml:mi>t</mml:mi><mml:msub><mml:mi>a</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:mfrac><mml:mstyle scriptlevel="0"><mml:mrow><mml:mo fence="true" stretchy="true" symmetric="true" maxsize="2.047em" minsize="2.047em">/</mml:mo></mml:mrow></mml:mstyle><mml:mspace width="negativethinmathspace" /><mml:mo stretchy="false">(</mml:mo><mml:mi>m</mml:mi><mml:mo>+</mml:mo><mml:mn>2</mml:mn><mml:mo stretchy="false">)</mml:mo><mml:mo>)</mml:mo></mml:mrow><mml:mspace width="negativethinmathspace" /><mml:mstyle scriptlevel="0"><mml:mrow><mml:mo fence="true" stretchy="true" symmetric="true" maxsize="2.047em" minsize="2.047em">/</mml:mo></mml:mrow></mml:mstyle><mml:mspace width="negativethinmathspace" /><mml:mi>c</mml:mi><mml:mi>o</mml:mi><mml:mrow><mml:mi mathvariant="normal">u</mml:mi></mml:mrow><mml:mi>n</mml:mi><mml:mi>t</mml:mi></mml:math></disp-formula></p>
<p>The <inline-formula id="ieqn-43"><mml:math id="mml-ieqn-43"><mml:mrow><mml:mi>c</mml:mi><mml:mi>o</mml:mi><mml:mi>u</mml:mi><mml:mi>n</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:math></inline-formula> represents the number of tasks processed by <inline-formula id="ieqn-44"><mml:math id="mml-ieqn-44"><mml:mi>U</mml:mi><mml:msub><mml:mi>E</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> during a unit of time. According to <xref ref-type="disp-formula" rid="eqn-4">formula (4)</xref>, if a module in <inline-formula id="ieqn-45"><mml:math id="mml-ieqn-45"><mml:mi>N</mml:mi><mml:mi>C</mml:mi><mml:msubsup><mml:mi>T</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mi>C</mml:mi><mml:mn>1</mml:mn></mml:mrow></mml:msubsup></mml:math></inline-formula> satisfies inequality <xref ref-type="disp-formula" rid="eqn-3">(3)</xref>, it is computationally intensive and is placed into the set <inline-formula id="ieqn-46"><mml:math id="mml-ieqn-46"><mml:mi>N</mml:mi><mml:mi>C</mml:mi><mml:msubsup><mml:mi>T</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mi>C</mml:mi><mml:mn>2</mml:mn></mml:mrow></mml:msubsup></mml:math></inline-formula>.
<list list-type="bullet">
<list-item>
<p>Step 3: Obtain the final candidate offloading set according to maneuvering time.</p></list-item>
</list></p>
<p>Offloading requires data migration. According to the definition of maneuvering time, if the delay of offloading a module to another location is still less than its maneuvering time, offloading the module will not affect the task processing delay, and vice versa. Therefore, <inline-formula id="ieqn-47"><mml:math id="mml-ieqn-47"><mml:mi>N</mml:mi><mml:mi>C</mml:mi><mml:msubsup><mml:mi>T</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mi>C</mml:mi><mml:mn>2</mml:mn></mml:mrow></mml:msubsup></mml:math></inline-formula> is filtered again based on the relationship between maneuvering time and the delay of offloading to the MEC server, yielding the final candidate offloading set.</p>
<p>The module is first migrated to the BS when offloading. Assume the communication bandwidth between <inline-formula id="ieqn-48"><mml:math id="mml-ieqn-48"><mml:mi>U</mml:mi><mml:msub><mml:mi>E</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> and the BS is B, the channel adopts a Code Division Multiple Access (CDMA) cellular model with h channels, the channel transmission rate conforms to Shannon&#x2019;s theorem, and the signal-to-noise ratio is fixed, expressed as <inline-formula id="ieqn-49"><mml:math id="mml-ieqn-49"><mml:mi>&#x03B7;</mml:mi></mml:math></inline-formula>. Then the data transmission rate (<inline-formula id="ieqn-50"><mml:math id="mml-ieqn-50"><mml:mi>T</mml:mi><mml:msub><mml:mi>r</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula>) between the UE and the BS can be expressed as:</p>
<p><disp-formula id="eqn-5"><label>(5)</label><mml:math id="mml-eqn-5" display="block"><mml:mi>T</mml:mi><mml:msub><mml:mi>r</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mi>&#x03B7;</mml:mi><mml:mo>&#x00D7;</mml:mo><mml:mrow><mml:mo>(</mml:mo><mml:mi>B</mml:mi><mml:mrow><mml:mo>/</mml:mo></mml:mrow><mml:mi>h</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:math></disp-formula></p>
<p>If <inline-formula id="ieqn-51"><mml:math id="mml-ieqn-51"><mml:mi>T</mml:mi><mml:msub><mml:mi>a</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> needs to be offloaded, the transmission delay (<inline-formula id="ieqn-52"><mml:math id="mml-ieqn-52"><mml:mi>T</mml:mi><mml:msub><mml:mi>t</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula>) from the UE to the BS can be expressed as:</p>
<p><disp-formula id="eqn-6"><label>(6)</label><mml:math id="mml-eqn-6" display="block"><mml:mi>T</mml:mi><mml:msub><mml:mi>t</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mi>D</mml:mi><mml:mi>a</mml:mi><mml:mi>t</mml:mi><mml:msub><mml:mi>a</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo>/</mml:mo></mml:mrow><mml:mi>T</mml:mi><mml:msub><mml:mi>r</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mo stretchy="false">(</mml:mo><mml:mi>D</mml:mi><mml:mi>a</mml:mi><mml:mi>t</mml:mi><mml:msub><mml:mi>a</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mo>&#x00D7;</mml:mo><mml:mi>h</mml:mi><mml:mo stretchy="false">)</mml:mo><mml:mrow><mml:mo>/</mml:mo></mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mi>B</mml:mi><mml:mo>&#x00D7;</mml:mo><mml:mi>&#x03B7;</mml:mi><mml:mo stretchy="false">)</mml:mo></mml:math></disp-formula></p>
<p>Assuming the distance between the BS and the MEC servers is one hop and the bandwidth is <inline-formula id="ieqn-53"><mml:math id="mml-ieqn-53"><mml:msub><mml:mi>B</mml:mi><mml:mrow><mml:mi>B</mml:mi><mml:mo stretchy="false">&#x2192;</mml:mo><mml:mi>E</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula>, and that <inline-formula id="ieqn-54"><mml:math id="mml-ieqn-54"><mml:mi>T</mml:mi><mml:msub><mml:mi>a</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula>&#x2019;s execution delay on the MEC servers is <inline-formula id="ieqn-55"><mml:math id="mml-ieqn-55"><mml:msub><mml:mi>&#x03B6;</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula>, then the delay of <inline-formula id="ieqn-56"><mml:math id="mml-ieqn-56"><mml:mi>T</mml:mi><mml:msub><mml:mi>a</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> offloading to the MEC server (<inline-formula id="ieqn-57"><mml:math id="mml-ieqn-57"><mml:mi>T</mml:mi><mml:mi>E</mml:mi><mml:mi>C</mml:mi><mml:msubsup><mml:mi>D</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow><mml:mrow><mml:mi>M</mml:mi><mml:mi>E</mml:mi><mml:mi>C</mml:mi></mml:mrow></mml:msubsup></mml:math></inline-formula>) is expressed as:</p>
<p><disp-formula id="eqn-7"><label>(7)</label><mml:math id="mml-eqn-7" display="block"><mml:mi>T</mml:mi><mml:mi>E</mml:mi><mml:mi>C</mml:mi><mml:msubsup><mml:mi>D</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow><mml:mrow><mml:mi>M</mml:mi><mml:mi>E</mml:mi><mml:mi>C</mml:mi></mml:mrow></mml:msubsup><mml:mo>=</mml:mo><mml:mi>T</mml:mi><mml:msub><mml:mi>t</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mo>+</mml:mo><mml:mi>D</mml:mi><mml:mi>a</mml:mi><mml:mi>t</mml:mi><mml:msub><mml:mi>a</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo>/</mml:mo></mml:mrow><mml:msub><mml:mi>B</mml:mi><mml:mrow><mml:mi>B</mml:mi><mml:mo stretchy="false">&#x2192;</mml:mo><mml:mi>E</mml:mi></mml:mrow></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>&#x03B6;</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:msub></mml:math></disp-formula></p>
<p>If <inline-formula id="ieqn-58"><mml:math id="mml-ieqn-58"><mml:mi>T</mml:mi><mml:msub><mml:mi>a</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula>&#x2019;s <inline-formula id="ieqn-59"><mml:math id="mml-ieqn-59"><mml:mi>M</mml:mi><mml:msub><mml:mi>T</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> satisfies inequality <xref ref-type="disp-formula" rid="eqn-8">(8)</xref>, offloading the module will not increase the task processing delay, so the module can be offloaded. Otherwise, offloading would increase the delay, so the module cannot be offloaded.</p>
<p><disp-formula id="eqn-8"><label>(8)</label><mml:math id="mml-eqn-8" display="block"><mml:mi>M</mml:mi><mml:msub><mml:mi>T</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mo>&#x003E;</mml:mo><mml:mi>T</mml:mi><mml:mi>E</mml:mi><mml:mi>C</mml:mi><mml:msubsup><mml:mi>D</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow><mml:mrow><mml:mi>M</mml:mi><mml:mi>E</mml:mi><mml:mi>C</mml:mi></mml:mrow></mml:msubsup></mml:math></disp-formula></p>
<p>According to inequality <xref ref-type="disp-formula" rid="eqn-8">(8)</xref>, <inline-formula id="ieqn-60"><mml:math id="mml-ieqn-60"><mml:mi>N</mml:mi><mml:mi>C</mml:mi><mml:msubsup><mml:mi>T</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mi>C</mml:mi><mml:mn>2</mml:mn></mml:mrow></mml:msubsup></mml:math></inline-formula> is filtered again, yielding the final candidate offloading set.</p>
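<p>The three selection steps above can be sketched as follows. This is a simplified illustration, not the authors&#x2019; implementation: the dictionary keys and parameter values are hypothetical, and the MEC offloading delay follows Eqs. (6) and (7):</p>

```python
def candidate_offloading_set(modules, delta, av_ca, B, h, eta, B_be):
    """Three-step selection of the final candidate offloading set.

    Each module is a dict with hypothetical keys: 'BI' (1 = must run
    locally), 'UC' (computation amount), 'MT' (maneuvering time),
    'Data' (data size), 'zeta' (execution delay on the MEC server).
    """
    # Step 1: keep modules that need not be processed locally (BI == 0).
    c1 = [m for m in modules if m['BI'] == 0]
    # Step 2: keep computationally intensive modules, inequality (3).
    c2 = [m for m in c1 if m['UC'] >= delta * av_ca]
    # Step 3: keep modules whose maneuvering time covers the MEC
    # offloading delay, Eqs. (6)-(7) and inequality (8).
    final = []
    for m in c2:
        t_send = (m['Data'] * h) / (B * eta)          # Eq. (6): UE -> BS
        tecd = t_send + m['Data'] / B_be + m['zeta']  # Eq. (7): total delay
        if m['MT'] > tecd:                            # inequality (8)
            final.append(m)
    return final
```

<p>Each step only narrows the set, so the filters can be applied in any order; applying the cheap attribute check first simply avoids computing offloading delays for modules that can never be offloaded.</p>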
</sec>
<sec id="s4_4">
<label>4.4</label>
<title>The Energy Evaluation Method</title>
<p>This section mainly describes how CPMM evaluates whether a submitted task can be successfully executed.</p>
<p>After the candidate offloading set is obtained from the non-key modules, the remaining modules are processed locally. Sending the data of the offloaded modules still consumes local battery energy, and battery energy is what keeps the task running. Therefore, CPMM needs to evaluate whether the UE&#x2019;s battery energy can support the offloading and running of these modules, to determine whether the task can be executed normally.</p>
<p>If <inline-formula id="ieqn-61"><mml:math id="mml-ieqn-61"><mml:mi>T</mml:mi><mml:msub><mml:mi>a</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> is processed locally, then the CPU energy consumption (<inline-formula id="ieqn-62"><mml:math id="mml-ieqn-62"><mml:mi>E</mml:mi><mml:msub><mml:mi>C</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula>) can be expressed as:</p>
<p><disp-formula id="eqn-9"><label>(9)</label><mml:math id="mml-eqn-9" display="block"><mml:mi>E</mml:mi><mml:msub><mml:mi>C</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mi>P</mml:mi><mml:msub><mml:mi>C</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>c</mml:mi><mml:mi>o</mml:mi><mml:mi>m</mml:mi></mml:mrow></mml:msub><mml:mo>&#x00D7;</mml:mo><mml:mi>T</mml:mi><mml:msub><mml:mi>p</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mi>P</mml:mi><mml:msub><mml:mi>C</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>c</mml:mi><mml:mi>o</mml:mi><mml:mi>m</mml:mi></mml:mrow></mml:msub><mml:mo>&#x00D7;</mml:mo><mml:mi>D</mml:mi><mml:mi>a</mml:mi><mml:mi>t</mml:mi><mml:msub><mml:mi>a</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo>/</mml:mo></mml:mrow><mml:msub><mml:mi>C</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:msub></mml:math></disp-formula></p>
<p>If <inline-formula id="ieqn-63"><mml:math id="mml-ieqn-63"><mml:mi>T</mml:mi><mml:msub><mml:mi>a</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> is offloaded to another location, then the transmission energy consumption (<inline-formula id="ieqn-64"><mml:math id="mml-ieqn-64"><mml:mi>E</mml:mi><mml:msub><mml:mi>S</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula>) is expressed as:</p>
<p><disp-formula id="eqn-10"><label>(10)</label><mml:math id="mml-eqn-10" display="block"><mml:mi>E</mml:mi><mml:msub><mml:mi>S</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mi>P</mml:mi><mml:msub><mml:mi>I</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>s</mml:mi><mml:mi>e</mml:mi><mml:mi>n</mml:mi><mml:mi>d</mml:mi></mml:mrow></mml:msub><mml:mo>&#x00D7;</mml:mo><mml:mi>T</mml:mi><mml:msub><mml:mi>t</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mi>P</mml:mi><mml:msub><mml:mi>I</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>s</mml:mi><mml:mi>e</mml:mi><mml:mi>n</mml:mi><mml:mi>d</mml:mi></mml:mrow></mml:msub><mml:mo>&#x00D7;</mml:mo><mml:mo stretchy="false">(</mml:mo><mml:mi>D</mml:mi><mml:mi>a</mml:mi><mml:mi>t</mml:mi><mml:msub><mml:mi>a</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mo>&#x00D7;</mml:mo><mml:mi>h</mml:mi><mml:mo stretchy="false">)</mml:mo><mml:mrow><mml:mo>/</mml:mo></mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mi>B</mml:mi><mml:mo>&#x00D7;</mml:mo><mml:mi>&#x03B7;</mml:mi><mml:mo stretchy="false">)</mml:mo></mml:math></disp-formula></p>
<p>Therefore, the local energy consumption of all modules for <inline-formula id="ieqn-65"><mml:math id="mml-ieqn-65"><mml:mrow><mml:mi>t</mml:mi><mml:mi>a</mml:mi><mml:mi>s</mml:mi><mml:msubsup><mml:mi>k</mml:mi><mml:mi>i</mml:mi><mml:mo>&#x2032;</mml:mo></mml:msubsup></mml:mrow></mml:math></inline-formula> (<inline-formula id="ieqn-66"><mml:math id="mml-ieqn-66"><mml:mi>E</mml:mi><mml:msub><mml:mi>C</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>a</mml:mi><mml:mi>l</mml:mi><mml:mi>l</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula>) can be expressed as:</p>
<p><disp-formula id="eqn-11"><label>(11)</label><mml:math id="mml-eqn-11" display="block"><mml:mi>E</mml:mi><mml:msub><mml:mi>C</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>a</mml:mi><mml:mi>l</mml:mi><mml:mi>l</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:munderover><mml:mo>&#x2211;</mml:mo><mml:mrow><mml:mi>k</mml:mi><mml:mo>=</mml:mo><mml:mn>0</mml:mn></mml:mrow><mml:mrow><mml:mi>m</mml:mi><mml:mo>+</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:munderover><mml:mrow><mml:mo>[</mml:mo><mml:mi>&#x03B1;</mml:mi><mml:mo>&#x00D7;</mml:mo><mml:mi>E</mml:mi><mml:msub><mml:mi>C</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mo>+</mml:mo><mml:mrow><mml:mo>(</mml:mo><mml:mn>1</mml:mn><mml:mo>&#x2212;</mml:mo><mml:mi>&#x03B1;</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>&#x00D7;</mml:mo><mml:mi>E</mml:mi><mml:msub><mml:mi>S</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mo>]</mml:mo></mml:mrow></mml:math></disp-formula></p>
<p>CPMM assumes that when the power of <inline-formula id="ieqn-67"><mml:math id="mml-ieqn-67"><mml:mi>U</mml:mi><mml:msub><mml:mi>E</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> satisfies inequality <xref ref-type="disp-formula" rid="eqn-12">(12)</xref>, the battery power cannot support the current task; in that case, task execution fails.</p>
<p><disp-formula id="eqn-12"><label>(12)</label><mml:math id="mml-eqn-12" display="block"><mml:mo stretchy="false">(</mml:mo><mml:mi>p</mml:mi><mml:mi>o</mml:mi><mml:msub><mml:mi>w</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>+</mml:mo><mml:mi>E</mml:mi><mml:msub><mml:mi>C</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>a</mml:mi><mml:mi>l</mml:mi><mml:mi>l</mml:mi></mml:mrow></mml:msub><mml:mo stretchy="false">)</mml:mo><mml:mspace width="negativethinmathspace" /><mml:mrow><mml:mo>/</mml:mo></mml:mrow><mml:mspace width="negativethinmathspace" /><mml:mi>p</mml:mi><mml:mi>o</mml:mi><mml:msub><mml:mi>w</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>A</mml:mi><mml:mi>L</mml:mi><mml:mi>L</mml:mi></mml:mrow></mml:msub><mml:mo>&#x00D7;</mml:mo><mml:mn>100</mml:mn><mml:mi mathvariant="normal">&#x0025;</mml:mi><mml:mo>&#x2265;</mml:mo><mml:mi>p</mml:mi><mml:mi>o</mml:mi><mml:msub><mml:mi>w</mml:mi><mml:mrow><mml:mo movablelimits="true" form="prefix">min</mml:mo></mml:mrow></mml:msub></mml:math></disp-formula></p>
<p>Here, <inline-formula id="ieqn-68"><mml:math id="mml-ieqn-68"><mml:mi>p</mml:mi><mml:mi>o</mml:mi><mml:msub><mml:mi>w</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>A</mml:mi><mml:mi>l</mml:mi><mml:mi>l</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> is the maximum battery power that the UE can provide, and <inline-formula id="ieqn-69"><mml:math id="mml-ieqn-69"><mml:mi>p</mml:mi><mml:mi>o</mml:mi><mml:msub><mml:mi>w</mml:mi><mml:mrow><mml:mi>m</mml:mi><mml:mi>i</mml:mi><mml:mi>n</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> is a pre-defined threshold called the power threshold.</p>
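To make the energy bookkeeping concrete, here is a minimal Python sketch (not from the paper) of the module energy sum in formula (11) and the battery check of inequality (12). All names (`alphas`, `ec_locals`, `es_sends`, `pow_i`, and so on) are illustrative, and `alpha_k = 1` is read as "module k executes locally".

```python
def local_energy(alphas, ec_locals, es_sends):
    """Formula (11): EC_i_all = sum over modules k of
    alpha_k * EC_{i,k} + (1 - alpha_k) * ES_{i,k},
    where ES_{i,k} is the sending energy from formula (10)."""
    return sum(a * ec + (1 - a) * es
               for a, ec, es in zip(alphas, ec_locals, es_sends))

def battery_insufficient(pow_i, ec_i_all, pow_i_all, pow_min):
    """Inequality (12): True when the battery cannot support the task,
    i.e., (pow_i + EC_i_all) / pow_i_All * 100% >= pow_min."""
    return (pow_i + ec_i_all) / pow_i_all * 100.0 >= pow_min
```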
</sec>
<sec id="s4_5">
<label>4.5</label>
<title>The Trigger Method of Forced Offloading</title>
<p>Even after some non-key modules are offloaded, the computing capacity left on the UE may still be insufficient for the remaining modules. At this point, forced offloading is required to satisfy Constraint 2. This section describes the conditions that trigger forced offloading.</p>
<p>If <inline-formula id="ieqn-70"><mml:math id="mml-ieqn-70"><mml:mi>T</mml:mi><mml:msub><mml:mi>a</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> is processed locally, then the completion delay <inline-formula id="ieqn-71"><mml:math id="mml-ieqn-71"><mml:mi>T</mml:mi><mml:msub><mml:mi>p</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> is:</p>
<p><disp-formula id="eqn-13"><label>(13)</label><mml:math id="mml-eqn-13" display="block"><mml:mi>T</mml:mi><mml:msub><mml:mi>p</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mi>D</mml:mi><mml:mi>a</mml:mi><mml:mi>t</mml:mi><mml:msub><mml:mi>a</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo>/</mml:mo></mml:mrow><mml:mi>c</mml:mi><mml:mi>a</mml:mi><mml:msub><mml:mi>p</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>m</mml:mi></mml:mrow></mml:msub></mml:math></disp-formula></p>
<p>Then, during a unit of time, the calculation (<inline-formula id="ieqn-72"><mml:math id="mml-ieqn-72"><mml:mi>U</mml:mi><mml:msub><mml:mi>C</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula>) required by <inline-formula id="ieqn-73"><mml:math id="mml-ieqn-73"><mml:mi>T</mml:mi><mml:msub><mml:mi>a</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> can be expressed as:</p>
<p><disp-formula id="eqn-14"><label>(14)</label><mml:math id="mml-eqn-14" display="block"><mml:mi>U</mml:mi><mml:msub><mml:mi>C</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mn>1</mml:mn><mml:mrow><mml:mo>/</mml:mo></mml:mrow><mml:mi>T</mml:mi><mml:msub><mml:mi>p</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mi>c</mml:mi><mml:mi>a</mml:mi><mml:msub><mml:mi>p</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>m</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo>/</mml:mo></mml:mrow><mml:mi>D</mml:mi><mml:mi>a</mml:mi><mml:mi>t</mml:mi><mml:msub><mml:mi>a</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:msub></mml:math></disp-formula></p>
<p>According to the non-key candidate module offloading set, CPMM uses <xref ref-type="disp-formula" rid="eqn-15">formula (15)</xref> to obtain the computation required by the remaining modules per unit of time, denoted by <inline-formula id="ieqn-74"><mml:math id="mml-ieqn-74"><mml:mi>T</mml:mi><mml:mi>c</mml:mi><mml:msub><mml:mi>a</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula>.</p>
<p><disp-formula id="eqn-15"><label>(15)</label><mml:math id="mml-eqn-15" display="block"><mml:mi>T</mml:mi><mml:mi>c</mml:mi><mml:msub><mml:mi>a</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:munderover><mml:mo>&#x2211;</mml:mo><mml:mrow><mml:mi>k</mml:mi><mml:mo>=</mml:mo><mml:mn>0</mml:mn></mml:mrow><mml:mrow><mml:mi>m</mml:mi><mml:mo>+</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:munderover><mml:mrow><mml:mo>(</mml:mo><mml:mi>&#x03B1;</mml:mi><mml:mo>&#x00D7;</mml:mo><mml:mi>U</mml:mi><mml:msub><mml:mi>C</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mo>)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:munderover><mml:mo>&#x2211;</mml:mo><mml:mrow><mml:mi>k</mml:mi><mml:mo>=</mml:mo><mml:mn>0</mml:mn></mml:mrow><mml:mrow><mml:mi>m</mml:mi><mml:mo>+</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:munderover><mml:mrow><mml:mo>(</mml:mo><mml:mi>&#x03B1;</mml:mi><mml:mo>&#x00D7;</mml:mo><mml:mfrac><mml:mn>1</mml:mn><mml:mrow><mml:mi>T</mml:mi><mml:msub><mml:mi>p</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:mfrac><mml:mo>)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:munderover><mml:mo>&#x2211;</mml:mo><mml:mrow><mml:mi>k</mml:mi><mml:mo>=</mml:mo><mml:mn>0</mml:mn></mml:mrow><mml:mrow><mml:mi>m</mml:mi><mml:mo>+</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:munderover><mml:mrow><mml:mo>(</mml:mo><mml:mi>&#x03B1;</mml:mi><mml:mo>&#x00D7;</mml:mo><mml:mfrac><mml:mrow><mml:mi>c</mml:mi><mml:mi>a</mml:mi><mml:msub><mml:mi>p</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>m</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mrow><mml:mi>D</mml:mi><mml:mi>a</mml:mi><mml:mi>t</mml:mi><mml:msub><mml:mi>a</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:mfrac><mml:mo>)</mml:mo></mml:mrow></mml:math></disp-formula></p>
<p>According to <inline-formula id="ieqn-75"><mml:math id="mml-ieqn-75"><mml:mi>T</mml:mi><mml:mi>c</mml:mi><mml:msub><mml:mi>a</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> and Constraint 3, CPMM uses the following inequality <xref ref-type="disp-formula" rid="eqn-16">(16)</xref> to judge whether the current computing capacity is sufficient:</p>
<p><disp-formula id="eqn-16"><label>(16)</label><mml:math id="mml-eqn-16" display="block"><mml:mrow><mml:mo>(</mml:mo><mml:mi>T</mml:mi><mml:mi>c</mml:mi><mml:msub><mml:mi>a</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>a</mml:mi><mml:mi>l</mml:mi><mml:mi>l</mml:mi></mml:mrow></mml:msub><mml:mo>+</mml:mo><mml:mi>T</mml:mi><mml:mi>c</mml:mi><mml:msub><mml:mi>a</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>n</mml:mi><mml:mi>o</mml:mi><mml:mi>w</mml:mi></mml:mrow></mml:msub><mml:mo>)</mml:mo></mml:mrow><mml:mspace width="negativethinmathspace" /><mml:mrow><mml:mo>/</mml:mo></mml:mrow><mml:mspace width="negativethinmathspace" /><mml:mi>c</mml:mi><mml:mi>a</mml:mi><mml:msub><mml:mi>p</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>m</mml:mi></mml:mrow></mml:msub><mml:mo>&#x00D7;</mml:mo><mml:mn>100</mml:mn><mml:mi mathvariant="normal">&#x0025;</mml:mi><mml:mo>&#x2265;</mml:mo><mml:msub><mml:mi>q</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mo movablelimits="true" form="prefix">max</mml:mo></mml:mrow></mml:msub></mml:math></disp-formula></p>
<p>Here, <inline-formula id="ieqn-76"><mml:math id="mml-ieqn-76"><mml:mi>c</mml:mi><mml:mi>a</mml:mi><mml:msub><mml:mi>p</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>n</mml:mi><mml:mi>o</mml:mi><mml:mi>w</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> is the current calculation of <inline-formula id="ieqn-77"><mml:math id="mml-ieqn-77"><mml:mi>U</mml:mi><mml:msub><mml:mi>E</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula>, and <inline-formula id="ieqn-78"><mml:math id="mml-ieqn-78"><mml:msub><mml:mi>q</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>m</mml:mi><mml:mi>a</mml:mi><mml:mi>x</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> is a pre-defined threshold called the max-tolerance calculation threshold, which represents the maximum amount of calculation <inline-formula id="ieqn-79"><mml:math id="mml-ieqn-79"><mml:mi>U</mml:mi><mml:msub><mml:mi>E</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> can bear per unit of time. For example, <inline-formula id="ieqn-80"><mml:math id="mml-ieqn-80"><mml:msub><mml:mi>q</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>m</mml:mi><mml:mi>a</mml:mi><mml:mi>x</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mn>75</mml:mn><mml:mrow><mml:mi mathvariant="normal">&#x0025;</mml:mi></mml:mrow></mml:math></inline-formula> indicates that if the calculation required by the submitted task plus the current calculation reaches 75% of the total capacity, the UE cannot meet the calculation requirement of the current task; that is, even after the non-key modules are offloaded, the computing capacity is still insufficient, and key modules must be selected for offloading.</p>
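The trigger condition can be sketched in Python (an illustration under assumed names, not the paper's code): formula (15) sums the per-unit-time calculation of the modules kept locally, and inequality (16) flags when forced offloading is needed.

```python
def remaining_unit_calculation(alphas, datas, cap_i_m):
    """Formula (15): sum of alpha_k * cap_{i,m} / Data_{i,k} over the
    modules, where alpha_k = 1 means module k stays on the UE."""
    return sum(a * cap_i_m / d for a, d in zip(alphas, datas))

def needs_forced_offloading(tca_i_all, tca_i_now, cap_i_m, q_i_max):
    """Inequality (16): True when the required plus current calculation
    exceeds the max-tolerance threshold q_i_max (in percent), i.e.,
    key modules must also be selected for offloading."""
    return (tca_i_all + tca_i_now) / cap_i_m * 100.0 >= q_i_max
```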
</sec>
<sec id="s4_6">
<label>4.6</label>
<title>The Method of Non-Key Module Offloading</title>
<p>After each UE selects its candidate set, multiple UEs compete for resources to complete task offloading. To provide a better reference for multi-UE offloading under multi-constraints, CPMM first discusses single-UE task offloading in <xref ref-type="sec" rid="s4_6">Sections 4.6</xref> and <xref ref-type="sec" rid="s4_7">4.7</xref>. This section mainly addresses the method of non-key module offloading under a single UE.</p>
<p>The steps of single-UE task offloading are as follows:
<list list-type="bullet">
<list-item>
<p>Step 1: Use the selection method of the candidate offloading set to obtain the non-key candidate module set.</p></list-item>
<list-item>
<p>Step 2: Sort the set in ascending order according to the sequential relationship. The sorted candidate set is migrated to the BS.</p></list-item>
<list-item>
<p>Step 3: The BS determines the offloading position of each module in a centralized manner, according to the relationship between module maneuver time and module execution delay.</p></list-item>
</list></p>
<p>When a module is offloaded from the BS to the MCC, it must pass through different network devices. These devices have various capacities in terms of bandwidth, memory storage, processing speed, etc. CPMM assumes that these capacities can be represented by one metric, such as bandwidth, and that there is an <inline-formula id="ieqn-81"><mml:math id="mml-ieqn-81"><mml:mi>l</mml:mi></mml:math></inline-formula>-hop distance from the BS to the MCC. The bandwidth of each hop is denoted by <inline-formula id="ieqn-82"><mml:math id="mml-ieqn-82"><mml:mo fence="false" stretchy="false">{</mml:mo><mml:msub><mml:mi>B</mml:mi><mml:mrow><mml:mi>B</mml:mi><mml:mo stretchy="false">&#x2192;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mi>B</mml:mi><mml:mrow><mml:mn>1</mml:mn><mml:mo stretchy="false">&#x2192;</mml:mo><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:mo>&#x2026;</mml:mo><mml:mo>,</mml:mo><mml:msub><mml:mi>B</mml:mi><mml:mrow><mml:mi>l</mml:mi><mml:mo>&#x2212;</mml:mo><mml:mn>2</mml:mn><mml:mo stretchy="false">&#x2192;</mml:mo><mml:mi>l</mml:mi><mml:mo>&#x2212;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mi>B</mml:mi><mml:mrow><mml:mrow><mml:mi mathvariant="normal">l</mml:mi></mml:mrow><mml:mo>&#x2212;</mml:mo><mml:mn>1</mml:mn><mml:mo stretchy="false">&#x2192;</mml:mo><mml:mi>l</mml:mi></mml:mrow></mml:msub><mml:mo fence="false" stretchy="false">}</mml:mo></mml:math></inline-formula>.</p>
<p>If <inline-formula id="ieqn-83"><mml:math id="mml-ieqn-83"><mml:mi>T</mml:mi><mml:msub><mml:mi>a</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> is processed on MCC servers, then the processing delay (denoted by <inline-formula id="ieqn-84"><mml:math id="mml-ieqn-84"><mml:mi>T</mml:mi><mml:mi>E</mml:mi><mml:mi>C</mml:mi><mml:msubsup><mml:mi>D</mml:mi><mml:mrow><mml:mi>k</mml:mi></mml:mrow><mml:mrow><mml:mi>M</mml:mi><mml:mi>C</mml:mi><mml:mi>C</mml:mi></mml:mrow></mml:msubsup></mml:math></inline-formula>) can be expressed as:</p>
<p><disp-formula id="eqn-17"><label>(17)</label><mml:math id="mml-eqn-17" display="block"><mml:mi>T</mml:mi><mml:mi>E</mml:mi><mml:mi>C</mml:mi><mml:msubsup><mml:mi>D</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow><mml:mrow><mml:mi>M</mml:mi><mml:mi>C</mml:mi><mml:mi>C</mml:mi></mml:mrow></mml:msubsup><mml:mo>=</mml:mo><mml:mi>T</mml:mi><mml:msub><mml:mi>t</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mo>+</mml:mo><mml:munderover><mml:mo>&#x2211;</mml:mo><mml:mrow><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mn>0</mml:mn></mml:mrow><mml:mrow><mml:mi>l</mml:mi><mml:mo>&#x2212;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:munderover><mml:mrow><mml:mo>(</mml:mo><mml:mfrac><mml:mrow><mml:mi>D</mml:mi><mml:mi>a</mml:mi><mml:mi>t</mml:mi><mml:msub><mml:mi>a</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:msub><mml:mi>B</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo stretchy="false">&#x2192;</mml:mo><mml:mrow><mml:mi mathvariant="normal">i</mml:mi></mml:mrow><mml:mo>+</mml:mo><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:mrow></mml:msub></mml:mfrac><mml:mo>)</mml:mo></mml:mrow><mml:mo>+</mml:mo><mml:msub><mml:mi>&#x03C1;</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:msub></mml:math></disp-formula></p>
<p>Generally, the delay for a module processed on the MEC server is less than that on the MCC server [<xref ref-type="bibr" rid="ref-1">1</xref>&#x2013;<xref ref-type="bibr" rid="ref-4">4</xref>]. Given a waiting-for-offloading module <inline-formula id="ieqn-85"><mml:math id="mml-ieqn-85"><mml:mi>T</mml:mi><mml:msub><mml:mi>a</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula>, if its maneuver time satisfies inequality <xref ref-type="disp-formula" rid="eqn-18">(18)</xref>, then <inline-formula id="ieqn-86"><mml:math id="mml-ieqn-86"><mml:mi>T</mml:mi><mml:msub><mml:mi>a</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> can be offloaded to the MEC server without affecting the task execution latency.</p>
<p><disp-formula id="eqn-18"><label>(18)</label><mml:math id="mml-eqn-18" display="block"><mml:mi>T</mml:mi><mml:mi>E</mml:mi><mml:mi>C</mml:mi><mml:msubsup><mml:mi>D</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow><mml:mrow><mml:mi>M</mml:mi><mml:mi>E</mml:mi><mml:mi>C</mml:mi></mml:mrow></mml:msubsup><mml:mo>&#x2264;</mml:mo><mml:mi>M</mml:mi><mml:msub><mml:mi>T</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mo>&#x003C;</mml:mo><mml:mi>T</mml:mi><mml:mi>E</mml:mi><mml:mi>C</mml:mi><mml:msubsup><mml:mi>D</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow><mml:mrow><mml:mi>M</mml:mi><mml:mi>C</mml:mi><mml:mi>C</mml:mi></mml:mrow></mml:msubsup></mml:math></disp-formula></p>
<p>If <inline-formula id="ieqn-87"><mml:math id="mml-ieqn-87"><mml:mi>T</mml:mi><mml:msub><mml:mi>a</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula>&#x2019;s maneuver time satisfies inequality <xref ref-type="disp-formula" rid="eqn-19">(19)</xref>, it can be offloaded to the MCC without affecting the task execution latency.</p>
<p><disp-formula id="eqn-19"><label>(19)</label><mml:math id="mml-eqn-19" display="block"><mml:mi>T</mml:mi><mml:mi>E</mml:mi><mml:mi>C</mml:mi><mml:msubsup><mml:mi>D</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow><mml:mrow><mml:mi>M</mml:mi><mml:mi>C</mml:mi><mml:mi>C</mml:mi></mml:mrow></mml:msubsup><mml:mo>&#x2264;</mml:mo><mml:mi>M</mml:mi><mml:msub><mml:mi>T</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:msub></mml:math></disp-formula></p>
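One possible rendering of this placement rule in Python (hypothetical names, not the paper's code): the first function computes the MCC-side processing delay of formula (17) from the uplink delay, the per-hop relay delays, and the server processing delay rho; the second applies inequalities (18) and (19) against the module's maneuver time.

```python
def mcc_processing_delay(tt_i_k, data_i_k, hop_bandwidths, rho_i_k):
    """Formula (17): uplink transmission delay Tt_{i,k} + relay delay
    Data_{i,k}/B over each BS-to-MCC hop + MCC processing delay rho."""
    return tt_i_k + sum(data_i_k / b for b in hop_bandwidths) + rho_i_k

def choose_position(tecd_mec, tecd_mcc, mt_i_k):
    """Inequalities (18)/(19): place the module where its maneuver time
    MT_{i,k} hides the processing delay; None means neither holds."""
    if tecd_mcc <= mt_i_k:          # (19): even the MCC delay is hidden
        return "MCC"
    if tecd_mec <= mt_i_k:          # (18): only the MEC delay is hidden
        return "MEC"
    return None
```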
</sec>
<sec id="s4_7">
<label>4.7</label>
<title>The Method of Key Module Offloading</title>
<p>This section mainly discusses the method of key module offloading under the single UE.</p>
<p>CPMM uses the verification method to offload the key modules. The steps of key module offloading are as follows:
<list list-type="bullet">
<list-item>
<p>Step 1: Getting the key module candidate offloading set.</p></list-item>
</list></p>
<p>Taking <inline-formula id="ieqn-88"><mml:math id="mml-ieqn-88"><mml:mi>C</mml:mi><mml:mi>T</mml:mi><mml:msub><mml:mi>S</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> as the input, CPMM uses the selection method of candidate offloading set to obtain the candidate key module offloading set <inline-formula id="ieqn-89"><mml:math id="mml-ieqn-89"><mml:mi>C</mml:mi><mml:mi>T</mml:mi><mml:msubsup><mml:mi>S</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mi>C</mml:mi><mml:mn>3</mml:mn></mml:mrow></mml:msubsup></mml:math></inline-formula>.
<list list-type="bullet">
<list-item>
<p>Step 2: Sort <inline-formula id="ieqn-90"><mml:math id="mml-ieqn-90"><mml:mi>C</mml:mi><mml:mi>T</mml:mi><mml:msubsup><mml:mi>S</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mi>C</mml:mi><mml:mn>3</mml:mn></mml:mrow></mml:msubsup></mml:math></inline-formula> in ascending order according to the sequential relationship among modules.</p></list-item>
<list-item>
<p>Step 3: Take modules from <inline-formula id="ieqn-91"><mml:math id="mml-ieqn-91"><mml:mi>C</mml:mi><mml:mi>T</mml:mi><mml:msubsup><mml:mi>S</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mi>C</mml:mi><mml:mn>3</mml:mn></mml:mrow></mml:msubsup></mml:math></inline-formula> in turn and obtain the calculation amount of each module.</p></list-item>
</list></p>
<p>CPMM takes a module (denoted by <inline-formula id="ieqn-92"><mml:math id="mml-ieqn-92"><mml:mi>T</mml:mi><mml:msubsup><mml:mi>a</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow><mml:mrow><mml:mi>C</mml:mi><mml:mi>T</mml:mi></mml:mrow></mml:msubsup></mml:math></inline-formula>) from <inline-formula id="ieqn-93"><mml:math id="mml-ieqn-93"><mml:mi>C</mml:mi><mml:mi>T</mml:mi><mml:msubsup><mml:mi>S</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mi>C</mml:mi><mml:mn>3</mml:mn></mml:mrow></mml:msubsup></mml:math></inline-formula> and obtains its calculation amount, denoted by <inline-formula id="ieqn-94"><mml:math id="mml-ieqn-94"><mml:mi>U</mml:mi><mml:msubsup><mml:mi>C</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow><mml:mrow><mml:mi>C</mml:mi><mml:mi>T</mml:mi></mml:mrow></mml:msubsup></mml:math></inline-formula>, according to <xref ref-type="disp-formula" rid="eqn-14">formula (14)</xref>. Then, according to <xref ref-type="disp-formula" rid="eqn-15">formulas (15)</xref> and <xref ref-type="disp-formula" rid="eqn-20">(20)</xref>, it obtains the calculation amount of the offloading modules, denoted by <inline-formula id="ieqn-95"><mml:math id="mml-ieqn-95"><mml:mrow><mml:mi>T</mml:mi><mml:mi>c</mml:mi><mml:msubsup><mml:mi>a</mml:mi><mml:mi>i</mml:mi><mml:mo>&#x2032;</mml:mo></mml:msubsup></mml:mrow></mml:math></inline-formula>.</p>
<p><disp-formula id="eqn-20"><label>(20)</label><mml:math id="mml-eqn-20" display="block"><mml:mrow><mml:mi>T</mml:mi><mml:msubsup><mml:mrow><mml:mtext>ca</mml:mtext></mml:mrow><mml:mi>i</mml:mi><mml:mo>&#x2032;</mml:mo></mml:msubsup><mml:mo>=</mml:mo><mml:mi>T</mml:mi><mml:mi>c</mml:mi><mml:msub><mml:mi>a</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>+</mml:mo><mml:mi>U</mml:mi><mml:msubsup><mml:mi>C</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow><mml:mrow><mml:mi>C</mml:mi><mml:mi>T</mml:mi></mml:mrow></mml:msubsup></mml:mrow></mml:math></disp-formula>
<list list-type="bullet">
<list-item>
<p>Step 4: Verify again whether inequality <xref ref-type="disp-formula" rid="eqn-16">(16)</xref> is true.</p></list-item>
</list></p>
<p>If inequality <xref ref-type="disp-formula" rid="eqn-16">(16)</xref> is still true, it means that <inline-formula id="ieqn-96"><mml:math id="mml-ieqn-96"><mml:mi>U</mml:mi><mml:msub><mml:mi>E</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> still cannot meet the computation amount required by the task; CPMM then takes the next key module from the sorted <inline-formula id="ieqn-97"><mml:math id="mml-ieqn-97"><mml:mi>C</mml:mi><mml:mi>T</mml:mi><mml:msubsup><mml:mi>S</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mi>C</mml:mi><mml:mn>3</mml:mn></mml:mrow></mml:msubsup></mml:math></inline-formula> for offloading. Otherwise, it stops.</p>
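The verification loop of Steps 2 to 4 can be sketched as follows. This is a hedged reading, not the paper's code: it assumes each offloaded key module's per-unit calculation UC (formula (14)) accumulates into the offloaded total in the spirit of formula (20), and that the loop stops once inequality (16) no longer holds for the load kept on the UE.

```python
def offload_key_modules(sorted_cts_c3, tca_all, tca_now, cap_i_m, q_i_max):
    """Take key modules in sorted order; stop once the local load no
    longer violates inequality (16). Each entry of sorted_cts_c3 is
    (module_id, uc) with uc obtained from formula (14)."""
    tca_offloaded = 0.0
    offloaded = []
    for module_id, uc in sorted_cts_c3:
        remaining = tca_all - tca_offloaded   # load still kept on the UE
        if (remaining + tca_now) / cap_i_m * 100.0 < q_i_max:
            break                             # capacity now sufficient
        offloaded.append(module_id)
        tca_offloaded += uc                   # accumulate per formula (20)
    return offloaded
```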
</sec>
</sec>
<sec id="s5">
<label>5</label>
<title>The Multi-UE Task Offloading under Multi-Constraint</title>
<p>Since multi-UE task offloading without external resource competition can be regarded as multiple single-UE task offloadings, single-UE task offloading lays a foundation for studying multi-UE task offloading. Building on single-UE offloading, <xref ref-type="sec" rid="s5">Section 5</xref> focuses on multi-UE task offloading.</p>
<sec id="s5_1">
<label>5.1</label>
<title>Using Weighted Queuing Method to Deal with Communication Competition</title>
<p>CPMM first considers the problem of communication channel competition, assuming that the number of modules to be offloaded during <inline-formula id="ieqn-98"><mml:math id="mml-ieqn-98"><mml:mi mathvariant="normal">&#x0394;</mml:mi><mml:msub><mml:mi>T</mml:mi><mml:mrow><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> exceeds the number of channels, so that the modules must compete for channels.</p>
<p>The modules waiting to be offloaded during <inline-formula id="ieqn-99"><mml:math id="mml-ieqn-99"><mml:mi mathvariant="normal">&#x0394;</mml:mi><mml:msub><mml:mi>T</mml:mi><mml:mrow><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> come from the offloading sets submitted by different UEs (including key and non-key modules). Since key modules directly impact the task processing delay, they are given priority when competing for a channel. According to queuing theory [<xref ref-type="bibr" rid="ref-27">27</xref>], letting modules with short execution delays occupy the channel first effectively reduces the overall queueing time when there is queueing competition for resources, so CPMM also prioritizes modules with short delays. However, if key modules and modules with short delays are always given priority, some modules with long delays will very likely keep waiting, which affects the processing delay of the offloaded task. CPMM therefore adopts a weighted linear queuing method to address the resource competition problem caused by Constraint 4.</p>
<p>Assuming <inline-formula id="ieqn-100"><mml:math id="mml-ieqn-100"><mml:msub><mml:mrow><mml:mi>C</mml:mi><mml:mi>o</mml:mi><mml:mi>u</mml:mi><mml:mi>n</mml:mi><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> modules are waiting for migration to the BS during <inline-formula id="ieqn-101"><mml:math id="mml-ieqn-101"><mml:mi mathvariant="normal">&#x0394;</mml:mi><mml:msub><mml:mi>T</mml:mi><mml:mrow><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula>, CPMM uses the following <xref ref-type="disp-formula" rid="eqn-21">formula (21)</xref> to obtain every module&#x0027;s priority.</p>
<p><disp-formula id="eqn-21"><label>(21)</label><mml:math id="mml-eqn-21" display="block"><mml:mrow><mml:mi>T</mml:mi><mml:msubsup><mml:mrow><mml:mtext>ca</mml:mtext></mml:mrow><mml:mi>i</mml:mi><mml:mo>&#x2032;</mml:mo></mml:msubsup><mml:mo>=</mml:mo><mml:mi>&#x03C9;</mml:mi><mml:mn>1</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mi>I</mml:mi><mml:mi>s</mml:mi><mml:mi>K</mml:mi><mml:mi>e</mml:mi><mml:msub><mml:mi>y</mml:mi><mml:mrow><mml:mi>j</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mo>+</mml:mo><mml:mi>&#x03C9;</mml:mi><mml:mn>2</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mn>1</mml:mn><mml:mo>/</mml:mo><mml:mi>T</mml:mi><mml:msub><mml:mi>t</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mo>+</mml:mo><mml:mi>r</mml:mi><mml:mi>o</mml:mi><mml:mi>u</mml:mi><mml:mi>n</mml:mi><mml:mi>d</mml:mi><mml:mo>&#x00D7;</mml:mo><mml:mi>&#x03D5;</mml:mi></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow></mml:math></disp-formula></p>
<p>Here, <inline-formula id="ieqn-102"><mml:math id="mml-ieqn-102"><mml:mi>&#x03D5;</mml:mi></mml:math></inline-formula> is a constant, and <inline-formula id="ieqn-103"><mml:math id="mml-ieqn-103"><mml:mrow><mml:mi>r</mml:mi><mml:mi>o</mml:mi><mml:mi>u</mml:mi><mml:mi>n</mml:mi><mml:mi>d</mml:mi></mml:mrow></mml:math></inline-formula> is the number of offloading intervals the module has waited; initially <inline-formula id="ieqn-104"><mml:math id="mml-ieqn-104"><mml:mrow><mml:mi>r</mml:mi><mml:mi>o</mml:mi><mml:mi>u</mml:mi><mml:mi>n</mml:mi><mml:mi>d</mml:mi></mml:mrow><mml:mo>=</mml:mo><mml:mn>0</mml:mn></mml:math></inline-formula>. <inline-formula id="ieqn-105"><mml:math id="mml-ieqn-105"><mml:msub><mml:mi>&#x03C9;</mml:mi><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub></mml:math></inline-formula> and <inline-formula id="ieqn-106"><mml:math id="mml-ieqn-106"><mml:msub><mml:mi>&#x03C9;</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub></mml:math></inline-formula> are weights with <inline-formula id="ieqn-107"><mml:math id="mml-ieqn-107"><mml:msub><mml:mi>&#x03C9;</mml:mi><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>&#x03C9;</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:math></inline-formula>. When competing for communication channels, <inline-formula id="ieqn-108"><mml:math id="mml-ieqn-108"><mml:msub><mml:mi>&#x03C9;</mml:mi><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>&#x003C;</mml:mo><mml:msub><mml:mi>&#x03C9;</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub></mml:math></inline-formula> means that a higher weight is assigned to the number of competing rounds; in this case, the modules that did not obtain a communication channel in the last round have a higher probability of obtaining communication resources in this round. 
<inline-formula id="ieqn-109"><mml:math id="mml-ieqn-109"><mml:msub><mml:mi>&#x03C9;</mml:mi><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>&#x003E;</mml:mo><mml:msub><mml:mi>&#x03C9;</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub></mml:math></inline-formula> means the key modules are given greater weight, so key modules are still prioritized in the next round of channel acquisition. These two weight values can be dynamically adjusted according to the actual situation so that specific types of modules can obtain communication channels.</p>
<p>Assuming the communication channel is in contention, not only the modules waiting for offloading during <inline-formula id="ieqn-110"><mml:math id="mml-ieqn-110"><mml:mi mathvariant="normal">&#x0394;</mml:mi><mml:msub><mml:mi>T</mml:mi><mml:mrow><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> but also the remaining modules that did not obtain communication channels during <inline-formula id="ieqn-111"><mml:math id="mml-ieqn-111"><mml:mi mathvariant="normal">&#x0394;</mml:mi><mml:msub><mml:mi>T</mml:mi><mml:mrow><mml:mi>j</mml:mi><mml:mo>&#x2212;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub></mml:math></inline-formula> (denoted by <inline-formula id="ieqn-112"><mml:math id="mml-ieqn-112"><mml:msub><mml:mrow><mml:mi mathvariant="normal">R</mml:mi><mml:mi mathvariant="normal">O</mml:mi><mml:mi mathvariant="normal">m</mml:mi></mml:mrow><mml:mrow><mml:mi>j</mml:mi><mml:mo>&#x2212;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub></mml:math></inline-formula>) participate in the competition. These modules use <xref ref-type="disp-formula" rid="eqn-21">formula (21)</xref> to obtain their priority.</p>
<p>The priority is computed with <inline-formula id="ieqn-113"><mml:math id="mml-ieqn-113"><mml:mrow><mml:mi>r</mml:mi><mml:mi>o</mml:mi><mml:mi>u</mml:mi><mml:mi>n</mml:mi><mml:mi>d</mml:mi></mml:mrow><mml:mo>=</mml:mo><mml:mn>0</mml:mn></mml:math></inline-formula> for modules newly waiting for offloading, and with <inline-formula id="ieqn-114"><mml:math id="mml-ieqn-114"><mml:mrow><mml:mi>r</mml:mi><mml:mi>o</mml:mi><mml:mi>u</mml:mi><mml:mi>n</mml:mi><mml:mi>d</mml:mi></mml:mrow><mml:mo>=</mml:mo><mml:mrow><mml:mi>r</mml:mi><mml:mi>o</mml:mi><mml:mi>u</mml:mi><mml:mi>n</mml:mi><mml:mi>d</mml:mi></mml:mrow><mml:mo>+</mml:mo><mml:mn>1</mml:mn></mml:math></inline-formula> for modules from <inline-formula id="ieqn-115"><mml:math id="mml-ieqn-115"><mml:msub><mml:mrow><mml:mi mathvariant="normal">R</mml:mi><mml:mi mathvariant="normal">O</mml:mi><mml:mi mathvariant="normal">m</mml:mi></mml:mrow><mml:mrow><mml:mi>j</mml:mi><mml:mo>&#x2212;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub></mml:math></inline-formula>. After obtaining every module&#x2019;s priority, CPMM arranges the competing modules in descending order of priority and selects the first h modules to obtain a channel. The remaining modules cannot obtain a communication channel; they participate in the competition in the next time interval (i.e., <inline-formula id="ieqn-116"><mml:math id="mml-ieqn-116"><mml:mi mathvariant="normal">&#x0394;</mml:mi><mml:msub><mml:mi>T</mml:mi><mml:mrow><mml:mi>j</mml:mi><mml:mo>+</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub></mml:math></inline-formula>).</p>
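The weighted queuing step can be illustrated in Python (a sketch under assumed names; the default weights and phi are hypothetical, and the left-hand side of formula (21) is read as the module's priority).

```python
def module_priority(is_key, tt, rounds, w1, w2, phi):
    """Formula (21): priority = w1 * IsKey + w2 * (1/Tt + round * phi),
    with w1 + w2 = 1."""
    return w1 * is_key + w2 * (1.0 / tt + rounds * phi)

def allocate_channels(modules, h, w1=0.4, w2=0.6, phi=0.5):
    """Each module is (module_id, is_key, tt, rounds). The first h modules
    in descending priority obtain a channel; the rest wait for the next
    interval with their round counter incremented."""
    ranked = sorted(
        modules,
        key=lambda m: module_priority(m[1], m[2], m[3], w1, w2, phi),
        reverse=True)
    winners = [m[0] for m in ranked[:h]]
    losers = [(m[0], m[1], m[2], m[3] + 1) for m in ranked[h:]]
    return winners, losers
```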
</sec>
<sec id="s5_2">
<label>5.2</label>
<title>Using the Branching Method to Deal with MEC Competition</title>
<p>Next, consider the MEC resource competition (i.e., Constraint 5).</p>
<p>If MEC resources are sufficient, key and non-key modules are offloaded by the key (or non-key) module offloading method. When MEC resources are contended, CPMM adopts a branch decision method to address the MEC server competition problem.</p>
<p>Since key module offloading directly impacts the task completion latency, key modules are prioritized for MEC servers. According to the relationship between the number of concurrent modules to be offloaded and the number of services the MEC can provide in each time interval, MEC resource competition presents the following cases when MEC resources are insufficient:</p>
<p>[1] The number of MEC services is greater than the number of key modules to be offloaded</p>
<p>In this case, MEC can meet the requirements of key module offloading, but the requirements of non-key modules may not be met. To reduce the task completion delay, CPMM prioritizes offloading key modules, and non-key modules compete for the remaining MEC service resources. In other words, under multi-UE, key modules still adopt the key module offloading method, but the non-key module method needs to be revised. The improvement is as follows.</p>
<p>According to the timing relationship of task completion, CPMM first selects the non-key modules from the waiting-for-offloading modules during <inline-formula id="ieqn-117"><mml:math id="mml-ieqn-117"><mml:mi mathvariant="normal">&#x0394;</mml:mi><mml:msub><mml:mi>T</mml:mi><mml:mrow><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> and then compares the number of remaining MEC services with the number of non-key modules. If the number of non-key modules is less than the remaining number of services, the remaining services can still serve all non-key modules, and CPMM offloads them with the non-key module method. If the number of non-key modules is larger than the remaining number of services, the remaining services cannot serve them all. In this case, CPMM first arranges the non-key modules in ascending order of maneuver time and then takes modules from the sorted set. If a selected non-key module&#x2019;s maneuver time satisfies inequality <xref ref-type="disp-formula" rid="eqn-18">(18)</xref>, this module obtains a MEC server, until all the MEC service resources are occupied. The remaining non-key modules are offloaded to MCC.</p>
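The non-key branch of case [1] can be sketched as follows. This is an illustrative sketch under assumptions: the `NonKeyModule` class and function name `assign_non_key` are placeholders, and the check of inequality (18), which is not reproduced here, is passed in as a predicate.

```python
from dataclasses import dataclass

@dataclass
class NonKeyModule:
    """Illustrative stand-in for a non-key module."""
    name: str
    maneuver_time: float

def assign_non_key(non_key, remaining_slots, satisfies_ineq18):
    """Case [1]: distribute non-key modules over remaining MEC services.

    If the remaining slots suffice, all non-key modules go to MEC.
    Otherwise, modules are taken in ascending order of maneuver time
    and granted a MEC slot only while slots remain and inequality (18)
    (supplied here as a predicate) holds; the rest go to MCC."""
    if len(non_key) <= remaining_slots:
        return list(non_key), []  # remaining services meet all demands
    ordered = sorted(non_key, key=lambda m: m.maneuver_time)
    to_mec, to_mcc = [], []
    for m in ordered:
        if remaining_slots > 0 and satisfies_ineq18(m):
            to_mec.append(m)
            remaining_slots -= 1
        else:
            to_mcc.append(m)
    return to_mec, to_mcc
```

For instance, with three non-key modules and two remaining slots, the two modules with the shortest maneuver times that satisfy the inequality obtain MEC service, and the third is offloaded to MCC.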
<p>[2] The number of MEC services is less than the number of key modules to be offloaded</p>
<p>In this case, MEC servers cannot meet the requirements of key or non-key module offloading. To reduce the task completion delay, CPMM prioritizes offloading the key modules; the remaining key modules and all non-key modules are offloaded to MCC servers. So both the non-key module method and the key module method need to be revised.</p>
<p>For the non-key module method, key modules obtain all MEC resources, and MCC servers process all non-key modules. For the key module method, CPMM allows some key modules to obtain MEC services while others are processed on MCC servers. Following queuing theory, CPMM prioritizes modules with short delays for MEC servers&#x2019; resources: it arranges these modules in ascending order of processing delay and takes modules from the sorted set until all the MEC service resources are occupied. The remaining key modules are offloaded to MCC servers.</p>
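The key-module branch of case [2] reduces to a shortest-delay-first split, which can be sketched as below. The `KeyModule` class and function name `assign_key` are illustrative placeholders, not the paper's actual code.

```python
from dataclasses import dataclass

@dataclass
class KeyModule:
    """Illustrative stand-in for a key module."""
    name: str
    processing_delay: float

def assign_key(key_modules, mec_slots):
    """Case [2]: fewer MEC services than key modules.

    Key modules are sorted in ascending order of processing delay;
    the shortest-delay modules occupy the MEC slots and the remainder
    are offloaded to MCC (in this case, all non-key modules also go
    to MCC)."""
    ordered = sorted(key_modules, key=lambda m: m.processing_delay)
    return ordered[:mec_slots], ordered[mec_slots:]
```

With three key modules and two MEC slots, for example, the two modules with the shortest processing delays occupy MEC and the slowest is offloaded to MCC.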
</sec>
</sec>
<sec id="s6">
<label>6</label>
<title>Experimental Simulation and Analysis</title>
<sec id="s6_1">
<label>6.1</label>
<title>Experimental Description</title>
<p>This paper designs and implements a simulator in VC&#x002B;&#x002B; to evaluate CPMM and compares CPMM&#x2019;s performance under multiple conditions with other similar offloading methods: the MEC offloading method (named MEC), the MCC offloading method (named MCC), and the no-offloading method (named LOCAL). MEC and MCC select the offloading modules in the same manner as CPMM but differ in the offloading location: MEC only selects MEC servers, and MCC only selects MCC servers. LOCAL does not offload any modules, and all modules are processed locally.</p>
<p>The simulator models the BS, UE, MEC servers, and the MCC. UE, BS, MEC servers, and the MCC center are organized as layered heterogeneous networks, reflecting the real world. A BS can perform services and deploys a MEC server with lightweight resources to provide services for UE. Each MEC server belongs to a BS and has a fixed number of VMs that provide execution resources for offloaded modules. The MCC center has enough resources to support task offloading. UE generates tasks in a random manner, and tasks can be processed concurrently on a UE at the same time.</p>
<p>To verify the performance of CPMM, this paper compares the processing delay, the failure rate of task execution, the energy consumption, and the computation consumption. These metrics are closely related to the calculation threshold, the min power threshold, the amount of UE, the min-calculation threshold, and the max-tolerance calculation threshold. Consequently, in each group of performance comparisons, this paper sets the max-tolerance threshold to 70%, the min power threshold from 5% to 50%, the calculation threshold from 2 to 5, and the amount of UE from 100 to 400. Meanwhile, this paper also compares the delay and the failure rate under different max-tolerance calculation thresholds. In addition, according to the operation mode of CPMM, offloading a task first requires classifying the modules that make up the task and then applying different offloading methods according to the module type. Since running CPMM and partitioning the task into key and non-key modules consume local resources, the simulation experiment also takes the cost of CPMM operation as an important factor to objectively demonstrate the performance of CPMM.</p>
</sec>
<sec id="s6_2">
<label>6.2</label>
<title>Comparison of the Task Execution Failure Rate</title>
<p>This experiment shows that CPMM can ensure the success rate of task execution. <xref ref-type="fig" rid="fig-3">Fig. 3</xref> demonstrates the failure rate of task execution of the different methods under different parameters. It shows that the failure rate increases as the amount of UE, the calculation threshold, and the power threshold increase. <xref ref-type="fig" rid="fig-3">Fig. 3</xref> also shows that the failure rate of CPMM is lower than those of LOCAL and MEC. MCC has the lowest failure rate; MEC and LOCAL have the worst. Compared to LOCAL, CPMM reduces the failure rate by around 3.9% on average when the power threshold is 15%, by about 4.4% on average when the calculation threshold is 4, and by about 4% on average when the amount of UE is 300.</p>
<fig id="fig-3">
<label>Figure 3</label>
<caption>
<title>The failure rate of task execution</title>
</caption>
<graphic mimetype="image" mime-subtype="tif" xlink:href="CMC_37483-fig-3a.tif"/>
<graphic mimetype="image" mime-subtype="tif" xlink:href="CMC_37483-fig-3b.tif"/>
</fig>
<p>The above simulation shows that every method has a certain probability of task failure. This is because UE battery power follows a Poisson distribution in the experiment: some UEs have low battery power in the initial state and cannot meet the task requirements, leading to task failure. This phenomenon also exists in the real world. In this experiment, MEC and LOCAL have the highest failure rates, exceeding 30% of the total tasks at the maximum. LOCAL does not process tasks in a task-offloading manner, as CPMM and MCC do, so many of its tasks cannot be processed normally due to energy exhaustion, resulting in a higher failure rate. Although MEC also adopts task offloading, many modules cannot obtain MEC resources due to the limited MEC service resources, resulting in many offloading failures. MCC has a sufficient resource supply and does not need to compete for execution resources, so its execution success rate is the highest. Although CPMM uses the remote cloud to perform offloaded tasks, it first competes for MEC service resources and then cooperates with MCC to offload; therefore, CPMM&#x2019;s failure rate is slightly higher than MCC&#x2019;s but lower than MEC&#x2019;s and LOCAL&#x2019;s.</p>
<p>The task failure rate is closely related to the power threshold, the calculation threshold, and the UE amount. When the amount of UE increases, more tasks compete for limited resources, so the task failure rate increases with the UE number. According to the definition of the power threshold, the higher the power threshold, the less power the UE can use to process tasks, so a higher power threshold yields a higher task failure rate. Recall that the calculation threshold is mainly used to select which modules to offload: the higher the threshold, the fewer modules qualify for offloading, so more modules must be executed locally. This undoubtedly exacerbates local energy consumption, leaving many modules unable to be processed due to insufficient power. Therefore, the task failure rate increases with the calculation threshold.</p>
</sec>
<sec id="s6_3">
<label>6.3</label>
<title>Comparison of the Task Completion Delay</title>
<p>This experiment demonstrates that CPMM can effectively reduce task completion delay. It uses the ratio of task completion delay, denoted by <inline-formula id="ieqn-118"><mml:math id="mml-ieqn-118"><mml:mi>C</mml:mi><mml:msub><mml:mi>D</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mi>M</mml:mi><mml:msub><mml:mi>d</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo>/</mml:mo></mml:mrow><mml:mi>L</mml:mi><mml:mi>o</mml:mi><mml:msub><mml:mi>c</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula>, to compare the methods. <inline-formula id="ieqn-119"><mml:math id="mml-ieqn-119"><mml:mi>L</mml:mi><mml:mi>o</mml:mi><mml:msub><mml:mi>c</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> indicates the delay of the execution module when <inline-formula id="ieqn-120"><mml:math id="mml-ieqn-120"><mml:mi>U</mml:mi><mml:msub><mml:mi>E</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> adopts LOCAL. <inline-formula id="ieqn-121"><mml:math id="mml-ieqn-121"><mml:mi>M</mml:mi><mml:msub><mml:mi>d</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> represents the delay of the execution module when CPMM, MEC, or MCC is adopted, respectively. When computing the processing delay, if a module fails to be processed due to resource competition, its delay is recorded as 0.</p>
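The delay-ratio metric above can be computed as in this small sketch; the function name `delay_ratio` and the argument names are illustrative, and the zero-delay convention for failed modules follows the text.

```python
def delay_ratio(module_delay, local_delay):
    """CD_i = Md_{i,j} / Loc_i: module delay under CPMM/MEC/MCC
    relative to fully local execution (LOCAL).

    A module that fails due to resource competition has its delay
    recorded as 0, so its ratio is 0."""
    if module_delay == 0:  # failed module, recorded as 0 per the text
        return 0.0
    return module_delay / local_delay
```

For example, a module that takes 30 ms when offloaded versus 100 ms locally has a delay ratio of 0.3, while a module that failed to execute contributes 0 to the statistics.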
<p><xref ref-type="fig" rid="fig-4">Fig. 4</xref> shows the delay ratio of the different methods under different parameters. The delay ratio increases as the number of UE and the power threshold increase, and decreases as the calculation threshold increases. <xref ref-type="fig" rid="fig-4">Fig. 4</xref> also shows that MEC has the smallest delay ratio; the delay ratio of CPMM is lower than MCC&#x2019;s but higher than MEC&#x2019;s, and LOCAL has the worst delay ratio. Compared with MEC, CPMM&#x2019;s delay ratio is around 4.6% higher on average when the power threshold is 15%, about 3.6% higher on average when the calculation threshold is 4, and about 4.6% higher on average when the amount of UE is 300. Compared with MCC, CPMM reduces the delay ratio by around 5.6% on average when the power threshold is 15%, by about 5.2% on average when the calculation threshold is 4, and by about 4.7% on average when the amount of UE is 300.</p>
<fig id="fig-4">
<label>Figure 4</label>
<caption>
<title>The ratio of task completion delay</title>
</caption>
<graphic mimetype="image" mime-subtype="tif" xlink:href="CMC_37483-fig-4.tif"/>
</fig>
<p>The reasons are as follows: (1) since MEC, MCC, and CPMM all adopt task offloading, their delay is less than LOCAL&#x2019;s. (2) Since the delay of offloading a module to the remote cloud is greater than that of MEC, the delay of MEC is less than CPMM&#x2019;s and MCC&#x2019;s. According to the experiment on the failure rate (<xref ref-type="fig" rid="fig-3">Fig. 3</xref>), MEC has a high task failure rate, and this experiment sets the execution delay to 0 for modules that are not executed. So MEC has the lowest delay, but its lower delay is obtained at the cost of a high failure rate. (3) CPMM offloads in cooperation mode: unlike MCC, not all modules are offloaded to MCC, so its execution delay is less than MCC&#x2019;s.</p>
<p>The delay is closely related to the UE number, the calculation threshold, and the power threshold. The execution delay increases as the number of UE increases, for the reasons analyzed above. Since the calculation threshold directly determines the number of modules to be offloaded, the delay ratio increases as the calculation threshold decreases. The power threshold directly impacts whether a module can be processed: the higher the power threshold, the lower the battery power available to the UE, so many modules cannot be executed due to insufficient battery power, and the delay decreases as the power threshold increases. This can be verified by <xref ref-type="fig" rid="fig-4">Fig. 4D</xref>. However, <xref ref-type="fig" rid="fig-4">Figs. 4B</xref> and <xref ref-type="fig" rid="fig-4">4C</xref> show that the delay ratio increases with the power threshold. This is because when the power threshold increases, the modules of CPMM and MCC do not depend completely on local execution, while LOCAL executes everything locally and is very sensitive to changes in the power threshold. When the available power becomes low, many modules cannot be processed due to insufficient UE power, so LOCAL&#x2019;s overall processing delay changes significantly, and by the definition of the delay ratio, the ratio shows an upward trend.</p>
</sec>
<sec id="s6_4">
<label>6.4</label>
<title>Comparison of the Energy Consumption</title>
<p>This experiment demonstrates that CPMM can reduce energy consumption. Since MEC and MCC adopt the same offloading mode as CPMM, this experiment only compares CPMM and LOCAL. <xref ref-type="fig" rid="fig-5">Fig. 5</xref> shows the energy consumption of the two methods under different parameters: the energy consumption increases as the amount of UE and the calculation threshold increase, and decreases as the power threshold increases. CPMM consumes less than LOCAL. On average, CPMM&#x2019;s energy consumption is about 59% of LOCAL&#x2019;s when the power threshold is 15%, about 65.4% when the calculation threshold is 4, and about 63% when the amount of UE is 300.</p>
<fig id="fig-5">
<label>Figure 5</label>
<caption>
<title>The energy consumption</title>
</caption>
<graphic mimetype="image" mime-subtype="tif" xlink:href="CMC_37483-fig-5.tif"/>
</fig>
<p>According to the above experiment, the energy consumption of CPMM is better than LOCAL&#x2019;s. This is because, under the same conditions, CPMM offloads some non-key modules according to the relationships between modules, reducing the energy consumption of UE; LOCAL does not offload tasks, so the energy consumed locally is higher. Meanwhile, <xref ref-type="fig" rid="fig-5">Fig. 5</xref> also shows that an increase in the number of UE increases the number of tasks, so the total energy consumption increases. Recalling the definition of the power threshold, a higher power threshold means the UE has less power to process tasks, which causes many modules to fail; therefore, the energy consumed decreases as the power threshold increases. According to the definition of the calculation threshold, as the threshold increases, fewer modules are offloaded and more modules are executed locally, so the energy consumed locally also increases.</p>
</sec>
<sec id="s6_5">
<label>6.5</label>
<title>Comparison of the Computation Consumption</title>
<p><xref ref-type="fig" rid="fig-6">Fig. 6</xref> shows the computation consumption of the different methods under different parameters: the computation consumption increases as the calculation threshold and the amount of UE increase, and decreases as the power threshold increases. Compared to LOCAL, CPMM&#x2019;s computation consumption is about 55% of LOCAL&#x2019;s when the power threshold is 15%, about 68.6% when the calculation threshold is 4, and about 59% when the amount of UE is 300.</p>
<fig id="fig-6">
<label>Figure 6</label>
<caption>
<title>The computation consumption</title>
</caption>
<graphic mimetype="image" mime-subtype="tif" xlink:href="CMC_37483-fig-6.tif"/>
</fig>
<p>Rather than randomly selecting modules to offload, CPMM selects key (and non-key) modules with larger computation loads. In LOCAL, all modules are processed locally, so LOCAL has the largest computation consumption. In addition, the number of tasks increases with the number of UE. Since the calculation threshold determines the scale of offloaded modules, a higher threshold means more modules are processed locally, increasing the local computation. According to the definition of the power threshold, a higher threshold indicates lower available UE power, which means more modules cannot be executed, so the overall computation consumption decreases.</p>
</sec>
<sec id="s6_6">
<label>6.6</label>
<title>Comparison under Different Max-Calculation Threshold</title>
<p>In the above experiments, the max-tolerance calculation threshold is 70%. According to its definition, the compared performance is also closely related to this threshold. This experiment sets <inline-formula id="ieqn-122"><mml:math id="mml-ieqn-122"><mml:mi>&#x03B4;</mml:mi><mml:mo>=</mml:mo><mml:mn>2</mml:mn><mml:mo>,</mml:mo><mml:mi>p</mml:mi><mml:mi>o</mml:mi><mml:msub><mml:mi>w</mml:mi><mml:mrow><mml:mi>m</mml:mi><mml:mi>i</mml:mi><mml:mi>n</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mn>15</mml:mn><mml:mrow><mml:mi mathvariant="normal">&#x0025;</mml:mi></mml:mrow></mml:math></inline-formula> and the UE number to 300, mainly to verify that CPMM also performs well under different max-tolerance thresholds.</p>
<p><xref ref-type="fig" rid="fig-7">Fig. 7A</xref> shows the ratio of task completion delay under different max-tolerance thresholds: the delay ratio increases as the max-tolerance threshold decreases. According to the definition of the max-tolerance threshold, the UE&#x2019;s available computing resources increase with the threshold, so the probability of a UE triggering forced offloading decreases. In other words, when the max-tolerance threshold increases, CPMM tends to use active offloading to form the candidate offloading set. Active offloading mainly selects non-key modules and rarely selects key modules; since key modules directly impact the task processing delay, fewer key modules are selected for offloading and the overall delay is reduced. Under the parameters this experiment sets, MEC has the minimum delay ratio, followed by CPMM; MCC&#x2019;s is larger than CPMM&#x2019;s, and LOCAL has the maximum delay ratio. MEC achieves lower task execution delay through a higher task execution failure rate, while MCC increases task execution latency due to longer data migration delay; CPMM completes the task offloading cooperatively, so it lies between MCC and MEC. Compared to MEC, CPMM&#x2019;s delay ratio is around 5% higher on average; compared to MCC, CPMM reduces the delay ratio by around 6% on average.</p>
<fig id="fig-7">
<label>Figure 7</label>
<caption>
<title>The performance under different max-calculation threshold</title>
</caption>
<graphic mimetype="image" mime-subtype="tif" xlink:href="CMC_37483-fig-7.tif"/>
</fig>
<p><xref ref-type="fig" rid="fig-7">Fig. 7B</xref> demonstrates the energy consumption under different max-tolerance thresholds. According to <xref ref-type="fig" rid="fig-7">Fig. 7B</xref>, energy consumption increases with the max-tolerance threshold. This is because the computing power of the UE increases with the threshold, which means more modules execute locally and more energy is consumed locally. Under the experimental parameters we set, CPMM is better than LOCAL: since CPMM is a partial task offloading strategy in a cooperative mode, offloading reduces local energy consumption. The average energy consumption of CPMM is about 40% of that of LOCAL.</p>
<p><xref ref-type="fig" rid="fig-7">Fig. 7C</xref> shows that the computation consumption increases with the max-tolerance threshold. This is because more modules are processed locally as the threshold increases, so the total local computation also increases. Since CPMM adopts offloading to reduce local computation while LOCAL does not offload, CPMM is better than LOCAL. Under the experimental parameters, the computation consumption of CPMM accounts for only about 37% of that of LOCAL.</p>
</sec>
</sec>
<sec id="s7">
<label>7</label>
<title>Conclusion</title>
<p>This paper proposes a cooperation-mode partial task offloading method (CPMM) to deal with task offloading for multi-UE under multi-constraints. CPMM addresses tasks whose constituent modules have parallel dependencies, aiming to reduce energy and computation consumption while meeting the task completion delay as much as possible. It considers many constraint conditions, including UE battery power, task execution, computing power, the communication channel, and MEC service resources. CPMM first develops the method for a single UE to select candidate offloading module sets under multi-constraints, and then discusses the task offloading of multi-UE under multi-constraints. For single-UE offloading, it uses a critical path algorithm to divide modules into key and non-key modules, proposes selection methods for both, and formulates how MEC and MCC cooperate in offloading. Building on the single-UE method and the multi-UE constraints, the weighted queuing method and the branch processing method are used to offload multi-UE tasks under multi-constraints. Extensive experiments show that CPMM performs better than other similar methods.</p>
<p>However, CPMM also has limitations. For example, it focuses on computing-type task offloading and pays less attention to data-resource-access-type task offloading. In the future, we plan to study data-resource-access-type task offloading and edge caching strategies to reduce the completion delay of such tasks through caching.</p>
</sec>
</body>
<back>
<ack><p>This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.</p></ack>
<sec><title>Funding Statement</title>
<p>The authors received no specific funding for this study.</p></sec>
<sec><title>Author Contributions</title>
<p>The authors confirm contribution to the paper as follows: study conception and design: Shengyao Sun and Jiwei Zhang; data collection: Ying Du, Jiwei Zhang; analysis and interpretation of results: Shengyao Sun, Jiajun Chen, Xuan Zhang; draft manuscript preparation: Yiyi Xu. All authors reviewed the results and approved the final version of the manuscript.</p></sec>
<sec sec-type="data-availability"><title>Availability of Data and Materials</title>
<p>Following edge task offloading in the real world, we used the C++ language to simulate and validate our proposed method. The data mainly come from laboratory simulations rather than real-life datasets, so we feel there is no need to present these simulated data.</p>
<sec sec-type="COI-statement"><title>Conflicts of Interest</title>
<p>The authors declare that they have no conflicts of interest to report regarding the present study.</p></sec>
<ref-list content-type="authoryear">
<title>References</title>
<ref id="ref-1"><label>[1]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>K. Y.</given-names> <surname>Zhang</surname></string-name>, <string-name><given-names>X. L.</given-names> <surname>Gui</surname></string-name>, <string-name><given-names>D. W.</given-names> <surname>Ren</surname></string-name>, <string-name><given-names>J.</given-names> <surname>Li</surname></string-name>, <string-name><given-names>J.</given-names> <surname>Wu</surname></string-name> <etal>et al.</etal></person-group><italic>,</italic> &#x201C;<article-title>Survey on computation offloading and content caching in mobile edge networks</article-title>,&#x201D; <source>Journal of Software</source>, vol. <volume>30</volume>, no. <issue>8</issue>, pp. <fpage>2491</fpage>&#x2013;<lpage>2516</lpage>, <year>2019</year>.</mixed-citation></ref>
<ref id="ref-2"><label>[2]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>Y. Z.</given-names> <surname>Zhou</surname></string-name> and <string-name><given-names>D.</given-names> <surname>Zhang</surname></string-name></person-group>, &#x201C;<article-title>Near-end cloud computing: Opportunities and challenges in the post-cloud computing era</article-title>,&#x201D; <source>Chinese Journal of Computers</source>, vol. <volume>42</volume>, no. <issue>4</issue>, pp. <fpage>677</fpage>&#x2013;<lpage>700</lpage>, <year>2019</year>.</mixed-citation></ref>
<ref id="ref-3"><label>[3]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>Z. Y.</given-names> <surname>Li</surname></string-name>, <string-name><given-names>Q.</given-names> <surname>Wang</surname></string-name>, <string-name><given-names>Y. F.</given-names> <surname>Chen</surname></string-name> and <string-name><given-names>R. F.</given-names> <surname>Li</surname></string-name></person-group>, &#x201C;<article-title>A survey on task offloading research in vehicular edge computing</article-title>,&#x201D; <source>Chinese Journal of Computers</source>, vol. <volume>44</volume>, no. <issue>5</issue>, pp. <fpage>963</fpage>&#x2013;<lpage>982</lpage>, <year>2021</year>.</mixed-citation></ref>
<ref id="ref-4"><label>[4]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>I.</given-names> <surname>Akhirul</surname></string-name>, <string-name><given-names>D.</given-names> <surname>Arindam</surname></string-name>, <string-name><given-names>G.</given-names> <surname>Manojit</surname></string-name> and <string-name><given-names>C.</given-names> <surname>Suchetana</surname></string-name></person-group>, &#x201C;<article-title>A survey on task offloading in multi-access edge computing</article-title>,&#x201D; <source>Journal of Systems Architecture</source>, vol. <volume>118</volume>, no. <issue>1</issue>, pp. <fpage>1</fpage>&#x2013;<lpage>16</lpage>, <year>2021</year>.</mixed-citation></ref>
<ref id="ref-5"><label>[5]</label><mixed-citation publication-type="other"><collab>Global App Store downloads reach 8.6 billion in the first quarter of 2022</collab>. [Online]. Available: <ext-link ext-link-type="uri" xlink:href="https://baijiahao.baidu.com/s?id=1731224939845177595&#x0026;wfr=spider&#x0026;for=pc">https://baijiahao.baidu.com/s?id=1731224939845177595&#x0026;wfr=spider&#x0026;for=pc</ext-link></mixed-citation></ref>
<ref id="ref-6"><label>[6]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>J.</given-names> <surname>Liu</surname></string-name>, <string-name><given-names>Y.</given-names> <surname>Mao</surname></string-name>, <string-name><given-names>J.</given-names> <surname>Zhang</surname></string-name> and <string-name><given-names>K. B.</given-names> <surname>Letaie</surname></string-name></person-group>, &#x201C;<article-title>Delay-optimal computation task scheduling for mobile-edge computing systems</article-title>,&#x201D; in <conf-name>2016 IEEE Int. Symp. on Information Theory (ISIT)</conf-name>, <publisher-loc>Barcelona, Spain</publisher-loc>, pp. <fpage>1451</fpage>&#x2013;<lpage>1455</lpage>, <year>2016</year>. </mixed-citation></ref>
<ref id="ref-7"><label>[7]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>Y. Y.</given-names> <surname>Mao</surname></string-name>, <string-name><given-names>J.</given-names> <surname>Zhang</surname></string-name> and <string-name><given-names>B. L.</given-names> <surname>Khaled</surname></string-name></person-group>, &#x201C;<article-title>Dynamic computation offloading for mobile-edge computing with energy harvesting devices</article-title>,&#x201D; <source>IEEE Journal on Selected Areas in Communications</source>, vol. <volume>34</volume>, no. <issue>12</issue>, pp. <fpage>3590</fpage>&#x2013;<lpage>3605</lpage>, <year>2016</year>.</mixed-citation></ref>
<ref id="ref-8"><label>[8]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>S.</given-names> <surname>Ulukus</surname></string-name>, <string-name><given-names>A.</given-names> <surname>Yener</surname></string-name>, <string-name><given-names>E.</given-names> <surname>Erkip</surname></string-name>, <string-name><given-names>O.</given-names> <surname>Simeone</surname></string-name> and <string-name><given-names>M.</given-names> <surname>Zorzi</surname></string-name></person-group>, &#x201C;<article-title>Energy harvesting wireless communications: A review of recent advances</article-title>,&#x201D; <source>IEEE Journal on Selected Areas in Communications</source>, vol. <volume>33</volume>, no. <issue>3</issue>, pp. <fpage>360</fpage>&#x2013;<lpage>381</lpage>, <year>2015</year>.</mixed-citation></ref>
<ref id="ref-9"><label>[9]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>M.</given-names> <surname>Kamoun</surname></string-name>, <string-name><given-names>W.</given-names> <surname>Labidi</surname></string-name> and <string-name><given-names>M.</given-names> <surname>Sarkiss</surname></string-name></person-group>, &#x201C;<article-title>Joint resource allocation and offloading strategies in cloud enabled cellular networks</article-title>,&#x201D; in <conf-name>2015 IEEE Int. Conf. on Communications (ICC)</conf-name>, <publisher-loc>London, UK</publisher-loc>, pp. <fpage>5529</fpage>&#x2013;<lpage>5534</lpage>, <year>2015</year>. </mixed-citation></ref>
<ref id="ref-10"><label>[10]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>B.</given-names> <surname>Li</surname></string-name>, <string-name><given-names>Q.</given-names> <surname>He</surname></string-name>, <string-name><given-names>F. F.</given-names> <surname>Chen</surname></string-name>, <string-name><given-names>H.</given-names> <surname>Jin</surname></string-name>, <string-name><given-names>Y.</given-names> <surname>Xiang</surname></string-name> <etal>et al.</etal></person-group><italic>,</italic> &#x201C;<article-title>Auditing cache data integrity in the edge computing environment</article-title>,&#x201D; <source>IEEE Transactions on Parallel and Distributed Systems</source>, vol. <volume>32</volume>, no. <issue>5</issue>, pp. <fpage>1210</fpage>&#x2013;<lpage>1223</lpage>, <year>2020</year>.</mixed-citation></ref>
<ref id="ref-11"><label>[11]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>J.</given-names> <surname>Yan</surname></string-name>, <string-name><given-names>S. Z.</given-names> <surname>Bi</surname></string-name>, <string-name><given-names>L. J.</given-names> <surname>Duan</surname></string-name> and <string-name><given-names>Y. J. A.</given-names> <surname>Zhang</surname></string-name></person-group>, &#x201C;<article-title>Pricing-driven service caching and task offloading in mobile edge computing</article-title>,&#x201D; <source>IEEE Transactions on Wireless Communications</source>, vol. <volume>20</volume>, no. <issue>7</issue>, pp. <fpage>4495</fpage>&#x2013;<lpage>4512</lpage>, <year>2021</year>.</mixed-citation></ref>
<ref id="ref-12"><label>[12]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>J.</given-names> <surname>Wang</surname></string-name>, <string-name><given-names>J.</given-names> <surname>Hu</surname></string-name>, <string-name><given-names>G.</given-names> <surname>Min</surname></string-name>, <string-name><given-names>A. Y.</given-names> <surname>Zomaya</surname></string-name> and <string-name><given-names>N.</given-names> <surname>Georgalas</surname></string-name></person-group>, &#x201C;<article-title>Fast adaptive task offloading in edge computing based on meta reinforcement learning</article-title>,&#x201D; <source>IEEE Transactions on Parallel and Distributed Systems</source>, vol. <volume>32</volume>, no. <issue>1</issue>, pp. <fpage>242</fpage>&#x2013;<lpage>253</lpage>, <year>2021</year>.</mixed-citation></ref>
<ref id="ref-13"><label>[13]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>Z. L.</given-names> <surname>Ning</surname></string-name>, <string-name><given-names>X. J.</given-names> <surname>Wang</surname></string-name>, <string-name><given-names>X. J.</given-names> <surname>Kong</surname></string-name> and <string-name><given-names>W. G.</given-names> <surname>Hou</surname></string-name></person-group>, &#x201C;<article-title>A social-aware group formation framework for information diffusion in narrowband Internet of Things</article-title>,&#x201D; <source>IEEE Internet of Things Journal</source>, vol. <volume>5</volume>, no. <issue>3</issue>, pp. <fpage>1527</fpage>&#x2013;<lpage>1538</lpage>, <year>2018</year>.</mixed-citation></ref>
<ref id="ref-14"><label>[14]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>Y. T.</given-names> <surname>Wang</surname></string-name>, <string-name><given-names>M.</given-names> <surname>Sheng</surname></string-name>, <string-name><given-names>X. J.</given-names> <surname>Wang</surname></string-name>, <string-name><given-names>L.</given-names> <surname>Wang</surname></string-name> and <string-name><given-names>J. D.</given-names> <surname>Li</surname></string-name></person-group>, &#x201C;<article-title>Mobile-edge computing: Partial computation offloading using dynamic voltage scaling</article-title>,&#x201D; <source>IEEE Transactions on Communications</source>, vol. <volume>64</volume>, no. <issue>10</issue>, pp. <fpage>4268</fpage>&#x2013;<lpage>4282</lpage>, <year>2016</year>.</mixed-citation></ref>
<ref id="ref-15"><label>[15]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>W.</given-names> <surname>Liu</surname></string-name>, <string-name><given-names>Y. C.</given-names> <surname>Huang</surname></string-name>, <string-name><given-names>W.</given-names> <surname>Du</surname></string-name> and <string-name><given-names>W.</given-names> <surname>Wang</surname></string-name></person-group>, &#x201C;<article-title>Resource-constrained serial task offload strategy in mobile edge computing</article-title>,&#x201D; <source>Journal of Software</source>, vol. <volume>31</volume>, no. <issue>6</issue>, pp. <fpage>1889</fpage>&#x2013;<lpage>1908</lpage>, <year>2020</year>.</mixed-citation></ref>
<ref id="ref-16"><label>[16]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>Z. L.</given-names> <surname>Ning</surname></string-name>, <string-name><given-names>P.</given-names> <surname>Dong</surname></string-name>, <string-name><given-names>X. J.</given-names> <surname>Kong</surname></string-name> and <string-name><given-names>F.</given-names> <surname>Xia</surname></string-name></person-group>, &#x201C;<article-title>A cooperative partial computation offloading scheme for mobile edge computing enabled Internet of Things</article-title>,&#x201D; <source>IEEE Internet of Things Journal</source>, vol. <volume>6</volume>, no. <issue>3</issue>, pp. <fpage>4804</fpage>&#x2013;<lpage>4814</lpage>, <year>2019</year>.</mixed-citation></ref>
<ref id="ref-17"><label>[17]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>M.</given-names> <surname>Deng</surname></string-name>, <string-name><given-names>T.</given-names> <surname>Hui</surname></string-name> and <string-name><given-names>M.</given-names> <surname>Bo</surname></string-name></person-group>, &#x201C;<article-title>Fine-granularity based application offloading policy in small cell cloud-enhanced networks</article-title>,&#x201D; in <conf-name>2016 IEEE Int. Conf. on Communications Workshops (ICC)</conf-name>, <publisher-loc>Kuala Lumpur, Malaysia</publisher-loc>, pp. <fpage>638</fpage>&#x2013;<lpage>643</lpage>, <year>2016</year>.</mixed-citation></ref>
<ref id="ref-18"><label>[18]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>M. H.</given-names> <surname>Chen</surname></string-name>, <string-name><given-names>B.</given-names> <surname>Liang</surname></string-name> and <string-name><given-names>M.</given-names> <surname>Dong</surname></string-name></person-group>, &#x201C;<article-title>A semidefinite relaxation approach to mobile cloud offloading with computing access point</article-title>,&#x201D; in <conf-name>2015 IEEE 16th Int. Workshop on Signal Processing Advances in Wireless Communications (SPAWC)</conf-name>, <publisher-loc>Stockholm, Sweden</publisher-loc>, pp. <fpage>186</fpage>&#x2013;<lpage>190</lpage>, <year>2015</year>.</mixed-citation></ref>
<ref id="ref-19"><label>[19]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>O.</given-names> <surname>Munoz</surname></string-name>, <string-name><given-names>A. P.</given-names> <surname>Iserte</surname></string-name>, <string-name><given-names>J.</given-names> <surname>Vidal</surname></string-name> and <string-name><given-names>M.</given-names> <surname>Molina</surname></string-name></person-group>, &#x201C;<article-title>Energy-latency trade-off for multiuser wireless computation offloading</article-title>,&#x201D; in <conf-name>2014 IEEE Wireless Communications &#x0026; Networking Conf. Workshops (WCNCW)</conf-name>, <publisher-loc>Istanbul, Turkey</publisher-loc>, pp. <fpage>29</fpage>&#x2013;<lpage>33</lpage>, <year>2014</year>.</mixed-citation></ref>
<ref id="ref-20"><label>[20]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>N. A.</given-names> <surname>Sulieman</surname></string-name>, <string-name><given-names>L.</given-names> <surname>Ricciardi Celsi</surname></string-name>, <string-name><given-names>W.</given-names> <surname>Li</surname></string-name>, <string-name><given-names>A.</given-names> <surname>Zomaya</surname></string-name> and <string-name><given-names>M.</given-names> <surname>Villari</surname></string-name></person-group>, &#x201C;<article-title>Edge-oriented computing: A survey on research and use cases</article-title>,&#x201D; <source>Energies</source>, vol. <volume>15</volume>, no. <issue>2</issue>, pp. <fpage>1</fpage>&#x2013;<lpage>28</lpage>, <year>2022</year>.</mixed-citation></ref>
<ref id="ref-21"><label>[21]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>R. P.</given-names> <surname>Lin</surname></string-name>, <string-name><given-names>T. Z.</given-names> <surname>Xie</surname></string-name>, <string-name><given-names>S.</given-names> <surname>Luo</surname></string-name>, <string-name><given-names>X. N.</given-names> <surname>Zhang</surname></string-name>, <string-name><given-names>Y.</given-names> <surname>Xiao</surname></string-name> <etal>et al.</etal></person-group>, &#x201C;<article-title>Energy-efficient computation offloading in collaborative edge computing</article-title>,&#x201D; <source>IEEE Internet of Things Journal</source>, vol. <volume>9</volume>, no. <issue>21</issue>, pp. <fpage>21305</fpage>&#x2013;<lpage>21322</lpage>, <year>2022</year>.</mixed-citation></ref>
<ref id="ref-22"><label>[22]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>T.</given-names> <surname>Khikmatullo</surname></string-name> and <string-name><given-names>D. H.</given-names> <surname>Kim</surname></string-name></person-group>, &#x201C;<article-title>Blockchain-enabled approach for big data processing in edge computing</article-title>,&#x201D; <source>IEEE Internet of Things Journal</source>, vol. <volume>9</volume>, no. <issue>19</issue>, pp. <fpage>18473</fpage>&#x2013;<lpage>18486</lpage>, <year>2022</year>.</mixed-citation></ref>
<ref id="ref-23"><label>[23]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>J.</given-names> <surname>Yan</surname></string-name>, <string-name><given-names>S. Z.</given-names> <surname>Bi</surname></string-name>, <string-name><given-names>L. J.</given-names> <surname>Duan</surname></string-name> and <string-name><given-names>Y. J. A.</given-names> <surname>Zhang</surname></string-name></person-group>, &#x201C;<article-title>Pricing-driven service caching and task offloading in mobile edge computing</article-title>,&#x201D; <source>IEEE Transactions on Wireless Communications</source>, vol. <volume>20</volume>, no. <issue>7</issue>, pp. <fpage>4495</fpage>&#x2013;<lpage>4512</lpage>, <year>2021</year>.</mixed-citation></ref>
<ref id="ref-24"><label>[24]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>K. L.</given-names> <surname>Xiao</surname></string-name>, <string-name><given-names>Z. P.</given-names> <surname>Gao</surname></string-name>, <string-name><given-names>W. S.</given-names> <surname>Shi</surname></string-name>, <string-name><given-names>X. S.</given-names> <surname>Qiu</surname></string-name>, <string-name><given-names>Y.</given-names> <surname>Yang</surname></string-name> <etal>et al.</etal></person-group>, &#x201C;<article-title>EdgeABC: An architecture for task offloading and resource allocation in the Internet of Things</article-title>,&#x201D; <source>Future Generation Computer Systems</source>, vol. <volume>107</volume>, no. <issue>1</issue>, pp. <fpage>498</fpage>&#x2013;<lpage>508</lpage>, <year>2020</year>.</mixed-citation></ref>
<ref id="ref-25"><label>[25]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>J. W.</given-names> <surname>Zhang</surname></string-name>, <string-name><given-names>M. Z. A.</given-names> <surname>Bhuiyan</surname></string-name>, <string-name><given-names>X.</given-names> <surname>Yang</surname></string-name>, <string-name><given-names>T.</given-names> <surname>Wang</surname></string-name>, <string-name><given-names>X.</given-names> <surname>Xu</surname></string-name> <etal>et al.</etal></person-group>, &#x201C;<article-title>AntiConcealer: Reliable detection of adversary concealed behaviors in EdgeAI assisted IoT</article-title>,&#x201D; <source>IEEE Internet of Things Journal</source>, vol. <volume>9</volume>, no. <issue>22</issue>, pp. <fpage>22184</fpage>&#x2013;<lpage>22193</lpage>, <year>2022</year>.</mixed-citation></ref>
<ref id="ref-26"><label>[26]</label><mixed-citation publication-type="book"><person-group person-group-type="author"><string-name><given-names>C. A.</given-names> <surname>Shaffer</surname></string-name></person-group>, &#x201C;<chapter-title>Graphs</chapter-title>,&#x201D; in <source>A Practical Introduction to Data Structures and Algorithm Analysis</source>, <edition>3rd</edition> ed., <publisher-loc>Virginia, USA</publisher-loc>: <publisher-name>Publishing House of Electronics Industry</publisher-name>, pp. <fpage>381</fpage>&#x2013;<lpage>411</lpage>, <year>2011</year>.</mixed-citation></ref>
<ref id="ref-27"><label>[27]</label><mixed-citation publication-type="book"><person-group person-group-type="author"><string-name><given-names>S. M.</given-names> <surname>Ross</surname></string-name></person-group>, &#x201C;<chapter-title>Queueing Theory</chapter-title>,&#x201D; in <source>Introduction to Probability Models</source>, <edition>11th</edition> ed., <publisher-loc>Los Angeles, California, USA</publisher-loc>: <publisher-name>Academic Press</publisher-name>, pp. <fpage>481</fpage>&#x2013;<lpage>538</lpage>, <year>2014</year>.</mixed-citation></ref>
</ref-list>
</back></article>