A Dynamic Resource-Aware Routing Protocol in Resource-Constrained Opportunistic Networks

Abstract: Opportunistic Networks (OppNets) are considered one of the most attractive developments of Mobile Ad Hoc Networks, having arisen thanks to the spread of intelligent devices. OppNets are characterized by a rough and dynamic topology as well as unpredictable contacts and contact times. Data is forwarded to and stored in intermediate nodes until the next forwarding opportunity occurs. Achieving a high delivery ratio in OppNets is therefore a challenging issue, and it is imperative that any routing protocol use network resources, as far as they are available, to achieve higher network performance. In this article, we introduce the Resource-Aware Routing (ReAR) protocol, which dynamically controls buffer usage with the aim of balancing the load in resource-constrained, stateless and non-social OppNets. The ReAR protocol invokes our recently introduced mutual information-based weighting approach to estimate the impact of the buffer size on the network performance and ultimately to regulate the buffer consumption in real time. The proposed routing protocol is proven conceptually and simulated using the Opportunistic Network Environment simulator. Experiments show that the ReAR protocol outperforms a set of well-known routing protocols, namely EBR, Epidemic, MaxProp, energy-aware Spray and Wait and energy-aware PRoPHET, in terms of message delivery ratio and overhead ratio.


Introduction
In opportunistic networks (OppNets) [1], contacts are unscheduled and unpredictable, so the topology is unstable and connections among nodes are interrupted continuously. The only information available to a node is the information it can collect from the other nodes it encounters while roaming the network. Hence, the gathered information is of great importance, as it reflects the meeting opportunities among nodes. In contrast to stateful OppNets, stateless OppNets do not require a central entity to save and manage status for communications among the devices joined to the network. Therefore, every node must rely on itself to forward data packets through the network. In addition, and in contrast to social OppNets, no information about node affiliation is available in non-social OppNets. This information, if available, would help the nodes route messages to their destinations and choose the best route.
In OppNets, mobile nodes are not supposed to acquire any knowledge of the network topology. Routes from the sender to the destination of a message are created dynamically, and any available node can opportunistically be used as the next hop. For these reasons, routing data in OppNets is a crucial challenge. Most routing protocols take advantage of contact history information to make message forwarding decisions, as in Chen et al. [2]. However, due to the dynamics of OppNets, routing protocols that depend only on contact history information face difficulties. The main challenge in OppNets is to achieve a high delivery ratio with low overhead. The basic methodology to address this issue is to flood the network with message copies, as in flooding-based routing protocols [3]. A node has to buffer data until it gets an opportunity to forward it, so data must be stored, sometimes for a considerable interval of time. Hence, the network performance is guaranteed only if resources are assumed to be unlimited, which is unfortunately unrealistic in OppNets because of their limited and constrained resources [4][5][6]. Therefore, a buffer management scheme must be implemented to handle the stored data.
To conserve the network's resources, utility-based routing protocols send a message only to the intermediate nodes that are most likely to reach the destination. These nodes are considered the most active nodes in the network. Since a node's activity is related to its resources, most routing policies rely on these resources when formulating message routing decisions, as in [7][8][9]. As a result, a small group of nodes bears the brunt of delivering messages to their destinations, as discussed in [10,11]. In Pujol et al. [12], it was shown that if the traffic is concentrated mainly on the most active nodes, the resources of these nodes are exhausted, which takes them out of service. The resulting reduction in the number of nodes decreases the connectivity of the network, and the network performance degrades, as stated in Roy et al. [13]. This problem is known as load imbalance; it leads to congestion, which has a negative impact on the network performance and ultimately incurs a bottleneck [11,14]. On the other hand, Mtibaa et al. [15] show that absolute fairness in the load distribution among nodes adversely affects performance, because absolute fairness leaves the active nodes, which play a key role in raising performance, underutilized.
Many studies seek to maximize the probability of message delivery by invoking node attributes in the expression of the forwarding decision metric. Node attributes describe the characteristics or reflect the state of a node, such as available buffer space, mobility speed (MobSp), bandwidth (BW), and information about the node's community. The impact of a node attribute is quantified by a specific weight. In Lenando et al. [16], we proposed the mutual information-based weighting scheme (MIWS). The basic idea behind the MIWS approach is to assign each attribute a weight corresponding to the amount of mutual information between the nodes' effectiveness and the attribute. In other words, it quantifies the correlative relationship between the attributes of the nodes and the performance of the network in terms of delivery ratio. It was shown that MIWS is a very promising weighting approach for recognizing the most impactful attribute in OppNets.
Motivated by the above considerations, this paper aims to increase OppNets' performance in terms of message delivery ratio and overhead. This is accomplished by focusing on two objectives. The first objective is to strike a balance between distributing the traffic load fairly and exploiting active nodes; the second is to ensure adequate control of resource consumption in proportion to resource scarcity in the network by invoking the recently introduced MIWS. It is noteworthy that achieving these objectives becomes more challenging in stateless, non-social, resource-limited OppNets.
Our contribution in this paper focuses on designing a dynamic Resource-Aware Routing (ReAR) protocol that seeks to improve the performance of stateless, non-social, resource-constrained OppNets. The contributions made under the proposed protocol are as follows:
• Effective buffer management scheme: The proposed ReAR protocol estimates message-dropping decisions based on the recently published Acumen Message Drop scheme outlined in Lenando et al. [17].
• Equitable distribution of traffic loads: We propose a mechanism that achieves an equitable distribution of traffic loads among nodes based on node resource considerations. This workload distribution mechanism prevents the load from concentrating heavily on active nodes in the network, thereby avoiding congested paths.
• Dynamic regulation of buffer consumption in the network: This is the most innovative part of the proposed protocol. We invoke our recently introduced mutual information-based weighting approach to estimate the impact of the buffer size on the network performance and ultimately to regulate the buffer consumption in real time, as weights are provided instantaneously.
The remainder of this paper is organized as follows: Section 2 presents related works. Section 3 describes the design of the proposed ReAR protocol. The performance evaluation is presented in Section 4. Finally, Section 5 concludes this work.

Related Works
Many approaches have been developed to address the problem of unfairness in the distribution of the workload in OppNets, which certainly affects their performance.
The unfairness problem in OppNets can be classified, according to how fairness is treated, into two categories: fairness in terms of load balancing and fairness in terms of delivered messages [13]. The latter category deals with the problem of inequality in the rate of messages delivered to their destinations; more details are available in [18][19][20]. Our research falls into the first category and deals with fairness in relation to load balancing.
Pujol et al. [12] realized that heavy loads often fall on a few active nodes in the network. Hence, they concluded that unfairness in the distribution of the traffic load leads to a rapid consumption of the limited resources of the active nodes, which negatively impacts the network performance. The authors proposed the FairRout scheme, in which messages are forwarded only to nodes with high interaction with the destination. In other words, nodes accept forwarded messages only from nodes that have similar social features. However, FairRout is suitable only for social OppNets, where social features are the basis of the routing processes. Unfortunately, highly social nodes still suffer from heavy traffic loads in the network [13].
Furthermore, there is no estimate of the efficiency of the FairRout scheme in non-social opportunistic networks.
Mtibaa et al. [15] have shown that absolute fairness in the distribution of loads among all nodes in OppNets adversely affects performance. Conversely, even a lack of fairness among the active nodes can also degrade performance. They proposed the fairness-based opportunistic networking (FOG) approach to ensure a trade-off between the efficiency and fairness of rank-based forwarding techniques using local information. FOG does well in social networks, where rank-based forwarding techniques are applied; nonetheless, it is not applicable to general delay-tolerant networks.
Mashhadi et al. [21] suggested that each node in the network estimate the load of the other nodes locally in order to achieve traffic load balancing. Based on this estimation, each node disseminates data only to the under-loaded parts of the network, so messages are forwarded only to nodes with a low load. The problem with this approach is that the data path must already be specified; it is therefore impractical in OppNets, where it is difficult to know the path of the data in advance.
Bezirgiannidis et al. [22] proposed to achieve fairness by building a time series to predict future queuing rates. This approach depends on a deterministic contact plan schedule that is difficult to implement in typical OppNets; therefore, it is only suitable for the case for which it was designed.
Le et al. [14] propose social contact graph-based routing (SCGR) to achieve fairness in traffic load and throughput. SCGR drops messages based on a wait timer. The message forwarding scheme is essentially based on the social attachment metric, which is heavily dependent on contact history. Traffic load fairness is achieved only by avoiding forwarding a message to nodes whose queue contents are larger than those of the sending node. Therefore, a node only sends messages to nodes with a queue length similar to or less than its own, and traffic loads are consequently transferred to less congested paths.
Ma et al. [11] proposed a fairness-aware routing strategy (FARS) for mobile OppNets. They formulate the forwarding decision based on the factors that affect load fairness: the history of contacts with the destination, the remaining buffer size and the total number of delivered messages. Although FARS improves the balance of traffic load, the results show that its overhead is not much better than that of existing schemes.
Roy et al. [4] proposed the congestion-aware relay selection (CARS) scheme to solve the limited-resource problem in OppNets. They formulate the forwarding decision based on the centrality, buffer availability and residual energy of the nodes. However, this scheme is primarily designed for disaster scenarios, and its main problem lies in the parameter configuration, which is highly dependent on the application specifications.
Wei et al. [23] addressed the problem of congested resources as a multiple attribute decision making (MADM) problem. Their forwarding decision is formulated based on the buffer size, message size, time to live, and the message's waiting time in the queue. The difficulty with this approach is choosing the most appropriate routing protocol to work with it, as its performance was observed to fluctuate depending on the chosen protocol [13].
From the reviewed literature, most approaches either rely on social relationships or require global knowledge of the network. Furthermore, some routing protocols are tailored to the scenario or protocol for which they were designed. In contrast to these works, the proposed approach allows us to control the consumption of network resources in real time in stateless, non-social and resource-constrained OppNets. Experiments show that ReAR is able to raise the network performance in terms of message delivery and overhead to encouragingly high levels.

Designing of the Resource-Aware Routing Protocol
In Ali et al. [3], we presented an analytical study in which we identified a number of desirable features that should be met when designing a routing protocol for resource-constrained OppNets. In particular, an effective routing protocol in such networks should meet the following requirements:
• Use acknowledgments to eliminate duplicate messages.
• Implement efficient buffer management that depends only on the information available in stateless non-social opportunistic networks.
• The exploitation of active nodes in routing decisions should take into account the rational consumption of their resources.
• Impose a limit on the number of copies of messages in the network in order to reduce resource consumption.
To this end, we have designed a novel routing protocol, called Resource-Aware Routing (ReAR) protocol, which fulfills the above requirements, as we will demonstrate in the next subsections.
In Tab. 1, we summarize the notations and definitions used throughout the ReAR protocol algorithm:
• E_{n_x,n_y}: The estimated time for n_x to meet n_y. E_{n_x,n_y} equals 0 if n_x has not met n_y before; otherwise it is given by Eq. (1).
• TTL_m: Time to live of a message m.
• DropExpireMessages(n_x): A procedure for dropping the expired messages in node n_x.
• ExchangeSummaryVectors(n_x, n_y): A procedure for exchanging the summary vectors of both nodes.
• ExchangeAcknowledgments(n_x, n_y): A procedure for exchanging the acknowledgment lists of the delivered messages.
• DeleteAcknowledgedMessages(n_x): A procedure for deleting the acknowledged messages in n_x.

Message Forwarding Strategy
Forwarding messages to nodes without considering their eligibility to carry the message will produce useless copies of messages in the network. Consequently, this will increase overhead and consume network's resources. In addition, focusing on active nodes to get messages to their destination increases their workload, causing their buffers and energy to be consumed. Eventually they will be taken out of service.
In stateless non-social opportunistic networks, the available information is only what the node collects from its neighbors while it roams the network. This is due to the absence of any central entity that the node can rely on to provide information about the state of the network. Furthermore, there are no human behaviors and social relationships between the nodes to build more efficient and trustworthy message dissemination schemes that can be helpful in routing processes.
To address these challenges, the ReAR protocol makes message forwarding decisions based solely on the nodes' contact history information and acts upon the following considerations:
• Equitable distribution of the workload
• Avoidance of congested paths in the network
• Dynamic regulation of buffer consumption
In the following paragraphs, the message forwarding decision process of the proposed ReAR protocol and the aforementioned considerations are detailed.

Message Forwarding Decision in the ReAR Protocol
In the proposed protocol, the message forwarding decision is based on the estimation of the time taken to meet the destination of the message. If the encountered node, in other words, the candidate node to which the message can be forwarded, needs less time to reach the destination, then the source node sends the message to it. Otherwise, the source node holds the message and waits for another opportunity.
To estimate the time a node n_x takes to encounter any other node n_y, the node n_x must record the meeting and leaving times between itself and every node it encounters while roaming. The estimated time is then calculated based on the following theorem.

Theorem 1. Let $\{\Delta T^{xy}_g\}$ be the set of recorded inter-contact intervals between two nodes n_x and n_y, where the g-th inter-contact interval is $\Delta T^{xy}_g = \Delta T^{xy}_m(g) - \Delta T^{xy}_l(g-1)$, with $\Delta T^{xy}_m(g)$ and $\Delta T^{xy}_l(g-1)$ being, respectively, the g-th meeting time and the (g-1)-th leaving time between n_x and n_y; and let $t^{xy}_0$ be the time of the last leaving between n_x and n_y. Then, at any current time $t > t^{xy}_0$, the estimated time for an encounter between the two nodes n_x and n_y is given by Eq. (1).

More details about Theorem 1 are available in our recently published work [17]. The most important feature of Theorem 1 is that it relies solely on the information gathered by the nodes while roaming in the network.
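The bookkeeping behind this estimate can be sketched in Python. This is not the closed form of Eq. (1) from [17]; it is purely an illustrative sketch, under our own naming, in which the estimate is taken as the mean recorded inter-contact interval minus the time elapsed since the last leaving, floored at zero:

```python
class ContactHistory:
    """Records inter-contact intervals between this node and one peer."""

    def __init__(self):
        self.intervals = []       # recorded inter-contact intervals ΔT_g
        self.last_leaving = None  # time of the last leaving, t0

    def record_meeting(self, t_meet):
        # A new inter-contact interval ends when the nodes meet again.
        if self.last_leaving is not None:
            self.intervals.append(t_meet - self.last_leaving)

    def record_leaving(self, t_leave):
        self.last_leaving = t_leave

    def estimated_time_to_meet(self, t_now):
        """Estimate of E_{nx,ny} at time t_now (0 if the nodes never met)."""
        if not self.intervals:
            return 0.0
        mean_interval = sum(self.intervals) / len(self.intervals)
        # Time already elapsed since the last leaving shortens the estimate.
        return max(mean_interval - (t_now - self.last_leaving), 0.0)
```

As in Theorem 1, the sketch uses only information a node gathers locally while roaming: meeting and leaving timestamps with each encountered peer.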
The use of the above process alone in the message forwarding decision is not enough, because active nodes usually need less time than others to reach the rest of the nodes, and consequently the workload would concentrate on active nodes. Therefore, the message forwarding decision should take into account the following three requirements: equitable distribution of the traffic load among nodes according to their resources, proactive congestion avoidance, and dynamic regulation of buffer consumption. The message forwarding decision algorithm of the ReAR protocol is presented in Algorithm 1. Line 7 shows that if the candidate node n_y is the destination of the message, the message is delivered to it unconditionally. Otherwise, the three requirements must be met. The next paragraphs demonstrate how these conditions are met in the message forwarding decision of the proposed protocol.

Proactive Congestion Avoidance
Congestion control in OppNets is a crucial aspect as the storage space in nodes is limited. In addition, many researchers have shown that an unfairly distributed load on nodes which are better connected leads to congestion in the network [24,25]. When a path is congested, nodes in that path get rid of messages because there is not enough space in their buffers [26]. To address this problem, the proactive congestion avoidance scheme proposed in Le et al. [27] is adopted in our algorithm. The scheme provides mechanisms for avoiding congestion proactively and alleviating congestion reactively. It compares the available storage capacity between the sending node and the candidate node. If the buffer occupancy of the candidate node is greater than that of the sending node, the sending node no longer forwards messages to the candidate node. Otherwise, messages are forwarded to the candidate node. This test statement is outlined in line 11 of Algorithm 1. As a result, congested paths could be avoided by choosing other paths to conduct messages to their destination.
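The comparison performed in line 11 of Algorithm 1 can be expressed as a simple predicate. This is a sketch under our own naming, where occupancy denotes the fraction of a node's buffer space currently in use:

```python
def passes_congestion_check(sender_occupancy, candidate_occupancy):
    """Proactive congestion avoidance: forward only if the candidate's
    buffer is no fuller than the sender's, steering traffic away from
    congested paths."""
    return candidate_occupancy <= sender_occupancy
```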
Require: E_{n_x,n_y} for every node couple (n_x, n_y) ∈ N × N
1: if n_x encounters n_y then
2:   DropExpireMessages(n_x)
3:   ExchangeAcknowledgments(n_x, n_y)
4:   ExchangeSummaryVectors(n_x, n_y)
5:   DeleteAcknowledgedMessages(n_x)
6:   for each message m in node n_x do
7:     if n_y = n_m then
8:       n_x forwards a copy of m to n_y
9:       Add m to the acknowledgment list
10:

Dynamic Regulation of Buffer Consumption
Buffer management is considered one of the most important open problems in resource-limited OppNets. Since data replication is typically used to increase the delivery ratio, which unfortunately increases buffer usage, we can consider the buffer management process as a filter that keeps the messages that are more likely to be delivered than the others. In addition, several works have shown that the way buffers are consumed in wireless sensor networks and delay-tolerant networks is directly reflected in the energy consumption [28][29][30]. Consequently, the method of consuming buffers directly determines the network lifespan. The key challenge is thus to decide in which order messages should be replicated and which messages should be dropped when a node's buffer operates close to its capacity [31]. A buffer management strategy that provides countermeasures for these issues must be implemented in the routing protocol.
To this end, the proposed ReAR protocol offers an approach to dynamically regulate buffer consumption, using the correlative relations between the buffer state and the delivery ratio performance of the network as a basis for buffer management decisions. This is achieved by invoking our recently introduced mutual information-based weighting scheme (MIWS) [16] in the buffer management decision. The MIWS approach estimates the correlation between network effectiveness, in terms of delivery ratio, and the attributes of the nodes. A high weight for an attribute reflects a proportionally high impact on achieving efficient data forwarding.
Let Y be a random variable that takes the values effective and ineffective, denoted eff and ineff, respectively, and indicates whether a node in the vicinity of a certain node is effective or ineffective. A node is said to be effective or ineffective based on its number of successfully forwarded messages compared to the average number of successfully forwarded messages over all neighboring nodes. In addition, let V_buffer be the set of buffer size values of the nodes in the communication range. V_buffer can be considered a random variable whose distribution is characterized by the frequency of use of each buffer size value, normalized by the total number of nodes inside a node's communication range.
To estimate the impact of the buffer size on the neighboring nodes' forwarding efficiency, we calculate the weight given by the following expression:

$$W_{buffer} = H(Y) - H(Y/V_{buffer})$$

where $H(Y)$ is the entropy of Y, given by:

$$H(Y) = -P_{eff} \log_2 P_{eff} - P_{ineff} \log_2 P_{ineff}$$

where P_eff and P_ineff are the probabilities of effective and ineffective nodes in the neighbourhood of a given node, respectively; and $H(Y/V_{buffer})$ is the conditional entropy of Y given the set of buffer sizes of the nodes within communication range, given by:

$$H(Y/V_{buffer}) = -\sum_{y}\sum_{v} P_{y,v} \log_2 p_{y/v}$$

where $P_{y,v}$ is the joint probability that Y = y and V_buffer = v, and $p_{y/v}$ is the conditional probability that Y = y given that V_buffer = v. Explanations and examples of the weighting scheme are presented in Lenando et al. [16]. In our work, we use the estimated buffer weight to regulate buffer consumption by introducing the so-called consumption factor, denoted CF and given by the following expression:

$$CF = \frac{H(Y/V_{buffer})}{H(Y)} = 1 - \frac{W_{buffer}}{H(Y)}$$

The CF value ranges from zero to one. It measures how much uncertainty remains about the network efficiency Y once the buffer size values V_buffer are known. A value of CF close to 1 means the buffer size has little impact on the network effectiveness, i.e., buffer space is abundant in the network; hence, more buffer consumption is allowed in order to raise the network performance. In contrast, a value of CF close to 0 means the buffer size has a high impact on the network effectiveness; in other words, buffer occupancy is high and causes a bottleneck in the network, so buffers are scarce and their consumption should be decreased.
The test statement of the buffer consumption regulation is outlined in line 12 of Algorithm 1. This condition requires that the buffer occupancy of the candidate node be less than or equal to CF. The consumption factor is calculated beforehand based on the weight of the buffer size attribute, which reveals how loaded the nodes' buffers are in the network. Invoking this test leaves enough space for the candidate node to store its own messages and the messages designated for it.
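The consumption factor can be computed directly from the entropy definitions of the MIWS scheme. The sketch below assumes CF is the normalized remaining uncertainty H(Y/V_buffer)/H(Y), uses log base 2, relies on the convention 0·log 0 = 0, and uses our own function names:

```python
from collections import Counter
from math import log2


def entropy(labels):
    """Shannon entropy H(Y) of a sequence of labels ('eff'/'ineff')."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())


def conditional_entropy(labels, values):
    """H(Y/V): labels[i] is paired with buffer-size value values[i]."""
    n = len(labels)
    groups = {}
    for y, v in zip(labels, values):
        groups.setdefault(v, []).append(y)
    # Weight each group's entropy by the probability of its buffer value.
    return sum((len(ys) / n) * entropy(ys) for ys in groups.values())


def consumption_factor(labels, values):
    """CF = H(Y/V)/H(Y); near 1 -> buffers abundant, near 0 -> scarce."""
    h_y = entropy(labels)
    return conditional_entropy(labels, values) / h_y if h_y > 0 else 1.0
```

For example, if the buffer size perfectly predicts which neighbors are effective, CF is 0 (buffer size is highly impactful, so consumption should be throttled); if buffer size tells us nothing about effectiveness, CF is 1 and more consumption is allowed.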

Equitable Distribution of Traffic Load
When a node encounters another node, it issues a copy of each message that the other node does not own. To achieve this, both nodes first exchange the so-called summary vector, which contains the IDs of their respective messages. For the case of simple controlled replication, the Spray-and-Wait (SaW) routing protocol [32] bounds the overhead by limiting the total number of copies and transmissions per message. Let L_{m_x} be the number of message copies at the node n_x (source or relay). In SaW, any node n_x that has L_{m_x} > 1 and encounters another node n_y (with no copies) hands over ⌊L_{m_x}/2⌋ copies to n_y and keeps the remaining ⌈L_{m_x}/2⌉ for itself. When it is left with only one copy, it switches to direct transmission. It is easy to see that this scheme quickly distributes all its copies to the nodes in the immediate vicinity, but few of them may ever see the destination.
This problem could be solved if a sophisticated replication scheme is deployed to route a copy after it's handed over to a relay. To this end, the ReAR protocol proposes a controlled replication where each node can forward its copy to a potentially more appropriate relay using a buffer occupation based scheme. The node with the largest available space gets the largest share of replications. The following paragraph illustrates an example to describe the proposed replication method.
Let L_{m_x} = 8 be the number of copies of a message m at the node n_x, which decides to forward the message to an encountered node n_y. Assuming the buffer occupancy of n_x is 75% and that of n_y is 25%, their available buffer fractions are 0.25 and 0.75, respectively, so n_y gets a share of copies proportional to its available space: 8 × 0.75/(0.25 + 0.75) = 6. Hence, n_x hands over 6 copies to n_y and keeps 2 for itself. The distribution process continues until the number of copies reaches one. At this stage, the node retains the single copy of the message until it is delivered to its destination or deleted due to expiration.
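The worked example above can be reproduced with a short sketch (function and parameter names are ours); the candidate's share of copies is proportional to its available buffer space:

```python
def split_copies(total_copies, sender_occupancy, candidate_occupancy):
    """Split message copies so the node with more free buffer space
    gets the larger share. Occupancies are fractions in [0, 1].
    Returns (copies kept by sender, copies handed to candidate)."""
    sender_free = 1.0 - sender_occupancy
    candidate_free = 1.0 - candidate_occupancy
    candidate_share = round(total_copies * candidate_free /
                            (sender_free + candidate_free))
    return total_copies - candidate_share, candidate_share
```

With total_copies = 8, sender occupancy 75% and candidate occupancy 25%, the candidate receives 6 copies and the sender keeps 2, matching the example in the text.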
The sequence of this process is outlined in lines 13-17 of Algorithm 1. Line 13 shows the estimated-time condition for meeting the message destination: if the sender's estimated time to meet the message's destination is less than the candidate's estimated time to meet the same destination, there is no need to forward the message to the candidate node. This condition reduces redundant copies of messages in the network and hence preserves network resources. Additionally, line 13 states that if the sender node has not encountered the message's destination before (that is, E_{n_x,n_m} = 0), the message is sent to the candidate node regardless of the candidate's estimated time to meet the destination. This action is necessary in order not to miss the forwarding opportunity with the candidate node, as the candidate is likely to meet the message's destination sooner or later. The remaining lines show that if the above three conditions are fulfilled, the message is sent to the candidate node, and the message's copies are distributed between the sender and the candidate node so that the higher number of copies is attributed to the node with less buffer occupancy.
This mechanism achieves an equitable distribution of the traffic load among the nodes and prevents a heavy load concentration on active nodes in the network. Thus, active nodes retain their active role in the network and continue to work longer.

Buffer Management Scheme
The buffer management scheme should be understood as the way in which space is freed up for the new incoming messages when the buffer does not have enough space. Therefore, the buffer management scheme should decide which messages to keep and which messages to delete.
The proposed ReAR protocol estimates message-dropping decisions based on our proposed buffer management scheme, called the Acumen Message Drop (AMD) scheme, recently presented in Lenando et al. [17]. The basic idea of the AMD scheme is that the decision to delete messages is estimated based on contact history information, which is the only information available in stateless non-social OppNets. AMD deletes the messages with the lowest remaining lifetime (TTL) and the greatest time to reach the destination. The AMD scheme is outlined in Algorithm 2. The ReAR protocol uses acknowledgments to remove duplicate messages, because excess copies of messages occupy a large space in buffers and are continuously sent and received on the network, draining energy from the nodes and shortening their lifespan.

1: if n_x encounters n_y then
2:   both n_x and n_y update their contact history

Performance Evaluation

To evaluate the performance of the ReAR protocol, we have implemented a simulation model that includes the proposed algorithms in the well-known Opportunistic Network Environment simulator (ONE) [33]. ONE is a Java-based simulation program designed primarily for OppNets; source code is available in Jones [34]. All experiments are conducted using real-life mobility of the following patterns: pedestrians, bicycles, motorcycles, and drivers of faster and slower cars. All nodes use Bluetooth as the communication medium, and each node is equipped with one wireless interface with a fixed bandwidth. Each node records meeting and departing times with any node it encounters. Performance is evaluated under limited buffers and limited energy; each node is equipped with a battery with a limited power budget of 800 mAh. The size of the experimentation area is 4500 × 3400 m², and downtown Helsinki (pedestrian paths and streets) is the environment of our experiments.

Performance Evaluation Metrics
We consider six metrics to evaluate the performance of the routing protocols. The performance metrics are defined as follows:
• Delivery Ratio: The ratio of the total number of delivered messages (D) to the total number of created messages (C):

$$Delivery\ Ratio = \frac{D}{C}$$

• Overhead Ratio: It reflects how many redundant messages are relayed to deliver one message and can be interpreted as the transmission cost in the network:

$$Overhead\ Ratio = \frac{R - D}{D}$$

where R is the total number of messages forwarded (relayed) by the relay nodes.
• Efficiency: The ratio between the Delivery Ratio and the Overhead Ratio, used to compare performance with respect to both:

$$Efficiency = \frac{Delivery\ Ratio}{Overhead\ Ratio}$$

• Average Latency: The average delay, i.e., the average time between the creation of messages and their reception at the final destination:

$$Average\ Latency = \frac{1}{k}\sum_{i=1}^{k}\left(t_{D_i} - t_{C_i}\right)$$

where $t_{D_i}$ is the time at which the i-th message was delivered, $t_{C_i}$ is the time at which the i-th message was created, and k is the number of delivered messages.
• Average buffer occupancy: The percentage of memory fullness.
• Number of dead nodes: The number of nodes that become out of service due to depletion in their energy.
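The first four metrics can be computed directly from their definitions. The following is a minimal sketch with our own function names, assuming the common (R − D)/D definition of overhead, with D delivered, C created, and R relayed messages:

```python
def delivery_ratio(delivered, created):
    """Fraction of created messages that reached their destination."""
    return delivered / created


def overhead_ratio(relayed, delivered):
    """Redundant transmissions per delivered message: (R - D) / D."""
    return (relayed - delivered) / delivered


def efficiency(delivered, created, relayed):
    """Delivery ratio divided by overhead ratio."""
    return delivery_ratio(delivered, created) / overhead_ratio(relayed, delivered)


def average_latency(create_times, deliver_times):
    """Mean delay over the k delivered messages (times paired by index)."""
    return sum(d - c for c, d in zip(create_times, deliver_times)) / len(deliver_times)
```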

Simulation Results
In the following paragraphs, we first present the simulation results of the proposed ReAR protocol regarding the consumption factor. Afterwards, a comparison between the performance of the ReAR protocol and that of several well-known protocols is discussed. Fig. 1 shows the effect of the consumption factor on the network performance. The buffer sizes of all nodes are set to the same value of 2 MB. Tab. 2 lists the simulation settings for the experiment investigating the effect of the consumption factor on the network performance.

Consumption Factor Effect
Figs. 1a and 1b show that an increase in the consumption factor (CF) leads to an increase in the overhead ratio and in the average hop count. This can be explained by the fact that an increase in the consumption factor leads to an increase in the number of messages exchanged in the network, which is justified by line 12 in Algorithm 1: the larger the CF, the more messages are forwarded to the encountered node. Fig. 1c shows that an increase in the consumption factor negatively affects the performance in terms of delivery ratio. This is because the closer CF approaches one, the more the buffers are flooded with messages and, consequently, the higher the rate of dropped messages. In addition, the power consumption increases due to the increased sending and receiving of messages, which drains the energy of the nodes. As a result, the number of messages delivered to their destinations decreases. Fig. 1d shows that the average delay is reduced as CF approaches one. This is because more copies of the messages are disseminated in the network, which accelerates their delivery to their destinations. Overall, Fig. 1 shows that the consumption factor greatly affects the buffer consumption in the network, which is ultimately reflected in the network performance. The simulation results reveal that the proposed ReAR protocol adequately estimates the value of the consumption factor through the recently introduced MIWS.
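One plausible reading of the forwarding gate referenced above (line 12 of Algorithm 1) is a simple occupancy threshold: a node keeps accepting replication opportunities only while its buffer occupancy stays below CF, so a larger CF admits more exchanges and a CF near one floods the buffers. This is a hypothetical simplification of the paper's full policy, for intuition only:

```python
def may_forward(buffer_used, buffer_size, cf):
    """Hypothetical CF gate: allow replication while the fraction of
    occupied buffer space is below the consumption factor cf in (0, 1].
    A larger cf admits more message exchanges (higher overhead and hop
    count, per Figs. 1a-1b); cf near 1 floods the buffers (Fig. 1c)."""
    return buffer_used / buffer_size < cf

# A node with a 2 MB buffer, 1.2 MB occupied (60% full):
permissive = may_forward(1.2e6, 2e6, cf=0.8)   # 0.60 < 0.80 -> forward
strict = may_forward(1.2e6, 2e6, cf=0.5)       # 0.60 >= 0.50 -> hold
```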

Comparison of the ReAR Protocol with Other Protocols
In this section we carry out a comparative study of the performance of the proposed ReAR protocol against that of five well-known routing protocols for opportunistic environments, namely:
• Encounter-Based Routing protocol (EBR) [35]: It uses an encounter-based metric to optimize message forwarding, maximizing the message delivery ratio while minimizing overhead.
• Energy-aware Spray and Wait routing protocol (ES&W) [36]: It builds on the two-phase Spray and Wait protocol [32], but uses information about speed and residual energy to assign the number of copies between the corresponding pair of nodes in the spray phase.
• MaxProp routing protocol [37]: It maintains an estimated delivery likelihood for each node in the network, so that nodes that are seen infrequently obtain lower values over time; packets ranked with the highest priority are the first to be forwarded, whereas those ranked with the lowest priority are the first to be dropped to make space for incoming packets.
• Energy-aware PRoPHET routing protocol (EPRoPHET) [38]: It builds on the PRoPHET routing protocol [39], where the selection of the next best forwarder for a message additionally relies on the energy of the nodes.
• EPIDEMIC routing protocol [40]: It is a flooding-based routing protocol in which nodes continuously replicate and transmit messages to newly discovered nodes.
In this study, we investigate the performance of the routing protocols in terms of delivery ratio, overhead ratio, efficiency, average latency, average buffer usage, and number of dead nodes. Tab. 3 lists the simulation settings for the experiments concerning the comparison of the ReAR protocol with the aforementioned protocols; the initial maximum number of message copies is set to 8.

Fig. 2 shows that the ReAR protocol increases the delivery rate compared to the EBR, ES&W, EPRoPHET, MaxProp and EPIDEMIC routing protocols on average by 45%, 72%, 200%, 849% and 1008%, respectively. It should be emphasized that nodes decide to delete messages either because of lifetime expiration or because of buffer overflow; in both cases this negatively affects the number of messages delivered. Since the ReAR protocol regulates buffer consumption and takes message lifetime into account when dropping messages, it delivers more messages than the other protocols and therefore outperforms them in terms of delivery ratio.

Fig. 3 shows that the ReAR protocol outperforms all other protocols regarding the overhead ratio. The reason for this superiority lies in its message forwarding policy, which mainly depends on the buffer occupancy and ultimately on the weight of the buffer attribute. This leads to a productive use of the node resources. On the contrary, we observe that EPIDEMIC and MaxProp, which are flooding-based routing protocols, have the highest overhead and the lowest achievement due to their depletion of the nodes' limited resources.

Fig. 4 shows that the ReAR protocol outperforms all other protocols in terms of efficiency, i.e., the ratio of delivery ratio to overhead ratio. However, this improvement was not without cost: Fig. 5 shows that the average latency of ReAR is increased. This is due to two reasons. First, the use of the AMD scheme for buffer management causes an increase in latency. Second, the message forwarding policy of the ReAR protocol, as outlined in Algorithm 1, is conservative. Consequently, the average time required to deliver messages to their destinations increases. It is noteworthy that the efficiency curve starts decreasing after 30 h of experiment time. This is because the number of dead nodes reaches a noticeable value after 30 h, as is clear in Fig. 7.
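The relative gains quoted above (45%, 72%, etc.) are relative improvements over each baseline protocol's own value. A minimal sketch, with made-up delivery ratios for illustration (the actual values are in Fig. 2):

```python
def relative_gain_percent(rear_value, baseline_value):
    """Relative improvement of ReAR over a baseline, in percent.
    E.g. 45% means ReAR's delivery ratio is 1.45x the baseline's."""
    return (rear_value - baseline_value) / baseline_value * 100

# Hypothetical numbers: ReAR delivers 58% of messages, a baseline 40%.
gain = relative_gain_percent(0.58, 0.40)   # (0.58 - 0.40) / 0.40 * 100 = 45.0
```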

Conclusion
In this paper, we proposed the ReAR protocol, which aims to enhance the routing efficiency in resource-constrained, stateless, and non-social OppNets. The ReAR protocol achieves the following requirements: effective buffer management, fair load distribution based on the nodes' resources, proactive congestion avoidance, and energy preservation through dynamic regulation of buffer consumption. The buffer consumption regulation was resolved by invoking the recently introduced MIWS, which provides an innovative way to explore the depths of networks with dynamic topology. Hence, it becomes possible to estimate the impact of the buffer size attribute on the network performance, which is ultimately used to regulate the buffer consumption in real time.
To the best of our knowledge, no previous work has attempted to dynamically control buffer consumption in resource-constrained, stateless, and non-social opportunistic networks. Experimental results show that the ReAR protocol improves the network performance in terms of delivery ratio and overhead ratio at encouraging rates. However, these improvements were not without cost: experiments show that the average delay increased. The extent to which this delay can be tolerated depends on the requirements of the service being made available to the user. In future work, we will consider other factors, such as the number of hops and the bandwidth, to make more informed decisions about how to delete and forward messages.