This paper presents a novel application of metaheuristic algorithms for solving stochastic programming problems using the recently developed gaining-sharing knowledge based optimization (GSK) algorithm. The algorithm is based on human behavior, in which people gain knowledge and share it with others. Different types of stochastic fractional programming problems are considered in this study. The augmented Lagrangian method (ALM) is used to handle these constrained optimization problems by converting them into unconstrained optimization problems. Three examples from the literature are considered and transformed into their deterministic form using the chance-constrained technique. The transformed problems are solved using the GSK algorithm, and the results are compared with eight other state-of-the-art metaheuristic algorithms. The obtained results are also compared with the optimal global solution and the results quoted in the literature. To investigate the performance of the GSK algorithm on a real-world problem, a solid stochastic fixed charge transportation problem is examined, in which the parameters of the problem are treated as random variables. The obtained results show that the GSK algorithm outperforms the other algorithms in terms of convergence, robustness, computational time, and quality of the obtained solutions.
Keywords: gaining-sharing knowledge based algorithm; metaheuristic algorithms; stochastic programming; stochastic transportation problem

Introduction
Optimization techniques find the best values of decision variables that optimize an objective function. They are used in various fields of engineering to solve real-world problems, with applications in mechanics, economics, finance, machine learning, computer network engineering, and more. In real-world problems, exact or deterministic information is difficult to obtain; therefore, randomness or uncertainty arises [1]. Such problems fall under stochastic programming, where the parameters of the problem are characterized by random variables that follow some probability distribution [2]. Stochastic programming has applications in areas such as transportation [3,4], portfolio optimization [5], supply chain management [6], electrical engineering [7], lot sizing and scheduling [8,9], water resources allocation [10], production planning [11], and medical drug inventory [12]. The basic idea of solving a stochastic programming problem is to convert the probabilistic constraints into their equivalent deterministic constraints, which are then solved using analytical or numerical methods.
Stochastic programming is applied to a large number of problems, of which fractional programming problems are considered in this study. Stochastic fractional programming problems (SFPP) optimize the ratio of two functions subject to additional constraints, where at least one parameter is probabilistic rather than deterministic; some of the constraints may also be non-deterministic. Charles et al. [13–16] considered the sum of probabilistic fractional objective functions and solved it by classical approaches. Classical methods face several difficulties, such as finding the optimal solution, handling constraints, and coping with high-dimensional problems. To handle these situations, metaheuristic algorithms have been developed over the last three decades. These algorithms do not require derivatives of the problem and are classified into four categories, shown in Fig. 1 [17,18]. They are nature-inspired algorithms: evolutionary algorithms are inspired by natural evolution, swarm-based algorithms are based on the behaviour of insects or animals, physics-based algorithms are inspired by physical laws, and human-based algorithms are based on the philosophy of human activity.
Classification of metaheuristic algorithms
Numerous evolutionary, swarm-based, and physics-based algorithms have been developed and applied to solve different real-world problems [19]. Claro and Sousa [20] proposed multiobjective metaheuristic algorithms for solving stochastic knapsack problems. Hoff et al. [21] considered a time-dependent service network design problem in which demand is stochastic and solved it using metaheuristic algorithms. A Differential Evolution (DE) algorithm with a triangular mutation operator was proposed for optimization problems [22] and applied to stochastic programming problems [23]. Many researchers have presented applications of metaheuristic algorithms to different types of problems, such as unconstrained function optimization [24], vehicle routing problems [25–27], machine scheduling [28,29], mine production schedules [30], project selection [31], soil science [32], feature selection [33,34], and risk identification in supply chains [35]. For constrained optimization problems, a hybrid of Particle Swarm Optimization (PSO) and the Genetic Algorithm (GA) was presented and compared with other metaheuristic algorithms [36].
Agrawal et al. [17] presented an extensive review of the scientific literature, from which it can be observed that there are only a few algorithms in the human-based category. Recently, Mohamed et al. [18] developed the gaining sharing knowledge (GSK) based optimization algorithm, which rests on the ideology of gaining and sharing knowledge over the human life span. The GSK algorithm belongs to the human-based category and has been evaluated on the CEC 2017 benchmark functions for different dimensions. They observed that the GSK algorithm gives significantly better results than other metaheuristic algorithms in terms of accuracy and convergence, and that it can find optimal solutions. Moreover, Agrawal et al. [37–40] proposed binary versions of the GSK algorithm and applied them to real-world problems such as the feature selection and knapsack problems.
Charles et al. [16] solved SFPP using classical approaches, while Mohamed [22] solved the same problems using a modified version of the DE algorithm and found that DE produced better results than the classical approaches. This suggests that using metaheuristic algorithms to solve stochastic programming problems is a more efficient and effective approach.
Therefore, this paper presents SFPP and their deterministic models, which are solved using the GSK algorithm. To the best of our knowledge, this is the first study applying GSK to stochastic programming problems, including an application to a real-world problem. The obtained solutions are compared with eight other state-of-the-art metaheuristic algorithms, the results quoted in the literature [16], and the optimal global solution. The compared algorithms include two evolutionary algorithms, i.e., Genetic Algorithm (GA) [41] and Differential Evolution (DE) [42]; three swarm-based algorithms, i.e., Particle Swarm Optimization (PSO) [43], Whale Optimization Algorithm (WOA) [44], and Ant Lion Optimizer (ALO) [45]; two physics-based algorithms, i.e., Water Cycle Algorithm (WCA) [46] and Multi-Verse Optimizer (MVO) [47]; and one human-based algorithm, i.e., Teaching Learning Based Optimization (TLBO) [48].
As an application of stochastic programming to real-world problems, a transportation problem is examined under a stochastic environment. Mahapatra et al. [49] considered a stochastic transportation problem in which the parameters of the problem follow an extreme value distribution. Yang et al. [3] considered a fixed charge transportation problem and used a tabu search algorithm to find the solution. Agrawal et al. [50] solved multi-choice fractional stochastic transportation problems using Newton's divided-difference interpolating polynomial.
In this study, the transportation problem is considered with multiple objective functions and probabilistic constraints. The main aim is to minimize the transportation cost and the total transportation time while fulfilling the demand requirements. The problem is solved by the GSK algorithm and the other metaheuristic algorithms, and the solutions are compared to evaluate their relative performance.
The organization of the paper is as follows: Section 2 describes the problem definition of SFPP, and Section 3 presents the methodology used in solving SFPP. The numerical examples of SFPP are shown in Section 4, and Section 5 presents the numerical results. A case study is given in Section 6, the analysis of the results is discussed in Section 7, and conclusions follow in Section 8.
Problem Description
Stochastic programming problems deal with situations in which uncertainty or randomness arises. This section gives a detailed description of stochastic fractional programming problems (SFPP), which optimize a ratio of functions and in which randomness occurs in at least one of the parameters of the problem. The uncertain parameters are modeled as random variables that follow known probability distributions.
The sum of SFPP is considered from the literature [16], and their mathematical model is demonstrated as:
\[
\max_{Z \in S} \; \sum_{t=1}^{k} \frac{N_t(Z)+\alpha_t}{D_t(Z)+\beta_t}, \quad t=1,2,\ldots,k,
\]
subject to
\[
P\!\left(\sum_{j=1}^{n} d_{ij} z_j \le b_i^{(1)}\right) \ge 1-p_i^{(1)}, \quad i=1,2,\ldots,m,
\]
\[
\sum_{j=1}^{n} e_{ij} z_j \le b_i^{(2)}, \quad i=m+1,\ldots,q,
\]
where $z_j \in Z=(z_1,z_2,\ldots,z_n) \subset \mathbb{R}^n$ are deterministic decision variables and $N_t(Z)=\sum_{j=1}^{n} h_{tj} z_j$, $D_t(Z)=\sum_{j=1}^{n} p_{tj} z_j$, with $h_{tj}, p_{tj}$ the coefficients of the decision variables in the objective functions and $\alpha_t, \beta_t$ constants. Since at least one of the parameters $N_t(Z)$, $D_t(Z)$, $d_{ij}$ and $b_i^{(1)}$ is a random variable, the problem is called the sum of SFPP. $S=\{Z : \text{Eqs. (2), (3) and } Z \ge 0\}$ is a nonempty, convex, and compact set in $\mathbb{R}^n$ (the feasible set). There are $q$ constraints in total, of which $m$ are probabilistic and the rest are deterministic. $d_{ij}, e_{ij}$ are the coefficients of the decision variables in the constraints, $b_i^{(1)}, b_i^{(2)}$ are the right-hand sides of the constraints, and $p_i^{(1)}$ is the probability level for the $i$th stochastic constraint of the SFPP.
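As a worked illustration of the chance-constrained conversion applied to such constraints, consider the case where the right-hand side $b_i^{(1)}$ is normally distributed and the $d_{ij}$ are deterministic (when the coefficients themselves are random, the same argument yields a term proportional to the square root of the induced variance):
\[
P\!\left(\sum_{j=1}^{n} d_{ij} z_j \le b_i^{(1)}\right)
= 1 - \Phi\!\left(\frac{\sum_{j=1}^{n} d_{ij} z_j - E\big[b_i^{(1)}\big]}{\sqrt{\operatorname{Var}\big(b_i^{(1)}\big)}}\right) \ge 1 - p_i^{(1)}
\;\Longleftrightarrow\;
\sum_{j=1}^{n} d_{ij} z_j \le E\big[b_i^{(1)}\big] + \Phi^{-1}\!\left(p_i^{(1)}\right)\sqrt{\operatorname{Var}\big(b_i^{(1)}\big)},
\]
where $\Phi$ is the standard normal distribution function. The quantiles $\Phi^{-1}(0.80)\approx 0.842$, $\Phi^{-1}(0.90)\approx 1.28$, and $\Phi^{-1}(0.95)\approx 1.645$ are the coefficients that appear in the deterministic models of the numerical examples.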
Methodology
This section is divided into two subsections: the first gives a detailed description of the GSK algorithm, and the second presents the constraint handling technique.
Gaining Sharing KnowledgeBased Algorithm (GSK)
Gaining sharing knowledgebased algorithm (GSK) is one of the metaheuristic optimization algorithms [18]. GSK depends on the concept of gaining and sharing knowledge in the human life span. The algorithm comprises two stages:
Junior (beginners) gaining and sharing stage
Senior (experts) gaining and sharing stage
Over the human life span, all people gain knowledge and share their views with others. People in early middle age gain knowledge through their small networks, such as family members and relatives, and want to share their views or opinions with others who may or may not belong to their group. Similarly, people in later middle age gain knowledge by interacting with their colleagues, friends, etc. They have the experience to judge people and categorize them as good or bad, and they share their views or opinions with experienced or suitable persons so that their knowledge may be enhanced.
The process, as mentioned above, can be mathematically formulated in the following steps:
Step 1: Assume a population of size $N_{pop}$. Let $z_i$ ($i=1,2,\ldots,N_{pop}$) be an individual of the population, $z_i=(z_{i1},z_{i2},\ldots,z_{iN})$, where $N$ is the number of branches of knowledge (dimensions) assigned to an individual, and let $F_i$ ($i=1,2,\ldots,N_{pop}$) be the corresponding objective function values.
Step 2: At the beginning of the search, the number of dimensions updated during the junior and senior stages must be determined. It is calculated by a nonlinear decreasing (respectively increasing) schedule:
\[
N_{\text{junior}} = N \times \left(\frac{\text{Gen} - G}{\text{Gen}}\right)^{k}, \qquad
N_{\text{senior}} = N - N_{\text{junior}},
\]
where $\text{Gen}$ is the maximum number of generations, $G$ is the current generation, and $k>0$ is the knowledge rate.
Step 3: Junior gaining sharing knowledge stage: In this stage, early middle-aged people gain knowledge from their small networks. Out of curiosity to explore others, they share their views or skills with people who may or may not belong to their group. Individuals are updated as follows:
1. According to the objective function values, the individuals are arranged in ascending order as $z_{\text{best}}, \ldots, z_{i-1}, z_i, z_{i+1}, \ldots, z_{\text{worst}}$.
2. For every $z_i$ ($i=1,2,\ldots,N_{pop}$), select the nearest better individual $z_{i-1}$ and the nearest worse individual $z_{i+1}$ to gain knowledge from, and select a random individual $z_r$ to share knowledge with. The updated individual is then
\[
z_{ij}^{\text{new}} =
\begin{cases}
z_i + k_f\left[(z_{i-1}-z_{i+1}) + (z_r - z_i)\right], & \text{if } F(z_r) < F(z_i),\\[2pt]
z_i + k_f\left[(z_{i-1}-z_{i+1}) + (z_i - z_r)\right], & \text{otherwise},
\end{cases}
\]
where $k_f > 0$ is the knowledge factor.
Step 4: Senior gaining sharing knowledge stage: This stage captures the impact of other people (good or bad) on an individual. An individual is updated as follows:
The individuals are classified into three categories (best, middle, and worst) after sorting in ascending order of objective function value: the best individuals, i.e., the top $100p\%$ ($z_{p\text{-best}}$); the middle individuals, i.e., the remaining $N - 2(100p\%)$ ($z_{\text{middle}}$); and the worst individuals, i.e., the bottom $100p\%$ ($z_{p\text{-worst}}$).
For every individual $z_i$, choose two random vectors from the top and bottom $100p\%$ of individuals for the gaining part, and a third (middle) individual for the sharing part. The new individual is then
\[
z_{ij}^{\text{new}} =
\begin{cases}
z_i + k_f\left[(z_{p\text{-best}}-z_{p\text{-worst}}) + (z_{\text{middle}} - z_i)\right], & \text{if } F(z_{\text{middle}}) < F(z_i),\\[2pt]
z_i + k_f\left[(z_{p\text{-best}}-z_{p\text{-worst}}) + (z_i - z_{\text{middle}})\right], & \text{otherwise},
\end{cases}
\]
where $p\in[0,1]$ is the percentage of best and worst classes.
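The two update stages above can be sketched in a few lines of code. The following is an illustrative, minimization-oriented sketch (the models in this paper maximize, so signs flip accordingly); the function name `gsk_minimize`, the per-individual greedy replacement, and the sphere test function are our own illustrative choices, not prescribed by the original algorithm description:

```python
import numpy as np

def gsk_minimize(f, lo, hi, npop=30, gens=200, kf=0.5, kr=0.9, k=10, p=0.1, seed=0):
    """Illustrative GSK sketch: junior/senior gaining-sharing updates."""
    rng = np.random.default_rng(seed)
    dim = lo.size
    pop = rng.uniform(lo, hi, size=(npop, dim))
    fit = np.array([f(z) for z in pop])
    nb = max(1, int(round(p * npop)))            # size of best/worst classes
    for g in range(gens):
        order = np.argsort(fit)                  # ascending: best first
        pop, fit = pop[order], fit[order]
        # nonlinear schedule: junior-stage dimensions shrink as G grows
        n_junior = int(round(dim * ((gens - g) / gens) ** k))
        new = pop.copy()
        for i in range(npop):
            prev = pop[max(i - 1, 0)]            # nearest better individual
            nxt = pop[min(i + 1, npop - 1)]      # nearest worse individual
            r = rng.integers(npop)               # random partner (sharing)
            b = rng.integers(nb)                 # random top-100p% individual
            w = rng.integers(npop - nb, npop)    # random bottom-100p% individual
            m = rng.integers(nb, npop - nb)      # random middle individual
            for j in range(dim):
                if rng.random() >= kr:           # knowledge ratio gate
                    continue
                if j < n_junior:                 # junior gaining-sharing
                    share = pop[r, j] - pop[i, j] if fit[r] < fit[i] else pop[i, j] - pop[r, j]
                    new[i, j] = pop[i, j] + kf * ((prev[j] - nxt[j]) + share)
                else:                            # senior gaining-sharing
                    share = pop[m, j] - pop[i, j] if fit[m] < fit[i] else pop[i, j] - pop[m, j]
                    new[i, j] = pop[i, j] + kf * ((pop[b, j] - pop[w, j]) + share)
        new = np.clip(new, lo, hi)
        new_fit = np.array([f(z) for z in new])
        better = new_fit < fit                   # greedy replacement
        pop[better], fit[better] = new[better], new_fit[better]
    best = np.argmin(fit)
    return pop[best], float(fit[best])
```

On a simple sphere function in two dimensions, e.g. `gsk_minimize(lambda z: float((z**2).sum()), np.full(2, -5.0), np.full(2, 5.0))`, the population contracts toward the origin as the junior stage gives way to the senior stage.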
The flow chart of GSK is shown in Fig. 2.
The flow chart of GSK algorithm

Constraint Handling Technique
To solve constrained optimization problems, different types of constraint handling techniques are used [51,52]. Deb [53] introduced an efficient constraint handling technique which is based on the feasibility rules. The most commonly used approach to handle the constraints is the penalty function method, in which the infeasible solutions are punished with some penalty for violating the constraints. The mathematical formulation of a constrained optimization problem is given as
\[
\max f(Z), \quad \text{where } Z=(z_1,z_2,\ldots,z_N) \in \mathbb{R}^N,
\]
subject to
\[
g_i(Z) \le 0, \quad i=1,2,\ldots,m,
\]
\[
w_k(Z) = 0, \quad k=1,2,\ldots,n.
\]
Eq. (8) represents the objective function, Eq. (9) the inequality constraints, and Eq. (10) the equality constraints. In this study, the augmented Lagrangian method (ALM) is used to solve the constrained problem by converting it into an unconstrained optimization problem that adds a penalty to the original objective function. Bahreininejad [54] introduced ALM for the water cycle algorithm and solved real-time problems. The original optimization problem is transformed into the following unconstrained optimization problem:
\[
\max F(Z) = f(Z) + \delta \sum_{i=1}^{m} \{g_i(Z)\}^2 - \lambda \sum_{i=1}^{m} g_i(Z),
\]
where $f(Z)$ is the objective function of the original problem, $\delta$ is the quadratic penalty parameter, $\sum_{i=1}^{m}\{g_i(Z)\}^2$ is the quadratic penalty term, and $\lambda$ is the Lagrange multiplier.
The ALM is similar to the penalty method, in which the penalty parameter is chosen as large as possible. In ALM, $\delta$ and $\lambda$ are chosen so that the penalty parameter can remain comparatively small, avoiding the ill-conditioning that arises in the pure penalty approach when the penalty parameter must grow very large; this is the main advantage of ALM.
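As a concrete sketch of the penalization idea (not the exact ALM update used in the paper; the violation-only `max(0, g)` form and the sign convention for maximization are our assumptions here):

```python
import numpy as np

def penalized(f, gs, delta=1e2, lam=None):
    """Build an unconstrained surrogate for: maximize f(z) s.t. g_i(z) <= 0.
    Violations (g_i > 0) are penalized quadratically plus a linear
    multiplier term, in the spirit of the augmented Lagrangian."""
    lam = np.zeros(len(gs)) if lam is None else np.asarray(lam, float)
    def F(z):
        v = np.array([max(0.0, g(z)) for g in gs])   # constraint violations
        return f(z) - delta * np.sum(v**2) - np.sum(lam * v)
    return F

# hypothetical example: maximize f(z) = z subject to z - 1 <= 0 (optimum at z = 1)
F = penalized(lambda z: z, [lambda z: z - 1.0], delta=100.0, lam=[10.0])
```

At the feasible optimum `F(1.0)` equals the unpenalized value `1.0`, while an infeasible point such as `z = 2` is penalized heavily, so a maximizer of the surrogate is steered back into the feasible region.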
Numerical Examples
The three test examples of the sum of SFPP were taken from Charles et al. [16]. The detailed description of each example can be found in [16].
Example 1
\[
\max R(Z) = \sum_{t=1}^{2} \frac{h_{t1}z_1 + h_{t2}z_2 + \alpha_t}{p_{t1}z_1 + p_{t2}z_2 + \beta_t}
\]
subject to
\[
d_{11}z_1 + d_{12}z_2 \le 1;\quad d_{21}z_1 + d_{22}z_2 \le b_2;\quad 16z_1 + z_2 \le 4;\quad z_1, z_2 \ge 0.
\]
The aforementioned problem is converted into its deterministic form, and the resulting model is given as [16]:
\[
\max F(Z) = \gamma_1 + \gamma_2
\]
subject to
\[
(\gamma_1+2\gamma_2-5)z_1 + (\gamma_1+3\gamma_2-4)z_2 + 2\gamma_1 + 4\gamma_2 + 1.28\sqrt{\gamma_1^2+\gamma_2^2} \le 3;
\]
\[
(2z_1+z_2) + 1.645\sqrt{z_1^2+z_2^2} \le 1;
\]
\[
(3z_1+4z_2) + 0.842\sqrt{z_1^2+3z_2^2+2} \le 3;
\]
\[
16z_1+z_2 \le 4;\quad z_1, z_2, \gamma_1, \gamma_2 \ge 0.
\]
Example 2
\[
\max R(Z) = \sum_{t=1}^{3} \frac{\sum_{j=1}^{3} h_{tj}z_j + \alpha_t}{\sum_{j=1}^{3} p_{tj}z_j + \beta_t}
\]
subject to
\[
d_{11}z_1+d_{12}z_2+d_{13}z_3 \le b_1;\quad d_{31}z_1+d_{32}z_2+d_{33}z_3 \le 20;\quad z_1+z_2+z_3 \le b_3;
\]
\[
5z_1+3z_2+4z_3 \le 15;\quad z_1, z_2, z_3 \ge 0.
\]
The deterministic model of the example is given as:
\[
\max F(Z) = \gamma_1 + \gamma_2 + \gamma_3
\]
subject to
\[
(\gamma_1+2\gamma_2+4\gamma_3-17)z_1 + (\gamma_1+\gamma_2+\gamma_3-19)z_2 + (\gamma_1+4\gamma_2+7\gamma_3-23)z_3 + 2\gamma_1+10\gamma_2+5\gamma_3
\]
\[
\qquad + 1.645\sqrt{(\gamma_2^2+0.5\gamma_3^2)z_1^2 + (0.5\gamma_2^2+2\gamma_3^2)z_2^2 + (2\gamma_2^2+3\gamma_3^2)z_3^2} \le 12;
\]
\[
(4z_1+2z_2+7z_3) + 1.645\sqrt{0.5z_1^2+0.25z_2^2+0.5z_3^2+0.25} \le 12;
\]
\[
(6z_1+4z_2+6z_3) + 1.28\sqrt{z_1^2+0.5z_2^2+0.75z_3^2} \le 20;
\]
\[
z_1+z_2+z_3 \le 3.16;\quad 5z_1+3z_2+4z_3 \le 15;\quad z_1,z_2,z_3,\gamma_1,\gamma_2,\gamma_3 \ge 0.
\]
Example 3
\[
\max R(Z) = \sum_{t=1}^{2} \frac{\sum_{j=1}^{3} h_{tj}z_j + \alpha_t}{\sum_{j=1}^{3} p_{tj}z_j + \beta_t}
\]
subject to
\[
d_{11}z_1+d_{12}z_2+d_{13}z_3 \le 27;\quad 5z_1+3z_2+z_3 \le 12;\quad z_1, z_2, z_3 \ge 0.
\]
The deterministic model of the example is given as:
\[
\max F(Z) = \gamma_1 + \gamma_2
\]
subject to
\[
(20-2\gamma_1+4\gamma_2)z_1 + (16-3\gamma_1-2\gamma_2)z_2 + (12-5\gamma_1-2\gamma_2)z_3 - 10\gamma_1 - 12\gamma_2
\]
\[
\qquad - 1.28\sqrt{(\gamma_1^2+\gamma_2^2+10)z_1^2 + (2\gamma_1^2+\gamma_2^2+4)z_2^2 + (3\gamma_1^2+2\gamma_2^2+5)z_3^2} \ge 3;
\]
\[
(3z_1+4z_2+8z_3) + 1.645\sqrt{2z_1^2+z_2^2+z_3^2} \le 27;
\]
\[
(5z_1+3z_2+z_3) \le 12;\quad z_1,z_2,z_3,\gamma_1,\gamma_2 \ge 0.
\]
Numerical Results
This section describes the parameters settings of the algorithms and the obtained results of the numerical examples.
Parameter Settings
The user-defined parameters of the GSK algorithm are the population size ($N_{pop}$), knowledge factor ($k_f$), knowledge ratio ($k_r$), and knowledge rate ($k$); the values used are $N_{pop}=50$, $k_f=0.5$, $k_r=0.9$, and $k=10$ (taken from [18]). The percentage of best and worst classes in the senior gaining sharing knowledge stage is $p=0.1$. The parameters used in the ALM are $\delta=10^2$ and $\lambda=-10^4$. The parameter values of all compared algorithms are given in Tab. 1.
Parameter values for all compared algorithms

Parameters                                              Values
Npop (population size)                                  50
Maximum number of function evaluations                  25000
Crossover probability for GA                            1
Mutation probability for GA                             0.09
Scaling factor lower bound for DE                       0.2
Scaling factor upper bound for DE                       0.7
Crossover probability for DE                            0.95
Cognitive factor c1 for PSO                             1.5
Social factor c2 for PSO                                1.5
Maximum bound on inertia weight Wmax for PSO            1
Minimum bound on inertia weight Wmin for PSO            0.2
Constant b for WOA                                      1
Number of streams for WCA                               4
Evaporation condition constant (dmax) for WCA           1e6
Minimum wormhole existence probability (Wep) in MVO     0.2
Maximum wormhole existence probability (Wep) in MVO     1
The following conditions are assumed:
The maximum number of function evaluations is used as the termination criterion for all algorithms [55].
The ALM parameters used to handle the constraints are set separately for each example.
A total of 25 independent runs are conducted, and the best results are recorded throughout the process.
The results are compared among the algorithms (GSK, GA, DE, PSO, ALO, WOA, WCA, MVO, and TLBO) and a previous study [16].
The numerical results are reported in terms of maximum (best) objective value, minimum (worst) objective value, average objective value, standard deviation, and coefficient of variation (C.V.).
The results are obtained for the deterministic objective function F(Z).
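The per-algorithm summary statistics reported below can be computed from the recorded run values as in the following sketch (a small illustrative helper; the sample data are made up):

```python
import numpy as np

def run_statistics(values):
    """Best/mean/worst objective value, standard deviation, and coefficient
    of variation (C.V. = std / mean) over independent runs (maximization)."""
    v = np.asarray(values, dtype=float)
    return {
        "best": v.max(),
        "mean": v.mean(),
        "worst": v.min(),
        "std": v.std(),          # population standard deviation
        "cv": v.std() / v.mean(),
    }

stats = run_statistics([2, 4, 4, 4, 5, 5, 7, 9])   # invented run values
```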
Simulation Results
The considered numerical examples are solved by the GSK and other metaheuristic algorithms using MATLAB R2015a on a personal computer with an Intel Core™ i5 @ 2.50 GHz processor and 4 GB RAM. For a fair comparison and to obtain the optimal global solutions, the examples are also solved by LINGO 11.0, and the obtained results for each example are presented in Tabs. 2–4.
Experimental results of example 1

Algorithms       Maximum (Best)   Mean      Minimum (Worst)   Std. Deviation   C.V.
GSK              1.83246          1.83246   1.83246           0.00000          0.00000
GA               1.83142          1.57011   0.89299           0.32710          0.20833
DE               1.83246          1.83246   1.83246           0.00000          0.00000
PSO              1.50000          1.47000   0.75000           0.15000          0.10204
ALO              1.83246          1.83214   1.83115           0.00032          0.00018
WOA              1.83200          1.63303   1.47480           0.12887          0.07892
WCA              1.83246          1.83201   1.82998           0.00057          0.00031
MVO              1.83228          1.82881   1.82084           0.00321          0.00176
TLBO             1.83246          1.83246   1.83246           0.00000          0.00000
LINGO            1.83246          -         -                 -                -
Results in [16]  1.75533          -         -                 -                -
Experimental results of example 2

Algorithms       Maximum (Best)   Mean      Minimum (Worst)   Std. Deviation   C.V.
GSK              15.2256          15.2256   15.2256           0.0000           0.0000
GA               15.2255          15.0591   14.0484           0.2705           0.0180
DE               15.2256          15.2255   15.2240           0.0003           0.0000
PSO              15.2255          14.8238   13.9612           0.5519           0.0372
ALO              15.2255          15.2120   15.1558           0.0166           0.0011
WOA              14.8468          13.8022   11.2338           0.8402           0.0609
WCA              15.2256          15.1733   15.0933           0.0610           0.0040
MVO              15.2241          15.2068   15.0922           0.0271           0.0018
TLBO             15.2256          15.2256   15.2256           0.0000           0.0000
LINGO            15.2256          -         -                 -                -
Results in [16]  15.1931          -         -                 -                -
The results for example 1 show that all the algorithms can find a feasible solution to the problem. GSK, DE, and TLBO obtained solutions equal to the optimal global solution (F(Z) = 1.83246) with minimum standard deviation. The convergence of the GSK algorithm relative to the other metaheuristic algorithms is presented in Fig. 3, which shows that GSK has the best convergence among the compared algorithms. The average elapsed time taken by the GSK algorithm is also lower than the others, as presented in Fig. 6. Moreover, the results obtained by the metaheuristic algorithms are much better than the results in the literature (Charles et al. [16]), except for the PSO algorithm. The values of the decision variables obtained by the GSK algorithm are γ1 = 1.83246, γ2 = 0, z1 = 0.202324, z2 = 0.165433, and the constraint values are [−0.5901, −5.9157e−07, −0.4956, −0.5974], which confirms the feasibility of the solution.
The convergence graph for the solution of Example 1
For example 2, the results are presented in Tab. 3. The solutions obtained by GSK, DE, and TLBO equal the optimal solution with zero standard deviation, which shows that these algorithms solve the problem efficiently; the obtained results are better than those in Charles et al. [16]. The computational time was also recorded throughout the process: the average elapsed time of all algorithms is shown in Fig. 6, which establishes that the GSK algorithm takes less computational time than the others. Fig. 4 shows the convergence of the GSK algorithm against the other algorithms. Regarding feasibility, the constraint values obtained by the GSK algorithm are [0, −2.7569e−06, −1.9445, −0.0536, −3.9992] and the decision variables are γ1 = 15.22559, γ2 = 0, γ3 = 0, z1 = 0, z2 = 1.424836, z3 = 1.681578.
Similarly, example 3 is solved by the GSK algorithm and the other algorithms. The results are shown in Tab. 4 in terms of maximum (best), minimum (worst), and average objective value, together with standard deviations and coefficients of variation. All algorithms (GSK, GA, DE, PSO, WOA, ALO, WCA, MVO, and TLBO) can find a solution, but GSK and ALO find the optimal solution, with 0% deviation from the optimal global solution. The objective function value in Charles et al. [16] is 3.6584, which deviates from the global optimal solution (7.8808) by about 53.5%. The convergence behaviour on example 3 for GSK and the other algorithms is shown in Fig. 5. The average computational time is presented in Fig. 6, which indicates that the GSK algorithm requires considerably less computational time. The results obtained by the GSK algorithm are: objective function value = 7.8808, decision variables γ1 = 0, γ2 = 7.88078, z1 = 2.4, z2 = 0, z3 = 0, and constraint values [6.7823e−05, −14.2167, 0].
The convergence graph for the solution of Example 2

Experimental results of example 3
Algorithms       Maximum (Best)   Mean      Minimum (Worst)   Std. Deviation   C.V.
GSK              7.88079          7.88079   7.88079           0.00000          0.00000
GA               7.88079          7.41234   6.81614           0.53938          0.07277
DE               7.88079          7.45492   5.75135           0.68726          0.09219
PSO              7.88079          7.35145   5.29517           0.80506          0.10951
ALO              7.88078          7.88078   7.88078           0.00000          0.00000
WOA              7.83065          6.65585   2.39331           1.44584          0.21723
WCA              7.88079          7.88067   7.87905           0.00043          0.00005
MVO              7.87867          7.86897   7.85739           0.00488          0.00062
TLBO             7.88079          7.62353   4.64353           0.77710          0.10193
LINGO            7.8808           -         -                 -                -
Results in [16]  3.6584           -         -                 -                -
Statistical Analysis
To validate the results obtained from the GSK and other algorithms, two non-parametric statistical tests, the Friedman test and the Wilcoxon signed-rank test, are performed using IBM SPSS 20.
Friedman Test
To compare the performance of the algorithms simultaneously, the Friedman test is conducted by calculating their mean ranks. The null hypothesis is "there is no significant difference among the performance of the algorithms", and the alternative hypothesis is "there is a significant difference among the performance of the algorithms". Using the Friedman test, the mean rank is obtained for each example, and the results are shown in Tab. 5. Rankings are then assigned from the mean ranks: for these maximization problems, a larger mean rank indicates better performance and earns a better (lower-numbered) ranking. The same is shown in Fig. 7 for each example. From Tab. 5, it can be observed that the GSK algorithm obtains the first rank on every example. Moreover, the algorithms differ significantly at the 5% level (p-value = 0.00 < 0.05); therefore, to check the pairwise comparisons, the Wilcoxon signed-rank test is also performed.
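The mean ranks of the Friedman test can be reproduced from a runs-by-algorithms matrix of objective values, as in the following sketch (the data below are invented, not the paper's):

```python
import numpy as np
from scipy.stats import rankdata

def friedman_mean_ranks(results):
    """results[r, a]: objective value of algorithm a in run r (maximization).
    Ranks are assigned within each run (larger value -> larger rank),
    then averaged per algorithm, as in the Friedman test."""
    ranks = np.apply_along_axis(rankdata, 1, np.asarray(results, float))
    return ranks.mean(axis=0)

# three runs of three hypothetical algorithms
mr = friedman_mean_ranks([[1.83, 1.50, 1.83],
                          [1.83, 1.47, 1.82],
                          [1.83, 0.75, 1.83]])
```

Here the second algorithm is worst in every run, so its mean rank is the lowest; the first algorithm edges out the third.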
The convergence graph for the solution of Example 3

Average elapsed time of examples 1, 2 and 3 for all algorithms

Results of Friedman Test
Algorithms    Example 1              Example 2              Example 3
              Mean Rank   Ranking    Mean Rank   Ranking    Mean Rank   Ranking
GSK           8.52        1          8.96        1          7.22        1
GA            2.40        7          3.68        7          4.80        6
DE            8.18        2          7.26        3          5.32        4
PSO           1.56        9          3.44        8          5.12        5
ALO           5.48        4          4.56        4          4.36        7
WOA           2.12        8          1.16        9          1.78        9
WCA           5.44        5          4.20        5          6.64        2
MVO           4.00        6          4.12        6          3.24        8
TLBO          7.30        3          7.62        2          6.52        3
p-value       0.00*                  0.00*                  0.00*

Note: * indicates that the value is less than 0.05
The mean ranks of the algorithms obtained by Friedman test

Wilcoxon Signed Rank Test
To check the pairwise comparisons between the algorithms (GSK vs. GA, GSK vs. DE, GSK vs. PSO, GSK vs. ALO, GSK vs. WOA, GSK vs. WCA, GSK vs. MVO, and GSK vs. TLBO), the Wilcoxon signed-rank test is performed at the 5% level of significance. The results are presented in Tab. 6, in which S+ and S− denote the sums of positive and negative ranks, respectively. From Tab. 6, the GSK algorithm obtains a higher S+ value than S− in every pairwise comparison, and the p-values are below 0.05 in all cases except GSK vs. TLBO on example 3; hence, the GSK algorithm performs significantly better than the other algorithms.
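A pairwise comparison of this kind can be sketched with SciPy's signed-rank test (illustrative; the per-run values below are invented):

```python
import numpy as np
from scipy.stats import wilcoxon

# hypothetical per-run objective values (maximization): "GSK" vs. a competitor
gsk   = np.array([1.83246] * 10)
other = gsk - np.array([0.01, 0.02, 0.05, 0.30, 0.11,
                        0.04, 0.07, 0.90, 0.13, 0.21])

# one-sided test: are the paired differences (gsk - other) positive?
stat, pvalue = wilcoxon(gsk, other, alternative="greater")
```

With all ten paired differences positive, the one-sided p-value falls well below 0.05, mirroring the starred entries in the table.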
Results of Wilcoxon signed rank test

Algorithms      Example 1               Example 2               Example 3
                S+    S−    p-value     S+    S−    p-value     S+    S−    p-value
GSK vs. GA      325   0     0.00*       325   0     0.00*       66    0     0.00*
GSK vs. DE      15    0     0.04*       276   0     0.00*       36    0     0.01*
GSK vs. PSO     325   0     0.00*       325   0     0.00*       45    0     0.01*
GSK vs. ALO     325   0     0.00*       325   0     0.00*       325   0     0.00*
GSK vs. WOA     325   0     0.00*       325   0     0.00*       325   0     0.00*
GSK vs. WCA     325   0     0.00*       325   0     0.00*       15    0     0.04*
GSK vs. MVO     325   0     0.00*       325   0     0.00*       325   0     0.00*
GSK vs. TLBO    231   0     0.00*       325   0     0.00*       6     0     0.11

* indicates that the value is less than 0.05
Nomenclature of solid stochastic fixed charge transportation problem

i        index for source locations
t        index for destination locations
k        index for conveyances
z_itk    amount of product transported from the ith source to the tth destination by the kth conveyance
α_itk    direct transportation cost
β_itk    fixed cost
ξ_itk    transportation time
a_i      total availability at the ith supply location
b_t      minimum requirement at the tth destination location
q_k      capacity of the kth conveyance
A Case Study
This section presents a case study based on a stochastic transportation problem, in which the main aim is to minimize the total transportation cost together with the total transportation time; the nomenclature used is listed in Tab. 7.
A cement company transports cement from its 4 distributors (source locations) to 5 retailers (destination locations) using 2 conveyances. Two categories of transportation cost are assumed: direct costs and fixed costs. The direct cost is paid per unit of product transported, and the fixed cost is charged whenever transportation takes place between a source location and a destination location. Mathematically, the fixed cost can be formulated by introducing the indicator variable:
\[
y(z_{itk}) = \begin{cases} 1, & \text{if } z_{itk} > 0,\\ 0, & \text{otherwise.} \end{cases}
\]
Thus, the total transportation cost can be calculated as
\[
\text{Cost} = f_{\text{cost}} = \sum_{i=1}^{4}\sum_{t=1}^{5}\sum_{k=1}^{2}\left(\alpha_{itk}\, z_{itk} + \beta_{itk}\, y(z_{itk})\right)
\]
Also, the total transportation time will be minimized when the transportation activity holds between ith source locations to tth destination locations. Thus, the objective function for the total transportation time can be formulated as
\[
\text{Time} = f_{\text{time}} = \sum_{i=1}^{4}\sum_{t=1}^{5}\sum_{k=1}^{2} \xi_{itk}\, y(z_{itk})
\]
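The two objectives above can be evaluated for a given shipment plan as in the following sketch (a small illustrative helper; the array shapes follow the 4x5x2 problem, and the sample data are invented):

```python
import numpy as np

def transport_objectives(z, alpha, beta, xi):
    """Fixed-charge cost and total time for a shipment plan z[i, t, k] >= 0.
    alpha: per-unit direct costs, beta: fixed charges, xi: route times."""
    y = (z > 0).astype(float)                     # indicator y(z_itk)
    fcost = float(np.sum(alpha * z + beta * y))   # direct + fixed cost
    ftime = float(np.sum(xi * y))                 # time on used routes only
    return fcost, ftime

# one active route shipping 4 units; unit cost 2, fixed charge 5, time 3
z = np.zeros((4, 5, 2)); z[1, 2, 0] = 4.0
fcost, ftime = transport_objectives(z, np.full(z.shape, 2.0),
                                    np.full(z.shape, 5.0), np.full(z.shape, 3.0))
```

For this single active route the cost is 2 × 4 + 5 = 13 and the time is 3; routes with zero shipment contribute neither a fixed charge nor any time.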
In the classical transportation problem, the data are known to the decision maker in advance, but in real-world problems they cannot be obtained beforehand; they are estimated from statistical experience or observed from previous activity. Hence, the parameters of the problem (a_i, b_t, q_k, α_itk, β_itk, ξ_itk) are treated as random variables, and the problem becomes a solid stochastic fixed charge transportation problem (SSFCTP). The mathematical model of the SSFCTP can be formulated as:
\[
\min f_{\text{cost}} = \sum_{i=1}^{4}\sum_{t=1}^{5}\sum_{k=1}^{2}\left(\alpha_{itk}\, z_{itk} + \beta_{itk}\, y(z_{itk})\right)
\]
subject to
\[
P\!\left(\sum_{t=1}^{5}\sum_{k=1}^{2} z_{itk} \le a_i\right) \ge \gamma_i, \quad i=1,2,3,4,
\]
\[
P\!\left(\sum_{i=1}^{4}\sum_{k=1}^{2} z_{itk} \ge b_t\right) \ge \eta_t, \quad t=1,2,3,4,5,
\]
\[
P\!\left(\sum_{i=1}^{4}\sum_{t=1}^{5} z_{itk} \le q_k\right) \ge \zeta_k, \quad k=1,2,
\]
\[
z_{itk} \ge 0;\quad y(z_{itk}) \in \{0,1\} \quad \text{for every } i,t,k,
\]
where γ_i, η_t, ζ_k are prescribed probability confidence levels.
The usual solution procedure cannot be applied directly to this problem. Since the parameters in the objective function are random variables, the expected-value minimization model is used to obtain the optimal solution, and the chance-constrained technique is applied to the probabilistic constraints. The data for the problem are taken from Yang et al. [3].
The problem is solved by the GSK algorithm and the other algorithms (TLBO, DE, WCA, GA, MVO, WOA). Fifteen independent runs are conducted for every algorithm, and the results are recorded throughout the process. The obtained solutions are shown in Tab. 8. The GSK algorithm gives the lowest transportation cost (f_cost = 1360.4) and also the lowest average cost among the compared algorithms. The minimum and average transportation costs are also presented in Fig. 8, which shows that the GSK algorithm is more efficient than the others. The corresponding total transportation time over all transportation activities is f_time = 42 hours, and the optimal transportation plan is z_112 = 0.03, z_121 = 0.02, z_141 = 0.06, z_212 = 27.57, z_221 = 27.67, z_222 = 18.36, z_241 = 0.02, z_442 = 34.45, with all other decision variables zero.
The transportation cost obtained by algorithms

Algorithms             GSK      TLBO     DE       WCA      GA       MVO      WOA
Minimum (Best) Cost    1360.4   1631.2   1897.1   1818.0   1662.9   2163.0   2528.5
Average Cost           1702.6   1800.7   2117.8   1969.9   1890.4   2385.5   3592.7
Maximum (Worst) Cost   1839.5   1867.6   2192.8   2273.1   1969.6   2933.4   6710.5
The average and minimum transportation cost
To validate the efficiency and robustness of the GSK algorithm, the Friedman test and the Wilcoxon signed-rank test are performed; the results are shown in Tabs. 9 and 10, respectively. The Friedman test shows that the algorithms differ significantly at the 5% level. The Wilcoxon signed-rank test provides pairwise comparisons between the GSK algorithm and each of the other algorithms: GSK is significantly different from all of them at the 5% level, except for the TLBO algorithm.
Results Analysis
The experimental results show that the GSK algorithm performs best on all SFPP examples in terms of convergence, robustness, and ability to find the optimal solution.
In example 1, the ALO, PSO, and WOA algorithms converge prematurely and do not find the optimal solution. The GSK algorithm, in contrast, converges quickly and does not get trapped in local optima, owing to its good balance of exploration and exploitation: it explores the search space efficiently and effectively and converges to the optimal solution. Moreover, the GSK algorithm demonstrates its robustness by obtaining zero standard deviation on all SFPP test examples, whereas the other techniques do not converge to the optimal solution in every simulation. Thanks to its two main pillars, the junior and senior gaining-sharing stages, the algorithm finds the optimal solution with excellent convergence. Hence, the GSK algorithm is a very effective approach for solving SFPP.
Results of Friedman test for SSFCTP

Algorithm   Mean Rank   Rank
GSK         1.60        1
TLBO        1.73        2
WOA         6.93        7
DE          4.87        5
WCA         3.53        4
GA          3.33        3
MVO         6.00        6
p-value: 0.00∗
Results of Wilcoxon test for SSFCTP

Comparison      S+     S−     p-value
GSK vs. TLBO    31     89     0.10
GSK vs. WOA     120    0.0    0.01∗
GSK vs. DE      120    0.0    0.01∗
GSK vs. WCA     116    4.0    0.01∗
GSK vs. GA      120    0.0    0.01∗
GSK vs. MVO     120    0.0    0.01∗
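As an illustration of the statistical procedure behind the two tables above, both nonparametric tests can be run with SciPy on per-run cost samples. The cost arrays below are hypothetical placeholders, not the raw data underlying Tabs. 9 and 10, and only three of the seven algorithms are shown.

```python
# Illustrative sketch: Friedman and pairwise Wilcoxon signed-rank tests on
# per-run transportation costs. The samples are made-up placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
runs = 15  # e.g. 15 independent runs per algorithm

# Hypothetical per-run costs, loosely shaped like the table's summary rows.
costs = {
    "GSK":  rng.normal(1700, 60, runs),
    "TLBO": rng.normal(1800, 50, runs),
    "WOA":  rng.normal(3600, 900, runs),
}

# Friedman test: do the algorithms' rank distributions differ overall?
chi2, p_friedman = stats.friedmanchisquare(*costs.values())
print(f"Friedman: chi2={chi2:.2f}, p={p_friedman:.4f}")

# Pairwise Wilcoxon signed-rank tests against GSK at the 5% level.
for name in ("TLBO", "WOA"):
    stat, p = stats.wilcoxon(costs["GSK"], costs[name])
    verdict = "significant" if p < 0.05 else "not significant"
    print(f"GSK vs. {name}: W={stat:.1f}, p={p:.4f} ({verdict})")
```

The Friedman test operates on within-run ranks across all algorithms at once, while the Wilcoxon signed-rank test compares one matched pair of algorithms per call, which is why the paper uses it for the pairwise comparisons.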
In addition, the GSK algorithm shows promising results in comparison with the other metaheuristic algorithms. While the other algorithms are not even able to find the optimal solution of the SFPP, the GSK algorithm converges to the optimal solution at an early stage of the optimization process by maintaining a proper balance between its exploration and exploitation characteristics. Moreover, it consumes considerably less computational time, an important property when searching for the optimal solution. The statistical tests also show that the GSK algorithm delivers significantly better results than the other algorithms.
Moreover, based on the results of the stochastic transportation problem, all algorithms other than GSK performed poorly and did not obtain the minimum transportation cost, whereas the GSK algorithm obtained both the minimum transportation cost and the minimum transportation time, which demonstrates its efficiency on real-world problems. It can therefore be applied to unconstrained, constrained, and multi-objective optimization problems over both discrete and continuous spaces, and it is a general-purpose algorithm that is easy to understand and implement.
Concluding Remarks
This paper describes an application of the recently developed gaining-sharing knowledge-based (GSK) algorithm to stochastic programming. The GSK algorithm is a metaheuristic based on the human activity of gaining and sharing knowledge. To assess its convergence and its ability to find optimal solutions, GSK is applied to stochastic fractional programming problems through three different types of numerical examples. For comparative assessment, metaheuristic algorithms from each category are considered: GA and DE from evolutionary algorithms; PSO, ALO, and WOA from swarm-based algorithms; WCA and MVO from physics-based algorithms; and TLBO from human-based algorithms.
From the comparative results, it can be concluded that the GSK algorithm performs better than the other algorithms: it converges to the optimal solution rapidly and takes less computational time. The obtained results are also compared with the global optimal solution and with results from a previous study. For a fair comparison, nonparametric statistical tests (the Friedman test and the Wilcoxon signed-rank test) are conducted at the 5% level of significance, and they confirm that the GSK algorithm is significantly different from, and outperforms, the other algorithms.
Besides, a solid stochastic fixed charge transportation problem, a real-world application of stochastic programming, is studied under a stochastic environment in which all parameters of the problem are treated as random variables. The main objective of the problem is to find the optimal transportation plan that minimizes transportation cost and transportation time while satisfying all the constraints. The metaheuristic algorithms are applied to the problem, and from the obtained results it is observed that the GSK algorithm gives the minimum transportation cost (fcost = 1360.4) and the minimum transportation time (ftime = 42) compared with the other algorithms, in less computational time.
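The cost objective of a fixed charge transportation plan, as minimized above, can be illustrated with a small evaluation routine. The unit costs, fixed charges, and candidate plan below are hypothetical placeholders (a 2-source, 3-destination ordinary variant rather than the full solid, stochastic problem), assuming the stochastic parameters have already been replaced by their deterministic equivalents.

```python
# Hedged sketch: evaluating the cost objective of a fixed charge
# transportation plan. All numeric data are hypothetical placeholders.
import numpy as np

def transport_cost(x, c, fixed):
    """Total cost = variable cost + fixed charge on every opened route.

    x, c, fixed share one shape, e.g. (sources, destinations); for the
    solid variant a third conveyance axis would be added. The fixed
    charge is paid only on routes actually used (x > 0).
    """
    return float(np.sum(c * x) + np.sum(fixed * (x > 0)))

c = np.array([[4.0, 6.0, 9.0],
              [5.0, 7.0, 8.0]])        # unit transportation costs
fixed = np.array([[20.0, 30.0, 25.0],
                  [15.0, 35.0, 40.0]]) # route-opening charges
x = np.array([[10.0, 0.0, 5.0],
              [0.0, 8.0, 0.0]])        # a candidate transportation plan

print(transport_cost(x, c, fixed))  # → 221.0 (variable 141 + fixed 80)
```

The fixed-charge indicator term is what makes the objective discontinuous and hard for exact solvers, which is the usual motivation for attacking this problem class with metaheuristics such as GSK.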
From these results, it can be concluded that the GSK algorithm performs significantly better than the other metaheuristic algorithms. Note, however, that in line with the no-free-lunch theorem, the empirical findings of this study may differ on other benchmark sets or real-world problems.
The authors would like to thank the Editor and the reviewers for their valuable suggestions, which helped us to improve the quality of the paper.
The authors present their appreciation to King Saud University for funding this work through Researchers Supporting Project Number (RSP2021/305), King Saud University, Riyadh, Saudi Arabia.
Funding Statement: The research is funded by Researchers Supporting Program at King Saud University, (Project# RSP2021/305).
Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.