Metaheuristic algorithms are generalizations of heuristic algorithms that can be applied to almost all optimization problems. For an optimization problem, a metaheuristic algorithm is one way to find an optimal or approximate solution under limited conditions. Most existing metaheuristic algorithms are designed for serial systems, and existing algorithms still leave considerable room for improvement in convergence speed, robustness, and overall performance. To address these issues, this paper proposes an easily parallelizable metaheuristic optimization algorithm called Team Competition and Cooperation Optimization (TCCO), inspired by the processes of human team cooperation and competition. The proposed algorithm mathematically models human team cooperation and competition to drive the optimization process and to find an approximate solution as close as possible to the optimal solution under limited conditions. To evaluate the performance of the proposed algorithm, this paper compares the solution accuracy and convergence speed of TCCO with those of the Grasshopper Optimization Algorithm (GOA), the Seagull Optimization Algorithm (SOA), the Whale Optimization Algorithm (WOA), and the Sparrow Search Algorithm (SSA). Experimental results on 30 test functions commonly used in the optimization field indicate that, compared with these state-of-the-art metaheuristic algorithms, TCCO is strongly competitive in both solution accuracy and convergence speed.

Metaheuristic algorithms combine the advantages of random search and local search. Unlike exact optimization algorithms, which return a provably optimal solution, a metaheuristic returns an optimal or approximate solution to the optimization problem under limited conditions. In recent years, traditional optimization algorithms have struggled to meet the accuracy requirements that fields such as engineering, business, and economics place on optimization problems under limited conditions [

The No Free Lunch theorem (NFL) [

The rest of the paper is structured as follows. Section 2 presents a literature review of metaheuristic algorithms. Section 3 presents the details about TCCO and its pseudo-code implementation. Section 4 provides the comparative statistical analysis of results on benchmark functions. Section 5 concludes the work and suggests some directions for future studies.

In the past few decades, researchers have developed a series of metaheuristic algorithms inspired by nature to solve optimization problems under limited conditions. They can be roughly divided into the following four classes:

The first class consists of algorithms based on the Darwinian theory of evolution. For instance, the Genetic Algorithm (GA) [

The second class is physics-based algorithms. This type of algorithm seeks the optimal solution by imitating physical rules that are common in the real world. For example, the annealing idea proposed by Metropolis et al. was introduced into the field of optimization by Kirkpatrick et al., who designed the Simulated Annealing algorithm (SA) [

The third class comprises algorithms based on swarm intelligence, which find the optimal solution by simulating the activities of biological swarms. The Particle Swarm Optimization algorithm (PSO) [

The fourth class consists of algorithms based on human behavior, such as CAs [

In this section, we elaborate on the inspiration for TCCO, its mathematical model, and its pseudo-code.

Competition and cooperation exist everywhere in human society, and the development of human society is largely driven by them. Therefore, this paper attempts to mathematically model cooperation and competition to drive the optimization process and find an approximate solution as close as possible to the optimal solution under limited conditions. At the same time, the concepts of teams and intra-team updates in the algorithm naturally support parallel computation, and the simple, efficient inter-team cooperation allows the algorithm to be parallelized at a small communication cost. In addition, considering the massive computing power of parallel systems, the algorithm also includes a procedure for judging the advantage of team members, which improves its performance and convergence speed to a certain extent.

The competition and cooperation process in this paper can be simplified into the following steps. First, people are divided into several teams. To achieve a common goal, everyone proposes their own pre-solution to the problem. Each team selects a leader according to the quality of the members' solutions. Cooperation proceeds under the organization of the leader, with the team trying to find a better solution. After all team members have updated their own solutions, a new leader is selected, and the best solution is used to enter the team competition. The team with the best solution wins and becomes the dominant team. Next, each team randomly finds partners to work with to optimize its solution; however, the dominant team has the right to choose more partners than the other teams. If a team's solution is better than its partner's, the partner follows that team; otherwise, the roles are reversed. After the cooperation, each team re-elects its leader and enters the team competition again. When all teams have completed these steps, the current round of competition and cooperation ends. After multiple rounds of this process, the solution given by the dominant team becomes the final solution.
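The leader-election and team-competition steps above can be sketched as follows. This is an illustrative sketch, not the paper's exact formulation: the objective `f`, the team sizes, and the search range are placeholder assumptions.

```python
import numpy as np

# Illustrative sketch of leader election and team competition; the objective
# `f`, the team sizes, and the search range are placeholder assumptions.
def f(x):
    return np.sum(x ** 2)  # toy objective to be minimized

def select_leader(team):
    # the member with the best (smallest) objective value becomes leader
    return team[np.argmin([f(member) for member in team])]

def select_dominant(teams):
    # the team whose leader holds the best solution wins the competition
    scores = [f(select_leader(team)) for team in teams]
    return int(np.argmin(scores))

rng = np.random.default_rng(0)
teams = [rng.uniform(-5.0, 5.0, size=(7, 2)) for _ in range(7)]
best_team = select_dominant(teams)
```

Because each team's leader is elected independently, this step is naturally parallelizable across teams, which matches the motivation described above.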

In order to simplify the mathematical model, this paper is based on the following assumptions:

Assumption 7: The team leader's solution is defined as the team's solution.

In this paper,

There are 7 teams in this paper, each containing 7 members. It follows from Assumptions 1 and 5 that members can be abstracted as feasible solutions, and that the members of a team can be specified by a matrix in which each row identifies a member. See
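The member matrix just described can be illustrated with a minimal sketch; the bounds and random seed below are arbitrary assumptions, not values from the paper.

```python
import numpy as np

# Minimal illustration of the member matrix: each team is a 7 x 2 matrix,
# one row per member, and 7 such teams are stacked into a 7 x 7 x 2 array.
# The bounds and seed are arbitrary assumptions.
rng = np.random.default_rng(42)
n_teams, n_members, dim = 7, 7, 2
lower, upper = -10.0, 10.0

teams = rng.uniform(lower, upper, size=(n_teams, n_members, dim))
print(teams[0].shape)  # one team's matrix: (7, 2)
```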

Generally, under the condition of Assumption 5, if the objective function f(x) is to be maximized and its value range is non-negative, then the fitness function
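The cited fitness equation is not reproduced here; the sketch below shows one common construction consistent with the text. It assumes that when f is maximized and non-negative, f itself can serve as the fitness, and that a minimization problem can be mapped through 1 / (1 + f(x)).

```python
# One common fitness construction (an assumption, not the paper's equation):
# when f is maximized and non-negative, f itself serves as the fitness;
# a minimization problem can be mapped through 1 / (1 + f(x)).
def fitness_max(f, x):
    return f(x)                 # maximization with f(x) >= 0

def fitness_min(f, x):
    return 1.0 / (1.0 + f(x))   # larger fitness means a smaller objective

sphere = lambda x: sum(v * v for v in x)
print(fitness_min(sphere, [0.0, 0.0]))  # → 1.0 at the optimum
```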

Ordinary team members have three ways to update their solutions. In this paper, there is a probability

Follow the team leader; the update method is given in

Follow the leader of the dominant team, the

The same update method as

The team leader, in turn, learns from the advantages of all other members, and

In particular, for the first and second update methods of the ordinary members described above, when a feasible solution exceeds the range of the solution space, this paper simply clamps it to the boundary value. After a round of solution updates is completed, a new team leader is selected according to Assumption 6 and
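The boundary handling described above can be sketched with a simple clipping step; the sample vector and bounds are illustrative.

```python
import numpy as np

# Sketch of the boundary handling described above: after an update, any
# coordinate outside the feasible range is clipped to the nearest bound.
# The sample vector and bounds are illustrative.
def clamp(solution, lower, upper):
    return np.clip(solution, lower, upper)

x = np.array([-12.3, 4.7, 15.0])
print(clamp(x, -10.0, 10.0))  # out-of-range entries become -10.0 and 10.0
```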

After all members have updated their solutions, teams randomly search for partners according to Assumption 2. Within the two cooperating teams, all members follow the better solution. The update method is given by

When a feasible solution exceeds the solution space, it is likewise clamped to the boundary value. After a round of collaborative updates between teams, a new team leader is selected according to Assumption 6 and
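A minimal sketch of this inter-team cooperation step is shown below. The paper's exact update equation is not reproduced; the random step toward the better team's leader is an illustrative stand-in, and the objective `f` is a toy assumption.

```python
import numpy as np

# Hedged sketch of inter-team cooperation: the paper's exact update equation
# is not reproduced, so members of the weaker team simply take a random step
# toward the better team's leader.
def f(x):
    return np.sum(x ** 2)  # toy objective used to compare the two teams

def cooperate(team_a, team_b, rng):
    leader_a = team_a[np.argmin([f(m) for m in team_a])]
    leader_b = team_b[np.argmin([f(m) for m in team_b])]
    if f(leader_a) <= f(leader_b):           # team A holds the better solution
        better, follower = leader_a, team_b
    else:
        better, follower = leader_b, team_a
    step = rng.uniform(0.0, 1.0, size=follower.shape)
    follower += step * (better - follower)   # followers drift toward that leader
    return team_a, team_b

rng = np.random.default_rng(1)
a = rng.uniform(-5.0, 5.0, size=(7, 2))
b = rng.uniform(-5.0, 5.0, size=(7, 2))
cooperate(a, b, rng)
```

Since each update is a coordinate-wise move between a member and the better leader, the followers stay inside the original bounds, so no extra clamping is needed in this particular sketch.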

Algorithm: TCCO pseudo-code

Begin
    Randomly group team members and initialize their solutions.
    Select the team leaders and the dominant team.
    While (stop condition is not met):
        For (team in teams):
            Team members update their solutions.
            Select the new team leader.
            Select the new dominant team.
            Randomly find partner teams.
            Start team cooperation.
            Select the new team leader.
            Select the new dominant team.
        End For
    End While
    The dominant team’s solution is the final optimal solution.
End
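The pseudo-code above can be turned into a runnable skeleton. The concrete update rules below (random steps toward the team leader and toward the dominant team's leader) are illustrative stand-ins for the paper's equations, and the team sizes and iteration budget are assumptions.

```python
import numpy as np

# Runnable skeleton of the pseudo-code above. The update rules (random steps
# toward the team leader and the dominant team's leader) are illustrative
# stand-ins for the paper's equations, not the original ones.
def tcco(f, dim, lower, upper, n_teams=7, n_members=7, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    teams = rng.uniform(lower, upper, size=(n_teams, n_members, dim))

    def leader(team):
        return team[np.argmin([f(m) for m in team])]

    for _ in range(iters):
        for t in range(n_teams):
            lead = leader(teams[t])
            # intra-team update: members take a random step toward the leader
            r = rng.uniform(0.0, 1.0, size=teams[t].shape)
            teams[t] = np.clip(teams[t] + r * (lead - teams[t]), lower, upper)
        # inter-team cooperation: every team drifts toward the dominant leader
        dom = leader(min(teams, key=lambda tm: f(leader(tm)))).copy()
        for t in range(n_teams):
            r = rng.uniform(0.0, 0.5, size=teams[t].shape)
            teams[t] = np.clip(teams[t] + r * (dom - teams[t]), lower, upper)
    return leader(min(teams, key=lambda tm: f(leader(tm))))

best = tcco(lambda x: np.sum(x ** 2), dim=2, lower=-10.0, upper=10.0)
```

Note that the leader's own row is unchanged by its team's update (its step is zero), so the best solution found so far is never lost between rounds.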

In this section, the TCCO is benchmarked on 30 classic benchmark functions [

| Parameter \ Algorithm | GOA | SOA | WOA | SSA | TCCO |
|---|---|---|---|---|---|
| l | 1.5 | | | | |
| f | 0.5 | | | | |
| 1 | | | | | |
| 0.00004 | | | | | |
| fc | 2 | | | | |
| a | 2 | | | | |
| b | 1 | | | | |
| 0.2 | | | | | |
| R2 | 0.8 | | | | |
| 0.6 | | | | | |
| 0.3 | | | | | |
| 1 | | | | | |
| 2 | | | | | |

This section will conduct experiments on the first set of eight low-dimensional unimodal test functions to compare the performance of the above five optimization algorithms. The name, expression, minimum value, range and dimension of the functions are shown in

| Function name | Expression | Minimum | Range | d |
|---|---|---|---|---|
| McCormick | | −1.9133 | [−3, 4] | 2 |
| Easom | | −1 | [−100, 100] | 2 |
| Matyas | | 0 | [−10, 10] | 2 |
| Zakharov | | 0 | [−5, 10] | 10 |
| Bohachevsky1 | | 0 | [−100, 100] | 2 |
| Booth | | 0 | [−10, 10] | 2 |
| Bohachevsky2 | | 0 | [−100, 100] | 2 |
| Bohachevsky3 | | 0 | [−100, 100] | 2 |

| | | GOA | SOA | WOA | SSA | TCCO |
|---|---|---|---|---|---|---|
| F1 | Best | −1.913E+00 | −1.913E+00 | −1.913E+00 | | |
| | Worst | −1.913E+00 | −1.721E+00 | −1.913E+00 | −1.913E+00 | |
| | Mean | −1.913E+00 | −1.884E+00 | −1.913E+00 | −1.913E+00 | |
| F2 | Best | −1.000E+00 | −1.000E+00 | −1.000E+00 | −1.000E+00 | |
| | Worst | −1.000E+00 | −8.088E−05 | −1.000E+00 | −1.000E+00 | |
| | Mean | −1.000E+00 | −8.310E−01 | −1.000E+00 | −1.000E+00 | |
| F3 | Best | 1.193E−14 | 0.000E+00 | 3.527E−213 | 0.000E+00 | |
| | Worst | 1.426E−13 | 1.012E−174 | 2.263E−18 | 9.011E−47 | |
| | Mean | 5.726E−14 | 3.374E−176 | 1.373E−19 | 3.932E−48 | |
| F4 | Best | 1.194E−05 | 7.815E−02 | 1.093E−26 | 1.162E−15 | |
| | Worst | 3.023E−05 | 1.709E+02 | 2.493E−16 | 4.030E−12 | |
| | Mean | 2.011E−05 | 3.405E+01 | 9.677E−18 | 3.629E−13 | |
| F5 | Best | 3.064E−10 | | | | |
| | Worst | 2.148E−09 | 4.223E−11 | | | |
| | Mean | 9.757E−10 | 1.408E−12 | | | |
| F6 | Best | 9.943E−13 | 1.645E−06 | 4.489E−09 | 6.558E−07 | |
| | Worst | 5.328E−12 | 2.231E−01 | 5.972E−06 | 1.887E−04 | |
| | Mean | 2.628E−12 | 1.584E−02 | 7.115E−07 | 4.054E−05 | |
| F7 | Best | 3.621E−10 | | | | |
| | Worst | 2.375E−09 | 2.731E−14 | | | |
| | Mean | 1.346E−09 | 9.363E−16 | | | |
| F8 | Best | 1.303E−10 | 6.181E−12 | | | |
| | Worst | 3.472E−10 | 1.820E−06 | 8.116E−14 | | |
| | Mean | 2.065E−10 | 1.160E−07 | 3.364E−15 | | |

This section will conduct experiments on the second set of 12 low-dimensional multimodal test functions to compare the performance of the above five optimization algorithms. The name, expression, minimum value, range and dimension of the function are shown in

| Function name | Expression | Minimum | Range | d |
|---|---|---|---|---|
| Beale | | 0 | [−4.5, 4.5] | 2 |
| Michalewicz2 | | −1.8013 | [0, π] | 2 |
| Michalewicz5 | | −4.6877 | [0, π] | 5 |
| Michalewicz10 | | −9.6602 | [0, π] | 10 |
| Schaffer | | 0 | [−100, 100] | 2 |
| Six_Hump_Camel_Back | | −1.03163 | [−5, 5] | 2 |
| Shubert | | −186.73 | [−10, 10] | 2 |
| Cross_in_tray | | −2.06261 | [−10, 10] | 2 |
| Drop_Wave | | −1 | [−5.12, 5.12] | 2 |
| Eggholder | | −959.647 | [−512, 512] | 2 |
| Goldstein_Price | | 3 | [−2, 2] | 2 |
| Colville | | 0 | [−10, 10] | 4 |

| | | GOA | SOA | WOA | SSA | TCCO |
|---|---|---|---|---|---|---|
| F9 | Best | 8.096E−14 | 1.589E−10 | 2.811E−10 | 1.145E−08 | |
| | Worst | 7.621E−01 | 7.621E−01 | 7.990E−07 | 2.611E−06 | |
| | Mean | 5.080E−01 | 2.286E−01 | 1.413E−07 | 5.153E−07 | |
| F10 | Best | −1.801E+00 | −1.801E+00 | −1.801E+00 | −1.801E+00 | |
| | Worst | −1.000E+00 | −1.000E+00 | −1.313E+00 | −1.801E+00 | |
| | Mean | −1.534E+00 | −1.668E+00 | −1.730E+00 | −1.801E+00 | |
| F11 | Best | −3.550E+00 | −4.585E+00 | −3.504E+00 | −4.631E+00 | |
| | Worst | −2.658E+00 | −2.514E+00 | −1.896E+00 | −3.462E+00 | |
| | Mean | −3.141E+00 | −3.303E+00 | −2.548E+00 | −4.233E+00 | |
| F12 | Best | −6.490E+00 | −5.936E+00 | −4.304E+00 | −8.046E+00 | |
| | Worst | −4.811E+00 | −4.200E+00 | −2.803E+00 | −5.842E+00 | |
| | Mean | −5.898E+00 | −5.109E+00 | −3.602E+00 | −6.670E+00 | |
| F13 | Best | 2.491E−12 | | | | |
| | Worst | 8.427E−11 | 2.495E−13 | 9.716E−03 | | |
| | Mean | 3.148E−11 | 0.000E+00 | 0.000E+00 | 8.330E−15 | 7.449E−03 |
| F14 | Best | −1.032E+00 | −1.032E+00 | −1.032E+00 | −1.032E+00 | |
| | Worst | −1.032E+00 | −1.032E+00 | −1.032E+00 | −1.032E+00 | |
| | Mean | −1.032E+00 | −1.032E+00 | −1.032E+00 | −1.032E+00 | |
| F15 | Best | −1.867E+02 | −1.867E+02 | −1.867E+02 | −1.867E+02 | |
| | Worst | −1.867E+02 | −1.864E+02 | −1.864E+02 | −1.867E+02 | |
| | Mean | −1.867E+02 | −1.867E+02 | −1.867E+02 | −1.867E+02 | |
| F16 | Best | −2.063E+00 | −2.063E+00 | −2.063E+00 | | |
| | Worst | −2.063E+00 | −2.063E+00 | −2.063E+00 | −2.063E+00 | |
| | Mean | −2.063E+00 | −2.063E+00 | −2.063E+00 | −2.063E+00 | |
| F17 | Best | −1.000E+00 | | | | |
| | Worst | −9.362E−01 | −1.000E+00 | −9.362E−01 | | |
| | Mean | −9.575E−01 | −1.000E+00 | −9.957E−01 | | |
| F18 | Best | −7.865E+02 | −9.596E+02 | −9.596E+02 | −9.596E+02 | |
| | Worst | −7.182E+02 | −8.889E+02 | −9.595E+02 | −8.889E+02 | |
| | Mean | −7.410E+02 | −9.181E+02 | −9.596E+02 | −9.554E+02 | |
| F19 | Best | 3.000E+00 | 3.000E+00 | 3.000E+00 | 3.000E+00 | |
| | Worst | 3.000E+00 | 3.271E+01 | 3.000E+00 | 3.000E+00 | |
| | Mean | 3.000E+00 | 9.479E+00 | 3.000E+00 | 3.000E+00 | |
| F20 | Best | 3.000E+00 | 3.000E+00 | 3.000E+00 | 3.000E+00 | |
| | Worst | 3.000E+00 | 3.255E+01 | 3.000E+00 | 3.000E+00 | |
| | Mean | 3.000E+00 | 9.467E+00 | 3.000E+00 | 3.000E+00 | |

This section will conduct experiments on the third group of seven high-dimensional unimodal test functions to compare the performance of the above five optimization algorithms. The name, expression, minimum value, range and dimension of the function are shown in

| Function name | Expression | Minimum | Range | d |
|---|---|---|---|---|
| Step | | 0 | [−5.12, 5.12] | 30 |
| Trid | | −4930 | [−900, 900] | 30 |
| Quartic | | 0 | [−1.28, 1.28] | 30 |
| Schwefel2_22 | | 0 | [−10, 10] | 30 |
| Schwefel1_2 | | 0 | [−100, 100] | 30 |
| Rosenbrock | | 0 | [−30, 30] | 30 |
| Dixon_Price | | 0 | [−10, 10] | 30 |

| | | GOA | SOA | WOA | SSA | TCCO |
|---|---|---|---|---|---|---|
| F21 | Best | 1.251E−04 | 2.565E−09 | 2.521E−01 | 6.127E−05 | |
| | Worst | 6.057E−04 | 7.382E−03 | 1.492E+00 | 6.969E−03 | |
| | Mean | 4.035E−04 | 1.941E−03 | 9.511E−01 | 2.158E−03 | |
| F22 | Best | −3.370E+03 | −1.288E+03 | −1.469E+03 | −4.779E+03 | |
| | Worst | −1.914E+03 | −8.707E+02 | −5.811E+01 | −3.747E+03 | |
| | Mean | −2.523E+03 | −9.554E+02 | −3.827E+02 | −4.343E+03 | |
| F23 | Best | 4.087E−02 | 2.526E−04 | 5.325E−05 | 2.907E−03 | |
| | Worst | 9.642E−02 | 7.706E−03 | 1.839E−03 | 4.602E−01 | |
| | Mean | 6.544E−02 | 2.352E−03 | 6.062E−04 | 1.407E−01 | |
| F24 | Best | 2.428E+00 | 1.276E−17 | 1.289E−81 | 7.975E−40 | |
| | Worst | 3.816E+00 | 1.102E−14 | 3.456E−07 | 2.270E−36 | |
| | Mean | 3.282E+00 | 9.780E−16 | 1.928E−08 | 3.739E−37 | |
| F25 | Best | 6.035E+02 | 9.671E+03 | 0.000E+00 | 4.318E+00 | |
| | Worst | 2.631E+03 | 7.689E+04 | 1.137E−13 | 6.298E+01 | |
| | Mean | 1.680E+03 | 3.966E+04 | 4.532E−15 | 2.167E+01 | |
| F26 | Best | 1.913E+02 | 2.531E+01 | 3.266E−08 | 1.262E−02 | |
| | Worst | 3.933E+02 | 1.588E+00 | 9.306E−01 | 2.147E+01 | |
| | Mean | 2.694E+02 | 2.417E−01 | 1.419E−01 | 1.069E+01 | |
| F27 | Best | 4.807E+00 | 7.686E−07 | 2.376E+01 | 1.278E−01 | |
| | Worst | 8.006E+00 | 1.484E+00 | 1.942E+02 | 5.166E+01 | |
| | Mean | 6.620E+00 | 1.477E−01 | 1.239E+02 | 1.220E+01 | |

This section will conduct experiments on the fourth group of three high-dimensional multimodal test functions to compare the performance of the above five optimization algorithms. The name, expression, minimum value, range and dimension of the function are shown in

| Function name | Expression | Minimum | Range | d |
|---|---|---|---|---|
| Rastrigin | | 0 | [−5.12, 5.12] | 30 |
| Griewank | | 0 | [−600, 600] | 30 |
| Ackley | | 0 | [−32, 32] | 30 |

| | | GOA | SOA | WOA | SSA | TCCO |
|---|---|---|---|---|---|---|
| F28 | Best | 4.688E+01 | | | | |
| | Worst | 1.214E+02 | 5.684E−14 | 1.124E−08 | | |
| | Mean | 8.530E+01 | 1.895E−15 | 3.772E−10 | | |
| F29 | Best | 7.120E−01 | 8.424E−03 | 3.609E+00 | 1.346E−02 | |
| | Worst | 1.029E+00 | 1.029E+00 | 2.341E+01 | 1.017E+00 | |
| | Mean | 8.947E−01 | 5.336E−01 | 1.150E+01 | 8.238E−01 | |
| F30 | Best | 2.748E+00 | 3.952E−14 | 3.997E−15 | | |
| | Worst | 3.582E+00 | 1.611E−09 | 4.682E−07 | 1.465E−14 | |
| | Mean | 3.227E+00 | 5.533E−11 | 3.487E−08 | 7.550E−15 | |

A unimodal function has only one global optimum in the solution space, so this type of function is well suited to evaluating the exploitation ability of a metaheuristic algorithm. Experiments 1 and 3 show that, for both low-dimensional and high-dimensional unimodal functions, TCCO performs well compared with the other algorithms and converges faster.

Unlike a unimodal function, a multimodal function has many local optima in the solution space, and their number grows exponentially with the problem scale. Therefore, this type of test function is often used to evaluate the exploration capability of a metaheuristic algorithm. Experiments 2 and 4 were designed for this purpose. The results show that TCCO outperforms the other algorithms on both low-dimensional and high-dimensional multimodal functions.

Inspired by the competition and cooperation between human teams, this research proposes a new, easily parallelizable metaheuristic optimization algorithm, the Team Competition and Cooperation Optimization algorithm (TCCO). TCCO includes two operators, which respectively simulate solution updates within a team and between teams. This paper conducts detailed experiments on 30 benchmark functions, comparing and analyzing the exploration ability, exploitation ability, and convergence speed of WOA, SSA, SOA, GOA, and TCCO. The results show that, compared with the other metaheuristic algorithms mentioned above, TCCO is strongly competitive.