This paper applies a machine learning technique to construct a general and efficient numerical integration scheme for boundary element methods. A model based on a neural network multi-classification algorithm is constructed to find the minimum number of Gaussian quadrature points satisfying a given accuracy. The model is trained on a large amount of data computed with the traditional boundary element method, and the optimal network architecture is selected. The two-dimensional potential problem of a circular structure is tested and analyzed with the trained model, whose accuracy is about 90%. Finally, by incorporating the predicted numbers of Gaussian quadrature points into the boundary element analysis, we find that the numerical solution and the analytical solution are in good agreement, which verifies the robustness of the proposed method.

The methods for solving partial differential equations (PDEs) are usually classified as analytical and numerical. Encouraged by earlier studies, some innovative analytical methods have been proposed. Under a set of constraint conditions, the generalized auxiliary equation method is employed to obtain many new exact solutions of hyperbolic, trigonometric, exponential, and rational form [

Despite the aforementioned salient features, the BEM is not without shortcomings. Firstly, the BEM is not suitable for non-linear problems because of the lack of a fundamental solution, which is essential for transforming PDEs into BIEs. Hence, the BEM is not as generic as the FEM, but it still plays an important role in concept design. Secondly, the coefficient matrix of the BEM is dense and non-symmetric, so the computational time increases rapidly with the number of degrees of freedom. This process can be accelerated by algorithms such as the Fast Multipole Method (FMM), Adaptive Cross Approximation (ACA), and the fast Fourier transform. Thirdly, the integrand in the BEM contains fundamental solutions with singularities, which cannot be integrated exactly by Gauss–Legendre quadrature. The accuracy of the Gaussian integration can be improved by increasing the number of Gaussian quadrature points, but this also increases the computation time. Hence, it is of great significance to develop a robust, accurate, and efficient numerical integration scheme for the BEM.

With the rapid development of digital technology and computers, machine learning has become a popular topic in computer science. As a subset of machine learning, the artificial neural network (ANN) algorithm, based on the synaptic neuron model [

In this paper, a classification algorithm based on the ANN is used to accelerate the Gaussian quadrature of the BEM. The minimum number of Gaussian quadrature points for a given accuracy is predicted using the ANN. The remainder of this paper is organized as follows.

The softmax regression algorithm is mainly used for multi-classification problems. It is an empirical loss minimization algorithm based on the softmax model and uses the cross-entropy as its objective function. Given a set of training data:

$$\{(x^{(1)}, y^{(1)}), (x^{(2)}, y^{(2)}), \ldots, (x^{(m)}, y^{(m)})\},$$

where $x^{(i)} \in \mathbb{R}^{n}$ is the feature vector of the $i$-th sample and $y^{(i)} \in \{1, 2, \ldots, K\}$ is its class label.

Each class $k$ is associated with a weight vector $w_{k}$.

Once the weights are determined, the probability that a sample $x$ belongs to class $k$ is given by the softmax model:

$$p(y = k \mid x; w) = \frac{\exp(w_{k}^{\top} x)}{\sum_{j=1}^{K} \exp(w_{j}^{\top} x)},$$

where the values of $p(y = k \mid x; w)$, $k = 1, \ldots, K$, form a probability distribution over the classes.

By combining the predicted probabilities with the true labels, the objective function is obtained.

The cross-entropy loss function in $w$ is

$$L(w) = -\frac{1}{m} \sum_{i=1}^{m} \sum_{k=1}^{K} \mathbf{1}\{y^{(i)} = k\} \log p(y^{(i)} = k \mid x^{(i)}; w),$$

where the weights $w$ minimizing $L(w)$ define the trained classifier, and $\mathbf{1}\{\cdot\}$ denotes the indicator function.

Many problems in machine learning can be transformed into optimization problems, which are solved through iteration. The most basic optimization algorithm is the gradient descent algorithm. Its core idea is that each step of the iteration moves in the opposite direction of the gradient of the objective function, finally obtaining the global or a local optimal solution. In this paper, the stochastic gradient descent method was used. In contrast to the batch gradient descent algorithm, only one sample needs to be selected from all the training data to estimate the gradient and update the weights in the stochastic gradient descent algorithm, which greatly reduces the time complexity of the algorithm.
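As a concrete illustration (a minimal sketch, not the paper's code), the following fits a one-parameter least-squares model with stochastic gradient descent, estimating the gradient from a single randomly chosen sample per step:

```python
import random

# Minimal SGD sketch: fit y ~ w * x by minimizing the squared error,
# using one randomly chosen sample per update (stochastic, not batch).
def sgd_fit(xs, ys, lr=0.05, steps=2000, seed=0):
    rng = random.Random(seed)
    w = 0.0
    for _ in range(steps):
        i = rng.randrange(len(xs))            # one sample estimates the gradient
        grad = 2.0 * (w * xs[i] - ys[i]) * xs[i]
        w -= lr * grad                        # step against the gradient
    return w

xs = [0.5, 1.0, 1.5, 2.0]
ys = [1.5, 3.0, 4.5, 6.0]                     # noiseless data from y = 3x
w = sgd_fit(xs, ys)
print(round(w, 2))                            # converges near 3.0
```

Because each update touches only one sample, the per-iteration cost is independent of the training-set size, which is the time-complexity advantage noted above.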

In the $t$-th iteration of the stochastic gradient descent method, one sample $(x^{(i)}, y^{(i)})$ is drawn at random from the training data to estimate the gradient of the loss.

Then, the parameters are updated along the negative gradient direction:

$$w_{t+1} = w_{t} - \eta \nabla_{w} L_{i}(w_{t}),$$

where $\eta$ is the learning rate and $L_{i}$ is the loss evaluated on the selected sample.

Each layer of the neural network is represented by two sets of parameter values. The two sets of parameter values for the $l$-th layer are the weight matrix $W^{(l)}$ and the bias vector $b^{(l)}$.

An input sample $x$ is placed in the input layer as the initial activation:

$$a^{(0)} = x.$$

An affine transformation followed by a nonlinear activation is applied at each hidden layer.

Then, the forward propagation proceeds layer by layer as

$$z^{(l)} = W^{(l)} a^{(l-1)} + b^{(l)}, \qquad a^{(l)} = \sigma(z^{(l)}),$$

where the activation function $\sigma(\cdot)$ (e.g., the sigmoid function) is applied element-wise.

Finally, the output layer is transformed by softmax to obtain the probability values, which are the final output of the classification model. Combined with the real label, the cross-entropy of forward propagation is calculated. The optimization algorithm then uses back propagation to compute the partial derivatives of the cross-entropy loss function with respect to the weights and biases, and the stochastic gradient descent algorithm updates the parameters.
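The forward/backward pass described above can be sketched end to end. The architecture below (two sigmoid hidden layers, a softmax output, toy data, and all layer sizes) is an illustrative assumption, not the paper's configuration:

```python
import numpy as np

# Sketch of a multi-class MLP trained by SGD on the cross-entropy loss.
rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    e = np.exp(z - z.max())                  # shifted for numerical stability
    return e / e.sum()

# Layer sizes: 2 features -> 16 -> 16 -> 3 classes (illustrative, not tuned).
sizes = [2, 16, 16, 3]
W = [rng.normal(0, 0.5, (sizes[i + 1], sizes[i])) for i in range(3)]
b = [np.zeros(sizes[i + 1]) for i in range(3)]

# Toy training set: class = index of the nearest of three anchor points.
anchors = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
X = rng.uniform(-0.2, 1.2, (600, 2))
y = np.argmin(((X[:, None, :] - anchors) ** 2).sum(-1), axis=1)

lr = 0.3
for step in range(20000):
    i = rng.integers(len(X))                 # stochastic: one sample per step
    # Forward propagation.
    a = [X[i]]
    for l in range(3):
        z = W[l] @ a[-1] + b[l]
        a.append(softmax(z) if l == 2 else sigmoid(z))
    # Back propagation of the cross-entropy loss.
    delta = a[-1].copy()
    delta[y[i]] -= 1.0                       # d(loss)/dz for softmax + CE
    for l in reversed(range(3)):
        grad_W = np.outer(delta, a[l])
        prev = (W[l].T @ delta) * a[l] * (1 - a[l]) if l > 0 else None
        W[l] -= lr * grad_W                  # SGD update of weights
        b[l] -= lr * delta                   # SGD update of biases
        delta = prev

# Training accuracy of the toy classifier.
pred = []
for x in X:
    h = x
    for l in range(3):
        z = W[l] @ h + b[l]
        h = softmax(z) if l == 2 else sigmoid(z)
    pred.append(int(np.argmax(h)))
acc = float(np.mean(np.array(pred) == y))
print(f"training accuracy: {acc:.2f}")
```

Note that the backward loop computes the next layer's error signal before overwriting the weights, so each update uses a consistent gradient.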

Under the given boundary conditions, the two-dimensional potential problem governed by the Laplace equation can be formulated as:

$$\nabla^{2} u(x) = 0, \quad x \in \Omega, \qquad u = \bar{u} \ \text{on } \Gamma_{u}, \qquad q = \frac{\partial u}{\partial n} = \bar{q} \ \text{on } \Gamma_{q},$$

where $u$ is the potential, $q$ is its flux, $\Omega$ is the domain with boundary $\Gamma = \Gamma_{u} \cup \Gamma_{q}$, and $n$ is the outward unit normal.

According to the above partial differential equation, the boundary integral equation of the two-dimensional potential problem is:

$$c(x)\, u(x) + \int_{\Gamma} q^{*}(x, y)\, u(y)\, d\Gamma(y) = \int_{\Gamma} u^{*}(x, y)\, q(y)\, d\Gamma(y),$$

in which $u^{*}(x, y)$ and $q^{*}(x, y)$ are the fundamental solution of the two-dimensional potential problem and its normal derivative, respectively:

$$u^{*}(x, y) = \frac{1}{2\pi} \ln\frac{1}{r}, \qquad q^{*}(x, y) = \frac{\partial u^{*}}{\partial n} = -\frac{1}{2\pi r}\frac{\partial r}{\partial n},$$

where $r = |x - y|$ denotes the distance between the source point $x$ and the field point $y$, and $c(x)$ is a coefficient depending on the boundary geometry at $x$ ($c = 1/2$ on a smooth boundary).

After discretizing the BIEs with constant boundary elements, we can obtain the following discretized formulation:

$$c_{i} u^{i} + \sum_{j=1}^{N} u^{j} \int_{\Gamma_{j}} q^{*}(x_{i}, y)\, d\Gamma = \sum_{j=1}^{N} q^{j} \int_{\Gamma_{j}} u^{*}(x_{i}, y)\, d\Gamma,$$

where $u^{j}$ and $q^{j}$ denote the potential value and flux value of the $j$-th element, respectively. Collecting the equations for all collocation points yields the linear system

$$\mathbf{H} \mathbf{u} = \mathbf{G} \mathbf{q},$$

where $\mathbf{H}$ and $\mathbf{G}$ are the coefficient matrices assembled from the element integrals of $q^{*}$ and $u^{*}$, respectively.

Among them, each element integral is evaluated by Gauss–Legendre quadrature:

$$\int_{\Gamma_{j}} f\, d\Gamma \approx \frac{L_{j}}{2} \sum_{k=1}^{g} w_{k}\, f(\xi_{k}),$$

where $\xi_{k}$ and $w_{k}$ are the Gaussian quadrature points and weights, $L_{j}$ is the element length, and $g$ is the number of quadrature points.
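To make the quadrature step concrete, the sketch below (assumed geometry, not the paper's code; the kernel is the 2D fundamental solution u* = ln(1/r)/(2π)) evaluates one off-diagonal G-matrix entry with numpy's Gauss–Legendre rule and measures its relative error against a 15-point reference:

```python
import numpy as np
from numpy.polynomial.legendre import leggauss

# Integrate u*(x, y) = ln(1/r)/(2*pi) over a straight element from p1 to p2
# with n_gauss Gauss-Legendre points, for a source point off the element.
def g_entry(source, p1, p2, n_gauss):
    xi, w = leggauss(n_gauss)                 # points/weights on [-1, 1]
    mid = 0.5 * (p1 + p2)
    half = 0.5 * (p2 - p1)
    pts = mid[None, :] + xi[:, None] * half[None, :]
    r = np.linalg.norm(pts - source[None, :], axis=1)
    jac = np.linalg.norm(half)                # |dGamma/dxi| for a straight element
    return np.sum(w * np.log(1.0 / r)) / (2.0 * np.pi) * jac

p1, p2 = np.array([0.0, 0.0]), np.array([0.1, 0.0])   # element of length 0.1
source = np.array([0.05, 0.02])                       # source near the element
ref = g_entry(source, p1, p2, 15)             # 15 points taken as the reference
errs = {}
for n in (2, 4, 8):
    errs[n] = abs(g_entry(source, p1, p2, n) - ref) / abs(ref)
    print(n, errs[n])
```

For a source point off the element the integrand is regular but peaked, so the error falls rapidly as points are added; the closer the source point, the more points are needed for a given precision.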

In this section, we mainly consider how to construct a model to predict the number of Gaussian quadrature points. The specific process can be divided into two phases, as shown in

In the data preparation phase, multiple independent elements are randomly generated to construct the feature parameters. For a given precision, the minimum number of Gaussian quadrature points satisfying that precision is calculated from the element information and used as the label of the training data. The feature parameters and labels are then combined to form the training data for the learning algorithm.

In the training phase, the data constructed in the preparation phase are introduced into the model for training, and the optimal model is determined by adjusting the network structure. Then, the model is verified by the prediction results on the training data. Finally, a comparison is made between the machine learning prediction and the traditional method in terms of the time to calculate the number of Gaussian quadrature points satisfying the same accuracy.

The data preparation phase provides the training data for machine learning. Here, parameters are defined and analyzed on a single independent constant element of the BEM. Firstly, the element length is defined as $L_{1}$. The right and left endpoints of the element are denoted 1 and 2, respectively. The green point denotes the midpoint of the element. Given a yellow point as the collocation point (source point), we define the distance from the source point to the midpoint of the element as $L_{2}$. The parameters defined above are shown in

The features and labels for training data are defined as follows:

Features: $L_{1}$ and $L_{2}$, where

Labels: the minimum number of Gaussian quadrature points (from 2 to 10) satisfying a given precision.

To obtain the training data, the following steps are required:

Randomly generate $L_{1}$ and $L_{2}$ within the feature range using the BEM program and set the precision $\varepsilon_{1}$.

Compute the element information.

Use the element information generated in Step 2 to calculate the off-diagonal entries of the coefficient matrices by Gaussian integration with 15 Gaussian quadrature points.

Use the element information generated in Step 2 to calculate the off-diagonal entries of the coefficient matrices by Gaussian integration with 2–10 Gaussian quadrature points.

Assemble the values calculated in Steps 3 and 4 to compute the relative error. When the relative error is less than the given precision $\varepsilon_{1}$, the corresponding number of Gaussian quadrature points is the required label.
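The five steps above can be sketched as follows; the element placement (straight, centered at the origin, source point at distance L_2 from the midpoint), the kernel ln(1/r)/(2π), and the tolerance eps1 are illustrative assumptions:

```python
import numpy as np
from numpy.polynomial.legendre import leggauss

# Quadrature of the assumed kernel over an element of length L1, with the
# source point at distance L2 from the element midpoint.
def quad(L1, L2, n_gauss):
    xi, w = leggauss(n_gauss)
    x = 0.5 * L1 * xi                        # element points around the midpoint
    r = np.sqrt(x**2 + L2**2)
    return np.sum(w * np.log(1.0 / r)) / (2.0 * np.pi) * (0.5 * L1)

# Label = smallest g in 2..10 whose value matches the 15-point reference
# within the given precision eps1 (Steps 3-5 above).
def label(L1, L2, eps1=1e-6):
    ref = quad(L1, L2, 15)                   # Step 3: 15-point reference
    for g in range(2, 11):                   # Step 4: try 2..10 points
        if abs(quad(L1, L2, g) - ref) / abs(ref) < eps1:   # Step 5
            return g
    return 10                                # cap at the maximum label

g_far = label(0.05, 0.10)
g_near = label(0.05, 0.02)
print(g_far, g_near)                         # a closer source needs more points
```

Repeating this for randomly generated (L_1, L_2) pairs yields the feature/label pairs used as training data.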

To find the label satisfying the precision, the formula of the relative error is given as follows:

$$e_{G} = \left| \frac{G_{n,g} - G_{n,\max}}{G_{n,\max}} \right|, \qquad e_{H} = \left| \frac{H_{n,g} - H_{n,\max}}{H_{n,\max}} \right|,$$

where $e_{G}$ and $e_{H}$ are the relative errors of the $\mathbf{G}$ and $\mathbf{H}$ matrices, $G_{n,g}$ denotes the coefficient-matrix entry computed with $g$ Gaussian quadrature points, and $G_{n,\max}$ denotes the entry computed with the maximum number (15) of Gaussian quadrature points; $H_{n,g}$ and $H_{n,\max}$ represent the same values for $\mathbf{H}$.

When $e_{G} < \varepsilon_{1}$ and $e_{H} < \varepsilon_{1}$, the corresponding number of Gaussian quadrature points is taken as the label.

In the corresponding figure, $e_{G}$ and $e_{H}$ are plotted against the number of Gaussian quadrature points for a given $\varepsilon_{1}$. In the left figure, five Gaussian quadrature points are required to meet the given precision, whereas in the right figure six Gaussian quadrature points are needed.

To compensate for the randomness of insufficient data, the root mean square error is defined as follows:

$$\mathrm{RMSE} = \sqrt{\frac{1}{N} \sum_{i=1}^{N} e_{i}^{2}},$$

where $e_{i}$ denotes the relative error of the $i$-th sample and $N$ is the number of samples.

Four cases,

In _{1}, which is consistent with the conclusion obtained in

The 10,000 training data generated in the previous step were fed into the neural network model for training. The input layer receives the features, and the output layer outputs the corresponding number of Gaussian quadrature points that meets the given precision.

The structure of the neural network is adjusted to obtain a relatively accurate model. For accuracy verification, a hold-out split is used for cross-validation: of the 10,000 data, 8000 are used as training data and the remaining 2000 for testing. Each network structure is tested 10 times, and the 10 accuracies are averaged to obtain the mean, as presented in
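The evaluation protocol (ten random 8000/2000 hold-out splits, mean test accuracy) can be sketched generically; the `train`/`predict` pair below is a dummy majority-class stand-in for the network, used only to make the sketch runnable:

```python
import random

# Repeated hold-out evaluation: shuffle, split 8000/2000, fit, score,
# and average the test accuracy over `runs` repetitions.
def mean_holdout_accuracy(X, y, train, predict, runs=10, n_train=8000, seed=0):
    rng = random.Random(seed)
    idx = list(range(len(X)))
    accs = []
    for _ in range(runs):
        rng.shuffle(idx)
        tr, te = idx[:n_train], idx[n_train:]
        model = train([X[i] for i in tr], [y[i] for i in tr])
        hits = sum(predict(model, X[i]) == y[i] for i in te)
        accs.append(hits / len(te))
    return sum(accs) / runs

# Dummy model: always predict the most common training label.
def train(X_tr, y_tr):
    return max(set(y_tr), key=y_tr.count)

def predict(model, x):
    return model

X = [[i] for i in range(10000)]
y = [3] * 6000 + [4] * 4000                  # synthetic labels for illustration
acc = mean_holdout_accuracy(X, y, train, predict)
print(round(acc, 3))
```

Averaging over several random splits reduces the variance of the accuracy estimate, which is why each network structure is scored 10 times before comparison.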

As can be observed from

| Number of units in each hidden layer | | | | |
|---|---|---|---|---|
| Hidden 1 | 100 | 100 | 300 | 300 |
| Hidden 2 | 100 | 300 | 100 | 300 |
| Accuracy | 91.76% | 90.665% | 90.525% | |

To observe the prediction effect of the model on the training data, 30 data points with fixed $L_{2}$ and varying $L_{1}$ are selected.

In the corresponding figure, the horizontal axis represents $L_{1}$, and the vertical axis represents the number of Gaussian quadrature points corresponding to the change of $L_{1}$. The blue thick solid line, marked with stars, represents the real number of Gaussian quadrature points (real label), and the other eight lines correspond to the results of eight predictions, respectively. As can be observed, the predictions agree with the real labels over most of the range of $L_{1}$. The prediction results of the other eight lines are only inconsistent with the real value in the $L_{1}$ range of 0.025 to 0.035, which accounts for

Similarly, 30 data points with fixed $L_{1}$ and varying $L_{2}$ are selected.

As shown in the corresponding figure, the horizontal axis represents $L_{2}$. Similar to the previous case, the predictions are inconsistent with the real labels only in the range of $L_{2}$ from 0.2 to 0.3 in

Denote the data preparation time as $T_{1}$ and the model training and debugging time as $T_{2}$; the total time is then $T_{1} + T_{2}$. It can be seen that the time required to determine the optimal model increases linearly as the number of training data increases. In addition, the data preparation time is significantly greater than the training and debugging time of the model.

To verify the accuracy of the model, the two-dimensional potential problem of a circular structure discretized with equal-length and unequal-length elements is taken as an example. The specific application process is presented in

As shown in the figure, the feature parameters $L_{1}$ and $L_{2}$ are first calculated from the element information, and it is checked whether $L_{1}$ and $L_{2}$ fall within the feature range of the training data; otherwise, the radius of the circle is adjusted. The calculated $L_{1}$ and $L_{2}$ are then fed into the optimal model as test data for prediction, and the predicted numbers of Gaussian quadrature points replace those in the traditional calculation of the Gaussian quadrature and the subsequent equation solving. Finally, the numerical and analytical solutions on the boundary are compared.

As mentioned above, to keep the feature parameters $L_{1}$ and $L_{2}$ within the feature range of the training data, the radius of the circle is set to 0.25; the circular structure discretized with equal-length elements is shown in

In

| Number of Gaussian points (Real) \ (Predict) | 2 | 3 | 4 | 5 | 6 | Total |
|---|---|---|---|---|---|---|
| 2 | 200 | 200 | | | | 400 |
| 3 | | 7700 | | | | 7700 |
| 4 | | | 1200 | 200 | | 1400 |
| 5 | | | | 400 | | 400 |
| 6 | | | | | 400 | 400 |

In the confusion matrix, two types of misclassification occur:

The true value is 2, and the predicted value is 3 (200 copies).

The true value is 4, and the predicted value is 5 (200 copies).

Therefore, the prediction accuracy of the model is

It is known that the potential function satisfying the Laplace equation is

It is assumed that the boundary condition

where $x_{1}$ and $y_{1}$ denote the coordinates of the collocation point, and

It is worth noting that the numerical solution $q_{1}$ of the boundary flux can be calculated by importing the predicted numbers of Gaussian quadrature points into the BEM code. Finally, the numerical and analytical solutions of the flux are compared to obtain the relative error, as presented in

| Source point number | Numerical solution $q_{1}$ | Analytical solution | Relative error |
|---|---|---|---|
| 10 | 0.64947196 | 0.64831024 | 0.00179192 |
| 20 | −0.06659779 | −0.066478971 | 0.001787317 |
| 30 | −0.69063127 | −0.68939651 | 0.001791074 |
| 40 | −0.36023543 | −0.3595915 | 0.001790726 |
| 50 | 0.46799392 | 0.46715674 | 0.001792075 |
| 60 | 0.64947196 | 0.64831024 | 0.00179192 |
| 70 | −0.06659779 | −0.066478971 | 0.001787317 |
| 80 | −0.69063127 | −0.68939651 | 0.001791074 |
| 90 | −0.36023543 | −0.3595915 | 0.001790726 |
| 100 | 0.46799392 | 0.46715674 | 0.001792075 |

Here, 10 collocation points are selected on the boundary. As indicated in
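As a quick check on the table's arithmetic, the relative-error column equals |numerical − analytical| / |analytical|; e.g., for source point 10:

```python
# Relative error of the flux at source point 10, from the tabulated values.
num, ana = 0.64947196, 0.64831024
rel = abs(num - ana) / abs(ana)
print(f"{rel:.8f}")   # matches the tabulated relative error 0.00179192
```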

Similarly, to keep the feature parameters $L_{1}$ and $L_{2}$ within the feature range of the training data, a circle with radius 0.25 is discretized with unequal-length constant elements; the discretization is obtained by keeping the central angle subtended by each element within the range from

As depicted in

| Number of Gaussian points (Real) \ (Predict) | 2 | 3 | 4 | 5 | 6 | Total |
|---|---|---|---|---|---|---|
| 2 | 0 | | | | | 0 |
| 3 | 23 | 1353 | | | | 1376 |
| 4 | 38 | 95 | 485 | | | 618 |
| 5 | | | 9 | 145 | | 154 |
| 6 | | | | | 179 | 179 |

In

It is also known that the potential function satisfying the Laplace equation is:

It is assumed that the boundary condition

where $x_{2}$ and $y_{2}$ denote the coordinates of the collocation point, and

In addition, the numerical solution $q_{2}$ can be obtained by importing the predicted numbers of Gaussian quadrature points into the BEM code. The relative error between the numerical and analytical solutions of the flux is presented in

| Source point number | Numerical solution $q_{2}$ | Analytical solution | Relative error |
|---|---|---|---|
| 1 | 0.551674140 | 0.5406446200 | 0.020400684 |
| 6 | 0.583586340 | 0.5805891200 | 0.005162377 |
| 11 | −0.344400090 | −0.3414192800 | 0.008730643 |
| 16 | −0.665824240 | −0.6611589200 | 0.007056276 |
| 21 | −0.033422764 | −0.0330742800 | 0.010536405 |
| 26 | 0.657508880 | 0.653487190 | 0.006154199 |
| 31 | 0.408340670 | 0.4015636100 | 0.016876679 |
| 36 | −0.493518620 | −0.4835520400 | 0.020611184 |
| 41 | −0.609798190 | −0.6060351900 | 0.00620921 |
| 46 | 0.249349350 | 0.2464910800 | 0.011595835 |

Similarly, as can be observed in

In addition to accuracy, efficiency is another important criterion for the proposed method. Here, the radius of the circle is changed so that the discrete feature parameters $L_{1}$ and $L_{2}$ are all within the feature range of the training data. Afterward, the minimum number of Gaussian quadrature points satisfying $\varepsilon_{1}$ can be predicted by feeding the feature parameters into the trained model.

It is worth noting that the number of Gaussian quadrature points only affects the calculation process of the coefficient matrix.

As can be seen from

In this paper, the multi-classification algorithm of a neural network is used to predict the minimum number of Gaussian quadrature points that meets the given precision in the Gaussian integral calculation of the BEM. The accuracy of the prediction can reach approximately 90%.