FACTA UNIVERSITATIS Series: Mechanical Engineering https://doi.org/10.22190/FUME220829004W © 2020 by University of Niš, Serbia | Creative Commons License: CC BY-NC-ND Original scientific paper

AN IMPROVED BARE-BONES PARTICLE SWARM ALGORITHM FOR MULTI-OBJECTIVE OPTIMIZATION WITH APPLICATION TO THE ENGINEERING STRUCTURES

Zhaohua Wang1, Guobiao Yang1, Yinxu Sun2, Yongxin Li2, Fenghe Wu2
1School of Mechanical Engineering, Taiyuan University of Science and Technology, Taiyuan, China
2School of Mechanical Engineering, Yanshan University, Qinhuangdao, China

Abstract. In this paper, an improved bare-bones multi-objective particle swarm algorithm is proposed to solve multi-objective size optimization problems with non-linearity and constraints in structural design and optimization. Firstly, the exploitation around each particle's individual guide and the randomness of the gravity factor are increased by modifying the particle position update. Then, a combination of spatial grid density and crowding distance ranking is used to maintain the external archive, which is divided into two parts: a feasible solution set and an infeasible solution set. Next, the global best positions are determined by a time-varying probability allocation strategy. The algorithmic complexity is given, and the solution ability, convergence and constraint handling of the algorithm are analyzed on standard test functions and compared with other algorithms. Finally, as a case study, a support frame of a triangle track wheel is optimized by BB-MOPSO and the improved BB-MOPSO. The results show that the improved algorithm enhances the cross-region exploration, optimal solution distribution and convergence of the bare-bones particle swarm optimization algorithm, and can effectively solve multi-objective size optimization problems with non-linearity and constraints.

Key words: Optimization, Multi-objective, Particle swarm, BB-MOPSO

Received: August 29, 2022 / Accepted: January 16, 2023
Corresponding author: Fenghe Wu
School of Mechanical Engineering, Yanshan University, No 438, West of Hebei Avenue, Qinhuangdao, China, 066004. E-mail: risingwu@ysu.edu.cn

1. INTRODUCTION

At present, engineering structures are developing towards high reliability and lightweight design. In the design process, the stiffness, strength and vibration characteristics of the structure must be taken into account at the same time [1, 2]. Therefore, the multi-objective optimization problem (MOP) in structural design is becoming more and more important [3]. In the conceptual design stage, the topology configuration of the structure is determined by topology optimization methods [4]. Then, in the detailed design stage, the local size parameters are further optimized to enhance the mechanical performance. Size optimization needs to consider many mechanical performances of the structure at the same time, including stiffness, strength, fatigue life, etc. Local extrema and non-convergence often occur in the calculation process, especially in stress-based optimization, which is a typical nonlinear MOP with constraints. With the increasing complexity of multi-objective optimization problems, more and more intelligent algorithms are applied to size optimization. Bekdas et al. [5] and Assimi et al. [6] improved different intelligent algorithms to optimize truss sizes, with good results. A new collision box was designed and optimized using an archive-based micro-genetic algorithm and an improved non-dominated sorting genetic algorithm, which improved its energy absorption characteristics and overall crashworthiness [7]. Although many achievements have been made in applying intelligent algorithms to size optimization, local extrema and non-convergence remain problems when solving with multiple constraints.
Evolutionary and genetic algorithms [8, 9] have a special evaluation mechanism and have gradually matured over several generations of development, but their local search ability is poor, and the many parameters that must be selected affect the quality of the solution. The simulated annealing method [10, 11] draws on the jump in solid properties during annealing heat treatment and is a good global optimization method, but it has low efficiency and slow convergence. The ant colony algorithm [12, 13] is a parallel algorithm with fast convergence and obvious advantages in complex combinatorial optimization problems, but it is prone to premature convergence. Inspired by the predation behavior of bird flocks, Particle Swarm Optimization (PSO) [14, 15] is a random iterative parallel algorithm that is simple and easy to implement, has strong global search ability for nonlinear multi-peak problems, and has obvious advantages in solving multi-objective size optimization problems. However, the basic algorithm is limited to unconstrained optimization problems and neglects general multi-objective optimization problems with constraints. Since PSO cannot handle constraints directly, it is often necessary to transform constrained problems into unconstrained ones. An improved multi-objective particle swarm optimization algorithm (MOPSO) based on the concept of constraint domination is proposed in [16], which perturbs particles with small probability to improve the diversity of the algorithm. Mohamad et al. [17] proposed a MOPSO with good convergence and constraint handling ability for high-dimensional problems, to solve complex engineering problems with many optimization variables. Li et al.
[18] proposed an adaptive particle swarm optimization algorithm with different learning strategies to solve the path planning problem of mobile robots under different types of constraints in complex environments. Xu et al. [19] proposed an improved adaptive weighted PSO and applied it to the multi-objective optimization design of planetary gears, which addressed the tendency of PSO not to converge, or to fall into local optima, under complex constraints. It can be seen that many scholars have done extensive research on PSO for MOPs and constrained problems [20], but the above algorithms need to set parameters such as learning factors and inertia weights, which affects the computational efficiency of PSO. The bare-bones particle swarm optimization algorithm (BB-PSO), which updates particle positions by Gaussian sampling around the personal best and global best positions, was first proposed by Kennedy [21]; it has the advantage of not requiring inertia weights, learning factors or other control parameters. Zhang et al. [22] first extended BB-PSO to multi-objective optimization and solved constrained multi-objective economic/environmental scheduling problems. However, the bare-bones multi-objective particle swarm optimization algorithm (BB-MOPSO) focuses more on globality in the particle update and in the selection of global best positions, which strengthens its global optimization ability but correspondingly reduces its local exploitation ability, boundary search ability and particle diversity. Therefore, it needs further improvement for problems with many local extrema, and with optimal solutions that may lie on the boundary, as arise in nonlinear multi-objective optimization with constraints.
Based on the above analysis, this paper proposes an improved BB-MOPSO to solve multi-objective size optimization problems with non-linearity and constraints. The MOP and BB-MOPSO are introduced in Sections 2 and 3. The improved BB-MOPSO is proposed in Section 4. The algorithmic complexity is given, and the solution ability, convergence and constraint handling are analyzed in Section 5. As a case study, a support frame of a triangle track wheel is optimized in Section 6.

2. MULTI-OBJECTIVE OPTIMIZATION PROBLEM

A classical multi-objective optimization problem can be defined as finding a decision variable \(x^* = [x_1^*, x_2^*, \ldots, x_n^*]\) that satisfies the following conditions:

Decision space: \( x_i^{\min} \le x_i \le x_i^{\max}, \quad i = 1, 2, \ldots, n \). (1)

Equality constraints: \( h_j(x) = 0, \quad j = 1, 2, \ldots, J \). (2)

Inequality constraints: \( g_k(x) \le 0, \quad k = 1, 2, \ldots, K \). (3)

Minimized objective function: \( f(x) = [f_1(x), f_2(x), \ldots, f_m(x)] \). (4)

Function f(x) is called the objective function. Each component of the decision variable x is bounded by the two values \(x_i^{\min}\) and \(x_i^{\max}\). The bounds of all components of the decision variable constitute the decision space of the multi-objective optimization, and the output values of the m objective functions constitute its objective space. The decision variables that satisfy all constraints constitute the feasible region of the multi-objective optimization, and a solution in the feasible region that satisfies Eq. (4) is called an optimal solution. Unlike a single-objective optimization problem, a MOP contains several conflicting objective functions, and it is difficult for every objective to reach its optimum at the same time. Therefore, its solution becomes a solution set that may contain infinitely many elements.
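To make the formulation of Eqs. (1)–(4) concrete, the following minimal Python sketch encodes a two-variable, two-objective problem with one inequality constraint; the specific functions and bounds are illustrative assumptions, not taken from the paper.

```python
# A minimal constrained MOP in the form of Eqs. (1)-(4); the concrete
# objective and constraint functions here are illustrative placeholders.

def objectives(x):
    """f(x) = [f1(x), f2(x)] to be minimized (Eq. 4)."""
    f1 = x[0] ** 2 + x[1] ** 2
    f2 = (x[0] - 1.0) ** 2 + x[1] ** 2
    return [f1, f2]

def inequality_constraints(x):
    """g_k(x) <= 0 (Eq. 3): a single illustrative ring constraint."""
    return [0.25 - (x[0] ** 2 + x[1] ** 2)]

BOUNDS = [(-2.0, 2.0), (-2.0, 2.0)]  # decision space of Eq. (1)

def violation(x):
    """Constraint violation degree: 0 means x is feasible."""
    v = sum(max(0.0, g) for g in inequality_constraints(x))
    for xi, (lo, hi) in zip(x, BOUNDS):
        v += max(0.0, lo - xi) + max(0.0, xi - hi)
    return v

print(objectives([1.0, 0.0]), violation([1.0, 0.0]))
```

The `violation` value is the quantity referred to later as the constraint violation degree: zero for feasible particles, positive otherwise.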
Usually, we look for the effective solutions of this set, also known as Pareto solutions, defined as follows: if there is no \(x \in X\) such that \(f(x) \preceq f(x^*)\) with \(f(x) \neq f(x^*)\), then \(x^*\) is an effective solution of the MOP.

3. BB-MOPSO

In PSO, each particle represents a solution of the optimized problem. The position of the i-th particle in the n-dimensional space is represented as xi = [xi,1, xi,2, …, xi,n], and its velocity as vi = [vi,1, vi,2, …, vi,n]. Each particle has a fitness value determined by the optimization function; at iteration t, the personal best position of particle i is pi = [pi,1, pi,2, …, pi,n] and the global best position of the swarm is g = [g1, g2, …, gn]. At the next iteration, the particle determines its next motion state according to its own experience and the experience of the particles in the same neighborhood, until the iterations end. The velocity update equation is

\( v_{i,j}(t+1) = w v_{i,j}(t) + c_1 r_1 (p_{i,j}(t) - x_{i,j}(t)) + c_2 r_2 (g_j(t) - x_{i,j}(t)) \), (5)

where w is the inertia weight, c1 and c2 are learning factors, r1 and r2 are random numbers uniformly distributed on U(0,1), j = 1, 2, …, n, and i = 1, 2, …, N, with N the number of particles. The particle position is

\( x_{i,j}(t+1) = x_{i,j}(t) + v_{i,j}(t+1) \). (6)

BB-MOPSO has the same optimization idea as PSO, but it differs from traditional PSO. On the one hand, the control parameters such as the inertia weight and learning factors are removed, overcoming the excessive parameter dependence of traditional PSO: the particle position update is realized by Gaussian sampling (a normal distribution) around the personal best and global best positions.
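The dominance relation and the classical updates of Eqs. (5) and (6) can be sketched as follows; the values of w, c1 and c2 are typical illustrative choices, not prescribed by the paper.

```python
import random

# Pareto dominance: fx dominates fy if it is no worse in every objective
# and strictly better in at least one (all objectives minimized).
def dominates(fx, fy):
    return all(a <= b for a, b in zip(fx, fy)) and any(a < b for a, b in zip(fx, fy))

# Classical PSO updates of Eqs. (5) and (6). The parameters w, c1 and c2
# must be tuned by hand -- the dependence that the bare-bones variant removes.
def pso_update(x, v, p, g, w=0.7, c1=1.5, c2=1.5):
    x_new, v_new = [], []
    for j in range(len(x)):
        r1, r2 = random.random(), random.random()
        vj = w * v[j] + c1 * r1 * (p[j] - x[j]) + c2 * r2 * (g[j] - x[j])  # Eq. (5)
        v_new.append(vj)
        x_new.append(x[j] + vj)  # Eq. (6)
    return x_new, v_new

print(dominates([1.0, 2.0], [2.0, 2.0]))  # True: no worse everywhere, better in f1
```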
The mathematical model of BB-MOPSO [22] is as follows:

\( x_{i,j}(t+1) = \begin{cases} N\left(\dfrac{r_1 p_{i,j}(t) + (1 - r_1) g_j(t)}{2},\ |p_{i,j}(t) - g_j(t)|\right), & U(0,1) < 0.5 \\ p_{i,j}(t), & \text{otherwise} \end{cases} \) (7)

On the other hand, PSO is a single-objective optimization method with a definite optimal solution, while BB-MOPSO is a multi-objective optimization method. Owing to the non-dominance relation among particles, the algorithm may obtain more than one set of optimal solutions (non-inferior solutions) after the iterations. Therefore, for each particle there will be more than one candidate point when updating the global best positions, and these candidate points do not dominate each other. All candidate points are stored in a set separate from the particle swarm, which is the external archive mentioned above. When a particle position needs to be updated, the global best position of the particle is selected from the external archive, and the external archive elements are also submitted to the decision maker as the final result of the algorithm. The flow of the algorithm is shown in Fig. 1, where Tmax is the maximum number of iterations.

Fig. 1 Flow chart of BB-MOPSO

4. IMPROVED BB-MOPSO

Multi-objective size optimization is a nonlinear multi-objective optimization problem with constraints. There are many local extrema, the Pareto solutions may lie on the boundary, and the constraint relationships are complex. Although BB-MOPSO has some advantages for such problems, the algorithm focuses more on globality in the particle update and in the selection of global best positions, which reduces its boundary and cross-region search ability and particle diversity. Therefore, the algorithm has difficulty finding the global optimal solution when dealing with multi-objective size optimization problems.
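A minimal sketch of the bare-bones update of Eq. (7), with `random.gauss` standing in for the Gaussian sampling N(·, ·):

```python
import random

# Bare-bones update of Eq. (7): each component is either Gaussian-sampled
# around a point between the personal best p and the global best g, or
# copied from p; no inertia weight or learning factors are needed.
def bb_update(p, g):
    x_new = []
    for j in range(len(p)):
        if random.random() < 0.5:
            r1 = random.random()
            mean = (r1 * p[j] + (1.0 - r1) * g[j]) / 2.0
            std = abs(p[j] - g[j])
            x_new.append(random.gauss(mean, std))
        else:
            x_new.append(p[j])
    return x_new

print(bb_update([0.0, 1.0], [1.0, 0.0]))
```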
In this paper, BB-MOPSO is improved in three aspects, namely the particle update mode, the maintenance strategy of the external archive and the selection of global best positions, to increase the algorithm's boundary and cross-region search ability and particle diversity. The calculation process of the new algorithm is shown in Fig. 2, and the improvement of each part of the algorithm is introduced in detail in Sections 4.1–4.3.

4.1 Particle Position Update Approach

The particle position update of BB-MOPSO (Eq. 7) is biased towards the global best positions. But in size optimization problems the optimal solution is likely to appear near the constraint boundary, especially in stress-based optimization. Therefore, in the particle update it is necessary to increase the local exploitation of particles. In this paper, the exploitation around the individual particle guides and the randomness of the gravitational factors are increased. The new particle position update is shown in Eq. (8):

\( x_{i,j}(t+1) = \begin{cases} N\left(\dfrac{r_1 p_{i,j}(t) + r_2 g_j(t)}{2},\ |p_{i,j}(t) - g_j(t)|\right), & U(0,1) < 0.5 \\ p_{i,j}(t), & \text{otherwise} \end{cases} \) (8)

where r1 and r2 are random numbers in the range 0~1.

Fig. 2 Flow chart of improved BB-MOPSO

The new update mode gives the particle a 50% probability of keeping the relevant component of the current personal best particle in each iteration. At the same time, the random numbers r1 and r2 improve the possibility of particles entering another feasible solution region, which expands the particle search range and theoretically increases the boundary and cross-region search ability of the algorithm. The updated particle xi(t+1) needs to be compared with the historical optimal particle pi(t) to select the personal best position.
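A minimal sketch of the modified update of Eq. (8); the only change from Eq. (7) is that the weights r1 and r2 on the personal and global guides are drawn independently:

```python
import random

# Improved update of Eq. (8): r1 and r2 are drawn independently, so the
# sampling centre is no longer tied to a single random number as in
# Eq. (7); combined with the 50% chance of keeping the personal-best
# component, this widens the search region.
def improved_bb_update(p, g):
    x_new = []
    for j in range(len(p)):
        if random.random() < 0.5:
            r1, r2 = random.random(), random.random()
            mean = (r1 * p[j] + r2 * g[j]) / 2.0
            std = abs(p[j] - g[j])
            x_new.append(random.gauss(mean, std))
        else:
            x_new.append(p[j])  # exploit the individual guide
    return x_new

print(improved_bb_update([0.0, 1.0], [1.0, 0.0]))
```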
The principle of selection is as follows: if xi(t+1) dominates pi(t), then pi(t+1) = xi(t+1); if xi(t+1) and pi(t) do not dominate each other, either one can be selected as pi(t+1), which increases the probability of "excellent particles" entering the next generation; otherwise, pi(t+1) = pi(t).

4.2 Maintenance Strategy of the External Archive

In BB-MOPSO, the constraint dominance relation is constructed by direct comparison, which easily falls into local optima in multi-island problems. To reduce the possibility of local optima, the external archive is divided into two parts: a feasible solution set and an infeasible solution set. In this way, the exploration of isolated regions and the global capability of the algorithm are enhanced [23]. After each update, the feasible solution set is updated first, i.e., the particle set with constraint violation degree 0. A new feasible solution set is composed of the newly obtained particles and the particles in the original feasible solution set. The Pareto dominance relationships are analysed, and the mutually non-dominated particles are kept. To ensure the diversity of learning particles, the original BB-MOPSO uses crowding distance sorting [24] as the external archive maintenance strategy. This requires calculating the Euclidean distance of each particle relative to all other particles, which has a large computational complexity. In this paper, a combination of a spatial grid and crowding distance sorting is used. The objective space is first divided into M1×M2×…×Mk grids, where k is the objective dimension and Mi is the number of grids in dimension i. The grid position of each particle is calculated to determine the grid density, which reduces the computational complexity and maintains the distribution of the optimal solutions over the global range. The grid density can be calculated by the following method.
The upper bound \(F_i^{\max}\) and lower bound \(F_i^{\min}\) of the objective space are obtained by analysing the range of values of each dimension. The width of the i-th dimension of each grid is

\( d_i = (F_i^{\max} - F_i^{\min}) / M_i \). (9)

Suppose the coordinate of a particle in the objective space is s = [s1, s2, …, sm]; then the particle position in each dimension of the objective space is

\( L_i = \mathrm{fix}(s_i - F_i^{\min},\ d_i) + 1 \), (10)

where fix(a, b) is a rounding function (the integer part of a/b). Each grid is numbered, and particles are placed into different grids according to their position values in each dimension. The number of particles put into a grid is recorded as N(j, t), which indicates the number of particles in the region where the j-th particle is located at time t. Suppose the maximum theoretical capacity of a grid is Npi; the density is

\( \rho_i^t = N(j, t) / N_{pi} \). (11)

When the total number of feasible solutions in a certain grid is larger than the grid capacity, the crowding distance of each particle is calculated, and the Npi particles with the larger distances are selected as the new external archive particles, to ensure the distribution of the solutions. The crowding distance can be calculated by the following method. The objective values of each dimension are sorted from small to large, giving the objective value sequence [fi,1, fi,2, …, fi,p, …, fi,sn]; then the crowding distance of an objective value is

\( dy_{i,p} = (f_{i,p+1} - f_{i,p-1}) / (f_i^{\max} - f_i^{\min}) \), (12)

where \(f_i^{\max}\) and \(f_i^{\min}\) represent the maximum and minimum values of the i-th objective in the current archive, dyi,p is the crowding distance of the p-th particle in the i-th objective dimension, and sn is the size of the particle swarm. For the update of the infeasible solution set (the particle set with constraint violation degree greater than 0), the Pareto dominance relationship analysis is first performed to obtain the mutually non-dominated particles.
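The grid placement of Eqs. (9)–(11) and the per-dimension crowding distance of Eq. (12) can be sketched as follows (a minimal sketch, assuming the values passed to `crowding_distances` are already sorted and not all equal):

```python
# Grid placement of Eqs. (9)-(11) and per-dimension crowding distance
# of Eq. (12).
def grid_index(s, f_min, f_max, m):
    """Grid coordinates L_i of a point s in objective space (Eq. 10)."""
    idx = []
    for si, lo, hi in zip(s, f_min, f_max):
        d = (hi - lo) / m                    # Eq. (9): grid width per dimension
        idx.append(int((si - lo) // d) + 1)  # fix(s_i - F_i_min, d_i) + 1
    return tuple(idx)

def grid_density(count_in_grid, capacity):
    """Eq. (11): occupancy of a grid relative to its capacity N_pi."""
    return count_in_grid / capacity

def crowding_distances(values):
    """Eq. (12) for one objective dimension; `values` must be sorted.
    Boundary points are kept by giving them infinite distance."""
    n = len(values)
    f_min, f_max = values[0], values[-1]
    dist = [float('inf')] * n
    for p in range(1, n - 1):
        dist[p] = (values[p + 1] - values[p - 1]) / (f_max - f_min)
    return dist

print(grid_index([0.42, 0.77], [0.0, 0.0], [1.0, 1.0], 10))
print(crowding_distances([0.0, 0.2, 0.5, 1.0]))
```

Compared with all-pairs Euclidean distances, a particle's grid cell is computed in O(k) per particle, which is the complexity reduction the text refers to.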
Then, the adaptive grid technique is used to divide the particles. For a grid exceeding its capacity, instead of crowding distance sorting, Npi particles are randomly selected and re-placed into the grid. Although this approach may reduce the diversity of the infeasible solution set, it still preserves the ability of the algorithm to explore unknown feasible regions while reducing the computational complexity.

4.3 Global Best Positions Selection

Whether the selection of the global best positions is reasonable determines the ability of the algorithm to converge accurately to the Pareto front. In BB-MOPSO, the global best position for each particle is selected by tournament selection with scale 2 based on the crowding distance value: the larger the crowding distance, the more likely a candidate is selected as the global best position. In this paper, a dynamic probability selection method is used instead: the global best positions are selected from the infeasible solution set and the feasible solution set with probabilities psl and 1−psl, respectively, where

\( p_{sl} = p_{sl1} - p_{sl2}\, t / T \), (13)

psl1 and psl2 are dynamic probability constants with psl1 = 0.7 and psl2 = 0.6, t is the current iteration number, and T is the total number of iterations. In this way, the algorithm retains sufficient global search ability: the diversity of learning particles in the early stage is improved, and convergence in the later stage is accelerated. After determining the source set, a probability selection equation (Eq. 14) is constructed based on the density values of each grid, so that values with small density have a high probability of being selected and values with large density a low one. The global best positions selected in this way have good distribution and convergence.
\( P_i^t = \dfrac{1 / \rho_i^t}{\sum_{i=1}^{ks} (1 / \rho_i^t)} \), (14)

where \(P_i^t\) is the final selection probability and ks is the total number of grids.

4.4 Analysis of Algorithmic Complexity

The computational cost of the improved BB-MOPSO is concentrated mainly on the update of the archive sets. Suppose an optimization problem has M objectives and K constraints, the numbers of feasible and infeasible solutions in the new particle swarm are Np and Nq, the feasible archive capacity is Na, and the infeasible archive capacity is Nb. Firstly, the update process of the feasible archive set is analysed. The size of the new population is Np+Na, and M×(Np+Na) comparisons are needed to judge the dominance relation of one particle. Considering the worst case (no particle in the population is dominated by another), it takes M×(Np+Na)² comparisons to select all non-inferior solutions from the Np+Na particles. On the other hand, the computational complexity of using the crowding distance to maintain the archive set is O(M×(Np+Na)×log(Np+Na)) and log(Np+Na)
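The two-stage global best selection of Eqs. (13) and (14) can be sketched as follows; `pick_grid` is a hypothetical helper name, and the `densities` list stands for the grid densities ρ of Eq. (11):

```python
import random

# Two-stage global best selection: Eq. (13) chooses which archive to draw
# from (infeasible with probability p_sl, feasible otherwise), and
# Eq. (14) converts grid densities into selection probabilities that
# favour sparsely occupied grids.
PSL1, PSL2 = 0.7, 0.6  # dynamic probability constants from the text

def p_sl(t, t_max):
    """Eq. (13): probability of selecting from the infeasible set at iteration t."""
    return PSL1 - PSL2 * t / t_max

def grid_probabilities(densities):
    """Eq. (14): normalized inverse-density probabilities P_i^t."""
    inv = [1.0 / rho for rho in densities]
    total = sum(inv)
    return [v / total for v in inv]

def pick_grid(densities):
    """Roulette-wheel choice of a grid index according to Eq. (14)."""
    return random.choices(range(len(densities)), weights=grid_probabilities(densities))[0]

print(grid_probabilities([0.2, 0.4, 0.4]), p_sl(0, 100))
```

With these constants, p_sl falls from 0.7 at the first iteration towards 0.1 at the last, shifting selection from the infeasible set (exploration) to the feasible set (convergence).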