Mathematical Problems of Computer Science 44, 85--92, 2015.

Combined Digital Methods for Solving Optimal Resource Allocation Problems

Hasmik S. Derdzyan
Vanadzor State University
e-mail: jaslinna@mail.ru

Abstract

This article is devoted to the development of new effective methods for solving optimal resource allocation problems. The simulated annealing and genetic methods of numerical optimization are widely used for solving such problems. Although these methods approach the optimal solution of the problem, they usually require a long time to obtain an exact solution. This article proposes combining the simulated annealing method with a modification of the downhill simplex method to speed up the convergence of the optimization.

Keywords: Simulated annealing, Downhill simplex, Modification, Allocation, Resources.

1. Introduction

The optimal usage of industrial, technological, financial and other resources is one of the important preconditions for the further development of society. The equal allocation of resources over the period of their usage is a special kind of optimal resource distribution. Such problems arise during the parallel implementation of many industrial and technological processes, as well as in construction engineering, financial investment, labor force distribution and so on. In these fields it is often required to maintain a stable number of employees, provide even funding over a certain period of time, etc. Usually these are problems of numerical optimization. According to the analysis of recent scientific articles, researchers prefer meta-heuristic methods, particularly the genetic, simulated annealing and related methods, for solving such problems [1, 2].
These methods approach the neighborhood of the global minimum; however, they require a great number of iterations to obtain the exact global minimum of the objective function [3, 4]. That is why many researchers settle for an approximate solution of such optimization problems. It is therefore necessary to develop new high-performance and efficient methods and algorithms for finding optimal solutions to problems of equal distribution of resources. In this article, a modification of the downhill simplex method is combined with the simulated annealing method to speed up the search for the optimal solution of these problems.

2. Simulated Annealing Method

The simulated annealing method is a meta-heuristic method for finding the minimal value of a given objective function f(x), where x ∈ R^n. The method makes it possible to approach the global minimum of the function in the presence of several local minima. It is based on the crystallization processes of slowly cooling thermodynamic systems, such as metals [5]. During a gradual decrease of temperature, a thermodynamic system shifts from a higher energy state to a lower energy state and settles into a more stable structure. Sometimes the system shifts towards a higher energy state, which results in a less stable metal structure; the system then changes its energy state again and reaches a more stable condition. By analogy with thermodynamic systems, the simulated annealing method does not move exclusively towards smaller values of the objective function during minimization. Depending on some probability, the method sometimes allows moves in the direction of increasing function values. To determine the optimal value of the objective function f(x), the method performs the following steps.
First, a point x_cur ∈ R^n is randomly selected from the search space of the function f(x), and an initial temperature T is determined. The temperature T is then gradually decreased while it remains higher than a given minimal temperature T_min > 0. For each value of T the following cycle is performed. A new point x_new ∈ R^n is chosen from the search space of the objective function, and the difference Δf = f(x_new) − f(x_cur) is determined.

1. When Δf < 0, in the sense of minimization, the new value of f(x) is "better" than the current value. In this case the new point x_new is unconditionally accepted as the current point of the search space of the objective function.
2. When Δf > 0, the objective function worsens at the new point. In this case the new point x_new is accepted with probability p = e^(−Δf/kT), despite the fact that the function takes a higher value at that point (k is the Boltzmann constant). This step allows wider areas to be explored.

Finally, the point at which the objective function takes its minimal value is reported.

Simulated Annealing Algorithm (Algorithm 1)

A. Initialization
Step 1.1. Generate a random point x_cur ∈ R^n from the search space of the objective function f(x), x ∈ R^n. Take x_opt as the best point found so far and initialize it with x_cur: x_opt := x_cur.
Step 1.2. Select the minimal temperature T_min > 0, the initial temperature T > 0 and the value of the parameter h ∈ (0; 1). In [3] the following values are given: T_min = 10^−4, T = |f(x_cur) / ln(0.9)|, h = 0.95.

B. Main Cycle
While T > T_min, repeat steps 2.1–2.5.
Step 2.1. Generate a new random point x_new ∈ R^n from the search space of the function f(x).
Step 2.2. Determine the difference of the values of f(x) at the points x_new and x_cur: Δf := f(x_new) − f(x_cur).
Step 2.3.
If Δf ≤ 0, accept the point x_new as the current point of the search space: x_cur := x_new. Moreover, if f(x_new) < f(x_opt), accept x_new as the best point found so far: x_opt := x_new. Go to step 2.5.
Step 2.4. If Δf > 0, decide by probability whether or not to accept the new, worse point as the current point of the search space. Do steps 2.4.1–2.4.3.
2.4.1. Generate a random number r ∈ [0; 1].
2.4.2. Calculate the probability p = e^(−Δf/kT), where k = 1.380650524 · 10^−23.
2.4.3. If r ≤ p, accept x_new as the current point: x_cur := x_new.
Step 2.5. T := h · T (decrease the current temperature of the system).

C. Final Stage
Step 3.1. Report the vector x_opt and the value f(x_opt) as the best value of the objective function. Terminate.

In this case there is a high probability that the point x_opt is quite close to the global minimum of the objective function. To get exact results, it is necessary to take a higher initial temperature T and a lower T_min, which significantly increases the number of calculations. To overcome this disadvantage, the simulated annealing method is combined with the downhill simplex method in [3]. In this article the simulated annealing method is combined with a modification of the downhill simplex method in order to increase the convergence of the proposed method.

3. The Modification of the Downhill Simplex Method

The downhill simplex method is a local optimization method for finding the minimal value of a non-linear objective function f(x), where x ∈ R^n. The method was originally proposed by Nelder and Mead [6]. A simplex is a convex set with n+1 vertices in n-dimensional space. First, a regular simplex S = {x_i}, x_i ∈ R^n, i = 1, …, n+1, is constructed in the search space of the objective function. Then the center x_c of the simplex S is determined by the formula x_c = (1/n) Σ_{i=1}^{n} x_i.
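As an illustration of Algorithm 1, the annealing loop can be sketched in Python. The quadratic test function, the box-shaped search space, and the choice k = 1 (instead of the physical Boltzmann constant of Step 2.4.2, which would make the acceptance probability vanish for ordinary function scales) are assumptions of this sketch, not part of the article:

```python
import math
import random

def simulated_annealing(f, lower, upper, t_min=1e-4, h=0.95, seed=0):
    """Sketch of Algorithm 1: simulated annealing over the box [lower, upper]^n."""
    rng = random.Random(seed)
    n = len(lower)
    rand_point = lambda: [rng.uniform(lower[j], upper[j]) for j in range(n)]

    x_cur = rand_point()                        # Step 1.1: random initial point
    x_opt, f_opt = x_cur, f(x_cur)
    t = abs(f(x_cur) / math.log(0.9))           # Step 1.2: initial temperature from [3]
    k = 1.0                                     # assumption: k = 1 rather than the physical constant

    while t > t_min:                            # B. Main cycle
        x_new = rand_point()                    # Step 2.1: candidate point
        df = f(x_new) - f(x_cur)                # Step 2.2: change of the objective
        if df <= 0:                             # Step 2.3: accept an improving point
            x_cur = x_new
            if f(x_new) < f_opt:
                x_opt, f_opt = x_new, f(x_new)  # remember the best point so far
        elif rng.random() <= math.exp(-df / (k * t)):
            x_cur = x_new                       # Step 2.4: accept a worse point with prob. p
        t *= h                                  # Step 2.5: cool down
    return x_opt, f_opt                         # Step 3.1

# Usage: minimize a simple quadratic with its minimum at (1, -2).
f = lambda x: (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2
x_opt, f_opt = simulated_annealing(f, [-5.0, -5.0], [5.0, 5.0])
```

As the article notes, the loop lands near the global minimum but rarely hits it exactly, which is what motivates the simplex refinement below.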
In the original method, the center x_c is equally distant from the first n vertices. The vertex x_{n+1} of the simplex is replaced with a point that decreases the value of the objective function and belongs to the line connecting x_c and x_{n+1}. The suggested modification is based on the idea of weighting coefficients: the search for the minimum of the objective function proceeds in the direction in which the values of the function decrease fastest. In this case the weighted center x'_c of the simplex is determined by the formula x'_c = λ_1 x_1 + λ_2 x_2 + … + λ_n x_n, where Σ_{i=1}^{n} λ_i = 1, λ_i > 0, i = 1, …, n. It is proposed to choose values of the coefficients λ_i that tilt the weighted center x'_c towards the vertex of the simplex in whose direction the function values decrease fastest. Then the vertex x_{n+1} is replaced with a point that decreases the value of the objective function and belongs to the line connecting the points x'_c and x_{n+1}. A detailed description of the suggested modification of the downhill simplex method is given in [7].

Modified Downhill Simplex Algorithm (Algorithm 2)

A. Initialization
Step 1.1. Generate a random point x_1 ∈ R^n from the search space of the objective function f(x).
Step 1.2. Around the point x_1, construct the regular simplex S = {x_i}, i = 1, …, n+1, with a given edge length h > 0. Choose a sufficiently small number ε > 0, for example ε = 10^−8.

B. Main Cycle
While |f(x_{n+1}) − f(x_1)| > ε or ρ(x_1, x_{n+1}) = sqrt(Σ_{j=1}^{n} (x_{n+1,j} − x_{1,j})^2) > ε, perform steps 2–7; otherwise, perform step 8.
Step 2. Sort the vertices of the simplex so that f(x_1) ≤ f(x_2) ≤ … ≤ f(x_{n+1}).
Step 3.
Determine the coefficients μ_i = (f(x_{n+1}) − f(x_i)) / ρ(x_{n+1}, x_i), where ρ(x_{n+1}, x_i) = sqrt(Σ_{j=1}^{n} (x_{n+1,j} − x_{i,j})^2) is the distance between the vertices x_{n+1} and x_i = (x_{i,1}, x_{i,2}, …, x_{i,n}) ∈ R^n. The coefficient μ_i shows the relative change of the function f(x) between the vertices x_{n+1} and x_i, i = 1, …, n.
Step 4. Determine the coefficients λ_i by normalizing the weighting coefficients μ_i, i = 1, …, n. Choose λ_i = μ_i / μ, i = 1, …, n, where μ = Σ_{i=1}^{n} μ_i. The coefficients λ_i satisfy the condition Σ_{i=1}^{n} λ_i = 1.
Step 5. Determine the weighted center of the simplex by the formula x'_c = λ_1 x_1 + λ_2 x_2 + … + λ_n x_n.
Step 6. Search for a new point that decreases the value of the objective function f(x) on the line l_2 = {x'_c + α (x'_c − x_{n+1})}.
Step 6.1. For this purpose, the points x_r, x_e, x_oc, x_ic (reflection, expansion, outside and inside contraction) on the line l_2 are considered, corresponding to the values α ∈ {1; 2; 0.5; −0.5}.
Step 6.2. For the mentioned points, check the following condition in order: if the value of the function f(x) at the new point is lower than at the point x_{n+1}, replace the vertex x_{n+1} of the simplex with that point and go back to step 2. If no replacement is made, move to step 7.
Step 7. Shrink the simplex towards the vertex x_1 by the formula x_i = x_1 + σ(x_i − x_1), i = 2, …, n+1, σ = 0.5.

C. Final Stage
Step 8. Report the vector x_1 and the optimal value f(x_1). End the algorithm.

4. The Proposed Combined Algorithm (Algorithm 3)

In order to reach the neighborhood of the global minimum of the objective function f(x), x ∈ R^n, a certain number of steps of the simulated annealing algorithm are performed with T_min = 10^−2. As a result, a vector x_opt ∈ R^n is obtained which is quite close to the global minimum of the objective function. Then a regular simplex is constructed around the point x_opt and the proposed modification of the downhill simplex method is applied.
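Steps 3–5 of Algorithm 2, the weighted-center construction that distinguishes the modification from the classical Nelder–Mead center, can be sketched as follows (the test function and the concrete simplex are made up for illustration):

```python
import math

def weighted_center(simplex, f):
    """Steps 3-5 of Algorithm 2: weighted center of the first n vertices.

    `simplex` is a list of n+1 vertices sorted so that
    f(simplex[0]) <= ... <= f(simplex[n]); as in the main cycle of
    Algorithm 2, the vertex values are assumed not to be all equal.
    """
    n = len(simplex) - 1
    worst = simplex[n]
    f_worst = f(worst)

    # Step 3: mu_i = (f(x_{n+1}) - f(x_i)) / rho(x_{n+1}, x_i)
    mu = []
    for x in simplex[:n]:
        rho = math.sqrt(sum((worst[j] - x[j]) ** 2 for j in range(n)))
        mu.append((f_worst - f(x)) / rho)

    # Step 4: normalize so that the lambda_i sum to 1
    total = sum(mu)
    lam = [m / total for m in mu]

    # Step 5: weighted center, tilted towards the fastest-descent vertex
    return [sum(lam[i] * simplex[i][j] for i in range(n)) for j in range(n)]

# Usage: for f(x) = x0^2 + x1^2 and the sorted simplex below, the weighted
# center leans from the classical midpoint (0.5, 0) towards the best vertex (0, 0).
f = lambda x: x[0] ** 2 + x[1] ** 2
c = weighted_center([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]], f)  # c ≈ (0.4015, 0.0)
```

The tilt towards the best vertex is exactly the effect described above: the reflection line through x'_c and x_{n+1} then points more steeply downhill than the line through the unweighted center.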
A. Description of the Proposed Combined Algorithm

The following steps are performed to find the minimum value of the objective function f(x), x ∈ R^n.
1. Perform a certain number of steps of the simulated annealing algorithm (Algorithm 1) with T_min = 10^−2. This algorithm yields the vector x_opt ∈ R^n.
2. Assign x_1 := x_opt. In this step the vector x_opt obtained by the simulated annealing method is chosen as the first vertex x_1 ∈ R^n of the constructed initial simplex.
3. Apply the modified downhill simplex algorithm (Algorithm 2), starting from step 1.2.
4. At the end of the modified downhill simplex method, report the obtained vector x_1 and the value f(x_1) as the minimal value of the objective function.

The combined method has been tested on the standard test functions of global optimization listed in [3]. For each task, one hundred independent tests were run to find the optimal value, each time starting the algorithm from a different point of the search space of the test function. The proposed combined method was compared with the combination of the simulated annealing method and the Nelder–Mead downhill simplex method. Table 1 shows the comparative evaluation of the two combined methods, where the success rate is the number of successful determinations of the global minimum of each standard function over the 100 independent tests. A test is considered successful, and the minimal value of the test function achieved, if the condition |f* − f̃| < 10^−4 · |f*| + 10^−6 is satisfied [3]. Here f* is the exact global minimum of the test function and f̃ is the optimal value obtained by the combined methods.

Table 1. Combination of the simulated annealing method with the initial and modified downhill simplex methods.
 N   Test function           n (number of   Success rate,           Success rate,
                             variables)     combination with the    combination with the
                                            initial downhill        modified downhill
                                            simplex method          simplex method
 1   Rosenbrock's valley     4              80%                     85%
 2   Freudenstein and Roth   2              82%                     87%
 3   Shubert                 2              86%                     89%
 4   Griewank                2              100%                    100%
 5   Griewank                4              100%                    100%
 6   Griewank                6              100%                    100%
 7   Goldstein and Price     2              100%                    100%
 8   Bohachevsky 2           2              82%                     85%
 9   Bohachevsky 3           2              88%                     91%
10   Schwefel                2              78%                     82%
11   Six-hump camel back     2              100%                    100%

Table 1 shows that the combination of the simulated annealing method with the modified downhill simplex method gives better results than the combination with the initial Nelder–Mead method. Over the hundred independent tests, the proposed combined method (Algorithm 3) obtained the global minimum of the test functions more often than the initial combined method. For example, the number of cases in which the global minimum of the Rosenbrock's valley and Freudenstein and Roth test functions was obtained increased by 5%, while for the Shubert, Bohachevsky and Schwefel functions it increased by 3–4%.

5. The Application of the Proposed Combined Method to Problems of Equal Allocation of Resources in Time

The proposed combined method is applicable to the solution of the following optimal resource distribution problem.

General Problem: n industrial, technological or other projects are carried out. The given functions f_i(t), t ∈ [0, d_i], i = 1, …, n, show the number of resources required for the execution of project i at time t. For each project the duration d_i is given. It is allowed to postpone the implementation of the given projects: c_i is the earliest and v_i the latest allowed start of project i. If project i starts with delay t_i ∈ [c_i, v_i], the function describing its resource requirement at time t is f_i(t − t_i), where t ∈ [t_i, t_i + d_i], i = 1, …, n.
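As a small numerical sketch of these shifted profiles, the total resource requirement at each time can be evaluated on a grid; the piecewise-constant resource functions and the delays below are made up for illustration:

```python
def total_profile(profiles, delays, grid):
    """Total resource requirement sum_i f_i(t - t_i) evaluated on a time grid.

    `profiles[i]` is a pair (f_i, d_i): a resource function defined on [0, d_i]
    (treated as 0 outside); `delays[i]` is the chosen start t_i of project i.
    """
    def shifted(f, d, t_i, t):
        # project i consumes f_i(t - t_i) only while it is running
        return f(t - t_i) if t_i <= t < t_i + d else 0.0
    return [sum(shifted(f, d, t_i, t) for (f, d), t_i in zip(profiles, delays))
            for t in grid]

# Illustrative example: project 1 needs 3 units of resource for 4 time units,
# project 2 needs 2 units for 4 time units.
profiles = [(lambda t: 3.0, 4.0), (lambda t: 2.0, 4.0)]
grid = [0.5 * j for j in range(17)]                      # t in [0, 8]

overlapped = total_profile(profiles, [0.0, 0.0], grid)   # both start at t = 0
staggered = total_profile(profiles, [0.0, 4.0], grid)    # project 2 delayed to t = 4
```

Delaying the second project flattens the total profile: the peak demand drops from 5 units (both projects overlapping) to 3 units. The criteria introduced next make this flattening effect precise.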
In this case, the sum function Σ_{i=1}^{n} f_i(t − t_i) represents the total number of resources required at time t. It is required to select values of the start points t_1, t_2, …, t_n that provide an equal allocation of the total resources over the given time.

A. The Selection of Criteria for Equal Allocation of Resources

We take the following criteria for the equal allocation of resources, I_1(t_1, t_2, …, t_n) and I_2(t_1, t_2, …, t_n), where

I_1(t_1, t_2, …, t_n) = max_{a ≤ t ≤ b} |Σ_{i=1}^{n} f_i(t − t_i) − M|,

I_2(t_1, t_2, …, t_n) = sqrt( (1/(b − a)) · ∫_a^b [Σ_{i=1}^{n} f_i(t − t_i) − M]^2 dt ).

These criteria represent, respectively, the maximal deviation of the sum function Σ_{i=1}^{n} f_i(t − t_i) from its mean value M, and its mean square deviation. The mean value M of the required resources during the implementation period of the projects is determined by

M = (1/(b − a)) · ∫_a^b Σ_{i=1}^{n} f_i(t − t_i) dt,  a = min_i t_i = 0,  b = max_i (t_i + d_i),

where (b − a) is the implementation period of the projects. An equal allocation of the total resources is obtained for the values of the parameters t_1, t_2, …, t_n at which the criterion I_1(t_1, t_2, …, t_n) or I_2(t_1, t_2, …, t_n) takes its minimal value. So the given problem reduces to the following problems (k = 1, 2):

min_{t_1, t_2, …, t_n} I_k(t_1, t_2, …, t_n) subject to the restrictions c_i ≤ t_i ≤ v_i, i = 1, …, n.

The general formulation of the given problem is applicable to the following particular problems.

Problem 1: A system of n parallel operating aggregates is given. The given functions f_i(t), t ∈ [0, d_i], i = 1, …, n, show the workload of aggregate i at time t. It is required to select start points t_i ∈ [c_i, v_i] for operating the given aggregates which make the total workload of the system more even over time.

Problem 2: n projects implemented in parallel are given.
The given functions f_i(t), t ∈ [0, d_i], i = 1, …, n, show the labor force required for project i at time t. It is required to select start points t_i ∈ [c_i, v_i] for the implementation of the projects which make the distribution of the total labor force more even over time.

References

[1] S. H. H. Doulabi, A. Seifi and S. Y. Shariat, "Efficient hybrid genetic algorithm for resource leveling via activity splitting", Journal of Construction Engineering and Management, vol. 137, no. 2, pp. 137–146, 2011.
[2] A. R. Mushi, Algorithms for the Resource Leveling Problem, PhD thesis, Dublin City University, 1997.
[3] A. R. Hedar and M. Fukushima, "Hybrid simulated annealing and direct search method for nonlinear unconstrained global optimization", Optimization Methods and Software, vol. 17, no. 5, pp. 891–912, 2002.
[4] A. R. Hedar and M. Fukushima, "Simplex coding genetic algorithm for the global optimization of nonlinear functions", Multi-Objective Programming and Goal Programming, part 2, pp. 135–140, 2003.
[5] S. Kirkpatrick, C. D. Gelatt and M. P. Vecchi, "Optimization by simulated annealing", Science, New Series, vol. 220, no. 4598, pp. 671–680, 1983.
[6] J. A. Nelder and R. Mead, "A simplex method for function minimization", Computer Journal, vol. 7, pp. 308–313, 1965.
[7] V. S. Hovsepyan and H. S. Derdzyan, "Modification of the sequential simplex planning method and its application to solving optimization problems" (in Russian), Izvestiya MGTU "MAMI", Moscow, vol. 4, pp. 40–48, 2013.

Submitted 27.08.2015, accepted 25.11.2015

Combined Numerical Methods for Solving Optimal Resource Allocation Problems

H. Derdzyan
Summary

The article is devoted to the development of effective methods for solving problems of optimal allocation of resources in time. Among the widespread methods for solving these problems are the simulated annealing and genetic methods. These methods usually approach the optimal solution of the problem, but they require a rather long time to find an exact solution. This article proposes combining the simulated annealing method with a modification of the simplex planning method in order to make the solution of these problems more exact. (Translated from the Armenian.)

Combined Numerical Methods for Solving Optimal Resource Allocation Problems

H. Derdzyan

Abstract

The article is devoted to the development of effective methods for solving problems of optimal distribution of resources in time. The simulated annealing method, the genetic method and others are widely used for solving these problems. These methods approach the solution of the given tasks, but finding an exact solution requires a lot of time. This article proposes combining the simulated annealing method with a modified sequential simplex planning method in order to find a more exact solution of these problems. (Translated from the Russian.)