Novel Heuristic Approach for Solving Multi-objective Scheduling Problems

Begard A. Amin (begardamin1976@gmail.com)
Ayad M. Ramadan (ayad.ramadan@univsul.edu.iq)
Mathematics Department, College of Science, Sulaimani University

Ibn Al-Haitham Journal for Pure and Applied Science, 34(3), 2021.
Journal homepage: http://jih.uobaghdad.edu.iq/index.php/j/index
Doi: 10.30526/34.3.2677
Article history: Received 20 December 2020; Accepted 16 February 2021; Published July 2021.

Abstract
In this paper, we study the scheduling of n jobs on a single machine. Each of the n jobs is to be processed without interruption and becomes available for processing at time zero. The objective is to find a processing order of the jobs that minimizes the sum of maximum earliness and maximum tardiness. Because the problem minimizes earliness and tardiness values, the model corresponds to a just-in-time production system. Our lower bound depends on decomposing the problem into two subproblems. We present a novel heuristic approach to find a near-optimal solution. The approach relies on finding efficient solutions for two problems: the first minimizes total completion time and maximum tardiness, and the second minimizes total completion time and maximum earliness. We use these efficient solutions to find a near-optimal solution for the problem of minimizing the sum of maximum earliness and maximum tardiness; in effect, the total completion time is eliminated from the two problems. The algorithm was tested on a set of problems of different sizes n. Computational results demonstrate the efficiency of the proposed method.

Keywords: Scheduling problems, Multi-objective function, Maximum tardiness, Maximum earliness, Heuristic method.

1. Introduction
Since 1954, scheduling problems have received much attention in the literature. At first, researchers considered only one objective function; in practical cases, the decision-maker was obliged to choose a single objective from several candidates [1]. Nowadays, research on multi-criteria scheduling problems has increased. Nagar et al. [2] presented a survey of multiple and bicriteria scheduling problems. In general, there are two ways to deal with conflicting criteria: hierarchical minimization and simultaneous minimization [3]. In the first, one criterion is primary and the other secondary: one minimizes the primary criterion and then, among those schedules, chooses one with the minimum value of the secondary criterion. In the second approach, the efficient solutions (the Pareto set) are generated and the decision-maker selects the one with the best composite objective function [4]. The first paper on a problem of this type was presented by Smith [5]. In this work, we consider the problem of scheduling n jobs on a single machine that can process at most one job at a time and without interruption. Each job becomes available for processing at time zero and requires a positive processing time. The objective is to minimize the sum of maximum earliness and maximum tardiness, ETmax. This problem was first introduced by Amin-Nayeri and Moslehi [6]. We use a novel heuristic method to find a near-optimal solution for the problem and derive a lower bound that depends on decomposing the problem into two subproblems.
We present a novel heuristic method that finds a near-optimal solution based on a new idea: an additional objective function is inserted into the problem, and efficient solutions are then found for the two resulting problems.

2. Notations and Definitions
In this section, we give some notation and definitions.

N = the set {1, 2, 3, …, n}.
Π = the set of permutation schedules.
π = a permutation schedule.
pj = processing time of job j.
dj = due date of job j.
Cj = completion time of job j.
Ej = max{dj − Cj, 0}; the earliness of job j.
Emax = max{Ej}; the maximum earliness.
Tj = max{Cj − dj, 0}; the tardiness of job j.
Tmax = max{Tj}; the maximum tardiness.
MST rule: (minimum slack time) jobs are sequenced in non-decreasing order of their slack times sj, where sj = dj − pj.
SPT rule: (shortest processing time) jobs are sequenced in non-decreasing order of pj.
EDD rule: (earliest due date) jobs are sequenced in non-decreasing order of dj.
LB: lower bound.
UB: upper bound.

Definition (1) [7]: Consider a problem P. A schedule π ∈ Π (where Π is the set of all schedules) is said to be feasible if it satisfies the constraints of P.

Definition (2) [8]: A feasible schedule π* is efficient with respect to the criteria f and g if there is no feasible schedule π such that f(π) ≤ f(π*) and g(π) ≤ g(π*) with at least one of the inequalities strict.

3. Problem Formulation
In this study, we consider the single-machine scheduling problem with the multi-objective function of minimizing the sum of the maximum earliness and the maximum tardiness, denoted by 1 | | Emax + Tmax, or 1 | | ETmax for short. A set N = {1, 2, …, n} of n independent jobs has to be scheduled on a single machine that handles only one job at a time. Job j (j = 1, 2, …, n) is processed without interruption, requires processing time pj, and has due date dj. For a sequence π of the jobs, the earliness Eπ(j) and the tardiness Tπ(j) are given by:

Eπ(j) = max{dπ(j) − Cπ(j), 0},  j = 1, 2, …, n.
Tπ(j) = max{Cπ(j) − dπ(j), 0},  j = 1, 2, …, n.

So the problem can be formulated mathematically as:

Z = min {Emax(π) + Tmax(π)}
s.t.
Eπ(j) ≥ 0,                  j = 1, 2, …, n.
Tπ(j) ≥ 0,                  j = 1, 2, …, n.       … (p1)
Eπ(j) ≥ dπ(j) − Cπ(j),      j = 1, 2, …, n.
Tπ(j) ≥ Cπ(j) − dπ(j),      j = 1, 2, …, n.

The schedules that minimize Tmax and Emax individually are given by the EDD and MST rules, respectively [9]. For the composition of these two criteria, however, no direct rule yields an optimal solution.
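For concreteness, the short Python sketch below (ours, not part of the original paper; the function names are illustrative) computes ∑ Cj, Emax, and Tmax for a permutation schedule and builds the EDD, MST, and SPT sequences defined above. The job data used in the demonstration are those of the worked example in Section 6.

```python
# A minimal sketch of the quantities defined in Sections 2-3 (our illustration).
# p[j] and d[j] hold the processing time and due date of job j.

def schedule_values(seq, p, d):
    """Return (sum of C_j, Emax, Tmax) for the permutation schedule `seq`."""
    t = sum_c = emax = tmax = 0
    for j in seq:
        t += p[j]                    # completion time C_j (no idle time, no preemption)
        sum_c += t
        emax = max(emax, d[j] - t)   # E_j = max(d_j - C_j, 0)
        tmax = max(tmax, t - d[j])   # T_j = max(C_j - d_j, 0)
    return sum_c, emax, tmax

def edd_sequence(p, d):   # non-decreasing due dates: minimizes Tmax
    return sorted(p, key=lambda j: d[j])

def mst_sequence(p, d):   # non-decreasing slack s_j = d_j - p_j: minimizes Emax
    return sorted(p, key=lambda j: d[j] - p[j])   # ties broken arbitrarily

def spt_sequence(p, d):   # non-decreasing processing times: minimizes sum of C_j
    return sorted(p, key=lambda j: p[j])

if __name__ == "__main__":
    p = {1: 5, 2: 3, 3: 2, 4: 7}     # data of the worked example in Section 6
    d = {1: 8, 2: 6, 3: 11, 4: 7}
    print("EDD:", edd_sequence(p, d), schedule_values(edd_sequence(p, d), p, d))
    print("MST:", mst_sequence(p, d), schedule_values(mst_sequence(p, d), p, d))
```

On this data the EDD sequence is (2, 4, 1, 3) with Tmax = 7, and the MST sequence (for any tie-break) has Emax = 0; these are the values used later for the lower bound.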
4. Analysis of the Problem with Special Cases
This problem has been studied under various conditions and compositions. It was first studied by Amin-Nayeri and Moslehi [6], who considered sequencing on a single machine to find an optimal order of the jobs and showed that their algorithm solved 100% of their examples. Since the problem is NP-hard [10], several heuristic approaches have been developed for it. Abdul-Razaq and Akram used a local search algorithm to solve a multi-criteria function of five objectives that included maximum earliness and maximum tardiness [11]. Abdullah considered maximum earliness and maximum tardiness as primary criteria [12]. The problem has also been considered with a common due date. For a known sequence, Tavakkoli-Moghaddam et al. [10] found the best value of an idle insert, and they also presented a branch-and-bound algorithm to solve the problem with an idle insert [13]. Mahnam and Moslehi presented an efficient branch-and-bound algorithm for the problem on a single machine with unequal release times and no unforced idle time [14]. Moslehi et al. showed that the problem is irregular, so many properties of regular criteria are lost for the present objective function; they presented a heuristic method together with a novel branch-and-bound algorithm to minimize the sum of maximum earliness and tardiness on a single machine with distinct due dates [15].

5. Algorithms for Solving Multi-objective Problems
In this section, we introduce two problems related to ours. Together they involve three criteria, namely ∑ Cj, Tmax, and Emax.

5.1 1 | | F(∑ Cj, Tmax) (Total Completion Time and Maximum Tardiness)
This problem can be written as:
Min ∑_{j=1}^{n} Cj and Min Tmax.       … (p2)
Van Wassenhove and Gelders [8] found all efficient solutions of (p2).

5.2 1 | | F(∑ Cj, Emax) (Total Completion Time and Maximum Earliness)
This problem can be written as:
Min ∑_{j=1}^{n} Cj and Min Emax.       … (p3)
Hoogeveen and Van de Velde [16] presented an algorithm that finds all efficient solutions (the Pareto set) of (p3).

6. A Novel Heuristic Approach
Two kinds of methods have been used to solve multi-objective scheduling problems: exact methods and approximation methods. Exact solutions are obtained by enumeration, branch-and-bound, and dynamic programming; approximate solutions are obtained by local search methods, which form a very general class of heuristics [9]. Here we give a novel heuristic method to find a near-optimal solution of problem (p1). Let S be the set of all efficient solutions of problem (p2), and let S1 be the set of all efficient solutions of problem (p3).

Proposition (1): S ∩ S1 ≠ φ.
Proof: The SPT sequence is one of the efficient solutions of (p2), and it is also one of the efficient solutions of (p3); hence the SPT sequence is one of the common sequences of S ∩ S1. ∎

For each π ∈ (S ∪ S1), compute Emax(π) + Tmax(π). This constructs a set of candidate solutions, one of which may be an optimal solution of problem (p1). In general, however, one cannot guarantee finding an optimal solution from this set.
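This candidate-evaluation step can be sketched as follows. The sets S and S1 are assumed to be supplied by the algorithms that generate the Pareto sets of (p2) and (p3) (for example, those of [8] and [16]); they are not reimplemented here, and the function names are ours.

```python
# Minimal sketch: evaluate Emax + Tmax over the union of the two Pareto sets.
# S and S1 are assumed to be produced by external routines for (p2) and (p3);
# they are inputs here, not recomputed.

def et_max(seq, p, d):
    """Emax(seq) + Tmax(seq) for a permutation schedule `seq`."""
    t, emax, tmax = 0, 0, 0
    for j in seq:
        t += p[j]
        emax = max(emax, d[j] - t)
        tmax = max(tmax, t - d[j])
    return emax + tmax

def best_candidate(S, S1, p, d):
    """Schedule in S ∪ S1 with the smallest Emax + Tmax, and that value."""
    candidates = list(S) + list(S1)
    best = min(candidates, key=lambda seq: et_max(seq, p, d))
    return best, et_max(best, p, d)
```

The value returned is an upper bound on the optimum of (p1); as Proposition (2) below notes, the optimum itself need not lie in S ∪ S1.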
6.1 Special Cases of the Problem (p1)
Case (1): If a schedule π follows the SPT rule and the EDD rule at the same time, then π is an optimal schedule for (p2), and (p2) has a single efficient solution.
Proof: Directly from the definitions of the SPT rule and the EDD rule. ∎

Case (2): If a schedule π follows the SPT rule and the MST rule at the same time, then π is an optimal schedule for (p3), and (p3) has a single efficient solution.
Proof: Directly from the definitions of the SPT rule and the MST rule. ∎

Case (3): If a schedule π follows the EDD rule and the MST rule at the same time, then π is an optimal schedule for (p1), and it is the only efficient solution of (p1).
Proof: Directly from the definitions of the EDD rule and the MST rule. ∎

Proposition (2): An optimal solution π of (p1) may satisfy π ∉ (S ∪ S1).
Proof: In general, the EDD and MST sequences are not efficient solutions of problems (p2) and (p3), respectively. Let π be the EDD sequence, and let π* be another schedule with Tmax(π*) = Tmax(EDD). If π* is an efficient solution of (p2), then π* ∈ S and π* dominates π with respect to (p2), because ∑ Cj(π*) < ∑ Cj(π). At the same time, it is possible that Emax(π*) > Emax(π), so that π is optimal for (p1) and dominates π* for that problem, even though π ∉ S ∪ S1. ∎

6.2 Lower Bound (LB)
Deriving lower bounds for a multi-objective function is difficult, because the objectives have to be minimized at the same time. We therefore introduce a lower bound for our problem as follows.

6.2.1 Deriving a Lower Bound (LB)
Consider again problem (p1). If we had only one objective function, the Emax problem or the Tmax problem, then we could solve it by the MST rule or the EDD rule, respectively. This technique is called the decomposition of the problem [13]. So
LB = Emax(MST) + Tmax(EDD).

6.3 Optimal Solution and Efficient Solutions
Clearly the optimal value is greater than or equal to the lower bound. Define the upper bound by UB = Emax(MST) + Tmax(MST). The relation between the optimal value (opt.), the lower bound (LB), and the efficient solutions of (p1) is given below.

Theorem: There exists an integer r ≥ 0 such that LB + r = opt. and r ∈ [N1 − 1, N2 + 1], where N1 is the number of efficient solutions of (p1) and N2 = Tmax(MST) − Tmax(EDD).

Proof: Since LB ≤ opt., there exists an integer r ≥ 0 such that LB + r = opt., which proves the first part. We now show that r ∈ [N1 − 1, N2 + 1], i.e. that N1 − 1 ≤ r ≤ N2 + 1. Since LB + r = opt.,
r = opt. − LB ≤ UB − LB = Emax(MST) + Tmax(MST) − Emax(MST) − Tmax(EDD)
  = Tmax(MST) − Tmax(EDD) = N2 ≤ N2 + 1.
Hence r ≤ N2 + 1. We prove N1 − 1 ≤ r by mathematical induction on N1 (note that N1 is not known in advance; since processing times and due dates are integers, distinct efficient solutions differ by at least one unit in each criterion).

If N1 = 1, there is only one efficient solution, which attains Emax(MST) and Tmax(EDD) simultaneously (without necessarily being the EDD or MST sequence); then
r = opt. − LB = Emax(opt.) + Tmax(opt.) − Emax(MST) − Tmax(EDD)
  = Emax(MST) + Tmax(EDD) − Emax(MST) − Tmax(EDD) = 0.
Thus N1 − 1 ≤ r ≤ N2 + 1, that is r ∈ [N1 − 1, N2 + 1], and the theorem holds for N1 = 1.

If N1 = 2, the two efficient solutions are (σ) and (σ1) with Emax(σ) = Emax(MST) and Tmax(σ1) = Tmax(EDD); here N1 − 1 = 1. If (σ) is optimal, then
r = Emax(σ) + Tmax(σ) − Emax(MST) − Tmax(EDD)
  = Emax(MST) + Tmax(σ) − Emax(MST) − Tmax(EDD)
  = Tmax(σ) − Tmax(EDD) ≥ 1 = N1 − 1.
Hence N1 − 1 ≤ r ≤ N2 + 1. If (σ1) is optimal, then
r = Emax(σ1) + Tmax(σ1) − Emax(MST) − Tmax(EDD)
  = Emax(σ1) + Tmax(EDD) − Emax(MST) − Tmax(EDD)
  = Emax(σ1) − Emax(MST) ≥ 1 = N1 − 1.
Hence N1 − 1 ≤ r ≤ N2 + 1. So r ∈ [N1 − 1, N2 + 1] and the theorem holds for N1 = 2.

If N1 = 3, the three efficient solutions are (σ), (σ1), and (σ2) with Emax(σ) = Emax(MST) and Tmax(σ2) = Tmax(EDD); here N1 − 1 = 2. If (σ) is optimal, then
r = Emax(σ) + Tmax(σ) − Emax(MST) − Tmax(EDD)
  = Emax(MST) + Tmax(σ) − Emax(MST) − Tmax(EDD)
  = Tmax(σ) − Tmax(EDD) ≥ 2 = N1 − 1.
If (σ1) is optimal, then
r = Emax(σ1) + Tmax(σ1) − Emax(MST) − Tmax(EDD)
  = (Emax(σ1) − Emax(MST)) + (Tmax(σ1) − Tmax(EDD)) ≥ 1 + 1 = 2 = N1 − 1.
Finally, if (σ2) is optimal, then
r = Emax(σ2) + Tmax(σ2) − Emax(MST) − Tmax(EDD)
  = Emax(σ2) − Emax(MST) ≥ 2 = N1 − 1.
In each case N1 − 1 ≤ r ≤ N2 + 1, so r ∈ [N1 − 1, N2 + 1] and the theorem holds for N1 = 3.
Now suppose the theorem is true for N1 = k, i.e. for the k efficient solutions (σ), (σ1), …, (σk−1) we have N1 − 1 ≤ r ≤ N2 + 1. Let N1 = k + 1, so that there are k + 1 efficient solutions (σ), (σ1), …, (σk−1), (σk). If any one of the first k efficient solutions (σ), (σ1), …, (σk−1) is optimal, then, since the theorem is true for N1 = k, we get N1 − 1 ≤ r, and hence N1 − 1 ≤ r ≤ N2 + 1. If the last efficient solution (σk) is optimal, then
r = Emax(σk) + Tmax(σk) − Emax(MST) − Tmax(EDD) = Emax(σk) − Emax(MST) ≥ k = (k + 1) − 1 = N1 − 1,
and thus N1 − 1 ≤ r ≤ N2 + 1. Hence the theorem is true for N1 = k + 1, which completes the proof. ∎

Corollary: If Tmax(MST) = Tmax(EDD), then r = 0.
Proof: Since Tmax(MST) = Tmax(EDD), there is a schedule π (say) that follows the EDD rule and the MST rule at the same time, and π is also an optimal schedule. The lower bound is LB = Emax(MST) + Tmax(EDD), so LB = opt., which implies r = 0. ∎

6.4 Modified Smith's Algorithm
Smith presented an algorithm to solve a multi-criteria problem [5]. We modify it to find an optimal solution for a given Tmax.
Step (1): Set R = ∑_{j=1}^{n} pj, N = {1, 2, …, n}, k = n.
Step (2): Find a job j* ∈ N with R − dj* ≤ Tmax(EDD) such that pj* ≥ pj for every job j ∈ N with R − dj ≤ Tmax(EDD) (break ties arbitrarily). Assign job j* to position k.
Step (3): Set R = R − pj*, N = N − {j*}, k = k − 1. If k = 0, stop; else go to Step (2).

The theorem above bounds the difference between LB and the optimal value. We use it to build a heuristic method that finds a near-optimal solution of (p1). Let S1 = {σ1, σ2, …, σk}, i.e. there are k efficient solutions of problem (p3), and let S = {σ*1, σ*2, …, σ*k1}, i.e. there are k1 efficient solutions of problem (p2). In S1 there exists an i (i = 1, …, k) with Emax(σi) = Emax(MST), and in S there exists a j (j = 1, …, k1) with Tmax(σ*j) = Tmax(EDD). Compute
zi = Emax(σi) + Tmax(σi) for all i = 1, …, k, and
zj = Emax(σ*j) + Tmax(σ*j) for all j = 1, …, k1.
The values zi and zj are upper bounds on the optimal value of problem (p1), i.e. they are greater than or equal to the optimal value. Let z* be the minimum of the zi and zj.
From the theorem we have opt. = LB + r with r ∈ [N1 − 1, N2 + 1]. The possible cases for the optimum are therefore: if N1 = 1, then r ∈ [0, N2 + 1] and LB + 0 is the candidate value; if N1 = 2, then r ∈ [1, N2 + 1] and LB + 1 is the candidate value; and so on. Since z* is the minimum known upper bound, the search stops as soon as LB + r > z*.
For the admissible values of r we start with r = 0. This is the special case Tmax(MST) = Tmax(EDD), in which the optimal solution can be found directly. If not, we check r = 1: if there exists a sequence π* ∈ S1 ∪ S with Emax(π*) + Tmax(π*) = LB + 1, it is optimal; otherwise solve (p1) using the modified Smith's algorithm with the given Tmax(EDD). If a solution is found, it is optimal; otherwise check r = 2, and so on.
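Step (2) of the modified Smith's algorithm is partly garbled in the source text, so the sketch below implements one plausible reading of it and should be taken as our assumption rather than the authors' exact procedure: jobs are assigned from the last position backwards, and at each position the longest remaining job whose resulting tardiness stays within the given Tmax bound is chosen.

```python
# A sketch of the modified Smith's algorithm of Section 6.4, under one reading of
# Step (2): schedule backwards and, at each position, pick the longest remaining
# job j whose tardiness in that position would not exceed the given bound,
# i.e. R - d_j <= tmax_given. Returns None if no feasible assignment exists.

def modified_smith(p, d, tmax_given):
    jobs = set(p)                      # Step (1): N = {1, ..., n}
    R = sum(p.values())                #           R = sum of all processing times
    sequence = []
    while jobs:
        eligible = [j for j in jobs if R - d[j] <= tmax_given]   # Step (2): eligibility check
        if not eligible:
            return None                # no job can go last without violating the bound
        j_star = max(eligible, key=lambda j: p[j])               # longest eligible job
        sequence.append(j_star)        # assign it to the current last position k
        jobs.remove(j_star)            # Step (3): update N, R, k
        R -= p[j_star]
    sequence.reverse()                 # built from the back, so reverse
    return sequence
```

Under this reading, any schedule returned satisfies Tmax ≤ the given bound by construction.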
The Proposed Algorithm:
Step (1): Find the efficient solutions of (p3), say S1 = {σ1, σ2, …, σk}, and of (p2), say S = {σ*1, σ*2, …, σ*k1}. Let N1 be the number of efficient solutions of (p1), N2 = Tmax(MST) − Tmax(EDD), and LB = Emax(MST) + Tmax(EDD).
Step (2): Compute zi = Emax(σi) + Tmax(σi) for i = 1, …, k and zj = Emax(σ*j) + Tmax(σ*j) for j = 1, …, k1. Let z* be the minimum of the zi and zj.
Step (3): If Tmax(MST) = Tmax(EDD), then the EDD rule, as well as the MST rule, gives an optimal solution; go to Step (7). Else go to Step (4).
Step (4): Consider values of r with r ∈ [N1 − 1, N2 + 1]; stop whenever r > N2 + 1 or LB + r > z*. Set r = 1.
Step (5): If there exists a sequence π* ∈ S1 ∪ S with Emax(π*) + Tmax(π*) = LB + r, it is optimal; go to Step (7). Else go to Step (6).
Step (6): Apply the modified Smith's algorithm with the given Tmax(EDD). If a solution exists, it is optimal; go to Step (7). Else set r = r + 1 and go to Step (5).
Step (7): Stop.

Example: Consider a 4-job problem with the following data.

  j    1   2   3   4
  pj   5   3   2   7
  dj   8   6  11   7

First we find the efficient solutions of problems (p2) and (p3) by suitable algorithms. The results are given in the following table, where a bullet (•) marks an efficient solution of the corresponding problem and the unmarked entries are the corresponding values of the other schedules.

  No.  Schedule       (∑Cj, Emax)   (∑Cj, Tmax)   Emax + Tmax
  1    (3, 2, 1, 4)   • (34, 9)     • (34, 10)    19
  2    (2, 3, 1, 4)   • (35, 6)       (35, 10)    16
  3    (3, 1, 2, 4)     (36, 9)     • (36, 9)     18
  4    (2, 1, 3, 4)   • (38, 3)       (38, 10)    13
  5    (2, 1, 4, 3)     (43, 3)     • (43, 8)     11
  6    (2, 4, 1, 3)     (45, 3)     • (45, 7)     10
  7    (4, 2, 3, 1)   • (46, 0)       (46, 9)     9

Thus S1 = {(3, 2, 1, 4), (2, 3, 1, 4), (2, 1, 3, 4), (4, 2, 3, 1)} and S = {(3, 2, 1, 4), (3, 1, 2, 4), (2, 1, 4, 3), (2, 4, 1, 3)}, so k = 4 and k1 = 4. In S1, schedule number 7 gives Emax(7) = Emax(MST) = 0, and in S, schedule number 6 gives Tmax(6) = Tmax(EDD) = 7 (in this example it is the EDD sequence). The values zi and zj appear in the last column, and z* = 9.
Applying the theorem, LB = 0 + 7 = 7 and N2 = 7 − 7 = 0, so r ∈ [N1 − 1, 1]. If N1 = 1, then r ∈ [0, 1] and opt. = LB + 0 = 7; if N1 = 2, then r ∈ [1, 1] and opt. = LB + 1 = 8. The possible values of the optimum are therefore 7, 8, and 9. Start with r = 0. Since there is no π* ∈ S1 ∪ S with Emax(π*) + Tmax(π*) = LB + 0 = 7, we apply the modified Smith's algorithm with Tmax = Tmax(EDD). The resulting solution is (4, 2, 1, 3), with Tmax = 7 and Emax = 0, so its objective value equals LB = 7 and it is optimal.
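The following short script (ours, not part of the paper) recomputes the main quantities of this example: the EDD and MST sequences, LB, N2, the minimum candidate value z* over S ∪ S1 as listed above, and the objective value of the reported schedule (4, 2, 1, 3). The MST tie between jobs 1 and 2 is broken here by due date.

```python
# Recomputing the worked example (our script; the candidate sets S and S1 are
# copied from the table above rather than regenerated).
p = {1: 5, 2: 3, 3: 2, 4: 7}
d = {1: 8, 2: 6, 3: 11, 4: 7}

def values(seq):
    """Return (sum of C_j, Emax, Tmax) of a permutation schedule."""
    t = sum_c = emax = tmax = 0
    for j in seq:
        t += p[j]
        sum_c += t
        emax = max(emax, d[j] - t)
        tmax = max(tmax, t - d[j])
    return sum_c, emax, tmax

edd = sorted(p, key=lambda j: d[j])                  # EDD sequence: (2, 4, 1, 3)
mst = sorted(p, key=lambda j: (d[j] - p[j], d[j]))   # MST slack order, ties by due date
_, _, tmax_edd = values(edd)
_, emax_mst, tmax_mst = values(mst)

LB = emax_mst + tmax_edd        # lower bound of Section 6.2
N2 = tmax_mst - tmax_edd        # quantity used in the theorem of Section 6.3

S1 = [(3, 2, 1, 4), (2, 3, 1, 4), (2, 1, 3, 4), (4, 2, 3, 1)]   # efficient for (p3)
S  = [(3, 2, 1, 4), (3, 1, 2, 4), (2, 1, 4, 3), (2, 4, 1, 3)]   # efficient for (p2)
z_star = min(values(s)[1] + values(s)[2] for s in S + S1)

_, e_rep, t_rep = values((4, 2, 1, 3))               # schedule reported by the heuristic
print("LB =", LB, " N2 =", N2, " z* =", z_star)      # expected: LB = 7, N2 = 0, z* = 9
print("ETmax of (4,2,1,3) =", e_rep + t_rep)         # expected: 7, which equals LB
```

These recomputed values agree with the ones quoted in the example above.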
7. Computational Results
Here we present the results of computational tests that show the efficiency of the proposed algorithm. The test problems were generated as follows: an integer processing time pj was drawn from the uniform distribution on [1, 10], and an integer due date dj was drawn from the uniform distribution on [0, ∑_{j=1}^{n} pj]. The proposed algorithm was coded and run in Matlab 8.1 (R2013a) on a personal computer with an Intel(R) Pentium(R) CPU 2117U @ 1.80 GHz and 4.00 GB of RAM.

8. Conclusions and Future Works
We presented a novel heuristic algorithm to minimize the sum of maximum earliness and maximum tardiness, ETmax, in single-machine scheduling. The algorithm depends on a new idea presented here for the first time. It is easy to implement and has a simple structure compared with other algorithms used for this problem, such as branch and bound. To evaluate its efficiency, the algorithm was tested on 25 examples for different n: for n = 3, n = 4, and n = 5 it solved 100% of the instances. The computational results show the capability of the algorithm. An important direction for future research is to apply the presented idea to other multi-objective problems with a variety of machine environments and conditions.

References
1. T'kindt, V.; Billaut, J.-C. Multicriteria Scheduling: Theory, Models and Algorithms, 2nd ed.; Springer: Berlin, 2002; ISBN 9783540282303.
2. Joksch, H. C. Constraints, objectives, efficient solutions and sub-optimization in mathematical programming. Journal of Institutional and Theoretical Economics. 1966, 122, 5-13.
3. Hoogeveen, J. A. Single-Machine Bicriteria Scheduling. Ph.D. Thesis, CWI, Amsterdam, 1992.
4. Hoogeveen, J. A. Minimizing maximum promptness and maximum lateness on a single machine. Mathematics of Operations Research. 1996, 21, 100-114.
5. Ramadan, A. M. Range of lower bounds. Applied Mathematics and Computation. 2011, 218, 1008-1011.
6. Amin-Nayeri, M. R.; Moslehi, G. Optimal algorithm for single machine sequencing to minimize early/tardy cost. ESTEGHLAL Journal of Engineering. 2000, 19, 35-48.
7. Jouni, L. Multi-objective Nonlinear Pareto-optimization. Lappeenranta University of Technology, Laboratory of Information Processing, 2000.
8. Van Wassenhove, L. N.; Gelders, F. Solving a bicriterion scheduling problem. European Journal of Operational Research. 1980, 4, 42-48.
9. Abdul-Razaq, T. S.; Mohammed, H. A. A. Exact and approximation approaches for solving multiple objective function. Al-Mustansiriyah Journal of Science. 2016, 27, 89-97, http://dx.doi.org/10.23851/mjs.v31i4.897.
10. Tavakkoli-Moghaddam, R.; Moslehi, G.; Vasei, M.; Azaron, A. Optimal scheduling for a single machine to minimize the sum of maximum earliness and tardiness considering idle insert. Applied Mathematics and Computation. 2005, 167, 1430-1450.
11. Abdul-Razaq, T. S.; Akram, A. O. Local search algorithms for multi-criteria single machine scheduling problem. Ibn Al-Haitham Journal for Pure and Applied Science (Special Issue). 2017, 436-451, http://dx.doi.org/10.30526/2017.IHSCICONF.1817.
12. Abdullah, H. F. Multicriteria scheduling problems to minimize total tardiness subject to maximum earliness or tardiness. Ibn Al-Haitham Journal for Pure and Applied Science. 2010, 23, 311-320.
13. Tavakkoli-Moghaddam, R.; Moslehi, G.; Vasei, M.; Azaron, A. A branch-and-bound algorithm for a single machine sequencing to minimize the sum of maximum earliness and tardiness with idle insert. Applied Mathematics and Computation. 2006, 174, 388-408.
14. Mahnam, M.; Moslehi, G. A branch and bound algorithm for minimizing the sum of maximum earliness and tardiness with unequal release times. Engineering Optimization. 2009, 41, 521-536.
15. Moslehi, G.; Mahnam, M.; Amin-Nayeri, M.; Azaron, A. A branch-and-bound algorithm to minimize the sum of maximum earliness and tardiness in the single machine. International Journal of Operational Research. 2010, 8, 458-482.
16. Hoogeveen, J. A.; Van de Velde, S. L. Minimizing total completion time and maximum cost simultaneously is solvable in polynomial time. Operations Research Letters. 1995, 17, 205-208.