INTERNATIONAL JOURNAL OF COMPUTERS COMMUNICATIONS & CONTROL
Online ISSN 1841-9844, ISSN-L 1841-9836, Volume: 16, Issue: 3, Month: June, Year: 2021
Article Number: 4221, https://doi.org/10.15837/ijccc.2021.3.4221
CCC Publications

Job-shop Scheduling Over a Heterogeneous Platform

J. A. Hermosillo-Gomez, H. Benitez-Perez

Jose A. Hermosillo-Gomez*
Institute of Applied Mathematics Research and Systems (IIMAS)
National Autonomous University of Mexico, Mexico
3000 Circuito Escolar, Cd. Universitaria, Coyoacán 04510, Mexico City
*Corresponding author: jose.hermosillo@iimas.unam.mx

Hector Benitez-Perez
Institute of Applied Mathematics Research and Systems (IIMAS)
National Autonomous University of Mexico, Mexico
3000 Circuito Escolar, Cd. Universitaria, Coyoacán 04510, Mexico City
hector.benitez@iimas.unam.mx

Abstract

Real-time scheduling involves determining the allocation of platform resources in such a way that tasks can meet their temporal restrictions. This work focuses on a job-shop task model in which a task has a finite number of different non-preemptive instances (jobs) that share a unique hard deadline and whose time requirements are known only upon task arrival. Non-preemptive scheduling is considered because this characteristic is widely used in industry. Moreover, job-shop scheduling has a direct impact on the production efficiency and costs of manufacturing systems, so the development of analyses for tasks with these characteristics is necessary. The aim of this work is to propose an online scheduling test able to guarantee the execution of a newly arriving task, generated by human interaction with an embedded system, or otherwise to discard it. An extension of the schedulability test proposed by Baruah in 2006 for non-preemptive periodic tasks over an identical platform is presented in this paper. This extension is applied to non-preemptive tasks that have hard deadlines over a heterogeneous platform. To that end, some virtual transformations are applied to both the task set and the platform.

Keywords: Embedded systems, real-time, uniform heterogeneous platform, non-preemptive jobs.

1 Introduction

Traditionally, the job-shop problem is an optimisation problem in which many manufacturing jobs are assigned to machines at particular times while trying to minimise the makespan. However, here a platform is considered whose machines have the same processing capabilities but different speeds; this characteristic gives certain flexibility to the traditional job-shop problem (JSP) [1], in such a way that machine sequence-dependent setups are avoided.

The addressed task model consists of non-preemptive job-shops. Let τi be a job-shop; this kind of task is composed of ni jobs, with ni ∈ N and ni < ∞. For the purpose of this work, a job is defined as an ordered sequence of operations that must be executed without interruption (i.e., atomic jobs). The jth job of the ith task is represented as τi,j, where i = 1, . . . and j = 1, . . . , ni. A job-shop, τi, is an ordered sequence of jobs: let τi = {τi,j : i = 1, . . . , j = 1, . . . , ni} be a job-shop, where ni ∈ N denotes its number of jobs. These jobs follow precedence constraints, that is, the job τi,j cannot start its execution if the previous job τi,j−1 has not finished yet. Job-shops are also characterised by temporal restrictions such as a unique release time instant (ri), a unique critical deadline (di), and ni execution time requirements (one per job), denoted as ci,j (where j = 1, . . . , ni).
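To make the task model concrete, a minimal Python sketch of one possible in-memory representation is given below; the names JobShop, release, deadline, and job_costs are illustrative assumptions and are not part of the formal model.

from dataclasses import dataclass
from typing import List

@dataclass
class JobShop:
    """A non-preemptive job-shop tau_i (illustrative representation)."""
    release: float          # r_i: unique release instant
    deadline: float         # d_i: unique critical (hard) absolute deadline
    job_costs: List[float]  # c_{i,1}, ..., c_{i,n_i}: per-job execution requirements,
                            # listed in precedence order

    @property
    def n_jobs(self) -> int:
        """n_i: number of jobs; precedence follows the list order."""
        return len(self.job_costs)

# Example: a job-shop released at t = 0, with three atomic jobs and deadline 30
tau_1 = JobShop(release=0.0, deadline=30.0, job_costs=[4.0, 2.5, 3.0])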
Definition 1 (Precedence constraint). For any two consecutive jobs τi,j−1, τi,j ∈ τi, the job τi,j−1 precedes τi,j if its finishing time fi,j−1 satisfies fi,j−1 ≤ bi,j, where bi,j denotes the beginning instant of execution of τi,j, i = 1, . . . and j = 2, . . . , ni.

Scheduling research needs to focus on this type of task because of its importance in processes inherent to industry (manufacturing, inventory costs, transportation schedules, etc.). Under Industry 4.0, scheduling must deal with smart and distributed manufacturing systems supported by novel and emerging manufacturing technologies such as mass customisation, cyber-physical systems, digital twins, SMAC (Social, Mobile, Analytics, and Cloud), big data, etc. [12]. Embedded systems are an ideal platform for implementing projects under the requirements of IoT and Industry 4.0 regarding different resource capacities (communication, computing, memory, etc.), energy efficiency, etc. Embedded systems must operate in dynamic environments where human activities occur at any moment; hence some tasks, such as emergency tasks, external-event tasks, human-interaction tasks, etc., arrive aperiodically [11]. Such aperiodic interactions may generate new entering tasks (in this case, job-shops) that the embedded system should complete within their respective deadlines. Embedded platforms for dynamic environments are expected to handle several tasks of the kind previously described at the same time.

Non-preemptive scheduling is widely used in industry, and it may be preferable to preemptive scheduling for several reasons [8]: non-preemptive scheduling algorithms are easier to implement and have lower run-time overhead than preemptive scheduling algorithms, and the overhead of preemptive scheduling algorithms is more difficult to characterise and predict than that of non-preemptive scheduling, due to inter-task interference caused by caching and pipelining. These benefits of non-preemptive scheduling are even more important on multiprocessors, since the task migration overhead is higher and more difficult to predict [7]. A result in [7] suggests that, for a considerable part of real-life applications on multiprocessor platforms, non-preemptive scheduling can be a better choice with respect to real-time performance: it is experimentally shown there that, for task sets in which the range of execution times is not very wide, the performance of EDFnp (short for non-preemptive EDF) is very close to that of EDF, and DMnp (short for non-preemptive DM) performs better than DM.

This work addresses the problem of scheduling sets of arriving non-preemptive job-shops over a heterogeneous platform according to their temporal restrictions. By extending [3], it is possible to determine online whether a set of arriving job-shops is schedulable. Contrary to [3], which holds for static conditions1, in this work the attention is focused on tasks that arrive at unknown time instants. The managed tasks, job-shops2, also have their respective unique [6] but critical deadlines. The platform considered for scheduling these tasks is a uniform heterogeneous one [4]. Let π = (s1, s2, . . . , sm) denote such a multiprocessor platform of size m, with the ith processor having a speed rate si, where si ≥ si+1 for i = 1, . . . , m − 1; the bigger the value of si, the slower the ith processor.
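As a small illustration (with assumed helper names, not taken from the paper), such a platform can be represented simply as a tuple of speed rates kept in the non-increasing order used above, with the slowest processor normalised to rate 1:

from typing import Tuple

def make_platform(*rates: float) -> Tuple[float, ...]:
    """Uniform heterogeneous platform pi = (s_1, ..., s_m); per the paper's
    convention, rates lie in (0, 1], a larger rate means a slower processor,
    and the tuple is kept with s_i >= s_{i+1}."""
    assert all(0 < s <= 1 for s in rates), "speed rates are normalised to (0, 1]"
    return tuple(sorted(rates, reverse=True))

# Example: a 3-processor platform; pi[0] = 1.0 is the slowest processor
pi = make_platform(0.5, 1.0, 0.8)   # -> (1.0, 0.8, 0.5)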
The problem addressed in this work is that of deciding whether to assign newly released job-shops to a heterogeneous platform at a certain instant t; a schedulability test is therefore defined as a function of the tasks that are active from time t on.

1 Not only the number of tasks but also their computing time requirements remain the same during execution.
2 In the literature these tasks are also called coordinated tasks [10]; they are, at the same time, a particular case of DAG tasks.

The rest of this paper presents a review of the analysis on which this work is based (section 2); defines the transformation operations that allow the proposed modified BAR06 analysis to be applied to schedule new entering job-shops (section 3); describes the possible transitions that tasks may go through, from their arrival at the embedded system to their execution or discard (section 4); describes the server construction (section 5); introduces an important additional component for analysing entering tasks (section 6); and finally states a theorem that extends the applicability of BAR06 to job-shops on a uniform heterogeneous platform (section 7).

2 Related work

There is little work dedicated to non-preemptive task schedulability analysis; two of the best-known papers on this topic are [3] and [7], the latter being an improvement of the former. Recent works focus their efforts on developing analyses to determine the schedulability of a fixed set of periodic tasks over a homogeneous multiprocessor platform: [9], [2]. [9] seeks to bound the worst-case response time of a set of jobs that access shared resources, while a non-work-conserving, non-clairvoyant, and non-preemptive scheduling framework, able to find feasible schedules for work-conserving-infeasible task sets, is proposed in [2]. However, they do not consider the constant interaction with users that produces new entering tasks in a system.

The sufficient schedulability analysis proposed by Baruah [3], henceforth called BAR06, consists of validating restriction (1) for a set of n known periodic tasks, τ, on an identical m-processor platform.

Vsum(τ) ≤ m − (m − 1) · Vmax(τ)    (1)

where

Vsum(τ) := ∑τi∈τ V(τi, τ)    (2)

Vmax(τ) := maxτi∈τ {V(τi, τ)}    (3)

and V(τi, τ) is the utilization of the non-preemptive periodic task τi:

V(τi, τ) := ci / (Ti − emax(τ))    (4)

emax(τ) is the maximum of the execution requirements in τ; as can be appreciated in (4), it accounts for the worst-case interference caused by non-preemption:

emax(τ) := maxi=1,...,n {ci}    (5)

where ci and Ti are the execution requirement and the inter-release time of the periodic task τi ∈ τ, respectively. An obvious observation is that a task τi ∈ τ, even with low utilization, cannot pass (1) if ci > Tmin, where Tmin denotes the minimal period Ti among the tasks in τ.

3 Transformations

Global EDFnp tries to assign as many task instances with the earliest deadline as it can to the available processors, then schedules the instances with the next earliest deadlines, and so on, until either all processors are busy or there are no more waiting task instances. With the aim of applying the schedulability analysis proposed by Baruah in [3] to job-shops scheduled by global EDFnp on a heterogeneous platform, the following series of virtual transformations is performed:

1. Transformation of the heterogeneous platform into a homogeneous one.
2. Transformation of the incoming aperiodic non-preemptive job-shops into periodic tasks.
Lemma 1. A uniform heterogeneous platform has an associated identical (homogeneous) platform.

Proof. The proof is by construction. Let π = (s1, s2, . . . , sm) denote the speed rates of the uniform multiprocessor platform of size m, with the kth processor having a rate sk, where 0 < sk ≤ 1 (note that the rate smax = 1 corresponds to the slowest processor). To convert the current heterogeneous platform into a homogeneous one, all processor speeds must be made equal. To do so, a factor must be found for every processor in the platform; let αk, k = 1, . . . , m, be such a value, so that αk · sk = smax, where smax is the rate of the slowest processor,

smax := maxk=1,...,m {sk}

then

αk = smax / sk

This new homogeneous platform is a virtual platform over which the schedulability test for the new entering tasks will be applied. Note that the factor αk indicates how much the kth virtual processor must slow down to become part of the slowest homogeneous platform. Also note that the bigger αk is, the slower the kth processor becomes.

In Lemma 2, the execution requirements of the pseudo-periodic tasks are built under the assumption of the underlying homogeneous platform of Lemma 1. It is assumed that for every τi,j ∈ τi there exists an execution requirement ci,j (i = 1, . . . , n) given in terms of the slowest processor in π. This means that a job τi,j executed on the kth processor of the heterogeneous platform lasts less than on the homogeneous one: ci,j ≥ ci,j · sk for any k = 1, . . . , m.

Lemma 2 (Pseudo-periodic task). A job-shop τi has an equivalent associated periodic task, called the associated pseudo-periodic task, denoted τ′i.

Proof. The proof is by construction. An associated pseudo-periodic task, τ′i, can be represented as the tuple (ri, Ci, Ti(t)), where ri is the same release time as in τi; Ci, short for C(τi), is the virtual execution requirement, which is computed as

C(τi) := maxτi,j∈τi {ci,j}    (6)

thus, for each ci,j of every non-executed job τi,j ∈ τi, it is necessary to add Ci − ci,j units of unoccupied time in order to have ni instances with the same execution requirement (just like periodic task instances); Ti(t), short for T(τi, t), is the imposed virtual period of the task τi, computed as

T(τi, t) := ⌊(di − t) / ni⌋    (7)

where di is the absolute deadline of τi and t is the time instant at which Ti is calculated. It must hold that Ti(t) · ni ≤ di and Ci ≤ Ti(t).
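The two virtual transformations of Lemmas 1 and 2 amount to a few lines of arithmetic. The following Python sketch (function names are illustrative, not taken from the paper) computes the slow-down factors αk and the pair (Ci, Ti(t)) of equations (6) and (7):

import math
from typing import List, Tuple

def slowdown_factors(pi: Tuple[float, ...]) -> List[float]:
    """Lemma 1 (sketch): alpha_k = s_max / s_k, the factor by which the kth virtual
    processor slows down to match the slowest one in the associated identical platform."""
    s_max = max(pi)
    return [s_max / s_k for s_k in pi]

def pseudo_periodic(job_costs: List[float], deadline: float, t: float) -> Tuple[float, int]:
    """Lemma 2 (sketch): virtual execution requirement C_i (eq. 6) and imposed virtual
    period T_i(t) (eq. 7) for the remaining jobs of a job-shop with the given deadline."""
    C_i = max(job_costs)                                # eq. (6)
    T_i = math.floor((deadline - t) / len(job_costs))   # eq. (7)
    # conditions required by Lemma 2
    assert C_i <= T_i and T_i * len(job_costs) <= deadline, "transformation infeasible"
    return C_i, T_i

# Example: at t = 2, a job-shop with three remaining jobs and absolute deadline 30
C_1, T_1 = pseudo_periodic([4.0, 2.5, 3.0], deadline=30.0, t=2.0)  # C_1 = 4.0, T_1 = 9
alphas = slowdown_factors((1.0, 0.8, 0.5))                         # [1.0, 1.25, 2.0]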
In the setting studied here, the jobs of τi ∈ τ, for all i = 1, . . ., are finite; their execution requirements are not necessarily equal; and they are consumed during run time. Also, the arrival instants of new job-shops are only known during run time (online). Hence the need to modify the BAR06 analysis by considering the evolution in time of the execution of the "accepted" job-shops. Once a job τi,j ∈ τi has been consumed, it is discarded; therefore v(t, τi, τ) must be calculated, by substituting definitions (6) and (7) (see Lemma 2) into (4), every time the server task is active at instant t.

v(t, τi, τ) := Ci / (Ti(t) − emax(τ))    (8)

where emax(τ) is the maximum execution requirement of the periodic task set τ:

emax(τ) := maxτi∈τ {Ci}    (9)

Once all this run-time dynamism has been incorporated into the new model, inequality (1) is modified as shown next:

vsum(t, τ) ≤ m − (m − 1) · vmax(t, τ)    (10)

where

vmax(t, τ) := maxτi∈τ {v(t, τi, τ)} if τ ≠ φ, and 0 if τ = φ    (11)

and

vsum(t, τ) := ∑τi∈τ v(t, τi, τ) if τ ≠ φ, and 0 if τ = φ    (12)

Note the dependence on time of this new acceptance analysis. So far, it is known how to tailor the temporal restrictions of a fixed set of job-shops into a set of periodic tasks, but not how to proceed with new job-shops entering the system. Therefore, a systematic way of converting and verifying the released tasks is described below.

4 Task transitions

Global EDFnp is the underlying algorithm of this approach, and a task server (τs) is also used; both are adapted to a multiprocessor platform to allocate resources for released incoming tasks (here, task means a job-shop). There are four possible states tasks can go through, namely released, wait, active, and dead; each state has its own data structure (e.g., a heap) that references the tasks being handled. As shown in Figure 1, for instance, the state released has the associated data structure τreleased, which receives all new execution requests of tasks arriving to the system at any time. When an interruption occurs at instant t and there is enough budget for the server task (let Cs denote the budget capacity of the server), the tasks referenced by τreleased begin to be transformed (see Lemma 2) and then change their status to wait, meaning that they have been allocated to τwait by τs. Formally, τreleased is defined as follows:

τreleased := {τi : 0 ≤ ri ≤ t}

When global EDFnp selects τs as the highest priority task and all tasks from τreleased have been transformed, τs validates which tasks from τwait are still schedulable.

Definition 2 (Schedulable task). At instant t, given the total utilization of the currently performing pseudo-periodic tasks, vsum(t, τactive), a task τi ∈ τwait is said to be still schedulable at instant t if it can still be executed by its deadline, di:

t + ni · Ci + vsum(t, τactive) · (di − t)/m ≤ di    (13)

where ni is the number of jobs in τi that have not been scheduled yet and m is the number of processors.

If a schedulable task τk ∈ τwait passes the BAR06 analysis, its status changes to active and it is moved to τactive, the set of tasks currently assigned to the platform; otherwise, it is either discarded (moved to τdead) or rejected (placed back in τwait). Note that, at instant t, τwait keeps the schedulable tasks:

τwait := {τi : τi satisfies (13) and vsum(t, τ) > m − (m − 1) · vmax(t, τ)}

where τ = τactive ∪ {τi}.

Figure 1: States of tasks during run time

As can be observed in Figure 1, rejected tasks are still schedulable, while discarded tasks are thrown away because they cannot meet restriction (13). In addition, according to [3], the following restriction must be satisfied before verifying inequality (10): the maximum of the execution requirements in any task set τ, emax(τ), must be less than or equal to the minimum of the periods in τ: emax(τ) ≤ Tmin.
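As a reading aid, the quantities of equations (8)-(12) can be computed as in the following Python sketch (names are illustrative assumptions); tasks are reduced here to (Ci, Ti(t)) pairs, and the precondition emax(τ) ≤ Tmin mentioned above is taken for granted. Algorithm 1 in the next section combines these quantities with condition (13) into the full acceptance test.

from typing import List, Tuple

Task = Tuple[float, float]   # (C_i, T_i(t)) of a pseudo-periodic task

def e_max(tasks: List[Task]) -> float:
    """Eq. (9): largest virtual execution requirement in the set (0 for an empty set)."""
    return max(c for c, _ in tasks) if tasks else 0.0

def v(task: Task, tasks: List[Task]) -> float:
    """Eq. (8): time-dependent utilization of one pseudo-periodic task."""
    c_i, t_i = task
    return c_i / (t_i - e_max(tasks))     # requires e_max(tasks) < t_i

def v_sum(tasks: List[Task]) -> float:
    """Eq. (12): total utilization, 0 when the set is empty."""
    return sum(v(tau, tasks) for tau in tasks) if tasks else 0.0

def v_max(tasks: List[Task]) -> float:
    """Eq. (11): maximum utilization, 0 when the set is empty."""
    return max(v(tau, tasks) for tau in tasks) if tasks else 0.0

def passes_bar06(tasks: List[Task], m: int) -> bool:
    """Inequality (10) on the m virtual identical processors."""
    return v_sum(tasks) <= m - (m - 1) * v_max(tasks)

# Example: two active tasks plus one candidate on m = 2 virtual processors
active = [(2.0, 12.0), (1.5, 10.0)]
candidate = (1.0, 9.0)
accepted = passes_bar06(active + [candidate], m=2)   # True for these numbers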
When a task satisfies (10), it is added to τactive, which is the global set of pseudo-periodic tasks being scheduled by EDFnp.

Algorithm 1 Acceptance Test
1: Treleased ←− convert2Pseudoperiodic(τreleased)
2: τwait ←− τwait ∪ Treleased
3: for all τ′k ∈ τwait, while Cs is not used up, do
4:   τ ←− τactive ∪ {τ′k}
5:   if t + nk · Ck + vsum(t, τactive) · (dk − t)/m ≤ dk and Ck ≤ minτi∈τactive{Ti} and Tk ≥ maxτi∈τactive{Ci} and vsum(t, τ) ≤ m − (m − 1) · vmax(t, τ) then
6:     τactive ←− τ
7:   else if t + nk · Ck + vsum(t, τactive) · (dk − t)/m > dk or Ck > minτi∈τactive{Ti} or Tk < maxτi∈τactive{Ci} then
8:     discard(τ′k)
9:   end if
10: end for

As shown in Algorithm 1, every time an interruption is triggered at instant t, every τk ∈ τreleased is stored in an auxiliary data structure, Treleased, and converted into its associated periodic task τ′k before it becomes part of τwait (see line 1); then restrictions (13) and (10) are verified for each τ′k ∈ τwait (see lines 3-10).

5 Server design

Generally, the server, denoted by τs, is a special periodic task whose intent is to handle new incoming aperiodic requests as soon as possible and without causing previously accepted tasks to miss their deadlines. In this case, two types of requests are considered: first, those tasks that are waiting until instant t in τreleased to be transformed; second, those tasks that are stored in τwait and need to be validated; if accepted, they are moved to τactive. τs adapts itself to the current platform conditions, meaning that it is able to change its budget capacity at every triggered interruption (see section 6). Note that the execution of the task server is not restricted, so it can run on any processor.

The utilization of τs, Us, is bounded according to the capacity left by the tasks in τactive. At instant t, when an interruption has been triggered, Us is calculated as follows:

Us(t) = m − vsum(t, τactive)    (14)

Since the addressed problem deals with non-preemptive tasks, Us is treated as the upper bound of the utilization value of τs (us), that is, 0 < us ≤ Us. It is necessary to regard τs as one extra task in the set; then inequality (10) must satisfy

vsum(t, τactive) + us ≤ m − (m − 1) · vmax(t, τactive)

thus, for the non-preemptive task system, us must meet

us ≤ m − (m − 1) · vmax(t, τactive) − vsum(t, τactive)    (15)

The existence of a user-defined value that delimits the minimum value the utilization of τs can take is assumed; let usmin be that value, such that 0 < usmin ≤ us. From (15) and usmin, the value of us is defined:

us := max{usmin, m − (m − 1) · vmax(t, τactive) − vsum(t, τactive)}    (16)

the intention being to assign the highest possible utilization value to us. The budget capacity of τs, Cs, is upper bounded by emax(τactive) (i.e., 0 < Cs ≤ emax(τactive)); this guarantees that there will be no need to recalculate the utilization value of all the tasks in τactive. To guarantee predictability, it is also necessary to ensure that there is always enough budget to execute at least one request in the system; then Cs is bounded as follows:

max{c1, c2} ≤ Cs ≤ emax(τactive)    (17)
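For illustration, the server parameters just derived can be computed as follows (a sketch under the assumptions above; variable names are not taken from the paper):

def server_utilization(v_sum_active: float, v_max_active: float,
                       m: int, u_s_min: float) -> float:
    """Eq. (16): utilization u_s granted to the server tau_s at an interruption;
    u_s_min is the user-defined lower bound."""
    slack = m - (m - 1) * v_max_active - v_sum_active   # right-hand side of (15)
    return max(u_s_min, slack)

def server_budget(c1: float, c2: float, e_max_active: float) -> float:
    """Eq. (17): smallest admissible budget C_s = max{c1, c2}; c1 and c2 are the
    transformation and validation time constants introduced in section 6."""
    c_s = max(c1, c2)
    assert c_s <= e_max_active, "bound (17) cannot be met"
    return c_s

# Example with illustrative numbers: m = 2 processors
u_s = server_utilization(v_sum_active=0.53, v_max_active=0.2, m=2, u_s_min=0.1)  # -> 1.27
c_s = server_budget(c1=0.05, c2=0.08, e_max_active=2.0)                          # -> 0.08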
Once Cs has been set, it is necessary to define the calculation of the absolute deadline of τs, ds. It is calculated in such a way that the interference of the task with the highest job execution requirement, emax(τactive), is considered and all the allocated budget is used up by that deadline:

ds(t) = t + ⌈Cs / us + emax(τactive)⌉    (18)

Although the considered task set consists of jobs with precedence constraints, the analysis is executed at job level. Under the established conditions and the proposed transformations, it is possible to guarantee the execution of a task through its jobs. The rules to replenish the budget are stated next:

1. Initially, at t = 0, us is set according to (16); note that if there is no task in τactive (τactive = φ) then us = m (see definitions (11) and (12)), which means that at the beginning τs may have the platform at its complete disposal.

2. Every time the deadline of τs (ds) is reached, Cs is set to the lowest necessary value it is able to take, Cs = max{c1, c2} (see (17)), and the next deadline ds is calculated again according to equation (18).

6 Interruptions

Interruptions are important for two reasons. First, at every interruption occurrence, at time t, the server's budget is reset according to the number of tasks in τactive. Second, the next interruption is set at instant t+ to assist the released tasks (τreleased). It is assumed that the first interruption occurs at t = 0 and that there is also a minimum utilization value, usmin, defined by the user for the server task τs; then t+ is calculated as follows:

t+ = t + ⌈(φ + x1 + x2) / us⌉    (19)

where φ is the time interval the platform will take to calculate the next interruption, t+, and the budget for the server, while x1 and x2 are the time intervals that the transformation and the validation operations (through condition (10)) will take, respectively; see Figure 2.

Figure 2: Different operation subintervals into which the interval [t, t+) may be divided: x1, x2, and φ

To know how long it takes to calculate t+, the following suppositions are made:

S0. The server task τs is considered always to work under its worst-case scenario, that is, us = usmin (see section 5).

S1. The time required to calculate φ is constant. In an implementation, it is possible to endow the receiving data structures with counters that make it possible to know, at any time, the total number of jobs or tasks in a data structure as required; let cτx_jobs and cτx be the counters that store the number of jobs and of tasks of the data structure τx, respectively.

S2. Both transformation and validation are operations that depend on the number of jobs and of tasks, respectively.

S3. Neither the transformation nor the validation operations take zero time. The amount of time used by these operations is described by two constants: let c1 > 0 and c2 > 0 be such constants for the transformation and the validation, respectively.

From S2 and S3, it is easy to deduce that the complexity of calculating t+ only depends on the number of jobs in τreleased and the number of tasks in τwait.

Lemma 3 (Task transformation time). The time needed to transform the tasks in τreleased depends on the number of jobs it holds:

x1 = c1 · cτreleased_jobs

which can be rewritten as

x1 = c1 · ∑τi∈τreleased ni

where ni is the number of jobs of the task τi ∈ τreleased.

Proof. The proof follows from suppositions S2 and S3.

After performing the transformation of the tasks in Treleased, they are added to τwait, where they wait until acceptance or removal.
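Putting the pieces of this section together, a sketch of the server deadline (18), the transformation time of Lemma 3, and the next interruption instant (19) could look as follows (illustrative names and numbers; the validation time x2 is taken as an input, as characterised in Lemma 4 below):

import math
from typing import List

def server_deadline(t: float, c_s: float, u_s: float, e_max_active: float) -> float:
    """Eq. (18): absolute deadline of tau_s, accounting for the blocking of the
    largest active job and for exhausting the budget C_s at rate u_s."""
    return t + math.ceil(c_s / u_s + e_max_active)

def transformation_time(jobs_per_released_task: List[int], c1: float) -> float:
    """Lemma 3: x1 = c1 times the total number of jobs pending in tau_released."""
    return c1 * sum(jobs_per_released_task)

def next_interruption(t: float, phi: float, x1: float, x2: float, u_s_min: float) -> float:
    """Eq. (19) under supposition S0 (worst case u_s = u_s_min)."""
    return t + math.ceil((phi + x1 + x2) / u_s_min)

# Example with illustrative constants
x1 = transformation_time([3, 2], c1=0.05)                                  # -> 0.25
t_plus = next_interruption(t=10.0, phi=0.02, x1=x1, x2=0.4, u_s_min=0.1)   # -> 17.0
d_s = server_deadline(t=10.0, c_s=0.08, u_s=1.27, e_max_active=2.0)        # -> 13.0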
Note also that it is possible to know, within a bounded time interval, namely [t, t + φ + x1), whether a task cannot be completed by its deadline; this gives the user the possibility of moving the task to another system.

Figure 3: Correspondence between a job-shop on the uniform (heterogeneous) platform (bottom) and its associated pseudo-periodic task on the identical (homogeneous) platform (top)

Lemma 4 (Task validation time). Once the tasks have been transformed, the time needed to validate the tasks in τwait by BAR06 is

x2 = c2 · (2 · cτactive + cτwait)

where cτactive and cτwait represent the number of tasks stored in the data structures τactive and τwait, respectively.

Proof. Let τwait be the data structure in which the tasks wait until either execution or removal (discarding a task). For BAR06, two values need to be calculated from the tasks currently assigned to the platform, τactive: first, the maximum utilization, vmax(t, τactive), an operation that takes time proportional to the number of tasks in τactive, cτactive; second, vsum(t, τactive), which also takes time proportional to cτactive. The tasks in τwait are validated sequentially, thus each additional task trying to become part of τactive does not increase the time complexity; the validation remains linear in the number of tasks that need to be validated, so the time taken by the validation of all tasks in τwait is added.

As mentioned in supposition S1, through the use of counters in every data structure it is possible to obtain the number of tasks and jobs of τreleased, τwait, and τactive in constant time; thus the calculation of x1 and x2 can be considered negligible, and so can φ. Therefore, equation (19) can be reduced to

t+ = t + ⌈(x1 + x2) / us⌉    (20)

7 Theorem

After all these transformations of both the platform and the task set, the validation of tasks, and the interruption mechanism, the following result is stated.

Theorem 1 (Heterogeneous schedulability guarantee). If a set of pseudo-periodic tasks is scheduled on the associated homogeneous platform, then it is also scheduled on the underlying heterogeneous platform.

Proof. The job-shops are treated as periodic tasks, due to their previous transformation (see Lemma 2), and their respective jobs are chosen by global EDFnp at run time. When EDFnp selects a job τ′i,j ∈ τ′i from τactive to execute it, it actually selects its corresponding job τi,j in the job-shop τi, which is assigned to the heterogeneous platform. The start (si,j) and release (ri,j) times of every job are the same on both platforms. Although the jobs assigned to the heterogeneous platform finish their processing in less time, they follow the execution of the homogeneous platform (see Figure 3). Since τ′i ∈ τactive, its schedulability is guaranteed by BAR06. As previously mentioned, the execution requirements are given in terms of the slowest processor speed; then, at run time, when τ′i,j is assigned to the homogeneous platform at instant si,j with an execution requirement of Ci, the associated job τi,j executed on the processor πl does not exceed the limit imposed by the homogeneous platform, that is,

si,j + Ci · sl/smax ≤ si,j + Ci

since 0 < sl ≤ smax ≤ 1 implies sl/smax ≤ 1, where sl is the speed rate of πl.
As the speeds in the heterogeneous platform are faster than in the identical (homogeneous) platform (see Lemma 1), the execution requirement of every job τi,j, ci,j, satisfies ci,j ≤ Ci, where Ci is the execution requirement of the associated pseudo-periodic task τ′i.

8 Conclusions

This extension of BAR06 makes it possible to determine online whether a new entering non-preemptive job-shop is able to meet its deadline over a heterogeneous platform. Since this schedulability analysis extension has linear time complexity in the number of tasks, it is convenient to implement the presented result on embedded systems with limited computational resources, where human activities occur at any moment and generate new aperiodic entering tasks. It is important to emphasise that estimating the time a task stays in an embedded system represents a challenge due to the availability and capabilities of resources.

It should also be noted that important aspects such as memory management and communication delays in task migration are not yet considered. To fulfil current system requirements, most of these features should be included in order to provide a better schedulability analysis. Nevertheless, this is a deterministic approximation that focuses on systems with limited resources on platforms composed of diverse components, and it could be used as guidance for developing other algorithms, metrics, and strategies.

The presented strategy has the potential to be implemented in Decision Support Systems (DSS). One concrete example could be DISPATCHER [5], a family of DSS meant to support the logistics and production control decisions to be made in the context of the continuous process industries and related fields. Another example could be an air traffic management system, where the scheduling conditions change frequently and the required resource usage (for instance, the aircraft take-off and landing areas) must be validated constantly.

Conflict of interest

The authors declare no conflict of interest.

References

[1] D. Applegate and W. Cook, A Computational Study of the Job-Shop Scheduling Problem, INFORMS Journal on Computing, vol. 3, pp. 149–156, May 1991.

[2] H. Baek, J. Kwak and J. Lee, Non-Preemptive Real-Time Multiprocessor Scheduling Beyond Work-Conserving, in 2020 IEEE Real-Time Systems Symposium (RTSS), Houston, TX, USA, 2020, pp. 102–114.

[3] S. K. Baruah, The Non-preemptive Scheduling of Periodic Tasks upon Multiprocessors, Real-Time Systems Journal, vol. 32, pp. 9–20, 2006.

[4] J. Carpenter, S. Funk, P. Holman, J. Anderson, and S. Baruah, A Categorization of Real-time Multiprocessor Scheduling Problems and Algorithms, in Handbook of Scheduling - Algorithms, Models, and Performance Analysis, Chapman and Hall/CRC, pp. 1–19, 2004.

[5] F. G. Filip, A Decision-Making Perspective for Designing and Building Information Systems, International Journal of Computers Communications & Control, vol. 7, no. 2, pp. 264–272, Sep. 2014. ISSN 1841-9844.

[6] M. R. Garey and D. S. Johnson, Computers and Intractability: A Guide to the Theory of NP-Completeness, Books in Mathematical Series, W. H. Freeman, 1979.

[7] N. Guan, W. Yi, Z. Gu, Q. Deng, and G. Yu, New Schedulability Test Conditions for Non-preemptive Scheduling on Multiprocessor Platforms, in 2008 Real-Time Systems Symposium, pp. 137–146, Nov 2008.

[8] K. Jeffay, D. F. Stanat, and C. U.
Martel, On non-preemptive scheduling of periodic and sporadic tasks, in Proceedings Twelfth Real-Time Systems Symposium, pp. 129–139, Dec 1991.

[9] S. Nogd, G. Nelissen, M. Nasri and B. B. Brandenburg, Response-Time Analysis for Non-Preemptive Global Scheduling with FIFO Spin Locks, in 2020 IEEE Real-Time Systems Symposium (RTSS), 2020, pp. 115–127.

[10] M. A. Palomera Perez, H. Benitez-Perez and J. Ortega-Arjona, Coordinated Tasks: A Framework for Distributed Tasks in Mobile Area Networks, vol. 5, May 2011.

[11] S. Kato and N. Yamasaki, Scheduling Aperiodic Tasks Using Total Bandwidth Server on Multiprocessors, in 2008 IEEE/IFIP International Conference on Embedded and Ubiquitous Computing, vol. 1, pp. 82–89, Dec 2008.

[12] J. Zhang, G. Ding, Y. Zou, S. Qin, and J. Fu, Review of job shop scheduling research and its new perspectives under Industry 4.0, Journal of Intelligent Manufacturing, vol. 30, no. 4, pp. 1809–1830, Apr 2019.

Copyright ©2021 by the authors. Licensee Agora University, Oradea, Romania. This is an open access article distributed under the terms and conditions of the Creative Commons Attribution-NonCommercial 4.0 International License. Journal's webpage: http://univagora.ro/jour/index.php/ijccc/

This journal is a member of, and subscribes to the principles of, the Committee on Publication Ethics (COPE). https://publicationethics.org/members/international-journal-computers-communications-and-control

Cite this paper as: Hermosillo-Gomez, J. A.; Benitez-Perez, H. (2021). Job-shop scheduling over a heterogeneous platform, International Journal of Computers Communications & Control, 16(3), 4221, 2021.