Maria Mikela Chatzimichailidou 1, Stefanos Katsavounis 2, Dusko Lukac 3
1 Polytechnic School, Department of Civil Engineering, Democritus University of Thrace, Greece
2 Polytechnic School, Department of Production and Management Engineering, Democritus University of Thrace, Greece
3 University of Applied Sciences, Rheinische Fachhochschule Köln GmbH, Germany
Management 2015/74
A Conceptual Grey Analysis Method for Construction Projects
UDC: 005.8.005.22 007:005.82
DOI: 10.7595/management.fon.2015.0005
XIV International Symposium SymOrg 2014, 06 - 10 June 2014, Zlatibor, Serbia

Abstract: For engineers, project management is a crucial field of research and development. Projects of high uncertainty and scale are characterized by risk, primarily related to their completion time. Safe duration estimations, made throughout the planning of a project, are therefore a key objective for project managers. However, traditional linear approaches fail to capture the dynamic nature of activity durations. On this ground, attention should be paid to designing and implementing methodologies that approximate activity durations during the planning and scheduling phase as well. Grey analysis mathematical modeling seems to gain ground, since it is gradually becoming a well-adapted and up-to-date technique in numerous scientific sectors. This paper examines the contribution of the logic behind this analysis, aiming to predict possible future divergences of task durations in big construction projects. Based on time observations at critical instances, a conceptual method is developed for making duration estimations and communicating deviations from the original schedule, so that approximations fit reality better. The whole procedure endeavors to decrease uncertainty regarding project completion time and to reduce, up to a scale, possibly inaccurate estimations of a project manager. The ultimate aim is to exploit the gained experience and eliminate the "hedgehog syndrome". This is attainable by designing a reliable, easily updated, and readable information system. An enlightening example is to be found in the last section.

Keywords: Grey Analysis, Information System, Project Management, Technical Constructions, Uncertainty

1. Introduction

It is apparent that project management is the core business process and competency, for which project managers (PMs) create and monitor project schedules. Probabilistic approaches were primarily used to bridge the gap between conflicting issues, such as cost, time, and uncertainty, whilst the newly introduced systems thinking suggests a more holistic view, under which the analyst should take into account many phenomena of the project's internal and external environment [1].

Probability and statistics, fuzzy mathematics, and grey systems theory are the three known research methods employed for the investigation of uncertain systems. Probability and statistics study the phenomena of stochastic uncertainty, with the emphasis placed on revealing the historical statistical laws. They also investigate the chance for each possible outcome of the stochastic uncertain phenomenon to occur. Their starting point is the availability of large samples that are required to satisfy a certain typical form of distribution. PERT, the most traditionally used statistical tool in project management, is a method that analyzes the tasks involved in completing a given project, especially the time needed to complete each task, and identifies the estimated minimum time needed to complete the total project [1]. The PERT technique is a specific type of the three-point estimation technique. The difference between them lies in the weights that PERT introduces in the formula of mean task durations; this weighting technique of PERT is known as the weighted average. PERT's drawbacks are probability approximations and the lack of critical resources. In the context of the present paper, one of PERT's significant disadvantages is that it presupposes that all tasks are invariably described by the same shape and scale parameters of the beta probability distribution that "works behind" PERT. It is apparent that PERT is insufficient in the case of dynamic scheduling and projects with randomly distributed duration data [1]. There is indeed a necessity for tasks, not only the critical ones, to be systematically and carefully reconsidered, both in the planning and in the execution phase.

As regards the uncertainty of task durations in large-scale technical projects, the literature offers little evidence of approaches that involve stochastic activity durations. Some of the most cited research works, e.g., [2], [3], refer to the parallel examination of time and cost, in a manner that cost affects duration and vice versa. There is also a group of research papers, e.g., [4], where duration is correlated with resource sufficiency and other significant and sophisticated research topics, such as the RCMPSP problem. Such problems exceed the scope of this paper, although they have been examined by the authors in the recent past [5]. The aim of the present paper is not to question other proposed techniques in a general framework; its contribution lies in advocating the use of the experience gained through data derived from past analogous large projects. Apart from conventional methods, which are commonly used in project management, there is a need for a new perspective on the estimation of activity durations, based apparently on actual task durations.
These data have to be taken into account not only during the planning phase of a project, by gathering information on similar past situations, but also in the execution phase. This can be achieved by utilizing any data of current projects. To this extent, grey analysis, as a group of mathematical models, is examined to introduce a different point of view. Grey systems and fuzzy logic models proceed from the same principle: they investigate situations with no crisp values but fuzzy ones. Grey analysis is still in an infantile state; it is very popular in environmental matters though, since these exhibit great endogenous complexity. Grey models are a practical tool when sequences of data are available, something very common in activity durations. In line with this, and by utilizing previous time data, the present work focuses on building a database and an information system upon it.

This need arose after reviewing a conference paper [1] on grey models in project management. After the paper was accepted by the conference editor, the co-authors realized that there was a pool of latent knowledge that should definitely be utilized. This pool is the experience of decision makers and project managers. As mentioned before, there does exist a so-called "hedgehog syndrome". This phenomenon refers to the inability of project contributors to learn from experiences and utilize their knowledge in a way beneficial to upcoming projects [6]. Hedgehogs, for instance, are killed on the road, although they already know the existing risks. But why don't they learn? The answer is often hidden in plain sight, but people can be blind to it. It is the feedback loops that render knowledge more concrete and transferable. Nothing can be gained by repeating the same mistakes and problematic processes; structured and standardized knowledge backups, however, are promising.

2.
Grey analysis

Many social, economic, agricultural, industrial, ecological, biological, etc. systems are named by considering the features of the classes of research objects, while grey systems are labeled using the color of the systems of concern. In control theory, people often use colors to describe the degree of clearness of the available information. Objects with unknown internal information are black boxes, where "black" indicates unknown information, "white" signifies completely known information, and "grey" partially known and partially unknown information. The research objects of grey systems theory are uncertain systems that are known only partially, with small samples and poor information. The theory focuses on the generation and mining of the partially known information, to accurately describe and understand the material world. Incomplete system information may arise either from the behavior of the system, or from its structure, boundaries, and elements. Incompleteness in information is the fundamental meaning of being "grey". It is apparent that such a general framework includes many interrelated fields. The main contents of grey analysis (GA) are [7]: the grey algebraic system, grey sequences, and grey cluster evaluation and analysis. Moreover, GM (1,1) is the precondition for any estimation or prediction model, and decision-making models are represented by multi-attribute intelligent grey target decision models. The system of grey combined models is innovatively developed for producing new and practically useful results, and the optimization model consists mainly of grey programming, grey input-output analysis, grey game theory, and grey control.
2.1 The GM (1,1) and Other Models

The GM (1,1) model is the best-known grey forecasting model, widely applicable in industry, agriculture, society, and economy, and it is indispensable among a range of other models such as GM (n, m), the Verhulst model, the grey econometric model, grey Markov, and grey neural network models [7]. The contribution of this model is that there is no need to moderate the data; they are used as raw information. It is also an estimation and prediction model, suitable when the amount of data is low and decision makers should be objective and efficient. It belongs to the broader family of GM (n, m) models, where n indicates the order of the derivative and m the number of variables forming the input of the model. Hence, GM (1,1) is the grey model of the first order and of one variable. In order to smooth the randomness, the primitive data obtained from the system to form the GM (1,1) are subjected to an operator named the accumulating generation operator (AGO). The differential equation of the GM (1,1) model is solved to obtain the k-step-ahead predicted value of the system. Finally, using the predicted value, the inverse accumulating generation operator (IAGO) is applied to find the predicted values of the original data [8].

3. Information system design

The main contribution of the present paper lies in the fact that there has been no attempt, until now, to correlate grey analysis with project management. Some pending challenges are presented below:

Ch.1 How are task durations estimated?
Ch.2 Is this the responsibility of project managers only? Do they use their experience?
Ch.3 Is there any documentation concerning previous similar projects?
Ch.4 Are there any comparative data between observed and estimated durations?
Ch.5 Are data carefully enough selected, so as to represent the duration of a task at crucial time points, i.e., 30%, 50%, 70% task completion, according to S-curves?
Ch.6 Would it be helpful to systematically make measurements for each task of a given project?

These challenges frame the context of the present paper and shape the whole algorithmic procedure, so as to achieve the best information flow and utilization. The information system (IS) should cover both the planning and the execution phase, so that it is realistic and effective. Information systems are the basic tool that contributes to bringing out and reinforcing the information flow and display. Above all stands a database (DB), which is associated with information and records stemming from previous projects. Proceeding from this necessity, the following list of necessary elements is compiled; in practice, this constitutes the IS.

ID: Task ID is a unique number assigned to each new registration in the DB.

Description: It simply refers to the name of the task, so that the PM knows which task is being examined.

Type_ID: It is suggested to divide the tasks into categories, so as to shape groups of tasks that behave alike. These types are not random; they are carefully chosen, based on deep knowledge of technical elements, e.g., foundation, scaffold, etc., concerning the tasks and the project as a whole. It is apparent, though, that according to the type of the project, e.g., construction, IT, research, manufacturing, tasks are dispensed to types in different ways, and there are thus different types of tasks as well. Interestingly, due to singularity characteristics, there may exist only one task of a given type.

Type_Name: This information is coupled with the "Type_ID", because it describes the type to which a task may belong.

Status: The status evidently depicts the phase in which a task evolves.
There are four values referring to status: the value "zero" (0) is for tasks totally executed (100%) in normal duration, whereas "one" (1) is for tasks totally executed in crashing duration, i.e., "compressed" task duration. Tasks with execution percentage <100%, i.e., semi-executed, are given the value "two" (2) for normal duration and "three" (3) for the crashing one.

Number: In respect of calculations, mostly referring to GA models and models that utilize sequences of data, the number of time-points comprising a time-series is not random; it needs to be chosen according to the nature and the constraints of the project under consideration. In physical terms, these points correspond to audits: the exact number of audits performed, and the observation data gathered during them, are known. It is of high significance to keep this kind of information, since it constitutes the input data for any sequence-based model, such as GM (1,1).

%Complete: This percentage represents the completion rate of each task of a project; the value is entered manually. In this way the PM is always informed about the course of events and can decide how and to what extent he/she will take advantage of the available information. Some possible scenarios are the following: (a) If a task is 100% complete, then he/she can use this information, as gained experience, to manage another project. (b) If, on the contrary, the task is incomplete, i.e., <100%, then the PM uses the known time-points to predict the remaining part of the incomplete task. At this point, the "Number" information is indispensable for predicting the remaining time that a task needs to be completed.
Project_code: This information refers to the project to which the examined task belongs, and it is useful mainly for the following reason: in case the PM needs to ponder the behavior of the task within the project context and consider its influence on the project, he/she also needs to perceive the project as a whole, i.e., as the interaction between the tasks.

Project_date: In huge construction projects, e.g., superstructures, engineers and managers usually deal with tough decisions related to adjustments in the course of events, or in subprojects that need to be redesigned and re-planned, in order for the whole project to be successfully completed. These adjustments are inevitably affected by the level of the available technological infrastructure and the innovative ideas of the team of engineers, their utmost goal being to confront challenging conditions, such as weather phenomena, composition of soil, and even cultural characteristics, as in, e.g., the Shanghai World Financial Center. It is thus vital to base any decision on experience gained from projects belonging to the recent past. The project date refers to the project completion date and may be considered a practical filter for avoiding obsolete data. If the DB is further extended, other kinds of dates, such as the date when a project was delayed or when it malfunctioned, could be used as warning indicators for future projects.

Observed_total_duration: It corresponds to the total duration of tasks already completed, i.e., 100%. The tasks may belong either to a previous project or to the ongoing one. It is one of the most important registrations, because the PM can select from the DB the task or tasks that have a total observed duration very close to the one pursued.

Observed_Values: At this point, the DB displays the time-points of a sequence of data separately.
These points refer to the time-points where "informal" checks, i.e., not project milestones but task checks, took place during the execution of a task. Practically, the points may constitute input data for the GM (1,1) model, so as to estimate or predict task and project durations.

Estimated_total_duration: It is the output of the GM (1,1), or any other time-series model. Here, the estimated total duration values derive from using the "Observed_Values" as input data to the GM (1,1) model, as used herein.

Estimated_values: The same as in the case of "Observed_Values".

Relative_error: The deviation between the "Observed_total_duration" and the "Estimated_total_duration" signifies the error between the two values. Mainly for practical reasons, the percent error is preferable, and it is also by far easier to compare such types of error.

4. The conceptual model

Aiming to reclaim and use the hidden information codified through the aforementioned classification, a conceptual model describing the fundamental decision and action steps is of high significance. In the following, normal durations in the planning and execution phases are examined in turn.

Before elaborating on the method, it is useful to point out that in the proposed model resources are not taken into account directly, since the GM (1,1) model only uses time-series data. Under this consideration, the PM has to determine and address possible resource deficiencies on his/her own. The input data and estimations will be affected by this assumption.

Figure 1: IS matrices and types of relations

4.1 Planning Phase

The conceptual model, i.e., the algorithmic steps, constitutes the core of the suggested procedure. The planning phase includes a sub-process named the "preprocessing" phase, which is the time period during which the PM should make some crucial and subjective decisions about the project as a whole.
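The database elements listed in Section 3 can be collected into a single task table. The following SQLite sketch is only an illustration of that structure: the column names, types, and sample values are our assumptions and are not prescribed by the paper.

```python
import sqlite3

# Hypothetical one-table layout for the task DB of Section 3.
SCHEMA = """
CREATE TABLE task (
    id                       INTEGER PRIMARY KEY,  -- ID: unique registration number
    description              TEXT,                 -- task name
    type_id                  INTEGER,              -- Type_ID: task category
    type_name                TEXT,                 -- Type_Name
    status                   INTEGER,              -- 0/1 fully executed, 2/3 semi-executed
    number                   INTEGER,              -- Number: count of audit time-points
    pct_complete             REAL,                 -- %Complete
    project_code             TEXT,                 -- Project_code
    project_date             TEXT,                 -- project completion date
    observed_total_duration  REAL,
    observed_values          TEXT,                 -- e.g., "2,4,1,7,2,2,3,2,3,4"
    estimated_total_duration REAL,
    estimated_values         TEXT,
    relative_error           REAL                  -- percent error
);
"""

conn = sqlite3.connect(":memory:")
conn.execute(SCHEMA)
# A sample "foundation"-type entry, fully executed in normal duration (status 0)
conn.execute(
    "INSERT INTO task (description, type_id, status, number, pct_complete,"
    " observed_total_duration, observed_values) VALUES (?,?,?,?,?,?,?)",
    ("Foundation task", 1, 0, 10, 100.0, 30.0, "2,4,1,7,2,2,3,2,3,4"),
)
# The kind of lookup Step 2 of the planning phase performs: same type, Status=0
rows = conn.execute(
    "SELECT description FROM task WHERE status = 0 AND type_id = 1"
).fetchall()
print(rows)
```

Such a table also makes the one-to-many relationship of the planning phase concrete: one current task is matched against many "older" rows via the Status, Type_ID, amplitude, error, and date filters.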
PLANNING

-Start Preprocessing-

Step 1: Create the first project sheet in any project management software, where the PM will enter only his/her subjective estimations about each activity of the project. Namely, he/she should decide on the following:
• The tasks of the project
• The task relationships: finish-to-start (FS), start-to-start (SS), finish-to-finish (FF), start-to-finish (SF)
• The sequencing constraints, e.g., "not earlier than"
• Time estimations for each task, made by the PM

The step that follows is the critical path calculation, which will draw a first picture of the project duration. After this, the PM can save the first baseline file of the project. That baseline will be updated in the next steps of the planning phase.

-Finish Preprocessing-

Using the DB to update the PM's estimations: For each of the tasks, the PM should trace back to the DB, so as to assign one or more (one-to-many relationship, 1 → ∞) "older" tasks to the one currently examined.

Step 2: Search the DB for tasks having Status=0, i.e., tasks already executed, and the same Type_ID, since the request is to look for tasks that belong to the same category or exhibit analogous behavior.

Step 3: In this step, the PM should consider the maximum deviation amplitude between his/her estimation and the DB entry. If this amplitude constraint is not satisfied by at least one registration, the PM should proceed to the closest one.

Figure 2: Algorithmic steps in the planning phase

Step 4: After the filtering, there are three possible scenarios for the DB entries:
1. More than one task has "passed through the filters"
2. No task has "passed through the filters"
3. Only one task has "passed through the filters"

Scenario 4.1: In this case one more filter, referring to "Relative_error", is put into action. The accepted values should lie below this threshold. In case of a tie, the "Project_date" filter is activated.
New ties are broken arbitrarily by the PM. If the DB entry-task has a duration consistent with the PM's estimation, then (1) either the sequence of data is kept as the one in the DB, or (2) it is changed in such a way that the sum of the time-points is equal to the initial one. Then an estimation of the duration, based on GM (1,1), is made. If the two duration estimations disagree, the sequence should inevitably change, in such a way that the sum of the time-points equals the PM's initial estimation. If, for example, the PM makes an estimation which is higher or lower than the DB observed value, the sequence is slightly altered, in order to reach the PM's estimation. In either case the GM (1,1) estimation model is then applied.

Scenario 4.2: If no entry is compatible, then the PM leans on his/her own subjective estimation.

Scenario 4.3: In this case the same procedure is followed as in Scenario 4.1.

Step 5: Project managers perform the same actions for all the tasks of the project and also calculate the GM (1,1) estimations. Then the baseline file has to be updated and saved as BASELINE 0. This is the project plan that will be used in the execution phase. Fig.2 is a graphical representation of the aforementioned procedure regarding normal duration in the planning phase.

4.2 Execution Phase

In this phase, the objective is to approximate reality. It is of high importance to collect as much information as possible and translate it into time-points, which shape sequences of data, i.e., the input of sequence-based models. A safe (to an extent) prediction, with low uncertainty, needs to be achieved. In the case of construction projects, the lowest number of time-points in a sequence of data is set to five; this means that one needs at least three time-points to predict the other two, in the case of an unfinished task.
This number depends on the total duration of the task and on some technical and practical issues, such as cost, available human resources to perform control actions, and avoidance of disruption. Frequent and informal checks, apart from predefined project milestones, aim to facilitate information management and flow, not to introduce process stiffness. For a long-lasting task, a relatively satisfactory number of time-points is ten. To predict such an incomplete task, and for the same reasons argued above, six out of ten time-points are adequate.

EXECUTION

Step 1: One should first specify the dates of the milestones. A milestone is an event that receives special attention; in this case, the milestones approximate 30%, 50% and 70% of the total project duration, where projects tend to alter their behavior and performance. Check dates should be posterior to the preceding ones and definitely posterior to the project start date.

Figure 3: Algorithmic steps in the execution phase

Step 2: For each and every task that has already started before the check dates, there are the following options:
1. The task or tasks have finished, i.e., 100% completion, before the check date. In this case, the PM should enter into the project management software the observed-actual start and end dates, which may differ from the planned ones. The "%Complete" is registered as equal to 100%. The sequence of data is also registered in the DB and the "Status" is set to "0" (normal duration) or "1" (crashing duration).
2. There is a case where the starting date of a task precedes the check dates, but this is according to the planning. If during the execution phase this task is delayed for some reason, then the PM should enter 0% in the "%Complete" DB element.
3.
If the task has already started and is still ongoing, then the following possibilities are recognized: If there is an insufficient number of known time-points, i.e., the unofficial checks are not enough or the task is at a premature stage, then the PM had better avoid any sequence-based prediction model. He/she could enter the precise observed value of "%Complete" and the new starting date, if this has changed due to delays or accelerations. If there is a sufficient number of known time-points, the GM (1,1) forecasting model takes these points as input and predicts the rest of the task duration. The predicted values are registered in the project management software, so as to predict the new total project duration.

Step 3: The last step is to update the project schedule and save the new baseline file, i.e., BASELINE 1. This procedure is repeated for every milestone of the project. After the last check date, i.e., milestone, the project management software displays the most recent prediction, referring to the updated project duration and end date. Fig.3 illustrates the execution process.

5. Comprehensive numerical example

The numerical example is the one presented by Chatzimichailidou et al. in [1]. Since it was the incentive for designing the aforementioned rudimentary IS and DB, the authors intentionally chose to present the same example, to clear the mist. What is more, using the same example also verifies that the suggested DB works in practice and can be further developed and reinforced to better cover real needs.

5.1. Planning Phase

Fig.4 [1] refers to task durations according to the PM's estimations. In particular, it displays the durations of thirteen simply enumerated tasks, since bridges consist of specialized tasks which are generally difficult to understand.
In Fig.4 the critical tasks are given, whilst the durations included in the table are those determined during the planning phase by the project management team (project managers), drawing on their experience in bridge construction. According to the planning calculations, the project is going to last 101 days, due to the critical path total duration. This is the point where the planning phase starts.

Figure 4: Basic information about the bridge construction project with PM's estimations

Figure 5: Data base with activated Type_ID and Status filters

The project tasks having been decided, the next step is to search for appropriate registrations in the DB. The "%Complete" filter is applied first for the first task, i.e., task A. The Status and Type_ID filters are also activated. This is because task A refers to the "foundation" type of tasks, and the status is "0" for tasks totally executed in normal duration. The compatible entries are depicted in Fig.5. Next, the amplitude of the task durations should be checked, so as to further reduce the DB registrations. According to Fig.5, and having defined an amplitude of ±2, the entry with a total observed duration of 30 days satisfies the following filters: amplitude, error, date. To sum up:
- PM's estimation: 28d
- DB entry: 30d
- The given sequence of data (see Fig.5) is (2,4,1,7,2,2,3,2,3,4), meaning that the time-points were time snapshots where unofficial milestones should take place because of critical subtasks. There is a slight change, i.e., the sequence is reduced by two units, so as to have a sum of 28; the new sequence of data is (2,4,1,5,2,2,3,2,3,4), which represents the input to the GM (1,1) estimation model.
- GM (1,1) estimation: 27.99d (27d and 99/100·24h ≈ 27d and 23h)

In the same manner, the whole procedure concludes with a summary, Fig.6 [1], of the GM (1,1) estimations for all the other tasks.
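The figure for task A can be checked with a minimal GM (1,1) sketch in Python/NumPy: the function names are ours, but the steps are exactly the AGO, least-squares parameter fit, and time-response evaluation described in Section 2.1.

```python
import numpy as np

def gm11_fit(x0):
    """Fit GM(1,1): return the development coefficient a and grey input b."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                         # AGO: accumulating generation
    z1 = 0.5 * (x1[1:] + x1[:-1])              # mean generated sequence of x1
    B = np.column_stack([-z1, np.ones_like(z1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    return a, b

def gm11_cumulative(x0, a, b, k):
    """Time response: fitted cumulative value x1_hat(k); differencing
    (IAGO) would recover the individual time-points."""
    return (x0[0] - b / a) * np.exp(-a * (k - 1)) + b / a

seq = [2, 4, 1, 5, 2, 2, 3, 2, 3, 4]           # adjusted sequence, sum = 28
a, b = gm11_fit(seq)
total = gm11_cumulative(seq, a, b, len(seq))   # fitted total task duration
print(round(total, 2))                         # ≈ 28.0, in line with the 27.99d above
```

Small rounding differences aside, the fitted total duration lands within a few hundredths of a day of the 27.99d reported in the example.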
As one may notice, some tasks show a big amplitude between the PM's and the GM (1,1) estimations, and others do not; task H is an example of the former. According to Fig.6, this task was estimated to last 8 days, and it finally takes 7.16 days to be completed. This shows a great gain of time, while on the contrary there are tasks, e.g., task A, which show low fluctuation in their estimated durations. This could stem from the distribution of time-points during the checks, and shows that the PM's prediction is very close to the model output. In a complex and vastly expensive project, such as long-lasting construction projects, even hours can affect the whole project outcome if someone attempts to accumulate them. Increased sensitivity is one of the most meaningful advantages of the proposed method, and it is the first one presented in the literature.

Hence, the model estimates that the project will be completed 9 hours earlier (100d & 15h) than stated in the initial PM's estimation (101d). The amount of 101 days belongs to the PM's initial estimations and 100d & 15h to the first baseline file, i.e., BASELINE 0. This is the point where the planning phase finishes and the rescheduling, due to execution updates, starts.

5.2. Execution phase

In the execution phase, task-progress data are measured by the staff who monitor the tasks and subtasks. These data form the input of the GM (1,1) prediction model, and then the new estimated duration of the unfinished task is calculated. The input data of the GM (1,1) prediction model are a sequence belonging to the already executed part of the project. Each sequence has a one-to-one relation with a project task, and each point of the sequence represents the number of days needed to partially execute the task considered at that time. The days corresponding to the points of one sequence are those observed during milestone controls, i.e., unofficial inspections by the staff.
Along these lines, the number of tasks concurs with the number of sequences. The number of time-points constituting a sequence of data depends on whether the task is about to last long or not, and can be subjectively determined [1].

Figure 6: The PM's and GM(1,1) estimations in the phase of planning

Figure 7: Delayed and forwarded tasks during the first check

At this stage, the PM has to choose some milestones, i.e., official inspections, to monitor and possibly update the project schedule. These milestones are usually imposed by S-curves, to the degree that each project shows a change in its behavior near 30%, 50% and 70% of its total duration [9]. In the examined project, the first check takes place at 35% [1] (36d of the project gone by), because the project needs to have run and proceeded enough for sufficient data to be obtained for the forecasting process.

Tasks A and C are already executed, whilst B, D, and E are still under construction. Keeping in mind that the data concern these unfinished tasks, the GM (1,1) prediction model is now used to approximate the new execution durations of B, D, and E [1]. Specifically, the sequence of data for task B is (2,5,4,-,-), which means that eleven days of work have gone by, while the "-" symbol means that two milestones remain to be checked and forecasted by GM (1,1). Similarly, the sequence for D is (3,2,5,-,-), and for E (2,4,3,-,-). The information concerning tasks B, D, and E is presented in Fig.7 [1].

The last step is to calculate the new total project duration, after the delay of B and the unexpected progress of D and E. However, there is no change after the rescheduling, because B is not a critical task, so the project is going to be executed in 100 days. The new version of the baseline is saved as BASELINE 1.
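For illustration, the same GM (1,1) machinery can be applied to task B's three known time-points to forecast the two pending ones. This is our own sketch; the numbers it produces are what this minimal implementation yields, not values quoted from Fig.7.

```python
import numpy as np

def gm11_forecast(x0, horizon):
    """Fit GM(1,1) on the known points x0 and extend the fitted
    cumulative curve x1_hat by `horizon` forecast steps."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                         # AGO
    z1 = 0.5 * (x1[1:] + x1[:-1])              # mean generated sequence
    B = np.column_stack([-z1, np.ones_like(z1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    k = np.arange(1, len(x0) + horizon + 1)
    return (x0[0] - b / a) * np.exp(-a * (k - 1)) + b / a

task_b = [2, 5, 4]                       # B's known time-points: (2,5,4,-,-)
x1_hat = gm11_forecast(task_b, horizon=2)
pending = np.diff(x1_hat)[-2:]           # IAGO: the two forecast intervals
print(pending.round(2), round(x1_hat[-1], 2))
```

Under this sketch, the two pending check intervals come out at roughly 3.2 and 2.6 days, i.e., a forecast total of about 16.7 days for task B, against the eleven days already elapsed.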
The same procedure is performed at 50% of the project duration (BASELINE 2, 54d of the project gone by), where F is unfinished, and then at 70% (BASELINE 3, 68d of the project gone by). At this last check, H is unfinished, but there are not sufficient data to calculate its duration via GM(1,1), which explains the question mark in Fig.8 [1]. Likewise, a task with a very short duration, e.g., 3d, is not appropriate to be divided into time-points, i.e., unofficial milestones.

Figure 8: The PM's and GM(1,1) estimations in the phase of scheduling and execution by focusing on tasks

Figure 9: The different durations in planning and execution phases

Fig.9 [1] refers to the project as a whole and takes into account all the changes that took place in Fig.8. Since there is a high possibility for the critical activities to change when durations change, they have to be carefully reconsidered. A typical case in complex projects is the existence of multiple critical paths. It must be feasible to detect and examine multiple critical paths, even though the known software packages present only one of them at a time. The problem probably lies in the choice of algorithmic criteria, thus a suitable algorithmic process could be used to generate all the possible critical paths, assisting project managers in finding the most appropriate schedule solution.

Discussion and conclusions

The fluctuating duration is the most critical issue in large-scale complex projects and implicates mismatches and failures, affected by the involved uncertainty. Along these lines, combining the data and information of traditional project reports, concerning current and previous projects, may contribute to organizing suitable and elaborate databases, so that helpful information can be retrieved when, and if, needed.
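The enumeration of all critical paths argued for above can be sketched with the standard CPM forward and backward passes followed by a depth-first walk over zero-slack links. This is our own illustrative sketch with hypothetical tasks, not an algorithm from the paper:

```python
from collections import defaultdict, deque

def all_critical_paths(durations, deps):
    """durations: {task: duration}; deps: list of (predecessor, successor) pairs.
    Returns (project_duration, list of all critical paths)."""
    succs = defaultdict(list)
    indeg = {t: 0 for t in durations}
    for p, s in deps:
        succs[p].append(s)
        indeg[s] += 1
    # Kahn's algorithm: a topological order of the precedence graph
    order, queue = [], deque(t for t in durations if indeg[t] == 0)
    while queue:
        t = queue.popleft()
        order.append(t)
        for s in succs[t]:
            indeg[s] -= 1
            if indeg[s] == 0:
                queue.append(s)
    # Forward pass: earliest start times
    es = {t: 0 for t in durations}
    for t in order:
        for s in succs[t]:
            es[s] = max(es[s], es[t] + durations[t])
    finish = max(es[t] + durations[t] for t in durations)
    # Backward pass: latest start times
    ls = {t: finish - durations[t] for t in durations}
    for t in reversed(order):
        for s in succs[t]:
            ls[t] = min(ls[t], ls[s] - durations[t])
    critical = {t for t in durations if es[t] == ls[t]}  # zero total slack
    # Depth-first walk along zero-slack links collects every critical path
    paths = []
    def walk(t, path):
        nexts = [s for s in succs[t]
                 if s in critical and es[s] == es[t] + durations[t]]
        if not nexts and es[t] + durations[t] == finish:
            paths.append(path + [t])
        for s in nexts:
            walk(s, path + [t])
    for t in critical:
        if es[t] == 0:
            walk(t, [])
    return finish, paths
```

For a toy network in which two branches of equal length join, e.g., A(3d) feeding B(2d) and C(2d), which both feed D(4d), the sketch reports a 9-day duration and both critical paths A-B-D and A-C-D, whereas a single-path scheduler would show only one of them.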
Besides, even the PM's experience concerning time uncertainty can be effectively transformed and used to predict the duration of the overall project. An easily implemented IS can be developed as a suitable tool for the proposed grey analysis (GA) conceptual model. The fact that the initial GM(1,1) estimation, presented in the previous numerical example, indicated that the project will be completed only nine hours earlier than the PM's estimation does not constitute a weakness of the method suggested in this paper. The explanation may be twofold. On the one hand, there is always a possibility for the PM to make well-aimed estimations, owing, for example, to his/her experience. Besides, the GM(1,1) estimation is based on the entries of the DB, which implies that experience from previous projects is indeed significant for the currently executed project. On the other hand, the fact that in the above numerical example the gap between the PM's estimation, i.e., 101d, and the GM(1,1) estimation, i.e., 100d & 15h, is not too wide does not necessarily entail that this is the norm. The GM(1,1) estimation is always affected by the data registered in the DB. The efficiency of the proposed method can be further tested by using a crashing mode for each of the project activities. Since crashing becomes relevant only during execution, through corrective movements, the PM has to take into account the uncertainty of the crashed durations, so that he/she can schedule realistic emergency plans and avoid serious cost overruns due to underestimated time delays. Technical reports of activities that are not yet completed have to be considered carefully because of their importance to the overall project duration. One of our forthcoming research plans is to appropriately fit the above method to crashing settings as well. However, the conceptual model of the crashing mode does not seem to be simple.
That is, due to the plethora of constraints and unforeseen circumstances in the crashing of a project, it would not be realistic to suggest the same conceptual model. The crashing process may sound simple on a theoretical and experimental basis; however, the validation of the crashing method is quite an extended task and needs a whole paper dedicated to it in order to be adequately presented, explained, and validated. As mentioned in the main body of this paper, this work comprises the only published extensive attempt that, from the very planning phase, uses data in combination with grey mathematical modeling to support PMs in decision making. It also shows that GM(1,1) is the key factor in utilizing the PM's gained experience as well as information from previous projects. To conclude, the proposed method and its corresponding testing and validation are offered to scholars as an incentive to exercise and even doubt it.

REFERENCES

[1] Chatzimichailidou, M.M., Katsavounis, S., and Lukac, D., "Project Duration Estimations Using Grey Analysis Models", International Conference for Entrepreneurship, Innovation and Regional Development, Istanbul, Turkey, 2013.
[2] Ke, H., and Liu, B., "Project Scheduling Problem with Stochastic Activity Duration Times", International Journal of Applied Mathematics and Computation, 168 (2005) 342-353.
[3] Ke, H., Ma, W., and Chen, X., "Modeling Stochastic Project Time-cost Trade-offs with Time-dependent Activity Durations", International Journal of Applied Mathematics and Computation, 218 (2012) 9462-9469.
[4] Zhu, G., Bard, J., and Yu, G., "A Two-stage Stochastic Programming Approach for Project Planning with Uncertain Activity Durations", Journal of Scheduling, 10 (2007) 167-180.
[5] Chatzimichailidou, M.M., Katsavounis, S., Chatzopoulos, C., and Lukac, D., "Mass Customization as a Project Portfolio for Project-oriented Organizations", ACTA Technica Corviniensis, Bulletin of Engineering, 6 (2012), available online.
[6] Maylor, H., Project Management, Pearson, Essex, 2010.
[7] Liu, S.F., and Lin, Y., Grey Systems Theory and Applications, Springer Verlag, Berlin, 2010.
[8] Kayacan, E., Ulutas, B., and Kaynak, O., "Grey Theory-based Models in Time Series Prediction", Journal of Expert Systems with Applications, 37 (2009) 1784-1789.
[9] Cioffi, D.F., "A Tool for Managing Projects: An Analytic Parameterization of the S-curve", International Journal of Project Management, 23 (2005) 215-222.

Received: December 2014. Accepted: February 2015.

About the Authors

Maria Mikela Chatzimichailidou
Polytechnic School, Department of Civil Engineering, Democritus University of Thrace, Greece
mikechat@civil.duth.gr

Maria Mikela Chatzimichailidou was born on 20 February 1988 in Nicosia, Cyprus, and was brought up all over Greece by a military family. She has always been interested in the way things work, which is why she chose to study engineering. Safety in any mode, i.e., time-cost safety, safety of human life and property, and system safety as a specific field of research, constitutes the core of her research efforts. Moreover, awareness of the environment in which artifacts and people (i.e., socio-technical systems) are set is a precondition for systems to be less vulnerable to any kind of risk.

Stefanos Katsavounis
Polytechnic School, Department of Production and Management Engineering, Democritus University of Thrace, Greece

Dr. Stefanos Katsavounis works as an assistant professor at Democritus University of Thrace, Xanthi, Greece. He also participates as a lecturer in the MSc Program on Construction Project Management at the Aristotle University of Thessaloniki, School of Civil Engineering.
Duško Lukač
University of Applied Sciences, Rheinische Fachhochschule Köln GmbH, Germany

Dr. Duško Lukač is a lecturer at Rheinische Fachhochschule Köln, University of Applied Sciences in Cologne, Germany. His further responsibilities include cooperation between the Rheinische Fachhochschule and industrial companies, as well as the development and use of joint modules in education. He conducts, and is also involved in, various R&D projects. Dr. Lukač studied in Cologne, London and Krakow and earned his degrees in the fields of engineering sciences and economics.