FACTA UNIVERSITATIS
Series: Mechanical Engineering
https://doi.org/10.22190/FUME200828008Z
Original scientific paper

NOVEL METHODOLOGY FOR REAL-TIME STRUCTURAL ANALYSIS ASSISTANCE IN CUSTOM PRODUCT DESIGN

Milan Zdravković, Nikola Korunović
Faculty of Mechanical Engineering in Niš, University of Niš, Serbia

Received August 28, 2020 / Accepted January 02, 2021
Corresponding author: Milan Zdravković, Faculty of Mechanical Engineering in Niš, University of Niš, ul. Aleksandra Medvedeva 14, Niš, Serbia, E-mail: milan.zdravkovic@gmail.com

Abstract. Mass-customization is about optimizing the balance between flexibility, strongly required by customer-focused industries, and manufacturing efficiency, which is critical for market competitiveness. In conventional industries, the process of designing, validating and manufacturing a product is long and expensive. Common approaches for addressing these issues are parametric product modeling and Finite Element Analysis (FEA). However, the costs involved remain relatively high because of the highly specialized expertise needed and the cost of the specialized software. Also, a specific product design cannot be validated in real time, which often leads to hard compromises between specific customer requirements and the structural properties of the product in exploitation. In this paper, we propose a novel methodology for real-time structural analysis assistance in custom product design. We introduce the concept of the so-called compiled FEA model, a Machine Learning (ML) model consisting of a dataset of characteristic product parameters and associated physical quantities and properties, the selected ML algorithms and the sets of associated hyper-parameters. A case study of creating a compiled FEA model for an internal orthopedic fixator is provided.

Key Words: Machine Learning, Gradient Boosting, Finite Element Analysis, Parametric Modeling

1. INTRODUCTION

Contemporary production has shown significant progress in adopting disruptive technologies such as rapid prototyping, cloud-based storage, enhanced interoperability of diverse enterprise information systems in the value chain and, last but not least, the Internet of Things. Two of the most significant effects of such digitalization on production processes are more efficient mass-customization [1] and a streamlined, collaboration-based value chain [2]. While the latter unleashed the vast, diverse real-time data about operations, logistics and product lifecycle, the former pushed the trend of servitization [3] over the limits. This trend created opportunities for enhanced collaboration in a product value chain and affordable use of high-end services over the whole production span, from structural product analysis to marketing automation and micro-customer segmentation. Recently, advances in applied Artificial Intelligence (AI) have made possible notable acceleration and quality improvement in the product design stage, especially by considering integrated product lifecycle management [4] and the extended lifecycle in the circular economy [5], while benefiting from integrated access to vast data about product design and exploitation [6].
In this paper, we explore the practical impact of using the new disruptive technologies (namely, Machine Learning and cloud-based integration) to resolve the problem of cost- and time-efficient validation of a custom product design, based on the generic design of a product family. Such a generic design is often represented by a parametric model of the complex product geometry, together with other associated relevant features, such as exploitation and environment properties, material properties and others. The customized product design problem is often solved by parametric modeling. Instead of designing the custom product instance from scratch (or by adapting an existing model to new desired properties), designers can choose the appropriate values of previously defined, critical features of the product family's geometric and structural properties, namely, the product parameters. Those choices are made based on different criteria, including customer requirements, part and material market cost and availability, product pricing policies, exploitation conditions, manufacturability and others. The set of the product's parameters, combined with the other fixed properties, is called a parametric product model.

The benefits of such a product modeling approach for customization are numerous. The parameters can be used to adapt the product design to different aspects of its exploitation as well as manufacturing, such as manufacturability (if the design service is outsourced), cost (including materials and manufacturing complexity), assembly restrictions and customer-focused requirements, such as usability and customer-tailored design (for example, special medical devices that need to be fully adapted to the patient's physiognomy and physiology), among others.

In any stage of the custom product design, the single design instance may be validated. Such validation can be relatively simple and quick (for example, inspection of the product's visual properties by the customer) but sometimes very troublesome and cost-incurring, such as testing of the product instance's physical properties and its integrity in the exploitation conditions. Some of the physical quantities and properties that can be of great importance for the design are deformation, stress and product mass. Testing in exploitation conditions is often replaced by simulating those conditions using the Finite Element Analysis (FEA) method [7]. FEA relies on the Finite Element Method (FEM), a numerical method widely used in structural, thermal and various multi-physics analyses to simulate product behavior in exploitation and calculate the fields of various physical quantities. FEA can help to calculate the extreme values of physical quantities and properties, such as deformation, stress or strain, and compare them to critical ones before the product is prototyped and tested in realistic environments. Unfortunately, despite numerous researchers' efforts to consider the possibility of real-time simulation [8][9], FEA of customized products is often removed from the design pipeline due to the mass-customization related time/cost pressures (FEA software annual subscription rates are as high as tens of thousands of dollars), long duration (complex product FEA alone, even without considering FEA model preparation, can last for hours or even days), high-level expertise requirements and, consequently, high service cost.
We addressed the above issues by assuming the following scenario (see Fig. 1). A manufacturing company maintains a parametric model of the product family design. Upon a customer request, the designer needs to create an instance of this model that meets all the given requirements. Instead of launching FEA on the specific instance, the designer is assisted in real time by software which uses the model we call the "compiled" generic FEA model. This software is integrated with the CAD package used by the designer.

Fig. 1 Concept of using compiled FEA models for real-time assistance in product design and validation

The compiled FEA model is based on the physical quantities and properties (for example, the level of mechanical stresses in critical product areas, product mass and the like) of a number of "characteristic" data instances. The characteristic instances dataset is a relatively large collection of product model parameter values (lengths, widths, distances, material properties, etc.) in the selected regions, associated with previously calculated mechanical quantities and properties (such as stresses and product mass). Those are calculated once (by using FEA software) for each of the parameter instances, for the whole product family, and then used to fit the prediction function derived by using a Machine Learning (ML) algorithm [10]. Therefore, the compiled FEA model is actually a serialized ML model and it involves the dataset with characteristic instances, the selected ML algorithm and the best performing hyper-parameters. From the performance point of view, predicting the physical quantities and properties based on a specific set of parametric model values is trivial and such a service can be executed in real time, during the custom product design. More importantly, no additional cost is incurred.

The key hypothesis of the research work behind this paper is that, based on the above dataset, ML models can be developed for predicting, with sufficient accuracy, the physical quantities and properties of the custom product instantiated by selecting the appropriate design parameters. Another hypothesis, which will not be addressed in this paper, is that multi-criteria optimization methods [11] can be used to identify all local optima, namely, to identify the characteristic instances from the dataset that are associated with the best combination of physical quantities and properties. Some initial work addressing the optimization problem has already been done [12]. The concept of the solution has already been proposed by the authors [13]. In this paper, the concept is further elaborated and demonstrated by considering realistic design and exploitation aspects (dataset), with improved methodology, analysis of the results and their visualization.

The remainder of the paper is structured as follows. First, a novel methodology for facilitating real-time assistance in validating the custom product design is presented. Then, the methodology is demonstrated in a case study of validating the design of an internal fixator medical device. Finally, guidelines for the implementation of the methodology and its use in daily practice are provided.

2. METHODOLOGY

The process in which the compiled FEA model is built consists of two major activities: design of experiment and training of the prediction model. A minimal illustration of what the resulting compiled model may look like as a serialized artifact is sketched below, before the two activities are described in detail.
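The following sketch is an illustration only, not the authors' actual implementation: a compiled FEA model could be bundled and serialized with scikit-learn and joblib roughly as shown; the file name, column names and hyper-parameter values are hypothetical.

import joblib
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

# Characteristic instances produced by the design-of-experiment step
# (hypothetical CSV export of the DOE dataset)
doe = pd.read_csv("fixator_doe.csv")
X = doe[["bar_length", "bar_diameter", "bar_end_thickness"]]
y = doe["equivalent_stress"]

# Hyper-parameters as they would be found by grid search (illustrative values)
params = {"n_estimators": 300, "max_depth": 3, "learning_rate": 0.05}
estimator = GradientBoostingRegressor(**params).fit(X, y)

# The "compiled FEA model": dataset reference, algorithm, hyper-parameters and
# the fitted prediction function, serialized to a single file
compiled_model = {
    "dataset": "fixator_doe.csv",
    "algorithm": "GradientBoostingRegressor",
    "hyper_parameters": params,
    "estimator": estimator,
}
joblib.dump(compiled_model, "compiled_fea_stress.joblib")

# Real-time use inside a CAD add-on: load once, predict for the current design
loaded = joblib.load("compiled_fea_stress.joblib")
current = pd.DataFrame([[250.0, 9.5, 6.0]], columns=X.columns)
print(loaded["estimator"].predict(current))   # predicted equivalent stress

Prediction on a single parameter set is a single function call, which is what makes the real-time assistance scenario of Fig. 1 computationally trivial on the client side.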
The design-of-experiment (DOE) feature of the selected FEA tool is used to create the dataset of characteristic product instances, based on the selected product family parametric model. The DOE feature of a standard FEA tool is usually a part of the design exploration functionality and serves as a basis for design surface-based optimization. Design surfaces are fitted to the dataset obtained from DOE and serve as meta-models predicting the relations between input and output parameters. Various experimental plans are usually available when DOE is performed. The choice of experimental plan depends on the non-linearity of the relations between design parameters and output parameters (such as deformation or stress). Highly nonlinear relations require experimental plans that contain more data points and cover the whole design space, including its extreme values. The ML-based model proposed here requires that a detailed dataset is created. If this cannot be accomplished using standard detailed experimental plans, a custom experimental plan may be used.

After the experimental dataset is created by DOE, the ML prediction model is created by fitting the selected ML algorithm with the dataset above, where the design parameters are considered as input features and the physical quantities and properties as output features. The prediction model is developed using the Python programming language. Its development follows the typical ML pipeline, namely correlation analysis, feature selection, algorithm selection and optimization of the selected algorithm's hyper-parameters.

The aim of correlation analysis is to reduce the problem dimensionality. For very complex products, the number of design parameters can run into the hundreds. While creating a compiled model for such a product is a one-time job and thus does not have a significant effect on the process, prediction (including the necessary data pre-processing) may come with a computational cost and consequently slower performance, which could affect the user's experience. By selecting the most relevant product geometrical properties, we can address this problem. A two-way correlation analysis will be performed. First, the correlation of the individual parameters with the physical quantities and properties will be assessed by looking at the Pearson coefficients. Second, the Recursive Feature Elimination (RFE) [14] method will be used to assess the combined relevance of all n-tuples of input features to each of the individual output features.

Different ML algorithms will be tested to choose the one with the least Mean Absolute Error (MAE) - a key indicator for assessing the accuracy. The selected algorithms are linear regression, K-Nearest Neighbors, Support Vector Machine regressor, Decision Tree and two ensemble methods, namely Random Forest and Gradient Boosting. K-Nearest Neighbors [15] is a non-parametric method used since the early 1970s. It is a so-called instance-based method; it stores all available cases/instances and classifies new cases based on a similarity measure (namely, a distance function). Support Vector Machine (SVM) belongs to the group of kernel methods [16]. It was initially developed for two-group classification problems. Decision Tree, or in this case the so-called regression tree, is the method in which observations about an item, represented as branches, are used to make decisions about its target values, represented as leaves.
Random Forest [17] belongs to a group of ensemble methods that combine a number of decision trees and then adopt the mean of the predictions of the individual trees as the forecast. Random Forest is today considered one of the most powerful algorithms in machine learning, excluding Artificial Neural Networks, namely deep learning architectures. Gradient Boosting [18] adopts the idea of boosting - an optimization of a suitable cost function [19], where an ensemble of weak prediction models, namely decision trees, are staged one after another.

Some of the selected algorithms, namely K-Nearest Neighbors and the Support Vector Machine regressor, require that the data is normalized (scaled to the (0,1) range) before training. Feature scaling is required to reduce the training time and improve the prediction accuracy (a pipeline-based sketch of this step is given below). The algorithms will be used to develop the respective prediction models and test their accuracies. The algorithms with the best performance, as validated by the K-fold Cross Validation method, will be selected and trained, and the produced models will be serialized - those models are actually what we call compiled models for real-time structural analysis assistance. Standard deviations of the output features will be used as reference values for assessing the accuracies.

K-Fold Cross Validation is a method which produces reliable prediction accuracy metrics for a given dataset. Instead of a single split between training and test data, it splits the data into k folds and, in k runs, uses each fold once as the test set while the remaining k-1 folds are used for training. Hence, the model is validated in k test runs, each of which produces an accuracy measure. The mean of those values is then adopted as the accuracy of the prediction model. Validation is carried out for predicting each of the output features, namely, the product's physical quantities and properties. It is expected that the performance of the models based on different estimators will differ for some of the physical quantities and properties. Thus, all models, associated with their sets of optimal settings, will be serialized. Obviously, those with the best performance for the specific quantities and properties will be used for prediction.

Each of the estimators used is associated with a set of so-called hyper-parameters, which define its different properties related to the way the model is trained and validated. The best prediction performance for the given dataset is achieved only with a unique set of their values. This set, for each of the estimators and each of the output features, is typically determined in a process called Grid Search optimization [20]. Grid Search calculates an accuracy score (per defined scoring function using a specific metric) across a defined hyper-parameter space (defined by value ranges and/or enumerations), most desirably by using the K-Fold Cross Validation method. The search through the combinations of hyper-parameter values from the defined space can be exhaustive (all combinations) or randomized.

All steps that involve the use of estimators, namely RFE and training models with data, must be carried out under the same conditions. This applies to using not only the same parameters (k) but also the same data in different steps of K-Fold Cross Validation. This is especially important for models which are trained with a small number of instances - n-tuples of parameters and physical quantities and properties; in those cases, the results (especially in the optimization step) can be misleading.
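As a minimal sketch of the scaling requirement mentioned above (assuming the scikit-learn API; the parameter values are illustrative), the distance- and kernel-based estimators can be wrapped in pipelines so that the normalization is learned only from the training folds during cross-validation:

from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.neighbors import KNeighborsRegressor
from sklearn.svm import SVR

# Scaling is wrapped into the estimator, so the (0,1) normalization is fitted
# only on the training folds in each cross-validation run; tree-based
# estimators (Decision Tree, Random Forest, Gradient Boosting) do not need it.
knn_model = make_pipeline(MinMaxScaler(), KNeighborsRegressor(n_neighbors=5))
svr_model = make_pipeline(MinMaxScaler(), SVR(kernel="rbf"))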
An exception to this same-conditions rule may be made in the optimization step for regression problems: although NMAE is selected as the model metric (which is the case here), Mean Squared Error (MSE) or the R^2 regression score could be used there instead. The Python implementation of the above methods and functions is the Scikit-learn [21] library. It was initially released in 2007 and consists of a number of classification, regression and clustering algorithms, ensemble methods, data pre-processing tools, metrics, feature engineering tools and others.

3. CASE STUDY

The compiled FEA model is developed for the case of an orthopedic device - an internal fixator, used in subtrochanteric fractures of the thigh bone (femur). This is a case of a highly customizable product which needs to be fitted to different requirements arising from the patient's physical and physiological properties, some of many different types of fractures, etc. The process in which this fitting is carried out is out of the scope of the research behind this paper. The fixator parametric model has been created using SolidWorks CAD software. In this case, it is defined by 6 relevant geometry parameters and a fixed design. The illustration of the model is provided in Fig. 2 below.

Fig. 2 Parametric model of internal fixator, used in subtrochanteric fractures of thighbone (femur)

The Design Explorer module of the ANSYS FEA software, which was used for calculation, features the design-of-experiment functionality. Namely, it is used to generate the set of values of input parameters that defines the collection of characteristic product instances. These values are then used to create the CAD model instances in SolidWorks and send them back to ANSYS for calculation of physical quantities. The Central Composite Design (CCD)/Face Centered/Enhanced method was applied in planning the experiment (dataset generation). Central Composite designs are five-level fractional factorial designs, which are appropriate for calibrating the quadratic response model. By default, CCD varies the input parameters on three levels each, but still generates less data than a full factorial plan. The Face Centered type of CCD ensured that the extreme values of the input parameters were included in the dataset. The "Enhanced" option was used to add more data between the extreme and middle values of input parameters, resulting in more extensive datasets.

The created dataset [22] is used as input to the typical ML pipeline. This dataset, with six parameters, contains 89 rows. A small number of data instances was used in the case study for practical reasons (a single-instance calculation of physical quantities and properties by ANSYS takes time) as well as because of the strong representativeness of the data generated by the design-of-experiment feature, namely the balanced distribution of parameter values over the given ranges. The distribution of the output feature data in the dataset is illustrated using boxplots in Fig. 3 below.

Fig. 3 Distribution of the physical quantities and properties values in the dataset (boxplots)

The standard deviations for Total Deformation Maximum, Equivalent Stress and Fixator Mass are as follows:
std(def) = 1.075536549693965
std(str) = 88.62041183766537
std(mas) = 0.029059066340160897

3.1 Correlation analysis

Analysis of data correlation was carried out by considering the Pearson linear correlation and Recursive Feature Elimination (RFE) methods.
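Before looking at the results, note that, assuming the dataset [22] is exported as a CSV file (the file and column names below are hypothetical placeholders), the descriptive statistics above and the Pearson coefficients reported below can be reproduced with a short pandas sketch:

import pandas as pd

# Hypothetical CSV export of the published DOE dataset [22]
df = pd.read_csv("fixator_doe.csv")
outputs = ["total_deformation_max", "equivalent_stress", "fixator_mass"]
inputs = [c for c in df.columns if c not in outputs]

# Standard deviations of the output features (reference values for accuracy)
print(df[outputs].std())

# Pearson correlation of each input parameter with each output feature
print(df.corr(method="pearson").loc[inputs, outputs].round(3))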
The aim of the analysis is to determine whether the dimensionality of the problem can be reduced, namely whether it is reasonable to exclude some of the input variables from the training dataset. It was found that a significant linear correlation existed between:
- Bar length and total deformation (p=-0.950)
- Bar length and fixator mass (p=0.899)
- Bar length and equivalent stress (p=-0.655)
- Bar end thickness and equivalent stress (p=-0.633)
A notable linear correlation was found between:
- Bar diameter and total deformation maximum (p=-0.250)
- Bar diameter and fixator mass (p=0.397)
Other input values did not have a notable linear correlation with the output variables, namely deformation, stress and mass. This implies that some features could be removed from the model, namely, trochanteric unit radius, bar end radius and clamp distance. The correlation of the individual geometrical features with the physical quantities and properties is also illustrated with the scatter plots displayed in Fig. 4.

Fig. 4 Correlation of the individual geometrical features with the physical quantities and properties (Pearson)

The issue with linear correlation based on the Pearson coefficient is that it can help in assessing only the relevance of individual input features for prediction of the output ones. In other words, while one specific input feature may have a very low correlation with the output, it may turn out that, in combination with the other ones, its changes significantly affect the output features. Thus, to complement the correlation analysis, the Recursive Feature Elimination (RFE) method is applied in order to explore the relevance ranking of the subsets of the input features. RFE is a method which performs backward feature elimination. The algorithm begins with the set of all features and successively eliminates the feature which induces the smallest effect on the output features. It can be applied by using the selected algorithms, in our case - simple linear regression, SVM, Decision Tree, Random Forest and Gradient Boosting regressors. KNN is excluded because its regressor does not expose the attributes relevant to RFE. The RFE method calculates ranks which are measures of the relevance of individual input features, in combination with others, for predicting the output features. Ranks are calculated in the range (1,5), where a lower value means better relevance/correlation. All results are then displayed in bar charts to provide an effective illustration of the ranks by different features (Fig. 5). The relevance of the first three input features is clearly confirmed by the RFE method and all algorithms. The exception is the SVM regressor, which produced outlier results because the data was not normalized (a requirement for SVR) before RFE estimation. RFE with some of the algorithms suggests some relevance of input feature #5, namely clamp distance, for predicting equivalent stress and deformation mass.

The behavior and performance of many ML algorithms are referred to as stochastic because they involve randomness (random state initialization of the models, random selection of data in K-fold Cross Validation, etc.). For that reason, the indicators produced by ML models are typically calculated as a statistical measure (for example, the mean) of the population of the specific indicator values produced by the ML models in multiple runs.
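A minimal sketch of one such RFE ranking run, assuming the scikit-learn API and the hypothetical column names used above, could look as follows:

import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.feature_selection import RFE

# Hypothetical CSV export of the published DOE dataset [22]
df = pd.read_csv("fixator_doe.csv")
inputs = ["bar_length", "bar_diameter", "bar_end_thickness",
          "trochanteric_unit_radius", "bar_end_radius", "clamp_distance"]
X, y = df[inputs], df["equivalent_stress"]

# Rank all six inputs for one output; rank 1 marks the most relevant feature.
# Repeating the run with different random_state values helps to average out
# the stochastic behavior discussed above.
rfe = RFE(GradientBoostingRegressor(random_state=0), n_features_to_select=1)
rfe.fit(X, y)
for name, rank in sorted(zip(inputs, rfe.ranking_), key=lambda pair: pair[1]):
    print(name, rank)

The same loop can be repeated with the other RFE-capable estimators (linear regression, SVR with scaled data, Decision Tree, Random Forest) to obtain the per-algorithm rankings shown in Fig. 5.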
The minor relevance of clamp distance for predicting some of the output features is not consistently visible across multiple RFE runs. Therefore, it should be excluded from the final set of features, together with the trochanteric unit radius (feature #3) and the bar end radius (feature #4).

Fig. 5 Correlation ranks of the individual geometrical features with the physical properties (Recursive Feature Elimination method)

However, it is important to highlight that the decision to reduce the dimensionality of the parameter set in this specific case would not be practical, because the complexity of the product is very low, so the possible savings in computational performance are negligible. Still, in the cases of very complex products with hundreds of parameters, this methodological step could bring critical benefits.

3.2 Compiled models

According to the proposed concept, the compiled model is actually a serialized ML model for predicting the physical quantities and properties of the product, based on the parameter values. Three characteristic quantities need to be predicted by the compiled model: maximum total deformation, maximum equivalent stress and fixator mass. Before the model is compiled, the algorithm with the best performance needs to be selected from the pre-selected list that includes: linear regression, K-Nearest Neighbors, Support Vector Machine regressor, Decision Tree regressor, Random Forest and Gradient Boosting regressor. In the following step, the selected ML algorithms with default hyper-parameters were fitted with the dataset and the resulting models' accuracies were compared. K-fold cross validation (k=4) was used for validation and Negative Mean Absolute Error (NMAE) was used as the indicator. The same KFold object is used in all relevant steps in order to obtain comparable data. The object is set not to shuffle the data, because randomness in selecting data for the folds would not be beneficial with such a small dataset. Testing produced the results shown in Table 1.

Table 1 NMAE for different algorithms with the default set of hyper-parameters

                             LRE       KNN       SVR       DTR       RFR       GBR
Total Deformation Maximum   -0.182    -0.799    -0.209    -0.052    -0.090    -0.034
Equivalent Stress          -39.678   -57.302   -65.971   -20.904   -21.127   -18.423
Fixator mass                -0.0041   -0.0203   -0.0243   -0.0015   -0.0026   -0.0009

All NMAE indicators are well within the standard deviations of the considered output features and certainly within the limits of acceptable error in structural analysis of products of this type. Given the high linear correlation found by the Pearson coefficients, the expectation that the linear regression method would produce good results is confirmed.

3.3 Estimator optimization

The Grid Search method was used for the optimization of hyper-parameters. While NMAE is reported in the tables, a different metric is used for the optimization itself, namely the R^2 (coefficient of determination) regression score. The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). Grid search optimization is implemented in an iterative fashion, where the specific set of optimal hyper-parameters is determined for each output feature (physical property) and each estimator. It produced the sets of hyper-parameters which significantly improved the performance of the models based on K-Nearest Neighbors and SVR.
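A minimal sketch of this evaluation and optimization set-up (assuming the scikit-learn API and the hypothetical column names used earlier; only Gradient Boosting is shown and the parameter grid is illustrative, not the authors' exact grid) could combine the shared KFold object, the NMAE baseline and the R^2-scored grid search as follows:

import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import KFold, cross_val_score, GridSearchCV

# Hypothetical CSV export of the published DOE dataset [22]
df = pd.read_csv("fixator_doe.csv")
X = df[["bar_length", "bar_diameter", "bar_end_thickness"]]
y = df["equivalent_stress"]

# The same non-shuffled 4-fold split is reused in every step
cv = KFold(n_splits=4, shuffle=False)

# Baseline accuracy with default hyper-parameters (Table 1 analogue, NMAE)
baseline = cross_val_score(GradientBoostingRegressor(random_state=0),
                           X, y, cv=cv, scoring="neg_mean_absolute_error")
print("NMAE, default hyper-parameters:", baseline.mean())

# Exhaustive grid search over an illustrative hyper-parameter space, scored by R^2
grid = GridSearchCV(GradientBoostingRegressor(random_state=0),
                    param_grid={"n_estimators": [100, 300, 500],
                                "max_depth": [2, 3, 4],
                                "learning_rate": [0.01, 0.05, 0.1]},
                    cv=cv, scoring="r2")
grid.fit(X, y)
print("Best R^2:", grid.best_score_, "with", grid.best_params_)

Repeating the search for each estimator and each output feature yields the optimal hyper-parameter sets discussed next.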
Notable improvement was also made in predicting equivalent stresses with the Random Forest and Gradient Boosting estimators. Based on the R^2 score values, several conclusions can be made. As expected, Random Forest and Gradient Boosting have shown the best general performance. Both of these ensemble methods are among those most often used by practitioners for addressing regression and classification problems, despite possible overfitting issues. In our case, with a balanced distribution of input feature values, overfitting is not a serious concern. With optimized hyper-parameters, the Gradient Boosting estimator produces excellent results in predicting fixator mass and maximum total deformation, with R^2 > 0.99, and in predicting equivalent stresses (R^2 = 0.91). Comparable performance was achieved by Random Forest in predicting fixator mass (R^2 = 0.92) and maximum total deformation (R^2 = 0.95), and by SVR in predicting maximum total deformation (R^2 = 0.98). Values of the NMAE indicator after training the estimators with the optimal sets of hyper-parameters are shown in Table 2.

Table 2 NMAE for different algorithms with the set of optimal hyper-parameters

                             LRE       KNN       SVR       DTR       RFR       GBR
Total Deformation Maximum   -0.182    -0.331    -0.094    -0.133    -0.106    -0.047
Equivalent Stress          -39.678   -37.421   -26.51    -15.726   -14.555   -14.778
Fixator mass                -0.0041   -0.0094   -0.0243   -0.0044   -0.0041   -0.0015

4. IMPLEMENTATION

To conclude, the considered research hypotheses have been convincingly confirmed in the case study. The developed ML model can be serialized as a compiled FEA model and used in a hypothetical CAD tool add-on - a container for compiled models of selected product families. A CAD tool enriched with this add-on can provide real-time structural analysis assistance for custom product design and thus significantly reduce its time and cost.

Fig. 6 depicts the design of the infrastructure for the implementation of the proposed solution for real-time assistance in customized product design. The process starts with the development of the parametric model and the design of experiment. The design-of-experiment data is used to develop a compiled FEA model, as described above. It is then deployed as a web service resource. The web infrastructure facilitates:
- the deployment of compiled FEA models and parametric models,
- management (including versioning) of non-geometric model parameters (in the above example, maximal equivalent stress over the product and product mass),
- end user authentication and tracking logic, and
- a business model (subscription based, pay per view, etc.) of choice.
It should be exposed through a REST API with authentication and key verification functionalities. The client is considered as an add-on to one of the commonly used CAD platforms. The add-on facilitates:
- user login,
- definition and serialization of non-geometric model parameters (e.g. exploitation and environment effects, material properties),
- display of a user interface with the add-on toolbox and visualization of predicted physical quantities and properties, and
- synchronous REST calls to a web service using the associated compiled FEA model, where the input is the current set of parameters (geometric and non-geometric) and the output is the set of predicted physical quantities and properties.

Fig. 6 Concept of the integration of CAD system with prediction services based on compiled models

5. CONCLUSION

The mass-customization trend, implying the need for the design and manufacturing of custom products with efficiency close to that of mass production, is a new industrial reality.
Quite obviously, this trend creates new challenges in the manufacturing and custom product design domains. Most of the challenges stem from the effort to find the right balance between flexibility, strongly required by customer-focused industries, and efficiency, which is critical for market competitiveness. In more conventional industries, this balance is often sought by implementing outsourcing practices, even for critical activities in the manufacturing process. Another way is digitalization, which helps to facilitate fast decision-making processes and thus quick responses to the variety of demand and supply circumstances. Today, with the advance of AI methods and tools, it becomes possible to digitalize even knowledge-intensive operations and thus not only reduce the lead time but also significantly reduce the total cost of product manufacturing.

The proposed solution aims to solve the problem of a long and expensive custom product design process and, specifically, the need for special (expensive) expertise in building FEA models, the large computational resources needed and expensive FEA software. Each parametric model is defined by a finite set of parameters, mostly related to geometrical features. The values of those parameters, in most cases, vary within a specific range in order to preserve the integrity of the design. The level of correlation of those values with the actual physical quantities and properties of the product defines the guidelines important for the ordering process. This process now includes a customization sub-process, in which the customer and the designer negotiate, in real time, the design that fits the customer's requirements in the best possible way, while still maintaining the integrity of the product in the target exploitation conditions and its manufacturability.

The centerpiece of the proposed novel methodology is the so-called compiled FEA model, offering the best approximations of non-geometric parameters vital for the exploitation behavior and manufacturability of the custom product design. The use of the compiled FEA model during geometric parameter tuning facilitates real-time review of the critical non-geometric features and immediate assessment of the designed product's physical quantities and properties. Moreover, the proposed solution creates opportunities for new collaborative business models, in which the roles of CAD and FEA specialists are separated across enterprises and FEA can be implemented as an online service.

Acknowledgements: This research was financially supported by the Ministry of Education, Science and Technological Development of the Republic of Serbia.

REFERENCES

1. Da Silveira, G., Borenstein, D., Fogliatto, F.S., 2001, Mass customization: literature review and research directions, International Journal of Production Economics, 72(1), pp. 1–13.
2. Simatupang, T.M., Sridharan, R., 2005, The collaboration index: a measure for supply chain collaboration, International Journal of Physical Distribution & Logistics Management, 35(1), pp. 44–62.
3. Vandermerwe, S., Rada, J., 1988, Servitization of business: adding value by adding services, European Management Journal, 6(4), pp. 314–324.
4. Kwong, C.K., Jiang, H., Luo, X.G., 2016, AI-based methodology of integrating affective design, engineering, and marketing for defining design specifications of new products, Engineering Applications of Artificial Intelligence, 47, pp. 49–60.
5. Ghoreishi, M., Happonen, A., 2020, New promises AI brings into circular economy accelerated product design: a review on supporting literature, E3S Web of Conferences, 158, 06002.
6. Tao, F., Cheng, J., Qi, Q., Zhang, M., Zhang, H., Sui, F., 2018, Digital twin-driven product design, manufacturing and service with big data, The International Journal of Advanced Manufacturing Technology, 94(9), pp. 3563–3576.
7. Cook, R.D., 2001, Concepts and applications of finite element analysis, 4th ed., Wiley, New York, NY.
8. Marinkovic, D., Zehn, M., 2019, Survey of finite element method-based real-time simulations, Applied Sciences, 9(14), 2775.
9. Marinkovic, D., Zehn, M., Rama, G., 2018, Towards real-time simulation of deformable structures by means of co-rotational finite element formulation, Meccanica, 53(11), pp. 3123–3136.
10. Michie, D., 1968, "Memo" functions and machine learning, Nature, 218(5136), pp. 19–22.
11. Marler, R.T., Arora, J.S., 2004, Survey of multi-objective optimization methods for engineering, Structural and Multidisciplinary Optimization, 26(6), pp. 369–395.
12. Korunovic, N., Marinkovic, D., Trajanovic, M., Zehn, M., Mitkovic, M., Affatato, S., 2019, In silico optimization of femoral fixator position and configuration by parametric CAD model, Materials, 12(14), 2326.
13. Korunović, N., Zdravković, M., 2019, Real-time structural analysis assistance in customized product design, In: ICIST 2019 Proceedings, Vol. 1, pp. 217–220.
14. Guyon, I., Weston, J., Barnhill, S., Vapnik, V., 2002, Gene selection for cancer classification using support vector machines, Machine Learning, 46(1/3), pp. 389–422.
15. Altman, N.S., 1992, An introduction to kernel and nearest-neighbor nonparametric regression, The American Statistician, 46(3), pp. 175–185.
16. Cortes, C., Vapnik, V., 1995, Support-vector networks, Machine Learning, 20(3), pp. 273–297.
17. Ho, T.K., 1995, Random decision forests, In: Proceedings of the 3rd International Conference on Document Analysis and Recognition, IEEE Computer Society Press, Montreal, Canada, Vol. 1, pp. 278–282.
18. Friedman, J.H., 2001, Greedy function approximation: a gradient boosting machine, The Annals of Statistics, 29(5), pp. 1189–1232.
19. Breiman, L., 1997, Arcing the edge, Technical Report 486, Statistics Department, University of California, Berkeley.
20. Lerman, P.M., 1980, Fitting segmented regression models by grid search, Journal of the Royal Statistical Society, Series C (Applied Statistics), 29(1), pp. 77–84.
21. Pedregosa, F., Varoquaux, G., Gramfort, A., Michel, V., Thirion, B., Grisel, O., Blondel, M., Prettenhofer, P., Weiss, R., Dubourg, V., Vanderplas, J., Passos, A., Cournapeau, D., Brucher, M., Perrot, M., Duchesnay, É., 2011, Scikit-learn: Machine learning in Python, Journal of Machine Learning Research, 12(85), pp. 2825–2830.
22. Korunović, N., Zdravković, M., 2020, Geometry and physical properties of fixator, Dataset, https://doi.org/10.34740/KAGGLE/DSV/1114146.