Engineering, Technology & Applied Science Research, Vol. 13, No. 3, 2023, pp. 10989-10993

Adaptive Particle Grey Wolf Optimizer with Deep Learning-based Sentiment Analysis on Online Product Reviews

Durai Elangovan
Sathyabama Institute of Science and Technology, India
elangovan.durai@yahoo.com (corresponding author)

Varatharaj Subedha
Department of Computer Science and Engineering, Panimalar Institute of Technology, India
subedha@gmail.com

Received: 16 February 2023 | Revised: 16 March 2023 | Accepted: 23 March 2023
Licensed under a CC-BY 4.0 license | Copyright (c) by the authors | DOI: https://doi.org/10.48084/etasr.5787

ABSTRACT
The increasing use of e-commerce websites and social networks continually generates an immense amount of data in various forms, such as text, images, sound, and video. Sentiment Analysis (SA) of online product reviews is a method of identifying the overall sentiment of customers about a specific product or service. This study used Natural Language Processing (NLP) and Machine Learning (ML) algorithms to identify and extract the opinions and emotions expressed in text. Online reviews are often written in informal language, slang, and dialect, making it difficult for ML models to classify sentiments accurately. In addition, misspelled words and incorrect grammar can further complicate the analysis. Recent developments in Deep Learning (DL) models can be used for the accurate classification of sentiments. This paper presents an Adaptive Particle Grey Wolf Optimizer with Deep Learning-based Sentiment Analysis (APGWO-DLSA) method to accurately classify sentiments in product reviews. Initially, data pre-processing was performed to improve the quality of the product reviews, followed by the word2vec embedding process. For sentiment classification, the proposed method used a Deep Belief Network (DBN) model. Finally, the hyperparameters of the DBN were tuned using the APGWO algorithm. An extensive experimental analysis demonstrated the improved results of APGWO-DLSA over other methods, with a maximum accuracy of 94.77% and 85.31% on the Cell Phones And Accessories (CPAA) and Amazon Products (AP) datasets, respectively.

Keywords-sentiment analysis; online product reviews; machine learning; deep learning; natural language processing

I. INTRODUCTION
Sentiment Analysis (SA) uses Machine Learning (ML) and Natural Language Processing (NLP) methods to extract and identify subjective information from text [1]. Understanding the sentiment of product reviews is useful because it allows companies to gauge the overall satisfaction of their users [2]. SA determines the attitude of a writer or speaker regarding a specific topic, a particular event or discussion, or the overall contextual polarity of the text. The fundamental task of SA is to identify the polarity of a text at the document or sentence level [3]. The rise in Internet use has enabled users to share their views on various platforms [4]. SA helps examine such opinionated data and derive significant insights that can support the decisions of others [5]. Social networking sites generate reviews on many kinds of topics, such as sports, products, movies, healthcare, hotels, news, and articles.
Many SA algorithms have been proposed, using computational linguistic methods or ML [6], such as Support Vector Machines (SVMs), Naive Bayes (NB), and Maximum Entropy [7]. ML methods show much better efficiency than computational linguistic methods. As Deep Learning (DL) presents notable results for a variety of NLP problems, it has attracted the interest of researchers [8], and several DL methods have been used, such as DCNN, CNN, deep Restricted Boltzmann Machine (RBM), Deep Neural Networks (DNNs), etc. [9]. However, reviews or sentences that express several aspects with complicated sentiments are not handled well by these methods. Similarly, complete SA evaluation using ML has not offered efficient training times or better accuracy [10]. Several methods have been proposed to conduct SA using DL, and the best method depends on the specific needs of the application. Although several approaches have been proposed for sentiment classification, there is still a need to improve their performance. As DL models continue to expand, their parameter count increases, leading to overfitting. Since the manual selection of hyperparameters is a laborious task, it is useful to employ evolutionary algorithms.

This study developed an Adaptive Particle Grey Wolf Optimizer with DL-driven SA (APGWO-DLSA) model to accurately classify sentiments in online product reviews. For sentiment classification, the proposed APGWO-DLSA model used a Deep Belief Network (DBN). The hyperparameter tuning of the DBN was performed using the APGWO approach. A wide range of simulations was carried out to demonstrate the improved performance of the APGWO-DLSA model.

II. RELATED WORKS
In [11], a novel word representation method was presented that incorporated the sentiment information of a dataset into the standard TF-IDF method and produced weighted word vectors. These weighted word vectors were input to a Bidirectional LSTM (BiLSTM) to efficiently capture contextual information, and the sentiment tendency of a comment was obtained using a feed-forward neural network classifier. Under the same conditions, this approach was compared with NB, CNN, LSTM, and RNN-based SA methods. In [12], a Graph Convolution Network with an External Knowledge model (EK-GCN) was presented. In [13], a particular order of pre-processing stages was presented to enrich the SA performance of an ANN, since the weights of an ANN are typically initialized at random (R-ANN) and may not lead to a favorable result. In [14], a cognitive computing method used big data analysis tools for SA and pre-processing to eliminate unnecessary words. In [15], the performance of companies was investigated using DL and ML, showing how AI can improve business procedures and results using SA; this study examined all aspects of AI in the business domain and its benefits in enhancing business performance. In [16], DL was used to analyze the sentiments of pets.

III. THE PROPOSED MODEL
This study presents the APGWO-DLSA method for sentiment classification on online product reviews. The APGWO-DLSA method includes data preprocessing, word2vec word embedding, DBN classification, and APGWO-based parameter tuning. Figure 1 illustrates the entire workflow of the APGWO-DLSA method.

A. Data Preprocessing
Initially, data pre-processing was performed to improve the quality of the product reviews. Data analysis requires pre-processing to eliminate redundancies and improve the learning process and accuracy of the classification model [17]. Redundant data is any information that contributes little or nothing to predicting the target class but increases the size of the feature vector and adds unnecessary computational complexity. Consequently, the accuracy of a classifier is degraded if improper or no pre-processing is performed before encoding. This study used Python's NLP toolkit to preprocess the data (a minimal sketch of these steps follows the list below). First, the text was transformed into lowercase, and then punctuation, links, and HTML tags were discarded. Then, stopwords were removed, lemmatization and stemming were applied, and finally, the text was tokenized.
• Change to lowercase: the text was converted to lowercase, since the model would otherwise treat lower- and upper-case words as distinct, influencing the classification performance and the training process.
• Removal of punctuation, URL links, numbers, and tags: these do not contribute to classifier accuracy, since they provide no further meaning for the learning model and only enlarge the feature space. Eliminating them therefore helps reduce the feature space.
• Lemmatization and stemming: the objective of this step is to reduce inflectional and, occasionally, derivationally related forms of a word to a common base form.
• Removal of stopwords: stopwords are frequently used words that provide no valuable information.

Fig. 1. Working process of the APGWO-DLSA method.
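The following is a minimal sketch of the pre-processing steps listed above, assuming NLTK as Python's NLP toolkit (the paper does not name the library) and simple regular-expression rules for removing links, tags, punctuation, and numbers; the exact cleaning rules and their order may differ from the authors' implementation.

```python
# Minimal pre-processing sketch (assumptions: NLTK, regex-based cleaning).
import re
import nltk
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer, PorterStemmer
from nltk.tokenize import word_tokenize

# One-time downloads (uncomment on first run):
# nltk.download("punkt"); nltk.download("stopwords"); nltk.download("wordnet")

STOPWORDS = set(stopwords.words("english"))
lemmatizer = WordNetLemmatizer()
stemmer = PorterStemmer()

def preprocess(review):
    text = review.lower()                                  # 1. lowercase
    text = re.sub(r"https?://\S+|www\.\S+", " ", text)     # 2. remove URL links
    text = re.sub(r"<[^>]+>", " ", text)                   #    remove HTML tags
    text = re.sub(r"[^a-z\s]", " ", text)                  #    remove punctuation and numbers
    tokens = word_tokenize(text)                           # 3. tokenize
    tokens = [t for t in tokens if t not in STOPWORDS]     # 4. remove stopwords
    return [stemmer.stem(lemmatizer.lemmatize(t)) for t in tokens]  # 5. lemmatize + stem

print(preprocess("GREAT phone!!! Battery lasts 2 days: https://example.com <br>"))
# e.g. ['great', 'phone', 'batteri', 'last', 'day']
```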
B. Word Embedding
The word2vec model was used for the word embedding process. Word2vec is a popular word embedding algorithm that maps words with similar meanings to vectors that lie close to each other [18]. The method uses two approaches. The first is the skip-gram approach, which accepts the center word as input, passes it through the embedding layer, and then predicts the context words; it works well on small datasets. The other is the Continuous Bag-of-Words (CBOW) approach, which uses the contextual words as input, passes them through the embedding layer, and predicts the center (original) word. CBOW is very fast and offers a good representation of the most common words.
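As an illustration of the embedding step, the sketch below trains a word2vec model with gensim on tokenized reviews; the library choice, the 100-dimensional vectors, the window size, and the epoch count are assumptions, since the paper does not report these settings. Setting sg=0 selects CBOW and sg=1 selects skip-gram, matching the two approaches described above.

```python
# Minimal word2vec sketch (assumption: gensim 4.x; hyperparameters are illustrative only).
from gensim.models import Word2Vec

# `tokenized_reviews` would be the output of the pre-processing step sketched earlier.
tokenized_reviews = [
    ["great", "phone", "battery", "last", "day"],
    ["poor", "battery", "life", "return", "phone"],
]

# sg=0 -> CBOW (fast, good for frequent words); sg=1 -> skip-gram (better for rare words).
w2v = Word2Vec(sentences=tokenized_reviews, vector_size=100, window=5,
               min_count=1, sg=0, epochs=50)

vec = w2v.wv["battery"]                           # 100-dimensional embedding of "battery"
print(w2v.wv.most_similar("battery", topn=3))     # nearest words in the embedding space
```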
C. Sentiment Classification
The APGWO-DLSA method used the DBN model to classify sentiments. A DBN is built from stacked RBMs as its structural blocks [19]. An RBM with a single hidden layer may not be effective for extracting features from the data, so the features learned by one RBM are used as the input of the next RBM. The last RBM therefore learns the features of the whole stack and extracts the final feature representation of the input data. Back-Propagation (BP) is frequently used to train a typical ANN with a huge number of model parameters, and this training can be carried out more efficiently with pre-training. The pre-training of a DBN is a greedy layer-wise procedure with alternating sampling. After the unsupervised greedy layer-wise procedure, h^{k}(x) denotes the abstract representation of an input x at layer k. To achieve optimal discriminative performance, the labeled data were used to fine-tune the parameter space W, which was done by adding a final layer of variables corresponding to the labels of the training samples. This fine-tuning is expressed as:

\arg\min_{W} \sum_{n} T\left(h^{k}(x_{n}) \times W,\; y_{n}\right)    (1)

where T is the loss function and y_{n} is the label of sample x_{n}. The squared error function is generally used in BP, so the loss function was:

T = \left\| y - h^{k}(x) \times W \right\|^{2}    (2)
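Since the paper does not specify the DBN architecture or implementation, the sketch below only approximates the idea of greedy layer-wise pre-training followed by a supervised label layer, using scikit-learn's BernoulliRBM stacked in a pipeline with a logistic output layer; the layer sizes and learning rates are assumptions, and the pipeline does not back-propagate through the RBM layers as the full fine-tuning of (1)-(2) would.

```python
# Rough DBN-style sketch: two greedily pretrained RBMs plus a logistic label layer.
# Illustration only; not the authors' exact architecture or training procedure.
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler

X = np.random.rand(200, 100)            # stand-in for averaged word2vec review vectors
y = np.random.randint(0, 2, size=200)   # stand-in sentiment labels (0 = negative, 1 = positive)

dbn = Pipeline([
    ("scale", MinMaxScaler()),                                                    # RBMs expect inputs in [0, 1]
    ("rbm1", BernoulliRBM(n_components=128, learning_rate=0.05, n_iter=20, random_state=1)),
    ("rbm2", BernoulliRBM(n_components=64, learning_rate=0.05, n_iter=20, random_state=1)),
    ("clf", LogisticRegression(max_iter=1000)),                                   # supervised label layer
])
dbn.fit(X, y)
print("training accuracy:", dbn.score(X, y))
```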
D. Hyperparameter Tuning
APGWO is used as the hyperparameter optimizer of the DBN model. Eberhart and Kennedy introduced PSO based on the hunting behavior of animal swarms, in which every member of the swarm is aware of its position relative to the food source and to its neighbors [20]. PSO was proposed as a technique to solve optimization problems. The two basic attributes of a particle are its current position x and its velocity v, and a Fitness Function (FF) computes the fitness value of every particle. At the start, the positions of all particles are initialized at random, and every particle is influenced by two reference positions: its personal best pBest and the global best gBest. The particles navigate the problem space by updating these attributes. After each step, the velocity and position of every particle are defined by:

v_{i}^{t+1} = w \cdot v_{i}^{t} + c_{1} \cdot rand \cdot \left(pBest_{i} - x_{i}^{t}\right) + c_{2} \cdot rand \cdot \left(gBest - x_{i}^{t}\right)    (3)

x_{i}^{t+1} = x_{i}^{t} + v_{i}^{t+1}    (4)

In standard PSO, the values of c1 and c2 are usually set to constants to balance the exploration phase, most often c1 = c2 = 1 or c1 = c2 = 2. Here, the acceleration coefficients are adapted in every iteration. Equations (5) and (6) define these adaptive coefficients:

c_{1}^{t} = 1.2 - \frac{f\left(gBest^{t}\right)}{f\left(pBest_{k}^{t}\right)}    (5)

c_{2}^{t} = 0.5 + \frac{f\left(gBest^{t}\right)}{f\left(pBest_{k}^{t}\right)}    (6)

where t is the iteration, k indexes the particle, and f is the fitness, with gBest the global optimum of the swarm. The values 0.5 and 1.2 were chosen by empirical analysis. The inertia weight decreases linearly with the iterations:

w^{t} = \left(t_{max} - t\right) \cdot \frac{w_{max} - w_{min}}{t_{max}} + w_{min}    (7)

The sigmoid function is:

sig\left(v_{ij}^{t}\right) = \frac{1}{1 + e^{-v_{ij}^{t}}}    (8)

and the particle positions evolve according to:

x_{ij}(t+1) = \begin{cases} 1, & \text{if } r_{ij} < sig\left(v_{ij}(t+1)\right) \\ 0, & \text{otherwise} \end{cases}    (9)

where r_{ij} is a random value between 0 and 1. During the PSO execution, occasional GWO iteration rounds introduce the possibility of mutation, which yields the hybrid variant. The mutation probability was fixed at 0.1, and the inner GWO round is activated only occasionally; these values were kept small to ensure that the solution quality is not affected. The initial and final inertia weights are represented by w_max and w_min, and this study used fixed values of 0.9 and 0.2, respectively. The FF to be minimized is:

\text{Minimize } \alpha \times E_{\tau} + (1 - \alpha) \times \frac{S}{L}    (10)

where E_{\tau} is the error rate on the validation set, α = 0.9, and S and L are the number of selected features and the total number of features, respectively. The goal is to optimize this FF while simultaneously reducing the number of selected features and improving validation accuracy; the higher the value of α, the more the optimization focuses on validation accuracy. The fitness value of the APGWO algorithm is computed by:

Fitness = \max(P)    (11)

P = \frac{TP}{TP + FP}    (12)

where TP and FP denote the true and false positives.
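The sketch below illustrates how update rules (3)-(10) could be combined, assuming a binary search space and a toy fitness in place of the DBN validation error; the swarm size, iteration count, and the exact form of the GWO mutation round are assumptions, since the paper only fixes the mutation probability (0.1), the inertia bounds (0.9 and 0.2), and α = 0.9.

```python
# Simplified APGWO-style sketch (illustrative assumptions; not the authors' exact algorithm).
import numpy as np

rng = np.random.default_rng(0)
L, N, T_MAX = 20, 10, 50                        # bits per solution, swarm size, iterations
W_MAX, W_MIN, ALPHA, P_MUT = 0.9, 0.2, 0.9, 0.1

def fitness(bits):
    # Eq. (10): alpha * validation error + (1 - alpha) * (selected features / total features).
    # A toy error term stands in for the DBN validation error used in APGWO-DLSA.
    error = 1.0 - bits[: L // 2].mean()         # pretend only the first half of the bits are useful
    return ALPHA * error + (1 - ALPHA) * bits.sum() / L

x = rng.integers(0, 2, size=(N, L)).astype(float)
v = rng.uniform(-1.0, 1.0, size=(N, L))
pbest = x.copy()
pbest_f = np.array([fitness(p) for p in x])
g = int(pbest_f.argmin())
gbest, gbest_f = pbest[g].copy(), pbest_f[g]

for t in range(T_MAX):
    w = (T_MAX - t) * (W_MAX - W_MIN) / T_MAX + W_MIN                  # Eq. (7)
    for i in range(N):
        c1 = 1.2 - gbest_f / (pbest_f[i] + 1e-12)                      # Eq. (5)
        c2 = 0.5 + gbest_f / (pbest_f[i] + 1e-12)                      # Eq. (6)
        v[i] = (w * v[i]                                               # Eq. (3)
                + c1 * rng.random(L) * (pbest[i] - x[i])
                + c2 * rng.random(L) * (gbest - x[i]))
        x[i] = (rng.random(L) < 1.0 / (1.0 + np.exp(-v[i]))).astype(float)  # Eqs. (8)-(9)
        if rng.random() < P_MUT:                                       # occasional GWO-style round
            a = 2.0 * (1.0 - t / T_MAX)                                # GWO control parameter, 2 -> 0
            A = np.abs(2.0 * a * rng.random(L) - a)                    # per-bit GWO coefficient |A|
            x[i] = np.where(A < 1.0, gbest, 1.0 - x[i])                # exploit toward leader / flip bit
        f = fitness(x[i])
        if f < pbest_f[i]:
            pbest[i], pbest_f[i] = x[i].copy(), f
            if f < gbest_f:
                gbest, gbest_f = x[i].copy(), f

print("best fitness:", round(float(gbest_f), 4), "| selected bits:", int(gbest.sum()))
```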
IV. RESULTS AND DISCUSSION
The APGWO-DLSA model was developed and executed using Python 3.6.5 on an i5-8600K PC with 16 GB RAM and a GeForce 1050Ti 4 GB GPU. The SA effectiveness of the APGWO-DLSA method was examined on two datasets, Cell Phones And Accessories (CPAA) and Amazon Products (AP), described in Table I. Figure 2 shows the confusion matrices created by the APGWO-DLSA model on the two datasets. The results show that the APGWO-DLSA model accurately determined the two kinds of sentiments. For instance, with 70% of the data used as the Training Set (TRS) of the CPAA dataset, the APGWO-DLSA model recognized 61410 positive and 7256 negative samples, while on the 30% Testing Set (TSS) of the CPAA dataset it recognized 26327 positive and 3120 negative samples. On the 70% TRS of the AP dataset, the APGWO-DLSA model recognized 9189 positive and 361 negative samples, and on the 30% TSS of the AP dataset it recognized 3936 positive and 169 negative samples.

TABLE I. DATASET DETAILS
                 No. of instances
Class            CPAA       AP
Positive         88516      13251
Negative         11484      749
Total            100000     14000

Table II shows the SA results of the APGWO-DLSA method on the CPAA dataset, demonstrating that it distinguished positive and negative data instances. With the 70% TRS, the APGWO-DLSA model achieved an average accuracy, precision, recall, F-score, and MCC of 94.77%, 95.75%, 94.77%, 95.25%, and 90.52%, respectively. With the 30% TSS, it achieved an average accuracy, precision, recall, F-score, and MCC of 94.65%, 96.20%, 94.65%, 95.41%, and 90.84%, respectively.

Fig. 2. Confusion matrices: (a-b) CPAA dataset on TRS/TSS of 70:30, (c-d) AP dataset on TRS/TSS of 70:30.

TABLE II. SA RESULTS OF THE APGWO-DLSA MODEL ON THE CPAA DATASET
Training phase (70%)
Class       Accuracy_bal   Precision   Recall   F-score   MCC
Positive    99.08          98.77       99.08    98.93     90.52
Negative    90.46          92.73       90.46    91.58     90.52
Average     94.77          95.75       94.77    95.25     90.52
Testing phase (30%)
Class       Accuracy_bal   Precision   Recall   F-score   MCC
Positive    99.21          98.71       99.21    98.96     90.84
Negative    90.10          93.69       90.10    91.86     90.84
Average     94.65          96.20       94.65    95.41     90.84

TABLE III. SA RESULTS OF THE APGWO-DLSA MODEL ON THE AP DATASET
Training phase (70%)
Class       Accuracy_bal   Precision   Recall   F-score   MCC
Positive    98.93          98.38       98.93    98.66     73.06
Negative    70.51          78.48       70.51    74.28     73.06
Average     84.72          88.43       84.72    86.47     73.06
Testing phase (30%)
Class       Accuracy_bal   Precision   Recall   F-score   MCC
Positive    99.32          98.30       99.32    98.81     77.26
Negative    71.31          86.22       71.31    78.06     77.26
Average     85.31          92.26       85.31    88.43     77.26

Table III shows the overall SA results of the APGWO-DLSA method on the AP dataset, demonstrating that the model distinguished positive and negative samples proficiently. With the 70% TRS, the APGWO-DLSA method achieved an average accuracy, precision, recall, F-score, and MCC of 84.72%, 88.43%, 84.72%, 86.47%, and 73.06%, respectively. With the 30% TSS, it achieved an average accuracy, precision, recall, F-score, and MCC of 85.31%, 92.26%, 85.31%, 88.43%, and 77.26%, respectively. Figure 3 displays the accuracy and loss curves of the APGWO-DLSA method on the CPAA and AP datasets.

Fig. 3. Accuracy and loss: (a-b) on the CPAA dataset, (c-d) on the AP dataset.

Table IV compares the SA results of the APGWO-DLSA with existing models on the CPAA dataset [21], demonstrating its superior performance. The APGWO-DLSA model reached the highest accuracy of 94.77%, while the XGBoost, RF, SVM, gradient boosting, NB, and DL models reached 91.60%, 91.38%, 90.16%, 89.74%, 91.01%, and 90.85%, respectively. The APGWO-DLSA model also had the highest F-score of 95.25%, while the XGBoost, RF, SVM, gradient boosting, NB, and DL models achieved 91.89%, 91.79%, 89.81%, 90.49%, 91.64%, and 89.88%, respectively.

TABLE IV. COMPARATIVE ANALYSIS OF APGWO-DLSA WITH OTHER MODELS ON THE CPAA DATASET
Methods              Accuracy   F-score
APGWO-DLSA           94.77      95.25
XGBoost              91.60      91.89
Random Forest        91.38      91.79
SVM                  90.16      89.81
Gradient Boosting    89.74      90.49
Naïve Bayes          91.01      91.64
DL Model             90.85      89.88

Table V compares the SA results of the APGWO-DLSA with existing models on the AP dataset, showing that the APGWO-DLSA model achieved the best performance. The APGWO-DLSA model achieved the highest accuracy of 85.31%, while the XGBoost, RF, SVM, gradient boosting, NB, and DL models achieved 80.63%, 83.23%, 83.02%, 81.49%, 81.38%, and 80.93%, respectively. Similarly, the APGWO-DLSA model reached the highest F-score of 88.43%, while the XGBoost, RF, SVM, gradient boosting, NB, and DL methods achieved 82.24%, 83.42%, 84.25%, 83.94%, 84.80%, and 84.36%, respectively. These results highlight the enhanced SA performance of the proposed APGWO-DLSA method.

TABLE V. COMPARATIVE ANALYSIS OF APGWO-DLSA WITH OTHER MODELS ON THE AP DATASET
Methods              Accuracy   F-score
APGWO-DLSA           85.31      88.43
XGBoost              80.63      82.24
Random Forest        83.23      83.42
SVM                  83.02      84.25
Gradient Boosting    81.49      83.94
Naïve Bayes          81.38      84.80
DL Model             80.93      84.36

V. CONCLUSION
This paper presented the APGWO-DLSA method for accurate sentiment classification in online product reviews. This model applies data preprocessing and the word2vec word embedding process, uses the DBN model for sentiment classification, and selects the hyperparameters of the DBN with the APGWO algorithm. The proposed model was tested on two product review datasets and its results were compared with those of other methods. The comparative analysis showed that APGWO-DLSA achieved a maximum accuracy of 94.77% and 85.31% on the CPAA and AP datasets, respectively. In the future, an advanced DL classification model can be developed to further improve the APGWO-DLSA model.

REFERENCES
[1] R. S. Jagdale, V. S. Shirsat, and S. N. Deshmukh, "Sentiment Analysis on Product Reviews Using Machine Learning Techniques," in Cognitive Informatics and Soft Computing, Singapore, 2019, pp. 639–647, https://doi.org/10.1007/978-981-13-0617-4_61.
[2] L. Yang, Y. Li, J. Wang, and R. S. Sherratt, "Sentiment Analysis for E-Commerce Product Reviews in Chinese Based on Sentiment Lexicon and Deep Learning," IEEE Access, vol. 8, pp. 23522–23530, 2020, https://doi.org/10.1109/ACCESS.2020.2969854.
[3] A. Onan, "Sentiment analysis on product reviews based on weighted word embeddings and deep neural networks," Concurrency and Computation: Practice and Experience, vol. 33, no. 23, 2021, Art. no. e5909, https://doi.org/10.1002/cpe.5909.
[4] R. S. S. Singh, T. J. S. Anand, S. A. Anas, and B. Acharya, "A Real-Time Analytic Face Thermal Recognition System Integrated with Email Notification," Engineering, Technology & Applied Science Research, vol. 13, no. 1, pp. 9961–9967, Feb. 2023, https://doi.org/10.48084/etasr.5430.
[5] K. Aldriwish, "A Deep Learning Approach for Malware and Software Piracy Threat Detection," Engineering, Technology & Applied Science Research, vol. 11, no. 6, pp. 7757–7762, Dec. 2021, https://doi.org/10.48084/etasr.4412.
[6] T. Akhtar, N. G. Haider, and S. M. Khan, "A Comparative Study of the Application of Glowworm Swarm Optimization Algorithm with other Nature-Inspired Algorithms in the Network Load Balancing Problem," Engineering, Technology & Applied Science Research, vol. 12, no. 4, pp. 8777–8784, Aug. 2022, https://doi.org/10.48084/etasr.4999.
Khan, "A Comparative Study of the Application of Glowworm Swarm Optimization Algorithm with other Nature-Inspired Algorithms in the Network Load Balancing Problem," Engineering, Technology & Applied Science Research, vol. 12, no. 4, pp. 8777–8784, Aug. 2022, https://doi.org/10.48084/etasr.4999. [7] M. Bhalekar and M. Bedekar, "The New Dataset MITWPU-1K for Object Recognition and Image Captioning Tasks," Engineering, Technology & Applied Science Research, vol. 12, no. 4, pp. 8803–8808, Aug. 2022, https://doi.org/10.48084/etasr.5039. [8] M. A. Fauzi, "Word2Vec model for sentiment analysis of product reviews in Indonesian language," International Journal of Electrical and Computer Engineering (IJECE), vol. 9, no. 1, pp. 525–530, Feb. 2019, https://doi.org/10.11591/ijece.v9i1.pp525-530. [9] P. Verma, A. Dumka, A. Bhardwaj, and A. Ashok, "Product Review- Based Customer Sentiment Analysis Using an Ensemble of mRMR and Forest Optimization Algorithm (FOA)," International Journal of Applied Metaheuristic Computing (IJAMC), vol. 13, no. 1, pp. 1–21, Jan. 2022, https://doi.org/10.4018/IJAMC.2022010107. [10] S. Sindhura, S. P. Praveen, M. A. Safali, and N. Rao, "Sentiment Analysis for Product Reviews Based on Weakly-Supervised Deep Embedding," in 2021 Third International Conference on Inventive Research in Computing Applications (ICIRCA), Coimbatore, India, Sep. 2021, pp. 999–1004, https://doi.org/10.1109/ICIRCA51532.2021. 9544985. [11] G. Xu, Y. Meng, X. Qiu, Z. Yu, and X. Wu, "Sentiment Analysis of Comment Texts Based on BiLSTM," IEEE Access, vol. 7, pp. 51522– 51532, 2019, https://doi.org/10.1109/ACCESS.2019.2909919. [12] T. Gu, H. Zhao, Z. He, M. Li, and D. Ying, "Integrating external knowledge into aspect-based sentiment analysis using graph neural network," Knowledge-Based Systems, vol. 259, Jan. 2023, Art. no. 110025, https://doi.org/10.1016/j.knosys.2022.110025. [13] A. Thakkar, D. Mungra, A. Agrawal, and K. Chaudhari, "Improving the Performance of Sentiment Analysis Using Enhanced Preprocessing Technique and Artificial Neural Network," IEEE Transactions on Affective Computing, vol. 13, no. 4, pp. 1771–1782, Jul. 2022, https://doi.org/10.1109/TAFFC.2022.3206891. [14] D. K. Jain, P. Boyapati, J. Venkatesh, and M. Prakash, "An Intelligent Cognitive-Inspired Computing with Big Data Analytics Framework for Sentiment Analysis and Classification," Information Processing & Management, vol. 59, no. 1, Jan. 2022, Art. no. 102758, https://doi.org/ 10.1016/j.ipm.2021.102758. [15] A. A. A. Ahmed, S. Agarwal, Im. G. A. Kurniawan, S. P. D. Anantadjaya, and C. Krishnan, "Business boosting through sentiment analysis using Artificial Intelligence approach," International Journal of System Assurance Engineering and Management, vol. 13, no. 1, pp. 699–709, Mar. 2022, https://doi.org/10.1007/s13198-021-01594-x. [16] M. F. Tsai and J. Y. Huang, "Sentiment analysis of pets using deep learning technologies in artificial intelligence of things system," Soft Computing, vol. 25, no. 21, pp. 13741–13752, Nov. 2021, https://doi.org/10.1007/s00500-021-06038-z. [17] M. Mujahid et al., "Sentiment Analysis and Topic Modeling on Tweets about Online Education during COVID-19," Applied Sciences, vol. 11, no. 18, Jan. 2021, Art. no. 8438, https://doi.org/10.3390/app11188438. [18] Z. H. Kilimci, S. Akyokus, and I. Czarnowski, "Deep Learning- and Word Embedding-Based Heterogeneous Classifier Ensembles for Text Classification," Complexity, vol. 2018, Jan. 2018, https://doi.org/ 10.1155/2018/7130146. [19] A. A. 
Süzen, "Developing a multi-level intrusion detection system using hybrid-DBN," Journal of Ambient Intelligence and Humanized Computing, vol. 12, no. 2, pp. 1913–1923, Feb. 2021, https://doi.org/ 10.1007/s12652-020-02271-w. [20] T. T. M. Huynh, T. M. Le, L. T. That, L. V. Tran, and S. V. T. Dao, "A Two-Stage Feature Selection Approach for Fruit Recognition Using Camera Images With Various Machine Learning Classifiers," IEEE Access, vol. 10, pp. 132260–132270, 2022, https://doi.org/10.1109/ ACCESS.2022.3227712. [21] A. Iqbal, R. Amin, J. Iqbal, R. Alroobaea, A. Binmahfoudh, and M. Hussain, "Sentiment Analysis of Consumer Reviews Using Deep Learning," Sustainability, vol. 14, no. 17, Art. no. 10844, Jan. 2022, https://doi.org/10.3390/su141710844.