Science and Technology Indonesia
e-ISSN: 2580-4391, p-ISSN: 2580-4405
Vol. 5, No. 3, July 2020

Research Paper

Simulation Study of Autocorrelated Error Using Bayesian Quantile Regression

Nayla Desviona¹, Ferra Yanuar²*
¹,²Department of Mathematics, Andalas University, Limau Manis Campus, Padang 25163, Indonesia
*Corresponding author: ferrayanuar@sci.unand.ac.id

Abstract
The purpose of this study is to compare the ability of the classical Quantile Regression method and the Bayesian Quantile Regression method in estimating models that contain autocorrelated errors, using a simulation study. In the quantile regression approach, the response data are divided into several parts, or quantiles, conditional on the indicator variables, and the model parameters are then estimated for each selected quantile. The parameters are estimated using conditional quantile functions obtained by minimizing asymmetrically weighted absolute errors. In the Bayesian quantile regression method, the error is assumed to follow an asymmetric Laplace distribution. The Bayesian approach to quantile regression uses the Markov Chain Monte Carlo (MCMC) method with the Gibbs sampling algorithm to produce a convergent posterior mean. The better estimation method is the one that produces the smaller absolute bias and the narrower confidence interval. This study found that the Bayesian Quantile Regression method produces smaller absolute bias values and narrower confidence intervals than the Quantile Regression method. These results indicate that the Bayesian Quantile Regression method tends to produce better estimates than the Quantile Regression method in the presence of autocorrelated errors.

Keywords
Quantile Regression Method, Bayesian Quantile Regression Method, Confidence Interval, Autocorrelation

Received: 20 May 2020, Accepted: 16 July 2020
https://doi.org/10.26554/sti.2020.5.3.70-74

1. INTRODUCTION
Parameter estimation in linear regression analysis is commonly performed using the Ordinary Least Squares (OLS) method.
This method relies on several assumptions that must be satisfied in order to obtain the Best Linear Unbiased Estimator (BLUE). In some empirical data, not all assumptions can be fulfilled; one example is autocorrelation among the errors. The quantile regression method was developed to overcome this weakness of the OLS method. It estimates parameters by dividing the data into quantiles, assuming conditional quantile functions for the data distribution, and minimizing asymmetrically weighted absolute errors. Quantile regression analysis is therefore used when standard assumptions are violated, including the presence of autocorrelation, non-normality, multicollinearity, and heterogeneity of variances (Yanuar et al., 2019a).

A large sample size is usually needed in the quantile regression method, and collecting large data sets requires considerable time and effort. Therefore, the Bayesian method is used to estimate the parameters of the quantile regression model. Bayesian variable selection in quantile regression has received great attention in the literature because the Bayesian method is able to obtain good models from small data sets (Oh et al., 2016; Yanuar et al., 2013; Yanuar et al., 2019d). In the Bayesian view, an unknown parameter is assumed to be a random variable with its own distribution. The distribution of the parameter can be obtained from corresponding previous research or from expert opinion; this distribution is known as the prior distribution. The prior distribution is then combined with the information obtained from the sampled data (the likelihood function). The combination of the two yields the posterior distribution of the parameters. The mean and variance of this posterior distribution serve as the Bayesian estimators of the regression parameters (Yanuar et al., 2019b).
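As a minimal numerical illustration of this prior-times-likelihood updating, consider the following Python sketch. It uses a conjugate normal-normal model, which is not the model used in this paper, purely to show how a prior and the data combine into a posterior whose mean serves as the Bayesian point estimate; all names and values here are our own choices.

```python
import numpy as np

# Conjugate normal-normal example: for data y_i ~ N(theta, noise_var) with
# prior theta ~ N(prior_mean, prior_var), the posterior is again normal.
# Its mean is the Bayesian point estimate described in the text.
def posterior_normal(y, prior_mean, prior_var, noise_var):
    n = len(y)
    post_var = 1.0 / (1.0 / prior_var + n / noise_var)
    post_mean = post_var * (prior_mean / prior_var + np.sum(y) / noise_var)
    return post_mean, post_var

rng = np.random.default_rng(0)
y = rng.normal(2.0, 1.0, size=200)          # simulated data, true theta = 2
post_mean, post_var = posterior_normal(y, prior_mean=0.0, prior_var=10.0,
                                       noise_var=1.0)
# The posterior mean sits close to the truth, and the posterior variance
# is far smaller than the prior variance: the data dominate the vague prior.
```

With an informative prior, as used in this paper, the posterior mean is pulled toward the prior mean, which is what allows good estimates from small samples.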
When the posterior distribution is difficult to identify, or its formulation is complex, the Bayesian method uses the MCMC (Markov Chain Monte Carlo) algorithm to estimate the posterior mean and posterior variance of the model parameters.

In a previous study (Muharisa et al., 2018), the Bayesian Quantile Regression method with non-normal errors was applied to the case of Low Birth Weight (BBLR) in West Sumatra using data from 2016 to 2018. Furthermore, Delviyanti et al. (2018) examined the application of quantile regression with the bootstrap method to autocorrelated errors in the case of the effect of the interest rate on Indonesia's inflation rate. This article compares the ability of the Quantile Regression method and the Bayesian Quantile Regression method in overcoming the autocorrelated error problem using a simulation study.

2. EXPERIMENTAL SECTION
2.1 Materials
2.1.1 Quantile Regression
The quantile method is a regression modeling approach that divides the data set into several equal parts, with the data sorted from smallest to largest. Quantile regression, in theory, is able to handle autocorrelation, violations of the normality assumption, heteroskedasticity, multicollinearity, and related problems. It minimizes asymmetrically weighted absolute errors and fits conditional quantile functions of the data distribution (Muharisa et al., 2018). The linear equation for the $\tau$-th quantile can be written as:

$Y_i = X_i'\beta(\tau) + \varepsilon_i$   (1)

The general estimator of $\beta(\tau)$ is obtained as:

$\hat{\beta}(\tau) = \arg\min_{\beta \in \mathbb{R}^k} \sum_{i=1}^{n} \rho_\tau \left( y_i - Q_\tau(Y|X) \right)$   (2)

where:
$\tau$ : the quantile index, $\tau \in (0, 1)$;
$\rho_\tau$ : an asymmetric loss function;
$Q_\tau(Y|X) = X_i'\beta$ : the $\tau$-th conditional quantile function of $Y$ given $X$ (Yanuar et al., 2019a).
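The minimization in Eq. (2) can be sketched numerically. The following Python illustration (our own, not the paper's code) implements the check loss $\rho_\tau(u) = u(\tau - I(u < 0))$ and shows by grid search that, for $\tau = 0.5$, the value minimizing the summed loss is the sample median:

```python
import numpy as np

# Asymmetric (check) loss from Eq. (2):
# rho_tau(u) = u * (tau - I(u < 0)),
# i.e. tau*|u| for u >= 0 and (1 - tau)*|u| for u < 0.
def check_loss(u, tau):
    u = np.asarray(u, dtype=float)
    return u * (tau - (u < 0))

def objective(q, y, tau):
    """Summed check loss of the residuals y - q for a candidate quantile q."""
    return np.sum(check_loss(y - q, tau))

# For tau = 0.5 the minimizer over a fine grid is the sample median.
rng = np.random.default_rng(1)
y = rng.normal(size=1001)
grid = np.linspace(y.min(), y.max(), 4001)
best_q = grid[np.argmin([objective(q, y, 0.5) for q in grid])]
```

Choosing other values of $\tau$ shifts the minimizer to the corresponding sample quantile, which is exactly how quantile regression targets different parts of the response distribution.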
2.1.2 Bayesian Methods
Bayes introduced a parameter estimation method in which the form of the initial (prior) distribution of the parameters must be known in order to estimate the population parameter $\theta$; this is known as the Bayesian method. The Bayesian method uses the prior distribution $f(\theta)$ together with the likelihood function to determine the posterior distribution $f(\theta|x_1, x_2, \ldots, x_n)$ (Yanuar et al., 2019c). Given $y = (y_1, y_2, \ldots, y_n)$, the prior distribution of $\theta$ is $\pi(\theta)$. The prior distribution used in this research is an informative prior originating from previous research. The determination of the prior distribution parameters is very subjective and depends on the researcher's intuition.

2.1.3 Bayesian Quantile Regression
A random variable $Y$ is said to follow the Asymmetric Laplace Distribution, ALD$(\mu, \sigma, p)$, with location parameter $\mu$, scale parameter $\sigma > 0$, and skewness parameter $p \in (0, 1)$, if its probability density function is (Alhamzawi and Yu, 2012; Feng and He, 2015):

$f(y|\mu, \sigma, p) = \frac{p(1-p)}{\sigma} \exp\left( -\rho_p\left( \frac{y - \mu}{\sigma} \right) \right)$   (3)

where $\rho_p$ is the loss function defined as:

$\rho_p(u) = u(p - I(u < 0))$   (4)

with $I$ the indicator function (Benoit and Van den Poel, 2010; Yanuar et al., 2019).

2.2 Methods
The data used in this study were generated using R software version 3.6.1 (R Development Core Team, 2011). The data consisted of two independent variables, $X_1$ and $X_2$, each generated from $N(0, 1)$. The dependent variable $Y$ was set as $Y_t = 0.5X_{1t} + 2X_{2t} + \varepsilon_t$, with $\varepsilon_t = \sin(0.1\pi t) + Z_t$ (the sine term generated in R as sin(seq(0.1*pi, 15.0*pi, 0.1*pi))) and $Z_t \sim N(0, 0.1)$, for $t = 1, 2, \ldots, 150$.

3. RESULTS AND DISCUSSION
In this section, we describe the results of the parameter estimation and the comparison between the Quantile Regression and Bayesian Quantile Regression methods.

3.1 Durbin-Watson (DW) Test on the Simulation Data
In this study, the Durbin-Watson test was used to detect the presence of autocorrelation in the errors of the simulation data.
Based on the Durbin-Watson (DW) test computed in R version 3.6.1 (R Development Core Team, 2011), the DW statistic was 0.124904. To determine whether the errors of the simulation data were free from autocorrelation, we compared the DW statistic with the DW table values. With $k = 2$ independent variables and $n = 150$ observations, the table values are $d_L = 1.7062$ and $d_U = 1.7602$. According to the Durbin-Watson test, a DW value lying between 0 and $d_L$, that is $0 < DW < d_L$, indicates the presence of positive autocorrelation in the errors.
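The simulation design of Section 2.2 and the DW diagnostic can be re-sketched in Python (the paper's computations were done in R; the random seed, the reading of $N(0, 0.1)$ as a standard deviation of 0.1, and the OLS fit below are our own assumptions for illustration):

```python
import numpy as np

# Durbin-Watson statistic: DW = sum((e_t - e_{t-1})^2) / sum(e_t^2).
# Values near 0 indicate positive autocorrelation, near 2 none, near 4 negative.
def durbin_watson(resid):
    resid = np.asarray(resid, dtype=float)
    return np.sum(np.diff(resid) ** 2) / np.sum(resid ** 2)

rng = np.random.default_rng(42)             # seed is our choice, not the paper's
n = 150
t = np.arange(1, n + 1)
X1 = rng.normal(0.0, 1.0, size=n)
X2 = rng.normal(0.0, 1.0, size=n)
Z = rng.normal(0.0, 0.1, size=n)            # reading N(0, 0.1) as sd = 0.1
eps = np.sin(0.1 * np.pi * t) + Z           # sinusoidal term induces autocorrelation
Y = 0.5 * X1 + 2.0 * X2 + eps

# OLS residuals from regressing Y on X1, X2 retain the sinusoidal pattern,
# so their DW statistic falls well below d_L = 1.7062, signalling positive
# autocorrelation, consistent with the paper's reported DW of 0.1249.
X = np.column_stack([np.ones(n), X1, X2])
beta_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)
resid = Y - X @ beta_hat
dw = durbin_watson(resid)
```

Because the slowly varying sine component cannot be explained by $X_1$ and $X_2$, consecutive residuals are strongly positively correlated, which is precisely the assumption violation the two quantile regression methods are then compared on.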