IHJPAS. 36 (3) 2023
doi.org/10.30526/36.3.3099
Ibn Al-Haitham Journal for Pure and Applied Sciences
Journal homepage: jih.uobaghdad.edu.iq
Article history: Received 6 November 2022, Accepted 26 December 2022, Published in July 2023.
This work is licensed under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/).

Bayesian Approach for Estimating the Unknown Scale Parameter of Erlang Distribution based on General Entropy Loss Function

Jinan A. Naser Al-obedy
Technical College of Management-Baghdad, Middle Technical University, Baghdad, Iraq.
jinan.abbas@mtu.edu.iq

Abstract
We discuss Bayes estimators for the unknown scale parameter θ of the Erlang distribution when the shape parameter η is known, assuming different informative priors for θ. We derive the posterior density, together with the posterior mean and posterior variance, under four informative priors for θ: the inverse exponential distribution, the inverse chi-square distribution, the inverse Gamma distribution, and the standard Levy distribution. We then derive the Bayes estimators of θ based on the general entropy loss function (GELF). The results were obtained by simulation: we generated different cases of the parameters (η, θ) of the Erlang model for different sample sizes, and the estimates were compared in terms of their mean squared error (MSE). We conclude that, by the criterion of smallest MSE, the best estimators of the scale parameter θ of the Erlang distribution based on GELF with shape parameter (c = 1, 2, 3) are those under the inverse gamma prior with (a = 3, b = 2), for all sample sizes n, when the true cases of the Erlang model are (η = 2, θ = 3) and (η = 3, θ = 3).

Keywords: The Erlang distribution, Bayes estimation, The posterior density, Posterior mean, Posterior variance, GELF, MSE.

1. Introduction
The Erlang distribution is a continuous probability distribution originally proposed by Erlang (1909), at the origin of queuing theory. It is used in several fields such as stochastic processes and mathematical biology, and it also represents waiting times and message lengths in telephone traffic: when the durations of individual calls are exponentially distributed, the duration of a succession of calls follows the Erlang distribution. The Erlang distribution is the special case of the Gamma distribution in which the shape parameter is an integer. Many authors have studied Bayes estimators of the shape and scale parameters of the Erlang distribution under different loss functions and prior distributions. We review some of them as follows. [1] discussed Bayesian estimation of the shape and scale parameters of the Erlang distribution under the squared error loss function (SELF), using different informative priors: the truncated Poisson and truncated geometric distributions for the shape parameter, and the inverted gamma and gamma distributions for the scale parameter. They also assumed paired informative priors for the shape and scale parameters jointly, namely (truncated Poisson, inverted gamma), (truncated Poisson, gamma), (truncated geometric, inverted gamma) and (truncated geometric, gamma). They derived the Bayes estimators and posterior variances, compared the behavior of the proposed estimators by simulation, and applied the Bayes estimators to real data sets. [2] obtained the Bayesian estimators and the posterior variances of the shape and scale parameters of the Erlang distribution under the squared error loss function, using different informative generalized truncated distributions.
They considered generalized truncated Poisson and truncated geometric distributions as priors for the shape parameter, and the inverted Gamma distribution as a prior for the scale parameter. [3] introduced a Bayesian procedure for a new generalization of the Erlang distribution under the squared error loss function. They assumed the truncated Poisson distribution as the prior for the shape parameter and the inverted gamma distribution as the prior for the scale parameter, derived the Bayes estimators and their posterior variances, used simulation to compare the behavior of the proposed estimators, and applied the numerical Bayes estimators to a real data set. [4] introduced a new class of weighted Erlang distributions. They discussed different cases of the weighted Erlang distribution and derived its reliability function, rth non-central moment, and moment generating function. They also obtained the coefficient of variation, the coefficient of skewness, the harmonic mean, and the entropy, and on this basis computed the AIC (Akaike information criterion) and BIC (Bayesian information criterion) for a real data set, showing that the weighted Erlang distribution gives better results and estimates than the exponential and Erlang distributions for that data set. [5] discussed the weighted Erlang distribution, estimating its scale parameter by the maximum likelihood method and by Bayes estimation. They derived the posterior mean and posterior variance under the squared error loss function (SELF), the quadratic loss function (QLF), and the entropy loss function (ELF), using Jeffreys', extension of Jeffreys', and quasi priors. Their results showed that the extension of Jeffreys' prior is more efficient than the other priors.
[6] suggested a new continuous model called the length-biased weighted Erlang (LBWE) distribution. They discussed its basic properties, such as the mean, variance, coefficient of variation, harmonic mean, moments, skewness, kurtosis, generating functions, reliability analysis, Rényi entropy, order statistics, and record statistics, and estimated its parameters by the method of moments and maximum likelihood. In a real data application, the LBWE distribution yielded a better fit than the Erlang distribution according to the AIC (Akaike information criterion), AICC (corrected AIC), CAIC (consistent AIC), and BIC (Bayesian information criterion). [7] discussed Bayes estimators of the shape and scale parameters of the Erlang distribution based on the squared error loss function. They assumed different informative priors, represented by the Consul, Geeta, and size-biased Poisson distributions for the shape parameter, and paired informative priors for the shape and scale parameters jointly, namely (Consul, inverted gamma), (Consul, gamma), (Geeta, inverted gamma), (Geeta, gamma) and (size-biased Poisson, inverted gamma). [8] used the Bayesian approach for estimating the scale parameter of the weighted Erlang distribution. They assumed inverse exponential and inverse chi-square prior distributions and derived Bayesian estimators based on the squared error loss function (SELF) and the precautionary loss function (PLF), using simulation to obtain numerical values of the Bayesian estimates and posterior risks. They concluded that SELF yields smaller estimates than PLF, and that the squared error loss function with the inverse exponential prior was the best among the Bayesian estimators.
[9] used different estimation methods for the rate parameter, including the maximum likelihood method, the method of moments, and Bayesian estimation. They derived the Bayesian estimator under two priors, Jeffreys' prior and the quasi prior, based on three loss functions: the precautionary loss function (PLF), Al-Bayyati's loss function (ALF), and the LINEX loss function (LLF). They carried out a simulation study in the R software to compare the different methods by mean squared error over different sample sizes, and applied the parametric estimation to real-life data sets. [10] applied the power transformation method to obtain an extension of the Erlang distribution and derived several mathematical quantities for it: moments, the moment generating function, skewness, kurtosis, incomplete moments, mode, median, order statistics, different measures of entropy, mean deviations, Bonferroni and Lorenz curves, and an account of reliability analysis. They applied maximum likelihood estimation for estimating the parameters of the distribution.

In this study, we discuss Bayesian estimators for the unknown scale parameter θ of the Erlang distribution when the shape parameter η is known, assuming different informative priors for θ. We derive the posterior density, with the posterior mean and posterior variance, under four informative priors for θ: the inverse exponential distribution, the inverse chi-square distribution, the inverse Gamma distribution, and the standard Levy distribution. We rely on the general entropy loss function (GELF) to derive the Bayes estimators of θ.

2. Erlang distribution
Suppose (t₁, t₂, ..., t_n) is a random sample of size n with the probability density function (pdf) of the Erlang distribution [3], given by

g(t; η, θ) = (1/(Γ(η) θ^η)) t^(η−1) exp(−t/θ),  t > 0, θ > 0, η = 1, 2, 3, ...,  (1)

where Γ(·) is the gamma function and η, θ are the shape and scale parameters respectively, with η an integer. When the shape parameter η is allowed to be any positive real number (η > 0), the distribution is the Gamma distribution. The cumulative distribution function (cdf) has the form

G(t; η, θ) = γ(η, z)/Γ(η),  (2)

where γ(η, z) is the lower incomplete gamma function, evaluated at z = t/θ:

γ(η, z) = ∫₀^z u^(η−1) exp(−u) du.  (3)

Also, we can obtain the rth raw moments (r = 1, 2, ...) as follows:

μ′_r = E(t^r) = ∫₀^∞ t^r g(t; η, θ) dt = (1/(Γ(η) θ^η)) ∫₀^∞ t^(η+r−1) exp(−t/θ) dt.  (4)

Substituting w = t/θ, so that t = wθ and dt = θ dw, in equation (4), we have

μ′_r = (θ^r/Γ(η)) ∫₀^∞ w^(η+r−1) exp(−w) dw.

So the rth raw moment [11] of the Erlang distribution is

μ′_r = θ^r Γ(η + r)/Γ(η).  (5)

If r = 1, we obtain the mean of the Erlang distribution:

mean = μ′₁ = θ Γ(η + 1)/Γ(η) = ηθ.  (6)

If r = 2, we have

μ′₂ = E(t²) = θ² Γ(η + 2)/Γ(η) = η(η + 1)θ².  (7)

Then the variance of the Erlang distribution is

Var(t) = E(t²) − [E(t)]² = η(η + 1)θ² − (ηθ)² = ηθ².  (8)

And the coefficient of variation (C.V) [11] of the Erlang distribution is

C.V = √(ηθ²)/(ηθ) = 1/√η.  (9)

3. Bayesian approach
We use the Bayesian approach to derive the Bayes estimators of the unknown scale parameter θ when the shape parameter η is known. We assume four informative priors for θ: the inverse exponential distribution, the inverse chi-square distribution, the inverse Gamma distribution, and the standard Levy distribution. The posterior density under each of these priors is derived as follows.
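The moment identities above can be checked numerically. The sketch below is ours (the paper's own simulations used MATLAB, and the function names here are illustrative): it draws Erlang variates as sums of η independent exponential variates with mean θ, the same device used later in the simulation section, and compares sample moments with the exact values ηθ and ηθ² from equations (6) and (8).

```python
import math
import random

def erlang_sample(eta, theta, rng):
    # An Erlang(eta, theta) variate is the sum of eta i.i.d. exponential
    # variates with mean theta (the shape eta must be a positive integer).
    return sum(rng.expovariate(1.0 / theta) for _ in range(eta))

def erlang_moment(eta, theta, r):
    # r-th raw moment from equation (5): theta^r * Gamma(eta + r) / Gamma(eta)
    return theta**r * math.gamma(eta + r) / math.gamma(eta)

rng = random.Random(0)
eta, theta = 3, 3.0
sample = [erlang_sample(eta, theta, rng) for _ in range(100_000)]
m = sum(sample) / len(sample)
v = sum((x - m) ** 2 for x in sample) / len(sample)

print(round(erlang_moment(eta, theta, 1), 4))  # exact mean, eta*theta = 9.0
print(round(erlang_moment(eta, theta, 2) - erlang_moment(eta, theta, 1) ** 2, 4))  # exact variance, eta*theta^2 = 27.0
print(abs(m - eta * theta) < 0.2, abs(v - eta * theta**2) < 1.0)
```

The Monte Carlo mean and variance agree with equations (6) and (8) to within sampling error, and the coefficient of variation of the sample is close to 1/√η as in equation (9).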
3.1 The posterior density using the inverse exponential distribution
Assume the inverse exponential distribution with scale parameter ω [12] as the prior distribution for the unknown scale parameter θ when the shape parameter η is known, with probability density function (pdf)

h₁(θ; ω) = (ω/θ²) exp(−ω/θ),  θ > 0, ω > 0.  (10)

For the pdf of the Erlang distribution defined in equation (1), the likelihood function of the sample observations (t₁, t₂, ..., t_n) is

L(η, θ; t) = ∏_{i=1}^n g(t_i; η, θ) = (1/Γ(η))^n θ^(−nη) (∏_{i=1}^n t_i^(η−1)) exp(−Σt_i/θ),  (11)

where Σt_i denotes the sum Σ_{i=1}^n t_i. The posterior distribution of θ given the sample observations (t₁, t₂, ..., t_n) [11] is defined as

P₁(θ|t) = L(η, θ; t) h₁(θ; ω) / ∫₀^∞ L(η, θ; t) h₁(θ; ω) dθ.  (12)

Substituting equations (10) and (11) into equation (12) yields the posterior pdf of the scale parameter θ:

P₁(θ|t) = [(1/Γ(η))^n θ^(−nη) (∏ t_i^(η−1)) exp(−Σt_i/θ)] (ω/θ²) exp(−ω/θ) / ∫₀^∞ [(1/Γ(η))^n θ^(−nη) (∏ t_i^(η−1)) exp(−Σt_i/θ)] (ω/θ²) exp(−ω/θ) dθ  (13)

= θ^(−(nη+2)) exp(−(Σt_i + ω)/θ) / ∫₀^∞ θ^(−(nη+2)) exp(−(Σt_i + ω)/θ) dθ.  (14)

Multiplying and dividing the integral in equation (14) by Γ(nη + 1)/(Σt_i + ω)^(nη+1), where Γ(·) is the gamma function, and simplifying, yields

P₁(θ|t) = ((Σt_i + ω)^(nη+1)/Γ(nη + 1)) θ^(−(nη+2)) exp(−(Σt_i + ω)/θ) / K1(t, θ),  (15)

where K1(t, θ) = ∫₀^∞ ((Σt_i + ω)^(nη+1)/Γ(nη + 1)) θ^(−(nη+2)) exp(−(Σt_i + ω)/θ) dθ = 1 is the integral of the pdf of an inverse gamma distribution [14].
Then the posterior distribution of θ is the inverse gamma distribution

P₁(θ|t) = ((Σt_i + ω)^(nη+1)/Γ(nη + 1)) θ^(−(nη+2)) exp(−(Σt_i + ω)/θ),  θ > 0, n, η > 0, ω > 0.  (16)

It means that (θ|t) ~ inverse gamma(Σt_i + ω, nη + 1), with posterior mean

E(θ|t) = (Σt_i + ω)/((nη + 1) − 1) = (Σt_i + ω)/(nη)

and posterior variance

var(θ|t) = (Σt_i + ω)²/(((nη + 1) − 1)²((nη + 1) − 2)) = (Σt_i + ω)²/((nη)²(nη − 1)).

3.2 The posterior density using the inverse Gamma distribution
Assume the inverse Gamma distribution [13] as the prior distribution for the unknown scale parameter θ when the shape parameter η is known, with pdf

h₂(θ; a, b) = (a^b/Γ(b)) θ^(−(b+1)) exp(−a/θ),  θ > 0, a, b > 0.  (17)

Substituting equations (17) and (11) into equation (12) yields the posterior pdf of θ:

P₂(θ|t) = L(η, θ; t) h₂(θ; a, b) / ∫₀^∞ L(η, θ; t) h₂(θ; a, b) dθ  (18)

= θ^(−(nη+b+1)) exp(−(Σt_i + a)/θ) / ∫₀^∞ θ^(−(nη+b+1)) exp(−(Σt_i + a)/θ) dθ.  (19)

Multiplying and dividing the integral in equation (19) by Γ(nη + b)/(Σt_i + a)^(nη+b), where Γ(·) is the gamma function, and simplifying, yields

P₂(θ|t) = ((Σt_i + a)^(nη+b)/Γ(nη + b)) θ^(−(nη+b+1)) exp(−(Σt_i + a)/θ) / K2(t, θ),  (20)

where K2(t, θ) = ∫₀^∞ ((Σt_i + a)^(nη+b)/Γ(nη + b)) θ^(−(nη+b+1)) exp(−(Σt_i + a)/θ) dθ = 1 is the integral of the pdf of an inverse gamma distribution [14].
Then the posterior distribution of θ is the inverse gamma distribution

P₂(θ|t) = ((Σt_i + a)^(nη+b)/Γ(nη + b)) θ^(−(nη+b+1)) exp(−(Σt_i + a)/θ),  θ > 0, n, η > 0, a, b > 0.  (21)

It means that (θ|t) ~ inverse gamma(Σt_i + a, nη + b), with posterior mean

E(θ|t) = (Σt_i + a)/(nη + b − 1)

and posterior variance

var(θ|t) = (Σt_i + a)²/((nη + b − 1)²(nη + b − 2)).

3.3 The posterior density using the inverse chi-square distribution
Assume the inverse chi-square distribution [14] as the prior distribution for the unknown scale parameter θ when the shape parameter η is known, with pdf

h₃(θ; υ) = ((υ/2)^(υ/2)/Γ(υ/2)) θ^(−(1+υ/2)) exp(−υ/(2θ)),  θ > 0, υ > 0.  (22)

Substituting equations (22) and (11) into equation (12) yields the posterior pdf of θ:

P₃(θ|t) = L(η, θ; t) h₃(θ; υ) / ∫₀^∞ L(η, θ; t) h₃(θ; υ) dθ  (23)

= θ^(−(nη+υ/2+1)) exp(−(Σt_i + υ/2)/θ) / ∫₀^∞ θ^(−(nη+υ/2+1)) exp(−(Σt_i + υ/2)/θ) dθ.  (24)

Multiplying and dividing the integral in equation (24) by Γ(nη + υ/2)/(Σt_i + υ/2)^(nη+υ/2), where Γ(·) is the gamma function, and simplifying, yields

P₃(θ|t) = ((Σt_i + υ/2)^(nη+υ/2)/Γ(nη + υ/2)) θ^(−(nη+υ/2+1)) exp(−(Σt_i + υ/2)/θ) / K3(t, θ),  (25)

where K3(t, θ) = ∫₀^∞ ((Σt_i + υ/2)^(nη+υ/2)/Γ(nη + υ/2)) θ^(−(nη+υ/2+1)) exp(−(Σt_i + υ/2)/θ) dθ = 1 is the integral of the pdf of an inverse gamma distribution [14].
Then the posterior distribution of θ is the inverse gamma distribution

P₃(θ|t) = ((Σt_i + υ/2)^(nη+υ/2)/Γ(nη + υ/2)) θ^(−(nη+υ/2+1)) exp(−(Σt_i + υ/2)/θ),  θ > 0, n, η > 0, υ > 0.  (26)

It means that (θ|t) ~ inverse gamma(Σt_i + υ/2, nη + υ/2), with posterior mean

E(θ|t) = (Σt_i + υ/2)/(nη + υ/2 − 1)

and posterior variance

var(θ|t) = (Σt_i + υ/2)²/((nη + υ/2 − 1)²(nη + υ/2 − 2)).

3.4 The posterior density using the standard Levy distribution
Assume the standard Levy distribution with parameter κ [15] as the prior distribution for the unknown scale parameter θ when the shape parameter η is known, with pdf

h₄(θ; κ) = (κ/(2π))^(1/2) θ^(−3/2) exp(−κ/(2θ)),  θ > 0, κ > 0.  (27)

Substituting equations (27) and (11) into equation (12) yields the posterior pdf of θ:

P₄(θ|t) = L(η, θ; t) h₄(θ; κ) / ∫₀^∞ L(η, θ; t) h₄(θ; κ) dθ  (28)

= θ^(−(nη+1/2+1)) exp(−(Σt_i + κ/2)/θ) / ∫₀^∞ θ^(−(nη+1/2+1)) exp(−(Σt_i + κ/2)/θ) dθ.  (29)

Multiplying and dividing the integral in equation (29) by Γ(nη + 1/2)/(Σt_i + κ/2)^(nη+1/2), where Γ(·) is the gamma function, and simplifying, yields

P₄(θ|t) = ((Σt_i + κ/2)^(nη+1/2)/Γ(nη + 1/2)) θ^(−(nη+1/2+1)) exp(−(Σt_i + κ/2)/θ) / K4(t, θ),  (30)

where K4(t, θ) = ∫₀^∞ ((Σt_i + κ/2)^(nη+1/2)/Γ(nη + 1/2)) θ^(−(nη+1/2+1)) exp(−(Σt_i + κ/2)/θ) dθ = 1 is the integral of the pdf of an inverse gamma distribution [14]. Then the posterior distribution of θ is the inverse gamma distribution

P₄(θ|t) = ((Σt_i + κ/2)^(nη+1/2)/Γ(nη + 1/2)) θ^(−(nη+1/2+1)) exp(−(Σt_i + κ/2)/θ),  θ > 0, n, η > 0, κ > 0.  (31)

It means that (θ|t) ~ inverse gamma(Σt_i + κ/2, nη + 1/2), with posterior mean

E(θ|t) = (Σt_i + κ/2)/(nη − 1/2)

and posterior variance

var(θ|t) = (Σt_i + κ/2)²/((nη − 1/2)²(nη − 3/2)).

4. Bayes estimation under general entropy loss function
Calabria and Pulcini [16] (1994) introduced the general entropy loss function (GELF), with the form

L(θ̂, θ) = (θ̂/θ)^c − c log_e(θ̂/θ) − 1,  c ≠ 0,  (32)

where c is the shape parameter of the general entropy loss function. The risk function is

K(θ̂, θ) = E[L(θ̂, θ)] = θ̂^c E(θ^(−c)|t) − c E(log_e(θ̂/θ)|t) − 1.

Taking the partial derivative with respect to θ̂ and setting it equal to zero,

∂K(θ̂, θ)/∂θ̂ = c θ̂^(c−1) E(θ^(−c)|t) − c/θ̂ = 0,

we obtain the Bayes estimator of θ based on the general entropy loss function (GELF), denoted θ̂_GELF, under any of the priors:

θ̂_GELF = [E(θ^(−c)|t)]^(−1/c) = [∫₀^∞ θ^(−c) P(θ|t) dθ]^(−1/c).  (33)

If c = 1, equation (33) yields the Bayes estimator of θ based on the entropy loss function, which coincides with the estimator under the weighted squared error loss function; and if c = −1, it reduces to the posterior mean, the Bayes estimator under the squared error loss function. So we can derive the Bayes estimator of the unknown scale parameter θ, when the shape parameter η is known, based on GELF under each of the four priors as follows.
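Because every posterior derived above is an inverse gamma distribution, differing only in its shape α and scale β (β is Σt_i plus a prior term), the GELF estimator of equation (33) reduces to the single closed form θ̂ = β[Γ(α)/Γ(α + c)]^(1/c). The following is a minimal sketch of ours, not the paper's code; the function names are illustrative, and the hyperparameter defaults are the values used later in the simulation section. It works on the log scale so the gamma function does not overflow for large nη.

```python
import math

def gelf_estimate(alpha, beta, c):
    # Bayes estimator of theta under GELF when the posterior is
    # inverse gamma(alpha, beta): beta * [Gamma(alpha)/Gamma(alpha + c)]^(1/c),
    # computed via log-gamma to avoid overflow for large n*eta.
    if c == 0:
        raise ValueError("GELF requires c != 0")
    return beta * math.exp((math.lgamma(alpha) - math.lgamma(alpha + c)) / c)

def posteriors(t_sum, n, eta, omega=0.5, a=3.0, b=2.0, upsilon=5.0, kappa=1.5):
    # (shape, scale) of the inverse gamma posterior under each prior;
    # the hyperparameter defaults are the simulation-section values.
    return {
        "inverse exponential": (n * eta + 1, t_sum + omega),
        "inverse gamma": (n * eta + b, t_sum + a),
        "inverse chi-square": (n * eta + upsilon / 2, t_sum + upsilon / 2),
        "standard Levy": (n * eta + 0.5, t_sum + kappa / 2),
    }

# Illustrative data: n = 15 observations from Erlang(eta = 2, theta = 3),
# so t_sum is near n * eta * theta = 90.
for name, (alpha, beta) in posteriors(t_sum=90.0, n=15, eta=2).items():
    print(name, round(gelf_estimate(alpha, beta, c=1), 4))
```

For c = −1 the expression reduces to β/(α − 1), which is the posterior mean, consistent with the squared error loss case noted above.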
4.1 Bayes estimator of the scale parameter using the inverse exponential prior distribution
Using the posterior density under the inverse exponential prior, we derive the Bayes estimator of θ based on GELF by substituting equation (16) into equation (33):

θ̂_GELF(1) = [∫₀^∞ θ^(−c) P₁(θ|t) dθ]^(−1/c)

= [∫₀^∞ ((Σt_i + ω)^(nη+1)/Γ(nη + 1)) θ^(−(nη+c+2)) exp(−(Σt_i + ω)/θ) dθ]^(−1/c).  (34)

Multiplying and dividing the integral in equation (34) by Γ(nη + 1 + c)/(Σt_i + ω)^(nη+1+c), where Γ(·) is the gamma function, and simplifying, yields

θ̂_GELF(1) = [(Γ(nη + 1 + c)/Γ(nη + 1)) (Σt_i + ω)^(−c) B1(t, θ)]^(−1/c),  (35)

where B1(t, θ) = ∫₀^∞ ((Σt_i + ω)^(nη+1+c)/Γ(nη + 1 + c)) θ^(−(nη+c+2)) exp(−(Σt_i + ω)/θ) dθ = 1 is the integral of the pdf of an inverse gamma distribution [13]. So the Bayes estimator of θ is

θ̂_GELF(1) = [Γ(nη + 1)/Γ(nη + 1 + c)]^(1/c) (Σt_i + ω).  (36)

4.2 Bayes estimator of the scale parameter using the inverse Gamma prior distribution
Using the posterior density under the inverse Gamma prior, we derive the Bayes estimator of θ based on GELF by substituting equation (21) into equation (33):

θ̂_GELF(2) = [∫₀^∞ θ^(−c) P₂(θ|t) dθ]^(−1/c)

= [∫₀^∞ ((Σt_i + a)^(nη+b)/Γ(nη + b)) θ^(−(nη+b+c+1)) exp(−(Σt_i + a)/θ) dθ]^(−1/c).  (37)

Multiplying and dividing the integral in equation (37) by Γ(nη + b + c)/(Σt_i + a)^(nη+b+c), where Γ(·) is the gamma function, and simplifying, yields

θ̂_GELF(2) = [(Γ(nη + b + c)/Γ(nη + b)) (Σt_i + a)^(−c) B2(t, θ)]^(−1/c),  (38)

where B2(t, θ) = ∫₀^∞ ((Σt_i + a)^(nη+b+c)/Γ(nη + b + c)) θ^(−(nη+b+c+1)) exp(−(Σt_i + a)/θ) dθ = 1 is the integral of the pdf of an inverse gamma distribution [13]. So the Bayes estimator of θ is

θ̂_GELF(2) = [Γ(nη + b)/Γ(nη + b + c)]^(1/c) (Σt_i + a).  (39)

4.3 Bayes estimator of the scale parameter using the inverse chi-square prior distribution
Using the posterior density under the inverse chi-square prior, we derive the Bayes estimator of θ based on GELF by substituting equation (26) into equation (33):

θ̂_GELF(3) = [∫₀^∞ θ^(−c) P₃(θ|t) dθ]^(−1/c)

= [∫₀^∞ ((Σt_i + υ/2)^(nη+υ/2)/Γ(nη + υ/2)) θ^(−(nη+υ/2+c+1)) exp(−(Σt_i + υ/2)/θ) dθ]^(−1/c).  (40)

Multiplying and dividing the integral in equation (40) by Γ(nη + υ/2 + c)/(Σt_i + υ/2)^(nη+υ/2+c), where Γ(·) is the gamma function, and simplifying, yields

θ̂_GELF(3) = [(Γ(nη + υ/2 + c)/Γ(nη + υ/2)) (Σt_i + υ/2)^(−c) B3(t, θ)]^(−1/c),  (41)

where B3(t, θ) = ∫₀^∞ ((Σt_i + υ/2)^(nη+υ/2+c)/Γ(nη + υ/2 + c)) θ^(−(nη+υ/2+c+1)) exp(−(Σt_i + υ/2)/θ) dθ = 1 is the integral of the pdf of an inverse gamma distribution [13]. So the Bayes estimator of θ is

θ̂_GELF(3) = [Γ(nη + υ/2)/Γ(nη + υ/2 + c)]^(1/c) (Σt_i + υ/2).  (42)

4.4 Bayes estimator of the scale parameter using the standard Levy prior distribution
Using the posterior density under the standard Levy prior, we derive the Bayes estimator of θ based on GELF by substituting equation (31) into equation (33):

θ̂_GELF(4) = [∫₀^∞ θ^(−c) P₄(θ|t) dθ]^(−1/c)

= [∫₀^∞ ((Σt_i + κ/2)^(nη+1/2)/Γ(nη + 1/2)) θ^(−(nη+1/2+c+1)) exp(−(Σt_i + κ/2)/θ) dθ]^(−1/c).  (43)

Multiplying and dividing the integral in equation (43) by Γ(nη + 1/2 + c)/(Σt_i + κ/2)^(nη+1/2+c), where Γ(·) is the gamma function, and simplifying, yields

θ̂_GELF(4) = [(Γ(nη + 1/2 + c)/Γ(nη + 1/2)) (Σt_i + κ/2)^(−c) B4(t, θ)]^(−1/c),  (44)

where B4(t, θ) = ∫₀^∞ ((Σt_i + κ/2)^(nη+1/2+c)/Γ(nη + 1/2 + c)) θ^(−(nη+1/2+c+1)) exp(−(Σt_i + κ/2)/θ) dθ = 1 is the integral of the pdf of an inverse gamma distribution [13]. So the Bayes estimator of θ is

θ̂_GELF(4) = [Γ(nη + 1/2)/Γ(nη + 1/2 + c)]^(1/c) (Σt_i + κ/2).  (45)

5.
Simulation and Discussion
The simulation was conducted using the MATLAB R2018a program with 5000 replications. We generated random samples of sizes n = 15, 25, 50 from the Erlang distribution with parameters (η, θ) as sums of independent identically distributed exponential variates with parameter θ: that is, we generated η random samples U_{i1}, U_{i2}, ..., U_{iη} of size n from the exponential distribution with parameter θ and added them, t_i = U_{i1} + U_{i2} + ... + U_{iη}, to obtain a random sample of size n from the Erlang distribution with parameters (η, θ). In the same way, we generated different cases of the parameters of the Erlang model, represented by (η, θ) = (2, 3), (3, 3), (5, 3), (5, 7), with the following known values of the hyperparameters ω, (a, b), υ and κ of the prior distributions:
- for the inverse exponential prior, ω = 0.5;
- for the inverse Gamma prior, (a = 3, b = 2);
- for the inverse chi-square prior, υ = 5;
- for the standard Levy prior, κ = 1.5.
We compared the behavior of the Bayes estimates of the scale parameter θ of the Erlang distribution according to their mean squared error (MSE), computed as

MSE = (1/5000) Σ_{r=1}^{5000} (θ̂_r − θ)²,  (46)

under the different informative priors for θ based on the general entropy loss function. The simulation results are summarized in Tables 1 to 4, which report the estimates θ̂ and their mean squared error (MSE) for the parameter θ of the Erlang distribution for different values of the sample size n and θ under the different priors.

Table 1: Bayes estimates θ̂ of the scale parameter θ and their mean squared error (MSE) of the Erlang distribution under the inverse exponential prior with ω = 0.5, based on the general entropy loss function.
            (η = 2, θ = 3)     (η = 3, θ = 3)     (η = 5, θ = 3)     (η = 5, θ = 7)
n    c      θ̂       MSE       θ̂       MSE       θ̂       MSE       θ̂       MSE
15   -1     3.0190  0.2987    3.0080  0.1987    2.9992  0.1223    7.0126  0.6494
     1      2.9216  0.2856    2.9426  0.1934    2.9597  0.1207    6.9203  0.6386
     2      2.8756  0.2862    2.9111  0.194     2.9405  0.1211    6.8752  0.6397
     3      2.8312  0.2909    2.8804  0.1965    2.9215  0.1222    6.8308  0.6447
25   -1     3.0077  0.1823    3.0009  0.1259    3.0063  0.0694    7.0072  0.3921
     1      2.9487  0.1778    2.9614  0.1241    2.9824  0.0686    6.9516  0.3882
     2      2.9202  0.1782    2.9421  0.1244    2.9706  0.0686    6.9242  0.3886
     3      2.8923  0.1801    2.9231  0.1254    2.9590  0.0689    6.8970  0.3904
50   -1     2.9963  0.0872    2.9999  0.0607    -       -         -       -
     1      2.9666  0.0866    2.9800  0.0603    -       -         -       -
     2      2.9520  0.0870    2.9702  0.0604    -       -         -       -
     3      2.9376  0.0877    2.9604  0.0607    -       -         -       -
Note 1: The shaded cells (in the original layout) represent the smallest value of MSE; a dash (-) indicates that the estimate does not exist.

Table 2: Bayes estimates θ̂ of the scale parameter θ and their mean squared error (MSE) of the Erlang distribution under the inverse gamma prior with (a = 3, b = 2), based on the general entropy loss function.
            (η = 2, θ = 3)     (η = 3, θ = 3)     (η = 5, θ = 3)     (η = 5, θ = 7)
n    c      θ̂       MSE       θ̂       MSE       θ̂       MSE       θ̂       MSE
15   -1     3.0022  0.2794    2.9970  0.1901    2.9926  0.1191    6.9532  0.6345
     1      2.9084  0.2706    2.9332  0.1866    2.9538  0.1181    6.8629  0.6348
     2      2.8640  0.2728    2.9025  0.1878    2.9348  0.1188    6.8187  0.6409
     3      2.8212  0.2787    2.8725  0.1909    2.9161  0.1201    6.7753  0.6509
25   -1     2.9977  0.1752    2.9943  0.1227    3.0022  0.0683    6.9714  0.3867
     1      2.9400  0.1721    2.9554  0.1215    2.9786  0.0677    6.9166  0.3868
     2      2.9122  0.1730    2.9364  0.122     2.9669  0.0678    6.8895  0.3891
     3      2.8849  0.1755    2.9177  0.1232    2.9554  0.0682    6.8627  0.3928
50   -1     2.9914  0.0856    2.9966  0.0599    -       -         -       -
     1      2.9620  0.0853    2.9768  0.0597    -       -         -       -
     2      2.9476  0.0858    2.9671  0.0598    -       -         -       -
     3      2.9334  0.0867    2.9574  0.0602    -       -         -       -
Note 1: The shaded cells (in the original layout) represent the smallest value of MSE; a dash (-) indicates that the estimate does not exist.

Table 3: Bayes estimates θ̂ of the scale parameter θ and their mean squared error (MSE) of the Erlang distribution under the inverse chi-square prior with υ = 5, based on the general entropy loss function.
            (η = 2, θ = 3)     (η = 3, θ = 3)     (η = 5, θ = 3)     (η = 5, θ = 7)
n    c      θ̂       MSE       θ̂       MSE       θ̂       MSE       θ̂       MSE
15   -1     2.9387  0.2744    2.954   0.1882    2.9665  0.1186    6.9012  0.6338
     1      2.8483  0.2773    2.8918  0.1900    2.9283  0.1197    6.8121  0.6433
     2      2.8055  0.2845    2.8618  0.1937    2.9096  0.1212    6.7686  0.6538
     3      2.7641  0.2951    2.8326  0.1991    2.8911  0.1235    6.7257  0.6679
25   -1     2.9589  0.1735    2.9682  0.1220    2.9864  0.0679    6.9399  0.3864
     1      2.9025  0.1748    2.9299  0.1228    2.9630  0.0681    6.8855  0.3900
     2      2.8753  0.1778    2.9112  0.1243    2.9514  0.0685    6.8587  0.3939
     3      2.8486  0.1821    2.8927  0.1265    2.9400  0.0693    6.8321  0.3992
50   -1     2.9717  0.0855    2.9834  0.0598    -       -         -       -
     1      2.9427  0.0863    2.9638  0.0600    -       -         -       -
     2      2.9285  0.0873    2.9541  0.0605    -       -         -       -
     3      2.9144  0.0888    2.9445  0.0610    -       -         -       -
Note 1: The shaded cells (in the original layout) represent the smallest value of MSE; a dash (-) indicates that the estimate does not exist.

Table 4: Bayes estimates θ̂ of the scale parameter θ and their mean squared error (MSE) of the Erlang distribution under the standard Levy prior with κ = 1.5, based on the general entropy loss function.
            (η = 2, θ = 3)     (η = 3, θ = 3)     (η = 5, θ = 3)     (η = 5, θ = 7)
n    c      θ̂       MSE       θ̂       MSE       θ̂       MSE       θ̂       MSE
15   -1     3.0786  0.3148    3.0474  0.2054    3.0227  0.1244    7.0630  0.662
     1      2.9777  0.2892    2.9804  0.1947    2.9827  0.1210    6.9694  0.6416
     2      2.9300  0.2844    2.9482  0.1928    2.9631  0.1204    6.9237  0.6381
     3      2.8841  0.2842    2.9168  0.1930    2.9438  0.1207    6.8787  0.6388
25   -1     3.0431  0.1878    3.0244  0.1282    3.0203  0.0704    7.0374  0.3966
     1      2.9828  0.1789    2.9843  0.1245    2.9963  0.0688    6.9813  0.3893
     2      2.9537  0.1773    2.9648  0.1239    2.9844  0.0685    6.9536  0.3880
     3      2.9253  0.1774    2.9455  0.124     2.9726  0.0685    6.9262  0.3883
50   -1     3.0139  0.0883    3.0116  0.0612    -       -         -       -
     1      2.9839  0.0866    2.9916  0.0604    -       -         -       -
     2      2.9691  0.0865    2.9817  0.0602    -       -         -       -
     3      2.9546  0.0867    2.9719  0.0603    -       -         -       -
Note 1: A dash (-) indicates that the estimate does not exist.

From the simulation results listed in Tables 1 to 4, the best Bayes estimates under the different priors based on the general entropy loss function (GELF), with respect to its shape parameter c and according to the smallest value of MSE, are as follows.
For the results listed in Table 1, the best Bayes estimates θ̂ of the scale parameter θ under the inverse exponential prior with ω = 0.5 are obtained:
- when the shape parameter c of GELF is equal to 1, for all sample sizes (n = 15, 25, 50), where the true cases of the Erlang model are (η = 2, θ = 3) and (η = 3, θ = 3);
- when the shape parameter c of GELF is equal to 1, for sample sizes n = 15, 25 (the estimate does not exist for n = 50), where the true cases of the Erlang model are (η = 5, θ = 3) and (η = 5, θ = 7).
For the results listed in Table 2, the best Bayes estimates θ̂ of the scale parameter θ under the inverse gamma prior with (a = 3, b = 2) are obtained:
- when the shape parameter c of GELF is equal to 1, for all sample sizes (n = 15, 25, 50), where the true cases of the Erlang model are (η = 2, θ = 3) and (η = 3, θ = 3);
• when c = 1, for sample sizes n = 15 and 25 (the estimate does not exist for n = 50), where the true case of the Erlang model is (η = 5, θ = 3);
• when c = -1, for sample sizes n = 15 and 25 (the estimate does not exist for n = 50), where the true case of the Erlang model is (η = 5, θ = 7).

For the results listed in Table 3, the best Bayes estimates θ̂ of the scale parameter θ under the inverse chi-square prior with (υ = 5) are obtained:
• when c = -1, for all sample sizes (n = 15, 25, 50), where the true cases of the Erlang model are (η = 2, θ = 3) and (η = 3, θ = 3);
• when c = -1, for sample sizes n = 15 and 25 (the estimate does not exist for n = 50), where the true cases are (η = 5, θ = 3) and (η = 5, θ = 7).

For the results listed in Table 4, the best Bayes estimates θ̂ of the scale parameter θ under the standard Levy prior with parameter (1.5) are obtained:
• when c = 3, for all sample sizes (n = 15, 25, 50), where the true case of the Erlang model is (η = 2, θ = 3);
• when c = 2, for all sample sizes (n = 15, 25, 50), where the true case is (η = 3, θ = 3);
• when c = 2, for sample sizes n = 15 and 25 (the estimate does not exist for n = 50), where the true cases are (η = 5, θ = 3) and (η = 5, θ = 7).

In general, the Bayes estimate of the scale parameter θ does not exist when the true values (η, θ) of the Erlang model are large and the sample size is large (i.e., n = 50), under all of the informative priors.

6. Conclusion
In this study, we reviewed the properties of the Erlang distribution.
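The estimation procedure behind Tables 1-4 can be sketched in code. The following Python fragment is our own illustrative reconstruction, not the paper's program: it assumes the scale parameterization of the Erlang distribution (a Gamma variate with integer shape η and scale θ) and the inverse gamma IG(a, b) prior of Table 2, under which the posterior of θ is again inverse gamma and the GELF estimator of Calabria and Pulcini [16], θ̂ = [E(θ^(-c))]^(-1/c), has a closed form. The function names `gelf_estimate` and `simulate_mse` are hypothetical.

```python
import math
import random

def gelf_estimate(data, eta, a, b, c):
    """Illustrative GELF Bayes estimate of the Erlang scale parameter theta.

    Assumes x_i ~ Erlang(eta, scale=theta) with theta ~ IG(a, b) a priori,
    so the posterior is IG(alpha, beta) with alpha = a + n*eta and
    beta = b + sum(x).  For c != 0 the GELF estimator is
    theta_hat = [E(theta^-c)]^(-1/c) = beta * (Gamma(alpha)/Gamma(alpha+c))^(1/c).
    """
    n = len(data)
    alpha = a + n * eta            # posterior shape
    beta = b + sum(data)           # posterior scale
    if c == 0 or alpha + c <= 0:
        raise ValueError("requires c != 0 and alpha + c > 0")
    return beta * math.exp((math.lgamma(alpha) - math.lgamma(alpha + c)) / c)

def simulate_mse(eta, theta, n, c, a=3.0, b=2.0, reps=2000, seed=1):
    """Monte Carlo MSE of the GELF estimate, mirroring the simulation design."""
    rng = random.Random(seed)
    sq_err = 0.0
    for _ in range(reps):
        # An Erlang(eta, theta) variate is the sum of eta Exp(scale=theta) variates.
        sample = [sum(rng.expovariate(1.0 / theta) for _ in range(eta))
                  for _ in range(n)]
        sq_err += (gelf_estimate(sample, eta, a, b, c) - theta) ** 2
    return sq_err / reps

# Example: true case (eta = 2, theta = 3), n = 25, GELF shape c = 1
# print(simulate_mse(2, 3.0, 25, 1))
```

Note that for c = -1 the closed form reduces to the posterior mean beta/(alpha - 1), which is why the c = -1 column behaves like a squared-error-loss estimate; positive c penalizes overestimation more heavily.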
We derived the cumulative distribution function (cdf), mean, variance, and coefficient of variation (C.V.) of the Erlang distribution. We also derived the posterior density, together with the posterior mean and posterior variance, of the unknown scale parameter θ using different informative priors, namely the inverse exponential, inverse chi-square, inverse gamma, and standard Levy distributions, based on the general entropy loss function (GELF) for different values of its shape parameter c. We can draw the following conclusions:
• The Bayes estimators of the scale parameter θ of the Erlang distribution based on GELF with shape parameter c = 1, 2, 3 under the inverse gamma prior with (a = 3, b = 2) have smaller mean square error than the other Bayes estimators:
  - for all sample sizes n, where the true cases of the Erlang model are (η = 2, θ = 3) and (η = 3, θ = 3);
  - for sample sizes n = 15 and 25, where the true cases are (η = 5, θ = 3) and (η = 5, θ = 7).
• The Bayes estimate of the scale parameter θ based on GELF does not exist for the large sample size (n = 50) under all informative priors, where the true cases of the Erlang model are (η = 5, θ = 3) and (η = 5, θ = 7).
• In general, the Bayes estimate of the scale parameter θ does not exist when the true values (η, θ) of the Erlang model are large and the sample size is large (n = 50), under all informative priors.

References
1. Haq, A.; Dey, S. Bayesian estimation of Erlang distribution under different prior distributions. Journal of Reliability and Statistical Studies 2011, 4(1), 1-30.
2. Khan, A. H.; Jan, T. R. Bayesian estimation of Erlang distribution under different generalized truncated distributions as priors. Journal of Modern Applied Statistical Methods 2012, 11(2), 416-442.
3. Bhat, B. A.; Kumar, P.; Wani, M.
A. New generalization of Erlang distribution with Bayes estimation. International Journal of Innovative Research and Review 2016, 4(2), 14-19. ISSN: 2347-4424 (Online). Available at http://www.cibtech.org/jirr.htm.
4. Mudasir, S.; Ahmad, S. P. Characterization and information measures of weighted Erlang distribution. Journal of Statistics Applications & Probability Letters 2017, 4(3), 109-122.
5. Mudasir, S.; Ahmad, S. P. Parameter estimation of weighted Erlang distribution using R software. Mathematical Theory and Modeling 2017, 7(6), 1-21. ISSN 2224-5804 (Paper), ISSN 2225-0522 (Online).
6. Moussa, A. A.; Othman, S. A.; Reyad, H. M. The length-biased weighted Erlang distribution. Asian Research Journal of Mathematics 2017, 6(3), 1-15. Article no. ARJOM.35963, ISSN: 2456-477X.
7. Khan, A. H.; Jan, T. R. Bayesian estimation of shape and scale parameters of Erlang distribution. 3rd International Conference on Recent Developments in Science, Humanities and Management, Mahratta Chamber of Commerce, Industries and Agriculture, Pune (India), 22 July 2018, pp. 184-205. ISBN: 978-93-87793-35-4. www.conferenceword.in.
8. Hincal, E.; Alsaadi, S. Posterior analysis of weighted Erlang distribution. AIP Conference Proceedings 2019, 2183, 070023. https://doi.org/10.1063/1.5136185.
9. Ahmad, K.; Ahmad, S. P. A comparative study of maximum likelihood estimation and Bayesian estimation for Erlang distribution and its applications. Open access peer-reviewed chapter in Statistical Methodologies, 2019. DOI: http://dx.doi.org/10.5772/intechopen.85627.
10. Aijaz, A.; Qurat ul Ain, S.; Ahmad, A.; Tripathi, R. An extension of Erlang distribution with properties having applications in engineering and medical science. Int. J. Open Problems Compt. Math. 2021, 14(1). Print ISSN: 1998-6262, Online ISSN: 2079-0376.
11. Bickel, P. J.; Doksum, K. A. Mathematical Statistics: Basic Ideas and Selected Topics. Holden-Day, Inc., San Francisco, 1977.
12.
Ieren, T. G.; Abdullahi, J. Properties and applications of a two-parameter inverse exponential distribution with a decreasing failure rate. Pakistan Journal of Statistics 2020, 36(3), 183-206.
13. Rivera, P. A.; Calderín-Ojeda, E.; Gallardo, D. I.; Gómez, H. W. A compound class of the inverse gamma and power series distributions. Symmetry 2021, 13, 1328. https://doi.org/10.3390/sym13081328.
14. Ahmad, A.; Ahmad, S. P. Weighted analogue of inverse gamma distribution: statistical properties, estimation and simulation study. Pak. J. Stat. Oper. Res. 2019, 15(1), 25-37.
15. Achcar, J. A.; Coelho-Barros, E. A.; Cuevas, J. R. T.; Mazucheli, J. Use of Lèvy distribution to analyze longitudinal data with asymmetric distribution and presence of left censored data. Communications for Statistical Applications and Methods 2018, 25(1), 43-60.
16. Calabria, R.; Pulcini, G. An engineering approach to Bayes estimation for the Weibull distribution. Microelectronics Reliability 1994, 34(5), 789-802.