IHJPAS. 36(2)2023
Ibn Al-Haitham Journal for Pure and Applied Sciences
Journal homepage: jih.uobaghdad.edu.iq
doi.org/10.30526/36.2.2946

Bayesian Estimation for Two Parameters of Exponential Distribution under Different Loss Functions
Maryam N. Abd, Department of Mathematics, College of Science, Mustansiriyah University, Baghdad, Iraq. memealzubaydi@gmail.com
Huda A. Rasheed, Department of Mathematics, College of Science, Mustansiriyah University, Baghdad, Iraq. iraqalnoor1@gmail.com
Article history: Received 3 August 2022, Accepted 4 September 2022, Published in April 2023.
This work is licensed under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/).

Abstract
In this paper, the two parameters of the Exponential distribution are estimated by the Bayesian method under three different loss functions: the squared error loss function, the precautionary loss function, and the entropy loss function. An Exponential distribution and a Gamma distribution are assumed as the priors of the scale parameter γ and the location parameter δ, respectively. The maximum likelihood estimates are used as initial values for the Bayesian estimators, and the Tierney-Kadane approximation is applied. Based on a Monte-Carlo simulation, the estimators are compared through their mean squared errors (MSEs). The results show that Bayesian estimation under the entropy loss function, with the Exponential and Gamma priors for the scale and location parameters respectively, gives the best estimator of the scale parameter. For the location parameter, Bayesian estimation under the entropy loss function is best when the scale γ is small (say γ < 1), while Bayesian estimation under the precautionary loss function is best when γ is relatively large (say γ > 1).

Keywords: Exponential distribution, maximum likelihood estimator, squared error loss function, precautionary loss function, entropy loss function, Tierney-Kadane approximation.

1. Introduction
The Exponential distribution plays an important role in industrial and engineering applications, as well as in queuing theory, which arises in many different situations. It is also used when the failure rate is constant over time, where the reliability sometimes needs to be evaluated from a specific starting time rather than from zero.

Although the Exponential distribution is important on both the theoretical and the practical side, the two-parameter case remains comparatively little studied. The main objective of this paper is therefore to find the best estimators of the location and scale parameters using Bayesian estimation and to compare the performance of these estimators in an MSE-based Monte-Carlo simulation study.

The probability density function (p.d.f.) of the two-parameter Exponential distribution is given by [1]:

f(x; \gamma, \delta) = \frac{1}{\gamma} e^{-(x-\delta)/\gamma}, \quad x > \delta,\ \gamma > 0,\ \delta \ge 0, \tag{1}

where γ and δ are the scale and location parameters, respectively. The Exponential distribution is a special case of the Weibull distribution with shape parameter equal to 1 [2]. The cumulative distribution function (CDF) of the Exponential distribution is:

F(x; \gamma, \delta) = 1 - e^{-(x-\delta)/\gamma}. \tag{2}

In general, the reliability function is defined as R(x; γ, δ) = 1 − F(x; γ, δ). Therefore, the reliability function of the Exponential distribution is:

R(x; \gamma, \delta) = e^{-(x-\delta)/\gamma}. \tag{3}

The hazard function is given by h(x; γ, δ) = f(x; γ, δ)/R(x; γ, δ). After substituting (1) and (3), the hazard function of the Exponential distribution becomes:

h(x; \gamma, \delta) = \frac{1}{\gamma}. \tag{4}
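The four functions in Eqs. (1)-(4) can be written as a short sketch (plain Python, helper names our own):

```python
import math

# Two-parameter Exponential distribution, following Eqs. (1)-(4):
# gamma_ is the scale parameter (gamma > 0), delta the location parameter (delta >= 0).

def pdf(x, gamma_, delta):
    """f(x; gamma, delta) = (1/gamma) * exp(-(x - delta)/gamma) for x > delta, else 0."""
    if x <= delta:
        return 0.0
    return math.exp(-(x - delta) / gamma_) / gamma_

def cdf(x, gamma_, delta):
    """F(x; gamma, delta) = 1 - exp(-(x - delta)/gamma)."""
    if x <= delta:
        return 0.0
    return 1.0 - math.exp(-(x - delta) / gamma_)

def reliability(x, gamma_, delta):
    """R(x) = 1 - F(x) = exp(-(x - delta)/gamma)."""
    return 1.0 - cdf(x, gamma_, delta)

def hazard(x, gamma_, delta):
    """h(x) = f(x)/R(x); constant and equal to 1/gamma for x > delta."""
    return pdf(x, gamma_, delta) / reliability(x, gamma_, delta)
```

The constant hazard 1/γ is visible numerically: the ratio f/R is the same at every x above the location δ.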
2. Estimation Methods
This section presents Bayesian estimators of the two unknown parameters of the Exponential distribution, using the maximum likelihood estimates as initial values for the Bayesian estimators.

2.1 Maximum Likelihood Estimator (MLE)
The maximum likelihood (ML) method was developed by R. A. Fisher (1912) and has been widely used since then [3]. It chooses the parameter values that maximize the likelihood function. Assume that x_1, x_2, \dots, x_n is a random sample of size n drawn from an Exponential distribution with scale parameter γ and location parameter δ. The likelihood function of the two-parameter Exponential distribution is:

L(\gamma, \delta \mid x) = \prod_{i=1}^{n} f(x_i; \gamma, \delta) = \prod_{i=1}^{n} \frac{1}{\gamma} e^{-(x_i-\delta)/\gamma} = \frac{1}{\gamma^n}\, e^{-\frac{1}{\gamma}\sum_{i=1}^{n}(x_i-\delta)}. \tag{5}

The logarithm of the likelihood function is:

\ell_{ML} = -n \ln \gamma - \frac{1}{\gamma}\sum_{i=1}^{n}(x_i-\delta). \tag{6}

Differentiating \ell_{ML} partially with respect to δ gives

\frac{\partial \ell_{ML}}{\partial \delta} = \frac{n}{\gamma} > 0,

so the log-likelihood is strictly increasing in δ and cannot be maximized by equating this derivative to zero. Since the support requires δ ≤ x_i for every i, the maximizing value of δ is obtained from the order statistics:

\hat{\delta} = \min(X_1, X_2, \dots, X_n) = X_{(1)}.

Differentiating with respect to γ and equating to zero,

\frac{\partial \ell_{ML}}{\partial \gamma} = \frac{-n}{\gamma} + \frac{1}{\gamma^2}\sum_{i=1}^{n}(x_i-\delta) = 0.

After some simplification, we get:

\hat{\gamma} = \frac{1}{n}\sum_{i=1}^{n}\left(x_i - X_{(1)}\right).

3. Bayesian Estimation
3.1 Posterior Density Function Using Exponential and Gamma Priors
To estimate the two unknown parameters γ and δ of the Exponential distribution, the prior J_1(\cdot) for γ is assumed to be an Exponential distribution with parameter c [4]:

J_1(\gamma) = \begin{cases} c e^{-c\gamma}, & \gamma > 0,\ c > 0 \\ 0, & \text{otherwise.} \end{cases}

On the other hand, the prior J_2(\cdot) for δ is assumed to be a Gamma distribution with shape parameter a and rate parameter b [4]:

J_2(\delta) = \begin{cases} \dfrac{b^a \delta^{a-1} e^{-b\delta}}{\Gamma(a)}, & \delta > 0,\ a > 0,\ b > 0 \\ 0, & \text{otherwise.} \end{cases}
Furthermore, γ and δ are assumed to be independent random variables, so their joint prior distribution is:

J(\gamma, \delta) = J_1(\gamma)\, J_2(\delta) = c e^{-c\gamma}\, \frac{b^a \delta^{a-1} e^{-b\delta}}{\Gamma(a)}.

Hence, the joint posterior density function of γ and δ is given by:

h(\gamma, \delta \mid x_1, \dots, x_n) = \frac{L(x_1, \dots, x_n; \gamma, \delta)\, J(\gamma, \delta)}{\int_0^{\infty}\!\int_0^{\infty} L(x_1, \dots, x_n; \gamma, \delta)\, J(\gamma, \delta)\, d\gamma\, d\delta},

h(\gamma, \delta \mid x) = \frac{\gamma^{-n} e^{-\frac{1}{\gamma}\sum_{i=1}^{n}(x_i-\delta)}\, c e^{-c\gamma}\, b^a \delta^{a-1} e^{-b\delta}}{\int_0^{\infty}\!\int_0^{\infty} \gamma^{-n} e^{-\frac{1}{\gamma}\sum_{i=1}^{n}(x_i-\delta)}\, c e^{-c\gamma}\, b^a \delta^{a-1} e^{-b\delta}\, d\gamma\, d\delta}.

3.2 Tierney-Kadane Approximation
Let u(γ, δ) be any function of γ and δ. Then

E[u(\gamma, \delta)] = \int_0^{\infty}\!\int_0^{\infty} u(\gamma, \delta)\, h(\gamma, \delta \mid x)\, d\gamma\, d\delta = \frac{\int_0^{\infty}\!\int_0^{\infty} u(\gamma, \delta)\, L(x; \gamma, \delta)\, J(\gamma, \delta)\, d\gamma\, d\delta}{\int_0^{\infty}\!\int_0^{\infty} L(x; \gamma, \delta)\, J(\gamma, \delta)\, d\gamma\, d\delta}.

This ratio of two integrals is difficult to evaluate in closed form, so approximation methods are needed. One of them is the Tierney-Kadane approximation, which is applied as follows. Define the functions δ(γ, δ) and δ*_γ(γ, δ) by (here δ(γ, δ) denotes the Tierney-Kadane function, which the context distinguishes from the location parameter δ):

\delta(\gamma, \delta) = \frac{\ln L(\gamma, \delta) + \ln J(\gamma, \delta)}{n}, \tag{7}

\delta_{\gamma}^{*}(\gamma, \delta) = \delta(\gamma, \delta) + \frac{\ln u(\gamma, \delta)}{n}. \tag{8}

Furthermore, let (\hat{\gamma}_{\delta}, \hat{\delta}_{\delta}) and (\hat{\gamma}_{\delta^*}, \hat{\delta}_{\delta^*}) maximize δ(γ, δ) and δ*_γ(γ, δ), respectively; in practice, the maximum likelihood estimates are used as initial values for these maximizations. Let |Σ| and |Σ*_γ| denote the determinants of the negative inverse Hessians of δ(γ, δ) and δ*_γ(γ, δ) at their maxima:

|\Sigma| = \left[\frac{\partial^2 \delta}{\partial \gamma^2}\, \frac{\partial^2 \delta}{\partial \delta^2} - \left(\frac{\partial^2 \delta}{\partial \gamma\, \partial \delta}\right)^{2}\right]^{-1},

|\Sigma_{\gamma}^{*}| = \left[\frac{\partial^2 \delta_{\gamma}^{*}}{\partial \gamma^2}\, \frac{\partial^2 \delta_{\gamma}^{*}}{\partial \delta^2} - \left(\frac{\partial^2 \delta_{\gamma}^{*}}{\partial \gamma\, \partial \delta}\right)^{2}\right]^{-1}.

Therefore, E[u(γ, δ)] can be approximated by:

E[u(\gamma, \delta)] \approx \sqrt{\frac{|\Sigma_{\gamma}^{*}|}{|\Sigma|}}\; e^{\,n\{\delta_{\gamma}^{*}(\hat{\gamma}_{\delta^*},\, \hat{\delta}_{\delta^*}) - \delta(\hat{\gamma}_{\delta},\, \hat{\delta}_{\delta})\}}.

3.3 Bayes Estimator under Squared Error Loss Function
The squared error loss function, given by Mood, Graybill, and Boes (1974), is symmetric and widely used for most estimation problems. It is defined as [5]:

L(\hat{\gamma}, \gamma) = (\hat{\gamma} - \gamma)^2.
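As a quick numerical illustration (not part of the paper), the posterior expectation of this loss is minimized at the posterior mean, which is exactly the Bayes estimator derived next. A Monte-Carlo check with an arbitrary stand-in posterior sample:

```python
import numpy as np

# Stand-in posterior draws for gamma; the Gamma(3, 0.2) choice is arbitrary,
# purely to illustrate where the squared-error posterior risk is minimized.
rng = np.random.default_rng(0)
draws = rng.gamma(shape=3.0, scale=0.2, size=200_000)

# Average squared-error loss for each candidate estimate on a grid.
grid = np.linspace(0.1, 2.0, 1001)
risk = np.array([np.mean((g - draws) ** 2) for g in grid])

best = grid[np.argmin(risk)]
print(best, draws.mean())  # the grid minimizer sits at the posterior mean
```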
The risk function R_S(\hat{\gamma}, \gamma) is the posterior expectation of the loss L(\hat{\gamma}, \gamma) with respect to h(γ | x). That is:

R_S(\hat{\gamma}, \gamma) = E[L(\hat{\gamma}, \gamma)] = \int_0^{\infty} (\hat{\gamma} - \gamma)^2\, h(\gamma \mid x)\, d\gamma = \hat{\gamma}^2 - 2\hat{\gamma}\, E(\gamma \mid x) + E(\gamma^2 \mid x). \tag{9}

The value of \hat{\gamma} that minimizes the posterior risk (9) is obtained by setting its first derivative with respect to \hat{\gamma} equal to zero:

2\hat{\gamma} - 2E(\gamma \mid x) = 0 \;\Rightarrow\; \hat{\gamma}_S = E(\gamma \mid x),

where \hat{\gamma}_S denotes the Bayes estimator of γ under the squared error loss function; it is the posterior mean.

i) Bayesian Estimation for γ under the Squared Error Loss Function
To obtain the Bayes estimator of γ under SELF, take u(γ, δ) = γ. Hence,

E[\gamma \mid x] = \frac{\int_0^{\infty}\!\int_0^{\infty} \gamma\, \gamma^{-n} e^{-\frac{1}{\gamma}\sum_{i=1}^{n}(x_i-\delta)}\, c e^{-c\gamma}\, b^a \delta^{a-1} e^{-b\delta}\, d\gamma\, d\delta}{\int_0^{\infty}\!\int_0^{\infty} \gamma^{-n} e^{-\frac{1}{\gamma}\sum_{i=1}^{n}(x_i-\delta)}\, c e^{-c\gamma}\, b^a \delta^{a-1} e^{-b\delta}\, d\gamma\, d\delta}.

Therefore, the Tierney-Kadane approximation is applied as follows:

\ln J(\gamma, \delta) = \ln c - c\gamma + a \ln b + (a-1) \ln \delta - b\delta - \ln \Gamma(a). \tag{10}

Substituting (6) and (10) into (7) yields

\delta(\gamma, \delta) = \frac{1}{n}\left[-n \ln \gamma - \frac{\sum_{i=1}^{n}(x_i-\delta)}{\gamma} + a \ln b + (a-1) \ln \delta - b\delta + \ln c - c\gamma - \ln \Gamma(a)\right].

Now, according to (8),

\delta_{\gamma}^{*}(\gamma, \delta) = \delta(\gamma, \delta) + \frac{\ln \gamma}{n}.

The required derivatives of δ(γ, δ) are:

\frac{\partial \delta}{\partial \gamma} = \frac{1}{n}\left[\frac{-n}{\gamma} + \frac{\sum_{i=1}^{n}(x_i-\delta)}{\gamma^2} - c\right],
\frac{\partial^2 \delta}{\partial \gamma^2} = \frac{1}{n}\left[\frac{n}{\gamma^2} - \frac{2\sum_{i=1}^{n}(x_i-\delta)}{\gamma^3}\right],
\frac{\partial \delta}{\partial \delta} = \frac{1}{n}\left[\frac{n}{\gamma} + \frac{a-1}{\delta} - b\right],
\frac{\partial^2 \delta}{\partial \delta^2} = \frac{1-a}{n\delta^2},
\frac{\partial^2 \delta}{\partial \gamma\, \partial \delta} = \frac{-1}{\gamma^2},

|\Sigma| = \left[\frac{1}{n}\left(\frac{n}{\gamma^2} - \frac{2\sum_{i=1}^{n}(x_i-\delta)}{\gamma^3}\right) \frac{1-a}{n\delta^2} - \frac{1}{\gamma^4}\right]^{-1}.
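The pieces above, the function δ(γ, δ) and the determinant |Σ|, translate directly into code. A minimal sketch (function names our own; a, b, c are the prior hyperparameters):

```python
import math
import numpy as np

def delta_fn(g, d, x, a, b, c):
    """delta(gamma, delta) = (ln L + ln J)/n, i.e. Eqs. (6) and (10) substituted into Eq. (7)."""
    n = len(x)
    s = float(np.sum(x - d))
    return (-n * math.log(g) - s / g + a * math.log(b) + (a - 1) * math.log(d)
            - b * d + math.log(c) - c * g - math.lgamma(a)) / n

def sigma_det(g, d, x, a):
    """|Sigma| = [d2_gg * d2_dd - (d2_gd)^2]^(-1), using the closed-form second derivatives above."""
    n = len(x)
    s = float(np.sum(x - d))
    d2_gg = (n / g**2 - 2 * s / g**3) / n   # second derivative in gamma
    d2_dd = (1 - a) / (n * d**2)            # second derivative in delta
    d2_gd = -1.0 / g**2                     # mixed derivative
    return 1.0 / (d2_gg * d2_dd - d2_gd**2)
```

Evaluated at the maximizing values (with the MLEs as starting points), these two pieces, together with the analogous |Σ*| for each loss function, give the ratio appearing in the Tierney-Kadane formula.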
In order to compute |\Sigma_{\gamma SEL}^{*}|, we first obtain the following expressions:

\frac{\partial \delta^{*}}{\partial \gamma} = \frac{1}{n}\left[\frac{-n}{\gamma} + \frac{\sum_{i=1}^{n}(x_i-\delta)}{\gamma^2} - c\right] + \frac{1}{n\gamma},
\frac{\partial^2 \delta^{*}}{\partial \gamma^2} = \frac{1}{n}\left[\frac{n}{\gamma^2} - \frac{2\sum_{i=1}^{n}(x_i-\delta)}{\gamma^3}\right] - \frac{1}{n\gamma^2},
\frac{\partial \delta^{*}}{\partial \delta} = \frac{1}{n}\left[\frac{n}{\gamma} + \frac{a-1}{\delta} - b\right],
\frac{\partial^2 \delta^{*}}{\partial \delta^2} = \frac{1-a}{n\delta^2},
\frac{\partial^2 \delta^{*}}{\partial \gamma\, \partial \delta} = \frac{-1}{\gamma^2},

|\Sigma_{\gamma SEL}^{*}| = \left[\frac{1}{n}\left(\frac{n}{\gamma^2} - \frac{2\sum_{i=1}^{n}(x_i-\delta)}{\gamma^3} - \frac{1}{\gamma^2}\right) \frac{1-a}{n\delta^2} - \frac{1}{\gamma^4}\right]^{-1},

\hat{\gamma}_S = \sqrt{\frac{|\Sigma_{\gamma SEL}^{*}|}{|\Sigma|}}\; e^{\,n\{\delta_{\gamma}^{*}(\hat{\gamma}_{\delta^*},\, \hat{\delta}_{\delta^*}) - \delta(\hat{\gamma}_{\delta},\, \hat{\delta}_{\delta})\}}. \tag{11}

ii) Bayesian Estimation for δ under the Squared Error Loss Function
Similarly, δ can be estimated by taking u(γ, δ) = δ, so that

\delta_{\delta}^{*}(\gamma, \delta) = \delta(\gamma, \delta) + \frac{\ln \delta}{n}.

In order to compute |\Sigma_{\delta SEL}^{*}|, we first obtain the following expressions:

\frac{\partial \delta^{*}}{\partial \gamma} = \frac{1}{n}\left[\frac{-n}{\gamma} + \frac{\sum_{i=1}^{n}(x_i-\delta)}{\gamma^2} - c\right],
\frac{\partial^2 \delta^{*}}{\partial \gamma^2} = \frac{1}{n}\left[\frac{n}{\gamma^2} - \frac{2\sum_{i=1}^{n}(x_i-\delta)}{\gamma^3}\right],
\frac{\partial \delta^{*}}{\partial \delta} = \frac{1}{n}\left[\frac{n}{\gamma} + \frac{a-1}{\delta} - b\right] + \frac{1}{n\delta},
\frac{\partial^2 \delta^{*}}{\partial \delta^2} = \frac{1-a}{n\delta^2} - \frac{1}{n\delta^2},
\frac{\partial^2 \delta^{*}}{\partial \gamma\, \partial \delta} = \frac{-1}{\gamma^2},

|\Sigma_{\delta SEL}^{*}| = \left[\frac{1}{n}\left(\frac{n}{\gamma^2} - \frac{2\sum_{i=1}^{n}(x_i-\delta)}{\gamma^3}\right)\left(\frac{1-a}{n\delta^2} - \frac{1}{n\delta^2}\right) - \frac{1}{\gamma^4}\right]^{-1}.

Then,

\hat{\delta}_S = \sqrt{\frac{|\Sigma_{\delta SEL}^{*}|}{|\Sigma|}}\; e^{\,n\{\delta_{\delta}^{*}(\hat{\gamma}_{\delta^*},\, \hat{\delta}_{\delta^*}) - \delta(\hat{\gamma}_{\delta},\, \hat{\delta}_{\delta})\}}. \tag{12}

3.4 Bayes Estimator under Precautionary Loss Function
Norstrom (1996) introduced an alternative asymmetric precautionary loss function and presented a general class of precautionary loss functions containing it as a special case. These loss functions approach infinity near the origin, which prevents underestimation and thus gives conservative estimators; this is especially valuable when low failure rates are being estimated, since underestimating a failure rate can have disastrous consequences. A very useful and simple asymmetric precautionary loss function is [6]:

L(\hat{\theta}, \theta) = \frac{(\theta - \hat{\theta})^2}{\hat{\theta}}.

Based on the precautionary loss function, the risk function R_P(\hat{\gamma}, \gamma) is derived as follows:

R_P(\hat{\gamma}, \gamma) = E[L(\hat{\gamma}, \gamma)] = \int_0^{\infty} \frac{(\gamma - \hat{\gamma})^2}{\hat{\gamma}}\, h(\gamma \mid x)\, d\gamma = E(\gamma^2 \mid x)\, \hat{\gamma}^{-1} - 2E(\gamma \mid x) + \hat{\gamma}.
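As a numerical aside (not in the paper), this risk can be probed with simulated draws standing in for the posterior of γ: a grid search locates its minimizer at the square root of the posterior second moment, in agreement with the estimator derived next.

```python
import numpy as np

# Arbitrary stand-in posterior draws for gamma, only to probe the risk shape.
rng = np.random.default_rng(1)
draws = rng.gamma(shape=3.0, scale=0.2, size=200_000)

m1 = draws.mean()          # E(gamma | x)
m2 = np.mean(draws ** 2)   # E(gamma^2 | x)

# R_P(g_hat) = E(gamma^2 | x)/g_hat - 2 E(gamma | x) + g_hat, evaluated on a grid.
grid = np.linspace(0.1, 2.0, 1001)
risk = m2 / grid - 2 * m1 + grid

best = grid[np.argmin(risk)]
print(best, np.sqrt(m2))  # minimizer sits at sqrt(E(gamma^2 | x))
```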
Taking the derivative of R_P(\hat{\gamma}, \gamma) with respect to \hat{\gamma} and setting it equal to zero yields:

-E(\gamma^2 \mid x)\, \hat{\gamma}^{-2} + 1 = 0 \;\Rightarrow\; \hat{\gamma}_P^2 = E(\gamma^2 \mid x).

Hence, the Bayes estimator of γ under the precautionary loss function, denoted \hat{\gamma}_P, is given by:

\hat{\gamma}_P = \sqrt{E(\gamma^2 \mid x)}.

i) Bayesian Estimation for γ under the Precautionary Loss Function
To obtain the Bayes estimator of γ under PLF, take u(γ, δ) = γ². Then

\delta_{\gamma}^{*}(\gamma, \delta) = \delta(\gamma, \delta) + \frac{\ln \gamma^2}{n}.

In order to compute |\Sigma_{\gamma PL}^{*}|, we first obtain the following expressions:

\frac{\partial \delta^{*}}{\partial \gamma} = \frac{1}{n}\left[\frac{-n}{\gamma} + \frac{\sum_{i=1}^{n}(x_i-\delta)}{\gamma^2} - c\right] + \frac{2}{n\gamma},
\frac{\partial^2 \delta^{*}}{\partial \gamma^2} = \frac{1}{n}\left[\frac{n}{\gamma^2} - \frac{2\sum_{i=1}^{n}(x_i-\delta)}{\gamma^3}\right] - \frac{2}{n\gamma^2},
\frac{\partial \delta^{*}}{\partial \delta} = \frac{1}{n}\left[\frac{n}{\gamma} + \frac{a-1}{\delta} - b\right],
\frac{\partial^2 \delta^{*}}{\partial \delta^2} = \frac{1-a}{n\delta^2},
\frac{\partial^2 \delta^{*}}{\partial \gamma\, \partial \delta} = \frac{-1}{\gamma^2},

|\Sigma_{\gamma PL}^{*}| = \left[\frac{1}{n}\left(\frac{n}{\gamma^2} - \frac{2\sum_{i=1}^{n}(x_i-\delta)}{\gamma^3} - \frac{2}{\gamma^2}\right) \frac{1-a}{n\delta^2} - \frac{1}{\gamma^4}\right]^{-1}.

Hence,

\hat{\gamma}_P = \left[\sqrt{\frac{|\Sigma_{\gamma PL}^{*}|}{|\Sigma|}}\; e^{\,n\{\delta_{\gamma}^{*}(\hat{\gamma}_{\delta^*},\, \hat{\delta}_{\delta^*}) - \delta(\hat{\gamma}_{\delta},\, \hat{\delta}_{\delta})\}}\right]^{1/2}. \tag{13}

ii) Bayesian Estimation for the Location Parameter δ under the Precautionary Loss Function
Similarly, δ can be estimated by taking u(γ, δ) = δ². Then

\delta_{\delta}^{*}(\gamma, \delta) = \delta(\gamma, \delta) + \frac{\ln \delta^2}{n}.

In order to compute |\Sigma_{\delta PL}^{*}|, we first obtain the following expressions:

\frac{\partial \delta^{*}}{\partial \gamma} = \frac{1}{n}\left[\frac{-n}{\gamma} + \frac{\sum_{i=1}^{n}(x_i-\delta)}{\gamma^2} - c\right],
\frac{\partial^2 \delta^{*}}{\partial \gamma^2} = \frac{1}{n}\left[\frac{n}{\gamma^2} - \frac{2\sum_{i=1}^{n}(x_i-\delta)}{\gamma^3}\right],
\frac{\partial \delta^{*}}{\partial \delta} = \frac{1}{n}\left[\frac{n}{\gamma} + \frac{a-1}{\delta} - b\right] + \frac{2}{n\delta},
\frac{\partial^2 \delta^{*}}{\partial \delta^2} = \frac{1-a}{n\delta^2} - \frac{2}{n\delta^2},
\frac{\partial^2 \delta^{*}}{\partial \gamma\, \partial \delta} = \frac{-1}{\gamma^2},

|\Sigma_{\delta PL}^{*}| = \left[\frac{1}{n}\left(\frac{n}{\gamma^2} - \frac{2\sum_{i=1}^{n}(x_i-\delta)}{\gamma^3}\right) \frac{(1-a)-2}{n\delta^2} - \frac{1}{\gamma^4}\right]^{-1}.

Thus,

\hat{\delta}_P = \left[\sqrt{\frac{|\Sigma_{\delta PL}^{*}|}{|\Sigma|}}\; e^{\,n\{\delta_{\delta}^{*}(\hat{\gamma}_{\delta^*},\, \hat{\delta}_{\delta^*}) - \delta(\hat{\gamma}_{\delta},\, \hat{\delta}_{\delta})\}}\right]^{1/2}. \tag{14}

3.5 Bayes Estimator under Entropy Loss Function
The entropy loss function was originally proposed by Calabria and Pulcini (1994). It is related to the Linex loss function and is defined as [7]:

L(\hat{\theta}, \theta) \propto \left(\frac{\hat{\theta}}{\theta}\right)^{p} - p \ln\left(\frac{\hat{\theta}}{\theta}\right) - 1.

Here p = 1 is assumed. Based on the entropy loss function, the risk function R_E(\hat{\theta}, \theta) is derived as follows:

R_E(\hat{\theta}, \theta) = E[L(\hat{\theta}, \theta)] = \int_0^{\infty} \left(\frac{\hat{\theta}}{\theta} - \ln\frac{\hat{\theta}}{\theta} - 1\right) h(\theta \mid x)\, d\theta.
Taking the partial derivative of R_E(\hat{\theta}, \theta) with respect to \hat{\theta} yields

\frac{\partial R_E(\hat{\theta}, \theta)}{\partial \hat{\theta}} = \int_0^{\infty} \frac{1}{\theta}\, h(\theta \mid x)\, d\theta - \frac{1}{\hat{\theta}} \int_0^{\infty} h(\theta \mid x)\, d\theta.

Setting \partial R_E / \partial \hat{\theta} = 0 gives E(\theta^{-1} \mid x) - 1/\hat{\theta} = 0. Hence, the Bayes estimator under the entropy loss function is:

\hat{\theta}_E = \left[E(\theta^{-1} \mid x)\right]^{-1}.

i) Bayesian Estimation for γ under the Entropy Loss Function
Take u(γ, δ) = 1/γ. Then

\delta_{\gamma}^{*}(\gamma, \delta) = \delta(\gamma, \delta) + \frac{\ln(1/\gamma)}{n} = \delta(\gamma, \delta) - \frac{\ln \gamma}{n}.

In order to compute |\Sigma_{\gamma EL}^{*}|, we first obtain the following expressions:

\frac{\partial \delta^{*}}{\partial \gamma} = \frac{1}{n}\left[\frac{-n}{\gamma} + \frac{\sum_{i=1}^{n}(x_i-\delta)}{\gamma^2} - c\right] - \frac{1}{n\gamma},
\frac{\partial^2 \delta^{*}}{\partial \gamma^2} = \frac{1}{n}\left[\frac{n}{\gamma^2} - \frac{2\sum_{i=1}^{n}(x_i-\delta)}{\gamma^3}\right] + \frac{1}{n\gamma^2},
\frac{\partial \delta^{*}}{\partial \delta} = \frac{1}{n}\left[\frac{n}{\gamma} + \frac{a-1}{\delta} - b\right],
\frac{\partial^2 \delta^{*}}{\partial \delta^2} = \frac{1-a}{n\delta^2},
\frac{\partial^2 \delta^{*}}{\partial \gamma\, \partial \delta} = \frac{-1}{\gamma^2},

|\Sigma_{\gamma EL}^{*}| = \left[\frac{1}{n}\left(\frac{n}{\gamma^2} - \frac{2\sum_{i=1}^{n}(x_i-\delta)}{\gamma^3} + \frac{1}{\gamma^2}\right) \frac{1-a}{n\delta^2} - \frac{1}{\gamma^4}\right]^{-1}.

Then,

\hat{\gamma}_E = \left[\sqrt{\frac{|\Sigma_{\gamma EL}^{*}|}{|\Sigma|}}\; e^{\,n\{\delta_{\gamma}^{*}(\hat{\gamma}_{\delta^*},\, \hat{\delta}_{\delta^*}) - \delta(\hat{\gamma}_{\delta},\, \hat{\delta}_{\delta})\}}\right]^{-1}. \tag{15}

ii) Bayesian Estimation for δ under the Entropy Loss Function
Take u(γ, δ) = 1/δ. Then

\delta_{\delta}^{*}(\gamma, \delta) = \delta(\gamma, \delta) + \frac{\ln(1/\delta)}{n} = \delta(\gamma, \delta) - \frac{\ln \delta}{n}.

In order to compute |\Sigma_{\delta EL}^{*}|, we first obtain the following expressions:

\frac{\partial \delta^{*}}{\partial \gamma} = \frac{1}{n}\left[\frac{-n}{\gamma} + \frac{\sum_{i=1}^{n}(x_i-\delta)}{\gamma^2} - c\right],
\frac{\partial^2 \delta^{*}}{\partial \gamma^2} = \frac{1}{n}\left[\frac{n}{\gamma^2} - \frac{2\sum_{i=1}^{n}(x_i-\delta)}{\gamma^3}\right],
\frac{\partial \delta^{*}}{\partial \delta} = \frac{1}{n}\left[\frac{n}{\gamma} + \frac{a-1}{\delta} - b\right] - \frac{1}{n\delta},
\frac{\partial^2 \delta^{*}}{\partial \delta^2} = \frac{1-a}{n\delta^2} + \frac{1}{n\delta^2},
\frac{\partial^2 \delta^{*}}{\partial \gamma\, \partial \delta} = \frac{-1}{\gamma^2},

|\Sigma_{\delta EL}^{*}| = \left[\frac{1}{n}\left(\frac{n}{\gamma^2} - \frac{2\sum_{i=1}^{n}(x_i-\delta)}{\gamma^3}\right) \frac{(1-a)+1}{n\delta^2} - \frac{1}{\gamma^4}\right]^{-1}.

Then,

\hat{\delta}_E = \left[\sqrt{\frac{|\Sigma_{\delta EL}^{*}|}{|\Sigma|}}\; e^{\,n\{\delta_{\delta}^{*}(\hat{\gamma}_{\delta^*},\, \hat{\delta}_{\delta^*}) - \delta(\hat{\gamma}_{\delta},\, \hat{\delta}_{\delta})\}}\right]^{-1}. \tag{16}

Simulation Study
In this section, a Monte-Carlo simulation is employed to compare the performance of four different estimators (the maximum likelihood estimator and the Bayes estimators under the squared error, precautionary, and entropy loss functions) for the unknown scale and location parameters.
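A minimal sketch of this simulation, for the maximum likelihood estimators only and with a reduced replication count (the Bayes estimators of Eqs. (11)-(16) would be plugged into the same loop):

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_mse_mle(n, gamma_, delta, R=2000):
    """Monte-Carlo MSEs of the MLEs of (gamma, delta) over R replications."""
    se_g = 0.0
    se_d = 0.0
    for _ in range(R):
        # Sample from the two-parameter Exponential: shift a standard draw by delta.
        x = delta + rng.exponential(scale=gamma_, size=n)
        d_hat = x.min()                     # delta_hat = X_(1)
        g_hat = float(np.mean(x - d_hat))   # gamma_hat = (1/n) * sum(x_i - X_(1))
        se_g += (g_hat - gamma_) ** 2
        se_d += (d_hat - delta) ** 2
    return se_g / R, se_d / R

# Both MSEs shrink as the sample size grows (compare Tables 2 and 6):
print(simulate_mse_mle(10, 0.5, 0.8))
print(simulate_mse_mle(100, 0.5, 0.8))
```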
The comparison is made on the basis of the mean squared error (MSE), defined as:

MSE(\hat{\theta}) = \frac{\sum_{i=1}^{R} (\hat{\theta}_i - \theta)^2}{R},

where R is the number of replications (generated samples). In this paper, R = 5000 samples of sizes n = 10, 30, 50, and 100, representing small, moderate, and large sample sizes, are generated from an Exponential distribution with γ = 0.5, 1.5 and δ = 0.8, 2. The parameter of the Exponential prior of γ is chosen as c = 0.4, and the two parameters of the Gamma prior of δ are set to a = 0.3 and b = 0.6.

Discussion
The results are summarized in Tables 1-8, which contain the expected values and MSEs of the estimators of γ and δ. We observe that:
1. The MSE values of the different estimation methods increase with increasing values of γ or δ.
2. Bayesian estimation under the entropy loss function, assuming the Exponential and Gamma priors for the scale and location parameters respectively, is the best estimator of γ.
3. The best estimation method for δ is Bayesian estimation under the entropy loss function in the case of a small value of γ (say γ < 1), while Bayesian estimation under the precautionary loss function is best in the case of a relatively large value of γ (say γ > 1).
4. In general, the Bayes estimates of both γ and δ are better than the maximum likelihood estimates.

Table 1: Expected values of the estimators of the scale parameter γ of the Exponential distribution when γ = 0.5

δ      Method   n = 10        n = 30        n = 50        n = 100
0.8    γ̂_ML    0.44891420    0.48373730    0.49016020    0.49513680
       γ̂_SE    0.44833940    0.48366570    0.49013460    0.49513010
       γ̂_E     0.44833720    0.48366560    0.49013450    0.49513010
       γ̂_P     0.44834030    0.48366580    0.49013470    0.49513010
2      γ̂_ML    0.44891410    0.48373720    0.49016010    0.49513680
       γ̂_SE    0.44881240    0.48372580    0.49015650    0.49513560
       γ̂_E     0.44881220    0.48372580    0.49015660    0.49513550
       γ̂_P     0.44881240    0.48372580    0.49015670    0.49513570
Table 2: MSE values of the estimators of the scale parameter γ of the Exponential distribution when γ = 0.5

δ      Method   n = 10        n = 30        n = 50        n = 100
0.8    γ̂_ML    0.02490506    0.00825872    0.00499760    0.00255322
       γ̂_SE    0.02480166    0.00825400    0.00499656    0.00255308
       γ̂_E     0.02480095    0.00825400    0.00499656    0.00255308
       γ̂_P     0.02480199    0.00825401    0.00499655    0.00255308
2      γ̂_ML    0.02490506    0.00825872    0.00499760    0.00255323
       γ̂_SE    0.02488599    0.00825792    0.00499743    0.00255320
       γ̂_E     0.02488597    0.00825792    0.00499743    0.00255320
       γ̂_P     0.02488600    0.00825792    0.00499743    0.00255320

Table 3: Expected values of the estimators of the scale parameter γ of the Exponential distribution when γ = 1.5

δ      Method   n = 10        n = 30        n = 50        n = 100
0.8    γ̂_ML    1.34674300    1.45121400    1.47047900    1.48540900
       γ̂_SE    1.33628700    1.44952700    1.46982700    1.48523600
       γ̂_E     1.33607700    1.44952300    1.46982600    1.48523600
       γ̂_P     1.33638800    1.44952900    1.46982700    1.48523600
2      γ̂_ML    1.34674300    1.45121400    1.47047900    1.48540900
       γ̂_SE    1.34434900    1.45090400    1.47036600    1.48537900
       γ̂_E     1.34433600    1.45090400    1.47036600    1.48537900
       γ̂_P     1.34435500    1.45090500    1.47036700    1.48537900

Table 4: MSE values of the estimators of the scale parameter γ of the Exponential distribution when γ = 1.5

δ      Method   n = 10        n = 30        n = 50        n = 100
0.8    γ̂_ML    0.22414510    0.07432850    0.04497841    0.02297903
       γ̂_SE    0.21975310    0.07402348    0.04490448    0.02296835
       γ̂_E     0.21958860    0.07402191    0.04490434    0.02296834
       γ̂_P     0.21983030    0.07402430    0.04490453    0.02296836
2      γ̂_ML    0.22414520    0.07432850    0.04497840    0.02297903
       γ̂_SE    0.22287770    0.07426836    0.04496492    0.02297720
       γ̂_E     0.22286580    0.07426828    0.04496492    0.02297719
       γ̂_P     0.22288340    0.07426836    0.04496482    0.02297721
Table 5: Expected values of the estimators of the location parameter δ of the Exponential distribution when δ = 0.8

γ      Method   n = 10        n = 30        n = 50        n = 100
0.5    δ̂_ML    0.84943810    0.81654290    0.80986330    0.80493010
       δ̂_SE    0.84540890    0.81520250    0.80906610    0.80453480
       δ̂_E     0.84528550    0.81519470    0.80906360    0.80453440
       δ̂_P     0.84546530    0.81520490    0.80906710    0.80453500
1.5    δ̂_ML    0.94831380    0.84963150    0.82959090    0.81479020
       δ̂_SE    0.80684980    0.77114510    0.77653750    0.78571520
       δ̂_E     0.77337370    0.74146620    0.76579320    0.78308200
       δ̂_P     0.82826040    0.77865910    0.78010240    0.78680470

Table 6: MSE values of the estimators of the location parameter δ of the Exponential distribution when δ = 0.8

γ      Method   n = 10        n = 30        n = 50        n = 100
0.5    δ̂_ML    0.00488748    0.00055560    0.00019943    0.00004816
       δ̂_SE    0.00453428    0.00051516    0.00018462    0.00004448
       δ̂_E     0.00452783    0.00051498    0.00018458    0.00004448
       δ̂_P     0.00453743    0.00051525    0.00018464    0.00004448
1.5    δ̂_ML    0.04398723    0.00500045    0.00179484    0.00043346
       δ̂_SE    0.03550759    0.00582472    0.00223016    0.00055773
       δ̂_E     0.07198691    0.01483009    0.00375661    0.00069869
       δ̂_P     0.03065065    0.00471544    0.00189898    0.00050857

Table 7: Expected values of the estimators of the location parameter δ of the Exponential distribution when δ = 2

γ      Method   n = 10        n = 30        n = 50        n = 100
0.5    δ̂_ML    2.04943400    2.01654200    2.00986100    2.00492500
       δ̂_SE    2.04767900    2.01599500    2.00954100    2.00476700
       δ̂_E     2.04766800    2.01599400    2.00954100    2.00476700
       δ̂_P     2.04768400    2.01599500    2.00954000    2.00476800
1.5    δ̂_ML    2.14831400    2.04962900    2.02959000    2.01479300
       δ̂_SE    2.04289200    2.00840900    2.00449300    2.00213900
       δ̂_E     2.00838700    2.00557600    2.00363300    2.00195300
       δ̂_P     2.05189000    2.00958700    2.00489100    2.00222900
Table 8: MSE values of the estimators of the location parameter δ of the Exponential distribution when δ = 2

γ      Method   n = 10        n = 30        n = 50        n = 100
0.5    δ̂_ML    0.00488748    0.00055560    0.00019943    0.00004816
       δ̂_SE    0.00471630    0.00053813    0.00019317    0.00004663
       δ̂_E     0.00471540    0.00053812    0.00019316    0.00004663
       δ̂_P     0.00471674    0.00053814    0.00019317    0.00004663
1.5    δ̂_ML    0.04398722    0.00500045    0.00179484    0.00043346
       δ̂_SE    0.03865999    0.00363416    0.00115354    0.00024975
       δ̂_E     0.06743077    0.00395757    0.00118017    0.00025086
       δ̂_P     0.03536728    0.00354006    0.00114343    0.00024927

References
1. Ahmad, S. P.; Bhat, B. A. Posterior Estimates of Two Parameter Exponential Distribution Using S-PLUS Software. Journal of Reliability and Statistical Studies, 2010, 3(2), 27-34.
2. Rashid, M. Z.; Akhter, A. S. Estimation Accuracy of Exponential Distribution Parameters. Pakistan Journal of Statistics and Operation Research, 2011, 7(2), 217-232.
3. He, H.; Zhou, N.; Zhang, R. On Estimation for the Pareto Distribution. Statistical Methodology, 2014, 21, 49-58.
4. Kumar, D.; Kumar, P.; Singh, S. K.; Singh, U. A New Asymmetric Loss Function: Estimation of Parameter of Exponential Distribution. Journal of Statistics Applications & Probability Letters, 2019, 6(1), 37-50. ISSN 2090-8458.
5. Li, J.; Ren, H. Estimation of One Parameter Exponential Family under a Precautionary Loss Function Based on Record Values. International Journal of Engineering and Manufacturing, 2012, 2(3), 75-81.
6. Naji, L. F.; Rasheed, H. A. Bayesian Estimation for Two Parameters of Gamma Distribution under Precautionary Loss Function. Ibn AL-Haitham Journal for Pure and Applied Sciences, 2019, 32(1), 193-202.
7. Rashidi, N.; Sanjari Farsipour, N. Bayesian Estimation of Reliability for Rayleigh Distribution under the Entropy Loss Function. Journal of Statistical Modelling: Theory and Applications, 2022, 3(1), 1-8.