Ibn Al-Haitham Jour. for Pure & Appl. Sci. 53 (1) 2022
This work is licensed under a Creative Commons Attribution 4.0 International License.

Posterior Estimates for the Parameter of the Poisson Distribution by Using Two Different Loss Functions

Jinan A. Naser Al-obedy
drjanan1964@gmail.com
Technical College of Management-Baghdad, Baghdad, IRAQ

Abstract
In this paper, Bayes estimators of the parameter of the Poisson distribution are derived under two loss functions: the squared error loss function and the exponential loss function proposed in this study. The estimators are based on different priors: two informative prior distributions, the Erlang and inverse Levy distributions, and a non-informative prior for the shape parameter of the Poisson distribution. The maximum likelihood estimator (MLE) of the Poisson parameter is also derived. A simulation study compares the accuracy of the Bayes estimates with the corresponding maximum likelihood estimate (MLE) on the basis of the root mean squared error (RMSE), for different values of the Poisson parameter and different sample sizes.

Keywords: Poisson distribution, MLE, Bayes estimation, SELF, proposed loss function.

1. Introduction
The Poisson distribution is a discrete probability distribution for counts of events that occur randomly in a given interval of time (or space). It is an appropriate model for, e.g., the number of phone calls received by a telephone operator in ten minutes, the number of flaws in a bolt of fabric, or the number of spelling errors on each page of a document. The Poisson distribution has widespread applications in almost every science, in engineering, and in medicine, so it is important to study different estimation methods for its parameter.
Journal homepage: http://jih.uobaghdad.edu.iq/index.php/j/index
Doi: 10.30526/35.1.2800
Article history: Received 7 September 2021; Accepted 5 December 2021; Published January 2022.

Many authors have investigated the effect of different loss functions and prior distributions (informative and non-informative) on Bayes estimators of the Poisson distribution. We mention some of them as follows. [1] discussed Bayes estimators for the binomial and Poisson distributions based on informative and non-informative priors. He concluded that using non-informative priors yields equal-tails posterior probability intervals close to the corresponding frequentist confidence intervals. He also pointed out that the posterior mean is larger than the MLE, which explains why the Bayesian interval is slightly shifted to the right compared to the frequentist interval. [2] examined Bayes estimators of the unknown parameter of the Poisson distribution under different priors. They derived the posterior distributions of the parameter using single priors such as the uniform, Jeffreys, and gamma distributions, and also under double priors such as the gamma-chi-square, gamma-exponential, and chi-square-exponential distributions. They used the R software to obtain the posterior estimates and explained the results of the study through numerical and graphical posterior densities of the parameters. [3] dealt with the problem of estimating the parameters of some well-known distributions such as the binomial, Poisson, normal, and exponential distributions, deriving the estimates by maximum likelihood, the method of moments, and Bayes estimation.
He derived Bayes estimators for the parameters of these distributions using Lindley's approximation based on different types of priors. [4] discussed different estimation methods for the Poisson parameter: maximum likelihood, Markov chain Monte Carlo, and the Bayes method. He derived the Bayes estimators under the squared error loss function based on a gamma prior distribution and used a simulation study to investigate the performance of the three methods. He also tested the hypothesis that the means of the Poisson parameter estimates obtained from the ML, Markov chain Monte Carlo, and Bayes methods were not different from the true parameters. [5] described several interval estimators for the Poisson mean: classical interval estimators (the Wald, score, exact, and bootstrap interval estimators) and Bayes credible estimators (the equal-tails credible interval, Jeffreys' prior credible interval, the highest posterior density (HPD) credible interval, and the relative surprise credible interval). They derived Bayes estimators based on four different priors: uniform, exponential, gamma, and chi-square. The performances of the proposed Bayes estimators were studied and compared in terms of coverage probabilities and coverage lengths in a simulation study, and the methodology was also illustrated on a real data set. [6] derived the Bayes posterior estimator of the parameter of the Poisson distribution under the squared error and Stein's loss functions, obtained the empirical Bayes estimators of the parameter based on a gamma prior distribution, and investigated the behavior of the estimators through simulation.
[7] discussed the E-Bayesian and empirical E-Bayesian estimates for the parameter of the Poisson distribution. He also derived the posterior risk of the E-Bayesian and empirical E-Bayesian estimates based on the squared error loss function, investigated the behavior of the different estimators by Monte Carlo simulation, and applied the E-Bayesian estimates and posterior risks to a real data set. [8] used different estimation methods for the Poisson parameter: maximum likelihood, empirical Bayes, and Bayes estimation. She derived the posterior distribution of the Poisson parameter under the squared error and quadratic loss functions based on a gamma prior distribution, used simulation to obtain the point estimates, confidence intervals, and mean square error (MSE), and applied the estimation methods to a real data set.

Our aim in this study is to examine the effects of the squared error loss function and of the exponential loss function proposed in this study, based on different priors (the Erlang and inverse Levy priors and a non-informative prior), on Bayes estimators of the Poisson parameter. We compare the accuracy of the Bayes estimators with that of the corresponding maximum likelihood estimator (MLE) of the Poisson distribution on the basis of the root mean squared error (RMSE).

2. Poisson Distribution
Assume that $(t_1, t_2, \dots, t_n)$ are independent and identically distributed (iid) random variables from the Poisson distribution with the following probability mass function [7, 8]:

$$p(t;\theta) = \frac{e^{-\theta}\,\theta^{t}}{t!}, \quad t = 0, 1, 2, \dots, \quad \theta > 0 \qquad (1)$$

with shape parameter $\theta > 0$. The cumulative distribution function has no closed form, and mean = variance = $\theta$.
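As a quick numerical check on the pmf in Eq. (1), the following sketch evaluates $p(t;\theta)$ and confirms that the probabilities sum to one and that mean = variance = $\theta$ (Python rather than the MATLAB used later in the paper; the value $\theta = 3$ and the truncation point 60 are arbitrary choices for illustration):

```python
import math

def poisson_pmf(t: int, theta: float) -> float:
    """p(t; theta) = e^(-theta) * theta^t / t!  for t = 0, 1, 2, ..."""
    return math.exp(-theta) * theta**t / math.factorial(t)

theta = 3.0
# Truncate the infinite support at a point where the remaining tail mass is negligible.
ts = range(0, 60)
probs = [poisson_pmf(t, theta) for t in ts]

total = sum(probs)                                          # ~1.0
mean = sum(t * p for t, p in zip(ts, probs))                # ~theta
var = sum((t - mean) ** 2 * p for t, p in zip(ts, probs))   # ~theta
print(total, mean, var)
```

Both the mean and the variance come out numerically equal to $\theta$, as stated above.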
2.1 Maximum Likelihood Estimation (MLE)
The likelihood function for a sample from the Poisson distribution defined by Eq. (1) is [7, 8]:

$$L(\theta;\, t_1, \dots, t_n) = \frac{e^{-n\theta}\,\theta^{\sum_{i=1}^{n} t_i}}{\prod_{i=1}^{n} t_i!} \qquad (2)$$

Let $\ell = \ln L$. Setting the first partial derivative of $\ell$ with respect to $\theta$ to zero,

$$\frac{\partial \ell}{\partial \theta} = -n + \frac{\sum_{i=1}^{n} t_i}{\theta} = 0,$$

the maximum likelihood estimator (MLE) of $\theta$ is

$$\hat\theta_{MLE} = \frac{\sum_{i=1}^{n} t_i}{n} = \bar t \qquad (3)$$

2.2 Bayesian Estimation
We derive the posterior distribution of $\theta$ under different priors, namely the informative Erlang and inverse Levy priors and a non-informative prior, and then obtain the Bayes estimators under the squared error loss function and under the proposed loss function for each prior.

2.2.1 Posterior Distribution
(a) Posterior distribution under the Erlang prior
Assume the prior for the unknown parameter $\theta$ is the Erlang distribution with hyperparameter $\delta$ [9, 10]:

$$k_1(\theta) = \delta^{2}\,\theta\,\exp(-\delta\theta), \quad \theta, \delta > 0 \qquad (4)$$

Then the posterior distribution of $\theta$ given the data $t$ is

$$h(\theta \mid t) = \frac{L(\theta)\,k(\theta)}{\int_0^\infty L(\theta)\,k(\theta)\,d\theta} \qquad (5)$$

Substituting Eq. (2) and Eq. (4) into Eq. (5) yields the posterior probability density function of the shape parameter $\theta$:

$$h_1(\theta \mid t) = \frac{\theta^{\sum_{i=1}^{n} t_i + 1}\,\exp(-(\delta+n)\theta)}{\int_0^\infty \theta^{\sum_{i=1}^{n} t_i + 1}\,\exp(-(\delta+n)\theta)\,d\theta} \qquad (6)$$

Writing $\sum_{i=1}^{n} t_i + 1 = (\sum_{i=1}^{n} t_i + 2) - 1$ and multiplying the integral in Eq. (6) by $\frac{(\delta+n)^{\sum t_i + 2}}{\Gamma(\sum t_i + 2)} \cdot \frac{\Gamma(\sum t_i + 2)}{(\delta+n)^{\sum t_i + 2}}$, where $\Gamma(\cdot)$ is the gamma function, after some simplification we obtain

$$h_1(\theta \mid t) = \frac{1}{A(t,\theta)}\,\frac{(\delta+n)^{\sum t_i + 2}}{\Gamma(\sum t_i + 2)}\,\theta^{(\sum t_i + 2)-1}\,e^{-(\delta+n)\theta} \qquad (7)$$

where $A(t,\theta) = \int_0^\infty \frac{(\delta+n)^{\sum t_i + 2}}{\Gamma(\sum t_i + 2)}\,\theta^{(\sum t_i + 2)-1}\,e^{-(\delta+n)\theta}\,d\theta = 1$, being the integral of the pdf of a gamma distribution [11]. Then the posterior distribution of $\theta$ is the gamma distribution

$$h_1(\theta \mid t) = \frac{(\delta+n)^{\sum t_i + 2}}{\Gamma(\sum t_i + 2)}\,\theta^{(\sum t_i + 2)-1}\,e^{-(\delta+n)\theta}, \quad \theta > 0,\ \delta, n > 0 \qquad (8)$$

i.e. $(\theta \mid t) \sim \mathrm{gamma}\!\left(\sum t_i + 2,\ \delta+n\right)$, with posterior mean $E(\theta \mid t) = \frac{\sum t_i + 2}{\delta+n}$ and posterior variance $\mathrm{var}(\theta \mid t) = \frac{\sum t_i + 2}{(\delta+n)^2}$.

(b) Posterior distribution under the inverse Levy prior
Assume the prior for the unknown parameter $\theta$ is the inverse Levy distribution with hyperparameter $v$ [9, 12]:

$$k_2(\theta) = \sqrt{\frac{v}{2\pi}}\,\theta^{-1/2}\,\exp\!\left(-\frac{v}{2}\theta\right), \quad \theta, v > 0 \qquad (9)$$

Substituting Eq. (2) and Eq. (9) into Eq. (5) yields

$$h_2(\theta \mid t) = \frac{\theta^{\sum t_i - 1/2}\,\exp(-(0.5v+n)\theta)}{\int_0^\infty \theta^{\sum t_i - 1/2}\,\exp(-(0.5v+n)\theta)\,d\theta} \qquad (10)$$

Writing $\sum t_i - \tfrac12 = (\sum t_i + 0.5) - 1$ and multiplying the integral in Eq. (10) by $\frac{(0.5v+n)^{\sum t_i + 0.5}}{\Gamma(\sum t_i + 0.5)} \cdot \frac{\Gamma(\sum t_i + 0.5)}{(0.5v+n)^{\sum t_i + 0.5}}$, where $\Gamma(\cdot)$ is the gamma function, after some simplification we obtain

$$h_2(\theta \mid t) = \frac{1}{A_1(t,\theta)}\,\frac{(0.5v+n)^{\sum t_i + 0.5}}{\Gamma(\sum t_i + 0.5)}\,\theta^{(\sum t_i + 0.5)-1}\,e^{-(0.5v+n)\theta} \qquad (11)$$

where $A_1(t,\theta) = \int_0^\infty \frac{(0.5v+n)^{\sum t_i + 0.5}}{\Gamma(\sum t_i + 0.5)}\,\theta^{(\sum t_i + 0.5)-1}\,e^{-(0.5v+n)\theta}\,d\theta = 1$, being the integral of the pdf of a gamma distribution [11]. Then the posterior distribution of $\theta$ is the gamma distribution

$$h_2(\theta \mid t) = \frac{(0.5v+n)^{\sum t_i + 0.5}}{\Gamma(\sum t_i + 0.5)}\,\theta^{(\sum t_i + 0.5)-1}\,e^{-(0.5v+n)\theta}, \quad \theta > 0,\ v, n > 0 \qquad (12)$$

i.e. $(\theta \mid t) \sim \mathrm{gamma}\!\left(\sum t_i + 0.5,\ 0.5v+n\right)$, with posterior mean $E(\theta \mid t) = \frac{\sum t_i + 0.5}{0.5v+n}$ and posterior variance $\mathrm{var}(\theta \mid t) = \frac{\sum t_i + 0.5}{(0.5v+n)^2}$.

(c) Posterior distribution under a non-informative prior
Assume the prior for the unknown parameter $\theta$ is non-informative with hyperparameter $c$ [9]:

$$k_3(\theta) = \frac{1}{\theta^{c}}, \quad \theta, c > 0 \qquad (13)$$

Substituting Eq. (2) and Eq. (13) into Eq. (5) yields

$$h_3(\theta \mid t) = \frac{\theta^{\sum t_i - c}\,\exp(-n\theta)}{\int_0^\infty \theta^{\sum t_i - c}\,\exp(-n\theta)\,d\theta} \qquad (14)$$

Writing $\sum t_i - c = (\sum t_i - c + 1) - 1$ and multiplying the integral in Eq. (14) by $\frac{n^{\sum t_i - c + 1}}{\Gamma(\sum t_i - c + 1)} \cdot \frac{\Gamma(\sum t_i - c + 1)}{n^{\sum t_i - c + 1}}$, where $\Gamma(\cdot)$ is the gamma function, after some simplification we obtain

$$h_3(\theta \mid t) = \frac{1}{A_2(t,\theta)}\,\frac{n^{\sum t_i - c + 1}}{\Gamma(\sum t_i - c + 1)}\,\theta^{(\sum t_i - c + 1)-1}\,e^{-n\theta} \qquad (15)$$

where $A_2(t,\theta) = \int_0^\infty \frac{n^{\sum t_i - c + 1}}{\Gamma(\sum t_i - c + 1)}\,\theta^{(\sum t_i - c + 1)-1}\,e^{-n\theta}\,d\theta = 1$, being the integral of the pdf of a gamma distribution [11]. Then the posterior distribution of $\theta$ is the gamma distribution

$$h_3(\theta \mid t) = \frac{n^{\sum t_i - c + 1}}{\Gamma(\sum t_i - c + 1)}\,\theta^{(\sum t_i - c + 1)-1}\,e^{-n\theta}, \quad \theta > 0,\ c \ge 1,\ n > 0 \qquad (16)$$

i.e. $(\theta \mid t) \sim \mathrm{gamma}\!\left(\sum t_i - c + 1,\ n\right)$, with posterior mean $E(\theta \mid t) = \frac{\sum t_i - c + 1}{n}$ and posterior variance $\mathrm{var}(\theta \mid t) = \frac{\sum t_i - c + 1}{n^2}$, for $c \ge 1$.
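All three posteriors derived above are gamma distributions differing only in their shape and rate parameters, so they can be computed with a single helper. A minimal Python sketch (the function name `posterior_params` and the sample counts are illustrative, not from the paper):

```python
def posterior_params(data, prior, hyper):
    """Return (shape, rate) of the gamma posterior for theta under each prior.

    prior: 'erlang' (hyper = delta), 'inv_levy' (hyper = v),
           or 'noninformative' (hyper = c).
    """
    n, s = len(data), sum(data)
    if prior == "erlang":            # Eq. (8):  gamma(sum t + 2, delta + n)
        return s + 2, hyper + n
    if prior == "inv_levy":          # Eq. (12): gamma(sum t + 0.5, 0.5 v + n)
        return s + 0.5, 0.5 * hyper + n
    if prior == "noninformative":    # Eq. (16): gamma(sum t - c + 1, n)
        return s - hyper + 1, n
    raise ValueError(prior)

data = [2, 4, 3, 1, 5]               # hypothetical Poisson counts, sum = 15, n = 5
for prior, hyper in [("erlang", 2), ("inv_levy", 1), ("noninformative", 2)]:
    shape, rate = posterior_params(data, prior, hyper)
    print(prior, shape, rate, shape / rate)   # posterior mean = shape / rate
```

The printed posterior means are exactly the closed-form expressions stated after Eqs. (8), (12), and (16).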
2.2.2 Bayes Estimation under the Squared Error Loss Function
We derive the Bayes estimator under the squared error loss function $L_1(\hat\theta, \theta) = (\hat\theta - \theta)^2$, assuming the informative priors (Erlang and inverse Levy) and the non-informative prior. The risk function is

$$R_1(\hat\theta, \theta) = E[L_1(\hat\theta, \theta)] = \hat\theta^{2} - 2\hat\theta\,E(\theta \mid t) + E(\theta^{2} \mid t).$$

The value of $\hat\theta$ that minimizes this risk satisfies $\partial R_1(\hat\theta,\theta)/\partial \hat\theta = 0$, which gives the Bayes estimator of $\theta$:

$$\hat\theta = E(\theta \mid t) = \int_0^\infty \theta\, h(\theta \mid t)\, d\theta \qquad (17)$$

i.e., $\hat\theta$ is the posterior mean, for each of the priors (Erlang, inverse Levy, and non-informative) derived in Section 2.2.1.

2.2.3 Bayes Estimation under the Proposed Exponential Loss Function
We derive the Bayes estimator under the proposed exponential loss function $L_2(\hat\theta, \theta) = (\exp(\hat\theta) - \exp(\theta))^2$, assuming the same priors. The risk function is

$$R_2(\hat\theta, \theta) = E[L_2(\hat\theta, \theta)] = \exp(2\hat\theta) - 2\exp(\hat\theta)\,E(\exp(\theta) \mid t) + E(\exp(2\theta) \mid t).$$

The value of $\hat\theta$ that minimizes this risk satisfies $\partial R_2(\hat\theta,\theta)/\partial \hat\theta = 0$, which gives $\exp(\hat\theta) = E(\exp(\theta) \mid t)$, so the Bayes estimator is

$$\hat\theta = \ln E(\exp(\theta) \mid t) = \ln\!\left(\int_0^\infty \exp(\theta)\, h(\theta \mid t)\, d\theta\right) \qquad (18)$$

(a) The Bayes estimator under the Erlang prior is derived by substituting $h_1(\theta \mid t)$ from Eq. (8) into Eq. (18):

$$\hat\theta = \ln\!\left(\int_0^\infty \frac{(\delta+n)^{\sum t_i + 2}}{\Gamma(\sum t_i + 2)}\,\theta^{(\sum t_i + 2)-1}\,e^{-(\delta+n-1)\theta}\, d\theta\right) \qquad (19)$$

Multiplying the integral in Eq. (19) by $\frac{(\delta+n-1)^{\sum t_i + 2}}{(\delta+n-1)^{\sum t_i + 2}}$ yields $\hat\theta = \ln\!\left(\frac{(\delta+n)^{\sum t_i + 2}}{(\delta+n-1)^{\sum t_i + 2}}\, B(t,\theta)\right)$, where $B(t,\theta) = \int_0^\infty \frac{(\delta+n-1)^{\sum t_i + 2}}{\Gamma(\sum t_i + 2)}\,\theta^{(\sum t_i + 2)-1}\,e^{-(\delta+n-1)\theta}\, d\theta = 1$ is the integral of the pdf of a gamma distribution [11], i.e.

$$\hat\theta = \left(\sum_{i=1}^{n} t_i + 2\right)\ln\!\left(\frac{\delta+n}{\delta+n-1}\right) \qquad (20)$$

(b) The Bayes estimator under the inverse Levy prior follows by the same steps with $h_2(\theta \mid t)$ from Eq. (12):

$$\hat\theta = \ln\!\left(\int_0^\infty \frac{(0.5v+n)^{\sum t_i + 0.5}}{\Gamma(\sum t_i + 0.5)}\,\theta^{(\sum t_i + 0.5)-1}\,e^{-(0.5v+n-1)\theta}\, d\theta\right) \qquad (21)$$

Multiplying the integral in Eq. (21) by $\frac{(0.5v+n-1)^{\sum t_i + 0.5}}{(0.5v+n-1)^{\sum t_i + 0.5}}$ and using the fact that the resulting gamma-pdf integral $B_1(t,\theta)$ equals 1 [11], we obtain

$$\hat\theta = \left(\sum_{i=1}^{n} t_i + 0.5\right)\ln\!\left(\frac{0.5v+n}{0.5v+n-1}\right) \qquad (22)$$

(c) The Bayes estimator under the non-informative prior follows by the same steps with $h_3(\theta \mid t)$ from Eq. (16):

$$\hat\theta = \ln\!\left(\int_0^\infty \frac{n^{\sum t_i - c + 1}}{\Gamma(\sum t_i - c + 1)}\,\theta^{(\sum t_i - c + 1)-1}\,e^{-(n-1)\theta}\, d\theta\right) \qquad (23)$$

and, with the gamma-pdf integral $B_2(t,\theta) = 1$ [11],

$$\hat\theta = \left(\sum_{i=1}^{n} t_i - c + 1\right)\ln\!\left(\frac{n}{n-1}\right) \qquad (24)$$

3. Simulation Study
We perform a simulation study to compare the accuracy of the different estimates of the parameter $\theta$ of the Poisson distribution. The experiments are repeated $r = 3000$ times with different sample sizes ($n$ = 25, 50, and 100), assuming different values of the true parameter $\theta$ and of the hyperparameters ($\delta$, $v$, $c$) in the following combinations:
• Data are generated from the Poisson distribution for the assumed true parameter values $\theta = 1, 3, 9$.
• The hyperparameter of the Erlang prior is chosen arbitrarily as $\delta = 2, 3, 5$.
• The hyperparameter of the inverse Levy prior is chosen arbitrarily as $v = 1, 3, 5$.
• The hyperparameter of the non-informative prior is chosen arbitrarily as $c = 2, 3, 5$.
To compare the estimates, we rely on the root mean square error criterion: the estimates with the smallest RMSE are the best.
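The simulation design above can be sketched as follows (a minimal Python sketch rather than the MATLAB program used in the paper; `poisson_draw` is a simple inverse-CDF sampler added here only to keep the sketch self-contained, and only the Erlang-prior estimators are shown for brevity):

```python
import math
import random

random.seed(1)

def poisson_draw(theta):
    """Simple inverse-CDF Poisson sampler (adequate for moderate theta)."""
    u, t = random.random(), 0
    p = math.exp(-theta)       # P(T = 0)
    c = p
    while u > c:
        t += 1
        p *= theta / t         # P(T = t) from P(T = t - 1)
        c += p
    return t

def rmse(estimates, theta):
    return math.sqrt(sum((e - theta) ** 2 for e in estimates) / len(estimates))

def simulate(theta, n, delta, reps=3000):
    """RMSE of the MLE and of the Erlang-prior Bayes estimators under both losses."""
    mle, self_est, exp_est = [], [], []
    for _ in range(reps):
        s = sum(poisson_draw(theta) for _ in range(n))
        mle.append(s / n)                              # Eq. (3)
        shape, rate = s + 2, delta + n                 # gamma posterior, Eq. (8)
        self_est.append(shape / rate)                  # SELF: posterior mean, Eq. (17)
        exp_est.append(shape * math.log(rate / (rate - 1)))  # exponential loss, Eq. (20)
    return rmse(mle, theta), rmse(self_est, theta), rmse(exp_est, theta)

print(simulate(theta=3, n=25, delta=2))   # (rmse_mle, rmse_self, rmse_exp)
```

The same loop, extended over the inverse Levy and non-informative posteriors and over all ($\theta$, $n$, hyperparameter) combinations, reproduces the structure of the tables that follow.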
$$RMSE = \sqrt{\frac{1}{3000}\sum_{r=1}^{3000}\left(\hat\theta_{(r)} - \theta\right)^2} \qquad (25)$$

We obtain the results using the MATLAB R2018a program. The results are summarized and tabulated in the following tables for each estimator and all sample sizes.

Table 1. Estimated values ($\hat\theta$) and RMSEs for the estimators of the Poisson distribution under the SELF, based on different priors.

| θ | n | | MLE | Erlang δ=2 | Erlang δ=3 | Erlang δ=5 | inv. Levy v=1 | inv. Levy v=3 | inv. Levy v=5 |
|---|-----|------|---------|---------|---------|---------|---------|---------|---------|
| 1 | 25 | Est. | 1.0004 | 1.0004 | 0.96467 | 0.90036 | 1.0004 | 0.96267 | 0.92766 |
| | | RMSE | 0.1973 | 0.18268 | 0.17967 | 0.19225 | 0.19343 | 0.18984 | 0.1934 |
| 1 | 50 | Est. | 0.99681 | 0.99694 | 0.97813 | 0.94256 | 0.99684 | 0.97749 | 0.95887 |
| | | RMSE | 0.1428 | 0.1373 | 0.13645 | 0.14193 | 0.14138 | 0.14042 | 0.14205 |
| 1 | 100 | Est. | 0.9994 | 0.9994 | 0.98971 | 0.97086 | 0.9994 | 0.98956 | 0.9799 |
| | | RMSE | 0.0983 | 0.0964 | 0.09601 | 0.09807 | 0.0978 | 0.09743 | 0.09801 |
| 3 | 25 | Est. | 2.9999 | 2.8518 | 2.7499 | 2.5666 | 2.9607 | 2.849 | 2.7454 |
| | | RMSE | 0.33873 | 0.3469 | 0.39243 | 0.51721 | 0.33441 | 0.35344 | 0.39956 |
| 3 | 50 | Est. | 3.003 | 2.926 | 2.8708 | 2.7664 | 2.9832 | 2.9253 | 2.8695 |
| | | RMSE | 0.24378 | 0.24579 | 0.26378 | 0.322 | 0.24193 | 0.24818 | 0.26629 |
| 3 | 100 | Est. | 3.0008 | 2.9615 | 2.9328 | 2.8769 | 2.9908 | 2.9613 | 2.9325 |
| | | RMSE | 0.17482 | 0.17566 | 0.18256 | 0.20705 | 0.1742 | 0.17652 | 0.18345 |
| 9 | 25 | Est. | 9.0033 | 8.4104 | 8.1101 | 7.5694 | 8.8464 | 8.5125 | 8.203 |
| | | RMSE | 0.58613 | 0.80132 | 1.0324 | 1.5117 | 0.59482 | 0.73714 | 0.95873 |
| 9 | 50 | Est. | 9.0036 | 8.6957 | 8.5317 | 8.2214 | 8.9243 | 8.751 | 8.5843 |
| | | RMSE | 0.4215 | 0.50678 | 0.61437 | 0.86776 | 0.42412 | 0.479 | 0.57785 |
| 9 | 100 | Est. | 8.9985 | 8.8417 | 8.7559 | 8.5891 | 8.9587 | 8.8705 | 8.7839 |
| | | RMSE | 0.30358 | 0.3371 | 0.38272 | 0.50244 | 0.30487 | 0.32593 | 0.36661 |

Note: The shaded cells represent the smallest value of RMSE.

Continue Table 1 (non-informative prior).

| θ | n | | MLE | c=2 | c=3 | c=5 |
|---|-----|------|---------|---------|---------|---------|
| 1 | 25 | Est. | 1.0004 | 0.96043 | 0.92043 | 0.84043 |
| | | RMSE | 0.1973 | 0.20123 | 0.21274 | 0.25375 |
| 1 | 50 | Est. | 0.99681 | 0.97681 | 0.95681 | 0.91681 |
| | | RMSE | 0.1428 | 0.14463 | 0.14915 | 0.16523 |
| 1 | 100 | Est. | 0.9994 | 0.9894 | 0.9794 | 0.9594 |
| | | RMSE | 0.0983 | 0.09889 | 0.10046 | 0.10638 |
| 3 | 25 | Est. | 2.9999 | 2.9599 | 2.9199 | 2.8399 |
| | | RMSE | 0.33873 | 0.3411 | 0.34807 | 0.37465 |
| 3 | 50 | Est. | 3.003 | 2.983 | 2.963 | 2.923 |
| | | RMSE | 0.24378 | 0.24435 | 0.24655 | 0.25562 |
| 3 | 100 | Est. | 3.0008 | 2.9908 | 2.9808 | 2.9608 |
| | | RMSE | 0.17482 | 0.17507 | 0.17588 | 0.17917 |
| 9 | 25 | Est. | 9.0033 | 8.9633 | 8.9233 | 8.8433 |
| | | RMSE | 0.58613 | 0.58727 | 0.59112 | 0.60672 |
| 9 | 50 | Est. | 9.0036 | 8.9836 | 8.9636 | 8.9236 |
| | | RMSE | 0.4215 | 0.42181 | 0.42306 | 0.42836 |
| 9 | 100 | Est. | 8.9985 | 8.9885 | 8.9785 | 8.9585 |
| | | RMSE | 0.30358 | 0.30379 | 0.30433 | 0.30639 |

Note: The shaded cells represent the smallest value of RMSE.

Table 2. Estimated values ($\hat\theta$) and RMSEs for the estimators of the Poisson distribution under the proposed exponential loss function, based on different priors.

| θ | n | | MLE | Erlang δ=2 | Erlang δ=3 | Erlang δ=5 | inv. Levy v=1 | inv. Levy v=3 | inv. Levy v=5 |
|---|-----|------|---------|---------|---------|---------|---------|---------|---------|
| 1 | 25 | Est. | 1.0004 | 1.0194 | 0.98231 | 0.9157 | 1.0206 | 0.9813 | 0.94495 |
| | | RMSE | 0.1973 | 0.18716 | 0.18025 | 0.18726 | 0.19839 | 0.19065 | 0.19082 |
| 1 | 50 | Est. | 0.99681 | 1.0066 | 0.98747 | 0.95123 | 1.0068 | 0.9871 | 0.96812 |
| | | RMSE | 0.1428 | 0.13877 | 0.13654 | 0.13976 | 0.14293 | 0.14056 | 0.14093 |
| 1 | 100 | Est. | 0.9994 | 1.0043 | 0.99454 | 0.97551 | 1.0044 | 0.99446 | 0.98471 |
| | | RMSE | 0.0983 | 0.09697 | 0.09608 | 0.09722 | 0.09842 | 0.09751 | 0.09760 |
| 3 | 25 | Est. | 2.9999 | 2.9059 | 2.8002 | 2.6104 | 3.0203 | 2.9041 | 2.7966 |
| | | RMSE | 0.33873 | 0.33315 | 0.36708 | 0.48398 | 0.33939 | 0.33956 | 0.37388 |
| 3 | 50 | Est. | 3.003 | 2.9545 | 2.8982 | 2.7918 | 3.0131 | 2.954 | 2.8972 |
| | | RMSE | 0.24378 | 0.241 | 0.25349 | 0.30552 | 0.24412 | 0.24336 | 0.25593 |
| 3 | 100 | Est. | 3.0008 | 2.9761 | 2.9471 | 2.8907 | 3.0058 | 2.976 | 2.9469 |
| | | RMSE | 0.17482 | 0.17388 | 0.17857 | 0.19983 | 0.17492 | 0.17474 | 0.17945 |
| 9 | 25 | Est. | 9.0033 | 8.5701 | 8.2584 | 7.6984 | 9.0245 | 8.6773 | 8.3558 |
| | | RMSE | 0.58613 | 0.70043 | 0.91318 | 1.3931 | 0.58671 | 0.64949 | 0.84234 |
| 9 | 50 | Est. | 9.0036 | 8.7804 | 8.6132 | 8.2971 | 9.0139 | 8.8371 | 8.6672 |
| | | RMSE | 0.4215 | 0.46441 | 0.55747 | 0.80227 | 0.42173 | 0.44418 | 0.52445 |
| 9 | 100 | Est. | 8.9985 | 8.8853 | 8.7986 | 8.6302 | 9.0036 | 8.9145 | 8.8271 |
| | | RMSE | 0.30358 | 0.32032 | 0.35814 | 0.47023 | 0.3036 | 0.3125 | 0.34422 |

Note: The shaded cells represent the smallest value of RMSE.

Continue Table 2 (non-informative prior).

| θ | n | | MLE | c=2 | c=3 | c=5 |
|---|-----|------|---------|---------|---------|---------|
| 1 | 25 | Est. | 1.0004 | 0.98016 | 0.93934 | 0.8577 |
| | | RMSE | 0.1973 | 0.20233 | 0.21029 | 0.24656 |
| 1 | 50 | Est. | 0.99681 | 0.98671 | 0.96651 | 0.92611 |
| | | RMSE | 0.1428 | 0.14482 | 0.14805 | 0.16204 |
| 1 | 100 | Est. | 0.9994 | 0.99438 | 0.98433 | 0.96423 |
| | | RMSE | 0.0983 | 0.09898 | 0.10006 | 0.1051 |
| 3 | 25 | Est. | 2.9999 | 3.0208 | 2.9799 | 2.8983 |
| | | RMSE | 0.33873 | 0.34632 | 0.34628 | 0.36035 |
| 3 | 50 | Est. | 3.003 | 3.0133 | 2.9931 | 2.9526 |
| | | RMSE | 0.24378 | 0.24658 | 0.24632 | 0.25074 |
| 3 | 100 | Est. | 3.0008 | 3.0058 | 2.9958 | 2.9757 |
| | | RMSE | 0.17482 | 0.1758 | 0.17575 | 0.17738 |
| 9 | 25 | Est. | 9.0033 | 9.1475 | 9.1067 | 9.025 |
| | | RMSE | 0.58613 | 0.61608 | 0.6076 | 0.59869 |
| 9 | 50 | Est. | 9.0036 | 9.0746 | 9.0544 | 9.014 |
| | | RMSE | 0.4215 | 0.43225 | 0.42922 | 0.42599 |
| 9 | 100 | Est. | 8.9985 | 9.0338 | 9.0237 | 9.0036 |
| | | RMSE | 0.30358 | 0.30696 | 0.30602 | 0.30512 |

Note: The shaded cells represent the smallest value of RMSE.

4. Discussion
From the results listed in Table 1 and Table 2, the best Bayes estimates under the squared error loss function (SELF), according to the smallest RMSE compared with the estimates based on the other hyperparameter values for the same priors, are obtained with:
• the Erlang prior with δ = 3, the inverse Levy prior with v = 3, and the non-informative prior with c = 2, for all sample sizes (n) when the true value is θ = 1;
• the Erlang prior with δ = 2, the inverse Levy prior with v = 1, and the non-informative prior with c = 2, for all n when θ = 3, 9.
It is observed that the Bayes estimators under the SELF perform best, with smaller RMSE than the MLE, for:
• the Erlang prior with δ = 3 and the inverse Levy prior with v = 3, for all sample sizes (n) when θ = 1;
• the inverse Levy prior with v = 1, for all n when θ = 3.
The best Bayes estimates under the proposed exponential loss function, according to the smallest RMSE compared with the estimates based on the other hyperparameter values for the same priors, are obtained with:
• the Erlang prior with δ = 3, the inverse Levy prior with v = 3, and the non-informative prior with c = 2, for all n when θ = 1;
• the Erlang prior with δ = 2, for all n when θ = 3, 9;
• the inverse Levy prior with v = 1, for n = 25 when θ = 3 and for all n when θ = 9;
• the non-informative prior with c = 3 and c = 5, for all n when θ = 3 and θ = 9, respectively.
It is observed that the Bayes estimators under the proposed exponential loss function perform best, with smaller RMSE than the MLE, for:
• the Erlang prior with δ = 3 and the inverse Levy prior with v = 3, for all n when θ = 1;
• the Erlang prior with δ = 2, for all n when θ = 3;
• the inverse Levy prior with v = 1 for n = 25 and with v = 3 for n = 50, 100, when θ = 3.

5. Conclusion
In this paper we have presented the Bayesian and maximum likelihood estimates of the parameter of the Poisson distribution, with the comparison conducted on the basis of RMSE. The Bayes estimators are obtained under the squared error loss function and the proposed exponential loss function, and the MLEs are also obtained. Our conclusions about the results are stated in the following points.
The Bayes estimates under the proposed exponential loss function usually have smaller estimated RMSEs than the estimates under the squared error loss function based on the same prior with the same hyperparameter values, for all sample sizes.
From Table 1 and Table 2, we can see this for:
1. the Erlang prior with δ = 2, 3 when θ = 3, 9, and with δ = 5 when θ = 1, 3, 9, for all sample sizes;
2. the inverse Levy prior with v = 1 when θ = 9, with v = 3 when θ = 3, 9, and with v = 5 when θ = 1, 3, 9;
3. the non-informative prior with c = 3 when θ = 1, 3, and with c = 5 when θ = 1, 3, 9.
The Bayes estimates under the proposed exponential loss function also have smaller estimated RMSEs than the maximum likelihood estimates (MLE), for the same value of θ and the same sample sizes. From Table 1 and Table 2, we can see this for:
1. the Erlang prior with δ = 2 when θ = 3, and with δ = 3, 5 when θ = 1;
2. the inverse Levy prior with v = 3, 5 when the true value is θ = 1.

References
1. Larsson, R. How informative is a non-informative prior? Working Paper, Department of Statistics, Uppsala University. 2011, 2: 1-8.
2. Sultan, R.; Ahmad, S. P. Posterior estimates of Poisson distribution using R software. Journal of Modern Applied Statistical Methods. 2012, 11, 2: 530-535.
3. Sahoo, S. A Study on Bayesian Estimation of Parameters of Some Well Known Distribution Functions. Thesis submitted in partial fulfillment of the requirements for the degree of Master of Science, 2014.
4. Araveeporn, A. Parameter estimation of Poisson distribution by using maximum likelihood, Markov chain Monte Carlo, and Bayes method. Thammasat International Journal of Science and Technology. 2014, 19, 3: 1-14.
5. Nadarajah, S.; Alizadeh, M.; Bagheri, S. F. Bayesian and non-Bayesian interval estimators for the Poisson mean. Revstat - Statistical Journal. 2015, 13, 3: 245-262.
6. Zhang, Y.; Ze-Yu, W.; Zheng-Min, D.; Wen, M. The empirical Bayes estimators of the parameter of the Poisson distribution with a conjugate gamma prior under Stein's loss function. Journal of Statistical Computation and Simulation. 2019, 89, 16: 3061-3074.
7. Mohammed, H. S. Empirical E-Bayesian estimation for the parameter of Poisson distribution. AIMS Mathematics. 2021, 6, 8: 8205-8220.
8. Supharakonsakun, Y. Bayesian approaches for Poisson distribution parameter estimation. Emerging Science Journal. 2021, 5, 5. ISSN 2610-9182.
9. Bickel, P. J.; Doksum, K. A. Mathematical Statistics: Basic Ideas and Selected Topics. Holden-Day, Inc., San Francisco. 1977.
10. Aijaz, A.; Qurat ul A., S.; Ahmad, A.; Tripathi, R. An extension of Erlang distribution with properties having applications in engineering and medical science. Int. J. Open Problems Compt. Math. 2021, 14, 1. Print ISSN 1998-6262.
11. Carolynne, A. K. Gamma and related distributions. Thesis submitted to the School of Mathematics, University of Nairobi, in partial fulfillment of the requirements for the degree of Master of Science in Statistics. 2013.
12. Al-Noor, N. H.; Alwan, S. S. Non-Bayes, Bayes and empirical Bayes estimators for the shape parameter of Lomax distribution. Mathematical Theory and Modeling. 2015, 5, 2. ISSN 2224-5804.