http://www.iapress.org/index.php/soic/issue/feedStatistics, Optimization & Information Computing2022-09-30T23:26:51+08:00David G. Yudavid.iapress@gmail.comOpen Journal Systems<p><em><strong>Statistics, Optimization and Information Computing</strong></em> (SOIC) is an international refereed journal dedicated to the latest advancements in statistics, optimization and applications in information sciences. Topics of interest include (but are not limited to): </p> <p>Statistical theory and applications</p> <ul> <li class="show">Statistical computing, Simulation and Monte Carlo methods, Bootstrap, Resampling methods, Spatial Statistics, Survival Analysis, Nonparametric and semiparametric methods, Asymptotics, Bayesian inference and Bayesian optimization</li> <li class="show">Stochastic processes, Probability, Statistics and applications</li> <li class="show">Statistical methods and modeling in life sciences including biomedical sciences, environmental sciences and agriculture</li> <li class="show">Decision Theory, Time series analysis, High-dimensional multivariate integrals, statistical analysis in market, business, finance, insurance, economic and social science, etc</li> </ul> <p> Optimization methods and applications</p> <ul> <li class="show">Linear and nonlinear optimization</li> <li class="show">Stochastic optimization, Statistical optimization, Markov chains, etc.</li> <li class="show">Game theory, Network optimization and combinatorial optimization</li> <li class="show">Variational analysis, Convex optimization and nonsmooth optimization</li> <li class="show">Global optimization and semidefinite programming </li> <li class="show">Complementarity problems and variational inequalities</li> <li class="show"><span lang="EN-US">Optimal control: theory and applications</span></li> <li class="show">Operations research, Optimization and applications in management science and engineering</li> </ul> <p>Information computing and machine intelligence</p> <ul> <li 
class="show">Machine learning, Statistical learning, Deep learning</li> <li class="show">Artificial intelligence, Intelligence computation, Intelligent control and optimization</li> <li class="show">Data mining, Data analysis, Cluster computing, Classification</li> <li class="show">Pattern recognition, Computer vision</li> <li class="show">Compressive sensing and sparse reconstruction</li> <li class="show">Signal and image processing, Medical imaging and analysis, Inverse problems and imaging sciences</li> <li class="show">Genetic algorithms, Natural language processing, Expert systems, Robotics, Information retrieval and computing</li> <li class="show">Numerical analysis and algorithms with applications in computer science and engineering</li> </ul>http://www.iapress.org/index.php/soic/article/view/1523An Alternative Computation of the Entropy of 1D Signals Based on Geometric Properties2022-09-29T14:45:00+08:00Cristian Boninicristianbonini75@yahoo.com.arAndrea Reyarey@frba.utn.edu.arDino Oterodinootero@fibertel.com.arAriel Amadioaamadio@docentes.frgp.utn.edu.arManuel García Blesamanublesa@gmail.comWalter Legnaniwalter@frba.utn.edu.ar<p>The objective of this work is to present a novel methodology, based on the computation of a couple of geometric characteristics of the positions of the data points in a 1D signal, to propose an alternative estimation of signal entropy. The conditions to be fulfilled by the signal are minimal: it need only satisfy the requirement of the sampling theorem. This work shows some examples in which the proposed methodology can distinguish among signals that cannot be differentiated by other in-use alternatives. Additionally, an original example in which the usual ordinal-pattern algorithm for computing entropy is not applicable is presented and analyzed. 
The proposal developed in this work carries some advantages over other alternatives and constitutes a genuine advance toward computing the distribution function of the sequential points of a 1D signal, which is later used to compute the entropy of the signal.</p>2022-06-19T00:00:00+08:00Copyright (c) 2022 Statistics, Optimization & Information Computinghttp://www.iapress.org/index.php/soic/article/view/1507Non-parametric Multivariate Kernel Regression Estimation to Describe Cognitive Processes and Mental Representations2022-09-29T14:45:00+08:00Sahar Slamasahar.slama@essths.u-sousse.tnYousri Slaouiyousri.slaoui@math.univ-poitiers.frGwendoline Le Duledu-g@chu-caen.frCyril Perretcyril.perret@univ-poitiers.fr<p>In this research paper, we put forward a non-parametric multivariate recursive kernel regression estimator under missing data, using the propensity score approach, in order to describe writing word production. Our main objective is to explore the cognitive processes and mental representations mobilized when a human being prepares to write a word, following the idea developed in Perret and Olive (2019). We investigate the asymptotic properties of the proposed recursive estimator and compare them to those of the well-known Nadaraya-Watson regression estimator. We calculate the bias and the variance of the proposed estimator, which depend on the choice of some parameters, such as the stepsize and the bandwidth. We examine some data-driven procedures to select these parameters. Thus, we demonstrate that, under some optimal choices of these parameters, the MSE (Mean Squared Error) of the proposed estimator can be smaller than the one obtained by using the Nadaraya-Watson regression estimator. The elaborated estimator is then applied to the behavioral data to classify some participants into groups. 
This classification may serve as a starting point for tackling variations in written behavior.</p>2022-07-10T00:00:00+08:00Copyright (c) 2022 Statistics, Optimization & Information Computinghttp://www.iapress.org/index.php/soic/article/view/1491Estimation of Zero-Inflated Population Mean with Highly Skewed Nonzero Component: A Bootstrapping Approach2022-09-29T14:45:01+08:00Khyam Panerukpaneru@ut.eduR. Noah Padgettkpaneru@ut.eduHanfeng Chenkpaneru@ut.edu<p>This paper adopts a bootstrap procedure in the maximum pseudo-likelihood method under probability sampling designs. It estimates the mean of a population that is a mixture of excess zeros and a nonzero skewed sub-population. Simulation studies show that the bootstrap confidence intervals for the zero-inflated log-normal population consistently capture the true mean. The proposed method is applied to a real-life data set.</p>2022-06-12T00:00:00+08:00Copyright (c) 2022 Statistics, Optimization & Information Computinghttp://www.iapress.org/index.php/soic/article/view/1359E-Bayesian Estimations and Its E-MSE For Compound Rayleigh Progressive Type-II Censored Data2022-09-29T14:45:01+08:00Omid shojaeeo.shojaee@sci.ui.ac.irHassan Piriaeih.piriaei@iaub.ac.irManoochehr Babanezhadm.babanezhad@gu.ac.ir<p>Over the past decades, various methods have been proposed to estimate the unknown parameter, the survival function, and the hazard rate of a statistical distribution from Type-II censored data. They differ in how the progressive Type-II censored data of the underlying distribution are available. In this study, we estimate the parameter, the survival function, and the hazard rate of the compound Rayleigh distribution by using E-Bayesian estimation when progressive Type-II censored data are available. The resulting estimators are evaluated based on the asymmetric general entropy and the symmetric squared error loss functions. 
In addition, the E-Bayesian estimators under the different loss functions are compared through a real data analysis and Monte Carlo simulation studies by calculating the E-MSE of the resulting estimators.</p>2022-04-11T00:00:00+08:00Copyright (c) 2022 Statistics, Optimization & Information Computinghttp://www.iapress.org/index.php/soic/article/view/1230A New Flexible Stress-Strength Model2022-09-29T14:45:01+08:00Alimohammad Beiranvandkazemi@ikiu.ac.irRamin Kazemir.kazemi@sci.ikiu.ac.irAkram Kohansalkohansal@sci.ikiu.ac.irFarshin Hormozinejadrst.kazemi@gmail.com<p>To introduce a flexible stress-strength model, statistical inference for the stress-strength parameter $R=P(X<Y)$, when the stress $X$ and the strength $Y$ are two independent two-parameter new Weibull-Fr\'{e}chet variables, is considered under progressively Type-II censored samples. The MLE, AMLE, asymptotic confidence intervals, Bayes estimates and HPD intervals of $R$ are obtained in three different cases. Also, to compare the performance of the three different methods, we carry out Monte Carlo simulations and analyze a data set for illustrative purposes.</p>2021-12-15T00:00:00+08:00Copyright (c) 2021 Statistics, Optimization & Information Computinghttp://www.iapress.org/index.php/soic/article/view/1200Cumulative Residual Extropy for Pareto Distribution in the Presence of Outliers: Bayesian and Non-Bayesian Methods 2022-09-29T14:45:01+08:00Amal Hassandr.amalelmoslamy@gmail.comElsayed Elsherpienyahmedc55@yahoo.comRokaya Mohamedrokayaelmorsy@gmail.com<p>The extropy is considered to be a complementary dual of the well-known Shannon entropy and has wide applications in many fields. This article discusses estimating the extropy and the cumulative residual extropy of the Pareto distribution using maximum likelihood and Bayesian methods. We obtain the maximum likelihood estimators of the extropy measures in the presence of outliers. These estimators are then specialized to the homogeneous case. 
The Bayesian estimators of both extropy measures are derived based on symmetric and asymmetric loss functions. Markov chain Monte Carlo methods are used to accomplish some complex calculations. The precision of the Bayesian and maximum likelihood estimates of the extropy measures is examined through simulations. From the simulation results, we conclude that the performance of both estimation methods improves with the sample size. Also, the Bayesian estimates of the extropy and cumulative residual extropy under the linear exponential loss function are superior to the Bayesian estimates under the other loss functions in most cases. The performance of the extropy and cumulative residual extropy estimates improves with the number of outliers in almost all cases. Generally, there is great agreement between the theoretical and empirical results. Further performance comparisons are conducted through experiments with real data.</p>2022-04-21T00:00:00+08:00Copyright (c) 2022 Statistics, Optimization & Information Computinghttp://www.iapress.org/index.php/soic/article/view/1110Partial Bayes Estimation of Two Parameter Gamma Distribution Under Non-Informative Prior2022-09-29T14:45:02+08:00Proloy Banerjeeproloy.stat@gmail.comBabulal Sealbabulal_seal@yahoo.com<p>In Bayesian analysis, empirical and hierarchical methods are the two main approaches for estimating the parameter(s) involved in the prior distribution of one parameter. But in a multi-parameter model, e.g., <em>Gamma</em>(<em>α, p</em>), where both parameters are unknown, the idea of ‘Partial Bayes (PB) Estimation’ is introduced. Such a method may be used when we do not have a proper belief regarding the joint parameters of the distribution of the variable and are estimating one parameter in the presence of the others. 
Partial Bayes estimation of the scale parameter <em>p </em>is carried out by plugging in an estimate of the other parameter <em>α </em>obtained by some other classical method, in the case of the two-parameter Gamma distribution. Using a non-informative prior and computing the risk, it is found that the Partial Bayes estimator has less risk than the Bayes estimator. For this, simulation studies for some choices of shape parameter values have been done. In the case of the shape parameter, the posterior mean and posterior variance are evaluated through simulations to obtain the risk values for the estimator of <em>α </em>with known scale parameter. Finally, after fitting this distribution, two real data sets are used to illustrate the performance of the Partial Bayes estimator.</p>2021-11-29T00:00:00+08:00Copyright (c) 2021 Statistics, Optimization & Information Computinghttp://www.iapress.org/index.php/soic/article/view/1108A New Weighted Skew Normal Model2022-09-29T14:45:02+08:00Forough Naghibiforogh_naghibi@yahoo.comSayed Mohammad Reza Alavialavi_m@scu.ac.irRahim Chinipardazchinipardaz_r@scu.ac.ir<p>Weighted sampling is a useful method for constructing flexible models and analyzing data sets. In this paper, a new weighted skew normal distribution with four parameters is introduced. The proposed model is a generalized version of several distributions, such as the normal, bimodal normal, skew normal and skew bimodal normal-normal. This weighted model is form-invariant under the proposed weight function. The basic characteristics of the model are presented. A method is given to generate data from the model. The maximum likelihood estimates of the parameters are obtained and evaluated using a simulation study. The model is fitted to three real data sets. 
The advantage of the proposed model over rival distributions is shown using appropriate criteria.</p>2021-07-12T00:00:00+08:00Copyright (c) 2021 Statistics, Optimization & Information Computinghttp://www.iapress.org/index.php/soic/article/view/1246Novel Weighted G family of Probability Distributions with Properties, Modelling and Different Methods of Estimation2022-09-29T14:45:03+08:00Gorgees Shaheedgorgees.alsalamy@qu.edu.iq<p>In this work, we derive and study a new weighted G family of continuous distributions, called the new weighted generated family (NW-G). We study some basic properties, including the quantile function, asymptotics, mixture representations of the CDF and PDF, residual entropy, and order statistics. Then, we study the half-logistic distribution as a special case in more detail. Comprehensive graphical simulations are performed under some common estimation methods. Finally, two real-life data sets are analyzed to demonstrate the objectives.</p>2022-04-22T00:00:00+08:00Copyright (c) 2022 Statistics, Optimization & Information Computinghttp://www.iapress.org/index.php/soic/article/view/1030Testing the Validity of Laplace Model Against Symmetric Models, Using Transformed Data2022-09-29T14:45:03+08:00Hadi Alizadeh Noughabializadehhadi@birjand.ac.ir<p>In this paper, we first present three characterizations of the Laplace distribution and then introduce a goodness-of-fit test for the Laplace distribution against symmetric distributions, based on one of these transformations. The power of the proposed test under various alternatives is compared with that of existing tests by simulation. 
To show the behavior of the proposed test in real cases, two real examples are presented.</p>2021-08-17T00:00:00+08:00Copyright (c) 2021 Statistics, Optimization & Information Computinghttp://www.iapress.org/index.php/soic/article/view/1414Discrete Bilal Distribution in the Presence of Right-Censored Data and a Cure Fraction2022-09-29T14:45:03+08:00Bruno Caparroz Lopes de Freitasbrunoclf19@gmail.comJorge Alberto Achcarachcar@fmrp.usp.brMarcos Vinicius de Oliveira Peresmvperes1991@gmail.comEdson Zangiacomi Martinezedson@fmrp.usp.br<p>The statistical literature presents many continuous probability distributions with only one parameter, which are extensively used in the analysis of lifetime data, such as the exponential, the Lindley, and the Rayleigh distributions. Alternatively, the use of discretized versions of these distributions can provide a better fit for the data in many applications. As the novelty of this study, we present inferences for the one-parameter discrete Bilal (DB) distribution introduced by Altun et al. (2020) in the presence of right-censored data and a cure fraction. We assume standard maximum likelihood methods based on the asymptotic normality of the maximum likelihood estimators, and also a Bayesian approach based on MCMC (Markov chain Monte Carlo) simulation methods, to obtain inferences for the parameter of the DB distribution. The use of the proposed model is illustrated with three examples considering real medical lifetime data sets. From these applications, we conclude that the proposed model based on the DB distribution performs well, even with the inclusion of a cure fraction, in comparison to other existing discrete models, such as the DsFx-I, Lindley, Rayleigh, and Burr-Hatke probability distributions. Moreover, the model can be easily implemented in standard existing software, such as R. Under the Bayesian approach, we assume a gamma prior distribution for the parameter of the DB distribution. 
We also provide a brief sensitivity analysis assuming a half-normal distribution in place of the gamma distribution for the parameter of the DB distribution. From the obtained results of this study, we can conclude that the proposed methodology can be very useful for researchers dealing with medical discrete lifetime data in the presence of right-censored data and a cure fraction.</p>2022-08-30T00:00:00+08:00Copyright (c) 2022 Statistics, Optimization & Information Computinghttp://www.iapress.org/index.php/soic/article/view/1451The k-nearest Neighbor Classification of Histogram- and Trapezoid-Valued Data2022-09-29T14:45:03+08:00Mostafa Razmkhahrazmkhah_m@um.ac.irFathimah Al-Ma'shumahfathimah.14@gmail.comSohrab Effatis-effati@um.ac.ir<p>A histogram-valued observation is a specific type of symbolic object that represents its value by a list of bins (intervals) along with their corresponding relative frequencies or probabilities. In the literature, the raw data in the bins of histogram-valued data have been assumed to be uniformly distributed. A new representation of such observations is proposed in this paper by assuming that the raw data in each bin are linearly distributed; these are called trapezoid-valued data. Moreover, new definitions of the union and intersection of trapezoid-valued observations are given. This study proposes the k-nearest neighbor technique for classifying histogram-valued data using various dissimilarity measures. Further, the limiting behaviors of the computational complexities of the considered dissimilarity measures are compared. Some simulations are done to study the performance of the proposed procedures, and the results are applied to three real data sets. Finally, some conclusions are stated.</p>2022-09-29T09:15:22+08:00Copyright (c) 2022 Statistics, Optimization & Information Computinghttp://www.iapress.org/index.php/soic/article/view/1547Spatial Assessment of Water River Pollution Using the Stochastic Block Model: Application in Different Station in the Litani River, Lebanon2022-09-29T14:45:04+08:00Alya ATOUIalyaatoui@gmail.comAbir El Hajabir_hajj_1993@hotmail.comYousri Slaouiyousri.slaoui@math.univ-poitiers.frAli Fadelali.fadel@univ-tours.frKamal Slimkamal.slim@hotmail.comSamir Abbad AndaloussiAbbad@u-pec.frRégis Moilleronmoilleron@u-pec.frZaher KHRAIBANIzki@cy-tech.fr<p>Water pollution is a major global environmental problem. In Lebanon, water pollution threatens public health and biological diversity. In this work, a non-classical classification method was used to assess water pollution in a Mediterranean river. A clustering method based on the stochastic block model (SBM) was applied to physicochemical parameters at three stations of the Litani River to group these parameters into different clusters and identify the evolution of the physicochemical parameters between the stations. Results showed that the method gave detailed findings on the distribution of parameters both within and between stations. This was achieved by calculating the estimated connection matrices between the obtained clusters and the probability vector of membership of the physicochemical parameters in each cluster at the different stations. 
In each of the three stations, the same two clusters were obtained; the difference between them lay in the estimated connection matrices and the estimated cluster membership vectors. The power of the proposed SBM method is demonstrated in simulation studies and in a new real application to the sampled physicochemical parameters of the Litani River. First, we compare the proposed method to the classical principal component analysis (PCA) method, and then to the hierarchical and K-means clustering methods. Results showed that these classical methods gave the same two clusters as the proposed method. However, unlike the proposed SBM method, the classical approaches are not able to reveal the block structure of the three stations.</p>2022-07-10T00:00:00+08:00Copyright (c) 2022 Statistics, Optimization & Information Computinghttp://www.iapress.org/index.php/soic/article/view/1311Generalizing the properties of Finite Iterative Method for the Computation of the Covariance Matrix Implied by a Recursive Path Model2022-09-29T14:45:05+08:00Seyid Abdellahi Ebnou Abdemseyidebnou@gmail.comIaousse M'barekiaousse@gmail.comZouhair El Hadriz.elhadri@yahoo.fr<p>In this paper, we generalize the properties of the correlation matrix implied by a recursive path model using the Finite Iterative Method to the covariance case, where variables are no longer assumed to be standardized. We demonstrate that the implied covariance matrix computed using the Finite Iterative Method is affine with respect to the model parameters. Moreover, many other properties derive from this affinity and are used to simplify the computation of the first as well as the second derivatives of the Unweighted Least Squares function used as an objective function in the estimation of the model parameters. 
Illustrative and numerical examples are given to show the advantage of the proposed properties as an alternative to the classical approximation used to compute the aforementioned derivatives.</p>2022-09-29T09:39:07+08:00Copyright (c) 2022 Statistics, Optimization & Information Computinghttp://www.iapress.org/index.php/soic/article/view/1057Economic Dispatch of Electrical Power in South Africa: An Application to the Northern Cape Province2022-09-29T14:45:05+08:00Thakhani RaveleRavelethakhani@gmail.comCaston Sigaukeravelethakhani@gmail.comLordwell Jhambaravelethakhani@gmail.com<p>Power utility companies rely on electricity demand forecasting for their operations. This paper presents an application of linear quantile regression, non-linear quantile regression, and additive quantile regression models for forecasting extreme electricity demand at peak hours such as 18:00, 19:00, 20:00 and 21:00, using Northern Cape data for the period 01 January 2000 to 31 March 2014. Variable selection was done using the least absolute shrinkage and selection operator. Additive quantile regression models were found to be the best-fitting models for hours 18:00 and 19:00, whereas linear quantile regression models were found to be the best-fitting models for hours 20:00 and 21:00. Out-of-sample forecasts for seven days (01 to 07 April 2014) were used to solve the unit commitment problem using mixed-integer programming. The unit commitment results showed that it is less costly to use all the generating units, such as hydroelectric, wind power, concentrated solar power and solar photovoltaic. The main contribution of this study is the development of models for forecasting hourly extreme peak electricity demand. 
These results could be useful to system operators in the energy sector who have to maintain minimum cost by scheduling and dispatching electricity during peak hours, when the grid is constrained due to peak load demand.</p>2022-07-28T00:00:00+08:00Copyright (c) 2022 Statistics, Optimization & Information Computinghttp://www.iapress.org/index.php/soic/article/view/1397Best Linear Unbiased Estimation and Prediction of Record Values Based on Kumaraswamy Distributed Data2022-09-29T14:45:06+08:00Ramy Aldallaldr.raldallal@gmail.com<p>To predict a future upper record value based on Kumaraswamy distributed data, explicit expressions for the single and product moments are established, along with some enhanced expressions that make them easier to apply in mathematical software. The best linear unbiased estimator approach for estimating the parameters and predicting future record values is considered, and some important tables are created to help in the calculations. Two illustrative examples, based on a simulation study and real-life data, are provided to assess the performance of the introduced results.</p>2022-07-23T00:00:00+08:00Copyright (c) 2022 Statistics, Optimization & Information Computinghttp://www.iapress.org/index.php/soic/article/view/1493A new smoothing method for nonlinear complementarity problems involving P0-function2022-09-29T14:45:06+08:00El Hassene OSMANIel-hassene.osmani@insa-rennes.frMounir Haddoumounir-haddou@insa-rennes.frNaceurdine Bensalemnaceurdine.bensalme@univ-setif.dzLina Abdallahlina_abdallah@hotmail.fr<p>In this paper, we present a family of smoothing methods to solve nonlinear complementarity problems (NCPs) involving a <em>P</em>0-function. Several regularization or approximation techniques already exist, such as the Fischer-Burmeister method, interior-point methods (IPMs), or smoothing methods. 
All the corresponding methods solve a sequence of nonlinear systems of equations and depend on parameters that are difficult to drive to zero. The main novelty of our approach is to consider the smoothing parameters as variables that converge to zero by themselves. We do not need any complicated updating strategy, and thus obtain nonparametric algorithms. We prove some global and local convergence results and present several numerical experiments, comparisons, and applications that show the efficiency of our approach.</p>2022-09-29T13:30:21+08:00Copyright (c) 2022 Statistics, Optimization & Information Computinghttp://www.iapress.org/index.php/soic/article/view/1365Discrete Inverted Nadarajah-Haghighi Distribution: Properties and Classical Estimation with Application to Complete and Censored data2022-09-30T23:26:51+08:00Bhupendra Singhabhishektyagi033@gmail.comRavindra Pratap Singhabhishektyagi033@gmail.comAmit Singh Nayalabhishektyagi033@gmail.comAbhishek Tyagiabhishektyagi033@gmail.com<p>In this article, we develop the discrete version of the continuous inverted Nadarajah-Haghighi distribution, called the discrete inverted Nadarajah-Haghighi distribution. The model is flexible enough to handle not only over-dispersed and positively skewed data, but also upside-down bathtub-shaped and decreasing failure rates, as well as randomly right-censored data. We develop some important statistical properties of the proposed model, such as the quantile function, median, moments, skewness, kurtosis, index of dispersion, entropy, expected inactivity time function, stress-strength reliability, and order statistics. We estimate the model parameters by the method of maximum likelihood under complete and censored data. An algorithm to generate randomly right-censored data from the proposed model is also presented. Extensive simulation studies are presented to examine the behavior of the estimators with complete and censored data. 
Finally, two complete and two censored data sets are used to illustrate the utility of the proposed model.</p>2022-09-30T23:13:42+08:00Copyright (c) 2022 Statistics, Optimization & Information Computing
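Several abstracts in this issue concern discrete analogues of continuous lifetime distributions (the discrete Bilal and the discrete inverted Nadarajah-Haghighi distributions). As a minimal illustrative sketch, one common construction in the discretization literature defines the probability mass function through the continuous survival function as P(Y = k) = S(k) - S(k + 1); this is an assumption for illustration only, and the exact definitions used in the articles above may differ. The helper name `discretize_survival` is hypothetical.

```python
import math

def discretize_survival(S, kmax):
    """Build the first kmax + 1 probabilities of a discrete analogue of a
    continuous lifetime distribution with survival function S, using the
    common construction P(Y = k) = S(k) - S(k + 1), k = 0, 1, ..., kmax.
    (Illustrative sketch; the articles' exact definitions may differ.)"""
    return [S(k) - S(k + 1) for k in range(kmax + 1)]

# Known special case: discretizing Exp(lam) this way yields a geometric
# distribution with success probability p = 1 - exp(-lam).
lam = 0.5
pmf = discretize_survival(lambda t: math.exp(-lam * t), 50)
p = 1.0 - math.exp(-lam)
geometric = [p * (1.0 - p) ** k for k in range(51)]
```

The probabilities telescope, so the partial sums equal S(0) - S(kmax + 1), which tends to 1 as kmax grows, confirming that the construction yields a proper distribution on the non-negative integers.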