http://www.iapress.org/index.php/soic/issue/feedStatistics, Optimization & Information Computing2024-08-11T23:35:11+08:00David G. Yudavid.iapress@gmail.comOpen Journal Systems<p><em><strong>Statistics, Optimization and Information Computing</strong></em> (SOIC) is an international refereed journal dedicated to the latest advancements in statistics, optimization and their applications in information sciences. Topics of interest include (but are not limited to): </p> <p>Statistical theory and applications</p> <ul> <li class="show">Statistical computing, Simulation and Monte Carlo methods, Bootstrap, Resampling methods, Spatial Statistics, Survival Analysis, Nonparametric and semiparametric methods, Asymptotics, Bayesian inference and Bayesian optimization</li> <li class="show">Stochastic processes, Probability, Statistics and applications</li> <li class="show">Statistical methods and modeling in life sciences including biomedical sciences, environmental sciences and agriculture</li> <li class="show">Decision Theory, Time series analysis, High-dimensional multivariate integrals, statistical analysis in marketing, business, finance, insurance, economics and social science, etc.</li> </ul> <p> Optimization methods and applications</p> <ul> <li class="show">Linear and nonlinear optimization</li> <li class="show">Stochastic optimization, Statistical optimization and Markov chains, etc.</li> <li class="show">Game theory, Network optimization and combinatorial optimization</li> <li class="show">Variational analysis, Convex optimization and nonsmooth optimization</li> <li class="show">Global optimization and semidefinite programming </li> <li class="show">Complementarity problems and variational inequalities</li> <li class="show"><span lang="EN-US">Optimal control: theory and applications</span></li> <li class="show">Operations research, Optimization and applications in management science and engineering</li> </ul> <p>Information computing and machine intelligence</p> <ul> <li 
class="show">Machine learning, Statistical learning, Deep learning</li> <li class="show">Artificial intelligence, Intelligent computation, Intelligent control and optimization</li> <li class="show">Data mining, Data analysis, Cluster computing, Classification</li> <li class="show">Pattern recognition, Computer vision</li> <li class="show">Compressive sensing and sparse reconstruction</li> <li class="show">Signal and image processing, Medical imaging and analysis, Inverse problems and imaging sciences</li> <li class="show">Genetic algorithms, Natural language processing, Expert systems, Robotics, Information retrieval and computing</li> <li class="show">Numerical analysis and algorithms with applications in computer science and engineering</li> </ul>http://www.iapress.org/index.php/soic/article/view/2068A novel technique for generating families of continuous distributions2024-08-11T23:00:52+08:00Boikanyo Makubatemakubateb@biust.ac.bwRegent Retrospect Musekwamr21100062@studentmail.biust.ac.bw<p>In this paper, we present the generalized flexible-G family for creating several continuous distributions. The novel features of our technique are that it adds only two extra shape parameters to any chosen continuous distribution and that it is not derived from any existing parent distribution. Several special cases of this family are provided. The generalized flexible-G family offers significant improvements in flexibility, fit, and applicability across a wide range of fields. The family's model parameters are estimated using the maximum likelihood estimation method. A simulation study is conducted to assess the consistency of the maximum likelihood estimates. The generalized flexible log-logistic, a specific case of our novel family, is applied to both patient analgesia and reliability data in order to illustrate the significance of the family. The generalized flexible log-logistic outperforms several competitive models provided in this paper. 
Furthermore, the generalized flexible log-logistic performs better than traditional distributions such as the Burr XII, Gumbel, and Weibull models.</p>2024-07-29T00:00:00+08:00Copyright (c) 2024 Statistics, Optimization & Information Computinghttp://www.iapress.org/index.php/soic/article/view/1903Prediction problem for continuous time stochastic processes with periodically correlated increments observed with noise2024-08-11T23:00:53+08:00Maksym Luzmaksym.luz@gmail.comMikhail Moklyachukmoklyachuk@gmail.com<p>We propose a solution to the problem of mean-square optimal estimation of linear functionals that depend on the unobserved values of a continuous-time stochastic process with periodically correlated increments, based on observations of this process with periodically stationary noise. To solve the problem, we transform the processes into sequences of stochastic functions that form infinite-dimensional stationary vector sequences. In the case of known spectral densities of these sequences, we obtain formulas for calculating the values of the mean-square errors and the spectral characteristics of the optimal estimates of the functionals. Formulas determining the least favorable spectral densities and the minimax (robust) spectral characteristics of the optimal linear estimates of the functionals are derived in the case where the sets of admissible spectral densities are given.</p>2024-06-10T00:00:00+08:00Copyright (c) 2024 Statistics, Optimization & Information Computinghttp://www.iapress.org/index.php/soic/article/view/2013Generalized variables approach to generalized inverted exponential distribution reliability analyses with progressively type II censored data2024-08-11T23:00:53+08:00Danush Wijekularathnadwijekularathna@troy.edu<p>Reliability analysis plays a crucial role in various fields, including engineering, manufacturing, and quality control. It provides valuable insights into the failure behavior of systems and products. 
One commonly used distribution in reliability modeling is the generalized inverted exponential distribution (GIED), known for its flexibility and adaptability to a wide range of failure data. This paper presents a new method for estimating confidence intervals and testing hypotheses for GIED reliability functions based on a generalized variable approach. By transforming the reliability function into the generalized variable domain and calculating the generalized lower confidence limit, the proposed method offers enhanced accuracy and precision. Furthermore, the generalized p-value approach for hypothesis testing provides a robust and computationally efficient method for analyzing reliability data. The results from a real data set and Monte Carlo simulations confirm the superiority of the proposed approach over classical methods. The proposed method offers improved accuracy and computational efficiency, making it a valuable tool for reliability analysis using the GIED.</p>2024-06-14T00:00:00+08:00Copyright (c) 2024 Statistics, Optimization & Information Computinghttp://www.iapress.org/index.php/soic/article/view/1996Robust M Estimation for Poisson Panel Data Model with Fixed Effects: Method, Algorithm, Simulation, and Application2024-08-11T23:29:33+08:00Ahmed Hassen Youssef Elsayed.gamal@comm.aru.edu.egMohamed Reda AbonazelElsayed.gamal@comm.aru.edu.eg Elsayed G. AhmedElsayed.gamal@comm.aru.edu.eg<p>The fixed effects Poisson (FEP) model is one of the most important models for count data when the data contain time periods and cross-sectional units. The maximum likelihood (ML) estimation method for the FEP model provides good results in the absence of outliers but is strongly affected by them. Therefore, in this paper we introduce robust estimators for the FEP model. These estimators yield stable and reliable results in the presence of outliers. 
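To make the robustness mechanism concrete, the sketch below shows the Huber and Tukey (bisquare) weight functions that typically underlie such M-estimators; this is an illustrative stand-in rather than the authors' implementation, and the tuning constants 1.345 and 4.685 are the conventional defaults rather than values taken from the paper.

```python
def huber_weight(r, c=1.345):
    """Huber weight for a standardized residual r: full weight for
    small residuals, downweighting proportional to 1/|r| beyond c."""
    return 1.0 if abs(r) <= c else c / abs(r)

def tukey_weight(r, c=4.685):
    """Tukey bisquare weight: smooth downweighting that reaches
    exactly zero beyond c, so gross outliers are discarded."""
    return (1.0 - (r / c) ** 2) ** 2 if abs(r) <= c else 0.0
```

In an iteratively reweighted estimation loop, each observation's contribution to the score equations is multiplied by such a weight, so outlying counts no longer dominate the fit.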
A Monte Carlo simulation study and an empirical application were conducted to assess the performance of the non-robust fixed Poisson maximum likelihood (FPML) estimator and the robust estimators: fixed Poisson Huber (FPHR), fixed Poisson Hampel (FPHM) and fixed Poisson Tukey (FPTK). The results of the simulation and application show that the robust estimators outperform the FPML estimator when the count panel data contain outliers. In addition, FPTK is more efficient than the other robust estimators.</p>2024-06-03T00:00:00+08:00Copyright (c) 2024 Statistics, Optimization & Information Computinghttp://www.iapress.org/index.php/soic/article/view/2076Predicting the closing price of cryptocurrency Ethereum2024-08-11T23:33:00+08:00Vhukhudo Ronny Rambevharavelethakhani@gmail.comCaston Sigaukecaston.sigauke@univen.ac.zaThakhani RaveleRavelethakhani@gmail.com<p>Given that cryptocurrencies are now involved in nearly every financial transaction due to their widespread acceptance as an alternative method of payment and currency exchange, researchers and economists have increased opportunities to analyze cryptocurrency prices. Over time, predicting the daily closing price of Ethereum has been challenging for investors, traders, and investment banks because of its significant price volatility. The daily closing price of a cryptocurrency is crucial for trading or investing in Ethereum. This study conducts a comparative analysis of the predictive performance of deep machine learning algorithms within a stacking ensemble modeling framework, utilizing daily historical price data of Ethereum from Coindesk, tweets from Twitter spanning August 1, 2022, to August 8, 2022, and five additional covariates (closing price lag1, closing price lag2, noltrend, daytype, and month) derived from Ethereum's closing price. 
Seven models are employed to forecast the daily closing price of Ethereum: recurrent neural network, ensemble stacked recurrent neural network, gradient boosting machine, generalized linear model, distributed random forest, deep neural networks, and a stacked ensemble of gradient boosting machine, generalized linear model, distributed random forest, and deep neural networks. The primary evaluation metric is the mean absolute error (MAE). Based on MAE, the RNN forecasts outperform the other models in this study, achieving an MAE of 0.0309.</p>2024-07-22T00:00:00+08:00Copyright (c) 2024 Statistics, Optimization & Information Computinghttp://www.iapress.org/index.php/soic/article/view/2004On Quantile Credibility Estimators Under An Equal Correlation Structure Over Risks2024-08-11T23:35:11+08:00Ghada Arafaarafaghada.1995@gmail.comFarouk Metirifmetiri@yahoo.frAhmed Sadounsaadounahmed1@yahoo.frMohamed Riad Remitar_remita@yahoo.fr<p>In traditional quantile credibility models, it is typically assumed that claims are independent across different risks. Nevertheless, there are numerous scenarios where dependencies among insured individuals can emerge, thereby breaching the independence assumption. This study focuses on examining the quantile credibility model and extending some established results within the context of an equal correlation structure among risks. 
Specifically, we compute the credibility premiums for both homogeneous and inhomogeneous cases utilizing the orthogonal projection method.</p>2024-07-22T00:00:00+08:00Copyright (c) 2024 Statistics, Optimization & Information Computinghttp://www.iapress.org/index.php/soic/article/view/2019An Optimal Strategy for Estimating Weibull distribution Parameters by Using Maximum Likelihood Method2024-08-11T23:00:56+08:00Talal Alharbi ta.alharbi@qu.edu.saFarag Hamad ta.alharbi@qu.edu.sa<p>Several methods have been used to estimate the Weibull parameters, such as the least squares method (LSM), the weighted least squares method (WLSM), the method of moments (MOM), and maximum likelihood estimation (MLE), of which MLE is the most popular. The Newton-Raphson method has been applied to solve the normal equations of the MLE in order to estimate the Weibull parameters, that is, to find the parameter values for which the log-likelihood function is maximized. We seek an approximate solution to the normal equations of the MLE because they admit no closed-form analytical solution. In this work, we carry out a study that shows the difference between two strategies for solving the MLE equations using the Newton-Raphson algorithm. Both strategies provide an optimal solution for estimating the Weibull distribution parameters, but the question is which is easier to apply and which converges faster. Therefore, we applied both strategies to estimate the Weibull shape and scale parameters using two different types of data (real and simulated), and compared the results obtained from the two strategies. Two studies were conducted to compare the strategies and select the optimal one for estimating the Weibull distribution parameters by the maximum likelihood method. 
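One common way to set up the Newton-Raphson iteration is to profile the scale parameter out of the log-likelihood and iterate only on the shape parameter; the sketch below follows that standard reduction. It is illustrative only, not the paper's code (in particular, a central-difference slope stands in for the analytic derivative):

```python
import math

def weibull_mle_newton(x, k0=1.0, tol=1e-10, max_iter=100):
    """Estimate Weibull shape k and scale lam by Newton-Raphson on the
    profiled score equation; lam then follows in closed form."""
    n = len(x)
    logs = [math.log(v) for v in x]
    mean_log = sum(logs) / n

    def g(k):
        # Profiled score equation for the shape parameter:
        # sum(x^k ln x)/sum(x^k) - 1/k - mean(ln x) = 0 at the MLE.
        s = sum(v ** k for v in x)
        sl = sum(v ** k * lv for v, lv in zip(x, logs))
        return sl / s - 1.0 / k - mean_log

    k = k0
    for _ in range(max_iter):
        gk = g(k)
        if abs(gk) < tol:
            break
        h = 1e-6 * k
        dg = (g(k + h) - g(k - h)) / (2.0 * h)  # numerical slope of g
        k -= gk / dg                            # Newton-Raphson update
    lam = (sum(v ** k for v in x) / n) ** (1.0 / k)  # closed-form scale
    return k, lam
```

The two strategies compared in the paper differ in how the pair of normal equations is organized; the profiled one-dimensional form above is one of the standard arrangements.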
We used several measures to compare the results, such as the number of steps to convergence (the convergence condition), the estimated AIC and BIC values, and the RMSE. The results show that the numerical solution obtained with the first strategy converges faster than the solution obtained with the second strategy. Moreover, the RMSE estimated with the first strategy is lower than that estimated with the second strategy in the simulation study, across different noise levels and sample sizes.</p>2024-08-11T21:45:05+08:00Copyright (c) 2024 Statistics, Optimization & Information Computinghttp://www.iapress.org/index.php/soic/article/view/1779Optimality and Unified Duality for E-differentiable Vector Optimization Problems over Cones involving Generalized E-convexity2024-08-11T23:00:57+08:00Malti Kapoormaltikapoor@mln.du.ac.in<div class="page" title="Page 1"> <div class="layoutArea"> <div class="column"> <p>This paper explores a new approach to solving a nondifferentiable vector optimization problem over cones by means of an operator E : R<sup>n</sup> → R<sup>n</sup> which renders the considered problem differentiable. Some new generalized convexity notions are introduced and employed to obtain necessary and sufficient KKT-type optimality conditions. Further, a unified dual encapsulating both the Mond-Weir and Wolfe type duals in a single framework is associated with the considered problem, and duality results are proved.</p> </div> </div> </div>2024-07-29T00:00:00+08:00Copyright (c) 2024 Statistics, Optimization & Information Computinghttp://www.iapress.org/index.php/soic/article/view/1875Solving Multiobjective Optimization Problems with Inequality Constraint Using An Augmented Lagrangian Function2024-08-11T23:23:36+08:00Appolinaire Tougmaappolinaire.tougma19@gmail.comKounhinir Somésokous11@gmail.com<p>We propose a method for solving multiobjective optimization problems under inequality constraints. 
In this method, the initial problem is transformed into an unconstrained single-objective optimization problem using an augmented Lagrangian function and an <em>ϵ</em>-constraint approach. Indeed, the augmented Lagrangian function is used to convert a problem with multiple objective functions into one with a single objective function, while the <em>ϵ</em>-constraint approach allows constrained optimization problems to be transformed into unconstrained ones. To demonstrate the admissibility and Pareto optimality of the obtained solutions, we provide two propositions with proofs. In addition, a comparison study on the convergence and distribution of the obtained solutions is made with two other well-known and widely used methods, NSGA-II and BoostDMS, using numerical results for fifty test problems taken from the literature. Based on all these theoretical and numerical results, the proposed method is a highly effective way to solve multiobjective optimization problems.</p>2024-06-08T00:00:00+08:00Copyright (c) 2024 Statistics, Optimization & Information Computinghttp://www.iapress.org/index.php/soic/article/view/2023Mean Variance Complex-Based Portfolio Optimization2024-08-11T23:21:31+08:00Izza Anis Majidahizzaanism@gmail.comAmran Rahimamran@science.unhas.ac.idMawardi Bahrimawardibahri@gmail.com<p>Mean-Variance (MV) is a method that combines several assets with appropriate weights, intending to maximize profit and reduce risk. Stock market conditions are very volatile, and the MV method does not capture stock market fluctuations well because it is limited to a single time period. This study proposes a mean-variance complex-based approach that transforms real returns into complex returns by using the Hilbert transform to construct an optimal mean-variance portfolio based on complex returns and then find its dynamic asset allocation. 
The results show that, with the same risk tolerance, the mean-variance complex-based approach outperforms the MV method in profit, loss, and portfolio performance tests.</p> <p> </p>2024-06-06T00:00:00+08:00Copyright (c) 2024 Statistics, Optimization & Information Computinghttp://www.iapress.org/index.php/soic/article/view/2098Optimization of the K-Nearest Neighbor Algorithm to Predict Bank Churn2024-08-11T23:00:58+08:00Sonia Akakpochunhui.yu@farmingdale.eduPatrick Dambrachunhui.yu@farmingdale.eduRachell Pazchunhui.yu@farmingdale.eduTimothy Smythchunhui.yu@farmingdale.eduFrank Torrechunhui.yu@farmingdale.eduChunhui Yuyuc@farmingdale.edu<p>Bank churn occurs when customers switch from one bank to another. Although some customer loss is unavoidable, it is important for banks to avoid voluntary churn, as it is easier and cheaper to keep an existing customer than to gain a new one. In our paper, we train and optimize a machine learning algorithm, specifically a k-nearest neighbors algorithm, to predict whether or not a customer will leave their bank using existing demographic and financial information. By giving banks a reliable method for predicting whether or not a customer will churn, they can prioritize certain groups in an effort to increase retention rates. We compare the accuracy of our algorithm to other types of machine learning algorithms, such as random forest and logistic regression models, and increase the accuracy of the k-nearest neighbor algorithm by optimizing the k value used in our model, as well as utilizing 10-fold cross-validation. We determine the most important attributes and weight them appropriately. 
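The k-selection step can be sketched as follows; this is an illustrative toy, not the paper's code, and leave-one-out accuracy stands in for the 10-fold cross-validation on the bank data (all names and data here are ours):

```python
import math
from collections import Counter

def knn_predict(points, labels, query, k):
    """Majority vote among the k nearest training points."""
    nearest = sorted(range(len(points)),
                     key=lambda i: math.dist(points[i], query))[:k]
    return Counter(labels[i] for i in nearest).most_common(1)[0][0]

def best_k(points, labels, candidates):
    """Choose k by leave-one-out accuracy (a stand-in for 10-fold CV)."""
    def loo_accuracy(k):
        hits = 0
        for i in range(len(points)):
            rest_pts = points[:i] + points[i + 1:]
            rest_lab = labels[:i] + labels[i + 1:]
            hits += knn_predict(rest_pts, rest_lab, points[i], k) == labels[i]
        return hits / len(points)
    return max(candidates, key=loo_accuracy)
```

Odd candidate values of k avoid ties in a binary churn vote; feature scaling (omitted here) matters in practice because k-NN is distance-based.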
After optimizing this model, we are able to predict with 85.72% accuracy whether or not the customer will churn.</p>2024-07-10T00:00:00+08:00Copyright (c) 2024 Statistics, Optimization & Information Computinghttp://www.iapress.org/index.php/soic/article/view/1990A novel Mathematical Modeling for Deep Multilayer Perceptron Optimization: Architecture Optimization and Activation Functions Selection2024-08-11T23:00:59+08:00Taoufyq Elansarit.elansari@edu.umi.ac.maMohammed Ouananm.ouanan@umi.ac.maHamid Bourrayhbourrayh@yahoo.fr<p>The Multilayer Perceptron (MLP) is an artificial neural network composed of one or more hidden layers. It has found wide use in various fields and applications. The number of neurons in the hidden layers, the number of hidden layers, and the activation functions employed in each layer significantly influence the convergence of MLP learning algorithms. This article presents a model for selecting activation functions and optimizing the structure of the multilayer perceptron, formulated in terms of mixed-variable optimization. To solve the obtained model, a hybrid algorithm is used, combining stochastic optimization and the backpropagation algorithm. Our algorithm shows better complexity and execution time compared to some other methods in the literature, as the numerical results show.</p>2024-06-06T00:00:00+08:00Copyright (c) 2024 Statistics, Optimization & Information Computinghttp://www.iapress.org/index.php/soic/article/view/2037Fast approximation of the traveling salesman problem shortest route by rectangular cell clustering pattern to parallelize solving2024-08-11T23:00:59+08:00Vadim Romanukeromanukevadimv@gmail.com<p>A method of quickly obtaining an approximate solution to the traveling salesman problem (TSP) is suggested, where a dramatic computational speedup is guaranteed. The initial TSP is broken into open-loop TSPs by using a clustering method. 
The clustering method is based on either imposing a rectangular lattice on the nodes or dividing the dataset iteratively until the open-loop TSPs become sufficiently small. The open-loop TSPs are independent, so they can be solved in parallel without synchronization, regardless of the solver. Then the open-loop subroutes are assembled into an approximately shortest route of the initial TSP via the shortest connections. The assemblage pattern is a symmetric rectangular closed-loop serpentine. The iterative clustering can use the rectangular assembling approach as well. Alternatively, the iterative clustering can use the centroid TSP assembling approach, which requires solving a supplementary closed-loop TSP whose nodes are the centroids of the open-loop-TSP clusters. Based on the results of numerical simulation, it is ascertained that the iterative clustering and the rectangular cell clustering pattern are roughly equally accurate, but the latter is far more computationally efficient on squarish datasets. The fast approximation of the TSP shortest route serves as an upper bound. In addition, the route can be studied to find its bottlenecks, which are subsequently separated, and another set of open-loop TSPs is approximately solved. 
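A minimal sketch of the rectangular-lattice clustering and serpentine assembly described above (illustrative and ours, not the paper's code; in particular, a trivial coordinate sort stands in for solving each cell's open-loop TSP):

```python
def serpentine_route(points, nx, ny):
    """Bucket 2-D points into an nx-by-ny rectangular lattice of cells,
    then visit the cells in a boustrophedon (serpentine) order."""
    min_x = min(p[0] for p in points); max_x = max(p[0] for p in points)
    min_y = min(p[1] for p in points); max_y = max(p[1] for p in points)
    span_x = (max_x - min_x) or 1.0
    span_y = (max_y - min_y) or 1.0
    cells = {}
    for p in points:
        ix = min(nx - 1, int((p[0] - min_x) / span_x * nx))
        iy = min(ny - 1, int((p[1] - min_y) / span_y * ny))
        cells.setdefault((ix, iy), []).append(p)
    route = []
    for iy in range(ny):
        # alternate left-to-right and right-to-left rows of cells
        cols = range(nx) if iy % 2 == 0 else reversed(range(nx))
        for ix in cols:
            bucket = cells.get((ix, iy), [])
            bucket.sort()  # stand-in for solving the cell's open-loop TSP
            route.extend(bucket)
    return route
```

Because each cell's subroute is computed independently, the per-cell step is what gets parallelized; only the cheap serpentine assembly is sequential.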
For large TSPs, where the balance between accuracy loss and computational time is uncertain, the rectangular cell clustering pattern allows fast solution approximations to be obtained on numerous non-synchronized parallel processor cores.</p>2024-06-06T00:00:00+08:00Copyright (c) 2024 Statistics, Optimization & Information Computinghttp://www.iapress.org/index.php/soic/article/view/1906Optimized Parameter Estimation and Integrating Neural Network Forecasting of Dynamic Plant-Livestock Model for Early Warning in Agro-Environment Control Systems2024-08-11T23:16:37+08:00Agolame Puoetsileagolame.puoetsile@studentmail.biust.ac.bwMokaedi Lekgarilekgarim@biust.ac.bwSemu Kassakassas@biust.ac.bwGizaw Mengistu Tsidumengistug@biust.ac.bw<p>The research utilizes the Lotka-Volterra prey-predator model to study Plant-Herbivore dynamics, focusing on the relationship between traditional livestock farming and vegetation conditions. Advanced methods are developed to improve the precision and efficiency of parameter estimation in these models. Neural networks are incorporated to enhance forecasting abilities, and an extension of the Plant-Herbivore models includes Botswana's climate and livestock variables. Efficient parameter space exploration is achieved using the Runge-Kutta method along with Multistart and the local solver $fmincon$ in MATLAB. This method improves parameter estimation accuracy. To address the impact of homogeneity assumptions in the data, estimate aggregation through weighting and time conversion is applied. Furthermore, the study investigates the use of nonlinear least squares to further refine the process, allowing the identification of parameters that best fit the observed livestock data, even with non-linearity. By using optimized parameter estimation techniques along with normalized nonlinear least squares, the cumulative error was reduced from an initial 1563.4521 to a final value of 0.0038, well within the specified thresholds (1.0, 0.1, and 0.01). 
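The forward model underlying this estimation loop can be sketched with a classical fourth-order Runge-Kutta step; the Python below is an illustrative stand-in for the MATLAB Runge-Kutta/fmincon pipeline, and the parameter names and values are ours, not the study's estimates:

```python
def plant_herbivore_rhs(state, a, b, c, d):
    """Lotka-Volterra right-hand side: p is plant biomass, h is herbivore
    (livestock) density; a, b, c, d are growth/interaction rates."""
    p, h = state
    return (a * p - b * p * h, c * p * h - d * h)

def rk4_trajectory(f, state, dt, steps, *params):
    """Integrate f with the classical fourth-order Runge-Kutta scheme."""
    traj = [state]
    for _ in range(steps):
        p, h = traj[-1]
        k1 = f((p, h), *params)
        k2 = f((p + dt / 2 * k1[0], h + dt / 2 * k1[1]), *params)
        k3 = f((p + dt / 2 * k2[0], h + dt / 2 * k2[1]), *params)
        k4 = f((p + dt * k3[0], h + dt * k3[1]), *params)
        traj.append((p + dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
                     h + dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])))
    return traj
```

Parameter estimation then amounts to minimizing the squared distance between such simulated trajectories and the observed livestock data, which is the role the local solver plays in the study.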
Comparisons between Autoregressive Integrated Moving Average (ARIMA) and Neural Network Auto-Regressive (NNAR) models showed that NNAR models outperformed ARIMA models, with lower variance estimates (0.000004 - 0.000562) compared to ARIMA (0.103 - 0.155). NNAR models displayed Mean Error (ME) values ranging from -0.0012 to 0.0140, indicating a close match between forecasts and actual values with minor deviations. As a result, NNAR forecasting was used for predicting soil moisture, death, and harvest rates, which were integrated into the extended Plant-Herbivore model. This integration enabled the estimation of livestock production trajectories for 2021-2022, along with corresponding interpretations. The study also assessed the uncertainty propagation from NNAR forecasts onto the Plant-Herbivore dynamic model, revealing an increase in uncertainty with longer lead times.</p>2024-07-01T00:00:00+08:00Copyright (c) 2024 Statistics, Optimization & Information Computinghttp://www.iapress.org/index.php/soic/article/view/1253Optimal Power Flow in DC Networks Using the Whale Optimization Algorithm2024-08-11T23:12:26+08:00Sebastian Camilo Jimenez Hernandezsebastianjimenez238176@correo.itm.edu.coLuis Fernando Grisales Noreñaluisgrisales@itm.edu.coJhon Jairo Rojas Montanojhonrojas7420@correo.itm.edu.coOscar Danilo Montoyaodmontoyag@udistrital.edu.coWalter Gil Gonzalezwalter.gil@pascualbravo.edu.co<p>This paper presents a solution method for the optimal power flow (OPF) problem in direct current (DC) networks. The method implements a master-slave optimization that combines a whale optimization algorithm (WOA) and a numerical method based on successive approximations (SA). The objective function is to reduce the power losses considering the set of constraints that DC networks represent in a distributed generation environment. 
In the master stage, the WOA determines the optimal amount of power to be supplied by each distributed generator (DG) in order to minimize the total power losses in the distribution lines of the DC network. In the slave stage, the power or load flow problem is solved in order to evaluate the objective function of each possible configuration proposed by the master stage. To validate the efficiency and robustness of the proposed model, we implemented three additional methods for comparison: the ant lion optimizer (ALO), a continuous version of the genetic algorithm (CGA), and the algorithm of black hole-based optimization (BHO). The efficiency of each solution method was validated in the 21- and the 69-node test systems using different scenarios of penetration of distributed generation. All the simulations, performed in MATLAB 2019, demonstrated that the WOA achieved the greatest minimization of power losses, regardless of the size of the DC network and the level of penetration of distributed generation.</p>2024-08-05T00:00:00+08:00Copyright (c) 2024 Statistics, Optimization & Information Computinghttp://www.iapress.org/index.php/soic/article/view/2027Breast cancer survival analysis and machine learning to predict the impact of different treatments2024-08-11T23:01:02+08:00Amal Elnawasanyaml.elnawasany@ci.suez.edu.egBenBella Tawfikbenbellat@ci.suez.edu.egMohamed Makhloufm.abdallah@ci.suez.edu.eg<p>Breast cancer is the most common form of cancer among women, impacting approximately one million women worldwide. New treatments are being developed yearly, improving breast cancer patients' survival rates. To explore the impact of different treatments, we conducted this study using data from the Surveillance, Epidemiology, and End Results (SEER) database. 
The study employed Kaplan-Meier analysis to examine breast cancer-specific survival (BCSS) and overall survival (OS) rates across various treatment options, including ‘chemotherapy’, ‘radiotherapy’, ‘both therapies’, and ‘no therapy’. The log-rank test was also utilized to assess the statistical significance of differences observed between multiple survival curves. We found that the recommended treatment for most breast cancer cases, based on BCSS analysis, is the combination of ‘both’ chemotherapy and radiotherapy. On the other hand, according to OS analysis, ‘radiotherapy only’ or radiotherapy ‘in conjunction with chemotherapy’ is the superior treatment for most breast cancer cases, and these options are often preferred over ‘chemotherapy only’ for most breast cancer patients. Machine learning was used to develop ten models predicting survivability for OS and BCSS. The C5.0 algorithm consistently achieves strong overall performance: a high accuracy of 0.98 and sensitivity of 0.99 for both OS and BCSS, a reasonable RMSE (0.14 and 0.15 for BCSS and OS, respectively), and a good ROC score (0.91 and 0.88 for BCSS and OS, respectively).</p>2024-05-30T00:00:00+08:00Copyright (c) 2024 Statistics, Optimization & Information Computinghttp://www.iapress.org/index.php/soic/article/view/2088A Redundancy Allocation Model for Uncertainty Random Water Supply System in Water Saving Management Contract Projects2024-08-11T23:01:02+08:00Qian Zhangzhangqian2021@student.usm.mySin Yin Tehtehsyin@usm.myCheang Peck Yeng Sharonsharon@usm.my<p>Water saving management contract (WSMC) projects provide advanced technology and management for a water supply system to achieve water conservation and set redundant components to ensure water supply reliability. Project managers focus on the reliability optimization problem and require redundancy allocation strategies for the above system. This paper presents an optimization method by dealing with the lifetime of the whole water supply system. 
Assuming the lifetimes of advanced components are uncertain variables and those of the old ones are random variables, a reliability optimization model of water supply systems is established based on chance theory, and the redundancy allocation solutions are obtained with an optimization toolkit. A WSMC case in Shenzhen, China is studied, and the results show that the reliability of the water supply system remains high under the obtained allocation strategy. This study provides theoretical support for improving water-saving safety and popularizing the WSMC service mechanism.</p>2024-07-22T00:00:00+08:00Copyright (c) 2024 Statistics, Optimization & Information Computinghttp://www.iapress.org/index.php/soic/article/view/1743A FMKL-GLD Quantile Method for Estimating Economic Growth in Nigeria in the Presence of Multicollinearity and Outlier 2024-08-11T23:01:03+08:00Ayooluwade Ebiwonjumijumiayooluwade@yahoo.comRetius Chifurirachifurira@ukzn.ac.zaKnowledge Chinhamuchinhamu@ukzn.ac.za<p>Nigeria’s economic growth has been a serious concern to policymakers, economists and scholars due to challenging economic characteristics, consequences and contradictions, despite the efforts made by the Central Bank of Nigeria in recent years to stimulate growth through policies such as tightening the monetary policy rate and heavy borrowing for infrastructural development. Economic growth in Nigeria was 6.95 percent between 1999 and 2007, stood at 7.98 percent between 2008 and 2010, fell to 4.80 percent between 2011 and 2015, and has stood at 0.81 percent since 2016. In this study, we estimate parameters to determine Nigerian economic growth in the presence of multicollinearity and outliers. We applied the FMKL-GLD quantile model to quarterly data from 1986 to 2021 obtained from the Central Bank of Nigeria. Exploratory data analysis (EDA) and diagnostic tests confirmed the presence of multicollinearity and outliers. 
The fitted model revealed that INDT, RINR, REXR and OPEN contributed positively to economic growth, to the tune of 0.15%, 0.24%, 0.06% and 1.76% respectively, while EXDT contributed negatively, reducing economic growth by 0.02%; this demonstrates the potential of the macroeconomic variables under study as veritable determinants of economic growth. The location, scale and shape parameters λ1, λ2, λ3 and λ4 for the fitted FMKL-GLD model were -0.0253, 34.1488, -0.3303 and -0.5758. Therefore, based on these parameters, the FMKL-GLD and GLD Q-Q plots, and the estimated parameter of RGDP, it can be concluded that the FMKL-GLD was the best model for describing the economic situation in Nigeria, showing that the economy was growing in a retrogressive direction and that drastic, determined effort is needed to curtail the situation. To achieve this, the adoption of economic openness is a veritable policy direction that must be strictly followed for the development and growth of the Nigerian economy.</p> <p> </p>2024-07-27T00:00:00+08:00Copyright (c) 2024 Statistics, Optimization & Information Computinghttp://www.iapress.org/index.php/soic/article/view/2011Modelling and forecasting Masvingo Province maternal mortality using time series models 2024-08-11T23:01:04+08:00Tendai Makonitpmakoni@gmail.comNomagugu T Ndlovunomagugun@ymail.com<p>In light of the Zimbabwean government's efforts, particularly through the Ministry of Health and Child Care (MoHCC), to reduce maternal mortality rates and achieve Sustainable Development Goal (SDG) number three, target one (SDG 3.1), which aims to reduce the global maternal mortality rate to fewer than 70 deaths per 100,000 live births, maternal mortality rates in Zimbabwe continue to rise. Time series techniques were used to model and predict quarterly maternal mortality statistics for Masvingo Province from January 2014 to December 2021. 
The time plot analysis revealed significant fluctuations in mortality, with the highest rates of maternal deaths recorded in 2018. The application of the Box-Jenkins methodology identified the ARIMA(2, 1, 1) model as the most suitable for modeling and forecasting quarterly maternal deaths among the fitted models. The suitability of the model was validated by the Akaike Information Criterion (AIC), and its forecast accuracy was confirmed by the Mean Absolute Error (MAE) and the root mean square error (RMSE). Consequently, it was used to project future maternal deaths. The projected values indicate a slight increase in quarterly mortality rates during the period, but there appears to be a relatively stable trend with moderate fluctuations, suggesting that Zimbabwe has not met the targets of SDG 3.1. These results underscore the need to re-assess current intervention programs aimed at reducing maternal mortality. The findings of the study could guide the refinement of existing strategies and the implementation of innovative solutions to urgently address unacceptably high mortality rates. 
Using statistical models such as the one used in this study, the Ministry of Health and Child Care (MoHCC) can make informed decisions in the health sector and implement effective interventions to combat maternal mortality.</p>2024-07-12T00:00:00+08:00Copyright (c) 2024 Statistics, Optimization & Information Computinghttp://www.iapress.org/index.php/soic/article/view/1918Julia sets of transcendental functions via a viscosity approximation-type iterative method with s-convexity2024-08-11T23:01:04+08:00Iqbal Ahmadiqbal@qec.edu.saMohammed Sajidm.sajid@qec.edu.saRais Ahmadraisain_123@rediffmail.com<p>In this article, we explore and analyze the different variants of Julia set patterns for the complex exponential function $W(z) =\alpha e^{z^n}+\beta z^2 + \log{\gamma^t}$ and the complex sine function $T(z) =\sin({z^n})+\beta z^2 + \log{\gamma^t}$, where $n\geq 2, \alpha, \beta\in\mathbb{C}, \gamma\in\mathbb{C}\backslash \{0\}$, and $t\in\mathbb{R}, ~t\geq 1$, by employing a viscosity approximation-type iterative method with $s$-convexity. We derive an escape criterion for visualizing Julia sets by generalizing the existing algorithms, which leads to the visualization of beautiful fractals as Julia sets. Additionally, we present graphical illustrations of Julia sets to demonstrate their dependence on the iteration parameters. Our study concludes with an analysis of variations in the images and the influence of the parameters on the color and appearance of the fractal patterns. Finally, we observe intriguing behaviors of Julia sets with fixed input parameters and varying values of $n.$</p>2024-06-06T00:00:00+08:00Copyright (c) 2024 Statistics, Optimization & Information Computing