http://www.iapress.org/index.php/soic/issue/feedStatistics, Optimization & Information Computing2025-06-10T11:55:11+08:00David G. Yudavid.iapress@gmail.comOpen Journal Systems<p><em><strong>Statistics, Optimization and Information Computing</strong></em> (SOIC) is an international refereed journal dedicated to the latest advancements in statistics, optimization and applications in information sciences. Topics of interest include (but are not limited to): </p> <p>Statistical theory and applications</p> <ul> <li class="show">Statistical computing, Simulation and Monte Carlo methods, Bootstrap, Resampling methods, Spatial Statistics, Survival Analysis, Nonparametric and semiparametric methods, Asymptotics, Bayesian inference and Bayesian optimization</li> <li class="show">Stochastic processes, Probability, Statistics and applications</li> <li class="show">Statistical methods and modeling in life sciences including biomedical sciences, environmental sciences and agriculture</li> <li class="show">Decision Theory, Time series analysis, High-dimensional multivariate integrals, statistical analysis in market, business, finance, insurance, economic and social science, etc.</li> </ul> <p> Optimization methods and applications</p> <ul> <li class="show">Linear and nonlinear optimization</li> <li class="show">Stochastic optimization, Statistical optimization and Markov-chain etc.</li> <li class="show">Game theory, Network optimization and combinatorial optimization</li> <li class="show">Variational analysis, Convex optimization and nonsmooth optimization</li> <li class="show">Global optimization and semidefinite programming </li> <li class="show">Complementarity problems and variational inequalities</li> <li class="show"><span lang="EN-US">Optimal control: theory and applications</span></li> <li class="show">Operations research, Optimization and applications in management science and engineering</li> </ul> <p>Information computing and machine intelligence</p> <ul> <li class="show">Machine learning, Statistical learning, Deep learning</li> <li class="show">Artificial intelligence, Intelligence computation, Intelligent control and optimization</li> <li class="show">Data mining, Data analysis, Cluster computing, Classification</li> <li class="show">Pattern recognition, Computer vision</li> <li class="show">Compressive sensing and sparse reconstruction</li> <li class="show">Signal and image processing, Medical imaging and analysis, Inverse problem and imaging sciences</li> <li class="show">Genetic algorithm, Natural language processing, Expert systems, Robotics, Information retrieval and computing</li> <li class="show">Numerical analysis and algorithms with applications in computer science and engineering</li> </ul>http://www.iapress.org/index.php/soic/article/view/1710Rao-Robson-Nikulin Goodness-of-fit Test Statistic for Censored and Uncensored Real Data with Classical and Bayesian Estimation2025-05-29T01:11:30+08:00Salwa L. AlKhayyatslalkhayyat@uj.edu.saHaitham M. Yousofhaitham.yousof@fcom.bu.edu.egHafida Goualgoual.hafida@gmail.com Talhi Hamidahamida.talhi@univ-annaba.dz Mohamed S. Hamedmssh@gulf.edu.sa Aiachi Hibahiba.ayachi@univ-annaba.orgMohamed Ibrahimmohamed_ibrahim@du.edu.eg<p>In this work, we provide a new Pareto type-II extension for censored and uncensored real-life data. With an emphasis on the applied aspects of the model, only the essential mathematical properties of the new distribution are derived.
A variety of traditional methods, including the Bayes method, are used to estimate the parameters of the new distribution. The maximum likelihood technique for the censored case is also derived. Using Pitman's proximity criteria, the likelihood estimation and the Bayesian estimation are contrasted. Three loss functions, namely the generalized quadratic, the Linex, and the entropy functions, are used to derive the Bayesian estimators. All the estimation techniques provided are evaluated through simulation studies. The BB algorithm is used to compare the censored maximum likelihood method to the Bayesian approach. With the aid of two applications and a simulation study, the construction of the Rao-Robson-Nikulin (RRN) statistic for the new model in the uncensored case is explained in detail. Additionally, the development of the Rao-Robson-Nikulin statistic for the novel model under the censored situation is shown using data from two censored applications and a simulation study.</p>2025-02-24T00:00:00+08:00Copyright (c) 2025 Statistics, Optimization & Information Computinghttp://www.iapress.org/index.php/soic/article/view/1872The New Topp-Leone-Type II Exponentiated Half Logistic-Marshall-Olkin-G Family of Distributions with Applications2025-05-29T01:11:32+08:00Broderick Oluyedeoluyedeo@biust.ac.bwGomolemo Jacqueline Lekonolg21100063@studentmail.biust.ac.bwLesego Gabaitirigabaitiril@biust.ac.bw<p>In this paper, we propose a new family of generalized distributions called the Topp-Leone type II Exponentiated Half Logistic-Marshall-Olkin-G (TL-TIIEHL-MO-G) distribution. The new distribution can be expressed as an infinite linear combination of the exponentiated-G family of distributions. Some special models of the new family of distributions are explored. Statistical properties including the quantile function, ordinary and incomplete moments, stochastic orders, probability weighted moments, distribution of the order statistics and Renyi entropy are presented. The maximum likelihood method is used for estimating the model parameters, and a Monte Carlo simulation is conducted to examine the performance of the model. The flexibility and importance of the new family of distributions are demonstrated by means of applications to real data for censored and complete sets, respectively.</p>2025-03-19T00:00:00+08:00Copyright (c) 2025 Statistics, Optimization & Information Computing
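<p>A minimal sketch of the estimation-and-simulation workflow mentioned in the preceding abstract (maximum likelihood fitting checked by Monte Carlo replication), using a simple two-parameter Weibull stand-in rather than the TL-TIIEHL-MO-G model; sample sizes and true parameter values are illustrative assumptions.</p>
<pre><code>
# Illustrative sketch only: MLE plus a Monte Carlo check of bias/MSE,
# using a two-parameter Weibull stand-in, not the TL-TIIEHL-MO-G model.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

rng = np.random.default_rng(42)
true_shape, true_scale = 1.5, 2.0

def neg_log_lik(params, data):
    shape, scale = params
    if shape <= 0 or scale <= 0:
        return np.inf
    return -weibull_min.logpdf(data, c=shape, scale=scale).sum()

def mle(data):
    res = minimize(neg_log_lik, x0=[1.0, 1.0], args=(data,), method="Nelder-Mead")
    return res.x

estimates = []
for _ in range(500):                       # Monte Carlo replications
    sample = weibull_min.rvs(c=true_shape, scale=true_scale, size=100, random_state=rng)
    estimates.append(mle(sample))
estimates = np.array(estimates)

truth = np.array([true_shape, true_scale])
bias = estimates.mean(axis=0) - truth
mse = ((estimates - truth) ** 2).mean(axis=0)
print("bias:", bias, "MSE:", mse)
</code></pre>
http://www.iapress.org/index.php/soic/article/view/2056A New Left Truncated Distribution for Modeling Failure time data: Estimation, Robustness study and Application2025-06-10T11:55:11+08:00KRISHNAKUMARI Kkrishnavidyadharan@gmail.comDais Georgekrishnavidyadharan@gmail.com<p>Truncation arises in many practical situations such as Epidemiology, Material science, Psychology, Social Sciences and Statistics, where one wants to study data which lie above or below a certain threshold or within a specified range. Left-truncation occurs when observations below a given threshold are not present in the sample. It usually arises in employment, engineering, hydrology, insurance, reliability studies, survival analysis etc. In this article, we develop and analyze a new left truncated distribution by truncating an asymmetric and heavy tailed distribution, namely the Esscher transformed Laplace distribution, from the left so that the resulting distribution lies within $(b,\infty)$. Various distributional and reliability properties of the proposed distribution are investigated.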
A real data analysis is performed using failure time data.</p>2025-03-01T00:00:00+08:00Copyright (c) 2025 Statistics, Optimization & Information Computinghttp://www.iapress.org/index.php/soic/article/view/2133Discrimination between quantile regression models for bounded data2025-05-29T01:11:33+08:00Alla Abdul AlSattar Hammodatallahamoodat@uomosul.edu.iqZainab Tawfiq Hamidzainab.tawfeek@uomosul.edu.iqZakariya Yahya Algamalzakariya.algamal@uomosul.edu.iq<p>Most often when we use the term 'bounded', we mean a response variable with inherent upper and lower boundaries; for instance, a proportion, or a strictly positive quantity such as income. This constraint has implications for the type of model to be used, since most traditional linear models may not respect these boundaries. Parametric quantile regression with bounded data thus provides a framework for analyzing and interpreting how the predictor of interest influences the response variable over different quantiles while respecting the bounds of the theoretically assumed distribution. In this paper, several parametric quantile regression models are explored and their performance is investigated under several conditions. Our Monte Carlo simulation results suggest that some of these parametric quantile regression models can bring significant improvement relative to other existing models under certain conditions.</p>2025-03-18T00:00:00+08:00Copyright (c) 2025 Statistics, Optimization & Information Computinghttp://www.iapress.org/index.php/soic/article/view/2157Optimizing Automobile Insurance Pricing: A Generalized Linear Model Approach to Claim Frequency and Severity2025-05-29T01:11:34+08:00Mekdad Slimeslime.mekdad@gmail.comAbdellah Ould Khalslime.mekdad@gmail.com Abdelhak Zoglatslime.mekdad@gmail.com Mohammed El Kamlislime.mekdad@gmail.comBrahim Battislime.mekdad@gmail.com<p>Morocco's insurance sector, particularly auto insurance, is experiencing significant growth despite economic challenges. To remain competitive, companies must innovate and adjust their pricing to meet customer expectations and strengthen their market position. Traditionally, actuaries have used the linear model to assess the impact of explanatory variables on the frequency and severity of claims. However, this model has limitations that do not always accurately reflect the reality of claims or costs, especially in auto insurance. Our study adopted the generalized linear model (GLM) to address these shortcomings, enabling a more precise statistical analysis that better aligns with market realities. This paper examines the application of GLM to model the total claim burden of an automobile portfolio and establish an optimal rate. The steps include data processing and analysis, segmentation of rating variables, as well as the selection of appropriate distributions using statistical tests such as the Wald test and the deviance test, all performed using SAS software.</p>2025-04-03T00:00:00+08:00Copyright (c) 2025 Statistics, Optimization & Information Computing
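<p>As a hedged illustration of the frequency/severity approach summarized in the preceding abstract, the following Python sketch fits a Poisson GLM for claim frequency (with a log-exposure offset) and a Gamma GLM for claim severity, then combines them into a pure premium. The study itself used SAS; the statsmodels formulation, the simulated portfolio, and all variable names here are assumptions for illustration only.</p>
<pre><code>
# Illustrative sketch only: frequency/severity GLMs in Python's statsmodels
# (the study used SAS); the portfolio and covariates are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "driver_age": rng.integers(18, 80, n),
    "vehicle_power": rng.integers(4, 12, n),
    "exposure": rng.uniform(0.1, 1.0, n),      # fraction of the year insured
})
lam = np.exp(-2.0 - 0.01 * df.driver_age + 0.08 * df.vehicle_power) * df.exposure
df["n_claims"] = rng.poisson(lam)

# Claim frequency: Poisson GLM, log link, log-exposure offset
freq = smf.glm("n_claims ~ driver_age + vehicle_power", data=df,
               family=sm.families.Poisson(),
               offset=np.log(df.exposure)).fit()

# Claim severity: Gamma GLM with log link, fitted on policies with claims
claimants = df[df.n_claims > 0].copy()
claimants["avg_cost"] = rng.gamma(2.0, 500.0, len(claimants))
sev = smf.glm("avg_cost ~ driver_age + vehicle_power", data=claimants,
              family=sm.families.Gamma(link=sm.families.links.Log())).fit()

# Pure premium per unit exposure = expected frequency x expected severity
pure_premium = freq.predict(df, offset=np.zeros(n)) * sev.predict(df)
print(pure_premium.head())
</code></pre>
http://www.iapress.org/index.php/soic/article/view/2167Advanced Parameter Estimation for the Gompertz-Makeham Process: A Comparative Study of MMLE, PSO, CS, and Bayesian Methods2025-05-29T01:11:35+08:00Adel S. Hussainmhmdmath@hotmail.comMuthanna Subhi Sulaimanmuthanna.sulaiman@uomosul.edu.iqSura Mohamed Husseinsura.alalwlia@uomosul.edu.iqEmad A.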
Az-Zo’bieaaz2006@mutah.edu.joMohammad Tashtoushtashtoushzz@su.edu.om<p>This study investigates how to estimate the parameters of the Gompertz-Makeham Process (GMP) within non-homogeneous Poisson processes (NHPP). A Modified Maximum Likelihood Estimation (MMLE) is developed as an improvement over standard Maximum Likelihood Estimation (MLE) to resolve parameter estimation accuracy issues. The study combines artificial-intelligence optimization, through particle swarm optimization (PSO) and cuckoo search (CS), with Bayesian estimation to assess different methods. MMLE, PSO, CS and Bayesian methods are evaluated using Root Mean Square Error (RMSE), the Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC) as statistical accuracy measurements in a simulation analysis. The MMLE technique delivers better estimation precision than the PSO, CS and Bayesian methods in the performance assessment. The methodology is validated through its use in modeling operational failures at the Badoush Cement Factory and COVID-19 case occurrences in Italy, showing its capability to model failure rates alongside event occurrences. The research advances NHPP statistical estimation methods, providing a stronger analytical platform for reliability monitoring, survival model prediction and epidemiological projection. Future research on the GMP should focus on including time-dependent elements and structural dependency mechanisms to enhance the model's capability and predictive power.</p>2025-03-06T00:00:00+08:00Copyright (c) 2025 Statistics, Optimization & Information Computinghttp://www.iapress.org/index.php/soic/article/view/2293Bayesian accelerated life testing models for the log-normal and gamma distributions under dual-stresses2025-05-29T01:11:36+08:00Neill Smitneillsmit1@gmail.com<p>In this paper, a Bayesian approach to accelerated life testing models with two stressors is presented. Lifetimes are assumed to follow either a log-normal distribution or a gamma distribution, which have been mostly overlooked in the Bayesian literature when considering multiple stressors. The generalized Eyring relationship is used as the time transformation function, which allows for the use of one thermal stressor and one non-thermal stressor. Due to the mathematically intractable posteriors of these models, Markov chain Monte Carlo methods are utilized to obtain posterior samples on which to base inference. The models are applied to a real dataset, where model comparison metrics are calculated and estimates are provided of the model parameters, predictive reliability, and mean time to failure. The robustness of the models is also investigated in terms of the prior specification.</p>2025-03-21T00:00:00+08:00Copyright (c) 2025 Statistics, Optimization & Information Computinghttp://www.iapress.org/index.php/soic/article/view/2463A Novel Fréchet-Poisson Model: Properties, Applications under Extreme Reliability Data, Different Estimation Methods and Case Study on Strength-Stress Reliability Analysis2025-05-29T01:11:37+08:00Mohamed Ibrahimmohamed_ibrahim@du.edu.egS. I. Ansarisaiful.islam.ansari@gmail.comAbdullah H. Al-Nefaieaalnefaie@kfu.edu.saAhmad M. AboAlkhairaaboalkhair@kfu.edu.saMohamed S. Hamedmssh@gulf.edu.saHaitham M. Yousofhaitham.yousof@fcom.bu.edu.eg<p>A new compound extension of the Fréchet distribution is introduced and studied.
Some of its properties, including moments, incomplete moments, probability weighted moments, the moment generating function, the stress-strength reliability model, and the residual life and reversed residual life functions, are derived. The mean squared errors (MSEs) of several estimation methods, including maximum likelihood estimation (MLE), Cram\'{e}r--von Mises (CVM) estimation, bootstrapping (Boot.) estimation and the Kolmogorov estimation (KE) method, are computed in a simulation study to compare how well each method estimates the unknown parameters. Two real applications are presented for comparing the estimation methods. Another two real applications are presented for comparing the competitive models. The tail index (TIx) of the new model is estimated with the nonparametric Hill estimator using the breaking stress of carbon fibers data. Finally, a case study on reliability analysis of composite materials for aerospace applications is presented.</p>2025-04-18T00:00:00+08:00Copyright (c) 2025 Statistics, Optimization & Information Computinghttp://www.iapress.org/index.php/soic/article/view/2069New parameter of conjugate gradient method for unconstrained nonlinear optimization2025-05-29T01:11:38+08:00Mohamed Lamine Ouaouao_mohameddz@yahoo.frSamia Khelladisamiakhelladi2021@gmail.comDjamel Benterkidj_benterki@yahoo.fr<p>We are interested in the performance of nonlinear conjugate gradient methods for unconstrained optimization. In particular, we address the conjugate gradient algorithm with strong Wolfe inexact line search. Firstly, we study the descent property of the search direction of the considered conjugate gradient algorithm, based on a new direction obtained from a new parameter. The main objective of this parameter is to improve the speed of convergence of the obtained algorithm. Then, we present a complete study that shows the global convergence of this algorithm. Finally, we present comparative numerical experiments on well-known test examples to show the efficiency and robustness of our algorithm compared to other recent algorithms.</p>2025-02-24T00:00:00+08:00Copyright (c) 2025 Statistics, Optimization & Information Computing
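<p>A minimal sketch of a nonlinear conjugate gradient iteration with a strong Wolfe inexact line search, the setting studied in the preceding abstract. A Polak-Ribiere+ parameter stands in for the paper's new parameter, scipy's line_search (which enforces the strong Wolfe conditions) performs the step-size computation, and the Rosenbrock test function is an illustrative choice.</p>
<pre><code>
# Illustrative sketch only: generic nonlinear CG with strong Wolfe line search;
# the PR+ beta below is a stand-in, not the paper's new parameter.
import numpy as np
from scipy.optimize import line_search, rosen, rosen_der

def nonlinear_cg(f, grad, x0, tol=1e-6, max_iter=500):
    x = x0.astype(float)
    g = grad(x)
    d = -g                                   # initial direction: steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha = line_search(f, grad, x, d)[0]   # strong Wolfe conditions
        if alpha is None:                       # line search failed: restart
            d, alpha = -g, 1e-4
        x = x + alpha * d
        g_new = grad(x)
        beta = max(g_new @ (g_new - g) / (g @ g), 0.0)   # PR+ parameter
        d = -g_new + beta * d
        g = g_new
    return x

print(nonlinear_cg(rosen, rosen_der, np.array([-1.2, 1.0])))  # approx [1, 1]
</code></pre>
http://www.iapress.org/index.php/soic/article/view/2182Blockchain technology for Green Manufacturing: A Systematic Literature Review on applications, drivers, enablers and challenges2025-05-29T01:11:38+08:00Clement Regis TUYISHIMEc.regis-tuyishime@ueuromed.orgAsmae ABADIa.abadi@ueuromed.orgChaimae ABADI chaimae.abadi@gmail.comMohammed ABADIabadi.s.mohammed@gmail.com<p>Blockchain technology (BCT) is a promising technology for Industry 4.0 that enhances sustainability, traceability, and resilience for Green Manufacturing (GM) in the value chain. This literature study evaluates the existing literature on the application of BCT in GM industries, with insight into the drivers, enablers, and challenges of BCT. The review is not limited to highlighting the contributions and applications of blockchain to eco-friendly manufacturing; it also takes into account the role of emerging technologies applicable to GM in Industry 4.0. In conducting this review, 113 qualitative articles were selected and analyzed in depth using bibliometric and content analysis, based on their contents, year of publication, keywords, the methodology used, and recommendations of the authors.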
The results highlight the connection between BCT and its associated technologies, including Artificial Intelligence (AI) and the Internet of Things (IoT), for enhancing GM, accounting for the drivers, enablers, and challenges of implementing BCT in GM.</p> <p>Our literature review concludes that BCT is a promising technology in this context since it offers two main capabilities, transaction transparency and robustness, which are mandatory for GM implementation. In addition, we conclude that the majority of existing research works focus only on one or two aspects of GM and are restricted to specific industries or use cases, which limits their applicability. The analysis also identifies gaps related to standardization, Industry 4.0 implications, and the adoption of BCT.</p>2025-03-29T00:00:00+08:00Copyright (c) 2025 Statistics, Optimization & Information Computinghttp://www.iapress.org/index.php/soic/article/view/2096On derivability criteria of h-Convex Functions2025-05-29T01:11:40+08:00Mousaab Bouafiamousaab84@yahoo.fr Adnan Yassineadnan.yassine@univ-lehavre.frThabet Abdeljawadtabdeljawad@psu.edu.sa<p>This study pursues two main objectives. First, we aim to generalize the Criterion of Derivability for convex functions, which posits that for a specific type of mathematical function defined on an interval, the function is convex if and only if its rate of change (first derivative) is monotonically increasing across that interval. We aim to expand this concept to encompass the realm of 'h-convexity', which generalizes convexity for nonnegative functions by allowing a function h to act on the right-hand side of the convexity inequality.</p> <p>Additionally, we delve into the second criterion of convexity, which asserts that for a similar type of function on an interval, the function is convex if and only if its second derivative remains non-negative across the entire interval, adhering to the conventional definition of convexity. Our goal is to reinterpret this criterion within the framework of 'h-convexity'. Furthermore, we prove that if a certain non-zero function defined on the interval [0,1] is non-negative, concave, and bounded above by the identity function, then this function fixes the end point of the interval if and only if it is the identity function. Finally, we respond to the conjecture given by Mohammad W. Alomari (see [6]), showing by means of two counterexamples that it is incorrect.</p>2025-03-21T00:00:00+08:00Copyright (c) 2025 Statistics, Optimization & Information Computing
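<p>To make the h-convexity condition in the preceding abstract concrete, the following Python sketch numerically checks the defining inequality f(tx + (1 - t)y) ≤ h(t)f(x) + h(1 - t)f(y) over a sample grid. This is a generic illustration of the definition only, not the paper's derivability criteria; the test functions and grid size are arbitrary choices.</p>
<pre><code>
# Illustrative sketch only: a numerical check of the h-convexity inequality,
# here with h(t) = t, i.e. ordinary convexity as a special case.
import numpy as np

def is_h_convex(f, h, a, b, n_grid=50):
    """Return False if any sampled triple (x, y, t) violates the inequality."""
    xs = np.linspace(a, b, n_grid)
    ts = np.linspace(0.0, 1.0, n_grid)
    for x in xs:
        for y in xs:
            for t in ts:
                lhs = f(t * x + (1 - t) * y)
                rhs = h(t) * f(x) + h(1 - t) * f(y)
                if lhs > rhs + 1e-12:        # tolerance for rounding error
                    return False
    return True

print(is_h_convex(lambda x: x**2, lambda t: t, 0.0, 2.0))        # True: convex
print(is_h_convex(lambda x: np.sqrt(x), lambda t: t, 0.0, 2.0))  # False: concave
</code></pre>
http://www.iapress.org/index.php/soic/article/view/2170An Application of Ensemble Stacking in Machine Learning to Predict Short-term Electricity Demand in South Africa2025-05-29T01:11:40+08:00Claris ShokoShokoc@ub.ac.bwCaston Sigaukecaston.sigauke@univen.ac.zaKatleho Makatjaneshokoc@ub.ac.bw<p>The massive increase in collected data and the need for data mining and analysis have prompted the need to improve the accuracy and stability of traditional data mining and learning algorithms. This study proposes a robust stacking-ensemble algorithm for predicting the hourly electricity demand in South Africa. The structure of the proposed model is in two layers: the base model and the meta-model. Four machine learning models, that is, the gradient boosting machine (GBM), the deep neural network (DNN), the generalised linear model (GLM), and the random forest (RF), make up the base models.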
Output from the base models is integrated using ensemble stacking to form the meta-model. The stacking-ensemble (SE) model predicts South Africa's hourly electricity demand. The performance of the models is tested over different forecasting horizons. The prediction performance of the stacking-ensemble model is compared with the prediction performance of each of the base models using the root mean square error (RMSE), the mean absolute error (MAE), and the mean square error (MSE). In addition, the Giacomini-White test is used to identify the dominant model. Results showed that the RF model produced the most accurate predictions in all the forecasting horizons. The order of dominance is as follows: RF > SE > GBM > GLM. Thus, RF demonstrates the highest predictive capability, dominating the other models. The stacking-ensemble model produced the second most accurate results, with its results in the shortest forecasting horizon almost equal to those of the RF model. Thus, in this context, the stacking ensemble performs better than three of the four base models. The proposed model produces a reasonable and accurate prediction of hourly electricity demand, which is strategically significant in planning and formulating electricity load-shedding strategies in South Africa or any other country.</p>2025-03-28T00:00:00+08:00Copyright (c) 2025 Statistics, Optimization & Information Computinghttp://www.iapress.org/index.php/soic/article/view/2210Reflexive Edge Strength in Certain Graphs with Dominant Vertex2025-05-29T01:11:42+08:00Marsidimarsidiarin@gmail.comDafikd.dafik@unej.ac.idSusantosusanto.fkip@unej.ac.idArika Indah Kristianaarika.fkip@unej.ac.idIka Hesti Agustinikahesti.fmipa@unej.ac.idM Venkatachalamvenkatmaths@kongunaducollege.ac.in<p>Consider a simple, connected graph $G$ with edge set $E(G)$ and vertex set $V(G)$. A total $k$-labeling consists of two functions $f_e$, from the edge set to the first $k_e$ natural numbers, and $f_v$, from the vertex set to the non-negative even numbers up to $2k_v$, where $k=\max\{k_e, 2k_v\}$. The total $k$-labeling is an \textit{edge irregular reflexive $k$-labeling} of the graph $G$ if, for every two different edges $x_1x_2$ and $x_1'x_2'$ of $G$, $wt(x_1x_2) \neq wt(x_1'x_2')$, where $wt(x_1x_2)=f_v(x_1)+f_e(x_1x_2)+f_v(x_2)$. The reflexive edge strength of a graph $G$, denoted by $res(G)$, is defined as the minimal $k$ for which $G$ admits an edge irregular reflexive $k$-labeling. In this work, $res(G)$ is determined for the book, triangular book, Jahangir, and helm graphs.</p>2025-03-28T00:00:00+08:00Copyright (c) 2025 Statistics, Optimization & Information Computing
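<p>As a small illustration of the labeling condition defined in the preceding abstract, the following Python sketch checks whether candidate labelings form an edge irregular reflexive k-labeling, i.e., whether all edge weights wt(uv) = f_v(u) + f_e(uv) + f_v(v) are pairwise distinct. The toy path graph and its labels are hypothetical examples, not constructions from the paper.</p>
<pre><code>
# Illustrative sketch only: verifying an edge irregular reflexive k-labeling.
def is_edge_irregular_reflexive(edges, f_v, f_e, k_e, k_v):
    # validity of the labels themselves: f_e in 1..k_e, f_v even in 0..2*k_v
    if any(not (1 <= f_e[e] <= k_e) for e in edges):
        return False
    if any(v % 2 != 0 or not (0 <= v <= 2 * k_v) for v in f_v.values()):
        return False
    weights = [f_v[u] + f_e[(u, w)] + f_v[w] for (u, w) in edges]
    return len(weights) == len(set(weights))   # all edge weights distinct

# Toy example: a path on 3 vertices with k = max(k_e, 2*k_v) = 2
edges = [("a", "b"), ("b", "c")]
f_v = {"a": 0, "b": 0, "c": 2}                 # even vertex labels
f_e = {("a", "b"): 1, ("b", "c"): 1}           # edge labels in 1..k_e
print(is_edge_irregular_reflexive(edges, f_v, f_e, k_e=1, k_v=1))  # True
</code></pre>
http://www.iapress.org/index.php/soic/article/view/2247Intelligent operation of photovoltaic generators in isolated AC microgrids to reduce costs and improve operating conditions2025-05-29T01:11:43+08:00Catalina Díaz Cácerescatdiaz17@alumnos.utalca.clLuis Fernando Grisales Noreñaluis.grisales@utalca.clBrandon Cortés-Caicedobrandon.cortes@pascualbravo.edu.coJhony Andrés Guzmán-Henaojhonyguzman72415@correo.itm.edu.coRubén Iván Bolañosrubenbolanos@itm.edu.coOscar Danilo Montoya Giraldoodmontoyag@udistrital.edu.co<p>This paper addresses the challenges associated with optimizing the operation of photovoltaic distributed generators in isolated electrical microgrids. With the aim of reducing energy production and system maintenance costs and improving the microgrid operating conditions, a master--slave methodology is proposed.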
In the master stage, the problem of intelligently injecting active power from photovoltaic generators is solved using the continuous versions of four optimization techniques: the Monte Carlo method, the Chu \& Beasley genetic algorithm, the population genetic algorithm, and the particle swarm optimizer. Meanwhile, the slave stage evaluates the solutions proposed by the master stage by solving an hourly power flow problem based on the successive approximations method. The proposed solution methodologies are validated in two test scenarios of 10 and 27 buses to select the one with the best performance. Then, the most efficient methodology is implemented in a real isolated grid located in Huatacondo, Chile. This validation aims to assess its ability to optimize the operation of photovoltaic generators in isolated microgrids, considering variations in power generation and demand across the different seasons of the year. The study underscores the importance of financial considerations in achieving an efficient and economically viable operation of photovoltaic generation systems. Furthermore, it provides valuable input to successfully integrate non-conventional renewable energy sources into isolated electrical microgrids.</p>2025-04-13T00:00:00+08:00Copyright (c) 2025 Statistics, Optimization & Information Computinghttp://www.iapress.org/index.php/soic/article/view/2288Function Representation in Hilbert Spaces Using Haar Wavelet Series2025-05-29T01:27:19+08:00Andres Felipe Cameloafkamelo@utp.edu.co Carlos Alberto Ramírezcaramirez@utp.edu.coJosé Rodrigo Gonzálezjorodryy@utp.edu.co<p>This work explores the application of integral transforms using scaling and Haar wavelet functions to numerically represent a function \( f(t) \). It is based on defining a vector space where any function can be represented as a linear combination of orthogonal basis functions. In this case, the Haar wavelet transform is used, employing Haar functions generated from scaling functions. First, the fundamental mathematical concepts, such as Hilbert spaces and orthogonality, necessary for understanding the Haar wavelet transform are presented. Then, the construction of the scaling and Haar wavelet functions and the process for determining the coefficients for function representation are detailed. The methodology is applied to the function \( f(t) = t^2 \) over the interval \( t \in [-3, 3] \), showing how to calculate the series coefficients for different resolution levels. As the resolution level increases, the approximation of \( f(t) \) improves significantly. Furthermore, the representation of the function \( f(t) = \sin(t) \) over the interval \( t \in [-6, 6] \) using the Haar wavelet series is presented.</p>2025-03-06T00:00:00+08:00Copyright (c) 2025 Statistics, Optimization & Information Computing
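<p>A minimal sketch of the coefficient computation described in the preceding abstract: project f(t) = t^2 on [-3, 3] onto an orthonormal Haar basis via numerical inner products and reconstruct the truncated series. The discretization and resolution level are illustrative assumptions; the paper's exact construction may differ.</p>
<pre><code>
# Illustrative sketch only: truncated Haar wavelet series for f(t) = t**2,
# with coefficients computed as discrete inner products on a mapped unit interval.
import numpy as np

def haar_mother(u):
    # Haar mother wavelet on [0, 1)
    return np.where((u >= 0) & (u < 0.5), 1.0,
                    np.where((u >= 0.5) & (u < 1.0), -1.0, 0.0))

def haar_series(f, a, b, max_level, n=4096):
    t = np.linspace(a, b, n, endpoint=False)
    u = (t - a) / (b - a)                       # map [a, b) onto the unit interval
    fu = f(t)
    du = 1.0 / n
    approx = np.full(n, fu.sum() * du)          # scaling (mean) term
    for j in range(max_level):                  # resolution levels
        for k in range(2 ** j):
            psi = 2 ** (j / 2) * haar_mother(2 ** j * u - k)
            c = (fu * psi).sum() * du           # coefficient <f, psi_{j,k}>
            approx += c * psi
    return t, approx

t, approx = haar_series(lambda x: x ** 2, -3.0, 3.0, max_level=6)
print("max abs error:", np.abs(approx - t ** 2).max())  # shrinks as levels grow
</code></pre>
http://www.iapress.org/index.php/soic/article/view/2302Fuzzy Volterra Integral Equation Approximate Solution Via Optimal Homotopy Asymptotic Methods 2025-05-29T01:11:45+08:00Alzubi Muath Talal Mahmoudmuathalzubi@student.usm.myFarah Aini Abdullahfarahaini@usm.myAli Fareed Jameelhomotopy33@gmail.comAdila Aida Azaharadilaazahar@usm.my<p>The field of fuzzy integral equations (FIEs) is significant for modeling complex, time-delayed, and uncertain physical phenomena.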
Nevertheless, the majority of current solutions for FIEs encounter considerable challenges, such as the inability to manage intricate fuzzy functions, stringent assumptions regarding the forms of fuzzy operations utilized, and numerical instability in highly nonlinear problems. Moreover, traditional methods are limited in their ability to produce precise or reliable results for practical applications, and when they can, they incur substantial computational cost. These challenges underscore the demand for more effective and efficient methodologies. This study addresses that demand by developing two approximate analytical techniques to solve FIEs, namely the optimal homotopy asymptotic method (OHAM) and the multistage optimal homotopy asymptotic method (MOHAM). A novel iteration of fuzzy OHAM and MOHAM is introduced by integrating the fundamental concepts of these methodologies with fuzzy set theory and optimization techniques. OHAM and MOHAM are then further formulated to solve second-kind linear Volterra fuzzy integral equations (VFIEs). These methods are named the fuzzy Volterra optimal homotopy asymptotic method (FV-OHAM) and the fuzzy Volterra multistage optimal homotopy asymptotic method (FV-MOHAM), respectively. On two linear examples, FV-MOHAM and FV-OHAM generated significantly more accurate results than other existing methods. A thorough assessment is performed to evaluate their effectiveness and practical use, potentially aiding in solving complex problems across several scientific and engineering fields.</p>2025-03-10T00:00:00+08:00Copyright (c) 2025 Statistics, Optimization & Information Computinghttp://www.iapress.org/index.php/soic/article/view/2307Numerical Solution of the Lotka-Volterra Stochastic Differential Equation2025-05-29T01:28:57+08:00Erisbey Marín Cardonaerics10-@utp.edu.coCarlos Alberto Ramírez-Vanegascaramirez@utp.edu.coJosé Rodrigo González Granadajorodryy@utp.edu.co<p>This paper presents the modeling of the Lotka-Volterra stochastic differential equation and introduces two numerical methods to obtain an approximate solution of this stochastic model: the Euler-Maruyama method and the Milstein method. Additionally, a methodology is presented to obtain the parameters of the predator-prey model equation from empirical data collected over a fixed period of time.</p>2025-03-04T00:00:00+08:00Copyright (c) 2025 Statistics, Optimization & Information Computing
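<p>A minimal sketch of the Euler-Maruyama scheme named in the preceding abstract, applied to a stochastic Lotka-Volterra system with multiplicative noise; the drift/diffusion form, parameter values, and step size are assumptions for illustration, not the paper's calibrated model. The Milstein method would add the correction term 0.5 * sigma^2 * X * (dW^2 - dt) to each update.</p>
<pre><code>
# Illustrative sketch only: Euler-Maruyama for a stochastic Lotka-Volterra system,
#   dX = X*(alpha - beta*Y) dt + sigma1*X dW1,
#   dY = Y*(delta*X - gamma) dt + sigma2*Y dW2.
import numpy as np

def euler_maruyama_lv(x0, y0, T=20.0, n=20000, alpha=1.1, beta=0.4,
                      delta=0.1, gamma=0.4, sigma1=0.05, sigma2=0.05, seed=1):
    rng = np.random.default_rng(seed)
    dt = T / n
    x, y = np.empty(n + 1), np.empty(n + 1)
    x[0], y[0] = x0, y0
    for i in range(n):
        dw1, dw2 = rng.normal(0.0, np.sqrt(dt), 2)   # Brownian increments
        x[i + 1] = x[i] + x[i] * (alpha - beta * y[i]) * dt + sigma1 * x[i] * dw1
        y[i + 1] = y[i] + y[i] * (delta * x[i] - gamma) * dt + sigma2 * y[i] * dw2
    return x, y

prey, pred = euler_maruyama_lv(10.0, 5.0)
print("final prey/predator:", prey[-1], pred[-1])
</code></pre>
http://www.iapress.org/index.php/soic/article/view/2397Jackson’s theorem for the Kontorovich-Lebedev-Clifford transform2025-05-29T01:11:47+08:00Yassine FANTASSEfantasse.yassine@gmail.comAbdellatif Akhlidjfantasse.yassine@gmail.com<p>In this paper, by using the Kontorovich-Lebedev-Clifford translation operators studied recently by A. Prasad and U.K.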
Mandal (The Kontorovich-Lebedev-Clifford transform, Filomat 35:14 (2021), 4811–4824), we prove Jackson's theorem associated with the Kontorovich-Lebedev-Clifford transform.</p>2025-03-29T00:00:00+08:00Copyright (c) 2025 Statistics, Optimization & Information Computinghttp://www.iapress.org/index.php/soic/article/view/2398From Extraction to Reasoning: A Systematic Review of Algorithms in Multi-Document Summarization and QA2025-05-29T01:11:48+08:00Emmanuel Efosa-Zuwatheefosazuwa@gmail.comOlufunke Oladipupofunke.oladipupo@covenantuniversity.edu.ngJelili Oyeladejelili.oyelade@covenantuniversity.edu.ng<p>Multi-document summarization and question-answering (QA) have become pivotal tasks in Natural Language Processing (NLP), facilitating information extraction and decision-making across various domains. This systematic review explores the evolution of algorithms used in these tasks, providing a comprehensive taxonomy of traditional, modern, and emerging approaches. We examine the progression from early extractive methods, such as TF-IDF and TextRank, to the advent of neural models like BERT, GPT, and T5, and the integration of retrieval-augmented generation (RAG) for QA. Hybrid models combining traditional techniques with neural approaches and graph-based methods are also discussed. Through a detailed analysis of algorithmic frameworks, we identify key strengths, weaknesses, and challenges in current methodologies. Additionally, the review highlights recent trends such as unified models, multimodal algorithms, and the application of reinforcement learning in summarization and QA tasks. We also explore the real-world relevance of these algorithms in sectors such as news, legal, medical, and education. The paper concludes by outlining open research directions, proposing new evaluation frameworks, and emphasizing the need for cross-task annotations and ethical considerations in future algorithmic development.</p>2025-03-15T00:00:00+08:00Copyright (c) 2025 Statistics, Optimization & Information Computinghttp://www.iapress.org/index.php/soic/article/view/2483Randomized Algorithms for Low-Rank Tensor Completion in TT-Format2025-05-29T01:11:49+08:00Yihao Pan294002507@qq.comCongyi Yuyucy944826@hdu.edu.cnChaoping ChenChaopingC2@gmail.comGaohang Yumaghyu@163.com<p>Tensor completion is a crucial technique for filling in missing values in multi-dimensional data. It relies on the assumption that such datasets have intrinsic low-rank properties, leveraging this to reconstitute the dataset using low-rank decomposition or other strategies. Traditional approaches often lack computational efficiency, particularly with singular value decomposition (SVD) for large-scale tensors. Furthermore, fixed-rank SVD methods struggle with determining a suitable initial rank when data are incomplete. This paper introduces two novel randomized algorithms designed for low-rank tensor completion in tensor train (TT) format, named TTrandPI and FPTT. The TTrandPI algorithm integrates randomized TT decomposition with power iteration techniques, enhancing computational efficiency and accuracy by improving spectral decay and minimizing tail energy build-up. Meanwhile, the FPTT algorithm utilizes a fixed-precision low-rank approximation approach that adaptively selects tensor ranks based on error tolerance levels, thus reducing the dependence on a predetermined rank.
Numerical experiments on synthetic data, color images, and video sequences show that both algorithms outperform several existing methods.</p>2025-04-27T00:00:00+08:00Copyright (c) 2025 Statistics, Optimization & Information Computinghttp://www.iapress.org/index.php/soic/article/view/2617Hybrid Butterfly-Grey Wolf Optimization (HB-GWO): A Novel Metaheuristic Approach for Feature Selection in High-Dimensional Data2025-06-01T11:18:34+08:00Mohammed Aly mohammed-alysalem@eru.edu.egAbdullah Shawan Alotaibia.shawan@su.edu.sa<p>Feature selection is a critical preprocessing step in high-dimensional data analysis, aiming to enhance model performance by eliminating irrelevant and redundant features. This paper introduces a novel hybrid metaheuristic algorithm, the Hybrid Butterfly-Grey Wolf Optimization (HB-GWO), which synergizes the global exploration capabilities of the Butterfly Optimization Algorithm (BOA) with the local exploitation strengths of the Grey Wolf Optimizer (GWO) to achieve an effective balance between exploration and exploitation in feature selection tasks. The algorithm incorporates an adaptive switching mechanism that dynamically adjusts the contribution of BOA and GWO throughout the optimization process. HB-GWO was evaluated on multiple benchmark datasets, including Breast Cancer, Madelon, Colon Cancer, and Arrhythmia, using a Random Forest classifier as the evaluation model. Experimental results demonstrate that HB-GWO consistently outperforms state-of-the-art metaheuristic algorithms (GA, PSO, BOA, GWO) in classification accuracy, feature reduction rate, and computational efficiency. An ablation study further confirms the contribution of each component of the hybrid algorithm. These findings position HB-GWO as a robust and efficient method for feature selection in high-dimensional data analysis.</p>2025-05-28T00:00:00+08:00Copyright (c) 2025 Statistics, Optimization & Information Computing
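<p>To illustrate the wrapper-style evaluation that drives metaheuristic feature selection such as HB-GWO in the preceding abstract, the following Python sketch scores binary feature masks with a Random Forest and a fitness that blends cross-validated accuracy with feature reduction. Plain random search stands in for the butterfly/grey-wolf position updates, and the weight w is an assumed value, not the paper's setting.</p>
<pre><code>
# Illustrative sketch only: the fitness evaluation behind wrapper-based
# metaheuristic feature selection (random search stands in for HB-GWO).
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)
rng = np.random.default_rng(0)

def fitness(mask, w=0.99):
    """Weighted blend of accuracy and feature reduction (higher is better)."""
    if not mask.any():
        return 0.0
    acc = cross_val_score(RandomForestClassifier(n_estimators=50, random_state=0),
                          X[:, mask], y, cv=3).mean()
    return w * acc + (1 - w) * (1 - mask.sum() / mask.size)

best_mask, best_fit = None, -np.inf
for _ in range(20):                  # a metaheuristic would evolve these masks
    mask = rng.random(X.shape[1]) < 0.5
    f = fitness(mask)
    if f > best_fit:
        best_mask, best_fit = mask, f
print(f"selected {best_mask.sum()}/{best_mask.size} features, fitness={best_fit:.4f}")
</code></pre>
http://www.iapress.org/index.php/soic/article/view/2524Forecasting Scientific Impact: A Model for Predicting Citation Counts2025-05-29T01:11:51+08:00Bao T. Nguyenbaont@ueh.edu.vnThinh T. Nguyenthinhntt@ueh.edu.vn<p>Forecasting the citation counts of scientific papers is a challenging task, particularly when utilizing textual data such as author names, paper titles, abstracts, and affiliations. This task diverges from conventional regression problems involving numerical or categorical inputs, as it demands the processing of complex, high-dimensional text features. Traditional regression techniques, including Linear Regression, Polynomial Regression, and Decision Tree Regression, often fail to encapsulate the semantic intricacies of textual data and are susceptible to overfitting due to the expansive feature space. In the context of Vietnam, where research output is rapidly growing yet underexplored in predictive modeling, these limitations are especially pronounced. To tackle these issues, we leverage advanced Natural Language Processing (NLP) techniques, employing Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) networks. These deep learning models are adept at handling sequential data, capturing long-range dependencies, and preserving contextual nuances, rendering them well-suited for text-based citation prediction. We conducted experiments using a dataset of academic papers authored by Vietnamese researchers across diverse disciplines, sourced from publications featuring Vietnamese author contributions.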
The dataset includes features such as author names, titles, abstracts, and affiliations, reflecting the unique characteristics of Vietnam's research landscape. We compared the performance of LSTM and GRU models against traditional machine learning approaches, evaluating prediction accuracy with metrics like Mean Absolute Error (MAE) and Root Mean Squared Error (RMSE). The results reveal that LSTM and GRU models substantially outperform their traditional counterparts. The LSTM model achieved an RMSE of 8.54 and an MAE of 8.1, while the GRU model yielded an RMSE of 8.32 and an MAE of 7.83, demonstrating robust predictive capabilities. In contrast, traditional models such as Decision Tree Regression and Linear Regression exhibited higher error rates, with RMSEs exceeding 12.0. These findings underscore the efficacy of deep learning in forecasting citation counts from textual data, particularly for Vietnamese research outputs, and highlight the potential of LSTM and GRU models to uncover intricate patterns driving scientific impact in emerging research ecosystems.</p>2025-05-28T13:35:43+08:00Copyright (c) 2025 Statistics, Optimization & Information Computinghttp://www.iapress.org/index.php/soic/article/view/2029A New Approach Of Multiple Merger And Acquisition (M &A) In AR Time Series Model Under Bayesian Framework2025-05-29T01:11:53+08:00Jitendra Kumarvjitendrav@gmail.comMohd Mudassirmudassir94stats@gmail.com<p>Merger and acquisition (M\&A) concepts play a pivotal role in fostering economic development and are extensively examined worldwide across various empirical contexts, notably in the banking sector. The primary objective of this study is to introduce a novel approach termed the multiple-merger autoregressive (MM-AR) model, aimed at providing insights into the effects of mergers on model parameters and behaviour. Initially, we propose a comprehensive estimation framework utilizing posterior parameters within the Bayesian paradigm, incorporating diverse loss functions to enhance robustness. A unique feature of this model is that it also handles the situation where multiple series are merged into the same observed series at various time points. The Bayesian estimation approach is used to assess the MM-AR model parameters in terms of MSE, AB, and AE, with good results. Under Bayesian estimation, SELF performs better than the other estimators for most of the parameters. Subsequently, we compute the Bayes factor to quantify the impact of merged series on the overall model dynamics. To further elucidate the efficacy of the proposed model, we conduct both simulation-based analyses and real-world applications focusing on the Indian banking sector. Through this research, we aim to offer valuable insights into the implications of M\&A activities. For the data analysis, we used PCR banking data of ICICI Bank Ltd. for simulation and empirical analysis to verify the model's applicability and purpose.</p>2025-05-26T00:00:00+08:00Copyright (c) 2025 Statistics, Optimization & Information Computinghttp://www.iapress.org/index.php/soic/article/view/1569On the Use of Yeo-Johnson Transformation in the Functional Multivariate Time Series2025-05-29T01:11:54+08:00Sameera Abdulsalam Othmansameera.othman@uod.acHaithem Taha Mohammed Alisameera.othman@uod.ac<p>Box-Cox and Yeo-Johnson transformation models are utilized in this paper, together with density functions, to improve multivariate time series forecasting.
The K-Nearest Neighbor function is used in our model, with automatic bandwidth selection using a cross-validation approach and semi-metrics used to measure the proximity of functional data. Then, to decorrelate the multivariate response variables, we use principal component analysis. The methodology is applied to two time series examples with multiple responses. The first example includes three time series datasets of the monthly averages of Humidity (H), Rainfall (R) and Temperature (T). Simulation studies are provided in the second example. Mean square errors of predicted values were calculated to show forecast efficiency. The results show that applying multivariate nonparametric analysis to stationary datasets transformed using the Yeo-Johnson model is more efficient than applying the univariate nonparametric analysis to each response independently.</p>2025-04-09T00:00:00+08:00Copyright (c) 2025 Statistics, Optimization & Information Computinghttp://www.iapress.org/index.php/soic/article/view/2442Bayesian Premium Estimators for NXLindley Model Under Different Loss Functions2025-05-29T01:23:15+08:00Ahmed Sadounsaadounahmed88@gmail.comImen Ouchenimen.ouchen@univ.dzFarouk Metirifarouk.metiri@univ.dz<p>The conditional distribution of (X|θ) is regarded as the NXLindley distribution. This study is centered on the estimation of the Bayesian premium using the symmetric squared error loss function and the asymmetric Linex loss function, employing the extension of Jeffreys' prior as the non-informative prior and the Gamma prior as the informative prior. Owing to its complexity and lack of linearity, we rely on a numerical approximation for establishing the Bayesian premium. A simulation and comparison study with several sample sizes is presented.</p>2025-05-28T14:35:00+08:00Copyright (c) 2025 Statistics, Optimization & Information Computing
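<p>As a hedged numerical illustration of the Bayesian premium computations described in the preceding abstract: under the squared error loss (SELF) the Bayes estimate of the premium is its posterior mean, while under the Linex loss with asymmetry a it is -(1/a) ln E[exp(-a P(theta)) | data]. The sketch below uses an exponential likelihood with a Gamma prior as a stand-in for the NXLindley model; the data and hyperparameters are hypothetical.</p>
<pre><code>
# Illustrative sketch only: Bayesian premium via numerical integration under
# squared error (SELF) and Linex losses, with an exponential-Gamma stand-in
# for the paper's NXLindley model.
import numpy as np
from scipy.integrate import quad
from scipy.stats import gamma

data = np.array([0.8, 1.2, 0.5, 2.1, 1.7])     # hypothetical claim sizes
a0, b0 = 2.0, 1.0                               # Gamma prior hyperparameters

def posterior_pdf(theta):
    # Exponential(theta) likelihood times Gamma(a0, b0) prior, unnormalized
    return theta ** len(data) * np.exp(-theta * data.sum()) * gamma.pdf(theta, a0, scale=1 / b0)

norm, _ = quad(posterior_pdf, 0, np.inf)

def post_mean(g):
    val, _ = quad(lambda th: g(th) * posterior_pdf(th), 0, np.inf)
    return val / norm

# The premium here is the risk's expected claim, E[X | theta] = 1/theta.
premium_self = post_mean(lambda th: 1 / th)                  # squared error loss
a = 0.5                                                       # Linex asymmetry
premium_linex = -np.log(post_mean(lambda th: np.exp(-a / th))) / a
print(f"SELF premium: {premium_self:.4f}, Linex premium: {premium_linex:.4f}")
</code></pre>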