Statistics, Optimization & Information Computing
http://www.iapress.org/index.php/soic
<p><em><strong>Statistics, Optimization and Information Computing</strong></em> (SOIC) is an international refereed journal dedicated to the latest advances in statistics, optimization and their applications in information sciences. Topics of interest include (but are not limited to): </p> <p>Statistical theory and applications</p> <ul> <li class="show">Statistical computing, Simulation and Monte Carlo methods, Bootstrap, Resampling methods, Spatial statistics, Survival analysis, Nonparametric and semiparametric methods, Asymptotics, Bayesian inference and Bayesian optimization</li> <li class="show">Stochastic processes, Probability, Statistics and applications</li> <li class="show">Statistical methods and modeling in life sciences including biomedical sciences, environmental sciences and agriculture</li> <li class="show">Decision theory, Time series analysis, High-dimensional multivariate integrals, Statistical analysis in marketing, business, finance, insurance, economics, social science, etc.</li> </ul> <p> Optimization methods and applications</p> <ul> <li class="show">Linear and nonlinear optimization</li> <li class="show">Stochastic optimization, Statistical optimization, Markov chains, etc.</li> <li class="show">Game theory, Network optimization and combinatorial optimization</li> <li class="show">Variational analysis, Convex optimization and nonsmooth optimization</li> <li class="show">Global optimization and semidefinite programming</li> <li class="show">Complementarity problems and variational inequalities</li> <li class="show"><span lang="EN-US">Optimal control: theory and applications</span></li> <li class="show">Operations research, Optimization and applications in management science and engineering</li> </ul> <p>Information computing and machine intelligence</p> <ul> <li class="show">Machine learning, Statistical learning, Deep learning</li> <li class="show">Artificial intelligence, Intelligent computation, Intelligent control and optimization</li> <li 
class="show">Data mining, Data analysis, Cluster computing, Classification</li> <li class="show">Pattern recognition, Computer vision</li> <li class="show">Compressive sensing and sparse reconstruction</li> <li class="show">Signal and image processing, Medical imaging and analysis, Inverse problems and imaging sciences</li> <li class="show">Genetic algorithms, Natural language processing, Expert systems, Robotics, Information retrieval and computing</li> <li class="show">Numerical analysis and algorithms with applications in computer science and engineering</li> </ul>
International Academic Press | en-US | Statistics, Optimization & Information Computing | 2311-004X
<span>Authors who publish with this journal agree to the following terms:</span><br /><br /><ol type="a"><li>Authors retain copyright and grant the journal right of first publication with the work simultaneously licensed under a <a href="http://creativecommons.org/licenses/by/3.0/" target="_new">Creative Commons Attribution License</a> that allows others to share the work with an acknowledgement of the work's authorship and initial publication in this journal.</li><li>Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the journal's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgement of its initial publication in this journal.</li><li>Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work (See <a href="http://opcit.eprints.org/oacitation-biblio.html" target="_new">The Effect of Open Access</a>).</li></ol>
The Weighted Xgamma Model: Estimation, Risk Analysis and Applications
http://www.iapress.org/index.php/soic/article/view/1677
<p>The weighted xgamma distribution, a new weighted two-parameter lifetime distribution, is introduced in this study. Theoretical characteristics of this model are deduced and thoroughly examined, including the quantile function, extreme values, moments, the moment generating function, cumulative entropy, and cumulative residual entropy. Some classical estimation methods, such as maximum likelihood, weighted least squares, Anderson-Darling and Cramér-von Mises, are considered. Simulation experiments are performed to compare the estimation methods. Four real-life data sets are finally examined to demonstrate the viability of this model. Four key risk indicators are defined and analyzed under the maximum likelihood method. A risk analysis for the exceedances of flood peaks is presented.</p>Majid HashempourMorad AlizadehHaitham Yousof
Copyright (c) 2024 Statistics, Optimization & Information Computing
2024-08-31 | 12(6) | 1573-1600 | 10.19139/soic-2310-5070-1677
Reliability estimation of a multicomponent stress-strength model based on copula function under progressive first failure censoring
http://www.iapress.org/index.php/soic/article/view/1894
<p>In reliability analysis of a multicomponent stress-strength model, most studies have assumed independence between the stress and strength variables. However, this assumption may not be realistic. To account for dependency, a copula approach can be used. Despite its importance, only a few studies have considered this case, and usually only for complete samples. Observing the failures of all units may be difficult due to cost and time limitations. Recently, the progressive first failure censoring scheme has attracted attention in the literature due to its ability to save time and money. To the best of our knowledge, the dependent multicomponent stress-strength model under progressive first failure censoring has not been considered yet. In this article, we derive the likelihood function for a progressive first failure censored sample under a copula-based multicomponent stress-strength model. A simulation study is performed and a real dataset is analyzed to test the applicability of the model. Maximum likelihood estimates, asymptotic confidence intervals and bootstrap confidence intervals are obtained. The results illustrate that the proposed censoring scheme under the copula provides a good estimate of the reliability.</p>Ola Abuelamayem
Copyright (c) 2024 Statistics, Optimization & Information Computing
2024-06-29 | 12(6) | 1601-1611 | 10.19139/soic-2310-5070-1894
Improved estimation of the sensitive proportion using a new randomization technique and the Horvitz–Thompson type estimator
http://www.iapress.org/index.php/soic/article/view/1807
<p>Randomized response techniques efficiently collect data on sensitive subjects while protecting individual privacy. This paper introduces a new randomization technique in the additive scrambled model so that privacy is well preserved and the efficiency of the estimator of the sensitive population proportion is improved. A Horvitz–Thompson type estimator is presented as an unbiased estimator of the sensitive proportion of the population, and its convergence to the normal distribution is established via the entropy of the inclusion indicators under Poisson sampling. Finally, using the new additive scrambled model, the proportion of students at the university who take addictive drugs is estimated.</p>Hadi FarokhiniaRahim ChinipardazGholamali Parham
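The basic additive scrambled model can be illustrated with a short simulation. The uniform scrambling variable, sample size, and function name below are illustrative assumptions, not the paper's design; the paper's Horvitz–Thompson type estimator additionally accounts for unequal inclusion probabilities.

```python
import numpy as np

def estimate_sensitive_proportion(reports, scramble_mean):
    """Moment estimator under an additive scrambled model: each
    respondent reports Z = X + S, where X in {0, 1} is the hidden
    sensitive status and S is an independent scrambling variable of
    known mean, so E[Z] - E[S] equals the sensitive proportion."""
    return float(np.mean(reports)) - scramble_mean

# Simulation: true proportion 0.3, scrambling S ~ Uniform{0,...,4} (mean 2).
rng = np.random.default_rng(0)
x = (rng.random(100_000) < 0.3).astype(int)   # hidden sensitive status
s = rng.integers(0, 5, size=100_000)          # scrambling noise
pi_hat = estimate_sensitive_proportion(x + s, 2.0)
```

Because the interviewer only ever sees `x + s`, no individual response reveals the sensitive status, yet the aggregate estimate `pi_hat` is unbiased for the true proportion.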
Copyright (c) 2024 Statistics, Optimization & Information Computing
2024-07-25 | 12(6) | 1612-1621 | 10.19139/soic-2310-5070-1807
A Condition-based Maintenance Policy in Chance Space
http://www.iapress.org/index.php/soic/article/view/2018
<p style="-qt-block-indent: 0; text-indent: 0px; -qt-user-state: 0; margin: 0px;">A condition-based maintenance policy is considered for a deteriorating system, including both preventive and corrective maintenance actions. The gamma process is used to model stochastic degradation in the probability space. However, the cost of preventive maintenance is considered an uncertain variable due to incomplete information, and its distribution is estimated from the opinions of several experts using the Delphi method. The optimal policy is determined by minimizing the expected cost rate function. Since this function involves both random variables, treated in a probability space, and an uncertain variable, treated in an uncertainty space, we study the optimal policy in a chance space, which is a combination of the probability and uncertainty spaces. The proposed methodology is explained in an illustrative example. Finally, the results are applied to a real data set.</p>Somayyeh Shahraki DehsoukhtehMostafa Razmkhah
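The gamma degradation process underlying such a policy is straightforward to simulate: the path accumulates independent gamma-distributed increments, and preventive maintenance is triggered when it first crosses a threshold. The parameters and threshold below are illustrative, not the paper's calibrated model.

```python
import numpy as np

def simulate_gamma_degradation(shape_rate, scale, dt, horizon, threshold, rng=None):
    """Simulate a stationary gamma degradation path X(t) built from
    independent Gamma(shape_rate * dt, scale) increments, returning the
    first time the path reaches a preventive-maintenance threshold
    (or None if the horizon is reached first), plus the final level."""
    rng = rng or np.random.default_rng(0)
    t, x = 0.0, 0.0
    while t < horizon:
        x += rng.gamma(shape_rate * dt, scale)  # degradation increment
        t += dt
        if x >= threshold:
            return t, x                         # preventive action time
    return None, x
```

The expected cost rate of a policy can then be estimated by averaging maintenance costs over many such simulated paths.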
Copyright (c) 2024 Statistics, Optimization & Information Computing
2024-07-26 | 12(6) | 1622-1639 | 10.19139/soic-2310-5070-2018
Statistical modelling of cryptocurrencies
http://www.iapress.org/index.php/soic/article/view/1570
<p>There has been tremendous interest invested by researchers and academics in Bitcoin since its introduction to the financial market. In recent years, however, the cryptocurrency market has advanced, and other cryptocurrencies such as Ethereum, Litecoin and Ripple have grown relatively quickly and could potentially challenge the dominant position of Bitcoin. These cryptocurrencies have been utilized globally as virtual currencies for multiple transactions. The returns of cryptocurrencies are known to be volatile and have been observed to fluctuate considerably in recent times. This study assesses and compares the performance of generalized autoregressive score (GAS) models integrated with several heavy-tailed distributions in Value-at-Risk (VaR) estimation for the returns of the four most popular cryptocurrencies, i.e. Bitcoin, Ethereum, Litecoin and Ripple. This paper proposes VaR models for Bitcoin, Ethereum, Litecoin and Ripple returns, i.e. GAS models combined with the generalized hyperbolic distribution (GHD), the variance gamma (VG) distribution, the normal inverse Gaussian (NIG) distribution and the generalized lambda distribution (GLD). The Kupiec likelihood ratio test was adopted to evaluate the proposed models' adequacy, and VaR backtesting was used to select the superior set of models.</p>Stephanie Danielle SubramoneyKnowledge ChinhamuRetius Chifurira
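The Kupiec likelihood ratio (proportion-of-failures) test used for the backtesting step has a simple closed form: it compares the observed VaR violation rate with the nominal coverage level. A minimal sketch, with illustrative inputs:

```python
import math

def kupiec_pof(n_obs, n_violations, p):
    """Kupiec proportion-of-failures LR statistic for VaR backtesting:
    LR = -2 [ log L(p) - log L(x/n) ], asymptotically chi-square with
    1 degree of freedom under the null that the true VaR violation
    probability equals the nominal level p."""
    x, n = n_violations, n_obs
    def loglik(q):
        # guard the q = 0 and q = 1 corner cases (0 * log 0 -> 0)
        a = (n - x) * math.log(1.0 - q) if x < n else 0.0
        b = x * math.log(q) if x > 0 else 0.0
        return a + b
    return -2.0 * (loglik(p) - loglik(x / n))
```

For example, 5 violations in 250 days at a 1% VaR gives a statistic below the 5% chi-square(1) critical value 3.841, so the model's coverage would not be rejected.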
Copyright (c) 2024 Statistics, Optimization & Information Computing
2024-08-03 | 12(6) | 1640-1662 | 10.19139/soic-2310-5070-1570
Overlap Analysis in Progressive Hybrid Censoring: A Focus on Adaptive Type-II and Lomax Distribution
http://www.iapress.org/index.php/soic/article/view/1908
<p>This article explores the adaptive type-II progressive hybrid censoring scheme, introduced by Ng et al. (2009), which is used to make inferences about three measures of overlap: Matusita's measure ($\rho $), Morisita's measure ($\lambda $), and Weitzman's measure ($\Delta $) for two Lomax distributions with different parameters. The article derives the bias and variance of these overlap measures' estimators. If sample sizes are limited, the precision or bias of these estimators is difficult to determine because there are no closed-form expressions for their variances and exact sampling distributions, so Monte Carlo simulations are used. Also, confidence intervals for these measures are constructed using both the bootstrap method and Taylor approximation.</p> <p>To demonstrate the practical significance of the proposed estimators, an illustrative application is provided by analyzing real data.</p>Amal Helu
Copyright (c) 2024 Statistics, Optimization & Information Computing
2024-08-22 | 12(6) | 1663-1683 | 10.19139/soic-2310-5070-1908
Optimization of Weibull Distribution Parameters with Application to Short-Term Risk Assessment and Strategic Investment Decision-Making
http://www.iapress.org/index.php/soic/article/view/2099
<p>Accurate parameter estimation is fundamental in financial modeling, especially in investment analysis, where the Modified Internal Rate of Return (MIRR) plays a key role in evaluating investment performance. This study aims to enhance risk and return predictions in Sharia-compliant property investments by exploring the efficacy of various optimization techniques for estimating Weibull distribution parameters within the MIRR framework. To achieve this, we employed a comparative analysis of optimization methods, including Simulated Annealing (SA), Differential Evolution (DE), Genetic Algorithm (GA), and traditional Numerical Methods (NM). Performance was assessed through metrics such as Root Mean Squared Error (RMSE), Akaike Information Criterion (AIC), R-squared (R<sup>2</sup>) values, and Kolmogorov-Smirnov (KS) statistics. The results reveal that metaheuristic algorithms (SA, DE, GA) significantly outperform traditional numerical methods in terms of parameter estimation accuracy. Specifically, SA achieved the lowest RMSE of 0.042, with a Weibull shape parameter estimate of 1.254 and variance of 0.004, followed closely by DE with an RMSE of 0.048, and GA with 0.046. In contrast, NM exhibited a higher RMSE of 0.067, with a shape parameter estimate of 1.310 and a variance of 0.006. The AIC values for metaheuristic methods ranged from 14.25 to 14.68, compared to 15.12 for NM, and R<sup>2</sup> values for metaheuristic methods ranged from 0.932 to 0.945, compared to 0.910 for NM. KS statistics further underscored the superior model fit of metaheuristics, with SA showing the lowest KS value of 0.045. The study underscores the critical role of metaheuristic optimization in improving the accuracy of parameter estimation based on MIRR models. 
This enhancement provides more reliable risk assessments and return predictions, offering valuable insights for informed investment decision-making and contributing to optimized financial outcomes in the property sector.</p>Hamza AbubakarMasnita Misiran Amani A. Idris SayedAbubakar Balarabe Karaye
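As a toy illustration of metaheuristic fitting of Weibull parameters, the sketch below minimizes the Weibull negative log-likelihood with a bare-bones simulated annealing loop. The cooling schedule, step size, and starting point are illustrative assumptions, not the tuned SA/DE/GA algorithms compared in the study.

```python
import numpy as np

def weibull_nll(params, data):
    """Negative log-likelihood of a two-parameter Weibull(shape k, scale lam)."""
    k, lam = params
    if k <= 0 or lam <= 0:
        return np.inf                      # infeasible proposals are rejected
    z = data / lam
    return -np.sum(np.log(k / lam) + (k - 1) * np.log(z) - z ** k)

def anneal_weibull(data, n_iter=20000, t0=1.0, rng=None):
    """Toy simulated annealing over (shape, scale); illustrative only."""
    rng = rng or np.random.default_rng(1)
    cur = np.array([1.0, float(np.mean(data))])   # crude starting guess
    cur_f = weibull_nll(cur, data)
    best, best_f = cur.copy(), cur_f
    for i in range(n_iter):
        temp = t0 * (1 - i / n_iter) + 1e-3       # linear cooling schedule
        cand = cur + rng.normal(scale=0.05, size=2)
        f = weibull_nll(cand, data)
        # accept downhill moves always, uphill moves with Boltzmann probability
        if f < cur_f or rng.random() < np.exp(-(f - cur_f) / temp):
            cur, cur_f = cand, f
            if f < best_f:
                best, best_f = cand.copy(), f
    return best, best_f
```

On simulated Weibull data the loop recovers parameter estimates close to the maximum likelihood solution, which is the role the metaheuristics play inside the study's MIRR framework.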
Copyright (c) 2024 Statistics, Optimization & Information Computing
2024-08-23 | 12(6) | 1684-1709 | 10.19139/soic-2310-5070-2099
A New Family of Continuous Distributions with Applications
http://www.iapress.org/index.php/soic/article/view/2144
<p>This article introduces a novel family of probability distributions, the Survival Power-G (SP-G) family, which introduces an additional parameter through the survival function of a baseline distribution. This family enhances the modelling capabilities of diverse existing continuous distributions. By applying this approach to the one-parameter exponential distribution, a new two-parameter Survival Power-Exponential (SP-E) distribution is generated. The statistical characteristics of this new distribution and the maximum likelihood estimator are established, and Monte Carlo simulation is utilized to explore the efficiency of the maximum likelihood estimators of the two parameters under varying sample sizes. Subsequently, the new distribution is employed in the analysis of three distinct sets of real data. Through comparison with alternative distributions on these datasets, it is demonstrated that the new distribution outperforms them.</p>Hazim Ghdhaib Kalt
Copyright (c) 2024 Statistics, Optimization & Information Computing
2024-08-23 | 12(6) | 1710-1724 | 10.19139/soic-2310-5070-2144
A dynamic bi-objective optimization model for a closed loop supply chain design under environmental policies
http://www.iapress.org/index.php/soic/article/view/2112
<p>Given the increasing focus on sustainability and environmental policy constraints, companies are required to redesign their supply chains. This paper explores the optimization of a closed loop supply chain (CLSC) network under both economic and environmental considerations. To achieve this, a bi-objective mixed integer linear model was developed. The proposed model identifies the optimal selection of CLSC facilities and manages both forward and reverse flows between them. The economic objective is reached by minimizing the total CLSC costs, while the environmental objective is satisfied by reducing CO2 emissions throughout the network. Products can be returned throughout their entire life cycle, which is why our model incorporates a dynamic aspect by considering product life cycle phases as time periods for the decision horizon. The model was tested through numerical experiments using a meta-heuristic approach based on the non-dominated sorting genetic algorithm NSGA-II. This algorithm produces a set of Pareto-optimal solutions that balance both objectives effectively. The results showed good performance in terms of computational time and optimization. Pareto solutions offered various options for managers and decision makers aiming for a sustainable closed loop supply chain design.</p>Oulfa LabbiAbdeslam Ahmadi
Copyright (c) 2024 Statistics, Optimization & Information Computing
2024-08-07 | 12(6) | 1725-1744 | 10.19139/soic-2310-5070-2112
Complexity analysis of primal-dual interior-point methods for semidefinite optimization based on a new type of kernel functions
http://www.iapress.org/index.php/soic/article/view/1927
<p>Kernel functions are essential for designing and analyzing interior-point methods (IPMs). They are used to determine search directions and to reduce the computational complexity of interior-point methods. Currently, kernel-function-based IPMs are among the most effective methods for solving linear optimization (LO) [1,20], second-order cone optimization (SOCO) [2], and symmetric optimization (SO), and they constitute a very active research area in mathematical programming. This paper presents a large-update primal-dual IPM for SDO based on a new bi-parameterized hyperbolic kernel function. We prove that the proposed large-update IPM has the same complexity bound as the best-known IPMs for solving these problems. Taking advantage of the favorable characteristics of the kernel function, we deduce that the iteration bound for the large-update method is $\mathcal{O}\left( \sqrt{n}\log n\log\dfrac{n}{\varepsilon }\right)$ when the parameter $a$ takes a special value. These theoretical results play an essential role in the design and analysis of IPMs for CQSCO [7] and the Cartesian $P_{\ast }\left( \kappa \right)$-SCLCP [8]. The associated proximity function has not been used before. To validate the efficacy of our algorithm, examples are given to illustrate the applicability of our main results, and we compare our numerical results with some alternatives presented in the literature.</p>Bachir BounibaneRanda Chalekh
Copyright (c) 2024 Statistics, Optimization & Information Computing
2024-09-26 | 12(6) | 1745-1761 | 10.19139/soic-2310-5070-1927
Second Order Duality Involving Second Order Cone Arcwise Connected Functions and Their Generalizations in Vector Optimization Problem over Cones
http://www.iapress.org/index.php/soic/article/view/2016
<p>In this paper, we introduce second order cone arcwise connected functions and their generalizations, and we study the interrelations among these functions. A Mond-Weir type second order dual is formulated, and duality results are proved using these functions.</p>Mamta ChaudharyVani Sharma
Copyright (c) 2024 Statistics, Optimization & Information Computing
2024-09-27 | 12(6) | 1762-1774 | 10.19139/soic-2310-5070-2016
Image Denoising Using the Geodesics' Gramian of the Manifold Underlying Patch-Space
http://www.iapress.org/index.php/soic/article/view/2124
<p>With the proliferation of sophisticated cameras in modern society, the demand for accurate and visually pleasing images is increasing. However, the quality of an image captured by a camera may be degraded by noise. Thus, some processing of images is required to filter out the noise without losing vital image features. Even though the current literature offers a variety of denoising methods, the fidelity and efficacy of their denoising are sometimes uncertain. Here we propose a novel and computationally efficient image denoising method that is capable of producing accurate images. To preserve image smoothness, this method takes as input patches partitioned from the image rather than pixels. It then performs denoising on the manifold underlying the patch-space rather than in the image domain, to better preserve the features across the whole image. We validate the performance of this method against benchmark image processing methods.</p>Kelum GajamannageRandy PaffenrothAnura Jayasumana
Copyright (c) 2024 Statistics, Optimization & Information Computing
2024-08-19 | 12(6) | 1775-1794 | 10.19139/soic-2310-5070-2124
DeafTech Vision: A Visual Computer's Approach to Accessible Communication through Deep Learning-Driven ASL Analysis
http://www.iapress.org/index.php/soic/article/view/2020
<p>Sign language is commonly used by people with hearing and speech impairments, making it difficult for those without such disabilities to understand. However, sign language is not limited to communication within the deaf community alone. It has been officially recognized in numerous countries and is increasingly being offered as a second language option in educational institutions. In addition, sign language has shown its usefulness in various professional sectors, including interpreting, education, and healthcare, by facilitating communication between people with and without hearing impairments. Advanced technologies, such as computer vision and machine learning algorithms, are used to interpret and translate sign language into spoken or written forms. These technologies aim to promote inclusivity and provide equal opportunities for people with hearing impairments in different domains, such as education, employment, and social interactions. In this paper, we implement a DeafTech Vision (DTV-CNN) architecture based on the convolutional neural network to recognize American Sign Language (ASL) gestures using deep learning techniques. Our main objective is to develop a robust ASL sign classification model to enhance human-computer interaction and assist individuals with hearing impairments. Through extensive evaluation, our model consistently outperformed baseline methods in terms of precision. It achieved an outstanding accuracy rate of 99.87% on the ASL alphabet test dataset and 99.94% on the ASL digit dataset, significantly exceeding previous research, which reported an accuracy of 90.00%. We also illustrated the model's learning trends and convergence points using loss and error graphs. These results highlight the DTV-CNN's effectiveness and capability in distinguishing complex ASL gestures.</p>Shafayat Bin Shabbir MugdhaHridoy DasMahtab UddinMd. Easin ArafatMd. Mahfujul Islam
Copyright (c) 2024 Statistics, Optimization & Information Computing
2024-06-13 | 12(6) | 1795-1811 | 10.19139/soic-2310-5070-2020
Multi-sensors search for lost moving targets using unrestricted effort
http://www.iapress.org/index.php/soic/article/view/1975
<p>This paper addresses the problem of searching for multiple targets using multiple sensors, where the targets move randomly among a limited number of states in each time interval. Due to the potential value or danger of the targets, multiple sensors are employed to detect them as quickly as possible within a fixed number of search intervals. Each search interval has an available amount of search effort, and an exponential detection function is assumed. The goal is to develop an optimal search strategy that distributes the search effort across cells in each time interval and calculates the probability of not detecting the targets throughout the entire search period. The optimal search strategy that minimizes this probability is determined, the stability of the search is analyzed, and some special cases are considered. Additionally, we introduce the $M$-cells algorithm.</p>Abd-Elmoneim A. M. Teamah Mohamed A. Kassem Elham Yusuf Elebiary
Copyright (c) 2024 Statistics, Optimization & Information Computing
2024-06-17 | 12(6) | 1812-1825 | 10.19139/soic-2310-5070-1975
Dominant Mixed Metric Dimension of Graph
http://www.iapress.org/index.php/soic/article/view/1925
<p>For a $k$-ordered set $W=\{s_1, s_2,\dots, s_k \}$ of the vertex set of $G$, the representation of a vertex or edge $a$ of $G$ with respect to $W$ is $r(a|W)=(d(a,s_1), d(a,s_2),\dots, d(a,s_k))$, where, if $a$ is a vertex, $d(a,s_i)$ is the distance between $a$ and $s_i$, and if $a=uv$ is an edge, $d(a,s_i)=\min\{d(u,s_i),d(v,s_i)\}$. The set $W$ is a mixed resolving set of $G$ if $r(a|W)\neq r(b|W)$ for every pair $a,b$ of distinct vertices or edges of $G$. A minimum mixed resolving set $W$ is a mixed basis of $G$. If $G$ has a mixed basis, then its cardinality is called the mixed metric dimension, denoted by $dim_m(G)$. A set $W$ of vertices in $G$ is a dominating set of $G$ if every vertex of $G$ that is not in $W$ is adjacent to some vertex of $W$. The minimum cardinality of a dominating set is the domination number, denoted by $\gamma(G)$. A set of vertices in $G$ that is both a mixed resolving set and a dominating set is a mixed resolving dominating set. The minimum cardinality of a mixed resolving dominating set is called the dominant mixed metric dimension, denoted by $\gamma_{mr}(G)$. In this paper, we establish sharp bounds on the dominant mixed metric dimension of $G$ and determine its exact value for some families of graphs.</p>Ridho AlfarisiSharifah Kartini Said HusainLiliek SusilowatiArika Indah Kristiana
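The definitions above can be checked by brute force on small graphs: enumerate landmark sets in increasing size and test whether every vertex and edge gets a distinct distance vector. A sketch (exponential in the number of vertices, for illustration only):

```python
from itertools import combinations

def mixed_metric_dimension(adj):
    """Brute-force mixed metric dimension of a small connected graph.
    adj: dict mapping each vertex to the set of its neighbours."""
    V = sorted(adj)
    E = sorted({tuple(sorted((u, v))) for u in adj for v in adj[u]})
    def bfs(s):                       # shortest-path distances from s
        d, q = {s: 0}, [s]
        while q:
            u = q.pop(0)
            for w in adj[u]:
                if w not in d:
                    d[w] = d[u] + 1
                    q.append(w)
        return d
    dist = {v: bfs(v) for v in V}
    items = V + E                     # resolve vertices and edges together
    def d_item(a, s):
        if a in adj:                  # a is a vertex
            return dist[s][a]
        return min(dist[s][a[0]], dist[s][a[1]])   # a is an edge uv
    for k in range(1, len(V) + 1):
        for W in combinations(V, k):
            codes = {tuple(d_item(a, s) for s in W) for a in items}
            if len(codes) == len(items):   # all items distinguished
                return k
    return len(V)
```

For instance, any path has mixed metric dimension 2 (its two end vertices resolve everything), while cycles need 3, matching known results for these families.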
Copyright (c) 2024 Statistics, Optimization & Information Computing
2024-06-21 | 12(6) | 1826-1833 | 10.19139/soic-2310-5070-1925
Fractal as Julia sets of complex functions via a new generalized viscosity approximation type iterative method
http://www.iapress.org/index.php/soic/article/view/2089
<p>In this article, we study and explore novel variants of Julia set patterns that are linked to the complex exponential function $W(z)=pe^{z^n}+qz+r$, and complex cosine function $T(z)=\cos({z^n})+dz+c$, where $n\geq 2$ and $c,d,p,q,r\in \mathbb{C}$ by employing a generalized viscosity approximation type iterative method introduced by Nandal et al. (Iteration process for fixed point problems and zero of maximal monotone operators, Symmetry, 2019) to visualize these sets. We utilize a generalized viscosity approximation type iterative method to derive an escape criterion for visualizing Julia sets. This is achieved by generalizing the existing algorithms, which led to visualization of beautiful fractals as Julia sets. Additionally, we present graphical illustrations of Julia sets to demonstrate their dependence on the iteration parameters. Our study concludes with an analysis of variations in the images and the influence of parameters on the color and appearance of the fractal patterns. Finally, we observe intriguing behaviors of Julia sets with fixed input parameters and varying values of $n$ via proposed algorithms.</p>Iqbal AhmadHaider Rizvi
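A simplified escape-time rendering of such Julia sets can be sketched as follows. Here a one-parameter Mann-type iteration stands in for the paper's generalized viscosity approximation scheme, and all parameter values, names, and the escape radius are illustrative assumptions.

```python
import numpy as np

def julia_escape_times(p, q, r, n, alpha=0.9, max_iter=60, radius=4.0,
                       extent=1.5, res=200):
    """Escape-time field for W(z) = p*exp(z**n) + q*z + r under the
    Mann-type iteration z <- (1 - alpha)*z + alpha*W(z), a simplified
    stand-in for the viscosity-approximation scheme of the paper.
    Returns an res x res integer array of first-escape iterations
    (max_iter where the orbit never escapes)."""
    xs = np.linspace(-extent, extent, res)
    z = xs[None, :] + 1j * xs[:, None]          # grid of starting points
    times = np.full(z.shape, max_iter)
    alive = np.ones(z.shape, dtype=bool)        # orbits still bounded
    for k in range(max_iter):
        w = p * np.exp(z[alive] ** n) + q * z[alive] + r
        z[alive] = (1 - alpha) * z[alive] + alpha * w
        # overflowed (non-finite) orbits count as escaped too
        escaped = (~np.isfinite(z)) | (np.abs(z) > radius)
        times[alive & escaped] = k
        alive &= ~escaped
    return times
```

Colouring `times` (e.g. with matplotlib's `imshow`) produces the fractal images; varying `alpha` and the function parameters changes the shape of the set, as the article illustrates for its own iteration parameters.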
Copyright (c) 2024 Statistics, Optimization & Information Computing
2024-07-12 | 12(6) | 1834-1853 | 10.19139/soic-2310-5070-2089
Spotting, Tracking algorithm and the remoteness
http://www.iapress.org/index.php/soic/article/view/1893
<p>In this paper we present a method to determine whether a point M lies inside or outside a polygon ( A(k), k∈{1,...,n} ). We give a very simple, practical and explicit method for the triangulation of a convex polygon (convex polyhedron), after defining and concretizing an order relation on the points of a polygon in the plane, given a well-chosen orientation and an arbitrary vertex of the polygon. In the case where the point M is outside the polygon, a simple optimization method is applied to determine the distance between the point M and the polygon A(1), ..., A(n), and the point P of the boundary of the polygon closest to M, "the neighboring point".</p>Aziz ArbaiAbounaima Mohammed ChaoukiAmina Bellekbir
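For a convex polygon, the membership test and the boundary-distance computation described above can be sketched with elementary cross products and segment projections. This is a generic sketch (counter-clockwise vertex order assumed; names are illustrative), not the paper's triangulation-based construction.

```python
import math

def inside_convex(point, poly):
    """True if `point` lies inside or on the convex polygon `poly`
    (vertices in counter-clockwise order): the point must be on the
    left of every directed edge."""
    px, py = point
    n = len(poly)
    for i in range(n):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
        cross = (x2 - x1) * (py - y1) - (y2 - y1) * (px - x1)
        if cross < 0:            # strictly right of one edge -> outside
            return False
    return True

def distance_to_polygon(point, poly):
    """Distance from `point` to the polygon boundary together with the
    nearest boundary point (the paper's 'neighboring point' P)."""
    px, py = point
    best = (math.inf, None)
    n = len(poly)
    for i in range(n):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
        dx, dy = x2 - x1, y2 - y1
        # project the point onto the edge, clamped to the segment
        t = max(0.0, min(1.0, ((px - x1) * dx + (py - y1) * dy) / (dx * dx + dy * dy)))
        qx, qy = x1 + t * dx, y1 + t * dy
        d = math.hypot(px - qx, py - qy)
        if d < best[0]:
            best = (d, (qx, qy))
    return best
```

For a unit-square-like polygon, a point at (3, 1) is reported outside, at distance 1 from the nearest boundary point (2, 1).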
Copyright (c) 2024 Statistics, Optimization & Information Computing
2024-07-22 | 12(6) | 1854-1872 | 10.19139/soic-2310-5070-1893
Interference-aware scheme to improve distributed caching in cellular networks via D2D underlay communications
http://www.iapress.org/index.php/soic/article/view/2094
<p>Underlay Device-to-Device (D2D) communication is a promising networking technology intended to boost the spectral efficiency of future cellular networks, including 5G and beyond. When used for distributed caching, where cellular devices store popular files for later direct exchange with other devices away from the cellular infrastructure, the technology bears more fruit, such as enhancing throughput, reducing latency and offloading the infrastructure. However, due to their non-orthogonality, underlay D2D communications can cause excessive interference to the cellular user. To avoid this problem, the present article proposes a scheme with two interference-reduction elements: a guard zone intended to allow D2D communications only for devices far enough from the base station (BS), and a pairing strategy intended to allow D2D pairing only for devices that are close enough to each other. We assess the performance of the scheme using a stochastic geometry (SG) model, through which we characterize the coverage probability of the cellular user. This probability is a principal indicator of maintaining the quality of service (QoS) of the cellular user and of enabling successful caching for the D2D user. We introduce in the process a novel empirical technique which, given a desired level of interference, identifies an upper bound for the distance between two devices to be paired without exceeding that level. We finally validate the analytical findings obtained from the model by intensive simulation to ensure the correctness of both the model and the scheme performance. A salient feature of the scheme is that it requires no software or hardware modification in the devices for its implementation.</p>Amira EleffMohamed MousaHamed Nassar
Copyright (c) 2024 Statistics, Optimization & Information Computing
2024-07-24 | 12(6) | 1873-1885 | 10.19139/soic-2310-5070-2094
Railway Track Faults Detection Using Ensemble Deep Transfer Learning Models
http://www.iapress.org/index.php/soic/article/view/1994
<p>Railway track fault detection is an essential task for ensuring the safety and reliability of railway systems, particularly in the summer and rainy seasons, when train wheels may slide due to fractures in the track or corrosion may cause track fractures. In this study, we propose a novel approach for the automated detection of railway track faults using deep transfer learning models. The proposed method combines image processing techniques with the training of three pretrained models: InceptionV3, ResNet50V2, and VGG16, on a dataset of railway track images. We evaluated the performance of our proposed method by measuring its accuracy on a test set of railway track images. The individual training accuracies for InceptionV3, ResNet50V2, and VGG16 were 94.30%, 96.79%, and 94.64%, respectively. We then combined these models using an ensemble approach, which achieved an impressive accuracy of 98.57% on the test set. Our results demonstrate the effectiveness of using deep ensemble transfer learning for railway track fault detection. Moreover, our proposed method can be used as a valuable tool for railway track maintenance and monitoring, which can ultimately lead to the improvement of the safety and reliability of railway systems.</p> <p>Our proposed approach for railway track fault detection using ensemble deep transfer learning models shows promising results, indicating great potential for detecting track faults accurately and efficiently. The proposed method can be used in various railway systems worldwide, ultimately leading to improved safety and reliability for passenger and cargo transportation.</p>Ali almadaniVivek MahaleAshok T. Gaikwad
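The ensemble step can be illustrated independently of the specific networks: a soft-voting combiner simply averages the per-model class-probability arrays and takes the argmax. The abstract does not specify the paper's exact combination rule, so plain (optionally weighted) averaging here is an assumption.

```python
import numpy as np

def ensemble_predict(prob_list, weights=None):
    """Soft-voting ensemble: average the per-model class-probability
    arrays (each of shape n_samples x n_classes, e.g. softmax outputs
    of InceptionV3, ResNet50V2 and VGG16) and return the argmax class
    per sample. Optional `weights` give unequal say to the models."""
    probs = np.average(np.stack(prob_list), axis=0, weights=weights)
    return probs.argmax(axis=1)
```

Averaging probabilities (rather than hard votes) lets a confident model outvote two uncertain ones, which is one reason such ensembles can beat each individual network's accuracy.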
Copyright (c) 2024 Statistics, Optimization & Information Computing
2024-06-07 | 12(6): 1886-1911 | DOI: 10.19139/soic-2310-5070-1994
Some Properties of Dominant Local Metric Dimension
http://www.iapress.org/index.php/soic/article/view/2062
<p>Let $G$ be a connected graph with vertex set $V(G)$. Let $W_l=\{w_1,w_2,\dots,w_n\}\subseteq V(G)$ be an ordered subset. Then $W_l$ is said to be a dominant local resolving set of $G$ if $W_l$ is both a local resolving set and a dominating set of $G$. A dominant local resolving set of $G$ with minimum cardinality is called a dominant local basis of $G$, and its cardinality is called the dominant local metric dimension of $G$, denoted by $Ddim_l(G)$. We characterize the dominant local metric dimension of arbitrary graphs and of some commonly known graphs in terms of their domination number, thereby deriving some properties of the dominant local metric dimension.</p>Reni Umilasari, Liliek Susilowati, Slamin A, Fadekemi Janet Osaye, Ilham Saifudin
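The definitions above can be checked mechanically on small graphs. The brute-force sketch below (illustrative, not from the paper) searches for the smallest vertex set that is simultaneously dominating and locally resolving, where "locally resolving" means every pair of adjacent vertices is distinguished by its distance to some vertex of the set.

```python
from collections import deque
from itertools import combinations

def bfs_distances(adj, source):
    """Shortest-path distances from source in an unweighted graph."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def dominant_local_metric_dimension(adj):
    """Brute-force Ddim_l(G) for a small connected graph (adjacency dict)."""
    vertices = sorted(adj)
    dist = {v: bfs_distances(adj, v) for v in vertices}
    edges = [(u, v) for u in vertices for v in adj[u] if u < v]
    for k in range(1, len(vertices) + 1):
        for cand in combinations(vertices, k):
            dominating = all(v in cand or any(u in cand for u in adj[v])
                             for v in vertices)
            # locally resolving: adjacent vertices get distinct distance
            # vectors with respect to the candidate set
            resolving = all(any(dist[w][u] != dist[w][v] for w in cand)
                            for u, v in edges)
            if dominating and resolving:
                return k, set(cand)

# Example: the 5-cycle C5, where {0, 2} is dominating and locally resolving.
cycle5 = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
```

For $C_5$ no single vertex dominates all five vertices, while a pair such as $\{0,2\}$ both dominates and locally resolves, so the search returns cardinality 2.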
Copyright (c) 2024 Statistics, Optimization & Information Computing
2024-07-31 | 12(6): 1912-1920 | DOI: 10.19139/soic-2310-5070-2062
Skin Cancer Diagnosis With Multi-Level Classification
http://www.iapress.org/index.php/soic/article/view/2090
<p>Skin cancer arises from the uncontrolled proliferation of abnormal skin cells, primarily triggered by exposure to the harmful ultraviolet (UV) rays of the sun and the utilization of UV tanning beds. This condition poses a heightened risk due to its potential to progress into blood cancer and lead to rapid fatality. Extensive research efforts have been dedicated to advancing the treatment of this perilous ailment. This paper presents a system designed for the examination and diagnosis of pigmented skin lesions and melanoma.</p> <p>The system incorporates a supervised classification algorithm that combines Convolutional Neural Network (CNN) and Deep Neural Network (DNN) architectures with feature extraction techniques. It operates in two distinct stages: the initial stage classifies images into two categories, namely benign or malignant, while the subsequent stage further categorizes the images into one of three classes: basal cell carcinomas, squamous cell carcinomas, or melanoma. Consequently, the comprehensive system addresses four classes, namely benign, basal cell carcinomas, squamous cell carcinomas, and melanoma.</p> <p>This work contributes to the system's design in three significant ways. Firstly, it implements multiple iterations to select the most optimal images, resulting in the highest classification accuracy. Secondly, it employs various statistical methods to identify the most pertinent features, thereby enhancing the classifier's accuracy by focusing on the most informative features for the classification task. Lastly, a two-stage classification approach is implemented, employing two distinct classifiers at different levels within the overall system. Despite the inherent complexity of the real-world problem, the overall system attains a commendable level of classification accuracy.</p> <p>Following rigorous experimentation, the study identifies the top three models. Each approach culminates in a classifier for each stage. 
The first approach, utilizing a deep learning classifier, achieves an accuracy of 81.82% in the initial cancer discrimination stage and 58.33% in the subsequent stage. The second approach, employing a machine learning classifier, attains an accuracy of 74.63% in the first stage and 64.41% in the second stage. The third approach, utilizing a linear regression classifier, achieves an accuracy of 98% in the first stage and 90% in the second stage. These results underscore the significance of feature selection in influencing model accuracy and suggest the potential for further optimization.</p>Rania Elbadawy, BenBella S. Tawfik, Mohamed Amal Zeidan
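A two-stage cascade of this kind is straightforward to express in code. The sketch below uses stand-in callables for the trained stage classifiers; the feature names and thresholds are hypothetical, chosen only to show the control flow.

```python
def two_stage_diagnosis(image_features, stage1, stage2):
    """Cascade two classifiers, as in the paper's multi-level design.

    Stage 1 separates benign from malignant lesions; only malignant cases
    reach stage 2, which refines them into one of three subtypes.  stage1
    and stage2 are any callables returning a label; here they stand in for
    the trained CNN/DNN classifiers described in the abstract.
    """
    if stage1(image_features) == "benign":
        return "benign"
    # Only malignant lesions reach the finer-grained classifier.
    return stage2(image_features)  # 'basal', 'squamous' or 'melanoma'

# Toy stand-in classifiers on hypothetical lesion features.
stage1 = lambda f: "malignant" if f["asymmetry"] > 0.5 else "benign"
stage2 = lambda f: "melanoma" if f["pigment"] > 0.7 else "basal"
```

A practical advantage of the cascade is that each stage can use its own feature subset and be tuned independently, which matches the paper's per-stage feature selection.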
Copyright (c) 2024 Statistics, Optimization & Information Computing
2024-08-02 | 12(6): 1921-1933 | DOI: 10.19139/soic-2310-5070-2090
A Novel Hybrid ANFIS-NARX and NARX-ANN Models to Predict the Profitability of Egyptian Insurance Companies
http://www.iapress.org/index.php/soic/article/view/2104
<p>The combined use of fuzzy logic models and machine learning (ML) models has become common in many areas, especially insurance. This study compares non-hybrid models, namely the artificial neural network (ANN), the adaptive neuro-fuzzy inference system (ANFIS), and the nonlinear autoregressive model with exogenous input (NARX), against the hybrid ANFIS-NARX and NARX-ANN models in predicting the profits of insurance activity, an important indicator of the performance of Egypt's 39 insurance companies, over the monthly period from 1 January 2009 to 31 December 2022. The prediction is based on the following factors, which help decision makers make appropriate decisions: net premiums, reinsurance commissions, net income from earmarked investments, other direct income, net compensation, production cost commissions, and general and administrative expenses. The results show that the ANN model gives better results than the ANFIS, NARX, hybrid ANFIS-NARX, and NARX-ANN models according to the following prediction accuracy measures: RMSE, MAPE, MAE, and the Theil inequality coefficient. The explanatory power (R<sup>2</sup>) was (0.79, 0.61) for the training and testing phases, respectively, in personal insurance companies, and (0.83, 0.68), respectively, in property insurance companies.</p>Hanaa Hussein Ali
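The four accuracy measures named in the abstract are standard and can be computed as below. The Theil statistic shown is the common U coefficient (RMSE normalised by the root mean squares of both series), which may differ in detail from the paper's exact variant.

```python
import math

def accuracy_measures(actual, predicted):
    """RMSE, MAE, MAPE (%) and Theil's U statistic for paired series.

    MAPE assumes no actual value is zero; Theil's U lies in [0, 1],
    with 0 indicating a perfect forecast.
    """
    n = len(actual)
    errors = [a - p for a, p in zip(actual, predicted)]
    rmse = math.sqrt(sum(e * e for e in errors) / n)
    mae = sum(abs(e) for e in errors) / n
    mape = 100.0 / n * sum(abs(e / a) for e, a in zip(errors, actual))
    theil = rmse / (math.sqrt(sum(a * a for a in actual) / n)
                    + math.sqrt(sum(p * p for p in predicted) / n))
    return {"RMSE": rmse, "MAE": mae, "MAPE": mape, "Theil": theil}
```

Reporting several of these measures together, as the study does, guards against a model looking strong on one criterion (say RMSE) while performing poorly on relative error.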
Copyright (c) 2024 Statistics, Optimization & Information Computing
2024-08-08 | 12(6): 1934-1955 | DOI: 10.19139/soic-2310-5070-2104
Utilizing the Discrete Heisenberg Group and Laser Systems in RGB Image Encryption
http://www.iapress.org/index.php/soic/article/view/1744
<p>This study is the culmination of thorough cryptographic experimentation, leading to an innovative color image encryption scheme that fuses mathematical concepts rooted in group theory and chaos theory. The encryption procedure entails the creation of cube faces to depict the relative positions of pixels within a given stream, thereby generating six distinct channels. Within our algorithm, each monochromatic layer of an image is independently encrypted using digraph encryption: the four faces are rotated to encrypt the first digraph, followed by another rotation to encrypt the second. Subsequently, matrices derived from the discrete Heisenberg group are integrated with the monochromatic layer from the preceding step to fine-tune the image's parameters and introduce blur. Our approach has yielded promising outcomes across various images and evaluation criteria, demonstrating resilience against differential attacks and statistical analyses. Furthermore, comparative evaluations have highlighted the superiority of our method over existing algorithms.</p>Fouzia Elazzaby, Khalid Sabour, Nabil EL AKKAD, Bouchta Zouhairi, Samir Kabbaj
Copyright (c) 2024 Statistics, Optimization & Information Computing
2024-10-15 | 12(6): 1956-1972 | DOI: 10.19139/soic-2310-5070-1744
Indonesian News Extractive Summarization using LexRank and YAKE Algorithm
http://www.iapress.org/index.php/soic/article/view/1976
<p>The surge in global technological advancements has led to an unprecedented volume of information sharing across diverse platforms. This information, easily accessible through browsers, has created an overload, making it challenging for individuals to efficiently extract essential content. In response, this paper proposes a hybrid Automatic Text Summarization (ATS) method combining the LexRank and YAKE algorithms. LexRank determines sentence scores, while YAKE calculates individual word scores, collectively enhancing summarization accuracy. Leveraging an unsupervised learning approach, the hybrid model demonstrates a 2% improvement over its base model. To validate the effectiveness of the proposed method, the paper utilizes 5000 Indonesian news articles from the Indosum dataset. Ground-truth summaries are employed, with the objective of condensing each article to 30% of its content. The algorithmic approach and experimental results are presented, offering a promising solution to information overload. Notably, the results reveal a two percent improvement in the Rouge-1 and Rouge-2 scores, along with a one percent enhancement in the Rouge-L score. These findings underscore the potential of incorporating a keyword score to enhance the overall accuracy of the summaries generated by LexRank. Despite the absence of a machine learning model in this experiment, the unsupervised learning and heuristic approach suggest broader applications on a global scale. A comparative analysis with other state-of-the-art text summarization methods or hybrid approaches will be essential to gauge its overall effectiveness.</p>Julyanto Wijaya, Abba Suganda Girsang
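The abstract blends LexRank sentence scores with YAKE keyword scores but does not spell out the blending rule. The sketch below assumes precomputed scores and an illustrative 50/50 weighting, and inverts the YAKE values since lower YAKE scores indicate more important keywords.

```python
def hybrid_sentence_scores(lexrank_scores, yake_keyword_scores, sentences,
                           keyword_weight=0.5):
    """Blend LexRank sentence scores with YAKE keyword scores.

    In YAKE, *lower* scores mark more important keywords, so each keyword's
    contribution is inverted before being added to the sentence score.
    The 50/50 weighting is illustrative; the paper does not fix the ratio.
    """
    combined = []
    for sent, lr in zip(sentences, lexrank_scores):
        words = sent.lower().split()
        kw = sum(1.0 / (1.0 + yake_keyword_scores[w])
                 for w in words if w in yake_keyword_scores)
        kw /= max(len(words), 1)                    # normalise by length
        combined.append((1 - keyword_weight) * lr + keyword_weight * kw)
    return combined

def summarize(sentences, scores, ratio=0.3):
    """Keep the top `ratio` of sentences, in original order (extractive)."""
    k = max(1, round(ratio * len(sentences)))
    top = sorted(range(len(sentences)),
                 key=scores.__getitem__, reverse=True)[:k]
    return [sentences[i] for i in sorted(top)]
```

The 30% target length in the paper's Indosum setup corresponds to `ratio=0.3`; keeping the selected sentences in document order preserves readability of the extractive summary.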
Copyright (c) 2024 Statistics, Optimization & Information Computing
2024-06-07 | 12(6): 1973-1983 | DOI: 10.19139/soic-2310-5070-1976
Optimization techniques of Assignment Problem using Trapezoidal Intuitionistic Fuzzy Numbers and Interval Arithmetic
http://www.iapress.org/index.php/soic/article/view/1842
<p>This paper discusses the Assignment problem of optimally assigning jobs to workers based on their talents and efficiency. In general, job scheduling plays a significant role in manufacturing and is advantageous in real-world applications, where assigning jobs involves considerable uncertainty and ambiguity. The Intuitionistic Fuzzy Assignment Problem (IFAP) is employed in circumstances where decision makers have to deal with uncertainty. The domains are Trapezoidal Intuitionistic Fuzzy Numbers (TrIFNs), and the techniques used are the Hungarian Method (HM), Brute Force Method (BFM), and Greedy Method (GM). The suggested model's performance is compared with the existing approach with the help of interval arithmetic operations. Allocating work to individuals is illustrated numerically, the optimal minimum-cost solution is obtained using R programming, and the results of the comparative analysis are shown diagrammatically, helping readers easily understand and draw conclusions from the comparisons.</p>R. Sanjana, G. Ramesh
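Of the three techniques compared, the Brute Force Method is the simplest to sketch. The code below assumes the TrIFN costs have already been ranked down to crisp values; the centroid-style defuzzification shown is one common ranking choice, not necessarily the paper's.

```python
from itertools import permutations

def defuzzify_trifn(a, b, c, d):
    """Centroid-style score for a trapezoidal fuzzy number with support
    [a, d] and core [b, c] -- one common crisp ranking, used here only
    as an illustrative stand-in for the paper's TrIFN ranking."""
    return (a + b + c + d) / 4.0

def brute_force_assignment(cost):
    """Minimum-cost assignment by exhausting all worker-to-job permutations.

    cost[i][j] is the (defuzzified) cost of worker i doing job j.  O(n!)
    time, so only suitable for the small instances typical of worked
    TrIFN examples; the Hungarian Method scales polynomially instead.
    """
    n = len(cost)
    best = min(permutations(range(n)),
               key=lambda p: sum(cost[i][p[i]] for i in range(n)))
    return list(best), sum(cost[i][best[i]] for i in range(n))

# 2x2 toy instance: worker 0 -> job 1, worker 1 -> job 0 is optimal.
assignment, total = brute_force_assignment([[4, 2], [1, 3]])
```

Because BFM enumerates every permutation, it provides a ground truth against which the Hungarian and Greedy solutions can be validated on small examples.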
Copyright (c) 2024 Statistics, Optimization & Information Computing
2024-08-27 | 12(6): 1984-1999 | DOI: 10.19139/soic-2310-5070-1842
On the Geometric Pattern Transformation (GPT) Properties of Unidimensional Signals
http://www.iapress.org/index.php/soic/article/view/1924
<p>The Geometric Pattern Transformation (GPT) has several advantages over contemporary algorithms that have been duly studied in previous research. This work presents four different but complementary aspects of its properties. After a brief review of the GPT concept, we show how tied data manifest themselves in data sets. To obtain a symmetric representation of the GPT, a linear transformation is performed that regularizes its geometric representation. The theoretical relationship between the GPT and the phase-space representation of 1D signals is then analyzed and formalized, from which the study of the forbidden pattern follows readily, revealing a strong relationship with the stable and unstable fixed points of the logistic equation. Finally, the characterization of colored noises and the application to real-world signals obtained through experimental procedures are analyzed. With these results, this work advances the potential applications of the GPT in an integral way to the processing and analysis of data series.</p>Cristian Bonini, Marcos Maillot, Dino Otero, Andrea Rey, Ariel Amadio, Walter Legnani
Copyright (c) 2024 Statistics, Optimization & Information Computing
2024-08-25 | 12(6): 2000-2021 | DOI: 10.19139/soic-2310-5070-1924