http://www.iapress.org/index.php/soic/issue/feed Statistics, Optimization & Information Computing 2021-07-30T06:48:17+08:00 David G. Yu david.iapress@gmail.com Open Journal Systems <p><em><strong>Statistics, Optimization and Information Computing</strong></em> (SOIC) is an international refereed journal dedicated to the latest advances in statistics, optimization and their applications in information sciences. Topics of interest include (but are not limited to):</p> <p>Statistical theory and applications</p> <ul> <li class="show">Statistical computing, Simulation and Monte Carlo methods, Bootstrap, Resampling methods, Spatial Statistics, Survival Analysis, Nonparametric and semiparametric methods, Asymptotics, Bayesian inference and Bayesian optimization</li> <li class="show">Stochastic processes, Probability, Statistics and applications</li> <li class="show">Statistical methods and modeling in life sciences including biomedical sciences, environmental sciences and agriculture</li> <li class="show">Decision Theory, Time series analysis, High-dimensional multivariate integrals, Statistical analysis in marketing, business, finance, insurance, economics and social science, etc.</li> </ul> <p>Optimization methods and applications</p> <ul> <li class="show">Linear and nonlinear optimization</li> <li class="show">Stochastic optimization, Statistical optimization and Markov chains, etc.</li> <li class="show">Game theory, Network optimization and combinatorial optimization</li> <li class="show">Variational analysis, Convex optimization and nonsmooth optimization</li> <li class="show">Global optimization and semidefinite programming</li> <li class="show">Complementarity problems and variational inequalities</li> <li class="show">Optimal control: theory and applications</li> <li class="show">Operations research, Optimization and applications in management science and engineering</li> </ul> <p>Information computing and machine intelligence</p> <ul> <li class="show">Machine learning, Statistical learning, Deep learning</li> <li class="show">Artificial intelligence, Intelligent computation, Intelligent control and optimization</li> <li class="show">Data mining, Data analysis, Cluster computing, Classification</li> <li class="show">Pattern recognition, Computer vision</li> <li class="show">Compressive sensing and sparse reconstruction</li> <li class="show">Signal and image processing, Medical imaging and analysis, Inverse problems and imaging sciences</li> <li class="show">Genetic algorithms, Natural language processing, Expert systems, Robotics, Information retrieval and computing</li> <li class="show">Numerical analysis and algorithms with applications in computer science and engineering</li> </ul>
http://www.iapress.org/index.php/soic/article/view/991 Discrete Chebyshev Polynomials for Solving Fractional Variational Problems 2021-07-30T06:26:44+08:00 Fakhrodin Mohammadi f.mohammadi62@hotmail.com Leila Moradi f.mohammadi62@hotmail.com Dajana Conte f.mohammadi62@hotmail.com <p>In the current study, a general formulation of the discrete Chebyshev polynomials is given. The operational matrix of fractional integration for these discrete polynomials is also derived. Then, a numerical scheme based on the discrete Chebyshev polynomials and their operational matrix has been developed to solve fractional variational problems. In this method, the need for a Lagrange multiplier during the solution procedure is eliminated. The performance of the proposed scheme is validated through some illustrative examples. Moreover, the obtained numerical results are compared with those previously obtained using the classical Chebyshev polynomials. Finally, a comparison of the required CPU time is presented, which indicates that the proposed method is more efficient and less complex.</p> 2021-07-10T00:00:00+08:00 Copyright (c) 2021 Statistics, Optimization & Information Computing
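The scheme in the abstract above is built on discrete Chebyshev polynomials, which are orthogonal with respect to a discrete inner product over equally spaced points. As a minimal illustration of that building block only (not the paper's formulation, normalization, or operational matrix of fractional integration), the sketch below constructs an orthonormal discrete polynomial basis on the grid 0, 1, ..., N-1; the function name and the QR-based orthogonalization are our own choices.

```python
import numpy as np

def discrete_chebyshev_basis(N, degree):
    """Orthonormal polynomial basis w.r.t. the discrete inner product
    <f, g> = sum_{x=0}^{N-1} f(x) g(x), built by QR on a Vandermonde matrix."""
    x = np.arange(N, dtype=float)
    V = np.vander(x, degree + 1, increasing=True)  # columns: 1, x, ..., x^degree
    Q, _ = np.linalg.qr(V)                         # Gram-Schmidt via QR
    return x, Q                                    # Q[:, k] samples the degree-k polynomial

x, T = discrete_chebyshev_basis(N=16, degree=5)
print(np.allclose(T.T @ T, np.eye(6)))             # discrete orthonormality holds
```

Up to sign and normalization, the degree-k column spans the same component as the discrete Chebyshev polynomial of degree k on this grid; the paper's operational-matrix construction and the treatment of fractional variational problems are not reproduced here.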
http://www.iapress.org/index.php/soic/article/view/765 Generalized Odd Power Cauchy Family and Its Associated Heteroscedastic Regression Model 2021-07-30T06:20:49+08:00 Emrah Altun EA emrahaltun123@gmail.com Morad Alizadeh moradalizadeh78@gmail.com Thiago Ramires thiagogentil@gmail.com Edwin Ortega edwin@usp.br <p>This study introduces a generalization of the odd power Cauchy family by adding one more shape parameter to gain more flexibility in modeling complex data structures. Linear representations for the density, moments, quantile, and generating functions are derived. The model parameters are estimated by the maximum likelihood method. Monte Carlo simulations are performed under different parameter settings and sample sizes for the proposed models. In addition, we introduce a new heteroscedastic regression model based on a special member of the proposed family. Three data sets are analyzed with the proposed and competing models.</p> 2021-07-10T00:00:00+08:00 Copyright (c) 2021 Statistics, Optimization & Information Computing
http://www.iapress.org/index.php/soic/article/view/1240 A Bayesian Inference Approach for Bivariate Weibull Distributions Derived from Roy and Morgenstern Methods 2021-07-30T06:31:04+08:00 Ricardo Puziol de Oliveira rpuziol.oliveira@gmail.com Marcos Vinicius de Oliveira Peres mvperes1991@alumni.usp.br Milene Regina dos Santos rpuziol.oliveira@gmail.com Edson Zangiacomi Martinez rpuziol.oliveira@gmail.com Jorge Aberto Achcar rpuziol.oliveira@gmail.com <p>Bivariate lifetime distributions are of great importance in studies related to interdependent components, especially in engineering applications. In this paper, we introduce two bivariate lifetime distributions assuming three-parameter Weibull marginal distributions. Some characteristics of the proposed distributions, such as the joint survival function, hazard rate function, cross factorial moment and stress-strength parameter, are also derived. Inferences for the parameters, or functions of the parameters, of the models are obtained under a Bayesian approach. An extensive numerical study using simulated data is carried out to evaluate the accuracy of the obtained estimators. To illustrate the usefulness of the proposed model, we also include an example with real data, from which it can be seen that the proposed model leads to good fits.</p> 2021-07-12T00:00:00+08:00 Copyright (c) 2021 Statistics, Optimization & Information Computing
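The Morgenstern construction referenced in the title above couples two marginals through the Farlie-Gumbel-Morgenstern (FGM) copula, C(u, v) = uv[1 + theta(1-u)(1-v)]. A minimal sampling sketch under that standard construction with three-parameter Weibull marginals is shown below; the parameter values and function names are illustrative, and neither the Roy construction nor the Bayesian estimation procedure of the paper is reproduced.

```python
import numpy as np

def fgm_copula_sample(n, theta, rng):
    """Draw (u, v) from the FGM copula C(u, v) = u v [1 + theta (1-u)(1-v)]
    by inverting the conditional distribution of v given u."""
    u = rng.uniform(size=n)
    w = rng.uniform(size=n)
    a = theta * (1.0 - 2.0 * u)
    safe_a = np.where(np.abs(a) < 1e-10, 1.0, a)            # avoid dividing by ~0
    root = ((1.0 + a) - np.sqrt((1.0 + a) ** 2 - 4.0 * a * w)) / (2.0 * safe_a)
    v = np.where(np.abs(a) < 1e-10, w, root)                # a ~ 0: conditional is uniform
    return u, v

def weibull3_quantile(p, shape, scale, loc):
    """Quantile function of a three-parameter Weibull distribution."""
    return loc + scale * (-np.log1p(-p)) ** (1.0 / shape)

rng = np.random.default_rng(42)
u, v = fgm_copula_sample(5000, theta=0.7, rng=rng)           # illustrative dependence
x = weibull3_quantile(u, shape=1.8, scale=2.0, loc=0.5)      # illustrative marginals
y = weibull3_quantile(v, shape=2.5, scale=1.0, loc=0.2)
print(np.corrcoef(x, y)[0, 1])                               # mild positive dependence
```

The FGM copula can only induce moderate dependence, which is one reason papers in this area also consider alternative constructions such as the Roy method.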
http://www.iapress.org/index.php/soic/article/view/1101 A New Probability Distribution for Modeling Failure and Service Times: Properties, Copulas and Various Estimation Methods 2021-07-30T06:34:26+08:00 Hanaa Elgohari hanaa_elgohary@mans.edu.eg Mohamed Ibrahim mohamed_ibrahim@du.edu.eg Haitham M. Yousof haitham.yousof@fcom.bu.edu.eg <p>In this paper, a new generalization of the Pareto type II model is introduced and studied. The new density can be “right skewed” with a heavy-tailed shape, and its corresponding failure rate can be “J-shaped”, “decreasing”, or “upside down” (increasing-constant-decreasing). The new model may be used as an “under-dispersed” or “over-dispersed” model. Bayesian and non-Bayesian estimation methods are considered, and the performance of all methods is assessed via a simulation study. The Bayesian and non-Bayesian methods are then compared in modeling real data via two applications, in which the maximum likelihood method turns out to be the best estimation method, so we use it when comparing competing models. Before using the maximum likelihood method, we performed simulation experiments to assess its finite-sample behavior using biases and mean squared errors.</p> 2021-07-10T00:00:00+08:00 Copyright (c) 2021 Statistics, Optimization & Information Computing
http://www.iapress.org/index.php/soic/article/view/1092 Feature Selection Based on Divergence Functions: A Comparative Classification Study 2021-07-30T06:37:14+08:00 Saeid Pourmand saeidpourmand1@gmail.com Ashkan Shabbak a.shabbak@gmail.com Mojtaba Ganjali m_ganjali43@yahoo.com <p>Due to the extensive use of high-dimensional data and its application in a wide range of scientific fields of research, dimensionality reduction has become a major part of the preprocessing step in machine learning. Feature selection is one procedure for reducing dimensionality. In this process, instead of using the whole set of features, a subset is selected to be used in the learning model. Feature selection (FS) methods are divided into three main categories: filters, wrappers, and embedded approaches. Filter methods depend only on the characteristics of the data and do not rely on the learning model at hand. Divergence functions, as measures of the difference between probability distribution functions, can be used as filter methods for feature selection. In this paper, the performance of a few divergence functions, such as the Jensen-Shannon (JS) divergence and the exponential divergence (EXP), is compared with that of some of the best-known filter feature selection methods, such as Information Gain (IG) and Chi-Squared (CHI). This comparison is made through the accuracy rate and F1-score of classification models after implementing these feature selection methods.</p> 2021-07-10T00:00:00+08:00 Copyright (c) 2021 Statistics, Optimization & Information Computing
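As a concrete example of the filter approach described above, the sketch below scores each feature by the Jensen-Shannon divergence between its class-conditional histograms in a two-class problem; the binning, smoothing constant and function names are our own choices, not the paper's experimental setup.

```python
import numpy as np
from scipy.stats import entropy  # Kullback-Leibler divergence of normalized vectors

def js_divergence(p, q):
    """Jensen-Shannon divergence between two discrete distributions."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)
    return 0.5 * entropy(p, m) + 0.5 * entropy(q, m)

def js_feature_scores(X, y, bins=10):
    """Score each feature by the JS divergence between its class-conditional
    histograms (two-class case); higher scores suggest more discriminative features."""
    scores = []
    for j in range(X.shape[1]):
        edges = np.histogram_bin_edges(X[:, j], bins=bins)
        h0, _ = np.histogram(X[y == 0, j], bins=edges)
        h1, _ = np.histogram(X[y == 1, j], bins=edges)
        scores.append(js_divergence(h0 + 1e-9, h1 + 1e-9))  # smoothing avoids empty bins
    return np.array(scores)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 2] + 0.1 * rng.normal(size=200) > 0).astype(int)
print(js_feature_scores(X, y).round(3))  # feature 2 typically receives the largest score
```

Features can then be ranked by score and the top subset passed to any downstream classifier, which is the sense in which a divergence function acts as a filter.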
http://www.iapress.org/index.php/soic/article/view/985 Robust Liu-Type Estimator for SUR Model 2021-07-28T06:16:01+08:00 Tarek Omara Tarek_em@yahoo.com <p>The Liu-type estimator is one of the shrinkage estimators used to remedy the problem of multicollinearity in the SUR model, but it is sensitive to outliers. In this paper, we introduce the S Liu-type (SLiu-type) and MM Liu-type (MMLiu-type) estimators for the SUR model. These estimators merge the Liu-type estimator with the S-estimator and the MM-estimator, which gives them high robustness at different levels of efficiency while still preventing the harmful effects of multicollinearity. Moreover, to obtain more robust features, we modify the Liu-type estimator by basing it on the MM estimator instead of the GLS estimator. The asymptotic properties of the suggested estimators are discussed, and the fast and robust bootstrap (FRB) is used to obtain them. Furthermore, we conduct a simulation study to show how well the suggested robust estimators perform relative to the other estimators across several factors.</p> 2021-07-26T10:23:57+08:00 Copyright (c) 2021 Statistics, Optimization & Information Computing
http://www.iapress.org/index.php/soic/article/view/1035 Similarity Technique Effectiveness of Optimized Fuzzy C-means Clustering Based on Fuzzy Support Vector Machine for Noisy Data 2021-07-27T03:21:16+08:00 Hoda Khanali hkhanali87@gmail.com Babak Vaziri bab.vaziri@iauctb.ac.ir <p>Fuzzy VIKOR C-means (FVCM) is an unsupervised fuzzy clustering algorithm that improves the accuracy and computational speed of Fuzzy C-means (FCM), reducing sensitivity to noisy and outlier data and enhancing the performance and quality of the clusters. Since FVCM allocates data to a specific cluster based on a similarity technique, reducing the effect of noisy data increases the quality of the clusters. This paper presents a new approach, called FVCM-FSVM, for accurately assigning noisy data to clusters, overcoming the limitations caused by noisy points through a fuzzy support vector machine (FSVM): at each stage, samples with a high degree of membership are selected for training the FSVM classifier, and the labels of the remaining samples are then predicted; the process continues until FVCM-FSVM converges. The results of numerical experiments show that the proposed approach performs better than FVCM and achieves high accuracy.</p> 2021-07-10T00:00:00+08:00 Copyright (c) 2021 Statistics, Optimization & Information Computing
http://www.iapress.org/index.php/soic/article/view/1005 A New Hybrid Optimizer for Global Optimization Based on a Comparative Study Remarks of Classical Gradient Descent Variants 2021-07-30T06:40:44+08:00 Mouad Touarsi touarsi.mouad@gmail.com Driss Gretete touarsi.mouad@gmail.com Abdelmajid Elouadi abdelmajid.elouadi@uit.ac.ma <p>In this paper, we present an empirical comparison of several gradient descent variants used to solve global optimization problems over large search domains. The aim is to identify which of them is more suitable for solving an optimization problem regardless of the features of the test function used. Five variants of gradient descent were implemented in the R language and tested on a benchmark of five test functions. We establish the dependence between the choice of variant and the obtained performance using the chi-squared test on a sample of 120 experiments. The test functions vary in convexity and number of local minima and are classified according to several criteria. We chose a range of values for each algorithm parameter, and results are compared in terms of accuracy and convergence speed. Based on the obtained results, we define a priority of usage for these variants and contribute a new hybrid optimizer. The new optimizer is tested on a benchmark of well-known test functions, and two real applications are proposed. Except for the classical gradient descent algorithm, only stochastic versions of these variants are considered in this paper.</p> 2021-07-10T00:00:00+08:00 Copyright (c) 2021 Statistics, Optimization & Information Computing
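For readers unfamiliar with the variants compared above, the sketch below contrasts two classical ones, plain gradient descent and its momentum (heavy-ball) variant, on a deliberately ill-conditioned quadratic bowl chosen so that both provably converge; the objective, learning rate and momentum coefficient are illustrative, and this is not the paper's R code, its benchmark functions, or its hybrid optimizer.

```python
import numpy as np

# A deliberately ill-conditioned quadratic bowl: f(p) = 0.5 * (p0^2 + 100 * p1^2).
def f(p):
    return 0.5 * (p[0] ** 2 + 100.0 * p[1] ** 2)

def grad_f(p):
    return np.array([p[0], 100.0 * p[1]])

def gradient_descent(p, lr=1e-3, steps=1000):
    """Plain gradient descent."""
    for _ in range(steps):
        p = p - lr * grad_f(p)
    return p

def momentum_descent(p, lr=1e-3, beta=0.9, steps=1000):
    """Classical momentum (heavy-ball) variant."""
    v = np.zeros_like(p)
    for _ in range(steps):
        v = beta * v - lr * grad_f(p)
        p = p + v
    return p

p0 = np.array([10.0, 1.0])
print("plain    :", f(gradient_descent(p0.copy())))   # slow along the flat direction
print("momentum :", f(momentum_descent(p0.copy())))   # much closer to the minimum
```

On the multimodal benchmark functions studied in the paper, the stochastic versions, parameter ranges and restarts matter far more than this mechanical difference between update rules; the sketch only shows the latter.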
http://www.iapress.org/index.php/soic/article/view/879 Automated Noise Detection in a Database Based on a Combined Method 2021-07-30T06:42:53+08:00 Mahdieh Ataeyan mahdiehata@yahoo.com Negin Daneshpour ndaneshpour@sru.ac.ir <p>Data quality has diverse dimensions, of which accuracy is the most important. Data cleaning is one of the preprocessing steps in data mining and consists of detecting errors and repairing them. Noise is a common type of error that occurs in databases. This paper proposes an automated noise detection method based on k-means clustering. First, each attribute (A<sub>j</sub>) is temporarily removed from the data and k-means clustering is applied to the remaining attributes. Thereafter, the k-nearest neighbors algorithm is used within each cluster, and a value for A<sub>j</sub> is predicted in each record from its nearest neighbors. The proposed method detects noisy attributes using these predicted values. The method is able to identify several noisy values within a single record and can detect noise in fields with different data types. Experiments show that the method detects, on average, 92% of the noise present in the data. The proposed method is compared with a noise detection method based on association rules; the results indicate that the proposed method improves noise detection by 13% on average.</p> 2021-06-09T00:00:00+08:00 Copyright (c) 2021 Statistics, Optimization & Information Computing
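A rough sketch of the detection idea described above (drop one attribute, cluster the rest with k-means, predict the dropped attribute inside each cluster with k-nearest neighbours, and flag values far from their prediction) is given below; the residual-based flagging rule and the thresholds are our own simplification, and the paper's handling of mixed data types and its exact decision rule are not reproduced.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsRegressor

def detect_noise(X, n_clusters=3, n_neighbors=5, threshold=2.0):
    """For each attribute j: drop it, cluster the remaining attributes with k-means,
    predict the dropped attribute within each cluster via k-nearest neighbours,
    and flag values whose residual exceeds `threshold` residual standard deviations."""
    X = np.asarray(X, float)
    flags = np.zeros(X.shape, dtype=bool)
    for j in range(X.shape[1]):
        rest = np.delete(X, j, axis=1)
        labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(rest)
        for c in np.unique(labels):
            idx = np.where(labels == c)[0]
            k = min(n_neighbors, len(idx))
            knn = KNeighborsRegressor(n_neighbors=k).fit(rest[idx], X[idx, j])
            pred = knn.predict(rest[idx])          # in-sample prediction keeps the sketch short
            resid = X[idx, j] - pred
            scale = np.std(resid) + 1e-12
            flags[idx[np.abs(resid) > threshold * scale], j] = True
    return flags
```

Flagged cells can then be reviewed or repaired; categorical attributes would need a classifier in place of the regressor, which is part of what the paper's combined method addresses.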
http://www.iapress.org/index.php/soic/article/view/996 An Optimal Adaptive Variable Sample Size Scheme for the Multivariate Coefficient of Variation 2021-07-27T03:21:17+08:00 KHAI WAH KHAW khaiwah@usm.my XINYING CHEW xinying@usm.my MING HA LEE mhlee@swinburne.edu.my WAI CHUNG YEONG waichungy@sunway.edu.my <p>The development of efficient process monitoring systems has always received great attention. Previous studies revealed that the coefficient of variation (CV) is important in ensuring process quality, especially for monitoring processes in which the mean and variance are highly correlated. Since almost all industrial process monitoring involves two or more related quality characteristics monitored simultaneously, this paper incorporates the salient feature of the adaptive variable sample size (VSS) scheme into the standard multivariate CV (MCV) chart, yielding the VSS MCV chart. A Markov chain model is developed to derive the chart's performance measures, i.e. the average run length (ARL), the standard deviation of the run length (SDRL), the average sample size (ASS), the average number of observations to signal (ANOS) and the expected average run length (EARL). The numerical comparison shows that the proposed chart outperforms the existing standard MCV chart in detecting small and moderate upward and downward MCV shifts.</p> 2021-07-11T00:00:00+08:00 Copyright (c) 2021 Statistics, Optimization & Information Computing
http://www.iapress.org/index.php/soic/article/view/814 Sequential Monte Carlo Filters with Parameters Learning for Commodity Pricing Models 2021-07-30T06:48:17+08:00 Saba Infante sinfante64@gmail.com Luis Sánchez lasa_58@hotmail.com Aracelis Hernández arhernan65@gmail.com José Marcano jmarcano1967@gmail.com <p>In this article, an estimation methodology based on sequential Monte Carlo (SMC) algorithms is proposed that jointly estimates states and parameters. The relationship between the prices of futures contracts and the spot prices of primary products is determined, and the evolution of prices and the volatility of historical data from the primary market (gold and soybean) are analyzed. Two stochastic models for estimating the states and parameters are considered; the parameters and states describe a physical measure (associated with the spot price) and a risk-neutral measure (associated with the futures markets). Short-term price dynamics are determined through mean reversion and volatility, while long-term dynamics are determined through the futures markets. Other characteristics such as seasonal patterns, price spikes, market-dependent volatilities, and non-seasonality can also be observed. The methodology uses parameter learning; specifically, three SMC algorithms for state-space models with unknown parameters are proposed. The first is a particle filter based on sequential importance sampling with resampling (SISR). The second is the Storvik algorithm [19], which estimates the posterior distribution of states and parameters supported on low-dimensional spaces by using sufficient statistics computed from the filtered distribution. The third is Carvalho's Particle Learning and Smoothing (PLS) algorithm [31]. The cash prices of contracts with future delivery dates are analyzed. The results indicate that, with postponement of payment, futures prices at different maturity dates are highly correlated with the spot price. Likewise, for contracts with delivery dates in the last periods of 2017, the spot price is found to be lower than the prices of contracts expiring in 12 and 24 months, whereas the opposite occurs for contracts expiring in 1 and 6 months.</p> 2021-06-22T00:00:00+08:00 Copyright (c) 2021 Statistics, Optimization & Information Computing
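The first of the three algorithms mentioned above is a particle filter based on sequential importance sampling with resampling (SISR). The sketch below implements a bootstrap SISR filter for a toy linear-Gaussian state-space model with known parameters; the model, parameter values and function names are illustrative, and the Storvik and Particle Learning and Smoothing steps for parameter learning are not shown.

```python
import numpy as np

def bootstrap_filter(y, n_particles, phi, sigma_state, sigma_obs, rng):
    """Bootstrap (SISR) particle filter for the toy model
    x_t = phi * x_{t-1} + state noise,  y_t = x_t + observation noise."""
    T = len(y)
    particles = rng.normal(0.0, 1.0, size=n_particles)
    means = np.empty(T)
    for t in range(T):
        # propagate particles through the state equation
        particles = phi * particles + rng.normal(0.0, sigma_state, size=n_particles)
        # weight by the Gaussian observation likelihood
        logw = -0.5 * ((y[t] - particles) / sigma_obs) ** 2
        w = np.exp(logw - logw.max())
        w /= w.sum()
        means[t] = np.sum(w * particles)            # filtered mean estimate
        # multinomial resampling (the "R" in SISR)
        particles = particles[rng.choice(n_particles, size=n_particles, p=w)]
    return means

rng = np.random.default_rng(0)
x = np.cumsum(rng.normal(size=200)) * 0.1           # hypothetical latent path (random walk)
y = x + rng.normal(0.0, 0.3, size=200)              # noisy observations
print(bootstrap_filter(y, 1000, phi=1.0, sigma_state=0.1, sigma_obs=0.3, rng=rng)[:5])
```

Parameter-learning schemes such as Storvik's algorithm and Particle Learning augment each particle with sufficient statistics for the unknown parameters and resample those alongside the states, which is the part of the paper's machinery this sketch leaves out.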
http://www.iapress.org/index.php/soic/article/view/1112 GH Biplot in Reduced-Rank Regression based on Partial Least Squares 2021-07-27T03:21:19+08:00 Wilin Alvarez willin.alvarez@ikiam.edu.ec Victor John Griffin victorgriffin77@gmail.com <p>One of the challenges facing statisticians is to provide tools that enable researchers to interpret and present their data and conclusions in ways easily understood by the scientific community. One of the tools available for this purpose is a multivariate graphical representation called the reduced-rank regression biplot. This biplot describes how to construct a graphical representation in nonsymmetric contexts such as least-squares approximations in reduced-rank multivariate linear regression models. However, multicollinearity invalidates the interpretation of a regression coefficient as the conditional effect of a regressor given the values of the other regressors, and hence makes biplots of regression coefficients useless. To overcome this problem, Alvarez and Griffin presented a procedure for coefficient estimation in a reduced-rank multivariate regression model in the presence of multicollinearity, based on PLS (Partial Least Squares) and the generalized singular value decomposition. Based on these same procedures, a biplot construction is now presented for a reduced-rank multivariate regression model in the presence of multicollinearity. This procedure, called the PLSSVD GH biplot, provides a useful data analysis tool which allows visual appraisal of the structure of the dependent and independent variables. This paper defines the procedure and shows several of its properties. It also provides an implementation of the routines in R and presents a real-life application involving data from the FAO food database to illustrate the procedure and its properties.</p> 2021-07-10T00:00:00+08:00 Copyright (c) 2021 Statistics, Optimization & Information Computing
http://www.iapress.org/index.php/soic/article/view/614 Expectation Properties of Generalized Order Statistics from Poisson Lomax Distribution 2021-07-27T03:21:19+08:00 Haseeb Athar haseebathar68@gmail.com Zubdahe Noor haseebathar68@gmail.com Saima Zarrin almomin646@gmail.com Hanadi N.S. Almutairi hno.8@live.com <p>The Poisson Lomax distribution was proposed by [3] as a useful model for analyzing lifetime data. In this paper, we derive recurrence relations for the single and product moments of generalized order statistics from this distribution. Further, a characterization of the distribution is carried out. Some deductions and particular cases are also discussed.</p> 2020-05-29T00:00:00+08:00 Copyright (c) 2020 Statistics, Optimization & Information Computing
http://www.iapress.org/index.php/soic/article/view/1130 A New Family of Continuous Distributions: Properties, Copulas and Real Life Data Modeling 2021-07-27T03:21:20+08:00 Mohamed Refaie refaie_top@yahoo.com <p>A new family of distributions called the Kumaraswamy Rayleigh family is defined and studied. Some of its relevant statistical properties are derived. Many new bivariate type G families are derived using the Farlie-Gumbel-Morgenstern copula, the modified Farlie-Gumbel-Morgenstern copula, the Clayton copula and Renyi's entropy copula. The method of maximum likelihood estimation is used. Some special models based on the log-logistic, exponential, Weibull, Rayleigh, Pareto type II, Burr type X and Lindley distributions are presented and studied. Three-dimensional skewness and kurtosis plots are presented, and a graphical assessment is performed. Two real-life applications are given to illustrate the flexibility, potentiality and importance of the new family.</p> 2021-07-26T10:43:17+08:00 Copyright (c) 2021 Statistics, Optimization & Information Computing
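Assuming the standard Kumaraswamy-G construction F(x) = 1 - [1 - G(x)^a]^b with a Rayleigh baseline G(x) = 1 - exp(-x^2 / (2 sigma^2)) (the paper's exact parameterization may differ), the sketch below evaluates the CDF and draws samples by inverse transform; the parameter values and function names are illustrative.

```python
import numpy as np

def kw_rayleigh_cdf(x, a, b, sigma):
    """Kumaraswamy-G CDF with a Rayleigh baseline G(x) = 1 - exp(-x^2 / (2 sigma^2))."""
    g = 1.0 - np.exp(-x ** 2 / (2.0 * sigma ** 2))
    return 1.0 - (1.0 - g ** a) ** b

def kw_rayleigh_sample(n, a, b, sigma, rng):
    """Inverse-transform sampling: invert the Kumaraswamy layer, then the Rayleigh baseline."""
    u = rng.uniform(size=n)
    g = (1.0 - (1.0 - u) ** (1.0 / b)) ** (1.0 / a)
    return sigma * np.sqrt(-2.0 * np.log(1.0 - g))

rng = np.random.default_rng(1)
x = kw_rayleigh_sample(10_000, a=2.0, b=1.5, sigma=1.0, rng=rng)
# the empirical CDF should track the analytic CDF
print(np.mean(x <= 1.0), kw_rayleigh_cdf(1.0, 2.0, 1.5, 1.0))
```

The two extra shape parameters a and b control skewness and tail weight on top of the Rayleigh scale sigma, which is the kind of added flexibility the family is intended to provide.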