Solving a Typical Small Sample Size MRSM Dataset Problem Using a Flexible Hybrid Ensemble Approach for Credibility

  • Delson Chikobvu University of the Free State
  • Domingo Pavolo University of the Free State
Keywords: Multiresponse surface methodology, flexible hybrid ensembling, credibility of results, solution uncertainty, small sample size problems, simultaneous optimisation


Multiresponse surface methodology (MRSM) often involves small-sample data analytics, which, statistically, undermines the credibility of regression models. The problem is worsened by dataset, model selection and solution methodology uncertainties. Solution methodologies that select and use a single best model per response at simultaneous optimisation struggle to deal with these problems effectively. This paper exploits the fact that different model selection criteria choose different models, using them within a flexible hybrid ensemble system to generate several solutions for integration and comparison. Mean square prediction error, together with its bias-variance-covariance decomposition, was computed and analysed at simultaneous optimisation. The results suggest that the credibility of the final solution is enhanced by working with multiple models, solution methodologies and results. However, the results show no significant benefit from small sample size corrections to the model selection criteria, and analysis of the bias-variance-covariance decompositions at simultaneous optimisation does not support relying on theoretical optimality for best results.
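The bias-variance-covariance decomposition of ensemble prediction error mentioned above follows the identity of Ueda and Nakano (1996): for the mean of M predictors, MSPE = bias² + var/M + (1 − 1/M)·covar, where bias, var and covar are the members' average bias, average variance and average pairwise covariance. A minimal sketch of that computation (the function name and the (M, T) prediction layout are illustrative assumptions, not the authors' code):

```python
import numpy as np

def bvc_decomposition(preds, target):
    """Bias-variance-covariance decomposition of ensemble MSPE
    (Ueda & Nakano, 1996) at one design point.

    preds  : array of shape (M, T) -- predictions of M ensemble
             members over T resampled trials.
    target : scalar true (or best-available) response value.
    """
    M = preds.shape[0]
    member_means = preds.mean(axis=1)                    # E[f_i] per member
    bias = np.mean(member_means - target)                # average bias
    var = np.mean(preds.var(axis=1))                     # average variance
    cov = np.cov(preds, bias=True)                       # (M, M), ddof = 0
    covar = (cov.sum() - np.trace(cov)) / (M * (M - 1))  # avg pairwise cov
    # The three terms recombine exactly into the MSPE of the ensemble mean:
    mspe = bias**2 + var / M + (1.0 - 1.0 / M) * covar
    return bias, var, covar, mspe
```

For an ensemble average the identity is exact: `mspe` equals the empirical mean squared error of the ensemble-mean prediction, and the covariance term makes explicit how member diversity lowers the ensemble's error.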

Author Biography

Delson Chikobvu, University of the Free State
Dr. Delson Chikobvu is a senior lecturer in the Department of Mathematical Statistics and Actuarial Sciences, Faculty of Natural and Agricultural Sciences.



How to Cite
Chikobvu, D., & Pavolo, D. (2024). Solving a Typical Small Sample Size MRSM Dataset Problem Using a Flexible Hybrid Ensemble Approach for Credibility. Statistics, Optimization & Information Computing, 12(2), 310-324.