Computer Science and Information Systems, 2022, Volume 19, Issue 2, Pages 763-781
https://doi.org/10.2298/CSIS210820004S
A neuroevolutionary method for knowledge space construction
Segedinac Milan (Faculty of Technical Sciences, Novi Sad, Serbia), milansegedinac@uns.ac.rs
Milićević Nemanja (SmartCat, Novi Sad, Serbia), nemanja.milicevic@smartcat.io
Čeliković Milan (Faculty of Technical Sciences, Novi Sad, Serbia), milancel@uns.ac.rs
Savić Goran (Faculty of Technical Sciences, Novi Sad, Serbia), savicg@uns.ac.rs
In this paper, we propose a novel method for constructing knowledge spaces based on neuroevolution. The main advantage of the proposed approach is that it is better suited to constructing large knowledge spaces than traditional data-driven methods. The core idea of the method is that, if knowledge states are treated as neurons in a neural network, the optimal topology of such a network is also the optimal knowledge space. To apply the neuroevolutionary method, a set of analogies between knowledge spaces and neural networks was established and is described in this paper. The approach is evaluated against the minimized and corrected inductive item tree analysis, the de facto standard algorithm for data-driven knowledge space construction, and the comparison confirms our assumptions.
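To make the core idea more concrete, the following minimal Python sketch illustrates a data-driven, evolutionary search for a knowledge structure. It is an assumption for illustration only, not the method described in the paper: candidate structures are encoded as sets of knowledge states (bitmasks over a small item domain) and evolved with a plain genetic algorithm whose fitness rewards closeness to a handful of hypothetical response patterns, whereas the paper's method evolves the topology of a neural network whose neurons play the role of knowledge states. The item domain, response patterns, fitness function, mutation operators, and hyperparameters below are all illustrative choices, and union-closure (which distinguishes a knowledge space from a general knowledge structure) is not enforced here.

# Illustrative sketch (not the authors' implementation): genetic-algorithm search
# over candidate knowledge structures encoded as sets of item bitmasks.
import random

N_ITEMS = 5                           # size of the item domain Q
EMPTY, FULL = 0, (1 << N_ITEMS) - 1   # the empty state and the full state Q

# Hypothetical observed response patterns (bitmasks of correctly solved items).
OBSERVED = [0b00001, 0b00011, 0b00111, 0b01111, 0b11111, 0b00101]

def distance(pattern, state):
    """Symmetric-difference distance between a response pattern and a state."""
    return bin(pattern ^ state).count("1")

def fitness(structure):
    """Lower is better: mean distance from each observed pattern to its nearest
    state, plus a small penalty on structure size to keep it parsimonious."""
    fit = sum(min(distance(p, s) for s in structure) for p in OBSERVED) / len(OBSERVED)
    return fit + 0.05 * len(structure)

def random_structure():
    """A random candidate structure; the empty and full states are always included."""
    extra = {random.randint(0, FULL) for _ in range(random.randint(1, 6))}
    return frozenset({EMPTY, FULL} | extra)

def mutate(structure):
    """Add, drop, or perturb one state (the empty and full states are kept)."""
    states = set(structure)
    op = random.choice(["add", "drop", "flip"])
    if op == "add":
        states.add(random.randint(0, FULL))
    elif op == "drop" and len(states) > 2:
        states.discard(random.choice([s for s in states if s not in (EMPTY, FULL)]))
    else:
        s = random.choice(list(states))
        states.discard(s)
        states.add(s ^ (1 << random.randrange(N_ITEMS)))
    states |= {EMPTY, FULL}
    return frozenset(states)

def evolve(pop_size=40, generations=200):
    """Truncation selection: keep the better half, refill by mutating survivors."""
    population = [random_structure() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness)
        survivors = population[: pop_size // 2]
        population = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return min(population, key=fitness)

if __name__ == "__main__":
    best = evolve()
    print("Best structure (as item bitmasks):", sorted(best))
    print("Fitness:", round(fitness(best), 3))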
Keywords: Genetic algorithms, Knowledge Space Theory, Neural networks, Educational technology