Although neural networks are widely used in a large number of applications, they are still regarded as black boxes, which makes it difficult to dimension them or to evaluate their prediction error. This has led to growing interest in the overlap between neural networks and more traditional statistical methods, which can help overcome these problems. In this article, a mathematical framework relating neural networks and polynomial regression is explored by building an explicit expression for the coefficients of a polynomial regression from the weights of a given neural network, using a Taylor expansion approach. This is achieved for single-hidden-layer neural networks in regression problems. The validity of the proposed method depends on factors such as the distribution of the synaptic potentials and the chosen activation function. Its performance is empirically tested by simulating synthetic data generated from polynomials and training neural networks with different structures and hyperparameters, showing that nearly identical predictions can be obtained when certain conditions are met. Finally, when learning from polynomial-generated data, the proposed method produces polynomials that correctly approximate the data locally.
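The core idea of the paper, mapping the weights of a single-hidden-layer network to polynomial regression coefficients via a Taylor expansion of the activation function, can be illustrated with a minimal sketch. The snippet below assumes a scalar input and a tanh activation, expanding tanh around each neuron's bias and collecting powers of the input; the function name and setup are ours for illustration, not the paper's actual implementation.

```python
import numpy as np

def poly_coeffs_from_nn(v, w, b, degree=3):
    """Approximate the 1-D single-hidden-layer tanh network
        f(x) = sum_j v[j] * tanh(w[j] * x + b[j])
    by a degree-`degree` polynomial, Taylor-expanding tanh around
    each bias b[j] (illustrative sketch; not the paper's code)."""
    t = np.tanh(b)
    # Closed-form derivatives of tanh evaluated at b, orders 0..3
    d = [t,
         1 - t**2,
         -2 * t * (1 - t**2),
         (1 - t**2) * (6 * t**2 - 2)]
    coeffs = []
    fact = 1.0
    for k in range(degree + 1):
        if k > 0:
            fact *= k  # running factorial k!
        # k-th polynomial coefficient: sum_j v_j * tanh^(k)(b_j) / k! * w_j^k
        coeffs.append(float(np.sum(v * d[k] * w**k) / fact))
    return coeffs  # coeffs[k] multiplies x**k

# Small weights keep the synaptic potentials near the expansion point,
# so the polynomial closely matches the network for small |x|.
v = np.array([0.5, -0.3])
w = np.array([0.2, 0.1])
b = np.array([0.1, -0.2])
coeffs = poly_coeffs_from_nn(v, w, b)
x = 0.1
net_out = float(np.sum(v * np.tanh(w * x + b)))
poly_out = sum(c * x**k for k, c in enumerate(coeffs))
print(abs(net_out - poly_out))  # tiny: the polynomial matches locally
```

As the abstract notes, the quality of this local approximation degrades when the synaptic potentials `w[j]*x + b[j]` drift far from the expansion point, which is why the distribution of the potentials matters for the method's validity.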
Keywords: Polynomial regression, Neural networks, Machine learning
JCR impact factor and WoS quartile: 8.050 - Q1 (2020)
DOI reference: 10.1016/j.neunet.2021.04.036
Published in print: October 2021.
Published online: April 2021.
P. Morala Miguélez, J. Cifuentes, R.E. Lillo, I. Úcar. Towards a mathematical framework to inform neural network modelling via polynomial regression. Neural Networks. Vol. 142, pp. 57-72, October 2021. [Online: April 2021]