$L^2(R^d)$ Approximation Capability of Incremental Constructive Feedforward Neural Networks with Random Hidden Units

DOI：10.3770/j.issn:1000-341X.2010.05.004

Authors and affiliations: LONG Jin-ling (1. School of Mathematical Sciences, Dalian University of Technology, Dalian 116024, Liaoning, China; 2. Department of Mathematics, Southeast University, Nanjing 210096, Jiangsu, China); LI Zheng-xue (School of Mathematical Sciences, Dalian University of Technology, Dalian 116024, Liaoning, China); NAN Dong (College of Applied Sciences, Beijing University of Technology, Beijing 100022, China)

Abstract (translated from Chinese): This paper studies the capability of three-layered incremental constructive feedforward neural networks with random hidden units to approximate functions in $L^2(R^d)$. We focus on networks with two kinds of hidden units: RBF units and TDI (translation and dilation invariant) units. Whereas conventional approximation theories for neural networks mainly establish approximation capability by existence arguments, we adopt a constructive approach. Our results show that, for RBF hidden units with any non-zero activation function $g:R\rightarrow R$ satisfying $g(\|x\|_{R^d})\in L^2(R^d)$, or for TDI hidden units with any non-zero activation function $g(x)\in L^2(R^d)$, if the weights between the hidden units and the output unit are properly chosen, then the output function $f_n$ of the incremental network with random hidden units converges to any target function in $L^2(R^d)$ with probability one as $n\rightarrow \infty$.

This paper studies the capability of incremental constructive feedforward neural networks (FNN) with random hidden units to approximate functions in $L^2(R^d)$. Two kinds of three-layered feedforward neural networks are considered: radial basis function (RBF) neural networks and translation and dilation invariant (TDI) neural networks. In contrast with conventional approximation theories for neural networks, which mainly rely on existence arguments, we follow a constructive approach: we prove that one may simply choose the parameters of the hidden units at random and then adjust the weights between the hidden units and the output unit to make the neural network approximate any function in $L^2(R^d)$ to any accuracy. Our result shows that, given any non-zero activation function $g: R \rightarrow R$ with $g(\left\|x\right\|_{R^d})\in L^2(R^d)$ for RBF hidden units, or any non-zero activation function $g(x)\in L^2(R^d)$ for TDI hidden units, the incremental network function $f_n$ with randomly generated hidden units converges to any target function in $L^2(R^d)$ with probability one as the number of hidden units $n\rightarrow \infty$, provided only that the weights between the hidden units and the output unit are properly adjusted.
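The constructive scheme described above can be sketched numerically. The following is a minimal illustration, not the paper's exact construction: it works with a Gaussian RBF unit on a discretized one-dimensional domain (so $d = 1$ and the $L^2$ inner product is approximated by a Riemann sum), generates each hidden unit's center and width at random, and sets the new output weight to the $L^2$ projection coefficient of the current residual onto the new unit. Because each step is an orthogonal projection, the $L^2$ residual is non-increasing. The target function and all parameter ranges are arbitrary choices for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Target function sampled on a 1-d grid (d = 1 for illustration only).
x = np.linspace(-5.0, 5.0, 1000)
dx = x[1] - x[0]
target = np.sin(x) * np.exp(-0.1 * x**2)

def rbf(x, center, width):
    # Gaussian RBF hidden unit: an instance of g(||x - c||) with scaling.
    return np.exp(-((x - center) / width) ** 2)

residual = target.copy()          # e_0 = f (target), with f_0 = 0
f_n = np.zeros_like(x)            # incremental network output
errors = []                       # discretized L^2 norms of the residuals

for n in range(200):
    # Randomly generate the hidden unit's parameters (center, width).
    c = rng.uniform(-5.0, 5.0)
    s = rng.uniform(0.3, 2.0)
    g = rbf(x, c, s)
    # Optimal output weight: L^2 projection of the residual onto g,
    # beta_n = <e_{n-1}, g_n> / ||g_n||^2.
    beta = np.dot(residual, g) / np.dot(g, g)
    f_n += beta * g
    residual -= beta * g
    errors.append(np.sqrt(np.sum(residual**2) * dx))

print(f"L2 error after   1 unit : {errors[0]:.4f}")
print(f"L2 error after 200 units: {errors[-1]:.4f}")
```

Since each weight is a projection coefficient, $\|e_n\|^2 = \|e_{n-1}\|^2 - \beta_n^2\|g_n\|^2 \le \|e_{n-1}\|^2$, so the printed error sequence decreases; the paper's theorem concerns the stronger statement that it converges to zero with probability one.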