Radial basis functions (RBFs)


26 March 2018




A radial basis function (RBF) is a real-valued function $\phi$ whose value depends only on the distance from the origin, so that $\phi(\mathbf{x}) = \phi(\|\mathbf{x}\|)$; or alternatively on the distance from some other point $\mathbf{c}$, called a center, so that $\phi(\mathbf{x}, \mathbf{c}) = \phi(\|\mathbf{x} - \mathbf{c}\|)$. Any function $\phi$ that satisfies the property $\phi(\mathbf{x}) = \phi(\|\mathbf{x}\|)$ is a radial function. The norm is usually Euclidean distance, although other distance functions are also possible.

Sums of radial basis functions are typically used to approximate given functions. This approximation process can also be interpreted as a simple kind of neural network; this was the context in which they originally surfaced, in work by David Broomhead and David Lowe in 1988, which stemmed from Michael J. D. Powell’s seminal research from 1977. RBFs are also used as a kernel in support vector classification.

Types

Commonly used types of radial basis functions include (writing $r = \|\mathbf{x} - \mathbf{x}_i\|$, with shape parameter $\varepsilon$):

Gaussian:

$$\phi(r) = e^{-(\varepsilon r)^{2}}$$

Inverse quadratic:

$$\phi(r) = \frac{1}{1 + (\varepsilon r)^{2}}$$

Inverse multiquadric:

$$\phi(r) = \frac{1}{\sqrt{1 + (\varepsilon r)^{2}}}$$
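
For concreteness, here is a minimal Python sketch of these profiles; the function names and the `epsilon` shape parameter are illustrative choices, not part of the original article:

```python
import numpy as np

# Minimal sketch of the RBF profiles above; epsilon is the shape
# parameter and r is a (possibly vectorized) nonnegative distance.
def gaussian(r, epsilon=1.0):
    return np.exp(-(epsilon * r) ** 2)

def inverse_quadratic(r, epsilon=1.0):
    return 1.0 / (1.0 + (epsilon * r) ** 2)

def inverse_multiquadric(r, epsilon=1.0):
    return 1.0 / np.sqrt(1.0 + (epsilon * r) ** 2)
```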

Approximation

Radial basis functions are typically used to build up function approximations of the form

$$y(\mathbf{x}) = \sum_{i=1}^{N} w_i \, \phi(\|\mathbf{x} - \mathbf{x}_i\|),$$

where the approximating function $y(\mathbf{x})$ is represented as a sum of $N$ radial basis functions, each associated with a different center $\mathbf{x}_i$ and weighted by an appropriate coefficient $w_i$. The weights $w_i$ can be estimated using the matrix methods of linear least squares, because the approximating function is linear in the weights.
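Because the model is linear in the weights, fitting reduces to an ordinary least-squares solve on the design matrix $\Phi_{ji} = \phi(\|\mathbf{x}_j - \mathbf{x}_i\|)$. A sketch, assuming a Gaussian basis and one-dimensional inputs (all names and the toy data here are illustrative):

```python
import numpy as np

def fit_rbf_weights(centers, x_train, y_train, epsilon=1.0):
    """Estimate the weights w_i by linear least squares.

    Builds the design matrix Phi with Phi[j, i] = phi(|x_j - c_i|)
    and solves min_w ||Phi w - y||^2; the model is linear in w.
    """
    r = np.abs(x_train[:, None] - centers[None, :])   # pairwise distances
    phi = np.exp(-(epsilon * r) ** 2)                 # Gaussian basis
    w, *_ = np.linalg.lstsq(phi, y_train, rcond=None)
    return w

def predict(centers, w, x, epsilon=1.0):
    r = np.abs(x[:, None] - centers[None, :])
    return np.exp(-(epsilon * r) ** 2) @ w

# Hypothetical usage: approximate sin(x) from noisy samples.
rng = np.random.default_rng(0)
x_train = np.linspace(0.0, 2 * np.pi, 40)
y_train = np.sin(x_train) + 0.05 * rng.standard_normal(40)
centers = np.linspace(0.0, 2 * np.pi, 10)
w = fit_rbf_weights(centers, x_train, y_train)
y_hat = predict(centers, w, x_train)
```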

Approximation schemes of this kind have been used particularly in time series prediction, in the control of nonlinear systems exhibiting sufficiently simple chaotic behaviour, and in 3D reconstruction in computer graphics (for example, hierarchical RBF and Pose Space Deformation).

RBF Network

Figure: Two unnormalized Gaussian radial basis functions in one input dimension, with centers at $x_1 = 0.75$ and $x_2 = 3.25$.

The sum

$$y(\mathbf{x}) = \sum_{i=1}^{N} w_i \, \phi(\|\mathbf{x} - \mathbf{x}_i\|)$$

can also be interpreted as a rather simple single-layer type of artificial neural network called a radial basis function network, with the radial basis functions taking on the role of the activation functions of the network. It can be shown that any continuous function on a compact interval can in principle be interpolated with arbitrary accuracy by a sum of this form, if a sufficiently large number N of radial basis functions is used.
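To illustrate the interpolation claim: if one basis function is centered at each of the $N$ data points, the square system $\Phi w = y$ can be solved exactly, so the sum passes through every sample. A sketch under those assumptions (Gaussian basis; names illustrative):

```python
import numpy as np

def interpolate_exact(x_data, y_data, epsilon=1.0):
    """Center one Gaussian RBF at each data point and solve Phi w = y,
    so y(x_j) reproduces y_data[j] exactly (up to conditioning)."""
    r = np.abs(x_data[:, None] - x_data[None, :])
    phi = np.exp(-(epsilon * r) ** 2)
    return np.linalg.solve(phi, y_data)

x_data = np.linspace(0.0, 4.0, 9)
w = interpolate_exact(x_data, np.sin(x_data))
# The residual at the data points is ~0, limited only by conditioning.
```

The Gaussian kernel matrix is positive definite for distinct centers, which is what guarantees the solve succeeds.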

The approximant $y(\mathbf{x})$ is differentiable with respect to the weights $w_i$. The weights can thus be learned using any of the standard iterative methods for neural networks.
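Since the gradient of the squared error with respect to $w$ is available in closed form, even plain batch gradient descent works. A sketch (Gaussian basis; the learning rate and step count are arbitrary illustrative choices):

```python
import numpy as np

def train_weights_gd(centers, x_train, y_train, epsilon=1.0,
                     lr=0.05, steps=2000):
    """Learn the weights by gradient descent on the mean squared error.

    Since y(x) = sum_i w_i * phi(|x - c_i|) is linear in w, the
    gradient of the MSE is Phi^T (Phi w - y) / n.
    """
    r = np.abs(x_train[:, None] - centers[None, :])
    phi = np.exp(-(epsilon * r) ** 2)
    w = np.zeros(len(centers))
    n = len(x_train)
    for _ in range(steps):
        residual = phi @ w - y_train
        w -= lr * (phi.T @ residual) / n
    return w
```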

Using radial basis functions in this manner yields a reasonable interpolation approach provided that the fitting set has been chosen such that it covers the entire range systematically (equidistant data points are ideal). However, without a polynomial term that is orthogonal to the radial basis functions, estimates outside the fitting set tend to perform poorly.
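One common remedy is to augment the basis with a low-degree polynomial and impose the moment conditions $P^{\mathsf{T}} w = 0$, which make the radial part orthogonal to the polynomial. The sketch below assumes a Gaussian basis with centers at the data points and a linear tail; the helper names are illustrative:

```python
import numpy as np

def fit_rbf_with_linear_tail(x_data, y_data, epsilon=1.0):
    """Solve the augmented interpolation system
        [Phi  P] [w]   [y]
        [P^T  0] [c] = [0]
    where P = [1, x]. The constraint P^T w = 0 keeps the radial part
    orthogonal to the polynomial, improving behaviour outside the data.
    """
    n = len(x_data)
    r = np.abs(x_data[:, None] - x_data[None, :])
    phi = np.exp(-(epsilon * r) ** 2)
    P = np.column_stack([np.ones(n), x_data])          # constant + linear
    A = np.block([[phi, P], [P.T, np.zeros((2, 2))]])
    rhs = np.concatenate([y_data, np.zeros(2)])
    sol = np.linalg.solve(A, rhs)
    return sol[:n], sol[n:]   # radial weights w, polynomial coefficients c

def predict(x_data, w, c, x, epsilon=1.0):
    r = np.abs(x[:, None] - x_data[None, :])
    return np.exp(-(epsilon * r) ** 2) @ w + c[0] + c[1] * x
```

For polyharmonic or thin-plate splines this augmented system is the standard interpolation formulation; with a Gaussian basis it is shown here only to illustrate the idea.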

