Statistica Automated Neural Networks (SANN) - Neural Networks Overview

Over the past two decades, there has been an explosion of interest in neural networks, beginning with the successful application of this powerful technique across a wide range of problem domains, in areas as diverse as finance, medicine, engineering, geology, and even physics.

The sweeping success of neural networks over almost every other statistical technique can be attributed to their power, versatility, and ease of use. Neural networks are very sophisticated modeling and prediction-making techniques capable of modeling extremely complex functions and data relationships.

The ability to learn from examples is one of the many features of neural networks that enables the user to model data and establish accurate rules governing the underlying relationships between various data attributes. The neural network user gathers representative data and then invokes training algorithms, which can automatically learn the structure of the data. The user does need some heuristic knowledge of how to select and prepare data, how to select an appropriate neural network, and how to interpret the results. Nevertheless, the level of knowledge needed to successfully apply neural networks is much lower than that needed in most traditional statistical tools and techniques, especially when the neural network algorithms are hidden behind well-designed and intelligent computer programs that take the user from start to finish with just a few clicks.

Using Neural Networks

Neural networks have a remarkable ability to derive and extract meaning, rules, and trends from complicated, noisy, and imprecise data. They can be used to extract patterns and detect trends that are governed by complicated mathematical functions that are too difficult, if not impossible, to model using analytic or parametric techniques. One notable ability of neural networks is to accurately predict data that were not part of the training data set, a process known as generalization. Given these characteristics and their broad applicability, neural networks are well suited to real-world problems in research and science, business, and industry. Below are examples of areas where neural networks have been successfully applied:

  •  Signal processing
  •  Process control
  •  Robotics
  •  Classification
  •  Data preprocessing
  •  Pattern recognition
  •  Image and speech analysis
  •  Medical diagnostics and monitoring
  •  Stock market analysis and forecasting
  •  Loan or credit solicitations

The Biological Inspiration

Neural networks are also intuitively appealing, since many of their principles are based on crude, low-level models of biological neural information processing systems, which have led to the development of more intelligent computer systems that can be used in statistical and data analysis tasks. Neural networks emerged from research in artificial intelligence, mostly inspired by attempts to mimic the fault tolerance and "capacity to learn" of biological neural systems by modeling the low-level structure of the brain (see Patterson, 1996).

The brain is principally composed of a very large number (approximately ten billion) of neurons, massively interconnected with several thousand interconnections per neuron. Each neuron is a specialized cell that can create, propagate, and receive electrochemical signals. Like any biological cell, the neuron has a body, a branching input structure called the dendrites, and a branching output structure known as the axon. The axons of one cell connect to the dendrites of another via a synapse. When a neuron is activated, it fires an electrochemical signal along the axon. This signal crosses the synapses to thousands of other neurons, which may in turn fire, thus propagating the signal over the entire neural system (i.e., the biological brain). A neuron fires only if the total signal received at the cell body from the dendrites exceeds a certain level, known as the threshold.
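As a rough illustration of this firing rule, the following sketch (in Python, with entirely made-up signal values, weights, and threshold; none of these come from SANN) models a unit that fires only when the weighted sum of its incoming signals exceeds a threshold:

    # A crude model of the biological firing rule described above: the
    # "neuron" fires (outputs 1) only if the weighted sum of its incoming
    # signals exceeds a fixed threshold. All values are illustrative.
    def fires(signals, weights, threshold):
        total = sum(s * w for s, w in zip(signals, weights))
        return 1 if total > threshold else 0

    # Three incoming "dendrite" signals with different connection strengths.
    print(fires([0.9, 0.2, 0.7], [0.5, 0.3, 0.8], threshold=1.0))  # 1 (fires)
    print(fires([0.1, 0.2, 0.1], [0.5, 0.3, 0.8], threshold=1.0))  # 0 (silent)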

Although a single neuron accomplishes no meaningful task on its own, when the efforts of a large number of neurons are combined, the results become quite dramatic: together they can carry out extremely complex cognitive tasks such as learning and even consciousness. Thus, from a very large number of extremely simple processing units, the brain manages to perform extremely complex tasks. While there is a great deal of complexity in the brain that has not been discussed here, it is interesting that artificial neural networks can achieve some remarkable results using a basic model such as this.

The Basic Mathematical Model

[Figure: Schematic of a single-neuron system. The inputs x send signals to the neuron, where a weighted sum of the signals is formed and then transformed by a mathematical function f.]

Here we consider the simplest form of artificial neural network: a single neuron with a number of inputs and (for the sake of simplicity) one output. Although a more realistic artificial network typically consists of many more neurons, this model helps to shed light on the basics of the technology.

The neuron receives signals from many sources. These signals usually come from the data and are referred to as the input variables x, or just inputs. Each input reaches the neuron through a connection of a certain strength, known as a weight. The strength of a weight is represented by a number: the larger the value of a weight w, the stronger its incoming signal and, hence, the more influential the corresponding input.

Upon receiving the signals, the neuron forms a weighted sum of the inputs and passes it through the activation function f (or just activation) of the neuron. The activation function is a mathematical function that converts the weighted sum of the signals into the output y of the neuron. Thus:

y = f(w₁x₁ + w₂x₂ + … + wₙxₙ)

The outputs of the neuron are in fact the predictions of the single-neuron model for a variable in the data set, which is referred to as the target t. It is believed that there is a relationship between the inputs x and the target t, and it is the task of the neural network to model this relationship by relating the inputs to the target via a suitable mathematical function that can be learned from examples in the data set.
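To make the single-neuron model concrete, here is a minimal sketch in Python. The logistic (sigmoid) activation and the specific input and weight values are assumptions chosen for illustration; they are not taken from SANN.

    import math

    def neuron_output(x, w, f):
        # Output of a single neuron: the activation function f applied
        # to the weighted sum of the inputs x with weights w.
        weighted_sum = sum(xi * wi for xi, wi in zip(x, w))
        return f(weighted_sum)

    # A common choice of activation: the logistic (sigmoid) function.
    def sigmoid(a):
        return 1.0 / (1.0 + math.exp(-a))

    # The neuron's prediction for one observation with two inputs.
    prediction = neuron_output(x=[1.5, -0.4], w=[0.8, 0.3], f=sigmoid)
    print(prediction)  # compared against the target t during training

The output here plays the role of the model's prediction for the target t; training consists of adjusting the weights w so that the predictions match the targets as closely as possible.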

Feedforward Neural Networks

The model discussed above is the simplest neural network model one can construct. We used this model to explain some of the basic principles of neural networks and to describe the individual neuron. However, as mentioned before, a single neuron cannot perform a meaningful task on its own. Instead, many interconnected neurons are needed to achieve any specific goal. This requires us to consider the more complex neural network architectures used in practical applications.

If a network is to be of any use, there must be inputs (which carry the values of variables of interest in the outside world) and outputs (which form predictions, or control signals). Inputs and outputs correspond to sensory and motor nerves such as those coming from the eyes and leading to the hands. There can also be hidden neurons that play an internal role in the network. The input, hidden, and output neurons need to be connected.

A simple network has a feedforward structure: signals flow from the inputs, forward through any hidden units, eventually reaching the output units. Such a structure has stable behavior and fault tolerance. Feedforward neural networks are by far the most useful in solving real problems and, therefore, are the most widely used. See Bishop (1995) for more information on the various neural network types and architectures.

A typical feedforward network has neurons arranged in a distinct layered topology. Generally, the input layer simply serves to introduce the values of the input variables. The hidden and output layer neurons are each connected to all of the units in the preceding layer.

Again, it is possible to define networks that are only partially connected, i.e., connected to just some of the units in the preceding layer. However, for most applications, fully connected networks are better, and this is the type of network supported by STATISTICA Automated Neural Networks. When the network is executed, the input variable values are placed in the input units, and then the hidden and output layer units are executed in sequence. Each unit calculates its activation value by taking the weighted sum of the outputs of the units in the preceding layer; the activation value is then passed through the activation function to produce the output of the neuron. When the entire network has been executed, the outputs of the output-layer neurons serve as the output of the entire network. This layer-by-layer execution is sketched below.
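The layer-by-layer execution just described can be sketched in a few lines of Python with NumPy. The layer sizes, the random weights, and the logistic activation are assumptions made for this example only; they do not reflect SANN's internals, and real networks typically also include bias terms, which are omitted here for brevity.

    import numpy as np

    def sigmoid(a):
        return 1.0 / (1.0 + np.exp(-a))

    rng = np.random.default_rng(0)

    # A fully connected feedforward network: 3 inputs -> 4 hidden -> 1 output.
    W_hidden = rng.normal(size=(3, 4))  # input-to-hidden weights
    W_output = rng.normal(size=(4, 1))  # hidden-to-output weights

    def forward(x):
        # The input layer simply introduces the input variable values.
        hidden = sigmoid(x @ W_hidden)     # each hidden unit: f(weighted sum)
        return sigmoid(hidden @ W_output)  # each output unit: f(weighted sum)

    x = np.array([0.5, -1.2, 3.0])  # input variable values for one case
    print(forward(x))               # the network's prediction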

See also:

SANN Overviews - Neural Network Tasks

SANN Overviews - Network Types

SANN Overviews - Activation Functions

SANN Overviews - Selecting the Input Variables

SANN Overviews - Neural Network Complexity

SANN Overviews - Network Training

SANN Overviews - Network Generalization

SANN Overviews - Pre and Post Processing of Data

SANN Overviews - Predicting Future Data and Deployment

SANN Overviews - Recommended Textbooks