The continuous Hopfield network (CHN)
The continuous Hopfield network (CHN) is a classical neural network model. It can be used to solve some classification and optimization problems, in the sense that the equilibrium points of the differential equation system associated with the CHN are the solutions to those problems. The Euler method is the most widespread algorithm for obtaining these CHN equilibrium points, since it is the simplest and fastest way to simulate complex differential equation systems. However, this method is highly sensitive to initial conditions, and it requires a lot of CPU time for medium-sized or larger CHN instances. To avoid these shortcomings, a new algorithm that obtains an equilibrium point of the CHN is introduced in this paper. It is a variable time-step method with the property that the convergence time is shortened; moreover, its robustness with respect to initial conditions is proven, and some computational experiments are reported to compare it with the Euler method.
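For context, the following is a minimal sketch of how the Euler method is typically used to approximate an equilibrium point of a system u' = f(u). The names f, u0, the step size h, and the tolerance are illustrative assumptions; this is not the variable time-step algorithm introduced in the paper.

```python
import numpy as np

def euler_equilibrium(f, u0, h=0.01, tol=1e-6, max_steps=100_000):
    """Fixed-step Euler integration of u' = f(u), stopped once the
    derivative is negligible, i.e. once we are numerically at an
    equilibrium point of the system."""
    u = np.asarray(u0, dtype=float)
    for _ in range(max_steps):
        du = f(u)
        if np.linalg.norm(du) < tol:   # du/dt ~ 0  ->  equilibrium reached
            break
        u = u + h * du                 # Euler update: u(t + h) = u(t) + h * u'(t)
    return u
```

The fixed step size h is exactly the quantity that a variable time-step method adapts as the iteration proceeds: too large a step can diverge, while too small a step wastes CPU time.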
Introduction
Continuous Hopfield Network
Unlike in the discrete Hopfield network, the time parameter here is treated as a continuous variable, so instead of binary/bipolar outputs we obtain values that lie between 0 and 1. The CHN can be used to solve constrained optimization and associative memory problems. The output of node i is defined as:

v_i = g(u_i)
where,
- v_i = output of node i in the continuous Hopfield network
- u_i = internal activity of node i in the continuous Hopfield network
- g = a continuous, monotonically increasing activation function (typically a sigmoid) that maps u_i into the interval (0, 1)
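As a minimal sketch, assuming g is a sigmoid with an illustrative gain parameter lam (neither choice is fixed by the text above), the mapping from internal activity to output looks like this:

```python
import numpy as np

def g(u, lam=1.0):
    """Sigmoid activation: maps the internal activity u_i to an output
    v_i in the open interval (0, 1); lam is a gain (slope) parameter."""
    return 1.0 / (1.0 + np.exp(-u / lam))

# Large negative activity -> output near 0; large positive -> output near 1
print(g(np.array([-5.0, 0.0, 5.0])))    # ~[0.0067  0.5  0.9933]
```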
Energy Function
Hopfield networks have an energy function associated with them: it either decreases or remains unchanged after every update (feedback) iteration. The energy function for a continuous Hopfield network is defined as:

E = -(1/2) Σ_i Σ_j w_ij v_i v_j - Σ_i θ_i v_i + (1/τ) Σ_i ∫_0^{v_i} g⁻¹(v) dv

where w_ij is the weight of the (symmetric) connection between nodes i and j, θ_i is the external input (bias) of node i, and τ > 0 is the time constant of the network.
To determine whether the network will converge to a stable configuration, we check that the energy function keeps moving toward a minimum, i.e. that

dE/dt ≤ 0

along every trajectory, with equality only at an equilibrium point.
The network is guaranteed to converge if the internal activity of each neuron evolves in time according to the following differential equation:

du_i/dt = -u_i/τ + Σ_j w_ij v_j + θ_i

The equilibrium points of this system (the states where du_i/dt = 0 for every i) are precisely the stable configurations the network settles into.
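Putting the pieces together, here is a minimal sketch that integrates these dynamics with the Euler method and checks that the energy never increases along the trajectory. The weights W, biases theta, time constant tau, step size h, and sigmoid gain lam are all illustrative assumptions, not values taken from the text.

```python
import numpy as np

def g(u, lam=1.0):
    """Sigmoid activation mapping internal activity to the interval (0, 1)."""
    return 1.0 / (1.0 + np.exp(-u / lam))

def energy(v, W, theta, tau, lam=1.0):
    """Continuous Hopfield energy:
    E = -1/2 v^T W v - theta^T v + (1/tau) * sum_i integral_0^{v_i} g^{-1}(x) dx.
    For the sigmoid above, the integral has the closed form
    lam * (v*log(v) + (1 - v)*log(1 - v))."""
    v = np.clip(v, 1e-12, 1 - 1e-12)                     # avoid log(0)
    integral = lam * np.sum(v * np.log(v) + (1 - v) * np.log(1 - v))
    return -0.5 * v @ W @ v - theta @ v + integral / tau

def simulate(W, theta, u0, tau=1.0, h=0.01, steps=5000):
    """Euler integration of du/dt = -u/tau + W v + theta, with v = g(u)."""
    u = np.asarray(u0, dtype=float)
    energies = []
    for _ in range(steps):
        v = g(u)
        energies.append(energy(v, W, theta, tau))
        u = u + h * (-u / tau + W @ v + theta)           # Euler step
    return g(u), energies

# Tiny symmetric example (W must be symmetric for the convergence argument)
W = np.array([[0.0, 1.0],
              [1.0, 0.0]])
theta = np.array([0.1, -0.1])
v_eq, E = simulate(W, theta, u0=np.array([0.5, -0.5]))
print("approximate equilibrium output:", v_eq)
# Up to the small discretization error of the Euler method, E never increases
print("energy non-increasing:", all(E[k + 1] <= E[k] + 1e-9 for k in range(len(E) - 1)))
```

The stopping criterion here is simply a fixed number of steps; in practice one would stop once the update (or dE/dt) falls below a tolerance, as in the Euler sketch near the beginning of this article.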