The continuous Hopfield network (CHN)


The continuous Hopfield network (CHN) is a classical neural network model. It can be used to solve some classification and optimization problems, in the sense that the equilibrium points of the differential equation system associated with the CHN are the solutions to those problems. The Euler method is the most widespread algorithm for obtaining these CHN equilibrium points, since it is the simplest and quickest way to simulate complex differential equation systems. However, this method is highly sensitive to initial conditions and requires substantial CPU time for medium-sized or larger CHN instances. To avoid these shortcomings, a new algorithm that obtains an equilibrium point of the CHN is introduced in this paper. It is a variable time-step method with the property that the convergence time is shortened; moreover, its robustness with respect to initial conditions is proven, and some computational experiments are reported in order to compare it with the Euler method.


Introduction

The continuous Hopfield network (CHN) of size n is a fully connected neural network with n continuous-valued units. Following the notation of Aiyer [1], let T_{i,j} be the strength of the connection from neuron j to neuron i; each neuron i also has an offset bias i^b_i; and let u_i and v_i be the current state and the output of unit i, for all i ∈ {1, …, n}.


The CHN will solve those classification and optimization problems which can be expressed as the constrained minimization of
E = -\frac{1}{2} v^t T v - (i^b)^t v
which has its minimum at the corners of the n-dimensional hypercube [0,1]^n, denoted as the Hamming hypercube. See, for instance, Ghosh [3], Nasrabadi [4], Wasserman [5] and Wu [6].
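As a minimal sketch (assuming NumPy; the weight matrix, bias vector and state below are illustrative values, not taken from the text), the energy above can be evaluated directly from a symmetric weight matrix T, a bias vector i^b and an output vector v:

```python
import numpy as np

def chn_energy(T, i_b, v):
    """Energy E = -1/2 v^t T v - (i^b)^t v of a CHN state v."""
    return -0.5 * v @ T @ v - i_b @ v

# Tiny 3-unit example with a symmetric weight matrix (illustrative values only).
T = np.array([[ 0.0, 1.0, -2.0],
              [ 1.0, 0.0,  0.5],
              [-2.0, 0.5,  0.0]])
i_b = np.array([0.1, -0.3, 0.2])
v = np.array([1.0, 0.0, 1.0])   # a corner of the Hamming hypercube [0, 1]^n

print(chn_energy(T, i_b, v))
```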


Continuous Hopfield Network

Unlike the discrete Hopfield network, here the time parameter is treated as a continuous variable, so instead of binary/bipolar outputs we can obtain values that lie between 0 and 1. The network can be used to solve constrained optimization and associative memory problems. The output of each unit is defined as (a sketch of a typical choice of g follows the list below):

v_i = g(u_i)

where,

  • v_i = output of the unit in the continuous Hopfield network
  • u_i = internal activity of a node in the continuous Hopfield network.
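
The text does not fix a particular form for g; a common choice in the continuous Hopfield literature is a sigmoid such as g(u) = 0.5 (1 + tanh(u/u0)), which maps any real activity into (0, 1). A minimal sketch, with the gain parameter u0 chosen here purely for illustration:

```python
import numpy as np

def g(u, u0=0.02):
    """Sigmoid output function mapping internal activity u into (0, 1).
    u0 is an assumed gain parameter: smaller u0 gives a steeper sigmoid."""
    return 0.5 * (1.0 + np.tanh(u / u0))

u = np.array([-0.05, 0.0, 0.05])
print(g(u))   # values close to 0, 0.5 and 1, respectively
```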

Energy Function

Hopfield networks have an energy function associated with them, which either decreases or remains unchanged on each update (feedback) iteration. The energy function for a continuous Hopfield network is defined as:

E = -\frac{1}{2} \sum_{i=1}^n \sum_{j=1}^n w_{ij} v_i v_j - \sum_{i=1}^n \theta_i v_i
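
As a small sanity check (again assuming NumPy and illustrative values), the double-sum form above can be evaluated term by term and agrees with the matrix form of the energy used earlier, with w_{ij} playing the role of T_{i,j} and \theta_i the role of i^b_i:

```python
import numpy as np

def chn_energy_sum(W, theta, v):
    """E = -1/2 * sum_ij w_ij v_i v_j - sum_i theta_i v_i (double-sum form)."""
    n = len(v)
    E = 0.0
    for i in range(n):
        for j in range(n):
            E -= 0.5 * W[i, j] * v[i] * v[j]
        E -= theta[i] * v[i]
    return E

W = np.array([[0.0, 1.0],
              [1.0, 0.0]])
theta = np.array([0.5, -0.5])
v = np.array([0.2, 0.8])
# Matches the matrix form -1/2 v^t W v - theta^t v
print(chn_energy_sum(W, theta, v), -0.5 * v @ W @ v - theta @ v)
```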

To determine whether the network will converge to a stable configuration, we check that the energy function is non-increasing over time:

\frac{d}{dt} E \leq 0

The network is guaranteed to converge if the activity of each neuron with respect to time is given by the following differential equation:

\frac{d}{dt}u_i = \frac{-u_i}{\tau} + \sum_{j=1}^n w_{ij} v_j + \theta_i
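
To illustrate how these dynamics and the energy interact, here is a minimal forward-Euler sketch (the integration method mentioned in the abstract); the weights, biases, gain, time constant and step size are all assumed illustrative values, not taken from the text. With a sufficiently small step, the printed energy is non-increasing and the outputs settle near a corner of [0, 1]^n:

```python
import numpy as np

def g(u, u0=0.1):
    """Assumed sigmoid output function (not fixed by the text)."""
    return 0.5 * (1.0 + np.tanh(u / u0))

def energy(W, theta, v):
    return -0.5 * v @ W @ v - theta @ v

# Illustrative 2-unit network (all parameter values are assumptions).
W = np.array([[0.0, 1.0],
              [1.0, 0.0]])       # symmetric weights, zero diagonal
theta = np.zeros(2)
tau = 10.0
dt = 0.01                        # fixed Euler time step

u = np.array([0.01, -0.02])      # small initial activities
for step in range(2000):
    v = g(u)
    du_dt = -u / tau + W @ v + theta   # CHN dynamics from the equation above
    u = u + dt * du_dt                 # forward-Euler update
    if step % 500 == 0:
        print(step, energy(W, theta, v))
# With this step size the printed energy decreases monotonically and the
# outputs approach a corner of the Hamming hypercube.
```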
