
IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS-I: FUNDAMENTAL THEORY AND APPLICATIONS, VOL. 39, NO. 7, JULY 1992

Resistive Grid Image Filtering: Input/Output Analysis via the CNN Framework
Bertram E. Shi and Leon O. Chua, Fellow, IEEE
Abstract—Recently, several researchers have proposed resistive grids as primary components in analog VLSI circuit implementations of various image processing algorithms. In this paper, we use the Cellular Neural Network framework developed by Chua and Yang to analyze the image filtering operation performed by the linear resistive grid. In particular, we first show in detail how the resistive grid can be cast as a CNN and discuss the use of frequency-domain techniques to characterize the input/output behavior of resistive grids of both infinite and finite size. These results lead to a theoretical justification of one of the folk theorems commonly held by researchers using resistive grids: resistive grids are robust in the presence of variations in the values of the resistors. To the best of our knowledge, this conjecture has previously only been verified by simulation. Finally, we suggest an application to edge detection. In particular, we show that the filtering performed by the grid is similar to the exponential filter in the edge detection algorithm proposed by Shen and Castan.

I. INTRODUCTION

RECENTLY, resistive networks implemented in analog VLSI have attracted quite a bit of attention as image processing tools. Indeed, the resistive grid is one of the key elements of the silicon retina proposed by Mahowald and Mead [1]. They are also key components of chips designed for motion detection [3] and optical flow field extrapolation [4]. In addition, nonlinear resistive grids have been used by several researchers for image smoothing and segmentation [5]-[7]. Here, we confine our discussion to linear resistive grids operating on data that is densely sampled on a rectangular grid. In other words, for the purposes of this paper a resistive grid consists of a set of nodes arranged in a regular rectangular array. Each node is associated with a pixel in the image that is to be filtered. We will index the nodes by (m, n) for m ∈ {0,...,M−1} and n ∈ {0,...,N−1}, where the top right node is indexed by (0,0). Each node is connected through a series resistor to a voltage source referenced to ground. The voltage across this source, $d_{m,n}$, is proportional to the intensity of the input
Manuscript received April 12, 1991; revised January 30, 1992. This work was supported in part by the National Science Foundation under a Graduate Research Fellowship and by the Office of Naval Research under Grant N00014-89-51402. This paper was recommended by Associate Editor T. Roska. The authors are with the Electronics Research Laboratory, University of California, Berkeley, CA 94720. IEEE Log Number 9202202.

image at the associated pixel. In addition, each node of the grid is connected to the four nodes directly to the north, south, east, and west via four resistors. For nodes at the edges of the grid lacking one or more of these neighbors, the corresponding connection is omitted; see Fig. 1. We will refer to a resistor between a node and voltage source as a vertical resistor, and a resistor between two nodes as a transversal resistor. In the ideal case, all of the transversal resistances are equal and all of the vertical resistances are equal. However, the nominal value of the vertical resistances is not necessarily equal to the nominal value of the transversal resistances. We define λ to be the ratio of the nominal conductance of the vertical resistances to the nominal conductance of the transversal resistances. Unless specified otherwise, we also assume that the conductance units have been normalized so that the conductance of the transversal resistors is equal to one. At steady state, the nodal voltage waveform of the resistive array, $\{v_{m,n} \mid m \in \{0,\dots,M-1\},\ n \in \{0,\dots,N-1\}\}$, corresponds to a spatially low-pass filtered version of the input waveform, $\{d_{m,n} \mid m \in \{0,\dots,M-1\},\ n \in \{0,\dots,N-1\}\}$. The parameter λ controls the amount of smoothing. Many papers about resistive grid image filtering approach the processing from the standpoint of function minimization. This approach is especially useful for nonlinear resistive grids. The nodal voltages of the resistive array will settle to a waveform which is a local minimum of the resistive co-content [8] of the array
$$g(v) = \lambda \sum_{m=0}^{M-1}\sum_{n=0}^{N-1} (v_{m,n} - d_{m,n})^2 + \sum_{m=0}^{M-1}\sum_{n=0}^{N-2} (v_{m,n} - v_{m,n+1})^2 + \sum_{m=0}^{M-2}\sum_{n=0}^{N-1} (v_{m,n} - v_{m+1,n})^2. \qquad (1)$$

The co-content function can be broken up into two terms. The first term, $\lambda \sum_{m=0}^{M-1}\sum_{n=0}^{N-1}(v_{m,n}-d_{m,n})^2$, measures the difference between the nodal voltages and the voltages of the voltage sources. The second term, $\sum_{m=0}^{M-1}\sum_{n=0}^{N-2}(v_{m,n}-v_{m,n+1})^2 + \sum_{m=0}^{M-2}\sum_{n=0}^{N-1}(v_{m,n}-v_{m+1,n})^2$, measures the voltage differences between adjacent nodes. This term essentially measures the smoothness of the nodal voltage waveform. The conductance ratio, λ, determines the relative effect of these two terms on the steady-state nodal

1057-7122/92$03.00 © 1992 IEEE


Fig. 1. A 4- by 4-node two-dimensional linear resistive grid consists of 16 nodes. Each node is grounded through a resistor and a voltage source and connected via resistors to its four nearest neighbor nodes.

voltage waveform. For λ small, smoothness is favored in the steady-state voltage waveform. Conversely, for λ large, closeness to the input voltage waveform is favored. In this paper, we approach the processing from a different perspective. Since the co-content function of the linear resistive grid is strictly convex, the steady-state nodal voltage waveform for a given input voltage waveform is unique. We consider the input-output mapping defined by this correspondence. It turns out that the resistive grid can be cast as a Cellular Neural Network (CNN) [9], [10]. This interpretation is useful in deriving some properties of this mapping. In Section II we briefly review CNN's and show how the resistive grid may be formulated as a CNN. We conclude this section by mentioning the advantage of this formulation which we will exploit in Section IV. In Section III, we lay the groundwork for Sections IV and V in a discussion of the use of frequency-domain techniques to characterize the input/output behavior of linear resistive grids. In Section IV, we derive our main theoretical result: linear resistive grids are robust in the presence of variations in the transversal conductances. We derive a first-order estimate of the expected value of the squared norm of the error between an ideal resistive grid and a resistive grid whose transversal conductances vary from the ideal. We assume that the variation in each transversal conductance is accurately modeled by an additive random variable that is independent from resistor to resistor with zero mean and known standard deviation. We also provide the extension of this result to variation with nonzero mean. In Section V, we apply the results of Section III to an application of linear resistive grids to edge detection. In the course of this discussion, we point out an interesting link between work with resistive grids and work done in computer vision. Finally, we conclude by summarizing our results in Section VI.
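The minimization view described above can be sketched numerically. The following snippet is an illustration, not code from the paper: the grid size, λ, and input data are arbitrary choices. It builds the co-content g(v) of (1) for a small M-by-N grid, solves the steady-state equations obtained by setting the gradient of g to zero (with the natural boundary conditions), and checks that the solution indeed has lower co-content than nearby waveforms.

```python
import numpy as np

# Arbitrary illustrative parameters (not from the paper).
M, N, lam = 8, 8, 1.0
rng = np.random.default_rng(0)
d = rng.uniform(-1, 1, size=(M, N))          # input voltages d_{m,n}

def cocontent(v, d, lam):
    """Resistive co-content g(v) of (1): data term plus smoothness terms."""
    data = lam * np.sum((v - d) ** 2)
    horiz = np.sum((v[:, :-1] - v[:, 1:]) ** 2)   # (v_{m,n} - v_{m,n+1})^2
    vert = np.sum((v[:-1, :] - v[1:, :]) ** 2)    # (v_{m,n} - v_{m+1,n})^2
    return data + horiz + vert

# Steady state: grad g = 0 gives, at each node,
# (lam + #neighbors) v_{m,n} - sum of neighbors = lam d_{m,n}.
A = np.zeros((M * N, M * N))
for m in range(M):
    for n in range(N):
        i = m * N + n
        nbrs = [(m - 1, n), (m + 1, n), (m, n - 1), (m, n + 1)]
        nbrs = [(a, b) for a, b in nbrs if 0 <= a < M and 0 <= b < N]
        A[i, i] = lam + len(nbrs)
        for a, b in nbrs:
            A[i, a * N + b] = -1.0
v = np.linalg.solve(A, lam * d.ravel()).reshape(M, N)

# The minimizer should beat the raw input waveform.
print(cocontent(v, d, lam) <= cocontent(d, d, lam))
```

Since g is strictly convex, the solution of the linear system is the unique global minimizer; perturbing it in any direction can only increase the co-content.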

Like resistive grids, CNN's are massively parallel analog circuits that have been demonstrated to have interesting applications in image processing. However, they are considerably more general than resistive grids. In fact, as indicated in [11] and as we shall demonstrate in detail below, a linear resistive grid is a CNN. In this section, we begin by reviewing the CNN architecture. Then, we show how the resistive grid may be implemented as a CNN. We conclude by mentioning the advantage of this formulation which we will exploit in Section IV. A CNN is an analog circuit network consisting of an array of cells. For image processing applications, each pixel of the image has a cell associated with it. Thus each cell of a single layer CNN processing an M by N pixel image can be indexed by (m, n) for m ∈ {0,...,M−1} and n ∈ {0,...,N−1}. Each cell consists of a capacitor, a resistor, an independent current source and several voltage controlled current sources (Fig. 2). Like feed-forward neural networks [12], CNN's may have multiple layers [13]. However, for the purposes of this paper, all CNN's are assumed to be single layered. The voltage across the capacitor, $v_{m,n}$, is defined as the state of the cell. Each cell also has an associated input voltage, $d_{m,n}$, and an output voltage, $y_{m,n}$. The output voltage is a memoryless function of the state of the cell. In the original CNN paper, this output function was the piecewise-linear approximation to the sigmoid nonlinearity given by
$$y_{m,n} = f(v_{m,n}) = \tfrac{1}{2}\,|v_{m,n} + 1| - \tfrac{1}{2}\,|v_{m,n} - 1|.$$

Each cell is connected with the cells in a small neighborhood of itself via the voltage controlled current sources. These VCCS's inject currents into the capacitor of the cell that are proportional to the input and output voltages of neighboring cells. The restriction to nearest neighbor interactions is imposed to ensure that a CNN consisting of a large number of cells can be implemented effectively in an analog VLSI circuit. Since image processing operations are often invariant under translation of the image, the interaction between each cell and its nearest neighbors is often uniform over the entire array. We will assume this to be the case. In addition, we will assume that the resistance, R, capacitance, C, and current source, I, are identical for each cell in the array. Since the value of the capacitance C only affects the time scale of the dynamics, we shall normalize all the values so that C = 1. With these assumptions, the state equation of each cell of the CNN is given by

$$\dot v_{m,n} = -\frac{1}{R}\,v_{m,n} + \sum_{k=-r}^{r}\sum_{l=-r}^{r} A(k,l)\,y_{m+k,n+l} + \sum_{k=-r}^{r}\sum_{l=-r}^{r} B(k,l)\,d_{m+k,n+l} + I$$
II. THE RESISTIVE GRID AS A CNN


CNN's were introduced by Chua and Yang in [9], [10] and extended to include nonlinear and delay-type templates by Roska and Chua in [11].

where r determines the size of the neighborhood each cell interacts with. This equation is only valid for cells for


Fig. 2. Each cell of a single layer CNN consists of a capacitor, a resistor, an independent current source, and several voltage controlled current sources.

which the neighborhood specified by r is contained within the array. For cells at the boundary of the array without the full complement of neighbors, this equation must be modified to take this into account. We will discuss this issue in greater detail later in this paper. Since the right-hand side of this state equation is an affine function, it is uniquely specified by the coefficients of the affine relation. These coefficients are defined to be the CNN's cloning template. Thus the values of A(·,·), B(·,·), I, and R are the cloning template for this CNN. Because of the two-dimensional and local nature of the interactions, it is convenient to express the coefficients of A(·,·) and B(·,·) using 2r+1 by 2r+1 matrices, A and B, where the center element corresponds to the coefficient that weights the effect of the cell's own output or input upon its state's derivative. For example, if r = 1,

$$A = \begin{bmatrix} A(-1,-1) & A(-1,0) & A(-1,1)\\ A(0,-1) & A(0,0) & A(0,1)\\ A(1,-1) & A(1,0) & A(1,1) \end{bmatrix}, \qquad B = \begin{bmatrix} B(-1,-1) & B(-1,0) & B(-1,1)\\ B(0,-1) & B(0,0) & B(0,1)\\ B(1,-1) & B(1,0) & B(1,1) \end{bmatrix}.$$

In order to cast the resistive grid as a CNN, we introduce a capacitor, C, between each node and ground. The introduction of this capacitor will not affect the steady state of the resistive grid. Indeed, there will exist parasitic capacitances in any practical realization of the resistive grid. By Kirchhoff's current law, the rate of change of the voltage across the capacitor at node (m, n), $v_{m,n}$, is given by

$$C\,\dot v_{m,n} = \lambda\,(d_{m,n} - v_{m,n}) + (v_{m-1,n} - v_{m,n}) + (v_{m+1,n} - v_{m,n}) + (v_{m,n-1} - v_{m,n}) + (v_{m,n+1} - v_{m,n})$$

for all m, n with appropriate modification at the edges of the array. Normalizing our variables so that the capacitance is equal to one, this equation is the same as the equations governing the dynamics of a CNN with input $\{d_{m,n} \mid m \in \{0,\dots,M-1\},\ n \in \{0,\dots,N-1\}\}$, output function equal to the identity rather than the piecewise sigmoid nonlinearity, and cloning template equal to

$$A = \begin{bmatrix}0&1&0\\1&0&1\\0&1&0\end{bmatrix}, \qquad B = \begin{bmatrix}0&0&0\\0&\lambda&0\\0&0&0\end{bmatrix}, \qquad I = 0, \qquad R = \frac{1}{4+\lambda}.$$

It turns out that if we restrict our input voltages to lie in the range (−1,1), we can keep the output function as the piecewise-linear sigmoid nonlinearity. This will change the state trajectory for certain initial conditions, but will not affect the final steady state. Since cells at the edges of the array do not have the full complement of neighbors that the cells in the interior of the array do, the cell equations of the CNN must be modified at the boundary of the array. Although other boundary conditions are possible depending on the application [14], a natural boundary condition is to delete the resistors that would connect to nodes that fall outside the boundaries of the array. This is equivalent to having a resistive connection to a phantom node outside of the array whose voltage tracks the voltage of the node to which it is connected. In other words, these boundary conditions are the discrete equivalent of the Neumann boundary conditions of zero normal derivative in partial differential equations. In the following, we will refer to these boundary conditions simply as Neumann boundary conditions. These boundary conditions can be implemented in the CNN realization of the resistive grid by either adding additional phantom cells to the outside of the array whose output voltages track the output voltages of the adjacent cells in the interior, or by modifying the state equations of the edge cells by adding one to the self-feedback term for each connection that would be made to a nonexistent cell. For example, the A template for a cell on the right-hand boundary of the CNN array would be

$$A = \begin{bmatrix}0&1&0\\1&1&0\\0&1&0\end{bmatrix}.$$
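The dynamics described by the KCL equation above can be sketched numerically. The following snippet is illustrative only (grid size, λ, step size, and input data are my own choices, not from the paper): it forward-Euler-integrates the state equation with C = 1 and Neumann boundaries, and confirms that the co-content of (1) decreases and that the state settles to the steady state characterized by (2).

```python
import numpy as np

# Arbitrary illustrative parameters.
M, N, lam, dt, steps = 6, 6, 2.0, 0.05, 2000
rng = np.random.default_rng(0)
d = rng.uniform(-1, 1, size=(M, N))
v = np.zeros((M, N))                      # initial state

def neighbor_sum(v):
    """Sum of available N/S/E/W neighbors (Neumann: missing ones omitted)."""
    s = np.zeros_like(v)
    s[1:, :] += v[:-1, :]; s[:-1, :] += v[1:, :]
    s[:, 1:] += v[:, :-1]; s[:, :-1] += v[:, 1:]
    return s

def degree(M, N):
    deg = 4 * np.ones((M, N))
    deg[0, :] -= 1; deg[-1, :] -= 1; deg[:, 0] -= 1; deg[:, -1] -= 1
    return deg

def cocontent(v):
    return (lam * np.sum((v - d) ** 2)
            + np.sum((v[:, :-1] - v[:, 1:]) ** 2)
            + np.sum((v[:-1, :] - v[1:, :]) ** 2))

deg = degree(M, N)
g_start = cocontent(v)
for _ in range(steps):
    vdot = lam * (d - v) + neighbor_sum(v) - deg * v   # KCL with C = 1
    v = v + dt * vdot
g_final = cocontent(v)

# At steady state, (2) holds: (lam + deg) v - neighbor_sum(v) = lam d.
residual = np.max(np.abs((lam + deg) * v - neighbor_sum(v) - lam * d))
print(g_final <= g_start, residual)
```

The step size must keep the explicit Euler iteration stable; here the relevant eigenvalues lie in [λ, λ+8], so dt = 0.05 is comfortably inside the stability region.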

As may be expected, the energy function of the system of ODEs governing the dynamics of the CNN is exactly equal to (1), the resistive co-content of the resistive grid. In other words, the time derivative of the state of the CNN is the negative of the derivative of the co-content function with respect to the state variables. Thus the value of the co-content function decreases along trajectories of the CNN. In fact, even for a CNN with varying


nodal capacitances, (1) is still a Lyapunov function of the system. Thus the CNN corresponding to the resistive grid does not oscillate and eventually settles to a stable equilibrium point which is the same as the steady state of the resistive grid. This equilibrium point satisfies the following set of equations:
$$(4+\lambda)\,v_{m,n} - v_{m-1,n} - v_{m+1,n} - v_{m,n-1} - v_{m,n+1} = \lambda\, d_{m,n} \qquad (2)$$

for all m, n with appropriate modification at the edges of the array as discussed above. The advantage of the CNN formulation is that it explicitly separates the effect of the vertical and transversal conductances. The transversal conductances are modeled by the A template, which represents a feedback term. The voltage sources and vertical conductances are modeled by the B template as well as the resistance R. In fact, this parallel combination of a current source and resistor is the Norton equivalent of the series connection of the voltage source and vertical resistance in the resistive grid. Using the CNN formulation, we can express the resistive grid using a block diagram which will be especially useful in Section IV; see Fig. 3.

Fig. 3. The CNN formulation of the resistive grid is equivalent to the above block diagram. The parameter λ is a scalar equal to the conductance ratio of the resistive grid. The A block corresponds to the transversal resistors of the grid. The symbols d and v correspond to the input and nodal voltage waveforms expressed as vectors.

III. FREQUENCY RESPONSE OF THE RESISTIVE GRID

If we view the processing performed by the resistive grid as a mapping from the input voltages to the nodal voltages, (2) indicates that this mapping is linear. Since the processing is also space invariant, we should be able to use frequency-domain techniques, such as the Z and discrete Fourier transforms, to describe this mapping. In fact, for an infinite array, the application of the Z-transform is completely straightforward. However, in the finite grid case, a complication arises because the use of the discrete Fourier transform (DFT) to characterize the input-output properties of the resistive grid is only valid if the resistive grid has periodic boundary conditions. These boundary conditions result when each node at the edge of the array is connected via a resistor to the node on the opposite side of the array. However, as noted above, the natural boundary conditions of the resistive grid are the Neumann boundary conditions. Fortunately, it turns out that we can reconcile this apparent incompatibility. In this section, we begin by using the Z-transform to derive the transfer function and frequency response of the mapping performed by the infinite grid. We also give some indication of the shape of the convolution kernel implemented by the resistive grid by using this transfer function to derive the convolution kernel implemented by the one-dimensional grid. After our discussion of the infinite grid case, we begin our discussion of the application of these results to the finite grid case by discussing how to resolve the inconsistency in using the DFT to describe the finite grid with Neumann boundary conditions. Once this is resolved, application of the results of the infinite grid case to the finite grid follows easily. The results of this section are quite straightforward, but form a necessary background to the discussion in the following sections.

If we assume an infinite resistive grid, then the co-content approach is no longer well defined since the co-content will be infinite in general. However, (2) will still hold at all nodes. By taking the two-sided two-dimensional Z transform of both sides of (2) (assuming that they exist) we obtain

$$(4+\lambda - z_m - z_m^{-1} - z_n - z_n^{-1})\,V(z_m, z_n) = \lambda\, D(z_m, z_n). \qquad (3)$$

Thus, the transfer function is

$$H(z_m, z_n) = \frac{V(z_m, z_n)}{D(z_m, z_n)} = \frac{\lambda}{4+\lambda - z_m - z_m^{-1} - z_n - z_n^{-1}}. \qquad (4)$$

Evaluating (4) at $z_m = e^{j2\pi f_m}$ and $z_n = e^{j2\pi f_n}$ for $f_m, f_n \in [0,1)$, we obtain the frequency response of the network

$$H(e^{j2\pi f_m}, e^{j2\pi f_n}) = \frac{\lambda}{2(1-\cos(2\pi f_m)) + 2(1-\cos(2\pi f_n)) + \lambda}. \qquad (5)$$

The variables $f_m$ and $f_n$ represent spatial frequency in cycles per node. The cross-section of the two-dimensional frequency response at $f_n = 0$ for λ = 1.0 is plotted in Fig. 4. This waveform is also the frequency response of the one-dimensional grid with λ = 1.0. Not only can we obtain the frequency response of the infinite array from the transfer function, we can also derive the convolution kernel implemented by the resistive grid. This result has also been derived using different means in [15]. Consider an infinite one-dimensional grid. Its transfer function is given by
$$H(z) = \frac{\lambda}{2+\lambda - z - z^{-1}}. \qquad (6)$$
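On a finite ring of N nodes (periodic boundary conditions) the nodal equations become circulant and are diagonalized by the DFT, so the transfer function of (6) sampled on the unit circle describes the filtering exactly. The following is an illustrative numerical check (N, λ, and the data are arbitrary choices, not from the paper): it compares a direct solve of $(2+\lambda)v_n - v_{n-1} - v_{n+1} = \lambda d_n$ (indices mod N) against DFT-domain filtering.

```python
import numpy as np

# Arbitrary illustrative parameters.
N, lam = 64, 1.0
rng = np.random.default_rng(0)
d = rng.uniform(-1, 1, size=N)

# Direct solve: circulant matrix with first column (2+lam, -1, 0, ..., 0, -1).
A = ((2 + lam) * np.eye(N)
     - np.roll(np.eye(N), 1, axis=1)     # coupling to node n+1
     - np.roll(np.eye(N), -1, axis=1))   # coupling to node n-1
v_direct = np.linalg.solve(A, lam * d)

# Frequency domain: H(e^{j 2 pi k/N}) = lam / (2 + lam - 2 cos(2 pi k/N)).
k = np.arange(N)
H = lam / (2 + lam - 2 * np.cos(2 * np.pi * k / N))
v_freq = np.real(np.fft.ifft(H * np.fft.fft(d)))

print(np.max(np.abs(v_direct - v_freq)))
```

The agreement is to machine precision, since both computations invert the same circulant operator.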

Note that the transfer function corresponds to a two-sided Z transform. In this case the correct series expansion of the transfer function is a Laurent series in z. Defining $\alpha = \cosh^{-1}((2+\lambda)/2)$, we can rewrite the transfer function as

$$H(z) = \frac{\lambda}{2\sinh\alpha}\left(\frac{e^{-\alpha}z}{1-e^{-\alpha}z} + \frac{1}{1-e^{-\alpha}z^{-1}}\right).$$


Since α > 0, the first fraction is equivalent to the series $\sum_{n=1}^{\infty} e^{-\alpha n} z^{n}$, which converges for $|z| < e^{\alpha}$. The second fraction is equivalent to the series $\sum_{n=0}^{\infty} e^{-\alpha n} z^{-n}$, which converges for $|z| > e^{-\alpha}$. The sum of the two is a Laurent series which converges for $e^{-\alpha} < |z| < e^{\alpha}$. Thus, the transfer function in (6) is the Z transform of the two-sided sequence $\{h_n\}_{n \in \mathbb{Z}}$, where

$$h_n = \frac{\lambda}{2\sinh\alpha}\, e^{-\alpha|n|} \quad\text{and}\quad \alpha = \cosh^{-1}\!\left(\frac{2+\lambda}{2}\right). \qquad (7)$$

Define L = 1/α. In light of (7), L is the space constant that characterizes the spread of the convolution kernel of the one-dimensional array. Fig. 5 shows a plot of L for varying values of λ. L is approximately equal to $1/\sqrt{\lambda}$.

For finite arrays, the analog of the Z-transform is the discrete Fourier transform. However, as noted above, use of the DFT assumes periodic boundary conditions while the natural boundary conditions for the resistive grid array are the Neumann boundary conditions. Therefore, it may not be valid to use the DFT to describe the input-output characteristics of a finite resistive grid. However, it turns out that the effect of the different boundary conditions is only significant for cells "near" the boundary. This statement can be made more rigorous using the following fact.

Proposition 1: The steady-state nodal voltage waveform of an M by N node resistive grid with nominal conductance ratio λ, Neumann boundary conditions, and input data $\{d_{m,n} \mid m \in \{0,\dots,M-1\},\ n \in \{0,\dots,N-1\}\}$ is equal to the steady-state voltage waveform on the first M by N nodes of a 2M by 2N node resistive grid with conductance ratio λ, periodic boundary conditions, and input data $\{\tilde d_{m,n} \mid m \in \{0,\dots,2M-1\},\ n \in \{0,\dots,2N-1\}\}$ obtained by reflecting the data of the M by N array around its boundaries. Fig. 6 shows how the data of the 2M by 2N array is derived from the data of the M by N array.

Proof: We prove the statement assuming a one-dimensional grid. The proof for the two-dimensional case is completely analogous, only notationally more complex. Denote the solution of the 2N node grid by $\{\hat v_n\}_{n=0}^{2N-1}$. To prove the fact we need only check that $\hat v_0 = \hat v_{2N-1}$ and that $\hat v_{N-1} = \hat v_N$. If this is true, then the solution of the 2N node resistive array is the same as the solution of a modified 2N node resistive array where the resistors between nodes 0 and 2N−1 and between nodes N−1 and N are removed. This modified array is simply two separate N node resistive arrays with Neumann boundary conditions, one with data $\{\tilde d_n\}_{n=0}^{N-1}$ and the other with data $\{\tilde d_{n+N}\}_{n=0}^{N-1}$. We prove only that $\hat v_0 = \hat v_{2N-1}$. The proof of the second case is similar. The solution to the resistive grid with periodic boundary conditions can be obtained by a circular convolution with a convolution kernel $\{g_n\}_{n=0}^{2N-1}$. Therefore,

$$\hat v_n = \sum_{\xi=0}^{2N-1} g_{(n-\xi)\bmod 2N}\, \tilde d_\xi.$$

By the definition of $\tilde d$, $\tilde d_n = \tilde d_{2N-1-n}$ for n ∈ {0,...,2N−1}. In addition, by the symmetry of the array, $g_n = g_{2N-n}$ for n ∈ {1,...,2N−1}. This implies that $g_{(-n)\bmod 2N} = g_n$ for n ∈ {0,...,2N−1}. Substituting these equalities

into the convolution yields

$$\hat v_0 = \sum_{\xi=0}^{2N-1} g_\xi\, \tilde d_\xi = \sum_{\xi=0}^{2N-1} g_{2N-1-\xi}\, \tilde d_{2N-1-\xi} = \sum_{\xi=0}^{2N-1} g_{(2N-1-\xi)\bmod 2N}\, \tilde d_\xi = \hat v_{2N-1}. \qquad\blacksquare$$
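Proposition 1 can be checked numerically in one dimension. The sketch below is illustrative only (N, λ, and the data are arbitrary choices): it solves an N-node grid with Neumann boundaries directly, then solves a 2N-node periodic grid driven by the reflected data and compares the first N nodes.

```python
import numpy as np

# Arbitrary illustrative parameters.
N, lam = 16, 0.5
rng = np.random.default_rng(0)
d = rng.uniform(-1, 1, size=N)

# Neumann grid: interior nodes have degree 2, the two end nodes degree 1.
A_neu = ((lam + 2) * np.eye(N)
         - np.diag(np.ones(N - 1), 1)
         - np.diag(np.ones(N - 1), -1))
A_neu[0, 0] -= 1.0
A_neu[-1, -1] -= 1.0
v_neu = np.linalg.solve(A_neu, lam * d)

# Periodic 2N grid with the data reflected around the boundary:
# d~_n = d_{2N-1-n} for n in {N, ..., 2N-1}.
d2 = np.concatenate([d, d[::-1]])
M2 = 2 * N
A_per = ((lam + 2) * np.eye(M2)
         - np.roll(np.eye(M2), 1, axis=1)
         - np.roll(np.eye(M2), -1, axis=1))
v_per = np.linalg.solve(A_per, lam * d2)

print(np.max(np.abs(v_neu - v_per[:N])))
```

By the symmetry argument in the proof, the two solutions coincide exactly (up to floating-point round-off).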


Using this proposition, we can show that the effect of the different boundary conditions dies out as we move away from the boundary. Consider two M by N resistive grids with conductance ratio λ, one with periodic boundary conditions, the other with the Neumann boundary conditions. Assume that the two grids have the same input voltages, $\{d_{m,n} \mid m \in \{0,\dots,M-1\},\ n \in \{0,\dots,N-1\}\}$, and that the absolute values of the input voltages are bounded by B > 0. The output of the array with periodic boundary conditions, $\{v^{p}_{m,n} \mid m \in \{0,\dots,M-1\},\ n \in \{0,\dots,N-1\}\}$, can be calculated by convolving the impulse response of the infinite grid with the infinite waveform $\{d^{p}_{m,n} \mid m, n \in \mathbb{Z}\}$ obtained by periodically extending the finite input waveform. By the proposition above, the output of the array with the Neumann boundary conditions, $\{v^{N}_{m,n} \mid m \in \{0,\dots,M-1\},\ n \in \{0,\dots,N-1\}\}$, can be calculated by convolving the impulse response of the infinite grid with the infinite waveform $\{\tilde d_{m,n} \mid m, n \in \mathbb{Z}\}$ obtained by periodically extending a finite input waveform of size 2M by 2N whose first M by N values are equal to the corresponding values in the original input waveform. Consider the difference between the nodal voltages of the two arrays at node (m, n), $\Delta v_{m,n} = v^{p}_{m,n} - v^{N}_{m,n}$. By linearity,
Fig. 6. (a) The data input to an M by N array and (b) the data of the corresponding 2M by 2 N array in Proposition 1, obtained by reflecting the data of the M by N array around the boundaries.

$$\Delta v_{m,n} = \sum_{\xi=-\infty}^{\infty}\sum_{\eta=-\infty}^{\infty} h_{m-\xi,n-\eta}\left(d^{p}_{\xi,\eta} - \tilde d_{\xi,\eta}\right)$$
where $\{h_{\xi,\eta} \mid \xi, \eta \in \mathbb{Z}\}$ is the impulse response of the infinite network. It is clear from the nature of the resistive grid filtering that $h_{\xi,\eta} \ge 0$ for all $\xi, \eta \in \mathbb{Z}$. Taking norms and noting that $d^{p}_{\xi,\eta} - \tilde d_{\xi,\eta} = 0$ for $\xi \in \{0,\dots,M-1\}$ and $\eta \in \{0,\dots,N-1\}$, we obtain
$$|\Delta v_{m,n}| \le 2B \sum_{(\xi,\eta)\notin\{0,\dots,M-1\}\times\{0,\dots,N-1\}} h_{m-\xi,n-\eta} \le 2B\,\frac{\lambda}{2\sinh\alpha}\cdot\frac{e^{-\alpha(m+1)} + e^{-\alpha(n+1)} + e^{-\alpha(M-m)} + e^{-\alpha(N-n)}}{1 - e^{-\alpha}}$$

where p = min{m+1, n+1, M−m, N−n} is the minimum distance between node (m, n) and the boundary of the array. This bound on the difference decreases exponentially with the minimum distance from the boundary. For κ ∈ [0,1] and λ ∈ $\mathbb{R}^{+}$, define

$$p(\kappa,\lambda) = -\frac{1}{\alpha}\ln\!\left(\frac{\kappa\,(1+e^{-\alpha})}{8}\right),$$

so that the bound above is at most κB whenever p exceeds p(κ,λ) (using $\lambda/(2\sinh\alpha) = (1-e^{-\alpha})/(1+e^{-\alpha})$).

Fig. 7. For any node of a finite resistive grid with conductance ratio λ whose minimum distance from the boundary is greater than p(κ,λ), for κ ∈ [0,1], the difference between the nodal voltage resulting from periodic boundary conditions and the nodal voltage resulting from the usual boundary conditions is guaranteed to be less than κB, where B is a bound on the absolute values of the input voltages. The above graph shows a plot of p(0.1, λ).
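The exponential decay of the boundary effect can be observed numerically. The sketch below is illustrative only: it uses the one-dimensional analog of the bound (two boundary tails instead of four), written here in a simplified form of my own, with arbitrary N and λ.

```python
import numpy as np

# Arbitrary illustrative parameters; inputs bounded by B = 1.
N, lam, B = 64, 1.0, 1.0
rng = np.random.default_rng(0)
d = rng.uniform(-B, B, size=N)

A_per = ((lam + 2) * np.eye(N)
         - np.roll(np.eye(N), 1, axis=1)
         - np.roll(np.eye(N), -1, axis=1))
A_neu = A_per.copy()
A_neu[0, -1] = 0.0; A_neu[-1, 0] = 0.0     # cut the wrap-around resistor...
A_neu[0, 0] -= 1.0; A_neu[-1, -1] -= 1.0   # ...and reduce the end degrees

v_per = np.linalg.solve(A_per, lam * d)
v_neu = np.linalg.solve(A_neu, lam * d)
dv = np.abs(v_per - v_neu)

# 1-D analog of the exponential bound: two tails instead of four.
alpha = np.arccosh((2 + lam) / 2)
p = np.minimum(np.arange(N) + 1, N - np.arange(N))   # distance from boundary
bound = 4 * B * np.exp(-alpha * p) / (1 + np.exp(-alpha))

print(np.max(dv - bound))
```

The observed difference stays below the bound at every node, and both decay with the same space constant 1/α.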

For any node in a resistive grid with conductance ratio λ whose minimum distance from the boundary is greater than p(κ, λ), the difference between the nodal voltage resulting from periodic boundary conditions and the nodal voltage resulting from the usual boundary conditions is guaranteed to be less than κB. Fig. 7 shows the plot of p versus λ for κ = 0.1. As one would intuitively expect, p decreases for increasing λ (less smoothing). Thus, if the resistive grid is large enough in comparison to the amount of smoothing, the different boundary conditions significantly alter the outputs of cells only in a small neighborhood of the boundary. We are now in a position to use the DFT to describe the transfer function of the array without any reservations


about inconsistent boundary conditions. We define the two-dimensional DFT of an M by N point waveform $x_{m,n}$ by

$$X_{\mu,\nu} = \sum_{m=0}^{M-1}\sum_{n=0}^{N-1} x_{m,n}\, e^{-j(2\pi\mu m/M)}\, e^{-j(2\pi\nu n/N)}.$$

In this case, the inverse transform is

$$x_{m,n} = \frac{1}{MN}\sum_{\mu=0}^{M-1}\sum_{\nu=0}^{N-1} X_{\mu,\nu}\, e^{j(2\pi\mu m/M)}\, e^{j(2\pi\nu n/N)}.$$

The frequency response for the finite grid with periodic boundary conditions is obtained by sampling the frequency response of the infinite grid at regularly spaced points in $[0,1)^2$. More precisely, if $H(e^{j2\pi f_m}, e^{j2\pi f_n})$ for $f_m, f_n \in [0,1)$ is the frequency response of the infinite array, then $H(e^{j(2\pi\mu/M)}, e^{j(2\pi\nu/N)})$ for μ ∈ {0,1,...,M−1} and ν ∈ {0,1,...,N−1} is the frequency response for an M by N node grid. Thus from (5) the frequency response of an M by N node resistive grid with periodic boundary conditions and conductance ratio λ is

$$H_{\mu,\nu} = \frac{\lambda}{2(1-\cos(2\pi\mu/M)) + 2(1-\cos(2\pi\nu/N)) + \lambda} \qquad (8)$$

for μ ∈ {0,...,M−1} and ν ∈ {0,...,N−1}.

IV. ROBUSTNESS ANALYSIS

One of the disadvantages of analog computation is that the degree of accuracy obtainable is limited. This limitation arises from several different sources, such as the introduction of noise into the signal, imprecision in measuring the signal, and variations in the analog components away from the ideal. In analog VLSI design there is always a trade-off between the area on a chip occupied by a component and the precision with which it can be specified. Since resistive grid implementations for image processing should have a large number of nodes, area is at a premium. One of the attractions of resistive grid filtering claimed by its proponents is that the results of the filtering operation are robust in the presence of random component variations. However, up to the present time, this has merely been asserted and verified by simulation [16]. To our knowledge, there have not been any theoretical proofs of this statement. The ability to predict the amount of error in the output, given the standard deviation of the resistive component errors, would be useful in determining how precisely and accurately the resistive components must be specified. The rest of this section is devoted to the proof and interpretation of the following proposition.

Proposition 2: Given an N-node one-dimensional resistive grid with nominal conductance ratio λ, periodic boundary conditions, and input data $\{d_n\}_{n=0}^{N-1}$, assume that for all n ∈ {0,...,N−1}, the transversal conductance between nodes n and n+1 departs from its nominal value, G, by an additive term $\varphi_n$. In addition, assume that $\varphi_n$ is a random variable that assumes values in (−G, G) and is independent from resistor to resistor with zero mean and standard deviation σG. Define $\{\hat d_\nu\}_{\nu=0}^{N-1}$ to be the DFT of the input waveform and e to be the difference between the outputs of this resistive grid and the ideal resistive grid with no component variations. Under these conditions, for small σ, the expected value of the square of the Euclidean norm of the error, e, is approximately given by

$$E\!\left[\|e\|^2\right] \approx \sigma^2 \lambda^2 S(\lambda)\,\frac{1}{N}\sum_{\nu=0}^{N-1} w\!\left(\frac{\nu}{N},\lambda\right) |\hat d_\nu|^2 \qquad (9)$$

where

$$w(f,\lambda) = \frac{2(1-\cos(2\pi f))}{\left(2(1-\cos(2\pi f)) + \lambda\right)^2}$$

for $f \in [0,1)$ and $\lambda \in \mathbb{R}^{+}$, and $S(\lambda) = (1/N)\sum_{\mu=0}^{N-1} w(\mu/N,\lambda)$.

Proof:

Normalize the conductance values so that the conductance of the transversal resistances is equal to one. Then the maximum variation lies in the interval (−1,1) and the standard deviation of the variation is equal to σ. Since we assume periodic boundary conditions, for the rest of this section all subscripts are assumed to be taken mod N. Consider the block diagram, Fig. 3, introduced in Section II. If we consider only the steady-state response, we can set s = 0. In this case, the system reduces to the system in Fig. 8(a) where the A block corresponds to multiplication of the input vector by the matrix

$$A = \begin{bmatrix} 2 & -1 & 0 & \cdots & 0 & -1\\ -1 & 2 & -1 & & & 0\\ 0 & -1 & 2 & \ddots & & \vdots\\ \vdots & & \ddots & \ddots & -1 & 0\\ 0 & & & -1 & 2 & -1\\ -1 & 0 & \cdots & 0 & -1 & 2 \end{bmatrix}.$$
To incorporate the effect of variation in the conductances, we add a matrix F in parallel with A in the feedback path; see Fig. 8(b). Denote the difference between the actual conductance of the resistor between nodes n and n+1 and its nominal conductance of 1 by $\varphi_n$. Because we assume periodic boundary conditions, $\varphi_{N-1}$ is the variation in the resistor between node N−1 and node 0. Then

$$F = \begin{bmatrix} \varphi_{N-1}+\varphi_0 & -\varphi_0 & 0 & \cdots & 0 & -\varphi_{N-1}\\ -\varphi_0 & \varphi_0+\varphi_1 & -\varphi_1 & & & 0\\ 0 & -\varphi_1 & \varphi_1+\varphi_2 & \ddots & & \vdots\\ \vdots & & \ddots & \ddots & -\varphi_{N-3} & 0\\ 0 & & & -\varphi_{N-3} & \varphi_{N-3}+\varphi_{N-2} & -\varphi_{N-2}\\ -\varphi_{N-1} & 0 & \cdots & 0 & -\varphi_{N-2} & \varphi_{N-2}+\varphi_{N-1} \end{bmatrix}.$$

It can easily be verified that F can be expressed as the product $D^{T}\Phi D$, where $\Phi = \mathrm{diag}\{\varphi_n\}_{n=0}^{N-1}$ and

$$D = \begin{bmatrix} 1 & -1 & 0 & \cdots & 0\\ 0 & 1 & -1 & & \vdots\\ \vdots & & \ddots & \ddots & 0\\ 0 & & & 1 & -1\\ -1 & 0 & \cdots & 0 & 1 \end{bmatrix}.$$

By a simple loop transformation, the system in Fig. 8(b) is equivalent to the feedback system in Fig. 8(c). Note that the block labeled G in Fig. 8(c) is exactly the feedback system corresponding to the ideal resistive grid (Fig. 8(a)). Thus, the effect of the variation in the transversal resistances is to add a variation-dependent feedback term to the ideal resistive grid. For zero variation this feedback term disappears and we are left with the ideal resistive grid. To study the effect of the variation in the conductances for a given input d to the resistive grid, define $\tilde d$ to be the input to the ideal resistive grid block G in Fig. 8(c).

Fig. 8. Using the CNN framework, both ideal resistive grids and resistive grids with transversal resistances that depart from the ideal can be expressed as block diagrams. (a) The block diagram corresponding to the ideal resistive grid. (b) The block diagram corresponding to the resistive grid with variation in the transversal resistances. (c) A block diagram equivalent to the system in (b).

Define $\Delta d = d - \tilde d$ and the error between the outputs of the ideal resistive grid and the resistive grid with variation as $e = v - \tilde v$. By linearity, $e = G\,\Delta d$. Using the block diagram, we can obtain an expression relating $\Delta d$ to d,

$$[I + \lambda^{-1}FG]\,\Delta d = \lambda^{-1}FG\, d.$$

We will prove below that all of the eigenvalues of $\lambda^{-1}FG$ have norm less than one. For now, we assume this to be the case and write

$$\Delta d = \sum_{k=0}^{\infty} (-1)^k \left(\lambda^{-1}FG\right)^{k+1} d.$$

Thus, we obtain an explicit closed form expression for the error,

$$e = G \sum_{k=0}^{\infty} (-1)^k \left(\lambda^{-1}FG\right)^{k+1} d.$$

Before proceeding with the rest of the proof, we prove that the eigenvalues of $\lambda^{-1}FG$ have norm less than one. In the course of this proof, we also introduce concepts and notation that will be useful for the rest of the analysis. We begin by defining circulant matrices [17]. A matrix C of order N is circulant if it has the form

$$C = \mathrm{circ}(c_0, c_1, \dots, c_{N-1}) = \begin{bmatrix} c_0 & c_{N-1} & \cdots & c_1\\ c_1 & c_0 & \cdots & c_2\\ \vdots & & \ddots & \vdots\\ c_{N-1} & c_{N-2} & \cdots & c_0 \end{bmatrix}. \qquad (10)$$

In other words, a circulant matrix C is completely determined by its first column. Denoting the elements of the first column by {cz}:=-,' where co corresponds to the top element, the kth row of a circulant matrix is [ c k - " ... CkPr Ck-(,Thus, multiplication of an N-dimensional vector by a circulant matrix is equivalent to circularly convolving the N-dimensional vector by the N point sequence corresponding to the first column. This fact suggests a connection between circulant matrices and the discrete Fourier transform. Indeed, a matrix which is circulant of order N is diagonalizable by a symmetric matrix. n, whose rows (and, by symmetry, columns) are the set of basis vectors used in


the discrete Fourier transform of an N point waveform, i.e.,

    Π = [ π_0^T  π_1^T  ⋯  π_{N−1}^T ]^T,

with π_k = [1  e^{−j(2πk/N)}  e^{−j(2πk/N)2}  ⋯  e^{−j(2πk/N)(N−1)}] for k ∈ {0, ⋯, N − 1}. Multiplication of an N-dimensional vector by Π returns an N-dimensional vector whose elements are the DFT of the elements of the original vector. Using this definition, Λ = ΠCΠ^{−1}, where Λ is the diagonal matrix of eigenvalues and Π^{−1} = (1/N)Π^H. The H superscript denotes Hermitian transpose. The eigenvalue of a circulant matrix corresponding to the eigenvector (1/N)π_k^H is equal to the kth component of the N point DFT of the first column of C. Since multiplying a vector by C is equivalent to circularly convolving the vector by the first column of C, this fact is simply a restatement of the equivalence between convolution in the space domain and multiplication in the frequency domain. Conversely, since multiplying two discrete waveforms in the space domain is equivalent to convolving the transforms of the waveforms and multiplying by 1/N, transforming any diagonal matrix M via the similarity transform ΠMΠ^{−1} results in a circulant matrix whose first column is equal to 1/N times the DFT of the sequence of diagonal entries. Therefore, when dealing with vector/matrix equations where the matrices are circulant and/or diagonal, we can make a coordinate transformation to the discrete frequency domain. For example, consider the equation x = CMy, where x and y are N-dimensional vectors, C is a circulant matrix, and M is diagonal. Multiplying both sides by Π and defining x̃ = Πx, ỹ = Πy, C̃ = ΠCΠ^{−1}, and M̃ = ΠMΠ^{−1}, we obtain x̃ = C̃M̃ỹ. The elements of the vectors x̃ and ỹ are the DFT of the vectors x and y. C̃ is a diagonal matrix whose elements are the DFT of the first column of C. M̃ is a circulant matrix whose first column is 1/N times the DFT of the diagonal elements of M.
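These relationships are easy to verify numerically. The sketch below (our illustration, not from the paper) builds Π, a random circulant C, and a random diagonal M, and checks that ΠCΠ^{−1} is the diagonal matrix of DFT values, that ΠMΠ^{−1} has first column equal to 1/N times the DFT of the diagonal of M, and that x = CMy transforms to x̃ = C̃M̃ỹ:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 6
n = np.arange(N)
Pi = np.exp(-2j * np.pi * np.outer(n, n) / N)   # DFT matrix: rows are pi_k
Pi_inv = Pi.conj().T / N                        # Pi^{-1} = (1/N) Pi^H

c = rng.standard_normal(N)                      # first column of C
C = np.column_stack([np.roll(c, k) for k in range(N)])   # circulant matrix
m = rng.standard_normal(N)
M = np.diag(m)

C_t = Pi @ C @ Pi_inv    # should be diag(DFT of c)
M_t = Pi @ M @ Pi_inv    # should be circulant with first column fft(m)/N

y = rng.standard_normal(N)
x = C @ M @ y            # space-domain equation x = C M y
```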
In the following, we will use the tilde notation for vectors to denote the vector resulting from left multiplication by Π, and for matrices to denote the similar matrix resulting from left multiplication by Π and right multiplication by Π^{−1}. With this background, we are now prepared to prove the following lemma.

Lemma 1: Assuming that the maximum variation of the transversal conductances from their nominal values is at most one, all of the eigenvalues of the matrix P = λ^{−1}FG have norm less than one.

Proof: Since G implements the circular convolution corresponding to the ideal resistive grid with periodic boundary conditions, it is a circulant matrix. By (8), G̃ = diag[{λ/(2(1 − cos(2πv/N)) + λ)}_{v=0}^{N−1}]. As noted above, F can be factored as F = D^TΦD, where Φ is diagonal and D and D^T are circulant. Since eigenvalues are preserved under similarity transformations, the eigenvalues of P are equal to the eigenvalues of P̃ = λ^{−1}D̃^HΦ̃D̃G̃, where D̃ = diag[{1 − e^{j(2πv/N)}}_{v=0}^{N−1}] and Φ̃ is the circulant matrix whose first column is equal to 1/N times the DFT of the sequence corresponding to the resistor variations. Because that sequence is real, its DFT is even in magnitude and odd in phase. Thus, Φ̃ is not only circulant, but also Hermitian. Since D̃ is diagonal and 1 − e^{j(2π·0/N)} = 0, the first rows and columns of D̃ and D̃^H consist solely of zeros. Thus, the first row and column of P̃ are zero. This implies that the eigenvalues of P̃ are 0 and the eigenvalues of the matrix P̃_1, the first lower principal submatrix of P̃, i.e.,

    P̃_1 = λ^{−1}D̃_1^HΦ̃_1D̃_1G̃_1,

where D̃_1 = diag[{1 − e^{j(2πv/N)}}_{v=1}^{N−1}], G̃_1 = diag[{λ/(2(1 − cos(2πv/N)) + λ)}_{v=1}^{N−1}], and Φ̃_1 is the first lower principal submatrix of Φ̃. Since Φ̃ is Hermitian, the eigenvalues of its first principal submatrix separate its own eigenvalues [18]. Since the eigenvalues of Φ̃ are equal to the variations in the transversal conductances, which have norm less than one, the eigenvalues of Φ̃_1 also have norm less than one. Since Φ̃_1 is Hermitian, its singular values are equal to the absolute values of its eigenvalues. Thus the norm of Φ̃_1 is less than one. Now consider the matrix P̃_1. Right multiplying by I = D̃_1D̃_1^{−1} yields

    P̃_1 = λ^{−1}D̃_1^HΦ̃_1D̃_1G̃_1D̃_1D̃_1^{−1}.

Since the above equation is simply a coordinate transformation, the eigenvalues of P̃_1 are equal to the eigenvalues of Φ̃_1λ^{−1}D̃_1G̃_1D̃_1^H, where

    λ^{−1}D̃_1G̃_1D̃_1^H = diag[{2(1 − cos(2πk/N))/(2(1 − cos(2πk/N)) + λ)}_{k=1}^{N−1}].

Since this matrix is diagonal and all of the diagonal elements have magnitude less than one, the matrix has norm less than one. Since Φ̃_1 also has norm less than one, their product has norm less than one. Thus, all of the eigenvalues of P̃_1 have magnitude less than one. Therefore, all of the eigenvalues of P̃ have magnitude less than one.

To obtain an estimate of the expected value of e^Te, we take a first order estimate of e and make a transformation to the frequency domain coordinates. Then

    ẽ = −λ^{−1}G̃D̃^HΦ̃D̃G̃d̃.

By linearity and since ẽ^Hẽ = Ne^Te,

    E[e^Te] = (1/N)(d̃^HG̃D̃^H) E[Φ̃^Hλ^{−2}D̃G̃G̃D̃^HΦ̃] (D̃G̃d̃).    (12)

Now,

    λ^{−2}D̃G̃G̃D̃^H = diag[{2(1 − cos(2πv/N))/(2(1 − cos(2πv/N)) + λ)²}_{v=0}^{N−1}].

Also,

    Φ̃ = (1/N) circ(φ̂_0, φ̂_1, ⋯, φ̂_{N−1}),

where φ̂_v is the vth component of the DFT of the sequence of conductance variations. Define φ̄_v to be the vth row of Φ̃, i.e., φ̄_v = (1/N)[φ̂_v  φ̂_{v−1}  ⋯  φ̂_{v−(N−1)}]. By linearity of the expectation,

    E[Φ̃^Hλ^{−2}D̃G̃G̃D̃^HΦ̃] = Σ_{u=0}^{N−1} [2(1 − cos(2πu/N))/(2(1 − cos(2πu/N)) + λ)²] E[φ̄_u^Hφ̄_u].

Since the conductance variation is independent from resistor to resistor with zero mean and standard deviation σ, E[φ̄_u^Hφ̄_u] = (σ²/N)I for all u ∈ {0, 1, ⋯, N − 1}. Thus,

    E[e^Te] = (σ²/N) [Σ_{u=0}^{N−1} 2(1 − cos(2πu/N))/(2(1 − cos(2πu/N)) + λ)²] (1/N)‖G̃D̃d̃‖²,    (13)

where

    ‖G̃D̃d̃‖² = λ² Σ_{v=0}^{N−1} [2(1 − cos(2πv/N))/(2(1 − cos(2πv/N)) + λ)²] |d̃_v|².    (14)

Defining

    w(f, λ) = 2(1 − cos 2πf)/(2(1 − cos 2πf) + λ)²

and S(λ) = (λ/N)Σ_{v=0}^{N−1} w(v/N, λ), and substituting (14) into (13), we obtain the desired estimate of the mean squared error,

    E[e^Te] = σ²S(λ)(λ/N) Σ_{v=0}^{N−1} w(v/N, λ)|d̃_v|².

4.2. Interpretation

The summation in the above estimate is a weighted sum of the squared norms of the DFT frequency components of the input vector. Thus, the error in the output due to error in the transversal resistances depends upon the spectral content of the input image data. Intuitively, this might be expected. For example, consider an input that is equal to a constant at each node of the array. For all possible positive values of the transversal resistors, the co-content function is minimized with the nodal voltages equal to the input voltages. Indeed, it turns out that the dc component of the input has no effect on the error at all. On the other hand, the presence of high frequencies in the input data indicates that the input data is less smooth. In other words, there is a greater difference between neighboring pixels. In this case, one would expect variations in the transversal conductances to have a greater effect on the output image. Thus, intuitively, higher frequency components in the input data should have a greater effect on the output error. Our results indicate this is true for λ > 4. Fig. 9 shows a plot of w(f, λ) versus f for various values of λ. Because of aliasing, the highest frequencies are in the middle of the horizontal scale. For λ > 4, the weight increases from zero at dc to a maximum of 4/(4 + λ)² at f = 0.5, corresponding to the highest unaliased frequency. Surprisingly, in the more useful range for image processing, λ ∈ [0, 4], the weight increases from zero at dc to a maximum of 1/(4λ) at f = (1/2π)cos^{−1}(1 − λ/2) and then decreases for higher frequencies.

To get an idea of the absolute size of the error, assume that the norm of the input is equal to one, ‖d‖² = 1. In this case, (1/N)Σ_{v=0}^{N−1} w(v/N, λ)|d̃_v|² is a weighted average of the values of w(v/N, λ) for v ∈ {0, ⋯, N − 1}. The maximum of this weighted average over all possible inputs of norm one is equal to the maximum value of w(v/N, λ) for v ∈ {0, ⋯, N − 1}. This is bounded by the maximum value of w(f, λ) for f ∈ [0, 1), which was mentioned above. For large values of N, S(λ) is approximately given by

    S(λ) ≈ λ ∫_0^1 w(f, λ) df.

Thus, for inputs of norm one, the mean squared error estimate is approximately bounded for large N by ρ(λ)σ², where

    ρ(λ) = λ² [∫_0^1 w(f, λ) df] max_{f ∈ [0,1)} w(f, λ).

Fig. 9. The weighting factor w(f, λ) weights the effects of each frequency component of the input vector on the error.

Fig. 10. For inputs to the resistive grid of norm one, the estimate of the mean squared error for conductance variations of standard deviation σ is bounded by ρ(λ)σ².

The function ρ(λ) assumes a maximum of 0.048 at λ = 2 (Fig. 10). Thus for an input with norm one and conductance variation with a standard deviation of 20%, the maximum estimated mean squared error is only about 0.0019.

To investigate the agreement between the theoretical estimate of the error and the actual error, we used a computer to find the solution of the one-dimensional resistive grid equation with varying transversal conductances and compared the results to the solution of the ideal resistive grid equation. Random conductance variations of varying standard deviations were generated according to a probability density function Ke^{−x²/2σ²} for x ∈ [−1, 1], where K is chosen so that ∫_{−1}^{1} Ke^{−x²/2σ²} dx = 1. For small σ, the resulting waveform is approximately Gaussian with standard deviation σ. For large σ, the waveform is approximately uniform on the interval [−1, 1] with standard deviation 1/√3. For a given input to the resistive grid, the experimental mean squared error was calculated by averaging the squared norm of the difference between the outputs of the ideal resistive grid and the resistive grid with varying transversal conductances over many different sets of conductance variations. The theoretical estimate was calculated using the standard deviation of the actual conductance variations used in the computer experiment.

Fig. 11 shows the theoretical and experimental mean squared error for a 256-node resistive grid with inputs that are complex exponentials of norm 1. The experimental results were calculated by averaging 100 trials. From this figure, it appears that the maximum standard deviation for which the theoretical estimate is valid increases with λ. Define the normalized difference between the theoretical and experimental curves in Fig. 11 to be the Euclidean norm of the difference between the two curves divided by the Euclidean norm of the experimental curve. Fig. 12(a) plots the normalized difference versus the standard deviation of the conductance variations for various values of λ. As expected, the normalized difference increases as the standard deviation of the conductance variations increases. However, the rate of increase is slower for larger values of λ. Fig. 12(b) plots the maximum standard deviation of conductance variation for which the normalized difference is less than 10% versus λ. The plots in Fig. 12 were generated using a 64-node resistive grid and averaging the experimental results over 100 trials.

Fig. 13 compares the theoretically predicted expected value of the norm of the error for a resistive grid input with each of the rows of the 256 by 256 pixel image in Fig. 14 for conductance variations of different standard deviations, σ. The experimental data was obtained by averaging over 300 different sets of conductance variations. Reasonably good agreement is obtained, especially as the value of λ increases.

4.3. Extension to Conductance Variations with Nonzero Mean

To conclude this section, we mention and prove an extension to Proposition 2 to include the case where the conductance variation has mean not equal to zero. In some cases, the assumption that the conductance variations have zero mean may not be satisfied. For example, on an individual chip, the conductance variations might be highly correlated and better modeled by an independent identically distributed process with waveforms of mean ε and standard deviation σ. It turns out that it is quite easy to modify Proposition 2 to take this into account.

Proposition 3: Given an N-node one-dimensional resistive grid with nominal conductance ratio λ, periodic boundary conditions, and input data {d_n}_{n=0}^{N−1}, assume that for all n ∈ {0, ⋯, N − 1}, the transversal conductance between nodes n and n + 1 departs from its nominal value, G, by an additive term φ_n. In addition, assume that φ_n is a random variable that assumes values in (−G, G) and is independent from resistor to resistor with mean εG and standard deviation σG. Under these conditions, using the same notation as in Proposition 2, the expected value of the square of the Euclidean norm of the error, e, for small σ and small ε


Fig. 11. Comparison of the theoretically predicted versus actual error for resistive grids with complex exponential inputs, nominal conductance ratio λ, and transversal conductance variations of standard deviation σ indicates that the agreement is better for larger values of λ. The solid line represents the theoretical estimate. The dotted line represents the experimental error. (a) λ = 0.1, σ = 0.1. (b) λ = 1, σ = 0.1. (c) λ = 5, σ = 0.1. (d) λ = 0.1, σ = 0.2. (e) λ = 1, σ = 0.2. (f) λ = 5, σ = 0.2. (g) λ = 0.1, σ = 0.3. (h) λ = 1, σ = 0.3. (i) λ = 5, σ = 0.3.

is approximately given by

    E[e^Te] = σ²S(λ)(λ/N) Σ_{v=0}^{N−1} w(v/N, λ)|d̃_v|² + (ε²λ²/N) Σ_{v=0}^{N−1} w(v/N, λ)²|d̃_v|².    (15)

The proof is essentially the same as the proof of Proposition 2. However, because the conductance variation is now independent from resistor to resistor with mean ε and standard deviation σ, E[φ̄_v^Hφ̄_v] = (σ²/N)I + ε²1_v, where 1_v is an N by N matrix whose elements are all zero except for a 1 at the vth diagonal element. Thus,

    E[Φ̃^Hλ^{−2}D̃G̃G̃D̃^HΦ̃] = (σ²/N)[Σ_{u=0}^{N−1} w(u/N, λ)] I + ε²λ^{−2}D̃G̃G̃D̃^H.



Fig. 12. (a) The normalized difference between the theoretical and experimental curves increases as the standard deviation of the conductance variation increases. However, the increase is slower for larger values of λ. (b) The maximum standard deviation of the conductance variation such that the normalized difference between the experimental and theoretically predicted error is less than 10% increases with λ.


Fig. 13. A comparison of the theoretical estimate versus the actual error for one-dimensional images consisting of each row of the image in Fig. 14, for (a) λ = 0.1, (b) λ = 0.2, (c) λ = 0.5, (d) λ = 1.0, (e) λ = 2.0, and (f) λ = 5.0, shows that agreement increases with increasing λ. The solid lines correspond to the theoretical estimate, the dotted lines to the experimental results.

Substituting this into (12) yields (15).
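To make the zero-mean estimate concrete, the following Monte Carlo sketch (our own illustration, assuming the normalized node equations (λI + A)v = λd for the ideal grid and (λI + A + F)u = λd with F = DᵀΦD for the perturbed grid) compares the sample mean squared error against the zero-mean estimate σ²S(λ)(λ/N)Σ_v w(v/N, λ)|d̃_v|²:

```python
import numpy as np

rng = np.random.default_rng(1)
N, lam, sigma, trials = 64, 1.0, 0.1, 400

# Circulant second-difference matrix A (periodic boundary conditions).
col = np.zeros(N)
col[0], col[1], col[-1] = 2.0, -1.0, -1.0
A = np.column_stack([np.roll(col, k) for k in range(N)])

# Difference operator across each transversal resistor: (D x)_n = x_{n+1} - x_n.
D = np.roll(np.eye(N), 1, axis=1) - np.eye(N)

d = rng.standard_normal(N)
d /= np.linalg.norm(d)                      # input of norm one
v_ideal = np.linalg.solve(lam * np.eye(N) + A, lam * d)

mses = []
for _ in range(trials):
    # Zero-mean conductance variations; a plain Gaussian (clipped so the
    # perturbed conductances 1 + phi stay positive) stands in for the
    # truncated density used in the paper's experiments.
    phi = np.clip(sigma * rng.standard_normal(N), -0.9, 0.9)
    F = D.T @ np.diag(phi) @ D
    u = np.linalg.solve(lam * np.eye(N) + A + F, lam * d)
    mses.append(np.sum((u - v_ideal) ** 2))
mse = float(np.mean(mses))

# First-order theoretical estimate.
v = np.arange(N)
x = 2.0 * (1.0 - np.cos(2.0 * np.pi * v / N))
wv = x / (x + lam) ** 2
S = lam / N * wv.sum()
estimate = sigma ** 2 * S * (lam / N) * np.sum(wv * np.abs(np.fft.fft(d)) ** 2)
```

With these parameters the sample mean squared error and the first-order estimate agree to within a modest factor, mirroring the agreement reported in Figs. 11–13.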

V. APPLICATION TO EDGE DETECTION

5.1. Introduction

Edge detection is one of the primary problems in research into early visual processing. It is an important preprocessing step for many computer vision algorithms. The location of intensity discontinuities is one of the

primary components of the raw primal sketch proposed by Marr [19]. Marr asserts that the importance of the raw primal sketch is that it is the first representation derived from an image whose primitives have a high probability of reflecting physical reality directly. Canny [20] states that the edge detection process serves to simplify the analysis of images by drastically reducing the amount of data to be processed, while at the same time preserving useful structural information about object boundaries. For example, the stereo depth detection algorithm of Grimson utilizes edges detected in a preprocessing stage [21]. Ullman's approach to the interpretation of visual motion begins with establishing the correspondence between low level correspondence tokens such as edges [22]. Indeed, the applications for edge detection are quite diverse [23]-[25]. Quite a few analog circuit arrays for edge detection have been proposed by various researchers [5]-[7], [26]. These have been implemented using resistive grid arrays similar to the resistive grid discussed here. However, these arrays have been nonlinear. Using the insight into the frequency response of the resistive grid, we show that the linear resistive grid can be used in an extremely simple edge detection algorithm. This algorithm was derived from an edge detection algorithm developed by Shen and Castan [27]-[29] that is closely related to resistive grid filtering, as we discuss below. A similar parallel result has been reported in [30].

Fig. 14. Each row of this 256 by 256 pixel image was used as input to a 256 node one-dimensional resistive grid to compare the theoretically predicted and experimental error on real images.

5.2. Shen and Castan's Edge Detector: A Resistive Grid Implementation

Some of the most popular edge detection methods in use consist of low-pass filtering the image to remove noise and then looking for maxima of the gradient or zero crossings of the Laplacian of the resulting image. Shen and Castan have proposed a preprocessing filter for edge detection whose convolution kernel is an exponential function. This filter has the advantage that it is easily implementable using a recursive algorithm. In addition, this filter has the property that the Laplacian of the filtered output is approximately proportional to the difference between the input and the filtered output. This difference is called the residual.

The convolution kernel implemented by the resistive grid is very closely related to Shen and Castan's filter in the sense that the discrete space Laplacian of the output of the resistive grid is exactly proportional to the difference between the input to the array and the output. Equation (2) can be rewritten as

    λ(v_{m,n} − d_{m,n}) = v_{m+1,n} + v_{m−1,n} + v_{m,n+1} + v_{m,n−1} − 4v_{m,n}.

The right-hand side of this equation is a discrete space approximation to the Laplacian of the nodal voltage waveform. As an aside, this simple fact is interesting because it provides a link between the biologically motivated work with the silicon retina and some of the current work in computer vision. The output of the silicon retina is proportional to the difference between the input and the output of the resistive grid. If a one-dimensional or rectangular two-dimensional resistive grid is used, this output is proportional to the Laplacian of the filtered output. For example, the chip developed by Delbruck which focuses an image on itself uses the output of a one-dimensional silicon retina [31]. In light of the above, Delbruck was actually using the Laplacian as a measure of the sharpness of the image. In the computer vision literature, the use of the Laplacian as a measure of sharpness for automatic focusing was also examined by Krotkov [32].

Since the Laplacian of an image filtered by the exponential filter is approximately proportional to the residual, Shen and Castan use the zero crossings of the residual to detect edges. In related work, Chen, Lee, and Pavlidis [33] have also proposed using the zero crossings of the residual of an image filtered with a Gaussian convolution kernel as edge locations. A simple edge detection algorithm by residual zero crossings could be easily implemented using a CNN resistive grid architecture as follows. The algorithm first filters the image with a resistive grid, then thresholds the voltage across each vertical resistance at zero. In the CNN implementation, the difference between the input voltage and the cell output voltage would be thresholded at zero. This results in a binary image consisting of patches corresponding to areas that have positive or negative residual. The borders between these areas could be detected with the CNN binary image edge detection template introduced in [9].
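The residual–Laplacian identity can be checked numerically in one dimension (a sketch using the normalized periodic model (λI + A)v = λd, with discrete Laplacian v_{n+1} + v_{n−1} − 2v_n):

```python
import numpy as np

rng = np.random.default_rng(3)
N, lam = 32, 0.5

# Circulant second-difference matrix A (periodic boundary conditions).
col = np.zeros(N)
col[0], col[1], col[-1] = 2.0, -1.0, -1.0
A = np.column_stack([np.roll(col, k) for k in range(N)])

d = rng.standard_normal(N)
v = np.linalg.solve(lam * np.eye(N) + A, lam * d)   # resistive grid output

# (lam*I + A)v = lam*d  rearranges to  lam*(v - d) = -A v, the discrete Laplacian.
laplacian = np.roll(v, -1) + np.roll(v, 1) - 2.0 * v
residual = lam * (v - d)
```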
These borders would correspond to edges. In practice, it is better to threshold the residual at some small positive value to avoid detecting residual zero crossings resulting from very small fluctuations in the residual. Unfortunately, simply labeling the zero crossings of the residual as edge locations leads to many spurious responses. These spurious responses are generated because the residual contains most of the high frequency components of the image. The residual of an image filtered by a low-pass filter with transfer function H is equal to the

[Figs. 15 and 16: frequency responses (dB) of the residual filter 1 − H and of the difference filter H1 − H2; captions not recovered.]

image filtered with a filter with transfer function 1 − H. This filter is a high-pass filter that passes most of the high frequency components of the image with near unity gain. Fig. 15 shows the cross section of the frequency response of the residual filter for resistive grid filtering, 1 − H(e^{j2πf_m}, e^{j2πf_n}), at f_n = 0. Although the high frequency components contain the information about the edge locations, they also contain high frequency noise components. This high frequency noise can lead to detection of spurious edges. To eliminate these spurious responses, both Shen and Castan's algorithm and Chen et al.'s algorithm use quite complicated postprocessing steps which would be difficult to implement in analog VLSI. However, Chen et al. propose a simple preprocessing step in their algorithm which helps to remove many of the spurious responses. Instead of locating zero crossings of the difference between the image and a low-pass filtered version of it, they actually locate zero crossings of the difference between the image filtered by a low-pass filter with a high cutoff frequency, H1, and the image filtered by a low-pass filter with a low cutoff frequency, H2. This is somewhat similar to the Difference of Gaussian (DOG) algorithm of Hildreth and Marr. Fig. 16 shows that the result of this operation has a smaller high frequency content than the residual. Implemented on a resistive grid or a CNN, this algorithm would require either implementing two resistive grids on the chip, or a programmable CNN with analog storage [34] that could store one of the filtered images. Once the two filtered images are computed, the difference between the two images is thresholded at a small positive value and the binary edge detection circuit applied.

Fig. 18 shows the results of computer simulations of this edge detection algorithm operating on the image in Fig. 17. This image contains 256 by 256 pixels where each pixel assumes a value between 0 and 255. Fig. 18(a) shows the edges detected by thresholding the residual of the image filtered with a resistive grid with λ = 0.1 at 10. As expected, there are many spurious edges detected. Many of these are removed by using the zero crossings of the difference between the image filtered by a resistive grid with λ = 2 and the image filtered by a resistive grid with λ = 0.1 (Fig. 18(b)). The threshold in this case was also chosen to be 10.

Fig. 17. This 256 by 256 pixel image was input to the resistive grid edge detection algorithm described in Section V.
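A one-dimensional sketch of this two-filter scheme (our illustration: a 64-pixel line with a step edge, periodic boundary conditions, and the normalized model (λI + A)v = λd) shows a single above-threshold sign change of the difference image, located at the step:

```python
import numpy as np

def grid_filter(d, lam):
    """Steady-state resistive grid output: solve (lam*I + A) v = lam*d,
    with A the circulant second-difference matrix (periodic boundaries)."""
    N = len(d)
    col = np.zeros(N)
    col[0], col[1], col[-1] = 2.0, -1.0, -1.0
    A = np.column_stack([np.roll(col, k) for k in range(N)])
    return np.linalg.solve(lam * np.eye(N) + A, lam * d)

N = 64
d = np.zeros(N)
d[32:] = 1.0          # step edge between pixels 31 and 32

# Difference between a lightly smoothed (lam = 2) and a heavily smoothed
# (lam = 0.1) version, i.e. the two-filter variant of the residual test.
diff = grid_filter(d, 2.0) - grid_filter(d, 0.1)

# Edge locations: sign changes of the difference that exceed a small
# threshold, so tiny fluctuations are ignored.  (The wrap-around edge
# between pixels 63 and 0 is deliberately not scanned here.)
eps = 0.02
edges = [i for i in range(N - 1)
         if np.sign(diff[i]) != np.sign(diff[i + 1])
         and max(abs(diff[i]), abs(diff[i + 1])) > eps]
```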
VI. CONCLUSION

In this paper, we have used the CNN framework to provide an input-output analysis of the processing performed by resistive grids. The primary result of this analysis is the theoretical validation of one of the folk theorems surrounding work with resistive grids: resistive grids are robust in the presence of parameter variations in the array. We have also used the insight about the frequency response of resistive grids gained through this analysis to propose a simple edge detection algorithm.

Fig. 18. (a) Edges detected using the zero crossings of the residual of the image in Fig. 17 show many spurious responses. (b) Most of the spurious responses are removed by using zero crossings of the difference between two filtered images.

ACKNOWLEDGMENT

The authors would like to thank Dr. T. Roska for many helpful discussions. In addition, they would like to thank K. Crounse for his help designing some of the figures.

REFERENCES

[1] M. A. Mahowald and C. Mead, "Silicon retina," in C. Mead, Analog VLSI and Neural Systems. Menlo Park, CA: Addison-Wesley, 1989, ch. 15, pp. 251-278.
[2] M. A. C. Maher, S. P. DeWeerth, M. A. Mahowald, and C. A. Mead, "Implementing neural architectures using analog VLSI circuits," IEEE Trans. Circuits Syst., vol. 36, pp. 643-652, May 1989.
[3] J. Tanner and C. Mead, "Optical motion sensor," in C. Mead, Analog VLSI and Neural Systems. Menlo Park, CA: Addison-Wesley, 1989, ch. 14, pp. 225-255.
[4] J. Hutchinson, C. Koch, J. Luo, and C. Mead, "Computing motion using analog and binary resistive networks," Computer, pp. 52-63, Mar. 1988.
[5] J. Harris, C. Koch, J. Luo, and J. Wyatt, "Resistive fuses: Analog hardware for detecting discontinuities in early vision," in C. Mead and M. Ismail, Eds., Analog VLSI Implementation of Neural Systems. Boston, MA: Kluwer, 1989, ch. 2, pp. 27-56.
[6] N. K. Nordstrom, "Variational edge detection," Ph.D. dissertation, UC Berkeley, 1990.
[7] P. Perona and J. Malik, "Scale-space and edge detection using anisotropic diffusion," IEEE Trans. Pattern Anal. Mach. Intell., vol. 12, pp. 629-639, July 1990.
[8] L. O. Chua, "Stationary principles and potential functions for nonlinear networks," J. Franklin Inst., pp. 91-114, Aug. 1973.
[9] L. O. Chua and L. Yang, "Cellular neural networks: Applications," IEEE Trans. Circuits Syst., vol. 35, Oct. 1988.
[10] ——, "Cellular neural networks: Theory," IEEE Trans. Circuits Syst., vol. 35, Oct. 1988.
[11] T. Roska and L. O. Chua, "Cellular neural networks with nonlinear and delay-type template elements," in Proc. IEEE Int. Workshop on Cellular Neural Networks and Their Applications, pp. 12-25, 1990.
[12] D. E. Rumelhart, J. L. McClelland, and The PDP Research Group, Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vol. 1: Foundations. Cambridge, MA: MIT Press, 1986.
[13] L. O. Chua and B. E. Shi, "Multiple layer cellular neural networks: A tutorial," in E. F. Deprettere and A. van der Veen, Eds., Algorithms and Parallel VLSI Architectures, vol. A: Tutorials. New York: Elsevier, 1991, pp. 137-168.
[14] D. Standley and B. Horn, "An object position and orientation IC with embedded imager," in Proc. IEEE Int. Solid State Circuits Conf., pp. 38-39, Feb. 1991.
[15] C. Mead, Analog VLSI and Neural Systems. Reading, MA: Addison-Wesley, 1989.
[16] J. M. Hutchinson and C. Koch, "Simple analog and hybrid networks for surface interpolation," in J. S. Denker, Ed., Neural Networks for Computing. New York: American Institute of Physics, 1986, pp. 235-239.
[17] P. J. Davis, Circulant Matrices. New York: Wiley, 1979.
[18] J. H. Wilkinson, The Algebraic Eigenvalue Problem. Oxford: Oxford University Press, 1965.
[19] D. Marr, Vision. New York: W. H. Freeman and Company, 1982.
[20] J. Canny, "A computational approach to edge detection," IEEE Trans. Pattern Anal. Mach. Intell., vol. 8, pp. 679-697, Nov. 1986.
[21] W. E. L. Grimson, From Images to Surfaces. Cambridge, MA: MIT Press, 1981.
[22] S. Ullman, The Interpretation of Visual Motion. Cambridge, MA: MIT Press, 1985.
[23] R. J. Beattie, "Edge detection for semantically based early visual processing," Ph.D. dissertation, Univ. Edinburgh, 1984.
[24] R. A. Brooks, "Symbolic reasoning among 3-d models and 2-d images," Tech. Rep. AIM-343, Stanford Univ., 1981.
[25] B. K. P. Horn, "The Binford-Horn line-finder," Tech. Rep. AI Memo 285, MIT AI Lab., Cambridge, MA, 1971.
[26] J. G. Harris, C. Koch, and J. Luo, "A two-dimensional analog VLSI circuit for detecting discontinuities in early vision," Science, vol. 248, pp. 1209-1211, June 1990.
[27] J. Shen and S. Castan, "An optimal linear operator for edge detection," in Proc. IEEE Computer Soc. Conf. Computer Vision and Pattern Recognition, pp. 109-114, June 1986.
[28] ——, "Edge detection based on multi-edge models," in J. Besson, Ed., Real Time Image Processing: Concepts and Technologies. SPIE, pp. 46-53, Nov. 1987.
[29] ——, "Further results on DRF method for edge detection," in Proc. 9th Int. Conf. Pattern Recognition, pp. 223-225, Nov. 1988.
[30] W. Bair and C. Koch, "Real-time motion detection using an analog VLSI zero-crossing chip," in B. P. Mathur and C. Koch, Eds., Visual Information Processing: From Neurons to Chips. Orlando, FL, Apr. 1991, pp. 59-65.
[31] T. Delbruck, "A chip that focuses an image on itself," in C. Mead and M. Ismail, Eds., Analog VLSI Implementation of Neural Systems. Boston, MA: Kluwer, 1989, pp. 171-188.
[32] E. P. Krotkov, Active Computer Vision by Cooperative Focus and Stereo. New York: Springer-Verlag, 1989.
[33] M. H. Chen, D. Lee, and T. Pavlidis, "Residual analysis for feature detection," IEEE Trans. Pattern Anal. Mach. Intell., vol. 13, pp. 30-40, Jan. 1991.
[34] K. Halonen, V. Porra, T. Roska, and L. Chua, "VLSI implementation of a reconfigurable cellular neural network containing local logic (CNNL)," in Proc. IEEE Int. Workshop Cellular Neural Networks and Their Applications, pp. 206-215, 1990.

Bertram E. Shi received the B.S. and M.S. degrees in electrical engineering from Stanford University in 1987 and 1988, respectively. He is currently studying for his Ph.D. degree in electrical engineering at U. C. Berkeley. His present research interests lie in the areas of neural networks, signal processing, and computer vision.

Leon O. Chua (S'60-M'62-SM'70-F'74), for a photograph and biography, please see page 27 of the January 1992 issue of this TRANSACTIONS.
