
Statistical Mechanics of Neocortical Interactions (SMNI)

Lester Ingber

Lester Ingber Research, PO Box 06440, Wacker Dr PO Sears Tower, Chicago, IL 60606
ingber@ingber.com, ingber@alumni.caltech.edu

and

DRW Investments LLC, Chicago Mercantile Exchange Center, 30 S Wacker Dr Ste 1516, Chicago, IL 60606
ingber@drwtrading.com

SMNI RATIONALE - TOP DOWN
SMNI DEVELOPMENT - BOTTOM UP
MATHEMATICAL AND NUMERICAL ASPECTS
SMNI APPLICATIONS - STM
SMNI APPLICATIONS - EEG
OTHER APPLICATIONS - MNN AND CHAOS
SMNI CANONICAL MOMENTA INDICATORS - EEG
SMNI GENERIC FEATURES

The latest Adaptive Simulated Annealing (ASA) code and related reprints can be retrieved via WWW from http://www.ingber.com/ or via FTP from ftp.ingber.com.
$Id: smni_lecture,v 1.84 1998/11/30 13:59:32 ingber Exp ingber $

SMNI

Lester Ingber

Statistical Mechanics of Neocortical Interactions (SMNI)

SMNI RATIONALE - TOP DOWN
    Noninvasive Recordings of Brain Activity
    EEG Electrodes
    EEG Power Spectra
    Single Electrode Recording of Dipole Activity
    EEG of Mechanical String
    String Equation
    String Observables
SMNI DEVELOPMENT - BOTTOM UP
    Scales Illustrated
    SMNI vs Artificial Neural Networks
    Microscopic Neurons
    Mesoscopic Aggregation
    Mesoscopic Interactions
    Mathematical Development
    Inclusion of Macroscopic Circuitry
    Equivalent Nearest-Neighbor Interactions
    Minima Structure of Nonlinear Lagrangian
MATHEMATICAL AND NUMERICAL ASPECTS
    Induced Riemannian Geometry
    Calculation of Information
    Measures of Nonlinear Nonequilibrium
    Representations of Path Integral
    PATHINT Calculation of Path Integral
    Adaptive Simulated Annealing (ASA)
SMNI APPLICATIONS - STM
    Derivation of Short-Term Memory (STM)
    Centering Mechanism
    Applying the Centering Mechanism - Inhibitory State
    Contours of Inhibitory State
    Applying the Centering Mechanism - Excitatory State
    Contours of Excitatory State
    Applying the Centering Mechanism - Balanced State
    Contours of Balanced State
    Modeling Visual Cortex STM
    STM Stability and Duration
    PATHINT Calculations of STM
    PATHINT Calculations of STM - BC_VIS
    Primacy Versus Recency Rule
    40 Hz Models of STM
SMNI APPLICATIONS - EEG
    Local and Global EEG
    EEG Phenomena - Euler-Lagrange Approximation
    E-L Nearest-Neighbor Contours
    E-L Propagation of Information
    Macroscopic Linearization Aids Probability Development
    EEG Macrocolumnar Lagrangian
    EEG Variational Equation
    Macroscopic Coarse-Graining
    Development of Macrocolumnar EEG Distribution
    Development of EEG Dipole Distribution
    Key Indicators of EEG Correlates to Brain States
    Pilot Study - EEG Correlates to Behavioral States
    Sample Results - Total Group
    Lessons Learned
    Future Sources of Data
    Precursor Information of Epileptic Seizures
OTHER APPLICATIONS - MNN AND CHAOS
    Generic Mesoscopic Neural Networks
    Further Development of SMNI for MNN
    MNN Minicolumnar Interactions
    MNN Learning
    MNN Prediction
    MNN Parallel Processing
    Generic MNN
    Nonlinear Dynamics - Chaos?
    Duffing EEG Analog - Chaos in Noise
    Duffing EEG Analog - Preliminary Indications
SMNI CANONICAL MOMENTA INDICATORS - EEG
    Canonical Momenta Indicators (CMI)
    SMNI vs SMFM (SM of Financial Markets)
    SMNI CMI of Genetic Predisposition to Alcoholism
    Data vs SMNI CMI for Alcoholic Group - S1
    Data vs SMNI CMI for Control Group - S1
    Data vs SMNI CMI for Alcoholic Group - S2 Match
    Data vs SMNI CMI for Control Group - S2 Match
    Data vs SMNI CMI for Alcoholic Group - S2 No Match
    Data vs SMNI CMI for Control Group - S2 No Match
SMNI GENERIC FEATURES
    Scaling Paradigm
    Addressing Phenomena at Multiple Scales
    Increasing Signal to Noise/Audit Trail to Sources
    Generic Applications to Process Aggregated Information
    Utility, Parsimony, & Beauty

SMNI RATIONALE - TOP DOWN


Noninvasive Recordings of Brain Activity

There are several noninvasive experimental or clinical methods of recording brain activity, e.g.:
    electroencephalography (EEG)
    magnetoencephalography (MEG)
    magnetic resonance imaging (MRI)
    positron-emission tomography (PET)
    single-photon-emission-computed tomography (SPECT)
While MRI, PET, and SPECT offer better three-dimensional presentations of brain activity, EEG and MEG offer superior temporal resolutions on the order of neuronal relaxation times, i.e., milliseconds. Recently, it also has been shown that EEG and MEG offer comparable spatial resolutions on the order of several millimeters; a square millimeter is the approximate resolution of a macrocolumn, representing the activity of approximately 10^5 neurons. This is not quite the same as possessing the ability to discriminate among alternative choices of sets of dipoles giving rise to similar electric fields.


EEG Electrodes

A typical map of EEG electrode sites is given below. Many neuroscientists are becoming aware that higher electrode densities are required for many studies. For example, if each site below represented 5 closely spaced electrodes, a numerical Laplacian could offer relatively reference-free recordings and better estimates of localized sources of activity.


EEG Power Spectra

Limiting cases of linear (macroscopic) theories of intracortical interaction predict local wave phenomena and obtain dispersion relations with typical wave numbers k = 10 to 100 cm^-1 and dominant frequencies in the general range of human spontaneous EEG (1-20 Hz). However, human scalp potentials are spatially filtered by both distance and tissue between cortical current sources and surface electrodes, so that scalp EEG power is attenuated to about 1%. The alpha mode (9.5 Hz) is consistent with standing waves, whereas the mu mode (8.0 Hz) is more consistent with posterior-to-anterior traveling waves (along k_y) across the electrode array.

High-resolution estimate of the spectral density function |Φ(k, ω)|² for EEG recorded from an awake human subject (eyes closed) using 16 scalp recording sites over the right hemisphere. (a) Mu rhythm at 8.0 Hz. (b) Alpha rhythm at 9.5 Hz from the same one-minute record.

Figure courtesy of Paul Nunez


Single Electrode Recording of Dipole Activity

Macrocolumns may be considered as point sources of dipole-like interactions, mainly due to coherent current flow from top-layer afferent interactions to bottom-layer efferent interactions. However, there is a problem of nonuniqueness of the electric potential that arises from such source activity; Laplacian measurements can help to address this problem.

Figure courtesy of Paul Nunez


EEG of Mechanical String

The mechanical string has linear properties and is connected to local nonlinear oscillators. Local cortical dynamics in dipole layers is here considered analogous to the nonlinear mechanical oscillators which influence global modes. Macroscopic scalp potentials are analogous to the lower modes of string displacement. For purposes of illustration, a linear string with attached oscillators, e.g., nonlinear springs, may be compared to a one-dimensional strip of neocortex:

String Equation

The following equation describes the string displacement Φ:

$$\frac{\partial^2 \Phi}{\partial t^2} - c^2 \frac{\partial^2 \Phi}{\partial x^2} + [\omega_0^2 + f(\Phi)]\Phi = 0 ,$$

for a linear array (length l) of sensors (electrodes) of size s. Thus, wave-numbers in the approximate range

$$\frac{\pi}{l} \le k \le \frac{\pi}{s}$$

can be observed. If the center-to-center spacing of sensors is also s, l = Ms, where M = (number of sensors − 1); if k = 2πn/R for n = {1, 2, 3, . . .} (string forms closed loop), and sensors span half the string (brain), l = R/2, then 1 ≤ n ≤ M for some maximum M, which is on the order of 3 to 7 in EEG studies using 16 to 64 electrodes in two-dimensional arrays on the cortical surface. For scalp recordings, the wavenumber restriction is more severe. For example, a typical circumference of the neocortex following a coordinate in and out of fissures and sulci is R = 100 cm (about 50 cm along the scalp surface). If EEG power is mostly restricted to k < 0.5 cm^-1, only modes n < 4 are observed, independent of the number of electrodes. Theory should be able to be similarly filtered, e.g., in order to properly fit EEG data.

String Observables

The string displacement (potential Φ within the cortex) is given by

$$\Phi(x,t) = \sum_{n=1}^{\infty} G_n(t) \sin k_n x ,$$

but the observed Φ̂ is given by

$$\hat\Phi(x,t) = \sum_{n=1}^{M} G_n(t) \sin k_n x .$$

In the linear case, where f(Φ) = 0 (equal linear oscillators to simulate local circuit effects in cortical columns),

$$\frac{\partial^2 \Phi}{\partial t^2} - c^2 \frac{\partial^2 \Phi}{\partial x^2} + \omega_0^2 \Phi = 0 ,$$

$$\Phi = \sum_{n=1}^{\infty} A_n \cos \omega_n t \, \sin k_n x , \qquad \omega_n^2 = \omega_0^2 + c^2 k_n^2 ,$$

giving a dispersion relation ω_n(k_n). For the nonlinear case, f(Φ) ≠ 0, the restoring force of each spring is amplitude-dependent. In fact, local oscillators may undergo chaotic motion. What can be said about

$$\hat\Phi(x,t) = \sum_{n=1}^{M} G_n(t) \sin k_n x ,$$

the macroscopic observable displacement potential on the scalp or cortical surface? It would seem that Φ̂ should be described as a linear or quasi-linear variable, but influenced by the local nonlinear behavior which crosses the hierarchical level from mesoscopic (columnar dipoles) to macroscopic. How can this intuition be mathematically articulated, for the purposes of consistent description as well as to lay the foundation for detailed numerical calculations?
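The linear dispersion relation above can be evaluated numerically; a minimal sketch follows, using illustrative values for the string-mode speed c and the natural frequency f0 (these numbers are assumptions for demonstration, not values given in the text).

```python
import math

def dispersion(n, R=100.0, c=900.0, f0=10.0):
    """Angular frequency of mode n for the linear string analog:
    omega_n^2 = omega_0^2 + c^2 k_n^2, with k_n = 2*pi*n/R for a
    closed loop of circumference R (cm).  c (cm/s) and f0 (Hz) are
    illustrative values only."""
    k_n = 2.0 * math.pi * n / R
    omega0 = 2.0 * math.pi * f0
    return math.sqrt(omega0 ** 2 + (c * k_n) ** 2)

# Mode frequencies (Hz) rise monotonically with mode number n.
freqs = [dispersion(n) / (2.0 * math.pi) for n in range(1, 5)]
```

With these assumed values, only the lowest few modes fall near the spontaneous-EEG band, consistent with the wavenumber filtering discussed above.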

SMNI DEVELOPMENT - BOTTOM UP


Scales Illustrated

Illustrated are three biophysical scales of neocortical interactions: (a)-(a*)-(a′) microscopic neurons; (b)-(b′) mesocolumnar domains; (c)-(c′) macroscopic regions. In (a*) synaptic interneuronal interactions, averaged over by mesocolumns, are phenomenologically described by the mean and variance of a distribution Ψ. Similarly, in (a) intraneuronal transmissions are phenomenologically described by the mean and variance of Γ. Mesocolumnar averaged excitatory (E) and inhibitory (I) neuronal firings are represented in (a′). In (b) the vertical organization of minicolumns is sketched together with their horizontal stratification, yielding a physiological entity, the mesocolumn. In (b′) the overlap of interacting mesocolumns is sketched. In (c) macroscopic regions of neocortex are depicted as arising from many mesocolumnar domains. These are the regions designated for study here. (c′) sketches how regions may be coupled by long-ranged interactions.

SMNI vs Artificial Neural Networks

Feature                    SMNI                       ANN
spatial unit               minicolumn                 neuron
spatial scale              10^5 - 10^6 neurons        10^2 - 10^3 neurons
large sets of neurons      pre-analysis               post-analysis
neuronal features          statistical constraints    neuronal mechanisms
columnar features          columnar interactions      columnar formation
scalp EEG applications     yes                        probably not
intra-cortical EEG         yes                        yes
spike EEG                  no                         yes

While ANN models use simplified algebraic structures to represent real neurons, SMNI models develop the statistics of large numbers of realistic neurons representing huge numbers of synaptic interactions; there are 10^4 to 10^5 synapses per neuron. Furthermore, unlike most ANN approaches, SMNI constrains all its macrocolumnar-averaged parameters to experimentally determined ranges of synaptic and neuronal interactions; there are no unphysical parameters. The stochastic and nonlinear nature of SMNI development is directly derived from experimentally observed synaptic interactions and from the mathematical development of observed minicolumns and macrocolumns of neurons.


Microscopic Neurons

A derivation has been given of the physics of chemical inter-neuronal and electrical intra-neuronal interactions. This derivation generalized a previous similar derivation. The derivation yields a short-time probability distribution of a given neuron firing due to its just-previous interactions with other neurons. Within τ_j ≈ 5-10 msec, the conditional probability that neuron j fires (σ_j = +1) or does not fire (σ_j = −1), given its previous interactions with k neurons, is

$$p_{\sigma_j} \approx \frac{\exp(-\sigma_j F_j)}{\exp(F_j) + \exp(-F_j)} ,$$

$$F_j = \frac{V_j - \sum_k a_{jk} v_{jk}}{\left( \pi \sum_{k'} a_{jk'} (v_{jk'}^2 + \phi_{jk'}^2) \right)^{1/2}} ,$$

$$a_{jk} = \tfrac{1}{2} A_{jk} (\sigma_k + 1) + B_{jk} .$$

Γ represents the intra-neuronal probability distribution, e.g., of a contribution to polarization achieved at an axon given activity at a synapse, taking into account averaging over different neurons, geometries, etc. Υ represents the inter-neuronal probability distribution, e.g., of thousands of quanta of neurotransmitters released at one neuron's presynaptic site effecting a (hyper-)polarization at another neuron's postsynaptic site, taking into account interactions with neuromodulators, etc. This development is true for Γ Poisson, and for Υ Poisson or Gaussian. V_j is the depolarization threshold in the somatic-axonal region, v_jk is the induced synaptic polarization of E or I type at the axon, and φ_jk is its variance. The efficacy a_jk, related to the inverse conductivity across synaptic gaps, is composed of a contribution A_jk from the connectivity between neurons, which is activated if the impinging k-neuron fires, and a contribution B_jk from spontaneous background noise. Even at the microscopic scale of an individual neuron, with soma ≈ 10 μm, this conceptual framework assumes a great deal of statistical aggregation of molecular scales of interaction, e.g., of the biophysics of membranes, of thickness ≈ 5 × 10^-3 μm, composed of biomolecular leaflets of phospholipid molecules.


Mesoscopic Aggregation

This microscopic scale itself represents a high aggregation of sub-microscopic scales, aggregating effects of tens of thousands of quanta of chemical transmitters as they influence membranes of thickness ≈ 5 × 10^-3 μm. This microscopic scale is aggregated up to the mesoscopic scale, using

$$P_q(q) = \int dq_1 \, dq_2 \, P_{q_1 q_2}(q_1, q_2) \, \delta\big(q - (q_1 + q_2)\big) .$$

The first papers using this approach to aggregate neuronal interactions appeared in Journal of Social and Biological Structures in 1981, in Physica D in 1982, and in Physical Review A in 1983. The SMNI approach can be developed without recourse to borrowing paradigms or metaphors from other disciplines. Rather, in the course of a logical, nonlinear, stochastic development of aggregating neuronal and synaptic interactions to larger and larger scales, opportunities are taken to use techniques of mathematical physics to overcome several technical hurdles. After such development, advantage can be taken of associated collateral descriptions and intuitions afforded by such mathematical and physics techniques as they have been used in other disciplines, but paradigms and metaphors do not substitute for logical SMNI development.
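The delta-function aggregation above has a simple discrete analog: summing two component variables and collecting probability mass on their sum. A minimal sketch, using toy two-point distributions that are purely illustrative:

```python
from itertools import product

def aggregate(P1, P2):
    """Discrete analog of P_q(q) = int dq1 dq2 P(q1,q2) delta(q-(q1+q2)),
    for independent components P(q1,q2) = P1[q1]*P2[q2].
    Returns a dict mapping aggregated value q -> probability."""
    out = {}
    for (q1, p1), (q2, p2) in product(P1.items(), P2.items()):
        out[q1 + q2] = out.get(q1 + q2, 0.0) + p1 * p2
    return out

P1 = {0: 0.5, 1: 0.5}     # toy distribution of component q1
P2 = {0: 0.25, 1: 0.75}   # toy distribution of component q2
Pq = aggregate(P1, P2)    # distribution of the summed variable q
```

The aggregated distribution remains normalized, which is the content of the delta-function constraint.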

Mesoscopic Interactions

Microscopic Scale
    Retain independence of excitatory (E) and inhibitory (I) interactions
    Retain nonlinear development of probability densities
Mesoscopic Scale
    Convergence <-> Divergence: minicolumnar <-> macrocolumnar
    Nearest-neighbor (NN) interactions summarize N16 N interactions
Macroscopic Scale
    Include long-ranged interactions, constraints on mesocolumns

For the purposes of mesoscopic and macroscopic investigation, this biological picture can be cast into an equivalent network. However, the above aspects must not be simply cast away.


Mathematical Development

A derived mesoscopic Lagrangian L_M defines the short-time probability distribution of firings in a minicolumn, composed of ≈ 10^2 neurons, given its just-previous interactions with all other neurons in its macrocolumnar surround. G is used to represent excitatory (E) and inhibitory (I) contributions. G′ designates contributions from both E and I.

$$P_M = \prod_G P_M^G [M^G(r; t+\tau) \mid \bar M^{G'}(r'; t)]
= \sum_{\sigma_j} \delta\!\left( \sum_{j \in E} \sigma_j - M^E(r; t+\tau) \right) \delta\!\left( \sum_{j \in I} \sigma_j - M^I(r; t+\tau) \right) \prod_j^N p_{\sigma_j}$$

$$\approx \prod_G (2\pi\tau g^{GG})^{-1/2} \exp(-N\tau L_M^G) ,$$

$$P_M \approx (2\pi\tau)^{-1/2} g^{1/2} \exp(-N\tau L_M) ,$$

$$L_M = L_M^E + L_M^I = (2N)^{-1} (\dot M^G - g^G) g_{GG'} (\dot M^{G'} - g^{G'}) + M^G J_G / (2N\tau) - V' ,$$

$$V' = \sum_G V''^G_{G'} (\rho \nabla M^{G'})^2 ,$$

$$g^G = -\tau^{-1} (M^G + N^G \tanh F^G) ,$$

$$g^{GG'} = (g_{GG'})^{-1} = \delta_{G'}^G \, \tau^{-1} N^G \operatorname{sech}^2 F^G , \qquad g = \det(g_{GG'}) ,$$

$$F^G = \frac{V^G - a^{|G|}_{G'} v^{|G|}_{G'} N^{G'} - \tfrac{1}{2} A^{|G|}_{G'} v^{|G|}_{G'} M^{G'}}
{\left( \pi \left[ (v^{|G|}_{G'})^2 + (\phi^{|G|}_{G'})^2 \right] \left( a^{|G|}_{G'} N^{G'} + \tfrac{1}{2} A^{|G|}_{G'} M^{G'} \right) \right)^{1/2}} ,$$

$$a^G_{G'} = \tfrac{1}{2} A^G_{G'} + B^G_{G'} ,$$

where A^G_{G'} and B^G_{G'} are minicolumnar-averaged inter-neuronal synaptic efficacies, and v^G_{G'} and φ^G_{G'} are averaged means and variances of contributions to neuronal electric polarizations. M^{G'} and N^{G'} in F^G are afferent macrocolumnar firings, scaled to efferent minicolumnar firings by N/N* ≈ 10^-3, where N* is the number of neurons in a macrocolumn, ≈ 10^5. Similarly, A^G_{G'} and B^G_{G'} have been scaled by N*/N ≈ 10^3 to keep F^G invariant.
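The derived drift g^G and diffusion g^{GG} can be sketched for a single firing variable M^G; the diagonal one-variable case below is a simplification of the full two-variable Lagrangian, and the numeric arguments are illustrative, not the text's fitted parameters.

```python
import math

def smni_drift_and_lagrangian(M, Mdot, N, F, tau=0.005):
    """One-variable sketch of the SMNI mesoscopic dynamics:
    drift g^G = -(M + N*tanh(F))/tau, diffusion g^{GG} = N*sech^2(F)/tau,
    and the diagonal Lagrangian L = (Mdot - g^G)^2 / (2*N*g^{GG}),
    since the metric g_GG is the inverse of g^{GG}."""
    gG = -(M + N * math.tanh(F)) / tau
    gGG = N / (math.cosh(F) ** 2 * tau)
    L = (Mdot - gG) ** 2 / (2.0 * N * gGG)
    return gG, gGG, L
```

At the centered point M = 0, F = 0 the drift vanishes and the Lagrangian attains its minimum L = 0, which is the intuition behind the centering mechanism discussed later.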


Inclusion of Macroscopic Circuitry

The most important features of this development are described by the Lagrangian L^G in the negative of the argument of the exponential describing the probability distribution, and the threshold factor F^G describing an important sensitivity of the distribution to changes in its variables and parameters. To more properly include long-ranged fibers, when it is possible to numerically include interactions among macrocolumns, the J_G terms can be dropped, and more realistically replaced by a modified threshold factor F^G,

$$F^G = \frac{V^G - a^{|G|}_{G'} v^{|G|}_{G'} N^{G'} - \tfrac{1}{2} A^{|G|}_{G'} v^{|G|}_{G'} M^{G'} - a^{E'}_{E} v^{E'}_{E} N^{E'} - \tfrac{1}{2} A^{E'}_{E} v^{E'}_{E} M^{E'}}
{\left( \pi \left[ (v^{|G|}_{G'})^2 + (\phi^{|G|}_{G'})^2 \right] \left( a^{|G|}_{G'} N^{G'} + \tfrac{1}{2} A^{|G|}_{G'} M^{G'} + a^{E'}_{E} N^{E'} + \tfrac{1}{2} A^{E'}_{E} M^{E'} \right) \right)^{1/2}} ,$$

$$a^{E'}_{E} = \tfrac{1}{2} A^{E'}_{E} + B^{E'}_{E} .$$

Here, afferent contributions from N^{E'} long-ranged excitatory fibers, e.g., cortico-cortical neurons, have been added, where N^{E'} might be on the order of 10% of N: Of the approximately 10^10 to 10^11 neocortical neurons, estimates of the number of pyramidal cells range from 1/10 to 2/3. Nearly every pyramidal cell has an axon branch that makes a cortico-cortical connection; i.e., the number of cortico-cortical fibers is of the order 10^10. This is further modified for use in MNN.


Equivalent Nearest-Neighbor Interactions

Nearest-neighbor (NN) interactions between mesocolumns are illustrated. Afferent minicolumns of ≈ 10^2 neurons are represented by the inner circles, and efferent macrocolumns of ≈ 10^5 neurons by the outer circles. Illustrated are the NN interactions between a mesocolumn, represented by the thick circles, and its nearest neighbors, represented by thin circles. The area outside the outer thick circle represents the effective number of efferent macrocolumnar nearest-neighbor neurons, i.e., the number of neurons outside the macrocolumnar area of influence of the central minicolumn. This approximation, albeit successful in the 1983 Physical Review A calculations, can be replaced by a more sophisticated algorithm, MNN, published in Physical Review A Rapid Communications in 1992.


Minima Structure of Nonlinear Lagrangian

Examination of the minima structure of the spatially-averaged and temporally-averaged Lagrangian provides some quick intuitive details about most likely states of the system. This is supported by further analysis detailing the actual spatial-temporal minima structure. Illustrated is the surface of the static (time-independent) mesoscopic neocortical Lagrangian L over the excitatory-inhibitory firing plane (M^E, M^I), for a specific set of synaptic parameters. All points on the surface higher than 5 × 10^-3/τ have been deleted to expose this fine structure.

MATHEMATICAL AND NUMERICAL ASPECTS


Induced Riemannian Geometry

A Riemannian geometry is derived as a consequence of nonlinear noise, reflecting that the probability distribution is invariant under general nonlinear transformations of these variables. This becomes explicit under a transformation to the midpoint discretization, in which the standard rules of differential calculus hold for the same distribution:

$$\bar M^G(t_s) = \tfrac{1}{2} (M^G_{s+1} + M^G_s) , \qquad \dot{\bar M}^G(t_s) = (M^G_{s+1} - M^G_s)/\theta ,$$

$$P = \int \cdots \int \bar D M \, \exp\!\left( -\sum_{s=0}^{u} \theta L_{Fs} \right) ,$$

$$\bar D M = g^{1/2}_{0+} (2\pi\theta)^{-1/2} \prod_{s=1}^{u} g^{1/2}_{s+} \prod_{G} (2\pi\theta)^{-1/2} \, dM^G_s ,$$

$$\int dM^G_s \to \sum_{\alpha=1}^{N^G} \Delta M^G_{\alpha s} , \qquad M^G_0 = M^G_{t_0} , \qquad M^G_{u+1} = M^G_t ,$$

$$L_F = \tfrac{1}{2} (\dot{\bar M}^G - h^G) g_{GG'} (\dot{\bar M}^{G'} - h^{G'}) + \tfrac{1}{2} h^G_{\ ;G} + R/6 - V ,$$

$$[\ldots]_{,G} = \frac{\partial [\ldots]}{\partial \bar M^G} , \qquad h^G = g^G - \tfrac{1}{2} g^{-1/2} (g^{1/2} g^{GG'})_{,G'} , \qquad g_{GG'} = (g^{GG'})^{-1} ,$$

$$g_s[\bar M^G(t_s), t_s] = \det(g_{GG'})_s , \qquad g_{s+} = g_s[\bar M^G_{s+1}, t_s] ,$$

$$h^G_{\ ;G} = h^G_{,G} + \Gamma^F_{GF} h^G = g^{-1/2} (g^{1/2} h^G)_{,G} ,$$

$$\Gamma^F_{JK} \equiv g^{LF} [JK, L] = g^{LF} (g_{JL,K} + g_{KL,J} - g_{JK,L}) ,$$

$$R = g^{JL} R_{JL} = g^{JL} g^{JK} R_{FJKL} ,$$

$$R_{FJKL} = \tfrac{1}{2} (g_{FK,JL} - g_{JK,FL} - g_{FL,JK} + g_{JL,FK}) + g_{MN} (\Gamma^M_{FK} \Gamma^N_{JL} - \Gamma^M_{FL} \Gamma^N_{JK}) .$$

Calculation of Information

Information is well defined in terms of a path integral:

$$I = \int \bar D M \; \bar P \, \ln(\bar P / \hat P) ,$$

with respect to a reference distribution P̂. Like the probability distribution P̄ which defines it, information also is an invariant under general nonlinear transformations. Some investigators have tried to directly fit statistical white-noise models of mutual information under different electrodes. SMNI first fits observables (electric potentials or current flows) to probability distributions, describing inter-electrode interactions via long-ranged fibers and intra-electrode interactions via short-ranged fibers, before calculating the information. For example, sensory cortex may transmit information to motor cortex, although they have somewhat different neuronal structures or neuronal languages. This information flow can be relatively independent of information flows that take place at finer resolutions, e.g., across a subset of synaptic gaps or individual neurons. The mathematics required to process these stochastic nonlinear forms, describing multivariate, multiplicative-noise, Gaussian Markovian systems, was developed in the late 1970s in the mathematical physics literature. This neocortical system was the first physical system to utilize these techniques. Since then, they have been used for studies in nuclear physics (Physical Review C & D 1984-1986), finance (Physical Review A 1990), and large-scale DoD simulations (Mathematical and Computer Modelling 1991).
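In the discrete case the path-integral information reduces to the relative entropy between a distribution and its reference; a minimal sketch with toy distributions:

```python
import math

def information(P, Pref):
    """Discrete analog of I = int DM P ln(P/P_hat) with respect to a
    reference distribution P_hat: the relative entropy (Kullback-Leibler
    divergence).  P and Pref are lists of probabilities over the same states."""
    return sum(p * math.log(p / q) for p, q in zip(P, Pref) if p > 0.0)

I = information([0.7, 0.3], [0.5, 0.5])  # information relative to a flat reference
```

By Gibbs' inequality this quantity is nonnegative and vanishes only when the two distributions coincide, mirroring the invariance statement above.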

Measures of Nonlinear Nonequilibrium

$$\text{Momentum:} \quad \Pi^G = \frac{\partial L}{\partial (\partial M^G / \partial t)} ,$$

$$\text{Mass:} \quad g_{GG'} = \frac{\partial^2 L}{\partial (\partial M^G / \partial t) \, \partial (\partial M^{G'} / \partial t)} ,$$

$$\text{Force:} \quad \frac{\partial L}{\partial M^G} ,$$

$$F = ma: \quad \delta L = 0 = \frac{\partial L}{\partial M^G} - \frac{\partial}{\partial t} \frac{\partial L}{\partial (\partial M^G / \partial t)} ,$$

where M^G are the variables and L is the Lagrangian. These physical entities provide another form of intuitive, but quantitatively precise, presentation of these analyses.
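For the one-variable Lagrangian L = (Ṁ − f)²/(2ĝ²) used later in the path-integral representation, these measures have closed forms; a minimal sketch (the example arguments are arbitrary illustrative numbers):

```python
def momentum(Mdot, f, g2):
    """Canonical momentum Pi = dL/dMdot for L = (Mdot - f)^2/(2*g2):
    Pi = (Mdot - f)/g2.  This is the quantity later used as a
    canonical-momenta indicator (CMI)."""
    return (Mdot - f) / g2

def mass(g2):
    """'Mass' = d^2 L / dMdot^2 = 1/g2, the inverse diffusion."""
    return 1.0 / g2
```

The momentum measures how far the observed velocity departs from the drift, in units set by the noise; large |Π| flags statistically unusual motion.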


Representations of Path Integral

The Langevin rate equation exhibits a stochastic equation, wherein drifts can be arbitrarily nonlinear functions, and multiplicative noise is added:

$$M(t + \Delta t) - M(t) \approx \Delta t \, f[M(t)] ,$$

$$\dot M = \frac{dM}{dt} \approx f ,$$

$$\dot M = f + \hat g \eta , \qquad \langle \eta(t) \rangle = 0 , \qquad \langle \eta(t) \, \eta(t') \rangle = \delta(t - t') .$$

The diffusion equation is another equivalent representation of Langevin equations. The first-moment drift is identified as f, and the second-moment diffusion, the variance, is identified as ĝ²:

$$\frac{\partial P}{\partial t} = -\frac{\partial (f P)}{\partial M} + \frac{1}{2} \frac{\partial^2 (\hat g^2 P)}{\partial M^2} .$$

The path-integral Lagrangian represents yet another equivalent representation of Langevin equations. Recently it has been demonstrated that the drift and diffusion, in addition to possibly being quite general nonlinear functions of the independent variables and of time explicitly, may also be explicit functions of the distribution P itself:

$$P[M_{t + \Delta t} \mid M_t] = (2\pi \hat g^2 \Delta t)^{-1/2} \exp(-\Delta t \, L) ,$$

$$L = (\dot M - f)^2 / (2 \hat g^2) ,$$

$$P[M_t \mid M_{t_0}] = \int \cdots \int dM_{t - \Delta t} \, dM_{t - 2\Delta t} \cdots dM_{t_0 + \Delta t} \, P[M_t \mid M_{t - \Delta t}] \, P[M_{t - \Delta t} \mid M_{t - 2\Delta t}] \cdots P[M_{t_0 + \Delta t} \mid M_{t_0}] ,$$

$$P[M_t \mid M_{t_0}] = \int \cdots \int DM \, \exp\!\left( -\sum_{s=0}^{u} \Delta t \, L_s \right) ,$$

$$DM = (2\pi \hat g_0^2 \Delta t)^{-1/2} \prod_{s=1}^{u} (2\pi \hat g_s^2 \Delta t)^{-1/2} \, dM_s ,$$

$$\int dM_s \to \sum_{\alpha} \Delta M_{\alpha s} , \qquad M_0 = M_{t_0} , \qquad M_{u+1} = M_t .$$

This representation is useful for fitting stochastic data to parameters in L.
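The equivalence of these representations can be checked numerically for a toy linear case: simulating the Langevin equation with drift f = −aM (an Ornstein-Uhlenbeck process, chosen for illustration since its diffusion-equation moments are known in closed form). All parameter values below are assumptions for demonstration.

```python
import math
import random

def langevin_ou(M0=1.0, a=1.0, g=0.5, dt=0.001, steps=1000, seed=0):
    """Euler-Maruyama integration of Mdot = f + g*eta with linear drift
    f = -a*M.  For this case the diffusion equation gives the analytic
    mean <M(t)> = M0 * exp(-a*t)."""
    rng = random.Random(seed)
    M = M0
    for _ in range(steps):
        M += -a * M * dt + g * math.sqrt(dt) * rng.gauss(0.0, 1.0)
    return M

# Ensemble average over sample paths approaches the analytic first moment.
t = 1.0  # = dt * steps
paths = [langevin_ou(seed=s) for s in range(2000)]
mean = sum(paths) / len(paths)
analytic = 1.0 * math.exp(-1.0 * t)
```

The agreement of the ensemble mean with the diffusion-equation moment is the practical content of the first two representations being equivalent.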


PATHINT Calculation of Path Integral

Given a form for L, we use the path integral to calculate the long-time distribution of variables. This is impossible in general to calculate in closed form, and we therefore must use numerical methods. PATHINT is a code developed for calculating highly nonlinear multivariate Lagrangians. The path-integral calculation of the long-time distribution, in addition to being a predictor of upcoming information, provides an internal check that the system can be well represented as a nonlinear Gaussian-Markovian system. The use of the path integral to compare different models is akin to comparing short- and long-time correlations. Complex boundary conditions can be cleanly incorporated into this representation, using a variant of boundary-element techniques. The histogram procedure recognizes that the distribution can be numerically approximated to a high degree of accuracy as a sum of rectangles at points M_i of height P_i and width ΔM_i. For convenience, just consider a one-dimensional system. The above path-integral representation can be rewritten, for each of its intermediate integrals, as

$$P(M; t + \Delta t) = \int dM' \left[ g_s^{1/2} (2\pi \Delta t)^{-1/2} \exp(-L_s \Delta t) \right] P(M'; t) = \int dM' \, G(M, M'; \Delta t) \, P(M'; t) ,$$

$$P(M; t) = \sum_{i=1}^{N} \pi(M - M_i) \, P_i(t) ,$$

$$\pi(M - M_i) = 1 , \quad M_i - \tfrac{1}{2} \Delta M_{i-1} \le M \le M_i + \tfrac{1}{2} \Delta M_i ; \qquad \pi(M - M_i) = 0 , \quad \text{otherwise} ,$$

which yields

$$P_i(t + \Delta t) = T_{ij}(\Delta t) \, P_j(t) ,$$

$$T_{ij}(\Delta t) = \frac{2}{\Delta M_{i-1} + \Delta M_i} \int_{M_i - \Delta M_{i-1}/2}^{M_i + \Delta M_i/2} dM \int_{M_j - \Delta M_{j-1}/2}^{M_j + \Delta M_j/2} dM' \, G(M, M'; \Delta t) .$$
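The histogram propagation can be sketched in a few lines for a toy one-dimensional drift. This is not the PATHINT code itself: it uses a uniform grid and samples the short-time Gaussian kernel at bin centers, a cruder quadrature than the bin-integrated T_ij above, and the drift/diffusion values are illustrative.

```python
import math

def pathint_step(P, grid, f, g2, dt):
    """One histogram step P_i(t+dt) = sum_j T_ij P_j(t) with the
    short-time kernel G(M, M'; dt) =
    (2*pi*g2*dt)**-0.5 * exp(-(M - M' - f(M')*dt)**2 / (2*g2*dt))."""
    dM = grid[1] - grid[0]
    norm = 1.0 / math.sqrt(2.0 * math.pi * g2 * dt)
    Pnew = []
    for Mi in grid:
        acc = 0.0
        for Mj, pj in zip(grid, P):
            drift = Mj + f(Mj) * dt
            acc += norm * math.exp(-(Mi - drift) ** 2 / (2.0 * g2 * dt)) * pj * dM
        Pnew.append(acc)
    s = sum(Pnew) * dM
    return [p / s for p in Pnew]  # renormalize against edge leakage

grid = [-3.0 + 0.1 * i for i in range(61)]
P = [0.0] * 61
P[30] = 10.0  # delta-like start at M = 0 (density 1/dM)
for _ in range(50):
    P = pathint_step(P, grid, lambda M: -M, g2=0.5, dt=0.05)
```

For this linear-drift toy case the long-time histogram settles toward the known Gaussian stationary distribution, which is exactly the kind of internal check mentioned above.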


Adaptive Simulated Annealing (ASA)

This algorithm fits empirical data to a theoretical cost function over a D-dimensional parameter space, adapting for varying sensitivities of parameters during the fit. This algorithm was first published in Mathematical and Computer Modelling in 1989, and made publicly available in November 1992. Heuristic arguments have been developed to demonstrate that this algorithm is faster than the fast Cauchy annealing, T_i = T_0/k, and much faster than Boltzmann annealing, T_i = T_0/ln k. For parameters

$$\alpha^i_k \in [A_i, B_i] ,$$

sampling with the random variable x^i,

$$x^i \in [-1, 1] , \qquad \alpha^i_{k+1} = \alpha^i_k + x^i (B_i - A_i) ,$$

define the generating function

$$g_T(x) = \prod_{i=1}^{D} \frac{1}{2 \ln(1 + 1/T_i) \, (|x^i| + T_i)} \equiv \prod_{i=1}^{D} g^i_T(x^i) ,$$

in terms of parameter "temperatures"

$$T_i = T_{i0} \exp(-c_i k^{1/D}) .$$

The cost functions L under consideration are of the form

$$h(M; \alpha) = \exp(-L/T) , \qquad L = L \Delta t + \tfrac{1}{2} \ln(2\pi \Delta t \, g_t^2) ,$$

where L is a Lagrangian with dynamic variables M(t), and parameter-coefficients α to be fit to data. g_t is the determinant of the metric, and T is the cost "temperature." For several test problems, ASA has been shown to be orders of magnitude more efficient than other similar techniques, e.g., genetic algorithms. ASA has been applied to several complex systems, including specific problems in neuroscience, finance, and combat systems.
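The generating distribution above can be sampled with the standard ASA inversion formula, y = sgn(u − 1/2) T [(1 + 1/T)^|2u−1| − 1] for uniform u; the sketch below combines it with the exponential schedule, with T0, c, k, D, and the parameter range all chosen for illustration.

```python
import math
import random

def asa_sample(alpha, A, B, T, rng):
    """Draw a new parameter from the ASA generating distribution:
    y = sgn(u - 1/2) * T * ((1 + 1/T)**abs(2u - 1) - 1),
    alpha' = alpha + y*(B - A), redrawn until it lands inside [A, B]."""
    while True:
        u = rng.random()
        y = math.copysign(T * ((1.0 + 1.0 / T) ** abs(2.0 * u - 1.0) - 1.0),
                          u - 0.5)
        new = alpha + y * (B - A)
        if A <= new <= B:
            return new

def asa_temperature(T0, c, k, D):
    """Annealing schedule T_i(k) = T_i0 * exp(-c_i * k**(1/D))."""
    return T0 * math.exp(-c * k ** (1.0 / D))

rng = random.Random(1)
T = asa_temperature(1.0, c=1.0, k=100, D=2)
samples = [asa_sample(0.0, -1.0, 1.0, T, rng) for _ in range(500)]
```

At low temperature most draws cluster near the current parameter while the fat tails still permit occasional long jumps, which is what lets ASA outpace Cauchy and Boltzmann schedules.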


SMNI APPLICATIONS - STM


Derivation of Short-Term Memory (STM)

At this mesoscopic scale, properties of STM (its capacity, duration, and stability) have been calculated, and found to be consistent with empirical observations. The first publications on this approach to STM appeared in Physical Review A in 1984. The maximum STM capacity, consistent with the 7 ± 2 rule, is obtained when a "centering mechanism" is invoked. This occurs when the threshold factor F^G takes minima in the interior of M^G firing-space (i.e., not the corners of this space), as empirically observed. In the SMNI papers, the background noise B^G_{G'} was reasonably adjusted to center F^G, with J_G = 0, but similar results could have been obtained by adjusting the influence of the long-ranged fibers M^{E'}. Within a time scale of several seconds, the human brain can store only about 7 ± 2 auditory chunks of information (4 ± 2 visual chunks). To derive this, choose empirical ranges of synaptic parameters corresponding to a predominately excitatory case (EC), a predominately inhibitory case (IC), and a balanced case (BC) in between. For each case, also consider a centering mechanism (EC′, IC′, BC′), whereby some synaptic parameter is internally manipulated, e.g., by some chemical neuromodulation or imposition of patterns of firing, such that there is a maximal efficiency of matching of afferent and efferent firings: M^G ≈ M^{*G} ≈ 0. This sets conditions on other possible minima of the static Lagrangian L.


Centering Mechanism

The centering effect is quite easy for the neocortex to accommodate. For example, this can be accomplished simply by readjusting the synaptic background noise from B^G_E to B′^G_E,

$$B'^G_E = \frac{V^G - \left( \tfrac{1}{2} A^G_I + B^G_I \right) v^G_I N^I - \tfrac{1}{2} A^G_E v^G_E N^E}{v^G_E N^E} ,$$

for both G = E and G = I. This is modified straightforwardly when regional influences from M^{E'} are included, as used in MNN. In general, B^G_E and B^G_I (and possibly A^G_E and A^G_I, due to actions of neuromodulators, and J_G or M^{E'} constraints from long-ranged fibers) are available to force the constant in the numerator to zero, giving an extra degree(s) of freedom to this mechanism. In this context, it is experimentally observed that the synaptic sensitivity of neurons engaged in selective attention is altered, presumably by the influence of chemical neuromodulators on postsynaptic neurons. The threshold factors greatly influence when and how smoothly the step functions tanh F^G(t) in g^G(t) change M^G(t) to M^G(t + Δt). I.e., assuming the drifts are a major driving force,

$$M(t + \Delta t) - M(t) \approx -\frac{\Delta t}{\tau} \left( M^G(t) + N^G \tanh F^G(t) \right) ,$$

together with Δt ≈ τ, can be used to approximately describe the influence on efferent firings from their afferent inputs; i.e., M(t + Δt) ≈ −N^G tanh F^G(t) can be used as a first approximation. An important side result is to drive most probable states, i.e., small L, which is driven largely by small F^G, to regions where

$$v^G_E A^G_E M^E \approx |v^G_I| A^G_I M^I .$$

Since I-I efficacies typically are relatively quite small, the probability density under the centering mechanism is strongly peaked along the line

$$v^E_E A^E_E M^E \approx |v^E_I| A^E_I M^I .$$
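The centering adjustment can be verified mechanically: with B′_E computed from the formula above, the constant (M = 0) part of the threshold-factor numerator vanishes by construction. A minimal sketch, with parameter values that are merely illustrative (similar in spirit to, but not claimed to be, the text's IC/EC sets):

```python
def centered_background(V, A_I, B_I, v_I, N_I, A_E, v_E, N_E):
    """Centering mechanism: choose B'_E so the constant part of the
    threshold-factor numerator vanishes at M^G = 0:
    B'_E = (V - (A_I/2 + B_I)*v_I*N_I - (A_E/2)*v_E*N_E) / (v_E*N_E)."""
    return (V - (0.5 * A_I + B_I) * v_I * N_I
              - 0.5 * A_E * v_E * N_E) / (v_E * N_E)

# Illustrative values; v_I < 0 since inhibitory polarizations are negative.
V, A_I, B_I, v_I, N_I = 10.0, 10.0, 1.0, -0.1, 30
A_E, v_E, N_E = 5.0, 0.1, 80
Bp = centered_background(V, A_I, B_I, v_I, N_I, A_E, v_E, N_E)

# With B'_E in place, the M = 0 numerator V - sum_G' a_G' v_G' N_G' is zero,
# using a_G' = A_G'/2 + B_G'.
numerator = (V - (0.5 * A_I + B_I) * v_I * N_I
               - (0.5 * A_E + Bp) * v_E * N_E)
```

Zeroing this constant moves the minima of F^G into the interior of firing space, which is the condition for maximal STM capacity cited above.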


Applying the Centering MechanismInhibitory State A model of dominant inhibition describes how minicolumnar rings are suppressed by their neighboring minicolumns. For example, the averaged effect is established by inhibitory mesocolumns (IC) by setting I E E * A E = A I = 2 A E = 0. 01N /N . Since there appears to be relatively little I I connectivity, set A I = 0. 0001N * /N . The background synaptic noise is taken to be I I B E = B E = 2B E = 10B I = 0. 002N * /N . As nonvisual minicolumns are observed to I E I have 110 neurons and as there appear to be a predominance of E over I neurons, here take N E = 80 and N I = 30. Use N * /N = 103 , J G = 0 (absence of long-ranged G interactions), and V G , v G , and G as estimated previously, i.e., V G = 10 mV, G G G |v G | = 0. 1 mV, G = 0. 1 mV. The threshold factors F IC for this IC model are G then
F^E_IC = (0.5 M^I − 0.25 M^E + 3.0) / (π^{1/2} (0.1 M^I + 0.05 M^E + 9.80)^{1/2}) ,

F^I_IC = (0.005 M^I − 0.5 M^E − 45.8) / (π^{1/2} (0.001 M^I + 0.1 M^E + 11.2)^{1/2}) .

F^I_IC will cause efferent M^I(t + Δt) to fire for most afferent input firings, as it will be positive for most values of M^G(t) in F^I_IC, which is already weighted heavily with a term −45.8. Looking at F^E_IC, it is seen that the relatively high positive weight of afferent M^I requires at least moderate values of positive afferent M^E to cause firings of efferent M^E, diminishing the influence of M^I.

Using the centering mechanism, B′^E_E = 1.38 and B′^I_I = 15.3, and F^G_IC is transformed to F′^G_IC,

F′^E_IC = (0.5 M^I − 0.25 M^E) / (π^{1/2} (0.1 M^I + 0.05 M^E + 10.4)^{1/2}) ,

F′^I_IC = (0.005 M^I − 0.5 M^E) / (π^{1/2} (0.001 M^I + 0.1 M^E + 20.4)^{1/2}) .
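As a numeric cross-check, the sketch below (Python; the helper function, names, and the evaluation grid are illustrative, not part of the SMNI codes) evaluates the IC threshold factors before and after centering. By construction the centered factors vanish at M^G = 0, while the uncentered F^I_IC remains strongly negative over the physical firing range, so efferent M^I fires (drift ≈ −N^I tanh F^I > 0) for most afferent inputs.

```python
import math

def F(M_E, M_I, nI, nE, n0, dI, dE, d0):
    # threshold factor: (nI*M^I + nE*M^E + n0) / (pi^(1/2) * (dI*M^I + dE*M^E + d0)^(1/2))
    num = nI * M_I + nE * M_E + n0
    den = math.sqrt(math.pi * (dI * M_I + dE * M_E + d0))
    return num / den

# coefficients as given above for the IC model, before and after centering
def F_E_IC(ME, MI):  return F(ME, MI, 0.5, -0.25, 3.0, 0.1, 0.05, 9.80)
def F_I_IC(ME, MI):  return F(ME, MI, 0.005, -0.5, -45.8, 0.001, 0.1, 11.2)
def F_E_ICc(ME, MI): return F(ME, MI, 0.5, -0.25, 0.0, 0.1, 0.05, 10.4)
def F_I_ICc(ME, MI): return F(ME, MI, 0.005, -0.5, 0.0, 0.001, 0.1, 20.4)

# scan the physical firing range -80 <= M^E <= 80, -30 <= M^I <= 30:
# the uncentered F^I_IC never becomes positive, i.e., M^I always tends to fire
worst = max(F_I_IC(ME, MI) for ME in range(-80, 81, 10) for MI in range(-30, 31, 10))
```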

Contours of Inhibitory State
Contours of the Lagrangian illustrate valleys that trap firing-states of mesocolumns. (L can be as large as 10^3.) No interior stable states are observed at scales of L ranging from 10^3 down to 10^−2, until the centering mechanism is turned on.

Applying the Centering Mechanism – Excitatory State
The other extreme of normal neocortical firings is a model of dominant excitation, effected by establishing excitatory mesocolumns (EC) by using the same parameters {B^G_{G′}, v^G_{G′}, φ^G_{G′}, A^I_I} as in the IC model, but setting A^E_E = 2A^E_I = 2A^I_E = 0.01 N*/N. This yields
F^E_EC = (0.25 M^I − 0.5 M^E − 24.5) / (π^{1/2} (0.05 M^I + 0.10 M^E + 12.3)^{1/2}) ,

F^I_EC = (0.005 M^I − 0.25 M^E − 25.8) / (π^{1/2} (0.001 M^I + 0.05 M^E + 7.24)^{1/2}) .

The negative constants in the numerators of F^G_EC enhance efferent firings for both E and I afferent inputs. However, the increased coefficient of M^E in F^E_EC (e.g., relative to its corresponding value in F^E_IC), and the fact that M^E can range up to N^E = 80, readily enhance excitatory relative to inhibitory firings throughout most of the range of M^E. This is only a first approximation, and the full Lagrangian must be used to determine the actual evolution.

Using the centering mechanism, B′^E_E = 10.2 and B′^I_I = 8.62, and F^G_EC is transformed to F′^G_EC,

F′^E_EC = (0.25 M^I − 0.5 M^E) / (π^{1/2} (0.05 M^I + 0.10 M^E + 17.2)^{1/2}) ,

F′^I_EC = (0.005 M^I − 0.25 M^E) / (π^{1/2} (0.001 M^I + 0.05 M^E + 12.4)^{1/2}) .

Contours of Excitatory State
Contours of the Lagrangian illustrate valleys that trap firing-states of mesocolumns. (L can be as large as 10^3.) No interior stable states are observed at scales of L ranging from 10^3 down to 10^−2, until the centering mechanism is turned on.

Applying the Centering Mechanism – Balanced State
Now it is natural to examine a balanced case intermediate between IC and EC, labeled BC. This is accomplished by changing A^E_E = A^E_I = A^I_E = 0.005 N*/N. This yields
F^E_BC = (0.25 M^I − 0.25 M^E − 4.50) / (π^{1/2} (0.050 M^I + 0.050 M^E + 8.30)^{1/2}) ,

F^I_BC = (0.005 M^I − 0.25 M^E − 25.8) / (π^{1/2} (0.001 M^I + 0.050 M^E + 7.24)^{1/2}) .

Here the constant in the numerator of F^E_BC, while still negative to promote efferent firings, is much greater than that in F^E_EC, thereby decreasing the net excitatory activity to a more moderate level. A similar argument applies in comparing F^I_BC to F^I_IC, permitting a moderate level of inhibitory firing.

Applying the centering mechanism to BC, B′^E_E = 0.438 and B′^I_I = 8.62, and F^G_BC is transformed to F′^G_BC,

F′^E_BC = (0.25 M^I − 0.25 M^E) / (π^{1/2} (0.050 M^I + 0.050 M^E + 7.40)^{1/2}) ,

F′^I_BC = (0.005 M^I − 0.25 M^E) / (π^{1/2} (0.001 M^I + 0.050 M^E + 12.4)^{1/2}) .

Contours of Balanced State
No interior stable states are observed at scales of L ranging from 10^3 down to 10^−2, until the centering mechanism is turned on.

(a) Contours for values less than 0.04 are drawn for L_BC. The M^E axis increases to the right, from −N^E = −80 to N^E = 80. The M^I axis increases to the right, from −N^I = −30 to N^I = 30. In each cluster, the smaller values are closer to the center. (b) Contours for values less than 0.04 are drawn for the centered L′_BC.

Modeling Visual Cortex STM
When N = 220, modeling the number of neurons per minicolumn in visual neocortex, then only 5−6 minima are found, consistent with visual STM. These minima are narrower, consistent with the sharpness required to store visual patterns.

STM Stability and Duration
The attractors of these models can be identified. Possible hysteresis and/or jumps between local minima can be explicitly calculated within the limitations of studying specific attractors. Detailed calculations identify the inner valleys of the parabolic trough with stable short-term-memory states having durations on the order of tenths of a second.

Stability is investigated by δM̈^G ≈ −N² L_{,GG} δM^G. Therefore, minima of the static Lagrangian L are minima of the dynamic transient system defined by L. The time of first passage is calculated as

t_vp ≈ π N² [ |L_{,GG}(<<M>>_p)| L_{,GG}(<<M>>_v) ]^{−1/2} exp { C N τ [ L(<<M>>_p) − L(<<M>>_v) ] } .

For L ≈ 10^−2, the only values found for all three cases of firing, the time of first passage t_vp is found to be several tenths of a second for jumps among most minima, up to 9. There is hysteresis for deeper valleys at the 10th−11th minima of L_FBC at the corners of the M^G plane. The hysteresis occurs in about a few minutes, which is too long to affect the 7±2 rule. This result is exponentially sensitive to N in C N τ L, and exponentially sensitive to (N* N)^{1/2} in F^G, the threshold factor. Use is made in later development of EEG analyses of the discovered nature of the line of stable minima lying in a deep parabolic trough, across a wide range of cases of extreme types of firings.
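The exponential sensitivity to N can be seen directly from the exponential factor of the first-passage formula. In the sketch below, C, τ, and the Lagrangian difference are placeholder values, not SMNI fits; only the scaling behavior is illustrated.

```python
import math

def t_vp_exp_factor(N, C, tau, dL):
    # exponential factor of the first-passage time, exp{ C N tau [L(p) - L(v)] }
    return math.exp(C * N * tau * dL)

# placeholder values: doubling the number of neurons N squares this factor,
# so modest changes in N move first-passage times across orders of magnitude
f110 = t_vp_exp_factor(110, 1.0, 1.0, 0.05)
f220 = t_vp_exp_factor(220, 1.0, 1.0, 0.05)
```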

PATHINT Calculations of STM
PATHINT is an important partner to the ASA code. ASA has made it possible to perform fits of complex nonlinear short-time probability distributions to EEG data. PATHINT details the evolution of the attractors of these short-time distributions, e.g., as studied in 1984. Now, using ASA, the parameters of the fitted SMNI distribution can be used to determine a distribution of firings in a short initial time epoch of EEG. Then, PATHINT can be used to predict the evolution of the system, possibly to predict oncoming states, e.g., epileptic seizures of patients baselined to a fitted distribution.
Below is the evolution of model BC at 0.01 seconds = τ, after 100 foldings of the path integral. In agreement with previous studies, models BC and BC_VIS support multiple stable states in the interior physical firing M^G-space for time scales of a few tenths of a second. Models EC and IC do not possess these attributes.
[Figure: PATHINT STM BC, t = 1 (BCP_001): probability density over the M^E−M^I firing plane; contour levels 0.00764−0.0382.]
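The folding operation itself can be sketched in one dimension: the short-time conditional probability P[x′|x] ∝ exp(−L Δt) is applied repeatedly to an initial distribution. The drift, diffusion, grid, and time step below are toy stand-ins for the SMNI coefficients, chosen only to show a distribution relaxing toward an attractor under repeated foldings.

```python
import math

# one-dimensional PATHINT-style folding with a toy restoring drift toward x = 0
dt, g2 = 0.01, 1.0
dx = 0.1
grid = [i * dx for i in range(-50, 51)]

def drift(x):
    return -x  # illustrative drift; the attractor sits at x = 0

def propagator(xp, x):
    # short-time Gaussian conditional probability density P[xp | x]
    var = 2.0 * g2 * dt
    return math.exp(-((xp - x - drift(x) * dt) ** 2) / var) / math.sqrt(math.pi * var)

# start well away from the attractor
p = [1.0 if abs(x - 3.0) < 0.05 else 0.0 for x in grid]
for _ in range(100):  # 100 foldings of the path integral
    p = [sum(propagator(xp, x) * px for x, px in zip(grid, p)) * dx for xp in grid]
    norm = sum(p) * dx
    p = [v / norm for v in p]

# the mean has relaxed from 3 toward the attractor, roughly 3*(1 - dt)^100
mean = sum(x * v for x, v in zip(grid, p)) * dx
```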

PATHINT Calculations of STM BC_VIS
The interior of M^G-space of model BC_VIS is examined at 0.01 seconds = τ.
[Figure: PATHINT STM BC_VIS, t = 1 (BCP_VIS_001): probability density over the M^E−M^I firing plane; contour levels 0.00494−0.0247.]
These high-resolution calculations were published in Ingber and Nunez, Physical Review E (1995).

Primacy Versus Recency Rule
SMNI also presents an explanation, or at least an upper statistical constraint, on the primacy versus recency rule observed in serial processing. First-learned items are recalled most error-free, and last-learned items are still more error-free than those in the middle. I.e., the deepest minima are more likely first accessed, while the more recent memories or newer patterns have synaptic parameters most recently tuned or are more actively rehearsed.
Note that for visual cortex, presentation of 7±2 items would have memories distributed among different clusters, and therefore the recency effect should not be observed. Instead, the 4±2 rule should dictate the number of presented items.

40 Hz Models of STM
An alternate theory of STM, based on after-depolarization (ADP) at synaptic sites, has been proposed.

Feature                   SMNI                        ADP
7±2 Rule                  attractors of L             40 Hz subcycles
4±2 Rule                  attractors of visual L      ?
Primacy versus Recency    statistics of attractors    ?
Large-Scale Influences    consistent with EEG         ?
Duration                  local interactions          neuromodulators

ADP proposes a refresher mechanism of 40 Hz to sustain memories for time scales on the order of tenths of a second, within cycles of 5−12 Hz, even under the influence of long-ranged regional firings and neuromodulators. SMNI PATHINT calculations show a rapid deterioration of attractors in the absence of external influences. ADP and SMNI together forge a stronger theory of STM than either separately.

SMNI APPLICATIONS – EEG

Local and Global EEG
The derived mesoscopic dispersion relations also are consistent with other global macroscopic dispersion relations, described by long-range fibers interacting across regions. This SMNI model yields oscillatory solutions consistent with the alpha rhythm, i.e., ω ≈ 10² sec⁻¹, equivalent to ν = ω/(2π) ≈ 16 Hz.
This suggests that these complementary local and global theories may be confluent, considered as a joint set of dispersion relations evolving from the most likely trajectories of a joint Lagrangian, referred to as the "equations of motion," but linearly simplified in neighborhoods of minima of the stationary Lagrangian.
These two approaches, i.e., local mesocolumnar versus global macrocolumnar, give rise to important alternative conjectures:
(1) Is the EEG a global resonance of primarily long-ranged cortical interactions? If so, can relatively short-ranged local firing patterns effectively modulate this frequency and its harmonics, to enhance their information processing across macroscopic regions?
(2) Or, does global circuitry imply boundary conditions on collective mesoscopic states of local firing patterns, and is the EEG a manifestation of these collective local firings?
(3) Or, is the truth some combination of (1) and (2) above? For example, the possibility of generating EEG rhythms from multiple mechanisms at multiple scales of interactions, e.g., as discussed above, may account for weakly damped oscillatory behavior in a variety of physiological conditions.
This theory has allowed the local and global approaches to complement each other at a common level of formal analysis, i.e., yielding the same dispersion relations derived from the "equations of motion," analogous to (forces) = d(momentum)/dt describing mechanical systems.

EEG Phenomena – Euler-Lagrange Approximation
The variational principle permits derivation of the Euler-Lagrange equations. These equations are then linearized about a given local minimum to investigate oscillatory behavior. This calculation was first published in Physical Review A in 1983.
Here, long-ranged constraints in the form of Lagrange multipliers J_G were used to efficiently search for minima, corresponding to roots of the Euler-Lagrange equations. This illustrates how macroscopic constraints can be imposed on the mesoscopic and microscopic systems.

0 = δL_F = L_{F,G:t} − δ_G L_F ≈ −f_{|G|} ∂²_t M^{|G|} + f^1_G ∂_t M^{G′} − g_{|G|} ∇²M^{|G|} + b_{|G|} M^{|G|} + b M^{G′} ,

M^G ≡ M^G − <<M^G>> ,

M^G = Re M^G_osc exp[i(ξ · r − ωt)] ,

M^G_osc(r, t) = ∫ d²ξ dω M^G_osc(ξ, ω) exp[i(ξ · r − ωt)] ,

ωτ = { −1.86 + 2.38 (ξρ)² ; −1.25i + 1.51i (ξρ)² } , ξ = |ξ| .

It is calculated that

ω ≈ 10² sec⁻¹ ,

which is equivalent to

ν = ω/(2π) ≈ 16 cps (Hz) .

This is approximately within the experimentally observed ranges of the alpha and beta frequencies.

E-L Nearest-Neighbor Contours
A numerical calculation of the coefficients g^G of the nearest-neighbor interactions, (∇M^G)², shows that SMNI can support/describe both spreading activation of firings as well as local containment of firings. The g^G terms are responsible for the spatial dependence of the EEG dispersion relations.

E-L Propagation of Information
The propagation velocity v is calculated from

v = dω/dξ ≈ 1 cm/sec , ξ ≈ 30ρ ,

which tests the NN interactions, the weakest part of this theory. Thus, within 10⁻¹ sec, short-ranged interactions over several minicolumns of 10⁻¹ cm may simultaneously interact with long-ranged interactions over tens of cm, since the long-ranged interactions are speeded by myelinated fibers and have velocities of 600−900 cm/sec. In other words, interactions among different neocortical modalities, e.g., visual, auditory, etc., may simultaneously take place within the same time scales, as observed.
This propagation velocity is consistent with the observed movement of attention and with the observed movement of hallucinations across the visual field, which moves at ≈ 1/2 mm/sec, about 5 times as slow as v. (I.e., the observed movement is ≈ 8 msec/°, and a macrocolumn of ≈ mm extent processes ≈ 180° of visual field.)

Macroscopic Linearization Aids Probability Development
The fitting of the full SMNI probability distribution to EEG data was published in Physical Review A in 1991.
Previous STM studies have detailed that the predominant physics of short-term memory and of (the short-fiber contribution to) EEG phenomena takes place in a narrow parabolic trough in M^G-space, roughly along a diagonal line. I.e., L can vary by as much as 10^5 from the highest peak to the lowest valley in M^G-space. Therefore, it is reasonable to assume that a single independent firing variable might offer a crude description of this physics. Furthermore, the scalp potential Φ can be considered to be a function of this firing variable. In an abbreviated notation subscripting the time-dependence,

Φ_t − <<Φ>> ≈ Φ(M^E_t, M^I_t) ≈ a(M^E_t − <<M^E>>) + b(M^I_t − <<M^I>>) ,

where a and b are constants of the same sign, and <<Φ>> and <<M^G>> represent a minimum in the trough.
Laplacian techniques help to better localize sources of activity, and thereby present data more suitable for modeling. E.g., Φ then is more directly related to columnar firings, instead of representing the electric potential produced by such activity. This determines an SMNI approach to study EEG under conditions of selective attention.

EEG Macrocolumnar Lagrangian
Again, aggregation is performed,

P_Φ[Φ_{t+Δt}|Φ_t] = ∫ dM^E_{t+Δt} dM^I_{t+Δt} dM^E_t dM^I_t P_M[M^E_{t+Δt}, M^I_{t+Δt}|M^E_t, M^I_t] δ[Φ_{t+Δt} − Φ(M^E_{t+Δt}, M^I_{t+Δt})] δ[Φ_t − Φ(M^E_t, M^I_t)] .

Under conditions of selective attention, within the parabolic trough along a line in M^G space, the parabolic shapes of the multiple minima, ascertained by the stability analysis, justify a form

P_Φ = (2πσ² dt)^{−1/2} exp[ −(dt/2σ²) ∫ dx L_Φ ] ,

L_Φ = (1/2) |∂Φ/∂t|² − (1/2) c² |∂Φ/∂x|² − (1/2) ω_0² |Φ|² − F(Φ) ,

where F(Φ) contains nonlinearities away from the trough, and where σ² is on the order of 1/N, given the derivation of L ≈ L_M above.

EEG Variational Equation
Previous calculations of EEG phenomena showed that the (short-fiber contribution to the) alpha frequency and the movement of attention across the visual field are consistent with the assumption that the EEG physics is derived from an average over the fluctuations of the system. I.e., this is described by the Euler-Lagrange equations derived from the variational principle possessed by L_Φ, more properly by the "midpoint-discretized" L_Φ, with its Riemannian terms. Hence,

0 = (∂/∂t) ∂L_Φ/∂(∂Φ/∂t) + (∂/∂x) ∂L_Φ/∂(∂Φ/∂x) − ∂L_Φ/∂Φ .

When expressed in the firing state variables, this leads to the same results published in Physical Review A in 1983. The result for the Φ equation is:

∂²Φ/∂t² − c² ∂²Φ/∂x² + ω_0² Φ + ∂F/∂Φ = 0 .

If the identification

∂F/∂Φ = Φ f(Φ)

is made, then

∂²Φ/∂t² − c² ∂²Φ/∂x² + [ω_0² + f(Φ)] Φ = 0

is recovered, i.e., the dipole-like string equation.
The previous application of the variational principle was at the scale of minicolumns and, with the aid of nearest-neighbor interactions, the spatial-temporal Euler-Lagrange equation gave rise to dispersion relations consistent with STM experimental observations. Here, the scale of interactions is at the macrocolumnar level, and spatial interactions must be developed taking into account specific regional circuitries.
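The string equation above can be integrated directly; the sketch below evolves the linear case f(Φ) = 0 with a standard leapfrog scheme and fixed ends. The wave speed, frequency, grid, and initial mode are illustrative values, not fitted EEG parameters; the point is only that a single spatial mode oscillates with bounded amplitude, as expected of a dipole-like string.

```python
import math

# phi_tt - c^2 phi_xx + omega0^2 phi = 0 on [0, Lx] with phi = 0 at both ends
nx, Lx, c, omega0 = 101, 1.0, 1.0, 2.0 * math.pi
dx = Lx / (nx - 1)
dt = 0.5 * dx / c  # CFL-stable time step

phi = [math.sin(math.pi * i * dx / Lx) for i in range(nx)]  # fundamental mode
phi_prev = phi[:]                                           # zero initial velocity
max_amp = 0.0
for _ in range(400):
    phi_next = [0.0] * nx  # endpoints stay pinned at zero
    for i in range(1, nx - 1):
        lap = (phi[i + 1] - 2.0 * phi[i] + phi[i - 1]) / dx ** 2
        phi_next[i] = (2.0 * phi[i] - phi_prev[i]
                       + dt ** 2 * (c ** 2 * lap - omega0 ** 2 * phi[i]))
    phi_prev, phi = phi, phi_next
    max_amp = max(max_amp, max(abs(v) for v in phi))
```

The mode oscillates at frequency (ω_0² + c²k²)^{1/2} and the leapfrog scheme conserves its amplitude to within discretization error.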

Macroscopic Coarse-Graining
Now the issue posed previously, how to mathematically justify the intuitive coarse-graining of Φ to get Φ′, can be approached.
In L_Φ above, consider terms of the form

∫ Φ² dx = ∫ dx Σ_n Σ_m G_n G_m sin k_n x sin k_m x
        = Σ_n Σ_m G_n G_m ∫ dx sin k_n x sin k_m x
        = (2π/R) Σ_n G_n² .

By similarly considering all terms in L_Φ, a short-time probability distribution for the change in node n is defined,

p_n[G_n(t + Δt)|G_n(t)] .

Note that in general the F(Φ) term in L_Φ will require coupling between G_n and G_m, n ≠ m. This defines

P_Φ = p_1 p_2 ⋯ p_Λ .

Now a coarse-graining can be defined that satisfies some physical and mathematical rigor:

P′_Φ = ∫ dk_{M+1} dk_{M+2} ⋯ dk_Λ p_1 p_2 ⋯ p_M p_{M+1} p_{M+2} ⋯ p_Λ .

I.e., since SMNI is developed in terms of bona fide probability distributions, variables which are not observed can be integrated out. The integration over the fine-grained wave-numbers tends to smooth out the influence of the k_n's for n > M, effectively renormalizing

G_n → G′_n , Φ → Φ′ , L_Φ → L′_Φ .

This development shows how this probability approach to EEG specifically addresses experimental issues at the scale of the more phenomenological dipole model.
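The collapse of the double sum rests on the orthogonality of the sine modes. The quadrature sketch below checks this numerically; the region length, mode convention k_n = nπ/R, and mode indices are illustrative choices (the diagonal normalization constant depends on the k_n convention used).

```python
import math

# midpoint-rule quadrature of  integral_0^R sin(k_n x) sin(k_m x) dx
R = 1.0
N = 2000
dx = R / N
xs = [(i + 0.5) * dx for i in range(N)]

def overlap(n, m):
    kn, km = n * math.pi / R, m * math.pi / R
    return sum(math.sin(kn * x) * math.sin(km * x) for x in xs) * dx

cross = overlap(3, 5)  # cross term: vanishes, so only the diagonal sum survives
diag = overlap(4, 4)   # diagonal term: R/2 for this k_n convention
```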

Development of Macrocolumnar EEG Distribution
Advantage can be taken of the prepoint discretization, where the postpoint M^G(t + Δt) moments are given by

m ≡ <Φ_ν − φ> = a<M^E> + b<M^I> = a g^E + b g^I ,

σ² ≡ <(Φ_ν − φ)²> − m² = a² g^{EE} + b² g^{II} .

Note that the macroscopic drifts and diffusions of the Φ's are simply linearly related to the mesoscopic drifts and diffusions of the M^G's. For the prepoint M^G(t) firings, the same linear relationship in terms of {φ, a, b} is assumed.
The data being fit are consistent with invoking the centering mechanism. Therefore, for the prepoint M^E(t) firings, advantage is taken of the nature of the parabolic trough derived for the STM Lagrangian, and

M^I(t) = c M^E(t) ,

where the slope c is determined for each electrode site. This permits a complete transformation from M^G variables to Φ variables.
Similarly, as appearing in the modified threshold factor F^G, each regional influence from electrode site μ acting at electrode site ν, given by afferent firings M^{‡E}, is taken as

M^{‡E}_ν = d_ν M^E_μ(t − T_{μν}) ,

where d_ν are constants to be fitted at each electrode site, and T_{μν} is the delay time estimated for inter-electrode signal propagation, typically on the order of one to several multiples of τ = 5 msec. In future fits, some experimentation will be performed, taking the T's as parameters.
This defines the conditional probability distribution for the measured scalp potential Φ_ν,

P_ν[Φ_ν(t + Δt)|Φ_ν(t)] = (2πσ²Δt)^{−1/2} exp(−L_ν Δt) ,

L_ν = (1/2σ²)(dΦ_ν/dt − m)² .

The probability distribution for all electrodes is taken to be the product of all these distributions:

P = Π_ν P_ν , L = Σ_ν L_ν .
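In a fit, this per-electrode Lagrangian acts as a negative log-likelihood accumulated over time epochs. The sketch below evaluates that cost for a sample path; the drift m, variance σ², and the path itself are illustrative placeholders, not values fitted from EEG data.

```python
import math

def neg_log_likelihood(phi, m, sigma2, dt):
    # sum over epochs of L_nu * dt plus the log-prefactor of P_nu
    nll = 0.0
    for t in range(len(phi) - 1):
        dphi_dt = (phi[t + 1] - phi[t]) / dt
        L = (dphi_dt - m) ** 2 / (2.0 * sigma2)
        nll += L * dt + 0.5 * math.log(2.0 * math.pi * sigma2 * dt)
    return nll

dt = 0.005
path = [0.1 * t * dt for t in range(11)]  # a noiseless path with drift 0.1
good = neg_log_likelihood(path, 0.1, 1.0, dt)  # matching drift
bad = neg_log_likelihood(path, 5.0, 1.0, dt)   # mismatched drift scores worse
```

Minimizing this cost over the distribution parameters (here, m and σ²) is exactly the maximum-likelihood fit performed with ASA in the studies described below.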

Development of EEG Dipole Distribution
The model SMNI, derived for P[M^G(t + Δt)|M^G(t)], is for a macrocolumnar-averaged minicolumn; hence it is expected to be a reasonable approximation to represent a macrocolumn, scaled to its contribution to Φ. Hence L is used to represent this macroscopic regional Lagrangian, scaled from its mesoscopic mesocolumnar counterpart L. However, the expression for P uses the dipole assumption to also use this expression to represent several to many macrocolumns present in a region under an electrode: A macrocolumn has a spatial extent of about a millimeter. Often most data represent a resolution more on the order of up to several centimeters, i.e., many macrocolumns.
A scaling is tentatively assumed, to use the expression for the macrocolumnar distribution for the electrode distribution, and to see if the fits are consistent with this scaling. One argument in favor of this procedure is that it is generally acknowledged that only a small fraction of firings, those that fire coherently, are responsible for the observed activity being recorded.
The results obtained here seem to confirm that this approximation is in fact quite reasonable. For example, for the nonvisual neocortex, taking the extreme of permitting only unit changes in M^G firings, it seems reasonable to always be able to map the observed electric potential values Φ from a given electrode onto a mesh a fraction of 4N^E N^I ≈ 10^4.
It is expected that the use of MNN will make this scaling approximation unnecessary.

Key Indicators of EEG Correlates to Brain States
The SMNI probability distribution can be used directly to model EEG data, instead of using just the variational equations. Some important features not previously considered in this field that were used in the 1991 Physical Review A paper were:
Intra-Electrode Coherency is determined by the standard deviations of excitatory and inhibitory firings under a given electrode as calculated using SMNI. Once the SMNI parameters are fit, these firings are calculated as transformations on the EEG data, as described in terms of the SMNI-derived probability distributions. This is primarily a measure of coherent columnar activity.
Inter-Electrode Circuitry is determined by the fraction of available long-ranged fibers under one electrode which actively contribute to activity under another electrode, within the resolution of time given in the data (which is typically greater than or equal to the relative refractory time of most neurons, about 5−10 msec). This is primarily a measure of inter-regional activity/circuitry. Realistic delays can be modeled and fit to data.
The electrical potential of each electrode, labeled by G, is represented by its dipole-like nature, M^G(t), which is influenced by its underlying columnar activity as well as its interactions with other electrodes, M^{G′}, G′ ≠ G. This can be expressed as:

dM^G/dt = g^G + g^G_i η^i ,

g^G = −τ^{−1} (M^G + N^G tanh F^G) ,

g^G_i = (N^G/τ)^{1/2} sech F^G ,

F^G = (V^G − a^{|G|}_{G′} v^{|G|}_{G′} N^{G′} − (1/2) A^{|G|}_{G′} v^{|G|}_{G′} M^{G′} − a^{‡E}_{E′} v^E_{E′} N^{‡E′} − (1/2) A^{‡E}_{E′} v^E_{E′} M^{‡E′}) / (π^{1/2} { [(v^{|G|}_{G′})² + (φ^{|G|}_{G′})²] (a^{|G|}_{G′} N^{G′} + (1/2) A^{|G|}_{G′} M^{G′} + a^{‡E}_{E′} N^{‡E′} + (1/2) A^{‡E}_{E′} M^{‡E′}) }^{1/2}) .

The equivalent Lagrangian is used for the actual fits.

Pilot Study – EEG Correlates to Behavioral States
In a 1991 paper, sets of EEG data, given to the author by Henri Begleiter, Neurodynamics Laboratory at the State University of New York Health Center at Brooklyn, were obtained from subjects while they were reacting to pattern-matching "odd-ball"-type tasks requiring varying states of selective attention taxing their short-term memory.
Based on psychiatric and family-history evaluations, 49 subjects were classified into two groups, 25 possibly having high-risk and 24 possibly having low-risk genetic propensities to alcoholism.
After each subject's data set, representing averages over 190 points of EEG data, was fitted to its probability distribution, the data were again filtered through the fitted Lagrangian, and the mean and mean-square values of M^G were recorded as they were calculated from Φ. Then, the groups' averages and standard deviations were calculated, the latter simply from

{ [<(M^G)²> − <M^G>²] n/(n − 1) }^{1/2} ,

where n = 49. This procedure gives the means and standard deviations of the effective firings, M^G, aggregated from all subjects under each electrode, as well as the weights d of the time-delayed inter-electrode inputs M^{‡E}.
Although M^G were permitted to roam throughout their physical ranges of ±N^E = ±80 and ±N^I = ±30 (in the nonvisual neocortex, true for all these regions), their observed effective regional-averaged firing states were observed to obey the centering mechanism. I.e., this numerical result is consistent with the assumption that the most likely firing states are centered about the region M^G ≈ 0 ≈ M^{‡E} in F^G.
Fitted parameters were used to calculate equivalent columnar firing states and time delays between regions. No statistical differences were observed between the total group, the high-risk group, and the low-risk group.
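The quoted standard deviation applies the n/(n − 1) sample correction to the raw second moment. The sketch below (illustrative data, not the study's values) confirms the formula against the standard sample estimator.

```python
import math

def sample_sd(values):
    # { [ <M^2> - <M>^2 ] * n/(n-1) }^(1/2), the corrected group standard deviation
    n = len(values)
    mean = sum(values) / n
    mean_sq = sum(v * v for v in values) / n
    return math.sqrt((mean_sq - mean ** 2) * n / (n - 1))

vals = [1.0, 2.0, 4.0, 7.0]  # illustrative per-subject averages
s = sample_sd(vals)          # population variance 5.25, corrected by 4/3 -> 7
```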

Sample Results – Total Group
TABLE I. Means and standard deviations of averages over EEG recordings from 49 subjects, representing 190 points of data per subject, are consistent with the centering mechanism during selective attention tasks. Under each electrode the means and standard deviations of M^G are given, as well as the means and standard deviations of the individual-subject standard deviations, here labeled σ, aggregated from each subject. Also given are the weights d of the regional time-delayed contributions d M^{‡E}. The physical bounds for all M^E and M^{‡E} under these nonvisual regions are ±N^E = ±N^{‡E} = ±80; the physical bounds for all M^I are ±N^I = ±30.

Electrode   Mean M^E     S.D. M^E   Mean σ(M^E)   S.D. σ(M^E)
Fz           1.13661      3.8754     6.09339       5.30891
Cz          -0.533493     4.83208    6.31146       5.59003
Pz          -0.3158       3.87471    5.44242       5.50453
P3          -0.121703    10.1069     8.152         7.08701
P4          -0.0208276    7.47837   11.0526        7.04522

Electrode   Mean M^I     S.D. M^I   Mean σ(M^I)   S.D. σ(M^I)
Fz           1.83249      7.11368   11.4131        2.71679
Cz           0.229446     5.89307   11.5578        2.68969
Pz          -0.255393     6.37452   12.4699        2.86198
P3          -0.0234756    7.39736   10.5579        3.2693
P4          -0.0271411    6.25121   12.0525        2.52846

Electrodes     Mean d     S.D. d
Fz → Cz        0.389722   0.291677
Cz → Pz        0.377305   0.266958
Pz → P3        0.536313   0.288519
Pz → P4        0.485525   0.294742

Lessons Learned
Future studies will use data obtained by Laplacian recording techniques. E.g., patches of 5 closely-spaced electrodes are used to determine a reasonable underlying source of activity.
The previous study used data collected under the assumptions that there is a genetic predisposition to alcoholism, and that this predisposition could be correlated to EEG activity. These assumptions were negated by the SMNI study: E.g., there were no statistical differences in intra-electrode coherencies or in inter-electrode circuitry, or in any other parameter, between the two groups.
Especially in light of other studies, it seems that if such a predisposition exists, it is a multifactorial issue that requires a very large subject population to resolve the many parameters, more than was available for this EEG study.

Future Sources of Data
Laplacian techniques help to better localize sources of activity, and thereby become more suitable for the MNN modeling. By virtue of Poisson's equation,

∇ · [σ(r) ∇Φ(r, t)] = s(r, t) ,

where σ(r) is the tissue conductivity, s(r, t) is the macrocolumnar-averaged microsource, typically ≈ 0.1−1 μA/mm³, and Φ(r, t) is the micropotential. Thus, the Laplacian of the EEG potential, directly related to s, presents an EEG variable directly related to columnar firings. E.g., Φ, instead of representing the EEG electric potential, then is a direct measure of current flows described by temporal changes in firings M^G.
There is at least one problem associated with the use of Laplacian filtering, a second-order differentiation process. Differentiation generally tends to emphasize the noise contributions owing to their typically sharper curvatures. As in similar modeling projects in combat analyses and finance, this can be alleviated by using the path-integral Lagrangian to determine the proper meshes. Then, this resolution of data must be available for further modeling.
A recent 1996 study used evoked potential (EP) data from a multi-electrode array under a variety of conditions. The earlier 1991 study used only averaged EP data. As the SMNI model suggests, EEG noise likely possesses non-constant structure developed from the statistical mechanics of neocortical interactions, and the model should be fit directly to the single-sweep data to be able to extract the maximum signal.
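The caveat that second-order differentiation emphasizes noise can be seen directly: for uncorrelated noise of variance s², the discrete second difference x[i−1] − 2x[i] + x[i+1] has variance (1 + 4 + 1)s² = 6s². The Monte Carlo sketch below (illustrative white-noise input, not EEG data) checks this factor of 6.

```python
import random

random.seed(0)
n = 200000
noise = [random.gauss(0.0, 1.0) for _ in range(n)]

# discrete Laplacian (second difference) of the noise sequence
second_diff = [noise[i - 1] - 2.0 * noise[i] + noise[i + 1] for i in range(1, n - 1)]

var_in = sum(v * v for v in noise) / n
var_out = sum(v * v for v in second_diff) / len(second_diff)
# var_out / var_in approaches 1^2 + (-2)^2 + 1^2 = 6 for white noise
```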

Precursor Information of Epileptic Seizures
Improve Temporal Prediction of Seizures: If these SMNI techniques can find patterns of such upcoming activity some time before the trained eye of the clinician, then the costs of time and pain in preparation for surgery can be reduced. This project will determine inter-electrode and intra-electrode activities prior to spike activity, to determine likely electrode circuitries highly correlated to the onset of seizures. This can only do better than simple averaging or filtering of such activity, as typically used as input to determine dipole locations of activity prior to the onset of seizures.
Improve Spatial Resolution: If a subset of electrode circuitries are determined to be highly correlated to the onset of seizures, then their associated regions of activity can be used as a first approximation of underlying dipole sources of brain activity affecting seizures. This first approximation may be better than using a spherical head model to deduce such a first guess. Such first approximations can then be used for more realistic dipole source modeling, including the actual shape of the brain surface, to determine likely localized areas of diseased tissue.

OTHER APPLICATIONS – MNN AND CHAOS

Generic Mesoscopic Neural Networks
SMNI describes a reasonable mechanism for information processing in neocortex at columnar scales. Modern stochastic calculus permits development of alternative descriptions of path-integral Lagrangians, Fokker-Planck equations, and Langevin rate equations. The induced Riemannian geometry affords invariance of the probability distribution under general nonlinear transformations.
ASA presents a powerful global optimization that has been tested in a variety of problems defined by nonlinear Lagrangians. Parallel-processing computations can be applied to ASA as well as to a neural-network architecture.
This MNN algorithm was published in Physical Review A Rapid Communications in 1992.

Further Development of SMNI for MNN
While the development of nearest-neighbor interactions into a potential term V′ was useful to explore local EEG dispersion relations, for present purposes this is not necessary and, as permitted in the development of SMNI, minicolumnar interactions with firings M^{†G} are simply incorporated into F^G:

F^G = (V^G − v^{|G|}_{G′} T^{|G|}_{G′}) / (π^{1/2} { [(v^{|G|}_{G′})² + (φ^{|G|}_{G′})²] T^{|G|}_{G′} }^{1/2}) ,

T^{|G|}_{G′} = a^{|G|}_{G′} N^{G′} + (1/2) A^{|G|}_{G′} M^{G′} + a^{†|G|}_{G′} N^{†G′} + (1/2) A^{†|G|}_{G′} M^{†G′} + a^{‡|G|}_{G′} N^{‡G′} + (1/2) A^{‡|G|}_{G′} M^{‡G′} ,

a^G_{G′} = (1/2) A^G_{G′} + B^G_{G′} ,

A^{‡I}_E = A^{‡E}_I = A^{‡I}_I = B^{‡I}_E = B^{‡E}_I = B^{‡I}_I = 0 ,

a^{‡E}_E = (1/2) A^{‡E}_E + B^{‡E}_E .

M^G represent firings internal to a given minicolumn. M^{†G} represent firings among minicolumns within a macrocolumn. M^{‡E} represent firings among minicolumns in macrocolumns under different electrodes (only G = E firings exist).

MNN Minicolumnar Interactions
Minicolumnar interactions are represented across three scales: intra-macrocolumnar, within a given macrocolumn; intra-regional and inter-macrocolumnar, within a given region and between macrocolumns; and inter-regional, between regions. The large solid circles represent regions, the intermediate long-dashed circles represent macrocolumns, and the small short-dashed circles represent minicolumns. MNN permits an overlap in the scales being investigated by SMNI and ANN.

MNN Learning
Learning takes place by presenting the MNN with data, and parametrizing the data in terms of the firings, or multivariate M^G "spins." The "weights," or coefficients of functions of M^G appearing in the drifts and diffusions, are fit to incoming data, considering the joint "effective" Lagrangian (including the logarithm of the prefactor in the probability distribution) as a dynamic cost function. The cost function is a sum of effective Lagrangians from each node and over each time epoch of data.
This program of fitting coefficients in the Lagrangian uses methods of adaptive simulated annealing (ASA). This maximum-likelihood procedure (statistically) avoids problems of trapping in local minima, as experienced by other types of gradient and regression techniques.

MNN Prediction
Prediction takes advantage of a mathematically equivalent representation of the Lagrangian path-integral algorithm, i.e., a set of coupled Langevin rate equations. The Ito (prepoint-discretized) Langevin equation is analyzed in terms of the Wiener process dW^i, which is rewritten in terms of Gaussian noise η^i = dW^i/dt in the limit:
    M^G(t + Δt) - M^G(t) = dM^G = g^G Δt + ĝ^G_i dW^i ,
    dM^G/dt = g^G + ĝ^G_i η^i ,
    M = { M^G ; G = 1, ..., Λ } ,
    η = { η^i ; i = 1, ..., N } ,
    < η^j(t) > = 0 ,
    < η^j(t), η^j'(t') > = δ^jj' δ(t - t') .
Moments of an arbitrary function F(η) over this stochastic space are defined by a path integral over η^i. The Lagrangian diffusions are calculated as
    g^GG' = Σ_{i=1}^{N} ĝ^G_i ĝ^G'_i .

The calculation of the evolution of Langevin systems has been implemented in the above-mentioned systems using ASA. It has been used as an aid to debug the ASA fitting codes, by first generating data from coupled Langevin equations, relaxing the coefficients, and then fitting this data with the effective-Lagrangian cost-function algorithm to recapture the original coefficients within the diffusions defined by g^GG'.
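The first half of that debugging loop, generating data from coupled prepoint-discretized Langevin equations, can be sketched as below; the drifts and ĝ^G_i values are illustrative stand-ins, not SMNI forms:

```python
import math
import random

# Illustrative coupled Langevin system in the prepoint (Ito) discretization,
#   M^G(t+dt) = M^G(t) + g^G dt + ghat^G_i dW^i ;
# the drifts and GHAT values below are stand-ins, not SMNI forms.

GHAT = [[1.0, 0.2],   # ghat^G_i (rows G, columns i)
        [0.2, 1.0]]

def drift(M):
    return [-M[0] + 0.5 * math.tanh(M[1]),
            -M[1] + 0.5 * math.tanh(M[0])]

def step(M, dt, rng):
    g = drift(M)
    dW = [math.sqrt(dt) * rng.gauss(0.0, 1.0) for _ in range(2)]
    return [M[G] + g[G] * dt + sum(GHAT[G][i] * dW[i] for i in range(2))
            for G in range(2)]

# Diffusion matrix g^{GG'} = sum_i ghat^G_i ghat^{G'}_i, the quantity a
# subsequent Lagrangian fit should recapture.
gGG = [[sum(GHAT[G][i] * GHAT[Gp][i] for i in range(2)) for Gp in range(2)]
       for G in range(2)]

rng = random.Random(0)
M = [0.5, -0.5]
for _ in range(1000):
    M = step(M, 0.001, rng)
```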


MNN Parallel Processing
The use of parallel processors can make this algorithm even more efficient, as ASA lends itself well to parallelization. During learning, blocks of random numbers are generated in parallel, and then sequentially checked to find a generating point satisfying all boundary conditions. Advantage is taken of the low ratio of accepted to generated points typical in ASA, to generate blocks of cost functions, which are then sequentially checked to find the next best current minimum. Additionally, when fitting dynamic systems, e.g., the three physical systems examined to date, parallelization is attained by independently calculating each time epoch's contribution to the cost function. Similarly, during prediction, blocks of random numbers are generated to support the Langevin-equation calculations, and each node is processed in parallel.
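The per-epoch decomposition of the cost function can be sketched with a thread pool; the quadratic epoch cost here is a hypothetical stand-in:

```python
from concurrent.futures import ThreadPoolExecutor

# The total cost is a sum of independent per-epoch effective-Lagrangian
# contributions, so epochs can be farmed out in parallel; the epoch cost
# below is only an illustrative stand-in.

def epoch_cost(epoch):
    dm, m, dt, sigma = epoch
    a = -0.5                      # hypothetical current parameter value
    return (dm - a * m * dt) ** 2 / (2 * sigma ** 2 * dt)

epochs = [(-0.004, 1.0, 0.01, 0.1),
          (0.001, 0.9, 0.01, 0.1),
          (-0.006, 0.8, 0.01, 0.1)]

with ThreadPoolExecutor(max_workers=3) as pool:
    total = sum(pool.map(epoch_cost, epochs))

serial = sum(epoch_cost(e) for e in epochs)   # same value, computed serially
```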


Generic MNN
This SMNI MNN can be generalized to model other large-scale nonlinear stochastic multivariate systems, by considering general drifts and diffusions to model such systems, now letting G represent an arbitrary number of variables. Ideally, these systems inherently will be of the Fokker-Planck type,
    ∂P/∂t = -∂(g^G P)/∂M^G + (1/2) ∂²(g^GG' P)/(∂M^G ∂M^G') .
The topology, geometry, and connectivity of the MNN can of course be generalized. E.g., there need not be any restriction to nearest-neighbor interactions, although this is simpler to implement, especially on parallel processors. Also, hidden layers can be included to increase the complexity of the MNN, although the inclusion of nonlinear structure in the drifts g^G and diffusions g^GG' may make this unnecessary for many systems. This addresses some concerns in the neural-network community relating to the ability of neural networks to be trusted to generalize to new contexts: If the nodes can be described by mechanisms inherently consistent with the system, then more confidence can be justified for generalization. This is more desirable, and likely more robust, than using additional hidden layers to model such nonlinear structures.
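The Fokker-Planck evolution can be sketched numerically in the spirit of PATHINT, by repeatedly applying the Gaussian short-time propagator on a mesh; the one-variable example below assumes an Ornstein-Uhlenbeck drift g = -M and unit diffusion, purely for illustration:

```python
import math

# Toy mesh evolution of P_t = -(g P)_M + (1/2)(gg P)_MM via the short-time
# conditional probability
#   P(M', t+dt | M, t) ~ exp(-(M' - M - g dt)^2 / (2 gg dt)) / sqrt(2 pi gg dt).
# Drift g = -M (Ornstein-Uhlenbeck) and diffusion gg = 1 are assumptions here.

def evolve(p, mesh, dt, steps, g=lambda m: -m, gg=lambda m: 1.0):
    h = mesh[1] - mesh[0]
    for _ in range(steps):
        new = [0.0] * len(mesh)
        for i, m in enumerate(mesh):
            mu, var = m + g(m) * dt, gg(m) * dt
            for j, mp in enumerate(mesh):
                new[j] += p[i] * h * math.exp(-(mp - mu) ** 2 / (2 * var)) \
                          / math.sqrt(2 * math.pi * var)
        z = sum(new) * h
        p = [v / z for v in new]   # renormalize on the truncated mesh
    return p

mesh = [-5.0 + 0.1 * i for i in range(101)]
p0 = [math.exp(-(m - 2.0) ** 2 / 0.02) for m in mesh]   # narrow start at M = 2
z0 = sum(p0) * 0.1
p0 = [v / z0 for v in p0]
p = evolve(p0, mesh, dt=0.05, steps=40)                 # evolve to t = 2
mean = sum(m * v for m, v in zip(mesh, p)) * 0.1        # drifts toward 0
```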


Nonlinear Dynamics -- Chaos?
What if EEG has chaotic mechanisms that overshadow the above stochastic considerations? The real issue is whether the scatter in data can be distinguished between being due to noise or chaos. The SMNI-derived probability distributions can be used to help determine if chaos is a viable mechanism in EEG. The probability distribution itself is a mathematical measure to which tests can be applied to determine the existence of other nonlinear mechanisms. The path integral has been used to compare long-time correlations in data to predictions of models, while calculating their sensitivity, e.g., of second moments, to initial conditions. This also helps to compare alternative models, previously having had their short-time probability distributions fit to data, with respect to their predictive power over long time scales. As with serious work undertaken in several fields, the impulse to identify chaos in a complex system often has been premature. It is not supported by the facts, tentative as they are because of sparse data. Similar caution should be exercised regarding neocortical interactions. It is of interest to note a series of experimental and theoretical studies of nonlinear dynamics of the olfactory bulb in small mammals, in which distinctive EEG patterns on the bulb surface are shown to be associated with specific odors. Many such studies demonstrating chaos are quite model dependent, and as such it is only fair to present the models as possessing chaos, not necessarily the actual physical system. However, there are reports of model-independent experimental observations of distinctive stimulus-dependent parameters having extreme sensitivity to initial conditions. It has been widely noted that the correlation dimension of data is difficult to calculate; perhaps it is often not even a well-founded concept, e.g., since the EEG of event-related potentials is likely nonstationary and very much context and subject dependent. Its calculation, e.g., using the popular Grassberger-Procaccia algorithm, even when supplemented with finer statistical tests and noise-reduction techniques, may prove fruitful, but likely only as a sensitivity index relative to shifting contexts and complementary to other models of EEG data.
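The correlation-integral calculation itself is short; a minimal Grassberger-Procaccia sketch (brute-force pair counting, checked on points lying on a line, where the estimated slope should be near 1):

```python
import math
import random

# Grassberger-Procaccia correlation integral C(r): the fraction of point pairs
# of a (delay-embedded) trajectory closer than r; the correlation dimension is
# estimated from the slope of log C(r) vs log r. Brute-force O(n^2) sketch.

def correlation_integral(points, r):
    n, count = len(points), 0
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(points[i], points[j]) < r:
                count += 1
    return 2.0 * count / (n * (n - 1))

# Sanity check on a set whose dimension is known: points on a line in 3-D.
rng = random.Random(0)
pts = [(t, 2 * t, 3 * t) for t in sorted(rng.random() for _ in range(300))]
c1 = correlation_integral(pts, 0.2)
c2 = correlation_integral(pts, 0.4)
slope = (math.log(c2) - math.log(c1)) / (math.log(0.4) - math.log(0.2))
```

For real ERP data the caveats above apply: the slope would at best serve as a context-dependent sensitivity index, not a clean dimension estimate.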


Duffing EEG Analog -- Chaos in Noise
Nunez and Srinivasan (1993) studied chaos in a model of EEG, cast into a Duffing analog,
    d²x/dt² = f(x, t) ,
    f = -γ dx/dt - ω₀² x + B cos ωt .

This can be recast as
    dx/dt = y ,
    dy/dt = f(x, t) ,
    f = -γ y - ω₀² x + B cos ωt .

Note that this is equivalent to a 3-dimensional autonomous set of equations, e.g., replacing cos ωt by cos z, defining dz/dt = ω, and taking ω to be some constant. Ingber, Srinivasan and Nunez (1996) studied a model embedding this deterministic Duffing system in moderate noise, e.g., as exists in such models as SMNI. Independent Gaussian-Markovian (white) noise η^j_i is added to both dx/dt and dy/dt, where the variables are represented by i = {x, y} and the noise terms are represented by j = {1, 2},
    dx/dt = y + ĝ¹_x η¹ ,
    dy/dt = f(x, t) + ĝ²_y η² ,
    < η^j(t) > = 0 ,
    < η^j(t), η^j'(t') > = δ^jj' δ(t - t') .
In this study moderate noise is taken, simply setting ĝ^j_i = 1.0 δ^j_i. The equivalent short-time conditional probability distribution P, in terms of its Lagrangian L, corresponding to these Langevin rate equations, is
    P[x, y; t + Δt | x, y, t] = (2π Δt ĝ¹¹ ĝ²²)⁻¹ exp(-L Δt) ,
    L = (dx/dt - y)² / (2 (ĝ¹¹)²) + (dy/dt - f)² / (2 (ĝ²²)²) .
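A minimal Euler-Maruyama sketch of these noisy Duffing rate equations; the parameter values are illustrative, not those of the cited study:

```python
import math
import random

# Euler-Maruyama integration of the noisy Duffing-analog system
#   dx/dt = y + eta1,  dy/dt = f(x,t) + eta2,
#   f = -gamma*y - w0^2*x + B*cos(w*t),
# with unit noise amplitudes ghat = 1, as in the moderate-noise setting.
# All parameter values here are illustrative assumptions.

def simulate(gamma=0.5, w0=1.0, B=0.5, w=1.0, dt=0.001, steps=5000, seed=0):
    rng = random.Random(seed)
    x, y, t = 1.0, 0.0, 0.0
    for _ in range(steps):
        f = -gamma * y - w0 ** 2 * x + B * math.cos(w * t)
        x += y * dt + math.sqrt(dt) * rng.gauss(0.0, 1.0)
        y += f * dt + math.sqrt(dt) * rng.gauss(0.0, 1.0)
        t += dt
    return x, y

x, y = simulate()
```

Evolving an ensemble of such trajectories (or, better, applying PATHINT to the short-time distribution above) yields the probability densities compared in the study.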


Duffing EEG Analog -- Preliminary Indications
No differences were seen in the stochastic system when comparing regions of Duffing parameters that give rise to chaotic and non-chaotic solutions. More calculations must be performed for longer durations to draw more definitive conclusions.
[Figure: Path Integral Evolution of Non-Chaotic Stochastic Duffing Oscillator, t=15; probability density over the (X, Y) plane.]

[Figure: Path Integral Evolution of Chaotic Stochastic Duffing Oscillator, t=15; probability density over the (X, Y) plane.]

SMNI CANONICAL MOMENTA INDICATORS -- EEG


Canonical Momenta Indicators (CMI)
Some 1996 papers illustrated how canonical momenta derived from fitted nonlinear stochastic processes, using ASA to fit models to S&P 500 data, can be useful indicators of nonequilibrium behavior of financial markets.
    Momentum = Π^G = ∂L / ∂(∂M^G/∂t) .
Training Phase
These techniques are quite generic, and can be applied to the SMNI model. In a 1997 Physical Review paper, a given SMNI model is fit to EEG data, e.g., as performed in the 1991 Physical Review paper. This develops a zeroth-order guess for SMNI parameters for a given subject's training data. Next, ASA is used recursively to seek parameterized predictor rules, e.g., modeled according to guidelines used by clinicians. The parameterized predictor rules form an outer ASA shell, while regularly fine-tuning the SMNI inner-shell parameters within a moving window (one of the outer-shell parameters). The outer-shell cost function is defined as some measure of successful predictions of upcoming EEG events.
Testing Phase
In the testing phase, the outer-shell parameters fit in the training phase are used in out-of-sample data. Again, the process of regularly fine-tuning the inner shell of SMNI parameters is used in this phase.
Utility
These momenta indicators should be considered as supplemental to other clinical indicators. This is how they are being used in financial trading systems. In the context of other invariant measures, the CMI transform covariantly under Riemannian transformations, but are more sensitive measures of neocortical activity than other invariants such as the energy density, effectively the square of the CMI, or the information, which also effectively is in terms of the square of the CMI (essentially integrals over quantities proportional to the energy times a factor of an exponential including the energy as an argument). Neither the energy nor the information gives details of the components as do the CMI. EEG measures a quite oscillatory system, and the relative signs of such activity are quite important.
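For the quadratic short-time Lagrangian L = (1/2)(dM/dt - g)ᵀ g⁻¹ (dM/dt - g), the momenta reduce to Π_G = g_GG' (dM^G'/dt - g^G'); a small sketch with illustrative (not fitted) numbers:

```python
# Canonical momenta Pi_G = dL/d(dM^G/dt) for a quadratic two-variable
# short-time Lagrangian; drift, diffusion matrix, and rates are illustrative
# stand-ins, not fitted SMNI parameters.

def momenta(Mdot, drift, gmat):
    # invert the 2x2 diffusion matrix gmat
    det = gmat[0][0] * gmat[1][1] - gmat[0][1] * gmat[1][0]
    inv = [[gmat[1][1] / det, -gmat[0][1] / det],
           [-gmat[1][0] / det, gmat[0][0] / det]]
    dv = [Mdot[G] - drift[G] for G in range(2)]
    # Pi_G = (g^{-1})_{GG'} (Mdot^{G'} - g^{G'})
    return [sum(inv[G][Gp] * dv[Gp] for Gp in range(2)) for G in range(2)]

pi = momenta(Mdot=[0.3, -0.1], drift=[0.1, 0.0],
             gmat=[[1.0, 0.0], [0.0, 2.0]])
```

Note the diffusion weighting: a given deviation of dM^G/dt from the drift counts for less along directions of larger diffusion, which is what makes the CMI a normalized, covariant indicator.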

SMNI vs SMFM (SM of Financial Markets)

Feature                    SMNI                   SMFM
Data                       EEG                    coupled markets
Training                   a given subject        historical data
Testing                    clinical observation   trading for profit/loss
ASA Inner-Shell Momenta    EEG-fit SMNI           market-fit models
ASA Outer-Shell Rules      clinical rules         trading rules


SMNI CMI of Genetic Predisposition to Alcoholism
Each set of results is presented with 6 figures, labeled as [{alcoholic|control}, {stimulus 1|match|no-match}, subject, {potential|momenta}], abbreviated to {a|c}_{1|m|n}_subject.{pot|mom}, where match or no-match was performed for stimulus 2 after 3.2 sec of a presentation of stimulus 1. Data includes 10 trials of 69 epochs each between 150 and 400 msec after presentation. For each subject's run, after fitting 28 parameters with ASA, epoch-by-epoch averages are developed of the raw data and of the multivariate SMNI canonical momenta. There are fits and CMI calculations using data sets from 10 control and 10 alcoholic subjects for each of the 3 paradigms. For some subjects there also are out-of-sample CMI calculations. All stimuli were presented for 300 msec. Note that the subject number also includes the {alcoholic|control} tag, but this tag was added just to aid sorting of files (as there are contributions from co2 and co3 subjects). Each figure contains graphs superimposed for 6 electrode sites (out of 64 in the data) which have been modeled by SMNI using the circuitry:

Site   Contributions From   Time Delays (units of 3.906 msec)
F3     -                    -
F4     -                    -
T7     F3                   1
T7     T8                   1
T8     F4                   1
T8     T7                   1
P7     T7                   1
P7     P8                   1
P7     F3                   2
P8     T8                   1
P8     P7                   1
P8     F4                   2

Data was collected by Henri Begleiter and associates at the Neurodynamics Laboratory at the State University of New York Health Center at Brooklyn. Calculations were performed on a Sun SPARC 20 at the University of Oregon, Eugene, courtesy of the Department of Psychology. The SMNI CMI give a cleaner signal-to-noise presentation than the raw data, especially for the significant matching tasks between the control and the alcoholic groups. The CMI can be processed further, as is the raw data, and also used to calculate energy and information/entropy densities.

Data vs SMNI CMI for Alcoholic Group S1

[Figure: Train and Test sets for subject a_1_co2a0000364; raw EEG potentials and SMNI CMI vs t (sec), 0.1-0.45 sec, superimposed for electrodes F3, F4, P7, P8, T7, T8.]
Data vs SMNI CMI for Control Group S1

[Figure: Train and Test sets for subject c_1_co2c0000337; raw EEG potentials and SMNI CMI vs t (sec), 0.1-0.45 sec, superimposed for electrodes F3, F4, P7, P8, T7, T8.]

Data vs SMNI CMI for Alcoholic Group S2 Match

[Figure: Train and Test sets for subject a_m_co2a0000364; raw EEG potentials and SMNI CMI vs t (sec), 0.1-0.45 sec, superimposed for electrodes F3, F4, P7, P8, T7, T8.]

Data vs SMNI CMI for Control Group S2 Match

[Figure: Train and Test sets for subject c_m_co2c0000337; raw EEG potentials and SMNI CMI vs t (sec), 0.1-0.45 sec, superimposed for electrodes F3, F4, P7, P8, T7, T8.]

Data vs SMNI CMI for Alcoholic Group S2 No Match

[Figure: Train and Test sets for subject a_n_co2a0000364; raw EEG potentials and SMNI CMI vs t (sec), 0.1-0.45 sec, superimposed for electrodes F3, F4, P7, P8, T7, T8.]

Data vs SMNI CMI for Control Group S2 No Match

[Figure: Train and Test sets for subject c_n_co2c0000337; raw EEG potentials and SMNI CMI vs t (sec), 0.1-0.45 sec, superimposed for electrodes F3, F4, P7, P8, T7, T8.]

SMNI GENERIC FEATURES

Scaling Paradigm
These results, in addition to their importance in reasonably modeling EEG with SMNI, also have a deeper theoretical importance with respect to the scaling of neocortical mechanisms of interaction across disparate spatial scales and behavioral phenomena. These features offer guides to overlap models across multiple scales. The derivation of the mesoscopic probability distribution yields algebraic structures of the threshold factors similar to those at the neuronal scale,
    F_j = (V_j - Σ_k a_jk v_jk) / (π Σ_k a_jk (v_jk² + φ_jk²))^(1/2) ,

to be compared with the mesoscopic threshold factor,
    F^G = (V^G - a^|G|_G v^|G|_G N^|G| - (1/2) A^|G|_G v^|G|_G M^|G|) / (π [(v^|G|_G)² + (φ^|G|_G)²] (a^|G|_G N^|G| + (1/2) A^|G|_G M^|G|))^(1/2) ,

which contribute to the drifts g^G and diffusions g^GG',
    g^G = -τ⁻¹ (M^G + N^G tanh F^G) ,
    g^GG' = (g_GG')⁻¹ = δ^G_G' τ⁻¹ N^G sech² F^G ,

illustrating common forms of interactions between their entities, i.e., neurons and columns of neurons, respectively. The macroscopic regional probability distribution has the same functional form as the mesoscopic distribution, where the macroscopic drifts and diffusions of the potentials described by the Φ's are simply linearly related to the (nonlinear) mesoscopic drifts and diffusions of the columnar firing states given by the M^G's,
    m ≡ < dΦ/dt > = a < dM^E/dt > + b < dM^I/dt > = a g^E + b g^I ,
    σ² ≡ < (dΦ/dt)² > - < dΦ/dt >² = a² g^EE + b² g^II .

It was possible to scale the macrocolumnar distribution to describe electrode recording areas of several cm², and have the fitted parameters lie within their experimentally observed ranges.
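These columnar drift and diffusion forms are simple to evaluate once F^G is known; a minimal sketch (the values M^G = 10, N^G = 80, τ = 5 msec, F^G = 0.1 are illustrative assumptions, not fitted SMNI parameters):

```python
import math

# Columnar drift and diagonal diffusion in terms of the threshold factor F^G,
#   g^G  = -(M^G + N^G tanh F^G) / tau ,
#   g^GG =  N^G sech^2(F^G) / tau .
# All numerical values below are illustrative assumptions, not fits.

def drift_diffusion(M, N, F, tau):
    sech2 = 1.0 / math.cosh(F) ** 2
    g = -(M + N * math.tanh(F)) / tau
    gGG = N * sech2 / tau
    return g, gGG

g, gGG = drift_diffusion(M=10.0, N=80.0, F=0.1, tau=5.0)
```

The tanh/sech² pair makes explicit why the drift saturates at large |F^G| while the diffusion is largest near threshold, where firing is most uncertain.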


Addressing Phenomena at Multiple Scales
Analyzing, understanding, and attempting to predict neocortical phenomena at the spatial scales encountered in EEG scalp recordings involves measuring firings from millions of neurons; in intracortical measurements at least tens of thousands of neurons are involved. SMNI approaches such scales as new systems with emergent phenomena. Just as physics and chemistry could not have progressed this century if all theoretical and experimental research had been inappropriately constrained to be understood at the level of quantum mechanics (or quarks or strings), so neuroscience must soon accept that not all brain phenomena are best understood, or perhaps understood at all, at the level of simple neuron-neuron interactions (or membrane dynamics or macromolecular interactions). Different scales present new complex systems that must be approached as such, and often with different approaches.


Increasing Signal to Noise / Audit Trail to Sources
Logical and Testable Development Across Multiple Scales
SMNI is a logical, nonlinear, stochastic development of aggregating neuronal and synaptic interactions to larger and larger scales. Paradigms and metaphors from other disciplines do not substitute for logical SMNI development.
Validity Across Multiple Scales
The SMNI theoretical model has independent validity in describing EEG dispersion relations, systematics of short-term memory, velocities of propagation of information across neocortical fields, recency versus primacy effects, etc. Fits of such models to data should do better in extracting signal from noise than ad hoc phenomenological models.
Use of ASA and PATHINT on Nonlinear Stochastic Systems
ASA enables the fitting of quite arbitrary nonlinear stochastic models to such data as presented by EEG systems. Once fitted, PATHINT can evolve the system, testing long-time correlations between the model(s) and the data, as well as serving to predict events.
Inclusion of Short-Range and Long-Range Interactions
SMNI proposes that models to be fitted to data include models of activity under each electrode, e.g., due to short-ranged neuronal fibers, as well as models of activity across electrodes, e.g., due to long-ranged fibers.
Riemannian Invariants
Yet to explore are the ramifications of using the derived (not hypothesized) Riemannian metric induced by multivariate Fokker-Planck-type systems. This seems to form a natural invariant measure of information, which could/should be used to explore flows of information between neocortical regions.
Renormalization of Attenuated Frequencies
The SMNI approach shows how to renormalize the spatial activity to get a model that more closely matches the experimental situation, wherein there is attenuation of ranges of wave numbers.
MNN Real-Time Processing and Audit Trail to Finer Scales
The MNN parallel algorithm may offer real-time processing of nonlinear modeling and fitting of EEG data for clinical use. Regional EEG data can be interpreted as mechanisms occurring at the minicolumnar scales.
Recursive ASA Optimization of Momenta Indicators + Clinical Rules
Similar to codes developed for financial systems, recursive ASA optimizations of inner-shell SMNI indicators and outer-shell clinical guides should improve predictions of and decisions on clinical observations.


Generic Applications to Process Aggregated Information
This statistical-mechanics approach represents the mesoscale as a pattern-processing computer. The underlying mathematical theory, i.e., the path-integral approach, specifies a parallel-processing algorithm which statistically finds those parameter regions of firing which contribute most to the overall probability distribution: This is a kind of intuitive algorithm, globally searching a large multivariate data base to find parameter regions deserving more detailed local information processing. The derived probability distribution can be thought of as a filter, or processor, of incoming patterns of information; and this filter can be adapted, or updated, as it interacts with previously stored patterns of information. These technical methods are quite general, and I have applied them to:
neuroscience -- detailing properties of short-term memory derived from neuronal synaptic interactions, and calculating most likely frequencies observed in EEG data and velocities of propagation of information across neocortex.
nuclear physics -- detailing Riemannian contributions arising from velocity-dependent forces to the binding energy of nucleons interacting via exchanges of mesons.
combat systems -- modeling battalion- and brigade-level attrition, permitting comparison of combat computer models with field-exercise data.
financial markets -- defining an approach to perform superior fits of economic models to interest rates, and to perform superior calculations of multivariate securities such as bonds and options.
Bond pricing, options pricing, and many derivative theories present examples of interesting systems which possess multivariable, nonlinear, multiplicative-noise, Fokker-Planck dynamics. One class of theories develops interest rates, determined at several temporal scales, as the independent stochastic variables.
Previous models tested against data have been two-dimensional generalizations, e.g., short-term and long-term interest rates, of the well-known Black-Scholes model, but recent fits argue for the necessity of including at least a third variable, intermediate-term interest rates.


Utility, Parsimony, & Beauty
SMNI presents a computational algorithm faithful to a model of neocortical interactions that has been baselined to experimental observations across multiple scales. The present neuroscience project is constructing a tool to be used for real-time study and diagnoses of brain function. Similarly to the neocortex, in many complex systems, as spatial-temporal scales of observation are increased, new phenomena arise by virtue of synergistic interactions among smaller-scale entities (perhaps more properly labeled quasi-entities) which serve to explain much observed data in a parsimonious, usually mathematically aesthetic, fashion. Many complex systems are in nonequilibrium, being driven by nonlinear and stochastic interactions of many external and internal degrees of freedom. For these systems, classical thermodynamical approaches typically do not apply. Such systems are best treated by respecting some intermediate mesoscale as fundamental to driving larger macroscopic processes. The use of ASA and PATHINT permits the processing of quite general nonlinear, stochastic, and multivariate descriptions of systems, without being limited to equilibrium energy-type cost functions. Phenomena specific to given scales of interaction must be modeled and studied, as well as the flow of information across these scales. The stochastic nonlinear SMNI paradigm permits this development.
