
Multi-Agent Approach for Image Processing: A Case Study for MRI Human Brain Scans Interpretation

Nathalie Richard 1,2, Michel Dojat 1,2, Catherine Garbay 2

1 Institut National de la Santé et de la Recherche Médicale, U594 Neuroimagerie Fonctionnelle et Métabolique, CHU - Pavillon B, BP 217, 38043 Grenoble Cedex 9, France {nrichard, mdojat}@ujf-grenoble.fr
2 Laboratoire TIMC-IMAG, Institut Bonniot, Faculté de Médecine, Domaine de la Merci, 38706 La Tronche Cedex, France (catherine.garbay@imag.fr)

Abstract. Image interpretation consists in finding a correspondence between radiometric information and a symbolic labelling with respect to specific spatial constraints. To cope with the difficulty of image interpretation, several information processing steps are required to gradually extract information from the image grey levels and to introduce symbolic information. In this paper, we evaluate the use of situated cooperative agents as a framework for managing such steps. Dedicated agent behaviours are dynamically adapted as a function of the agents' positions in the image, of topographic relationships and of the radiometric information available. Acquired knowledge is diffused to acquaintances, and an incremental refinement of the interpretation is obtained through the focalisation and coordination of agent tasks. Based on several experiments on real images, we demonstrate the potential interest of multi-agent systems for MRI brain scan interpretation.

1. Modelling and Interpretation Processes

Automatic interpretation of Magnetic Resonance Imaging (MRI) brain scans could greatly help clinicians and neuroscientists in decision making. Due to various image artefacts, and in spite of several research efforts, it presently remains a challenging application. Based on several experiments, we demonstrate in this paper the potential interest of situated cooperative agents as a framework to manage the information processing steps, essentially modelling and interpretation via fusion mechanisms, required in this context.

1.1 Context

Three tissue classes exist inside the brain: grey matter (GM), white matter (WM) and cerebro-spinal fluid (CSF), distributed over several anatomical structures, such as the cortical ribbon and central structures for GM, the ventricles and sulci for CSF, and the myelin sheath for WM. 3D MRI brain scans are huge images (10 Mb for one 3D image), whose interpretation consists either in tissue interpretation or in anatomical structure identification. Radiometric knowledge, i.e. knowledge about the tissue intensity distributions and about image acquisition artefacts, must be introduced for tissue interpretation, and anatomical knowledge, i.e. knowledge about the geometry and localization of the structures, has to be added for structure interpretation. To perform tissue interpretation properly, three kinds of acquisition artefacts are generally taken into account: 1) white noise over the image volume, which leads to an overlapping of tissue intensity distributions, 2) a partial volume effect, due to the sampling grid of the MRI signal, which leads to mixtures of tissues inside given voxels, and 3) a bias field, due to intensity non-homogeneities in the radio frequency field, which introduces variations in tissue intensity distributions over the image volume. Most of the methods proposed in the literature perform a radiometry-based tissue interpretation. We focus our paper on this issue.

1.2 Estimation and Classification via Fusion Mechanisms

In image processing, decision making occurs at the voxel level, since each voxel has to be labelled. Such a labelling is based on so-called "models", which characterize tissue intensity distributions, and on interactions between neighbouring voxel labels to respect tissue homogeneity. Such models have to be learned from sufficient data and to be expressed in a common framework for information fusion at the voxel level. For MRI brain scan interpretation, model computation is hampered by: 1) the presence of noise, 2) the heterogeneity of classes and 3) their large overlapping. To cope with these difficulties, the modelling process, via estimation and classification, should be refined through an iterative procedure. Several estimation techniques have been proposed in the literature. Most of them use Bayesian classification algorithms where tissue intensity distributions are modelled as Gaussian curves whose parameters, mean value and standard deviation, are generally estimated via an iterative EM (Expectation-Maximization) approach [3, 5, 7, 8].
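The authors' estimation code is not given in the paper; as an illustration only, the following sketch (ours, with arbitrary function names and initialisation constants) shows the kind of k-means-initialised EM procedure described above, for one-dimensional tissue intensity distributions. Each class is a Gaussian whose mean, standard deviation and prior (the relative class frequency) are re-estimated at every iteration.

```python
import math
import random

def gauss(x, mu, sigma):
    # Gaussian probability density at x
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def kmeans_1d(values, k, iters=20):
    # Initialise class centres spread over the intensity range
    lo, hi = min(values), max(values)
    centres = [lo + (j + 0.5) * (hi - lo) / k for j in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            clusters[min(range(k), key=lambda j: abs(v - centres[j]))].append(v)
        centres = [sum(c) / len(c) if c else centres[j]
                   for j, c in enumerate(clusters)]
    return sorted(centres)

def em_gmm_1d(values, k, iters=30):
    """Estimate per-class mean, std and prior (relative class frequency)."""
    mu = kmeans_1d(values, k)
    sigma = [(max(values) - min(values)) / (4.0 * k)] * k
    pi = [1.0 / k] * k
    for _ in range(iters):
        # E step: posterior responsibility of each class for each voxel
        resp = []
        for v in values:
            p = [pi[j] * gauss(v, mu[j], sigma[j]) for j in range(k)]
            s = sum(p) or 1e-300
            resp.append([x / s for x in p])
        # M step: update Gaussian parameters and priors
        for j in range(k):
            nj = max(sum(r[j] for r in resp), 1e-12)
            mu[j] = sum(r[j] * v for r, v in zip(resp, values)) / nj
            var = sum(r[j] * (v - mu[j]) ** 2 for r, v in zip(resp, values)) / nj
            sigma[j] = math.sqrt(max(var, 1e-6))
            pi[j] = nj / len(values)
    return mu, sigma, pi
```

In the actual system, five such classes (three pure tissues and two mixtures) are estimated per volume partition rather than globally.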
The prior probability required in Bayesian classifiers is based on the relative frequency of each class in the volume [3, 6] or on the introduction of spatial knowledge by means of a digital brain atlas [1, 7]. In this context, Markov Random Fields (MRFs) are often used to model the interactions between neighbouring voxel labels and to define the a priori probability of tissues via a regularization term [7, 8]. The parameters of the MRF model may be given a priori [5] or also estimated iteratively during the EM process [7]. Image artefact modelling can also be introduced, mainly for bias field correction [1, 5, 7]. Estimation and classification should be continuously refined during the incremental image interpretation process. Most of the strategies proposed to date to control such a process are iterative optimal approaches that use MRFs to model the neighbouring labelling topology, Gaussian modelling for tissue intensity distributions and Bayesian fusion for information combination [5, 7, 8]. Other approaches are rather based on incremental improvement of the interpretation [6]. For robust decision making, it is essential to proceed incrementally, through successive and interleaved steps of model estimation, classification, focalisation and fusion. Figure 1 exemplifies the information flow during the modelling and interpretation processes and the central role of maps, which are implicitly used in MRF approaches and explicitly represented in the approach we develop below. Maps are matrices organizing the image information according to its spatial coordinates, in order to keep track of topographical information. They constitute a naturally distributed information

repository, where focalisation mechanisms can elegantly take place. Maps explicitly represent the various types and levels of information that are gradually computed, exploited and fused along the entire interpretation process. We advocate in this paper that a multi-agent approach is a powerful way to manage such explicit maps.
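The map-and-fusion scheme can be made concrete with a minimal sketch (ours, not the authors' implementation): each map is represented as a dictionary keyed by voxel coordinates, the per-tissue evidence maps (e.g. grey-level based and neighbouring-labelling based) are fused by product, and the decision map keeps the most probable tissue at each voxel.

```python
def fuse_tissue_maps(maps):
    """Combine several probability maps for one tissue (product fusion)."""
    fused = {}
    for m in maps:
        for vox, p in m.items():
            fused[vox] = fused.get(vox, 1.0) * p
    return fused

def decision_map(fused_by_tissue):
    """Label each voxel with the tissue of highest fused probability."""
    voxels = set()
    for m in fused_by_tissue.values():
        voxels |= m.keys()
    return {vox: max(fused_by_tissue, key=lambda t: fused_by_tissue[t].get(vox, 0.0))
            for vox in voxels}
```

For example, fusing a grey-level map and a neighbourhood-support map per tissue, then taking the per-voxel maximum, yields the final labelling decision of Figure 1.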
[Figure 1 diagram: the modelling process links the model level (tissue intensity distribution model, neighbouring label interaction model) to the voxel level (grey levels map and other low level maps, grey-level-based and neighbouring-labelling-based probability maps, model-based tissue probability maps - one per tissue and per model). The interpretation process fuses these, per tissue and then between tissues, into final tissue probability maps (one per tissue) and decision (labelling) maps. Classification mechanisms: model-based classification, model parameter estimation. Fusion mechanisms: information fusion by tissue, information fusion between tissues for the labelling decision.]

Figure 1. Modelling and interpretation processes. Starting from radiometric information (low level maps), models are instantiated and gradually refined by means of successive and interleaved steps of estimation, classification and fusion of probability maps, leading to the final labelling decision.

2. Our Situated and Incremental Multi-Agent Approach


Adopting a situated and incremental approach consists in introducing accurate focalisations of treatments and adequate coordination and cooperation mechanisms. We chose an approach based on incremental improvement of the interpretation, the voxels being labelled from the most reliable to the most uncertain ones, and the adequacy of the model instantiation being reinforced all along the interpretation process. The control strategy drives the evolution of the interpretation process (through the graph of Figure 1) and the spatial exploration of the image (through the image volume). Focalising treatments means choosing, at a given step of the interpretation process: 1) a goal to be reached (objects to identify), 2) a region of interest to be processed (a set of voxels at a given location on the maps), and 3) a method that achieves the treatment (chosen among the four modelling and fusion mechanisms introduced previously). Organizing treatments means choosing how treatments should be distributed and coordinated for image interpretation, i.e. when a treatment should be launched, following which criteria, and how treatments should cooperate to improve the global process.

2.1 Focalisation of Treatments

The interpretation process proceeds in a situated way, i.e. with evolving goals, inside distributed regions of interest, and is achieved using dedicated mechanisms. For instance, a simple region growing technique and sophisticated confrontation mechanisms are respectively used for voxels inside a tissue and for voxels located at tissue borders.

Focalisation on Goals. In the implemented system we have designed, situated treatments are presently dedicated to the local interpretation of brain images into three tissues, WM, GM and CSF. Decision maps are gradually introduced. Firstly, an initial skull-stripping map is built to differentiate brain tissues from the rest of the image. Then, decision maps for each tissue are extracted. A next step would consist in identifying anatomical structures by differentiation of the tissue decision maps.

Focalisation on Distributed Regions of Interest. To take into account the non-uniformity of the intensity distribution over the image, mainly due to the bias field, the interpretation proceeds on volume partitions. Local radiometric models are introduced, which are instantiated during local tissue distribution estimation steps and used during local labelling steps. Because estimation is performed locally, the resulting models are prone to errors and some are missing.
To cope with this difficulty, the local models distributed over the volume are confronted with models interpolated from the neighbourhood, to maintain the global consistency of the distributed interpretation process and reinforce the robustness of the models.

Focalisation for the Selection of Dedicated Mechanisms. To take into account the noise and the partial volume effect, which induce errors in the radiometric model instantiation, two phases are distinguished in the local interpretation process: 1) during the initial phase, based on strict labelling constraints, an under-segmentation of the image is produced: no labelling decision is taken for the most difficult voxels, situated at the frontier between tissues, and 2) during the final phase, the radiometric models are first refined and then the remaining voxels are labelled. Each phase is composed of a radiometric model estimation step and of a voxel evaluation and labelling step.

The initial phase. The radiometric model estimation is initialised with a k-means algorithm and refined with a Bayesian EM algorithm. Five tissue classes are estimated, one for each tissue or tissue mixture (CSF, GM, WM, CSF-GM and GM-WM). The prior probability is based on the relative frequency of each class in the volume and estimated during the E step of the algorithm. The obtained model is then confronted with a control model interpolated from the neighbourhood. Voxel classification into pure tissue classes proceeds with a region growing process, following three steps: 1) region growing constraints are defined from the Gaussian models in order to label only the most reliable voxels of each tissue (to obtain an under-segmentation), 2) seeds to start the region growing are selected using strict criteria (rooting mechanism) or transmitted from neighbouring regions (region growing propagation mechanism), and 3) voxels are labelled as a function of their grey level and of the labelling of the neighbouring voxels.

The final phase. Voxels left unlabelled during the initial phase are treated during the final phase in order to obtain a complete image interpretation. The radiometric model estimation is initialised with the previously under-segmented image and refined with a Bayesian EM classification algorithm. Voxel classification is done competitively between tissues, from the most reliable labelling to the most uncertain one. It relies on a more sophisticated model than the one used in the initial phase and concerns only the voxels at the tissue frontiers, which are more difficult to label. Partial volume labels may be introduced.

2.2 Organization of Treatments

Parallel interpretation processes are launched in each volume partition in a coordinated and cooperative way.

Coordination and Information Diffusion. Treatments have to be coordinated inside a given volume partition or between neighbouring partitions, as a function of the available and incrementally extracted knowledge. Local model estimation must be reinforced using estimations produced in the neighbourhood. This confrontation can only be achieved when the information from the neighbourhood is available.
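The constrained region growing of the initial phase can be illustrated with a minimal 2D sketch (ours; the actual system works in 3D with richer criteria): voxels reachable from the seeds are labelled only when their grey level stays within a strict band derived from the tissue's Gaussian model, which produces the under-segmentation described above.

```python
from collections import deque

def region_grow(grey, seeds, mu, sigma, n_sigma=1.0):
    """Label voxels reachable from the seeds whose intensity stays
    within a strict band around the tissue mean (under-segmentation)."""
    labels = set(seeds)
    queue = deque(seeds)
    while queue:
        x, y = queue.popleft()
        # 4-connected neighbourhood (6-connected in 3D)
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (nxt in grey and nxt not in labels
                    and abs(grey[nxt] - mu) < n_sigma * sigma):
                labels.add(nxt)
                queue.append(nxt)
    return labels
```

A tight `n_sigma` leaves the ambiguous frontier voxels unlabelled, exactly as intended for the initial phase; the final phase then decides on them competitively.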
When a model is modified, the corresponding information is propagated to the neighbouring regions for new confrontations. The labelling process performed during the initial phase requires information about the location of seeds to launch the region growing mechanism. A time-consuming rooting process may be used to select seeds using local radiometric and topologic criteria. It can advantageously be replaced by mechanisms of region growing propagation from cube to cube. When a local region growing process reaches a frontier of its cube, it transmits the voxel candidates to the corresponding neighbouring process (which is launched if necessary). The switch from one step of the local interpretation process to the next is launched autonomously in each cube, as a function of criteria relative to the information available in the cube and in the neighbouring cubes. To launch the final estimation step, a large enough local under-segmentation has to be available, which depends on the advancement of the labelling process in the neighbouring partitions. Similarly, to launch the labelling steps, local models first have to be computed in the volume partition, and then a robust enough model interpolation from the neighbourhood has to be available to verify the model.
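The model verification step just mentioned can be sketched as follows (our simplification, with a tissue model reduced to a `(mean, std)` pair): a local estimate is kept when it agrees with the model interpolated from the neighbouring cubes, and is replaced by the interpolation when it is missing or deviates too much.

```python
def interpolate_model(neigh_models):
    """Interpolate a control model by averaging neighbouring models."""
    mus = [m[0] for m in neigh_models]
    sds = [m[1] for m in neigh_models]
    return (sum(mus) / len(mus), sum(sds) / len(sds))

def confront(local, neigh_models, tol=2.0):
    """Keep the local (mean, std) estimate when it agrees with the
    neighbourhood; fall back to the interpolated model otherwise."""
    if not neigh_models:
        return local
    ref = interpolate_model(neigh_models)
    if local is None or abs(local[0] - ref[0]) > tol * ref[1]:
        return ref  # missing or inconsistent model: use the interpolation
    return local
```

A plain average is used here for brevity; a distance-weighted interpolation over the neighbouring cubes would be a natural refinement.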

Cooperation between Distributed Treatments. Three kinds of cooperation, defined in [2], are used in this context: 1) integrative cooperation: model estimation, model checking using the neighbourhood and data analysis steps are interleaved, 2) augmentative cooperation: interpretation is a spatially distributed process, and 3) confrontational cooperation: information produced in the same region or in neighbouring regions is confronted (via fusion and interpolation mechanisms).

2.3 A Multi-Agent Architecture

To implement the mechanisms previously described, we introduce situated and cooperative agents as a programming paradigm. The system is composed of agents running under the control of a system manager, whose role is to manage their creation, destruction, activation and deactivation. Each agent is in turn provided with several behaviours running under the control of a behaviours manager. The agents are organized into groups running under the control of a group manager, which ensures their proper coordination. Briefly (details about the implementation can be found in [4]), three types of agents coexist in the system: global control agents, local control agents and tissue dedicated agents. The role of the global control agent is to partition the data volume into adjacent territories and then to assign one local control agent to each territory. The role of the local control agents is to create tissue dedicated agents, to estimate model parameters and to confront tissue models for labelling decisions. The role of the tissue dedicated agents is to execute tasks distributed by tissue type: tissue model interpolation from the neighbourhood and voxel labelling using a region growing process. The agents have to be coordinated at several levels: 1) inside a cube, the local control agent and the tissue dedicated agents alternate the firing of their behaviours, 2) tissue dedicated agents from neighbouring cubes interact during their model control behaviour and their region growing behaviour, and 3) agent behaviour selection also depends on the global progress of the interpretation. Behaviour switching is decided either autonomously by the agents, when they have achieved their current behaviour and the required information is available, or triggered by group coordination mechanisms. Agents are organized into groups depending on their type and on the treatments they currently process.
Four local control agent groups and three tissue dedicated agent groups (one group for each step to be processed by each kind of agent) coexist in the system. The agents share a common information zone, organized according to tissue types and spatial relations, storing global and local statistical information. Qualitative information maps are introduced to efficiently gather and retrieve information and to easily add complementary information.
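The control scheme of this architecture can be caricatured in a few lines (ours; the real system described in [4] is far richer): each agent fires its ordered behaviours only when their preconditions on the shared information zone are met, and the system manager activates the agents until no further progress is possible.

```python
class Agent:
    """An agent fires its behaviours in order; each behaviour waits
    until its precondition on the shared information zone is met."""
    def __init__(self, name, behaviours):
        self.name = name
        self.behaviours = list(behaviours)  # [(precondition, action), ...]

    def step(self, zone):
        if not self.behaviours:
            return False
        precondition, action = self.behaviours[0]
        if not precondition(zone):
            return False  # wait for information from acquaintances
        self.behaviours.pop(0)
        action(zone)
        return True

class SystemManager:
    """Activates agents in turn until no agent can make progress."""
    def __init__(self, agents):
        self.agents = agents

    def run(self, zone):
        while any(agent.step(zone) for agent in self.agents):
            pass
        return zone
```

For instance, a tissue dedicated agent whose labelling behaviour waits for a model in the zone will remain idle until a local control agent has posted its estimation, mimicking the autonomous behaviour switching described above.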

3. Evaluation
To evaluate our system, we acquired three-dimensional anatomical brain scans (T1-weighted, 3D flash sequence, voxel size = 1 mm³, matrix size = 181*184*157) at 1.5 T on a clinical MR scanner (Philips ACS II). Such images are shown in Figures 2 and 3.

Image partitioning for local model estimation and classification: Figure 2 shows the high variability of tissue characteristics depending on the position in the image and illustrates the importance of local model adaptation. The anatomical volume was partitioned following a 15*15*15 voxel grid. Six agents were considered per cube: one local control agent and five tissue dedicated agents, i.e. three agents dedicated to pure tissue (WM, GM and CSF) labelling and two agents dedicated to mixture (WM-GM and CSF-GM) labelling. In total, 686 local control agents and 3430 tissue dedicated agents were launched (segmentation in 3.5 min on an 800 MHz PC with 256 Mb RAM). In each cube, a local histogram was computed on a 20*20*20 voxel region centred on the middle of the cube. For two selected cubes (drawn in white (bottom) and black (top) in Fig. 2a), local histograms were computed (Fig. 2c). As indicated in Fig. 2c, the GM intensity distribution of the upper cube was equal to the WM intensity distribution of the lower cube. Nevertheless, thanks to the local adaptation, the global result is satisfactory, as indicated in Fig. 2b.
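The partitioning and local histogram computation just described can be sketched as follows (ours; the grid and window sizes follow the text, the bin count is arbitrary, and the image is represented as a sparse voxel dictionary for brevity):

```python
def partition(shape, cube=15):
    """Yield the origin of each cube of a regular grid over the volume."""
    for x in range(0, shape[0], cube):
        for y in range(0, shape[1], cube):
            for z in range(0, shape[2], cube):
                yield (x, y, z)

def local_histogram(grey, origin, size=20, cube=15, bins=16, vmax=256):
    """Histogram over a size^3 window centred on the middle of the cube."""
    cx, cy, cz = (o + cube // 2 for o in origin)
    half = size // 2
    hist = [0] * bins
    for (x, y, z), v in grey.items():
        if abs(x - cx) <= half and abs(y - cy) <= half and abs(z - cz) <= half:
            hist[min(int(v * bins / vmax), bins - 1)] += 1
    return hist
```

Using a histogram window larger than the cube (20³ vs. 15³) overlaps neighbouring cubes, which stabilises the local estimates at partition borders.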
[Figure 2 panels: a. partitioning grid (columns A-I, rows 1-9) over an anatomical slice; b. final segmentation; c. histograms (voxels vs. grey levels) for cubes B8 and F2 with the B8 WM peak and F2 GM peak marked, plus the global histogram.]

Figure 2. a. The partitioning grid is placed on one MR anatomical slice. The local histograms corresponding to the two cubes located at B8 (in red) and F2 (in blue) are shown in c. The global histogram over the entire volume is plotted (in black). The final segmentation is shown in b.

Gradual interpretation refinement via fusion mechanisms: Partitioning can lead to some difficulties in model estimation. Two cases are emphasized. In case 1, due to the reduced size of the voxel population, the model estimation fails for some tissues; in case 2, the presence of different anatomical structures composed of the same tissue hampers the model estimation. Cooperation between neighbouring agents and progressive interpretation refinement are the solutions we propose. They are illustrated in Figures 3 and 4. In the anatomical part displayed in Fig. 3a, several tissues are shown: WM, CSF inside the sulci, GM in the cortex and GM in the putamen, a central nucleus whose grey level is intermediate between those of the cortical GM and of the WM. Figures 3d to 3g show, at several interpretation steps, starting in d and ending in g, the histograms and the estimated Gaussian models corresponding to the grid in Fig. 3a. Cube D4 and cube C3 are illustrative of case 1 and case 2 respectively. The evolution of the estimation of their histograms is zoomed in Figure 4. In Fig. 3d, the initial Gaussian models were estimated. Some were missing, due to the absence of some tissues (see D4 in Fig. 4d) or to the existence of a new intensity distribution (the putamen in C3, see Fig. 4d) between the distributions of cortical GM and WM. In C3 this led to a misinterpretation: the putamen peak was interpreted as a GM peak and the

cortical GM peak was interpreted as a CSF peak. During the following interpretation step (Fig. 3e), these Gaussian models were checked, corrected and/or computed by interpolation from the neighbouring models. Missing models were computed (see D4 in Fig. 4e).
[Figure 3 panels: a. anatomical image with the partitioning grid (columns A-D, rows 1-5), showing the GM cortex, the GM putamen, CSF in the sulci and WM; b. under-segmented image with CSF, GM, WM and GM/WM partial volume labels; c. final segmentation; d-g. local histograms with estimated Gaussian models over the grid.]

Figure 3. a. MR anatomical image and the partitioning grid. b. Under-segmented image obtained at the end of the initial phase. c. Final segmentation image. d-g. Local histograms and estimated Gaussian models during the incremental interpretation process, starting with d (initial Gaussian estimation) and ending with g (re-evaluation in the final phase).

False models were corrected (the putamen is a small structure, thus the GM model computed corresponds to cortical GM, see C3 in Fig. 4e). These models were used to compute an under-segmented image (see Fig. 3b). Note that at this step the putamen remained unlabelled, because its intensity was intermediate between the cortical GM and WM models. This under-segmentation was used to re-estimate the Gaussian models (Fig. 3f). Some models were refined in this way (see WM in D4 and GM in C3 in Fig. 4f). Once again (see Fig. 3g), the resulting models were checked, missing

ones were computed (see CSF in D4 and C3 in Fig. 4g) and used to label the remaining voxels (see the final segmentation in Fig. 3c). Additional labels corresponding to the WM-GM and CSF-GM partial volume effects were added during the final labelling phase. Most of the voxels belonging to the putamen structure were labelled as WM-GM partial volume voxels.
[Figure 4 panels: for cube C3, histograms (voxels vs. grey levels) across steps d-g show the CSF, GM and WM models and the putamen intensity peak; for cube D4, no model is computed at step d, then the WM and GM models appear and are refined across steps e-g.]

Figure 4. Zooms, for cubes C3 and D4, of the local histograms and estimated Gaussian mixtures during the incremental interpretation process (steps d-g).

4. Discussion and Perspectives

In this paper, we propose a framework based on situated and cooperative agents for the management of the information processing steps required for image interpretation. Its application to MRI brain scan interpretation has been reported. Several generic principles have driven the design of our framework. Each agent is rooted in a three-dimensional space, situated at a given position in the environment, with a given goal. It works locally, diffuses its partial results to its acquaintances (for instance, agents dedicated to the same tissue in neighbouring regions), shares results via specific maps and coordinates its actions with other agents to reach a global common goal. On various realistic brain phantoms, we obtained results (about 84% true positives) comparable to other optimal methods, which rely on MRF models and include a bias field correction map, with a lower computational burden (less than 5 min to segment a complete volume) (see [4]). The present evaluation was performed on real MRI scans at 1.5 T. Our strategy for MRI brain scan interpretation is based on the partition of the image volume and on the introduction of local modelling mechanisms (similarly to [3, 5]) to take the bias field into account without introducing an explicit bias field map. As shown by the results in Figure 2, this allows for tissue intensity distribution estimation at different localizations in the image, despite large intensity variations within the same tissue. Our local approach implies the use of mechanisms for information diffusion, as confirmed by the results shown in Figures 3 and 4. Missing or non-optimal tissue models are defined or refined in this way. Because of the gradual refinement, the quality of the estimator is not critical. The fusion of several qualitative maps gathering gradually acquired knowledge clearly improves the final decision. A bias field map estimation may advantageously be inserted in our system to correct the residual intra-partition bias field. Refinement of the results could also be obtained by the insertion of anatomical knowledge. For this purpose, new low level maps can be computed, using for instance mathematical morphology operators, and interpreted using a particular model to obtain a specific structure probability map. Symbolic knowledge about structure geometry and location can also be introduced to compute rough structure probability maps from previously obtained decision maps concerning other structures in spatial relationship with the structure to detect. A model of the object to detect, for instance the sulci, obtained with an active shape model, can be inserted and deformed to fit the specificity of a new object. Knowledge derived from an atlas, where structures are identified on a reference grey level map, can also be introduced, the reference being deformed to fit the grey level map to be interpreted. The framework we report is open: the previously cited models and maps can be inserted to improve the radiometric-based approach we have described. To conclude, based on our experiments with phantoms [4] and real MRI brain scans, situated and cooperative agents appear to be an interesting framework for combining the several information processing steps that are required for image interpretation.

References
[1] Ashburner, J., Friston, K.: Multimodal image coregistration and partitioning - a unified framework. NeuroImage 6 (1997) 209-217.
[2] Germond, L., Dojat, M., Taylor, C., Garbay, C.: A cooperative framework for segmentation of MRI brain scans. Artif. Intell. in Med. 20 (2000) 277-294.
[3] Joshi, M., Cui, J., Doolittle, K., Joshi, S., Van Essen, D., Wang, L., Miller, M.I.: Brain segmentation and the generation of cortical surfaces. NeuroImage 9 (1999) 461-476.
[4] Richard, N., Dojat, M., Garbay, C.: Situated cooperative agents: a powerful paradigm for MRI brain scans segmentation. In: Van Harmelen, F. (ed.): ECAI 2002, Proceedings of the European Conference on Artificial Intelligence (21-26 July 2002, Lyon, France). IOS Press, Amsterdam (2002) 33-37.
[5] Shattuck, D.W., Sandor-Leahy, S.R., Schaper, K.A., Rottenberg, D.A., Leahy, R.M.: Magnetic resonance image tissue classification using a partial volume model. NeuroImage 13 (2001) 856-876.
[6] Teo, P.C., Sapiro, G., Wandell, B.A.: Creating connected representations of cortical gray matter for functional MRI visualization. IEEE Trans. Med. Imag. 16 (1997) 852-863.
[7] Van Leemput, K., Maes, F., Vandermeulen, D., Suetens, P.: Automated model-based tissue classification of MR images of the brain. IEEE Trans. Med. Imag. 18 (1999) 897-908.
[8] Zhang, Y., Brady, M., Smith, S.: Segmentation of brain MR images through a hidden Markov random field model and the expectation-maximisation algorithm. IEEE Trans. Med. Imag. 20 (2001) 45-57.
