
Information Fusion 15 (2014) 114–129

Contents lists available at SciVerse ScienceDirect

Information Fusion
journal homepage: www.elsevier.com/locate/inffus

A localized algorithm for Structural Health Monitoring using wireless sensor networks
Igor Leo dos Santos a,*, Luci Pirmez a, Érico T. Lemos a, Flávia C. Delicato b, Luiz A. Vaz Pinto c, J. Neuman de Souza d, Albert Y. Zomaya e
a iNCE, Universidade Federal do Rio de Janeiro (UFRJ), CEP 21941-916, Cidade Universitária, Rio de Janeiro, RJ, Brazil
b DCC, Universidade Federal do Rio de Janeiro (UFRJ), CEP 21941-916, Cidade Universitária, Rio de Janeiro, RJ, Brazil
c Departamento de Engenharia Naval e Oceânica, Universidade Federal do Rio de Janeiro (UFRJ), CEP 21941-909, Cidade Universitária, Rio de Janeiro, RJ, Brazil
d Universidade Federal do Ceará (UFC), CEP 60455-760, Campus do Pici, Fortaleza, CE, Brazil
e Advanced Networks Research Group, School of Information Technologies, The University of Sydney, NSW 2006, Australia
article info

Article history:
Received 25 April 2010
Received in revised form 2 February 2012
Accepted 7 February 2012
Available online 16 February 2012

Keywords:
Wireless sensor networks
Structural Health Monitoring
Damage localization
Localized algorithm
Information fusion
Resource constrained networks

abstract

Structural Health Monitoring (SHM) has proven to be a suitable application domain for wireless sensor networks; its techniques attempt to autonomously evaluate the integrity of structures, occasionally aiming at detecting and localizing damage. In this paper, we propose a localized algorithm, supported by multilevel information fusion techniques, to enable the detection, localization and extent determination of damage sites in the resource-constrained environment of a wireless sensor network. Each node partakes in different network tasks and has a localized view of the whole situation, so collaboration mechanisms and multilevel information fusion techniques are key components of this proposal for efficiently achieving its goal. Experimental results with the MICAz mote platform showed that the algorithm performs well in terms of network resource utilization.

© 2012 Elsevier B.V. All rights reserved.

1. Introduction

Recently, there has been much interest in the use of WSNs [1] in the fields of exploration and distribution in the oil and gas industry, as well as in the renewable energy sector, particularly in wind farms, for the purpose of Structural Health Monitoring (SHM) [2]. The monitoring of physical structures enables damage prediction (fractures) and, therefore, the anticipation of repairs, thus avoiding accidents. In applications built for that purpose, sensor nodes are used to perform measurements of the structure, which is affected by external events, delivering such measurements to a data collection station, the sink node. In this context, WSNs enable the remote monitoring of structures to determine physical integrity through in situ data collection and processing. This work proposes a localized algorithm, called Sensor-SHM, to detect, localize and indicate the extent of damage on structures belonging to environments such as the offshore oil and gas industry and wind farms, making use of WSNs for an SHM system. The topology
* Corresponding author.
E-mail addresses: igorlsantos@gmail.com (I.L. dos Santos), luci.pirmez@gmail.com (L. Pirmez), ericotl@gmail.com (É.T. Lemos), fdelicato@gmail.com (F.C. Delicato), vaz@peno.coppe.ufrj.br (L.A. Vaz Pinto), neuman.souza@gmail.com (J.N. de Souza), zomaya@it.usyd.edu.au (A.Y. Zomaya).
1566-2535/$ - see front matter © 2012 Elsevier B.V. All rights reserved. doi:10.1016/j.inffus.2012.02.002

of the WSN is assumed to be hierarchical, where sensors are grouped into clusters and each cluster is managed by a cluster-head (CH). The key idea of our work is to fully distribute the procedure associated with the task of monitoring a structure among the sensor nodes in a WSN, so that through collaboration among the CHs it is possible to detect, localize and determine the extent of damage. Unlike other approaches [3,4], all the SHM processing of our proposal runs inside the network (in-network processing) without any help from the sink node. When distributing the SHM processing inside the network, our work takes strong advantage of information fusion techniques, whose immediate benefit is the reduction in the amount of data to be transmitted back to the sink node for further analysis. Consequently, less energy is spent on transmissions, freeing communication and energy resources for performing analysis and making decisions within the network. In our proposal, we adopt the terminology reviewed by Nakamura et al. [5] regarding information fusion techniques applied to WSNs. Such terminology was originally proposed by Dasarathy [6] in his Data–Feature–Decision (DFD) model. The terminology distinguishes three abstraction levels of the data manipulated during the fusion process: measurement, feature and decision. The whole process considered in our proposed algorithm can be classified as Multilevel Fusion, since it acts at all three data abstraction levels.

This work builds on our previous work [7], introducing several enhancements to it. The main contributions of our previous work are: (i) we introduced the core of the algorithm, though without a specific foundation that could help to better understand its operation, and (ii) we performed several experiments to evaluate the algorithm's consumption of the network's energy resources. We augment that previous work with the following main contributions, discussed in this paper: (i) we provide a foundation for the proposed algorithm that relies on information fusion theory, (ii) we performed several new experiments concerning the precision of our damage localization mechanism, and (iii) we provide a comprehensive analysis of the use of communication resources in the algorithm. The remainder of this paper is organized as follows. Section 2 presents an overview of the Sensor-SHM algorithm. Section 3 discusses related work. Section 4 presents a motivational example based on the practical experiment seen in [8] and discusses the applicability of the algorithm. Section 5 presents the algorithm, discussing and detailing its procedures. Section 6 details the experiments performed to evaluate the Sensor-SHM algorithm and the obtained results. Finally, Section 7 concludes this work.

2. Overview of the algorithm

The diagram in Fig. 1 presents an overview of the proposed algorithm, Sensor-SHM, and its procedures. In this diagram, the roles of sensor and CH nodes are summarized, and the setup procedure and data collection stages are presented separately. After the execution of a setup procedure (Procedures 0–4), each sensor node acts at the data abstraction level named measurement, delivering the first useful features to its respective CH. Each sensor is responsible for sensing the structure during a data collection stage (Procedures 5–16), which is started by the sink node through the transmission of messages to the CHs, which in turn request the sensing of the structure by the sensors in their respective clusters (Procedures 5 and 6). Then, each sensor node collects the acceleration measurements in the time domain, relative to its physical position (Procedure 7). After that, a Fast Fourier Transform (FFT) is performed by each sensor over the collected acceleration signals (Procedure 8). Such a transformation corresponds to the information fusion technique classified as Data In–Data Out (DAI–DAO). Next, a method for extracting frequency values from the peaks of the power spectrum generated by the FFT is used (Procedure 9), which can be composed of a moving average filter (another example of a DAI–DAO information fusion technique, where the input is the power spectrum and the output is the smoothed power spectrum) and the peak extraction algorithm itself, applied on the smoothed

power spectrum. This peak extraction algorithm is an information fusion technique classified as Data In–Feature Out (DAI–FEO), since the extracted peaks are the first features considered useful to describe the structural health state, and they can be efficiently manipulated among the sensors. The frequency values obtained in each sensor refer to the first peaks of the power spectrum returned by the FFT, and make up the signature of the structure. It is important to mention that, for each sensor, the initial signature of the structure is obtained at its current position at the beginning of the structure's operation, i.e., at time zero, and is transmitted by the sensor to its CH during the network setup procedure (Procedure 3). This signature is used as a reference for the undamaged structure. At later stages, each CH also receives the subsequent signatures of the structure from all the sensors in its respective cluster (Procedure 10). CHs are responsible for performing the damage detection and determining the damage location and extent through the calculation and analysis of damage coefficients (Procedures 11–14). The CH, after collecting the signatures from all sensor nodes of its cluster, performs a comparison (considering a given tolerance degree) between these values and the respective initial signatures from the respective sensors, to check whether the structure is damaged or has been temporarily changed due to some external event. At this point, the CH starts its own sequence of information fusion procedures, acting at two levels of data abstraction (feature and decision). The presence of damage in a structure can affect both higher and lower frequencies at a given sensor location, depending, respectively, on whether the sensor is located close to the damage or not [3].
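The sensor-side pipeline just described (Procedures 7–9: FFT, moving-average smoothing and peak extraction) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function and parameter names are our own, and a desktop NumPy FFT stands in for the mote's on-board FFT routine.

```python
import numpy as np

def extract_signature(accel, fs=1000.0, n_peaks=5, window=5):
    """Illustrative sensor-side pipeline: FFT (DAI-DAO), moving-average
    smoothing (DAI-DAO) and peak extraction (DAI-FEO)."""
    # DAI-DAO: time-domain acceleration record -> power spectrum
    power = np.abs(np.fft.rfft(accel)) ** 2
    freqs = np.fft.rfftfreq(len(accel), d=1.0 / fs)
    # DAI-DAO: moving-average filter smooths the power spectrum
    smooth = np.convolve(power, np.ones(window) / window, mode="same")
    # DAI-FEO: local maxima of the smoothed spectrum (DC bin excluded);
    # keep the strongest n_peaks, reported in increasing frequency order
    maxima = [k for k in range(1, len(smooth) - 1)
              if smooth[k] > smooth[k - 1] and smooth[k] >= smooth[k + 1]]
    maxima.sort(key=lambda k: smooth[k], reverse=True)
    return sorted(freqs[k] for k in maxima[:n_peaks])  # signature x_{i,t}

# Synthetic two-mode record: 512 samples at 1.0 kHz, modes at 50 and 120 Hz
t = np.arange(512) / 1000.0
accel = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)
print(extract_signature(accel, n_peaks=2))
```

With the 512-sample, 1.0 kHz configuration used in the paper's experiments, the bin spacing is about 1.95 Hz, so the extracted peaks land within a couple of hertz of the true modal frequencies; this resolution limit is the motivation for criterion (i) on the number of samples in Section 5.2.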
Knowing that changes in the frequencies of the higher vibration modes mean changes in local vibration modes, each CH analyzes the signatures of the sensors located in its cluster in search of changes in these frequencies. In each CH, and for each data collection stage, this analysis is performed with the help of a damage indicator coefficient (Di,t). The value of Di,t indicates how close a given sensor location is to the damage site. This first damage coefficient is the result of applying a Feature In–Feature Out (FEI–FEO) information fusion technique. A second damage indicator coefficient for the cluster (Cj,t), which depends on the Di,t coefficients obtained for each sensor of the cluster, is set to indicate how close to the damage the cluster is as a whole. In this last technique, a Feature In–Decision Out (FEI–DEO) information fusion technique, each CH node compares its cluster damage coefficient with a tolerance, which is set differently for each CH depending on the specific features of the place where the cluster was installed. When the cluster damage coefficient exceeds the tolerance, the CH node sends a message stating the value of its coefficient to its immediate (single-hop) neighbor CHs (Procedures 13 and 14). In

Fig. 1. Overview of the proposed algorithm and its procedures.

a given neighborhood, the CH with the highest value of the cluster damage coefficient assumes the role of a collector, responsible for (i) issuing a warning and (ii) triggering a relay acting on the environment around it, aiming to prevent the damage progression from causing further problems in the locality (Procedures 15 and 16). When using the highest cluster damage coefficient among the CHs, an aggregation method based on a maximum aggregation function is performed to define the collector. The collector gathers the decisions of each CH in its neighborhood to make its final decision. This final collaboration among CHs can therefore be classified as a Decision In–Decision Out (DEI–DEO) information fusion technique, since each CH has its own local decision on assuming damage or not, but it is allowed to consider the decisions of its local neighbors to present a more reliable decision. The joint use of all the processes mentioned in our proposed algorithm, each one acting at one or more levels of information fusion, is therefore the reason to classify it as Multilevel Fusion.
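The two CH-side decision steps above can be sketched as follows. The CH identifiers, coefficient values and the None-for-no-report convention are our own illustrative assumptions; only the tolerance test (FEI–DEO) and the max-based collector election (DEI–DEO) come from the text.

```python
def exceeds_tolerance(c_jt, tolerance):
    # FEI-DEO: a CH turns its cluster damage coefficient C_{j,t} into a
    # local damage decision by comparing it with a per-cluster tolerance
    return c_jt > tolerance

def elect_collector(reported):
    """DEI-DEO: among the CHs of a neighborhood that broadcast their
    coefficients, the one with the maximum C_{j,t} becomes the collector
    (a maximum aggregation function). `reported` maps CH id -> C_{j,t},
    with None meaning that the CH sent no damage report."""
    candidates = {ch: c for ch, c in reported.items() if c is not None}
    return max(candidates, key=candidates.get) if candidates else None

# Hypothetical single-hop neighborhood of four CHs
neighborhood = {"CH1": 0.8, "CH2": 2.4, "CH3": None, "CH4": 1.1}
assert exceeds_tolerance(2.4, tolerance=1.0)
print(elect_collector(neighborhood))  # CH2: highest coefficient, collector
```

Because every CH in the neighborhood applies the same maximum aggregation over the same exchanged coefficients, all of them agree on the collector without any further coordination messages.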

3. Related work

The related works presented in this section are classified based on the concept of generations of WSNs for SHM, which is an expansion of the concept of a first-generation wireless structural monitoring system used in [9]. We state that the first generation of sensor networks for SHM refers to wired devices, while the second generation refers to wireless devices. The second generation is further divided into two groups: the Centralized Generation of WSNs for SHM and the Decentralized Generation of WSNs for SHM. These two groups differ with respect to the degree of decentralization and in-network processing presented by their works. The Centralized Generation of WSNs for SHM is characterized by centralized proposals, with little in-network processing aimed at damage characterization. Most of the in-network processing occurs at the measurement level of information fusion, for reliable and efficient raw data transport. Works which fit this classification include, for instance, [3,8–10]. The Decentralized Generation of WSNs for SHM is characterized by some degree of decentralization. Some of these works aim at the calculation of damage indexes which can be efficiently transmitted over the wireless network, and the calculation of the damage indexes is performed over the raw acceleration data. Also, few levels of information fusion are presented in these works. Our work pertains to this generation as a fully decentralized proposal. Caffrey et al. [11] present an algorithm for SHM using WSNs, which we classify into the second generation of WSNs for SHM. The sensing technique in that work uses electrodynamic shakers to generate vibrations on the structure, and accelerometers in the sensor nodes to collect data for a few seconds, in order to capture these vibrations.
In order to perform a structural analysis, a Fast Fourier Transform (FFT) is applied over the acceleration data collected in each sensor node, converting the time-domain signal into a frequency-domain signal. Then, the power spectrum is analyzed and the frequencies of the structural modes of vibration, whose values correspond to the energy peaks of the spectrum, are extracted. Next, one sensor node is elected among all the sensors to be responsible for obtaining a more accurate result by aggregating all the measures of modal frequencies, and their associated energies, extracted from all the network nodes. Finally, this aggregated result is sent to the sink, where the frequency variation analysis can be done. Our proposal differs from this one since the frequency variation analysis is performed within the network, with the collaboration of the CHs. Chintalapudi et al. [4] present three methods to detect damage in structures. The first one is a technique that performs data

reduction within the network, and is based on a time series of acceleration signals. In this method, the response of a structure is modeled using linear auto-regressive (AR) or auto-regressive moving average (ARMA) time series. Damage is detected by a significant variation in the AR/ARMA coefficients relative to the coefficients of the intact (healthy) structure. Each sensor node can locally compute these coefficients and forward them, instead of requiring that all sensors transmit their collected acceleration data. The second method is a widely accepted methodology for damage detection in structures that makes use of the variation of the structure's signature. This variation is observed by comparing signatures obtained when the structure is sound with signatures obtained when the structure is damaged. Practical results of this methodology are found in [12,13]. The last method makes use of a neural network to detect the possibility of damage. Most studies found in the literature, including our proposal, are inspired by the second method. Unlike Messina et al. [13], in which analytical values of the structure's natural frequencies taken from a finite element model are used, the system proposed in our work uses frequency values taken when the structure is sound. Another difference is that our algorithm runs on the sensors, while the algorithm called Damage Location Assurance Criterion (DLAC) by Messina et al. [13] runs on the sink node. Hackmann et al. [14] make use of a WSN to monitor structural conditions. The proposed partially decentralized algorithm is used along with the DLAC method, allowing the sensor nodes to act on the collected data, significantly reducing energy consumption since it minimizes the number of transmissions needed. In the algorithm, the data is collected and partially processed in-network by the sensor nodes.
In the sink node, the frequency values that compose the signature of the structure are extracted by solving a mathematical equation expressing a curve that fits the resulting power spectrum. Afterwards, the DLAC algorithm runs on the sink, taking data from two sources as input to detect and locate damage: (i) the sensed data relative to the structural frequency response and (ii) data relative to the responses of an analytical model to the same scenario. The analytical model is developed through finite element modeling. It is important to note that damage detection and localization through the DLAC method is still centrally held, in the sink node. The algorithm for damage detection proposed in our work is mainly inspired by the work presented in [14]. One of the differences between our proposal and that work is that in our solution the whole procedure of extracting the frequency values from the power spectrum is performed on the sensors, while in the related work it is conducted by the Curve Fitting stage. Unlike other algorithms proposed in the literature, all the SHM processing of our algorithm is performed on the sensor and CH nodes, without the help of the sink node. Moreover, through collaboration among CH nodes, it is possible to detect, localize, and determine the extent of damage. The widespread use of damage coefficients is also noticeable in the literature. In general, the numerical values of these coefficients are extracted from raw data through information fusion techniques, and their values indicate how intense a damage occurrence is, if it exists, or how close a site is to the damage site. Wang et al. [15] analyze the design requirements of WSN-based SHM applications and discuss the related challenges in distributed processing, presenting two algorithms. The first algorithm, distributed damage index detection (DDID), is a distributed version of a centralized algorithm previously proposed by the same authors.
The second algorithm, collaborative damage event detection (CDED), uses the results provided by the DDID algorithm and aims to improve the reliability and accuracy of the damage report by exchanging data among the sensor nodes. By using the DDID algorithm, every sensor node inspects the raw data and determines the damage candidates. Once the

candidates are captured, the sensor nodes in a group use the CDED algorithm to cooperatively create the damage report containing the information about the location, scale, and index of the damage. CDED and DDID are complementary algorithms in which information fusion is performed at different abstraction levels. In DDID, the measurement and feature levels are more evident, whilst in CDED the decision level is more evident, which characterizes multilevel data fusion as a whole. The sensors are clustered in this related work, and the network is comprised of two tiers, one containing the sensor nodes and a second one containing the master nodes. Moreover, two main issues are pointed out: (i) how much information should still be sent back to the server in order to ensure energy efficiency and precise results, and (ii) how to automate the decision on the existence, localization, and extent of damage, which is originally made through human interference. Our proposal is also based on a multilevel information fusion technique to generate damage indexes, and we have also adopted a hierarchical network organization. Both issues (i) and (ii) are also dealt with in our work. So, the main points in which our work differs from the proposal of [15] are: (i) the way in which the damage coefficient is calculated, since our damage coefficient is based on the analysis of modal frequencies, a feature not exploited in DDID, which makes use of raw acceleration signals; (ii) our algorithm proposes more information fusion operations at different levels to calculate the damage coefficients; and (iii) in our proposal, the meaning of a cluster is closely related to SHM issues and its behavior is different. The centralized proposals are generally represented by wired networks with no in-network processing for assessing structural integrity.
The common approach of such solutions is completely opposite to the concept of network-centric solutions, and the presence of wires forbids large-scale deployments due to technical and economic issues. Both generations represented by WSNs present increasing levels of data compression, aggregation and fusion, with more distributed processing aimed at characterizing the physical integrity of the structures. Therefore, the reason to use a decentralized approach is to achieve a longer lifetime for a wireless system which can monitor a structure more flexibly than a wired system. The reduction in the need for transmissions in a decentralized approach is the main cause of the reduction in energy consumption. Nevertheless, the decentralized approach also poses new challenges. The resource-constrained environment of WSNs forbids the extensive use of computational resources. The lack of memory and processing power generally imposes restrictions and trade-offs which may negatively affect the precision and accuracy of a decentralized proposal. Also, the energy constraints impose the need to schedule the amount of sampling tasks, or the frequency of the monitoring cycles, very carefully.

by an impulse load generated by striking the fourth floor with a modally tuned impact hammer. The results were also recorded by wired sensors and compared to the results of the MICAz platform, showing that the WSN platform performed well in comparison to the wired nodes. In the experimental scenario, the motes were less than 1 m from each other, on average. The algorithm proposed in our work is applicable to this scenario, as well as to many others. After presenting several evaluations of the performance of our algorithm, in Section 6.8 we present and detail a procedure to deploy a wireless sensor network running Sensor-SHM. The procedure is based on the real experiment described in [8]. The scenario parameters were used to simulate the behavior of our algorithm analytically.

5. The proposed localized algorithm: Sensor-SHM

Since Sensor-SHM is a distributed algorithm in which each node of the network has only a partial view of the global situation, and nodes collaborate by sharing their views to achieve the final goal, it can be classified as a localized algorithm [16]. Moreover, Sensor-SHM can be classified as a multilevel information fusion algorithm, since it encompasses a sequence of information fusion procedures, each of which acts at one or more of the three data abstraction levels. The description of Sensor-SHM is divided into two main procedures. Initially, a setup procedure is performed, which consists of setting the algorithm's initial parameters before the application and network deployment, i.e., before installing the program on the sensors and allocating the sensors to their fixed positions on the structure to be monitored. The second procedure consists of the algorithm's operation cycle. In our algorithm we consider a hierarchical network topology comprised of two layers.
The lower layer contains sensor nodes, organized into clusters, which are in charge of performing information fusion techniques on data at the measurement and feature abstraction levels. The higher layer contains CHs, in charge of applying information fusion techniques to data at the feature and decision abstraction levels. So, a cluster of sensors, composed of a CH and its subordinated sensor nodes, is considered the basic sensing unit in the network. Sensor nodes perform sensing tasks only, while CHs do not perform sensing tasks and are responsible for coordinating the communication and processing inside their respective clusters.

5.1. Assumptions on the cluster formation

Before describing the Sensor-SHM procedures in detail, it is important to mention that we assume that the WSN clusters are already formed before the algorithm's operation. The process of cluster formation and CH selection can be performed by a clustering protocol, or by manual setting (directly in the images installed on the nodes). Sensor-SHM's proper operation depends on the result of the network clustering process, whose outcomes the algorithm uses to feed the arrays of sensors for each CH (detailed below). Sensor-SHM is agnostic to any specific clustering protocol. However, the criterion used to create clusters needs to be strongly related to the needs of the SHM application. The number of clusters should preferably be defined according to the number of structural elements which we want to sense. The number of sensors per cluster denotes the amount of redundant data around one and the same structural element (which affects sensing accuracy and precision). CHs are vital for finding damage, so the number of CHs is defined by the number of clusters (one CH per cluster), and their neighborhood should be set according to the structural properties of the surrounding structural elements. For this reason, in spite of the fact that the Sensor-SHM algorithm is agnostic to the underlying clustering

4. Motivational example and the applicability of the algorithm

In order to illustrate the applicability of the proposed algorithm, an example of a real application described in Clayton et al. [8] was chosen. This application is performed on a test structure in a controlled laboratory environment, which corresponds to the majority of the application scenarios adopted in the literature. The experiment described in the work of Clayton et al. was performed to validate the performance of MICAz motes for damage detection and localization. The test structure considered consisted of a five-bay lumped-mass shear building model. MICAz motes were installed on floors 1, 2, 4 and 5, and recorded acceleration in the x-axis direction. Damage was simulated through the reduction of the inter-storey column stiffness, by exchanging the original columns for columns with a lower moment of inertia. The structure was excited

protocol, when choosing one such protocol for use along with Sensor-SHM, the best choice may be to adopt a semantic clustering algorithm [17], which considers the semantic relationships among the nodes' positions and the sensing area. Choosing a clustering protocol like LEACH [18,19] is not a very suitable choice, since the random election of CHs may ignore the semantic correlation among the sensor nodes and their positions in the monitored structure. Also, in LEACH, the number of sensors in each cluster is chosen to minimize the distance between sensors and CHs, an approach that may assign semantically correlated sensors to different clusters, causing the Cj,t indexes to aggregate data from different semantic origins, i.e., different structural elements. Choosing a protocol such as SKATER [20] is a more suitable choice, although it still presents drawbacks. The SKATER protocol assumes a network organization focused on data correlation among nodes. Therefore, the correlation among sensor nodes and their positions is maintained indirectly, since the data collected by each sensor is directly correlated with the position of the sensor. However, the SKATER algorithm still provides for a dynamic re-election of CHs, which may impose changes in the scenario used for analysis. In SHM applications, any change in the spots chosen for sensing may yield an unreliable analysis, unless the semantic relationships between sensors and their positions are strictly respected. The adoption of such protocols may impose a deep reformulation of the Sensor-SHM algorithm to deal with such issues, but may also provide a longer network lifetime. For this reason, the formulation of semantic clustering protocols may be an important contribution to the performance of the Sensor-SHM algorithm. Nevertheless, clustering and multi-hop communication protocols may be freely chosen to improve the communication capabilities of the sensors only.
The coexistence of clusters for fulfilling the application requirements (Sensor-SHM clusters) and clusters for performing reliable and efficient data transportation only may result in a general performance increase, at the expense of energy.

5.2. Setup procedure

In Sensor-SHM the structural monitoring is performed on a periodic basis, and each monitoring cycle is based on a collected signature sample. Since the data collection (measurement level) is the operation which takes the longest time to complete during the operation cycle of the algorithm, the operation cycle of the algorithm is referred to by its respective data collection stage. A data collection stage is identified by an integer t, which is incremented by one for each performed data collection stage. These stages start at a given time, defined by the sink node. As an example, in our experiments the collection period, which represents the duration of each data collection stage, is defined as being long enough to collect 512 acceleration samples at a sampling rate of 1.0 kHz, resulting in a collection period which lasts for about 500 ms. These numbers are not fixed in the algorithm; they can be defined for each application. In general, the number of collected samples is determined by the following criteria: (i) it must be enough to ensure a good resolution in the power spectrum that will be returned, which implies better precision in the determination of the modal frequencies, (ii) it must be a power of 2, since this is a requirement for the input of the FFT algorithm, and (iii) it shall not exceed the sensors' storage capacity (Flash memory). The sampling rate is set according to the following criteria: (i) it must be greater than the value of the first modal frequencies of interest so that these appear in the power spectrum (commonly, values below 200 Hz are expected for the first five modal frequencies of structures), (ii) it must be high enough to ensure accuracy, and (iii) it should be at least twice the highest modal frequency of interest, to meet the Nyquist criterion. In [3] a sampling rate of 1.0 kHz is used, with

hardware similar to the one used in our work, showing that it is possible to achieve this sampling rate in a practical situation. To ensure synchronization among the data collected by each sensor, each data collection stage should start at the same time on all sensors, so that there is meaning in the comparison of signatures of this stage with those of earlier stages, and in the comparison of signatures from different sensors. The synchronization problem is reported and addressed in [3,9]. A feasible network implementation to deal with the synchronization issue is to let the sink node be responsible for sending a message to the whole network scheduling the starting time of the next data collection stage, assuming that all sensors have their internal clocks synchronized. Other parameters, such as the array of CH neighbors and the array of sensors that are part of a cluster, must be set. The array of CH neighbors informs each CH of who its neighbors are. Neighbors of a CH are CHs from other clusters which are allowed to communicate with each other in order to accomplish the tasks related to damage localization and extent determination. For each CH, the array of sensors informs which sensors are subordinated to it. When using both arrays, all the necessary communications can be easily established. The sets of existing sensors and CHs are properly defined as a collection J of Z CHs, where each CH is identified by j = {1, 2, …, Z} and has a subset of subordinated sensors. All the subsets of sensors subordinated to each CH are part of another collection, defined by I, that includes all the N sensors in the network, where each sensor is identified by i = {1, 2, …, N}. Constants like Ti, Lj and Ai must also be set before the deployment, although they can be changed during the network operation. The definitions of these constants and their use are explained in the following subsections. These constants are stored in the CHs.
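The sample-count and sampling-rate criteria above can be captured in a small validation helper. The function name and the Flash budget are our own assumptions; the power-of-two, storage and Nyquist checks follow the criteria stated in the text.

```python
def validate_collection_params(n_samples, fs_hz, f_max_hz, flash_limit=4096):
    """Checks the setup criteria: sample count a power of 2 (an FFT input
    requirement) and within Flash storage, and a sampling rate of at least
    twice the highest modal frequency of interest (Nyquist criterion).
    flash_limit (in samples) is an assumed budget, not a figure from the
    paper. Returns the duration of one collection period, in seconds."""
    assert n_samples > 0 and n_samples & (n_samples - 1) == 0, \
        "sample count must be a power of 2 for the FFT"
    assert n_samples <= flash_limit, "exceeds the sensor's Flash capacity"
    assert fs_hz >= 2.0 * f_max_hz, "violates the Nyquist criterion"
    return n_samples / fs_hz

# The example from the text: 512 samples at 1.0 kHz, modes below 200 Hz
print(validate_collection_params(512, 1000.0, f_max_hz=200.0))  # 0.512
```

The returned 0.512 s matches the collection period of "about 500 ms" quoted above for the experimental configuration.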
After all these settings, the network can be physically deployed over the sensing area (the chosen structure), and the nodes can be installed in their fixed positions. As part of the initialization process of the sensor nodes, they collect the initial signature of the structure, xi,0. It is assumed that at this time the structure is at the beginning of its operation. Thus, each sensor generates its xi,0 vector and transmits it to its CH. These initial values will be used as a reference for the undamaged (healthy) structure. Then, sensors enter sleep mode and wait for the next data collection stage, which will be identified by a value of t = 1.

5.3. Data collection stages from the sensors' viewpoint: measurement and feature levels

The data collection stage starts at a given time, as requested by the sink node. A message is sent from the sink to the CHs, which are responsible for sending messages to schedule the next sensing task on their subordinated sensors. Then, when the specified time comes, the sensors start collecting data. So, at every data collection stage t, each sensor node collects the acceleration data in the time domain at its relative position and performs an FFT on the respective collected signals, our first DAI-DAO information fusion. Then a simple method is applied to extract the modal frequencies from the power spectrum generated in the previous step, which can be considered a DAI-FEO information fusion. This method has the same goal as the Polynomial Curve Fitting seen in [14], but is much less expensive in terms of energy, and can be fully implemented within the sensors. However, it is less accurate, leading to more errors. This lack of accuracy can be balanced by increasing the number of sensors in each cluster, generating more redundant data.
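The per-sensor chain described above (FFT, then peak extraction) can be sketched as follows. This is an illustrative NumPy version, not the fixed-point code that runs on the motes, and the peak rule used here (the m strongest local maxima of the magnitude spectrum, reported in ascending frequency order) is our assumption about what a "simple method" may look like.

```python
import numpy as np

def extract_modal_frequencies(signal, fs_hz, m=5):
    """DAI-DAO: FFT of the window; DAI-FEO: pick m peak frequencies."""
    spectrum = np.abs(np.fft.rfft(signal))               # magnitude spectrum
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs_hz)  # bin frequencies
    # local maxima of the magnitude spectrum
    maxima = [k for k in range(1, len(spectrum) - 1)
              if spectrum[k] > spectrum[k - 1] and spectrum[k] >= spectrum[k + 1]]
    # keep the m strongest peaks, reported in ascending frequency order
    strongest = sorted(maxima, key=lambda k: spectrum[k], reverse=True)[:m]
    return sorted(float(freqs[k]) for k in strongest)

# Five summed sinusoids, as used later in the experiments of Section 6
fs, n = 1000.0, 512
x = np.arange(n) / fs
signal = sum(0.25 * np.sin(2 * np.pi * f * x) for f in (20, 40, 60, 80, 100))
x_it = extract_modal_frequencies(signal, fs)   # estimates near 20..100 Hz
```

The estimates land within roughly one spectral bin (~2 Hz) of the true frequencies, which matches the standard error reported for the experiments in Section 6.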
The frequency values extracted from the power spectrum generated by the FFT, assuming no noise interference, are related to the modal frequencies of the structure, and are the first useful features over which the CHs can take decisions. Formalizing, xi,t is a signature of a given structure, acquired at a data collection stage


t, and represented by a vector of M sensed frequencies in a sensor identified by i. Therefore, different sensors get different values of signatures for the structure, depending on their location and the instant of time at which the data collection stage started. For the first five modal frequencies (M = 5), the vector has the form described in Eq. (1):

x_{i,t} = [x^1_{i,t}, x^2_{i,t}, x^3_{i,t}, x^4_{i,t}, x^5_{i,t}]^T    (1)

Finally, all vectors generated by each sensor at each data collection stage t are sent to their respective CHs. This final step at the feature data abstraction level signals the end of the node activity for this data collection stage, and the node goes into sleep mode.

5.4. Data collection stages from the CHs' viewpoint: feature and decision levels

The CHs are responsible for performing the following steps: damage detection and determination of damage location and extent.

5.4.1. Damage detection

The CH is responsible for comparing the xi,0 vectors from all its subordinated nodes and the subsequent xi,t vectors, generated in a similar way as xi,0 in the further data collection stages. The verification of change in the modal frequencies of a structure is performed by comparing the xi,0 and xi,t vectors. The comparison is done using the absolute value of the difference between xi,0 and xi,t, and the result is stored in the Δxi,t vector, as seen in Eq. (2):

Δx_{i,t} = |x_{i,0} − x_{i,t}| = [Δx^1_{i,t}, Δx^2_{i,t}, Δx^3_{i,t}, Δx^4_{i,t}, Δx^5_{i,t}]^T, where Δx^m_{i,t} = |x^m_{i,0} − x^m_{i,t}|    (2)

For Δxi,t values different from the null vector, the CH can assume, considering a given Ti tolerance value, that there has been a significant change in the structure, which may indicate the presence of damage or the action of a temporary external event. If a value in one of the Δxi,t vector positions exceeds its tolerance for a sensor i, the CH proceeds to the next step in the monitoring process, which refers to the damage localization, since it has detected an abnormal condition in the structure. This decision is the output of a FEI-DEO information fusion and reduces the energy consumption, since it avoids wasting energy on the following steps when they are not needed. The tolerances for the Δxi,t vector are stored in the Ti vector. It is important to mention that the Ti vector is determined for each sensor based on knowledge and analysis of the locality in which each sensor will be installed. Also, the Ti vector can be statistically determined after collecting a series of experimental samples. The purpose of adopting a Ti tolerance vector is to prevent small random disturbances, which do not imply the occurrence of abnormal conditions, from being treated by the monitoring procedure as such.

5.4.2. Damage localization and extent determination

Knowing that changes in the frequencies of the higher vibration modes mean changes in local vibration modes, each CH analyzes the Δxi,t vectors of all sensors located in its cluster in search for these kinds of changes. In each CH and for each data collection stage t, this analysis is performed with the help of the Di,t coefficient, which is calculated for each sensor i that has exceeded the given tolerance in the given cluster (Eq. (3)). The Di,t coefficient is set so that its value indicates how close the sensor i is to the damage site, and Ai is a vector of weights assigned to each modal frequency. To identify the sensors that are closest to the damage site, higher values can be assigned to the weights associated with the higher modal frequencies.

D_{i,t} = A_i^T · Δx_{i,t} = Σ_{m=1}^{5} A^m_i Δx^m_{i,t}    (3)

C_{j,t} = Σ_{i=1}^{k} D_{i,t}    (4)

So, according to Eqs. (3) and (4), the Di,t and Cj,t coefficients are outputs of FEI-FEO information fusion. To sum up, the weights should be assigned so that sensors belonging to a cluster located near the damage site obtain the highest Di,t coefficients of the whole network. In other clusters, the Di,t coefficients should be smaller, but still nonzero, since the lower frequency values will also change and these changes are identified by many other sensors. In the following step, the Di,t coefficients are aggregated in each cluster j by summing their values over all k sensors in the cluster, resulting in a Cj,t coefficient (Eq. (4)). By its mathematical definition, the Cj,t coefficient is an indicator of how close to the damage the cluster as a whole is. The algorithm uses this indicator to locate and determine the damage extent. Our algorithm for damage localization and extent determination is classified as a FEI-DEO information fusion. In this algorithm, each CH node compares its Cj,t coefficient with an Lj tolerance value. When the Cj,t coefficient exceeds Lj, the CH sends a message informing its Cj,t coefficient to its neighbor CHs. The Lj tolerance is defined for each CH, in a similar way to the determination of the values of the Ti tolerance vector. The tolerance values depend on the structural characteristics, and therefore should be determined by an expert on the structure, through statistical analysis. After the CH j transmits its Cj,t value to its immediate CH neighbors, it is expected that some of these neighbors have also exceeded their Lj tolerance values, and thus have sent their respective Cj,t coefficients to their neighbors. CHs then compare the received Cj,t values with their own, and the CH that has the greatest Cj,t value in a given neighborhood assumes the role of a collector.
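Under the definitions of Eqs. (2)–(4), the CH-side decision chain can be sketched as follows. The shifted signature and the neighborhood C values used here are hypothetical illustration values, not measurements from the paper.

```python
import numpy as np

def delta_x(x_i0, x_it):            # Eq. (2): component-wise |x_{i,0} - x_{i,t}|
    return np.abs(np.asarray(x_i0) - np.asarray(x_it))

def d_coeff(a_i, dx):               # Eq. (3): D_{i,t} = A_i^T . delta_x
    return float(np.dot(a_i, dx))

def c_coeff(d_values):              # Eq. (4): C_{j,t} = sum over the cluster
    return float(sum(d_values))

a_i = [0.2, 0.4, 0.6, 0.8, 1.0]     # weights favoring the higher modes
x_0 = [20.0, 40.0, 60.0, 80.0, 100.0]   # baseline signature x_{i,0}
x_t = [20.2, 40.4, 60.9, 81.2, 101.5]   # hypothetical shifted signature x_{i,t}

d_it = d_coeff(a_i, delta_x(x_0, x_t))  # 0.04 + 0.16 + 0.54 + 0.96 + 1.5 = 3.2
c_jt = c_coeff([d_it, d_it])            # cluster of two similar sensors -> 6.4

# Collector election: the CH with the greatest C_{j,t} in its neighborhood
neighborhood = {"CH1": c_jt, "CH2": 2.1, "CH3": 1.3}
collector = max(neighborhood, key=neighborhood.get)   # "CH1" becomes collector
```

Note how the larger shifts in the higher modes dominate D_{i,t}, which is exactly why the weights A_i grow with the mode number.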
The collector node is responsible for two tasks: (i) aggregating the information about the xi,t values from all its neighboring CHs and building a report to be sent to the sink, issuing a warning, and (ii) acting on the environment around it by triggering a relay, aiming to prevent the progression of damage and avoid further problems in the locality. The collaboration among CHs can also be classified as a DEI-DEO information fusion, since each CH has its own local decision on whether to assume damage or not, but it is allowed to consider the decisions of its local neighbors to present a more reliable decision. To build the report that will be sent to the sink, the values of the xi,t vectors at that data collection stage were chosen, because from these values it is possible to deduce all the other relevant information. Since the sink node has knowledge of the xi,0 vectors, and all the weight and tolerance values, it is possible to calculate all the other related values that were previously shown in this


explanation, and still have a global view of the events that happened in the network during data collection stage t. It is assumed that the damage location and extent are determined by the positions of the sensors that are CHs and whose Cj,t coefficient exceeded the Lj tolerance value at a certain moment. In the case of multiple damage sites, or large damage sites that cover a large area of the structure, the trend is that many collectors will emerge, and multiple reports from different locations will arrive at the sink node. It is important to note that the sink node only plays the role of supporting the CHs in their tasks: sending a message to the whole network scheduling the time of the data collection stage, storing historical data over time and making requests that represent interventions on the network. No calculations relative to damage detection, localization and extent determination need to be made at the sink. At the sink, the damage localization is done through the unique identifiers and positions of the reporting sensors, and the extent is determined by the area covered by these sensors. Through the values contained in the report, it is possible to reproduce the situation that occurred within the network and take the appropriate decisions about structural predictive maintenance.

6. Experiments with Sensor-SHM

This section describes the experiments conducted to evaluate the performance of Sensor-SHM in two main ways: (i) the algorithm's capability to precisely and accurately detect, localize and determine the extent of damage sites, and (ii) the algorithm's efficiency when running within a resource constrained WSN, in terms of communication. Regarding item (i), the accuracy of the algorithm will be evaluated by using sensor nodes that sense simulated acceleration data instead of real, sensor-collected, acceleration data.
Regarding item (ii), in our previous work [7] we presented an energy consumption analysis and a partial communication analysis, in which we achieved promising results. So, in this paper, our focus regarding item (ii) is on providing a more comprehensive analysis of the impact of the algorithm on communication. Two experiments were set up to evaluate item (ii) (Sections 6.2, 6.3 and 6.4) and one experiment was set up to evaluate item (i) (Sections 6.5, 6.6 and 6.7), for a total of three experiments.

6.1. Network prototyping

In all the performed experiments we prototyped a sensor network composed of MICAz motes from Crossbow Technology [21]. The motes are programmed in the nesC language, under the TinyOS 2.1 development environment [23]. The implementation of our Sensor-SHM algorithm encompasses two programs: one for running inside motes assigned as CHs and another for running in motes acting as sensor nodes. The default TinyOS 2.1 implementations of the 802.15.4 protocol for lower level communication handling, and of the Active Message protocol [22] for higher level communication handling, were used. Our experiments aim at evaluating the performance of the Sensor-SHM algorithm alone, so a lean implementation of the whole system in our prototype was desired. From the point of view of the sensor nodes, the algorithm is structured in four main components (Fig. 2): SensorMainC, SensorCollectC, SensorFFTC and SensorRadioC. Besides them, basic TinyOS components were used to implement radio and sensor boards [24] (they are hidden in the figure for the sake of clarity). The MainC component provides the Boot interface, and it is the primary component of TinyOS, from which the nodes boot (initialize). The SensorMainC component manages all the sensor nodes in the network. To achieve its goal, SensorMainC makes use of the SensorCollectC and SensorRadioC components, accessed respectively through the SensorCollect and SensorRadio interfaces. SensorCollectC controls sensing tasks, using other basic interfaces provided by TinyOS 2.1. SensorCollectC also makes use of the SensorFFTC component, through the SensorFFT interface, which is responsible for performing an FFT within the wireless mote. SensorFFTC was built as a separate component to enable reuse in other applications, given its generality and likely use in different domains. SensorRadioC is responsible for the communication between each sensor and its CH, and uses the basic TinyOS 2.1 radio interfaces, based on Active Message [22]. From the point of view of the CH nodes, the Sensor-SHM algorithm encompasses three main components (Fig. 2(b)): LeaderMainC, LeaderApplicationC and LeaderRadioC. The LeaderMainC component manages the Sensor-SHM algorithm in the CH nodes. LeaderApplicationC is used to calculate the Cj,t coefficients for the cluster, based on the values of natural frequencies received over the radio. The CH radio is managed by LeaderRadioC, which uses the basic TinyOS 2.1 radio interfaces, as the SensorRadioC component does. However, LeaderRadioC has a larger and more detailed implementation than SensorRadioC, since the CH makes more intensive use of its radio. For instance, LeaderRadioC is in charge of requesting and collecting the frequency values of all sensors within its respective cluster, while the SensorRadioC component implements only the response to the requests received from the single respective CH. This separation is important since our network topology is static and, therefore, the CHs are always the same nodes (we do not assume CH rotation). In this situation a CH node will never assume the role of a sensor, and vice versa.
So, to avoid wasting memory in the program used by the sensor nodes, only the tasks concerning the sensor-node side of communications are implemented in their radio component; these are simpler than the tasks regarding the CH-node side of communications.

6.2. Methodology to evaluate resource consumption

Two experiments were performed on real sensor hardware, aiming to evaluate Sensor-SHM in terms of communication, specifically communication overhead and packet loss. To evaluate the communication overhead, we measured the number of packets and the number of bytes transmitted by sensors and by CHs, analyzing data message and control message transmissions separately. We aim at understanding the relation that governs the communication overhead as the numbers of clusters and sensors increase. Data messages are messages whose payload contains information on the structural state. There are two kinds of data message in our implementation: (a) messages to convey information on the values of natural frequencies measured by the sensors (mainly exchanged between sensors and their CHs), and (b) messages to carry information on the values of Cj,t computed by CHs (mainly exchanged among CHs). Each data message of kind (a) requires four bytes per frequency in its payload. Since we use 5 natural frequencies, this kind of data message has a 20-byte payload. Data messages of kind (b) require 4 bytes in their payload to transport a value of Cj,t. Control messages are messages with empty payloads, for general (often management) purposes. Their main uses are to establish communication among the nodes and to disseminate commands into the network, for instance, to start the data collection stages. Control messages are exchanged among all nodes of the network. We considered an indoor environment, using MICAz motes including only the mote board containing the processor, radio, memory and batteries. These items are depicted in [21].
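Given the payload sizes above, the data-message payload traffic per data collection stage can be estimated with a simple count. This is a sketch of our own, under the worst-case assumptions used in these experiments: every CH exceeds its tolerance and all CHs are mutual neighbors; headers and control messages are not counted.

```python
FREQ_BYTES = 4     # bytes per transmitted frequency value
N_FREQS = 5        # five modal frequencies -> 20-byte kind-(a) payload
CJT_BYTES = 4      # payload of a kind-(b) C_{j,t} message

def data_payload_bytes(n_clusters, sensors_per_cluster):
    """Data-message payload bytes in one data collection stage."""
    kind_a = n_clusters * sensors_per_cluster * N_FREQS * FREQ_BYTES
    kind_b = n_clusters * (n_clusters - 1) * CJT_BYTES   # all-to-all C_{j,t}
    return kind_a + kind_b

intra = data_payload_bytes(1, 4)   # experiment (I) style: one cluster of four
inter = data_payload_bytes(4, 1)   # experiment (II) style: four clusters of one
```

The kind-(b) term grows quadratically with the number of clusters, which anticipates the polynomial growth observed in Section 6.4.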
We did not use sensor boards, since the sampled data were simulated in


Fig. 2. Component diagrams for (a) sensor and (b) cluster-head nodes.

this set of experiments. The acceleration values collected by the sensors were the same at every data collection stage, and the rules to set the collected values were as presented in [7]. Since we are focusing on the communication overhead generated by the raw data sampling, the meaning/semantics of the acceleration values being collected is irrelevant. Five summed sinusoids with known frequencies made every sensor return modal frequencies of 20 Hz, 40 Hz, 60 Hz, 80 Hz and 100 Hz. The range up to 100 Hz is where the most important natural frequencies of large structures lie, thus it is possible to evaluate the precision of the FFT and peak extraction algorithms over this range. During the data collection stages, each sensor generates a standard error (standard deviation) of 2 Hz in the determination of the modal frequencies, due to imprecisions in calculations (mainly relative to the FFT algorithm itself and to the peak extraction algorithm) and truncations (which are needed to transmit the data over the radio). All the tolerances mentioned in the algorithm description (see Section 5) were set to zero, so in all the data collection stages damage sites were assumed to be found, which is considered the most resource-demanding scenario. Moreover, for the whole set of experiments, we used the default values for the MICAz radio parameters [21]. Therefore, the maximum message payload size was set to 28 bytes, and the radio data rate was 250 kbps. No sleep mode was implemented, so the radio duty cycle was 100%; since the experiments were short, the amount of energy spent during idle time is very low, thus not interfering with the results.

6.3. Description of the scenarios used to evaluate resource consumption

The pair of experiments for evaluating communication is referred to as experiments (I) and (II). In experiment (I), each scenario is characterized by the variation in the number of nodes in the same cluster, to evaluate communication inside one cluster (intra-cluster). In experiment (II), each scenario is characterized by the variation in the number of clusters, keeping a fixed number of sensors per cluster, to evaluate communication among CHs (inter-cluster). Both experiments had 4 scenarios, which are described based on a two-dimensional Cartesian plane (x, y), where the sink node is always at the origin. In experiment (I), all the scenarios had only one CH. This CH was always placed 1 m from the sink node on the x axis. The first scenario had one sensor pertaining to this cluster, 1 m from the CH on the x axis. The next three scenarios had the number of sensors pertaining to this cluster increased by one, following a linear topology along the x axis, spacing each sensor 1 m from the last one added. In experiment (II), the first scenario is the same as in experiment (I) (one cluster, containing one CH and one sensor). The number of clusters is increased by one in each of the next three scenarios, equally spacing the CHs 1 m from the last one added in a linear topology on the x axis. The single sensor is placed at the same x position as its respective CH, and 1 m from it on the y axis. In all scenarios of both experiments, all sensors were within radio range of the sink node, considering the 20 m to 30 m indoor range of the MICAz motes in [21]. In every scenario of experiment (II), all the CHs were considered neighbors among themselves. In our previous work [7], we found that each data collection stage takes around 10 s to complete, and the number of sensors in the network did not have a significant influence on this time. Thus, for the experiments in this work, the chosen time between data collection stages was 15 s, to ensure that all data collection stages finish and, at the same time, to minimize the idle time between stages. So, every 15 s the sink sends a message to each CH to start a data collection stage. Ten data collection stages were performed.

6.4. Results and analysis of resource consumption

The first observation from the results of the set of experiments described in Section 6.2 (Fig. 3) is that the number of transmitted bytes related to Data Messages is larger than the number of transmitted bytes related to Control Messages in both experiments. This result points towards a successful lean implementation of our prototype, due to the achieved low communication overhead caused by control mechanisms or auxiliary protocols, as we intended. We also consider our implementation successful due to the low packet loss rate achieved among the sensor nodes in the network, which contributed to the quality of the analysis of the structural integrity. In fact, no retransmissions were needed among nodes in either experiment. This fact is explained by the proximity among nodes in each topology, assuring a good link quality during transmissions. This proximity still reflects a harsh scenario, since in SHM applications the usual distance between motes is even smaller than 1 m, as can be seen in [14]. But we did face a packet loss rate of 3.6% between the sink node (which was promiscuously listening to all the network traffic) and the rest of the network. We attribute this outcome to a lack of computational power at the sink's sensor node, which could not deal with the transmission of packets over the serial port at the same rate at which it received them from the radio. At certain moments, many nodes transmitted packets at the same time, and the buffer size at the node was not enough to deal with all messages. This


Fig. 3. Results for experiments (I) and (II).

packet loss rate did not represent a significant interference in our results, since we could fit curves to our experimental data after the experiment and easily detect a lost packet in our controlled environment. This computational limitation can, however, be resolved through the adoption of a 2-tiered topology in which the sink node, or even the CH nodes, has greater computational power. In experiment (I), for every new sensor added to the cluster, the total number of transmitted bytes increased linearly. This total number is the sum of transmitted bytes due to both Control and Data Messages. The CH was the only node that had its number of transmissions changed by the increase in the number of sensors, and this change was in the number of transmitted Control Messages. The number of Control and Data Messages transmitted by each sensor and the number of Data Messages transmitted by the CH were not affected by the change in the number of sensors in the same cluster. In experiment (II), for every new cluster added, the total number of transmitted bytes increased following a polynomial relation. The shape of this curve is mainly explained by the number of transmissions of Data Messages, and the variation in the number of transmissions of Data Messages is explained by the number of Data Messages exchanged among CHs. Since every new CH added was considered a neighbor of all the CHs already existing in the prior scenarios, the number of Data Messages (used to transmit the values of Cj,t among CHs) increased at a much faster rate. The number of

Control and Data Messages transmitted by each sensor and the number of Control Messages transmitted by the CH were not affected by the change in the number of clusters. The formulas in Fig. 3 are good predictors of the number of transmitted bytes in situations comprising more nodes. Also, since the number of bytes transmitted by each CH increases with the addition of more sensors per cluster or more neighbor clusters, the CH may become overloaded at some number of sensors in its cluster, or of neighbor clusters. We did not have a sufficient number of sensor nodes to reach this limit experimentally. By the obtained results, our prototype presented good scalability when considering the number of sensors expected in most currently available related applications in a real situation. The maximum number of sensors in SHM applications found by us in the literature was 64 [3]. The execution time of the data collection stages starts to increase when the number of bytes to be transmitted by the CH reaches the radio data rate (250 kbps for the MICAz radio), and increasing the number of sensors in the same cluster is a faster way of reaching this limit than increasing the number of neighbor clusters. It is also important to mention that the number of bytes transmitted by a sensor node is a good estimate of the energy it spends. Since it is well known that radio transmissions are the most energy-consuming tasks in WSNs [14], it is possible to estimate that CHs will have a shorter lifetime when more sensors are added


to their clusters. So, in our prototype, it is always a better choice to allocate the needed sensors into as many clusters as possible. This result is also a strong justification for the use of CHs and for the choice of distributing the algorithm among the nodes, making it localized. Still, this certainly increases the number of nodes needed for the same task, which may increase the overall cost of the system. So, there is a tradeoff to be managed, and experiments and simulations performed prior to the WSN deployment are very useful to guide the design decisions.
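The "spread sensors across clusters" guideline can be illustrated with a per-CH payload count. This is our sketch, not a measurement: it counts kind-(a) payload bytes a CH receives plus kind-(b) payload bytes it sends, assuming all-to-all CH neighborhoods as in experiment (II).

```python
def ch_payload_load(sensors_in_cluster, n_neighbor_chs):
    """Data-message payload bytes one CH handles per stage: 20 bytes
    received per subordinate sensor, 4 bytes sent per neighbor CH."""
    return sensors_in_cluster * 20 + n_neighbor_chs * 4

one_big = ch_payload_load(8, 0)   # eight sensors in a single cluster
spread = ch_payload_load(2, 3)    # four clusters of two sensors each
```

For eight sensors, a single cluster loads its CH with 160 payload bytes per stage, while four clusters of two sensors load each CH with only 52, at the cost of three extra CH nodes.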

6.5. Methodology for accuracy evaluation over simulated raw data

In this experiment, focus was given to the ability of the algorithm to correctly identify a damage site and its extent. The input data about location and extent of damage were introduced through changes in the simulated raw acceleration data collected by the sensors. Although this experiment is performed over simulated raw acceleration data, it is fully able to show how Sensor-SHM is expected to behave in a real-world situation, and is also a first test for assessing the precision of the mathematical model to be used in future works. The hardware used in this experiment was the same used in the previous one, based on the MICAz motes. The network topology is shown in Fig. 4. Four clusters are considered in this topology, composed of two sensors and a CH each. Also, every cluster is considered to be deployed over a different homogeneous structural element. Each of these elements could be represented by a fixed beam, and the four beams are connected among themselves by structural elements that retransmit vibrations. The sensors are considered to be directly attached to the structure, but the CHs do not need to be attached. Each cluster-head must only be within radio range of all its neighbor (single-hop) cluster-head nodes. Sensors that are over the same homogeneous structural element are expected to produce more redundant data among themselves. Based on the experience acquired during our experiments, we have found different methods to represent an unhealthy state of the structure in this simulated scenario. The most relevant of them are: (i) increasing the value of intensity for one of the modal frequencies, (ii) adding a sixth modal frequency (x6) among the existing frequencies, or removing one of the five existing ones, and (iii) shifting the values of one or more frequencies. Methods (i) and (ii) would not be directly perceived by Sensor-SHM, but their collateral effects can still be perceived. In (i), one of the peaks of the power spectrum can mask the other peaks; e.g., if it is the third peak, it may be perceived as the first and probably the only one, which will be interpreted as if the first peak had been shifted from its value to the value where the third peak previously was. In (ii), adding or removing one of the peaks among the first five would also make the peaks be perceived in a wrong order, generating an effect similar to (i). So, method (iii) is chosen, since it directly causes the expected changes and is easier to debug in Sensor-SHM.

6.6. Description of parameters used to simulate raw data

The simulated raw acceleration data correspond to an accelerometer output in the z axis, named az(x). The general rule to generate az(x) is defined by Eq. (5) as a damped linear combination of five sinusoids weighted by the respective intensity (Im), i.e. the amplitude of each modal frequency in the signal:

a_z(x) = e^{−0.5x} · Σ_{m=1}^{5} I^m sin(2π x^m x)    (5)

In Eq. (5), the time (x) varies according to the time period related to the sampling rate, which was chosen in this experiment as 1.0 kHz. The initial value for each parameter in each node is: x^1_{i,t} = 20.0 Hz, x^2_{i,t} = 40.0 Hz, x^3_{i,t} = 60.0 Hz, x^4_{i,t} = 80.0 Hz, x^5_{i,t} = 100.0 Hz, and the intensities were all set to 0.25, regardless of the mode of vibration. The values of intensity are all the same, since we do not want to mask frequencies in the spectrum, with some frequencies being more dominant than others. Also, the intensities receive small values, generating data in the range that accelerometers are able to collect in real situations. Through numerical simulations we arrived at the value of 0.25. This value generates a signal that can be reasonably dealt with by our peak extraction algorithm. The results of the experiments in [14] present acceleration data similar to the data we generated by Eq. (5). The initial values of the frequencies were the same as described in Section 6.2. To generate the frequency shifts, new (Bm·t) factors were inserted in the equation of each sensor node, as shown in Eq. (6):

a_z(x) = e^{−0.5x} · Σ_{m=1}^{5} I^m sin(2π (x^m + B^m t) x)    (6)
These factors are functions of two different parameters: (i) the number of the current data collection stage, t, and (ii) a constant of magnitude Bm. In Eq. (6), to make each frequency shift at a different rate, we added the variable t along with the Bm magnitude values. So, at every data collection stage, the variable t increases by one and, for each node, each frequency is increased by the small value of its respective (Bm·t) factor, which increases at each data collection stage as a multiple of Bm. The values of Bm for each node are created according to the following criteria: (i) Bm is expected to be higher for frequencies of the vibration modes that are more sensitive to damage at the node locality, i.e. the higher modes, and (ii) the values of Bm are normalized, so that the highest Bm value is 1, for the most sensitive mode of the node closest to the damage. To perform a random damage generation, frequency variation patterns in the presence of damage were defined as shown in Table 1. To set up the values in Table 1, we first assigned the highest value of Bm to the highest mode of vibration of the nodes in the cluster closest to the damage, so the first chosen value is B5 of pattern A. Next, we assigned decreasing values to the other magnitudes of

Table 1
Frequency variation patterns in presence of damage.

Variation pattern    B1     B2     B3     B4     B5
A                    0.2    0.4    0.6    0.8    1.0
B                    0.2    0.2    0.3    0.4    0.5
C                    0.2    0.0    0.0    0.0    0.0

Fig. 4. Topology used for accuracy evaluation over simulated raw data. Each node is represented by a circle, and uniquely identified by an ID.


pattern A, using an arithmetic progression of ratio 0.2. Next, since the B1 mode is considered global, and is perceived in the same way by all sensors, the B1 column received the same value for all damage patterns. The remaining values of pattern B are the values of pattern A divided by 2, since pattern B will be assigned to nodes that are in the neighborhood of the damage site, which receives pattern A. Except for the B1 mode, pattern C receives zero values for the other modes, since this pattern will be assigned to nodes that are far from the damage site, and will perceive shifts only in their global frequencies. Table 2 presents all the possible cases of damage pattern assignments for our experiment. Once configured at network setup, the assignments remain the same until the end of the experiment. The assignment is made randomly and each damage pattern has a 25% chance of being chosen. Since at first it is not possible to know where the damage site will be, the rules to choose the values of Ai are based on the knowledge of the general rule stating that higher frequencies are more sensitive to damage. In order to speed up the processes of damage detection, localization and extent determination, higher values are assigned to the higher frequencies, so Cj,t will depend more on the variations of the higher frequencies. Also, the values of Ai are normalized, with the highest value assigned to the fifth mode and decreasing values assigned to the other modes. The rule is similar to the one used to create pattern A in Table 1. So, for every sensor i, Ai = {0.2; 0.4; 0.6; 0.8; 1.0}.
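The signal generator of Eq. (6), with the pattern-A magnitudes from Table 1, can be sketched as follows. NumPy stands in for the mote implementation; the stage shown is an arbitrary illustration.

```python
import numpy as np

BASE_FREQS = [20.0, 40.0, 60.0, 80.0, 100.0]  # initial x^m values, in Hz
INTENSITY = 0.25                              # I^m, equal for all modes
PATTERN_A = [0.2, 0.4, 0.6, 0.8, 1.0]         # B^m for the cluster closest to damage

def a_z(x, t, b):
    """Eq. (6): damped sum of five sinusoids whose frequencies drift by B^m * t."""
    return np.exp(-0.5 * x) * sum(
        INTENSITY * np.sin(2.0 * np.pi * (f + bm * t) * x)
        for f, bm in zip(BASE_FREQS, b))

fs, n = 1000.0, 512               # sampling rate and window, as in Section 6.2
x = np.arange(n) / fs             # time axis
stage3 = a_z(x, t=3, b=PATTERN_A) # at stage 3 the fifth mode sits at 103.0 Hz
```

As t grows, the higher modes of a pattern-A node drift away from their baseline faster than the lower ones, which is what drives D_{i,t}, and hence C_{j,t}, above the tolerances near the damage site.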
After performing preliminary tests and numerical simulations, adopting an empirical (trial and error) methodology, we found that setting the value Lj = 20.0 for all clusters is enough to prevent a high occurrence of false positives, especially in the first data collection stage, and to ensure that damage is only signalized when the frequencies have shifted expressively in comparison to their initial values. So, for every cluster j, Lj = 20.0. The values of Ti are the same for every node and every modal frequency. The reason for this assignment is to prevent false positives (see Section 5.4.1), since each sensor generates a standard error of 2 Hz in the determination of the modal frequencies due to imprecision in calculations and truncations. So, Ti assumes the value of two standard deviations. This is enough to assume that, in a real situation, less than 5% of frequency values will exceed the limit in an undamaged situation (a false positive), assuming that the extracted frequency values follow a normal distribution and that the average is well estimated using 7 frequency samples. The Ti limit therefore assumes the value of 4 Hz for every mode of vibration and every node. Finally, the values of Tables 1 and 2 are stored within the sensors and used to generate the random data. Our goal is to observe the behavior of the Cj,t and ωi,t metrics during the data collection stages, i.e. a sensitivity analysis of the indicated variables will be performed over the variation of the collected acceleration signals.
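The 2 Hz standard error and the resulting Ti = 4 Hz band can be sanity-checked with simple arithmetic. This sketch assumes the 512-sample window at a 1.0 kHz sampling rate reported later in the comparative analysis (Table 3); the link between the FFT bin width and the 2 Hz error is our inference, not a claim from the paper.

```python
# Sanity check on the tolerances above (a sketch under the stated assumptions).
fs = 1000.0          # assumed sampling rate (Hz)
n = 512              # assumed samples per FFT window
bin_width = fs / n   # frequency resolution of the power spectrum
print(round(bin_width, 2))  # ~1.95 Hz per bin, consistent with the ~2 Hz error

sigma = 2.0          # standard error of a frequency estimate (Hz)
T_i = 2 * sigma      # tolerance of two standard deviations -> 4 Hz

def exceeds_tolerance(freq, initial_freq, tol=T_i):
    """True only when a frequency leaves its +/- tol control band."""
    return abs(freq - initial_freq) > tol

print(exceeds_tolerance(23.5, 20.0))  # False: a 3.5 Hz shift stays in the band
print(exceeds_tolerance(24.5, 20.0))  # True: a 4.5 Hz shift exceeds T_i = 4 Hz
```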

6.7. Results and analysis of accuracy over simulated raw data

The network performed the setup and 29 data collection stages in this experiment, stopping after enough time for damage to be clearly characterized in the scenario. Our program running on the sink was set up to randomly choose one of the four cases in

Table 2
Possible cases for frequency pattern assignments.

Node ID   Case 1   Case 2   Case 3   Case 4
1 and 2   A        B        C        C
3 and 4   B        A        B        C
5 and 6   C        B        A        B
7 and 8   C        C        B        A

Table 2. Case 1 was randomly chosen for this experiment. A packet loss rate of 0.4% for the data messages containing frequency values was detected. The explanation for this fact is the same as in the previous experiments (Section 6.4). For this experiment, the frequency values missing for this reason were interpolated after the end of the experiment. The peak extraction algorithm performed poorly. During the whole experiment, around 15.2% of the frequency values could not be correctly found. When the peak extraction algorithm does not find the current frequency value correctly, it assumes that its value is equal to the respective initial value stored in the ωi,0 vector. Considering all the frequency values extracted for each mode in the experiment, the first mode was not found in 12.5% of the cases, the second in 5.0%, the third in 20.8%, the fourth in 7.5%, and the fifth in 30.0%. This means that the fifth peak was the most difficult to find and the best performance was achieved for the second peak. These results point out that our peak extraction algorithm still needs improvements to perform well in terms of accuracy. So, we conclude that it may not be a good choice to generalize the use of this algorithm to every structure through the presented peak extraction methodology. Instead, it may be better to conduct prior studies on the specific modes of vibration of the structure to be monitored, so that more parameters can be supplied to the peak extraction algorithm, leading to an improvement in its accuracy. This fact does not invalidate the algorithm; it only imposes a restriction on the data acquisition methodology. We chose a node that received variation pattern A (Node ID 1), one that received variation pattern B (Node ID 3), and one that received variation pattern C (Node ID 5) to show in Fig. 5 the evolution of each frequency shift for each variation pattern.
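The fallback behavior described above can be sketched as follows. The windowed local-maximum search is our illustrative assumption, not the paper's exact peak extraction algorithm: for each expected peak, the sketch searches the power spectrum near it and, when no peak is found, falls back to the initial value stored in the ωi,0 vector.

```python
# Hedged sketch of peak extraction with fallback (the search strategy is an
# assumption; the paper's actual algorithm is not reproduced here).

def extract_peaks(spectrum, bin_width, expected, initial, window=5.0):
    """Return one frequency per expected mode, with fallback on failure."""
    found = []
    for exp_f, init_f in zip(expected, initial):
        lo = max(1, int((exp_f - window) / bin_width))
        hi = min(len(spectrum) - 2, int((exp_f + window) / bin_width))
        best = None
        for k in range(lo, hi + 1):
            # candidate: a local maximum relative to its neighbours
            if spectrum[k] > spectrum[k - 1] and spectrum[k] >= spectrum[k + 1]:
                if best is None or spectrum[k] > spectrum[best]:
                    best = k
        # fallback: keep the stored initial frequency when no peak was found
        found.append(best * bin_width if best is not None else init_f)
    return found

spectrum = [0.0] * 256
spectrum[10] = 5.0  # a clear peak near 20 Hz (bin 10 at 2.0 Hz/bin)
print(extract_peaks(spectrum, 2.0, [20.0, 60.0], [20.0, 60.0]))
# first mode found at 20.0 Hz; no peak near 60 Hz, so it falls back to 60.0 Hz
```

A wrong fallback like this is precisely what produces the downward peaks discussed for Fig. 5.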
We can see the frequencies following linear trends. The angular coefficient is exactly the value of Bm. In fact, this mathematical relation helps in understanding our proposal of damage simulation presented in Section 6.6. When changing the term ωm in Eq. (5) to ωm + Bm·t in Eq. (6), we imposed on the frequencies a linear dependency over the data collection stages. In Node ID 5, since its Bm values are 0.0 for the last four natural frequencies, these four frequencies followed a constant trend and always oscillated within their control limits of Ti = 4 Hz. It is also possible to observe that the three nodes in Fig. 5, and all the other nodes that received any variation pattern, perceived damage in the lowest frequency at the same rate. As expected, nodes that received variation pattern A had greater variations in their frequencies than the other nodes during the experiment. It is also important to understand the meaning of the downward peaks shown in almost all curves of all graphs. They represent moments at which one of the frequency values was not correctly extracted by the peak extraction algorithm. As previously mentioned, when this happens, the algorithm assumes the value of ωi,0 for that natural frequency. In the following data collection stages, it returned to the expected linear trend. When a value was not correctly extracted, this had implications for the calculation of Cj,t, and Fig. 6 illustrates this issue. Fig. 6 presents a graph with the evolution of the Cj,t values, with the Lj tolerances evidenced, during the data collection stages for the four clusters in the experiment. Five important situations are signalized in Fig. 6 and need to be discussed. Situation 1 shows that the first cluster to detect damage was cluster 1, at data collection stage 4. At this time, this cluster could be signalizing a false positive, but at stage 5 the same cluster detects damage again.
So, it is possible to assert that damage really occurred close to cluster 1. Situation 2 shows that the damage had progressed so much that cluster 2 started detecting it, in smaller proportions. This is the concept of damage extent determination: two clusters are now detecting damage, with the one closer to the damage presenting higher Cj,t values.
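The interplay between the Di,t and Cj,t coefficients and the Lj tolerance can be illustrated with a simplified reconstruction. The formulas below are an assumption based on the description in the text (weighted sums of tolerance-exceeding shifts), not the paper's exact equations, and the shift values are hypothetical.

```python
# Simplified reconstruction (assumed formulas, hypothetical shift values):
# each sensor weights its absolute frequency shifts by A_i, counting only
# shifts beyond T_i, and the cluster head sums the sensor coefficients.

A = [0.2, 0.4, 0.6, 0.8, 1.0]   # mode weights A_i (Section 6.6)
T = 4.0                          # per-frequency tolerance T_i (Hz)
L_j = 20.0                       # cluster damage tolerance L_j

def sensor_coefficient(shifts, weights=A, tol=T):
    """D_{i,t}: weighted sum of shifts, counting only shifts beyond tol."""
    return sum(w * abs(s) for w, s in zip(weights, shifts) if abs(s) > tol)

def cluster_coefficient(all_shifts):
    """C_{j,t}: sum of the member sensors' coefficients."""
    return sum(sensor_coefficient(s) for s in all_shifts)

healthy = cluster_coefficient([[0, 0, 0, 0, 0], [1, 2, 0, 1, 3]])
damaged = cluster_coefficient([[5, 10, 15, 20, 25], [5, 8, 12, 16, 20]])
lost_fifth = cluster_coefficient([[5, 10, 15, 20, 0], [5, 8, 12, 16, 20]])
print(healthy, damaged, lost_fifth)
print(damaged > L_j and lost_fifth > L_j)  # True: still detected
```

Losing one sensor's fifth frequency lowers Cj,t but, because the coefficient aggregates several frequencies and sensors, detection can survive the loss, which is the robustness argument made for situation 3 below.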



Fig. 5. Variation of frequencies for Node IDs 1, 3 and 5, during data collection stages.

Fig. 6. Variation of Cj,t values per cluster during data collection stages.

Situation 3 concerns the problem detected in Fig. 5. When the frequencies were not correctly extracted, the values of Cj,t were disturbed; but since Cj,t depends on the shifts of the five natural frequencies, losing only one natural frequency value is not enough to bring Cj,t below the Lj line. We therefore observed that, to increase the reliability of our algorithm, it is better to consider as many natural frequencies as possible in the analysis, so that if one of these frequencies is lost, or wrongly estimated, the disturbance over Cj,t will not be enough to harm the accuracy of the algorithm. Situation 4 shows the first time at which one of the nodes from the clusters with similar behaviors (3 and 4) presents a value for one of its natural frequencies that surpasses the tolerance Ti. This natural frequency is the first one, which started at 20 Hz. This disturbance in the first natural frequency was enough to trigger the calculation of C3,t for the first time at stage t = 18 and of C4,t at stage t = 21. At a future data collection stage, if the input parameters remained, clusters 3 and 4 would start signalizing damage due only to the first natural frequency shift, another case of damage

extent determination. The network did not go that far in time during its operation, since it would take much time to observe this fact. Finally, situation 5 shows the effect on C1,t of three consecutive data collection stages in which the fifth natural frequency was wrongly extracted. The fifth natural frequencies of nodes 1 and 2 are the ones most sensitive to damage, so not extracting them correctly made the values of C1,t drop below 50% of their expected value. If L1 were set higher than 80.0, cluster 1 would not signalize damage detection, even though the damage is still progressing. If this state lasted too long, this would be a case of a false negative. The key to avoiding this issue is to rely on the opinions of the neighbor clusters: in this case, cluster 2 would keep signalizing damage if its limit L2 were set to 20.0. Therefore, false negative avoidance is one of the main reasons for stimulating collaboration mechanisms among the nodes in this algorithm. The impact of packet loss between a sensor and a CH on the proposed method is similar to the impact of a wrong peak extraction, as pointed out in situation 5. But when a packet is lost, all five natural



frequencies are lost, and the respective Cj,t coefficient tends to drop lower than in the case where only one frequency is wrongly extracted from the power spectrum. In the case of a packet lost in the communication among CHs, the whole region monitored by the respective cluster is instantly left out of the analysis, which may deeply compromise the whole analysis. For this reason, the adoption of protocols providing as much reliability as possible in the communication among CHs (for instance, by using message acknowledgments) is desired.

6.8. A case study for assessing the accuracy of the algorithm

In this section we present and detail a procedure to deploy a wireless sensor network comprised of nodes running our algorithm. The procedure is based on the real experiment described in [8] (see Section 4). The scenario parameters were configured so that the behavior of our algorithm could be analytically simulated and analyzed. This case study presents a possible real application for the Sensor-SHM algorithm, and allows analyzing its performance in terms of accuracy in assessing the integrity of a structure. We extracted the data about the natural frequency changes due to damage occurrence from the test performed in [8], and prepared a numerical simulation to assess the behavior of the Sensor-SHM algorithm in the same case. The scenario was described in Section 4, and we organized our network deployment in a similar way as Clayton et al. [8] did. One sensor node was considered in each of the places where the sensors of the related work were deployed, and, additionally, two clusters were set up: cluster j = 1, comprising floors 1 and 2 (sensors i = 1 and i = 2, respectively), and cluster j = 2, comprising floors 4 and 5 (sensors i = 3 and i = 4, respectively). The CHs j = 1 and j = 2 were considered to be off the structure, but still within radio range of all the sensors.
The physical presence of the CHs on the structure could affect the properties of the structure, adding more mass to it in a real situation. Since our CHs do not sense, our system could adapt very closely to the experimental scenario described, as the system of Clayton et al. did. Since information was available about the healthy and damaged states of the structure, it was possible to calculate Δωi,t for t = 1 to 5, and we assumed that all the sensors detected the same values of modal frequencies. Since the structure is small and the sensors were relatively close, the sensing of the same global modal frequencies of the structure by all the sensors deployed over it is a reasonable assumption, but it does not hold for every structure. Information was available to perform at least one first (healthy) sensing and five successive data collection stages with the presence of damage on each of the five floors. Therefore, a study prior to the actual deployment of the WSN over the structure was conducted to find good values for the constants Ti, Ai and Lj. In the experiment described in [8], three different methods for extracting the natural frequencies were used: a manual method, the Eigensystem Realization Algorithm (ERA) and Fractional Polynomial Curve-Fitting (FPCF). The determination of the natural frequencies therefore presented an average and a standard deviation among the extractions through these three methods, for every damage situation. The Ti values assumed, for every sensor i, the value of 3 times the standard deviation of the respective modes of vibration. Assuming that the natural frequencies present a normal distribution, and considering that the algorithm uses the modulus of the difference of the modal frequencies (i.e., a tolerance of 3 standard deviations corresponds to an amplitude of 6 standard deviations), the algorithm will consider the structure healthy in more than 99% of the times it samples the natural frequencies.
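The "more than 99%" figure follows directly from the normal-distribution assumption and can be checked with the standard error function; this is a worked check of the stated reasoning, not part of the original experiment.

```python
# Quick check of the ">99%" claim above: for a normally distributed frequency
# estimate, the probability that |x - mu| exceeds k standard deviations.
import math

def two_sided_tail(k):
    """P(|X - mu| > k * sigma) for X ~ Normal(mu, sigma^2)."""
    return 1.0 - math.erf(k / math.sqrt(2.0))

print(round(two_sided_tail(3), 4))  # ~0.0027, i.e. a ~0.27% false-positive rate
```

The same function with k = 2 gives roughly 4.55%, matching the "less than 5%" bound used for the 2-standard-deviation Ti of Section 6.6.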
Only when a value of Δωi,t surpasses the value of Ti does the algorithm proceed with the calculations, which corresponds (i) to 1% of the times even when the structure is still healthy (a false positive) and (ii) to the situations where real damage has occurred. Once the changes in natural frequencies

due to a damage insertion on a floor of the structure were much larger than 3 standard deviations of the natural frequency extraction, it was possible to successfully apply the algorithm. The value of Lj was defined using the same principles, but according to the values of the Cj,t coefficient. Once it was possible to calculate in advance the values of Δωi,t for every damage case and every sensor, the determination of the Ai constants was made using a simple procedure. The Ai constants should be the ones that maximize the values of the expected Di,t coefficients for the sensors located close to each damaged site. So, we solved a simple linear programming problem whose objective was to maximize (D1,1 + D2,2 + D3,4 + D4,5), varying the values of Ai and restricting them between 0% and 100% for each mode of vibration, for each sensor. We considered that at t = 1 damage was present on floor 1, at t = 2 on floor 2, at t = 4 on floor 4 and at t = 5 on floor 5. So, we are maximizing the value of Di,t for sensor i = 1 when the damage is on floor 1, and so on. Since the values of Δωi,t were the same as collected by all sensors, they were constants in the problem. Naturally, the modes of vibration that were more sensitive to the damage on one floor received 100% for the respective value of Ai in the sensor of that floor. In other words, when damage is on floor 1, and we know that the largest variation in the frequency of mode 2 occurs when damage is on floor 1, we can choose the value of 100% for this mode and 0% for all the other modes in the Ai constants of the sensor on floor 1 (i = 1). When the damage is not on floor 1, this sensor may still detect a variation in modal frequency 2, and consequently multiply this variation by 100%, generating some value of Di,t, but this value is still not the largest value possible for Di,t.
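The outcome of that procedure (100% on the most sensitive mode, 0% elsewhere) can be sketched without an LP solver. This is an illustration of the selection rule described above, not the paper's actual linear program, and the shift values are hypothetical.

```python
# Sketch of the A_i selection rule (an illustration, not the paper's LP solver):
# give full weight to the mode whose frequency varies most when damage is at
# the sensor's own floor, and zero weight to the other modes.

def choose_weights(shifts_when_local_damage):
    """Return A_i: 1.0 on the most sensitive mode, 0.0 elsewhere."""
    best = max(range(len(shifts_when_local_damage)),
               key=lambda m: abs(shifts_when_local_damage[m]))
    return [1.0 if m == best else 0.0
            for m in range(len(shifts_when_local_damage))]

# Hypothetical |delta omega| per mode when damage is at this sensor's floor:
print(choose_weights([0.5, 3.2, 1.1, 0.8, 0.4]))  # all weight on mode 2
```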
Using these rules for setting the values of the constants of the algorithm, and considering the existence of two clusters as previously mentioned, allowed a reasonable performance of the algorithm. When damage was on floor 1 and on floor 2, cluster 1 signalized damage near it, and cluster 2 did not. When damage was on floor 3, since no sensor was deployed on this floor, this situation was not predicted by our strategy; therefore both clusters acted in an unpredicted way, both signalizing damage. When damage was on floors 4 and 5, the expected behavior was for cluster 2 to signalize damage alone, but since the damage localization on these floors relied on the modal frequencies that varied most due to damage on every floor, both clusters signalized damage in both cases, which can be considered a false positive. Therefore, our algorithm presented 100% success in damage detection and 2 cases of false positives among the 8 available results for damage localization (two clusters signalizing damage or not near them in four predicted damage scenarios). Although the performance of the algorithm can be improved by choosing different rules for setting the values of the constants, it is extremely dependent on the quality of the structural properties. In cases where the monitored structure does not exhibit a clear pattern of modal frequency variations for each damage case, it may be impossible to localize damage in some of its locations. In the scenario of Clayton et al., damage on floors 4 and 5 could not be clearly localized using the proposed rules.

6.9. Comparative analysis

We chose two important related works for a comparison with our obtained results. The first one is the work of Xu et al. [9], where the system named Wisden is introduced. This system is considered a centralized approach, where no processing aimed at assessing the structural integrity is performed within the network, only at the sink node.
Wisden is based on a WSN, but its design is closer to that of centralized wired systems. Wisden relies on event detection and data compression techniques to provide viable data transportation to the sink node, where a



complete analysis can be performed. The second work selected for our comparative analysis is described in Hackmann et al. [14], where a partially decentralized system for detecting and localizing damage is presented. This system relies on the DLAC algorithm for detecting and localizing damage on structures, and is also based on a WSN. Part of the processing aimed at providing the inputs for the DLAC algorithm is performed within the network, but the DLAC algorithm itself runs on the sink node. Both works are considered previous stages of the evolution in the use of WSNs for SHM, and present increasing levels of decentralization. Our work is, therefore, intended to go a step further in this evolution, presenting a fully decentralized approach. This comparison is designed to show in which aspects the present work evolved with respect to a fully centralized and a partially decentralized approach. In Table 3 we compare our results in terms of (i) RAM consumption for performing the core functionalities that each system proposes as a solution for data reduction, in order to enable efficient transmissions, (ii) latency, which corresponds to the time delay required for achieving the results over a performed sampling, and (iii) network traffic generated by all the nodes of the network. In order to provide a fair baseline for comparison, some data had to be normalized. Wisden uses 288 bytes of RAM to perform compression over a 128-sample array. Our solution consumes almost double that amount (considering the implementation of the sensor node only), since it performs more costly procedures. Wisden uses a wavelet-based compression, while our system relies on an FFT and a peak extraction algorithm. Our proposal is more easily comparable to Wisden, since the proposal of Hackmann et al. is based on the Imote2 platform, which provides and consumes more computational resources. However, Wisden presents a memory usage optimization, performing dynamic memory allocation, while our system consumes the same amount of RAM during its whole execution, as the proposal of Hackmann et al. also does. The memory usage criterion clearly shows that our proposal performs a larger amount of decentralized processing than the fully centralized case. The work of Hackmann et al. reports an even greater amount of RAM consumption for each collected sample dealt with, but this value is not fully comparable to our proposal because of two facts: (i) the Imote2 tool chain drastically inflates the memory consumption of a program, adding more software resources to it than needed, and (ii) the implementation of Hackmann et al. keeps the arrays used in intermediary operations in the program for debugging reasons, i.e., the sampled array is filled once and the FFT over this array originates another array, while in our implementation the FFT results are stored in the same array of
samples. Considering such facts, our implementation could consume an amount of memory very similar to that of the partially decentralized case, if the same platforms and the reuse of arrays were considered in both works.

Table 3
Comparison of our proposal with similar related works.

Hardware used:
- Fully centralized case (Xu et al. [9]): MICA2 platform (4 kB RAM; 128 kB ROM).
- Partially decentralized case (Hackmann et al. [14]): Imote2 platform (256 kB RAM; 32 MB ROM).
- Fully decentralized case (current proposal): MICAz platform (4 kB RAM; 128 kB ROM).

Memory consumption:
- Fully centralized: uses 288 bytes of RAM to perform compression over a 128-sample array (2.25 bytes/sample).
- Partially decentralized: uses 73,000 bytes of RAM to perform all distributed processing over a 2048-sample array (35.64 bytes/sample).
- Fully decentralized: uses 2761 bytes of RAM to perform all distributed processing over a 512-sample array (5.39 bytes/sample).

Latency:
- Fully centralized: 7 min to collect 1 min of vibration composed of 6000 samples at a 100 Hz sampling rate (0.070 s/sample); takes 6 min compressing and transmitting.
- Partially decentralized: 5 s to collect 3.7 s of vibration composed of 2048 samples at a 560 Hz sampling rate (0.002 s/sample); takes 1.3 s calculating the FFT, performing curve fitting and transmitting.
- Fully decentralized: 10 s to decide over 0.5 s of vibration composed of 512 samples at a 1000 Hz sampling rate (0.020 s/sample); takes 9.5 s to perform FFT, peak extraction, data collection by CHs, coefficient calculation and collaborative decision.

Network traffic:
- Fully centralized: transmits the full sample array; generates 667 bytes of traffic per second of network operation.
- Partially decentralized: transmits only the coefficients needed to perform a curve fitting; generates 300 bytes of traffic per second of network operation.
- Fully decentralized: transmits only natural frequencies between the sensor and the CH; generates 28 bytes of traffic per second of network operation.

Hackmann et al. also stated that the execution of all the procedures needed for extracting the frequencies inside the mote may not always be advantageous: this effort might cause an amount of memory consumption that does not justify the reduction in the amount of data transmitted. In our proposal we pushed this barrier forward, performing all the processing needed to extract only one coefficient from the whole sampled array. Although our choice might have generated a suboptimal system from the point of view of memory usage in the stage between the sensor nodes and CHs, it is important to mention that our system presents a stage of collaborative decision among CHs, which is greatly improved by exchanging only one message containing a simple number among the neighbors, and such a stage is not provided by the other solutions. The comparison in terms of latency points out a better performance of our system when compared to a fully centralized approach. While Wisden needs 0.07 s per collected sample to transmit all results and allow a complete analysis to start being performed at the sink node, our system needs 0.02 s per collected sample to perform all the required calculations and to communicate the results for taking a collaborative decision. The comparison was normalized according to the number of collected samples, since 1 min of vibration data contains more samples when a higher sampling rate is adopted. However, comparing our results with the results from the implementation of Hackmann et al. reveals, at first sight, a much better performance for the partially decentralized approach. But the much smaller value of 0.002 s per sample is masked by the much greater computational power of the Imote2 platform. This platform performs all the processing at a much higher speed, which caused the latency to assume smaller values than it would if the implementation had been made for the MICAz platform. However, even considering the same platforms, the latency observed in our proposal should be greater than the latency of the partially decentralized system, since our proposal provides a stage of collaborative decision among the CHs. The comparison in terms of network traffic generated by each work provides better results for our work. For this comparison, we consider a network composed of 15 nodes for each of the three works. For the case of Wisden we consider an estimate of the transmission of the full array of samples from 15 nodes, considering a 100% duty cycle (a structure which vibrates 100% of the time). For the case of Hackmann et al. we consider an estimate of the amount of traffic generated by 15 nodes transmitting the



coefficients needed to perform the curve fitting stage at the sink node during 10 immediately sequential rounds of data collection (approximately a 100% duty cycle). For the case of our proposal we consider an estimate of the amount of traffic generated by a single cluster of 16 nodes (1 CH and 15 sensor nodes) during 10 data collection stages, with a 100% duty cycle, the configuration of our system which most closely reproduces the configurations of the other works. In the case of Wisden and our proposal, the bytes from the message headers are accounted for, while for the case of Hackmann et al. only an estimate of the payload size is accounted for, since there was no information about the sizes of the message structures in the respective paper. The comparison reveals a very small amount of network traffic generated by our proposal. This is a measure of the lightness of the communication among CHs, which exchange messages to collaboratively decide on the structural integrity. In short, our proposal performs worse than a centralized and a partially decentralized approach in terms of memory consumption and latency. However, we believe that the choice of consuming more memory and time for processing pays off, since it enables the key features of our system. The light communication among CHs through messages carrying small payloads, generating a small amount of traffic, enables: (i) decision making within the network through collaborative mechanisms, allowing local and faster acting against undesired conditions, and (ii) low use of radio resources, therefore enabling a longer lifetime of the system due to energy saving, and freeing the remaining radio bandwidth for the adoption of other auxiliary protocols for improving the communication capabilities, especially on the layer of communication among CHs.

7. Conclusions

Our proposal builds on the joint use of different information fusion techniques, in order to reduce data overhead and allow a more rational use of the energy in the WSN.
Therefore, one important contribution of this work is the performance testing of a multilevel fusion technique in the context of WSNs. Another contribution lies in the fact that the proposed damage detection, localization and extent determination approach is a first step towards a fully decentralized and autonomous SHM system, since there are few proposals of this kind in the current literature. The tests on the performance of the Sensor-SHM algorithm achieved good results. Overall, the use of the information fusion techniques presented in our work reduced the transmission of data messages. Moreover, the results showed that it is possible to isolate a single damage position and localize it using our algorithm, in a case where damage on a structure makes its frequencies shift as described in this work. The FFT was the most important information fusion technique used in our work in terms of communication overhead: it reduced the need for transmissions from 512 values to only 5 values. The other information fusion techniques then helped to reduce these 5 values to only 1 value; as a consequence, such a reduction allowed a faster and less energy-consuming information exchange among CHs. False negative avoidance is one of the main reasons for stimulating such collaboration mechanisms among the nodes in this algorithm. Some lessons were learned from this work. First, the resource-constrained environment of WSNs is one of the most restrictive factors hindering their wide adoption for SHM applications. The amount of RAM and the processing speed are still not enough for performing the most important information fusion technique, as previously mentioned, the FFT, with a satisfactory resolution of the frequency axis, which resulted in a high imprecision (of 2 Hz) in the determination of the natural frequencies, mostly due to truncations. The solution to this problem is either to reduce the sampling rate to below 1.0 kHz, reducing the sampling

potential of the system, or to increase the amount of memory in the sensor. The development of sensors with more computational power, as well as new information fusion techniques and approaches to the problem dealt with in our work, are some of the open challenges in the field.

Acknowledgements

This work is partly supported by the National Council for Scientific and Technological Development (CNPq) through processes 481638/2007-5 for Neuman Souza, Luci Pirmez and Flávia C. Delicato; 4781174/2010-1 and 309270/2009-0 for Luci Pirmez; 311363/2011-3, 470586/2011-7 and 201090/2009-0 for Flávia C. Delicato; by the Financier of Studies and Projects (FINEP) through processes 01.10.0549.00 and 01.10.0064.00 for Luiz Vaz and Luci Pirmez; and by the Foundation for Research of the State of Rio de Janeiro (FAPERJ) through process E-26/101.360/2010 for Luci Pirmez.

References
[1] J. Yick, B. Mukherjee, D. Ghosal, Wireless sensor network survey, Comput. Netw. 52 (2008) 2292–2330.
[2] H. Sohn, C.R. Farrar, F.M. Hemez, D.D. Shunk, D.W. Stinemates, B.R. Nadler, J.J. Czarnecki, A review of structural health monitoring literature: 1996–2001, Los Alamos National Laboratory, Los Alamos, New Mexico, USA, 2004, pp. 301.
[3] S. Kim, S. Pakzad, D. Culler, J. Demmel, G. Fenves, S. Glaser, M. Turon, Health monitoring of civil infrastructures using wireless sensor networks, in: 6th International Conference on Information Processing in Sensor Networks, ACM Press, Cambridge, Massachusetts, USA, 2007, pp. 9.
[4] K. Chintalapudi, J. Paek, N. Kothari, S. Rangwala, J. Caffrey, R. Govindan, E. Johnson, S. Masri, Monitoring civil structures with a wireless sensor network, IEEE Internet Comput. 10 (2006) 26–34.
[5] E.F. Nakamura, A.A.F. Loureiro, A.C. Frery, Information fusion for wireless sensor networks: methods, models, and classifications, ACM Comput. Surv. 39 (2007) 1–55.
[6] B.V. Dasarathy, Sensor fusion potential exploitation — innovative architectures and illustrative applications, Proc. IEEE 85 (1997) 24–38.
[7] I.L. dos Santos, L. Pirmez, E.T. Lemos, L.A.V. Pinto, F.C. Delicato, J.N. de Souza, Resource consumption analysis for a structural health monitoring algorithm using wireless sensor networks, in: XXVIII Simpósio Brasileiro de Redes de Computadores e Sistemas Distribuídos, Sociedade Brasileira de Computação, Gramado, Rio Grande do Sul, Brazil, 2010, pp. 13.
[8] E.H. Clayton, Y. Qian, O. Orjih, S.J. Dyke, A. Mita, C. Lu, Off-the-shelf modal analysis: structural health monitoring with motes, in: XXIV International Modal Analysis Conference, Society for Experimental Mechanics, St. Louis, Missouri, USA, 2006, pp. 12.
[9] N. Xu, S. Rangwala, K.K. Chintalapudi, D. Ganesan, A. Broad, R. Govindan, D. Estrin, A wireless sensor network for structural monitoring, in: ACM Conference on Embedded Networked Sensor Systems, ACM Press, Baltimore, Maryland, USA, 2004, p. 11.
[10] J. Paek, K. Chintalapudi, J. Cafferey, R. Govindan, S. Masri, A wireless sensor network for structural health monitoring: performance and experience, in: II IEEE Workshop on Embedded Networked Sensors, IEEE Computer Society, Sydney, New South Wales, Australia, 2005, pp. 9.
[11] J. Caffrey, R. Govindan, E. Johnson, B. Krishnamachari, S. Masri, G. Sukhatme, K. Chintalapudi, K. Dantu, S. Rangwala, A. Sridharan, N. Xu, M. Zuniga, Networked sensing for structural health monitoring, in: IV International Workshop on Structural Control, Columbia University, New York, New York, USA, 2004, pp. 10.
[12] P. Cawley, R.D. Adams, The localization of defects in structures from measurements of natural frequencies, J. Strain Anal. 14 (1979) 49–57.
[13] A. Messina, A.I. Jones, J.E. Williams, Damage detection and localization using natural frequency changes, in: Conference on Identification in Engineering Systems, University of Wales, Swansea, Wales, UK, 1996, pp. 9.
[14] G. Hackmann, F. Sun, N. Castaneda, C. Lu, S. Dyke, A holistic approach to decentralized structural damage localization using wireless sensor networks, in: XXIX IEEE Real-Time Systems Symposium, IEEE Computer Society, Barcelona, Spain, 2008, pp. 12.
[15] M. Wang, J. Cao, B. Chen, Y. Xu, J. Li, Distributed processing in wireless sensor networks for structural health monitoring, Lect. Notes Comput. Sci. 4611 (2007) 103–112.
[16] S. Meguerdichian, S. Slijepcevic, V. Karayan, M. Potkonjak, Localized algorithms in wireless ad-hoc networks: location discovery and sensor exposure, in: II ACM International Symposium on Mobile Ad Hoc Networking and Computing, ACM Press, New York, USA, 2001, p. 10.
[17] A.R. da Rocha, I.L. dos Santos, L. Pirmez, F.C. Delicato, D.G. Gomes, J.N. de Souza, Semantic clustering in wireless sensor networks, in: III IFIP International Conference on Wireless Communications and Information Technology in Developing Countries, Springer, Boston, Brisbane, Australia, 2010, p. 11.

[18] W.B. Heinzelman, A. Chandrakasan, H. Balakrishnan, An application-specific protocol architecture for wireless microsensor networks, IEEE Trans. Wirel. Commun. 1 (2002) 660–670.
[19] W.B. Heinzelman, A. Chandrakasan, H. Balakrishnan, Energy-efficient communication protocol for wireless microsensor networks, in: Hawaii International Conference on System Sciences, IEEE Computer Society, Island of Maui, Hawaii, USA, 2000, pp. 10.
[20] R.M. Assunção, M.C. Neves, G. Câmara, C.C. Freitas, Efficient regionalisation techniques for socio-economic geographical units using minimum spanning trees, Int. J. Geogr. Inf. Sci. 20 (2006) 797–811.
[21] MICAz Datasheet, Crossbow Technologies, Milpitas, California.


[22] P. Buonadonna, J. Hill, D. Culler, Active message communication for tiny networked sensors, in: XX Annual Joint Conference of the IEEE Computer and Communications Societies, IEEE, Anchorage, Alaska, USA, 2001, pp. 11.
[23] J. Hill, R. Szewczyk, A. Woo, S. Hollar, D. Culler, K. Pister, System architecture directions for networked sensors, in: IX International Conference on Architectural Support for Programming Languages and Operating Systems, ACM Press, Cambridge, Massachusetts, USA, 2000, p. 11.
[24] J. Hill, A Software Architecture Supporting Networked Sensors, Master's Thesis, Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, California, USA, 2000, pp. 68.
