
Appl Compos Mater (2010) 17:1-14. DOI 10.1007/s10443-009-9090-x

Predicting the Fatigue Life of Different Composite Materials Using Artificial Neural Networks
M. Al-Assadi · H. El Kadi · I. M. Deiab

Received: 28 May 2009 / Accepted: 30 June 2009 / Published online: 16 July 2009
© Springer Science + Business Media B.V. 2009

M. Al-Assadi · H. El Kadi (*) · I. M. Deiab
College of Engineering, American University of Sharjah, Sharjah, UAE
e-mail: hkadi@aus.edu

Abstract Artificial Neural Networks (ANN) have recently been used in modeling the mechanical behavior of fiber-reinforced composite materials, including fatigue behavior. The use of ANN in predicting fatigue failure in composites would be of great value if one could predict the failure of materials other than those used for training the network. This would allow developers of new materials to estimate in advance the fatigue properties of their material. In this work, experimental fatigue data obtained for certain fiber-reinforced composite materials is used to predict the cyclic behavior of a composite made of a different material. The effects of the neural network architecture and of the training function used were also investigated. In general, ANN provided accurate fatigue life predictions for materials not used in training the network when compared to experimentally measured results.

Keywords: Fatigue · Artificial neural networks · Fiber reinforced composite materials

1 Introduction

Polymer-matrix composites are finding increased use in aerospace, automotive, marine and civil infrastructure applications. In many of these applications, the material used is subjected to cyclic loading, raising questions about the fatigue behavior of these materials. Since most of these composites are made from laminates consisting of unidirectional laminae, predicting the fatigue behavior of these laminae could be the initial step towards predicting the overall behavior of the laminate under cyclic loading.

Artificial Neural Networks (ANN) have proved to be useful for various engineering applications. Due to their massively parallel structure, ANN can handle multivariable non-linear modeling problems for which an accurate analytical solution is difficult to obtain. ANN have already been used in medical applications, image and speech recognition, classification and control of dynamic systems, among others; but only recently have they been used in modeling the mechanical behavior of fiber-reinforced composite materials [1, 2].


The network can be treated as a black box: it is unnecessary to know the details of its internal behavior. These networks may therefore offer an accurate and cost-effective approach for modeling fatigue life.

Artificial neural networks have previously been used to predict the fatigue life of the same composite material used in the training process of the ANN under different loading conditions. El Kadi, in a recent review [1], showed that ANN can give predictions as accurate as, if not better than, those normally obtained by conventional methods. It was also shown that the accuracy of the network depends on the appropriate ANN architecture, the number of hidden layers and the number of neurons in each hidden layer.

Lee et al. [3] evaluated the performance of neural networks in predicting the fatigue life of carbon fiber and glass fiber-reinforced composites subjected to different stress ratios. The maximum and minimum stress and the failure probability level were used as the input parameters while the number of cycles to failure was the output from the network. They concluded that ANN can be trained to model constant-stress fatigue behavior at least as well as other current life-prediction methods.

Al-Assaf and El Kadi [4] trained a feed-forward neural network (FNN) to predict fatigue failure of unidirectional glass/epoxy under tension-tension and tension-compression loading. They used a unidirectional material with different fiber orientation angles subjected to three stress ratios. The input parameters to the network were the stress ratio, the fiber orientation and the maximum stress, while the output was the number of cycles to failure. A back-propagation training algorithm was used. In spite of the small number of training data points used, the ANN predictions compared well with the experimental data. In later work, El Kadi and Al-Assaf [5] used different neural network structures to predict fatigue life and compared the results with those previously obtained using FNN. As before, the fiber orientation angle, the maximum stress and the stress ratio were the inputs to the network predicting the fatigue life of the unidirectional glass/epoxy composite. Four ANN architectures were investigated: modular neural networks (MNN), radial basis function networks (RBF), self-organizing feature maps (SOFM) and principal component analysis (PCA). They concluded that MNN with five sub-networks gave the best results: the normalized mean-square error was reduced from 14.27% in the case of FNN to 5.7% for MNN.

Constant fatigue life diagrams are typically used to show the overall behavior of a material under cyclic conditions for various stress ratios. Building such a diagram requires a large number of experimental results, and the successful use of artificial neural networks in building constant fatigue life diagrams for composite materials would reduce the number of experiments needed. Freire Jr. et al. [6] used twelve different S-N curves for a plastic reinforced with fiberglass of [90/0/45/0]s configuration in conjunction with a multi-layer FNN trained with a back-propagation algorithm. The input parameters were the mean stress and the number of cycles while the lone output parameter was the alternating stress. One hidden layer containing 230 neurons was used in the study. Training of the ANN was attempted with data obtained from 3, 4, 5 and 6 values of the stress ratio.
Satisfactory results were obtained even when only three S-N curves were used in the training; increasing the number of training S-N curves produced more reliable solutions. In a follow-up study [7], using a modular neural network in the development of the constant life diagram led to better results compared to the traditional feed-forward neural network. As in the FNN study, the results obtained using 6 values of the stress ratio in the training were the best. They also concluded that increasing the number of modules gives better predictions only if the training set is also increased.


One of the anticipated benefits of the successful application of ANN would be the possibility of predicting the cyclic behavior of a material for which no fatigue data is available by using the known characteristics of other composites. Lee et al. [3] trained an ANN on fatigue data from four different material systems to predict the cyclic behavior of an additional material not used in the training. The results obtained appear unsatisfactory: the average root mean square error (RMSE) was of the order of 100% for a material with the same fiber system, and of the order of 170% if the fiber used in the trained system was not of the same type used for the tested case. They consequently concluded that there seems little prospect of transferring the predictive capability of a network with any degree of accuracy from one family of composites to another. El Kadi and Al-Assaf [8], in a preliminary study, trained an MNN to predict the number of cycles to failure for different composite materials subjected to fatigue under a constant stress ratio. Seven different materials were used to train the neural network in order to predict the fatigue behavior of an eighth material. The RMSE obtained was found to be 36.2%. The current investigation is a continuation of this work.

2 Experimental Fatigue Data

This work addresses the behavior of unidirectional fiber-reinforced composites subjected to tension-tension fatigue loads. Constant stress ratio fatigue data collected from a variety of published investigations [9-15] are used to test the suitability of artificial neural networks in predicting the fatigue behavior of a composite not used in the training of the network. Once the procedure has been shown to generate acceptable predictions, the same method can be extended to predict the fatigue behavior under different values of the stress ratio. The experimental data used here is obtained for a constant stress ratio, $R = \sigma_{min}/\sigma_{max} = 0.1$. Table 1 shows the materials and fiber orientation angles of the experimental data used.
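As a concrete reading of this ratio (an illustrative number, not a test condition taken from the referenced programs): at $R = 0.1$, a specimen cycled to $\sigma_{max} = 200$ MPa sees $\sigma_{min} = 0.1 \times 200 = 20$ MPa, i.e. a stress amplitude $\sigma_a = (\sigma_{max} - \sigma_{min})/2 = 90$ MPa about a mean stress $\sigma_m = (\sigma_{max} + \sigma_{min})/2 = 110$ MPa, so the loading remains tension-tension throughout the cycle.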

3 Artificial Neural Networks

ANN generally consist of a number of layers (Fig. 1). The layer where the input patterns are applied is called the input layer; it could include the static and cyclic properties of the composite material under consideration, its lay-up, the maximum applied stress, the stress ratio, etc.
Table 1 Experimental fatigue data used in the current investigation

Material                          Fiber orientation angles     Reference
Gevetex/Bakelite E-Glass/Epoxy    0, 5, 10, 15, 20, 30, 60     Hashin & Rotem [9]
AS/3501-5A Graphite/Epoxy         0, 10, 20, 30, 45, 60, 90    Awerbuch & Hahn [10]
Scotchply 1003 Glass/Epoxy        0, 19, 45, 71, 90            El Kadi & Ellyin [11]
E-Glass/Polyester                 0, 15, 45, 75, 90            Philippidis & Vassilopoulos [12]
T800H/2500 Carbon/Epoxy           0, 10, 15, 30, 45, 90        Kawai & Suda [13]
DOE-MSU Glass/Polyester           0, 90                        Epaarachchi & Clausen [14]
XAS/914 Carbon/Epoxy              0                            Fernando et al. [15]
Kevlar/914 Kevlar/Epoxy           0                            Fernando et al. [15]


Fig. 1 General configuration of a feed-forward artificial neural network (with permission of Elsevier) [1]

The layer where the output is obtained is the output layer, which could, for example, contain the fatigue life of the composite under the specific loading conditions. In addition, there may be one or more layers between the input and output layers called hidden layers, so named because their outputs are not directly observable. The addition of hidden layers enables the network to extract higher-order statistics, which are particularly valuable when the size of the input layer is large [16]. Neurons in each layer are fully or partially interconnected to preceding and subsequent layer neurons, with each interconnection having an associated connection strength (or weight). The input signal propagates through the network in a forward direction, on a layer-by-layer basis. These networks are commonly referred to as multilayer feed-forward neural networks (FNN).

Many publications discuss the development and theory of ANN (see for example [16-19]). Although all neural network models share common operational features, their input requirements and their modeling and generalization abilities differ. Consequently, each paradigm has advantages and disadvantages depending on the particular application, and selecting the appropriate network class with suitable parameters is vital to ensure a successful application. More details about the various ANN structures, their similarities and their differences can be found in [16-19]. In addition to the feed-forward neural network, the following architectures will be used in this work:

Cascade-Forward Neural Networks (CFFN) CFFN are similar to feed-forward networks, but include a weight connection from the input to each layer, and from each layer to the successive layers. For example, a three-layer network has connections from layer 1 to layer 2, from layer 2 to layer 3, and from layer 1 to layer 3; it also has connections from the input to all three layers. The additional connections might improve the speed at which the network learns the desired relationship. A minimal forward-pass sketch contrasting the feed-forward and cascade-forward cases is given below.

Elman Networks (ELM) ELM are multi-layer back-propagation networks, with the addition of a feedback connection from the output of the hidden layer to its input. This recurrent connection allows Elman networks to learn to recognize and generate temporal patterns as well as spatial patterns [19]. A two-layer Elman network is shown in Fig. 2 [20].
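To make the feed-forward/cascade-forward distinction concrete, here is a minimal forward-pass sketch in Python/NumPy (the study itself used MATLAB [21]; the layer sizes and random weights below are illustrative assumptions only):

    import numpy as np

    rng = np.random.default_rng(0)
    n_in, n_hid, n_out = 6, 16, 1           # e.g. 6 inputs, 16 hidden neurons, 1 output

    # feed-forward path: input -> hidden -> output
    W1 = rng.normal(size=(n_hid, n_in))
    W2 = rng.normal(size=(n_out, n_hid))
    # the cascade-forward variant adds a direct input -> output connection
    W_io = rng.normal(size=(n_out, n_in))

    def fnn(x):
        h = np.tanh(W1 @ x)                 # tansig hidden layer
        return W2 @ h                       # linear (purelin) output

    def cffn(x):
        h = np.tanh(W1 @ x)
        return W2 @ h + W_io @ x            # hidden path plus direct input path

    x = rng.random(n_in)                    # one normalized input pattern
    print(fnn(x), cffn(x))

The only structural difference is the extra weight matrix W_io carrying the input straight to the output layer; with more layers, every earlier layer would feed every later one in the same way.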


Fig. 2 A two-layer recurrent Elman neural network architecture, with a tansig hidden layer, a purelin output layer, and recurrent weights feeding the hidden-layer output back to its input (with permission of Elsevier) [20]

Layer Recurrent Networks (LRN) An LRN distinguishes itself from a feed-forward neural network in that it has at least one feedback loop. For example, an LRN may consist of a single layer of neurons with each neuron feeding its output signal back to the inputs of all the other neurons. The presence of feedback loops has a profound impact on the learning capabilities of the network and its performance. Moreover, the feedback loops involve the use of particular branches composed of unit-delay elements, resulting in nonlinear behavior provided the neural network contains nonlinear units [17]. Since fatigue prediction is not a temporal process, the advantage of using an LRN to predict fatigue life is not obvious. However, the feedback structure of the network may benefit the overall weight calculation and thus yield a better static network. A one-step sketch of this kind of recurrence is given below.
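The recurrence common to ELM and LRN can be written as a single step in which the previous hidden state re-enters the hidden layer. A minimal sketch, again with illustrative sizes and random weights rather than the networks actually trained in this study:

    import numpy as np

    rng = np.random.default_rng(1)
    n_in, n_hid, n_out = 6, 16, 1
    Wx = rng.normal(size=(n_hid, n_in))     # input  -> hidden weights
    Wh = rng.normal(size=(n_hid, n_hid))    # hidden -> hidden (recurrent) weights
    Wo = rng.normal(size=(n_out, n_hid))    # hidden -> output weights

    def elman_step(x, h_prev):
        h = np.tanh(Wx @ x + Wh @ h_prev)   # tansig hidden layer with feedback
        y = Wo @ h                          # purelin output layer
        return y, h

    h = np.zeros(n_hid)                     # context units start at zero
    for x in rng.random((3, n_in)):         # a short sequence of input patterns
        y, h = elman_step(x, h)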

4 ANN Training Algorithms

The back-propagation training algorithm [18] is commonly used to iteratively minimize the following cost function with respect to the interconnection weights and neuron thresholds:

$$E = \frac{1}{2} \sum_{p=1}^{P} \sum_{i=1}^{N} \left( d_i - O_i \right)^2$$

where $P$ is the number of experimental data pairs used in training the network and $N$ is the number of output parameters expected from the ANN; $d_i$ and $O_i$ are, respectively, the experimental number of cycles to failure and the current life prediction of the ANN for each loading condition $i$. Iteratively, the interconnection weights between the $j$th node and the $i$th node are updated as:

$$w_{ji}(t+1) = \alpha \, w_{ji}(t) + \eta \, x_i \, f'(net_{jk}) \sum_{l=1}^{N} (d_l - O_l) \, f'(net_l^{0}) \, w_{lj}$$

where $\alpha$ is a momentum constant, $\eta$ the learning rate, $x_i$ the input pattern at iterative sample $t$, $net_l^{0}$ the input to node $l$ at the output layer, $net_{jk}$ the input to node $j$ in the $k$th layer, and $f'$ the derivative of the neuron activation function.
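As a concrete reading of this update, here is a minimal back-propagation sketch for a one-hidden-layer network in Python/NumPy. The hyperparameters are illustrative assumptions, biases are omitted for brevity, and momentum is applied in its conventional form, to the weight increment:

    import numpy as np

    def train_backprop(X, d, n_hid=16, eta=0.05, alpha=0.9, epochs=1000):
        """Plain back-propagation with momentum, one hidden layer.
        X: (P, n_in) normalized inputs; d: (P,) normalized targets."""
        rng = np.random.default_rng(0)
        W1 = rng.normal(0.0, 0.5, (n_hid, X.shape[1])); dW1 = np.zeros_like(W1)
        W2 = rng.normal(0.0, 0.5, (1, n_hid));          dW2 = np.zeros_like(W2)
        for _ in range(epochs):
            for x, t in zip(X, d):
                h = np.tanh(W1 @ x)                    # hidden layer, f = tanh
                y = W2 @ h                             # linear output O
                delta_o = t - y                        # (d - O), f' = 1 at the output
                delta_h = (1.0 - h**2) * (W2.T @ delta_o)   # back-propagated error
                dW2 = alpha * dW2 + eta * np.outer(delta_o, h)
                dW1 = alpha * dW1 + eta * np.outer(delta_h, x)
                W2 += dW2; W1 += dW1
            # training would normally stop once the error on the training set
            # drops below a pre-specified threshold (see text)
        return W1, W2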


The learning rate determines what amount of the calculated error sensitivity to weight change will be used for the weight correction. It affects the convergence speed and the stability of the weights during learning. The best value of the learning rate depends on the characteristics of the error surface: for rapidly changing surfaces a smaller rate is desirable, while for smooth surfaces a larger value will speed up convergence. The momentum constant (usually between 0.1 and 1) smoothes weight updating, prevents oscillations in the system, and helps the system escape local minima in the training process by making the system less sensitive to local changes. As with the learning rate, the best value of the momentum constant is peculiar to the specific error surface contours. The training process is terminated either when the Mean-Square-Error (MSE), Root-Mean-Square-Error (RMSE) or Normalized-Mean-Square-Error (NMSE) between the actual experimental results and the ANN predictions obtained for all elements in the training set has reached a pre-specified threshold, or after the completion of a pre-specified number of learning epochs. In addition to the typical back-propagation algorithm, the following training functions are also considered in this study (a sketch of the resilient update rule follows this list):

Resilient Back-Propagation (RP) Multilayer networks typically use sigmoid transfer functions in the hidden layers. These functions are often called squashing functions, because they compress an infinite input range into a finite output range. Sigmoid functions are characterized by the fact that their slopes must approach zero as the input gets large. This causes a problem when steepest descent is used to train a multilayer network with sigmoid functions: the gradient can have a very small magnitude and therefore cause only small changes in the weights and biases, even though the weights and biases are far from their optimal values. The purpose of the resilient back-propagation training algorithm is to eliminate these harmful effects of the magnitudes of the partial derivatives [21].

Gradient Descent (GD) In the steepest descent training function, the weights and biases are updated in the direction of the negative gradient of the performance function. The learning rate is multiplied by the negative of the gradient to determine the changes to the weights and biases; the larger the learning rate, the bigger the step. If the learning rate is made too large, the algorithm becomes unstable; if it is set too small, the algorithm takes a long time to converge. Training stops if the number of iterations exceeds the predetermined number of epochs, the performance function drops below a specific goal, the magnitude of the gradient falls below a stipulated value, or the training time surpasses a preset time [21].

Gradient Descent with Momentum (GDM) Gradient descent with momentum allows a network to respond not only to the local gradient, but also to recent trends in the error surface. Acting like a low-pass filter, momentum allows the network to ignore small features in the error surface. Without momentum a network can get stuck in a shallow local minimum; with momentum it can slide through such a minimum [21].

Variable Learning Rate (GDA) With standard steepest descent, the learning rate is held constant throughout training, and the performance of the algorithm is very sensitive to its proper setting. If the learning rate is set too high, the algorithm can oscillate and become unstable; if it is too small, the algorithm takes too long to converge. It is not practical to determine the optimal setting for the learning rate before training; in fact, the optimal learning rate changes during the training process, as the algorithm moves across the performance surface.

Appl Compos Mater (2010) 17:114

The performance of the steepest descent algorithm can therefore be improved by allowing the learning rate to change during the training process. An adaptive learning rate attempts to keep the learning step size as large as possible while keeping learning stable; the learning rate is made responsive to the complexity of the local error surface [21].

Variable Learning Rate with Momentum (GDX) This function combines the adaptive learning rate with momentum training. It is invoked in the same way as GDA, except that it has the momentum coefficient as an additional training parameter [21].
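Because resilient back-propagation turns out to perform best in this study (Section 5.1), a sketch of its per-weight update rule may help. This follows the common iRPROP- variant and is an illustrative assumption, not MATLAB's internal implementation:

    import numpy as np

    def rprop_update(w, grad, prev_grad, step,
                     inc=1.2, dec=0.5, step_min=1e-6, step_max=50.0):
        """One resilient back-propagation step: only the SIGN of each partial
        derivative is used, so vanishing sigmoid-tail gradients no longer
        produce vanishing weight changes."""
        same = grad * prev_grad > 0           # gradient kept its sign
        flip = grad * prev_grad < 0           # gradient changed sign
        step = np.where(same, np.minimum(step * inc, step_max), step)
        step = np.where(flip, np.maximum(step * dec, step_min), step)
        grad = np.where(flip, 0.0, grad)      # skip the update after a sign flip
        w = w - np.sign(grad) * step          # move by the adapted step only
        return w, grad, step                  # returned grad becomes next prev_grad

Each weight keeps its own step size: steps grow while the gradient sign is stable and shrink when it flips, so the magnitude of the partial derivative never enters the update.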

5 Predicting Fatigue Life Using ANN

In this study, the input parameters to the ANN comprised a combination of the following monotonic and cyclic properties: the modulus of elasticity in the fiber direction ($E_0$), the modulus of elasticity in the direction perpendicular to the fibers ($E_{90}$), the tensile strength of the laminate in the fiber direction ($S_{0T}$), the tensile strength of the laminate in the direction perpendicular to the fibers ($S_{90T}$), the fiber orientation angle ($\theta$) and the maximum applied stress ($\sigma_{max}$). The number of cycles to failure ($N_f$) was the sole output from the network.

Since the number of cycles to failure varied between 10 and 8,000,000 cycles, training the networks to learn such a wide range directly would produce unacceptable and unbalanced modeling performance: the ANN strives to minimize the overall error for all input patterns, so minimizing the difference between the network output and the observed data for high numbers of stress cycles would lead to incorrect results for the patterns associated with lower numbers of cycles to failure. A more suitable method is to normalize the logarithmic values of the number of cycles to the range 0 to 1. The maximum applied stress varied between 12 and 1900 MPa; these values were also normalized after taking the logarithm of the stress, reducing the scale to values between 0 and 1. All other mechanical properties, as well as the fiber orientation angles, were normalized linearly between 0 and 1 in the usual fashion (this preprocessing is sketched below).

The MATLAB software [21] was used to construct, train and test the networks. Fatigue experimental data from the materials shown in Table 1 was used to train and test each network: the network was trained using all but one of the materials, while testing was done on the remaining material. The effects of the ANN architecture, the training algorithm and the number of neurons per hidden layer were considered to obtain the optimum fatigue life prediction.

5.1 Effect of Training Functions

The training functions introduced in Section 4 were used to predict the fatigue life of the various materials. Table 2 shows a typical comparison between the RMSE obtained with the various training functions using 16 and 20 neurons for Scotchply 1003 Glass/Epoxy laminae with 0, 19, 45, 71 and 90 degree fiber orientations. As shown, resilient back-propagation resulted in the lowest RMSE in all cases considered. Similar results were obtained for other materials [22]. Therefore, throughout the rest of this study, only results obtained using this training function will be reported.
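As a concrete reading of the preprocessing and of the all-but-one evaluation protocol described above, here is a brief sketch in Python/NumPy; the data layout and helper names are hypothetical, not taken from the study itself:

    import numpy as np

    def normalize_log(v, lo, hi):
        """Map log10(v) linearly onto [0, 1]; applied to Nf (10 .. 8e6 cycles)
        and to the maximum stress (12 .. 1900 MPa)."""
        return (np.log10(v) - np.log10(lo)) / (np.log10(hi) - np.log10(lo))

    def normalize_lin(v, lo, hi):
        """Plain min-max scaling for moduli, strengths and fiber angles."""
        return (v - lo) / (hi - lo)

    def all_but_one_split(data, test_material):
        """data: dict mapping material name -> array of training patterns.
        Train on every material except one; test on the held-out one."""
        train = np.concatenate([d for m, d in data.items() if m != test_material])
        return train, data[test_material]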


Table 2 RMSE obtained as a function of the type of architecture and training function

                                             FFN                CFFN               ELM
Training function                            16 n.    20 n.     16 n.    20 n.     16 n.    20 n.
Resilient back-propagation (RP)              15.60%   9.70%     16.80%   21.10%    13.60%   15.60%
Gradient descent (GD)                        53.50%   23.10%    34.50%   40.00%    17.62%   35.10%
Gradient descent with momentum (GDM)         15.70%   17.80%    35.40%   26.80%    19.50%   25.20%
Variable learning rate (GDA)                 33.10%   19.10%    22.20%   23.10%    34.60%   17.02%
Variable learning rate with momentum (GDX)   17.60%   24.30%    23.90%   30.60%    30.40%   18.70%

5.2 Effect of Number of Hidden Neurons

Using feed-forward neural networks with resilient back-propagation training, the effect of varying the number of hidden neurons on the fatigue life prediction was investigated: the number of neurons in the hidden layer was varied to obtain the lowest RMSE (a sketch of such a sweep is given after Fig. 3). Figure 3 shows typical variations of the RMSE with the number of hidden neurons obtained while predicting the fatigue life of Scotchply 1003 glass/epoxy, AS/3501-5A graphite/epoxy and T800H/2500 carbon/epoxy.

Fig. 3 Effect of the number of hidden neurons on the RMSE for different materials (Scotchply 1003 Glass/Epoxy, AS/3501-5A Graphite/Epoxy, T800H/2500 Carbon/Epoxy; RMSE vs. number of hidden neurons, 0-30)
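The sweep behind Fig. 3 amounts to training one network per candidate layer size and keeping the size with the lowest test RMSE. A sketch, where make_net is a hypothetical factory returning a trained predictor (e.g. built on the back-propagation routine sketched in Section 4) and the RMSE is assumed to be computed on the normalized log-life scale:

    import numpy as np

    def rmse(pred, target):
        """Root-mean-square error on the normalized log-life values."""
        return float(np.sqrt(np.mean((pred - target) ** 2)))

    def best_hidden_size(test, make_net, sizes=range(2, 31, 2)):
        """Train one network per hidden-layer size; return the best size.
        test: dict with held-out inputs "X" and targets "y" (hypothetical)."""
        scores = {n: rmse(make_net(n)(test["X"]), test["y"]) for n in sizes}
        best = min(scores, key=scores.get)
        return best, scores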


5.3 Effect of Network Architecture

Using resilient back-propagation training, different network architectures were used to predict the fatigue life of some of the materials under consideration. As mentioned before, in each case all but one of the materials were used to train the neural network and the remaining material was used to test it.

Figures 4, 5, 6 and 7 show typical fatigue life predictions of AS/3501-5A Graphite/Epoxy using different neural network architectures with one hidden layer containing 16-20 neurons. For comparison purposes, resilient back-propagation is used in all cases. The RMSE obtained using the feed-forward, cascade-forward, Elman and layer recurrent networks was found to be 12.3%, 8.8%, 9.2% and 13.25% respectively. The figures show that the cascade-forward architecture trained with resilient back-propagation predicts fatigue life with the least error. The figures also show a shift between the experiments and the predicted values along the $N_f$ axis for the laminae with zero fiber orientation. This might be due to the significantly different failure modes of these laminae compared to all other off-axis specimens.

A similar investigation was conducted to predict the fatigue life of other composites. Figures 8, 9 and 10 show typical fatigue life predictions of Scotchply 1003 glass/epoxy obtained using different network architectures and numbers of hidden neurons. The best predictions in this case were obtained using a layer recurrent neural network with one hidden layer and 16 hidden neurons (RMSE = 11.5%). In this case, the above-mentioned shift observed along the $N_f$ axis is not as prevalent as for the Graphite/Epoxy.

Figure 11 shows an example of the fatigue life prediction of T800H/2500 carbon/epoxy using a feed-forward back-propagation network with one hidden layer and 20 hidden neurons.

Fig. 4 Fatigue life prediction of AS/3501-5A Graphite/Epoxy using FNN with 20 neurons: maximum applied stress (MPa) vs. number of cycles to failure, ANN predictions and experimental data for 0-90 degree fiber orientations (RMSE = 12.3%)

Fig. 5 Fatigue life prediction of AS/3501-5A Graphite/Epoxy using CFFN with 20 neurons: maximum applied stress (MPa) vs. number of cycles to failure, ANN predictions and experimental data for 0-90 degree fiber orientations (RMSE = 8.8%)

Fig. 6 Fatigue life prediction of AS/3501-5A Graphite/Epoxy using ELM with 20 neurons: maximum applied stress (MPa) vs. number of cycles to failure, ANN predictions and experimental data for 0-90 degree fiber orientations (RMSE = 9.2%)

Fig. 7 Fatigue life prediction of AS/3501-5A Graphite/Epoxy using LRN with 16 neurons: maximum applied stress (MPa) vs. number of cycles to failure, ANN predictions and experimental data for 0-90 degree fiber orientations (RMSE = 13.25%)

Fig. 8 Fatigue life prediction of Scotchply 1003 Glass/Epoxy using ELM with 16 neurons: maximum applied stress (MPa) vs. number of cycles to failure, ANN predictions and experimental data for 0-90 degree fiber orientations (RMSE = 13.6%)

Fig. 9 Fatigue life prediction of Scotchply 1003 Glass/Epoxy using ELM with 20 neurons: maximum applied stress (MPa) vs. number of cycles to failure, ANN predictions and experimental data for 0-90 degree fiber orientations (RMSE = 15.6%)

Fig. 10 Fatigue life prediction of Scotchply 1003 Glass/Epoxy using LRN with 16 neurons: maximum applied stress (MPa) vs. number of cycles to failure, ANN predictions and experimental data for 0-90 degree fiber orientations (RMSE = 11.5%)

Fig. 11 Fatigue life prediction of T800H/2500 Carbon/Epoxy using FNN with 20 neurons: maximum applied stress (MPa) vs. number of cycles to failure, ANN predictions and experimental data for 0-90 degree fiber orientations (RMSE = 16.3%)

Although the RMSE obtained in this case was 16.3%, the overall prediction of the experimental data is not as accurate as in the previous two cases. This could be attributed to the smaller number of experimental points available for this material compared to the previous two cases. With the larger number of experiments used in training, the ANN may learn not only the trend of the fatigue behavior but also the many variations within the experimental data used in training, which could negatively affect the prediction capability of the network.

The predictions obtained show that, although acceptable fatigue life predictions of new materials can be obtained using ANN, there is no specific network architecture/training algorithm combination that always results in the best fatigue life prediction for all materials. Additional investigation is needed to determine whether unique combinations exist that always result in the best fatigue life predictions.

6 Conclusion

Different neural network architectures using a variety of training functions were used to predict the fatigue life of fiber-reinforced composite materials. Training was performed on certain composites while the prediction was done for different materials. The results show that ANN can be used to accurately predict the fatigue failure of a composite material not used in the training of the network. The results also show, however, that no single architecture/training function combination always produces the best results for all materials.



References
1. El Kadi, H.: Modeling the mechanical behavior of fibre reinforced polymeric composite materials using artificial neural networks - A review. Compos. Struct. 73, 1-23 (2006)
2. Zhang, Z., Friedrich, K.: Artificial neural networks applied to polymer composites: a review. Compos. Sci. Technol. 63, 2029-2044 (2003)
3. Lee, J.A., Almond, D.P., Harris, B.: The use of neural networks for the prediction of fatigue lives of composite materials. Composites A 30, 1159-1169 (1999)
4. Al-Assaf, Y., El Kadi, H.: Fatigue life prediction of unidirectional glass fibre/epoxy composite laminate using neural networks. Compos. Struct. 53, 65-71 (2001)
5. El Kadi, H., Al-Assaf, Y.: Prediction of fatigue life of unidirectional glass fibre/epoxy composite laminae using different neural network paradigms. Compos. Struct. 55, 239-246 (2002)
6. Freire Jr., S.R.C., Neto, A.D.D., de Aquino, E.M.F.: Use of modular networks in the building of constant life diagrams. Int. J. Fatigue 29, 389-396 (2007)
7. Freire Jr., S.R.C., Neto, A.D.D., de Aquino, E.M.F.: Building of constant life diagrams of fatigue using artificial neural networks. Int. J. Fatigue 27, 746-751 (2005)
8. El Kadi, H., Al-Assaf, Y.: The use of neural networks in the prediction of the fatigue life of different composite materials. 16th International Conference on Composite Materials, Japan, July 8-13 (2007)
9. Hashin, Z., Rotem, A.: A fatigue failure criterion for fiber reinforced materials. J. Compos. Mater. 7, 448-464 (1973)
10. Awerbuch, J., Hahn, H.T.: Off-axis fatigue of graphite/epoxy composites. In: Lauraitis, K.N. (ed.) Fatigue of Fibrous Composite Materials, ASTM STP 723, pp. 243-273. American Society for Testing and Materials, Philadelphia, PA (1981)
11. El Kadi, H., Ellyin, F.: Effect of stress ratio on the fatigue of unidirectional glass fibre/epoxy composite laminae. Composites 25, 917-924 (1994)
12. Philippidis, T.P., Vassilopoulos, A.P.: Complex stress state effect on fatigue life of GRP laminates. Part I, experimental. Int. J. Fatigue 24, 813-823 (2002)
13. Kawai, M., Suda, H.: Effects of non-negative mean stress on the off-axis fatigue behavior of unidirectional carbon/epoxy composites at room temperature. J. Compos. Mater. 38, 833-854 (2004)
14. Epaarachchi, J.A., Clausen, P.D.: An empirical model for fatigue behavior prediction of glass fiber reinforced plastic composites for various stress ratios and test frequencies. Composites A 34, 313-326 (2003)
15. Fernando, G., Dickson, R.F., Adam, T., Reiter, H., Harris, B.: Fatigue behavior of hybrid composites: Part 1 - Carbon/Kevlar hybrids. J. Mater. Sci. 23, 3732-3743 (1988)
16. Schalkoff, R.J.: Artificial Neural Networks. McGraw-Hill (1997)
17. Haykin, S.: Neural Networks - A Comprehensive Foundation, 2nd edn. Prentice Hall (1999)
18. Skapura, D.: Building Neural Networks. ACM Press/Addison-Wesley (1996)
19. Elman, J.L.: Finding structure in time. Cogn. Sci. 14, 179-211 (1990)
20. Al-Assaf, Y., El Kadi, H.: Fatigue life prediction of composite materials using polynomial classifiers and recurrent neural networks. Compos. Struct. 77, 561-569 (2007)
21. MATLAB. www.mathworks.com
22. Al-Assadi, M.: Predicting the fatigue failure of fiber reinforced composite materials using artificial neural networks. MS Thesis, American University of Sharjah (2009)
