In this paper, an Artificial Neural Network (ANN) model is used to analyze any type of conventional building frame under arbitrary loading in terms of the rotational end moments of its members. This is achieved by training the network. The frame deforms so that every joint rotates through an angle, while a relative lateral sway is produced at the r-th floor level, under the assumption that the axial lengths of the bars of the structure are not altered. The issue of choosing an appropriate neural network structure and supplying structural parameters to that network for training is addressed by using an unsupervised algorithm. The model's parameters, as well as the rotational variables, are investigated in order to obtain the most accurate results. The model is then evaluated against the iteration method of frame analysis developed by Dr. G. Kani. In general, the new approach delivers better results than several commonly used methods of structural analysis.

In the past decades, great strides have been made in frame analysis; throughout the evolution of structural science, much of the work has concerned this problem. The theory of elasticity underlies all of the available approaches.

Four approaches have been presented. The strength-of-materials approach is the simplest of them. It is suitable for simple structural members subjected to a specific loading; for the analysis of entire systems, it can be used in conjunction with statics. The solutions are based on linear isotropic infinitesimal elasticity and Euler-Bernoulli beam theory. A second approach, the moment distribution method, was commonly used in the 1930s. Its essential idea involves no mathematical relations other than the simplest arithmetic [

Recently, the neural network approach has been applied in many branches of science. It is becoming a powerful tool for providing structural engineers with sufficient detail for design purposes and management practices.

This paper evaluates a neural network approach to frame analysis using an unsupervised algorithm. The results are obtained by programming the entire formulation of the algorithm in MATLAB. The aim of the study is to estimate the rotational end moments, and this is depicted in

An ANN is an information processing system which operates on inputs to extract information, and produces outputs corresponding to the extracted information [

An artificial neural network model is a system composed of many simple processors, each having a local memory. The processing elements are connected by unidirectional links that carry discriminating data. The linear feed-forward net has been found to be suitable for the training techniques. Outputs of neurons in one layer are transferred to their corresponding neurons in another layer through links that amplify or inhibit those outputs through weighting factors. Except for the processing elements of the input layer, the input of each neuron is the sum of the weighted outputs of the nodes in the prior layer plus a bias. Each neuron is activated according to its input, activation function, and threshold value.
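The neuron computation just described can be sketched in a few lines. The original work was implemented in MATLAB; the Python sketch below, with assumed weights, bias, and inputs, only illustrates the weighted-sum-plus-bias rule with a linear transfer function.

```python
def neuron_output(inputs, weights, bias, activation=lambda net: net):
    """Input of a neuron = weighted sum of prior-layer outputs + bias;
    the output is the activation (here a linear transfer function) of that sum."""
    net = sum(w * x for w, x in zip(weights, inputs)) + bias
    return activation(net)

# Example: three inputs (x1, x2, x3)^T with hypothetical weights and bias.
y = neuron_output([1.0, 2.0, 3.0], [0.5, -0.25, 0.1], 0.2)
print(y)  # 0.5*1 - 0.25*2 + 0.1*3 + 0.2 = 0.5
```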

Building frame under any given loading

Feed-forward multilayer network

The input vector, representing a set of variables, is (x_{1}, x_{2}, x_{3})^{T}.

The input and output for the i_{th} neuron are:

net_{i} = Σ_{j} w_{ij} x_{j} + b_{i} (1)

y_{i} = f_{i}(net_{i}) (2)

where f_{i} is an activation function (linear transfer function). Its behavior is that of a threshold function, in which the output of the neuron is generated once a threshold level is reached. The net input and output of the j_{th} neuron are treated similarly to (1) and (2).

Output processing in a network

The net input is obtained by summing the weighted inputs and the bias of each neuron of the layer, and passing this argument to the transfer function to compute the output. The net input and output of the i_{th} neuron of the L_{th} layer are:

net_{i}^{L} = Σ_{j} w_{ij}^{L} y_{j}^{L-1} + b_{i}^{L}, y_{i}^{L} = f_{i}(net_{i}^{L})

The error is then evaluated from the discrepancy between the actual and target output values, and the outputs of the successive layers follow by repeated application of (1) and (2) for each neuron.

The connectivity of the neural network model allows processors on one level to communicate with each neuron at the next level. Each processing element in one layer is connected to its corresponding processing element in the next by means of an excitatory weight and a bias. This is known as a "locally-connected" topology. Discrepancies between actual and target output values result in the evaluation of weight and bias changes. After a complete presentation of the training data, a new set of weights and biases is obtained, and new outputs are again evaluated in a feed-forward manner until a specified error tolerance is reached. Unsupervised training uses unlabeled training data and requires no external teacher.

In our neural network model, a processing element's input is connected to a specific node. The node has an associated node function, which carries out a local computation based on the input and bias values. In the input layer, the value of W_{ij} represents the synaptic weight between the recipient node, whose activity is x_{i}, and the previous node, whose activity is x_{j}.

There are four descriptors used in the algorithm definition:

• Equation type: Algebraic, the net performs calculations determined primarily by the state of the network.

• Connection topology: The connectivity of the network is the measure of how many processors on one level communicate with each processor at the next level. This is the “locally-connected” topology we discussed earlier, and for a one-dimensional space the matrix will be banded diagonally.

• Processing scheme: Nodes in the network are updated synchronously, since the network output at the current iteration depends entirely on its prior state.

• Synaptic transmission mode: The neural network model takes neural values multiplied by synaptic weights summed across the input to a neuron. The neuron acts on the summed value and its output is multiplied by weights and used as an input for other neurons.
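The synaptic transmission mode described above amounts to a layer-by-layer forward pass. The following sketch illustrates it; the layer sizes, weights, and biases are illustrative assumptions, not values from the model (which was implemented in MATLAB).

```python
def forward_pass(x, layers):
    """Propagate an input vector through a feed-forward net.

    layers: list of (weights, biases, activation) per layer, where
    weights[i][j] connects input j to neuron i of that layer.
    Each neuron sums its weighted inputs plus a bias, then applies
    the activation; its output feeds the next layer.
    """
    for weights, biases, activation in layers:
        x = [activation(sum(w * v for w, v in zip(row, x)) + b)
             for row, b in zip(weights, biases)]
    return x

linear = lambda net: net  # linear transfer function, as in the paper

# Hypothetical two-layer net: 3 inputs -> 2 hidden neurons -> 1 output.
layers = [
    ([[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]], [0.0, 0.0], linear),
    ([[1.0, -1.0]], [0.5], linear),
]
print(forward_pass([1.0, 1.0, 1.0], layers))  # a single output value near -0.4
```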

It is known that supervised learning in neural networks based on the popular back-propagation method can often be trapped in a local minimum of the error function. How the proposed algorithm with the "locally-connected" topology overcomes this problem will be addressed in a future paper, together with a detailed account of the characteristics and properties of the proposed model. The complexity of the model when there are more than two hidden layers depends on the structure to be analyzed: one neuron corresponds to one node of the structure, and the unsupervised training algorithm can deal with any building frame, that is, with any configuration of neurons.

The most pertinent variables in structural analysis are the rotations (in radians) at each member end, the relative displacement due to the floor sway, and the moment distribution factor of the members meeting at the node.

The final expression developed for the total end moments of the beams is:

And for columns:
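The expressions for the total end moments are, in Kani's classical formulation (a standard result, restated here under the assumption that the paper follows Kani's usual notation), the sum of the fixed end moment, twice the rotational end moment of the near end, and the rotational end moment of the far end:

$$M_{ik} = \bar{M}_{ik} + 2M'_{ik} + M'_{ki}$$

and for columns, with the additional displacement contribution from the storey sway:

$$M_{ik} = \bar{M}_{ik} + 2M'_{ik} + M'_{ki} + M''_{ik}$$

where $\bar{M}_{ik}$ is the fixed end moment, $M'_{ik}$ and $M'_{ki}$ are the rotational end moments at the near and far ends, and $M''_{ik}$ is the displacement moment due to the floor sway.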

In the frame analysis developed in the neural network model, the final end moments are assembled from the rotational end moments produced by the network.

The distribution factors corresponding to the first layer of the frame’s nodes are presented as an input vector to the input layer, and the rotational end moments are obtained as the outputs.

Structural analysis parameters

| | Total End Moment Formulas | Selected Variables: Dependent | Selected Variables: Independent |
|---|---|---|---|
| Beams | | | |
| Column | | | |
| Node i | | | |

The hidden layers contain a suitable number of neurons. The network was trained with seven iterations. The number of neurons in the hidden layers, and adjustable parameters such as the weights and biases, were determined by the number of nodes in the frame, the distribution factors, and the rotational end moments.

For any member, the values of the member constants must be determined. The procedure depends on the solution of three problems: the determination of the fixed end moments, of the stiffness at each end of the member, and of the carry-over factor (the distribution factors).
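As a sketch of how the distribution (rotation) factors used as network inputs might be computed: in Kani's method, the rotation factor of each member meeting at a node is minus one half of its relative stiffness divided by the sum of the stiffnesses at that node. The member names and stiffness values below are hypothetical.

```python
def rotation_factors(stiffnesses):
    """Kani rotation (distribution) factors for the members meeting at a node.

    stiffnesses: dict member_name -> relative stiffness k = I/L.
    Each factor is -1/2 * k / sum(k), so the factors sum to -1/2.
    """
    total = sum(stiffnesses.values())
    return {m: -0.5 * k / total for m, k in stiffnesses.items()}

# Hypothetical node where a beam (k = 2.0) and a column (k = 1.0) meet.
factors = rotation_factors({"beam_AB": 2.0, "col_AC": 1.0})
print(factors)  # beam gets -1/3, column gets -1/6
```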

The network has four layers, three inputs, and five output values. The network-creation function generates the first-layer weights and biases for the four linear layers required for this problem; the network must be trained in order to obtain these first-layer weights and biases, and they can then be trained incrementally using the algorithm. For the second, third, and fourth layers, the weights and biases are modified in response to the network's inputs and lead to the correct output vector; no target outputs are available. The linear network adapted very quickly to the change in the outputs. That it takes only seven iterations for the network to learn the input pattern is an impressive accomplishment.
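Incremental adaptation without target outputs can be sketched in the Hebbian style: each weight is strengthened in proportion to the product of its input and the neuron's output. This is only an illustration of target-free incremental updating; the learning rate, inputs, and update rule here are assumptions, not the paper's MATLAB formulation.

```python
def hebbian_update(weights, x, lr=0.01):
    """One unsupervised, incremental update of a linear neuron:
    each weight grows in proportion to (its input) * (the neuron's output)."""
    y = sum(w * xi for w, xi in zip(weights, x))  # linear neuron output
    return [w + lr * y * xi for w, xi in zip(weights, x)]

# Hypothetical input pattern, presented for seven iterations as in the paper.
w = [0.1, 0.1, 0.1]
for _ in range(7):
    w = hebbian_update(w, [1.0, 0.5, 0.0])
print(w)  # weights tied to active inputs grow; the weight on the zero input stays put
```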

The scheme for entering the calculations systematically is shown in

The fixed end moments for the different loaded members are calculated by using the standard formula available in any structural handbook. Having completed these preliminary calculations, the training can be initiated. The network was set up with the three parameters (distribution factors of the beam) as the input, and the rotational end moments due to rotation as the outputs determined by the first layer.

The calculation starts in the input layer and continues from one layer to the next, and is carried out quickly. After six or seven iterations, it will be noted that there is little or no change between two consecutive sets of calculations. The calculations are then stopped and the values of the last iteration are taken as the correct ones; the previous values are discarded. For the sake of clarity, these final values have been indicated separately in
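The stopping rule just described (halt once two consecutive sets of values barely differ) can be sketched as follows. Here `iterate` is a stand-in for one pass of the moment calculation, and the tolerance is an assumed value, not one reported by the paper.

```python
def run_until_converged(iterate, values, tol=1e-4, max_iters=50):
    """Repeat `iterate` until two consecutive value sets differ by less than tol.

    Returns the final values and the number of iterations performed.
    """
    for n in range(1, max_iters + 1):
        new_values = iterate(values)
        if max(abs(a - b) for a, b in zip(new_values, values)) < tol:
            return new_values, n
        values = new_values
    return values, max_iters

# Toy fixed-point iteration standing in for the moment calculation:
# x <- (x + 2) / 2 drives every component toward 2.
final, iters = run_until_converged(lambda vs: [(v + 2) / 2 for v in vs], [0.0, 1.0])
print(final, iters)
```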

A comparison between the presented ANN model and Kani's method was performed on the same example, comparing both the rotational end moment (the network output) and the total end moment.

Rotational end moments with lateral displacement

Total end moments with lateral displacement

The accuracy of the results is summarized in the following tables.

Accuracy of formulas for rotational end moments

| Nodes | Layers | Statistical Mean Value | Statistical Standard Deviation |
|---|---|---|---|
| Nodes 1s | 1 | 1.0375 | 0.0178 |
| | 2 | 1.0097 | 0.0013 |
| | 3 | 1.0067 | 0.0008 |
| Nodes 2s | 1 | 0.9776 | 0.0131 |
| | 2 | 0.9995 | 0.0147 |
| | 3 | 1.0016 | 0.1107 |
| | 4 | 0.9976 | 0.0036 |
| Nodes 3s | 1 | 0.9529 | 0.0036 |
| | 2 | 1.0118 | 0.0108 |
| | 3 | 1.0578 | 0.0710 |
| | 4 | 1.0005 | 0.0032 |
| Nodes 4s | 1 | 1.0000 | 0.0000 |
| | 2 | 1.0000 | 0.0000 |
| | 3 | 1.0000 | 0.0000 |
| | 4 | 1.0000 | 0.0000 |

Accuracy of formulas for total end moments

| Nodes | Layers | Statistical Mean Value | Statistical Standard Deviation |
|---|---|---|---|
| Nodes 1s | 1 | 0.9633 | 0.0000 |
| | 2 | 1.0289 | 0.0412 |
| | 3 | 1.0015 | 0.0054 |
| Nodes 2s | 1 | 1.0023 | 0.0112 |
| | 2 | 1.0025 | 0.0064 |
| | 3 | 0.9970 | 0.0155 |
| | 4 | 0.9964 | 0.0037 |
| Nodes 3s | 1 | 1.0075 | 0.0076 |
| | 2 | 1.0061 | 0.0091 |
| | 3 | 0.9841 | 0.0186 |
| | 4 | 1.0040 | 0.0023 |
| Nodes 4s | 1 | 1.0100 | 0.0000 |
| | 2 | 1.0167 | 0.0000 |
| | 3 | 0.9783 | 0.0000 |
| | 4 | 1.0048 | 0.0000 |

The comparison contrasts Kani's results (in parentheses) with the analysis executed by the neural topology (values outside the parentheses), which can improve the accuracy and speed of the results.

Artificial neural networks are parallel computational models, since the computation of their components can be carried out simultaneously.

1) The networks, as fine-grained parallel implementations of linear systems, can outperform other approaches.

2) ANNs are very fast even on ordinary PCs, and can process enormous data sets in comparison with traditional approaches.

3) The presented ANN model is constructed using only the structural model, and its application is not limited by boundary conditions.

4) Site engineers can calculate rotational moments with little computational effort.

5) Artificial neural network models can accept any number of effective variables as input parameters without omission or simplification, as commonly done in conventional approaches.

L. R. P. M. thanks Prof. Alfonzo G. Cerezo for his advice in developing this work.