Seismic Vulnerability Assessment from Earthquake Damages Historical Data Using Constrained Optimization Technique

This work falls within the context of efforts made over the past decades to improve the seismic vulnerability modelling of structures using historical data. Historical data describe the intensity and the damages, but give no information about the vulnerability, since the concept of vulnerability classes was only introduced in the 1990s through the EMS92 and EMS98 scales. Building on the EMS98 definitions, the RISK-UE project derived a method for physical damage estimation. It introduced an analytical equation, a function of a single parameter (the Vulnerability Index), which correlates the seismic input, expressed as macroseismic intensity, with the physical damage. In this study, we propose a methodology that uses optimization algorithms to combine theory-based and expert-opinion-based assessment data. The objective of this combination is to estimate the optimal Vulnerability Index that fits the historical data and hence gives the minimum error in a seismic risk scenario. We apply the proposed methodology to the El Asnam earthquake (1980), but the approach remains general: it can be extrapolated to any other region and applied to predictive studies (pre-earthquake scenarios). The mathematical formulation lets the user choose which error to minimize: 1) on very little damaged buildings (degrees D0 to D2) or 2) on highly damaged buildings (degrees D4 to D5). These two perspectives suit, respectively, decision makers concerned with mitigation measures and urban planning, and civil protection services concerned with urgent action after a seismic event. The insight is used in the framework of seismic scenarios and improves damage estimation for areas in which no recent data, or no data regarding vulnerability, are available.

How to cite this paper: Benaïchouche, A., Negulescu, C., Sedan, O. and Boutaraa, Z. (2018) Seismic Vulnerability Assessment from Earthquake Damages Historical Data Using Constrained Optimization Technique. Journal of Geoscience and Environment Protection, 6, 89-111. https://doi.org/10.4236/gep.2018.62007

Received: December 20, 2017; Accepted: February 25, 2018; Published: February 28, 2018

Copyright © 2018 by authors and Scientific Research Publishing Inc. This work is licensed under the Creative Commons Attribution International License (CC BY 4.0). http://creativecommons.org/licenses/by/4.0/


Introduction
Optimization algorithms have become widely used in the field of earthquake engineering because of their capability to calibrate a model when no data are available for important parameters of the system's behaviour (a system can be a building, a bridge, a network, etc.). The first applications of optimization algorithms were at the scale of one structure, generally for fitting model parameters (such as the Young modulus, reinforcement bar diameter or compression resistance). Several studies have also focused on effective computational methods of cost optimization for bridges [1] [2] [3] and structures in general [4]. Recently, [5] used an optimization process for the design of structures; the application of their framework is illustrated on a bridge design optimization problem where the column reinforcement bar diameter and concrete cover are the design parameters. [6] presents a hybrid optimization methodology for the probabilistic finite element model updating of structural systems, in which the model updating process is formulated as an inverse problem, analyzed by Bayesian inference, and solved using a hybrid optimization algorithm.
Even if the optimization process is widely used at the scale of one structure, it is less common at the scale of a group of buildings, a network or a whole town. More recently, optimization algorithms have also been used for natural hazard studies at large scales, and more particularly for managing decisions after a catastrophic event. [7] proposes a simulation model to find optimum evacuation routes during a tsunami using Ant Colony Optimization (ACO) algorithms. [8] proposes a framework that clarifies the interrelationships between the notions of coping capacity, preparedness, robustness, flexibility, recovery capacity and resilience, previously treated as independent measures, and provides a single mathematical decision problem for quantifying these measures congruously and maximizing their values. [9] proposes a fuzzy multi-criteria model to deal with "qualitative" (unquantifiable or linguistic) or incomplete information and illustrates it on the post-earthquake reconstruction problem in Central Taiwan, including the restoration of safe and serviceable operation of "lifeline" systems, such as electricity, water and transportation networks, immediately after a severe earthquake.
The seismic risk scenario is now one of the most powerful tools for assessing damages, both for prevention and mitigation aims and for crisis management. Many studies [10] [11] [12] [13], more or less detailed, have been published on this subject, which demonstrates the demand of stakeholders and first responders (e.g. civil protection) for a better understanding and evaluation of damages and of the various outcomes of such studies. Different methodologies are applied for the assessment of earthquake damages; they generally rest on two mandatory terms: the level of hazard and the level of vulnerability of the exposure (i.e. the elements at risk). A very complete state of the art of seismic vulnerability assessment methodologies at variable geographical scales over the past 30 years is presented in [14]. Different methodologies have emerged over the years: the most widely used, based on a vulnerability index [15] [16] [17] [18]; pushover-based vulnerability analysis [19]; the displacement-based vulnerability assessment procedure [20]; and buildings' vulnerability assessment using the parameterless scale of seismic intensity [21]. [22] [23] provide a detailed description (summary of software, methodology, IT details, exposure module, hazard module, vulnerability module and output) of the existing software for seismic risk assessment that is either open source or has been made available to the GEM Risk Team. Here we deal with empirical methodologies based on a vulnerability index obtained from observational data collected after earthquakes. These methods are very useful for representing the vulnerability at large scale (see [13] for more details on the methodology used).
Concerning empirical methods, starting from the first "earthquake damage probability matrices" derived by [24] after the San Fernando earthquake and presented at the Fifth World Conference on Earthquake Engineering, all empirical methods for assessing the seismic damage of structures are based on the relation between the observed intensity and the observed damage. One key moment in the collection and use of earthquake damage data was the development of the European Macroseismic Scale EMS98 [25], the first intensity scale that clearly defines the concept of vulnerability. Table 1 gives the EMS98 classification of damage for masonry and reinforced concrete buildings. The previous scales, the Mercalli-Cancani-Sieberg scale (MCS) [26], the Modified Mercalli scales (MM-31 and MM-56) [27] and the Medvedev-Sponheuer-Karnik scale (MSK-64) [28], do not clearly correlate the vulnerability of structures with the intensity and the damage scale. One of the main objectives of the EMS98 scale was to be consistent with the previous scales, so that data collected using them can be adapted to the EMS98 definitions. Building such data sets is a very laborious and time-consuming task, but it is the key to all these methods, and their representativeness depends on the accuracy of the collected data. In Italy, the concerted effort to collect earthquake damage data over the past 30 years has led to an extensive database from which vulnerability predictions for the Italian building stock can be derived [29]. In France, a database inventorying historical earthquakes [30] [31] is maintained, but the information related to damages is quite thin. Other databases exist, developed by other countries or within European Commission projects, but their accessibility, especially to the original "raw" data, is laborious.
[32] systematically compared statistical modelling techniques for different empirical datasets and explored many of the issues raised regarding the treatment of uncertainty.
Database typologies and their typical issues depend on the manner in which the observations were obtained. For detailed "engineering" surveys and surveys by reconnaissance teams, the main issue is the possibility of unrepresentative samples; for rapid survey methods, the evaluation generally concerns habitability or safety rather than damage; remotely sensed surveys, which are quite new, are not yet able to efficiently evaluate damage degrees other than collapse or very heavy damage [33]. In all these cases it is quite rare to have the three essential terms at the scale of the building or census block: the intensity measure, the building typology (and hence its vulnerability) and the damage degree. Moreover, once such an "original/unchanged" dataset is defined, data manipulation and combination are usually carried out in order to associate it with related parameters and develop new methods for vulnerability assessment.
Hence, in this paper we use an "original/unchanged" dataset available in the literature, representative of the large majority of historical datasets, namely one that contains only the intensity and the damage degrees by census block. Our paper proposes a new mathematical way of treating such existing datasets, in which vulnerability information is not available, and of estimating, through an optimization procedure, the best vulnerability index and its characteristics to be used in seismic risk scenarios.

Historical Case Study and Available Observed Data on Damages
A retro-scenario of the October 10, 1980 El Asnam earthquake was performed by the authors [34] [35] using the Armagedom tool [13] developed by the French Geological Survey (BRGM). In this paper, we use only the intensity and damage observations in order to find the vulnerability index giving the smallest error between the observed damages and the damages calculated numerically with the Armagedom software.
Chlef city (formerly named El Asnam), situated 200 km west of Algiers, the capital of Algeria, is an area that suffers important seismic activity due to the interaction of the Eurasian and African plates. Major earthquakes rocked the city during the last century; the last event that caused major damage was that of October 10, 1980, known as the El Asnam earthquake, which killed around 3000 persons. This event was qualified as the largest earthquake ever known in the western Mediterranean region. Different international studies [36] [37] [38] were performed based on, or in collaboration with, the Algerian Technical Inspection of Construction (CTC), which conducted a large field investigation and damage inventory in El Asnam city on 5131 buildings. Figure 1 presents the ten sectors into which the city was divided for the CTC's survey (adapted from [37]).
Based on these data, [36] established a building damage classification with the following levels: green, very little damage, can be occupied immediately; orange, needs further study before it can be either occupied or condemned; red, condemned and should be demolished. Table 2 presents the number of buildings and the damage classification (number of red, orange and green) obtained by [36] in each of the ten sectors of the city. A seismic vulnerability assessment of the existing buildings has never been carried out.
The data given in Table 2 represent the collected observational dataset. This representation of information is very common for post-earthquake damage assessment. It concerns old earthquakes as well as the most recent ones (L'Aquila 2009), for which one might intuitively expect more detailed information, but in reality this is not the case.

Model Description
Figure 2 schematizes the general methodology for seismic damage calculation. It presents each of the most important phases necessary for creating an earthquake scenario: regional seismic hazard, local seismic hazard, etc. In the case of El Asnam, the hazard modules (modules 1, 2 and 3) are not performed, since we have the observed macroseismic intensity. All simulations performed here (9800 runs) were executed with the Armagedom tool [13] developed by the French Geological Survey (BRGM) and used for the simulation of damage scenarios.
For damage assessment, Armagedom uses the procedure developed within RISK-UE [39]. Firstly, the vulnerability to earthquake shaking of the exposed elements at risk (here, buildings) is characterized by vulnerability indices (V_i), which range from zero (not vulnerable) to one (highly vulnerable). Depending on building properties (materials, age of construction, height, etc.), buildings are regrouped in categories, commonly known as typologies (see [40] for more details), and a value of V_i is assigned to each typology. For each typology, the mean damage degree (μ_D, between zero and five) is estimated with the vulnerability function (Equation (1)):

μ_D = 2.5 [1 + tanh((I + 6.25 V_i − 13.1)/φ)] (1)

where I represents the seismic hazard described in terms of macroseismic intensity (EMS98 scale), V_i the vulnerability index, and φ the ductility index, which is evaluated taking into account the building typology and its constructive features [39]; it controls the slope of the curves and assumes different values to fit the data obtained through damage surveys. For residential buildings, it takes a value of 2.3.
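As a minimal sketch (not the paper's Armagedom implementation), the vulnerability function of Equation (1) can be evaluated as follows; the intensity IX and the index V_i = 0.7 in the example are illustrative values:

```python
import math

def mean_damage_degree(intensity, v_i, phi=2.3):
    """RISK-UE vulnerability function (Equation (1)): mean damage degree
    mu_D in [0, 5] from EMS98 intensity I, vulnerability index V_i and
    ductility index phi (2.3 for residential buildings)."""
    return 2.5 * (1.0 + math.tanh((intensity + 6.25 * v_i - 13.1) / phi))

# Illustrative values: intensity IX (I = 9) and a vulnerability index of 0.7
mu_d = mean_damage_degree(9.0, 0.7)  # about 2.8
```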
Finally, the damage distribution is derived using the beta probability density function (Equation (2)) and the beta cumulative distribution function (Equation (3)):

p_β(x) = [Γ(t) / (Γ(r) Γ(t − r))] · (x − a)^(r−1) (b − x)^(t−r−1) / (b − a)^(t−1), with a ≤ x < b (2)

P_β(x) = ∫ from a to x of p_β(ε) dε (3)

with r = t (0.007 μ_D³ − 0.052 μ_D² + 0.287 μ_D). The parameters a, b, t and r are the parameters of the distribution and Γ is the gamma function. The parameter r controls the shape of the distribution.
The parameter t (hereafter named T_beta), resulting from the probability calculation by the beta law, determines the dispersion of the values; it therefore represents the propagation of uncertainties that can come from different sources, such as the input data or the variable behaviour of structures of the same typology under seismic hazard. In the Risk-UE method it is fixed to t = 8, which represents the best fit for European buildings [40]. However, this study leaves it variable in a range of values, so that the optimal value for each typology can be found numerically.
In order to use the beta distribution, it is necessary to refer to the damage grades D_k (k = 0 to 5) defined by the EMS scale; for this purpose, it is advisable to assign the value 0 to the parameter a and the value 6 to the parameter b [18].
The outcome is the distribution over the six levels defined in EMS98: D0 (undamaged), D1 (slight damage), D2 (moderate damage), D3 (heavy damage), D4 (partial collapse) and D5 (total collapse) for each location considered separately, given by Equation (4):

P(D = D_k) = P_β(k + 1) − P_β(k), k = 0, …, 5 (4)
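Equations (2) to (4) can be chained in a short sketch, assuming a = 0, b = 6 and the default t = 8; the numerical integration of the density is our own simplification, not part of the method:

```python
import math

def damage_grade_probabilities(mu_d, t=8.0, a=0.0, b=6.0):
    """Probabilities of the EMS98 damage grades D0..D5 obtained from the
    beta distribution of Equations (2)-(4)."""
    # Shape parameter r as a function of the mean damage degree mu_D
    r = t * (0.007 * mu_d**3 - 0.052 * mu_d**2 + 0.287 * mu_d)
    c = math.gamma(t) / (math.gamma(r) * math.gamma(t - r)) / (b - a)**(t - 1)

    def cdf(x, n=2000):
        # P_beta(x): midpoint-rule integration of the density over [a, x]
        if x <= a:
            return 0.0
        x = min(x, b)
        h = (x - a) / n
        total = 0.0
        for i in range(n):
            xm = a + (i + 0.5) * h  # midpoints avoid the interval endpoints
            total += c * (xm - a)**(r - 1) * (b - xm)**(t - r - 1)
        return total * h

    # Equation (4): P(D = Dk) = P_beta(k + 1) - P_beta(k)
    return [cdf(k + 1.0) - cdf(k) for k in range(6)]

probs = damage_grade_probabilities(2.8)  # distribution for mu_D = 2.8
```

The six probabilities sum to one, giving the fraction of buildings expected in each damage grade for a given mean damage degree.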
To perform seismic damage scenarios, many typologies of buildings very often coexist in the same study area. Therefore, to run the simulation model (Armagedom software) we need to describe:
• the total number of typologies;
• the vulnerability indices (V_i) and T_beta parameters for each typology;
• the spatial distribution of the typologies in each district (polygon) of the affected area (nbBat).
In the Armagedom software, these parameters are stored in text files (*.txt) as follows:
• the V_i and T_beta are regrouped in the same file (Figure 3), called *.tvi_t. The file is structured in four columns: type of building as character, vulnerability index as float, T_beta parameter as float and period as float. Each line represents a specific typology;
• the nbBat are stored in another file (Figure 3), called *.nbbat. This file is given as a table in which each line represents an area, the number of columns represents the number of typologies and each value the number of buildings.
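The two input files can be generated with a few lines; the exact column layout and separators are assumed from the description above, and the file names and typology labels are purely illustrative:

```python
# Hypothetical typology table: (label, V_i, T_beta, period)
typologies = [
    ("M1",  0.87, 8.0, 0.3),
    ("RC1", 0.60, 8.0, 0.5),
]
# Hypothetical nbBat table: one line per area, one column per typology
nb_bat = [
    [120, 45],
    [80,  60],
]

with open("example.tvi_t", "w") as f:   # V_i and T_beta per typology
    for label, v_i, t_beta, period in typologies:
        f.write(f"{label} {v_i} {t_beta} {period}\n")

with open("example.nbbat", "w") as f:   # number of buildings per area/typology
    for row in nb_bat:
        f.write(" ".join(str(n) for n in row) + "\n")
```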

Vulnerability Assessment Using Inverse Optimization
Here we propose a methodology that answers the question: "What are the values of the vulnerability parameters that optimize the fit of the model's predictions to the observed damages?" The observed data in Table 2 give two major pieces of information: i) the macroseismic EMS98 intensity, which is uniform (IX) over the whole city; ii) the observed damages, regrouped in three classes: green (D0, D1 and D2), orange (D3) and red (D4 and D5).
Our problem is clearly underdetermined (for more than one typology) because there are fewer equations than unknowns. The total number of parameters needed to run the model depends on the number of typologies: each typology adds a vulnerability index, a T_beta value and a building count (nbBat) per district. The expert can reduce the solution space by introducing additional constraints to obtain more "realistic" solutions. These constraints can be related to operational needs, better knowledge of the study area, etc. The following are examples of questions that can be answered by experts:
• Which error needs to be reduced?
1) Uniform error on all damage degrees D0 to D5;
2) Error on D4 to D5 damage, useful for civil protection in case of seismic catastrophe;
3) Error on D0 to D2 damage, useful for mitigation and planning.
• Which parameters can be fixed?
1) The number of typologies and the corresponding V_i and T_beta;
2) The nbBat parameters: the repartition of the number of buildings per typology in each area.
• Which parameters can be constrained?
1) The number of typologies, between 1 and 6;
2) The values of V_i, not varying freely from 0 to 1 but constrained by the fuzzy function of the Risk-UE method (Figure 4);
3) T_beta, between 4 and 16.
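The underdetermination can be illustrated with a simple count (our own bookkeeping, under the assumption that each typology contributes one V_i, one T_beta and one building count per polygon, while each polygon provides three observed counts: green, orange and red):

```python
def unknowns_vs_equations(n_typologies, n_polygons):
    """Count the unknowns of the inverse problem against the number of
    observation equations."""
    unknowns = 2 * n_typologies + n_typologies * n_polygons  # V_i, T_beta, nbBat
    equations = 3 * n_polygons                               # green/orange/red
    return unknowns, equations

# El Asnam: 10 sectors; with 4 typologies the problem is underdetermined
u, e = unknowns_vs_equations(4, 10)  # 48 unknowns, 30 equations
```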
In our study, we have explored a large number of solutions in order to evaluate the impact of the expert's constraints on the final solutions, and finally to validate the most appropriate one. We underline that the range of variation of the vulnerability index is governed by the fuzzy function method (Figure 4), which adapts to the number of typologies [18] [40] [41].
For our simulations, we used Armagedom for the forward modelling and the augmented Lagrange multiplier method described in [42] for the inverse one.
Figure 5 illustrates the conceptual scheme followed for solving the problem.
The proposed methodology was implemented in the R programming language and environment [43] using the SOLNP algorithm available in the "Rsolnp" package [44]. The process was repeated 100 times to generate 100 sets of V_i, T_beta and nbBat values. These solutions represent potential solutions, to be analyzed by the expert in order to select the most appropriate one. We will show in the next section that the 100 solutions differ very little. The algorithmic parameters used are as follows: population size = 50, generations = 200, crossover fraction = 0.8, and stall generations = 100. For each observed damage grade, we compute the misfit error between observed and estimated damage (Equation (5)). In order to obtain a uniformly distributed damage grade error over all polygons, we define for each damage grade the entity of Equation (6). Finally, the objective function to minimize is a weighted sum of these entities (Equation (7)): J = ω1 J1 + ω2 J2 + ω3 J3, where J1, J2 and J3 aggregate the errors on D0 to D2, D3 and D4 to D5 respectively.
The choice of the weights ω1, ω2, ω3 depends on the application and should be fixed by the expert. For example, for people who make organizational decisions, J1 is the most important term, therefore ω1 should be greater than ω2 and ω3. On the other hand, for civil protection and urgent action after a seismic event, ω3 should be greater than ω2 and ω1.
In this study, we have tested 4 hypotheses:
1) Case 1: error on D4 to D5, useful for civil protection in case of seismic catastrophe; in this case, ω3 is dominant;
2) Case 2: error on the damage degrees D0 to D2, useful for mitigation and planning; in this case, ω1 is dominant;
3) Case 3: error on damage degree D3, the most complicated to evaluate; in this case, ω2 is dominant;
4) Case 4: uniform error on all damage degrees; in this case, ω1 = ω2 = ω3.
The number of simulation runs was chosen so as to provide stable predictions for the numerical results. The choice of 100 was guided by two reasons:
- The variability of the 100 estimated V_i values is very low, as shown in Section 4 (Figure 8(a1) and Figure 8(a2)).
- We first executed 10, 20, 50 and then 100 model runs during the experimentation phase, and clearly observed that the first (mean) and second (variance) order moments for 50 and 100 model runs were similar. As the optimization process is not time-consuming, we fixed the number of model runs to 100.
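As a toy illustration of the inversion (the paper uses Armagedom as the forward model and the SOLNP augmented-Lagrangian solver; here we assume a single typology, a uniform intensity of IX, and replace the solver by a simple grid search over V_i):

```python
import math

def class_fractions(v_i, intensity=9.0, t=8.0):
    """Forward model chaining Equations (1)-(4): fractions of buildings in
    the green (D0-D2), orange (D3) and red (D4-D5) classes."""
    mu_d = 2.5 * (1.0 + math.tanh((intensity + 6.25 * v_i - 13.1) / 2.3))
    r = t * (0.007 * mu_d**3 - 0.052 * mu_d**2 + 0.287 * mu_d)
    c = math.gamma(t) / (math.gamma(r) * math.gamma(t - r)) / 6.0**(t - 1)

    def cdf(x, n=500):  # midpoint-rule beta CDF on [0, 6]
        if x <= 0.0:
            return 0.0
        h = x / n
        return sum(c * ((i + 0.5) * h)**(r - 1) * (6.0 - (i + 0.5) * h)**(t - r - 1)
                   for i in range(n)) * h

    p = [cdf(k + 1.0) - cdf(k) for k in range(6)]
    return sum(p[:3]), p[3], p[4] + p[5]

def invert_v_i(observed, weights=(1.0, 1.0, 1.0)):
    """Grid search for the V_i minimizing the weighted misfit in the spirit of
    Equation (7), J = w1*J1 + w2*J2 + w3*J3 (here case 4: uniform weights)."""
    def objective(v_i):
        est = class_fractions(v_i)
        return sum(w * abs(o - e) for w, o, e in zip(weights, observed, est))
    return min((round(i * 0.01, 2) for i in range(101)), key=objective)

observed = class_fractions(0.7)   # synthetic "observations" from V_i = 0.70
recovered = invert_v_i(observed)  # recovers a V_i close to 0.70
```

With case-1 weights, e.g. weights=(0, 0, 1), the same search concentrates the fit on the red (D4 to D5) class.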

Results and Comments
In this section, we present a synthetic analysis of the obtained results. We recall that our main objective is the determination of the optimal vulnerability parameters (V_i, T_beta and nbBat) that best fit the historical observed damages. The originality of the method lies in its practical and operational aspects: the method gives a panel of optimal solutions (ill-posed problem), and the final choice is left to the experts. For clarity of exposition, we first present the typical results obtained for a specific hypothesis (here case 4, uniformly distributed error over all damages) and for a fixed number of typologies (here 4 typologies). Then we give the synthetic results for two cases (D0 to D2 in Figure 9 and D4 to D5 in Figure 8) over all executed simulations (2400 for each case). The misfit error remains uniformly distributed over the districts. This can be explained by the fact that the problem is ill posed, with an infinity of local minima, so that very different solutions can satisfy the same optimization criterion. However, this limitation is of interest in our study, where experts must validate the optimal solution. These results emphasise the idea of a global consideration of the results, by looking at the error at the scale of the city and in all the districts of the studied area. Looking separately at a zoomed area, the interpretation of the results can be inexact: for example, for solution 17 the error in district five is very reduced, and, conversely, for solution 61 in district one.
In the following, we present the obtained results for two practical cases:
• The first one is very useful for crisis management, where the authorities are interested in the number of highly damaged buildings, here D4 to D5; all other categories are less important in this case.
• The second one is very useful for mitigation and planning, where the concerned authorities need information about habitability.
For each case presented below, we executed 2400 simulations in order to test several hypotheses and answer two important questions:
1) Does a higher number of typologies imply a lower misfit error?
2) In the Risk-UE method, the T_beta parameter was calibrated on European buildings and fixed to 8; does changing its value improve the accuracy of the model predictions?

Case 1: Crisis Management
In this section, we summarize the results of the 2400 executed simulations. In Figure 8, the graphics in the left column show results for the fixed T_beta parameter (equal to 8, according to the European calibration) and those in the right column for the optimal T_beta. We present the results for the three sets of observed damage grades described above (D0 to D2, D3, D4 to D5).
In Figure 8(a1) and Figure 8(a2), the boxplots represent the possible values of the vulnerability indices (y-axis) for each number of typologies (x-axis). Each boxplot contains 100 values. We observe that the variability of the V_i values is very low in both cases, fixed and optimal T_beta. In Figure 8(a1), for one typology the V_i value is always the same because the optimization process converges to the global minimum. In this specific case the well-posed problem has a unique solution: V_i equal to 0.70, T_beta fixed to 8 and nbBat equal to the total number of buildings.
In Figure 8(b1) and Figure 8(b2), the boxplots represent the possible values of the T_beta parameter (y-axis) for each number of typologies (x-axis). Each boxplot contains 100 values. In Figure 8(b1), all T_beta values are equal to 8 because it is a fixed input parameter; the graphic is plotted only for comparison with Figure 8(b2). The latter shows clearly that, for all numbers of typologies, the T_beta parameter varies from 4 to 16 (the expert's constraint) with a high dispersion.
In Figure 8(c1) and Figure 8(c2), the boxplots represent the mean absolute error (y-axis) for each number of typologies (x-axis). Each boxplot contains 100 values. Here, we compare the results for three indicators: the mean absolute error on D0 to D2 (green), on D3 (orange) and on D4 to D5 (red). We recall that here we are minimizing with a high weight on D4 to D5; we therefore expect the best performance on D4 to D5 (red) and less on the others. The two graphics Figure 8(c1) and Figure 8(c2) confirm that the error on D4 to D5 is very small (less than 10%) in both cases, with fixed and optimal T_beta, compared with the errors on D0 to D2 and D3.

Figure 8. Results for the crisis management case: (a1) the V_i indices for different typologies with fixed T_beta; (a2) the V_i indices for different typologies with optimal T_beta; (b1) the T_beta parameters for different typologies with fixed T_beta; (b2) the T_beta parameters for different typologies with optimal T_beta; (c1) the mean absolute error in percentage for different typologies with fixed T_beta; (c2) the mean absolute error in percentage for different typologies with optimal T_beta; (d) comparison between the minimal mean absolute error in percentage for fixed and optimal T_beta.

In Figure 8(d), the bottom and top of the boxes are the 25th and 75th percentiles, the band near the middle of the box is the median and the ends of the whiskers are the minimum and maximum. This graphic allows the comparison between the two cases (fixed and optimal T_beta).
We observe that:
• For the D4 to D5 indicator, for both fixed and optimal T_beta, the misfit error on D4 to D5 is less than 2%, except for the case with one typology. Moreover, increasing the number of typologies does not improve the model's results. A first explanation is that, as the number of typologies increases, the V_i indices are more constrained by the variation range given by the fuzzy function.
A second explanation is that T_beta fixed to 8 was accurately calibrated in the Risk-UE project [15] and is representative of European buildings.
• For the D0 to D2 and D3 indicators, we show that taking T_beta variable and increasing the number of typologies reduces the misfit error (by up to 20%). However, this is not meaningful here, because the optimization criterion is highly weighted on D4 to D5.
For the El Asnam area, in the crisis management situation, after analysis of the mathematical solutions we suggest using, for the predictive damage scenario, five or six typologies, with V_i equal to the values presented in Figure 4 and T_beta fixed to 8. The variation of T_beta does not improve the results, and using fewer typologies gives less information for the same error (less than 2%). The choice of the expert constantly balances the acceptable error against the needed information. In the case of El Asnam, if we look only at the D4 to D5 damages, as is generally the case for crisis management, we prefer to choose more typologies (which better represent the geographical distribution of the damage) for the same error. However, if for some reason we are also interested in D3 damages, the error increases to around 30% for five typologies, and we then recommend using two typologies, for which the error is less than 10%.

Case 2: Mitigation and Planning

Figure 9. Results for the mitigation and planning case: (a1) the V_i indices for different typologies with fixed T_beta; (a2) the V_i indices for different typologies with optimal T_beta; (b1) the T_beta parameters for different typologies with fixed T_beta; (b2) the T_beta parameters for different typologies with optimal T_beta; (c1) the mean absolute error in percentage for different typologies with fixed T_beta; (c2) the mean absolute error in percentage for different typologies with optimal T_beta; (d) comparison between the minimal mean absolute error in percentage for fixed and optimal T_beta.

Figure 9 is organized in the same manner as Figure 8. In Figure 9(a1) and Figure 9(a2), the boxplots represent the possible values of the vulnerability index (y-axis) for each number of typologies (x-axis). Each boxplot contains 100 values. We observe minor differences in the variability of the V_i values between the two cases, fixed and optimal T_beta. In Figure 9(a1), for 1, 3 or 5 typologies, the V_i value always converges to a global minimum. This is very specific to the study case and cannot be generalized. The same observation can be made in Figure 9(a2) for 5 typologies.

In Figure 9(b1) and Figure 9(b2), the boxplots represent the possible values of the T_beta parameter (y-axis) for each number of typologies (x-axis). The same analysis can be made here as for Figure 8(b1) and Figure 8(b2): the values of the T_beta parameter are highly dispersed.
In Figure 9(c1) and Figure 9(c2), the boxplots represent the mean absolute error (y-axis) for each number of typologies (x-axis). Each boxplot contains 100 values. Here, we compare the results for three indicators: the mean absolute error on D0 to D2 (green), on D3 (orange) and on D4 to D5 (red). We recall that we are minimizing with a high weight on D0 to D2; we therefore expect the best performance on D0 to D2 (green) and less on the others. The two graphics give equivalent results in order of magnitude for the D0 to D2 errors. In addition, we observe that this error grows as the number of typologies increases. Figure 9(d) allows the comparison between the two cases (fixed and optimal T_beta). We observe that:
• For the D0 to D2 indicator, for both fixed and optimal T_beta, the misfit error on D0 to D2 is less than 20%, and it increases with the number of typologies. A first explanation is that, as the number of typologies increases, the V_i indices are more constrained by the variation range given by the fuzzy function. A second explanation is that T_beta fixed to 8 was accurately calibrated in the Risk-UE project (Mouroux et al., 2004) and is representative of European buildings.
• For the D4 to D5 indicator, we show that taking T_beta variable and increasing the number of typologies reduces the misfit error (by up to 2%). We also notice that, for 2 typologies, the result for the D4 to D5 indicator with fixed T_beta is better than with the optimal one. These results depend on the study case, and it is meaningless to extrapolate them, except for the D0 to D2 indicator in this case.
For the El Asnam area, in the mitigation and planning situation, after analysis of the mathematical solutions we suggest using, for the predictive damage scenario, two typologies, with V_i equal to 0.6 and 0.87 and T_beta fixed to 8. If for some reason the user decides to use a different number of typologies, we then recommend using variable values of T_beta, since in that case this improves the results (from Figure 9(d) we can see that the error is quite stable for 3, 4 and 5 typologies with optimal T_beta, but increases for the same number of typologies with T_beta fixed to 8).
In Figure 9, the bottom and top of each boxplot are the 25th and 75th percentiles, the band near the middle of the box is the median, and the ends of the whiskers are the minimum and maximum. From the results of the two cases, we can deduce that the methodology is well calibrated for the important damage degrees, which can be managed by the optimization process, as can be noticed in Figure 8(d), where the error is less than 2% for any number of typologies. The error is harder to manage for case 2 (mitigation and planning), for which, although stable for 1, 2 and 3 typologies, it grows for the situations with 4, 5 and 6 typologies.
Using different typologies introduces more error, but it also gives supplementary information on the location of the damages. If the user only needs to know the total error, without the distribution by districts, the use of a single typology can be sufficient. However, generally, even in a crisis management situation, a second question follows: the identification of the most affected districts, in order to dispatch the rescue teams. In this case, the geographical description of the error is required.
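The distinction between the total error and its geographical description can be made concrete with a small sketch (the district names and counts are hypothetical): district-level over- and under-estimations can cancel out, so a small total error does not imply a good spatial description of the damage.

```python
# Hypothetical observed vs predicted numbers of heavily damaged buildings
# per district: the totals match, but the spatial distribution does not.
observed  = {"district_1": 120, "district_2": 30, "district_3": 50}
predicted = {"district_1": 80,  "district_2": 70, "district_3": 50}

total_error = abs(sum(observed.values()) - sum(predicted.values()))
per_district_error = sum(abs(observed[d] - predicted[d]) for d in observed)

print(total_error)         # 0: totals cancel out exactly
print(per_district_error)  # 80: the error at the district level is large
```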

Conclusions
In this work we have aimed to develop a unified computational method for the assessment of the vulnerability of structures, to be used for seismic risk scenarios at the town scale, when the underlying uncertainties are modelled using random variables, interval analysis and/or fuzzy variables. The developed approach is based on the minimization of the error between observed and estimated damages, and it allows the determination of the number of typology classes, the estimation of their vulnerability index and associated Tβ value, as well as their spatial distribution in the studied area (nbBat). The method remains general and can be applied to any observed dataset for which, as is generally the case, the information on the typologies and their vulnerability is not available. The main insights after the analysis of the results are: • The Risk-UE methodology and the model it proposes fit the damages D4 to D5 very well. When the attention is paid to the damages D0 to D2, the estimation is correct, but damage D3 remains the most difficult to assess, because the expert diagnoses for this category remain very subjective and its description remains very vague.
• The results of our work corroborate the choice made in the Risk-UE methodology, where Tβ was fixed to 8 based on the statistical treatment of all the European data. Our analysis (Figure 9(b2) and Figure 8(b2)) shows that modifying it between 4 and 16 improves the best results by 10%. Thus, our approach confirms that this parameter does not bring too much variability to the results and that the choice Tβ = 8 can be retained. • Contrary to expectation, increasing the number of typologies does not reduce the error of the results. Indeed, constraining Vi by the expert-defined fuzzy function restricts the search space. The total error is therefore increased, but increasing the number of typologies improves the geographic description of the vulnerability in the studied area and brings supplementary needed information (e.g. the ranking of the most damaged areas). The two cases (crisis management, and planning and mitigation) show two different trends: i) in the mitigation case, increasing the number of typologies increases the misfit error; ii) in the crisis management case, on the other hand, increasing the number of typologies improves the model prediction. In some cases, the optimization algorithm can detect the number of classes that minimizes the error, which means, on the one hand, that a smaller number of classes is insufficient (too few degrees of freedom) and, on the other hand, that spending time dividing and refining the number of classes further is useless. In the El Asnam case, the optimum number of typologies was not clearly identified. The mathematical formulation of the objective function gives the opportunity to obtain a near-perfect fit of the vulnerability parameters (error that tends to zero) by minimizing the error for a specific damage grade (e.g.
a minimization related to the D4 to D5 damages gives the most accurate vulnerability parameters for civil protection and urgent action after a seismic event). Hence, the approach can answer the needs of different actors who work at the scale of the town in different situations (e.g. planning, mitigation actions, retrofitting, crisis management). Of course, the total error (the total number of buildings, as the sum of the buildings in each area) is much smaller than the error related to each area. This observation should be considered in studies in which only the total error is computed; such studies certainly have great utility in almost "real-time" crisis situations, when the first question is a rough estimate of the number of victims, but quickly afterwards the spatial localization of the damages is requested, in order to know where to send the first aid, and then the error at the area level becomes very important. The better integration of vulnerability and loss results offered by the method we propose could allow city councils or regional authorities to plan interventions based on a global view of the site under analysis, leading to more accurate and comprehensive risk mitigation strategies that support the requirements of safety and emergency planning.
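For reference, the damage model underlying the approach can be sketched as follows. The mean damage grade formula is the standard one of the macroseismic method used in Risk-UE (Lagomarsino and Giovinazzi); the spreading of the mean damage over the grades D0 to D5 by a beta distribution is here a simplified moment-matching parameterization, not the exact Risk-UE polynomial, with t playing the role of the paper's Tβ (larger t, less scatter):

```python
import math

def mean_damage_grade(intensity, vi):
    """Mean damage grade of the macroseismic method:
    mu_D = 2.5 * [1 + tanh((I + 6.25*Vi - 13.1) / 2.3)], in [0, 5]."""
    return 2.5 * (1.0 + math.tanh((intensity + 6.25 * vi - 13.1) / 2.3))

def beta_pdf(x, a, b):
    """Beta(a, b) density on (0, 1)."""
    coeff = math.gamma(a + b) / (math.gamma(a) * math.gamma(b))
    return coeff * x ** (a - 1) * (1.0 - x) ** (b - 1)

def damage_distribution(mu_d, t=8.0, steps=2000):
    """Spread mu_D over the grades D0..D5 with a beta law on [0, 6].

    Simplified parameterization (assumes 0 < mu_d < 6): mean mu_d/6,
    concentration t. Risk-UE uses a polynomial in mu_d instead, but t
    plays the same role as the paper's T_beta: larger t, less scatter.
    """
    a = t * mu_d / 6.0
    b = t - a
    probs = []
    for k in range(6):  # grade k covers the interval [k, k+1) of [0, 6]
        # midpoint-rule integration of the density over the interval
        p = sum(beta_pdf((k + (i + 0.5) / steps) / 6.0, a, b)
                for i in range(steps)) / (6.0 * steps)
        probs.append(p)
    return probs

mu = mean_damage_grade(intensity=9.0, vi=0.87)  # roughly 3.8
probs = damage_distribution(mu, t=8.0)          # sums to ~1
```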

Figure 1. Identification of the ten El Asnam sectors for the CTC survey (adapted from [37]): collapse: black solid fill; very heavy damage: dark checkerboard pattern; heavy damage: dark diagonal line pattern; moderate damage: lighter diagonal line pattern; slight or no damage: lighter checkerboard pattern.

Figure 2. General methodology and modules in the Armagedom software for seismic damage calculation.

Figure 3. Required files for the Armagedom software for the case with three typologies (A1, A2, A3): top: the *.tvi_t file containing the vulnerability index Vi; bottom: the *.nbbat file containing the distribution of the typologies in each polygon.
For this ill-posed problem, the optimization algorithm leads to local solutions. This limitation, often problematic when solving optimization problems, is very interesting for us: not only one final solution is obtained, but all the solutions minimizing the objective function are retained. The final decision is left to the expert, who chooses and validates the most appropriate one.
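The multi-start strategy described above can be sketched as follows. This is a toy illustration, not the authors' algorithm: a misfit function with two equally good minima stands in for the real damage-misfit objective, and the local search is a deliberately crude randomized descent.

```python
import random

def misfit(vi):
    # toy objective with two equally good minima (0.4 and 0.9), standing in
    # for the damage-misfit function of the paper
    return min((vi - 0.4) ** 2, (vi - 0.9) ** 2)

def local_search(x, step=0.01, iters=500):
    # crude randomized descent on [0, 1]: accept only improving moves
    for _ in range(iters):
        cand = min(max(x + random.uniform(-step, step), 0.0), 1.0)
        if misfit(cand) < misfit(x):
            x = cand
    return x

random.seed(0)                                 # reproducible runs
starts = [i / 19 for i in range(20)]           # evenly spaced restarts
solutions = [local_search(s) for s in starts]  # one local solution per start
best = min(misfit(x) for x in solutions)
# retain ALL near-optimal solutions instead of a single optimum;
# the final choice among them is left to the expert
retained = [x for x in solutions if misfit(x) <= best + 1e-4]
```

Depending on its starting point, each restart falls into one of the two basins, and both minima end up in `retained`, mirroring the idea that every solution minimizing the objective is kept for expert validation.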

Figure 4. Fuzzy functions for the vulnerability classes (from A to F, following EMS98) that constrain the vulnerability index for the optimization method.

Figure 5. Conceptual scheme of the optimization process.

Figure 6 illustrates the organigram of the explored solutions.

Figure 6. Number of runs for the whole study at each step.

Figure 7(c) and Figure 7(d) present the errors on the number of structures, given as counts and percentages, for two arbitrary solutions (samples 17 and 61). The barplots present the error on the damage degrees D0 to D2 (green), D3 (orange) and D4 to D5 (red) for the 10 districts. They show that the distribution is very different for the two cases; on the other hand, the error

Figure 8(d) reports the lower bands of Figure 8(c1) (continuous lines) and Figure 8(c2) (dashed lines) in the same graphic. Each plot represents the minimum error (among the 100) for the considered indicator for each number of typologies. The bottom and top of the boxplots in Figure 8 are the 25th and 75th percentiles.

Figure 9 summarizes the results of the 2400 simulations obtained. It is organized in

Figure 9. Results for the mitigation and planning case: (a1) the values of the Vi indices for different typologies with fixed Tβ, (a2) the values of the Vi indices for different typologies with optimal Tβ, (b1) the values of the Tβ parameters for different typologies with fixed Tβ, (b2) the values of the Tβ parameters for different typologies with optimal Tβ, (c1) the mean absolute error in percentage for different typologies with fixed Tβ, (c2) the mean absolute error in percentage for different typologies with optimal Tβ and (d) comparison between the minimal mean absolute errors in percentage for fixed and optimal Tβ.

Figure 9(d) reports the lower bands of Figure 9(c1) (continuous lines) and Figure 9(c2) (dashed lines). Each plot represents the minimum error (among the 100) for the considered indicator for each number of typologies. This graphic allows the comparison between the two cases (fixed and optimal Tβ).

Table 1. Classification of damage to masonry and reinforced concrete buildings (according to EMS98).