Comparison and Validation of Distance on the Balanced Assignments of Group Having Entities with Multiple Attributes

In this paper, the balanced assignment problem of classifying a group of entities with multiple attributes into many subgroups without losing similarity is studied. Similarity or closeness in clustering is often measured as a distance, and the Mahalanobis distance is considered as one of the tools for measuring this closeness. The comparison between the distance criteria is shown by changing a specific assignment standard, and the results are finally compared against the MTS method.

In conventional clustering, entities are grouped so that each subgroup has distinct characteristics. In the balanced assignment problem, by contrast, entities are distributed so that the characteristics among subgroups are the same, and the distinctions among groups disappear because all groups have the same properties. For example, consider a group consisting of 10 male students and 10 female students that is classified into two groups based on gender. Regarding gender as an attribute, conventional clustering produces a male group and a female group. In the balanced assignment problem, on the other hand, each group consists of five male students and five female students, and the characteristics of the two groups are the same.
The classification of entities with multiple attributes differs from well-known solution methodologies such as the partitioning method or the hierarchical method, and efforts continue to improve it because of the difficulty of the optimization process. A mathematical model, or an application methodology with constraints, in which a series of processes for finding an appropriate compromise among conflicting attributes is modeled as a multi-criteria function, is increasingly necessary. Among the methods for solving a multi-criteria function, the most commonly used approaches are the weighting method and the goal programming method, in which weights or specific numerical goals must be established appropriately for each function within a mathematical programming framework. Barron and Schmidt [2] study the constrained problem using distance and fuzzy measures as a mathematical approach to the multiple-attribute problem. The MTS (Mahalanobis Taguchi System) method is a pattern recognition method applied to classify data into categories [3] [4]. In the MTS method, the Mahalanobis distance is used to measure the degree of abnormality of patterns, and principles of the Taguchi method are used to evaluate the accuracy of predictions based on the constructed scale. The strength of the Mahalanobis distance is that it considers correlations between the variables, which are essential in pattern analysis. For a balanced assignment, the closeness between entities is usually measured by a specific distance measure such as the Euclidean distance [5]. Recently, the MTS method has been applied to solve the balanced assignment of allocating a group with multiple attributes into small subgroups [1].
In the clustering process, a distance between entities or attributes is applied in the Ward method and the partitioning method as a measure of clustering accuracy. Here, the distance means not a physical distance but the distance between the attributes of the entities. Usually, the Manhattan distance, the Euclidean distance, and the Mahalanobis distance are applied as distance choices. The Manhattan distance, also known as the Taxicab distance, is the sum of the absolute coordinate differences between two points. The Euclidean distance, on the other hand, is the shortest straight-line distance between two points in n-dimensional space; it generalizes the Pythagorean Theorem to n dimensions. Finally, the Mahalanobis distance is calculated by considering the correlation of variables as an index measuring the degree of diffusion of the variables. Since the Mahalanobis distance is very sensitive to standardized variables, this distance can increase significantly even when a standardized variable differs only slightly from the reference group. In this paper, the comparison between the distance criteria is shown by changing a specific assignment standard, and the results are finally compared against the MTS method. This paper is a sequel to an earlier paper by Rhee [1].
The paper is organized as follows. Related works are reviewed in Section 2. A balanced assignment is then considered with respect to the associated distances for an example in Section 3. In Section 4, the comparison between the suggested distance criteria is shown by changing a specific assignment standard, and the result of the MTS method is checked by comparing it against the given criteria.

Literature Review
In this section, the distances required in the balanced assignment process are introduced, since effectiveness in clustering is often measured as a distance expressing closeness. The choice of distance measure is crucial, as it has a strong influence on the clustering results. Usually, the Euclidean distance is considered the common distance measure in clustering; depending on the type of data and the research questions, correlation-based distance is often used as an alternative. The methods used for distance measurement include the Manhattan distance, the Euclidean distance, and the Mahalanobis distance. The MTS method is also presented to implement the balanced assignment using these distances. The MTS method is one of the well-known clustering methodologies and is considered very helpful for classifying a large group with multiple attributes into many subgroups.

Manhattan Distance
The Manhattan distance is a distance metric between two points in n-dimensional vector space, used extensively in fields from regression analysis to frequency distributions. It was introduced by Hermann Minkowski [6]. The Manhattan distance is also known as the L1 distance or city block distance. It is named so because it is the distance a car would drive in a city laid out in square blocks, like Manhattan: the Manhattan distance function computes the distance that would be traveled to get from one data point to the other if a grid-like path is followed. The Manhattan distance between two items is the sum of the absolute differences of their corresponding components. For two points x = (x_1, x_2, ..., x_n) and y = (y_1, y_2, ..., y_n) in n-dimensional space, it is expressed as the sum of the distances in each dimension:

d(x, y) = |x_1 − y_1| + |x_2 − y_2| + ... + |x_n − y_n|

The properties of the Manhattan distance are, first, that several paths may exist between two points whose length equals the Manhattan distance. Secondly, a path whose length equals the Manhattan distance permits only two kinds of moves, vertical or horizontal, in one direction at a time. Finally, for a given point, the set of points at a given Manhattan distance lies on a square. The Manhattan distance is frequently applied in regression analysis, especially in linear regression, to find a straight line that fits a given set of points. In solving an underdetermined system of linear equations, the regularization term for the parameter vector can be expressed in terms of the Manhattan distance; this approach appears in the signal recovery framework called compressed sensing. The Manhattan distance is also used to assess differences between discrete frequency distributions. Finally, the Manhattan distance heuristic is used to estimate the minimum number of steps required to find a path to the goal state.
The closer the heuristic is to the actual number of steps, the fewer nodes have to be expanded during the search; at the extreme, with a perfect heuristic, only the nodes that are guaranteed to be on the goal path are expanded.
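As a concrete illustration, the sum-of-absolute-differences definition above can be sketched in a few lines of Python (the function name and sample points are ours, not from the paper):

```python
def manhattan(x, y):
    """Manhattan (L1) distance: sum of absolute coordinate differences."""
    return sum(abs(xi - yi) for xi, yi in zip(x, y))

# A car driving 3 blocks east and 4 blocks north travels 7 blocks in total,
# even though the straight-line (Euclidean) distance is only 5.
print(manhattan([0, 0], [3, 4]))  # 7
```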

Euclidean Distance
The choice of distance measure is very important, as it has a strong influence on the clustering results. For most common clustering software, the default distance measure is the Euclidean distance [7]. The Euclidean distance, or Euclidean metric, is the ordinary straight-line distance between two points in Euclidean space. Euclidean space was originally devised to study the relationships between angles and distances; this system of geometry is still in use today and is the one that high school students study most often. Euclidean geometry specifically applies to spaces of two and three dimensions, but it is easily generalized to higher dimensions. The distance is also known as the Euclidean norm, Euclidean metric, L2 norm, L2 metric, and Pythagorean metric. The Euclidean distance is applied under the assumption that the attributes inherent in an object are on a consistent scale. Properties of the Euclidean distance are that there is a unique path between two points whose length equals the Euclidean distance, and that the points at a fixed Euclidean distance from a given point lie on a circle whose radius is that fixed distance. With this distance, Euclidean space becomes a metric space. The Euclidean distance is defined as the shortest distance connecting two points. For two points x = (x_1, x_2, ..., x_n) and y = (y_1, y_2, ..., y_n) in n dimensions, it is expressed as

d(x, y) = sqrt((x_1 − y_1)^2 + (x_2 − y_2)^2 + ... + (x_n − y_n)^2)

Simply, this is a basic distance measurement in which the correlation between attributes is not considered.
The Euclidean distance is frequently used in Euclidean geometry to find the shortest distance between two points and the length of a straight line between two points. This distance is commonly used in clustering algorithms such as K-means. If the Euclidean distance is chosen, observations with high feature values will be clustered together, and the same holds for observations with low feature values. Finally, it is used as a simple metric to measure the similarity between two data points in related areas. Correlation-based distance, by contrast, considers two objects to be similar if their features are highly correlated, even though the observed values may be far apart in terms of Euclidean distance; the distance between two objects is 0 when they are perfectly correlated.
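The contrast drawn above between the Euclidean distance and a correlation-based distance can be sketched as follows (a minimal illustration with hypothetical data; `1 - r`, with `r` the Pearson correlation, is one common way to form a correlation-based distance):

```python
import math

def euclidean(x, y):
    """Straight-line (L2) distance between two points."""
    return math.sqrt(sum((xi - yi) ** 2 for xi, yi in zip(x, y)))

def correlation_distance(x, y):
    """1 - Pearson correlation: 0 when the profiles are perfectly correlated."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sx = math.sqrt(sum((xi - mx) ** 2 for xi in x))
    sy = math.sqrt(sum((yi - my) ** 2 for yi in y))
    return 1.0 - cov / (sx * sy)

a = [1.0, 2.0, 3.0]
b = [11.0, 12.0, 13.0]  # far away in space, but a perfectly correlated profile
print(round(euclidean(a, b), 2))          # 17.32
print(round(correlation_distance(a, b)))  # 0
```

The two objects are far apart by the Euclidean measure yet at distance 0 by the correlation-based measure, which is exactly the distinction the text draws.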

Mahalanobis Distance
A distance based on the correlation between the data can be more effective for clustering analysis than the distance scales discussed in the previous section. In particular, applying correlation seems desirable when an entity has multiple attributes, because the disadvantages of the Euclidean distance can be compensated by analyzing the relationships between attributes, which augments the effect of clustering. Clustering by correlation-based distance or by the Euclidean distance is quite sensitive to outliers, but in general the correlation-based distance is more effective than the Euclidean distance. One of the correlation-based distances used in clustering methodology is the Mahalanobis distance.
The Mahalanobis distance is known to be an appropriate measure of distance between two elliptic distributions having different locations but a common shape, and also as an effective way to compare groups with well-known characteristics against groups whose characteristics are unfamiliar [8]. Since the Mahalanobis distance is very sensitive to standardized variables, it increases sharply even when the standardized variable differs only slightly from the reference group [4]. Applying this to all attributes of an entity, the Mahalanobis distance can be readjusted by considering the correlation between attributes. The Euclidean distance has the form of a circle, since it does not take the correlation between attributes into account; the Mahalanobis distance, on the other hand, takes the form of an ellipse in consideration of the correlation, and is expressed as follows.

MD(x, y) = sqrt((x − y)^T S^(−1) (x − y))     (1)

MD(x, y) represents the Mahalanobis distance between entity x and entity y, where x and y denote the object vectors and S^(−1) denotes the inverse of the covariance matrix of the attributes.
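The elliptic behavior described above can be checked with a short NumPy sketch (illustrative values only; with the identity covariance matrix the Mahalanobis distance collapses to the Euclidean distance):

```python
import numpy as np

def mahalanobis(x, y, cov):
    """MD(x, y) = sqrt((x - y)^T S^-1 (x - y)) for covariance matrix S."""
    d = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    return float(np.sqrt(d @ np.linalg.inv(cov) @ d))

# Identity covariance: no correlation, unit variances -> Euclidean distance.
print(mahalanobis([0, 0], [3, 4], np.eye(2)))  # 5.0

# A positive correlation stretches the unit ball into an ellipse, so the
# same pair of points sits at a different Mahalanobis distance.
cov = np.array([[1.0, 0.8],
                [0.8, 1.0]])
print(round(mahalanobis([0, 0], [3, 4], cov), 2))  # 4.01
```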

MTS Method
The first step in measuring the Mahalanobis distance is data conversion, a statistical process that provides a common reference point for comparing two or more different groups. The standard normal conversion is applicable only if the attributes of the data in the group follow a normal distribution. Assuming that X is a random variable with mean µ and variance σ^2 in a data group, then X can be transformed into Y using the simple data conversion

Y = (X − µ) / σ     (2)

without losing its statistical properties. The result of the data conversion can be used for comparison between attributes, since the statistics that represent each data group differ from one another.
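The conversion in (2) is ordinary standardization to z-scores; a minimal sketch (the sample values are ours):

```python
def standardize(values):
    """Apply (2): y = (x - mean) / std for every value of one attribute."""
    n = len(values)
    mean = sum(values) / n
    std = (sum((v - mean) ** 2 for v in values) / n) ** 0.5  # population std
    return [(v - mean) / std for v in values]

# After conversion the attribute has mean 0 and standard deviation 1,
# so attributes measured on different scales become comparable.
z = standardize([10.0, 20.0, 30.0])
print([round(v, 4) for v in z])  # [-1.2247, 0.0, 1.2247]
```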
The MTS method is a pattern information technology, which has been used in different diagnostic applications to help in making quantitative decisions by constructing a multivariate measurement scale using data analytic methods [9].
In the MTS approach, the Mahalanobis distance is used to measure the degree of abnormality, as defined in (1) and (2), and the inverse of the covariance matrix obtained through correlation analysis is required to convert the data. The correlation coefficient between attribute i and attribute j is known as r_ij = s_ij / (s_i s_j), where s_ij is the covariance of the two attributes and s_i, s_j are their standard deviations. The SN ratio is computed to determine how much each attribute affects the Mahalanobis space. This procedure serves as an evaluation criterion by discarding the low-impact characteristics and selecting the high-impact characteristics among the various characteristics affecting the Mahalanobis distance. The SN ratio plays a critical role in determining the influence between an entity and the Mahalanobis space. The quadratic loss function for the smaller-the-better case is used, as seen in (3), since a smaller distance between the Mahalanobis space and the entity means the entity is closer to it.
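Equation (3) itself is not reproduced in this extract; assuming the standard Taguchi smaller-the-better form, the SN ratio of a set of Mahalanobis distances would be computed as:

```python
import math

def sn_smaller_the_better(distances):
    """Assumed form of (3): SN = -10 * log10(mean of squared distances).
    A larger SN means the distances are smaller, i.e. the entities lie
    closer to the Mahalanobis space."""
    n = len(distances)
    return -10.0 * math.log10(sum(d * d for d in distances) / n)

print(round(sn_smaller_the_better([0.1, 0.1, 0.1]), 6))  # 20.0
print(round(sn_smaller_the_better([1.0, 2.0, 2.0]), 2))  # -4.77 -- larger distances, lower SN
```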
The balanced assignment should be executed so that the characteristics of the subgroups are similar, and so that the attributes included in those characteristics are also similar, under the assumption that the balanced assignment takes into account all attributes specified in the entity [5]. After obtaining the SN ratios, an orthogonal array is used to determine which objects are closer to the designated Mahalanobis space. The calculation of the Mahalanobis distance, the last stage of the MTS method, applies the SN ratio as an influence indicator.

Data Collection
In this section, the case in which an entity contains three attributes is analyzed, and the results of calculating the Mahalanobis distance suggested in the previous section are presented. The Manhattan and Euclidean distances, which are easier to compute than the Mahalanobis distance, are not presented in this section. The collected data are shown in Table 1 as an example for the case study. The balanced assignment is executed to classify the entities into 3 subgroups while trying to make the characteristics of the 3 subgroups the same.
In order to compute the Mahalanobis distance, it is necessary to define the Mahalanobis space that serves as the reference. In this study, the entities having the most extreme value of each attribute are set as the reference entities, and these constitute the Mahalanobis space: the entity with the highest value and the entity with the lowest value of each attribute are defined as reference points. Since the given example consists of three attributes, the 6 shaded entities in Table 1 represent the Mahalanobis space. Each entity must be converted by attribute using (2), followed by the inverse correlation matrix. The Mahalanobis distance between the space and each entity, computed using (1), is shown in Table 2, where the 6 entities A, B, C, D, E, and F represent the corresponding Mahalanobis space.
Furthermore, the balanced assignment by the MTS method is carried out by calculating the SN ratios using (3) and by assigning all entities into subgroups using an orthogonal array.
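To make the overall flow concrete, the sketch below measures each entity's Mahalanobis distance from the sample mean and deals the ranked entities round-robin into 3 subgroups. This is only an illustration of the idea: the paper's actual procedure defines the Mahalanobis space from the shaded reference entities of Table 1 and weights attributes by their SN ratios through an orthogonal array, neither of which is reproduced here, and the data below are hypothetical.

```python
import numpy as np

def balanced_assign(data, n_groups=3):
    """Rank entities by Mahalanobis distance from the sample mean, then
    deal them round-robin so each subgroup receives a similar mix of near
    and far entities (a simplification of the MTS-based assignment)."""
    X = np.asarray(data, dtype=float)
    center = X.mean(axis=0)
    inv_cov = np.linalg.inv(np.cov(X, rowvar=False))
    md = [float(np.sqrt((x - center) @ inv_cov @ (x - center))) for x in X]
    order = np.argsort(md)                        # nearest to farthest
    groups = [[] for _ in range(n_groups)]
    for rank, idx in enumerate(order):
        groups[rank % n_groups].append(int(idx))  # round-robin deal
    return groups

# Six hypothetical entities with three attributes each -> 3 subgroups of 2.
data = [[1, 2, 3], [2, 3, 4], [9, 9, 9], [8, 7, 6], [5, 5, 5], [4, 6, 5]]
print(balanced_assign(data))
```

The round-robin deal is what makes the assignment "balanced": each subgroup receives one entity from the near end and one from the far end of the ranking, rather than all similar entities clustering together.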

Comparisons and Validation
In this section, the comparison between the suggested distance criteria is shown by changing the assignment standard, and the results are finally compared against the MTS method. The comparison is based on the mean values of the attributes in Table 1. The result of the balanced assignment obtained by applying the MTS method is shown in Table 3; that is, all entities presented in Table 1 are distributed into 3 subgroups under the given criterion. Table 3 shows relatively good results when analyzed in terms of the mean of the balanced assignment. However, when the variance is considered as a criterion, the result is not satisfactory. The Mahalanobis distance is considered the better choice for the given example, even though the difference depends on the criterion for selecting attributes. Finally, the balanced assignment is carried out by applying the MTS method, and its result under each distance criterion is shown in Table 5. As seen in Table 5, the results of the MTS method are comparatively satisfactory under all of the distances considered. In addition, since attributes 1 and 2 have a strong positive correlation, these two attributes can be integrated to reduce the number of attributes, leading to a dimensional reduction in terms of modeling.

Conclusion
In clustering, the distance between entities or attributes is applied as a measure of clustering accuracy. The Manhattan distance, the Euclidean distance, and the Mahalanobis distance are considered as tools for measuring this closeness.
In this paper, the comparison between distance criteria is shown by changing specific assignment details, and the results are finally compared against the MTS method. Since the standards for calculating the distances differ, it is not meaningful to compare them one by one. However, the mean, and the difference between the maximum and minimum values within a subgroup, can be used to analyze which method represents a good indicator for the balanced assignment. In general, the balanced assignment by the Mahalanobis distance is seen as the better choice, even though the difference depends on the criterion for selecting attributes. Finally, the balanced assignment is carried out by applying the MTS method.