Transfer of Global Measures of Dependence into Cumulative Local Measures

We explore an idea of transferring some classic measures of global dependence between random variables X_1, X_2, ..., X_n into cumulative measures of dependence local at any point (x_1, x_2, ..., x_n) of the sample space. This allows studying the behavior of these measures throughout the sample space, and leads to a better understanding and use of the dependence. Some examples on popular copula distributions are also provided.


Introduction
The idea of transferring integral measures of dependence into local measures is probably not new. We got the clue from the works [1]-[3]. Cooke raised the idea of using indicator random variables (r.v.'s) in regression models in order to consider the regression coefficients as measures of local dependence. Genest [4] gave some more motivation for the need to transfer overall (integrated) measures into cumulative local functions. Dimitrov [2] revived some old measures of dependence between random events proposed by Obreshkov [5], and noticed their convenience in studies of local dependence between r.v.'s. The work [6] is an attempt to look at cumulative dependence from the point of view of the so-called Sibuya function, which is also used in studies of dependence.
We propose here a review within the classic statistical analysis approach, where the procedures are more or less routine, to be used in the analysis of local dependence. The manuscript is built in the following way: Section 2 introduces the idea of the transfer in detail. Section 3 considers the covariance and correlation matrices between indicator functions for a finite family of random events. At this point it is worth noticing that the expressions for the entries Σ(A_i, A_j) of the covariance matrix for the indicators of random events A_1, ..., A_n, and for the entries R(A_i, A_j) of the correlation matrix, coincide with the expression called in [5] the correlation coefficient between the events A_i and A_j. The Obreshkov approach uses different categories. Following these ideas, [2] exposes a relatively complete analysis of the two-dimensional dependence between random events, and establishes some more detailed properties of the Obreshkov measures of dependence. Similar expressions for the correlation between random events are used in [4]; they call it the correlation coefficient associated with pairs of dichotomous random variables. Long before that, [7], p. 311, uses the correlation coefficient between indicator functions of two events, but does not elaborate any details. Let F_1 and F_2 be two σ-sub-algebras of the basic σ-field F. The survey [3] proposes a measure of dependence between F_1 and F_2 defined in terms of the probabilities of events chosen from the two sub-algebras.

Most of the conventional measures of dependence between r.v.'s used in practice are integral (global).
Usually they have the form of an expected value of some specifically selected function g(·, ..., ·); the value of the measure can be a real number, a vector, or a matrix. The dependence measure "of type g" is then expressed by the expected value E[g(X_1, X_2, ..., X_n)]. By implementing the idea of the previous section, we introduce the covariance matrix between random events.

Covariance Matrix between Random Events
Let us focus on some basic properties of the entries of the covariance and correlation matrices. They also reveal the information contained in the terms of these matrices. Details can be seen in [2].
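As a concrete sketch of this construction (illustrative only, not taken from the paper), the following Python snippet estimates the covariance matrix Σ(A_i, A_j) = P(A_i ∩ A_j) − P(A_i)P(A_j) and the corresponding correlation matrix from simulated indicator samples. The particular events and sample sizes are our own arbitrary choices.

```python
import numpy as np

def event_cov_corr(indicators):
    """Covariance and correlation matrices for a family of random events,
    given 0/1 indicator samples (rows = trials, columns = events).

    Sigma[i, j] = P(Ai & Aj) - P(Ai) * P(Aj)
    R[i, j]     = Sigma[i, j] / sqrt(P(Ai)(1 - P(Ai)) * P(Aj)(1 - P(Aj)))
    """
    ind = np.asarray(indicators, dtype=float)
    p = ind.mean(axis=0)                    # marginal probabilities P(Ai)
    joint = (ind.T @ ind) / ind.shape[0]    # joint probabilities P(Ai & Aj)
    sigma = joint - np.outer(p, p)
    sd = np.sqrt(p * (1.0 - p))
    return sigma, sigma / np.outer(sd, sd)

rng = np.random.default_rng(0)
u = rng.uniform(size=100_000)
# Three events on the same experiment: a nested pair and a disjoint pair
ind = np.column_stack([u < 0.5, u < 0.7, u > 0.6])
sigma, r = event_cov_corr(ind)
```

The diagonal of Σ carries the variances P(A_i)(1 − P(A_i)) of the indicators, so the diagonal of R is identically 1, in agreement with the general properties discussed below.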
The equalities in the middle of (0.8) hold only if the two events coincide with probability 1, or, respectively, are complements of each other with probability 1.
The advantage of the considered measures is that they are completely determined by the individual (marginal) probabilities P(A_i) of the random events, and by their joint probabilities P(A_i ∩ A_j).
An interesting fact is that we do not need the joint probability in order to calculate the conditional probabilities between the events. If either the covariance or the correlation is known, then the conditional probability P(A_i | A_j) is determined by it together with the marginal probabilities.

Remark 1. If the covariance, or the correlation, between two events A_i and A_j is positive, then the conditional probability of either event, given the other one, increases compared to its unconditional probability, i.e. P(A_i | A_j) > P(A_i),
and the exact quantity of the increase equals

P(A_i | A_j) − P(A_i) = Σ(A_i, A_j) / P(A_j).   (1.10)

Conversely, if the correlation coefficient is negative, then given the information that one of the events has occurred, the chances that the other one occurs decrease. The net amount of the decrease in probability is also given by (1.10).
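Relation (1.10) can be checked numerically. In the sketch below (the events A, B and the simulation setup are our own choices for illustration), P(A | B) − P(A) coincides with Σ(A, B)/P(B) up to floating-point rounding:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=200_000)
a = x > 0.0          # event A
b = x > 1.0          # event B; B implies A, so the pair is positively dependent

p_a, p_b = a.mean(), b.mean()
p_ab = (a & b).mean()
cov = p_ab - p_a * p_b          # Sigma(A, B)

lhs = p_ab / p_b - p_a          # P(A | B) - P(A)
rhs = cov / p_b                 # Sigma(A, B) / P(B)
```

The identity is algebraic, P(A|B) − P(A) = [P(A ∩ B) − P(A)P(B)] / P(B), so it holds for the empirical frequencies exactly, not only asymptotically.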

Covariance and Correlation Functions Describe Cumulative Local Dependence for Random Variables
Here we continue developing the idea of Section 2 to illustrate how the cumulative local measures of dependence between r.v.'s are obtained and expressed in terms of the conventional probability distribution functions. Let X_1, X_2, ..., X_n be an arbitrary finite sequence of r.v.'s. For any choice of the real numbers x_1, x_2, ..., x_n, consider the random events A_i = {X_i ≤ x_i}, i = 1, 2, ..., n. The covariance and correlation matrices for these random events then become matrix functions of the point (x_1, x_2, ..., x_n). We discuss these matrices here, and focus on their specific forms in terms of the respective distribution functions. The covariance matrix function represents the cumulative local covariance structure between the r.v.'s X_1, X_2, ..., X_n at the point (x_1, x_2, ..., x_n). Obviously, these matrices are functions of the joint c.d.f.'s F_{i,j}(x_i, x_j) = P(X_i ≤ x_i, X_j ≤ x_j) and of the marginals F_i(x_i) = P(X_i ≤ x_i) of the participating r.v.'s. The properties of the entries are described by Theorem 1 and Theorem 2. We notice here the following particular and important facts.
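The covariance and correlation functions at a point can be estimated directly from a sample through the events A = {X ≤ x}, B = {Y ≤ y}. A minimal sketch, with a simulated positively dependent pair of our own choosing:

```python
import numpy as np

def local_cov_corr(xs, ys, x, y):
    """Cumulative local covariance and correlation of (X, Y) at the point (x, y):
        Sigma(x, y) = H(x, y) - F(x) * G(y)
        R(x, y)     = Sigma(x, y) / sqrt(F(1 - F) * G(1 - G))
    where F, G are the marginal c.d.f.'s and H is the joint c.d.f., all
    replaced here by their empirical counterparts.
    """
    a = xs <= x
    b = ys <= y
    f, g, h = a.mean(), b.mean(), (a & b).mean()
    sigma = h - f * g
    return sigma, sigma / np.sqrt(f * (1 - f) * g * (1 - g))

rng = np.random.default_rng(2)
z = rng.normal(size=300_000)
xs = z + 0.5 * rng.normal(size=z.size)   # a positively dependent pair (X, Y)
ys = z + 0.5 * rng.normal(size=z.size)
sigma, r = local_cov_corr(xs, ys, 0.0, 0.0)
```

Evaluating these two functions over a grid of points (x, y) produces exactly the surfaces discussed in the mapping section below.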
Theorem 3. The statements of Theorems 1 and 2 transfer from random events to the events depending on the parameters (x_1, ..., x_n); in particular, the equalities and inequalities established there hold for the covariance and correlation functions at every point. The statement is just an illustration of this transfer, and we omit the details.

Theorem 3 d) provides the opportunity to make predictions for either of the events, given that the other event occurs, or does not occur. The absolute values of the increase (or decrease) of the chances for the r.v.'s are given by the corresponding covariance entries, as in (1.10).

Monotone Transformations of the Random Variables and Related Dependence Functions. Copula
Here we study how the dependence functions behave under increasing or decreasing transformations. Before we move on, we note the observation that makes the exposition clearer: for monotonically increasing functions g_i, the events {X_i ≤ x_i} and {g_i(X_i) ≤ g_i(x_i)} are equivalent, for all i, j = 1, 2, ..., n.
Proof. The proof of these relationships is a simple consequence of Theorem 4, once we notice that the copula is just the distribution of a set of random variables obtained after a monotonically increasing transformation of the original ones. Both theorems explain how, after a transformation, the local cumulative dependence structure of the new variables is transferred to the points related by the same transformations at each coordinate of the other set of random variables. A curious fact is that the relationships in Theorem 5 completely repeat the relationships (1.13) and (1.14) which define the copula.
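The transfer property can be illustrated numerically: for strictly increasing coordinate-wise transformations the defining events coincide, so the empirical correlation function is unchanged at the transformed point. A sketch, where the transformations g_1(t) = 2t + 1, g_2(t) = 3t − 5 and the simulated pair are our own choices:

```python
import numpy as np

def corr_fn(xs, ys, x, y):
    # Empirical correlation function R(x, y) via the events {X <= x}, {Y <= y}
    a, b = xs <= x, ys <= y
    f, g, h = a.mean(), b.mean(), (a & b).mean()
    return (h - f * g) / np.sqrt(f * (1 - f) * g * (1 - g))

rng = np.random.default_rng(3)
z = rng.normal(size=100_000)
xs = z + rng.normal(size=z.size)   # a dependent pair (X, Y)
ys = z + rng.normal(size=z.size)

# The same point, seen before and after increasing coordinate-wise transformations
r_original    = corr_fn(xs, ys, 0.4, -0.3)
r_transformed = corr_fn(2 * xs + 1, 3 * ys - 5, 2 * 0.4 + 1, 3 * (-0.3) - 5)
```

Since {X ≤ x} = {2X + 1 ≤ 2x + 1}, the two calls evaluate identical events, and the two values agree exactly (up to ties created by floating-point rounding at the boundary, which have probability zero for continuous samples).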
The next examples illustrate the covariance and correlation functions for two popular multivariate copulas, which we borrow from [8]: the FGM (Farlie-Gumbel-Morgenstern) copula and the Clayton copula.

2) Clayton Multivariate Archimedean copula
The copula is given, for any n ≥ 2, by the equation

C(u_1, ..., u_n) = (u_1^(−θ) + u_2^(−θ) + ... + u_n^(−θ) − n + 1)^(−1/θ),  θ > 0.

Maps for Dependence Structures on the Sample Space
Here the events are, as above, of the form A_i = {X_i ≤ x_i}, and the dependence functions are plotted over the sample space. At each point of a level curve the correlation function takes one and the same value, so the surface plots may be complemented with the level curves. In our plots we have used the program system Maple, and graph just the correlation function.

Examples
Here we illustrate the information one may visually get from the 3-d graphs for the copulas considered above. We plot their correlation functions. Since the pictures are the same for any pair of indices i, j, we use notations without indices.

1) The correlation function for the FGM copula (continued)
The two-dimensional marginal distributions of the multivariate FGM copula are given by one and the same equation, and we drop the subscripts:

C(u, v) = uv[1 + θ(1 − u)(1 − v)],  −1 ≤ θ ≤ 1.

The respective covariance function is Cov(u, v) = C(u, v) − uv = θ u(1 − u) v(1 − v), and the correlation function is R(u, v) = θ √(u(1 − u) v(1 − v)).
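Assuming the standard two-dimensional FGM form C(u, v) = uv(1 + θ(1 − u)(1 − v)), the correlation function simplifies to θ√(u(1 − u)v(1 − v)), with extreme value θ/4 at the centre (1/2, 1/2). A small sketch:

```python
import math

def fgm_corr(u, v, theta):
    """Correlation function of the two-dimensional FGM copula
    C(u, v) = u*v*(1 + theta*(1 - u)*(1 - v)),  -1 <= theta <= 1.

    Cov(u, v) = C(u, v) - u*v = theta * u*(1 - u) * v*(1 - v), hence
    R(u, v)   = theta * sqrt(u*(1 - u) * v*(1 - v)).
    """
    c = u * v * (1.0 + theta * (1.0 - u) * (1.0 - v))
    return (c - u * v) / math.sqrt(u * (1 - u) * v * (1 - v))
```

The bound |R(u, v)| ≤ |θ|/4 everywhere makes visible, locally, the well-known weakness of the dependence that the FGM family can model globally.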

2) Clayton Archimedean copula (continued)
The two-dimensional marginal distributions of the multivariate Clayton copula are also given by one and the same equation, and we drop the subscripts:

C(u, v) = (u^(−θ) + v^(−θ) − 1)^(−1/θ).

In the multivariate case it is required that θ > 0. However, the Clayton copula in the two-dimensional case is defined for all θ ≥ −1, θ ≠ 0. The respective correlation function is

R(u, v) = [C(u, v) − uv] / √(u(1 − u) v(1 − v)).

The 3-d graphs of this particular measure of dependence between the two components of the copula, shown on Figure 2(b), closely imitate the situations given in the scatter plots in the reference book [8], but are more impressive and informative.
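A hedged sketch of the two-dimensional Clayton correlation function, using the standard Clayton form (the grid of evaluation points is left to the reader):

```python
import math

def clayton_cdf(u, v, theta):
    """Two-dimensional Clayton copula, defined for theta >= -1, theta != 0.
    For theta in [-1, 0) the inner sum is clipped at 0."""
    s = u ** (-theta) + v ** (-theta) - 1.0
    return max(s, 0.0) ** (-1.0 / theta)

def clayton_corr(u, v, theta):
    # R(u, v) = (C(u, v) - u*v) / sqrt(u*(1-u) * v*(1-v))
    c = clayton_cdf(u, v, theta)
    return (c - u * v) / math.sqrt(u * (1 - u) * v * (1 - v))
```

The limiting cases behave as expected: large θ drives R toward the comonotone bound, θ = −1 gives the countermonotone value −1 at the centre, and θ → 0 approaches local independence.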
Together with the surface plots, the correlation function and its level curves are considerably more informative than the simulated scatter plots commonly used in copula studies.

On the Use of the New Measures
At this point we believe that the introduced local measures of dependence can be used in ways similar to those in which the covariance and correlation matrices are used in multivariate analysis. Since the ideas developed here are so natural and simple, it would be no wonder if what we propose is already in use. To the best of our knowledge, however, we have never seen anything similar, and we dare to present it at the risk of not being the first to offer such an approach. The results of Section 3 for random events may suit the analysis of non-numerical, categorical, and finite discrete random data, in ways similar to those in which numeric data are treated.
The results of Section 4 show that we get an opportunity to treat mixed discrete and continuous variables equally, using the cumulative distribution functions (marginal and two-dimensional).
The obtained results may have an impact on various side questions, and we leave these for others' comments. We insist on the main practical orientation of our results: the same as, and similar to, everything else built on the use of the covariance and correlation matrices in multivariate analysis.

Further Expansion of the Idea of Transforming Measures
In addition, the new measures may be used in studying the local structures of dependence within rectangular areas of the sample space, when the sets of random events are defined as Cartesian products of intervals for the pairs X_i, X_j. Since the densities do not have the meaning of probabilities, for now we cannot use them in similar constructions. However, the picture changes in the case of discrete variables.
As an illustration of this approach in the study of the local dependence structure, we consider the discrete case. After substitution of this form of the joint distribution and of the marginal Poisson distributions into the above expressions, we get the covariance and correlation functions, and the study of the local dependence can start. We omit the detailed analytical expressions for these functions: they are not a problem to get, but are too cumbersome to write here.
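The exact joint distribution used in the paper is not reproduced in this text; as a stand-in with Poisson marginals we use the classical common-shock construction X = U + W, Y = V + W with independent Poisson U, V, W (an assumption for illustration only). The correlation function is then evaluated on an integer grid:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200_000
w = rng.poisson(1.0, n)        # common shock
x = rng.poisson(1.0, n) + w    # marginal distribution Poisson(2)
y = rng.poisson(1.0, n) + w    # marginal distribution Poisson(2)

def corr_fn(xs, ys, i, j):
    # Correlation function at the integer point (i, j), via {X <= i}, {Y <= j}
    a, b = xs <= i, ys <= j
    f, g, h = a.mean(), b.mean(), (a & b).mean()
    return (h - f * g) / np.sqrt(f * (1 - f) * g * (1 - g))

# Values on a small integer grid; the common shock makes all of them positive
grid = {(i, j): corr_fn(x, y, i, j) for i in range(5) for j in range(5)}
```

Plotting these grid values as a 3-d bar chart or surface reproduces the kind of picture referred to as Figure 3.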
On Figure 3 we give an illustration of these functions by their 3-d graphs in the case of Poisson marginals. In our opinion, there is a number of follow-up questions in multivariate dependence analysis, and more illustrations on real statistical data will trace the utility of the offered approach.

Conclusions
In this paper we discuss the possibility to transfer conventional global measures of dependence onto measures of dependence between random events, and then, using another transfer, to turn them into cumulative local measures of dependence between random variables. We illustrate the method on the example of the covariance and correlation matrices between random variables. Important results can be found in the discussion of some specific properties of the new measures, and in the qualitative information that may be derived from the meaning of these measures.
The behavior of these measures under monotonic transformations is established. It shows full coincidence between the pictures in the sample space of the original random variables and their respective copulas.
We found that locally the random variables may have areas of positive dependence and areas of negative dependence, as well as areas of local independence. This is interesting from an informational point of view, compared to the global measures of dependence.
Our suggestion is that the local measures of dependence offer important topological pictures to study. Our examples illustrate the new opportunities in this field.